Speech to Text

Our advanced ASR technology is built on the robust foundation of OpenAI’s Whisper, known for its exceptional performance in multilingual speech recognition. However, we’ve significantly enhanced its capabilities with in-house innovations, including the implementation of phonetic time-stamps. These detailed markers provide an extra layer of precision by capturing the timing of specific phonetic elements within the audio, enabling more granular analysis and synchronization.

Our ASR component supports up to 49 languages and offers robust code-switching, effortlessly transcribing audio that blends multiple languages. Built-in automatic language detection means users never have to specify the language manually, streamlining the workflow, while precise time-stamps allow for easy navigation and review of audio content.

Audio transcription without PHI redaction

By default, transcription runs with the task set to protect, which redacts PHI. To obtain a transcription without any redaction, set the task parameter to transcribe.

Transcribe multilingual audio data

BASE_URL = "https://voiceharbor.ai"
usage_token = "USAGE_TOKEN"
# Create a new job on the server via the class method.
job_id = VoiceHarborClient.create_job(BASE_URL, usage_token)

client = VoiceHarborClient(
    base_url=BASE_URL,
    job_id=job_id,
    token=usage_token,
    inputs_dir="./inputs/tests"
)

# Submit the input files and the job file.
job_params = {"files": [], "task": "transcribe"}
job_params = client.submit_files(job_params)
job_file = client.submit_job(job_params)
logger.info(f"Job file created: {job_file}")

Set target transcription language


supported_codes = [
    "af", "ar", "az", "be", "bg", "bn", "bs", "br", "ca", "cs", "cy", "da", "de",
    "el", "en", "es", "et", "eu", "fa", "fi", "fr", "gl", "he", "hi", "hr", "hu",
    "hy", "id", "is", "it", "ja", "ka", "km", "kn", "ko", "kk", "la", "lt", "lv",
    "mi", "ml", "mn", "mr", "ms", "ne", "nl", "no", "oc", "pa", "pl", "pt", "ro",
    "ru", "si", "sk", "sl", "sq", "sr", "sn", "so", "sw", "ta", "te", "th", "tg",
    "tr", "uk", "ur", "vi", "yo", "zh"
]
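Any of these codes can be passed as the language parameter. A small guard like the one below (an illustrative helper, not part of the VoiceHarbor client API) can catch typos before a job is submitted:

```python
# Illustrative helper (not part of the VoiceHarbor client API): normalize a
# user-supplied language code and check it against the supported list before
# building the job parameters.
SUPPORTED_CODES = {
    "af", "ar", "az", "be", "bg", "bn", "bs", "br", "ca", "cs", "cy", "da", "de",
    "el", "en", "es", "et", "eu", "fa", "fi", "fr", "gl", "he", "hi", "hr", "hu",
    "hy", "id", "is", "it", "ja", "ka", "km", "kn", "ko", "kk", "la", "lt", "lv",
    "mi", "ml", "mn", "mr", "ms", "ne", "nl", "no", "oc", "pa", "pl", "pt", "ro",
    "ru", "si", "sk", "sl", "sq", "sr", "sn", "so", "sw", "ta", "te", "th", "tg",
    "tr", "uk", "ur", "vi", "yo", "zh",
}

def validate_language(code: str) -> str:
    """Return the normalized code, or raise ValueError if it is unsupported."""
    normalized = code.strip().lower()
    if normalized not in SUPPORTED_CODES:
        raise ValueError(f"Unsupported language code: {code!r}")
    return normalized
```

For example, `validate_language(" EN ")` returns "en", so it can be used inline when building job parameters, e.g. `{"files": [], "task": "transcribe", "language": validate_language(user_code)}`.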

BASE_URL = "https://voiceharbor.ai"
usage_token = "USAGE_TOKEN"
# Create a new job on the server via the class method.
job_id = VoiceHarborClient.create_job(BASE_URL, usage_token)

client = VoiceHarborClient(
    base_url=BASE_URL,
    job_id=job_id,
    token=usage_token,
    inputs_dir="./inputs/tests"
)

# Submit the input files and the job file.
job_params = {"files": [], "task": "transcribe", "language": "en"}
job_params = client.submit_files(job_params)
job_file = client.submit_job(job_params)
logger.info(f"Job file created: {job_file}")

Output example

{
  "speaker 1": {
    "transcription": [
      {
        "start": 0.05,
        "end": 5.27,
        "text": "The sandwich comes with ham, cheese, tomatoes, mayonnaise, pickles.",
        "words": [
          {
            "word": " The",
            "start": 0.05,
            "end": 0.55
          },
          {
            "word": " sandwich",
            "start": 0.55,
            "end": 0.93
          },
          {
            "word": " comes",
            "start": 0.93,
            "end": 1.33
          },
          {
            "word": " with",
            "start": 1.33,
            "end": 1.57
          },
          {
            "word": " ham",
            "start": 1.57,
            "end": 1.95
          },
          {
            "word": ",",
            "start": 1.95,
            "end": 2.13
          },
          {
            "word": " cheese",
            "start": 2.13,
            "end": 2.57
          },
          {
            "word": ",",
            "start": 2.57,
            "end": 2.73
          },
          {
            "word": " tomatoes",
            "start": 2.73,
            "end": 3.25
          },
          {
            "word": ",",
            "start": 3.25,
            "end": 3.55
          },
          {
            "word": " mayonnaise",
            "start": 3.55,
            "end": 3.93
          },
          {
            "word": ",",
            "start": 3.93,
            "end": 4.21
          },
          {
            "word": " pickles",
            "start": 4.21,
            "end": 4.51
          },
          {
            "word": ".",
            "start": 4.51,
            "end": 5.27
          }
        ]
      }
    ],
    "language": "en"
  }
}
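Because the response is plain JSON with per-segment and per-word timings, post-processing is straightforward. As a sketch (assuming only the structure shown above), the following rebuilds a timed plain-text transcript from a result:

```python
# Sketch of post-processing the output format shown above: collect each
# speaker's segments into "speaker [start-end]: text" lines. Assumes the
# JSON layout from the example; not part of the client library.
def to_plain_transcript(result: dict) -> str:
    lines = []
    for speaker, data in result.items():
        for seg in data["transcription"]:
            lines.append(
                f"{speaker} [{seg['start']:.2f}-{seg['end']:.2f}]: {seg['text']}"
            )
    return "\n".join(lines)

example = {
    "speaker 1": {
        "transcription": [
            {
                "start": 0.05,
                "end": 5.27,
                "text": "The sandwich comes with ham, cheese, tomatoes, mayonnaise, pickles.",
                "words": [],
            }
        ],
        "language": "en",
    }
}
print(to_plain_transcript(example))
# speaker 1 [0.05-5.27]: The sandwich comes with ham, cheese, tomatoes, mayonnaise, pickles.
```

The same loop extends naturally to the "words" array when finer-grained alignment (e.g. subtitle generation) is needed.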

Benchmarks for the top 49 supported languages

| Rank | Code | Language | WER (%) on FLEURS |
|------|------|----------|-------------------|
| 1 | es | Spanish | 3.0 |
| 2 | it | Italian | 4.0 |
| 3 | en | English | 4.2 |
| 4 | pt | Portuguese | 4.3 |
| 5 | de | German | 4.5 |
| 6 | ja | Japanese | 5.0 |
| 7 | pl | Polish | 5.6 |
| 8 | ru | Russian | 5.6 |
| 9 | nl | Dutch | 6.1 |
| 10 | id | Indonesian | 6.4 |
| 11 | fr | French | 7.1 |
| 12 | tr | Turkish | 7.3 |
| 13 | sv | Swedish | 8.1 |
| 14 | uk | Ukrainian | 8.3 |
| 15 | ms | Malay | 8.7 |
| 16 | no | Norwegian | 9.1 |
| 17 | fi | Finnish | 9.2 |
| 18 | vi | Vietnamese | 10.9 |
| 19 | th | Thai | 11.5 |
| 20 | el | Greek | 13.0 |
| 21 | cs | Czech | 13.4 |
| 22 | hr | Croatian | 13.9 |
| 23 | tl | Tagalog | 14.3 |
| 24 | da | Danish | 14.3 |
| 25 | ko | Korean | 14.4 |
| 26 | ro | Romanian | 14.6 |
| 27 | bg | Bulgarian | 14.7 |
| 28 | zh | Chinese | 15.6 |
| 29 | ht | Haitian Creole | 16.1 |
| 30 | mk | Macedonian | 17.5 |
| 31 | hi | Hindi | 21.5 |
| 32 | et | Estonian | 21.9 |
| 33 | ur | Urdu | 23.1 |
| 34 | fa | Persian | 23.4 |
| 35 | lt | Lithuanian | 24.2 |
| 36 | az | Azerbaijani | 27.1 |
| 37 | he | Hebrew | 27.7 |
| 38 | hy | Armenian | 28.1 |
| 39 | be | Belarusian | 31.3 |
| 40 | af | Afrikaans | 31.8 |
| 41 | sq | Albanian | 32.7 |
| 42 | sk | Slovak | 33.9 |
| 43 | sr | Serbian | 34.7 |
| 44 | kk | Kazakh | 37.7 |
| 45 | kn | Kannada | 38.1 |
| 46 | bn | Bengali | 39.7 |
| 47 | mr | Marathi | 40.9 |
| 48 | eu | Basque | 44.3 |
| 49 | ne | Nepali | 45.4 |

Good news to share with you!

2025-07-01 Upcoming Release
NextGen Medical ASR

Changelog

Looking ahead, we’re also pushing the envelope by refining our model with an extensive trove of medical data to eliminate hallucinations and boost reliability in even the most demanding environments. The upcoming release will not only improve recognition of complex medical terminology but also significantly reduce transcription errors, ensuring that results are both accurate and reliable. Whether you need rapid transcription for global communications or precise documentation in critical healthcare settings, our ASR component is designed to deliver excellence.