Jul 22, 2022 | Read time 3 min

Great Improvements for Brazilian Portuguese and Canadian French ASR

Our Accuracy Team Lead, John Hughes, talks us through the facts and figures from the latest testing of our Brazilian Portuguese and Canadian French ASR.
John Hughes, Accuracy Team Lead

There’s something hugely satisfying about overtaking your competitors after putting in the hard work. It’s even more of a win when you find yourself way out in front, as we have with our latest uplift across two of our most-used languages: French and Portuguese. In this post, we look at the significant improvements to these models and what they mean for Brazilian Portuguese and Canadian French ASR.

Using a wide range of test sets, we’ve challenged both our Enhanced and Standard models on two fronts. First, we see how both models compare against our last update from earlier this year; then it’s up against rivals from Big Tech such as Google and Microsoft.

Huge Uplift for Canadian French

Let’s begin with our hugely impressive uplift for Canadian French. In Quebec alone, it’s believed over 7 million people speak a variety of Canadian French, making up 22% of the Canadian population, and a further 2 million speak it as a second language. Using a 6-hour dataset from the Journal des débats de l'Assemblée nationale, we saw an average relative reduction in Word Error Rate (WER) of 18% for our Standard model and 13% for our Enhanced model.

What’s more, the huge leaps we’ve made with our Standard model mean we’re now ahead of all our competitors, as illustrated in the table of results below. (We’ve displayed these results as speech recognition accuracy for Canadian French, which is simply 100 − WER.)
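To make these metrics concrete, here’s a minimal sketch of how WER, accuracy (100 − WER), and relative WER reduction are commonly computed. This is an illustration, not our evaluation pipeline; the French sentence and the numbers in the example are made up for demonstration.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate as a percentage: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

def relative_reduction(old_wer: float, new_wer: float) -> float:
    """Relative WER reduction as a percentage of the old WER."""
    return 100.0 * (old_wer - new_wer) / old_wer

# One substitution ("du" vs "de") in five words → WER 20%, accuracy 80%
ref = "le premier ministre du québec"
hyp = "le premier ministre de québec"
print(f"WER: {wer(ref, hyp):.1f}%")            # → WER: 20.0%
print(f"Accuracy: {100 - wer(ref, hyp):.1f}%")  # → Accuracy: 80.0%

# e.g. a drop from 10.0% to 8.2% WER is an 18% relative reduction
print(f"Relative reduction: {relative_reduction(10.0, 8.2):.0f}%")
```

Note that a relative reduction is measured against the previous error rate, so an 18% relative reduction from a 10% WER means the new WER is 8.2%, not 10 − 18 percentage points.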

Continued Improvement for Brazilian Portuguese

We’ve been equally pleased with our latest round of results for Brazilian Portuguese. With over 200 million Brazilian Portuguese speakers around the world, it’s essential that they’re served by great ASR, and Speechmatics can proudly claim to deliver exactly that.

On two internal test sets, we see an average relative reduction in WER of 5% for our Standard model and 3% for our Enhanced model, compared to our last quarterly release. While we’d love to say we’ve again overtaken our competitors, we can’t, because we’ve always been ahead of them when it comes to Brazilian Portuguese. These latest results extend our market lead, pushing our accuracy past the 80% mark while other, much larger names hover around 70%.

The Benefits of Self-Supervised Learning

As we continue to use self-supervised learning, we’ll continue to see the benefits. With a number of new languages on the way to add to our already impressive list of 34, we’ll do our best to improve on every one of them continuously. After all, our mission is to understand every voice.

Voice by voice, we’re getting there.

John Hughes, Accuracy Team Lead
