
Inside the Neural Vocoder Zoo: WaveNet to Diffusion in Four Audio Clips

2025/09/09 02:33

Hey everyone, I’m Oleh Datskiv, Lead AI Engineer at the R&D Data Unit of N-iX. Lately, I’ve been working on text-to-speech systems and, more specifically, on the unsung hero behind them: the neural vocoder.

Let me introduce you to this final step of the TTS pipeline — the part that turns abstract spectrograms into the natural-sounding speech we hear.

Introduction

If you’ve worked with text‑to‑speech in the past few years, you’ve used a vocoder - even if you didn’t notice it. The neural vocoder is the final model in the Text to Speech (TTS) pipeline; it turns a mel‑spectrogram into the sound you can actually hear.

Since the release of WaveNet in 2016, neural vocoders have evolved rapidly. They have become faster, lighter, and more natural-sounding. From flow-based models to GANs to diffusion, each new approach has pushed the field closer to real-time, high-fidelity speech.

2024 felt like a definitive turning point: diffusion-based vocoders like FastDiff were finally fast enough to be considered for real-time usage, not just batch synthesis as before. That opened up a range of new possibilities. The most notable ones were smarter dubbing pipelines, higher-quality virtual voices, and more expressive assistants, even without a high-end GPU cluster.

But with so many options now available, questions remain:

  • How do these models sound side-by-side?
  • Which ones keep latency low enough for live or interactive use?
  • Which vocoder is the best choice for your use case?

This post will examine four key vocoders: WaveNet, WaveGlow, HiFi‑GAN, and FastDiff. We’ll explain how each model works and what makes them different. Most importantly, we’ll let you hear the results of their work so you can decide which one you like better. We will also share custom benchmarks from our own evaluation.

What Is a Neural Vocoder?

At a high level, every modern TTS system still follows the same basic path:

Text → Text encoder → Acoustic model → Neural vocoder → Waveform

Let’s quickly go over what each of these blocks does and why we are focusing on the vocoder today:

  1. Text encoder: It changes raw text or phonemes into detailed linguistic embeddings.
  2. Acoustic model: This stage predicts how the speech should sound over time. It turns linguistic embeddings into mel spectrograms that show timing, melody, and expression. It has two critical sub-components:
     • Alignment & duration predictor: This component determines how long each phoneme should last, ensuring the rhythm of speech feels natural and human.
     • Variance/prosody adaptor: At this stage, the adaptor injects pitch, energy, and style, shaping the melody, emphasis, and emotional contour of the sentence.
  3. Neural vocoder: Finally, this model converts the prosody-rich mel spectrogram into actual sound, the waveform we can hear.
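The stages above can be sketched as three functions chained together. This is a toy illustration only: the bodies below are placeholder math (random embeddings, a fixed projection, noise output), not real models, and the shapes (16-dim embeddings, 80 mel bins, a 256-sample hop) are just common conventions, not requirements.

```python
# Minimal sketch of the TTS pipeline stages described above.
# All function bodies are illustrative placeholders, not real models.

import numpy as np

def text_encoder(phonemes: list[str]) -> np.ndarray:
    """Map phonemes to (toy) linguistic embeddings, one vector per phoneme."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(phonemes), 16))

def acoustic_model(embeddings: np.ndarray, frames_per_phoneme: int = 5) -> np.ndarray:
    """Predict a mel spectrogram: the duration predictor expands each phoneme
    into frames, then a (toy) projection produces 80 mel bins per frame."""
    expanded = np.repeat(embeddings, frames_per_phoneme, axis=0)  # duration predictor
    projection = np.ones((expanded.shape[1], 80)) / expanded.shape[1]
    return expanded @ projection  # variance adaptor + decoder stand-in

def vocoder(mel: np.ndarray, hop_length: int = 256) -> np.ndarray:
    """Turn the mel spectrogram into a waveform (here: just noise of the
    right length, hop_length samples per frame)."""
    rng = np.random.default_rng(1)
    return rng.normal(size=mel.shape[0] * hop_length)

phonemes = ["HH", "AH", "L", "OW"]
mel = acoustic_model(text_encoder(phonemes))
wave = vocoder(mel)
print(mel.shape, wave.shape)  # (20, 80) (5120,)
```

The key point is the shape contract: the vocoder receives a (frames × mel bins) matrix and must emit hop_length samples per frame; everything upstream only decides what those frames contain.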

The vocoder is where good pipelines live or die. Map mels to waveforms well, and the result sounds like a studio recording. Get it wrong, and even with the best acoustic model, you will hear metallic buzz in the generated audio. That’s why choosing the right vocoder matters - they’re not all built the same. Some optimize for speed, others for quality. The best models balance naturalness, speed, and clarity.

The Vocoder Lineup

Now, let's meet our four contenders. Each represents a different generation of neural speech synthesis, with its own approach to balancing audio quality, speed, and model size. The numbers below are drawn from the original papers, so actual performance will vary with your hardware and batch size. We will share our benchmark numbers later in the article for a real-world check.

  1. WaveNet (2016): The original fidelity benchmark

Google's WaveNet was a landmark that redefined audio quality for TTS. As an autoregressive model, it generates audio one sample at a time, with each new sample conditioned on all previous ones. This process resulted in unprecedented naturalness at the time (MOS=4.21), setting a "gold standard" that researchers still benchmark against today. However, this sample-by-sample approach also makes WaveNet painfully slow, restricting its use to offline studio work rather than live applications.
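The serial bottleneck is easiest to see in code. Below is a toy autoregressive sampling loop in the spirit of WaveNet: each new sample is conditioned on a window of previous samples. The "model" is a hypothetical stand-in (a decaying weighted sum squashed by tanh), not a trained dilated-convolution stack.

```python
# Toy autoregressive generation: one model call per output sample.
import numpy as np

def toy_predict_next(context: np.ndarray) -> float:
    """Stand-in for WaveNet's dilated-convolution stack (not the real model)."""
    weights = 0.5 ** np.arange(len(context), 0, -1)  # recent samples matter more
    return float(np.tanh(context @ weights))

def autoregressive_generate(n_samples: int, receptive_field: int = 64) -> np.ndarray:
    audio = np.zeros(n_samples)
    audio[0] = 0.1  # seed sample
    for t in range(1, n_samples):            # one step per output sample --
        start = max(0, t - receptive_field)  # this serial loop is why WaveNet is slow
        audio[t] = toy_predict_next(audio[start:t])
    return audio

wave = autoregressive_generate(1000)
print(wave.shape)  # (1000,)
```

At 22,050 samples per second of audio, this loop runs tens of thousands of model evaluations per second of speech, and none of them can be parallelized across time.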

  2. WaveGlow (2019): Leap to parallel synthesis

To solve WaveNet's critical speed problem, NVIDIA's WaveGlow introduced a flow-based, non-autoregressive architecture. Generating the entire waveform in a single forward pass drastically reduced inference time to approximately 0.04 RTF, far faster than real time. While the quality is excellent (MOS≈3.961), it was considered a slight step down from WaveNet's fidelity. Its primary limitations are a larger memory footprint and a tendency to produce a subtle high-frequency hiss, especially with noisy training data.
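What makes a flow "invertible" is the coupling-layer trick: transform half of the signal using quantities computed only from the other half, so the inverse is exact. The sketch below is a single affine coupling step with a made-up one-parameter "network" standing in for WaveGlow's WaveNet-style conditioning net; it only demonstrates the invertibility property, not the real architecture.

```python
# A toy affine coupling layer: the invertible building block behind
# flow-based vocoders such as WaveGlow.
import numpy as np

def coupling_forward(x: np.ndarray, w: float = 0.3, b: float = 0.1) -> np.ndarray:
    xa, xb = x[: len(x) // 2], x[len(x) // 2 :]
    log_s = np.tanh(w * xa + b)   # scale predicted from xa (toy stand-in net)
    t = 0.5 * xa                  # translation predicted from xa
    return np.concatenate([xa, xb * np.exp(log_s) + t])

def coupling_inverse(y: np.ndarray, w: float = 0.3, b: float = 0.1) -> np.ndarray:
    ya, yb = y[: len(y) // 2], y[len(y) // 2 :]
    log_s = np.tanh(w * ya + b)   # xa passed through unchanged, so this matches
    t = 0.5 * ya
    return np.concatenate([ya, (yb - t) * np.exp(-log_s)])

x = np.random.default_rng(0).normal(size=8)
y = coupling_forward(x)
x_rec = coupling_inverse(y)
print(np.allclose(x, x_rec))  # True
```

Because every layer is invertible, the model can be trained to map audio to Gaussian noise and then run in reverse at inference, turning noise into a full waveform in one parallel pass.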

  3. HiFi-GAN (2020): Champion of efficiency

HiFi-GAN marked a breakthrough in efficiency using a Generative Adversarial Network (GAN) with a clever multi-period discriminator. This architecture allows it to produce extremely high-fidelity audio (MOS=4.36), competitive with WaveNet, from a remarkably small model (13.92 MB). It's ultra-fast on a GPU (<0.006×RTF) and can even run in real time on a CPU, which is why HiFi-GAN quickly became the default choice for production systems like chatbots, game engines, and virtual assistants.
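The multi-period discriminator's core trick is simple: reshape the 1-D waveform into 2-D grids of several different periods, so periodic structure at multiple scales becomes visible to ordinary 2-D convolutions. The sketch below shows only that reshaping step (with zero-padding of the tail), not the discriminator networks themselves.

```python
# Reshape a waveform into (frames, period) grids, one per discriminator period.
import numpy as np

def periodize(wave: np.ndarray, period: int) -> np.ndarray:
    """View a 1-D waveform as a 2-D grid of the given period, zero-padded."""
    pad = (-len(wave)) % period       # samples needed to complete the last row
    padded = np.pad(wave, (0, pad))
    return padded.reshape(-1, period)

wave = np.arange(10, dtype=float)     # stand-in waveform of 10 samples
for p in (2, 3, 5):                   # HiFi-GAN uses periods 2, 3, 5, 7, 11
    print(p, periodize(wave, p).shape)
```

Each discriminator then judges one of these views, so artifacts that repeat every 2, 3, 5, 7, or 11 samples are each caught by a view where they line up column-wise.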

  4. FastDiff (2022): Diffusion quality at real-time speed

Proving that diffusion models don't have to be slow, FastDiff represents the current state-of-the-art in balancing quality and speed. By pruning the reverse diffusion process to as few as four steps, it achieves top-tier audio quality (MOS=4.28) while remaining fast enough for interactive use (~0.02×RTF on a GPU). This combination makes it one of the first diffusion-based vocoders viable for high-quality, real-time speech synthesis, opening the door for more expressive and responsive applications.
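The few-step idea can be shown schematically: start from pure noise and iteratively denoise toward a waveform in a handful of steps. The denoiser below is a hypothetical stand-in that simply pulls samples toward a known target, so convergence is guaranteed by construction; FastDiff's real network predicts noise from a mel-spectrogram condition and a learned noise schedule.

```python
# Schematic few-step reverse diffusion (toy denoiser, not FastDiff's network).
import numpy as np

def toy_denoiser(x: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Stand-in for the learned noise predictor."""
    return x - target  # 'predicted noise' relative to the clean target

def reverse_diffusion(target: np.ndarray, steps: int = 4, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    x = rng.normal(size=target.shape)   # start from pure noise
    for _ in range(steps):              # only 4 steps, vs. ~1000 in early DDPMs
        eps = toy_denoiser(x, target)
        x = x - 0.7 * eps               # move toward the clean signal
    return x

target = np.sin(np.linspace(0, 2 * np.pi, 100))  # pretend this is the waveform
out = reverse_diffusion(target)
print(float(np.abs(out - target).max()) < 0.1)   # close after just 4 steps
```

The engineering challenge FastDiff solves is making a *learned* denoiser accurate enough that four big steps suffice, which is what pulls diffusion quality into the real-time RTF range quoted above.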

Each of these models reflects a significant shift in vocoder design. Now that we've seen how they work on paper, it's time to put them to the test with our own benchmarks and audio comparisons.

Let’s Hear It: A/B Audio Gallery

Nothing beats your ears!

We will use the following sentences from the LJ Speech Dataset to test our vocoders. Later in the article, you can also listen to the original recordings and compare them with the generated audio.

Sentences:

  1. “A medical practitioner charged with doing to death persons who relied upon his professional skill.”
  2. “Nothing more was heard of the affair, although the lady declared that she had never instructed Fauntleroy to sell.”
  3. “Under the new rule, visitors were not allowed to pass into the interior of the prison, but were detained between the grating.”

The metrics we will use to evaluate the model’s results are listed below. These include both objective and subjective metrics:

  • Naturalness (MOS): How human-like it sounds, rated by real listeners on a 1–5 scale.
  • Clarity (PESQ / STOI): Objective scores that measure intelligibility and the level of noise/artifacts. Higher is better.
  • Speed (RTF): An RTF of 1 means it takes 1 second to generate 1 second of audio. For anything interactive, you’ll want this well below 1.
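RTF is simple enough to compute yourself: divide wall-clock generation time by the duration of the audio produced.

```python
# RTF as defined above: generation time / audio duration.
# Values below 1 mean faster than real time.

def real_time_factor(generation_seconds: float, audio_seconds: float) -> float:
    return generation_seconds / audio_seconds

# e.g. 0.5 s to synthesize a 10 s clip -> RTF 0.05, comfortably real time
print(real_time_factor(0.5, 10.0))  # 0.05
```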

Audio Players

(Grab headphones and tap the buttons to hear each model.)

| Sentence | Ground truth | WaveNet | WaveGlow | HiFi‑GAN | FastDiff |
|----|:---:|:---:|:---:|:---:|:---:|
| S1 | ▶️ | ▶️ | ▶️ | ▶️ | ▶️ |
| S2 | ▶️ | ▶️ | ▶️ | ▶️ | ▶️ |
| S3 | ▶️ | ▶️ | ▶️ | ▶️ | ▶️ |

Quick‑Look Metrics

Here are the results we obtained for the four models.

| Model | RTF ↓ | MOS ↑ | PESQ ↑ | STOI ↑ |
|----|:---:|:---:|:---:|:---:|
| WaveNet | 1.24 | 3.4 | 1.0590 | 0.1616 |
| WaveGlow | 0.058 | 3.7 | 1.0853 | 0.1769 |
| HiFi‑GAN | 0.072 | 3.9 | 1.098 | 0.186 |
| FastDiff | 0.081 | 4.0 | 1.131 | 0.19 |

*For the MOS evaluation, we collected ratings from 150 participants with no background in music.

** As an acoustic model, we used Tacotron2 for WaveNet and WaveGlow, and FastSpeech2 for HiFi‑GAN and FastDiff.

Bottom line

Our journey through the vocoder zoo shows that while the gap between speed and quality is shrinking, there’s no one-size-fits-all solution. Your choice of a vocoder in 2025 and beyond should primarily depend on your project's needs and technical requirements, including:

  • Runtime constraints (Is it an offline generation or a live, interactive application?)
  • Quality requirements (What’s a higher priority: raw speed or maximum fidelity?)
  • Deployment targets (Will it run on a powerful cloud GPU, a local CPU, or a mobile device?)

As the field progresses, the lines between these choices will continue to blur, paving the way for universally accessible, high-fidelity speech that is heard and felt.

