Blazing-Fast Audio Transcription on Mac with MLX Whisper

Aaron

Introduction

I recently needed to transcribe some audio and happened across a tweet from Awni Hannun announcing that the latest MLX Whisper is even faster: it transcribes 12 minutes of audio in 12.3 seconds on an M2 Ultra, nearly 60x real time.

Since I have a Mac, I decided to give it a try. Turns out it’s very straightforward to set up, so here’s a quick write-up.

Installation

MLX Whisper is a Whisper implementation built on Apple’s MLX framework, optimized for Apple Silicon. There are two ways to install it.

Standard Way

pip install -U mlx-whisper

Using uv

I have a bit of a system-cleanliness obsession: I don't like installing things into the global Python environment. uv 1 is a Python package manager whose uv tool install command installs CLI tools into isolated environments without polluting your system Python. If you share this preference, I recommend this approach:

uv tool install mlx-whisper

After installation, the mlx_whisper command is ready to use.

Usage

Command Line

mlx_whisper audio.mp3 --model mlx-community/whisper-large-v3-turbo

Batch Processing Script

I wrote a small script that accepts multiple audio files and automatically saves transcripts as .txt files alongside the originals:

import os
import sys

import mlx_whisper

if len(sys.argv) < 2:
    print("Usage: python whisper.py <audio_file> [audio_file...]")
    sys.exit(1)

model = "mlx-community/whisper-large-v3-turbo"

for file in sys.argv[1:]:
    print(f"Transcribing: {file}")
    result = mlx_whisper.transcribe(file, path_or_hf_repo=model)

    # Join the segment texts, separated by blank lines for readability.
    output = "\n\n".join(seg["text"].strip() for seg in result["segments"])

    # Save the transcript next to the original audio file.
    txt_path = os.path.splitext(file)[0] + ".txt"
    with open(txt_path, "w", encoding="utf-8") as f:
        f.write(output + "\n")
    print(f"Saved: {txt_path}")
    print()
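The plain-text output above discards timing information, but each Whisper segment also carries start and end timestamps, so a stdlib-only helper could emit SRT subtitles instead. A sketch, with segments_to_srt being my own hypothetical helper (not part of mlx-whisper), assuming Whisper-style segment dicts with start, end, and text keys:

```python
def format_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


def segments_to_srt(segments) -> str:
    """Render Whisper-style segments as an SRT subtitle document."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        start = format_timestamp(seg["start"])
        end = format_timestamp(seg["end"])
        blocks.append(f"{i}\n{start} --> {end}\n{seg['text'].strip()}")
    return "\n\n".join(blocks) + "\n"


# Hand-written example segments, standing in for result["segments"]:
demo = [
    {"start": 0.0, "end": 2.5, "text": " Hello there."},
    {"start": 2.5, "end": 5.0, "text": " This is a test."},
]
print(segments_to_srt(demo))
```

In the script above you would write segments_to_srt(result["segments"]) to an .srt file instead of joining the plain text.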

Just drop in your audio files and run. Since I use uv, I run the script through it as well; the --with flag installs the dependency into a temporary environment:

uv run --with mlx-whisper python whisper.py audio1.mp3 audio2.m4a
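Alternatively, uv understands inline script metadata (PEP 723): declare the dependency at the top of the script and a plain uv run whisper.py will set up the environment for you, no --with needed. This is just a config fragment; the only assumption is the dependency list:

```python
# /// script
# dependencies = ["mlx-whisper"]
# ///
# ...the rest of whisper.py follows unchanged...
```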

Model Storage Location

The model is automatically downloaded via Hugging Face Hub and cached at:

~/.cache/huggingface/hub/models--mlx-community--whisper-large-v3-turbo/

If you decide you no longer need it, simply delete this directory — no leftover files.
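If you want to see how much disk the cached model occupies before deleting it, a small stdlib-only helper does the job. A sketch, where dir_size_bytes is my own hypothetical name and the path is the standard Hugging Face Hub cache location from above:

```python
from pathlib import Path


def dir_size_bytes(path) -> int:
    """Total size of all regular files under path (0 if it doesn't exist)."""
    root = Path(path).expanduser()
    if not root.exists():
        return 0
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())


cache = "~/.cache/huggingface/hub/models--mlx-community--whisper-large-v3-turbo"
print(f"{dir_size_bytes(cache) / 1e9:.2f} GB")
```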

Wrap-up

MLX Whisper on Apple Silicon is genuinely fast. You basically throw a file at it and get results in seconds. Installation and cleanup are both clean. Highly recommended for anyone on a Mac.


  1. uv, a Python package manager by Astral