FFmpeg 8.0 merges OpenAI "Whisper Filter" for automatic speech recognition, Vulkan AV1 encoding, & VP9 decoding
Whisper is what I think is one of the best uses for machine learning.
Deleted by moderator
Check out Voxtral Mini or Small if you have the GPU for it. It works really well on English, and since it comes from a French company, I'd be surprised if it didn't handle French well too.
Whisper struggles with non-English languages and background noise. You might get better results using the larger models (medium/large) with a lower temperature setting to reduce the hallucinations and repetitions you're experiencing.
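To make that concrete, here's roughly what it looks like with the whisper.cpp CLI. Treat it as a sketch: the binary name, model path, and flag spellings are from a recent whisper.cpp build and may differ on yours (check `--help` before copying anything).

```shell
# Sketch of a whisper.cpp run using a larger model with temperature pinned
# at 0 to reduce hallucinated/repeated output. The command is echoed rather
# than executed, so this snippet runs even without whisper.cpp installed;
# all paths are placeholders.
MODEL="models/ggml-medium.bin"   # medium/large tend to handle non-English audio better
AUDIO="interview.wav"            # whisper.cpp expects 16 kHz mono WAV input

CMD="whisper-cli -m $MODEL -f $AUDIO -l auto --temperature 0 -osrt"
echo "$CMD"
```

On noisy recordings, pre-filtering the audio (e.g. with ffmpeg's `highpass`/`lowpass` filters) before transcription can also help.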
I’ve always thought gpu/hardware accelerated encoding was the same thing and worse than software encoding quality wise but way way faster. Does that mean the gpu can software encode with vulkan support?
"Software encoding" is just a colloquial term for running the encoder on the CPU, because the alternative used to be dedicated hardware for the task.
As for the Vulkan part, I have no clue how that works. Does Vulkan have an open equivalent or something?
This Reddit comment says it’s more a compatibility layer.
The Vulkan AV1 video encode just wraps the native fixed-function calls in a compatibility layer that makes it easy for ffmpeg users and libavcodec integrators to support multiple vendors and OSes. It does not add a new Vulkan-compute-based encoder that enables AV1 encoding on GPUs that don't officially advertise it. So expect it'll work with Nvidia Ada/Blackwell, AMD RDNA3/4, Intel Arc, and other hardware that officially supports AV1 encoding via their respective APIs.
If you're already happy using av1_nvenc, just use that. At best, av1_vulkan should perform about the same. At worst, it may add slight overhead and limit options/features compared to the latest native API on a given platform. However, Vulkan Video seems to be well supported on Nvidia.
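For anyone who wants to A/B the two on their own clips, the invocations look something like this. The encoder names (av1_vulkan, av1_nvenc) are real FFmpeg encoders; everything else here (file names, the bitrate, the exact hwupload filter chain for Vulkan) is illustrative and may need adjusting per setup.

```shell
# Sketch: the same encode through the Vulkan wrapper vs. NVENC directly.
# Commands are echoed, not executed, so the snippet runs without ffmpeg or
# a capable GPU present. Vulkan encoders generally need frames uploaded to
# the GPU first, hence the hwupload step; your filter chain may differ.
IN="clip.mkv"

VK="ffmpeg -init_hw_device vulkan -i $IN -vf format=nv12,hwupload -c:v av1_vulkan -b:v 6M out_vk.mkv"
NV="ffmpeg -i $IN -c:v av1_nvenc -b:v 6M out_nv.mkv"

echo "$VK"
echo "$NV"
```

Comparing the two outputs with a metric like VMAF (ffmpeg's `libvmaf` filter) would answer the quality question more rigorously than eyeballing.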
I think including the word "OpenAI" in the post title is somewhat of a misnomer, implying an encrapification of the FFmpeg project that isn't really happening.
Yes, it is true OpenAI originally developed the Whisper model, and I hate OpenAI; however:
- Whisper is actually open source, unlike most OpenAI crap.
- FFmpeg isn't even directly using the OpenAI version, which is written in Python - they're using a C++ port called whisper.cpp.
- We've been able to use speech recognition for decades, so unlike other AI models, I don't think a speech recognition model that does it better is a problem.
- You don't even necessarily have to compile FFmpeg with Whisper support.
I get the dislike of AI, but the idea of association with OpenAI is overblown and not really reflective of reality. Now I can get not wanting to use open source projects whose developers don't reflect your principles; however, I think this ethical issue is more indirect than may initially appear and is not a strong reason to quit using what is still the most effective media conversion tool.
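On the "you don't even have to compile it in" point: Whisper support is opt-in at configure time. The flag below matches the FFmpeg 8.0 release notes, and it only does anything if whisper.cpp is installed where the build can find it; treat the rest as a sketch.

```shell
# Whisper is off by default; a plain ./configure builds FFmpeg without it.
# Commands are echoed rather than run, since this snippet isn't sitting
# inside an FFmpeg source tree.
WITH="./configure --enable-whisper"   # builds the whisper audio filter (needs whisper.cpp)
WITHOUT="./configure"                 # default: no whisper filter compiled in

echo "$WITH"
echo "$WITHOUT"
```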
Hopefully the speech recognition is better than whatever the fuck most online video platforms use for automatic subtitles at the moment.
I've built an app with Whisper, the level of 'hit or miss' entirely depends on the size of the model and language. Even audio quality is a lesser factor in my experience. So, it depends...
I was messing around with HomeAssistant the other day, which uses the same speech recognition engine, and I found it to be decent.
has anyone compared vulkan av1 to nvenc or vaapi? too new still?
Deleted by moderator
I mean, here it is used optionally to help with accessibility. That is objectively a good use of AI.
Also it's running locally. I think the biggest problem with AI is the data harvesting and this is just not that
This isn't really GenAI, so I don't really have a problem with it much like how I don't have a problem with AI upscaling, for instance. It also can run locally.
I mostly agree with you. However, I think there are some caveats to upscaling; there are so many lazy "4K AI UPSCALE BEST QUALITY" videos online that just don't look good and were clearly put there just to get views.
However, I've also found they have their uses; for instance, I wanted to laser cut a TMBG Flood logo once, but there were very few good images online that traced well in Inkscape. I ended up doing an AI upscale of the least terrible one with a white background, and that traced pretty well in Inkscape.
ugh so what's the alternative package to ffmpeg?
No need to panic in this case. While I hate OpenAI, there are two things to note here:
- Whisper is an open source library for speech recognition rather than generative AI, run entirely locally. It's just using ML to do something we could already do with computers (speech recognition), but better.
- They aren't even directly using the OpenAI version - they're using whisper.cpp, a port of the model.
Good luck with that... ffmpeg is the de facto standard.
This is one of the actually decent uses of this model. I have used Whisper to transcribe phone calls, and just the other week I had to export the audio from a video I was working on and run Whisper on it to get subtitles. It's still not a set-it-and-forget-it solution, but correcting its small mistakes here and there is so much faster than manually transcribing the audio.
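With the new filter, that export-then-transcribe round trip may collapse into a single command. A sketch, assuming the filter option names (`model`, `destination`, `format`) from the FFmpeg 8.0 documentation; the model path and file names are placeholders.

```shell
# Sketch: subtitles straight out of ffmpeg via the whisper audio filter.
# Echoed instead of executed, so it runs without an FFmpeg 8.0 build present.
MODEL="models/ggml-base.en.bin"   # a whisper.cpp GGML model; path illustrative

CMD="ffmpeg -i talk.mp4 -vn -af whisper=model=$MODEL:destination=talk.srt:format=srt -f null -"
echo "$CMD"
```

The `-f null -` sink is there because we only want the filter's side effect (the .srt file), not a transcoded output.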
Given how modular ffmpeg is, with the way the switches work, a user never has to interact with that portion of the application. I can technically use ffmpeg to transcode an MP3 without ever touching the video components.
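For the record, that audio-only case is just a matter of dropping the video stream; a sketch with made-up file names:

```shell
# Sketch: re-encode only the audio. -vn drops video entirely, so no video
# code path is involved. Echoed rather than executed.
CMD="ffmpeg -i input.mkv -vn -c:a libmp3lame -q:a 2 output.mp3"
echo "$CMD"
```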