Audio normalization #7779
Comments
I think this can be done with AudioContext. I'm not an expert on the subject, but if someone gives me a guide or tutorial on how to do this, I could implement it.
+1
@avelad this is one of the JS implementations I've seen; it does indeed use AudioContext: https://github.com/domchristie/needles
AudioContext is a little annoying in that a user action is required to start one. So autoplay will be 100% dead when AudioContext is involved. At least, that is based on my experience with both UpFish and some web-based console emulators. So I would prefer this be opt-in on the part of the application, however it ends up working. But at least using Web Audio APIs to adjust volume is extremely straightforward.
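As a minimal sketch (not Shaka Player API, just plain Web Audio), routing a video element's audio through a GainNode looks roughly like this; the element selector and the 0.7 gain value are placeholders:

```js
// Route the <video> element's audio through a GainNode so the output level
// can be adjusted. Element selector and gain value are placeholders.
const video = document.querySelector('video');
const ctx = new AudioContext();
const source = ctx.createMediaElementSource(video);
const gainNode = ctx.createGain();
gainNode.gain.value = 0.7;  // fixed attenuation of roughly -3 dB
source.connect(gainNode).connect(ctx.destination);

// The autoplay caveat mentioned above: the context usually starts
// "suspended" until a user gesture, so resume it on the first click.
document.addEventListener('click', () => {
  if (ctx.state === 'suspended') {
    ctx.resume();
  }
}, {once: true});
```

Tying `resume()` to a one-time user gesture is one way to cope with the autoplay restriction, which is also why this should probably stay opt-in for the application.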
Or maybe you know what solution the YouTube player uses?
I do not. If I did know, I would not be able to share that information, because my employer generally frowns on leaks. I'm sure there are people at YouTube who both know and might have the authority to share what they know.

But I suspect it's not a mind-blowingly difficult problem to solve. Adjusting volume is easy. It's merely a question of how you want to drive it and where/when/how you want to compute the adjustment.

The Web Audio part is simple enough. You source data from the video element, run it through a node to adjust the volume, then output. If normalization is fixed, you just set the volume adjustment on the node and you're done.

If normalization is dynamic and based on some pre-computed map of volume levels, you need to track the current timestamp of the video element (currentTime) and update the gain as playback progresses.

If normalization is both dynamic and dynamically computed, you want a custom audio node that allows you to read the amplitude and compute adjustments on-the-fly. I don't know any good algorithm for this off-hand, but I'm sure you could find some research on the topic. This would be the in-browser, on-the-fly version of the tool used to pre-compute the levels in the scenario above. In this version, there's no metadata to communicate, because everything is done in-browser by the Web Audio node.
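For the pre-computed-map variant, a rough sketch might look like the following. The map format, the sidecar data, and all names here are illustrative assumptions, not an existing Shaka Player feature:

```js
// Hypothetical sidecar table of {start, gain} entries sorted by start time,
// produced offline by whatever tool measured the content's loudness.
const loudnessMap = [
  {start: 0,   gain: 1.0},
  {start: 120, gain: 0.6},   // turn a loud section down
  {start: 300, gain: 1.4},   // bring a quiet section up
];

function driveGainFromMap(video, gainNode, audioCtx, map) {
  const update = () => {
    const t = video.currentTime;
    // Pick the last entry whose start time has been reached.
    let entry = map[0];
    for (const e of map) {
      if (e.start <= t) {
        entry = e;
      } else {
        break;
      }
    }
    // Ramp toward the target to avoid audible volume steps.
    gainNode.gain.setTargetAtTime(entry.gain, audioCtx.currentTime, 0.1);
  };
  video.addEventListener('timeupdate', update);
  update();
}
```

For the fully dynamic, computed-in-browser case, the same GainNode would instead be driven by something like an AnalyserNode or an AudioWorkletNode that measures the signal level in real time.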
Hi, I would like to request a new feature. Audio normalization is a necessary feature in Shaka Player. Currently, if you have material with different volume levels, the user watching has to keep increasing or decreasing the volume, which is annoying. I thought of using LUFS to measure the loudness of the content and normalize it, exactly as YouTube does.