How might we handle adaptive streaming in a video info.json? #57
Comments
Thanks for writing this up, Jason. My view would be not to make things too complicated and to follow your suggestion of just having an entry in the info.json pointing to the adaptive streaming manifest (M3U8 or MPD), along with as many properties as are common to and known about the sources referenced by that manifest. I agree it should be possible to list adaptive sources alongside standalone sources.
Completely agree with Jon -- let's start simple and get traction first. That would mean simply pointing to the streaming manifest, with the same basic properties as for the other formats.
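To make the "start simple" option concrete, here is a hypothetical sketch of what such an entry might look like. The property names (`sources`, `format`, `duration`) and URLs are illustrative assumptions, loosely following the style of the per-source listings discussed in this thread, not an agreed schema:

```json
{
  "id": "https://example.org/videos/bear/info.json",
  "sources": [
    {
      "id": "https://example.org/videos/bear/playlist.m3u8",
      "format": "application/x-mpegURL",
      "type": "Video",
      "duration": 120.0
    }
  ]
}
```

The adaptive source is just one more entry in the list, pointing at the manifest; the client that understands the format takes it from there.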
Sounds good to me. I'll work on adding an example to my prototype.
(Outsourced, as Prezi (the Presentation API) can just refer to HLS, and clients use the format-specific info.)
Propose close: out of scope / already solved by the adaptive formats themselves.
Agree with close / out of scope. |
This issue came up on 2017-03-07 AV call in the context of a set of streams at different bitrates from Avalon. Agreement on call was that adaptive streaming based on these is out of scope (use a format that supports it instead) and that the use case is really about user selection (where |
This question is a specialization of how we are handling progressive-download videos by listing them out as individual sources. You can see an example of listing sources in #50.
With adaptive streaming formats like HLS and MPEG-DASH, what information should we include for a source? Each of these formats basically points to a document which is a manifest (media presentation description) of all the different adaptations available for a video. The client can then select which to play based on bandwidth and resolution, and can switch sources mid-stream as conditions change. In some cases a single file is used for each adaptation (the on-demand profile, which uses byte-range requests for segments), and in others the source video at each bitrate and resolution is pre-segmented into many small files (the live profile).
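For readers less familiar with these manifests, here is a trimmed, illustrative HLS master playlist of the kind the info.json entry would point to. The variant URLs are made up; each `#EXT-X-STREAM-INF` line describes one adaptation, and the client picks among them by bandwidth and resolution:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.4d401e,mp4a.40.2"
360p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
720p/playlist.m3u8
```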
The assumption is that in some cases multiple sources are still likely to be listed even when an adaptive version is available. This could be because both an HLS and an MPEG-DASH stream are made available to reach more devices. (The two currently use different underlying media formats and so work on different platforms, though the latest HLS allows for fragmented MP4s, as MPEG-DASH has been using.) The listed sources could also include a progressive-download source for clients that don't support the adaptive formats and can't run a client-side library (e.g. hls.js or dash.js).
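A mixed listing along these lines might look like the following hypothetical sketch. The shape and URLs are assumptions for illustration; the MIME types (`application/x-mpegURL` for HLS, `application/dash+xml` for DASH) are the conventional ones:

```json
{
  "sources": [
    { "id": "https://example.org/videos/bear/playlist.m3u8",
      "format": "application/x-mpegURL" },
    { "id": "https://example.org/videos/bear/manifest.mpd",
      "format": "application/dash+xml" },
    { "id": "https://example.org/videos/bear/bear-360p.mp4",
      "format": "video/mp4", "width": 640, "height": 360 }
  ]
}
```

A client would pick the first source whose format it supports, falling back to the progressive MP4 if it can play neither adaptive format.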
I could see this going two ways. In both cases we would list the URL of the media presentation description document as one of the sources, along with high-level information about that source, such as format. But then we could either stop there, or also list the available adaptations within each adaptive streaming source. Basically, the server creating the info.json for a video could look inside the media presentation document and pull out information about each adaptation. This seems duplicative, but is it useful to have all of this information available with the single request for the info.json? Would the set of available adaptations ever change a client's decision about whether to play an adaptive stream?
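The second option might look something like this hypothetical sketch, where an invented `adaptations` property duplicates what the server found inside the manifest; none of these property names are agreed, they are just one way to picture it:

```json
{
  "id": "https://example.org/videos/bear/manifest.mpd",
  "format": "application/dash+xml",
  "adaptations": [
    { "type": "video", "width": 1280, "height": 720, "bitrate": 2500000 },
    { "type": "video", "width": 640,  "height": 360, "bitrate": 800000 },
    { "type": "audio", "bitrate": 128000 }
  ]
}
```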
Though it isn't required, it is common, recommended practice for the audio adaptation(s) to be separate from the video adaptation(s). This means that if we were going to describe the adaptations for a source, we would be describing the technical details of both the audio and the video, which could be either separate files or multiplexed together.
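In MPEG-DASH terms, that separation shows up as distinct adaptation sets in the MPD. A heavily trimmed, illustrative fragment (segment details omitted, so not a playable manifest):

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT2M">
  <Period>
    <!-- Video-only adaptation set: representations differ in bitrate/resolution -->
    <AdaptationSet mimeType="video/mp4" segmentAlignment="true">
      <Representation id="v720" bandwidth="2500000" width="1280" height="720"/>
      <Representation id="v360" bandwidth="800000" width="640" height="360"/>
    </AdaptationSet>
    <!-- Audio carried separately from the video -->
    <AdaptationSet mimeType="audio/mp4" segmentAlignment="true">
      <Representation id="a128" bandwidth="128000"/>
    </AdaptationSet>
  </Period>
</MPD>
```

Describing "the adaptations" of a source in info.json would therefore mean describing entries of both kinds, not just video renditions.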
Thoughts on this? Other ways we could handle adaptive bitrate streaming in a video info.json?