HTTP Dynamic Streaming – Part 2: Late-Binding Audio Using OSMF

Part 1 of this series discussed HTTP Dynamic Streaming (HDS) at a fairly high level. The next few editions in the series will explore some of the more powerful features that make this protocol advantageous. Multi-bitrate stream switching and file encryption are two important features we’ll cover in the near future, as they’re big reasons to stream over any protocol. In this article, however, I’d like to discuss a brand-new feature of the Open Source Media Framework (OSMF) known as “late-binding audio”.


Late-Binding Audio Defined


Late-binding audio refers to the ability to stream a video with multiple associated audio tracks, making it possible to play an alternative audio track on the client side using the same video file. There’s no need to encode, store, and deliver separate video + audio assets for each version you would like to provide. Say, for example, that you would like to provide video content with audio translated into multiple languages. Instead of creating a separate video + audio file for each language, you encode the video only once and include the alternate audio-only tracks along with it. This represents a huge savings in time, storage, and bandwidth that anyone making the switch to HTTP Dynamic Streaming can take advantage of.

Updates to OSMF that came in version 1.6, Sprint 5 make streaming late-binding audio files over HTTP possible. Specifically, the MediaPlayer class now contains the read-only public property hasAlternativeAudio : Boolean. Using the LateBindingAudio example application included in the latest OSMF release, I’ll demonstrate step-by-step how to get this new feature working.
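As a quick sketch of how the new API surfaces in code, here is roughly how a player might detect and switch to an alternate audio track. The manifest URL is hypothetical, and while hasAlternativeAudio and switchAlternativeAudioIndex() are part of the OSMF 1.6 MediaPlayer additions, treat the exact usage below as an illustration rather than a drop-in implementation:

```actionscript
import org.osmf.media.MediaPlayerSprite;
import org.osmf.media.URLResource;

// Hypothetical manifest URL; point this at your own master .f4m
var sprite:MediaPlayerSprite = new MediaPlayerSprite();
addChild(sprite);
sprite.resource = new URLResource("http://localhost/vod/Obama.f4m");

// Once the media is loaded, check for alternate audio tracks.
// hasAlternativeAudio and switchAlternativeAudioIndex() are the
// late-binding additions in OSMF 1.6 Sprint 5.
if (sprite.mediaPlayer.hasAlternativeAudio)
{
    // Switch to the first alternate audio stream (index 0)
    sprite.mediaPlayer.switchAlternativeAudioIndex(0);
}
```

In practice you would trigger the switch from a UI event (such as the dropdown in the sample application) rather than immediately on load.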

Many of the steps we’ll be taking are the same steps we took when packaging our files for simple streaming over HTTP, so if you’d like to review, please check out HTTP Dynamic Streaming – Part 1: An Introduction to Streaming Media.


Late-Binding Audio, Step-by-Step

1. Gather your media assets
In this example, we’ll be working with a video that has one alternate audio track (President Barack Obama’s speech from July 25th, plus an alternate audio track of the transcript translated into Spanish). You can include as many alternate audio tracks as you’d like; however, there are some recommendations from the OSMF team regarding how you prepare your media. One suggestion is that your alternate audio tracks should be at least as long as the main video + audio track to ensure smooth stream switching. Other guidelines relate to encoding best practices for streaming over HTTP in general. You can read the white paper on encoding standards here. A list of known issues with OSMF 1.6 Sprint 5 can be found in the release notes.

The creation of the media assets prior to packaging them for HTTP streaming is beyond the scope of this article, but for your information:

  • I used Adobe Premiere Pro 5.5 to edit the original video file down to something shorter (~2 min).

  • I used Adobe Audition CS 5.5 to edit the audio, and to create the alternate audio track.

  • I encoded the video and audio files to .f4v using Adobe Media Encoder (see part 1 of the series for file type requirements).

  • I happily found a transcription of the speech online.

  • Google Translate helped me with the translation (it’s been awhile since I’ve spoken Spanish).

  • AT&T’s Natural Voices text-to-speech demo provided me with the .wav files of the Spanish audio.
So, to start you’ll need a minimum of 3 separate files:
  • The original video + audio file encoded into an .flv or MP4-compatible format

  • The audio track from the original video + audio encoded the same as above

  • An alternate audio track, ideally of the same duration as the original audio, encoded the same as above
2. Package your media using the f4fpackager tool
This step is the same as it is for packaging files for simple streaming over HTTP, covered in part 1.

At this point, if you’d like to send additional arguments to the packager, you can enter them here and they’ll show up in the XML of the .f4m file; otherwise, use the minimum arguments. We’ll be editing the XML of the main video’s .f4m file in the next step. After you’ve packaged all of the files, it’s time to create a “master” .f4m file. I’m using 3 source files, so I have 3 sets of 3 packaged files:
  • Obama.f4m

  • ObamaSeg1.f4x

  • ObamaSeg1.f4f

  • Obama_Audio.f4m

  • Obama_AudioSeg1.f4x

  • Obama_AudioSeg1.f4f

  • Obama_altAudio.f4m

  • Obama_altAudioSeg1.f4x

  • Obama_altAudioSeg1.f4f
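The packaging runs that produce the nine files above can be sketched as the following command-line session. The tool location and file names are from this example, and beyond --input-file any additional arguments are optional; see the f4fpackager documentation for the full option list:

```
cd /path/to/f4fpackager        # shipped with the Apache HTTP origin module
./f4fpackager --input-file=Obama.f4v
./f4fpackager --input-file=Obama_Audio.f4v
./f4fpackager --input-file=Obama_altAudio.f4v
```

Each invocation emits a segment file (.f4f), an index file (.f4x), and a manifest (.f4m) for its input.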
3. Create master .f4m file
Next, we’ll be adding some information from the two audio tracks’ .f4m files (the separated audio from the original video, and our alternate Spanish track) to the .f4m of the packaged main video file. Copy the “bootstrapInfo” and “media” tags from inside the .f4m files of the two audio tracks, and paste them into the main video’s .f4m file.
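Assuming the file names from this example, the merged master .f4m ends up shaped roughly like the sketch below. The id, duration, and bootstrapInfo contents are placeholders; the real values (including the base64 bootstrap data and the metadata inside each media tag) come straight from the packager-generated .f4m files:

```xml
<manifest xmlns="http://ns.adobe.com/f4m/1.0">
  <id>Obama</id>
  <duration>120</duration>

  <!-- One bootstrapInfo per stream, copied from each generated .f4m -->
  <bootstrapInfo profile="named" id="bootstrap1">...base64 data...</bootstrapInfo>
  <bootstrapInfo profile="named" id="bootstrap2">...base64 data...</bootstrapInfo>
  <bootstrapInfo profile="named" id="bootstrap3">...base64 data...</bootstrapInfo>

  <!-- Main video + audio (original manifest contents) -->
  <media url="Obama" bootstrapInfoId="bootstrap1">...</media>

  <!-- Separated original audio, copied from Obama_Audio.f4m -->
  <media url="Obama_Audio" bootstrapInfoId="bootstrap2">...</media>

  <!-- Alternate Spanish audio, copied from Obama_altAudio.f4m -->
  <media url="Obama_altAudio" bootstrapInfoId="bootstrap3">...</media>
</manifest>
```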
4. Add attributes to media tags in master .f4m
In order for late-binding audio to work, we’ll need to add a few attributes to the media tags inside the main .f4m file. In the media tag of your alternate audio, add:
  • alternate="true"

  • type="audio"
In order to get the example application that I’m using to behave the way I’d like it to, I added another attribute to the alternate audio’s media tag:
  • lang="Spanish"
The player is using that attribute to populate a dropdown menu of available alternate audio tracks, and by including this attribute, I get a nicely-named menu item in the player.


*Note:* I’ve noticed that when using packaged .f4v’s, the example player can’t load the files unless I add yet another attribute to (every) media tag:
  • bitrate=""
Apparently, the player doesn’t care what the bitrate value is, even if it’s an empty String, but it does seem to require that the attribute be present when streaming packaged .f4v’s.
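Putting steps 3 and 4 together, the alternate audio’s media tag in the master .f4m looks something like this. The url and bootstrapInfoId values are placeholders from this example, and the elided metadata comes from the packager-generated manifest:

```xml
<media url="Obama_altAudio"
       bootstrapInfoId="bootstrap3"
       alternate="true"
       type="audio"
       lang="Spanish"
       bitrate="">
  ...
</media>
```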

5. Place all packaged files into vod folder in the webroot of your Apache server

When done, the vod folder should contain all of the packaged files from step 2 plus the edited master .f4m. (“readme.htm” and “sample2_1000kbps.f4v” are files that come with Flash Media Server, and can be ignored.)

Setting Up Flash Builder

6. Make sure you’re using the latest versions of Flash Builder, Flash Player, and OSMF
In order for this example to work, you’ll need to ensure that you’re using Flash Builder 4.5.1 and the latest OSMF .swc. You’ll need to replace the OSMF .swc that comes with the latest Flex SDK with the one from OSMF 1.6 Sprint 5, and deploy your project to the latest version of the Flash Player (at least 10.2).

As mentioned earlier, this example uses the LateBindingAudioSample application that comes bundled with the latest OSMF release. It can be found in OSMF/apps/samples/framework/LateBindingAudioSample. Modify this application to point to your main video’s .f4m file on the server.
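Pointing the sample at your own content typically amounts to changing the resource URL it loads. The variable name and host below are hypothetical; the exact place to make the change differs per sample:

```actionscript
// Hypothetical constant; replace the host and path with your own
// Apache server and master manifest from step 5.
private static const F4M_URL:String = "http://localhost/vod/Obama.f4m";
```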

That’s it! Ensure that your Apache web server is running, and if you’re using the same example application, run the application in debug mode to get valuable information about the stream in the Console. Select your video asset from the dropdown menu up top, and hit “Play”. Choose the alternate audio stream at any time from the dropdown in the lower left of the application.

Where to go from here

For a more in-depth look into HDS, including discussions on file encryption and live streaming, please refer to John Crosby’s series on HTTP Dynamic Streaming.

For an informative look into the world of OSMF, including deep-dives into such things as building custom media players and plugin integration and development, please see David Hassoun and John Crosby’s article series “Mastering OSMF” on the Adobe Developer Connection site.

For information on how Realeyes Media can help you make the switch to HTTP Dynamic Streaming, please feel free to contact us today.


Documentation

Adobe HTTP Dynamic Streaming documentation

OSMF.org

f4fpackager documentation

F4M file format specification


Downloads

HTTP origin module

f4fpackager

Flash Media Development Server (free)

Apache web server

