HTTP Live Streaming transport stream segments
Generally, HTTP Live Streaming delivers a multimedia presentation containing both audio and video content as segmented MPEG-2 transport streams or fragmented MP4 files. Apple recommends using fragmented MP4 files.
For specific delivery and playback scenarios, such as music services and low-bit-rate audio-only playback, delivery of audio-only content is also supported; in that case the content is packaged as either packed audio or a fragmented MP4 file.
A transport stream segment, a packed audio segment, and a fragmented MP4 file are each referred to as a content segment in this documentation.
Beginning with HTTP Live Streaming protocol version 4, audio and video content can be packaged and delivered separately. This benefits situations where several variations of a multimedia presentation must be delivered. For example, when delivering one video stream with audio in four different languages, packaging the video four times, once per language, would be wasteful. Instead, we recommend packaging each language as a separate packed audio or fragmented MP4 file, and packaging the video once as a video-only MPEG-2 transport stream or another fragmented MP4 file.
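The separation described above is expressed in the multivariant playlist: each audio rendition is declared with an EXT-X-MEDIA tag sharing a GROUP-ID, and the video-only stream references that group through its AUDIO attribute. The following fragment is an illustrative sketch; the URIs, bandwidth, and codec strings are placeholders, not values from a real deployment.

```m3u8
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="en",NAME="English",DEFAULT=YES,URI="audio_en.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="fr",NAME="French",URI="audio_fr.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="de",NAME="German",URI="audio_de.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",LANGUAGE="es",NAME="Spanish",URI="audio_es.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.640028,mp4a.40.2",AUDIO="aud"
video_2mbps.m3u8
```

Because the video media playlist is shared by all four renditions, adding a fifth language later requires only a new packed audio or fragmented MP4 package and one additional EXT-X-MEDIA line.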
Typically, these media segments are created with segmentation tools from either a multiplexed MPEG-2 transport stream or a fragmented MP4 file, each of which carries the multimedia presentation for a particular variant.
Media stream segmentation should be performed based on presentation time stamps (PTSs). The difference between the first (earliest) PTS in segment n and the first PTS in segment n + 1 must be less than or equal to the segment duration indicated by the #EXT-X-TARGETDURATION tag.
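The PTS-based duration rule can be sketched as a simple check. This is a hypothetical validation snippet, not part of any segmentation tool: the segment PTS values and target duration below are invented for illustration, and the 90 kHz clock is the standard MPEG-2 PTS timebase.

```python
# Illustrative check: segment duration derived from PTS values must not
# exceed the playlist's declared target duration.

PTS_CLOCK_HZ = 90_000  # MPEG-2 PTS timebase, ticks per second

# Hypothetical earliest PTS of segments 1..4, in 90 kHz ticks.
first_pts = [0, 540_000, 1_080_000, 1_620_000]

# Hypothetical value declared by #EXT-X-TARGETDURATION:6
target_duration = 6  # seconds

def segment_durations(pts_list, clock_hz=PTS_CLOCK_HZ):
    """Duration of segment n = first PTS of segment n+1 minus first PTS of n."""
    return [(nxt - cur) / clock_hz for cur, nxt in zip(pts_list, pts_list[1:])]

durations = segment_durations(first_pts)
print(durations)  # each entry should be <= target_duration
assert all(d <= target_duration for d in durations)
```

Note that this only checks adjacent segment boundaries; the duration of the final segment must be determined from its last frame rather than from a following segment's PTS.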