AVFrame

public final class AVFrame

This structure describes decoded (raw) audio or video data.

AVFrame is typically allocated once and then reused multiple times to hold different data (e.g. a single AVFrame to hold frames received from a decoder). In such a case, unref() will free any references held by the frame and reset it to its original clean state before it is reused again.

The data described by an AVFrame is usually reference counted through the AVBuffer API. The underlying buffer references are stored in buffer / extendedBuffer. An AVFrame is considered to be reference counted if at least one reference is set, i.e. if buffer[0] != nil. In such a case, every single data plane must be contained in one of the buffers in buffer or extendedBuffer. There may be a single buffer for all the data, or one separate buffer for each plane, or anything in between.
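The reuse pattern described above can be sketched as follows. Only `AVFrame.init()` and `unref()` are taken from this page; the packet source and the decoder calls (`sendPacket`/`receiveFrame`) are assumptions about the surrounding API, named here purely for illustration.

```swift
// A single AVFrame reused across the whole decode loop.
let frame = AVFrame()
while let packet = readNextPacket() {              // hypothetical packet source
    try codecContext.sendPacket(packet)            // hypothetical decoder call
    while (try? codecContext.receiveFrame(frame)) != nil {
        process(frame)                             // use the decoded data
        frame.unref()                              // drop all buffer references,
                                                   // resetting the frame to its clean state
    }
}
```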

  • Creates an AVFrame and sets its fields to default values.

    Note

    This only allocates the AVFrame itself, not the data buffers. Those must be allocated through other means, e.g. with allocBuffer(align:) or manually.

    Declaration

    Swift

    public init()
  • Pointer to the picture/channel planes.

    Declaration

    Swift

    public var data: UnsafeMutableBufferPointer<UnsafeMutablePointer<UInt8>?> { get set }
  • For video, size in bytes of each picture line. For audio, size in bytes of each plane.

    For audio, only linesize[0] may be set. For planar audio, each channel plane must be the same size.

    For video, the linesizes should be multiples of the CPU's alignment preference, which is 16 or 32 for modern desktop CPUs. Some code requires such alignment, other code can be slower without correct alignment, and for yet other code it makes no difference.

    Note

    The linesize may be larger than the size of usable data – there may be extra padding present for performance reasons.

    Declaration

    Swift

    public var linesize: UnsafeMutableBufferPointer<Int32> { get set }
  • Pointers to the data planes/channels.

    For video, this should simply point to data.

    For planar audio, each channel has a separate data pointer, and linesize[0] contains the size of each channel buffer. For packed audio, there is just one data pointer, and linesize[0] contains the total size of the buffer for all channels.

    Note

    Both data and extendedData should always be set in a valid frame, but for planar audio with more channels than can fit in data, extendedData must be used in order to access all channels.

    Declaration

    Swift

    public var extendedData: UnsafeMutableBufferPointer<UnsafeMutablePointer<UInt8>?> { get set }
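A minimal sketch of reading the channel planes of a planar-audio frame through extendedData, following the layout described above (`frame` is assumed to already hold valid decoded planar audio):

```swift
// For planar audio, linesize[0] is the size of every channel plane.
let bytesPerPlane = Int(frame.linesize[0])
for ch in 0..<frame.channelCount {
    guard let plane = frame.extendedData[ch] else { continue }
    // View the raw bytes of channel `ch` without copying.
    let channelBytes = UnsafeBufferPointer(start: plane, count: bytesPerPlane)
    // ... process `channelBytes` ...
    _ = channelBytes
}
```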
  • pts

    Presentation timestamp in timebase units (time when frame should be shown to user).

    Declaration

    Swift

    public var pts: Int64 { get set }
  • dts

    DTS copied from the AVPacket that triggered returning this frame (if frame threading isn't used). This is also the presentation timestamp of this AVFrame, calculated from only AVPacket.dts values, without using pts values.

    Declaration

    Swift

    public var dts: Int64 { get }
  • Picture number in bitstream order.

    Declaration

    Swift

    public var codedPictureNumber: Int { get }
  • Picture number in display order.

    Declaration

    Swift

    public var displayPictureNumber: Int { get }
  • AVBuffer references backing the data for this frame.

    If all elements of this array are nil, then this frame is not reference counted. This array must be filled contiguously – if buffer[i] is non-nil then buffer[j] must also be non-nil for all j < i.

    There may be at most one AVBuffer per data plane, so for video this array always contains all the references. For planar audio with more than AVConstant.dataPointersNumber channels, there may be more buffers than can fit in this array. The extra AVBuffers are then stored in the extendedBuffer array.

    Declaration

    Swift

    public var buffer: [AVBuffer?] { get }
  • For planar audio which requires more than AVConstant.dataPointersNumber AVBuffer, this array will hold all the references which cannot fit into buffer.

    Note that this is different from extendedData, which always contains all the pointers. This array only contains the extra pointers, which cannot fit into buffer.

    Declaration

    Swift

    public var extendedBuffer: [AVBuffer] { get }
  • The number of elements in extendedBuffer.

    Declaration

    Swift

    public var extendedBufferCount: Int { get }
  • The frame timestamp estimated using various heuristics, in stream timebase.

    • encoding: Unused.
    • decoding: Set by libavcodec, read by user.

    Declaration

    Swift

    public var bestEffortTimestamp: Int64 { get }
  • Reordered pos from the last AVPacket that has been input into the decoder.

    • encoding: Unused.
    • decoding: Set by libavcodec, read by user.

    Declaration

    Swift

    public var pktPosition: Int64 { get }
  • Duration of the corresponding packet, expressed in AVStream.timebase units, 0 if unknown.

    • encoding: Unused.
    • decoding: Set by libavcodec, read by user.

    Declaration

    Swift

    public var pktDuration: Int64 { get }
  • Size of the corresponding packet containing the compressed frame. It is set to a negative value if unknown.

    • encoding: Unused.
    • decoding: Set by libavcodec, read by user.

    Declaration

    Swift

    public var pktSize: Int { get }
  • The metadata of the frame.

    • encoding: Set by user.
    • decoding: Set by libavcodec.

    Declaration

    Swift

    public var metadata: [String : String] { get set }
  • A Boolean value indicating whether the frame data is writable.

    true if the frame data is writable (which is true if and only if each of the underlying buffers has only one reference, namely the one stored in this frame).

    Declaration

    Swift

    public var isWritable: Bool { get }
  • Allocate new buffer(s) for audio or video data.

    The following fields must be set on the frame before calling this function: pixelFormat, width, and height for video frames; sampleFormat, sampleCount, and channelLayout for audio frames.

    This function will fill data and buffer arrays and, if necessary, allocate and fill extendedData and extendedBuffer. For planar formats, one buffer will be allocated for each plane.

    Warning

    If the frame has already been allocated, calling this function will leak memory. In addition, undefined behavior can occur in certain cases.

    Throws

    AVError

    Declaration

    Swift

    public func allocBuffer(align: Int = 0) throws

    Parameters

    align

    Required buffer size alignment. If equal to 0, alignment will be chosen automatically for the current CPU. It is highly recommended to pass 0 here unless you know what you are doing.
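A sketch of the allocation sequence for a video frame, per the requirements above. The exact pixel-format case name (`.YUV420P`) is an assumption about this wrapper's AVPixelFormat type:

```swift
let frame = AVFrame()
// These fields must be set before allocBuffer(align:) is called.
frame.pixelFormat = .YUV420P     // assumed case name
frame.width = 1920
frame.height = 1080
// align defaults to 0, letting the alignment be chosen for the current CPU.
try frame.allocBuffer()
```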

  • Set up a new reference to the data described by the source frame.

    Copy frame properties from src to dst and create a new reference for each AVBuffer from src. If src is not reference counted, new buffers are allocated and the data is copied.

    Warning

    dst must have been either unreferenced with unref(), or newly created before calling this function, or undefined behavior will occur.

    Throws

    AVError

    Declaration

    Swift

    public func ref(from src: AVFrame) throws

    Parameters

    src

    the source frame

  • Unreference all the buffers referenced by frame and reset the frame fields.

    Declaration

    Swift

    public func unref()
  • Move everything contained in src to dst and reset src.

    Warning

    dst is not unreferenced, but directly overwritten without reading or deallocating its contents. Call dst.unref() manually before calling this function to ensure that no memory is leaked.

    Declaration

    Swift

    public func moveRef(from src: AVFrame)

    Parameters

    src

    the source frame

  • Create a new frame that references the same data as src.

    This is a shortcut for init() + ref(from:).

    Declaration

    Swift

    public func clone() -> AVFrame?

    Return Value

    newly created AVFrame on success, nil on error.

  • Ensure that the frame data is writable, avoiding data copy if possible.

    Do nothing if the frame is writable, allocate new buffers and copy the data if it is not.

    Throws

    AVError

    Declaration

    Swift

    public func makeWritable() throws
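A sketch combining clone() (documented above) with makeWritable(): after cloning, both frames reference the same buffers, so the data is not writable until makeWritable() copies it.

```swift
// `frame` is assumed to be a valid, reference-counted frame.
guard let copy = frame.clone() else {
    fatalError("frame clone failed")   // allocation error
}
// The buffers are shared with `frame`, so this triggers a data copy.
try copy.makeWritable()
assert(copy.isWritable)
// ... modify copy's data without affecting `frame` ...
```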
  • Copy the frame data from src to dst.

    This function does not allocate anything; dst must already be initialized and allocated with the same parameters as src.

    This function only copies the frame data (i.e. the contents of the data / extended data arrays), not any other properties.

    Throws

    AVError

    Declaration

    Swift

    public func copy(from src: AVFrame) throws

    Parameters

    src

    the source frame

  • Copy only metadata fields from src to dst.

    Metadata for the purpose of this function are those fields that do not affect the data layout in the buffers. E.g. pts, sample rate (for audio) or sample aspect ratio (for video), but not width/height or channel layout. Side data is also copied.

    Throws

    AVError

    Declaration

    Swift

    public func copyProperties(from src: AVFrame) throws

    Parameters

    src

    the source frame

  • Get the buffer reference a given data plane is stored in.

    Declaration

    Swift

    public func planeBuffer(at plane: Int) -> AVBuffer?

    Parameters

    plane

    index of the data plane of interest in extendedData.

    Return Value

    the buffer reference that contains the plane or nil if the input frame is not valid.

  • The pixel format of the picture.

    Declaration

    Swift

    public var pixelFormat: AVPixelFormat { get set }
  • The width of the picture, in pixels.

    Declaration

    Swift

    public var width: Int { get set }
  • The height of the picture, in pixels.

    Declaration

    Swift

    public var height: Int { get set }
  • A Boolean value indicating whether this frame is a key frame.

    Declaration

    Swift

    public var isKeyFrame: Bool { get }
  • A Boolean value indicating whether the content of this frame is interlaced.

    Declaration

    Swift

    public var isInterlacedFrame: Bool { get }
  • The picture type of the frame.

    Declaration

    Swift

    public var pictureType: AVPictureType { get }
  • The sample aspect ratio for the video frame, 0/1 if unknown/unspecified.

    Declaration

    Swift

    public var sampleAspectRatio: AVRational { get set }
  • When decoding, this signals how much the picture must be delayed. extra_delay = repeat_pict / (2*fps)

    Declaration

    Swift

    public var repeatPicture: Int { get }
  • The sample format of the audio data.

    Declaration

    Swift

    public var sampleFormat: AVSampleFormat { get set }
  • The sample rate of the audio data.

    Declaration

    Swift

    public var sampleRate: Int { get set }
  • The channel layout of the audio data.

    Declaration

    Swift

    public var channelLayout: AVChannelLayout { get set }
  • The number of audio samples (per channel) described by this frame.

    Declaration

    Swift

    public var sampleCount: Int { get set }
  • The number of audio channels.

    • encoding: Unused.
    • decoding: Read by user.

    Declaration

    Swift

    public var channelCount: Int { get set }
  • Undocumented

    Declaration

    Swift

    public static let `class`: AVClass
  • Undocumented

    Declaration

    Swift

    public func withUnsafeClassObjectPointer<T>(_ body: (UnsafeMutableRawPointer) throws -> T) rethrows -> T
  • For hwaccel-format frames, this should be a reference to the AVHWFramesContext describing the frame.

    Declaration

    Swift

    public var hwFramesContext: AVHWFramesContext? { get }
  • Copy data to or from a hw surface. At least one of dst/src must have an AVHWFramesContext attached.

    If src has an AVHWFramesContext attached, then the format of dst (if set) must use one of the formats returned by AVHWFramesContext.getPixelFormats(.from). If dst has an AVHWFramesContext attached, then the format of src must use one of the formats returned by AVHWFramesContext.getPixelFormats(.to).

    dst may be clean (i.e. with data/buffer pointers unset), in which case the data buffers will be allocated by this function using allocBuffer(align:). If dst.pixelFormat is set, then this format will be used, otherwise (when dst.pixelFormat is AVPixelFormat.none) the first acceptable format will be chosen.

    The two frames must have matching allocated dimensions (i.e. equal to AVHWFramesContext.width/height), since not all device types support transferring a sub-rectangle of the whole surface. The display dimensions (i.e. AVFrame.width/height) may be smaller than the allocated dimensions, but also have to be equal for both frames. When the display dimensions are smaller than the allocated dimensions, the content of the padding in the destination frame is unspecified.

    Throws

    AVError

    Declaration

    Swift

    public func transferData(from frame: AVFrame) throws

    Parameters

    frame

    the source frame
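A sketch of the download direction described above: transferring a hardware-surface frame into a clean system-memory frame (`hwFrame` is assumed to have an AVHWFramesContext attached).

```swift
// A clean destination frame: data/buffer pointers unset, pixelFormat left as
// AVPixelFormat.none, so transferData picks the first acceptable format and
// allocates the buffers itself via allocBuffer(align:).
let swFrame = AVFrame()
try swFrame.transferData(from: hwFrame)
// swFrame now holds the surface contents in system memory.
```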