How to convert an AudioBufferList containing AAC data into a CMSampleBuffer

Asked: 2019-12-18 23:23:56

Tags: swift avassetwriter cmsamplebuffer video-toolbox audiobufferlist

I am using an AudioConverter to compress the uncompressed audio I capture from an AVCaptureSession (delivered as CMSampleBuffers) into an AudioBufferList:

let status: OSStatus = AudioConverterFillComplexBuffer(
    converter,
    inputDataProc,
    Unmanaged.passUnretained(self).toOpaque(),
    &ioOutputDataPacketSize,
    outOutputData.unsafeMutablePointer,
    nil
)
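
For context, the call above relies on an output buffer list and packet count that I set up roughly like this (a sketch; the single-buffer allocation and the 1536-byte upper bound are assumptions of mine, the real maximum can be queried via kAudioConverterPropertyMaximumOutputPacketSize):

import AudioToolbox

// Ask the converter for one AAC packet per call
var ioOutputDataPacketSize: UInt32 = 1

// Room for one compressed packet; 1536 bytes is an assumed upper bound here
let maxPacketSize = 1536
let outOutputData = AudioBufferList.allocate(maximumBuffers: 1)
outOutputData[0].mNumberChannels = 1
outOutputData[0].mDataByteSize = UInt32(maxPacketSize)
outOutputData[0].mData = UnsafeMutableRawPointer.allocate(byteCount: maxPacketSize,
                                                          alignment: MemoryLayout<UInt8>.alignment)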

My output ASBD is set up as follows:

AudioStreamBasicDescription
- mSampleRate : 44100.0
- mFormatID : 1633772320
- mFormatFlags : 2
- mBytesPerPacket : 0
- mFramesPerPacket : 1024
- mBytesPerFrame : 0
- mChannelsPerFrame : 1
- mBitsPerChannel : 0
- mReserved : 0
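
(For reference, mFormatID 1633772320 is the four-character code 'aac ', i.e. kAudioFormatMPEG4AAC, and mFormatFlags 2 corresponds to the AAC-LC object type.) The converter is created from this output ASBD plus an input ASBD; a minimal sketch follows, where the 16-bit mono PCM input format is an assumption standing in for whatever the capture session actually delivers:

import AudioToolbox

// Assumed uncompressed input from the capture session: 16-bit mono PCM at 44.1 kHz
var inputASBD = AudioStreamBasicDescription(
    mSampleRate: 44100, mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 2, mFramesPerPacket: 1, mBytesPerFrame: 2,
    mChannelsPerFrame: 1, mBitsPerChannel: 16, mReserved: 0)

// The AAC output format dumped above
var outputASBD = AudioStreamBasicDescription(
    mSampleRate: 44100, mFormatID: kAudioFormatMPEG4AAC,
    mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
    mBytesPerPacket: 0, mFramesPerPacket: 1024, mBytesPerFrame: 0,
    mChannelsPerFrame: 1, mBitsPerChannel: 0, mReserved: 0)

var converter: AudioConverterRef?
let createStatus = AudioConverterNew(&inputASBD, &outputASBD, &converter)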

I want to convert the AudioBufferList back into a CMSampleBuffer containing the compressed data, so that I can then write it to an mp4 file with AVAssetWriter (I have already worked out how to do this for video), but so far I have had little success. I tried consulting this answer, but that case involves PCM data, which does not seem to be available here.

I have access to the AudioBufferList as well as the presentationTimeStamp of the original sample. I have tried the following, but I am not sure how to calculate numSamples, or whether this approach makes sense at all:

func createCMSampleBuffer(_ data: UnsafeMutableAudioBufferListPointer, presentationTimeStamp: CMTime) -> CMSampleBuffer? {
    let numSamples = // not sure how to get this

    var status: OSStatus = noErr
    var sampleBuffer: CMSampleBuffer?
    var timing: CMSampleTimingInfo = CMSampleTimingInfo(
        duration: CMTime(value: CMTimeValue(numSamples), timescale: presentationTimeStamp.timescale),
        presentationTimeStamp: presentationTimeStamp,
        decodeTimeStamp: CMTime.invalid
    )

    status = CMSampleBufferCreate(
        allocator: kCFAllocatorDefault,
        dataBuffer: nil,
        dataReady: false,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: formatDescription,
        sampleCount: CMItemCount(numSamples),
        sampleTimingEntryCount: 1,
        sampleTimingArray: &timing,
        sampleSizeEntryCount: 0,
        sampleSizeArray: nil,
        sampleBufferOut: &sampleBuffer
    )

    guard status == noErr else {
        return nil
    }

    status = CMSampleBufferSetDataBufferFromAudioBufferList(
        sampleBuffer!,
        blockBufferAllocator: kCFAllocatorDefault,
        blockBufferMemoryAllocator: kCFAllocatorDefault,
        flags: 0,
        bufferList: data.unsafePointer
    )

    guard status == noErr else {
        return nil
    }

    return sampleBuffer
}
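
For completeness, the formatDescription referenced in there comes from the AAC output ASBD; a minimal sketch of how I imagine it being created (reusing outputASBD from the sketch above, and passing no magic cookie, which may itself be a mistake):

import CoreMedia

var formatDescription: CMAudioFormatDescription?
let fdStatus = CMAudioFormatDescriptionCreate(
    allocator: kCFAllocatorDefault,
    asbd: &outputASBD,            // the AAC output ASBD from above
    layoutSize: 0, layout: nil,
    // No magic cookie here; the converter's kAudioConverterCompressionMagicCookie
    // may need to be fetched and passed instead, an assumption I have not verified.
    magicCookieSize: 0, magicCookie: nil,
    extensions: nil,
    formatDescriptionOut: &formatDescription)

Given mFramesPerPacket = 1024 in the output ASBD, my guess is that numSamples for a single AAC packet would be 1024 (so the duration would be CMTime(value: 1024, timescale: 44100)), but I have not confirmed that this is what CMSampleBufferCreate expects here.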

In the end I did manage to create a CMSampleBuffer, but when I try to finish writing I get the following error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x174442ac0 {Error Domain=NSOSStatusErrorDomain Code=-12735 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12735), NSLocalizedDescription=The operation could not be completed}

2 Answers:

Answer 0 (score: 1):

I can share some of the research I did; maybe it will help you...

CMSampleBufferSetDataBufferFromAudioBufferList returned error: -12731

Solution: https://lists.apple.com/archives/coreaudio-api/2014/Mar/msg00008.html

Converting AudioBuffer to CMSampleBuffer with accurate CMTime

It might help you... :)

Answer 1 (score: 0):

So I have managed to make some progress (but I am still far from having everything working). Instead of constructing the CMSampleBuffer as above, I managed to get the following to (somewhat) work:

CMAudioSampleBufferCreateWithPacketDescriptions(
        allocator: kCFAllocatorDefault,
        dataBuffer: nil,
        dataReady: false,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: formatDescription!,
        sampleCount: Int(data.unsafePointer.pointee.mNumberBuffers),
        presentationTimeStamp: presentationTimeStamp,
        packetDescriptions: &packetDescriptions,
        sampleBufferOut: &sampleBuffer)

The key here was getting the packetDescriptions out of the compression step:

let packetDescriptionsPtr = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: 1)

AudioConverterFillComplexBuffer(
    converter,
    inputDataProc,
    Unmanaged.passUnretained(self).toOpaque(),
    &ioOutputDataPacketSize,
    outOutputData.unsafeMutablePointer,
    packetDescriptionsPtr
)
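
And, roughly, how I go from that pointer to the packetDescriptions array used above and then attach the actual bytes afterwards (the single-packet count and the extra CMSampleBufferSetDataBufferFromAudioBufferList call reflect my assumptions, not verified behavior):

// Assuming ioOutputDataPacketSize came back as 1 (one AAC packet per call)
var packetDescriptions = Array(UnsafeBufferPointer(start: packetDescriptionsPtr,
                                                   count: Int(ioOutputDataPacketSize)))

// ... CMAudioSampleBufferCreateWithPacketDescriptions(...) as above, then attach the data:
let attachStatus = CMSampleBufferSetDataBufferFromAudioBufferList(
    sampleBuffer!,
    blockBufferAllocator: kCFAllocatorDefault,
    blockBufferMemoryAllocator: kCFAllocatorDefault,
    flags: 0,
    bufferList: data.unsafePointer)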

The audio CMSampleBuffer now appears to be created correctly, but when I append it, the audio does not play and it introduces strange timing glitches into the video.