Decomposing a Data byte buffer into UInt32 values

Date: 2018-10-18 13:14:40

Tags: swift avcapturesession

I am capturing audio with an AVCaptureSession. In the callback function that handles the captured data, I put the stream into a Data structure (a byte buffer). It appears that Data is UInt8-based (which makes sense for a byte buffer), but I believe the stream data is UInt32.

I'm not sure which of the following I should do, and I haven't been able to get any of them to work. Can I:

  1. Convert the Data to UInt32 instead of UInt8?
  2. Read four bytes at a time from the Data to build each UInt32? (see the sketch after this list)
  3. Change the capture session to UInt8?
  4. Drop the Data structure and roll my own?
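
For reference, a minimal sketch of what option 2 could look like (my illustration, not code from the post): group the Data bytes four at a time and reinterpret them as UInt32 values. It assumes the byte count is a multiple of 4, native byte order, and the Swift 5 withUnsafeBytes API:

    // Sketch of option 2: reinterpret the raw bytes of a Data as UInt32 values.
    // Assumes data.count is a multiple of 4 and the samples use native byte order.
    func unpackUInt32(from data: Data) -> [UInt32] {
        precondition(data.count % MemoryLayout<UInt32>.size == 0, "expected whole UInt32 values")
        return data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [UInt32] in
            Array(raw.bindMemory(to: UInt32.self))
        }
    }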

My callback function is:

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

        var audioBufferList = AudioBufferList()
        var data = Data()
        var blockBuffer: CMBlockBuffer?

        // Put the sample buffer into a list of audio buffers (audioBufferList)
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: 0, blockBufferOut: &blockBuffer)
        // Extract the BufferList into an array of buffers
        let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
        // For each buffer, extract the frame. There should only be one buffer as we are recording in mono!
        for audioBuffer in buffers {
            assert(audioBuffer.mNumberChannels == 1)        // it should always be 1 for a mono channel
            let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
            data.append(frame!, count: Int(audioBuffer.mDataByteSize) / 8)
        }

        // Limit how much of the sample we pass through.
        viewDelegate?.gotSoundData(data.prefix(MAX_POINTS))
    }

gotSoundData simply passes the data from the view on to several subviews for processing:

    func addSamples(samples: Data) {
        //if (isHidden) { return }

        samples.forEach { sample in
            [...process each byte...]
        }
    }
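
The fan-out itself isn't shown in the post; a rough sketch of what it might look like (soundSubviews is a hypothetical property, not from the original) is:

    // Hypothetical fan-out in the view: forward the captured Data to each subview.
    func gotSoundData(_ data: Data) {
        for subview in soundSubviews {   // assumed collection of child views that implement addSamples(samples:)
            subview.addSamples(samples: data)
        }
    }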

I can see that Data.append has the following definition:

    mutating func append(_ bytes: UnsafePointer<UInt8>, count: Int)

1 answer:

Answer 0 (score: 1)

Meggar helped me focus on option 4: using my own structure, [Int16]. If anyone is interested in option 1, have a look at this link I found later, which extends Data to more data types: round trip Swift number type to/from Data
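
The linked approach boils down to extending Data with generic append/read helpers for numeric types. A rough sketch of the idea, in my own words and assuming native byte order (not the linked code verbatim):

    import Foundation

    extension Data {
        // Append the raw bytes of any fixed-width integer (native byte order).
        mutating func append<T: FixedWidthInteger>(value: T) {
            Swift.withUnsafeBytes(of: value) { append(contentsOf: $0) }
        }

        // Reinterpret the buffer as an array of fixed-width integers.
        // Assumes the byte count is a multiple of the element size.
        func toArray<T: FixedWidthInteger>(of type: T.Type) -> [T] {
            withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> [T] in
                Array(raw.bindMemory(to: T.self))
            }
        }
    }

With something like that in place, data.toArray(of: UInt32.self) would give the kind of value array the question was after.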

The callback function:

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var audioBufferList = AudioBufferList()
        var blockBuffer: CMBlockBuffer?

        // Put the sample buffer into a list of audio buffers (audioBufferList)
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, bufferListSizeNeededOut: nil, bufferListOut: &audioBufferList, bufferListSize: MemoryLayout<AudioBufferList>.size, blockBufferAllocator: nil, blockBufferMemoryAllocator: nil, flags: 0, blockBufferOut: &blockBuffer)
        // Extract the BufferList into an array of buffers
        let audioBuffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
        // For each buffer, extract the samples as Int16 values
        for audioBuffer in audioBuffers {
            let samplesCount = Int(audioBuffer.mDataByteSize) / MemoryLayout<Int16>.size
            let samplesPointer = audioBuffer.mData!.bindMemory(to: Int16.self, capacity: samplesCount)
            let samples = UnsafeMutableBufferPointer<Int16>(start: samplesPointer, count: samplesCount)
            // Convert to a "safe" array for ease of use in the delegate.
            var samplesArray: [Int16] = []
            for sample in samples {
                samplesArray.append(sample)
            }
            viewDelegate?.gotSample(samplesArray)
        }
    }
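
For completeness, the callback assumes a delegate shaped roughly like this (the protocol name is my guess; only gotSample appears in the post):

    // Assumed shape of the view delegate; only gotSample(_:) is named in the post.
    protocol SoundViewDelegate: AnyObject {
        func gotSample(_ samples: [Int16])
    }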

and the consuming function stays pretty much the same:

    func addSamples(samples: [Int16]) {
        samples.forEach { sample in
            [...process each Int16...]
        }
    }