Is there any way to limit AVAudioEngine's buffer size?

Time: 2019-04-17 09:04:19

Tags: ios swift avaudioengine

I can't find anywhere how to limit the output buffer size of AVAudioEngine or of a mixer node. I found the following on a raywenderlich.com tutorial, but they say the buffer size isn't guaranteed:

"installTap(onBus: 0, bufferSize: 1024, format: format) gives you access to the audio data on the mainMixerNode's output bus. You request a buffer size of 1024 bytes, but the requested size isn't guaranteed, especially if you request a buffer that's too small or too large. Apple's documentation doesn't specify what those limits are."

https://www.raywenderlich.com/5154-avaudioengine-tutorial-for-ios-getting-started
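A quick way to confirm this behaviour is to log buffer.frameLength inside the tap and compare it with the requested size. A minimal, playground-style sketch (this simplified engine setup is for illustration only, not my actual graph):

import AVFoundation

// Minimal sketch: request 1024 frames and log what the tap really delivers.
// installTap's bufferSize is only a hint, so frameLength can differ.
let engine = AVAudioEngine()
let mixer = engine.mainMixerNode
let format = mixer.outputFormat(forBus: 0)

mixer.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    print("requested 1024 frames, got \(buffer.frameLength)")
}

do {
    try engine.start()
} catch {
    print("engine start failed: \(error)")
}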

I have already tried installTap and the SetCurrentIOBufferFrameSize function below (which returns an OSStatus), but neither gets me past the buffer-size limits.

func SetCurrentIOBufferFrameSize(inAUHAL: AudioUnit, inIOBufferFrameSize: UInt32) -> OSStatus {
    // Copy the parameter so it can be passed by reference below.
    var inIOBufferFrameSize = inIOBufferFrameSize
    let propSize = UInt32(MemoryLayout<UInt32>.size)
    return AudioUnitSetProperty(inAUHAL,
                                AudioUnitPropertyID(kAudioUnitProperty_ScheduledFileBufferSizeFrames),
                                kAudioUnitScope_Global,
                                0,
                                &inIOBufferFrameSize,
                                propSize)
}
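As far as I can tell, kAudioUnitProperty_ScheduledFileBufferSizeFrames is a property of the scheduled-file player unit rather than the I/O unit, so on iOS the buffer-size request normally goes through AVAudioSession instead. A sketch of that route, converting a desired frame count into the duration the session expects (the 256-frame target is just an assumption for illustration):

import AVFoundation

// Sketch: translate a desired frame count into a preferred I/O buffer
// duration. The value is a preference, not a guarantee; the OS rounds it
// to a hardware-supported size.
func requestIOBufferFrames(_ targetFrames: Double, sampleRate: Double) {
    let session = AVAudioSession.sharedInstance()
    do {
        // e.g. 256 frames at 44100 Hz ≈ 0.0058 s
        try session.setPreferredIOBufferDuration(targetFrames / sampleRate)
        try session.setActive(true)
        // Read back the size the hardware actually granted.
        print("granted ≈ \(session.ioBufferDuration * sampleRate) frames")
    } catch {
        print("session configuration failed: \(error)")
    }
}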
func initializeEngine() {
    // Avoid integer division when computing the conversion ratio.
    sampleRateConversionRatio = Float(44100) / Float(SampleRate)

    engine = AVAudioEngine()
    _ = SetCurrentIOBufferFrameSize(inAUHAL: engine.outputNode.audioUnit!, inIOBufferFrameSize: 15)

    do {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: .defaultToSpeaker)
        try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)
        try AVAudioSession.sharedInstance().setPreferredSampleRate(Double(SampleRate))
        try AVAudioSession.sharedInstance().setPreferredInputNumberOfChannels(channelCount)
    } catch {
        assertionFailure("AVAudioSession setup error: \(error)")
    }
}
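One thing I'm unsure about: the preferred values above only take effect once the session is activated, and initializeEngine never calls setActive. A sketch of the check I would add after the do/catch:

// Sketch: activate the session, then read back the granted values.
// The setPreferred... calls are hints; the session reports the real settings.
do {
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("session activation failed: \(error)")
}
let session = AVAudioSession.sharedInstance()
print("actual sample rate: \(session.sampleRate) Hz")
print("actual IO buffer:  \(session.ioBufferDuration) s")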
func startRecording() {
    downMixer.installTap(onBus: 0, bufferSize: bufferSize, format: format) { buffer, when in
        self.serialQueue.async {
            // Convert the tapped buffer to 16 kHz mono; the capacity is
            // scaled by the sample-rate conversion ratio.
            let pcmBuffer = AVAudioPCMBuffer(pcmFormat: self.format16KHzMono,
                                             frameCapacity: AVAudioFrameCount(Float(buffer.frameCapacity) / self.sampleRateConversionRatio))
            var error: NSError? = nil

            let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
                outStatus.pointee = .haveData
                return buffer
            }

            self.formatConverter.convert(to: pcmBuffer!, error: &error, withInputFrom: inputBlock)

            if let error = error {
                print(error.localizedDescription)
            } else if let channelData = pcmBuffer!.int16ChannelData {
                // Collect the converted Int16 samples into an array.
                let channelDataPointer = channelData.pointee
                let channelDataValues = stride(from: 0,
                                               to: Int(pcmBuffer!.frameLength),
                                               by: buffer.stride).map { channelDataPointer[$0] }
                // Data(fromArray:) and toByteArray() are custom helpers
                // defined elsewhere in my project.
                let data = Data(fromArray: channelDataValues)
                let byteArray = data.toByteArray()
            }
        }
    }
}
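Since none of the above pins down the tap's buffer size, the fallback I'm considering is to re-chunk whatever the tap delivers into fixed-length buffers myself. A minimal sketch (the ChunkAccumulator name, the chunk size, and the process callback are placeholders; it assumes a non-interleaved Float32 format):

import AVFoundation

// Sketch: accumulates tap output and emits fixed-size chunks, so downstream
// code sees a constant buffer size even though the tap's size is not
// guaranteed. Assumes a deinterleaved Float32 format.
final class ChunkAccumulator {
    private let chunkFrames: AVAudioFrameCount
    private let format: AVAudioFormat
    private var pending: AVAudioPCMBuffer
    private let process: (AVAudioPCMBuffer) -> Void

    init(chunkFrames: AVAudioFrameCount, format: AVAudioFormat,
         process: @escaping (AVAudioPCMBuffer) -> Void) {
        self.chunkFrames = chunkFrames
        self.format = format
        self.pending = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: chunkFrames)!
        self.process = process
    }

    func append(_ buffer: AVAudioPCMBuffer) {
        let src = buffer.floatChannelData!
        var offset: AVAudioFrameCount = 0
        while offset < buffer.frameLength {
            let dst = pending.floatChannelData!
            let free = chunkFrames - pending.frameLength
            let n = min(free, buffer.frameLength - offset)
            // Copy n frames per channel into the pending chunk.
            for ch in 0..<Int(format.channelCount) {
                memcpy(dst[ch] + Int(pending.frameLength),
                       src[ch] + Int(offset),
                       Int(n) * MemoryLayout<Float>.size)
            }
            pending.frameLength += n
            offset += n
            if pending.frameLength == chunkFrames {
                process(pending) // emit one fixed-size chunk
                pending = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: chunkFrames)!
            }
        }
    }
}

Inside the tap I would then call accumulator.append(buffer) instead of handing buffer straight to the converter, so the conversion always runs on uniform chunks.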

0 Answers

No answers yet.