WebRTC echo cancellation on Android: buffer too small

Asked: 2013-12-23 17:56:59

标签: android webrtc aec echo-cancellation

I'm running into a problem trying to do echo cancellation on Android with WebRTC. I'm following most of the project posted Here, but I'm trying to stream directly from a remote device.

    /* Prepare AEC */
    MobileAEC aecm = new MobileAEC(null);
    aecm.setAecmMode(MobileAEC.AggressiveMode.MILD)
            .prepare();

    /* Get minimum buffer size */
    int minBufSize = AudioRecord.getMinBufferSize(HBConstants.SAMPLE_RATE,
            AudioFormat.CHANNEL_CONFIGURATION_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    int audioLength = minBufSize / 2; // number of 16-bit samples
    byte[] buf = new byte[minBufSize];

    short[] audioBuffer = new short[audioLength];
    short[] aecOut = new short[audioLength];

    /* Prepare audio track */
    AudioTrack speaker = new AudioTrack(AudioManager.STREAM_MUSIC,
            HBConstants.SAMPLE_RATE,
            AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, audioLength,
            AudioTrack.MODE_STREAM);
    speaker.play();
    isRunning = true;

    /* Loop and read the incoming network buffer. playerQueue is a
       LinkedBlockingQueue, filled elsewhere with incoming network data. */
    while (isRunning) {
        try {
            buf = playerQueue.take();

            /* Convert to a short buffer and feed the AECM */
            ByteBuffer.wrap(buf).order(ByteOrder.nativeOrder())
                    .asShortBuffer().get(audioBuffer);

            aecm.farendBuffer(audioBuffer, audioLength);
            aecm.echoCancellation(audioBuffer, null, aecOut,
                    (short) audioLength, (short) 10);

            /* Send output to the speaker */
            speaker.write(aecOut, 0, audioLength);

        } catch (Exception ie) {
            // ignored
        }

        try {
            Thread.sleep(5);
        } catch (InterruptedException e) {
            // ignored
        }
    }

When I do this, I get this exception:

12-23 17:31:11.290: W/System.err(8717): java.lang.Exception: setFarendBuffer() failed due to invalid arguments.
12-23 17:31:11.290: W/System.err(8717):     at com.android.webrtc.audio.MobileAEC.farendBuffer(MobileAEC.java:204)
12-23 17:31:11.290: W/System.err(8717):     at com.example.twodottwo.PlayerThread.run(PlayerThread.java:62)
12-23 17:31:11.290: W/System.err(8717):     at java.lang.Thread.run(Thread.java:841)

I dug into the code and found that the canceller only accepts 80 or 160 samples at a time. To accommodate this, I tried to read only 160 samples at a time, but that is smaller than the minimum buffer size of the AudioRecord object and produces an error.
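(For context, the 80/160-sample limit corresponds to the AECM's fixed 10 ms frame: 80 samples at 8 kHz or 160 samples at 16 kHz. A minimal sketch of slicing a larger capture buffer into such frames, which is the kind of splitting I'm attempting; the class and method names here are illustrative, not part of the MobileAEC wrapper:)

```java
import java.util.ArrayList;
import java.util.List;

public class FrameSplitter {
    // 10 ms at 16 kHz; use 80 for 8 kHz audio.
    public static final int FRAME_SIZE = 160;

    /** Split the first `length` samples of `buffer` into FRAME_SIZE-sample frames. */
    public static List<short[]> splitIntoFrames(short[] buffer, int length) {
        List<short[]> frames = new ArrayList<>();
        for (int off = 0; off + FRAME_SIZE <= length; off += FRAME_SIZE) {
            short[] frame = new short[FRAME_SIZE];
            System.arraycopy(buffer, off, frame, 0, FRAME_SIZE);
            frames.add(frame);
        }
        // Any tail shorter than FRAME_SIZE is dropped here; a real
        // implementation would carry it over to the next read.
        return frames;
    }
}
```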

So, to work around that, I also tried the code below, with the queue set to deliver at most 320 bytes at a time (since a short is 2 bytes):

    ShortBuffer sb = ShortBuffer.allocate(audioLength);
    int samples = audioLength / 160;
    int i = 0;
    while (i < samples) {
        buf = playerQueue.take();
        ByteBuffer.wrap(buf).order(ByteOrder.nativeOrder())
                .asShortBuffer().get(audioBuffer, 0, 160);
        aecm.farendBuffer(audioBuffer, 160);
        aecm.echoCancellation(audioBuffer, null, aecOut, (short) 160, (short) 10);
        sb.put(aecOut, 0, 160);
        i++;
    }
    speaker.write(sb.array(), 0, audioLength);

This should buffer each 160-element array and pass it to the WebRTC library for echo cancellation, but it just seems to produce random noise. I've tried changing the order of the result arrays, and it still produces random noise.

Is there a way to split the sound samples up so that the audio still comes out sounding right while giving WebRTC the frame size it wants? Or is there a way to make WebRTC accept more samples at a time? I think either would work, but at the moment I'm rather stuck.
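(One pattern I've been considering for the first option is an accumulator that absorbs arbitrary-sized network chunks and emits exact 160-sample frames, decoupling the packet size from the AEC frame size. A rough sketch under that assumption; the `FrameAccumulator` class is hypothetical, not part of any library:)

```java
public class FrameAccumulator {
    private final int frameSize;
    private short[] pending = new short[0];

    public FrameAccumulator(int frameSize) {
        this.frameSize = frameSize; // e.g. 160 for 10 ms at 16 kHz
    }

    /** Append `length` new samples; then drain with nextFrame() until it returns null. */
    public void add(short[] samples, int length) {
        short[] merged = new short[pending.length + length];
        System.arraycopy(pending, 0, merged, 0, pending.length);
        System.arraycopy(samples, 0, merged, pending.length, length);
        pending = merged;
    }

    /** Returns one full frame, or null if fewer than frameSize samples are buffered. */
    public short[] nextFrame() {
        if (pending.length < frameSize) return null;
        short[] frame = new short[frameSize];
        System.arraycopy(pending, 0, frame, 0, frameSize);
        short[] rest = new short[pending.length - frameSize];
        System.arraycopy(pending, frameSize, rest, 0, rest.length);
        pending = rest;
        return frame;
    }
}
```

The idea is that each network packet goes through `add()`, and every frame handed back by `nextFrame()` is fed to `farendBuffer`/`echoCancellation` before writing to the AudioTrack, so the AEC always sees exactly the frame size it requires.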
