Sample rate & format conversion with the LibAV API (libavresample)

Time: 2014-03-17 08:59:05

Tags: audio libav sample-rate

I am writing code that merges several audio tracks (with different formats) into a single track. When I set the encoder's sample_rate and sample_fmt to the same values as the input, I can merge the audio. But obviously not all of the input audio formats match the output format, so I have to convert them. I tried to use avresample for this purpose, but when the input and output sample_rate / sample_fmt differ, I cannot encode the output frames.

This could be done by hand (by dropping samples, interpolating, and so on), but since libav provides a conversion API, I think it can (and probably should, to keep things clean) be done automatically.
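As far as I understand from the headers and the examples I have seen, the intended libavresample flow is roughly the following. This is only a rough sketch of my understanding, with placeholder names (inFrame, outData, frame_size), not my actual code:

AVAudioResampleContext* avr = avresample_alloc_context();

av_opt_set_int(avr, "in_channel_layout",  AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(avr, "in_sample_rate",     44100,               0);
av_opt_set_int(avr, "in_sample_fmt",      AV_SAMPLE_FMT_S16,   0);
av_opt_set_int(avr, "out_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(avr, "out_sample_rate",    48000,               0);
av_opt_set_int(avr, "out_sample_fmt",     AV_SAMPLE_FMT_S16P,  0);
avresample_open(avr); // has to succeed before any conversion

// Push decoded samples in; with output == nullptr the converted samples
// are buffered inside the context.
avresample_convert(avr, nullptr, 0, 0,
                   inFrame->data, inFrame->linesize[0], inFrame->nb_samples);

// Pull fixed-size chunks back out once enough samples have accumulated.
while (avresample_available(avr) >= frame_size)
    avresample_read(avr, outData, frame_size);

avresample_close(avr);
avresample_free(&avr);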

Here is how I set up the encoder and the resampling context:

AVCodecContext* avAudioEncoder = outputAudioStream->codec;
AVCodec * audioEncoder = avcodec_find_encoder(AV_CODEC_ID_MP3);

avcodec_get_context_defaults3(avAudioEncoder, audioEncoder);

avAudioEncoder->sample_fmt = AV_SAMPLE_FMT_S16P;
avAudioEncoder->sample_rate = 48000;
avAudioEncoder->channels = 2;
avAudioEncoder->time_base.num = 1;
avAudioEncoder->time_base.den = 48000;
avAudioEncoder->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

if (outputAVFormat->oformat->flags & AVFMT_GLOBALHEADER)
{
    avAudioEncoder->flags |= CODEC_FLAG_GLOBAL_HEADER;
}

avcodec_open2(avAudioEncoder, audioEncoder, nullptr);

std::shared_ptr<AVAudioResampleContext> avAudioResampleContext(
    avresample_alloc_context(),
    [](AVAudioResampleContext* avARC)
    {
        avresample_close(avARC);
        avresample_free(&avARC);
    });

av_opt_set_int(avAudioResampleContext.get(), "in_channel_layout", 2, 0);
av_opt_set_int(avAudioResampleContext.get(), "in_sample_rate", 44100, 0);
av_opt_set_int(avAudioResampleContext.get(), "in_sample_fmt", AV_SAMPLE_FMT_S16P, 0);
av_opt_set_int(avAudioResampleContext.get(), "out_channel_layout", avAudioEncoder->channels, 0);
av_opt_set_int(avAudioResampleContext.get(), "out_sample_rate", avAudioEncoder->sample_rate, 0);
av_opt_set_int(avAudioResampleContext.get(), "out_sample_fmt", avAudioEncoder->sample_fmt, 0);
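
One thing I am not sure about in the setup above: in_channel_layout / out_channel_layout seem to expect a channel layout mask rather than a channel count, and I believe avresample_open() has to be called after setting the options and before converting. Something along these lines, though I am not certain this is the cause of my problem:

av_opt_set_int(avAudioResampleContext.get(), "in_channel_layout",  AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(avAudioResampleContext.get(), "out_channel_layout", AV_CH_LAYOUT_STEREO, 0);

if (avresample_open(avAudioResampleContext.get()) < 0)
{
    // report the error and bail out
}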

以下是我的阅读和阅读方式。编码帧

...
int result = avcodec_decode_audio4(avAudioDecoder.get(), audioFrame.get(), &isFrameAvailable, &decodingPacket);
...
if (isFrameAvailable)
{
    decodingPacket.size -= result;
    decodingPacket.data += result;

    encodeAudioFrame->format = outputAudioStream->codec->sample_fmt;
    encodeAudioFrame->channel_layout = outputAudioStream->codec->channel_layout;

    auto available = avresample_available(avAudioResampleContext.get());
    auto delay = avresample_get_delay(avAudioResampleContext.get());

    encodeAudioFrame->nb_samples = available +
        av_rescale_rnd(delay + audioFrame->nb_samples,
                       avAudioEncoder->sample_rate,
                       audioStream->codec->sample_rate,
                       AV_ROUND_ZERO);
    int linesize;
    av_samples_alloc(encodeAudioFrame->data, &linesize, avAudioEncoder->channels, encodeAudioFrame->nb_samples, avAudioEncoder->sample_fmt, 1);
    encodeAudioFrame->linesize[0] = linesize;

    avresample_convert(avAudioResampleContext.get(),
                       nullptr, encodeAudioFrame->linesize[0], encodeAudioFrame->nb_samples,
                       &audioFrame->data[0], audioFrame->linesize[0],
                       audioFrame->nb_samples * outputAudioStream->codec->channels);


    std::shared_ptr<AVPacket> outPacket(new AVPacket, [](AVPacket* p){ av_free_packet(p); delete p; });
    av_init_packet(outPacket.get());
    outPacket->data = nullptr;
    outPacket->size = 0;

    while (avresample_available(avAudioResampleContext.get()) >= encodeAudioFrame->nb_samples)
    {
        avresample_read(avAudioResampleContext.get(), &encodeAudioFrame->data[0], encodeAudioFrame->nb_samples*outputAudioStream->codec->channels);


        encodeAudioFrame->pts = av_rescale_q(++encodedAudioPts, outputAudioStream->codec->time_base, outputAudioStream->time_base);

        encodeAudioFrame->pts *= avAudioEncoder->frame_size;
        ... 
        auto ret = avcodec_encode_audio2(avAudioEncoder, outPacketPtr, encodeAudioFramePtr, &got_output);
        ...
    }
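
A related doubt about the timestamps: since the encoder time_base is 1/48000 (one tick per sample), I wonder whether the pts should simply be a running sample counter rather than a frame index multiplied by frame_size. Roughly like this, where totalSamples is a hypothetical running counter and outPacketPtr / encodeAudioFramePtr are the same pointers used above:

// count the samples already sent to the encoder, in 1/48000 units
encodeAudioFrame->pts = totalSamples;
totalSamples += encodeAudioFrame->nb_samples;

auto ret = avcodec_encode_audio2(avAudioEncoder, outPacketPtr, encodeAudioFramePtr, &got_output);
if (got_output)
{
    // rescale from the codec time base to the stream time base before muxing
    outPacketPtr->pts = av_rescale_q(outPacketPtr->pts, avAudioEncoder->time_base, outputAudioStream->time_base);
    outPacketPtr->dts = av_rescale_q(outPacketPtr->dts, avAudioEncoder->time_base, outputAudioStream->time_base);
}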

It seems I am not using avresample correctly, but I cannot figure out how to fix it. Any help would be greatly appreciated.

0 Answers:

No answers yet.