How to read samples from an audio file

Date: 2017-02-27 10:48:12

Tags: c# uwp naudio audiobuffer naudio-framework

I'm developing a UWP application (for Windows 10) that works with audio data. At the start it receives a sample buffer as an array of floats, whose items range from -1f to 1f. Previously I used NAudio.dll 1.8.0, which provided all the necessary functionality through the WaveFileReader, waveBuffer.FloatBuffer, and WaveFileWriter classes. However, when I finished the app and tried to build a Release version, I got this error: ILT0042: Arrays of pointer types are not currently supported: 'System.Int32*[]'.

Here is what I tried to solve it:

1) https://forums.xamarin.com/discussion/73169/uwp-10-build-fail-arrays-of-pointer-types-error

It suggests removing the reference to the .dll, but I need it.

2) I tried installing the same version of NAudio via Manage NuGet Packages, but WaveFileReader and WaveFileWriter are not available there.

3) In an answer from the NAudio developer (How to store a .wav file in Windows 10 with NAudio) I read about using AudioGraph, but with it I can only collect float samples during real-time playback, whereas I need the full set of samples as soon as the audio file is loaded. An example of getting samples during recording or playback: https://docs.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs

That is why I need help: how do I get a FloatBuffer of samples right after an audio file is loaded? For example, for building a waveform or computing audio effects.

Thanks in advance.

  1. I tried using FileStream and BitConverter.ToSingle(), but got results different from NAudio's. In other words, I'm still looking for a solution.

    private float[] GetBufferArray()
    {
        // NOTE: this reads the raw bytes of an MP3 file, which are compressed,
        // so interpreting them as floats does not match NAudio's decoded output
        string _path = ApplicationData.Current.LocalFolder.Path + "/track_1.mp3";
        using (FileStream _stream = new FileStream(_path, FileMode.Open))
        using (BinaryReader _binaryReader = new BinaryReader(_stream))
        {
            int _dataSize = _binaryReader.ReadInt32();
            byte[] _byteBuffer = _binaryReader.ReadBytes(_dataSize);

            int _sizeFloat = sizeof(float);
            float[] _floatBuffer = new float[_byteBuffer.Length / _sizeFloat];
            for (int i = 0, j = 0; i + _sizeFloat <= _byteBuffer.Length; i += _sizeFloat, j++)
            {
                _floatBuffer[j] = BitConverter.ToSingle(_byteBuffer, i);
            }
            return _floatBuffer;
        }
    }
    
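
For reference: BitConverter.ToSingle itself round-trips raw 32-bit float bytes exactly, so the mismatch with NAudio comes from the MP3 bytes being compressed rather than being raw samples. A minimal self-contained check (not from the original post):

```csharp
using System;

// Sanity check: BitConverter.ToSingle recovers samples exactly when the
// bytes really are raw 32-bit float PCM. The mismatch with NAudio therefore
// comes from the MP3 bytes being compressed, not from the conversion itself.
class RoundTripCheck
{
    static void Main()
    {
        float[] samples = { -1f, -0.5f, 0f, 0.5f, 1f };

        // Lay the samples out in memory the way a raw float stream stores them
        byte[] bytes = new byte[samples.Length * sizeof(float)];
        Buffer.BlockCopy(samples, 0, bytes, 0, bytes.Length);

        // Decode them back, as GetBufferArray above tries to do
        float[] restored = new float[samples.Length];
        for (int i = 0; i + sizeof(float) <= bytes.Length; i += sizeof(float))
        {
            restored[i / sizeof(float)] = BitConverter.ToSingle(bytes, i);
        }
        Console.WriteLine(restored[1] == samples[1]); // prints True
    }
}
```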

3 Answers:

Answer 0 (score: 2)

Another way to read samples from an audio file in UWP is the AudioGraph API. It works with all the audio formats that Windows 10 supports.

Here is the sample code:

namespace AudioGraphAPI_read_samples_from_file
{
    // App opens a file using FileOpenPicker and reads samples into an array
    // of floats using the AudioGraph API

// Declare COM interface to access AudioBuffer
[ComImport]
[Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
[InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
unsafe interface IMemoryBufferByteAccess
{
    void GetBuffer(out byte* buffer, out uint capacity);
}

public sealed partial class MainPage : Page
{
    StorageFile mediaFile;

    AudioGraph audioGraph;
    AudioFileInputNode fileInputNode;
    AudioFrameOutputNode frameOutputNode;

    /// <summary>
    /// We are going to fill this array with audio samples
    /// This app loads only one channel 
    /// </summary>
    float[] audioData;
    /// <summary>
    /// Current position in audioData array for loading audio samples 
    /// </summary>
    int audioDataCurrentPosition = 0;

    public MainPage()
    {
        this.InitializeComponent();            
    }

    private async void Open_Button_Click(object sender, RoutedEventArgs e)
    {
        // We ask user to pick an audio file
        FileOpenPicker filePicker = new FileOpenPicker();
        filePicker.SuggestedStartLocation = PickerLocationId.MusicLibrary;
        filePicker.FileTypeFilter.Add(".mp3");
        filePicker.FileTypeFilter.Add(".wav");
        filePicker.FileTypeFilter.Add(".wma");
        filePicker.FileTypeFilter.Add(".m4a");
        filePicker.ViewMode = PickerViewMode.Thumbnail;
        mediaFile = await filePicker.PickSingleFileAsync();

        if (mediaFile == null)
        {
            return;
        }

        // We load samples from file
        await LoadAudioFromFile(mediaFile);

        // We wait 5 sec
        await Task.Delay(5000);

        if (audioData == null)
        {
            ShowMessage("Error loading samples");
            return;
        }

        // After LoadAudioFromFile method finished we can use audioData
        // For example we can find max amplitude
        float max = audioData[0];
        for (int i = 1; i < audioData.Length; i++)
            if (Math.Abs(audioData[i]) > Math.Abs(max))
                max = audioData[i];
        ShowMessage("Maximum is " + max.ToString());
    }

    private async void ShowMessage(string Message)
    {
        var dialog = new MessageDialog(Message);
        await dialog.ShowAsync();
    }

    private async Task LoadAudioFromFile(StorageFile file)
    {
        // We initialize an instance of AudioGraph
        AudioGraphSettings settings = 
            new AudioGraphSettings(
                Windows.Media.Render.AudioRenderCategory.Media
                );
        CreateAudioGraphResult result1 = await AudioGraph.CreateAsync(settings);
        if (result1.Status != AudioGraphCreationStatus.Success)
        {
            ShowMessage("AudioGraph creation error: " + result1.Status.ToString());
        }
        audioGraph = result1.Graph;

        if (audioGraph == null)
            return;

        // We initialize FileInputNode
        CreateAudioFileInputNodeResult result2 = 
            await audioGraph.CreateFileInputNodeAsync(file);
        if (result2.Status != AudioFileNodeCreationStatus.Success)
        {
            ShowMessage("FileInputNode creation error: " + result2.Status.ToString());
        }
        fileInputNode = result2.FileInputNode;

        if (fileInputNode == null)
            return;

        // We read audio file encoding properties to pass them to FrameOutputNode creator
        AudioEncodingProperties audioEncodingProperties = fileInputNode.EncodingProperties;

        // We initialize FrameOutputNode and connect it to fileInputNode
        frameOutputNode = audioGraph.CreateFrameOutputNode(audioEncodingProperties);
        fileInputNode.AddOutgoingConnection(frameOutputNode);

        // We add a handler for when the end of the file is reached
        fileInputNode.FileCompleted += FileInput_FileCompleted;
        // We add a handler which will transfer every audio frame into audioData 
        audioGraph.QuantumStarted += AudioGraph_QuantumStarted;

        // We initialize audioData
        int numOfSamples = (int)Math.Ceiling(
            (decimal)0.0000001
            * fileInputNode.Duration.Ticks
            * fileInputNode.EncodingProperties.SampleRate
            );
        audioData = new float[numOfSamples];

        audioDataCurrentPosition = 0;

        // We start the process which will read the audio file frame by frame
        // and will raise QuantumStarted events when a frame is in memory
        audioGraph.Start();

    }

    private void FileInput_FileCompleted(AudioFileInputNode sender, object args)
    {
        audioGraph.Stop();
    }

    private void AudioGraph_QuantumStarted(AudioGraph sender, object args)
    {
        AudioFrame frame = frameOutputNode.GetFrame();
        ProcessInputFrame(frame);

    }

    unsafe private void ProcessInputFrame(AudioFrame frame)
    {
        using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
        using (IMemoryBufferReference reference = buffer.CreateReference())
        {
            // We get data from current buffer
            ((IMemoryBufferByteAccess)reference).GetBuffer(
                out byte* dataInBytes,
                out uint capacityInBytes
                );
            // We discard first frame; it's full of zeros because of latency
            if (audioGraph.CompletedQuantumCount == 1) return;

            float* dataInFloat = (float*)dataInBytes;
            uint capacityInFloat = capacityInBytes / sizeof(float);
            // Number of channels defines step between samples in buffer
            uint step = fileInputNode.EncodingProperties.ChannelCount;
            // We transfer audio samples from buffer into audioData
            for (uint i = 0; i < capacityInFloat; i += step)
            {
                if (audioDataCurrentPosition < audioData.Length)
                {
                    audioData[audioDataCurrentPosition] = dataInFloat[i];
                    audioDataCurrentPosition++;
                }
            }
        }
    }
}

}

EDITED: this solves the problem, since it reads samples from a file into an array of floats.

Answer 1 (score: 1)

Import statements:

using NAudio.Wave;
using NAudio.Wave.SampleProviders;

Inside a function:

AudioFileReader reader = new AudioFileReader(filename);
ISampleProvider isp = reader.ToSampleProvider();
// AudioFileReader converts to 32-bit float, so Length is in bytes of the
// float stream: 4 bytes per sample
float[] buffer = new float[reader.Length / 4];
isp.Read(buffer, 0, buffer.Length);

The buffer array will contain 32-bit IEEE float samples. This uses the NAudio NuGet package in Visual Studio.
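
One caveat worth noting (not from the original answer): a single ISampleProvider.Read call is not guaranteed to fill the whole buffer, so looping until Read returns 0 is more robust. A sketch, with "track.wav" as a placeholder path:

```csharp
using System.Collections.Generic;
using NAudio.Wave;

// Sketch: drain every float sample from a file by looping over Read.
// The path is a placeholder; AudioFileReader decodes to 32-bit IEEE float.
class ReadAllSamples
{
    static List<float> Load(string path)
    {
        var samples = new List<float>();
        using (var reader = new AudioFileReader(path))
        {
            ISampleProvider isp = reader.ToSampleProvider();
            float[] chunk = new float[reader.WaveFormat.SampleRate]; // ~1 s of mono
            int read;
            while ((read = isp.Read(chunk, 0, chunk.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                    samples.Add(chunk[i]); // samples are interleaved by channel
            }
        }
        return samples;
    }
}
```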

Answer 2 (score: 0)

First popular way of getting AudioData from a wav file.

Thanks to PI's answer (How to read the data in a wav file to an array), I solved the problem of reading a wav file into a float array in a UWP project. But when the wav file is recorded with AudioGraph, its structure differs from the standard one (possibly only in my project). This leads to unpredictable results: instead of the expected format chunk ID 544501094 ("fmt "), I received the value 1263424842, and all the following values were read incorrectly. I found the correct ID by searching through the bytes one by one. I realized that AudioGraph adds an extra chunk of data to the recorded wav file, but the recording format is still PCM. This extra chunk looks like data about the file format, but it also contains empty values, empty bytes. I couldn't find any information about this; maybe someone here knows? (For what it's worth, 1263424842 is the little-endian ASCII for "JUNK", a standard RIFF padding chunk.) I adapted PI's solution to my needs. This is what I have:

    using (FileStream fs = File.Open(filename, FileMode.Open))
    {
        BinaryReader reader = new BinaryReader(fs);

        // Standard RIFF header
        int chunkID = reader.ReadInt32();
        int fileSize = reader.ReadInt32();
        int riffType = reader.ReadInt32();
        int fmtID;

        // AudioGraph inserts an extra chunk before "fmt ", so we scan
        // byte by byte until we find the "fmt " chunk ID (544501094)
        long _position = reader.BaseStream.Position;
        while (_position != reader.BaseStream.Length - 1)
        {
            reader.BaseStream.Position = _position;
            int _fmtId = reader.ReadInt32();
            if (_fmtId == 544501094)
            {
                fmtID = _fmtId;
                break;
            }
            _position++;
        }

        int fmtSize = reader.ReadInt32();
        int fmtCode = reader.ReadInt16();

        int channels = reader.ReadInt16();
        int sampleRate = reader.ReadInt32();
        int byteRate = reader.ReadInt32();
        int fmtBlockAlign = reader.ReadInt16();
        int bitDepth = reader.ReadInt16();

        // Skip the optional extra format bytes
        if (fmtSize == 18)
        {
            int fmtExtraSize = reader.ReadInt16();
            reader.ReadBytes(fmtExtraSize);
        }

        int dataID = reader.ReadInt32();
        int dataSize = reader.ReadInt32();

        byte[] byteArray = reader.ReadBytes(dataSize);

        int bytesForSamp = bitDepth / 8;
        int samps = dataSize / bytesForSamp;

        float[] asFloat = null;
        switch (bitDepth)
        {
            case 16:
                Int16[] asInt16 = new Int16[samps];
                Buffer.BlockCopy(byteArray, 0, asInt16, 0, dataSize);
                IEnumerable<float> tempInt16 =
                    from i in asInt16
                    select i / (float)Int16.MaxValue;
                asFloat = tempInt16.ToArray();
                break;
            default:
                return false;
        }

        // For one-channel wav audio
        floatLeftBuffer.AddRange(asFloat);
    }

Recording from a buffer into a file uses the reverse algorithm. At the moment this is the only correct algorithm I have for working with wav files that gives access to the audio data. AudioGraph is used as described in this article: https://docs.microsoft.com/ru-ru/windows/uwp/audio-video-camera/audio-graphs. Note that you can set the necessary recording-format data (from the mic to a file) via AudioEncodingQuality.
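
The "reverse algorithm" mentioned above could be sketched like this: writing a float buffer back out as a minimal 16-bit PCM mono wav file. The chunk IDs are the same little-endian int32 values the reader scans for ("fmt " == 544501094); the sample rate and output path are illustrative assumptions, not values from the original post.

```csharp
using System;
using System.IO;

// Sketch: write floats in [-1, 1] as a minimal 16-bit PCM mono wav file.
public static class WavWriterSketch
{
    public static void WriteWav(string path, float[] samples, int sampleRate)
    {
        using (var writer = new BinaryWriter(File.Create(path)))
        {
            int dataSize = samples.Length * 2;  // 2 bytes per 16-bit sample
            writer.Write(0x46464952);           // "RIFF"
            writer.Write(36 + dataSize);        // remaining file size
            writer.Write(0x45564157);           // "WAVE"
            writer.Write(0x20746D66);           // "fmt " (== 544501094)
            writer.Write(16);                   // fmt chunk size for plain PCM
            writer.Write((short)1);             // PCM format code
            writer.Write((short)1);             // one channel
            writer.Write(sampleRate);
            writer.Write(sampleRate * 2);       // byte rate = rate * block align
            writer.Write((short)2);             // block align (mono, 16-bit)
            writer.Write((short)16);            // bit depth
            writer.Write(0x61746164);           // "data"
            writer.Write(dataSize);
            foreach (float s in samples)
            {
                // Clamp to [-1, 1] and scale back to the Int16 range
                float clamped = Math.Max(-1f, Math.Min(1f, s));
                writer.Write((short)(clamped * Int16.MaxValue));
            }
        }
    }
}
```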

Second way of getting AudioData, using NAudio from the NuGet package.

I used the MediaFoundationReader class.

    float[] floatBuffer;
    using (MediaFoundationReader media = new MediaFoundationReader(path))
    {
        // Wave16ToFloatProvider turns 16-bit samples into 32-bit floats,
        // so the converted stream is twice as long in bytes
        int _byteBuffer32_length = (int)media.Length * 2;
        int _floatBuffer_length = _byteBuffer32_length / sizeof(float);

        IWaveProvider stream32 = new Wave16ToFloatProvider(media);
        WaveBuffer _waveBuffer = new WaveBuffer(_byteBuffer32_length);
        stream32.Read(_waveBuffer, 0, _byteBuffer32_length);
        floatBuffer = new float[_floatBuffer_length];

        for (int i = 0; i < _floatBuffer_length; i++)
        {
            floatBuffer[i] = _waveBuffer.FloatBuffer[i];
        }
    }

Comparing the two ways, I noticed:

  • The sample values obtained differ by about 1/1,000,000. I don't know which way is more precise (if you know, I'd be glad to hear);
  • The second way of getting AudioData also works with MP3 files.
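
A possible explanation for the ~1/1,000,000 difference (an assumption, since it hinges on NAudio's Wave16ToFloatProvider dividing by 32768 rather than by Int16.MaxValue): the two divisors differ by one part in 32768, which for mid-range sample values shifts the result by roughly 1e-6. A quick check:

```csharp
using System;

// Compares the two 16-bit-to-float normalizations used by the two methods.
// Method 1 divides by Int16.MaxValue (32767); NAudio's Wave16ToFloatProvider
// is assumed here to divide by 32768.
class NormalizationDiff
{
    static void Main()
    {
        short sample = 1000; // an arbitrary mid-range 16-bit sample
        float byMaxValue = sample / (float)Int16.MaxValue;
        float byPowerOfTwo = sample / 32768f;
        // The gap is sample / (32767 * 32768), about 9.3e-7 for this value
        Console.WriteLine(Math.Abs(byMaxValue - byPowerOfTwo) < 1e-6f); // prints True
    }
}
```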

If you find any mistakes or have comments on this, you're welcome to share them.