Playing a beep at a given frequency and decibel level in iPhone

Date: 2012-06-08 06:17:08

Tags: iphone avaudioplayer frequency beep

I have been researching how to play a beep in iPhone at a frequency and decibel level that I specify.

Links I have referred to:

http://developer.apple.com/library/ios/#samplecode/MusicCube/Introduction/Intro.html#//apple_ref/doc/uid/DTS40008978

http://www.politepix.com/2010/06/18/decibel-metering-from-an-iphone-audio-unit/

http://atastypixel.com/blog/using-remoteio-audio-unit/

How to play a sound of particular frequency and framework not found AudioUnit question

I also use Flite in my application for text-to-speech.

May I know whether it is possible to play a frequency-related beep at a given decibel level in iPhone using Flite?

I understand that Flite creates an audio file from the input (controlled only by the pitch, variance, speed, and the given string) and plays it through AVAudioPlayer once it has been created.

But it has no custom method to set the frequency and the decibel level.

So can anyone suggest a good approach for this on iPhone?

Any help with this question is appreciated.

Thanks.

1 Answer:

Answer 0 (score: 0)

This class allows you to emit a beep at a given frequency and a given amplitude. It uses AudioQueues from AudioToolbox.framework. It is only a sketch, and many things should be improved, but the mechanism that creates the signal works.

If you look at the @interface, you will see that usage is very simple:

#import <AudioToolbox/AudioToolbox.h>
#define TONE_SAMPLERATE 44100.

@interface Tone : NSObject {
    AudioQueueRef queue;
    AudioQueueBufferRef buffer;
    BOOL rebuildBuffer;
}
@property (nonatomic, assign) NSUInteger frequency;
@property (nonatomic, assign) CGFloat dB;

- (void)play;
- (void)pause;
@end


@implementation Tone
@synthesize dB=_dB,frequency=_frequency;

void handleBuffer(void *inUserData,
                  AudioQueueRef inAQ,
                  AudioQueueBufferRef inBuffer);

#pragma mark - Initialization and deallocation -

- (id)init
{
    if ((self=[super init])) {

        _dB=0.;
        _frequency=440;
        rebuildBuffer=YES;

        // TO DO: handle AudioQueueXYZ's failures!!

        // create a descriptor containing a LPCM, mono, float format
        AudioStreamBasicDescription desc;

        desc.mSampleRate=TONE_SAMPLERATE;
        desc.mFormatID=kAudioFormatLinearPCM;
        desc.mFormatFlags=kLinearPCMFormatFlagIsFloat;
        desc.mBytesPerPacket=sizeof(float);
        desc.mFramesPerPacket=1;
        desc.mBytesPerFrame=sizeof(float);
        desc.mChannelsPerFrame=1;
        desc.mBitsPerChannel=8*sizeof(float);

        // create a new queue
        AudioQueueNewOutput(&desc,
                            &handleBuffer,
                            self,
                            CFRunLoopGetCurrent(),
                            kCFRunLoopCommonModes,
                            0,
                            &queue);

        // and its buffer, ready to hold 1" of data
        AudioQueueAllocateBuffer(queue,
                                 sizeof(float)*TONE_SAMPLERATE,
                                 &buffer);

        // create the buffer and enqueue it
        handleBuffer(self, queue, buffer);

    }
    return self;
}

- (void)dealloc
{
    AudioQueueStop(queue, YES);
    AudioQueueFreeBuffer(queue, buffer);
    AudioQueueDispose(queue, YES);

    [super dealloc];
}

#pragma mark - Main function -

void handleBuffer(void *inUserData,
                AudioQueueRef inAQ,
                AudioQueueBufferRef inBuffer) {

    // this function takes care of building the buffer and enqueuing it.

    // cast inUserData type to Tone
    Tone *tone=(Tone *)inUserData;

    // check if the buffer must be rebuilt
    if (tone->rebuildBuffer) {

        // precompute some useful qtys
        float *data=inBuffer->mAudioData;
        NSUInteger max=inBuffer->mAudioDataBytesCapacity/sizeof(float);

        // multiplying the argument by 2pi changes the period of the cosine
        //  function to 1s (instead of 2pi). then we must divide by the sample
        //  rate to get TONE_SAMPLERATE samples in one period.
        CGFloat unit=2.*M_PI/TONE_SAMPLERATE;
        // this is the amplitude converted from dB to a linear scale
        CGFloat amplitude=pow(10., tone.dB*.05);

        // loop and simply set data[i] to the value of cos(...)
        for (NSUInteger i=0; i<max; ++i)
            data[i]=(float)(amplitude*cos(unit*(CGFloat)(tone.frequency*i)));

        // inform the queue that we have filled the buffer
        inBuffer->mAudioDataByteSize=sizeof(float)*max;

        // and set flag
        tone->rebuildBuffer=NO;
    }

    // reenqueue the buffer
    AudioQueueEnqueueBuffer(inAQ,
                            inBuffer,
                            0,
                            NULL);

    /* TO DO: the transition between two adjacent buffers (the same one actually)
              generates a "tick", even if the adjacent buffers represent a continuous signal.
              maybe using two buffers instead of one would fix it.
     */
}

#pragma mark - Properties and methods -

- (void)play
{
    // generate an AudioTimeStamp with "0" simply!
    //  (copied from FillOutAudioTimeStampWithSampleTime)

    AudioTimeStamp time;

    time.mSampleTime=0.;
    time.mRateScalar=0.;
    time.mWordClockTime=0.;
    memset(&time.mSMPTETime, 0, sizeof(SMPTETime));
    time.mFlags = kAudioTimeStampSampleTimeValid;

    // TO DO: maybe it could be useful to check AudioQueueStart's return value
    AudioQueueStart(queue, &time);
}

- (void)pause
{
    // TO DO: maybe it could be useful to check AudioQueuePause's return value
    AudioQueuePause(queue);
}

- (void)setFrequency:(NSUInteger)frequency
{
    if (_frequency!=frequency) {
        _frequency=frequency;

        // we need to update the buffer (as soon as it stops playing)
        rebuildBuffer=YES;
    }
}

- (void)setDB:(CGFloat)dB
{
    if (dB!=_dB) {
        _dB=dB;

        // we need to update the buffer (as soon as it stops playing)
        rebuildBuffer=YES;
    }
}

@end
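
For illustration, here is a minimal usage sketch of the class above (the 880 Hz / -6 dB values are arbitrary examples, and manual reference counting is assumed, matching the dealloc above):

    // create a tone, configure it and start playback
    Tone *tone = [[Tone alloc] init];

    tone.frequency = 880;   // oscillation frequency in Hz
    tone.dB = -6.;          // roughly half of the full-scale amplitude

    [tone play];

    // ... later, stop the beep and release the object (manual reference counting)
    [tone pause];
    [tone release];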
  • The class produces a cosine waveform oscillating at the given integer frequency (amplitude * cos(2π * frequency * t)); the whole work is done by void handleBuffer(...), using an AudioQueue with a linear PCM, mono, float @ 44.1kHz format. To change the shape of the signal you just need to change that line. For example, the following code will produce a square wave:

    float x = fmodf(unit*(CGFloat)(tone.frequency*i), 2 * M_PI);
    data[i] = amplitude * (x > M_PI ? -1.0 : 1.0);
    
  • For floating-point frequencies, consider that there is not necessarily an integer number of oscillations in one second of audio data, so the represented signal is discontinuous at the junction between two buffers and produces a strange 'tick'. For example, you could set fewer samples so that the junction falls at the end of a period of the signal (a sketch of this idea follows after this list).

  • As Paul R pointed out, you should calibrate the hardware first to get a reliable conversion between the value you set in your implementation and the sound produced by the device. Actually, the floating-point samples generated by this code range from -1 to 1, so I have simply converted the amplitude value into dB (20 * log_10(amplitude)); a small helper illustrating this mapping also follows after the list.
  • Take a look at the comments for other details of the implementation and the "known limitations" (all those "TO DO"s). The functions used are well documented in Apple's reference.
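
A hedged sketch of the "fewer samples" idea from the second point above, assuming the frequency property were changed to a floating-point type; it would go inside handleBuffer in place of the line that sets mAudioDataByteSize. It only reduces the discontinuity, because a period generally does not contain an exact integer number of samples:

    // `freq` is the (floating-point) frequency, `max` is the buffer capacity
    // in samples, as computed earlier in handleBuffer
    double freq = (double)tone.frequency;

    // how many complete periods fit in the buffer, rounded down
    NSUInteger periods = (NSUInteger)floor(max * freq / TONE_SAMPLERATE);

    // the number of samples covering exactly those periods (approximately,
    // because of the integer rounding)
    NSUInteger usedSamples = (NSUInteger)round(periods * TONE_SAMPLERATE / freq);

    // enqueue only those samples, so the next buffer starts near the
    // beginning of a cycle instead of in the middle of one
    inBuffer->mAudioDataByteSize = sizeof(float) * usedSamples;

And a small helper illustrating the dB/amplitude mapping mentioned in the third point (this is dB relative to full scale, i.e. to samples in [-1, 1], not calibrated sound pressure; the function names are made up for this sketch):

    // convert a linear amplitude in (0, 1] to dB full scale, and back
    static inline CGFloat amplitudeToDB(CGFloat amplitude) { return 20. * log10(amplitude); }
    static inline CGFloat dBToAmplitude(CGFloat dB)        { return pow(10., dB / 20.); }

    // e.g. amplitudeToDB(0.5) is about -6.02, and dBToAmplitude(-20.) is 0.1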