AudioToolBox

一、声音的基本知识

1.声音是由物体震动而产生的,当演奏乐其拍打一扇门或者敲击桌面时,它们的震动都会引起空气有节奏的震动,使周围的空气产生疏密变化,形成疏密相间的纵波,由此产生了声波。
2.声波的三要素是频率、振幅和波形,频率代表音阶的高低,振幅代表响度,波形代表音色。
3.人类耳朵的听力有一个频率范围,大约是20Hz~20kHz。
4.声音在真空中是无法传播的。
5.模拟信号数字化的三个步骤:采样、量化和编码。
采样就是在时间轴上对信号进行数字化
量化是指在幅度轴上对信号进行数字化
编码就是按照一定的格式记录采样和量化后的数字数据

二、音频编码

音频压缩编码的基本指标之一是压缩比,压缩比通常小于1(否则就没有必要去做压缩,因为压缩就是要减小数据容量)。压缩算法包括有损压缩和无损压缩。无损压缩是指解压后的数据可以完全复原。在常用的压缩格式中,用得较多的是有损压缩,有损压缩是指解压后的数据不能完全复原,会丢失一部分信息,压缩比越小,丢失的信息就越多,信号还原后的失真就会越大。

压缩编码的原理实际上是压缩掉冗余信号,冗余信号是指不能被人耳感知到的信号,包含人耳听觉范围之外的音频信号以及被掩蔽掉的音频信号等。被掩蔽掉的音频信号则主要是因为人耳的掩蔽效应 ,主要表现为频域掩蔽效应与时域掩蔽效应。

下面介绍几种常见的压缩编码格式。
(1)WAV编码
PCM(脉冲编码调制)是Pluse Code Modulation的缩写。WAV编码的一种实现(有种实现方式,但是都不会进行压缩操作)就是在PCM数据格式的前面加上44字节,分别用来描述PCM的采样率、声道数、数据格式等信息。
特点:音质非常好,大量软件都支持
适用场合:多媒体开发的中间文件、保存音乐和音效素材。

(2)MP3编码
MP3具有不错的压缩比,使用LAME编码(MP3编码格式的一种实现)的中高码率的MP3文件,听感上非常接近源WAV文件,当然在不同的应用场景下,应该调整合适的参数以达到最好的效果。
特点:音质在128Kbit/s以上表现还不错,压缩比比较高,大量软件和硬件都支持,兼容性好。
适用场合:高比特率下对兼容性有要求的音乐欣赏。

(3)AAC编码
AAC是新一代的音频有损压缩技术,它通过一些附加的编码技术(比如PS、SBR等),衍生出了LC-AAC、HE-ACC、HE-AAC v2三种主要的编码格式。LC-AAC是比较传统的AAC,相对而言,其主要应用于中高码率场景的编码(>=80Kbit/s);HE-AAC(相当于AAC+SBR)主要应用于中低码率场景的编码(<=80Kbit/s);而新近推出的HE-AAC v2(相当于AAC+SBR+PS)主要应用于低码率场景的编码(<=48Kbit/s)。
特点:在小于128Kbit/s的码率下表现优异,并且多用于视频中的音频编码。
使用场合:128Kbit/s以下的音频编码,多用于视频中音频编码。

(4)Ogg编码
Ogg是一种非常有潜力的编码,在各种码率下都有比较优秀的表现,尤其是在中低码率场景下。Ogg除了音质好之外,还是完全免费的,这为Ogg获得更多的支持打好了基础。Ogg有着出色的算法,可以用更小的码率达到更好的音质,128Kbit/s的Ogg比192Kbit/s甚至更高码率的MP3还要出色。但目前因为还没有媒体服务软件的支持,因此基于Ogg的数字广播还无法实现。Ogg目前受支持的情况还不够好,无论是软件上还是硬件上的支持,都无法和MP3相提并论。
特点:可以用比MP3更小的码率实现比MP3更好的音质,高中低码率下均有良好的表现,兼容性不够好,流媒体特性不支持。
使用场合:语音聊天的音频消息场景。

三、AudioToolBox在音频的编码解码中的应用

1.编码
CCAudioEncoder.h

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
@class CCAudioConfig;

/**AAC编码器代理*/
@protocol CCAudioEncoderDelegate <NSObject>
- (void)audioEncodeCallback:(NSData *)aacData;
@end

/**AAC硬编码器 (编码和回调均在异步队列执行)*/
@interface CCAudioEncoder : NSObject

/**编码器配置*/
@property (nonatomic, strong) CCAudioConfig *config;
@property (nonatomic, weak) id<CCAudioEncoderDelegate> delegate;

/**初始化传入编码器配置*/
- (instancetype)initWithConfig:(CCAudioConfig*)config;

/**编码*/
- (void)encodeAudioSamepleBuffer: (CMSampleBufferRef)sampleBuffer;
@end

CCAudioEncoder.m

#import "CCAudioEncoder.h"
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import "CCAVConfig.h"
@interface CCAudioEncoder()

@property (nonatomic, strong) dispatch_queue_t encoderQueue;
@property (nonatomic, strong) dispatch_queue_t callbackQueue;

//对音频转换器对象
@property (nonatomic, unsafe_unretained) AudioConverterRef audioConverter;
//PCM缓存区
@property (nonatomic) char *pcmBuffer;
//PCM缓存区大小
@property (nonatomic) size_t pcmBufferSize;

@end

@implementation CCAudioEncoder

//编码器回调函数
static OSStatus aacEncodeInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData) {
    
    //获取self
    CCAudioEncoder *aacEncoder = (__bridge CCAudioEncoder *)(inUserData);
   
    //判断pcmBuffsize大小
    if (!aacEncoder.pcmBufferSize) {
        *ioNumberDataPackets = 0;
        return  - 1;
    }
    
    //填充
    ioData->mBuffers[0].mData = aacEncoder.pcmBuffer;
    ioData->mBuffers[0].mDataByteSize = (uint32_t)aacEncoder.pcmBufferSize;
    ioData->mBuffers[0].mNumberChannels = (uint32_t)aacEncoder.config.channelCount;
    
    //填充完毕,则清空数据
    aacEncoder.pcmBufferSize = 0;
    *ioNumberDataPackets = 1;
    return noErr;
}

#pragma mark --initConfig
- (instancetype)initWithConfig:(CCAudioConfig*)config {
    self = [super init];
    if (self) {
        //音频编码队列
        _encoderQueue = dispatch_queue_create("aac hard encoder queue", DISPATCH_QUEUE_SERIAL);
        //音频回调队列
        _callbackQueue = dispatch_queue_create("aac hard encoder callback queue", DISPATCH_QUEUE_SERIAL);
        //音频转换器
        _audioConverter = NULL;
        _pcmBufferSize = 0;
        _pcmBuffer = NULL;
        _config = config;
        if (config == nil) {
            _config = [[CCAudioConfig alloc] init];
        }
        
    }
    return self;
}

//音频编码(当AVFoundation捕获到音频内容之后)
- (void)encodeAudioSamepleBuffer: (CMSampleBufferRef)sampleBuffer {
    CFRetain(sampleBuffer);
    
    //1.判断音频转换器是否创建成功.如果未创建成功.则配置音频编码参数且创建转码器
    if (!_audioConverter) {
        [self setupEncoderWithSampleBuffer:sampleBuffer];
    }
    
    //2.来到音频编码异步队列
    dispatch_async(_encoderQueue, ^{
        
        //3.获取CMBlockBuffer, 这里面保存了PCM数据
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        CFRetain(blockBuffer);
        //4.获取BlockBuffer中音频数据大小以及音频数据地址
        OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
        //5.判断status状态
        NSError *error = nil;
        if (status != kCMBlockBufferNoErr) {
            error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
            NSLog(@"Error: ACC encode get data point error: %@",error);
            return;
        }
        //2.设置_aacBuffer 为0
        //开辟_pcmBuffsize大小的pcm内存空间
        uint8_t *pcmBuffer = malloc(_pcmBufferSize);
        //将_pcmBufferSize数据set到pcmBuffer中.
        memset(pcmBuffer, 0, _pcmBufferSize);
        
        //3.输出buffer
        /*
         typedef struct AudioBufferList {
         UInt32 mNumberBuffers;
         AudioBuffer mBuffers[1];
         } AudioBufferList;
         
         struct AudioBuffer
         {
         UInt32              mNumberChannels;
         UInt32              mDataByteSize;
         void* __nullable    mData;
         };
         typedef struct AudioBuffer  AudioBuffer;
         */
        //将pcmBuffer数据填充到outAudioBufferList 对象中
        AudioBufferList outAudioBufferList = {0};
        outAudioBufferList.mNumberBuffers = 1;
        outAudioBufferList.mBuffers[0].mNumberChannels = (uint32_t)_config.channelCount;
        outAudioBufferList.mBuffers[0].mDataByteSize = (UInt32)_pcmBufferSize;
        outAudioBufferList.mBuffers[0].mData = pcmBuffer;
        
        //输出包大小为1
        UInt32 outputDataPacketSize = 1;
        
        //配置填充函数,获取输出数据
        //转换由输入回调函数提供的数据
        /*
         参数1: inAudioConverter 音频转换器
         参数2: inInputDataProc 回调函数.提供要转换的音频数据的回调函数。当转换器准备好接受新的输入数据时,会重复调用此回调.
         参数3: inInputDataProcUserData
         参数4: inInputDataProcUserData,self
         参数5: ioOutputDataPacketSize,输出缓冲区的大小
         参数6: outOutputData,需要转换的音频数据
         参数7: outPacketDescription,输出包信息
         */
        status = AudioConverterFillComplexBuffer(_audioConverter, aacEncodeInputDataProc, (__bridge void * _Nullable)(self), &outputDataPacketSize, &outAudioBufferList, NULL);
        
        if (status == noErr) {
            //获取数据
            NSData *rawAAC = [NSData dataWithBytes: outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
            //释放pcmBuffer
            free(pcmBuffer);
            //添加ADTS头,想要获取裸流时,请忽略添加ADTS头,写入文件时,必须添加
            //            NSData *adtsHeader = [self adtsDataForPacketLength:rawAAC.length];
            //            NSMutableData *fullData = [NSMutableData dataWithCapacity:adtsHeader.length + rawAAC.length];;
            //            [fullData appendData:adtsHeader];
            //            [fullData appendData:rawAAC];
            //将数据传递到回调队列中
            dispatch_async(_callbackQueue, ^{
                [_delegate audioEncodeCallback:rawAAC];
            });
        } else {
            error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        }
        
        CFRelease(blockBuffer);
        CFRelease(sampleBuffer);
        if (error) {
            NSLog(@"error: AAC编码失败 %@",error);
        }
    });
}

//配置音频编码参数
- (void)setupEncoderWithSampleBuffer: (CMSampleBufferRef)sampleBuffer {
    
    //获取输入参数
    AudioStreamBasicDescription inputAduioDes = *CMAudioFormatDescriptionGetStreamBasicDescription( CMSampleBufferGetFormatDescription(sampleBuffer));
    
    //设置输出参数
    AudioStreamBasicDescription outputAudioDes = {0};
    outputAudioDes.mSampleRate = (Float64)_config.sampleRate;       //采样率
    outputAudioDes.mFormatID = kAudioFormatMPEG4AAC;                //输出格式
    outputAudioDes.mFormatFlags = kMPEG4Object_AAC_LC;              // 如果设为0 代表无损编码
    outputAudioDes.mBytesPerPacket = 0;                             //自己确定每个packet 大小
    outputAudioDes.mFramesPerPacket = 1024;                         //每一个packet帧数 AAC-1024;
    outputAudioDes.mBytesPerFrame = 0;                              //每一帧大小
    outputAudioDes.mChannelsPerFrame = (uint32_t)_config.channelCount; //输出声道数
    outputAudioDes.mBitsPerChannel = 0;                             //数据帧中每个通道的采样位数。
    outputAudioDes.mReserved =  0;                                  //对其方式 0(8字节对齐)
    
    //填充输出相关信息
    UInt32 outDesSize = sizeof(outputAudioDes);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &outDesSize, &outputAudioDes);
    
    //获取编码器的描述信息(只能传入software)
    AudioClassDescription *audioClassDesc = [self getAudioCalssDescriptionWithType:outputAudioDes.mFormatID fromManufacture:kAppleSoftwareAudioCodecManufacturer];
    
    /** 创建converter
     参数1:输入音频格式描述
     参数2:输出音频格式描述
     参数3:class desc的数量
     参数4:class desc
     参数5:创建的解码器
     */
    OSStatus status = AudioConverterNewSpecific(&inputAduioDes, &outputAudioDes, 1, audioClassDesc, &_audioConverter);
    if (status != noErr) {
        NSLog(@"Error!:硬编码AAC创建失败, status= %d", (int)status);
        return;
    }
    
    // 设置编解码质量
    /*
     kAudioConverterQuality_Max                              = 0x7F,
     kAudioConverterQuality_High                             = 0x60,
     kAudioConverterQuality_Medium                           = 0x40,
     kAudioConverterQuality_Low                              = 0x20,
     kAudioConverterQuality_Min                              = 0
     */
    UInt32 temp = kAudioConverterQuality_High;
    //编解码器的呈现质量
    AudioConverterSetProperty(_audioConverter, kAudioConverterCodecQuality, sizeof(temp), &temp);
    
    //设置比特率
    uint32_t audioBitrate = (uint32_t)self.config.bitrate;
    uint32_t audioBitrateSize = sizeof(audioBitrate);
    status = AudioConverterSetProperty(_audioConverter, kAudioConverterEncodeBitRate, audioBitrateSize, &audioBitrate);
    if (status != noErr) {
        NSLog(@"Error!:硬编码AAC 设置比特率失败");
    }
    
    //    //获取最大输出(用于填充数据时检查是否填满)
    //    UInt32 audioMaxOutput = 0;
    //    UInt32 audioMaxOutputSize = sizeof(audioMaxOutput);
    //    self.audioMaxOutputFrameSize = audioMaxOutputSize;
    //    status = AudioConverterGetProperty(_audioConverter, kAudioConverterPropertyMaximumOutputPacketSize, &audioMaxOutputSize, &audioBitrate);
    //
    //    if (audioMaxOutputSize == 0) {
    //        NSLog(@"Error!: 硬编码AAC 获取最大frame size失败");
    //    }
}

//将sampleBuffer数据提取出PCM数据返回给ViewController.可以直接播放PCM数据
- (NSData *)convertAudioSamepleBufferToPcmData: (CMSampleBufferRef)sampleBuffer {
    //获取pcm数据大小
    size_t size = CMSampleBufferGetTotalSampleSize(sampleBuffer);
    //分配空间
    int8_t *audio_data = (int8_t *)malloc(size);
    memset(audio_data, 0, size);
    
    //获取CMBlockBuffer, 这里面保存了PCM数据
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    //将数据copy到我们分配的空间中
    CMBlockBufferCopyDataBytes(blockBuffer, 0, size, audio_data);
    NSData *data = [NSData dataWithBytes:audio_data length:size];
    free(audio_data);
    return data;
}

/**
 获取编码器类型描述
 参数1:类型
 */
- (AudioClassDescription *)getAudioCalssDescriptionWithType: (AudioFormatID)type fromManufacture: (uint32_t)manufacture {
    
    static AudioClassDescription desc;
    UInt32 encoderSpecific = type;
    
    //获取满足AAC编码器的总大小
    UInt32 size;
    
    /**
     参数1:编码器类型
     参数2:类型描述大小
     参数3:类型描述
     参数4:大小
     */
    OSStatus status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(encoderSpecific), &encoderSpecific, &size);
    if (status != noErr) {
        NSLog(@"Error!:硬编码AAC get info 失败, status= %d", (int)status);
        return nil;
    }
    //计算aac编码器的个数
    unsigned int count = size / sizeof(AudioClassDescription);
    //创建一个包含count个编码器的数组
    AudioClassDescription description[count];
    //将满足aac编码的编码器的信息写入数组
    status = AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(encoderSpecific), &encoderSpecific, &size, &description);
    if (status != noErr) {
        NSLog(@"Error!:硬编码AAC get propery 失败, status= %d", (int)status);
        return nil;
    }
    for (unsigned int i = 0; i < count; i++) {
        if (type == description[i].mSubType && manufacture == description[i].mManufacturer) {
            desc = description[i];
            return &desc;
        }
    }
    return nil;
}

- (void)dealloc {
    if (_audioConverter) {
        AudioConverterDispose(_audioConverter);
        _audioConverter = NULL;
    }
    
}


/**
 *  Add ADTS header at the beginning of each and every AAC packet.
 *  This is needed as MediaCodec encoder generates a packet of raw
 *  AAC data.
 *
 *  AAC ADtS头
 *  Note the packetLen must count in the ADTS header itself.
 *  See: http://wiki.multimedia.cx/index.php?title=ADTS
 *  Also: http://wiki.multimedia.cx/index.php?title=MPEG-4_Audio#Channel_Configurations
 **/
- (NSData*)adtsDataForPacketLength:(NSUInteger)packetLength {
    int adtsLength = 7;
    char *packet = malloc(sizeof(char) * adtsLength);
    // Variables Recycled by addADTStoPacket
    int profile = 2;  //AAC LC
    //39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
    int freqIdx = 4;  //3: 48000 Hz、4:44.1KHz、8: 16000 Hz、11: 8000 Hz
    int chanCfg = 1;  //MPEG-4 Audio Channel Configuration. 1 Channel front-center
    NSUInteger fullLength = adtsLength + packetLength;
    // fill in ADTS data
    packet[0] = (char)0xFF;    // 11111111      = syncword
    packet[1] = (char)0xF9;    // 1111 1 00 1  = syncword MPEG-2 Layer CRC
    packet[2] = (char)(((profile-1)<<6) + (freqIdx<<2) +(chanCfg>>2));
    packet[3] = (char)(((chanCfg&3)<<6) + (fullLength>>11));
    packet[4] = (char)((fullLength&0x7FF) >> 3);
    packet[5] = (char)(((fullLength&7)<<5) + 0x1F);
    packet[6] = (char)0xFC;
    NSData *data = [NSData dataWithBytesNoCopy:packet length:adtsLength freeWhenDone:YES];
    return data;
}
/**
 .AAC文件处理流程
 (1) 判断文件格式,确定为ADIF或ADTS
 (2) 若为ADIF,解ADIF头信息,跳至第6步。
 (3) 若为ADTS,寻找同步头。
 (4)解ADTS帧头信息。
 (5)若有错误检测,进行错误检测。
 (6)解块信息。
 (7)解元素信息。
 */


@end

2.解码
CCAudioDecoder.h

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
@class CCAudioConfig;

/**AAC解码回调代理*/
@protocol CCAudioDecoderDelegate <NSObject>
- (void)audioDecodeCallback:(NSData *)pcmData;
@end

@interface CCAudioDecoder : NSObject
@property (nonatomic, strong) CCAudioConfig *config;
@property (nonatomic, weak) id<CCAudioDecoderDelegate> delegate;

//初始化 传入解码配置
- (instancetype)initWithConfig:(CCAudioConfig *)config;

/**解码aac*/
- (void)decodeAudioAACData: (NSData *)aacData;
@end

CCAudioDecoder.m

#import "CCAudioDecoder.h"
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import "CCAVConfig.h"

typedef struct {
    char * data;
    UInt32 size;
    UInt32 channelCount;
    AudioStreamPacketDescription packetDesc;
} CCAudioUserData;

@interface CCAudioDecoder()
@property (strong, nonatomic) NSCondition *converterCond;
@property (nonatomic, strong) dispatch_queue_t decoderQueue;
@property (nonatomic, strong) dispatch_queue_t callbackQueue;

@property (nonatomic) AudioConverterRef audioConverter;
@property (nonatomic) char *aacBuffer;
@property (nonatomic) UInt32 aacBufferSize;
@property (nonatomic) AudioStreamPacketDescription *packetDesc;

@end

@implementation CCAudioDecoder
//解码器回调函数
static OSStatus AudioDecoderConverterComplexInputDataProc(  AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData,  AudioStreamPacketDescription **outDataPacketDescription,  void *inUserData) {
    
    
    CCAudioUserData *audioDecoder = (CCAudioUserData *)(inUserData);
    if (audioDecoder->size <= 0) {
        ioNumberDataPackets = 0;
        return -1;
    }
   
    //填充数据
    *outDataPacketDescription = &audioDecoder->packetDesc;
    (*outDataPacketDescription)[0].mStartOffset = 0;
    (*outDataPacketDescription)[0].mDataByteSize = audioDecoder->size;
    (*outDataPacketDescription)[0].mVariableFramesInPacket = 0;
    
    ioData->mBuffers[0].mData = audioDecoder->data;
    ioData->mBuffers[0].mDataByteSize = audioDecoder->size;
    ioData->mBuffers[0].mNumberChannels = audioDecoder->channelCount;
    
    return noErr;
}

//初始化
- (instancetype)initWithConfig:(CCAudioConfig *)config {
    self = [super init];
    if (self) {
        _decoderQueue = dispatch_queue_create("aac hard decoder queue", DISPATCH_QUEUE_SERIAL);
        _callbackQueue = dispatch_queue_create("aac hard decoder callback queue", DISPATCH_QUEUE_SERIAL);
        _audioConverter = NULL;
        _aacBufferSize = 0;
        _aacBuffer = NULL;
        _config = config;
        if (_config == nil) {
            _config = [[CCAudioConfig alloc] init];
        }
        AudioStreamPacketDescription desc = {0};
        _packetDesc = &desc;
        [self setupEncoder];
    }
    return self;
}

- (void)decodeAudioAACData:(NSData *)aacData {
   
    if (!_audioConverter) { return; }
    
    dispatch_async(_decoderQueue, ^{
     
        //记录aac 作为参数参入解码回调函数
        CCAudioUserData userData = {0};
        userData.channelCount = (UInt32)_config.channelCount;
        userData.data = (char *)[aacData bytes];
        userData.size = (UInt32)aacData.length;
        userData.packetDesc.mDataByteSize = (UInt32)aacData.length;
        userData.packetDesc.mStartOffset = 0;
        userData.packetDesc.mVariableFramesInPacket = 0;
        
        //输出大小和packet个数
        UInt32 pcmBufferSize = (UInt32)(2048 * _config.channelCount);
        UInt32 pcmDataPacketSize = 1024;
        
        //创建临时容器pcm
        uint8_t *pcmBuffer = malloc(pcmBufferSize);
        memset(pcmBuffer, 0, pcmBufferSize);
        
        //输出buffer
        AudioBufferList outAudioBufferList = {0};
        outAudioBufferList.mNumberBuffers = 1;
        outAudioBufferList.mBuffers[0].mNumberChannels = (uint32_t)_config.channelCount;
        outAudioBufferList.mBuffers[0].mDataByteSize = (UInt32)pcmBufferSize;
        outAudioBufferList.mBuffers[0].mData = pcmBuffer;
        
        //输出描述
        AudioStreamPacketDescription outputPacketDesc = {0};
        
        //配置填充函数,获取输出数据
        OSStatus status = AudioConverterFillComplexBuffer(_audioConverter, &AudioDecoderConverterComplexInputDataProc, &userData, &pcmDataPacketSize, &outAudioBufferList, &outputPacketDesc);
        if (status != noErr) {
            NSLog(@"Error: AAC Decoder error, status=%d",(int)status);
            return;
        }
        //如果获取到数据
        if (outAudioBufferList.mBuffers[0].mDataByteSize > 0) {
            NSData *rawData = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
            dispatch_async(_callbackQueue, ^{
                [_delegate audioDecodeCallback:rawData];
            });
        }
        free(pcmBuffer);
    });
    
}

- (void)setupEncoder {
    
    //输出参数pcm
    AudioStreamBasicDescription outputAudioDes = {0};
    outputAudioDes.mSampleRate = (Float64)_config.sampleRate;       //采样率
    outputAudioDes.mChannelsPerFrame = (UInt32)_config.channelCount; //输出声道数
    outputAudioDes.mFormatID = kAudioFormatLinearPCM;                //输出格式
    outputAudioDes.mFormatFlags = (kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked); //编码 12
    outputAudioDes.mFramesPerPacket = 1;                            //每一个packet帧数 ;
    outputAudioDes.mBitsPerChannel = 16;                             //数据帧中每个通道的采样位数。
    outputAudioDes.mBytesPerFrame = outputAudioDes.mBitsPerChannel / 8 *outputAudioDes.mChannelsPerFrame;                              //每一帧大小(采样位数 / 8 *声道数)
    outputAudioDes.mBytesPerPacket = outputAudioDes.mBytesPerFrame * outputAudioDes.mFramesPerPacket;                             //每个packet大小(帧大小 * 帧数)
    outputAudioDes.mReserved =  0;                                  //对其方式 0(8字节对齐)
    
    //输入参数aac
    AudioStreamBasicDescription inputAduioDes = {0};
    inputAduioDes.mSampleRate = (Float64)_config.sampleRate;
    inputAduioDes.mFormatID = kAudioFormatMPEG4AAC;
    inputAduioDes.mFormatFlags = kMPEG4Object_AAC_LC;
    inputAduioDes.mFramesPerPacket = 1024;
    inputAduioDes.mChannelsPerFrame = (UInt32)_config.channelCount;
    
    //填充输出相关信息
    UInt32 inDesSize = sizeof(inputAduioDes);
    AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &inDesSize, &inputAduioDes);
    
    //获取解码器的描述信息(只能传入software)
    AudioClassDescription *audioClassDesc = [self getAudioCalssDescriptionWithType:outputAudioDes.mFormatID fromManufacture:kAppleSoftwareAudioCodecManufacturer];
    /** 创建converter
     参数1:输入音频格式描述
     参数2:输出音频格式描述
     参数3:class desc的数量
     参数4:class desc
     参数5:创建的解码器
     */
    OSStatus status = AudioConverterNewSpecific(&inputAduioDes, &outputAudioDes, 1, audioClassDesc, &_audioConverter);
    if (status != noErr) {
        NSLog(@"Error!:硬解码AAC创建失败, status= %d", (int)status);
        return;
    }
}


/**
 获取解码器类型描述
 参数1:类型
 */
- (AudioClassDescription *)getAudioCalssDescriptionWithType: (AudioFormatID)type fromManufacture: (uint32_t)manufacture {
    
    static AudioClassDescription desc;
    UInt32 decoderSpecific = type;
    //获取满足AAC解码器的总大小
    UInt32 size;
    /**
     参数1:编码器类型(解码)
     参数2:类型描述大小
     参数3:类型描述
     参数4:大小
     */
    OSStatus status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Decoders, sizeof(decoderSpecific), &decoderSpecific, &size);
    if (status != noErr) {
        NSLog(@"Error!:硬解码AAC get info 失败, status= %d", (int)status);
        return nil;
    }
    //计算aac解码器的个数
    unsigned int count = size / sizeof(AudioClassDescription);
    //创建一个包含count个解码器的数组
    AudioClassDescription description[count];
    //将满足aac解码的解码器的信息写入数组
    status = AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(decoderSpecific), &decoderSpecific, &size, &description);
    if (status != noErr) {
        NSLog(@"Error!:硬解码AAC get propery 失败, status= %d", (int)status);
        return nil;
    }
    for (unsigned int i = 0; i < count; i++) {
        if (type == description[i].mSubType && manufacture == description[i].mManufacturer) {
            desc = description[i];
            return &desc;
        }
    }
    return nil;
}

- (void)dealloc {
    if (_audioConverter) {
        AudioConverterDispose(_audioConverter);
        _audioConverter = NULL;
    }
    
}

@end

3.播放
CCAudioDataQueue.h

#import <Foundation/Foundation.h>

@interface CCAudioDataQueue : NSObject
@property (nonatomic, readonly) int count;

+(instancetype) shareInstance;

- (void)addData:(id)obj;

- (id)getData;
@end

CCAudioDataQueue.m

#import "CCAudioDataQueue.h"

@interface CCAudioDataQueue ()
@property (nonatomic, strong) NSMutableArray *bufferArray;
@end

@implementation CCAudioDataQueue
@synthesize count;
static CCAudioDataQueue *_instance = nil;

+(instancetype) shareInstance
{
    static dispatch_once_t onceToken ;
    dispatch_once(&onceToken, ^{
        _instance = [[self alloc] init];
    }) ;
    return _instance ;
}

- (instancetype)init{
    if (self = [super init]) {
        _bufferArray = [NSMutableArray array];
        count = 0;
    }
    return self;
}

-(void)addData:(id)obj{
    @synchronized (_bufferArray) {
        [_bufferArray addObject:obj];
        count = (int)_bufferArray.count;
    }
}

- (id)getData{
    @synchronized (_bufferArray) {
        id obj = nil;
        if (count) {
            obj = [_bufferArray firstObject];
            [_bufferArray removeObject:obj];
            count = (int)_bufferArray.count;
        }
        return obj;
    }
}
@end

CCAudioPCMPlayer.h

#import <Foundation/Foundation.h>
@class CCAudioConfig;
@interface CCAudioPCMPlayer : NSObject

- (instancetype)initWithConfig:(CCAudioConfig *)config;
/**播放pcm*/
- (void)playPCMData:(NSData *)data;
/** 设置音量增量 0.0 - 1.0 */
- (void)setupVoice:(Float32)gain;
/**销毁 */
- (void)dispose;

@end

CCAudioPCMPlayer.m

#import "CCAudioPCMPlayer.h"
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
#import "CCAVConfig.h"
#import "CCAudioDataQueue.h"
#define MIN_SIZE_PER_FRAME 2048 //每帧最小数据长度
static const int kNumberBuffers_play = 3;                              // 1
typedef struct AQPlayerState
{
    AudioStreamBasicDescription   mDataFormat;                    // 2
    AudioQueueRef                 mQueue;                         // 3
    AudioQueueBufferRef           mBuffers[kNumberBuffers_play];       // 4
    AudioStreamPacketDescription  *mPacketDescs;                  // 9
}AQPlayerState;

@interface CCAudioPCMPlayer ()
@property (nonatomic, assign) AQPlayerState aqps;
@property (nonatomic, strong) CCAudioConfig *config;
@property (nonatomic, assign) BOOL isPlaying;
@end


@implementation CCAudioPCMPlayer
static void TMAudioQueueOutputCallback(void * inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
   
    AudioQueueFreeBuffer(inAQ, inBuffer);
}

- (instancetype)initWithConfig:(CCAudioConfig *)config
{
    self = [super init];
    if (self) {
        _config = config;
        //配置
        AudioStreamBasicDescription dataFormat = {0};
        dataFormat.mSampleRate = (Float64)_config.sampleRate;       //采样率
        dataFormat.mChannelsPerFrame = (UInt32)_config.channelCount; //输出声道数
        dataFormat.mFormatID = kAudioFormatLinearPCM;                //输出格式
        dataFormat.mFormatFlags = (kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked); //编码 12
        dataFormat.mFramesPerPacket = 1;                            //每一个packet帧数 ;
        dataFormat.mBitsPerChannel = 16;                             //数据帧中每个通道的采样位数。
        dataFormat.mBytesPerFrame = dataFormat.mBitsPerChannel / 8 *dataFormat.mChannelsPerFrame;                              //每一帧大小(采样位数 / 8 *声道数)
        dataFormat.mBytesPerPacket = dataFormat.mBytesPerFrame * dataFormat.mFramesPerPacket;                             //每个packet大小(帧大小 * 帧数)
        dataFormat.mReserved =  0;
        AQPlayerState state = {0};
        state.mDataFormat = dataFormat;
        _aqps = state;
        
        [self setupSession];
        
        //创建播放队列
        OSStatus status = AudioQueueNewOutput(&_aqps.mDataFormat, TMAudioQueueOutputCallback, NULL, NULL, NULL, 0, &_aqps.mQueue);
        if (status != noErr) {
            NSError *error = [[NSError alloc] initWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
            NSLog(@"Error: AudioQueue create error = %@", [error description]);
            return self;
        }
        
        [self setupVoice:1];
        _isPlaying = false;
    }
    return self;
}


- (void)setupSession {
    NSError *error = nil;
    //将会话设置为活动或非活动。请注意,激活音频会话是一个同步(阻塞)操作
    [[AVAudioSession sharedInstance] setActive:YES error:&error];
    if (error) {
        NSLog(@"Error: audioQueue palyer AVAudioSession error, error: %@", error);
    }
    //设置会话类别
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    if (error) {
        NSLog(@"Error: audioQueue palyer AVAudioSession error, error: %@", error);
    }
}

- (void)playPCMData:(NSData *)data {
   
    //指向音频队列缓冲区
    AudioQueueBufferRef inBuffer;
    /*
     要求音频队列对象分配音频队列缓冲区。
     参数1:要分配缓冲区的音频队列
     参数2:新缓冲区所需的容量(字节)
     参数3:输出,指向新分配的音频队列缓冲区
     */
    AudioQueueAllocateBuffer(_aqps.mQueue, MIN_SIZE_PER_FRAME, &inBuffer);
    //将data里的数据拷贝到inBuffer.mAudioData中
    memcpy(inBuffer->mAudioData, data.bytes, data.length);
    //设置inBuffer.mAudioDataByteSize
    inBuffer->mAudioDataByteSize = (UInt32)data.length;
    
    //将缓冲区添加到录制或播放音频队列的缓冲区队列。
    /*
     参数1:拥有音频队列缓冲区的音频队列
     参数2:要添加到缓冲区队列的音频队列缓冲区。
     参数3:inBuffer参数中音频数据包的数目,对于以下任何情况,请使用值0:
            * 播放恒定比特率(CBR)格式时。
            * 当音频队列是录制(输入)音频队列时。
            * 当使用audioqueueallocateBufferWithPacketDescriptions函数分配要重新排队的缓冲区时。在这种情况下,回调应该描述缓冲区的mpackedDescriptions和mpackedDescriptionCount字段中缓冲区的数据包。
     参数4:一组数据包描述。对于以下任何情况,请使用空值
            * 播放恒定比特率(CBR)格式时。
            * 当音频队列是输入(录制)音频队列时。
            * 当使用audioqueueallocateBufferWithPacketDescriptions函数分配要重新排队的缓冲区时。在这种情况下,回调应该描述缓冲区的mpackedDescriptions和mpackedDescriptionCount字段中缓冲区的数据包
     */
    OSStatus status = AudioQueueEnqueueBuffer(_aqps.mQueue, inBuffer, 0, NULL);
    if (status != noErr) {
        NSLog(@"Error: audio queue palyer  enqueue error: %d",(int)status);
    }
    
    //开始播放或录制音频
    /*
     参数1:要开始的音频队列
     参数2:音频队列应开始的时间。
     要指定相对于关联音频设备时间线的开始时间,请使用audioTimestamp结构的msampletime字段。使用NULL表示音频队列应尽快启动
     */
    AudioQueueStart(_aqps.mQueue, NULL);
}

//不需要该函数,
//- (void)pause {
//     AudioQueuePause(_aqps.mQueue);
//}

//设置音量增量//0.0 - 1.0
- (void)setupVoice:(Float32)gain {
    
    Float32 gain0 = gain;
    if (gain < 0) {
        gain0 = 0;
    }else if (gain > 1) {
        gain0 = 1;
    }
    //设置播放音频队列参数值
    /*
     参数1:要开始的音频队列
     参数2:属性
     参数3:value
     */
    AudioQueueSetParameter(_aqps.mQueue, kAudioQueueParam_Volume, gain0);
}
//销毁
- (void)dispose {

    AudioQueueStop(_aqps.mQueue, true);
    AudioQueueDispose(_aqps.mQueue, true);
}

@end

推荐阅读更多精彩内容

  • 前言: 记载资料多为网络搜集,侵删。 根据最近接触的整机项目做了一些整机音频相关基础知识的总结,如有不足或表述问题...
    BUFFERS阅读 655评论 0 3
  • 要在计算机内播放或是处理音频文件,也就是要对声音文件进行数、模转换,这个过程同样由采样和量化构成,人耳所能听到的声...
    Viking_Den阅读 3,143评论 1 8
  • ### YUV颜色空间 视频是由一帧一帧的数据连接而成,而一帧视频数据其实就是一张图片。 yuv是一种图片储存格式...
    天使君阅读 682评论 0 2
  • 概述 本片文章主要介绍音频基础,在做音频开发之前首先必须要对音频的相关概念了解。以下是具体内容概述: 常见的音频格...
    iosmedia阅读 209评论 0 1
  • 最近在做一个直播项目,项目按钮点击有音效,进入某个页面还需要有背景音乐。这里主要用到SoundPool、Servi...
    黄海佳阅读 2,875评论 0 3