[AVFoundation] Export

Source: AVFoundation Programming Guide

To read and write audiovisual assets, you must use the export APIs provided by the AVFoundation framework. The AVAssetExportSession class provides an interface for simple exporting needs, such as modifying the file format or trimming the length of an asset (see Trimming and Transcoding a Movie). For more in-depth exporting needs, use the AVAssetReader and AVAssetWriter classes.
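
For the simple case, a minimal sketch of an export with AVAssetExportSession might look like the following (the asset, output URL, and choice of preset here are illustrative assumptions, not part of the original guide's listings):

// A minimal export sketch. someAsset and outputURL are assumed to exist.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:someAsset presetName:AVAssetExportPresetMediumQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusFailed)
    {
        // Inspect exportSession.error here.
    }
}];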

Use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, use an AVAssetWriter object.

Note: The asset reader and writer classes are not intended to be used for real-time processing. In fact, an asset reader cannot even be used for reading from a real-time source like an HTTP live stream. However, if you are using an asset writer with a real-time data source, such as an AVCaptureOutput object, set the expectsMediaDataInRealTime property of your asset writer's inputs to YES. Setting this property to YES for a non-real-time data source will result in your files not being interleaved properly.
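
For example, with a capture-style source you might configure the input like this sketch (writerInput stands for an existing AVAssetWriterInput fed by the real-time source):

// Required for real-time sources such as AVCaptureOutput; leave NO otherwise.
writerInput.expectsMediaDataInRealTime = YES;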

Reading an Asset

Each AVAssetReader object can be associated only with a single asset at a time, but this asset may contain multiple tracks. For this reason, you must assign concrete subclasses of the AVAssetReaderOutput class to your asset reader before you begin reading in order to configure how the media data is read. There are three concrete subclasses of the AVAssetReaderOutput base class that you can use for your asset reading needs: AVAssetReaderTrackOutput, AVAssetReaderAudioMixOutput, and AVAssetReaderVideoCompositionOutput.

Creating the Asset Reader

All you need to initialize an AVAssetReader object is the asset that you want to read.

NSError *outError;

AVAsset *someAsset = <#AVAsset that you want to read#>;

AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:someAsset error:&outError];

BOOL success = (assetReader != nil);

Note: Always check that the asset reader returned to you is non-nil to ensure that it was initialized successfully. Otherwise, the error parameter (outError in the previous example) will contain the relevant error information.

Setting Up the Asset Reader Outputs

After you have created your asset reader, set up at least one output to receive the media data being read. When setting up your outputs, be sure to set the alwaysCopiesSampleData property to NO so that you reap the benefits of performance improvements. In all of the examples within this chapter, this property could and should be set to NO.
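
For example (trackOutput here stands for any of the AVAssetReaderOutput objects created below):

// Skip unnecessary sample-data copies for a performance win.
trackOutput.alwaysCopiesSampleData = NO;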

If you want only to read media data from one or more tracks and potentially convert that data to a different format, use the AVAssetReaderTrackOutput class, with a single track output object for each AVAssetTrack object that you want to read from your asset. To decompress an audio track to Linear PCM with an asset reader, you set up your track output as follows:

AVAsset *localAsset = assetReader.asset;
// Get the audio track to read.
AVAssetTrack *audioTrack = [[localAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
// Decompression settings for Linear PCM
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
// Create the output with the audio track and decompression settings.
AVAssetReaderOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:decompressionAudioSettings];
// Add the output to the reader if possible.
if ([assetReader canAddOutput:trackOutput])
    [assetReader addOutput:trackOutput];

Note: To read the media data from a specific asset track in the format in which it was stored, pass nil to the outputSettings parameter.

You use the AVAssetReaderAudioMixOutput and AVAssetReaderVideoCompositionOutput classes to read media data that has been mixed or composited together using an AVAudioMix object or an AVVideoComposition object, respectively. Typically, these outputs are used when your asset reader is reading from an AVComposition object.

With a single audio mix output, you can read multiple audio tracks from your asset that have been mixed together using an AVAudioMix object. To specify how the tracks are mixed, assign the mix to the AVAssetReaderAudioMixOutput object after initialization. The following code shows how to create an audio mix output with all of the audio tracks from your asset, decompress the audio tracks to Linear PCM, and assign an audio mix object to the output. For details on how to configure an audio mix, see Editing.

AVAudioMix *audioMix = <#An AVAudioMix that specifies how the audio tracks from the AVAsset are mixed#>;

// Assumes that assetReader was initialized with an AVComposition object.
AVComposition *composition = (AVComposition *)assetReader.asset;

// Get the audio tracks to read.
NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];

// Get the decompression settings for Linear PCM.
NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };

// Create the audio mix output with the audio tracks and decompression settings.
AVAssetReaderOutput *audioMixOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:decompressionAudioSettings];

// Associate the audio mix used to mix the audio tracks being read with the output.
audioMixOutput.audioMix = audioMix;

// Add the output to the reader if possible.
if ([assetReader canAddOutput:audioMixOutput])
    [assetReader addOutput:audioMixOutput];

Note: Passing nil for the audioSettings parameter tells the asset reader to return samples in a convenient uncompressed format. The same is true for the AVAssetReaderVideoCompositionOutput class.

The video composition output behaves in much the same way: you can read multiple video tracks from your asset that have been composited together using an AVVideoComposition object. To read the media data from multiple composited video tracks and decompress it to ARGB, set up your output as follows:

AVVideoComposition *videoComposition = <#An AVVideoComposition that specifies how the video tracks from the AVAsset are composited#>;

// Assumes assetReader was initialized with an AVComposition.
AVComposition *composition = (AVComposition *)assetReader.asset;

// Get the video tracks to read.
NSArray *videoTracks = [composition tracksWithMediaType:AVMediaTypeVideo];

// Decompression settings for ARGB.
NSDictionary *decompressionVideoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB], (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary] };

// Create the video composition output with the video tracks and decompression settings.
AVAssetReaderOutput *videoCompositionOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:videoTracks videoSettings:decompressionVideoSettings];

// Associate the video composition used to composite the video tracks being read with the output.
videoCompositionOutput.videoComposition = videoComposition;

// Add the output to the reader if possible.
if ([assetReader canAddOutput:videoCompositionOutput])
    [assetReader addOutput:videoCompositionOutput];

Reading the Asset's Media Data

After all of the outputs you need have been set up, call the startReading method on your asset reader to begin reading. Next, retrieve the media data individually from each output using the copyNextSampleBuffer method. To start up an asset reader with a single output and read all of its media samples, do the following:

// Start the asset reader up.
[self.assetReader startReading];
BOOL done = NO;
while (!done)
{
  // Copy the next sample buffer from the reader output.
  CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];

  if (sampleBuffer)
  {
    // Do something with sampleBuffer here.
    CFRelease(sampleBuffer);
    sampleBuffer = NULL;
  }
  else
  {
    // Find out why the asset reader output couldn't copy another sample buffer.
    if (self.assetReader.status == AVAssetReaderStatusFailed)
    {
      NSError *failureError = self.assetReader.error;
      // Handle the error here.
    }
    else
    {
      // The asset reader output has read all of its samples.
      done = YES;
    }
  }
}

Writing an Asset

The AVAssetWriter class writes media data from multiple sources to a single file of a specified file format. You don't need to associate your asset writer object with a specific asset, but you must use a separate asset writer for each output file that you want to create. Because an asset writer can write media data from multiple sources, you must create an AVAssetWriterInput object for each individual track that you want to write to the output file. Each AVAssetWriterInput object expects to receive data in the form of CMSampleBufferRef objects, but if you want to append CVPixelBufferRef objects to your asset writer input, use the AVAssetWriterInputPixelBufferAdaptor class.

Creating the Asset Writer

To create an asset writer, specify the URL for the output file and the desired file type. The following code shows how to initialize an asset writer to create a QuickTime movie:

NSError *outError;
NSURL *outputURL = <#NSURL object representing the URL where you want to save the video#>;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&outError];
BOOL success = (assetWriter != nil);

Setting Up the Asset Writer Inputs

For your asset writer to be able to write media data, you must set up at least one asset writer input. For example, if your source of media data is already vending media samples as CMSampleBufferRef objects, just use the AVAssetWriterInput class. To set up an asset writer input that compresses audio media data to 128 kbps AAC and connect it to your asset writer, do the following:

// Configure the channel layout as stereo.
AudioChannelLayout stereoChannelLayout = {
    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
    .mChannelBitmap = 0,
    .mNumberChannelDescriptions = 0
};

// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
    AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
    AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
    AVSampleRateKey : [NSNumber numberWithInteger:44100],
    AVChannelLayoutKey : channelLayoutAsData,
    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
};

// Create the asset writer input with the compression settings and specify the media type as audio.
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];

// Add the input to the writer if possible.
if ([assetWriter canAddInput:assetWriterInput])
    [assetWriter addInput:assetWriterInput];

Note: If you want the media data to be written in the format in which it was stored, pass nil for the outputSettings parameter. Pass nil only if the asset writer was initialized with a file type of AVFileTypeQuickTimeMovie.

Your asset writer input can optionally include certain metadata or specify a different transform for a particular track using the metadata and transform properties, respectively. For an asset writer input whose data source is a video track, you can maintain the video's original transform in the output file by doing the following:

AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
assetWriterInput.transform = videoAssetTrack.preferredTransform;

Note: Set the metadata and transform properties before you begin writing with your asset writer for them to take effect.
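
As a sketch of the metadata side, you might attach a single common-key item to the input before writing begins (the title value is purely illustrative):

AVMutableMetadataItem *titleItem = [AVMutableMetadataItem metadataItem];
titleItem.keySpace = AVMetadataKeySpaceCommon;
titleItem.key = AVMetadataCommonKeyTitle;
titleItem.value = @"My Movie"; // Illustrative value.
// Must be assigned before writing starts for it to take effect.
assetWriterInput.metadata = @[titleItem];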

When writing media data to the output file, sometimes you may want to allocate pixel buffers. To do so, use the AVAssetWriterInputPixelBufferAdaptor class. For greatest efficiency, instead of adding pixel buffers that were allocated using a separate pool, use the pixel buffer pool provided by the pixel buffer adaptor. The following code creates a pixel buffer adaptor working in the RGB domain that will use CGImage objects to create its pixel buffers.

NSDictionary *pixelBufferAttributes = @{
    (id)kCVPixelBufferCGImageCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferCGBitmapContextCompatibilityKey : [NSNumber numberWithBool:YES],
    (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInt:kCVPixelFormatType_32ARGB]
};
AVAssetWriterInputPixelBufferAdaptor *inputPixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:pixelBufferAttributes];

Note: All AVAssetWriterInputPixelBufferAdaptor objects must be connected to a single asset writer input. That asset writer input must accept media data of type AVMediaTypeVideo.
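
A sketch of drawing a buffer from the adaptor's pool and appending it might look like this (presentationTime and the rendering step are assumptions, and the pool is only available once writing has started):

CVPixelBufferRef pixelBuffer = NULL;
// Use the adaptor's pool rather than allocating pixel buffers separately.
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, inputPixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
if (result == kCVReturnSuccess && pixelBuffer != NULL)
{
    // Render the CGImage content into pixelBuffer here.
    [inputPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}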

Writing Media Data

When you have configured all of the inputs needed for your asset writer, you are ready to begin writing media data. As you did with the asset reader, initiate the writing process with a call to the startWriting method. You then need to start a sample-writing session with a call to the startSessionAtSourceTime: method. All writing done by an asset writer has to occur within one of these sessions, and the time range of each session defines the time range of the media data included from within the source. For example, if your source is an asset reader that is supplying media data read from an AVAsset object and you don't want to include media data from the first half of the asset, you would do the following:

CMTime halfAssetDuration = CMTimeMultiplyByFloat64(self.asset.duration, 0.5);
[self.assetWriter startSessionAtSourceTime:halfAssetDuration];
//Implementation continues.

Normally, to conclude a writing session you must call the endSessionAtSourceTime: method. However, if your writing session goes right up to the end of your file, you can end it simply by calling the finishWriting method. To start up an asset writer with a single input and write all of its media data, do the following:

// Prepare the asset writer for writing.
[self.assetWriter startWriting];

// Start a sample-writing session.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:myInputSerialQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the next sample buffer.
          CMSampleBufferRef nextSampleBuffer = [self copyNextSampleBufferToWrite];
          if (nextSampleBuffer)
          {
               // If it exists, append the next sample buffer to the output file.
               [self.assetWriterInput appendSampleBuffer:nextSampleBuffer];
               CFRelease(nextSampleBuffer);
               nextSampleBuffer = nil;
          }
          else
          {
               // Assume that lack of a next sample buffer means the sample buffer source is out of samples and mark the input as finished.
               [self.assetWriterInput markAsFinished];
               break;
          }
     }
}];

The copyNextSampleBufferToWrite method in the code above is simply a stub. The location of this stub is where you would need to insert some logic to return CMSampleBufferRef objects representing the media data that you want to write. One possible source of sample buffers is an asset reader output.
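
When every input has been marked as finished, you would then close out the file. On recent SDKs the block-based variant is preferred over the synchronous finishWriting mentioned above; a minimal sketch might be:

// Finish writing and check the final status asynchronously.
[self.assetWriter finishWritingWithCompletionHandler:^{
    if (self.assetWriter.status == AVAssetWriterStatusFailed)
    {
        // Inspect self.assetWriter.error here.
    }
}];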

Reencoding Assets

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects, you have more control over the conversion than you do with an AVAssetExportSession object. For example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. The first step in this process is just to set up your asset reader outputs and asset writer inputs as desired. After your asset reader and writer are fully configured, you start up both of them with calls to the startReading and startWriting methods, respectively. The following code snippet shows how to use a single asset writer input to write media data supplied by a single asset reader output:

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create a serialization queue for reading and writing.
dispatch_queue_t serializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

// Specify the block to execute when the asset writer is ready for media data and the queue to call it on.
[self.assetWriterInput requestMediaDataWhenReadyOnQueue:serializationQueue usingBlock:^{
     while ([self.assetWriterInput isReadyForMoreMediaData])
     {
          // Get the asset reader output's next sample buffer.
          CMSampleBufferRef sampleBuffer = [self.assetReaderOutput copyNextSampleBuffer];
          if (sampleBuffer != NULL)
          {
               // If it exists, append this sample buffer to the output file.
               BOOL success = [self.assetWriterInput appendSampleBuffer:sampleBuffer];
               CFRelease(sampleBuffer);
               sampleBuffer = NULL;

               // Check for errors that may have occurred when appending the new sample buffer.
               if (!success && self.assetWriter.status == AVAssetWriterStatusFailed)
               {
                    NSError *failureError = self.assetWriter.error;
                    //Handle the error.
               }
          }
          else
          {
               // If the next sample buffer doesn't exist, find out why the asset reader output couldn't vend another one.
               if (self.assetReader.status == AVAssetReaderStatusFailed)
               {
                    NSError *failureError = self.assetReader.error;
                    //Handle the error here.
               }
               else
               {
                    // The asset reader output must have vended all of its samples. Mark the input as finished.
                    [self.assetWriterInput markAsFinished];
                    break;
               }
          }
     }
}];

Putting It All Together: Using an Asset Reader and Writer in Tandem to Reencode an Asset

This brief code example illustrates how to use an asset reader and writer to reencode the first video and audio track of an asset into a new file. It shows how to:

  • Use serialization queues to handle the asynchronous nature of reading and writing audiovisual data
  • Initialize an asset reader and configure two asset reader outputs, one for audio and one for video
  • Initialize an asset writer and configure two asset writer inputs, one for audio and one for video
  • Use an asset reader to asynchronously supply media data to an asset writer through two different output/input combinations
  • Use a dispatch group to be notified of completion of the reencoding process
  • Allow a user to cancel the reencoding process once it has begun

Note: To focus on the most relevant code, this example omits several aspects of a complete application. To use AVFoundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

Handling the Initial Setup

Before you create your asset reader and writer and configure their outputs and inputs, you need to handle some initial setup. The first part of this setup involves creating three separate serialization queues to coordinate the reading and writing process.

NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];

// Create the main serialization queue.
self.mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];

// Create the serialization queue to use for reading and writing the audio data.
self.rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];

// Create the serialization queue to use for reading and writing the video data.
self.rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

The main serialization queue is used to coordinate the starting and stopping of the asset reader and writer (perhaps due to cancellation), and the other two serialization queues are used to serialize the reading and writing by each output/input combination with a potential cancellation.

Now that you have some serialization queues, load the tracks of your asset and begin the reencoding process.

self.asset = <#AVAsset that you want to reencode#>;
self.cancelled = NO;
self.outputURL = <#NSURL representing desired output URL for file generated by asset writer#>;

// Asynchronously load the tracks of the asset you want to read.
[self.asset loadValuesAsynchronouslyForKeys:@[@"tracks"] completionHandler:^{

     // Once the tracks have finished loading, dispatch the work to the main serialization queue.
     dispatch_async(self.mainSerializationQueue, ^{

          // Due to asynchronous nature, check to see if user has already cancelled.
          if (self.cancelled)
               return;
          BOOL success = YES;
          NSError *localError = nil;

          // Check for success of loading the asset's tracks.
          success = ([self.asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
          if (success)
          {
               // If the tracks loaded successfully, make sure that no file exists at the output path for the asset writer.
               NSFileManager *fm = [NSFileManager defaultManager];
               NSString *localOutputPath = [self.outputURL path];

               if ([fm fileExistsAtPath:localOutputPath])
                    success = [fm removeItemAtPath:localOutputPath error:&localError];
          }

          if (success)
               success = [self setupAssetReaderAndAssetWriter:&localError];
          if (success)
               success = [self startAssetReaderAndWriter:&localError];
          if (!success)
               [self readingAndWritingDidFinishSuccessfully:success withError:localError];
     });
}];

When the track loading process finishes, whether successfully or not, the rest of the work is dispatched to the main serialization queue to ensure that all of this work is serialized with a potential cancellation. All that's left now is to implement the cancellation process and the three custom methods at the end of the previous code listing.

Initializing the Asset Reader and Writer

The custom setupAssetReaderAndAssetWriter: method initializes the reader and writer and configures two output/input combinations, one for an audio track and one for a video track. In this example, the audio is decompressed to Linear PCM using the asset reader and compressed back to 128 kbps AAC using the asset writer. The video is decompressed to YUV using the asset reader and compressed to H.264 using the asset writer.

- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
     // Create and initialize the asset reader.
     self.assetReader = [[AVAssetReader alloc] initWithAsset:self.asset error:outError];
     BOOL success = (self.assetReader != nil);
     if (success)
     {
          // If the asset reader was successfully initialized, do the same for the asset writer.
          self.assetWriter = [[AVAssetWriter alloc] initWithURL:self.outputURL fileType:AVFileTypeQuickTimeMovie error:outError];
          success = (self.assetWriter != nil);
     }

     if (success)
     {
          // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
          AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
          NSArray *audioTracks = [self.asset tracksWithMediaType:AVMediaTypeAudio];

          if ([audioTracks count] > 0)
               assetAudioTrack = [audioTracks objectAtIndex:0];

          NSArray *videoTracks = [self.asset tracksWithMediaType:AVMediaTypeVideo];

          if ([videoTracks count] > 0)
               assetVideoTrack = [videoTracks objectAtIndex:0];

          if (assetAudioTrack)
          {
               // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
               NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
               self.assetReaderAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
               [self.assetReader addOutput:self.assetReaderAudioOutput];

               // Then, set the compression settings to 128kbps AAC and create the asset writer input.
               AudioChannelLayout stereoChannelLayout = {
                    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                    .mChannelBitmap = 0,
                    .mNumberChannelDescriptions = 0
               };

               NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];

               NSDictionary *compressionAudioSettings = @{
                    AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                    AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                    AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                    AVChannelLayoutKey    : channelLayoutAsData,
                    AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
               };

               self.assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
               [self.assetWriter addInput:self.assetWriterAudioInput];
          }

          if (assetVideoTrack)
          {
               // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
               NSDictionary *decompressionVideoSettings = @{
                    (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_422YpCbCr8],
                    (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
               };

               self.assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
               [self.assetReader addOutput:self.assetReaderVideoOutput];

               CMFormatDescriptionRef formatDescription = NULL;

               // Grab the video format descriptions from the video track and grab the first one if it exists.
               NSArray *videoFormatDescriptions = [assetVideoTrack formatDescriptions];
               if ([videoFormatDescriptions count] > 0)
                    formatDescription = (__bridge CMFormatDescriptionRef)[videoFormatDescriptions objectAtIndex:0];
               CGSize trackDimensions = {
                    .width = 0.0,
                    .height = 0.0,
               };

               // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
               if (formatDescription)
                    trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
               else
                    trackDimensions = [assetVideoTrack naturalSize];

               NSDictionary *compressionSettings = nil;

               // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
               if (formatDescription)
               {
                    NSDictionary *cleanAperture = nil;
                    NSDictionary *pixelAspectRatio = nil;
                    CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);

                    if (cleanApertureFromCMFormatDescription)
                    {
                         cleanAperture = @{
                              AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                              AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                              AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                              AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                         };
                    }

                    CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);

                    if (pixelAspectRatioFromCMFormatDescription)
                    {
                         pixelAspectRatio = @{
                              AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                              AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                         };
                    }

                    // Add whichever settings we could grab from the format description to the compression settings dictionary.
                    if (cleanAperture || pixelAspectRatio)
                    {
                         NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                         if (cleanAperture)
                              [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];

                         if (pixelAspectRatio)
                              [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];

                         compressionSettings = mutableCompressionSettings;
                    }
               }

               // Create the video settings dictionary for H.264.
               NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                    AVVideoCodecKey  : AVVideoCodecH264,
                    AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                    AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
               }];

               // Put the compression settings into the video settings dictionary if we were able to grab them.
               if (compressionSettings)
                    [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];

               // Create the asset writer input and add it to the asset writer.
               self.assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
               [self.assetWriter addInput:self.assetWriterVideoInput];
          }
     }
     return success;
}

Reencoding the Asset

Provided that the asset reader and writer were successfully initialized and configured, the startAssetReaderAndWriter: method described in Handling the Initial Setup is called. This method is where the actual reading and writing of the asset takes place.

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
     BOOL success = YES;

     // Attempt to start the asset reader.
     success = [self.assetReader startReading];

     if (!success)
          *outError = [self.assetReader error];

     if (success)
     {
          // If the reader started successfully, attempt to start the asset writer.
          success = [self.assetWriter startWriting];

          if (!success)
               *outError = [self.assetWriter error];
     }

     if (success)
     {
          // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
          self.dispatchGroup = dispatch_group_create();
          [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
          self.audioFinished = NO;
          self.videoFinished = NO;

          if (self.assetWriterAudioInput)
          {
               // If there is audio to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);

               // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
               [self.assetWriterAudioInput requestMediaDataWhenReadyOnQueue:self.rwAudioSerializationQueue usingBlock:^{

                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.audioFinished)
                         return;

                    BOOL completedOrFailed = NO;

                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next audio sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderAudioOutput copyNextSampleBuffer];

                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterAudioInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }

                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                         BOOL oldFinished = self.audioFinished;

                         self.audioFinished = YES;

                         if (oldFinished == NO)
                         {
                              [self.assetWriterAudioInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }

          if (self.assetWriterVideoInput)
          {
               // If we had video to reencode, enter the dispatch group before beginning the work.
               dispatch_group_enter(self.dispatchGroup);

               // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
               [self.assetWriterVideoInput requestMediaDataWhenReadyOnQueue:self.rwVideoSerializationQueue usingBlock:^{

                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (self.videoFinished)
                         return;

                    BOOL completedOrFailed = NO;

                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([self.assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                         // Get the next video sample buffer, and append it to the output file.
                         CMSampleBufferRef sampleBuffer = [self.assetReaderVideoOutput copyNextSampleBuffer];

                         if (sampleBuffer != NULL)
                         {
                              BOOL success = [self.assetWriterVideoInput appendSampleBuffer:sampleBuffer];
                              CFRelease(sampleBuffer);
                              sampleBuffer = NULL;
                              completedOrFailed = !success;
                         }
                         else
                         {
                              completedOrFailed = YES;
                         }
                    }

                    if (completedOrFailed)
                    {
                         // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                         BOOL oldFinished = self.videoFinished;
                         self.videoFinished = YES;

                         if (oldFinished == NO)
                         {
                              [self.assetWriterVideoInput markAsFinished];
                         }
                         dispatch_group_leave(self.dispatchGroup);
                    }
               }];
          }

          // Set up the notification that the dispatch group will send when the audio and video work have both finished.
          dispatch_group_notify(self.dispatchGroup, self.mainSerializationQueue, ^{

               BOOL finalSuccess = YES;

               NSError *finalError = nil;

               // Check to see if the work has finished due to cancellation.
               if (self.cancelled)
               {
                    // If so, cancel the reader and writer.
                    [self.assetReader cancelReading];
                    [self.assetWriter cancelWriting];
               }
               else
               {
                    // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                    if ([self.assetReader status] == AVAssetReaderStatusFailed)
                    {
                         finalSuccess = NO;
                         finalError = [self.assetReader error];
                    }

                    // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                    if (finalSuccess)
                    {
                         finalSuccess = [self.assetWriter finishWriting];
                         if (!finalSuccess)
                              finalError = [self.assetWriter error];
                    }
               }
               // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
               [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:finalError];
          });
     }

     // Return success here to indicate whether the asset reader and writer were started successfully.
     return success;
}

During reencoding, the audio and video tracks are asynchronously handled on individual serialization queues to increase the overall performance of the process, but both queues are contained within the same dispatch group. By placing the work for each track within the same dispatch group, the group can send a notification when all of the work is done and the success of the reencoding process can be determined.

Handling Completion

To handle the completion of the reading and writing process, the readingAndWritingDidFinishSuccessfully:withError: method is called, with parameters indicating whether or not the reencoding completed successfully. If the process didn't finish successfully, the asset reader and writer are both canceled and any UI-related tasks are dispatched to the main queue.

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
     if (!success)
     {
          // If the reencoding process failed, we need to cancel the asset reader and writer.
          [self.assetReader cancelReading];
          [self.assetWriter cancelWriting];
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to failure.
          });
     }
     else
     {
          // Reencoding was successful, reset booleans.
          self.cancelled = NO;
          self.videoFinished = NO;
          self.audioFinished = NO;
          dispatch_async(dispatch_get_main_queue(), ^{
               // Handle any UI tasks here related to success.
          });
     }
}

Handling Cancellation

Using multiple serialization queues, you can allow the users of your app to cancel the reencoding process with ease. On the main serialization queue, messages are asynchronously sent to each of the asset reencoding serialization queues to cancel their reading and writing. When these two serialization queues complete their cancellation, the dispatch group sends a notification to the main serialization queue, where the cancelled property is set to YES. You might associate the cancel method from the following code with a button on your UI.

- (void)cancel
{
     // Handle cancellation asynchronously, but serialize it with the main queue.
     dispatch_async(self.mainSerializationQueue, ^{

          // If we had audio data to reencode, we need to cancel the audio work.
          if (self.assetWriterAudioInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the audio queue.
               dispatch_async(self.rwAudioSerializationQueue, ^{

                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.audioFinished;
                    self.audioFinished = YES;

                    if (oldFinished == NO)
                    {
                         [self.assetWriterAudioInput markAsFinished];
                    }

                    // Leave the dispatch group since the audio work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }

          if (self.assetWriterVideoInput)
          {
               // Handle cancellation asynchronously again, but this time serialize it with the video queue.
               dispatch_async(self.rwVideoSerializationQueue, ^{

                    // Update the Boolean property indicating the task is complete and mark the input as finished if it hasn't already been marked as such.
                    BOOL oldFinished = self.videoFinished;
                    self.videoFinished = YES;

                    if (oldFinished == NO)
                    {
                         [self.assetWriterVideoInput markAsFinished];
                    }

                    // Leave the dispatch group, since the video work is finished now.
                    dispatch_group_leave(self.dispatchGroup);
               });
          }

          // Set the cancelled Boolean property to YES to cancel any work on the main queue as well.
          self.cancelled = YES;
     });
}

Asset Output Settings Assistant

The AVOutputSettingsAssistant class aids in creating output-settings dictionaries for an asset reader or writer. This makes setup much simpler, especially for high frame rate H264 movies that have a number of specific presets. The following example shows how to use the output settings assistant:

AVOutputSettingsAssistant *outputSettingsAssistant = [AVOutputSettingsAssistant outputSettingsAssistantWithPreset:<some preset>];
CMFormatDescriptionRef audioFormat = [self getAudioFormat];

if (audioFormat != NULL)
    [outputSettingsAssistant setSourceAudioFormat:(CMAudioFormatDescriptionRef)audioFormat];

CMFormatDescriptionRef videoFormat = [self getVideoFormat];

if (videoFormat != NULL)
    [outputSettingsAssistant setSourceVideoFormat:(CMVideoFormatDescriptionRef)videoFormat];

CMTime assetMinVideoFrameDuration = [self getMinFrameDuration];
CMTime averageFrameDuration = [self getAvgFrameDuration];

[outputSettingsAssistant setSourceVideoAverageFrameDuration:averageFrameDuration];
[outputSettingsAssistant setSourceVideoMinFrameDuration:assetMinVideoFrameDuration];

AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<some URL> fileType:[outputSettingsAssistant outputFileType] error:NULL];
AVAssetWriterInput *audioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:[outputSettingsAssistant audioSettings] sourceFormatHint:audioFormat];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:[outputSettingsAssistant videoSettings] sourceFormatHint:videoFormat];