Oier Etchelecou February 2016

objective-c - AVAssetReader and AVAssetWriter to overlay video

I am trying to overlay a recorded video with some images using AVAssetReader and AVAssetWriter. Following this tutorial, I am able to copy a video (and audio) into a new file. Now my objective is to overlay some of the initial video frames with some images, with this code (my reader/writer setup is sketched after the loop):

while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
    // Get the next video sample buffer, and append it to the output file.
    CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];

    if (sampleBuffer != NULL)
    {
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);

        // Build a Core Image context and a text image to draw over the frame.
        // (I know creating these per frame is wasteful; they could be made once outside the loop.)
        EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
        UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
        NSDictionary *attributes = @{NSFontAttributeName : font, NSForegroundColorAttributeName : [UIColor lightTextColor]};
        UIImage *img = [self imageFromText:@"test" :attributes];
        CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];

        // Render the text image into the frame's pixel buffer.
        [ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];

        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

        BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        sampleBuffer = NULL;
        completedOrFailed = !success;
    }
    else
    {
        completedOrFailed = YES;
    }
}
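
For reference, the reader and writer around this loop are set up following the tutorial, roughly like this (a simplified sketch from memory of the tutorial's structure; sourceURL, outputURL, and the exact settings are placeholders, error handling omitted):

NSError *error = nil;
AVAsset *asset = [AVAsset assetWithURL:sourceURL];
AVAssetTrack *assetVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

// Reader: decodes samples from the source video track (decompression settings as in the tutorial).
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:asset error:&error];
NSDictionary *decompressionVideoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_422YpCbCr8) };
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
[assetReader addOutput:assetReaderVideoOutput];

// Writer: re-encodes the (possibly modified) frames to H.264 at the source dimensions.
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outputURL fileType:AVFileTypeQuickTimeMovie error:&error];
NSDictionary *compressionSettings = @{AVVideoCodecKey : AVVideoCodecH264,
                                      AVVideoWidthKey : @(assetVideoTrack.naturalSize.width),
                                      AVVideoHeightKey : @(assetVideoTrack.naturalSize.height)};
assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:compressionSettings];
[assetWriter addInput:assetWriterVideoInput];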

Answers


Oier Etchelecou February 2016

I found the solution. Two fixes were needed.

To overlay video frames, you first need to change the reader's decompression settings:

NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *decompressionVideoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
// If there is a video track to read, set the decompression settings to BGRA and create the asset reader output.
assetReaderVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
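
The tutorial's YUV output (kCVPixelFormatType_422YpCbCr8) is not a format Core Image renders into directly, while 32BGRA is, so after this change CMSampleBufferGetImageBuffer hands back a pixel buffer that the CIContext render call from the question can actually draw into. The same settings can be written more compactly with dictionary literals:

NSDictionary *decompressionVideoSettings = @{ (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };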

To get each frame's timestamp, you have to read the video's duration and frame rate first, then advance a counter as frames are copied (for example, a 30 fps clip gives a timePerFrame of about 0.033 s):

durationSeconds = CMTimeGetSeconds(asset.duration);
timePerFrame = 1.0 / (Float64)assetVideoTrack.nominalFrameRate;
totalFrames = durationSeconds * assetVideoTrack.nominalFrameRate;

Then, inside this loop

while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)

you can compute the current frame's timestamp:

CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
if (sampleBuffer != NULL) {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer) {
        Float64 secondsIn = ((float)counter / totalFrames) * durationSeconds;
        CMTime imageTimeEstimate = CMTimeMakeWithSeconds(secondsIn, 600);
        mergeTime = CMTimeGetSeconds(imageTimeEstimate);
        counter++;
    }
}
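
Putting both fixes together with the overlay code from the question, the loop looks roughly like this (a sketch, not the exact code from my project; imageFromText: comes from the question and the two-second overlay window is just an example):

// Create the rendering objects once, outside the loop.
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger counter = 0;

while ([assetWriterVideoInput isReadyForMoreMediaData] && !completedOrFailed)
{
    CMSampleBufferRef sampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
    if (sampleBuffer != NULL)
    {
        // With the 32BGRA decompression settings this is a usable pixel buffer.
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        if (pixelBuffer)
        {
            Float64 secondsIn = ((float)counter / totalFrames) * durationSeconds;
            counter++;

            // Only overlay the first couple of seconds (example threshold).
            if (secondsIn < 2.0)
            {
                CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
                NSDictionary *attributes = @{NSFontAttributeName : font, NSForegroundColorAttributeName : [UIColor lightTextColor]};
                UIImage *img = [self imageFromText:@"test" :attributes];
                CIImage *overlay = [[CIImage alloc] initWithCGImage:img.CGImage];
                [ciContext render:overlay toCVPixelBuffer:pixelBuffer bounds:[overlay extent] colorSpace:rgbColorSpace];
                CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
            }
        }
        BOOL success = [assetWriterVideoInput appendSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
        completedOrFailed = !success;
    }
    else
    {
        completedOrFailed = YES;
    }
}
CGColorSpaceRelease(rgbColorSpace);

(As an aside, CMSampleBufferGetPresentationTimeStamp(sampleBuffer) would give you each frame's timestamp directly, without the counter arithmetic.)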

I hope it helps!
