
After spending a long time on this problem without getting anywhere, I decided to ask here: recording video with AVCaptureVideoDataOutput in Swift 3.

We use AVCaptureVideoDataOutput to get the camera's live pixel data and process it in the captureOutput callback. But we also want to use this same data to record video, and we would like to know whether such a recording ends up compressed as much as a video recorded with AVCaptureMovieFileOutput.

For the record, we can record with AVCaptureMovieFileOutput without any problem. But AVCaptureMovieFileOutput and AVCaptureVideoDataOutput cannot work at the same time.
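
For context, here is a minimal sketch of the kind of capture setup we assume feeds captureOutput (the class name, preset, pixel format, and queue label are our choices, not from the original post):

import AVFoundation

class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    let session = AVCaptureSession()

    func setupSession() {
        session.sessionPreset = AVCaptureSessionPresetHigh

        // Attach the default back camera as input
        if let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo),
           let input = try? AVCaptureDeviceInput(device: camera),
           session.canAddInput(input) {
            session.addInput(input)
        }

        // Deliver raw frames to captureOutput on a background queue
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String:
                                    NSNumber(value: kCVPixelFormatType_32BGRA)]
        dataOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "videoDataQueue"))
        if session.canAddOutput(dataOutput) {
            session.addOutput(dataOutput)
        }

        session.startRunning()
    }
}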

You can find our captureOutput function below:

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) { 

    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return } 

    CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0)) 

    let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) 
    let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer) 
    videoWidth      = CVPixelBufferGetWidth(imageBuffer)  // stored in instance properties so the 
    videoHeight     = CVPixelBufferGetHeight(imageBuffer) // asset writer can reuse them later 
    let colorSpace  = CGColorSpaceCreateDeviceRGB() 

    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue) 

    // Wrap the pixel buffer in a CGContext so we can read it as an image
    let context = CGContext(data: baseAddress, width: videoWidth, height: videoHeight,
                            bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                            space: colorSpace, bitmapInfo: bitmapInfo.rawValue) 

    let imageRef = context!.makeImage() 

    CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0)) 

    let data = imageRef!.dataProvider!.data as! NSData 
    let pixels = data.bytes.assumingMemoryBound(to: UInt8.self) 

    /* What we do with the pixel data is irrelevant to the question,
       so we omitted the rest of the code to keep it simple */ 

} 

Answer


After spending a good part of my life on this, I figured out how to record video while still receiving the pixel data, so I can run some basic analysis on the live video.

First, I set up the AVAssetWriter and call this function before issuing the actual recording command.

var sampleBufferGlobal: CMSampleBuffer? 
let writerFileName = "tempVideoAsset.mov" 
var presentationTime: CMTime! 
var outputSettings = [String: Any]() 
var videoWriterInput: AVAssetWriterInput! 
var assetWriter: AVAssetWriter! 


func setupAssetWriter() { 

    // Delete any leftover file from a previous recording
    eraseFile(fileToErase: writerFileName) 

    // The first buffer's timestamp becomes the writer session's start time
    presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBufferGlobal!) 

    outputSettings = [AVVideoCodecKey  : AVVideoCodecH264, 
                      AVVideoWidthKey  : NSNumber(value: Float(videoWidth)), 
                      AVVideoHeightKey : NSNumber(value: Float(videoHeight))] as [String : Any] 

    videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: outputSettings) 

    assetWriter = try? AVAssetWriter(outputURL: createFileURL(writerFileName), fileType: AVFileTypeQuickTimeMovie) 

    assetWriter.add(videoWriterInput) 

} 
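
eraseFile(fileToErase:) and createFileURL(_:) are helpers the answer calls but never shows. Here is one plausible sketch, assuming the file lives in the app's Documents directory (both implementations are hypothetical):

func createFileURL(_ fileName: String) -> URL { 
    // Hypothetical helper: build a URL in the Documents directory
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0] 
    return documents.appendingPathComponent(fileName) 
} 

func eraseFile(fileToErase fileName: String) { 
    // Hypothetical helper: delete any leftover file from an earlier run
    let url = createFileURL(fileName) 
    if FileManager.default.fileExists(atPath: url.path) { 
        try? FileManager.default.removeItem(at: url) 
    } 
} 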

I wrote another function to do the actual recording and call it from captureOutput. In that same function, before calling it, I copy the incoming buffer into sampleBufferGlobal (sampleBufferGlobal = sampleBuffer); a call-site sketch follows after the function below.

func writeVideoFromData() { 

    if assetWriter?.status == AVAssetWriterStatus.unknown { 

        // First buffer: start writing and anchor the session
        // to that buffer's presentation time
        assetWriter?.startWriting() 
        assetWriter?.startSession(atSourceTime: presentationTime) 
    } 

    if assetWriter?.status == AVAssetWriterStatus.writing { 

        if videoWriterInput.isReadyForMoreMediaData { 

            if videoWriterInput.append(sampleBufferGlobal!) == false { 

                print("we have a problem writing video") 
            } 
        } 
    } 
} 
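
Putting it together, the call site inside captureOutput looks roughly like this (a sketch; the isRecording flag and the nil check on assetWriter are our additions, not part of the original answer):

// inside captureOutput, after the pixel analysis shown in the question
if isRecording {                          // hypothetical flag toggled by the record button
    sampleBufferGlobal = sampleBuffer     // keep the buffer the writer will append
    if assetWriter == nil {               // lazily set up the writer on the first frame
        setupAssetWriter() 
    } 
    writeVideoFromData() 
} 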

Then, to stop recording, I use the following function.

func stopAssetWriter() { 

    videoWriterInput.markAsFinished() 

    assetWriter?.finishWriting(completionHandler: { 

        if self.assetWriter?.status == AVAssetWriterStatus.failed { 

            print("creating movie file failed") 

        } else { 

            print("creating movie file was a success") 

            DispatchQueue.main.async(execute: { () -> Void in 
                // back on the main queue: update the UI or hand off
                // the finished file here
            }) 
        } 
    }) 
}
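
As for the compression question in the post: AVAssetWriter encodes with H.264 here just as AVCaptureMovieFileOutput typically does, so the file should be in the same ballpark, and you can steer the size explicitly through AVVideoCompressionPropertiesKey. A sketch of such settings (the 6 Mbit/s bitrate and the profile level are illustrative values we chose, not from the original answer):

outputSettings = [AVVideoCodecKey  : AVVideoCodecH264, 
                  AVVideoWidthKey  : NSNumber(value: Float(videoWidth)), 
                  AVVideoHeightKey : NSNumber(value: Float(videoHeight)), 
                  AVVideoCompressionPropertiesKey : [ 
                      AVVideoAverageBitRateKey : NSNumber(value: 6000000), // target bitrate, illustrative
                      AVVideoProfileLevelKey   : AVVideoProfileLevelH264HighAutoLevel 
                  ]] as [String : Any] 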