
I have two camera sources coming into an OSX application, and I am trying to save them with AVCaptureMovieFileOutput. Over time the videos drift out of sync: after a minute of testing they can be off by 1 to 5 seconds, and after an hour of testing they were off by about 20 seconds. I feel there must be some simple solution to keep the two outputs in sync. We have tried using the same device for both sessions and outputs and ran into the same problem, and we tried dropping the frame rate to 15 fps, but still no luck. How can I keep two AVCaptureMovieFileOutputs synchronized?

Setting up the outputs:

func assignDeviceToPreview(captureSession: AVCaptureSession, device: AVCaptureDevice, previewView: NSView, index: Int){

    captureSession.stopRunning()

    captureSession.beginConfiguration()

    //clearing out old inputs
    for input in captureSession.inputs {
        let i = input as! AVCaptureInput
        captureSession.removeInput(i)
    }

    let output = self.outputs[index]
    output.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

    //removing old outputs
    for o in captureSession.outputs {
        if let oc = o as? AVCaptureStillImageOutput {
            captureSession.removeOutput(oc)
            print("removed image out")
        }
    }

    //Adding input
    do {

        try captureSession.addInput(AVCaptureDeviceInput(device: device))

        let camViewLayer = previewView.layer!
        camViewLayer.backgroundColor = CGColorGetConstantColor(kCGColorBlack)

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = camViewLayer.bounds
        previewLayer.autoresizingMask = [.LayerWidthSizable, .LayerHeightSizable]

        camViewLayer.addSublayer(previewLayer)

        let overlayPreview = overlayPreviews[index]
        overlayPreview.frame.origin = CGPoint.zero

        previewView.addSubview(overlayPreview)

        //adding output
        captureSession.addOutput(output)

        if captureSession == session2 {
            let audio = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

            do {
                let input = try AVCaptureDeviceInput(device: audio)
                captureSession.addInput(input)
            }
        }

    } catch {
        print("Failed to add webcam as AV input")
    }

    captureSession.commitConfiguration()
    captureSession.startRunning()
}

Starting the recording:

func startRecording(){

    startRecordingTimer()

    let base = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0]
    let appFolder = "Sessions"
    let sessionFolder = "session_" + session.UUID

    let path = base + "/" + appFolder + "/" + sessionFolder

    //create the session folder if it does not exist yet
    do {
        try NSFileManager.defaultManager().createDirectoryAtPath(path, withIntermediateDirectories: true, attributes: nil)
    } catch {
        print("issue creating folder")
    }

    for fileOutput in fileOutputs {

        let fileName = "cam\(String(fileOutputs.indexOf(fileOutput)!))" + ".mov"

        let fileURL = NSURL.fileURLWithPathComponents([path, fileName])
        fileURLs.append(fileURL!)
        print(fileURL?.absoluteString)

        //lock the connection to 15 fps
        var captureConnection = fileOutput.connections.first as? AVCaptureConnection
        captureConnection!.videoMinFrameDuration = CMTimeMake(1, 15)
        captureConnection!.videoMaxFrameDuration = CMTimeMake(1, 15)

        //per-camera H.264 output settings
        if fileOutput == movieFileOutput1 {
            fileOutput.setOutputSettings([AVVideoScalingModeKey: AVVideoScalingModeResize, AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: 1280, AVVideoHeightKey: 720], forConnection: captureConnection)
        } else {
            fileOutput.setOutputSettings([AVVideoScalingModeKey: AVVideoScalingModeResizeAspect, AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: 640, AVVideoHeightKey: 360], forConnection: captureConnection)
        }
        captureConnection = fileOutput.connections.first as? AVCaptureConnection
        print(fileOutput.outputSettingsForConnection(captureConnection))

        fileOutput.startRecordingToOutputFileURL(fileURL, recordingDelegate: self)

        print("start recording")
    }

}

Answer


For precise timing control, I think you will need to look at using the lower-level AVAssetWriter framework. It gives you control over the writing and timing of individual frames.

Using AVAssetWriter.startSession(atSourceTime: CMTime), you can precisely control when each camera's recording begins.
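
As a rough, untested sketch in the same Swift 2-era API as the code above (where that method is spelled startSessionAtSourceTime(_:)): each camera gets its own AVAssetWriter plus AVAssetWriterInput, and both writers are started with the same source time so the two files share one timeline. The helper names makeWriter, startWriters, and sharedStartTime are just for this sketch, not part of any Apple API:

import AVFoundation
import CoreMedia

//one writer + input per camera; both writers get started at the same source time
func makeWriter(fileURL: NSURL, width: Int, height: Int) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(URL: fileURL, fileType: AVFileTypeQuickTimeMovie)

    let settings: [String: AnyObject] = [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
    input.expectsMediaDataInRealTime = true //required for live capture
    writer.addInput(input)
    return (writer, input)
}

//give every writer the *same* source time so frame timestamps from both
//cameras land on a common timeline
func startWriters(writers: [AVAssetWriter], atSourceTime sharedStartTime: CMTime) {
    for writer in writers {
        writer.startWriting()
        writer.startSessionAtSourceTime(sharedStartTime)
    }
}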

During writing, an AVCaptureVideoDataOutputSampleBufferDelegate lets you post-process each CMSampleBuffer that is produced, adjusting its timing information to keep the two videos in sync. For the CMSampleBuffer timing APIs, see https://developer.apple.com/reference/coremedia/1669345-cmsamplebuffer
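
Here is an untested sketch of what such a delegate might look like, again using the Swift 2-era API. The class name SyncedCameraRecorder and the stored properties (assetWriterInput, recordingStartTime, firstFrameTime) are assumptions of this sketch, not from the question's code; the idea is to rewrite each buffer's presentation timestamp relative to a start time shared by both cameras before appending it:

import AVFoundation
import CoreMedia

//retimes incoming frames against a shared start time before appending them
class SyncedCameraRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    let assetWriterInput: AVAssetWriterInput
    let recordingStartTime: CMTime //the same value is handed to both cameras' recorders
    private var firstFrameTime: CMTime?

    init(assetWriterInput: AVAssetWriterInput, recordingStartTime: CMTime) {
        self.assetWriterInput = assetWriterInput
        self.recordingStartTime = recordingStartTime
        super.init()
    }

    func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

        guard assetWriterInput.readyForMoreMediaData else { return }

        //compute the frame's time relative to the shared start time instead of
        //trusting each device's own timeline
        let devicePTS = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if firstFrameTime == nil { firstFrameTime = devicePTS }
        let adjustedPTS = CMTimeAdd(recordingStartTime, CMTimeSubtract(devicePTS, firstFrameTime!))

        //copy the buffer with the adjusted timing information
        var timing = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                        presentationTimeStamp: adjustedPTS,
                                        decodeTimeStamp: kCMTimeInvalid)
        var retimed: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sampleBuffer, 1, &timing, &retimed)

        if let retimed = retimed {
            assetWriterInput.appendSampleBuffer(retimed)
        }
    }
}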

That said, I have never tried this myself, so I cannot say for certain, but I believe that if you go down this path you will get close to what you are trying to achieve.


Thanks. I will look into this and get back to you.


Thank you so much. It took a lot of Googling and trial and error, but I did get to the point where I can control it manually: I write out each frame myself and base its timing on a real-time calculation rather than on what the buffer thinks is correct. Both videos came out at 1 hour, 23 minutes, 58 seconds. Now on to figuring out the audio part.


Awesome, glad to help!