
Capturing video in iOS using MonoTouch

I have code that creates, configures, and starts a video capture session at runtime in Objective-C, and it runs without any problems. I ported the sample to C# with MonoTouch 4.0.3 and ran into a couple of issues. Here is the code:

void Initialize() 
    { 
     // Create notifier delegate class 
     captureVideoDelegate = new CaptureVideoDelegate(this); 

     // Create capture session 
     captureSession = new AVCaptureSession(); 
     captureSession.SessionPreset = AVCaptureSession.Preset640x480; 

     // Create capture device 
     captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video); 

     // Create capture device input 
     NSError error; 
     captureDeviceInput = new AVCaptureDeviceInput(captureDevice, out error); 
     captureSession.AddInput(captureDeviceInput); 

     // Create capture device output 
     captureVideoOutput = new AVCaptureVideoDataOutput(); 
     captureSession.AddOutput(captureVideoOutput); 
     captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA; 
     captureVideoOutput.MinFrameDuration = new CMTime(1, 30); 
     // 
     // ISSUE 1 
     // In the original Objective-C code I was creating a dispatch_queue_t object and passing it 
     // to the setSampleBufferDelegate:queue message, and it worked; here I could not find an 
     // equivalent to the queue mechanism. (Also not sure if the delegate should be used like this.) 
     // 
     captureVideoOutput.SetSampleBufferDelegatequeue(captureVideoDelegate, ???????); 

     // Create preview layer 
     previewLayer = AVCaptureVideoPreviewLayer.FromSession(captureSession); 
     previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeRight; 
     // 
     // ISSUE 2: 
     // Didn't find any VideoGravity-related enumeration in MonoTouch (not sure if a string will work) 
     // 
     previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill"; 
     previewLayer.Frame = new RectangleF(0, 0, 1024, 768); 
     this.View.Layer.AddSublayer(previewLayer); 

     // Start capture session 
     captureSession.StartRunning(); 

    } 


    public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate 
    { 
     private VirtualDeckViewController mainViewController; 

     public CaptureVideoDelegate(VirtualDeckViewController viewController) 
     { 
      mainViewController = viewController; 
     } 

     public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection) 
     { 
      // TODO: Implement - see: http://go-mono.com/docs/index.aspx?link=T%3aMonoTouch.Foundation.ModelAttribute 

     } 
    } 

Issue 1: I don't know how to use the delegate correctly with the SetSampleBufferDelegatequeue method. I also haven't found a mechanism equivalent to the dispatch_queue_t object, which works fine in Objective-C, to pass as the second parameter.

Issue 2: I couldn't find any VideoGravity enumeration in the MonoTouch libraries, and I don't know whether passing a string with the constant's value will work.

I have looked for clues to solve this, but there are no clear samples. Any sample or information on how to do the same in MonoTouch would be greatly appreciated.

Thanks a lot.

Answers


All the issues are finally working fine. The freezing happened because in my tests I was not yet disposing the sampleBuffer in the DidOutputSampleBuffer method. The final code for my view looks like this:

UPDATE 1: Changed the VideoSettings CVPixelFormat assignment; it was incorrect and caused a wrong BytesPerPixel in the sampleBuffer.

public partial class VirtualDeckViewController : UIViewController 
{ 
    public CaptureVideoDelegate captureVideoDelegate; 

    public AVCaptureVideoPreviewLayer previewLayer; 
    public AVCaptureSession captureSession; 
    public AVCaptureDevice captureDevice; 
    public AVCaptureDeviceInput captureDeviceInput; 
    public AVCaptureVideoDataOutput captureVideoOutput; 

...

public override void ViewDidLoad() 
    { 
     base.ViewDidLoad(); 

     SetupVideoCaptureSession(); 
    } 

    public void SetupVideoCaptureSession() 
    { 
     // Create notifier delegate class 
     captureVideoDelegate = new CaptureVideoDelegate(); 

     // Create capture session 
     captureSession = new AVCaptureSession(); 
     captureSession.BeginConfiguration(); 
     captureSession.SessionPreset = AVCaptureSession.Preset640x480; 

     // Create capture device 
     captureDevice = AVCaptureDevice.DefaultDeviceWithMediaType(AVMediaType.Video); 

     // Create capture device input 
     NSError error; 
     captureDeviceInput = new AVCaptureDeviceInput(captureDevice, out error); 
     captureSession.AddInput(captureDeviceInput); 

     // Create capture device output 
     captureVideoOutput = new AVCaptureVideoDataOutput(); 
     captureVideoOutput.AlwaysDiscardsLateVideoFrames = true; 
        // UPDATE: wrong VideoSettings assignment 
     //captureVideoOutput.VideoSettings.PixelFormat = CVPixelFormatType.CV32BGRA; 
        // UPDATE: correct VideoSettings assignment 
        captureVideoOutput.VideoSettings = new AVVideoSettings(CVPixelFormatType.CV32BGRA); 
     captureVideoOutput.MinFrameDuration = new CMTime(1, 30); 
     DispatchQueue dispatchQueue = new DispatchQueue("VideoCaptureQueue"); 
     captureVideoOutput.SetSampleBufferDelegateAndQueue(captureVideoDelegate, dispatchQueue); 
     captureSession.AddOutput(captureVideoOutput); 

     // Create preview layer 
     previewLayer = AVCaptureVideoPreviewLayer.FromSession(captureSession); 
     previewLayer.Orientation = AVCaptureVideoOrientation.LandscapeLeft; 
     previewLayer.VideoGravity = "AVLayerVideoGravityResizeAspectFill"; 
     previewLayer.Frame = new RectangleF(0, 0, 1024, 768); 
     this.View.Layer.AddSublayer(previewLayer); 

     // Start capture session 
     captureSession.CommitConfiguration(); 
     captureSession.StartRunning(); 
    } 

    public class CaptureVideoDelegate : AVCaptureVideoDataOutputSampleBufferDelegate 
    { 
     public CaptureVideoDelegate() : base() 
     { 
     } 

     public override void DidOutputSampleBuffer (AVCaptureOutput captureOutput, CMSampleBuffer sampleBuffer, AVCaptureConnection connection) 
     { 
      // TODO: Implement buffer processing 

      // Very important (buffer needs to be disposed or it will freeze) 
      sampleBuffer.Dispose(); 
     } 
    } 
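For reference, a quick way to confirm that the corrected VideoSettings assignment took effect is to inspect the buffer geometry inside the callback. This is only a sketch, assuming the MonoTouch.CoreVideo binding exposes Lock/Unlock, Width, Height, and BytesPerRow on CVPixelBuffer as shown:

     // Inside DidOutputSampleBuffer, before sampleBuffer.Dispose(): 
     using (var pixelBuffer = (CVPixelBuffer)sampleBuffer.GetImageBuffer()) 
     { 
      pixelBuffer.Lock(CVOptionFlags.None); 

      // With CV32BGRA, BytesPerRow should be Width * 4 (plus any row padding) 
      Console.WriteLine("w={0} h={1} bytesPerRow={2}", 
       pixelBuffer.Width, pixelBuffer.Height, pixelBuffer.BytesPerRow); 

      pixelBuffer.Unlock(CVOptionFlags.None); 
     } 

If BytesPerRow does not come out at roughly Width * 4, the output is not actually delivering BGRA frames.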

The last piece of the puzzle was answered by a Miguel de Icaza sample that I finally found here: link

Thanks Miguel and Pavel.


This is my code. Feel free to use it. I only removed the unimportant parts; all the initialization is there, as well as a sample of reading the output buffer.

Then I have code that processes the CVImageBuffer using a linked custom ObjC library. If you need to process this in MonoTouch, you need to go one step further and convert it to a CGImage or UIImage. There is no function for that in MonoTouch (AFAIK), so you need to bind it yourself from plain ObjC. The ObjC sample is here: how to convert a CVImageBufferRef to UIImage
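For those who would rather stay in managed code, a conversion along the lines of that ObjC sample can also be sketched directly in MonoTouch. Treat the following as an assumption-laden sketch (it relies on CVPixelBuffer exposing BaseAddress/BytesPerRow and on a CGBitmapContext overload taking CGBitmapFlags), not a confirmed MonoTouch recipe; the InitCapture and DidOutputSampleBuffer code follows below.

     using MonoTouch.CoreGraphics; 
     using MonoTouch.CoreVideo; 
     using MonoTouch.UIKit; 

     // Hypothetical helper: convert a CV32BGRA CVImageBuffer to a UIImage 
     static UIImage ImageFromBuffer (CVImageBuffer imageBuffer) 
     { 
      var pixelBuffer = (CVPixelBuffer)imageBuffer; 
      pixelBuffer.Lock (CVOptionFlags.None); 
      try 
      { 
       using (var colorSpace = CGColorSpace.CreateDeviceRGB()) 
       using (var context = new CGBitmapContext ( 
        pixelBuffer.BaseAddress, 
        pixelBuffer.Width, pixelBuffer.Height, 
        8, pixelBuffer.BytesPerRow, colorSpace, 
        CGBitmapFlags.PremultipliedFirst | CGBitmapFlags.ByteOrder32Little)) 
       using (var cgImage = context.ToImage()) 
       { 
        return UIImage.FromImage (cgImage); 
       } 
      } 
      finally 
      { 
       pixelBuffer.Unlock (CVOptionFlags.None); 
      } 
     } 

If the colors come out swapped, the byte-order flag is the first thing to revisit; binding the ObjC sample linked above remains the safer route.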

public void InitCapture() 
     { 
      try 
      { 
       // Setup the input 
       NSError error = new NSError(); 
       captureInput = new AVCaptureDeviceInput (AVCaptureDevice.DefaultDeviceWithMediaType (AVMediaType.Video), out error); 

       // Setup the output 
       captureOutput = new AVCaptureVideoDataOutput(); 
       captureOutput.AlwaysDiscardsLateVideoFrames = true; 
       captureOutput.SetSampleBufferDelegateAndQueue (avBufferDelegate, dispatchQueue); 
       captureOutput.MinFrameDuration = new CMTime (1, 10); 

       // Set the video output to store frame in BGRA (compatible across devices) 
       captureOutput.VideoSettings = new AVVideoSettings (CVPixelFormatType.CV32BGRA); 

       // Create a capture session 
       captureSession = new AVCaptureSession(); 
       captureSession.SessionPreset = AVCaptureSession.PresetMedium; 
       captureSession.AddInput (captureInput); 
       captureSession.AddOutput (captureOutput); 

       // Setup the preview layer 
       prevLayer = new AVCaptureVideoPreviewLayer (captureSession); 
       prevLayer.Frame = liveView.Bounds; 
       prevLayer.VideoGravity = "AVLayerVideoGravityResize"; // image may be slightly distorted, but red bar position will be accurate 

       liveView.Layer.AddSublayer (prevLayer); 

       StartLiveDecoding(); 
      } 
      catch (Exception ex) 
      { 
       Console.WriteLine (ex.ToString()); 
      } 
     } 

public void DidOutputSampleBuffer (AVCaptureOutput captureOutput, MonoTouch.CoreMedia.CMSampleBuffer sampleBuffer, AVCaptureConnection connection) 
     { 
      Console.WriteLine ("DidOutputSampleBuffer: enter"); 

      if (isScanning) 
      { 
       CVImageBuffer imageBuffer = sampleBuffer.GetImageBuffer(); 

       Console.WriteLine ("DidOutputSampleBuffer: calling decode"); 

       //  NSLog(@"got image w=%d h=%d bpr=%d",CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer), CVPixelBufferGetBytesPerRow(imageBuffer)); 
       // call the decoder 
       DecodeImage (imageBuffer); 
      } 
      else 
      { 
       Console.WriteLine ("DidOutputSampleBuffer: not scanning"); 
      } 

      Console.WriteLine ("DidOutputSampleBuffer: quit"); 
     } 

The StartLiveDecoding function called at the end of InitCapture doesn't do much work; it just calls captureSession.StartRunning(); to start video capture. – 2011-05-10 17:22:30
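Based on that description, StartLiveDecoding presumably reduces to something like this (a guess; the isScanning flag is assumed from its use in DidOutputSampleBuffer above):

     public void StartLiveDecoding () 
     { 
      isScanning = true;              // assumed: enables processing in DidOutputSampleBuffer 
      captureSession.StartRunning (); // start video capture (the only confirmed call) 
     } 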


Thanks, that means there is a working solution in MonoTouch. The answer to issue #2 is in there, but I still don't know how your dispatchQueue was created. I guess avBufferDelegate is an instance of a class derived from the delegate class. The remaining question is about the dispatchQueue. Thanks a lot, Pavel; the buffer conversion is not a problem. – 2011-05-10 21:53:37


I managed to get the capture session creation working; one of the problems was trying to use SetSampleBufferDelegatequeue instead of SetSampleBufferDelegateAndQueue (not sure what the difference is). But now I have a problem: the image in the preview freezes after a few frames, yet if I put a breakpoint in DidOutputSampleBuffer, the image in the preview layer keeps displaying fine while execution is stopped at that breakpoint. I guess it has to do with the way I create the dispatch queue. Any clues on how to set up the dispatch queue correctly? Thanks for any help. – 2011-05-11 13:16:01
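For completeness, the dispatch queue setup that ended up working is the one shown in the accepted answer above; the essential lines are just:

     using MonoTouch.CoreFoundation; 

     // A dedicated queue for the sample-buffer callbacks 
     DispatchQueue dispatchQueue = new DispatchQueue ("VideoCaptureQueue"); 
     captureVideoOutput.SetSampleBufferDelegateAndQueue (captureVideoDelegate, dispatchQueue); 

The freeze itself was cured not by the queue but by disposing each sampleBuffer in DidOutputSampleBuffer, as noted in the accepted answer.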