
Cropped face from CameraSource

I am implementing the sample given in the google-vision face tracker. The MyFaceDetector class:

import android.util.SparseArray;

import com.google.android.gms.vision.Detector;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.face.Face;

public class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    // Delegates detection to the wrapped FaceDetector.
    @Override
    public SparseArray<Face> detect(Frame frame) {
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}

The FaceTrackerActivity class:

private void createCameraSource() {

    imageView = (ImageView) findViewById(R.id.face);

    // Wrap the stock FaceDetector so each frame can be intercepted in detect().
    FaceDetector faceDetector = new FaceDetector.Builder(this).build();
    myFaceDetector = new MyFaceDetector(faceDetector);
    myFaceDetector.setProcessor(new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
            .build());

    // Feed the camera preview into the wrapped detector.
    mCameraSource = new CameraSource.Builder(this, myFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(60.0f)
            .build();

    if (!myFaceDetector.isOperational()) {
        Log.w(TAG, "Face detector dependencies are not yet available.");
    }
}

I need to crop the face and put it into the ImageView. I cannot get at the image in my custom detector: frame.getBitmap() always returns null inside detect(Frame frame). How can I achieve this?


Take a look at https://stackoverflow.com/questions/32299947/mobile-vision-api-concatenate-new-detector-object-to-continue-frame-processing/32314136#32314136 – George

Answers


frame.getBitmap() will only return a value if the frame was originally created from a bitmap. CameraSource supplies the image information as ByteBuffers rather than bitmaps, so that is the image information that is available.

frame.getGrayscaleImageData() will return the image data.

frame.getMetadata() will return metadata such as the image dimensions and the image format.
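
For anyone else hitting the null bitmap, here is a minimal sketch (not part of the original answer) of how the NV21 buffer from frame.getGrayscaleImageData() can be converted into a Bitmap, using the dimensions from frame.getMetadata(). The helper name frameToBitmap is my own; it assumes the frame came from CameraSource and therefore carries NV21 data backed by a byte array:

// Sketch: convert a CameraSource frame (NV21) into a Bitmap.
// Needs android.graphics.{Bitmap, BitmapFactory, ImageFormat, Rect, YuvImage}
// and java.io.ByteArrayOutputStream.
private static Bitmap frameToBitmap(Frame frame) {
    int width = frame.getMetadata().getWidth();
    int height = frame.getMetadata().getHeight();

    // For camera frames this buffer is the full NV21 data and wraps a byte[].
    byte[] nv21 = frame.getGrayscaleImageData().array();

    YuvImage yuvImage = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);
    byte[] jpeg = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}

Note that this ignores the frame's rotation; rotate the decoded bitmap afterwards if needed.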


You are right! I will post the code below in case anyone else is looking for something similar. – Andro


This goes into CameraSource.java:

Frame outputFrame = new Frame.Builder()
        .setImageData(mPendingFrameData, mPreviewSize.getWidth(),
                mPreviewSize.getHeight(), ImageFormat.NV21)
        .setId(mPendingFrameId)
        .setTimestampMillis(mPendingTimeMillis)
        .setRotation(mRotation)
        .build();

int w = outputFrame.getMetadata().getWidth();
int h = outputFrame.getMetadata().getHeight();
SparseArray<Face> detectedFaces = mDetector.detect(outputFrame);

// Fallback bitmap in case no face is detected in this frame.
Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);

if (detectedFaces.size() > 0) {
    // For camera frames the grayscale buffer holds the full NV21 data.
    ByteBuffer byteBufferRaw = outputFrame.getGrayscaleImageData();
    byte[] byteBuffer = byteBufferRaw.array();
    YuvImage yuvimage = new YuvImage(byteBuffer, ImageFormat.NV21, w, h, null);

    // Bounding box of the first detected face, in frame coordinates.
    Face face = detectedFaces.valueAt(0);
    int left = (int) face.getPosition().x;
    int top = (int) face.getPosition().y;
    int right = (int) face.getWidth() + left;
    int bottom = (int) face.getHeight() + top;

    // Compress only the face region to JPEG, then decode it into a bitmap.
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    yuvimage.compressToJpeg(new Rect(left, top, right, bottom), 80, baos);
    byte[] jpegArray = baos.toByteArray();
    bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
}

((FaceTrackerActivity) mContext).setBitmapToImageView(bitmap);
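
setBitmapToImageView is a method the asker presumably added to FaceTrackerActivity; it is not shown in the post. A minimal sketch of what it might look like, assuming the imageView field from createCameraSource() and remembering that detection runs off the main thread:

// Hypothetical helper in FaceTrackerActivity, called from CameraSource.java above.
public void setBitmapToImageView(final Bitmap bitmap) {
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            // Show the cropped face in the ImageView from createCameraSource().
            imageView.setImageBitmap(bitmap);
        }
    });
}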