Slow face detection on Android

Hi, my face detection thread is working too slowly.
I start this thread from onPreviewFrame only if the thread is not already running, otherwise I skip the frame. Once the thread detects a face, I invalidate the view so its onDraw can draw a rectangle:
public void run() {
    // FaceDetector needs an RGB_565 bitmap; we look for at most one face here
    FaceDetector faceDetector = new FaceDetector(bitmapImg.getWidth(), bitmapImg.getHeight(), 1);
    numOfFacesDetected = faceDetector.findFaces(bitmapImg, detectedFaces);
    if (numOfFacesDetected != 0) {
        // findFaces fills the Face[] array, so read the result from its first element
        detectedFaces[0].getMidPoint(eyesMidPoint);
        eyesDistance = detectedFaces[0].eyesDistance();
        handler.post(new Runnable() {
            public void run() {
                mPrev.invalidate();
                // turn off thread lock
            }
        });
        mPrev.setEyesDistance(eyesDistance);
        mPrev.setEyesMidPoint(eyesMidPoint);
    }
    isThreadWorking = false;
}
public void onPreviewFrame(byte[] yuv, Camera camera) {
    // Skip this frame if the previous detection is still running
    if (isThreadWorking)
        return;
    isThreadWorking = true;
    ByteBuffer bbuffer = ByteBuffer.wrap(yuv);
    bbuffer.get(grayBuff_, 0, bufflen_);
    detectThread = new FaceDetectThread(handler);
    detectThread.setBuffer(grayBuff_);
    detectThread.start();
}
My question is: is it taking too long because I am working with a bitmap rather than the grayscale data? How can I improve the speed?

The FaceDetector API is not really made to process frames in a live preview; it's way too slow for that.
If you are running on a fairly new device, a better option is the Camera.FaceDetectionListener API, available from API level 14 (Android 4.0). It is very fast and can be used to create an overlay on a preview SurfaceHolder.
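A minimal sketch of that approach, reusing the question's mCamera and mPrev fields (the setFaces/invalidate overlay calls on mPrev are illustrative, not part of the Camera API):
mCamera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        if (faces.length > 0) {
            // face.rect is reported in the camera's -1000..1000 coordinate space;
            // map it to view coordinates before drawing the overlay rectangle
            mPrev.setFaces(faces);
            mPrev.invalidate();
        }
    }
});
mCamera.startPreview();
mCamera.startFaceDetection(); // must be called after startPreview()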

Related

Why are my OpenGL textures painted pink?

I set up WebRTC on Android (peer-to-peer video chat). When I draw the texture that comes from the local camera, everything is fine, but when I try to draw the texture that comes from the remote smartphone I get a pink image, something like this:
On the WebRTC side I just do the following to get the remote stream:
mRemoteVideoTrack = getRemoteVideoTrack();
mRemoteVideoTrack.setEnabled(true);
mRemoteVideoTrack.addSink(mRemoteProxyVideoSink);

private VideoTrack getRemoteVideoTrack() {
    for (RtpTransceiver transceiver : mPeerConnection.getTransceivers()) {
        MediaStreamTrack track = transceiver.getReceiver().track();
        if (track instanceof VideoTrack) {
            return (VideoTrack) track;
        }
    }
    return null;
}
and I get the texture id in the mRemoteProxyVideoSink:
private class RemoteProxyVideoSink implements VideoSink {
    @Override
    synchronized public void onFrame(VideoFrame frame) {
        VideoFrame.TextureBuffer textureBuffer = (VideoFrame.TextureBuffer) frame.getBuffer();
        mTextureID = textureBuffer.getTextureId();
        // ... draw mTextureID (on the UI thread, because onFrame is not fired on the UI thread) ...
    }
}
Any idea why my textures are painted pink?
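For what it's worth, a diagnostic sketch (illustrative, not the poster's code) that logs what kind of buffer the remote track actually delivers before drawing: in the WebRTC Android API a TextureBuffer of type OES must be sampled as GL_TEXTURE_EXTERNAL_OES (samplerExternalOES) rather than with a regular sampler2D, and remote frames may also arrive as I420 (YUV) buffers rather than textures.
@Override
synchronized public void onFrame(VideoFrame frame) {
    VideoFrame.Buffer buffer = frame.getBuffer();
    if (buffer instanceof VideoFrame.TextureBuffer) {
        VideoFrame.TextureBuffer textureBuffer = (VideoFrame.TextureBuffer) buffer;
        // Type.OES needs a samplerExternalOES shader; Type.RGB can use sampler2D
        Log.d("RemoteSink", "texture buffer, type=" + textureBuffer.getType());
        mTextureID = textureBuffer.getTextureId();
    } else {
        // Decoded remote frames can also be I420 byte buffers instead of textures
        Log.d("RemoteSink", "non-texture buffer: " + buffer.getClass().getSimpleName());
    }
}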

SurfaceTexture object is not getting the frames from a Surface class

I have a Surface class which, when instantiated, automatically initializes a new thread and starts grabbing frames from a streaming source via native code based on FFmpeg. Here are the main parts of the code for this Surface class:
public class StreamingSurface extends Surface implements Runnable {

    ...

    public StreamingSurface(SurfaceTexture surfaceTexture, int width, int height) {
        super(surfaceTexture);
        screenWidth = width;
        screenHeight = height;
        init();
    }

    public void init() {
        mDrawTop = 0;
        mDrawLeft = 0;
        mVideoCurrentFrame = 0;
        this.setVideoFile();
        this.startPlay();
    }

    public void setVideoFile() {
        // Initialise FFMPEG
        naInit("");
        // Get stream video resolution
        int[] res = naGetVideoRes();
        mDisplayWidth = (int) (res[0]);
        mDisplayHeight = (int) (res[1]);
        // Prepare the display bitmap that native code will fill
        mBitmap = Bitmap.createBitmap(mDisplayWidth, mDisplayHeight, Bitmap.Config.ARGB_8888);
        naPrepareDisplay(mBitmap, mDisplayWidth, mDisplayHeight);
    }

    public void startPlay() {
        thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        while (true) {
            while (2 == mStatus) {
                // paused
                SystemClock.sleep(100);
            }
            mVideoCurrentFrame = naGetVideoFrame();
            if (0 < mVideoCurrentFrame) {
                // success, redraw
                if (isValid()) {
                    Canvas canvas = lockCanvas(null);
                    if (null != mBitmap) {
                        canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
                    }
                    unlockCanvasAndPost(canvas);
                }
            } else {
                // failure, probably end of video; break
                naFinish(mBitmap);
                mStatus = 0;
                break;
            }
        }
    }
}
In my MainActivity class, I instantiated this class in the following way:
public void startCamera(int texture)
{
    mSurface = new SurfaceTexture(texture);
    mSurface.setOnFrameAvailableListener(this);
    Surface surface = new StreamingSurface(mSurface, 640, 360);
    surface.release();
}
I read the following line in the Android developer page, regarding the Surface class constructor:
"Images drawn to the Surface will be made available to the SurfaceTexture, which can attach them to an OpenGL ES texture via updateTexImage()."
That is exactly what I want to do, and I have everything ready for further rendering. But with the above code, the frames drawn in the Surface class are never made available to the corresponding SurfaceTexture. I know this because the debugger, for instance, never reaches the OnFrameAvailableListener method associated with that SurfaceTexture.
Any ideas? Maybe the fact that I am using a thread to call the drawing functions is messing everything up? In that case, what alternatives do I have for grabbing the frames?
Thanks in advance.
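For reference, the consumer side that the quoted documentation describes looks roughly like the sketch below (names such as textureId, frameAvailable and texMatrix are illustrative, not the poster's code). onFrameAvailable only fires once a frame has actually been queued to a still-valid Surface, and updateTexImage() must be called on the thread that owns the GL context the texture belongs to.
// Consumer-side sketch: listen for queued frames, then latch them on the GL thread
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // Fired when the producer (e.g. Canvas drawing on the Surface) queues a frame
        frameAvailable = true; // signal the GL thread
    }
});

// Later, on the GL thread, once frameAvailable has been signalled:
surfaceTexture.updateTexImage();              // latch the newest frame into the GL texture
surfaceTexture.getTransformMatrix(texMatrix); // use this matrix when drawing the texture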

Unable to blit from External Texture to EGLSurface in android

When I try to render the texture and transformation matrix to the EGLSurface, nothing is displayed in the view.
As a follow-up to this issue, I have slightly modified the code, following fadden's Grafika continuous-capture sample.
Here is my code:
Here is the draw method, which runs on the RenderThread.
It is invoked properly whenever data is produced at the producer end by the native code.
public void drawFrame() {
    mOffScreenSurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    mSurfaceWindowUser.makeCurrent();
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mSurfaceWindowUser.swapBuffers();
}
The run method of the RenderThread:
public void run() {
    Looper.prepare();
    mHandler = new RenderHandler(this);
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mOffScreenSurface = new OffscreenSurface(mEglCore, 640, 480);
    mOffScreenSurface.makeCurrent();
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    // This surface is sent to native code, which wraps it in an ANativeWindow
    // and copies the data into it using post (the producer).
    mCameraSurface = new Surface(mCameraTexture);
    mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {
            Log.d(TAG, "Long breath.. data is pumped by the native-layer producer..");
            mHandler.frameReceivedFromProducer();
        }
    });
    // mSurfaceUser is the Surface received from the MainActivity's TextureView.
    mSurfaceWindowUser = new WindowSurface(mEglCore, mSurfaceUser, false);
}
To confirm that the producer on the native side is producing data: if I pass the user surface directly, without any EGL configuration, the frames are rendered to the screen.
At the native level:
geometryResult = ANativeWindow_setBuffersGeometry(userNaiveWindow, 640, 480, WINDOW_FORMAT_RGBA_8888);
To render the frame I use ANativeWindow_lock() and ANativeWindow_unlockAndPost() to write the frame directly into the window buffer.
I cannot figure out what could be wrong or where I need to dig further.
Thanks fadden for your help.

Video frame capture in Android

I'm trying to implement an Android app that captures about 1 picture per second, performs some processing on each picture and sends the output to a file for storage. My first pass at this tries something like the following:
public class MainActivity extends Activity {
    ...
    Handler loopHandler = new Handler();

    Runnable loopRunnable = new Runnable() {
        @Override
        public void run() {
            Thread pictureThread = new Thread(pictureRunnable);
            pictureThread.start();
            loopHandler.postDelayed(this, 1000);
        }
    };

    Runnable pictureRunnable = new Runnable() {
        @Override
        public void run() {
            mCamera.takePicture(null, null, mPicture);
        }
    };

    private PictureCallback mPicture = new PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            ... My processing code ...
        }
    };
The app freezes after taking about 4 pictures in this way. So, I'm guessing this is probably too naive an approach but would appreciate a deeper understanding of why this can't work.
Is there any way to do this without engaging with video directly or will I ultimately have to create something that pulls frames out of a video stream?
The best thing to do would be to study your code a bit more deeply and understand exactly what is causing the app to freeze. Maybe it's not related to the code you posted but to the actual processing of the image. One thing worth checking: Camera.takePicture() stops the preview, so startPreview() must be called again before the next takePicture(); repeatedly calling takePicture() without restarting the preview is a common cause of this kind of freeze.
Another approach, which seemed to work better for me in the past, is to skip PictureCallback altogether. Instead, you can use PreviewCallback (http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html). This callback gets triggered on each preview frame, so you can simply check inside it whether more than one second has passed since you last processed an image, and if so, do your image processing on another thread.
I haven't tested this, but something like this:
myCamera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // previousTime is defined as a member variable
        long timeElapsed = System.currentTimeMillis() - previousTime;
        if (timeElapsed > 1000) {
            // reset values for the next run
            previousTime = System.currentTimeMillis();

            // process the image (just an example, you should do this inside an AsyncTask)
            Size previewSize = myCamera.getParameters().getPreviewSize();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            YuvImage yuvImage = new YuvImage(data, myCamera.getParameters().getPreviewFormat(),
                    previewSize.width, previewSize.height, null);
            yuvImage.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), 100, out);
            byte[] imageBytes = out.toByteArray();

            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inSampleSize = 1;
            Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length, options);
            // you now have the image bitmap which you can use to apply your processing ...
        }
    }
});
You can use OpenCV for that. It allows you to perform actions on each frame received from the camera. http://opencv.org/
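For illustration, a minimal sketch of that approach using the OpenCV Android SDK's CameraBridgeViewBase (the activity, field names and the one-second throttle shown here are assumptions, not from either question or answer; the camera view still needs to be declared in the layout and enabled once OpenCV has loaded):
import android.app.Activity;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.core.Mat;

public class CameraActivity extends Activity implements CameraBridgeViewBase.CvCameraViewListener2 {
    private CameraBridgeViewBase mOpenCvCameraView; // bound to a JavaCameraView in the layout
    private long mLastProcessed = 0;

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        // onCameraFrame fires for every preview frame, so throttle to roughly one per second
        if (System.currentTimeMillis() - mLastProcessed > 1000) {
            mLastProcessed = System.currentTimeMillis();
            // ... process rgba here (hand it off to a worker thread for heavy work) ...
        }
        return rgba; // whatever is returned is rendered to the preview
    }

    @Override public void onCameraViewStarted(int width, int height) { }
    @Override public void onCameraViewStopped() { }
}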

setPreviewDisplay and setDisplayOrientation

I'm puzzled by OpenCV's Android camera sample code. They make a custom class which implements SurfaceHolder.Callback and put the following line inside the method surfaceChanged:
mCamera.setPreviewDisplay(null);
The Android documentation for setPreviewDisplay explains:
This method must be called before startPreview(). The one exception is
that if the preview surface is not set (or set to null) before
startPreview() is called, then this method may be called once with a
non-null parameter to set the preview surface. (This allows camera
setup and surface creation to happen in parallel, saving time.) The
preview surface may not otherwise change while preview is running.
Unusually, OpenCV's code never calls setPreviewDisplay with a non-null SurfaceHolder. It works fine, but changing the rotation of the image using setDisplayOrientation doesn't work. This line also doesn't appear to do anything, since I get the same results without it.
If I call setPreviewDisplay with the SurfaceHolder supplied to surfaceChanged instead of null, the image rotates but does not include the results of the image processing. I also get an IllegalArgumentException when calling lockCanvas later on.
What's going on?
Here are the (possibly) most relevant parts of their code, slightly simplified and with methods inlined. Here is the full version.
Class definition
public abstract class SampleViewBase extends SurfaceView
implements SurfaceHolder.Callback, Runnable {
When the camera is opened
mCamera.setPreviewCallbackWithBuffer(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        synchronized (SampleViewBase.this) {
            System.arraycopy(data, 0, mFrame, 0, data.length);
            SampleViewBase.this.notify();
        }
        camera.addCallbackBuffer(mBuffer);
    }
});
When the surface changes
/* Now allocate the buffer */
mBuffer = new byte[size];
/* The buffer where the current frame will be copied */
mFrame = new byte[size];
mCamera.addCallbackBuffer(mBuffer);

try {
    mCamera.setPreviewDisplay(null);
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay/setPreviewTexture fails: " + e);
}

[...]

/* Now we can start a preview */
mCamera.startPreview();
The run method
public void run() {
    mThreadRun = true;
    Log.i(TAG, "Starting processing thread");
    while (mThreadRun) {
        Bitmap bmp = null;
        synchronized (this) {
            try {
                this.wait();
                bmp = processFrame(mFrame);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        if (bmp != null) {
            Canvas canvas = mHolder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(bmp, (canvas.getWidth() - getFrameWidth()) / 2,
                        (canvas.getHeight() - getFrameHeight()) / 2, null);
                mHolder.unlockCanvasAndPost(canvas);
            }
        }
    }
    Log.i(TAG, "Finishing processing thread");
}
I ran into this same problem. Instead of using a SurfaceHolder.Callback, I subclassed their class JavaCameraView. See my live face detection and drawing sample here. It was then trivial to rotate the matrix coming out of the camera according to the device's orientation, prior to processing. Relevant excerpt of the linked code:
@Override
public Mat onCameraFrame(Mat inputFrame) {
    int flipFlags = 1;
    if (display.getRotation() == Surface.ROTATION_270) {
        flipFlags = -1;
        Log.i(VIEW_LOG_TAG, "Orientation is " + getRotation());
    }
    Core.flip(inputFrame, mRgba, flipFlags);
    inputFrame.release();
    Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);
    if (mAbsoluteFaceSize == 0) {
        int height = mGray.rows();
        if (Math.round(height * mRelativeFaceSize) > 0) {
            mAbsoluteFaceSize = Math.round(height * mRelativeFaceSize);
        }
    }
    // ... detection and drawing continue in the linked sample ...
    return mRgba;
}
I solved the rotation issue using OpenCV itself: after finding out how much the screen rotation needs to be corrected using this code, I apply a rotation matrix to the raw camera image (after converting from YUV to RGB):
Point center = new Point(mFrameWidth / 2, mFrameHeight / 2);
Mat rotationMatrix = Imgproc.getRotationMatrix2D(center, totalRotation, 1);
[...]
Imgproc.cvtColor(mYuv, mIntermediate, Imgproc.COLOR_YUV420sp2RGBA, 4);
Imgproc.warpAffine(mIntermediate, mRgba, rotationMatrix,
        new Size(mFrameHeight, mFrameWidth));
A separate issue is that setPreviewDisplay(null) gives a blank screen on some phones. The solution, which I got from here and draws on this bugreport and this SO question, passes a hidden, "fake" SurfaceView to the preview display to get it to start, but actually displays the output on an overlaid custom view, which I call CameraView. So, after calling setContentView() in the activity's onCreate(), stick in this code:
if (VERSION.SDK_INT < VERSION_CODES.HONEYCOMB) {
    final SurfaceView fakeView = new SurfaceView(this);
    fakeView.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
    fakeView.setZOrderMediaOverlay(false);
    final CameraView cameraView = (CameraView) this.findViewById(R.id.cameraview);
    cameraView.setZOrderMediaOverlay(true);
    cameraView.fakeView = fakeView;
}
Then, when setting the preview display, use this code:
try {
    if (VERSION.SDK_INT >= VERSION_CODES.HONEYCOMB)
        mCamera.setPreviewTexture(new SurfaceTexture(10));
    else
        mCamera.setPreviewDisplay(fakeView.getHolder());
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay fails: " + e);
}
If you are only developing for Honeycomb and above, just replace setPreviewDisplay(null) with mCamera.setPreviewTexture(new SurfaceTexture(10)); and be done with it. setDisplayOrientation() still doesn't work if you do this, though, so you'll still have to use the rotation matrix solution.
