Video frame capture in Android

I'm trying to implement an Android app that captures about 1 picture per second, performs some processing on each picture and sends the output to a file for storage. My first pass at this tries something like the following:
public class MainActivity extends Activity {
    ...

    Handler loopHandler = new Handler();

    Runnable loopRunnable = new Runnable() {
        @Override
        public void run() {
            Thread pictureThread = new Thread(pictureRunnable);
            pictureThread.start();
            loopHandler.postDelayed(this, 1000);
        }
    };

    Runnable pictureRunnable = new Runnable() {
        @Override
        public void run() {
            mCamera.takePicture(null, null, mPicture);
        }
    };

    private PictureCallback mPicture = new PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            ... My processing code ...
        }
    };
}
The app freezes after taking about four pictures this way. So I'm guessing this approach is too naive, but I would appreciate a deeper understanding of why it can't work.
Is there any way to do this without engaging with video directly, or will I ultimately have to build something that pulls frames out of a video stream?

The best thing to do would be to study your code a bit more deeply and understand exactly what is causing the app to freeze. Maybe it's not related to the code you posted but to the actual processing of the image.
Another approach, which seemed to work better for me in the past, is to skip PictureCallback altogether. Instead, you can use PreviewCallback (http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html). This callback is triggered on each frame, so you can simply check inside it whether more than a second has passed since you last processed an image, and if so, do your image processing on another thread.
I haven't tested this, but it would look something like this:
myCamera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // previousTime is defined as a member variable
        long timeElapsed = System.currentTimeMillis() - previousTime;
        if (timeElapsed > 1000) {
            // reset values for the next run
            previousTime = System.currentTimeMillis();

            // process the image (just an example; you should do this inside an AsyncTask - see the sketch below)
            Size previewSize = myCamera.getParameters().getPreviewSize();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            YuvImage yuvImage = new YuvImage(data, myCamera.getParameters().getPreviewFormat(),
                    previewSize.width, previewSize.height, null);
            yuvImage.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), 100, out);
            byte[] imageBytes = out.toByteArray();
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inSampleSize = 1;
            Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length, options);
            // you now have the image bitmap, which you can use to apply your processing ...
        }
    }
});
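Following up on the comment in the code above, here is a minimal sketch of moving the conversion off the UI thread with an AsyncTask. processBitmap() is a hypothetical placeholder for your own processing; with plain setPreviewCallback the data array is freshly allocated each frame, so handing it to another thread is safe.
private class ProcessFrameTask extends AsyncTask<byte[], Void, Bitmap> {
    private final int width;
    private final int height;
    private final int format;

    ProcessFrameTask(int width, int height, int format) {
        this.width = width;
        this.height = height;
        this.format = format;
    }

    @Override
    protected Bitmap doInBackground(byte[]... frames) {
        // YUV -> JPEG -> Bitmap, off the UI thread
        YuvImage yuv = new YuvImage(frames[0], format, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 100, out);
        byte[] jpeg = out.toByteArray();
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }

    @Override
    protected void onPostExecute(Bitmap bitmap) {
        processBitmap(bitmap); // hypothetical: your processing, back on the UI thread
    }
}
Inside onPreviewFrame you would then replace the inline conversion with:
new ProcessFrameTask(previewSize.width, previewSize.height,
        myCamera.getParameters().getPreviewFormat()).execute(data);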

You can use OpenCV for that. It allows you to perform actions on each frame received from the camera. http://opencv.org/
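For what it's worth, the usual OpenCV4Android pattern is to implement CvCameraViewListener2 on a CameraBridgeViewBase (e.g. a JavaCameraView in your layout, registered via setCvCameraViewListener()). A rough, untested sketch, assuming the OpenCV library is already initialized:
public class FrameProcessor implements CameraBridgeViewBase.CvCameraViewListener2 {
    private long mLastProcessed = 0;

    @Override
    public void onCameraViewStarted(int width, int height) { }

    @Override
    public void onCameraViewStopped() { }

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        long now = System.currentTimeMillis();
        if (now - mLastProcessed > 1000) { // roughly one processed frame per second
            mLastProcessed = now;
            // ... process rgba here ...
        }
        return rgba; // the returned Mat is what gets rendered on screen
    }
}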

Related

Unable to blit from External Texture to EGLSurface in android

When I try to render the texture and transformation matrix to the EGLSurface, nothing is displayed in the view.
As a follow-up to this issue, I have slightly modified the code, following grafika/fadden's sample code continuous capture.
Here is my code:
Here is the draw method, which runs on the RenderThread.
This draw method is invoked properly whenever data is produced at the producer end by the native code.
public void drawFrame() {
    mOffScreenSurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    mSurfaceWindowUser.makeCurrent();
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mSurfaceWindowUser.swapBuffers();
}
The run method of the RenderThread:
public void run() {
    Looper.prepare();
    mHandler = new RenderHandler(this);
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mOffScreenSurface = new OffscreenSurface(mEglCore, 640, 480);
    mOffScreenSurface.makeCurrent();

    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    // This surface is sent to the native code, where I take an ANativeWindow
    // reference and copy the data in using the post method. {producer}
    mCameraSurface = new Surface(mCameraTexture);
    mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {
            Log.d(TAG, "Long breath.. data is pumped by the native-layer producer..");
            mHandler.frameReceivedFromProducer();
        }
    });
    // mSurfaceUser is a Surface received from the MainActivity's TextureView.
    mSurfaceWindowUser = new WindowSurface(mEglCore, mSurfaceUser, false);
}
To confirm that the producer on the native side is producing data: if I pass the user surface directly, without any EGL configuration, the frames are rendered to the screen.
At the native level:
geometryResult = ANativeWindow_setBuffersGeometry(userNaiveWindow,640, 480, WINDOW_FORMAT_RGBA_8888);
To render a frame I use ANativeWindow_lock and ANativeWindow_unlockAndPost() to write the frame directly into the buffer.
I cannot figure out what could be wrong or where I should dig further.
Thanks fadden for your help.

setPreviewDisplay and setDisplayOrientation

I'm puzzled by OpenCV's Android camera sample code. They make a custom class which implements SurfaceHolder.Callback and put the following line inside the method surfaceChanged:
mCamera.setPreviewDisplay(null);
The Android documentation for setPreviewDisplay explains:
This method must be called before startPreview(). The one exception is
that if the preview surface is not set (or set to null) before
startPreview() is called, then this method may be called once with a
non-null parameter to set the preview surface. (This allows camera
setup and surface creation to happen in parallel, saving time.) The
preview surface may not otherwise change while preview is running.
Unusually, OpenCV's code never calls setPreviewDisplay with a non-null SurfaceHolder. It works fine, but changing the rotation of the image using setDisplayOrientation doesn't work. This line also doesn't appear to do anything, since I get the same results without it.
If I call setPreviewDisplay with the SurfaceHolder supplied to surfaceChanged instead of null, the image rotates but does not include the results of the image processing. I also get an IllegalArgumentException when calling lockCanvas later on.
What's going on?
Here are the (possibly) most relevant parts of their code, slightly simplified and with methods inlined. Here is the full version.
Class definition
public abstract class SampleViewBase extends SurfaceView
        implements SurfaceHolder.Callback, Runnable {
When the camera is opened
mCamera.setPreviewCallbackWithBuffer(new PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        synchronized (SampleViewBase.this) {
            System.arraycopy(data, 0, mFrame, 0, data.length);
            SampleViewBase.this.notify();
        }
        camera.addCallbackBuffer(mBuffer);
    }
});
When the surface changes
/* Now allocate the buffer */
mBuffer = new byte[size];
/* The buffer where the current frame will be copied */
mFrame = new byte[size];
mCamera.addCallbackBuffer(mBuffer);

try {
    mCamera.setPreviewDisplay(null);
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay/setPreviewTexture fails: " + e);
}
[...]
/* Now we can start a preview */
mCamera.startPreview();
The run method
public void run() {
    mThreadRun = true;
    Log.i(TAG, "Starting processing thread");
    while (mThreadRun) {
        Bitmap bmp = null;
        synchronized (this) {
            try {
                this.wait();
                bmp = processFrame(mFrame);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        if (bmp != null) {
            Canvas canvas = mHolder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(bmp, (canvas.getWidth() - getFrameWidth()) / 2,
                        (canvas.getHeight() - getFrameHeight()) / 2, null);
                mHolder.unlockCanvasAndPost(canvas);
            }
        }
    }
    Log.i(TAG, "Finishing processing thread");
}
I ran into this same problem. Instead of using a SurfaceHolder.Callback, I subclassed their class JavaCameraView. See my live face detection and drawing sample here. It was then trivial to rotate the matrix coming out of the camera according to the device's orientation, prior to processing. Relevant excerpt of the linked code:
@Override
public Mat onCameraFrame(Mat inputFrame) {
    int flipFlags = 1;
    if (display.getRotation() == Surface.ROTATION_270) {
        flipFlags = -1;
        Log.i(VIEW_LOG_TAG, "Orientation is" + getRotation());
    }
    Core.flip(inputFrame, mRgba, flipFlags);
    inputFrame.release();
    Imgproc.cvtColor(mRgba, mGray, Imgproc.COLOR_RGBA2GRAY);
    if (mAbsoluteFaceSize == 0) {
        int height = mGray.rows();
        if (Math.round(height * mRelativeFaceSize) > 0) {
            mAbsoluteFaceSize = Math.round(height * mRelativeFaceSize);
        }
    }
    return mRgba; // not in the excerpt; the frame returned here is what gets displayed
}
I solved the rotation issue using OpenCV itself: after finding out how much the screen rotation needs to be corrected using this code, I apply a rotation matrix to the raw camera image (after converting from YUV to RGB):
Point center = new Point(mFrameWidth / 2, mFrameHeight / 2);
Mat rotationMatrix = Imgproc.getRotationMatrix2D(center, totalRotation, 1);
[...]
Imgproc.cvtColor(mYuv, mIntermediate, Imgproc.COLOR_YUV420sp2RGBA, 4);
Imgproc.warpAffine(mIntermediate, mRgba, rotationMatrix,
        new Size(mFrameHeight, mFrameWidth));
A separate issue is that setPreviewDisplay(null) gives a blank screen on some phones. The solution, which I got from here and which draws on this bug report and this SO question, passes a hidden, "fake" SurfaceView to the preview display to get it started, but actually displays the output on an overlaid custom view, which I call CameraView. So, after calling setContentView() in the activity's onCreate(), stick in this code:
if (VERSION.SDK_INT < VERSION_CODES.HONEYCOMB) {
    final SurfaceView fakeView = new SurfaceView(this);
    fakeView.setLayoutParams(new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
    fakeView.setZOrderMediaOverlay(false);
    final CameraView cameraView = (CameraView) this.findViewById(R.id.cameraview);
    cameraView.setZOrderMediaOverlay(true);
    cameraView.fakeView = fakeView;
}
Then, when setting the preview display, use this code:
try {
    if (VERSION.SDK_INT >= VERSION_CODES.HONEYCOMB)
        mCamera.setPreviewTexture(new SurfaceTexture(10));
    else
        mCamera.setPreviewDisplay(fakeView.getHolder());
} catch (IOException e) {
    Log.e(TAG, "mCamera.setPreviewDisplay fails: " + e);
}
If you are only developing for Honeycomb and above, just replace setPreviewDisplay(null) with mCamera.setPreviewTexture(new SurfaceTexture(10)); and be done with it. setDisplayOrientation() still doesn't work if you do this, though, so you'll still have to use the rotation matrix solution.

Getting smaller data from camera preview in Android

Hi, I am developing a real-time image processing application on Android, using PreviewCallback to get an image every frame. On tablet devices the returned data is very large, and it is too hard to work with that much data in real time.
My question is: is there any way to get smaller-resolution data from the camera preview?
CAMERA PREVIEW CODE:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters params = camera.getParameters();
    Log.v("image format", Integer.toString(params.getPreviewFormat()));
    // Frame capturing via frameManager
    frameManager.initCamFrame(params.getPreviewSize().width,
            params.getPreviewSize().height, data);
}
You can call parameters.setPreviewSize(width, height), but you have to do it before the camera preview starts, and you need to use a supported value (see the next answer).
You also should not call camera.getParameters() on every frame; do it once and save the values in a variable. Your time in onPreviewFrame is limited, because byte[] data is overwritten on each frame, so do only the essential work there.
You should also use setPreviewCallbackWithBuffer, which considerably improves performance - check this post.
Are you aware you can get a list of supported preview sizes from the camera parameters by calling getSupportedPreviewSizes()? The devices I've tried this on all returned a sorted list, although sometimes in ascending and sometimes in descending order. You'll probably want to iterate the list manually to find the smallest preview size, or sort it first and grab the first item, as in the sketch below.
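Putting the two answers above together, a one-time setup along these lines might work. This is an untested sketch, though every call in it is standard android.hardware.Camera API:
// One-time camera setup: pick the smallest supported preview size and
// register a callback with a reusable buffer. Run before startPreview().
Camera.Parameters params = camera.getParameters();
List<Camera.Size> sizes = params.getSupportedPreviewSizes();
Camera.Size smallest = sizes.get(0);
for (Camera.Size s : sizes) {
    if (s.width * s.height < smallest.width * smallest.height) {
        smallest = s;
    }
}
params.setPreviewSize(smallest.width, smallest.height);
camera.setParameters(params);

// buffer size = pixels * bits-per-pixel / 8 (the default NV21 is 12 bits per pixel)
int bufferSize = smallest.width * smallest.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... do the essential work on data ...
        camera.addCallbackBuffer(data); // hand the buffer back for reuse
    }
});
camera.startPreview();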
You can try this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        byte[] baos = convertYuvToJpeg(data, camera);
        StringBuilder dataBuilder = new StringBuilder();
        dataBuilder.append("data:image/jpeg;base64,")
                .append(Base64.encodeToString(baos, Base64.DEFAULT));
        mSocket.emit("newFrame", dataBuilder.toString());
    } catch (Exception e) {
        Log.d("########", "ERROR");
    }
}
public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            previewSize.width, previewSize.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // lower quality -> smaller JPEG output
    image.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height),
            quality, baos);
    return baos.toByteArray();
}

Can't read a QR code from camera

EDIT:
After playing around with it for a few hours, I have come to believe the problem is image quality. For example, the first image is how it came from the camera; the decoder can't read it. The second image has been turned into B/W with adjusted contrast, and the decoder reads it fine.
Since the demo app that comes with zxing can read the first image off the monitor within a few seconds, I think the problem might be some setting deep within the zxing library. It doesn't wait long enough to process the image, but spits out NotFound almost instantly.
I'm making a simple QR-reader app. Here's a screenshot.
The top black area is a SurfaceView that shows frames from the camera. It works fine; you just can't see it in the screenshot.
Then, when I press the button, a bitmap is taken from that SurfaceView, placed on an ImageView below, and the zxing library attempts to read it.
Yet it throws a NotFoundException. :/
10-17 19:53:15.382: WARN/System.err(2238): com.google.zxing.NotFoundException
10-17 19:53:15.382: WARN/dalvikvm(2238): getStackTrace() called but no trace available
On the other hand, if I crop the QR image from this screenshot, place it into the ImageView (instead of a camera feed) and try to decode it, it works fine. Therefore the QR image itself and its quality are OK... but then why doesn't it decode in the first scenario?
Thanks!
public void dec(View v)
{
    ImageView ivCam2 = (ImageView) findViewById(R.id.imageView2);
    ivCam2.setImageBitmap(bm);
    BitmapDrawable drawable = (BitmapDrawable) ivCam2.getDrawable();
    Bitmap bMap = drawable.getBitmap();
    TextView textv = (TextView) findViewById(R.id.mytext);

    LuminanceSource source = new RGBLuminanceSource(bMap);
    BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
    Reader reader = new MultiFormatReader();
    try {
        Result result = reader.decode(bitmap);
        Global.text = result.getText();
        byte[] rawBytes = result.getRawBytes();
        BarcodeFormat format = result.getBarcodeFormat();
        ResultPoint[] points = result.getResultPoints();
        textv.setText(Global.text);
    } catch (NotFoundException e) {
        textv.setText("NotFoundException");
    } catch (ChecksumException e) {
        textv.setText("ChecksumException");
    } catch (FormatException e) {
        textv.setText("FormatException");
    }
}
how the bitmap is created:
@Override
public void surfaceCreated(SurfaceHolder holder)
{
    try
    {
        this.camera = Camera.open();
        this.camera.setPreviewDisplay(this.holder);
        this.camera.setPreviewCallback(new PreviewCallback() {
            public void onPreviewFrame(byte[] _data, Camera _camera) {
                Camera.Parameters params = _camera.getParameters();
                int w = params.getPreviewSize().width;
                int h = params.getPreviewSize().height;
                int format = params.getPreviewFormat();
                YuvImage image = new YuvImage(_data, format, w, h, null);
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                Rect area = new Rect(0, 0, w, h);
                image.compressToJpeg(area, 50, out);
                bm = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
            }
        });
    }
    catch (IOException ioe)
    {
        ioe.printStackTrace(System.out);
    }
}
I wrote this code. Returning quickly isn't the problem; decoding is very fast on a mobile device, and very, very fast on a desktop.
The general answer to this type of question is that some images just aren't going to decode. That's life -- the heuristics don't always get it right. But I don't think that is the problem here.
QR codes don't decode without a minimal white "quiet zone" around them. The image beyond the code's borders is considered white for this purpose, but in your raw camera image there is little border around the code, and I'd bet the binarizer doesn't treat all of it as white.
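If the missing quiet zone is the culprit, one cheap thing to try (a sketch of the idea, untested against your images) is padding the bitmap with a white margin before handing it to the decoder:
// Pad the camera bitmap with a white border so the decoder sees a quiet zone.
private Bitmap addWhiteBorder(Bitmap src, int margin) {
    Bitmap padded = Bitmap.createBitmap(src.getWidth() + 2 * margin,
            src.getHeight() + 2 * margin, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(padded);
    canvas.drawColor(Color.WHITE);
    canvas.drawBitmap(src, margin, margin, null);
    return padded;
}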
Still, there's more you can do. Set the TRY_HARDER hint on the decoder, for one, to have it spend a lot more CPU trying to decode. You can also try a Binarizer implementation other than the default HybridBinarizer.
(The rest looks just fine. I assume RGBLuminanceSource is getting data in the format it expects; it ought to, coming from a Bitmap.)
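Setting the hint is a small change to the decode call above, using the standard zxing hints map:
// Ask zxing to spend more CPU time searching for a barcode.
Map<DecodeHintType, Object> hints = new EnumMap<DecodeHintType, Object>(DecodeHintType.class);
hints.put(DecodeHintType.TRY_HARDER, Boolean.TRUE);
Reader reader = new MultiFormatReader();
Result result = reader.decode(bitmap, hints); // still throws NotFoundException when no code is found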
See this: http://zxing.org/w/docs/javadoc/com/google/zxing/NotFoundException.html The exception means that a barcode wasn't found in the image. My suggestion would be to use the workaround that works for you instead of trying to decode the un-cropped image.

slow face detection android

Hi, my face detection thread is working too slowly.
I call this thread from onPreviewFrame only if the thread is not already working; otherwise I skip the call. After the thread detects a face, I call onDraw inside the view to draw the rectangle.
public void run() {
    FaceDetector faceDetector = new FaceDetector(bitmapImg.getWidth(), bitmapImg.getHeight(), 1);
    numOfFacesDetected = faceDetector.findFaces(bitmapImg, detectedFaces);
    if (numOfFacesDetected != 0) {
        detectedFaces.getMidPoint(eyesMidPoint);
        eyesDistance = detectedFaces.eyesDistance();
        handler.post(new Runnable() {
            public void run() {
                mPrev.invalidate();
                // turn off thread lock
            }
        });
        mPrev.setEyesDistance(eyesDistance);
        mPrev.setEyesMidPoint(eyesMidPoint);
    }
    isThreadWorking = false;
}

public void onPreviewFrame(byte[] yuv, Camera camera) {
    if (isThreadWorking)
        return;
    isThreadWorking = true;
    ByteBuffer bbuffer = ByteBuffer.wrap(yuv);
    bbuffer.get(grayBuff_, 0, bufflen_);
    detectThread = new FaceDetectThread(handler);
    detectThread.setBuffer(grayBuff_);
    detectThread.start();
}
My question is: could it be taking too long because I am working with a bitmap rather than grayscale data? How can I improve the speed?
The FaceDetector API is not really made to process frames in a live preview. It's way too slow for that.
If you are running on a fairly new device, a better option is the Camera.FaceDetectionListener API, available from API level 14 (Android 4.0). It is very fast and can be used to create an overlay on a preview SurfaceHolder, as in the sketch below.
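A minimal sketch of that API, assuming the device supports it (check that getMaxNumDetectedFaces() > 0 first):
camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        if (faces.length > 0) {
            // face bounds arrive in the (-1000, 1000) camera coordinate space;
            // map them to view coordinates before drawing your rectangle
            Rect bounds = faces[0].rect;
            // ... update the overlay here ...
        }
    }
});
camera.startPreview();
camera.startFaceDetection(); // must be called after startPreview()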
