Android: how to save the first preview frame from SurfaceView

I was wondering if it is possible to save the first frame of the camera preview that is visible on a SurfaceView.
Within my activity I've implemented all the SurfaceHolder.Callback methods. When I call startPreview on the camera object and then takePicture, the callback is executed correctly, but I need to take and save the picture more quickly, so grabbing the first visible frame of the preview would be very useful.
What can I do?
Any suggestion would be greatly appreciated.

It might seem incredibly simple, but I would do something like this to save the first frame into a Bitmap.
// image, out, yuvImage and imageBytes are fields of the enclosing class.
mCamera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (image == null) { // only process the very first frame
            Camera.Size size = camera.getParameters().getPreviewSize();
            out = new ByteArrayOutputStream();
            // Preview data arrives in NV21; wrap it so it can be compressed to JPEG.
            yuvImage = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
            // Quality is 0..100; raise it if you need a better-looking image.
            yuvImage.compressToJpeg(new Rect(0, 0, size.width, size.height), 0, out);
            imageBytes = out.toByteArray();
            image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
        }
    }
});
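Note that onPreviewFrame keeps firing for every frame; once the first frame has been stored you can detach the callback. A minimal sketch, assuming the same image field as above:

    // Inside onPreviewFrame, after the first frame has been decoded:
    if (image != null) {
        camera.setPreviewCallback(null); // stop receiving further preview frames
    }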

Related

How to process individual camera frames while using Android mobile vision library

I am trying to make a camera app that detects faces using the Google Mobile Vision API with a custom camera instance, NOT the CameraSource class from the Google API, because I am also processing the frames to detect colors, and CameraSource does not give me access to the camera frames.
After searching for this issue, the only results I've found are about using Mobile Vision with its own CameraSource, not with any custom Camera1 API.
I've tried to override the frame processing and then run detection on the resulting pictures, like this:
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Log.d("onPreviewFrame", "" + data.length);
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;
        ByteArrayOutputStream outstr = new ByteArrayOutputStream();
        Rect rect = new Rect(0, 0, width, height);
        YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
        yuvimage.compressToJpeg(rect, 20, outstr);
        Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
        detector = new FaceDetector.Builder(getApplicationContext())
                .setTrackingEnabled(true)
                .setClassificationType(FaceDetector.ALL_LANDMARKS)
                .setMode(FaceDetector.FAST_MODE)
                .build();
        detector.setProcessor(
                new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                        .build());
        if (detector.isOperational()) {
            frame = new Frame.Builder().setBitmap(bmp).build();
            mFaces = detector.detect(frame);
            // detector.release();
        }
    }
});
So is there any way to link Mobile Vision with my custom camera instance for frame processing and face detection?
You can see what I've done so far here:
https://github.com/etman55/FaceDetectionSampleApp
UPDATE
After finding an open-source version of the CameraSource class, I solved most of my problems. Now, when trying to detect faces, the detector receives the frames correctly but can't detect anything; you can see my last commit in the GitHub repo.
I can provide you with some very useful tips.
Building a new FaceDetector for each frame the camera provides is a very bad idea, and also unnecessary: you only have to create it once, outside the camera frame receiver.
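A minimal sketch of that setup, assuming a detector field and a standard Activity lifecycle:

    // Create the detector once, e.g. in onCreate(), not per frame.
    detector = new FaceDetector.Builder(getApplicationContext())
            .setTrackingEnabled(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.FAST_MODE)
            .build();

    // ...and release it when it is no longer needed:
    @Override
    protected void onDestroy() {
        super.onDestroy();
        detector.release();
    }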
It is also not necessary to take the YUV_420_SP (NV21) frame, wrap it in a YuvImage instance, convert that to a Bitmap, and then create a Frame.Builder() with the Bitmap. If you take a look at the Frame.Builder documentation you can see that it accepts NV21 directly from the camera preview.
Like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    detector.detect(new Frame.Builder()
            .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21)
            .build());
}
And the Kotlin version:
import com.google.android.gms.vision.Frame as GoogleVisionFrame
import io.fotoapparat.preview.Frame as FotoapparatFrame

fun recogniseFrame(frame: FotoapparatFrame) = detector.detect(buildDetectorFrame(frame))
    .asSequence()
    .firstOrNull { it.displayValue.isNotEmpty() }
    ?.displayValue

private fun buildDetectorFrame(frame: FotoapparatFrame) =
    GoogleVisionFrame.Builder()
        .setRotation(frame.rotation.toGoogleVisionRotation())
        .setImageData(
            ByteBuffer.wrap(frame.image),
            frame.size.width,
            frame.size.height,
            ImageFormat.NV21
        ).build()

How to get frame of SurfaceView each second?

I want to take a screenshot of the camera preview every second.
I show the preview of my camera using a SurfaceView. I need to grab a preview frame (a screenshot) every second, but without taking an actual photo.
I know about the mCamera.setPreviewCallbackWithBuffer method, but I can only get a frame from it once. To make it update every second I would have to start a MediaRecorder and record video, but for video I need to set an output file, which means it can use a lot of memory.
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;
        YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 50, out);
        byte[] bytes = out.toByteArray();
        final Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    }
});
How can I do it without setting an output file or taking a photo every second?
If you are using a SurfaceView to display your camera preview, you may try calling Camera#takePicture every second.
To schedule it approximately once per second, you can use the postDelayed method of any view. For example:
private Runnable capturePreview = new Runnable() {
    @Override
    public void run() {
        camera.takePicture(null, null, callback);
        // Run again after approximately 1 second.
        surfaceView.postDelayed(this, 1000);
    }
};
private Camera.PictureCallback callback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        // Do whatever you need with your bitmap.
        // Consider freeing the memory afterwards.
        bitmap.recycle();
    }
};
And you can start it whenever the preview is ready by calling:
surfaceView.postDelayed(capturePreview, 1000);
And stop it whenever the preview is no longer displayed:
surfaceView.removeCallbacks(capturePreview);
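Keep in mind that Camera#takePicture stops the preview, so you will typically need to call camera.startPreview() again in the callback before the next capture. Alternatively, setPreviewCallbackWithBuffer keeps delivering frames as long as you return the buffer with addCallbackBuffer, so you can simply sample one frame per second without taking pictures. A minimal sketch (lastFrameTime is an illustrative field):

    private long lastFrameTime = 0;

    // Register once, after startPreview(); the buffer must hold one
    // NV21 preview frame: width * height * 3 / 2 bytes.
    Camera.Size s = camera.getParameters().getPreviewSize();
    camera.addCallbackBuffer(new byte[s.width * s.height * 3 / 2]);
    camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            long now = System.currentTimeMillis();
            if (now - lastFrameTime >= 1000) {
                lastFrameTime = now;
                // Convert data (NV21) to a JPEG/Bitmap as in the snippet above.
            }
            // Hand the buffer back so the next frame can be delivered.
            camera.addCallbackBuffer(data);
        }
    });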
If you are using a TextureView, you can simply use getBitmap(), which allows you to easily grab its current contents.
The code above then becomes something like:
private Runnable capturePreview = new Runnable() {
    @Override
    public void run() {
        Bitmap preview = textureView.getBitmap();
        // Do whatever you need with the bitmap.
        // Run again after approximately 1 second.
        textureView.postDelayed(this, 1000);
    }
};
And again start:
textureView.postDelayed(capturePreview, 1000);
and stop:
textureView.removeCallbacks(capturePreview);

Capturing camera frame in android after face detection

I am working with face detection in Android and I want to achieve the following:
1. Use the face detection listener in Android to detect faces in the camera frame.
2. If a face is detected in the camera frame, extract the face and save it to external storage.
After going through existing questions, I have found that there is no direct way to convert a detected face to a bitmap and store it on disk. So now I want to capture and save the entire camera frame in which the face was detected, and I have not been able to do so.
The current code structure is as follows:
FaceDetectionListener faceDetectionListener = new FaceDetectionListener() {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length > 0) {
            displayMessage("Face detected!");
            // CODE TO SAVE CURRENT FRAME AS IMAGE
            finish();
        }
    }
};
I tried to achieve this by calling takePicture in the above method, but I was unable to save the frame using that approach. Kindly suggest a way in which I can save the camera frame.
I could not figure out a direct way to save the camera frame from within FaceDetectionListener. Therefore, for my application, I changed the way I handle the camera preview data. I used the PreviewCallback interface of the Camera class and implemented the logic in its onPreviewFrame method. The outline of the implementation is as follows:
class SaveFaceFrames extends Activity implements Camera.PreviewCallback, Camera.FaceDetectionListener {

    boolean lock = false;

    public void onPreviewFrame(byte[] data, Camera camera) {
        ...
        if (lock) {
            Camera.Parameters parameters = camera.getParameters();
            Camera.Size size = parameters.getPreviewSize();
            YuvImage image = new YuvImage(data, parameters.getPreviewFormat(), size.width, size.height, null);
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            image.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            lock = false;
        }
    }

    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        ...
        if (!lock) {
            if (faces.length != 0) lock = true;
        }
    }
}
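The outline above decodes the frame into a Bitmap but does not persist it. A minimal sketch of writing the JPEG bytes to external storage (the file name is illustrative, and the storage permission is assumed to be granted):

    try {
        File file = new File(Environment.getExternalStorageDirectory(), "face_frame.jpg");
        FileOutputStream fos = new FileOutputStream(file);
        fos.write(outstr.toByteArray()); // the JPEG produced by compressToJpeg above
        fos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }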
This is not an ideal solution, but it worked in my case. There are third-party libraries that can be used in these scenarios. One library that I have used, and which works very well, is Qualcomm's Snapdragon SDK. I hope someone finds this useful.

Android byte[] to image in Camera.onPreviewFrame

When trying to convert the byte[] from Camera.onPreviewFrame to a Bitmap using BitmapFactory.decodeByteArray, I get the error SkImageDecoder::Factory returned null.
Following is my code:
public void onPreviewFrame(byte[] data, Camera camera) {
    Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
}
This was hard to find! But since API 8 there has been a YuvImage class in android.graphics. It's not an Image descendant, so all you can do with it is save it as a JPEG, but you could save it to a memory stream and then load that into a Bitmap if that's what you need.
import android.graphics.YuvImage;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        Camera.Parameters parameters = camera.getParameters();
        Size size = parameters.getPreviewSize();
        YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
                size.width, size.height, null);
        File file = new File(Environment.getExternalStorageDirectory()
                .getPath() + "/out.jpg");
        FileOutputStream filecon = new FileOutputStream(file);
        image.compressToJpeg(
                new Rect(0, 0, image.getWidth(), image.getHeight()), 90,
                filecon);
        filecon.close();
    } catch (IOException e) {
        Toast toast = Toast
                .makeText(getBaseContext(), e.getMessage(), Toast.LENGTH_LONG);
        toast.show();
    }
}
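If you want a Bitmap instead of a file, compress into a memory stream and decode it, as mentioned above; a minimal sketch:

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    image.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()), 90, out);
    byte[] jpegBytes = out.toByteArray();
    Bitmap bmp = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);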
Since Android 3.0 you can use a TextureView with a SurfaceTexture to display the camera, and then use mTextureView.getBitmap() to retrieve a friendly RGB preview frame.
A very skeletal example of how to do this is given in the TextureView docs. Note that you'll have to set your application or activity to be hardware accelerated by putting android:hardwareAccelerated="true" in the manifest.
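In that setup, grabbing a frame is a one-liner (mTextureView is assumed to be showing the camera preview):

    // Requires android:hardwareAccelerated="true" in the manifest.
    Bitmap frame = mTextureView.getBitmap(); // current preview contents as an RGB Bitmap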
I found the answer after a long time. Here it is...
Instead of using BitmapFactory, I used a custom method to decode this byte[] data into a valid image format. To decode the data, you need to know which format the camera uses for the preview, which you can query with camera.getParameters().getPreviewFormat(). This returns a constant defined by ImageFormat. After knowing the format, use the appropriate conversion to decode the image.
In my case, the byte[] data was in the YUV format, so I looked for YUV-to-BMP conversion and that solved my problem.
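A minimal sketch of such a format check inside onPreviewFrame (YuvImage only accepts the NV21 and YUY2 formats):

    int format = camera.getParameters().getPreviewFormat();
    if (format == ImageFormat.NV21 || format == ImageFormat.YUY2) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // Wrap the raw bytes so they can be compressed to JPEG.
        YuvImage yuv = new YuvImage(data, format, size.width, size.height, null);
        // ...then compressToJpeg(...) as in the other snippets on this page.
    }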
You can try this. This example sends the camera frames to a server:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        byte[] baos = convertYuvToJpeg(data, camera);
        StringBuilder dataBuilder = new StringBuilder();
        dataBuilder.append("data:image/jpeg;base64,")
                .append(Base64.encodeToString(baos, Base64.DEFAULT));
        mSocket.emit("newFrame", dataBuilder.toString());
    } catch (Exception e) {
        Log.d("########", "ERROR");
    }
}
public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            size.width, size.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // lower quality keeps the payload small
    image.compressToJpeg(new Rect(0, 0, size.width, size.height), quality, baos);
    return baos.toByteArray();
}

Best way to scale size of camera picture before saving to SD

The code below is executed as the JPEG picture callback after takePicture is called. If I save data to disk, it is a 1280x960 JPEG. I've tried to change the picture size, but that's not possible as no smaller size is supported. JPEG is the only available picture format.
PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        FileOutputStream out = null;
        Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
        Bitmap sbm = Bitmap.createScaledBitmap(bm, 640, 480, false);
        // ...
    }
};
data.length is something like 500k as expected. After executing BitmapFactory.decodeByteArray(), bm has a height and width of -1, so it appears the operation is failing.
It's unclear to me whether Bitmap can handle JPEG data. I would think not, but I have seen some code examples that seem to indicate it can.
Does the data need to be in bitmap format before decoding and scaling?
If so, how do I do this?
Thanks!
In surfaceCreated, you can set the camera's picture size, as shown in the code below:
public void surfaceCreated(SurfaceHolder holder) {
    camera = Camera.open();
    try {
        camera.setPreviewDisplay(holder);
        Camera.Parameters p = camera.getParameters();
        p.set("jpeg-quality", 70);
        p.setPictureFormat(ImageFormat.JPEG);
        p.setPictureSize(640, 480);
        camera.setParameters(p);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
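If the device rejects 640x480 (check Parameters#getSupportedPictureSizes()), another option is to downscale while decoding in the picture callback, which also uses far less memory than decoding at full size; a minimal sketch using BitmapFactory.Options:

    // inSampleSize is a power-of-two subsampling factor applied during decode.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inSampleSize = 2; // 1280x960 -> 640x480
    Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length, opts);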
