Null bitmap in Android Surfaceview - android

I'm attempting to get a bitmap from a camera preview in Android, then examine the bitmap and draw something to the screen based on what the camera is seeing. This all has to be done live due to the nature of the project I'm working on.
At the moment I'm using a SurfaceView to display the live preview, and I'm getting the bitmap using the following code, which I found in a separate question on here.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    snipeCamera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int width = parameters.getPreviewSize().width;
            int height = parameters.getPreviewSize().height;
            ByteArrayOutputStream outstr = new ByteArrayOutputStream();
            Rect rect = new Rect(0, 0, width, height);
            YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
            yuvimage.compressToJpeg(rect, 100, outstr);
            Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
            bit = bmp.copy(Bitmap.Config.ARGB_8888, true);
        }
    });
}
bit is declared as:
public static Bitmap bit;
Whenever I try to access this bitmap anywhere else I get a null pointer exception. I'm guessing it has something to do with the fact that it's being set inside setPreviewCallback, but I don't know enough about Android to fix it. Is there something I can do to get access to this bitmap? Or is there another way I can work with a live bitmap of what the camera is seeing?

Is there something I can do to get access to this bitmap?
You already have access to your Bitmap. You are getting it from decodeByteArray().
(note that I am assuming that the code leading up to and including decodeByteArray() is correct — there could be additional problems lurking in there)
You just need to consume the Bitmap in your onPreviewFrame() method. If your work to do is quick (sub-millisecond), probably just do that work right there. If your work to do is not-so-quick, you'll now need to work out background threading plans, arranging to update your UI with the results of that work on the main application thread, and related issues.
If, to consume the Bitmap, you need access to other objects in your camera UI, just make sure that the SurfaceHolder.Callback has references to those other objects. Your onPreviewFrame() method is in an anonymous inner class inside the SurfaceHolder.Callback, and onPreviewFrame() has access to everything that the SurfaceHolder.Callback has.
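For illustration, a minimal sketch of that pattern might look like this (decodePreviewToBitmap() stands for the decode code shown in the question, analyze() is whatever examination you need, and executor and mainHandler are fields you create once; all of these names are placeholders):
snipeCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        final Bitmap bmp = decodePreviewToBitmap(data, camera); // the YuvImage/JPEG decode shown above
        executor.execute(new Runnable() {            // e.g. a single-thread Executor field
            @Override
            public void run() {
                final String result = analyze(bmp);  // heavy work off the main thread
                mainHandler.post(new Runnable() {    // e.g. new Handler(Looper.getMainLooper())
                    @Override
                    public void run() {
                        resultView.setText(result);  // update the UI on the main thread
                    }
                });
            }
        });
    }
});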

Related

How to process individual camera frames while using Android mobile vision library

I am trying to make a camera app that detects faces using the Google Mobile Vision API with a custom camera instance, NOT the "CameraSource" provided by the Google API, because I am also processing the frames to detect colors, and with CameraSource I am not allowed to access the camera frames.
After searching for this issue, the only results I've found are about using Mobile Vision with its CameraSource, not with any custom Camera1 API.
I've tried to override the frame processing and then run the detection on the resulting images, like this:
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Log.d("onPreviewFrame", "" + data.length);
        Camera.Parameters parameters = camera.getParameters();
        int width = parameters.getPreviewSize().width;
        int height = parameters.getPreviewSize().height;
        ByteArrayOutputStream outstr = new ByteArrayOutputStream();
        Rect rect = new Rect(0, 0, width, height);
        YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, width, height, null);
        yuvimage.compressToJpeg(rect, 20, outstr);
        Bitmap bmp = BitmapFactory.decodeByteArray(outstr.toByteArray(), 0, outstr.size());
        detector = new FaceDetector.Builder(getApplicationContext())
                .setTrackingEnabled(true)
                .setClassificationType(FaceDetector.ALL_LANDMARKS)
                .setMode(FaceDetector.FAST_MODE)
                .build();
        detector.setProcessor(
                new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory())
                        .build());
        if (detector.isOperational()) {
            frame = new Frame.Builder().setBitmap(bmp).build();
            mFaces = detector.detect(frame);
            // detector.release();
        }
    }
});
So is there any way that I can link Mobile Vision to my camera instance for frame processing, and use it to detect faces?
You can see what I've done so far here:
https://github.com/etman55/FaceDetectionSampleApp
NEW UPDATE
After finding an open-source version of the CameraSource class I solved most of my problems, but now, when trying to detect faces, the detector receives the frames correctly but can't detect anything; you can see my last commit in the GitHub repo.
I can provide you with some very useful tips.
Building a new FaceDetector for each frame the camera provides is a very bad idea, and also unnecessary. You only have to create it once, outside the camera frame receiver (see the sketch after the Kotlin version below).
It is not necessary to get the YUV_420_SP (NV21) frames, convert them to a YuvImage instance, then to a Bitmap, and then create a Frame.Builder() with the Bitmap. If you take a look at the Frame.Builder documentation you can see that it accepts NV21 directly from the camera preview.
Like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    detector.detect(new Frame.Builder()
            .setImageData(ByteBuffer.wrap(data), previewW, previewH, ImageFormat.NV21)
            .build());
}
And the Kotlin version:
import com.google.android.gms.vision.Frame as GoogleVisionFrame
import io.fotoapparat.preview.Frame as FotoapparatFrame

fun recogniseFrame(frame: FotoapparatFrame) = detector.detect(buildDetectorFrame(frame))
    .asSequence()
    .firstOrNull { it.displayValue.isNotEmpty() }
    ?.displayValue

private fun buildDetectorFrame(frame: FotoapparatFrame) =
    GoogleVisionFrame.Builder()
        .setRotation(frame.rotation.toGoogleVisionRotation())
        .setImageData(
            ByteBuffer.wrap(frame.image),
            frame.size.width,
            frame.size.height,
            ImageFormat.NV21
        ).build()
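Putting the first tip into practice, a rough sketch of building the detector just once, for example in onCreate() (field and factory names follow the question's code; this is only an illustration, not the original project's code):
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Build the detector a single time and reuse it from every onPreviewFrame() call.
    detector = new FaceDetector.Builder(getApplicationContext())
            .setTrackingEnabled(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS)
            .setMode(FaceDetector.FAST_MODE)
            .build();
    detector.setProcessor(
            new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory()).build());
}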

Android onPreviewFrame process raw data

I am using Android Studio and I am creating an app that starts a camera preview and processes the image that is returned in the onPreviewFrame(byte[] data, Camera camera) callback method of android.hardware.Camera. The data that is returned is in NV21 format (the default for Camera class). In this method I receive a raw image that I need to convert to an RGB one in order to process it, since my application needs to process the colors from the image. I am using the following method to convert the byte[] array into a bitmap and then use the bitmap appropriately.
private Bitmap getBitmap(byte[] data, Camera camera) {
    // Convert to JPG
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    previewSize = camera.getParameters().getPreviewSize();
    yuvimage = new YuvImage(data, ImageFormat.NV21, previewSize.width, previewSize.height, null);
    yuvimage.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height), 80, baos);
    jdata = baos.toByteArray();
    Bitmap bitmap = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
    // rotate image
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), mtx, true);
    return bitmap;
}
This method works well and I get the desired results. But since my preview size is set to the maximum supported by the device the application is running on (currently 1920x1088 on my phone), this method takes too long, and as a result I can only process 1 to 2 images per second. If I remove the conversion method I can see that the onPreviewFrame method is called 10 to 12 times per second, meaning I can receive that many frames per second but can only process 1 or 2 because of the conversion.
Is there a faster way that I can use in order to receive a RGB matrix from the byte[] array that is passed?
Actually, in my case it was sufficient to remove the rotation of the bitmap, because decoding the bitmap took 250-350 ms and rotating it took ~500 ms. So I removed the rotation and changed the orientation of the scanning instead. Fortunately this isn't hard at all. Say I have a function that checks a given pixel for its color, and it looked like this:
boolean foo(int X, int Y) {
    // statements
    // ...
}
Now it looks like this:
boolean foo(int X, int Y) {
    int oldX = X;
    int oldY = Y;
    Y = bitmap.getHeight() - oldX;
    X = oldY;
    // statements
    // ...
}
Hope this helps. :)

Android Camera2 API Showing Processed Preview Image

The new Camera2 API is very different from the old one. The part of the pipeline where the manipulated camera frames are shown to the user confuses me. I know there is a very good explanation in Camera preview image data processing with Android L and Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from the ImageReader's callback function after some processing, while preserving efficiency and speed in the Camera2 API pipeline?
Example Flow :
camera.add_target(imagereader.getsurface) -> on imagereaders callback do some processing -> (show that processed image on screen?)
Workaround Idea: sending bitmaps to an ImageView every time a new frame is processed.
Edit after clarification of the question; original answer at bottom
Depends on where you're doing your processing.
If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.
If you're doing Java-level processing, that's really slow and you probably don't want to do it that way. But you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
Or as you say, draw to an ImageView every frame, but that'll be slow.
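For the RenderScript route above, the wiring might look roughly like this (a sketch only; the processing script that actually fills the output Allocation, plus width, height, and the Surface you pull from your SurfaceView or TextureView, are assumed to exist elsewhere):
RenderScript rs = RenderScript.create(context);

// Output allocation sized like the preview, flagged so its contents can be sent to a Surface.
Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width)
        .setY(height)
        .create();
Allocation outputAllocation = Allocation.createTyped(rs, rgbaType,
        Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);

// Route the allocation's output to the view's Surface.
outputAllocation.setSurface(surface);

// Per frame: run your script so it writes into outputAllocation, then push it to the screen.
outputAllocation.ioSend();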
Original answer:
If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as an int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
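For the YUV_420_888 case above, a hand-written conversion might look roughly like this (a minimal, unoptimized sketch assuming the usual BT.601 coefficients; it handles row and pixel strides, but a per-pixel Java loop like this is far too slow for full-resolution frames):
private static Bitmap yuv420ToBitmap(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    Image.Plane yPlane = image.getPlanes()[0]; // luma
    Image.Plane uPlane = image.getPlanes()[1]; // Cb
    Image.Plane vPlane = image.getPlanes()[2]; // Cr
    ByteBuffer yBuf = yPlane.getBuffer();
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int[] argb = new int[width * height];
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            // Chroma planes are subsampled 2x2, hence the row/2 and col/2 indexing.
            int yIndex = row * yPlane.getRowStride() + col * yPlane.getPixelStride();
            int uIndex = (row / 2) * uPlane.getRowStride() + (col / 2) * uPlane.getPixelStride();
            int vIndex = (row / 2) * vPlane.getRowStride() + (col / 2) * vPlane.getPixelStride();
            int y = yBuf.get(yIndex) & 0xFF;
            int u = (uBuf.get(uIndex) & 0xFF) - 128;
            int v = (vBuf.get(vIndex) & 0xFF) - 128;
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u + 0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
}

private static int clamp(int value) {
    return Math.max(0, Math.min(255, value));
}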
I had the same need, and wanted a quick and dirty manipulation for a demo. I was not worried about efficient processing for a final product. This was easily achieved using the following java solution.
My original code that connected the camera2 preview to a TextureView was commented out and replaced with a surface from an ImageReader:
// Get the surface of the TextureView on the layout
//SurfaceTexture texture = mTextureView.getSurfaceTexture();
//if (null == texture) {
// return;
//}
//texture.setDefaultBufferSize(mPreviewWidth, mPreviewHeight);
//Surface surface = new Surface(texture);
// Capture the preview to the memory reader instead of a UI element
mPreviewReader = ImageReader.newInstance(mPreviewWidth, mPreviewHeight, ImageFormat.JPEG, 1);
Surface surface = mPreviewReader.getSurface();
// This part stays the same regardless of where we render
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mCaptureRequestBuilder.addTarget(surface);
mCameraDevice.createCaptureSession(...
Then I registered a listener for the image:
mPreviewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image != null) {
            Image.Plane plane = image.getPlanes()[0];
            ByteBuffer buffer = plane.getBuffer();
            byte[] bytes = new byte[buffer.capacity()];
            buffer.get(bytes);
            Bitmap preview = BitmapFactory.decodeByteArray(bytes, 0, buffer.capacity());
            image.close();
            if (preview != null) {
                // This gets the canvas for the same mTextureView we would have connected to the
                // Camera2 preview directly above.
                Canvas canvas = mTextureView.lockCanvas();
                if (canvas != null) {
                    float[] colorTransform = {
                            0, 0, 0, 0, 0,
                            .35f, .45f, .25f, 0, 0,
                            0, 0, 0, 0, 0,
                            0, 0, 0, 1, 0};
                    ColorMatrix colorMatrix = new ColorMatrix();
                    colorMatrix.set(colorTransform); // Apply the monochrome green
                    ColorMatrixColorFilter colorFilter = new ColorMatrixColorFilter(colorMatrix);
                    Paint paint = new Paint();
                    paint.setColorFilter(colorFilter);
                    canvas.drawBitmap(preview, 0, 0, paint);
                    mTextureView.unlockCanvasAndPost(canvas);
                }
            }
        }
    }
}, mBackgroundPreviewHandler);

How to do effective coding in Camera PreviewCallback

I have to create a greyscale camera live preview. It is working fine, but I want to make the code faster.
private android.hardware.Camera.PreviewCallback previewCallback = new android.hardware.Camera.PreviewCallback() {
    public void onPreviewFrame(byte abyte0[], Camera camera) {
        Size size = cameraParams.getPreviewSize();
        int[] rgbData = YuvUtils.decodeGreyscale(abyte0, size.width, size.height);
        Bitmap bitmapung = Bitmap.createBitmap(rgbData, size.width, size.height, Bitmap.Config.ARGB_8888);
        Bitmap bitmapToSet = Bitmap.createBitmap(bitmapung, 0, 0, widthPreview, heightPreview, matrix, true);
        MyActivity.View.setBitmapToDraw(bitmapToSet);
    }
};
As I am creating the Bitmap object twice, can I do this job with a single Bitmap object?
Where (in which method, e.g. onResume or onCreate) should I get the camera size (width and height) just once, so that I don't need to fetch it in every callback?
And I know I should use an AsyncTask for this; I will do that after solving my current issue.
EDIT
I can create the bitmap when getting the camera size (width, height), so that a single Bitmap instance is associated with my class, but then I have to call setPixels on it:
bitmapung.setPixels(rgbData, offset, stride, x, y, size.width, size.height);
What should I set as the values of stride, offset and x, y? Please also explain them.
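For reference, a full-frame update would typically use an offset of 0, a stride equal to the preview width, and x, y of 0 (a sketch based on the standard Bitmap.setPixels(pixels, offset, stride, x, y, width, height) signature):
// offset = index of the first pixel to read from rgbData (0 for the whole array)
// stride = number of array entries per row of pixels (the preview width here)
// x, y   = top-left corner in the bitmap where writing starts (0, 0 for the full frame)
bitmapung.setPixels(rgbData, 0, size.width, 0, 0, size.width, size.height);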
Also have a look at these related questions:
Create custom Color Effect
Add thermal effect to yuvImage

How rotate picture from "onPictureTaken" without Out of memory Exception?

I have read many posts on this, but I can't find the right answer.
I am trying to do something like this:
@Override
public void onPictureTaken(byte[] paramArrayOfByte, Camera paramCamera) {
    try {
        Bitmap bitmap = BitmapFactory.decodeByteArray(paramArrayOfByte, 0,
                paramArrayOfByte.length);
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        FileOutputStream os = new FileOutputStream(Singleton.mPushFilePath);
        Matrix matrix = new Matrix();
        matrix.postRotate(90);
        Bitmap resizedBitmap = Bitmap.createBitmap(bitmap, 0, 0, width,
                height, matrix, false);
        resizedBitmap.compress(Bitmap.CompressFormat.JPEG, 95, os);
        os.close();
        ...
Is there a way to rotate the picture without using BitmapFactory? I want to rotate the picture without loss of quality!
Perhaps you can take the picture already rotated as you desire using Camera.setDisplayOrientation? Check Android camera rotate. Further, investigate Camera.Parameters.setRotation(). One of these techniques should do the trick for you.
Otherwise your code looks fine, except for using the parameter 95 on Bitmap.compress; use 100 for maximum quality (note that JPEG compression is still lossy).
To avoid the out-of-memory exception, use Camera.Parameters.setPictureSize() to take a lower-resolution picture (e.g. 3 Mpx); that is, do you really need an 8 Mpx photo? Make sure to use Camera.Parameters.getSupportedPictureSizes() to determine the supported sizes on your device.
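Combining those suggestions, a rough sketch might look like this (only an illustration; adapt the size-selection criterion to your needs):
Camera.Parameters params = camera.getParameters();

// Ask the camera to rotate the captured picture itself instead of decoding and
// rotating the Bitmap afterwards (on some devices this only sets the EXIF orientation flag).
params.setRotation(90);

// Pick a lower-resolution picture size from the supported list to keep memory down.
List<Camera.Size> sizes = params.getSupportedPictureSizes();
Camera.Size chosen = sizes.get(0);
for (Camera.Size s : sizes) {
    if (s.width * s.height < chosen.width * chosen.height) {
        chosen = s; // smallest supported size here; pick whatever resolution you actually need
    }
}
params.setPictureSize(chosen.width, chosen.height);
camera.setParameters(params);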
