Is there any equivalent for Camera.PreviewCallback in Camera2, from API 21, better than mapping to a SurfaceTexture and pulling a Bitmap? I need to be able to pull preview data off of the camera as YUV.
You can start from the Camera2Basic sample code from Google.
You need to add the surface of the ImageReader as a target to the preview capture request:
// set up a CaptureRequest.Builder with the output Surface
mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(surface);
mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
After that, you can retrieve the image in the ImageReader.OnImageAvailableListener:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Image image = null;
try {
image = reader.acquireLatestImage();
if (image != null) {
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
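// Note: decoding plane 0 directly only yields a Bitmap when the ImageReader
// uses ImageFormat.JPEG; YUV_420_888 planes must be converted first (see below).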
Bitmap bitmap = fromByteBuffer(buffer);
image.close();
}
} catch (Exception e) {
Log.w(LOG_TAG, e.getMessage());
}
}
};
To get a Bitmap from the ByteBuffer:
Bitmap fromByteBuffer(ByteBuffer buffer) {
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes, 0, bytes.length);
return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}
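Note that BitmapFactory can only decode the buffer like this when the ImageReader was created with ImageFormat.JPEG. Since the question asks for YUV, here is a sketch (the helper name toNv21 is mine) that repacks a YUV_420_888 Image into an NV21 byte[], honoring row and pixel strides; the result can then be wrapped in a YuvImage or fed to any NV21-aware pipeline:
byte[] toNv21(Image image) {
    int w = image.getWidth();
    int h = image.getHeight();
    byte[] nv21 = new byte[w * h * 3 / 2];
    // Copy the luma plane row by row (rows may be padded beyond the width).
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    int pos = 0;
    for (int row = 0; row < h; row++) {
        yBuf.position(row * yPlane.getRowStride());
        yBuf.get(nv21, pos, w);
        pos += w;
    }
    // NV21 stores chroma as interleaved V,U pairs after the luma plane.
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    for (int row = 0; row < h / 2; row++) {
        for (int col = 0; col < w / 2; col++) {
            nv21[pos++] = vBuf.get(row * vPlane.getRowStride() + col * vPlane.getPixelStride());
            nv21[pos++] = uBuf.get(row * uPlane.getRowStride() + col * uPlane.getPixelStride());
        }
    }
    return nv21;
}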
Yes, use the ImageReader class.
Create an ImageReader using the format ImageFormat.YUV_420_888 and your desired size (make sure you select a size that's supported by the camera device you're using).
Then use ImageReader.getSurface() for a Surface to provide to CameraDevice.createCaptureSession(), along with your other preview outputs, if any.
Finally, add the Surface from the ImageReader as a target of your preview capture request, before setting it as the repeating request on your capture session.
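A minimal sketch of that flow, assuming names like mCameraDevice, mPreviewSurface, mOnImageAvailableListener, and mBackgroundHandler from your own setup, and a 1280x720 size you have verified against the device's supported YUV output sizes:
ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
try {
    final CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(mPreviewSurface);     // normal on-screen preview
    builder.addTarget(reader.getSurface()); // YUV frames for processing
    mCameraDevice.createCaptureSession(
            Arrays.asList(mPreviewSurface, reader.getSurface()),
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    try {
                        session.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }
                @Override
                public void onConfigureFailed(CameraCaptureSession session) {
                    // handle configuration failure
                }
            }, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}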
Related
I'm trying to create a program that processes images directly from the Camera2 preview, and I keep running into a problem when it comes to actually processing the incoming images.
In my OnImageAvailableListener.onImageAvailable() callback, I'm getting an ImageReader object, from which I acquireNextImage() and pass that Image object into my helper function. From there, I convert it into a byte array and attempt to do the processing. Instead, every time I get to the part where I convert it to a Bitmap, BitmapFactory.decodeByteArray returns null, even though the byte array is a properly-formatted JPEG.
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader imageReader) {
Image image = imageReader.acquireNextImage();
ProcessBarcode(image);
image.close();
}
};
private void ProcessBarcode(Image image) {
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
int bufferSize = buffer.remaining();
byte[] bytes = new byte[bufferSize];
buffer.get(bytes);
FileOutputStream output = null;
try {
output = new FileOutputStream(mFile);
output.write(bytes);
output.close();
} catch (IOException e) {
// Do something clever
}
// This call FAILS
//Bitmap b = BitmapFactory.decodeByteArray(bytes, 0, buffer.remaining(), o);
// But this call WORKS?
Bitmap b = BitmapFactory.decodeFile(mFile.getAbsolutePath());
detector = new BarcodeDetector.Builder(getActivity())
.setBarcodeFormats(Barcode.EAN_13 | Barcode.ISBN)
.build();
if (b != null) {
Frame isbnFrame = new Frame.Builder().setBitmap(b).build();
SparseArray<Barcode> barcodes = detector.detect(isbnFrame);
if (barcodes.size() != 0) {
Log.d("Barcode decoded: ",barcodes.valueAt(0).displayValue);
}
else {
Log.d(TAG, "No barcode detected");
}
}
else {
Log.d(TAG, "No bitmap detected");
}
}
The ImageReader is set up like:
mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, 2);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
Basically, what I see is that if I use the byte array directly without first saving it to storage, the camera preview is fast and snappy, although the Bitmap is always null, so I'm not actually performing any processing. If I save the byte array to a file first, I get maybe 2-3 fps, but the processing works as I'm intending.
After the call to buffer.get(bytes), buffer.remaining() will return 0, since you just read through the whole ByteBuffer. remaining() tells you how many bytes are left between your current position and the limit of the buffer, and calls to get() move the position forward by the number of bytes read.
Therefore, when you do decodeByteArray(bytes, 0, buffer.remaining(), o), you don't actually decode any bytes.
If you try decodeByteArray(bytes, 0, bytes.length, o), does it work?
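In other words, a minimal sketch (o is the same BitmapFactory.Options instance from the question):
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes); // consumes the buffer: buffer.remaining() is now 0
// Use the array's own length, which does not change after the read:
Bitmap b = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, o);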
I am currently developing an application which uses Camera2.
I display the preview on a TextureView which is scaled and translated (I only need to display a part of the image). My problem is that I need to analyze the entire image.
What I have in my CameraDevice.StateCallback :
@Override
public void onOpened(CameraDevice camera) {
mCameraDevice = camera;
SurfaceTexture texture = mTextureView.getSurfaceTexture();
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface surface = new Surface(texture);
try {
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
} catch (CameraAccessException e){
e.printStackTrace();
}
try {
mCameraDevice.createCaptureSession(Arrays.asList(surface), mPreviewStateCallback, null);
mPreviewBuilder.addTarget(surface);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
and in my SurfaceTextureListener:
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
Thread thread = new Thread(new Runnable() {
@Override
public void run() {
my_analyze(mTextureView.getBitmap());
}
});
thread.start();
}
And the bitmap is only what I see in the TextureView (which is logical) but I want the entire image.
Is it possible ?
Thanks,
NiCLO
You can send the frames to a SurfaceTexture you create, rather than one that's part of TextureView, then get the pixels by rendering them to a GLES pbuffer and reading them back with glReadPixels().
If you can work in YUV rather than RGB, you can get to the data faster and more simply by directing the Camera2 output to an ImageReader.
Grafika has some useful examples, e.g. "texture from camera".
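If you take the ImageReader route, pulling the full-resolution luma plane for analysis can be as simple as the following sketch (reader is a YUV_420_888 ImageReader whose Surface you added to the capture session):
Image image = reader.acquireLatestImage();
if (image != null) {
    Image.Plane yPlane = image.getPlanes()[0]; // plane 0 is luma in YUV_420_888
    ByteBuffer yBuffer = yPlane.getBuffer();
    byte[] luma = new byte[yBuffer.remaining()];
    yBuffer.get(luma);
    int rowStride = yPlane.getRowStride();     // rows may be padded beyond the width
    // analyze luma[] here, honoring rowStride when indexing rows
    image.close();                             // always release the Image when done
}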
The following implementation works fine for me:
Bitmap bitmap = mTextureView.getBitmap(mWidth, mHeight);
int[] argb = new int[mWidth * mHeight];
// get ARGB pixels and then process them with an 8UC4 OpenCV conversion
bitmap.getPixels(argb, 0, mWidth, 0, 0, mWidth, mHeight);
// native method (NDK or CMake)
processFrame8UC4(argb, mWidth, mHeight);
A complete implementation for Camera API 2 with the NDK (OpenCV) is here: https://stackoverflow.com/a/49331546/471690
When trying to convert the byte[] from Camera.onPreviewFrame to a Bitmap using BitmapFactory.decodeByteArray, I get the error SkImageDecoder::Factory returned null.
Following is my code:
public void onPreviewFrame(byte[] data, Camera camera) {
Bitmap bmp=BitmapFactory.decodeByteArray(data, 0, data.length);
}
This has been hard to find! But since API 8, there is a YuvImage class in android.graphics. It's not an Image descendant, so all you can do with it is save it to JPEG, but you could save it to a memory stream and then load that into a Bitmap if that's what you need.
import android.graphics.YuvImage;
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
try {
Camera.Parameters parameters = camera.getParameters();
Size size = parameters.getPreviewSize();
YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
size.width, size.height, null);
File file = new File(Environment.getExternalStorageDirectory()
.getPath() + "/out.jpg");
FileOutputStream filecon = new FileOutputStream(file);
image.compressToJpeg(
new Rect(0, 0, image.getWidth(), image.getHeight()), 90,
filecon);
filecon.close();
} catch (IOException e) {
Toast toast = Toast
.makeText(getBaseContext(), e.getMessage(), Toast.LENGTH_LONG);
toast.show();
}
}
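If you need the frame in memory rather than in a file, a variant like this avoids the filesystem round trip (a sketch; data, width, and height come from your preview callback and parameters, and NV21 is the assumed preview format):
YuvImage image = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
image.compressToJpeg(new Rect(0, 0, width, height), 90, jpegStream);
byte[] jpegBytes = jpegStream.toByteArray();
// BitmapFactory understands JPEG, so this decode succeeds:
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);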
Since Android 3.0 you can use a TextureView and its SurfaceTexture to display the camera, and then use mTextureView.getBitmap() to retrieve a friendly RGB preview frame.
A very skeletal example of how to do this is given in the TextureView docs. Note that you'll have to set your application or activity to be hardware accelerated by putting android:hardwareAccelerated="true" in the manifest.
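Roughly, the pattern looks like this (a sketch along the lines of the docs example; error handling and orientation are omitted):
TextureView textureView = new TextureView(this);
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    private Camera camera;

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
        camera = Camera.open();
        try {
            camera.setPreviewTexture(st);
            camera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
        camera.stopPreview();
        camera.release();
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture st) {
        Bitmap frame = textureView.getBitmap(); // RGB copy of the latest preview frame
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) { }
});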
I found the answer after a long time. Here it is...
Instead of using BitmapFactory, I used my own method to decode the byte[] data into a valid image format. To decode it, you need to know what format the camera is producing, by calling camera.getParameters().getPreviewFormat(). This returns a constant defined by ImageFormat. After identifying the format, use the appropriate conversion.
In my case, the byte[] data was in the YUV format, so I looked for YUV to BMP conversion and that solved my problem.
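For reference, the widely circulated NV21-to-ARGB routine looks roughly like this (a sketch, not necessarily the exact conversion I used):
static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width;
        int u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) { // chroma is subsampled 2x2; read a new V,U pair per even column
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}
An int[] filled this way can be turned into a Bitmap with Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888).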
You can try this. This example sends camera frames to a server:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
try {
byte[] baos = convertYuvToJpeg(data, camera);
StringBuilder dataBuilder = new StringBuilder();
dataBuilder.append("data:image/jpeg;base64,").append(Base64.encodeToString(baos, Base64.DEFAULT));
mSocket.emit("newFrame", dataBuilder.toString());
} catch (Exception e) {
Log.d("########", "ERROR");
}
}
public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            previewSize.width, previewSize.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // set quality
    image.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height),
            quality, baos); // this line decreases the image quality
    return baos.toByteArray();
}
I am writing an app to capture the camera preview frames and convert them to a Bitmap in Android. Here is my code:
Camera.PreviewCallback previewCallback = new Camera.PreviewCallback()
{
public void onPreviewFrame(byte[] data, Camera camera)
{
try
{
BitmapFactory.Options opts = new BitmapFactory.Options();
Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);//,opts);
}
catch(Exception e)
{
}
}
};
mCamera = Camera.open();
mCamera.setPreviewCallback(previewCallback);
After I start the preview, the callback gets called with data, but the bitmap is null.
What did I do wrong when converting the byte array to a Bitmap?
In the onPreviewFrame() function, you should check the image format first.
This is an NV21 example.
public void onPreviewFrame(byte[] data, Camera camera)
{
Parameters parameters = camera.getParameters();
imageFormat = parameters.getPreviewFormat();
if (imageFormat == ImageFormat.NV21)
{
Rect rect = new Rect(0, 0, PreviewSizeWidth, PreviewSizeHeight);
YuvImage img = new YuvImage(data, ImageFormat.NV21, PreviewSizeWidth, PreviewSizeHeight, null);
OutputStream outStream = null;
File file = new File(NowPictureFileName);
try
{
outStream = new FileOutputStream(file);
img.compressToJpeg(rect, 100, outStream);
outStream.flush();
outStream.close();
}
catch (FileNotFoundException e)
{
e.printStackTrace();
}
catch (IOException e)
{
e.printStackTrace();
}
}
}
For another way to take pictures, check out this article: how to use camera in android
Have you tried decoding the preview frame data to RGB before you use BitmapFactory? The default preview format is YUV (NV21), which BitmapFactory cannot decode directly. Dave Manpearl's decode method can be found here:
Getting frames from Video Image in Android
Let me know if it works.
Cheers,
Paul
The code below is executed as the JPEG picture callback after takePicture() is called. If I save data to disk, it is a 1280x960 JPEG. I've tried to change the picture size, but that's not possible, as no smaller size is supported. JPEG is the only available picture format.
PictureCallback jpegCallback = new PictureCallback() {
public void onPictureTaken(byte[] data, Camera camera) {
FileOutputStream out = null;
Bitmap bm = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap sbm = Bitmap.createScaledBitmap(bm, 640, 480, false);
data.length is something like 500k as expected. After executing BitmapFactory.decodeByteArray(), bm has a height and width of -1, so it appears the operation is failing.
It's unclear to me if Bitmap can handle JPEG data. I would think not, but I have seen some code examples that seem to indicate it can.
Does data need to be in bitmap format before decoding and scaling?
If so, how to do this?
Thanks!
In your surfaceCreated(), you can set the camera's picture size, as shown in the code below:
public void surfaceCreated(SurfaceHolder holder) {
camera = Camera.open();
try {
camera.setPreviewDisplay(holder);
Camera.Parameters p = camera.getParameters();
p.set("jpeg-quality", 70);
p.setPictureFormat(PixelFormat.JPEG);
p.setPictureSize(640, 480);
camera.setParameters(p);
} catch (IOException e) {
e.printStackTrace();
}
}
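One caveat: setPictureSize() only accepts dimensions the hardware reports, and the asker mentioned that no smaller size is supported on their device. It is worth checking getSupportedPictureSizes() before hard-coding 640x480; a sketch:
Camera.Parameters p = camera.getParameters();
Camera.Size best = null;
for (Camera.Size s : p.getSupportedPictureSizes()) {
    // example policy: pick the smallest supported size
    if (best == null || s.width * s.height < best.width * best.height) {
        best = s;
    }
}
p.setPictureSize(best.width, best.height);
camera.setParameters(p);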