What format is the Android camera raw PictureCallback data in?

I am trying to use the image data from an Android camera capture. I would rather avoid JPEG, since I will eventually work with grayscale data. A YUV format is fine with me, since its first part is the grayscale (luminance) plane.
From the Android developer documentation:
public final void takePicture (Camera.ShutterCallback shutter,
Camera.PictureCallback raw, Camera.PictureCallback postview,
Camera.PictureCallback jpeg)
Added in API level 5
Triggers an asynchronous image capture. The camera service will
initiate a series of callbacks to the application as the image capture
progresses. The shutter callback occurs after the image is captured.
This can be used to trigger a sound to let the user know that image
has been captured. The raw callback occurs when the raw image data is
available (NOTE: the data will be null if there is no raw image
callback buffer available or the raw image callback buffer is not
large enough to hold the raw image). The postview callback occurs when
a scaled, fully processed postview image is available (NOTE: not all
hardware supports this). The jpeg callback occurs when the compressed
image is available. If the application does not need a particular
callback, a null can be passed instead of a callback method.
It talks about "the raw image data", but I can find no information anywhere about the format of that raw image data.
Do you have any idea about that?
I want to get the grayscale data of the captured picture while the data are still in phone memory, so that no time is spent writing/reading image files or converting between image formats. Or maybe I have to sacrifice something to get that?

After some search, I think I found the answer:
From the Android tutorial:
"The raw callback occurs when the raw image data is available (NOTE:
the data will be null if there is no raw image callback buffer
available or the raw image callback buffer is not large enough to hold
the raw image)."
See this link (2011/05/10)
Android: Raw image callback supported devices
Not all devices support raw pictureCallback.
https://groups.google.com/forum/?fromgroups=#!topic/android-developers/ZRkeoCD2uyc (2009)
Google engineer Dave Sparks said:
"The original intent was to return an uncompressed RGB565 frame, but
this proved to be impractical. " "I am inclined to deprecate that API
entirely and replace it with hooks for native signal processing. "
Many people have reported similar problems. See:
http://code.google.com/p/android/issues/detail?id=10910
Since many image-processing algorithms operate on grayscale images, I was looking forward to grayscale raw data in memory for each picture the Android camera produces.

You may have some luck with getSupportedPictureFormats(). If it lists some YUV format, you can call setPictureFormat() with it and the desired resolution, and, counterintuitively, you will receive the uncompressed high-quality image in the jpeg PictureCallback, from which the grayscale (a.k.a. luminance) plane can easily be extracted.
Most devices will only list JPEG as a valid choice. That's because they perform compression in hardware, on the camera side. Note that the data transfer from the camera to application RAM is often the bottleneck; if you can use the Stagefright hardware JPEG decoder, you will actually get the result faster.
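For illustration, here is a minimal, untested sketch of that approach with the old Camera API; on most devices the supported-format list will contain only JPEG, so the fallback branch is the common case:

import android.graphics.ImageFormat;
import android.hardware.Camera;
import java.util.Arrays;
import java.util.List;

private void captureGrayscale() {
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    List<Integer> formats = params.getSupportedPictureFormats();
    if (formats.contains(ImageFormat.NV21)) {
        // Rare case: the device can return uncompressed YUV pictures.
        params.setPictureFormat(ImageFormat.NV21);
        camera.setParameters(params);
    }
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera cam) {
            Camera.Size size = cam.getParameters().getPictureSize();
            if (cam.getParameters().getPictureFormat() == ImageFormat.NV21) {
                // In NV21 the first width*height bytes are the Y (luminance)
                // plane, i.e. exactly the grayscale image.
                byte[] gray = Arrays.copyOfRange(data, 0, size.width * size.height);
            } else {
                // Common case: data is JPEG and must be decoded first.
            }
        }
    });
}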

The biggest problem with using the raw callback is that many developers have trouble getting anything returned at all on many phones.
If you are satisfied with just the YUV array, your camera preview SurfaceView can implement Camera.PreviewCallback and you can add the onPreviewFrame method to your class. This method gives you direct access to the YUV array for every frame; you can fetch it whenever you choose.
EDIT: I should specify that I was assuming you were building a custom camera application in which you extended SurfaceView for a custom camera preview surface. To follow my advice you will need to build a custom camera. If you are trying to do things quickly, though, I suggest building a new bitmap from the JPEG data and implementing the grayscale conversion yourself.
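If you do go the custom-camera route, a minimal untested sketch of the PreviewCallback approach might look like this (assuming NV21, the default preview format):

import android.content.Context;
import android.hardware.Camera;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import java.io.IOException;
import java.util.Arrays;

class CameraPreview extends SurfaceView
        implements SurfaceHolder.Callback, Camera.PreviewCallback {

    private Camera camera;

    CameraPreview(Context context) {
        super(context);
        getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        try {
            camera.setPreviewDisplay(holder);
        } catch (IOException e) {
            // handle the error
        }
        camera.setPreviewCallback(this);
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // Preview frames arrive as NV21 by default: the first
        // width*height bytes are the Y (grayscale) plane.
        Camera.Size size = cam.getParameters().getPreviewSize();
        byte[] gray = Arrays.copyOfRange(data, 0, size.width * size.height);
        // ... process the grayscale frame here ...
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        camera.setPreviewCallback(null);
        camera.stopPreview();
        camera.release();
    }
}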

Related

Saving Camera2 output stream in byte[]

I am building an application with video chat functionality, using Camera2 for API >= 21. The camera works. Now I need to receive data from my device's camera, write it into a byte[], and then pass that array to a native method that processes the images and transmits them to the other party. The video transfer functionality is written in C++. My task is to correctly record the video into a byte[] (because that is the argument the native method accepts, and it carries out all further steps for displaying the video).
Whenever I try to add something for this, the camera stops working.
Please help me implement this task correctly and as simply as possible. I tried to use MediaRecorder, but it does not write data into a byte[]. I looked at the standard Google examples such as Camera2Basic and Camera2Video, and tried to set up MediaRecorder as in those tutorials, but it does not work.
ImageReader, as I understand it, is used only for still images.
MediaCodec is too complicated; I could not really understand it.
What is the best and easiest way to obtain images from my device's camera and record them into a byte[]? If possible, give me a code sample or a resource where I can see one. Thanks.
You want to use an ImageReader; it's the intended replacement for the old camera API's preview callbacks (as well as for taking JPEG or RAW images, the other common use).
Use the YUV_420_888 format.
ImageReader's Images use ByteBuffer instead of byte[], but you can pass the ByteBuffer directly through JNI and get a void* pointer to each plane of the image by using standard JNI methods (GetDirectBufferAddress). That is much more efficient than copying to a byte[] first.
Edit: A few more details:
This is assuming you have your own software video encoding/network transmission library, and you don't want to use Android's hardware video encoders. (If you do, you need to use the MediaCodec class).
Set up preview View (SurfaceView or TextureView), set its size to be the desired preview resolution.
Create ImageReader with YUV_420_888 format and the desired recording resolution. Connect a listener to it.
Open the camera device (this can be done in parallel with the previous steps).
Get a Surface from both the View and the ImageReader, and use them both to create a camera capture session.
Once the session is created, create a capture request builder with TEMPLATE_RECORD (to optimize the settings for a recording use case), and add both Surfaces as targets for the request.
Build the request and set it as the repeating request.
The camera will start pushing buffers into both the preview and the ImageReader. You'll get an onImageAvailable callback whenever a new frame is ready. Acquire the latest Image from the ImageReader's queue, get the three ByteBuffers that make up the YCbCr image, and pass them through JNI to your native code.
Once done with processing an Image, be sure to close it. For efficiency, there is a fixed number of Images in the ImageReader, and if you don't return them, the camera will stall since it will have no buffers to write to. If you need to process multiple frames in parallel, you may need to increase the ImageReader constructor's maxImages argument. A sketch of the ImageReader side follows after this list.
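For illustration, a rough, untested sketch of the ImageReader side of those steps; processFrameNative is a hypothetical native method, and width, height, and backgroundHandler are assumed to come from the session setup described above:

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

ImageReader reader = ImageReader.newInstance(
        width, height, ImageFormat.YUV_420_888, /* maxImages */ 4);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        Image.Plane[] planes = image.getPlanes(); // plane 0 = Y, 1 = Cb (U), 2 = Cr (V)
        // These are direct ByteBuffers: native code can obtain void* pointers
        // to them via GetDirectBufferAddress, with no copy into a byte[].
        processFrameNative(
                planes[0].getBuffer(), planes[1].getBuffer(), planes[2].getBuffer(),
                planes[0].getRowStride(), planes[1].getRowStride(),
                planes[1].getPixelStride(), image.getWidth(), image.getHeight());
        image.close(); // return the buffer, or the camera will stall
    }
}, backgroundHandler);

// Hypothetical JNI entry point implemented by the C++ library:
private native void processFrameNative(ByteBuffer y, ByteBuffer u, ByteBuffer v,
        int yRowStride, int uvRowStride, int uvPixelStride, int width, int height);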

Camera2 get continuous access to camera preview images

I want to extend an app from Camera1 to Camera2 depending on the API. One core mechanism of the app consists of taking preview pictures at a rate of about 20 pictures per second. With Camera1 I achieved that by creating a SurfaceView, adding a Callback to its holder, and, once the surface was created, accessing the preview pictures via periodic setOneShotPreviewCallback calls. That was pretty easy and reliable.
Now, studying Camera2, I came "from the end" and managed to convert YUV420_888 to Bitmap (see YUV420_888 to Bitmap Conversion). However, I am now struggling with the "capture technique". From the Google example I see that you need to make a "setRepeating" CaptureRequest with CameraDevice.TEMPLATE_PREVIEW for displaying the preview, e.g. on a surface view. That is fine. However, in order to take an actual picture I need to make another capture request with (this time) builder.addTarget(imageReader.getSurface()), i.e. the data will be available within the onImageAvailable method of the ImageReader.
The problem: creating the capture request is a rather heavy operation, taking about 200 ms on my device. Therefore, using a capture request (whether with TEMPLATE_STILL_CAPTURE or TEMPLATE_PREVIEW) cannot possibly be a feasible approach for capturing 20 images per second, as I need. The proposals I found here on SO are primarily based on the (only moderately instructive) Google example, which I don't really understand...
I feel the solution must be to feed the ImageReader with a continuous stream of preview pictures, from which they can be picked at a given frequency. Can someone please give some guidance on how to implement this? Many thanks.
If you want to send a buffer to both the preview SurfaceView and to your YUV ImageReader for every frame, simply add both Surfaces to the repeating preview request as targets.
Generally, a capture request can target any subset (or all) of the
session's configured output targets.
Also, if you do want to only capture an occasional frame to your YUV ImageReader with .capture(), you don't have to recreate the capture request builder each time; just call .build() again on the same builder, or just reuse the actual constructed CaptureRequest if you're not changing any settings.
Even with this occasional capture, you probably want to include the preview Surface as a target in the YUV capture request, so that there's no skipped frame in the displayed preview.
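A minimal sketch of that repeating request (assuming previewSurface, imageReader, captureSession, and backgroundHandler have already been set up as in the Camera2 samples):

CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);           // SurfaceView/TextureView output
builder.addTarget(imageReader.getSurface()); // YUV_420_888 ImageReader output
// Every preview frame now goes to both targets; grab frames from the
// ImageReader at whatever rate you need and close() each Image when done.
captureSession.setRepeatingRequest(builder.build(), null, backgroundHandler);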

Android- capture highest resolution image on onPreviewFrame callback

I'm using PreviewDisplay to create a custom camera app, and the onPreviewFrame callback to manipulate each frame (in my case, to send an image to the server once every pre-defined number of frames while keeping a smooth video stream displayed to the user).
The highest resolution returned by getSupportedPreviewSizes is lower than the best resolution of images captured by the built-in camera application.
Is there any way to get frames at the best resolution the built-in camera application achieves?
Try getSupportedVideoSizes(), also getPreferredPreviewSizeForVideo().
Note that in some cases, the camera may be able to produce higher res frames, and pipe them to the hardware encoder for video recording, but not have bandwidth to push them to onPreviewFrame() callback.
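For example, a quick way to inspect what the device reports (note that getSupportedVideoSizes() may return null, in which case preview and video sizes are the same):

Camera.Parameters params = camera.getParameters();
List<Camera.Size> videoSizes = params.getSupportedVideoSizes(); // may be null
Camera.Size preferred = params.getPreferredPreviewSizeForVideo();
if (videoSizes != null) {
    for (Camera.Size s : videoSizes) {
        Log.d("CameraSizes", "video: " + s.width + "x" + s.height);
    }
}
if (preferred != null) {
    Log.d("CameraSizes", "preferred for video: "
            + preferred.width + "x" + preferred.height);
}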

How to Get Full Resolution & Uncompressed Image Data with Android Camera?

I want to get full-resolution (and not compressed) image data, so that I can do some image processing.
As far as I know, the Android API takePicture(shutter, raw, jpeg) can do something like this, but what I need is not compressed JPEG data; I need uncompressed image data instead. Also, I know the raw callback doesn't work, according to some posts I have read.
I also found the onPreviewFrame API, but the largest picture size I got from it is 1280*720 (using setPreviewSize), while the original image I capture from the camera has a resolution of 1952*3264.
Also, Intent.putExtra(MediaStore.EXTRA_OUTPUT, uri) might help, but it is likely that the file at that URI has to be a JPEG file, which is a compressed format.
So is there any way to get full-size (the same as captured), uncompressed (not JPEG) image data?
The documentation for takePicture() clearly says that raw data can be requested, but the callback is optional. Today, most devices do not support the raw callback. This should not be a surprise: modern cameras perform JPEG compression in hardware, and the memory bus between the camera and the application processor cannot handle 24 megabytes of raw data fast enough (for a modest 8-megapixel camera).
Avoid the temptation to use the preview callback instead of takePicture(): even at the same resolution, the image quality of a still picture will be better. A preview image may have imprecise autofocus, stabilization, exposure, and even white balance.
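In practice that advice boils down to taking the hardware-compressed JPEG and decoding it back to uncompressed pixels in memory. A minimal, untested sketch (be careful with heap usage at full resolution):

camera.takePicture(null, /* raw: usually unsupported */ null,
        new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] jpegData, Camera cam) {
        // Decode the JPEG back into an uncompressed, full-resolution
        // bitmap in memory; no raw callback required.
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
        // ... run your image processing on the bitmap's pixels ...
    }
});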

How to capture an Android Camera image without saving a file to the phone/sdcard?

I would like to capture an image with the Android Camera, but because the image may contain sensitive data I don't want the image saved to the phone or SD card. Instead, I would like a (compressed) Base64 string that would be sent to the server immediately.
In PhoneGap it seems files are saved to various places automatically.
Natively, I was never able to get the image stream: in onJpegPictureTaken() the byte[] parameter was always null.
Can anyone suggest a way?
See Camera.PreviewCallback.onPreviewFrame() and YuvImage.compressToJpeg() for a way to get a byte array you can convert into a bitmap.
Note that YuvImage.compressToJpeg() is only available in SDK 8 or later, I think. For earlier versions you'll need to implement your own YUV decoder. There are several examples around, or I could provide you with one.
Those two methods will let you get a camera picture in memory without ever persisting it to the SD card. Beware that bitmaps at most camera preview sizes will chew up memory pretty quickly; you'll need to be very careful to recycle the bitmaps, and probably also scale them down a bit, to do much with them and still fit inside the native heap restrictions on most devices.
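A minimal, untested sketch of that flow (assuming the default NV21 preview format), including the Base64 step the question asks for:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.util.Base64;
import java.io.ByteArrayOutputStream;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    // Wrap the NV21 preview bytes and compress to JPEG entirely in memory.
    YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
            size.width, size.height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, out);
    byte[] jpegBytes = out.toByteArray();
    // Base64-encode for immediate upload; nothing touches the SD card.
    String base64 = Base64.encodeToString(jpegBytes, Base64.NO_WRAP);
}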
Good luck!
