Camera2 get continuous access to camera preview images - android

I want to extend an app from Camera1 to Camera2, depending on the API level. One core mechanism of the app is taking preview pictures at a rate of about 20 per second. With Camera1 I did this by creating a SurfaceView, adding a Callback to its holder and, once the surface was created, grabbing the preview pictures via periodic setOneShotPreviewCallback calls. That was pretty easy and reliable.
Now, when studying Camera2, I came "from the end" and managed to convert YUV420_888 to Bitmap (see YUV420_888 to Bitmap Conversion). However, I am now struggling with the capture technique itself. From the Google example I see that you make a "setRepeating" CaptureRequest with CameraDevice.TEMPLATE_PREVIEW to display the preview, e.g. on a SurfaceView. That is fine. However, in order to take an actual picture I need to make another capture request with (this time) builder.addTarget(imageReader.getSurface()), i.e. the data becomes available within the onImageAvailable method of the ImageReader.
The problem: creating a CaptureRequest is a rather heavy operation, taking about 200 ms on my device. Therefore, issuing a capture request per frame (whether with template STILL_CAPTURE or PREVIEW) cannot possibly be a feasible approach for capturing 20 images per second, as I need. The proposals I found here on SO are primarily based on the (not particularly instructive) Google example, which I don't really understand...
I feel the solution must be to feed the ImageReader with a continuous stream of preview images, which can then be picked from it at the desired frequency. Can someone please give some guidance on how to implement this? Many thanks.

If you want to send a buffer to both the preview SurfaceView and to your YUV ImageReader for every frame, simply add both Surfaces to the repeating preview request as targets.
Generally, a capture request can target any subset (or all) of the session's configured output targets.
Also, if you only want to capture an occasional frame to your YUV ImageReader with .capture(), you don't have to recreate the capture request builder each time; just call .build() again on the same builder, or reuse the constructed CaptureRequest itself if you're not changing any settings.
Even with this occasional capture, you probably want to include the preview Surface as a target in the YUV capture request, so that there's no skipped frame in the displayed preview.
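A minimal sketch of the two-target repeating request, assuming cameraDevice, captureSession, previewSurface, imageReader, and backgroundHandler have already been set up and the session was created with both Surfaces:

// Feed every preview frame to both the display and the YUV ImageReader.
// (Throws CameraAccessException; wrap in try/catch in real code.)
CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);
builder.addTarget(imageReader.getSurface());

// One repeating request; onImageAvailable now fires at the preview frame rate,
// with no per-frame request creation.
captureSession.setRepeatingRequest(builder.build(), null, backgroundHandler);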

(Camera2 API) Can I run 2 ImageReader instances of different configs at the same time?

I am modifying the TF Lite sample app (Java) for object detection. It has a live video feed that shows boxes around common objects, and it takes in ImageReader frames at 640x480.
I want to use these bounds to crop the items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run 2 instances of ImageReader, one for a low-quality video feed (used by TF Lite) and one for capturing full-quality still images? I also can't attach the second one to any Surface for user preview; the picture has to be captured in the background.
In this Medium article (https://link.medium.com/2oaIYoY58db) it says: "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX, yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-resolution still images. Depending on the hardware, this switch may take time, or it may be really quick.
In your case, I would run a preview capture session at maximum resolution and shrink (resize) the frames to feed into TFLite when necessary.
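As a rough sketch of that approach, assuming the full-resolution frame has already been converted to a Bitmap, and where fullFrame, runObjectDetection, and box are placeholders for your decoded frame, the sample app's detector call, and its returned bounding box (a RectF):

// Feed a shrunken copy to the detector, keep the full-resolution frame for cropping.
Bitmap detectorInput = Bitmap.createScaledBitmap(fullFrame, 640, 480, true);
runObjectDetection(detectorInput); // hypothetical TFLite inference call

// Scale the detector's box back up to crop from the original, high-quality frame.
float sx = fullFrame.getWidth() / 640f;
float sy = fullFrame.getHeight() / 480f;
Bitmap crop = Bitmap.createBitmap(fullFrame,
        Math.round(box.left * sx), Math.round(box.top * sy),
        Math.round(box.width() * sx), Math.round(box.height() * sy));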

Android taking images without preview using new "Camera2" api

I am new to Android. I want to take pictures in the background, without a SurfaceView/preview. I have searched online, but the methods I found don't seem to work for me. I want to use the latest Camera2 API.
Just create an ImageReader, and a camera capture session with that ImageReader's Surface. No need to have a SurfaceView or TextureView as well.
You'll need to stream some number of captures before starting to save any, though, to ensure that the auto-exposure/focus/etc routines of the camera have time to converge.
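A minimal sketch of the preview-less setup, with illustrative sizes, a hypothetical handleFrame routine, and a guessed number of warm-up frames:

// Background capture: the ImageReader's Surface is the session's only output.
final ImageReader reader = ImageReader.newInstance(1920, 1080,
        ImageFormat.YUV_420_888, 3);

reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    int frames = 0;

    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        // Skip the first frames so auto-exposure/focus can converge;
        // 30 frames (~1 s) is a device-dependent guess.
        if (++frames > 30) {
            handleFrame(image); // hypothetical: convert/save the image
        }
        image.close();
    }
}, backgroundHandler);

// Then create the capture session with Arrays.asList(reader.getSurface())
// as its only output, and set a repeating request that targets it.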

Saving Camera2 output stream in byte []

I am supporting an application with video-chat functions, using Camera2 for API >= 21. The camera works. Now I need to receive data from the camera of my device and write it into a byte[], and then pass the array to a native method for processing and transmitting the images to the other party. The video-transfer functionality is written in C++. My task is to correctly record video into a byte[], because that is the argument the native method accepts, and it carries out all further actions on the video display.
Whenever I start adding something, the camera stops working.
Please help me implement this task as correctly and simply as possible. I tried to use MediaRecorder, but it does not write data into a byte[]. I looked at the standard Google examples such as Camera2Basic and Camera2Video and tried to use MediaRecorder as in those tutorials, but it does not work.
ImageReader, as I understand it, is used only for still images.
MediaCodec is too complicated; I could not really understand it.
What is the best and easiest way to obtain images from the camera of my device and record them into a byte[]? If possible, give me a code sample or a resource where I can see one. Thanks.
You want to use an ImageReader; it's the intended replacement for the old camera API's preview callbacks (as well as for taking JPEG or RAW images, the other common uses).
Use the YUV_420_888 format.
ImageReader's Images use ByteBuffer instead of byte[], but you can pass the ByteBuffer directly through JNI and get a void* pointer to each plane of the image by using standard JNI methods. That is much more efficient than copying to a byte[] first.
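On the Java side that hand-off can look like the sketch below; processFrame is a hypothetical native method, and the C++ side would obtain plane pointers with the standard GetDirectBufferAddress JNI call:

// Hand the three YUV planes to native code without copying.
private native void processFrame(ByteBuffer y, ByteBuffer u, ByteBuffer v,
        int width, int height, int yRowStride, int uvRowStride, int uvPixelStride);

void onFrame(Image image) {
    Image.Plane[] planes = image.getPlanes();
    processFrame(planes[0].getBuffer(), planes[1].getBuffer(), planes[2].getBuffer(),
            image.getWidth(), image.getHeight(),
            planes[0].getRowStride(), planes[1].getRowStride(),
            planes[1].getPixelStride());
    image.close(); // return the buffer to the ImageReader
}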
Edit: A few more details:
This is assuming you have your own software video encoding/network transmission library, and you don't want to use Android's hardware video encoders. (If you do, you need to use the MediaCodec class).
1. Set up your preview View (SurfaceView or TextureView) and set its size to the desired preview resolution.
2. Create an ImageReader with the YUV_420_888 format and the desired recording resolution, and connect a listener to it.
3. Open the camera device (this can be done in parallel with the previous steps).
4. Get a Surface from both the View and the ImageReader, and use both to create a camera capture session.
5. Once the session is created, create a capture request builder with TEMPLATE_RECORD (to optimize the settings for a recording use case), and add both Surfaces as targets for the request.
6. Build the request and set it as the repeating request (a code sketch follows below).
The camera will start pushing buffers into both the preview and the ImageReader. You'll get an onImageAvailable callback whenever a new frame is ready. Acquire the latest Image from the ImageReader's queue, get the three ByteBuffers that make up the YCbCr image, and pass them through JNI to your native code.
Once done with processing an Image, be sure to close it. For efficiency, there is a fixed number of Images in the ImageReader, and if you don't return them, the camera will stall because it has no buffers to write to. If you need to process multiple frames in parallel, you may need to increase the ImageReader constructor's maxImages argument.
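Putting the steps together, a rough sketch (names such as previewSurface, recordWidth, and backgroundHandler are assumptions; error handling trimmed):

// Steps 2-6: one session feeding both the preview and the YUV ImageReader.
final ImageReader reader = ImageReader.newInstance(recordWidth, recordHeight,
        ImageFormat.YUV_420_888, 3);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        // Pass the three plane ByteBuffers through JNI, as sketched above.
        image.close(); // always return the buffer to the reader
    }
}, backgroundHandler);

cameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, reader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                try {
                    CaptureRequest.Builder b = cameraDevice
                            .createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                    b.addTarget(previewSurface);
                    b.addTarget(reader.getSurface());
                    session.setRepeatingRequest(b.build(), null, backgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) { }
        }, backgroundHandler);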

Getting the RAW camera data on android?

I want to work with the bit array of each incoming frame, basically tapping into the YUV format received from the camera sensor for each frame and doing some processing on it.
I'm new to Java/Android and learning as I go, so some of my questions are pretty basic, but I couldn't find any answers that suit my needs.
Q1: How do I get a bit array of each frame received by the camera sensor? (how do I save the YUV byte stream for further use?)
Q2: How do I set things up so that for each new frame received, a new data array is delivered for processing?
Q3: Do I have to set up a preview to do that, or can I tap straight into a buffer holding the raw data from the open camera?
Q4: Will a preview slow down the process (of receiving new frames)?
Some further explanation, if needed: the idea is to create one-way communication between a flickering LED light and a smartphone. By pointing the phone's camera at the LED, a real-time process will register the slight changes and decode them back into the originally sent data. To do so, I plan to receive the YUV data for each frame, strip it down to the Y part, and decide for each frame whether the light is on or off.
Yes, that's the Camera API. API level 21 and newer support the camera2 API, which can give you faster response, but this depends on the device. I would still recommend using the deprecated older API if your goal is maximum reach.
Usually, the Android camera produces the NV21 format, from which it is very easy to extract the 8bpp luminance.
Android requires a live preview if you want to capture camera frames. There are quite a few workarounds to keep the preview hidden from the end user, but this is not supported, and any such trick may fail on the next device or the next system upgrade. But don't worry: the live preview does not delay your processing at all, because it is all done in a separate hardware channel.
All in all, you can expect to receive 30 fps on an average device when you use Camera.setPreviewCallbackWithBuffer() and do everything correctly. High-end devices with a full implementation of the camera2 API may deliver higher frame rates. Samsung has published their own camera SDK; use it if you need some special features of Samsung devices.
On a multi-core device you can offload image processing to a thread pool, but the frame rate will still probably be limited by the camera hardware.
Note that you can perform some limited image processing on the GPU, by applying shaders to the texture that is acquired from the camera.
Assuming you have done the basics and have a camera preview and a Camera object, you can call:
Camera.PreviewCallback callback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Do your processing here, using the byte[] called "data".
    }
};
And then:
mCamera.setPreviewCallback(callback);
If Camera.Parameters.setPreviewFormat() is never called, the preview frames default to the NV21 format.
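Since the answer recommends Camera.setPreviewCallbackWithBuffer(), here is a sketch of the buffered variant combined with reading the Y (luminance) plane of an NV21 frame; the brightness threshold is an illustrative assumption:

Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
// NV21 is 12 bits per pixel: width*height luma bytes plus half that for chroma.
// Add more buffers if you need back-to-back frames.
mCamera.addCallbackBuffer(new byte[size.width * size.height * 3 / 2]);

final int pixels = size.width * size.height;
mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // In NV21 the first width*height bytes are the Y (luminance) plane.
        long sum = 0;
        for (int i = 0; i < pixels; i++) {
            sum += data[i] & 0xFF; // bytes are signed in Java
        }
        boolean ledOn = (sum / pixels) > 128; // illustrative threshold
        // ... decode the signal from ledOn ...
        camera.addCallbackBuffer(data); // hand the buffer back for reuse
    }
});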

How to capture a screenshot programmatically with Lollipop

The Media Projection package is new in Lollipop, and allows an app to capture the device's screen in real time for streaming to video. I was hoping this could also be used to capture a single still screenshot, but so far I have not been successful. Of course, the first frame of a captured video could work, but I'm aiming for a perfect, lossless screenshot matching the pixel resolution of the device. A still from a captured video cannot provide that.
I've tried a lot of things, but the closest I came to a solution was to first launch an invisible activity. This activity then follows the API example for starting screen capture, which can include asking the user's permission. Once screen capture is enabled, the screen image is live in a SurfaceView. However, I cannot find a way to capture a bitmap from the SurfaceView. There are lots of questions and discussions about this, but no solutions seem to work, and there is some evidence that it is impossible.
Any ideas?
You can't capture the contents of a SurfaceView.
What you can do is replace the SurfaceView with a Surface object that has an in-process consumer, such as SurfaceTexture. In the android-ScreenCapture example linked from the question, mMediaProjection.createVirtualDisplay() wants a Surface to send images to. If you create a SurfaceTexture, and use that to construct a Surface, the images generated by the MediaProjection will be available from an OpenGL ES texture.
If GLES isn't your thing, the ImageReader class can be used. It also provides a Surface that can be passed to createVirtualDisplay(), but it's easier to access the pixels from software.
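A rough sketch of the ImageReader route, assuming projection is a MediaProjection already granted via the user-consent flow, and that width, height, dpi, and handler are placeholders describing the display and your callback thread:

// Lossless single-frame screen capture via MediaProjection + ImageReader.
final ImageReader reader = ImageReader.newInstance(width, height,
        PixelFormat.RGBA_8888, 2);

projection.createVirtualDisplay("screenshot", width, height, dpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        reader.getSurface(), null, handler);

reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireLatestImage();
        if (image == null) return;
        Image.Plane plane = image.getPlanes()[0];
        // The row stride may be wider than width*4, so size the Bitmap to the
        // stride, then crop the padding off.
        int stridePixels = plane.getRowStride() / plane.getPixelStride();
        Bitmap padded = Bitmap.createBitmap(stridePixels, height,
                Bitmap.Config.ARGB_8888);
        padded.copyPixelsFromBuffer(plane.getBuffer());
        Bitmap screenshot = Bitmap.createBitmap(padded, 0, 0, width, height);
        image.close();
        // ... save or display `screenshot` ...
    }
}, handler);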
