I want to work with the bit array of each incoming frame, basically tapping into the YUV data received from the camera sensor for each frame and doing some processing on it.
I'm new to Java/Android and learning as I go, so some of my questions are pretty basic, but I couldn't find any answers that suit my needs.
Q1: How do I get a bit array of each frame received by the camera sensor? (how do I save the YUV byte stream for further use?)
Q2: How do I arrange for a new data array to be delivered for processing with each new frame received?
Q3: Do I have to set a preview to do that or could I tap straight to a buffer holding the raw data from the open camera?
Q4: Will a preview slow down the process (of receiving new frames)?
Some further explanation if needed: the idea is to create one-way communication between a flickering LED light and a smartphone. By pointing the phone's camera at the LED, a real-time process will register the slight changes and decode them back to the originally sent data. To do so I plan to receive the YUV data for each frame, strip it down to the Y part, and decide for each frame whether the light is on or off.
Yes, that's the Camera API. API level 21 and newer support the camera2 API, which can give you a faster response, but this depends on the device. I would still recommend the deprecated older API if your goal is maximum reach.
Usually, the Android camera produces the NV21 format, from which it is very easy to extract the 8bpp luminance plane.
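For illustration, here is a minimal sketch (the method name and parameters are hypothetical) that pulls the luminance plane out of an NV21 preview frame and averages it; thresholding this average is one way to make the per-frame on/off decision described in the question:
// Minimal sketch: in NV21 the Y (luminance) plane comes first, one byte per pixel.
// "data" is the frame from onPreviewFrame(); width/height are the preview size.
static double averageLuminance(byte[] data, int width, int height) {
    int pixels = width * height;
    long sum = 0;
    for (int i = 0; i < pixels; i++) {
        sum += data[i] & 0xFF; // bytes are signed in Java; mask to get 0..255
    }
    return (double) sum / pixels;
}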
Android requires a live preview if you want to capture camera frames. There are quite a few workarounds to keep the preview hidden from the end user, but this is not supported, and any such trick may fail on the next device or on the next system upgrade. But don't worry: the live preview does not delay your processing speed at all, because it is all done in a separate hardware channel.
All in all, you can expect to receive 30 fps on an average device when you use Camera.setPreviewCallbackWithBuffer() and do everything correctly. High-end devices that have a full implementation of the camera2 API may deliver higher frame rates. Samsung has published its own camera SDK; use it if you need some special features of Samsung devices.
On a multi-core device, you can offload image processing to a thread pool, but still the frame rate will probably be limited by camera hardware.
Note that you can perform some limited image processing on the GPU by applying shaders to the texture acquired from the camera.
Assuming you have done the basics and you have a camera preview and a Camera object, you can call:
Camera.PreviewCallback callback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Do your processing here... use the byte[] called "data"
    }
};
And then:
mCamera.setPreviewCallback(callback);
If Camera.Parameters.setPreviewFormat() is never called, the preview frames will arrive in the default NV21 encoding.
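Since setPreviewCallbackWithBuffer() was mentioned above, here is a hedged sketch of the buffered variant, which avoids a fresh allocation for every frame; it assumes mCamera is an open android.hardware.Camera and that the preview format is the NV21 default:
Camera.Parameters params = mCamera.getParameters();
Camera.Size size = params.getPreviewSize();
// NV21 uses 12 bits per pixel: a full-resolution Y plane plus a subsampled VU plane.
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
mCamera.addCallbackBuffer(new byte[bufferSize]);

mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Process "data" here (or hand it to a worker thread)...
        // ...then give the buffer back so the camera can reuse it.
        camera.addCallbackBuffer(data);
    }
});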
I am modifying (Java) the TF Lite sample app for object detection. It has a live video feed that shows boxes around common objects. It takes in ImageReader frames at 640*480.
I want to use these bounds to crop the items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run 2 instances of ImageReader, one for the low-quality video feed (used by TF Lite) and one for capturing full-quality still images? I also can't pin the 2nd one to any Surface for user preview; the picture has to be captured in the background.
In this medium article (https://link.medium.com/2oaIYoY58db) it says "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX, yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-resolution still images. Depending on the hardware, this 'switch' may take time or be really quick.
In your case, I would run a preview capture session at maximum resolution, and shrink (resize) the frames to feed into TFLite when necessary.
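As a rough illustration of that suggestion (variable names like fullResFrame and the box coordinates are hypothetical), the full-resolution frame would be downscaled for the detector and the original kept for the crops:
// Downscale the full-resolution frame to the detector's 640x480 input,
// keeping the original around for high-quality crops.
Bitmap detectorInput = Bitmap.createScaledBitmap(fullResFrame, 640, 480, true);
// Run TF Lite on detectorInput, then map each detection box back to the
// full-resolution frame by scaling its coordinates by
// (fullResFrame.getWidth() / 640f) and (fullResFrame.getHeight() / 480f), and crop:
Bitmap crop = Bitmap.createBitmap(fullResFrame, left, top, cropWidth, cropHeight);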
I am maintaining an application with video-chat functions, using Camera2 for API >= 21. The camera works. Now I need to receive data from my device's camera and write it into a byte[], and then pass that array to a native method for processing and transmitting the image to the other party. The video-transfer functionality is written in C++. My task is to correctly record the video into a byte[] (because that is the argument the native method accepts, and it carries out all the further steps for displaying the video).
If I start adding anything, the camera stops working.
Please help me implement this task as correctly and simply as possible. I tried to use MediaRecorder, but it does not write data into a byte[]. I looked at the standard Google examples such as Camera2Basic and Camera2Video and tried to use MediaRecorder as in those tutorials, but it does not work.
As I understand it, ImageReader is used only for still images.
MediaCodec is too complicated; I could not really understand it.
What is the best and easiest way to obtain images from my device's camera and record them into a byte[]? If possible, give me a code sample or a resource where I can see it. Thanks.
You want to use an ImageReader; it's the intended replacement of the old camera API preview callbacks (as well as for taking JPEG or RAW images, the other common use).
Use the YUV_420_888 format.
ImageReader's Images use ByteBuffer instead of byte[], but you can pass the ByteBuffer directly through JNI and get a void* pointer to each plane of the image by using standard JNI methods. That is much more efficient than copying to a byte[] first.
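A hedged sketch of the Java side of that hand-off (the native method name and signature are made up); on the C++ side, env->GetDirectBufferAddress() on each ByteBuffer yields the void* for the corresponding plane:
// Hypothetical native method; its C++ implementation would call
// env->GetDirectBufferAddress() on each ByteBuffer to obtain a void* per plane.
private native void processFrame(ByteBuffer y, ByteBuffer u, ByteBuffer v,
                                 int width, int height,
                                 int yRowStride, int uvRowStride, int uvPixelStride);

void onFrame(Image image) {
    Image.Plane[] planes = image.getPlanes();
    processFrame(planes[0].getBuffer(), planes[1].getBuffer(), planes[2].getBuffer(),
                 image.getWidth(), image.getHeight(),
                 planes[0].getRowStride(), planes[1].getRowStride(),
                 planes[1].getPixelStride());
    image.close(); // return the buffer to the ImageReader when done
}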
Edit: A few more details (a condensed code sketch follows the steps below):
This is assuming you have your own software video encoding/network transmission library, and you don't want to use Android's hardware video encoders. (If you do, you need to use the MediaCodec class).
Set up a preview View (SurfaceView or TextureView) and set its size to the desired preview resolution.
Create ImageReader with YUV_420_888 format and the desired recording resolution. Connect a listener to it.
Open the camera device (can be done in parallel with the previous steps)
Get a Surface from both the View and the ImageReader, and use them both to create a camera capture session.
Once the session is created, create a capture request builder with TEMPLATE_RECORD (to optimize the settings for a recording use case), and add both Surfaces as targets for the request.
Build the request and set it as the repeating request.
The camera will start pushing buffers into both the preview and the ImageReader. You'll get an onImageAvailable callback whenever a new frame is ready. Acquire the latest Image from the ImageReader's queue, get the three ByteBuffers that make up the YCbCr image, and pass them through JNI to your native code.
Once done with processing an Image, be sure to close it. For efficiency, there's a fixed number of Images in the ImageReader, and if you don't return them, the camera will stall since it will have no buffers to write to. If you need to process multiple frames in parallel, you may need to increase the ImageReader constructor's maxImages argument.
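Here is the condensed sketch of those steps, under the assumption that a cameraManager, cameraId, backgroundHandler, and correctly sized previewSurface already exist; imports, permission checks, and most error handling are trimmed:
// Sketch only: 1920x1080 YUV recording stream plus the existing preview Surface.
ImageReader reader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 3);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    // Hand image.getPlanes()[0..2].getBuffer() to the native code here...
    image.close(); // always return the buffer so the camera doesn't stall
}, backgroundHandler);

try {
    cameraManager.openCamera(cameraId, new CameraDevice.StateCallback() {
        @Override public void onOpened(CameraDevice camera) {
            try {
                List<Surface> outputs = Arrays.asList(previewSurface, reader.getSurface());
                camera.createCaptureSession(outputs, new CameraCaptureSession.StateCallback() {
                    @Override public void onConfigured(CameraCaptureSession session) {
                        try {
                            CaptureRequest.Builder builder =
                                    camera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                            builder.addTarget(previewSurface);
                            builder.addTarget(reader.getSurface());
                            session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                        } catch (CameraAccessException e) { /* handle */ }
                    }
                    @Override public void onConfigureFailed(CameraCaptureSession session) { /* handle */ }
                }, backgroundHandler);
            } catch (CameraAccessException e) { /* handle */ }
        }
        @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
        @Override public void onError(CameraDevice camera, int error) { camera.close(); }
    }, backgroundHandler);
} catch (CameraAccessException e) { /* handle */ }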
I want to extend an app from Camera1 to Camera2, depending on the API level. One core mechanism of the app consists of taking preview pictures at a rate of about 20 pics per second. With Camera1 I achieved that by creating a SurfaceView, adding a Callback to its holder, and, after creation of the surface, accessing the preview pics via periodic setOneShotPreviewCallback() calls. That was pretty easy and reliable.
Now, when studying Camera2, I came "from the end" and managed to convert YUV420_888 to Bitmap (see YUV420_888 to Bitmap Conversion). However, I am now struggling with the "capture technique". From the Google example I see that you need to make a "setRepeating" CaptureRequest with CameraDevice.TEMPLATE_PREVIEW for displaying the preview, e.g. on a SurfaceView. That is fine. However, in order to take an actual picture I need to make another capture request with (this time) builder.addTarget(imageReader.getSurface()), i.e. the data will be available within the onImageAvailable method of the ImageReader.
The problem: the creation of the capture request is a rather heavy operation, taking about 200 ms on my device. Therefore, using a capture request (whether with template STILL_CAPTURE or PREVIEW) cannot possibly be a feasible approach for capturing 20 images per second, as I need. The proposals I found here on SO are primarily based on the (educationally moderately efficient) Google example, which I don't really understand...
I feel the solution must be to feed the ImageReader with a continuous stream of preview pics, which can be picked from there at a given frequency. Can someone please give some guidance on how to implement this? Many thanks.
If you want to send a buffer to both the preview SurfaceView and to your YUV ImageReader for every frame, simply add both Surfaces to the repeating preview request as targets.
Generally, a capture request can target any subset (or all) of the session's configured output targets.
Also, if you do want to only capture an occasional frame to your YUV ImageReader with .capture(), you don't have to recreate the capture request builder each time; just call .build() again on the same builder, or just reuse the actual constructed CaptureRequest if you're not changing any settings.
Even with this occasional capture, you probably want to include the preview Surface as a target in the YUV capture request, so that there's no skipped frame in the displayed preview.
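A minimal sketch of that repeating request (the cameraDevice, captureSession, previewSurface, yuvImageReader, and backgroundHandler names are assumed to exist already):
// One repeating request that targets both the preview Surface and the YUV ImageReader,
// so every preview frame also lands in onImageAvailable().
CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);
builder.addTarget(yuvImageReader.getSurface());
captureSession.setRepeatingRequest(builder.build(), null, backgroundHandler);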
I'm very new to Android. I'm trying to use the new Android camera2 API to build a real-time image processing application. My application needs to maintain a good FPS rate as well. Following some examples, I managed to do the image processing inside the onImageAvailable(ImageReader reader) method of the ImageReader class. However, by doing so I can only manage a frame rate of around 5-7 FPS.
I've seen that it is advised to use RenderScript for YUV processing with the Android camera2 API. Will using RenderScript gain me higher FPS rates?
If so, can someone please guide me on how to implement that? As I'm new to Android, I'm having a hard time grasping the concepts of Allocation and RenderScript. Thanks in advance.
I don't know what type of image processing you want to perform. But in case you are interested only in the intensity of the image (i.e. the gray-value information), you don't need any conversion of the YUV data array (e.g. into JPEG). For an image consisting of n pixels, the intensity information is given by the first n bytes of the YUV data array. So just cut those bytes out of the YUV data array:
// The Y (luminance) plane occupies the first width*height bytes of the YUV data array
byte[] intensity = Arrays.copyOfRange(data, 0, width * height);
In theory, you can get the available fps ranges with this call:
characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
and set the desired fps range here:
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, bestFPSRange);
So in principle, you should choose a range with the same lower and upper bound, and that should keep your frame rate constant.
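As a hedged sketch of that selection (assuming the characteristics and mPreviewRequestBuilder objects from the snippets above), one could prefer the fastest fixed-rate range and fall back silently if none exists:
// Prefer a fixed-rate range (lower == upper), taking the highest frame rate among them.
Range<Integer>[] ranges = characteristics.get(
        CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
Range<Integer> bestFPSRange = null;
for (Range<Integer> range : ranges) {
    if (range.getLower().equals(range.getUpper())
            && (bestFPSRange == null || range.getUpper() > bestFPSRange.getUpper())) {
        bestFPSRange = range;
    }
}
if (bestFPSRange != null) {
    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, bestFPSRange);
}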
HOWEVER, on devices with a LEGACY profile, none of the devices I have tested have been able to achieve 30fps at 1080p (S5, Z3 Compact, Huawei Mate S, and HTC One M9). The only way I was able to achieve that was by using a device (LG G4) that turned out to have a FULL profile.
RenderScript will not buy you anything here if you are going to use it inside the onImageAvailable callback. It appears that getting the image at that point is the bottleneck on LEGACY devices, since the new camera2 API simply wraps the old one and is presumably creating so much overhead that the callback no longer occurs at 30fps. So if RenderScript is to work, you would need to create a Surface and find another way of grabbing the frames off of it.
Here is the kicker though... if you move back to the deprecated API, I would almost guarantee 30fps at whatever resolution you want. At least that is what I found on all of the devices I tested...
I am testing imaging algorithms using an Android phone's camera as input, and need a way to test the algorithms consistently. Ideally I want to take a pre-recorded video feed and have the phone 'pretend' that the video feed is live video from the camera.
My ideal solution would be where the app running the algorithms has no knowledge that the video is pre-recorded. I do not want to load the video file directly into the app, but rather read it in as sensor data if at all possible.
Is this approach possible? If so, any pointers in the right direction would be extremely helpful, as Google searches have failed me so far
Thanks!
Edit: To clarify, my understanding is that the Camera class uses a camera service to read video from the hardware. Rather than do something application-side, I would like to create a custom camera service that reads from a video file instead of the hardware. Is that doable?
When you are doing processing on a live android video feed you will need to build your own custom camera application that feeds you individual frames via the PreviewCallback interface that Android provides.
Now, simulating this would be a little bit tricky, seeing as the format of the preview frames will generally be NV21. If you are using a pre-recorded video, I don't think there is any clear way of reading frames one by one unless you try the getFrameAtTime method, which will give you Bitmaps in an entirely different format.
This leads me to suggest that you could probably test with these Bitmaps (though I'm really not sure what you are trying to do here) from the getFrameAtTime method. For this code to then work on a live camera preview, you would need to convert your NV21 frames from the PreviewCallback interface into the same format as the Bitmaps from getFrameAtTime, or you could adapt your algorithm to process NV21 frames directly. NV21 is a pretty neat format, presenting color and luminance data separately, but it can be tricky to use.
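For instance, stepping through a pre-recorded file with MediaMetadataRetriever could look like the sketch below; the processFrame() hook and the 20 fps step size are assumptions, not part of the original answer:
// Sketch: replay a pre-recorded video as a stream of Bitmaps at roughly 20 fps.
static void replay(String videoPath) throws Exception {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    retriever.setDataSource(videoPath);
    long durationMs = Long.parseLong(retriever.extractMetadata(
            MediaMetadataRetriever.METADATA_KEY_DURATION));
    for (long t = 0; t < durationMs; t += 50) {            // 50 ms step ~ 20 fps
        Bitmap frame = retriever.getFrameAtTime(t * 1000,  // expects microseconds
                MediaMetadataRetriever.OPTION_CLOSEST);
        if (frame != null) {
            processFrame(frame); // hypothetical hook into the algorithm under test
        }
    }
    retriever.release();
}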