Saving Camera2 output stream in byte [] - android

I am supporting an application with video chat functionality. I use Camera2 for API >= 21, and the camera itself works. Now I need to receive the data from my device's camera, write it into a byte[], and then pass that array to a native method that processes and transmits the images to the other party. The video transfer functionality is written in C++. My task is to correctly record the video into a byte[], because that is the argument the native method accepts, and it carries out all further steps for displaying the video.
As soon as I try to add anything to this setup, the camera stops working.
Please help me implement this as correctly and simply as possible. I tried to use MediaRecorder, but it does not write data into a byte[]. I looked at the standard Google examples such as Camera2Basic and Camera2Video and tried to set up MediaRecorder as in those tutorials, but it does not work.
As I understand it, ImageReader is used only for still images.
MediaCodec is too complicated; I could not really understand it.
What is the best and easiest way to obtain the image stream from my device's camera and record it into a byte[]? If possible, please give me a code sample or a resource where I can see one. Thanks.

You want to use an ImageReader; it's the intended replacement of the old camera API preview callbacks (as well as the way to take JPEG or RAW images, the other common uses).
Use the YUV_420_888 format.
ImageReader's Images use ByteBuffer instead of byte[], but you can pass the ByteBuffer directly through JNI and get a void* pointer to each plane of the image by using standard JNI methods. That is much more efficient than copying to a byte[] first.
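For example, the Java side might look roughly like this. processFrame() is a hypothetical native method (not an existing API); on the C++ side, the standard JNI call GetDirectBufferAddress() gives you a void* for each plane without any copy:

import android.media.Image;
import java.nio.ByteBuffer;

public class FrameBridge {
    // Hypothetical native method, implemented in your C++ library, which can
    // call GetDirectBufferAddress() on each ByteBuffer to get a void* per plane.
    private static native void processFrame(
            int width, int height,
            ByteBuffer y, int yRowStride,
            ByteBuffer u, ByteBuffer v,
            int uvRowStride, int uvPixelStride);

    // Hand a YUV_420_888 Image from the ImageReader to native code.
    public static void push(Image image) {
        Image.Plane[] planes = image.getPlanes();   // plane order: Y, U, V
        processFrame(
                image.getWidth(), image.getHeight(),
                planes[0].getBuffer(), planes[0].getRowStride(),
                planes[1].getBuffer(), planes[2].getBuffer(),
                planes[1].getRowStride(), planes[1].getPixelStride());
    }
}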
Edit: A few more details:
This is assuming you have your own software video encoding/network transmission library, and you don't want to use Android's hardware video encoders. (If you do, you need to use the MediaCodec class).
1. Set up a preview View (SurfaceView or TextureView) and set its size to the desired preview resolution.
2. Create an ImageReader with the YUV_420_888 format and the desired recording resolution, and connect a listener to it.
3. Open the camera device (this can be done in parallel with the previous steps).
4. Get a Surface from both the View and the ImageReader, and use both of them to create a camera capture session.
5. Once the session is created, create a capture request builder with TEMPLATE_RECORD (to optimize the settings for a recording use case) and add both Surfaces as targets for the request.
6. Build the request and set it as the repeating request.
The camera will then start pushing buffers into both the preview and the ImageReader. You'll get an onImageAvailable callback whenever a new frame is ready. Acquire the latest Image from the ImageReader's queue, get the three ByteBuffers that make up the YCbCr image, and pass them through JNI to your native code.
Once you are done processing an Image, be sure to close it. For efficiency there is a fixed number of Images in the ImageReader, and if you don't return them the camera will stall because it has no buffers left to write to. If you need to process multiple frames in parallel, you may need to increase the maxImages argument of the ImageReader constructor. A condensed sketch of these steps follows below.
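Roughly, the wiring could look like this. This is a sketch, not a drop-in implementation: it assumes cameraDevice is already opened, previewSurface comes from your View at the preview size, backgroundHandler runs on a background thread, and it reuses the hypothetical FrameBridge from the sketch above:

ImageReader reader = ImageReader.newInstance(1280, 720,
        ImageFormat.YUV_420_888, /* maxImages */ 3);

reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;          // queue was empty
    try {
        FrameBridge.push(image);        // hand the three YUV planes to native code
    } finally {
        image.close();                  // return the buffer so the camera doesn't stall
    }
}, backgroundHandler);

List<Surface> outputs = Arrays.asList(previewSurface, reader.getSurface());
cameraDevice.createCaptureSession(outputs, new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            CaptureRequest.Builder builder =
                    cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
            builder.addTarget(previewSurface);
            builder.addTarget(reader.getSurface());
            session.setRepeatingRequest(builder.build(), null, backgroundHandler);
        } catch (CameraAccessException e) {
            // handle the error appropriately
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) {
        // handle the error appropriately
    }
}, backgroundHandler);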

Related

Camera2 get continuous access to camera preview images

I want to extend an app from Camera1 to Camera2 depending on the API. One core mechanism of the app consists of taking preview pictures at a rate of about 20 pictures per second. With Camera1 I implemented that by creating a SurfaceView, adding a Callback on its holder and, after creation of the surface, accessing the preview pictures via periodic setOneShotPreviewCallback calls. That was pretty easy and reliable.
Now, when studying Camera2, I came at it "from the end" and managed to convert YUV420_888 to Bitmap (see YUV420_888 to Bitmap Conversion). However, I am now struggling with the "capture technique". From the Google example I see that you need to make a "setRepeating" CaptureRequest with CameraDevice.TEMPLATE_PREVIEW for displaying the preview, e.g. on a surface view. That is fine. However, in order to take an actual picture I need to make another capture request with (this time) builder.addTarget(imageReader.getSurface()), i.e. the data will be available within the onImageAvailable method of the ImageReader.
The problem: creating the capture request is a rather heavy operation, taking about 200 ms on my device. Therefore, issuing a capture request per frame (whether with TEMPLATE_STILL_CAPTURE or TEMPLATE_PREVIEW) cannot be a feasible approach for capturing 20 images per second, as I need. The proposals I found here on SO are primarily based on the (pedagogically only moderately helpful) Google example, which I don't really understand...
I feel the solution must be to feed the ImageReader with a continuous stream of preview pictures, which can then be picked up from there at a given frequency. Can someone please give some guidance on how to implement this? Many thanks.
If you want to send a buffer to both the preview SurfaceView and to your YUV ImageReader for every frame, simply add both Surfaces to the repeating preview request as targets.
Generally, a capture request can target any subset (or all) of the session's configured output targets.
Also, if you do want to only capture an occasional frame to your YUV ImageReader with .capture(), you don't have to recreate the capture request builder each time; just call .build() again on the same builder, or just reuse the actual constructed CaptureRequest if you're not changing any settings.
Even with this occasional capture, you probably want to include the preview Surface as a target in the YUV capture request, so that there's no skipped frame in the displayed preview.
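For instance, a sketch of that second approach (assuming cameraDevice, previewSurface, yuvReader, captureSession and backgroundHandler already exist from your own setup):

// Build the request once and reuse it for every occasional YUV capture,
// since no settings change between captures.
CaptureRequest.Builder yuvBuilder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
yuvBuilder.addTarget(previewSurface);          // keep the preview fed, no skipped frame
yuvBuilder.addTarget(yuvReader.getSurface());
CaptureRequest yuvRequest = yuvBuilder.build();

// Later, whenever a frame is wanted:
captureSession.capture(yuvRequest, null, backgroundHandler);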

Getting the RAW camera data on android?

I want to work with the bit array of each incoming frame, basically tapping to the YUV format received from the camera sensor for each frame and do some processing on it.
I'm new to Java/Android and learning as I go, so some of my questions are pretty basic, but I couldn't find any answers that suit my needs.
Q1: How do I get a bit array of each frame received by the camera sensor? (how do I save the YUV byte stream for further use?)
Q2: How to set that for each new frame received a new data array will be received for processing?
Q3: Do I have to set a preview to do that or could I tap straight to a buffer holding the raw data from the open camera?
Q4: Will a preview slow down the process (of receiving new frames)?
Some further explanation if needed: the idea is to create one-way communication between a flickering LED light and a smartphone. By pointing the phone's camera at the LED, a real-time process registers the slight changes and decodes them back into the originally sent data. To do so, I plan to receive the YUV data for each frame, strip it down to the Y part, and decide for each frame whether the light is on or off.
Yes, that's the Camera API. Android API 21 and newer support the camera2 API, which can give you faster response, but this depends on the device. I would still recommend using the deprecated older API if your goal is maximum reach.
Usually, the Android camera produces the NV21 format, from which it is very easy to extract the 8-bpp luminance.
Android requires a live preview if you want to capture camera frames. There are quite a few workarounds to keep the preview hidden from the end user, but this is not supported, and any such trick may fail on the next device or on the next system upgrade. But don't worry: the live preview does not delay your processing at all, because it is all done in a separate hardware channel.
All in all, you can expect to receive 30 fps on an average device when you use Camera.setPreviewCallbackWithBuffer() and do everything correctly. High-end devices that have a full implementation of the camera2 API may deliver higher frame rates. Samsung has published their own camera SDK; use it if you need special features of Samsung devices.
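A rough sketch of the setPreviewCallbackWithBuffer() route (assuming mCamera is an open Camera whose preview is already configured, and process() stands in for whatever you do with the frame):

Camera.Parameters params = mCamera.getParameters();
final Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height
        * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;

// Preallocate a couple of buffers so no per-frame garbage is created.
mCamera.addCallbackBuffer(new byte[bufferSize]);
mCamera.addCallbackBuffer(new byte[bufferSize]);

mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // NV21: the first width*height bytes are the 8-bpp luminance (Y) plane.
        // process(data, size.width, size.height);   // your own handler
        camera.addCallbackBuffer(data);              // hand the buffer back for reuse
    }
});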
On a multi-core device, you can offload image processing to a thread pool, but still the frame rate will probably be limited by camera hardware.
Note that you can perform some limited image processing in GPU, applying shaders to the texture that is acquired from camera.
Assuming you have done the basics and you have a camera preview and a Camera object, you can call:
Camera.PreviewCallback callback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Do your processing here, using the byte[] called "data"
    }
};
And then:
mCamera.setPreviewCallback(callback);
If Camera.Parameters.setPreviewFormat() is never called, the default preview format will be NV21.

Editing frames and encoding with MediaCodec

I was able to decode an MP4 video. If I configure the decoder with a Surface I can see the video on screen. Now I want to edit the frames (adding a yellow line, or even better overlaying a tiny image) and encode the video as a new video. It is not necessary to show the video, and I don't care about performance for now (if I show the frames while editing there could be a gap if the editing function takes a long time). So, what do you recommend: configure the decoder with a GLSurface anyway and use OpenGL (GLES), or configure it with null and somehow convert the ByteBuffer to a Bitmap, modify it, and encode the Bitmap as a byte array? I also saw on the Grafika page that you can use a Surface with a custom renderer and use OpenGL (GLES). Thanks.
You will have to use OpenGL ES; the ByteBuffer/Bitmap approach cannot give realistic performance/features.
Now that you've been able to decode the Video (using MediaExtractor and Codec) to a Surface, you need to use the SurfaceTexture used to create the Surface as an External Texture and render using GLES to another Surface retrieved from MediaCodec configured as an encoder.
Though Grafika doesn't have an exactly similar complete project, you can start with your existing project and then try to use either of the following Grafika subprojects, Continuous Camera or Show + capture camera, which currently render camera frames (fed to a SurfaceTexture) to a video (and to the display).
So essentially, the only change is the MediaCodec feeding frames to SurfaceTexture instead of the Camera.
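To sketch the Surface plumbing only (the GLES rendering and EGL setup are omitted; width, height, textureId, inputMime and inputFormat are placeholders for your own values, and error handling is left out):

// Encoder: its input Surface is what the GLES renderer draws into.
MediaFormat outFormat = MediaFormat.createVideoFormat("video/avc", width, height);
outFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outFormat.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
outFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
outFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(outFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInputSurface = encoder.createInputSurface();   // GLES renders here

// Decoder: render into a SurfaceTexture bound to an external GLES texture
// (textureId created with glGenTextures and GL_TEXTURE_EXTERNAL_OES).
SurfaceTexture decoderTexture = new SurfaceTexture(textureId);
Surface decoderOutputSurface = new Surface(decoderTexture);
MediaCodec decoder = MediaCodec.createDecoderByType(inputMime);
decoder.configure(inputFormat, decoderOutputSurface, null, 0);

// Per frame: release the decoder's output buffer with render == true, wait for
// onFrameAvailable, call decoderTexture.updateTexImage(), draw the external
// texture (plus your edits) into encoderInputSurface via EGL, then drain the encoder.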
Google CTS DecodeEditEncodeTest does exactly the same and can be used as a reference in order to make the learning curve smoother.
Using this approach, you can certainly do all sorts of things like manipulating the playback speed of video (fast forward and slow-down), adding all sorts of overlays on the scene, play with colors/pixels in the video using shaders etc.
Check out the filters in Show + capture camera for an illustration of the same.
Decode-edit-Encode flow
When using OpenGLES, 'editing' of the frame happens via rendering using GLES to the Encoder's input surface.
If decoding and rendering+encoding are separated out into different threads, you're bound to skip a few frames unless you implement some sort of synchronisation between the two threads to keep the decoder waiting until the render+encode for that frame has happened on the other thread.
Although modern hardware codecs support simultaneous video encoding and decoding, I'd suggest doing the decoding, rendering and encoding in the same thread, especially in your case, where performance is not a major concern right now. That will help you avoid having to handle the synchronisation yourself and/or frame jumps.

Simulating an Android Camera

I am testing imaging algorithms using an Android phone's camera as input, and I need a way to test the algorithms consistently. Ideally I want to take a pre-recorded video feed and have the phone 'pretend' that the video feed is live video from the camera.
My ideal solution would be where the app running the algorithms has no knowledge that the video is pre-recorded. I do not want to load the video file directly into the app, but rather read it in as sensor data if at all possible.
Is this approach possible? If so, any pointers in the right direction would be extremely helpful, as Google searches have failed me so far
Thanks!
Edit: To clarify, my understanding is that the Camera class uses a camera service to read video from the hardware. Rather than do something application-side, I would like to create a custom camera service that reads from a video file instead of the hardware. Is that doable?
When you are doing processing on a live Android video feed, you will need to build your own custom camera application that feeds you individual frames via the PreviewCallback interface that Android provides.
Now, simulating this would be a little bit tricky, seeing as the preview frames will generally be in the NV21 format. If you are using a pre-recorded video, I don't think there is any clear way of reading frames one by one unless you try the getFrameAtTime method, which will give you bitmaps in an entirely different format.
This leads me to suggest that you could probably test with these Bitmaps (though I'm really not sure what you are trying to do here) from the getFrameAtTime method. In order for this code to then work on a live camera preview, you would need to convert your NV21 frames from the PreviewCallback interface into the same format as the Bitmaps from getFrameAtTime, or you could adapt your algorithm to process NV21-format frames. NV21 is a pretty neat format, presenting colour and luminance data separately, but it can be tricky to use.
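A sketch of the getFrameAtTime route with MediaMetadataRetriever (the clip path and the 20 fps sampling interval are just placeholders for your own test setup):

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
retriever.setDataSource("/sdcard/test_clip.mp4");        // hypothetical test clip

long durationMs = Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));

for (long t = 0; t < durationMs; t += 50) {              // 50 ms per step, about 20 fps
    Bitmap frame = retriever.getFrameAtTime(t * 1000,    // getFrameAtTime expects microseconds
            MediaMetadataRetriever.OPTION_CLOSEST);
    if (frame != null) {
        // feed the frame into the same processing path your live code would use
    }
}
retriever.release();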

What format does the Android camera use for the raw pictureCallback?

I am trying to use the data from an Android picture. I do not like the JPEG format, since I will eventually use gray-scale data. The YUV format is fine with me, since the first part of it is the gray-scale data.
From the Android developer documentation:
public final void takePicture (Camera.ShutterCallback shutter,
Camera.PictureCallback raw, Camera.PictureCallback postview,
Camera.PictureCallback jpeg)
Added in API level 5
Triggers an asynchronous image capture. The camera service will
initiate a series of callbacks to the application as the image capture
progresses. The shutter callback occurs after the image is captured.
This can be used to trigger a sound to let the user know that image
has been captured. The raw callback occurs when the raw image data is
available (NOTE: the data will be null if there is no raw image
callback buffer available or the raw image callback buffer is not
large enough to hold the raw image). The postview callback occurs when
a scaled, fully processed postview image is available (NOTE: not all
hardware supports this). The jpeg callback occurs when the compressed
image is available. If the application does not need a particular
callback, a null can be passed instead of a callback method.
It talks about "the raw image data". However, I can find no information anywhere about the format of that raw image data.
Do you have any idea about that?
I want to get the gray-scale data of the picture taken by the camera, with the data located in phone memory, so that no time is spent writing/reading image files or converting between image formats. Or maybe I have to sacrifice something to get it?
After some search, I think I found the answer:
From the Android tutorial:
"The raw callback occurs when the raw image data is available (NOTE:
the data will be null if there is no raw image callback buffer
available or the raw image callback buffer is not large enough to hold
the raw image)."
See this link (2011/05/10)
Android: Raw image callback supported devices
Not all devices support raw pictureCallback.
https://groups.google.com/forum/?fromgroups=#!topic/android-developers/ZRkeoCD2uyc (2009)
Google employee Dave Sparks said:
"The original intent was to return an uncompressed RGB565 frame, but
this proved to be impractical. " "I am inclined to deprecate that API
entirely and replace it with hooks for native signal processing. "
Many people report a similar problem. See:
http://code.google.com/p/android/issues/detail?id=10910
Since many image-processing pipelines are based on gray-scale images, I am hoping for gray-scale raw data in memory, produced by Android for each picture.
You may have some luck with getSupportedPictureFormats(). If it lists some YUV format, you can use setPictureFormat() and the desired resolution, and counterintuitively you will get the uncompressed high-quality image in the jpeg callback, from which grayscale (a.k.a. luminance) can easily be extracted.
Most devices will only list JPEG as a valid choice. That's because they perform compression in hardware, on the camera side. Note that the data transfer from camera to application RAM is often the bottleneck; if you can use the Stagefright hardware JPEG decoder, you will actually get the result faster.
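For example (a sketch assuming an open legacy Camera object named camera):

Camera.Parameters params = camera.getParameters();
List<Integer> formats = params.getSupportedPictureFormats();

if (formats.contains(ImageFormat.NV21)) {
    params.setPictureFormat(ImageFormat.NV21);
    camera.setParameters(params);
    // per the note above, takePicture() may then deliver uncompressed YUV data
} else {
    // most devices only report JPEG here; fall back to decoding the JPEG,
    // or grab NV21 preview frames instead (see below)
}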
The biggest problem with using the raw callback is that many developers have trouble with getting anything returned on many phones.
If you are satisfied with just the YUV array, your camera preview SurfaceView can implement PreviewCallback and you can add the onPreviewFrame method to your class. This method gives you direct access to the YUV array for every frame, which you can fetch whenever you choose.
EDIT: I should specify that I was assuming you were building a custom camera application in which you extended SurfaceView for a custom camera preview surface. In order to follow my advice you will need to build a custom camera. If you are trying to do things quickly, though, I suggest building a new Bitmap out of the JPEG data and implementing the greyscale conversion yourself.
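If the preview route works for you, extracting the gray-scale part is straightforward, since in NV21 the first width*height bytes are the luminance plane. A minimal sketch of such an onPreviewFrame:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();  // better cached once outside the callback

    // NV21 layout: width*height bytes of Y (luminance), then interleaved VU.
    byte[] gray = new byte[size.width * size.height];
    System.arraycopy(data, 0, gray, 0, gray.length);
    // ...process gray[] here...
}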
