Is it possible to grab one pixel from CameraPreview? - android

I followed the Android Studio tutorial to get the CameraPreview to work (Camera API Android Developer Guide). This works fine for me and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a Bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values of a pixel in the camera preview?

If you are using the Camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. You then override the onImageAvailable function, which gets an ImageReader as its argument, and access the most recently captured image with imageReader.acquireNextImage().
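A minimal sketch of that setup, assuming you have already picked a width/height and created a background Handler (the name backgroundHandler is a placeholder, not from the question):

ImageReader imageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
imageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        if (image != null) {
            // image.getPlanes() exposes the Y, U and V planes of the frame
            image.close(); // always release the buffer back to the reader
        }
    }
}, backgroundHandler);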

With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
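If you want YV12 instead, a minimal sketch of selecting the preview format on the deprecated API (NV21 is already the default, so this is only needed to switch away from it):

Camera.Parameters params = camera.getParameters();
params.setPreviewFormat(ImageFormat.NV21); // default; ImageFormat.YV12 is usually also supported
camera.setParameters(params);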
The wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong conversion coefficients for YUV->RGB conversion; they should be:
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
(Cb = U, Cr = V)
You can also take a look at this Stack Overflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.
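Putting the formulas above together for the original question (grabbing one pixel from an NV21 preview buffer), here is a minimal sketch; the method names are made up for illustration. It would be called from onPreviewFrame with the preview width/height from Camera.Parameters.

// NV21: full-resolution Y plane, followed by interleaved V/U samples
// at half resolution in each dimension.
public static int getPixelArgb(byte[] nv21, int width, int height, int x, int y) {
    int yVal = nv21[y * width + x] & 0xFF;
    int uvIndex = width * height + (y / 2) * width + (x & ~1);
    int v = (nv21[uvIndex] & 0xFF) - 128;     // V comes first in NV21
    int u = (nv21[uvIndex + 1] & 0xFF) - 128;
    int r = clamp(Math.round(yVal + 1.402f * v));
    int g = clamp(Math.round(yVal - 0.34414f * u - 0.71414f * v));
    int b = clamp(Math.round(yVal + 1.772f * u));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

private static int clamp(int c) {
    return Math.min(255, Math.max(0, c));
}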

Related

Separate YUV420SP to be used in OpenGL

I'm currently trying to display a video frame using OpenGL.
So far it works, but I have a color problem.
I'm using this as the reference for my logic.
I have this code:
//YUV420SP data
uint8_t *decodedBuff = AMediaCodec_getOutputBuffer(d->codec, status, &bufSize);
buildTexture(decodedBuff, decodedBuff + w*h, decodedBuff + w*h*5/4, w, h);
renderFrame();
but it displays with the wrong colors.
decodedBuff = Y
decodedBuff + w*h = U
decodedBuff + w*h*5/4 = V
but this separation formula is for YUV420P.
Do you guys happen to know what the equivalent is for YUV420SP?
Your help is very much appreciated.
If you are doing it this way you are doing it wrong. You should never manually read raw data from video surfaces in fragment shaders.
Generate a SurfaceTexture, bind it to an OpenGL ES texture, and use the GL_OES_EGL_image_external extension to access the texture via an external image sampler.
This will give you direct access to the video data in your shader, including automatic handling of the memory format and color conversion, in many cases for "free" because it's backed by GPU hardware acceleration.
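A rough sketch of that setup (GL context creation and geometry are omitted; this shows the general pattern, not complete code):

// Create a GL texture and bind it as an external texture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

// Wrap it in a SurfaceTexture and hand the Surface to the decoder.
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
Surface decoderSurface = new Surface(surfaceTexture);
// Pass decoderSurface to MediaCodec#configure(); then, per frame, call
// surfaceTexture.updateTexImage() on the GL thread before drawing.

// Fragment shader: the driver does the YUV -> RGB conversion for you.
String fragmentShader =
        "#extension GL_OES_EGL_image_external : require\n"
        + "precision mediump float;\n"
        + "varying vec2 vTexCoord;\n"
        + "uniform samplerExternalOES sTexture;\n"
        + "void main() {\n"
        + "  gl_FragColor = texture2D(sTexture, vTexCoord);\n"
        + "}\n";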

MediaCodec and ImageReader

I would like to access the YUV images of a video, and passing an ImageReader surface to a MediaCodec (as referred to in the documentation) looked like a really smart way to do it. However, I can't make sense of the data inside the Image instance supplied by the onImageAvailable callback. Just looking at its Y plane, it looks to be mostly 0 values, no matter which video I provide.
I read some #fadden comments that look a bit old by now, saying that the ImageReader surface was not yet available for MediaCodec; is this still the case? Has anyone succeeded in implementing a MediaCodec-decoding-to-an-ImageReader-surface solution?
To illustrate, I was hoping for the Y plane to look like the actual frame content (image omitted), but it comes out as mostly empty (image omitted).
Thanks for any pointers
I ran into the same problem. In my case, it was because I did not configure the MediaCodec to output the same image format as the ImageReader.
int imageFormat = ImageFormat.YUV_420_888;
mImageReader = ImageReader.newInstance(
        videoFormat.getInteger(MediaFormat.KEY_WIDTH),
        videoFormat.getInteger(MediaFormat.KEY_HEIGHT),
        imageFormat,
        3);

// I added the following line of code:
videoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
// Then initialize the MediaCodec with videoFormat.
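For completeness, a sketch of that remaining wiring, assuming videoFormat came from a MediaExtractor track:

MediaCodec decoder = MediaCodec.createDecoderByType(
        videoFormat.getString(MediaFormat.KEY_MIME));
// Render decoded frames into the ImageReader's surface so that
// onImageAvailable receives YUV_420_888 Images.
decoder.configure(videoFormat, mImageReader.getSurface(), null, 0);
decoder.start();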

Android Camera2 take picture while processing frames

I am using the Camera2 API to create a camera component that can scan barcodes and has the ability to take pictures during scanning. It kind of works, but the preview is flickering: it seems like previous frames and sometimes green frames are interrupting the real-time preview.
My code is based on Google's Camera2Basic. I'm just adding one more ImageReader and its surface as a new output and target for CaptureRequest.Builder. One of the readers uses JPEG and the other YUV. Flickering disappears when I remove the JPEG reader's surface from outputs (not passing this into createCaptureSession).
There's quite a lot of code, so I created a gist: click. I tried to get rid of completely irrelevant code.
Is the device you're testing on a LEGACY-level device?
If so, any captures targeting a JPEG output may be much slower since they can run a precapture sequence, and may briefly pause preview as well.
But it should not cause green frames, unless there's a device-level bug.
If anyone ever struggles with this: there is a table in the docs showing that if 3 targets are specified, the YUV ImageReader can only use images with a maximum size equal to the PREVIEW size (at most 1920x1080). Reducing this helped!
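A sketch of what "reducing this" can look like, assuming map is the StreamConfigurationMap from your camera's characteristics (the selection logic and the mYuvReader name are illustrative):

Size[] yuvSizes = map.getOutputSizes(ImageFormat.YUV_420_888);
Size chosen = null;
for (Size s : yuvSizes) {
    int area = s.getWidth() * s.getHeight();
    // Stay at or below the guaranteed PREVIEW size (at most 1920x1080).
    if (area <= 1920 * 1080
            && (chosen == null || area > chosen.getWidth() * chosen.getHeight())) {
        chosen = s;
    }
}
mYuvReader = ImageReader.newInstance(chosen.getWidth(), chosen.getHeight(),
        ImageFormat.YUV_420_888, 3);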
Yes you can. Assuming that you configure your preview to feed the ImageReader with YUV frames (because you could also put JPEG there, check it out), like so:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 1);
You can process those frames inside your OnImageAvailable listener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image mImage = reader.acquireNextImage();
    if (mImage == null) {
        return;
    }
    try {
        // Do some custom processing like YUV to RGB conversion, cropping, etc.
        mFrameProcessor.setNextFrame(mImage);
    } catch (IllegalStateException e) {
        Log.e("TAG", e.getMessage());
    } finally {
        mImage.close();
    }
}

Has anyone managed to obtain a YUV_420_888 frame using RenderScript and the new Camera API?

I'm using RenderScript and Allocation to obtain YUV_420_888 frames from the Android Camera2 API, but once I copy the byte[] from the Allocation I receive only the Y plane of the 3 planes which compose the frame, while the U and V plane values are set to 0 in the byte[]. I'm trying to mimic the onPreviewFrame from the previous camera API in order to perform in-app processing of the camera frames. My Allocation is created like:
Type.Builder yuvTypeBuilderIn = new Type.Builder(rs, Element.YUV(rs));
yuvTypeBuilderIn.setX(dimensions.getWidth());
yuvTypeBuilderIn.setY(dimensions.getHeight());
yuvTypeBuilderIn.setYuvFormat(ImageFormat.YUV_420_888);
allocation = Allocation.createTyped(rs, yuvTypeBuilderIn.create(),
Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
while my script looks like:
#pragma version(1)
#pragma rs java_package_name(my_package)
#pragma rs_fp_relaxed
rs_allocation my_frame;
The Android sample app HdrViewfinderDemo uses RenderScript to process YUV data from camera2.
https://github.com/googlesamples/android-HdrViewfinder
Specifically, the ViewfinderProcessor sets up the Allocations, and hdr_merge.rs reads from them.
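If all you need out of the Allocation is RGB, a minimal sketch using the built-in intrinsic instead of a custom kernel; it assumes allocation is the USAGE_IO_INPUT Allocation from the question, wired up as the camera output, and that width, height, and an ARGB_8888 bitmap of matching size already exist:

ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
Allocation rgbOut = Allocation.createTyped(rs,
        Type.createXY(rs, Element.RGBA_8888(rs), width, height));

allocation.ioReceive();   // latch the latest frame from the camera surface
yuvToRgb.setInput(allocation);
yuvToRgb.forEach(rgbOut); // YUV -> RGBA conversion
rgbOut.copyTo(bitmap);    // bitmap: ARGB_8888, width x height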
Yes, I did it, since I couldn't find anything useful. But I didn't go the proposed way of defining an Allocation on the surface. Instead I just converted the output of the three image planes to RGB. The reason for this approach is that I use the YUV_420_888 data twofold. First, on a high-frequency basis, just the intensity values (Y). Second, I need to make some color Bitmaps too. Thus the following solution. The script takes about 80 ms for a 1280x720 YUV_420_888 image; maybe not ultra fast, but OK for my purpose.
UPDATE: I deleted the code here, since I wrote a more general solution here: YUV_420_888 -> Bitmap conversion, which takes pixelStride and rowStride into account too.
I think that you can use an ImageReader to get the frames of your camera in YUV_420_888:
reader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);
Then you set an OnImageAvailableListener on the reader:
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        int jump = 4; // Number of frames to skip before processing one, to free up memory
        Image readImage = reader.acquireNextImage();
        Image.Plane yPlane = readImage.getPlanes()[0]; // The Y plane
        Image.Plane uPlane = readImage.getPlanes()[1]; // The U plane
        Image.Plane vPlane = readImage.getPlanes()[2]; // The V plane
        readImage.close();
    }
}, null);
Hope that will help you
I'm using almost the same method as widea in their answer.
The exception you keep getting after ~50 frames might be due to the fact that you're processing all the frames by using acquireNextImage. The documentation suggests:
Warning: Consider using acquireLatestImage() instead, as it will automatically release older images, and allow slower-running processing routines to catch up to the newest frame. [..]
So in case your exception is an IllegalStateException, switching to acquireLatestImage might help.
And make sure you call close() on all images retrieved from ImageReader.
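A minimal sketch of that pattern (processFrame is a placeholder for your own processing):

@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage(); // drops older queued frames automatically
    if (image == null) {
        return;
    }
    try {
        processFrame(image);
    } finally {
        image.close(); // return the buffer to the ImageReader
    }
}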

YUV (NV21) to BGR conversion on mobile devices (Native Code)

I'm developing a mobile application that runs on Android and iOS. It's capable of real-time processing of a video stream. On Android I get the preview video stream of the camera via android.hardware.Camera.PreviewCallback.onPreviewFrame. I decided to use the NV21 format, since it should be supported by all Android devices, whereas RGB isn't (or just RGB565).
For my algorithms, which mostly are for pattern recognition, I need grayscale images as well as color information. Grayscale is not a problem, but the color conversion from NV21 to BGR takes way too long.
As described, I use the following method to capture the images.
In the app, I override the onPreviewFrame handler of the Camera. This is done in CameraPreviewFrameHandler.java:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        AvCore.getInstance().onFrame(data, _prevWidth, _prevHeight, AvStreamEncoding.NV21);
    } catch (NativeException e) {
        e.printStackTrace();
    }
}
The onFrame function then calls a native function which fetches the data from the Java objects as local references. This is then converted to an unsigned char* byte stream and passed to the following C++ function, which uses OpenCV to convert from NV21 to BGR:
void CoreManager::api_onFrame(unsigned char* rImageData, avStreamEncoding_t vImageFormat, int vWidth, int vHeight)
{
    // rImageData is a local JNI reference to the Java byte array "data" from onPreviewFrame
    Mat bgrMat;  // Holds the converted image
    Mat origImg; // Holds the original image (OpenCV wrapper around rImageData)
    double ts;   // For profiling

    switch (vImageFormat)
    {
    // other formats
    case NV21:
        origImg = Mat(vHeight + vHeight/2, vWidth, CV_8UC1, rImageData); // fast, only creates a header around rImageData
        bgrMat = Mat(vHeight, vWidth, CV_8UC3); // Prepare Mat for the target image
        ts = avUtils::gettime(); // PROFILING START
        cvtColor(origImg, bgrMat, CV_YUV2BGR_NV21); // 3-channel BGR; CV_YUV2BGRA_NV21 would need a CV_8UC4 target
        _onFrameBGRConversion.push_back(avUtils::gettime() - ts); // PROFILING END
        break;
    }

    // [...APPLICATION LOGIC...]
}
As one might conclude from the comments in the code, I have already profiled the conversion, and it turned out that it takes ~30 ms on my Nexus 4, which is unacceptably long for such a "trivial" pre-processing step. (My profiling methods are double-checked and working properly for real-time measurement.)
Now I'm trying desperately to find a faster implementation of this color conversion from NV21 to BGR. This is what I've already done:
1. Adapted the code "convertYUV420_NV21toRGB8888" to C++ provided in this topic (a multiple of the OpenCV conversion time)
2. Modified the code from 1. to use only integer operations (double the conversion time of the OpenCV solution)
3. Browsed through a couple of other implementations, all with similar conversion times
4. Checked the OpenCV implementation; they use a lot of bit shifting to get performance. I guess I'm not able to do better on my own.
Do you have suggestions / know good implementations, or even have a completely different way to work around this problem? I somehow need to capture RGB/BGR frames from the Android camera, and it should work on as many Android devices as possible.
Thanks for your replies!
Did you try libyuv? I used it in the past, and if you compile it with NEON support it uses assembly code optimized for ARM processors. You can start from there to further optimize for your particular situation.
