I want to use onPreviewFrame to post-process the image before displaying it to the user (e.g. apply a color tint, sepia, etc.). As I understand it, the byte[] data passed to the callback is encoded as YUV420sp. Have people been decoding this to RGB in Java or with the NDK (native code)? Does anyone have an example of a function that decodes this to RGB, and of how the RGB values are used afterwards?
Thanks.
I found a sample application that converts the YUV420 data to RGB and displays (sort of) real-time histograms over the preview image.
http://www.stanford.edu/class/ee368/Android/index.html
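For completeness, here is the integer-math YUV420sp (NV21) decoder that has been circulating on the Android developer lists for years; treat it as a sketch rather than a tuned implementation:

// Decodes an NV21 (YUV420sp) frame into an ARGB_8888 int array.
// Fixed-point arithmetic; chroma is subsampled 2x2, hence the (i & 1) step.
static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;  // NV21 stores V first
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

The resulting int[] can then be wrapped in a Bitmap with Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888), tinted, and drawn.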
Does this help?
Camera.Parameters params = mCamera.getParameters();
params.setPreviewFormat(ImageFormat.RGB_565);
mCamera.setParameters(params);
First check whether RGB is supported:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#getPreviewFormat%28%29
and then set the preview format to RGB:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#setPreviewFormat%28int%29
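Note that many devices only offer NV21/YV12 as preview formats, so guard the call; a minimal sketch using getSupportedPreviewFormats():

Camera.Parameters params = mCamera.getParameters();
List<Integer> formats = params.getSupportedPreviewFormats();
if (formats != null && formats.contains(ImageFormat.RGB_565)) {
    params.setPreviewFormat(ImageFormat.RGB_565);
    mCamera.setParameters(params);
}
// otherwise fall back to decoding the NV21 frames yourself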
I want to get the raw data from a JavaCV Frame.
I am using FFmpegFrameFilter to rotate the Android camera preview, so I pull a Frame from the FFmpegFrameFilter and then provide the converted byte[] to MediaCodec.
But while doing so I get wrong data (a green picture). I am taking the raw data from Frame.image[0].array().
Is there any other way to fetch the raw data from a Frame so that I can feed it to MediaCodec?
A green picture most likely means that you are passing zeros for the chroma components of the image. The Android camera usually produces YUV 420 images, where a width*height array of luminance (Y) is followed by the U and V components, width/2 * height/2 each.
FFmpegFrameFilter understands different frame formats, but this also depends on heuristics that derive the input pixel format from the Frame parameters.
MediaCodec can receive frames in flexible YUV 420 format, but it is your responsibility to set up the Image correctly.
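To illustrate what "setting up the Image correctly" involves, here is a sketch (the helper name fillImageFromNv21 is mine) that copies NV21 camera data into the planes of a flexible YUV 420 Image obtained from MediaCodec.getInputImage(). Row and pixel strides vary per device, hence the per-pixel chroma copy:

import android.media.Image;
import java.nio.ByteBuffer;

// Copies an NV21 frame into a writable flexible-YUV420 Image.
// Plane 0 is Y, plane 1 is U (Cb), plane 2 is V (Cr).
static void fillImageFromNv21(Image image, byte[] nv21, int width, int height) {
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer yBuf = planes[0].getBuffer();
    int yRowStride = planes[0].getRowStride();
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yRowStride);
        yBuf.put(nv21, row * width, width);
    }
    ByteBuffer uBuf = planes[1].getBuffer();
    ByteBuffer vBuf = planes[2].getBuffer();
    int uvRowStride = planes[1].getRowStride();
    int uvPixStride = planes[1].getPixelStride();
    int chromaStart = width * height;  // NV21: interleaved V,U after the Y plane
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int src = chromaStart + row * width + col * 2;
            int dst = row * uvRowStride + col * uvPixStride;
            vBuf.put(dst, nv21[src]);      // V comes first in NV21
            uBuf.put(dst, nv21[src + 1]);
        }
    }
}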
I am developing an image-processing app on Android and I am confused about color spaces and channels. OpenCV is BGR-based, but when I take a picture with Android it is in RGB format. I want to convert RGB to Lab, so what is the order of the channels in the Lab color space after the conversion? Is it
L:0
A:1
B:2
or
L:2
A:1
B:0
Any help is appreciated.
The former; they are always listed in channel order. BGR is B:0, G:1, R:2, Lab is L:0, a:1, b:2, HSV is H:0, S:1, V:2, etc.
You can check out the OpenCV docs on RGB to Lab conversion, which show the actual formulas used to do the conversion.
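A small sketch with the OpenCV Java bindings that makes the ordering concrete (rgbToLabChannels is just an illustrative helper name):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

// Converts an RGB Mat to Lab and splits it to expose the channel order.
static List<Mat> rgbToLabChannels(Mat rgb) {
    Mat lab = new Mat();
    // COLOR_RGB2Lab expects RGB input; use COLOR_BGR2Lab for a BGR Mat.
    Imgproc.cvtColor(rgb, lab, Imgproc.COLOR_RGB2Lab);
    List<Mat> channels = new ArrayList<>();
    Core.split(lab, channels);
    // channels.get(0) = L, channels.get(1) = a, channels.get(2) = b
    return channels;
}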
I have an H264 stream that's decoded using an Android MediaCodec. When I query the output MediaFormat, the color format is 2141391875. Apparently, that's a specialized NV12 variant known as HAL_PIXEL_FORMAT_NV12_ADRENO_TILED. This is on a Nexus 7 (2013).
I want to take this data and convert it to RGB so I can create a Bitmap. I've found StackOverflow posts on converting other formats to RGB, but not this one; when I tried code from those posts, the result was just streaks of color. (To view the Bitmap, I draw it on the Canvas associated with a Surface, and also write it out as a JPEG -- it looks the same in both cases.)
How can I convert this particular data to RGB?
2141391875 decimal is 0x7FA30C03 in hex, which according to this header file is OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka. Which amounts to the same thing as the constant you found: this is a proprietary Qualcomm color format.
The easiest (and fastest) way to convert it is to let OpenGL ES do the work for you. See for example ExtractMpegFramesTest, which decodes video frames to a SurfaceTexture, renders the texture to an off-screen surface, and then reads the pixels out with glReadPixels(). The GLES driver will handle the RGB conversion for you.
If you want to do the conversion yourself, you will need to reverse-engineer the color format, or find someone who has done so already and is willing to share.
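For the GLES route, the key step is configuring the decoder with a Surface instead of requesting byte buffers, along the lines of this sketch (following the ExtractMpegFramesTest approach; error handling omitted, and texId must be a GL_TEXTURE_EXTERNAL_OES texture created on the current EGL context):

import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

// Starts a decoder that renders into a SurfaceTexture, so the GLES driver
// performs the proprietary-YUV-to-RGB conversion for you.
static MediaCodec startDecoderToTexture(MediaFormat format, int texId) throws IOException {
    SurfaceTexture st = new SurfaceTexture(texId);
    Surface surface = new Surface(st);
    MediaCodec decoder = MediaCodec.createDecoderByType(
            format.getString(MediaFormat.KEY_MIME));
    decoder.configure(format, surface, null, 0);
    decoder.start();
    // After releaseOutputBuffer(index, /*render=*/ true), call
    // st.updateTexImage(), draw the external texture to an off-screen EGL
    // surface, then read it back with glReadPixels() to build a Bitmap.
    return decoder;
}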
I have applied some effects to camera preview by OpenGL ES 2.0 shaders.
Next, I want to save these effect pictures (grayscale, negative ...)
I call glReadPixels() in onDrawFrame(), create a bitmap from the pixels I read out of the OpenGL frame buffer, and then save it to device storage.
However, this way I only get a snapshot of the camera effect preview. In other words, the saved image resolution (e.g. 800*480) is not the same as that of the image delivered by Camera.PictureCallback (e.g. 1920*1080).
I know the preview size can be changed with setPreviewSize(), but it can never be made equal to the picture size.
So, is it possible to use a GLSL shader to directly post-process the image obtained from Camera.PictureCallback? Or is there another way to achieve the same goal?
Any suggestion will be greatly appreciated.
Thanks.
Justin, this was my question about setPreviewTexture(), not a suggestion. If you send the pixel data as received from onPreviewFrame() callback, it will naturally be limited by supported preview sizes.
You can use the same logic to push the pixels from onPictureTaken() to a texture, but you will first have to decode them from JPEG into RGB.
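A sketch of that decode-and-upload step (uploadJpegAsTexture is an illustrative name; it assumes a current GL context and enough heap for a full-resolution Bitmap). Render this texture through the same effect shader into an FBO of the picture's dimensions, then glReadPixels() the result:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Decodes the JPEG bytes from onPictureTaken() and uploads them as a
// GL_TEXTURE_2D texture; returns the texture id.
static int uploadJpegAsTexture(byte[] jpegData) {
    Bitmap bitmap = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();  // pixel data now lives in the GL texture
    return tex[0];
}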