I am using MediaCodec to decode H.264 frames on Android, but instead of using a Surface as the target, I receive the decoded frames as a byte array. The problem is that these frames arrive in different color formats (YUV planar, YUV semi-planar, ...) depending on the device.
If I want to display these frames myself in a Bitmap within an app, how can I properly handle the conversion to RGB or ARGB, which Bitmap supports?
Specifically, I would like the end result of my decoding to be an ARGB_8888 Bitmap.
Is it possible to achieve this directly with MediaCodec? Do I have to do all these conversions in software?
Related
I want to get raw data from a JavaCV Frame.
I am using FFmpegFrameFilter to rotate the Android camera preview: I pull a Frame from the FFmpegFrameFilter and then provide the converted byte[] to MediaCodec.
While doing so, I am getting wrong data (a green picture). I am taking the raw data from Frame.image[0].array().
Is there any other way to fetch raw data from a Frame that I can feed to MediaCodec?
A green picture most likely means that you are passing zeros for the chroma components of the image. The Android camera usually produces YUV 420 images, where you have a width*height array of luminance (Y) followed by the U and V components, width/2 * height/2 each.
FFmpegFrameFilter understands different frame formats, but this also depends on heuristics that derive the input pixel format from the Frame parameters.
MediaCodec can receive frames in flexible YUV 420 format, but it is your responsibility to set up the Image correctly.
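For reference, here is a minimal sketch of what "setting up the Image correctly" can look like when copying an NV21 camera frame (Y plane followed by interleaved V/U) into the flexible YUV 4:2:0 Image returned by MediaCodec.getInputImage(); the method name and the assumption that width and height are even are mine:

import android.media.Image;
import java.nio.ByteBuffer;

// Copy an NV21 frame into a flexible YUV 4:2:0 Image, honoring each
// plane's row and pixel strides. Assumes the Y plane has pixel stride 1.
void fillImageFromNv21(Image image, byte[] nv21, int width, int height) {
    Image.Plane[] planes = image.getPlanes(); // 0 = Y, 1 = U, 2 = V
    ByteBuffer y = planes[0].getBuffer();
    int yRowStride = planes[0].getRowStride();
    for (int row = 0; row < height; row++) {
        y.position(row * yRowStride);
        y.put(nv21, row * width, width);
    }
    // NV21 stores chroma as interleaved V/U pairs at quarter resolution.
    ByteBuffer u = planes[1].getBuffer();
    ByteBuffer v = planes[2].getBuffer();
    int uRow = planes[1].getRowStride(), uPix = planes[1].getPixelStride();
    int vRow = planes[2].getRowStride(), vPix = planes[2].getPixelStride();
    int chromaBase = width * height;
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int src = chromaBase + row * width + col * 2; // V, then U
            v.put(row * vRow + col * vPix, nv21[src]);
            u.put(row * uRow + col * uPix, nv21[src + 1]);
        }
    }
}

Writing through the planes' reported strides is what makes this work across devices that pick different internal layouts for the flexible format.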
In Android, I need an efficient way of modifying the camera stream before displaying it on the screen. This post discusses a couple of ways of doing so and I was able to implement the first one:
Get frame buffer from onPreviewFrame
Convert frame to YUV
Modify frame
Convert modified frame to jpeg
Display frame in an ImageView placed over the SurfaceView used for the preview
That worked, but it brought the 30 fps I was getting with a regular camera preview down to 5 fps or so. Converting frames back and forth between different color spaces is also power-hungry, which I want to avoid.
Are there examples on how to get access to the raw frames directly and not have to go through so many conversions? Is using OpenGL the right way of doing this? It must be a very common thing to do but I can't find good examples.
Note: I'd rather avoid using the Camera2 APIs for backward compatibility's sake.
The most efficient form of your CPU-based pipeline would look something like this:
Receive frames from the Camera on a Surface, rather than as byte[]. With Camera2 you can send the frames directly to an ImageReader; that will get you CPU access to the raw YUV data from the Camera without copying it or converting it. (I'm not sure how to rig this up with the old Camera API, as it wants either a SurfaceTexture or SurfaceHolder, and ImageReader doesn't provide those. You can run the frames through a SurfaceTexture and get RGB values from glReadPixels(), but I don't know if that'll buy you anything.)
Perform your modifications on the YUV data.
Convert the YUV data to RGB.
Either convert the RGB data into a Bitmap or a GLES texture. glTexImage2D will be more efficient, but OpenGL ES comes with a steep learning curve. Most of the pieces you need are in Grafika (e.g. the texture upload benchmark) if you decide to go that route.
Render the image. Depending on what you did in step #4, you'll either render the Bitmap through a Canvas on a custom View, or render the texture with GLES on a SurfaceView or TextureView.
I think the most significant speedup will be from eliminating the JPEG compression and decompression, so you should probably start there. Convert the output of your frame editor to a Bitmap and just draw it on the Canvas of a TextureView or custom View, rather than converting to JPEG and using ImageView. If that doesn't get you the speedup you want, figure out what's slowing you down and work on that piece of the pipeline.
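As a rough illustration of that last point, a custom View that draws each processed frame directly might look like this (names and structure are mine, a sketch rather than a drop-in replacement):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// Draws the latest processed frame directly, skipping the JPEG
// encode/decode round trip entirely.
public class FrameView extends View {
    private Bitmap frame;

    public FrameView(Context context) { super(context); }

    // Call with the latest ARGB pixels from the frame editor.
    public void setFrame(int[] argb, int width, int height) {
        frame = Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
        postInvalidate(); // schedule a redraw on the UI thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (frame != null) {
            canvas.drawBitmap(frame, 0, 0, null);
        }
    }
}

In production you would reuse one mutable Bitmap via setPixels() instead of allocating per frame, but the shape of the pipeline is the same.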
If you're restricted to the old camera API, then using a SurfaceTexture and doing your processing in a GPU shader may be most efficient.
This assumes whatever modifications you want to do can be expressed reasonably as a GL fragment shader, and that you're familiar enough with OpenGL to set up all the boilerplate necessary to render a single quadrilateral into a frame buffer, using the texture from a SurfaceTexture.
You can then read back the results with glReadPixels from the final rendering output, and save that as a JPEG.
Note that the shader will provide you with RGB data, not YUV, so if you really need YUV, you'll have to convert back to a YUV colorspace before processing.
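If you go this route, the readback step can look roughly like the following (a sketch; it assumes a current EGL context whose framebuffer holds the rendered frame):

import android.graphics.Bitmap;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Read the rendered RGBA pixels back from the current framebuffer
// and wrap them in a Bitmap, which can then be compressed to JPEG.
Bitmap readFramebuffer(int width, int height) {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);
    // GL's origin is bottom-left, so the result is vertically flipped
    // relative to screen orientation; flip it afterwards if that matters.
    return bmp;
}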
If you can use camera2, as fadden says, ImageReader or Allocation (for Java/JNI-based processing or Renderscript, respectively) become options as well.
And if you're only using the JPEG to get to a Bitmap to place on an ImageView, and not because you want to save it, then again as fadden says you can skip the encode/decode step and draw to a view directly. For example, if using the Camera -> SurfaceTexture -> GL path, you can just use a GLSurfaceView as the output destination and render directly into it, if that's all you need to do with the data.
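For the camera2/ImageReader option, the setup is roughly this (a sketch, not a full capture-session walkthrough; width, height, and backgroundHandler are assumed to exist):

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;

// Receive YUV_420_888 frames with direct CPU access, no copy or conversion.
ImageReader reader = ImageReader.newInstance(width, height,
        ImageFormat.YUV_420_888, /* maxImages= */ 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    // image.getPlanes() exposes Y, U, V with row/pixel strides; process here.
    image.close(); // always release so the camera can keep producing frames
}, backgroundHandler);
// Pass reader.getSurface() as an output target of the capture session.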
I have an H264 stream that's decoded using an Android MediaCodec. When I query the output MediaFormat, the color format is 2141391875. Apparently, that's a specialized NV12 variant known as HAL_PIXEL_FORMAT_NV12_ADRENO_TILED. This is on a Nexus 7 (2013).
I want to take this data and convert it to RGB so I can create a Bitmap. I've found StackOverflow posts for converting other formats to RGB, but not this one. I've tried code from those other posts, but the result is just streaks of color. (To view the Bitmap, I draw on the Canvas associated with a Surface, as well as write it out as a JPEG; it looks the same in both cases.)
How can I convert this particular data to RGB?
2141391875 decimal is 0x7FA30C03 in hex, which according to this header file is OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka. That amounts to the same thing as the constant you found: this is a proprietary Qualcomm color format.
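You can confirm which constant a given device reports by logging the output format in hex (a sketch; codec and TAG are assumed to exist in your code):

import android.media.MediaFormat;
import android.util.Log;

MediaFormat fmt = codec.getOutputFormat();
int colorFormat = fmt.getInteger(MediaFormat.KEY_COLOR_FORMAT);
Log.d(TAG, "color format: 0x" + Integer.toHexString(colorFormat)); // 0x7fa30c03 here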
The easiest (and fastest) way to convert it is to let OpenGL ES do the work for you. See for example ExtractMpegFramesTest, which decodes video frames to a SurfaceTexture, renders the texture to an off-screen surface, and then reads the pixels out with glReadPixels(). The GLES driver will handle the RGB conversion for you.
If you want to do the conversion yourself, you will need to reverse-engineer the color format, or find someone who has done so already and is willing to share.
I want to use onPreviewFrame to post-process the image before displaying it to the user (e.g. apply a color tint, sepia, etc.). As I understand it, the byte[] data delivered to the callback is encoded as YUV420sp. Have people been decoding this to RGB in Java or using the NDK (native code)? Does anyone have an example of a function that decodes this to RGB, and of how the RGB values are used afterwards?
Thanks.
I found a sample application that translates the YUV420 into RGB and displays (sort of) real time histograms over the preview image.
http://www.stanford.edu/class/ee368/Android/index.html
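For completeness, the conversion used there is essentially the classic decodeYUV420SP routine: fixed-point BT.601 math over the NV21 layout (Y plane followed by interleaved V/U at quarter resolution). A version of it looks like this:

// Convert a YUV420SP (NV21) frame to ARGB_8888 pixels.
static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) { // one V/U pair per 2x2 block of pixels
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y; // fixed-point BT.601 coefficients
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                    | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

The resulting int[] can go straight into Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888).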
Does this help?
Camera.Parameters params = mCamera.getParameters();
params.setPreviewFormat(ImageFormat.RGB_565);
mCamera.setParameters(params);
First check whether an RGB preview format is supported:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#getPreviewFormat%28%29
and then set the preview format to RGB:
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#setPreviewFormat%28int%29
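Note that an RGB preview is not guaranteed; many devices only offer NV21 and YV12, so it is worth checking first (a sketch):

import android.graphics.ImageFormat;
import android.hardware.Camera;
import java.util.List;

Camera.Parameters params = mCamera.getParameters();
List<Integer> formats = params.getSupportedPreviewFormats();
if (formats.contains(ImageFormat.RGB_565)) {
    params.setPreviewFormat(ImageFormat.RGB_565);
    mCamera.setParameters(params);
} else {
    // Fall back to the default (NV21 on virtually all devices)
    // and convert to RGB in software.
}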