Getting Qualcomm encoders to work via MediaCodec API - Android

I am trying to do hardware encoding (avc) of NV12 stream using Android MediaCodec API.
When using OMX.qcom.video.encoder.avc, the resolutions 1280x720 and 640x480 work fine, while others (e.g. 640x360, 320x240, 800x480) produce output where the chroma component appears shifted (please see the snapshot).
I have double-checked that the input image is correct by saving it to a JPEG file.
This problem only occurs on Qualcomm devices (e.g. the Samsung Galaxy S4).
Has anyone got this working properly? Is any additional setup necessary, or are there quirks to work around?

A MediaCodec instance has a MediaFormat that can be retrieved with getOutputFormat(). The returned instance can be printed to the log, and it contains some useful information. In your case a value like "slice-height" could be useful: I suspect it is equal to the height for 1280x720 and 640x480 but differs for the other resolutions. You should probably use this value to compute the chroma offset.
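A minimal sketch of that idea ("stride" and "slice-height" are the key names commonly reported by Qualcomm codecs; the mediaCodec, width and height variables are assumed to exist in your code):
MediaFormat format = mediaCodec.getOutputFormat();
Log.d("CodecFormat", format.toString());   // dump everything the codec reports
int stride = format.containsKey("stride") ? format.getInteger("stride") : width;
int sliceHeight = format.containsKey("slice-height") ? format.getInteger("slice-height") : height;
// If the codec pads the luma plane, the chroma data starts at stride * sliceHeight
// rather than at width * height.
int chromaOffset = stride * sliceHeight;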

Yep, OMX.qcom.video.encoder.avc does that, but not on all devices/Android versions. On my Nexus 4 with Android 4.3 the encoder works fine, but not on my S3 (running 4.1).
The solution for an S3 running 4.1 with OMX.qcom.video.encoder.avc (it seems that some S3s have another encoder) is to add 1024 bytes of padding just before the chroma plane.
// The encoder may need some padding before the chroma plane
int padding = 1024;
if ((mWidth == 640 && mHeight == 480) || (mWidth == 1280 && mHeight == 720)) padding = 0;
// Copy the luma plane, then interleave the U and V channels (YV12 stores V first, then U)
System.arraycopy(buffer, 0, tmp, 0, mYSize); // Y
for (int i = 0; i < mUVSize; i++) {
    tmp[mYSize + padding + i*2]     = buffer[mYSize + mUVSize + i]; // Cb (U)
    tmp[mYSize + padding + i*2 + 1] = buffer[mYSize + i];           // Cr (V)
}
return tmp;
The camera delivers YV12 and the encoder is configured with COLOR_FormatYUV420SemiPlanar.
Your snapshot shows the same kind of artefacts I had, so you may need a similar hack for some resolutions, maybe with another padding length.
You should also avoid resolutions that are not a multiple of 16, apparently even on 4.3 (http://code.google.com/p/android/issues/detail?id=37769)!

Related

Using Vulkan to sample from Android Camera2 hardware buffer - Issue with image formats

I'm currently working on an app in C++ using the Android NDK, and I need to create a sampler to access the camera output image.
I have done this using AIMAGE_FORMAT_YUV_420_888 and a VkSamplerYcbcrConversion for accessing the image in the hardware buffer. I do the YUV -> RGB conversion in a shader, and it all looks good on my phone.
I have since discovered that this doesn't work on Samsung phones, in my case specifically the Samsung Galaxy S10/S10+.
The reason is that when I set up an image reader with AIMAGE_FORMAT_YUV_420_888, I get a camera error on the Samsung devices. On my OnePlus and on another phone I tried, the pipeline worked entirely as expected. I created a very simple test setup that just opens the camera with that image format in the ImageReader: on the Samsung S10 I got the error, but when I changed the ImageReader format to AIMAGE_FORMAT_JPEG the error went away and the camera seemed to start as expected.
AImageReader* SimpleCamera::CreateJpegReader()
{
    AImageReader* reader = nullptr;
    // media_status_t status = AImageReader_new(640, 480, AIMAGE_FORMAT_JPEG,
    // AIMAGE_FORMAT_RGBA_8888
    // media_status_t status = AImageReader_new(640, 480, AIMAGE_FORMAT_RGB_565, 4, &reader);
    media_status_t status = AImageReader_newWithUsage(640, 480,
            // AIMAGE_FORMAT_RGBA_8888,
            // AIMAGE_FORMAT_RGB_565,
            // AIMAGE_FORMAT_RGB_888,
            AIMAGE_FORMAT_JPEG,
            AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE | AHARDWAREBUFFER_USAGE_CPU_READ_RARELY,
            4, &reader);
    if (status != AMEDIA_OK) {
        LOGE("Couldn't create new image reader");
        return nullptr;
    }
    AImageReader_ImageListener listener{
        .context = nullptr,
        .onImageAvailable = imageCallback1,
    };
    AImageReader_setImageListener(reader, &listener);
    return reader;
}
None of the other formats are guaranteed to be supported except AIMAGE_FORMAT_JPEG, but this format doesn't seem to work with the VkSamplerYcbcrConversion because the image layout is different.
Has anyone come up against this issue before? And if so how did you resolve it?
At a high level the goal is: in C++, get the image out of the Camera2 API and onto a VkImage. If anyone knows an alternative way of doing that, I'm also all ears.
Try using ImageFormat.PRIVATE with the USAGE_GPU_SAMPLED_IMAGE flag. This used to work fine on the mentioned Samsung devices in particular.
Please make sure to read the Vulkan specification, as there are quite a few Android-specific and VkSamplerYcbcrConversion-related requirements.
I can also recommend taking a look at this great project, which uses the Android Camera2 API and Vulkan.
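For illustration, a minimal sketch of that suggestion using the Java framework API (the NDK path from the question should use AIMAGE_FORMAT_PRIVATE with AImageReader_newWithUsage instead; the 640x480 size and maxImages count are placeholders):
import android.graphics.ImageFormat;
import android.hardware.HardwareBuffer;
import android.media.ImageReader;
import android.view.Surface;

ImageReader createPrivateReader() {
    // PRIVATE-format buffers keep an opaque, implementation-defined layout,
    // which sidesteps the YUV_420_888 issue while staying GPU-sampleable.
    return ImageReader.newInstance(
            640, 480,                                 // placeholder preview size
            ImageFormat.PRIVATE,
            4,                                        // maxImages
            HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE);
}
// Pass createPrivateReader().getSurface() as the camera capture session's output Surface.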

WebGL video texture size limits on Android - how to use 3840 * 2160?

I am developing a WebGL application that gets a stream of textures from HTML5 video, using HLS.js. It's working great on desktop, and it's working for 1920 * 1080 video on mobile Android, but not for 3840 * 2160.
I have tested the app on a couple of high-end devices (Xperia X Performance, Samsung Galaxy S8), both fail for the 4k video.
I know that the video can be played on those devices, because I also have a debug mode where the video element is attached to the DOM, and the video renders perfectly.
I have also used http://webglreport.com/ on those devices and that page shows that I should be able to use 4096 * 4096 textures.
I have also manually generated a 3840 * 2160 texture using a JavaScript ArrayBuffer, and that texture was properly rendered.
This is how I copy the video frame to the texture:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, this.videoElement);
When performing this call for a video of size 3840 * 2160 on Chrome for Android I get the following log output:
SurfaceUtils: set up nativeWindow 0xc9ef2008 for 3840x2160, color 0x7fa30c04, rotation 0, usage 0x20002900
chromium: [INFO:CONSOLE(11818)] "WebGL: INVALID_VALUE: texImage2D: width or height out of range"
Which maps to this error code from gl.getError():
GL_INVALID_VALUE, 0x0501
These are the parameters I use for the backing texture
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
And here is how I generate and upload the test texture
const width = 3840;
const height = 2160;
const generateTex = (w: number, h: number): number[] => {
  const res = [];
  for (let i = 0; i < w * h; i++) {
    res.push(0, 255, 0);
  }
  return res;
};
const image = new Uint8Array(generateTex(width, height));
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, width, height, 0, gl.RGB, gl.UNSIGNED_BYTE, image);
The documentation for texImage2D says, for a source of type HTMLVideoElement:
"The width and height of the texture are set to the width and height of the uploaded frame of the video in pixels."
The question is: Is there a specification somewhere that can explain why my 3840 * 2160 texture does not get rendered?
TL;DR
Higher end devices (Xperia X Performance, Galaxy S8)
Android Chrome
I can upload a custom texture of size 3840 * 2160 and render it
I can decode & play video of size 3840 * 2160 using <video> tag
I can NOT upload frames from 3840 * 2160 video to WebGL
Lower resolution works just fine
Thanks!
After a week of effort to understand the problem, it is now clear the error was on our end.
The video was indeed created as 3840 x 2160, but it was created from a source video with a resolution of 3840 x 4320. When converting the video to 3840 x 2160, ffmpeg was setting the aspect ratio to (3840/4320), leading Android to inflate the video back to that size. The inflated size has a height greater than 4096, and that is why the texture could not be created.
We didn't see this issue on other platforms (native apps) so it might be some quirk of the Android media player.
We have fixed the issue on our end by setting the "correct" aspect ratio of (3840/2160) manually as an ffmpeg parameter.
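For illustration only (the file names are placeholders, not the ones from our pipeline), the idea was to pass the target aspect ratio explicitly instead of letting ffmpeg carry over the source's:
ffmpeg -i source_3840x4320.mp4 -vf scale=3840:2160 -aspect 3840:2160 output_2160p.mp4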
What I learned:
Don't assume that the resolution reported by applications such as VLC or QuickTime will reflect what happens on every other platform. After adding code to check the resolution reported by the HTML video element, it became apparent that something different was happening on Android compared to the other platforms.

Nexus 9 Camera2 API - YUV_420_888 vs. getOutputSizes()

I'm implementing the Camera2 API with the YUV_420_888 format on a Nexus 9. I checked the output sizes and wanted to use the largest (8 MP, 3280 x 2460) size to save. However, the result just appears as static lines, similar to how old TVs looked without a signal. I would like to stick with YUV_420_888 since my end goal is to save grayscale data (the Y component).
I originally thought it was a camera bandwidth issue, but the same thing happened at some of the smaller sizes (320 x 240). None of the problems went away even when I increased the frame duration and decreased the size of the preview to save on bandwidth. Some of the other sizes DID work (2048 x 1536, 1280 x 720), but I did not check all of them.
I'm starting to think getOutputSizes() may not necessarily be accurate. It gave me the same results for all formats other than RAW_SENSOR (i.e. for JPEG, YUV_420_888 and YV12). Has anyone encountered this or determined a solution?
Figured out the issue. I was not taking into account the rowStride of the returned pixels. So I had to run a for-loop to extract the non-padded data before saving it:
int myRowStride = mImage.getPlanes()[0].getRowStride();
int iSkippedBytes = 0;
// Walk the Y plane pixel by pixel, skipping the row padding (rowStride - width) at the end of each row
for (int i = 0; i < mStillSize.getWidth() * mStillSize.getHeight(); i++) {
    if (i % mStillSize.getWidth() == 0 && i != 0)
        iSkippedBytes = iSkippedBytes + (myRowStride - mStillSize.getWidth());
    imageBytes[i] = bytes[i + iSkippedBytes];
}
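For reference, the same de-padding can be done row by row straight from the plane's ByteBuffer, which avoids the per-pixel index bookkeeping (a sketch reusing the mImage, mStillSize and myRowStride names from above):
ByteBuffer yPlane = mImage.getPlanes()[0].getBuffer();
int width = mStillSize.getWidth();
int height = mStillSize.getHeight();
byte[] imageBytes = new byte[width * height];
for (int row = 0; row < height; row++) {
    yPlane.position(row * myRowStride);         // jump to the start of this row
    yPlane.get(imageBytes, row * width, width); // copy only the valid pixels, drop the padding
}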

Slow glTexSubImage2D performance on Nexus 10/Android 4.2.2 (Samsung Exynos 5 w/ Mali-T604)

I have an Android app that decodes video into yuv420p format then renders video frames using OpenGLES.
I use glTexSubImage2D() to upload the Y/U/V buffers to the GPU and then do a YUV-to-RGB conversion with a shader. All EGL/OpenGL setup/rendering code is native code.
Now I am not saying there is no problem with my code, but considering that the same code runs perfectly fine on iOS (iPad/iPhone), the Nexus 7, the Kindle HD 8.9, the Samsung Note 1 and a few other cheap Chinese tablets (A31/RockChip 3188) running Android 4.0/4.1/4.2, I would say it's unlikely my code is wrong. On those devices, glTexSubImage2D() takes less than 16 ms to upload an SD or 720p HD texture.
However, on the Nexus 10, glTexSubImage2D() takes about 50-90 ms for an SD or 720p HD texture, which is way too slow for 30 fps or 60 fps video.
I would like to know:
1) if I should pick a different texture format (RGBA or BGRA). Is there a way to detect which texture format a GPU handles best?
2) if there is a feature that is 'OFF' on all other SoCs but set to 'ON' on the Exynos 5, for example automatic mipmap generation (I have it off, by the way).
3) if this is a known issue of the Samsung Exynos SoC - I can't find a support forum for the Exynos.
4) if there is any option I need to set when configuring the EGL surface, like transparency or surface format (I have no idea what I am talking about).
5) whether the GPU is doing an implicit format conversion - but I checked and GL_LUMINANCE is always used. Again, it works on all other platforms.
6) anything else?
My EGL config:
const EGLint attribs[] = {
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_BLUE_SIZE, 8,
    EGL_GREEN_SIZE, 8,
    EGL_RED_SIZE, 8,
    EGL_NONE
};
Initial setup:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, ctx->frameW, ctx->frameH, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL); /* also for U/V */
subsequent partial replacement:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, ctx->frameW, ctx->frameH, GL_LUMINANCE, GL_UNSIGNED_BYTE, yBuffer); /*also for U/V */
I am trying to render video at ~30FPS or ~60FPS at SD or 720P HD resolution.
This is a known driver issue that we have reported to ARM. A future update should fix it.
EDIT Status update
We've now managed to reproduce slow upload conditions for one path on the public firmware, which you are possibly hitting, and this will be fixed in the next driver release.
If you double-buffer the texture IDs you are uploading to (e.g. frame N = ID X, N+1 = ID Y, N+2 = ID X, N+3 = ID Y, etc.), it should help avoid this on the current firmware.
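A minimal sketch of that ping-pong pattern, shown with the Java GLES20 bindings purely for illustration (the asker's code is native, but the calls map one-to-one); frameIndex, w, h and yBuffer are assumed, and the second texture is allocated the same way as the first:
private final int[] texIds = new int[2];
private int frameIndex = 0;

void setup(int w, int h) {
    GLES20.glGenTextures(2, texIds, 0);
    // ... allocate both textures once with glTexImage2D(GL_LUMINANCE, w, h, ...) ...
}

void uploadLumaPlane(int w, int h, ByteBuffer yBuffer) {
    // Alternate texture names every frame so the driver never stalls on a texture
    // that the previous frame's draw call is still reading from.
    int current = texIds[frameIndex % 2];
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, current);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, w, h,
            GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
    // ... draw this frame sampling from `current`, then advance ...
    frameIndex++;
}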
Thanks,
Iso
I can confirm this has been fixed in Android 4.3 - I'm seeing a performance increase by a factor of 2-3 with RGBA format and by a factor of 10-50 with other texture formats over Android 4.2.2. These results apply for both glTexImage2D and glTexSubImage2D. (Can't add comments yet so I had to put this here)
EDIT: If you're stuck with 4.2.2, you could try using RGBA texture instead, it should have better performance (3-10x or so with larger power-of-two texture sizes).

OpenGL ES 1.1, ETC1 texture compression and mipmapping (complete set of mipmaps error)

When I enable mipmapping on an uncompressed texture, everything works perfectly.
When I do it on an ETC1 texture, the texture is blank, most likely because the complete set of mipmaps was not provided.
The code is very simple and works on iPhone (with PVR compression, of course).
It doesn't work on Android. The mipmaps were built with an external tool and concatenated together.
I stop generating mipmaps at a size of 4, because glCompressedTexImage2D returns an OpenGL error if I try to use smaller levels.
for (u32 i = 0; i <= levels; i++)
{
    size = KC_TexByte(pagex, pagey, tex_type);
    glCompressedTexImage2D(GL_TEXTURE_2D, i, type, pagex, pagey, 0, size, ptr);
    pagex = MAX(pagex / 2, 4);
    pagey = MAX(pagey / 2, 4);
    ptr += size;
    KC_Error(); // test OpenGL error
}
The reason your texture is blank is that the mipmap chain is required to go all the way down to 1x1.
I would imagine that the error you're getting with small compressed textures is because the texture format you're attempting to use (ETC1?) doesn't support those sizes. You'd have to use non-compressed images at those small sizes...
Thanks, but your solution is not the right one; I found another solution.
You're right that the complete set of mipmaps is required, down to 1x1.
You're wrong about mixing formats: we can't have different formats between mipmap levels.
The right way is:
go all the way down to a 1x1 level
keep in mind it's block-compressed data, so the size in bytes doesn't divide by 4 at each step; below 8x8 the size stays at the same value
sx = width of the level
sy = height of the level
byte = ((sx+3)/4) * ((sy+3)/4) * 8 * 2; // 8 = bits per pixel
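For illustration, a sketch of a full ETC1 mip-chain upload down to 1x1 using the Android Java bindings (ETC1.getEncodedDataSize computes the block-aligned size of each level; baseWidth, baseHeight and the back-to-back mipData ByteBuffer are assumptions about how the levels were packed):
int w = baseWidth;
int h = baseHeight;
int level = 0;
int offset = 0;
while (true) {
    int size = ETC1.getEncodedDataSize(w, h);   // compressed payload size of this level
    mipData.limit(offset + size);
    mipData.position(offset);
    GLES10.glCompressedTexImage2D(GLES10.GL_TEXTURE_2D, level,
            ETC1.ETC1_RGB8_OES, w, h, 0, size, mipData.slice());
    if (w == 1 && h == 1) break;                // the chain must end at 1x1
    w = Math.max(w / 2, 1);
    h = Math.max(h / 2, 1);
    offset += size;
    level++;
}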
for(u32 i=0; i<=levels; i++)
Seems you'd want i < levels instead of <=.
