I'm trying to convert video frames decoded with FFmpeg into an OpenGL ES texture in JNI, but all I get is a black texture. I have checked for OpenGL errors with glGetError(), and there is none.
Here is my code:
void* pixels;
int err;
int i;
int frameFinished = 0;
AVPacket packet;
static struct SwsContext *img_convert_ctx;
static struct SwsContext *scale_context = NULL;
int64_t seek_target;
int target_width = 320;
int target_height = 240;
GLenum error = GL_NO_ERROR;
sws_freeContext(img_convert_ctx);
i = 0;
while((i==0) && (av_read_frame(pFormatCtx, &packet)>=0)) {
    if(packet.stream_index==videoStream) {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if(frameFinished) {
            LOGI("packet pts %llu", packet.pts);
            img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                    pCodecCtx->pix_fmt,
                    target_width, target_height, PIX_FMT_RGB24, SWS_BICUBIC,
                    NULL, NULL, NULL);
            if(img_convert_ctx == NULL) {
                LOGE("could not initialize conversion context\n");
                return;
            }
            sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
            LOGI("sws_scale");

            videoTextures = new Texture*[1];
            videoTextures[0]->mWidth = 256; //(unsigned)pCodecCtx->width;
            videoTextures[0]->mHeight = 256; //(unsigned)pCodecCtx->height;
            videoTextures[0]->mData = pFrameRGB->data[0];

            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glGenTextures(1, &(videoTextures[0]->mTextureID));
            glBindTexture(GL_TEXTURE_2D, videoTextures[0]->mTextureID);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

            if(0 == got_texture) {
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, videoTextures[0]->mWidth, videoTextures[0]->mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
            } else {
                glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
            }

            i = 1;
            error = glGetError();
            if(error != GL_NO_ERROR) {
                LOGE("couldn't create texture!!");
                switch (error) {
                case GL_INVALID_ENUM:
                    LOGE("GL Error: Enum argument is out of range");
                    break;
                case GL_INVALID_VALUE:
                    LOGE("GL Error: Numeric value is out of range");
                    break;
                case GL_INVALID_OPERATION:
                    LOGE("GL Error: Operation illegal in current state");
                    break;
                case GL_OUT_OF_MEMORY:
                    LOGE("GL Error: Not enough memory to execute command");
                    break;
                default:
                    break;
                }
            }
        }
    }
    av_free_packet(&packet);
}
I have succeeded in converting pFrameRGB to a Java bitmap, but I want to turn it into a texture directly in the C code.
Edit 1: I logged the texture ID and it is 0; can a texture ID be zero? I changed my code, but it is always zero.
Edit 2
The texture now displays, but it is a mess.
I'm more used to desktop GL than GLES, but there 320x240 is not a valid texture size; dimensions must be powers of two, e.g. 512x256. Otherwise you need the texture_rectangle extension, whose texcoords run from 0 to w/h instead of 0 to 1. As for uploading texture data, glTexImage2D(...) must be used the first time (even with a NULL data pointer); after that glTexSubImage2D is enough. The first call sets up sizing and storage, the second just sends the pixels.
Regarding FFmpeg usage (this may be a version issue): the conversion context should be obtained via sws_getContext and initialized only once, not once per frame. If CPU usage is an issue, use SWS_BILINEAR instead of SWS_BICUBIC. I also assume pFrameRGB has been correctly allocated with avcodec_alloc_frame(). If you are going to upload with GL_RGBA you should convert to PIX_FMT_RGBA; PIX_FMT_RGB24 would be for piping into a GL_RGB texture. Finally, you lack a packet queue, so you can't read ahead to keep the display in sync rather than late.
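A minimal sketch of those two points (context created once, RGBA pixel format to match GL_RGBA); pCodecCtx, pFrame and pFrameRGB are the names from the question, everything else is illustrative:

/* Sketch only: assumes pCodecCtx, pFrame, pFrameRGB come from the question's code. */
static struct SwsContext *sws_ctx = NULL;

static void upload_frame_rgba(AVCodecContext *pCodecCtx, AVFrame *pFrame,
                              AVFrame *pFrameRGB, int tex_w, int tex_h)
{
    if (sws_ctx == NULL) {                       /* create once, not per frame */
        sws_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                 tex_w, tex_h, PIX_FMT_RGBA,   /* RGBA to match GL_RGBA */
                                 SWS_BILINEAR, NULL, NULL, NULL);
    }
    sws_scale(sws_ctx, (const uint8_t * const *)pFrame->data, pFrame->linesize,
              0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);

    /* tex_w/tex_h should be powers of two (e.g. 512x256) on GLES 1.x hardware;
       the texture storage itself was allocated once with glTexImage2D. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, tex_w, tex_h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pFrameRGB->data[0]);
}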
I've read some comments about unpack alignment; I didn't need that to implement an FFmpeg to OpenGL/OpenAL media library (http://code.google.com/p/openmedialibrary), and given the success in that area I doubt it is the issue. The audio bits have also been extracted into an FFmpeg to OpenAL loader (http://code.google.com/p/openalextensions). It has some nice features, and currently I'm experimenting with texture compression to see if it can perform even better. Consider those tutorials, or even ready-to-use GPL code.
Hope that sheds some light on the obscure (for lack of documentation) art of FFmpeg to OpenGL/AL integration.
Try appending 16 zero bytes to each packet before passing it to the decoder.
Some comments from the avcodec.h:
/*
 * @warning The input buffer must be FF_INPUT_BUFFER_PADDING_SIZE larger than
 * the actual read bytes because some optimized bitstream readers read 32 or 64
 * bits at once and could read over the end.
 *
 * @warning The end of the input buffer buf should be set to 0 to ensure that
 * no overreading happens for damaged MPEG streams.
 */
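For illustration, a hedged sketch of that padding step (pCodecCtx, pFrame, frameFinished and packet are the question's names; padded and padded_pkt are illustrative):

uint8_t *padded = av_malloc(packet.size + FF_INPUT_BUFFER_PADDING_SIZE);
if (padded) {
    memcpy(padded, packet.data, packet.size);
    /* zero the extra padding bytes so optimized bitstream readers can over-read safely */
    memset(padded + packet.size, 0, FF_INPUT_BUFFER_PADDING_SIZE);

    AVPacket padded_pkt = packet;      /* shallow copy, then point at the padded buffer */
    padded_pkt.data = padded;          /* size stays packet.size; padding is extra space */
    avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &padded_pkt);
    av_free(padded);
}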
Related
I need to copy data from a GL_TEXTURE_EXTERNAL_OES to a plain GL_TEXTURE_2D (rendering an image from the Android player into a Unity texture), and currently I do it by reading pixels from a framebuffer with the source texture attached. This works correctly on my OnePlus 5 phone, but has image glitches on phones like the Xiaomi Note 4, Mi A2, etc. (the image looks very green). There are also performance issues, because this process runs every frame, and the more pixels there are to read, the worse performance gets (even my phone drops to a low fps at 4K resolution). Any idea how to optimize this process or do it some other way?
Thanks and best regards!
GLuint FramebufferName;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
LOGD("%s", "Error: Could not setup frame buffer.");
}
unsigned char* data = new unsigned char[g_SourceWidth * g_SourceHeight * 4];
glReadPixels(0, 0, g_SourceWidth, g_SourceHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, g_TexturePointer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, g_SourceWidth, g_SourceHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glDeleteFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
delete[] data;
UPDATE.
The function that contains this code, and the function that calls it from the Unity side:
static void UNITY_INTERFACE_API OnRenderEvent(int eventID) { ... }
extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UMDGetRenderEventFunc()
{
return OnRenderEvent;
}
This is called from a Unity update coroutine like this:
[DllImport("RenderingPlugin")]
static extern IntPtr UMDGetRenderEventFunc();
IEnumerator UpdateVideoTexture()
{
while (true)
{
...
androidPlugin.UpdateSurfaceTexture();
GL.IssuePluginEvent(UMDGetRenderEventFunc(), 1);
}
}
And the Android plugin does this on its side (surfaceTexture is the SurfaceTexture backing the external texture that ExoPlayer renders the video into):
public void exportUpdateSurfaceTexture() {
synchronized (this) {
if (this.mIsStopped) {
return;
}
surfaceTexture.updateTexImage();
}
}
On the C++ side:
You're allocating and destroying the pixel buffer every frame with new unsigned char[g_SourceWidth * g_SourceHeight * 4] and delete[] data, and that's expensive depending on the texture size. Create the buffer once, then re-use it.
One way to do this is to have static variables on the C++ side hold the texture information, plus a function to initialize those variables:
static void* pixelData = nullptr;
static int _x;
static int _y;
static int _width;
static int _height;
void initPixelData(void* buffer, int x, int y, int width, int height) {
pixelData = buffer;
_x = x;
_y = y;
_width = width;
_height = height;
}
Then your capture function should be rewritten to remove new unsigned char[g_SourceWidth * g_SourceHeight * 4] and delete[] data, and to use the static variables instead.
static void UNITY_INTERFACE_API OnRenderEvent(int eventID)
{
if (pixelData == nullptr) {
//Debug::Log("Pointer is null", Color::Red);
return;
}
GLuint FramebufferName;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
LOGD("%s", "Error: Could not setup frame buffer.");
}
glReadPixels(_x, _y, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
glBindTexture(GL_TEXTURE_2D, g_TexturePointer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _width, _height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
glDeleteFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
}
extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
UMDGetRenderEventFunc()
{
return OnRenderEvent;
}
On the C# side:
[DllImport("RenderingPlugin", CallingConvention = CallingConvention.Cdecl)]
public static extern void initPixelData(IntPtr buffer, int x, int y, int width, int height);
[DllImport("RenderingPlugin", CallingConvention = CallingConvention.StdCall)]
private static extern IntPtr UMDGetRenderEventFunc();
Create the Texture information, pin it and send the pointer to C++:
int width = 500;
int height = 500;
//Where Pixel data will be saved
byte[] screenData;
//Where handle that pins the Pixel data will stay
GCHandle pinHandler;
//Used to test the color
public RawImage rawImageColor;
private Texture2D texture;
// Use this for initialization
void Awake()
{
Resolution res = Screen.currentResolution;
width = res.width;
height = res.height;
//Allocate array to be used
screenData = new byte[width * height * 4];
texture = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
//Pin the Array so that it doesn't move around
pinHandler = GCHandle.Alloc(screenData, GCHandleType.Pinned);
//Register the screenshot and pass the array that will receive the pixels
IntPtr arrayPtr = pinHandler.AddrOfPinnedObject();
initPixelData(arrayPtr, 0, 0, width, height);
StartCoroutine(UpdateVideoTexture());
}
Then to update the texture, see the sample below. Note that there are two methods for updating the texture, as shown in the code. If you run into issues with Method1, comment out the two lines that use texture.LoadRawTextureData and texture.Apply and un-comment the Method2 code, which uses the ByteArrayToColor, texture.SetPixels and texture.Apply functions:
IEnumerator UpdateVideoTexture()
{
while (true)
{
//Take screenshot of the screen
GL.IssuePluginEvent(UMDGetRenderEventFunc(), 1);
//Update Texture Method1
texture.LoadRawTextureData(screenData);
texture.Apply();
//Update Texture Method2. Use this if the Method1 above crashes
/*
ByteArrayToColor();
texture.SetPixels(colors);
texture.Apply();
*/
//Test it by assigning the texture to a raw image
rawImageColor.texture = texture;
//Wait for a frame
yield return null;
}
}
Color[] colors = null;
void ByteArrayToColor()
{
if (colors == null)
{
colors = new Color[screenData.Length / 4];
}
for (int i = 0; i < screenData.Length; i += 4)
{
colors[i / 4] = new Color(screenData[i],
screenData[i + 1],
screenData[i + 2],
screenData[i + 3]);
}
}
Unpin the array when done or when the script is about to be destroyed:
void OnDisable()
{
//Unpin the array when disabled
pinHandler.Free();
}
Calling glReadPixels is always going to be slow; CPUs are not good at bulk data transfer.
Ideally you'd manage to convince Unity to accept an external image handle and do the whole process zero-copy, but failing that I would use a GPU render-to-texture and a shader to transfer from the external image to the RGB surface.
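A minimal GLES2 sketch of that render-to-texture idea (not the poster's code): g_ExtTexturePointer and g_TexturePointer are the names from the question above; prog, fbo, quadVbo and the shader sources are illustrative, and aPos is assumed bound to attribute location 0 at link time.

static const char *vs_src =
    "attribute vec2 aPos;\n"
    "varying vec2 vUV;\n"
    "void main() {\n"
    "  vUV = aPos * 0.5 + 0.5;\n"
    "  gl_Position = vec4(aPos, 0.0, 1.0);\n"
    "}\n";
static const char *fs_src =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES uTex;\n"
    "varying vec2 vUV;\n"
    "void main() { gl_FragColor = texture2D(uTex, vUV); }\n";

/* prog is a program linked from vs_src/fs_src; quadVbo holds 4 vertices
 * (-1,-1)(1,-1)(-1,1)(1,1) for a full-screen triangle strip. */
void blit_external_to_texture(GLuint prog, GLuint fbo, GLuint quadVbo,
                              int width, int height)
{
    /* Attach the destination GL_TEXTURE_2D to the FBO and draw a quad that
     * samples the external texture: no glReadPixels, no CPU copy. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, g_TexturePointer, 0);
    glViewport(0, 0, width, height);

    glUseProgram(prog);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer);

    glBindBuffer(GL_ARRAY_BUFFER, quadVbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}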
I'm trying to decode video and convert each frame to RGB32 or RGB565LE format.
Then I pass the frame from C to an Android buffer via JNI.
So far, I know how to pass a buffer from C to Android, as well as how to decode the video and get a decoded frame.
My question is how to convert the decoded frame to RGB32 (or RGB565LE), and where it is stored.
The following is my code; I'm not sure whether it is correct or not.
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, 100, 100, PIX_FMT_RGB32, SWS_BICUBIC, NULL, NULL, NULL);
if(!img_convert_ctx) return -6;
while(av_read_frame(pFormatCtx, &packet) >= 0) {
    // Is this a packet from the video stream?
    if(packet.stream_index == videoStream) {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        // Did we get a video frame?
        if(frameFinished) {
            AVPicture pict;
            if(avpicture_alloc(&pict, PIX_FMT_RGB32, 100, 100) >= 0) {
                sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pict.data, pict.linesize);
            }
        } // End of if( frameFinished )
    } // End of if( packet.stream_index == videoStream )
    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}
The decoded frame goes into pict. (pFrame is a raw frame.)
100x100 is probably no good; you have to calculate the pict size based on the pFrame size.
I guess the buffer should be pFrame->width * pFrame->height * 4 bytes (32 bits per pixel).
You have to allocate pict yourself.
See this tutorial http://dranger.com/ffmpeg/
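A hedged sketch of that allocation using the codec's real dimensions (pCodecCtx and pFrame are the question's names; in real code the sws context would be created once outside the decode loop):

AVPicture pict;
int w = pCodecCtx->width;
int h = pCodecCtx->height;

if (avpicture_alloc(&pict, PIX_FMT_RGB32, w, h) >= 0) {
    /* create once outside the loop in real code */
    struct SwsContext *ctx = sws_getContext(w, h, pCodecCtx->pix_fmt,
                                            w, h, PIX_FMT_RGB32,
                                            SWS_BICUBIC, NULL, NULL, NULL);
    sws_scale(ctx, (const uint8_t * const *)pFrame->data, pFrame->linesize,
              0, h, pict.data, pict.linesize);
    /* The RGB32 pixels now live in pict.data[0]; the buffer is
       pict.linesize[0] * h bytes (4 bytes per pixel). Hand pict.data[0]
       to the Java side here, then release: */
    avpicture_free(&pict);
    sws_freeContext(ctx);
}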
I'm porting a mobile game to Android and want to use compressed textures in OpenGL, the same way I did on iOS with PVR textures.
I've managed to convert my textures from PNG to DXT and run the game on a Galaxy Tab 10.1 with the Nvidia Tegra 2 chipset.
However, there was no smooth alpha in my DXT5-formatted textures. They looked like DXT1 textures with 1-bit alpha.
I've read and run the examples from here:
http://software.intel.com/en-us/articles/android-texture-compression
I've tried this very good library:
https://libregamewiki.org/Simple_OpenGL_Image_Library
But got same results. No alpha channel.
Please, help me with this problem. I'm really stuck.
Thanks.
Details:
I've used the nvcompress tool version 2.1.0 with the flags "-nomips -bc3 -alpha" (and of course a lot of variations, but with no success).
I'm using the OpenGL ES 1 library.
My openGL code:
int width = //...
int height = //...
const unsigned char* textureData = //...
int numMipMaps = //...
int format = //...
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
int blockSize;
GLuint glFormat;   // renamed so it doesn't shadow the incoming 'format' value above
switch (format) {
    case S3TC_DXT1:
        glFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
        blockSize = 8;
        break;
    case S3TC_DXT3:
        glFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
        blockSize = 16;
        break;
    case S3TC_DXT5:
        glFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
        blockSize = 16;
        break;
    case ATC:
        glFormat = GL_ATC_RGBA_EXPLICIT_ALPHA_AMD;
        blockSize = 16;
        break;
    default:
        //Error...
        break;
}
int offset = 0;
for(int i = 0; i < numMipMaps; i++)
{
    int size = ((width + 3) / 4) * ((height + 3) / 4) * blockSize;
    glCompressedTexImage2D(GL_TEXTURE_2D, i, glFormat, width, height, 0, size, textureData + offset);
    offset += size;
    //Scale next level.
    width /= 2;
    height /= 2;
}
Finally found the problem. The OpenGL state of my game was configured to work with a premultiplied alpha channel.
I added a special 'premultiply' step to my build system and got the proper result.
Blend-function settings:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
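For reference, a minimal sketch of such an offline premultiply step for 8-bit RGBA data (the function name and layout are illustrative, not the poster's actual build step):

/* Multiply each RGB component by its alpha before the data is compressed to DXT. */
static void premultiply_rgba8888(unsigned char *pixels, int pixelCount)
{
    for (int i = 0; i < pixelCount; ++i) {
        unsigned char a = pixels[i * 4 + 3];
        pixels[i * 4 + 0] = (unsigned char)(pixels[i * 4 + 0] * a / 255);
        pixels[i * 4 + 1] = (unsigned char)(pixels[i * 4 + 1] * a / 255);
        pixels[i * 4 + 2] = (unsigned char)(pixels[i * 4 + 2] * a / 255);
    }
}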
I have a method for loading texture into OpenGL:
bool AndroidGraphicsManager::setTexture(
UINT& id, UCHAR* image, UINT width, UINT height,
bool wrapURepeat, bool wrapVRepeat, bool useMipmaps)
{
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA,
width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, (GLvoid*) image);
int minFilter = GL_LINEAR;
if (useMipmaps) {
glGenerateMipmap(GL_TEXTURE_2D);
minFilter = GL_NEAREST_MIPMAP_NEAREST;
}
glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_MIN_FILTER, minFilter);
glTexParameteri(
GL_TEXTURE_2D,
GL_TEXTURE_MAG_FILTER, GL_NEAREST);
int wrap_u = wrapURepeat ?
GL_REPEAT : GL_CLAMP_TO_EDGE;
int wrap_v = wrapVRepeat ?
GL_REPEAT : GL_CLAMP_TO_EDGE;
glTexParameteri(
GL_TEXTURE_2D,
GL_TEXTURE_WRAP_S, wrap_u);
glTexParameteri(
GL_TEXTURE_2D,
GL_TEXTURE_WRAP_T, wrap_v);
glBindTexture(GL_TEXTURE_2D, 0);
return !checkGLError("Loading texture.");
}
This works fine. I load the texture using libpng, which gives me an array of unsigned chars. Then I pass this array to the method above. The pictures are 32-bit, so I assume that in the UCHAR array each UCHAR contains a single color component and four UCHARs make up one pixel. I wanted to try using 16-bit textures, so I changed GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT_4_4_4_4. But apparently this isn't enough, because it gives me this result:
What else do I need to change to be able to properly display 16 bit textures?
[EDIT] As @datenwolf suggested, I tried using this code:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA4,
width, height, 0, GL_RGBA,
GL_UNSIGNED_SHORT_4_4_4_4, (GLvoid*) image);
I used it on Linux version of my engine, but the result was the exact same as in previous android screenshot. So is it safe to assume that the problem is with libpng and the way it produces my UCHAR array?
[EDIT2] So I finally managed to compress the 32-bit texture to 16-bit.
All I needed was a simple iteration through the 32-bit texture data and a few bit-shifting operations to compress 32 bits into 16.
void* rgba8888ToRgba4444(void* in, int size) {
int pixelCount = size / 4;
ULONG* input = static_cast<ULONG*>(in);
USHORT* output = new USHORT[pixelCount];
for (int i = 0; i < pixelCount; i++) {
ULONG pixel = input[i];
// Unpack the source data as 8 bit values
UINT r = pixel & 0xff;
UINT g = (pixel >> 8) & 0xff;
UINT b = (pixel >> 16) & 0xff;
UINT a = (pixel >> 24) & 0xff;
// Convert to 4 bit values
r >>= 4; g >>= 4; b >>= 4; a >>= 4;
output[i] = r << 12 | g << 8 | b << 4 | a;
}
return static_cast<void*>(output);
}
I thought that for a mobile device this would yield a performance increase, but unfortunately I saw no gain in performance on the Galaxy S III.
The best thing would probably be to use an image library like DevIL for this kind of thing.
But if you want to convert the image data you get from libpng, you can do something similar to the code below.
Keep in mind that you trade size for speed when doing this.
struct Color {
unsigned char r:4;
unsigned char g:4;
unsigned char b:4;
unsigned char a:4;
};
float factor = 15.0f / 255.0f; // map 0-255 to 0-15; integer division (16 / 255) would truncate to 0
Color colors[imgWidth*imgHeight];
for(int i = 0; i < imgWidth*imgHeight; ++i)
{
colors[i].r = image[i*4]*factor;
colors[i].g = image[i*4+1]*factor;
colors[i].b = image[i*4+2]*factor;
colors[i].a = image[i*4+3]*factor;
}
"But apparently this isn't enough"
The token you've changed tells OpenGL the format of the data in the array passed to it, so you have to adjust that as well. However, OpenGL may convert it into any internal format it desires, since you didn't force it into a particular format. That's what the internal format parameter is for (and it works independently of the external data type). So if you want 16 bits of resolution internally, you must change the internal format parameter to GL_RGBA4. The data may remain in the 8-bits-per-channel format. So in your case:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA4,
width, height, 0, GL_RGBA,
…, (GLvoid*) image);
The type parameter must match the layout of your data. If you've originally got 8 bits per channel, use GL_UNSIGNED_BYTE. But if it's RGBA prepackaged into ushort nibbles, use GL_UNSIGNED_SHORT_4_4_4_4.
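For illustration (following the desktop GL behaviour described above; image is the question's buffer, image4444 is a hypothetical pre-packed 4:4:4:4 buffer), the two valid combinations would look like:

/* 8-bit-per-channel client data, 16-bit internal storage: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, image);

/* Client data already packed as 4:4:4:4 shorts: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, image4444);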
I am trying to render video via the NDK, to add some features that just aren't supported in the SDK. I am using FFmpeg to decode the video and can compile it via the NDK, and used this as a starting point. I have modified that example; instead of using glDrawTexiOES to draw the texture, I have set up some vertices and am rendering the texture on top of them (the OpenGL ES way of rendering a quad).
Below is what I am doing to render, but the glTexImage2D call is slow. I want to know if there is any way to speed this up, or give the appearance of speeding it up, such as preparing some textures in the background and rendering pre-prepared textures. Or is there any other way to draw the video frames to the screen more quickly on Android? Currently I can only get about 12 fps.
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, textureConverted);
//this is slow
glTexImage2D(GL_TEXTURE_2D, /* target */
0, /* level */
GL_RGBA, /* internal format */
textureWidth, /* width */
textureHeight, /* height */
0, /* border */
GL_RGBA, /* format */
GL_UNSIGNED_BYTE,/* type */
pFrameConverted->data[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
EDIT
I changed my code to call glTexImage2D only once and update the texture with glTexSubImage2D, but it didn't make much of an improvement to the framerate.
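For clarity, a sketch of that allocate-once / update-per-frame pattern (textureConverted, textureWidth, textureHeight and pFrameConverted are the names from the code above; texture_initialized is illustrative):

static int texture_initialized = 0;

glBindTexture(GL_TEXTURE_2D, textureConverted);
if (!texture_initialized) {
    /* Allocate storage once; the data pointer may be NULL here. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    texture_initialized = 1;
}
/* Per frame: only upload new pixels into the existing storage. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                GL_RGBA, GL_UNSIGNED_BYTE, pFrameConverted->data[0]);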
I then modified the code to update a native Bitmap object in the NDK. With this approach I have a background thread that processes the next frames and populates the bitmap object on the native side. I think this has potential, but I need to increase the speed of converting the AVFrame object from FFmpeg into a native bitmap. Below is what I am currently using to convert, a brute-force approach. Is there any way to increase the speed of this or optimize this conversion?
static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine;
    int yy;
    for (yy = 0; yy < info->height; yy++) {
        uint8_t* line = (uint8_t*)pixels;
        frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);
        int xx;
        for (xx = 0; xx < info->width; xx++) {
            int out_offset = xx * 4;
            int in_offset = xx * 3;
            line[out_offset] = frameLine[in_offset];
            line[out_offset+1] = frameLine[in_offset+1];
            line[out_offset+2] = frameLine[in_offset+2];
            line[out_offset+3] = 0;
        }
        pixels = (char*)pixels + info->stride;
    }
}
Yes, texture (and buffer, and shader, and framebuffer) creation is slow.
That's why you should create the texture only once. After it is created, you can modify its data by calling glTexSubImage2D.
And to make uploading texture data faster, create two textures. While you display one, upload texture data from FFmpeg to the second one. When you display the second one, upload data to the first one, and repeat from the beginning.
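A minimal sketch of that two-texture ping-pong (tex, upload_idx and display_texture are illustrative names, and both textures are assumed to have been allocated once with glTexImage2D):

static GLuint tex[2];
static int upload_idx = 0;        /* texture currently being written */

void upload_and_swap(const void *rgba, int w, int h)
{
    /* Upload the new frame into the texture that is not being displayed... */
    glBindTexture(GL_TEXTURE_2D, tex[upload_idx]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);

    /* ...then display it and make the other one the next upload target. */
    display_texture(tex[upload_idx]);      /* hypothetical draw call */
    upload_idx = 1 - upload_idx;
}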
I think it will still not be very fast. You could try the jnigraphics library, which lets you access Bitmap object pixels from the NDK. After that, you just display the Bitmap on screen on the Java side.
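A sketch of the jnigraphics route, using the real NDK API from <android/bitmap.h> (fill_bitmap refers to the question's function; the wrapper name is illustrative):

#include <jni.h>
#include <android/bitmap.h>
#include <libavcodec/avcodec.h>

void render_frame_to_bitmap(JNIEnv *env, jobject bitmap, AVFrame *pFrame)
{
    AndroidBitmapInfo info;
    void *pixels;

    if (AndroidBitmap_getInfo(env, bitmap, &info) < 0)
        return;
    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) < 0)
        return;

    fill_bitmap(&info, pixels, pFrame);   /* copy the decoded RGB frame in */

    AndroidBitmap_unlockPixels(env, bitmap);
    /* The Java side then draws this Bitmap, e.g. on a SurfaceView or ImageView. */
}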
Yes, you can optimize this code:
static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine;
    int yy;
    for (yy = 0; yy < info->height; yy++)
    {
        uint8_t* line = (uint8_t*)pixels;
        frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);
        int xx;
        for (xx = 0; xx < info->width; xx++) {
            int out_offset = xx * 4;
            int in_offset = xx * 3;
            line[out_offset] = frameLine[in_offset];
            line[out_offset+1] = frameLine[in_offset+1];
            line[out_offset+2] = frameLine[in_offset+2];
            line[out_offset+3] = 0;
        }
        pixels = (char*)pixels + info->stride;
    }
}
to be something like:
static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
{
    uint8_t *frameLine = (uint8_t *)pFrame->data[0];
    int yy;
    for (yy = 0; yy < info->height; yy++)
    {
        uint8_t* line = (uint8_t*)pixels;
        int xx;
        int out_offset = 0;
        int in_offset = 0;
        for (xx = 0; xx < info->width; xx++) {
            line[out_offset] = frameLine[in_offset];
            line[out_offset+1] = frameLine[in_offset+1];
            line[out_offset+2] = frameLine[in_offset+2];
            line[out_offset+3] = 0;
            out_offset += 4;   /* advance incrementally instead of multiplying each time */
            in_offset += 3;
        }
        pixels = (char*)pixels + info->stride;
        frameLine += pFrame->linesize[0];
    }
}
That will save you some cycles.
A couple of minor additions will solve your problem: first convert your AVFrame to RGB with swscale, then apply it directly to your texture, i.e.:
AVPicture *pFrameConverted;
struct SwsContext *img_convert_ctx = NULL;

void init() {
    pFrameConverted = (AVPicture *)avcodec_alloc_frame();
    avpicture_alloc(pFrameConverted, AV_PIX_FMT_RGB565, videoWidth, videoHeight);
    img_convert_ctx = sws_getCachedContext(img_convert_ctx,
                                           videoWidth,
                                           videoHeight,
                                           pCodecCtx->pix_fmt,
                                           videoWidth,
                                           videoHeight,
                                           AV_PIX_FMT_RGB565,
                                           SWS_FAST_BILINEAR,
                                           NULL, NULL, NULL);
    ff_get_unscaled_swscale(img_convert_ctx); /* internal FFmpeg helper; normally not needed */
}

void render(AVFrame* pFrame) {
    sws_scale(img_convert_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize,
              0, videoHeight, pFrameConverted->data, pFrameConverted->linesize);
    glClear(GL_COLOR_BUFFER_BIT);
    /* type must match the RGB565 data produced above */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoWidth, videoHeight,
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pFrameConverted->data[0]);
    glDrawTexiOES(0, 0, 0, videoWidth, videoHeight);
}
Oh, maybe you can use jnigraphics, as in https://github.com/havlenapetr/FFMpeg/commits/debug.
But when you get YUV data after decoding a frame, you still have to convert it to RGB555, which is too slow. Using Android's MediaPlayer is a good idea.