I'm porting a mobile game to Android and want to use compressed textures in OpenGL the same way I did on iOS with PVR textures.
I've managed to convert my textures from PNG to DXT and run the game on a Galaxy Tab 10.1 with the Nvidia Tegra 2 chipset.
However, there is no smooth alpha in my DXT5-formatted textures. They look like DXT1 textures with 1-bit alpha.
I've read and run the examples from here:
http://software.intel.com/en-us/articles/android-texture-compression
I've tried this very good library:
https://libregamewiki.org/Simple_OpenGL_Image_Library
But I got the same result: no alpha channel.
Please, help me with this problem. I'm really stuck.
Thanks.
Details:
I've used the nvcompress tool version 2.1.0 with the flags "-nomips -bc3 -alpha" (and of course a lot of variations, but with no success).
I'm using the OpenGL ES 1.x library.
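For completeness, this is roughly how the check for S3TC support could look before picking a DXT format (a minimal sketch, not my exact loader code; only the extension name is the standard one):
// Sketch: make sure the driver exposes S3TC before uploading DXT data.
// strstr() requires <string.h> / <cstring>.
const char* extensions = (const char*) glGetString(GL_EXTENSIONS);
bool hasS3TC = extensions != NULL &&
    strstr(extensions, "GL_EXT_texture_compression_s3tc") != NULL;
if (!hasS3TC) {
    // Fall back to uncompressed RGBA (or ETC1 without alpha) here.
}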
My openGL code:
int width = //...
int height = //...
const unsigned char* textureData = //...
int numMipMaps = //...
int textureFormat = //...
GLuint texture = 0;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
int blockSize;
GLuint glFormat;
switch (textureFormat) {
case S3TC_DXT1:
glFormat = GL_COMPRESSED_RGBA_S3TC_DXT1_EXT;
blockSize = 8;
break;
case S3TC_DXT3:
glFormat = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT;
blockSize = 16;
break;
case S3TC_DXT5:
glFormat = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT;
blockSize = 16;
break;
case ATC:
glFormat = GL_ATC_RGBA_EXPLICIT_ALPHA_AMD;
blockSize = 16;
break;
default:
//Error...
break;
}
int offset = 0;
for(int i = 0; i < numMipMaps; i++)
{
int size = ((width + 3) / 4) * ((height + 3) / 4) * blockSize;
glCompressedTexImage2D(GL_TEXTURE_2D, i, glFormat, width, height, 0, size, textureData + offset);
offset += size;
// Scale down for the next mip level (clamped so the dimensions never reach 0).
if (width > 1) width /= 2;
if (height > 1) height /= 2;
}
Finally found the problem. The OpenGL state of my game was configured to work with a premultiplied alpha channel.
I've added a special 'premultiply' step to my build system and got the proper result.
Blend-function settings:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
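The premultiply step itself is simple; a minimal sketch of such a pass over raw RGBA8888 data (illustrative only, my actual build tool is equivalent but not this exact code):
// Sketch: convert straight-alpha RGBA8888 pixels to premultiplied alpha
// before they go through nvcompress. 'pixels' is width*height*4 bytes in
// R,G,B,A byte order (assumption).
void premultiplyAlpha(unsigned char* pixels, int width, int height) {
    for (int i = 0; i < width * height; ++i) {
        unsigned char* p = pixels + i * 4;
        unsigned int a = p[3];
        p[0] = (unsigned char) ((p[0] * a + 127) / 255);
        p[1] = (unsigned char) ((p[1] * a + 127) / 255);
        p[2] = (unsigned char) ((p[2] * a + 127) / 255);
    }
}
With the color channels premultiplied, the GL_ONE / GL_ONE_MINUS_SRC_ALPHA blend function above produces the expected smooth edges.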
I have C++ code implementing media-player behavior on Android.
I'm using the media player to play an mp4 file; however, I need to draw text above this.
For testing purposes, I've already tried to do it like the drawText() function from BootAnimation.cpp, however without success.
I'm guessing there are some OpenGL calls I'm missing. Is there some call to be added inside drawText() for it to draw above the mp4?
void BootAnimation::drawText(const char* str, const Font& font, bool bold, int* x, int* y) {
glEnable(GL_BLEND); // Allow us to draw on top of the animation
glBindTexture(GL_TEXTURE_2D, font.texture.name);
const int len = strlen(str);
const int strWidth = font.char_width * len;
if (*x == TEXT_CENTER_VALUE) {
*x = (mWidth - strWidth) / 2;
} else if (*x < 0) {
*x = mWidth + *x - strWidth;
}
if (*y == TEXT_CENTER_VALUE) {
*y = (mHeight - font.char_height) / 2;
} else if (*y < 0) {
*y = mHeight + *y - font.char_height;
}
int cropRect[4] = { 0, 0, font.char_width, -font.char_height };
for (int i = 0; i < len; i++) {
char c = str[i];
if (c < FONT_BEGIN_CHAR || c > FONT_END_CHAR) {
c = '?';
}
// Crop the texture to only the pixels in the current glyph
const int charPos = (c - FONT_BEGIN_CHAR); // Position in the list of valid characters
const int row = charPos / FONT_NUM_COLS;
const int col = charPos % FONT_NUM_COLS;
cropRect[0] = col * font.char_width; // Left of column
cropRect[1] = row * font.char_height * 2; // Top of row
// Move down to the bottom of the regular (one char_height) or bold (two char_height) line
cropRect[1] += bold ? 2 * font.char_height : font.char_height;
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, cropRect);
glDrawTexiOES(*x, *y, 0, font.char_width, font.char_height);
*x += font.char_width;
}
glDisable(GL_BLEND); // Return to the animation's default behaviour
glBindTexture(GL_TEXTURE_2D, 0);
}
PS: this is not an Android app, so it won't be done in the app layer.
BootAnimation.cpp's use of OpenGL ES has changed a bit and it now uses a more modern way of dealing with graphics.
That being said, I found that my case needs some abstraction, as done here. Basic OpenGL manipulation, i.e. common vertex and fragment shaders (position and color, really nothing beyond the fundamentals) plus VBO/VAO for data buffering and glDrawArrays, is enough for my usage.
I still need to understand and apply texturing and work out the best way (in my scenario) to handle text, but I think that is all.
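To make the idea concrete, here is a minimal sketch of that kind of setup (GLES 2 style: position/color shaders, one VBO, glDrawArrays; the shader sources, attribute names and the setupAndDrawQuad() helper are my own illustration, not code taken from BootAnimation.cpp):
// Sketch: minimal GLES 2 pipeline for a colored quad (position + color only).
static const char* kVertexSrc =
    "attribute vec4 aPosition;\n"
    "attribute vec4 aColor;\n"
    "varying vec4 vColor;\n"
    "void main() { vColor = aColor; gl_Position = aPosition; }\n";
static const char* kFragmentSrc =
    "precision mediump float;\n"
    "varying vec4 vColor;\n"
    "void main() { gl_FragColor = vColor; }\n";

static GLuint compileShader(GLenum type, const char* src) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);
    return shader; // error checking omitted for brevity
}

void setupAndDrawQuad() { // one-time setup and a single draw, merged for brevity
    GLuint program = glCreateProgram();
    glAttachShader(program, compileShader(GL_VERTEX_SHADER, kVertexSrc));
    glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, kFragmentSrc));
    glLinkProgram(program);
    glUseProgram(program);

    // Interleaved x,y plus r,g,b,a for a triangle strip covering the quad.
    const GLfloat verts[] = {
        -0.5f, -0.5f,  1, 1, 1, 1,
         0.5f, -0.5f,  1, 1, 1, 1,
        -0.5f,  0.5f,  1, 1, 1, 1,
         0.5f,  0.5f,  1, 1, 1, 1,
    };
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    GLint pos = glGetAttribLocation(program, "aPosition");
    GLint col = glGetAttribLocation(program, "aColor");
    glEnableVertexAttribArray(pos);
    glEnableVertexAttribArray(col);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (void*) 0);
    glVertexAttribPointer(col, 4, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat),
                          (void*) (2 * sizeof(GLfloat)));

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
VAOs are a GLES 3 / OES extension feature, so this sketch sticks to a plain VBO.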
I have a method for loading texture into OpenGL:
bool AndroidGraphicsManager::setTexture(
UINT& id, UCHAR* image, UINT width, UINT height,
bool wrapURepeat, bool wrapVRepeat, bool useMipmaps)
{
glGenTextures(1, &id);
glBindTexture(GL_TEXTURE_2D, id);
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA,
width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, (GLvoid*) image);
int minFilter = GL_LINEAR;
if (useMipmaps) {
glGenerateMipmap(GL_TEXTURE_2D);
minFilter = GL_NEAREST_MIPMAP_NEAREST;
}
glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_MIN_FILTER, minFilter);
glTexParameteri(
GL_TEXTURE_2D,
GL_TEXTURE_MAG_FILTER, GL_NEAREST);
int wrap_u = wrapURepeat ?
GL_REPEAT : GL_CLAMP_TO_EDGE;
int wrap_v = wrapVRepeat ?
GL_REPEAT : GL_CLAMP_TO_EDGE;
glTexParameteri(
GL_TEXTURE_2D,
GL_TEXTURE_WRAP_S, wrap_u);
glTexParameteri(
GL_TEXTURE_2D,
GL_TEXTURE_WRAP_T, wrap_v);
glBindTexture(GL_TEXTURE_2D, 0);
return !checkGLError("Loading texture.");
}
This works fine. I load the texture using libpng, which gives me an array of unsigned chars. I then pass this array to the method above. The pictures are 32-bit, so I assume each UCHAR in the array contains a single color component and four UCHARs make up one pixel. I wanted to try using 16-bit textures, so I changed GL_UNSIGNED_BYTE to GL_UNSIGNED_SHORT_4_4_4_4. But apparently this isn't enough, because it gives me this result:
What else do I need to change to be able to properly display 16 bit textures?
[EDIT] As #DatenWolf suggested I tried using this code:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA4,
width, height, 0, GL_RGBA,
GL_UNSIGNED_SHORT_4_4_4_4, (GLvoid*) image);
I used it in the Linux version of my engine, but the result was exactly the same as in the previous Android screenshot. So is it safe to assume that the problem lies with libpng and the way it produces my UCHAR array?
[EDIT2] So I finally managed to compress a 32-bit texture to 16 bits.
All I needed was a simple iteration over the 32-bit texture data and a few bit-shifting operations to pack 32 bits into 16.
void* rgba8888ToRgba4444(void* in, int size) {
int pixelCount = size / 4;
ULONG* input = static_cast<ULONG*>(in);
USHORT* output = new USHORT[pixelCount];
for (int i = 0; i < pixelCount; i++) {
ULONG pixel = input[i];
// Unpack the source data as 8 bit values
UINT r = pixel & 0xff;
UINT g = (pixel >> 8) & 0xff;
UINT b = (pixel >> 16) & 0xff;
UINT a = (pixel >> 24) & 0xff;
// Convert to 4-bit values
r >>= 4; g >>= 4; b >>= 4; a >>= 4;
output[i] = r << 12 | g << 8 | b << 4 | a;
}
return static_cast<void*>(output);
}
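Hooked up to the loader shown earlier, the upload then looks roughly like this (a sketch; on GLES the internal format stays GL_RGBA and the type selects the 4444 packing, whereas on desktop GL you would pass GL_RGBA4, as @DatenWolf suggested):
// Sketch: upload the repacked 16-bit data (names and sizes are illustrative).
void* rgba4444 = rgba8888ToRgba4444(image, width * height * 4);
glTexImage2D(
    GL_TEXTURE_2D, 0, GL_RGBA,          // GL_RGBA4 on desktop GL
    width, height, 0, GL_RGBA,
    GL_UNSIGNED_SHORT_4_4_4_4, rgba4444);
delete[] static_cast<USHORT*>(rgba4444); // the copy is no longer needed after the upload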
I thought that on a mobile device this would yield a performance increase, but unfortunately I saw no gain in performance on a Galaxy S III.
The best thing would probably be to use an image library like DevIL for this kind of thing.
But if you want to convert the image data you get from libpng, you can do something similar to the code below.
Keep in mind that you trade size for speed when doing this.
struct Color {
unsigned char r:4;
unsigned char g:4;
unsigned char b:4;
unsigned char a:4;
};
float factor = 15.0f / 255.0f; // scale 0..255 down to 0..15
Color colors[imgWidth*imgHeight];
for(int i = 0; i < imgWidth*imgHeight; ++i)
{
colors[i].r = image[i*4]*factor;
colors[i].g = image[i*4+1]*factor;
colors[i].b = image[i*4+2]*factor;
colors[i].a = image[i*4+3]*factor;
}
But apparently this isn't enough
The token you've changed tells OpenGL the format of the data in the array you pass to it, so you have to adjust that as well. However, OpenGL may convert the data into any internal format it desires, since you didn't force it into a particular one. That's what the internal format parameter is for (and it works independently of the external data type). So if you want 16 bits of resolution internally, you must change the internal format parameter to GL_RGBA4. The data may remain in the 8-bits-per-channel format. So in your case:
glTexImage2D(
GL_TEXTURE_2D, 0, GL_RGBA4,
width, height, 0, GL_RGBA,
…, (GLvoid*) image);
The type parameter must match the layout of your data. If you've originally got 8 bits per channel, then GL_UNSIGNED_BYTE. But if it's prepackaged RGBA as 4-bit nibbles in unsigned shorts, then GL_UNSIGNED_SHORT_4_4_4_4.
I'm trying to convert a video frame, using FFmpeg, into an OpenGL ES texture in JNI, but all I get is a black texture. I have checked OpenGL with glGetError(), but there is no error.
Here is my code:
void* pixels;
int err;
int i;
int frameFinished = 0;
AVPacket packet;
static struct SwsContext *img_convert_ctx;
static struct SwsContext *scale_context = NULL;
int64_t seek_target;
int target_width = 320;
int target_height = 240;
GLenum error = GL_NO_ERROR;
sws_freeContext(img_convert_ctx);
i = 0;
while((i==0) && (av_read_frame(pFormatCtx, &packet)>=0)) {
if(packet.stream_index==videoStream) {
avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
if(frameFinished) {
LOGI("packet pts %llu", packet.pts);
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt,
target_width, target_height, PIX_FMT_RGB24, SWS_BICUBIC,
NULL, NULL, NULL);
if(img_convert_ctx == NULL) {
LOGE("could not initialize conversion context\n");
return;
}
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
LOGI("sws_scale");
videoTextures = new Texture*[1];
videoTextures[0]->mWidth = 256; //(unsigned)pCodecCtx->width;
videoTextures[0]->mHeight = 256; //(unsigned)pCodecCtx->height;
videoTextures[0]->mData = pFrameRGB->data[0];
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &(videoTextures[0]->mTextureID));
glBindTexture(GL_TEXTURE_2D, videoTextures[0]->mTextureID);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
if(0 == got_texture)
{
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, videoTextures[0]->mWidth, videoTextures[0]->mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
}else
{
glTexSubImage2D(GL_TEXTURE_2D, 0, 0,0, videoTextures[0]->mWidth, videoTextures[0]->mHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)videoTextures[0]->mData);
}
i = 1;
error = glGetError();
if( error != GL_NO_ERROR ) {
LOGE("couldn't create texture!!");
switch (error) {
case GL_INVALID_ENUM:
LOGE("GL Error: Enum argument is out of range");
break;
case GL_INVALID_VALUE:
LOGE("GL Error: Numeric value is out of range");
break;
case GL_INVALID_OPERATION:
LOGE("GL Error: Operation illegal in current state");
break;
case GL_OUT_OF_MEMORY:
LOGE("GL Error: Not enough memory to execute command");
break;
default:
break;
}
}
}
}
av_free_packet(&packet);
}
I have succeeded in changing pFrameRGB to a java bitmap, but I just want to change it to a texture in the c code.
Edit 1: I printed the texture ID, and it is 0. Can a texture ID be zero? I changed my code,
but it is always zero.
Edit 2:
The texture displays now, but it is a mess.
I'm used to desktop GL rather than GLES, but: 320x240 is not a valid texture size there; you need powers of two, e.g. 512x256. Otherwise you need the texture_rectangle extension, whose texture coordinates do not run from 0 to 1 but from 0 to w/h. As for uploading texture data, glTexImage2D(...) needs to be used the first time (even with a NULL data pointer); after that glTexSubImage2D is enough. I think the sizing etc. is set up by the first call, and the second just sends the pixels.
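A sketch of that pattern (sizes, format and the frameRGB pointer are illustrative): allocate a 512x256 texture once with no data, stream each 320x240 frame into its corner, and scale the texture coordinates accordingly.
// One-time setup: allocate a power-of-two texture, no data yet.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 256, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);
// Per frame: upload only the 320x240 video region into the corner.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 320, 240,
                GL_RGB, GL_UNSIGNED_BYTE, frameRGB);
// Texture coordinates then run 0..320/512 horizontally and 0..240/256 vertically.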
Regarding FFmpeg usage: perhaps it's a version issue, but img_convert_ctx should be created with sws_getContext (or sws_getCachedContext) only once, not per frame. If CPU usage is an issue, use SWS_FAST_BILINEAR instead of SWS_BICUBIC. I also assume pFrameRGB has been correctly allocated with avcodec_alloc_frame(). If you are going to use GL_RGBA you should use PIX_FMT_RGBA; PIX_FMT_RGB24 would be for piping into a GL_RGB texture. Finally, you lack a packet queue, so you cannot read ahead to keep the display in sync and avoid falling behind.
I've read some comments about unpack alignment; I didn't need it (and given the success in this area, I doubt it is required) to implement an FFmpeg to OpenGL/OpenAL media library (http://code.google.com/p/openmedialibrary); the audio bits have also been nicely extracted into an FFmpeg to OpenAL loader (http://code.google.com/p/openalextensions). It has some nice features, and currently I'm trying texture compression to see if it can perform even better. Consider those tutorials, or even ready-to-use GPL code.
Hope this sheds some light on the obscure (for lack of material) art of FFmpeg to OpenGL/AL integration.
Try appending 16 zero bytes to each packet before passing it to the decoder.
Some comments from avcodec.h:
/*
* @warning The input buffer must be FF_INPUT_BUFFER_PADDING_SIZE larger than
* the actual read bytes because some optimized bitstream readers read 32 or 64
* bits at once and could read over the end.
* @warning The end of the input buffer buf should be set to 0 to ensure that
* no overreading happens for damaged MPEG streams.
*/
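In practice that means the buffer handed to the decoder is allocated a bit larger and zeroed at the end; a sketch (FF_INPUT_BUFFER_PADDING_SIZE comes from avcodec.h, everything else is illustrative):
/* Sketch: pad the compressed data before it reaches avcodec_decode_video2(). */
uint8_t *padded = (uint8_t *) av_malloc(packet.size + FF_INPUT_BUFFER_PADDING_SIZE);
memcpy(padded, packet.data, packet.size);
memset(padded + packet.size, 0, FF_INPUT_BUFFER_PADDING_SIZE);
/* ...decode using 'padded' while still reporting the original packet.size... */
av_free(padded);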
I am trying to render video via the NDK, to add some features that just aren't supported in the SDK. I am using FFmpeg to decode the video, can compile it via the NDK, and used this as a starting point. I have modified that example so that instead of using glDrawTexiOES to draw the texture, I set up some vertices and render the texture on top of them (the OpenGL ES way of rendering a quad).
Below is what I am doing to render, but creating the texture with glTexImage2D is slow. I want to know if there is any way to speed this up, or give the appearance of speeding it up, such as preparing some textures in the background and rendering pre-prepared textures. Or is there any other way to draw the video frames to the screen more quickly on Android? Currently I can only get about 12 fps.
glClear(GL_COLOR_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, textureConverted);
//this is slow
glTexImage2D(GL_TEXTURE_2D, /* target */
0, /* level */
GL_RGBA, /* internal format */
textureWidth, /* width */
textureHeight, /* height */
0, /* border */
GL_RGBA, /* format */
GL_UNSIGNED_BYTE,/* type */
pFrameConverted->data[0]);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
EDIT
I changed my code to call glTexImage2D only once and to modify the texture with glTexSubImage2D; it didn't make much of an improvement to the framerate.
I then modified the code to fill a native Bitmap object in the NDK. With this approach I have a background thread that processes the next frames and populates the bitmap object on the native side. I think this has potential, but I need to speed up the conversion of the AVFrame object from FFmpeg into a native bitmap. Below is what I am currently using to convert, a brute-force approach. Is there any way to speed up or otherwise optimize this conversion?
static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
{
uint8_t *frameLine;
int yy;
for (yy = 0; yy < info->height; yy++) {
uint8_t* line = (uint8_t*)pixels;
frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);
int xx;
for (xx = 0; xx < info->width; xx++) {
int out_offset = xx * 4;
int in_offset = xx * 3;
line[out_offset] = frameLine[in_offset];
line[out_offset+1] = frameLine[in_offset+1];
line[out_offset+2] = frameLine[in_offset+2];
line[out_offset+3] = 0;
}
pixels = (char*)pixels + info->stride;
}
}
Yes, texture (and buffer, and shader, and framebuffer) creation is slow.
That's why you should create the texture only once. After it is created, you can modify its data by calling glTexSubImage2D.
And to make uploading texture data faster, create two textures. While you use one for display, upload texture data from FFmpeg into the second one. When you display the second one, upload data into the first, and repeat from the beginning.
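A sketch of that double-buffering idea (the texture ids, sizes and the two helpers are illustrative):
// Sketch: ping-pong between two textures so uploads overlap with display.
GLuint tex[2];
int displayed = 0; // index of the texture currently being drawn

void initVideoTextures(int w, int h) {
    glGenTextures(2, tex);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate once, no data
    }
}

void onNewFrame(const void* rgba, int w, int h) {
    int uploading = 1 - displayed;          // upload into the hidden texture
    glBindTexture(GL_TEXTURE_2D, tex[uploading]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    displayed = uploading;                  // draw the fresh one next frame
}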
I think it will still not be very fast. You could try the jnigraphics library, which lets you access a Bitmap object's pixels from the NDK. After that, you just display this Bitmap on screen on the Java side.
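If you try the jnigraphics route, the locking pattern looks roughly like this (a sketch; error handling omitted, fill_bitmap is the copy loop from the question above, and you link with -ljnigraphics):
#include <android/bitmap.h>
// Sketch: write decoded pixels straight into a Java Bitmap from native code.
void blitFrameToBitmap(JNIEnv* env, jobject bitmap, AVFrame* frame) {
    AndroidBitmapInfo info;
    void* pixels;
    AndroidBitmap_getInfo(env, bitmap, &info);
    AndroidBitmap_lockPixels(env, bitmap, &pixels);
    fill_bitmap(&info, pixels, frame);      // reuse the conversion loop above
    AndroidBitmap_unlockPixels(env, bitmap);
}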
Yes, you can optimize this code:
static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
{
uint8_t *frameLine;
int yy;
for (yy = 0; yy < info->height; yy++)
{
uint8_t* line = (uint8_t*)pixels;
frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);
int xx;
for (xx = 0; xx < info->width; xx++) {
int out_offset = xx * 4;
int in_offset = xx * 3;
line[out_offset] = frameLine[in_offset];
line[out_offset+1] = frameLine[in_offset+1];
line[out_offset+2] = frameLine[in_offset+2];
line[out_offset+3] = 0;
}
pixels = (char*)pixels + info->stride;
}
}
to be something like:
static void fill_bitmap(AndroidBitmapInfo* info, void *pixels, AVFrame *pFrame)
{
uint8_t *frameLine = (uint8_t *)pFrame->data[0];
int yy;
for (yy = 0; yy < info->height; yy++)
{
uint8_t* line = (uint8_t*)pixels;
int xx;
int out_offset = 0;
int in_offset = 0;
for (xx = 0; xx < info->width; xx++) {
line[out_offset] = frameLine[in_offset];
line[out_offset+1] = frameLine[in_offset+1];
line[out_offset+2] = frameLine[in_offset+2];
line[out_offset+3] = 0;
// Advance to the next pixel: 4 bytes out (RGBA), 3 bytes in (RGB).
out_offset += 4;
in_offset += 3;
}
pixels = (char*)pixels + info->stride;
frameLine += pFrame->linesize[0];
}
}
That will save you some cycles.
A couple of minor additions will solve your problem: first convert your AVFrame to RGB with swscale, then apply it directly to your texture, i.e.:
AVPicture *pFrameConverted;
struct SwsContext *img_convert_ctx = NULL;
void init(){
pFrameConverted=(AVPicture *)avcodec_alloc_frame();
avpicture_alloc(pFrameConverted, AV_PIX_FMT_RGB565, videoWidth, videoHeight);
img_convert_ctx = sws_getCachedContext(img_convert_ctx,
videoWidth,
videoHeight,
pCodecCtx->pix_fmt,
videoWidth,
videoHeight,
AV_PIX_FMT_RGB565,
SWS_FAST_BILINEAR,
NULL, NULL, NULL );
ff_get_unscaled_swscale(img_convert_ctx);
}
void render(AVFrame* pFrame){
sws_scale(img_convert_ctx, (uint8_t const * const *)pFrame->data, pFrame->linesize, 0, pFrame->height, pFrameConverted->data, pFrameConverted->linesize);
glClear(GL_COLOR_BUFFER_BIT);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, videoWidth, videoHeight, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pFrameConverted->data[0]); // RGB565 data, so the type must be GL_UNSIGNED_SHORT_5_6_5
glDrawTexiOES(0, 0, 0, videoWidth, videoHeight);
}
Oh, maybe you can use jnigraphics, as in https://github.com/havlenapetr/FFMpeg/commits/debug.
But if you get YUV data after decoding a frame, you have to convert it to RGB565, and that is too slow. Using Android's MediaPlayer is a better idea.
I get a problem when loading data from an AVFrame into OpenGL:
int target_width = 320;
int target_height = 240;
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
pCodecCtx->pix_fmt,
target_width, target_height, PIX_FMT_RGBA, SWS_FAST_BILINEAR,
NULL, NULL, NULL);
if(img_convert_ctx == NULL) {
LOGE("could not initialize conversion context\n");
return;
}
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
//free(data);
int line=target_width*target_height*4;
data=(char*)malloc(line);
if (!data)
LOGE("create data frame fail");
LOGE("successful data");
filldata(data,pFrameRGB,target_width,target_height);
with function filldata as:
static void filldata(char *data, AVFrame *pFrame, int w, int h)
{
uint8_t *frameLine;
int yy;
int i=0;
for (yy = 0; yy < h; yy++) {
frameLine = (uint8_t *)pFrame->data[0] + (yy * pFrame->linesize[0]);
int xx;
for (xx = 0; xx < w; xx++) {
int in_offset = xx * 4;
data[i++] = frameLine[in_offset];
data[i++] = frameLine[in_offset+1];
data[i++] = frameLine[in_offset+2];
data[i++] = frameLine[in_offset+3];
}
}
}
After that I pass data to:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, *wi, *he, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)data);
but it cannot show the texture; maybe the data above and the data glTexImage2D expects are laid out differently.
Please help me figure out what format glTexImage2D expects so I can arrange my data to show the texture, or point me to some sample code.
It's not clear to me what is going wrong, but you can try using richq's glbuffer, which I am using in my video player app. It worked for me and also gives a better frame rate.
Give it a try and good luck with it.
Word has it that you should use power-of-two dimensions when specifying width and height to sws_getContext(). In case that doesn't solve your problem, the reference pointed out by Android007 is a good one, but you might also want to take a look at https://code.google.com/p/android-native-egl-example/.