I have an Android application that displays VGA (640x480) frames using OpenGL ES. The application reads each frame from a movie file and updates the texture accordingly.
My problem is that it takes almost 30 ms to draw each frame using OpenGL. A similar test using Canvas/drawBitmap took around 6 ms on the same device.
I'm following the same OpenGL calls that VLC Media Player is using, so I'm assuming that those are optimized for this purpose.
I just wanted to hear your thoughts and ideas about this.
Are you sure that the bitmaps are loaded as RGB_565? Try this:
// Ask the decoder for RGB_565 so no extra conversion is needed before upload
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
bm = BitmapFactory.decodeByteArray(temp, 0, temp.length, opt);
Let me know!
Which GL calls are you using?
Make sure that you create the texture only once (glTexImage2D) and, on subsequent frames, only update it with the new buffer (glTexSubImage2D). You can also disable GL features you don't need, such as the depth buffer, stencil, lighting, etc.
If none of this helps, check your OpenGL implementation and make sure it actually uses the hardware (GPU).
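For reference, a minimal sketch of that create-once / update-per-frame pattern with GLES 2.0 and GLUtils (the texture handle and bitmap variables are illustrative):

// One-time setup: allocate the texture storage once.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, firstFrameBitmap, 0);

// Every subsequent frame: upload new pixels into the existing storage.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, nextFrameBitmap);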
I'm currently trying to display a video frame using OpenGL.
So far it works, but I have a color problem.
I'm using this as the reference for my logic.
I have this code:
//YUV420SP data
uint8_t *decodedBuff = AMediaCodec_getOutputBuffer(d->codec, status, &bufSize);
buildTexture(decodedBuff, decodedBuff+w*h, decodedBuff+w*h, w, h);
renderFrame();
but it displays with the wrong colors.
decodedBuff = Y
decodedBuff + w*h = U
decodedBuff + w*h*5/4 = V
but this separation formula is for YUV420P.
Do you happen to know what the layout is for YUV420SP?
Your help is very much appreciated.
If you are doing it this way you are doing it wrong. You should never manually read raw data from video surfaces in fragment shaders.
Generate a SurfaceTexture, bind it to an OpenGL ES texture, and use the GL_OES_EGL_image_external extension (a samplerExternalOES sampler) to access the texture in your shader.
This will give you direct access to the video data in your shader, including automatic handling of the memory format and color conversion, in many cases for "free" because it's backed by GPU hardware acceleration.
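A minimal sketch of that setup, assuming the decoder renders into a Surface built from the SurfaceTexture (variable names are illustrative):

// Create a GL texture and bind it as an external (EGLImage-backed) texture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

// Wrap it in a SurfaceTexture and hand the resulting Surface to the decoder
// as its output surface (e.g. via MediaCodec/AMediaCodec configure).
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
Surface decoderOutput = new Surface(surfaceTexture);

// Per frame, after the decoder releases a buffer to the Surface:
surfaceTexture.updateTexImage();

// The fragment shader samples it with an external sampler; the driver handles
// the YUV memory layout and color conversion.
String fragmentShader =
    "#extension GL_OES_EGL_image_external : require\n" +
    "precision mediump float;\n" +
    "uniform samplerExternalOES uTexture;\n" +
    "varying vec2 vTexCoord;\n" +
    "void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }\n";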
I'm experimenting with the following Google sample: https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
The sample uses the new Play Services face detection APIs and draws a square on faces detected in the camera video stream.
I'm trying to figure out whether it is possible to save the frames that have detected faces in them. Following the code, the face detector's processor seems like a good place to perform the 'saving', but it only supplies the detection metadata and not the actual frame.
Your guidance will be appreciated.
You can get it in the following way:
Bitmap source = ((BitmapDrawable) yourImageView.getDrawable()).getBitmap();
// detect faces, then crop each one out of the source bitmap
Bitmap faceBitmap = Bitmap.createBitmap(source,
        (int) face.getPosition().x,
        (int) face.getPosition().y,
        (int) face.getWidth(),
        (int) face.getHeight());
Yes, it is possible. I answered a question about getting frames from a CameraSource here. The trickiest parts are accessing the CameraSource frames and converting the Frame datatype to a Bitmap. Once you have the frames as Bitmaps, you can pass them to your FaceGraphic class and save them in its draw() method, because draw() is called only when faces are detected.
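For the Frame-to-Bitmap step, a rough sketch, assuming the camera frames arrive as NV21 data (the CameraSource default) and that getGrayscaleImageData() hands back the full NV21 buffer for camera frames:

ByteBuffer buffer = frame.getGrayscaleImageData();
int width = frame.getMetadata().getWidth();
int height = frame.getMetadata().getHeight();

// Compress the NV21 data to JPEG, then decode that into a Bitmap.
YuvImage yuv = new YuvImage(buffer.array(), ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
byte[] jpeg = out.toByteArray();
Bitmap frameBitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);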
(This is due to the limitations of the server software I will be using; if I could change it, I would.)
I am receiving a sequence of 720x480 JPEG files (about 6kb in size), over a socket. I have benchmarked the network, and have found that I am capable of receiving those JPEGs smoothly, at 60FPS.
My current drawing operation is on a Nexus 10 display of 2560x1600, and here's my decoding method, once I have received the byte array from the socket:
public static void decode(byte[] tmp, Long time) {
    try {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferQualityOverSpeed = false;
        options.inDither = false;
        Bitmap bitmap = BitmapFactory.decodeByteArray(tmp, 0, tmp.length, options);
        Bitmap background = Bitmap.createScaledBitmap(
                bitmap, MainActivity.screenwidth, MainActivity.screenheight, false);
        background.setHasAlpha(false);
        Canvas canvas = MainActivity.surface.getHolder().lockCanvas();
        canvas.drawColor(Color.BLACK);
        canvas.drawBitmap(background, 0, 0, new Paint());
        MainActivity.surface.getHolder().unlockCanvasAndPost(canvas);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
As you can see, I am clearing the canvas of a SurfaceView and then drawing the Bitmap to it. My issue is that it is very, very slow.
Some tests based on adding System.currentTimeMillis() before and after the lock operation result in approximately a 30ms difference between getting the canvas, drawing the bitmap, and then pushing the canvas back. The displayed SurfaceView is very laggy, sometimes it jumps back and forth, and the frame rate is terrible.
Is there a preferred method for drawing like this? Again, I can't modify what I'm getting from the server, but I'd like the bitmaps to be displayed at 60 FPS when possible.
(I've tried setting the contents of an ImageView, and am receiving similar results). I have no other code in the SurfaceView that could impact this. I have set the holder to the RGBA_8888 format:
getHolder().setFormat(PixelFormat.RGBA_8888);
Is it possible to convert this stream of Bitmaps into a VideoView? Would that be faster?
Thanks.
Whenever you run into performance questions, use Traceview to figure out exactly where your problem lies. Using System.currentTimeMillis() is like attempting to trim a steak with a hammer.
The #1 thing here is to get the bitmap decoding off the main application thread. Do that in a background thread. Your main application thread should just be drawing the bitmaps, pulling them off of a queue populated by that background thread. Android has the main application thread set to render on a 60fps basis as of Android 4.1 (a.k.a. "Project Butter"), so as long as you can draw your Bitmap in a couple of milliseconds, and assuming that your network and decoding can keep your queue current, you should get 60fps results.
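As a rough illustration of that split (the class layout, field names, and queue size here are made up for the sketch), assuming the network code hands you raw JPEG byte arrays:

private final BlockingQueue<Bitmap> frameQueue = new ArrayBlockingQueue<Bitmap>(4);

// Background thread: decode off the main thread and queue the results.
void startDecoder(final BlockingQueue<byte[]> jpegQueue) {
    new Thread(new Runnable() {
        @Override public void run() {
            try {
                while (true) {
                    byte[] jpeg = jpegQueue.take();
                    Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
                    frameQueue.offer(frame); // drops new frames if the renderer falls behind
                }
            } catch (InterruptedException ignored) { }
        }
    }).start();
}

// Rendering side: only draw, never decode; let the canvas scale the bitmap.
void drawNextFrame(SurfaceHolder holder) {
    Bitmap frame = frameQueue.poll();
    if (frame == null) return;
    Canvas canvas = holder.lockCanvas();
    if (canvas == null) return;
    canvas.drawBitmap(frame, null, new Rect(0, 0, canvas.getWidth(), canvas.getHeight()), null);
    holder.unlockCanvasAndPost(canvas);
}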
Also, always use inBitmap with BitmapFactory.Options on Android 3.0+ when you have images of consistent size, as part of your problem will be GC stealing CPU time. Work off a pool of Bitmap objects that you rotate through, so that you generate less garbage and do not fragment your heap so much.
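A minimal sketch of the inBitmap reuse, assuming every frame decodes to the same dimensions and config (a requirement on pre-KitKat devices):

// Reuse one mutable Bitmap instead of allocating a new one per frame.
private final BitmapFactory.Options reuseOptions = new BitmapFactory.Options();
private Bitmap reusable;

Bitmap decodeFrame(byte[] jpeg) {
    reuseOptions.inMutable = true;
    reuseOptions.inBitmap = reusable;  // null on the first call is fine
    Bitmap decoded = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length, reuseOptions);
    reusable = decoded;                // the next frame decodes into the same memory
    return decoded;
}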
I suspect that you are better served letting Android scale the image for you in an ImageView (or just by drawing to a View canvas) than you are in having BitmapFactory scale the image, as Android can take advantage of hardware graphics acceleration for rendering, which BitmapFactory cannot. Again, Traceview is your friend here.
With regards to:
and have found that I am capable of receiving those JPEGs smoothly, at 60FPS.
that will only be true sometimes. Mobile devices tend to be mobile. Assuming that by "6kb" you mean 6KB (six kilobytes), you are assuming a ~3Mbps (three megabits per second) connection, and that's far from certain.
With regards to:
Is it possible to convert this stream of Bitmaps into a VideoView?
VideoView is a widget that plays videos, and you do not have a video.
Push come to shove, you might need to drop down to the NDK and do this in native code, though I would hope not.
I have written a test class that should simply paint the application icon onto the screen. So far, no luck. What am I doing wrong?
public class GLTester
{
    void test(final Context context)
    {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inScaled = false;
        bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.icon, options);
        setupGLES();
        createProgram();
        setupTexture();
        draw();
    }

    void draw()
    {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        glUseProgram(glProgramHandle);
    }
}
A couple of things.
I presume your squareVertices buffer is supposed to contain 4 vec3s, but your shader is set up for vec4s. Maybe this is OK, but it seems odd to me.
You also are not setting up any sort of perspective matrix with glFrustum or glOrtho, and are also not setting up any sort of viewing matrix with something like Matrix.setLookAtM. You should always try to keep the vertex pipeline in mind as well. Look at slide 2 from this lecture https://wiki.engr.illinois.edu/download/attachments/195761441/3-D+Transformational+Geometry.pptx?version=2&modificationDate=1328223370000
I think what is happening is that your squareVertices are going through this pipeline and coming out on the other side as pixel coordinates. So your image is probably a very tiny speck in the corner of your screen, since you are using vertices from -1.0 to 1.0.
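For example, a minimal projection/view setup with the android.opengl.Matrix helpers (the uniform name is illustrative) would look like this:

float[] projection = new float[16];
float[] view = new float[16];
float[] mvp = new float[16];

// Simple orthographic projection plus a basic camera looking down -Z.
Matrix.orthoM(projection, 0, -1f, 1f, -1f, 1f, 0.1f, 10f);
Matrix.setLookAtM(view, 0,
        0f, 0f, 1f,   // eye
        0f, 0f, 0f,   // center
        0f, 1f, 0f);  // up
Matrix.multiplyMM(mvp, 0, projection, 0, view, 0);

// Upload to the vertex shader's MVP uniform while the program is in use.
int mvpLocation = GLES20.glGetUniformLocation(glProgramHandle, "uMvpMatrix");
GLES20.glUniformMatrix4fv(mvpLocation, 1, false, mvp, 0);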
As a shameless sidenote, I posted some code on SourceForge that makes it possible to work on and load shaders from a file in your assets folder and not have to do it inside your .java files as strings. https://sourceforge.net/projects/androidopengles/
There's an example project in the files section that uses this shader helper.
I hope some part of this rambling was helpful. :)
It looks pretty good, but I see that, for one thing, you are calling glUniform inside setupTexture while the shader program is not currently bound. You should only call glUniform after calling glUseProgram.
I don't know if this is the problem, because I would guess it would probably default to 0 anyway, but I don't know for sure.
Other than that, you should get familiar with calling glGetError to check if there are any error conditions pending.
Also, when creating shaders, it's a good habit to check their compile status with glGetShaderiv(GL_COMPILE_STATUS), plus glGetShaderInfoLog if the compile fails, and similarly for programs with glGetProgramiv/glGetProgramInfoLog.
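A short sketch of those checks using the standard android.opengl.GLES20 calls:

int compileShader(int type, String source) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, source);
    GLES20.glCompileShader(shader);

    // Check the compile status and log the driver's message on failure.
    int[] compiled = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
    if (compiled[0] == 0) {
        Log.e("GLTester", "Shader compile failed: " + GLES20.glGetShaderInfoLog(shader));
        GLES20.glDeleteShader(shader);
        return 0;
    }
    return shader;
}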
I am trying to load textures as follows:
private Texture mTexture;
...
public Textures(final BaseGameActivity activity, final Engine engine) {
this.mTexture = new Texture(2048, 1024,
TextureOptions.BILINEAR_PREMULTIPLYALPHA);
this.mBackgroundTextureRegion = TextureRegionFactory.createFromAsset(
this.mTexture, activity, "img/back.png", 0, 0);
this.mSwingBackTextureRegion = TextureRegionFactory.createFromAsset(
this.mTexture, activity, "img/player.png", 836, 0);
...
I want to load more than 200 textures, but the current approach is far too long.
Is there a faster way to do this?
I am working with GLES1.
The easiest way to do it is with Texture Packer, found here
This allows you to add multiple image files into one easy-to-load spritesheet. The engine loads this spritesheet into a texture and creates a class that lets you easily reference each image from that spritesheet. Turn 200 TextureRegions into 1 TexturePack.
I'm using GLES2 and I'm not sure where the source files are for GLES1. Poke around the forums and you should be able to find out how to use them. There has been plenty of talk about it.
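For reference, loading a packed spritesheet with the GLES2 texture-packer extension looks roughly like the sketch below; the exact constructor arguments and class names vary between AndEngine branches, so treat every identifier here as an assumption to check against your version:

// Load the spritesheet produced by Texture Packer (XML + image under assets/gfx/).
TexturePackLoader loader = new TexturePackLoader(activity.getTextureManager(), "gfx/");
TexturePack texturePack = loader.loadFromAsset(activity.getAssets(), "spritesheet.xml");
texturePack.loadTexture();

// Each packed image is referenced by the ID generated alongside the spritesheet.
TexturePackTextureRegionLibrary library = texturePack.getTexturePackTextureRegionLibrary();
ITextureRegion playerRegion = library.get(SpritesheetIds.PLAYER_ID);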
There is a texture packer built into AndEngine which does this automagically. Try searching the AndEngine forum.
http://www.andengine.org/forums/