I'm having some trouble with video processing. I'm using a Surface with a texture to process video with MediaCodec and MediaMuxer (decode, crop, encode with lower quality).
It's the middle step, cropping, that gives me problems. Basically, what I want to do is take the smaller dimension of the video, then, based on that side, define offsets for the bigger side and crop it out. The result should then be properly scaled into a 640x640 square. I've searched for quite a long time, but all the information I've found says that OpenGL should scale the image itself. I'll admit that it does do some scaling, but the result looks very ugly: it seems quite heavily "compressed" vertically.
So, now the question itself. Can somebody provide an explanation, or maybe even a snippet of code, that performs the desired functions in OpenGL ES?
I already tried to adjust the viewport, though:
GLES20.glViewport(0, 0, 640, 640)
UPDATE
The idea that Isogen74 proposed more or less worked for me. It's still stretched quite significantly, but it's better than nothing.
Here you can find the updated code: OpenGL setup
Cropping - change the texture coordinates that you sample from when drawing the texture; e.g. if you want to crop the top and bottom 10% off the image, load from (0.0, 0.1) to (1.0, 0.9).
Scaling - the answer depends on how big your downscale to 640x640 is. OpenGL filtering isn't designed to handle large downscaling ratios directly - it's not an image-processing library - but assuming relatively small scaling ratios, just make sure you have turned on GL_LINEAR texture filtering rather than GL_NEAREST.
If you have a large downscale you may need to mipmap the texture first and use GL_LINEAR_MIPMAP_NEAREST or GL_LINEAR_MIPMAP_LINEAR as your minification filter, but be aware that this isn't going to look as good as a proper scaling algorithm you might get in image-processing software (e.g. bicubic, or something like that).
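For illustration, here's a hedged sketch of both parts, assuming a landscape frame cropped to a centered square; videoWidth/videoHeight and the texcoord layout are placeholders, not your exact pipeline:

// Crop the wider axis to a centered square via texture coordinates, then let
// GL_LINEAR filtering handle the downscale to the 640x640 viewport.
int videoWidth = 1280, videoHeight = 720;                      // placeholder frame size
float crop = (videoWidth - videoHeight) / (2.0f * videoWidth); // excess on each side (landscape case)
float[] texCoords = {
        crop,        0.0f,   // bottom-left
        1.0f - crop, 0.0f,   // bottom-right
        crop,        1.0f,   // top-left
        1.0f - crop, 1.0f,   // top-right
};
// Linear minification/magnification instead of GL_NEAREST.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

(If you're sampling the decoder's output surface, the texture target would be GL_TEXTURE_EXTERNAL_OES rather than GL_TEXTURE_2D.)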
Related
I've just started developing my first Android game using SurfaceView.
I scale my Bitmaps to support different screen sizes, but I don't know if it's better to do the scaling when loading the bitmaps or when drawing them to the Canvas using a Matrix.
I guess the first option would use more memory but perform better. I don't really know how things work here, so any suggestions from experts would be appreciated.
The best thing to do is to not scale the Bitmaps at all. You can just scale the SurfaceView's surface instead.
See the "hardware scaler exerciser" in Grafika for an example, and this post for an explanation. The basic idea is to call something like surfaceView.getHolder().setFixedSize(1280, 720) to set the surface's size to 1280x720, and then always render as if the display were 720p. The hardware will scale it to whatever the current screen dimensions are.
It's slightly more complicated than that -- you want to pick a size that matches the display aspect ratio so your stuff doesn't look stretched (which is something you have to deal with on Android anyway). Grafika does this to ensure that the square remains square.
This approach is much more efficient in both CPU and memory consumption than scaling individual bitmaps.
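A minimal sketch of that approach (not Grafika's exact code; R.id.surface_view and the 720p target are placeholders):

// Pick a fixed surface size that matches the display aspect ratio, so the
// hardware scaler doesn't stretch the content.
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
Point displaySize = new Point();
getWindowManager().getDefaultDisplay().getSize(displaySize);
int targetHeight = 720;                                        // render as if the display were 720p
int targetWidth = targetHeight * displaySize.x / displaySize.y;
surfaceView.getHolder().setFixedSize(targetWidth, targetHeight);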
In my Android game, I am using images of a fixed resolution, let's say 256x256. For different device screens, I render them by calculating an appropriate width and height for that device.
Assume that on a Galaxy Note 2 I calculated width=128 and height=128... similarly, width and height will vary for different devices.
This is how I create the texture:
....
imageTexture = new Texture(...);
....
and in render():
....
spriteBatch.draw(imageTexture,x,y,width,height);
....
So, every time I call the draw() method, does libgdx/OpenGL scale the image from 256x256 to 128x128? I think the answer is yes!
Is there any way to tell OpenGL/libgdx to calculate all the scaling only once?
I have no idea how images are rendered, loaded into memory, scaled, etc.
How does Sprite in libgdx work? I tried to understand the Sprite code, and it looks to me like it also takes the image width and height and then scales it every time, even though it has a setScale() method.
First rule of optimizing: get some numbers. Premature optimization is the root of many problems. That said, there are still some good rules of thumb to know.
The texture data will be uploaded to the GPU by libgdx/OpenGL when you invoke new Texture(...). When you actually draw the texture with spriteBatch.draw, OpenGL sends instructions to the GPU that tell the hardware to use your existing texture and fit it to the given bounds. The draw call only uploads coordinates (the corners of the box that defines the Sprite) and a pointer to the texture; the actual texture data is not uploaded again.
So, in practice your image is "scaled" on every frame. However, this is not that bad, as this is exactly what GPUs are designed to do very, very well. You only really need to worry about uploading so many textures that the GPU has trouble keeping track of them all; you do not need to worry much about scaling the textures beforehand.
The costs of scaling and transforming the four corners of the sprite are relatively trivial next to the cost of sending the data to the GPU and the cost of refreshing the screen, so they are probably not worth worrying about too much. The "batch" in SpriteBatch is all about "batching up" (gathering together) a lot of coordinates to send to the GPU at once, since, roughly, each call out to the GPU can be expensive. So it's always good to do as much work within a single batch's begin/end as you can.
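For example, a typical libgdx render loop puts all draws inside one begin()/end() pair (a sketch; the second texture and its coordinates are placeholders):

// Batch as many draws as possible between begin() and end() so the vertex
// data goes to the GPU in as few calls as possible.
spriteBatch.begin();
spriteBatch.draw(imageTexture, x, y, width, height);
spriteBatch.draw(otherTexture, x2, y2, width2, height2);   // placeholder second sprite
spriteBatch.end();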
Again, though, modern machines are stupidly fast, and you should be able to do whatever is easiest to get your app running first. Then once you have something working correctly, you can figure out which parts are actually slow and fix those. The parts that are "inefficient" but are not actually measurably impacting your application can be left alone.
I'm not fully satisfied with the quality of the mipmaps automatically generated with this line of code:
glTexParameterf(GL10.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL10.GL_TRUE);
I'm thinking of creating (with Gimp) various scaled versions of every texture used in my game. For example, for the texture of a ball I would have:
ball256.png 256x256 px
ball128.png 128x128 px
ball64.png 64x64 px
ball32.png 32x32 px
ball16.png 16x16 px
1. Do you think this is a good idea?
2. How can I create a single mipmapped texture from all these images?
This is not only a good idea, but it is a pretty standard practice (particularly in Direct3D)!
OpenGL implementations tend to use a simple box filter (uniformly weighted) when you generate mipmaps. You can use a nicer tent filter (bilinear) or even a cubic spline (bicubic) when downsampling textures in an image-editing suite. Personally, I would prefer a Lanczos filter, since this is going to be done offline.
You may already be aware of this, but Direct3D has a standard texture format known as DDS (Direct Draw Surface) which allows you to pre-compute and pre-compress every mipmap level at content creation time instead of load-time. This decreases compressed texture load time and more importantly allows for much higher quality sample reconstruction during downsampling into each LOD. OpenGL also has a standardized format that does the same thing: KTX. I brought up Direct3D because although OpenGL has a standardized format very few people seem to know about it; DDS is much more familiar to most people.
If you do not want to use the standardized format I mentioned above, you can always load your levels of detail one at a time manually by calling glTexImage2D (..., n, ...), where n is the LOD (0 is the highest-resolution level of detail). You would do this in a loop for each LOD when you create your texture; this is actually how things like gluBuild2DMipmaps (...) work.
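On Android that loop might look something like this sketch, assuming ES 2.0 bindings and that the images above are bundled as drawable resources (note that the chain has to continue down to 1x1 for the texture to be mipmap-complete, so you'd add ball8/ball4/ball2/ball1 as well):

// Upload each hand-scaled image as its own mipmap level (level 0 = 256x256).
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;                                     // keep the exact pixel sizes
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
int[] resIds = { R.drawable.ball256, R.drawable.ball128, R.drawable.ball64,
                 R.drawable.ball32, R.drawable.ball16 };   // placeholder resource ids
for (int level = 0; level < resIds.length; level++) {
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), resIds[level], opts);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, level, bitmap, 0);
    bitmap.recycle();
}
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
                       GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);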
My Android app needs to display a full-screen bitmap as a background, then on top of that display some dynamic 3D graphics using OpenGL ES (either 1.1 or 2.0 - not decided yet). The background image is a snapshot of a WebView component in the same app, so its dimensions already fit the screen perfectly.
I'm new to OpenGL, but I know that the regular way to display a bitmap involves scaling it onto a POT texture (glTexImage2D), configuring the matrices, creating some vertices for a rectangle and displaying them with glDrawArrays. That seems like a lot of extra work (with loss of quality when down-scaling the image to POT size) when all that's needed is just to draw a bitmap to the screen at 1:1 scale.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES. Is there any way to copy pixels to the screen buffer in GLES, circumventing the 3D pipeline? Or is there any way to draw OpenGL graphics on top of a "flat" background drawn by regular Android means? Or making a translucent GLView (there is RSTextureView for Renderscript-based display, but I couldn't find an equivalent for GL)?
but I know that the regular way to display a bitmap involves scaling it onto a POT texture (glTexImage2D)
Then your knowledge is outdated. Modern OpenGL (version 2 and later) is fine with arbitrary image dimensions for its textures.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES.
Well, modern "desktop" OpenGL, namely version 3 core and later don't have glDrawPixels either.
However appealing this function is/was, it offers only poor performance and has so many caveats that it's rarely used whenever its use can be avoided.
Just upload your unscaled image into a texture, disable mipmapping and draw it onto a fullscreen quad.
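A hedged sketch of the texture side of that (ES 2.0 bindings; backgroundBitmap is a placeholder, and the quad is just two triangles covering clip space drawn with a pass-through shader):

// Upload the screen-sized bitmap 1:1 and sample it with plain GL_LINEAR.
// Core ES 2.0 allows NPOT textures as long as wrap is CLAMP_TO_EDGE and no
// mipmaps are used, which is exactly this setup.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, backgroundBitmap, 0);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

// Fullscreen quad in normalized device coordinates, drawn as a triangle strip:
float[] quad = {
        // x,  y,    u,  v
        -1f, -1f,   0f, 1f,
         1f, -1f,   1f, 1f,
        -1f,  1f,   0f, 0f,
         1f,  1f,   1f, 0f,
};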
I'm writing a simple 2D game for Android with a 300x200 play area whose coordinates run from (0,0) to (299,199). I want this area to fill the screen as well as possible while maintaining its aspect ratio, e.g. if the GL view fills the full 800x480 of a device I could scale the area by 2.4x to 720x480, leaving 40 pixels of space on either side.
I don't expect many devices to scale exactly in both dimensions, so the code has to cope with a gap either horizontally or vertically.
So the question is: how do I do this? My play area is 2D, so I can use an orthographic projection; I just don't understand what values I need to plug in to set this up. I also suspect that, because ES 2.0 relies heavily on shaders, I might need to pass some kind of scaling matrix to a vertex shader to ensure objects are rendered at the right size.
Does anyone know of a good tutorial which perhaps talks in terms that make sense for my needs? Most tutorials I've seen seem content to dump a cube or square into the middle of the screen rather than rendering an area of exact dimensions.
This problem should be easy to solve using the old and familiar OpenGL functions like glViewport and glOrtho. GLM offers equivalents for environments like OpenGL ES; have a look:
http://glm.g-truc.net/
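In Android/ES 2.0 terms (not GLM), a hedged sketch of the letterboxed projection could look like this, using android.opengl.Matrix; surfaceWidth, surfaceHeight and uProjectionLocation are placeholders:

// Fit the 300x200 play area into the surface while keeping its aspect ratio,
// padding the world bounds on whichever axis has spare room.
float targetAspect = 300f / 200f;
float screenAspect = (float) surfaceWidth / surfaceHeight;     // e.g. 800/480
float[] projection = new float[16];
if (screenAspect > targetAspect) {
    // Screen is wider: pad left/right (800x480 gives 40 px of padding each side).
    float halfExtra = (200f * screenAspect - 300f) / 2f;
    Matrix.orthoM(projection, 0, -halfExtra, 300f + halfExtra, 0f, 200f, -1f, 1f);
} else {
    // Screen is taller: pad top/bottom.
    float halfExtra = (300f / screenAspect - 200f) / 2f;
    Matrix.orthoM(projection, 0, 0f, 300f, -halfExtra, 200f + halfExtra, -1f, 1f);
}
// Vertex shader: gl_Position = uProjection * aPosition;
GLES20.glUniformMatrix4fv(uProjectionLocation, 1, false, projection, 0);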