Rotating all OpenGL output - android

External requirements --- you have to hate them...
I have an OpenGL ES game, which uses EGL and OpenGL ES to draw on the screen. I don't have source to this; it's supplied as a binary blob. I'm implementing the interface layer that mediates between the game's calls to EGL and OpenGL and the platform's implementation.
It works fine. But I now have the unexpected external requirement that I need to be able to rotate the entire game's output 90 degrees.
Can anyone suggest any good (easy, fast) ways to do this? Off the top of my head, I can think of:
insert the appropriate transformation into the game's projection matrix. This seems to me to be the fastest solution; but I don't think I have enough knowledge of the game's manipulation of the projection matrix to do this reliably. Plus it'll confuse the game if it uses any OpenGL calls to access the screen which don't go through the projection matrix. (glReadPixels(), for example.)
give the game a rendering context to an off-screen buffer; it renders there, and then when the game calls eglSwapBuffers() I copy the result onto the screen. Render-to-texture would help here. Problems: this will affect performance as I'm effectively doing two drawing passes instead of one; and render-to-texture isn't standardised in OpenGL ES. (My target platform, Android, doesn't even reliably support shared contexts.)
render into the colour buffer, then use glReadPixels() to copy the data out and do a software rotate onto the screen. Problems: dead slow, and I have no control of the size of the buffer (i.e. if the screen is 640x480 and we're drawing 90° rotated, I really want to give the game a 480x640 colour buffer).
other?
Game-specific hacks aren't an option here because I need to be able to swap out the game binary with another one; this has to be a generic fix. Changing the game isn't an option because we don't have control of the game source code.
Any suggestions? Other than the non-technical one of trying to persuade the requirement to go away?

What is the issue with using glRotate along the z axis?
Approach 1 is the way to go.
Pixel operations are heavy, and you could easily end up messing up the aspect ratio, among other things.
The steps that go into drawing are:
1. Set the transformation matrix (the model/projection). If landscape, apply the glRotate.
2. Set the viewport (this might change each time you rotate the screen): if landscape, pass the viewport dimensions as (height, width); if portrait, pass them as (width, height).
3. Draw the scene.
When you rotate the screen, the objects are rendered again. So glRotate is the best way to go.
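As a rough illustration of approach 1, here is a minimal GL10 / ES 1.x sketch of what the injected projection setup could look like (the helper class, the 45-degree field of view and the near/far planes are assumptions for illustration only; in the interposer you would wrap the equivalent calls around the game's own projection setup):

    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLU;

    final class RotatedProjection {
        // Sets up a 90-degree-rotated projection for a game that believes the
        // screen is (height x width) while the physical surface is (width x height).
        static void apply(GL10 gl, int screenWidth, int screenHeight) {
            // The real, physical viewport.
            gl.glViewport(0, 0, screenWidth, screenHeight);

            gl.glMatrixMode(GL10.GL_PROJECTION);
            gl.glLoadIdentity();
            // Rotate the whole output 90 degrees around the z axis...
            gl.glRotatef(90f, 0f, 0f, 1f);
            // ...then build the projection with the swapped aspect ratio the game expects.
            GLU.gluPerspective(gl, 45f, (float) screenHeight / screenWidth, 0.1f, 100f);

            gl.glMatrixMode(GL10.GL_MODELVIEW);
            gl.glLoadIdentity();
        }
    }

The game is told the swapped width/height, while the real viewport plus the extra glRotatef undo the swap on the way to the screen.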

Related

Dynamic Environment mapping from camera in Augmented Reality setting

I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas and then use those as faces on the cube map. The division of the camera frame would look something like this:
Now the X area is easy; I can use glCopyTexImage2D to copy that square area to my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
How can I take those trapezoidal areas and distort them into square textures? I think I need the inverse of the perspective projection that happens later, so that the two will cancel each other out in the final render when I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this I tried a simpler step of putting the square X area on every side of the cubemap, just to see if glCopyTexImage2D can even be used for this. It can, but the results are not rotated correctly; some faces are "upside down" when I render the cubemap as a skybox. The question is similar: how can I rotate them before using them as textures?
I also thought about solving the problem from the other side and modifying the "texture coordinates" to make the necessary adjustments, but that also does not seem easy since the lookup in the fragment shader with "textureCube" is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems that you need to create a mesh with the UV mapping described in the article and render it, textured with the camera frame, into another texture. Then use that as the cubemap.
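As a starting point for the copy step, here is a minimal ES 2.0 sketch (xOffset, yOffset and faceSize are hypothetical values describing where the central "X" square sits in the framebuffer; it must run on the GL thread after the camera frame has been drawn):

    import android.opengl.GLES20;

    final class CubeMapCopy {
        // Copies a square region of the current framebuffer into one cube map face
        // and returns the cube map texture id.
        static int copyCenterToCubeFace(int xOffset, int yOffset, int faceSize) {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, tex[0]);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
            // The central "X" square goes straight into the +Z face; the trapezoids
            // would first be distorted into squares (by rendering the UV-mapped mesh
            // into an FBO) and then copied into the other faces the same way.
            GLES20.glCopyTexImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_Z, 0,
                    GLES20.GL_RGB, xOffset, yOffset, faceSize, faceSize, 0);
            return tex[0];
        }
    }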

OpenGL ES - how to create a plane emitting light

I'm a newbie in the OpenGL ES world, learning some basics of 3D graphics with Android OpenGL ES. I'm wondering how to create an image plane that emits light. This is easy to implement in 3D modelling software like Blender (using the Cycles render); see the image below for the effect I'm looking for. Through some research, I learned that it may be related to a blur or bloom effect done with shaders. But I'm not very sure, and I don't know how to implement them.
As per Paul-Jan's comment, what you want is far from basic in OpenGL.
The default approach for OpenGL is forward rendering. i.e. every time you specify a piece of geometry the calculation goes forwards from triangle to pixels, a function is applied to determine the colour for each of those pixels and they're forwarded to the frame buffer. So the starting position is that each individual pixel has no concept of the world around it. Each exists in isolation.
In your scene, the floor below the box has no idea it should be blue because it has no idea that there is a box above it.
Programs like Blender use a different approach, which in this context could accurately be called backwards rendering. It starts from each pixel and asks what geometry lies behind it. In doing that it explicitly has an idea of all the geometry in the scene. So when it spots that the floor is behind a certain position, it can then continue and ask "and which light sources can the floor see?" to establish lighting.
The default OpenGL approach is long established for real-time rendering. If you look at old video games you'll notice evidence of it all over the place: objects often don't cast shadows on each other (or such shadows are very rough approximations), there's only one source of light which is infinitely far away (i.e. it's in a fixed position as far as geometry is concerned; no need to know about the scene really).
So solutions are to invest the geometry with some knowledge of the whole scene. A common approach is to perform internal renderings of the scene from the point of view of the light source. That generates a depth buffer. By handing the light position and depth buffer off to every piece of geometry in the scene they can calculate whether they're visible to the light source. If so then they're illuminated by it. If not then they're not.
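To make the shadow-map idea concrete, here is a minimal fragment-shader sketch written as a Java string for an ES 2.0 pipeline (uShadowMap, vShadowCoord, uSurfaceColor and the bias value are hypothetical names; it also assumes depth textures are available, e.g. via OES_depth_texture, or that depth is packed into a colour texture):

    final class ShadowShaders {
        // Assumes the vertex shader outputs vShadowCoord = lightViewProjection * position,
        // and that uShadowMap is the depth buffer rendered from the light's point of view.
        static final String SHADOW_FRAGMENT_SHADER =
                "precision mediump float;\n"
                + "uniform sampler2D uShadowMap;\n"
                + "uniform vec4 uSurfaceColor;\n"
                + "varying vec4 vShadowCoord;\n"
                + "void main() {\n"
                + "  vec3 coord = vShadowCoord.xyz / vShadowCoord.w;\n"
                + "  coord = coord * 0.5 + 0.5;               // into [0,1] range\n"
                + "  float lightDepth = texture2D(uShadowMap, coord.xy).r;\n"
                + "  float bias = 0.005;                      // avoid shadow acne\n"
                + "  float lit = (coord.z - bias) <= lightDepth ? 1.0 : 0.3;\n"
                + "  gl_FragColor = vec4(uSurfaceColor.rgb * lit, uSurfaceColor.a);\n"
                + "}\n";
    }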
Another option is deferred rendering; you do a standard pass of your scene, populating at each pixel the depth, the surface colour, the surface normal, etc. So you get the full scene information broken down into pixel-by-pixel storage from the point of view of the camera. You then pretend that everything the camera can see is everything that there is. So you just need to pass that buffer around for pixels to be able to work out, approximately, which light sources they can and can't see. You can also have different parts of the screen only consider which lights they're close enough to by a broad-phase 2d distance check, which saves time.
In either case we're actually talking about relatively advanced OpenGL stuff.

Android / Offscreen rendering to texture

I am making a 2D graphical app that will display planets. I say 2D because the majority of the app will be 2D. I however want to render some 3D objects into dynamic sprites offscreen (to a texture), with transparent (possibly translucent) areas, and subsequently render those rendered textures to the active screen as 2D textured quads. Rendering directly to the screen as 3D objects is not optimal in this case, because it would require me to implement some sort of 3D picking. I am not that advanced in math yet. Note also that the main screen render will be orthographic, while the offscreen render would be perspective.
How can I accomplish this (general idea, no need for specifics), and what would be the most efficient way to do this? Would this reduce support for a wide variety of devices? Also, if the 3D sprite renderings were constantly refreshed every frame (such as being rotated fine amounts) would that kill framerates with continuous unloading/reloading of texture to memory? I suppose that some scenes could have as many as 10 of these 3D offscreen sprites.
Thanks for the help
If you really must use offscreen rendering, just search for FBO (framebuffer object) and attach a texture to it, then use that texture in your main view as 2D. It is quite a straightforward procedure but it might cost some speed. You will probably not be able to do any multithreading on it, so you should create just one FBO. Its dimensions will probably have to be a power of 2, so the resolution might be different from what you want. This procedure does not continually load/unload anything; the data is allocated when the texture is created and GL draws/reads directly from it. The largest drawback here will be memory: you will create as many as 10 of these textures just to draw on them and present them once.
It might be very easy to place these objects at a specific place in your main buffer, though: set up all the logic as if you wanted to draw a full-screen planet, but use glViewport to place it in a specific part of the screen.
If those planet images will be updated only on user request (you don't want to draw them every frame), then I suggest you try a combination of both: create an FBO with a texture of the same size as or larger than the main view, and draw all the planets to this single texture using the viewport method. Then you can update any planet you want; just don't clear the whole buffer, rather clear only the specific part of the buffer/texture. And keep drawing the whole texture to the main buffer.
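A minimal ES 2.0 sketch of that FBO-plus-texture setup (the helper class and the RGBA format are assumptions; add a depth renderbuffer if the 3D planets need depth testing):

    import android.opengl.GLES20;

    final class OffscreenTarget {
        // Creates an FBO with a colour texture attached and returns {fboId, textureId}.
        // Render the 3D planet into the FBO, then draw the texture as a 2D quad.
        static int[] create(int width, int height) {
            int[] fbo = new int[1];
            int[] tex = new int[1];
            GLES20.glGenFramebuffers(1, fbo, 0);
            GLES20.glGenTextures(1, tex, 0);

            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                    0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                    GLES20.GL_TEXTURE_2D, tex[0], 0);
            if (GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                    != GLES20.GL_FRAMEBUFFER_COMPLETE) {
                throw new RuntimeException("FBO incomplete");
            }
            // Switch back to the default (on-screen) framebuffer.
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
            return new int[] { fbo[0], tex[0] };
        }
    }

Each frame (or only when a sprite changes), bind the FBO, set the perspective viewport/projection, draw the planet, bind framebuffer 0 again, and draw the texture as an orthographic textured quad.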

How to set my view in OpenGL ES 2.0 to show exactly the right number of coordinates

I'm writing a simple 2D game for Android with a 300x200 play area, with coords running from (0,0) to (299,199). I want this area to fill the screen as well as possible while maintaining its aspect ratio. E.g. if the GL view fills the full 800x480 of a device, I could scale the area by 2.4x to 720x480, leaving 40 pixels of space on either side.
I don't expect many devices to scale exactly in both dimensions, so the code has to cope with a gap either in the horizontal or the vertical.
So the question is how do I do this. My play area is 2D, so I can use an orthographic projection. I just don't understand what values I need to plug in to set this up. I also suspect that because ES 2.0 relies heavily on shaders, I might need to pass some kind of scaling matrix to the vertex shader to ensure objects are rendered at the right size.
Does anyone know of a good tutorial which perhaps talks in terms that make sense for my needs? Most tutorials I've seen seem content to dump a cube or square into the middle of the screen rather than rendering an area of exact dimensions.
This problem should be easy using the old and familiar OpenGL functions, like glViewport and an orthographic projection (what glOrtho used to do). GLM offers that functionality for environments like OpenGL ES; have a look:
http://glm.g-truc.net/
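On Android the equivalent of GLM here is android.opengl.Matrix. A minimal sketch (assuming an ES 2.0 renderer whose vertex shader multiplies positions by a projection uniform; the class and method names are hypothetical) of a letterboxed orthographic projection for the 300x200 play area:

    import android.opengl.Matrix;

    final class PlayAreaProjection {
        // Play area runs from (0,0) to (300,200) in world units.
        private static final float AREA_W = 300f;
        private static final float AREA_H = 200f;

        // Call from onSurfaceChanged() and upload the result with glUniformMatrix4fv.
        static float[] computeOrtho(int screenW, int screenH) {
            float[] proj = new float[16];
            float screenAspect = (float) screenW / screenH;
            float areaAspect = AREA_W / AREA_H;
            if (screenAspect > areaAspect) {
                // Screen is wider than the play area: pad left and right.
                float extra = (AREA_H * screenAspect - AREA_W) / 2f;
                Matrix.orthoM(proj, 0, -extra, AREA_W + extra, 0, AREA_H, -1, 1);
            } else {
                // Screen is taller than the play area: pad top and bottom.
                float extra = (AREA_W / screenAspect - AREA_H) / 2f;
                Matrix.orthoM(proj, 0, 0, AREA_W, -extra, AREA_H + extra, -1, 1);
            }
            return proj;
        }
    }

With an 800x480 surface this gives exactly the 2.4x scale and 40-pixel bars from the question.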

Android: Canvas vs OpenGL

I have a drawing app where the user can draw lines with their finger and adjust the color, thickness, etc. As the user is drawing, I am converting the massed X/Y points from MotionEvent into SVG Paths, as well as creating Android Paths and then drawing the Android Paths to the screen via a Canvas, and committing the SVG Paths to the app's database.
I am following the model used in FingerPaint, in that the 'in progress' lines are drawn on the fly by repeated calls to invalidate() (and thus, onDraw()), and once the line is complete and a new line is started, the previous line(s) are drawn in onDraw() from the underlying Canvas Bitmap, with in progress lines again generating repeated re-draws.
This works fine in this application - until you start rotating the underlying Bitmap to compensate for device rotation, supporting the ability to 'zoom in' on the drawing surface and thus having to scale the underlying Bitmap, etc. So for example, with the device rotated and the drawing scaled in, when the user is drawing, we need to scale AND rotate our Bitmap in onDraw(), and this is absolutely crawling.
I've looked at a SurfaceView, but as this still uses the same Canvas mechanism, I'm not sure I'll see noticeable improvement... so my thoughts turn to OpenGL. I have read somewhere that OpenGL can do rotations and scaling essentially 'for free', and even seen rumors (third comment) that Canvas may be disappearing in future versions.
Essentially, I am a little stuck between the Canvas and OpenGL solutions... I have a 2D drawing app that seems to fit the Canvas model perfectly when in one state, as there are not constant re-draws going on like a game (for instance when the user is not drawing I don't need any re-drawing), but when the user IS drawing, I need the maximum performance necessary to do some increasingly complex things with the surface...
Would welcome any thoughts, pointers and suggestions.
OpenGL would be able to handle the rotations and scaling easily.
Honestly, you would probably need to learn a lot of OpenGL to do this, specifically related to the topics of:
Geometry
Lighting (or just disabling it)
Picking (selecting geometry to draw on it)
Pixel Maps
Texture Mapping
Mipmapping
Also, learning OpenGL for this might be overkill, and you would have to be pretty good at it to make it efficient.
Instead, I would recommend using the graphics components of a game library built on top of OpenGL, such as:
Cocos2d
libgdx
any of the engines listed here
Well, this question was asked 6 years ago. Maybe Android 4.0 had not come out yet?
Actually, since Android 4.0 the Canvas used by android.view.View is a hardware-accelerated canvas, which means it is implemented with OpenGL, so you do not need to use another approach for performance.
You can look at https://github.com/ChillingVan/android-openGL-canvas/blob/master/canvasglsample/src/main/java/com/chillingvan/canvasglsample/comparePerformance/ComparePerformanceActivity.java to compare the performance of a normal canvas in a View with a GLSurfaceView.
You are right that SurfaceView uses Canvas under the hood. The main difference is that SurfaceView uses another thread to do the actual drawing, which generally improves performance. It sounds like it would not help you a great deal, though.
You are correct that OpenGL can do rotations very quickly, so if you need more performance that is the way to go. You should probably use GLSurfaceView. The main drawback of using OpenGL is that it is a real pain to do text. Basically you have to (okay, you don't have to, but it seems to be the best option) render bitmaps of text.
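If you do go the GLSurfaceView route, here is a minimal setup sketch (MyDrawingRenderer is a hypothetical GLSurfaceView.Renderer you would write to draw the stroke geometry):

    import android.app.Activity;
    import android.opengl.GLSurfaceView;
    import android.os.Bundle;

    public class DrawingActivity extends Activity {
        private GLSurfaceView glView;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            glView = new GLSurfaceView(this);
            glView.setEGLContextClientVersion(2);
            glView.setRenderer(new MyDrawingRenderer());
            // Only redraw when requestRender() is called, e.g. while the user draws;
            // this avoids game-style continuous redrawing when the app is idle.
            glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
            setContentView(glView);
        }

        @Override
        protected void onResume() {
            super.onResume();
            glView.onResume();
        }

        @Override
        protected void onPause() {
            glView.onPause();
            super.onPause();
        }
    }

Call glView.requestRender() from your touch handling while a stroke is in progress.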
