I want to implement an effect that shows a picture on a bubble's surface, so that the picture appears to wrap around the bubble. But I don't know how to do this kind of thing...
I am doing it on the Android platform. Should I use OpenGL ES for this, or can just some 2D transformation achieve this effect?
One more question: I want to create many interesting graphics effects like Photoshop's various filters. Are there any books/articles I can refer to for this kind of thing? Does this kind of work belong to the "Digital Image Processing" field, or to some other computer-graphics-related field?
Or just some 2D transformation
This effect is a nonlinear transformation, so the 2D (linear) transformations available to you will not work; you can, however, do it with OpenGL in numerous ways. I'm still thinking of an easy-to-understand way to convey what you need to do. Basically you need to implement some kind of refraction, or a nonlinear radial warp.
Say p is the center of your bubble (in 2D) and r the position of a pixel relative to p; then the undistorted picture is given by fetching the pixel at p + r. Now you want to distort it toward the edges. A parabolic distortion comes to mind, i.e. instead of the pixel at p + r you'd show the pixel at p + (|r|^2)·r/|r| = p + |r|·r, with |r| normalized to the bubble radius so that the rim of the bubble stays in place.
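To make that concrete, here is a minimal sketch of such a radial warp as an OpenGL ES 2.0 fragment shader, written as a Java string constant as is common on Android. All of the names (uTexture, uCenter, uRadius, vTexCoord) are placeholders I made up for the sketch:

// Hypothetical fragment shader implementing the parabolic radial warp
// p + |r|*r described above; every identifier here is a placeholder.
private static final String BUBBLE_WARP_SHADER =
        "precision mediump float;\n"
      + "uniform sampler2D uTexture;\n"
      + "uniform vec2 uCenter;\n"    // bubble center p, in texture coordinates
      + "uniform float uRadius;\n"   // bubble radius, in texture coordinates
      + "varying vec2 vTexCoord;\n"
      + "void main() {\n"
      + "  vec2 r = vTexCoord - uCenter;\n"   // offset from the bubble center
      + "  float d = length(r) / uRadius;\n"  // 0 at the center, 1 at the rim
      + "  vec2 src = uCenter + d * r;\n"     // p + |r|*r, the parabolic warp
      + "  gl_FragColor = texture2D(uTexture, d < 1.0 ? src : vTexCoord);\n"
      + "}\n";

At the rim (d = 1) the picture is untouched; toward the center the sampling is compressed, which is what makes the picture appear to bulge around the bubble.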
Related
I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas, and then use those as the faces of the cube map. The division of the camera frame would look something like this:
Now the X area is easy; I can use glCopyTexImage2D to copy that square area to my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
How can I take those trapezoidal areas and distort them into square textures? I think I need the opposite of the perspective projection that occurs later, so that the two cancel each other out in the final render if I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this I tried a simpler step of putting the square X area on every side of the cubemap, just to see if glCopyTexImage2D can even be used for this. It can, but the results are not rotated correctly; some faces are "upside down" when I render the cubemap as a skybox. That raises a similar question: how can I rotate them before using them as textures?
I also thought about attacking the problem from the other side and modifying the texture coordinates to make the necessary adjustments, but that does not seem easy either, since the cubemap lookup in the fragment shader with textureCube is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems that you need to create a mesh with the UV mapping described in the article, render it with the camera texture into another texture, and then use that as the cubemap.
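A minimal render-to-texture sketch of that idea in Java (OpenGL ES 2.0) follows; cubeTexture, faceSize, and drawWarpedCameraMesh() are placeholders for things you already have, and each cubemap face is assumed to have been allocated with glTexImage2D beforehand:

// Render the UV-mapped camera mesh into one face of the cube map via an FBO.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
// Attach e.g. the +X cube map face as the color target.
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubeTexture, 0);
GLES20.glViewport(0, 0, faceSize, faceSize);
drawWarpedCameraMesh();  // mesh textured with the current camera frame
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // back to the screen

Repeating this with different UVs for each of the six faces would also sidestep the rotation problem from glCopyTexImage2D, since the orientation is baked into the mesh's texture coordinates.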
I'm writing an Android app using OpenCV for my master's that will be something like a game. The main goal is to detect a car in a selected area. A "prize" will be triggered randomly while detecting cars. When the user hits the proper car, I want to display a 3D object overlaid on the screen, attach it to the middle of the car, and keep it there, so that when the user changes the angle of view on the car, the object is also seen from a different angle.
At the moment I have EVERYTHING besides attaching the object. I've created the detection, I'm drawing the 3D overlay, and I've created functions that allow me to rotate the camera, etc. BUT I do not have any clue how to attach the overlay to a specific point. Because I don't have that, I have no reference point from which to recalculate the renderer and change the overlay's perspective.
Please, I really need some help; even a small idea will be fine:
How can I attach the overlay to a specific point in the real world?
(Sorry, I couldn't comment. Need at least 50 points to do that ... :P )
I assume your image of the car is coming from a camera feed and you are drawing the 3D car in OpenGL. If so, then you can try this:
Set the pixel format of the OpenGL layer to RGBA_8888, so that you can clear the OpenGL view to a transparent color.
Use a RelativeLayout as the layout of your activity.
First add the OpenCV camera view to it at full height and width.
Then add the OpenGL layer on top, also at full height and width.
Get the position of the real car from the OpenCV layer as a pixel value, or however you detect it.
Then scale that position to your OpenGL coordinates so that you can draw the overlay at the right spot.
It worked for me; hope it works for you too.
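For reference, a rough sketch of that layering in Java, assuming a plain GLSurfaceView inside an Activity's onCreate(); cameraView stands for your OpenCV camera view (e.g. a JavaCameraView) and renderer for your GLSurfaceView.Renderer:

// Make the GL view translucent so the OpenCV camera view shows through.
GLSurfaceView glView = new GLSurfaceView(this);
glView.setEGLContextClientVersion(2);
glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);        // RGBA_8888 plus depth
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
glView.setZOrderOnTop(true);                          // composite above the camera
glView.setRenderer(renderer);                         // clear with glClearColor(0,0,0,0)

RelativeLayout root = new RelativeLayout(this);
RelativeLayout.LayoutParams full = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MATCH_PARENT,
        RelativeLayout.LayoutParams.MATCH_PARENT);
root.addView(cameraView, full);                               // camera underneath
root.addView(glView, new RelativeLayout.LayoutParams(full));  // GL view on top
setContentView(root);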
I am developing an augmented reality app that should render a 3D model. So far so good. I am using Vuforia for AR and libgdx for graphics, everything is on Android, works like a charm...
The problem is that I need to create a "window-like" effect. I literally need to make the model look like a window you can look through and see behind it. That means I have some kind of wall object which has a hole in it (a window). Through this hole, you can see another 3D model behind the wall.
The problem is that I also need to render the video background, and this background is also behind the wall. I can't just turn off blending when rendering the wall, because that would corrupt the video image.
So I need to make the wall and everything directly behind it transparent, but not the video background.
Is such a marvel even possible using only OpenGL?
I have been thinking about some combination of front-to-back and back-to-front rendering: render the background first, then render the wall but blend it only into the alpha channel (making the video visible only on pixels that are not covered by the wall), then render the actual content but blend it only into the visible pixels (those not behind the wall), and then "render" the wall once more, this time making everything behind it visible. Would such a thing work?
I can't just turn off blending when rendering the wall
What makes you think that? OpenGL is not a scene graph. It's a drawing API, and everything happens in the order in which you call it.
So the order of operations would be:
Draw the video background with blending turned off.
Then the objects between the video and the wall (turn blending on or off as needed).
Draw the wall with blending or alpha test enabled, so that you can create the window.
Is such a marvel even possible using only OpenGL?
The key to understanding OpenGL is that you don't think of it as setting up a 3D world scene; instead you use it to draw a 2D picture of a 3D world (because that's what OpenGL actually does). In the end, OpenGL is just a somewhat smarter brush for drawing onto a flat canvas. Think about how you'd paint a picture on paper and how you'd mask different parts, and then do exactly that with OpenGL.
Update
Okay, now I see what you want to achieve. The wall is not really visible, but acts as a depth-dependent mask. Easy enough to achieve: use alpha testing instead of blending to produce the window in the depth buffer. Or, instead of alpha testing, you could just draw 4 quads which form a window between them.
The trick is that you draw the wall into just the depth buffer, but not into the color buffer:
glDepthMask(GL_TRUE);                                 /* write depth values  */
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* but no color values */
draw_wall();
Blending will not work in this case, since even fully transparent fragments will end up in the depth buffer; hence the alpha test. In fixed-function OpenGL that's glEnable(GL_ALPHA_TEST) and glAlphaFunc(…). On OpenGL ES 2.0, however, you have to implement it in a shader.
Say you've got a single-channel texture; in the fragment shader do:
float opacity = texture2D(sampler, uv).r;  // sample the wall's coverage mask
if (opacity < threshold) discard;          // punch the window into the depth mask
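Putting the whole frame together might then look like this (a sketch in Java/GLES 2.0; the draw methods are placeholders, and the depth buffer is assumed to be cleared at the start of the frame):

GLES20.glDisable(GLES20.GL_DEPTH_TEST);
drawVideoBackground();                           // video quad, no depth involved
GLES20.glEnable(GLES20.GL_DEPTH_TEST);

// Draw the wall into the depth buffer only; the discard in the shader
// above leaves a window-shaped hole in the depth mask.
GLES20.glDepthMask(true);
GLES20.glColorMask(false, false, false, false);
drawWall();
GLES20.glColorMask(true, true, true, true);

// The scene behind the wall now passes the depth test only inside the
// hole, so it appears "through" the window on top of the video.
drawSceneBehindWall();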
I am currently working on an Android OpenGL ES 2.0 2D game, and I need to implement a scrolling scoreboard, something like this:
but when there are too many players, they overflow the specified (white) region. When I implement scrolling (using a matrix translation), the same problem happens at the top of the list. Can anyone help me?
One approach is to use the scissor test to limit where drawing occurs. Set the scissor with glScissor(), enable it with glEnable(GL_SCISSOR_TEST), draw the text, and disable it with glDisable(GL_SCISSOR_TEST).
Note that the scissor box is specified in window coordinates.
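A sketch of that in Java (GLES 2.0); the list bounds are placeholders and must be given in pixels, measured from the bottom-left corner of the surface:

// Clip the scrolling list to the white region using the scissor box.
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
GLES20.glScissor(listX, listY, listWidth, listHeight);  // the white region
drawScoreboardEntries();   // rows outside the box are clipped away
GLES20.glDisable(GLES20.GL_SCISSOR_TEST);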
Another approach would be to arrange the drawing such that the blue border is drawn on top of the text, either by setting the depth or drawing it last. (This assumes you're not drawing it with glClear().)
What I'm trying to do is have a background image; for the sake of simplicity, let's say it's a picture of the front of a house. Then I want to have a red ball move from window to window.
I want to have a background picture, and a picture on top of it.
I then want to be able to tell the top picture EXACTLY where to go.
How can I do this?
I'm just beginning to learn about animations in Android, and have not yet run across any way to do this.
There are two routes to animation on Android: Canvas and OpenGL ES.
I would recommend OpenGL for anything requiring smoothness and speed, like a moving ball.
You should create a view using the helper class GLSurfaceView (http://android-developers.blogspot.com/2009/04/introducing-glsurfaceview.html) and implement a Renderer.
I assume you have the images saved in your res/drawable folders, in a format like PNG, and that the ball image contains an alpha channel.
You can find many tutorials online, but basically you need to load your background image and your ball resource in onSurfaceCreated and store each in a texture using GLUtils.texImage2D.
In the onDrawFrame method, you should set up a 2D projection, for example with GLU.gluOrtho2D, then draw the background.
Then, just before you draw the ball texture, use glTranslatef(x, y, 0) to move the ball over the house. Use alpha blending for the ball:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Unfortunately, writing in OpenGL isn't as straightforward as you might hope. Everything is done with 3D coordinates, despite the fact that you want only a 2D image. But hopefully this gives you enough info to google for good examples, which are abundant!
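To make that concrete, here is a rough onDrawFrame sketch in Java (GLES 1.x, matching the fixed-function calls above); drawTexturedQuad(), the texture IDs, the view size, and the ball coordinates are all placeholders:

// Draw the house, then the alpha-blended ball at an exact pixel position.
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    GLU.gluOrtho2D(gl, 0, viewWidth, 0, viewHeight);  // 2D pixel coordinates
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    drawTexturedQuad(gl, backgroundTexture);          // house fills the screen

    gl.glEnable(GL10.GL_BLEND);
    gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
    gl.glTranslatef(ballX, ballY, 0);                 // exact window position
    drawTexturedQuad(gl, ballTexture);                // the ball, with alpha
    gl.glDisable(GL10.GL_BLEND);
}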