How to crop Camera2 preview without overlay object? - android

I want to crop the camera preview in Android using the Camera2 API. I am using the official android-Camera2Basic sample.
This is the result I am getting.
And this is the result I actually want to achieve.
I don't want to overlay an object on top of the TextureView; I want the preview itself to actually be that size, without stretching.

You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you likely want to implement the cropping in OpenGL ES: you'll need a GLSurfaceView, then you use the OpenGL context of that GLSurfaceView to create a SurfaceTexture object, and then, using that texture, you draw a quadrilateral with the cropping behavior you want in the fragment shader.
That's fairly basic EGL, but it's quite a bit of work if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
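To make the last step concrete, here is a minimal, hypothetical sketch of the cropping idea, assuming the camera frames already arrive on a SurfaceTexture bound to an external OES texture and that a trivial pass-through vertex shader supplies vTexCoord. The crop itself lives entirely in the quad's texture coordinates; draw the quad (or set the viewport) at the matching aspect ratio so the cropped region is not stretched.

```java
// Hypothetical crop region, normalized to [0, 1] of the camera frame.
float cropLeft = 0.25f, cropTop = 0.25f, cropRight = 0.75f, cropBottom = 0.75f;

// Quad positions (x, y) plus texture coordinates (s, t) that sample only the
// cropped sub-rectangle of the camera frame instead of the whole image.
float[] quad = {
    // x,   y,    s,          t
    -1f,  -1f,   cropLeft,   cropBottom,
     1f,  -1f,   cropRight,  cropBottom,
    -1f,   1f,   cropLeft,   cropTop,
     1f,   1f,   cropRight,  cropTop,
};

// Fragment shader sampling the camera's external texture; because the crop is
// expressed in the texture coordinates above, the shader itself stays trivial.
String fragmentShader =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "  gl_FragColor = texture2D(sTexture, vTexCoord);\n" +
        "}\n";
```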

Related

Android: Single SurfaceView vs multiple SurfaceViews

I am trying to draw a 3D object onto camera preview frames on Android. Should I use two surface views, one SurfaceView for the camera preview and a separate GLSurfaceView for the drawing? The views would have to stay synchronized, and the display frame rate should be good enough to provide a good user experience. Most of the tutorials talk about using multiple views. The alternative idea is to get a texture from the camera preview and merge it with the 3D object to be drawn, so as to get the appropriate 2D raster image.
Which method would be better for performance?
P.S.: I will be using the Java APIs for OpenGL ES 2.0.
Two surface views will be slower, since they increase the number of API calls per frame and require transparency.
You don't need two surface views for your purpose. In a single GLSurfaceView:
1. Disable depth writes.
2. Render the camera preview on a 2D quad filling the screen.
3. Enable depth writes.
4. Render the 3D object.
This will make sure your 3D objects are rendered over the camera preview (a sketch of this ordering follows below).
You can also achieve this with two surface views and transparency, but it will be slower.
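A minimal sketch of that single-renderer ordering, assuming OpenGL ES 2.0; drawCameraQuad() and draw3DObject() are hypothetical helpers standing in for your own quad and model drawing code.

```java
// Inside onDrawFrame() of a single GLSurfaceView.Renderer.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

// Steps 1-2: disable depth writes and draw the camera preview on a full-screen quad.
GLES20.glDepthMask(false);
drawCameraQuad();    // hypothetical helper: quad textured with the preview's SurfaceTexture

// Steps 3-4: re-enable depth writes (and depth testing) and draw the 3D content on top.
GLES20.glDepthMask(true);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
draw3DObject();      // hypothetical helper: your 3D geometry
```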

Dynamic Environment mapping from camera in Augmented Reality setting

I am trying to implement something like the technique described in this (old) paper to use the phone camera's video frames to create an illusion of environment mapping in an AR app.
I want to take the camera frame, divide it into sub-areas and then use those as faces on the cube map. The division of the camera frame would look something like this:
Now the X area is easy: I can use glCopyTexImage2D to copy that square area to my cubemap texture. But I need help with the trapezoid-shaped areas around X (forget about the triangles for now).
How can I take those trapezoidal areas and distort them into square textures? I think I need the inverse of the perspective projection that is applied later, so that the two cancel each other out in the final render when I render the cubemap as a skybox around my camera (does that explain what I want?).
Before doing this I tried a simpler step of putting the square X area on every side of the cubemap, just to see whether glCopyTexImage2D can even be used for this. It can, but the results are not rotated correctly: some faces are "upside down" when I render the cubemap as a skybox. So a related question is: how can I rotate them before using them as textures?
I also thought about solving the problem from the other side and modifying the texture coordinates to make the necessary adjustments, but that also does not seem easy, since the lookup in the fragment shader with textureCube is more complicated than a normal texture lookup.
Any ideas?
I'm trying to do this in my AR app on Android with OpenGL ES 2.0 but I guess more general OpenGL advice might also be useful.
Update
I have come to the conclusion that this is not worth pursuing anymore. The paper makes it look nice, but my experiments with a phone camera have shown a major contradiction. If you want to reflect the environment in an object rendered in AR, the camera view is very limited. When the camera is far away from the tracked object you have enough environment information for a good reflection, but you will barely see it because the camera is far away. But when you bring the camera closer to see the awesome reflection in detail, the tracked object will fill most of the camera's field of view and you barely have any environment to reflect anymore. So in either case you lose and the result is not worth the effort.
It seems that you need to create a mesh with the UV mapping described in the article, render it with the camera texture into another texture, and then use that as the cubemap.
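As a rough illustration of the render-to-texture step, here is a hedged GLES 2.0 sketch that renders into one cube-map face through a framebuffer object; cubeMapTextureId, faceSize and drawWarpedCameraMesh() are assumed to exist (the latter would draw the UV-mapped mesh region for that face, textured with the camera frame).

```java
// Attach one cube-map face as the render target of an FBO, then draw the
// UV-mapped mesh (textured with the camera frame) into it. Repeat per face.
int[] fbo = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X, cubeMapTextureId, 0);

GLES20.glViewport(0, 0, faceSize, faceSize);
drawWarpedCameraMesh(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X); // hypothetical helper

// Switch back to the default framebuffer before rendering the skybox itself.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
```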

How to record webview activity screen using Android MediaCodec?

I have the task of recording user activity in a WebView; in other words, I need to create an mp4 video file while the user navigates in a WebView. Pretty challenging :)
I found that Android 4.3 introduced a relevant change to MediaCodec: it was expanded to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from a camera preview or from OpenGL ES rendering.
I even found an example of recording a game rendered with OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do that.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and not supported by all devices.
Once you are able to obtain the screenshots, you can create the video from the image sequence, a separate task where MediaCodec should help.
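A minimal sketch of the screenshot step suggested above, assuming webView is already laid out; each captured Bitmap could then be fed to the encoding stage.

```java
// Draw the WebView's current content into an offscreen Bitmap.
Bitmap frame = Bitmap.createBitmap(webView.getWidth(), webView.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(frame);
webView.draw(canvas);   // same drawing path as onDraw, redirected into our Bitmap
// 'frame' can now be queued for encoding into the mp4.
```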
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL:
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, and not just WebView. Plus, Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things.
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering And you can find some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/
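A condensed, hypothetical sketch of steps 1-4, assuming a valid OpenGL context on the GL thread and known viewWidth/viewHeight; error handling and shader setup are omitted.

```java
// Steps 1 + 2: within an OpenGL context, wrap a GL texture in a SurfaceTexture.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
surfaceTexture.setDefaultBufferSize(viewWidth, viewHeight); // assumed view dimensions

// Step 3: a Surface backed by that SurfaceTexture.
Surface surface = new Surface(surfaceTexture);

// Step 4: draw the view into the Surface's Canvas (e.g. from the view's draw pass).
Canvas canvas = surface.lockCanvas(null);
try {
    webView.draw(canvas);
} finally {
    surface.unlockCanvasAndPost(canvas);
}

// Later, on the GL thread: latch the new frame onto the texture before drawing it.
surfaceTexture.updateTexImage();
```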

Displaying full-screen background in OpenGL

My Android app needs to display a full-screen bitmap as a background, then on top of that display some dynamic 3D graphics using OpenGL ES (either 1.1 or 2.0, not decided yet). The background image is a snapshot of a WebView component in the same app, so its dimensions already fit the screen perfectly.
I'm new to OpenGL, but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D), configuring the matrices, creating some vertices for the rectangle, and displaying that with glDrawArrays. That seems to be a lot of extra work (with loss of quality when down-scaling the image to POT size) when all that's needed is to draw a bitmap to the screen at 1:1 scale.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES. Is there any way to copy pixels to the screen buffer in GLES, circumventing the 3D pipeline? Or is there any way to draw OpenGL graphics on top of a "flat" background drawn by regular Android means? Or to make a translucent GL view (there is RSTextureView for RenderScript-based display, but I couldn't find an equivalent for GL)?
but I know that the regular way to display a bitmap involves scaling it into a POT texture (glTexImage2D)
Then your knowledge is outdated. Modern OpenGL (version 2 and later) is fine with arbitrary image dimensions for its textures.
The "desktop" GL has glDrawPixels(), which seems to do exactly what's needed in this situation, but that's apparently missing in GLES.
Well, modern "desktop" OpenGL, namely version 3 core and later, doesn't have glDrawPixels either.
However appealing this function is (or was), it offers only poor performance and has so many caveats that it's rarely used whenever its use can be avoided.
Just upload your unscaled image into a texture, disable mipmapping, and draw it onto a fullscreen quad.
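A hedged sketch of that upload, assuming bitmap holds the WebView snapshot; note that non-power-of-two textures in core GLES 2.0 require CLAMP_TO_EDGE wrapping and non-mipmap filtering, which is what the answer suggests anyway.

```java
// Upload the unscaled Bitmap as-is: no POT resizing, no mipmaps.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
// NPOT textures in core GLES 2.0 must use CLAMP_TO_EDGE and no mipmap filtering.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Then draw a full-screen quad (two triangles) textured with tex[0] at 1:1 scale.
```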

OpenGL and Canvas: which method is more suitable for making custom brushes in a drawing application?

I want to make the brushes shown in the image below for a drawing application. Which is the more suitable method, OpenGL or Canvas, and how can we implement it?
I'd say Canvas, as you'll want to modify an image. OpenGL ES is good for displaying images, but does not (as far as I know) have methods for modifying its textures (unless you render to a texture that you then render to screen with some modifications, which is not always very efficient).
Using the Canvas you will have the methods for drawing your brush strokes onto the Bitmap you're painting on. In GL ES you would have to modify a texture (by using a Canvas), then upload it to the GPU again before it could be rendered, and the rendering would most likely just consist of drawing a square with your texture on it (as the fill rate of most mobile GPUs is quite bad, you don't want to draw the strokes separately).
What I'm trying to say is: the most convenient way to let the user draw on an OpenGL ES surface would be to create the texture by drawing on a Canvas, as sketched below.
But there might still be some gain in using GL for drawing, as the Canvas operations can be performed off-screen, and you can push this data to a GL renderer to (possibly) speed up the on-screen drawing.
However, if you are developing for Android 3.x+ you should take a look at RenderScript (which I personally have never had a chance to use); it seems like it would be a good solution in this case.
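A minimal sketch of the Canvas approach described above, assuming brushBitmap is the brush tip image, paintingBitmap is the bitmap being painted on, and touchPath is the recorded touch Path; the brush is stamped at regular intervals along the path.

```java
// Paint onto an offscreen Bitmap that is later displayed (or uploaded as a GL texture).
Canvas paintingCanvas = new Canvas(paintingBitmap);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG | Paint.FILTER_BITMAP_FLAG);

// Stamp the brush bitmap at regular intervals along the touch path.
PathMeasure measure = new PathMeasure(touchPath, false);
float[] pos = new float[2];
float spacing = brushBitmap.getWidth() * 0.25f;   // assumed spacing between stamps
for (float d = 0f; d < measure.getLength(); d += spacing) {
    measure.getPosTan(d, pos, null);
    paintingCanvas.drawBitmap(brushBitmap,
            pos[0] - brushBitmap.getWidth() / 2f,
            pos[1] - brushBitmap.getHeight() / 2f,
            paint);
}
```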
Your best solution is going to be using native code. That's how Sketchbook does it. You could probably figure out how by browsing through the GIMP source code: http://www.gimp.org/source. Out of Canvas vs OpenGL, Canvas would be the way to go.
It depends. If you want to edit the image statically, go with Canvas. But if you want the ability to edit, scale, and rotate strokes after brushing the screen, it would be easier with OpenGL.
An example with OpenGL: store the motion the user makes with touches. Create a class that stores one motion and has fields for size, rotation, etc. To draw it, just stamp the selected brush image along the stored motion path.
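A small, hypothetical sketch of the stroke-storing class this answer describes; the field names are illustrative only.

```java
import android.graphics.PointF;
import java.util.ArrayList;
import java.util.List;

// Hypothetical container for one brush stroke, replayable later by the renderer.
class BrushStroke {
    final List<PointF> points = new ArrayList<>();  // the recorded touch motion
    int brushTextureId;   // which brush image to stamp along the path
    float size;           // stamp size in pixels
    float rotation;       // rotation applied to each stamp
    int color;            // tint applied to the brush image

    void addPoint(float x, float y) {
        points.add(new PointF(x, y));
    }
}
```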
