Drawing to an OpenGL texture in Android

I have been working on modifying libstreaming so that I can stream a SurfaceView up to Wowza instead of the camera. I feel that I am now at the final stage, just missing one piece that I cannot seem to figure out.
Here is what I need to replace. Everything works correctly when I use the camera and do the following:
cameraInstance.setPreviewTexture(glSurfaceView.getSurfaceTexture());
cameraInstance.startPreview();
glSurfaceView is a custom class that contains code to create a texture with the glGenTextures method.
At this stage, I can start the app and images are transmitted to Wowza via libstreaming. All happy.
Now, I want to replace the cameraInstance with my own code to draw directly to the texture. How do I go about doing that? How does the camera manage to draw directly to the texture?
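For reference, a minimal, hedged sketch of one possible replacement: wrap the same SurfaceTexture in a Surface and draw each frame with a Canvas (the software path; an EGL surface would be the hardware path). The sizes, the paint, and the drawing calls below are illustrative assumptions, not libstreaming API.

SurfaceTexture surfaceTexture = glSurfaceView.getSurfaceTexture();
surfaceTexture.setDefaultBufferSize(videoWidth, videoHeight);  // assumed stream size
Surface surface = new Surface(surfaceTexture);

Canvas canvas = surface.lockCanvas(null);
try {
    canvas.drawColor(Color.DKGRAY);                 // draw whatever should be streamed
    canvas.drawText("Hello Wowza", 50, 50, paint);  // 'paint' is an assumed Paint
} finally {
    surface.unlockCanvasAndPost(canvas);            // queues the frame onto the GL texture
}

Each lockCanvas/unlockCanvasAndPost pair produces one frame on the texture, which libstreaming can then consume exactly as it consumed camera frames.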

Related

How to crop Camera2 preview without overlay object?

I want to crop the camera preview in Android using the camera2 API. I am using android-Camera2Basic, the official example.
This is the result I am getting, and this is the result I actually want to achieve.
I don't want to overlay an object on the textureView. I want the preview itself to actually be that size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of a TextureView. For best efficiency, you likely want to implement the cropping in OpenGL ES: you'll need a GLSurfaceView, use its OpenGL context to create a SurfaceTexture object, and then draw a quadrilateral textured from it, with the cropping behavior you want in the fragment shader.
That's fairly basic EGL, but it's quite a bit if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
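For a sense of scale, here is a hedged sketch of the kind of fragment shader such a quad could use, written as a Java string constant; the uCropRect uniform and its (left, top, width, height) convention are illustrative assumptions, not part of the linked test program.

private static final String CROP_FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "uniform samplerExternalOES sTexture;\n" +  // the camera SurfaceTexture
        "uniform vec4 uCropRect;\n" +               // assumed: (left, top, width, height) in [0,1]
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    vec2 cropped = uCropRect.xy + vTexCoord * uCropRect.zw;\n" +
        "    gl_FragColor = texture2D(sTexture, cropped);\n" +
        "}\n";

Drawing a full-screen quad with this shader samples only the chosen sub-rectangle of the camera image, which is the crop-without-stretching behavior asked for.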

Need to draw to surface used by MediaRecorder

I need to record a video that just contains one single frame: an image specified by the user (the video can be of any length, but it will only ever show that same static image). So, I figured I could use the new MediaRecorder.VideoSource.SURFACE and just draw to the Surface being used by the recorder. I initialize the recorder properly, and I can even call MediaRecorder.getSurface() without an exception (something that is apparently tricky).
My problem is somewhat embarrassing: I don't know what to do with the surface returned. I need to draw to it somehow, but all examples I can find involve drawing to a SurfaceView. Is this surface the same surface used by MediaRecorder.setPreviewDisplay()? How do I draw something to it?
In theory you can use Surface#lockCanvas() to get a Canvas to draw on if you want to render in software. There used to be problems with this on some platforms; not sure if that has been fixed.
The other option is to create an EGLSurface from the Surface and render onto it with OpenGL ES. You can find examples of this, with some code to manage all of the EGL setup, in Grafika.
The GLES recording examples use MediaCodec rather than MediaRecorder, but the idea is the same, and it should be much simpler with MediaRecorder.
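A hedged sketch of the software (lockCanvas) path; 'recorder' is assumed to be a MediaRecorder configured with MediaRecorder.VideoSource.SURFACE on which prepare() has already succeeded, and 'userBitmap' is the user's static image.

Surface surface = recorder.getSurface();  // valid only after prepare()
recorder.start();

// Each lockCanvas/unlockCanvasAndPost pair submits one video frame, so for a
// static image, repeat this at the configured frame rate for the desired duration.
Canvas canvas = surface.lockCanvas(null);
try {
    canvas.drawColor(Color.BLACK);
    canvas.drawBitmap(userBitmap, 0, 0, null);  // draw the user's static image
} finally {
    surface.unlockCanvasAndPost(canvas);
}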

How to record webview activity screen using Android MediaCodec?

I have the task of recording user activity in a WebView; in other words, I need to create an mp4 video file while the user navigates in a WebView. Pretty challenging :)
I found that Android 4.3 expanded MediaCodec to include a way to provide input through a Surface (via the createInputSurface method). This allows input to come from camera preview or from OpenGL ES rendering.
I even found an example where you can record a game written in OpenGL: http://bigflake.com/mediacodec/
My question is: how could I record WebView activity? I assume that if I could draw the WebView content to an OpenGL texture, then everything would be fine. But I don't know how to do this.
Can anybody help me on this?
Why not try WebView.onDraw first, instead of using OpenGL? The latter approach may be more complicated, and is not supported by all devices.
Once you are able to obtain the screenshots, you can create the video (creating a video from an image sequence on Android is a separate task, where MediaCodec should help).
"I assume that If I could draw the webview content to opengl texture".
It is possible.
The SurfaceTexture is basically your entry point into the OpenGL layer. It is initialized with an OpenGL texture id, and performs all of its rendering onto that texture.
The steps to render your view to OpenGL:
1. Initialize an OpenGL texture.
2. Within an OpenGL context, construct a SurfaceTexture with the texture id. Use SurfaceTexture.setDefaultBufferSize(int width, int height) to make sure you have enough space on the texture for the view to render.
3. Create a Surface constructed with the above SurfaceTexture.
4. Within the View's onDraw, use the Canvas returned by Surface.lockCanvas to do the view drawing. You can obviously do this with any View, not just WebView. Plus, Canvas has a whole bunch of drawing methods, allowing you to do funky, funky things. (A sketch of these steps follows the links below.)
The source code can be found here: https://github.com/ArtemBogush/AndroidViewToGLRendering And you can find some explanations here: http://www.felixjones.co.uk/neo%20website/Android_View/
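A hedged sketch of those four steps, with illustrative names; it assumes a current OpenGL ES context (for example, inside GLSurfaceView.Renderer.onSurfaceCreated) for steps 1 and 2.

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);                             // 1. create a GL texture
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);  // 2. wrap the texture id
surfaceTexture.setDefaultBufferSize(viewWidth, viewHeight);  //    assumed view size

Surface surface = new Surface(surfaceTexture);               // 3. Surface over the SurfaceTexture

Canvas canvas = surface.lockCanvas(null);                    // 4. draw the view into it
try {
    webView.draw(canvas);  // works for any View, not just WebView
} finally {
    surface.unlockCanvasAndPost(canvas);
}
// Later, on the GL thread, surfaceTexture.updateTexImage() latches the frame
// into the texture so it can be sampled as a samplerExternalOES.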

Getting part of the (already rendered) screen as a texture

I'm making an Android OpenGL ES 2D app, and trying to use a part of my rendered screen as a texture for a billboard.
So far, I've had partial success with glCopyTexSubImage - it only works on some phones.
Everywhere I read recommends using a framebuffer object (FBO) to render to texture, but I can't grasp how to use one, so if anyone can help me understand this, I would thank them greatly.
If I use an FBO that is bound to a texture, is it possible to render just part of the screen? If not, isn't that a bit of overkill? (Also, it's much more work texture-mapping and moving the texture, and the texture would have to be big enough for the part I need not to be blurry.)
I need to get a snapshot of something that should be rendered to the screen anyway; does that mean I have to render my scene twice every frame (once for my texture and again for the actual render)? Am I missing something here?
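For what it's worth, here is a hedged sketch of the usual FBO render-to-texture setup in GLES20, with illustrative names and sizes; restricting the viewport (or scissor) lets you render just the region you need into the texture.

int[] tex = new int[1], fbo = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, texWidth, texHeight,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);  // allocate, no pixel data
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);

GLES20.glViewport(0, 0, texWidth, texHeight);        // render only what the billboard needs
// ... draw the part of the scene you want captured ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // back to the default framebuffer

Note that if the same content must also appear on screen, it does get drawn twice (once into the FBO, once in the normal pass); that is the standard trade-off of this approach.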

How to make changes to the video stream in real time?

I want to make some changes to the video that comes from the camera.
So I am using a class that extends SurfaceView and implements SurfaceHolder.Callback.
Now, I still can't find any way to make these changes.
How can I do it?
You can try SurfaceTexture instead of SurfaceView, and implement the SurfaceTexture.OnFrameAvailableListener interface with its onFrameAvailable(...) method. When a video frame arrives, the SurfaceTexture will call back into this method, and you can get the current frame data.
Please refer to the PanoramaActivity class of the Android Camera app source code for sample code.
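A hedged sketch of that listener wiring, with illustrative names ('textureId' from glGenTextures, a GLSurfaceView in RENDERMODE_WHEN_DIRTY, and an open Camera):

SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // A new camera frame is queued; schedule a render pass. On the GL
        // thread, call st.updateTexImage() to latch the frame, then modify
        // and draw it (e.g. through a fragment shader).
        glSurfaceView.requestRender();
    }
});
try {
    camera.setPreviewTexture(surfaceTexture);
    camera.startPreview();
} catch (IOException e) {
    Log.e("CameraFilter", "setPreviewTexture failed", e);
}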
This is complicated and difficult to do in real time. Basically you need to grab the camera data, modify it, and then write it to the SurfaceView. And you realistically have half a second to do it, otherwise the lag is unbearable.
Apps that overlay things on a camera view (think of the ZXing barcode scanner) typically do it by providing a view bound to the camera, then grabbing a second copy of the camera image data a few times per second and overlaying additional data on top of the first view.
