I'm trying to upload dynamically generated pictures to the screen with minimal delay, frame by frame. I have used glTexSubImage2D, but its latency is high. I haven't used PBOs because they are not available on some devices (they require GLES 3.0). A GraphicBuffer bound to a texture (DMA) is a bit faster, but it downloads the full picture back into the buffer on every lock() call. Is there a way to upload pixels to a texture (or native window) rapidly without downloading the buffer back? Just to upload some pixels to the display?
The basic issue I am trying to solve is to delay what is sent to a virtual display by a second or so. So basically, I am trying to shift all frames by 1 second after the initial recording. Note that a surface is used as an input and another surface is used as an output through this virtual display. My initial hunch is to explore a few ideas, given that modification of the Android framework or use of non-public APIs is fine. Java or native C/C++ is fine.
a) I tried delaying frames posted to the virtual display or output surface by a second or two in SurfaceFlinger. This does not work as it causes all surfaces to be delayed by the same amount of time (synchronous processing of frames).
b) MediaCodec uses a surface as an input to encode, and then produces the encoded data. Is there any way to use MediaCodec such that it does not actually encode and only produces unencoded raw frames? Seems unlikely. Moreover, how does MediaCodec do this under the hood? It processes things frame by frame. If I can extrapolate the method, I might be able to extract frame by frame from my input surface and create a ring buffer delayed by the amount of time I require.
c) How do software decoders, such as FFmpeg, actually do this in Android? I assume they take in a surface, but how would they extrapolate and process frame by frame?
Note that I can certainly encode and decode to retrieve the frames and post them, but I want to avoid actually decoding.
I also found this: Getting a frame from SurfaceView
It seems like option d) could be using a SurfaceTexture but I would like to avoid the process of encoding/decoding.
As I understand it, you have a virtual display that is sending its output to a Surface. If you just use a SurfaceView for output, frames output by the virtual display appear on the physical display immediately. The goal is to introduce one second of latency between when the virtual display generates a frame and when the Surface consumer receives it, so that (again using SurfaceView as an example) the physical display shows everything a second late.
The basic concept is easy enough: send the virtual display output to a SurfaceTexture, and save the frame into a circular buffer; meanwhile another thread is reading frames out of the tail end of the circular buffer and displaying them. The trouble with this is what @AdrianCrețu pointed out in the comments: one second of full-resolution screen data at 60fps will occupy a significant fraction of the device's memory. Not to mention that copying that much data around will be fairly expensive, and some devices might not be able to keep up.
(It doesn't matter whether you do it in the app or in SurfaceFlinger... the data for up to 60 screen-sized frames has to be held somewhere for a full second.)
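The circular-buffer idea can be sketched in plain Java. Here nanosecond timestamps and long frame ids stand in for real texture handles or pixel data (a simplification for illustration, not the actual SurfaceTexture API):

```java
import java.util.ArrayDeque;

// Minimal delay buffer: hold frames until they are at least
// delayNanos old, then release them to the consumer.
class DelayBuffer {
    private final ArrayDeque<long[]> queue = new ArrayDeque<>(); // {timestampNanos, frameId}
    private final long delayNanos;

    DelayBuffer(long delayNanos) { this.delayNanos = delayNanos; }

    // Producer side: called when the virtual display produces a frame.
    synchronized void push(long timestampNanos, long frameId) {
        queue.addLast(new long[]{timestampNanos, frameId});
    }

    // Consumer side: returns the frame id to display at 'nowNanos',
    // or -1 if the oldest buffered frame is not yet one delay old.
    synchronized long poll(long nowNanos) {
        long[] head = queue.peekFirst();
        if (head == null || nowNanos - head[0] < delayNanos) return -1;
        queue.removeFirst();
        return head[1];
    }
}
```

The consumer thread would call poll() on each vsync and render whichever frame comes out; the memory concern above is about what each queued entry has to carry, not the queue logic itself.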
You can reduce the volume of data in various ways:
Reduce the resolution. Scaling 2560x1600 to 1280x800 removes 3/4 of the pixels. The loss of quality should be difficult to notice on most displays, but it depends on what you're viewing.
Reduce the color depth. Switching from ARGB8888 to RGB565 will cut the size in half. This will be noticeable though.
Reduce the frame rate. You're generating the frames for the virtual display, so you can choose to update it more slowly. Animation is still reasonably smooth at 30fps, halving the memory requirements.
Apply image compression, e.g. PNG or JPEG. Fairly effective, but too slow without hardware support.
Encode inter-frame differences. If not much is changing from frame to frame, the incremental changes can be very small. Desktop-mirroring technologies like VNC do this. Somewhat slow to do in software.
A video codec like AVC will both compress frames and encode inter-frame differences. That's how you get 1GByte/sec down to 10Mbit/sec and still have it look pretty good.
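The savings from the options above multiply together. A quick back-of-the-envelope helper (plain Java, purely illustrative):

```java
// Back-of-the-envelope memory cost of buffering one second of
// uncompressed frames: width * height * bytesPerPixel * fps.
class FrameMemory {
    static long bytesForOneSecond(int width, int height, int bytesPerPixel, int fps) {
        return (long) width * height * bytesPerPixel * fps;
    }
}
```

For example, 2560x1600 ARGB8888 at 60fps is about 983 MB per buffered second (the ~1 GByte/sec figure above), while 1280x800 RGB565 at 30fps drops that to about 61 MB.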
Consider, for example, the "continuous capture" example in Grafika. It feeds the Camera output into a MediaCodec encoder, and stores the H.264-encoded output in a ring buffer. When you hit "capture", it saves the last 7 seconds. This could just as easily play the camera feed with a 7-second delay, and it only needs a few megabytes of memory to do it.
The "screenrecord" command can dump H.264 output or raw frames across the ADB connection, though in practice ADB is not fast enough to keep up with raw frames (even on tiny displays). It's not doing anything you can't do from an app (now that we have the mediaprojection API), so I wouldn't recommend using it as sample code.
If you haven't already, it may be useful to read through the graphics architecture doc.
I'm making an Android game with OpenGL ES. I want to capture what is rendered onto the screen into a FloatBuffer and save it for later use. For example, if this is the output:
I want this as the result (as a PNG image):
How can I do this?
What is on screen won't be a floating point buffer - it's typically RGBA8 unorm 32-bit per pixel.
Capture via glReadPixels to fetch the raw RGBA data. You'll have to supply the raw-to-PNG save functionality yourself; that's not part of OpenGL ES itself.
Note that this is a relatively expensive operation, especially at high screen resolutions, so don't expect to do this at interactive frame rates.
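One detail worth handling after glReadPixels: it returns rows bottom-to-top, while PNG encoders (and Android's Bitmap) expect top-to-bottom ARGB pixels. A plain-Java conversion sketch, assuming the GL_RGBA/GL_UNSIGNED_BYTE byte layout:

```java
// glReadPixels(..., GL_RGBA, GL_UNSIGNED_BYTE, buf) fills rows
// bottom-to-top; most image encoders expect top-to-bottom ARGB ints.
// This flips vertically and repacks RGBA bytes into ARGB ints.
class ReadPixelsConvert {
    static int[] rgbaToArgbFlipped(byte[] rgba, int width, int height) {
        int[] out = new int[width * height];
        for (int y = 0; y < height; y++) {
            int srcRow = (height - 1 - y) * width * 4; // flip vertically
            for (int x = 0; x < width; x++) {
                int i = srcRow + x * 4;
                int r = rgba[i] & 0xFF, g = rgba[i + 1] & 0xFF;
                int b = rgba[i + 2] & 0xFF, a = rgba[i + 3] & 0xFF;
                out[y * width + x] = (a << 24) | (r << 16) | (g << 8) | b;
            }
        }
        return out;
    }
}
```

The resulting int[] can be handed to Bitmap.setPixels and then Bitmap.compress(PNG, ...) on Android.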
I am using the GL_OES_EGL_image_external extension to play a video with OpenGL. The problem is that on some devices the video dimensions are exceeding the maximum texture size of OpenGL. Is there any way how I can dynamically deal with this issue, e.g. downscale the frames on the fly or do I have to reduce the video size beforehand?
If you are really hitting the max texture size in OpenGL ES (FWIW I believe this is about 2048x2048 with recent devices) then you could do a few things:
You could set setVideoScalingMode(VIDEO_SCALING_MODE_SCALE_TO_FIT) on your MediaPlayer. I believe this will scale the video resolution to the size of the SurfaceTexture/Surface that it is attached to.
You could alternatively have four videos playing and render them to separate TEXTURE_EXTERNAL_OES textures, then render those four textures separately in GL. However, that could kill your performance.
If I saw the error message and some context of the code I could maybe provide some more information.
My Problem:
I have a video (with, let's say, 25 FPS) that has to be rendered with OpenGL ES 2.0 on the screen.
For reading the video I use a decoder that decodes it into OpenGL ES textures. With a render pass I draw these textures on the screen.
What I have to do is: get the image from the decoder, upload it to the GPU, call the shader program, and render the image on the screen. If the video has 25 FPS, I have to update the screen in 40ms steps (1000ms / 25 FPS).
In each step I have to do the following:
get the image from the decoder
push it to the gpu memory
render the screen
swap buffers
So far it is working.
Now it happens that the decoder sometimes takes longer than 40ms to decode a frame. This does not happen all the time, but it does happen.
A solution would be to build a cache: render, say, 5 images ahead before showing the first. This comes with a problem: it has to happen asynchronously, so that the cache can be built and the screen rendered at the same time. If the two interfere, you can see it in the video, because playback is no longer "fluid".
My Question:
Is there a solution for that?
Is it possible to create some kind of buffer that can be copied onto the back buffer of the render surface, so that I can build a cache of such buffers and copy one onto the back buffer without blocking the other thread that is creating them?
OR
How to fill the backbuffer with another buffer?
I tried already:
Rendering framebuffer textures as a cache. This works almost perfectly, except that the texture still has to be rendered. Because it's asynchronous, if a cache frame is being built while the image for the screen is being built, you have to mutex/synchronize the render methods, otherwise the program crashes. But synchronizing defeats the whole point of doing it asynchronously. So this is not a good solution.
Remember that in OpenGL, if you do not clear and redraw the screen, the previous image will persist. If a new frame is not ready in time, simply do nothing.
It sounds like you have two threads: one decoding frames, and one rendering them. This is fine.
If render() is called and a new frame is not ready in time, your render method should return immediately. Do not clear or swap buffers. The screen will be preserved.
Now, the user may notice occasional hiccups when a frame is shown twice. 25 fps does not divide evenly into the typical 60 Hz display refresh, so frames will not align perfectly with the screen refresh rate.
You could live with this (user likely won't notice). Or you could force playback to 30 fps by buffering frames.
A good idea is to place a message queue between your decoder and your renderer. It could be one or several frames deep, implemented as an array, linked list, or ring buffer. This allows the decoder to upload into several cached textures while the renderer draws from a different texture.
The decoder adds frames to the queue as they come in. The renderer runs at a fixed rate (30 fps). You could pause rendering until N frames have been buffered.
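A minimal sketch of such a queue in plain Java, with integer frame ids standing in for decoded textures (an assumption for illustration; the real entries would be texture names or buffer indices):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Decoder thread offers frames; renderer polls at its own fixed rate
// and re-shows the previous frame when the queue is momentarily empty.
class FrameQueue {
    private final ArrayBlockingQueue<Integer> queue; // frame ids stand in for textures
    private int lastShown = -1;

    FrameQueue(int capacity) { queue = new ArrayBlockingQueue<>(capacity); }

    // Decoder side: returns false when the buffer is full, so the
    // decoder can wait and retry instead of overrunning the cache.
    boolean offer(int frameId) { return queue.offer(frameId); }

    // Renderer side: never blocks; repeats the last frame on underrun.
    int nextToShow() {
        Integer f = queue.poll();
        if (f != null) lastShown = f;
        return lastShown;
    }
}
```

The renderer calls nextToShow() once per refresh; when the decoder falls behind, the same frame id comes back and the screen is simply not redrawn, which is exactly the "do nothing" behavior described above.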
I am developing an application in android that streams the live video from android to pc. I am capturing frame by frame video on Camera.onPreviewFrame() and then sending acquired byte[] YUV data to server using socket.
This method is working fine. The only problem I am facing is the frame rate: it is now 4-5 fps and I want to achieve 15-16 fps.
To achieve this, I am thinking of compressing this YUV data. Currently my app gives me frame of resolution 320 X 240. I want it to scale it down so that I can reduce the no. of bytes to send on the network. Is there any library or algorithm which can do this?
Is there any other way of streaming live video from android phone to pc?
I recommend resizing the YUV data (note that the maximum preview resolution varies between phones). A typical pipeline:
YUV -> another color space (RGBA, BGRA, ARGB, etc.)
resize the RGBA image using OpenCV or your own math
process it
resize it back up if needed
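As a sketch of the resizing step, here is a nearest-neighbour 2x downscale of the Y plane in plain Java (the interleaved VU plane of an NV21 preview frame would be halved the same way; this is illustrative, not a full scaler):

```java
// Nearest-neighbour 2x downscale of the Y plane of an NV21/YUV420
// camera preview frame: keep every other pixel of every other row.
// Halving both dimensions cuts the byte count to one quarter.
class YuvScale {
    static byte[] halveY(byte[] y, int width, int height) {
        int w2 = width / 2, h2 = height / 2;
        byte[] out = new byte[w2 * h2];
        for (int r = 0; r < h2; r++)
            for (int c = 0; c < w2; c++)
                out[r * w2 + c] = y[(r * 2) * width + c * 2];
        return out;
    }
}
```

Applied to the 320x240 frames above, this yields 160x120 and sends a quarter of the bytes; averaging each 2x2 block instead of sampling one pixel would look better at slightly more CPU cost.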