glReadPixels returns 0 - Android

I am working on screen capture and encoding on Android 5.0.
I want to control the frame rate, and I read the post below:
Controlling Frame Rate of VirtualDisplay
I created a SurfaceTexture and a Surface initialised from the SurfaceTexture, and passed the Surface to createVirtualDisplay.
The onFrameAvailable callback fires, almost 60 times per second.
But when I try to save a frame, the data returned by glReadPixels is all zeros.
Does anyone know what is going wrong? Any help would be greatly appreciated!
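For reference, here is a minimal sketch of the pipeline described above (all names are illustrative, not from the original post). A common cause of all-zero data is that glReadPixels() reads from the currently bound framebuffer, so the external texture latched by updateTexImage() must first be drawn into an FBO (or a window surface) before reading back:

int texId = createExternalOesTexture();            // hypothetical GLES helper
SurfaceTexture surfaceTexture = new SurfaceTexture(texId);
Surface surface = new Surface(surfaceTexture);
mMediaProjection.createVirtualDisplay("capture", width, height, dpi,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR, surface, null, null);

surfaceTexture.setOnFrameAvailableListener(st -> {
    // This must run on the thread that owns the EGL context.
    st.updateTexImage();                           // latch the new frame into texId
    drawExternalTextureToFbo(texId);               // hypothetical draw into a bound FBO
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
});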

Related

MediaCodec Encoding camera surface presentationTime not uniform

I am encoding raw video (1080p) from the camera preview using the MediaCodec class in asynchronous mode. I read the presentation time using the MediaCodec.BufferInfo.presentationTimeUs parameter.
void onOutputBufferAvailable(MediaCodec codec, int index, MediaCodec.BufferInfo info)
I have set the target FPS to 30, so I am expecting a frame every 33 milliseconds. However, the presentation time is never uniform and jumps up and down. Has anyone faced a similar issue?
See the graph below. It plots the time between two consecutive video frames' presentation times (Y-axis, in microseconds); the X-axis is the sample index.
Graph plot of video presentation time
Thank you,
Ajay
OpenGL rendering, using the Graphika sample app from Google as reference, gave much smoother presentation timestamps.
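For context, when frames are fed to an encoder's input Surface with OpenGL, the presentation time of each frame is set explicitly just before the buffer swap; this is the call that Grafika's WindowSurface.setPresentationTime() wraps (variable names here are illustrative):

EGLExt.eglPresentationTimeANDROID(eglDisplay, encoderEglSurface, presentationTimeNs);
EGL14.eglSwapBuffers(eglDisplay, encoderEglSurface);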

Controlling Frame Rate of VirtualDisplay

I'm writing an Android application, and in it, I have a VirtualDisplay to mirror what is on the screen; I then send the frames from the screen to an instance of MediaCodec. It works, but I want to add a way of specifying the FPS of the encoded video, and I'm unsure how to do so.
From what I've read and experimented with, dropping encoded frames (based on the presentation times) doesn't work well: it ends up with blocky, artifact-ridden video rather than a smooth video at a lower frame rate. Other reading suggests that the only way to do what I want (limit the FPS) would be to limit the incoming FPS to the MediaCodec, but the VirtualDisplay just receives a Surface, which is constructed from the MediaCodec as below:
mSurface = <instance of MediaCodec>.createInputSurface();
mVirtualDisplay = mMediaProjection.createVirtualDisplay(
        "MyDisplay",
        screenWidth,
        screenHeight,
        screenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mSurface,
        null,
        null);
I've also tried subclassing Surface to limit the frames that are fed to the MediaCodec via unlockCanvasAndPost(Canvas canvas), but that function never seems to be called on my instance. There may be some weirdness in how I extended Surface and how it interacts with the Parcel, since writeToParcel is called on my instance, but that is the only function that is called (as far as I can tell).
Other reading suggests that I can go from encoder -> decoder -> encoder and limit the rate in which the second encoder is fed frames, but that's a lot of extra computation that I'd rather not do if I can avoid it.
Has anyone successfully limited the rate at which a VirtualDisplay feeds its Surface? Any help would be greatly appreciated!
Starting off with what you can't do...
You can't drop content from the encoded stream. Most of the frames in the encoded stream are essentially "diffs" from other frames. Without knowing how the frames interact, you can't safely drop content, and will end up with that corrupted macroblock look.
You can't specify the frame rate to the MediaCodec encoder. It might stuff that into metadata somewhere, but the only thing that really matters to the codec is the frames you're feeding into it, and the presentation time stamps associated with each frame. The encoder will not drop frames.
You can't do anything useful by subclassing Surface. The Canvas operations are only used for software rendering, which is unrelated to feeding in frames from a camera or virtual display.
What you can do is send the frames to an intermediate Surface, and then choose whether or not to forward them to the MediaCodec's input Surface. One approach would be to create a SurfaceTexture, construct a Surface from it, and pass that to the virtual display. When the SurfaceTexture's frame-available callback fires, you either ignore it, or render the texture onto the MediaCodec input Surface with GLES.
Various examples can be found in Grafika and on bigflake, none of which are an exact fit, but all of the necessary EGL and GLES classes are there.
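A hedged sketch of that approach (names are illustrative; the EGL plumbing is what Grafika's EglCore/WindowSurface classes provide):

int texId = createExternalOesTexture();        // hypothetical GLES helper
final int[] frameCount = {0};
SurfaceTexture st = new SurfaceTexture(texId);
st.setDefaultBufferSize(screenWidth, screenHeight);
Surface displaySurface = new Surface(st);      // pass THIS to createVirtualDisplay()

st.setOnFrameAvailableListener(texture -> {
    texture.updateTexImage();                  // always latch, even when dropping
    if (frameCount[0]++ % 2 != 0) {
        return;                                // drop every other frame -> ~30 fps
    }
    // Make the EGL surface wrapping the encoder's input Surface current,
    // draw texId full-screen, set the timestamp, then swap to submit:
    encoderWindowSurface.makeCurrent();        // Grafika-style WindowSurface
    drawExternalTexture(texId);                // hypothetical GLES draw
    encoderWindowSurface.setPresentationTime(texture.getTimestamp());
    encoderWindowSurface.swapBuffers();
});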
You can look at the code samples from saki4510t's ScreenRecordingSample or RyanRQ's ScreenRecoder; they both use an additional EGL texture between the virtual display and the media encoder, and the first one can keep at least 15 fps for the output video. You can search for the keyword createVirtualDisplay in their code bases for more details.

Low performance when executing eglSwapBuffers and eglMakeCurrent

I'm developing an Android Unity plugin that allows the user to record his/her gameplay.
Overview of my solution:
Use an OpenGL framebuffer object (FBO) to make Unity render off-screen to this FBO
Get the off-screen texture of this FBO, then use it for two purposes:
Render to the video surface
Redraw to the device screen
Execution flow per frame:
bind my FBO
render the scene to the FBO (Unity code)
unbind my FBO
set up the video surface
configure the surface size (first time only)
save the EGL state
make the video surface current
draw to the video surface using my FBO's off-screen texture
restore the default surface
set the presentation time on the video frame
swap buffers from the video surface to the default window
restore the EGL state
make the default surface current
notify the encoder thread that data is ready to write
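A hedged sketch of this per-frame flow (illustrative names; the EGL state save/restore is spelled out with EGL14 calls):

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboId);
renderScene();                                   // Unity renders the scene here
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

// save the current EGL state
EGLDisplay dpy = EGL14.eglGetCurrentDisplay();
EGLSurface savedDraw = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
EGLSurface savedRead = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
EGLContext savedCtx = EGL14.eglGetCurrentContext();

// draw the FBO texture to the video surface and submit the frame
EGL14.eglMakeCurrent(dpy, videoEglSurface, videoEglSurface, eglContext);
drawTexture(fboTextureId);                       // hypothetical full-screen blit
EGLExt.eglPresentationTimeANDROID(dpy, videoEglSurface, ptsNanos);
EGL14.eglSwapBuffers(dpy, videoEglSurface);

// restore the saved EGL state and continue rendering to the screen
EGL14.eglMakeCurrent(dpy, savedDraw, savedRead, savedCtx);
encoderThread.notifyFrameReady();                // hypothetical signal to the writer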
My issue is that performance while recording is not good: FPS drops from 60 to 40 on a Samsung Galaxy S4. I measured the execution time of the render operations and found that the operations that hurt performance most are the "make video surface current" operation and the "swap buffers from video surface to default window" operation. Below is their code:
public void makeCurrent() {
    if (!EGL14.eglMakeCurrent(this.mEGLDisplay, this.mEGLSurface, this.mEGLSurface, this.mEGLContext)) {
        throw new RuntimeException("eglMakeCurrent failed");
    }
}

public boolean swapBuffers() {
    return EGL14.eglSwapBuffers(this.mEGLDisplay, this.mEGLSurface);
}
Execution time of the make-current operation: 1-18 ms
Execution time of the swap-buffers operation: 4-14 ms
Execution time of the other operations: usually 0-1 ms
How can I improve the performance of these operations?
Any help will be greatly appreciated!
Many OpenGL calls are asynchronous, and some calls make the driver wait for the queued operations to execute. So the times you are seeing mostly reflect other calls that were queued before the call you are actually measuring.
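One way to verify this (a sketch, not from the original answer) is to drain the GPU queue with glFinish() before taking a timestamp, so the measured interval reflects only the call under test. Note that glFinish() is itself expensive and should only be used for measurement:

long t0 = System.nanoTime();
GLES20.glFinish();                               // wait for previously queued GL work
long t1 = System.nanoTime();
EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);
long t2 = System.nanoTime();
Log.d(TAG, "queued work: " + (t1 - t0) / 1000000 + " ms, "
        + "swap itself: " + (t2 - t1) / 1000000 + " ms");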

SurfaceTexture's onFrameAvailable() method always called too late

I'm trying to get the following MediaExtractor example to work:
http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)
The problem I have is that outputSurface.awaitNewImage(); seems to always throw RuntimeException("frame wait timed out"), which is thrown whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to be, onFrameAvailable() always gets called right after the timeout occurs. I tried with 50ms and with 30000ms and it's the same.
It seems like the onFrameAvailable() call can't be processed while the thread is busy, and only once the timeout ends the thread's code execution can it process the onFrameAvailable() call.
Has anyone managed to get this example to work, or knows how MediaExtractor is supposed to work with GL textures?
Edit: tried this on devices running Android 4.4 and 4.1.1 and the same happens on both.
Edit 2:
Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join(), which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented out th.join(), it works on 4.4. I guess the ExtractMpegFramesWrapper.runTest() itself was supposed to run on yet another thread so the main thread didn't get blocked.
There was also a small issue on 4.1.2 when calling codec.configure(), it gave the error:
A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)
Which I solved by adding the following before the call:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
However the problem I have now on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they both always set info.size to 0 for all frames. Here is the log output:
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error
So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with the "H264 - MPEG-4 AVC (avc1)" video codec and the "MPEG AAC Audio (mp4a)" audio codec.
I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.
Edit 3:
I did as you suggested, and it seems to save the frame images correctly. Thank you.
Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, or setting it to 0, 20, 200, ... 200000000, all with the same result of info.size=0.
I am now unable to render to a SurfaceView or TextureView in my layout. I tried replacing this line:
mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());
with this, where surfaceTexture is a SurfaceTexture defined in my xml-layout:
mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());
but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?
The way SurfaceTexture works makes this a bit tricky to get right.
The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):
if (this thread has a looper) {
    handle events on this thread
} else if (there's a "main" looper) {
    handle events on the main UI thread
} else {
    no events for you
}
The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
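A minimal sketch of that arrangement (the method name is hypothetical):

Thread worker = new Thread(() -> {
    // No Looper.prepare() here: this thread must not own a Looper, so the
    // frame-available events fall through to the main thread's Looper.
    runExtractionTest();   // contains the decode loop with awaitNewImage()
});
worker.start();
// Do NOT block here with worker.join(): a blocked main Looper can never
// process the pending onFrameAvailable() message.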
Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.
If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
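A minimal sketch of that, assuming a TextureView named textureView whose SurfaceTexture is already available:

Surface outputSurface = new Surface(textureView.getSurfaceTexture());
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc"); // may throw IOException
decoder.configure(format, outputSurface, null, 0);   // decode straight to the view
decoder.start();
// ...each frame is then displayed simply by releasing it to the Surface:
// decoder.releaseOutputBuffer(outputBufferIndex, true);   // true = render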
If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)

Android Camera onPreviewFrame frame rate not consistent

I am trying to encode a 30 frames per second video using MediaCodec through the Camera's preview callback (onPreviewFrame). The video that I encode always plays back too fast (this is not desired).
So, I tried to check the number of frames coming into my camera's preview by setting up an int frameCount variable to count them. What I am expecting is 30 frames per second, because I set up my camera's preview at 30 fps (as shown below), but the result I get back is not the same.
I let the onPreviewFrame callback run for 10 seconds, and the frameCount I get back is only about 100 frames. This is bad because I am expecting 300 frames. Are my camera parameters set up correctly? Is this a limitation of Android's camera preview callback? And if so, is there any other camera callback that can return the camera's image data (NV21, YUV, YV12) at 30 frames per second?
Thanks for reading and taking your time to help out. I would appreciate any comments and opinions.
Here is an example an encoded video using Camera's onPreviewFrame:
http://www.youtube.com/watch?v=I1Eg2bvrHLM&feature=youtu.be
Camera.Parameters parameters = mCamera.getParameters();
parameters.setPreviewFormat(ImageFormat.NV21);
parameters.setPictureSize(previewWidth,previewHeight);
parameters.setPreviewSize(previewWidth, previewHeight);
// parameters.setPreviewFpsRange(30000,30000);
parameters.setPreviewFrameRate(30);
mCamera.setParameters(parameters);
mCamera.setPreviewCallback(previewCallback);
mCamera.setPreviewDisplay(holder);
No, the Android camera does not guarantee a stable frame rate, especially at 30 FPS. For example, it may choose a longer exposure in low-light conditions.
But there are some ways we, app developers, can make things worse.
First, by using setPreviewCallback() instead of setPreviewCallbackWithBuffer(). This may cause unnecessary pressure on the garbage collector; see the sketch below.
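A sketch of the buffered variant (buffer size assumes the NV21 format set in the question):

int bufSize = previewWidth * previewHeight
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
for (int i = 0; i < 3; i++) {
    mCamera.addCallbackBuffer(new byte[bufSize]);    // pre-allocate, no per-frame GC
}
mCamera.setPreviewCallbackWithBuffer((data, camera) -> {
    // ...hand 'data' to the encoder...
    camera.addCallbackBuffer(data);                  // recycle the buffer
});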
Second, if onPreviewFrame() arrives on the main (UI) thread, any UI action will directly delay the arrival of camera frames. To keep onPreviewFrame() on a separate thread, you should open() the camera on a secondary Looper thread, as sketched below. I explained in detail how this can be achieved here: Best use of HandlerThread over other similar classes.
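A sketch of that setup, reusing the Camera field from the question:

HandlerThread cameraThread = new HandlerThread("CameraHandlerThread");
cameraThread.start();
new Handler(cameraThread.getLooper()).post(() -> {
    mCamera = Camera.open();   // onPreviewFrame() will now arrive on cameraThread
    // ...set parameters, setPreviewCallbackWithBuffer(), startPreview() here...
});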
Third, check that processing time is less than 20ms.
