I am using a SurfaceTexture to get preview frames in the following way.
First, I set a preview texture:
camera.setPreviewTexture(new SurfaceTexture(0));
Then, just before starting the preview and then each time onPreviewFrame is called, I set the callback buffer like this:
camera.addCallbackBuffer(buffer);
camera.setPreviewCallbackWithBuffer(this);
It works. Sometimes, I take a picture using camera.takePicture(null, null, callback), which results in calling onPictureTaken successfully. The image is saved. Since I want to restart the preview after the picture has been taken, I do the following:
try
{
    camera.setPreviewTexture(new SurfaceTexture(0));
    camera.startPreview();
}
...
The preview restarts and everything seems to be fine. But the following error is reported in my Logcat, seemingly after the preview has been restarted:
E/BufferQueue﹕ [unnamed-5682-5] dequeueBuffer: min undequeued buffer count (2) exceeded (dequeued=5 undequeudCount=1)
Am I missing something? Should I release the old texture at some point?
Configuration: Samsung Galaxy S4, Samsung Galaxy S5, Nexus 5, running on Android KitKat.
EDIT: I am not sure whether it is linked or not, but after a while my app does not take pictures anymore, and the following messages appear continuously in my Logcat:
E/LocSvc_api_v02( 318): I/---> locClientSendReq line 2332 QMI_LOC_INJECT_SENSOR_DATA_REQ_V02
E/gsiff_dmn( 318): I/loc_api_resp_ind_callback: Received LocAPI Resp ind = 77
E/LocSvc_api_v02( 318): D/loc_sync_process_ind:172]: loc_sync_array not in use
E/LocSvc_utils_q( 318): D/msg_q_rcv: Received message 0xB899D940 rv = 0
E/gsiff_dmn( 318): I/gsiff_data_task: Handling message type = 4
E/gsiff_dmn( 318): I/gsiff_daemon_inject_sensor_data_handler: Sending Sensor Data to LocApi. opaque_id = 1226
E/LocSvc_api_v02( 318): I/---> locClientSendReq line 2332 QMI_LOC_INJECT_SENSOR_DATA_REQ_V02
E/gsiff_dmn( 318): I/loc_api_resp_ind_callback: Received LocAPI Resp ind = 77
E/LocSvc_api_v02( 318): D/loc_sync_process_ind:172]: loc_sync_array not in use
E/mm-camera( 284): [cpp_hardware_process_frame:997] Too many cpp frames dropped!!
E/mm-camera( 284): cpp_thread_handle_process_buf_event:224] get buffer fail. drop frame id:1845 identity:0x20002
W/QCamera2HWI( 269): [CHECK_BUF_LOCK] Too many preview buffer is locked by surfaceflinger : 29
E/mm-camera( 284): [cpp_hardware_process_frame:997] Too many cpp frames dropped!!
E/mm-camera( 284): cpp_thread_handle_process_buf_event:224] get buffer fail. drop frame id:1846 identity:0x20002
EDIT 2: If, instead of a new SurfaceTexture(0), I always use the same SurfaceTexture (that I keep as a member), then some errors disappear and the App continues to work. The min undequeued buffer count exceeded error and the Too many preview buffer is locked by surfaceflinger warning stay.
It seems that the camera is holding on to buffers that are not being dequeued by your activity. You have to find a way to clear the camera's buffers when you start a new preview.
As you can find in the Android documentation for the Camera class:
The buffer queue will be cleared if this method [setPreviewCallbackWithBuffer] is called with a null callback, setPreviewCallback(Camera.PreviewCallback) is called, or setOneShotPreviewCallback(Camera.PreviewCallback) is called.
So maybe it is enough to remove your callback when you take a picture and then reinstate it when you restart your preview.
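A minimal sketch of that suggestion, assuming the activity implements Camera.PreviewCallback and keeps the SurfaceTexture and callback buffer as members (mPreviewTexture, mCallbackBuffer, and MyActivity are hypothetical names, not the asker's actual code):
void takePictureAndRestart(final Camera camera) {
    // Passing null clears the callback buffer queue, per the Camera docs quoted above.
    camera.setPreviewCallbackWithBuffer(null);
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera cam) {
            // ... save the JPEG ...
            try {
                cam.setPreviewTexture(mPreviewTexture);            // reuse the same SurfaceTexture
                cam.addCallbackBuffer(mCallbackBuffer);            // hand a buffer back first
                cam.setPreviewCallbackWithBuffer(MyActivity.this); // re-register the callback
                cam.startPreview();
            } catch (IOException e) {
                Log.e("Preview", "Could not restart preview", e);
            }
        }
    });
}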
Android MediaCodec decoding takes a very long time, about 115 to 118 ms per frame, for an H.264 stream. The Android device has a Qualcomm Snapdragon 845 processor, so I assume the Android MediaCodec APIs target the Qualcomm GPU and not the ARM CPU cores.
Has anyone experienced an issue like this before, and can you provide guidance on how to make this decoding faster?
The code is all native, no Java at all. With no Java, I have no active window and no surface texture, so the Grafika examples don't help here. I am using Android P (9.0), API 28, NDK 19.2.5x.
Here's how my code is setup:
Step 1: I have two codec instances, configured on two separate threads, as follows:
codecData.codec = AMediaCodec_createDecoderByType("video/avc");
codecData.format_eye = AMediaFormat_new(); // assumed: the format must be created before it is populated
AMediaFormat_setString(codecData.format_eye, AMEDIAFORMAT_KEY_MIME, "video/avc");
AMediaFormat_setInt32(codecData.format_eye, AMEDIAFORMAT_KEY_HEIGHT, 1920);
AMediaFormat_setInt32(codecData.format_eye, AMEDIAFORMAT_KEY_WIDTH, 1080);
AMediaFormat_setFloat(codecData.format_eye, AMEDIAFORMAT_KEY_FRAME_RATE, 60.0f);
// (AMediaCodec_configure and AMediaCodec_start are assumed to follow; they are not shown in the question.)
Step 2: I enqueue the encoded buffers using the following calls, which take 14 to 17 ms for 60 FPS input, with two separate threads populating the individual codec queues:
bufIdx = AMediaCodec_dequeueInputBuffer(codecData.codec, -1); //-1 makes it blocking call
auto buf = AMediaCodec_getInputBuffer(codecData.codec, bufIdx, &bufSize);
uint64_t presentTime = presentTimer.getTimeUs();
memcpy(buf, data, size);
AMediaCodec_queueInputBuffer(codecData.codec, bufIdx, 0, size, presentTime, 0);
Step 3: I dequeue the decoded buffers as follows; this takes 115 to 118 ms per frame per codec at 60 FPS output. The dequeueing for both codecs is done by one consumer thread that goes through the two codec instances one at a time:
AMediaCodecBufferInfo info_eye;
bufIdx = AMediaCodec_dequeueOutputBuffer(codecData.codec, &info_eye, 1); // 1 microsecond timeout
// Note: negative return values (e.g. AMEDIACODEC_INFO_TRY_AGAIN_LATER) must be handled before the next call.
auto decodedBuf = AMediaCodec_getOutputBuffer(codecData.codec, bufIdx, &bufSize);
Step 4: The decoded buffer is then fed to an NV12-to-RGBA shader on the render thread, which populates the texture in about 2 ms. This texture is then displayed.
I am expecting 60 FPS but get only about 50 FPS because of the delays in Step 3, i.e. the 115 to 118 ms latency is killing me :-(
Any ideas? Appreciate any and all help.
When I encode a video via Surface -> MediaCodec -> MediaMuxer, I get a very strange result when testing on the Samsung Galaxy S7. For other devices tested (emulator with Marshmallow and HTC Desire), the video comes out correctly, but on this device the video is garbled.
The question Using MediaCodec to save series of images as Video shows similar video output, but I don't see how its solution could apply here, because I am using a Surface as input and set the color format to COLOR_FormatSurface.
I also tried messing with the video resolution (settled on 1280 x 720) per MediaCodec Encoded video has green bar at bottom and chrominance screwed up, but that didn't solve the problem either. (c.f. Nexus 7 2013 mediacodec video encoder garbled output)
Does anyone have suggestions for what I might try to get the video formatted correctly?
Here is part of the log from the encoding:
D/ViewRootImpl: #1 mView = android.widget.LinearLayout{1dc79f2 V.E...... ......I. 0,0-0,0 #102039c android:id/toast_layout_root}
I/ACodec: [] Now uninitialized
I/OMXClient: Using client-side OMX mux.
I/ACodec: [OMX.qcom.video.encoder.avc] Now Loaded
W/ACodec: [OMX.qcom.video.encoder.avc] storeMetaDataInBuffers (output) failed w/ err -1010
W/ACodec: do not know color format 0x7fa30c06 = 2141391878
W/ACodec: do not know color format 0x7fa30c04 = 2141391876
W/ACodec: do not know color format 0x7fa30c08 = 2141391880
W/ACodec: do not know color format 0x7fa30c07 = 2141391879
W/ACodec: do not know color format 0x7f000789 = 2130708361
D/ViewRootImpl: MSG_RESIZED_REPORT: ci=Rect(0, 0 - 0, 0) vi=Rect(0, 0 - 0, 0) or=1
I/ACodec: setupVideoEncoder succeeded
W/ACodec: do not know color format 0x7f000789 = 2130708361
I/ACodec: [OMX.qcom.video.encoder.avc] Now Loaded->Idle
I/ACodec: [OMX.qcom.video.encoder.avc] Now Idle->Executing
I/ACodec: [OMX.qcom.video.encoder.avc] Now Executing
I/MPEG4Writer: setStartTimestampUs: 0
I/MPEG4Writer: Earliest track starting time: 0
The 5th unrecognized color seems to be COLOR_FormatSurface... Is that a problem?
Other details:
MIME: video/avc
Resolution: 1280 x 720
Frame rate: 30
IFrame interval: 2
Bitrate: 8847360
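For reference, those details correspond to an encoder format roughly like the following (a hedged reconstruction, not the question's actual code); note that COLOR_FormatSurface is exactly the 0x7f000789 / 2130708361 value the log reports as unrecognized:
// Reconstructed from the parameters listed above.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);  // 2130708361
format.setInteger(MediaFormat.KEY_BIT_RATE, 8847360);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);
MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface();  // frames are drawn into this Surface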
Per Android Docs for MediaCodec.createInputSurface():
The Surface must be rendered with a hardware-accelerated API, such as
OpenGL ES. lockCanvas(android.graphics.Rect) may fail or produce
unexpected results.
I must have missed (or ignored) that while writing the code. Since I was using lockCanvas() to get a canvas to draw my video frames on, the code broke. I have put a quick fix in place by using lockHardwareCanvas() when the API level is >= 23 (it is unavailable before that, and the code ran fine on API level 19).
Long term however (for me and anyone else who might stumble across this), I may have to get into more OpenGL stuff for a more permanent and stable solution. It's not worth going that route though unless I find an example of a device which will not work with my quick fix.
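For reference, a minimal sketch of that quick fix, assuming inputSurface is the Surface returned by createInputSurface() and drawFrame() stands in for the app's own drawing code:
Canvas canvas;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    // Hardware-accelerated canvas; satisfies the createInputSurface() requirement quoted above.
    canvas = inputSurface.lockHardwareCanvas();
} else {
    // Software canvas; worked on the API 19 test device but may misbehave on some encoders.
    canvas = inputSurface.lockCanvas(null);
}
try {
    drawFrame(canvas);   // hypothetical application drawing code
} finally {
    inputSurface.unlockCanvasAndPost(canvas);
}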
If you are still looking for an example of rendering bitmaps to an input Surface, I was able to get this to work.
Look at my answers here:
https://stackoverflow.com/a/49331192/7602598
https://stackoverflow.com/a/49331352/7602598
https://stackoverflow.com/a/49331295/7602598
I am trying to save image sequences with a fixed frame rate (preferably up to 30) on an Android device with FULL capability for camera2 (Galaxy S7), but I am unable to (a) get a steady frame rate, or (b) reach even 20 fps (with JPEG encoding). I have already included the suggestions from Android camera2 capture burst is too slow.
The minimum frame duration for JPEG is 33.33 milliseconds (for resolutions below 1920x1080) according to
characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputMinFrameDuration(ImageFormat.JPEG, size);
and the stall duration is 0 ms for every size (similarly for YUV_420_888).
My capture builder looks as follows:
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CONTROL_AE_MODE_OFF);
captureBuilder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, _exp_time);
captureBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
captureBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, _iso_value);
captureBuilder.set(CaptureRequest.LENS_FOCUS_DISTANCE, _foc_dist);
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CONTROL_AF_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AWB_MODE, _wb_value);
// https://stackoverflow.com/questions/29265126/android-camera2-capture-burst-is-too-slow
captureBuilder.set(CaptureRequest.EDGE_MODE,CaptureRequest.EDGE_MODE_OFF);
captureBuilder.set(CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE, CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
captureBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF);
captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
// Orientation
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION,ORIENTATIONS.get(rotation));
The focus distance is set to 0.0 (infinity), ISO to 100, and the exposure time to 5 ms. The white balance can be set to OFF/AUTO/any value; it does not affect the times below.
I start the capture session with the following command:
session.setRepeatingRequest(_capReq.build(), captureListener, mBackgroundHandler);
Note: it makes no difference whether I use a repeating request or a repeating burst.
In the preview (with only the texture surface attached), everything runs at 30 fps.
However, things change as soon as I attach an ImageReader (with its listener running on a HandlerThread), which I instantiate as follows (without saving anything, only measuring the time between frames):
reader = ImageReader.newInstance(_img_width, _img_height, ImageFormat.JPEG, 2);
reader.setOnImageAvailableListener(readerListener, mBackgroundHandler);
With time-measuring code:
ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader myreader) {
        Image image = myreader.acquireNextImage();
        if (image == null) {
            return;
        }
        long curr = image.getTimestamp();
        Log.d("curr - last_ts", "" + ((curr - last_ts) / 1000000) + " ms");
        last_ts = curr;
        image.close();
    }
};
I get periodically repeating time differences like this:
99 ms - 66 ms - 66 ms - 99 ms - 66 ms - 66 ms ...
I do not understand why these frames take two or three times the minimum duration the stream configuration map advertises for JPEG. The exposure time is well below the frame duration of 33 ms. Is there some other internal processing happening that I am not aware of?
I tried the same with the YUV_420_888 format, which resulted in constant time differences of 33 ms. The problem I have there is that the phone lacks the bandwidth to store the images fast enough (I tried the method described in How to save a YUV_420_888 image?). If you know of any way to compress or encode these images quickly enough myself, please let me know.
Edit: From the documentation of getOutputStallDuration: "In other words, using a repeating YUV request would result in a steady frame rate (let's say it's 30 FPS). If a single JPEG request is submitted periodically, the frame rate will stay at 30 FPS (as long as we wait for the previous JPEG to return each time). If we try to submit a repeating YUV + JPEG request, then the frame rate will drop from 30 FPS." Does this imply that I need to periodically request a single capture()?
Edit2: From https://developer.android.com/reference/android/hardware/camera2/CaptureRequest.html: "The necessary information for the application, given the model above, is provided via the android.scaler.streamConfigurationMap field using getOutputMinFrameDuration(int, Size). These are used to determine the maximum frame rate / minimum frame duration that is possible for a given stream configuration.
Specifically, the application can use the following rules to determine the minimum frame duration it can request from the camera device:
Let the set of currently configured input/output streams be called S.
Find the minimum frame durations for each stream in S, by looking it up in android.scaler.streamConfigurationMap using getOutputMinFrameDuration(int, Size) (with its respective size/format). Let this set of frame durations be called F.
For any given request R, the minimum frame duration allowed for R is the maximum out of all values in F. Let the streams used in R be called S_r.
If none of the streams in S_r have a stall time (listed in getOutputStallDuration(int, Size) using its respective size/format), then the frame duration in F determines the steady state frame rate that the application will get if it uses R as a repeating request."
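Applied to the configuration above, that rule would look roughly like this (a hedged sketch; characteristics is the CameraCharacteristics already used earlier, while previewSize and jpegSize stand in for the actual stream sizes):
StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
// One entry of F per configured stream: the preview (SurfaceTexture) and the JPEG reader.
long previewMinNs = map.getOutputMinFrameDuration(SurfaceTexture.class, previewSize);
long jpegMinNs = map.getOutputMinFrameDuration(ImageFormat.JPEG, jpegSize);
// The request's minimum frame duration is the maximum over all streams it targets.
long minFrameDurationNs = Math.max(previewMinNs, jpegMinNs);   // ~33_333_333 ns for 30 fps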
JPEG output is by no means the fastest way to fetch frames. You can accomplish this a lot faster by drawing the frames directly onto a quad using OpenGL.
For burst capture, a faster solution would be capturing the images to RAM without encoding them, then encoding and saving them asynchronously.
On this website you can find a lot of excellent code related to Android multimedia in general.
This specific program uses OpenGL to fetch the pixel data from an MPEG video. It's not difficult to use the camera as input instead of a video. You can basically use the texture used in the CodecOutputSurface class from the mentioned program as output texture for your capture request.
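A hedged sketch of that wiring for camera2 (cameraDevice is assumed to be an open CameraDevice, and textureId a GL_TEXTURE_EXTERNAL_OES texture created on the render thread, as in CodecOutputSurface):
SurfaceTexture camTexture = new SurfaceTexture(textureId);
camTexture.setDefaultBufferSize(1920, 1080);
Surface camSurface = new Surface(camTexture);

CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(camSurface);
// After each onFrameAvailable(), call camTexture.updateTexImage() on the GL thread,
// then sample the external texture in a shader or read the pixels back with glReadPixels().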
A possible solution I found consists of dumping the YUV data without encoding it as JPEG, in combination with a microSD card that can write up to 95 MB per second. (I had the misconception that YUV images would be larger; with a phone that fully supports the camera2 pipeline, the write speed becomes the limiting factor.)
With this setup, I was able to achieve the following stable rates:
1920x1080 at 15 fps (approx. 4 MB * 15 == 60 MB/s)
960x720 at 30 fps (approx. 1.5 MB * 30 == 45 MB/s)
I then convert the images offline from YUV to PNG using a Python script.
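The dump step itself can be as simple as writing the raw plane buffers to disk. A hedged sketch (stride handling is deliberately ignored here, so the offline conversion must account for row and pixel strides, or the planes must be repacked before writing):
// Writes the raw Y, U, and V plane buffers of a YUV_420_888 Image to disk, one after another.
void dumpYuv(Image image, File outFile) throws IOException {
    try (FileOutputStream fos = new FileOutputStream(outFile)) {
        for (Image.Plane plane : image.getPlanes()) {
            ByteBuffer buf = plane.getBuffer();
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            fos.write(bytes);   // note: rowStride/pixelStride padding is written as-is
        }
    } finally {
        image.close();
    }
}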
I'm processing a live stream via MediaCodec and have a scenario where the MediaFormat changes mid-stream (i.e. the resolution of the video being decoded changes). Since I'm attaching the decoder to a Surface for rendering, as soon as I detect the change in resolution on the incoming stream I recreate the decoder before feeding it the new-resolution buffers (providing it with the proper new MediaFormat).
I've been getting some weird errors that don't give me much information about what could be wrong, e.g. when calling MediaCodec.configure with the new format and the same Surface:
android.media.MediaCodec$CodecException: Error 0xffffffea
at android.media.MediaCodec.native_configure(Native Method)
at android.media.MediaCodec.configure(MediaCodec.java:577)
Fetching CodecException.getDiagnosticInfo() shows nothing that I can really use to understand the reason for the failure: android.media.MediaCodec.error_neg_22
I've also noticed the following in the logs and found some related information, and I am wondering whether there's something I need to do regarding the Surface itself (like detaching it from the old decoder instance before giving it to the new one):
07-09 15:00:17.217 E/BufferQueueProducer( 139): [SurfaceView] connect(P): already connected (cur=3 req=3)
07-09 15:00:17.217 E/MediaCodec( 5388): native_window_api_connect returned an error: Invalid argument (-22)
07-09 15:00:17.218 E/MediaCodec( 5388): configure failed with err 0xffffffea, resetting...
Looks like calling stop() and release(), as well as reinitializing any references I had to getInputBuffers() and getOutputBuffers(), did the trick. At least I don't get the messages/exceptions anymore. Now I just need to figure out the Surface reference part, as it seems the resized stream (when the resolution changes) is still being fitted into the original surface dimensions instead of the Surface adjusting to the new resolution.
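For what it's worth, a minimal sketch of that teardown-and-recreate sequence (decoder is an assumed field; error handling omitted):
void recreateDecoder(MediaFormat newFormat, Surface surface) throws IOException {
    if (decoder != null) {
        decoder.stop();
        decoder.release();   // fully disconnects the codec from the Surface
        decoder = null;      // also drop any cached getInputBuffers()/getOutputBuffers() arrays
    }
    decoder = MediaCodec.createDecoderByType(newFormat.getString(MediaFormat.KEY_MIME));
    decoder.configure(newFormat, surface, null, 0);
    decoder.start();
}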
If your decoder supports adaptive playback, then apparently you can alter some codec parameters on the fly:
https://stackoverflow.com/a/34427724/1048170
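A hedged sketch of checking for adaptive playback support before configuring the decoder (API 19+; decoder, format, surface, and the "video/avc" MIME type are assumptions):
MediaCodecInfo.CodecCapabilities caps =
        decoder.getCodecInfo().getCapabilitiesForType("video/avc");
if (caps.isFeatureSupported(MediaCodecInfo.CodecCapabilities.FEATURE_AdaptivePlayback)) {
    // Tell the codec the largest resolution it may be asked to handle, so resolution
    // changes within the stream do not require recreating it.
    format.setInteger(MediaFormat.KEY_MAX_WIDTH, 1920);
    format.setInteger(MediaFormat.KEY_MAX_HEIGHT, 1080);
}
decoder.configure(format, surface, null, 0);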
I'm trying to get the following MediaExtractor example to work:
http://bigflake.com/mediacodec/ - ExtractMpegFramesTest.java (requires 4.1, API 16)
The problem I have is that outputSurface.awaitNewImage(); seems to always throw RuntimeException("frame wait timed out"), which is thrown whenever the mFrameSyncObject.wait(TIMEOUT_MS) call times out. No matter what I set TIMEOUT_MS to be, onFrameAvailable() always gets called right after the timeout occurs. I tried with 50ms and with 30000ms and it's the same.
It seems that the onFrameAvailable() call cannot be delivered while the thread is busy waiting; only once the timeout ends that wait can the thread process the pending onFrameAvailable() call.
Has anyone managed to get this example to work, or knows how MediaExtractor is supposed to work with GL textures?
Edit: I tried this on devices running Android 4.4 and 4.1.1, and the same thing happens on both.
Edit 2:
Got it working on 4.4 thanks to fadden. The issue was that the ExtractMpegFramesWrapper.runTest() method called th.join();, which blocked the main thread and prevented the onFrameAvailable() call from being processed. Once I commented out th.join();, it works on 4.4. I guess the ExtractMpegFramesWrapper.runTest() itself was supposed to run on yet another thread so the main thread didn't get blocked.
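A hedged sketch of that wrapper pattern (extractMpegFrames() and TAG are hypothetical placeholders for the decode loop and log tag):
// Run the extraction on a plain Thread, which has no Looper, so the SurfaceTexture
// frame-available events fall back to being delivered on the main thread.
Thread worker = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            extractMpegFrames();   // hypothetical method containing the decode loop
        } catch (Throwable t) {
            Log.e(TAG, "extraction failed", t);
        }
    }
});
worker.start();
// Do not join() this thread from the thread whose Looper delivers onFrameAvailable(),
// or the "new frame" message will never be processed.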
There was also a small issue on 4.1.2 when calling codec.configure(), it gave the error:
A/ACodec(2566): frameworks/av/media/libstagefright/ACodec.cpp:1041 CHECK(def.nBufferSize >= size) failed.
A/libc(2566): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 2625 (CodecLooper)
Which I solved by adding the following before the call:
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
However the problem I have now on both 4.1.1 (Galaxy S2 GT-I9100) and 4.1.2 (Samsung Galaxy Tab GT-P3110) is that they both always set info.size to 0 for all frames. Here is the log output:
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
input buffer not available
no output from decoder available
loop
submitted frame 0 to dec, size=20562
no output from decoder available
loop
submitted frame 1 to dec, size=7193
no output from decoder available
loop
[... skipped 18 lines ...]
submitted frame 8 to dec, size=6531
no output from decoder available
loop
submitted frame 9 to dec, size=5639
decoder output format changed: {height=240, what=1869968451, color-format=19, slice-height=240, crop-left=0, width=320, crop-bottom=239, crop-top=0, mime=video/raw, stride=320, crop-right=319}
loop
submitted frame 10 to dec, size=6272
surface decoder given buffer 0 (size=0)
loop
[... skipped 1211 lines ...]
submitted frame 409 to dec, size=456
surface decoder given buffer 1 (size=0)
loop
sent input EOS
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
loop
surface decoder given buffer 1 (size=0)
loop
[... skipped 27 lines all with size=0 ...]
surface decoder given buffer 1 (size=0)
loop
surface decoder given buffer 0 (size=0)
output EOS
Saving 0 frames took ? us per frame // edited to avoid division-by-zero error
So no images get saved. However, the same code and video work on 4.3. The video I am using is an .mp4 file with the "H264 - MPEG-4 AVC (avc1)" video codec and the "MPEG AAC Audio (mp4a)" audio codec.
I also tried other video formats, but they seem to die even sooner on 4.1.x, while both work on 4.3.
Edit 3:
I did as you suggested, and it seems to save the frame images correctly. Thank you.
Regarding KEY_MAX_INPUT_SIZE, I tried not setting it, or setting it to 0, 20, 200, ..., 200000000, all with the same result of info.size=0.
I am now unable to render to a SurfaceView or TextureView in my layout. I tried replacing this line:
mSurfaceTexture = new SurfaceTexture(mTextureRender.getTextureId());
with the following, where textureView is a TextureView defined in my XML layout:
mSurfaceTexture = textureView.getSurfaceTexture();
mSurfaceTexture.attachToGLContext(mTextureRender.getTextureId());
but it throws a weird error with getMessage()==null on the second line. I couldn't find any other way to get it to draw on a View of some kind. How can I change the decoder to display the frames on a Surface/SurfaceView/TextureView instead of saving them?
The way SurfaceTexture works makes this a bit tricky to get right.
The docs say the frame-available callback "is called on an arbitrary thread". The SurfaceTexture class has a bit of code that does the following when initializing (line 318):
if (this thread has a looper) {
    handle events on this thread
} else if (there's a "main" looper) {
    handle events on the main UI thread
} else {
    no events for you
}
The frame-available events are delivered to your app through the usual Looper / Handler mechanism. That mechanism is just a message queue, which means the thread needs to be sitting in the Looper event loop waiting for them to arrive. The trouble is, if you're sleeping in awaitNewImage(), you're not watching the Looper queue. So the event arrives, but nobody sees it. Eventually awaitNewImage() times out, and the thread returns to watching the event queue, where it immediately discovers the pending "new frame" message.
So the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage(). In the ExtractMpegFramesTest example, this is done by running the test in a newly-created thread (see the ExtractMpegFramesWrapper class), which does not have a Looper. (For some reason the thread that executes CTS tests has a looper.) The frame-available events arrive on the main UI thread.
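On API 21+ you can make that explicit by handing the listener its own Handler (a hedged sketch; surfaceTexture and frameAvailableListener are assumed to exist already, and on older releases the equivalent is to construct the SurfaceTexture on a thread that runs its own Looper):
// Frame-available events are delivered on frameThread, so the thread blocked in
// awaitNewImage() never has to service them itself.
HandlerThread frameThread = new HandlerThread("frame-callback");
frameThread.start();
Handler frameHandler = new Handler(frameThread.getLooper());
surfaceTexture.setOnFrameAvailableListener(frameAvailableListener, frameHandler);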
Update (for "edit 3"): I'm a bit sad that ignoring the "size" field helped, but pre-4.3 it's hard to predict how devices will behave.
If you just want to display the frame, pass the Surface you get from the SurfaceView or TextureView into the MediaCodec decoder configure() call. Then you don't have to mess with SurfaceTexture at all -- the frames will be displayed as you decode them. See the two "Play video" activities in Grafika for examples.
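A hedged sketch of that approach for a TextureView (textureView, format, and the "video/avc" MIME type are assumptions; for a SurfaceView you would pass surfaceHolder.getSurface() instead):
// Decode directly to the view's Surface so frames appear as they are released.
SurfaceTexture st = textureView.getSurfaceTexture();   // valid once onSurfaceTextureAvailable() fires
Surface surface = new Surface(st);
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, surface, null, 0);
decoder.start();
// ... feed input buffers as before; to show a decoded frame, release it with render == true:
decoder.releaseOutputBuffer(outputBufferIndex, true);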
If you really want to go through a SurfaceTexture, you need to change CodecOutputSurface to render to a window surface rather than a pbuffer. (The off-screen rendering is done so we can use glReadPixels() in a headless test.)