How to get the underlying buffer of an EGLImage? - android

I want to implement OMX_UseEGLImage in my native OpenMAX component on Android, but how do I get the underlying buffer associated with the EGLImage passed in as eglImage?
The client API will create an EGLImage and call OMX_UseEGLImage to tell my native OpenMAX component to use it:
eglImage = eglCreateImageKHR(
        m_egl_display,
        m_egl_context,
        EGL_GL_TEXTURE_2D_KHR,
        (EGLClientBuffer)(egl_buffer->texture_id),
        &attrib);
OMX_UseEGLImage(hComponent, ppBufferHdr, nPortIndex, pAppPrivate, eglImage);
The problem is: how can I use eglImage? Is there any way to get the underlying buffer associated with it?

I think that OMX_UseEGLImage is really only applicable to a renderer component.
For example, consider two components, a decoder and a renderer, with tunneled communication: the decoder's output port is connected to the renderer's input port via a tunnel, and the decoder's output port is the buffer supplier.
At the transition from OMX_StateLoaded to OMX_StateIdle:
The decoder creates a native buffer:
    android::GraphicBuffer *buffer = new android::GraphicBuffer();
    android_native_buffer_t *native_buffer = buffer->getNativeBuffer();
The decoder creates an EGLImage:
    EGLImageKHR egl_image = eglCreateImageKHR((EGLClientBuffer)native_buffer);
The decoder calls, on the tunneled port:
    OMX_UseEGLImage(&buffer_header, egl_image);
The renderer allocates a buffer_header and remembers egl_image.
In the state OMX_StateIdle:
The decoder knows the correspondence between the native buffer, buffer_header and egl_image.
The renderer knows the correspondence between the buffer_header and egl_image.
In the state OMX_StateExecuting:
The decoder writes frames into the native buffer and calls OMX_EmptyThisBuffer(buffer_header) on the tunneled port.
The renderer calls glEGLImageTargetTexture2DOES(egl_image) to draw the frames.
At the transition from OMX_StateIdle to OMX_StateLoaded:
The decoder calls OMX_FreeBuffer(buffer_header) on the tunneled port.
The renderer frees the buffer_header.
The decoder calls eglDestroyImageKHR(egl_image).
The decoder deletes the native buffer.
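To answer the original question more directly: OpenMAX IL never hands the component a CPU pointer for an EGLImage; the component is expected to consume the image through EGL/GLES, as in the renderer step above. Below is a minimal sketch of that renderer-side step, assuming an EGL context is already current on the calling thread (the function and variable names are illustrative, not from any particular OMX implementation):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Bind the EGLImage received through OMX_UseEGLImage as the backing store of a
    // GL texture. The component then samples from this texture when drawing; it
    // never touches the underlying pixel memory directly.
    static GLuint bindEglImageToTexture(EGLImageKHR eglImage) {
        // The extension entry point has to be resolved at runtime.
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC pglEGLImageTargetTexture2DOES =
                (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
                        eglGetProcAddress("glEGLImageTargetTexture2DOES");

        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // Attach the EGLImage as the texture's storage.
        pglEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)eglImage);
        return tex;
    }

If you really do need CPU access to the pixels, that only works when you created the EGLImage yourself from a buffer you still own (for example the GraphicBuffer in the decoder steps above, which can be lock()ed); the EGLImageKHR handle itself is opaque and does not expose the memory.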

Related

Underrun in Oboe/AAudio playback stream

I'm working on an Android app dealing with a device which is basically a USB microphone. I need to read the input data and process it. Sometimes, I need to send data to the device (4 shorts times the number of channels, which is usually 2), and this data does not depend on the input.
I'm using Oboe, and all the phones I use for testing use AAudio underneath.
The reading part works, but when I try to write data to the output stream, I get the following warning in logcat and nothing is written to the output:
W/AudioTrack: releaseBuffer() track 0x78e80a0400 disabled due to previous underrun, restarting
Here's my callback:
oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    // check if there's data to write, agcData is a buffer previously allocated
    // and h2iaudio::getAgc() returns true if data's available
    if (h2iaudio::getAgc(this->agcData)) {
        // padding the buffer
        short* padPos = this->agcData + 4 * playStream->getChannelCount();
        memset(padPos, 0,
               static_cast<size_t>((numFrames - 4) * playStream->getBytesPerFrame()));
        // write the data
        oboe::ResultWithValue<int32_t> result =
                this->playStream->write(this->agcData, numFrames, 1);
        if (result != oboe::Result::OK) {
            LOGE("Failed to create stream. Error: %s",
                 oboe::convertToText(result.error()));
            return oboe::DataCallbackResult::Stop;
        }
    } else {
        // if there's nothing to write, write silence
        memset(this->agcData, 0,
               static_cast<size_t>(numFrames * playStream->getBytesPerFrame()));
    }

    // data processing here
    h2iaudio::processData(static_cast<short*>(audioData),
                          static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
                          oboeStream->getSampleRate());
    return oboe::DataCallbackResult::Continue;
}

//...

oboe::AudioStreamBuilder *OboeEngine::setupRecordingStreamParameters(
        oboe::AudioStreamBuilder *builder) {
    builder->setCallback(this)
            ->setDeviceId(this->recordingDeviceId)
            ->setDirection(oboe::Direction::Input)
            ->setSampleRate(this->sampleRate)
            ->setChannelCount(this->inputChannelCount)
            ->setFramesPerCallback(1024);
    return setupCommonStreamParameters(builder);
}
As seen in setupRecordingStreamParameters, I'm registering the callback on the input stream. In all the Oboe examples, the callback is registered on the output stream, and the reading is blocking. Does this matter? If not, how many frames do I need to write to the stream to avoid underruns?
EDIT
In the meantime, I found the source of the underruns. The output stream was not consuming the same number of frames as the input stream was producing (which in hindsight seems logical), so writing the number of frames given by playStream->getFramesPerBurst() fixed my issue. Here's my new callback:
oboe::DataCallbackResult
OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
    int framesToWrite = playStream->getFramesPerBurst();
    memset(agcData, 0, static_cast<size_t>(framesToWrite *
                                           this->playStream->getChannelCount()));
    h2iaudio::getAgc(agcData);

    oboe::ResultWithValue<int32_t> result =
            this->playStream->write(agcData, framesToWrite, 0);
    if (result != oboe::Result::OK) {
        LOGE("Failed to write AGC data. Error: %s",
             oboe::convertToText(result.error()));
    }

    // data processing here
    h2iaudio::processData(static_cast<short*>(audioData),
                          static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
                          oboeStream->getSampleRate());
    return oboe::DataCallbackResult::Continue;
}
It works this way. I'll change which stream has the callback attached if I notice any performance issues; for now I'll keep it like this.
Sometimes, I need to send data to the device
You always need to write data to the output. Generally you need to write at least numFrames, maybe more. If you don't have any valid data to send then write zeros.
Warning: in your else block you are calling memset() but not writing to the stream.
->setFramesPerCallback(1024);
Do you need 1024 specifically? Is that for an FFT? If not then AAudio can optimize the callbacks better if the FramesPerCallback is not specified.
In all the Oboe examples, the callback is registered on the output stream, and the reading is blocking. Does this matter?
Actually the read is NON-blocking. Whatever stream does not have the callback should be non-blocking. Use a timeoutNanos=0.
It is important to use the output stream for the callback if you want low latency. That is because the output stream can only provide low-latency mode with callbacks, not with direct write()s. But an input stream can provide low latency both with callbacks and with read()s.
Once the streams are stabilized you can read or write the same number of frames in each callback. But before they are stable, you may need to read or write extra frames.
With an output callback you should drain the input for a while so that it is running close to empty.
With an input callback you should fill the output for a while so that it is running close to full.
write(this->agcData, numFrames, 1);
Your 1 nanosecond timeout is very small. But Oboe will still block. You should use a timeoutNanos of 0 for non-blocking mode.
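Putting those two points together (always write numFrames to the output, and use timeoutNanos = 0 so the write never blocks inside the callback), the original callback could look roughly like the sketch below. It reuses the poster's own names (agcData, playStream, h2iaudio::*, LOGE) and is untested:

    oboe::DataCallbackResult
    OboeEngine::onAudioReady(oboe::AudioStream *oboeStream, void *audioData, int32_t numFrames) {
        if (h2iaudio::getAgc(this->agcData)) {
            // Zero everything after the 4 AGC frames so the rest of the burst is silence.
            short *padPos = this->agcData + 4 * playStream->getChannelCount();
            memset(padPos, 0,
                   static_cast<size_t>((numFrames - 4) * playStream->getBytesPerFrame()));
        } else {
            // Nothing to send: still write a full buffer of zeros instead of skipping the write.
            memset(this->agcData, 0,
                   static_cast<size_t>(numFrames * playStream->getBytesPerFrame()));
        }

        // timeoutNanos = 0 keeps the write non-blocking inside the callback.
        oboe::ResultWithValue<int32_t> result =
                this->playStream->write(this->agcData, numFrames, 0);
        if (result != oboe::Result::OK) {
            LOGE("Output write failed. Error: %s", oboe::convertToText(result.error()));
        }

        h2iaudio::processData(static_cast<short*>(audioData),
                              static_cast<size_t>(numFrames * oboeStream->getChannelCount()),
                              oboeStream->getSampleRate());
        return oboe::DataCallbackResult::Continue;
    }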
According to the Oboe documentation, during the onAudioReady callback you have to write exactly numFrames frames directly into the buffer pointed to by audioData. You do not call Oboe's write() function; instead, you fill that buffer yourself.
Not sure how your getAgc() function works, but maybe you can give it the pointer audioData as an argument to avoid copying data from one buffer to another.
If you really need the onAudioReady callback to request the same amount of frames, then you have to set that number while building the AudioStream using:
oboe::AudioStreamBuilder::setFramesPerCallback(int framesPerCallback)
Look here at the things that you should not do during an onAudioReady callback, and you will find that calling Oboe's write function is among them:
https://google.github.io/oboe/reference/classoboe_1_1_audio_stream_callback.html
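As a rough sketch of the pattern this answer describes, assuming the callback is moved to the output stream: fill audioData directly with exactly numFrames frames and drain the input stream with a non-blocking read(). Here recordStream and inputBuffer are illustrative names for the already-opened input stream and a pre-allocated short buffer; they are not from the original post:

    oboe::DataCallbackResult
    OboeEngine::onAudioReady(oboe::AudioStream *outputStream, void *audioData, int32_t numFrames) {
        short *out = static_cast<short *>(audioData);
        const int32_t channels = outputStream->getChannelCount();

        // Produce exactly numFrames of output: silence by default, AGC data when available.
        memset(out, 0, static_cast<size_t>(numFrames) * channels * sizeof(short));
        h2iaudio::getAgc(out);  // assumed to fill the first 4 frames when data is pending

        // Drain the recording stream without blocking (timeoutNanos = 0).
        oboe::ResultWithValue<int32_t> readResult =
                recordStream->read(inputBuffer, numFrames, 0);
        if (readResult.error() == oboe::Result::OK && readResult.value() > 0) {
            h2iaudio::processData(inputBuffer,
                                  static_cast<size_t>(readResult.value()) * recordStream->getChannelCount(),
                                  recordStream->getSampleRate());
        }
        return oboe::DataCallbackResult::Continue;
    }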

Unable to use Android platform's MediaCodec class in "surface input" mode

I'm trying to write a simple video encoder that uses the Android platform's MediaCodec class in "surface input" mode.
These are the steps I'm following (supporting code left out for the sake of brevity):
mediaCodec = MediaCodec::CreateByType(looper, "video/avc", true);
mediaCodec->configure(config, NULL, NULL, CONFIGURE_FLAG_ENCODE);
mediaCodec->createInputSurface(&inputSurface);
mediaCodec->start();
Following this, I'm trying to dequeue a buffer from the created input surface (which is an IGraphicBufferProducer interface object), but it fails with the NO_INIT error:
inputSurface->dequeueBuffer(&slot, &fence, w, h, format, 0);
The error message in the ADB log is:
BufferQueueProducer: [GraphicBufferSource] dequeueBuffer: BufferQueue has no connected producer
Any idea why the buffer queue has no connected producer? I would assume that the MediaCodec class would handle the creation of the buffer queue as well as the connection of the producer and consumers to the queue.
I'm using Android API level 26 (7.1.2). I'm using the platform-level libs because my use case requires access to GraphicBuffer objects.
Thanks in advance!
EDIT: The general idea is to:
Dequeue buffers from the input surface and fill them.
Queue the filled buffers back to the input surface (which would presumably trigger the media codec (video encoder) instance that the surface belongs to).
Dequeue output buffers (containing raw H.264 bitstream data) from the media codec instance and write them to a file.
Release output buffers back to the media codec instance.
From the IGraphicBufferProducer documentation:
// * NO_INIT - the buffer queue has been abandoned or the producer is not
// connected.
I guess that the part that is missing in your code is this "connect".
IGraphicBufferProducer has such a method; are you using it?
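For what it's worth, a sketch of what that connect could look like against the AOSP headers follows. Treat it only as a pointer to the right call: the exact IGraphicBufferProducer::connect() signature and the appropriate api value differ between Android releases, so check the headers in your platform tree (the listener can be null if you don't need buffer-released callbacks):

    #include <gui/IGraphicBufferProducer.h>
    #include <system/window.h>   // NATIVE_WINDOW_API_* constants

    // Connect to the codec's input surface as a producer before calling dequeueBuffer().
    // dequeueBuffer()/queueBuffer() keep returning NO_INIT until a connect() succeeds.
    android::status_t connectToInputSurface(
            const android::sp<android::IGraphicBufferProducer> &inputSurface) {
        android::IGraphicBufferProducer::QueueBufferOutput output;
        return inputSurface->connect(nullptr /* listener */,
                                     NATIVE_WINDOW_API_CPU,
                                     false /* producerControlledByApp */,
                                     &output);
    }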

Android MediaCodec Encode and Decode In Asynchronous Mode

I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new Asynchronous Mode supported in API Level 21 and up (Android OS 5.0 Lollipop).
There are many examples for doing this in Synchronous Mode on sites such as Big Flake, Google's Grafika, and dozens of answers on StackOverflow, but none of them support Asynchronous mode.
I do not need to display the video during the process.
I believe that the general procedure is to read the file with a MediaExtractor as the input to a MediaCodec(decoder), allow the output of the Decoder to render into a Surface that is also the shared input into a MediaCodec(encoder), and then finally to write the Encoder output file via a MediaMuxer. The Surface is created during setup of the Encoder and shared with the Decoder.
I can decode the video into a TextureView, but sharing the Surface with the Encoder instead of the screen has not been successful.
I set up MediaCodec.Callback()s for both of my codecs. I believe that an issue is that I do not know what to do in the Encoder's callback's onInputBufferAvailable() function. I do not want to (or know how to) copy data from the Surface into the Encoder; that should happen automatically (as it is done on the Decoder output with codec.releaseOutputBuffer(outputBufferId, true);). Yet, I believe that onInputBufferAvailable requires a call to codec.queueInputBuffer in order to function. I just don't know how to set the parameters without getting data from something like a MediaExtractor as used on the Decode side.
If you have an Example that opens up a video file, decodes it, encodes it to a different resolution or format using the asynchronous MediaCodec callbacks, and then saves it as a file, please share your sample code.
=== EDIT ===
Here is a working example, in synchronous mode, of what I am trying to do in asynchronous mode: ExtractDecodeEditEncodeMuxTest.java (https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java). This example works in my application.
I believe you shouldn't need to do anything in the encoder's onInputBufferAvailable() callback - you should not call encoder.queueInputBuffer(). Just as you never call encoder.dequeueInputBuffer() and encoder.queueInputBuffer() manually when doing Surface input encoding in synchronous mode, you shouldn't do it in asynchronous mode either.
When you call decoder.releaseOutputBuffer(outputBufferId, true); (in both synchronous and asynchronous mode), this internally (using the Surface you provided) dequeues an input buffer from the surface, renders the output into it, and enqueues it back to the surface (to the encoder). The only difference between synchronous and asynchronous mode is in how the buffer events are exposed in the public API, but when using Surface input, it uses a different (internal) API to access the same, so synchronous vs asynchronous mode shouldn't matter for this at all.
So as far as I know (although I haven't tried it myself), you should just leave the onInputBufferAvailable() callback empty for the encoder.
EDIT:
So, I tried doing this myself, and it's (almost) as simple as described above.
If the encoder input surface is configured directly as output to the decoder (with no SurfaceTexture inbetween), things just work, with a synchronous decode-encode loop converted into an asynchronous one.
If you use SurfaceTexture, however, you may run into a small gotcha. There is an issue with how one waits for frames to arrive to the SurfaceTexture in relation to the calling thread, see https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/DecodeEditEncodeTest.java#106 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/EncodeDecodeTest.java#104 and https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#113 for references to this.
The issue, as far as I see it, is in awaitNewImage, as in https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/OutputSurface.java#240. awaitNewImage waits, with a wait() that blocks the whole thread, for the onFrameAvailable callback. So if the onFrameAvailable callback is supposed to be delivered on the main thread, and the onOutputBufferAvailable callbacks (from which you call awaitNewImage) also run on the main thread, you have a problem: the callback you are waiting for can't run until the current method returns.
So we need to make sure that the onFrameAvailable callbacks come on a different thread than the one that calls awaitNewImage. One pretty simple way of doing this is to create a new separate thread, that does nothing but service the onFrameAvailable callbacks. To do that, you can do e.g. this:
private HandlerThread mHandlerThread = new HandlerThread("CallbackThread");
private Handler mHandler;
...
mHandlerThread.start();
mHandler = new Handler(mHandlerThread.getLooper());
...
mSurfaceTexture.setOnFrameAvailableListener(this, mHandler);
I hope this is enough for you to be able to solve your issue, let me know if you need me to edit one of the public examples to implement asynchronous callbacks there.
EDIT2:
Also, since the GL rendering might be done from within the onOutputBufferAvailable callback, this might be a different thread than the one that set up the EGL context. So in that case, one needs to release the EGL context in the thread that set it up, like this:
mEGL.eglMakeCurrent(mEGLDisplay, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_SURFACE, EGL10.EGL_NO_CONTEXT);
And reattach it in the other thread before rendering:
mEGL.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
EDIT3:
Additionally, if the encoder and decoder callbacks are received on the same thread, the decoder's onOutputBufferAvailable, which does the rendering, can block the encoder callbacks from being delivered. If they aren't delivered, the rendering can block indefinitely, since the encoder never gets its output buffers returned. This can be fixed by making sure the video decoder callbacks are received on a different thread instead, which also sidesteps the issue with the onFrameAvailable callback.
I tried implementing all this on top of ExtractDecodeEditEncodeMuxTest, and got it working seemingly fine, have a look at https://github.com/mstorsjo/android-decodeencodetest. I initially imported the unchanged test, and did the conversion to asynchronous mode and fixes for the tricky details separately, to make it easy to look at the individual fixes in the commit log.
You can also pass the Handler when setting the callback on the MediaCodec:
---> codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler);
MyAudioCodecWrapper myMediaCodecWrapper;

public MyAudioEncoder(long startRecordWhenNs){
    super.startRecordWhenNs = startRecordWhenNs;
}

@RequiresApi(api = Build.VERSION_CODES.M)
public MyAudioCodecWrapper prepareAudioEncoder(AudioRecord _audioRecord, int aacSamplePreFrameSize) throws Exception {
    if (_audioRecord == null || aacSamplePreFrameSize <= 0)
        throw new Exception();

    audioRecord = _audioRecord;
    Log.d(TAG, "audioRecord:" + audioRecord.getAudioFormat() + ",aacSamplePreFrameSize:" + aacSamplePreFrameSize);

    mHandlerThread.start();
    mHandler = new Handler(mHandlerThread.getLooper());

    MediaFormat audioFormat = new MediaFormat();
    audioFormat.setString(MediaFormat.KEY_MIME, MIMETYPE_AUDIO_AAC);
    //audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
    audioFormat.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
    audioFormat.setInteger(MediaFormat.KEY_SAMPLE_RATE, audioRecord.getSampleRate()); // 44100
    audioFormat.setInteger(MediaFormat.KEY_CHANNEL_COUNT, audioRecord.getChannelCount()); // 1 (mono)
    audioFormat.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
    audioFormat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 16384);

    MediaCodec codec = MediaCodec.createEncoderByType(MIMETYPE_AUDIO_AAC);
    codec.configure(audioFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    codec.setCallback(new AudioEncoderCallback(aacSamplePreFrameSize), mHandler);
    //codec.start();

    MyAudioCodecWrapper myMediaCodecWrapper = new MyAudioCodecWrapper();
    myMediaCodecWrapper.mediaCodec = codec;
    super.mediaCodec = codec;

    return myMediaCodecWrapper;
}

Decode H.264 stream using MediaCodec API JNI

I am developing an H.264 decoder using the MediaCodec API. I am trying to call the MediaCodec Java API from the JNI layer, inside a function like:
void Decompress(const unsigned char *encodedInputdata, unsigned int inputLength,
                unsigned char **outputDecodedData, int &width, int &height) {
    // encodedInputdata is encoded H.264 remote stream
    // .....
    // outputDecodedData = call JNI function of MediaCodec Java API to decode
    // .....
}
Later I will send outputDecodedData to my existing video rendering pipeline and render it on a Surface.
I hope I will be able to write a Java function to decode the input stream, but the following would be the challenge:
This resource states that:
...you can't do anything with the decoded video frame but render them to surface
Here a Surface has been passed to decoder.configure(format, surface, null, 0) to render the output ByteBuffer on the surface, and it is claimed that we can't use this buffer, only render it, due to the API limit.
So, will I be able to send the output ByteBuffer to the native layer, cast it to unsigned char*, and pass it to my rendering pipeline, instead of passing a Surface to configure()?
I see two fundamental problems with your proposed function definition.
First, MediaCodec operates on access units (NAL units for H.264), not arbitrary chunks of data from a stream, so you need to pass in one NAL unit at a time. Once the chunk is received, the codec may want to wait for additional frames to arrive before producing any output. You cannot in general pass in one frame of input and wait to receive one frame of output.
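To make that first point concrete, here is a small, purely illustrative helper that splits an Annex-B buffer into NAL units on 00 00 01 / 00 00 00 01 start codes so that each unit can be queued into the codec separately. It assumes the whole chunk is already in memory and is not part of any MediaCodec API:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Returns (pointer, length) pairs for each NAL unit payload found in the buffer,
    // with the start codes stripped.
    std::vector<std::pair<const unsigned char *, size_t>>
    splitNalUnits(const unsigned char *data, size_t length) {
        std::vector<std::pair<const unsigned char *, size_t>> units;
        size_t prev = length;  // start of the current NAL payload; length means "none yet"
        for (size_t i = 0; i + 3 <= length; ++i) {
            if (data[i] == 0 && data[i + 1] == 0 &&
                (data[i + 2] == 1 ||
                 (i + 4 <= length && data[i + 2] == 0 && data[i + 3] == 1))) {
                size_t start = i + (data[i + 2] == 1 ? 3 : 4);
                if (prev != length) {
                    units.emplace_back(data + prev, i - prev);  // close the previous unit
                }
                prev = start;
                i = start - 1;  // continue scanning after the start code
            }
        }
        if (prev != length) {
            units.emplace_back(data + prev, length - prev);  // last unit runs to the end
        }
        return units;
    }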
Second, as you noted, the ByteBuffer output is YUV-encoded in one of several color formats. The format varies from device to device; Qualcomm devices notably use their own proprietary format. (It has been reverse-engineered, though, so if you search around you can find some code to unravel it.)
The common workaround is to send the video frames to a SurfaceTexture, which converts them to GLES "external" textures. These can be manipulated in various ways, or rendered to a pbuffer and extracted with glReadPixels().
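If you do end up on the SurfaceTexture + pbuffer route and still need the pixels in native memory, the final read-back step is just a glReadPixels() into a CPU buffer after the external texture has been rendered. A tiny sketch, assuming the EGL pbuffer context is current and the draw call has already happened (RGBA output; converting back to YUV, if you need it, is up to you):

    #include <GLES2/gl2.h>
    #include <vector>

    // Copy the rendered frame out of the current framebuffer (e.g. an EGL pbuffer)
    // into CPU memory so it can be handed to an existing native pipeline.
    std::vector<unsigned char> readBackFrame(int width, int height) {
        std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        return pixels;  // rows come back bottom-up, as GL defines them
    }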

Controlling Frame Rate of VirtualDisplay

I'm writing an Android application, and in it I have a VirtualDisplay to mirror what is on the screen; I then send the frames from the screen to an instance of MediaCodec. It works, but I want to add a way of specifying the FPS of the encoded video, and I'm unsure how to do so.
From what I've read and experimented with, dropping encoded frames (based on the presentation times) doesn't work well as it ends up with blocky/artifact ridden video as opposed to a smooth video at a lower framerate. Other reading suggests that the only way to do what I want (limit the FPS) would be to limit the incoming FPS to the MediaCodec, but the VirtualDisplay just receives a Surface which is constructed from the MediaCodec as below
mSurface = <instance of MediaCodec>.createInputSurface();
mVirtualDisplay = mMediaProjection.createVirtualDisplay(
        "MyDisplay",
        screenWidth,
        screenHeight,
        screenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mSurface,
        null,
        null);
I've also tried subclassing Surface to limit the frames that are fed to the MediaCodec via unlockCanvasAndPost(Canvas canvas), but that method never seems to be called on my instance. There may be some weirdness in how I extended Surface and how it interacts with the Parcel, since writeToParcel() is called on my instance, but that is the only method that gets called (as far as I can tell).
Other reading suggests that I can go from encoder -> decoder -> encoder and limit the rate in which the second encoder is fed frames, but that's a lot of extra computation that I'd rather not do if I can avoid it.
Has anyone successfully limited the rate at which a VirtualDisplay feeds its Surface? Any help would be greatly appreciated!
Starting off with what you can't do...
You can't drop content from the encoded stream. Most of the frames in the encoded stream are essentially "diffs" from other frames. Without knowing how the frames interact, you can't safely drop content, and will end up with that corrupted macroblock look.
You can't specify the frame rate to the MediaCodec encoder. It might stuff that into metadata somewhere, but the only thing that really matters to the codec is the frames you're feeding into it, and the presentation time stamps associated with each frame. The encoder will not drop frames.
You can't do anything useful by subclassing Surface. The Canvas operations are only used for software rendering, which is unrelated to feeding in frames from a camera or virtual display.
What you can do is send the frames to an intermediate Surface, and then choose whether or not to forward them to the MediaCodec's input Surface. One approach would be to create a SurfaceTexture, construct a Surface from it, and pass that to the virtual display. When the SurfaceTexture's frame-available callback fires, you either ignore it, or render the texture onto the MediaCodec input Surface with GLES.
Various examples can be found in Grafika and on bigflake, none of which are an exact fit, but all of the necessary EGL and GLES classes are there.
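The "forward or ignore" decision in that frame-available callback is just timestamp bookkeeping. A hypothetical helper (shown in C++ here, but the logic is the same in Java; it is not part of any Android API) could look like this:

    #include <cstdint>

    // Decide whether a frame should be rendered onto the encoder's input surface,
    // based on a target output frame rate. Frames arriving too soon after the last
    // forwarded one are dropped before they ever reach the encoder.
    class FrameRateThrottle {
    public:
        explicit FrameRateThrottle(int targetFps)
                : mMinIntervalNs(1000000000LL / targetFps), mLastForwardedNs(-1) {}

        bool shouldForward(int64_t timestampNs) {
            if (mLastForwardedNs < 0 || timestampNs - mLastForwardedNs >= mMinIntervalNs) {
                mLastForwardedNs = timestampNs;
                return true;
            }
            return false;
        }

    private:
        const int64_t mMinIntervalNs;
        int64_t mLastForwardedNs;
    };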
You can refer to the code samples from saki4510t's ScreenRecordingSample or RyanRQ's ScreenRecoder; they both use an additional EGL texture between the virtual display and the media encoder, and the first one can keep at least 15 fps for the output video. Search for the keyword createVirtualDisplay in their code bases for more details.
