I am working on an app that records the screen of my Android device and streams it over RTSP to another client. I am using VirtualDisplay and MediaCodec for this.
I have an issue that I don't know how to solve. When I start streaming, the client doesn't receive anything until the screen changes. I guess that makes sense: the buffer contains nothing, so nothing is sent to the client. The code for that is this:
MediaCodec buildMediaCodec() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(VIDEO_MIME_TYPE, VIDEO_WIDTH, VIDEO_HEIGHT);
    // Set some required properties. The media codec may fail if these aren't defined.
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1); // 1 second between I-frames
    // Create a MediaCodec encoder and configure it. Get a Surface we can use for recording into.
    MediaCodec mediaCodec = MediaCodec.createEncoderByType(VIDEO_MIME_TYPE);
    mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    return mediaCodec;
}
// This Surface is passed to buildVirtualDisplay(); mMediaCodec is the encoder returned by buildMediaCodec()
Surface mediaCodecSurface = mMediaCodec.createInputSurface();
VirtualDisplay buildVirtualDisplay(MediaProjection mediaProjection, Surface mediaCodecSurface, DisplayMetrics displayMetrics) {
    if (mediaProjection == null || mediaCodecSurface == null || displayMetrics == null) {
        throw new InvalidParameterException("MediaProjection, Surface and DisplayMetrics are mandatory");
    }
    return mediaProjection.createVirtualDisplay("Recording Display", VIDEO_WIDTH, VIDEO_HEIGHT, SCREEN_DPI,
            0 /* flags */, mediaCodecSurface, null /* callback */, null /* handler */);
}
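For context, this is how the pieces are wired together (a sketch; mMediaProjection and metrics come from elsewhere in my code). Note that createInputSurface() must be called after configure() and before start():

MediaCodec mediaCodec = buildMediaCodec();
Surface mediaCodecSurface = mediaCodec.createInputSurface(); // after configure(), before start()
mediaCodec.start();
VirtualDisplay virtualDisplay = buildVirtualDisplay(mMediaProjection, mediaCodecSurface, metrics);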
...
mIndex = mMediaCodec.dequeueOutputBuffer(mBufferInfo, 500000);
if (mIndex >= 0) {
    mBuffer = mMediaCodec.getOutputBuffer(mIndex);
    if (mBuffer == null) {
        throw new RuntimeException("couldn't fetch buffer at index " + mIndex);
    }
    mBuffer.position(0);
    break;
} else if (mIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    mMediaFormat = mMediaCodec.getOutputFormat();
    Log.i(TAG, mMediaFormat.toString());
} else if (mIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
    Log.v(TAG, "No buffer available...");
} else {
    Log.e(TAG, "Message: " + mIndex);
}
In the logs I can see No buffer available... repeated one after another. The moment the screen changes, it stops.
The problem is when I stop interacting with the phone. The screen is not refreshed since nothing is changing, so I keep getting MediaCodec.INFO_TRY_AGAIN_LATER. After 10 seconds or so, the client disconnects. I guess it doesn't receive anything, so it just shuts down the connection.
I also observed that the longer I wait at the beginning, the bigger the delay between the server and client devices becomes.
If I display a progress bar, everything is OK: the screen is continuously re-rendered, so the buffer always contains data to be sent.
I have looked for information about this problem. Any suggestions on what I might do to prevent this from happening? Should I use another Surface between MediaCodec and VirtualDisplay and try to "force" the rendering?
Thanks.
I found out that the client disconnects after not receiving data for at least 10 seconds. I tried KEY_REPEAT_PREVIOUS_FRAME_AFTER from MediaFormat to prevent this, but so far no luck.
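For reference, this is roughly how I set it (a sketch; the 100 ms repeat interval is an arbitrary value I picked). The key takes a long in microseconds, only applies to Surface input, and has to be in the format before configure():

MediaFormat format = MediaFormat.createVideoFormat(VIDEO_MIME_TYPE, VIDEO_WIDTH, VIDEO_HEIGHT);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
// Ask the encoder to resubmit the previous frame if no new frame arrives
// within 100 ms (the value is a long, in microseconds; Surface input only).
format.setLong(MediaFormat.KEY_REPEAT_PREVIOUS_FRAME_AFTER, 100000L);
MediaCodec mediaCodec = MediaCodec.createEncoderByType(VIDEO_MIME_TYPE);
mediaCodec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

From what I've read, support for this key varies between vendor encoders, so it's a hint rather than a guarantee.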
Related
I'm pulling video from a drone that encodes the video stream in H.264. It sends each NAL unit via 1-5 UDP packets. I have code that creates a NAL unit out of those packets and removes the header. It then passes it to MediaCodec, which renders it to a Surface object.
The output should be a video feed, but for some reason all I get is a black screen. I know that the surface is working as intended, because if I futz with the NAL unit I get green garbage, which I think happens when MediaCodec doesn't know what it's looking at.
Anyway, here's the section of code that handles the actual decoding. Is there anything actually wrong with it, or am I looking for the issue in the wrong place?
// This part initializes the decoder and generally sets up everything needed for the while loop down below
encodedVideo = new ServerSocket(11111, 1460, false, 0); // custom socket class (java.net.ServerSocket has no such constructor)
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 100000);
try {
    m_codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    m_codec.configure(format, new Surface(droneSight.getSurfaceTexture()), null, 0);
} catch (Exception e) {
    e.printStackTrace();
}
m_codec.start();
running = true;
initialFrame = Arrays.copyOf(encodedVideo.getPacketData(true, true), encodedVideo.getPacketData(true, false).length);
// This section handles the actual grabbing and decoding of each NAL unit. Or it would, if it worked.
while (running) {
    byte[] frame = this.getNALUnit();
    int inputIndex = m_codec.dequeueInputBuffer(-1);
    if (inputIndex >= 0) {
        ByteBuffer buf = m_codec.getInputBuffer(inputIndex);
        buf.put(frame);
        m_codec.queueInputBuffer(inputIndex, 0, frame.length, 0, 0);
    }
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    int outputIndex = m_codec.dequeueOutputBuffer(info, 0);
    if (outputIndex >= 0) {
        m_codec.releaseOutputBuffer(outputIndex, true);
    }
}
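One thing worth double-checking (a sketch, not from the original code): H.264 decoders typically render nothing but black until they have seen the SPS and PPS, and ByteBuffer-fed decoders generally expect the Annex-B start code (0x00000001) to stay in front of each NAL unit, so the stripped "header" may be exactly what the codec needs. Feeding the parameter sets first, flagged as codec config, would look something like this (assuming getNALUnit() returns the SPS/PPS first):

// Submit SPS/PPS once, flagged as codec config, before any picture data.
// Keep the 0x00000001 start code prefix on every unit.
byte[] config = this.getNALUnit();
int configIndex = m_codec.dequeueInputBuffer(-1);
if (configIndex >= 0) {
    ByteBuffer buf = m_codec.getInputBuffer(configIndex);
    buf.put(config);
    m_codec.queueInputBuffer(configIndex, 0, config.length, 0, MediaCodec.BUFFER_FLAG_CODEC_CONFIG);
}

Giving each subsequent frame an increasing presentation timestamp (instead of 0 on every call) also helps some decoders that are driving a Surface.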
I am trying to render a raw H.264 video to a surface (after decoding) and write it to a file at the same time.
The rendering works fine, but when I try to get the current output buffer, it always has a size of 8, and the output file ends up at 3.87 KB.
It seems like the output buffer is locked by the surface (ANativeWindow)?
Can anyone give me advice on how to do this without creating another codec?
The codec is configured with an output surface:
if (AMEDIA_OK == AMediaCodec_configure(d->codec, d->format, d->window /* the native window */, NULL, 0))
Here is the code snippet where I try to get the output buffer:
if (!d->sawOutputEOS) {
    AMediaCodecBufferInfo info;
    auto status = AMediaCodec_dequeueOutputBuffer(d->codec, &info, -1);
    if (status >= 0) {
        if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
            LOGV("output EOS");
            d->sawOutputEOS = true;
            d->isPlaying = false;
        }
        int64_t delay = 333000;
        usleep((useconds_t) delay / 15);
        size_t size;
        // here I get the output buffer
        uint8_t *outputbuffer = AMediaCodec_getOutputBuffer(d->codec, status, &size);
        write(d->fd1, outputbuffer, size); // the output is always 0
        LOGV("%zu", size); // the size is always 8
        LOGV("FRAME num : %d", counter[d->nb]++);
        AMediaCodec_releaseOutputBuffer(d->codec, status, info.size != 0);
        if (d->renderonce) {
            d->renderonce = false;
            return;
        }
    } else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
        LOGV("output buffers changed");
    } else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
        auto format = AMediaCodec_getOutputFormat(d->codec);
        LOGV("format changed to: %s", AMediaFormat_toString(format));
        AMediaFormat_delete(format);
        d->formatChanged = true;
    } else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
        LOGV("no output buffer right now");
    } else {
        LOGV("unexpected info code: %zd", status);
    }
}
Thanks in advance
It's not locked; you asked the decoder to work for display, so it used the fastest route to display, without exposing the pixels to readable memory. You may find that the format is the opaque COLOR_FormatSurface, as explained here.
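A follow-up sketch of that workaround in Java terms (names are placeholders): configure the decoder without a Surface, and the output buffers stay CPU-readable, at the cost of having to handle rendering yourself.

// Configure with a null Surface so output buffers hold readable YUV data
// instead of being routed straight to the display.
MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
decoder.configure(format, null /* no Surface */, null, 0);
decoder.start();
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = decoder.dequeueOutputBuffer(info, 10000);
if (index >= 0) {
    ByteBuffer yuv = decoder.getOutputBuffer(index); // real pixels, not an opaque handle
    byte[] out = new byte[info.size];
    yuv.get(out); // copy out, then write to the file
    decoder.releaseOutputBuffer(index, false); // false: nothing to render
}

Decode-to-Surface and decode-to-file therefore usually end up as two separate paths (or two codecs), which is exactly the constraint the question runs into.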
I am using the MediaCodec APIs to play a video stream (H.264) that I am receiving through an Ethernet port.
From what I understood from official documentation and various examples, I need to perform the following operations.
Create a MediaCodec instance based on the mimetype(video/avc for H.264)
Feed "csd-0" with SPS frame data. SPS frame should start with 0x00000001
Feed "csd-1" buffer with PPS frame data. PPS frame should start with 0x00000001
Call decoder.configure() and decoder.start(). If all goes well, decoder is correctly configured and there is no exception.
Once the MediaCodec is configured, we can supply all the remaining frames (depacketized NAL units) to the decoder.
Upon a successful dequeue operation, the decoded buffer can be rendered onto the screen as follows.
outputBufferId = decoder.dequeueOutputBuffer(info, 1000);
decoder.releaseOutputBuffer(outputBufferId, true);
Problem
decoder.dequeueOutputBuffer() is returning -3 on Android 6.0.
Official documentation says that in the case of any errors during dequeueOutputBuffer() only -1 (INFO_TRY_AGAIN_LATER) and -2 (INFO_OUTPUT_FORMAT_CHANGED) are returned and that -3 (INFO_OUTPUT_BUFFERS_CHANGED) is deprecated. So why am I receiving -3?
How am I supposed to correct this?
Code snippet:
String mimeType = "video/avc";
decoder = MediaCodec.createDecoderByType(mimeType);
mformat = MediaFormat.createVideoFormat(mimeType, 1920, 1080);
while (!Thread.interrupted()) {
    byte[] data = new byte[size];
    bin.read(data, 0, size);
    if (data is SPS frame) { // pseudocode
        mformat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
        mformat.setByteBuffer("csd-0", ByteBuffer.wrap(data));
        continue;
    }
    if (data is PPS frame) { // pseudocode
        mformat.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 0);
        mformat.setByteBuffer("csd-1", ByteBuffer.wrap(data));
        decoder.configure(mformat, surface, null, 0);
        decoder.start();
        is_decoder_configured = true;
        continue;
    }
    if (!is_decoder_configured)
        continue;
    index = decoder.dequeueInputBuffer(1000);
    if (index < 0) {
        Log.e(TAG, "Dequeue input buffer failed..\n");
        continue;
    }
    buf = decoder.getInputBuffer(index);
    if (buf == null)
        continue;
    buf.put(data);
    decoder.queueInputBuffer(index, 0, data.length, 0, 0);
    int outputBufferId = decoder.dequeueOutputBuffer(info, 1000);
    switch (outputBufferId) {
        case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
            Log.i("DecodeActivity", "New format " + decoder.getOutputFormat());
            break;
        case MediaCodec.INFO_TRY_AGAIN_LATER:
            Log.i("DecodeActivity", "dequeueOutputBuffer timed out!");
            break;
        default:
            Log.i(TAG, "Successfully decoded the output : " + outputBufferId);
            decoder.releaseOutputBuffer(outputBufferId, true);
            break;
    }
}
if (is_decoder_configured) {
    decoder.stop();
    decoder.release();
}
In case you see any other errors, please let me know. I will be grateful!
I don't see anything saying that it will never be returned. The documentation says this:
This constant was deprecated in API level 21.
This return value can be ignored as getOutputBuffers() has been deprecated. Client should request a current buffer using one of the get-buffer or get-image methods each time one has been dequeued.
That is, if you're using getOutputBuffers(), you need to listen for this return value and act on it - but doing this is deprecated. If you're not using getOutputBuffers(), just ignore this return value, i.e. call dequeueOutputBuffer() again with the same parameters and see what it returns afterwards.
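Concretely, a drain loop on API 21+ can look like this sketch (TIMEOUT_US and TAG are placeholders); both informational constants just mean "dequeue again":

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int id = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
if (id >= 0) {
    // Fetch the buffer by index on every dequeue rather than caching the
    // array from the deprecated getOutputBuffers().
    ByteBuffer buf = decoder.getOutputBuffer(id);
    // ... consume buf ...
    decoder.releaseOutputBuffer(id, true);
} else if (id == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    Log.i(TAG, "new format: " + decoder.getOutputFormat());
}
// INFO_TRY_AGAIN_LATER and the deprecated INFO_OUTPUT_BUFFERS_CHANGED both
// fall through here: simply call dequeueOutputBuffer() again.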
I have a problem with this library:
https://github.com/fyhertz/libstreaming
It allows you to stream the camera over a wireless connection, and it uses three methods: two with MediaCodec and one with MediaRecorder.
I would like to modify it so that it uses only MediaCodec; however, first of all I tried the code of example 2 of the library, and I always get the same error:
the log tells me that the device can use MediaCodec; it sets up the encoder, but when it tests the decoder it fails, and the buffer index is always -1.
This is the method in the EncoderDebugger class where the exception occurs. Can some kind soul help me, please?
private long decode(boolean withPrefix) {
    int n = 3, i = 0, j = 0;
    long elapsed = 0, now = timestamp();
    int decInputIndex = 0, decOutputIndex = 0;
    ByteBuffer[] decInputBuffers = mDecoder.getInputBuffers();
    ByteBuffer[] decOutputBuffers = mDecoder.getOutputBuffers();
    BufferInfo info = new BufferInfo();
    while (elapsed < 3000000) {
        // Feeds the decoder with a NAL unit
        if (i < NB_ENCODED) {
            decInputIndex = mDecoder.dequeueInputBuffer(1000000 / FRAMERATE);
            if (decInputIndex >= 0) {
                int l1 = decInputBuffers[decInputIndex].capacity();
                int l2 = mVideo[i].length;
                decInputBuffers[decInputIndex].clear();
                if ((withPrefix && hasPrefix(mVideo[i])) || (!withPrefix && !hasPrefix(mVideo[i]))) {
                    check(l1 >= l2, "The decoder input buffer is not big enough (nal=" + l2 + ", capacity=" + l1 + ").");
                    decInputBuffers[decInputIndex].put(mVideo[i], 0, mVideo[i].length);
                } else if (withPrefix && !hasPrefix(mVideo[i])) {
                    check(l1 >= l2 + 4, "The decoder input buffer is not big enough (nal=" + (l2 + 4) + ", capacity=" + l1 + ").");
                    decInputBuffers[decInputIndex].put(new byte[] {0, 0, 0, 1});
                    decInputBuffers[decInputIndex].put(mVideo[i], 0, mVideo[i].length);
                } else if (!withPrefix && hasPrefix(mVideo[i])) {
                    check(l1 >= l2 - 4, "The decoder input buffer is not big enough (nal=" + (l2 - 4) + ", capacity=" + l1 + ").");
                    decInputBuffers[decInputIndex].put(mVideo[i], 4, mVideo[i].length - 4);
                }
                mDecoder.queueInputBuffer(decInputIndex, 0, l2, timestamp(), 0);
                i++;
            } else {
                if (VERBOSE) Log.d(TAG, "No buffer available !7");
            }
        }
        // Tries to get a decoded image
        decOutputIndex = mDecoder.dequeueOutputBuffer(info, 1000000 / FRAMERATE);
        if (decOutputIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
            decOutputBuffers = mDecoder.getOutputBuffers();
        } else if (decOutputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            mDecOutputFormat = mDecoder.getOutputFormat();
        } else if (decOutputIndex >= 0) {
            if (n > 2) {
                // We have successfully encoded and decoded an image!
                int length = info.size;
                mDecodedVideo[j] = new byte[length];
                decOutputBuffers[decOutputIndex].clear();
                decOutputBuffers[decOutputIndex].get(mDecodedVideo[j], 0, length);
                // Converts the decoded frame to NV21
                convertToNV21(j);
                if (j >= NB_DECODED - 1) {
                    flushMediaCodec(mDecoder);
                    if (VERBOSE) Log.v(TAG, "Decoding " + n + " frames took " + elapsed / 1000 + " ms");
                    return elapsed;
                }
                j++;
            }
            mDecoder.releaseOutputBuffer(decOutputIndex, false);
            n++;
        }
        elapsed = timestamp() - now;
    }
    throw new RuntimeException("The decoder did not decode anything.");
}
Here are my suggestions:
(1) Check the settings of the encoder and decoder, and make sure that they match; for example, that the resolution and color format are the same.
(2) Make sure the very first packet generated by the encoder has been sent and pushed into the decoder. This packet defines the basic settings of the video stream.
(3) The decoder usually buffers 5-10 frames, so no data comes out of it for the first few hundred milliseconds.
(4) While initializing the decoder, set the surface to null. Otherwise the output buffer will be consumed by the surface and probably released automatically.
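To illustrate point (3), here is a sketch (TIMEOUT_US and lastPtsUs are placeholders) of flushing those buffered frames by signalling end-of-stream once the input is exhausted:

// After the last real frame, queue an empty EOS buffer so the decoder
// releases the 5-10 frames it is holding, then drain until EOS re-emerges.
int in = decoder.dequeueInputBuffer(TIMEOUT_US);
if (in >= 0) {
    decoder.queueInputBuffer(in, 0, 0, lastPtsUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
}
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    int out = decoder.dequeueOutputBuffer(info, TIMEOUT_US);
    if (out >= 0) {
        decoder.releaseOutputBuffer(out, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
    }
    // Negative return values (try again / format changed) just loop around.
}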
I'm using the following code to prepare the hardware decoder. I expect outputBufferIndex to be -1 (INFO_TRY_AGAIN_LATER) at first, followed by MediaCodec.INFO_OUTPUT_FORMAT_CHANGED. It shouldn't be >= 0 before the format change has been signalled.
I tested the code on 25 different devices, and 7 of them never return INFO_OUTPUT_FORMAT_CHANGED. mediaCodec.getOutputFormat() threw an IllegalStateException when I got outputBufferIndex >= 0. I have no idea whether it is a coincidence that all the devices that didn't work were Android 4.2.2 with the OMX.qcom.video.decoder.avc decoder.
for (int i = 0; i < videoExtractor.getTrackCount(); i++) {
    MediaFormat mediaFormat = videoExtractor.getTrackFormat(i);
    String mime = mediaFormat.getString(MediaFormat.KEY_MIME);
    if (mime.startsWith("video/")) {
        videoExtractor.selectTrack(i);
        videoCodec = MediaCodec.createDecoderByType(mime);
        videoCodec.configure(mediaFormat, null, null, 0);
        videoCodec.start();
    }
}
ByteBuffer[] videoInputBuffers = videoCodec.getInputBuffers();
while (true) {
    int sampleTrackIndex = videoExtractor.getSampleTrackIndex();
    if (sampleTrackIndex == -1) {
        break;
    } else { // decode video
        int inputBufferIndex = videoCodec.dequeueInputBuffer(0);
        if (inputBufferIndex >= 0) {
            int bytesRead = videoExtractor.readSampleData(videoInputBuffers[inputBufferIndex], 0);
            if (bytesRead >= 0) {
                videoCodec.queueInputBuffer(inputBufferIndex, 0, bytesRead,
                        videoExtractor.getSampleTime(), 0);
                videoExtractor.advance();
            }
        }
        MediaCodec.BufferInfo videoBufferInfo = new MediaCodec.BufferInfo();
        int outputBufferIndex = videoCodec.dequeueOutputBuffer(videoBufferInfo, 0);
        if (outputBufferIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
            MediaFormat format = videoCodec.getOutputFormat();
            Log.w(TAG, "video format changed: " + format);
            // do something...
            break;
        } else if (outputBufferIndex >= 0) {
            // not supposed to happen!
        }
    }
}
Thank you very much for the clues and help!
In Android 4.3, a collection of MediaCodec tests was added to CTS. If you look at the way doEncodeDecodeVideoFromBuffer() works in EncodeDecodeTest, you can see that it expects the INFO_OUTPUT_FORMAT_CHANGED result before any data. If it doesn't get it, the call to checkFrame() will fail when it tries to get the color format. Prior to Android 4.3, there were no tests, and any behavior is possible.
Having said that, I don't recall seeing this behavior on the (Qualcomm-based) Nexus 4.
At any rate, I'm not sure how much this will actually hold you back, unless you're able to decode the proprietary buffer layout Qualcomm uses. You can see in that same checkFrame() function that it punts when it sees OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar64x32Tile2m8ka. Sending the output to a Surface may be a viable alternative depending on what you're up to.
Most of the MediaCodec code on bigflake and in Grafika targets API 18 (Android 4.3), because that's when the behavior became more predictable. (The availability of surface input and MediaMuxer is also of tremendous value.)
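To make the Surface suggestion concrete, here is a sketch (placeholder names, reusing the variables from the question) of decoding straight to a Surface so the proprietary buffer layout never has to be parsed:

// Decode to a Surface (e.g. from a SurfaceView); the app never touches the
// vendor-specific YUV layout. 'surface' is a placeholder.
videoCodec.configure(mediaFormat, surface, null, 0);
videoCodec.start();
// ... feed input exactly as before ...
int index = videoCodec.dequeueOutputBuffer(videoBufferInfo, 0);
if (index >= 0) {
    videoCodec.releaseOutputBuffer(index, true); // true: hand the frame to the Surface
}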