Android MediaMetadataRetriever.getFrameAtTime skips frames

I wrote a simple Android application that uses the MediaMetadataRetriever class to get frames. It works fine, except that I realized it skips frames.
The video clip I am trying to decode was shot with the phone camera. The relevant code snippets follow:
MediaMetadataRetriever mediaDataRet = new MediaMetadataRetriever();
mediaDataRet.setDataSource(path);

String lengthMsStr = mediaDataRet.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
final long lenMs = Long.parseLong(lengthMsStr);

String widthStr = mediaDataRet.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH);
int width = Integer.parseInt(widthStr);

String heightStr = mediaDataRet.extractMetadata(MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT);
int height = Integer.parseInt(heightStr);
Note the variable lenMs; it holds the clip duration in milliseconds. Then for every frame I do:
int pace = 30; // 30 fps ms spacing
for (long i = 0; i < lenMs; i += pace) {
    if (is_abort())
        return;
    Bitmap bitmap = mediaDataRet.getFrameAtTime(i * 1000); // I tried the other version of this method with OPTION_CLOSEST, with no luck.
    if (bc == null)
        bc = bitmap.getConfig();
    bitmap.getPixels(pixBuffer, 0, width, 0, 0, width, height);
    [...]
}
After checking visually I noticed that some frames are skipped (like short sequences). Why? And how do I avoid this?

Use:
mediaDataRet.getFrameAtTime(i * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
The single-argument getFrameAtTime(n) uses OPTION_CLOSEST_SYNC, which gives you key frames only.
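A minimal sketch of the corrected loop (untested; bc and pixBuffer are the same fields assumed in the question). Note that OPTION_CLOSEST is typically much slower than OPTION_CLOSEST_SYNC, since the retriever may have to decode forward from the preceding sync frame for every call:
int pace = 30; // ~30 ms spacing between requested frames
for (long i = 0; i < lenMs; i += pace) {
    if (is_abort())
        return;
    // OPTION_CLOSEST returns the frame nearest to the requested time instead of
    // snapping to the nearest sync (key) frame.
    Bitmap bitmap = mediaDataRet.getFrameAtTime(i * 1000, MediaMetadataRetriever.OPTION_CLOSEST);
    if (bitmap == null)
        continue; // getFrameAtTime may return null if no frame could be retrieved
    if (bc == null)
        bc = bitmap.getConfig();
    bitmap.getPixels(pixBuffer, 0, width, 0, 0, width, height);
    // [...]
}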

Related

Android YUV to grayscale performance optimization

I'm trying to convert a YUV image to grayscale, so basically I just need the Y values.
To do so I wrote this little piece of code (with frame being the YUV image):
imageConversionTime = System.currentTimeMillis();
size = frame.getSize();
byte[] nv21ByteArray = frame.getImage();
int lol;
for (int i = 0; i < size.width; i++) {
    for (int j = 0; j < size.height; j++) {
        lol = size.width * j + i;
        yMatrix.put(j, i, nv21ByteArray[lol]);
    }
}
bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(yMatrix, bitmap);
imageConversionTime = System.currentTimeMillis() - imageConversionTime;
However, this takes about 13500 ms. I need it to be A LOT faster (on my computer it takes 8.5 ms in Python). I work on a Motorola Moto E 4G 2nd generation, which is not super powerful, but it should be enough for converting images, right?
Any suggestions?
Thanks in advance.
First of all, I would assign size.width and size.height to local variables; I don't think the compiler will optimize this by default, but I am not sure about that.
Furthermore, create a plain array representing the result instead of using a Mat.
Then you could do something like this:
int[] grayScalePixels = new int[size.width * size.height];
int cntPixels = 0;
In your inner loop set
grayScalePixels[cntPixels] = nv21ByteArray[lol];
cntPixels++;
To get your final image do the following:
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, size.width, size.height, Bitmap.Config.ARGB_8888);
Hope it works properly (I have not tested it, but at least the underlying principle should be applicable: relying on a plain array instead of a Mat).
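One caveat from my side (not part of the answer above, untested): Bitmap.createBitmap(int[], int, int, Config) expects packed ARGB values and reads them in row-major order, so the raw Y byte should be replicated into the R, G and B channels with an opaque alpha, and the outer loop should run over rows. A sketch of how that could look:
int w = size.width;   // hoist the size lookups out of the loop, as suggested above
int h = size.height;
int[] grayScalePixels = new int[w * h];
int cntPixels = 0;
for (int j = 0; j < h; j++) {          // rows (row-major order expected by createBitmap)
    for (int i = 0; i < w; i++) {      // columns
        int y = nv21ByteArray[w * j + i] & 0xFF;                              // unsigned Y value
        grayScalePixels[cntPixels++] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque gray ARGB
    }
}
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, w, h, Bitmap.Config.ARGB_8888);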
Probably 2 years too late but anyways ;)
To convert to gray scale, all you need to do is set the u/v values to 128 and leave the y values as is. Note that this code is for YUY2 format. You can refer to this document for other formats.
private void convertToBW(byte[] ptrIn, String filePath) {
    // change all u and v values to 127 (cause 128 will cause byte overflow)
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    for (int i = 0, ptrInLength = ptrOut.length; i < ptrInLength; i++) {
        if (i % 2 != 0) {
            ptrOut[i] = (byte) 127;
        }
    }
    convertToJpeg(ptrOut, filePath);
}
For NV21/NV12, the interleaved chroma plane starts right after the Y plane, at width * height bytes (two thirds of the buffer), so I think the loop would change to run over that last third only, setting every byte to 127, as in the sketch below.
Note: (didn't try this myself)
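A minimal untested sketch of that NV21 variant (width and height are assumed to be known to the caller; convertToJpeg is the same helper as above):
private void convertToBWNv21(byte[] ptrIn, int width, int height, String filePath) {
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    // the interleaved V/U plane occupies the last width * height / 2 bytes
    for (int i = width * height; i < ptrOut.length; i++) {
        ptrOut[i] = (byte) 127; // neutral chroma -> grayscale
    }
    convertToJpeg(ptrOut, filePath);
}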
Also, I would suggest profiling your Utils.matToBitmap call and createBitmap separately.

How do I do surfaceview scaling in android ndk?

Good morning.
I am making a camera video player using ffmpeg.
During development, I ran into one problem.
If I grab a frame through ffmpeg, decode it, and sws_scale it up to the screen size, it takes too long and the camera stream falls behind.
For example, when the incoming resolution is 1920 * 1080 and the resolution of my phone is 2550 * 1440, sws_scale is about 6 times slower than when converting at the original size.
Currently the NDK code sws_scales to the resolution that comes in from the camera, so the speed is fine and the image is not interrupted.
However, the SurfaceView is full screen, while the input resolution is below the full screen resolution.
Scale AVFrame
ctx->m_SwsCtx = sws_getContext(
        ctx->m_CodecCtx->width,
        ctx->m_CodecCtx->height,
        ctx->m_CodecCtx->pix_fmt,
        //width,  // 2550 (SurfaceView)
        //height, // 1440
        ctx->m_CodecCtx->width,  // 1920 (Camera)
        ctx->m_CodecCtx->height, // 1080
        AV_PIX_FMT_RGBA,
        SWS_FAST_BILINEAR,
        NULL, NULL, NULL);
if (ctx->m_SwsCtx == NULL)
{
    __android_log_print(
            ANDROID_LOG_DEBUG,
            "[ VideoStream::SetResolution Fail ] ",
            "[ Error Message : %s ]",
            "SwsContext Alloc fail");
    SET_FIELD_TO_INT(pEnv, ob, err, 0x40);
    return ob;
}

sws_scale(
        ctx->m_SwsCtx,
        (const uint8_t * const *)ctx->m_SrcFrame->data,
        ctx->m_SrcFrame->linesize,
        0,
        ctx->m_CodecCtx->height,
        ctx->m_DstFrame->data,
        ctx->m_DstFrame->linesize);

PDRAWOBJECT drawObj = (PDRAWOBJECT)malloc(sizeof(DRAWOBJECT));
if (drawObj != NULL)
{
    drawObj->m_Width = ctx->m_Width;
    drawObj->m_Height = ctx->m_Height;
    drawObj->m_Format = WINDOW_FORMAT_RGBA_8888;
    drawObj->m_Frame = ctx->m_DstFrame;
    SET_FIELD_TO_INT(pEnv, ob, err, -1);
    SET_FIELD_TO_LONG(pEnv, ob, addr, (jlong)drawObj);
}
Draw SurfaceView:
PDRAWOBJECT d = (PDRAWOBJECT)drawObj;
long long curr1 = CurrentTimeInMilli();

ANativeWindow *window = ANativeWindow_fromSurface(pEnv, surface);
ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
ANativeWindow_setBuffersGeometry(
        window,
        d->m_Width,
        d->m_Height,
        WINDOW_FORMAT_RGBA_8888);

ANativeWindow_Buffer windowBuffer;
ANativeWindow_lock(window, &windowBuffer, 0);

uint8_t *dst = (uint8_t *)windowBuffer.bits;
int dstStride = windowBuffer.stride * 4;
uint8_t *src = (uint8_t *)(d->m_Frame->data[0]);
int srcStride = d->m_Frame->linesize[0];

for (int h = 0; h < d->m_Height; ++h)
{
    // copy one row of RGBA pixels into the window buffer
    memcpy(dst + h * dstStride, src + h * srcStride, srcStride);
}

ANativeWindow_unlockAndPost(window);
ANativeWindow_release(window);
Result: (screenshot omitted) the decoded image does not fill the full-screen SurfaceView.
I would like the image to fill the whole screen. Is there a way to scale to the SurfaceView size in the NDK or in Android, rather than with sws_scale?
Thank you.
You don't need to scale your video. Actually, you don't even need to convert it to RGB (that conversion is also a significant burden for the CPU).
The trick is to use an OpenGL renderer with a shader that takes the YUV input and displays this texture scaled to your screen.
Start with this solution (reusing code from the Android system): https://stackoverflow.com/a/14999912/192373
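As an illustration (my own sketch, not taken from the linked answer): the fragment shader only has to sample the three planes and apply the YUV-to-RGB matrix; the GPU then stretches the textured quad to whatever size the SurfaceView has, so no sws_scale and no CPU-side RGBA conversion is needed. The uniform names below are made up, and the decoder output is assumed to be planar YUV420 uploaded as three single-channel (GL_LUMINANCE) textures:
// Hypothetical fragment shader for a GLSurfaceView renderer (approximate BT.601 conversion).
private static final String YUV_TO_RGB_FRAGMENT_SHADER =
        "precision mediump float;\n"
        + "varying vec2 vTexCoord;\n"
        + "uniform sampler2D yTex;\n"   // Y plane
        + "uniform sampler2D uTex;\n"   // U plane
        + "uniform sampler2D vTex;\n"   // V plane
        + "void main() {\n"
        + "    float y = texture2D(yTex, vTexCoord).r;\n"
        + "    float u = texture2D(uTex, vTexCoord).r - 0.5;\n"
        + "    float v = texture2D(vTex, vTexCoord).r - 0.5;\n"
        + "    gl_FragColor = vec4(y + 1.402 * v,\n"
        + "                        y - 0.344 * u - 0.714 * v,\n"
        + "                        y + 1.772 * u,\n"
        + "                        1.0);\n"
        + "}\n";
Because the quad's vertices are given in normalized device coordinates, the same 1920 * 1080 frame is stretched to the full-screen SurfaceView for free.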

Memory allocation for YUV buffer to convert into RGB

I'm facing an issue on a few Android devices while copying the data returned by DecodeFrame2().
This is my code:
uint8_t* m_yuvData[3];
SBufferInfo yuvDataInfo;
memset(&yuvDataInfo, 0, sizeof(SBufferInfo));
m_yuvData[0] = NULL;
m_yuvData[1] = NULL;
m_yuvData[2] = NULL;

DECODING_STATE decodingState = m_decoder->DecodeFrame2(bufferData, bufferDataSize, m_yuvData, &yuvDataInfo);
if (yuvDataInfo.iBufferStatus == 1)
{
    int yStride = yuvDataInfo.UsrData.sSystemBuffer.iStride[0];
    int uvStride = yuvDataInfo.UsrData.sSystemBuffer.iStride[1];
    uint32_t width = yuvDataInfo.UsrData.sSystemBuffer.iWidth;
    uint32_t height = yuvDataInfo.UsrData.sSystemBuffer.iHeight;

    size_t yDataSize = (width * height) + (height * yStride);
    size_t uvDataSize = (((width * height) / 4) + (height * uvStride));
    size_t requiredSize = yDataSize + (2 * uvDataSize);

    uint8_t* yuvBufferedData = (uint8_t*)malloc(requiredSize);

    // when I move m_yuvData[0] to another location I get a crash.
    memcpy(yuvBufferedData, m_yuvData[0], yDataSize);
    memcpy(yuvBufferedData + yDataSize, m_yuvData[1], uvDataSize);
    memcpy(yuvBufferedData + yDataSize + uvDataSize, m_yuvData[2], uvDataSize);
}
The above code snippet works on high-end Android devices, but on a few devices it crashes in the first memcpy() statement from the second frame onwards, after the first frame has been processed.
What is wrong with this code, and how should the buffer size be calculated from the output of DecodeFrame2()?
If I process only alternate frames (15 instead of all 30), it copies fine.
Please help me fix this.
yDataSize and uvDataSize come out far too large with the formula above (the decoder's Y plane only spans about yStride * height bytes, and each chroma plane about uvStride * height / 2).
This issue has been fixed by correcting those sizes.

How to split video with MediaMetadataRetriever

I am programming in the Android environment, and I want to load a video file from the device, split it into frames, and process each of them with some image-processing techniques.
In order to split the video into an array of frames correctly, I am using the MediaMetadataRetriever class, giving it the path of the file as input.
My problem is that I don't understand how to split the file correctly using the getFrameAtTime method. In particular, my code is:
MediaMetadataRetriever media = new MediaMetadataRetriever();
media.setDataSource(this.path);
String durata = media.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
int durata_millisec = Integer.parseInt(durata);
durata_video_micros = durata_millisec * 1000;
durata_secondi = durata_millisec / 1000;
String bitrate = media.extractMetadata(MediaMetadataRetriever.METADATA_KEY_BITRATE); // bits per second
int fps = 10;
int numeroFrameCaptured = fps * durata_secondi;
int i = 0;
while (i < numeroFrameCaptured) {
    vettoreFrame[i] = media.getFrameAtTime();
}
With an fps of 10, I should have 10 frames per second, so in a 5-second file I should capture 50 frames in total. How do I do this? How do I write the while/for statement correctly to capture all these frames with the getFrameAtTime method?
Thanks in advance.
Try this:
MediaMetadataRetriever media = new MediaMetadataRetriever();
media.setDataSource(this.path);

String durata = media.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
int durata_millisec = Integer.parseInt(durata);
durata_video_micros = durata_millisec * 1000;
durata_secondi = durata_millisec / 1000;
String bitrate = media.extractMetadata(MediaMetadataRetriever.METADATA_KEY_BITRATE);

int fps = 10;
int numeroFrameCaptured = fps * durata_secondi;

ArrayList<Bitmap> frames = new ArrayList<Bitmap>();
Bitmap bmFrame;

int totalFotogramas = durata_millisec / 1000 * fps; // video duration (s) * frames per second
for (int i = 0; i < totalFotogramas; i++) {
    // 100000 us per step = 0.1 s, i.e. 10 fps
    bmFrame = media.getFrameAtTime(100000 * i, MediaMetadataRetriever.OPTION_CLOSEST);
    frames.add(bmFrame);
}
Hope it's useful.
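If you would rather derive the step from fps than hardcode 100000 µs (which corresponds to 10 fps), a small untested variation using the same variables as above:
long stepUs = 1000000L / fps;           // microseconds between captured frames
for (int i = 0; i < numeroFrameCaptured; i++) {
    Bitmap bm = media.getFrameAtTime(i * stepUs, MediaMetadataRetriever.OPTION_CLOSEST);
    if (bm != null) {                   // getFrameAtTime may return null
        frames.add(bm);
    }
}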

Handle MediaCodec video with dropped frames

I'm currently doing fast, precise seeking using MediaCodec. To step frame by frame, I currently first get the total number of frames:
mediaInfo.totalFrames = videoTrack.getSamples().size();
Then I get the length of the video file:
mediaInfo.durationUs = videoTrack.getDuration() * 1000 *1000 / timeScale;
//then calling:
public long getDuration() {
    if (mMediaInfo != null) {
        return (int) mMediaInfo.durationUs / 1000; // to millisecond
    }
    return -1;
}
Now, when I want to get the next frame I call the following:
mNextFrame.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        int frames = Integer.parseInt(String.valueOf(getTotalFrames));
        int intervals = Integer.parseInt(String.valueOf(mPlayer.getDuration() / frames));
        if (mPlayer.isPlaying()) {
            mPlayer.pause();
            mPlayer.seekTo(mPlayer.getCurrentPosition() + intervals);
        } else {
            mPlayer.seekTo(mPlayer.getCurrentPosition() + intervals);
        }
    }
});
Here is the info about the file I'm testing with:
Frames = 466
Duration = 15523 ms
So the interval between frames is 33.311158798283262 ms.
In other words, the interval is rounded to 33, and each press of the next button calls mPlayer.seekTo(mPlayer.getCurrentPosition() + 33), meaning that some frames would be lost, or so I thought. I tested it by logging getCurrentPosition after each button press, and here is the result:
33 -> 66 -> 99 -> 132 -> 166
Going from 132 to 166 is 34 ms instead of 33, so there was compensation to make up for the frames that would otherwise have been lost.
The above works perfectly fine; I can step through frames without any problem. Here is the issue I'm facing.
Using the same logic as above, I created a custom RangeBar. I created a method setTickCount (it's basically the same as SeekBar.setMax) and I set the tick count like this:
int frames = Integer.parseInt(String.valueOf(getTotalFrames));
mrange_bar.setTickCount(frames);
So the max value of my RangeBar is the number of frames in the video.
When the "Tick" value changes I call the following:
int frames = Integer.parseInt(String.valueOf(getTotalFrames));
int intervals = Integer.parseInt(String.valueOf(mPlayer.getDuration() / frames));
mPlayer.seekTo(intervals * TickPosition);
So the above will work like this: if my tick position is, let's say, 40:
mPlayer.seekTo(33 * 40); // 1320 ms
I would think that the above would work fine because I used the exact same logic, but instead the video jumps/skips back to (what I assume is) the key frame and then continues seeking.
Why is this happening and how can I resolve this issue?
EDIT 1:
I mentioned above that it jumps to the previous key frame, but I had another look and it is actually hitting end of stream while seeking (at specific points during the video). When I reach end of stream I release my previous output buffer so that one frame can still be displayed, to avoid a black screen, by calling:
mDecoder.releaseOutputBuffer(prevBufferIndex, true);
So, for some reason, end of stream is reached, at which point I restart the MediaCodec, causing a "lag/jump" effect.
If I remove the above, I don't get the frame "jump", but there is still a lag while the MediaCodec is being initialized.
EDIT 2:
After digging deeper I found that readSampleData returns -1:
ByteBuffer[] inputBuffers = mDecoder.getInputBuffers();
int inIndex = mDecoder.dequeueInputBuffer(TIMEOUT_USEC);
if (inIndex >= 0) {
    ByteBuffer buffer = inputBuffers[inIndex];
    int sampleSize = mExtractor.readSampleData(buffer, 0);
    if (sampleSize < 0) {
        mDecoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
        mIsExtractorReachedEOS = true;
    } else {
        mDecoder.queueInputBuffer(inIndex, 0, sampleSize, mExtractor.getSampleTime(), 0);
        mExtractor.advance();
    }
}
For some reason my sampleSize is -1 at a specific point during seeking.
EDIT 3
This issue is definitely related to the time value I pass. I tried two different approaches, the first:
mPlayer.seekTo(progress);
//position is retrieved by setting mSeekBar.setMax(mPlayer.getDuration); ....
and in the second approach I determine the frame interval:
//Total amount of frames in video
long TotalFramesInVideo = videoTrack.getSamples().size();
//Duration of file in milliseconds
int DurationOfVideoInMs = mPlayer.getDuration();
//Determine interval between frames
int frameIntervals = DurationOfVideoInMs / Integer.parseInt(String.valueOf(TotalFramesInVideo));
//Then I seek to the frames like this:
mPlayer.seekTo(position * frameIntervals);
After trying both of the above methods, I realised that the issue is related to the time being passed to the MediaCodec, because the "lag/jump" happens at different places.
I'm not sure why this doesn't happen when I call:
mPlayer.seekTo(mPlayer.getCurrentPosition() + intervals);
