Camera 2, increase FPS - android

I'm using the Camera2 API to save JPEG images to disk. I currently get 3-4 fps on my Nexus 5X and I'd like to improve that to 20-30. Is it possible?
By changing the image format to YUV I manage to generate 30 fps. Is it possible to save them at this frame rate, or should I give up and live with my 3-4 fps?
Obviously I can share code if needed, but if everyone agrees that it's not possible, I'll just give up. Using the NDK (with libjpeg for instance) is an option (but obviously I'd prefer to avoid it...).
Thanks
EDIT: here is how I convert the YUV android.media.Image to a single byte[]:
private byte[] toByteArray(Image image, File destination) {
    // Plane 0 is Y; plane 2 is V. On devices where the chroma planes are
    // interleaved (pixel stride 2), the V plane already contains V,U,V,U...,
    // so Y followed by the V plane gives an NV21-style buffer.
    ByteBuffer buffer0 = image.getPlanes()[0].getBuffer();
    ByteBuffer buffer2 = image.getPlanes()[2].getBuffer();
    int buffer0_size = buffer0.remaining();
    int buffer2_size = buffer2.remaining();

    byte[] bytes = new byte[buffer0_size + buffer2_size];
    buffer0.get(bytes, 0, buffer0_size);
    buffer2.get(bytes, buffer0_size, buffer2_size);
    return bytes;
}
EDIT 2: another method I found to convert the YUV image into a byte[]:
private byte[] toByteArray(Image image, File destination) {
    Image.Plane yPlane = image.getPlanes()[0];
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];

    int ySize = yPlane.getBuffer().remaining();

    // be aware that the u/v buffer sizes do not include the padding at the end, if there is any
    // (e.g. if pixel stride is 2 the size is ySize / 2 - 1)
    int uSize = uPlane.getBuffer().remaining();
    int vSize = vPlane.getBuffer().remaining();

    byte[] data = new byte[ySize + (ySize / 2)];

    yPlane.getBuffer().get(data, 0, ySize);

    ByteBuffer ub = uPlane.getBuffer();
    ByteBuffer vb = vPlane.getBuffer();

    int uvPixelStride = uPlane.getPixelStride(); // stride guaranteed to be the same for u and v planes
    if (uvPixelStride == 1) {
        uPlane.getBuffer().get(data, ySize, uSize);
        vPlane.getBuffer().get(data, ySize + uSize, vSize);
    } else {
        // if pixel stride is 2 there is padding between each pixel
        // converting it to NV21 by filling the gaps of the v plane with the u values
        vb.get(data, ySize, vSize);
        for (int i = 0; i < uSize; i += 2) {
            data[ySize + i + 1] = ub.get(i);
        }
    }
    return data;
}
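If the goal is still to end up with JPEG files, one option is android.graphics.YuvImage, which wraps the platform JPEG encoder. A minimal sketch, assuming the byte[] produced above really is in NV21 order and that width/height match the Image:
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Compresses an NV21 byte[] (like the one produced above) into a JPEG file.
// width/height must match the Image the bytes came from; 90 is an arbitrary quality choice.
private void saveNv21AsJpeg(byte[] nv21, int width, int height, File destination) throws IOException {
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    try (FileOutputStream os = new FileOutputStream(destination)) {
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, os);
    }
}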

The dedicated JPEG encoder units on mobile phones are efficient, but not generally optimized for throughput. (Historically, users took one photo every second or two.) At full resolution, the 5X's camera pipeline will not generate JPEGs faster than a few FPS.
If you need higher rates, you need to capture uncompressed YUV. As mentioned by CommonsWare, there's not enough disk bandwidth to stream full-resolution uncompressed YUV to disk, so you can only hold on to some number of frames before you run out of memory.
You can use libjpeg-turbo or some other high-efficiency JPEG encoder and see how many frames per second you can compress yourself - this may be higher than the hardware JPEG unit. The simplest way to maximize the rate is to capture YUV at 30fps and run some number of JPEG encoding threads in parallel. For maximum speed, you'll want to hand-write the code talking to the JPEG encoder, because your source data is YUV rather than the RGB that most JPEG encoding interfaces tend to accept (even though the colorspace of an encoded JPEG is typically YUV as well).
Whenever an encoder thread finishes the previous frame, it can grab the next frame that comes from the camera (you can maintain a small circular buffer of the latest YUV Images to make this simpler).
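A hedged sketch of that fan-out structure, assuming the camera callback hands you one NV21 byte[] per frame; encodeToJpeg() is a hypothetical stand-in for whatever encoder you pick (YuvImage as above, or libjpeg-turbo over JNI):
import java.io.File;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelJpegWriter {
    private static final int WORKERS = 4;                 // tune to the number of big cores
    private final BlockingQueue<byte[]> frameQueue = new ArrayBlockingQueue<>(8); // small ring of pending frames
    private final ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
    private final AtomicInteger frameIndex = new AtomicInteger();
    private final File outputDir;

    public ParallelJpegWriter(File outputDir) {
        this.outputDir = outputDir;
        for (int i = 0; i < WORKERS; i++) {
            pool.execute(this::encodeLoop);
        }
    }

    /** Called from the camera callback; silently drops the frame if all encoders are busy. */
    public void submit(byte[] nv21Frame) {
        frameQueue.offer(nv21Frame);
    }

    private void encodeLoop() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] frame = frameQueue.take();
                File out = new File(outputDir, "frame_" + frameIndex.getAndIncrement() + ".jpg");
                encodeToJpeg(frame, out); // hypothetical: your YuvImage or libjpeg-turbo based encoder
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void encodeToJpeg(byte[] nv21Frame, File out) {
        // placeholder for the actual JPEG compression
    }
}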

Related

android camera2 get nv21 byte array from preview image

I'm trying to get the black and white values (the Y-plane) from the preview frame in the camera2 API. This is what I have so far:
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    Image.Plane[] planes = image.getPlanes();
    ByteBuffer yPlane = planes[0].getBuffer();
    if (firstRun) {
        ySize = yPlane.remaining();
        nv21 = new byte[ySize];
    }
    yPlane.get(nv21, 0, ySize);
    Log.i(TAG, String.valueOf(nv21.length) + " " + String.valueOf(nv21[0]));
    image.close();
}
However, the length of the array is not what I expected (1280*960 = 1,228,800, but nv21.length returns 12,979,200) and nv21[0] gives random values.
What am I doing wrong?
Thank you in advance
The size of the buffer doesn't have to be exactly 1280*960, since there can be row stride padding between rows of pixels. That said, a 10x difference in total size seems surprising, but not infeasible - check what the row stride actually is.
I'd recommend actually drawing the Y plane into an ImageView (for debugging this doesn't need to be efficient, so you can just build a grayscale Bitmap pixel by pixel), to see what it looks like. Is it just complete garbage, or is it a real Y plane with weird padding, etc.?
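A minimal debug-only sketch of that, assuming a YUV_420_888 Image (not from the original answer, and deliberately slow):
import android.graphics.Bitmap;
import android.graphics.Color;
import android.media.Image;

import java.nio.ByteBuffer;

// Debug-only: renders the Y plane as a grayscale Bitmap so you can eyeball it in an ImageView.
// rowStride can be larger than width, which is exactly the padding to check for.
private Bitmap yPlaneToBitmap(Image image) {
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer buffer = yPlane.getBuffer();
    int width = image.getWidth();
    int height = image.getHeight();
    int rowStride = yPlane.getRowStride();

    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = buffer.get(row * rowStride + col) & 0xFF; // unsigned luma value
            bitmap.setPixel(col, row, Color.rgb(y, y, y));
        }
    }
    return bitmap;
}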

How could I distinguish between NV21 and YV12 codification in imageReader camera API 2?

I am developing a custom Camera API 2 app, and I notice that the capture format is handled differently on some devices when I use the ImageReader callback.
For example, on the Nexus 4 it doesn't work correctly, while on the Nexus 5X it looks OK; here is the output.
I initialize the ImageReader in this form:
mImageReader = ImageReader.newInstance(320, 240, ImageFormat.YUV_420_888, 2);
And my callback is a simple ImageReader callback:
mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        try {
            mBackgroundHandler.post(
                    new ImageController(reader.acquireNextImage())
            );
        } catch (Exception e) {
            // exception
        }
    }
};
In the case of the Nexus 4, I get this error:
D/qdgralloc: gralloc_lock_ycbcr: Invalid format passed: 0x32315659
When I write the raw file on both devices, I get two different images. So I understand that the Nexus 5X image uses NV21 encoding and the Nexus 4 uses YV12 encoding.
I found a specification of the image formats and tried to query the format from the ImageReader.
There are YV12 and NV21 options, but obviously I get the YUV_420_888 format when I ask for it:
int test=mImageReader.getImageFormat();
So is there any way to get the camera input format (NV21 or YV12), so that I can distinguish between these encodings in the camera class? CameraCharacteristics maybe?
Thanks in advance.
Unai.
PS: I use OpenGL for displaying RGB images, and I use OpenCV to make the conversions to YUV_420_888.
YUV_420_888 is a wrapper that can host (among others) both NV21 and YV12 images. You must use the planes and strides to access the individual color channels:
ByteBuffer Y = image.getPlanes()[0].getBuffer();
ByteBuffer U = image.getPlanes()[1].getBuffer();
ByteBuffer V = image.getPlanes()[2].getBuffer();
If the underlying pixels are in NV21 format (as on Nexus 4), the pixelStride will be 2, and
int getU(Image image, int col, int row) {
    return getPixel(image.getPlanes()[1], col / 2, row / 2);
}

int getPixel(Image.Plane plane, int col, int row) {
    // mask to 0..255, because byte is signed in Java
    return plane.getBuffer().get(col * plane.getPixelStride() + row * plane.getRowStride()) & 0xFF;
}
We take half the column and half the row because this is how the U and V (chroma) planes are stored in a 4:2:0 image.
This code is for illustration and is very inefficient; you probably want to access pixels in bulk using get(byte[], int, int), via a fragment shader, or via the JNI function GetDirectBufferAddress in native code. What you cannot use is the buffer's array() method, because the planes are guaranteed to be direct byte buffers.
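A hedged sketch of the bulk approach, assuming a standard YUV_420_888 Image: copy the Y plane row by row and interleave V/U into a tightly packed NV21 buffer, using only the strides the planes report:
import android.media.Image;

import java.nio.ByteBuffer;

// Copies a YUV_420_888 Image into a tightly packed NV21 byte[] (Y plane, then interleaved V/U).
// Only the reported row/pixel strides are used, so it works for both "NV21-like" and "YV12-like" layouts.
private static byte[] toNv21(Image image) {
    int width = image.getWidth();
    int height = image.getHeight();
    byte[] nv21 = new byte[width * height * 3 / 2];

    // Y plane: pixel stride is 1, but rows may be padded.
    Image.Plane yPlane = image.getPlanes()[0];
    ByteBuffer yBuf = yPlane.getBuffer();
    int yRowStride = yPlane.getRowStride();
    for (int row = 0; row < height; row++) {
        yBuf.position(row * yRowStride);
        yBuf.get(nv21, row * width, width);
    }

    // Chroma planes: read each U/V sample through its stride and interleave as V, U, V, U...
    Image.Plane uPlane = image.getPlanes()[1];
    Image.Plane vPlane = image.getPlanes()[2];
    ByteBuffer uBuf = uPlane.getBuffer();
    ByteBuffer vBuf = vPlane.getBuffer();
    int chromaRowStride = uPlane.getRowStride();     // same for U and V
    int chromaPixelStride = uPlane.getPixelStride(); // 1 (planar) or 2 (interleaved)
    int offset = width * height;
    for (int row = 0; row < height / 2; row++) {
        for (int col = 0; col < width / 2; col++) {
            int index = row * chromaRowStride + col * chromaPixelStride;
            nv21[offset++] = vBuf.get(index);
            nv21[offset++] = uBuf.get(index);
        }
    }
    return nv21;
}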
Here is a useful method which converts from YV12 to NV21.
public static byte[] fromYV12toNV21(@NonNull final byte[] yv12,
                                    final int width,
                                    final int height) {
    byte[] nv21 = new byte[yv12.length];
    final int size = width * height;
    final int quarter = size / 4;
    final int vPosition = size; // This is where V starts
    final int uPosition = size + quarter; // This is where U starts

    System.arraycopy(yv12, 0, nv21, 0, size); // Y is the same
    for (int i = 0; i < quarter; i++) {
        nv21[size + i * 2] = yv12[vPosition + i];     // For NV21, V first
        nv21[size + i * 2 + 1] = yv12[uPosition + i]; // For NV21, U second
    }
    return nv21;
}

Issue with libyuv::ConvertToI420 on Android?

I have an onPreviewFrame callback set up. It gets a byte[] with NV21 data in it. I have set the preview size to 176*144. When the device is held in landscape mode the 176*144 byte[] is perfect, but when the device is held in portrait mode I still get a byte[] with the same dimensions.
I want to rotate the byte[] by 90 degrees and obtain a byte[] with dimensions 144*176.
So the question is: how do I rotate the data, not just the preview image? Camera.Parameters.setRotation only affects taking the picture, not video. Camera.setDisplayOrientation specifically says it only affects the displayed preview, not the frame bytes:
This does not affect the order of byte array passed in
onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
After checking out various posts, I found this one stating to use ConvertToI420 from libyuv.
Now the deal is that I have compiled libyuv and am able to call the libyuv::ConvertToI420 method, but the resulting I420 is all messed up in terms of color and shows lines; however, the dimensions I get are now 144*176 (you can check the image here).
The code snippet that I've used is as follows.
// sourceWidth = 176 and sourceHeight = 144
unsigned char* I420M = new unsigned char[(int)(sourceWidth * sourceHeight * 1.5)];
unsigned int YSize = sourceWidth * sourceHeight;

// yuvPtr is the NV21 data passed from onPreviewCallback (from the Java layer)
const uint8* src_frame = const_cast<const uint8*>(yuvPtr);
size_t src_size = YSize;

uint8* pDstY = I420M;
uint8* pDstU = I420M + YSize;
uint8* pDstV = I420M + YSize + (YSize / 4);

libyuv::RotationMode mode;
if (landscapeLeft) {
    mode = libyuv::kRotate90;
} else {
    mode = libyuv::kRotate270;
}

uint32 format = libyuv::FOURCC_NV21;

int retVal = libyuv::ConvertToI420(src_frame, src_size,
                                   pDstY, sourceHeight,
                                   pDstU, (sourceHeight / 2),
                                   pDstV, (sourceHeight / 2),
                                   0, 0,
                                   sourceWidth, sourceHeight,
                                   sourceWidth, sourceHeight,
                                   mode,
                                   format);
I don't wish to crop the image, just rotate it by 90 degrees (clockwise/anticlockwise); the attached image is for kRotate90.
Could anyone please point out where I am going wrong? I strongly suspect it has something to do with the parameters I am passing to the ConvertToI420 method.
Any help appreciated.
Use sourceWidth, not sourceHeight, for the destination strides:
int retVal = libyuv::ConvertToI420(src_frame, src_size,
                                   pDstY, sourceWidth,
                                   pDstU, (sourceWidth / 2),
                                   pDstV, (sourceWidth / 2),
                                   0, 0,
                                   sourceWidth, sourceHeight,
                                   sourceWidth, sourceHeight,
                                   mode,
                                   format);
I have figured out what was going wrong. The above code snippet works perfectly well, and I420M contains the rotated YUV with 144*176 dimensions.
The problem was in the way I was converting I420M to a jbyte[] while passing it back to the Java layer.

how to convert from yv12 to yuv420p

In the camera preview frame callback I get data in YV12 format on the Android side. I need to convert it to YUV420P on the JNI side. How can I do that? As I have read from many sources, in the YUV420P format the Y samples appear first, followed by the U samples, which are in turn followed by the V samples. The YV12 format is the same as YUV420P except that the U and V samples appear in reverse order, i.e. the Y samples are followed by the V samples and then the U samples. Keeping that in mind, I have used the following swapping code to produce YUV420P data from the YV12 data before encoding.
avpicture_fill((AVPicture*)outframe, (uint8_t*)camData, codecCtx->pix_fmt, codecCtx->width, codecCtx->height);
uint8_t * buf_store = outframe->data[1];
outframe->data[1]=outframe->data[2];
outframe->data[2]=buf_store;
But it does not seem to be working. How should I adjust my code?
Don't use avpicture_fill. I have implemented it in my application like this and it's working fine:
picture->linesize[0] = frameWidth;
picture->linesize[1] = frameWidth/2;
picture->linesize[2] = frameWidth/2;
picture->data[0] = camData;
picture->data[1] = camData + picture->linesize[0]*frameHeight+picture->linesize[1]*frameHeight/2;
picture->data[2] = camData + picture->linesize[0]*frameHeight;
Maybe you need this method:
// YV12 to YUV420P (I420)
public static void swapYV12toI420(final byte[] yv12bytes, final byte[] i420bytes, int width, int height) {
    int size = width * height;
    int part = size / 4;
    System.arraycopy(yv12bytes, 0, i420bytes, 0, size);              // Y
    System.arraycopy(yv12bytes, size + part, i420bytes, size, part); // U (stored after V in YV12)
    System.arraycopy(yv12bytes, size, i420bytes, size + part, part); // V
}
For every YV12 data package you receive, swap it to I420.
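A hedged usage sketch, assuming the preview format was set to ImageFormat.YV12 and that encodeI420Frame() is your own (hypothetical) method that hands the frame to the JNI/ffmpeg encoder:
// Lives in a class implementing Camera.PreviewCallback (android.hardware.Camera API).
// Note: YV12 preview buffers may include row alignment padding that this simple swap ignores.
private byte[] i420Buffer; // reused between frames

@Override
public void onPreviewFrame(byte[] yv12, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    if (i420Buffer == null) {
        i420Buffer = new byte[yv12.length];
    }
    swapYV12toI420(yv12, i420Buffer, size.width, size.height);
    encodeI420Frame(i420Buffer);    // hypothetical: push the I420 frame to the native encoder
    camera.addCallbackBuffer(yv12); // if using setPreviewCallbackWithBuffer
}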

Encoding H.264 from camera with Android MediaCodec

I'm trying to get this to work on Android 4.1 (using an upgraded Asus Transformer tablet). Thanks to Alex's response to my previous question, I was already able to write some raw H.264 data to a file, but this file is only playable with ffplay -f h264, and it seems like it has lost all information regarding the framerate (extremely fast playback). Also, the colorspace looks incorrect (at the moment I'm using the camera's default on the encoder's side).
public class AvcEncoder {

    private MediaCodec mediaCodec;
    private BufferedOutputStream outputStream;

    public AvcEncoder() {
        File f = new File(Environment.getExternalStorageDirectory(), "Download/video_encoded.264");
        touch(f); // helper (not shown) that creates the file
        try {
            outputStream = new BufferedOutputStream(new FileOutputStream(f));
            Log.i("AvcEncoder", "outputStream initialized");
        } catch (Exception e) {
            e.printStackTrace();
        }

        mediaCodec = MediaCodec.createEncoderByType("video/avc");
        MediaFormat mediaFormat = MediaFormat.createVideoFormat("video/avc", 320, 240);
        mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, 125000);
        mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
        mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
        mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);
        mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        mediaCodec.start();
    }
    public void close() {
        try {
            mediaCodec.stop();
            mediaCodec.release();
            outputStream.flush();
            outputStream.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    // called from Camera.setPreviewCallbackWithBuffer(...) in another class
    public void offerEncoder(byte[] input) {
        try {
            ByteBuffer[] inputBuffers = mediaCodec.getInputBuffers();
            ByteBuffer[] outputBuffers = mediaCodec.getOutputBuffers();

            int inputBufferIndex = mediaCodec.dequeueInputBuffer(-1);
            if (inputBufferIndex >= 0) {
                ByteBuffer inputBuffer = inputBuffers[inputBufferIndex];
                inputBuffer.clear();
                inputBuffer.put(input);
                mediaCodec.queueInputBuffer(inputBufferIndex, 0, input.length, 0, 0);
            }

            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
            int outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            while (outputBufferIndex >= 0) {
                ByteBuffer outputBuffer = outputBuffers[outputBufferIndex];
                byte[] outData = new byte[bufferInfo.size];
                outputBuffer.get(outData);
                outputStream.write(outData, 0, outData.length);
                Log.i("AvcEncoder", outData.length + " bytes written");

                mediaCodec.releaseOutputBuffer(outputBufferIndex, false);
                outputBufferIndex = mediaCodec.dequeueOutputBuffer(bufferInfo, 0);
            }
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
Changing the encoder type to "video/mp4" apparently solves the framerate problem, but since the main goal is to make a streaming service, this is not a good solution.
I'm aware that I dropped some of Alex's code concerning the SPS and PPS NALUs, but I was hoping this would not be necessary since that information was also coming from outData and I assumed the encoder would format it correctly. If this is not the case, how should I arrange the different types of NALUs in my file/stream?
So, what am I missing here in order to make a valid, working H.264 stream? And which settings should I use to make a match between the camera's colorspace and the encoder's colorspace?
I have a feeling this is more of an H.264-related question than an Android/MediaCodec topic. Or am I still not using the MediaCodec API correctly?
Thanks in advance.
For your fast playback (frame rate) issue, there is nothing you have to do here. Since it is a streaming solution, the other side has to be told the frame rate in advance, or be given timestamps with each frame. Neither of these is part of the elementary stream. Either a pre-determined framerate is chosen, or you pass on something like an SDP, or you use an existing protocol such as RTSP. In the latter case the timestamps are part of the stream, sent in a form such as RTP, and the client has to depayload the RTP stream and play it back. This is how elementary streaming works: either fix your frame rate (if you have a fixed-rate encoder) or provide timestamps.
Local PC playback will be fast because the player does not know the fps. By giving the fps parameter before the input, e.g.
ffplay -fps 30 in.264
you can control the playback speed on the PC.
As for the file not being playable: does it have an SPS and PPS? You also need NAL start codes enabled (Annex B format). I don't know much about Android, but this is a requirement for any H.264 elementary stream to be playable when it is not in a container and needs to be dumped and played back later.
If the Android default is MP4, the Annex B start codes will probably be switched off by default, so perhaps there is a switch to enable them. Or, if you are getting the data frame by frame, just add them yourself.
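A hedged sketch of one way to handle that with MediaCodec, slotted into the question's output-draining loop: the encoder emits SPS/PPS in a buffer flagged BUFFER_FLAG_CODEC_CONFIG, which you can cache and repeat before keyframes. This assumes the encoder already emits Annex B start codes (most Android AVC encoders do); if not, 0x00 0x00 0x00 0x01 would also have to be prepended to each NALU.
// Fits inside a class like the question's AvcEncoder; 'outputStream' is its BufferedOutputStream.
private byte[] codecConfig; // cached SPS/PPS

private void writeEncodedBuffer(ByteBuffer outputBuffer, MediaCodec.BufferInfo bufferInfo) throws IOException {
    byte[] outData = new byte[bufferInfo.size];
    outputBuffer.get(outData);

    if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
        // This buffer holds SPS/PPS, not picture data; remember it and write it at the stream start.
        codecConfig = outData;
        outputStream.write(outData);
    } else if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_SYNC_FRAME) != 0 && codecConfig != null) {
        // Repeat SPS/PPS before each keyframe so a client can join the stream midway.
        outputStream.write(codecConfig);
        outputStream.write(outData);
    } else {
        outputStream.write(outData);
    }
}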
As for the color format: I would guess the default should work, so try not setting it.
If not, try 4:2:2 planar or UYVY/VYUY interleaved formats; cameras are usually one of those (not necessarily, but these are the ones I have encountered most often).
Android 4.3 (API 18) provides an easy solution. The MediaCodec class now accepts input from Surfaces, which means you can connect the camera's Surface preview to the encoder and bypass all the weird YUV format issues.
There is also a new MediaMuxer class that will convert your raw H.264 stream to a .mp4 file (optionally blending in an audio stream).
See the CameraToMpegTest source for an example of doing exactly this. (It also demonstrates the use of an OpenGL ES fragment shader to perform a trivial edit on the video as it's being recorded.)
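A rough illustration of that Surface-input path (API 18+), not taken from CameraToMpegTest itself; the resolution, bitrate, and output path are placeholder choices:
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.view.Surface;

import java.io.IOException;

public class SurfaceEncoderSketch {
    // Requires API 18+. Instead of feeding YUV byte arrays, point the camera/OpenGL output at inputSurface.
    public static MediaCodec createSurfaceEncoder(String outputPath) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface); // encoder pulls frames from a Surface
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface(); // hand this Surface to the camera pipeline
        encoder.start();

        // MediaMuxer packages the raw AVC output into an .mp4 with timestamps; drain the encoder as usual,
        // call muxer.addTrack() on INFO_OUTPUT_FORMAT_CHANGED, then writeSampleData() for each buffer.
        MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // ... (draining loop omitted; see CameraToMpegTest for the full flow)
        return encoder;
    }
}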
You can convert color spaces like this, if you have set the preview color space to YV12:
public static byte[] YV12toYUV420PackedSemiPlanar(final byte[] input, final byte[] output, final int width, final int height) {
    /*
     * COLOR_TI_FormatYUV420PackedSemiPlanar is NV12.
     * We convert by putting the corresponding U and V bytes together (interleaved).
     */
    final int frameSize = width * height;
    final int qFrameSize = frameSize / 4;

    System.arraycopy(input, 0, output, 0, frameSize); // Y
    for (int i = 0; i < qFrameSize; i++) {
        output[frameSize + i * 2] = input[frameSize + i + qFrameSize]; // Cb (U)
        output[frameSize + i * 2 + 1] = input[frameSize + i];          // Cr (V)
    }
    return output;
}
Or
public static byte[] YV12toYUV420Planar(byte[] input, byte[] output, int width, int height) {
    /*
     * COLOR_FormatYUV420Planar is I420, which is like YV12 but with U and V reversed.
     * So we just have to swap U and V.
     */
    final int frameSize = width * height;
    final int qFrameSize = frameSize / 4;

    System.arraycopy(input, 0, output, 0, frameSize);                               // Y
    System.arraycopy(input, frameSize, output, frameSize + qFrameSize, qFrameSize); // Cr (V)
    System.arraycopy(input, frameSize + qFrameSize, output, frameSize, qFrameSize); // Cb (U)
    return output;
}
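A hedged usage sketch for these converters: allocate the output buffer once (a 4:2:0 frame is width*height*3/2 bytes) and pick the variant that matches the color format the encoder was configured with; encoderWantsSemiPlanar is a hypothetical flag and offerEncoder is the question's method:
// Reused output buffer; a YUV 4:2:0 frame is width*height*3/2 bytes.
private byte[] convertedFrame;

// Called from onPreviewFrame with a YV12 buffer (preview format set to ImageFormat.YV12).
private void encodePreviewFrame(byte[] yv12, int width, int height, boolean encoderWantsSemiPlanar) {
    if (convertedFrame == null) {
        convertedFrame = new byte[width * height * 3 / 2];
    }
    if (encoderWantsSemiPlanar) {
        YV12toYUV420PackedSemiPlanar(yv12, convertedFrame, width, height); // NV12-style encoders
    } else {
        YV12toYUV420Planar(yv12, convertedFrame, width, height); // COLOR_FormatYUV420Planar encoders
    }
    offerEncoder(convertedFrame); // the question's method, feeding MediaCodec
}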
You can query the MediaCodec for its supported color formats and query your preview format.
The problem is that some MediaCodecs only support proprietary packed YUV formats that you can't get from the preview.
In particular 2130706688 = 0x7F000100 = COLOR_TI_FormatYUV420PackedSemiPlanar.
The default format for the preview is 17 = ImageFormat.NV21, i.e. YCrCb 420 semi-planar. (The numeric value 17 also happens to equal MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV411Planar, but that is a different constant from a different enum, not the same format.)
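As a rough illustration of that query (not from the original answer), you can enumerate the color formats the device's AVC encoders advertise via MediaCodecList:
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

// Lists the color formats the device's AVC encoders accept, so you know whether
// COLOR_FormatYUV420Planar / SemiPlanar (or only a vendor format) is available.
public class AvcColorFormatDump {
    public static void dump() {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) {
                continue;
            }
            for (String type : info.getSupportedTypes()) {
                if (!type.equalsIgnoreCase("video/avc")) {
                    continue;
                }
                MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
                for (int colorFormat : caps.colorFormats) {
                    // 19 = YUV420Planar (I420), 21 = YUV420SemiPlanar (NV12), 0x7F000100 = TI packed semi-planar
                    System.out.println(info.getName() + " supports color format " + colorFormat);
                }
            }
        }
    }
}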
If you did not explicitly request another pixel format, the camera preview buffers will arrive in a YUV 420 format known as NV21, for which COLOR_FormatYCrYCb is the MediaCodec equivalent.
Unfortunately, as other answers on this page mention, there is no guarantee that on your device, the AVC encoder supports this format. Note that there exist some strange devices that do not support NV21, but I don't know any that can be upgraded to API 16 (hence, have MediaCodec).
Google documentation also claims that YV12 planar YUV must be supported as camera preview format for all devices with API >= 12. Therefore, it may be useful to try it (the MediaCodec equivalent is COLOR_FormatYUV420Planar which you use in your code snippet).
Update: as Andrew Cottrell reminded me, YV12 still needs chroma swapping to become COLOR_FormatYUV420Planar.
