Nexus 9 Camera2 API - YUV_420_888 vs. getOutputSizes() - android

I'm implementing the Camera2 API with the YUV_420_888 format on a Nexus 9. I checked the output sizes and wanted to use the largest (8MP, 3280 x 2460) to save. However, the saved output appears as static lines, similar to how old TVs looked without a signal. I would like to stick with YUV_420_888, since my end goal is to save grayscale data (the Y component).
I originally thought it was a camera bandwidth issue, but the same thing happened at some of the smaller sizes (320 x 240). The problem did not go away even when I increased the frame duration and decreased the preview size to save bandwidth. Some of the other sizes DID work (2048 x 1536, 1280 x 720), but I did not check all of them.
I'm starting to think getOutputSizes() may not necessarily be accurate. It gave me the same results for all formats except RAW_SENSOR (JPEG, YUV_420_888, YV12). Has anyone encountered this or found a solution?

Figured out the issue: I was not taking the rowStride of the returned pixels into account, so I had to run a for-loop to extract the non-padded data before saving:
// The Y plane's rows may be padded: rowStride can be larger than the image width.
int myRowStride = mImage.getPlanes()[0].getRowStride();
int iSkippedBytes = 0;
for (int i = 0; i < mStillSize.getWidth() * mStillSize.getHeight(); i++) {
    // At the start of each new row, skip the previous row's padding bytes.
    if (i % mStillSize.getWidth() == 0 && i != 0)
        iSkippedBytes += myRowStride - mStillSize.getWidth();
    imageBytes[i] = bytes[i + iSkippedBytes];
}
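For larger images, a per-row bulk copy avoids the per-pixel index math. A minimal sketch, assuming the Y plane has already been read into bytes as above (the packed array name is illustrative):

// Copy the Y plane row by row, dropping the rowStride padding at the end of each row.
int width = mStillSize.getWidth();
int height = mStillSize.getHeight();
byte[] packed = new byte[width * height];
for (int row = 0; row < height; row++) {
    System.arraycopy(bytes, row * myRowStride, packed, row * width, width);
}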

Related

How to calculate the remaining photo count in a custom camera in Android

I'm trying to calculate the remaining number of photos that can be taken with my custom camera and show that count to the user. I tried the following code:
private void numberOfPhotosAvailable() {
    long photosAvailable = 0;
    StatFs stat = new StatFs(Environment.getExternalStorageDirectory().getPath());
    resolution = getResolution();
    long bytesPerPhoto = resolution / 1048576;
    long bytesAvailable = (long) stat.getAvailableBlocksLong() * (long) stat.getBlockSizeLong();
    long megAvailable = bytesAvailable / 1048576;
    System.out.println("Megs :" + megAvailable);
    photosAvailable = megAvailable / bytesPerPhoto;
    tvAvailablePhotos.setText("" + photosAvailable);
}
Method for getting the resolution:
public long getResolution() {
    long resolution = 0;
    Camera.Parameters params = mCamera.getParameters();
    List<Camera.Size> sizes = params.getSupportedPictureSizes();
    Camera.Size size = sizes.get(0);
    int width = size.width;
    int height = size.height;
    resolution = width * height;
    return resolution;
}
PROBLEM:
There is a large difference between the count shown in the phone's built-in camera app and the count shown in my app.
So what is the proper way of doing this?
NOTE: I will only be capturing images at the highest quality available, so I am calculating the count for a single resolution only.
It is impossible to know the exact size of the output image before JPEG/PNG compression. The compression algorithms are optimized to use as little space as possible while preserving the image content (JPEG being lossy, PNG lossless).
However, you can estimate the number of images by taking multiple sample photos and calculating the average compression ratio.
From Wikipedia:
JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
So the estimated storage size can be calculated as:
int bytes = width * height * 2 / compressionRatio;
Here it is multiplied by 2 because in the RGB_565 config it takes 2 bytes to store 1 pixel.
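Putting the estimate together with the free space, a minimal sketch (the 10:1 ratio and the 2 bytes/pixel RGB_565 figure come from above; the method name is illustrative):

// Rough count of photos that fit in the remaining space, assuming
// ~2 bytes per pixel before compression and a ~10:1 JPEG ratio.
private long estimatePhotosRemaining(int width, int height) {
    final int compressionRatio = 10; // typical JPEG ratio quoted above
    long bytesPerPhoto = (long) width * height * 2 / compressionRatio;
    StatFs stat = new StatFs(Environment.getExternalStorageDirectory().getPath());
    long bytesAvailable = stat.getAvailableBlocksLong() * stat.getBlockSizeLong();
    return bytesAvailable / bytesPerPhoto;
}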
I think there is no ultimate solution for an "images left" count, because there are too many different devices and cameras, and it also depends heavily on image content.
One good explanation for that I have found here.
As suggested before, you can predict the image size and calculate the count of images left from the device's free space. The best way to get that prediction is to try it on your device first.
After the user starts using the app, you can include their last 10 photo sizes in the calculation.
If it is not a key feature, you can present it as a prediction based on usage, not as a binding fact.
P.S.
I am using a Samsung Galaxy S7 edge, and there is no "images left" count in the camera app at all (or I am just unable to find it).
Well, after a lot of researching and googling, I came to this site.
According to it, these are the steps to get the file size:
1. Multiply the detector's number of horizontal pixels by the number of vertical pixels to get the total number of pixels.
2. Multiply the total number of pixels by the bit depth of the detector (16 bit, 14 bit, etc.) to get the total number of bits of data.
3. Divide the total number of bits by 8 to get the file size in bytes.
4. Divide the number of bytes by 1024 to get the file size in kilobytes; divide by 1024 again for megabytes.
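The same steps as a small helper (a sketch; the method name is illustrative):

// Uncompressed size in megabytes for a sensor of the given resolution
// and bit depth, following the four steps above.
static double rawSizeMegabytes(long widthPx, long heightPx, int bitDepth) {
    long totalBits = widthPx * heightPx * bitDepth; // steps 1 and 2
    long totalBytes = totalBits / 8;                // step 3
    return totalBytes / 1024.0 / 1024.0;            // step 4
}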
I followed the above steps with my detector's resolution of 5376 x 3024 and finally got 39 MB as the image size.
But the image taken by the camera was around 8-10 MB, nowhere near the result above.
My phone (HTC Desire 10 Pro) has a pro mode in its settings, in which photos are captured as raw images. When I checked the size of a captured raw image, I was surprised: the raw file was indeed around 39 MB, which confirms that the steps above are correct for calculating an image's original (uncompressed) size.
CONCLUSION
With the above steps I came to the conclusion that the phone's software does use compression algorithms to reduce the image size. What I was comparing against were compressed images, hence the different counts.
PROBABLE SOLUTION
The approach I am now taking is to get the last image captured by my camera, read its file size, and compute the count from that. This is still an approximation, but I don't think there is any way to get an exact count.
This is the code I am using to implement the above solution:
private void numberOfPhotosAvailable() {
    long photosAvailable = 0;
    StatFs stat = new StatFs(Environment.getExternalStorageDirectory().getPath());
    File lastFile = utils.getLatestFilefromDir(prefManager.getString(PrefrenceConstants.STORAGE_PATH));
    if (lastFile != null) {
        // Cast before dividing: plain integer division truncates the result
        // (a file under 1 MB would give 0 and a division by zero below).
        double fileSize = (double) lastFile.length() / (1024 * 1024);
        long bytesAvailable = stat.getAvailableBlocksLong() * stat.getBlockSizeLong();
        long megAvailable = bytesAvailable / 1048576;
        System.out.println("Megs :" + megAvailable);
        photosAvailable = (long) (megAvailable / fileSize);
        tvAvailablePhotos.setText("" + photosAvailable);
    } else {
        tvAvailablePhotos.setVisibility(View.INVISIBLE);
    }
}
I think you can check the DCIM directory (the default camera directory), sum the sizes of all the files in it, and divide by the number of files to get the average size of the images the camera is capturing.
Do the above steps in an AsyncTask.
You have already calculated the remaining space in bytes; divide it by the average size and you get the approximate number of images you can still capture.
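A minimal sketch of that averaging, assuming photos land in the standard DCIM/Camera folder (the folder name and the .jpg filter are assumptions):

// Average size of existing photos, used to estimate how many more will fit.
static long averagePhotoSizeBytes() {
    File dcim = new File(Environment.getExternalStoragePublicDirectory(
            Environment.DIRECTORY_DCIM), "Camera");
    File[] photos = dcim.listFiles((dir, name) -> name.endsWith(".jpg"));
    if (photos == null || photos.length == 0) return -1; // nothing to average
    long total = 0;
    for (File f : photos) total += f.length();
    return total / photos.length;
}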

Saving off raw with Camera2 on a camera with rowstride > width

What I have developed so far is the capability to write out various devices' raw information using the standard DngCreator scheme, as below.
On one device I am encountering, however (HTC 10), the Image class contains planar information whose row stride is larger than the width. I understand this can happen with images, but I can't find a way to correct for it with the SDK available to us.
ByteBuffer byteBuffer = ByteBuffer.wrap(cameraImageF.getRawBytes());
byteBuffer.rewind();
dngCreator.writeByteBuffer(new FileOutputStream(rawLoggerFileF),
        new Size(cameraImageF.getRawImageSize().getWidth(),
                 cameraImageF.getRawImageSize().getHeight()),
        byteBuffer, 0);
I hold onto the bytes from the original Image class and do substantial calculations between when the frames were taken and when they are written out (this is the point of the application). So I need to let go of the Image in order to keep getting additional frames from the camera.
Now, this approach works fine on various devices (Samsung S7, Nexus 5, Nexus 6p, etc.). On the HTC 10, however, the stride is 16 bytes longer per row, and it seems I have no way of letting the DngCreator know that.
Underneath in the source code, writeByteBuffer defaults to an internal rowStride = width * pixelStride, and there is no parameter for passing in a different stride; the actual rowStride does not equal this default.
dngCreator.saveImage(OutputStream, Image) uses the Image's own stride when writing out to a buffer, but I can't hold onto an Image from the camera: it needs to be released, and it is not a cloneable object.
I am a bit lost as to how to write out a valid .dng for a photograph that has rowStride > width.
You'll have to remove the extra bytes manually - that is, copy the raw image into a new ByteBuffer, dropping the padding bytes at the end of each row. Something like:
byte[] rawBytes = cameraImageF.getRawBytes();
int width = cameraImageF.getRawImageSize().getWidth();
int height = cameraImageF.getRawImageSize().getHeight();
// RAW16 is 2 bytes per pixel; the destination buffer holds tightly packed rows.
ByteBuffer dst = ByteBuffer.allocate(width * height * 2);
for (int row = 0; row < height; row++) {
    // Copy one row of pixel data, skipping the stride padding at its end.
    dst.put(rawBytes, row * cameraImageF.getRawImageRowStride(), width * 2);
}
dst.rewind();
dngCreator.writeByteBuffer(new FileOutputStream(rawLoggerFileF),
        new Size(width, height), dst, 0);
That's of course not great for performance, but since DngCreator won't let you specify a row stride with the ByteBuffer interface, it's your only option.
Is there a reason you can't just increase your RAW ImageReader's maxImages to a higher value, so that you can hold on to the Image until you're done processing it?
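For reference, that count is fixed when the reader is created; a minimal sketch (the rawSize variable and the count of 5 are illustrative):

// Allocate more in-flight images so a RAW Image can be held while processing.
ImageReader rawReader = ImageReader.newInstance(
        rawSize.getWidth(), rawSize.getHeight(),
        ImageFormat.RAW_SENSOR, /* maxImages */ 5);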

Getting QualComm encoders to work via MediaCodec API

I am trying to do hardware encoding (AVC) of an NV12 stream using the Android MediaCodec API.
When using OMX.qcom.video.encoder.avc, resolutions 1280x720 and 640x480 work fine, while others (e.g. 640x360, 320x240, 800x480) produce output where the chroma component appears shifted (please see the snapshot).
I have double-checked that the input image is correct by saving it to a JPEG file.
This problem only occurs on QualComm devices (e.g. Samsung Galaxy S4).
Has anyone got this working properly? Is any additional setup necessary, or are there quirks to work around?
The decoder (MediaCodec) has its own MediaFormat, which can be retrieved using getOutputFormat(). The returned instance can be printed to the log, and there you can see some useful information. In your case a value like "slice-height" could be useful: I suspect it equals the height for 1280x720 and 640x480 but differs for the other resolutions. You should probably use this value to compute the chroma offset.
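A minimal sketch of that inspection, assuming a configured codec and known width/height (the stride x slice-height chroma-offset formula is the usual semi-planar layout, an assumption rather than something stated above):

// Dump the codec's output format and derive where the chroma plane starts.
MediaFormat format = codec.getOutputFormat();
Log.d(TAG, "output format: " + format);
int stride = format.containsKey("stride")
        ? format.getInteger("stride") : width;
int sliceHeight = format.containsKey("slice-height")
        ? format.getInteger("slice-height") : height;
int chromaOffset = stride * sliceHeight; // bytes of luma before the chroma data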
Yep, OMX.qcom.video.encoder.avc does that, but not on all devices/Android versions. On my Nexus 4 with Android 4.3 the encoder works fine, but not on my S3 (running 4.1).
The solution for an S3 running 4.1 with OMX.qcom.video.encoder.avc (it seems some S3s have another encoder) is to add 1024 bytes of padding just before the chroma plane.
// The encoder may need some padding before the chroma plane
int padding = 1024;
if ((mWidth == 640 && mHeight == 480) || (mWidth == 1280 && mHeight == 720)) padding = 0;

// Interleave the U and V channels (YV12 planar in, semi-planar out)
System.arraycopy(buffer, 0, tmp, 0, mYSize); // Y
for (i = 0; i < mUVSize; i++) {
    tmp[mYSize + i * 2 + padding] = buffer[mYSize + i + mUVSize]; // Cb (U)
    tmp[mYSize + i * 2 + 1 + padding] = buffer[mYSize + i];       // Cr (V)
}
return tmp;
The camera is using YV12 and the encoder COLOR_FormatYUV420SemiPlanar.
Your snapshot shows the same kind of artefacts I had; you may need a similar hack for some resolutions, maybe with another padding length.
You should also avoid resolutions that are not a multiple of 16, apparently even on 4.3 (http://code.google.com/p/android/issues/detail?id=37769)!
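If you need to pick a safe size programmatically, rounding each dimension down to a multiple of 16 is a simple guard (a sketch, not from the original answer):

// Round a dimension down to the nearest multiple of 16 to stay
// within what these encoders reliably support.
static int alignTo16(int dimension) {
    return dimension & ~15;
}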

Constant and probably inaccurate Frame Rate

I am using the following code to calculate the frame rate in Unity3D 4.0. It is applied to an NGUI label.
// Fields assumed by this snippet (as in the standard script-wiki FPS counter):
public float updateInterval = 0.5f;
private float accum;    // FPS accumulated over the interval
private int frames;     // frames drawn over the interval
private float timeleft; // seconds left in the current interval

void Update () {
    timeleft -= Time.deltaTime;
    accum += Time.timeScale / Time.deltaTime;
    ++frames;

    // Interval ended - update GUI text and start new interval
    if (timeleft <= 0.0f)
    {
        // display two fractional digits (f2 format)
        float fps = accum / frames;
        string format = System.String.Format("{0:F2} FPS", fps);
        FPSLabel.text = format;
        timeleft = updateInterval;
        accum = 0.0F;
        frames = 0;
    }
}
It was working previously, or at least seemed to be. Then I had some problems with physics, so I changed the fixed timestep to 0.005 and the max timestep to 0.017. Yeah, I know that's very low, but my game works fine with it.
Now the problem is that the FPS code above returns 58.82 all the time. I've checked on separate devices (Android); it just doesn't budge. I thought it might be correct, but in the profiler I can clearly see ups and downs. So obviously something is fishy.
Am I doing something wrong? I copied the code from somewhere (it must have been the script wiki). Is there any other way to get the correct FPS?
Taking cues from this question, I've tried all the methods in its first answer. Even the following code returns a constant 58.82 FPS. This happens only on the Android device; in the editor I can see the FPS vary.
float fps = 1.0f/Time.deltaTime;
So I checked the value of Time.deltaTime, and it is a constant 0.017 on the device. How can this be possible? :-/
It seems to me that the FPS counter is correct, and the constant 58.82 FPS is caused by the changes to your physics time settings. The physics engine probably cannot finish its computation within the available timestep (0.005, which is very low), so it keeps computing until it reaches the maximum timestep, in your case 0.017. That means every frame takes 0.017 seconds plus any overhead from rendering and scripts, and 1 / 0.017 ≈ 58.82.
Maybe you can fix your physics problems in other ways, without lowering the fixed timestep so much.

OpenGL ES 1.1, ETC1 texture compression and mipmapping (complete set of mipmaps error)

When I activate mipmapping on an uncompressed texture, everything works perfectly.
When I do it on an ETC1 texture, the texture is blank, most likely because the complete set of mipmaps was not given.
The code is very simple and works on iPhone (with PVR compression, of course).
It doesn't work on Android. The mipmaps were built with an external tool and pasted together.
I stop generating mipmaps at a size of 4, because glCompressedTexImage2D returns an OpenGL error if I try to use a smaller level.
for (u32 i = 0; i <= levels; i++)
{
    size = KC_TexByte(pagex, pagey, tex_type);
    glCompressedTexImage2D(GL_TEXTURE_2D, i, type, pagex, pagey, 0, size, ptr);
    pagex = MAX(pagex / 2, 4); // never go below 4x4
    pagey = MAX(pagey / 2, 4);
    ptr += size;
    KC_Error(); // check for an OpenGL error
}
The reason your texture is blank is that a mipmapped texture is required to have a complete chain, all the way down to 1x1.
I would imagine the error you're getting with small compressed textures is because the texture format you're attempting to use (ETC1?) doesn't support those sizes. You'd have to use non-compressed images at those small sizes...
Thanks, but your solution is not quite right; I found another one.
You're right that the complete mipmap chain is required, down to 1x1.
You're wrong about the small sizes: the mipmap levels cannot use different formats.
The right way is:
Generate levels all the way down to 1x1.
Keep in mind the data is block-compressed, so the size in bytes does not divide by 4 at each step; below 8x8 the per-level size stays the same.
With sx = width and sy = height of a level:
byte = ((sx + 3) / 4) * ((sy + 3) / 4) * 8 * 2; // block count times bytes per block (note: standard ETC1 packs a 4x4 block into 8 bytes)
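On Android you can sanity-check per-level sizes against android.opengl.ETC1.getEncodedDataSize(); a sketch of walking the chain down to 1x1 (Java here, since that utility implements standard ETC1 sizing; the base size is an example value):

// Print the expected ETC1 byte size of every mipmap level down to 1x1.
int w = 256, h = 256; // base level size (example values)
for (int level = 0; ; level++) {
    System.out.println("level " + level + ": " + w + "x" + h
            + " -> " + android.opengl.ETC1.getEncodedDataSize(w, h) + " bytes");
    if (w == 1 && h == 1) break; // a complete chain ends at 1x1
    w = Math.max(w / 2, 1);
    h = Math.max(h / 2, 1);
}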
for(u32 i=0; i<=levels; i++)
Seems you'd want i < levels instead of <=, unless levels is the index of the last level rather than the level count.
