What I have developed so far is the capability to write out various devices' raw information using the standard DngCreator scheme, as below.
On one device I am encountering, however (the HTC 10), the Image class contains planar information whose row stride is larger than the width. I understand that this can happen with images, but I can't find a way to correct for it with the SDK available to us.
ByteBuffer byteBuffer = ByteBuffer.wrap(cameraImageF.getRawBytes());
byteBuffer.rewind();
dngCreator.writeByteBuffer(new FileOutputStream(rawLoggerFileF),
        new Size(cameraImageF.getRawImageSize().getWidth(),
                 cameraImageF.getRawImageSize().getHeight()),
        byteBuffer, 0);
I hold onto the bytes from the original Image and do substantial calculations between the time the frames are captured and the time they are written out (that is the point of the application), so I need to release the Image in order to keep receiving additional frames from the camera.
Now, this approach works fine on various devices (Samsung S7, Nexus 5, Nexus 6p, etc.). However, on the HTC 10, the stride is 16 bytes longer per row, and there seems to be no way of letting the DngCreator know that.
Underneath in the source code, writeByteBuffer defaults to an internal rowStride = width * pixelStride. There is no parameter for passing in a different stride, and on this device the actual rowStride does not equal that default.
dngCreator.saveImage(OutputStream, Image) uses the internal Image's stride when it writes out to a buffer. However, I can't hold on to an Image from the camera, because it needs to be released and it is not a cloneable object.
I am a bit lost and trying to understand how to write out a valid .dng for a photograph that has rowStride > width.
You'll have to remove the extra bytes manually - that is, copy the raw image to a new ByteBuffer, dropping the padding bytes at the end of each row. Something like:
byte[] rawBytes = cameraImageF.getRawBytes();
int width = cameraImageF.getRawImageSize().getWidth();
int height = cameraImageF.getRawImageSize().getHeight();
// 2 bytes per pixel (RAW16), with no row padding in the destination
ByteBuffer dst = ByteBuffer.allocate(width * height * 2);
for (int row = 0; row < height; row++) {
    // copy only the valid pixels of each row, skipping the stride padding
    dst.put(rawBytes, row * cameraImageF.getRawImageRowStride(), width * 2);
}
dst.rewind();
dngCreator.writeByteBuffer(new FileOutputStream(rawLoggerFileF),
        new Size(width, height), dst, 0);
That's of course not great for performance, but since DngCreator won't let you specify a row stride with the ByteBuffer interface, it's your only option.
Is there a reason you can't just increase your RAW ImageReader's maxImages count to a higher value, so that you can hold on to the Image until you're done processing it?
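For reference, a minimal sketch of that approach, assuming rawSize is the RAW output size you already selected (the variable names here are illustrative):

ImageReader rawReader = ImageReader.newInstance(
        rawSize.getWidth(), rawSize.getHeight(),
        // The last parameter of ImageReader.newInstance() is maxImages: the
        // number of Images that may be acquired simultaneously. Raising it
        // lets you keep a frame for processing while the camera keeps
        // delivering new ones.
        ImageFormat.RAW_SENSOR, 4 /* maxImages */);

Each acquired Image still has to be closed when you are done with it (e.g. after dngCreator.saveImage()), otherwise the acquire calls start throwing IllegalStateException once maxImages Images are outstanding.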
Related
I'm trying to calculate the remaining number of photos that can be taken using my custom camera and show that count to the user. I tried with the following code:
private void numberOfPhotosAvailable() {
    long photosAvailable = 0;
    StatFs stat = new StatFs(Environment.getExternalStorageDirectory().getPath());
    resolution = getResolution();
    long bytesPerPhoto = resolution / 1048576;
    long bytesAvailable = stat.getAvailableBlocksLong() * stat.getBlockSizeLong();
    long megAvailable = bytesAvailable / 1048576;
    System.out.println("Megs :" + megAvailable);
    photosAvailable = megAvailable / bytesPerPhoto;
    tvAvailablePhotos.setText("" + photosAvailable);
}
The method for getting the resolution:
public long getResolution() {
    long resolution = 0;
    Camera.Parameters params = mCamera.getParameters();
    List<Camera.Size> sizes = params.getSupportedPictureSizes();
    Camera.Size size = sizes.get(0);
    int width = size.width;
    int height = size.height;
    resolution = (long) width * height;
    return resolution;
}
PROBLEM:
There is a big difference between the count shown by the phone's camera app and the count shown in my app.
So what is the proper way of doing this ?
NOTE: I will only be capturing images in the highest quality available, so I am calculating the count for that one resolution only.
It is impossible to know the exact size of the output image for JPEG/PNG compression in advance. The compression algorithms are optimized to use as little space as possible while preserving the image pixels (although JPEG is slightly lossy).
However, you can estimate the number of images by taking multiple sample photos and calculating the average compression ratio.
From Wikipedia:
JPEG typically achieves 10:1 compression with little perceptible loss in image quality.
So the estimated storage size can be calculated as:
int bytes = width * height * 2 / compressionRatio;
Here the pixel count is multiplied by 2 because the RGB_565 config needs 2 bytes to store 1 pixel.
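Putting it together, a rough sketch of the whole estimate (the 5376x3024 size and the 10:1 ratio are just the numbers quoted in this thread; StatFs.getAvailableBytes() needs API 18+):

// Uncompressed size at 2 bytes per pixel, shrunk by the typical JPEG ratio.
int width = 5376, height = 3024;   // the single highest picture size captured at
int compressionRatio = 10;         // "typically 10:1" per the Wikipedia quote
long bytesPerPhoto = (long) width * height * 2 / compressionRatio;

StatFs stat = new StatFs(Environment.getExternalStorageDirectory().getPath());
long photosAvailable = stat.getAvailableBytes() / bytesPerPhoto;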
I think there is no ultimate solution for "images left", because there are too many different devices and cameras, and it also depends heavily on image content.
I found a good explanation for that here.
As suggested before, you can predict the image size and calculate the number of images left based on the device's free space. The best way to get that prediction is to try it on your device first.
After the user starts using the app, you can include his last 10 photo sizes in the calculation.
If it is not a key feature, you can just present it as a prediction based on usage, not as a binding fact.
P.S.
I am using a Samsung Galaxy S7 edge, and there is no images-left count in its camera app at all (or I am just unable to find it).
Well, after a lot of researching and googling, I came to this site.
According to this site, the following are the steps to get the file size:
Multiply the detector's number of horizontal pixels by the number of vertical pixels to get the total number of pixels of the detector.
Multiply the total number of pixels by the bit depth of the detector (16 bit, 14 bit, etc.) to get the total number of bits of data.
Divide the total number of bits by 8 to get the file size in bytes.
Divide the number of bytes by 1024 to get the file size in kilobytes. Divide by 1024 again to get the file size in megabytes.
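As a quick sketch of that arithmetic (the 12 MP / 16-bit numbers below are purely illustrative, not from the site):

long pixels = 4000L * 3000L;            // step 1: total pixels (a 12 MP detector)
long bits   = pixels * 16;              // step 2: a 16-bit detector
long bytes  = bits / 8;                 // step 3: 24,000,000 bytes
double megs = bytes / 1024.0 / 1024.0;  // step 4: about 22.9 MB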
So I followed the above steps with my detector's resolution of 5376x3024 and finally got 39 MB as the answer for the image size.
But the image taken by the camera was around 8-10 MB in size, which was still nowhere near the result above.
My phone (HTC Desire 10 pro) has a pro-mode setting available, in which photos are captured as raw images. When I checked the size of a captured raw image, I was amazed: the raw file was indeed around 39 MB, which confirms that the above steps correctly calculate an image's original (uncompressed) size.
CONCLUSION
With the above steps I came to the conclusion that the phone's software does indeed use compression algorithms to make the image size smaller. What I was comparing against were actually compressed images, hence the different count.
PROBABLE SOLUTION
The approach I am now taking is to get the last image clicked with my camera, get its file size, and show the count according to that file size. This will also be an approximate result, but I don't think there is any way of getting an exact count.
This is the code I am using to implement the above solution:
private void numberOfPhotosAvailable() {
    long photosAvailable = 0;
    StatFs stat = new StatFs(Environment.getExternalStorageDirectory().getPath());
    File lastFile = utils.getLatestFilefromDir(prefManager.getString(PrefrenceConstants.STORAGE_PATH));
    if (lastFile != null) {
        // floating-point division: integer division would yield 0 for files under 1 MB
        double fileSize = (double) lastFile.length() / (1024 * 1024);
        long bytesAvailable = stat.getAvailableBlocksLong() * stat.getBlockSizeLong();
        long megAvailable = bytesAvailable / 1048576;
        System.out.println("Megs :" + megAvailable);
        photosAvailable = (long) (megAvailable / fileSize);
        tvAvailablePhotos.setText("" + photosAvailable);
    } else {
        tvAvailablePhotos.setVisibility(View.INVISIBLE);
    }
}
I think you can check the DCIM directory (the default camera directory), calculate the total size of all the files in it, and divide by the number of files to get the average size of the images the camera is capturing.
Do the above steps in an AsyncTask.
You have already calculated the remaining space in bytes; divide that by the average size and you will get the approximate number of images you can still capture.
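A sketch of that approach (the "Camera" subfolder name is an assumption; do the listing inside the AsyncTask):

// Average the sizes of the photos already in the default camera directory.
File dcim = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_DCIM), "Camera");
File[] files = dcim.listFiles();
if (files != null && files.length > 0) {
    long totalBytes = 0;
    for (File f : files) {
        totalBytes += f.length();
    }
    long averageBytes = totalBytes / files.length;
    long photosLeft = bytesAvailable / averageBytes;  // bytesAvailable from StatFs, as before
}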
I am implementing an app that uses real-time image processing on live images from the camera. It was working, with limitations, using the now-deprecated android.hardware.Camera; for improved flexibility and performance I'd like to use the new android.hardware.camera2 API. However, I'm having trouble getting the raw image data for processing. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware.)
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(Class), the recommended surface for obtaining image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(mSurfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried them all (plugged in as 'format' above); all support the set resolution according to getOutputSizes(format), but none of them gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result - I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled significantly horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears not to crop the image in any way, but instead scales it non-uniformly to reach the requested frame size. The images have the correct proportions when a 16:9 size is requested (of which there are, unfortunately, only two higher-res ones). This seems a bit odd to me - it doesn't appear to happen when requesting JPEG, with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
That's not a really satisfactory solution, because it means you can't rely on the list of output formats: you first have to find the sensor size, look for sizes with the same aspect ratio, and then downsample the image yourself (as needed) - for example along the lines of the sketch below...
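A minimal sketch of that size-selection step, assuming characteristics holds the device's CameraCharacteristics (names are illustrative):

// Pick a YUV output size whose aspect ratio matches the active sensor array,
// so the streamer has no reason to scale non-uniformly.
Rect active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float sensorRatio = (float) active.width() / active.height();
StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size chosen = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    if (Math.abs((float) s.getWidth() / s.getHeight() - sensorRatio) < 0.01f) {
        chosen = s;  // first size matching the sensor's aspect ratio
        break;
    }
}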
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment it out, and then scale the image up when displaying it.
int[] rgba = new int[width * height];
//getImage(rgba);
nativeLoader.convertImage(width, height, data, rgba);
Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    //canvas.drawBitmap(bmp, 0, 0, null); //configureTransform(width, height), null);
    //canvas.drawBitmap(bmp, configureTransform(width, height), null);
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    //canvas.drawBitmap(bmp, (canvas.getWidth() - 320) / 2, (canvas.getHeight() - 240) / 2, null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.
Given an IGraphicBufferProducer, I create a Surface and then retrieve an ANativeWindow from it. Using ANativeWindow_lock I get a pointer to the buffer's bits, and I fill the buffer through that pointer. The problem is that whatever I draw into this buffer is restricted to less than 25% of the screen. Keep in mind that buffer.width and buffer.height are very close to the resolution of the screen itself.
My question is, why does the buffer only cover a small portion of the screen? And how do I make sure it covers most if not all of the screen? For reference, here's the code:
ANativeWindow_Buffer buffer;
// window is created from a "new Surface(sp<IGraphicBufferProducer>)"
if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
    // For testing purposes just put grey in the buffer
    memset(buffer.bits, 0x99, buffer.width * buffer.height);
    ANativeWindow_unlockAndPost(window);
}
I have a guess, although I've never used an ANativeWindow_Buffer myself. memset fills a certain number of bytes, not pixels. How many bits per pixel is your format? If the value is greater than 8, you aren't filling the full buffer. Since it's probably 4 bytes per pixel (AARRGGBB), you probably need to multiply buffer.width * buffer.height by 4 - filling only a quarter of the buffer would explain why only about 25% of the screen is covered.
I want to send a fax from my app.
A fax document has a resolution of 1728 x 2444 pixels.
So I create a bitmap, add text and/or pictures and encode it to CCITT (Huffman):
Bitmap image = Bitmap.createBitmap(1728, 2444, Config.ALPHA_8);
Canvas canvas = new Canvas(image);
canvas.drawText("This is a fax", 100, 100, new Paint());

// capture the dimensions before recycling the bitmap
int width = image.getWidth();
int height = image.getHeight();
ByteBuffer buffer = ByteBuffer.allocateDirect(width * height);
image.copyPixelsToBuffer(buffer);
image.recycle();
encodeCCITT(buffer, width, height);
This works perfectly on my Galaxy SII (64 MB heap size), but not in the emulator (24 MB). After creating the second fax page I get "4223232-byte external allocation too large for this process...java.lang.OutOfMemoryError" while allocating the buffer.
I already reduced color depth from ARGB_8888 (4 byte per pixel) to ALPHA_8 (1 byte), because fax pages are monochrome anyway.
I need this resolution and I need to have access to the pixels for encoding.
What is the best way?
Android doesn't support 1-bpp bitmaps, and the Java heap size limit of 24/32/48 MB is part of Android itself. Real devices can't allocate more than the Java heap limit no matter how much RAM they have. There appear to be only two possible solutions:
1) Work within the limitations of the Java heap.
2) Use native code (NDK).
In native code you can allocate the entire available RAM of the device. The only downside is that you will need to write your own code to edit and encode your bitmap.
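The Java side of that would be little more than a declaration; a sketch (the library and method names here are made up for illustration):

public class NativeFax {
    static {
        System.loadLibrary("faxencoder");  // hypothetical NDK library
    }

    // Renders the page and runs the CCITT encoder entirely in native code,
    // where the page buffer is not subject to the Java heap limit.
    public static native byte[] renderAndEncodePage(int width, int height, String text);
}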
In addition to BitBank's already good answer: you also have to null the reference if you want the garbage collector to actually clean up your Bitmap. The documentation for recycle() states:
This is an advanced call, and normally need not be called, since the normal GC process will free up this memory when there are no more references to this bitmap.
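So, for a Bitmap held in a field (mPage here is hypothetical), something like:

mPage.recycle();  // free the pixel data right away
mPage = null;     // drop the last reference so the GC can collect the Bitmap object itself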
Instead of copying all pixels to a ByteBuffer at once, you can copy them step by step - here with an int[] array - so you need less memory:
int countLines = 100;
int[] pixels = new int[width * countLines];
for (int y = 0; y < height; y += countLines) {
    // clamp the last chunk in case height is not a multiple of countLines
    int lines = Math.min(countLines, height - y);
    image.getPixels(pixels, 0, width, 0, y, width, lines);
    // do something with pixels...
    image.setPixels(pixels, 0, width, 0, y, width, lines);
}
Honestly, I don't like to ask things this way, but I have no clue about this one!
Have you seen this before??
You can see that the image is scrambled following some defined pattern. This happens only on some (low-end) devices, with non-power-of-two images (FBOs). It works well on other devices.
What I do is load an Android Bitmap into an FBO (this works OK, as it shows up fine on the screen). I do some editing (I paste a sticker, which in the image appears in the right place), and finally save the FBO into a Bitmap again. It works fine for a 512x512 FBO (the FBO has the image's size), but not for this one (507x800).
Any ideas? I don't post code because I have no clue which part is relevant; please tell me and I'll add it.
This is the GL call to retrieve the pixels from the FBO:
public Buffer toPixelBuffer() {
    final int w = this.getWidth();  // colorTexture width
    final int h = this.getHeight();
    final ByteBuffer pixels = BufferUtils.newByteBuffer(w * h * 4);
    Gdx.gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1);
    Gdx.gl.glReadPixels(0, 0, w, h, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixels);
    pixels.clear();  // reset position to 0 so the caller can read from the start
    return pixels;
}
I also don't have a buggy device with me to test right now :(
Thank you!
I had the exact same problem. I experienced this on Galaxy Ace, Galaxy Y, and some other devices.
After a lot of testing I found out that POT textures weren't even required; keeping the texture size in 64-pixel increments did the trick. So let's say I have a 122x53 texture: I need to convert it to 128x64, and so on.
Below is the function I use to get a valid texture dimension. Call it for both width and height.
/**
 * Some GPUs, such as the "VideoCore IV HW" on the Samsung Galaxy Ace,
 * require texture (FBO) sizes to be in '64' increments (WTF!!!!)
 *
 * @param dimension Base dimension to calculate
 * @return Resolved 64-aligned dimension
 */
public static int calculate64Dimension(final int dimension)
{
    // round up to the next multiple of 64
    return (((dimension - 1) >> 6) << 6) + 64;
}