I'm working on a custom camera app, and when I try to take higher-resolution pictures, my JPEG callback is never called. When I run logcat, I see this message:
E/Camera-JNI(14689): Manually set buffer was too small! Expected 1473253 bytes, but got 768000!
As far as I know, I don't manually set the buffer size for taking a picture, but I do call addCallbackBuffer for capturing preview images.
Are those same buffers used for taking a picture as well as for the preview? The description in the Android developer docs says "Adds a pre-allocated buffer to the preview callback buffer queue." It uses the word "preview", so I wouldn't think it has anything to do with takePicture().
So where is this manually allocated buffer it speaks of coming from?
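For reference, my preview buffer setup looks roughly like this (a sketch, assuming an NV21 preview; an 800x640 NV21 frame is exactly 768000 bytes, which would match the "got 768000" in the log):

import android.graphics.ImageFormat;
import android.hardware.Camera;

class PreviewBufferSetup {
    // Sketch: the callback buffers are sized for *preview* frames only.
    static void allocatePreviewBuffers(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        Camera.Size size = params.getPreviewSize();
        int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat());
        int bufferSize = size.width * size.height * bitsPerPixel / 8;

        camera.addCallbackBuffer(new byte[bufferSize]);
        camera.setPreviewCallbackWithBuffer((data, cam) -> {
            // ... process the preview frame ...
            cam.addCallbackBuffer(data); // recycle the buffer for the next frame
        });
    }
}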
I am working with the Camera2 API and real-time image processing. I found that the method
onCaptureProgressed(CameraCaptureSession, CaptureRequest, CaptureResult)
is called on every captured frame, but I have no idea how to get a byte[] or image data from the CaptureResult.
You can't get image data from CaptureResult; it only provides image metadata.
Take a look at the Camera2Basic sample app, which captures JPEG images with an ImageReader. If you change the JPEG format to YUV, set the resolution to preview size, and set the ImageReader Surface as a target for the preview repeating request, you'll get an ImageReader.Image for every frame captured.
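A minimal sketch of that setup (previewSurface, previewSize and backgroundHandler are placeholders, and the reader's Surface must also be in the output list passed to createCaptureSession):

import android.graphics.ImageFormat;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.cam2.CaptureRequest;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.util.Size;
import android.view.Surface;

class PerFrameCapture {
    // Sketch: route every preview frame into an ImageReader as YUV_420_888.
    static void start(CameraDevice camera, CameraCaptureSession session,
                      Surface previewSurface, Size previewSize,
                      Handler backgroundHandler) throws CameraAccessException {
        ImageReader reader = ImageReader.newInstance(
                previewSize.getWidth(), previewSize.getHeight(),
                ImageFormat.YUV_420_888, 2 /* maxImages */);

        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image == null) return;
            // image.getPlanes() exposes the Y, U and V planes of this frame
            image.close(); // always close, or the reader runs out of buffers
        }, backgroundHandler);

        CaptureRequest.Builder builder =
                camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(previewSurface);
        builder.addTarget(reader.getSurface());
        session.setRepeatingRequest(builder.build(), null, backgroundHandler);
    }
}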
As far as I know, when you use the camera it crops part of the image: the application cuts out the part of the photo that goes beyond the preview rectangle.
Is there any way to get the original, full-sized image exactly as it comes off the camera's sensor?
Root access on my device is available.
I did a small demo years ago:
https://sourceforge.net/p/javaocr/code/HEAD/tree/trunk/demos/camera-utils/src/main/java/net/sf/javaocr/demos/android/utils/camera/CameraManager.java#l8
The basic idea is to set up a callback; the raw image data is then delivered via a byte array (getPreviewFrame() / onPreviewFrame) - no root access is necessary.
This data actually arrives as an mmap'ed memory buffer coming straight from the address space of the camera app, which is why no root is needed.
As this byte array does not carry any metadata, you have to get all the parameters from the Camera object yourself.
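A minimal sketch, using the legacy android.hardware.Camera API and assuming the camera is already open:

import android.hardware.Camera;

class PreviewGrabber {
    // Sketch: the byte[] carries no metadata, so read the frame size and
    // format from the Camera parameters before frames start arriving.
    static void setup(Camera camera) {
        Camera.Parameters p = camera.getParameters();
        final int width = p.getPreviewSize().width;
        final int height = p.getPreviewSize().height;
        final int format = p.getPreviewFormat(); // typically ImageFormat.NV21

        camera.setPreviewCallback((data, cam) -> {
            // For NV21: width*height luma bytes followed by interleaved V/U;
            // interpret data using the width, height and format read above.
        });
        camera.startPreview();
    }
}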
I am writing an app to grab every frame from a video, so that I can do some CV processing.
According to the description in the Android API docs, I should set MediaPlayer's surface to ImageReader.getSurface(), so that I can get every video frame in the OnImageAvailableListener callback. And it really does work on some devices and some videos.
However, on my Nexus 5 (API 24-25), I get almost all green pixels when onImageAvailable fires.
I have checked the byte[] in the Image's YUV planes, and I discovered that something must be wrong with the bytes I read from the video: most of them are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried JavaCV, but the grabber is too slow.)
I fixed my problem!
When using an Image, you should use getCropRect() to get the valid area of the Image.
For example, I get a reported dimension of 1088 when I decode a 1920*1080 frame; image.getCropRect() gives the valid region of the image, which will be 1920x1080.
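A sketch of reading only the valid region, assuming a YUV_420_888 Image whose buffer is padded beyond the crop rect:

import android.graphics.Rect;
import android.media.Image;
import java.nio.ByteBuffer;

class CropRectReader {
    // Sketch: copy the luma plane row by row, skipping the padding outside
    // the crop rect (the Y plane's pixel stride is 1 in YUV_420_888).
    // out must hold at least crop.width() * crop.height() bytes.
    static void copyValidLuma(Image image, byte[] out) {
        Rect crop = image.getCropRect();           // valid area, e.g. 1920x1080
        Image.Plane yPlane = image.getPlanes()[0];
        ByteBuffer buf = yPlane.getBuffer();
        int rowStride = yPlane.getRowStride();     // may exceed crop.width()

        for (int row = 0; row < crop.height(); row++) {
            buf.position((crop.top + row) * rowStride + crop.left);
            buf.get(out, row * crop.width(), crop.width());
        }
    }
}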
I am trying to implement a camera app, but it looks like only the JPEG PictureCallback receives a buffer, while the raw and postview callbacks get null. And getSupportedPictureFormats always returns 256 (ImageFormat.JPEG), so there is no hope of getting YUV directly. If this is the case, I guess that to process the large image after it is taken, I can only run the JPEG through a decoder.
Update: it seems NV16 is available in the takePicture callback, and even if a YUV buffer is not available, libjpeg is still there for the codec work.
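For reference, the callback wiring looks like this (a sketch; on many devices only the JPEG callback actually receives data):

import android.hardware.Camera;

class CaptureExample {
    // Sketch of takePicture(): the raw and postview callbacks often get
    // null, so the JPEG bytes are usually the only buffer to process.
    static void capture(Camera camera) {
        camera.takePicture(
                null,                                   // shutter
                (data, cam) -> { /* raw: often null */ },
                (data, cam) -> { /* postview: often null */ },
                (data, cam) -> {
                    // JPEG bytes: decode (BitmapFactory/libjpeg) or save as-is
                });
    }
}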
This might sound like a strange/silly question. But hear me out.
Android applications are, at least on the T-Mobile G1, limited to 16 MB of heap.
And it takes 4 bytes per pixel to store an image (in Bitmap form):
public void onPictureTaken(byte[] _data, Camera _camera) {
    // decodeByteArray produces an ARGB_8888 bitmap: 4 bytes per pixel
    Bitmap temp = BitmapFactory.decodeByteArray(_data, 0, _data.length);
}
So one image at 6 megapixels takes up 24 MB of heap. (Cue OutOfMemoryError.)
Now I am very much aware of the ability to decode with parameters, to effectively reduce the size of the image. I even have a method which will scale it down to a desired size.
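(The parameterized decode I mean is something like this sketch, using inSampleSize:)

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

class ScaledDecode {
    // Sketch: inSampleSize = 4 decodes at 1/4 width and height, i.e.
    // roughly 1/16 of the memory of a full-size decode.
    static Bitmap decodeScaled(byte[] jpegData) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inSampleSize = 4;
        return BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, opts);
    }
}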
But what about the scenario where I want to use the camera as a proper, full-quality camera?
I have no idea how to get this image into the database. As soon as I decode it, it throws an error.
Note: I need(?) to convert it to Bitmap so that I can rotate it before storing it.
So to sum it up:
Limited to 16MB of heap
Image takes up 24MB of heap
Not enough space to take and manipulate an image
This doesn't address the problem, but I recommend it as a starting point for others who are just loading images from a location:
Displaying Bitmaps on Android
I can only think of a couple of things that might help, and none of them are optimal:
Do your rotations server side
Store the data from the capture directly to the SD card without decoding it, then rotate it chunk by chunk via the file system, then send that to your DB. There are lots of examples on the web (if your angles are simple: 90, 180, etc.), though this would be time-consuming, since IO operations against SD cards are not exactly fast.
When you decode, drop the alpha channel. This may not solve your issue, though, and if you are using a matrix to rotate the image then you would need a target/source bitmap anyway:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
// Decode the raw camera bytes to a bitmap with no alpha channel (2 bytes per pixel)
Bitmap bmp = BitmapFactory.decodeByteArray(raw, 0, raw.length, opt);
There may be a better way to do this, but since your device is so limited in heap etc. I can't think of any.
It would be nice if there was an optional file-based matrix method (which in general is what I am suggesting as option 2), or some kind of "paging" system for Android, but that's the best I can come up with.
First save it to the filesystem, then do your operations on the file from the filesystem...
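Something like this sketch (the path is just an example) writes the JPEG bytes straight to disk without ever decoding them, so the heap never holds the uncompressed pixels:

import java.io.FileOutputStream;
import java.io.IOException;

class RawJpegSaver {
    // Sketch: write the bytes from onPictureTaken directly to a file;
    // no Bitmap is created, so no 24 MB heap allocation happens.
    static void save(byte[] jpegData, String path) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path)) {
            out.write(jpegData);
        }
    }
}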