camera matrix original image - android

As far as I know, when you use the camera it crops part of the image - the application cuts out the part of the photo that falls outside the rectangle.
Is there any way to get the full-sized original image as it is received directly from the camera's sensor matrix?
Root access on my device is available.

I did a small demo years ago:
https://sourceforge.net/p/javaocr/code/HEAD/tree/trunk/demos/camera-utils/src/main/java/net/sf/javaocr/demos/android/utils/camera/CameraManager.java#l8
The basic idea is to set up a preview callback; your raw image data is then delivered via a byte array (getPreviewFrame() / onPreviewFrame()) - no root access is necessary.
This data actually arrives as a memory-mapped buffer coming directly from the address space of the camera app, so again no root is needed.
Since the byte array does not carry any meta information, you have to get all the parameters (preview size, pixel format, etc.) from the Camera object yourself.
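A minimal sketch of that idea, assuming the legacy android.hardware.Camera API (class and method names below, other than the framework ones, are placeholders):

import android.hardware.Camera;

public class PreviewGrabber implements Camera.PreviewCallback {
    private int width, height, format;

    public void start(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        width = params.getPreviewSize().width;       // frame geometry must be queried here,
        height = params.getPreviewSize().height;     // because the byte[] itself has no metadata
        format = params.getPreviewFormat();          // usually ImageFormat.NV21
        camera.setPreviewCallback(this);
        // note: most devices also require a preview surface/texture to be set before startPreview()
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // 'data' is one raw preview frame; for NV21 it is width*height luma bytes
        // followed by interleaved V/U chroma at quarter resolution.
        // Hand it off to your own processing here.
    }
}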

Related

How to get depth map in Android Q

In Android Q there is an option to get a depth map from an image.
Starting in Android Q, cameras can store the depth data for an image in a separate file, using a new schema called Dynamic Depth Format (DDF). Apps can request both the JPG image and its depth metadata, using that information to apply any blur they want in post-processing without modifying the original image data.
To read the specification for the new format, see Dynamic Depth Format.
I have read the Dynamic Depth Format specification, and it looks like the depth data is stored in the JPG file as XMP metadata. My question now is: how do I get this depth map from the file, or directly from the Android API?
I am using a Galaxy S10 with Android Q.
If you retrieve a DYNAMIC_DEPTH jpeg, the depth image is stored in the bytes immediately after the main jpeg image. The documentation leaves a lot to be desired in explaining this; I finally figured it out by searching the whole byte buffer for JPEG start and end markers.
I wrote a parser that you can look at or use here: https://github.com/kmewhort/funar/blob/master/app/src/main/java/com/kmewhort/funar/preprocessors/JpegParser.java. It does the following:
Uses com.drew.imaging.ImageMetadataReader to read the EXIF.
Uses com.adobe.internal.xmp to read info about the depth image sizes (and some other interesting depth attributes that you don't necessarily need, depending on your use case).
Calculates the byte locations of the depth image by subtracting each trailer size from the final byte of the whole byte buffer (there can be other trailer images, such as thumbnails). It returns a wrapped byte buffer of the actual depth JPEG.
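For illustration only, here is a minimal sketch of the marker-scanning idea described above: look for a second JPEG SOI marker (0xFF 0xD8 0xFF) after the primary image and take everything up to the last EOI marker (0xFF 0xD9) as the embedded trailer JPEG. The linked JpegParser is more robust (it uses the trailer sizes from the XMP metadata), and real files may contain additional trailer images such as thumbnails:

import java.util.Arrays;

public final class DepthJpegFinder {
    // Returns the bytes of the second embedded JPEG, or null if none is found.
    public static byte[] extractSecondJpeg(byte[] buf) {
        int start = -1;
        // skip the primary image's SOI at offset 0 and scan for the next one
        for (int i = 2; i < buf.length - 2; i++) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD8 && (buf[i + 2] & 0xFF) == 0xFF) {
                start = i;
                break;
            }
        }
        if (start < 0) return null;
        // scan backwards for the last EOI marker belonging to that trailer JPEG
        for (int i = buf.length - 2; i > start; i--) {
            if ((buf[i] & 0xFF) == 0xFF && (buf[i + 1] & 0xFF) == 0xD9) {
                return Arrays.copyOfRange(buf, start, i + 2);
            }
        }
        return null;
    }
}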

Getting wrong data from ImageReader when reading from a video?

I am writing an app to grab every frame from a video, so that I can do some CV processing on it.
According to the description in Android's API docs, I should set MediaPlayer's surface to ImageReader.getSurface() so that I can get every video frame in the OnImageAvailableListener callback. And it really does work on some devices and with some videos.
However, on my Nexus 5 (API 24-25), I get almost nothing but green pixels when the image becomes available.
I have checked the byte[] in the Image's YUV planes and discovered that the bytes I read from the video must be wrong: most of the bytes are Y = 0, UV = 0, which leads to a strange image full of green pixels.
I have made sure the video is YUV420sp. Could anyone help me, or recommend another way to grab frames? (I have tried javacv, but its grabber is too slow.)
I fixed it myself!
When using Image, we should use getCropRect() to get the valid area of the Image.
For example, I get image.getWidth() == 1088 when decoding a 1920x1080 frame; I should use image.getCropRect() to get the right size of the image, which will be 1920x1080.
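A minimal sketch of that fix, assuming you only need the luma plane (the class and method names are placeholders): copy row by row using the crop rectangle and the plane's row stride, so that any padding outside the valid area is skipped.

import android.graphics.Rect;
import android.media.Image;
import java.nio.ByteBuffer;

public final class FrameCropper {
    // Copies only the valid (cropped) luma plane into 'out', sized crop.width() * crop.height().
    public static void copyLuma(Image image, byte[] out) {
        Rect crop = image.getCropRect();              // valid area, e.g. 1920x1080
        Image.Plane yPlane = image.getPlanes()[0];
        ByteBuffer buf = yPlane.getBuffer();
        int rowStride = yPlane.getRowStride();
        for (int row = 0; row < crop.height(); row++) {
            buf.position((crop.top + row) * rowStride + crop.left);
            buf.get(out, row * crop.width(), crop.width());
        }
    }
}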

How to get pixel values from Android Image

I'm using the camera2 API to capture a burst of images. To ensure the fastest capture speed, I am currently using YUV_420_888.
(JPEG results in approximately 3 fps capture, while YUV results in approximately 30 fps.)
So what I'm asking is: how can I access the YUV values for each pixel in the image?
i.e.
Image image = reader.acquireNextImage();
Pixel pixel = image.getPixel(x,y);
pixel.y = ...
pixel.u = ...
pixel.v = ...
Also if another format would be faster please let me know.
If you look at the Image class, you will see that the immediate answer is simply the getPlanes() method.
Of course, for YUV_420_888 this yields three planes of YUV data, and you have to do a bit of work with them to get the pixel value at any given location, because the U and V channels are downsampled and may be interleaved in how they are stored in the Image.Planes. But that is beyond the scope of this question.
Also, you are correct that YUV will be the fastest available output from your camera. JPEG requires extra time for encoding, which slows down the pipeline output, and RAW images are very large and take a long time to read out. YUV (of whatever type) is the data format most camera pipelines work in, so it is the 'native' output and thus the fastest.
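That said, here is a rough sketch of the per-pixel lookup, assuming a YUV_420_888 Image (the class name is a placeholder): respect each plane's row stride and pixel stride, and remember that the chroma planes carry one sample per 2x2 block of luma pixels.

import android.media.Image;
import java.nio.ByteBuffer;

public final class YuvPixelReader {
    // Returns {Y, U, V} for the pixel at (x, y) of a YUV_420_888 image.
    public static int[] getYuv(Image image, int x, int y) {
        Image.Plane[] planes = image.getPlanes();     // [0] = Y, [1] = U, [2] = V

        ByteBuffer yBuf = planes[0].getBuffer();
        int yVal = yBuf.get(y * planes[0].getRowStride() + x * planes[0].getPixelStride()) & 0xFF;

        // chroma is subsampled: one U/V sample per 2x2 block of luma pixels
        int cx = x / 2, cy = y / 2;
        ByteBuffer uBuf = planes[1].getBuffer();
        int uVal = uBuf.get(cy * planes[1].getRowStride() + cx * planes[1].getPixelStride()) & 0xFF;
        ByteBuffer vBuf = planes[2].getBuffer();
        int vVal = vBuf.get(cy * planes[2].getRowStride() + cx * planes[2].getPixelStride()) & 0xFF;

        return new int[] { yVal, uVal, vVal };
    }
}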

Is video as a source for the canvas drawImage() method supported on Android?

Do you know whether video as a source for the canvas drawImage() method is supported on Android?
The goal is to display the video and, at a chosen moment, take a picture of that moment's frame (drawImage(video, 0, 0), then return the canvas). Do you think it is doable?
Thanks!
There is an approach in Android that will return you a bitmap for a given point in a video, which may give you what you need (or needed, as this is an old question!):
MediaMetadataRetriever (https://developer.android.com/reference/android/media/MediaMetadataRetriever.html#getFrameAtTime%28long,%20int%29)
getFrameAtTime
Added in API level 10
Bitmap getFrameAtTime(long timeUs, int option)
Call this method after setDataSource(). This method finds a representative frame close to the given time position by considering the given option if possible, and returns it as a bitmap. This is useful for generating a thumbnail for an input data source or just obtain and display a frame at the given time position.
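A minimal usage sketch of that approach (the file path and class name are placeholders):

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;

public final class VideoFrameGrabber {
    // Returns the frame nearest to timeUs (microseconds) as a Bitmap.
    public static Bitmap frameAt(String videoPath, long timeUs) {
        MediaMetadataRetriever retriever = new MediaMetadataRetriever();
        try {
            retriever.setDataSource(videoPath);
            // OPTION_CLOSEST returns the frame nearest the requested time;
            // OPTION_CLOSEST_SYNC snaps to the nearest key frame instead.
            return retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
        } finally {
            try {
                retriever.release();
            } catch (Exception ignored) {
                // release() may throw on newer API levels
            }
        }
    }
}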

Android Take Photo successfully

This might sound like a strange/silly question. But hear me out.
Android applications are, at least on the T-Mobile G1, limited to 16
MB of heap.
And it takes 4 bytes per pixel to store an image (in Bitmap form):
public void onPictureTaken(byte[] _data, Camera _camera) {
    // decoding the full-resolution JPEG allocates width * height * 4 bytes on the heap
    Bitmap temp = BitmapFactory.decodeByteArray(_data, 0, _data.length);
}
So one image at 6 megapixels takes up 24 MB of heap. (Cue memory overflow.)
Now I am very much aware of the ability to decode with parameters, to effectively reduce the size of the image. I even have a method which will scale it down to a desired size.
But what about the scenario where I want to use the camera as a quality camera?
I have no idea how to get this image into the database. As soon as I decode, it errors.
Note: I need(?) to convert it to Bitmap so that I can rotate it before storing it.
So to sum it up:
Limited to 16MB of heap
Image takes up 24MB of heap
Not enough space to take and manipulate an image
This doesn't address the problem, but I recommend it as a starting point for others who are just loading images from a location:
Displaying Bitmaps on Android
I can only think of a couple of things that might help; none of them are optimal:
Do your rotations server-side.
Store the data from the capture directly to the SD card without decoding it, then rotate it chunk by chunk using the file system, then send that to your DB. There are lots of examples on the web (if your angles are simple: 90, 180, etc.), though this will be time consuming since IO operations against SD cards are not exactly fast.
When you decode, drop the alpha channel; RGB_565 uses 2 bytes per pixel instead of 4, so the same 6-megapixel image needs roughly 12 MB instead of 24 MB. This may not solve your issue on its own, though, and if you are using a matrix to rotate the image you would need a target/source bitmap anyway:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inPreferredConfig = Bitmap.Config.RGB_565;
// Decode the raw camera bytes to a bitmap with no alpha channel (2 bytes per pixel)
Bitmap bmp = BitmapFactory.decodeByteArray(raw, 0, raw.length, opt);
There may be a better way to do this, but since your device is so limited in heap I can't think of one.
It would be nice if there were an optional file-based matrix method (which, in general, is what I am suggesting as option 2) or some kind of "paging" system for Android, but that's the best I can come up with.
First save it to the filesystem, then do your operations on the file from the filesystem...
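A minimal sketch of that suggestion (the file naming and the EXIF note are my own assumptions, not from the answers above): write the JPEG byte[] from onPictureTaken() straight to a file without ever building a Bitmap, so the full-resolution image never has to fit on the heap; rotation can then be recorded by setting the EXIF orientation tag on the saved file instead of rotating the pixels in memory.

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public final class PhotoSaver {
    // Writes the raw JPEG bytes from onPictureTaken() to disk without decoding them.
    public static File saveJpeg(byte[] data, File dir) throws IOException {
        File out = new File(dir, "capture_" + System.currentTimeMillis() + ".jpg");
        FileOutputStream fos = new FileOutputStream(out);
        try {
            fos.write(data);                          // no Bitmap, so no 24 MB heap spike
        } finally {
            fos.close();
        }
        return out;
    }
}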
