I have used the official sample for the Camera2 API to capture a RAW sensor frame. The code is in Java, but I converted it to Kotlin with the help of Android Studio. I tested it, and I can take and save a DNG picture on my phone. No problem so far.
But what I really want is to retrieve some information about the picture; I don't care about saving it. I want to do the processing directly on my smartphone.
What I have tried so far is to get the byte array of the image.
In the function dequeueAndSaveImage, I retrieve the image from an ImageReader: image = reader.get()!!.acquireNextImage().
I suppose this is where I have to process the image. I tried to log image.width, image.height and image.planes.count, and that worked fine.
By the way, since the format is RAW_SENSOR, image.planes.count is 1, corresponding to a single plane of raw sensor image data with 16 bits per color sample.
But when I try to log image.planes[0].buffer.array().size, for example, I get a FATAL EXCEPTION: CameraBackground with java.lang.UnsupportedOperationException.
And if I try to log the same thing in the function that saves the image to a DNG file, I get the same exception from a different thread: FATAL EXCEPTION: AsyncTask #1 with java.lang.UnsupportedOperationException.
Am I even going the right way to retrieve information about the image, for example the intensity of the pixels, or the average and standard deviation for each color channel?
EDIT: I think I found the problem, although not the solution.
When I log image.planes[0].buffer.hasArray(), it returns false, which is why calling array() throws the exception.
But then, how do I get the data from the image?
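A direct ByteBuffer (which is what Image planes return) has no backing array, so the data has to be copied out with get() instead of array(). A minimal sketch in plain Java (the same calls work from Kotlin); the class and method names are mine, and it assumes the RAW_SENSOR plane holds 16-bit samples in the buffer's byte order:

```java
import java.nio.ByteBuffer;

public class RawPlane {
    // Copies a plane buffer into a byte[]; works even when
    // buffer.hasArray() == false, as with direct buffers from Image.
    public static byte[] toByteArray(ByteBuffer buffer) {
        buffer.rewind();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        return bytes;
    }

    // Mean of the unsigned 16-bit samples, read in the buffer's
    // current byte order; set buffer.order(...) beforehand if needed.
    public static double mean(ByteBuffer buffer) {
        buffer.rewind();
        long sum = 0;
        int n = buffer.remaining() / 2;
        for (int i = 0; i < n; i++) {
            sum += buffer.getShort() & 0xFFFF; // unsigned 16-bit sample
        }
        return n == 0 ? 0 : (double) sum / n;
    }
}
```

From there, averages, standard deviations and histograms are straightforward loops over the samples; for a Bayer sensor the per-channel statistics additionally require splitting the samples by their position in the color filter pattern.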
I'm trying to convert a Base64 string into a Bitmap in order to show a photo in an ImageView. I'm failing because I'm receiving a bad string (how to fix that is not the scope of this topic), so I tried to handle the situation with a try-catch block.
Well, this block doesn't work because no exception is thrown. As you can see from the Logcat in the lower part of the image below, the Base64 object (or the BitmapFactory one) just writes a log about the failure (D/skia: failed to create image decoder with message 'unimplemented') but doesn't throw any exception. There's no trace of my PHOTO-tagged log.
How can I handle this situation manually?
(I'm sorry if you find my English strange or difficult to read. I'm not a native speaker, but any help or criticism about it is welcome.)
Is it a compressed image encoded to Base64 (like .jpg or .png)?
If so, that image format is not supported by the image decoder.
Otherwise, if it's raw pixel data encoded in Base64, you should use Bitmap.createBitmap() to create the Bitmap.
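Either way, the failure has to be detected by checking return values rather than catching exceptions: BitmapFactory.decodeByteArray reports failure by returning null, while android.util.Base64.decode does throw IllegalArgumentException on a malformed string. A sketch of the defensive decode, using java.util.Base64 here so it runs off-device (the helper name is mine):

```java
import java.util.Base64;

public class SafeDecode {
    // Decodes Base64 defensively: returns null on malformed input
    // instead of letting IllegalArgumentException propagate.
    public static byte[] tryDecode(String base64) {
        try {
            return Base64.getDecoder().decode(base64);
        } catch (IllegalArgumentException e) {
            return null; // bad Base64 string
        }
    }
}
```

On Android you would then call BitmapFactory.decodeByteArray(bytes, 0, bytes.length) and show your error UI when it returns null, since a null return (plus the skia log line) is the only failure signal that method gives.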
We are currently working on a live image manipulation/effect application, using the NDK with the Camera2 and Media NDK APIs.
I'm using AImageReader to pull the current frames from the camera and apply effects in real time. This works pretty well; we are getting at least 30 fps at HD resolutions.
However, my job also requires me to return the edited image to a given Java endpoint, a method with the signature (Landroid/media/Image;)V. This can be changed to any other jobject I want, but it must be an image/bitmap kind.
I found out that the AImage I was using is just a C struct, so I won't be able to convert it to a jobject.
Our current process is something like this in order:
1. AImageReader_ImageListener calls a static method with an assigned this context.
2. The method uses AImageReader_acquireNextImage and, if the media is OK, sends it to a child class/object.
3. There we manipulate the image data across multiple std::thread operations and merge the resulting image. I'm receiving YUV422-formatted data, but I convert it to RGB for easier processing.
4. Then we lock the mutex, return the resulting data to the delegate, and delete the original image.
5. The delegate calls a static method that is responsible for finding/calling the Java method.
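The per-pixel YUV-to-RGB conversion mentioned above can be sketched with the standard BT.601 full-range coefficients; whether these match the coefficients the app actually uses is an assumption, and the class name is mine:

```java
public class YuvToRgb {
    // BT.601 full-range YUV -> RGB for a single pixel.
    // All inputs and outputs are in the range 0..255.
    public static int[] toRgb(int y, int u, int v) {
        int r = clamp(Math.round(y + 1.402f * (v - 128)));
        int g = clamp(Math.round(y - 0.344136f * (u - 128)
                                   - 0.714136f * (v - 128)));
        int b = clamp(Math.round(y + 1.772f * (u - 128)));
        return new int[]{r, g, b};
    }

    private static int clamp(int x) {
        return Math.max(0, Math.min(255, x));
    }
}
```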
Now I need a simple, low-resource way of converting the data at hand into a C++ object that can also be represented as a jobject.
We are using OpenCV in the processing, so it is possible for me to return a Bitmap object, but that conversion seems to consume more CPU time than I can afford.
How can I approach this problem? Is there a known fast way of converting a uint8_t *buffer into an image-like jobject?
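One low-overhead route is to pack the interleaved RGB buffer into ARGB_8888 ints and hand those to Bitmap.createBitmap(int[], width, height, Bitmap.Config.ARGB_8888) on the Java side. A sketch of the packing step in plain Java (the class name is mine; doing the same memcpy-style fill natively via AndroidBitmap_lockPixels from android/bitmap.h avoids the intermediate Java array):

```java
public class RgbPacker {
    // Packs an interleaved RGB888 buffer into ARGB_8888 pixels,
    // the int layout Bitmap.createBitmap(int[], ...) expects:
    // 0xAARRGGBB with alpha forced to opaque.
    public static int[] pack(byte[] rgb, int width, int height) {
        int[] out = new int[width * height];
        for (int i = 0; i < out.length; i++) {
            int r = rgb[i * 3] & 0xFF;
            int g = rgb[i * 3 + 1] & 0xFF;
            int b = rgb[i * 3 + 2] & 0xFF;
            out[i] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
        return out;
    }
}
```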
I got the following exception in my OpenCV program. I had the image on my computer, moved it to my mobile phone, and read it with Mat imageRead = Highgui.imread("/mnt/sdcard/Pictures/2im00.png");
Then I tried to convert its color space to HSV using the following statement, and got the exception on this statement:
Imgproc.cvtColor(imageRead, hsvImage, Imgproc.COLOR_RGB2HSV);
But the exception does not seem to tell me anything more than that it happened in the function cvtColor, and I can't read the encoded information there.
So the question is: how do I find out why I am getting this exception?
Is there coded information there, like some codes (scn==3, scn==4, error: -215, depth, etc.) that I can look up somewhere to find out why I am getting the exception?
Most probably the assertion failure occurs because you are passing an empty image to the cvtColor function, or the Mat you are passing is not in CV_8U or CV_32F format. As for the codes: error: -215 means a CV_Assert precondition failed, and scn is the number of channels of the source image. COLOR_RGB2HSV requires a 3- or 4-channel source, so scn == 3 || scn == 4 fails when your input does not have that many channels (an empty Mat has none).
I agree with Miki's comment; for more details follow this link: How to interpret c++ opencv Assertion error messages due to an error in cvtColor function?
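The assertion text is just the precondition cvtColor checks, restated below as plain Java so it can be read directly; the depth constants follow OpenCV's type enum (CV_8U = 0, CV_32F = 5), and the class name is mine:

```java
public class CvtColorCheck {
    // OpenCV depth codes for the two depths RGB2HSV accepts.
    public static final int CV_8U = 0;
    public static final int CV_32F = 5;

    // Mirrors cvtColor's assertion for COLOR_RGB2HSV:
    // "(scn == 3 || scn == 4) && (depth == CV_8U || depth == CV_32F)",
    // where scn is the source channel count. An empty Mat (e.g. a
    // failed imread) has no channels, so it fails the check too.
    public static boolean rgb2hsvAccepts(int channels, int depth, boolean empty) {
        return !empty
                && (channels == 3 || channels == 4)
                && (depth == CV_8U || depth == CV_32F);
    }
}
```

So the practical fix is to test imageRead.empty() right after imread, and to check imageRead.channels() and imageRead.depth() before calling cvtColor.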
I am getting the TotalCaptureResult object from the camera, using the Camera2 API in Android. I am using a preview, not a single image. Is there a way to get a byte[] from TotalCaptureResult?
Thank you.
Short answer: no.
All CaptureResult objects contain only metadata about a frame capture, no actual pixel information. The associated pixel data are sent to whatever you designated as the target Surface in your CaptureRequest.Builder. So you need to check with whatever Surface you set up, such as an ImageReader, which will give you access to an Image output from the camera, and through its planes to the byte[].
I'd like to use MediaCodec to encode the data coming from the camera (reason: it's more low-level, so hopefully faster than using MediaRecorder). Using Camera.PreviewCallback, I capture the data from the camera into a byte buffer in order to pass it on to a MediaCodec object.
To do this, I need to fill in a MediaFormat object, which would be fairly easy if I knew the MIME type of the data coming from the camera. I can pick this format using setPreviewFormat(), choosing one of the constants declared in the ImageFormat class.
Hence my question: given the different options provided by the ImageFormat class to set the camera preview format, what are the corresponding MIME types?
Thanks a lot in advance.
See the example at https://gist.github.com/3990442. You should set the MIME type of what you want to get out of the encoder, i.e. "video/avc". The raw preview frames themselves don't have a MIME type; their pixel layout is described to the codec through a color format (MediaFormat.KEY_COLOR_FORMAT), not through the MIME type.