How can I capture continuous images using the Camera2 API?
I wrote a simple application with a thread in it to capture continuous images, but it is not working.
It would be nice if you provided your code so we could see what's wrong.
In general terms, the flow is as follows (see the sketch below):

1. Create a CameraDevice object and call the CameraDevice.createCaptureSession(List<Surface>, CameraCaptureSession.StateCallback, Handler) method, specifying which surfaces you might like to output to (maybe just one).
2. Once the CameraCaptureSession.StateCallback that you specified in createCaptureSession has its onConfigured(CameraCaptureSession) method called, call the CameraDevice.createCaptureRequest(int) method, which returns a CaptureRequest.Builder object.
3. With that builder, use the CaptureRequest.Builder.addTarget(Surface) method to specify which of the pre-specified surface(s) you want to output to (probably all of them).
4. Once you're done adding targets, call the CaptureRequest.Builder.build() method, which returns a CaptureRequest object.
5. Use the CameraCaptureSession object that was provided to you by the onConfigured(CameraCaptureSession) method to pass your CaptureRequest to the CameraCaptureSession.setRepeatingRequest(CaptureRequest, CameraCaptureSession.CaptureCallback, Handler) method. This will start continuous output to the surfaces that you specified.
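A minimal Java sketch of this flow, assuming the camera is already open; cameraDevice, previewSurface, and backgroundHandler are placeholder names, not part of the question:

    // Placeholders: cameraDevice is an open CameraDevice, previewSurface is an
    // output Surface (e.g. from a SurfaceView or ImageReader), backgroundHandler
    // is a Handler running on a background thread.
    cameraDevice.createCaptureSession(Arrays.asList(previewSurface),
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    try {
                        // Build a request that targets the pre-registered surface.
                        CaptureRequest.Builder builder =
                                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                        builder.addTarget(previewSurface);
                        // Start continuous capture to that surface.
                        session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(CameraCaptureSession session) {
                    // Handle configuration failure here.
                }
            }, backgroundHandler);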
Seriously, this API is so complicated that you'd think they didn't want you to use it. If you need more detailed information about what these classes and methods do, the Android documentation is very good.
I am writing an Application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via the CameraCharacteristics and tried different formats such as YUV_420_888, only for the app to fail. The camera hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader I need to select the ImageFormat, and in this case only PRIVATE works. If I try NV21, I get an error saying that it's not supported.
My onImageAvailableListener gets triggered, but since the ImageFormat is PRIVATE, the "planes" attribute of the Image returns null.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through their hardware buffers (when available) through the Image#getHardwareBuffer() method. Attempting to access the planes of a private image will return an empty array.
I do get an object of type HardwareBuffer when I get the image from acquireLatestImage, but my question is: how do I get the actual data/bytes that represent the pixels from the HardwareBuffer object?
As mentioned in the HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use the AHardwareBuffer C language functions (via the NDK) to access the pixels of a HardwareBuffer.
In short:

1. Create a native JNI method in some helper class so that you can call it from Java/Kotlin (a Java-side sketch follows below).
2. Pass your HardwareBuffer to this method as a parameter.
3. Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer.
4. Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes.
5. Work with the pixels. If needed, you can call any Java/Kotlin method from there to process the pixels (use a ByteBuffer to wrap the raw data).
6. Call AHardwareBuffer_unlock to release the lock on the buffer.
Don't forget that the HardwareBuffer may be read-only or protected.
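A minimal Java-side sketch of such a helper class (the class name, method name, and library name are all hypothetical; the actual pixel access happens in the C/C++ implementation of readPixels, which would call the AHardwareBuffer functions listed above):

    import android.hardware.HardwareBuffer;
    import android.media.Image;

    public class HardwareBufferHelper {
        static {
            // Hypothetical native library containing the JNI implementation.
            System.loadLibrary("nativebufferhelper");
        }

        // Implemented in C/C++ via JNI: wraps the buffer with
        // AHardwareBuffer_fromHardwareBuffer, locks it with AHardwareBuffer_lock /
        // AHardwareBuffer_lockPlanes, copies the bytes, then calls AHardwareBuffer_unlock.
        public static native byte[] readPixels(HardwareBuffer buffer);

        // Example usage from an ImageReader.OnImageAvailableListener.
        public static byte[] fromImage(Image image) {
            HardwareBuffer buffer = image.getHardwareBuffer();
            if (buffer == null) {
                return null; // image is not backed by an accessible hardware buffer
            }
            try {
                return readPixels(buffer);
            } finally {
                buffer.close(); // release our reference to the buffer
            }
        }
    }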
I am working with the camera2 API in Android and am trying to understand this code I am using. Part of the code goes like this:
    previewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(),
            ImageFormat.YUV_420_888, 4);
    previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);

    // This adds another output surface but not sure where this surface comes from...
    previewRequestBuilder.addTarget(previewReader.getSurface());
imageListener is an object from another class that implements android.media.ImageReader.OnImageAvailableListener, and backgroundHandler is a Handler running on a background thread. I am not including the code for these two or for previewRequestBuilder, as it does not seem important for understanding my question.
I have searched extensively, but it just seems like some magic happens and previewReader finds a surface somewhere, somehow. According to the documentation, what getSurface() does is:
Get a Surface that can be used to produce Image for this ImageReader
Can anyone explain where it gets this?
That Surface belongs to the ImageReader; it was created in the native equivalent of the ImageReader's constructor, and is (effectively) an ImageReader private member, with a getter.
Here is the line in the native constructor that sets up the IGraphicBufferProducer (gbProducer), which is basically the native equivalent of a Surface.
Here is where you can see that the native code uses that same member to form the return value from getSurface()/nativeGetSurface() (you may have to trace through the code a bit, but it's all there).
So that's the literal answer to your question. But maybe you were asking because it isn't clear why the camera doesn't create the Surface, and force you to give it to the ImageReader, instead: A Surface is a complex object (actually, a buffer queue), and shouldn't be thought of as a simple, pre-allocated bitmap. At the time the capture takes place, the camera pipeline will communicate with its output Surfaces, and set up the correct dimensions and color planes and so forth. (Note that you can add multiple targets via addTarget(); the camera can use each of them.) All the camera needs to know is where it's going to send its output; it doesn't need to create the output Surface itself.
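For illustration, a short sketch building on the snippet from the question (previewSurface, cameraDevice, and sessionStateCallback are placeholders; the point is that the same Surface returned by getSurface() is both registered as a session output and added as a request target):

    // The ImageReader allocates its own Surface internally in its constructor.
    previewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(),
            ImageFormat.YUV_420_888, 4);
    previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
    Surface readerSurface = previewReader.getSurface();

    // The camera only needs to know where to send its output: the same Surface is
    // registered with the session and added as a target of the capture request.
    cameraDevice.createCaptureSession(Arrays.asList(previewSurface, readerSurface),
            sessionStateCallback, backgroundHandler);
    previewRequestBuilder.addTarget(readerSurface);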
API level 21 introduced camera2, and with it setRepeatingRequest and setRepeatingBurst. I have read the docs here, but still cannot grasp the difference between the two. Any ideas?
Well, you'll notice that the signatures of these two methods are slightly different: setRepeatingBurst's first argument is a List<CaptureRequest>, while setRepeatingRequest's is just a single CaptureRequest.
According to the docs,
setRepeatingBurst
With this method, the camera device will continually capture images, cycling through the settings in the provided list of CaptureRequests, at the maximum rate possible.
setRepeatingRequest
With this method, the camera device will continually capture images using the settings in the provided CaptureRequest, at the maximum rate possible.
So, setRepeatingBurst can be used to capture images with a list of different settings.
That's my best understanding, hope it helps!
Think of setRepeatingRequest as ONE CaptureRequest, with one set of settings, used to continually capture images.
With setRepeatingBurst, on the other hand, there is a list of CaptureRequests, and each CaptureRequest has its own settings; the device cycles through them while continually capturing images.
Conclusion: a setRepeatingBurst call is like making multiple setRepeatingRequest calls in a single call.
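A short sketch of the difference in use (the session, builders, and callback are assumed to exist already; the two-exposure burst is just an arbitrary example):

    // setRepeatingRequest: one set of settings, repeated for every frame.
    CaptureRequest previewRequest = previewBuilder.build();
    session.setRepeatingRequest(previewRequest, captureCallback, backgroundHandler);

    // setRepeatingBurst: a list of settings that the device cycles through,
    // e.g. alternating a short and a long exposure.
    CaptureRequest shortExposure = shortExposureBuilder.build();
    CaptureRequest longExposure = longExposureBuilder.build();
    session.setRepeatingBurst(Arrays.asList(shortExposure, longExposure),
            captureCallback, backgroundHandler);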
In an OpenGL Renderer, onDrawFrame is called several times until the page is completely rendered. I cannot find an event telling me that my page is completely rendered, so that I can take a snapshot of the OpenGL page and animate it.
I do have a solution that takes the snapshot at the animation trigger (a specific button), but this implies a delay while the Bitmap is created, and I would rather keep a mutable copy of every page in memory.
Do you know another way to animate a GLSurfaceView with already-rendered content?
Snippet for triggering the snapshot:
    glSurfaceView.queueEvent(new Runnable() {
        @Override
        public void run() {
            glSurfaceView.getRenderer().takeGlSnapshot();
        }
    });
Using the EGLContext to obtain the GL11 object and pass it on:
    public void takeGlSnapshot() {
        EGL10 egl = (EGL10) EGLContext.getEGL();
        GL11 gl = (GL11) egl.eglGetCurrentContext().getGL();
        takeSnapshot(gl);
    }

    public void onDrawFrame(GL10 gl) {
        // Is this the last onDrawFrame call for this page? Is there such an event?
    }
No such event exists, as I will explain below.
OpenGL is designed as a client/server architecture, with the two running asynchronously. In a modern implementation, you can generally think of the client as the API front-end that you use to issue commands and the server as the GPU/driver back-end. API calls will do a little bit of work to validate input parameters etc., but save for a few exceptions (like glReadPixels (...)) they buffer up a command for the server to execute at a later point. You never truly know when your commands are finished unless you explicitly call glFinish (...).
Calling glFinish (...) at the end of each frame is an awful idea, as it will create a CPU/GPU synchronization point and undo the benefits of having the CPU and GPU run asynchronously. But, if you just want to take a screenshot of the current frame every once in a while, then glFinish (...) could be an acceptable practice.
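As an example, a snapshot taken on the GL thread (e.g. inside the queueEvent Runnable from the question) might look roughly like this; width and height are assumed to match the surface size, and gl is the GL10/GL11 object obtained in takeGlSnapshot():

    gl.glFinish(); // block until all queued GL commands have executed (use sparingly)

    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixels);

    Bitmap snapshot = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    pixels.rewind();
    snapshot.copyPixelsFromBuffer(pixels);
    // Note: OpenGL's origin is the bottom-left corner, so the resulting Bitmap is
    // vertically flipped and may need to be flipped back before display.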
Another thing to consider, if you are using double-buffered rendering, is that you may be able to access the last fully rendered frame by reading the front buffer. This is implementation-specific behavior, however: some systems are designed to discard the contents of the front buffer after the buffer swap operation, and others make reading its contents an undefined operation. In any case, if you do attempt this solution, be aware that the image returned will have a one-frame latency (although the process of reading it will not require you to finish your current frame), which may be unacceptable.
Is the Camera.takePicture callback for raw data null when running in the emulator environment?
I took the CameraSurface code from the web, so it should be correct.
Before calling takePicture you have to provide a buffer large enough to hold the raw image data (detailed here). To set the buffer, use the addCallbackBuffer method.
This issue seems to be related to the raw-data callback only.
Using the picture callback that retrieves a JPEG instead avoids the issue.
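For reference, a minimal sketch of that JPEG-only variant using the old android.hardware.Camera API (the shutter and raw callbacks are simply passed as null; camera is assumed to be an already-opened Camera with a running preview):

    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] data, Camera camera) {
            // data holds a JPEG-encoded image; decode or save it as needed.
            Bitmap picture = BitmapFactory.decodeByteArray(data, 0, data.length);
            camera.startPreview(); // the preview stops after takePicture; restart it if needed
        }
    });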