I am working with the camera2 API in Android and am trying to understand this code I am using. Part of the code goes like this:
previewReader = ImageReader.newInstance(previewSize.getWidth(), previewSize.getHeight(),
ImageFormat.YUV_420_888, 4);
previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
// This adds another output surface but not sure where this surface comes from..
previewRequestBuilder.addTarget(previewReader.getSurface());
imageListener is an object from another class that implements android.media.ImageReader.OnImageAvailableListener and backgroundHandler is just a background thread. I am not including code for these two or previewRequestBuilder as they do not seem to be important for understanding my question.
I have searched extensively, but it just seems like some magic happens and previewReader finds some surface somewhere, somehow. According to the documentation, what getSurface() does is to:
Get a Surface that can be used to produce Image for this ImageReader
Can anyone explain where it gets this?
That Surface belongs to the ImageReader; it was created in the native equivalent of the ImageReader's constructor, and is (effectively) an ImageReader private member, with a getter.
Here is the line in the native constructor that sets up the IGraphicBufferProducer (gbProducer), which is basically the native equivalent of a Surface.
Here is where you can see that the native code uses that same member to form the return value from getSurface()/nativeGetSurface() (you may have to trace through the code a bit, but it's all there).
So that's the literal answer to your question. But maybe you were asking because it isn't clear why the camera doesn't create the Surface, and force you to give it to the ImageReader, instead: A Surface is a complex object (actually, a buffer queue), and shouldn't be thought of as a simple, pre-allocated bitmap. At the time the capture takes place, the camera pipeline will communicate with its output Surfaces, and set up the correct dimensions and color planes and so forth. (Note that you can add multiple targets via addTarget(); the camera can use each of them.) All the camera needs to know is where it's going to send its output; it doesn't need to create the output Surface itself.
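To make that concrete, here is a rough sketch (textureView, cameraDevice and sessionStateCallback are assumed to exist elsewhere): every output Surface is created by its consumer, and the camera is simply handed the full set.

// Each output Surface is owned by its consumer, not by the camera.
Surface readerSurface = previewReader.getSurface();                    // from the ImageReader
Surface previewSurface = new Surface(textureView.getSurfaceTexture()); // from a TextureView, for example

// The request names which of those surfaces should receive this capture...
previewRequestBuilder.addTarget(previewSurface);
previewRequestBuilder.addTarget(readerSurface);

// ...and the session is configured up front with all the outputs the camera may use.
cameraDevice.createCaptureSession(Arrays.asList(previewSurface, readerSurface),
        sessionStateCallback, backgroundHandler);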
Related
I am writing an application for an Android device where I want to process the image from the camera.
The camera on this device only supports the NV21 and PRIVATE formats. I verified this via the CameraCharacteristics and tried different formats such as YUV_420_888, only for the app to fail. The camera's hardware support level is INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED.
To create the CameraCaptureSession, I need to create an ImageReader.
To create the ImageReader, I need to select the ImageFormat; in this case only PRIVATE works. If I try NV21, I get an error saying that it's not supported.
My onImageAvailableListener gets triggered, but since the ImageFormat is PRIVATE, the "planes" attribute in the Image returns NULL.
According to the documentation, it's possible to access the data via the HardwareBuffer.
Starting in Android P private images may also be accessed through their hardware buffers (when available) through the Image#getHardwareBuffer() method. Attempting to access the planes of a private image will return an empty array.
I do get an object of type HardwareBuffer when I get the image from acquireLatestImage, but my question is: how do I get the actual data/bytes that represent the pixels from the HardwareBuffer object?
As mentioned in the HardwareBuffer documentation:
For more information, see the NDK documentation for AHardwareBuffer
You have to use the AHardwareBuffer C functions (via the NDK) to access the pixels of a HardwareBuffer.
In short:
Create a native JNI method in some helper class so you can call it from Java/Kotlin (see the Java-side sketch after these steps)
Pass your HardwareBuffer to this method as a parameter
Inside this method, call AHardwareBuffer_fromHardwareBuffer to wrap the HardwareBuffer as an AHardwareBuffer
Call AHardwareBuffer_lock or AHardwareBuffer_lockPlanes to obtain the raw image bytes
Work with the pixels. If needed, you can call any Java/Kotlin method to process them (use a ByteBuffer to wrap the raw data).
Call AHardwareBuffer_unlock to release the lock on the buffer
Don't forget that the HardwareBuffer may be read-only or protected.
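To illustrate the first two steps above, here is a rough Java-side sketch; the helper class, library name and native method signature are all hypothetical, and the AHardwareBuffer_* calls from the remaining steps live in the C implementation of processBuffer().

public class HardwareBufferHelper {
    static {
        System.loadLibrary("native-lib"); // assumed library name
    }

    // Implemented in C: wraps the buffer with AHardwareBuffer_fromHardwareBuffer(),
    // locks it with AHardwareBuffer_lock(), reads the pixels, then unlocks it.
    public static native void processBuffer(HardwareBuffer buffer, int width, int height);
}

// In your OnImageAvailableListener:
Image image = reader.acquireLatestImage();
if (image != null) {
    HardwareBuffer buffer = image.getHardwareBuffer(); // may be null if no buffer is available
    if (buffer != null) {
        HardwareBufferHelper.processBuffer(buffer, image.getWidth(), image.getHeight());
        buffer.close();  // release this reference to the buffer
    }
    image.close();       // always close the Image so the reader does not stall
}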
Currently, Android Vulkan only supports NativeActivity, but is there any way we can use a Java Activity with a SurfaceView (or any other view) and pass it through JNI to native code to get the ANativeWindow handle?
I tried looking around and linking my SurfaceView, but it didn't work for me. Any sample code or example would be appreciated.
I don't know of any sample code off-hand, but if you have a SurfaceView, you want to get the Surface from it, and from that you can get (in C) the ANativeWindow for creating the VkSurfaceKHR/VkSwapchainKHR. The sequence is something like:
Java: surface = surfaceView.getHolder().getSurface();
Pass surface to a JNI call into C as a jobject.
C: window = ANativeWindow_fromSurface(env, jsurface);
That function is declared in the NDK android/native_window_jni.h header.
You'll want to register callbacks with the SurfaceView's SurfaceHolder and manage the window lifecycle (which is tied to the Activity lifecycle) correctly.
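Here is a rough Java-side sketch of that lifecycle handling; nativeSetSurface() and nativeClearSurface() are placeholders for your own JNI functions, which would call ANativeWindow_fromSurface() and ANativeWindow_release() on the C side.

public class VulkanSurfaceBridge implements SurfaceHolder.Callback {

    // Assumed JNI methods, implemented in C.
    public static native void nativeSetSurface(Surface surface);
    public static native void nativeClearSurface();

    public VulkanSurfaceBridge(SurfaceView surfaceView) {
        surfaceView.getHolder().addCallback(this);
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        // The window now exists; it is safe to wrap it as an ANativeWindow
        // and create the VkSurfaceKHR/VkSwapchainKHR.
        nativeSetSurface(holder.getSurface());
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // Recreate the swapchain here if the size or format changed.
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        // The ANativeWindow must not be used after this returns.
        nativeClearSurface();
    }
}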
How can I capture continuous images using the Camera2 API?
I wrote a simple application with a thread in it to capture continuous images, but it is not working.
It would be nice if you provided your code so we could see what's wrong.
In general terms, you'll want to create a CameraDevice object and call the CameraDevice.createCaptureSession(List<Surface>, CameraCaptureSession.StateCallback, Handler) method by specifying which surfaces you might like to output to (maybe just 1). Once the CameraCaptureSession.StateCallback (that you specified in the createCaptureSession method) calls the onConfigured(CameraCaptureSession) method, call the CameraDevice.createCaptureRequest(int) method, which returns a CaptureRequest.Builder object. With this, you can use the CaptureRequest.Builder.addTarget(Surface) method to specify which of the pre-specified surface(s) you want to output to (probably all of them). Once you're done adding targets, call the CaptureRequest.Builder.build() method, which returns a CaptureRequest object. You can then use the CameraCaptureSession object that was provided to you by the onConfigured(CameraCaptureSession) method to finally pass your CaptureRequest object to the CameraCaptureSession.setRepeatingRequest(CaptureRequest, CameraCaptureSession.CaptureCallback, Handler) method. This will start continuous output to the surfaces that you specified.
Seriously, this API is so complicated, you'd think they didn't want you to use it. If you need more detailed information about what these classes and methods do, the Android documentation is very good.
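For what it's worth, here is a rough sketch of that flow; error handling is trimmed, and previewSurface and backgroundHandler are assumed to already exist.

void startContinuousCapture(final CameraDevice camera, final Surface previewSurface,
                            final Handler backgroundHandler) throws CameraAccessException {
    camera.createCaptureSession(Collections.singletonList(previewSurface),
            new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(CameraCaptureSession session) {
                    try {
                        CaptureRequest.Builder builder =
                                camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                        builder.addTarget(previewSurface);
                        // Repeats until stopRepeating() is called or the session is closed.
                        session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onConfigureFailed(CameraCaptureSession session) {
                    // The surfaces could not be configured; no frames will be produced.
                }
            }, backgroundHandler);
}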
I want to render alternately to an EGLSurface created with eglCreateWindowSurface and one created with eglCreatePbufferSurface, reusing the EGLDisplay and EGLContext. I am using a GLSurfaceView for the case when I want the result to be visible to the user, but I don't know how to initialize it to use my EGLDisplay, EGLContext and EGLSurface. I want to use GLSurfaceView.EGLWindowSurfaceFactory, but I see that its createWindowSurface method already takes those variables as input parameters, so I suppose they are already created by GLSurfaceView. How can it be done?
The whole point of GLSurfaceView is to manage things like that for you, so it's hard to make it do what you want.
One thing you can do is to wait until the GLSurfaceView is created and then create a second EGL context in a share group. This is a bit awkward but can be made to work. In many ways it's simpler to just switch to SurfaceView or TextureView and manage EGL and threading yourself.
You can see various implementations in Grafika. "Show + capture camera" uses GLSurfaceView with a shared EGLContext, "Record GL app with FBO" uses SurfaceView, "Play movie (TextureView)" uses a TextureView, etc.
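A rough sketch of the shared-context approach, using EGL14: it assumes sharedContext was captured on the GLSurfaceView's renderer thread (for example, sharedContext = EGL14.eglGetCurrentContext() inside onSurfaceCreated()), and that everything below runs on the second thread.

EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

int[] configAttribs = {
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

// Passing the GLSurfaceView's context here (instead of EGL_NO_CONTEXT) puts the new
// context in the same share group, so textures and buffers can be shared between them.
int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext secondContext =
        EGL14.eglCreateContext(display, configs[0], sharedContext, contextAttribs, 0);

// An off-screen surface for the second context (the size is just an example).
int[] pbufferAttribs = { EGL14.EGL_WIDTH, 1280, EGL14.EGL_HEIGHT, 720, EGL14.EGL_NONE };
EGLSurface pbuffer = EGL14.eglCreatePbufferSurface(display, configs[0], pbufferAttribs, 0);
EGL14.eglMakeCurrent(display, pbuffer, pbuffer, secondContext);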
In an OpenGL Renderer, onDrawFrame is called several times until the page is completely rendered. I cannot find an event telling me that my page is completely rendered, so that I can take a snapshot of the OpenGL page and animate it.
I have a solution that takes the snapshot at the animation trigger (a specific button), but this implies a delay while the Bitmap is created, which is why I would like to keep a mutable copy of every page in memory.
Do you know another way to animate a GLSurfaceView with rendered content?
Snippet for triggering the snapshot:
glSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        glSurfaceView.getRenderer().takeGlSnapshot();
    }
});
Using the EGLContext to obtain the GL11 object:
public void takeGlSnapshot() {
    EGL10 egl = (EGL10) EGLContext.getEGL();
    GL11 gl = (GL11) egl.eglGetCurrentContext().getGL();
    takeSnapshot(gl);
}
public void onDrawFrame(GL10 gl) {
    // Is this the last call to onDrawFrame for this page? Is there such an event?
}
No such event exists, as I will explain below.
OpenGL is designed as a client/server architecture, with the two running asynchronously. In a modern implementation, you can generally think of the client as the API front-end that you use to issue commands and the server as the GPU/driver back-end. API calls will do a little bit of work to validate input parameters, etc., but save for a few exceptions (like glReadPixels (...)) they buffer up a command for the server to execute at a later point. You never truly know when your commands are finished, unless you explicitly call glFinish (...).
Calling glFinish (...) at the end of each frame is an awful idea, as it will create a CPU/GPU synchronization point and undo the benefits of having the CPU and GPU run asynchronously. But, if you just want to take a screenshot of the current frame every once in a while, then glFinish (...) could be an acceptable practice.
Another thing to consider, if you are using double-buffered rendering, is that you may be able to access the last fully rendered frame by reading the front buffer. This is implementation-specific behavior, however: some systems are designed to discard the contents of the front buffer after the buffer swap operation, while others make reading its contents an undefined operation. In any case, if you do attempt this solution, be aware that the image returned will have a 1 frame latency (however, the process of reading it will not require you to finish your current frame), which may be unacceptable.
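If an occasional snapshot is all you need, one common approach is to read the pixels back at the end of onDrawFrame() on the GL thread: glReadPixels (...) will not return until the commands that rendered the frame have completed, so it effectively provides the synchronization discussed above. A rough sketch follows; drawScene(), deliverSnapshot(), snapshotRequested, width and height are assumed to exist in your Renderer.

@Override
public void onDrawFrame(GL10 gl) {
    drawScene(); // whatever normally renders the page

    if (snapshotRequested) {
        // Read back the frame that was just drawn into the back buffer.
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        pixels.rewind();
        bitmap.copyPixelsFromBuffer(pixels);

        // GL's origin is the bottom-left corner, so flip vertically for a normal Bitmap.
        Matrix flip = new Matrix();
        flip.postScale(1f, -1f);
        Bitmap snapshot = Bitmap.createBitmap(bitmap, 0, 0, width, height, flip, false);

        snapshotRequested = false;
        deliverSnapshot(snapshot);
    }
}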