Camera2 API - AF_STATE cannot reach any LOCKED status - android

From the docs and the camera2basic sample (this question is based heavily on that sample):
In captureCallback, captureStillPicture() is executed once afState has reached a locked state, i.e. one of:
(1) CONTROL_AF_STATE_NOT_FOCUSED_LOCKED
(2) CONTROL_AF_STATE_FOCUSED_LOCKED
This works as long as there is no zoom feature: in captureCallback, afState always reaches one of the two states above, so captureStillPicture() is always triggered.
However, after I implemented a zoom feature:
If I take a picture first, then zoom, and then try to take a picture again, the second attempt never reaches a locked state; it stays in CONTROL_AF_STATE_PASSIVE_FOCUSED.
If I zoom first and then take a picture, the first picture focuses correctly. But if I try again (even without changing the zoom level), the second attempt never reaches a locked state; it stays in CONTROL_AF_STATE_PASSIVE_FOCUSED.
If I do not zoom at all, every attempt reaches a locked state.
In addition, I observed that in my test case 2 (zoom, then take the first picture), the camera visibly refocuses by changing its focus distance (the preview goes from blurry to sharp). This does not happen on the second attempt.
The camera2 code is quite long, but you may still look at my code here.
When the user taps the shutter button, takePicture() is executed, which in turn calls lockFocus() and so on.
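For reference, the state check I'm relying on can be reduced to this pure sketch of camera2basic's logic. The constants here are local stand-ins that mirror the documented CameraMetadata values, so the decision logic can be read (and tested) in isolation:

```java
// Sketch of camera2basic's "is AF done?" check, isolated from the framework.
// The constants are stand-ins mirroring the documented CameraMetadata values.
final class AfCheck {
    static final int CONTROL_AF_STATE_PASSIVE_FOCUSED = 2;
    static final int CONTROL_AF_STATE_FOCUSED_LOCKED = 4;
    static final int CONTROL_AF_STATE_NOT_FOCUSED_LOCKED = 5;

    // camera2basic treats these two locked states (or a missing afState,
    // e.g. on fixed-focus devices) as "ready to capture".
    static boolean readyForStillCapture(Integer afState) {
        return afState == null
                || afState == CONTROL_AF_STATE_FOCUSED_LOCKED
                || afState == CONTROL_AF_STATE_NOT_FOCUSED_LOCKED;
    }
}
```

Note that CONTROL_AF_STATE_PASSIVE_FOCUSED fails this check, and that is exactly the state my second attempt gets stuck in.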
Much appreciated if someone can help!!

Android Camera2: Auto Focus and Exposure

In my Android camera application I'm using the Camera2 API. The application doesn't show a camera preview; instead, when a button in the UI is pressed, it takes an image. The problem is with auto focus and auto exposure. Simply put, I need the camera to always focus on the middle of its view. So when building the request, I added the following properties:
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
captureBuilder.set(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_AUTO);
captureBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
captureBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START);
But the problem is that the images are still not focused.
I have a couple of questions:
Do I need to implement some checking in a method inside CameraCaptureSession.CaptureCallback?
I also noticed that by the time onImageAvailable is called in ImageReader.OnImageAvailableListener, the onCaptureProgressed method of CameraCaptureSession.CaptureCallback has not been triggered.
What am I missing here? Do I need a thread, started when the take-picture button is pressed, that waits until the camera is focused?
Please note that there's no camera preview for this application.
Are you sending only a single capture request? Or are you running a repeating request in the background, and then only issuing a high-resolution capture on button press?
The former won't really work: you need a continuous flow of requests for the auto-exposure, auto-focus, and auto-white-balance algorithms to converge to good values. A single capture won't be properly metered or focused.
Please take a look at the Camera2Basic sample; if you replace the TextureView in that sample with just a SurfaceTexture (give it a random texture ID and don't call updateTexImage), then you can have no preview. But it implements focusing and the precapture trigger correctly, which is critical for you here. For one, the triggers must only be set on one request, and then you do need to watch the capture results coming back to see when the exposure / focus state changes to FOCUSED or CONVERGED.
I'd also recommend the CONTINUOUS_PICTURE focus mode instead of AUTO; it's likely to get you a focused image faster.
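The flow described above (trigger on one request, then watch capture results until focus/exposure report done) can be condensed into a small state machine. This is a hedged sketch, not the full camera2basic implementation: the AF/AE constants mirror the documented CameraMetadata values, and actually sending the precapture trigger and the still-capture request is left to the caller.

```java
// Condensed sketch of camera2basic's capture state machine. Feed it the
// afState/aeState from each CaptureResult; it tells you when to capture.
final class CaptureStateMachine {
    static final int STATE_PREVIEW = 0;
    static final int STATE_WAITING_LOCK = 1;        // AF trigger was sent
    static final int STATE_WAITING_PRECAPTURE = 2;  // AE precapture trigger was sent
    static final int STATE_PICTURE_TAKEN = 3;

    // Stand-ins mirroring the documented CameraMetadata values.
    static final int AF_FOCUSED_LOCKED = 4;
    static final int AF_NOT_FOCUSED_LOCKED = 5;
    static final int AE_CONVERGED = 2;

    int state = STATE_PREVIEW;

    // Called from onCaptureProgressed/onCaptureCompleted with the latest result.
    // Returns true when it is time to issue the still-capture request.
    boolean onResult(Integer afState, Integer aeState) {
        switch (state) {
            case STATE_WAITING_LOCK:
                if (afState == null
                        || afState == AF_FOCUSED_LOCKED
                        || afState == AF_NOT_FOCUSED_LOCKED) {
                    if (aeState == null || aeState == AE_CONVERGED) {
                        state = STATE_PICTURE_TAKEN;
                        return true;                    // capture now
                    }
                    state = STATE_WAITING_PRECAPTURE;   // caller fires AE precapture trigger
                }
                return false;
            case STATE_WAITING_PRECAPTURE:
                if (aeState == null || aeState == AE_CONVERGED) {
                    state = STATE_PICTURE_TAKEN;
                    return true;
                }
                return false;
            default:
                return false;
        }
    }
}
```

The key point from the answer is visible in the structure: the triggers move the machine into a waiting state exactly once, and everything after that is driven by the results coming back from the repeating request.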

Second camera for UI in GoogleVR Unity5

I have a problem with Google VR. As you can see in the screenshot, the UI is doubled in the right-eye camera. I didn't do anything special in the code: I just created another camera for the UI, set its culling mask to UI only, and this is what happened.
What can I do? Please help!

Android: How do I make sure the screen has fully rotated?

I am writing an Android application that needs to rotate the screen as many times as possible in a certain period of time.
I can rotate the screen fine with setRequestedOrientation(int). The problem I am having is that if I try to rotate the screen in the Activity's last startup callback, onWindowFocusChanged(boolean), the screen cannot keep up with how often setRequestedOrientation is called.
My logcat shows roughly 5-10 "Setting rotation to [0|1|2|3]" messages before I get one complete screen rotation.
Here's what I have tried:
Using a view to request focus and an OnRequestFocusChanged listener
Rotating the screen from a child thread via runOnUiThread, to add it to the main thread's message queue
View.post
I know I've tried other things too, but I forget which; those are the most recent, and as you can tell I'm getting desperate.
So, is there a way to make sure the Activity has fully rotated and is accepting user input? Maybe there is a way to check whether the Activity is idle and waiting for input? Any help here would be absolutely great! Thanks, guys.
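One approach worth trying (a sketch, not something from this thread): declare android:configChanges="orientation|screenSize" on the Activity and treat onConfigurationChanged() as the "rotation finished" signal, requesting the next orientation only from there instead of in a tight loop. The helper below is the pure part of that idea; the constants mirror the documented ActivityInfo.SCREEN_ORIENTATION_* values:

```java
// Cycles through the four concrete orientations in a fixed order.
// The constants are stand-ins mirroring the documented
// ActivityInfo.SCREEN_ORIENTATION_* values.
final class OrientationCycle {
    static final int LANDSCAPE = 0;
    static final int PORTRAIT = 1;
    static final int REVERSE_LANDSCAPE = 8;
    static final int REVERSE_PORTRAIT = 9;

    private static final int[] ORDER = {
            PORTRAIT, LANDSCAPE, REVERSE_PORTRAIT, REVERSE_LANDSCAPE
    };

    // Given the orientation just reached, return the next one to request.
    static int next(int current) {
        for (int i = 0; i < ORDER.length; i++) {
            if (ORDER[i] == current) return ORDER[(i + 1) % ORDER.length];
        }
        return ORDER[0]; // unknown orientation: restart the cycle
    }
}
```

In the Activity you would then call setRequestedOrientation(OrientationCycle.next(current)) from onConfigurationChanged(), so each request is only issued after the previous rotation has actually completed.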

SEC_Overlay called many times with Relative Layout

After getting so much help from Stack Overflow, I'd like to share back some of my experience.
I spent hours debugging one particular pair of log messages:
09-18 08:11:37.177: DEBUG/SEC_Overlay(128): overlay_setPosition(-1) 03,680,90,120 => 503,680,90,120
09-18 08:11:37.177: INFO/SEC_Overlay(128): Nothing to do!
These are logged every time the GLSurfaceView updates, but only when the GLSurfaceView is used in combination with a RelativeLayout and a camera preview. This combination is essential for the AndroAngelo app: on the main screen I overlay the radial on-screen menu onto the GLSurfaceView, and on the calibration screen I draw the result of the image detection onto the preview.
It significantly decreases performance.
What to do?
I never really got to the bottom of it. It appeared twice. The second time it simply disappeared after restarting Eclipse. The first time perhaps too, though I was also playing around with camera parameters, and then it disappeared.
It comes back from time to time and seems to be something in the phone itself. Whenever it appears, I simply reboot the Android device and it is gone afterwards.

Get number of homescreens without calling onoffsetschanged (for wallpaper)?

I'm writing a live wallpaper. However, what is initially shown depends on the number of home screens.
While onOffsetsChanged() lets you calculate the number of home screens, it is only called when the user scrolls the home screen.
So is there a way to get the current xStep and xOffset without onOffsetsChanged() being called?
Edit: I may not need to know that per se. Here's what I'm doing: I'm drawing a portion of a bitmap, and the portion shown depends on the current home screen.
Edit 2: To explain what I'm trying to do: I'm basically trying to mimic the scrolling-wallpaper effect, but with a video. The point is that the portion shown depends on the current home screen. Here's the problem: the user selects the wallpaper, onSurfaceCreated() is called, followed by onSurfaceChanged(). However, onOffsetsChanged() is never called until the user scrolls the home screens. That's the problem: you don't know what part of the bitmap/video to display until the user scrolls the screen. (So Josh's suggestion doesn't work; the part of the video that's displayed may be wrong until the user scrolls and we get correct onOffsetsChanged() values.)
Your edit doesn't really explain why you need to know how many screens there are. You can draw the center portion of your bitmap initially; then, when xOffset changes to something like 0, draw the leftmost portion. What's the issue?
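The "draw the center portion initially" idea can be made concrete with a little pure math. This is a sketch under the usual scrolling-wallpaper convention (bitmap wider than the screen, xOffset in [0, 1] from leftmost to rightmost home screen, and 0.5 as the centered default before the first onOffsetsChanged() call):

```java
// Maps a wallpaper xOffset to the left edge of the bitmap window to draw.
final class WallpaperCrop {
    // Left edge (in bitmap pixels) of the visible window for a given xOffset.
    // xOffset is 0 on the leftmost home screen, 1 on the rightmost.
    static int srcLeft(int bitmapWidth, int screenWidth, float xOffset) {
        int scrollable = Math.max(0, bitmapWidth - screenWidth);
        return Math.round(scrollable * xOffset);
    }

    // Before the first onOffsetsChanged() callback, draw the center portion.
    static int defaultSrcLeft(int bitmapWidth, int screenWidth) {
        return srcLeft(bitmapWidth, screenWidth, 0.5f);
    }
}
```

Once onOffsetsChanged() does fire, you simply re-run srcLeft() with the real xOffset and redraw, so the only cost of the missing initial callback is a possibly off-center first frame.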
