I am building a custom camera application with preview, based on the SDK example, on a Nexus S (2.3).
Everything works fine, including taking sharp pictures, but the preview does not seem to adjust its level (intensity) the way the built-in camera does:
when I preview dark objects, the built-in camera compensates and increases the intensity, while the custom preview stays at the default intensity, making the preview pretty dark. The captured image turns out at the correct intensity.
It is not related to white balance, nor to camera exposure.
I do not want a full preview processing chain - I just want to enable automatic luminance level control. Is that possible using the standard API?
Thanks
If you use autofocus on the preview I believe you will get the results you expect.
See Camera.autoFocus and Camera.AutoFocusCallback.
I believe you'll get one autofocus pass per call; that is, it's not continuous. You can either implement a way to call it continuously using handlers, or simply put a touch listener on your SurfaceView and trigger autofocus when the user taps the preview.
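A minimal sketch of the tap-to-focus approach (assuming you already hold an opened `android.hardware.Camera` instance named `camera` and a `SurfaceView` named `preview`; both names are placeholders, and this only runs on a device):

```java
// Sketch: trigger a single autofocus pass whenever the user taps the preview.
// Assumes `camera` is an opened android.hardware.Camera and `preview` is the
// SurfaceView showing the preview frames.
preview.setOnTouchListener((view, event) -> {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        camera.autoFocus((success, cam) -> {
            // success == true when the camera reports the scene is in focus.
        });
    }
    return true;
});
```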
In my app I'm trying to use ARCore as a sort of "camera assistant" in a custom camera view.
To be clear: I want to display AR models to the user in the camera view, but have them capture images that don't contain those models.
From what I understand, in order to capture an image with ARCore I'll have to use the Camera2 API, which is enabled by configuring the session to use the "shared camera".
However, I can't seem to configure the camera to use any high-end resolutions (I'm using a Pixel 3, so I should be able to go as high as 12MP).
In the "shared camera example", they toggle between Camera2 and ARCore (a shame there's no API for CameraX), and it has several problems:
In the ARCore mode the image is blurry (I assume that's because the depth sensor is disabled, as stated in their documentation)
In the Camera2 mode I can't increase the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ARCore.
Is this requirement at all possible at the moment?
I have not worked with the shared camera in ARCore yet, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
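A sketch of that selection step (assuming an ARCore `Session` named `session`; the selection criterion here, largest CPU image, is just an illustrative choice):

```java
// Sketch: pick the supported camera config with the largest CPU image size.
// Assumes `session` is a com.google.ar.core.Session that is currently paused,
// since the config must be set before resuming.
CameraConfigFilter filter = new CameraConfigFilter(session);
List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);
CameraConfig best = configs.get(0);
for (CameraConfig config : configs) {
    Size cpu = config.getImageSize();    // CPU image size (e.g. for acquireCameraImage)
    Size gpu = config.getTextureSize();  // GPU texture size used for rendering
    if (cpu.getWidth() * cpu.getHeight()
            > best.getImageSize().getWidth() * best.getImageSize().getHeight()) {
        best = config;
    }
}
session.setCameraConfig(best);  // takes effect on session.resume()
```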
Probably you are currently using (maybe by default?) a CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but this has nothing to do with the depth sensor).
It sounds like you could just select a higher CPU image size and you're good to go... but unfortunately that's not the case, because that configuration applies to every frame. Getting higher-resolution CPU images results in much lower performance: when I tested this I got about 3-4 frames per second on my test device, definitely not ideal.
So now what? I think you have two options:
Pause the ARCore session, switch to a higher CPU image size for one frame, grab the image, and switch back to the "normal" configuration.
Probably you are already getting a nice GPU image; maybe not the best, due to the camera preview, but hopefully good enough. Not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing... but rendering it onto another framebuffer and then reading the texture attached to it could work. Of course you might need to handle texture coordinates yourself (full image vs. visible area), but that's another topic.
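The framebuffer-copy idea in option 2 can be sketched roughly like this (assuming a current EGL context, known `width`/`height`, the camera's OES texture id in `oesTexId`, and a helper `drawExternalTexture` that draws a fullscreen quad with an external-OES sampler; the helper is not shown and its name is made up):

```java
// Sketch: copy the GL_TEXTURE_EXTERNAL_OES camera texture by rendering it
// into a framebuffer-attached RGBA texture, then reading the pixels back.
int[] fbo = new int[1], tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
drawExternalTexture(oesTexId);  // render the camera image into the FBO (helper not shown)
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // back to the default framebuffer
```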
Regarding CameraX, note that it wraps the Camera2 API in order to provide camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, CameraX would not be suitable for ARCore, as I imagine they need full control of the camera.
I hope that helps a bit!
I am not sure how exactly to express this, but you must have noticed that the Android camera automatically adjusts the 'look' of the camera preview based on the object it is pointed at. For example, if you point the camera directly at a light, it will darken the area surrounding the light so the light appears without blowing out the color. I have fiddled with many of the settings in the camera app but couldn't find any way to stop this automatic adjustment.
So what is this adjustment actually called? And can I turn this feature on/off from code?
Currently my application uses the android.provider.MediaStore.ACTION_VIDEO_CAPTURE intent to capture video, but it seems I can't programmatically set the maximum resolution (for example 720p).
Are there any methods/libraries that mimic this behavior but allow resolution control? Or should I build the capture myself using MediaRecorder, SurfaceView, etc.?
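If you do go the custom route, a minimal MediaRecorder sketch with an explicit resolution might look like the following (the 720p values and `outputFile` name are illustrative; real code also needs a preview surface, permissions, and error handling):

```java
// Sketch: record 720p video with MediaRecorder instead of the capture intent.
// MediaRecorder is order-sensitive: sources, then format, then encoders/size.
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoSize(1280, 720);  // the resolution the intent won't let you set
recorder.setOutputFile(outputFile.getAbsolutePath());
recorder.prepare();
recorder.start();
// ... later, when done:
recorder.stop();
recorder.release();
```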
If anyone is wondering, I've switched to https://github.com/JeroenMols/LandscapeVideoCamera/
It only required changing a couple of lines of code to get working. The downside is that it supports only landscape mode. But maybe that's a plus, since fewer people will record vertical videos.
I am developing an application where I have to take a picture without using the media intent, i.e. without showing a camera preview. How can I do this? Can anyone help me in this regard?
waiting for your reply
Altaf
You cannot take a picture without a preview. Whether it is the preview offered by the Intent or a preview that you create yourself with a SurfaceView when you use the Camera object, there has to be a preview.
Just use takePicture() directly on the camera object:
http://developer.android.com/reference/android/hardware/Camera.html#takePicture
I believe some of the older devices wouldn't capture correctly unless a preview was set up, but I don't think that's an issue any more. And if you are targeting devices that require a preview, you can just resize the preview surface to a single pixel somewhere and put another control on top of it. It still eats resources, but it shouldn't be visible.
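A sketch of that approach (assuming `holder` is the `SurfaceHolder` of a 1x1-pixel SurfaceView already added to the layout; `setPreviewDisplay` can throw IOException, handling omitted):

```java
// Sketch: take a picture with a preview surface that is effectively invisible.
Camera camera = Camera.open();
camera.setPreviewDisplay(holder);  // some devices require a live preview surface
camera.startPreview();
camera.takePicture(null, null, (data, cam) -> {
    // `data` is the JPEG byte array; write it to storage here.
    cam.stopPreview();
    cam.release();
});
```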
I have a camera app; in its simplest state it's nothing more than the CameraPreview example with some 'takePicture' code. The example online is for 2.0, and I'm developing against 1.5 (API level 3), but still, here it is: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/CameraPreview.html
The biggest difference from the old version is the whole "getOptimalPreviewSize" thing.
Everything is done in landscape.
Now the problem is: I have a preview, but when I take the picture there is more information in that picture than there is in the preview. The top and bottom show things that weren't visible in the preview.
Now I am going to put an overlay on top of the preview, to align the object in the picture with something. When the picture is taken, the whole thing gets squeezed a bit, and I'm all out of alignment :(.
The camera app on the system doesn't have this problem, so it must be possible to fix. Any thoughts?
If I must manually set the preview and/or picture size, I'll have trouble with different handsets, I guess, and because a lot of functions exist only since API level 5 (e.g. getOptimalPreviewSize), I can't use them.
Having built a custom camera app for Android, I know exactly what you are going through. Android 1.5 makes up only 1.1% of Android users as of 10/29/2011. You will be better off jumping up to at least API level 5. If you want to support portrait and landscape previews consistently on all devices, I recommend going even higher.
Make use of getSupportedPreviewSizes() and getSupportedPictureSizes(). These functions tell you exactly what the camera supports (this varies by phone/manufacturer). Run through both lists, find preview and picture sizes whose aspect ratios match, and use the pair that suits you best.
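The matching step itself is plain logic. A sketch, using bare width/height pairs instead of `Camera.Size` so it is self-contained (the selection rule here, largest preview whose aspect ratio also appears among the picture sizes, is just one reasonable choice):

```java
import java.util.Arrays;
import java.util.List;

public class SizeMatcher {
    /** Pick the largest preview size whose aspect ratio matches some picture size. */
    static int[] bestMatch(List<int[]> previewSizes, List<int[]> pictureSizes) {
        int[] best = null;
        for (int[] p : previewSizes) {
            for (int[] q : pictureSizes) {
                // Compare aspect ratios with a small tolerance.
                if (Math.abs((double) p[0] / p[1] - (double) q[0] / q[1]) < 0.01) {
                    if (best == null || p[0] * p[1] > best[0] * best[1]) {
                        best = p;
                    }
                }
            }
        }
        return best;  // null if no preview size matches any picture size
    }

    public static void main(String[] args) {
        List<int[]> previews = Arrays.asList(new int[]{640, 480}, new int[]{1280, 720});
        List<int[]> pictures = Arrays.asList(new int[]{2048, 1536}, new int[]{3264, 2448});
        int[] chosen = bestMatch(previews, pictures);
        // Both picture sizes are 4:3, so only the 4:3 preview (640x480) matches.
        System.out.println(chosen[0] + "x" + chosen[1]);
    }
}
```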
Word of warning: Failure to set preview and picture sizes that are actually supported can cause your app to crash on certain devices. I've seen this first hand.
reference to 1.5 Android users
The largest size returned by getSupportedPictureSizes() represents the native resolution of your camera. If the aspect ratio of that size differs from the preview size or picture size that you set on your Camera object, then cropping will occur. You can compare the aspect ratios to determine how much will be cropped and in which direction (top/bottom vs. left/right).
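A sketch of that comparison (pure arithmetic; the 16:9 sensor vs. 4:3 preview numbers below are made up for illustration):

```java
public class CropCheck {
    /**
     * Returns the fraction of the source image lost when it is cropped to a
     * different target aspect ratio. A wider source loses left/right edges;
     * a taller source loses top/bottom edges.
     */
    static double croppedFraction(int srcW, int srcH, int dstW, int dstH) {
        double src = (double) srcW / srcH;
        double dst = (double) dstW / dstH;
        if (src > dst) {
            // Source is wider than the target: left/right edges are cropped.
            return 1.0 - dst / src;
        }
        // Source is taller (or equal): top/bottom edges are cropped.
        return 1.0 - src / dst;
    }

    public static void main(String[] args) {
        // 16:9 sensor shown in a 4:3 preview: about 25% of the width is lost.
        System.out.println(croppedFraction(1920, 1080, 640, 480));
        // Same aspect ratio on both sides: nothing is cropped.
        System.out.println(croppedFraction(3264, 2448, 640, 480));
    }
}
```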