OpenCV Android native camera motion blur

I'm using the OpenCV Android native camera libraries for image capture in the NDK, but the phone is (necessarily) moving while the images are acquired, so the images come out quite blurry. What options are there to reduce this blur? I don't care much about color fidelity, so I can sacrifice quality in that area. There doesn't appear to be direct exposure control, but I did find that doing
cap.set(CV_CAP_PROP_EXPOSURE, -4)
seemed to help a bit, although -4 is as far as it would go. I'm not terribly familiar with camera terminology in general, so I'm not sure what effect the other properties might have for me.
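For reference, the same property IDs can be exercised through OpenCV 2.4's Java wrapper around the same native camera (the C++ cv::VideoCapture in the NDK takes identical constants). This is only a sketch: property support varies a lot by device, and set() simply returns false for unsupported properties.

import org.opencv.core.Mat;
import org.opencv.highgui.Highgui;
import org.opencv.highgui.VideoCapture;

// Sketch: open the OpenCV Android native camera, request smaller frames,
// and push exposure down. All of this is device-dependent.
VideoCapture cap = new VideoCapture(Highgui.CV_CAP_ANDROID); // back camera
cap.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(Highgui.CV_CAP_PROP_FRAME_HEIGHT, 480);
cap.set(Highgui.CV_CAP_PROP_EXPOSURE, -4); // same call as above

Mat gray = new Mat();
if (cap.grab()) {
    // The grayscale channel is enough here, since color fidelity doesn't matter.
    cap.retrieve(gray, Highgui.CV_CAP_ANDROID_GREY_FRAME);
}
cap.release();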
Also, I noticed that in the default camera application (Galaxy SIII), motion blur occurs during image preview, which makes sense, since OpenCV also grabs preview images. But in video preview (non-recording) mode, the image is a lot crisper. Is anyone aware of how I can access this stream?

Related

Is it possible to capture a High-Res image while using ArCore?

In my app I'm trying to use ArCore as sort of a "camera assistant" in a custom camera view.
To be clear: I want to display the camera feed to the user and let them capture images that don't contain the AR models.
From what I understand, in order to capture an image with ArCore I'll have to use the Camera2 API, which is enabled by configuring the session to use the "shared Camera".
However, I can't seem to configure the camera to use any high-end resolution (I'm using a Pixel 3, so I should be able to go as high as 12MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In ArCore mode the image is blurry (I assume that's because the depth sensor is disabled, as stated in their documentation).
In Camera2 mode I can't increase the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera in ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both the CPU image size and the GPU texture size. You do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore session. Each CameraConfig tells you which CPU image size and GPU texture size you will get.
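A minimal sketch of that selection, assuming an existing session; picking the largest CPU image is just one illustrative policy:

import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Session;
import java.util.List;

// Sketch: list the supported configs and keep the one with the largest
// CPU image. An empty filter places no constraints on the results.
CameraConfigFilter filter = new CameraConfigFilter(session);
List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);

CameraConfig best = configs.get(0);
for (CameraConfig candidate : configs) {
    long pixels = (long) candidate.getImageSize().getWidth()
            * candidate.getImageSize().getHeight();
    long bestPixels = (long) best.getImageSize().getWidth()
            * best.getImageSize().getHeight();
    if (pixels > bestPixels) {
        best = candidate;
    }
}

// setCameraConfig() must be called while the session is paused;
// resume() can throw CameraNotAvailableException.
session.pause();
session.setCameraConfig(best);
session.resume();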
You are probably using (maybe by default?) the CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but this has nothing to do with the depth sensor).
It sounds like you could just select a higher CPU image size and be good to go... but unfortunately that's not the case, because the configuration applies to every frame. Requesting higher-resolution CPU images results in much lower performance: when I tested this, I got about 3-4 frames per second on my test device, which is definitely not ideal.
So now what? I think you have 2 options:
Pause the ARCore session, switch to a higher CPU image size for one frame, grab the image, and switch back to the "normal" configuration.
You are probably already getting a nice GPU image, maybe not the best due to the camera preview, but hopefully good enough. I'm not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing, but rendering it onto another framebuffer and then reading back the texture attached to it can work, as sketched below. You might need to deal with texture coordinates yourself (full image vs. visible area), but that's another topic.
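A rough GLES20 outline of that framebuffer approach. Everything here is illustrative: it assumes a current GL context, a width x height camera texture already set up as GL_TEXTURE_EXTERNAL_OES, and drawCameraQuad() as a stand-in for whatever full-screen quad draw (with a samplerExternalOES shader) you already perform.

import android.opengl.GLES20;
import java.nio.ByteBuffer;

int[] tex = new int[1];
int[] fbo = new int[1];

// 1. Create a plain 2D texture to receive the copy.
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// 2. Attach it to a framebuffer and render the camera quad into it.
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
GLES20.glViewport(0, 0, width, height);
drawCameraQuad(); // placeholder for your existing external-OES draw call

// 3. Read the pixels back; rows arrive bottom-up in GL convention.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

From there the RGBA buffer can be copied into a Bitmap and saved.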
Regarding CameraX, note that it wraps the Camera2 API to provide camera use cases, so that app developers don't have to worry about the camera lifecycle. As I understand it, CameraX would not be suitable for ARCore, since I imagine they need full control of the camera.
I hope that helps a bit!

Real time mark recognition on Android

I'm building an Android app that has to identify, in real time, a mark/pattern that will be on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present, similar to the reference dots a QR reader shows at the corner points of a QR code during preview.
I know how to get frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on the algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (it's the same as the one used in QR codes, but I'm placing it at 4 corners instead of 3).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code this on Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
The processing that needs to be done, and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet is delayed (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
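To make the ratio idea from the question concrete, here is a plain-Java sketch of scanning one binarized row for the QR-style 1:1:3:1:1 run-length pattern. It mirrors what ZXing's FinderPatternFinder does; a real detector must also cross-check the ratio vertically and diagonally before accepting a candidate.

// Scans one row for five alternating runs in a 1:1:3:1:1 ratio and records
// candidate center x positions. rowIsBlack[x] is true where the pixel is dark.
static void scanRow(boolean[] rowIsBlack, java.util.List<Integer> centersX) {
    int[] runs = new int[5]; // lengths of the last five runs
    int current = 1;
    for (int x = 1; x <= rowIsBlack.length; x++) {
        if (x < rowIsBlack.length && rowIsBlack[x] == rowIsBlack[x - 1]) {
            current++;
            continue;
        }
        // A run just ended: slide the five-run window along.
        System.arraycopy(runs, 1, runs, 0, 4);
        runs[4] = current;
        current = 1;
        // Only test windows ending on a black run (black,white,black,white,black).
        if (rowIsBlack[x - 1] && looksLikeFinderPattern(runs)) {
            centersX.add(x - runs[4] - runs[3] - runs[2] / 2); // middle of wide run
        }
    }
}

// Accepts run lengths within 50% of the ideal 1:1:3:1:1 proportions.
static boolean looksLikeFinderPattern(int[] runs) {
    int total = 0;
    for (int run : runs) {
        if (run == 0) return false;
        total += run;
    }
    if (total < 7) return false;
    float module = total / 7.0f;       // width of one "unit" of the pattern
    float maxVariance = module / 2.0f;
    return Math.abs(module - runs[0]) < maxVariance
            && Math.abs(module - runs[1]) < maxVariance
            && Math.abs(3.0f * module - runs[2]) < 3.0f * maxVariance
            && Math.abs(module - runs[3]) < maxVariance
            && Math.abs(module - runs[4]) < maxVariance;
}

A pass like this over all rows is linear in the number of pixels, which is why this style of detection can keep up with preview frame rates.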

Unity Dev for Gear VR - Equirectangular Panorama as skybox/sphere

I've been trying to make a 360 photo viewer similar to the Oculus 360 Photos app. The only problem is that when projecting onto a sphere with inverted normals, the image "warps" or "bends" along with the sphere, so straight lines such as door frames turn into curves; a bad result.
Changing the size of the sphere does nothing, and obviously the picture has to bend somewhere to fit onto the inner surface of the sphere, so I don't think this solution will work.
I then tried turning the photo into a cylindrical skybox and using it as the skybox component of the camera, which works great: no bending lines, everything looks as desired. Except for one thing: there is a shimmering/aliasing effect on the texture unless I enable mipmaps, which then results in a blurred image.
Does anybody know how I could display my image so it appears like those in the Oculus 360 Photos app? They render with perfect quality: no bending lines, no shimmering. How do they achieve this result?
I've tried different compression types and different shapes. The only thing I haven't tried is slicing the photo into 6 pieces and rendering it on the inside of a cube around the camera, which, due to its proximity, might avoid the shimmering that could be caused by distance from the camera.
Thoughts, suggestions, questions? Any assistance or discussion is appreciated.
I was able to get good results by increasing the render scale to 1.5 or higher, which eliminated the shimmering aliasing effect. I'm not 100% sure whether the issue was due to the Samsung S6's resolution, but I now just work with an increased render scale for higher quality regardless, and optimize elsewhere to save on framerate.
I know this question is old now, but I had the same problem on the Oculus Go, and it was solved thanks to these instructions here & here.

Capturing preview image with cwac-camera?

This question may be slightly philosophical in nature, but would it be crazy to just capture a photo from the live preview instead of going through takePhoto?
I've found a few examples of how to do so: How to capture preview image frames from Camera Application in Android Programming? and Capture an image from the camera preview.
Right now I'm juggling inconsistent EXIF rotation behavior on various phones (it looks like you have a FullExifFixup for all Samsung devices, but I'm seeing different behavior between my S2 and S4), and I'm wondering if it wouldn't just be easier to grab the preview image.
Is this a stupid idea?
but would it be crazy to just capture a photo from the live preview instead of going through takePhoto?
It would be crazy with the library as it stands, simply because I don't expose the preview frames. :-) That's on the issue list.
it looks like you have a FullExifFixup for all Samsung devices, but I'm seeing different behavior between my S2 and S4
I don't have an S2 at the moment. If you can provide me with a reproducible test case (including details of the specific model of S2), post an issue, and I can see what I can do.
I'm wondering if it wouldn't just be easier to grab the preview image
It won't be at the full resolution of the camera -- you'll be capped at the preview frame size. That being said, plenty of apps work with just the preview frames. Vine, for example, captures its video by grabbing preview frames, due to various problems they ran into when using MediaRecorder (there's a conference talk from a Vine employee that goes into more detail).
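For reference, a one-shot grab of the next preview frame with the classic android.hardware.Camera API (which this library wraps) can look roughly like this. The default preview format is NV21, so YuvImage can compress it directly; camera is assumed to be an already-previewing Camera instance.

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import java.io.ByteArrayOutputStream;

camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
                size.width, size.height, null);
        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, jpeg);
        // jpeg.toByteArray() now holds a preview-resolution JPEG. Note that,
        // unlike takePicture() output, it is not rotated for display
        // orientation and carries no EXIF data.
    }
});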

Android camera: general ghosting issues

I am developing an Android camera app. The pictures are later processed by OCR, so they must be as sharp as possible.
If you shake the camera, it looks as if the digital camera overlays multiple images to create the effect of motion blur:
Example 1: http://i.stack.imgur.com/nqrmd.jpg
Example 2: http://i.stack.imgur.com/ZBx6F.jpg
If you examine the pictures closely, the motion blur looks to consist of 2 or 3 images taken in quick succession, blended together to simulate a longer exposure. I understand that this is simply how digital cameras work.
But I'd prefer a single crisp image to a properly exposed one. The app can use histogram corrections to make the text readable again for OCR; the image does not have to appeal to the human eye.
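One way to do such corrections, sketched here with OpenCV's Java bindings (gray is assumed to be a single-channel grayscale Mat of the captured frame, and the threshold parameters are placeholders to tune):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Spread the intensity histogram, then binarize adaptively so text stays
// readable under uneven exposure.
Mat enhanced = new Mat();
Imgproc.equalizeHist(gray, enhanced);
Mat binary = new Mat();
Imgproc.adaptiveThreshold(enhanced, binary, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, 10);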
Is there a way to better control the camera to get this sort of raw image snapshot?
I had some limited success using the "Action" scene mode on the camera. Not much, but it's as far as you can get.
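With the old android.hardware.Camera API, enabling that scene mode looks roughly like this; availability varies by device, so check the supported list first. camera is assumed to be an already-opened Camera instance.

import android.hardware.Camera;
import java.util.List;

Camera.Parameters params = camera.getParameters();
List<String> modes = params.getSupportedSceneModes();
if (modes != null && modes.contains(Camera.Parameters.SCENE_MODE_ACTION)) {
    // "Action" biases the camera toward shorter exposures for moving subjects.
    params.setSceneMode(Camera.Parameters.SCENE_MODE_ACTION);
    camera.setParameters(params);
}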
