I have tried two different approaches for capturing an image from the Android camera hardware when the user taps a capture button. In the first, I call autoFocus() and, once the AutoFocusCallback reports success, take the picture. In the second, I take the picture without calling autoFocus() at all. In both cases I noticed that the byte array passed to onPictureTaken() has a different length: the one produced after autoFocus() completes successfully is usually at least 50 KB larger than the one captured when autofocus is skipped entirely. Why is that? Could somebody shed some light on this?

What I don't understand is this: if autoFocus() completes successfully, shouldn't the picture simply have better quality? And quality is just the values of the bits in the bytes representing the RGB channels of each pixel. The total number of pixels, and therefore the total number of bytes representing the RGB channels, should be the same regardless of which values those bytes hold. Yet there apparently are more bytes of data in the sharper, autofocused image than in the ordinary one.
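For reference, the two capture paths look roughly like this with the (deprecated) android.hardware.Camera API; the callback wiring here is a sketch, not my exact code:

```java
// Shared JPEG callback: just log how many bytes the driver handed back.
final Camera.PictureCallback jpegCallback = (data, cam) ->
        Log.d("Capture", "JPEG size = " + data.length + " bytes");

// Path 1: focus first, capture once the focus callback reports success.
camera.autoFocus((success, cam) -> {
    if (success) {
        cam.takePicture(null, null, jpegCallback);
    }
});

// Path 2: capture immediately, skipping autofocus entirely.
camera.takePicture(null, null, jpegCallback);
```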
Been researching for over a month now. Would really appreciate a quick answer.
Image and video capture drivers use YUV formats for capture, in most cases either YUV420 or YUV422. Refer to this link for more information on YUV formats: http://www.fourcc.org/yuv.php
As you mentioned, the pictures taken after the autofocus call are much sharper (crisper edges and better contrast), and that sharpness is missing in images captured without autofocus.
As you know, JPEG compression is used to compress the image data, and it works on macroblocks (square blocks of the image). An image with sharp edges and more detail needs more coefficients to encode than a blurred image, in which most neighbouring pixels look as though they have been averaged out. That is why the autofocused image is bound to have more data: it contains more detail.
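A quick way to see this effect on a device (a hedged illustration, not the asker's code; 'sharp' stands for any in-focus capture decoded to a Bitmap):

```java
// Illustration only: removing fine detail (here by down- and up-scaling, a crude
// stand-in for defocus blur) makes the same picture compress to noticeably fewer
// JPEG bytes at the same quality setting.
ByteArrayOutputStream sharpOut = new ByteArrayOutputStream();
sharp.compress(Bitmap.CompressFormat.JPEG, 90, sharpOut);

Bitmap small = Bitmap.createScaledBitmap(sharp, sharp.getWidth() / 4, sharp.getHeight() / 4, true);
Bitmap soft  = Bitmap.createScaledBitmap(small, sharp.getWidth(), sharp.getHeight(), true);

ByteArrayOutputStream softOut = new ByteArrayOutputStream();
soft.compress(Bitmap.CompressFormat.JPEG, 90, softOut);

Log.d("JpegSize", "sharp = " + sharpOut.size() + " bytes, softened = " + softOut.size() + " bytes");
```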
I am trying to calibrate my YUV conversion against my PC's screen (24-inch), and I found that the YUV response keeps changing.
For example, if I point the camera at the Windows logo, which is mainly blue, I get the correct blue from the YUV conversion.
But if I open another window with a lot of white, the YUV conversion is very poor: there is almost no blue corresponding to the blue part.
The same problem occurs when I change the angle of the camera; the YUV response changes automatically.
If there is more or less white on the screen, the colour response (YUV) is automatically modified.
Is it possible to avoid this automatic change, and could someone explain why it happens?
Thanks for your help.
I have deleted my earlier answer in order to say that it is nearly impossible to calibrate YUV to RGB. Setting the exposure lock, the white balance lock and the exposure compensation once at the beginning does nothing to help the calibration.
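For reference, the locks mentioned above look roughly like this on the old android.hardware.Camera API (values are placeholders; as noted, they did not fix the drift for me):

```java
Camera.Parameters params = camera.getParameters();
if (params.isAutoExposureLockSupported()) {
    params.setAutoExposureLock(true);          // freeze auto-exposure
}
if (params.isAutoWhiteBalanceLockSupported()) {
    params.setAutoWhiteBalanceLock(true);      // freeze auto white balance
}
// Must stay within getMinExposureCompensation()..getMaxExposureCompensation().
params.setExposureCompensation(0);
camera.setParameters(params);
```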
Conclusion: it looks nearly impossible to calibrate RGB colour values from YUV. The exposure and the lighting change all the time, and there is no information about those changes.
If someone has the solution, I'll take it ;)
I have been working on touchless biometrics. I want to extract fingerprints from an image captured with a normal mobile camera. I have achieved a good image, but it is not good enough to be verified by the government.
The lines need to be thicker and better connected.
What have I tried so far?
Below are the steps I took to extract a fingerprint from the image (a rough OpenCV sketch follows the list). The result is decent, but the ridge lines are disconnected in places and merge with neighbouring lines in others.
Changed contrast and brightness to 0.8 and 25 respectively
Converted from RGB to grayscale
Applied histogram equalization
Normalized the image
Applied an adaptive (Gaussian C) threshold with a block size of 15 and a constant of 2
Smoothed the image to get rid of rough edges
Changed contrast and brightness again, to 1.7 and -40 respectively
Applied a Gaussian blur
Applied addWeighted (alpha = 0.5, beta = -0.5, gamma = 0)
Applied a binary threshold (threshold = 10)
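Put together, the pipeline above looks roughly like this in OpenCV for Java. The numeric constants are the ones listed; the kernel sizes, the colour-conversion code and the choice of medianBlur for the smoothing step are my assumptions:

```java
// Sketch only; uses org.opencv.core.* and org.opencv.imgproc.Imgproc.
static Mat extractRidges(Mat src) {
    Mat work = new Mat();
    src.convertTo(work, -1, 0.8, 25);                       // 1. contrast 0.8, brightness +25
    Mat gray = new Mat();
    Imgproc.cvtColor(work, gray, Imgproc.COLOR_RGB2GRAY);   // 2. RGB -> gray (use RGBA2GRAY for Android bitmaps)
    Imgproc.equalizeHist(gray, gray);                       // 3. histogram equalization
    Core.normalize(gray, gray, 0, 255, Core.NORM_MINMAX);   // 4. normalize to the full 0..255 range
    Imgproc.adaptiveThreshold(gray, gray, 255,              // 5. adaptive Gaussian threshold, block 15, C = 2
            Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 15, 2);
    Imgproc.medianBlur(gray, gray, 3);                      // 6. smoothing (kernel size assumed)
    gray.convertTo(gray, -1, 1.7, -40);                     // 7. contrast 1.7, brightness -40
    Mat blurred = new Mat();
    Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0); // 8. Gaussian blur (kernel size assumed)
    Core.addWeighted(gray, 0.5, blurred, -0.5, 0, gray);    // 9. addWeighted(0.5, -0.5, 0)
    Imgproc.threshold(gray, gray, 10, 255, Imgproc.THRESH_BINARY); // 10. binary threshold at 10
    return gray;
}
```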
The original image would look like this (unfortunately I have lost the original that corresponds to the processed image).
And the result is the attached (processed) image.
I need the lines to be better connected and clearly separated from neighbouring lines so that ridge endings and ridge bifurcations can be identified easily.
I also came across this link, but with my very limited background in image processing I am unable to understand it. Any guidance regarding that link would also help me a lot.
I am using OpenCV on Android.
Any help is highly appreciated.
I saw a video on YouTube about ray tracing in video game rendering, in which the creator of the Q2VKPT engine uses a "temporal filter" over multiple frames to get a clean image (ASVGF).
https://www.youtube.com/watch?v=tbsudki8Sro
from 26:20 to 28:40
Maybe, if you capture three different images of the fingerprint and then use a similar approach, you could get a picture with less noise that works better.
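A minimal sketch of that idea with OpenCV in Java, averaging three grayscale captures of the same finger to suppress noise before running the ridge-extraction steps (the frame names are placeholders):

```java
static Mat averageFrames(Mat f1, Mat f2, Mat f3) {
    Mat acc = Mat.zeros(f1.size(), CvType.CV_32FC1);
    Mat tmp = new Mat();
    for (Mat f : new Mat[] { f1, f2, f3 }) {
        f.convertTo(tmp, CvType.CV_32FC1);              // promote to float so the sum doesn't clip
        Core.add(acc, tmp, acc);
    }
    Core.multiply(acc, new Scalar(1.0 / 3.0), acc);     // mean of the three frames
    Mat result = new Mat();
    acc.convertTo(result, CvType.CV_8UC1);              // back to an 8-bit grayscale image
    return result;
}
```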
Currently I'm showing a preview of the camera on screen by giving the camera a preview texture, camera.setPreviewTexture(...) (rendered with OpenGL, of course).
I have a native library that takes a byte[] as an input image and returns a byte[], the result image derived from the input. I want to call it and then draw both the input image and the result on the screen, one on top of the other.
I know that in OpenGL, to get a texture's data back on the CPU, it has to be read with glReadPixels(), and after processing I would have to load the result into a texture again, which would have a big impact on performance if done every frame.
I thought about using camera.setPreviewCallback(...): there I receive the frame (call the processing method and hand the result to my SurfaceView), while in parallel keeping the preview-texture technique for drawing to the screen. But then I'm worried about synchronizing the frames I get in the preview callback with those I get in the texture.
Am I missing anything, or is there no easy way to solve this?
One approach that may be useful is to direct the output of the Camera to an ImageReader, which provides a Surface. Each frame sent to the Surface is made available as YUV data without a copy, which makes it faster than some of the alternatives. The variations in color formats (stride, alignment, interleave) are handled by ImageReader.
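As a rough, camera2-style illustration of that setup (the resolution, format and background handler are assumptions):

```java
// A YUV ImageReader whose Surface is added as one of the camera's output targets;
// each frame arrives as Image planes without an extra copy.
ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(r -> {
    Image image = r.acquireLatestImage();
    if (image == null) return;
    Image.Plane yPlane = image.getPlanes()[0];      // luma plane
    ByteBuffer y = yPlane.getBuffer();
    int rowStride = yPlane.getRowStride();          // may be wider than the image width
    // ... hand the Y/U/V buffers to the native byte[] processing here ...
    image.close();                                  // must close to keep receiving frames
}, backgroundHandler);
// reader.getSurface() is then passed to the camera as an output Surface.
```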
Since you want the camera image to be presented simultaneously with the processing output, you can't just send frames down two independent paths; they would drift out of sync.
When the frame is ready, you will need to do a color-space conversion and upload the pixels with glTexImage2D(). This will likely be the performance-limiting factor.
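That upload step might look roughly like this, assuming the conversion produced an RGBA ByteBuffer called rgba of the given width and height:

```java
// Allocate a texture, then upload the converted RGBA pixels into it.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgba);
```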
From the comments it sounds like you're familiar with image filtering using a fragment shader; for anyone else who finds this, you can see an example here.
I have applied some effects to the camera preview using OpenGL ES 2.0 shaders.
Next, I want to save these effect pictures (grayscale, negative, ...).
I call glReadPixels() in onDrawFrame(), create a bitmap from the pixels I read out of the OpenGL framebuffer, and then save it to device storage.
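Roughly like this, where width and height are the size of the preview surface (note that GL returns rows bottom-up, so the bitmap may need a vertical flip before saving):

```java
ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
buf.rewind();
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(buf);
// ... then bmp.compress(Bitmap.CompressFormat.PNG, 100, outputStream) to save it.
```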
However, this way I only get a "snapshot" of the camera-effect preview. In other words, the resolution of the saved image (e.g. 800x480) is not the same as that of the image delivered by Camera.PictureCallback (e.g. 1920x1080).
I know the preview size can be changed with setPreviewSize(), but it still can't be made equal to the picture size.
So, is it possible to use a GLSL shader to post-process the image obtained from Camera.PictureCallback directly? Or is there another way to achieve the same goal?
Any suggestion will be greatly appreciated.
Thanks.
Justin, this was my question about setPreviewTexture(), not a suggestion. If you send the pixel data as received from onPreviewFrame() callback, it will naturally be limited by supported preview sizes.
You can use the same logic to push the pixels from onPictureTaken() to a texture, but you will need to decode them from JPEG into RGB first.
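A hedged sketch of that route (glSurfaceView and the FBO rendering are assumed plumbing from your existing renderer):

```java
Camera.PictureCallback jpegCallback = (data, cam) -> {
    // Decode the full-resolution JPEG into an RGB bitmap.
    Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
    glSurfaceView.queueEvent(() -> {                    // run on the GL thread
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        // ... draw a full-screen quad with the effect shader into an FBO of the
        //     picture's size, then glReadPixels() that FBO and save the result ...
    });
};
```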
I am capturing each onPreviewFrame() frame into a byte[] and saving it to compare with the next captured frame. Although the camera is pointed at an area with no change happening, and I can see on the display that nothing is changing, successive frames differ element by element. Any suggestion as to why this should be so? Also, I only want the grayscale data, and I understand that the first height * width bytes are the grayscale data; is this correct?
Statistically, there will never be an exact match: sensor noise alone changes individual pixel values between frames, even in a static scene. Check out Image comparison - fast algorithm for a way of comparing images.
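For example, a tolerance-based comparison over just the luma plane might look like this (NV21 is the default preview format, and its first width * height bytes are the grayscale values the question refers to; the threshold is arbitrary):

```java
// Returns true if two NV21 preview frames differ only by roughly noise-level amounts.
static boolean framesSimilar(byte[] prev, byte[] curr, int width, int height, double maxMeanDiff) {
    int ySize = width * height;                  // NV21: the Y (grayscale) plane comes first
    long totalDiff = 0;
    for (int i = 0; i < ySize; i++) {
        totalDiff += Math.abs((prev[i] & 0xFF) - (curr[i] & 0xFF));
    }
    return (double) totalDiff / ySize <= maxMeanDiff;   // e.g. maxMeanDiff = 2.0
}
```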