I am implementing an app that uses real-time image processing on live images from the camera. It was working, with limitations, using the now deprecated android.hardware.Camera; for improved flexibility & performance I'd like to use the new android.hardware.camera2 API. I'm having trouble getting the raw image data for processing however. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware).
I got the overall framework working (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples), and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(class), the recommended surface for obtaining image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(mSurfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried them all (plugged in for 'format' above); they all support the chosen resolution according to getOutputSizes(format), but none of them gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result -- I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled significantly horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears not to crop the image in any way, but instead scales it non-uniformly to reach the requested frame size. The images have the correct proportions when requesting a 16:9 format (of which there are, unfortunately, only two higher-res ones). This seems a bit odd to me -- it doesn't appear to happen when requesting JPEG, with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats, but instead have to find the sensor size first, find formats with the same aspect ratio, then downsample the image yourself (as needed)...
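For example, a minimal sketch of that workaround (the aspect-ratio tolerance and the 'smallest matching size' rule are just my own choices here, not something the API dictates):

// Pick a YUV_420_888 output size whose aspect ratio matches the active sensor array,
// then hand that size to the ImageReader instead of a hard-coded 640x480.
StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Rect active = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float sensorAspect = (float) active.width() / active.height();

Size chosen = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    float aspect = (float) s.getWidth() / s.getHeight();
    // Keep sizes that (nearly) match the sensor aspect ratio; prefer the smallest one.
    if (Math.abs(aspect - sensorAspect) < 0.01f
            && (chosen == null || s.getWidth() < chosen.getWidth())) {
        chosen = s;
    }
}
// (Fall back to a default if nothing matched.)
mImageReader = ImageReader.newInstance(chosen.getWidth(), chosen.getHeight(),
        ImageFormat.YUV_420_888, 2);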
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment that call out, and instead scale the image up when displaying it:
int[] rgba = new int[width * height];
nativeLoader.convertImage(width, height, data, rgba);
Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);

Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    // Draw the (width x height) frame scaled up to the desired display size.
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.
The Problem
I have been working on implementing a super resolution model with Tensorflow Lite. I have an empty bitmap 4x the size of the input bitmap (which is bmp):
Bitmap out = Bitmap.createBitmap(bmp.getWidth() * 4, bmp.getHeight() * 4, Bitmap.Config.ARGB_8888);
And I converted both bitmaps to TensorImages
TensorImage originalImage = TensorImage.fromBitmap(bmp);
TensorImage superImage = TensorImage.fromBitmap(out);
However, when I run the model (InterpreterApi tflite):
tflite.run(originalImage.getBuffer(), superImage.getBuffer());
The bitmap from superImage has not changed; it still holds the blank bitmap I made at the start:
superImage.getBitmap();
Things I've tried
I looked at basic examples and documentation; most are geared toward classification, but they all seemed to do it this way.
I fed the input bitmap to the output, and my app showed the input, so I know that the file picking and preview work.
I tested different datatypes to store the output, and they either left it blank or weren't compatible with Tensorflow.
What I think
I suspect the problem has something to do with tflite.run() changing a separate instance of superImage, so that I'm left with the old one. I may also need a different data format that I haven't tried yet.
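One direction I'm considering (only a sketch; the output tensor shape, the float data type, and the 0..255 value range are assumptions about the model, and TensorBuffer comes from the TFLite support library):

// Run inference into an explicit output buffer instead of the TensorImage
// created from the blank bitmap, then build the bitmap from it myself.
int outW = bmp.getWidth() * 4;
int outH = bmp.getHeight() * 4;
TensorBuffer outputBuffer = TensorBuffer.createFixedSize(
        new int[]{1, outH, outW, 3}, DataType.FLOAT32); // assumed output shape

tflite.run(originalImage.getBuffer(), outputBuffer.getBuffer());

// Convert the float output back to pixels (assuming values are already in 0..255).
float[] values = outputBuffer.getFloatArray();
int[] argb = new int[outW * outH];
for (int i = 0; i < outW * outH; i++) {
    int r = Math.max(0, Math.min(255, Math.round(values[i * 3])));
    int g = Math.max(0, Math.min(255, Math.round(values[i * 3 + 1])));
    int b = Math.max(0, Math.min(255, Math.round(values[i * 3 + 2])));
    argb[i] = 0xFF000000 | (r << 16) | (g << 8) | b;
}
out = Bitmap.createBitmap(outW, outH, Bitmap.Config.ARGB_8888);
out.setPixels(argb, 0, outW, 0, 0, outW, outH);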
Thank you for your time.
Similar to what is laid out in the tutorial I am initializing CameraX's Image Analysis use case with code:
ImageAnalysis imageAnalysis =
new ImageAnalysis.Builder()
.setTargetResolution(new Size(1280, 720))
.setTargetRotation(Surface.ROTATION_90)
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build();
imageAnalysis.setAnalyzer(executor, new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        int rotationDegrees = image.getImageInfo().getRotationDegrees();
        // insert your code here.
    }
});
cameraProvider.bindToLifecycle((LifecycleOwner) this, cameraSelector, imageAnalysis, preview);
I am trying to use the setTargetRotation method, but I am not clear on how I am supposed to apply this rotation to the output image, as vaguely described in the docs:
The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
If I set a breakpoint in the analyze() method shown above, the image object does not get rotated when I change the setTargetRotation value, so I assume the docs are telling me to grab the orientation with getTargetRotation(), in the sense that these two pieces of code (builder vs. analyzer) are written separately and this information can be passed between them without any rotation actually being applied. Did I understand this correctly? This really doesn't make sense to me, since the setTargetResolution method actually changes the size sent via the ImageProxy; I'd think setTargetRotation should likewise apply said rotation, but it appears not to.
If my understanding is correct, is there an optimal efficient way to rotate these ImageProxy objects after entering the analyze method? Right now I'm doing it after converting to Bitmap via
Bitmap myImg = BitmapFactory.decodeStream(someInputStream);
Matrix matrix = new Matrix();
matrix.postRotate(30);
Bitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(),
matrix, true);
The above idea came from here, but I'd think this is not the most efficient way to do it. I'm sure I could also come up with a way to transpose the arrays, but this could get tedious and messy quickly. Isn't there any way to set up the ImageAnalysis builder to send the rotated ImageProxy directly, rather than having to make a bitmap of everything?
The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
One way to understand the definition is to assume the target rotation matches the device's orientation. Applying the returned rotation to the output image will result in the image being upright, i.e. matching the device's orientation. So if the device is in its natural portrait position, the rotated output image will also be in that orientation.
Output image + Rotated by rotation value --> Upright output image
CameraX's documentation includes a section about rotations, since it can be a confusing topic. You can check it out here.
Going back to your question about setTargetRotation and the ImageAnalysis use case: it isn't meant to rotate the images passed to the Analyzer, but it does affect the rotation information of the images, i.e. ImageProxy.getImageInfo().getRotationDegrees(). Transforming images (rotating, cropping, etc.) can be an expensive operation, so CameraX does not perform any modification to the analysis frames; instead, it provides the metadata required to make sense of the output images, metadata that can then be used by the image processors that analyze the frames.
If you need to rotate each analysis frame, using the Bitmap approach is one way, but it can be costly. Another more performant way may be to do it in native code.
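For completeness, a minimal sketch of the Bitmap route inside the analyzer (the yuvToBitmap() helper is hypothetical -- CameraX doesn't provide one, so substitute your own YUV-to-Bitmap conversion):

@Override
public void analyze(@NonNull ImageProxy image) {
    int rotationDegrees = image.getImageInfo().getRotationDegrees();
    Bitmap frame = yuvToBitmap(image); // hypothetical helper, not a CameraX API
    Matrix matrix = new Matrix();
    matrix.postRotate(rotationDegrees);
    // Rotating by the reported value makes the frame match the target rotation.
    Bitmap upright = Bitmap.createBitmap(
            frame, 0, 0, frame.getWidth(), frame.getHeight(), matrix, true);
    // ... analyze 'upright' here ...
    image.close();
}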
I'm building a camera that needs to detect the user's face/eyes and measure distance through the eyes.
I found this project: https://github.com/IvanLudvig/Screen-to-face-distance. It works great, but it doesn't use a preview of the front camera (really, I tested it on at least 10 people, and all measurements were REALLY close or perfect).
My app already had a selfie-camera part that I wrote using the old camera API, but I didn't find a way to get both the camera preview and the face-distance measurement working together with it; I would always get an error that the camera was already in use.
I decided to move to camera2 in order to use more than one camera stream, and I'm still learning the process of having two streams at the same time for different things. By the way, documentation on this seems to be scarce, and I'm really lost.
Now, am I on the right path to this?
Also, in his project, Ivan uses this:
Camera camera = frontCam();
Camera.Parameters campar = camera.getParameters();
F = campar.getFocalLength();
angleX = campar.getHorizontalViewAngle();
angleY = campar.getVerticalViewAngle();
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This is the old camera API; how can I do the same with the new one?
Judging from this answer: Android camera2 API get focus distance in AF mode
Do I need to get the min and max focal lengths?
For the horizontal and vertical angles I found this one: What is the Android Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()?
The rest, I believe, is done by Google's Cloud Vision API.
EDIT:
I got it to work with camera2, using GMS's own example, CameraSourcePreview and GraphicOverlay, to display whatever I want together with the preview and to detect faces.
Now to get the camera characteristics:
CameraManager manager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
try {
character = manager.getCameraCharacteristics(String.valueOf(1));
} catch (CameraAccessException e) {
Log.e(TAG, "CamAcc1Error.", e);
}
angleX = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
angleY = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getHeight();
sensorX = (float)(Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float)(Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This pretty much gives me mm accuracy to face distance, which is exactly what I needed.
Now what is left is getting a picture from this preview with GMS's CameraSourcePreview, so that I can use later.
Final Edit here:
I solved the picture issue but forgot to edit this in. The thing is, all the examples that use camera2 to take a picture are really complicated (rightly so, it's a better API than camera and has a lot of options), but it can be really simplified to what I did here:
mCameraSource.takePicture(null, bytes -> {
Bitmap bitmap;
bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
if (bitmap != null) {
Matrix matrix = new Matrix();
matrix.postRotate(180);
matrix.postScale(1, -1);
rotateBmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
bitmap.getHeight(), matrix, false);
saveBmp2SD(STORAGE_PATH, rotateBmp);
rotateBmp.recycle();
bitmap.recycle();
}
});
That's all I needed to take a picture and save it to the location I specified. Don't mind the recycling here; it's not right yet, I'm working on it.
It looks like that bit of math is calculating the physical dimensions of the image sensor via the angle-of-view equation: sensor size = 2 * focal length * tan(angle of view / 2).
The camera2 API has the sensor dimensions as part of the camera characteristics directly: SENSOR_INFO_PHYSICAL_SIZE.
In fact, if you want to get the field of view in camera2, you have to use the same equation in the other direction, since FOVs are not part of camera characteristics.
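For example, a sketch of going the other way in camera2 (the index-0 focal length assumes a fixed-focal-length camera, and cameraId is whichever camera you're interested in):

CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    SizeF sensorSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE); // mm
    float focalLength = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)[0]; // mm

    // Same angle-of-view equation, solved for the angle instead of the sensor size.
    double angleX = 2 * Math.toDegrees(Math.atan(sensorSize.getWidth() / (2 * focalLength)));
    double angleY = 2 * Math.toDegrees(Math.atan(sensorSize.getHeight() / (2 * focalLength)));
} catch (CameraAccessException e) {
    e.printStackTrace();
}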
Beyond that, it looks like the example you linked just uses the old camera API to fetch that FOV information, and then closes the camera and uses the Vision API to actually drive the camera. So you'd have to look at the vision API docs to see how you can give it camera input instead of having it drive everything. Or you could use the camera API's built-in face detector, which on many devices gives you eye locations as well.
I followed the Android Studio tutorial to get the CameraPreview to work (Camera API Android Developer Guide). This works fine for me, and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values from a Camerapreview Pixel?
If you are using the Camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. After that, you override the onImageAvailable function, which gets an ImageReader as its argument. Then you can access the image just recorded with imageReader.acquireNextImage().
With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
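As a minimal sketch for the deprecated API (mCamera stands for your open android.hardware.Camera instance; the explicit setPreviewFormat() call is optional, since NV21 is already the default):

Camera.Parameters params = mCamera.getParameters();
params.setPreviewFormat(ImageFormat.NV21); // the default on virtually all devices
mCamera.setParameters(params);

mCamera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // 'data' holds one NV21 frame: size.width * size.height luma (Y) bytes,
        // followed by interleaved V/U chroma at half resolution.
    }
});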
The wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong conversion coefficients for YUV->RGB conversion; they should be:
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
(Cb = U, Cr = V)
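For example, a sketch (assuming the default NV21 preview layout) that reads the RGB value of a single pixel (px, py) from the byte[] passed to onPreviewFrame():

public static int nv21PixelToRgb(byte[] data, int width, int height, int px, int py) {
    int y = data[py * width + px] & 0xFF;
    // In NV21, the interleaved V/U plane follows the Y plane, at half resolution in both axes.
    int uvIndex = width * height + (py / 2) * width + (px / 2) * 2;
    int v = data[uvIndex] & 0xFF;     // Cr
    int u = data[uvIndex + 1] & 0xFF; // Cb
    int r = clamp(Math.round(y + 1.402f * (v - 128)));
    int g = clamp(Math.round(y - 0.34414f * (u - 128) - 0.71414f * (v - 128)));
    int b = clamp(Math.round(y + 1.772f * (u - 128)));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

private static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}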
You can also take a look at this stackoverflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.
I am working on an Android project in which I am using OpenCV to detect faces in all the images in the gallery. The face detection runs in a service, which keeps working until all the images have been processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        // BitmapFactory.decodeFile() expects a path String, not a File.
        defaultBitmap = BitmapFactory.decodeFile(file.getAbsolutePath(), bitmapFatoryOptions);

        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());

        Mat image = new Mat(defaultBitmap.getWidth(), defaultBitmap.getHeight(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);

        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }

        if (faceDetections.toArray().length > 0)
        {
            // process the detected faces
        }
    }
}
Everything works, but face detection is very slow; the performance is poor. When I debugged the code, I found that the line taking all the time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem, but I didn't find a solution.
Please tell me what I should do to solve this problem.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid. It is necessary because the model has a fixed size during training; without the pyramid, the only detectable size would be that fixed one (which can also be read from the XML). However, face detection can be made scale-invariant by using a multi-scale representation, i.e. detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector, but in fact, you need to set it as high as possible while still getting "good" results, so this should be determined empirically.
Your value of 1.1 can be a good choice for this purpose. It means a relatively small step is used for resizing (the size is reduced by 10% at each step), which increases the chance that a size matching the model is found. If your trained detector has the size 10x10, then you can detect faces of size 11x11, 12x12 and so on. But a factor of 1.1 requires roughly twice as many pyramid layers (and about 2x the computation time) as 1.2 does.
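For a rough sense of the cost (illustrative numbers, not measurements): covering window sizes from 20 px up to 1000 px needs about log(1000/20)/log(1.1) ≈ 41 pyramid layers at a scaleFactor of 1.1, but only about log(1000/20)/log(1.2) ≈ 21 layers at 1.2.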
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have in order to be retained.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then resize the image and search again, until the image cannot be resized any further. In every iteration the positive outputs of the cascade classifier are stored, but unfortunately it also produces many false positives. To eliminate the false positives and get the proper face rectangle out of the detections, a neighbourhood approach is applied; 3-6 is a good value for it. If the value is too high, you can lose true positives as well.
minSize – In terms of the sliding-window approach described under minNeighbors, this is the smallest window the cascade can detect; objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
Finally you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar's, but also less accurate.
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
Instead of reading the images as Bitmaps and then converting them to Mat using Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath); You can check here for the imread() function.
Also, the line below takes so much time because the detector is looking for faces with a minimum size of Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
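Putting those suggestions together, here is a sketch of a faster setup (the cascade path and the minSize of one tenth of the image dimensions are assumptions you'd tune for your own images):

// Load the LBP cascade once, outside the per-image loop.
CascadeClassifier detector = new CascadeClassifier("/path/to/lbpcascade_frontalface.xml");

// Read the image directly as a grayscale Mat instead of going through a Bitmap.
Mat image = Highgui.imread(imagepath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
MatOfRect faces = new MatOfRect();

// A larger scaleFactor and a larger minSize mean fewer pyramid layers and fewer windows to scan.
detector.detectMultiScale(image, faces, 1.2, 4, 0,
        new Size(image.width() / 10, image.height() / 10),
        new Size(image.width(), image.height()));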