I am trying face detection and adding a mask (graphic overlay) using the Google Vision API. The problem is that I cannot get the output from the camera after detecting a face and adding the mask. So far I have tried the solution from this GitHub issue: https://github.com/googlesamples/android-vision/issues/24. Based on that issue I added a custom detector class, following "Mobile Vision API - concatenate new detector object to continue frame processing", and added the conversion from "How to create Bitmap from grayscaled byte buffer image?" to my detector class.
MyFaceDetector class
class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    public SparseArray<Face> detect(Frame frame) {
        // *** add your custom frame processing code here
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 is the quality of the generated JPEG
        byte[] jpegArray = baos.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Log.e("got bitmap", "bitmap val " + bitmap);
        return mDelegate.detect(frame);
    }

    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
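For context, the linked GitHub issue wires such a wrapper into the camera pipeline roughly like this (a sketch only; context, GraphicFaceTrackerFactory, and the preview size are placeholders following the standard Mobile Vision face-tracker sample):

// Sketch: wrap the stock FaceDetector and hand the wrapper to the CameraSource,
// so the detect() override above runs on every camera frame.
FaceDetector faceDetector = new FaceDetector.Builder(context)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .build();
MyFaceDetector myFaceDetector = new MyFaceDetector(faceDetector);
myFaceDetector.setProcessor(
        new MultiProcessor.Builder<>(new GraphicFaceTrackerFactory()).build()); // placeholder factory
CameraSource cameraSource = new CameraSource.Builder(context, myFaceDetector)
        .setRequestedPreviewSize(640, 480)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .build();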
I am getting a rotated bitmap, and it does not contain the mask (graphic overlay) I added. How can I get the camera output with the mask?
Thanks in advance.
The simple answer is: you can't.
Why? The Android camera outputs frames as an NV21 ByteBuffer, and you must generate your mask from the landmark points in a separate Bitmap, then join the two images.
Sorry, but that is how the Android Camera API works. Nothing else can be done; you have to compose the final image manually.
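If what you need is the final composited image, a minimal sketch of joining the two bitmaps manually could look like this (cameraBitmap, maskBitmap, and face are placeholders for the decoded camera frame, your mask graphic, and a detected Face):

// Sketch: compose the camera frame and the mask into a single Bitmap by hand.
// cameraBitmap, maskBitmap and face are placeholders, not part of the original code.
Bitmap result = Bitmap.createBitmap(cameraBitmap.getWidth(), cameraBitmap.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(result);
canvas.drawBitmap(cameraBitmap, 0f, 0f, null);
// Place the mask using the face position (or individual landmark coordinates).
canvas.drawBitmap(maskBitmap, face.getPosition().x, face.getPosition().y, null);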
Also, I would not take the camera preview, convert it to a YuvImage, and then to a Bitmap. That process consumes a lot of resources and makes the preview very slow. Instead, I would use the following approach, which is much faster and rotates your preview internally so you don't lose time doing it:
outputFrame = new Frame.Builder().setImageData(mPendingFrameData, mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.NV21)
.setId(mPendingFrameId)
.setTimestampMillis(mPendingTimeMillis)
.setRotation(mRotation)
.build();
mDetector.receiveFrame(outputFrame);
All the code can be found in CameraSource.java
Related
I am currently working on a project that uses OpenCV in the background to detect faces while the app is playing videos.
I've managed to run OpenCV as a service, and I am using an ImageReader instance to capture the images:
private ImageReader mImageReader = ImageReader.newInstance(mWidth, mHeight, ImageFormat.YUV_420_888, 1);
What I am trying to do is take the detected face image and send it to the backend. The image from the ImageReader is converted to a Mat, so I have access to both the Mat type and the Image type.
I've managed to convert the acquired image to a YuvImage and then to JPEG (a byte array) by using the toYuvImage and toJpegImage methods from this link: https://blog.minhazav.dev/how-to-convert-yuv-420-sp-android.media.Image-to-Bitmap-or-jpeg/#how-to-convert-yuv_420_888-image-to-jpeg-format
After converting the image to a byte array, I also convert it to Base64 in order to send it over HTTP. The problem is that when I set imageQuality to 100 in toJpegImage, the resulting Base64 image looks corrupted, but when I set it to something lower like 15 or 10, the output works, though the image quality is bad. I am not sure whether this problem is related to the resolution.
byte[] jpegDataTest = ImageUtil.toJpegImage(detectionImage,15);
String base64New = Base64.encodeToString(jpegDataTest, Base64.DEFAULT);
PS: I am converting the image each time a face is detected, inside a for loop:
for(Rect rect : faceDetections.toArray()){}
Compress quality set to 100: https://i.postimg.cc/YqSmFxrT/quality100.jpg
Compress quality set to 15:
public static byte[] toJpegImage(Image image, int imageQuality) {
if (image.getFormat() != ImageFormat.YUV_420_888) {
throw new IllegalArgumentException("Invalid image format");
}
YuvImage yuvImage = toYuvImage(image);
int width = image.getWidth();
int height = image.getHeight();
// Convert to jpeg
byte[] jpegImage = null;
try (ByteArrayOutputStream out = new ByteArrayOutputStream()) {
yuvImage.compressToJpeg(new Rect(0, 0, width, height), imageQuality, out);
jpegImage = out.toByteArray();
} catch (IOException e) {
e.printStackTrace();
}
return jpegImage;
}
private static byte[] YUV_420_888toNV21(Image image) {
// Note: this shortcut assumes the V and U planes are interleaved (NV21-style, pixel stride 2),
// which is common on devices but not guaranteed by the YUV_420_888 format.
byte[] nv21;
ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
ByteBuffer vuBuffer = image.getPlanes()[2].getBuffer();
int ySize = yBuffer.remaining();
int vuSize = vuBuffer.remaining();
nv21 = new byte[ySize + vuSize];
yBuffer.get(nv21, 0, ySize);
vuBuffer.get(nv21, ySize, vuSize);
return nv21;
}
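The toYuvImage helper referenced by toJpegImage is not shown above; a minimal sketch, assuming it simply wraps the NV21 bytes produced by YUV_420_888toNV21 (the linked blog post has the original version), would be:

// Sketch of the toYuvImage helper, assuming it wraps the NV21 conversion above.
private static YuvImage toYuvImage(Image image) {
    if (image.getFormat() != ImageFormat.YUV_420_888) {
        throw new IllegalArgumentException("Invalid image format");
    }
    byte[] nv21 = YUV_420_888toNV21(image);
    // YuvImage accepts NV21 data plus the frame dimensions; no strides are needed here.
    return new YuvImage(nv21, ImageFormat.NV21, image.getWidth(), image.getHeight(), null);
}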
I have a text recognition app and I only want the app to read text within a certain box on my screen. My approach is to crop the image from my camera before I send it to text recognition, but I am having trouble cropping the image.
If I try to convert the byte array to a bitmap and then crop it using
Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
Bitmap resizedbitmap=Bitmap.createBitmap(bmp, 100, 100, 200, 200);
and then use createBitmap to make a new, cropped bitmap from a Rect, it fails because the decoded bitmap is null.
I am receiving null for the bitmap. I have tried changing the decode options, and I have added read/write permissions to the manifest, but it still comes up null.
I am now trying to convert the byte array to a YuvImage instead, but I think data is being lost in the compression, because when I scan the image the text recognition gives me distorted text blocks and is not very accurate.
int correctRotation = RNCameraViewHelper.getCorrectCameraRotation(rotation, getFacing(),
getCameraOrientation());
if (data.length < (1.5 * width * height)) {
return;
}
if (willCallTextTask) {
textRecognizerTaskLock = true;
TextRecognizerAsyncTaskDelegate delegate = (TextRecognizerAsyncTaskDelegate) cameraView;
final byte[] compressedImage;
final YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
final ByteArrayOutputStream imageStream = new ByteArrayOutputStream();
Log.d("Tag", "Image Stream" + getWidth());
Log.d("Tag", "WIDTH" + width);
Log.d("Tag", "HEIGHT" + height);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, imageStream);
compressedImage = imageStream.toByteArray();
new TextRecognizerAsyncTask(delegate, mThemedReactContext, compressedImage, width, height, correctRotation,
getResources().getDisplayMetrics().density, getFacing(), getWidth(), getHeight(), mPaddingX, mPaddingY)
.execute();
}
}
});
}
Any suggestions would be great - or if there's a better way to do this, that would be amazing!
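For illustration only, here is a minimal sketch of cropping after the NV21-to-JPEG decode, assuming the preview data really is NV21 and the crop rectangle stays inside the frame (the 100/200 values are the same placeholders used above):

// Sketch: decode the NV21 preview via YuvImage/JPEG first, then crop the decoded Bitmap.
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 100, jpegStream);
byte[] jpeg = jpegStream.toByteArray();
Bitmap full = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
// Crop a 200x200 region starting at (100, 100); adjust to match the on-screen box.
Bitmap cropped = Bitmap.createBitmap(full, 100, 100, 200, 200);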
I have to convert com.google.mlkit.vision.common.InputImage to an equivalent Bitmap in Android using Java. Right now I am using the following code.
// iImage is an object of InputImage
Bitmap bmap = Bitmap.createBitmap(iImage.getWidth(), iImage.getHeight(), Bitmap.Config.RGB_565);
bmap.copyPixelsFromBuffer(iImage.getByteBuffer());
The above code is NOT converting the InputImage to a Bitmap. Can anyone suggest an efficient way of converting InputImage to Bitmap?
You can create a bitmap from the ByteBuffer returned by getByteBuffer(). The official ML Kit Vision quickstart sample shows how to achieve this. Below is a piece of code that should solve your problem.
The getBitmap() method, which converts an NV21-format byte buffer to a bitmap:
@Nullable
public static Bitmap getBitmap(ByteBuffer data, FrameMetadata metadata) {
data.rewind();
byte[] imageInBuffer = new byte[data.limit()];
data.get(imageInBuffer, 0, imageInBuffer.length);
try {
YuvImage image =
new YuvImage(
imageInBuffer, ImageFormat.NV21, metadata.getWidth(), metadata.getHeight(), null);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
image.compressToJpeg(new Rect(0, 0, metadata.getWidth(), metadata.getHeight()), 80, stream);
Bitmap bmp = BitmapFactory.decodeByteArray(stream.toByteArray(), 0, stream.size());
stream.close();
return rotateBitmap(bmp, metadata.getRotation(), false, false);
} catch (Exception e) {
Log.e("VisionProcessorBase", "Error: " + e.getMessage());
}
return null;
}
The rotateBitmap() method:
private static Bitmap rotateBitmap(
Bitmap bitmap, int rotationDegrees, boolean flipX, boolean flipY) {
Matrix matrix = new Matrix();
// Rotate the image back to straight.
matrix.postRotate(rotationDegrees);
// Mirror the image along the X or Y axis.
matrix.postScale(flipX ? -1.0f : 1.0f, flipY ? -1.0f : 1.0f);
Bitmap rotatedBitmap =
Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
// Recycle the old bitmap if it has changed.
if (rotatedBitmap != bitmap) {
bitmap.recycle();
}
return rotatedBitmap;
}
The full code can be seen by clicking on the link: https://github.com/googlesamples/mlkit/blob/master/android/vision-quickstart/app/src/main/java/com/google/mlkit/vision/demo/BitmapUtils.java
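Coming back to the question, here is a minimal sketch that applies the same conversion directly to the InputImage accessors, assuming iImage was created from NV21 byte data (for example via InputImage.fromByteBuffer() with IMAGE_FORMAT_NV21); an InputImage built from a Bitmap or a media.Image would need a different path:

// Sketch: convert an NV21-backed InputImage to an upright Bitmap.
// Assumes iImage was created from NV21 bytes; otherwise getByteBuffer() may not be usable this way.
ByteBuffer buffer = iImage.getByteBuffer();
buffer.rewind();
byte[] nv21 = new byte[buffer.limit()];
buffer.get(nv21);
YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, iImage.getWidth(), iImage.getHeight(), null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, iImage.getWidth(), iImage.getHeight()), 80, out);
Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
// Rotate to upright using the rotateBitmap() helper above.
Bitmap upright = rotateBitmap(bmp, iImage.getRotationDegrees(), false, false);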
My goal is to add an overlay on the camera preview that will find book edges. For that, I override onPreviewFrame, where I do the following:
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;
Mat mat = new Mat((int) (height*1.5), width, CvType.CV_8UC1);
mat.put(0,0,data);
byte[] bytes = new byte[(int) (height*width*1.5)];
mat.get(0,0,bytes);
if (!test) { //to only do once
File pictureFile = getOutputMediaFile();
try {
FileOutputStream fos = new FileOutputStream(pictureFile);
fos.write(bytes);
fos.close();
Uri picUri = Uri.fromFile(pictureFile);
updateGallery(picUri);
test = true;
} catch (IOException e) {
e.printStackTrace();
}
}
}
For now I simply want to take one of the preview frames and save it after the conversion to Mat.
After spending countless hours getting the above to look right, the saved picture cannot be viewed on my testing phone (LG Leon). I can't seem to find the issue. Am I mixing up the height and width because I'm taking pictures in portrait mode? I tried switching them and it still doesn't work. Where is the problem?
The fastest method I managed to find is described HERE, in my recently asked question. You can find the method to extract the image in the answer I wrote to that question. The thing is that the image you get through onPreviewFrame() is NV21. After receiving it, you may need to convert it to RGB (depending on what you want to achieve; this is also done in the answer I linked).
It seems quite inefficient, but it works for me (for now):
//get the camera parameters
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;
//convert the byte[] to Bitmap through YuvImage;
//make sure the previewFormat is NV21 (I set it so somewhere before)
YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 70, out);
Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());
//convert Bitmap to Mat; note the bitmap config ARGB_8888 conversion that
//allows you to use other image processing methods and still save at the end
Mat orig = new Mat();
bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp, orig);
//here you do whatever you want with the Mat
//Mat to Bitmap to OutputStream to byte[] to File
Utils.matToBitmap(orig, bmp);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 70, stream);
byte[] bytes = stream.toByteArray();
File pictureFile = getOutputMediaFile();
try {
FileOutputStream fos = new FileOutputStream(pictureFile);
fos.write(bytes);
fos.close();
} catch (IOException e) {
e.printStackTrace();
}
I am trying to cast my Android device's screen to a web browser using the MediaProjection API and WebRTC.
The MediaProjection API renders its output to a Surface and returns a VirtualDisplay. I have got this far. I looked at the WebRTC library for Android, but it is built to receive input only from the device camera, so I am trying to read and modify the WebRTC code to stream whatever is drawn to the Surface.
My question is: how can I regularly receive byte[] data from a Surface, the way Camera.PreviewCallback provides it? What other options do I have?
Here is how I solved my problem. I used the ImageReader class, like so:
imageReader = ImageReader.newInstance(displayWidth, displayHeight, PixelFormat.RGBA_8888, 2);
mediaProjection.createVirtualDisplay("screencapture",
displayWidth, displayHeight, density,
flags, imageReader.getSurface(), null, handler);
imageReader.setOnImageAvailableListener(new ImageAvailableListener(), null);
private class ImageAvailableListener implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = null;
        Bitmap bitmap = null;
        ByteArrayOutputStream stream = null;
        try {
            image = imageReader.acquireLatestImage();
            if (image != null) {
                Image.Plane[] planes = image.getPlanes();
                ByteBuffer buffer = planes[0].getBuffer();
                int pixelStride = planes[0].getPixelStride();
                int rowStride = planes[0].getRowStride();
                int rowPadding = rowStride - pixelStride * displayWidth;
                // create a bitmap wide enough to account for the row padding
                bitmap = Bitmap.createBitmap(displayWidth + rowPadding / pixelStride,
                        displayHeight, Bitmap.Config.ARGB_8888);
                bitmap.copyPixelsFromBuffer(buffer);
                stream = new ByteArrayOutputStream();
                bitmap.compress(Bitmap.CompressFormat.JPEG, 50, stream);
                StringBuilder sb = new StringBuilder();
                // the data is JPEG-compressed, so the data URI should declare image/jpeg
                sb.append("data:image/jpeg;base64,");
                sb.append(StringUtils.newStringUtf8(Base64.encode(stream.toByteArray(), Base64.DEFAULT)));
                WebrtcClient.sendProjection(sb.toString());
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // release resources so the ImageReader can deliver the next frame
            if (image != null) {
                image.close();
            }
            if (bitmap != null) {
                bitmap.recycle();
            }
        }
    }
}
I am converting the byte[] to a Base64 string and sending it through the WebRTC data channel.