How to convert Bitmap to webrtc i420 Frame? - android

What I'm curious about is simple.
We built video-calling functionality for mobile using WebRTC.
In addition, I would like to use the OpenCV library to add face detection during video calls.
To implement this, I need to convert a Bitmap to a WebRTC I420Frame.
Is there a way to do this?

Using libyuv (https://chromium.googlesource.com/libyuv/libyuv/) is likely the best way. Once you determine what format the bitmap is in (RGB, BGR, ARGB, etc.), you can pass it to libyuv and it will convert it to I420 for you efficiently.
You could also run the conversion yourself if you really wanted to, but you wouldn't get the ARM-specific optimizations (NEON and the like) that libyuv applies for you.
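For illustration, here is a minimal JNI sketch of that approach, assuming an ARGB_8888 Bitmap with even dimensions and libyuv on the include path (the Java class name is hypothetical, and error handling is minimal). Note that ARGB_8888 stores bytes as R,G,B,A in memory, which in libyuv's little-endian word-order naming is ABGR, hence ABGRToI420:
#include <jni.h>
#include <android/bitmap.h>
#include <vector>
#include "libyuv/convert.h"  // libyuv::ABGRToI420

extern "C" JNIEXPORT jbyteArray JNICALL
Java_com_example_YuvConverter_bitmapToI420(JNIEnv* env, jclass, jobject bitmap) {
    AndroidBitmapInfo info;
    if (AndroidBitmap_getInfo(env, bitmap, &info) != ANDROID_BITMAP_RESULT_SUCCESS ||
        info.format != ANDROID_BITMAP_FORMAT_RGBA_8888) {
        return nullptr;  // this sketch only handles ARGB_8888 bitmaps
    }
    void* pixels = nullptr;
    if (AndroidBitmap_lockPixels(env, bitmap, &pixels) != ANDROID_BITMAP_RESULT_SUCCESS) {
        return nullptr;
    }
    const int w = info.width;
    const int h = info.height;
    // I420 layout: full-resolution Y plane, then quarter-resolution U and V planes.
    std::vector<uint8_t> i420(w * h * 3 / 2);
    uint8_t* y = i420.data();
    uint8_t* u = y + w * h;
    uint8_t* v = u + (w / 2) * (h / 2);
    libyuv::ABGRToI420(static_cast<const uint8_t*>(pixels), info.stride,
                       y, w, u, w / 2, v, w / 2, w, h);
    AndroidBitmap_unlockPixels(env, bitmap);
    jbyteArray out = env->NewByteArray(static_cast<jsize>(i420.size()));
    env->SetByteArrayRegion(out, 0, static_cast<jsize>(i420.size()),
                            reinterpret_cast<const jbyte*>(i420.data()));
    return out;
}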

Using libyuv: https://chromium.googlesource.com/libyuv/libyuv/
The following code works; note that it converts in the other direction, from an I420 frame back to RGB24:
// frame is a webrtc::VideoFrame; RGB24 needs 3 bytes per pixel
size_t file_size = frame.width() * frame.height() * 3;
std::unique_ptr<uint8_t[]> res_rgb_buffer2(new uint8_t[file_size]);
webrtc::ConvertFromI420(frame, webrtc::VideoType::kRGB24, 0,
                        res_rgb_buffer2.get());
Now res_rgb_buffer2 contains a pointer to the raw RGB data. You can save it to a file, but you need to write a BMP file header first; the latter part can be found here.

Related

getting "FirebaseMLException: Input ByteBuffer should be direct ByteBuffer" while converting byte order to nativeOrder() in android studio

I am trying to deploy a custom TensorFlow Lite model using Firebase ML Kit in Android Studio. My model takes a ByteBuffer in LITTLE_ENDIAN order, whereas my camera provides a byte array in BIG_ENDIAN byte order, which is wrapped into a ByteBuffer as shown below.
ByteBuffer buffer = ByteBuffer.wrap(byteArray);
So I tried to change the order from BIG_ENDIAN to nativeOrder(), which is LITTLE_ENDIAN, to create the FirebaseModelInputs object, and passed it to the FirebaseModelInterpreter as shown below.
FirebaseModelInputs inputs = new FirebaseModelInputs.Builder()
        .add(buffer.order(ByteOrder.nativeOrder())).build();
return interpreter.run(inputs, inputOutputOptions);
But I am getting the following error when doing so. What should I do?
com.google.firebase.ml.common.FirebaseMLException: Input ByteBuffer should be direct ByteBuffer
at com.google.firebase.ml.custom.FirebaseModelInputs$Builder.add(com.google.firebase:firebase-ml-model-interpreter@@22.0.3:14)
at com.example.facedect.facedetection.FaceDetectionProcessor.detectInImage(FaceDetectionProcessor.java:72)
at com.example.facedect.VisionProcessorBase.detectInVisionImage(VisionProcessorBase.java:63)
at com.example.facedect.VisionProcessorBase.process(VisionProcessorBase.java:34)
at com.example.facedect.CameraSource$FrameProcessingRunnable.run(CameraSource.java:223)
at java.lang.Thread.run(Thread.java:764)
I checked their website and found that an indirect buffer is not allowed.
Is there any alternative?
IIRC, ByteBuffer.wrap() creates a HeapByteBuffer, which is not a direct ByteBuffer. That's why Firebase ML is complaining.
Here are a few solutions I can think of:
Convert your byteArray to LITTLE_ENDIAN and use another direct ByteBuffer to hold it, as shown below. This might have a performance cost because of the extra copy.
Take a look at the new feature in Android Studio 4.1 here. However, to get the best experience your model needs to have metadata.
For solution 1, try something like this:
ByteBuffer buffer = ByteBuffer.wrap(byteArray);
// allocateDirect() gives a direct buffer, which is what the interpreter requires
ByteBuffer directBuffer = ByteBuffer.allocateDirect(buffer.remaining());
directBuffer.put(buffer.array(), 0, buffer.remaining());
directBuffer.rewind(); // reset the position before handing the buffer to the interpreter
Then set the order on directBuffer like you did before. Hope it works!

ArCore + Android NDK + OpenCV. Image from ArFrame to valid cv::Mat

I'm using ArCore with the Android NDK, and I want to recognize a square in each ArFrame.
I successfully get an ArImage through ArFrame_acquireCameraImage(...) and convert it to an AImage through ArImage_getNdkImage(...).
After that, I need to create a cv::Mat to use for processing.
Is it possible to create a cv::Mat from an AImage?
When I get plane data from the AImage through AImage_getPlaneData(...), can I create a valid cv::Mat from that plane data?
Thanks!
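One possible approach, as a minimal sketch: assuming the AImage is in the usual YUV_420_888 camera format, the grayscale Y plane alone is often enough for shape detection, and it can be wrapped in a cv::Mat directly (the function name is hypothetical; error handling is elided):
#include <media/NdkImage.h>
#include <opencv2/core.hpp>

cv::Mat yPlaneToMat(AImage* image) {
    int32_t width = 0, height = 0, row_stride = 0;
    AImage_getWidth(image, &width);
    AImage_getHeight(image, &height);
    AImage_getPlaneRowStride(image, /*planeIdx=*/0, &row_stride);

    uint8_t* y_data = nullptr;
    int y_len = 0;
    AImage_getPlaneData(image, /*planeIdx=*/0, &y_data, &y_len);

    // Wrap the Y plane without copying; the row stride accounts for any padding.
    cv::Mat y_plane(height, width, CV_8UC1, y_data, row_stride);
    // Clone so the Mat stays valid after the AImage is released with AImage_delete().
    return y_plane.clone();
}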

How to convert Bitmap into a Frame?

Currently I'm working on an Android app using Mobile Vision. I am using the TextRecognizer class, and one of its methods is .detect(Frame frame). Right now I have an image I want to feed into it; however, the image is of type Bitmap. I tried to convert it to a Frame by casting, but that didn't work. Any suggestions would be much appreciated.
Use the setBitmap method in the Frame.Builder class:
Frame outputFrame = new Frame.Builder().setBitmap(myBitmap).build();

How to detect if JPG is in RGB (or CMYK) format?

I need a (really fast) way to check whether a JPG file is in RGB format (or any other format that Android can show).
At the moment, I only find out whether a JPG file can be shown when I try to convert it to a Bitmap using BitmapFactory.
I suspect this is not the fastest way, so I tried to get it via ExifInterface. Unfortunately, Android's ExifInterface does not have any tag that indicates whether the JPG can be shown on Android (a color space tag or something similar).
So I think I have two options:
1) Find a fast way to get a bitmap from a JPG: any tips on how to do it?
2) Or read the Exif tags myself, without adding any other library to the project: I have no idea how to do that!
OK, so I did some looking around, and I may have a solution for you, but it may require a little work. The link below points to a pure Java library that I think you can use in your project, or at least modify and include a few classes from. I have not worked with it yet, but it looks like it will do the job.
http://commons.apache.org/proper/commons-imaging
final ImageInfo imageInfo = Imaging.getImageInfo(file);
if (imageInfo.getColorType() == ImageInfo.COLOR_TYPE_CMYK) {
    // CMYK: BitmapFactory cannot decode this JPG
} else {
    // RGB (or another color type Android can decode)
}

Get single buffer from AVFrame data and display it on Android Bitmap/Surface/SurfaceView

I have a decoded AVFrame from the avcodec_decode_video2 function (FFmpeg), which is then passed to the SWS library and converted from YUV420P to RGB565. How do I combine all the color and linesize information, i.e. frame->data[0..3] and frame->linesize[0..3], into one buffer, and how do I then display it on an Android device, say using an Android Bitmap or SurfaceView/View? I don't want to use SurfaceFlinger because it is not an official part of the NDK and is subject to change with every minor release.
For packed RGB output you only have data[0]; linesize[0] is the number of bytes per row, i.e. width times bytes per pixel (2 for RGB565), possibly rounded up for alignment.
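As a minimal NDK sketch of the display side, assuming the frame has already been converted to RGB565 by sws_scale and that you obtained an ANativeWindow from a Java Surface via ANativeWindow_fromSurface (error handling is elided):
#include <android/native_window.h>
#include <cstdint>
#include <cstring>
extern "C" {
#include <libavutil/frame.h>
}

void renderRgb565Frame(ANativeWindow* window, const AVFrame* frame) {
    ANativeWindow_setBuffersGeometry(window, frame->width, frame->height,
                                     WINDOW_FORMAT_RGB_565);
    ANativeWindow_Buffer buffer;
    if (ANativeWindow_lock(window, &buffer, nullptr) != 0) {
        return;
    }
    // Copy row by row: buffer.stride is in pixels and may include padding,
    // and frame->linesize[0] is in bytes and may also include padding.
    uint8_t* dst = static_cast<uint8_t*>(buffer.bits);
    const uint8_t* src = frame->data[0];
    const size_t row_bytes = frame->width * 2;  // 2 bytes per RGB565 pixel
    for (int y = 0; y < frame->height; ++y) {
        std::memcpy(dst + y * buffer.stride * 2, src + y * frame->linesize[0],
                    row_bytes);
    }
    ANativeWindow_unlockAndPost(window);
}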
