Getting the raw RGB data of the Android camera

I'm working on a camera app on Android. I'm currently taking my capture with the JPEG callback. I'd like to know if there's a way to get the raw data of the capture. I know there is a raw callback for the capture, but it always returns null.
So, from the JPEG callback, can I get access to the raw data (a succession of RGB pixels)?
EDIT:
So, from the JPEG callback, can I get access to the raw data (a succession of YUV pixels)?

I was able to successfully get a "raw" (YUV422) picture with Android 5.1 running on an RK3288.
Three steps to get the YUV image:
1. Initialize the buffer.
2. Call addRawImageCallbackBuffer by reflection.
3. Get the YUV picture in the dedicated callback.
Code sample
// size is the camera picture size (Camera.Size) chosen elsewhere
val bufferSize = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8
val bytes = ByteArray(bufferSize)
val camera = android.hardware.Camera.open()
try {
    // addRawImageCallbackBuffer is hidden, so it has to be looked up and invoked by reflection
    val addRawImageCallbackBuffer = camera.javaClass
        .getDeclaredMethod("addRawImageCallbackBuffer", bytes.javaClass)
    addRawImageCallbackBuffer.invoke(camera, bytes)
} catch (e: Exception) {
    Log.e("RNG", "Error", e)
}
...
// The second argument is the raw callback; data holds the YUV (NV21) bytes
camera.takePicture(null, { data, _ ->
    val file = File("/sdcard/output.jpg")
    file.createNewFile()
    val yuv = YuvImage(data, ImageFormat.NV21, size.width, size.height, null)
    yuv.compressToJpeg(Rect(0, 0, size.width, size.height), 80, file.outputStream())
}, null)
Explanation
The Camera.takePicture() method takes a callback for the raw image as its second parameter:
camera.takePicture(shutterCallback, rawCallback, jpegCallback);
This callback will return a null byte array unless a buffer for the raw image has been explicitly added first.
So you are supposed to call camera.addRawImageCallbackBuffer for this purpose.
Nevertheless, the method is not available in the public API (it is public but not exported, so you cannot call it directly).
Fortunately, the code sample above demonstrates how to call this method by reflection.
Once the buffer is registered, the raw callback delivers a consistent YUV picture as its data parameter.
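Since the original question asks for RGB pixels, the NV21 buffer delivered to the raw callback can be converted on the Java side. Here is a minimal sketch (not part of the original answer) that goes through a JPEG intermediate and assumes the buffer and its dimensions are already at hand:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

// Hypothetical helper: convert an NV21 buffer into packed ARGB pixel values.
// Simple, but the JPEG round trip is lossy and not the fastest route.
static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), 100, jpeg);
    Bitmap bmp = BitmapFactory.decodeByteArray(jpeg.toByteArray(), 0, jpeg.size());
    int[] pixels = new int[width * height];
    bmp.getPixels(pixels, 0, width, 0, 0, width, height); // each int is 0xAARRGGBB
    return pixels;
}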

Related

Send image stream byte data from Flutter to native Android

I was trying to process the live camera feed from Flutter, so I needed to send the byte data to native Android for processing.
I was concatenating the 3 planes as suggested by FlutterFire. I needed the data to create an InputImage for Google ML Kit.
Concatenate Plane method
static Uint8List _concatenatePlanes(List<Plane> planes) {
  final WriteBuffer allBytes = WriteBuffer();
  for (Plane plane in planes) {
    allBytes.putUint8List(plane.bytes);
  }
  return allBytes.done().buffer.asUint8List();
}
InputImage created from the image data in native Android
public void fromByteBuffer(Map<String, Object> imageData, final MethodChannel.Result result) {
    byte[] bytes = (byte[]) imageData.get("bytes");
    int rotationCompensation = ((int) imageData.get("rotation")) % 360;
    // Create an input image
    InputImage inputImage = InputImage.fromByteArray(bytes,
            (int) imageData.get("width"),
            (int) imageData.get("height"),
            rotationCompensation,
            InputImage.IMAGE_FORMAT_NV21);
}
I am not getting any results after processing a frame from the camera stream, but if the same frame is captured, stored, and then processed by creating the InputImage from a file path, the image is processed properly.
Is there anything wrong with the way I am creating the input image?
Any help is appreciated.
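Not an answer from the original thread, but one quick sanity check on the native side is to verify that the received buffer has the size NV21 requires before building the InputImage; if it does not, the concatenated planes are not in NV21 layout (for example because of row padding), which could explain why the file-based path works while the stream path returns nothing. A hypothetical diagnostic:

import android.util.Log;

// Hypothetical check: an NV21 frame must be exactly width * height * 3 / 2 bytes.
static boolean looksLikeNv21(byte[] bytes, int width, int height) {
    int expected = width * height * 3 / 2;
    if (bytes.length != expected) {
        Log.w("InputImageCheck", "Got " + bytes.length + " bytes, but NV21 at "
                + width + "x" + height + " needs " + expected
                + " (planes may be padded or not interleaved)");
        return false;
    }
    return true;
}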

Decoding H.264 video frames using decodeByteArray always returns null

I want to convert the H.264 frame byte data to a Bitmap so that I can feed it to other sources. I tried this:
@Override
public ARCONTROLLER_ERROR_ENUM onFrameReceived(ARDeviceController deviceController, ARFrame frame) {
    byte[] data = frame.getByteData();
    Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
    if (bitmap != null) {
        // do something
    }
    return ARCONTROLLER_ERROR_ENUM.ARCONTROLLER_OK;
}
But it doesn't work: decodeByteArray can't decode the byte data returned by the frame and always returns null.
Any suggestions?
BitmapFactory.decodeByteArray() simply isn't capable of this; H.264 is not a format it supports.
Because of complexities like P-frames, B-frames and I-frames, decoding an arbitrary H.264 frame takes a class with a little more state:
MediaCodec
Here is a pretty decent code sample. There are many more out there.
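The sample is not reproduced here, but a rough sketch of the MediaCodec approach (assumed names and simplified buffer handling; each byte[] is taken to be one complete H.264 access unit, rendered to a Surface rather than returned as a Bitmap):

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;
import java.nio.ByteBuffer;

// Rough sketch, not the linked sample: feed H.264 frames to MediaCodec and render to a Surface.
class H264SurfaceDecoder {
    private final MediaCodec decoder;

    H264SurfaceDecoder(int width, int height, Surface surface) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        decoder = MediaCodec.createDecoderByType("video/avc");
        decoder.configure(format, surface, null, 0);
        decoder.start();
    }

    void decodeFrame(byte[] data, long presentationTimeUs) {
        int inIndex = decoder.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer input = decoder.getInputBuffer(inIndex);
            input.clear();
            input.put(data);
            decoder.queueInputBuffer(inIndex, 0, data.length, presentationTimeUs, 0);
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
        if (outIndex >= 0) {
            decoder.releaseOutputBuffer(outIndex, true); // true = render to the surface
        }
    }
}

To get Bitmaps instead of on-screen rendering, the Surface can come from an ImageReader, or the decoder can be configured without a Surface and its YUV output buffers converted manually.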

Getting a MediaCodec decoder to provide video frames in RGBA

I am trying to use a MediaCodec decoder (via the NDK API) to fetch video frames (for further processing) from a .mp4 file. Here is the sample code that sets up the decoder to render to a surface (owned by an ImageReader):
// Omitting most error handling for clarity
AMediaExtractor* ex = AMediaExtractor_new();
media_status_t err = AMediaExtractor_setDataSourceFd(ex, fd /* opened previously */, outStart, outLen);
close(fd);
int numtracks = AMediaExtractor_getTrackCount(ex);
AMediaCodec* decoder = NULL;
for (int i = 0; i < numtracks; i++) {
    AMediaFormat* format = AMediaExtractor_getTrackFormat(ex, i);
    const char* s = AMediaFormat_toString(format);
    LOGV("track %d format: %s", i, s);
    const char* mime;
    if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) {
        LOGV("no mime type");
        return JNI_FALSE;
    } else if (!strncmp(mime, "video/", 6)) {
        AMediaExtractor_selectTrack(ex, i);
        decoder = AMediaCodec_createDecoderByType(mime);
        AImageReader* imageReader;
        ANativeWindow* surface;
        // This setting doesn't work
        media_status_t status = AImageReader_new(480, 360, AIMAGE_FORMAT_RGBA_8888, 1, &imageReader);
        // This setting works
        // media_status_t status = AImageReader_new(480, 360, AIMAGE_FORMAT_YUV_420_888, 1, &imageReader);
        status = AImageReader_getWindow(imageReader, &surface);
        // Configure the decoder to render to the ImageReader's surface
        AMediaCodec_configure(decoder, format, surface, NULL, 0);
        AMediaCodec_start(decoder);
    }
    AMediaFormat_delete(format);
}
Elsewhere, here is how I set up the callback for the ImageReader:
AImageReader_ImageListener* imageListener = new AImageReader_ImageListener();
imageListener->onImageAvailable = &imageCallback;
AImageReader_setImageListener(imageReader, imageListener);
And finally, here is what the callback looks like:
void imageCallback(void* context, AImageReader* reader) {
    int32_t format;
    media_status_t status = AImageReader_getFormat(reader, &format);
    AImage* image;
    status = AImageReader_acquireLatestImage(reader, &image);
    status = AImage_getFormat(image, &format);
    // further processing to follow
    ...
}
The issue I have been facing is that if I configure the ImageReader with the RGBA format, the image in the callback always comes out NULL:
// Always NULL when the ImageReader is configured with RGBA
// OK when the ImageReader is configured with YUV_420_888
AImage* image;
status = AImageReader_acquireLatestImage(reader, &image);
Am I using the NDK API correctly here? One thing I would like to mention is that RGBA doesn't appear in the list of decoder capabilities fetched via the following API (not provided via the NDK; I tried it in the Java layer):
getCodecInfo().getCapabilitiesForType(…).colorFormats
Video decoders normally don't support outputting in RGB format (as you noticed in the colorFormats codec info), so that's why this won't work.
The video decoder output can be transparently converted to RGB if you use a surface texture as the output for the decoder - the decoded video data is then available within an OpenGL context. If you then do a plain 1:1 copy/rendering of the surface texture and have the OpenGL context set up to render into an ImageReader, I would expect you to get the RGB data you need.
It is a bit roundabout (and I'm not sure how easily accessible all the APIs are in a native code context), but it should be doable as far as I know.
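To see what a decoder actually advertises, the color formats can be dumped from the Java layer with the public MediaCodecList/MediaCodecInfo API; a small sketch (not part of the original answer):

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

// Sketch: log the output color formats of every AVC decoder on the device.
// An RGBA format is normally absent; flexible YUV 4:2:0 usually is present.
static void dumpAvcDecoderColorFormats() {
    for (MediaCodecInfo info : new MediaCodecList(MediaCodecList.ALL_CODECS).getCodecInfos()) {
        if (info.isEncoder()) continue;
        for (String type : info.getSupportedTypes()) {
            if (!type.equalsIgnoreCase("video/avc")) continue;
            MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
            for (int colorFormat : caps.colorFormats) {
                Log.d("CodecCaps", info.getName() + " supports color format " + colorFormat);
            }
        }
    }
}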

Android camera image transport to ROS in real time runs into an OOM problem

I'm trying to transport the image info from the Android camera to ROS in real time. However, I get an OOM problem. I'm new to Android-ROS and have nearly no experience dealing with such problems.
Here is some information about my demo (if you need more, please comment):
1. public class MainActivity extends RosActivity implements NodeMain, SurfaceHolder.Callback, Camera.PreviewCallback
2. Dependencies: OpenCV-for-Android (3.2.0).
3. ROS message type: android_cv_bridge.
I'm trying to publish the image messages in the onPreviewFrame() function. Code like this:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
    Bitmap bmp = null;
    if (yuvImage != null) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, baos);
        bmp = BitmapFactory.decodeByteArray(baos.toByteArray(), 0, baos.size());
        try {
            baos.flush();
            baos.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        image = imagePublisher.newMessage();
        Time curTime = connectedNode.getCurrentTime();
        image.setEncoding("rgba8");
        image.getHeader().setStamp(curTime);
        image.getHeader().setFrameId("camera");
        curTime = null;
        if (isOpenCVInit) {
            Mat mat_image = new Mat(bmp.getHeight(), bmp.getWidth(), CvType.CV_8UC4, new Scalar(0));
            Bitmap copyBmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
            // bitmap to mat
            Utils.bitmapToMat(copyBmp, mat_image);
            // mat to cvImage
            CvImage cvImage = new CvImage(image.getHeader(), "rgba8", mat_image);
            try {
                imagePublisher.publish(cvImage.toImageMsg(image));
            } catch (IOException e) {
                e.printStackTrace();
            }
            mat_image.release();
            mat_image = null;
            if (!bmp.isRecycled()) {
                bmp.recycle();
                bmp = null;
            }
            if (!copyBmp.isRecycled()) {
                copyBmp.recycle();
                copyBmp = null;
            }
            cvImage = null;
            image = null;
        }
    }
    yuvImage = null;
    System.gc();
}
The imagePublisher is initialized here:
@Override
public void onStart(ConnectedNode connectedNode) {
    this.connectedNode = connectedNode;
    imagePublisher = connectedNode.newPublisher(topic_name, sensor_msgs.Image._TYPE);
}
Well, I have tried my best to avoid the OOM problem. I also tried not using OpenCV and just dealing with the bitmap like this:
ChannelBufferOutputStream cbos = new ChannelBufferOutputStream(MessageBuffers.dynamicBuffer());
bmp.compress(Bitmap.CompressFormat.JPEG, 80, baos);
cbos.buffer().writeBytes(baos.toByteArray());
image.setData(cbos.buffer().copy());
cbos.buffer().clear();
imagePublisher.publish(image);
Unfortunately, it got worse. I now doubt the way I'm trying to achieve this. Is there a better way to do it?
I think your problem might be that your network can't transfer this amount of image data, and the OOM is caused by data stuck in buffers that has not yet been transferred.
I had similar issues when I wanted to transfer images from my Android device. If your problem is the same, you could solve it in several ways:
- Transfer data via USB tethering; it's generally much faster than Wi-Fi or cellular and can carry even a raw image stream without compression at 30 fps 640x480. For JPEG I think you will be able to stream Full HD at 30 fps.
- Save data on the phone to a ROS bag (http://wiki.ros.org/rosbag) and work with the data later. Here you lose real time, but sometimes that's not needed. For this I actually wrote an application for Android, https://github.com/lamerman/ros_android_bag, and you can also download it directly from Google Play: https://play.google.com/store/apps/details?id=org.lamerman.rosandroidbag&hl=en
- Try to decrease the bandwidth even further (decrease image size, fps) or improve the network quality.
About your second attempt, transferring JPEG instead of raw data: have a look at this source code, where it is implemented correctly: https://github.com/rosjava/android_core/blob/kinetic/android_10/src/org/ros/android/view/camera/CompressedImagePublisher.java#L80
The problem of transferring via the network certainly applies to raw images, but it may also apply to compressed ones if the image size is large and the frame rate is high.
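For reference, a rough sketch in the spirit of the linked CompressedImagePublisher (not the exact code from that file): compress the NV21 preview buffer straight to JPEG and publish it as a sensor_msgs/CompressedImage, which avoids the Bitmap/Mat round trip in onPreviewFrame entirely. compressedImagePublisher is an assumed Publisher<sensor_msgs.CompressedImage> created in onStart():

// Assumed: compressedImagePublisher = connectedNode.newPublisher(topic_name, sensor_msgs.CompressedImage._TYPE);
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();

    sensor_msgs.CompressedImage msg = compressedImagePublisher.newMessage();
    msg.setFormat("jpeg");
    msg.getHeader().setStamp(connectedNode.getCurrentTime());
    msg.getHeader().setFrameId("camera");

    // Compress NV21 directly into the message buffer; no Bitmap, no OpenCV Mat.
    ChannelBufferOutputStream stream = new ChannelBufferOutputStream(MessageBuffers.dynamicBuffer());
    YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
    yuvImage.compressToJpeg(new Rect(0, 0, size.width, size.height), 60, stream);

    msg.setData(stream.buffer().copy());
    stream.buffer().clear();
    compressedImagePublisher.publish(msg);
}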

Android: Render YUV Image / Buffer over Screen

Our application shows live video data received from the other end; we need to display live feeds at an interval of 40 ms.
The data arrives in YUV format, and it seems Android doesn't have any built-in support to display YUV data.
Below is the code to manage and show the data on the screen:
// convert the data to RGB
feedProcess.decode(yuvBuffer, yuvBuffer.length, imgInfo, imgRaw, ref, webMIndex);
currentTime = new Date().getTime();
System.out.println("took " + (currentTime - lastTime) + " ms to decode the buffer");
imgQ.add(imgRaw);
In another thread I receive the data and convert it into a Bitmap:
public void run() {
    while (myThreadRun) {
        if (!imgQ.isEmpty()) {
            try {
                byte[] arry = imgQ.poll();
                Bitmap b = createImgae(arry, imgInfo[0], imgInfo[1], 1, width, height);
                myThreadSurfaceView.setBitmap(b);
                try {
                    // draw the image
                    c = myThreadSurfaceHolder.lockCanvas(null);
                    synchronized (myThreadSurfaceHolder) {
                        myThreadSurfaceView.onDraw(c);
                    }
                } finally {
                    if (c != null) {
                        myThreadSurfaceHolder.unlockCanvasAndPost(c);
                    }
                }
            } catch (NoSuchElementException ex) {
            }
        }
    }
}
This entire logic takes approximately 100 ms to refresh the screen with a new image. Are there any other approaches I can try?
The decode function (uncompressing takes 10-15 ms, converting YUV to RGB another 20-30 ms) is done in JNI code.
My understanding is that if the YUV data could be shown directly, we could save some of that time.
Please tell me your views.
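The thread has no accepted answer, but one common way to cut the 20-30 ms YUV-to-RGB step on the Java side is RenderScript's YUV-to-RGB intrinsic. A minimal sketch, assuming the incoming buffer is NV21 and RenderScript is available on the target devices:

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;
import android.renderscript.Type;

// Sketch: convert an NV21 buffer to an ARGB_8888 Bitmap with the RenderScript intrinsic,
// which usually runs considerably faster than a CPU conversion loop.
static Bitmap nv21ToBitmap(Context context, byte[] nv21, int width, int height) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    Type yuvType = new Type.Builder(rs, Element.U8(rs)).setX(nv21.length).create();
    Allocation in = Allocation.createTyped(rs, yuvType, Allocation.USAGE_SCRIPT);

    Type rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create();
    Allocation out = Allocation.createTyped(rs, rgbaType, Allocation.USAGE_SCRIPT);

    in.copyFrom(nv21);
    yuvToRgb.setInput(in);
    yuvToRgb.forEach(out);

    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    out.copyTo(bitmap);
    return bitmap;
}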
