In Android Q there is an option to get a depth map from an image.
Starting in Android Q, cameras can store the depth data for an image in a separate file, using a new schema called Dynamic Depth Format (DDF). Apps can request both the JPG image and its depth metadata, using that information to apply any blur they want in post-processing without modifying the original image data.
To read the specification for the new format, see Dynamic Depth Format.
I have read the Dynamic Depth Format specification, and it looks like the depth data is stored in the JPG file as XMP metadata. Now my question is: how do I get this depth map from the file, or directly from the Android API?
I am using a Galaxy S10 with Android Q.
If you retrieve a DYNAMIC_DEPTH JPEG, the depth image is stored in the bytes immediately after the main JPEG image. The documentation leaves a lot to be desired in explaining this; I finally figured it out by searching the whole byte buffer for JPEG start and end markers (a rough version of that search is sketched after the list below).
I wrote a parser that you can look at or use here: https://github.com/kmewhort/funar/blob/master/app/src/main/java/com/kmewhort/funar/preprocessors/JpegParser.java. It does the following:
Uses com.drew.imaging.ImageMetadataReader to read the EXIF.
Uses com.adobe.internal.xmp to read info about the depth image sizes (and some other interesting depth attributes that you don't necessarily need, depending on your use case).
Calculates the byte locations of the depth image by subtracting each trailer size from the final byte of the whole byte buffer (there can be other trailer images, such as thumbnails). It returns a wrapped byte buffer of the actual depth JPEG.
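For illustration only, here is a rough sketch of that marker search in Java (the linked parser is the complete implementation; this simplified version assumes the depth JPEG is the last embedded image and ignores other trailers):

// Rough sketch: find the start of the last embedded JPEG by scanning
// backwards for an SOI marker (0xFF 0xD8). 'bytes' is the full capture buffer.
int depthStart = -1;
for (int i = bytes.length - 2; i >= 2; i--) {
    if ((bytes[i] & 0xFF) == 0xFF && (bytes[i + 1] & 0xFF) == 0xD8) {
        depthStart = i;
        break;
    }
}
// Handle depthStart == -1 (no embedded JPEG found) in real code.
ByteBuffer depthJpeg = ByteBuffer.wrap(bytes, depthStart, bytes.length - depthStart);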
I want to upload an internal PNG image to my backend, but the API supplied with the backend only allows byte[] data to be uploaded.
So far, I haven't found a way of extracting byte[] data from a texture. I'm not sure it matters whether it's an internal resource or not.
So what ways are there to achieve this using the libGDX framework?
The image I want to use is loaded with the AssetManager.
Before trying to do this, make sure to understand the following:
A Texture is an OpenGL resource which resides in video memory (VRAM). The texture data itself is not (necessarily) available in RAM, so you cannot access it directly. Transferring that data from VRAM to RAM is comparable to taking a screenshot, and in general it is something you want to avoid.
However, if you load the image using AssetManager then you are loading it from file, and thus already have the data available in RAM. In that case it is not called a Texture but a Pixmap. Getting the data from the pixmap goes like this:
// Load the image file into a Pixmap (decoded pixel data in RAM)
Pixmap pixmap = new Pixmap(Gdx.files.internal(filename));
// Direct (native) buffer holding the raw pixel bytes
ByteBuffer nativeData = pixmap.getPixels();
// Copy the native buffer into a managed byte array
byte[] managedData = new byte[nativeData.remaining()];
nativeData.get(managedData);
// Free the native memory backing the pixmap
pixmap.dispose();
Note that you can load the Pixmap using AssetManager as well (in that case you would unload it instead of disposing it). The nativeData buffer contains the raw memory, and most APIs can consume it directly, so check whether you can use it as-is. Otherwise, use the managed managedData byte array.
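For completeness, a minimal sketch of the AssetManager route (the asset name here is an assumption):

AssetManager assets = new AssetManager();
assets.load("image.png", Pixmap.class); // hypothetical asset name
assets.finishLoading(); // block until loaded (or call update() per frame)
Pixmap pixmap = assets.get("image.png", Pixmap.class);
// ...read the pixels exactly as above...
assets.unload("image.png"); // unload instead of dispose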
I'm using the camera2 API to capture a burst of images. To ensure the fastest capture speed, I am currently using YUV_420_888.
(JPEG results in approximately 3 fps capture, while YUV results in approximately 30 fps.)
So what I'm asking is: how can I access the YUV values for each pixel in the image?
i.e.
Image image = reader.acquireNextImage();
Pixel pixel = image.getPixel(x,y);
pixel.y = ...
pixel.u = ...
pixel.v = ...
Also if another format would be faster please let me know.
If you look at the Image class you will see the immediate answer is simply the .getPlanes() method.
Of course, for YUV_420_888 this will yield three planes of YUV data which you will have to do a bit of work with in order to get the pixel value at any given location, because the U and V channels have been downsampled and may be interleaved in how they are stored within the Image.Planes. The full details are beyond the scope of this question, but the indexing is sketched below.
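Here is a minimal sketch of that indexing, assuming a YUV_420_888 image and using only the Image.Plane getters (getBuffer(), getRowStride(), getPixelStride()):

Image.Plane yPlane = image.getPlanes()[0];
Image.Plane uPlane = image.getPlanes()[1];
Image.Plane vPlane = image.getPlanes()[2];

// Y is full resolution: index by row stride and pixel stride.
int yValue = yPlane.getBuffer()
        .get(y * yPlane.getRowStride() + x * yPlane.getPixelStride()) & 0xFF;

// U and V are subsampled 2x2, so index with x/2 and y/2.
int uValue = uPlane.getBuffer()
        .get((y / 2) * uPlane.getRowStride() + (x / 2) * uPlane.getPixelStride()) & 0xFF;
int vValue = vPlane.getBuffer()
        .get((y / 2) * vPlane.getRowStride() + (x / 2) * vPlane.getPixelStride()) & 0xFF;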
Also, you are correct that YUV will be the fastest available output for your camera. JPEG requires extra encoding time, which slows down the pipeline output, and RAW images take a long time to read out simply because they are so large. YUV (of whatever type) is the data format most camera pipelines work in internally, so it is the 'native' output, and thus the fastest.
I am uploading an image (JPEG) from an Android phone to a server. I tried these two methods:
Method 1 :
int bytes = bitmap.getByteCount();
ByteBuffer byteBuffer = ByteBuffer.allocate(bytes);
// Copies the raw, uncompressed pixel data into the buffer
bitmap.copyPixelsToBuffer(byteBuffer);
byte[] byteArray = byteBuffer.array();
outputStream.write(byteArray, 0, bytes);
Method 2 :
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);
In method 1, I convert the bitmap to a byte array and write it to the stream. In method 2, I call the compress function but set the quality to 100 (which I assume means no loss).
I expected both to give the same result, but the results are very different. On the server, the following happened:
Method 1 (the uploaded file on the server):
A file of size 3.8 MB was uploaded to the server. The uploaded file is unrecognizable; it does not open with any image viewer.
Method 2 (the uploaded file on the server):
A JPEG file of 415 KB was uploaded to the server. The uploaded file was in JPEG format.
What is the difference between the two methods? How did the size differ so much even though I set the compression quality to 100? Also, why was the file not recognizable by any image viewer in method 1?
I expected both to give the same result.
I have no idea why.
What is the difference between the two methods.
The second approach creates a JPEG file. The first one does not. The first one merely makes a copy of the bytes that form the decoded image to the supplied buffer. It does not do so in any particular file format, let alone JPEG.
How did the size differ so much even though I gave the compression quality as 100?
Because the first approach applies no compression. 100 for JPEG quality does not mean "not compressed".
Also why was the file not recognizable by any image viewer in method 1?
Because the bytes copied to the buffer are not being written in any particular file format, and certainly not JPEG. That buffer is not designed to be written to disk. Rather, that buffer is designed to be used only to re-create the bitmap later on (e.g., for a bitmap passed over IPC).
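If what you need is a byte[] that is a valid JPEG, a minimal sketch would be to compress into an in-memory stream first:

// Encode the bitmap as JPEG into an in-memory stream, then upload those bytes.
ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, jpegStream);
byte[] jpegBytes = jpegStream.toByteArray(); // the bytes of a real JPEG file
outputStream.write(jpegBytes);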
From an Android camera I take the YUV array and decode it to RGB (via JNI/NDK). Then I apply a black-and-white filter to the RGB matrix and show it on the camera preview in the YCbCr_420_SP format:
lParameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
Now I need to take a photo, but when I take the photo I get this error:
CAMERA-JNI Manually set buffer was too small! Expected 1138126 bytes, but got 165888!
Because you cannot get the image from the Surface directly. You must grab the bitmap from the layout and then save it to the SD card in some folder as a compressed JPG. Thanks for all; this question is closed.
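For illustration, a minimal sketch of that approach (previewView and the file path are assumptions; note that drawing a SurfaceView this way may not capture live camera frames):

// Render the preview view into a Bitmap, then save it as a compressed JPG.
Bitmap bitmap = Bitmap.createBitmap(previewView.getWidth(),
        previewView.getHeight(), Bitmap.Config.ARGB_8888);
previewView.draw(new Canvas(bitmap)); // draws the view hierarchy into the bitmap
File file = new File(Environment.getExternalStorageDirectory(), "photo.jpg");
try (FileOutputStream out = new FileOutputStream(file)) {
    bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
}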
I have a decoded AVFrame from the avcodec_decode_video2 function (FFmpeg), which is then passed to the SWS library and converted from YUV420P to RGB565. How do I combine all the color and linesize information, i.e. frame->data[0..3] and frame->linesize[0..3], into one buffer, and how do I then display it on an Android device, say by using an Android Bitmap or a SurfaceView/View? I don't want to use SurfaceFlinger because it is not an official part of the NDK and is subject to change with every minor release.
For RGB565 you only have data[0] (the other plane pointers are unused for packed formats), and linesize[0] is the stride in bytes per row, which is width * 2 for RGB565, possibly padded for alignment.
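As an illustration of the display side, assuming the native code has already copied width * 2 bytes from each row of frame->data[0] into a Java byte array rgb565Bytes (the variable names here are mine), it could be shown like this:

// The buffer must contain exactly width * height * 2 bytes of RGB565 pixels.
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bmp.copyPixelsFromBuffer(ByteBuffer.wrap(rgb565Bytes));
imageView.setImageBitmap(bmp); // or draw it onto a SurfaceView's Canvas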