Retrieving Pixels from a DNG file within Android

I am currently working on an HDR application that requires the use of Camera2 to be able to customize HDR settings.
I have developed a customized algorithm to retrieve certain data from Raw DNG images and I would like to implement it on Android.
I am unfortunately not an expert in Java/Android; I taught myself how to code. With other formats I have usually worked with bitmaps to retrieve pixel data (which was a relatively easy task given the existing methods).
For DNG files, I have found no documentation showing me how to retrieve the pixel data. I thought of buffering the image; however, the DNG file format contains a lot of information other than pixels, and I haven't been able to find an extraction strategy using a buffered stream. (I just want to store the pixels inside an array.)
Does anyone have an idea? I would highly appreciate some tips.
Best regards

Camera2 does not produce DNGs directly - it produces plain RAW buffers, which you can then save to a DNG via DngCreator.
Are you operating on the initial RAW buffers, or saving DNGs and then loading them back?
In general, DNGs are not fully baked images, so quite a bit of code is needed to render them completely - see for example Adobe's DNG SDK.
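If you do end up working on the RAW buffers directly, here's a minimal sketch of pulling the 16-bit samples out of a RAW_SENSOR Image into a plain array (assuming your capture session delivers ImageFormat.RAW_SENSOR frames to an ImageReader; the class and method names are just placeholders). The values are still mosaiced and uncorrected - turning them into a viewable image is where the DNG-style metadata discussed below comes in.

    import android.graphics.ImageFormat;
    import android.media.Image;

    import java.nio.ByteBuffer;

    public final class RawPixelReader {

        /**
         * Copies the 16-bit samples of a RAW_SENSOR Image into a short[] (one value
         * per photosite). The data is still mosaiced, with no black-level or color
         * correction applied.
         */
        public static short[] readRawPixels(Image image) {
            if (image.getFormat() != ImageFormat.RAW_SENSOR) {
                throw new IllegalArgumentException("Expected a RAW_SENSOR image");
            }
            Image.Plane plane = image.getPlanes()[0];   // RAW_SENSOR has a single plane
            ByteBuffer buffer = plane.getBuffer();
            int width = image.getWidth();
            int height = image.getHeight();
            int rowStride = plane.getRowStride();       // bytes per row, may exceed width * 2

            short[] pixels = new short[width * height];
            byte[] row = new byte[rowStride];
            for (int y = 0; y < height; y++) {
                buffer.position(y * rowStride);
                buffer.get(row, 0, Math.min(rowStride, buffer.remaining()));
                for (int x = 0; x < width; x++) {
                    // Assumes little-endian 16-bit samples (the native order on ARM devices).
                    int lo = row[2 * x] & 0xFF;
                    int hi = row[2 * x + 1] & 0xFF;
                    pixels[y * width + x] = (short) ((hi << 8) | lo);
                }
            }
            return pixels;
        }
    }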

Related

How do I query the information needed to interpret a RAW_SENSOR image?

I need to write an Android app that, among other things, uses the Camera2 APIs to capture images in RAW format and process the resulting image data in the app. Other image formats such as YUV are not sufficient for my use case and true RAW images are required. I want to capture the image and immediately process it in-memory, without writing out an intermediate .dng file.
In order to do this, I need to use ImageFormat.RAW_SENSOR to get the image I want. The documentation for RAW_SENSOR states the following:
The layout of the color mosaic, the maximum and minimum encoding values of the raw pixel data, the color space of the image, and all other needed information to interpret a raw sensor image must be queried from the android.hardware.camera2.CameraDevice which produced the image.
However, the documentation for CameraDevice contains nothing about querying this information. A Google search turned up nothing helpful. I found this question with an answer that merely quotes what I quoted above and doesn't help figure out how to actually do it.
Thus I am lost. How do I query this information?
Android's RAW support is heavily based on what the Adobe DNG raw file format requires, so reading that spec can be helpful for understanding what the steps in RAW conversion actually are.
Quite a few fields in the CameraCharacteristics and CaptureResult objects are needed to interpret the raw buffer. The majority of the fields that start with SENSOR_ are required for processing.
See the list for the RAW capability as well, though that's still fairly vague.
The Android compliance tests include a very simple RAW processor, so you can also inspect it to see what it reads in.
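As a rough illustration, a sketch like this (the class name is just a placeholder) shows the kind of queries involved; the keys themselves are real CameraCharacteristics/CaptureResult constants, and which ones you actually need depends on how far you take the processing:

    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CaptureResult;
    import android.hardware.camera2.params.BlackLevelPattern;
    import android.hardware.camera2.params.ColorSpaceTransform;
    import android.util.Log;
    import android.util.Rational;

    import java.util.Arrays;

    public final class RawMetadata {

        /** Logs the fields typically needed to interpret a RAW_SENSOR buffer. */
        public static void logRawInterpretationInfo(CameraCharacteristics chars,
                                                    CaptureResult result) {
            // Bayer layout of the color mosaic (RGGB / GRBG / GBRG / BGGR)
            Integer cfa = chars.get(CameraCharacteristics.SENSOR_INFO_COLOR_FILTER_ARRANGEMENT);

            // Encoding range of the raw samples
            Integer whiteLevel = chars.get(CameraCharacteristics.SENSOR_INFO_WHITE_LEVEL);
            BlackLevelPattern blackLevels =
                    chars.get(CameraCharacteristics.SENSOR_BLACK_LEVEL_PATTERN);

            // Color space data for the first reference illuminant (mirrors the DNG tags)
            Integer illuminant1 = chars.get(CameraCharacteristics.SENSOR_REFERENCE_ILLUMINANT1);
            ColorSpaceTransform colorTransform1 =
                    chars.get(CameraCharacteristics.SENSOR_COLOR_TRANSFORM1);
            ColorSpaceTransform calibration1 =
                    chars.get(CameraCharacteristics.SENSOR_CALIBRATION_TRANSFORM1);
            ColorSpaceTransform forwardMatrix1 =
                    chars.get(CameraCharacteristics.SENSOR_FORWARD_MATRIX1);

            // Per-shot white point in sensor color space
            Rational[] neutralPoint = result.get(CaptureResult.SENSOR_NEUTRAL_COLOR_POINT);

            Log.d("RawMetadata", "cfa=" + cfa
                    + " whiteLevel=" + whiteLevel
                    + " blackLevels=" + blackLevels
                    + " illuminant1=" + illuminant1
                    + " colorTransform1=" + colorTransform1
                    + " calibration1=" + calibration1
                    + " forwardMatrix1=" + forwardMatrix1
                    + " neutralPoint=" + Arrays.toString(neutralPoint));
        }
    }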

Getting the color matrix and white balance info from Camera2 API (for custom raw processing)

I am working on an app where I need to take many pictures (possibly tens of thousands), and they have to be RAW, which I process in native code.
Right now I am converting the RAWs to DNGs, and in the native code I unpack them using libraw. I get the white balance color multipliers and the color matrix from the DNG.
However, converting the RAW to DNG and then processing the DNG takes quite a bit of time, and I would like to skip this step and process the raw data directly, without the DNG intermediary. But for that I need to get the color matrix and WB values. I looked at the docs but couldn't find how to do that. Any help would be appreciated.
That information is available in the CameraCharacteristics and CaptureResult objects that you pass to DngCreator, specifically fields like:
https://developer.android.com/reference/kotlin/android/hardware/camera2/CameraCharacteristics#sensor_calibration_transform1
https://developer.android.com/reference/kotlin/android/hardware/camera2/CameraCharacteristics#sensor_color_transform1
https://developer.android.com/reference/kotlin/android/hardware/camera2/CameraCharacteristics#sensor_forward_matrix1
https://developer.android.com/reference/kotlin/android/hardware/camera2/CameraCharacteristics#sensor_reference_illuminant1
https://developer.android.com/reference/android/hardware/camera2/CaptureResult#SENSOR_NEUTRAL_COLOR_POINT
Most of those fields map almost directly to the DNG spec, but you can also look at the DngCreator implementation to see how the camera2 API maps to the DNG fields:
https://cs.android.com/android/platform/superproject/+/master:frameworks/base/core/jni/android_hardware_camera2_DngCreator.cpp;l=1217
While there's no official sample for using this information to process a raw buffer, the Android compliance tests include a simple RAW converter in Java, used to confirm that the resulting image reasonably matches the JPEG image provided by the device (to double-check that the various metadata fields are reasonably correct): https://cs.android.com/android/platform/superproject/+/master:cts/tests/camera/src/android/hardware/camera2/cts/rs/RawConverter.java;l=279
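For the two specific values you mention, something along these lines should be enough as a sketch (class and method names are placeholders). The per-shot neutral color point is what DngCreator writes as AsShotNeutral, so its reciprocals correspond to the per-channel white balance multipliers libraw reports, and SENSOR_COLOR_TRANSFORM1 corresponds to the DNG ColorMatrix1 tag:

    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CaptureResult;
    import android.hardware.camera2.params.ColorSpaceTransform;
    import android.util.Rational;

    public final class RawColorInfo {

        /**
         * White balance gains derived from the per-shot neutral color point.
         * The neutral point corresponds to DNG's AsShotNeutral; its reciprocals
         * are the per-channel multipliers you'd otherwise read from the DNG.
         */
        public static float[] whiteBalanceGains(CaptureResult result) {
            Rational[] neutral = result.get(CaptureResult.SENSOR_NEUTRAL_COLOR_POINT); // [R, G, B]
            float[] gains = new float[3];
            for (int i = 0; i < 3; i++) {
                gains[i] = 1.0f / neutral[i].floatValue();
            }
            return gains;
        }

        /** The 3x3 color matrix for the first reference illuminant, as row-major floats. */
        public static float[] colorMatrix1(CameraCharacteristics chars) {
            ColorSpaceTransform transform =
                    chars.get(CameraCharacteristics.SENSOR_COLOR_TRANSFORM1);
            float[] matrix = new float[9];
            for (int row = 0; row < 3; row++) {
                for (int col = 0; col < 3; col++) {
                    // getElement(column, row) returns a Rational matrix entry
                    matrix[row * 3 + col] = transform.getElement(col, row).floatValue();
                }
            }
            return matrix;
        }
    }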

Storing information in PNG and JPG

I have found a number of resources, but nothing that has quite helped me with what I am looking for. I am trying to understand the .png and .jpg file formats well enough to be able to modify and/or read the EXIF or other metadata in the files, or to create my own metadata if possible.
I want to do this in the context of an Android application so we can keep it there, but it is really not exclusive to that. I am trying to figure out how to do this using a simple input stream / byte array and go from there.
Android itself has to at least extract the RGB pixel information at some point when it creates a bitmap from the stream. I took a look at the BitmapFactory source to try to understand it, but I got lost somewhere after delving into the native files.
Based on my research, I assume the bitmaps are losing any EXIF/metadata that was in the files. So I guess I want to break the input streams down into byte arrays and remove the metadata. For .png I know there is no 'standard', but based on this page it seems there is some organization to the metadata you can store.
With all that said, I wouldn't mind just leaving the EXIF/PNG standards behind and trying to store my own information in some sort of standardized way, but I need to know more about how image readers identify the files as either JPG, PNG, etc., and then determine where the pixel information is located.
So I guess my first question is: has anyone done something similar to this before so that they can fill me in? If not, does anyone know of any good libraries that might be useful for learning how to locate and extract this data?
Or, even more basically, what is a good way to find the metadata and/or EXIF data, or even the RGB data, programmatically using something like a byte array?
There are a lot of things to address in your question, but first I should clarify that when you say "Android itself has to at least extract the RGB pixel information," what you're referring to is the act of decompression, which is complicated in the case of JPEG and nontrivial even for PNG. I think it would be very useful for you to read through the Wikipedia articles on JPEG and PNG before attempting to go any further (especially the sections on headers, syntax, file structure, etc.).
That being said, you've got the right idea. It shouldn't be too difficult to read in the header of an image as a byte array/stream, make some changes, and replace the old file. A PNG file can be identified by its first 8 bytes, and a JPEG can be identified in a similar way from its first bytes (see the sketch below).
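For the identification part, a minimal sketch in plain Java (no Android dependencies; the class name is just for illustration) - a PNG always begins with a fixed 8-byte signature, and a JPEG begins with the SOI marker FF D8:

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Arrays;

    public final class ImageTypeSniffer {

        // PNG files always start with these 8 bytes.
        private static final byte[] PNG_SIGNATURE = {
                (byte) 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A
        };

        /** Returns "png", "jpeg", or "unknown" by looking at the first bytes of the stream. */
        public static String sniff(InputStream in) throws IOException {
            byte[] header = new byte[8];
            int read = in.read(header);
            if (read >= 8 && Arrays.equals(header, PNG_SIGNATURE)) {
                return "png";
            }
            // JPEG streams start with the SOI marker FF D8.
            if (read >= 2 && (header[0] & 0xFF) == 0xFF && (header[1] & 0xFF) == 0xD8) {
                return "jpeg";
            }
            return "unknown";
        }
    }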
To modify PNG metadata, you'll have to understand "chunks": their types/names, ordering, format, CRC, etc. The libpng website has good resources for this: general PNG info as well as the chunk specifications. Make sure you don't forget to recalculate the CRC if you change anything.
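To get a feel for the layout before modifying anything, a small sketch like this (hypothetical class name; error handling omitted) just walks the stream and prints each chunk - 4-byte big-endian length, 4-byte type, payload, then a 4-byte CRC:

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    public final class PngChunkLister {

        /** Prints the type and payload length of every chunk after the 8-byte PNG signature. */
        public static void listChunks(InputStream in) throws IOException {
            DataInputStream data = new DataInputStream(in);
            data.skipBytes(8);                        // PNG signature
            while (true) {
                int length = data.readInt();          // 4-byte big-endian payload length
                byte[] type = new byte[4];
                data.readFully(type);                 // chunk type, e.g. IHDR, tEXt, IDAT, IEND
                String typeName = new String(type, StandardCharsets.US_ASCII);
                System.out.println(typeName + " (" + length + " bytes)");
                if ("IEND".equals(typeName)) {
                    break;                            // IEND is always the last chunk
                }
                data.skipBytes(length + 4);           // skip payload and the 4-byte CRC
            }
        }
    }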
JPEG sections off a file using "markers," which are two bytes long and always start with FF. Exif is just a regular JPEG file with a more specific structure for its metadata, and this seems like a reasonable introduction: Exif/TIFF
There are probably libraries for Android/Java that conveniently take care of this for you, but I've never used any myself. A quick Google search turns up this, and I'm sure there are many other options if you don't want to take the time to write a parser yourself.
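If all you need is EXIF in JPEGs, Android's built-in ExifInterface (or the androidx.exifinterface library for broader API support) does the parsing for you. A quick sketch, assuming API 24+ for TAG_USER_COMMENT:

    import android.media.ExifInterface;

    import java.io.IOException;

    public final class ExifExample {

        /** Reads a couple of EXIF tags from a JPEG and writes a custom user comment. */
        public static void readAndTag(String jpegPath) throws IOException {
            ExifInterface exif = new ExifInterface(jpegPath);

            String make = exif.getAttribute(ExifInterface.TAG_MAKE);
            String dateTime = exif.getAttribute(ExifInterface.TAG_DATETIME);
            System.out.println("Make=" + make + " DateTime=" + dateTime);

            // Store our own string in an existing EXIF field rather than inventing a format.
            exif.setAttribute(ExifInterface.TAG_USER_COMMENT, "my-app:v1;custom=42");
            exif.saveAttributes();   // rewrites the JPEG's EXIF block in place
        }
    }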

The most compact map file format?

I am trying to understand all these map formats for OpenStreetMap and I really got confused.
The OSM wiki has lots of information, but it looks like it is spread all over different places, and I cannot get a solid understanding of all the formats.
I am looking for something that can be used in Android for offline use. I know that there are lots of frameworks and even finished apps that use different file formats, but the file formats they use all seem huge to me.
As I understand it, the most lightweight format supported by OSM is PBF (binary), and it is a raster format, right?
I have found that it's possible to convert it to the *.map format, which is vector, right?
The size is then about 40% less than the PBF binary, but it has to be rendered, so it will not be as fast as raster, right?
So another question is, what is the most compact OSM map format that can be used for Android?
I know one app that I use a lot, MapsWithMe, and it has small map files and they are very fast, but I don't know if it uses raster or vector. I only know they use OSM maps, but as I understand it they have created their own format based on it, or something like that.
I have come across GeoJSON, and the map files are very small, not more than several megabytes. So now I'm confused about why that is, and why it's not used for mobile development, because I googled 'geojson android' and found no information about it.
Are there any comparison tables of these formats?
So if somebody has a link where I can learn about all these things, could you please share it?
Thanks
The PBF format is a much smaller alternative to XML. It contains the same raw vector data; you can convert from one format into the other without losing any information. PBF is smaller and faster because it is binary data, whereas XML is plain text. The OSM wiki has a short overview of common OSM file formats.
I don't know where you got the information that GeoJSON is small. The size of a map depends on several factors, mainly coverage and detail. Usually you don't want an offline map covering the whole world on your device, because it would be very large. Most of the time you just need a small area, like a country. And often you don't need every piece of information OSM can offer; roads, cities, and important POIs are usually sufficient for routing and searching.
You didn't tell us what you want to do with the map. Just drawing it? Or do you also need routing and search functionality? Which map format would be most useful for you depends on your use case.
There is already a lot of software for Android using OSM, including various open source programs. You can take a look at them if you need inspiration for your software.

Better option to handle JPEG byte array decoding

Given a JPEG-encoded byte array taken from a camera, I need to decode it and display its image (bitmap) in my application. From searching around, I've found that there are two primary ways to go about this: use an NDK JPEG decoding library (such as libjpeg), or use BitmapFactory.decodeByteArray. This is for an experimental embedded device being developed that runs Android with a built-in camera.
I would greatly prefer to develop the application with the SDK rather than the NDK if at all possible, but many people seem to handle this problem by going straight to the NDK, which bugged me a bit.
Is there some sort of inherent limitation in BitmapFactory.decodeByteArray that forces you to handle this problem by using libjpeg in the NDK (perhaps speed? incompatibility?)?
Performance isn't a big consideration, unless it takes, say, more than 45 seconds to decode and display the image.
This is an important decision I need to make upfront, so I'd appreciate any thoughtful answers. Thank you so much.
Here is a really good example/explanation of how you can decode images on the device efficiently without using the NDK: Displaying Bitmaps Efficiently. You have the option to decode the bitmap from a stream or a file, so it depends on your needs. In most of my applications I use the same approach and it works great, so I suggest you take a look at that SDK example.
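For reference, here's a minimal sketch of the two-pass decode that guide describes, applied to your byte array (class and parameter names are just placeholders):

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    public final class JpegDecoder {

        /** Decodes a JPEG byte array, subsampling so the result roughly fits reqWidth x reqHeight. */
        public static Bitmap decode(byte[] jpegData, int reqWidth, int reqHeight) {
            // First pass: read only the dimensions so we can pick a sample size.
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inJustDecodeBounds = true;
            BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, options);

            // Halve the image until it fits within the requested bounds.
            int inSampleSize = 1;
            while (options.outWidth / (inSampleSize * 2) >= reqWidth
                    && options.outHeight / (inSampleSize * 2) >= reqHeight) {
                inSampleSize *= 2;
            }

            // Second pass: decode the actual pixels at the reduced resolution.
            options.inJustDecodeBounds = false;
            options.inSampleSize = inSampleSize;
            return BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, options);
        }
    }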
Hope it helps.
