Storing information in PNG and JPG - Android

I have found a number of resources, but nothing that has quite helped me with what I am looking for. I am trying to understand the .png and .jpg file formats well enough to be able to modify and/or read the Exif or other metadata in the files, or to create my own metadata if possible.
I want to do this in the context of an Android application so we can keep it there, but it is really not exclusive to that. I am trying to figure out how to do this using a simple input stream / byte array and go from there.
Android itself has to at least extract the RGB pixel information at some point when it creates a bitmap from the stream. I took a look at the BitmapFactory source to try to understand it, but I got lost somewhere after delving into the native files.
I assume the bitmaps are losing any Exif/metadata in the files, based on my research. So I guess I want to break the input streams down into byte arrays and remove the metadata. In .pngs I know there is no 'standard', but based on this page it seems there is some organization to the metadata you can store.
With all that said, I wouldn't mind just leaving the Exif/PNG standards behind and trying to store my own information in some sort of standardized way, but I need to know more about how image readers identify the files as JPG, PNG, etc., and then determine where the pixel information is located.
So I guess my first question is: has anyone done something similar to this before, so that they can fill me in? If not, does anyone know of any good libraries that might be useful, for educational purposes, in figuring out how to locate and extract this data?
Or, even more basically, what is a good way to find the metadata and/or Exif data, or even the RGB data, programmatically using something like a byte array?

There are a lot of things to address in your question, but first I should clarify that when you say "Android itself has to at least extract the RGB pixel information," what you're referring to is the act of decompression, which is complicated in the case of JPEG and non-trivial even for PNG. I think it would be very useful for you to read through the Wikipedia articles on JPEG and PNG before attempting to go any further (especially the sections on header, syntax, file structure, etc.).
That being said, you've got the right idea. It shouldn't be too difficult to read in the header of an image as a byte array/stream, make some changes, and replace the old file. A PNG file can be identified by its first 8 bytes, and a JPEG similarly announces itself with its first two bytes (the SOI marker).
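For reference, the PNG signature is the 8 bytes 89 50 4E 47 0D 0A 1A 0A, and the JPEG SOI marker is FF D8. A minimal sniffing sketch (class and method names are just illustrative):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Arrays;

    public class ImageTypeSniffer {
        // PNG signature: 0x89 'P' 'N' 'G' CR LF 0x1A LF
        private static final byte[] PNG_SIGNATURE = {
                (byte) 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A
        };

        // Reads the first few bytes of the stream and reports the detected format.
        public static String sniff(InputStream in) throws IOException {
            byte[] header = new byte[8];
            int read = in.read(header);
            if (read == 8 && Arrays.equals(header, PNG_SIGNATURE)) {
                return "png";
            }
            // JPEG files start with the SOI marker 0xFF 0xD8.
            if (read >= 2 && (header[0] & 0xFF) == 0xFF && (header[1] & 0xFF) == 0xD8) {
                return "jpeg";
            }
            return "unknown";
        }
    }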
To modify PNG metadata, you'll have to understand "chunks" - their types/names, ordering, format, CRC, etc. The libpng website has some good resources for this: here's the general PNG info, as well as the chunk specifications. Make sure you don't forget to recalculate the CRC if you change anything.
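To give a feel for the chunk layout (each chunk is a 4-byte big-endian length, a 4-byte ASCII type, the data, and a 4-byte CRC computed over type + data), here's an untested sketch that just walks the chunks and prints them:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.charset.StandardCharsets;

    public class PngChunkWalker {
        // Assumes the 8-byte PNG signature has already been read and verified.
        public static void listChunks(InputStream in) throws IOException {
            DataInputStream data = new DataInputStream(in);
            while (true) {
                int length;
                try {
                    length = data.readInt();      // 4-byte big-endian data length
                } catch (EOFException end) {
                    break;                        // ran out of chunks
                }
                byte[] type = new byte[4];
                data.readFully(type);             // chunk type, e.g. IHDR, tEXt, IDAT, IEND
                String typeName = new String(type, StandardCharsets.US_ASCII);

                byte[] payload = new byte[length];
                data.readFully(payload);          // chunk data (tEXt holds "keyword\0value")
                int crc = data.readInt();         // CRC-32 over type + data; recompute with java.util.zip.CRC32 if you edit the data

                System.out.println(typeName + ": " + length + " bytes, crc=0x" + Integer.toHexString(crc));

                if ("IEND".equals(typeName)) {
                    break;                        // IEND is always the last chunk
                }
            }
        }
    }

To strip or add metadata you would copy chunks through to an output stream, dropping or inserting tEXt/zTXt chunks along the way and leaving IHDR/IDAT/IEND alone.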
JPEG sections off a file using "markers," which are two bytes long and always start with FF. Exif is just a regular JPEG file with a more specific structure for its metadata, and this seems like a reasonable introduction: Exif/TIFF
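And a similarly rough sketch for walking JPEG marker segments up to the start of the scan data (it skips 0xFF fill bytes but otherwise assumes every segment before SOS carries a length field, which holds for the metadata segments you care about):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class JpegMarkerWalker {
        // Prints each marker segment up to the start of the compressed scan data.
        public static void listMarkers(InputStream in) throws IOException {
            DataInputStream data = new DataInputStream(in);
            if (data.readUnsignedByte() != 0xFF || data.readUnsignedByte() != 0xD8) {
                throw new IOException("Not a JPEG (missing SOI marker)");
            }
            while (true) {
                int prefix = data.readUnsignedByte();
                if (prefix != 0xFF) {
                    throw new IOException("Expected a 0xFF marker prefix");
                }
                int marker = data.readUnsignedByte();
                while (marker == 0xFF) {          // skip 0xFF fill bytes between segments
                    marker = data.readUnsignedByte();
                }
                if (marker == 0xDA) {             // SOS: entropy-coded image data follows
                    System.out.println("SOS reached, stopping");
                    break;
                }
                int length = data.readUnsignedShort();  // segment length includes these two bytes
                byte[] payload = new byte[length - 2];
                data.readFully(payload);
                // The Exif block lives in an APP1 segment (0xFFE1) whose payload starts with "Exif\0\0".
                System.out.println(String.format("Marker 0xFF%02X, %d payload bytes", marker, payload.length));
            }
        }
    }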
There are probably libraries for Android/Java that conveniently take care of this for you, but I've never used any myself. A quick Google search turns up this, and I'm sure there are many other options if you don't want to take the time to write a parser yourself.

Related

How do I query the information needed to interpret a RAW_SENSOR image?

I need to write an Android app that, among other things, uses the Camera2 APIs to capture images in RAW format and process the resulting image data in the app. Other image formats such as YUV are not sufficient for my use case and true RAW images are required. I want to capture the image and immediately process it in-memory, without writing out an intermediate .dng file.
In order to do this, I need to use ImageFormat.RAW_SENSOR to get the image I want. The documentation for RAW_SENSOR states the following:
The layout of the color mosaic, the maximum and minimum encoding values of the raw pixel data, the color space of the image, and all other needed information to interpret a raw sensor image must be queried from the android.hardware.camera2.CameraDevice which produced the image.
However, the documentation for CameraDevice contains nothing about querying this information. A Google search turned up nothing helpful. I found this question with an answer that merely quotes what I quoted above and doesn't help figure out how to actually do it.
Thus I am lost. How do I query this information?
Android's RAW support is heavily based on what the Adobe DNG raw file format requires, so reading that spec can be helpful for understanding what the steps in RAW conversion actually are.
Quite a few fields in the CameraCharacteristics and CaptureResult objects are needed to interpret the raw buffer. The majority of the fields that start with SENSOR_ are required for processing.
See the list for the RAW capability as well, though that's still fairly vague.
The Android compliance tests include a very simple RAW processor, so you can also inspect it to see what it reads in.
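To make that concrete, here is a sketch of the kind of static fields you end up reading (per-shot values such as the neutral color point come from the CaptureResult instead); the class and method names are just for illustration:

    import android.content.Context;
    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraManager;
    import android.util.Log;

    public class RawInfoQuery {
        // Logs the static fields needed to interpret a RAW_SENSOR buffer.
        public static void logRawCharacteristics(Context context, String cameraId)
                throws CameraAccessException {
            CameraManager manager =
                    (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
            CameraCharacteristics c = manager.getCameraCharacteristics(cameraId);

            Log.d("RawInfo", "CFA layout: "
                    + c.get(CameraCharacteristics.SENSOR_INFO_COLOR_FILTER_ARRANGEMENT));
            Log.d("RawInfo", "White level: "
                    + c.get(CameraCharacteristics.SENSOR_INFO_WHITE_LEVEL));
            Log.d("RawInfo", "Black level pattern: "
                    + c.get(CameraCharacteristics.SENSOR_BLACK_LEVEL_PATTERN));
            Log.d("RawInfo", "Active array: "
                    + c.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE));
            Log.d("RawInfo", "Color transform 1: "
                    + c.get(CameraCharacteristics.SENSOR_COLOR_TRANSFORM1));
            Log.d("RawInfo", "Calibration transform 1: "
                    + c.get(CameraCharacteristics.SENSOR_CALIBRATION_TRANSFORM1));
            Log.d("RawInfo", "Reference illuminant 1: "
                    + c.get(CameraCharacteristics.SENSOR_REFERENCE_ILLUMINANT1));
        }
    }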

Retrieving Pixels from a DNG file within Android

I am currently working on an HDR application that requires the use of Camera2 to be able to customize HDR settings.
I have developed a customized algorithm to retrieve certain data from Raw DNG images and I would like to implement it on Android.
I am unfortunately not an expert in Java/Android, so I taught myself how to code. With other formats, I have usually worked with bitmaps to retrieve pixel data (which was a relatively easy task, given the existing methods).
Concerning DNG files, I have found no documentation showing me how to retrieve the pixel data. I thought of buffering the image, but the DNG file format contains a lot of information other than pixels, and I'm afraid I am unable to find an extraction strategy using a buffered stream. (I just want to store the pixels inside an array.)
Does anyone have an idea? I would highly appreciate some tips.
Best regards
Camera2 does not produce DNGs directly - it produces plain RAW buffers, which you can then save to a DNG via DngCreator.
Are you operating on the initial RAW buffers, or saving DNGs and then loading them back?
In general, DNGs are not fully baked images, so quite a bit of code is needed to render them completely - see, for example, Adobe's DNG SDK.
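If you are operating on the initial RAW buffers, pulling the Bayer samples out of the single RAW_SENSOR plane looks roughly like this; it assumes 16-bit samples in little-endian order, which you should verify for your device:

    import android.media.Image;
    import android.media.ImageReader;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class RawPixelReader {
        // Copies the 16-bit Bayer samples of a RAW_SENSOR Image into a short array.
        public static short[] readRawPixels(ImageReader reader) {
            Image image = reader.acquireLatestImage();
            if (image == null) {
                return null;                       // nothing available yet
            }
            try {
                Image.Plane plane = image.getPlanes()[0];          // RAW_SENSOR has one plane
                ByteBuffer buffer = plane.getBuffer().order(ByteOrder.LITTLE_ENDIAN);
                int width = image.getWidth();
                int height = image.getHeight();
                int rowStride = plane.getRowStride();              // may be larger than width * 2
                int pixelStride = plane.getPixelStride();          // 2 bytes per sample

                short[] pixels = new short[width * height];
                for (int y = 0; y < height; y++) {
                    for (int x = 0; x < width; x++) {
                        pixels[y * width + x] = buffer.getShort(y * rowStride + x * pixelStride);
                    }
                }
                return pixels;   // still mosaiced; black-level, white balance and demosaic come next
            } finally {
                image.close();
            }
        }
    }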

Fastest way to read/write a Bitmap from/to file?

I'm currently writing Bitmaps to a png file and also reading them back to a Bitmap. I'm looking for ways to improve the speed at which writing and reading happens. The images need to be lossless since I'm reading them back to edit them.
The place where I see the worst performance is the actual BitmapFactory.decode(...).
A few questions:
1. Is there a faster solution to read/write from file to a Bitmap using NDK?
2. Is there a better library to decode a Bitmap faster?
3. What is the best way to store and read a Bitmap?
Trying to find the best/fastest possible way to read/write an image to file came down to using plain old BitmapFactory. I tried using the NDK to do the encoding/decoding, but that really didn't make a difference.
Essentially, the format to use was lossless PNG, since I didn't want to lose any quality after editing an image.
The main thing I needed to understand was how long encoding took versus decoding. The encoding numbers were in the 300-600 ms range, depending on image size, while decoding was fast, around 10-23 ms.
After understanding all that, I just created a worker thread that I passed images needing encoding, and let it do the work without affecting the user experience. The image was kept cached in memory in case it was needed right away, before it was completely encoded and saved to file.
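The worker-thread part is straightforward; a minimal sketch of what I mean (names are illustrative):

    import android.graphics.Bitmap;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BitmapSaver {
        // Single worker thread so the slow PNG encode never blocks the UI thread.
        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        // Queues a lossless PNG write; the caller keeps its in-memory Bitmap for immediate reuse.
        public void saveAsync(final Bitmap bitmap, final File target) {
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    try (FileOutputStream out = new FileOutputStream(target)) {
                        // The quality parameter is ignored for PNG; the format itself is lossless.
                        bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                    } catch (IOException e) {
                        // In a real app, report the failure back to the caller.
                    }
                }
            });
        }
    }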

Android efficient storage of two images which are similar

Application size on a phone needs to be as small as possible. If I have an image of a sword, and then a very similar image of that same sword except that I've changed the color, added flames, or changed the picture of the jewel, or whatever, how do I store things as efficiently as possible?
One possibility is to store the differences graphically. I'd store just the image differences and then combine the two images at runtime. I've already asked a question on the graphic design stackexchange site about how to do that.
Another possibility would be that the APK already does this, or that there is already a file format or method people use to store similar images on Android.
Any suggestions? Are there tools that I could use to take two pngs and generate a difference file or a file format for storing similar images or something?
I'd solve this problem at a higher level. For example, do the color change at run-time (maybe store the image with a very specific color, like some ugly shade of green, that you know is to be replaced at run-time with white or red or blue or whatever actual color you want). Then you could generate several image buffers at load-time.
For compositing the two images, just store the 'jewel' image separately, and draw it over the basic sword. Again, you could create a new image at load-time, or just do the overdraw at run-time.
This will help reduce your application's footprint on flash, but will not reduce the memory footprint when the app is active.
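Here's a sketch of both ideas at load time (tinting a template sprite and drawing a separately stored jewel over it); the PorterDuff MULTIPLY filter is just one way to do the recolor, and the names are made up:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.PorterDuff;
    import android.graphics.PorterDuffColorFilter;

    public class SpriteCompositor {
        // Tints the base sword sprite with the desired color and draws the jewel overlay
        // on top, producing one composited bitmap at load time.
        public static Bitmap buildVariant(Bitmap baseSword, Bitmap jewelOverlay, int tintColor) {
            Bitmap result = Bitmap.createBitmap(
                    baseSword.getWidth(), baseSword.getHeight(), Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(result);

            Paint tint = new Paint();
            tint.setColorFilter(new PorterDuffColorFilter(tintColor, PorterDuff.Mode.MULTIPLY));
            canvas.drawBitmap(baseSword, 0, 0, tint);

            // The jewel is stored once and composited over every sword variant.
            canvas.drawBitmap(jewelOverlay, 0, 0, null);
            return result;
        }
    }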
I believe your idea of storing the delta between 2 images to be quite good.
You would then compress the resulting delta file with a simple entropy coder, such as Huffman, and you are pretty likely to achieve a strong compression ratio if the similarities with the base image are significant.
If the similarity is really very strong, you could even try a range coder, to achieve less-than-one-bit-per-pixel performance. The difference, however, might be noticeable only for larger images (i.e. higher resolution than a 12x12 sprite).
These ideas, however, will require you or someone else to write the code for such a function, as sketched below. It should be quite straightforward.
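A rough sketch of the idea: XOR the two same-sized ARGB pixel arrays (identical pixels become zero) and feed the delta through Deflater, which is a readily available stand-in for a dedicated entropy coder (it combines LZ77 with Huffman coding):

    import java.io.ByteArrayOutputStream;
    import java.util.zip.Deflater;

    public class ImageDelta {
        // Builds a compressed delta between two same-sized ARGB pixel arrays.
        public static byte[] encodeDelta(int[] basePixels, int[] variantPixels) {
            byte[] delta = new byte[basePixels.length * 4];
            for (int i = 0; i < basePixels.length; i++) {
                int diff = basePixels[i] ^ variantPixels[i];   // identical pixels become 0x00000000
                delta[i * 4]     = (byte) (diff >>> 24);
                delta[i * 4 + 1] = (byte) (diff >>> 16);
                delta[i * 4 + 2] = (byte) (diff >>> 8);
                delta[i * 4 + 3] = (byte) diff;
            }

            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.setInput(delta);
            deflater.finish();

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            while (!deflater.finished()) {
                out.write(chunk, 0, deflater.deflate(chunk));
            }
            deflater.end();
            return out.toByteArray();
        }
    }

Reconstruction is the reverse: inflate the stored delta and XOR it back against the base image's pixels.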
A very easy approach is to use an image pack (one image containing many), so you can easily leverage the PNG or JPG compression algorithms for your purpose. You then split the images apart before drawing.
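Splitting the pack back into individual sprites at load time is straightforward with Bitmap.createBitmap; a sketch assuming a fixed cell size:

    import android.graphics.Bitmap;

    public class ImagePack {
        // Splits one packed image (all variants stored side by side in a single PNG)
        // into individual sprites of a fixed cell size.
        public static Bitmap[] split(Bitmap pack, int cellWidth, int cellHeight) {
            int cols = pack.getWidth() / cellWidth;
            int rows = pack.getHeight() / cellHeight;
            Bitmap[] sprites = new Bitmap[cols * rows];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    sprites[r * cols + c] = Bitmap.createBitmap(
                            pack, c * cellWidth, r * cellHeight, cellWidth, cellHeight);
                }
            }
            return sprites;
        }
    }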

Base64 Encoded images on Android/iOS

I'm looking for a way of obfuscating the images I store in my application, and I am currently considering Base64 encoding.
I need something with minimal overhead, or if possible a performance boost over standard files on the file system.
Can someone comment on the feasibility of Base64-encoding the images (PNG) and subsequently decoding them on the target platforms?
Thanks.
What sort of attack are you trying to protect against? Base64 is reasonably easy to recognize and has a potentially significant impact in terms of space (each image will take up an extra 33% of space).
Some sort of shifting XOR would be harder to spot just from the data, but it wouldn't be adequate protection for really significant assets.
I am sure you understand that Base64 won't fool anyone who really wants to get at your bitmaps.
Jon Skeet is right: Base64 is nice for encoding binary data in a readable format, but it will not really help you here. An XOR against a password of yours will be faster and won't add any size overhead.
If you really want to obfuscate your bitmaps, I suggest you store them in the "raw" resources folder. By doing this you will be able to keep the nice Android abstraction that handles different form factors (ldpi, hdpi, ...).
Extend the ImageView class to work directly with an R.raw.filename id, and do the file reading / stream decoding / bitmap creation there. By doing so, you will be able to roll back easily to the standard way of doing things if needed.
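A sketch of what the loading side (which such an ImageView subclass could call) might look like, assuming the PNGs in res/raw were XOR-ed against a repeating key before being bundled; the key and class name here are made up:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ObfuscatedBitmapLoader {
        // Illustrative key only; use your own.
        private static final byte[] KEY = {0x5A, 0x13, (byte) 0xC7, 0x2E};

        // Reads the XOR-ed raw resource, undoes the XOR, and decodes the result into a Bitmap.
        public static Bitmap load(Context context, int rawResId) throws IOException {
            InputStream in = context.getResources().openRawResource(rawResId);
            ByteArrayOutputStream decoded = new ByteArrayOutputStream();
            try {
                byte[] buffer = new byte[8192];
                int read;
                long offset = 0;
                while ((read = in.read(buffer)) != -1) {
                    for (int i = 0; i < read; i++) {
                        buffer[i] ^= KEY[(int) ((offset + i) % KEY.length)];
                    }
                    decoded.write(buffer, 0, read);
                    offset += read;
                }
            } finally {
                in.close();
            }
            byte[] bytes = decoded.toByteArray();
            return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        }
    }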
Be warned that you could run into memory issues when keeping multiple bitmaps in an application's memory on Android. OutOfMemoryError seems to be a recurring problem when dealing with bitmaps on Android. Here is an example: outofmemoryerror-bitmap-size-exceeds-vm-budget-android
