get pixels from jpeg byte array - android

I need to get the pixel info from a JPEG image without instantiating a Bitmap, and then pass it to the JNI.
On Android it is impossible to get anything other than a JPEG image from the camera (except if you only need a low resolution, in which case you can use the PreviewCallback), so I got the byte[] from the jpegCallback.
Is it possible to get the pixel info into an int[] without using Bitmap.getPixels()?

If you don't want to construct a Bitmap, your only option is to decode the JPEG buffer yourself; this means either finding another Java JPEG decompression library, or using JNI and a C JPEG library such as libjpeg. Or, you can write one from scratch, which I don't recommend unless you're already pretty conversant with image compression routines, and you have plenty of time for implementation and debugging.
As Scott asked, why is using Bitmap unacceptable? No matter what route you take, you'll have to call something to decompress the image data, and using BitmapFactory.decodeByteArray is a straightforward, known-to-work option.
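For reference, a minimal sketch of that straightforward route (the class and method names here are just illustrative): decode the jpegCallback buffer with BitmapFactory.decodeByteArray and copy the pixels into an int[] that can then be handed to JNI.

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class JpegPixels {
    // Decodes a JPEG byte[] (e.g. from the jpegCallback) and returns the
    // pixels as ARGB ints, one int per pixel, row by row.
    public static int[] decodeToPixels(byte[] jpegData) {
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length);
        int width = bitmap.getWidth();
        int height = bitmap.getHeight();
        int[] pixels = new int[width * height];
        bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
        bitmap.recycle();  // release the native backing once the pixels are copied out
        return pixels;
    }
}
```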

I do not see how this is possible with the standard Android APIs. I'm sure there are 3rd party JPEG libraries you could use (Googling finds many possibilities). Why are you trying to avoid Bitmaps? If it is a memory constraint, you might want to decode small horizontal chunks with BitmapRegionDecoder.
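A rough sketch of that strip-by-strip idea with BitmapRegionDecoder (the strip height and helper names are arbitrary choices, not anything prescribed by the API):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;
import java.io.IOException;

public final class StripDecoder {
    // Decodes a JPEG byte[] in horizontal strips of stripHeight rows each,
    // so only one strip's worth of pixels is in memory at a time.
    public static void decodeInStrips(byte[] jpegData, int stripHeight) throws IOException {
        BitmapRegionDecoder decoder =
                BitmapRegionDecoder.newInstance(jpegData, 0, jpegData.length, true);
        int width = decoder.getWidth();
        int height = decoder.getHeight();
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferredConfig = Bitmap.Config.ARGB_8888;
        for (int top = 0; top < height; top += stripHeight) {
            int bottom = Math.min(top + stripHeight, height);
            Bitmap strip = decoder.decodeRegion(new Rect(0, top, width, bottom), options);
            // ... process the strip here (e.g. strip.getPixels(...)) ...
            strip.recycle();
        }
        decoder.recycle();
    }
}
```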

Related

Image compression before uploading in android

I have an image uploading module in my app where the user can select an image from the gallery. The problem is that the size of the image can be up to 10 MB, which is very large, so I want to apply some compression technique to these images before uploading them.
I did some research on the internet and found libraries like ImageMagick and ImgMin which allow easy optimization of images. Is there any way I can use them in my Android project without involving a backend server?
References:
ImgMin
https://github.com/rflynn/imgmin
ImageMagick
http://www.imagemagick.org/script/index.php
An easy option you can try is the compress() method from the Bitmap class.
You can select the compression format of the bitmap and tune either the quality or the file size. A downside is that you need to get a Bitmap instance to start the compression, which may be something you don't want to do.
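A minimal sketch of that approach, assuming a JPEG re-encode at a fixed quality is acceptable (the quality value and helper name are just placeholders):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.ByteArrayOutputStream;

public final class UploadCompressor {
    // Re-encodes a gallery image as JPEG at the given quality (0-100).
    // Lower quality means a smaller upload; around 80 is often visually fine.
    public static byte[] compressForUpload(byte[] originalBytes, int quality) {
        Bitmap bitmap = BitmapFactory.decodeByteArray(originalBytes, 0, originalBytes.length);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.JPEG, quality, out);
        bitmap.recycle();
        return out.toByteArray();
    }
}
```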

In client-server communication, which is the more efficient way to pass an image: Base64 conversion or a byte array?

I was passing a Mat image from Android to native JNI (C++). I was using OpenCV to pass the Mat image from Android to JNI, but the FPS count is 3.2 and it becomes very slow. Is converting to a Base64 string and passing that to JNI the efficient way, or is passing the bitmap byte array directly more efficient? Please explain which one is preferable for client-server communication.
If you base64-encode your image, you'll have to decode it later (on the client). That takes time and wastes resources (what if you had to encode/decode billions of images?)
An image is binary already, so the fastest way would probably be to simply read (or generate) an image and send it as it is.
All in all, it's better not to spend thousands of CPU cycles when you can spend only a few.
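To make the trade-off concrete, here is a small sketch contrasting the two options; the class and stream names are hypothetical, and Base64 inflates the payload by roughly a third on top of the extra CPU work:

```java
import android.util.Base64;
import java.io.IOException;
import java.io.OutputStream;

public final class ImageSender {
    // Sends the JPEG bytes as-is: no re-encoding, no size inflation.
    public static void sendRaw(byte[] jpegBytes, OutputStream socketOut) throws IOException {
        socketOut.write(jpegBytes);
        socketOut.flush();
    }

    // Base64 variant: the string is roughly 33% larger than the original bytes
    // and must be decoded again on the receiving side, costing extra CPU.
    public static String toBase64(byte[] jpegBytes) {
        return Base64.encodeToString(jpegBytes, Base64.NO_WRAP);
    }
}
```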

Better option to handle JPEG byte array decoding

Given a JPEG Encoded byte array taken from a camera, I need to decode it and display its image (bitmap) in my application. From searching around, I've found that there are two primary ways to go about this: use NDK Jpeg Decoding library or use BitmapFactory.decodeByteArray. This is for an experimental embedded device being developed that runs on Android with a built-in camera.
I would greatly prefer to develop the application in SDK not NDK if possible at all, but many people seemed to handle this problem by going straight to NDK, which bugged me a bit.
Is there some sort of inherent limitation in BitmapFactory.decodeByteArray that forces you to handle this problem by using libjpeg in NDK (Perhaps speed? Incompatibility?)
Performance isn't a big consideration unless it takes, say, more than 45 seconds to decode the image and display it.
This is an important decision I need to make upfront, so I'd appreciate any thoughtful answers. Thank you so much.
Here is a really good example and explanation of how you can decode images on the device efficiently without using the NDK: Displaying Bitmaps Efficiently. You have the option to decode the bitmap from a stream or a file, so it depends on your needs. In most of my applications I use the same method and it works great, so I suggest you take a look at the SDK example.
Hope it helps.
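A condensed sketch of the two-pass technique from that guide, adapted here to a byte[] (the requested target dimensions are placeholders):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class EfficientDecode {
    // Two-pass decode: first read only the dimensions, then decode subsampled
    // to roughly the requested size so the full-resolution image never has to
    // be held in memory.
    public static Bitmap decodeSampled(byte[] jpegData, int reqWidth, int reqHeight) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;                     // pass 1: dimensions only
        BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, options);

        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;                                 // halve until close to the target
        }

        options.inJustDecodeBounds = false;                    // pass 2: real decode
        options.inSampleSize = inSampleSize;
        return BitmapFactory.decodeByteArray(jpegData, 0, jpegData.length, options);
    }
}
```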

Fastest way to read/write a Bitmap from/to file?

I'm currently writing Bitmaps to a png file and also reading them back to a Bitmap. I'm looking for ways to improve the speed at which writing and reading happens. The images need to be lossless since I'm reading them back to edit them.
The place where I see the worst performance is the actual BitmapFactory.decode(...).
A few questions:
1. Is there a faster solution to read/write from file to a Bitmap using NDK?
2. Is there a better library to decode a Bitmap faster?
3. What is the best way to store and read a Bitmap?
Trying to find the best/fastest possible way to read/write an image to file came down to using plain old BitmapFactory. I tried using the NDK to do the encoding/decoding, but that really didn't make a difference.
Essentially the format to use was lossless PNG, since I didn't want to lose any quality after editing an image.
The main thing I needed to understand was how long encoding took versus decoding. The encoding numbers were in the 300-600 ms range, depending on image size, while decoding was fast, around 10-23 ms.
After understanding all that, I created a worker thread to which I passed images needing encoding and let it do the work without affecting the user experience. The image was kept cached in memory in case it was needed right away, before it was completely encoded and saved to file.
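A rough sketch of that worker-thread setup, assuming a single-threaded executor and PNG output (the names are illustrative, not the original code):

```java
import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class AsyncPngWriter {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Queues the slow (hundreds of ms) PNG encode on a worker thread so the UI
    // thread never waits for it. The caller keeps the Bitmap cached in memory
    // until the write completes, in case it is needed again right away.
    public void saveAsync(final Bitmap bitmap, final File target) {
        worker.execute(new Runnable() {
            @Override public void run() {
                try (FileOutputStream out = new FileOutputStream(target)) {
                    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out); // lossless
                } catch (IOException e) {
                    // handle/log the failure as appropriate for the app
                }
            }
        });
    }
}
```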

How to perform Image processing in Android without an API?

I'm planning to write an app for Android which performs simple cell counting. The method I'm planning to use is a type of blob analysis.
The steps of my procedure would be:
Histographing to identify the threshold values to perform the thresholding.
Thresholding to create a binary image where cells are white and the background is black.
Filtering to remove noise and excess particles.
Particle (blob) analysis to count cells.
I got this sequence from this site where functions from the software IMAQ Vision are used to perform those steps.
I'm aware that on Android I can use OpenCV's similar functions to replicate the above procedure. But I would like to know whether I'd be able to implement histographing, thresholding and Blob analysis myself writing the required algorithms without calling API functions. Is that possible? And how hard would it be?
It is possible. From a PNG image (e.g. from disk or camera), you can generate a Bitmap object. The Bitmap gives you direct access to the pixel color values. You can also create new Bitmap objects based on raw data.
Then it is up to you to implement the algorithms. Creating a histogram and thresholding should be easy, filtering and blob analysis more difficult. It depends on your exposure to algorithms and data structures, however a hands-on approach is not bad either.
Just make sure to downscale large images (Bitmap can do that too). This saves memory (which can be critical on Android) and gives better results.
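As a starting point, a minimal sketch of hand-rolled thresholding using only Bitmap pixel access (the luminance formula and fixed threshold are simplifications you would tune for real cell images):

```java
import android.graphics.Bitmap;
import android.graphics.Color;

public final class Thresholder {
    // Hand-rolled global threshold: pixels brighter than `threshold` become
    // white, the rest black. No OpenCV involved, just Bitmap pixel access.
    public static Bitmap threshold(Bitmap source, int threshold) {
        int width = source.getWidth();
        int height = source.getHeight();
        int[] pixels = new int[width * height];
        source.getPixels(pixels, 0, width, 0, 0, width, height);
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            // simple luminance estimate from the RGB channels
            int luma = (Color.red(p) + Color.green(p) + Color.blue(p)) / 3;
            pixels[i] = luma > threshold ? Color.WHITE : Color.BLACK;
        }
        return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
    }
}
```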
