I'm planning to write an app for Android which performs simple cell counting. The method I'm planning to use is a type of blob analysis.
The steps of my procedure would be:

1. Histogramming to identify the threshold values for the thresholding step.
2. Thresholding to create a binary image where cells are white and the background is black.
3. Filtering to remove noise and excess particles.
4. Particle (blob) analysis to count the cells.
I got this sequence from a site where functions from the IMAQ Vision software are used to perform those steps.
I'm aware that on Android I can use OpenCV's similar functions to replicate the above procedure, but I would like to know whether I could implement the histogramming, thresholding and blob analysis myself, writing the required algorithms without calling API functions. Is that possible, and how hard would it be?
It is possible. From a PNG image (e.g. from disk or camera), you can generate a Bitmap object. The Bitmap gives you direct access to the pixel color values. You can also create new Bitmap objects based on raw data.
Then it is up to you to implement the algorithms. Creating a histogram and thresholding should be easy; filtering and blob analysis will be more difficult. It depends on your exposure to algorithms and data structures, but a hands-on approach is not a bad way to learn either.
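As a rough illustration, here is a minimal sketch of the histogram and threshold steps on a Bitmap (the fixed threshold value below is just a placeholder; in practice you would derive it from the histogram, e.g. the valley between the two peaks):

```java
import android.graphics.Bitmap;
import android.graphics.Color;

// Minimal sketch of the histogram and threshold steps.
// The fixed threshold is a placeholder; derive it from 'hist'.
static Bitmap thresholdCells(Bitmap input) {
    int w = input.getWidth(), h = input.getHeight();
    int[] pixels = new int[w * h];
    input.getPixels(pixels, 0, w, 0, 0, w, h);

    int[] hist = new int[256];                  // gray-level histogram
    for (int p : pixels) {
        int gray = (Color.red(p) + Color.green(p) + Color.blue(p)) / 3;
        hist[gray]++;
    }
    int threshold = 128;                        // placeholder: pick from 'hist'

    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        int gray = (Color.red(p) + Color.green(p) + Color.blue(p)) / 3;
        pixels[i] = (gray > threshold) ? Color.WHITE : Color.BLACK;
    }
    return Bitmap.createBitmap(pixels, w, h, Bitmap.Config.ARGB_8888);
}
```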
Just make sure to downscale large images (Bitmap can do that too). This saves memory (which can be critical on Android) and gives better results.
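For example (the target size here is arbitrary):

```java
// Downscale before processing; the 1024x768 target size is arbitrary.
Bitmap small = Bitmap.createScaledBitmap(large, 1024, 768, true);
```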
I am currently working on an HDR application that requires the use of Camera2 to be able to customize HDR settings.
I have developed a customized algorithm to retrieve certain data from Raw DNG images and I would like to implement it on Android.
I am unfortunately not an expert in Java/Android; I taught myself how to code. With other formats, I have usually worked with bitmaps to retrieve pixel data (which was a relatively easy task given the existing methods).
For DNG files, however, I have found no documentation showing how to retrieve the pixel data. I thought of buffering the image, but the DNG file format contains a lot of information other than pixels, and I have been unable to work out an extraction strategy using a buffered stream. (I just want to store the pixels in an array.)
Does anyone have an idea? I would highly appreciate some tips.
Best regards
Camera2 does not produce DNGs directly - it produces plain RAW buffers, which you can then save to a DNG via DngCreator.
Are you operating on the initial RAW buffers, or saving DNGs and then loading them back?
In general, DNGs are not full baked images, so quite a bit of code is needed to render them completely - see for example Adobe's DNG SDK.
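If you operate on the initial RAW buffers, copying the 16-bit samples into an array could look roughly like this sketch (assuming an ImageReader configured with ImageFormat.RAW_SENSOR; the little-endian byte order is an assumption and may vary by device):

```java
import java.nio.ByteOrder;
import java.nio.ShortBuffer;
import android.media.Image;

// Rough sketch: copy the 16-bit samples of a Camera2 RAW_SENSOR Image
// into a short[]. 'reader' is an ImageReader set up for RAW_SENSOR;
// little-endian byte order is an assumption here.
Image image = reader.acquireNextImage();
Image.Plane plane = image.getPlanes()[0];
ShortBuffer raw = plane.getBuffer().order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
int stride = plane.getRowStride() / 2;      // row stride in samples, not bytes
int w = image.getWidth(), h = image.getHeight();
short[] pixels = new short[w * h];
for (int y = 0; y < h; y++) {
    raw.position(y * stride);               // skip any row padding
    raw.get(pixels, y * w, w);
}
image.close();
```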
I am developing a peer-to-peer collaborative drawing app on Android using the AllJoyn framework, similar to Chalkboard.
I have been able to implement collaborative chat among peers. Now I want to implement canvas sharing, where everyone can draw on a single canvas in real time.
How should I start with the canvas? What would its data structure be? Is there a specific image object I need to handle? Do I need to use JSON? Do I have to store the pixel values in a 2D array?
I need only a black-and-white screen, with a white background and black as the drawing color.
I just want to know the approach behind it. Any reference will be helpful.
Thanks.
Canvas is really a bitmap.
You add/change pixels on the bitmap using drawing commands.
To do collaborative drawing, you wouldn't share the pixel values between all users with each change.
That would create bottlenecks in serializing, transporting and deserializing; it would be too slow to work.
Instead, share the latest drawing commands between all users with each change.
If user#1 draws a line from [20,20] to [100,100], just serialize the command that drew the line and share it with all users.
Perhaps the serialization might look like this: "L 20,20 100,100".
If you want an efficient serialization structure, take a look at the way SVG does its path data; it is very efficient for transmitting to many users.
All other users would listen for incoming commands. Upon receipt, they would deserialize user#1's line and have it automatically drawn on their own canvas.
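A minimal sketch of that receiving side (class and field names are made up, and the AllJoyn transport is left out):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

// Minimal sketch of the receiving side: one shared Bitmap-backed Canvas,
// plus a handler that deserializes "L x1,y1 x2,y2" and draws the line.
class SharedBoard {
    private final Bitmap board = Bitmap.createBitmap(800, 600, Bitmap.Config.ARGB_8888);
    private final Canvas canvas = new Canvas(board);
    private final Paint pen = new Paint();

    SharedBoard() {
        board.eraseColor(Color.WHITE);   // white background
        pen.setColor(Color.BLACK);       // black drawing color
        pen.setStrokeWidth(3f);
    }

    // Called whenever a peer's command arrives, e.g. "L 20,20 100,100"
    void onCommandReceived(String cmd) {
        String[] p = cmd.split("[ ,]");
        if (p[0].equals("L")) {
            canvas.drawLine(Float.parseFloat(p[1]), Float.parseFloat(p[2]),
                            Float.parseFloat(p[3]), Float.parseFloat(p[4]), pen);
            // then invalidate the View that displays 'board'
        }
    }
}
```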
Hope you are all well.
I am at somewhat of a crossroads in my current project: I need to extract grayscale pixel values that will be sorted as per the discussion in my previous post (which was very kindly and thoroughly answered).
The two main methods that I am aware of are:

1. Extract the grayscale values from the YUV preview.
2. Take the photo and convert the RGB values to grayscale.
One of my main aims is simplicity; the project as a whole needs it. Thus my question: which of these two (or another method I am not aware of) would be the most reliable/stable, while being less taxing on the battery and processing time?
Please note, I am not after any code samples; I am looking for what people may have experienced, may have read (in articles etc.) or have an intuitive hunch about.
Thank you for taking the time to read this.
I'm currently working on a project which also uses pixel values to do some calculations, and I noticed that it's better to use the values directly from the YUV preview if you only need the grayscale, or need to use the entire preview for your calculation.
If you want to use the RGB values, or only need to calculate something based on a certain part of the preview, it's better to convert just the area you need to a Bitmap, for instance, and use that.
However, it all depends on what you're trying to achieve, since no two projects are alike. If you have the time, why not (roughly) implement both methods and do a quick test to see which works better in terms of CPU usage and total processing time? That's how I found the best method for my particular problem.
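For what it's worth, the first approach can be as small as this sketch, assuming the classic Camera preview callback with the default NV21 format (whose first width*height bytes are exactly the luma plane):

```java
import android.hardware.Camera;

// Sketch: grayscale straight from the NV21 preview; the first
// width*height bytes of the frame are the Y (luma) plane.
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        Camera.Size size = cam.getParameters().getPreviewSize();
        int n = size.width * size.height;
        int[] gray = new int[n];
        for (int i = 0; i < n; i++) {
            gray[i] = data[i] & 0xFF;   // unsigned luma, 0..255
        }
        // ... sort/process 'gray' here, no RGB conversion needed
    }
});
```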
I am writing a program to manipulate images, i.e. change their color, brightness, contrast, etc.
The DVM doesn't support manipulating images beyond a certain size. Can anyone tell me whether using OpenCV will solve the issue (as this seems to be a better option than the NDK)?
Or will I have to use the NDK?
I have searched a lot and was not able to find an answer.
First of all, there are different options for image processing on Android; for a comparison of the most popular options, see Android Computer Vision JavaCV OpenCV FastCV comparison and Image processing library for Android and Java.
Coming back to your question: if the images you deal with are really so large that they do not fit into the device's memory, you need to process them in small chunks called tiles.
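A sketch of that tiling idea using BitmapRegionDecoder (the 512-pixel tile size is arbitrary):

```java
import java.io.IOException;
import java.io.InputStream;
import android.graphics.Bitmap;
import android.graphics.BitmapRegionDecoder;
import android.graphics.Rect;

// Sketch: process a huge image in 512x512 tiles so the whole thing
// never has to fit in memory at once (tile size is arbitrary).
static void processInTiles(InputStream in) throws IOException {
    BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(in, false);
    int tile = 512;
    for (int y = 0; y < decoder.getHeight(); y += tile) {
        for (int x = 0; x < decoder.getWidth(); x += tile) {
            Rect region = new Rect(x, y,
                    Math.min(x + tile, decoder.getWidth()),
                    Math.min(y + tile, decoder.getHeight()));
            Bitmap chunk = decoder.decodeRegion(region, null);
            // ... adjust brightness/contrast on 'chunk', write it out ...
            chunk.recycle();
        }
    }
}
```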
If your images are not that big, I recommend using OpenCV if you have to do anything more than very simple tasks such as brightness/contrast adjustment.
Application size on a phone needs to be as small as possible. If I have an image of a sword, and then a very similar image of that same sword except that I've changed the color or added flames or changed the picture of the jewel or whatever, how do I store things as efficiently as possible?
One possibility is to store the differences graphically. I'd store just the image differences and then combine the two images at runtime. I've already asked a question on the graphic design stackexchange site about how to do that.
Another possibility would be that the APK already does this, or that there is already a file format or method people use to store similar images on Android.
Any suggestions? Are there tools I could use to take two PNGs and generate a difference file, or a file format for storing similar images, or something like that?
I'd solve this problem at a higher level. For example, do the color change at run-time: store the image with a very specific key color (like some ugly shade of green) that you know is to be replaced at run-time with white or red or blue or whatever actual color you want. Then you could generate several image buffers at load-time.
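A sketch of that color-swap idea (the key color and its replacement are placeholders):

```java
import android.graphics.Bitmap;

// Sketch: replace every pixel of the reserved key color (e.g. an ugly
// green) with the color actually wanted at run-time. Both colors are
// placeholders; the bitmap must be mutable.
static void recolor(Bitmap sword, int keyColor, int newColor) {
    int w = sword.getWidth(), h = sword.getHeight();
    int[] px = new int[w * h];
    sword.getPixels(px, 0, w, 0, 0, w, h);
    for (int i = 0; i < px.length; i++) {
        if (px[i] == keyColor) px[i] = newColor;
    }
    sword.setPixels(px, 0, w, 0, 0, w, h);
}
```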
For compositing the two images, just store the 'jewel' image separately, and draw it over the basic sword. Again, you could create a new image at load-time, or just do the overdraw at run-time.
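For instance, the compositing at load-time could look like this (the resource names and coordinates are made up):

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;

// Sketch: draw the separately stored jewel over the base sword.
// R.drawable.sword / R.drawable.jewel and the coordinates are made up.
Bitmap sword = BitmapFactory.decodeResource(res, R.drawable.sword)
        .copy(Bitmap.Config.ARGB_8888, true);   // mutable copy
Bitmap jewel = BitmapFactory.decodeResource(res, R.drawable.jewel);
new Canvas(sword).drawBitmap(jewel, 40f, 12f, null);
```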
This will help reduce your application's footprint on flash, but will not reduce the memory footprint when the app is active.
I believe your idea of storing the delta between the two images is quite good.
You would then compress the resulting delta file with a simple entropy coder, such as Huffman, and you are pretty likely to achieve a strong compression ratio if the similarity with the base image is strong.
If the similarity is really very strong, you could even try a range coder to achieve less-than-one-bit-per-pixel performance. The difference, however, might only be noticeable for larger images (i.e. higher resolution than a 12x12 sprite).
These ideas, however, will require you (or someone else) to write the code for such functions, though this should be quite straightforward.
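For illustration, the delta step itself is tiny; XOR produces zeros wherever the two images agree, which is exactly what the entropy coder then exploits:

```java
import android.graphics.Bitmap;

// Sketch: per-pixel delta between the base image and a variant.
// XOR gives zeros wherever the two images match; applying the same
// XOR to the base again rebuilds the variant.
static int[] delta(Bitmap base, Bitmap variant) {
    int w = base.getWidth(), h = base.getHeight();
    int[] a = new int[w * h], b = new int[w * h];
    base.getPixels(a, 0, w, 0, 0, w, h);
    variant.getPixels(b, 0, w, 0, 0, w, h);
    for (int i = 0; i < a.length; i++) {
        a[i] ^= b[i];
    }
    return a;
}
```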
A very easy approach is to use an image pack (one image containing many), so you can easily leverage the PNG or JPEG compression algorithms for your purpose. You then split the images apart before drawing.