Android: Will the phone camera resolution affect the result of preprocessing?

My current project is about Android image processing. If my phone camera is only about 1-2 megapixels, will that affect the result of preprocessing steps like grayscale conversion and binarization?

Your phone camera won't affect any pre-processing you perform, in the sense that your pre-processing code will act just the same regardless of the number of megapixels in your camera. Garbage in, garbage out still applies: if you start with a low-quality, poor-contrast, blurred picture, you aren't going to be able to turn it into something fantastic you want to hang on your wall. Additionally, as Mizuki alluded to in his comment, a 1-2 megapixel phone image is far higher resolution than the average image used on the internet, and those images can be binarised and greyscaled just fine.
As for the two methods of preprocessing you mentioned in your question:
Binarization
This just converts an image into a two colour version. Normally black and white, though other colours are possible. The number of pixels in the image doesn't matter for this, other than it taking longer if it has more pixels to process. Low quality mobile phone cameras can sometimes produce low contrast photos and this may make it harder for the binarization algorithm to correctly determine the threshold at which pixels should be displayed in either colour.
Greyscale
Converting an image to greyscale is done by manipulating the colours of each pixel so, again, the number of pixels should only increase the preprocessing time, not change the result.
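For illustration, here is a minimal sketch of both steps on an Android Bitmap (assuming the Bitmap is mutable; the threshold parameter stands in for whatever your binarization algorithm computes, e.g. 128 or an Otsu-style estimate):

import android.graphics.Bitmap;
import android.graphics.Color;

public final class Preprocess {
    // Converts the bitmap to greyscale in place, then binarizes it with a fixed threshold.
    public static void greyscaleAndBinarize(Bitmap bitmap, int threshold) {
        for (int y = 0; y < bitmap.getHeight(); y++) {
            for (int x = 0; x < bitmap.getWidth(); x++) {
                int c = bitmap.getPixel(x, y);
                // Luminance-weighted greyscale value of this pixel.
                int grey = (int) (0.299 * Color.red(c) + 0.587 * Color.green(c) + 0.114 * Color.blue(c));
                // Binarization: everything above the threshold becomes white, the rest black.
                int bw = (grey > threshold) ? 255 : 0;
                bitmap.setPixel(x, y, Color.rgb(bw, bw, bw));
            }
        }
    }
}

As you can see, the resolution only changes how many pixels the loops visit, not what happens to each pixel. (For real use you would read the pixels in bulk with getPixels rather than per pixel, and compute the threshold from the image rather than hard-coding it.)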

Related

Wrong rotation of images using Android camera2 API in Google's example code?

I have tried writing my own code for accessing the camera via the camera2 API on Android instead of using Google's example. On the one hand, I've wasted way too much time understanding what exactly is going on, but on the other hand, I have noticed something quite weird:
I want the camera to produce vertical images. However, despite the fact that the ImageReader is initialized with height larger than width, the Image that I get in the onCaptureCompleted has the same size, except it is rotated 90 degrees. I was struggling trying to understand my mistake, so I went exploring Google's code. And what I have found is that their images are rotated 90 degrees as well! They compensate for that by setting a JPEG_ORIENTATION key in the CaptureRequestBuilder (if you comment that single line, the images that get saved will be rotated). This is unrelated to device orientation - in my case, screen rotation is disabled for the app entirely.
The problem is that, for the purposes of the app I am making, I need non-compressed, precise data from the camera, so since JPEG (a) compresses images and (b) does so with losses, I cannot use it. Instead I use the YUV_420_888 format, which I later convert to a Bitmap. But while the JPEG_ORIENTATION flag can fix the orientation for JPEG images, it seems to do nothing for YUV ones. So how do I get the images to be correctly rotated?
One obvious solution is to rotate the resulting Bitmap, but I'm unsure what angle I should rotate it by on different devices. And more importantly, what causes such strange behavior?
Update: rotating the Bitmap and scaling it to the proper size takes way too much time for the preview (the context is as follows: I need both high-res images from the camera to process and a downscaled version of these same images for the preview; let's just say I'm making something similar to QR code recognition). I have even tried using RenderScript to manipulate the image efficiently, but it is still too slow. Also, I've read here that when I set multiple output surfaces simultaneously, the same resolution will be used for all of them, which is quite bad for me.
Android stores every image in landscape orientation, no matter whether it was taken in landscape or in portrait. It also stores metadata that tells you whether the image should be displayed in portrait or landscape.
If you don't rotate the image according to the metadata, you will end up with every image in landscape. I had that problem too (but I wanted compression, so my solution doesn't work for you).
You need to read the metadata and turn it accordingly.
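For camera2 specifically, the piece of metadata to read is the sensor orientation. A rough sketch of the idea (I only used it with JPEG/EXIF myself, but the same rotation should apply to the Bitmap you build from the YUV_420_888 data):

// Rotate the decoded Bitmap by the sensor orientation reported for the camera.
// Assumes 'characteristics' is the CameraCharacteristics of the camera you opened
// and 'source' is the Bitmap you converted from the YUV_420_888 Image.
Integer sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
int degrees = (sensorOrientation != null) ? sensorOrientation : 0; // typically 90 or 270
Matrix matrix = new Matrix(); // android.graphics.Matrix
matrix.postRotate(degrees);
Bitmap rotated = Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);

If the device were allowed to rotate you would also fold the display rotation into 'degrees', but since your screen rotation is locked, the sensor orientation alone should be enough.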
I hope this helps at least a bit.

Augmented image getting detected but not tracked

I am working on the augmented image example in ARCore, where I am able to detect the image, but the image is not getting tracked and the object is not getting placed. I am referring to the augmented image example from Codelabs. I have changed the image (a hand-made image whose arcoreimg score is 100) and also made the following changes to the code. The image is getting detected continuously but not tracked.
config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
config.setFocusMode(Config.FocusMode.AUTO);
For successful detection and tracking of Augmented Images in ARCore, follow these basic rules:
In ARCore 1.15+, if your image doesn't move (like a poster on a wall), you should attach a global anchor to the image to increase the tracking's stability.
The physical image has to occupy at least 25% of the camera frame.
The smallest image resolution should be 300 x 300 pixels.
You must track your image under appropriate lighting conditions. A barely lit room is not a good environment for an AR user experience.
It's much better to specify an expected physical size of a tracked image. Additional metadata improves tracking performance, especially for large physical images (more than 75 cm in size).
When ARCore detects a desired image with no expected physical size specified, its tracking state will be automatically paused. For the user, this means that ARCore has recognised the image but hasn't gathered enough data to estimate its location in 3D space. Do not use the image's pose and size estimates until the image's tracking state is TRACKING (see the sketch after this list).
Augmented Images support .png and .jpeg. However, avoid heavy compression for .jpeg.
Use images with high-contrast content; it doesn't matter whether they are color or black-and-white.
Avoid images with repetitive patterns (like Polka dot) and sparse features.
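As a minimal sketch of the tracking-state point above (assuming the ARCore Java API, version 1.12+ where getTrackingMethod() is available), only place your content once the image is fully tracked:

// In your per-frame update: only anchor content to images that are fully tracked.
for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
    if (image.getTrackingState() == TrackingState.TRACKING
            && image.getTrackingMethod() == AugmentedImage.TrackingMethod.FULL_TRACKING) {
        // Pose and extents are now reliable; attach your renderable to this anchor.
        Anchor anchor = image.createAnchor(image.getCenterPose());
        // ... place your object using 'anchor'
    }
}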
Andy's answer is correct, but maybe insufficiently specific. I had this issue as well and as soon as I added an expected width in meters, it started working almost immediately.
Instead of augmentedImageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap);
Use augmentedImageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap, <width in meters>);
Then it'll start tracking almost as soon as it is detected, and you won't have to deal with these paused-state shenanigans. It worked great for me with a 7 cm image with a score of 95, and it even works with an image with a score of 40. A 40-score image with a set width works better than a 100-score image without one.
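In context, the database setup then looks roughly like this (a sketch; DEFAULT_IMAGE_NAME and augmentedImageBitmap come from the codelab code, and 0.07f is just my measured 7 cm width as an example):

// Build the database with an expected physical width (in meters) for the image.
AugmentedImageDatabase db = new AugmentedImageDatabase(session);
db.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap, 0.07f);

Config config = new Config(session);
config.setAugmentedImageDatabase(db);
config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
config.setFocusMode(Config.FocusMode.AUTO);
session.configure(config);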

Android steganography detection LSB

I am trying to detect LSB steganography using the real-time camera on a mobile phone. So far I haven't had much luck detecting it, whether on printed material or on a PC screen.
I tried using OpenCV to convert each frame to RGB and then read the bits from each pixel, but that never detects the steganography.
I also tried using the Camera functionality and checking, pixel by pixel in onFrame, whether the starting string is recognized, so that I can read the actual hidden data from the remaining pixels.
This produced a positive result a few times, but then reading the data was impossible.
Any suggestions how to approach this?
A little more information on the hidden data:
1. It is spread all over the image, and I know the algorithm works, since if I just read the exact image through a Bitmap in the app, the steganography is detected and decoded; but when I try to use the camera, no such luck.
2. It is embedded in a grid of 8x5 pixels repeated all over the image, so it is not the case that it sits in only one specific area of the image and simply cannot be seen in the camera view.
I can post some code as well if needed.
Thanks.
You still haven't clarified the specifics of how you do it, but I assume you do some flavour of the following:
embed a secret in a digital image,
print this stego image or have it displayed on a pc, and
take a photograph of that and detect the embedded secret.
For all practical purposes, this can't work. LSB pixel-embedding steganography is a very fragile technique. You require a perfect copy of the stego image's pixels for extraction to work. Even a simple digital manipulation is enough to destroy your secret: scaling, cropping and rotation, to name a few. Then you have to worry about the angle you take the photo from and the ambient light. And we're not even touching upon the colours that are shown on a PC monitor or in the printed photo.
The only reason you get positives for the starting sequence is that you use a short one and you're bound to get lucky. Assuming the photographed stego image results in a random deviation of each pixel from its true value, you'll still get lucky sometimes. Imagine the first pixel had the value 250 and after being photographed it's 248. The LSB in both cases is still 0.
On top of that, some sequences are more likely to come up. In most photos neighbouring pixels are correlated, because the colour gradient is smooth. This means that if the top left of a photo is dark and the top right is bright, the colour will change slowly. For example, the first 4 pixels have the value 10, then the next few have 11, and so on. In terms of LSBs, you have the pattern 00001111 and as I've just explained, that's likely to come up fairly frequently regardless of what image you photograph out there.
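To make the parity point concrete, a tiny sketch (the pixel values are made up for illustration):

// The embedded bit only survives the photograph if the captured value
// happens to keep the same parity as the original pixel.
int original = 250;      // binary ...11111010, LSB = 0 (the embedded bit)
int capturedEven = 248;  // off by 2: LSB is still 0, the bit survives by luck
int capturedOdd = 249;   // off by 1: LSB flips to 1, the embedded bit is lost
System.out.println(original & 1);     // 0
System.out.println(capturedEven & 1); // 0
System.out.println(capturedOdd & 1);  // 1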

Android camera2: what kind of upsampling/downsampling is done to fit my streams?

I want to get image flows with the least distortion possible (no noise reduction, etc.) without having to deal with RAW outputs.
I'm working with two streams (or one when using the deprecated Camera API), one for the preview and one for the processing. I understand the camera2 API, but I am wondering what kind of upsampling/downsampling is used when fitting the sensor output to the surfaces.
More specifically, I'm working on zoomed images, and according to the camera2 documentation concerning cropping and the references:
For non-raw streams, any additional per-stream cropping will be done to maximize the final pixel area of the stream.
The whole concept is easy enough to understand, but it's also mentioned that:
Output streams use this rectangle to produce their output, cropping to a smaller region if necessary to maintain the stream's aspect ratio, then scaling the sensor input to match the output's configured resolution.
But I haven't been able to find any info about this scaling. Which method is used (filter-based, bicubic, edge-directed, etc.)? Is there a way to get this info? And is there a way I can actually choose which one is used?
Concerning the deprecated camera, I'm guessing the zoom is just simpler, in the sense that it's probably equivalent to having SCALER_CROPPING_TYPE_CENTER_ONLY with only a limited set of crop regions corresponding to the exposed zoom ratios. But is the image scaling the same as in camera2? If someone could shed some light I'd be happy.
Real life example
Camera sensor: 5312x2988(16:9)
I want a 4x zoom, so the crop region should be (1992, 1120, 1328, 747).
(By the way, what happens with odd sizes, for instance on SCALER_CROPPING_TYPE_CENTER_ONLY devices?)
Now I have a surface of size (1920, 1080); the crop area and the stream ratio match, but the 1328x747 crop must be transformed to fill the 1920x1080 surface. The nature of this transformation is what I want to know.
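For reference, this is roughly how I set that crop region in my code (a sketch; the odd-size rounding is exactly the part I'm unsure about):

// A 4x digital zoom: centre a quarter-size crop on the active array,
// matching the (1992, 1120, 1328, 747) example above. 'characteristics' and
// 'previewRequestBuilder' come from my existing camera2 setup code.
Rect active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float zoom = 4.0f;
int cropW = (int) (active.width() / zoom);  // 5312 / 4 = 1328
int cropH = (int) (active.height() / zoom); // 2988 / 4 = 747
int left = (active.width() - cropW) / 2;    // 1992
int top = (active.height() - cropH) / 2;    // 1120 (the odd remainder is rounded down)
previewRequestBuilder.set(CaptureRequest.SCALER_CROP_REGION,
        new Rect(left, top, left + cropW, top + cropH));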
The scaling algorithm used depends on the device; generally, for power efficiency and speed, scaling is done in hardware blocks, usually at the end of the camera image signal processor (ISP) pipeline.
Therefore, you can't generally rely on it being any particular kind of scaling or filtering. Unfortunately, if you want to understand the entire processing pipeline, you have to start with RAW and implement it yourself.
If you're on the same device, the old camera API and the new camera2 API talk to the same hardware abstraction layer and the same hardware scalers, so the scaling output will generally match exactly for the same resolution (with the exception of LEGACY-level devices, where camera2 may need additional GPU-based scaling, which will be bilinear downsampling, though you don't really know when this applies).

Compress imagesize during runtime

I'm having some memory issues with my app. It can pick an image from the user's personal gallery and store it in a file to be rendered on the screen. The issue is that the limit on the image size is very small, which I have discovered leads to pretty much every image on my device being too large to handle. So the method itself became useless, since it can't handle moderately sized images. I'm experiencing this only on iOS devices so far.
Is there a solution? Can I compress/minimize the size of the image in some way? Cut all images down to the same resolution (like Instagram's system)?
If you want to reduce the image size in bytes, there are at least three areas you can work on:
1. Reduce image dimensions (pixel resolution). This almost always causes loss of quality, but if your users are viewing on small-screen devices and you don't resize too much, the loss won't be significant. You can also use interpolation to minimize the visual degradation when resizing.
2. Reduce bit depth (color resolution). If the image is full color (32 or 24 bits per pixel), you can sometimes get away with reducing it to a lower color count, such as making it 8-bit. Again this will cause quality loss, but you can use dithering to reduce it.
3. Use better compression. Most images are already compressed these days, but in some cases you can re-compress an image to make the file smaller. One example is JPEG, which supports different quality factors. There are also different sub-types (color sampling frequencies) in JPEG, so if you save an image as 4:1:1 instead of 4:4:4, it will contain less color content and become smaller in byte size, but the difference is usually not noticeable to the human eye. This post has details on changing the JPEG quality factor on iOS.
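For the first and third points, a rough Android-flavoured sketch (the maximum side length and the JPEG quality are just example values to tune for your app):

import android.graphics.Bitmap;
import java.io.FileOutputStream;
import java.io.IOException;

public final class ImageShrinker {
    // Downscales the bitmap so its longest side is at most maxSide, then writes it as JPEG.
    public static void shrinkToFile(Bitmap source, int maxSide, int jpegQuality, String path)
            throws IOException {
        float scale = Math.min(1f, (float) maxSide / Math.max(source.getWidth(), source.getHeight()));
        int w = Math.round(source.getWidth() * scale);
        int h = Math.round(source.getHeight() * scale);
        Bitmap scaled = Bitmap.createScaledBitmap(source, w, h, true); // bilinear filtering
        try (FileOutputStream out = new FileOutputStream(path)) {
            // Lower quality means a smaller file at the cost of some compression artifacts.
            scaled.compress(Bitmap.CompressFormat.JPEG, jpegQuality, out);
        }
    }
}

Usage would be something like ImageShrinker.shrinkToFile(bitmap, 1024, 80, file.getPath());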
