Android steganography detection LSB

I am trying to detect LSB steganography using the real-time camera on a mobile phone. So far I haven't had much luck detecting it, whether on printed material or on a PC screen.
I tried using OpenCV to convert each frame to RGB and then read the bits from each pixel, but that never detects the steganography.
I also tried using the Camera API directly, checking in onFrame, pixel by pixel, whether the starting marker string is recognized, so I can then read the actual hidden data from the remaining pixels.
This gave a positive result a few times, but reading the data afterwards was impossible.
Any suggestions on how to approach this?
A little more information on the hidden data:
1. It is spread all over the image, and I know the algorithm works: if I read the exact image through a Bitmap in the app, the steganography is detected and decoded, but when I try to use the camera, no such luck.
2. It is embedded in an 8x5-pixel grid repeated across the image, so it is not confined to one specific area that could simply fall outside the camera view.
I can post some code as well if needed.
Thanks.

You still haven't clarified the specifics of how you do it, but I assume you do some flavour of the following:
embed a secret in a digital image,
print this stego image or have it displayed on a pc, and
take a photograph of that and detect the embedded secret.
For all practical purposes, this can't work. LSB pixel embedding is a very fragile steganography technique: you require a perfect copy of the stego image's pixels for extraction to work. Even a simple digital manipulation is enough to destroy your secret; scaling, cropping and rotation, to name a few. Then you have to worry about the angle you take the photo at and the ambient light. And we're not even touching upon how colours are reproduced on a PC monitor or a printed photo.
The only reason you get positives for the starting sequence is that you use a short one and you're bound to get lucky. Assuming the photographed stego image results in a random deviation of each pixel from its true value, you'll still get lucky sometimes. Imagine the first pixel had the value 250 and after being photographed it's 248: the LSB in both cases is still 0.
On top of that, some sequences are more likely to come up than others. In most photos neighbouring pixels are correlated, because the colour gradient is smooth. This means that if the top left of a photo is dark and the top right is bright, the colour will change slowly. For example, the first 4 pixels have the value 10, then the next few have 11, and so on. In terms of LSBs, that's the pattern 00001111, and as I've just explained, it's likely to come up fairly frequently regardless of what image you photograph.
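To see why extraction breaks down, here is a minimal sketch (plain Java, no Android dependencies; the pixel values and the simulated noise of up to two grey levels are made up for illustration, not taken from the question) of LSB extraction and what a tiny per-pixel deviation does to the recovered bits:
import java.util.Random;

public class LsbFragilityDemo {
    // Extract the least significant bit of each pixel value.
    static int[] extractLsbs(int[] pixels) {
        int[] bits = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            bits[i] = pixels[i] & 1;
        }
        return bits;
    }

    public static void main(String[] args) {
        // Hypothetical grey values of a stego image row (250 and the 10/11 gradient from the text).
        int[] original = {250, 251, 10, 10, 10, 10, 11, 11, 11, 11};
        int[] photographed = new int[original.length];

        // Simulate the camera/print round trip as a small random deviation of -2..+2 per pixel.
        Random rng = new Random(42);
        for (int i = 0; i < original.length; i++) {
            photographed[i] = Math.max(0, Math.min(255, original[i] + rng.nextInt(5) - 2));
        }

        int[] cleanBits = extractLsbs(original);
        int[] noisyBits = extractLsbs(photographed);

        int flipped = 0;
        for (int i = 0; i < cleanBits.length; i++) {
            if (cleanBits[i] != noisyBits[i]) flipped++;
        }
        // Any odd deviation flips the bit, so a large fraction of the payload bits are
        // corrupted, even though a short start marker can still match by chance.
        System.out.println("Flipped LSBs: " + flipped + " of " + cleanBits.length);
    }
}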

Augmented image getting detected but not tracked

I am working on the augmented image example in ARCore, where I am able to detect the image but it is not getting tracked and the object is not getting placed. I am referring to the augmented image example from Codelabs. I have changed the image (a hand-made image), whose arcoreimg score is 100, and have also made the following changes to the code. It gets detected continuously but is not tracked.
config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
config.setFocusMode(Config.FocusMode.AUTO);
For successful detection and tracking of Augmented Images in ARCore, follow these basic rules:
In ARCore 1.15+, if your image doesn't move (like a poster on a wall), you should attach a global anchor to the image to increase the tracking's stability.
The physical image has to occupy 1/4 of the camera feed.
The smallest image resolution should be 300 x 300 pixels.
You must track your image under appropriate lighting conditions; a barely-lit room is not a good environment for an AR user experience.
It's much better to specify an expected physical size of a tracked image. Additional metadata improves tracking performance, especially for large physical images (more than 75 cm in size).
When ARCore detects a desired image with no expected physical size specified, its tracking state will automatically be paused. For the user this means that ARCore has recognised the image but hasn't gathered enough data to estimate its location in 3D space. Do not use the image's pose and size estimates until the image's tracking state is TRACKING.
Augmented Images support .png and .jpeg. However, avoid heavy compression for .jpeg.
Use images with high-contrast content; it does not matter whether they are colour or black-and-white.
Avoid images with repetitive patterns (like polka dots) or sparse features.
Andy's answer is correct, but maybe insufficiently specific. I had this issue as well and as soon as I added an expected width in meters, it started working almost immediately.
Instead of augmentedImageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap);
Use augmentedImageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap, <width in meters>);
Then it'll start tracking almost as soon as it is detected, and you won't have to deal with these paused-state shenanigans. It worked great for me with a 7 cm image that has a score of 95, and even with an image scoring 40. A 40-score image with a set width tracks better than a 100-score image without one.
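For context, here is a rough sketch of how the width parameter and the tracking check fit together. This is a hedged outline: DEFAULT_IMAGE_NAME, augmentedImageBitmap and the 0.07 m width come from this thread, while the surrounding session/frame setup is assumed from the standard ARCore Java API (com.google.ar.core), not taken from the asker's code.
// Build the image database with an expected physical width so ARCore can
// estimate the pose right away instead of pausing tracking.
AugmentedImageDatabase imageDatabase = new AugmentedImageDatabase(session);
imageDatabase.addImage(DEFAULT_IMAGE_NAME, augmentedImageBitmap, 0.07f); // width in meters

Config config = new Config(session);
config.setAugmentedImageDatabase(imageDatabase);
config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
config.setFocusMode(Config.FocusMode.AUTO);
session.configure(config);

// Per frame: only attach an anchor/renderable once the image is actually TRACKING.
for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
    if (image.getTrackingState() == TrackingState.TRACKING
            && DEFAULT_IMAGE_NAME.equals(image.getName())) {
        Anchor anchor = image.createAnchor(image.getCenterPose());
        // ... place your object on this anchor ...
    }
}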

Extract pointclouds WITH colour using the Project Tango; i.e. getting the current camera frame

I am trying to produce a point cloud where each point has a colour. I can get just the point cloud or I can get the camera to take a picture, but I need them to be as simultaneous as possible. If I could look up an RGB image with a timestamp or call a function to get the current frame when onXYZijAvailable() is called I would be done. I could just go over the points, find out where it would intersect with the image plane and get the colour of that pixel.
As it is now I have not found any way to get the pixel info of an image or get coloured points. I have seen AR apps where the camera is connected to the CameraView and then things are rendered on top, but the camera stream is never touched by the application.
According to this post it should be possible to get the data I want and to synchronize the point cloud and the image plane by a simple transformation. This post says something similar. However, I have no idea how to get the RGB data. I can't find any open source projects or tutorials.
The closest I have gotten is finding out when a frame is ready by using this:
public void onFrameAvailable(final int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        // Get the new RGB frame somehow.
    }
}
I am working with the Java API and I would very much like to not delve into JNI and the NDK if at all possible. How can I get the frame that most closely matches the timestamp of my current point cloud?
Thank you for your help.
Update:
I implemented a CPU version of it and, even after optimising it a bit, I only managed to get 0.5 FPS on a small point cloud. This is also due to the fact that the colours have to be converted from Android's native NV21 colour space to the GPU-native RGBA colour space. I could have optimised it further, but I am not going to get a real-time effect with this; the CPU on the Android device simply cannot perform well enough. If you want to do this on more than a few thousand points, go for the extra hassle of using the GPU, or do it in post.
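For reference, the NV21-to-RGBA conversion mentioned above boils down to a per-pixel YUV-to-RGB transform. A minimal CPU sketch (standard conversion coefficients, nothing Tango-specific; this kind of per-pixel loop is exactly what ends up being the bottleneck) looks roughly like this:
// Convert an NV21 buffer (full-resolution Y plane followed by interleaved V/U
// samples at half resolution) into packed ARGB ints. Plain Java, CPU only.
public static int[] nv21ToArgb(byte[] nv21, int width, int height) {
    int[] argb = new int[width * height];
    int frameSize = width * height;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int yVal = nv21[y * width + x] & 0xFF;
            int uvIndex = frameSize + (y / 2) * width + (x & ~1);
            int v = (nv21[uvIndex] & 0xFF) - 128;
            int u = (nv21[uvIndex + 1] & 0xFF) - 128;

            int r = (int) (yVal + 1.370705f * v);
            int g = (int) (yVal - 0.698001f * v - 0.337633f * u);
            int b = (int) (yVal + 1.732446f * u);

            r = Math.max(0, Math.min(255, r));
            g = Math.max(0, Math.min(255, g));
            b = Math.max(0, Math.min(255, b));

            argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}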
Tango normally delivers color pixel data directly to an OpenGLES texture. In Java, you create the destination texture and register it with Tango.connectTextureId(), then in the onFrameAvailable() callback you update the texture with Tango.updateTexture(). Once you have the color image in a texture, you can access it using OpenGLES drawing calls and shaders.
If your goal is to color a Tango point cloud, the most efficient way to do this is on the GPU. That is, instead of pulling the color image out of the GPU and accessing it in Java, you pass the point data into the GPU and use OpenGLES shaders to transform the 3D points into 2D texture coordinates and look up the colors from the texture. This is rather tricky to get right if you're doing it for the first time, but may be required for acceptable performance.
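If it helps to see the overall shape, here is a rough fragment of how that texture hookup is usually arranged. This is a hedged sketch, not the asker's code: the connectTextureId()/updateTexture() calls are the ones named above, the AtomicBoolean flag and method names are mine, and the exact signatures should be verified against the Tango release you use.
// Fields of the renderer/activity that owns the GL context.
private final AtomicBoolean mColorFrameAvailable = new AtomicBoolean(false);
private int mColorTextureId;

// GL thread, once the GL context exists: create a texture and register it with Tango.
private void connectColorTexture(Tango tango) {
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    mColorTextureId = textures[0];
    tango.connectTextureId(TangoCameraIntrinsics.TANGO_CAMERA_COLOR, mColorTextureId);
}

// Tango callback (the onFrameAvailable() from the question): do no GL work here, just set a flag.
public void onFrameAvailable(final int cameraId) {
    if (cameraId == TangoCameraIntrinsics.TANGO_CAMERA_COLOR) {
        mColorFrameAvailable.set(true);
    }
}

// GL thread, each render pass: latch the newest frame into the texture and keep its
// timestamp so it can later be matched against the point cloud's timestamp.
private double latchColorFrame(Tango tango) {
    if (mColorFrameAvailable.compareAndSet(true, false)) {
        return tango.updateTexture(TangoCameraIntrinsics.TANGO_CAMERA_COLOR);
    }
    return -1.0; // sentinel: no new frame since the last latch
}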
If you really want direct access to pixel data without using the C API,
you need to render the texture into a buffer and then read the color data from the buffer. It's kind of tricky if you aren't used to OpenGL and writing shaders, but there is an Android Studio app that demonstrates that here, and is further described in this answer. This project demonstrates both how to draw the camera texture to the screen, and how to draw to an offscreen buffer and read RGBA pixels.
If you really want direct access to pixel data but decide that the NDK might be less painful than OpenGLES, the C API has TangoService_connectOnFrameAvailable() which gives you pixel data directly, i.e. without going through OpenGLES. Note, however, that the format of the pixel data is NV21, not RGB or RGBA.
I am doing this now by capturing depth with onXYZijAvailable() and images with onFrameAvailable(). I am using native code, but the same should work in Java. For every onFrameAvailable() I get the image data and put it in a preallocated ring buffer. I have 10 slots and a counter/pointer. Each new image increments the counter, which loops back from 9 to 0. The counter is an index into an array of images. I save the image timestamp in a similar ring buffer. When I get a depth image, onXYZijAvailable(), I grab the data and the timestamp. Then I go back through the images, starting with the most recent and moving backwards, until I find the one with the closest timestamp to the depth data. As you mentioned, you know that the image data will not be from the same frame as the depth data because they use the same camera. But, using these two calls (in JNI) I get within +/- 33msec, i.e. the previous or next frame, on a consistent basis.
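A minimal Java sketch of that ring-buffer matching (the buffer size of 10 comes from the answer above; the class and field names are mine, and the Tango-specific callbacks are only indicated in comments):
// Ring buffer that keeps the last N color frames with their timestamps and
// returns the one closest in time to a given depth timestamp.
public class FrameRingBuffer {
    private static final int SLOTS = 10;
    private final byte[][] images = new byte[SLOTS][];
    private final double[] timestamps = new double[SLOTS];
    private int next = 0;

    // Call from onFrameAvailable(): store the image data and its timestamp.
    public synchronized void push(byte[] imageData, double timestamp) {
        images[next] = imageData;
        timestamps[next] = timestamp;
        next = (next + 1) % SLOTS;
    }

    // Call from onXYZijAvailable(): walk back from the most recent slot and
    // pick the frame whose timestamp is closest to the depth timestamp.
    public synchronized byte[] closestTo(double depthTimestamp) {
        byte[] best = null;
        double bestDelta = Double.MAX_VALUE;
        for (int i = 0; i < SLOTS; i++) {
            int slot = ((next - 1 - i) % SLOTS + SLOTS) % SLOTS;
            if (images[slot] == null) continue;
            double delta = Math.abs(timestamps[slot] - depthTimestamp);
            if (delta < bestDelta) {
                bestDelta = delta;
                best = images[slot];
            }
        }
        return best;
    }
}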
I have not checked how close it would be to just naively use the most recently updated RGB image frame, but that should be pretty close.
Just make sure to use onXYZijAvailable() to drive the timing, because depth updates more slowly than RGB.
I have found that writing individual images to the file system using OpenCV::imwrite() does not keep up with the real time of the camera. I have not tried streaming to a file using the video codec. That should be much faster. Depending on what you plan to do with the data in the end you will need to be careful how you store your results.

How to equalize brightness, contrast, histograms between two images using EMGUCV

What I am doing is attempting to use EMGU to perform an AbsDiff of two images.
Given the following conditions:
User starts their webcam and with the webcam stationary takes a picture.
User moves into the frame and takes another picture (WebCam has NOT moved).
AbsDiff works well, but what I'm finding is that the ISO and white balance adjustments made by certain cameras (even on Android and iPhone) are uncontrollable to a degree.
Therefore instead of fighting a losing battle I'd like to attempt some image post processing to see if I can equalize the two.
I found the following thread but it's not helping me much: How do I equalize contrast & brightness of images using opencv?
Can anyone offer specific details of what functions/methods/approach to take using EMGUCV?
I've tried using things like _EqualizeHist(). This yields very poor results.
Instead of equalizing the histograms for each image individually, I'd like to compare the brightness/contrast values and come up with an average that gets applied to both.
I'm not looking for someone to do the work for me (although code example would CERTAINLY be appreciated). I'm looking for either exact guidance or some way to point the ship in the right direction.
Thanks for your time.
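One concrete way to implement the "compare the values and apply an average to both" idea above is to match each image's mean (brightness) and standard deviation (contrast) to their common average. The sketch below uses OpenCV's Java bindings purely for illustration; EMGU wraps the same underlying OpenCV routines (a mean/std computation and a linear convertTo), so the calls should map across, but treat the exact EMGU method names as something to look up rather than as given here.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;

public class ExposureMatcher {
    // Bring both images to the same mean (brightness) and standard deviation (contrast):
    // the average of the two. Written for single-channel (greyscale) Mats; run it per
    // channel for colour input.
    public static void equalizeToCommonStats(Mat a, Mat b) {
        MatOfDouble meanA = new MatOfDouble(), stdA = new MatOfDouble();
        MatOfDouble meanB = new MatOfDouble(), stdB = new MatOfDouble();
        Core.meanStdDev(a, meanA, stdA);
        Core.meanStdDev(b, meanB, stdB);

        double targetMean = (meanA.get(0, 0)[0] + meanB.get(0, 0)[0]) / 2.0;
        double targetStd = (stdA.get(0, 0)[0] + stdB.get(0, 0)[0]) / 2.0;

        rescale(a, meanA.get(0, 0)[0], stdA.get(0, 0)[0], targetMean, targetStd);
        rescale(b, meanB.get(0, 0)[0], stdB.get(0, 0)[0], targetMean, targetStd);
    }

    // Linear transform newPixel = alpha * pixel + beta, chosen so that the image's
    // mean/std become the target mean/std.
    private static void rescale(Mat img, double mean, double std, double targetMean, double targetStd) {
        double alpha = std > 1e-6 ? targetStd / std : 1.0;
        double beta = targetMean - alpha * mean;
        img.convertTo(img, -1, alpha, beta);
    }
}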

Android-Java detect text orientation and rotate image for ocr

I read a card using OCR on Android (or iOS). The process succeeds when the card is not upside down; when it is, the characters come out wrong and the process fails. I am using Tesseract and OpenCV algorithms.
For example, an image like this one. How can I detect the text orientation and rotate the image?
If the OCR technology you are using does not have a dedicated auto-rotate function (most do, so double-check), then the technique I use is to check either the character confidence or for words from a dictionary. ABBYY OCR, for example, has a dedicated auto-rotate setting. The OCR-IT API also has auto-rotate, and can additionally return flags such as IsWordFromDictionary in the XML result. Every OCR technology may work differently.
If you expect only 4 possible rotations, then the algorithm is:
Perform OCR. Check confidence, or dictionary words, or even just capitalization (an incorrect rotation will produce a mess like this: DioOpUllltG). Set a threshold above which you accept the result, such as 50%. You are hoping that your first OCR pass is on an image in the correct orientation (a statistical approach).
If the quality is lower than your threshold, then either you have a low-quality image in the correct orientation, or the orientation is wrong. Rotate and check the remaining three orientations, then pick the best one.
In some projects, where images may be at unpredictable extreme angles, such as 30 degrees, OCR will fail in all four flips. In that case I usually run an OCR pass at every 10 degrees of rotation (36 OCR passes) and pick the best one.
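Since the question mentions Tesseract on Android, here is a hedged sketch of the four-rotation/confidence approach using the tess-two TessBaseAPI wrapper. It assumes the API has already been initialised with init(); the 50% early-accept threshold is a placeholder, and the exact method behaviour should be checked against the tess-two version you use.
import android.graphics.Bitmap;
import android.graphics.Matrix;
import com.googlecode.tesseract.android.TessBaseAPI;

public class OrientationOcr {
    private static final int CONFIDENCE_THRESHOLD = 50; // early-accept threshold (placeholder)

    // Try 0/90/180/270 degrees and return the text from the most confident pass.
    public static String recognizeBestRotation(TessBaseAPI tess, Bitmap source) {
        String bestText = "";
        int bestConfidence = -1;
        for (int angle = 0; angle < 360; angle += 90) {
            Bitmap candidate = rotate(source, angle);
            tess.setImage(candidate);
            String text = tess.getUTF8Text();
            int confidence = tess.meanConfidence();
            if (confidence > bestConfidence) {
                bestConfidence = confidence;
                bestText = text;
            }
            if (confidence >= CONFIDENCE_THRESHOLD) {
                break; // good enough: assume this orientation is correct (statistical shortcut)
            }
        }
        return bestText;
    }

    private static Bitmap rotate(Bitmap src, int degrees) {
        if (degrees == 0) return src;
        Matrix m = new Matrix();
        m.postRotate(degrees);
        return Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), m, true);
    }
}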

Android: Will the phone camera resolution affect the result of preprocessing

My current project is about Android image processing. If my phone camera is only about 1-2 megapixels, will it affect the result of preprocessing such as greyscaling and binarization?
Your phone camera won't affect any pre-processing you perform, in the sense that your pre-processing code will act just the same regardless of the number of megapixels in your camera. Garbage in, garbage out still applies: if you start with a low-quality, poor-contrast, blurred picture, you aren't going to be able to turn it into something fantastic you want to hang on your wall. Additionally, as Mizuki alluded to in his comment, a 1-2 megapixel phone image is far higher resolution than the average image used on the internet, and those can be binarised and greyscaled just fine.
As for the two methods of preprocessing you mentioned in your question:
Binarization
This just converts an image into a two-colour version, normally black and white, though other colours are possible. The number of pixels in the image doesn't matter for this, other than processing taking longer when there are more pixels to process. Low-quality mobile phone cameras can sometimes produce low-contrast photos, and this may make it harder for the binarization algorithm to correctly determine the threshold at which pixels should be assigned to either colour.
Greyscale
Converting an image to greyscale is done by manipulating the colours of each pixel so, again, the number of pixels should only increase the preprocessing time, not change the result.
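To make the two operations concrete, here is a minimal sketch using plain Bitmap pixel loops. The 0.299/0.587/0.114 luminance weights are the standard ones; the fixed threshold passed to binarize() is a placeholder rather than an adaptive choice, and the getPixel()/setPixel() loops are the slow-but-simple way to do this, so the resolution only changes how long they take, not what they produce.
import android.graphics.Bitmap;
import android.graphics.Color;

public class SimplePreprocessing {
    // Greyscale: replace each pixel with its luminance; resolution only affects runtime.
    public static Bitmap toGreyscale(Bitmap src) {
        Bitmap out = src.copy(Bitmap.Config.ARGB_8888, true);
        for (int y = 0; y < out.getHeight(); y++) {
            for (int x = 0; x < out.getWidth(); x++) {
                int c = out.getPixel(x, y);
                int lum = (int) (0.299 * Color.red(c) + 0.587 * Color.green(c) + 0.114 * Color.blue(c));
                out.setPixel(x, y, Color.rgb(lum, lum, lum));
            }
        }
        return out;
    }

    // Binarization: everything above the threshold becomes white, the rest black.
    public static Bitmap binarize(Bitmap grey, int threshold) {
        Bitmap out = grey.copy(Bitmap.Config.ARGB_8888, true);
        for (int y = 0; y < out.getHeight(); y++) {
            for (int x = 0; x < out.getWidth(); x++) {
                int lum = Color.red(out.getPixel(x, y)); // already greyscale, so R == G == B
                int bw = lum > threshold ? 255 : 0;
                out.setPixel(x, y, Color.rgb(bw, bw, bw));
            }
        }
        return out;
    }
}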
