I want to compute a SHA-1 hash of different bitmaps (SHA isn't strictly required).
The problem is that there are some bitmaps (captchas) which are basically the same, but the file name changes often.
I've found this:
Compute SHA256 Hash in Android/Java and C#
But it isn't the solution I wanted.
Bitmap.hashCode() generates only an integer, and if I understand it correctly:
Returns an integer hash code for this object. By contract, any two objects for which equals(Object) returns true must return the same hash code value. This means that subclasses of Object usually override both methods or neither method.
I don't want a hash code of the object, I want a hash code of the bitmap content.
Thanks!
In Android 3.1 or later (API Level 12) there is a method on Bitmap called sameAs() which will compare the pixels and return whether the two represent the same image. It does this in native code, so it is relatively fast.
If you must target a lower API level, you must write a method that iterates over each pixel of the two objects and checks whether they match. This will be a very intensive process if done in Java code, so you may consider writing a small routine using the NDK that you can call from your application to do the comparison in native code (there are Bitmap APIs in the NDK, so you can easily get at the pixel buffers).
If you opt to do so in Java, getPixels() will assist you in obtaining arrays of the pixel data that you can compare between the two images.
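A minimal sketch of that Java route, assuming both bitmaps are non-null (the method name samePixels is just illustrative):
public static boolean samePixels(Bitmap a, Bitmap b) {
    // Different dimensions can never hold the same pixel content.
    if (a.getWidth() != b.getWidth() || a.getHeight() != b.getHeight()) {
        return false;
    }
    int w = a.getWidth();
    int h = a.getHeight();
    int[] pixelsA = new int[w * h];
    int[] pixelsB = new int[w * h];
    // getPixels(buffer, offset, stride, x, y, width, height)
    a.getPixels(pixelsA, 0, w, 0, 0, w, h);
    b.getPixels(pixelsB, 0, w, 0, 0, w, h);
    return java.util.Arrays.equals(pixelsA, pixelsB);
}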
HTH
Here is a more direct way of computing a bitmap hash, using Arrays.hashCode and bitmap.getPixels:
int hash(Bitmap bitmap){
    // One int per pixel; the third argument to getPixels is the row stride.
    int[] buffer = new int[bitmap.getWidth() * bitmap.getHeight()];
    bitmap.getPixels(buffer, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    return Arrays.hashCode(buffer);
}
The fastest solution I have found so far in Kotlin:
fun Bitmap.hash(): Int {
    val buffer: ByteBuffer = ByteBuffer.allocate(this.height * this.rowBytes)
    this.copyPixelsToBuffer(buffer)
    // copyPixelsToBuffer advances the buffer position, and ByteBuffer.hashCode()
    // only looks at the remaining bytes, so rewind before hashing.
    buffer.rewind()
    return buffer.hashCode()
}
It is nearly 100x faster than the accepted answer.
You could try to write your own function using only the pixels of the Bitmap:
public long hashBitmap(Bitmap bmp){
    long hash = 31; // or a higher prime of your choice
    for(int x = 0; x < bmp.getWidth(); x++){
        for (int y = 0; y < bmp.getHeight(); y++){
            hash *= (bmp.getPixel(x,y) + 31);
        }
    }
    return hash;
}
If it's only about comparing two images, you could optimise this routine to hash just every second or every x-th pixel, as in the sketch below.
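A quick sketch of that sampling idea (the step parameter and method name are made up for illustration):
public long hashBitmapSampled(Bitmap bmp, int step) {
    // Sample only every step-th pixel in each direction; a larger step
    // trades collision resistance for speed.
    long hash = 31;
    for (int x = 0; x < bmp.getWidth(); x += step) {
        for (int y = 0; y < bmp.getHeight(); y += step) {
            hash *= (bmp.getPixel(x, y) + 31);
        }
    }
    return hash;
}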
I had a similar problem and this worked for me (it solved getting a unique name for a specific bitmap, so I could check whether it was already stored):
fun getUniqueBitmapFileName(bitmap: Bitmap): String {
    val buffer = ByteBuffer.allocate(bitmap.byteCount)
    bitmap.copyPixelsToBuffer(buffer)
    return Arrays.hashCode(buffer.array()).toString()
}
I trained a quantized semantic segmentation model on my own dataset using the Python scripts available on Deeplab's official GitHub page. I used the mobilenetv2_coco_voc_trainaug backbone. I checked the resulting model in Netron, and this is how the input and output look:
As you can see, the output is an array of int64 with a size of 257x257. From my understanding this array should contain the index of the label with the highest probability at every array position, or am I missing something?
But when I try to read this in Android, I get just zeros and ones, regardless of what is in the picture: people, cows, etc.
for (y in 0 until imageHeight) {
for (x in 0 until imageWidth) {
// resultBuffer is a ByteBuffer of size imageSize * imageSize * 8
val value = resultBuffer.getLong((y * imageWidth + x) * 8)
}
}
The result is not that accurate either, since I'm getting segmentation values where I shouldn't.
Any help would be appreciated!
I can't comment yet, so let me take a guess.
You are trying to use a quantized model with an int64 output. The output of a quantized model should be an 8-bit type, and it should be read accordingly (see the sketch below).
And yes, accuracy will drop with a quantized model.
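A minimal sketch of reading an 8-bit output, assuming the converted model emits one uint8 label per pixel (the buffer and size names follow the question's code):
for (int y = 0; y < imageHeight; y++) {
    for (int x = 0; x < imageWidth; x++) {
        // One byte per pixel now, so no * 8 offset; mask to read it as unsigned.
        int label = resultBuffer.get(y * imageWidth + x) & 0xFF;
    }
}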
I am trying to do something in an if-statement. This works in every version of Android (16 or higher, because of getDrawable) except Android L (tested on the latest). The code is the following:
if (item.getIcon().getConstantState().equals(getResources().getDrawable(R.drawable.add_to_fav_normal).getConstantState()))
Any help/hints or explanation would be appreciated!
Use item.getContext().getDrawable(int) or the equivalent ContextCompat method.
Starting in API 21, all framework widgets that load drawables use Context.getDrawable() which applies the context's current theme during inflation. This basically just calls getResources().getDrawable(..., getTheme()) internally, so you could also use context.getResources().getDrawable(..., context.getTheme()).
if (item.getIcon().getConstantState().equals(item.getContext()
        .getDrawable(R.drawable.add_to_fav_normal).getConstantState()))
In general, though, you shouldn't rely on this check. There are no API guarantees around what constant state you'll receive from a particular drawable.
This solution is convenient for tests only:
public static void assertEqualDrawables(Drawable drawableA, Drawable drawableB) {
Bitmap bitmap1 = ((BitmapDrawable) drawableA).getBitmap();
Bitmap bitmap2 = ((BitmapDrawable) drawableB).getBitmap();
ByteBuffer buffer1 = ByteBuffer.allocate(bitmap1.getHeight() * bitmap1.getRowBytes());
bitmap1.copyPixelsToBuffer(buffer1);
ByteBuffer buffer2 = ByteBuffer.allocate(bitmap2.getHeight() * bitmap2.getRowBytes());
bitmap2.copyPixelsToBuffer(buffer2);
Assert.assertTrue(Arrays.equals(buffer1.array(), buffer2.array()));
}
Based on #alanv's answer, below is what I did, and it was successful:
if (imgClicked.getDrawable().getConstantState()
.equals(ContextCompat.getDrawable(this,
R.drawable.add_profile).getConstantState())) {
//Both images are same
}else{
//Both images are NOT same
}
Thanks #alanv :)
I am writing an Android application that must paint specific parts of a loaded bitmap image according to received events.
I need to paint (or change the current color of) a single part of a bitmap image, without changing the rest of the image.
Let's say I have a car, which is divided into many parts: door, windows, wheels, etc.
Each time an event (received from the network) arrives, I need to change the color of that particular part with the color specified by the event data.
What would be the best technique to achieve that?
I first thought of FloodFill, as suggested in many threads on SO, but given that the messages arrive quite fast (several per second) I fear it would drag performance down, as it seems to be a very CPU-intensive algorithm.
I also thought about having multiple versions of the same image, each part colored with a different color, and showing the right one at the right time, but the car has at least 10 different parts and each one could be painted with 4-6 colors, so I would end up with dozens of images, which would be impractical to handle, not to mention the waste of memory.
So, is there any other approach?
The fastest way to do it is with a shader. You'll need to use OpenGL ES 2 for that (some Androids only support ES 1). You'll need a temporary bitmap the same size as the image you want to change. Set it as the target. In the shader, retrieve a pixel from the sampler which is bound to the image you want to change. If it's within a small tolerance of the colour you want to change, set gl_FragColor to the new colour, otherwise just set gl_FragColor to the colour you retrieved from the sampler. You'll need to pass the desired colour and the new colour into the shader as vec4s with al_set_shader_float_vector. The fastest way to do this is to keep 2 bitmaps and swap between them as the "main one" that you're using each time a colour changes.
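As a rough illustration of such a fragment shader (plain GLSL ES 2 embedded in a Java string; the uniform names are made up and it is not tied to any particular framework):
static final String COLOR_SWAP_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTexture;   // image being recolored\n" +
        "uniform vec4 uTargetColor;    // colour to replace\n" +
        "uniform vec4 uNewColor;       // replacement colour\n" +
        "uniform float uTolerance;     // small match tolerance\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "  vec4 c = texture2D(uTexture, vTexCoord);\n" +
        "  if (distance(c.rgb, uTargetColor.rgb) < uTolerance) {\n" +
        "    gl_FragColor = vec4(uNewColor.rgb, c.a);\n" +
        "  } else {\n" +
        "    gl_FragColor = c;\n" +
        "  }\n" +
        "}";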
If you can't use a shader, then you'll have to lock the bitmap and replace the colour. Use al_lock_bitmap to lock it, then you can use al_get_pixel and al_put_pixel to change colours. Then al_unlock_bitmap when you're done. You can also avoid using al_get_pixel/al_put_pixel and access the memory manually which will be faster. If you lock the bitmap with the format ALLEGRO_PIXEL_FORMAT_ABGR_8888_LE then the memory is laid out like so:
int w = al_get_bitmap_width(bitmap);
int h = al_get_bitmap_height(bitmap);
for (int y = 0; y < h; y++) {
unsigned char *p = (unsigned char *)locked_region->data + locked_region->pitch * y;
for (int x = 0; x < w; x++) {
unsigned char r = p[0];
unsigned char g = p[1];
unsigned char b = p[2];
unsigned char a = p[3];
/* change r, g, b, a here if they match */
p[0] = r;
p[1] = g;
p[2] = b;
p[3] = a;
p += 4;
}
}
It's recommended that you lock the image in the format it was created in. That means picking an easy one like the one I mentioned, or else the inner part of the loop gets more complicated. The ABGR_8888 part of the pixel format describes the layout of the data: ABGR gives the order of the components. If you were to read a pixel into a single storage unit (an int in this case, but it works the same with a short) then the bit pattern would be AAAAAAAABBBBBBBBGGGGGGGGRRRRRRRR. However, when you're reading a byte at a time, most machines are little-endian, so the least significant byte comes first. That's why in my sample code p[0] is red. The 8888 part says how many bits each component has.
I have developed an algorithm for Android using OpenCV. I need to find the overlap between the previous image and the current frame, so I produce a template from the previous image and match it against the current frame. This procedure is repeated to complete the photograph (taking more than 10 pictures).
Here is the code that I have developed to find the overlap.
public void overlapFinder(Mat inputFrame , Mat inputTemplate )
{
Mat mResult;
int resultWidth = inputFrame.width() - inputTemplate.width() + 1;
int resultHeight = inputFrame.height() - inputTemplate.height() + 1;
mResult = new Mat(resultHeight, resultWidth, CvType.CV_8U);
Imgproc.matchTemplate(inputFrame, inputTemplate, mResult,Imgproc.TM_CCORR_NORMED) ;
Core.MinMaxLocResult result = Core.minMaxLoc(mResult);
@SuppressWarnings("unused")
double maxVal = result.maxVal;
}
The problem is that when the "overlap function" is called after generating the template from the previous image, the application crashes.
Would anyone please help me with that?
Thanks
Maybe you really need to do some debugging first, but in any case, I can see from your code that it would be worthwhile checking the sizes of your images - it seems that your code assumes the template is always smaller than the input frame.
If that's not true, you will get negative resultWidth and/or resultHeight, which will make it crash.
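A quick sketch of that check at the top of the question's overlapFinder (purely illustrative):
// Guard against a template larger than the frame; otherwise resultWidth or
// resultHeight goes negative and constructing the result Mat will fail.
if (inputTemplate.width() > inputFrame.width()
        || inputTemplate.height() > inputFrame.height()) {
    throw new IllegalArgumentException("Template must be no larger than the input frame");
}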
One other thing - the documentation suggests that the result type should be CV_32FC1.
PS - Try initialising your result like this:
mResult.create(resultHeight, resultWidth, CvType.CV_32FC1);
I come from the Qt world and I am porting an application to Android. I am a bit confused; I have been banging my head for a few days now on something that must be so trivial that I cannot see why it's not working.
Some background: I have a C++ engine which I use through the NDK and JNI. This engine creates some bitmaps and passes them to the Java side; the Java side must display them on a View and let the user interact with them (drag and such).
The engine works properly, because I use it under Qt with full success. This is the workflow:
1- Java loads a big Bitmap from a custom data file (the C++ engine expects it to be in ARGB format, but it's compressed JPG data)
Bitmap.Config fmt = Bitmap.Config.ARGB_8888;
Bitmap bitmap = BitmapFactory.decodeByteArray(buffer, 0, size).copy( fmt , false);
2- Initialize the C++ engine, passing the bitmap. The C++ engine "breaks" the bitmap into smaller tiles. For each tile it builds a rather complex alpha mask and stores it in the first byte of each pixel (the "a" byte). This alpha mask only uses two values: 0xFF for opaque and 0x00 for transparent.
init_C_engine( this.fullImage );
3- The Java side then allocates all the tile bitmaps. I do it in two steps because before init I don't know what size the tiles will be; the engine populates the tile_width and tile_height arrays:
Bitmap.Config fmt = Bitmap.Config.ARGB_8888;
for (int t = 0; t < this.puzzle_size; t++ ){
    tile_data[ t ] = Bitmap.createBitmap( tile_width[t], tile_height[t], fmt);
}
4- Last step: inside the C++ engine, all the tile bitmaps are filled:
for ( int n = 0; n < nBitmaps; n++ )
{
jobject bitmap = env->GetObjectArrayElement( bitmaps, n );
AndroidBitmap_getInfo(env, bitmap, &info);
AndroidBitmap_lockPixels(env, bitmap, reinterpret_cast<void **>(&pixels));
game->getTileBitmap( n, (unsigned char*)pixels );
AndroidBitmap_unlockPixels(env, bitmap);
env->SetObjectArrayElement( bitmaps, n, bitmap );
}
Now, in my custom View:
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
canvas.drawColor(Color.BLACK);
for ( int tile = 0; tile < board.nTiles; tile++ ){
canvas.drawBitmap( tile_data[tile],
tile_x[tile],
tile_y[tile], paint);
}
}
What I expect is to see my tiles with transparent areas on my View. What I get instead is weird behaviour: on the black background I see the ENTIRE tile, as if the alpha bytes were all set to opaque, but when I move the tiles one on top of the other, the "transparent" areas get combined in some strange way, as if the colors were XORed or multiplied somehow! When I move one tile over another I can see the areas where the alpha bytes are set to transparent, but the colors get mangled instead of being transparent!
Basically I expect that pixels with alpha set to 0 are drawn as transparent... I looked on the internet but I could not find anything useful to help me out....
Does somebody have ideas? Anything will be appreciated!
Thanks.
Shouldn't you use the index t instead of tile in the for loop inside onDraw? Like this:
canvas.drawBitmap(tile_data[t], tile_x[t], tile_y[t], paint);