Three Dimensional arrays in Android

I am not able to create a 3D array larger than (100,100,3) in Android. However, it works fine with arrays smaller than the above-mentioned dimensions.
My code:
double mat[][][] = new double[400][400][3];
Error: java.lang.OutOfMemoryError: OutOfMemoryError thrown while trying to throw OutOfMemoryError; no stack available
However,
double mat[][][] = new double[100][100][3];
works fine. I am running the application on an emulated Android virtual device.

It's probably memory. On Android, a double takes 64 bits, which is 8 bytes.
You are creating a 400x400x3 3D array, so its size will be
400 * 400 * 3 * 8 bytes ≈ 3.7 MB
while the smaller one is
100 * 100 * 3 * 8 bytes ≈ 234 KB
It is much more likely that the runtime can find a contiguous 234 KB block than a 3.7 MB one.
Thanks to Luca's comment.
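If memory is indeed the problem, one workaround (a sketch of my own, not from the original answer) is to replace the nested array with a flat one. new double[400][400][3] actually allocates 160,401 separate array objects (one outer, 400 middle, and 160,000 inner double[3] arrays), and the per-object headers alone add roughly a couple of megabytes on top of the raw data. A flat array stores the same values in a single allocation with manual index math:

// A flat 1D array instead of double[400][400][3]: same data,
// one allocation, no per-row/per-cell array-object overhead.
final int W = 400, H = 400, C = 3;
double[] mat = new double[W * H * C]; // raw data only, ~3.7 MB

// Element (x, y, c) maps to a single flat index:
int x = 10, y = 20, c = 2;
mat[(y * W + x) * C + c] = 1.0;
double v = mat[(y * W + x) * C + c];

You can also request a larger heap with android:largeHeap="true" in the manifest, but shrinking the allocation is usually the better fix.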

Related

How to interpret mobilenetv2 segmentation result output on Android?

I trained a quantized semantic segmentation model on my own dataset using the Python scripts available on DeepLab's official GitHub page. I used the mobilenetv2_coco_voc_trainaug backbone. I checked the resulting model in Netron, and this is how the input and output look:
As you can see, the output is an array of int64 with size 257x257. From my understanding, this array should contain the index of the label with the highest probability at every position, or am I missing something?
But when I try to read this in Android, I get just zeros and ones, regardless of what is in the picture: people, cows, etc.
for (y in 0 until imageHeight) {
    for (x in 0 until imageWidth) {
        // resultBuffer is a ByteBuffer of size imageSize * imageSize * 8
        val value = resultBuffer.getLong((y * imageWidth + x) * 8)
    }
}
The result is not that accurate either, since I'm getting segmentation values where I shouldn't.
Any help would be appreciated!
Can't comment yet, so let's try a guess.
You are trying to use a quantized model but reading its output as int64; the output of a quantized model should be an 8-bit type.
And yes, accuracy will drop with a quantized model.
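If that's the case, each pixel is one unsigned byte rather than an 8-byte long, so the buffer would be read like this (a minimal Java sketch under that assumption; the Kotlin version is analogous):

import java.nio.ByteBuffer;

// Assumes the quantized model writes one uint8 class index per pixel,
// so resultBuffer holds imageWidth * imageHeight bytes, not longs.
static int[][] readLabels(ByteBuffer resultBuffer, int imageWidth, int imageHeight) {
    int[][] labels = new int[imageHeight][imageWidth];
    for (int y = 0; y < imageHeight; y++) {
        for (int x = 0; x < imageWidth; x++) {
            // mask with 0xFF to treat the byte as unsigned (0..255)
            labels[y][x] = resultBuffer.get(y * imageWidth + x) & 0xFF;
        }
    }
    return labels;
}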

TensorFlow object detection fails on Xamarin Android with a reshape issue

I am following this blog post and GitHub almost exactly:
Blog
Github
But when I run the app, take a picture, and this code is called:
var outputs = new float[tfLabels.Count];
tfInterface.Feed("Placeholder", floatValues, 1, 227, 227, 3);
tfInterface.Run(new[] { "loss" });
tfInterface.Fetch("loss", outputs);
The app crashes on the .Run line and generates the error below in the output window:
04-04 17:39:12.575 E/TensorFlowInferenceInterface( 8017): Failed to
run TensorFlow inference with inputs:[Placeholder], outputs:[loss]
Unhandled Exception:
Java.Lang.IllegalArgumentException: Input to reshape is a tensor with
97556 values, but the requested shape requires a multiple of 90944
[[Node: block0_0_reshape0 = Reshape[T=DT_FLOAT, Tshape=DT_INT32,
_device="/job:localhost/replica:0/task:0/device:CPU:0"](block0_0_concat,
block0_0_reshape0/shape)]]
According to the posts I found while searching for this error, I sort of understand that it is caused by the image not fitting the expected size exactly. But in the example I am following, the image is resized to 227x227 every time and converted to float, as in these lines:
var resizedBitmap = Bitmap.CreateScaledBitmap(bitmap, 227, 227, false).Copy(Bitmap.Config.Argb8888, false);
var floatValues = new float[227 * 227 * 3];
var intValues = new int[227 * 227];
resizedBitmap.GetPixels(intValues, 0, 227, 0, 0, 227, 227);
for (int i = 0; i < intValues.Length; i++)
{
    var val = intValues[i];
    // subtract the per-channel means (BGR order)
    floatValues[i * 3 + 0] = ((val & 0xFF) - 104);
    floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - 117);
    floatValues[i * 3 + 2] = (((val >> 16) & 0xFF) - 123);
}
So, I don't understand what is causing this or how to fix it. Please help!
UPDATE: I found out the issue is with my model or my labels. I found this out by simply swapping in the model and label files from the sample/GitHub above while leaving all my code the same. When I did this, I no longer got the error. HOWEVER, this still doesn't tell me much: the error is not explanatory enough to point me toward what could be wrong with my model. I assume it is the model, because the labels file is just a text file with one label per line. I used the Custom Vision Service on Azure to create my model. It trained fine and tests just fine on the web portal. I then exported it as TensorFlow. So I am not sure what I could have done wrong or how to fix it.
Thanks!
After no answers here and several days of searching and trial and error, I found the issue. In general, you can get this reshape error if you are feeding the model an image size other than the one it expects or was set up to receive.
Everything I had read said that you typically must feed the model a 227 x 227 x 3 image. Then I started noticing that the size varies across posts: some people say 225 x 225 x 3, others 250 x 250 x 3, and so on. I tried those sizes as well, with no luck.
As you can see in my edit to the question, I did have a clue: when using somebody else's pretrained model, my code works fine, but when I use my custom model, which I created on the Microsoft Azure CustomVision.ai site, I was getting this error.
So I decided to inspect the models to see what was different. I followed this post: Inspect a pre-trained model
When I inspected the working model using TensorBoard, I saw that its input is 227 x 227 x 3, which is what I expected. However, when I viewed my model, I noticed that it was 224 x 224 x 3! I changed my code to resize the image to that size and it works; the problem went away.
So, to summarize: for some reason the Microsoft Custom Vision service generated a model that expects an image size of 224 x 224 x 3. I didn't see any documentation or setting for this, and I don't know whether that number changes with each model. If you get a similar shape error, the first place I would check is the size of the image you are feeding your model versus what it expects as input. The good news is that you can check your model, even a pre-trained one, using TensorBoard and the post I linked above: look at the input section.
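For reference, here is a minimal sketch of the fix in plain Android Java (the Xamarin C# calls are one-to-one with these); INPUT_SIZE is whatever TensorBoard reports for your model's input node, 224 in my case:

import android.graphics.Bitmap;

static final int INPUT_SIZE = 224; // read this off the model's input node, don't hardcode 227

static float[] toInputTensor(Bitmap bitmap) {
    Bitmap resized = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, false);
    int[] intValues = new int[INPUT_SIZE * INPUT_SIZE];
    float[] floatValues = new float[INPUT_SIZE * INPUT_SIZE * 3];
    resized.getPixels(intValues, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);
    for (int i = 0; i < intValues.length; i++) {
        int val = intValues[i];
        // same BGR mean subtraction as the snippet in the question
        floatValues[i * 3 + 0] = (val & 0xFF) - 104;
        floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) - 117;
        floatValues[i * 3 + 2] = ((val >> 16) & 0xFF) - 123;
    }
    return floatValues;
}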
Hope this helps!

javaCV detectMultiScale with LBP cascade does not work on physical device

My Android application uses javaCV and calls the detectMultiScale() function with an LBP cascade to detect faces. It works completely fine on my emulator. However, when I tested it on my HTC Incredible S, it returned 0 and could not detect any face! Could anyone give me some hints as to why it does not work? Many thanks for your help!
Here is my code for face detection:
CASCADE_FILE = working_Dir.getAbsolutePath() + "/lbpcascade_frontalface.xml";

public static CvRect getFaceWithLBP(IplImage grayFaceImg)
{
    CascadeClassifier cascade = new CascadeClassifier(CASCADE_FILE);
    CvRect facesdetection = new CvRect(null);
    cascade.detectMultiScale(grayFaceImg, facesdetection, 1.1, 2,
            CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH,
            new CvSize(), new CvSize(grayFaceImg.width(), grayFaceImg.height()));
    return facesdetection;
}
Just a note: as per the OpenCV documentation, the flags (such as CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_DO_ROUGH_SEARCH) cannot be used with new cascades (like LBP ones); see the sketch after the parameter list below.
void CascadeClassifier::detectMultiScale(const Mat& image, vector<Rect>& objects, double scaleFactor=1.1, int minNeighbors=3, int flags=0, Size minSize=Size(), Size maxSize=Size())
Parameters:
cascade – Haar classifier cascade (OpenCV 1.x API only). It can be loaded from XML or YAML file using Load(). When the cascade is not needed anymore, release it using cvReleaseHaarClassifierCascade(&cascade).
image – Matrix of the type CV_8U containing an image where objects are detected.
objects – Vector of rectangles where each rectangle contains the detected object.
scaleFactor – Parameter specifying how much the image size is reduced at each image scale.
minNeighbors – Parameter specifying how many neighbors each candidate rectangle should have to retain it.
flags – Parameter with the same meaning for an old cascade as in the function cvHaarDetectObjects. It is not used for a new cascade.
minSize – Minimum possible object size. Objects smaller than that are ignored.
maxSize – Maximum possible object size. Objects larger than that are ignored.
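So a minimal change to the snippet from the question is to pass 0 for the flags (a sketch assuming the rest of your setup is correct):

public static CvRect getFaceWithLBP(IplImage grayFaceImg)
{
    CascadeClassifier cascade = new CascadeClassifier(CASCADE_FILE);
    CvRect facesdetection = new CvRect(null);
    // flags = 0: the CV_HAAR_* flags apply only to old Haar cascades and
    // are not used by new (LBP) cascades, so don't rely on them here
    cascade.detectMultiScale(grayFaceImg, facesdetection, 1.1, 2, 0,
            new CvSize(), new CvSize(grayFaceImg.width(), grayFaceImg.height()));
    return facesdetection;
}

Since it works on the emulator but not on the device, it is also worth verifying that the cascade file actually loaded on the device (e.g. checking cascade.empty() after construction) before calling detectMultiScale.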

Android calculator: manipulation query + remove unwanted zeros for the answer output

Hi, I am working on an Android calculator app and am now working on the manipulations. I have defined the following:
ArrayList<Float> inputnum = new ArrayList<Float>();
float inputnum1;
float inputnum2;
and then for the operations,
case MULTIPLY:
    inputnum1 = inputnum.get(0);
    inputnum2 = inputnum.get(1);
    inputnum.add(inputnum1 * inputnum2);
    Display.setText(String.format("%.9f", inputnum.get(0)));
and similarly for division.
The multiply and divide functions work well with integers (e.g. 5 * 4 outputs 20.00000000),
however, when dealing with figures that have decimal places, e.g. 5.3 * 4, it outputs 21.12000089, which is incorrect.
What is the problem?
Also, how do I set the output on Display to remove the unnecessary zeros? E.g.:
when 5 * 4, it should only show 20 instead of 20.000000 as the final answer;
when 5.3 * 4, it should show 21.12 instead of 21.12000000 as the final answer.
Thanks a lot!
Just changing all the related floats to doubles will avoid presenting the rounding error.
If you want to present 9 decimal places by padding zeros after the decimal point (e.g. 7.56 becomes 7.560000000), you can use the code below:
Display.setText(String.format("%.9f", inputnum.get(0)));
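For the second part (removing the unnecessary zeros), one option is to format via BigDecimal and strip trailing zeros (a sketch, assuming you switch the list to Double as suggested above):

import java.math.BigDecimal;
import java.math.RoundingMode;

// Rounds to at most 9 decimal places, then drops trailing zeros:
// 20.0 -> "20", 21.12 -> "21.12", 7.56 -> "7.56"
static String formatResult(double value) {
    return BigDecimal.valueOf(value)
            .setScale(9, RoundingMode.HALF_UP)
            .stripTrailingZeros()
            .toPlainString();
}

// Usage:
// Display.setText(formatResult(inputnum.get(0)));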

OpenGL ES 1.1, ETC1 texture compression and mipmapping (complete set of mipmaps error)

When I activate mipmapping on an uncompressed texture, everything works perfectly.
When I do it on an ETC1 texture, the texture is blank, most likely because the complete set of mipmaps was not given.
The code is very simple and works on iPhone (with PVR compression, of course).
It doesn't work on Android. The mipmaps were built with an external tool and pasted together.
I stop generating mipmaps at size 4, because glCompressedTexImage2D returns an OpenGL error if I try using lower mipmap levels.
for(u32 i=0; i<=levels; i++)
{
    size = KC_TexByte(pagex, pagey, tex_type);
    glCompressedTexImage2D(GL_TEXTURE_2D, i, type, pagex, pagey, 0, size, ptr);
    pagex = MAX(pagex/2, 4);
    pagey = MAX(pagey/2, 4);
    ptr += size;
    KC_Error(); // check for an OpenGL error
}
The reason your texture is blank is that the mipmap chain is required to go all the way down to 1x1.
I would imagine that the error you're getting with small compressed textures is because the texture format you're attempting to use (ETC1?) doesn't support those sizes. You'd have to use non-compressed images at those small sizes...
Thanks, but your solution is not quite right; I found another solution.
You're right that the complete mipmap chain is required, down to 1x1.
You're wrong about mixing formats: we can't have different formats between mipmap levels.
The right way is:
go all the way down to 1x1;
keep in mind it's block-compressed data, so the size in bytes doesn't divide by 4 at each step; after 8x8 the size stays at the same value:
sx = size in X
sy = size in Y
bytes = ((sx + 3) / 4) * ((sy + 3) / 4) * 8 * 2; // 8 = bits per pixel
for(u32 i=0; i<=levels; i++)
Seems you'd want i < levels instead of <=.
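A minimal sketch of such a loop in Android Java, assuming standard ETC1 data (8 bytes per 4x4 block, so the per-level size stops shrinking below 4x4) with all levels packed contiguously; android.opengl.ETC1.getEncodedDataSize computes the block-padded size for you:

import java.nio.ByteBuffer;
import android.opengl.ETC1;
import android.opengl.GLES10;

// Uploads a complete ETC1 mipmap chain, all the way down to 1x1.
static void uploadEtc1MipChain(ByteBuffer data, int width, int height) {
    int level = 0;
    int w = width, h = height;
    while (true) {
        // block-padded size: ((w+3)/4) * ((h+3)/4) * 8 bytes, so
        // 4x4, 2x2 and 1x1 all take one 8-byte block
        int size = ETC1.getEncodedDataSize(w, h);
        ByteBuffer levelData = data.slice();
        levelData.limit(size);
        GLES10.glCompressedTexImage2D(GLES10.GL_TEXTURE_2D, level,
                ETC1.ETC1_RGB8_OES, w, h, 0, size, levelData);
        data.position(data.position() + size); // advance past this level
        if (w == 1 && h == 1) break;
        w = Math.max(w / 2, 1);
        h = Math.max(h / 2, 1);
        level++;
    }
}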
