Increase the number of detections with TensorFlow Lite's Model Maker (Android)

I've adapted TensorFlow Lite's Salad Detector Colab and am able to train my own models and get them working on Android, but I'm trying to count objects and I need more than the default limit of 25 detections.
The models have a setting for increasing detections, so in the above Colab I inserted the following code:
spec = model_spec.get('efficientdet_lite4')
spec.tflite_max_detections=50
And on the Android side of things
val options = ObjectDetector.ObjectDetectorOptions.builder()
    .setMaxResults(50)
    .setScoreThreshold(10)
    .build()
The models are training fine, but I'm still only able to detect 25 objects in a single image.
Is there a problem with my models? Or are there any other settings I can change in my Android code that will increase the number of detections?

I solved this myself: after Googling a different Stack Overflow question on efficientdet_lite4, I stumbled on an aha moment.
My problem was here:
spec = model_spec.get('efficientdet_lite4')
spec.tflite_max_detections=50
I needed to change the whole spec of the model:
spec = object_detector.EfficientDetLite4Spec(
    model_name='efficientdet-lite4',
    uri='https://tfhub.dev/tensorflow/efficientdet/lite4/feature-vector/2',
    hparams='',
    model_dir=None,
    epochs=50,
    batch_size=64,
    steps_per_execution=1,
    moving_average_decay=0,
    var_freeze_expr='(efficientnet|fpn_cells|resample_p6)',
    tflite_max_detections=50,  # <-- the important change
    strategy=None,
    tpu=None,
    gcp_project=None,
    tpu_zone=None,
    use_xla=False,
    profile=False,
    debug=False,
    tf_random_seed=111111,
    verbose=0
)
From there I was able to train the model, and things worked on the Android side.
This has been bugging me for a few weeks!
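For anyone who also needs the object count on the device, here is a minimal sketch of the Android side using the TensorFlow Lite Task Library; the file name "model.tflite" and the 0.3f score threshold are placeholders, and setMaxResults can only return as many boxes as the tflite_max_detections value baked into the model at training time.
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.detector.Detection;
import org.tensorflow.lite.task.vision.detector.ObjectDetector;

public final class DetectionCounter {
    // Count detections in a bitmap with a Model Maker export bundled in the assets folder.
    public static int countObjects(Context context, Bitmap bitmap) throws IOException {
        ObjectDetector.ObjectDetectorOptions options =
                ObjectDetector.ObjectDetectorOptions.builder()
                        .setMaxResults(50)        // cannot exceed the model's tflite_max_detections
                        .setScoreThreshold(0.3f)  // placeholder; tune for your data
                        .build();
        ObjectDetector detector =
                ObjectDetector.createFromFileAndOptions(context, "model.tflite", options);
        List<Detection> results = detector.detect(TensorImage.fromBitmap(bitmap));
        return results.size();                    // can now go past the old 25-box ceiling
    }
}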

Related

Tensorflow Model output weights have different values

I am developing an Android application which requires an ML model integration, and I am using TensorFlow Lite for deployment. The model is a custom Siamese network whose output shape is [1, 128]. When I run the TFLite model in Python on Google Colab, the [1, 128] output values are different from the ones produced on my Android device. The input image is the same for both inferences, and the input and output shapes match, but I am still getting different output vectors on my Android phone and in the Python TFLite model. I am using Firebase Machine Learning.
Android Code
val interpreter = Interpreter(model)
val imageBitmap = Bitmap.createScaledBitmap(
    BitmapFactory.decodeFileDescriptor(
        contentResolver.openFileDescriptor(fileUri, "r")?.fileDescriptor
    ),
    256, 256, true
)
val inputImage = ByteBuffer.allocateDirect(256 * 256 * 3 * 4).order(ByteOrder.nativeOrder())
for (ycord in 0 until 256) {
    for (xcord in 0 until 256) {
        val pixel = imageBitmap.getPixel(xcord, ycord)
        inputImage.putFloat(Color.red(pixel) / 1.0f)
        inputImage.putFloat(Color.green(pixel) / 1.0f)
        inputImage.putFloat(Color.blue(pixel) / 1.0f)
    }
}
imageBitmap.recycle()
val modelOutput = ByteBuffer.allocateDirect(outputSize).order(ByteOrder.nativeOrder())
interpreter.run(inputImage, modelOutput)
modelOutput.rewind()
val probs = modelOutput.asFloatBuffer()
success(ImageProcessResult.Success(probs))
Kindly help me, I need it soon. Any help is appreciated.
You are resizing the bitmap to [256,256] in the Android platform.
Even the slightest change in the input vector will change the output vector. When you resize the bitmap, you change the input vector. However, if the model is general enough, the final result, which in classification would be the argmax of the output vector, will be the same.
In the case of a Siamese network, I believe it won't affect the final result (the similarity score) in a meaningful way, as long as the model is not overfitted.
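If you want the two runs to match more closely, one option is to make the Android preprocessing explicit so it can be reproduced exactly in the Colab notebook. This is only a sketch using the TensorFlow Lite Support Library (not part of the original code); originalBitmap is a placeholder for the full-resolution image, and the Python side would need to resize with the same bilinear interpolation and keep pixels as floats in the 0-255 range.
import android.graphics.Bitmap;
import java.nio.ByteBuffer;
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;

final class InputPreprocessor {
    // Resize with an explicit, documented method so the Python side can reproduce it exactly.
    static ByteBuffer preprocess(Bitmap originalBitmap) {
        ImageProcessor processor = new ImageProcessor.Builder()
                .add(new ResizeOp(256, 256, ResizeOp.ResizeMethod.BILINEAR))
                .build();
        TensorImage input = new TensorImage(DataType.FLOAT32);
        input.load(originalBitmap);        // full-resolution Bitmap
        input = processor.process(input);  // 256x256, float pixels still in [0, 255]
        return input.getBuffer();          // feed this buffer to interpreter.run(...)
    }
}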

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y, and z values, i.e. the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 values from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine: no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud, everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have to do something with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what the PoseData exactly does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from the java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above).
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // camera to IMU transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics, this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds, you can get a 3D representation of your room.
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (in fact, I'm stuck there at the moment).
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of a lot of very small errors in estimating your pose that together add up to a big error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet or whatever you like, you'll notice that the tablet never returns to its starting point, because it is drifting away.
Solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers a lot of points and gives you a nice introduction.
Using area learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. In this way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
{
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
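For example (a sketch only; mTango, xyzIjData, and mExtrinsics stand for the Tango instance, the depth frame from onXyzIjAvailable, and the extrinsics set up above):
// Query the pose at the exact time the point cloud was captured,
// with COORDINATE_FRAME_AREA_DESCRIPTION as the base frame.
TangoPoseData cameraPose = mTango.getPoseAtTime(xyzIjData.timestamp, FRAME_PAIRS.get(0));
if (cameraPose.statusCode == TangoPoseData.POSE_VALID) {
    ArrayList<Vector3> cloud = normalize(xyzIjData, cameraPose, mExtrinsics);
    // append "cloud" to your PCLManager / .xyz export
}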
Then you have to modify your TangoConfig to tell Tango to use Area Learning, using the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

ArToolkit, Android and 2D markers

I've encountered some problems using AR markers with a 2D barcode inside.
I'm using Android and ARToolKit.
I have no problem recognizing the "Hiro" marker or the "kanji" one.
Sometimes ARToolKit confuses the "0" barcode with "Hiro", but that is not the problem; the problem is that I can't, in any way I've tried, recognize a 2D barcode.
This is my code:
if (!ARToolKit.getInstance().initialiseNative(this.getCacheDir().getAbsolutePath()) ||
        !ARToolKit.getInstance().initialiseAR(640, 480, "Data/camera_para.dat", 0, false)) {
    Log.e("MainActivity", "initialization error");
    return;
}
_markerID = ARToolKit.getInstance().addMarker("single_barcode;0;40");
It makes no difference if I use:
single_barcode;0;10
...
single_barcode;0;80
Obviously, with this instead:
_markerID = ARToolKit.getInstance().addMarker("single;Data/patt.hiro;10");
it works.
I've also tried to create a file like the ones for Hiro (patt.hiro) and kanji (patt.kanji).
So I've created a code.dat:
1
00
40.0
1.0000 0.0000 0.0000 0.0000
0.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 0.0000
for the "0" bar code.
_markerID = ARToolKit.getInstance().addMarker("single;Data/code.dat;40");
Again, it makes no difference if I use:
single;Data/code.dat;10
..
single;Data/code.dat;80
but again, nothing.
I can't find any valid example of this on Android, or any exhaustive manual...
Where am I wrong?
The usage of 2D barcodes with ARToolKit on Android is not covered in any public documentation. However, by referring directly to the ARWrapper source code, I found that it is available through NativeInterface and ARToolKit.
Here's a working example I used in my Android app.
First, do something like this in your detection initialization:
NativeInterface.arwSetPatternDetectionMode(NativeInterface.AR_MATRIX_CODE_DETECTION);
NativeInterface.arwSetMatrixCodeType(NativeInterface.AR_MATRIX_CODE_3x3_PARITY65);
markerID = ARToolKit.getInstance().addMarker("single_barcode;0;80");
For 2D barcode (matrix code) detection, you must set the pattern detection mode to AR_MATRIX_CODE_DETECTION. For details of the different matrix code types, you may refer to the official documentation. I am using the default ones provided under /artoolkit5/doc/patterns in the GitHub repository.
The configuration string for single barcode detection uses the following format: "single_barcode;<barcode ID>;<marker width>".
The rest should be the same as using a pattern marker. Just for clarification: after calling ARToolKit.getInstance().convertAndDetect(frame), which usually happens in your Activity inheriting from ARActivity, you may query the marker's visibility using ARToolKit.getInstance().queryMarkerVisible(markerID) as usual.
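To make that concrete, here is a small sketch of the per-frame check (markerID is the value returned by addMarker above; whether you also need queryMarkerTransformation depends on your renderer):
// Inside your per-frame callback, after convertAndDetect() has run:
if (ARToolKit.getInstance().queryMarkerVisible(markerID)) {
    // 16-element OpenGL-style transformation matrix for the barcode marker
    float[] modelViewMatrix = ARToolKit.getInstance().queryMarkerTransformation(markerID);
    // hand modelViewMatrix to your renderer to draw content on the marker
}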
References
https://github.com/artoolkit/artoolkit5
As I mentioned in another question, everything in the assets folder is cached by ARToolKit, so when you add new markers you need to either increase the app's version number or uninstall the app.
You do not need to recompile the NDK to add new markers.
Also, the string formatting is very important:
The default one is:
_markerID = ARToolKit.getInstance().addMarker("single;Data/patt.hiro;10");
for your marker you are using:
_markerID = ARToolKit.getInstance().addMarker("single_barcode;0;40");
The string defining your marker should be:
"single;Data/single_barcode;40"
Where (as explained in this page http://www.artoolkit.org/documentation/doku.php?id=4_Android:android_developing) the parameters mean:
single means it is a single marker
Data/single_barcode is the path to the file inside the assets folder (assuming you put it in the same dir as the hiro and kanji ones)
40 is the size of the marker in the real world, in millimeters.
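Putting that together, the call would look like this (just a sketch restating the string above; the 40 mm width is the example value):
// Pattern-file marker loaded from the assets folder (Data/single_barcode), 40 mm wide
_markerID = ARToolKit.getInstance().addMarker("single;Data/single_barcode;40");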
And I agree that the documentation of ARToolkit needs to be improved.
Thanks Shalafi, I've tried it but nothing happens.
I've found a Japanese page which says that you have to change a parameter and recompile the whole ARToolKit C++ code in order to make it recognize 2D barcodes.
But then it recognizes either 2D barcodes or pattern markers like Hiro, not both.
Does anyone have more detailed instructions?
The Japanese page is this: http://sixwish.jp/ARTK4Android/Wrapper/section03/
(I've translated it with Google Translate.)

Minko - Android light issue

I am working with Minko and seem to be facing a lighting issue on Android.
I managed to compile modified code (based on the tutorials provided by Minko) for linux64, Android, and HTML. I simply load and rotate four .obj files (the pirate one provided and three found on TurboSquid, for demo purposes only).
The correct result is shown in the linux64 and HTML versions, but the Android one has a "reddish" light thrown into it, although the binaries are generated from the same C++ code.
Here are some pics to demonstrate the problem:
linux64 :
http://tinypic.com/r/qzm2s5/8
Android version :
http://tinypic.com/r/23mn0p3/8
(Couldn’t link the html version but it is close to the linux64 one.)
Here is the part of the code related to the light :
// create the spot light node
auto spotLightNode = scene::Node::create("spotLight");
// change the spot light position
//spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 2.f, 0.f)))); //ok linux - html
spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 8.f, 0.f))));
// create the point light component
auto spotLight = SpotLight::create(.15f, .4f); //ok linux and html
// update the spot light component attributes
spotLight->diffuse(4.5f); //ori - ok linux - html
// add the component to the spot light node
spotLightNode->addComponent(spotLight);
//sets a red color to our spot light
//spotLightNode->component<SpotLight>()->color()->setTo(2.0f, 1.0f, 1.0f);
// add the node to the root of the scene graph
rootNode->addChild(spotLightNode);
As you can see, the color()->setTo call has been commented out, and this works for all targets except Android (after a clean and rebuild). Any idea what might be the source of the problem here?
Any pointer would be much appreciated.
Thanks.
Can you test it on other Android devices or with a more recent ROM and give us the result? The LG-D855 (LG G3) is powered by an Adreno 330: those GPUs are known to have GLSL compiler defects, especially with loops and/or structs like the ones we use in Phong.fragment.glsl on the master branch.
The Phong.fragment.glsl on the dev branch has been heavily refactored to fix this (for directional lights only for now).
You could try the dev branch with a directional light and see if it fixes the issue. Be careful though: the dev branch introduces beta 3, with some API changes. The biggest changes are the math API, which now uses GLM, and the *.effect file format. The best way to go is simply to update your math code to use the new API; everything else should be straightforward.

AS3 AIR (ios and android) CameraRoll issue

I've been trying to sort out an issue for a week or so now and have Googled to no avail. I'm currently working on an iOS/Android app that has an in-game feature to take a screenshot and have it show up in the mobile device's gallery.
I'm using the CameraRoll object, and the issue is that some objects on screen have smoothing applied but the CameraRoll screenshot ignores this, which leaves some objects in the resulting screenshot with jaggies.
I've found a number of cries for help on the same issue while googling, but no answers.
Any help is much appreciated.
Jaggies in Flash are common since smoothing on bitmaps is disabled by default (it is more CPU intensive). I'd recommend creating a new bitmap from the CameraRoll MediaEvent.SELECT event. The event provides event.data, which is a MediaPromise object; inside that, you should find a read-only file property where you should be able to find the image.
Then it's just a matter of creating your new image with smoothing.
var img:Bitmap = new Bitmap();
img.bitmapData = file.bitmapData;
img.smoothing = true;
addChild(img);
I've never tried this on mobile before, but it's a common issue which I believe you're encountering.
Addendum:
If you're having an issue with the system-based screenshot services, you could create your own using pure AS3. The logic is that AS3 will do a pixel-by-pixel copy of the stage (thereby respecting the smoothing values of your images).
Try this:
var myBitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
myBitmapData.draw(stage);
You can then pass myBitmapData to CameraRoll.addBitmapData() to save it to the device's gallery.
