I've encountered some problems recognizing 2D barcode markers.
I'm using Android and ARToolKit.
I have no problem recognizing the "Hiro" or "Kanji" markers.
Sometimes ARToolKit confuses the "0" barcode with "Hiro", but that's not the issue: the problem is that I can't, in any way I've tried, get a 2D barcode recognized.
This is my code:
if (!ARToolKit.getInstance().initialiseNative(this.getCacheDir().getAbsolutePath()) ||
!ARToolKit.getInstance().initialiseAR(640, 480, "Data/camera_para.dat", 0, false)) {
Log.e("MainActivity", "initialization error");
return;
}
_markerID = ARToolKit.getInstance().addMarker("single_barcode;0;40");
It makes no difference if I use:
single_barcode;0;10
...
single_barcode;0;80
Obviously, with:
_markerID = ARToolKit.getInstance().addMarker("single;Data/patt.hiro;10");
it works.
I've also tried creating a file like the ones for Hiro (patt.hiro) and Kanji (patt.kanji).
So I created a code.dat:
1
00
40.0
1.0000 0.0000 0.0000 0.0000
0.0000 1.0000 0.0000 0.0000
0.0000 0.0000 1.0000 0.0000
for the "0" bar code.
_markerID = ARToolKit.getInstance().addMarker("single;Data/code.dat;40");
Again, it makes no difference if I use:
single;Data/code.dat;10
..
single;Data/code.dat;80
but again, nothing.
I can't find any valid example of this on Android, or any exhaustive manual...
Where am I wrong?
The usage of 2D barcodes with ARToolKit on Android is not covered in any public documentation. However, by reading the ARWrapper source code directly, I found that it is available through NativeInterface and ARToolKit.
Here's a working example I used in my Android app.
First, do something like this in your detection initialization:
NativeInterface.arwSetPatternDetectionMode(NativeInterface.AR_MATRIX_CODE_DETECTION);
NativeInterface.arwSetMatrixCodeType(NativeInterface.AR_MATRIX_CODE_3x3_PARITY65);
markerID = ARToolKit.getInstance().addMarker("single_barcode;0;80");
For 2D barcode (matrix code) detection, you must set the pattern detection mode to AR_MATRIX_CODE_DETECTION. For details on the different matrix code types, you can refer to the official documentation. I am using the default patterns provided under /artoolkit5/doc/patterns in the GitHub repository.
The configuration string for single barcode detection uses the following format: "single_barcode;<barcode ID>;<marker width>".
The rest is the same as using a pattern marker. Just for clarification: after calling ARToolKit.getInstance().convertAndDetect(frame), usually in an Activity inheriting from ARActivity, you can query the marker's visibility with ARToolKit.getInstance().queryMarkerVisible(markerID) as usual.
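To avoid typos in that configuration string, you can assemble it with a small helper. This is a hypothetical sketch (plain Java, not part of the ARToolKit API; the class and method names are my own):

```java
// Hypothetical helper (not part of the ARToolKit API): builds the
// "single_barcode;<barcode ID>;<marker width>" configuration string so the
// separators and field order cannot be mistyped.
public class BarcodeMarkerConfig {

    public static String singleBarcode(int barcodeId, double widthMm) {
        if (barcodeId < 0) {
            throw new IllegalArgumentException("barcode ID must be >= 0");
        }
        if (widthMm <= 0) {
            throw new IllegalArgumentException("marker width must be positive");
        }
        return "single_barcode;" + barcodeId + ";" + widthMm;
    }

    public static void main(String[] args) {
        // The result would be passed to ARToolKit.getInstance().addMarker(...)
        System.out.println(singleBarcode(0, 80.0)); // prints single_barcode;0;80.0
    }
}
```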
References
https://github.com/artoolkit/artoolkit5
As I mentioned in another question, everything in the assets folder is cached by ARToolKit, so when you add new markers you need to either increase the app's version number or uninstall the app.
You do not need to recompile the NDK to add new markers.
Also, the string formatting is very important:
The default one is:
_markerID = ARToolKit.getInstance().addMarker("single;Data/patt.hiro;10");
For your marker you are using:
_markerID = ARToolKit.getInstance().addMarker("single_barcode;0;40");
The string defining your marker should be:
"single;Data/single_barcode;40"
Where (as explained on this page: http://www.artoolkit.org/documentation/doku.php?id=4_Android:android_developing) the parameters mean:
single means it is a single marker
Data/single_barcode is the path to the file inside the assets folder (assuming you put it in the same dir as the hiro and kanji ones)
40 is the size of the marker in the real world, in millimeters.
And I agree that the documentation of ARToolkit needs to be improved.
Thanks Shalafi, I've tried that but nothing happens.
I've found a Japanese page which says that you have to change a parameter and recompile the entire ARToolKit C++ code in order to make it recognize 2D codes.
But then it recognizes either 2D codes or pattern markers like Hiro, not both.
Does anyone have more detailed instructions?
The Japanese page is this: http://sixwish.jp/ARTK4Android/Wrapper/section03/
(I translated it with Google Translate.)
I've adapted TensorFlow Lite's Salad Detector Colab and am able to train my own models and get them working on Android, but I'm trying to count objects and I need more than the default limit of 25 detections.
The models have a method for increasing detections, so in the above Colab I inserted the following code:
spec = model_spec.get('efficientdet_lite4')
spec.tflite_max_detections=50
And on the Android side of things
val options = ObjectDetector.ObjectDetectorOptions.builder()
.setMaxResults(50)
.setScoreThreshold(10)
.build()
The models are training fine, but I'm still only able to detect 25 objects in a single image.
Is there a problem with my models? Or are there other settings I can change in my Android code to increase the number of detections?
I solved this myself: after Googling a different Stack Overflow question on efficientdet_lite4, I stumbled on an aha moment.
My problem was here:
spec = model_spec.get('efficientdet_lite4')
spec.tflite_max_detections=50
I needed to change the whole spec of the model:
spec = object_detector.EfficientDetLite4Spec(
model_name='efficientdet-lite4',
uri='https://tfhub.dev/tensorflow/efficientdet/lite4/feature-vector/2',
hparams='',
model_dir=None,
epochs=50,
batch_size=64,
steps_per_execution=1,
moving_average_decay=0,
var_freeze_expr='(efficientnet|fpn_cells|resample_p6)',
tflite_max_detections=50,  # the key change
strategy=None,
tpu=None,
gcp_project=None,
tpu_zone=None,
use_xla=False,
profile=False,
debug=False,
tf_random_seed=111111,
verbose=0
)
From there I was able to train the model, and things worked on the Android side.
This has been bugging me for a few weeks!
I am trying to implement ARCore with Xamarin and want to place a 3D object at a specific geolocation (like in Pokémon GO). I tried to follow this sample that I found in this forum: https://blog.xamarin.com/augmented-reality-xamarin-android-arcore/ but it seems that I can't change the position of the 3D object; it is placed only on a plane, according to the tap gesture.
Is there a way to place an object and track it? I managed to do that with ARKit, but so far no success with ARCore on Android.
Any ideas would be helpful.
It looks like the Xamarin wrapper for ARCore simply wraps OpenGL. As a result, drawing the object requires setting multiple matrices (Model, View, and Projection):
objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);
If you move this out of the foreach (var planeAttachment in planeAttachments) loop, you can set the anchorMatrix (a.k.a. the model matrix) to a fixed/hardcoded translation, and the object will then be fixed relative to the camera.
Here's a decent article on View matrices: https://www.3dgep.com/understanding-the-view-matrix/#The_View_Matrix
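To make the fixed-translation idea concrete, here is a minimal sketch (plain Java, no Android or ARCore dependency; the class name is mine) of a column-major 4x4 model matrix with a hardcoded translation, using the same memory layout as android.opengl.Matrix and OpenGL:

```java
// Minimal sketch (plain Java): a column-major 4x4 translation matrix, like the
// anchor/model matrix handed to the renderer. Hardcoding the translation here
// is what pins the model to a fixed position instead of a detected plane.
public class ModelMatrixDemo {

    // Returns a 4x4 identity matrix in column-major order (OpenGL convention).
    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    // Writes the translation (x, y, z) into the last column of the matrix.
    static float[] translation(float x, float y, float z) {
        float[] m = identity();
        m[12] = x;
        m[13] = y;
        m[14] = z;
        return m;
    }

    public static void main(String[] args) {
        // e.g. 1 meter in front of the camera (negative Z in OpenGL).
        float[] model = translation(0f, 0f, -1f);
        System.out.println(model[14]); // prints -1.0
    }
}
```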
-- Begin Shameless Plug --
However, if you are open to trying new platforms, my team has built a cross-platform React-Native library for AR/VR development (Viro React): https://viromedia.com/viroreact/
If you're more familiar with SceneKit on iOS, we have built an analogous solution on Android w/ AR/VR support (ViroCore): https://viromedia.com/virocore/
Either solution would allow you to skip over the intricacies of OpenGL and simply position your objects/models with relative ease.
For example, placing your model 1 meter in front of you would be as simple as (in Viro React):
<Viro3dObject source={require("./res/model.obj")} position={[0,0,-1]} type="OBJ" />
I am working with Minko and seem to be facing a lighting issue on Android.
I managed to compile a modified version of the tutorial code provided by Minko for linux64, Android, and HTML. I simply load and rotate 4 .obj files (the pirate one provided, and 3 found on TurboSquid for demo purposes only).
The linux64 and HTML versions render correctly, but the Android one has a reddish light thrown into it, although the binaries are generated from the same C++ code.
Here are some pics to demonstrate the problem:
linux64 :
http://tinypic.com/r/qzm2s5/8
Android version :
http://tinypic.com/r/23mn0p3/8
(Couldn’t link the html version but it is close to the linux64 one.)
Here is the part of the code related to the light:
// create the spot light node
auto spotLightNode = scene::Node::create("spotLight");
// change the spot light position
//spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 2.f, 0.f)))); //ok linux - html
spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 8.f, 0.f))));
// create the spot light component
auto spotLight = SpotLight::create(.15f, .4f); //ok linux and html
// update the spot light component attributes
spotLight->diffuse(4.5f); //ori - ok linux - html
// add the component to the spot light node
spotLightNode->addComponent(spotLight);
//sets a red color to our spot light
//spotLightNode->component<SpotLight>()->color()->setTo(2.0f, 1.0f, 1.0f);
// add the node to the root of the scene graph
rootNode->addChild(spotLightNode);
As you can see, the color()->setTo call has been commented out, and the code works everywhere except Android (after a clean rebuild). Any idea what might be the source of the problem here?
Any pointer would be much appreciated.
Thanks.
Can you test it on other Android devices or with a more recent ROM and give us the result? The LG-D855 (LG G3) is powered by an Adreno 330: those GPUs are known to have GLSL compiler defects, especially with loops and/or structs like the ones we use in Phong.fragment.glsl on the master branch.
The Phong.fragment.glsl on the dev branch has been heavily refactored to fix this (for directional lights only, for now).
You could try the dev branch with a directional light and see if that fixes the issue. Be careful though: the dev branch introduces beta 3, with some API changes. The biggest changes are that the math API now uses GLM, and the *.effect file format is different. The best way to go is simply to update your math code to use the new API; everything else should be straightforward.
In my Android project I am using OpenCV 2.4.8, and the function Imgproc.equalizeHist gives me strange results:
http://imgur.com/a/dhNqH
The first shows the original image, the second is what I get on Android, and the third is what I expected (made with ImageJ from the original using Process->Enhance Contrast).
Code:
Imgproc.equalizeHist(imageROI, imageROI); //src, dst
imageROI is CvType.CV_8UC1.
Am I supposed to do something with imageROI before calling equalizeHist? The OpenCV documentation is mostly C/C++, so I don't know if anything is different for Java on Android.
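For reference, this is my understanding of what equalizeHist computes on an 8-bit single-channel image — a plain-Java sketch (not OpenCV; the class and method names are mine) that can be used to check the expected output on a small test image:

```java
// Illustrative sketch (plain Java, not OpenCV): histogram equalization of an
// 8-bit single-channel image. Pixels are ints in [0, 255].
public class EqualizeDemo {

    static int[] equalize(int[] pixels) {
        // Histogram of the 256 gray levels.
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        // Cumulative distribution function.
        int[] cdf = new int[256];
        int sum = 0;
        for (int i = 0; i < 256; i++) { sum += hist[i]; cdf[i] = sum; }

        // First non-zero CDF value, used to stretch the range to [0, 255].
        int cdfMin = 0;
        for (int c : cdf) { if (c > 0) { cdfMin = c; break; } }

        int total = pixels.length;
        if (total == cdfMin) return pixels.clone(); // flat image: nothing to equalize

        // Remap each pixel through the normalized CDF.
        int[] out = new int[total];
        for (int i = 0; i < total; i++) {
            out[i] = Math.round(255f * (cdf[pixels[i]] - cdfMin) / (total - cdfMin));
        }
        return out;
    }
}
```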
Any help would be welcome!
I have a DrawingImage library (icons) for my Windows application development under WPF. I am new to Android development, and this library has a lot of path geometries that I would like to use in my Android projects.
I searched for a built-in way to use geometries such as "F1 M 0 0 -5.715 5 -8.48 5 -14.195 0 0 0 z m -15.0977 9.0001 0 -1 0 -7.461 5.488 4.803 -4.809 3.206 7.319 5.91 7.324 -5.91 -4.81 -3.206 5.488 -4.803 0 8.461 -7.998 6.795 -8.002 -6.795 z" directly in Android, but I could not find one.
I have run into some libraries that can display SVG images, and they are OK. However, I need to display my path geometries from XAML in Android. Is this possible?
I found out that it is not possible to transfer my XAML vectors into the Android environment, so I took another approach:
As far as I can see, the best way to use vector icons in Android is to use font icon libraries. https://github.com/bperin/FontAwesomeAndroid is what I used.
Later I needed more icons than the FontAwesome library provides. My search ended at the http://fontastic.me/ service, which lets you generate a custom font collection free of charge. I then modified the source code of FontAwesomeAndroid so that I can use my own font icon collection.