I want to implement indoor navigation through image recognition using Vuforia, which works together with the Android SDK.
I have collected the images I want to use, and I have successfully replaced the default pictures in the ImageTargets sample application with my own.
My problem is the next step: when the application recognizes a museum exhibit, it displays a teapot. I want to replace this teapot with arrows that will help visitors navigate through the museum.
How can I do this? There is an article on the official Vuforia website that partially explains what needs to be done, but it is far from comprehensive (https://developer.vuforia.com/resources/dev-guide/replacing-teapot).
Any help would be greatly appreciated.
Thank you in advance.
The first step is to create your arrow models in a program such as Blender or Autodesk Maya and export them as a Wavefront OBJ file. You may have to tweak some plugins/settings in your chosen modelling software to enable that option.
Then you need to convert that .obj file to a C/C++ include file (.h) so it works with the native code in ImageTargets.cpp. There is a convenient Perl script, obj2opengl, that you can download to make this process easier.
Then it is a matter of including your new model in ImageTargets.cpp, e.g. #include "arrow.h", and replacing the code that draws the teapot with the following:
// set input data to arrays
glVertexPointer(3, GL_FLOAT, 0, arrowVerts);
glNormalPointer(GL_FLOAT, 0, arrowNormals);
glTexCoordPointer(2, GL_FLOAT, 0, arrowTexCoords);
// draw data
glDrawArrays(GL_TRIANGLES, 0, arrowNumVerts);
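If the sample's GL ES 1.x render path doesn't already enable the client-side arrays, a fuller sketch of the draw block would look like this (assuming obj2opengl generated arrow.h with its default arrowVerts/arrowNormals/arrowTexCoords/arrowNumVerts names):
// enable the client-side arrays the model uses
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
// set input data to arrays
glVertexPointer(3, GL_FLOAT, 0, arrowVerts);
glNormalPointer(GL_FLOAT, 0, arrowNormals);
glTexCoordPointer(2, GL_FLOAT, 0, arrowTexCoords);
// draw data
glDrawArrays(GL_TRIANGLES, 0, arrowNumVerts);
// disable the arrays again so later draws are unaffected
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);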
Don't forget to rebuild the project using ndk-build when you're finished.
I'm trying to build an overlay for an Android application that uses GLESv2.
I've hooked eglSwapBuffers in order to insert my rendering code just before the frame finishes.
I'm able to do simple things like drawing a square with the scissor test:
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 200, 200);
glClearColor(1, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);
I've also had success drawing simple shapes with the following code, but as soon as I start using vertex attrib pointers the application stops rendering correctly and shows a mostly-black screen with a small section that still displays correctly. I'm sure there's some OpenGL state that I'm clobbering here, but I'm not sure what it is. What would I need to save/restore before/after my draw calls in order to allow the app to continue to render correctly with my overlay?
// Save application state
GLint prevProgram;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram);
// Do overlay drawing
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, RectangleVertices);
glEnableVertexAttribArray(vPosition);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDisableVertexAttribArray(vPosition);
// Trying to restore application state here - there are probably more things that I'm missing.
glUseProgram(prevProgram);
What would I need to save/restore before/after my draw calls in order to allow the app to continue to render correctly with my overlay?
Everything that you modified ...
Note that even full state restoration can be inadequate for some use cases. Relying on specific object ID assignments is normally a bug in the application, but there are legitimate cases in development tooling, such as verbatim API trace replay tools, where such assumptions are made.
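As a concrete illustration, saving and restoring just the state that the snippet above clobbers might look like this (a sketch; program, vPosition and RectangleVertices are the names from the question, and any other state you touch needs the same treatment):
// Save the state the overlay is about to clobber
GLint prevProgram, prevArrayBuffer, prevAttribBuffer;
GLint prevEnabled, prevSize, prevType, prevNormalized, prevStride;
void* prevPointer;
glGetIntegerv(GL_CURRENT_PROGRAM, &prevProgram);
glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &prevArrayBuffer);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, &prevAttribBuffer);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_ENABLED, &prevEnabled);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_SIZE, &prevSize);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_TYPE, &prevType);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_NORMALIZED, &prevNormalized);
glGetVertexAttribiv(vPosition, GL_VERTEX_ATTRIB_ARRAY_STRIDE, &prevStride);
glGetVertexAttribPointerv(vPosition, GL_VERTEX_ATTRIB_ARRAY_POINTER, &prevPointer);
// Overlay drawing: the vertex data is a client-side array,
// so make sure no VBO is bound when setting the pointer
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(program);
glVertexAttribPointer(vPosition, 2, GL_FLOAT, GL_FALSE, 0, RectangleVertices);
glEnableVertexAttribArray(vPosition);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
// Restore: rebind the buffer the attribute originally sourced from,
// re-point the attribute, then restore the enable flag and program
glBindBuffer(GL_ARRAY_BUFFER, prevAttribBuffer);
glVertexAttribPointer(vPosition, prevSize, prevType, (GLboolean)prevNormalized,
                      prevStride, prevPointer);
if (prevEnabled)
    glEnableVertexAttribArray(vPosition);
else
    glDisableVertexAttribArray(vPosition);
glBindBuffer(GL_ARRAY_BUFFER, prevArrayBuffer);
glUseProgram(prevProgram);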
I am trying to implement ARCore with Xamarin and want to place a 3D object at a specific geolocation (like in Pokémon GO). I tried to follow this sample that I found in this forum: https://blog.xamarin.com/augmented-reality-xamarin-android-arcore/ but it seems that I can't change the position of the 3D object; it is placed only where a tap gesture hits a detected plane.
Is there a way to place an object and track it? I managed to do that with ARKit, but so far no success with ARCore on Android.
Any ideas would be helpful.
It looks like the Xamarin wrapper for ARCore simply wraps OpenGL. As a result, drawing the object requires setting multiple matrices (Model, View and Projection):
objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);
If you move this out of the foreach (var planeAttachment in planeAttachments) loop, you can set the anchorMatrix (a.k.a. the model matrix) to a fixed/hardcoded translation, and the object will then be fixed relative to the camera.
Here's a decent article on View matrices: https://www.3dgep.com/understanding-the-view-matrix/#The_View_Matrix
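For illustration, a hardcoded translation one meter in front of the origin, written as a raw column-major OpenGL matrix (a sketch in plain C terms, not the Xamarin wrapper's API):
// Column-major 4x4 identity with a translation of (0, 0, -1)
float anchorMatrix[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,   // column 0: X axis
    0.0f, 1.0f, 0.0f, 0.0f,   // column 1: Y axis
    0.0f, 0.0f, 1.0f, 0.0f,   // column 2: Z axis
    0.0f, 0.0f, -1.0f, 1.0f   // column 3: translate 1 m along -Z (forward)
};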
-- Begin Shameless Plug --
However, if you are open to trying new platforms, my team has built a cross-platform React-Native library for AR/VR development (Viro React): https://viromedia.com/viroreact/
If you're more familiar with SceneKit on iOS, we have built an analogous solution on Android w/ AR/VR support (ViroCore): https://viromedia.com/virocore/
Either solution would allow you to skip over the intricacies of OpenGL and simply position your objects/models with relative ease.
For example, placing your model 1 meter in front of you would be as simple as this (in Viro React):
<Viro3dObject source={require("./res/model.obj")} position={[0,0,-1]} type="OBJ" />
I am working with Minko and seem to be facing a lighting issue on Android.
I managed to compile a modified version of the code (based on the tutorials provided by Minko) for linux64, Android and HTML. I simply load and rotate 4 .obj files (the pirate one provided and 3 found on TurboSquid for demo purposes only).
The linux64 and HTML versions render correctly, but the Android one has a reddish light thrown into it, even though the binaries are generated from the same C++ code.
Here are some pics to demonstrate the problem:
linux64:
http://tinypic.com/r/qzm2s5/8
Android version:
http://tinypic.com/r/23mn0p3/8
(Couldn’t link the html version but it is close to the linux64 one.)
Here is the part of the code related to the light:
// create the spot light node
auto spotLightNode = scene::Node::create("spotLight");
// change the spot light position
//spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 2.f, 0.f)))); //ok linux - html
spotLightNode->addComponent(Transform::create(Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 8.f, 0.f))));
// create the spot light component
auto spotLight = SpotLight::create(.15f, .4f); //ok linux and html
// update the spot light component attributes
spotLight->diffuse(4.5f); //ori - ok linux - html
// add the component to the spot light node
spotLightNode->addComponent(spotLight);
//sets a red color to our spot light
//spotLightNode->component<SpotLight>()->color()->setTo(2.0f, 1.0f, 1.0f);
// add the node to the root of the scene graph
rootNode->addChild(spotLightNode);
As you can see, the color()->setTo call has been commented out, and the code works everywhere except on Android (after a clean rebuild). Any idea what the source of the problem might be here?
Any pointer would be much appreciated.
Thanks.
Can you test it on other Android devices or with a more recent ROM and give us the result? The LG-D855 (LG G3) is powered by an Adreno 330: those GPUs are known to have GLSL compiler defects, especially with loops and/or structs like the ones we use in Phong.fragment.glsl on the master branch.
The Phong.fragment.glsl on the dev branch has been heavily refactored to fix this (for directional lights only, for now).
You could try the dev branch with a directional light and see if that fixes the issue. Be careful though: the dev branch introduces beta 3, with some API changes, the biggest being the math API now using GLM and the *.effect file format. The best way to go is simply to update your math code to use the new API; everything else should be straightforward.
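For a quick test, swapping the spot light for a directional light could look roughly like this (a sketch based on the question's beta-2-style API and the Minko lighting tutorials; check the dev branch docs for the exact API):
// replace the spot light node with a directional light node
auto directionalLightNode = scene::Node::create("directionalLight")
    ->addComponent(DirectionalLight::create())
    ->addComponent(Transform::create(
        // same orientation as the original spot light
        Matrix4x4::create()->lookAt(Vector3::zero(), Vector3::create(0.1f, 8.f, 0.f))
    ));
// add the node to the root of the scene graph
rootNode->addChild(directionalLightNode);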
In my android project I am using OpenCV 2.4.8 and the function Imgproc.equalizeHist gives me strange results:
http://imgur.com/a/dhNqH
The first shows the original image, the second is what I get on Android, and the third is what I expected (made with ImageJ from the original using Process->Enhance Contrast).
Code:
Imgproc.equalizeHist(imageROI, imageROI); //src, dst
imageROI is CvType.CV_8UC1.
Am I supposed to do something with imageROI before calling equalizeHist? The OpenCV documentation is mostly C/C++, so I don't know if anything is different for Java on Android.
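For reference, here is the equivalent C++ call from the documentation; equalizeHist just needs an 8-bit single-channel source, which my CV_8UC1 imageROI already is:
#include <opencv2/imgproc/imgproc.hpp>
// src and dst may be the same Mat, so this equalizes in place
cv::equalizeHist(imageROI, imageROI);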
Any help would be welcome!
I'm a newbie in Android Vuforia AR development. After searching Google and the Vuforia forum with no results, I've come here for your suggestions. I have successfully replaced the teapot with my own 3D object; now I need to add more teapots to the "stones" target, like in this image link. Have you ever worked with a case like this? Please give me some pointers to begin.
Thanks and best regards!
Are you using Unity? Here are two suggestions:
You can programmatically instantiate prefabs on an image target following the code here, and just add additional transforms:
https://developer.vuforia.com/forum/faq/unity-how-can-i-dynamically-attach-my-3d-model-image-target
Alternatively, in your Scene Hierarchy, you can make additional GameObjects children of the ImageTarget prefab (probably the easiest way) and adjust their positions using the Scene Editor.
First, grab a fresh copy of the modelview matrix before transforming it. Second, upload your modelViewProjection matrix to the shader before using it:
// grab a fresh modelview matrix from the trackable's pose each frame
modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(trackable->getPose());
// apply the model transforms (rotation, then scale) to the fresh matrix
SampleUtils::rotatePoseMatrix(5.0f, 0.0f, 0.0f, 1.0f, &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                             &modelViewMatrix.data[0]);
// combine with the projection matrix to build the final MVP matrix
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);
// upload the MVP matrix to the shader before drawing
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                   (GLfloat*)&modelViewProjection.data[0]);
SampleUtils::checkGlError("ImageTargets renderFrame");
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
               (const GLvoid*)&teapotIndices[0]);