I am trying to implement ARCore with Xamarin and want to place a 3D object at a specific geolocation (like in Pokémon GO). I tried to follow this sample that I found in this forum: https://blog.xamarin.com/augmented-reality-xamarin-android-arcore/ but it seems that I can't change the position of the 3D object; it is placed only on a plane, according to the tap gesture.
Is there a way to place an object and track it? I managed to do that with ARKit, but so far no success with ARCore on Android.
Any ideas would be helpful.
It looks like the Xamarin wrapper for ARCore simply wraps OpenGL. As a result, drawing the object requires setting multiple matrices (Model, View, and Projection):
objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);
If you simply move these two calls out of the foreach (var planeAttachment in planeAttachments) { ... } loop, you can set the anchorMatrix (a.k.a. the modelMatrix) to a fixed/hardcoded translation, and the object will then stay fixed relative to the camera.
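For example, here is a minimal sketch of the hardcoded-translation idea, assuming the sample keeps the pose in an Android-style column-major float[16] matrix (the translation values are placeholders):

// Sketch only: build a fixed model/anchor matrix instead of taking one from a plane hit.
float[] anchorMatrix = new float[16];
Android.Opengl.Matrix.SetIdentityM(anchorMatrix, 0);
// Place the object 1 meter along -Z (placeholder values; tune for your scene).
Android.Opengl.Matrix.TranslateM(anchorMatrix, 0, 0f, 0f, -1f);

objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);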
Here's a decent article on View matrices: https://www.3dgep.com/understanding-the-view-matrix/#The_View_Matrix
-- Begin Shameless Plug --
However, if you are open to trying new platforms, my team has built a cross-platform React Native library for AR/VR development (Viro React): https://viromedia.com/viroreact/
If you're more familiar with SceneKit on iOS, we have built an analogous solution on Android w/ AR/VR support (ViroCore): https://viromedia.com/virocore/
Either solution would allow you to skip over the intricacies of OpenGL and simply position your objects/models with relative ease.
For example, placing your model 1 meter in front of you would be as simple as this (in Viro React):
<Viro3dObject source={require("./res/model.obj")} position={[0,0,-1]} type="OBJ" />
I am developing an application in Unity for the Android mobile platform, in which I rotate an object with a single-finger touch gesture with the help of this script:
using UnityEngine;

public class MouseDragRotate : MonoBehaviour {
    float rotationSpeed = 0.02f;

    void OnMouseDrag()
    {
        float XaxisRotation = Input.GetAxis("Mouse X") * rotationSpeed;
        float YaxisRotation = Input.GetAxis("Mouse Y") * rotationSpeed;
        // select the axis by which you want to rotate the GameObject
        transform.RotateAround(Vector3.down, XaxisRotation);
        transform.RotateAround(Vector3.right, YaxisRotation);
    }
}
The problem is that this script works only on built-in Unity primitives such as cubes, spheres, and capsules, but not on third-party 3D objects. So the question is simply: why is this script not working on third-party 3D objects?
You have to have some sort of Collider attached to the 3D model/object you want to interact with, because OnMouseDrag only fires on objects that have one. You can add a BoxCollider to any imported object, or, if there is a MeshFilter attached, you can add a MeshCollider instead.
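For illustration, here is a minimal sketch (the helper name is hypothetical) that gives an imported model a collider at runtime:

using UnityEngine;

public static class ColliderSetup
{
    // Hypothetical helper: ensure an imported model has a collider so that
    // OnMouseDrag (which requires one) can fire on it.
    public static void EnsureCollider(GameObject imported)
    {
        if (imported.GetComponent<Collider>() != null)
            return; // already interactable

        if (imported.GetComponent<MeshFilter>() != null)
            imported.AddComponent<MeshCollider>(); // tight fit to the mesh
        else
            imported.AddComponent<BoxCollider>();  // coarse bounding box
    }
}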
You should also make sure the script you show is added to the correct top-level object and not to a nested component of that object.
If you are still having problems, please show us more about the objects you are trying to apply this to, and which components and options are set on them.
I want to integrate an OSG scene into my Qt Quick application.
It seems that the proper way to do this is to use the QQuickFramebufferObject class and call osgViewer::Viewer::frame() inside QQuickFramebufferObject::Renderer::render(). I've tried https://bitbucket.org/leon_manukyan/qtquick2osgitem/overview.
However, it seems this approach doesn't work correctly in all cases. For example, on Android this code renders only the first frame.
I think the problem is that QQuickFramebufferObject uses the same OpenGL context both for the Qt Quick scene graph and for the code called within QQuickFramebufferObject::Renderer::render().
So I'm wondering: is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation that uses QQuickItem and a separate OpenGL context, such as https://github.com/podsvirov/osgqtquick?
Is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation that uses QQuickItem and a separate OpenGL context?
The easiest way would be to use QQuickPaintedItem, which is derived from QQuickItem. While by default it offers raster-image-style drawing, you can switch its render target to an OpenGL framebuffer object:
QPainter paints into a QOpenGLFramebufferObject using the GL paint engine. Painting can be faster as no texture upload is required, but anti-aliasing quality is not as good as if using an image. This render target allows faster rendering in some cases, but you should avoid using it if the item is resized often.
MyQQuickItem::MyQQuickItem(QQuickItem* parent) : QQuickPaintedItem(parent)
{
    // Unless we set the render target below, painting would use slow rasterizing,
    // but with this one call we get the GL paint engine:
    this->setRenderTarget(QQuickPaintedItem::FramebufferObject);
}
How do we render with this OpenGL target, then? The answer is still good old QPainter, fed with an image on update/paint:
void MyQQuickItem::presentImage(const QImage& img)
{
    m_image = img;
    update(); // schedules a call to paint()
}

// must implement the pure virtual
// virtual void QQuickPaintedItem::paint(QPainter *painter) = 0
void MyQQuickItem::paint(QPainter* painter)
{
    // or we can precalculate the required output rect
    painter->drawImage(this->boundingRect(), m_image);
}
While the QOpenGLFramebufferObject used behind the scenes here is not QQuickFramebufferObject, its semantics are pretty much what the question is about, and we've confirmed with the question author that a QImage can be used as the source to render in OpenGL.
P.S. I have been using this technique successfully since Qt 5.7 on desktop PCs and on a single-board touchscreen Linux device. I'm just a bit unsure about Android.
Hi, I'm making an app which detects face landmarks (68 points), and I'm having trouble optimizing the system. I'm using the HOG method to detect faces.
In detector(cv_grayscale, face_detections, -0.2); the type of detector is dlib::frontal_face_detector&.
There are so many computations in there that an Android CPU cannot handle them.
Has anybody solved this problem or a related issue?
bool DetectFacesHOG(vector<cv::Rect_<double> >& o_regions, const cv::Mat_<uchar>& intensity, dlib::frontal_face_detector& detector, std::vector<double>& o_confidences)
{
    double scaling = 1.3;
    cv::Mat_<uchar> upsampled_intensity;
    cv::resize(intensity, upsampled_intensity, cv::Size((int)(intensity.cols * scaling), (int)(intensity.rows * scaling)));
    dlib::cv_image<uchar> cv_grayscale(upsampled_intensity);
    std::vector<dlib::full_detection> face_detections;
    // millions of computations !!!!!!!!!!!!!!!!!!!!!!!!
    detector(cv_grayscale, face_detections, -0.2);
    ....
}
Download the latest OpenCV Android SDK from here.
It contains a lot of debugged samples. One of them is face detection, and it detects faces at 22 frames per second on my Xperia Z5 phone. If OpenCV errors are caused by the rotation of the camera, use this code; it is very clear and finds the best frame resolution for your camera view. If you also want face recognition, you can download the C++ modules, but then you must use the NDK (C++), because the Android SDK doesn't have face.h or the other recognition modules. You can combine detecting a face from Java with recognizing it from C++. Don't worry about speed; OpenCV optimizes that. The LBP cascade classifier XMLs perform face detection with high performance, but if you want better detection, use a Haar cascade.
I am developing a sniper game for Android using the Google Cardboard Unity SDK. I need to tweak the camera's FOV, which led me to a variable named 'mockFieldOfView' in Cardboard.cs. Tweaking that value in the Unity editor works fine, but as soon as I make a build for Android it doesn't take effect at all. I'm unable to figure out the issue. Any idea or suggestion would be highly appreciated.
Apologies for the late reply. ouflak, you can see the complete Cardboard.cs here: Cardboard.cs
You don't want to change "mockFieldOfView". That only affects the in-editor FOV. The value you want to change is "matchMonoFOV" on the StereoController. You also have to set a "CenterOfInterest" game object on the StereoController. It makes the stereo FOV attempt to match the FOV on the Main Camera (or whichever camera has the StereoController script).
See StereoController.cs
Update: v0.4.5 of Cardboard SDK supports your use case. Use "matchByZoom" and set the FOV you want on the StereoController's camera. No center of interest is needed.
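For the pre-0.4.5 approach described above, here is a minimal sketch; treat the exact types of matchMonoFOV and centerOfInterest as assumptions based on the v0.4.x StereoController, and check StereoController.cs in your SDK version:

using UnityEngine;

public class FovMatchSetup : MonoBehaviour {
    public StereoController stereoController; // the StereoController on your camera rig
    public Transform centerOfInterest;        // e.g. the target your sniper scope frames

    void Start() {
        // Assumption: matchMonoFOV is a 0..1 float and centerOfInterest is a
        // Transform on StereoController in Cardboard SDK v0.4.x.
        stereoController.matchMonoFOV = 1f;                   // fully match the mono camera's FOV
        stereoController.centerOfInterest = centerOfInterest; // required for the match to apply
    }
}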
I had the same issue, and in my case it helped to put the MainCamera closer to the object, which was the cockpit of a car in my case.
In order to put the MainCamera closer than 1 real-world meter to the object, you must change the default minimum value in Cardboard.cs. I use the following setting:
private readonly Vector2 defaultComfortableViewingRange = new Vector2(0.0f, 100000.0f);
I'm interested in AR applications on mobile devices, and naturally I would like to make better use of the compass.
The issue I've been working against isn't how twitchy the compass is (angular smoothing seems to solve that just fine). My main issue is that when the device is held vertically, the compass values start freaking out, causing the on-screen compass to flip all over the place. I don't have a lot of experience with mobile application development, so I'm not sure what's causing this issue, whether it's a Unity issue, or whether it's just a limitation of the digital compass. I know other apps do seem to be able to use the compass fine in any orientation, but this is all stupidly new to me.
I've definitely tried moving the phone in a figure of 8. The device I have to play around with is a Nexus 4.
using UnityEngine;
using System.Collections;

public class Compass : MonoBehaviour {
    // Use this for initialization
    void Start () {
        Input.location.Start(); // location services must be running for trueHeading to work
        Input.compass.enabled = true;
    }

    // Update is called once per frame
    void Update ()
    {
        var heading = Input.compass.trueHeading;
        transform.eulerAngles = new Vector3(0, 0, heading);
    }
}
Preamble :)
First off, I'm not (unfortunately) an expert in the subjects I'm about to talk about. But still, I've decided to share my thoughts.
Theory
The problem can be generalized in the following way. You want a continuous function that takes a 3D vector (the device orientation, in your case) and returns another vector orthogonal to the original one. Theory says (see the hairy ball theorem) that for some arguments such a function must return the zero vector. When that function is a compass, the zero vectors appear when the device is oriented vertically (and this feels quite natural if you have ever used an ordinary compass).
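Stated precisely (this is just the standard formulation of the theorem, nothing compass-specific): if $v : S^2 \to \mathbb{R}^3$ is continuous and tangent to the sphere, i.e. $v(p) \cdot p = 0$ for every $p \in S^2$, then $v(p) = 0$ for at least one $p$.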
Practice
Sometimes you want your app to tell which cardinal direction the phone's back (rear camera) is pointing to.
Or maybe you even want a combined approach:
If the phone is held flat, show what the phone's top is pointing to.
If the phone is held vertically, show what the phone's back is pointing to.
In both cases you need to use the gyroscope in addition to the compass; a sketch of this follows below.
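For illustration, here is a minimal Unity sketch of that combined approach. The tilt threshold and the yaw extraction for the vertical case are assumptions, not a tested algorithm:

using UnityEngine;

public class OrientationAwareCompass : MonoBehaviour {
    void Start () {
        Input.location.Start();   // trueHeading requires location services
        Input.compass.enabled = true;
        Input.gyro.enabled = true;
    }

    void Update () {
        // Gravity tells us how the device is held: z near -1 means lying flat,
        // z near 0 means held upright.
        Vector3 gravity = Input.gyro.gravity;
        bool heldFlat = Mathf.Abs(gravity.z) > 0.7f; // hypothetical threshold

        float heading;
        if (heldFlat) {
            // Flat: the magnetometer heading is usable; show where the top points.
            heading = Input.compass.trueHeading;
        } else {
            // Vertical: derive a yaw for the device's back from the fused attitude
            // instead of the raw compass (schematic; axis conventions vary by device).
            Vector3 back = Input.gyro.attitude * Vector3.forward;
            heading = Mathf.Atan2(back.x, back.y) * Mathf.Rad2Deg;
        }

        transform.eulerAngles = new Vector3(0, 0, heading);
    }
}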