I want to integrate an OSG scene into my Qt Quick application.
It seems that the proper way to do it is to use the QQuickFramebufferObject class and call osgViewer::Viewer::frame() inside QQuickFramebufferObject::Renderer::render(). I've tried to use https://bitbucket.org/leon_manukyan/qtquick2osgitem/overview.
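For reference, the basic shape of that approach looks roughly like this (my own sketch, not the code from the linked project; the viewer setup is assumed to happen elsewhere):

#include <QQuickFramebufferObject>
#include <QOpenGLFramebufferObject>
#include <osgViewer/Viewer>

class OsgItem : public QQuickFramebufferObject
{
public:
    Renderer *createRenderer() const override;
    osg::ref_ptr<osgViewer::Viewer> viewer;   // assumed to be configured elsewhere
};

class OsgRenderer : public QQuickFramebufferObject::Renderer
{
public:
    void synchronize(QQuickFramebufferObject *item) override
    {
        m_viewer = static_cast<OsgItem *>(item)->viewer;
    }

    void render() override
    {
        if (m_viewer.valid())
            m_viewer->frame();   // OSG renders into the item's FBO
        update();                // schedule the next frame
    }

    QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override
    {
        QOpenGLFramebufferObjectFormat format;
        format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        return new QOpenGLFramebufferObject(size, format);
    }

private:
    osg::ref_ptr<osgViewer::Viewer> m_viewer;
};

QQuickFramebufferObject::Renderer *OsgItem::createRenderer() const
{
    return new OsgRenderer;
}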
However, it seems this approach doesn't work correctly in all cases. For example, on the Android platform this code renders only the first frame.
I think the problem is that QQuickFramebufferObject uses the same OpenGL context both for the Qt Quick scene graph and for the code called within QQuickFramebufferObject::Renderer::render().
So I'm wondering: is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation that uses QQuickItem and a separate OpenGL context, such as https://github.com/podsvirov/osgqtquick?
Is it possible to integrate OpenSceneGraph into Qt Quick using QQuickFramebufferObject correctly, or is it better to use an implementation that uses QQuickItem and a separate OpenGL context?
The easiest way would be to use QQuickPaintedItem, which is derived from QQuickItem. While by default it offers raster-image style drawing, you can switch its render target to an OpenGL framebuffer object:
QPainter paints into a QOpenGLFramebufferObject using the GL paint engine. Painting can be faster as no texture upload is required, but anti-aliasing quality is not as good as if using an image. This render target allows faster rendering in some cases, but you should avoid using it if the item is resized often.
MyQQuickItem::MyQQuickItem(QQuickItem* parent) : QQuickPaintedItem(parent)
{
    // Unless we set the render target below, painting would use the slow raster engine,
    // but we can use the GL paint engine just by doing this:
    this->setRenderTarget(QQuickPaintedItem::FramebufferObject);
}
How do we render with this OpenGL target then? The answer can still be good old QPainter drawing the image on update/paint:
void MyQQuickItem::presentImage(const QImage& img)
{
    m_image = img;
    update();
}

// must implement
// virtual void QQuickPaintedItem::paint(QPainter *painter) = 0
void MyQQuickItem::paint(QPainter* painter)
{
    // or we can precalculate the required output rect
    painter->drawImage(this->boundingRect(), m_image);
}
While the QOpenGLFramebufferObject used behind the scenes here is not a QQuickFramebufferObject, its semantics are pretty much what the question is about, and we've confirmed with the question author that a QImage can be used as the source to render in OpenGL.
P.S. I have used this technique successfully since Qt 5.7 on desktop PCs and on a single-board touchscreen Linux device. I'm just a bit unsure about Android.
I am trying to implement ARCore with Xamarin and want to place a 3D object at a specific geolocation (like in Pokémon GO). I tried to go through this sample that I found in this forum: https://blog.xamarin.com/augmented-reality-xamarin-android-arcore/ but it seems that I can't change the position of the 3D object; it is placed only by the tap gesture on a plane.
Is there a way to place an object and track it? I managed to do that with ARKit, but so far no success with ARCore on Android.
Any ideas would be helpful.
It looks like the Xamarin wrapper for ARCore simply wraps OpenGL. As a result, drawing the object requires setting multiple matrices (Model, View and Projection):
objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);
If you move these two calls out of the foreach (var planeAttachment in planeAttachments) { ... } loop, you can set the anchorMatrix (a.k.a. the model matrix) to a fixed/hardcoded translation, and the object will then stay at that fixed pose, as sketched below.
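For example, something like this (a Xamarin C# sketch; it uses the Android.Opengl.Matrix helpers, and the translation values are arbitrary, not taken from the sample):

// Build the model (anchor) matrix by hand instead of taking it from a plane anchor.
float[] anchorMatrix = new float[16];
Android.Opengl.Matrix.SetIdentityM(anchorMatrix, 0);

// Hardcoded translation: 1 metre along negative Z from the world origin.
Android.Opengl.Matrix.TranslateM(anchorMatrix, 0, 0.0f, 0.0f, -1.0f);

// Then draw exactly as the sample already does.
objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);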
Here's a decent article on View matrices: https://www.3dgep.com/understanding-the-view-matrix/#The_View_Matrix
-- Begin Shameless Plug --
However, if you are open to trying new platforms, my team has built a cross-platform React-Native library for AR/VR development (Viro React): https://viromedia.com/viroreact/
If you're more familiar with SceneKit on iOS, we have built an analogous solution on Android w/ AR/VR support (ViroCore): https://viromedia.com/virocore/
Either solution would allow you to skip over the intricacies of OpenGL and simply position your objects/models with relative ease.
i.e. placing your model 1 meter in front of you would be as simple as (in Viro React):
<Viro3dObject source={require("./res/model.obj")} position={[0,0,-1]} type="OBJ" />
Hi, I'm making an app which detects face landmarks (68 points), and I'm having trouble optimizing it. I'm using the HOG method to detect faces.
In the call detector(cv_grayscale, face_detections, -0.2), where detector is of type dlib::frontal_face_detector&, there are so many computations that the Android CPU cannot keep up.
Has anybody solved this problem or a related issue?
bool DetectFacesHOG(vector<cv::Rect_<double> >& o_regions, const cv::Mat_<uchar>& intensity, dlib::frontal_face_detector& detector, std::vector<double>& o_confidences)
{
    double scaling = 1.3;
    cv::Mat_<uchar> upsampled_intensity;
    cv::resize(intensity, upsampled_intensity, cv::Size((int)(intensity.cols * scaling), (int)(intensity.rows * scaling)));
    dlib::cv_image<uchar> cv_grayscale(upsampled_intensity);
    std::vector<dlib::full_detection> face_detections;
    // millions of computations !!!!!!!!!!!!!!!!!!!!!!!!
    detector(cv_grayscale, face_detections, -0.2);
    ....
}
Download the latest OpenCV Android SDK from here.
It contains a lot of debugged samples. One of them is face detection, and it detects faces at 22 frames per second on my Xperia Z5 phone. If OpenCV throws errors because of camera rotation, use this code; it is very clear and finds the best frame resolution for your camera view. If you also want face recognition, you can download the C++ modules, but then you must use the NDK (C++), because the Android SDK doesn't include face.h or the other modules. You can combine detecting faces in Java with recognizing them in C++. Don't worry about speed; OpenCV optimizes that. The LBP cascade classifier XMLs for face detection perform very well, but if you want more detections, use the Haar cascades.
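To give an idea of the Java side, a minimal detection sketch could look like this (assuming the OpenCV Android SDK is initialised and lbpcascade_frontalface.xml from the SDK has been copied to a readable path; the class and variable names are illustrative):

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class SimpleFaceDetector {
    private final CascadeClassifier cascade;

    public SimpleFaceDetector(String cascadePath) {
        // e.g. the absolute path of lbpcascade_frontalface.xml copied into app storage
        cascade = new CascadeClassifier(cascadePath);
    }

    public Rect[] detect(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

        // Restricting the minimum face size (~20% of frame height) keeps detection fast.
        int minSize = Math.round(gray.rows() * 0.2f);
        MatOfRect faces = new MatOfRect();
        cascade.detectMultiScale(gray, faces, 1.1, 3, 0,
                new Size(minSize, minSize), new Size());
        return faces.toArray();
    }
}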
I am using the OpenGL library Rajawali3D to display my models. What I would like to know is how I can load a texture from my server based on the logged-in user. I've searched all over the internet trying to figure this out for months with no success. I found a website that explains how to load a texture from a non-local source, but when I tried it, it didn't work with Rajawali. Any suggestions or examples would be much appreciated.
Here's the website I attempted to use: texture from web
I'm not familiar with Rajawali; however, having just checked it out, it seems fairly easy to load a remote texture and apply it to a model.
I presume that you've loaded your 3D model and can show it fine. If so, you should take the following basic steps (which apply generally to all 3D modeling apps):
Prepare texture
Prepare material
Apply material to a model
There's a class called Texture in Rajawali which creates a texture object from a bitmap image. So, you should first download that image from your server. The download itself is separate from Rajawali, so you can get it done with any of many existing libraries.
Once you've finished downloading the image, you can feed it to the Texture class:
Texture mytexture = new Texture("texture", /* the downloaded image, e.g. as a Bitmap */);
Then, you should add it to a material
try {
    material.addTexture(mytexture);
} catch (ATexture.TextureException error) {
    Log.d(TAG, "Error Occurred");
}
Now, you can apply this material to a model
model.setMaterial(material);
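Putting the three steps together, a rough end-to-end sketch might look like this (the URL, userId and TAG are placeholders; the download must run off the UI thread, and exception handling is trimmed):

// 1. Download the image the server serves for the logged-in user.
Bitmap bitmap = BitmapFactory.decodeStream(
        new URL("https://example.com/textures/" + userId + ".png").openStream());

// 2. Wrap it in a Rajawali texture and attach it to a material.
Texture myTexture = new Texture("userTexture", bitmap);
Material material = new Material();
material.setColorInfluence(0); // show the texture rather than the material's base colour
try {
    material.addTexture(myTexture);
} catch (ATexture.TextureException error) {
    Log.d(TAG, "Error adding texture", error);
}

// 3. Apply the material to the already-loaded model.
model.setMaterial(material);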
I'm doing some image processing on Android, much like GPUImage does on iOS.
My product is intended to support any platform, and it already runs well with GLEW on PC.
It can also run with a GLSurfaceView on my Android device and on BlueStacks.
But the display is not necessary, since my interface looks like: ProcessImage(Bitmap dst, Bitmap src, ...other args...);
So my question is: how can I use GLES without a display such as a GLSurfaceView?
Here is what I do on PC:
call wglMakeCurrent(the_dc_to_use, the_gl_rc_to_use) before my job;
call wglMakeCurrent(NULL, NULL) after my job.
The context is created from my window's HWND.
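In code, that desktop flow is essentially this (a sketch; creation of the DC and GL context from the HWND is omitted):

// hdc and hglrc are assumed to have been created from the window's HWND beforehand.
wglMakeCurrent(hdc, hglrc);    // make the GL context current before the job

// ... run the GL-based image processing here ...

wglMakeCurrent(NULL, NULL);    // release the context after the job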
What is the similar function on Android?
Currently I'm working on ARSimpleNativeCars from the ARToolKit4Android release of 2012-03-09. Before running the ARSimpleNativeCarsActivity class, I added another menu class. In that class I start a new intent from a button:
Intent myIntent = new Intent(Assignment_Main.this, ARSimpleNativeCarsActivity.class);
startActivity(myIntent);
The camera view works fine but the model does not appear. When I check my logcat, there is an error: call to OpenGL ES API with no current context.
But if I run the ARSimpleNativeCarsActivity class directly, it works.
You might want to check the update to ARToolKit for Android released 2012-12-06, which includes a fix for an issue which might be affecting you. The release notes say:
A problem with texture loading when using Wavefront .obj models in the Android examples has been fixed. Now, a new function glmReadOBJ2 delays loading and submission of the textures until the model is ready to be drawn. Previously, texture loading was performed when the model was loaded, and typically no OpenGL context would be valid at that point.
In other words, initialising the native code portion in the application, including model loading, was failing because textures were being loaded without a valid OpenGL context. The code now implements lazy loading of textures. You might be seeing the same problem.
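The idea behind that fix, independent of ARToolKit's actual implementation, is to keep the decoded image data around at load time and defer the GL calls until the first draw, when a context is guaranteed to be current. A minimal sketch (illustrative only, not glmReadOBJ2's real code):

#include <GLES2/gl2.h>

/* Filled at model-load time; no GL calls happen here. */
typedef struct {
    unsigned char *pixels;   /* decoded RGBA image data                      */
    int            width;
    int            height;
    GLuint         texId;    /* 0 until the texture has been submitted to GL */
} LazyTexture;

/* Called from the draw function, i.e. while an OpenGL ES context is current. */
static GLuint lazyTextureBind(LazyTexture *t)
{
    if (t->texId == 0) {
        glGenTextures(1, &t->texId);
        glBindTexture(GL_TEXTURE_2D, t->texId);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, t->width, t->height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, t->pixels);
    } else {
        glBindTexture(GL_TEXTURE_2D, t->texId);
    }
    return t->texId;
}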