The title says it all: I want to try MediaPipe, specifically FaceMesh, on Android. The thing is, I only want to calculate face landmarks in the background and decide what to do with them later (like showing the landmark coordinates as text, or computing some other ratio). I don't want to show the OpenGL rendering. I tried commenting out this part, but it still doesn't work:
// glSurfaceView.setRenderData(faceMeshResult);
// glSurfaceView.requestRender();
I'm thinking of using a normal View or a plain GLSurfaceView instead of SolutionGlSurfaceView, but that doesn't work either. I found a similar Q&A, How to prevent face detection bounding box from being drawn in mediapipe android,
but the answer recommends editing a compiled class, and I don't think Android allows me to do that. Android only lets me extend a compiled class and use the subclass instead. The problem is that the class I would need to extend sits so deep that I would also have to extend every class that uses it; on top of that, it's C/C++ code, and I'm not sure Android lets me extend it easily.
Has any of you found an easy way to do what I'm trying to do?
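For reference, this is roughly the listener-only setup I have in mind, based on the legacy MediaPipe Android Solutions API (FaceMesh, FaceMeshOptions and CameraInput come from the official sample; the camera dimensions passed to start() are placeholders I made up, since there is no view to measure):

// Configure FaceMesh without ever creating a SolutionGlSurfaceView.
FaceMeshOptions options =
    FaceMeshOptions.builder()
        .setStaticImageMode(false)
        .setRefineLandmarks(true)
        .setRunOnGpu(true)
        .build();
FaceMesh faceMesh = new FaceMesh(this, options);
faceMesh.setErrorListener((message, e) -> Log.e("FaceMesh", "Face Mesh error: " + message));

// Only read landmarks in the result listener; never call setRenderData()/requestRender().
faceMesh.setResultListener(result -> {
    if (result.multiFaceLandmarks().isEmpty()) return;
    LandmarkProto.NormalizedLandmark noseTip =
        result.multiFaceLandmarks().get(0).getLandmarkList().get(1);
    Log.d("FaceMesh", "nose tip: " + noseTip.getX() + ", " + noseTip.getY() + ", " + noseTip.getZ());
});

// Feed camera frames straight into the solution.
CameraInput cameraInput = new CameraInput(this);
cameraInput.setNewFrameListener(textureFrame -> faceMesh.send(textureFrame));
cameraInput.start(this, faceMesh.getGlContext(), CameraInput.CameraFacing.FRONT, 640, 480); // placeholder size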
Related
I'm building an Android app in Android Studio and Kotlin that uses ARCore to render 3D models of bar charts. I need to render these models based on real-time data obtained from an API, but I don't know whether there is a way to modify a 3D model's structure at runtime so that the bar chart reflects the real-time data.
I'm aware that it's possible to render 3D models at runtime using Sceneform, and to change their textures, but that doesn't seem to help with my problem.
It may be worth considering whether you can use the animation functionality available for renderables to meet your needs - i.e. design your bar graph so that the changes you want are part of the animation design.
This will allow you to use Sceneform's built-in animation support: https://developers.google.com/ar/develop/java/sceneform/animation/overview-enable-animations
Like 3D models, the animations are created in advance and imported into the project when you build it.
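If you go that route, playing a pre-built animation on a renderable looks roughly like this (a sketch assuming Sceneform 1.7+ and a model exported with an animation named "bar_grow", which is a made-up name):

// Assumes barRenderable is a ModelRenderable loaded from an .sfb that contains the animation.
AnimationData growAnimation = barRenderable.getAnimationData("bar_grow"); // hypothetical animation name
ModelAnimator animator = new ModelAnimator(growAnimation, barRenderable);
animator.setDuration(1000); // ModelAnimator extends android.animation.Animator
animator.start();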
If your models are going to be relatively simple, you can also create simple renderables at runtime using ViewRenderable.builder() - this allows you to refer to a layout, or to a view created programmatically, where you can set the height of a bar in a graph, for example. More info here: https://developers.google.com/ar/develop/java/sceneform/create-renderables
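A sketch of that ViewRenderable approach; the layout R.layout.bar_view, the anchorNode, and the heightPx value are placeholders you would compute from your API data:

ViewRenderable.builder()
    .setView(context, R.layout.bar_view) // hypothetical layout for a single bar
    .build()
    .thenAccept(renderable -> {
        View bar = renderable.getView();
        // Resize the bar to reflect the latest API value.
        bar.getLayoutParams().height = heightPx;
        bar.requestLayout();

        Node barNode = new Node();
        barNode.setParent(anchorNode); // an AnchorNode created from a hit result
        barNode.setRenderable(renderable);
    });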
So I have a custom view that uses a TextureView in order to render its content through OpenGL and JNI, which works as intended.
Now, looking at the ARCore demos (and documentation), it is possible to render Android Views using the ViewRenderable class. But after a quick test with my custom view, it appears that it won't work, since it wasn't designed with this kind of behavior in mind...
So I have two questions here:
Is it possible to use a TextureView as a custom renderable in ARCore?
If not, is there a way to have a delimited area (anchored at a given point) where I can freely make my JNI/OpenGL calls?
If someone could point me to where in the documentation (or demos) I should look, it would be highly appreciated.
I'm in need of a recommendation of a free AR library that will allow me to display location indicators (2D views) on top of a camera overlay (you probably know what I mean).
So far I've tried using this iOS library, but it seems to be in poor shape, since I did not get good results: somehow the views got displaced, and I did not grasp the math behind it.
I'm also in need of an Android version, but that can wait, so I'd like an iOS recommendation.
I've used BeyondAR on Android a couple of times and it works:
https://github.com/BeyondAR/beyondar
You just need the coordinates of the object you want to show and an image for it.
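A minimal sketch of what that looks like with BeyondAR, based on the project's README (the coordinates and drawables are placeholders):

// Build a world and place one geo-located object in it.
World world = new World(this);
world.setDefaultBitmap(R.drawable.default_marker);   // placeholder drawable

GeoObject marker = new GeoObject(1L);
marker.setGeoPosition(41.90533734214473d, 2.565848038959814d); // placeholder lat/lng
marker.setImageResource(R.drawable.creature_1);                // placeholder image
marker.setName("My location indicator");
world.addBeyondarObject(marker);

// Attach the world to the camera fragment provided by the library.
beyondarFragment.setWorld(world);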
I am looking for the best way to develop an Android app that has one component that allows the user to draw shapes, rotate them, scale them, slice them, etc. (I am calling this component ActivityArea). In addition to this ActivityArea, the app needs regular buttons, TextViews, EditTexts, etc.
I have explored two options: using libGDX and building a custom view. Both approaches appear doable. However, with libGDX, as far as I understand, all the buttons, TextViews, etc. will also have to be created using the libGDX libraries. In this regard I have the following questions:
Is my understanding correct that libGDX will necessarily have to be used to render buttons and other regular Android views?
Is there any way of including a libGDX-powered view within an Android layout?
Are there other libraries/options available that can be used to get geometric functions within an Android app?
Any help is greatly appreciated.
You can use Android UI atop LibGDX if you like. Typically for this you'd use the AndroidApplication.initializeForView(...) method to create the libgdx view and inject it into your layout.
As far as other libraries go, if you're doing 2D shapes and don't need a consistent 60 fps, I'd probably just use Android's Canvas.
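A sketch of the initializeForView(...) approach mentioned above, assuming an activity layout with a FrameLayout called gdx_container next to your regular buttons and TextViews (the layout, container id and ShapesListener class are made-up names):

public class DrawingActivity extends AndroidApplication {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_drawing); // regular Android layout with buttons, TextViews, etc.

        // Create the libGDX view instead of letting it take over the whole activity.
        AndroidApplicationConfiguration config = new AndroidApplicationConfiguration();
        View gdxView = initializeForView(new ShapesListener(), config); // ShapesListener is your ApplicationListener

        // Drop it into the layout like any other child view.
        FrameLayout container = (FrameLayout) findViewById(R.id.gdx_container);
        container.addView(gdxView);
    }
}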
I'm looking for a 'basic' AR SDK that allows me to draw images and 3D shapes around the user (no matter where he is). It would be even better if the SDK includes a simple way to detect interaction with the shapes (something like onClick).
I made a project from scratch on Android, but there's still a lot of work to do, and I'll need to do the same on iOS afterwards... So that's why I'm looking for an SDK or a similar project (no matter what platform).
I tested Metaio, but it's quite expensive and maybe overkill for my purpose because it uses LLA coordinates.
I tested DroidAR on Android, but it's only for Android and it looks heavy too (I don't need the GPS).
How about Qualcomm's Vuforia? I was able to quickly get a sample project running on it.
EDIT: Looks like I was wrong about what it can do. According to this (which is slightly dated, so who knows), Metaio might be your only choice.
I'm not really sure what exactly you want to do, but if you simply want to show images or 3D models over the camera without any detection, you can achieve this very easily. I'll explain it for Android, and you can extend the same logic to iOS.
First approach:
Use a custom Android camera preview in your app, then use any game engine that fits your needs; I'd suggest jPCT-AE or Rajawali.
They are very simple to integrate and can be used for 2D images and 3D models.
This tutorial explains a lot.
Keep the GL surface transparent and you can have the model floating in space (see the sketch below).
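The transparent GL surface part is plain Android, independent of the engine; a sketch (myRenderer stands in for your jPCT-AE or Rajawali renderer):

// The camera preview sits below; the GL surface is made translucent and drawn on top of it.
GLSurfaceView glView = new GLSurfaceView(this);
glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);        // request an alpha channel
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
glView.setZOrderMediaOverlay(true);                   // draw above the camera SurfaceView
glView.setRenderer(myRenderer);                       // your engine's renderer (placeholder)
// Inside the renderer, clear with glClearColor(0f, 0f, 0f, 0f) so the camera stays visible.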
Second approach:
To add some more effect to your AR app, you can use sensor values to move the model in 3D space as the device moves; it gives a cool effect.
Use the first approach, but additionally collect sensor values and apply the resulting rotation matrix to the GL camera of your game engine. For the sensor values, follow here;
there's a good tutorial here (see the sketch below).
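For the sensor part, the standard Android way to get a rotation matrix from device movement is the rotation vector sensor; how you hand the matrix to the engine camera is engine-specific, so that step is left as a comment:

SensorManager sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
Sensor rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
final float[] rotationMatrix = new float[16];

SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // Convert the rotation vector into a 4x4 rotation matrix.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // Apply rotationMatrix to the engine's camera here (engine-specific,
        // e.g. convert it to the engine's matrix type and set it as the camera orientation).
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};
sensorManager.registerListener(listener, rotationSensor, SensorManager.SENSOR_DELAY_GAME);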
I hope this helps. I did all of this a long time ago, but I'll try to help if you want.