I detect a plane with Sceneform/ARCore and add a few models to an AnchorNode.
But the models disappear in the following cases:
Moving the phone quickly
Low lighting
The camera view being blocked
So why are they disappearing?
Does anyone have an idea how to overcome this issue?
It is natural, because all three cases you listed make it hard for ARCore to track feature points.
And there is really no way to overcome this, because tracking feature points is ARCore's job, not yours.
I would rather let users know that the application might not work properly in certain environments. Or you could ask the ARCore developers.
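If you do want to warn users, one option is to watch the camera's tracking state every frame and show a hint when it drops out of TRACKING. A minimal sketch, assuming a Sceneform ArFragment field named arFragment and a showTrackingWarning(boolean) UI helper of your own (both are assumptions, not part of the API):

```java
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.TrackingState;

// Minimal sketch: arFragment and showTrackingWarning(...) are placeholders.
arFragment.getArSceneView().getScene().addOnUpdateListener(frameTime -> {
    Frame frame = arFragment.getArSceneView().getArFrame();
    if (frame == null) {
        return;
    }
    Camera camera = frame.getCamera();
    // Fast motion, low light, or a blocked camera pushes the camera state
    // out of TRACKING; anchored nodes are not drawn again until it recovers.
    boolean tracking = camera.getTrackingState() == TrackingState.TRACKING;
    showTrackingWarning(!tracking);
});
```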
I am building an Android app for Google Cardboard virtual reality (VR) in the Unity3D engine. For VR, performance matters a lot for the best experience.
I used Simplygon (from the Asset Store) to combine all textures in my app.
Stats are shown below. Unfortunately, there is no performance improvement when running on mobile, although there is an improvement in the editor.
Am I missing something? What may be the reason for this?
Are there any options to reduce performance in the mobile build as well?
Profiler window stats in the Unity Editor:
Before using Simplygon: (FPS is around 60)
After using Simplygon: (crossed 250 FPS), but the Tris and Verts counts increased a lot.
Profiler window stats on the Android device:
Before optimization:
After optimization:
Are there any options to reduce performance in the mobile build as well?
Presumably you meant: are there any performance improvements you can make?
You also said that you benchmarked in the editor. This isn't a good idea. For starters, which renderer are you using in Unity? OpenGL behaves more like Android than DirectX does.
You need to measure more, and then measure more specifically (Unity has some good performance-logging helpers), before you can make useful changes.
Whenever you are optimizing, differences in hardware can have a huge impact on which optimizations are impactful and which don't make any difference. Since the Android architecture is so different from a PC, it becomes a total crapshoot as to whether changes that helped on the PC will also help on the mobile device. To make matters worse, things that help on one mobile device might not help on another one, say, from a different manufacturer.
In your case, the rise in polygon count was apparently simply more painful than the texture benefit on your particular phone. Although, to be honest, there could also be some simple settings to fix; someone else might have suggestions there.
Like others have said, various performance probes can help you find your bottlenecks.
I'm implementing a simple navigation app with the HERE SDK for Android.
It has some great features that would be quite useful compared to my current google maps based app.
However, the app is very slow when navigating as well as when I simply scroll around on the map. I assume that turning off the 3D buildings would improve performance, but I can't find a way to achieve this...
Is it possible? And how?
Thanks
Check out Map.setExtrudedBuildingsVisible(boolean visible).
See:
https://developer.here.com/mobile-sdks/documentation/android-hybrid-plus/topics_api_nlp_hybrid_plus/com-here-android-mpa-mapping-map.html#topic-apiref__setextrudedbuildingsvisible-boolean
There is another type of 3D building (3D landmarks, 3D models of famous buildings). You can toggle those via setLandmarksVisible(false).
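A minimal sketch of turning both off, assuming the map has already been obtained from your map fragment after the MapEngine finished initializing (the mapFragment name is just a placeholder):

```java
import com.here.android.mpa.mapping.Map;

// Minimal sketch: call this once engine initialization has completed.
Map map = mapFragment.getMap();
map.setExtrudedBuildingsVisible(false); // hide the generic extruded 3D buildings
map.setLandmarksVisible(false);         // hide the 3D landmark models as well
```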
By the way: what device are you running, and which CPU/GPU chipset does it have? We know that extruded buildings can cause performance trouble on a few GPUs (see: https://developer.here.com/mobile-sdks/documentation/android-hybrid-plus/topics/development-tips.html).
I've been exploring 3D scanning and reconstruction using Google's project Tango.
So far, some apps I've tried like Project Tango Constructor and Voxxlr do a nice job over short time-spans (I would be happy to get recommendations for other potential scanning apps). The problem is, regardless of the app, if I run it long enough the scans accumulate so much drift that eventually everything is misaligned and ruined.
Drift is also very likely whenever I point the device at a featureless space such as a blank wall, or when I point the cameras upward to scan ceilings. The device gets disoriented temporarily, which destroys the alignment of subsequent scans. Whatever the case, getting the device to know where it is and what it is pointing at is a problem for me.
I know that some of the 3D scanning apps use Area Learning to some extent, since these apps ask me for permission to allow area learning upon startup of the app. I presume that this is to help localize the device and stabilize its pose (please correct me if this is inaccurate).
From the apps I've tried, I have never been given an option to load my own ADF. My understanding is that loading in a carefully learned feature-rich ADF helps to better anchor the device pose. Is there a reason for this dearth of apps that allow users to load in their homemade ADFs? Is it hard/impossible to do? Are current apps already optimally leveraging on area learning to localize, and is it the case that no self-recorded ADF I provide could ever do better?
I would appreciate any pointers/instruction on this topic - the method and efficacy of using ADFs in 3D scanning and reconstruction is not clearly documented. Ultimately, I'm looking for a way to use the Tango to make high quality 3D scans. If ADFs are not needed in the picture, that's fine. If the answer is that I'm endeavoring on an impossible task, I'd like to know as well.
If off-the-shelf solutions are not yet available, I am also willing to try to process the point cloud myself, though I have a feeling it's probably much easier said than done.
Unfortunately, Tango doesn't have any application that can do this at the moment; you will need to develop your own application for it. In case you are wondering how to do this in code, here are the steps:
First, the application's learning mode should be on. When learning mode is on, the system starts recording an ADF, which allows the application to recognize an area it has already been to. For each point cloud we save, we should also save the timestamp associated with the points.
After walking around and collecting the points, we need to call the TangoService_saveAreaDescription function from the API. This step runs an optimization over the key poses saved in the system. Once saving is done, we use the timestamp saved with each point cloud to query the optimized pose again; to do that, we use the TangoService_getPoseAtTime function. After this step, you will see each point cloud placed with the right transformation, and the points will overlap correctly.
Just as a recap of the steps (a minimal sketch follows the list):
Turn on learning mode in the Tango config.
Walk around and save each point cloud along with its timestamp.
Call the TangoService_saveAreaDescription function.
After saving is done, call TangoService_getPoseAtTime to query the optimized pose for the timestamp saved with each point cloud.
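In case it helps, here is a minimal sketch of those steps using the Tango Java API (the Java counterparts of the C functions above). Only the Tango calls are real API; the storage and re-transform helpers are hypothetical placeholders you would implement yourself.

```java
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoConfig;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPointCloudData;
import com.google.atap.tangoservice.TangoPoseData;

class AdfScanSketch {
    private Tango mTango; // created and bound to the Tango service elsewhere

    // Step 1: turn on learning mode (and depth) before connecting.
    void connectWithLearningMode() {
        TangoConfig config = mTango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_LEARNINGMODE, true);
        config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
        mTango.connect(config);
    }

    // Step 2: in the point-cloud callback (wired up via
    // Tango.OnTangoUpdateListener), keep the points with their timestamp.
    void onPointCloudAvailable(TangoPointCloudData cloud) {
        savePoints(cloud.timestamp, cloud); // hypothetical storage helper
    }

    // Steps 3 and 4: save the ADF (this runs the pose optimization), then
    // re-query the optimized pose for each saved timestamp and re-transform
    // the matching cloud before merging everything.
    void saveAndRealign() {
        String adfUuid = mTango.saveAreaDescription(); // step 3
        TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
                TangoPoseData.COORDINATE_FRAME_DEVICE);
        for (double timestamp : savedTimestamps()) {
            TangoPoseData optimizedPose = mTango.getPoseAtTime(timestamp, framePair);
            applyPoseToCloud(optimizedPose, timestamp); // hypothetical helper
        }
    }

    // Hypothetical helpers: store the raw points and later apply the
    // optimized device pose to them.
    private void savePoints(double timestamp, TangoPointCloudData cloud) { /* ... */ }
    private double[] savedTimestamps() { return new double[0]; }
    private void applyPoseToCloud(TangoPoseData pose, double timestamp) { /* ... */ }
}
```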
I am looking for a (preferably Unity) SDK that can track a real-life humanoid mannequin with passive trackers at the end of its joints, using ONLY a built-in mobile device camera. From a technical point of view, essentially I want to achieve the same functionality as presented in this video: https://www.youtube.com/watch?v=KyO5FNhoApw
If I could find an SDK that does this, or at least something similar, I think I could build on it. The important part is to avoid using a Kinect and to rely only on the camera of the mobile device that runs the application.
Do you have any starting points in mind, maybe an SDK that might be helpful to check out?
Thank you so much in advance! T
I want to implement 3D Touch in Android, just like the 3D Touch on the iPhone 6S and 6S Plus.
I looked around on Google and couldn't find any consistent material.
I could only find an example in the Lua language, and I am not sure yet whether it's exactly what I am looking for.
So I thought that if there are no libraries out there, maybe I should implement the algorithm from scratch, or even create a library for it.
But I don't know where to start. Do you have any clues?
I believe you could implement something similar using MotionEvent; it has a getPressure() method that is supposed to return a value between 0 and 1 representing the amount of pressure on the screen. You could then do something different depending on the amount of pressure detected.
Note that some devices do not support this feature, and some (notably the Samsung Galaxy S3) will return inconsistent values.
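A minimal sketch of that idea, assuming a View called view and two hypothetical handlers of your own (onDeepPress / onNormalPress); the 0.8f threshold is arbitrary and, as noted above, would need per-device calibration:

```java
import android.view.MotionEvent;
import android.view.View;

// Minimal sketch: view, onDeepPress(...) and onNormalPress(...) are placeholders.
view.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            float pressure = event.getPressure(); // nominally 0..1
            if (pressure > 0.8f) {
                onDeepPress(v);    // treat as a "3D Touch"-style deep press
            } else {
                onNormalPress(v);  // treat as a regular tap
            }
        }
        return false; // don't consume; let normal click handling continue
    }
});
```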
I don't think it is possible on currently available Android devices. 3D Touch is hardware technology embedded in iPhone displays; you can't implement it just by writing some code in your Android application.
Short answer: no.
You would need to wait for Google to actually adopt the technology, if it proves useful, but I doubt that will happen in the near future, because Android is all about accessibility and these screens would be quite expensive.
Long answer: Android is open source. If you are making something internal, then go ahead; with some modifications it will allow you to do that. Build a device, put in your modified code, create your own application that takes advantage of the feature, and be happy to announce it to the world.