Implementing augmented reality and 3D objects in a mobile app (Android)

I want to make a social media mobile application that utilises all the latest technologies, including augmented reality.
I want the platform to let users add 3D objects and to implement advanced augmented reality features.
My question is: why haven't companies like Instagram or Snapchat implemented these features?
Do devices require certain specs for these features?
Thank you.

The data storage requirements for 3D objects on the web are enormous. If you are pulling data down from a server that hosts 3D models, a single object can easily run well over 100 MB, even into the gigabytes, depending on its complexity. So far, AR hasn't proven itself worthwhile enough commercially to justify those storage requirements. There is also the problem of the user waiting for 100 MB files to download and render in the app, and for a social media app that is too long to wait. Until compression algorithms shrink file sizes and/or internet speeds ramp up to roughly 10x what they are right now, I don't see this becoming a thing. Most apps that employ AR either download asset packages in the background as static asset bundles or include the AR model objects in the app payload.
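
To illustrate the background-download approach, here is a minimal Android sketch using DownloadManager to prefetch a model bundle over Wi-Fi. The URL and file name are hypothetical placeholders, and a real app would also track the returned download ID and handle failures.

```java
import android.app.DownloadManager;
import android.content.Context;
import android.net.Uri;
import android.os.Environment;

// Minimal sketch of prefetching a 3D asset bundle in the background so the user
// never waits on it in the feed. The URL and file name are hypothetical.
public class ModelPrefetcher {

    public static long prefetchModel(Context context) {
        Uri modelUri = Uri.parse("https://example.com/assets/scene_bundle.glb");

        DownloadManager.Request request = new DownloadManager.Request(modelUri)
                // Only pull large assets over Wi-Fi to respect the user's data plan.
                .setAllowedNetworkTypes(DownloadManager.Request.NETWORK_WIFI)
                .setDestinationInExternalFilesDir(
                        context, Environment.DIRECTORY_DOWNLOADS, "scene_bundle.glb");

        DownloadManager dm =
                (DownloadManager) context.getSystemService(Context.DOWNLOAD_SERVICE);
        // The returned id can later be used to query progress or completion.
        return dm.enqueue(request);
    }
}
```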

Related

Android Application: Augmented Reality or Image Recognition

I am interested in developing an Android application that uses the device's camera to detect moving "targets".
The three types of targets I need to detect and distinguish between are pedestrians, runners (joggers) and cyclists.
The augmented reality SDKs I have looked at only seem to offer face recognition, which doesn't sound like it can detect entire people.
Have I misunderstood what augmented reality SDKs can provide?
There is a big list of AR SDKs (including for the Android platform):
Augmented reality SDKs
However, to be honest, I strongly doubt that you will find any SDK (free or paid) for this task. It is too specific, so you should probably write it yourself using OpenCV.
OpenCV will let you detect objects (more or less), and then you will need to write some classification algorithm. I would recommend classifying based on object speed.
Then, once you have your objects classified, you can add any AR SDK to overlay something on your picture.
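
As a rough, hedged sketch of that pipeline, assuming the OpenCV Java bindings on Android: background subtraction finds moving blobs, and the displacement of the largest blob between frames gives a crude speed for classification. The speed thresholds and the single-target tracking below are illustrative assumptions, not calibrated values.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.BackgroundSubtractorMOG2;
import org.opencv.video.Video;

import java.util.ArrayList;
import java.util.List;

// Detect moving blobs with background subtraction, then classify them by
// apparent speed. Thresholds (pixels per frame) are made-up illustrative values.
public class MovingTargetClassifier {

    private final BackgroundSubtractorMOG2 subtractor = Video.createBackgroundSubtractorMOG2();
    private Point lastCentroid; // naive single-target tracking, for illustration only

    public String classify(Mat frame) {
        Mat mask = new Mat();
        subtractor.apply(frame, mask);

        // Find the largest moving region in the foreground mask.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Rect biggest = null;
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            if (biggest == null || r.area() > biggest.area()) biggest = r;
        }
        if (biggest == null) return "none";

        Point centroid = new Point(biggest.x + biggest.width / 2.0,
                                   biggest.y + biggest.height / 2.0);
        if (lastCentroid == null) {
            lastCentroid = centroid;
            return "unknown";
        }

        // Displacement between consecutive frames approximates speed.
        double speed = Math.hypot(centroid.x - lastCentroid.x,
                                  centroid.y - lastCentroid.y);
        lastCentroid = centroid;

        if (speed < 4)  return "pedestrian"; // hypothetical thresholds
        if (speed < 10) return "runner";
        return "cyclist";
    }
}
```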

Desktop-based Augmented Reality Application

I am developing an AR-based application that contains around 30-50 models. Is it possible to develop it on Android, given that there might be memory problems on mobile devices? Is there any desktop-based AR API/SDK that can be used with 3D animation?
Yes, you can create an Android application for augmented reality; there are many such applications on the Android market, especially ones that use GPS. Handling 50 models might cause memory problems, though on high-end devices like the Samsung Galaxy S4 and Note 2 I don't think you will face memory issues. You can also place your models on a dedicated server from which your application fetches them on demand, which reduces the chance of memory issues (a rough sketch of this follows after the links below).
Some basic examples for AR on Android are given here:
http://readwrite.com/2010/12/01/3-augmented-reality-tutorials#awesm=~ohLxX5jDGJLml9
I haven't worked on desktop AR applications, but I think this might help:
http://www.arlab.com/
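
As a sketch of the server-fetch idea mentioned above, assuming models are fetched on demand and only a few are kept resident at once: Android's LruCache can bound how much model data stays in memory. The cache budget and the loadModelFromServer() helper are hypothetical.

```java
import android.util.LruCache;

// Keep only a handful of the 30-50 models in memory at once; the rest live on
// the server and are re-fetched when needed. The 32 MB budget is illustrative.
public class ModelCache {

    private final LruCache<String, byte[]> cache =
            new LruCache<String, byte[]>(32 * 1024 * 1024) {
                @Override
                protected int sizeOf(String key, byte[] model) {
                    return model.length; // measure entries by their byte size
                }
            };

    public byte[] getModel(String modelId) {
        byte[] model = cache.get(modelId);
        if (model == null) {
            model = loadModelFromServer(modelId); // hypothetical network fetch
            cache.put(modelId, model);
        }
        return model;
    }

    private byte[] loadModelFromServer(String modelId) {
        // Placeholder: a real implementation would download the model file
        // (e.g. over HTTPS) and return its bytes.
        throw new UnsupportedOperationException("not implemented in this sketch");
    }
}
```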
Does "desktop application" include WebGL applications running in the web browser?
If so, then you might want to check out skarf.js, a framework that I have written for handling JavaScript augmented reality libraries in Three.js (JavaScript 3D library that wraps WebGL). It currently integrates two JavaScript-based augmented reality libraries: JSARToolKit and js-aruco.
The skarf.js framework takes care of a number of things for you, including automatic loading of models when the associated markers are detected (association is specified in a JSON file). There is also a GUI marker system which allows users to control settings using AR markers.
Integration with Three.js is just one line of code to create a Skarf instance and another line of code to update.
There are videos, live demos, source code, examples and documentation available. Check out http://cg.skeelogy.com/skarfjs/ for more info.

Android: augmented reality with recognition of millions of images

I am developing an AR app that does image recognition. Now I need my app to support millions of images. I was using Vuforia, but the free version restricts the number of images supported. Is there any other library that is good at image recognition and supports recognition of a huge number of images?
First of all, I would point out that an application that has to support millions of images will take a performance hit with any kind of SDK: it will heavily consume the phone's battery and make image detection very slow.
That said, I would suggest trying to develop an augmented reality application that detects images using the Metaio SDK for the Android platform.
Refer to their official website http://www.metaio.com for more info.

3D chess using Adobe Flash CS5 and OpenGL on Android: is this project possible?

Can I use both Adobe Flash CS5 and OpenGL to create an application on an Android OS 4.3 device?
I am creating a 3D chess game compatible with Android OS 4.3, so I am using Eclipse and the SDK, obviously.
The problem I have now is that I am meant to make the chess pieces human-like. For instance, the pawn pieces should look like miniature foot soldiers and the king piece should be a figure of a person sitting on a throne, etc. I started with OpenGL, but because I am new to it, I might not be able to carry out displaying the graphics with OpenGL. So I decided to use Adobe Flash CS5 to create the pieces and OpenGL to make the chess board, because I can do that and also because in my specs I said I would be using OpenGL.
I want to know if this will actually work and also if there is a much easier way of doing this I just haven't thought of. Any suggestions would be appreciated, especially how to implement this with the A.I.
If anyone has a sample or an idea I could work with, I will also be very grateful.
Adobe has said that "Stage 3D" support will be coming to mobile devices in the future, but in the meantime, there are not any ways to accelerate 3D with Adobe AIR.
Although Away3D or another 2.5D library would be fast enough for the web or desktop, I am not sure how well this will work for mobile, as AIR is slow enough even for 2D games.
Since chess is a relatively static game, you might be able to create 3D graphics and then render them to 2D sprites. I was the lead engineer for a large Facebook game, and we used this approach. Although it required more file size, it worked very well for quality and performance. The end result was something similar to Diablo 1, but in a cowboy theme instead of medieval.
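
As a rough sketch of the pre-rendered sprite approach on Android (independent of Flash or NME): each piece is exported from the 3D tool as a sprite sheet of rotation frames, and at runtime you simply blit the nearest frame. The frame layout assumed here (one row of equally sized frames) is an illustrative convention.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;

// A chess piece rendered from a sprite sheet of pre-rendered 3D frames.
// The sheet is assumed to hold frameCount equally sized frames in one row.
public class PieceSprite {

    private final Bitmap sheet;
    private final int frameWidth;
    private final int frameHeight;
    private final int frameCount;

    public PieceSprite(Bitmap sheet, int frameCount) {
        this.sheet = sheet;
        this.frameCount = frameCount;
        this.frameWidth = sheet.getWidth() / frameCount;
        this.frameHeight = sheet.getHeight();
    }

    /** Draws the frame whose pre-rendered camera angle is closest to angleDegrees. */
    public void draw(Canvas canvas, float angleDegrees, int x, int y) {
        float normalized = ((angleDegrees % 360f) + 360f) % 360f;
        int frame = Math.round(normalized / 360f * frameCount) % frameCount;

        Rect src = new Rect(frame * frameWidth, 0,
                            (frame + 1) * frameWidth, frameHeight);
        Rect dst = new Rect(x, y, x + frameWidth, y + frameHeight);
        canvas.drawBitmap(sheet, src, dst, null);
    }
}
```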
Although it does not have true 3D support, yet, you might also consider looking into NME. That Facebook game I made ran at 5-6 FPS using Flash, but topped 30 FPS using NME on my old Palm Pre (so not the fastest phone in the world). That might help give you extra overhead to be able to lean into rich graphics. The framework will also publish as a true C++ NDK application, so it is actually possible to extend or modify the framework (it's open source) with your own OpenGL calls.
Here's the website if you're interested: http://www.haxenme.org

Android: convert 2D images to 3D

I have dozens of DICOM images, and I have to convert them into a 3D image on an Android device. Any advice on how to do it? I have never handled any imaging like this before, so I have no idea where to start.
The search term you are looking for is "Volume Rendering". This is a deep, complicated topic. If you're looking for a place to get started learning, I'd recommend reading the relevant sections of "Handbook of Medical Imaging: Processing and Analysis", Isaac Bankman (ed.), ISBN# 0-12-077790-8.
Volume rendering of DICOM images is computationally very demanding, and DICOM is a complex protocol. Partly for these reasons, only a few implementations of DICOM viewers exist on mobile devices, versus hundreds on desktop/laptops. And most of these are 2D viewers which are more commonly used in clinical applications. There is no open source DICOM toolkit I know of for mobile devices (there are plenty for Unix/Mac/Windows).
For an example of a good implementation, look at the iPad/iPhone versions of OsiriX. Further resources are available on my website, I Do Imaging.
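
To make "volume rendering" slightly more concrete, here is a minimal Java sketch of one of its simplest forms, a maximum intensity projection over a slice stack. It assumes the DICOM slices have already been decoded into a plain intensity array and ignores pixel spacing, windowing and orientation.

```java
// Maximum intensity projection (MIP): treat the stack of 2D slices as a 3D
// volume and, for each output pixel, keep the brightest voxel along the
// viewing axis. Real DICOM parsing and display mapping are omitted.
public class MaxIntensityProjection {

    /**
     * @param volume slices indexed as volume[z][y][x], holding voxel intensities
     * @return a 2D image projected along the z axis
     */
    public static float[][] projectAlongZ(float[][][] volume) {
        int depth = volume.length;
        int height = volume[0].length;
        int width = volume[0][0].length;

        float[][] projection = new float[height][width];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                float max = Float.NEGATIVE_INFINITY;
                for (int z = 0; z < depth; z++) {
                    max = Math.max(max, volume[z][y][x]);
                }
                projection[y][x] = max;
            }
        }
        return projection;
    }
}
```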
Your best bet is probably to get involved with Xbox360 Kinect hacking/research and use the Kinect's 3D mapping functionality to convert your 2D images to a 3D scene. Kinect is probably the most robust 2D to 3D mapping system in the world right now - you're not likely to find much else.
