I am using EasyAR to make some augmented reality apps. It has some built-in classes for displaying videos, but I want to show some images on screen when EasyAR detects any matches, so that the user can interact with them. Is there any way to do that? And to be clear, I don't want to use Unity, just pure Android.
I think this project has what you need (it's not mine):
https://github.com/khoben/studyAR
Just adapt the texture loading code and use its ImageRenderer class. I did the same, and it works like a charm.
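In case it helps, here is a minimal sketch of the "texture loading" part using plain Android OpenGL ES 2.0, independent of EasyAR's own API; the class name and the idea of calling it from EasyAR's render callback are my assumptions based on the linked project:

```java
// Sketch: load an Android Bitmap into an OpenGL ES 2.0 texture.
// "TextureLoader" is just a name I made up; wiring it into EasyAR's
// render loop is assumed to work like the linked project.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public class TextureLoader {

    // Returns a GL texture id that an ImageRenderer-style quad can sample from.
    public static int loadTexture(Context context, int resourceId) {
        int[] textureIds = new int[1];
        GLES20.glGenTextures(1, textureIds, 0);

        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inScaled = false; // keep the original pixel dimensions
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);

        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIds[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

        bitmap.recycle(); // the pixel data now lives in GPU memory
        return textureIds[0];
    }
}
```

You would call loadTexture() on the GL thread, then have your ImageRenderer-style class draw a textured quad using the pose EasyAR reports for the tracked target.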
I'm trying to make an animated scene out of GIFs. I have a scene about a girl moving (walking, driving a car, running and sleeping). It is a fitness app and I need to add this scene. I already have the GIF images, but how do I combine them together?
I want to create something similar to this:
and this one
Actually, you should not use heavy GIFs in an app; they increase the size of the app. Use multiple images instead, and use this library to save yourself some code:
https://github.com/Q42/AndroidScrollingImageView
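If the scenes are short frame loops rather than continuously scrolling backgrounds, the stock AnimationDrawable from the Android SDK is another way to play "multiple images" without shipping a GIF. A minimal sketch (the frame drawables are hypothetical placeholders):

```java
// Sketch of the "multiple images instead of a GIF" idea using the stock
// AnimationDrawable. The frame drawables (frame_walk_1, ...) are placeholders.
import android.content.res.Resources;
import android.graphics.drawable.AnimationDrawable;
import android.widget.ImageView;

public class GirlSceneAnimator {

    // Builds a looping frame animation from individual images and starts it.
    // Call this once the view is attached (e.g. from onWindowFocusChanged),
    // not from onCreate, so the animation actually runs.
    public static void playScene(ImageView target) {
        Resources res = target.getResources();

        AnimationDrawable animation = new AnimationDrawable();
        animation.setOneShot(false); // loop the scene

        // Each frame is shown for 120 ms; tune per scene (walking, driving, ...).
        animation.addFrame(res.getDrawable(R.drawable.frame_walk_1), 120);
        animation.addFrame(res.getDrawable(R.drawable.frame_walk_2), 120);
        animation.addFrame(res.getDrawable(R.drawable.frame_walk_3), 120);

        target.setImageDrawable(animation);
        animation.start();
    }
}
```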
I'm in need of a recommendation of a free AR library that will allow me to display location indicators (2D views) on top of a camera overlay (you probably know what I mean).
So far I've tried using this iOS library, but it seems to be in bad shape, since I did not get good results: somehow the views got displaced, and I did not grasp the math behind it.
I'm also in need of an Android version, but that can wait, so I'd like an iOS recommendation.
I've used BeyondAR on Android a couple of times and it works:
https://github.com/BeyondAR/beyondar
You just need the coordinates of the object to show and the image.
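For reference, a rough Java sketch of that setup, written from memory of the project's README (double-check the exact API against the repo; resource ids, coordinates and the fragment are placeholders):

```java
// Rough BeyondAR setup, from memory of the project's README -- verify the
// exact API against the repo. Resource ids and coordinates are placeholders.
import android.content.Context;

import com.beyondar.android.fragment.BeyondarFragmentSupport;
import com.beyondar.android.world.GeoObject;
import com.beyondar.android.world.World;

public class ArOverlaySetup {

    // Adds one geo-located indicator image so it is drawn over the camera preview.
    public static void populateWorld(Context context, BeyondarFragmentSupport beyondarFragment) {
        World world = new World(context);
        world.setDefaultBitmap(R.drawable.default_marker); // fallback image for objects

        GeoObject poi = new GeoObject(1L);
        poi.setGeoPosition(41.3851, 2.1734);                  // latitude, longitude of the object
        poi.setImageResource(R.drawable.location_indicator);  // the 2D indicator to draw
        poi.setName("My point of interest");
        world.addBeyondarObject(poi);

        // The fragment shows the camera preview and renders the world's objects on top.
        beyondarFragment.setWorld(world);
    }
}
```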
We are working on a mobile application which has to be developed for the iOS and Android platforms. Most of the application is straightforward: it displays content from a back-end CMS system, for which we are considering using PhoneGap. The application has one module in which the user can browse through a virtual house and modify colors or replace a particular item, like a chair. This can easily be done using Unity3d, which we have in place. However, using Unity3d for the whole application is probably overkill and would require a lot of effort on the application front, while if we go the PhoneGap route, managing the 3D part is going to be a challenge.
The question I have is whether there is any way we can export the Unity scene and use it in a PhoneGap build, so that we get the best of both packages. If not, what are the options for getting Unity3d to handle the application's UI and services, or for doing something with PhoneGap that helps get the 3D part sorted?
Thanks in advance.
Figured out the solution.
I'm using PhoneGap completely. For the 3D module, I use a panorama image and display it with https://github.com/nicekei/jQuery-html5-canvas-panorama-plugin for interactivity. There are many other mobile-friendly jQuery plugins available; I chose to use the one above. You can find more at http://www.jquery4u.com/plugins/10-jquery-panorama-image-display-plugins/. For the walkthrough, I use annotations that link to another page, which shows the 3D panorama for that room.
You can render stills from 3D software and stitch them together with any panoramic stitcher; Photoshop also does the job. Hope it helps.
I am trying to put together a fake UI for an iOS and Android device without actually doing all that tedious UI work. Is there a way to mock up the UI from the images somehow? We have the Photoshop designs and mockups so far.
Are there any tools? I've checked Titanium and ForgedUI. While it's a fairly simple concept, I still think it's overkill to create all that; for instance, I have to slice my PSD for the buttons, and the data feed has to come from a URL, etc.
I just want something similar to Balsamiq Mockups, with screens linked to one another for an actual demo of a bunch of screens. We want to test the navigation, the fonts and the product concept while development gets the product ready.
Thanks in advance!
The easiest way would be to use HTML. If you already have Photoshop mockups, you can simply export those to (retina quality) PNG images and add some image maps for navigation. If you add the page to your home screen, you will have a pixel-perfect preview without Safari's navigation bar.
However, iOS apps depend heavily on animations. If you're going for a native UI anyway, you should consider creating a "real" UI prototype, using actual UIControl elements. While this is more work, it allows for much better UI evaluation.
For wireframing, a couple of options:
For high-fidelity mockups, I would do it in Interface Builder with a storyboard, like JustSid recommended. Just don't do the tedious work of hooking up the UI elements.
You could look at tools like Adobe Proto. I've never tried it, but it seems like it might be an option.
For my super-quick, low-fidelity mockups (i.e. the digital equivalent of "back of the envelope" sketches), I personally use my iPad, a stylus, and a drawing package like Sketchbook Pro (use whatever drawing package you want, but I like one with layers). I also keep blank device images on my iPad.
So, I open up my drawing package on my iPad, start a new drawing using a picture of a blank iPhone as the starting point, drop on a new layer (so I can draw on that layer and either hide or discard it to instantly get back to the blank device image), and just draw what I want my UI to look like on this new layer. This works great for brainstorming sessions. There's no automatic bridge from this conceptual, low-fidelity mockup to an actual Xcode storyboard, but I can draw a user interface mockup in seconds. I do this not only for brainstorming with designers, but also for fleshing out my own conceptual ideas.
I am new to augmented reality, and I was wondering if there is already an existing framework available for my situation. I want to make an app that scans a non-coded image (bottle, pen, phone, etc.) and, based on the image, sends the user to a specific video.
Is there anything like this currently available? I think this would be more marker-based, but I am not entirely sure. Thanks in advance!
I have used this in my app. I found it pretty easy, and it did a good job. It's Motorola's augmented reality framework.
You should use this if you only want a video to show up when an image is tracked: https://dev.metaio.com/creator/getting-started/
You just drag the video you want onto the image you want to trigger it, and then deploy it.
If you'd prefer to actually code it within a framework, you can try the metaio SDK, which also allows you to use 3D objects for tracking, i.e. you said you'd track an image of a pen, but why not track the pen itself? Tutorials and more info here:
metaio sdk tutorial: https://dev.metaio.com/sdk/getting-started/
3D object tracking tool: https://dev.metaio.com/toolbox/