We may be starting an Android project in which, after recognising an image with the camera, we need to display content generated in Unity.
The easy part would be to use WebGL to display it, but some devices do not support it directly. My question is: from Android (and later iOS), is it possible to download Unity content and then load and display it at runtime?
Or is it possible that I should instead direct all the effort into generating that content as a .jar and then use something like dependency injection to load it?
I already have a Unity scene in an activity, but of course it is defined in the project, not loaded at runtime.
Any help or guidance would be welcome.
Unity builds levels into the final runtime executable, so adding a downloaded 'scene' directly is not possible. The best way around this is to create a 'generator' scene which can accept input from a downloaded text file, such as JSON, and use that to render the level.
However, this method does assume that all the possible objects that can be rendered are in your game as prefabs. If you want to pull images from the net to be loaded into textures, the WWW class might get you started down the right path:
https://docs.unity3d.com/ScriptReference/WWW.LoadImageIntoTexture.html
I have been working with the ARToolKit SDK for Android for some time.
Within the ARToolKit SDK, I have worked with ARBaseLib and ARSimpleNativeCarsProj and implemented them successfully. But now I am trying to add external 3D objects (.obj and .mtl), and I am unable to render the new object files.
I have also looked into the source code provided at this link,
https://github.com/kosiara/artoolkit-android-studio-example
but the problem there is that the 3D object (a cube) is created in the draw() function using OpenGL calls directly; instead, I would like to load an external 3D object.
More Explanation:
OK, SimpleNativeCarsProj ships with two 3D object files (.obj and .mtl) in the assets/Data folder. Case 1: I tried replacing the existing 3D object with another 3D object; the app crashes on launch. Case 2: As I worked around a little, I noticed these files are pushed to the cache folder when the app loads, so I invalidated the caches and restarted Android Studio, rebuilt, and ran the app; it still crashes on launch. Technically, I am unable to replace, delete, or add other 3D object files to SimpleNativeCarsProject.
Any heads-up would be appreciated.
Convert your FBX files
The Encoder works with FBX (.fbx) files. We recommend using FBX wherever possible, as tool support for FBX is widely available.
http://www.wikitude.com/products/wikitude-sdk-features/wikitude-3d-encoder/
Give ArToolKitJpctBaseLib a try. It is a wrapper around ARToolKit + jPCT-AE (a 3D engine for Android) that aims to simplify the creation of AR apps for Android.
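For instance, loading an external .obj/.mtl pair from the app assets with jPCT-AE (rather than hard-coding geometry in draw()) looks roughly like this. This is only a sketch: the asset paths are placeholders, and it assumes a jPCT-AE World has already been set up by the wrapper.

// Uses com.threed.jpct.Loader, Object3D and World from jPCT-AE.
Object3D loadObjIntoWorld(android.content.Context context, World world) throws java.io.IOException {
    java.io.InputStream objStream = context.getAssets().open("Data/model.obj");  // hypothetical asset path
    java.io.InputStream mtlStream = context.getAssets().open("Data/model.mtl");  // hypothetical asset path
    // loadOBJ returns one Object3D per group; merge them into a single object.
    Object3D[] parts = Loader.loadOBJ(objStream, mtlStream, 1.0f);
    Object3D model = Object3D.mergeAll(parts);
    model.build();            // compile the mesh for rendering
    world.addObject(model);   // attach it to the AR scene
    return model;
}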
I'm working on a feature in which I want to add a picture over a video and save the result to the SD card.
In general, the user selects an image with a semi-transparent background and puts that image above the video; after the user presses the save button, they get a new video, but with the image overlaid on it.
I have heard about ffmpeg and have seen some of the commands it provides, but I don't know where I should initialise it. Can anyone provide me with an example of this?
Thank you.
One common approach is to use an ffmpeg wrapper to access ffmpeg functionality from your Android app.
There are several fairly well-used wrappers available on GitHub - the ones below are particularly well featured and documented (note: I have not used these, as they were not so mature when I was looking at this previously, but if I were doing something like this again now I would definitely build on one of these):
http://writingminds.github.io/ffmpeg-android-java/
https://github.com/guardianproject/android-ffmpeg
Using one of the well-supported and widely used libraries will take care of some common issues that you might otherwise encounter - having to load different binaries for different processor types, and some tricky issues with native library reloading to avoid crashes on subsequent invocations of the wrapper.
Because this approach uses the standard ffmpeg command-line syntax, it also means you should be able to search for and find help easily on many different operations (as anyone using ffmpeg in 'normal' mode will use the same syntax for the ffmpeg command itself).
For example, for your image-overlay case, here are some results from a quick search (ffmpeg syntax can change over time, so it is worth checking it is still current); a rough sketch of how the wrapper is invoked follows the links below:
https://stackoverflow.com/a/32250369/334402
https://superuser.com/a/678171
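To give an idea of where the initialisation goes, here is a rough sketch using the writingminds ffmpeg-android-java wrapper linked above. The file paths are placeholders, and the class names follow that library's documented API; double-check them against the version you actually depend on.

// Uses com.github.hiteshsondhi88.libffmpeg.* from the ffmpeg-android-java wrapper.
void overlayImageOnVideo(android.content.Context context) {
    final FFmpeg ffmpeg = FFmpeg.getInstance(context);
    try {
        // Load the ffmpeg binary that matches this device's CPU first.
        ffmpeg.loadBinary(new LoadBinaryResponseHandler());

        // Standard ffmpeg overlay filter: draw the image at x=10, y=10.
        String[] cmd = {
            "-i", "/sdcard/input.mp4",    // hypothetical input video
            "-i", "/sdcard/overlay.png",  // hypothetical semi-transparent image
            "-filter_complex", "overlay=10:10",
            "-codec:a", "copy",           // keep the original audio as-is
            "/sdcard/output.mp4"
        };

        ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
            @Override public void onSuccess(String message) { /* new video written */ }
            @Override public void onFailure(String message) { /* inspect the ffmpeg log */ }
        });
    } catch (FFmpegNotSupportedException | FFmpegCommandAlreadyRunningException e) {
        // Unsupported architecture, or a previous ffmpeg command is still running.
    }
}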
I've built an application that uses Tesseract (v3.03 rc1) to identify some specific text strings. These are, unfortunately, printed in a custom font that requires me to build my own traineddata file. I've built the application on both iOS (using https://github.com/gali8/Tesseract-OCR-iOS for inspiration) and Android (using https://github.com/rmtheis/tess-two/ for inspiration as well).
The workflow for both platforms is as follows:
I select a bounding box on the preview screen for where I can crop out the relevant text, and crop the image accordingly.
I use OpenCV to get a binary image (using OpenCV's adaptive threshold function with the same parameters for both platforms)
I pass this binary image to Tesseract. Both platforms (Android and iOS) use the same traineddata file.
And yet, iOS recognizes the text strings perfectly, while Android keeps misidentifying certain characters (6s for Ss, As for Hs).
On both platforms, I use the same white list string, I disable load_type_dawg and load_system_dawg, and also choose to save the blob choices.
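For reference, a minimal sketch of how those settings are applied on the Android side with tess-two's TessBaseAPI; the data path, language name and whitelist string here are placeholders, and the dawg variable names simply follow the ones mentioned above:

// Uses com.googlecode.tesseract.android.TessBaseAPI from tess-two.
String runOcr(String dataPath, android.graphics.Bitmap binaryBitmap) {
    TessBaseAPI baseApi = new TessBaseAPI();
    baseApi.init(dataPath, "customfont");   // hypothetical custom traineddata name
    baseApi.setVariable(TessBaseAPI.VAR_CHAR_WHITELIST, "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"); // placeholder whitelist
    baseApi.setVariable("load_type_dawg", "F");    // the dawg settings mentioned above
    baseApi.setVariable("load_system_dawg", "F");
    baseApi.setVariable("save_blob_choices", "T");
    baseApi.setImage(binaryBitmap);                // the OpenCV-thresholded image
    String recognized = baseApi.getUTF8Text();
    baseApi.end();
    return recognized;
}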
Has anyone encountered this kind of situation before? Am I missing a setting on Android that's automatically handled in iOS? Is there something particular about Android that hasn't crossed my mind?
Any thoughts or advice would be greatly appreciated!
So, after a lot of work, I found out what was wrong with my Android application (thankfully, it wasn't an issue with Tesseract at all). As I'm more familiar with iOS apps than Android, I wasn't sure how I could load the traineddata file into the application without requiring the user to have the file on their external storage device. I found inspiration in this project (http://www.codeproject.com/Tips/840623/Android-Character-Recognition), as it autoloads the traineddata file.
However, I misunderstood how it worked. I originally thought that the TessDataManager did a file lookup in the project's local tesseract/tessdata folder in order to get the traineddata file (as I do on iOS). However, that's not what it does. Rather, it checks the internal file structure (data/data/projectname/files/tesseract/tessdata/traineddatafilegoeshere) to see whether the file exists, and if it doesn't, it copies over the traineddata file it keeps in the Resources/Raw directory. In my case, it defaulted to the eng file, so it never read my custom font file.
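A rough sketch of that fix with tess-two (the resource and language names are made up for illustration): copy the custom traineddata from res/raw into the app's internal tessdata folder before calling init(), so the engine never falls back to eng.

// Copies res/raw/customfont.traineddata (hypothetical resource) into
// <internal files dir>/tesseract/tessdata/ once, then initialises tess-two with it.
TessBaseAPI initWithCustomTrainedData(android.content.Context context) throws java.io.IOException {
    java.io.File tessDir = new java.io.File(context.getFilesDir(), "tesseract/tessdata");
    if (!tessDir.exists()) tessDir.mkdirs();

    java.io.File trained = new java.io.File(tessDir, "customfont.traineddata");
    if (!trained.exists()) {
        java.io.InputStream in = context.getResources().openRawResource(R.raw.customfont); // hypothetical resource id
        java.io.OutputStream out = new java.io.FileOutputStream(trained);
        byte[] buffer = new byte[8192];
        int length;
        while ((length = in.read(buffer)) > 0) out.write(buffer, 0, length);
        in.close();
        out.close();
    }

    // init() expects the parent of the tessdata folder plus the language name.
    TessBaseAPI baseApi = new TessBaseAPI();
    baseApi.init(new java.io.File(context.getFilesDir(), "tesseract").getAbsolutePath(), "customfont");
    return baseApi;
}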
Hopefully this helps someone else having similar issues. Thanks to Robin and RmTheis for all of your help!
I'll try to make this simple:
If I create an AIR app from the Flash IDE, I can choose to embed a folder in my package. Then I can load the files using 'app:/' + filename. Everything is OK.
I have to move to Flash Builder because I can't test workers in the IDE (thanks, Adobe). My issue is that if I test/debug from Flash Builder, I get a stream error when calling 'app:/' + filename. If I launch the test in the IDE from FB, it works, but the workers don't. I should mention that the reason I'm using this method is that I have so many graphical assets, it's just easier to maintain/update them this way instead of using [Embed.. ] for every item, and it just works in the IDE...
I've added my folder to my source locations in Flash Builder, but it still seems I cannot use the 'app:/' scheme.
How can I make this work so I don't have to change my code and can still use 'app:/'? FB is such a confusing program...
Edit: I tested the workers again in the IDE build launched by FB (the 'test in Flash IDE' icon); I can trace their state with:
worker.start();
worker.addEventListener(Event.WORKER_STATE, this._handleWorkerState);

private function _handleWorkerState(__e:Event):void {
    trace(__e.currentTarget.state);
}
This traces 'new' and then 'running'. But for some reason it doesn't send or receive any data on any message channel, which, again, works in FB 4.7 when I run a debug, but then it doesn't find my files...
Error #2044: Unhandled ioError:. text=Error #2032: Stream Error. URL: app:/foldername..
So basically, I'm looking for a solution to at least one of my problems :)
EDIT:
OK, here it is: one issue was due to the wrong debugger version being installed (that was the workers part), so I can now work and compile in the IDE again. I haven't found an answer to why 'app:/' doesn't work from FB 4.7, so that remains the open question.
One option, since you have the Flash IDE, is to create a library with all of your images. Drop all your images into the library in Flash and export them for ActionScript. Then publish and create a SWC. You can then use the SWC, which is kind of like a zip file for display objects, in Flash Builder and access them like:
var mc:MovieClip = new imageExportedForAS3_1();
Create a top-level folder in your Flex project called, for example, images; copy all of your images into that folder; then every time you need to load an image, just use the source attribute with the route to the file, for example:
<mx:Image source="@Embed(source='../images/pic.png')"/>
I have never used the app:/ scheme before! Good luck!
I'm thinking about trying to build a complex Android app structure, maybe for a game or just for practice. I'm used to coding in Objective-C, so I'm not that experienced in Android...
Anyway, at work we structure our iOS apps like this:
-core framework: handles all the core items, navigation, data handling, mechanisms, etc. It's the same in all of our projects.
-project framework: its files mostly rely on (include) the core framework's files, extending/modifying them and doing the project-specific work.
-skin framework: contains all the resources and images; if we want to do a re-skinned project, we only have to alter this.
-main project: pulls everything together into an app. It just starts the application, nothing more; everything else is done by the different frameworks.
So I want to build a similar structure on Android, but I'm not sure I'm even able to... I see that there are Android projects and library projects, and that I can include them in each other... but my questions are:
1: Can I build a similar structure to the one on iOS?
2: Can I make, for example, a "core" library that contains the basic mechanisms, another library containing only the resources, and a third one (or the third could be the actual runnable project) that can get resources from the resource library, distribute jobs to the core library, etc.?
3: Can I organize the resources as I like (so as not to throw every picture into the drawable folder root, for example)? For instance, could I somehow have a characters folder (I know I can't create folders in the res folder), map files in a map folder, etc.? Is my only option to name them "properly" (map_sheet_type_1, map_sheet_type_2, character_sheet_type_1, etc.)? (If it's going to be a game, it would use OpenGL, lots of sprite drawing, etc.)
Or should I do everything in a single project, dividing everything into a lot of packages, and use libraries only for jobs like "how to transcode object A into object B"?
Thanks in advance for the answers.
Although I've never developed a game before, an app is an app:
Yes.
As you mention, you have executable projects and library projects; libraries can use other libraries, and the only thing that goes to the device is whatever the executable project builds. It's important to note that resources in compiled *.jar libraries cannot be used in your executable project (that's why ActionBarSherlock has to be used as a library project). In order to use a resource placed in a library project, that project must be open with its full source code in Eclipse so it can be compiled together with your app. That is because inside an app there's only one R (resources) object, and during the build all the resources from all the projects are merged together.
Unfortunately, no. As you mentioned yourself, the resources cannot be in subfolders, and even their file names are restricted: they can only use lowercase letters, numbers and _ (underscore). Just be clever and organised; write a spec or something.
Packages ARE the way to organize a single project in Java. Whether you use multiple projects or a single one is your choice. Usually you can encapsulate in a library project the stuff that can easily be re-used in different projects, and the final project will contain everything that is specific to that one app/game. To give you an example from where I work: we have a KitchenLibrary, a library project that we use in every single Android app we do. That library already contains excellent multi-threaded bitmap download and cache classes, easy HTTP GET/POST methods, and we used to have a MapFragment (now deprecated) before Google released their own MapFragment, etc. As you can see, all of those are things that can easily be re-used in several different projects.
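As a tiny illustration of that split (package and class names here are invented, not from any real project), a "core" library project can expose a base activity that each app project then extends with its project-specific pieces:

// In the core library project (hypothetical names): shared plumbing lives in a base class.
public abstract class CoreActivity extends android.app.Activity {
    @Override
    protected void onCreate(android.os.Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(getLayoutResource());   // layout supplied by the concrete app/skin project
        setUpNavigation();                     // shared navigation/data-handling mechanisms
    }

    // Each concrete app supplies its own layout resource.
    protected abstract int getLayoutResource();

    private void setUpNavigation() {
        // common plumbing reused by every project goes here
    }
}

// In the runnable app project, only the project-specific part remains.
public class GameActivity extends CoreActivity {
    @Override
    protected int getLayoutResource() {
        return R.layout.game_screen;           // hypothetical layout from this project
    }
}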
And just as a last tip: IMHO it is much easier to use Git directly from inside Eclipse, via EGit (http://www.eclipse.org/egit/).
Here are a couple links that should help you get started on this.
http://kasperholtze.com/android/how-to-best-organize-your-android-source/
http://bartinger.at/organization-tips-for-android-projects/
Also, when I worked at a start-up, we made an app for both iOS and Android. We started by creating native apps for each, and ended up with somewhat different structures. Global information/variables were handled differently, and I couldn't structure my files quite like I did on iOS. That said, the Android structure isn't terribly hard to figure out, and I made a fair number of sub-folders in my assets folder (for libraries and JS and such). And yes, you can definitely have several libraries.
As for having several projects in one app, see this link: How to create a single application from multiple Android projects