GOAL: I want to create a free educational tool that lets you write code and execute it in the same app (preferably AS3).
Essentially, I want an IDE running inside an AIR application, so the code can be run/executed inside that same application.
I'm not looking to create external files; all I need is to run/execute the code from the text field. I was wondering if it's possible to use mxmlc inside AIR to do live coding.
Or perhaps there are ANEs or native Android methods to achieve the same goal. (I'm assuming that if it's possible to achieve with Android native code, then an ANE could easily be created.)
I also thought about creating a limited library of the essential Flash display classes, so that once I hit Run in the AIR app it scans the source code and, using a string-search algorithm, executes a list of precompiled classes inside the same app (graphics API, display list, basic math logic, etc.). I understand that this approach doesn't have to target the AS3 language specifically, but I would like to avoid creating my own language for this purpose.
I'm not familiar with mxmlc, but I did read somewhere that it's possible to compile code on a server with Maven. In that case, would it be possible to send the user-written source code to the server, receive back the compiled SWF file, and load it into the app at runtime?
You could look at http://eval.hurlant.com/ (demo: http://eval.hurlant.com/demo/), but I'm not sure whether it will work in AIR / Android.
Related
I want to build and test the network and data layer of an Android app completely outside the Android environment. These would be the classes that make a network call to get JSON data and then convert the JSON objects into Java objects.
To test this bit of code, I just want to write a program with a simple "main" function that can parse a few command line arguments, execute some code, and give some output.
How would I do that? I'm relatively new to Android and Java. I'm guessing the solution involves some tinkering with the Gradle build configuration.
After these components are tested, I'd like to incorporate them into a particular Android project. But before that, I'd like to be able to develop these trickier bits of code without all the baggage and clutter of Android Studio and the emulator, using just the command line and a simple Vim editor.
Write your own Java project that uses Apache HttpClient and a JSON parser (optionally Gson);
you can also use other tools like Postman. Both are useful for network testing.
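For instance, a plain Java main along these lines can exercise the fetch-and-parse logic completely outside Android. The URL and the Item class below are made up for illustration, and I've used the JDK's HttpURLConnection instead of Apache HttpClient just to keep the sketch dependency-light; the idea is the same either way:

```java
import com.google.gson.Gson;
import java.io.InputStreamReader;
import java.io.Reader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NetworkLayerSmokeTest {

    // Placeholder POJO for the expected JSON; field names are hypothetical.
    static class Item {
        String id;
        String name;
    }

    public static void main(String[] args) throws Exception {
        // Take the endpoint from the command line, or fall back to a dummy URL.
        String endpoint = args.length > 0 ? args[0] : "https://example.com/api/item";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        try (Reader reader = new InputStreamReader(conn.getInputStream())) {
            // Convert the JSON response into a Java object with Gson.
            Item item = new Gson().fromJson(reader, Item.class);
            System.out.println("id=" + item.id + ", name=" + item.name);
        } finally {
            conn.disconnect();
        }
    }
}
```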
If you wrap the logic responsible for retrieving the remote JSON data in an Android-independent class, you can write a JUnit test around it. For running this test you don't need to start the emulator (as long as you don't embed any Android logic within that class).
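If the parsing really has no Android dependencies, such a test is just an ordinary JVM JUnit test. A minimal sketch (the JSON shape and field names are invented here, and Gson stands in for whatever parser you actually use):

```java
import static org.junit.Assert.assertEquals;

import com.google.gson.Gson;
import org.junit.Test;

public class JsonParsingTest {

    // Placeholder POJO for the JSON the app expects; adapt to your real model class.
    static class Item {
        String id;
        String name;
    }

    @Test
    public void convertsJsonToJavaObject() {
        // Runs on the plain JVM: no emulator, no Android framework classes.
        String json = "{\"id\":\"42\",\"name\":\"widget\"}";
        Item item = new Gson().fromJson(json, Item.class);
        assertEquals("42", item.id);
        assertEquals("widget", item.name);
    }
}
```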
The way I understand this question is that you want to save the time required to learn Android development.
Here are some tricks that I use to save time in Android development.
Do not use any UI element except a simple activity with a hello-world screen (the default template in Android when starting a new project). In simple words: no buttons, no text boxes, practically no touch input at all. (When the core of the app is functional, do whatever you want.)
Set sample values directly in the requests made to the server (e.g. set the complete URL directly in the request).
Output received data to the console using System.out.println. Again, no TextView, no ListView. (See the sketch after this list.)
Set your AVD to the lowest resources available. This makes the emulator very light but still fully functional and quick to use.
All of these combined save a lot of time in development.
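As a rough illustration of the approach, the whole "UI" can be the default hello-world activity plus a background thread that prints to the console. Everything here (class name, URL) is a placeholder, not taken from the question:

```java
import android.app.Activity;
import android.os.Bundle;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// No buttons, no TextView: the default activity just kicks off the request
// and dumps whatever comes back to logcat via System.out.println.
public class DebugMainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        new Thread(() -> {
            try {
                // Complete URL hard-coded directly in the request, as suggested above.
                URL url = new URL("https://example.com/api/items");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line); // shows up in logcat
                    }
                } finally {
                    conn.disconnect();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();
    }
}
```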
I am currently working on an Android application that evaluates images in different aspects, and I found that there are lots of great open-source algorithms that can be used.
Problem 1: Most of the algorithms are written in C/C++/MATLAB and cannot be applied directly.
I've found that the NDK is a tool that allows us to develop Android applications in other languages, but the setup procedure is complicated enough that I was stuck for days. So before I go further with it, I would like to first ask whether I can include other people's C/C++ source code directly, the way I would call a Java library.
Problem 2: For example, I would like to use the Point Matching algorithm's source code in my application, but there are lots of files inside, as it's just source code rather than a library/plugin. What are the steps to use the required functions in my Android application?
(Ideally, I would just feed some images to the algorithm and get results back, as with an ordinary function call, without having to understand how it works internally.)
You can't directly use other C++ libraries; you have to build them for Android first using the NDK. If there is a version of the library already built for Android, then of course you can use it directly by linking to it with the NDK.
You have two options here. First, you create a regular Java application for Android, write a C++ wrapper to handle calls to the native side, and build the necessary source files using the NDK. Your Java application makes calls to the wrapper using JNI, and the wrapper calls your actual C++ functions: Java -> JNI wrapper in C++ -> your actual C++ source.
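The Java end of that chain can be as small as a class that loads the library and declares the native methods. The library name and method signature below are invented for illustration, not the Point Matching project's actual API; the C++ wrapper then implements the generated JNI symbol and forwards to the real algorithm:

```java
// Sketch of the Java side of: Java -> JNI wrapper in C++ -> actual C++ source.
// "pointmatcher" and matchPoints(...) are hypothetical names.
public class PointMatcherBridge {

    static {
        // Loads libpointmatcher.so, which must have been built for Android with the NDK.
        System.loadLibrary("pointmatcher");
    }

    // Declared here, implemented in the C++ JNI wrapper, which calls the original code.
    public static native float matchPoints(byte[] imageA, byte[] imageB);
}
```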
The second option is going fully native, which leaves out Java, JNI calls, and the wrapper. You can access your source files directly, as if you were writing a desktop C++ application. In the end you still have to build with the NDK, but that step is required in either case.
As a suggestion, you can also take a look at OpenCV for image-processing purposes. It has libraries built for Android; all you have to do is link them.
Short version.
Download the opencv4android library. Import it into Eclipse and see if everything is fine (compile errors, output, etc.).
Secondly, import the face-detection sample application and try to understand how it works.
It has a Java part and a native part.
In order to understand how it works, you must understand how Java interacts with C++, i.e. more or less how the NDK works.
One important aspect is learning how to create your interfaces in C++, based on the native methods declared in Java. Once you get there, you can try creating your own application.
After that you can come here and ask specific questions.
I'm writing a cross-platform app (Store app and Android) with MvvmCross. In his helpful webcasts, Stuart Lodge shows how to use the Picture Chooser plugin to select an image from the device library. But what about other types of files (text, XML, and so on)? With an IMvxFileStore object it's possible to read text and binary files, but how do you choose them?
There isn't a ready-made solution for this that I know of - and two key mobile platforms really won't provide it (WinPhone and iOS don't really do file pickers).
However, if you wanted to implement your own file-picker interface on Droid, WPF, and WinStore, it should be relatively easy to do:
define an interface in your core project
implement the interface in WPF and WinStore using common dialogs
implement the picker in Droid using simple directory-listing code (or some 3rd-party component)
register the components during app setup
There's an N+1 video on injecting services and plugins which may help.
I'm developing an Android application using C++ and the Qt Necessitas SDK.
My application should load/save files, and I want to handle that using Android actions (so that I can target Google Drive as well as Dropbox, etc.).
The question is: how do I raise Intents from C++ (and which Intent should I raise to share/import my files)?
Rationale: how do I load and save files (either a custom MIME type or PDF) using Qt Necessitas on Android?
I'm a complete newbie to Java and I know nothing of JNI, but apparently this link explains how to [use JNI to] use intents in your Qt for Android applications: http://community.kde.org/Necessitas/JNI
It looks pretty straightforward, but I'm afraid I don't yet understand what I'm reading well enough to give you a better answer to your question.
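The usual pattern that page describes is a small Java helper that actually raises the Intent, and your C++/Qt code calls into it through JNI. A sketch of such a helper is below; the class name, method names, and request code are all made up for the example, and you would still need the JNI plumbing on the C++ side plus handling of onActivityResult() to get the picked file's Uri back:

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Hypothetical helper: C++ code would invoke these static methods via JNI.
public class IntentHelper {

    private static final int PICK_FILE_REQUEST = 42; // arbitrary request code

    // Ask Android for any app that can supply a file of the given MIME type
    // (Google Drive, Dropbox, file managers, ...). Result arrives in onActivityResult().
    public static void openFile(Activity activity, String mimeType) {
        Intent intent = new Intent(Intent.ACTION_GET_CONTENT);
        intent.setType(mimeType); // e.g. "application/pdf"
        intent.addCategory(Intent.CATEGORY_OPENABLE);
        activity.startActivityForResult(intent, PICK_FILE_REQUEST);
    }

    // Share/export an existing file through any app that accepts ACTION_SEND.
    public static void shareFile(Activity activity, String fileUri, String mimeType) {
        Intent intent = new Intent(Intent.ACTION_SEND);
        intent.setType(mimeType);
        intent.putExtra(Intent.EXTRA_STREAM, Uri.parse(fileUri));
        activity.startActivity(Intent.createChooser(intent, "Share file"));
    }
}
```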
Sources:
groups.google.com/forum/?fromgroups=#!topic/android-qt/U3eHis9mLrg
groups.google.com/forum/?fromgroups=#!topic/android-qt/UpgBRz8Imwo
Another option for some things (opening websites, making phone calls, possibly opening local PDFs and images) would apparently be QDesktopServices::openUrl() - although that seems to give open-only access (i.e., opening the resource but not pulling any data back from it).
I'm interested in this so that I can use the barcode-scanner app ZXing, which has an intent-as-URL at http://zxing.appspot.com/scan - I don't know what other little secrets are out there...
HTH until someone can give you a better answer
Using the stdout library, the data may end up in logcat or somewhere else, but it never heads toward the display screen.
How can I display what I want using native code, without going through Dalvik? T_T
There may well be no officially supported way to do that. Android is fairly fundamentally based around Java code running in the Dalvik virtual machine.
It used to be that your only option was to use JNI between your own native and Java code to pass the data through to the Java-level Android display APIs.
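The Java side of that bridge can be tiny. As a sketch (class and method names invented), native code would look up showFromNative() through JNI and call it whenever it has text to display:

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// Hypothetical activity whose showFromNative() is invoked from C/C++ via JNI.
public class NativeOutputActivity extends Activity {

    private TextView output;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        output = new TextView(this);
        setContentView(output);
    }

    // Called from native code through JNI; UI updates must run on the UI thread.
    public void showFromNative(final String text) {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                output.append(text + "\n");
            }
        });
    }
}
```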
In more recent versions, it is possible to write a so-called native activity where all of your code is C or C++. However, such an activity still runs in a process built around a Dalvik virtual machine running platform-supplied Java code, which calls into your code via JNI.
There is a native OpenGL ES API which you could use to plot text, but it is quite likely that behind the scenes some JNI is still involved, at the very least in the setup of the views.
Both the native activity and native use of OpenGL have examples in the NDK distribution.
I suppose you could also have your code interact via pipes or sockets with a different process which would display its output, but that's just moving the use of Dalvik elsewhere.