So I have googled this but just can't find a definitive answer.
I have an Android application which does a type of background calculation. I no longer have access to the source code, and furthermore it's written using the NDK, so I can't use dex2jar.
What I'd like to do is somehow attach a debugger and see the asm of the calculation to work it out, as I can't think of any other way to do this.
There doesn't seem to be too much information on the web around this.
If the APK is built in debug mode (so it contains gdbserver), you might get it to work, but you may need to either dissect the ndk-gdb script quite a bit, or build a mock project which ndk-gdb can look at to do its magic.
Alternatively, if you're OK with not stepping through it at runtime (inspecting the registers etc.), you can just disassemble the .so files. Try <NDK>/toolchains/arm-linux-androideabi-*/prebuilt/*/bin/arm-linux-androideabi-objdump -d libfoo.so. If the detection of ARM vs Thumb mode doesn't work, you may want to add -M force-thumb. This obviously requires more work, and you can't check the intermediate values, but it should be doable with almost any library.
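As a rough illustration of the whole round trip (toolchain paths depend on your NDK version and host OS, and the APK/library names here are just placeholders):

```
# grab the native library straight out of the APK (no device access needed)
unzip -o theapp.apk 'lib/armeabi*/*.so'

# disassemble it with the NDK's objdump
$NDK/toolchains/arm-linux-androideabi-4.9/prebuilt/linux-x86_64/bin/arm-linux-androideabi-objdump \
    -d lib/armeabi-v7a/libfoo.so > libfoo.asm

# re-run with -M force-thumb if the ARM/Thumb detection gets it wrong
```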
I have a Python 3 desktop application which I want to convert to an Android APK. I saw that the Kivy module exists and might be able to pull this off, but I am concerned about its ability to make the APK work just like the Python code. I use many different modules (PIL, opencv, pyserial, threading, watchdog, file_read_backwards, etc.).
Is this possible, or am I asking for too much? And if it is, how can I choose/handle which Android version the APK will target?
threading is in the stdlib, so you have it
file_read_backwards is pure Python and doesn't seem to require anything (as per setup.py), although the requirements_dev.txt lists a lot more things, so I'm not sure
PIL has a recipe (https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/pil/__init__.py), and there is also now a Pillow recipe, which is probably better to use (https://github.com/kivy/python-for-android/blob/master/pythonforandroid/recipes/Pillow/__init__.py), so that seems fine
pyserial is pure Python, and I think I remember people having success with serial over USB on Android, though I didn't try it myself. It might require a device that can act as a USB host and not just a client, but I don't know much about that.
opencv has a recipe; it might be a more touchy one, as it hasn't been touched in years (aside from some cosmetic fixes), and I think I've seen people having issues with it, but I'd say it's worth a shot.
Overall, it's certainly worth a try. Before converting any of your code, try just building a hello-world application with Kivy for Android, then redo the build adding your dependencies to the requirements one by one, and see if you can solve (or find help with) whatever breaks. If all goes well, then look into porting your code, which is the part where success will depend more on you than on what kivy/python-for-android can do.
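A rough sketch of that incremental approach using buildozer (one front end to python-for-android); the requirement names and spec keys below are from memory, so double-check them against the current documentation:

```
pip install buildozer
buildozer init                    # creates buildozer.spec next to your main.py

# in buildozer.spec, grow the requirements line one dependency at a time, e.g.:
#   requirements = python3,kivy
#   requirements = python3,kivy,pillow
#   requirements = python3,kivy,pillow,pyserial,file_read_backwards
# android.api / android.minapi in the same file control which Android versions you target

buildozer -v android debug        # produces bin/<yourapp>-debug.apk
```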
For a project, I am changing certain ContentProvider files in the Application Framework layer of the Android system. As I am trying different things, I was wondering if it is necessary to build the Android source for every change I make, or if there is a way to somehow emulate the system without the build?
I am not entirely sure what "building" means, as I cannot find a proper definition in the Android context. I assume it is some kind of compiling?
Converting the source code into an executable program? In that case I do not think there is another way, is there?
But do I understand building correctly in the first place? In that case, I would believe there is no other way than building the system every time and then seeing how it works out.
So I might have the solution right here, but I was hoping someone could assure me that it is right, or maybe tell me why it is not.
Any help is very much appreciated!
Building in this context means that all Android source files are re-compiled by the Java compiler and a massive .jar file is produced. This is the android.jar file that we see in the library dependencies of an Android application project.
Unfortunately, the short answer is that there is no way out for you except to re-compile the entire blob of the Android framework files each time you make a change. What I can advise you is to plan all your changes beforehand so you don't end up wasting a lot of time.
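For orientation, the usual AOSP rebuild cycle looks roughly like this (lunch targets and job counts vary by release and machine; mmm is the build helper for rebuilding a single module tree):

```
source build/envsetup.sh
lunch aosp_arm-eng          # pick the target you test on (device or emulator)
make -j8                    # full build the first time around

# after a framework-only change, a module rebuild plus a fresh system image is usually enough:
mmm frameworks/base
make snod                   # repack the system image with the rebuilt module
emulator                    # boot the result
```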
OK, so here is my story:
I am creating an app that requires me to take a couple of images and a video and merge them together. At first I had no idea what to use; I had never heard of FFmpeg or the NDK. After around 5 days of battling the NDK, switching to Ubuntu and going crazy with ndk-build commands, I finally got FFmpeg to compile using the dolphin-player example. Now that I can run FFmpeg on my computer and Android device, I have no idea what to do next.
Here are the main questions I have:
To use FFmpeg, I saw that I need to use some sort of commands. First off, what are these commands, and where do I run them?
Second of all, are the commands all I need? By that I mean, can I just run my application normally, execute the commands somewhere in it, and it will do the rest for me? Or do I need some sort of element in the code, for example a VideoEncoder instance or something?
Third of all, I saw people using the NDK to use FFmpeg. Do I have to? Or is it optional? I would like to avoid using C if possible, as I don't know it at all.
OPTIONAL: Last but not least, is this the best way of handling what I need to do in my application? If so, can someone guide me in a brief manner on how to use FFmpeg to accomplish said task (mention commands or anything like this)?
I know it's a wall of text but every question is important to me!
Thank you very much stackoverflow community!
I see my answer may no longer be relevant to your question, but I still put it here, as I've recently gone through that very same path and I understand the pain as well as the confusion caused by this matter (setting up the NDK with the mixed Gradle plugin took me 1 day, building FFmpeg took 2 days, and then I got stuck at "what am I supposed to do next??").
So in short, as @Daniel has pointed out, if you just want to use FFmpeg to run commands such as compressing, cutting, or inserting keyframes, then WritingMinds' prebuilt FFmpeg Android Java library is the easiest way to get FFmpeg running in your app. The downside is that, since it just runs commands, it needs an input and an output file for each operation. See my question here for further clarification.
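For a feel of what that looks like in code, here is a minimal sketch against the WritingMinds ffmpeg-android-java wrapper; the package and class names are written from memory and the exact API differs between versions, so treat this as an outline rather than the library's definitive interface:

```java
import android.content.Context;

// package/class names assumed from the WritingMinds ffmpeg-android-java project
import com.github.hiteshsondhi88.libffmpeg.FFmpeg;
import com.github.hiteshsondhi88.libffmpeg.LoadBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.ExecuteBinaryResponseHandler;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegNotSupportedException;
import com.github.hiteshsondhi88.libffmpeg.exceptions.FFmpegCommandAlreadyRunningException;

public class VideoCompressor {

    /** Runs a single FFmpeg command on files; the paths are placeholders. */
    public void compress(Context context, String inputPath, String outputPath) {
        try {
            FFmpeg ffmpeg = FFmpeg.getInstance(context);
            // extracts/loads the bundled ffmpeg binary for this device's ABI
            ffmpeg.loadBinary(new LoadBinaryResponseHandler());

            String[] cmd = {"-i", inputPath, "-vcodec", "libx264", "-crf", "28", outputPath};
            ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
                @Override
                public void onSuccess(String message) {
                    // outputPath now holds the re-encoded video
                }

                @Override
                public void onFailure(String message) {
                    // message contains FFmpeg's own log output
                }
            });
        } catch (FFmpegNotSupportedException | FFmpegCommandAlreadyRunningException e) {
            // unsupported ABI, or a previous command is still running
        }
    }
}
```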
If you need to do more complex tasks than this, then you have no choice but to build FFmpeg as a library and call its API. I've written down step-by-step instructions that worked for me (May 2016). You can see them here:
Building FFmpeg v3.0.2 with NDK r11c (please use Ubuntu if you don't want to rebuild the whole thing; Linux Mint failed me)
Using FFmpeg in Android Studio 2.1.1
Please don't ask me to copy the whole thing here, as it's a very long set of instructions and it's easier for me to keep one source of information up to date. I hope this can save someone's keyboard ;).
1. FFmpeg can be either an app or a set of libraries. If you use it as an app (with an executable binary installed), you can type the commands in a terminal. The app only has limited functions and may not solve your problem; in that case you need to use the FFmpeg libraries and call their APIs from your program.
2. To my understanding, the commands alone cannot solve your problem. You need to call the FFmpeg APIs. There are a bunch of sample codes for video/image encoding/decoding. You probably also need a container to package the outcome, and the FFmpeg libraries can do that too.
3. I prefer the NDK, since FFmpeg is written in C/C++. There are Java wrappers for FFmpeg; if you use them, the NDK is not required. However, not all FFmpeg functions are wrapped well; you can try them, and if they don't cover what you need, go back to the NDK solution.
4. The simplest way is to decode all your video/images into raw frames, combine them in the desired order, and encode them. In practice, however, this consumes too much memory. The key point then becomes: how can I do the same on the fly? It's not too hard once you reach this step.
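To make point 1 concrete, here is roughly what the command-line route would look like for your original task (merging stills with a video); the file names are placeholders, and in practice the two inputs need matching resolution and frame rate before the concat step:

```
# turn the still images into a short clip
ffmpeg -framerate 1 -i img%03d.png -c:v libx264 -r 25 -pix_fmt yuv420p stills.mp4

# append the existing video after the stills (video streams only)
ffmpeg -i stills.mp4 -i original.mp4 \
       -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[outv]" \
       -map "[outv]" merged.mp4
```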
Seeing as how there are no preprocessor directives (without jumping through some hurdles), I was wondering if there was an accepted way of doing the following:
Have an Android app in the regular ol' Android Market that uses things like the camera, mic, etc.
Conditionally "swap out" certain features based on some build parameter.
Produce two APKs, one for each store
Ideally, I would want to keep the Ant gymnastics to a minimum, and also would not have to maintain two sets of files (i.e. google_activity_layout.xml and amazon_activity_layout.xml).
edit: this answer looks interesting: https://stackoverflow.com/a/1813873/5416
I have been able to use the XMLTask antlib to modify AndroidManifest.xml as part of the -pre-build hook. I haven't used the mechanism you linked, but I would think that a combination of modifying the permissions and using the linked mechanism would achieve your goal, since the permissions are checked at runtime rather than at compile time.
You can find the library here: http://www.oopsconsultancy.com/software/xmltask/
One thing to note: it will take some tinkering. My "ant monkey business" took several hours of tinkering because of the way the APK is compiled together. If you are willing to run the full build a few times it should be less arduous, and you could probably just add a completely new task to the beginning of the generated build.xml. Let me know if you have questions, as I've been tinkering with this stuff a lot.
I want to use Android for a system I have, in order to use it as an embedded system that would run a specific application (which runs in the Chrome browser). However, this will not use Android in the ordinary way, but rather hack around it so that libraries like OpenCV and packages like Chromium can be installed on Android's Linux kernel. In addition, I would also need to figure out a way that would allow a USB camera to be supported.
I have done some research on this, but I am getting nowhere. Would somebody recommend resources that are relevant to this issue, or suggestions on how to approach it? Your feedback would be much appreciated.
Edit1: I am not intending for this question to be too broad. I only want to get more ideas on how you add libraries like OpenCV to Android, and whether there is a way to install the chrome browser as well.
Edit2: the Android system is on the Snapdragon platform.
Both Chromium and OpenCV can be built on Linux. Have you tried compiling them from source for Android and failed? What error did you get? Here's a link for cross-compiling Chrome for ARM processors:
http://code.google.com/p/chromium/wiki/LinuxChromiumArm
I would use http://www.android-x86.org/ first and see if it works there before trying to run it on ARM so that you can fail faster if it doesn't work.
You might want to spend some time with ROM hackers to get more insight. Ideally, you want to find some people who are doing something similar so you can work with them. Take a look at:
http://forum.cyanogenmod.com/
http://forum.xda-developers.com/
A lot of what those guys are doing does not apply to what you are looking for, but they do get much deeper into the OS than most programmers. You might get lucky, and not have to modify the Android source code yourself as thinksteep mentions.