I am trying to write a metronome application in Python, and I intend to publish the application for Android and iOS. I have found a few cross-platform frameworks like Kivy, but their audio support is lacking. More specifically, I need very precise audio timing and I can't rely on thread timing or events. I want to write audio data directly to the device's audio output, or create a MIDI file that can be played on the fly. The problem is, I cannot find any suitable framework for this task.
I know that many games have been written for Android in Python, and those games have excellent and precise sound timing. I need help finding either:
a way to create and play MIDI files on the fly on Android with Python, or
a Python framework for Android with a suitable audio API that lets me write sound directly to an audio device, or at least play audio with very accurate timing.
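For concreteness, here is roughly what I mean by creating a MIDI file on the fly: a minimal click-track sketch that runs on the desktop with the midiutil package (the tempo, pitch, and length are just placeholders). The part I can't solve is doing this, plus the playback, on Android/iOS:

    from midiutil import MIDIFile  # pip install MIDIUtil

    def make_click_track(bpm=120, beats=16, pitch=76):
        mf = MIDIFile(1)            # a single track
        mf.addTempo(0, 0, bpm)      # track 0, starting at beat 0
        for beat in range(beats):
            # one short click per quarter note; channel 9 is the GM percussion channel
            mf.addNote(0, 9, pitch, beat, 0.1, 100)
        return mf

    with open("click.mid", "wb") as f:
        make_click_track().writeFile(f)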
Thanks!
I'm looking for the same thing, and I too am looking at Kivy. The possible solution I can see for audio is hooking in a third-party library as a "recipe" in Kivy.
There is aubio, which apparently can be compiled for iOS/Android (see the Stack Overflow question regarding this), but I believe you have to provide your own audio source for it, which could potentially be handled by the audiostream subproject in Kivy.
It appears Kivy's audiostream imports the core libpd project, so you can use the libpd Python bindings. I think this is the path of least resistance, but I had issues when trying to run the examples.
I think both of these approaches could work, but both need some effort before you can start using them.
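Whichever backend you end up with, I think the key point for a metronome is the same: schedule clicks by sample position rather than by thread timers, and hand the audio pipeline a pre-rendered buffer. A rough stdlib-only sketch of what I mean (the final write-to-device call is whatever your chosen API provides):

    import math
    import struct

    RATE = 44100  # samples per second

    def click_buffer(bpm=120, seconds=2.0, freq=1000.0, click_ms=5):
        # Render 16-bit mono PCM with clicks placed at sample-accurate offsets.
        total = int(RATE * seconds)
        period = int(RATE * 60.0 / bpm)           # samples between clicks
        click_len = int(RATE * click_ms / 1000.0)
        samples = [0] * total
        for start in range(0, total, period):
            for i in range(min(click_len, total - start)):
                samples[start + i] = int(32767 * 0.8 *
                                         math.sin(2 * math.pi * freq * i / RATE))
        return struct.pack("<%dh" % total, *samples)

    pcm = click_buffer()
    # feed `pcm` to your backend's output stream here; timing now depends only
    # on the audio pipeline, not on Python thread scheduling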
I'm currently working on a project which requires video from Android and iOS devices to be streamed live to our server.
I've been researching this for a while, and have come across libraries that are dead, expensive, or dead expensive.
The only viable solution I've found so far is using Adobe FlashBuilder, but it is quite frankly not very nice for multiple reasons.
I would love to be able to do this natively for both platforms, but this is not a very juicy project for my employers, so they are reluctant to spend any cash on expensive libraries.
Are there any free/cheap libraries that fit the bill? Is there some other way for me to do this natively? The technology itself is negotiable; we are currently focused on RTMP since we are using FlashBuilder, but as long as the video is streamed up to a server, they don't particularly care what protocol is used.
Thanks for your help in advance, let me know if the way I've asked this question is not up to scratch.
I can speak for Android: in my application I am using the JavaCV library; specifically, FFmpegFrameRecorder can work with the RTMP protocol. My application works with a RED5 server. As a drawback, I would mention the large number of native libraries it pulls in.
See my answer; I described there that I used Android Studio with JavaCV and FFmpeg.
I want to develop an android app that involves recording audio from the microphone. Unfortunately, I don't know java, so I've been looking into doing it in a python framework/platform. I need to be able to:
build an 'elegant' UI (a simple one - buttons and lists);
record and play audio;
build an .apk file that I can publish on the Play Store.
I've looked into Kivy, but it seems audio capture is not available out of the box. Other solutions have that capability, but lack proper UI and APK-building support.
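One partial route I did find, though I haven't tested it, so treat it as an assumption: Kivy ships with pyjnius on Android, which can drive the platform's MediaRecorder directly (the output path below is a placeholder):

    from jnius import autoclass  # pyjnius, bundled with Kivy's Android builds

    MediaRecorder = autoclass('android.media.MediaRecorder')
    AudioSource = autoclass('android.media.MediaRecorder$AudioSource')
    OutputFormat = autoclass('android.media.MediaRecorder$OutputFormat')
    AudioEncoder = autoclass('android.media.MediaRecorder$AudioEncoder')

    recorder = MediaRecorder()
    recorder.setAudioSource(AudioSource.MIC)
    recorder.setOutputFormat(OutputFormat.THREE_GPP)
    recorder.setAudioEncoder(AudioEncoder.AMR_NB)
    recorder.setOutputFile('/sdcard/test_recording.3gp')  # placeholder path
    recorder.prepare()
    recorder.start()
    # ... record for a while ...
    recorder.stop()
    recorder.release()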
Any ideas? Thanks in advance!
Google has recently made great progress with their speech recognition software, which is used in several open source products, e.g. Chromium Web Speech and Android Handsfree texting. I would like to use their speech recognition as part of my server stack, however I can't find much about it.
Is the text recognition software available as a library or package? Or alternatively, can I call chromium from another program to transcribe some audio file to text?
The Web Speech APIs are designed to be used only in the context of either Chrome or Android. A lot of the work goes on in the client, so there is no public server-to-server API that would just take an audio file and process it.
If you search GitHub you'll find tools such as https://gist.github.com/alotaiba/1730160, but I am pretty certain that this method of access is 100% not supported, endorsed, or confirmed to keep working.
The method previously stated at https://gist.github.com/alotaiba/1730160 does work for me. I use it on a daily basis in my home automation programs. I use a Python script to capture audio and determine whether it is useful audio or just noise; it then sends the little audio snippet to Google and gets the text back, all in under a second! I have successfully integrated it into my programs, and if you google around you will find even more people who have as well.
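The core of it is tiny. Below is roughly what my script does, translated from the gist's curl command into Python; keep in mind the endpoint is unofficial and undocumented, so the URL, parameters, and response fields are all assumptions that may stop working at any time (and the requests package is assumed to be installed):

    import requests

    # Unofficial endpoint taken from the gist above; not supported by Google.
    URL = "https://www.google.com/speech-api/v1/recognize?client=chromium&lang=en-US"

    def transcribe(flac_path, rate=16000):
        # The audio must be FLAC-encoded at the sample rate declared in the header.
        with open(flac_path, "rb") as f:
            audio = f.read()
        headers = {"Content-Type": "audio/x-flac; rate=%d" % rate}
        resp = requests.post(URL, data=audio, headers=headers)
        resp.raise_for_status()
        # Historically the JSON contained a 'hypotheses' list with
        # 'utterance' and 'confidence' fields.
        return resp.json()

    print(transcribe("snippet.flac"))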
I am the author of a MIDI-based musical application. In my application I generate a .midi file with a small lib that I wrote and play it with MediaPlayer, and that's enough for that app. However, in a future app I plan to have more interactivity, and that's where I would probably need a streaming API.
As far as I know, Android lacks official APIs for real-time MIDI synthesis. Still, I can see some apps that use MIDI in quite advanced ways. The question is: how? Do they use the NDK to access Sonivox directly, or are there unofficial APIs for that after all? Any ideas?
Also, I'm very interested in whether Google is planning to improve MIDI support in future versions of Android (in case anybody from Google sees this :))
Thanks.
You should check out libpd, which is a native port of Pure Data for both Android and iOS. It gives you access to the system's MIDI drivers while still letting you prototype your software with very high-level tools.
Java has significant latency, so I think this should be done with the NDK. Check this question; it has a couple of hints. This was also reported as an Android issue (NDK support for low-latency audio), so there might be some tips or info there too.
Here is a simple but great sample application that successfully streams MIDI on Android: https://github.com/billthefarmer/mididriver
You will have to put your MIDI messages together manually, though (the example creates two MIDI messages, for note on and note off). You can refer to the MIDI specification to further control the MIDI channels. The problem is that the default sound fonts on Android sound quite bad.
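Putting the messages together is straightforward in any language; here is a sketch in Python of the two messages the example uses (the channel, note, and velocity values are arbitrary):

    def note_on(channel, note, velocity):
        # Status byte 0x90 = note on; the low nibble selects MIDI channel 0-15.
        return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

    def note_off(channel, note):
        # Status byte 0x80 = note off; release velocity is commonly 0.
        return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

    msg_on = note_on(0, 60, 100)   # middle C on channel 0
    msg_off = note_off(0, 60)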
I am new to mobile development, but have some experience with web app development. I am looking at starting work on a mobile app that involves shooting and editing video on the phone, initially for Android and possibly to be extended to other platforms. PhoneGap seemed like an interesting way to start, both given my realm of familiarity and the potential to port to multiple platforms. I noticed that it has camera and audio recording support on Android. But how much sense does it make to develop something that relies so heavily on the phone's hardware with something like PhoneGap? Would it be so cumbersome and poorly performing done this way that I should just start from scratch with Java?
You should think of PhoneGap as an easy way to create the UI; anything unusual (like video support) will still have to be written in Java (or whatever the platform's native language is).
How portable the Java code is to a platform like BlackBerry will depend on how much platform-specific code you use (and you may not have a choice).