Android has its NDK to handle native code. I was just wondering how deep you can go with low-level programming on this platform: for example, whether you can control what the phone transmits over the GSM/UMTS network, or see the raw data that is received.
Is working with the embedded phone modem generally possible on this or other platforms within the scope of their APIs?
I presume that interfering with the lower communication layers (like adding compression to the voice data, changing the encoding, and so on) means playing with the firmware of the communication modules, and is generally something that phone manufacturers don't officially support.
Anyway, is there anything about these topics you can recommend reading or searching for?
A public API package (it's not really rich in terms of your requirements):
http://developer.android.com/reference/android/telephony/package-summary.html
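For a sense of how shallow that public layer is, here is a minimal sketch using only standard SDK calls. It observes radio state (signal strength, call state), which is roughly the limit of what you can do without touching the platform source:

```java
import android.content.Context;
import android.telephony.PhoneStateListener;
import android.telephony.SignalStrength;
import android.telephony.TelephonyManager;

// Minimal sketch: the public API lets you *observe* the radio (signal
// strength, network/call state), but gives no access to the raw GSM/UMTS
// traffic itself.
public class RadioStateObserver {
    public static void start(Context context) {
        TelephonyManager tm =
                (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);
        tm.listen(new PhoneStateListener() {
            @Override
            public void onSignalStrengthsChanged(SignalStrength strength) {
                // GSM signal strength in "asu" units (0-31, 99 = unknown)
                android.util.Log.d("Radio", "ASU: " + strength.getGsmSignalStrength());
            }

            @Override
            public void onCallStateChanged(int state, String incomingNumber) {
                android.util.Log.d("Radio", "Call state: " + state);
            }
        }, PhoneStateListener.LISTEN_SIGNAL_STRENGTHS
         | PhoneStateListener.LISTEN_CALL_STATE);
    }
}
```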
About the low-level APIs (the Radio Interface Layer of Android), you can read here:
http://www.netmite.com/android/mydroid/development/pdk/docs/telephony.html
About implementing (modifying) the RIL:
http://www.netmite.com/android/mydroid/development/pdk/docs/telephony.html#androidTelephonyRILImplementing
In this case, however, you need to touch the Android source code, so for application development, it is not an option.
I've always liked cheap smartphones (around $50), because with little money I can get a powerful system with lots of sensors and things like that. So I wondered whether it is possible to use the hardware without going through the very limited Android APIs, programming it at a low level, with root of course. In particular I wanted to see how the LTE module works and experiment with it while having full control; the Android API does not allow you to do much.
UPDATE: I'm using something called libhybris, a wrapper that permits the use of Android driver blobs in Linux.
The first layer of software for the phone is the bootloader. It tells the processor what partition to load into memory for executing the kernel. This is the level that is usually blocked by manufacturers because of greedy corporate reasons that are beyond the scope of this site.
The second layer of the phone is the Linux kernel. Rooting is the process of gaining root user access to this layer. Root is the main administrator account, with permission to do anything to the device; gaining access to this layer is what most people mean by rooting. A large portion of the kernel is written in C, with other parts in C++. What happens at this level is where all the magic is. For most phones, this is where the code for the modem resides, and talking to it can usually be done via AT commands over serial. Sensors are also programmed at this level and communicate via drivers. Root access is not normally needed to read sensor data; it's usually just a matter of permissions.
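To make the "AT commands over serial" point concrete, here is a hypothetical sketch: some rooted, Qualcomm-based phones expose the modem as a serial device node such as /dev/smd0 (the path varies by chipset and may not exist on your hardware at all; root is required to open it):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;

// Hypothetical sketch: query signal quality over a modem serial node.
// Requires root; the device path is chipset-specific and used here only
// as an example.
public class AtCommandProbe {
    public static String querySignalQuality() throws Exception {
        try (FileOutputStream out = new FileOutputStream("/dev/smd0");
             FileInputStream in = new FileInputStream("/dev/smd0")) {
            out.write("AT+CSQ\r".getBytes()); // standard 3GPP signal quality query
            out.flush();
            byte[] buf = new byte[256];
            int n = in.read(buf);             // expect something like "+CSQ: 21,0  OK"
            return new String(buf, 0, n);
        }
    }
}
```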
The next layer is the Android operating system: a Java runtime instance runs on top of the kernel, and it in turn executes the Android framework. This is the portion that most users will see, and it is primarily written in Java. In reality you can run any kind of user interface at this level.
A very brief view of Android apps:
The Android API provides a way for Java developers to write "apps" that communicate with the kernel and access different parts of the phone's hardware. These apps can also be written using C++. Only recently has Google integrated C++ into Android Studio, and the most common and still most effective way of doing so is using the Qt framework.
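To make the C++ route concrete, here is a minimal sketch of the Java side of an NDK call ("native-lib" and stringFromJNI() are placeholder names; the native half would be built from C/C++ with the NDK):

```java
// Minimal sketch of the Java side of an NDK call: the app loads a native
// library (built from C/C++ with the NDK) and declares the function that
// library exports.
public class NativeBridge {
    static {
        System.loadLibrary("native-lib"); // loads libnative-lib.so, built by the NDK
    }

    // Implemented in C/C++ as Java_..._NativeBridge_stringFromJNI
    public static native String stringFromJNI();
}
```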
It's a bit problematic.
Hardware manufacturers actually do that.
Take into account that Android is Linux, much like other distributions.
The manufacturers develop hardware and then compile a version of Android that sits on top of it. Each Android build is specifically tailored to the hardware and equipped with drivers that give the main OS access to the different hardware capabilities.
For example, some tablets will tweak the Android OS to not support cellular communication because the vendor decided to cut costs and ship the tablet without a cellular module.
From here you have two options:
1. Hack a specific piece of hardware and work out how the OS communicates with it.
2. Find hardware manufacturers that release some or all of their Android OS code. This is a much simpler way, as you can both learn from and extend the Android OS for that specific device.
An example of the second way is Sony, whose AOSP program allows low-level access to some Sony devices.
Also, there is always the Android NDK, which gives you more low-level access to Android, but you are still constrained by its APIs, so I'm not sure it will help you.
Is it possible to retrieve raw ECG signals programmatically from QardioCore via Bluetooth?
I only have an Android device, and since the Android Qardio app doesn't work with QardioCore, I wanted to know if anyone has already tried to write their own app for use with Android, and whether it worked?
The manufacturer wrote me in an email:
Qardio Core will be compatible with iOS only and we are focused on providing
a smooth experience on Apple devices that use iOS 10.0 or later.
[...] there are no immediate plans to bring QardioCore to the Android
platform [...]
This apparently means that there is no SDK either.
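If you want to probe the device yourself anyway, standard Android BLE service discovery will at least show what the device exposes. A minimal sketch (exploratory only: the ECG stream itself may well be proprietary or encrypted):

```java
import android.bluetooth.*;
import android.content.Context;

// Sketch: connect to the device and dump its GATT services/characteristics
// to see what it advertises. Even if standard services show up, the raw
// ECG data may be behind a proprietary protocol.
public class GattProbe extends BluetoothGattCallback {
    public static void probe(Context ctx, BluetoothDevice device) {
        device.connectGatt(ctx, false, new GattProbe());
    }

    @Override
    public void onConnectionStateChange(BluetoothGatt gatt, int status, int newState) {
        if (newState == BluetoothProfile.STATE_CONNECTED) {
            gatt.discoverServices();
        }
    }

    @Override
    public void onServicesDiscovered(BluetoothGatt gatt, int status) {
        for (BluetoothGattService s : gatt.getServices()) {
            android.util.Log.d("GattProbe", "service: " + s.getUuid());
            for (BluetoothGattCharacteristic c : s.getCharacteristics()) {
                android.util.Log.d("GattProbe", "  characteristic: " + c.getUuid());
            }
        }
    }
}
```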
I have also emailed the manufacturer and got a similar answer:
[...]
Qardio Core will only be compatible with iOS for the foreseeable
future. We strive to provide the best experience and have not been
able to guarantee flawless operation on the multitude of Android
devices.
An SDK is also not available at this time.
[...]
I'm completely new to Qt mobile, and I don't even have solid mobile dev experience, so sorry if I'm asking something obvious.
I need to develop a mobile app that should have call-like functionality (over the internet, not a GSM call). When answered, it should start streaming audio and video from our server. The call should be one-way only, meaning the stream goes from server to device, but never from device to server.
So my questions are:
1. Is this possible in Qt? I chose Qt because I'm familiar with it and I want to support desktop, Android and iOS, maybe Windows Phone later.
2. Is it possible to receive a call when the screen is off and my app is not running? I mean, this is a mobile device; the app won't be running all the time, it should be started only when a call is made from the server to the device. How can I achieve that? I think Viber, Skype and other messaging apps do that.
Many thanks in advance :)
1 - Well, sure it can, although it might not be as easy and straightforward as you'd want. Qt Multimedia does provide the necessary classes, but you do have to check how supported they are on the platforms you need to target.
However, the classes Qt provides are either too high level to serve any purpose but their intended purpose, or too low level, so you must implement pretty much everything by hand. In this aspect, the benefit of Qt being capable of producing portable apps may not outweigh the ease of using certain platform specific libraries that offer video streaming out of the box. In other words, it might be easier to write separate Android and iOS apps using Android and iOS libraries than a single Qt app that will work on both.
But just in case you decide to go with Qt, as I mentioned for the time being you are left with one option - do most of the work yourself. This means you should record audio using QAudioRecorder and capture frames periodically from a QCamera in a buffer of given length, compress that data (and preferably encrypt it if security is a concern), send it to the client over a QTcpSocket connection, decompress (and decrypt) the data and play it back in sync. It is certainly doable, but as already mentioned, it will be much harder since Android and iOS libraries offer pretty much "out of the box" solutions. Alternatively, you might decide to use a third party solution that offers support for all the platforms you target.
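To illustrate the wire format that pipeline boils down to, here is a toolkit-neutral sketch in plain Java of the sender side (in Qt you would feed it from QAudioRecorder/QCamera instead; the framing is an arbitrary choice):

```java
import java.io.DataOutputStream;
import java.util.zip.Deflater;

// Toolkit-neutral sketch of the "capture, compress, send" loop described
// above. Each frame is length-prefixed so the receiver can re-assemble it
// from the TCP byte stream.
public class FrameSender {
    public static void sendFrame(DataOutputStream out, byte[] rawFrame) throws Exception {
        Deflater deflater = new Deflater();
        deflater.setInput(rawFrame);
        deflater.finish();
        byte[] compressed = new byte[rawFrame.length + 64]; // zlib adds only tiny overhead
        int len = deflater.deflate(compressed);
        deflater.end();
        out.writeInt(len);              // length prefix
        out.write(compressed, 0, len);  // compressed payload
        out.flush();
    }
}
```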
2 - Whether your screen is on or off, that will be a call to a platform-specific API, and so are requests to turn it on or keep it on for a given duration. Whether your app is running on the device or not is easy: just try a TCP connection to the client on the device; if it succeeds, the client is running. If you want to receive calls while your app is not running, you will have to implement a platform-specific service that runs all the time instead and starts your application when a call is received.
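The liveness check in particular is tiny; a sketch from the server's side (host and port are placeholders):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of the liveness check described above: if the TCP connect
// succeeds, the client app is up and listening.
public class ClientProbe {
    public static boolean isClientRunning(String deviceIp, int port) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(deviceIp, port), 2000); // 2 s timeout
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}
```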
Qt Mobility does not have a framework for supporting VoIP, as you can see from the reference:
http://doc-snapshot.qt-project.org/qt-mobility/
You could create the VoIP framework of your app natively (which will require a good understanding of the various audio and video frameworks available), but another way to go would be to use a VoIP SDK that supports both iOS and Android, such as the Twilio mobile client:
https://www.twilio.com/client/mobile
Qt mobile will help you with your application's UI; however, you will have to write some native code for each platform you are going to target. Note that Qt is expanding fast, so you might not need to get your hands dirty with platform-specific native code in upcoming versions of Qt.
Yes, you can receive a VoIP call when your application is closed by creating a background service (but as far as I know Qt doesn't do the job for you; you'll need to do it natively). That is the way Skype and Viber work.
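On Android, the skeleton of such a service might look like the sketch below (class and action names are placeholders; on modern Android you would additionally need a foreground notification to keep the service alive):

```java
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;

// Minimal sketch of the platform-specific piece: a long-lived Android
// service that keeps a lightweight connection to the server open and
// launches the UI when a call comes in.
public class CallListenerService extends Service {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // ... keep a socket / push channel to the server open here ...
        return START_STICKY; // ask the OS to restart the service if it dies
    }

    // Called by the networking code above when the server signals a call.
    private void onIncomingCall() {
        Intent ui = new Intent("com.example.INCOMING_CALL"); // placeholder action
        ui.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);          // required from a Service
        startActivity(ui);
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service, not a bound one
    }
}
```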
As far as I know, the new BlackBerry 10 OS uses Qt for development. There is some source code available for VoIP calling in Qt; I am still searching for video-call examples.
Check the links below, they may be helpful:
1. BlackBerry Developer Blog
2. PjSip Blog
3. Download Source Code
I don't know how to develop apps for Android, iOS and desktop using Qt.
But I suggest developing the app in each platform's native language instead of Qt.
I've been reading up on how to transfer data between iOS devices over Bluetooth using GameKit. I'm not writing a game, per se, but do have a need to transfer a small amount of binary data between two devices. Between two iOS devices, this is easy enough. However, I was wondering if it is possible to transfer data between an iOS device and an Android device via the same mechanism.
Has anyone come across documentation/tutorial that would explain how to do this? Is it even technically possible? Or has Apple put in some sort of restriction that would prevent this?
The other option I discovered was Bonjour over Bluetooth. Would this be a more suitable option for this type of operation?
This question has been asked many times on this site and the definitive answer is: NO, you can't connect an Android phone to an iPhone over Bluetooth, and YES Apple has restrictions that prevent this.
Some possible alternatives:
Bonjour over WiFi, as you mentioned. However, I couldn't find a comprehensive tutorial for it.
Some internet based sync service, like Dropbox, Google Drive, Amazon S3. These usually have libraries for several platforms.
Direct TCP/IP communication over sockets (How to write a small (socket) server in iOS); see the sketch after this list.
Bluetooth Low Energy will be possible once the issues on the Android side are solved (Communicating between iOS and Android with Bluetooth LE)
Coolest alternative: use the Bump API. It has iOS and Android support and is really easy to integrate. For small payloads this can be the most convenient solution.
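For the direct-socket option, a minimal Android-side sketch in plain Java (both devices on the same Wi-Fi network; the port number and length-prefix framing are arbitrary choices):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of direct TCP/IP transfer: one device listens, the other connects,
// and a single length-prefixed byte array is exchanged.
public class SocketTransfer {
    public static byte[] receiveOnce(int port) throws Exception {
        try (ServerSocket server = new ServerSocket(port);
             Socket peer = server.accept();
             DataInputStream in = new DataInputStream(peer.getInputStream())) {
            byte[] payload = new byte[in.readInt()]; // read the length prefix first
            in.readFully(payload);
            return payload;
        }
    }

    public static void send(String host, int port, byte[] payload) throws Exception {
        try (Socket peer = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(peer.getOutputStream())) {
            out.writeInt(payload.length);
            out.write(payload);
        }
    }
}
```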
Details on why you can't connect an arbitrary device to the iPhone: iOS allows only some Bluetooth profiles to be used without the Made For iPhone (MFi) certification (HFP, A2DP, MAP...). The Serial Port Profile that you would need to implement the communication is bound to MFi membership. Membership in this program gives you access to the MFi authentication module, which has to be added to your hardware and takes care of authenticating the device to the iPhone. Android phones don't have this module, so even though the physical connection may be possible to establish, the authentication step will fail. iPhone-to-iPhone communication is possible because both ends can authenticate themselves.
Maybe a bit delayed, but technologies have evolved since, so there is certainly new info around that sheds fresh light on the matter...
As iOS has yet to open up an API for WiFi Direct and Multipeer Connectivity is iOS only, I believe the best way to approach this is to use BLE, which is supported by both platforms (some better than others).
On iOS a device can act as both a BLE central and a BLE peripheral at the same time; on Android the situation is more complex, as not all devices support the BLE peripheral role. Also, the Android BLE stack is very unstable (to date).
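You can see that Android-side fragmentation directly in the API; a minimal sketch (API 21+, placeholder service UUID) in which getBluetoothLeAdvertiser() simply returns null on devices without peripheral support:

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.le.*;
import android.os.ParcelUuid;

// Sketch: check for and start BLE peripheral (advertiser) mode on Android.
// Devices whose chipset/firmware lacks peripheral support return null here,
// which is exactly the fragmentation the answer warns about.
public class PeripheralCheck {
    public static void startAdvertising(BluetoothAdapter adapter) {
        BluetoothLeAdvertiser advertiser = adapter.getBluetoothLeAdvertiser();
        if (advertiser == null) {
            android.util.Log.w("BLE", "This device cannot act as a BLE peripheral");
            return;
        }
        AdvertiseSettings settings = new AdvertiseSettings.Builder()
                .setAdvertiseMode(AdvertiseSettings.ADVERTISE_MODE_LOW_LATENCY)
                .setConnectable(true)
                .build();
        AdvertiseData data = new AdvertiseData.Builder()
                .addServiceUuid(ParcelUuid.fromString("0000feed-0000-1000-8000-00805f9b34fb"))
                .build();
        advertiser.startAdvertising(settings, data, new AdvertiseCallback() {});
    }
}
```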
If your use case is feature driven, I would suggest to look at Frameworks and Libraries that can achieve cross platform communication for you, without you needing to build it up from scratch.
For example: http://p2pkit.io or google nearby
Disclaimer: I work for Uepaa, developing p2pkit.io for Android and iOS.
You could use p2pkit, or the free solution it was based on: https://github.com/GitGarage. It doesn't work very well, and it's a fixer-upper for sure, but it's, well, free. It works for small amounts of data transfer right now.
I've been working on a project that would greatly benefit from call-stream modification. This has been repeatedly said/assumed to be unachievable, as most people believe that the hardware loop for the in-call audio is completely disconnected from the main MCU of the device.
Questions like Stream audio to a phone call Android have received answers stating that it is impossible to access the audio. I agree that this is definitely impossible from the Android API, but it is completely unclear whether the hardware ACTUALLY is disconnected completely.
The stackoverflow user 'artsylar' said that they were able to modify the 'framework layer' of Android OS to inject recorded audio into call streams, which would be a huge step forward (see Play an audio clip onto an ongoing call, artsylar's comment on the selected answer). Assuming artsylar's success is valid, there definitely is a way to control the call stream audio by modifying the framework (I assume the telephony base framework in the Android source).
Basically, I completely agree that modifying or controlling the call stream is impossible from the application layer. However, I am interested in customizing the Android OS at the framework or Radio Interface Layer; artsylar seems to have had success, but there is no explanation in the open literature of how. Given the current state of Android technology, could anyone clarify the above and establish whether controlling call audio is possible by modifying the core Android OS, and if so, a good path to accomplish this goal?
I believe that a final clarification on this issue would be of great value to the open-source community.
Thanks!
It's technically possible to inject audio into the voice call uplink on some platforms (Qualcomm's MSM8960 and APQ8064, for example). Support exists at the hardware level and at the device driver level. But to make that functionality available to normal applications you'd have to create a custom Android ROM where you've added all the necessary user-space parts in both the Java layers and native layers of Android.
So the short answer is: no, there's no standard way of doing this as an app developer (doesn't matter if you use the SDK or NDK).
If you're working for an OEM or by some other means are able to build and flash your own Android ROMs you can probably get the information you need by asking your platform vendor.
It is very difficult to do, because it involves modifying the Linux kernel inside the Android OS.
Not only is there no API support, but security restrictions also prevent it.
As professionals in the software engineering field, especially programmers, we never assume anyone's claimed success on an invention or a related project is valid until the project has been tested.
Also, streaming the audio during a call may raise privacy and security issues for smartphone users and the telephony service provider.