Android and C++ Socket Communication

I'm developing an app that receives data from a C++ program every second. The app also needs to send data to the C++ program sometimes.
Is it suitable to use a socket for communication between the two instances?
Does each instance have to run both a socket server and a socket client at the same time?

I think there are different ways of accomplishing this, depending on the required timing behavior (does the device have to receive messages synchronously? should messages that cannot be delivered in time be cached until they can be delivered?), the public reachability of the Android device (devices connected over mobile networks sit behind NAT in many networks), and whether the devices can go into standby mode or not.
Using stream sockets (TCP) (if the mobile device stays awake the whole time or processing always has to happen synchronously)
In this case one end has to be the "server" and the other the "client". Because mobile devices tend to go into standby mode, I'd make the C++ program the server (if it runs on a non-mobile device). This is the end that creates a socket, binds it, and then calls listen to wait for incoming connections. Whenever the client connects, the server accepts the connection, which can then be used bidirectionally by using the same handle for send and receive.
The mobile device then creates a socket and connects to the server, after which it can transmit data to it (it does not have to bind or listen). Over the same connection the device can also receive data from the server.
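A minimal sketch of that arrangement, shown in Java for both ends for brevity (the C++ server would use the equivalent socket()/bind()/listen()/accept() calls); the port, addresses, and messages are made up, and error handling is omitted:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Server side (the C++ program's role):
ServerSocket server = new ServerSocket(5000);          // create, bind and listen on port 5000
Socket device = server.accept();                       // blocks until a device connects
BufferedReader in = new BufferedReader(new InputStreamReader(device.getInputStream()));
PrintWriter out = new PrintWriter(device.getOutputStream(), true);
out.println("hello device");                           // the same handle is used for send...
String received = in.readLine();                       // ...and receive

// Android/client side: connect only, no bind() or listen() required:
Socket socket = new Socket("192.168.1.10", 5000);      // made-up server address and port
PrintWriter toServer = new PrintWriter(socket.getOutputStream(), true);
toServer.println("reading: 42");                       // transmit to the server
BufferedReader fromServer = new BufferedReader(new InputStreamReader(socket.getInputStream()));
String reply = fromServer.readLine();                  // receive over the same connection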
If the server is required to send data to the mobile device even when the device has not established a connection, and the device is able to go into standby mode, one could either periodically wake the device and poll the server, use Firebase Cloud Messaging, or even use the short message service. If the device cannot go into standby mode, it could simply create a listening socket of its own that accepts incoming connections from the C++ application.
Using datagram sockets (UDP)
In this case both the C++ application and the Android application create and bind a socket to a specific port. Then each can simply send packets (unreliably) to the other, or even multicast them in a local area network by sending to a multicast address. Of course, the mobile device will miss packets sent by the C++ application while it is in standby mode, and the C++ application will miss packets sent while it is not running.
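A sketch of the datagram variant in Java (port and peer address are made up; the C++ side would use sendto()/recvfrom() on a bound UDP socket):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

DatagramSocket socket = new DatagramSocket(6000);      // create and bind to a specific port
byte[] payload = "reading: 42".getBytes();
socket.send(new DatagramPacket(payload, payload.length,
        InetAddress.getByName("192.168.1.10"), 6000)); // unreliable, fire-and-forget
// For LAN multicast, send to a multicast group address (e.g. 239.0.0.1) instead;
// receivers join the group with a MulticastSocket.
byte[] buffer = new byte[1500];
DatagramPacket incoming = new DatagramPacket(buffer, buffer.length);
socket.receive(incoming);                              // blocks until a datagram arrives
// Packets sent while this end is in standby or not running are simply lost.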
Using a message queue (if the mobile device may go into standby mode and has to receive messages asynchronously)
In this case the two programs would not have to run at the same time if the queues are persistent, but a message broker would (for example RabbitMQ). The C++ application simply pushes messages into the queue, and any subscribed mobile device receives them either immediately or, for persistent queues, later, whenever the device connects to the broker.
Messaging from the mobile device to the server could also be realized over a message queue if synchronous behavior is not required, or over a traditional web service, or even a plain socket.
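With RabbitMQ, for example, the "push into the queue" and the device's subscription could look roughly like this using the RabbitMQ Java client (Java shown for both ends; the broker host, queue name, and handleMessage are all made up):
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

ConnectionFactory factory = new ConnectionFactory();
factory.setHost("broker.example.com");                 // made-up broker host
try (Connection conn = factory.newConnection();
     Channel channel = conn.createChannel()) {
    // Durable queue: messages are kept until a device connects and consumes them.
    channel.queueDeclare("device-inbox", true, false, false, null);
    channel.basicPublish("", "device-inbox",
            MessageProperties.PERSISTENT_TEXT_PLAIN, "reading: 42".getBytes());
}

// Subscriber side (on the device), with handleMessage being your own code:
channel.basicConsume("device-inbox", true,
        (consumerTag, delivery) -> handleMessage(new String(delivery.getBody())),
        consumerTag -> { });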

Related

For how long would an app in the background keep receiving Bluetooth packets?

👋
I'm researching for a PoC in which users would have a mobile app (Android and iOS) connected to a Bluetooth device. Users would lock their phones and put them away (close enough to keep the Bluetooth connection), and the mobile app would then stream (broadcast) the Bluetooth packets to an HTTP endpoint.
The mobile app would behave like a hub broadcasting Bluetooth packets.
The stream should last for about 1-2 hours.
Would that work, or would Android and iOS eventually terminate the app?
On iOS, it seems like if you enable Bluetooth-LE related background modes [1], you should be able to "handle important central role events, such as when a connection is established or torn down, when a peripheral sends updated characteristic values, and when a central manager’s state changes.". The caveat to this is that once woken up, you only have a short amount of time (approximately 10 seconds) to perform some processing like sending an HTTP request.
[1] https://developer.apple.com/library/archive/documentation/NetworkingInternetWeb/Conceptual/CoreBluetooth_concepts/CoreBluetoothBackgroundProcessingForIOSApps/PerformingTasksWhileYourAppIsInTheBackground.html

Asynchronously receive Datagram/TCP packet over Android Client Socket

I have a UDP socket server running on my Raspberry Pi, and a socket client running in my Android app.
Currently, the app client always initiates communication and the RasPi always responds.
The RasPi can even initiate communication and send a packet to any host whose IP address it knows.
The problem is that my app has a thread that waits forever to receive data (basically polling), like below:
uniSocket.receive(receivePacket); // blocks until a datagram arrives
Should both the RasPi and Android run clients and servers, or is there something like a Datagram_Socket_Callback or some asynchronous method to receive and send data?
The receive call will block until there is something to receive. That isn't polling; it's just waiting. It could wait in the receive call for days at a time if there's no notification to be sent. And that occupies no more resources than a server thread running on the Android side (waiting for a connection) would occupy.
You probably will need some kind of periodic keep-alive to ensure the other side hasn't disappeared. You can do that yourself at the application layer, or TCP provides a keep-alive mechanism you can activate with setsockopt for this purpose.
In other words, create a socket, enable TCP keep-alives, send an "I'm waiting for notifications" message to the server, and then call receive to wait. The server then maintains a list of client connections waiting for notifications, and can then send data through a connection whenever there is something to be sent.
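In Java on the Android side, that could look roughly like this (the host, port, "SUBSCRIBE" message, and handleNotification are all made up for the example):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

Socket socket = new Socket("192.168.1.20", 5000);      // made-up RasPi address and port
socket.setKeepAlive(true);                             // TCP keep-alive (SO_KEEPALIVE via setsockopt)
PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
out.println("SUBSCRIBE");                              // the "I'm waiting for notifications" message
BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
String notification;
while ((notification = in.readLine()) != null) {       // blocks: waiting, not polling
    handleNotification(notification);                  // your own handling code
}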
For your normal client-server communications, you could use the same socket, but it might be more complicated to synchronize that way, so alternatively, you could just create another socket in parallel.
I don't know a lot about Android app development, but obviously your client will need to be prepared to re-create the connection if it breaks. I can think of many reasons why that might happen in normal usage. For example, the initial connection might be created from an IP address obtained on your home network. Then you walk out of your house and lose that IP address. You may now need to recreate the connection with a different IP address on the cell network (or the Wi-Fi at the coffee shop or wherever). If the phone goes into airplane mode, you might lose all IP addresses for a time.
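A sketch of such a reconnect loop, where openAndSubscribe() and readLoop() are made-up helpers standing in for the connect/subscribe/receive code above:
long backoffMs = 1_000;
while (true) {
    try (Socket socket = openAndSubscribe()) {         // made-up helper: connect + subscribe
        backoffMs = 1_000;                             // reset the delay after a successful connect
        readLoop(socket);                              // returns or throws when the link dies
    } catch (IOException e) {
        // network change, airplane mode, server restart, ...
    }
    try {
        Thread.sleep(backoffMs);                       // wait before retrying
    } catch (InterruptedException e) {
        break;
    }
    backoffMs = Math.min(backoffMs * 2, 60_000);       // exponential backoff, capped at 60 s
}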
I'm sure it's possible to create a server that runs on Android as well, but then you have the problem of the RPi needing to know the Android device's IP address, which may change frequently. I would say it's better to keep the client role on the Android side.

Are pushed notifications to mobile phones really pushed?

I know that notifications can be pushed to servers over HTTP/S, but can mobile phones really be pushed to from those servers? Technically, my guess is that mobile devices actually poll the notification servers to see whether there are any new notifications, and that this is a sort of 'pseudo push'.
So that's my question: do mobile phones truly receive live, pushed notifications, or are they actually polling? I ask because it would seem incredibly expensive on the network for mobile phones to keep a constantly open channel to the masts as a user moves around. Does anyone know the technical details?
Apple Push Notifications are delivered to the device over a TCP connection. The iOS device initiates a TCP connection on port 5223 (with a fallback to 443 on WiFi if 5223 cannot be reached).
Once the TCP session is established very little traffic is required to keep the TCP connection alive - just an occasional keep-alive packet.
When a push notification is to be delivered, the Apple servers look for an existing connection to the device. If a connection is found then the data stream is sent over the already established connection, so in that sense it is a "push".
If there is no existing connection to the target device then the message is held on the Apple server until the device connects (or the message expires), so at this level it is more like a "pull" - with the device initiating the connection when it can.
I imagine GCM works in a similar way.
There is simply a TCP socket waiting in accept mode on a Google cloud server. The TCP connection is initiated by the Google Play application. That's why Google Play must be installed on the device for Google Cloud Messaging (GCM) (formerly Android Cloud to Device Messaging, C2DM) to work.
When this TCP client socket receives a message, it contains information such as the package name of the application it is addressed to and, of course, the data itself. The data is parsed and packed into an intent that is broadcast and eventually received by the application.
The TCP socket stays open even when the device's radio state turns into "idle" mode. Applications don't have to be running to receive the intents.
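On the app side, that hand-off amounted to an ordinary BroadcastReceiver in a legacy GCM app; a rough sketch (the extra key "payload" is made up here; the real keys were whatever the sender put into the message):
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Registered in the manifest for the GCM receive action; Android starts the
// app's process for the broadcast if it isn't already running.
public class PushReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String data = intent.getStringExtra("payload"); // message data arrives as intent extras
        // hand the data off to the application
    }
}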
More information at http://developer.android.com/google/gcm/gcm.html

Periodic ARP messages from an Android device every 30-45 seconds

I have a native C application on an Android 4.3 device which transmits and receives data via UDP.
I noticed that the device is sending out ARP messages every 30-45 seconds while data are being actively transmitted/received via UDP to/from a remote IP address.
I would expect that as long as data are being actively exchanged there would be no need to send ARP messages, since an ARP cache is maintained. However, that doesn't seem to be the case.
Is this the default and expected behavior on Android?
Is there some configurable option when creating a socket which controls the frequency of these ARP messages?

Lifetime of QTcpSocket

I'm currently developing an Android application which connects to a server through TCP. The server is written in Qt and runs on a computer.
On the server side, I use a QTcpServer and the QTcpServer::newConnection() signal, and retrieve the newly connected QTcpSocket with QTcpServer::nextPendingConnection(). I have implemented a class called SocketManager which manages the data received on this socket.
On the Android side, I use a Java Socket to connect to the server.
Everything works well. When the Android side disconnects from the server, my SocketManager object is duly notified and destroys itself. But I would like to properly handle the case where, for example, the Android device goes offline or is turned off. In that case, I'm not notified of the disconnection. I connect these signals of my QTcpSocket:
QAbstractSocket::disconnected(),
QAbstractSocket::stateChanged(QAbstractSocket::SocketState)
QAbstractSocket::error(QAbstractSocket::SocketError)
QObject::destroyed(QObject*), thinking that perhaps the QTcpSocket is internally destroyed by the QTcpServer.
But no signal is received when the Android device goes offline or is turned off.
When will the QTcpSocket be released by the QTcpServer? Only when the socket is explicitly disconnected? So in my case, will it never be destroyed? Should I handle the disconnection on the Android side in all cases?
Thanks everyone.
TCP will not notify you of disconnections unless the remote peer explicitly sends a disconnect request (by calling close() or shutdown()) or you try to write to a disconnected socket (in which case you get a broken pipe signal).
The classical way to solve this problem is to implement a heartbeat messaging system: after a certain amount of heartbeat inactivity you close the socket, concluding that the remote peer has died suddenly or that there is a network problem.
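A minimal sketch of the heartbeat idea, shown in Java (the interval, message, and the socket/out/in/handle names are all made up; the Qt server would do the mirror image, e.g. a QTimer that closes the socket after missed heartbeats):
import java.net.SocketTimeoutException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sending side: emit a heartbeat every 5 seconds on a background thread.
ScheduledExecutorService pinger = Executors.newSingleThreadScheduledExecutor();
pinger.scheduleAtFixedRate(() -> out.println("PING"), 0, 5, TimeUnit.SECONDS);

// Receiving side: treat 15 seconds of silence as a dead peer.
socket.setSoTimeout(15_000);                           // read calls throw after 15 s of silence
try {
    String msg;
    while ((msg = in.readLine()) != null) {
        if (!"PING".equals(msg)) {
            handle(msg);                               // real traffic also proves liveness
        }
    }
} catch (SocketTimeoutException e) {
    // no heartbeat in time: conclude the peer died or the network is broken
} finally {
    socket.close();
}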
