Is it okay to use WebSocket to get JSON on Android?

I am developing a client application on Android which uses a REST API to get JSON.
The app sends a lot of requests to different URLs. This approach is somewhat slow: it takes 30-60 seconds. This article says that WebSocket works faster. Now I am wondering whether I can use WebSockets for this purpose. So I have several questions about this:
Is it a good practice to use WebSocket to get JSON data from a server?
If not, what can be better (faster / more secure) than regular HTTP?
Can I send a request to a regular HTTP REST API using WebSocket? (This question may seem strange, but I really do not know.) Or should the guys on the backend change/modify something to make this possible?
What can be the disadvantages (e.g. battery drain) of using WebSocket?

You can use WebSocket to get JSON data from a server. However, both the client and the server must "talk" WebSocket. If the server only understands REST calls, you will not be able to directly connect with WS from a browser to that REST server.
No, you cannot use WebSocket to directly connect to an HTTP REST API. Your browser must use the same protocol as the server. That's why protocols were invented, right? :)
WebSocket is a persistent connection between a client and a server. It's like a TCP socket connection in that way. HTTP is fundamentally not persistent (although there are tricks to keep the connection open for a long time). An HTTP client usually calls an HTTP server, gets the data, returns, and then the connection is terminated. There are advantages and disadvantages to both (like anything else).
A WebSocket connection by itself will not drain your smartphone battery quickly. If you are constantly sending data over that WebSocket that has to be rendered by the GPU on your screen, then your battery will drain. An HTTP connection by itself will not drain your smartphone battery either. But if your phone is polling some REST server and is in a low-bandwidth area, your battery will drain from constantly going from low-power mode to high-power mode. So it's not the connection that affects the battery; it's what's happening over that connection that affects the battery.
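For what it's worth, on Android this is typically done with a WebSocket client library such as OkHttp. A rough sketch; the URL, the request message and the JSON handling below are placeholders, not something from the question:

    import okhttp3.OkHttpClient
    import okhttp3.Request
    import okhttp3.Response
    import okhttp3.WebSocket
    import okhttp3.WebSocketListener
    import org.json.JSONObject

    val client = OkHttpClient()

    val request = Request.Builder()
        .url("wss://example.com/socket") // hypothetical WebSocket endpoint
        .build()

    val listener = object : WebSocketListener() {
        override fun onOpen(webSocket: WebSocket, response: Response) {
            // Ask the server for the data you need, if your protocol works that way.
            webSocket.send("""{"action":"getData"}""")
        }

        override fun onMessage(webSocket: WebSocket, text: String) {
            // The server pushes JSON as text frames; parse and use it.
            val json = JSONObject(text)
            // ... update your UI / database with `json`
        }

        override fun onFailure(webSocket: WebSocket, t: Throwable, response: Response?) {
            // Connection dropped or handshake failed; schedule a reconnect here.
        }
    }

    val socket: WebSocket = client.newWebSocket(request, listener)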

You still send HTTP requests. It is a good practice if you have a bunch of them that you want to send in a burst, or if you want two-way interaction with the server. You simply avoid rebuilding a fresh session every time and the repeated handshakes. If you want to visit a webpage with lots of resources (e.g. images) it is a good idea. Also, the server can send data to you without being polled.
WebSockets need to be enabled on the server side (they usually are by default), but to keep the connection alive the web server stores some references which need to be managed. So you allocate resources even if there is no messaging. When you have lots of clients, it can become challenging to handle the resource allocation.
Battery drain is not an issue since you are not sending lots of data, but as a developer you need to add extra logic to handle the case when the network or Wi-Fi is no longer available. This is when the WebSocket is terminated and you need to reconnect to the server.
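A minimal sketch of that reconnect logic, assuming OkHttp; the class name and backoff delays are purely illustrative:

    import okhttp3.OkHttpClient
    import okhttp3.Request
    import okhttp3.Response
    import okhttp3.WebSocket
    import okhttp3.WebSocketListener
    import java.util.concurrent.Executors
    import java.util.concurrent.TimeUnit

    class ReconnectingSocket(private val client: OkHttpClient, private val url: String) {
        private val scheduler = Executors.newSingleThreadScheduledExecutor()
        private var attempts = 0

        fun connect() {
            val request = Request.Builder().url(url).build()
            client.newWebSocket(request, object : WebSocketListener() {
                override fun onOpen(webSocket: WebSocket, response: Response) {
                    attempts = 0 // connected again, reset the backoff
                }

                override fun onFailure(webSocket: WebSocket, t: Throwable, response: Response?) {
                    // Network or Wi-Fi went away: retry with exponential backoff (1, 2, 4, ... 32 s).
                    val delaySeconds = 1L shl minOf(attempts++, 5)
                    scheduler.schedule(Runnable { connect() }, delaySeconds, TimeUnit.SECONDS)
                }
            })
        }
    }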
As a personal comment: use it if you need the server to interact with you (if something happens, e.g. a new message arrives for you at the server, the server can use the WebSocket to let you know) or if you expect lots of requests to be sent to the server in a short time from your client.
Hope it helps! I am not an expert.

Related

Detecting online status of device and sending back to server

Hi, I am developing an Android application in which I want to show whether the other person is online or not, so that a person can initiate the communication. I thought about a few solutions:
1) Implementing a heartbeat mechanism, in which the device sends a ping request to the server after a fixed interval of time.
2) The server sends a push-type ping to the client and the client responds to it, so that the server knows the client is online.
The first case causes battery and data issues, while the second one causes delay in the push, which affects the process.
Is there any better solution for this problem, apart from these or improved versions of the above?
nilkash. Virtually any method for checking network connectivity will in the end result in sending periodic pings between the device and the server. Even a push-type ping will actually do the same (but it saves battery because push notifications aggregate messages for all applications into a single connection to a Google server). So the best solution is just a proper combination of optimizations, and you have to choose them depending on your requirements.
Server pushes are power efficient, mostly because they reuse the same connection for all applications, but the delay can be huge, something like 10 minutes.
You can subscribe to connectivity events and send an "online" message to the server once you are online (but not once you are offline, because you are... offline). This will give you immediate online events; see the sketch after this answer.
Do not send pings from the device when there is no connectivity. Your application should be absolutely idle so as not to use battery.
There is no easy way to find out on the server side when a client goes offline. You have to trade traffic/battery for time resolution: the more often you send pings, the better the resolution. But you can't change the ping interval for pushes, so if you need better resolution you need to use your own connection. You can send other useful data through that connection too.
If you keep a TCP connection, your pings can be very data efficient: TCP keep-alive packets are just 60/54 bytes. But then you have to keep open connections with all clients on the server, which may be a problem if you have a lot of clients.
The best combination may be something like this: you always send an online message from a client when it becomes online. You keep a TCP connection while the application is in the foreground. You use the same connection to transfer data to and from the application. When your application goes to the background you fall back to power-consuming push-based pings and do them on a 10-minute basis.
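As a sketch of the "send an online message as soon as you come online" idea mentioned above, here is roughly what the connectivity subscription can look like with Android's ConnectivityManager; sendOnlineMessageToServer() is a hypothetical placeholder for your own call:

    import android.content.Context
    import android.net.ConnectivityManager
    import android.net.Network
    import android.net.NetworkRequest

    fun watchConnectivity(context: Context) {
        val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
        val request = NetworkRequest.Builder().build()
        cm.registerNetworkCallback(request, object : ConnectivityManager.NetworkCallback() {
            override fun onAvailable(network: Network) {
                // We just came online: tell the server immediately.
                sendOnlineMessageToServer()
            }

            override fun onLost(network: Network) {
                // We are offline: stop pinging and stay idle to save battery.
            }
        })
    }

    // Hypothetical placeholder for your own "I'm online" call (e.g. a small HTTP POST).
    fun sendOnlineMessageToServer() { }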

Is it OK to use an HTTP REST API for a chat application?

We are building a chat application on Android. We are thinking of using an HTTP REST API to send outbound messages. I wanted to know if it's a good approach, or if it has any downsides compared to using WebSockets or XMPP (which seems to be more of a de facto standard for transferring chat messages)?
Some of the pros/cons I can think of are:
HTTP endpoint is easy to scale horizontally on the server side (This is the main concern)
The learning curve for WebSockets is steeper compared to HTTP
HTTP messages would have a larger payload compared to WebSockets
As per this document, it seems even Facebook used AJAX to handle chat messages initially:
https://www.erlang-factory.com/upload/presentations/31/EugeneLetuchy-ErlangatFacebook.pdf
We can use a REST API for chat messaging, but IMHO XMPP is a better alternative. Let's consider what XMPP has to offer.
XMPP, besides supporting the TCP transport, also provides HTTP (via polling and binding) and WebSocket transports. Read XMPP via HTTP and WebSocket transports.
It would be interesting to understand the pros and cons of each transport from an XMPP perspective.
XMPP can use HTTP in two ways: polling and binding.
XMPP over HTTP Polling
The polling method, now deprecated, essentially implies that messages stored in a server-side database are fetched (and posted) regularly by an XMPP client by way of HTTP 'GET' and 'POST' requests.
XMPP over HTTP Binding (BOSH)
The binding method is considered more efficient than the regular HTTP 'GET' and 'POST' requests of the polling method because it reduces latency and bandwidth consumption over other HTTP polling techniques.
However, it also has the disadvantage that sockets remain open for an extended length of time, awaiting the client's next request.
The binding method, implemented using Bidirectional-streams Over Synchronous HTTP (BOSH), allows servers to push messages to clients as soon as they are sent. This push model of notification is more efficient than polling, where many of the polls return no new data.
It would be good to understand how the BOSH technique works.
The technique employed by BOSH, which is sometimes called "HTTP long polling", reduces latency and bandwidth consumption over other HTTP polling techniques. When the client sends a request, the connection manager does not immediately send a response; instead it holds the request open until it has data to actually send to the client (or an agreed-to length of inactivity has elapsed). The client then immediately sends a new request to the connection manager, continuing the long polling loop.
If the connection manager does not have any data to send to the client after some agreed-to length of time, it sends a response with an empty <body/> element. This serves a similar purpose to whitespace keep-alives or XMPP Ping (XEP-0199); it helps keep a socket connection active, which prevents some intermediaries (firewalls, proxies, etc.) from silently dropping it, and helps to detect breaks in a reasonable amount of time.
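To make the loop concrete, here is a rough client-side long-polling sketch. This is not BOSH itself (BOSH wraps XMPP stanzas in <body/> elements and manages sessions); it only illustrates the hold-the-request-open pattern, against a hypothetical endpoint:

    import okhttp3.OkHttpClient
    import okhttp3.Request
    import java.util.concurrent.TimeUnit

    // The read timeout must be longer than the time the server holds the request open.
    val pollClient = OkHttpClient.Builder()
        .readTimeout(70, TimeUnit.SECONDS)
        .build()

    fun longPollLoop() {
        while (true) { // error handling and backoff omitted for brevity
            val request = Request.Builder()
                .url("https://example.com/poll") // hypothetical connection-manager URL
                .build()
            pollClient.newCall(request).execute().use { response ->
                val body = response.body?.string().orEmpty()
                if (body.isNotBlank()) {
                    println("New data: $body") // hand the payload to the app here
                }
                // An empty body is just the server's keep-alive; loop and poll again.
            }
        }
    }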
XMPP over WebSocket binding
XMPP supports WebSocket binding, which is a more efficient transport:
A perhaps more efficient transport for real-time messaging is WebSocket, a web technology providing for bi-directional, full-duplex communications channels over a single TCP connection. XMPP over WebSocket binding is defined in the IETF proposed standard RFC 7395.
Speaking of the learning curve, yes, you might be tempted to use the REST API, but there are now several resources to learn about Android and XMPP, and XMPP server software that you can use to run your own XMPP service, either over the Internet or on a local area network. It would be worth spending this effort before you decide on your architecture.
I think a REST approach can work for chat. Let's assume:
http://chat.example.com/conversations references all conversations
GET fetches the list
POST creates a new one
http://chat.example.com/conversations/123 references conversation #123
GET fetches messages in this conversation
POST adds a message in the conversation
If I understand correctly, your question is about the last point.
Once the client has posted an outbound message to http://chat.example.com/conversations/123, it will close the HTTP connection.
The drawback is that receiving inbound messages is simply not possible in that case. You will need a different channel (maybe simply Google Cloud Messaging).
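As a sketch of what the client side of the endpoints listed above could look like, assuming Retrofit with a Gson converter; the Conversation/Message types are made up for illustration:

    import retrofit2.Retrofit
    import retrofit2.converter.gson.GsonConverterFactory
    import retrofit2.http.Body
    import retrofit2.http.GET
    import retrofit2.http.POST
    import retrofit2.http.Path

    // Hypothetical payload types; shape them however your server expects.
    data class Conversation(val id: Long, val title: String)
    data class Message(val sender: String, val text: String)

    interface ChatApi {
        @GET("conversations")
        suspend fun listConversations(): List<Conversation>

        @GET("conversations/{id}")
        suspend fun getMessages(@Path("id") id: Long): List<Message>

        @POST("conversations/{id}")
        suspend fun postMessage(@Path("id") id: Long, @Body message: Message): Message
    }

    val api: ChatApi = Retrofit.Builder()
        .baseUrl("http://chat.example.com/")
        .addConverterFactory(GsonConverterFactory.create())
        .build()
        .create(ChatApi::class.java)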
On the contrary, WebSockets and XMPP keep the connection alive, hence they can receive messages with no delay. But the drawback of both is indeed that this represents a cost on the server, in terms of scalability, and a cost on the client, in terms of battery usage.
On the server:
sockets are a relative scarce resource
it's not possible to move clients for load balancing if they have an open connection (but can you really move clients anyway? this depends on the responsibility of tiers)
On the client:
maintaining a network connection is extremely expensive. 1 s network ≈ 5 min sleep.
the XMPP libraries do not necessarily work very well on Android.
And I have no idea about the support for WebSockets on Android.
It is not advisable to use an HTTP REST API for a chat or similar real-time application.
Some overview...
Chat Client requirements
Friend list fetch
Check online/offline friends
Get chat messages in real-time and send messages.
Receive notifications of delivery/reading etc.
Point 1 is a sort of one-time job after you start the chat client, so it can be done with a simple REST call and requires no complicated overhead.
All the remaining points need persistent checking of data from the server (or from the other party, in the case of a P2P client), which will make you create either long- or short-polling REST calls to watch for new data or other updates.
Problem with an HTTP REST client
It is not a keep-alive type of communication, so you will have to make many HTTP connections, and the overhead of that makes the app laggy, since reconnecting is very costly with HTTP calls.
Web sockets or XMPP
They are a duplex mode of communication, are very good at handling incremental data pushes, and do not keep creating new HTTP connections, so they give really smooth performance.
Another solution
In case you are stuck with some legacy system and are bound to use the REST API approach:
Try CometD. It is a hybrid approach of WebSockets and AJAX polling which gives you near real-time communication and also works on clients which do not support WebSockets, by falling back on AJAX polling mechanisms. It also uses various optimizations to avoid reconnecting again and again.
CometD link
You can also try Socket.io, which is another great technology for solving these kinds of use cases.
Short answer: no.
I would not start a new project, or recommend starting one (since you mentioned starting afresh), that needs live bi-directional communication but relies on HTTP, a stateless protocol. You may take comfort that the connection is kept alive, but there is no guarantee.
Your pro "HTTP endpoint is easy to scale horizontally on the server side" holds in the context where HTTP is used in request-response style and is considered stateless. It becomes somewhat moot (although not entirely) when you inherently need to keep the connection alive.
HTTP does offer another benefit that you have not mentioned here:
HTTP passes easily through corporate firewalls and proxies when other ports may be blocked.
This is where WebSockets or XMPP over HTTP will have a better success rate, as mentioned by others.
It depends. Do you consider your application to be "live chat"? Do you require a presence indicator, or typing indicator? Features such as those require continuous connection. But there's another set of chat applications that you'd describe as "in-app messaging" enabled. These applications store conversations and conversation lists on some sort of backend; just install the app on another device and log in, and you'll see your conversations on this type of app. These apps don't have any presence indicator, or feeling of liveness.
Although I haven't implemented any applications with XMPP, it looks like, as far as message persistence goes, the most you'll get with XMPP out of the box is persist-until-delivered, similar to SMS. Perhaps you could build a storage/recovery mechanism for XMPP by capturing stanzas as they pass through and storing them in your own DB. But if you don't need the full "chat" experience, using a database, an HTTP service and push notifications (to notify of updated threads) seems like a solid path for apps with messaging functionality, which is what I intend to implement in an iOS & Android app of my own right now.
Let me know if you've found any good open-source schemas/API design for this.

HTTP or TCP/IP socket, which is better for an Android app?

Before I ask my question, I want to let you know what stage I am at. I have already implemented a TCP/IP socket in my Android app, and it works fine (so far...). The connection between the client side (my Android app) and the server side is a short connection: when a user submits information, a new thread is created to send the message out; on the server side, once the server gets the message, it responds "RCVD"; after that the socket is closed and the connection ends. My app has a lot of interactions between the user side and the server side, therefore it does many connects and disconnects between clients and the server, so I always worry that the socket communication will drain the phone battery and that performance will be affected.
Recently I found OkHttp on GitHub and a lot of people suggest using it. I'm not quite familiar with HTTP; I only know it is a higher-level network protocol.
Can anyone tell me which way is better? Which is more efficient for exchanging data (objects/JSON/strings) and media (images)? Which is faster and which uses less battery?
Many thanks.
Basically, a comparison between HTTP and a TCP socket is meaningless, but in your situation it really matters.
As you described, in your TCP socket approach you create a new connection for every exchange with the server, which is not that efficient. If you use OkHttp, when your client exchanges messages with the same server, the same TCP socket is reused each time rather than a new one being made.
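A minimal sketch of that point: keep a single OkHttpClient for the whole app and let its connection pool reuse the socket to the server. The endpoint and JSON payload below are placeholders:

    import okhttp3.MediaType.Companion.toMediaType
    import okhttp3.OkHttpClient
    import okhttp3.Request
    import okhttp3.RequestBody.Companion.toRequestBody

    object Http {
        // One client for the whole app; it owns the connection pool and reuses sockets.
        val client = OkHttpClient()
    }

    fun submit(messageJson: String): String? {
        val body = messageJson.toRequestBody("application/json".toMediaType())
        val request = Request.Builder()
            .url("https://example.com/api/submit") // hypothetical endpoint
            .post(body)
            .build()
        // Repeated calls to the same host borrow an idle pooled connection
        // instead of doing a fresh TCP (and TLS) handshake every time.
        return Http.client.newCall(request).execute().use { it.body?.string() }
    }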
By the way, as for the push service, using XMPP (over TCP) may be better, because HTTP is not optimized for such a short-message exchange model (you would need some extra strategy on the server side to keep the connection from being closed), but you may have to handle some implementation work for the XMPP server and client.

Avoiding a 100 ms HTTP request slowing down the REST API where it's called from

I'm developing a multiplayer Android game with push notifications using Google GCM.
My web server has a REST API. Most of the requests sent to this API trigger a request to the Google GCM server to send a notification to the opponent.
The thing is, on average a call to my API takes ~140 ms, and ~100 ms of that is due to the HTTP request sent to the Google server.
What can I do to speed this up? I was thinking (I have full control of my server; my stack is Bottle/gunicorn/nginx) of creating an independent process with a database that would work through a queue of GCM requests, but maybe there's a much simpler way to do that directly in Bottle or in pure Python.
The problem is that your clients are waiting for your server to send the GCM push notifications. There is no logic to this behavior.
You need to change your server-side code to process your API requests, close the connection to your client, and only then send the push notifications.
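The asker's stack is Python (Bottle/gunicorn), but the pattern is language-agnostic: accept the request, enqueue the push job, return immediately, and let a background worker make the slow GCM call. A rough JVM-style sketch, with sendToGcm() as a hypothetical placeholder:

    import java.util.concurrent.LinkedBlockingQueue
    import kotlin.concurrent.thread

    data class PushJob(val deviceToken: String, val payload: String)

    val pushQueue = LinkedBlockingQueue<PushJob>()

    // Started once at server startup; drains the queue in the background.
    val worker = thread(isDaemon = true) {
        while (true) {
            val job = pushQueue.take() // blocks until a job is available
            sendToGcm(job)             // the slow ~100 ms call happens here, off the request path
        }
    }

    fun handleApiRequest(deviceToken: String): String {
        // Enqueue the notification and answer the client right away.
        pushQueue.offer(PushJob(deviceToken, """{"event":"your_turn"}"""))
        return "ok"
    }

    // Hypothetical placeholder for the actual HTTP call to the push service.
    fun sendToGcm(job: PushJob) { }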
The best thing you can do is to make all your networking asynchronous, if you haven't already.
The issue is that there will always be users with a slow internet connection and there isn't a generic approach to bring them fast internet :/.
Other than that, ideas are to
send a few larger packets rather than many small ones (that's faster)
use UDP instead of TCP, UDP being connectionless and naturally faster
I've solved my problem thanks to this thread:
I'm using Celery to send my notifications through a task queue.
I can't believe how simple it is!
Thanks anyway :)

How does a server handle web service requests from multiple clients

I just completed an Android application that uses web services to connect to a remote database. I was working on localhost.
Now, I plan to host my web services on a server. Let's say I have my Android application installed on any number of different client smartphones. Each smartphone user calls the web service at the same time.
Now, how does the server handle these requests? Does it execute one thread per request? I want to know about the server-side processing in detail. Considering all phones use GPRS, will there be any sort of delay in such a situation?
BTW, my web services are all SOAP-based and the server I plan to use later will be SQL Server. I have used the .NET framework for creating the web services.
This is about the general concept, not anything Android-specific.
Usually, each of the users sends an HTTP request for the page. The server receives the requests and delegates them to different workers (processes or threads).
Depending on the URL given, the server reads a file and sends it back to the user. If the file is a dynamic file such as a PHP file, the file is executed before it's sent back to the user.
Once the requested file has been sent back, the server usually closes the connection after a few seconds.
Look at How Web Servers Work
EDIT:
HTTP uses TCP, which is a connection-based protocol. That is, clients establish a TCP connection while they're communicating with the server.
Multiple clients are allowed to connect to the same destination port on the same destination machine at the same time. The server just opens up multiple simultaneous connections.
Apache (and most other HTTP servers) have a multi-processing module (MPM). This is responsible for allocating Apache threads/processes to handle connections. These processes or threads can then run in parallel on their own connection, without blocking each other. Apache's MPM also tends to keep open "spare" threads or processes even when no connections are open, which helps speed up subsequent requests.
Note:
One of the most common issues with multi-threading is "race conditions", where two requests are trying to do the same thing ("racing" to do it); if there is a single resource, one of them is going to win. If they both insert a record into the database, they can't both get the same id: one of them will win. So when writing code you need to remember that other requests are going on at the same time and may modify your database, write files or change globals.
The server will maintain a thread pool listening for incoming requests. Upon receiving a request, a thread will process it and return the response. If all the requests are received at the same time and there are fewer of them than the maximum number of threads in the pool, they will all be serviced in parallel (though the actual processing will be interleaved based on the number of cores/CPUs). If there are more requests than threads, the requests will be queued (waiting for a connection) until either a thread frees up or the client request times out.
If you're connecting to the service from a mobile network, there is higher latency in the initial connection but not enough to make a difference.
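A tiny sketch of that thread-pool behavior, not tied to any particular web server: a fixed pool services requests in parallel and queues the rest until a worker frees up.

    import java.util.concurrent.Executors

    fun main() {
        val pool = Executors.newFixedThreadPool(4) // e.g. 4 worker threads

        repeat(10) { requestId ->                  // 10 "simultaneous" requests
            pool.execute {
                println("Handling request $requestId on ${Thread.currentThread().name}")
                Thread.sleep(500)                  // simulate the actual processing
            }
        }

        pool.shutdown()                            // stop accepting new work, finish what is queued
    }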
Your question is not really related to Android but to mobile development with web backend.
I don't know how to use .NET for server app development, but if you take the example of an Apache/PHP/MySQL stack, each request is run in a separate thread.
There might be small latency delays while the request reaches the server, but this shouldn't affect the time taken by your server to process the request and the data.
One thing to think about is avoiding sending multiple requests from the same client. This is a common implementation problem: since no data has been returned yet, you assume there is no pending request and launch a new one. This can create unnecessary load on your server.
Hope that helps!
a) One instance of the web service (for example, a Spring Boot microservice) runs on the server machine and listens on a port such as 80.
b) This web service (the Spring Boot app) needs a servlet container, most commonly Tomcat. This container will have a thread pool configured.
c) Whenever requests come in from different users simultaneously, the container assigns a thread from the pool to each incoming request.
d) Since the server-side web service code mostly consists of singleton beans (in the Java case), the thread for each request calls the same singleton APIs, so those beans need to be thread-safe. Database access is typically wrapped in a transaction via the @Transactional annotation, which gives each request's database operations their own transaction.
