I have captured some traffic between an Android application and a website using Charles Proxy. Charles identifies the traffic as a Protocol Buffer stream.
The structure as shown in Charles:
- site.com
|
-- sub
|
--- message.proto
The raw message:
POST site.com/sub/message.proto HTTP/1.1
token: random
Id: random
Authorization: Basic OTI[..]
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.3; Galaxy Nexus Build/JWR66Y)
Host: site.com
Connection: Keep-Alive
Accept-Encoding: gzip
Content-Type: application/x-www-form-urlencoded
Content-Length: 580
��hï õÜÕñ6iaõ*|{6¤oQIùk*դž¼
S_½ª¥8.3ÝÎu öÚ´êVFBeùõÈî¿;µ¼ö%S [...]
I have tried a few things to decode the content, but in vain. The command protoc --decode_raw < message.txt fails with "Failed to parse input." Now I am not sure whether the message really is a protobuf message, since the Content-Type header does not indicate that protobuf is used. I have also saved the traffic as a .bin file.
Charles can display the contents of a protobuf message, but it requires the corresponding descriptor file. To generate the descriptor file, however, I need the actual .proto file, which I do not have.
So, am I forced to decode the message by hand, or are there other possibilities I have overlooked?
I suspect that application-level encryption is used and that Charles identifies the traffic as protobuf by mistake.
It looks to me like the content is simply compressed:
Accept-Encoding: gzip
Content-Type: application/x-www-form-urlencoded
Try decompressing it with gunzip.
I agree that it is likely not a protobuf. Charles Proxy is probably confused by the URL ending in .proto.
Note that when attempting to decode the data (whether as protobuf or as gzip), you'll need to make sure you are decoding only the body of the request, not the textual HTTP headers. Editing the headers out in a text editor will likely not work, since converting binary data to text usually corrupts it. The cleanest way to extract the body is:
tail -c 580 message.txt | zcat
or, if you think it could be a protobuf after all:
tail -c 580 message.txt | protoc --decode_raw
Note that 580 comes from the Content-Length header.
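Before trying either decoder, it can save time to sniff the body's leading bytes: gzip streams start with the magic number 1f 8b, so a quick check tells you whether gunzip is even worth attempting. A minimal sketch in Python (the function name is illustrative; in practice you would read the extracted body from your .bin file instead of generating one):

```python
import gzip
import zlib

def sniff_body(data: bytes) -> str:
    """Guess what a captured HTTP body is, from its leading bytes."""
    if data[:2] == b"\x1f\x8b":  # gzip magic number
        return "gzip"
    if data[:2] in (b"\x78\x01", b"\x78\x9c", b"\x78\xda"):  # common zlib headers
        return "zlib"
    return "unknown"

# Example with a body we compress ourselves:
body = gzip.compress(b"hello")
print(sniff_body(body))        # gzip
print(gzip.decompress(body))   # b'hello'
```

If the check reports "unknown", the body is neither gzip nor raw zlib, which makes application-level encryption (or plain protobuf) more likely.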
IIS will send back a 400 error if you send it two Content-Type headers; here is an example:
1: Content-type : application/json
2: Content-type : application/json; charset=utf-8;
Apache handles that and processes the JSON properly.
My reading of the W3C spec is that only a single Content-Type header is allowed. Arguably, though, both headers mean exactly the same thing, since JSON is, as I understand it, UTF-8. So who's right here: IIS or Apache?
My app fails when running against IIS: the Android library I am using sends two headers if I supply my own, and IIS rejects the request. So for now I'm locked into Apache.
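Per RFC 7230 section 3.2.2, multiple headers with the same field name are only allowed when the field is defined as a comma-separated list, which Content-Type is not, so IIS's strictness is defensible. If the library can't be changed, one workaround is to collapse repeats before sending. A sketch in Python (the function name and header list shape are illustrative, not any particular library's API):

```python
def dedupe_headers(headers):
    """Collapse repeated header names, keeping the first value.

    HTTP allows repeats only for list-valued fields (RFC 7230 3.2.2);
    Content-Type is not one of them, so later repeats are dropped here.
    """
    seen = {}
    for name, value in headers:
        key = name.lower()
        if key not in seen:
            seen[key] = (name, value)
    return list(seen.values())

raw = [
    ("Content-Type", "application/json"),
    ("Content-Type", "application/json; charset=utf-8"),
    ("Accept", "application/json"),
]
print(dedupe_headers(raw))
# keeps the first Content-Type and the Accept header
```

Keeping the first value is an arbitrary policy choice; in this case the two values are semantically equivalent anyway, since UTF-8 is the default charset for JSON.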
I am trying to connect to a WebSocket server that my Android device connects to from an app. I captured the packets on my Android device, and the initial request headers look like this:
GET / HTTP/1.1
Pragma: no-cache
Cache-Control: no-cache
Host: example.com:80
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: ysWaBflPV9EmRaB1JpPMOQ==
Origin: http://example.com:80
Sec-WebSocket-Protocol: default-protocol
Sec-WebSocket-Extensions:
Sec-WebSocket-Version: 13
The response from the server looks like this:
HTTP/1.1 101 Switching Protocols
Date: Wed, 31 Jan 2018 02:37:20 GMT
Connection: upgrade
Set-Cookie: AWSALB=0yaRd5HOPlZSITp+bcXoZoIn/7YOOqE9o4/t/8b3kw2PTxooflm/85w+1JfudEE0Cwb1BUkWPV+t4kOnEm4FbLSWwMMFp8URbblZKj0a0kd0xB+glbLBHWxc/TPW; Expires=Wed, 07 Feb 2018 02:37:20 GMT; Path=/
Server: nginx/1.12.1
Upgrade: websocket
Sec-WebSocket-Accept: bj5wLF8vmyDrA7pqEgbHKbxqQSU=
Then, some communication begins, with lots of unrecognizable characters and some clear words in the messages. I don't have much experience with WebSockets, but I assume it is some form of compression.
I was able to send an identical request to this server using the ws module in Node.js, and I got an identical response to the one above. One notable difference: when I set the Sec-WebSocket-Protocol header to default-protocol, I received an error saying "Server sent no subprotocol". Without that header, I still got the same response.
After the initial response, however, I did not receive any more messages, even though I did on my Android device. After about 30 seconds, the connection closes with code 1006 and no further information.
I tested the same request with curl and received the same headers back, but it also closed after about 30 seconds with "Empty reply from server".
So my main question is obviously: What is going wrong, and how can I fix it?
More specifically, I am wondering if anyone with WebSocket experience knows if it is a problem with my client, or with the server itself.
It is possible that the server is authenticating me in some way on my Android device, but the headers that I captured are not revealing anything about that. Is it ever customary to authenticate a connection with a later message in the client-server communications? Is it possible that a separate HTTP request is authenticating me for this WebSocket server? All of these things seem unlikely to me since I found no other packets with anything related to auth requests. It seems much more likely that there is something wrong with the messages being sent.
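One part of the handshake that is mechanical and worth checking in a capture: per RFC 6455, the server derives Sec-WebSocket-Accept from the client's Sec-WebSocket-Key, so you can confirm the server actually processed your key rather than a proxy replaying a cached response. (Also worth knowing: client-to-server frames are always XOR-masked per RFC 6455, so outgoing messages look scrambled in a capture even without any compression.) A sketch of the derivation in Python:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed GUID from RFC 6455

def expected_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept value a server must return."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Test vector from RFC 6455 section 1.3:
print(expected_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

If the Accept value in your Node.js handshake matches this formula for the key you sent, the handshake itself is fine, and the 1006 close is more likely about what happens (or fails to happen) after the upgrade, such as an expected first message from the client.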
I have a project where I'm controlling an Arduino at my house using an Android app over WAN. I'm using MIT's App Inventor to design the app, and with that I'm using an HTTP PUT/POST (I've tried both) to send the string "helloThere" to the Arduino. Everything has been fine while sending directly to my IP address and port number. This is the Arduino output (I've obfuscated my IP and port):
PUT / HTTP/1.1
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.4.4; GT-I9305 Build/KTU84P)
Host: xx.xx.xx.xx:xxxx
Connection: Keep-Alive
Accept-Encoding: gzip
Content-Type: application/x-www-form-urlencoded
Content-Length: 10
helloThere
The problem arises when I use a DDNS (no-ip) to refer to my IP address (As it is dynamic). For some reason the PUT/POST request does not get carried out when getting relayed through this. The output from the Arduino is shown below when using the DDNS:
GET / HTTP/1.1
User-Agent: Dalvik/1.6.0 (Linux; U; Android 4.4.4; GT-I9305 Build/KTU84P)
Host: xx.xxx.xx.xx:xxxx
Connection: Keep-Alive
Accept-Encoding: gzip
Somehow it is changing to a GET request instead of a PUT/POST, but it is still reaching the device. I'll be honest, I'm not a web guy, so I'm pretty confused: isn't a DDNS supposed to relay whatever you send to it? I've had a look around and can't find anything on this; any help or explanation would be appreciated.
EDIT: After a lot of further research I have figured out that a DDNS server actually returns the IP address of the desired hostname when queried. Does anyone know what address and port No-IP uses to do this? I am aware that Windows uses nslookup to perform the lookup, but I have no idea how this is done on an Arduino. It could be over UDP or HTTP. Again, any help from someone with experience in this area would be appreciated.
Alright, finally solved the issue for those of you who are interested. Here's the Arduino code to retrieve the IP:
char server1[] = "xxxxxxxxxxxx.ddns.net"; // server to ping to get the external IP address

if (currentMillis - previousMillis >= interval) {
  previousMillis = currentMillis;
  if (client1.connect(server1, 80)) { // if you get a connection, report it over serial
    Serial.println("connected");
    client1.println("GET / HTTP/1.0"); // make an HTTP request
    client1.println("Host: xxxxxxxxxxxx.ddns.net");
    client1.println("Connection: close");
    client1.println();
    delay(1200);
  }
  while (client1.available()) { // read the response from the external server
    char c = client1.read();    // and copy it into an array to take the IP from
    HTTPArray[counter] = c;
    counter++;
  }
}
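The loop above copies the raw HTTP response, headers included, into HTTPArray; the address still has to be parsed out of that text. The extraction step can be sketched like this (shown in Python for brevity; the regex and the sample response are illustrative, not the actual page No-IP returns):

```python
import re

def extract_ipv4(http_response: str):
    """Pull the first IPv4-looking token out of an HTTP response."""
    match = re.search(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", http_response)
    return match.group(0) if match else None

sample = "HTTP/1.0 200 OK\r\n\r\n<html><body>93.184.216.34</body></html>"
print(extract_ipv4(sample))   # 93.184.216.34
```

On the Arduino the same idea would be a character scan over HTTPArray looking for digit-dot sequences, since there is no regex engine available.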
When I make a request to a web service over WiFi, everything works well. But when the same request is made over GPRS with a WAP connection (I did not in any way force the connection to use WAP; this may be carrier-dependent, and testing was done in Argentina), the server receives duplicate values in both the Content-Type and Content-Length headers.
Below is the same request logged on server first over WiFi and second over GPRS.
Over WiFi:
POST /ODP/Services.asmx HTTP/1.1
User-Agent: kSOAP/2.0
SOAPAction: http://temphost.org/RetrieveConfiguration
Content-Type: text/xml
Connection: close
Content-Length: 464
Host: temp.host.com
Accept-Encoding: gzip
Over GPRS:
POST /ODP/Services.asmx HTTP/1.1
Accept-Encoding: deflate, gzip, identity
Content-Length: 464, 464
Content-Type: text/xml, text/xml
Host: temp.host.com
SOAPAction: http://temphost.org/RetrieveConfiguration
User-Agent: kSOAP/2.0
X-WAP-WTLSEncryptiontype: NONE
X-WAP-Bearerinfo: W-HTTPS=FALSE, bearertype=0
Via: W-HTTP/1.1 wgw-fe6 EMIG 5.1
x-msisdn: <User Phone number>
x-up-calling-line-id: <User Phone number>
x-technology-stack: Unknown
TE: trailers
Connection: TE
I am not able to understand how and where the multiple values are being added to the request for the Content-Type and Content-Length headers.
Can somebody explain what is wrong with the WAP connection, or whether the duplicates are being added at the carrier end when the request is adapted for WAP?
Sadly there's nothing to do on the client side (customer/phone/browser), but there are some possible approaches to a solution.
If you're the web developer (VASP side), you need to keep the URL/URI short enough that the POST does not need more than one packet.
If you're the web server manager (VASP/Telco side), you can configure the server to accept multiple values for these specific headers (when they are equal). In this scenario, be aware that you put your server at risk of an HTTP response splitting attack.
If you're the proxy admin (Telco side), you can tune your gateway to discard the extra header instead of merging the duplicates into one header containing multiple values. In this scenario, be aware that you will be outside the RFC's recommendations.
If, because of its size, a POST is split across more than one packet, the Content-Length and Content-Type fields are duplicated.
The WAP gateway resolves this by merging them into one header containing multiple comma-separated values; the problem is that such a header then triggers an HTTP 411 error.
Based on the latest RFC drafts, this traffic should be rejected with error code 502, but a workaround that removes one of the headers instead of merging them, in order to keep these transactions working, is possible.
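That workaround, dropping a repeated header only when its value is identical to one already seen (the safe case, since no information is lost), can be sketched as a small normalization pass. The function name and the header list shape below are illustrative, not any gateway's real API:

```python
def drop_equal_duplicates(headers):
    """Remove repeated headers whose name AND value already appeared.

    This avoids 'Content-Length: 464, 464' style merging by discarding
    the second identical header before any merging can happen.
    """
    out = []
    seen = set()
    for name, value in headers:
        key = (name.lower(), value)
        if key in seen:
            continue  # exact duplicate: safe to drop
        seen.add(key)
        out.append((name, value))
    return out

wap_request = [
    ("Content-Length", "464"),
    ("Content-Length", "464"),
    ("Content-Type", "text/xml"),
    ("Content-Type", "text/xml"),
    ("Host", "temp.host.com"),
]
print(drop_equal_duplicates(wap_request))
```

Note this deliberately does nothing when the duplicate values differ; that conflicting case has no safe automatic resolution and is the one the drafts say should be rejected.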
I was trying to read the HTTP messages between the browser in the Android simulator and other third party web-servers using tcpdump. However, since the browser can accept gzip content-encoding, I can't see the HTML content as plain-text in the tcpdump output. Is there a way to change the configs of the browser so that it doesn't send that Accept-Encoding: gzip header line?
This post implies that if you remove the Accept-Encoding header, you'll get raw data back... you should be able to write a custom WebView that never sends that header. Hope that works!
http://forgetmenotes.blogspot.com/2009/05/how-to-disable-gzip-compression-in.html