Android SSLEngine example

I need to work with a TCP socket over TLS for an app I'm working on. I've been through dozens of examples, and while I have no problem getting through the handshake, I can't seem to read the input stream by any means I've tried (readLine(), reading into a character array, etc.). Every time I try, the app freezes at that point; if I debug, it never reaches the next line of code.
As an attempted solution, I decided to move over to SSLEngine, since that's supposed to be the Java 1.5 answer to SSL over java.nio. However, the one example I have found (here: http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/samples/sslengine/SSLEngineSimpleDemo.java) is more than a little confusing to me, and I've not been successful implementing it. When I try, the unwrap() call yields an empty buffer, even though I know (from using OpenSSL on the command line) that the service in question pushes data back down the pipe.
Suggestions are welcome; I've burned way too much time on this already. Here's the relevant code:
SSLEngine engine = sslContext.createSSLEngine(uri.getHost(), uri.getPort());
engine.setUseClientMode(true);
engine.beginHandshake();
SSLSession session = engine.getSession();
int bufferMax = session.getPacketBufferSize();
int appBufferMax = session.getApplicationBufferSize() + 50;
ByteBuffer cTo = ByteBuffer.allocateDirect(bufferMax);
ByteBuffer sTo = ByteBuffer.allocateDirect(bufferMax);
ByteBuffer out = ByteBuffer.wrap(sessionId.getBytes());
ByteBuffer in = ByteBuffer.allocate(appBufferMax);
debug("sending secret");
SSLEngineResult rslt = engine.wrap(out, cTo);
debug("first result: " + rslt.toString());
sTo.flip();
rslt = engine.unwrap(sTo, in);
debug("next result" + rslt.toString());

This implementation is missing some key pieces. Namely, the handshake can bounce between several states (NEED_WRAP, NEED_UNWRAP, NEED_TASK) while negotiating the connection, which means you cannot just call one method and then the other. You need to loop over the states until the handshake has completed:
while (handshaking) {
    switch (state) {
        case NEED_WRAP:
            doWrap();
            break;
        case NEED_UNWRAP:
            doUnwrap();
            break;
        case NEED_TASK:
            doTask();
            break;
    }
}
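For reference, here is a rough, untested sketch of what that loop can look like with a plain blocking SocketChannel; the channel, buffer sizes and error handling are my own assumptions for illustration, not part of the original answer:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLEngineResult.HandshakeStatus;

public class HandshakeLoop {

    // Drives the SSLEngine handshake to completion over a blocking SocketChannel.
    // "engine" and "channel" are assumed to be created and connected elsewhere.
    static void doHandshake(SSLEngine engine, SocketChannel channel) throws IOException {
        ByteBuffer empty       = ByteBuffer.allocate(0);
        ByteBuffer myNetData   = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
        ByteBuffer peerNetData = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
        ByteBuffer peerAppData = ByteBuffer.allocate(engine.getSession().getApplicationBufferSize());

        engine.beginHandshake();
        HandshakeStatus status = engine.getHandshakeStatus();

        while (status != HandshakeStatus.FINISHED && status != HandshakeStatus.NOT_HANDSHAKING) {
            switch (status) {
                case NEED_WRAP: {
                    // Produce outgoing handshake records and send them to the peer.
                    myNetData.clear();
                    SSLEngineResult result = engine.wrap(empty, myNetData);
                    status = result.getHandshakeStatus();
                    myNetData.flip();
                    while (myNetData.hasRemaining()) {
                        channel.write(myNetData);
                    }
                    break;
                }
                case NEED_UNWRAP: {
                    // Feed buffered peer data to the engine; read more from the wire only
                    // when it reports BUFFER_UNDERFLOW (not enough bytes for a full record).
                    peerNetData.flip();
                    SSLEngineResult result = engine.unwrap(peerNetData, peerAppData);
                    peerNetData.compact();
                    status = result.getHandshakeStatus();
                    if (result.getStatus() == SSLEngineResult.Status.BUFFER_UNDERFLOW
                            && channel.read(peerNetData) < 0) {
                        throw new IOException("channel closed during handshake");
                    }
                    break;
                }
                case NEED_TASK: {
                    // Run delegated tasks (certificate validation etc.) on this thread.
                    Runnable task;
                    while ((task = engine.getDelegatedTask()) != null) {
                        task.run();
                    }
                    status = engine.getHandshakeStatus();
                    break;
                }
                default:
                    throw new IllegalStateException("Unexpected handshake status: " + status);
            }
        }
    }
}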
A full working example of Java SSL and NIO
Now, that said, you should be aware that SSLEngine on Android is broken; according to that thread, Google recommends using threads and blocking sockets instead.

I have written something to make using SSLEngine easier. It can be used with NIO or for other use cases. Available here SSLFacade

unwrap() can yield an empty buffer if what was unwrapped was an SSL handshake message or alert, rather than application data. There's not enough information here to say more. What was the engine status afterwards?

beginHandshake() does not actually carry out the handshake; it just tells the SSLEngine that the subsequent calls to wrap()/unwrap() should perform one.
It's useful when you want to start another handshake. For the initial one it is not needed, as the first call to wrap() will initiate the handshake.
Besides, you have to check the result of the wrap and unwrap methods to know whether all the data has been processed; it can happen that you have to call the methods several times to consume all the data.
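As a hedged illustration of that last point (the class and method names here are invented for the example), a wrap loop that keeps calling wrap() until the source buffer is drained, and grows the destination on BUFFER_OVERFLOW, might look like this:

import java.nio.ByteBuffer;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;

public class WrapLoop {

    // Encrypts everything in src, retrying wrap() until the engine has consumed it all.
    static ByteBuffer wrapAll(SSLEngine engine, ByteBuffer src) throws SSLException {
        ByteBuffer dst = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());
        while (src.hasRemaining()) {
            SSLEngineResult result = engine.wrap(src, dst);
            switch (result.getStatus()) {
                case OK:
                    break; // some bytes were consumed; loop again if src still has data
                case BUFFER_OVERFLOW: {
                    // dst is too small for the next TLS record: grow it and retry.
                    ByteBuffer bigger = ByteBuffer.allocate(
                            dst.capacity() + engine.getSession().getPacketBufferSize());
                    dst.flip();
                    bigger.put(dst);
                    dst = bigger;
                    break;
                }
                default:
                    throw new SSLException("Unexpected wrap status: " + result.getStatus());
            }
        }
        dst.flip(); // ready to be written to the network
        return dst;
    }
}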
The following link might help:
http://onjava.com/onjava/2004/11/03/ssl-nio.html
Or this question:
SSL Handshaking Using Self-Signed Certs and SSLEngine (JSSE)

Related

gRPC Android Client losing connection "too many pings"

My Android gRPC client is receiving a GOAWAY from the server with a "too many pings" error. I realise that this is probably a server-side issue, but I think the problem is that the client channel settings do not match those of the server.
I have a C# gRPC server with the following settings:
List<ChannelOption> channelOptions = new List<ChannelOption>();
channelOptions.Add(new ChannelOption("GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS", 1000));
channelOptions.Add(new ChannelOption("GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA", 0));
channelOptions.Add(new ChannelOption("GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS", 1));
this.server = new Server(channelOptions) {
    Services = { TerminalService.BindService(this) },
    Ports = { new ServerPort("0.0.0.0", 5000, ServerCredentials.Insecure) }
};
On Android I have the following channel setup:
private val channel = ManagedChannelBuilder.forAddress(name, port)
.usePlaintext()
.keepAliveTime(10, TimeUnit.SECONDS)
.keepAliveWithoutCalls(true)
.build()
After a few minutes (the time seems random) I get the GOAWAY error. I noticed that if I stream data on the call the error never happens; it only occurs when there is no data on the stream. This leads me to believe that GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA needs to be set on the Android client as well. The problem is, for the life of me I cannot find where to set these channel settings in gRPC Java. Can someone point out where I can set them? There are no examples where these have been set.
The channel options being specified are using the wrong names. Names like GRPC_ARG_HTTP2_MAX_PINGS_WITHOUT_DATA are the C defines for keys like "grpc.http2.max_pings_without_data".
You can map from the C name to the key string by looking at grpc_types.h. You should prefer one of the C# constants in ChannelOptions when it is available, but that doesn't seem to be an option in this case.
These options are not visible in the Java ManagedChannelBuilder API because they are server-specific settings; instead they are exposed on the server builder. See the A8 client-side keepalive proposal for reference to the Java keepalive API.
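To illustrate where those knobs live on the Java side (this is a sketch of a grpc-java server, not the asker's C# server; the port and the omitted service registration are placeholders), the server-side enforcement settings sit on NettyServerBuilder, while the client-side keepalive sits on ManagedChannelBuilder as already shown above:

import java.util.concurrent.TimeUnit;
import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public class KeepaliveServer {
    public static void main(String[] args) throws Exception {
        // Server-side keepalive enforcement lives on the server builder,
        // which is why it does not appear on ManagedChannelBuilder.
        Server server = NettyServerBuilder.forPort(5000)
                // allow client pings as often as every second
                .permitKeepAliveTime(1, TimeUnit.SECONDS)
                // allow pings even when there are no active calls
                .permitKeepAliveWithoutCalls(true)
                // .addService(...) would register the actual service here
                .build()
                .start();
        server.awaitTermination();
    }
}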

Obtaining data from my TP-Link router programmatically

I'm trying to design an app that can communicate with my router programmatically using the same endpoints as the web interface (there's a demo on TP-Link's website). My router is a TP-Link TD-W8980, if that matters.
The format appears to be very difficult to decipher. Here is a request which obtains the data for the status part of my app; it gets a valid response from the router, but I'm not sure why it works!
I'm especially confused by the #0,0,0,0,0,0#0,0,0,0,0,0 part of the request. It's the only part I haven't managed to work out, though I think I recall reading that it's something to do with the stack.
[SYS_MODE#0,0,0,0,0,0#0,0,0,0,0,0]0,1
mode
[LAN_HOST_CFG#1,0,0,0,0,0#0,0,0,0,0,0]1,1
DNSServers
[WAN_DSL_INTF_CFG#1,0,0,0,0,0#0,0,0,0,0,0]2,8
upstreamCurrRate
downstreamCurrRate
upstreamMaxRate
downstreamMaxRate
upstreamNoiseMargin
downstreamNoiseMargin
upstreamAttenuation
downstreamAttenuation
[IGD_DEV_INFO#0,0,0,0,0,0#0,0,0,0,0,0]3,3
softwareVersion
hardwareVersion
upTime
[LAN_IP_INTF#0,0,0,0,0,0#0,0,0,0,0,0]4,2
IPInterfaceIPAddress
X_TPLINK_MACAddress
[LAN_HOST_ENTRY#0,0,0,0,0,0#0,0,0,0,0,0]5,4
leaseTimeRemaining
MACAddress
hostName
IPAddress
[WAN_PPP_CONN#0,0,0,0,0,0#0,0,0,0,0,0]6,4
enable
connectionStatus
externalIPAddress
DNSServers
If it helps, the names in capitals (e.g. SYS_MODE) are the section names. The number after the ] is a counter giving the section number (sections can be in any order). The final number following the comma is the number of parameters that follow in that section.
There are also request types for each section. In the example above, the URL is http://192.168.1.1/cgi?1&1&1&1&5&5&5; as you can see, the two main request types are 1 and 5.
Here is an example response from the server. As you can see, some of the sections can be returned more than once, which makes the first number of the six zeros increment each time.
[0,0,0,0,0,0]0
mode=DSL
[1,0,0,0,0,0]1
DNSServers=x.x.x.x,x.x.x.x
[1,0,0,0,0,0]2
upstreamCurrRate=928
downstreamCurrRate=3072
upstreamMaxRate=1068
downstreamMaxRate=3104
upstreamNoiseMargin=60
downstreamNoiseMargin=57
upstreamAttenuation=295
downstreamAttenuation=546
[0,0,0,0,0,0]3
softwareVersion=0.6.0 1.3 v000e.0 Build 131012 Rel.51720n
hardwareVersion=TD-W8980 v1 00000000
upTime=x
[1,1,0,0,0,0]4
IPInterfaceIPAddress=192.168.1.1
X_TPLINK_MACAddress=xx:xx:xx:xx:xx:xx
[1,0,0,0,0,0]5
leaseTimeRemaining=-1
MACAddress=xx:xx:xx:xx:xx:xx
hostName=X
IPAddress=192.168.1.2
[2,0,0,0,0,0]5
leaseTimeRemaining=-1
MACAddress=xx:xx:xx:xx:xx:xx
hostName=X
IPAddress=192.168.1.4
[3,0,0,0,0,0]5
leaseTimeRemaining=-1
MACAddress=xx:xx:xx:xx:xx:xx
hostName=X
IPAddress=192.168.1.11
[4,0,0,0,0,0]5
leaseTimeRemaining=-1
MACAddress=xx:xx:xx:xx:xx:xx
hostName=X
IPAddress=192.168.1.5
[1,2,1,0,0,0]6
enable=1
connectionStatus=Connected
externalIPAddress=x.x.x.x
DNSServers=x.x.x.x,x.x.x.x
[2,1,1,0,0,0]6
enable=0
connectionStatus=Unconfigured
externalIPAddress=0.0.0.0
DNSServers=0.0.0.0,0.0.0.0
[3,1,1,0,0,0]6
enable=0
connectionStatus=Unconfigured
externalIPAddress=0.0.0.0
DNSServers=0.0.0.0,0.0.0.0
[error]0
I would appreciate any explanation of this format, and whether it appears anywhere else on the web. I've never seen such a system before!
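Not an explanation of the protocol, but here is a minimal sketch of how the response text shown above could be split into sections and key=value pairs, based purely on that sample (the class and field names are invented for illustration):

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RouterResponseParser {

    // One "[a,b,c,d,e,f]n" block (or the trailing "[error]0") and its key=value pairs.
    static class Section {
        String header;
        Map<String, String> values = new LinkedHashMap<>();
    }

    static List<Section> parse(String body) {
        List<Section> sections = new ArrayList<>();
        Section current = null;
        for (String line : body.split("\\r?\\n")) {
            line = line.trim();
            if (line.isEmpty()) continue;
            if (line.startsWith("[")) {
                // e.g. "[1,0,0,0,0,0]2": start a new section
                current = new Section();
                current.header = line;
                sections.add(current);
            } else if (current != null) {
                int eq = line.indexOf('=');
                if (eq > 0) {
                    current.values.put(line.substring(0, eq), line.substring(eq + 1));
                }
            }
        }
        return sections;
    }
}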

Why is a read issued before a write just after initializing zram?

I am a newbie to the Linux kernel and have just started learning how zram works. In initial testing, I am seeing that a READ is issued before a WRITE just after zram is initialized, and I am eager to know why this is so.
As an exercise I took a dump_stack() and followed the path along which this zram read is performed.
zram learns whether it has to do a READ or a WRITE operation from the issued bio->bi_rw. The code flow is that zram_make_request is registered from create_device in the zram driver; zram_make_request internally calls __zram_make_request, which calls zram_bvec_rw.
zram_bvec_rw checks bio->bi_rw and issues the READ or WRITE accordingly.
Now, in this case, the READ is encapsulated inside the bio struct itself. While triaging I found that submit_bh fills in all the fields of the bio and calls submit_bio.
I was wondering who actually sets bio->bi_rw to READ. By enabling a few prints I found that ll_rw_block is called with READ by __block_write_begin; ll_rw_block then calls submit_bh, where the rest of the bio struct fields are filled in.
But I am still not getting the answer: why is READ issued for ll_rw_block from __block_write_begin?
zram driver:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/drivers/block/zram/zram_drv.c?id=refs/tags/v3.18.14
in file: https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/fs/buffer.c?id=refs/tags/v3.18.14
if (!buffer_uptodate(bh) && !buffer_delay(bh) &&
!buffer_unwritten(bh) &&
(block_start < from || block_end > to)) {
ll_rw_block(READ, 1, &bh);
*wait_bh++=bh;
}
buffer_uptodate(bh), /* Contains valid data */
buffer_delay(bh), /* Buffer is not yet allocated on disk */
buffer_unwritten(bh), /* Buffer is allocated on disk but not written */
Can someone please give an explanation?
How am I concluding that a read is performed before a write? I checked the num_reads and num_writes counters: num_reads is 1 while num_writes is 0 after mkswap /dev/block/zram0, and after swapon /dev/block/zram0 the final counts are num_reads = 2 and num_writes = 1.
NOTE: This is the case when no additional zram activity is performed; we get this behaviour exactly as described above.
Because of this, as far as I can see: block_start < from || block_end > to (respecting the other conditions too, buffer_uptodate() etc.).
That is, a bio writes a whole block, so if the region to be updated is smaller than the submitted block, you obviously need a fresh copy of the block first, which is why the read is issued.

Android Jwebsocket custom protocol

I am opening a connection setting up a custom protocol like this:
WebSocketSubProtocol d = new WebSocketSubProtocol("MyCustomProto",WebSocketEncoding.TEXT);
mJWC.addSubProtocol(d);
mJWC.open(mURL);
But... server side, I receive this in the protocol string:
"org.jwebsocket.json MyCustomProto"
How can I remove the "org.jwebsocket.json" part from the string?
I don't want to do it server-side...
Thanks!
I will answer my own question.
Calling addSubProtocol doesn't seem to be the right solution, for a couple of reasons:
if you call those 3 lines of code multiple times (if the connection failed the first time, for example), the protocol string ends up as something like
"org.jwebsocket.json MyCustomProto MyCustomProto"
It just keeps appending the protocol.
So I found a workaround. Instead of using addSubProtocol, I define the protocol directly when I create the socket:
mJWC = new BaseTokenClient("client||"+code+"||"+name,WebSocketEncoding.TEXT);
Voila, no more "org.jwebsocket.json".

Jsoup and gzipped html content (Android)

I've been trying all day to make this thing work but it's still not right. I've checked so many posts on here and tested so many different implementations that I don't know where to look anymore...
Here is my situation: I have a small PHP test file (gz.php) on my server which looks like this:
header("Content-Encoding: gzip");
print("\x1f\x8b\x08\x00\x00\x00\x00\x00");
$contents = gzcompress("Is it working?", 9);
print($contents);
This is the simplest I could do and it works fine with any web browser.
Now I have an Android activity using Jsoup that has this code :
URL url = new URL("http://myServerAdress.com/gz.php");
doc = Jsoup.parse(url, 1000);
This causes an empty EOFException on the Jsoup.parse line.
I've read everywhere that Jsoup is supposed to parse gzipped content without having to do anything special, but obviously, there's something missing.
I've tried many other ways, such as Jsoup.connect().get(), or using InputStream, GZIPInputStream and DataInputStream. I also tried PHP's gzdeflate() and gzencode() methods, but no luck either. I even tried not declaring the Content-Encoding header in PHP and deflating the content later... but that was about as clever as it was effective...
It has to be something "stupid" I'm missing, but I just can't tell what... does anybody have an idea?
(ps : I'm using Jsoup 1.7.0, so the latest one as of now)
The asker indicated in a comment that gzcompress was writing a CRC that was both incorrect and incomplete, per information from here; the operative code is:
// Display the header of the gzip file
// Thanks ck#medienkombinat.de!
// Only display this once
echo "\x1f\x8b\x08\x00\x00\x00\x00\x00";
// Figure out the size and CRC of the original for later
$Size = strlen($contents);
$Crc = crc32($contents);
// Compress the data
$contents = gzcompress($contents, 9);
// We can't just output it here, since the CRC is messed up.
// If I try to "echo $contents" at this point, the compressed
// data is sent, but not completely. There are four bytes at
// the end that are a CRC. Three are sent. The last one is
// left in limbo. Also, if we "echo $contents", then the next
// byte we echo will not be sent to the client. I am not sure
// if this is a bug in 4.0.2 or not, but the best way to avoid
// this is to put the correct CRC at the end of the compressed
// data. (The one generated by gzcompress looks WAY wrong.)
// This will stop Opera from crashing, gunzip will work, and
// other browsers won't keep loading indefinately.
//
// Strip off the old CRC (it's there, but it won't be displayed
// all the way -- very odd)
$contents = substr($contents, 0, strlen($contents) - 4);
// Show only the compressed data
echo $contents;
// Output the CRC, then the size of the original
gzip_PrintFourChars($Crc);
gzip_PrintFourChars($Size);
Jonathan Hedley commented, "jsoup just uses a normal Java GZIPInputStream to parse the gzip, so you'd hit that issue with any Java program." The EOFException is presumably due to the incomplete CRC.
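To illustrate why an incomplete gzip trailer ends in an EOFException, here is a standalone demo using only java.util.zip (this is not Jsoup or the asker's PHP; it just reproduces the symptom by chopping bytes off the trailer, with made-up class and variable names):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class TruncatedGzipDemo {
    public static void main(String[] args) throws Exception {
        // Build a valid gzip stream in memory.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write("Is it working?".getBytes("UTF-8"));
        }
        byte[] full = bos.toByteArray();

        // Chop off half of the 8-byte gzip trailer (CRC32 + original size),
        // similar in spirit to what the broken PHP output was producing.
        byte[] truncated = Arrays.copyOf(full, full.length - 4);

        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(truncated))) {
            byte[] buf = new byte[64];
            while (in.read(buf) != -1) {
                // drain the stream; the trailer check happens at end of stream
            }
        } catch (EOFException e) {
            System.out.println("EOFException from the incomplete trailer: " + e.getMessage());
        }
    }
}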
