I am trying to use Gson to do my object mapping on the Android emulator.
It has been ridiculously slow when processing JSON data of around 208 KB. I do not have any hierarchies in my JSON.
After the object mapping is done, I can see that Gson created around 500 records.
It is taking over 3 minutes on the Android emulator to map the input JSON.
I have annotated my entity, which consists of strings and a couple of floats.
Am I missing something?
Any ideas or best practices would greatly help.
Are there any ways of quickly object mapping the JSON data?
URL myURL = new URL(url);
/* Open a connection to that URL. */
URLConnection ucon = myURL.openConnection();
/*
* Define InputStreams to read from the URLConnection.
*/
InputStream is = ucon.getInputStream();
InputStreamReader reader = new InputStreamReader(is);
long tickCount = System.currentTimeMillis();
Policy[] policies = new Gson().fromJson(reader, Policy[].class);
long endCount = System.currentTimeMillis() - tickCount;
Log.d("Time to pull policies in milliseconds", "" + endCount);
I've seen questions like this come up before, and the general consensus is that Jackson is much faster than Gson. See the following links for more information:
Jackson Vs. Gson
Replace standard Android JSON parser for better performance?
http://www.cowtowncoder.com/blog/archives/2009/12/entry_345.html
https://stackoverflow.com/questions/338586/a-better-java-json-library
Here is one which specifically discusses Android: http://ubikapps.net/?p=525
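For reference, here is a minimal sketch of what the same mapping could look like with Jackson 2.x data binding, reusing the Policy class and the reader from the question (this is an illustration, not code from any of the linked posts):
import com.fasterxml.jackson.databind.ObjectMapper;

// Create the mapper once and reuse it; construction is relatively expensive.
ObjectMapper mapper = new ObjectMapper();
// Bind the JSON array straight onto the existing Policy entity class.
Policy[] policies = mapper.readValue(reader, Policy[].class);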
Have you tried mixing the Gson streaming parser with the Gson object? http://sites.google.com/site/gson/streaming (look for the Mixed read example).
This approach may help since Gson reads in an entire parse tree and then acts on it. With a large array list, reading in all elements and attempting to parse may cause a lot of memory swaps (or thrashing). This approach reads in one element at a time.
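As a rough illustration of the mixed approach, assuming the same Policy class and reader from the question (a sketch, not code from the Gson documentation):
import com.google.gson.Gson;
import com.google.gson.stream.JsonReader;
import java.util.ArrayList;
import java.util.List;

Gson gson = new Gson();
List<Policy> policies = new ArrayList<Policy>();
JsonReader jsonReader = new JsonReader(reader);
jsonReader.beginArray();                 // the feed is a top-level JSON array
while (jsonReader.hasNext()) {
    // Bind one element at a time instead of building the whole parse tree first
    Policy policy = gson.fromJson(jsonReader, Policy.class);
    policies.add(policy);
}
jsonReader.endArray();
jsonReader.close();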
Hope this helps.
You'd probably get better performance if you wrapped that InputStream in a BufferedInputStream with a nice big buffer...
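For instance (the 8 KB buffer size here is just an illustrative choice):
import java.io.BufferedInputStream;

InputStream is = ucon.getInputStream();
BufferedInputStream bis = new BufferedInputStream(is, 8 * 1024); // buffer the network reads
InputStreamReader reader = new InputStreamReader(bis);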
3 minutes is insane. I seldom run the emulator but I have an app with a ~1.1MB JSON asset and that takes around 5 seconds to load and process on hardware.
(Which is still far too long, but still).
I've found that I can speed up gson.fromJson quite considerably by not modelling the elements in the JSON that I don't need. Gson will happily fill in only what is specified in your response classes.
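For instance, a hypothetical trimmed-down response class (the field names below are made up, not from the question) makes Gson skip every other property in the feed:
public class PolicySummary {
    // Only the fields the app actually uses; all other JSON properties are ignored by Gson
    String policyNumber;  // hypothetical field
    String holderName;    // hypothetical field
    float premium;        // hypothetical field
}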
I have found that CREATING a Gson instance is a very expensive operation, both in terms of CPU used and memory allocated.
Since Gson instances are thread-safe, constructing and reusing a single static instance pays off, especially if you are serializing / deserializing often.
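A minimal sketch of that pattern:
import com.google.gson.Gson;

public final class GsonHolder {
    // Gson instances are thread-safe, so one shared instance can be reused everywhere
    public static final Gson GSON = new Gson();

    private GsonHolder() {
    }
}

// Usage: Policy[] policies = GsonHolder.GSON.fromJson(reader, Policy[].class);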
I'm developing an Android app that will hold a tensorflow-lite model for offline inference.
I know that it is impossible to completely prevent someone from stealing my model, but I would like to make it hard for anyone who tries.
I thought of keeping my .tflite model inside the .apk but without the weights of the last layer. Then, at execution time, I could download the weights of the last layer and load them into memory.
So, if someone tried to steal my model, they would get a useless one, since it is missing the weights of the last layer.
Is it possible to generate a tflite model without the weights of the last layer?
Is it possible to load those weights into a model that is already loaded in memory?
This is how I load my .tflite model:
tflite = new Interpreter(loadModelFile(), tfliteOptions);
// loads the tflite graph from the asset file
private MappedByteBuffer loadModelFile() throws IOException {
AssetFileDescriptor fileDescriptor = mAssetManager.openFd(chosen);
FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
FileChannel fileChannel = inputStream.getChannel();
long startOffset = fileDescriptor.getStartOffset();
long declaredLength = fileDescriptor.getDeclaredLength();
return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
}
Are there other approaches to make my model safer? I really need to make inference locally.
If we are talking about Keras models (or any other TF model), we can easily remove the last layer and then convert the result to a TF Lite model with tf.lite.TFLiteConverter. That should not be a problem.
Now, in Python, get the last layer's weights and convert them to a nice JSON file. This JSON file can be hosted in the cloud (for example, on Firebase Cloud Storage) and downloaded by the app.
The weights can then be parsed as an array. The activations from the TF Lite model can be dot-multiplied with the weights parsed from the JSON. Lastly, we apply an activation function to produce the predictions we actually need.
The model is trained so specifically for your task that it can rarely be reused for any other purpose, so I don't think you need to worry much about that.
Also, it would be better to use a cloud hosting platform that serves predictions through requests and an API instead of shipping the raw model directly.
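As a rough sketch of the on-device half of that idea (all names and shapes below are hypothetical, not taken from the question's code): run the truncated TF Lite model to get the penultimate activations, then apply the downloaded weights and bias manually and finish with a softmax.
// Hypothetical sketch: applies a downloaded dense layer (weights + bias) on top of
// the truncated model's output. 'activations' would come from tflite.run(...),
// while 'weights' and 'bias' would be parsed from the JSON downloaded at runtime.
float[] applyLastLayer(float[] activations, float[][] weights, float[] bias) {
    int numClasses = bias.length;
    float[] logits = new float[numClasses];
    for (int c = 0; c < numClasses; c++) {
        float sum = bias[c];
        for (int j = 0; j < activations.length; j++) {
            sum += activations[j] * weights[j][c];  // dot product with the downloaded weights
        }
        logits[c] = sum;
    }
    // Softmax turns the logits into probabilities
    float max = Float.NEGATIVE_INFINITY;
    for (float logit : logits) {
        max = Math.max(max, logit);
    }
    float total = 0f;
    float[] probs = new float[numClasses];
    for (int c = 0; c < numClasses; c++) {
        probs[c] = (float) Math.exp(logits[c] - max);
        total += probs[c];
    }
    for (int c = 0; c < numClasses; c++) {
        probs[c] /= total;
    }
    return probs;
}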
I have a large amount of Bible data in XML format. I am making an Android Bible application, but I feel like my data is very large.
In my research, I read that an XML parser parses through the whole file until it reaches the tag it needs. Does anyone know an easier and faster way to parse all the data?
SAX parsing may be appropriate when the data extraction logic is relatively simple and forward-only... if you want the ease and comfort of traversing the hierarchical structure or using XPath, then you are out of luck...
JDOM and DOM have serious memory usage issues...
VTD-XML is a library that spans the use cases too complicated for SAX/StAX and too memory-intensive for DOM or JDOM.
While VTD-XML loads everything into memory, the memory footprint is a modest 1.3x~1.5x the size of the XML document, which is 3~5x more efficient than DOM.
It also exposes a DOM-like cursor API and supports XPath 1.0...
Can SAX Parsers use XPath in Java?
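A rough sketch of what VTD-XML's cursor/XPath API looks like (the file name and the /bible/book/chapter/verse path are made-up placeholders, and exception handling is omitted):
import com.ximpleware.AutoPilot;
import com.ximpleware.VTDGen;
import com.ximpleware.VTDNav;

VTDGen vg = new VTDGen();
if (vg.parseFile("bible.xml", true)) {            // builds the VTD index in memory
    VTDNav vn = vg.getNav();
    AutoPilot ap = new AutoPilot(vn);
    ap.selectXPath("/bible/book/chapter/verse");  // hypothetical document structure
    while (ap.evalXPath() != -1) {
        int t = vn.getText();                     // index of the verse's text token
        if (t != -1) {
            String verse = vn.toNormalizedString(t);
            // ... use the verse text
        }
    }
}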
You should use a SAX parser; it's the best way to parse large XML files. For instance, you can do this:
File inputFile = new File("input.txt");
SAXParserFactory factory = SAXParserFactory.newInstance();
SAXParser saxParser = factory.newSAXParser();
UserHandler userhandler = new UserHandler();
saxParser.parse(inputFile, userhandler);
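The UserHandler above would be a DefaultHandler subclass that reacts only to the elements you care about; here is a minimal sketch (the verse element name is a made-up placeholder):
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class UserHandler extends DefaultHandler {
    private final StringBuilder text = new StringBuilder();
    private boolean inVerse = false;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attributes) {
        if ("verse".equals(qName)) {   // hypothetical element name
            inVerse = true;
            text.setLength(0);
        }
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        if (inVerse) {
            text.append(ch, start, length);
        }
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("verse".equals(qName)) {
            inVerse = false;
            // ... store or display text.toString()
        }
    }
}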
I was just wondering if anyone could recommend a better alternative to org.json for decoding a complex JSON string. For reference, this will be coming from a web server down to Android (& iOS, but that's the other dev's problem!) devices; it doesn't have to go back up.
The string is of the following nature...
{"header":"value","count":value, ..., "messages":[
{"messagetype":1,"name":"value"},
{"messagetype":2,"name":"value","name":value},
{"messagetype":1,"name":"value"},
{"messagetype":3,"name":"value","subvalues":["value",value,value]},
...
{"messagetype":4,"name":value,"name":"value","name":value}
]}
Basically, there are some header fields which I can always rely on but then there will be an "array" of messages, variable in count, content and order.
I've been researching this for a few days now and have dismissed GSON and a few others because they either need to know the exact structure in advance and/or don't deal well with the embedded types (the contained messages).
Answer three in this question pointed me to using the org.json library and I know I can use that to parse through the string but I guess one of that answer's replies ("That's super old school and nobody uses that library anymore in the real world") has made me question my approach.
Can anyone suggest a library/approach which would handle this problem better? If anyone else has used an alternative approach to dealing with this type of complex and variable structure, I'd really appreciate your input.
Thanks in advance.
I really do not agree with that opinion about the org.json library ("That's super old school and nobody uses that library anymore in the real world"), since parsing JSON with it is pretty straightforward. Besides, how complex can JSON get? It is all about key/value pairs, nothing that can't be solved with a few lines of code. To illustrate, here are a few cases that should convince you it is pretty simple:
Suppose you have a response from the server containing all info you need formatted in a json array, then you can do something like this to parse the String:
JSONArray arrayJson = new JSONArray(response);
But now you want to access arrayJson's children:
for (int i = 0; i < arrayJson.length(); i++)
{
    JSONObject json = arrayJson.getJSONObject(i);
}
And now assume you have another array of JSON objects inside each of those you retrieved in the for loop:
Then you'll get them this way:
for (int i = 0; i < arrayJson.length(); i++)
{
    JSONObject json = arrayJson.getJSONObject(i);
    JSONArray anotherArray = json.getJSONArray("key");
}
Deeper nestings can be handled the same way, so I think I've made my point. Remember that sometimes, struggling to find easier ways to do things can make them even harder.
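Applied to the feed from the question, a rough sketch of that approach could dispatch on the messagetype field (the response variable and anything not shown in the question's sample are placeholders):
import org.json.JSONArray;
import org.json.JSONObject;

JSONObject root = new JSONObject(response);         // 'response' is the raw JSON string
String header = root.getString("header");
JSONArray messages = root.getJSONArray("messages");

for (int i = 0; i < messages.length(); i++) {
    JSONObject message = messages.getJSONObject(i);
    switch (message.getInt("messagetype")) {
        case 1:
            // handle type-1 messages
            break;
        case 3:
            JSONArray subvalues = message.getJSONArray("subvalues");
            // handle type-3 messages and their sub-values
            break;
        default:
            // other message types
            break;
    }
}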
I'm currently parsing JSON and got the following piece of code:
boolean suf = list.getJSONObject(i).getBoolean("sufficient");
String grade = list.getJSONObject(i).getString("grade");
String id = list.getJSONObject(i).getString("id");
I'm wondering if multiple times calling getJSONObject creates overhead resulting in increasing processing time.
Would this be faster and/or better for example?
JSONObject object = list.getJSONObject(i);
boolean suf = object.getBoolean("sufficient");
String grade = object.getString("grade");
String id = object.getString("id");
This does introduce a new object, but will the next 3 calls make the tradeoff worth it?
Since I'm showing a dialog to inform the user something is loading (and thus they can't undertake any action), I'd like to minimize the wait time for the user.
The second option is how I usually do it, but you will hardly notice any difference in performance.
list.getJSONObject(i).getBoolean("sufficient"); creates a temporary object and gets the value from it. Nowadays, compilers are smart enough to cache such temporary objects. Even if they don't, unless you are handling millions of JSON objects in your "list", I don't see any performance impact here.
I'm developing an app for Android that needs to get a substantial amount of data from a JSON feed. This feed is a one-line JSON file, weighing approximately 400 KB and containing roughly 10 arrays that I need to read.
I'm using the JSON library for Android to do so, and the output works well, but it takes ages (well, about 30 seconds) to compute. The download step is done quickly; it's the creation of the JSON objects that seems to take very long. Here are my steps (try/catch blocks and so on removed):
JSONObject feed = new JSONObject(big_string_from_feed);
JSONArray firstArray = feed.getJSONArray("key1");
JSONArray secondArray = feed.getJSONArray("key2");
[...]
And afterwards I go through all my arrays to get every element the following way:
for (int currentIndex =0;currentIndex<firstArray.length();currentIndex++){
JSONObject myObject = firstArray.getJSONObject(currentIndex);
[....]
}
Is there something wrong with the way I do this? Is there a better way to do it?
Thank you very much in advance.
If performance is a concern, use Jackson. See https://github.com/eishay/jvm-serializers/wiki for performance results. (These results should be updated soon to include Jackson manual/tree-strings processing, which will have performance somewhere between Jackson manual and Jackson databind-strings processing. Manual/tree-strings processing is the approach demonstrated in the original question.)
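As a rough sketch, Jackson's tree model maps closely onto the code in the question ("key1" is the question's placeholder key; the other names are illustrative):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper mapper = new ObjectMapper();            // create once and reuse
JsonNode feed = mapper.readTree(big_string_from_feed);

JsonNode firstArray = feed.get("key1");
for (JsonNode element : firstArray) {
    // ... read fields, e.g. element.get("someField").asText()
}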
Look at json-simple (see http://code.google.com/p/json-simple). It provides SAX style parsing of JSON streams, and is faster.