I have an array of precomputed intensity values (computed using a fuzzy logic inference system on a desktop machine). Now I want to use this array as a lookup table for a contrast-enhancement application on Android, using RenderScript.
What I want to do, at a high level, is process every pixel in an image and, using the lookup table, create a new image where the pixel at the corresponding position has the value looked up in the array. Before I start looking at how to implement this, is it even feasible?
Yes, it is feasible and this is something RS can handle with no problems. You'll need to provide your RS "kernel" with the pre-computed array data as either a separate Allocation or just a data array.
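For the common case where the table maps each 8-bit channel value to a new 8-bit value, you may not even need a custom kernel: the framework ships a ScriptIntrinsicLUT for exactly this. A minimal sketch, assuming your precomputed table is an int[256] (the names lut and enhance are mine):

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.ScriptIntrinsicLUT;

    Bitmap enhance(Context ctx, Bitmap src, int[] lut) {
        RenderScript rs = RenderScript.create(ctx);
        ScriptIntrinsicLUT sLut = ScriptIntrinsicLUT.create(rs, Element.U8_4(rs));
        // Apply the same precomputed intensity mapping to each color channel
        // (alpha is left at the identity default).
        for (int i = 0; i < 256; i++) {
            sLut.setRed(i, lut[i]);
            sLut.setGreen(i, lut[i]);
            sLut.setBlue(i, lut[i]);
        }
        Allocation in = Allocation.createFromBitmap(rs, src);
        Allocation out = Allocation.createTyped(rs, in.getType());
        sLut.forEach(in, out); // per-pixel table lookup, run by RS in parallel
        Bitmap result = Bitmap.createBitmap(src.getWidth(), src.getHeight(), src.getConfig());
        out.copyTo(result);
        return result;
    }

If your lookup depends on more than the single channel value (for example on neighborhood statistics), you'd write a custom kernel instead and bind the table as a separate Allocation, as described above.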
This talk will help get you started: https://youtu.be/3ynA92x8WQo
One string is about 200 bytes, and each day I store an array of 10~20 of them (that is, an array of 10-20 strings of 200 bytes each).
I have found a way to convert an array to a string and store it in SQLite.
However, I don't know whether this is a good idea, because the resulting string is large.
1. For large arrays of strings, is it a good idea to store the array as a single string?
2. Or is there a better way?
I would appreciate any advice. Thank you.
You're actually placing your concern onto the wrong part of your database design.
For SQLite, the maximum length of a String is 1 billion bytes, so your 10-20 strings of 200 bytes each aren't actually considered large.
There's really no harm in storing your array as a single long String in your database, especially when it's nowhere near a String's maximum limit.
Your query time won't grow just because the String is long. The real concern is the processing you'll do on that String to turn it back into an array: if the String is extremely long, the performance hit comes when you're flattening the array into a String and when you're transforming that String back into an array.
However, this is typically the kind of work you'd show your users a loading indicator for.
For storing an array in a database, there are really only two ways to do so:
Flatten the array into a single String and store the String as TEXT
Create a table meant to store the individual elements of the array, and include a column for a Foreign Key that associates those rows with the same array. Then store each element of your String arrays as a row in this table.
Depending on what you need, one design is better than the other.
For example, you would normally prefer the second implementation if your app requires you to constantly edit individual elements of an array.
For such an example, it wouldn't make much sense to use the first solution, because every time you want to edit the contents of an array, you'd have to fetch back the complete array in its entirety. This is impractical when you only want to fetch or edit a particular portion of it.
Therefore, in such an example, it is much more practical to store the individual elements of the arrays into individual rows of a Table meant for this type of data. You'll be querying only the row you want and updating only the row you want.
So to answer your questions: there are really only two ways to store your String array in your SQLite database. Both work, and the real concern is which solution fits your needs best.
If your app only requires you to store and fetch the array in its entirety each time, then the single-String method might be preferable.
But if your app requires you to work with individual elements of your array, then using the table method would be more convenient.
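As a rough illustration of the two designs (the table and column names here are made up): the first flattens the array with a delimiter-safe format such as JSON, the second uses a child table keyed back to the parent row.

    import org.json.JSONArray;
    import org.json.JSONException;

    // Option 1: flatten the array into one TEXT column.
    String flatten(String[] items) {
        return new JSONArray(java.util.Arrays.asList(items)).toString();
    }

    String[] unflatten(String text) throws JSONException {
        JSONArray arr = new JSONArray(text);
        String[] items = new String[arr.length()];
        for (int i = 0; i < items.length; i++) items[i] = arr.getString(i);
        return items;
    }

    // Option 2: one row per element, linked back to its array by a foreign key.
    static final String CREATE_ELEMENTS =
        "CREATE TABLE elements (" +
        "  _id INTEGER PRIMARY KEY AUTOINCREMENT," +
        "  array_id INTEGER NOT NULL REFERENCES arrays(_id)," +
        "  position INTEGER NOT NULL," + // preserves element order
        "  value TEXT NOT NULL)";

JSON (rather than a hand-rolled delimiter) avoids the classic bug where an element itself contains your separator character.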
I'm trying to use the model from this tutorial in an Android app. I wanted to modify DetectorActivity.java and TensorFlowMultiBoxDetector.java found here, but it seems like I'm missing some parameters, such as imageMean, imageStd, inputName, outputLocationsName and outputScoresName.
From what I understand, inputName is the name of the model's input, and the two output names are for the position and score outputs, but what do imageMean and imageStd stand for?
I don't need to use the model with a camera, I just need to detect objects on bitmaps.
Your understanding of the input/output names seems correct. They are TensorFlow node names that receive the input and will contain the outputs at the end. imageMean and imageStd are used to scale the RGB values of the image to a mean of 0 and a standard deviation of 1. See the 8 lines starting from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/TensorFlowMultiBoxDetector.java#L208
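In other words, the preprocessing in the linked code looks roughly like this (a sketch, not a verbatim copy; the exact imageMean/imageStd values depend on how the network was trained and are defined as constants in the demo):

    // Converts ARGB pixels into the float tensor the network expects,
    // shifting each channel by imageMean and dividing by imageStd.
    static float[] normalize(int[] argbPixels, float imageMean, float imageStd) {
        float[] floatValues = new float[argbPixels.length * 3];
        for (int i = 0; i < argbPixels.length; ++i) {
            int val = argbPixels[i];
            floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd; // R
            floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;  // G
            floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;         // B
        }
        return floatValues;
    }

Since this step works on an int[] of pixels, it applies equally well to a Bitmap (via getPixels()) as to camera frames.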
The TensorFlow android demo app you are referring to has been updated. It now supports MobileNets. Check it out at github: commit 53aabd5cb0ffcc1fd33cbd00eb468dd8d8353df2.
Background:
The goal is to write a rather large (at least 2048 x 2048 pixels) image file with OpenGL rendered data.
Today I first use glReadPixels in order to get the 32-bit (argb8888) pixel data into an int array.
Then I copy the data into a new short array, converting the 32-bit argb values into 16-bit (rgb565) values. At this point I also turn the image upside down and change the color order to make the opengl-image data compatible with android bitmap data (different row order and color channel order).
Finally I create a Bitmap instance and call copyPixelsFromBuffer(Buffer b) so that I can save it to disk as a PNG file.
However, I want to use memory more efficiently in order to avoid out-of-memory crashes on some phones.
Question:
Can I skip the first transformation from int[] -> short[] in some way (and avoid the allocation of a new array for pixel data)? Maybe just use byte arrays / buffers and write the converted pixels to the same array I read from...
More important: can I skip the bitmap creation (this is where the program crashes) and somehow write the data directly to disk as a working image file (avoiding allocating the pixel data again in the bitmap object)?
EDIT: If I could write the data directly to file, maybe I don't need to convert to 16-bit pixel data, depending on the file size and how fast the file can be read into memory at a later point.
I'm not sure whether this helps, but the PNGJ library allows you to write a PNG sequentially, line by line. If memory usage is your primary concern (and if you can access the pixel values in the order of the final PNG file from the rendered data), it could be useful.
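A minimal sketch of that approach, based on my reading of PNGJ's row-oriented API (untested, so treat the details as assumptions):

    import ar.com.hjg.pngj.ImageInfo;
    import ar.com.hjg.pngj.ImageLineInt;
    import ar.com.hjg.pngj.PngWriter;
    import java.io.FileOutputStream;
    import java.io.IOException;

    void writePng(String path, int width, int height) throws IOException {
        ImageInfo info = new ImageInfo(width, height, 8, false); // 8-bit RGB, no alpha
        PngWriter png = new PngWriter(new FileOutputStream(path), info);
        ImageLineInt line = new ImageLineInt(info);
        int[] scanline = line.getScanline(); // R,G,B triplets for one row
        for (int y = 0; y < height; y++) {
            // Fill 'scanline' for row y here, e.g. from a glReadPixels call
            // restricted to a single row. OpenGL's row order is bottom-up,
            // so read row (height - 1 - y) to flip the image while writing.
            png.writeRow(line, y);
        }
        png.end();
    }

Only one row is ever held in memory at a time, so both the full-size conversion array and the Bitmap disappear, and the 32-bit-to-16-bit conversion becomes unnecessary.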
I recently created a program that fetches medium-to-large amounts of XML data and converts it into arrays of Strings, then displays the data.
The program works great, but it freezes while it is building the arrays (for around 16 seconds, depending on the size).
Is there any way I can optimize my program (alternatives to String arrays, etc.)?
3 optimizations that should help:
Threading
If the program freezes it most likely means that you're not using a separate thread to process the large XML file. This means that your app has to wait until this task finishes to respond again.
Instead, create a new thread to process the XML and notify the main thread via a Handler when it's done, or use AsyncTask. This is explained in more detail here.
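A minimal sketch of the AsyncTask route (parseXml(), hideLoadingIndicator() and displayData() are placeholders for your own parsing and display code):

    // Runs the parsing off the main thread and delivers the result back on it.
    new AsyncTask<String, Void, String[]>() {
        @Override
        protected String[] doInBackground(String... xml) {
            return parseXml(xml[0]); // your existing parsing code
        }

        @Override
        protected void onPostExecute(String[] result) {
            hideLoadingIndicator();
            displayData(result); // safe: runs on the UI thread
        }
    }.execute(xmlString);
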
Data storage
Additionally, a local SQLite database might be more appropriate for storing large amounts of data, especially if you don't have to show it all at once. This can be achieved with the cursors provided by the platform.
Configuration changes
Finally, make sure that your data doesn't have to be reconstructed when a configuration change occurs (such as an orientation change). A persistent SQLite database can help with that, and also these methods.
You can use SAX to process the XML as a stream, rather than trying to parse the whole file and generate a DOM in memory.
If you find that you really are using too much memory, and you have a reason to keep the strings in memory rather than caching them on disk, there are certainly ways you can reduce the memory requirements. It's a sad fact that Java strings use a lot of space. They require two objects (the string itself and an underlying char array) and use two bytes per char. If your data is mostly 7-bit ASCII, you may be better off leaving it as a UTF-8 encoded byte stream, using 1 byte per character in the typical case.
A very effective scheme is to maintain an array of 32k byte buffers, and append the UTF-8 representation of each new string onto the first empty space in one of those arrays. Your reference to the string becomes a simple integer: PTR = (buffer index * 32k) + (buffer offset). "PTR/32k" yields the index of the desired byte buffer, and "PTR % 32k" yields the location within the buffer. Use either an initial length byte or a null terminator to keep track of how long the string is. When you need to access one of the strings, don't allocate a new String object: unpack it into a mutable StringBuilder or work directly with the UTF-8 byte representation.
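A sketch of that scheme (my own illustration of the description above, not tested code; it uses a 2-byte length prefix rather than a null terminator, and assumes each string fits within one buffer):

    // Packs UTF-8 strings into 32 KB buffers; a string is addressed by one
    // int: PTR = (buffer index * 32k) + (offset within that buffer).
    class StringPool {
        private static final int BUF = 32 * 1024;
        private final java.util.List<byte[]> buffers = new java.util.ArrayList<>();
        private int used = BUF; // forces allocation of the first buffer

        int add(String s) {
            byte[] utf8 = s.getBytes(java.nio.charset.StandardCharsets.UTF_8);
            if (used + 2 + utf8.length > BUF) { // doesn't fit: start a new buffer
                buffers.add(new byte[BUF]);
                used = 0;
            }
            byte[] buf = buffers.get(buffers.size() - 1);
            int ptr = (buffers.size() - 1) * BUF + used;
            buf[used] = (byte) (utf8.length >> 8);     // length prefix, high byte
            buf[used + 1] = (byte) utf8.length;        // length prefix, low byte
            System.arraycopy(utf8, 0, buf, used + 2, utf8.length);
            used += 2 + utf8.length;
            return ptr;
        }

        String get(int ptr) {
            byte[] buf = buffers.get(ptr / BUF);   // which buffer
            int off = ptr % BUF;                   // where in that buffer
            int len = ((buf[off] & 0xFF) << 8) | (buf[off + 1] & 0xFF);
            // Decoded to a String here for simplicity; to avoid the allocation
            // entirely, work with the bytes or a reusable StringBuilder instead.
            return new String(buf, off + 2, len, java.nio.charset.StandardCharsets.UTF_8);
        }
    }
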
The above approach is obviously a lot more work, but can save you between a factor of 2 and 6 in memory usage (depending on the length of your strings). However, you should beware of premature optimization. If your problem is with the processing time to parse your input, or is somewhere else in your program, you could find that you've done a lot of work to fix something that isn't your bottleneck and thus get no improvement at all.
I am wondering how I would be able to run a SQLite ORDER BY in this manner:
select * from contacts order by jarowinkler(contacts.name,'john smith');
I know Android has a bottleneck with user-defined functions. Do I have an alternative?
Step #1: Do the query minus the ORDER BY portion
Step #2: Create a CursorWrapper that wraps your Cursor, calculates the Jaro-Winkler distance for each position, sorts the positions, then uses the sorted positions when overriding all methods that require a position (e.g., moveToPosition(), moveToNext()).
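A rough sketch of step #2, showing only the core position mapping (the Jaro-Winkler implementation is assumed to come from you or a library; here I use Apache Commons Text's JaroWinklerSimilarity):

    import java.util.Arrays;
    import java.util.Comparator;
    import android.database.Cursor;
    import android.database.CursorWrapper;

    public class JaroWinklerSortedCursor extends CursorWrapper {
        private final Integer[] order; // sorted underlying row positions
        private int pos = -1;

        public JaroWinklerSortedCursor(Cursor c, String target, int nameColumn) {
            super(c);
            final double[] score = new double[c.getCount()];
            order = new Integer[c.getCount()];
            for (int i = 0; i < order.length; i++) {
                c.moveToPosition(i);
                score[i] = jaroWinkler(c.getString(nameColumn), target);
                order[i] = i;
            }
            Arrays.sort(order, new Comparator<Integer>() {
                @Override public int compare(Integer a, Integer b) {
                    return Double.compare(score[b], score[a]); // best match first
                }
            });
        }

        @Override public boolean moveToPosition(int position) {
            pos = position;
            if (position < 0 || position >= order.length) return super.moveToPosition(position);
            return super.moveToPosition(order[position]); // remap to sorted order
        }
        @Override public boolean moveToNext() { return moveToPosition(pos + 1); }
        @Override public boolean moveToFirst() { return moveToPosition(0); }
        @Override public int getPosition() { return pos; }
        // moveToPrevious(), moveToLast(), isAfterLast(), etc. need the same remapping.

        private static double jaroWinkler(String a, String b) {
            return new org.apache.commons.text.similarity.JaroWinklerSimilarity().apply(a, b);
        }
    }
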
Pre-calculate string lengths and add them to a separate column, then sort the entire table by that length. Add indexes if you can. Then add extra filters: for example, you don't want to compare "Srivastava Brahmaputra" to "John Smith". The lengths are too far apart, so exclude such comparisons by length as a percentage of the total length. If your word is 10 characters, compare it only to words of 10±2 or 10±3 characters.
This way you will significantly reduce the number of times this algorithm needs to run.
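For example, the length window can be pushed into the query itself, so most rows never reach the Java-side comparison at all (the precomputed name_len column is my own naming):

    // Only fetch candidates whose precomputed length is within ±2 of the target.
    int len = "john smith".length();
    Cursor c = db.rawQuery(
        "SELECT * FROM contacts WHERE name_len BETWEEN ? AND ?",
        new String[] { String.valueOf(len - 2), String.valueOf(len + 2) });
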
Typically, in a vocabulary of 100,000 entries, such filters reduce the number of comparisons to about 300. Unless you are doing full-blown record linkage, and then I would wonder why use Android for that: you would still need to apply probabilistic methods and calculate scores, and that is not a job for Android (at least not for now).
Also, in MS SQL Server, a Jaro-Winkler string distance wrapped in a CLR function performs much better, since SQL Server doesn't support arrays natively and much of the processing revolves around arrays. An implementation in T-SQL adds too much overhead, but SQL-CLR works extremely fast.