Splitting a byte[] into multiple byte[] arrays - android

I am trying to "chunk" the bytes of an image so that I can upload a large image as several byte arrays. The image is currently stored as one large byte[], and I would like to split it into byte[] chunks of 5 MB each.

public static byte[][] divideArray(byte[] source, int chunksize) {
    byte[][] ret = new byte[(int) Math.ceil(source.length / (double) chunksize)][chunksize];
    int start = 0;
    int parts = 0;
    for (int i = 0; i < ret.length; i++) {
        if (start + chunksize > source.length) {
            System.arraycopy(source, start, ret[i], 0, source.length - start);
        } else {
            System.arraycopy(source, start, ret[i], 0, chunksize);
        }
        start += chunksize;
        parts++;
    }
    Log.d("Parts", parts + "");
    return ret;
}
Call it with:
divideArray(common.fullyReadFileToBytes(wallpaperDirectory), 5 * 1024 * 1024);

You can use Arrays.copyOfRange for that:
T[] copyOfRange(T[] original, int from, int to)
In your case, something like this:
byte[] chunk = Arrays.copyOfRange(original, 0, 5_000_000);
make sure you calculate the offset:
class test {
    // this is just for dummy data
    public static byte[] getTestBytes() {
        byte[] largeByteArray = new byte[50_000_000];
        for (int i = 0; i < 50_000_000; i++) {
            largeByteArray[i] = 0;
        }
        return largeByteArray;
    }

    // this method splits your byte array into small portions
    // and returns a list with those portions
    public static List<byte[]> byteToPortions(byte[] largeByteArray) {
        // create a list to keep the portions
        List<byte[]> byteArrayPortions = new ArrayList<>();
        // 5 MB is about 5,000,000 bytes
        int sizePerPortion = 5_000_000;
        int offset = 0;
        // split the array
        while (offset < largeByteArray.length) {
            // into 5 MB portions
            byte[] portion = Arrays.copyOfRange(largeByteArray, offset, offset + sizePerPortion);
            // update the offset to increment the copied area
            offset += sizePerPortion;
            // add the byte array portion to the list
            byteArrayPortions.add(portion);
        }
        // return portions
        return byteArrayPortions;
    }

    // create your byte array, and split it to portions
    public static void main(String[] args) {
        byte[] largeByteArray = getTestBytes();
        List<byte[]> portions = byteToPortions(largeByteArray);
        // work with your portions
    }
}
Something cool: the to index does not have to lie inside the array; copyOfRange checks that for you without throwing. Note, though, that it pads the result with zeros up to the requested length rather than returning a shorter array.
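A minimal sketch of that padding behaviour, using only java.util.Arrays (the values are made up for illustration):

import java.util.Arrays;

public class CopyOfRangeDemo {
    public static void main(String[] args) {
        byte[] original = {1, 2, 3};
        // 'to' (6) lies past the end of the array: no exception is thrown,
        // the copy is simply zero-padded to the requested length of 4
        byte[] chunk = Arrays.copyOfRange(original, 2, 6);
        System.out.println(Arrays.toString(chunk)); // [3, 0, 0, 0]
    }
}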

These answers work, but they have an allocation issue: they allocate a full chunk for the last portion regardless of how many bytes actually remain to be copied.
Here is a solution that avoids that:
import kotlin.math.min

private fun divideDataIntoChunks(source: ByteArray, chunkSize: Int): kotlin.collections.ArrayList<ByteArray> {
    val result: ArrayList<ByteArray> = ArrayList()
    if (chunkSize <= 0) {
        result.add(source)
    } else {
        for (chunk in source.indices step chunkSize) {
            result.add(source.copyOfRange(chunk, min(chunk + chunkSize, source.size)))
        }
    }
    return result
}

Related

Integrating im2txt model in Android phone

I am new to TensorFlow and cannot find the solution to these questions.
How can I retrain the im2txt model on my new dataset so that the dataset the im2txt model was originally trained on is not lost, i.e. my new dataset is added to the MSCOCO dataset to caption the new images (training dataset = MSCOCO dataset + my new dataset)? Please share the detailed procedure and the problems I might face while retraining.
I have found the TensorFlow tutorial for running the Inception V3 model on Android with real-time data. Can the same method be applied to the im2txt model, i.e. can it be made to caption an image taken from a mobile phone in real time? Please share the detailed steps for how to do this.
After weeks of struggle I was able to run and execute the im2txt model on Android.
Since I found the solutions in different blogs and different questions and answers, I felt it might be useful to have them all in one place, so I am sharing the steps I followed.
You need to clone the TensorFlow project (https://github.com/tensorflow/tensorflow/releases/tag/v1.5.0) in order to freeze the graph and for some other utilities.
Download the pretrained im2txt model from https://github.com/KranthiGV/Pretrained-Show-and-Tell-model
Following the steps described in the above link, I was able to run inference and generate captions on a Linux desktop successfully, after renaming some variables in the graph (to overcome errors like "NotFoundError (see above for traceback): Key lstm/basic_lstm_cell/bias not found in checkpoint").
Now we need to freeze the existing model to obtain a frozen graph that can be used on Android/iOS.
From the cloned TensorFlow project, you can freeze the graph of any model with freeze_graph.py (tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py) by giving the following command.
An example of command-line usage is:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_graph=some_graph_def.pb \
  --input_checkpoint=model.ckpt-8361242 \
  --output_graph=/tmp/frozen_graph.pb \
  --output_node_names=softmax \
  --input_binary=true
We need to supply all the output_node_names required to run the model. From "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_wrapper.py" we can list the output node names as 'softmax', 'lstm/initial_state' and 'lstm/state'.
When I ran the freeze_graph command supplying the output node names 'softmax', 'lstm/initial_state' and 'lstm/state', I got the error "AssertionError: softmax is not in the graph".
From the answers by Steph and Jeff Tang to "How to freeze an im2txt model?":
The current model ckpt.data, ckpt.index and ckpt.meta files and a graph.pbtxt should be loaded in inference mode (see InferenceWrapper in im2txt). It builds a graph with the correct names 'softmax', 'lstm/initial_state' and 'lstm/state'. You save this graph (with the same ckpt format) and then you can apply the freeze_graph script to obtain the frozen model.
To do this, in Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\inference_wrapper_base.py, just add something like saver.save(sess, "model/ckpt4") after saver.restore(sess, checkpoint_path) in def _restore_fn(sess):. Then rebuild, run run_inference, and you'll get a model that can be frozen, transformed, and optionally memmapped, to be loaded by iOS and Android apps.
Now I run the command as below
python tensorflow/python/tools/freeze_graph.py \
--input_meta_graph=/tmp/ckpt4.meta \
--input_checkpoint=/tmp/ckpt4 \
--output_graph=/tmp/ckpt4_frozen.pb \
--output_node_names="softmax,lstm/initial_state,lstm/state" \
--input_binary=true
and loaded the obtained ckpt4_frozen.pb file into the Android application and got the error
"java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs. Registered devices: [CPU], Registered kernels:
[[Node: decode/DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]"
From https://github.com/tensorflow/tensorflow/issues/2883
Since DecodeJpeg isn't supported as part of the Android tensorflow core, you'll need to strip it out of the graph first
bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
  --input_graph=ckpt4_frozen.pb \
  --output_graph=ckpt4_frozen_stripped_graph.pb \
  --input_node_names=convert_image/Cast,input_feed,lstm/state_feed \
  --output_node_names=softmax,lstm/initial_state,lstm/state \
  --input_binary=true
When I tried to load ckpt4_frozen_stripped_graph.pb on Android I faced errors, so I followed Jeff Tang's answer (Error using Model after using optimize_for_inference.py on frozen graph).
Instead of tools:strip_unused I used the Graph Transform Tool:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
  --in_graph=/tmp/ckpt4_frozen.pb \
  --out_graph=/tmp/ckpt4_frozen_transformed.pb \
  --inputs="convert_image/Cast,input_feed,lstm/state_feed" \
  --outputs="softmax,lstm/initial_state,lstm/state" \
  --transforms='
    strip_unused_nodes(type=float, shape="1,299,299,3")
    fold_constants(ignore_errors=true)
    fold_batch_norms
    fold_old_batch_norms'
I was able to load the obtained ckpt4_frozen_transformed.pb on Android successfully.
I supply the input as a float array of RGB image pixels to the input node "convert_image/Cast" and fetch the output from the "lstm/initial_state" node successfully.
Now the challenge is to understand the beam search in "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\caption_generator.py" and implement the same on the Android side.
If you look at the Python script caption_generator.py at
softmax, new_states, metadata = self.model.inference_step(sess, input_feed, state_feed)
input_feed is an int32 array and state_feed is a multidimensional float array.
On the Android side I tried feeding an int32 array for "input_feed"; since there is no Java API to feed a multidimensional array, I fed the float array to "lstm/state_feed" as is, exactly as fetched previously from the "lstm/initial_state" node.
I got two errors: one is that input_feed expects int64, and the other is
"java.lang.IllegalArgumentException: -input rank(-1) <= split_dim < input rank (1), but got 1" at lstm/state_feed.
For the first error I changed the input_feed data type from int32 to int64.
The second error means the node expects a rank-two tensor.
If you look at the TensorFlow Java sources, the float array we are feeding is converted to a rank-one tensor; we would need to feed the data so that a rank-two tensor is created, but at present I did not find an exposed API to feed a multidimensional float array.
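One possible workaround is to flatten the 2-D state into a single float[] and pass its shape explicitly as dims, assuming the feed(String, float[], long... dims) overload used above for the image pixels accepts an arbitrary shape; the helper below is only a hypothetical sketch, not the route taken here:

// Hypothetical helper: flattens a [rows x cols] state array and feeds it with an
// explicit rank-two shape via the feed(String, float[], long... dims) overload.
private void feedStateAsRankTwo(TensorFlowInferenceInterface inference,
                                String nodeName, float[][] stateFeed) {
    int rows = stateFeed.length;
    int cols = stateFeed[0].length;
    float[] flat = new float[rows * cols];
    for (int r = 0; r < rows; r++) {
        // copy each state vector into the flat buffer, row by row
        System.arraycopy(stateFeed[r], 0, flat, r * cols, cols);
    }
    // dims {rows, cols} makes the underlying tensor rank two
    inference.feed(nodeName, flat, rows, cols);
}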
While browsing the TensorFlow Java source I found an API, not exposed as an Android API, with which a rank-two tensor can be created. So I rebuilt both libtensorflow_inference.so and libandroid_tensorflow_inference_java.jar, enabling the rank-two tensor creation call (for the build process refer to https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc).
Now I can run inference on Android and get one caption for an image, but the accuracy is very low.
The reason it is limited to one caption is that I did not find a way to fetch the outputs as a multidimensional array, which is required for generating more than one caption for a single image.
String actualFilename = labelFilename.split("file:///android_asset/")[1];
vocab = new Vocabulary(assetManager.open(actualFilename));
inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);
final Graph g = inferenceInterface.graph();
final Operation inputOperation = g.operation(inputName);
if (inputOperation == null) {
    throw new RuntimeException("Failed to find input Node '" + inputName + "'");
}
final Operation outPutOperation = g.operation(outputName);
if (outPutOperation == null) {
    throw new RuntimeException("Failed to find output Node '" + outputName + "'");
}
// The shape of the output is [N, NUM_CLASSES], where N is the batch size.
int numClasses = (int) inferenceInterface.graph().operation(outputName)
        .output(0).shape().size(1);
Log.i(TAG, "Read " + vocab.totalWords() + " labels, output layer size is " + numClasses);
// Ideally, inputSize could have been retrieved from the shape of the input operation. Alas,
// the placeholder node for input in the graphdef typically used does not specify a shape, so it
// must be passed in as a parameter.
this.inputSize = inputSize;
// Pre-allocate buffers.
outputNames = new String[]{outputName + ":0"};
outputs = new float[numClasses];

inferenceInterface.feed(inputName + ":0", pixels, inputSize, inputSize, 3);
inferenceInterface.run(outputNames, runStats);
inferenceInterface.fetch(outputName + ":0", outputs);
startIm2txtBeamSearch(outputs);
// Implemented beam search in Java
private void startIm2txtBeamSearch(float[] outputs) {
    int beam_size = 1;
    //TODO: Prepare vocab ids from file
    ArrayList<Integer> vocab_ids = new ArrayList<>();
    vocab_ids.add(1);
    int vocab_end_id = 2;
    float length_normalization_factor = 0;
    int maxCaptionLength = 20;
    Graph g = inferenceInterface.graph();
    // node input feed
    String input_feed_node_name = "input_feed";
    Operation inputOperation = g.operation(input_feed_node_name);
    if (inputOperation == null) {
        throw new RuntimeException("Failed to find input Node '" + input_feed_node_name + "'");
    }
    String output_feed_node_name = "softmax";
    Operation outPutOperation = g.operation(output_feed_node_name);
    if (outPutOperation == null) {
        throw new RuntimeException("Failed to find output Node '" + output_feed_node_name + "'");
    }
    int output_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
    Log.i(TAG, "Output layer " + output_feed_node_name + ", output layer size is " + output_feed_node_numClasses);
    FloatBuffer output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
    //float[][] output_feed_output = new float[numClasses][];
    // node state feed
    String input_state_feed_node_name = "lstm/state_feed";
    inputOperation = g.operation(input_state_feed_node_name);
    if (inputOperation == null) {
        throw new RuntimeException("Failed to find input Node '" + input_state_feed_node_name + "'");
    }
    String output_state_feed_node_name = "lstm/state";
    outPutOperation = g.operation(output_state_feed_node_name);
    if (outPutOperation == null) {
        throw new RuntimeException("Failed to find output Node '" + output_state_feed_node_name + "'");
    }
    int output_state_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
    Log.i(TAG, "Output layer " + output_state_feed_node_name + ", output layer size is " + output_state_feed_node_numClasses);
    FloatBuffer output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
    //float[][] output_state_output = new float[numClasses][];
    String[] output_nodes = new String[]{output_feed_node_name, output_state_feed_node_name};
    Caption initialBeam = new Caption(vocab_ids, outputs, (float) 0.0, (float) 0.0);
    TopN partialCaptions = new TopN(beam_size);
    partialCaptions.push(initialBeam);
    TopN completeCaption = new TopN(beam_size);
    captionLengthLoop:
    for (int i = maxCaptionLength; i >= 0; i--) {
        List<Caption> partialCaptionsList = new LinkedList<>(partialCaptions.extract(false));
        partialCaptions.reset();
        long[] input_feed = new long[partialCaptionsList.size()];
        float[][] state_feed = new float[partialCaptionsList.size()][];
        for (int j = 0; j < partialCaptionsList.size(); j++) {
            Caption curCaption = partialCaptionsList.get(j);
            ArrayList<Integer> senArray = curCaption.getSentence();
            input_feed[j] = senArray.get(senArray.size() - 1);
            state_feed[j] = curCaption.getState();
        }
        // feeding
        inferenceInterface.feed(input_feed_node_name, input_feed, new long[]{input_feed.length});
        inferenceInterface.feed(input_state_feed_node_name, state_feed, new long[]{state_feed.length});
        // run
        inferenceInterface.run(output_nodes, runStats);
        // fetching
        inferenceInterface.fetch(output_feed_node_name, output_feed_output);
        inferenceInterface.fetch(output_state_feed_node_name, output_state_output);
        float[] word_probabilities = new float[partialCaptionsList.size()];
        float[] new_state = new float[partialCaptionsList.size()];
        for (int k = 0; k < partialCaptionsList.size(); k++) {
            word_probabilities = output_feed_output.array();
            //output_feed_output.get(word_probabilities[k]);
            new_state = output_state_output.array();
            //output_feed_output.get(state[k]);
            // For this partial caption, get the beam_size most probable next words.
            Map<Integer, Float> word_and_probs = new LinkedHashMap<>();
            // key is index of probability; value is index = word
            for (int l = 0; l < word_probabilities.length; l++) {
                word_and_probs.put(l, word_probabilities[l]);
            }
            // sorting
            // word_and_probs = word_and_probs.entrySet().stream()
            //         .sorted(Map.Entry.comparingByValue())
            //         .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue, (e1, e2) -> e1, LinkedHashMap::new));
            word_and_probs = MapUtil.sortByValue(word_and_probs);
            // considering the first (beam_size) probabilities
            LinkedHashMap<Integer, Float> final_word_and_probs = new LinkedHashMap<>();
            for (int key : word_and_probs.keySet()) {
                final_word_and_probs.put(key, word_and_probs.get(key));
                if (final_word_and_probs.size() == beam_size)
                    break;
            }
            for (int w : final_word_and_probs.keySet()) {
                float p = final_word_and_probs.get(w);
                if (p < 1e-12) { // Avoid log(0).
                    Log.d(TAG, "p is < 1e-12");
                    continue;
                }
                Caption partialCaption = partialCaptionsList.get(k);
                ArrayList<Integer> sentence = new ArrayList<>(partialCaption.getSentence());
                sentence.add(w);
                float logprob = (float) (partialCaption.getProb() + Math.log(p));
                float score = logprob;
                Caption beam = new Caption(sentence, new_state, logprob, score);
                if (w == vocab_end_id) {
                    completeCaption.push(beam);
                } else {
                    partialCaptions.push(beam);
                }
            }
            if (partialCaptions.getSize() == 0) // ran out of partial candidates; happens when beam_size = 1.
                break captionLengthLoop;
        }
        // clear buffers to retrieve the subsequent output
        output_feed_output.clear();
        output_state_output.clear();
        output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
        output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
        Log.d(TAG, "---- " + i + " iteration completed ----");
    }
    Log.d(TAG, "---- Total iterations completed ----");
    LinkedList<Caption> completeCaptions = completeCaption.extract(true);
    for (Caption cap : completeCaptions) {
        ArrayList<Integer> wordids = cap.getSentence();
        StringBuffer caption = new StringBuffer();
        boolean isFirst = true;
        for (int word : wordids) {
            if (!isFirst)
                caption.append(" ");
            caption.append(vocab.getWord(word));
            isFirst = false;
        }
        Log.d(TAG, "Cap score = " + Math.exp(cap.getScore()) + " and Caption is " + caption);
    }
}
// Vocab
public class Vocabulary {
    String TAG = Vocabulary.class.getSimpleName();
    String start_word = "<S>", end_word = "</S>", unk_word = "<UNK>";
    ArrayList<String> words;

    public Vocabulary(File vocab_file) {
        loadVocabsFromFile(vocab_file);
    }

    public Vocabulary(InputStream vocab_file_stream) {
        words = readLinesFromFileAndLoadWords(new InputStreamReader(vocab_file_stream));
    }

    public Vocabulary(String vocab_file_path) {
        File vocabFile = new File(vocab_file_path);
        loadVocabsFromFile(vocabFile);
    }

    private void loadVocabsFromFile(File vocabFile) {
        try {
            this.words = readLinesFromFileAndLoadWords(new FileReader(vocabFile));
            //Log.d(TAG, "Words read from file = " + words.size());
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }

    private ArrayList<String> readLinesFromFileAndLoadWords(InputStreamReader file_reader) {
        ArrayList<String> words = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(file_reader)) {
            String line;
            while ((line = br.readLine()) != null) {
                // process the line.
                words.add(line.split(" ")[0].trim());
            }
            if (!words.contains(unk_word))
                words.add(unk_word);
        } catch (IOException e) {
            e.printStackTrace();
        }
        return words;
    }

    public String getWord(int word_id) {
        if (words != null)
            if (word_id >= 0 && word_id < words.size())
                return words.get(word_id);
        return "No word found, maybe vocab file not loaded";
    }

    public int totalWords() {
        if (words != null)
            return words.size();
        return 0;
    }
}
// MapUtil
public class MapUtil {
    public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) {
        List<Map.Entry<K, V>> list = new ArrayList<>(map.entrySet());
        list.sort(new Comparator<Map.Entry<K, V>>() {
            @Override
            public int compare(Map.Entry<K, V> o1, Map.Entry<K, V> o2) {
                if (o1.getValue() instanceof Float && o2.getValue() instanceof Float) {
                    Float o1Float = (Float) o1.getValue();
                    Float o2Float = (Float) o2.getValue();
                    return o1Float >= o2Float ? -1 : 1;
                }
                return 0;
            }
        });
        Map<K, V> result = new LinkedHashMap<>();
        for (Map.Entry<K, V> entry : list) {
            result.put(entry.getKey(), entry.getValue());
        }
        return result;
    }
}
// Caption
public class Caption implements Comparable<Caption> {
    private ArrayList<Integer> sentence;
    private float[] state;
    private float prob;
    private float score;

    public Caption(ArrayList<Integer> sentence, float[] state, float prob, float score) {
        this.sentence = sentence;
        this.state = state;
        this.prob = prob;
        this.score = score;
    }

    public ArrayList<Integer> getSentence() {
        return sentence;
    }

    public void setSentence(ArrayList<Integer> sentence) {
        this.sentence = sentence;
    }

    public float[] getState() {
        return state;
    }

    public void setState(float[] state) {
        this.state = state;
    }

    public float getProb() {
        return prob;
    }

    public void setProb(float prob) {
        this.prob = prob;
    }

    public float getScore() {
        return score;
    }

    public void setScore(float score) {
        this.score = score;
    }

    @Override
    public int compareTo(@NonNull Caption oc) {
        if (score == oc.score)
            return 0;
        if (score < oc.score)
            return -1;
        else
            return 1;
    }
}
// TopN
public class TopN {
    // Maintains the top n elements of an incrementally provided set.
    int n;
    LinkedList<Caption> data;

    public TopN(int n) {
        this.n = n;
        this.data = new LinkedList<>();
    }

    public int getSize() {
        if (data != null)
            return data.size();
        return 0;
    }

    // Pushes a new element
    public void push(Caption x) {
        if (data != null) {
            if (getSize() < n) {
                data.add(x);
            } else {
                data.removeLast();
                data.add(x);
            }
        }
    }

    // Extracts all elements from the TopN. This is a destructive operation.
    // The only method that can be called immediately after extract() is reset().
    // Args:
    //   sort: Whether to return the elements in descending sorted order.
    // Returns: A list of data; the top n elements provided to the set.
    public LinkedList<Caption> extract(boolean sort) {
        if (sort) {
            Collections.sort(data);
        }
        return data;
    }

    // Returns the TopN to an empty state.
    public void reset() {
        if (data != null) data.clear();
    }
}
Even though the accuracy is very low, I am sharing this because it might be useful for anyone trying to load the Show and Tell model on Android.

invalid conversion from 'byte*' to 'byte'

I have written this Arduino function:
byte receiveMessage(AndroidAccessory acc, boolean accstates){
  if(accstates){
    byte rcvmsg[255];
    int len = acc.read(rcvmsg, sizeof(rcvmsg), 1);
    if (len > 0) {
      if (rcvmsg[0] == COMMAND_TEXT) {
        if (rcvmsg[1] == TARGET_DEFAULT){
          byte textLength = rcvmsg[2];
          int textEndIndex = 3 + textLength;
          byte theMessage[textLength];
          int i=0;
          for(int x = 3; x < textEndIndex; x++) {
            theMessage[i]=rcvmsg[x];
            i++;
            delay(250);
          }
          return theMessage;
          delay(250);
        }
      }
    }
  }
}
This is the error:
In function 'byte receiveMessage(AndroidAccessory, boolean)': invalid conversion from 'byte*' to 'byte'
This function is meant to receive data from the Android device and return it as a byte array.
You need to use dynamic allocation, or pass the array to the function as a parameter, which is the better solution in your case:
void receiveMessage(AndroidAccessory acc, boolean accstates, byte *theMessage){
  if (theMessage == NULL)
    return;
  if(accstates){
    byte rcvmsg[255];
    int len = acc.read(rcvmsg, sizeof(rcvmsg), 1);
    if (len > 0) {
      if (rcvmsg[0] == COMMAND_TEXT) {
        if (rcvmsg[1] == TARGET_DEFAULT){
          byte textLength = rcvmsg[2];
          int textEndIndex = 3 + textLength;
          int i=0;
          for(int x = 3; x < textEndIndex; x++) {
            theMessage[i]=rcvmsg[x];
            i++;
            delay(250);
          }
          return;
        }
      }
    }
  }
}
With this, you call the function passing the array to it, for example:
byte theMessage[255];
receiveMessage(acc, accstates, theMessage);
/* here theMessage already contains the data you read in the function */
But you can't return a local variable, because the data is only valid in the scope where the variable is defined; in fact it's invalid right outside the if (rcvmsg[0] == COMMAND_TEXT) block, because you defined it local to that block.
Note: please read Wimmel's comment, or you could set the last byte to '\0' if it's just text, and then use the array as a string.
As far as the error is concerned, you are returning the wrong type: theMessage is a byte array, not a byte.
The other answer also explains why you can't return a pointer to a local variable.

Need help generating a unique request code for alarm

My app structure is like this: there are 1000 masjids/mosques and each masjid has been given a unique id (1, 2, 3, 4, ..., 1000). Each mosque has seven alarms associated with it, and I wish to generate a unique request code for each alarm so that they don't overlap each other.
Following is the code:
// note: the integer NamazType has range 0 to 5
public int generateRequestCodeForAlarm(int MasjidID, int NamazType) {
    return (MasjidID * 10) + NamazType;
}
Will this code work?
You can simply concatenate MasjidID and NamazType (or specifically the namaz ID). This will always be unique.
public int generateRequestCodeForAlarm(int MasjidID, int NamazType) {
    return Integer.parseInt(String.valueOf(MasjidID) + "" + NamazType);
}
Use the Random class. Try it like this:
// note: the integer NamazType has range 0 to 5
public int generateRequestCodeForAlarm(int MasjidID, int NamazType) {
    return (MasjidID * Math.abs(new Random().nextInt())) + NamazType;
}
It will work for sure.
public int generateRequestCodeForAlarm(int MasjidID, int NamazType) {
    return (MasjidID * 10) + NamazType;
}
Have a look at this
If MasjidID and NamazType are unique, then
Integer.parseInt( MasjidID + "" + NamazType );
would be enough to do the trick!
Example:
Masjid ID = 96, Namaz type = 1, Unique no = 961
MasjidId = 960, Namaz type = 1, Unique no = 9601
MasjidID = 999, Namaz type = 6, Unique no = 9996
I don't see any way in which it would get repeated. However, it is essentially the same as
(MasjidID * 10) + NamazType
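A quick sketch of why that formula cannot collide while NamazType stays a single digit: the encoding is invertible, so two different (MasjidID, NamazType) pairs always produce two different codes. The class name below is just for illustration:

public class RequestCodeDemo {
    static int encode(int masjidId, int namazType) {
        return masjidId * 10 + namazType; // namazType must stay in 0..9
    }

    public static void main(String[] args) {
        int code = encode(999, 6);  // 9996, matching the example above
        int masjidId = code / 10;   // 999 recovered
        int namazType = code % 10;  // 6 recovered
        System.out.println(code + " -> " + masjidId + ", " + namazType);
    }
}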
Irrespective of MasjidID and NamazType, if a random number needs to be generated, this can be used.
public class NoRepeatRandom {
    private int[] number = null;
    private int N = -1;
    private int size = 0;

    public NoRepeatRandom(int minVal, int maxVal) {
        N = (maxVal - minVal) + 1;
        number = new int[N];
        int n = minVal;
        for (int i = 0; i < N; i++)
            number[i] = n++;
        size = N;
    }

    public void Reset() { size = N; }

    // Returns -1 if none left
    public int GetRandom() {
        if (size <= 0) return -1;
        int index = (int) (size * Math.random());
        int randNum = number[index];
        // Swap current value with current last, so we don't actually
        // have to remove anything, and our list still contains everything
        // if we want to reset
        number[index] = number[size - 1];
        number[--size] = randNum;
        return randNum;
    }
}
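A short usage sketch of the NoRepeatRandom class above; the range 1 to 7000 is just an example sized for 1000 masjids with 7 alarms each:

NoRepeatRandom codes = new NoRepeatRandom(1, 7000);
int requestCode = codes.GetRandom(); // a value that has not been handed out yet
if (requestCode == -1) {
    codes.Reset(); // every value in the range has been used; start over
}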

Indexing Android

My problem is that I have around 1000+ records in an Android app:
string field1;
string field2;
string field3;
string field4;
//...
I want to search in this set of records and get the best results on two fields (field1 and field2).
Currently I read each record and compare() (string compare) it with the text I want to search for, which takes a long time.
What is the best method to perform search?
Store each record in an SQLite DB and do a "select ... where ... like" query (a rough sketch of this is shown below)
Use a HashMap
Or maybe create an index of the records and search that
Any other suggestions?
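For reference, the SQLite option might look roughly like this; the table and column names are made up, and the query does a simple substring match on the two fields:

public Cursor searchRecords(SQLiteDatabase db, String query) {
    String pattern = "%" + query + "%";
    // assumes a table "records" with columns "field1" and "field2"
    return db.rawQuery(
            "SELECT * FROM records WHERE field1 LIKE ? OR field2 LIKE ?",
            new String[]{pattern, pattern});
}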
If you want to search for non-exact matches, I would try to make an ArrayList of MyAppRecord, where
public class MyAppRecord {
    private String record;
    private int deviance;
}
and compute, for each record, its deviance from the String you want to find with:
public static int getLevenshteinDistance(String s, String t) {
    if (s == null || t == null) {
        throw new IllegalArgumentException("Strings must not be null");
    }
    int n = s.length(); // length of s
    int m = t.length(); // length of t
    if (n == 0) {
        return m;
    } else if (m == 0) {
        return n;
    }
    int p[] = new int[n + 1]; // 'previous' cost array, horizontally
    int d[] = new int[n + 1]; // cost array, horizontally
    int _d[];                 // placeholder to assist in swapping p and d
    // indexes into strings s and t
    int i;    // iterates through s
    int j;    // iterates through t
    char t_j; // jth character of t
    int cost; // cost
    for (i = 0; i <= n; i++) {
        p[i] = i;
    }
    for (j = 1; j <= m; j++) {
        t_j = t.charAt(j - 1);
        d[0] = j;
        for (i = 1; i <= n; i++) {
            cost = s.charAt(i - 1) == t_j ? 0 : 1;
            // minimum of cell to the left+1, to the top+1, diagonally left and up +cost
            d[i] = Math.min(Math.min(d[i - 1] + 1, p[i] + 1), p[i - 1] + cost);
        }
        // copy current distance counts to 'previous row' distance counts
        _d = p;
        p = d;
        d = _d;
    }
    // our last action in the above loop was to switch d and p, so p now
    // actually has the most recent cost counts
    return p[n];
}
Save it to your MyAppRecord object and finally sort your ArrayList by the deviance of its MyAppRecord objects.
Note that this could take some time, depending on your set of records. And note that there is no way of telling whether dogA or dogB ends up at a certain position in your list by searching for dog.
Read up on the Levenshtein distance to get a feeling for how it works. You may get the idea of filtering out strings that are too long or too short to reach a distance within a threshold you may have.
It is also possible to copy "good enough" results to a different ArrayList.
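A minimal sketch of that flow, assuming MyAppRecord also gets getRecord(), getDeviance() and setDeviance() accessors (not shown in the stripped-down class above):

private List<MyAppRecord> rankRecords(List<MyAppRecord> records, String query) {
    for (MyAppRecord r : records) {
        // deviance = edit distance between this record and the search text
        r.setDeviance(getLevenshteinDistance(r.getRecord(), query));
    }
    // smallest deviance first, i.e. closest matches at the top
    Collections.sort(records, new Comparator<MyAppRecord>() {
        @Override
        public int compare(MyAppRecord a, MyAppRecord b) {
            return Integer.compare(a.getDeviance(), b.getDeviance());
        }
    });
    return records;
}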

Android update array of bitmaps / Queue

I need some advice on how to implement this situation in my application.
I have an array of bitmaps which I'm using to store different states of my Canvas, so I can use them in the future. Here is the code I'm using:
private Bitmap[] temp;

// on user click happens this ->
if (index < 5) {
    temp[index] = Bitmap.createBitmap(mBitmap);
    index++;
}
So basically I want to save only the last 5 bitmaps, depending on the user's actions. What I want to learn is how to update my array so that I always keep the last 5 bitmaps.
Here is what I mean :
Bitmaps [1,2,3,4,5] -> after the user clicks I want to delete the first bitmap, shift the array and save the new one as the last, so my array should look like this: Bitmaps [2,3,4,5,6].
Any suggestions / advices which is the best way to do that?
Thanks in advance!
I just wrote this...
Use this code to initialise it:
Cacher cach = new Cacher(5);
// when you want to add a bitmap
cach.add(yourBitmap);
// get the i'th bitmap using
cach.get(yourIndex);
Remember you can re-implement the get function to return the i'th "oldest" Bitmap.
public class Cacher {

    private Bitmap[] temp;
    private long[] time;
    private int max = 5;

    public Cacher(int max) {
        this.max = max;
        temp = new Bitmap[max];
        time = new long[max];
        for (int i = 0; i < max; i++)
            time[i] = -1;
    }

    public void add(Bitmap mBitmap) {
        int index = getIndexForNew();
        temp[index] = Bitmap.createBitmap(mBitmap);
        // remember when this slot was written so the oldest one is reused next
        time[index] = System.currentTimeMillis();
    }

    public Bitmap get(int i) {
        if (time[i] == -1)
            return null;
        else
            return temp[i];
    }

    // returns a free slot if there is one, otherwise the slot holding the oldest bitmap
    private int getIndexForNew() {
        int minimum = 0;
        long value = time[minimum];
        for (int i = 0; i < max; i++) {
            if (time[i] == -1)
                return i;
            else {
                if (time[i] < value) {
                    minimum = i;
                    value = time[minimum];
                }
            }
        }
        return minimum;
    }
}
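For comparison, the rolling "keep only the last five" behaviour described in the question can also be had with a plain java.util.ArrayDeque and no timestamps; this is only a sketch of that alternative, not a replacement for the Cacher above:

private final Deque<Bitmap> lastBitmaps = new ArrayDeque<>(); // java.util.Deque / java.util.ArrayDeque
private static final int MAX_KEPT = 5;

// on user click
private void saveState(Bitmap mBitmap) {
    if (lastBitmaps.size() == MAX_KEPT) {
        // drop the oldest snapshot so only the last five remain
        lastBitmaps.removeFirst();
    }
    lastBitmaps.addLast(Bitmap.createBitmap(mBitmap));
}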
