In Android I am looping through the database and assigning text and an image:
Cursor res = myDb.getAllData();
while (res.moveToNext()) {
    Actors actor = new Actors();
    actor.setName(res.getString(1));
    String th = res.getString(11);
    // Resolve the drawable name stored in the database to a resource ID.
    Integer thumb = this.getResources().getIdentifier(th, "drawable", "mypackage");
    actor.setThumb(thumb);
}
However, Lint suggests not to use getIdentifier: "Use of this function is discouraged because resource reflection makes it harder to perform build optimizations and compile-time verification of code."
In the database column I have just the image name (a string). How can I replace getIdentifier?
Even if I change the DB column to store R.drawable.imagename directly, it is still a string, and for setThumb I need a drawable resource ID.
OK, so the only solution I've found is here: https://stackoverflow.com/a/4428288/1345089
public static int getResId(String resName, Class<?> c) {
    try {
        Field idField = c.getDeclaredField(resName);
        return idField.getInt(idField); // static field, so the instance argument is ignored
    } catch (Exception e) {
        e.printStackTrace();
        return -1;
    }
}
and then just calling:
int resID = getResId("icon", R.drawable.class);
This works very well; however, some users report that after installing the app from the Play Store (not my app, but any app with this method implemented and ProGuard enabled), it starts throwing NoSuchFieldException after a while, as more resources are mapped to the same class/field.
It may be caused by ProGuard, but I am not sure. The solution is then to fall back to the old getResources().getIdentifier() call in the exception-handling branch, as sketched below.
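For reference, here is a minimal sketch of that combined approach (the Context parameter and method shape are illustrative, not from the original answer):
public static int getResId(Context context, String resName) {
    try {
        // Fast path: look the name up in the generated R.drawable class.
        Field idField = R.drawable.class.getDeclaredField(resName);
        return idField.getInt(null); // static field, so no instance is needed
    } catch (Exception e) {
        // Fallback: the "old way" via getIdentifier(), in case the field
        // was renamed or merged (e.g. by ProGuard/R8).
        return context.getResources().getIdentifier(
                resName, "drawable", context.getPackageName());
    }
}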
You can try this approach:
Rename your resources as follows:
resourceName00
resourceName01
resourceName02 .... and so on,
then use the method below:
for (int i = 0; i < resourceQty; i++) {
    Uri path1 = Uri.parse("android.resource://your.package.name/drawable/resourceName0" + i);
    list.add(new CarouselItem(String.valueOf(path1)));
}
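A Uri built this way can then be set on an ImageView directly, for example (a minimal usage sketch; the package name and imageView are placeholders):
// "com.example.app" stands in for your actual application package.
Uri path = Uri.parse("android.resource://com.example.app/drawable/resourceName00");
imageView.setImageURI(path);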
You can try my code in one of these three ways.
// Show message and quit
Application app = cordova.getActivity().getApplication();
String package_name = app.getPackageName();
Resources resources = app.getResources();
String message = resources.getString(resources.getIdentifier("message", "string", package_name));
String label = resources.getString(resources.getIdentifier("label", "string", package_name));
this.alert(message, label);
Set the icon like this:
private int getIconResId() {
    Context context = getApplicationContext();
    Resources res = context.getResources();
    String pkgName = context.getPackageName();
    return res.getIdentifier("icon", "drawable", pkgName);
}
And this also works:
//set icon image
final String prefix = "ic_";
String icon_id = prefix + cursor.getString(cursor.getColumnIndex(WeatherEntry.COLUMN_ICON));
Resources res = mContext.getResources();
int resourceId = res.getIdentifier(icon_id, "drawable", mContext.getPackageName());
viewHolder.imgIcon.setImageResource(resourceId);
I hope this code helps you:
public static int getIcId(String resN, Class<?> c) {
    try {
        Field idF = c.getDeclaredField(resN);
        return idF.getInt(idF);
    } catch (Exception e) {
        throw new RuntimeException("No resource ID found for: "
                + resN + " / " + c, e);
    }
}
In my Android project, when I switch the language to Arabic, the call log page does not display data, but when I switch to other languages (such as English) it displays properly. How can I solve this? Please see the following information.
1. A part of the code for the callLog adapter is as follows:
// Show the caller's home location
if (callLog.getBelong_area() != null && !callLog.getBelong_area().equals("")) {
    LogE.e("item", "has location: " + callLog.getBelong_area());
    holder.belong_area.setVisibility(View.VISIBLE);
    holder.belong_area.setText(callLog.getBelong_area());
} else {
    LogE.e("item", "no location");
    holder.belong_area.setText("");
    holder.belong_area.setVisibility(View.GONE);
}
2. It still does not display data when I enter a fixed value, such as the following:
holder.belong_area.setText("北京");
3. The printed log is as follows:
08-23 10:07:13.241 17494-17494/com.allinone.callerid E/item: has location: 北京
08-23 10:07:13.607 17494-17494/com.allinone.callerid E/item: has location: Shijiazhuang, Hebei
08-23 10:07:13.674 17494-17494/com.allinone.callerid E/item: has location: 北京
08-23 10:07:13.714 17494-17494/com.allinone.callerid E/item: has location: 湖北省,武汉市
4. Runtime screenshots:
Arabic language (wrong): [screenshot]
English language (right): [screenshot]
You should make sure you are using the proper character set.
You can use the following code to see how a string survives round trips between character sets:
import java.io.UnsupportedEncodingException;

public class CharsetDetectTest {
    public static void main(String[] args) {
        detectCharset("北京");
    }

    public static void detectCharset(String originalStr) {
        String[] charSet = { "utf-8", "big5", "EUC-CN", "iso-8859-1", "gb2312" };
        // Encode with one charset and decode with another to see which
        // combination survives the round trip intact.
        for (int i = 0; i < charSet.length; i++) {
            for (int j = 0; j < charSet.length; j++) {
                try {
                    System.out.println("[" + charSet[i] + "==>" + charSet[j] + "] = "
                            + new String(originalStr.getBytes(charSet[i]), charSet[j]));
                } catch (UnsupportedEncodingException e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
The debug output will be
[utf-8==>utf-8] = 北京
[utf-8==>big5] = ��鈭�
[utf-8==>EUC-CN] = ��浜�
[utf-8==>iso-8859-1] = å京
[utf-8==>gb2312] = ��浜�
[big5==>utf-8] = �_��
[big5==>big5] = 北京
[big5==>EUC-CN] = �_ㄊ
[big5==>iso-8859-1] = ¥_¨Ê
[big5==>gb2312] = �_ㄊ
[EUC-CN==>utf-8] = ����
[EUC-CN==>big5] = 控儔
[EUC-CN==>EUC-CN] = 北京
[EUC-CN==>iso-8859-1] = ±±¾©
[EUC-CN==>gb2312] = 北京
[iso-8859-1==>utf-8] = ??
[iso-8859-1==>big5] = ??
[iso-8859-1==>EUC-CN] = ??
[iso-8859-1==>iso-8859-1] = ??
[iso-8859-1==>gb2312] = ??
[gb2312==>utf-8] = ����
[gb2312==>big5] = 控儔
[gb2312==>EUC-CN] = 北京
[gb2312==>iso-8859-1] = ±±¾©
[gb2312==>gb2312] = 北京
Then, use one of the correct character sets:
holder.belong_area.setText(new String("北京".getBytes("utf-8"), "utf-8"));
or
holder.belong_area.setText(new String("北京".getBytes("utf-8")));
You can verify the characters on a Chinese character reference site.
I am new to TensorFlow and cannot figure out the solutions to these questions.
How can I retrain the im2txt model on my new dataset so that the dataset the im2txt model was trained on does not get lost, and my new dataset is added to the MSCOCO dataset to caption the new images (i.e. training dataset = MSCOCO dataset + my new dataset)? Could someone please share the detailed procedure and the problems I might face while retraining?
I have found the TensorFlow tutorial for running the Inception V3 model on Android with real-time data. Can this method be applied to the im2txt model as well, i.e. can it be made to caption an image taken from a mobile phone in real time? Could someone please share the detailed steps for how to do this?
After weeks of struggle, I was able to run and execute the im2txt model on Android.
Since I found the solutions in different blogs and in different questions and answers, I felt it might be useful to have all (or most) of the solution in one place, so I am sharing the steps I followed.
You need to clone the TensorFlow project (https://github.com/tensorflow/tensorflow/releases/tag/v1.5.0) in order to freeze the graph and to get some other utilities.
I downloaded the im2txt model from https://github.com/KranthiGV/Pretrained-Show-and-Tell-model
Following the steps described in the above link, I was able to run inference and generate captions on a Linux desktop successfully, after renaming some variables in the graph (to overcome errors of the type NotFoundError (see above for traceback): Key lstm/basic_lstm_cell/bias not found in checkpoint).
Now we need to freeze the existing model to obtain a frozen graph, in order to use it on Android/iOS.
From the cloned TensorFlow project, using freeze_graph.py (tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py), one can freeze the graph from any model.
An example of command-line usage is:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=some_graph_def.pb \
--input_checkpoint=model.ckpt-8361242 \
--output_graph=/tmp/frozen_graph.pb \
--output_node_names=softmax \
--input_binary=true
We need to supply all the output_node_names required to run the model. From "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_wrapper.py" we can list the output node names as 'softmax', 'lstm/initial_state' and 'lstm/state'.
When I ran the freeze_graph command supplying those output node names, I got the error "AssertionError: softmax is not in the graph".
From the answers to How to freeze an im2txt model? by Steph and Jeff Tang:
The current model ckpt.data, ckpt.index and ckpt.meta files and a graph.pbtxt should be loaded in inference mode (see InferenceWrapper in im2txt). It builds a graph with the correct names 'softmax', 'lstm/initial_state' and 'lstm/state'. You save this graph (with the same ckpt format) and then you can apply the freeze_graph script to obtain the frozen model.
To do this, in Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\inference_wrapper.base.py, just add something like saver.save(sess, "model/ckpt4") after saver.restore(sess, checkpoint_path) in def _restore_fn(sess):. Then rebuild and run_inference, and you'll get a model that can be frozen, transformed, and optionally memmapped, to be loaded by iOS and Android apps.
Then I ran the command below:
python tensorflow/python/tools/freeze_graph.py \
--input_meta_graph=/tmp/ckpt4.meta \
--input_checkpoint=/tmp/ckpt4 \
--output_graph=/tmp/ckpt4_frozen.pb \
--output_node_names="softmax,lstm/initial_state,lstm/state" \
--input_binary=true
and loaded the obtained ckpt4_frozen.pb file into the Android application, where I got the error:
"java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs. Registered devices: [CPU], Registered kernels:
[[Node: decode/DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]"
From https://github.com/tensorflow/tensorflow/issues/2883:
Since DecodeJpeg isn't supported as part of the Android TensorFlow core, you'll need to strip it out of the graph first:
bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=ckpt4_frozen.pb \
--output_graph=ckpt4_frozen_stripped_graph.pb \
--input_node_names=convert_image/Cast,input_feed,lstm/state_feed \
--output_node_names=softmax,lstm/initial_state,lstm/state \
--input_binary=true
When I tried to load ckpt4_frozen_stripped_graph.pb on Android I faced errors, so I followed Jeff Tang's answer (Error using Model after using optimize_for_inference.py on frozen graph).
Instead of tools:strip_unused I used the graph transform tool:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=/tmp/ckpt4_frozen.pb \
--out_graph=/tmp/ckpt4_frozen_transformed.pb \
--inputs="convert_image/Cast,input_feed,lstm/state_feed" \
--outputs="softmax,lstm/initial_state,lstm/state" \
--transforms='
strip_unused_nodes(type=float, shape="1,299,299,3")
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms'
I was able to load the obtained ckpt4_frozen_transformed.pb on Android successfully.
I supplied the input as a float array of RGB image pixels to the input node "convert_image/Cast" and fetched the output from the "lstm/initial_state" node successfully.
Now the challenge is to understand the beam search in "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\caption_generator.py", and the same has to be implemented on the Android side.
If you look at the Python script caption_generator.py at
softmax, new_states, metadata = self.model.inference_step(sess, input_feed, state_feed)
input_feed is an int32 array and state_feed is a multidimensional float array.
On the Android side, I tried feeding an int32 array for "input_feed"; since there is no Java API to feed a multidimensional array, I fed the float array to lstm/state_feed as-is, as fetched previously from the "lstm/initial_state" node.
I got two errors: one that input_feed expects int64, and
"java.lang.IllegalArgumentException: -input rank(-1) <= split_dim < input rank (1), but got 1" at lstm/state_feed.
For the first error, I changed the input_feed data type from int32 to int64.
The second error means a rank-two tensor is expected. If you look at the TensorFlow Java sources, the float array we feed is converted to a rank-one tensor; we would need to feed the data in such a way that a rank-two tensor is created, but at the time I did not find any exposed API to feed a multidimensional float array.
While browsing the TensorFlow Java source I found an API, not exposed as an Android API, with which a rank-two tensor can be created, so I rebuilt both libtensorflow_inference.so and libandroid_tensorflow_inference_java.jar with the rank-two tensor creation call enabled. (For the build process, refer to https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc.)
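Note: depending on the TensorFlow version, rebuilding may not be necessary. In TF 1.5 the contrib TensorFlowInferenceInterface has a feed(String, float[], long...) overload that takes a flattened array plus an explicit shape, which can express a rank-two tensor. A minimal sketch, assuming that overload is available:
// Flatten the [batch, stateSize] state into one float[] and pass the
// shape explicitly; feed() builds a tensor of that shape from it.
int batch = state_feed.length;
int stateSize = state_feed[0].length;
float[] flatState = new float[batch * stateSize];
for (int b = 0; b < batch; b++) {
    System.arraycopy(state_feed[b], 0, flatState, b * stateSize, stateSize);
}
inferenceInterface.feed("lstm/state_feed", flatState, batch, stateSize);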
Now I can run inference on Android and get one caption for the image, but the accuracy is very low.
The reason it is limited to one caption is that I did not find a way to fetch the outputs as a multidimensional array, which is required for generating multiple captions for a single image.
String actualFilename = labelFilename.split("file:///android_asset/")[1];
vocab = new Vocabulary(assetManager.open(actualFilename));
inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);
final Graph g = c.inferenceInterface.graph();
final Operation inputOperation = g.operation(inputName);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + inputName + "'");
}
final Operation outPutOperation = g.operation(outputName);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + outputName + "'");
}
// The shape of the output is [N, NUM_CLASSES], where N is the batch size.
int numClasses = (int) inferenceInterface.graph().operation(outputName)
.output(0).shape().size(1);
Log.i(TAG, "Read " + vocab.totalWords() + " labels, output layer size is " + numClasses);
// Ideally, inputSize could have been retrieved from the shape of the input operation. Alas,
// the placeholder node for input in the graphdef typically used does not specify a shape, so it
// must be passed in as a parameter.
this.inputSize = inputSize;
// Pre-allocate buffers.
outputNames = new String[]{outputName + ":0"};
outputs = new float[numClasses];
inferenceInterface.feed(inputName + ":0", pixels, inputSize, inputSize, 3);
inferenceInterface.run(outputNames, runStats);
inferenceInterface.fetch(outputName + ":0", outputs);
startIm2txtBeamSearch(outputs);
// Beam search implemented in Java
private void startIm2txtBeamSearch(float[] outputs) {
int beam_size = 1;
//TODO:Prepare vocab ids from file
ArrayList<Integer> vocab_ids = new ArrayList<>();
vocab_ids.add(1);
int vocab_end_id = 2;
float length_normalization_factor = 0;
int maxCaptionLength = 20;
Graph g = inferenceInterface.graph();
//node input feed
String input_feed_node_name = "input_feed";
Operation inputOperation = g.operation(input_feed_node_name);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + input_feed_node_name + "'");
}
String output_feed_node_name = "softmax";
Operation outPutOperation = g.operation(output_feed_node_name);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + output_feed_node_name + "'");
}
int output_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
Log.i(TAG, "Output layer " + output_feed_node_name + ", output layer size is " + output_feed_node_numClasses);
FloatBuffer output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
//float [][] output_feed_output = new float[numClasses][];
//node state feed
String input_state_feed_node_name = "lstm/state_feed";
inputOperation = g.operation(input_state_feed_node_name);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + input_state_feed_node_name + "'");
}
String output_state_feed_node_name = "lstm/state";
outPutOperation = g.operation(output_state_feed_node_name);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + output_state_feed_node_name + "'");
}
int output_state_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
Log.i(TAG, "Output layer " + output_state_feed_node_name + ", output layer size is " + output_state_feed_node_numClasses);
FloatBuffer output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
//float[][] output_state_output= new float[numClasses][];
String[] output_nodes = new String[]{output_feed_node_name, output_state_feed_node_name};
Caption initialBeam = new Caption(vocab_ids, outputs, (float) 0.0, (float) 0.0);
TopN partialCaptions = new TopN(beam_size);
partialCaptions.push(initialBeam);
TopN completeCaption = new TopN(beam_size);
captionLengthLoop:
for (int i = maxCaptionLength; i >= 0; i--) {
List<Caption> partialCaptionsList = new LinkedList<>(partialCaptions.extract(false));
partialCaptions.reset();
long[] input_feed = new long[partialCaptionsList.size()];
float[][] state_feed = new float[partialCaptionsList.size()][];
for (int j = 0; j < partialCaptionsList.size(); j++) {
Caption curCaption = partialCaptionsList.get(j);
ArrayList<Integer> senArray = curCaption.getSentence();
input_feed[j] = senArray.get(senArray.size() - 1);
state_feed[j] = curCaption.getState();
}
//feeding
inferenceInterface.feed(input_feed_node_name, input_feed, new long[]{input_feed.length});
inferenceInterface.feed(input_state_feed_node_name, state_feed, new long[]{state_feed.length});
//run
inferenceInterface.run(output_nodes, runStats);
//fetching
inferenceInterface.fetch(output_feed_node_name, output_feed_output);
inferenceInterface.fetch(output_state_feed_node_name, output_state_output);
float[] word_probabilities = new float[partialCaptionsList.size()];
float[] new_state = new float[partialCaptionsList.size()];
for (int k = 0; k < partialCaptionsList.size(); k++) {
word_probabilities = output_feed_output.array();
//output_feed_output.get(word_probabilities[k]);
new_state = output_state_output.array();
//output_feed_output.get(state[k]);
// For this partial caption, get the beam_size most probable next words.
Map<Integer, Float> word_and_probs = new LinkedHashMap<>();
//key is the word index; value is that word's probability
for (int l = 0; l < word_probabilities.length; l++) {
word_and_probs.put(l, word_probabilities[l]);
}
//sorting
// word_and_probs = word_and_probs.entrySet().stream()
// .sorted(Map.Entry.comparingByValue())
// .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,(e1, e2) -> e1, LinkedHashMap::new));
word_and_probs = MapUtil.sortByValue(word_and_probs);
//considering first (beam size probabilities)
LinkedHashMap<Integer, Float> final_word_and_probs = new LinkedHashMap<>();
for (int key : word_and_probs.keySet()) {
final_word_and_probs.put(key, word_and_probs.get(key));
if (final_word_and_probs.size() == beam_size)
break;
}
for (int w : final_word_and_probs.keySet()) {
float p = final_word_and_probs.get(w);
if (p < 1e-12) {//# Avoid log(0).
Log.d(TAG, "p is < 1e-12");
continue;
}
Caption partialCaption = partialCaptionsList.get(k);
ArrayList<Integer> sentence = new ArrayList<>(partialCaption.getSentence());
sentence.add(w);
float logprob = (float) (partialCaption.getProb() + Math.log(p));
float score = logprob;
Caption beam = new Caption(sentence, new_state, logprob, score);
if (w == vocab_end_id) {
completeCaption.push(beam);
} else {
partialCaptions.push(beam);
}
}
if (partialCaptions.getSize() == 0)//run out of partial candidates; happens when beam_size = 1.
break captionLengthLoop;
}
//clear and reallocate the buffers to retrieve the subsequent outputs
output_feed_output.clear();
output_state_output.clear();
output_feed_output = null;
output_state_output = null;
output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
Log.d(TAG, "----" + i + " Iteration completed----");
}
Log.d(TAG, "----Total Iterations completed----");
LinkedList<Caption> completeCaptions = completeCaption.extract(true);
for (Caption cap : completeCaptions) {
ArrayList<Integer> wordids = cap.getSentence();
StringBuffer caption = new StringBuffer();
boolean isFirst = true;
for (int word : wordids) {
if (!isFirst)
caption.append(" ");
caption.append(vocab.getWord(word));
isFirst = false;
}
Log.d(TAG, "Cap score = " + Math.exp(cap.getScore()) + " and Caption is " + caption);
}
}
//Vocab
public class Vocabulary {
String TAG = Vocabulary.class.getSimpleName();
String start_word = "<S>", end_word = "</S>", unk_word = "<UNK>";
ArrayList<String> words;
public Vocabulary(File vocab_file) {
loadVocabsFromFile(vocab_file);
}
public Vocabulary(InputStream vocab_file_stream) {
words = readLinesFromFileAndLoadWords(new InputStreamReader(vocab_file_stream));
}
public Vocabulary(String vocab_file_path) {
File vocabFile = new File(vocab_file_path);
loadVocabsFromFile(vocabFile);
}
private void loadVocabsFromFile(File vocabFile) {
try {
this.words = readLinesFromFileAndLoadWords(new FileReader(vocabFile));
//Log.d(TAG, "Words read from file = " + words.size());
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
private ArrayList<String> readLinesFromFileAndLoadWords(InputStreamReader file_reader) {
ArrayList<String> words = new ArrayList<>();
try (BufferedReader br = new BufferedReader(file_reader)) {
String line;
while ((line = br.readLine()) != null) {
// process the line.
words.add(line.split(" ")[0].trim());
}
br.close();
if (!words.contains(unk_word))
words.add(unk_word);
} catch (IOException e) {
e.printStackTrace();
}
return words;
}
public String getWord(int word_id) {
if (words != null)
if (word_id >= 0 && word_id < words.size())
return words.get(word_id);
return "No word found, Maybe Vocab File not loaded";
}
public int totalWords() {
if (words != null)
return words.size();
return 0;
}
}
//MapUtil
public class MapUtil {
public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) {
List<Map.Entry<K, V>> list = new ArrayList<>(map.entrySet());
list.sort(new Comparator<Map.Entry<K, V>>() {
@Override
public int compare(Map.Entry<K, V> o1, Map.Entry<K, V> o2) {
if (o1.getValue() instanceof Float && o2.getValue() instanceof Float) {
Float o1Float = (Float) o1.getValue();
Float o2Float = (Float) o2.getValue();
return o2Float.compareTo(o1Float); // descending order
}
return 0;
}
});
Map<K, V> result = new LinkedHashMap<>();
for (Map.Entry<K, V> entry : list) {
result.put(entry.getKey(), entry.getValue());
}
return result;
}
}
//Caption
public class Caption implements Comparable<Caption> {
private ArrayList<Integer> sentence;
private float[] state;
private float prob;
private float score;
public Caption(ArrayList<Integer> sentence, float[] state, float prob, float score) {
this.sentence = sentence;
this.state = state;
this.prob = prob;
this.score = score;
}
public ArrayList<Integer> getSentence() {
return sentence;
}
public void setSentence(ArrayList<Integer> sentence) {
this.sentence = sentence;
}
public float[] getState() {
return state;
}
public void setState(float[] state) {
this.state = state;
}
public float getProb() {
return prob;
}
public void setProb(float prob) {
this.prob = prob;
}
public float getScore() {
return score;
}
public void setScore(float score) {
this.score = score;
}
@Override
public int compareTo(@NonNull Caption oc) {
if (score == oc.score)
return 0;
if (score < oc.score)
return -1;
else
return 1;
}
}
//TopN
public class TopN {
//Maintains the top n elements of an incrementally provided set.
int n;
LinkedList<Caption> data;
public TopN(int n) {
this.n = n;
this.data = new LinkedList<>();
}
public int getSize() {
if (data != null)
return data.size();
return 0;
}
//Pushes a new element
public void push(Caption x) {
if (data != null) {
if (getSize() < n) {
data.add(x);
} else {
// Only replace the current worst element if the new one scores
// higher, mirroring heappushpop in the Python implementation.
Caption worst = Collections.min(data);
if (x.compareTo(worst) > 0) {
data.remove(worst);
data.add(x);
}
}
}
}
//Extracts all elements from the TopN. This is a destructive operation.
//The only method that can be called immediately after extract() is reset().
//Args:
//sort: Whether to return the elements in descending sorted order.
//Returns: A list of data; the top n elements provided to the set.
public LinkedList<Caption> extract(boolean sort) {
if (sort) {
// Sort in descending score order, as documented above.
Collections.sort(data, Collections.reverseOrder());
}
return data;
}
//Returns the TopN to an empty state.
public void reset() {
if (data != null) data.clear();
}
}
Even though the accuracy is very low, I am sharing this because it might be useful for someone who wants to load Show and Tell models on Android.
I want to get a list of certain filenames in a drawable resource directory (at runtime, not using XML). I guess I would need to distinguish certain files from others. So...
res/drawable-hdpi/mysubdir
-- x-one.png
-- one.png
-- x-two.png
-- two.png
-- x-three.png
-- three.png
I want to put the x-*.png filenames into a List<String>. Is this possible?
Borrowing some code from another question, I can get all the drawables, but I don't see a way to distinguish one from another in a meaningful way.
private void getDrawableResources() {
final R.drawable drawableResources = new R.drawable();
final Class<R.drawable> drawableClass = R.drawable.class;
final Field[] fields = drawableClass.getDeclaredFields();
final List<Integer> resourceIdList = new ArrayList<Integer>();
for (int i = 0, max = fields.length; i < max; i++) {
final int resourceId;
try {
resourceId = fields[i].getInt(drawableResources);
resourceIdList.add(resourceId);
}
catch (Exception e) {
continue;
}
}
Resources resources = this.getActivity().getResources();
for (int i = 0; i < resourceIdList.size(); i++) {
    Drawable drawable = resources.getDrawable(resourceIdList.get(i));
    Log.d("Drawables", "(" + resourceIdList.get(i) + "): " + drawable.toString());
}
}
}
Resource directories do not support subdirectories.
Use a FilenameFilter when getting the list of files in a directory:
File dir = new File(dirLocation);
if (dir.exists() && dir.isDirectory()) {
    File[] files = dir.listFiles(new FilenameFilter() {
        public boolean accept(File dir, String name) {
            return name.startsWith("x-"); // match the lowercase "x-" prefix
        }
    });
}
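Note that Android resource file names cannot actually contain hyphens (only lowercase letters, digits, and underscores), so inside res/drawable the files would have to be named x_one.png, x_two.png, and so on. With that naming, here is a minimal sketch combining the reflection loop from the question with the prefix idea from this answer (the prefix and method name are illustrative):
// Collect the names of all drawables whose generated field name
// starts with the chosen prefix, e.g. "x_".
private List<String> getPrefixedDrawableNames(String prefix) {
    List<String> names = new ArrayList<String>();
    for (Field field : R.drawable.class.getDeclaredFields()) {
        if (field.getName().startsWith(prefix)) {
            names.add(field.getName());
        }
    }
    return names;
}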
I am working on a project in which I have to show the system's available locales in a ListView in the following format.
So I've done this in onCreate:
@Override
protected void onCreate(Bundle icicle) {
super.onCreate(icicle);
setContentView(getContentView());
String[] locales = getAssets().getLocales(); // all system locale
Arrays.sort(locales); // sort in lexicographic order
final int origSize = locales.length;
// Loc is a class that I've explained later in this question
Loc[] preprocess = new Loc[origSize];
int finalSize = 0;
for (int i = 0; i < origSize; i++) {
String s = locales[i];
int len = s.length(); // i.e. en_US
if (len == 5) {
String language = s.substring(0, 2); // i.e. en
String country = s.substring(3, 5); // i.e. US
Locale l = new Locale(language, country);
// There are some other logics. I excluded those for simplicity
// and to focus the main problem
preprocess[finalSize++] = new Loc(
toTitleCase(l.getDisplayName(l)), l);
}
}
mLocales = new Loc[finalSize + 1];
// put into another array keeping it's first index empty
for (int i = 0; i < finalSize; i++) {
mLocales[i + 1] = preprocess[i];
}
// put the system default to show it at the first index
mLocales[0] = new Loc("Use System Default", Resources
.getSystem().getConfiguration().locale);
// pass the array to Listview
int layoutId = R.layout.locale_picker_item;
int fieldId = R.id.locale;
ArrayAdapter<Loc> adapter = new ArrayAdapter<Loc>(this, layoutId,
fieldId, mLocales);
getListView().setAdapter(adapter);
}
And the Loc Class is:
public static class Loc {
String label;
Locale locale;
public Loc(String label, Locale locale) {
this.label = label;
this.locale = locale;
}
@Override
public String toString() {
// for the first index, it should show system default
if (this.label.equals("Use System Default")
return (this.label + " (" + this.locale.getDisplayName() + ", "
+ this.locale.getCountry() + ")");
return this.locale.getDisplayName(this.locale);
}
}
Expected Behavior:
________________________________
Use System Default (English, US)
________________________________
বাংলা (বাংলাদেশ)
________________________________
বাংলা (ভারত)
________________________________
English (United States)
....
....
....
But in my case:
________________________________
English (United States)
________________________________
বাংলা (বাংলাদেশ)
________________________________
বাংলা (ভারত)
________________________________
English (United States)
....
....
....
So my question is: why is the text I want to show at the first index of the ListView not being displayed?
There's a typo in the string you're comparing:
mLocales[0] = new Loc("Use System Default" ...
and
if (this.label.equals("Use Sytem Default") ...
The problem is in the spelling of "System": you are checking "Sytem".
Do
this.label.equals("Use System Default")
instead of
this.label.equals("Use Sytem Default")