I'm using the MPAndroidChart and am really enjoying it.
A small feature I need is the ability to put null values into the entries. I'm monitoring the Apache connections on my servers, and I would like to see whether a server is down (where I would put a null value) or whether it simply has no connections (0).
I tried, but the Entry class doesn't accept 'null' as a value and shows the message: 'The constructor Entry(null, int) is undefined'.
Thanks!
A possible solution would be to check whether the object you received is null or not. If the object is null, simply don't create an Entry object for it, instead of trying to set its value to null.
Example:
// array that contains the information you want to display
ConnectionHolder[] connectionHolders = ...;

ArrayList<Entry> entries = new ArrayList<Entry>();

int cnt = 0;
for (ConnectionHolder ch : connectionHolders) {
    if (ch != null) {
        entries.add(new Entry(ch.getNrOfConnections(), cnt));
    } else {
        // do nothing, no Entry is created for this index
    }
    cnt++; // always increment
}
This would create e.g. a LineChart where no circles are drawn on indices where the ConnectionHolder object was null.
For a future release of the library, I will try to add the feature so that null values are supported.
My solution is to draw another DataSet with a TRANSPARENT (or arbitrary) color:
- the chart has a fixed number of X values
- the Y values are updated periodically
- a boolean flag indicates the transparent part (or another color)
private static final int SERIES_SIZE = 360;
int xIndex = -1;
float xIndexVal;
private LineChart chart;
private boolean currentFlag;
public void createChart(LineDataSet dataSet) {
    LineData chartData = new LineData();
    // the initial set uses the normal (non-transparent) color
    prepareDataSet(dataSet, chart.getAxisLeft(), Color.BLUE);
    chartData.addDataSet(dataSet);
    for (int i = 0; i < SERIES_SIZE; i++) {
        chartData.addXValue("" /*+ i*/);
    }
    chart.setData(chartData);
}

private void prepareDataSet(LineDataSet dataSet, YAxis axis, int color) {
    // configure the set (axis dependency, color, line width, etc.)
}
public void update(Float val, boolean flag) {
    List<ILineDataSet> dsl = chart.getData().getDataSets();
    Log.d("chart", String.format("%d sets, index %d", dsl.size(), xIndex));
    if (xIndex == SERIES_SIZE - 1) {
        // remove all entries at X index 0
        for (int i = 0; i < chart.getData().getDataSetCount(); i++) {
            Entry entry0 = chart.getData().getDataSetByIndex(i).getEntryForIndex(0);
            if (entry0 != null && entry0.getXIndex() == 0) {
                chart.getData().removeEntry(entry0, i);
                Log.d("chart", String.format("entry 0 removed from dataset %d, %d entries in the set",
                        i, chart.getData().getDataSetByIndex(i).getEntryCount()));
            } else {
                Log.d("chart", String.format("all %d entries in the set kept",
                        chart.getData().getDataSetByIndex(i).getEntryCount()));
            }
        }
        // remove empty set, if any
        for (Iterator<ILineDataSet> mit = dsl.iterator(); mit.hasNext(); ) {
            if (mit.next().getEntryCount() == 0) {
                mit.remove();
                Log.d("chart", String.format("set removed, %d sets", dsl.size()));
            }
        }
        // move all entries by -1
        for (ILineDataSet ds : dsl) {
            for (Entry entry : ((LineDataSet) ds).getYVals()) {
                entry.setXIndex(entry.getXIndex() - 1);
            }
        }
    } else {
        xIndex++;
    }
    if (currentFlag != flag) {
        currentFlag = !currentFlag;
        LineDataSet set = new LineDataSet(null, "");
        prepareDataSet(set, chart.getAxisLeft(), currentFlag ? Color.TRANSPARENT : Color.BLUE);
        chart.getData().addDataSet(set);
        if (xIndex != 0) {
            // duplicate the previous value into the new set so the line segments connect
            chart.getData().addEntry(new Entry(xIndexVal, xIndex - 1), dsl.size() - 1);
        }
    }
    xIndexVal = val;
    chart.getData().addEntry(new Entry(val, xIndex), dsl.size() - 1);
    chart.notifyDataSetChanged();
    chart.invalidate();
}
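For context, here is a minimal sketch of how update() might be driven once per second; the Handler loop and the isServerDown()/readApacheConnections() helpers are hypothetical placeholders for your own monitoring code:
// Hypothetical polling driver (requires android.os.Handler and android.os.Looper).
// isServerDown() and readApacheConnections() stand in for your own monitoring calls.
private final Handler pollHandler = new Handler(Looper.getMainLooper());

private final Runnable poller = new Runnable() {
    @Override
    public void run() {
        boolean down = isServerDown();
        // the flag switches the chart to the TRANSPARENT DataSet while the server is down
        update(down ? 0f : readApacheConnections(), down);
        pollHandler.postDelayed(this, 1000);
    }
};

public void startPolling() {
    pollHandler.post(poller);
}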
I'm having trouble with Android's SortedList in a RecyclerView, mainly with the remove method:
public void replaceAll(List<Fertiliser> userFertList, List<Fertiliser> defaultFertList) {
    restartIndexes(userFertList, defaultFertList);
    mComparator.swapLists(Utils.fertiliserListToNameList(userFertList));
    List<Fertiliser> combinedList = Utils.combineFertLists(userFertList, defaultFertList);

    mSortedList.beginBatchedUpdates();
    for (int i = mSortedList.size() - 1; i > -1; i--) {
        final Fertiliser fertiliser = mSortedList.get(i);
        if (!combinedList.contains(fertiliser)) {
            if (!mSortedList.remove(fertiliser)) {
                throw new RuntimeException();
            }
        }
    }
    mSortedList.addAll(combinedList);
    mSortedList.endBatchedUpdates();
}
The above code is executed when filtering the list. All of the objects that are not present in the new list are removed. However, the call to remove an object sometimes fails. I know the object is present, because it is taken from the SortedList itself.
My research hinted that there's something wrong with my Comparator's compare method:
@Override
public int compare(Fertiliser fertiliser, Fertiliser t1) {
    if (fertiliser == t1) {
        return 0;
    }
    if (mUserFertNames.contains(fertiliser.getName()) != mUserFertNames.contains(t1.getName())) {
        return mUserFertNames.contains(fertiliser.getName()) ? -1 : 1;
    } else {
        return fertiliser.getName().compareToIgnoreCase(t1.getName());
    }
}
I'm sorting by two criteria: whether the object is present in a list, and then by name.
So my thinking is that, because SortedList uses the Comparator to locate elements, my Comparator gives inconsistent results and the list cannot find the item:
The called method from the SortedList:
private int findIndexOf(T item, T[] mData, int left, int right, int reason) {
    while (left < right) {
        final int middle = (left + right) / 2;
        T myItem = mData[middle];
        final int cmp = mCallback.compare(myItem, item);
        if (cmp < 0) {
            left = middle + 1;
        } else if (cmp == 0) {
            if (mCallback.areItemsTheSame(myItem, item)) {
                return middle;
            } else {
                int exact = linearEqualitySearch(item, middle, left, right);
                if (reason == INSERTION) {
                    return exact == INVALID_POSITION ? middle : exact;
                } else {
                    return exact;
                }
            }
        } else {
            right = middle;
        }
    }
    return reason == INSERTION ? left : INVALID_POSITION;
}
However, I couldn't find a solution. Can you help me?
P.S. When I examined the error, both objects were not in the user list (so they were compared by name only).
Try .removeItemAt(i) instead of .remove(fertiliser). This worked for me while the list was filtered.
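Applied to the loop from the question, that change would look roughly like this (just a sketch of the suggestion above, not tested against your exact classes):
mSortedList.beginBatchedUpdates();
for (int i = mSortedList.size() - 1; i > -1; i--) {
    final Fertiliser fertiliser = mSortedList.get(i);
    if (!combinedList.contains(fertiliser)) {
        // remove by position: this does not rely on the Comparator to locate the item,
        // so it still works while the list is filtered
        mSortedList.removeItemAt(i);
    }
}
mSortedList.addAll(combinedList);
mSortedList.endBatchedUpdates();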
I am new to TensorFlow and cannot work out the solution to these questions.
How can I retrain the im2txt model on my new dataset so that the dataset the model was originally trained on does not get lost, and my new dataset is added to the MSCOCO dataset for captioning new images (i.e. training dataset = MSCOCO dataset + my new dataset)? Please share the detailed procedure and the problems I might face while retraining.
I have found the TensorFlow tutorial for running the Inception V3 model on Android in real time. Can this method also be applied to the im2txt model, i.e. can it be made to caption an image taken from a mobile phone in real time? Please share the detailed steps for how to do this.
After weeks of struggle I was able to run and execute the im2txt model on Android.
Since I found the solutions in different blogs and different questions and answers, I felt it might be useful to have the whole (or most of the) solution in one place, so I'm sharing the steps I followed.
You need to clone the TensorFlow project (https://github.com/tensorflow/tensorflow/releases/tag/v1.5.0) in order to freeze the graph and for some other utilities.
I downloaded the im2txt model from https://github.com/KranthiGV/Pretrained-Show-and-Tell-model
Following the steps described in the above link, I was able to run inference and generate captions on a Linux desktop successfully, after renaming some variables in the graph (to overcome errors like "NotFoundError (see above for traceback): Key lstm/basic_lstm_cell/bias not found in checkpoint").
Now we need to freeze the existing model to obtain a frozen graph that can be used on Android/iOS.
From the cloned TensorFlow project, using freeze_graph.py (tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py), one can freeze the graph of any model with the following command.
An example of command-line usage is:
bazel build tensorflow/python/tools:freeze_graph && \
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=some_graph_def.pb \
--input_checkpoint=model.ckpt-8361242 \
--output_graph=/tmp/frozen_graph.pb \
--output_node_names=softmax \
--input_binary=true
We need to supply all the output_node_names required to run the model. From "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_wrapper.py" we can list the output node names as 'softmax', 'lstm/initial_state' and 'lstm/state'.
When I ran the freeze_graph command supplying the output node names 'softmax', 'lstm/initial_state' and 'lstm/state', I got the error "AssertionError: softmax is not in the graph".
From the answers by Steph and Jeff Tang to "How to freeze an im2txt model?":
The current model ckpt.data, ckpt.index and ckpt.meta files and a graph.pbtxt should be loaded in inference mode (see InferenceWrapper in im2txt). It builds a graph with the correct names 'softmax', 'lstm/initial_state' and 'lstm/state'. You save this graph (with the same ckpt format) and then you can apply the freeze_graph script to obtain the frozen model.
To do this, in Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\inference_wrapper_base.py, just add something like saver.save(sess, "model/ckpt4") after saver.restore(sess, checkpoint_path) in def _restore_fn(sess):. Then rebuild, run run_inference, and you'll get a model that can be frozen, transformed, and optionally memmapped, to be loaded by iOS and Android apps.
Now I run the command below:
python tensorflow/python/tools/freeze_graph.py \
--input_meta_graph=/tmp/ckpt4.meta \
--input_checkpoint=/tmp/ckpt4 \
--output_graph=/tmp/ckpt4_frozen.pb \
--output_node_names="softmax,lstm/initial_state,lstm/state" \
--input_binary=true
and loaded the obtained ckpt4_frozen.pb file into the Android application and got the error
"java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'DecodeJpeg' with these attrs. Registered devices: [CPU], Registered kernels:
[[Node: decode/DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]"
From https://github.com/tensorflow/tensorflow/issues/2883
Since DecodeJpeg isn't supported as part of the Android tensorflow core, you'll need to strip it out of the graph first
bazel build tensorflow/python/tools:strip_unused && \
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=ckpt4_frozen.pb \
--output_graph=ckpt4_frozen_stripped_graph.pb \
--input_node_names=convert_image/Cast,input_feed,lstm/state_feed \
--output_node_names=softmax,lstm/initial_state,lstm/state \
--input_binary=true
When I tried to load ckpt4_frozen_stripped_graph.pb on Android I faced errors, so I followed Jeff Tang's answer (Error using Model after using optimize_for_inference.py on frozen graph).
Instead of tools:strip_unused I used the graph transform tool:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=/tmp/ckpt4_frozen.pb \
--out_graph=/tmp/ckpt4_frozen_transformed.pb \
--inputs="convert_image/Cast,input_feed,lstm/state_feed" \
--outputs="softmax,lstm/initial_state,lstm/state" \
--transforms='
strip_unused_nodes(type=float, shape="1,299,299,3")
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms'
I was able to load the obtained ckpt4_frozen_transformed.pb on Android successfully.
I supply the input as a float array of RGB image pixels to the input node "convert_image/Cast" and fetch the output from the "lstm/initial_state" node successfully.
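For reference, a rough sketch of how the RGB float array could be produced from a Bitmap. The 299x299 size comes from the strip_unused shape above (1,299,299,3); whether any extra scaling is needed depends on which ops remain after convert_image/Cast in the stripped graph, so treat this only as a starting point:
// Sketch (requires android.graphics.Bitmap): turn a 299x299 ARGB Bitmap into a
// float[299 * 299 * 3] RGB array. Additional normalization may still be needed
// depending on what convert_image/Cast actually expects.
float[] bitmapToRgbFloats(Bitmap bitmap) {
    int w = bitmap.getWidth();   // expected 299
    int h = bitmap.getHeight();  // expected 299
    int[] argb = new int[w * h];
    bitmap.getPixels(argb, 0, w, 0, 0, w, h);
    float[] rgb = new float[w * h * 3];
    for (int i = 0; i < argb.length; i++) {
        int p = argb[i];
        rgb[i * 3]     = ((p >> 16) & 0xFF); // R
        rgb[i * 3 + 1] = ((p >> 8) & 0xFF);  // G
        rgb[i * 3 + 2] = (p & 0xFF);         // B
    }
    return rgb;
}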
Now the challenge is to understand the beam search in "Pretrained-Show-and-Tell-model\im2txt\im2txt\inference_utils\caption_generator.py" and implement the same on the Android side.
If you look at the Python script caption_generator.py, at
softmax, new_states, metadata = self.model.inference_step(sess, input_feed, state_feed)
input_feed is a 32-bit int array and state_feed is a multidimensional float array.
On the Android side, I tried feeding a 32-bit int array for "input_feed". Since there is no Java API to feed a multidimensional array, I fed the float array to "lstm/state_feed" as-is, as fetched previously from the "lstm/initial_state" node.
I got two errors: one that input_feed is expecting a 64-bit int, and
"java.lang.IllegalArgumentException: -input rank(-1) <= split_dim < input rank (1), but got 1" at lstm/state_feed.
For the first error, I changed the input_feed data type from 32-bit to 64-bit int.
The second error means it is expecting a rank-two tensor.
If you look at the TensorFlow Java sources, the float array we are feeding is converted to a rank-one tensor; we would need to feed the data so that a rank-two tensor is created, but at present I didn't find an exposed API there to feed a multidimensional float array.
While browsing the TensorFlow Java source I found an API, not exposed as an Android API, with which we can create a rank-two tensor. So I rebuilt both libtensorflow_inference.so and libandroid_tensorflow_inference_java.jar, enabling the rank-two tensor creation call (for the build process refer to https://blog.mindorks.com/android-tensorflow-machine-learning-example-ff0e9b2654cc).
Now I am able to run inference on Android and get one caption for the image, but the accuracy is very low.
The reason I'm limited to one caption is that I didn't find a way to fetch the outputs as a multidimensional array, which is required for generating more captions for a single image.
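As a side note, one approach that may work even with the stock TensorFlowInferenceInterface is to flatten the 2-D data row-major and pass the shape explicitly as dims. This is only a sketch under the assumption that the flattened layout matches what lstm/state_feed expects; vocabSize and the method name are placeholders:
// Sketch: feed a [batch, stateSize] state matrix by flattening it row-major and
// passing the shape as dims; fetch results into flat buffers and slice them afterwards.
void feedFlattenedState(TensorFlowInferenceInterface tf, long[] inputFeed,
                        float[][] stateFeed, int vocabSize) {
    int batch = stateFeed.length;
    int stateSize = stateFeed[0].length;
    float[] flatState = new float[batch * stateSize];
    for (int i = 0; i < batch; i++) {
        System.arraycopy(stateFeed[i], 0, flatState, i * stateSize, stateSize);
    }
    tf.feed("input_feed", inputFeed, inputFeed.length);
    tf.feed("lstm/state_feed", flatState, batch, stateSize);
    tf.run(new String[]{"softmax", "lstm/state"});
    float[] flatSoftmax = new float[batch * vocabSize];  // row k = probabilities for caption k
    float[] flatNewState = new float[batch * stateSize]; // row k = new state for caption k
    tf.fetch("softmax", flatSoftmax);
    tf.fetch("lstm/state", flatNewState);
}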
String actualFilename = labelFilename.split("file:///android_asset/")[1];
vocab = new Vocabulary(assetManager.open(actualFilename));
inferenceInterface = new TensorFlowInferenceInterface(assetManager, modelFilename);
final Graph g = c.inferenceInterface.graph();
final Operation inputOperation = g.operation(inputName);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + inputName + "'");
}
final Operation outPutOperation = g.operation(outputName);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + outputName + "'");
}
// The shape of the output is [N, NUM_CLASSES], where N is the batch size.
int numClasses = (int) inferenceInterface.graph().operation(outputName)
.output(0).shape().size(1);
Log.i(TAG, "Read " + vocab.totalWords() + " labels, output layer size is " + numClasses);
// Ideally, inputSize could have been retrieved from the shape of the input operation. Alas,
// the placeholder node for input in the graphdef typically used does not specify a shape, so it
// must be passed in as a parameter.
this.inputSize = inputSize; // keep the configured input size, since the graph's placeholder has no shape
// Pre-allocate buffers.
outputNames = new String[]{outputName + ":0"};
outputs = new float[numClasses];
inferenceInterface.feed(inputName + ":0", pixels, inputSize, inputSize, 3);
inferenceInterface.run(outputNames, runStats);
inferenceInterface.fetch(outputName + ":0", outputs);
startIm2txtBeamSearch(outputs);
//Implemented Beam search in JAVA
private void startIm2txtBeamSearch(float[] outputs) {
int beam_size = 1;
//TODO:Prepare vocab ids from file
ArrayList<Integer> vocab_ids = new ArrayList<>();
vocab_ids.add(1);
int vocab_end_id = 2;
float length_normalization_factor = 0;
int maxCaptionLength = 20;
Graph g = inferenceInterface.graph();
//node input feed
String input_feed_node_name = "input_feed";
Operation inputOperation = g.operation(input_feed_node_name);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + input_feed_node_name + "'");
}
String output_feed_node_name = "softmax";
Operation outPutOperation = g.operation(output_feed_node_name);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + output_feed_node_name + "'");
}
int output_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
Log.i(TAG, "Output layer " + output_feed_node_name + ", output layer size is " + output_feed_node_numClasses);
FloatBuffer output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
//float [][] output_feed_output = new float[numClasses][];
//node state feed
String input_state_feed_node_name = "lstm/state_feed";
inputOperation = g.operation(input_state_feed_node_name);
if (inputOperation == null) {
throw new RuntimeException("Failed to find input Node '" + input_state_feed_node_name + "'");
}
String output_state_feed_node_name = "lstm/state";
outPutOperation = g.operation(output_state_feed_node_name);
if (outPutOperation == null) {
throw new RuntimeException("Failed to find output Node '" + output_state_feed_node_name + "'");
}
int output_state_feed_node_numClasses = (int) outPutOperation.output(0).shape().size(1);
Log.i(TAG, "Output layer " + output_state_feed_node_name + ", output layer size is " + output_state_feed_node_numClasses);
FloatBuffer output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
//float[][] output_state_output= new float[numClasses][];
String[] output_nodes = new String[]{output_feed_node_name, output_state_feed_node_name};
Caption initialBeam = new Caption(vocab_ids, outputs, (float) 0.0, (float) 0.0);
TopN partialCaptions = new TopN(beam_size);
partialCaptions.push(initialBeam);
TopN completeCaption = new TopN(beam_size);
captionLengthLoop:
for (int i = maxCaptionLength; i >= 0; i--) {
List<Caption> partialCaptionsList = new LinkedList<>(partialCaptions.extract(false));
partialCaptions.reset();
long[] input_feed = new long[partialCaptionsList.size()];
float[][] state_feed = new float[partialCaptionsList.size()][];
for (int j = 0; j < partialCaptionsList.size(); j++) {
Caption curCaption = partialCaptionsList.get(j);
ArrayList<Integer> senArray = curCaption.getSentence();
input_feed[j] = senArray.get(senArray.size() - 1);
state_feed[j] = curCaption.getState();
}
//feeding
inferenceInterface.feed(input_feed_node_name, input_feed, new long[]{input_feed.length});
inferenceInterface.feed(input_state_feed_node_name, state_feed, new long[]{state_feed.length});
//run
inferenceInterface.run(output_nodes, runStats);
//fetching
inferenceInterface.fetch(output_feed_node_name, output_feed_output);
inferenceInterface.fetch(output_state_feed_node_name, output_state_output);
float[] word_probabilities = new float[partialCaptionsList.size()];
float[] new_state = new float[partialCaptionsList.size()];
for (int k = 0; k < partialCaptionsList.size(); k++) {
word_probabilities = output_feed_output.array();
//output_feed_output.get(word_probabilities[k]);
new_state = output_state_output.array();
//output_feed_output.get(state[k]);
// For this partial caption, get the beam_size most probable next words.
Map<Integer, Float> word_and_probs = new LinkedHashMap<>();
//key is index of probability; value is index = word
for (int l = 0; l < word_probabilities.length; l++) {
word_and_probs.put(l, word_probabilities[l]);
}
//sorting
// word_and_probs = word_and_probs.entrySet().stream()
// .sorted(Map.Entry.comparingByValue())
// .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue,(e1, e2) -> e1, LinkedHashMap::new));
word_and_probs = MapUtil.sortByValue(word_and_probs);
//considering first (beam size probabilities)
LinkedHashMap<Integer, Float> final_word_and_probs = new LinkedHashMap<>();
for (int key : word_and_probs.keySet()) {
final_word_and_probs.put(key, word_and_probs.get(key));
if (final_word_and_probs.size() == beam_size)
break;
}
for (int w : final_word_and_probs.keySet()) {
float p = final_word_and_probs.get(w);
if (p < 1e-12) {//# Avoid log(0).
Log.d(TAG, "p is < 1e-12");
continue;
}
Caption partialCaption = partialCaptionsList.get(k);
ArrayList<Integer> sentence = new ArrayList<>(partialCaption.getSentence());
sentence.add(w);
float logprob = (float) (partialCaption.getPorb() + Math.log(p));
float score = logprob;
Caption beam = new Caption(sentence, new_state, logprob, score);
if (w == vocab_end_id) {
completeCaption.push(beam);
} else {
partialCaptions.push(beam);
}
}
if (partialCaptions.getSize() == 0)//run out of partial candidates; happens when beam_size = 1.
break captionLengthLoop;
}
//clear buffer retrieve sub sequent output
output_feed_output.clear();
output_state_output.clear();
output_feed_output = null;
output_state_output = null;
output_feed_output = FloatBuffer.allocate(output_feed_node_numClasses);
output_state_output = FloatBuffer.allocate(output_state_feed_node_numClasses);
Log.d(TAG, "----" + i + " Iteration completed----");
}
Log.d(TAG, "----Total Iterations completed----");
LinkedList<Caption> completeCaptions = completeCaption.extract(true);
for (Caption cap : completeCaptions) {
ArrayList<Integer> wordids = cap.getSentence();
StringBuffer caption = new StringBuffer();
boolean isFirst = true;
for (int word : wordids) {
if (!isFirst)
caption.append(" ");
caption.append(vocab.getWord(word));
isFirst = false;
}
Log.d(TAG, "Cap score = " + Math.exp(cap.getScore()) + " and Caption is " + caption);
}
}
//Vocab
public class Vocabulary {
String TAG = Vocabulary.class.getSimpleName();
String start_word = "<S>", end_word = "</S>", unk_word = "<UNK>";
ArrayList<String> words;
public Vocabulary(File vocab_file) {
loadVocabsFromFile(vocab_file);
}
public Vocabulary(InputStream vocab_file_stream) {
words = readLinesFromFileAndLoadWords(new InputStreamReader(vocab_file_stream));
}
public Vocabulary(String vocab_file_path) {
File vocabFile = new File(vocab_file_path);
loadVocabsFromFile(vocabFile);
}
private void loadVocabsFromFile(File vocabFile) {
try {
this.words = readLinesFromFileAndLoadWords(new FileReader(vocabFile));
//Log.d(TAG, "Words read from file = " + words.size());
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
private ArrayList<String> readLinesFromFileAndLoadWords(InputStreamReader file_reader) {
ArrayList<String> words = new ArrayList<>();
try (BufferedReader br = new BufferedReader(file_reader)) {
String line;
while ((line = br.readLine()) != null) {
// process the line.
words.add(line.split(" ")[0].trim());
}
br.close();
if (!words.contains(unk_word))
words.add(unk_word);
} catch (IOException e) {
e.printStackTrace();
}
return words;
}
public String getWord(int word_id) {
if (words != null)
if (word_id >= 0 && word_id < words.size())
return words.get(word_id);
return "No word found, Maybe Vocab File not loaded";
}
public int totalWords() {
if (words != null)
return words.size();
return 0;
}
}
//MapUtil
public class MapUtil {
public static <K, V extends Comparable<? super V>> Map<K, V> sortByValue(Map<K, V> map) {
List<Map.Entry<K, V>> list = new ArrayList<>(map.entrySet());
list.sort(new Comparator<Map.Entry<K, V>>() {
@Override
public int compare(Map.Entry<K, V> o1, Map.Entry<K, V> o2) {
if (o1.getValue() instanceof Float && o2.getValue() instanceof Float) {
Float o1Float = (Float) o1.getValue();
Float o2Float = (Float) o2.getValue();
return o1Float >= o2Float ? -1 : 1;
}
return 0;
}
});
Map<K, V> result = new LinkedHashMap<>();
for (Map.Entry<K, V> entry : list) {
result.put(entry.getKey(), entry.getValue());
}
return result;
}
}
//Caption
public class Caption implements Comparable<Caption> {
private ArrayList<Integer> sentence;
private float[] state;
private float porb;
private float score;
public Caption(ArrayList<Integer> sentence, float[] state, float porb, float score) {
this.sentence = sentence;
this.state = state;
this.porb = porb;
this.score = score;
}
public ArrayList<Integer> getSentence() {
return sentence;
}
public void setSentence(ArrayList<Integer> sentence) {
this.sentence = sentence;
}
public float[] getState() {
return state;
}
public void setState(float[] state) {
this.state = state;
}
public float getPorb() {
return porb;
}
public void setPorb(float porb) {
this.porb = porb;
}
public float getScore() {
return score;
}
public void setScore(float score) {
this.score = score;
}
@Override
public int compareTo(@NonNull Caption oc) {
if (score == oc.score)
return 0;
if (score < oc.score)
return -1;
else
return 1;
}
}
//TopN
public class TopN {
//Maintains the top n elements of an incrementally provided set.
int n;
LinkedList<Caption> data;
public TopN(int n) {
this.n = n;
this.data = new LinkedList<>();
}
public int getSize() {
if (data != null)
return data.size();
return 0;
}
//Pushes a new element
public void push(Caption x) {
if (data != null) {
if (getSize() < n) {
data.add(x);
} else {
data.removeLast();
data.add(x);
}
}
}
//Extracts all elements from the TopN. This is a destructive operation.
//The only method that can be called immediately after extract() is reset().
//Args:
//sort: Whether to return the elements in descending sorted order.
//Returns: A list of data; the top n elements provided to the set.
public LinkedList<Caption> extract(boolean sort) {
if (sort) {
Collections.sort(data);
}
return data;
}
//Returns the TopN to an empty state.
public void reset() {
if (data != null) data.clear();
}
}
Even though the accuracy is very low, I am sharing this because it might be useful for anyone trying to load Show and Tell models on Android.
I have implemented two simple methods:
@Override
protected void addDataSet(int day) {
LineData lineData = this.lineChart.getData();
if(lineData != null) {
ArrayList<Entry> yValues = new ArrayList<Entry>();
for(int i = 0; i < this.measureDataListEntries.size(); i++) {
String stringValue = this.measureDataListEntries.get(i).getValue();
int dayOfWeek = Helper.getDayOfWeek(this.measureDataListEntries.get(i).getTime());
float value = Float.parseFloat(stringValue);
if(dayOfWeek == day) {
yValues.add(new Entry(value, i));
}
}
String label = this.getLabel(day);
int color = this.getColor(day);
LineDataSet lineDataSet = new LineDataSet(yValues, label);
lineDataSet.setColor(color);
lineDataSet.setCircleColor(color);
lineDataSet.setLineWidth(1f);
lineDataSet.setCircleSize(4f);
lineDataSet.setFillAlpha(65);
lineData.addDataSet(lineDataSet);
this.lineChart.notifyDataSetChanged();
this.lineChart.invalidate();
this.lineChart.animateX(1000);
if(yValues.size() > 0) {
this.getCheckBox(day).setEnabled(true);
}
}
}
@Override
protected void removeDataSet(int day) {
LineData lineData = this.lineChart.getData();
if(lineData != null) {
String label = this.getLabel(day);
lineData.removeDataSet(lineData.getDataSetByLabel(label, true));
this.lineChart.notifyDataSetChanged();
this.lineChart.invalidate();
this.lineChart.animateX(1000);
}
}
At startup I add seven different datasets: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday. Adding and removing datasets works for all days except the one at the first position of the data, in this case Monday. The remove method gets called correctly, but the dataset does not get removed. Adding always works.
Only the dataset at the first position can't be removed.
Is there a workaround?
EDIT
The code used for the deletion from MPAndroidChart is the following:
public T getDataSetByLabel(String label, boolean ignorecase) {
    int index = getDataSetIndexByLabel(mDataSets, label, ignorecase);
    if (index <= 0 || index >= mDataSets.size())
        return null;
    else
        return mDataSets.get(index);
}
Why is there <= 0 and not just < 0?
Of course, adding a dummy dataset at the first position would make it work, but I've never been a fan of such ugly hacks. Why doesn't it accept index = 0 for deleting?
This is already fixed. Use the latest version of the library.
Refer this: https://github.com/PhilJay/MPAndroidChart/issues/255
Fixed since 16th December, 2014.
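If you are stuck on an older version for some reason, a possible workaround (just a sketch against the old API, with lineChart and label as in the methods above) is to look up the index yourself and remove by index instead of by label:
// Workaround sketch: match the label manually so index 0 is not rejected
// by the buggy getDataSetByLabel().
LineData lineData = this.lineChart.getData();
for (int i = 0; i < lineData.getDataSetCount(); i++) {
    if (lineData.getDataSetByIndex(i).getLabel().equalsIgnoreCase(label)) {
        lineData.removeDataSet(i);
        break;
    }
}
this.lineChart.notifyDataSetChanged();
this.lineChart.invalidate();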
I know that to set a style on a Spannable, we can use setSpan(Object classOfStyle, int start, int end, int flags).
I want to set an AlignmentSpan on the current paragraph, where the current paragraph is detected from the current cursor position. Is that possible? Can I get the start position and end position of the paragraph?
Edit:
A paragraph is a group of sentences. A paragraph is ended by the "Enter" character.
I've written a small helper class (optimized for speed) that will do exactly what you want. You pass a Spannable object which will be parsed to find all paragraphs:
public class Layout {

    private static final Pattern LINEBREAK_PATTERN = Pattern.compile("\\r\\n|\\r|\\n");

    private int mNrOfLines = 0;
    private ArrayList<Selection> mSelection = new ArrayList<Selection>();

    public Layout(Spannable spannable) {
        String s = spannable.toString();

        // remove the trailing line feeds
        int len = s.length();
        char c = len > 0 ? s.charAt(len - 1) : '-';
        while (len > 0 && (c == '\n' || c == '\r')) {
            len--;
            c = s.charAt(len - 1);
        }

        // now find the line breaks and the according lines / paragraphs
        mNrOfLines = 1;
        Matcher m = LINEBREAK_PATTERN.matcher(s.substring(0, len));
        int groupStart = 0;
        while (m.find()) {
            mSelection.add(new Selection(groupStart, m.end()));
            groupStart = m.end();
            mNrOfLines++;
        }
        if (groupStart < len) {
            mSelection.add(new Selection(groupStart, len));
        }
    }

    public List<Selection> getParagraphs() {
        return mSelection;
    }

    public int getLineForOffset(int offset) {
        int lineNr = 0;
        while (lineNr < mNrOfLines && offset >= mSelection.get(lineNr).end()) {
            lineNr++;
        }
        return Math.min(Math.max(0, lineNr), mSelection.size() - 1);
    }

    public int getLineStart(int line) {
        return mNrOfLines == 0 || line < 0 ? 0 :
               line < mNrOfLines ? mSelection.get(line).start() :
               mSelection.get(mNrOfLines - 1).end() - 1;
    }

    public int getLineEnd(int line) {
        return mNrOfLines == 0 || line < 0 ? 0 :
               line < mNrOfLines ? mSelection.get(line).end() :
               mSelection.get(mNrOfLines - 1).end() - 1;
    }
}
The three methods getLineForOffset, getLineStart and getLineEnd can be used to find the paragraphs for the current Selection. The Selection can be either the current cursor position (which is basically a Selection with start==end) or the selected text. The following code will return the selected paragraphs for the current selection:
Selection getParagraphs(EditText editor) {
    Layout layout = new Layout(editor.getEditableText());
    int selStart = editor.getSelectionStart();
    int selEnd = editor.getSelectionEnd();
    int firstLine = layout.getLineForOffset(selStart);
    int end = selStart == selEnd ? selEnd : selEnd - 1;
    int lastLine = layout.getLineForOffset(end);
    return new Selection(layout.getLineStart(firstLine), layout.getLineEnd(lastLine));
}
If e.g. your text is:
line 1
l[ine 2
line ]3
line 4
with everything between the [] selected, getParagraphs() would return the second and third lines (it basically expands the selected text to include all, even partially, selected paragraphs).
I wrote some methods to get the first position and the last position of a paragraph.
I don't know which one has better performance. Please correct me if it's wrong.
public static int getFirstPositionOfParagraph(String paragraph, int cursorPosition) {
    StringBuffer buffer = new StringBuffer(paragraph);
    List<Integer> pos = new ArrayList<>();
    for (String breaker : LINE_BREAK) {
        pos.add(buffer.lastIndexOf(breaker, cursorPosition));
    }
    int firstPosition = Collections.max(pos);
    return firstPosition;
}

public static int getLastPositionOfParagraph(String paragraph, int cursorPosition) {
    StringBuffer buffer = new StringBuffer(paragraph);
    List<Integer> pos = new ArrayList<>();
    for (String breaker : LINE_BREAK) {
        int breakerPos = buffer.indexOf(breaker, cursorPosition);
        if (breakerPos > 0)
            pos.add(breakerPos);
    }
    return pos.size() > 0 ? Collections.min(pos) + 1 : paragraph.length();
}
I am working on pie charts using the AChartEngine library. I want to prevent chart values from being shown on the chart, but only those whose value is 0. If anyone knows how to do this, please share.
Code:
pchart = ChartFactory.getPieChartView(this, buildCategoryDataset("Daily Basis", values), renderer);
re.addView(pchart);
BuildCategory Renderer:
protected DefaultRenderer buildCategoryRenderer(int[] colors) {
DefaultRenderer renderer = new DefaultRenderer();
renderer.setLabelsTextSize(10);
renderer.setLegendTextSize(15);
renderer.setShowLegend(true);
renderer.setExternalZoomEnabled(true);
renderer.setZoomEnabled(true);
renderer.setZoomButtonsVisible(true);
renderer.setShowAxes(true);
// renderer.setMargins(new int[] { 20, 30, 15, 0 });
for (int color : colors) {
SimpleSeriesRenderer r = new SimpleSeriesRenderer();
r.setColor(color);
renderer.addSeriesRenderer(r);
}
return renderer;
}
DataSet Here:
protected CategorySeries buildCategoryDataset(String title, double[] values) {
CategorySeries series = new CategorySeries(title);
int k = 0,year;
String str_year;
str_year=String.valueOf(Math.round(values[0]));
Log.i("Invent","Year"+str_year);
year=Integer.parseInt(str_year);
Log.i("Invent","YearInt"+year);
if(flag_val==1)
{
for(k=0;k<tot_yearno-1;k++)
{
if(values[k]==0)
{
series.add(0.0); // here I add 0.0 for the empty value, but I want to display nothing at all
}
else
series.add("" + year_first++, values[k]);
}
}
if(flag_val==0)
{
for (double value : values) {
if(value==0)
{
series.add("" + ++k, 0.0);
}
else
series.add("Day " + ++k, value);
}
}
return series;
}
}
What you are doing here:
if(values[k]==0)
{
series.add(0.0); // here I add 0.0 for the empty value, but I want to display nothing at all
}
else
series.add("" + year_first++, values[k]);
}
You should do it like this:
if(values[k] != 0) // or possibly (values[k] > 0), since you cannot show negatives either
{
series.add("" + year_first++, values[k]);
}
EDIT: Try this then:
if(values[k]==0)
{
series.add("" + year_first++, new Double(0.0));
}
else
{
series.add("" + year_first++, new Double(values[k]));
}
If you do not wish to show 0 values, just don't add them to the dataset.
Look at the createDataSet in the PieChartView for an example - there they add the values with the label.
private static PieDataset createDataSet() {
    DefaultPieDataset dataset = new DefaultPieDataset();
    dataset.setValue("One", new Double(43.2));
    return dataset;
}
Now when you add your own values (I am not sure how you do it, but I pass them through a Bundle and then extract them for the following method), you can simply skip adding them based on their value:
private static PieDataset createDataSet(Double[] values, String[] labels) {
    DefaultPieDataset dataset = new DefaultPieDataset();
    for (int i = 0; i < values.length; i++) {
        if (values[i] > 0)
            dataset.setValue(labels[i], values[i]);
    }
    return dataset;
}
PS: I wouldn't recommend this, by the way. Your users may become confused as to why some data shows up and other data does not. Rather, put the value in brackets with the label so they can see that the value is 0.
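If you go that route, the change is just to fold the value into the label when adding to the CategorySeries; a rough sketch based on the loop from the question:
// Sketch: keep every slice, but show the value in the label so users can see it is 0.
for (int k = 0; k < values.length; k++) {
    String label = "Day " + (k + 1);
    if (values[k] == 0) {
        series.add(label + " (0)", 0.0);
    } else {
        series.add(label, values[k]);
    }
}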
Do it something like this, before showing the chart view, as I have done for a TimeChart:
for (int i = 0; i < selectedCollectionY.size(); i++) {
    if (selectedCollectionY.get(i).equalsIgnoreCase("0")) {
        selectedCollectionY.remove(i);
        selectedCollectionX.remove(i);
        i--; // step back so the element that shifted into this slot is checked too
    }
}
View mView = null;
mView = ChartFactory.getTimeChartView(GraphActivity.this, Line_Chart
.getLineDemoDataset(mSpinner.getSelectedItem().toString(),
selectedCollectionX, selectedCollectionY), Line_Chart
.getLineDemoRenderer(mSpinner.getSelectedItem().toString()),
"MMM dd");
mView.setLayoutParams(new LayoutParams(
LinearLayout.LayoutParams.WRAP_CONTENT,
LinearLayout.LayoutParams.WRAP_CONTENT));
mLinearLayout.removeAllViews();
mLinearLayout.addView(mView);