What does the "..." in each function mean? And why is there no "..." in the last function?
private class DownloadFilesTask extends AsyncTask<URL, Integer, Long> {
protected Long doInBackground(URL... urls) {
int count = urls.length;
long totalSize = 0;
for (int i = 0; i < count; i++) {
totalSize += Downloader.downloadFile(urls[i]);
publishProgress((int) ((i / (float) count) * 100));
// Escape early if cancel() is called
if (isCancelled()) break;
}
return totalSize;
}
protected void onProgressUpdate(Integer... progress) {
setProgressPercent(progress[0]);
}
protected void onPostExecute(Long result) {
showDialog("Downloaded " + result + " bytes");
}
}
As Morrison said, the ... syntax is for a variable-length list of arguments (urls can hold more than one URL).
This is typically used to allow users of the AsyncTask to do things like (in your case) pass in more than one URL to be fetched in the background. If you only have one URL, you would use your DownloadFilesTask like this:
DownloadFilesTask worker = new DownloadFilesTask();
worker.execute(new URL("http://google.com"));
or with multiple URLs, do this:
worker.execute(new URL[]{ new URL("http://google.com"),
new URL("http://stackoverflow.com") });
The onProgressUpdate() is used to let the background task communicate progress to the UI. Since the background task might involve multiple jobs (one for each URL parameter), it may make sense to publish separate progress values (e.g. 0 to 100% complete) for each task. You don't have to. Your background task could certainly choose to calculate a total progress value, and pass that single value to onProgressUpdate().
The onPostExecute() method is a little different. It processes a single result, from the set of operations that were done in doInBackground(). For example, if you download multiple URLs, then you might return a failure code if any of them failed. The input parameter to onPostExecute() will be whatever value you return from doInBackground(). That's why, in this case, they are both Long values.
If doInBackground() returns totalSize, then that value will be passed on to onPostExecute(), where it can be used to inform the user what happened, or for whatever other post-processing you like.
If you really need to communicate multiple results as a result of your background task, you can certainly change the Long generic parameter to something other than a Long (e.g. some kind of collection).
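For instance, a minimal sketch of that idea (hypothetical, reusing the Downloader and showDialog names from the question) could return one size per URL:
private class DownloadFilesTask extends AsyncTask<URL, Integer, List<Long>> {
    protected List<Long> doInBackground(URL... urls) {
        List<Long> sizes = new ArrayList<>();   // one entry per downloaded URL
        for (int i = 0; i < urls.length; i++) {
            sizes.add(Downloader.downloadFile(urls[i]));
            publishProgress((int) ((i / (float) urls.length) * 100));
        }
        return sizes;
    }
    protected void onPostExecute(List<Long> results) {
        // results.size() tells you how many downloads completed
        showDialog("Downloaded " + results.size() + " files");
    }
}
(This assumes java.util.List and ArrayList are imported; the third generic parameter just has to match what doInBackground() returns and onPostExecute() receives.)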
In Java this is called varargs, which allows for a variable number of parameters.
http://docs.oracle.com/javase/1.5.0/docs/guide/language/varargs.html
The three dots ... are an ellipsis. In the Java language they are used to indicate varargs (a variable number of arguments).
Let me explain a little bit about varargs:
Varargs allows a method to accept zero or multiple arguments. If we don't know how many arguments we will have to pass to the method, varargs is the better approach.
Syntax of varargs:
Varargs uses an ellipsis, i.e. three dots, after the data type. The syntax is as follows:
return_type method_name(data_type... variableName){}
A simple example of varargs in Java:
class VarargsExample1 {
    static void display(String... values) {
        System.out.println("display method invoked ");
    }
    public static void main(String args[]) {
        display();                              // zero arguments
        display("my", "name", "is", "varargs"); // four arguments
    }
}
Rules for varargs:
While using varargs you must follow some rules, otherwise the code won't compile:
There can be only one varargs parameter in a method.
The varargs parameter must be the last parameter.
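For example (a made-up snippet just to show the rules):
// Compiles: the varargs parameter is the last one.
static void ok(String label, int... values) { }

// Does not compile: varargs must be the last parameter.
// static void wrong(int... values, String label) { }

// Does not compile: only one varargs parameter is allowed.
// static void alsoWrong(int... a, int... b) { }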
Very short (and basic) answer:
That represents a variable number of items "converted" to an array and it should be the last argument.
Example:
test("string", false, 20, 75, 31);
void test(String string, boolean bool, int... integers) {
// string = "string"
// bool = false
// integers[0] = 20
// integers[1] = 75
// integers[2] = 31
}
But you can also call
test("text", true, 15);
or
test("wow", true, 1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 123, 345, 9123);
Related
My tide prediction application uses 8 double arrays for tide height calculations. Literally every tide station in the United States requires these to have 37 elements, EXCEPT Anchorage, Alaska which requires 124 elements.
Here is a declaration example
final int NUM_C = 37; //all stations except anchorage use 37
//final int NUM_C = 124; //anchorage uses 124
double a[] = new double[NUM_C + 1];
Can I efficiently specify the array size at the start up of the app? I can determine which is needed. I don't want to burden the application with inefficiency for 99% + of the users to handle this one case. The difference is only about 3K bytes.
Why don't you instantiate the variable in the constructor? It gives you more freedom for programmatic manipulation.
public class Station {
double a[];
public Station(String location) {
if(location.equals("Anchorage")) {
a = new double[124];
} else {
a = new double[37];
}
}
}
As I understand it, instantiating object fields in the constructor is the normal case, while instantiation at the declaration is just an additional convenience Java offers.
As for speed, it makes no difference whether you specify the size with a literal value, a constant, or a variable. A more interesting question is whether you should use an ArrayList instead of an array. See here.
public class Station {
ArrayList<Double> a;
public Station(String location) {
if(location.equals("Anchorage")) {
a = new ArrayList<>(124);
} else {
a = new ArrayList<>(37);
}
}
}
My choice would be ArrayList as it is more flexible. Eight times 124 is not a very large number anyway. No reason to worry about performance for this.
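One caveat worth adding: the number passed to the ArrayList constructor is only the initial capacity, not the size. new ArrayList<>(124) starts out empty, so unlike the array you cannot index into it right away; you have to add the elements first. A small sketch, using the NUM_C constant from the question:
List<Double> a = new ArrayList<>(NUM_C + 1);
for (int i = 0; i <= NUM_C; i++) {
    a.add(0.0);   // pre-fill so a.get(i) / a.set(i, value) behave like a[i]
}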
I have tried figuring this out, but it doesn't add up. The data doesn't appear as it should.
First I generate dummy data. This is done async because I need time between the calls to System.currentTimeMillis to get some spacing between them. (Ignore the messy code here; this is just debug data that will not be in the release. Using Thread.sleep on the main thread is a bad idea considering ANRs.)
public class AsyncGeneration extends AsyncTask<String, String, String>{
public AsyncGeneration() {
super();
}
@Override
protected void onPreExecute() {
super.onPreExecute();
}
@Override
protected void onPostExecute(String s) {
super.onPostExecute(s);
}
@Override
protected void onProgressUpdate(String... values) {
super.onProgressUpdate(values);
}
@Override
protected void onCancelled(String s) {
super.onCancelled(s);
}
@Override
protected void onCancelled() {
super.onCancelled();
}
@Override
protected String doInBackground(String... strings) {
while(root == null) {
try {
Thread.sleep(200);
}catch(InterruptedException e){}
}
List<Entry> rdata = new ArrayList<>();
for(int i = 1; i < 30; i++){
try{
Thread.sleep(200);
}catch(Exception e){
//IGNORE
}
float time = System.currentTimeMillis();
Log.e("GChart", "Timex: " + time);
Session s = new Session(r.nextInt(5000), time);//Replace time with the index in the for-loop and it works for some reason
rdata.add(new Entry(s.getXValue(), s.getYValue()));
Log.e("GChart", "Timey: " + s.getXValue());
}
final List<Entry> entries = rdata;
OverviewFragment.this.getActivity().runOnUiThread(() ->{
LineDataSet data = new LineDataSet(entries, "Distance");
data.setCircleColor(Color.parseColor("#FF0000"));
LineData lineData = new LineData(data);
tab1chart.setData(lineData);
tab1chart.invalidate(); // refresh
tab1chart.getXAxis().setValueFormatter(new DateFormatter());
tab1chart.getXAxis().setPosition(XAxis.XAxisPosition.BOTTOM);
tab1chart.getXAxis().setTextSize(10f);
tab1chart.getXAxis().setTextColor(Color.RED);
tab1chart.getXAxis().setDrawAxisLine(true);
tab1chart.getXAxis().setDrawGridLines(true);
tab1chart.getAxisLeft().setValueFormatter(new DistanceValueFormatter());
tab1chart.getAxisLeft().setDrawGridLines(true);
Log.v("Chart", "Chart data loaded and invalidated");//This prints
});
return null;
}
}
So far everything looks fine. The data gets put into the chart, no exceptions, no crashes.
When the chart renders, a single data point shows up at the far-left of the chart. I generate 30 data points, one shows up.
That's issue #1: Only one data point shows up.
Issue #2 is slightly harder. The entire X axis at the bottom disappears. The X axis is gone, and when zooming, the Y axis and its text also disappear. This is fairly hard to explain, so here is a screenshot:
It is worth mentioning that if I pass i in the for-loop as the time, it shows up just as expected: all the axes are in place, and zoom doesn't break anything.
(Float values can take the same values as Longs except they have decimals in addition.)
And in addition, I format the data:
public class DateFormatter implements IAxisValueFormatter {
SimpleDateFormat formatter;
public DateFormatter(){
formatter = new SimpleDateFormat("dd/MM/yy, HH:mm");
}
@Override
public String getFormattedValue(float value, AxisBase axis) {
//This method is never called. Nothing is returned
if(axis instanceof XAxis) {
String formatted = formatter.format(new Date((long) value));
Log.d("Formatter", "Formatted \"" + value + "\" to \"" + formatted + "\".");
return formatted;
}
return "Not supported";
}
}
Removing the formatter doesn't change anything, the axis is still gone. Zoom still breaks the Y axis.
So my question is: How do I fix this? I don't get why the chart doesn't work when I pass the current time in milliseconds (and I checked the values with debug output; floats can handle it). Earlier I got some debug output, which eventually stopped appearing at all, showing that the values passed to the formatter were < 2000. I can't reproduce this any more, though.
The chart itself doesn't appear to be broken, but from touch events it looks like every single point is pushed into the same X coordinate, while only one point renders. When I touch the chart, the orange-ish lines show up indicating the position of a data point, and they pop up on points that aren't visible.
When I pass the for-loop index as the X value, it works as expected and the formatter works fine (leaving aside the fact that it shows the date as 1970; the index is interpreted as milliseconds into the epoch, so that is to be expected).
I looked at this as well on formatting the date. When I then try passing the milliseconds since the epoch, the chart stops working.
I'm using MPChart v 3.0.2, compiling against Android 26, using Java 8, and running Android Studio 3.0 beta 2
As for the layout, it is just a LinearLayout with a LineChart in it.
This should work, but it breaks when I pass it the current time in milliseconds for some reason. Passing any other data (as long as the numbers aren't that big) works fine.
The two images are not how the chart is supposed to look. The X axis is supposed to be visible, but for some reason it isn't along with a large amount of the data.
This is a known issue with Android MPChart (have a read of this thread). Time-series charts are supported in MPChart - you can set time (in millis) as X values for hourly charts.
But many consecutive data points won't plot correctly in line charts, because the Entry object only accepts float values due to performance constraints, and millisecond timestamps are too large to be represented exactly as floats.
So keep the first value as the reference, subtract it from each incoming value, and divide by some constant (say 1000). Your X values will then be 10, 20, 30, ... and so on.
Do the reverse logic in your axis value formatter to render the X axis labels properly (see the code snippet).
lineChart.getXAxis().setValueFormatter(new IAxisValueFormatter() {
    @Override
    public String getFormattedValue(float value, AxisBase axis) {
        SimpleDateFormat format2 = new SimpleDateFormat("HH:mm:ss");
        return format2.format(new Date(firstTimeStamp + ((long) value) * 1000L));
    }
});
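The forward conversion happens when you build the entries. A rough sketch of the same idea (timestampsMillis and yValues are placeholder names, and firstTimeStamp is the reference kept for the formatter above):
long firstTimeStamp = timestampsMillis.get(0);        // reference point in epoch millis
List<Entry> entries = new ArrayList<>();
for (int i = 0; i < timestampsMillis.size(); i++) {
    // offset in seconds from the first sample; small enough to survive the float conversion
    float x = (timestampsMillis.get(i) - firstTimeStamp) / 1000f;
    entries.add(new Entry(x, yValues.get(i)));
}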
I am in need of assistance. I will be computing a measured variable, then taking the top 100 values of these and averaging them. Please remember, I have been teaching myself only for the past 6 weeks and what is obvious to some, will not necessarily be obvious to me.
In essence, say 'double x' is the variable, that I have many close values for. What I need is a way to compute the sum (then average) of the top 100 of these values.
In my research, the closest thing I can see that would suit what I need is nextAfter(double start, double direction), and before this, using max to determine the maximum value. Would this be the correct starting point:
double xm = max(x);
static double (xm, x < xm);
My question is how to get the sum of the top 100 values (the maximum and 99 nextAfter's) - averaging would be easy - just dividing by 100.
To compute the average of the largest n values you read from the source, you need to store at least these values. Since at any given point before the end you don't know whether some of the overall largest n values will come later, you need to keep track of the largest n values seen so far.
A simple way to do that is to store the largest values in a heap or priority queue, since that allows easy adding of new values and finding (and removing) of the smallest of the stored values. The default PriorityQueue is well-suited for this task, since it uses the natural ordering of the elements, and thus polling removes the smallest of the stored elements. If one wanted to compute the average of the n smallest elements, one would need to use a PriorityQueue with a custom Comparator (or in this special case, simply negating all values and using the natural ordering would work too).
The lazy way (less code) to achieve the desired result is to simply add each incoming value to the queue, and if the queue's size then exceeds n (it can only be n + 1), remove the smallest element from the queue:
// vp is the value provider
while (vp.hasNext()) {
    // read the next value and add it to the queue
    pq.add(vp.nextValue());
    if (pq.size() > topSize) {
        pq.poll();
    }
}
A slightly more involved way is to first check whether the new value needs to be added, and only modify the queue when that is the case:
double newValue = vp.nextValue();
// Check if we have to put the new value in the queue;
// that is the case when the queue is not yet full, or the smallest
// stored value is smaller than the new one
if (pq.size() < topSize || pq.peek() < newValue) {
    // remove the smallest value from the queue only if it is full
    if (pq.size() == topSize) {
        pq.poll();
    }
    pq.add(newValue);
}
This way is potentially more efficient, since adding a value to the queue and removing the smallest are both O(log size) operations, while comparing to the smallest stored value is O(1). So if there are many values smaller than the n largest seen before, the second way saves some work.
If performance is critical, be aware that a PriorityQueue cannot store primitive types like double, so the storing (and retrieving for the average computation) involves boxing (wrapping a double value in a Double object) and unboxing (pulling the double value back out of a Double object), and consequently an indirection from the underlying array of the queue to the actual values. Those costs could be avoided by implementing a heap-based priority queue using a raw double[] yourself. (But that should rarely be necessary; usually, the cost of the boxing and indirections constitutes only a minute part of the overall processing.)
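For completeness, a minimal sketch of such a primitive min-heap (keeping the n largest values offered) could look like the following; the complete example below sticks with PriorityQueue, which is almost always the right choice:
class TopNDoubles {
    private final double[] heap;   // min-heap: heap[0] is the smallest kept value
    private int size = 0;

    TopNDoubles(int n) { heap = new double[n]; }

    void offer(double v) {
        if (size < heap.length) {
            // heap not full yet: append and sift up
            int i = size++;
            heap[i] = v;
            while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
                double t = heap[i]; heap[i] = heap[(i - 1) / 2]; heap[(i - 1) / 2] = t;
                i = (i - 1) / 2;
            }
        } else if (v > heap[0]) {
            // larger than the smallest kept value: replace the root and sift down
            heap[0] = v;
            int i = 0;
            while (true) {
                int l = 2 * i + 1, r = l + 1, smallest = i;
                if (l < size && heap[l] < heap[smallest]) smallest = l;
                if (r < size && heap[r] < heap[smallest]) smallest = r;
                if (smallest == i) break;
                double t = heap[i]; heap[i] = heap[smallest]; heap[smallest] = t;
                i = smallest;
            }
        }
    }

    double average() {
        double sum = 0;
        for (int i = 0; i < size; i++) sum += heap[i];
        return size == 0 ? 0 : sum / size;
    }
}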
A simple-minded complete working example:
import java.util.PriorityQueue;
/**
* Example class to collect the largest values from a stream and compute their
* average.
*/
public class Average {
// number of values we want to save
private int topSize;
// number of values read so far
private long count = 0;
// priority queue to save the largest topSize values
private PriorityQueue<Double> pq;
// source of read values, could be a file reader, a device reader, or whatever
private ValueProvider vp;
/**
* Construct an <code>Average</code> to sample the largest <code>n</code>
* values from the source.
*
* @param tops Number of values to save for averaging.
* @param v Source of the values to sample.
*
* @throws IllegalArgumentException when the specified number of values is less than one.
*/
public Average(int tops, ValueProvider v) throws IllegalArgumentException {
if (tops < 1) {
throw new IllegalArgumentException("Can't get the average of fewer than one value.");
}
topSize = tops;
vp = v;
// Initialise queue to needed capacity; topSize + 1, since we first add
// and then poll. Thus no resizing should ever be necessary.
pq = new PriorityQueue<Double>(topSize+1);
}
/**
* Compute the average of the values stored in the <code>PriorityQueue<Double></code>
*
* @param prio The queue to average.
* @return the average of the values stored in the queue.
*/
public static double average(PriorityQueue<Double> prio) throws IllegalArgumentException {
if (prio == null || prio.size() == 0) {
throw new IllegalArgumentException("Priority queue argument is null or empty.");
}
double sum = 0;
for(Double d : prio) {
sum += d;
}
return sum/prio.size();
}
/**
* Reads values from the provider until exhausted, reporting the average
* of the largest <code>topSize</code> values read so far from time to time
* and when the source is exhausted.
*/
public void collectAverage() {
while(vp.hasNext()) {
// read the next value and add it to the queue
pq.add(vp.nextValue());
++count;
// If the queue was already full, we now have
// topSize + 1 values in it, so we remove the smallest.
// That is, conveniently, what the default PriorityQueue<Double>
// gives us. If we wanted for example the smallest, we'd need
// to use a PriorityQueue with a custom Comparator (or negate
// the values).
if (pq.size() > topSize) {
pq.poll();
}
// Occasionally report the running average of the largest topSize
// values read so far. This may not be desired.
if (count % (topSize*25) == 0 || count < 11) {
System.out.printf("Average of top %d values after collecting %d is %f\n",
pq.size(), count, average(pq));
}
}
// Report final average. Returning the average would be a natural choice too.
System.out.printf("Average of top %d values of %d total is %f\n",
pq.size(), count, average(pq));
}
public static void main(String[] args) {
Average a = new Average(100, new SimpleProvider(123456));
a.collectAverage();
}
}
using the interface
/**
* Interface for a source of <code>double</code>s.
*/
public interface ValueProvider {
/**
* Gets the next value from the source.
*
* @return The next value if there is one.
* @throws RuntimeException if the source is exhausted.
*/
public double nextValue() throws RuntimeException;
/**
* Checks whether the source has more values to deliver.
*
* @return whether there is at least one more value to be obtained from the source.
*/
public boolean hasNext();
}
and implementing class
/**
* Simple provider of a stream of <code>double</code>s.
*/
public class SimpleProvider implements ValueProvider {
// State determining which value to return next.
private long state = 0;
// Last allowed state.
private final long end;
/**
* Construct a provider of <code>e</code> values.
*
* @param e the number of values to yield.
*/
public SimpleProvider(long e) {
end = e > 0 ? e : 0;
}
/**
* Default constructor to provide 10000 values.
*/
public SimpleProvider() {
this(10000);
}
public double nextValue() {
++state;
return Math.log(state)*Math.sin(state) + Math.cos(state/2.0);
}
public boolean hasNext() {
return state < end;
}
}
Calling the ORMLite RuntimeExceptionDao's createOrUpdate(...) method in my app is very slow.
I have a very simple object (Item) with 2 ints (one is the generatedId), a String and a double. I test the time it takes (roughly) to update the object in the database (100 times) with the code below. The log statement logs:
time to update 1 row 100 times: 3069
Why does it take 3 seconds to update an object 100 times, in a table with only 1 row? Is this the normal ORMLite speed? If not, what might be the problem?
RuntimeExceptionDao<Item, Integer> dao =
DatabaseManager.getInstance().getHelper().getReadingStateDao();
Item item = new Item();
long start = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
item.setViewMode(i);
dao.createOrUpdate(item);
}
long update = System.currentTimeMillis();
Log.v(TAG, "time to update 1 row 100 times: " + (update - start));
If I create 100 new rows then the speed is even slower.
Note: I am already using ormlite_config.txt. It logs "Loaded configuration for class ...Item" so this is not the problem.
Thanks.
This may be the "expected" speed, unfortunately. Make sure you are using ORMLite version 4.39 or higher. createOrUpdate(...) was using a more expensive method to test for the existence of the object in the database beforehand. But I suspect this is going to be a minimal speed improvement.
If I create 100 new rows then the speed is even slower.
By default Sqlite is in auto-commit mode. One thing to try is to wrap your inserts (or your createOrUpdates) using the ORMLite Dao.callBatchTasks(...) method.
In my BulkInsertsTest Android unit test, the following doInserts(...) method inserts 1000 items. When I just call it:
doInserts(dao);
It takes 7.3 seconds in my emulator. If I call it using the callBatchTasks(...) method, which wraps a transaction around the call in Android Sqlite:
dao.callBatchTasks(new Callable<Void>() {
public Void call() throws Exception {
doInserts(dao);
return null;
}
});
It takes 1.6 seconds. The same performance can be had by using the dao.setSavePoint(...) method. This starts a transaction but is not as good as the callBatchTasks(...) method because you have to make sure you close your own transaction:
DatabaseConnection conn = dao.startThreadConnection();
Savepoint savePoint = null;
try {
savePoint = conn.setSavePoint(null);
doInserts(dao);
} finally {
// commit at the end
conn.commit(savePoint);
dao.endThreadConnection(conn);
}
This also takes ~1.7 seconds.
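Applied to the loop in the question, that would look roughly like this (assuming your RuntimeExceptionDao exposes callBatchTasks(...) like the plain Dao does; item and dao need to be final to be captured by the anonymous class):
dao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        for (int i = 0; i < 100; i++) {
            item.setViewMode(i);
            dao.createOrUpdate(item);
        }
        return null;
    }
});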
I am working on an Android app which encounters performance issues.
My goal is to receive strings from an AsyncTask and display them in a TextView. The TextView is initially empty, and each time the background task sends a string, it is concatenated to the current content of the TextView.
I currently use a StringBuilder to store the main string and each time I receive a new string, I append it to the StringBuilder and call
myTextView.setText(myStringBuilder.toString())
The problem is that the background process can send up to 100 strings per second, and my method is not efficient enough.
Redrawing the whole TextView every time is obviously a bad idea (time complexity O(N²)), but I'm not seeing another solution...
Do you know of an alternative to TextView which could do these concatenations in O(N) ?
As long as there is a newline between the strings, you could use a ListView to append the strings and hold the strings themselves in an ArrayList or LinkedList to which you append as the AsyncTask receives the strings.
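A rough sketch of that approach (names are illustrative; this assumes an Activity with a ListView already in the layout):
// Backing list plus adapter; append to the list instead of rebuilding one big string.
List<String> lines = new ArrayList<>();
ArrayAdapter<String> adapter =
        new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, lines);
listView.setAdapter(adapter);

// Later, e.g. in the AsyncTask's onProgressUpdate(...):
lines.add(receivedString);
adapter.notifyDataSetChanged();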
You might also consider simply invalidating the TextField less frequently; say 10 times a second. This would certainly improve responsiveness. Something like the following could work:
static long lastTimeUpdated = 0;
if (receivedString.length() > 0)
{
    myStringBuilder.append(receivedString);
}
if ((System.currentTimeMillis() - lastTimeUpdated) > 100)
{
    myTextView.setText(myStringBuilder);
    lastTimeUpdated = System.currentTimeMillis(); // remember when we last pushed text to the view
}
If the strings come in bursts -- such that you have a delay between bursts greater than, say, a second -- then also reset a timer on every update that will trigger this code to run again, to pick up the trailing portion of the last burst.
I finally found an answer with the help of havexz and Greyson here, and some code here.
As the strings were coming in bursts, I chose to update the UI every 100ms.
For the record, here's what my code looks like:
private static boolean output_upToDate = true;
/* Handles the refresh */
private Handler outputUpdater = new Handler();
/* Adjust this value for your purpose */
public static final long REFRESH_INTERVAL = 100; // in milliseconds
/* This object is used as a lock to avoid data loss in the last refresh */
private static final Object lock = new Object();
private Runnable outputUpdaterTask = new Runnable() {
public void run() {
// takes the lock
synchronized(lock){
if(!output_upToDate){
// updates the outview
outView.setText(new_text);
// notifies that the output is up-to-date
output_upToDate = true;
}
}
outputUpdater.postDelayed(this, REFRESH_INTERVAL);
}
};
and I put this in my onCreate() method:
outputUpdater.post(outputUpdaterTask);
Some explanations: when my app calls its onCreate() method, my outputUpdater Handler receives one request to refresh. But this task (outputUpdaterTask) posts itself a new refresh request 100 ms later. The lock is shared with the task which sends the new strings and sets output_upToDate to false.
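The producer side is not shown above; following the same names, it would do something like this whenever a new string arrives (e.g. in the AsyncTask's onProgressUpdate()):
synchronized (lock) {
    new_text += receivedString;      // or append to a StringBuilder and keep a reference
    output_upToDate = false;         // tell the refresh task there is something new to show
}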
Try throttling the updates. Instead of updating 100 times per second (the rate of generation), keep the 100 strings in a StringBuilder and update once per second.
The code should look like:
StringBuilder completeStr = new StringBuilder();
StringBuilder new100Str = new StringBuilder();
int counter = 0;
if (counter < 100) {
    new100Str.append(newString);
    counter++;
} else {
    completeStr.append(new100Str);
    new100Str = new StringBuilder();
    new100Str.append(newString);   // keep the string that arrived on this call
    counter = 1;
    myTextView.setText(completeStr.toString());
}
NOTE: Code above is just for illustration so you might have to alter it as per your needs.