I am calling MixPanel.flush in my onDestroy method but it looks as if the application is ending before MixPanel has a chance to send / flush its data.
I do not see any data in my MixPanel analytics screen unless I pause my android app using a breakpoint in onDestroy right after MixPanel.flush() is called.
Is there any way I can have my app stay open for MixPanel to finish?
You have to call flush() in your Activity onDestroy()
Like this:
@Override
protected void onDestroy() {
    mMixpanel.flush();
    super.onDestroy();
}
Notice that you call flush() before calling super.onDestroy(); otherwise the activity lifecycle continues and your call may come too late. This way the destroy process only starts after the flush.
It works for me.
It's somewhat clumsy (MixPanel could have designed this better): you often don't know which activity the user will leave your app from, so to cover every exit point you have to put that code in a base activity inherited by all your activities. That causes a flush every time you change activities, which in turn defeats the purpose of queueing events…
UPDATE: It's true that the last events may not get flushed even though Mixpanel recommends doing the above (they probably never considered this case, but since the SDK is open source, we can take a look).
Whatever the SDK does internally, I recommend calling flush() as early as possible. You can even call flush() in the onStop() method of every activity (before super.onStop()), so that each Activity flushes its events every time it is stopped.
Although this may defeat the purpose of Mixpanel's network-friendly queueing, it may be the only way (short of resorting to strange hacks) to keep the events synced.
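A minimal sketch of that base-activity approach (BaseActivity is a placeholder name; mMixpanel is the shared MixpanelAPI instance, obtained however your app normally does it):

import android.app.Activity;
import com.mixpanel.android.mpmetrics.MixpanelAPI;

public abstract class BaseActivity extends Activity {

    // placeholder: assign the shared MixpanelAPI instance, e.g. in onCreate()
    protected MixpanelAPI mMixpanel;

    @Override
    protected void onStop() {
        // mirror the onDestroy() pattern above: flush before the lifecycle continues
        mMixpanel.flush();
        super.onStop();
    }
}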
Because of all this, I took a look at Mixpanel's source code (which I had checked out), and it also has a method to define the flush frequency:
/**
 * Sets the target frequency of messages to Mixpanel servers.
 * If no calls to {@link #flush()} are made, the Mixpanel
 * library attempts to send tracking information in batches at a rate
 * that provides a reasonable compromise between battery life and liveness of data.
 * Callers can override this value, for the whole application, by calling
 * <tt>setFlushInterval</tt>.
 *
 * @param context the execution context associated with this application, probably
 *      the main application activity.
 * @param milliseconds the target number of milliseconds between automatic flushes.
 *      this value is advisory, actual flushes may be more or less frequent
 */
public static void setFlushInterval(Context context, long milliseconds);
Reducing this number might help.
The default value seems to be:
// Time interval in ms events/people requests are flushed at.
public static final long FLUSH_RATE = 60 * 1000;
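If your SDK version exposes that method (the snippet above suggests it is a public static method; where exactly it lives can vary between versions, so treat the class name below as an assumption), lowering the interval would look something like:

// assumption: setFlushInterval() is the static method documented above, on MixpanelAPI
// flush roughly every 10 seconds instead of the default 60
MixpanelAPI.setFlushInterval(getApplicationContext(), 10 * 1000);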
All flush() does is post a message to the worker thread that is always running.
The worker thread intercepts it here:
else if (msg.what == FLUSH_QUEUE) {
    logAboutMessageToMixpanel("Flushing queue due to scheduled or forced flush");
    updateFlushFrequency();
    sendAllData();
}
After updating the FlushFrequency (based upon the current systemTime), it sends the data using HTTP.
I can see why, if your main process is dying (after the last activity), this code may not execute in time…
Last but not least, if you switch to using the library from source (as opposed to just using the jar), you can change the value of the following in MPConfig:
public static final boolean DEBUG = false;
to get a lot of logging (including the flushes and the posts to the server). It might help you see which events are actually being sent to the server (and when).
There is also an upper limit to the number of queued items before the queue is forced to flush:
// When we've reached this many track calls, flush immediately
public static final int BULK_UPLOAD_LIMIT = 40;
This is seen in the queue code:
if (queueDepth >= MPConfig.BULK_UPLOAD_LIMIT) {
    logAboutMessageToMixpanel("Flushing queue due to bulk upload limit");
    updateFlushFrequency();
    sendAllData();
}
Good luck :)
Related
For my master's thesis I need to analyse iris recognition data, for which I have created about 400 templates. Each of these templates has to be compared to every other template, resulting in ~160,000 matching results.
Those results need to be uploaded to my Azure Easy Table. I really don't know where to start, since the ThreadPoolExecutor cannot handle more than 128 threads in parallel.
What is the correct approach for something like this? Time is not really an issue.
This is my current approach:
for (int i = 0; i < mIrisEntries.size(); i++) {
    match(i);
}

public void match(final int position) {
    IrisEntry inputEntry = mIrisEntries.get(position);
    // takes about 10ms
    List<IrisResult> results = mUSITHelper.matchEntries(inputEntry, mIrisEntries, this);
    for (IrisResult s : results) {
        try {
            Thread.sleep(1000);
            mAzureTableManager.addIrisResult(s); // here the AsyncTask is started
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
This approach works for some time, but then the system starts to kill all threads and uploading is cancelled.
I do not completely follow all of what you describe in the question and comments.
However, at that amount of processing time (~30 minutes if I understood you), you need to use a foreground service, as otherwise your process may not survive that long. In that service, use your own ThreadPoolExecutor, with the number of threads in the pool tuned based on the number of CPU cores. Ideally, I would not post ~160,000 jobs to that executor, but rather 400, where each of those 400 jobs performs the work for one of your templates. The last job you post to the executor would do any final cleanup, plus stop the service.
Also, you may be able to halve your work. Suppose A and B are two of your templates. If comparing A to B is the same as comparing B to A, you do not need to do both comparisons. Your 400 templates expanding to ~160,000 jobs seems to imply that you are comparing A to B and B to A for all pairs.
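A rough sketch of both ideas combined: a pool sized to the CPU cores, one job per template, and each job only comparing its template against the ones after it, so every unordered pair is handled exactly once. IrisEntry and IrisResult are the types from the question; compare() and upload() are placeholders for the USITHelper matching and the Azure upload.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public abstract class MatchScheduler {

    // placeholders for the USITHelper matching and the Azure upload
    protected abstract IrisResult compare(IrisEntry a, IrisEntry b);
    protected abstract void upload(IrisResult result);

    public void matchAll(final List<IrisEntry> templates) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService executor = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < templates.size(); i++) {
            final int index = i;
            executor.execute(new Runnable() {
                @Override
                public void run() {
                    // compare template 'index' only against the templates after it
                    for (int j = index + 1; j < templates.size(); j++) {
                        upload(compare(templates.get(index), templates.get(j)));
                    }
                }
            });
        }
        // no new jobs; the last job to finish could also stop the foreground service
        executor.shutdown();
    }
}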
As @CommonsWare said, reducing the number of upload requests and the amount of data is a good idea.
Beyond that, in my experience, I suggest enabling offline sync for your Android app, so that you don't have to handle uploading the data asynchronously yourself.
Hope it helps.
I'd like to use traceview to measure performance for several asynchronous events. The asynchronous events are passed to me in a callback that looks similar to the below code.
interface EventCallback {
    void onStartEvent(String name);
    void onStopEvent(String name);
}
where every asynchronous event will start with an "onStartEvent" call and end with an "onStopEvent" call.
I'd like to create trace files for every event. From my reading here (http://developer.android.com/tools/debugging/debugging-tracing.html#creatingtracefiles), it's not possible to trace asynchronous events, since the calls must be nested in a stack-like order. So the call to Debug.stopMethodTracing() always applies to the most recent call to Debug.startMethodTracing("calc").
So, if I receive callbacks in the following order.
onStartEvent(A)
onStartEvent(B)
onStopEvent(A)
onStopEvent(B)
which will get interpreted to
Debug.startMethodTracing("A");
Debug.startMethodTracing("B");
Debug.stopMethodTracing(); // will apply to "B" instead of "A"
Debug.stopMethodTracing(); // will apply to "A" instead of "B"
Using traceview, is there any way to do what I want, i.e. trace "non-structured" asynchronous events?
traceview might be the wrong tool. If you really want to go this route, you can keep an "active event count" and keep the trace file open as long as at least one event is being handled. This can result in multiple events being present in the same trace file, but you're tracing method calls in the VM, so there's no simple way around that.
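A minimal sketch of that counting idea, assuming all callbacks arrive on the same thread (otherwise the counter needs synchronization) and a single shared trace file name is acceptable:

import android.os.Debug;

class TracingEventCallback implements EventCallback {

    private int activeEvents = 0;

    @Override
    public void onStartEvent(String name) {
        if (activeEvents == 0) {
            // first active event: open the trace file
            Debug.startMethodTracing("events");
        }
        activeEvents++;
    }

    @Override
    public void onStopEvent(String name) {
        activeEvents--;
        if (activeEvents == 0) {
            // last active event finished: close the trace file
            Debug.stopMethodTracing();
        }
    }
}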
If your events happen on different threads, you could separate them out with a post-processing step. This would require some effort to parse the data and strip out the undesirable records. (See e.g. this or this.)
You don't really say what you're trying to measure. For example, if you just want start/end times, you could just write those to a log file of your own and skip all the traceview fun.
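For example, if start/end durations are all you need, a plain logging callback like this (the names are illustrative) avoids traceview entirely:

import java.util.HashMap;
import java.util.Map;

import android.os.SystemClock;
import android.util.Log;

class TimingEventCallback implements EventCallback {

    private final Map<String, Long> startTimes = new HashMap<String, Long>();

    @Override
    public void onStartEvent(String name) {
        startTimes.put(name, SystemClock.elapsedRealtime());
    }

    @Override
    public void onStopEvent(String name) {
        Long started = startTimes.remove(name);
        if (started != null) {
            Log.d("EventTiming", name + " took "
                    + (SystemClock.elapsedRealtime() - started) + " ms");
        }
    }
}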
Depending on what you're after, systrace may be easier to work with. Unfortunately the custom event class (Trace) only makes the synchronous event APIs public -- if you don't mind using reflection to access non-public interfaces you can also generate async events.
I have an AppWidget that may receive two consecutive update requests. To be shown, it has to programmatically draw five 50x50 bitmaps, set some PendingIntents and read some configuration (just to give you a little idea of the workload). There are around 60 milliseconds between the two calls.
The option I have found so far to avoid the unnecessary update is to have a static field, something like:
public class myWidget extends AppWidgetProvider {

    private static long lastUpdate;

    @Override
    public void onReceive(Context context, Intent intent) {
        if ((System.currentTimeMillis() - lastUpdate) > 200) {
            doUpdates(context);
        }
        lastUpdate = System.currentTimeMillis();
    }
}
With performance and "best practice" in mind...
Which do you think is the best solution in this case?
1) Use the static field (like in the example)
2) Just let the widget update twice
3) Other
In other words, is the use of the static field more harmful than just letting the widget update twice?
To be shown, it has to programmatically draw five 50x50 bitmaps, set some PendingIntents and read some configuration (just to give you a little idea of the workload). There are around 60 milliseconds between the two calls.
Note that this will cause your UI to drop frames if you happen to have your UI in the foreground at the time the update request(s) come in.
Which do you think is the best solution in this case?
Both #1 and #2.
There is no guarantee that your process will still be around between the two subsequent update requests. Probably it will be around, given that you appear to be optimizing for two updates within 200ms. But it's not guaranteed. So, use the static data member for optimization purposes, but ensure that your code will survive the process being terminated in between.
I'd suggest using SystemClock.elapsedRealtime(), though, instead of System.currentTimeMillis(). System.currentTimeMillis() is based on the real-time clock, which can be adjusted on the fly (e.g., NITZ signals, SNTP updates, the user manually changing the clock). SystemClock.elapsedRealtime() is guaranteed to be monotonically increasing, so it is a better choice for this sort of scenario. Only use System.currentTimeMillis() when you need to tie something to "real world" time (e.g., as part of work with Calendar objects), not for interval timing.
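A sketch of the question's debounce check rewritten around SystemClock.elapsedRealtime() (doUpdates() and the 200 ms threshold come from the question; everything else is illustrative):

private static long lastUpdate;

@Override
public void onReceive(Context context, Intent intent) {
    // elapsedRealtime() is monotonic, so clock changes cannot break the comparison
    long now = android.os.SystemClock.elapsedRealtime();
    if ((now - lastUpdate) > 200) {
        doUpdates(context);
    }
    lastUpdate = now;
}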
I have a test case for my app which fills in the TextViews in an Activity and then simulates clicking the Save button which commits the data to a database. I repeat this several times with different data, call Instrumentation.waitForIdleSync(), and then check that the data inserted is in fact in the database. I recently ran this test three times in a row without changing or recompiling my code. The result each time was different: one test run passed and the other two test runs reported different data items missing from the database. What could cause this kind of behavior? Is it possibly due to some race condition between competing threads? How do I debug this when the outcome differs each time I run it?
Looks like a race condition.
Remember that in the world of threading there is no way to ensure runtime order.
I'm not an Android dev, so I'm only speculating, but the UI generally runs on a single event thread; when you call the method from another thread (your test), you're probably breaking that assumption because you're outside the event thread.
You could try using a semaphore or more likely a lock on the resource.
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/Lock.html
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/Semaphore.html
I (finally!) found a solution to this problem. I now call finish() on the tested Activity to make sure that all of its connections to the database are closed. This seems to ensure consistency in the data when I run the assertions.
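As an illustration only (assuming an ActivityInstrumentationTestCase2-style test, so getActivity() and getInstrumentation() are available), the end of the test now looks roughly like this:

// finish the Activity on the UI thread, then wait for everything to settle
final Activity activity = getActivity();
getInstrumentation().runOnMainSync(new Runnable() {
    @Override
    public void run() {
        activity.finish();
    }
});
getInstrumentation().waitForIdleSync();
// ...then run the database assertions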
I would suggest making a probe for the database data rather than a straight assert on it. By this I mean write a piece of code that keeps checking the database for a condition for up to a certain amount of time, rather than waiting for x seconds (or idle time) and then checking once. I am not at a proper computer, so the following is only pseudo-code:
public static void assertDatabaseHasData(String message, String dataExpected, long maxTimeToWaitFor) {
    long timeToWaitUntil = System.currentTimeMillis() + maxTimeToWaitFor;
    boolean expectationMatched = false;
    do {
        if (dataExpected.equals(databaseCheck())) {
            expectationMatched = true;
        }
        // brief pause so the loop does not spin at full speed
        try {
            Thread.sleep(50);
        } catch (InterruptedException ignored) {
        }
    } while (!expectationMatched && System.currentTimeMillis() < timeToWaitUntil);
    assertTrue(message, expectationMatched);
}
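Called from the test it would look something like this (the message, expected value and 5-second timeout are just examples):

assertDatabaseHasData("saved entry never showed up in the database", "expected row value", 5000);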
When I get to a computer I will try to look at the above again and improve it (I would actually have used Hamcrest rather than plain asserts, but that is personal preference).
I'm getting data from a server for my app. The "getData" functions are included in the app's main Activity, in a splash thread. The problem I'm having is this:
If I quickly enter the news or description screens after the app loads, I notice that not all info has loaded (the last 2 or 3 strings that needed to be saved are null). If, however, I allow the app a few more seconds after the main menu is displayed (after the splash thread completes), the problem doesn't occur and all info is stored correctly on the phone. I tried delaying the splash screen by a few seconds, but that's not really an elegant solution, nor does it always work.
My question is: how can I make sure that the functions have completed before execution jumps to "finally"?
I'm not storing the data in any database, just in public static string arrays in another class.
You have my code below:
if (networkAvailable()) {
    Thread splashTread = new Thread() {
        @Override
        public void run() {
            try {
                getData.execute(description_Hyperlinks);
                getNews.execute(new String[]{newsJSON_Hyperlink});
                getOffers.execute(new String[]{offersJSON_Hyperlink});
                for (int i = 0; i < 3; i++)
                    Thread.sleep(1000);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            } finally {
                finish();
                startActivity(new Intent(FlexFormActivity.this, MainMenu.class));
                stop();
            }
        }
    };
    splashTread.start();
}
To make the discussion from the comments on the initial question more readable and easier to find:
It would be a better idea to use Android's built-in support for asynchronous task handling in the form of the AsyncTask class. This lets you "hook into" the task and react to the different stages of its progress.
The idea would be not to make getData, getNews and getOffers each extend AsyncTask, but rather to have a single task (called e.g. "LoadContents") which loads the data, the news and the offers one after another.
It would then be possible to determine when the whole initial work has been done, which makes it easy to react to this completion in whatever way you like.
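A rough sketch of such a task, assuming it lives as an inner class of FlexFormActivity; loadDescriptions(), loadNews() and loadOffers() are placeholders for the work currently done by getData, getNews and getOffers:

private class LoadContentsTask extends AsyncTask<Void, Void, Void> {

    @Override
    protected Void doInBackground(Void... params) {
        // placeholders for the work currently done by getData, getNews and getOffers
        loadDescriptions(description_Hyperlinks);
        loadNews(newsJSON_Hyperlink);
        loadOffers(offersJSON_Hyperlink);
        return null;
    }

    @Override
    protected void onPostExecute(Void result) {
        // everything has been loaded; only now leave the splash screen
        startActivity(new Intent(FlexFormActivity.this, MainMenu.class));
        finish();
    }
}

// started once from the splash activity's onCreate():
// new LoadContentsTask().execute();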
As a little code review: it should normally never be necessary to use the Thread class directly, as Java and Android provide many wrappers around it (in particular the Java Executor framework), which should be favored in order to produce cleaner and more reliable code.
Also, as a general piece of advice on "disabling the back button" (which is used by @Eugen to ensure that the splash screen stays present): don't do it. It's not the kind of behavior a user expects from an application.
Imagine someone has accidentally opened an app that takes ~10 seconds for the initial loading of content, and this process can't be cancelled. The user has to wait the entire time, only to then leave the app without using it.
Therefore, you should not "deactivate" the back button, but rather make your task (and therefore the initial loading of your application) cancelable. When using an AsyncTask, this is already implemented for you.