If I have a Spring app with a mail inbound channel, what is the best way to process every file in every email? (I poll roughly every minute and fetch one email, which may carry multiple attachments.)
Although I can apply multithreading at the receiving channel (SimpleAsyncTaskExecutor or ThreadPoolTaskExecutor), this doesn't help much: if an email has 10 attachments, their processing is still bound to a single thread.
I've kept this fairly synchronous so far, because I want to aggregate some data for every email and send a response once all of its files have been processed. I believe this could also be done in a better way.
In general, how can I asynchronously process every file in every email, and then asynchronously build an email reply?
It looks like you are asking for java.util.concurrent.Future. This is a Java core concept for blocking until a (method) result has been calculated (see the JavaDoc for an example).
Spring's @Async support covers the Future concept too.
So the only thing you need to do is write a method, annotated with @Async, that takes one attachment of the mail as its argument and returns whatever is calculated, wrapped in a Future.
Then invoke this method for every attachment (each call returns immediately) and store the returned Futures in a list. Once all methods have been invoked, get the Future results in a second loop. When that loop finishes, all attachments have been processed asynchronously.
void processOneMail(List<Attachment> attachments) throws Exception {
    List<Future<AttachmentResult>> futures = new ArrayList<>();
    for (Attachment attachment : attachments) {
        futures.add(processOneAttachment(attachment)); // async, returns immediately
    }
    List<AttachmentResult> attachmentResults = new ArrayList<>();
    for (Future<AttachmentResult> future : futures) {
        attachmentResults.add(future.get()); // eventually blocks
    }
    // now all attachments are calculated and stored in the list
    ...
}

@Async
Future<AttachmentResult> processOneAttachment(Attachment attachment) {
    ...
}
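Note that @Async is only honored when Spring's async support is enabled. A minimal configuration sketch (the executor bean and pool size are illustrative choices, not part of the original answer):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
@EnableAsync
public class AsyncConfig {

    // A dedicated pool so attachment processing runs off the mail-polling thread.
    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10); // sized for illustration only
        executor.initialize();
        return executor;
    }
}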
See also: http://blog.espenberntsen.net/2010/03/08/spring-asynchronous-support/
My firestore onSnapshot() function is being called twice.
let user = firebase.firestore().collection('users').doc(userID).onSnapshot({
    next: (documentSnapshot: firebase.firestore.DocumentSnapshot) => {
        this.userArray.push(documentSnapshot as User);
        console.log(documentSnapshot);
        //here
    },
    error: (firestoreError: firebase.firestore.FirestoreError) => {
        console.log(firestoreError);
        //here
    }
});
I have also tried detaching the listener, as described in https://firebase.google.com/docs/firestore/query-data/listen#detach_a_listener, by calling user() at the //here comments, but to no avail.
How can I modify this so that the function only executes once, i.e. pushes only one user object at a time instead of two?
I don't know if this is related to your question, but if one is using
firebase.firestore.FieldValue.serverTimestamp()
to give a document a timestamp, then onSnapshot will fire twice. This seems to be because when you add a new document to your database, onSnapshot fires immediately, but serverTimestamp has not run yet. A few milliseconds later serverTimestamp runs and updates your document, so onSnapshot fires again.
I would like to add a small delay before onSnapshot fires (say 0.5 s or so), but I couldn't find a way to do this.
You could also make a server-side function for the onCreate event; I believe that would solve your problem. Maybe your userArray.push action would be more suitable to execute on the server side.
Update: To learn more about the behavior of serverTimestamp() and why it triggers the listener twice, read this article: The secrets of Firestore's FieldValue.serverTimestamp() — REVEALED!. Also, the official documentation states:
When you perform a write, your listeners will be notified with the new data before the data is sent to the backend.
In the article there are a couple of suggested solutions, one of which is to use the metadata property of the snapshot and check the Boolean metadata.hasPendingWrites, which is true while the snapshot you're looking at hasn't been written to the server yet and false once it has.
For example, in your case you can check whether hasPendingWrites is false and then push the object:
if (!documentSnapshot.metadata.hasPendingWrites) {
    // This code will only execute once the data has been written to the server
    this.userArray.push(documentSnapshot as User);
    console.log(documentSnapshot);
}
In a more generic example, the code will look like this:
firestore.collection("MyCollection")
    .onSnapshot(snapshot => {
        if (snapshot.metadata.hasPendingWrites) {
            // Local changes have not yet been written to the backend
        } else {
            // Changes have been written to the backend
        }
    });
Another useful approach, found in the documentation is the following:
If you just want to know when your write has completed, you can listen to the completion callback rather than using hasPendingWrites. In JavaScript, use the Promise returned from your write operation by attaching a .then() callback.
I hope these resources and the various approaches will help anyone trying to figure out a solution.
REFERENCES:
Events for local changes
The hasPendingWrites metadata property
Snapshot Listen Options
If you need a one-time response, use the .get() method, which returns a promise.

firebase.firestore().collection('users').doc(userID).get().then(snap => {
    this.userArray = [...this.userArray, snap as User];
});
However, I suggest using AngularFire (totally biased since I maintain the library). It makes handling common Angular + Firebase tasks much easier.
I'm using Firebase Remote Config to fetch remote data, and my app needs up-to-date data from the first launch.
I'm doing a fetch and update in my Application's onCreate():
mFirebaseRemoteConfig.fetch(cacheExpiration)
        .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                if (task.isSuccessful()) {
                    mFirebaseRemoteConfig.activateFetched();
                }
            }
        });
And I read the value with:
myValue = mFirebaseRemoteConfig.getBoolean(Constants.FIREBASE_REMOTE_MY_VALUE);
The first fetch works well (activateFetched() is successfully triggered), but it returns the remote_config_defaults value and not the published remote config.
The second fetch, even a few seconds later, returns the remote value.
After that, the following fetches are subject to the cacheExpiration rule (which is totally OK).
Any idea why my remote value is not fetched at the first call?
It sounds like you are overlooking the asynchronous nature of fetching the remote parameters. The onComplete() callback fires after a request to the Firebase servers is sent and the reply received. This will take a fraction of a second, maybe more.
If your statement to use the fetched value:
myValue = mFirebaseRemoteConfig.getBoolean(Constants.FIREBASE_REMOTE_MY_VALUE);
follows the call to fetch() and is not in the onComplete() callback, it will execute before the config data has been received. The second call only appears to work because enough time has elapsed for the first call to complete and the data it fetched and activated is present.
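For example, a minimal rearrangement of the question's code that moves the read inside the callback (a sketch reusing the question's field and constant names):

mFirebaseRemoteConfig.fetch(cacheExpiration)
        .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
                if (task.isSuccessful()) {
                    mFirebaseRemoteConfig.activateFetched();
                    // Only read the value once the fetched config is active.
                    boolean myValue = mFirebaseRemoteConfig.getBoolean(
                            Constants.FIREBASE_REMOTE_MY_VALUE);
                    // ... use myValue here ...
                }
            }
        });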
The Firebase Remote Config callbacks are designed that way: the cached values are returned first. If there is no cached value saved from the server, it returns the value defined in the defaults and triggers a remote fetch. The next time around, it returns the values fetched from the server, provided it managed to save them.
The way Firebase Remote Config decides on a value can be described as follows (see the sketch below):
First it checks whether there is a cached value that was stored from the server; if there is, it uses that and returns it on the first call.
If there is no cached value, it looks at the defaults defined either programmatically or in the defaults file (set when you call setDefaults()).
If there is no value cached from the server and no value in the defaults, it uses the system default for that type.
More info can be found here: https://firebase.google.com/docs/remote-config/
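Putting the three steps above together, a minimal sketch (the defaults resource name is an example; the field and constant names are reused from the question):

FirebaseRemoteConfig config = FirebaseRemoteConfig.getInstance();
config.setDefaults(R.xml.remote_config_defaults); // step 2: in-app defaults
config.fetch(cacheExpiration).addOnCompleteListener(task -> {
    if (task.isSuccessful()) {
        config.activateFetched(); // from now on, step 1 (server values) wins
    }
});
// Before the first successful fetch + activate, this returns the XML default,
// or false (the type's system default) if the key is missing there as well.
boolean myValue = config.getBoolean(Constants.FIREBASE_REMOTE_MY_VALUE);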
Like @Bob Snyder pointed out, this is because of the async nature of Firebase.
So use an OnCompleteListener like this to fix the issue:
firebaseRemoteConfig.activate().addOnCompleteListener {
    // logic to check the remote value
}
One issue that I was running into when fetching the RemoteConfig from an Android device was that we were initially using the method
fetch()
which gave us the same issue where the initial value was always the same as the default. Changing this to
fetchAndActivate()
fixed the issue for us. I assume the difference is that Firebase allows you to fetch the data without immediately 'activating' it, which is helpful if you want to take some immediate action based on your default values first; once you activate the remote values, any logic after that point is based on them.
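A minimal sketch of that change, reusing the field names from the question (fetchAndActivate() combines both steps into a single Task):

mFirebaseRemoteConfig.fetchAndActivate()
        .addOnCompleteListener(task -> {
            if (task.isSuccessful()) {
                boolean myValue = mFirebaseRemoteConfig.getBoolean(
                        Constants.FIREBASE_REMOTE_MY_VALUE);
                // ... react to the freshly activated value ...
            }
        });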
Hope this helps someone :)
I'm saving the user's location in the app's local database and then sending it to the server. Once the server returns a success, I delete the location that was sent.
Each time a point has been saved in the database I call this method:
public void sendPoint() {
    amazonRetrofit.postAmazonPoints(databaseHelper.getPoints())
            .map(listIdsSent -> deleteDatabasePoints(listIdsSent))
            .doOnCompleted(() -> emitStoreChange(finalEvent))
            .observeOn(AndroidSchedulers.mainThread())
            .subscribeOn(AndroidSchedulers.from(backgroundLooper))
            .subscribe();
}
I query the database for the points to be sent to the server.
I receive from the server the list of points successfully sent.
Using .map(), I gather the points successfully sent and delete them from the local database.
Sometimes I call this method again before the previous request has completed and deleted the points it sent. The new call then posts the same points as the previous request, because that request hasn't finished yet and so hasn't deleted the points in .map(), causing the server to receive duplicates...
Timeline:
1st call to sendPoint()
Retrieve points A, B, C from the database
Post points A, B, C to the server
2nd call to sendPoint()
Retrieve points A, B, C, D from the database
Post points A, B, C, D to the server
Receive success from the 1st request
Delete A, B, C from the local database
Receive success from the 2nd request
Delete A, B, C, D from the local database
Result:
The server database has now received: A, B, C, A, B, C, D
Each request occurs sequentially but somehow the same location points are sent to the server when I call sendPoint() too quickly. How can I fix this?
First of all, you are not using the observeOn operator properly: observeOn applies to the steps of the pipeline that come after the point where it is defined.
So if you define it at the end of the pipeline, just before subscribeOn, none of your previous steps will be executed on that thread.
Also, since you need to wait for the response of your server call, you can use the callbacks that the Subscriber already provides (onNext(), onCompleted()).
public void sendPoint() {
    Observable.from(databaseHelper.getPoints())
            .observeOn(AndroidSchedulers.mainThread())
            .flatMap(points -> amazonRetrofit.postAmazonPoints(points))
            .subscribeOn(AndroidSchedulers.from(backgroundLooper))
            .subscribe(listIdsSent -> deleteDatabasePoints(listIdsSent),
                    throwable -> { /* handle the error */ },
                    () -> emitStoreChange(finalEvent));
}
If you want to see more examples of observeOn and subscribeOn, you can take a look here: https://github.com/politrons/reactive/blob/master/src/test/java/rx/observables/scheduler/ObservableAsynchronous.java
You should have some kind of validation on the client side and/or on the backend side.
Client side:
The simplest solution is to add two columns to the locations table, for example "processing" and "uploaded".
When you select locations from the database, use a where clause: processing = false and uploaded = false.
Then, when you have rows ready to send, set processing = true, and when the server returns success, set uploaded = true.
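A sketch of that two-flag scheme on Android's SQLite API; the table name, column names and the sentId variable are assumptions for illustration, not from the question:

SQLiteDatabase db = databaseHelper.getWritableDatabase();

// 1. Select only points that are neither in flight nor already uploaded.
Cursor pending = db.query("points", null,
        "processing = 0 AND uploaded = 0", null, null, null, null);

// 2. Mark those rows as in flight before posting them to the server.
ContentValues inFlight = new ContentValues();
inFlight.put("processing", 1);
db.update("points", inFlight, "processing = 0 AND uploaded = 0", null);

// 3. Once the server confirms an id, mark that row as uploaded;
//    on failure, reset processing to 0 so the row is retried.
ContentValues done = new ContentValues();
done.put("uploaded", 1);
db.update("points", done, "id = ?", new String[]{ String.valueOf(sentId) });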
Backend side (optional, depends on requirements):
You should send each location with a timestamp to the server (probably one more column in your client-side table). If the server gets a location with a timestamp older than the last one in its database, it shouldn't store it.
RxJava solution:
You can implement a similar solution with an in-memory cache (for example a List) that is kept around across all sendPoint() calls.
Pseudocode:

public void sendPoint() {
    databaseHelper.getPoints()
            .filter(points -> pointsNotInCache(points))
            .map(points -> amazonRetrofit.postAmazonPoints(points))
            .map(points -> addToCache(points))
            .map(listIdsSent -> deleteDatabasePoints(listIdsSent))
            .map(listIdsSent -> removeSentPointsFromCache(listIdsSent)) // if you would like to save memory
            .doOnCompleted(() -> emitStoreChange(finalEvent))
            .observeOn(AndroidSchedulers.mainThread())
            .subscribeOn(AndroidSchedulers.from(backgroundLooper))
            .subscribe();
}
It looks like, as everyone else is saying, you need an intermediate cache.
i.e.
HashSet<Point> mHashSet = new HashSet<>();

public void sendPoint() {
    Observable.from(databaseHelper.getPoints())
            .filter(point -> !mHashSet.contains(point))
            .doOnNext(mHashSet::add) // remember the points that are in flight
            .toList()
            .flatMap(amazonRetrofit::postAmazonPoints)
            .map(this::deleteDatabasePoints)
            .doOnCompleted(() -> emitStoreChange(finalEvent))
            .observeOn(AndroidSchedulers.mainThread())
            .subscribeOn(AndroidSchedulers.from(backgroundLooper))
            .subscribe();
}
I'm still fairly new to RxJava and I'm using it in an Android application. I've read a metric ton on the subject but still feel like I'm missing something.
I have the following scenario:
I have data stored in the system which is accessed via various service connections (AIDL) and I need to retrieve data from this system (anywhere from 1 to n async calls can happen). Rx has helped me a ton in simplifying this code. However, this entire process tends to take a few seconds (upwards of 5 seconds), therefore I need to cache this data to speed up the native app.
The requirements at this point are:
On the initial subscription, the cache will be empty, so we have to wait the required time for the load. No big deal. After that, the data should be cached.
Subsequent loads should pull the data from the cache, and then the data should be reloaded and the disk cache refreshed behind the scenes.
The Problem: I have two Observables - A and B. A contains the nested Observables that pull data from the local services (tons going on here). B is much simpler: it simply contains the code to pull the data from the disk cache.
Need to solve:
a) Return a cached item (if cached) and continue to re-load the disk cache.
b) Cache is empty, load the data from system, cache it and return it. Subsequent calls go back to "a".
I've had a few folks recommend operations such as flatMap, merge, and even Subjects, but for some reason I'm having trouble connecting the dots.
How can I do this?
Here are a couple options on how to do this. I'll try to explain them as best I can as I go along. This is napkin-code, and I'm using Java8-style lambda syntax because I'm lazy and it's prettier. :)
A subject, like AsyncSubject, would be perfect if you could keep these as instance states in memory, although it sounds like you need to store these to disk. However, I think this approach is worth mentioning just in case you are able to. Also, it's just a nifty technique to know.
AsyncSubject is an Observable that only emits the LAST value published to it (a Subject is both an Observer and an Observable), and it will only start emitting after onCompleted has been called. Thus, anything that subscribes after that completion will receive the value.
In this case, you could have (in an application class or other singleton instance at the app level):
public class MyApplication extends Application {

    private final AsyncSubject<Foo> foo = AsyncSubject.create();

    /** Asynchronously gets foo and stores it in the subject. */
    public void fetchFooAsync() {
        // Gets the observable that does all the heavy lifting.
        // It should emit one item and then complete.
        FooHelper.getTheFooObservable().subscribe(foo);
    }

    /** Provides the foo for any consumers who need a foo. */
    public Observable<Foo> getFoo() {
        return foo;
    }
}
Deferring the Observable. Observable.defer lets you wait to create an Observable until it is subscribed to. You can use this to let the disk-cache fetch run in the background, and then return the cached version or, if it's not in the cache, create the real deal.
This version assumes that your getter code, both the cache fetch and the non-cache creation, consists of blocking calls rather than Observables, and that the defer does the work in the background. For example:
public Observable<Foo> getFoo() {
    return Observable.defer(() -> {
        if (FooHelper.isFooCached()) {
            return Observable.just(FooHelper.getFooFromCacheBlocking());
        }
        return Observable.just(FooHelper.createNewFooBlocking());
    }).subscribeOn(Schedulers.io());
}
Use concatWith and take. Here we assume our method to get the Foo from the disk cache either emits a single item and completes, or else just completes without emitting anything if the cache is empty.
public Observable<Foo> getFoo() {
    return FooHelper.getCachedFooObservable()
            .concatWith(FooHelper.getRealFooObservable())
            .take(1);
}
That method should only attempt to fetch the real deal if the cached observable finished empty.
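For illustration, getCachedFooObservable() could be implemented like this (a sketch that assumes a blocking helper returning null when the cache is empty):

public Observable<Foo> getCachedFooObservable() {
    return Observable.defer(() -> {
        Foo cached = FooHelper.getFooFromCacheBlocking(); // null if not cached
        return cached != null ? Observable.just(cached) : Observable.<Foo>empty();
    }).subscribeOn(Schedulers.io());
}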
Use amb or ambWith. This is probably one of the craziest solutions, but fun to point out. amb basically takes a couple of observables (or more, with the overloads) and waits until one of them emits an item, then it completely discards the other observable and just takes the one that won the race. The only way this would be useful is if it's possible for the computation step of creating a new Foo to be faster than fetching it from disk. In that case, you could do something like this:
public Observable<Foo> getFoo() {
    return Observable.amb(
            FooHelper.getCachedFooObservable(),
            FooHelper.getRealFooObservable());
}
I kinda prefer Option 3. As far as actually caching it, you could have something like this at one of the entry points, preferably before we're going to need the Foo, since as you said this is a long-running operation. Later consumers should get the cached version as long as it has finished writing. Using an AsyncSubject here may help as well, to make sure we don't trigger the work multiple times while waiting for it to be written. The consumers would only get the completed result, but again, that only works if it can reasonably be kept around in memory.
if (!FooHelper.isFooCached()) {
    getFoo()
            .subscribeOn(Schedulers.io())
            .subscribe((foo) -> FooHelper.cacheTheFoo(foo));
}
Note that you should either keep around a single-thread Scheduler meant for disk writing (and reading) and use .observeOn(diskScheduler) after .subscribeOn(...), or otherwise synchronize access to the disk cache to prevent concurrency issues.
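A sketch of that single-thread scheduler idea (the field name is invented for the example):

// All disk reads/writes share one thread, so cache access never overlaps.
private final Scheduler diskScheduler =
        Schedulers.from(Executors.newSingleThreadExecutor());

if (!FooHelper.isFooCached()) {
    getFoo()
            .subscribeOn(diskScheduler)
            .subscribe((foo) -> FooHelper.cacheTheFoo(foo));
}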
I've recently published a library on GitHub for Android and Java, called RxCache, which meets your needs regarding caching data using observables.
RxCache implements two caching layers, memory and disk, and it provides several annotations to configure the behaviour of every provider.
It is highly recommended to use it with Retrofit for data retrieved from HTTP calls. Using lambda expressions, you can formulate an expression as follows:

rxCache.getUser(retrofit.getUser(id), () -> true).flatMap(user -> user);
I hope you will find it interesting :)
Take a look at the project below. This is my personal take on things and I have used this pattern in a number of apps.
https://github.com/zsiegel/rxandroid-architecture-sample
Take a look at the PersistenceService. Rather than hitting the database (or MockService in the example project), you could simply have a local list of users that is updated with the save() method and just returned in the get().
Let me know if you have any questions.
I have several URLs I need to get data from; this should happen in order, one by one. The amount of data returned by requesting those URLs is relatively big. I need to be able to reschedule particular downloads that failed.
What is the best way to go? Shall I use IntentService, Loaders or something else?
Additional note: I would need not only to download, but also to post-process the data (create tables in the db, fill them with data, etc.), so DownloadManager can't help here.
I would use an IntentService.
It has a number of advantages that suit your needs, including being able to download the data without your application running and supporting automatic restart of the service using setIntentRedelivery().
You can set a number of identifiers for the particular job you need to perform using Intent extras, and you can keep track of the progress using SharedPreferences; that way you can also resume the work if it was cancelled previously.
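A minimal sketch of such a service (the class name and extra key are placeholders):

public class DownloadService extends IntentService {

    public DownloadService() {
        super("DownloadService");
        // Redeliver the last Intent if the process dies mid-download.
        setIntentRedelivery(true);
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        String url = intent.getStringExtra("url"); // identifier for this job
        // Download and post-process (db tables etc.) here, recording progress
        // in SharedPreferences; each Intent is handled off the main thread.
    }
}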
The easiest way is probably to use the system DownloadManager http://developer.android.com/reference/android/app/DownloadManager.html
I would suggest a service for this. Having a service resolves many problems:
It allows asynchronous progress reporting to the application, so you can enable or disable specific GUI elements based on the download status of the data.
It allows you to continue the download even if the user switches to another application or closes the application.
It allows you to establish independent communication with the server to prioritize downloads without user interaction.
Try a WakefulIntentService for creating a long-running job that uses wake locks to keep your task alive and running: https://github.com/commonsguy/cwac-wakeful.
Also, if your whole app process is getting killed, you may want to look into persisting the task queue to disk, using something like Tape, from Square.
I think the way to go is to load the URLs into an array, then start an AsyncTask that returns a boolean to onPostExecute indicating whether the operation succeeded. Then, keeping a global int index, you can run the AsyncTask with the next index on success, or the same index otherwise. Here is a pseudocode:
private int index = 0;
// this array must be loaded with urls
private ArrayList<String> urlsArray;

// kick off the first download
new MyDownloaderAsyncTask().execute(urlsArray.get(index));

class MyDownloaderAsyncTask extends AsyncTask<String, String, Boolean> {

    @Override
    protected Boolean doInBackground(String... input) {
        // downloadMyData() downloads the data and returns a boolean
        return downloadMyData();
    }

    @Override
    protected void onPostExecute(Boolean result) {
        if (result)
            index++; // success: move on to the next url
        if (index < urlsArray.size())
            new MyDownloaderAsyncTask().execute(urlsArray.get(index)); // failure retries the same index
    }
}
Hope this helps.
I have just completed an open source library that can do exactly what you need. Using droidQuery, you can do something like this:
$.ajax(new AjaxOptions().url("http://www.example.com")
        .type("GET")
        .dataType("JSON")
        .context(this)
        .success(new Function() {
            @Override
            public void invoke($ droidQuery, Object... params) {
                // since dataType is JSON, params[0] is a JSONObject
                JSONObject obj = (JSONObject) params[0];
                // TODO handle data
                // TODO start the next ajax task
            }
        })
        .error(new Function() {
            @Override
            public void invoke($ droidQuery, Object... params) {
                AjaxError error = (AjaxError) params[0];
                // TODO adjust error.options before retry:
                $.ajax(error.request, error.options);
            }
        }));
You can specify other data types, which will return different object types, such as JSONObject, String, Document, etc.
Similar to @Murtuza Kabul I'd say use a service, but it's a little more complicated than that. We have a similar situation related to constant internet access and updates, although ours places greater focus on keeping the service running. I'll try to highlight the main features without drowning you in too much detail (and the code is owned by the company ;) )
Use the android.permission.RECEIVE_BOOT_COMPLETED permission and a BroadcastReceiver listening for android.intent.action.BOOT_COMPLETED to poke the service awake.
Don't link the service to the Activity; you want it running all the time, e.g. we call context.startService(new Intent(context.getApplicationContext(), OurService.class)).
The service class is just a simple class which registers and calls an OurServiceHandler (in our case we fire off repeated checks and the Handler manages the 'ticks').
We have an OurServiceRunnable, a singleton which is checked and called by the Handler for each test. It protects against overlapping updates. It delegates to an OurServiceWorker to do the actual lifting.
It sounds heavy-handed, but you want to ensure that the service is always running, always ticking (via the Handler), yet only running a single check at a time. You're also going to run into database issues if you use the standard SQLite DbHelper paradigm, as you can't open the DB on multiple threads, and you definitely want the internet access off the main thread. Our hack was a java.util.concurrent.locks.ReentrantLock protecting access to the DB (sketched below), but you could probably keep DB access on the UI thread and pass DB operations via the Handler.
Beyond this it's just a matter of keeping the downloads atomic in terms of "get task, download task, complete task", or enabling it to pick up from a failed state, e.g. downloaded OK, attempt to complete.
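The ReentrantLock hack mentioned above could look roughly like this (names are invented for the sketch):

private final ReentrantLock dbLock = new ReentrantLock();

private void writeResultToDb(DownloadResult result) {
    dbLock.lock(); // one thread in the DB at a time
    try {
        SQLiteDatabase db = dbHelper.getWritableDatabase();
        // insert/update the rows for this completed download
    } finally {
        dbLock.unlock(); // always release, even if the write throws
    }
}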
You should take a look at the Volley library:
http://www.javacodegeeks.com/2013/06/android-volley-library-example.html
There is also an interesting video by the author that took place at Google I/O 2013:
http://www.youtube.com/watch?v=yhv8l9F44qo
Mainly because it eases the process of managing a lot of these tedious tasks: connection checking, connection interruption, queue management, retry, resume, etc. (a minimal request sketch follows the quote below).
Quoting from javacodegeeks, "Advantages of using Volley:
Volley automatically schedules all network requests. It means that Volley will be taking care of all the network requests your app executes for fetching a response or image from the web.
Volley provides transparent disk and memory caching.
Volley provides a powerful request cancellation API. It means that you can cancel a single request or you can set blocks or scopes of requests to cancel.
Volley provides powerful customization abilities.
Volley provides debugging and tracing tools."
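For a feel of the API, a minimal Volley GET request might look like this (the URL and follow-up logic are placeholders, not from the article):

RequestQueue queue = Volley.newRequestQueue(context);
StringRequest request = new StringRequest(Request.Method.GET,
        "http://www.example.com/data",
        response -> {
            // parse the response, write it to the db,
            // then enqueue the request for the next URL
        },
        error -> {
            // failed: add an equivalent request again to reschedule it
        });
queue.add(request);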
Update from dennisdrew:
For large files, better to use a variant of Volley which allows using another HTTP client implementation. This link gives more details:
The Volley article about this modification:
http://ogrelab.ikratko.com/android-volley-examples-samples-and-demos/
The GitHub file detail:
https://github.com/ogrebgr/android_volley_examples/blob/master/src/com/github/volley_examples/toolbox/ExtHttpClientStack.java
public class FetchDataFromDBThread implements Runnable {

    /*
     * Defines the code to run for this task.
     */
    @Override
    public void run() {
        // Moves the current Thread into the background
        android.os.Process
                .setThreadPriority(android.os.Process.THREAD_PRIORITY_BACKGROUND);
        FetchDataFromDB();
    }
}