I wrote a lazy image downloader for my app using an ExecutorService. It gives me great control over how many downloads run in parallel at any given time, and so on.
Now, the only problem I have is that when I submit a task it ends up at the tail of the queue (FIFO).
Does anyone know how to change this to LIFO?
You can do it in two or three simple steps:
Create a LifoBlockingDeque class:
public class LifoBlockingDeque<E> extends LinkedBlockingDeque<E> {

    @Override
    public boolean offer(E e) {
        // Override to put objects at the front of the list
        return super.offerFirst(e);
    }

    @Override
    public boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException {
        // Override to put objects at the front of the list
        return super.offerFirst(e, timeout, unit);
    }

    @Override
    public boolean add(E e) {
        // Override to put objects at the front of the list
        return super.offerFirst(e);
    }

    @Override
    public void put(E e) throws InterruptedException {
        // Override to put objects at the front of the list
        super.putFirst(e);
    }
}
Create the executor:
mThreadPool = new ThreadPoolExecutor(THREAD_POOL_SIZE,
THREAD_POOL_SIZE, 0L,
TimeUnit.MILLISECONDS,
new LifoBlockingDeque<Runnable>());
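Submitting tasks then works as with any other executor; here is a minimal usage sketch (imageUrls and downloadImage(...) are placeholders for your own download logic, not part of the code above):
// With the LIFO deque in place, the most recently submitted download is taken
// from the head of the queue, so it runs first as soon as a worker is free.
for (final String url : imageUrls) {
    mThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            downloadImage(url); // placeholder for your existing download logic
        }
    });
}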
LinkedBlockingDeque is supported only from API Level 9. To use it on earlier versions, do the following:
Use the Java 1.6 implementation - download it from here.
Then change
implements BlockingDeque<E>
to
implements BlockingQueue<E>
to make it compile on Android. BlockingDeque is a subtype of BlockingQueue, so no harm done.
And you're done!
You will need to specify the queue type that the ExecutorService uses.
Typically you might retrieve an ExecutorService via the static methods in Executors. Instead, you will need to instantiate one directly and pass in a queue that provides LIFO ordering.
E.g., to create a LIFO thread pool executor, you could use the following constructor.
ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue)
and pass in a LIFO queue as the final parameter.
There is no LIFO queue in the Java collections that I am aware of (please correct me if wrong), but you could easily create an anonymous inner class that extends LinkedBlockingDeque and overrides the appropriate methods.
For example (untested):
ThreadPoolExecutor executor = new ThreadPoolExecutor(4, 16, 1, TimeUnit.MINUTES,
        new LinkedBlockingDeque<Runnable>() {
            @Override
            public boolean offer(Runnable r) {
                // override offer(), which the executor uses to queue tasks,
                // so that new tasks go to the front of the deque (LIFO)
                return super.offerFirst(r);
            }
        });
UPDATE in response to comments.
We can use a blocking queue that wraps a priority queue. We have to wrap because the Executor expects Runnables, but we also need timestamps.
// the class that will wrap the runnables
static class Pair {
    long timestamp;
    Runnable runnable;

    Pair(Runnable r) {
        this.timestamp = System.currentTimeMillis();
        this.runnable = r;
    }
}
ThreadPoolExecutor executor = new ThreadPoolExecutor(4, 16, 1, TimeUnit.MINUTES, new BlockingQueue<Runnable>() {

    private final Comparator<Pair> comparator = new Comparator<Pair>() {
        @Override
        public int compare(Pair arg0, Pair arg1) {
            Long t1 = arg0.timestamp;
            Long t2 = arg1.timestamp;
            // compare in reverse so the newest timestamp comes first (LIFO). Could also do
            // -t1.compareTo(t2);
            return t2.compareTo(t1);
        }
    };

    private final PriorityBlockingQueue<Pair> backingQueue = new PriorityBlockingQueue<Pair>(11, comparator);

    @Override
    public boolean add(Runnable r) {
        return backingQueue.add(new Pair(r));
    }

    @Override
    public boolean offer(Runnable r) {
        return backingQueue.offer(new Pair(r));
    }

    @Override
    public boolean offer(Runnable r, long timeout, TimeUnit unit) {
        return backingQueue.offer(new Pair(r), timeout, unit);
    }

    // implement / delegate the rest of the methods to the backing queue
    // (e.g. take() should return backingQueue.take().runnable)
});
The ThreadPoolExecutor has a constructor which allows you to specify the queue type to use. You can plug any BlockingQueue in there, and a priority queue might be a good fit for you. You can configure the priority queue to sort based on a (creation) timestamp which you add to your download jobs, and the executor will execute the jobs in the desired order.
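As a rough sketch of that idea (untested; the TimestampedTask wrapper, the pool sizes and downloadJob below are made up for illustration), a newest-first comparison on the creation time gives LIFO-like ordering:
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A Runnable wrapper that remembers its creation time and sorts newest-first,
// so a PriorityBlockingQueue hands out the most recently added task first.
class TimestampedTask implements Runnable, Comparable<TimestampedTask> {
    private final long createdAt = System.nanoTime();
    private final Runnable delegate;

    TimestampedTask(Runnable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        delegate.run();
    }

    @Override
    public int compareTo(TimestampedTask other) {
        // reverse order: the task created last is ordered first (LIFO-like)
        return Long.compare(other.createdAt, this.createdAt);
    }
}

// Usage sketch: only feed the executor TimestampedTask instances, and use
// execute() rather than submit(), because submit() wraps tasks in a
// FutureTask that is not Comparable.
ThreadPoolExecutor executor = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS,
        new PriorityBlockingQueue<Runnable>());

executor.execute(new TimestampedTask(downloadJob)); // downloadJob is a placeholder Runnable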
I had the same requirements: lazy loading and LIFO for a better user experience. So I used a ThreadPoolExecutor with a wrapped BlockingQueue (as mentioned before).
For easy backward compatibility I decided to go the easy way: on older devices I simply use a fixed thread pool, which means FIFO ordering. That's not perfect, but okay for a first attempt. It looks like this:
try {
    sWorkQueue = new BlockingLifoQueue<Runnable>();
    sExecutor = (ThreadPoolExecutor) Class.forName("java.util.concurrent.ThreadPoolExecutor")
            .getConstructor(int.class, int.class, long.class, TimeUnit.class, BlockingQueue.class)
            .newInstance(3, DEFAULT_POOL_SIZE, 10, TimeUnit.MINUTES, sWorkQueue);
    if (BuildConfig.DEBUG) Log.d(LOG_TAG, "Thread pool with LIFO working queue created");
} catch (Exception e) {
    if (BuildConfig.DEBUG) Log.d(LOG_TAG, "LIFO working queues are not available. Using default fixed thread pool");
    sExecutor = (ThreadPoolExecutor) Executors.newFixedThreadPool(DEFAULT_POOL_SIZE);
}
Related
I have an async method makeRequest() with a callback. It is called many times from different classes of my application. I need these calls to start one by one and never run simultaneously.
I want to implement this using Rx, like this:
public void execute() { // This method is called many times from other classes
Observable.just(true)
// what do I need to add here?
.subscribeOn(Schedulers.io())
.observeOn(AndroidSchedulers.mainThread())
.map(o -> {
internalExecute();
return o;
})
.subscribe();
}
private void internalExecute() { // This method should be called only when the previous call has finished
makeRequest(this::onRequestFinished);
}
private void onRequestFinished() {
// here is I handle request finish
}
But right now all the requests run in parallel. What do I need to add here to run the requests one by one?
According to the comments, you have separate streams and requests here: each client that executes a request expects a result from that request, but no requests are allowed to run in parallel. In this case I think the easiest way is to limit the Scheduler to an application-global, sequential single-threaded background Executor, i.e.:
Schedulers.from(Executors.newSingleThreadExecutor())
Provide this single-thread Executor somewhere in your app, as a singleton of course; it's important that every request stream uses the same object:
private final Scheduler singleThreadScheduler = Schedulers.from(Executors.newSingleThreadExecutor());
public void execute() { // This method is called many times from other classes
Observable.just(true)
.map(o -> {
internalExecute();
return o;
})
.subscribeOn(singleThreadScheduler)
.subscribe();
}
private void internalExecute() { // This method should be called only when the previous call has finished
makeRequest(this::onRequestFinished);
}
private void onRequestFinished() {
// NOTE: you should make sure that the callback executes where you need it (main thread?)
// here is I handle request finish
}
Besides that, you're not exposing an Observable to the clients, but rather using a callback mechanism. You can take the reactive approach further by making execute() return an Observable (and enjoy composition of Observables, operators, proper use of observeOn/subscribeOn, error handling with onError, disposing/unsubscribing, etc.). As you're using an async API, you can use fromEmitter()/create() (in newer RxJava 1 versions); read more here:
private final Scheduler singleThreadScheduler = Schedulers.from(Executors.newSingleThreadExecutor());

public Observable<Result> execute() { // This method is called many times from other classes
    return Observable.fromEmitter(new Action1<Emitter<Result>>() {
        @Override
        public void call(Emitter<Result> emitter) {
            emitter.setCancellation(() -> {
                // cancel the request on unsubscribing
            });
            makeRequest(result -> {
                emitter.onNext(result);
            });
        }
    }, Emitter.BackpressureMode.BUFFER) // fromEmitter() also takes a backpressure mode
            .subscribeOn(singleThreadScheduler);
}
Here's an interesting question. In my Android project I am doing lots of SQL operations with SQLite. For this I am using a thread pool, in order to reuse existing resources. The thread pool looks like this:
final int NUMBER_OF_CORES = Runtime.getRuntime().availableProcessors();
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
        NUMBER_OF_CORES * 2, NUMBER_OF_CORES * 2, 1L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(12, true),
        new PriorityThreadFactory(Process.THREAD_PRIORITY_BACKGROUND),
        new RejectedThread(context));
public class PriorityThreadFactory implements ThreadFactory {

    private final int mThreadPriority;

    public PriorityThreadFactory(int threadPriority) {
        mThreadPriority = threadPriority;
    }

    @Override
    public Thread newThread(final Runnable runnable) {
        Runnable wrapperRunnable = new Runnable() {
            @Override
            public void run() {
                try {
                    android.os.Process.setThreadPriority(mThreadPriority);
                } catch (Throwable t) {
                    // ignore: fall back to the default thread priority
                }
                runnable.run();
            }
        };
        return new Thread(wrapperRunnable);
    }
}
public class RejectedThread implements RejectedExecutionHandler {
MyLogger myLogger;
public RejectedThread(Context context) {
this.myLogger=new MyLogger(RejectedThread.class.getSimpleName(), context);
}
@Override
public void rejectedExecution(Runnable worker, ThreadPoolExecutor executor) {
this.myLogger.info("Execution rejected for: "+worker.toString());
}
}
I am also creating a new Runnable for every CRUD (Create-Read-Update-Delete) operation I run against the database (executed by the thread pool above). Here is the question: besides the thread pool for SQL operations, I would need one more thread pool for executing logger operations, to log system behaviour for the rest of my functions. Is there a way to prevent crashes or insufficient-resources problems, given that I am using two or more thread pool executors (allocated separately, used for different purposes, and never executing one thread pool executor on another)?
I think that in general your idea is very good, but your implementation is a bit inefficient.
Try to answer these questions to yourself:
Why do you need two thread pools?
Do you REALLY need two thread pools?
Why do you set CORE size to NUMBER_OF_CORES*2?
Why do you set MAX size to NUMBER_OF_CORES*2?
Do you REALLY need to overwrite threads priorities?
In my experience, none of the above complications are really necessary.
For example, in all my apps I use a single instance of BackgroundThreadPoster class in order to offload work to background threads. The class is very simple:
/**
* A single instance of this class should be used whenever we need to post anything to a (random) background thread.
*/
public class BackgroundThreadPoster {
ExecutorService mExecutorService = Executors.newCachedThreadPool();
public void post(Runnable runnable) {
mExecutorService.execute(runnable);
}
}
The default, pre-configured implementation returned by Executors.newCachedThreadPool() works like magic, and I've never encountered any need to customize its parameters.
A full tutorial application that uses this approach can be found here: https://github.com/techyourchance/android_mvc_tutorial
Maybe this can work for you too?
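Applied to your case, both the SQLite work and the logging work could be posted to the same shared instance instead of maintaining two pools. A rough sketch (the BackgroundWork holder and its method names are invented for illustration; MyLogger is your own class):
// One shared background poster for the whole app instead of two thread pools.
public class BackgroundWork {

    private static final BackgroundThreadPoster POSTER = new BackgroundThreadPoster();

    // CRUD operations and any other background work go through the same pool
    public static void post(Runnable work) {
        POSTER.post(work);
    }

    // logging is just another small background task
    public static void log(final MyLogger logger, final String message) {
        POSTER.post(new Runnable() {
            @Override
            public void run() {
                logger.info(message);
            }
        });
    }
}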
I am just getting started with RxJava2 and wonder how I could correctly implement a UDP observable.
I already have some working code, but I think there may be some issues: see the 4 questions in the comments of the source-code below.
I've also published the code on GitHub RxJava2_Udp: comments, issues and pull requests welcome.
class UdpObservable {
private static class UdpThread extends Thread {
private final int portNo;
private final int bufferSizeInBytes;
private final ObservableEmitter<DatagramPacket> emitter;
private DatagramSocket udpSocket;
private UdpThread(@NonNull ObservableEmitter<DatagramPacket> emitter
, int portNo, int bufferSizeInBytes) {
this.emitter = emitter;
this.portNo = portNo;
this.bufferSizeInBytes = bufferSizeInBytes;
}
@Override
public void run() {
try {
// we don't want to create the DatagramSocket in the constructor, because this
// might raise an Exception that the observer wants to handle
udpSocket = new DatagramSocket(portNo);
try {
/* QUESTION 1:
Do I really need to check isInterrupted() and emitter.isDisposed()?
When the thread is interrupted an interrupted exception will
be raised anyway and the emitter is being disposed (this is what
caused the interruption)
*/
while (!isInterrupted() && !emitter.isDisposed()) {
byte[] rcvBuffer = new byte[bufferSizeInBytes];
DatagramPacket datagramPacket = new DatagramPacket(rcvBuffer, rcvBuffer.length);
udpSocket.receive(datagramPacket);
// QUESTION 1a: same as QUESTION 1 above
if (!isInterrupted() && !emitter.isDisposed()) {
emitter.onNext(datagramPacket);
}
}
} finally {
closeUdpSocket();
}
} catch (Throwable th) {
// the thread will only be interrupted when the observer has unsubscribed:
// so we need not report it
if (!isInterrupted()) {
if (!emitter.isDisposed()) {
emitter.onError(th);
} else {
// QUESTION 2: is this the correct way to handle errors, when the emitter
// is already disposed?
RxJavaPlugins.onError(th);
}
}
}
}
private void closeUdpSocket() {
if (!udpSocket.isClosed()) {
udpSocket.close();
}
}
@Override
public void interrupt() {
super.interrupt();
// QUESTION 3: this is called from an external thread, right, so
// how can we correctly synchronize the access to udpSocket?
closeUdpSocket();
}
}
/**
* creates an Observable that will emit all UDP datagrams of a UDP port.
* <p>
* This will be an infinite stream that ends when the observer unsubscribes, or when an error
* occurs. The observer does not handle backpressure.
* </p>
*/
public static Observable<DatagramPacket> create(final int portNo, final int bufferSizeInBytes) {
return Observable.create(
new ObservableOnSubscribe<DatagramPacket>() {
@Override
public void subscribe(ObservableEmitter<DatagramPacket> emitter) throws Exception {
final UdpThread udpThread = new UdpThread(emitter, portNo, bufferSizeInBytes);
/* QUESTION 4: Is this the right way to handle unsubscription?
*/
emitter.setCancellable(new Cancellable() {
@Override
public void cancel() throws Exception {
udpThread.interrupt();
}
});
udpThread.start();
}
}
);
}
}
Generally speaking, I think this is not the right way to create it: you should not create the thread yourself, as RxJava and its Schedulers should do that for you.
Consider that the code passed to the ObservableOnSubscribe will run on a thread according to your Scheduler strategy, so you don't need to construct one yourself; just do the while-loop inside create().
You don't need to call Thread.interrupt() either; RxJava will do that for you when you dispose of (unsubscribe from) the Observable. (Set the Cancellable before the while-loop, of course.)
As for your questions:
1. You don't need to check isInterrupted(): the exception will be raised anyway if you're waiting on an I/O operation. You also don't need to check for disposal, because onNext() does that for you and will not emit after unsubscription.
2. Again, you can just call onError() and the emitter will take care of checking whether the Observable has been unsubscribed.
3. As said before, there should be no Thread; for resource cleanup you can use the emitter.setCancellable() method (close the socket there). This happens on the same thread your code runs on.
4. As answered before, Thread.interrupt() will be issued on dispose/unsubscribe by RxJava; resource cleanup should go into the emitter.setCancellable() method.
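To make that concrete, here is a rough, untested sketch of what the create-only version might look like (RxJava 2 assumed, reusing the same DatagramSocket/DatagramPacket types as your code; error handling simplified):
public static Observable<DatagramPacket> create(final int portNo, final int bufferSizeInBytes) {
    return Observable.create(new ObservableOnSubscribe<DatagramPacket>() {
        @Override
        public void subscribe(ObservableEmitter<DatagramPacket> emitter) throws Exception {
            final DatagramSocket udpSocket = new DatagramSocket(portNo);
            // closing the socket unblocks receive() with a SocketException;
            // the emitter runs this Cancellable on dispose and after terminal events
            emitter.setCancellable(new Cancellable() {
                @Override
                public void cancel() {
                    udpSocket.close();
                }
            });
            try {
                while (!emitter.isDisposed()) {
                    byte[] rcvBuffer = new byte[bufferSizeInBytes];
                    DatagramPacket datagramPacket = new DatagramPacket(rcvBuffer, rcvBuffer.length);
                    udpSocket.receive(datagramPacket);
                    emitter.onNext(datagramPacket); // no-op if already disposed
                }
            } catch (Throwable th) {
                if (!emitter.isDisposed()) {
                    emitter.onError(th);
                }
            }
        }
    }).subscribeOn(Schedulers.io());
}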
Below you see some code that works fine, but only once. It is supposed to block until the runOnUiThread call is finished. And it does, the first time it is called. But when called a second time, it runs through to the end, and only then does the runOnUiThread runnable start running. It could be that after the method ran the first time, the thread that called it still holds the lock, and when it calls the method a second time, it just runs through. Is this right? And what can I do to fix it? Or is it a timing problem, and the second time the caller gets the lock first?
static Integer syn = 0;
@Override
public String getTanFromUser(long accid, String prompt) {
// make parameters final
final long accid_int = accid;
final String prompt_int = prompt;
Runnable tanDialog = new Runnable() {
public void run() {
synchronized(syn) {
tanInputData = getTANWithExecutionStop(TransferFormActivity.this);
syn.notify() ;
}
}
};
synchronized(syn) {
runOnUiThread(tanDialog);
try {syn.wait();}
catch (InterruptedException e) {}
}
return tanInputData;
}
Background: the thread that calls this method is an AsyncTask inside a bound service that is doing transactions with a bank in the background. At irregular intervals the bank sends requests for user verification (captcha, control questions, requests for a PIN, etc.) and the service must display some dialogs via a weak-referenced callback to the activities in the foreground. Since the service runs several nested while-loops, it is easier to show the dialogs synchronously than to stop and restart the service (saving/restoring the state data would be too complex).
You could try whether using a Callable inside a FutureTask, instead of a Runnable, works better. As far as I understand, that combination is meant to provide return values from threads.
public String getTanFromUser(long accid, String prompt) {
// make parameters final
final long accid_int = accid;
final String prompt_int = prompt;
Callable<String> tanDialog = new Callable<String>() {
public String call() throws Exception {
return getTANWithExecutionStop(TransferFormActivity.this);
}
};
FutureTask<String> task = new FutureTask<String>(tanDialog);
runOnUiThread(task);
String result = null;
try {
result = task.get();
}
catch (InterruptedException e) { /* whatever */ }
catch (ExecutionException e) { /* whatever */ }
return result;
}
A Callable is like a Runnable but has a return value.
A FutureTask does the synchronization and waits for the result, similar to your wait()/notify(). FutureTask also implements Runnable, so it can be used with runOnUiThread.
I need to perform a series of http requests, each of which may depend on a previous http response. I have been able to achieve this using an AsyncTask "tree" of sorts, but as the decision tree grows, the AsyncTask technique grows more unwieldy.
I think that somehow using a SynchronousQueue (or other type of queue) is the best approach, but I can't seem to find any good guidance or tutorials on how to use a Queue for something like http requests.
Can anyone provide any guidance or point to any good tutorials on using SynchronousQueue or suggest the best kind of Queue?
Use a single-thread executor (java.util.concurrent.Executors.newSingleThreadExecutor()) and make a Runnable out of each HTTP operation and result handler. You can submit subsequent tasks to it as you determine whether you need to continue.
For example, the HTTP "task" would run and submit the Result "task" on success, or the Error "task" on failure. The Result task would in turn submit another HTTP task when it was done processing. Using a single-thread executor ensures only one task runs at a time.
You could use a ThreadPoolExecutor if you can handle multiple operations in-flight at once.
Take all that, and wrap it in an AsyncTask that manages the top-level "kick-off" and waits for everything to complete. It would probably be useful to have a ConditionVariable or something to synchronize the "end" signal (using a Done "task") so you can safely tear down the Executor.
A SynchronousQueue doesn't do anything helpful for you here, because it leaves you to do all the thread management. If you use an Executor, that is all handled, and all you deal with are Runnables and Futures. That's probably why you are not finding any tutorials. Anyway, the Executors all use one of those queue implementations underneath!
As requested, here is some skeleton Java code. Unsupported, untested, as-is. This should get you started. You can use a different synchronization object if you don't like ConditionVariable.
This is a generic technique, not specific to Android, feel free to use it in other contexts.
This functions as a state machine, with HttpTask et al. forming the states, and the transitions hard-coded by submitting the next state to the ExecutorService. There's even a "Big Bang at the end, so everyone knows when to clap" in the form of the ConditionVariable.
Some may consider DoneTask and FailedTask overkill, but it keeps the Next State mechanism consistent, and lets Future<? extends ResultTask> function as a somewhat type-safe container for the results, and certainly keeps you from mis-assigning to it.
abstract class BasicTask {
final ExecutorService es;
final ConditionVariable cv;
public BasicTask(ExecutorService es, ConditionVariable cv) {
this.es = es;
this.cv = cv;
}
}
abstract class HttpTask extends BasicTask {
// source omitted.
// you should make a class to prepare e.g. Apache HTTP resources for specific tasks (see below).
}
abstract class ResultTask implements Runnable {
final ConditionVariable cv;
public ResultTask(ConditionVariable cv) {
this.cv = cv;
}
public void run() {
cv.open();
}
}
final class FailedTask extends ResultTask {
final Exception ex;
public FailedTask(ConditionVariable cv, Exception ex) {
super(cv);
this.ex = ex;
}
public Exception getError() { return ex; }
}
final class DoneTask<T> extends ResultTask {
final T results;
public DoneTask(ConditionVariable cv, T results) {
super(cv);
this.results = results;
}
public T getResults() { return results; }
}
class HttpSequence extends AsyncTask<Void,Void,Object> {
// this will capture the ending task
Future<? extends ResultTask> result;
// this is an inner class, in order to set Result. Refactor so these are small.
// if you don't like inner classes, you still need to arrange for capturing the "answer"
final class SomeHttpTask extends HttpTask implements Runnable {
public void run() {
try {
final SomeType thisStep = doTheStuff(lastStep);
if(thisStep.isDone()) {
// we are done here
result = es.submit(new DoneTask<SomeType>(cv, thisStep));
}
else if(thisStep.isFailed()) {
// not done: we can't proceed because of something in the response
throw thisStep.getError();
}
else {
// not done, everything is ok for next step
es.submit(new NextHttpTask(es, cv, thisStep));
}
}
catch(Exception ex) {
result = es.submit(new FailedTask(cv, ex));
}
}
}
final class TheFirstTask extends HttpTask implements Runnable {
// source omitted. just emphasizing you need one of these for each "step".
// if you don't need to set Result, this could be a static inner class.
}
@Override
public Object doInBackground(Void... params) {
final ExecutorService es = Executors.newSingleThreadExecutor();
final ConditionVariable cv = new ConditionVariable(false);
try {
es.submit(new TheFirstTask(es, cv));
// you can choose not to timeout at this level and simply block until something happens...
final boolean done = cv.block(timeout);
if(!done) {
// you will need to account for unfinished threads, see finally section!
return new IllegalStateException("timed out waiting on completion!");
}
if(result != null) {
final ResultTask resultTask = result.get();
if(resultTask instanceof DoneTask) {
// pass SomeType to onPostExecute()
return ((DoneTask<SomeType>)resultTask).getResults();
}
else if(resultTask instanceof FailedTask) {
// pass Exception to onPostExecute()
return ((FailedTask)resultTask).getError();
}
else {
// something bad happened, pass it to onPostExecute()
return new IllegalStateException("something unexpected signalled CV!");
}
}
else {
// something bad happened, pass it to onPostExecute()
return new IllegalStateException("something signalled CV without setting result!");
}
}
catch(Exception ex) {
// something outside workflow failed, pass it to onPostExecute()
return ex;
}
finally {
// naive shutdown (doesn't interrupt running tasks): read JavaDoc on ExecutorService for details
es.shutdown();
}
}
@Override
public void onPostExecute(Object result) {
if(result instanceof SomeType) {
// success UI
}
else if(result instanceof Exception) {
// error UI
}
}
}
I can't say for sure without knowing the details of your use case, but you probably want to avoid the SynchronousQueue, as it will block the thread putting things into the queue until the listener thread takes them back out. If you were putting things in from the UI thread you'd be locking up the UI.
I think a BlockingQueue may suit your needs. The JavaDoc has a good producer-consumer example.
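For reference, here is a minimal producer-consumer sketch along the lines of that example (untested; the RequestQueue class and its names are invented for illustration, with each enqueued Runnable standing in for one HTTP step and its result handling):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A single worker thread drains a BlockingQueue of request Runnables, so the
// requests run one after another while callers enqueue from any thread.
public class RequestQueue {

    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();

    private final Thread worker = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    queue.take().run(); // blocks until a request is available
                }
            } catch (InterruptedException e) {
                // interrupted: fall through and let the worker thread end
            }
        }
    });

    public RequestQueue() {
        worker.start();
    }

    public void enqueue(Runnable request) {
        queue.offer(request); // never blocks here: this LinkedBlockingQueue is unbounded
    }

    public void shutdown() {
        worker.interrupt();
    }
}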