I am very confused by ExoPlayer and its documentation; everything is explained very briefly.
Can anyone please tell me what exactly LeastRecentlyUsedCacheEvictor is and how it works? What are its use cases and methods?
ExoPlayer's video cache uses a CacheEvictor instance to tell the library when to delete cached files. LeastRecentlyUsedCacheEvictor, as the name suggests, applies that policy in least-recently-used order.
Assume you have watched videos A, B, C, A (again) and D (order matters) and you hit the maximum cache capacity passed to the LeastRecentlyUsedCacheEvictor constructor. The evictor looks at the cache usage, finds video B to be the least recently used one, and deletes it to free space.
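In other words, eviction follows access recency, not insertion order. A tiny plain-Java illustration of that ordering with an access-ordered LinkedHashMap (this is just the bookkeeping idea, not ExoPlayer code):
import java.util.LinkedHashMap;

public class LruOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true keeps entries ordered from least to most recently used
        LinkedHashMap<String, String> lru = new LinkedHashMap<>(16, 0.75f, true);
        lru.put("A", "video A");
        lru.put("B", "video B");
        lru.put("C", "video C");
        lru.get("A");                     // watching A again makes it the most recently used
        lru.put("D", "video D");
        System.out.println(lru.keySet()); // [B, C, A, D] -> B would be evicted first
    }
}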
Here is a simple usage example:
public class VideoCacheSingleton {

    private static final int MAX_VIDEO_CACHE_SIZE_IN_BYTES = 200 * 1024 * 1024; // 200MB
    private static Cache sInstance;

    public static Cache getInstance(Context context) {
        if (sInstance != null) return sInstance;
        else return sInstance = new SimpleCache(
                new File(context.getCacheDir(), "video"),
                new LeastRecentlyUsedCacheEvictor(MAX_VIDEO_CACHE_SIZE_IN_BYTES),
                new ExoDatabaseProvider(context));
    }
}
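To actually route playback through that cache you wrap your upstream data source in a CacheDataSource. A minimal sketch, assuming ExoPlayer 2.12+ APIs and a placeholder videoUrl:
import android.content.Context;
import com.google.android.exoplayer2.MediaItem;
import com.google.android.exoplayer2.SimpleExoPlayer;
import com.google.android.exoplayer2.source.MediaSource;
import com.google.android.exoplayer2.source.ProgressiveMediaSource;
import com.google.android.exoplayer2.upstream.DataSource;
import com.google.android.exoplayer2.upstream.DefaultHttpDataSource;
import com.google.android.exoplayer2.upstream.cache.CacheDataSource;

public class CachedPlayerFactory {
    public static SimpleExoPlayer createPlayer(Context context, String videoUrl) {
        // Reads go through the shared cache first; misses fall back to HTTP and get written to the cache.
        DataSource.Factory cacheFactory = new CacheDataSource.Factory()
                .setCache(VideoCacheSingleton.getInstance(context))
                .setUpstreamDataSourceFactory(new DefaultHttpDataSource.Factory());

        MediaSource mediaSource = new ProgressiveMediaSource.Factory(cacheFactory)
                .createMediaSource(MediaItem.fromUri(videoUrl));

        SimpleExoPlayer player = new SimpleExoPlayer.Builder(context).build();
        player.setMediaSource(mediaSource);
        player.prepare();
        return player;
    }
}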
android.os.FileObserver requires a java.io.File to function.
But with Android 10 Google restricted access to everything but your app's private directory due to the famous "Storage Access Framework". Thus, accessing anything via java.io.File breaks and renders FileObserver useless unless you intend to use it in your app's private directory. However, I want to be notified when something is changed in a certain directory on external storage. I would also like to avoid periodically checking for changes.
I tried using ContentResolver.registerContentObserver(uri,notifyForDescendants,observer) and ran into some problems with that method:
Every Uri I have plugged in so far was accepted
It neither fails nor notifies if the Uri doesn't work
I cannot find any documentation telling me which Uris actually work
The only thing I got working to some extent is the following approach:
// works, but returns all changes to the external storage
contentResolver.registerContentObserver(MediaStore.Files.getContentUri("external"), true, contentObserver)
Unfortunately this includes all of the external storage and only returns Media Uris when changes happen - for example content://media/external/file/67226.
Is there a way to find out whether or not that Uri points to my directory?
Or is there a way to make registerContentObserver() work with a Uri in such a way that I get a notification whenever something in the folder has changed?
I also had no success trying various Uris related to DocumentFile and external storage.
I kept getting errors even when trying to use the base constructor, such as the following:
No direct method <init>(Ljava/util/List;I)V in class Landroid/os/FileObserver; or its super classes (declaration of 'android.os.FileObserver' appears in /system/framework/framework.jar!classes2.dex)
From a comment on Detect file change using FileObserver on Android:
I saw that message (or something like that) when i was trying to use constructor FileObserver(File). Use of deprecated FileObserver(String) solved my problem.... Original FileObserver has bugs.
Full disclosure: I was using the Xamarin.Android API; however, the gist and the commenter I quoted were both working with Java. At any rate, I tried again using the counterpart String constructor and was finally able to create and use the observer. It grinds my gears to use a deprecated API, but apparently they're hanging onto it at least up to and including Android 12.0.0_r3. Still, I would much prefer that the supported constructors actually worked. Maybe there's some warrant here for filing an issue.
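For reference, a minimal sketch of that workaround using the deprecated String-based constructor (the watched path is only an example):
// Keep a reference to the observer; if it is garbage collected it stops delivering events.
FileObserver observer = new FileObserver("/storage/emulated/0/Download", FileObserver.ALL_EVENTS) {
    @Override
    public void onEvent(int event, String path) {
        // path is relative to the watched directory; event is one of the FileObserver constants
        Log.d("Watcher", "event=" + event + " path=" + path);
    }
};
observer.startWatching();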
I found a way to implement FileObserver on Android 10 with ContentObserver, but it might only work with media files since it works with media content uris.
The uri for ContentResolver.registerContentObserver() should be the file's corresponding media uri (e.g. content://media/external/file/49) which is queried by file path.
fun getMediaUri(context: Context, file: File): Uri? {
val externalUri = MediaStore.Files.getContentUri("external")
context.contentResolver.query(
externalUri,
null,
"${MediaStore.Files.FileColumns.DATA} = ?",
arrayOf(file.path),
null
)?.use { cursor ->
if (cursor.moveToFirst()) {
val idIndex = cursor.getColumnIndex("_id")
val id = cursor.getLong(idIndex)
return Uri.withAppendedPath(externalUri, "$id")
}
}
return null
}
ContentObserver.onChange(boolean selfChange, Uri uri) will then be triggered for every file change. For unrelated files the delivered uri is just content://media/external/file, while changes to the registered file arrive with its id appended (e.g. content://media/external/file/49?deletedata=false).
So by checking whether the incoming uri's path matches the registered uri, you get roughly what FileObserver used to do.
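The helper above is Kotlin; here is a rough Java sketch of registering the returned Uri and filtering onChange, assuming the matching behaviour described above (handler choice and names such as watchedFile are placeholders):
final Uri fileUri = getMediaUri(context, watchedFile); // the media Uri returned by the helper above
if (fileUri != null) {
    ContentObserver observer = new ContentObserver(new Handler(Looper.getMainLooper())) {
        @Override
        public void onChange(boolean selfChange, Uri uri) {
            // the delivered uri may carry query parameters (e.g. ?deletedata=false), so compare paths only
            if (uri != null && fileUri.getPath() != null && fileUri.getPath().equals(uri.getPath())) {
                // the watched file changed
            }
        }
    };
    context.getContentResolver().registerContentObserver(fileUri, false, observer);
}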
I have a temporary solution for this issue, so let's see if it helps.
I start an infinite loop on a background thread that watches for files being created and deleted (if you want modified or renamed files you have to implement more), using DocumentFile. Below is my sample:
private static int currentFileIndirectory = 0;
private static final int FILE_CREATED = 0;
private static final int FILE_DELETED = 1;

private static DocumentFile[] onDirectoryChanged(DocumentFile[] documentFiles, int event) {
    Log.d("FileUtil", "onDirectoryChanged: " + event);
    if (event == FILE_CREATED) {
        // handle created files here
    } else {
        // handle deleted files here
    }
    return documentFiles;
}

private static boolean didStartWatching = false;

private static void startWatchingDirectory(final DocumentFile directory) {
    if (!didStartWatching) {
        didStartWatching = true;
        DocumentFile[] documentFiles = directory.listFiles();
        if (null != documentFiles && documentFiles.length > 0) {
            currentFileIndirectory = documentFiles.length;
        }
        new Thread(new Runnable() {
            public void run() {
                while (true) {
                    DocumentFile[] documentFiles = directory.listFiles();
                    if (null != documentFiles && documentFiles.length > 0) {
                        if (documentFiles.length != currentFileIndirectory) {
                            if (documentFiles.length > currentFileIndirectory) { // file created
                                // placeholder array sized to the number of new entries;
                                // diff the two listings if you need the actual new files
                                DocumentFile[] newFiles = new DocumentFile[documentFiles.length - currentFileIndirectory];
                                onDirectoryChanged(newFiles, FILE_CREATED);
                            } else { // file deleted
                                onDirectoryChanged(null, FILE_DELETED);
                            }
                            currentFileIndirectory = documentFiles.length;
                        }
                    }
                    try {
                        Thread.sleep(1000); // poll once per second instead of busy-waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }).start();
    }
}
I am currently using Picasso to download and cache images in my app inside multiple RecyclerViews. So far Picasso has used around 49MB of cache and I am worried that, as more images come into play, this will grow much larger.
I am using the default Picasso.with(context) object. Please answer the following:
1) Is there a way to restrict the size of Picasso's cache? Setting MemoryPolicy and NetworkPolicy to NO_CACHE isn't an option; I need caching, but only up to a certain level (60MB max).
2) Is there a way in Picasso to store resized/cropped images, like Glide's DiskCacheStrategy.RESULT?
3) If the option is to use OkHttp, please point me to a good tutorial for using it to limit Picasso's cache size. (Picasso 2.5.2)
4) Since I am using a Gradle dependency for Picasso, how can I add a clear-cache function as shown here:
Clear Cache memory of Picasso
Please try this one; it seems to work great for me:
I use it as a singleton.
Just put 60 where the DISK_CACHE_SIZE/MEMORY_CACHE_SIZE parameters are.
//Singleton Class for Picasso Downloading, Caching and Displaying Images Library
public class PicassoSingleton {
private static Picasso mInstance;
private static long mDiskCacheSize = CommonConsts.DISK_CACHE_SIZE * 1024 * 1024; //Disk Cache
private static int mMemoryCacheSize = CommonConsts.MEMORY_CACHE_SIZE * 1024 * 1024; //Memory Cache
private static OkHttpClient mOkHttpClient; //OK Http Client for downloading
private static Cache diskCache;
private static LruCache lruCache;
public static Picasso getSharedInstance(Context context) {
if (mInstance == null && context != null) {
//Create disk cache folder if does not exist
File cache = new File(context.getApplicationContext().getCacheDir(), "picasso_cache");
if (!cache.exists())
cache.mkdirs();
diskCache = new Cache(cache, mDiskCacheSize);
lruCache = new LruCache(mMemoryCacheSize);
//Create OK Http Client with retry enabled, timeout and disk cache
mOkHttpClient = new OkHttpClient();
mOkHttpClient.setConnectTimeout(CommonConsts.SECONDS_TO_OK_HTTP_TIME_OUT, TimeUnit.SECONDS);
mOkHttpClient.setRetryOnConnectionFailure(true);
mOkHttpClient.setCache(diskCache);
//For better memory performance, set memoryCache(Cache.NONE) in this builder (if needed)
mInstance = new Picasso.Builder(context).memoryCache(lruCache).
downloader(new OkHttpDownloader(mOkHttpClient)).
indicatorsEnabled(CommonConsts.SHOW_PICASSO_INDICATORS).build();
}
return mInstance;
}
public static void updatePicassoInstance() {
mInstance = null;
}
public static void clearCache() {
if(lruCache != null) {
lruCache.clear();
}
try {
if(diskCache != null) {
diskCache.evictAll();
}
} catch (IOException e) {
e.printStackTrace();
}
lruCache = null;
diskCache = null;
}
}
1) Yeah, easy: new com.squareup.picasso.LruCache(60 * 1024 * 1024). (just use your Cache instance in your Picasso instance like new Picasso.Builder(application).memoryCache(cache).build())
2) Picasso automatically uses the resize() and other methods' parameters as part of the keys for the memory cache. As for the disk cache, nope, Picasso does not touch your disk cache. The disk cache is the responsibility of the HTTP client (like OkHttp).
3) If you are talking about disk cache size: new OkHttpClient.Builder().cache(new Cache(directory, maxSize)).build(). (now you have something like new Picasso.Builder(application).memoryCache(cache).downloader(new OkHttp3Downloader(client)).build())
4) Picasso's Cache interface has a clear() method (and LruCache implements it, of course).
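Putting those four points together, a rough sketch assuming Picasso 2.x, OkHttp 3.x and Jake Wharton's picasso2-okhttp3-downloader artifact (sizes, context and the cache directory name are placeholders):
// 3) 60 MB OkHttp disk cache (the HTTP client owns the disk cache, not Picasso)
okhttp3.Cache diskCache = new okhttp3.Cache(new File(context.getCacheDir(), "picasso_ok_cache"), 60L * 1024 * 1024);
OkHttpClient client = new OkHttpClient.Builder().cache(diskCache).build();

// 1) 60 MB memory cache; 2) resize()/crop parameters are part of its keys
com.squareup.picasso.LruCache memoryCache = new com.squareup.picasso.LruCache(60 * 1024 * 1024);

Picasso picasso = new Picasso.Builder(context)
        .memoryCache(memoryCache)
        .downloader(new OkHttp3Downloader(client))
        .build();

// 4) Cache.clear() empties the memory cache when you need to
memoryCache.clear();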
OK, I did a lot of digging inside Picasso's and OkHttp's internal workings to find out how caching happens, what the policy is, etc.
For people trying to use the latest Picasso 2.5+ and OkHttp 3+, the accepted answer WILL NOT WORK! (My bad for not checking with the latest.)
1) getSharedInstance was not thread safe, so I made it synchronized.
2) If you don't want to make this call every time, call Picasso.setSingletonInstance(theCustomPicassoCreatedByGetSharedInstance) (see the sketch after this list).
P.S. Do this inside a try block to avoid an IllegalStateException when an activity is reopened very quickly after a destroy, since the static singleton is not destroyed. Also make sure this method gets called before any Picasso.with(context) calls.
3) Looking at the code, I would advise people not to meddle with LruCache unless absolutely sure; it can very easily lead either to wasted RAM or, if set too low, to OutOfMemoryExceptions.
4) It is fine if you don't do any of this. Picasso by default tries to create a disk cache via its built-in OkHttpDownloader. But this might or might not work depending on which Picasso version you use; if it doesn't, it falls back to the default Java URL downloader, which also does some caching of its own.
5) The main reason I see to do all this is to get the clear-cache functionality. As we all know, Picasso does not expose this easily, as it is protected inside the package. And most mere mortals like me use Gradle to include the package, which leaves us without cache-clearing access.
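Regarding point 2, a small sketch of that guard, for example in Application.onCreate() (it uses the PicassoSingleton class from the code below):
try {
    // must run before any Picasso.with(context) call
    Picasso.setSingletonInstance(PicassoSingleton.getSharedInstance(this));
} catch (IllegalStateException ignored) {
    // a singleton instance was already set (e.g. the activity was recreated very quickly)
}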
Here is the code along with all the options for what I wanted. This uses Picasso 2.5.2, OkHttp 3.4.0 and the OkHttp3Downloader by Jake Wharton.
package com.example.project.recommendedapp;
import android.content.Context;
import android.util.Log;
import com.jakewharton.picasso.OkHttp3Downloader;
import com.squareup.picasso.LruCache;
import com.squareup.picasso.Picasso;
import java.io.File;
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import okhttp3.Cache;
import okhttp3.OkHttpClient;
//Singleton Class for Picasso Downloading, Caching and Displaying Images Library
public class PicassoSingleton {
private static Picasso mInstance;
private static long mDiskCacheSize = 50*1024*1024; //Disk Cache limit 50mb
//private static int mMemoryCacheSize = 50*1024*1024; //Memory Cache 50mb, not currently using this. Using default implementation
private static OkHttpClient mOkHttp3Client; //OK Http Client for downloading
private static OkHttp3Downloader okHttp3Downloader;
private static Cache diskCache;
private static LruCache lruCache;//not using it currently
public static synchronized Picasso getSharedInstance(Context context)
{
if(mInstance == null) {
if (context != null) {
//Create disk cache folder if does not exist
File cache = new File(context.getApplicationContext().getCacheDir(), "picasso_cache");
if (!cache.exists()) {
cache.mkdirs();
}
diskCache = new Cache(cache, mDiskCacheSize);
//lruCache = new LruCache(mMemoryCacheSize);//not going to be using it, using default memory cache currently
lruCache = new LruCache(context); // This is the default lrucache for picasso-> calculates and sets memory cache by itself
//Create OK Http Client with retry enabled, timeout and disk cache
mOkHttp3Client = new OkHttpClient.Builder().cache(diskCache).connectTimeout(6000, TimeUnit.SECONDS).build(); // 6000-second (100-minute) connect timeout
//For better memory performance, set memoryCache(Cache.NONE) in this builder (if needed)
mInstance = new Picasso.Builder(context).memoryCache(lruCache).downloader(new OkHttp3Downloader(mOkHttp3Client)).indicatorsEnabled(true).build();
}
}
return mInstance;
}
public static void deletePicassoInstance()
{
mInstance = null;
}
public static void clearLRUCache()
{
if(lruCache!=null) {
lruCache.clear();
Log.d("FragmentCreate","clearing LRU cache");
}
lruCache = null;
}
public static void clearDiskCache(){
try {
if(diskCache!=null) {
diskCache.evictAll();
}
} catch (IOException e) {
e.printStackTrace();
}
diskCache = null;
}
}
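Usage then looks like any other Picasso call (the URL and the ImageView are placeholders):
// either go through the singleton directly...
PicassoSingleton.getSharedInstance(context).load("https://example.com/image.jpg").into(imageView);
// ...or, once Picasso.setSingletonInstance(...) has been called, keep using Picasso.with(context) as before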
I am having issues with the speed of communication between workers in AS3 code for AIR for Android. My test device is a Galaxy S2 (Android 4.0.4) and I am developing in FlashDevelop using AIR 18.0.
First things first.
I tried the good old AMF serialization, copying via a shared object. I was getting an average of 49 calculations/second on the physics engine (the secondary thread) with a stable 60 FPS on the main thread. I had to crank it up to over 300 dynamic objects to get any noticeable slowdown.
All went well, so I started on-device testing, and that is when things went sideways. I was getting less than 1.5 steps/s.
I started to dig a bit deeper, wrote a pile of instrumentation code to check what was so slow, and found that reading the shared objects was like watching other people watch paint dry.
At this point I started researching more deeply. I found a number of people already complaining about the speed of message channels (I found not much on shared objects; the "developers" status quo, I guess). So I decided to go as low-level as I could, using shared ByteArrays and mutexes. (I skipped Condition since I don't particularly want any of my threads to pause.)
In the desktop debugger I was getting 115-ish calculations/s, and over 350 calculations/s with a direct callback (the debugger did throw an exception; it wasn't designed for that kind of continuous processing, I guess). Shared ByteArrays and mutexes were as fast as advertised.
Then I debug on the S2 and, behold, I get 3.4 calculations/s with 200 dynamic objects.
So... concurrency on mobile was pretty much done for me. Then I thought I'd do a little test with no communication whatsoever. Same scene, physics doing a more than acceptable 40 calculations/s and graphics running at the expected 60 FPS...
So, my bluntly evident question:
What on earth is going on?
Here is my Com code:
package CCom
{
import Box2D.Dynamics.b2Body;
import Box2D.Dynamics.b2World;
import flash.concurrent.Condition;
import flash.concurrent.Mutex;
import flash.utils.ByteArray;
import Grx.DickbutImage;
import Phx.PhxMain;
/**
* shared and executed across all threads.
* provides access to mutex and binary data.
*
* @author szeredai akos
*/
public class CComCore
{
//===============================================================================================//
public static var positionData:ByteArray = new ByteArray();
public static var positionMutex:Mutex = new Mutex();
public static var creationData:ByteArray = new ByteArray();
public static var creationMutex:Mutex = new Mutex();
public static var debugData:ByteArray = new ByteArray();
public static var debugMutex:Mutex = new Mutex();
//===============================================================================================//
public function CComCore()
{
positionData.shareable = true;
creationData.shareable = true;
debugData.shareable = true;
}
//===============================================================================================//
public static function encodePositions(w:b2World):void
{
var ud:Object;
positionMutex.lock();
positionData.position = 0;
for (var b:b2Body = w.GetBodyList(); b; b = b.GetNext())
{
ud = b.GetUserData();
if (ud && ud.serial)
{
positionMutex.lock();
positionData.writeInt(ud.serial); // serial
positionData.writeBoolean(b.IsAwake); // active state
positionData.writeInt(b.GetType()) // 0-static 1-kinematic 2-dynamic
positionData.writeDouble(b.GetPosition().x / PhxMain.SCALE); // x
positionData.writeDouble(b.GetPosition().y / PhxMain.SCALE); // y
positionData.writeDouble(b.GetAngle()); // r in radians
}
}
positionData.length = positionData.position;
positionMutex.unlock();
}
//===============================================================================================//
public static function decodeToAry(ar:Vector.<DickbutImage>):void
{
var index:int;
var rot:Number = 0;
positionData.position = 0;
while (positionData.bytesAvailable > 0)
{
//positionMutex.lock();
index = positionData.readInt();
positionData.readBoolean();
positionData.readInt();
ar[index].x -= (ar[index].x - positionData.readDouble()) / 10;
ar[index].y -= (ar[index].y - positionData.readDouble()) / 10;
ar[index].rotation = positionData.readDouble();
//positionMutex.unlock();
}
}
//===============================================================================================//
}
}
(Disregard the low-pass filter on the position, y -= (y - x) / c.)
So.
Please note that having the mutex only around the parsing of the physics data does increase performance by about 20% while having minimal impact on the framerate of the main thread. This leads me to believe that the problem does not lie in the writing and reading of the data per se, but in the speed at which that data is made available to the second thread. I mean, those are ByteArray ops; it's only natural that they are fast. I did check the speed by simply dumping the remote thread's work into the main one, and the speed is still sound. Hell, it gets acceptable even on the S2 without dumping the extra calculations.
PS: I did try a release build too.
If no one has a viable solution (besides a 0.2-0.4s buffer, and the obvious single thread), I do want to hear about wonky workarounds or at least the specific source of the problem.
Thanks in advance.
I think I found the issue.
As always, things are more complex than one initially thinks.
Timer events, as well as setInterval and setTimeout, are all limited to 60 FPS. The timer does execute on time as long as the app is idle at that particular point, or IMMEDIATELY after it is free to execute and the delay has passed. But the delay, obviously, can't be shorter than 15-ish ms (and it's less on desktop, I guess). Shouldn't be a problem, right?
However.
If that piece of code manipulates shared objects, the timer suddenly stalls and sits there for those 15 ms, regardless of whether it had its idle time or not.
Anyhow, the thing is that there is a buggy interaction between shared objects, workers, timer events and the Adobe-imposed 60 FPS limitation.
The workaround is quite simple. Give the timer some massive delay, like 5000 ms, and do something like 5000 loop iterations within the callback of the timer event. Obviously, the next timer event won't fire until the 5000-iteration loop is completed, but most importantly it also won't add that monumental delay.
Another weird thing that came up is the greedy ownership of mutexes during the 5000-iteration loop, so using flash.concurrent.Condition is a must.
The good thing is that the performance boost is there, and it's impressive.
The downside is that the entire physics simulation is now intimately locked to the framerate of the main thread (or whatever contraption the main game loop consists of), but hey, 60 FPS is good enough, I guess.
The Mutex/Condition code, for those interested:
package CCom
{
import Box2D.Dynamics.b2Body;
import Box2D.Dynamics.b2World;
import flash.concurrent.Condition;
import flash.concurrent.Mutex;
import flash.utils.ByteArray;
import Grx.DickbutImage;
import Phx.PhxMain;
/**
* shared and executed across all threads.
* provides access to mutex and binary data.
*
* @author szeredai akos
*/
public class CComCore
{
//===============================================================================================//
public static var positionData:ByteArray = new ByteArray();
public static var positionMutex:Mutex = new Mutex();
public static var positionCondition:Condition = new Condition(positionMutex);
public static var creationData:ByteArray = new ByteArray();
public static var creationMutex:Mutex = new Mutex();
public static var debugData:ByteArray = new ByteArray();
public static var debugMutex:Mutex = new Mutex();
//===============================================================================================//
public function CComCore()
{
positionData.shareable = true;
creationData.shareable = true;
debugData.shareable = true;
}
//===============================================================================================//
public static function encodePositions(w:b2World):void
{
var ud:Object;
positionData.position = 0;
positionMutex.lock();
for (var b:b2Body = w.GetBodyList(); b; b = b.GetNext())
{
ud = b.GetUserData();
if (ud && ud.serial)
{
positionData.writeBoolean(b.IsAwake); // active state
positionData.writeInt(ud.serial); // serial
positionData.writeInt(b.GetType()) // 0-static 1-kinematic 2-dynamic
positionData.writeDouble(b.GetPosition().x / PhxMain.SCALE); // x
positionData.writeDouble(b.GetPosition().y / PhxMain.SCALE); // y
positionData.writeDouble(b.GetAngle()); // r in radians
}
}
positionData.writeBoolean(false);
positionCondition.wait();
}
//===============================================================================================//
public static function decodeToAry(ar:Vector.<DickbutImage>):void
{
var index:int;
var rot:Number = 0;
positionMutex.lock();
positionData.position = 0;
while (positionData.bytesAvailable > 0 && positionData.readBoolean())
{
//positionMutex.lock();
index = positionData.readInt();
positionData.readInt();
ar[index].x = positionData.readDouble();
ar[index].y = positionData.readDouble();
ar[index].rotation = positionData.readDouble();
//positionMutex.unlock();
}
positionCondition.notify();
positionMutex.unlock();
}
//===============================================================================================//
}
}
Syncing will become a lot more complex as more channels and ByteArrays start to pop up.
I'm using muPDF for reading PDFs in my application. I don't like its default animation (switching horizontally). On the other hand, I found this brilliant library for a curl effect on images, and this project for a flip-flap effect on layouts.
In the curl sample project, in CurlActivity, all of the data are images, set in PageProvider like this:
private class PageProvider implements CurlView.PageProvider {
// Bitmap resources.
private int[] mBitmapIds = { R.drawable.image1, R.drawable.image2,
R.drawable.image3, R.drawable.image4};
And use it like this:
private CurlView mCurlView;
mCurlView = (CurlView) findViewById(R.id.curl);
mCurlView.setPageProvider(new PageProvider());
And CurlView extends from GLSurfaceView and implements View.OnTouchListener, CurlRenderer.Observer
But in muPDF, if I'm not mistaken, the data live in the core object; core is an instance of MuPDFCore, and it is used like this:
MuPDFReaderView mDocView;
MuPDFView pageView = (MuPDFView) mDocView.getDisplayedView();
mDocView.setAdapter(new MuPDFPageAdapter(this, this, core));
MuPDFReaderView extends ReaderView and ReaderView extends AdapterView<Adapter> and implements GestureDetector.OnGestureListener, ScaleGestureDetector.OnScaleGestureListener, Runnable.
My question is: how can I use the curl effect in muPDF? Where should I get the pages one by one and convert them to bitmaps? And how do I then change the adapter side of muPDF to a CurlView?
In the flip-flap sample project, in FlipHorizontalLayoutActivity (I like this effect too), we have these:
private FlipViewController flipView;
flipView = new FlipViewController(this, FlipViewController.HORIZONTAL);
flipView.setAdapter(new TravelAdapter(this));
setContentView(flipView);
And FlipViewController extends AdapterView<Adapter>, and the data are set in TravelAdapter, which extends BaseAdapter.
Has no one done this before? Or can anyone help me do it?
EDIT:
I found another good open-source PDF reader with a curl effect, called FBReaderJ. Its developer says: "An additional module that allows to open PDF files in FBReader. Based on radaee pdf library."
I got confused, because RadaeePDF is closed source, the downloadable project is just a demo, and the provided username and password are for that package.
People want to change the whole fbreader project, such as the package name.
Another thing that confuses me is: where is the source code of this additional module?
Anyway, if someone wants to help me, fbreader has done it very well.
EDIT:
I talked to Robin Watts, who developed muPDF (or is one of its developers), and he said:
Have you read platform/android/ClassStructure.txt ? MuPDF is
primarily a C library. The standard api is therefore a C one. Rather
than exposing that api exactly as is to Java (which would be the
nicest solution, and something that I've done some work on, but have
not completed due to lack of time), we've implemented MuPDFCore to
wrap up just the bits we needed. MuPDFCore handles opening a PDF file,
and getting bitmaps from it to be used in views. or rather, MuPDFCore
returns 'views', not 'bitmaps'. If you need bitmaps, then you're going
to need to make changes in MuPDFCore.
There are too many errors when changing even a small part of the MuPDFReaderView class. I get confused! These classes are all related to each other.
Please answer more precisely.
EDIT:
And the bounty has expired.
If muPDF does not support rendering to a bitmap, you have no choice other than rendering to a regular view and taking a screen dump into a bitmap, like this:
View content = findViewById(R.id.yourPdfView);
content.setDrawingCacheEnabled(true); // the drawing cache must be enabled before it can be read
Bitmap bitmap = Bitmap.createBitmap(content.getDrawingCache()); // copy it, the cache may be reused
content.setDrawingCacheEnabled(false);
Then use this bitmap as input to your other library.
Where should I get the pages one by one and convert them to bitmaps?
In our application (newspaper app) we use MuPDF to render PDFs.
The workflow goes like this:
Download PDF file (we have one PDF per newspaper page)
Render it with MuPDF
Save the bitmap to the filesystem
Load the Bitmap from filesystem as background image to a view
So, finally, what we use is MuPDFCore.java and its methods drawPage(...) and onDestroy().
Is this what you want to know, or did I miss the point?
EDIT
1.) I think it is not necessary to post code showing how to download a file. But after downloading, I add a RenderTask (which extends Runnable) to a Renderqueue and trigger that queue. The RenderTask needs some information for rendering:
/**
 * constructs a new RenderTask instance
 * @param context: you need Context for the MuPDFCore instance
 * @param pageNumber
 * @param pathToPdf
 * @param renderCallback: callback to set the bitmap on the view after
 *        rendering
 * @param heightOfRenderedBitmap: this is the target height
 * @param widthOfRenderedBitmap: this is the target width
 */
public RenderTask(Context context, Integer pageNumber, String pathToPdf,
        IRenderCallback renderCallback, int heightOfRenderedBitmap,
        int widthOfRenderedBitmap) {
    //store things in fields
}
2.) + 3.) The Renderqueue wraps the RenderTask in a new Thread and starts it, so the run method of the RenderTask is invoked:
@Override
public void run () {
//do not render it if file exists
if (exists () == true) {
finish();
return;
}
Bitmap bitmap = render();
//if something went wrong, we can't store the bitmap
if (bitmap == null) {
finish();
return;
}
//now save the bitmap
// in my case I save the destination path in a String field
imagePath = save(bitmap, new File("path/to/your/destination/folder/" + pageNumber + ".jpg"));
bitmap.recycle();
finish();
}
/**
* let's trigger the callback
*/
private void finish () {
if (renderCallback != null) {
// I send the whole RenderTask to the callback;
// maybe in your case it is enough to send the pageNumber or the path to the
// rendered bitmap
renderCallback.finished(this);
}
}
/**
* renders a bitmap
* @return
*/
private Bitmap render() {
MuPDFCore core = null;
try {
core = new MuPDFCore(context, pathToPdf);
} catch (Exception e) {
return null;
}
Bitmap bm = Bitmap.createBitmap(widthOfRenderedBitmap, heightOfRenderedBitmap, Config.ARGB_8888);
// here you render the WHOLE pdf cause patch-x/-y == 0
core.drawPage(bm, 0, widthOfRenderedBitmap, heightOfRenderedBitmap, 0, 0, widthOfRenderedBitmap, heightOfRenderedBitmap, core.new Cookie());
core.onDestroy();
core = null;
return bm;
}
/**
* saves bitmap to filesystem
* @param bitmap
* @param image
* @return
*/
private String save(Bitmap bitmap, File image) {
FileOutputStream out = null;
try {
out = new FileOutputStream(image.getAbsolutePath());
bitmap.compress(Bitmap.CompressFormat.JPEG, 80, out);
return image.getAbsolutePath();
} catch (Exception e) {
return null;
}
finally {
try {
if (out != null) {
out.close();
}
} catch(Throwable ignore) {}
}
}
}
4.) I think it is not necessary to post code showing how to set a bitmap as the background of a view.
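For completeness, though, a tiny sketch (imageView and imagePath are placeholders; setBackground needs API 16+, and getResources() assumes you are inside an Activity or View):
Bitmap pageBitmap = BitmapFactory.decodeFile(imagePath);                 // the JPEG saved by the RenderTask
imageView.setBackground(new BitmapDrawable(getResources(), pageBitmap));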
Note: this is a general question from someone new to caching mechanisms on Android.
Why does RoboSpice use LRU caching in the FlickrSpiceService sample?
There is LruCacheBitmapObjectPersister:
@Override
public CacheManager createCacheManager(Application application) throws CacheCreationException {
CacheManager manager = new CacheManager();
InFileBitmapObjectPersister filePersister = new InFileBitmapObjectPersister(getApplication());
LruCacheBitmapObjectPersister memoryPersister = new LruCacheBitmapObjectPersister(filePersister, 1024 * 1024);
manager.addPersister(memoryPersister);
return manager;
}
Why not remove it and just use InFileBitmapObjectPersister, like this:
@Override
public CacheManager createCacheManager(Application application) throws CacheCreationException {
CacheManager manager = new CacheManager();
InFileBitmapObjectPersister filePersister = new InFileBitmapObjectPersister(getApplication());
manager.addPersister(filePersister);
return manager;
}
The memory cache (LruCacheBitmapObjectPersister in this case) is much faster than the file-system one (InFileBitmapObjectPersister), but it is also smaller.
Therefore, using the smaller (but faster) memory cache as level 1 and the larger (but slower) file-system cache as level 2 improves performance for common usage. You can check this broadly related answer about processor caches for more info; multi-level caching is a recurring theme in computer science.
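The lookup order is the key idea. Here is a purely conceptual sketch of what a two-level persister does; DiskStore is a made-up interface standing in for the file persister, not a RoboSpice class:
import java.util.LinkedHashMap;
import java.util.Map;

// Conceptual sketch only; RoboSpice's real classes differ.
public class TwoLevelCache<K, V> {
    private final Map<K, V> memory = new LinkedHashMap<>(16, 0.75f, true); // level 1: small, fast, LRU-ordered
    private final DiskStore<K, V> disk;                                    // level 2: large, slow

    public TwoLevelCache(DiskStore<K, V> disk) {
        this.disk = disk;
    }

    public V get(K key) {
        V value = memory.get(key);      // 1) try the in-memory cache first
        if (value == null) {
            value = disk.read(key);     // 2) fall back to the file system
            if (value != null) {
                memory.put(key, value); // promote to level 1 for the next access
            }
        }
        return value;
    }

    public void put(K key, V value) {
        memory.put(key, value);         // write-through: keep both levels populated
        disk.write(key, value);
    }

    public interface DiskStore<K, V> {
        V read(K key);
        void write(K key, V value);
    }
}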