Guava Android expireAfterWrite issue

We are trying to use the library on Android for timed eviction. The items in the cache appear to expire as soon as we overwrite an existing item.
We are building the cache as follows:
private Cache rssiMap;
RemovalListener removalListener = new RemovalListener() {
    @Override
    public void onRemoval(RemovalNotification removal) {
    }
};
rssiMap = CacheBuilder.newBuilder()
        .expireAfterWrite(1, TimeUnit.MINUTES)
        .removalListener(removalListener)
        .build();
rssiMap.put(device, rssi);
Is there something wrong we are doing with the code or is this a known issue?

This is correct behavior. Overwriting an existing key with put() replaces the old entry, so a RemovalNotification (with cause REPLACED) fires for the old value; that is not the same as time-based expiry.
Client code shouldn't care WHEN elements are expired; it cares about WHAT the final cache values are.
Accordingly, the RemovalListener tells you WHAT was evicted, not WHEN.
JavaDoc of RemovalListener.onRemoval():
Notifies the listener that a removal occurred at some point in the
past.
By the way, I recommend using Cache.get(key, loader) instead of Cache.put().
From JavaDoc of Cache.put():
Prefer get(Object, Callable) when using the conventional "if cached,
return; otherwise create, cache and return" pattern.
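A minimal sketch of that pattern, assuming String keys and Integer RSSI values (readRssi and deviceAddress are hypothetical, not from the original question):
Cache<String, Integer> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(1, TimeUnit.MINUTES)
        .build();
try {
    // Returns the cached value if present; otherwise runs the loader,
    // stores its result in the cache, and returns it.
    Integer rssi = cache.get(deviceAddress, new Callable<Integer>() {
        @Override
        public Integer call() {
            return readRssi(deviceAddress); // hypothetical loader
        }
    });
} catch (ExecutionException e) {
    // The loader threw; handle or rethrow.
}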

Related

Save event type logs

We want to add a reporting feature to our existing application.
For this purpose we are sending Events in JSON via HTTPS to a server application.
We need to remember Event objects that could not be sent to the server (no internet, server not reachable, ...). We are considering storing the events in a SQLite database and discarding all events older than 24 hours, to prevent flooding our storage.
Another option would be to write the JSON objects to a file and append each new event that could not be sent to the server. The problem with this solution is that it would be hard for us to discard logs older than 24 hours.
We store the events in a table with the columns:
| id | json | created_at |
Can anyone recommend best practices for this use case?
Currently we tend to use the sqlite solution but we are wondering if there are any caveats that we are not aware of.
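For the 24-hour cleanup in the SQLite variant, a single DELETE keyed on created_at is enough. A minimal sketch, assuming a SQLiteDatabase db, a table named events, and created_at holding System.currentTimeMillis() values (all names are placeholders):
// Prune everything older than 24 hours, e.g. before each insert or on app start.
long cutoff = System.currentTimeMillis() - TimeUnit.HOURS.toMillis(24);
db.delete("events", "created_at < ?", new String[] { String.valueOf(cutoff) });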
If you don't mind using a third-party lib, I can recommend android-priority-jobqueue. You can easily achieve what you are trying to do: you create a job and it handles itself. You can declare whether it needs the network and whether it is persistent (saved into a DB until it can run), and you can even customize your own retry logic.
Here's a little example.
public class PostTweetJob extends Job {
    public static final int PRIORITY = 1;
    private String text;

    public PostTweetJob(String text) {
        // This job requires network connectivity, and should be persisted
        // in case the application exits before the job is completed.
        super(new Params(PRIORITY).requireNetwork().persist());
        this.text = text;
    }

    @Override
    public void onAdded() {
        // Job has been saved to disk.
        // This is a good place to dispatch a UI event to indicate the job will eventually run.
    }

    @Override
    public void onRun() throws Throwable {
        // your code here, e.g. POST `text` to the server
    }

    @Override
    protected RetryConstraint shouldReRunOnThrowable(Throwable throwable, int runCount,
            int maxRunCount) {
        // An error occurred in onRun.
        return RetryConstraint.createExponentialBackoff(runCount, 1000);
    }
}
And you call it like this.
jobManager.addJobInBackground(new PostTweetJob("It works"));
Use JobService (Android 5+, Lollipop and above) and AlarmManager (for Android SDK < 21, pre-Lollipop). With this solution you can schedule any task and it will be performed. JobService was developed exactly for this purpose (scheduling and performing different tasks). You could also try JobIntentService, which works down to KitKat (Android 4+) devices; see the sketch below.
P.S.
In that case you don't need any third-party libs or other dependencies like Firebase/Google Play services (as you would for FirebaseJobDispatcher).
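If you go the JobIntentService route, a minimal sketch could look like this (the class name, job id, and upload logic are placeholders, not part of the original answer):
import android.content.Context;
import android.content.Intent;
import androidx.annotation.NonNull;
import androidx.core.app.JobIntentService;

public class EventUploadService extends JobIntentService {
    private static final int JOB_ID = 1001; // arbitrary, but unique per service class

    public static void enqueue(Context context, Intent work) {
        // Schedules the work; runs as a JobScheduler job on API 26+,
        // and falls back to a started service on older devices.
        enqueueWork(context, EventUploadService.class, JOB_ID, work);
    }

    @Override
    protected void onHandleWork(@NonNull Intent intent) {
        // Runs on a background thread. Try to POST the event JSON here;
        // on failure, leave the row in the SQLite queue for the next attempt.
    }
}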

Appcelerator reset app to initial state

I am working on an app with Appcelerator.
Appcelerator uses nearly the same kind of require() that Node.js does.
Now I want to implement a feature that logs out the current user and does not leave any trace.
The simplest way would be to restart the app, but Appcelerator, and especially Apple, does not support this.
So I have to open the login window and clean up all data that leaves a trace of the old user.
The easiest way would be to dereference one of the main nodes in the require chain, leaving all the data below it dereferenced and garbage-collected.
I know there is a way (as mentioned here) to do that in node:
/**
 * Removes a module from the cache
 */
function purgeCache(moduleName) {
    // Traverse the cache looking for the files
    // loaded by the specified module name
    searchCache(moduleName, function (mod) {
        delete require.cache[mod.id];
    });

    // Remove cached paths to the module.
    // Thanks to @bentael for pointing this out.
    Object.keys(module.constructor._pathCache).forEach(function (cacheKey) {
        if (cacheKey.indexOf(moduleName) > 0) {
            delete module.constructor._pathCache[cacheKey];
        }
    });
}

/**
 * Traverses the cache to search for all the cached
 * files of the specified module name
 */
function searchCache(moduleName, callback) {
    // Resolve the module identified by the specified name
    var mod = require.resolve(moduleName);

    // Check if the module has been resolved and found within
    // the cache
    if (mod && ((mod = require.cache[mod]) !== undefined)) {
        // Recursively go over the results
        (function traverse(mod) {
            // Go over each of the module's children and
            // traverse them
            mod.children.forEach(function (child) {
                traverse(child);
            });

            // Call the specified callback providing the
            // found cached module
            callback(mod);
        }(mod));
    }
}
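In Node you would then call something like purgeCache('./userSession') (a hypothetical module path) and let the garbage collector reclaim the subtree, assuming nothing else still references it.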
So I tried to read out the require cache in Appcelerator with console.log(require, "-", require.cache); and got output like: <KrollCallback: 0x79f6fe50> - <null>
So now my questions:
Is there a way to reach the require-cache in Appcelerator?
Do you know a way to clean up a big Appcelerator-App?
Since it is possible to write native modules for Appcelerator:
Do you know a way to clean up a big Android App?
Do you know a way to clean up a big iOS App?
Thank you very much

Google Drive Android API: Deleted folder still exists in query

Running the code below, I create a folder with the Google Drive Android API on a tablet. A few seconds later, I delete that folder from a PC. When I re-run the code, the API still thinks 'MyFolder' exists, even though it was deleted and is not visible in the Google Drive app on the tablet. The stale folder entry finally disappears after a while, and the code works as expected. Is this expected behavior for cloud drives?
Query query = new Query.Builder()
        .addFilter(Filters.and(
                Filters.eq(SearchableField.TITLE, "MyFolder"),
                Filters.eq(SearchableField.TRASHED, false)))
        .build();
Drive.DriveApi.query(getGoogleApiClient(), query)
        .setResultCallback(new ResultCallback<DriveApi.MetadataBufferResult>() {
            @Override
            public void onResult(DriveApi.MetadataBufferResult result) {
                if (!result.getStatus().isSuccess()) {
                    showMessage("Problem while retrieving results.");
                    return;
                }
                boolean isFound = false;
                for (Metadata m : result.getMetadataBuffer()) {
                    if (m.getTitle().equals("MyFolder")) {
                        showMessage("Folder exists");
                        isFound = true;
                        break;
                    }
                }
                if (!isFound) {
                    showMessage("Folder not found; creating it.");
                    MetadataChangeSet changeSet = new MetadataChangeSet.Builder()
                            .setTitle("MyFolder")
                            .build();
                    Drive.DriveApi.getRootFolder(getGoogleApiClient())
                            .createFolder(getGoogleApiClient(), changeSet)
                            .setResultCallback(new ResultCallback<DriveFolder.DriveFolderResult>() {
                                @Override
                                public void onResult(DriveFolder.DriveFolderResult result) {
                                    if (!result.getStatus().isSuccess()) {
                                        showMessage("Error while trying to create the folder");
                                    } else {
                                        mThwingAlbertFolderId = result.getDriveFolder().getDriveId();
                                        showMessage("Created a folder: " + mThwingAlbertFolderId);
                                    }
                                }
                            });
                }
            }
        });
What you are seeing is 'normal' behavior of the GDAA, and it can be explained if you look closer at the 'Lifecycle of a Drive file' diagram (warning: I've never seen the source code, I'm just assuming from what I've observed).
See, the GDAA, unlike the REST API, creates a layer that does its best at caching and network traffic optimization. So, when you manipulate the file/folder from the 'outside' (like the web app), the GDAA layer has no knowledge of the fact until it initiates synchronization, controlled by its own logic. I originally assumed that GooDrive has this under control by dispatching some kind of notification back to the GDAA, but that apparently is not the case. Also, some Googlers mentioned 'requestSync()' as a cure, but I never managed to make it work.
What you think you're doing is polling GooDrive. But effectively, you're polling the GDAA (the local GooPlaySvcs), whose DriveId is still valid (not updated), unlike the real GooDrive object, which is already gone.
This is one thing that is not clearly stated in the docs: the GDAA is not the best API for EVERY application. Its caching mechanism is great for transparently managing online/offline states, optimizing network traffic, saving battery life, ... But in your situation, you may be better off using the REST API, since the response you get reflects the current GooDrive state.
I myself faced a similar situation and had to switch from the GDAA back to the REST API (and replaced polling with a private GCM-based notification system). Needless to say, by using the REST API your app gets more complex, usually requiring a sync adapter / service to do the data synchronization, manage network states, ... all the stuff the GDAA gives you for free.
In case you want to play with the two APIs side by side, there are two identical CRUD implementations (GDAA, REST) you can use on GitHub.
Good Luck
The Google Drive API does not sync immediately; that is why the deleted folders still show up. You have to force Google Drive to sync using requestSync():
Drive.DriveApi.requestSync(mGoogleApiClient).await();
I found an example snippet here:
http://wiki.workassis.com/android-google-drive-api-deleted-folder-still-exists-in-query/
As Sean mentioned, the Drive Android API caches metadata locally to reduce bandwidth and battery usage.
When you perform an action on the device, e.g. creating a folder, we attempt to apply that action on the server as soon as possible. Though there can be delays due to action dependencies and content transfers, you will generally see the results reflected on the server very quickly.
When an action is performed on the server, e.g. by deleting a folder via the web client, it is reflected on the device the next time the Drive Android API syncs. To conserve battery and bandwidth, which is a priority for users, sync frequency depends on how the API is being used.
If you need to guarantee that a sync has occurred, you can explicitly request a sync using DriveApi.requestSync() and wait on the result. This is currently rate limited to 1 per minute, which is frequently hit during testing, but should have a much smaller impact on real world usage.
Please let us know on our issue tracker if this sync behavior is causing issues for your use case so we can investigate solutions.
Google Drive uses its own lifecycle for the Drive API and manages everything in a cache. That's why, if you delete a file or folder and then try to access it through the Google Drive APIs, it is still available: it is served from the cache. You need to call requestSync() explicitly; after that, the cache is updated and the folder or file is reported as not found.
below is code for that:
Drive.DriveApi.requestSync(mGoogleApiClient).setResultCallback(new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        Log.e("sync_status", status.toString());
        if (status.getStatus().isSuccess()) {
            setRootFolderDriveId();
        }
    }
});
Don't call Drive.DriveApi.requestSync(mGoogleApiClient).await() here, because it would block the main thread and crash the app. Use the callback version above, and perform your Google Drive operations after the success callback fires, since by then the cache has been updated.
You can do it on the main thread:
Drive.DriveApi.requestSync(mGoogleApiClient).setResultCallback(new ResultCallback<com.google.android.gms.common.api.Status>() {
    @Override
    public void onResult(com.google.android.gms.common.api.Status status) {
        if (!status.getStatus().isSuccess()) {
            Log.e("SYNCING", "ERROR: " + status.getStatusMessage());
        } else {
            Log.e("SYNCING", "SUCCESS");
            // execute your code to interact with Google Drive
        }
    }
});
I was having the same issue, and using Drive.DriveApi.requestSync did the trick.
I also suggest taking a look at https://github.com/francescocervone/RxDrive because you can concatenate the sync with other Drive operations using RxAndroid.
For example, this becomes a delete-and-sync operation:
Observable<Boolean> deleteFile = rxDrive.delete(file);
Observable<Void> syncDrive = rxDrive.sync();
// concat() needs a common element type, so (assuming RxJava 1.x) drop the
// delete result before chaining, and subscribe to actually run the chain:
Observable.concat(deleteFile.ignoreElements().cast(Void.class), syncDrive).subscribe();
The reason your query lists deleted files is that Google Drive has a "Trash" folder that is searchable. You need to empty your trash first.

Check if Volley gets results from cache or over network

How can I check whether Volley gets the results of a JsonObjectRequest from the cache or from the network?
I need to show a progress dialog when it needs a network connection but not when the results are quickly received from the cache.
My request looks something like this:
volleyQueue = Volley.newRequestQueue(this);
JsonObjectRequest jr = new JsonObjectRequest(Request.Method.POST, url, null, new Response.Listener<JSONObject>(){...stuff}, new Response.ErrorListener(){...errorstuff});
jr.setShouldCache(true);
volleyQueue.add(jr);
I did this by overriding Request#addMarker and checking for a "cache-hit" marker being added:
public class MyRequest<T> extends Request<T> {
    protected boolean cacheHit;

    @Override
    public void addMarker(String tag) {
        super.addMarker(tag);
        cacheHit = false;
        if (tag.equals("cache-hit")) {
            cacheHit = true;
        }
    }
}
Before making the Request you can get the cache from the Request Queue and check if the Entry is not null.
mRequestQueue.getCache().get("key");
The key for each request is usually the URL.
You should probably check whether the Entry has expired, too.
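A sketch of that check, assuming the full request URL is the cache key (isExpired() is a real method on Volley's Cache.Entry; showProgressDialog is a hypothetical helper):
Cache.Entry entry = volleyQueue.getCache().get(url);
if (entry != null && !entry.isExpired()) {
    // A fresh entry exists, so the response should arrive almost
    // immediately from cache; no progress dialog needed.
} else {
    // The request will hit the network; show the progress dialog.
    showProgressDialog();
}
volleyQueue.add(jr);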
Volley has a built-in way to know whether image requests are immediate through the ImageContainer class, but it doesn't seem to have a similar mechanism for other requests such as a JSON object request.
It seems that you have two main choices:
1. Set a timer for something like 300 ms after you request the JSON (test for the best time). When the timer fires, check whether you already have the result; otherwise show the dialog. I know this is a bit of a "hack", but it could be good enough.
2. Edit the Volley code to add an "isImmediate" flag to every request. There are multiple ways to achieve this; I suggest starting at CacheDispatcher.
Starting from Tim Kelly's answer: by the time you check cacheHit, it will have been reset to false, so you won't know it was a cache hit, because many other markers arrive after "cache-hit" is received and before onResponse() is called.
So, add
if (tag.equals("network-http-complete")) {
    cacheHit = false;
}
and remove the unconditional cacheHit = false;. A consolidated version is sketched below.
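Putting the two answers together, the corrected tracking could look like this (a sketch: the constructor shown is one of Request's real constructors, but subclasses still have to implement Request's abstract response methods):
public abstract class MyRequest<T> extends Request<T> {
    protected boolean cacheHit;

    public MyRequest(int method, String url, Response.ErrorListener listener) {
        super(method, url, listener);
    }

    @Override
    public void addMarker(String tag) {
        super.addMarker(tag);
        if (tag.equals("cache-hit")) {
            // Added by the CacheDispatcher when the response is served from cache.
            cacheHit = true;
        } else if (tag.equals("network-http-complete")) {
            // Added by the NetworkDispatcher after a real network round trip.
            cacheHit = false;
        }
    }
}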
adb shell setprop log.tag.Volley VERBOSE
Run this command in your terminal; you may need to add adb to your PATH in order to use it (it is located in your sdk/platform-tools/ directory).
This will produce much more detailed Volley logs, showing something like an execution trace for each Volley request, including cache hits and misses.

Android Volley - Some disc cached images dont get displayed?

I've run into this weird error where some images get cached as usual and some don't. Any idea why?
Both images get displayed and memory-cached just fine, but when offline, some display the error image.
For example, this works fine:
http://cs4381.vk.me/u73742951/a_58a41ac2.jpg
However, this does not: http://upload.wikimedia.org/wikipedia/commons/thumb/d/d7/Android_robot.svg/220px-Android_robot.svg.png
Both work fine for displaying and mem-caching, but the second doesn't get served from the disk cache, although I think I see it being saved, as the app reports 12 kB of cache in the system settings.
Edit
I checked out a clean copy of Volley and it does the same thing. It's definitely a bug...
From what I've found out, the images do get cached, but Bitmap cachedBitmap = mCache.getBitmap(cacheKey); always returns null, so the cache claims it doesn't have the bitmaps and proceeds to download them again, failing when offline. Weird.
The reason you're not getting any hits is that Volley's default disk-caching behavior depends on the HTTP headers of the element you're requesting (in your case, an image).
Check the Volley logs and see if you get the "cache-hit-expired" message: it means the image was cached, but its TTL has expired as far as the default disk cache is concerned.
If you want the default settings to work, the images must have a Cache-Control header like max-age=??? where the question marks indicate enough seconds from the time it was downloaded.
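For example, Cache-Control: max-age=86400 marks the response as fresh for 24 hours (86,400 seconds).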
If you want to change the default behavior, I'm not sure, but I think you have to edit the code a bit.
Look at the CacheDispatcher class in the Volley source.
Hope that helps.
A quick and dirty way:
private static class NoExpireDiskBasedCache extends DiskBasedCache {
    public NoExpireDiskBasedCache(File rootDirectory, int maxCacheSizeInBytes) {
        super(rootDirectory, maxCacheSizeInBytes);
    }

    public NoExpireDiskBasedCache(File rootDirectory) {
        super(rootDirectory);
    }

    @Override
    public synchronized void put(String key, Entry entry) {
        if (entry != null) {
            // Drop the ETag and push both TTLs out indefinitely, so the entry
            // is never treated as expired or in need of revalidation.
            entry.etag = null;
            entry.softTtl = Long.MAX_VALUE;
            entry.ttl = Long.MAX_VALUE;
        }
        super.put(key, entry);
    }
}
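To make Volley use it, build the RequestQueue yourself instead of calling Volley.newRequestQueue(); a sketch (the directory name and cache size are placeholders):
File cacheDir = new File(context.getCacheDir(), "volley");
RequestQueue queue = new RequestQueue(
        new NoExpireDiskBasedCache(cacheDir, 10 * 1024 * 1024), // 10 MB cap
        new BasicNetwork(new HurlStack()));
queue.start(); // spins up the cache and network dispatcher threads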
