I'm trying to cache JSON requests from a server, but it sets its Cache-Control header (among others) incorrectly: everything expires in the past. I want to override this so that responses are cached for, say, 3 hours, regardless of what the server requests. Is that possible? The documentation for Volley is scarce.
You might subclass the JsonObjectRequest class and override parseNetworkResponse. You will notice the call to HttpHeaderParser.parseCacheHeaders; it's a good place to start :] Just wrap this call, or replace it, and provide your own dummy Cache.Entry configuration object (with your own client-side cache time) to Response.success.
In my implementation, the return in parseNetworkResponse looks like this:
return Response.success(payload, enforceClientCaching(HttpHeaderParser.parseCacheHeaders(response), response));
with the enforceClientCaching method and related members being:
protected static final int defaultClientCacheExpiry = 1000 * 60 * 60; // milliseconds = 1 hour

protected Cache.Entry enforceClientCaching(Cache.Entry entry, NetworkResponse response) {
    if (getClientCacheExpiry() == null) return entry;

    long now = System.currentTimeMillis();
    if (entry == null) {
        // No cache headers were sent at all: build an entry from scratch.
        entry = new Cache.Entry();
        entry.data = response.data;
        entry.etag = response.headers.get("ETag");
        entry.softTtl = now + getClientCacheExpiry();
        entry.ttl = entry.softTtl;
        entry.serverDate = now;
        entry.responseHeaders = response.headers;
    } else if (entry.isExpired()) {
        // The server-provided entry is already expired: push the TTLs into the future.
        entry.softTtl = now + getClientCacheExpiry();
        entry.ttl = entry.softTtl;
    }
    return entry;
}

protected Integer getClientCacheExpiry() {
    return defaultClientCacheExpiry;
}
It handles two cases:
- no cache headers were set
- the server's cache entry indicates an expired item

So if the server starts sending correct cache headers with an expiry in the future, this will still work.
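For context, here is how the pieces might fit together in a complete subclass. This is only a sketch: the class name and the 3-hour override are mine, not part of the answer above.

public class CachedJsonObjectRequest extends JsonObjectRequest {

    public CachedJsonObjectRequest(String url, Response.Listener<JSONObject> listener,
            Response.ErrorListener errorListener) {
        super(Method.GET, url, null, listener, errorListener);
    }

    @Override
    protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
        try {
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers));
            return Response.success(new JSONObject(json),
                    enforceClientCaching(HttpHeaderParser.parseCacheHeaders(response), response));
        } catch (UnsupportedEncodingException | JSONException e) {
            return Response.error(new ParseError(e));
        }
    }

    // Cache for 3 hours, as asked in the question, instead of the 1-hour default.
    protected Integer getClientCacheExpiry() {
        return 3 * 60 * 60 * 1000;
    }

    // enforceClientCaching(Cache.Entry, NetworkResponse) as defined above goes here.
}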
My problem is that I can't get an infinite stream with Retrofit. After I get credentials for the initial poll() request, I do the initial poll() request. Each poll() request responds in 25 seconds if there is no change, or earlier if there are changes, returning changed_data[]. Each response contains the timestamp data needed for the next poll() request, so I should make a new poll() request after each poll() response. Here is my code:
getServerApi().getLongPollServer()
        .flatMap(longPollServer -> getLongPollServerApi(longPollServer.getServer())
                .poll("a_check", Config.LONG_POLLING_SERVER_TIMEOUT, 2, longPollServer.getKey(), longPollServer.getTs(), "")
                .take(1)
                .flatMap(longPollEnvelope -> getLongPollServerApi(longPollServer.getServer())
                        .poll("a_check", Config.LONG_POLLING_SERVER_TIMEOUT, 2, longPollServer.getKey(), longPollEnvelope.getTs(), "")))
        .retry()
        .subscribe(longPollEnvelope1 -> {
            processUpdates(longPollEnvelope1.getUpdates());
        });
I'm new to RxJava; maybe I don't understand something, but I can't get an infinite stream. I get three calls, then onNext and onComplete.
P.S. Maybe there is a better solution for implementing long polling on Android?
Whilst not ideal, I believe you could use Rx's side effects (the 'doOn' operations) to achieve the desired result.
Observable<CredentialsWithTimestamp> credentialsProvider =
        Observable.just(new CredentialsWithTimestamp("credentials", 1434873025320L)); // replace with your implementation

Observable<ServerResponse> o = credentialsProvider.flatMap(credentialsWithTimestamp -> {
    // Side-effect variable for computational steering (also carries the initial value).
    AtomicLong timestamp = new AtomicLong(credentialsWithTimestamp.timestamp);
    // The same credentials are reused for each request; if they become invalid
    // and produce onError, the retry() below will fetch new ones.
    return Observable.just(credentialsWithTimestamp.credentials)
            .flatMap(credentials -> api.query("request", credentials, timestamp.get())) // reads the value set by the previous doOnNext
            .doOnNext(serverResponse -> timestamp.set(serverResponse.getTimestamp()))
            .repeat();
})
.retry()
.share();
private static class CredentialsWithTimestamp {
    public final String credentials;
    public final long timestamp; // I assume this is necessary for you from the first request

    public CredentialsWithTimestamp(String credentials, long timestamp) {
        this.credentials = credentials;
        this.timestamp = timestamp;
    }
}
When subscribing to 'o', the internal observable will repeat. Should there be an error, 'o' will retry and re-request from the credentials stream.
In your example, computational steering is achieved by updating the timestamp variable, which is necessary for the next request.
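For completeness, a sketch of how 'o' might be consumed; processUpdates and getUpdates are the asker's names, and the rest assumes RxJava 1.x in an Android context.

// Subscribing starts the repeat/retry loop; keep the Subscription so the
// polling can be stopped when the screen or service goes away.
Subscription subscription = o.subscribe(
        serverResponse -> processUpdates(serverResponse.getUpdates()),
        throwable -> Log.e("LongPoll", "Polling stopped unexpectedly", throwable));

// Later, e.g. in onDestroy():
subscription.unsubscribe();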
I'm using Volley as the network stack in an Android project I'm working on. Part of my requirements is to download potentially very large files and save them on the file system.
I've been looking at the implementation of Volley, and it seems the only way it works is to download an entire file into a potentially massive byte array and then defer handling of this byte array to some callback handler.
Since these files can be very large, I'm worried about an out-of-memory error during the download process.
Is there a way to tell Volley to stream all bytes from the HTTP input stream directly into a file output stream? Or would this require me to implement my own Network object?
I couldn't find any material about this online, so any suggestions would be appreciated.
Okay, so I've come up with a solution that involves editing Volley itself. Here's a walkthrough:
NetworkResponse can't hold a byte array anymore; it needs to hold an input stream. Doing this immediately breaks all request implementations, since they rely on NetworkResponse having a public byte array member.

The least invasive way I found to deal with this is to add a toByteArray() method to NetworkResponse and then do a little refactoring, making any reference to the byte array use this method rather than the removed member. This means the conversion of the input stream to a byte array happens during response parsing. I'm not entirely sure what the long-term effects of this are, so some unit testing / community input would be a huge help here. Here's the code:
public class NetworkResponse {
    /**
     * Creates a new network response.
     * @param statusCode the HTTP status code
     * @param data Response body
     * @param headers Headers returned with this response, or null for none
     * @param notModified True if the server returned a 304 and the data was already in cache
     * @param byteArrayPool the buffer pool used when converting the stream to a byte array
     * @param contentLength the length reported by the Content-Length header
     */
    public NetworkResponse(int statusCode, InputStream data, Map<String, String> headers,
            boolean notModified, ByteArrayPool byteArrayPool, int contentLength) {
        this.statusCode = statusCode;
        this.inputStream = data;
        this.headers = headers;
        this.notModified = notModified;
        this.byteArrayPool = byteArrayPool;
        this.contentLength = contentLength;
    }

    // The old convenience constructors now wrap the byte array in a stream.
    public NetworkResponse(byte[] data) {
        this(HttpStatus.SC_OK, new ByteArrayInputStream(data),
                Collections.<String, String>emptyMap(), false, new ByteArrayPool(32), data.length);
    }

    public NetworkResponse(byte[] data, Map<String, String> headers) {
        this(HttpStatus.SC_OK, new ByteArrayInputStream(data), headers, false,
                new ByteArrayPool(32), data.length);
    }

    /** The HTTP status code. */
    public final int statusCode;

    /** Raw data stream from this response. */
    public final InputStream inputStream;

    /** Response headers. */
    public final Map<String, String> headers;

    /** True if the server returned a 304 (Not Modified). */
    public final boolean notModified;

    public final ByteArrayPool byteArrayPool;
    public final int contentLength;

    // Method taken from BasicNetwork with a few small alterations.
    public byte[] toByteArray() throws IOException, ServerError {
        PoolingByteArrayOutputStream bytes =
                new PoolingByteArrayOutputStream(byteArrayPool, contentLength);
        byte[] buffer = null;
        try {
            if (inputStream == null) {
                throw new ServerError();
            }
            buffer = byteArrayPool.getBuf(1024);
            int count;
            while ((count = inputStream.read(buffer)) != -1) {
                bytes.write(buffer, 0, count);
            }
            return bytes.toByteArray();
        } finally {
            try {
                // Close the InputStream and release the resources by "consuming the content".
                // Not sure what to do about the entity "consumeContent()"... ideas?
                if (inputStream != null) {
                    inputStream.close();
                }
            } catch (IOException e) {
                // This can happen if there was an exception above that left the entity in
                // an invalid state.
                VolleyLog.v("Error occurred when consuming content");
            }
            byteArrayPool.returnBuf(buffer);
            bytes.close();
        }
    }
}
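To illustrate the refactoring, a call site that previously read response.data would change roughly like this (my sketch, not code from the answer):

// Inside a Request subclass's parseNetworkResponse(NetworkResponse response):
byte[] body;
try {
    body = response.toByteArray(); // was: byte[] body = response.data;
} catch (IOException | ServerError e) {
    return Response.error(new ParseError(e));
}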
Then, to prepare the NetworkResponse, we need to edit BasicNetwork so that it creates the NetworkResponse correctly (inside BasicNetwork.performRequest):
int contentLength = 0;
if (httpResponse.getEntity() != null) {
    // responseContents is now an InputStream rather than a byte array
    responseContents = httpResponse.getEntity().getContent();
    contentLength = (int) httpResponse.getEntity().getContentLength(); // getContentLength() returns a long
}
...
return new NetworkResponse(statusCode, responseContents, responseHeaders, false, mPool, contentLength);
That's it. Once the data inside NetworkResponse is an input stream, I can build my own requests which parse it directly into a file output stream, holding only a small in-memory buffer.
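As an example, here is a sketch of such a request built on the modified NetworkResponse above. The class is mine, not part of Volley, and error handling is kept minimal.

public class FileDownloadRequest extends Request<File> {
    private final File mTarget;
    private final Response.Listener<File> mListener;

    public FileDownloadRequest(String url, File target,
            Response.Listener<File> listener, Response.ErrorListener errorListener) {
        super(Method.GET, url, errorListener);
        mTarget = target;
        mListener = listener;
    }

    @Override
    protected Response<File> parseNetworkResponse(NetworkResponse response) {
        byte[] buffer = new byte[8192]; // the only in-memory buffer
        try (InputStream in = response.inputStream;
             OutputStream out = new FileOutputStream(mTarget)) {
            int count;
            while ((count = in.read(buffer)) != -1) {
                out.write(buffer, 0, count);
            }
            // Skip the cache entry; files this large shouldn't go through
            // Volley's response cache anyway.
            return Response.success(mTarget, null);
        } catch (IOException e) {
            return Response.error(new VolleyError(e));
        }
    }

    @Override
    protected void deliverResponse(File file) {
        mListener.onResponse(file);
    }
}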
From a few initial tests, this seems to be working alright without harming other components. However, a change like this probably requires more intensive testing and peer review, so I'm going to leave this answer unmarked as accepted until more people weigh in, or I see it's robust enough to rely on.
Please feel free to comment on this answer and/or post answers yourselves. This feels like a serious flaw in Volley's design; if you see problems with this approach, or can think of better designs, I think it would benefit everyone.
I have an Android application I am trying to sync with a Rails app.
In the Android app I download and parse a JSON array from the server (using an OAuth token for authentication):
private JSONArray getJSON(URL url, String authToken) throws IOException, JSONException {
    URLConnection con = url.openConnection();
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("Authorization", "Bearer " + authToken);
    InputStream is = new BufferedInputStream(con.getInputStream());
    // The "\\A" delimiter makes the Scanner consume the entire stream as one token.
    Scanner s = new Scanner(is).useDelimiter("\\A");
    String text = s.hasNext() ? s.next() : "";
    return new JSONArray(text);
}
I'm trying to reload the existing data, which is ~380 KB. When I run the code I get the following in the Rails server log:
Started GET "/events.json?created_since=0" for 192.168.1.111 at 2014-01-22 20:15:16 -0500
Processing by EventsController#index as JSON
Parameters: {"created_since"=>"0", "event"=>{}}
Doorkeeper::AccessToken Load (0.6ms) SELECT "oauth_access_tokens".* FROM "oauth_access_tokens" WHERE "oauth_access_tokens"."token" = [:filtered:] ORDER BY "oauth_access_tokens"."id" ASC LIMIT 1
User Load (0.3ms) SELECT "users".* FROM "users" WHERE "users"."id" = ? LIMIT 1 [["id", 501661262]]
Habit Load (0.7ms) SELECT "habits".* FROM "habits" WHERE "habits"."user_id" = ? ORDER BY "habits"."id" ASC LIMIT 1000 [["user_id", 501661262]]
Event Load (26.7ms) SELECT "events".* FROM "events" WHERE "events"."habit_id" = ? [["habit_id", 1]]
⋮
Rendered events/index.json.jbuilder (3422.5ms)
Completed 200 OK in 4491ms (Views: 3436.2ms | ActiveRecord: 73.4ms)
[2014-01-22 20:15:21] ERROR Errno::ECONNRESET: Connection reset by peer
/home/will/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/webrick/httpserver.rb:80:in `eof?'
/home/will/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/webrick/httpserver.rb:80:in `run'
/home/will/.rvm/rubies/ruby-2.0.0-p247/lib/ruby/2.0.0/webrick/server.rb:295:in `block in start_thread'
The connection reset is repeated seven times. The client receives about 260 KB of data. app/views/events/index.json.jbuilder is:
json.array!(@events) do |event|
  json.extract! event, :id, :habit_id, :time, :description
end
The same method is used to load a different model with only a few entries, and it loads correctly. Is there a limit to how large a download can be? In any case, pagination seems like a good idea. Does anyone know of guidelines for what size chunks I ought to break it into?
Instead of using Scanner, maybe you can use a Reader to read the result InputStream:
// JsonParser and JsonArray here come from Gson (com.google.gson).
Reader reader = new InputStreamReader(is);
JsonParser parser = new JsonParser();
JsonArray jsonArray = parser.parse(reader).getAsJsonArray();
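To make that concrete, here is a sketch of the asker's getJSON rewritten around Gson; the method name getJsonArray is mine, and it assumes Gson is on the classpath.

private JsonArray getJsonArray(URL url, String authToken) throws IOException {
    URLConnection con = url.openConnection();
    con.setRequestProperty("Content-Type", "application/json");
    con.setRequestProperty("Authorization", "Bearer " + authToken);
    InputStream is = new BufferedInputStream(con.getInputStream());
    Reader reader = new InputStreamReader(is);
    try {
        // Gson parses directly from the Reader, avoiding the intermediate String.
        return new JsonParser().parse(reader).getAsJsonArray();
    } finally {
        reader.close();
    }
}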
I ended up paginating the data and it now loads correctly:
JSONArray events;
int page = 1;
SimpleDateFormat timeFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
batch = new ArrayList<ContentProviderOperation>();
do {
    // Fetch one page at a time until the server returns an empty array.
    events = getJSON(new URL(EVENT_READ_URL + "?page=" + (page++)), authToken);
    for (int i = 0; i < events.length(); i++) {
        JSONObject event = events.getJSONObject(i);
        batch.add(ContentProviderOperation.newInsert(HabitContentProvider.EVENTS_URI)
                .withValue(EventTable.COLUMN_ID, event.getInt("id"))
                .withValue(EventTable.COLUMN_HABIT_ID, event.getInt(EventTable.COLUMN_HABIT_ID))
                .withValue(EventTable.COLUMN_TIME,
                        timeFormat.parse(event.getString(EventTable.COLUMN_TIME)).getTime() / 1000)
                .withValue(EventTable.COLUMN_DESCRIPTION, event.getString(EventTable.COLUMN_DESCRIPTION))
                .build());
    }
} while (events.length() > 0);
try {
    mContentResolver.applyBatch(HabitContentProvider.AUTHORITY, batch);
} catch (SQLiteConstraintException e) {
    Log.e(TAG, "SQLiteConstraintException: " + e.getMessage());
}
In my Rails code I added the will_paginate gem, included array support, and added the following:
if params[:page]
  @events = @events.paginate(page: params[:page], per_page: params[:per_page] || 300)
end
The service I am using to obtain images, like many such sites, does not send a Cache-Control header indicating how long the images should be cached. By default, Volley uses the HTTP cache headers to decide how long to cache images on disk. How can I override this default behavior and keep such images for a set period of time?
Thanks
I needed to change the default caching strategy to a "cache all" policy that ignores the HTTP headers.
You want to cache for a set period of time. There are several ways you can do this, since there are many places in the code that "touch" the network response. I suggest an edit to the HttpHeaderParser (parseCacheHeaders method at line 39):
Cache.Entry entry = new Cache.Entry();
entry.data = response.data;
entry.etag = serverEtag;
entry.softTtl = softExpire;
entry.ttl = now; // **Edited**
entry.serverDate = serverDate;
entry.responseHeaders = headers;
and another to the Cache.Entry class:
/** True if the entry is expired. */
public boolean isExpired() {
    return this.ttl + GLOBAL_TTL < System.currentTimeMillis();
}

/** True if a refresh is needed from the original data source. */
public boolean refreshNeeded() {
    return this.softTtl + GLOBAL_TTL < System.currentTimeMillis();
}
where GLOBAL_TTL is a constant representing the time you want each image to live in the cache.
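GLOBAL_TTL is not an existing Volley field; you would define it yourself, for example:

// Added to Cache.Entry alongside the edits above; the value is just an example.
public static final long GLOBAL_TTL = 24 * 60 * 60 * 1000L; // keep entries for 24 hours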
I am using Google's Volley library to fetch network data in my app. I have set up a RequestQueue:
requestQueue = new RequestQueue(
        new DiskBasedCache(new File(context.getCacheDir(), DEFAULT_CACHE_DIR)),
        new BasicNetwork(new HttpClientStack(AndroidHttpClient.newInstance(userAgent))));
I have also subclassed Request, and data is coming back from the network just fine. My issue is with caching: in parseNetworkResponse(), which is overridden in my subclass of Request, when I call
return Response.success(list, HttpHeaderParser.parseCacheHeaders(response));
HttpHeaderParser.parseCacheHeaders(response) returns null, since the server is set up for "no caching" in its response headers. Regardless, I would still like to cache this data for a set number of hours (probably 24). How can I do this by creating a Volley Cache.Entry? It is my understanding that the URL is used as the cache key (and I would like it to be the URL).
To sum up: since HttpHeaderParser.parseCacheHeaders(response) returns null, I would like to create a new Cache.Entry that expires after 24 hours, with the request URL as the cache key.
Any thoughts?
Thanks!
I've had the same issue and ended up with this solution:
@Override
protected Response<String> parseNetworkResponse(NetworkResponse response) {
    // Build a fake cache entry that invalidates the data after 24 hours.
    Cache.Entry mFakeCache = HttpHeaderParser.parseCacheHeaders(response);
    if (mFakeCache == null) {
        // The server's "no caching" headers make parseCacheHeaders return null,
        // so start from an empty entry instead.
        mFakeCache = new Cache.Entry();
        mFakeCache.data = response.data;
        mFakeCache.responseHeaders = response.headers;
    }
    mFakeCache.etag = null;
    mFakeCache.softTtl = System.currentTimeMillis() + 86400 * 1000; // 24 hours in ms
    mFakeCache.ttl = mFakeCache.softTtl;
    return Response.success(response.data, mFakeCache);
}