I am building an Android application in which some data is sent to the server from the device every second for 30 seconds (30 requests to the server in total).
I am using a for loop for this:
for (int i = 0; i < 30; i++) {
//Some data computation
JsonObjectRequest req = new JsonObjectRequest(url, new JSONObject(params),
new Response.Listener<JSONObject>() {
@Override
public void onResponse(JSONObject response) {
try {
VolleyLog.v("Response:%n %s", response.toString(4));
} catch (JSONException e) {
e.printStackTrace();
}
}
}, new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
VolleyLog.e("Error: ", error.getMessage());
}
});
}
I am getting more than 30 entries at my server end.
Is the JsonObjectRequest sending multiple requests in each iteration?
Volley makes multiple requests to the server by default because of its default retry policy.
These settings can be found in DefaultRetryPolicy.java.
According to this default policy, Volley waits 2500 milliseconds for the response; if the response is not received within this time span, it retries as many times as DEFAULT_MAX_RETRIES allows, i.e. once. The DEFAULT_BACKOFF_MULT value is the multiplier applied to the socket timeout before each retry attempt (exponential backoff).
/** The default socket timeout in milliseconds */
public static final int DEFAULT_TIMEOUT_MS = 2500;
/** The default number of retries */
public static final int DEFAULT_MAX_RETRIES = 1;
/** The default backoff multiplier */
public static final float DEFAULT_BACKOFF_MULT = 1f;
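For reference, the retry logic in DefaultRetryPolicy looks roughly like this (a simplified sketch of its retry() method, not the verbatim source). It shows why one slow response leads to a second, duplicate request hitting the server:
// Simplified sketch: Volley calls this when an attempt times out.
public void retry(VolleyError error) throws VolleyError {
    mCurrentRetryCount++;
    // The backoff multiplier stretches the socket timeout for the next attempt.
    mCurrentTimeoutMs += (int) (mCurrentTimeoutMs * mBackoffMultiplier);
    if (mCurrentRetryCount > mMaxNumRetries) {
        throw error; // no attempts left, the error goes to your ErrorListener
    }
}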
Now, to stop the multiple requests, you can configure the retry policy for your request object using its setRetryPolicy() method.
//req = Request
req.setRetryPolicy(new DefaultRetryPolicy(20 * 1000, 0,
DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
Related
I am creating an Android app that sends HTTP requests containing IMU data every 20 ms, using a Handler and a Runnable.
public void onClickLogData(View view){
Log.d(TAG,"onClickLogData");
final OkHttpClient client = new OkHttpClient();
Handler handler = new Handler();
Runnable runnable = new Runnable() {
@Override
public void run() {
if (Running) {
handler.postDelayed(this, 20);
String url = "http://192.168.86.43:5000/server";
Log.d(TAG, String.valueOf(time));
RequestBody body = new FormBody.Builder()
.add("Timestamp", String.valueOf(time))
.add("accx", String.valueOf(accx))
.add("accy", String.valueOf(accy))
.add("accz", String.valueOf(accz))
.add("gyrox", String.valueOf(gyrox))
.add("gyroy", String.valueOf(gyroy))
.add("gyroz", String.valueOf(gyroz))
.add("magx", String.valueOf(magx))
.add("magy", String.valueOf(magy))
.add("magz", String.valueOf(magz))
.build();
Request request = new Request.Builder()
.url(url)
.post(body)
.build();
final Call call = client.newCall(request);
call.enqueue(new Callback() {
@Override
public void onFailure(@NonNull Call call, @NonNull IOException e) {
Log.i("onFailure", e.getMessage());
}
@Override
public void onResponse(@NonNull Call call, @NonNull Response response)
throws IOException {
assert response.body() != null;
String result = response.body().string();
Log.i("result", result);
}
});
} else {
handler.removeCallbacks(this);
}
}
};
handler.postDelayed(runnable, 1000);
}
The data is received and stored on my laptop:
with open('imu.csv','w') as csv_file:
writer = csv.writer(csv_file)
writer.writerow(['Timestamp','accx','accy','accz','gyrox','gyroy','gyroz','magx','magy','magz'])
app = Flask(__name__)
@app.route('/server', methods=['GET','POST'])
def server():
r = request.form
data = r.to_dict(flat=False)
t = int(str(data['Timestamp'])[2:-2])
print(t)
accx = float(str(data['accx'])[2:-2])
accy = float(str(data['accy'])[2:-2])
accz = float(str(data['accz'])[2:-2])
gyrox = float(str(data['gyrox'])[2:-2])
gyroy = float(str(data['gyroy'])[2:-2])
gyroz = float(str(data['gyroz'])[2:-2])
magx = float(str(data['magx'])[2:-2])
magy = float(str(data['magy'])[2:-2])
magz = float(str(data['magz'])[2:-2])
imu_data = [t,accx,accy,accz,gyrox,gyroy,gyroz,magx,magy,magz]
with open('imu.csv','a+') as csv_file:
writer = csv.writer(csv_file)
writer.writerow(imu_data)
return("ok")
if __name__ == '__main__':
app.run(host='0.0.0.0')
The requests are sent in chronological order on the Android side, as the log indicates; however, on the receiving side many of the requests arrive in the wrong time sequence.
It seems that this happens more frequently as time goes on. What could possibly be the cause of this, and where should I be looking?
All sorts of things. Requests are sent over a network, and they can take a different path to the server each time. Requests can even get lost; with TCP a lost packet is automatically resent, but then the data arrives even later and more out of order. Requests can be delayed by different bridges and routers along the way. There is no promise that separate requests sent over the internet will be received in order. That guarantee only exists within a single TCP socket, and even there it takes a lot of work (basically tracking every packet sent and received and holding them back until they can be delivered to the app in order). If your architecture requires the requests to arrive in order, it cannot work reliably over the internet.
If you do need an ordering on the server, either embed a monotonically increasing request number or embed a timestamp in each request.
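For example, on the Android side you could add a sequence number next to the existing form fields. This is only a sketch; the seq field name and the java.util.concurrent.atomic.AtomicLong counter are assumptions, not part of your existing code:
// Hypothetical monotonically increasing sequence number, created once per logging session.
private final AtomicLong seq = new AtomicLong(0);
// Inside run(), send it as an extra form field so the server can restore the order:
RequestBody body = new FormBody.Builder()
        .add("seq", String.valueOf(seq.incrementAndGet())) // ordering key
        .add("Timestamp", String.valueOf(time))
        // ... remaining IMU fields exactly as before ...
        .build();
The receiver can then sort the logged rows by seq (or by the timestamp you already send) after the run finishes, instead of relying on arrival order.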
I am getting data from SQLite and sending it to the server using Volley.
For now, I am sending all the data at once.
I just want to know how I can create a queue so that the data for one vehicle is sent first, its response is received, and only then is the next one sent.
cursor=helperClass.readAllData();
if (cursor!=null)
{
while (cursor.moveToNext())
{
modelClass=new ModelClass(cursor.getInt(0),cursor.getString(1),
cursor.getString(2),cursor.getString(3),
cursor.getString(4),cursor.getString(5));
modelClasses.add(modelClass);
}
sizeOfArray=modelClasses.size();
for (int i=0; i<sizeOfArray;i++)
{
name = modelClasses.get(i).getName();
model=modelClasses.get(i).getModelName();
number=modelClasses.get(i).getEngineNumber();
image=modelClasses.get(i).getImageBase64();
hdimage=modelClasses.get(i).getHdimageBase64();
uploadData(name, model, number, image, hdimage);
Toast.makeText(UploadDataServiceClass.this, String.valueOf(sizeOfArray), Toast.LENGTH_SHORT).show();
Toast.makeText(UploadDataServiceClass.this, String.valueOf(i), Toast.LENGTH_SHORT).show();
}
}
uploadData(name, model, number, image, hdimage):
RequestQueue requestQueue=Volley.newRequestQueue(UploadDataServiceClass.this);
StringRequest stringRequest=new StringRequest(Request.Method.POST, showURL, new Response.Listener<String>()
{
@Override
public void onResponse(String response)
{
try
{
Log.d(TAG, "onResponse: " + response);
JSONObject jsonObject = new JSONObject(response);
}
catch (JSONException e)
{
e.printStackTrace();
}
}
}, new Response.ErrorListener()
{
@Override
public void onErrorResponse(VolleyError error)
{ }
}
)
{
@Override
protected Map<String, String> getParams()
{
Map<String, String> parameters = new HashMap<String, String>();
parameters.put("name", name);
parameters.put("model", model);
parameters.put("number", number);
parameters.put("image", image);
parameters.put("hdimage", hdimage);
parameters.put("crud_type", "insert");
return parameters;
}
};
requestQueue.add(stringRequest);
You need an ExecutorService created with newSingleThreadExecutor() to execute your tasks one by one:
Creates an Executor that uses a single worker thread operating off an unbounded queue. (Note however that if this single thread terminates due to a failure during execution prior to shutdown, a new one will take its place if needed to execute subsequent tasks.) Tasks are guaranteed to execute sequentially, and no more than one task will be active at any given time. Unlike the otherwise equivalent newFixedThreadPool(1) the returned executor is guaranteed not to be reconfigurable to use additional threads.
But what is an ExecutorService?
With an ExecutorService you can set the maximum number of tasks that run at the same time (1 in your case).
Here is a simple tutorial about ThreadPool, Executors and Future.
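A minimal sketch of that idea, assuming your existing showURL, a single shared requestQueue, and the ModelClass getters shown above; RequestFuture turns each Volley request into a blocking call, and the single worker thread guarantees that the next upload only starts after the previous response has arrived:
ExecutorService uploader = Executors.newSingleThreadExecutor();
for (ModelClass m : modelClasses) {
    uploader.execute(() -> {
        try {
            // RequestFuture acts as both the success listener and the error listener.
            RequestFuture<String> future = RequestFuture.newFuture();
            StringRequest request = new StringRequest(Request.Method.POST, showURL, future, future) {
                @Override
                protected Map<String, String> getParams() {
                    Map<String, String> p = new HashMap<>();
                    p.put("name", m.getName());
                    p.put("model", m.getModelName());
                    p.put("number", m.getEngineNumber());
                    p.put("image", m.getImageBase64());
                    p.put("hdimage", m.getHdimageBase64());
                    p.put("crud_type", "insert");
                    return p;
                }
            };
            requestQueue.add(request);
            String response = future.get(); // blocks this worker thread until the response arrives
            Log.d(TAG, "Uploaded: " + response);
        } catch (InterruptedException | ExecutionException e) {
            Log.e(TAG, "Upload failed", e);
        }
    });
}
Also create the RequestQueue once (for example in onCreate()) rather than calling Volley.newRequestQueue() inside uploadData() for every vehicle.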
I have the code below, which makes 300 HTTP requests; each request returns 10,000 rows from the database. The total size of those 10,000 rows is approximately 0.4 MB, so 300 * 0.4 = 120 MB.
Questions:
How does increasing the thread pool size for handling requests in Volley affect the performance of the app? I changed it to 12, but the execution time and the size of the data were the same as with 4. Is there any difference at all?
When increasing the number of Volley threads, does the resulting data increase as well? If I had 1 thread, the maximum returned data at a time would be 0.4 MB; with 4 threads, the maximum would be 1.6 MB.
Emulator: 4 Cores MultiThread
ExecutorService service = Executors.newFixedThreadPool(4);
RequestQueue queue;
AtomicInteger counter = new AtomicInteger(0);
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
File cacheDir = new File(this.getCacheDir(), "Volley");
queue = new RequestQueue(new DiskBasedCache(cacheDir), new BasicNetwork(new HurlStack()), 4);
queue.start();
start();
}
public void start(){
String url ="...";
for(int i =0 ; i<300; i++) {
counter.incrementAndGet();
StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
new Response.Listener<String>() {
@Override
public void onResponse(String response) {
method(response);
}
}, new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
Log.d("VolleyError", error.toString());
}
});
stringRequest.setTag("a");
queue.add(stringRequest);
}
}
public synchronized void decreased(){
if(counter.decrementAndGet()==0)
start();
}
public void method( String response){
Runnable task = new Runnable() {
@Override
public void run() {
List<Customer> customers= new ArrayList<>();
ObjectMapper objectMapper = new ObjectMapper();
TypeFactory typeFactory = objectMapper.getTypeFactory();
try {
customers= objectMapper.readValue(response, new TypeReference<List<Customer>>() {});
//Simulate database insertion delay
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
decreased();
} catch (IOException e1) {
e1.printStackTrace();
}
}
};
logHeap("");
service.execute(task);
}
Regarding Question 1:
A thread pool of size 4 will perform better than one of size 12.
The thread pool size should be chosen in relation to the number of available processors.
Since the number of processors is limited, the app should not spawn unnecessary threads, as this can lead to performance problems: the Android OS has to schedule more threads over the same cores, which increases both the wait time and the actual execution time of each thread.
Ideally, assuming your threads do not block each other (they are independent) and the workload per task is roughly the same, a pool size of Runtime.getRuntime().availableProcessors() or availableProcessors() + 1 gives the best results.
Please refer to Setting Ideal size of Thread Pool for more info.
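For instance, instead of hard-coding the pool size, you could derive it from the device (a small sketch; the + 1 follows the rule of thumb above):
// Size the parsing pool from the number of cores actually available on the device.
int cores = Runtime.getRuntime().availableProcessors();
ExecutorService service = Executors.newFixedThreadPool(cores + 1);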
Regarding question 2: if I have understood your question correctly, there should be no change in the returned data, since the thread pool size has no effect on the network payload; only the wait time and the actual processing time change when the pool size changes.
I am using GenericRequest (an extension of the built-in JsonRequest) to make a REST call to a server that takes in a JSON object and returns a string: "0" if the JSON object already exists, and a nonzero string otherwise.
However, with the following code, I always get a "0" back no matter what I send.
JSONObject userobj = new JSONObject();
try {
userobj.put("email",email);
userobj.put("password",password);
userobj.put("username",name);
} catch (JSONException e) {
e.printStackTrace();
}
Log.d(TAG, userobj.toString());
GenericRequest jsonObjReq = new GenericRequest(Request.Method.POST, REGISTER_URL, String.class, userobj,
new Response.Listener<String>() {
@Override
public void onResponse(String response) {
// Handle access token.
Log.d(TAG, "Register received: " + response);
long token = Long.parseLong(response);
if(token == 0) {
Log.d(TAG, "Received 0!");
Toast.makeText(MainActivity.this, R.string.registerfail_toast, Toast.LENGTH_LONG).show();
} else {
Log.d(TAG, "Register success!");
Toast.makeText(MainActivity.this, R.string.Welcome, Toast.LENGTH_LONG).show();
}
}
},
new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
Log.d(TAG, error.toString());
Toast.makeText(MainActivity.this, error.toString(), Toast.LENGTH_LONG).show();
}
}) {
@Override
public String getBodyContentType() {
return "application/json";
}
};
jsonObjReq.setRetryPolicy(new DefaultRetryPolicy(0, -1, DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
helper.add(jsonObjReq);
When testing in Postman, with input like:
{
"email": "dlee23122",
"password": "1234",
"username": "dlee23122"
},
it gives back a nonzero string. But when given a slightly different input via Volley, it keeps giving back a "0". What could be the reason?
Thanks in advance!
I faced the same problem. Volley's default socket timeout is 2.5 seconds and the default policy retries once, so the request was being posted two times. Check the request time in Postman (shown on the right side): if it is close to 2500 ms or greater, this is likely the cause.
My problem was solved by adding the following to the Volley request:
DefaultRetryPolicy retryPolicy = new DefaultRetryPolicy(0, -1, DefaultRetryPolicy.DEFAULT_BACKOFF_MULT);
jsonObjectRequest.setRetryPolicy(retryPolicy);
If you want to set a custom retry policy, look at this post: Change Volley timeout duration.
I'm making a request to the server using Volley, but sometimes when there is network latency it sends the request to the server again.
As per my requirement, it should call the server only once, irrespective of the response from the server.
The Volley Default Retry Policy is:
/** The default socket timeout in milliseconds */
public static final int DEFAULT_TIMEOUT_MS = 2500;
/** The default number of retries */
public static final int DEFAULT_MAX_RETRIES = 1;
/** The default backoff multiplier */
public static final float DEFAULT_BACKOFF_MULT = 1f;
You can find this in DefaultRetryPolicy.java, so you can see that Volley makes 1 retry request by default.
Try a smaller timeout (if you don't want to wait the full 2500 ms) or a bigger one (if your server needs more than 2500 ms to answer), but keep the other values, for example:
// Wait 20 seconds and don't retry more than once
myRequest.setRetryPolicy(new DefaultRetryPolicy(
(int) TimeUnit.SECONDS.toMillis(20),
DefaultRetryPolicy.DEFAULT_MAX_RETRIES,
DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
public RetryPolicy myRetryPolicy(){
return new RetryPolicy() {
@Override
public int getCurrentTimeout() {
return 10000;
}
@Override
public int getCurrentRetryCount() {
return 0;
}
@Override
public void retry(VolleyError error) throws VolleyError {
Log.d(TAG, "Volley Error " + error.toString());
throw new VolleyError("Do Not Retry");
}
};
}
If you set a custom RetryPolicy and throw a VolleyError in its retry() method, Volley never retries:
postRequest.setRetryPolicy(myRetryPolicy());
LogCat
11-05 13:00:36.078 6014-6314/******: Volley Error com.android.volley.TimeoutError
11-05 13:00:36.079 6014-6014/*******: Volley Error com.android.volley.VolleyError: Do Not Retry