Possible OutOfMemoryError when uploading a large file to Parse - Android

I am trying out Parse as a possible backend for my app, but I have a concern when dealing with large files that I want to upload to it.
The documentation says I need to convert a file to a byte array and pass that into the ParseFile constructor, like so:

    byte[] data = "Working at Parse is great!".getBytes();
    ParseFile file = new ParseFile("resume.txt", data);

However, a large file can obviously cause an OutOfMemoryError here, since converting the file to a byte array loads the whole thing into memory.
So my question is: how would I go about uploading a large file to Parse while avoiding an OOME?

It is very likely your users will end up crashing the app if they have full control of the upload process, since many users don't have a solid understanding of file sizes. I noticed that Parse has a nice wrapper class in its iOS API (PFFile) which is limited to 10 MB; it's a pity they haven't implemented the same for Android.
I think you are going to have to do this manually using their REST API, more specifically the files endpoint. You can easily use an HttpURLConnection in conjunction with a FileInputStream, which gives you more flexibility over the upload process because you work with streams. I'm usually more comfortable doing things manually rather than through a wrapper where it's not exactly clear what's going on behind the scenes, but if you are not, here is a minimalistic example.
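A minimal sketch of that idea, streaming straight from a FileInputStream so only a small buffer is ever held in memory. The endpoint and header names follow Parse's REST files documentation; the keys are placeholders:

    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ParseFileUploader {

        // Placeholder credentials -- substitute your own keys.
        private static final String APP_ID = "YOUR_APP_ID";
        private static final String REST_KEY = "YOUR_REST_API_KEY";

        /** Streams a file to the Parse REST files endpoint without loading it all into memory. */
        public static int upload(File file, String contentType) throws Exception {
            URL url = new URL("https://api.parse.com/1/files/" + file.getName());
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("X-Parse-Application-Id", APP_ID);
            conn.setRequestProperty("X-Parse-REST-API-Key", REST_KEY);
            conn.setRequestProperty("Content-Type", contentType);
            // Fixed-length streaming stops HttpURLConnection from buffering the
            // whole body in memory, which is exactly what causes the OOME.
            conn.setFixedLengthStreamingMode((int) file.length());

            InputStream in = new BufferedInputStream(new FileInputStream(file));
            OutputStream out = conn.getOutputStream();
            try {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read); // only 8 KB in memory at a time
                }
                out.flush();
            } finally {
                in.close();
                out.close();
            }
            int code = conn.getResponseCode();
            conn.disconnect();
            return code;
        }
    }

On success, the response body (JSON containing the stored file's name and URL, per the Parse REST docs) can then be attached to a Parse object.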

One possible solution is to read the large file in portions and upload each portion separately. You would need to do this in such a way that you can reassemble the file when you download it again.
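For example, a minimal sketch of the splitting side; the 1 MB chunk size and .partN naming are arbitrary choices here, and reassembly is just concatenating the parts in order:

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;

    public class FileSplitter {

        private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per portion (arbitrary)

        /** Splits source into numbered part files; returns the part count needed to reassemble. */
        public static int split(File source) throws Exception {
            FileInputStream in = new FileInputStream(source);
            byte[] buffer = new byte[CHUNK_SIZE];
            int part = 0;
            int read;
            try {
                while ((read = in.read(buffer)) != -1) {
                    File chunk = new File(source.getParent(), source.getName() + ".part" + part++);
                    FileOutputStream out = new FileOutputStream(chunk);
                    out.write(buffer, 0, read); // may be shorter than CHUNK_SIZE for the last part
                    out.close();
                }
            } finally {
                in.close();
            }
            return part;
        }
    }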

There is no reason you cannot use the REST API and chunk the upload using normal HTTP/1.1 headers.
You should read up on chunked transfer encoding and figure out how to segment your data on Android if it really is that big.
On the sending side you will have to segment the file explicitly, managing I/O with streams on both ends.
The output stream you should be able to hook up directly to the HTTP request object's output stream; the input side is just a read of a smaller block of your input file, so you conserve memory.
Set up the HTTP connection.
Connect it.
Get the output stream of the HTTP request and pass it to the stream-copy utility that handles your chunking.
HTTP request headers:
> Transfer-Encoding: chunked
> Expect: 100-continue
In the HTTP response headers from Parse you should see something indicating "100 Continue": the server is waiting for more data to be sent by the chunking process behind the output stream attached to the request.
Remember to close streams and connections when you are done with the HTTP POST.
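Putting those steps together, a minimal sketch of the flow; note that HttpURLConnection emits the Transfer-Encoding: chunked header itself once setChunkedStreamingMode is set (the endpoint here is a placeholder):

    import java.io.BufferedInputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ChunkedUploader {

        /** POSTs a file with chunked transfer encoding; only one chunk is in memory at a time. */
        public static int post(File file, String endpoint) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            // Switches the request to "Transfer-Encoding: chunked" and flushes
            // the body in 16 KB chunks instead of buffering all of it.
            conn.setChunkedStreamingMode(16 * 1024);

            InputStream in = new BufferedInputStream(new FileInputStream(file));
            OutputStream out = conn.getOutputStream(); // the request's output stream
            try {
                byte[] buffer = new byte[16 * 1024];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                out.flush();
            } finally {
                in.close();   // close streams/connections when done with the POST
                out.close();
            }
            int code = conn.getResponseCode();
            conn.disconnect();
            return code;
        }
    }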
Note: if you have big files, you should really question what you are getting from the large size. Most types of media files let you trim their size before upload.

Related

How to upload a large file on Android?

Recently I have been working on an app that requires files on the phone to be uploaded to a server. These files may be either images or videos. I used AsyncTask to do the networking in the background.
However, if the file size is greater than 45 MB, the upload fails; it works just fine otherwise.
What should I use instead of AsyncTask? Should I go for a sync adapter or for the Volley library? I know little about either of these.
You can use Retrofit's TypedFile approach to upload the file as a multipart request.
For your reference:
https://futurestud.io/blog/retrofit-how-to-upload-files/
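A minimal sketch of that approach with Retrofit 1.x's TypedFile, roughly as the linked article lays out; the /upload endpoint, base URL, and MIME type are placeholders:

    import retrofit.Callback;
    import retrofit.RestAdapter;
    import retrofit.RetrofitError;
    import retrofit.client.Response;
    import retrofit.http.Multipart;
    import retrofit.http.POST;
    import retrofit.http.Part;
    import retrofit.mime.TypedFile;

    public interface FileUploadService {
        @Multipart
        @POST("/upload") // hypothetical endpoint -- adjust to your server
        void upload(@Part("file") TypedFile file, Callback<Response> callback);
    }

Using it would then look something like:

    RestAdapter adapter = new RestAdapter.Builder()
            .setEndpoint("https://your.server.com") // placeholder base URL
            .build();
    FileUploadService service = adapter.create(FileUploadService.class);
    TypedFile typedFile = new TypedFile("video/mp4", new java.io.File(path));
    service.upload(typedFile, new Callback<Response>() {
        @Override public void success(Response result, Response response) { /* uploaded */ }
        @Override public void failure(RetrofitError error) { /* inspect the error */ }
    });

The Callback variant runs the request off the main thread for you, which also removes the AsyncTask plumbing.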
What response do you get from the server when the upload fails? Understanding the response can give you insight into why the upload is failing. One possibility is that the server you are uploading to is not configured to handle a payload that big, in which case you will get the following HTTP response:
HTTP Error 413: Request Entity Too Large
However, this is just one of the possibilities. I suggest you inspect the HTTP response and its other properties.

How can I divide my file into blocks of data and transfer those blocks from my Android app?

I want to be able to transfer blocks of my file, instead of the complete file, from my app in order to transfer it more efficiently. What is the best way to do that?
Update
Here is an example of where I would need this approach: say I have a 4 GB file which I am trying to upload. If the network fails, the upload stops and I have to start from scratch. Instead, if I keep track of the blocks that have already been transferred, I can continue from the blocks that were yet to be transferred. This is especially important on flaky network connections.
May I know what kind of file you have? I am asking because if it is something other than a text file, I think you need a codec such as μ-law or A-law (G.711), or G.729.
I figured out one approach to do this. What you could do is divide the file into blocks using the approach mentioned here - How to break a file into pieces using Java? - convert each block into a Base64 string, and then transfer the encoded string. The server responds with the chunk number after it receives each chunk. That number can be saved locally on the device, and the next chunk can then be sent to the server. After all the chunks have been transferred, the Base64 decoding can be done on the server side and the file can be merged back together.
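A minimal sketch of that flow, assuming a hypothetical chunk endpoint that acknowledges each piece; the chunk size, URL scheme, and resume bookkeeping are illustrative choices:

    import android.util.Base64;

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ResumableUploader {

        private static final int CHUNK_SIZE = 256 * 1024; // 256 KB (arbitrary)

        /** Sends the file one Base64-encoded chunk at a time, resuming from firstChunk. */
        public static void upload(File file, String endpoint, int firstChunk) throws Exception {
            FileInputStream in = new FileInputStream(file);
            byte[] buffer = new byte[CHUNK_SIZE];
            try {
                // Skip chunks the server already acknowledged (e.g. after a network failure).
                in.skip((long) firstChunk * CHUNK_SIZE);
                int chunk = firstChunk;
                int read;
                while ((read = in.read(buffer)) != -1) {
                    String encoded = Base64.encodeToString(buffer, 0, read, Base64.NO_WRAP);
                    sendChunk(endpoint, chunk, encoded);
                    // Persist chunk locally (e.g. SharedPreferences) so a failed
                    // transfer can resume here instead of starting from scratch.
                    chunk++;
                }
            } finally {
                in.close();
            }
        }

        // Hypothetical endpoint: POST <endpoint>?chunk=<n> with the Base64 body.
        private static void sendChunk(String endpoint, int chunk, String body) throws Exception {
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(endpoint + "?chunk=" + chunk).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            OutputStreamWriter out = new OutputStreamWriter(conn.getOutputStream(), "UTF-8");
            out.write(body);
            out.close();
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                throw new IOException("Chunk " + chunk + " was not acknowledged");
            }
            conn.disconnect();
        }
    }

Note that Base64 inflates each chunk by about a third; sending the raw bytes with a Content-Type of application/octet-stream avoids that overhead if the server can accept it.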

Android + NodeJS: Client-Server communication

I have some questions about developing an Android application that shall be able to communicate with a NodeJS server.
The Android application gathers some data and saves everything in a .csv file.
This file then needs to be uploaded to a NodeJS server. The NodeJS server should save the file as well as store its content in MongoDB.
My question now is how I should implement the communication between the Android device and the server.
I know how to upload a single file to a NodeJS server using an HttpURLConnection with a DataOutputStream.
But I need more than just uploading the file, because I need a unique identification of each Android device.
I thought about using the (encrypted) Google account e-mail address of the user to distinguish the devices. I am not interested in knowing who uploads which data, but I need to store the data for each device separately.
The problem is that I don't know how to communicate between the device and the server.
If I upload a file via HttpURLConnection and DataOutputStream, it seems that I can only upload the file itself, without any additional information like the unique key for the device.
I also thought about uploading the file via sockets, but I am not sure how to handle large file sizes (5 MB or more).
I am not looking for code fragments; I rather need some hints in the right direction. Hopefully my problem is stated clearly and someone can help me with this.
Using an HttpURLConnection on the Android side and a RESTful server on the Node side would be a straightforward option.
You can embed information into the URL in a RESTful way:
pathParam: www.address.com/api/save/{clientId}/data
queryParam: www.address.com/api/save/data?c={clientID}
each uniquely identifying the client. The scheme can be whatever you choose. You will have to build the HttpURLConnection each time, since the URI is unique, and that uniqueness is important!
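A minimal sketch of the client side, using the hypothetical pathParam scheme above:

    import java.io.DataOutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CsvUploader {

        /** POSTs the CSV bytes to a per-client URL so each device's data stays separate. */
        public static int upload(String clientId, byte[] csvBytes) throws Exception {
            // Build a fresh connection every time, since the client ID makes the URI unique.
            URL url = new URL("https://www.address.com/api/save/" + clientId + "/data");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/csv");
            DataOutputStream out = new DataOutputStream(conn.getOutputStream());
            try {
                out.write(csvBytes); // the gathered .csv data
                out.flush();
            } finally {
                out.close();
            }
            int code = conn.getResponseCode();
            conn.disconnect();
            return code;
        }
    }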
The server side can then route the URL however you see fit. Node has a number of packages to aid in that (Express, Restify, etc.). Basically you'll grab the body of the request to store into your DB, but the other parameters are available too so it's all a unique and separated transaction.
Edit: The package you use for RESTful handling can stream large files for you as well. Processing of the request can really begin once the data is fully uploaded to the server.
Using a socket would be nearly as easy. The most difficult part will be making your own protocol, which in reality could be very simple.
Upload one file at a time by sending data to the socket like this:
    54::{filename:'myfilename.txt',length:13023,hash:'ss23vd'}xxxxxxxxxxx...

54 = the length of the JSON text
:: = the delimiter between the length and the JSON
{JSON} = additional data you need
xxx... = 13023 bytes of file data
Then once all the data is sent you can disconnect... or, if you need to send another file, you know where the next set of data begins.
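A minimal Java sketch of the sender side of this framing; the hash field is left out for brevity:

    import java.io.DataOutputStream;
    import java.io.File;
    import java.io.FileInputStream;
    import java.net.Socket;

    public class FramedFileSender {

        /** Sends one file using the "length::{json}bytes" framing described above. */
        public static void send(File file, String host, int port) throws Exception {
            String header = "{\"filename\":\"" + file.getName()
                    + "\",\"length\":" + file.length() + "}";
            byte[] headerBytes = header.getBytes("UTF-8");

            Socket socket = new Socket(host, port);
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            FileInputStream in = new FileInputStream(file);
            try {
                // Length prefix, "::" delimiter, JSON header, then the raw bytes.
                out.write((headerBytes.length + "::").getBytes("UTF-8"));
                out.write(headerBytes);
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
                out.flush();
            } finally {
                in.close();
                out.close();
                socket.close();
            }
        }
    }

The Node side reads up to the "::", parses the JSON, and then knows exactly how many bytes belong to this file before the next frame starts.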
And since Node.js is JavaScript, you already have excellent JSON support to parse the header for you.
Would I suggest using a socket? Probably not, because if you ever have to upload additional files at the same time, HTTP and Node.js's HTTP modules might do a better job. But if you can guarantee nothing will ever change, then sure, why not... though that's a bad attitude to have towards development.

Getting incomplete XML data from a web service

I am sending a POST request to a web service which returns data in XML format. When I invoke the web service via the browser, I can see the complete XML file. However, when I call the web service from my Android app (in the emulator), I only get partial XML data.
The size of the XML file being sent is approximately 14500 bytes. I first store the response in a byte array whose size I calculate using the getContentLength() method, and this method returns the correct size of the response.
However, my InputStreamReader reads only about 6350 bytes of the actual data into the byte array. The remaining part of the byte array simply contains zeros.
I have looked everywhere online, but so far I haven't come across a solution. Yes, chunking is one option, but since I don't have access to the web service code, I can't implement that option.
I'd really appreciate it if someone could guide me here!
I don't understand why you need to use an InputStreamReader (which can behave very unexpectedly if you are not careful) to read web service data; in your case it is only around 14.5 KB. Just use HttpURLConnection and connect to your web service; it will take care of the connection for you, and you can check the status with connection.getResponseMessage().
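For what it's worth, the symptoms described (getContentLength() correct, buffer only partially filled, the rest zeros) are the classic result of calling read() once and assuming it fills the whole buffer; a single read() is only guaranteed to return some bytes. A minimal sketch that loops until the stream is exhausted:

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ResponseReader {

        /** Reads the entire response body, looping until the stream reports end-of-data. */
        public static byte[] readAll(String serviceUrl) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
            InputStream in = conn.getInputStream();
            ByteArrayOutputStream result = new ByteArrayOutputStream();
            try {
                byte[] buffer = new byte[4096];
                int read;
                // A single read() may return only part of the data (e.g. ~6350 of
                // 14500 bytes); keep reading until -1 signals the end of the stream.
                while ((read = in.read(buffer)) != -1) {
                    result.write(buffer, 0, read);
                }
            } finally {
                in.close();
                conn.disconnect();
            }
            return result.toByteArray();
        }
    }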

Android: Download File only if changed

I want to write an Android application which needs data from the web. This information is stored in a JSON file, and the data from the JSON file is saved on the device. To keep it up to date, I need to check the file for changes every hour.
As the remote file can get quite large, I want to download it only if it differs from the previously downloaded version. I thought about using the Last-Modified header of HTTP for this.
I came up with the following workflow (pseudo-code):
    data = null; data_timestamp = null;

Every hour, repeat:

    Issue an HTTP HEAD request to the URL and obtain new_timestamp from the Last-Modified header.
    If data == null or new_timestamp > data_timestamp:
        Issue a normal HTTP GET request to the URL.
        Save the response to data and set data_timestamp = new_timestamp.
Do you think this is a reasonable approach? Alternatively, I could use the If-Modified-Since HTTP header to get the data only if it has changed since the last download, which would save me one request: if the file has changed, the body contains the new data; if it hasn't, the body is empty.
I also thought about using ETags, since I really want to download only when the file has new content (not merely when the modified date has changed), but my webserver (nginx) doesn't support generating ETags here, and I don't want to involve another layer on the server side for performance reasons.
You should look into using ETags instead of relying on HTTP HEAD. They are supported in javax.ws.rs.core with the EntityTag class.
You can see a Java-based example using Spring to help explain some of the concepts as well.
I solved the problem as I described above: download the file using the If-Modified-Since HTTP header. The nginx webserver can be configured to return the right information for this header.
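A minimal sketch of that conditional download with HttpURLConnection; persisting the timestamp between runs is left to the caller:

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ConditionalDownloader {

        /** Fetches the JSON only if it changed since lastModified (epoch millis); null on 304. */
        public static InputStream downloadIfChanged(String fileUrl, long lastModified)
                throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
            // Sends an "If-Modified-Since" request header; the server replies
            // 304 Not Modified with an empty body when the file is unchanged.
            conn.setIfModifiedSince(lastModified);
            if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
                conn.disconnect();
                return null; // the cached copy is still current
            }
            // New content: persist conn.getLastModified() for the next check,
            // then read the body from this stream.
            return conn.getInputStream();
        }
    }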

Categories

Resources