I wanted to save a string to a file and read it back, so I followed those two examples:
developer.android.com example
anddev.org example
However, neither of them wraps the stream in a BufferedOutputStream, even though the documentation for FileOutputStream recommends it.
Was this done to keep the examples simple, or is it really not necessary on Android? And does the same answer apply to the InputStream side?
Regards,
jellyfish
In this case, the authors of the examples know ahead of time that the output data is small. You really only need the buffered version when you have "large" amounts of data to write (which you usually can't know with absolute certainty ahead of time).
The JavaDoc for BufferedOutputStream highlights this well:
"Expensive interaction with the underlying input stream is minimized, since most (smaller) requests can be satisfied by accessing the buffer alone."
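For reference, adding the buffer is a one-line change, so it costs little even when the data is small. Here is a minimal sketch (the file name and content are placeholders; on Android you would typically get the stream from Context.openFileOutput() instead of a raw FileOutputStream):

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class BufferedWriteExample {
    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
        // BufferedOutputStream batches many small write() calls into fewer
        // system calls; for a single tiny write it makes little difference.
        try (OutputStream out = new BufferedOutputStream(new FileOutputStream("example.txt"))) {
            out.write(data);
            // try-with-resources closes the stream, which also flushes the buffer
        }
    }
}

The same reasoning applies on the reading side: BufferedInputStream only pays off when you make many small read() calls against the underlying stream.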
Is there a way to find out the size of an asset file in Kotlin on Android? I want to know the size of a group of files so I can confirm the device has enough free space to copy them to internal memory.
I have searched on Google and Stack Overflow, but I have not found anything that helps.
UPDATE:
I have figured out two possible solutions:
context.assets.open("FILE").available()
context.assets.open("FILE").readBytes().size
Which of these is the better way, and which is more efficient and accurate? I have read that available() is not completely accurate. Is this true? Link Here. If so, does that mean reading the bytes from the InputStream (as in option 2) is the best way?
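Not an authoritative answer, but a third option worth knowing about is AssetFileDescriptor.getLength() via openFd(), which returns the size without reading the data at all; it only works for assets stored uncompressed in the APK, so it needs a fallback. A rough sketch (in Java only to match the other snippets in this thread; the asset name is a placeholder):

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import java.io.IOException;
import java.io.InputStream;

public final class AssetSize {
    // Returns the uncompressed size of an asset in bytes.
    public static long sizeOf(Context context, String assetName) throws IOException {
        try (AssetFileDescriptor fd = context.getAssets().openFd(assetName)) {
            // Only works when the asset is stored uncompressed in the APK.
            return fd.getLength();
        } catch (IOException compressedOrMissing) {
            // Fallback: count the bytes; exact, but it reads the whole asset.
            try (InputStream in = context.getAssets().open(assetName)) {
                byte[] buffer = new byte[8192];
                long total = 0;
                int read;
                while ((read = in.read(buffer)) != -1) {
                    total += read;
                }
                return total;
            }
        }
    }
}

Compared with the two options above, this avoids relying on available() and avoids holding the whole asset in memory the way readBytes() does.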
Recently I have been working on a project that has to manage thousands of small files. The whole system randomly reads and writes those files, and some of them are downloaded from the network at the moment they are needed. Because of performance and system limitations, I can only keep 10~20 file descriptors/handles open. So I decided to create one large file as my 'disk' and then dynamically read and write the small files inside it.
Here is my design:
A file header for serializing meta-info
An index that stores the offset of every little file
A bunch of 'pre-allocated segments' to store the files
Support for multi-threaded access
Finally, it looks like a virtual file system. But I'm not sure whether it is safe to read/write the same file at the same time on Android and iOS, and I have no idea how to manage fragmentation of my 'pre-allocated segments'.
Are there any best practices or frameworks I can reference? Am I on the right track?
First off:
I'm not sure whether it is safe to read/write the same file at the same time on Android and iOS
On the one hand, it is safe, in the sense that it won't break the OS, the filesystem, other files, or other applications that aren't using this file. Your concurrent reads and writes will also not fail.
On the other hand, concurrent reads and writes will not be atomic, so if you overwrite a 10K file in one thread while another thread is reading it, you might get half-new-half-old data.
Secondly, this problem of non-atomicity will also apply to your single-file filesystem framework. In any case, this can be mitigated (by you or by the framework) by locking the byte ranges being read and written.
Filesystems do support byte-range locking on files, and the Android/Java docs for this (FileLock obtained from a FileChannel) are easy to find.
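As a concrete illustration, here is a minimal sketch of byte-range locking through java.nio (the file name and offsets are made up; keep in mind that FileLock is advisory and held per process, so it coordinates access between processes, not between threads inside your own app):

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class RangeLockExample {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("container.bin", "rw");
             FileChannel channel = raf.getChannel()) {
            long offset = 4096;   // hypothetical segment offset
            long length = 1024;   // hypothetical segment length
            // Exclusive lock on just this byte range of the container file.
            FileLock lock = channel.lock(offset, length, false);
            try {
                // ... read or write bytes in [offset, offset + length) ...
            } finally {
                lock.release();
            }
        }
    }
}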
iOS, on the other hand, seems to take a different, event-based approach to changes to parts of files. But if you just use the filesystem directly, rather than building your own filesystem inside a single file, all you need to do is replace files atomically.
This approach also has an equivalent in Android.
In the Java/Android world you can look at:
RandomAccessFile: the good old way of reading and writing at any position of a given file
FileChannel: a thread-safe way of doing the same (but you need to coordinate multiple concurrent writes to the same file yourself, for example by managing multiple FileChannel instances over the same file; see Concurrency of RandomAccessFile in Java) - a sketch follows this list
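A minimal sketch of the FileChannel approach (class and method names are made up): the positional read/write overloads do not touch the channel's own file position, so one shared channel stays within a 10~20 handle budget while still allowing concurrent access from several threads.

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class SegmentIo {
    private final FileChannel channel;

    public SegmentIo(String containerPath) throws Exception {
        // One channel shared by all threads; FileChannel is thread-safe.
        this.channel = new RandomAccessFile(containerPath, "rw").getChannel();
    }

    public void writeSegment(long offset, byte[] data) throws Exception {
        // A production version would loop until the buffer is fully drained.
        channel.write(ByteBuffer.wrap(data), offset);
    }

    public byte[] readSegment(long offset, int length) throws Exception {
        ByteBuffer buffer = ByteBuffer.allocate(length);
        while (buffer.hasRemaining()) {
            int n = channel.read(buffer, offset + buffer.position());
            if (n < 0) break; // reached end of file
        }
        return buffer.array();
    }
}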
What you're looking for is close to what most BitTorrent clients do (at least on good old hard drives, to avoid fragmentation): creating a file of a given size up front (reserving the space for the whole download on disk) and then downloading and serving segments of it over the network!
With your design, segments are just a matter of how many bytes you reserve in the file for managing allocations. You can read about file systems on the web!
I don't know much about the iOS world, though...
I am not an expert on Android/iOS, but I think that in your current design the problem will be doing I/O from multiple threads. My advice, if you want to go down this path, would be to add an intermediate memory buffer that handles updates/deletions and writes to disk asynchronously (see the sketch below).
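A rough sketch of that intermediate-buffer idea, assuming a single background thread that flushes writes to disk (all class and method names here are made up for illustration):

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WriteBehindCache {
    private final ConcurrentHashMap<String, byte[]> cache = new ConcurrentHashMap<>();
    private final ExecutorService flusher = Executors.newSingleThreadExecutor();
    private final File baseDir;

    public WriteBehindCache(File baseDir) {
        this.baseDir = baseDir;
    }

    // Writes land in memory immediately; disk I/O happens on one background
    // thread, so only a single file handle is open for writing at a time.
    public void put(String name, byte[] data) {
        cache.put(name, data);
        flusher.execute(() -> {
            try (FileOutputStream out = new FileOutputStream(new File(baseDir, name))) {
                out.write(data);
            } catch (IOException e) {
                // a real implementation would retry or report the failure
            }
        });
    }

    public byte[] get(String name) throws IOException {
        byte[] cached = cache.get(name);
        if (cached != null) {
            return cached;
        }
        File file = new File(baseDir, name);
        byte[] data = new byte[(int) file.length()];
        try (FileInputStream in = new FileInputStream(file)) {
            int off = 0;
            while (off < data.length) {
                int n = in.read(data, off, data.length - off);
                if (n < 0) throw new IOException("unexpected end of file");
                off += n;
            }
        }
        return data;
    }
}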
Another piece of advice would be to try an SQLite database (a database stored in a single file), which already does all the heavy lifting for you.
Put all the files in a database: SQLite for Android. Then they are all in one file.
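For what it's worth, a minimal sketch of storing small files as BLOB rows with SQLiteOpenHelper (table and class names are arbitrary; note that very large blobs run into SQLite cursor-window limits, so this suits genuinely small files):

import android.content.ContentValues;
import android.content.Context;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

// Stores each small "file" as a BLOB row; the whole store is one .db file.
public class BlobStore extends SQLiteOpenHelper {

    public BlobStore(Context context) {
        super(context, "blobstore.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // no migrations in this sketch
    }

    public void put(String name, byte[] data) {
        ContentValues values = new ContentValues();
        values.put("name", name);
        values.put("data", data);
        getWritableDatabase().insertWithOnConflict(
                "files", null, values, SQLiteDatabase.CONFLICT_REPLACE);
    }

    public byte[] get(String name) {
        try (Cursor cursor = getReadableDatabase().query(
                "files", new String[] {"data"}, "name = ?",
                new String[] {name}, null, null, null)) {
            return cursor.moveToFirst() ? cursor.getBlob(0) : null;
        }
    }
}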
I have found a number of resources, but nothing that has quite helped me with what I am looking for. I am trying to understand the .png and .jpg file formats well enough to be able to modify and/or read the Exif or other metadata in the files, or to create my own metadata if possible.
I want to do this in the context of an Android application, so we can keep it there, but it is really not exclusive to that. I am trying to figure out how to do this using a simple input stream / byte array and go from there.
Android itself has to extract the RGB pixel information at some point when it creates a bitmap from the stream. I took a look at the BitmapFactory source to try to understand it, but I got lost somewhere after delving into the native code.
I assume the bitmaps lose any Exif/metadata along the way, based on my research. So I guess I want to break the input streams down into byte arrays and pull the metadata out myself. For .pngs I know there is no single 'standard', but based on this page it seems there is some organization to the metadata you can store.
With all that said, I wouldn't mind leaving the Exif/PNG standards behind and storing my own information in some sort of standardized way, but I need to know more about how image readers identify files as JPEG, PNG, etc., and then determine where the pixel information is located.
So I guess my first question is: has anyone done something similar to this before who can fill me in? If not, does anyone know of any good libraries that would be good educational material for figuring out how to locate and extract this data?
Or even more basically, what is a good way to find the metadata, the Exif data, or even the RGB data programmatically using something like a byte array?
There are a lot of things to address in your question, but first I should clarify that when you say "Android itself has to at least extract the RGB pixel information," what you're referring to is decompression, which is complicated in the case of JPEG and non-trivial even for PNG. I think it would be very useful for you to read through the Wikipedia articles on JPEG and PNG before attempting to go any further (especially the sections on headers, syntax, file structure, etc.).
That being said, you've got the right idea. It shouldn't be too difficult to read in the header of an image as a byte array/stream, make some changes, and replace the old file. A PNG file can be identified by the first 8 bytes, and there should be a similar way to identify a JPEG - I can't remember off the top of my head.
To modify PNG metadata, you'll have to understand "chunks" - their types/names, ordering, format, CRC, etc. The libpng website has some good resources for this: here's general PNG info, as well as the chunk specifications. Make sure you don't forget to recalculate the CRC if you change anything.
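To make the chunk structure concrete, here is a small sketch that just walks the chunks of a PNG and checks each CRC (it only lists chunks rather than rewriting them; textual metadata normally lives in tEXt/zTXt/iTXt chunks):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class PngChunkWalker {
    // 0x89 'P' 'N' 'G' CR LF SUB LF
    private static final long PNG_SIGNATURE = 0x89504E470D0A1A0AL;

    public static void listChunks(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            if (in.readLong() != PNG_SIGNATURE) {
                throw new IOException("not a PNG file");
            }
            while (true) {
                int length = in.readInt();              // big-endian data length
                byte[] type = new byte[4];
                in.readFully(type);                     // e.g. IHDR, tEXt, IDAT, IEND
                byte[] data = new byte[length];
                in.readFully(data);
                long storedCrc = in.readInt() & 0xFFFFFFFFL;

                CRC32 crc = new CRC32();                // the CRC covers type + data
                crc.update(type);
                crc.update(data);
                String name = new String(type, "US-ASCII");
                System.out.printf("%s  %d bytes  crcOk=%b%n",
                        name, length, crc.getValue() == storedCrc);

                if ("IEND".equals(name)) {
                    break;
                }
            }
        }
    }
}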
JPEG divides a file into sections using "markers," which are two bytes long and always start with FF. Exif is just a regular JPEG file with a more specific structure for its metadata, and this seems like a reasonable introduction: Exif/TIFF
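And a matching sketch for JPEG that walks the marker segments up to the start of the compressed data (simplified: it ignores possible 0xFF fill bytes between segments; the Exif block, when present, is the APP1 segment whose payload begins with "Exif\0\0"):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class JpegMarkerScanner {
    public static void listMarkers(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            if (in.readUnsignedShort() != 0xFFD8) {     // SOI marker
                throw new IOException("not a JPEG file");
            }
            while (true) {
                int marker = in.readUnsignedShort();    // every marker starts with 0xFF
                if ((marker & 0xFF00) != 0xFF00) {
                    throw new IOException("broken marker: " + Integer.toHexString(marker));
                }
                if (marker == 0xFFD9 || marker == 0xFFDA) {
                    break;                              // EOI, or SOS (compressed data follows)
                }
                int length = in.readUnsignedShort();    // includes these two length bytes
                byte[] payload = new byte[length - 2];
                in.readFully(payload);
                // APP1 (0xFFE1) whose payload starts with "Exif\0\0" holds the Exif metadata.
                System.out.printf("marker %04X, %d payload bytes%n", marker, payload.length);
            }
        }
    }
}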
There are probably libraries for Android/Java that conveniently take care of this for you, but I've never used any myself. A quick google search turns up this, and I'm sure there are many other options if you don't want to take the time to write a parser yourself.
I need to implement a video DASH client for Android.
So far I haven't found any solution except writing the InputStream to a temp file and then reading the file back. Of course this solution is not efficient at all. I thought about using an OutputStream so I could use its FileDescriptor as the data source, but I'm not able to get a valid FileDescriptor without creating an actual file...
Because of the DASH protocol, the client is in charge of fetching all the (little) segments, so I really need to find a way to play the media directly from memory. Maybe the only solution is to use the JNI, but I don't really know how.
To summarize, I'm open to any suggestion. The only constraints are:
I start with an InputStream
Any intermediate operations are fine, but they should be as efficient as possible
I need to end up with a valid input to feed a MediaPlayer
That seems pretty basic, but I can't find any way to achieve it. Thanks.
Derzu is correct that a local proxy can do this. See my answer here. Feel free to ask any questions.
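For anyone landing here without following the links, the gist of the local-proxy idea is roughly this (a heavily simplified sketch: no Range-request support, hard-coded content type, no error handling, single client only):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Serves one InputStream over a loopback HTTP connection so that
// MediaPlayer.setDataSource("http://127.0.0.1:<port>/stream") can consume it.
public class LocalStreamProxy implements Runnable {
    private final ServerSocket serverSocket;
    private final InputStream source;

    public LocalStreamProxy(InputStream source) throws Exception {
        this.serverSocket = new ServerSocket(0);   // pick any free local port
        this.source = source;
    }

    public String getUrl() {
        return "http://127.0.0.1:" + serverSocket.getLocalPort() + "/stream";
    }

    @Override
    public void run() {
        try (Socket client = serverSocket.accept();
             OutputStream out = client.getOutputStream();
             InputStream in = source) {
            // Minimal HTTP response; a real proxy would honor Range requests,
            // which MediaPlayer uses for seeking.
            out.write(("HTTP/1.1 200 OK\r\n"
                    + "Content-Type: video/mp4\r\n"
                    + "Connection: close\r\n\r\n").getBytes(StandardCharsets.UTF_8));
            byte[] buffer = new byte[16 * 1024];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        } catch (Exception ignored) {
            // sketch only: real code would log and clean up the server socket
        }
    }
}

Used roughly like this: start the proxy on a background thread with new Thread(proxy).start(), then call mediaPlayer.setDataSource(proxy.getUrl()) and prepare as usual.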
I have a couple of binary files that are used in image processing where the data is in float form and is used frequently. The calculations include multiplication and addition of pixels from images taken by the camera.
At present the files are stored as resources in the /res folder, and I use a DataInputStream each time a file is needed; the data is read and processed in a single function.
I am relatively new to using files in applications, especially on Android, so I wanted to find out whether there is a better practice for getting data from a file. For instance, since I am using the same data over and over again, is it better to read it once and store it in an array, or even in an SQLite database? This is an augmented reality application, so the processed images need to be displayed in near real time.
Many thanks in advance.
EDIT: Thank you Patthoyts for your answer. On a different question, however (Why people say mmap by MappedByteBuffer is faster?), I've read that a MappedByteBuffer is not needed when the data is read sequentially, which is what I do each time. Could someone with Android experience tell me whether it is better to read the data in at the start and keep it in a large array or buffer, or to keep reading it from a stream each time I need it? Overall I have around 50,000 floating-point values.
You can use memory-mapped files on Android, which means the data can be paged into memory by the OS for rapid access until something else needs the memory. You do need to copy your data file out of your res/raw or assets section to the data filesystem first, but after that you can use a MappedByteBuffer to provide access, e.g.:
FileInputStream inputstream = context.openFileInput(DATA_FILE_PATH);
FileChannel channel = inputstream.getChannel();
// Map the whole file; the OS pages it into memory on demand.
MappedByteBuffer buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
buffer.order(ByteOrder.LITTLE_ENDIAN);   // match the byte order the file was written with
IntBuffer intbuffer = buffer.asIntBuffer();
Now access the data as a sequence of integers using intbuffer.get(index) and so on.
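Since the data in the question is float rather than int, the same mapped buffer can just as easily be viewed as floats (continuing directly from the snippet above):

FloatBuffer floatbuffer = buffer.asFloatBuffer();
float value = floatbuffer.get(index);   // random access by index, no copying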