Buffered file reading in Flutter - android

I have the following Dart code and I am trying to make reading the file buffered, just like Java's BufferedReader or C++'s ifstream. Is there such functionality? I cannot even find a buffer mentioned in file.dart or file_impl.dart. If I understood my debugging correctly, it seems that Dart is reading the whole file at once.
So could anybody help me make it buffered, or point me in the right direction to where the buffer is?
final file = File(join(documentsDirectory, "xxx.txt"));
final List<String> lines = await file.readAsLines(); //file.readAsLinesSync()
lines.forEach((line) {
....
});

Use file.openRead(). This will return a Stream of bytes. If you want to read as characters, transform the stream using the appropriate decoder (probably utf8).
As it says, you must read the stream to the end, or cancel it.
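A minimal sketch of that approach, assuming the file is UTF-8 encoded text (the function name and path are illustrative):
import 'dart:convert';
import 'dart:io';

Future<void> readLinesBuffered(String path) async {
  final lines = File(path)
      .openRead()                        // stream of byte chunks, not the whole file
      .transform(utf8.decoder)           // bytes -> characters
      .transform(const LineSplitter());  // characters -> lines
  await for (final line in lines) {
    // handle one line at a time; only a small chunk is in memory
    print(line);
  }
}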

How to correctly send audio files to Google Speech API?

I'm trying to implement Google Speech API in Android by following this demo: https://github.com/GoogleCloudPlatform/android-docs-samples
I was able to successfully reproduce the example in my app by using the given "audio.raw" file located in R.raw, and everything works perfectly. However, when I try to use my own audio files, it returns "API successful" without any transcription text. I'm not sure if it has to do with the files' path or the encoding, so I'll include information on both just in case.
Encoding
My audio files are obtained by recording a voice through MediaRecorder. These are the settings:
myAudioRecorder = new MediaRecorder();
myAudioRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
myAudioRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
myAudioRecorder.setAudioEncoder(MediaRecorder.OutputFormat.AMR_WB);
myAudioRecorder.setAudioSamplingRate(16000);
myAudioRecorder.setAudioEncodingBitRate(16000);
myAudioRecorder.setAudioChannels(1);
myAudioRecorder.setOutputFile(outputFile);
SpeechService's recognizeInputStream() function in the API:
mApi.recognize(
        RecognizeRequest.newBuilder()
                .setConfig(RecognitionConfig.newBuilder()
                        .setEncoding(RecognitionConfig.AudioEncoding.AMR_WB) // originally it was LINEAR16
                        .setLanguageCode("en-US")
                        .setSampleRateHertz(16000)
                        .build())
                .setAudio(RecognitionAudio.newBuilder()
                        .setContent(ByteString.readFrom(stream))
                        .build())
                .build(),
        mFileResponseObserver);
Encoding guidelines by Google: https://cloud.google.com/speech/docs/best-practices
From what I understand, I can use AMR_WB at 16 kHz instead of the default LINEAR16; I'm just not sure if I'm doing it right.
Path
This is the example that is fully working (with the audio file from the repo):
mSpeechService.recognizeInputStream(getResources().openRawResource(R.raw.audio));
However, none of the following options work, even with the exact same file:
InputStream inputStream = new URL("[website]/test/audio.raw").openStream();
mSpeechService.recognizeInputStream(inputStream);
Neither:
Uri uri = Uri.parse("android.resource://[package]/raw/audio");
InputStream inputStream = getActivity().getContentResolver().openInputStream(uri); //"getActivity()" because this is in a Fragment
mSpeechService.recognizeInputStream(inputStream);
To be clear, the result on the above paths is the same as on my custom audio files: "API successful" with no transcription. One of the options I have tried for my custom audio files, with the same thing happening, is this:
FileInputStream fis = new FileInputStream(filePath);
mSpeechService.recognizeInputStream(fis);
The only reason I'm not 100% sure the problem is the path is that, if the API returns success, the file must have been found at the specified path. The problem should then be the encoding, but it's weird that the same file ("audio.raw") sent in different ways produces different results.
Anyway, thank you in advance! :)
EDIT:
To be clear, it's not that it returns an empty string in the transcription. It just never enters the "onSpeechRecognized" function that also exists in the demo, so no transcription is given.

Android ExifInterface not saving attribute

Below is my code:
try {
    InputStream inputStream = getAssets().open("thumbnail.jpg");
    exifInterface = new ExifInterface(inputStream);
    exifInterface.setAttribute(ExifInterface.TAG_ARTIST, "TEST INPUT");
    exifInterface.saveAttributes();
} catch (IOException e) {
    e.printStackTrace();
}
On the exifInterface.saveAttributes() line I get the following error:
java.io.IOException: ExifInterface does not support saving attributes
for the current input.
I am not sure if the error is due to the image file or to the attribute I'm trying to save. I also looked online for possible solutions (e.g. Sanselan), but I'm not sure if that would solve this.
Can somebody explain how to fix this?
Thanks!
You can't do attribute mutation using an InputStream.
If you check the code of ExifInterface, it says:
/**
 * Reads Exif tags from the specified image input stream. Attribute mutation is not supported
 * for input streams. The given input stream will proceed its current position. Developers
 * should close the input stream after use. This constructor is not intended to be used with
 * an input stream that performs any networking operations.
 */
public ExifInterface(InputStream inputStream) throws IOException {
    /* Irrelevant code here */
So, if you would like to write to the metadata of your file, you need to pass the file to the constructor; otherwise it is going to fail. You can also see the code in the class that will always fail with an InputStream:
public void saveAttributes() throws IOException {
    if (!mIsSupportedFile || mMimeType != IMAGE_TYPE_JPEG) {
        throw new IOException("ExifInterface only supports saving attributes on JPEG formats.");
    }
    if (mFilename == null) {
        throw new IOException(
                "ExifInterface does not support saving attributes for the current input.");
    }
    // Irrelevant code
So use ExifInterface(file) and you'll be able to make your code work.
Happy coding!
ExifInterface does not support saving attributes for the current input.
The current input is an InputStream. One cannot save data to an InputStream, only to an OutputStream.
A second problem is that a file in assets is read-only, so you could not even open an OutputStream on it if you tried. It is simply impossible.
What I think might be the issue: you are trying to add an attribute to read-only assets that are packaged inside the app when its APK is built. Adding attributes to files inside the APK is not supported by ExifInterface. However, you can easily add attributes to files that live outside the APK, for example on the SD card.
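A minimal sketch of that file-based route, assuming a Context is available as context and reusing the asset name from the question (the destination file is illustrative): copy the asset somewhere writable, then use the path-based constructor so saveAttributes() has a real JPEG file to modify.
File copy = new File(context.getFilesDir(), "thumbnail.jpg");
try (InputStream in = context.getAssets().open("thumbnail.jpg");
     OutputStream out = new FileOutputStream(copy)) {
    byte[] buf = new byte[8192];
    int len;
    while ((len = in.read(buf)) != -1) {
        out.write(buf, 0, len);
    }
}

ExifInterface exif = new ExifInterface(copy.getAbsolutePath());
exif.setAttribute(ExifInterface.TAG_ARTIST, "TEST INPUT");
exif.saveAttributes(); // succeeds: the input is a writable file, not an InputStream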

Sharing via Seekable Pipe or Stream with Another Android App?

Lots of Intent actions, like ACTION_VIEW, take a Uri pointing to the content the action should be performed upon. If the content is backed by a file -- whether the Uri points directly to the file, or to a ContentProvider serving the file (see FileProvider) -- this generally works.
There are scenarios in which developers do not want to have the content reside in a file for sharing with other apps. One common scenario is for encryption: the decrypted data should reside in RAM, not on disk, to minimize the risk of somebody getting at that decrypted data.
My classic solution to sharing from RAM is to use ParcelFileDescriptor and createPipe(). However, when the activity responding to ACTION_VIEW (or whatever) gets an InputStream on that pipe, the resulting stream is limited compared to the streams you get when the ContentProvider is serving up content from a file. For example, this sample app works fine with Adobe Reader and crashes QuickOffice.
Based on past related questions, my assumption is that createPipe() is truly creating a pipe, and that pipes are non-seekable. Clients that attempt to "rewind" or "fast forward" run into problems as a result.
I am seeking a reliable solution for sharing in-memory content with a third-party app that gets around this limitation. Specifically:
It has to use a Uri syntax that is likely to be honored by client apps (i.e., ACTION_VIEW implementers); solutions that involve something obtuse that client apps are unlikely to recognize (e.g., pass such-and-so via an Intent extra) do not qualify
The data to be shared cannot be written to a file as part of the sharing (of course, the client app could wind up saving the received bytes to disk, but let's ignore that risk for the moment)
Ideally it does not involve the app looking to share the data opening up a ServerSocket or otherwise exacerbating security risks
Possible suggested ideas include:
Some way to reconfigure createPipe() that results in a seekable pipe
Some way to use a socket-based FileDescriptor that results in a seekable pipe
Some kind of RAM disk or something else that feels like a file to the rest of Android but is not persistent
A key criterion, if you will, of a working solution is whether I can get a PDF served from RAM that QuickOffice can read.
Any suggestions?
Thanks!
You've posed a really difficult combination of requirements.
Let's look at your ideas for solutions:
Possible suggested ideas include:
Some way to reconfigure createPipe() that results in a seekable pipe
Some way to use a socket-based FileDescriptor that results in a seekable pipe
Some kind of RAM disk or something else that feels like a file to the rest of Android but is not persistent
The first one won't work. The issue is that the pipe primitive implemented by the OS is fundamentally non-seekable. The reason is that supporting seek would require the OS to buffer the entire pipe "contents" ... until the reading end closes. That is unimplementable ... unless you place a limit on the amount of data that can be sent through the pipe.
The second one won't work either, for pretty much the same reason. OS-level sockets are not seekable.
At one level, the final idea (a RAM file system) works, modulo that such a capability is supported by the Android OS. (A Ramfs file is seekable, after all.) However, a file stream is not a pipe. In particular the behaviour with respect to the end-of-file is different for a file stream and a pipe. And getting a file stream to look like a pipe stream from the perspective of the reader would entail some special code on that side. (The problem is similar to the problem of running tail -f on a log file ...)
Unfortunately, I don't think there's any other way to get a file descriptor that behaves like a pipe with respect to end-of-file and is also seekable ... short of radically modifying the operating system.
If you could change the application that is reading from the stream, you could work around this. This is precluded by the fact that the fd needs to be read and seeked by QuickOffice which (I assume) you can't modify. (But if you could change the application, there are ways to make this work ...)
By the way, I think you'd have the same problems with these requirements on Linux or Windows. And they are not Java specific.
UPDATE
There have been various interesting comments on this, and I want to address some here:
The OP has explained the use-case that is motivating his question. Basically, he wants a scheme where the data passing through the "channel" between the applications is not going to be vulnerable in the event that the user's device is stolen (or confiscated) while the applications are actually running.
Is that achievable?
In theory, no. If one postulates a high degree of technical sophistication (and techniques that the public may not know about ...) then the "bad guys" could break into the OS and read the data from shared memory while the "channel" remained active.
I doubt that such attacks are (currently) possible in practice.
However, even if we assume that the "channel" writes nothing to "disc", there could still be traces of the channel in memory, e.g.
a still mounted RAMfs or still active shared memory segments, or
remnants of previous RAMfs / shared memory.
In theory, this data could be retrieved, provided that the "bad guy" doesn't turn off or reboot the device.
It has been suggested that ashmem could be used in this context:
The issue of there being no public Java APIs could be addressed (by writing 3rd-party APIs, for example)
The real stumbling block is the need for a stream API. According to the "ashmem" docs, they have a file-like API. But I think that just means that they conform to the "file descriptor" model. These FDs can be passed from one application to another (across fork / exec), and you use "ioctl" to operate on them. But there is no indication that they implement "read" and "write" ... let alone "seek".
Now, you could probably implement a read/write/seekable stream on top of ashmem, using native and Java libraries on both ends of the channel. But both applications would need to be "aware" of this process, probably to the level of providing command line options to set up the channel.
These issues also apply to old-style shmem ... except that the channel setup is probably more difficult.
The other potential option is to use a RAM fs.
This is easier to implement. The files in the RAMfs will behave like "normal" files; when opened by an application you get a file descriptor that can be read, written and seeked ... depending on how it was opened. And (I think) you should be able to pass a seekable FD for a RAMfs file across a fork/exec.
The problem is that the RAMfs needs to be "mounted" by the operating system in order to use it. While it is mounted, another (privileged) application can also open and read files. And the OS won't let you unmount the RAMfs while some application has open fds for RAMfs files.
There is a (hypothetical) scheme that partly mitigates the above.
The source application creates and mounts a "private" RAMfs.
The source application creates/opens the file for read/write and then unlinks it.
The source application writes the file using the fd from the open.
The source application forks / execs the sink application, passing the fd.
The sink application reads from the (I think) still seekable fd, seeking as required.
When the source application notices that the (child) sink application process has exited, it unmounts and destroys the RAMfs.
This would not require modifying the reading (sink) application.
However, a third (privileged) application could still potentially get into the RAMfs, locate the unlinked file in memory, and read it.
However, having re-reviewed all of the above, the most practical solution is still to modify the reading (sink) application to read the entire input stream into a byte[], then open a ByteArrayInputStream on the buffered data. The reading application can then seek and reset it at will.
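A minimal sketch of that sink-side workaround, assuming the received stream is available as an InputStream named in (the name is illustrative):
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
byte[] chunk = new byte[8192];
int n;
while ((n = in.read(chunk)) != -1) {
    buffer.write(chunk, 0, n);
}
ByteArrayInputStream seekable = new ByteArrayInputStream(buffer.toByteArray());
seekable.mark(0);    // remember the start of the data
// ... read some of the content ...
seekable.reset();    // "rewind" to the marked position
seekable.skip(1024); // "fast forward" within the buffered data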
It's not a general solution to your problem, but opening a PDF in QuickOffice works for me with the following code (based on your sample):
@Override
public AssetFileDescriptor openAssetFile(Uri uri, String mode) throws FileNotFoundException {
    try {
        byte[] data = getData(uri);
        long size = data.length;
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        new TransferThread(new ByteArrayInputStream(data), new AutoCloseOutputStream(pipe[1])).start();
        return new AssetFileDescriptor(pipe[0], 0, size);
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}

private byte[] getData(Uri uri) throws IOException {
    AssetManager assets = getContext().getResources().getAssets();
    InputStream is = assets.open(uri.getLastPathSegment());
    ByteArrayOutputStream os = new ByteArrayOutputStream();
    copy(is, os);
    return os.toByteArray();
}

private void copy(InputStream in, OutputStream out) throws IOException {
    byte[] buf = new byte[1024];
    int len;
    while ((len = in.read(buf)) > 0) {
        out.write(buf, 0, len);
    }
    in.close();
    out.flush();
    out.close();
}

@Override
public Cursor query(Uri url, String[] projection, String selection, String[] selectionArgs, String sort) {
    if (projection == null) {
        projection = new String[] { OpenableColumns.DISPLAY_NAME, OpenableColumns.SIZE };
    }
    String[] cols = new String[projection.length];
    Object[] values = new Object[projection.length];
    int i = 0;
    for (String col : projection) {
        if (OpenableColumns.DISPLAY_NAME.equals(col)) {
            cols[i] = OpenableColumns.DISPLAY_NAME;
            values[i++] = url.getLastPathSegment();
        } else if (OpenableColumns.SIZE.equals(col)) {
            cols[i] = OpenableColumns.SIZE;
            values[i++] = AssetFileDescriptor.UNKNOWN_LENGTH;
        }
    }
    cols = copyOf(cols, i);
    values = copyOf(values, i);
    final MatrixCursor cursor = new MatrixCursor(cols, 1);
    cursor.addRow(values);
    return cursor;
}

private String[] copyOf(String[] original, int newLength) {
    final String[] result = new String[newLength];
    System.arraycopy(original, 0, result, 0, newLength);
    return result;
}

private Object[] copyOf(Object[] original, int newLength) {
    final Object[] result = new Object[newLength];
    System.arraycopy(original, 0, result, 0, newLength);
    return result;
}
I believe you're looking for StorageManager.openProxyFileDescriptor, a function added in API 26. This will give you the ParcelFileDescriptor needed for your ContentProvider.openAssetFile to work. But you can also grab its file descriptor and use it in file I/O: new FileInputStream(fd.getFileDescriptor())
The function's description says:
This can be useful when you want to provide quick access to a large file that isn't backed by a real file on disk, such as a file on a network share, cloud storage service, etc. As an example, you could respond to a ContentResolver#openFileDescriptor(android.net.Uri, String) request by returning a ParcelFileDescriptor created with this method, and then stream the content on-demand as requested. Another useful example might be where you have an encrypted file that you're willing to decrypt on-demand, but where you want to avoid persisting the cleartext version.
It works with ProxyFileDescriptorCallback, which is your function to provide I/O, mainly read pieces of your file from various offsets (or decrypt it, read from network, generate, etc).
As I tested, it's also well suited for video playback over the content:// scheme, because seeking is efficient: there is no seek-by-read as with the pipe-based approach; Android asks only for the relevant fragments of your file.
Internally Android uses a FUSE driver to transfer the data between processes.
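A hedged sketch of what that can look like (API 26+), assuming the in-memory content is a byte[] named data and a Context named context; these names are placeholders, not part of the platform API:
ParcelFileDescriptor openInMemory(Context context, byte[] data) throws IOException {
    StorageManager sm = (StorageManager) context.getSystemService(Context.STORAGE_SERVICE);
    HandlerThread thread = new HandlerThread("proxy-fd");
    thread.start();
    return sm.openProxyFileDescriptor(
            ParcelFileDescriptor.MODE_READ_ONLY,
            new ProxyFileDescriptorCallback() {
                @Override
                public long onGetSize() {
                    return data.length;
                }

                @Override
                public int onRead(long offset, int size, byte[] out) {
                    // Android requests only the ranges it needs, so seeking stays cheap.
                    int available = (int) Math.max(0, data.length - offset);
                    int toRead = Math.min(size, available);
                    System.arraycopy(data, (int) offset, out, 0, toRead);
                    return toRead;
                }

                @Override
                public void onRelease() {
                    thread.quitSafely();
                }
            },
            new Handler(thread.getLooper()));
}
Return the resulting ParcelFileDescriptor from openFile(), or wrap it in an AssetFileDescriptor in openAssetFile().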
I've been experimenting with @josias's code. I found some of the query(...) calls came with a projection of _data. Including the data for that column and setting the actual length means more file types can be opened in more apps. Always including _data, even when it is not in the passed-in projection, allows opening even more file types.
Here is what I ended up with:
private static final String[] PROJECTION = {OpenableColumns.DISPLAY_NAME, OpenableColumns.SIZE, "_data"};

@Override
public Cursor query(Uri url, String[] projection, String selection, String[] selectionArgs, String sort) {
    byte[] data = getData(mSourcePath, url);
    final MatrixCursor cursor = new MatrixCursor(PROJECTION, 1);
    cursor.newRow()
            .add(url.getLastPathSegment())
            .add(data.length)
            .add(data);
    return cursor;
}

Program crashes on trying to open a file

My android program crashes on this line when the file size is very large. Is there any way I can prevent the program from crashing ?
byte[] myByteArray = new byte[(int)mFile.length()];
Additional details: I am trying to send the file to a server.
Error log:
E/dalvikvm-heap(29811): Out of memory on a 136309996-byte allocation.
You should use a stream when reading the file. Since you've mentioned sending to a server, you should stream that file to the server.
As others have mentioned, you should consider your data size (1GB seems excessive). I haven't tested this, but the basic approach in code would look something like:
// open a stream to the file
FileInputStream fileInputStream = new FileInputStream(filePath);

// open a stream to the server
HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
connection.setDoOutput(true);
DataOutputStream outputStream = new DataOutputStream(connection.getOutputStream());

byte[] buffer = new byte[BUFFER_SIZE]; // pick some buffer size
int bytesRead = 0;

// continually read from the file into the buffer and immediately write that to the output stream
while ((bytesRead = fileInputStream.read(buffer)) != -1) {
    outputStream.write(buffer, 0, bytesRead); // only write the bytes actually read
}
Hope that is clear enough for you to fit to your needs.
Yep. Don't try to read the whole file into memory at once...
If you really need the whole file in memory you might have more luck with allocating dynamic memory for each line and storing the lines in a list. (you might be able to get a bunch of smaller chunks of memory but not one big piece)
Without knowing the context we can't tell, but normally you would parse the file into data structs rather than just storing the whole file in memory.
In JDK 7 you can use Files.readAllBytes(Path).
Example:
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.Path;
Path path = Paths.get("path/to/file");
byte[] myByteArray = Files.readAllBytes(path);
Don't try reading the complete file into memory. Instead, open a stream and process the file line by line (if it's a text file) or in parts. How that has to be done depends on the problem you are trying to solve.
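A small sketch of line-by-line processing, assuming a text file whose path is in a String named filePath (illustrative):
try (BufferedReader reader = new BufferedReader(new FileReader(filePath))) {
    String line;
    while ((line = reader.readLine()) != null) {
        // process one line at a time; only a small buffer is held in memory
    }
} catch (IOException e) {
    e.printStackTrace();
}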
EDIT: You say you want to upload a file, so please check this question. You don't need to have the complete file in memory.

Using MediaRecorder to write to a buffer or FIFO

I am developing a low-data-rate VoIP kind of project. I need to capture audio at low data rates and store it in an internal buffer or FIFO (NOT in a file).
I would like to use low data rate .AMR encoders, which means AudioRecord is out. MediaRecorder looks like it does exactly what I want except that it seems to write to a file.
MediaRecorder takes a FileDescriptor... is there any way I can write a class that implements the FileDescriptor interface, acting as a sink for bytes, but instead of sending them to a file they are stored in a buffer? The documentation on FileDescriptor specifically says that applications shouldn't create their own, but why not, and is it possible anyway?
http://docs.oracle.com/javase/1.4.2/docs/api/java/io/FileDescriptor.html
In short, I'd like to develop my own stream and trick MediaRecorder into sending data to it. Perhaps doing something tricky like opening both ends of a socket within the same app and giving MediaRecorder the socket to write to, using the socket as my FIFO? I'm somewhat new to this, so any help/suggestions are greatly appreciated.
I have a related question on the RX side. I'd like to have a buffer/FIFO that feeds MediaPlayer. Can I trick MediaPlayer into accepting data from a buffer fed by my own proprietary stream?
I know it's a bit late to answer this question now...
...but if it helps, here's the solution.
Android MediaRecorder's setOutputFile() method accepts a FileDescriptor as a parameter.
For your need, a Unix data pipe can be created and its FD passed as an argument in the following manner...
mediaRecorder.setOutputFile(getPipeFD());
FileDescriptor getPipeFD()
{
    final String FUNCTION = "getPipeFD";
    FileDescriptor outputPipe = null;
    try
    {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        outputPipe = pipe[1].getFileDescriptor();
    }
    catch (Exception e)
    {
        Log.e(TAG, FUNCTION + " : " + e.getMessage());
    }
    return outputPipe;
}
ParcelFileDescriptor.createPipe() creates a Unix data pipe and returns an array of two ParcelFileDescriptors: the first refers to the read channel (source) and the second to the write channel (sink) of the pipe. Hand the write channel to the MediaRecorder object so the recorded data is written into the pipe, and consume it from the read channel, as sketched below...
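A hedged sketch of that wiring, where mediaRecorder, myFifo, and TAG are illustrative names (myFifo stands for whatever in-memory sink you use, e.g. a ByteArrayOutputStream or your own FIFO class):
final ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
mediaRecorder.setOutputFile(pipe[1].getFileDescriptor()); // write end goes to MediaRecorder

// Drain the read end on a background thread so the recorded bytes never touch a file.
new Thread(new Runnable() {
    @Override
    public void run() {
        InputStream in = new ParcelFileDescriptor.AutoCloseInputStream(pipe[0]);
        byte[] buf = new byte[4096];
        int n;
        try {
            while ((n = in.read(buf)) != -1) {
                myFifo.write(buf, 0, n);
            }
            in.close();
        } catch (IOException e) {
            Log.e(TAG, "pipe read failed", e);
        }
    }
}).start();

// ...then configure the encoder, and call prepare() and start() on mediaRecorder as usual.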
As far as MediaPlayer is concerned, the same technique could be used by passing the FileDescriptor for the pipe's read channel to its setDataSource() method...
