I started developing an Android app that has to interact with MMS attachments, in particular to get attachments such as text, bitmaps, audio and video and store them on the phone in a specific folder.
So I started reading some books and some posts on the web, but it isn't a very common topic and I didn't find an official way to do what I want to do.
I found a fairly good article here on Stack Overflow: How to Read MMS Data in Android?... it works very well for me, but there are two problems:
The article shows how to get MMS data by querying the "hidden" SMS/MMS content provider, and as far as I know, Google doesn't guarantee that the current structure will be kept in every future Android release.
The article only explains how to get text and bitmap data from an MMS... what about video/audio? I tried to read a video/audio stream from an InputStream the way the example did with bitmaps, unfortunately with no luck...
I'm quite disappointed about the absence of an official tutorial or how-to on this topic, because SMS and MMS management is a very common need in mobile development.
I hope someone can help me.
Thanks in advance!
I found a fairly simple way to read video/audio data from an MMS, so I decided to publish the part of my class that extracts MMS attachments, for anyone who needs it.
private static final int RAW_DATA_BLOCK_SIZE = 16384; // Block size used when copying the stream into a byte[]

public static final int ERROR_IO_EXCEPTION = 1;
public static final int ERROR_FILE_NOT_FOUND = 2;

// Reads the raw content of the given part URI into a byte[].
// error is a one-element array used as an out-parameter, because Java passes int by value.
public static byte[] LoadRaw(Context context, Uri uri, int[] error) {
    InputStream inputStream = null;
    byte[] ret = new byte[0];
    // Open an InputStream from the specified URI
    try {
        inputStream = context.getContentResolver().openInputStream(uri);
        // Try to read from the InputStream
        if (inputStream != null)
            ret = InputStreamToByteArray(inputStream);
    }
    catch (FileNotFoundException e1) {
        error[0] = ERROR_FILE_NOT_FOUND;
    }
    catch (IOException e) {
        error[0] = ERROR_IO_EXCEPTION;
    }
    finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            }
            catch (IOException e) {
                // Problem closing the stream.
                // The returned data does not change.
                error[0] = ERROR_IO_EXCEPTION;
            }
        }
    }
    return ret;
}

// Creates a byte array from an open InputStream, reading blocks of RAW_DATA_BLOCK_SIZE bytes.
private static byte[] InputStreamToByteArray(InputStream inputStream) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    int nRead;
    byte[] data = new byte[RAW_DATA_BLOCK_SIZE];
    while ((nRead = inputStream.read(data, 0, data.length)) != -1) {
        buffer.write(data, 0, nRead);
    }
    buffer.flush();
    return buffer.toByteArray();
}
In this way you can extract raw data such as audio/video/images from an MMS by passing:
the Context where you need to use this function
the URI of the MMS part that contains the data you want to extract (for example "content://mms/part/2")
a one-element int array used as an out-parameter, which receives an error code if the procedure fails (Java cannot pass an int by reference, so the array plays the role of a "byref" parameter).
Once you have your byte[], you can create an empty File and use a FileOutputStream to write the byte[] into it. If the file path and extension are correct and your app has the right permissions, you'll be able to store your data.
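For example, here is a minimal sketch of those last two steps (the part URI, folder and file name below are only placeholders):

int[] error = new int[1];
byte[] data = LoadRaw(context, Uri.parse("content://mms/part/2"), error);
if (error[0] == 0 && data.length > 0) {
    File outFile = new File(context.getExternalFilesDir(null), "mms_part_2.jpg");
    FileOutputStream fos = null;
    try {
        fos = new FileOutputStream(outFile);
        fos.write(data);
        fos.flush();
    } catch (IOException e) {
        // handle the write failure as needed
    } finally {
        if (fos != null) {
            try { fos.close(); } catch (IOException ignored) { }
        }
    }
}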
PS: this procedure has been tested a few times and it worked, but I can't rule out unhandled exception cases that may produce error states. IMHO it can be improved too...
UPDATE:
I found this post, which details exactly the same problem I am seeing. It turns out that because I am using a Pipe approach in my DocumentsProvider to stream content from Dropbox, ExoPlayer doesn't know the size of the file ahead of time, and so by default it was not saving the stream to the cache.
So I ended up doing what I presume the author did - I created a custom CacheDataSource for these situations that alters the DataSpec.flags variable in the open() method of that class:
public long open(DataSpec dataSpec) throws IOException {
    try {
        key = cacheKeyFactory.buildCacheKey(dataSpec);
        uri = dataSpec.uri;
        actualUri = getRedirectedUriOrDefault(cache, key, /* defaultUri= */ uri);
        httpMethod = dataSpec.httpMethod;
        if (!dataSpec.isFlagSet(DataSpec.FLAG_ALLOW_CACHING_UNKNOWN_LENGTH)) { // <-- update here
            flags = (dataSpec.flags | DataSpec.FLAG_ALLOW_CACHING_UNKNOWN_LENGTH);
        } else {
            flags = dataSpec.flags;
        }
        readPosition = dataSpec.position;
        // ... rest of open() unchanged ...
Not the optimum solution, and I also chimed in on the other post with a request for a more supported way to indicate this flag should be set.
But at least now my streamed files are being saved in the cache.
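For completeness, the modified class still has to be handed to ExoPlayer through a DataSource.Factory. Below is a minimal sketch, assuming ExoPlayer 2.x and a hypothetical FlaggedCacheDataSource name for the modified copy of CacheDataSource described above (with a constructor mirroring CacheDataSource(Cache, DataSource)):

public class FlaggedCacheDataSourceFactory implements DataSource.Factory {
    private final Cache cache;
    private final DataSource.Factory upstreamFactory;

    public FlaggedCacheDataSourceFactory(Cache cache, DataSource.Factory upstreamFactory) {
        this.cache = cache;
        this.upstreamFactory = upstreamFactory;
    }

    @Override
    public DataSource createDataSource() {
        // Every read now goes through the open() override shown above,
        // so streams of unknown length are still written to the cache.
        return new FlaggedCacheDataSource(cache, upstreamFactory.createDataSource());
    }
}

This factory is then passed to the MediaSource in place of the stock CacheDataSourceFactory.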
I am implementing a custom CacheDataSourceFactory for ExoPlayer 2, in order to cache videos streamed to ExoPlayer.
I have reviewed several posts here; this one was helpful in getting the general approach right for having a video cached into the directory of my choice.
I noticed that when handling a URI that resolves to my custom DocumentsProvider, the Cache defined by the CacheDataSourceFactory is only used to store what looks like a "pointer" or "index" file ("cached_content_index.exi"). Looking in that file I see the URI of the video streamed by my custom DocumentsProvider. However, the actual video is not in the cache.
Here is the relevant portion of my Provider; it's quite straightforward:
// Return a descriptor that will stream the file
Timber.d("In openDocument of DropboxProvider for Id: %s, streaming from source", documentId);
ParcelFileDescriptor[] pipe;
try {
    pipe = ParcelFileDescriptor.createPipe();
    // Get an input stream for the pipe
    DbxDownloader downloader = mDbxClient.files().download(fileMetadata.getPathLower(), fileMetadata.getRev());
    new TransferThread(downloader.getInputStream(), new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]), signal, fileMetadata.getSize()).start();
    return pipe[0];
} catch (DbxException dbe) {
    Timber.d("Got DbxException when streaming content: %s", dbe.getMessage());
} catch (IOException ioe) {
    Timber.d("Got IOException when streaming content: %s", ioe.getMessage());
} catch (Exception e) {
    Timber.d("Got Exception when streaming content: %s", e.getMessage());
}
return null;
And the TransferThread:
private static class TransferThread extends Thread {
    final InputStream in;
    final OutputStream out;
    final CancellationSignal signal;
    final long size;

    TransferThread(InputStream in, OutputStream out, CancellationSignal signal, long size) {
        this.in = in;
        this.out = out;
        this.signal = signal;
        this.size = size;
    }

    @Override
    public void run() {
        int biteSize = (8 * 1024);
        if (size <= (biteSize * 8)) {
            biteSize = Math.max(((int) (size / (biteSize * 2))) * (biteSize * 2), biteSize);
        }
        Timber.d("TransferThread: File size is: %s, buffer biteSize set to: %d", InTouchUtils.getFormattedFileSize(size), biteSize);
        byte[] buf = new byte[biteSize];
        int len;
        try {
            while (((len = in.read(buf)) >= 0) && (signal == null || !signal.isCanceled())) {
                out.write(buf, 0, len);
            }
        } catch (IOException e) {
            // When Glide requests a URI that this provider resolves, it closes the stream
            // out from under us once it has fetched enough bytes to render a single frame
            // as an image (when the URI points to a video), so we swallow that exception
            // here and only log the error if it isn't the EPIPE (broken pipe due to one
            // end being closed) exception.
            if (!(e.getMessage().contains("EPIPE"))) {
                Timber.d("TransferThread: Got IOException transferring file: %s", e.getMessage());
            }
        } finally {
            try {
                if (in != null) {
                    in.close();
                }
                if (out != null) {
                    out.flush();
                    out.close();
                }
                Timber.d("TransferThread: Finished streaming file.");
            } catch (IOException ioe) {
                Timber.d("TransferThread: Got IOException closing file: %s", ioe.getMessage());
            }
        }
    }
}
Again - ExoPlayer seems quite happy with the ParcelFileDescriptor it receives from the DocumentsProvider in this case - it takes the bytes streamed to it and plays the video. I am just not seeing the video file end up in the cache.
I also tried an example streaming a video from my Google Drive (which uses the out-of-the-box documents provider from the SAF), and this time the video did wind up in the cache.
Since they both use the same MediaSource instance, there must be something the Google Docs provider does so that ExoPlayer knows to place the resulting streamed video in the cache, which my custom Dropbox DocumentsProvider is not doing.
Does anyone know how to get to the source code of the DocumentsProvider that ships with the SAF that manages access to Google Docs? I'd like to see what it is doing in its openDocument() method.
Is the fact that the Dropbox provider is utilizing a Pipe in its ParcelFileDescriptor something that ExoPlayer doesn't handle?
Other Ideas?
I realized, after going through a post in the IBM developer forums, that the Android SDK reads bytes from the mic recording and writes them to the WebSocket. I am now trying to read bytes from an audio file on the device and write them to the WebSocket. How should I do this? So far I have:
public class AudioCaptureThread extends Thread {
    private static final String TAG = "AudioCaptureThread";
    private boolean mStop = false;
    private boolean mStopped = false;
    private int mSamplingRate = -1;
    private IAudioConsumer mIAudioConsumer = null;

    // the thread receives high priority because it needs to do real time audio capture
    // THREAD_PRIORITY_URGENT_AUDIO = "Standard priority of the most important audio threads"
    public AudioCaptureThread(int iSamplingRate, IAudioConsumer IAudioConsumer) {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        mSamplingRate = iSamplingRate;
        mIAudioConsumer = IAudioConsumer;
    }

    // once the thread is started it runs nonstop until it is stopped from the outside
    @Override
    public void run() {
        File path = Activity.getContext.getExternalFilesDir(null);
        File file = new File(path, "whatstheweatherlike.wav");
        int length = (int) file.length();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] b = new byte[length];
        FileInputStream in = null;
        try {
            in = new FileInputStream(file);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        try {
            for (int readNum; (readNum = in.read(b)) != -1; ) {
                bos.write(b, 0, readNum);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        byte[] bytes = bos.toByteArray();
        mIAudioConsumer.consume(bytes);
    }
}
However, Activity.getContext is not recognized. I can convert the file to bytes in MainActivity, but how do I then write it to the WebSocket? Am I on the right track, or is this not the right way? If it is, how do I solve this problem?
Any help is appreciated!
Activity.getContext is not recognized because there's no reference to Activity, since it's just a Thread. You would have to pass in the Activity, although it would likely make more sense just to pass in the Context if you need it.
You've got the right idea that you can create a FileInputStream and use that. You might like to use our MicrophoneCaptureThread as a reference. It'd be a very similar situation, except you'd be using your FileInputStream instead of reading from the microphone. You can check it out (and an example project that uses it) here: https://github.com/watson-developer-cloud/android-sdk/blob/master/library/src/main/java/com/ibm/watson/developer_cloud/android/library/audio/MicrophoneCaptureThread.java
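For illustration, here is a minimal sketch of that suggestion, passing a Context into the thread instead of reaching for Activity statically (the class and file names are just placeholders, and IAudioConsumer is the same interface used in the question):

public class AudioFileCaptureThread extends Thread {
    private final Context mContext;
    private final IAudioConsumer mIAudioConsumer;

    public AudioFileCaptureThread(Context context, IAudioConsumer consumer) {
        // Keep the application context so the thread does not leak an Activity
        mContext = context.getApplicationContext();
        mIAudioConsumer = consumer;
    }

    @Override
    public void run() {
        File file = new File(mContext.getExternalFilesDir(null), "whatstheweatherlike.wav");
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        FileInputStream in = null;
        try {
            in = new FileInputStream(file);
            int readNum;
            while ((readNum = in.read(buf)) != -1) {
                bos.write(buf, 0, readNum);
            }
            // Hand the whole file to the consumer, exactly as in the question
            mIAudioConsumer.consume(bos.toByteArray());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
    }
}

From an Activity you would then start it with something like new AudioFileCaptureThread(this, myConsumer).start();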
Background
Suppose I have an InputStream, originating from the internet, of a certain image file.
I wish to get information about the image file and only then decode it.
This is useful for multiple purposes, such as downsampling, and also for previewing information before the image is shown.
The problem
I've tried to mark & reset the InputStream by wrapping it in a BufferedInputStream, but it didn't work:
inputStream = new BufferedInputStream(inputStream);
inputStream.mark(Integer.MAX_VALUE);
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeStream(inputStream, null, options);
// this works fine. I get the options filled just right.
inputStream.reset();
options.inJustDecodeBounds = false;
final Bitmap bitmap = BitmapFactory.decodeStream(inputStream, null, options);
// this returns null
To get the InputStream from a URL, I use:
public static InputStream getInputStreamFromInternet(final String urlString)
{
    try
    {
        final URL url = new URL(urlString);
        final HttpURLConnection urlConnection = (HttpURLConnection) url.openConnection();
        final InputStream in = urlConnection.getInputStream();
        return in;
    }
    catch (final Exception e)
    {
        e.printStackTrace();
    }
    return null;
}
The question
How can I make the code handle the marking and resetting?
It works perfectly with resources (in fact I didn't even have to create a new BufferedInputStream for this to work), but not with an InputStream from the internet...
EDIT:
It seems my code is just fine, sort of...
On some websites (like this one and this one), it fails to decode the image file even after resetting.
If you decode the bitmap (and use inSampleSize), it can decode it fine (it just takes a long time).
Now the question is why this happens, and how I can fix it.
I believe the problem is that the call to mark() with the large value is overwritten by a call to mark(1024). As described in the documentation:
Prior to KITKAT, if is.markSupported() returns true, is.mark(1024) would be called. As of KITKAT, this is no longer the case.
This may result in a reset() failure if reads larger than this value have been done.
(Here is a solution for the same problem, but when reading from disk. I didn't realize at first your question was specifically from a network stream.)
The problem with mark & reset in general here is that BitmapFactory.decodeStream() sometimes resets your marks. Thus resetting in order to do the actual read is broken.
But there is a second problem with BufferedInputStream: it can cause the entire image to be buffered in memory alongside wherever you are actually reading it into. Depending on your use case, this can really kill your performance. (Lots of allocation means lots of GC.)
There is a really great solution here:
https://stackoverflow.com/a/18665678/1366
I modified it slightly for this particular use case to solve the mark & reset problem:
public class MarkableFileInputStream extends FilterInputStream
{
    private static final String TAG = MarkableFileInputStream.class.getSimpleName();
    private FileChannel m_fileChannel;
    private long m_mark = -1;

    public MarkableFileInputStream( FileInputStream fis )
    {
        super( fis );
        m_fileChannel = fis.getChannel();
    }

    @Override
    public boolean markSupported()
    {
        return true;
    }

    @Override
    public synchronized void mark( int readlimit )
    {
        try
        {
            m_mark = m_fileChannel.position();
        }
        catch( IOException ex )
        {
            Log.d( TAG, "Mark failed" );
            m_mark = -1;
        }
    }

    @Override
    public synchronized void reset() throws IOException
    {
        // Reset to beginning if mark has not been called or was reset.
        // This is a little bit of custom functionality to solve problems
        // specific to Android's Bitmap decoding, and is slightly non-standard behavior.
        if( m_mark == -1 )
        {
            m_fileChannel.position( 0 );
        }
        else
        {
            m_fileChannel.position( m_mark );
            m_mark = -1;
        }
    }
}
This won't allocate any extra memory during reads, and can be reset even if the marks have been cleared.
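A minimal usage sketch, assuming the network stream has first been copied to a temporary file (the wrapper needs a FileInputStream underneath; tempFile is just a placeholder):

MarkableFileInputStream mis = new MarkableFileInputStream(new FileInputStream(tempFile));
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeStream(mis, null, options);   // first pass: read the bounds only
mis.reset();                                      // rewinds even if decodeStream cleared the mark
options.inJustDecodeBounds = false;
Bitmap bitmap = BitmapFactory.decodeStream(mis, null, options);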
Whether you can mark/reset a stream depends on the implementation of the stream. Those are optional operations and aren't typically supported. Your options are to read the stream into a buffer and then read from that buffer twice, or just make the network connection twice.
The easiest thing is probably to write into a ByteArrayOutputStream:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int count;
byte[] b = new byte[...]; // pick a buffer size
while ((count = input.read(b)) != -1) {
    baos.write(b, 0, count);
}
Now either use the result of baos.toByteArray() directly, or create a ByteArrayInputStream and use that repeatedly, calling reset() after consuming it each time.
ByteArrayInputStream bais = new ByteArrayInputStream(baos.toByteArray());
That might sound silly, but there's no magic. You either buffer the data in memory, or you read it twice from the source. If the stream did support mark/reset, it would have to do the same thing in its implementation.
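For this particular use case, the two decode passes would then look roughly like this (a sketch, reusing the bais built above):

BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;
BitmapFactory.decodeStream(bais, null, options);   // first pass: bounds only
bais.reset();                                      // rewind to the start of the byte array
options.inJustDecodeBounds = false;
Bitmap bitmap = BitmapFactory.decodeStream(bais, null, options);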
Here is a simple method that always works for me :)
private Bitmap downloadBitmap(String url) {
    Bitmap image = null;
    // initialize the default HTTP client object
    final DefaultHttpClient client = new DefaultHttpClient();
    // form an HttpGet request
    final HttpGet getRequest = new HttpGet(url);
    try {
        HttpResponse response = client.execute(getRequest);
        // check 200 OK for success
        final int statusCode = response.getStatusLine().getStatusCode();
        if (statusCode != HttpStatus.SC_OK) {
            Log.w("ImageDownloader", "Error " + statusCode +
                    " while retrieving bitmap from " + url);
            return null;
        }
        final HttpEntity entity = response.getEntity();
        if (entity != null) {
            InputStream inputStream = null;
            try {
                // getting contents from the stream
                inputStream = entity.getContent();
                // decoding stream data back into an image Bitmap that Android understands
                image = BitmapFactory.decodeStream(inputStream);
            } finally {
                if (inputStream != null) {
                    inputStream.close();
                }
                entity.consumeContent();
            }
        }
    } catch (Exception e) {
        // You could provide a more explicit error message for IOException
        getRequest.abort();
        Log.e("ImageDownloader", "Something went wrong while" +
                " retrieving bitmap from " + url + e.toString());
    }
    return image;
}
I am 100% sure the problem here is with the actual image. However I hope that the solution is some attribute of the image that will help others in the future.
The image:
The photo in question: http://soundwave.robotsidekick.com/mlsphotos.jpg
I have tried loading this image in several ways. I have downloaded it and tried loading it in an ImageView:
final ImageView v = (ImageView) findViewById(R.id.image);
v.setImageResource(R.drawable.photo);
v.invalidate();
I have tried loading it from a URL:
final String[] params = new String[] {
        "",
};

(new AsyncTask<String, Bitmap, Bitmap>()
{
    @Override
    protected Bitmap doInBackground(final String... params)
    {
        Bitmap ret = null;
        for (final String url : params)
        {
            try
            {
                Log.e(TAG, url);
                ret = BitmapFactory.decodeStream((new URL(url)).openStream());
                publishProgress(ret);
            }
            catch (final MalformedURLException e)
            {
                Log.e(TAG, "Malformed URL", e);
            }
            catch (final IOException e)
            {
                Log.e(TAG, "IO Exception", e);
            }
        }
        return ret;
    }

    @Override
    protected void onProgressUpdate(final Bitmap... values)
    {
        super.onProgressUpdate(values);
        for (final Bitmap result : values)
        {
            if (result != null)
            {
                final ImageView v = (ImageView) MainActivity.this.findViewById(R.id.image);
                v.setImageBitmap(result);
                v.invalidate();
            }
        }
    }
}).execute(params);
I have also tried loading the image in a WebView like this:
final WebView webview = (WebView) findViewById(R.id.webview);
webview.loadData("<html><body><img src=\"" + url + "\"></body></html>", "text/html", "utf-8");
webview.invalidate();
I have also tried loading the image in Browser (the app) and that does not work.
None of those work. HOWEVER, if I load the URL into Chrome on Android it works great (not in Browser), and if I load the image in my desktop browser (Chrome, Firefox, etc.) it loads fine. I have checked that the MIME type matches the extension and I am just at a loss.
EDIT
There is a workaround for images coming from an InputStream where the bitmap processing runs out of data on the stream before the stream completes. The workaround is documented in this question and this bug.
However, this is a corrupt image whose data ends prematurely. I know that means I am already down a broken track, but I am hoping for better error handling than Android handing me back null and leaving me with nothing. iOS, Chrome (on device and computer) as well as most other places seem to have much better error handling. Is there something I can do on Android to handle corrupt JPGs?
EDIT 2
There has to be a solution here because Chrome on my device handles this situation elegantly. However the closest I can come to fixing this is the following code:
final InputStream is = (new URL(url)).openStream();
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
final int size = 1024;
int len = -1;
byte[] buf = new byte[size];
while ((len = is.read(buf, 0, size)) != -1)
{
    bos.write(buf, 0, len);
}
buf = bos.toByteArray();
// buf is now filled with the corrupted bytes of the image
ret = BitmapFactory.decodeByteArray(buf, 0, buf.length);
// ret is null because it was a corrupt jpg
With that code I can check if there are bytes and the image wasn't decoded. Then at least I can tell I have a corrupt image (or not an image) and can report something slightly more useful to the user (like hey I have an image here with 16K but I sure don't know what to do with it).
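For example, the check described above could look something like this (just a sketch building on the snippet):

if (ret == null && buf.length > 0)
{
    // We downloaded data but could not decode an image from it:
    // most likely a corrupt (or non-image) file.
    Log.w(TAG, "Got " + buf.length + " bytes from " + url + " but could not decode an image");
}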
Anyone know how Chrome manages to decode as much of the image as they can before they hit the corruption?
I opened the file in Photoshop CS6 and it said that the file may be damaged, possibly truncated or incomplete. The file can be opened. If I save it in Photoshop without making any changes, it then works in Android. I'm afraid I don't know exactly what's wrong with the image though.
Here is the important bit about JPGs from Wikipedia, and here's a question that ultimately led me to the solution.
I just appended the two closing JPEG end-of-image bytes to the stream, in order to convince the decoder that the stream is done with image data. This method is flawed, because JPGs can have JPGs inside them, meaning that appending one set of end-of-image bytes doesn't guarantee that we closed all the images.
In both solutions below I assume is is an input stream for a JPG image. Also these two constants are defined:
private static final int JPEG_EOI_1 = 0xFF;
private static final int JPEG_EOI_2 = 0xD9;
With this method we read all the bytes into memory and then try to decode them:
final ByteArrayOutputStream bos = new ByteArrayOutputStream();
final int size = 1024;
int len = -1;
final byte[] buf = new byte[size];
try
{
    while ((len = is.read(buf, 0, size)) != -1)
    {
        bos.write(buf, 0, len);
    }
    bos.write(JPEG_EOI_1);
    bos.write(JPEG_EOI_2);
    final byte[] bytes = bos.toByteArray();
    return BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
}
catch (final IOException ex)
{
    return null;
}
catch (final Exception ex)
{
    return null;
}
This method creates a stream wrapper that makes sure the last two bytes are JPG end-of-image bytes:
return BitmapFactory.decodeStream(new JpegClosedInputStream(is));

// And here's the stream wrapper
class JpegClosedInputStream extends InputStream
{
    private final InputStream inputStream;
    private int bytesPastEnd;

    private JpegClosedInputStream(final InputStream iInputStream)
    {
        inputStream = iInputStream;
        bytesPastEnd = 0;
    }

    @Override
    public int read() throws IOException
    {
        int buffer = inputStream.read();
        if (buffer == -1)
        {
            // Once the wrapped stream is exhausted, emit the two JPEG
            // end-of-image bytes and only then report end of stream.
            if (bytesPastEnd == 0)
            {
                ++bytesPastEnd;
                buffer = JPEG_EOI_1;
            }
            else if (bytesPastEnd == 1)
            {
                ++bytesPastEnd;
                buffer = JPEG_EOI_2;
            }
        }
        return buffer;
    }
}
Do setContentView and show it from the XML, like this:
//photo.xml
<ImageView
    android:id="@+id/picture"
    android:layout_width="250dp"
    android:layout_height="250dp"
    android:layout_gravity="center"
    android:src="@drawable/photo" />
//Photo.java
setContentView(R.layout.photo);
I have a big problem parsing some JSON data which I get as a response from a web server. What I'm doing is getting the response via POST and then converting the response to a String and parsing it. But on some devices I get an OutOfMemoryError, which I'm trying to fix. Here is how I'm converting the response to a String:
public static String convertStreamToString(InputStream is) throws Exception {
    ByteArrayOutputStream into = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    for (int n; 0 < (n = is.read(buf)); ) {
        into.write(buf, 0, n);
    }
    into.close();
    return new String(into.toByteArray(), "UTF-8");
}
And here is how I'm using this piece of code:
InputStream response = new BufferedInputStream(connection.getInputStream());
try {
    String responsee = convertStreamToString(response);
    jsonParser(responsee);
} catch (Exception e) {
    e.printStackTrace();
    cancelDialog("Error occurred! Please try again later.");
}
Any suggestions on how I can fix this problem so that it doesn't happen on any device?
Thanks in advance for any kind of help or advice.
A mobile device has limited internal memory.
I have also faced the same issue. The solution we found was to download only the necessary information. So please confirm your requirements: how much data do you really need on the device? If you filter out the unnecessary data, the problem should be resolved.
Before testing the program under extreme conditions, first check whether a simple download works at all. If it does, then check the limit of your data, i.e. up to what size it does not give an out-of-memory error, and rework your requirements accordingly.