For an Android app in development, MP3 download times from a 3rd-party server are up to 200% slower than downloading the same MP3 (from the same server) using several other existing apps. There is no processing done on the MP3 file during or after the download; it's just a straightforward download. Testing was done on a Samsung Galaxy device running Android 4.0 with 1 GB of RAM.
Aside from using the wrong buffer size (which, as I understand it from other StackOverflow questions, determines how often data is written from memory out to storage), what are the biggest red flags we should look for, or the biggest mistakes we are likely making, that could cause our downloads to be so much slower than other apps?
Thank you for any help!
For our application, we found that optimizing the buffer size was sufficient to bring our MP3 download speed up to (and, in some cases, above) that of other existing apps, testing the download time for the same MP3 file from the same 3rd-party server on several different WiFi networks.
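In case it's useful, here is a minimal sketch of one way to structure such a buffered download; the loop shape, URL, file name, and the 8 KB figure are illustrative assumptions (not our exact code), and the best buffer size is worth benchmarking on real devices:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Sketch: download a file reading in large chunks instead of
    // byte-by-byte. The URL, file name, and 8 KB buffer size are
    // hypothetical; benchmark the buffer size on real devices.
    public static void download(File destDir) throws IOException {
        URL url = new URL("http://example.com/track.mp3"); // placeholder
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        InputStream in = conn.getInputStream();
        OutputStream out = new FileOutputStream(new File(destDir, "track.mp3"));
        try {
            byte[] buffer = new byte[8 * 1024]; // tune: 1 KB vs 8 KB vs 64 KB
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        } finally {
            out.close();
            in.close();
            conn.disconnect();
        }
    }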
I hope this helps other developers should they ever confront this same issue. CHEERS!
I am working on a video recording and sharing application for Android. The specifications of the app are as follows:
Recording a 10 second (maximum) video from inside the app (not using the device's camera app)
No further editing on the video
Storing the video in a Firebase Cloud Storage (GCS) bucket
Downloading and playing of the said video by other users
From the research I did on SO and other sources, I have found the following (please correct me if I am wrong).
The three options and their respective features are:
1. FFmpeg
Capable of achieving the above goal and has extensive answers and explanations on sites like SO, however
Increases the APK size by 20-30 MB (large library)
Runs the risk of not working properly on certain 64-bit devices
2. MediaRecorder
Reliable and supported by most devices
Will store files in .mp4 format (unless converted to h264)
Easier for playback (no decoding needed)
Adds the mp4 and 3gp headers
Increases latency according to this question
3. MediaCodec
Low level
Will require MediaCodec, MediaMuxer, and MediaExtractor
Output in H.264 (without using MediaMuxer for playback)
Good for video manipulations (though, not required in my use case)
Not supported by pre 4.3 (API 18) devices
More difficult to implement and code (my opinion - please correct me if I am wrong)
Lack of extensive information, tutorials, answers, or samples (Bigflake.com being the only exception)
After spending days on this, I still can't figure out which approach suits my particular use case. Please elaborate on what I should do for my application. If there's a completely different approach, then I am open to that as well.
My biggest criteria are that the video encoding process be as efficient as possible and that the video stored in the cloud use the least possible space without compromising video quality.
Also, I'd be grateful if you could suggest the appropriate format for saving and distributing the video in Firebase Storage, and point me to tutorials or samples of your suggested approach.
Thank you in advance! And sorry for the long read.
Your overview of this topic is accurate and to the point.
I'll just add my two cents on a few points you might have missed:
1. FFmpeg
+/-If you build your own .so you can reduce the size down to about 2-3 MB, depending on the use-case of course. Editing a 6000-line build script takes time and effort, though
++Supports wide range of formats (almost everything)
++Results are the same for every device
++Any resolution supported
--High energy consumption due to software en-/decoding, which also makes it slow. There is a plugin to support libstagefright, but it doesn't work on many devices (as of May 2016)
--Licensing can be problematic depending on your location and use-case. I'm not a lawyer, but we had legal consulting on this topic and it's quite complex.
2. MediaRecorder
++Easiest to implement (simplified access to MediaCodec/libstagefright). Raw data gets passed to the encoder directly, so no messing around there
++HW-accelerated on most devices, which makes it fast and energy-efficient
++Delay only applies to live streaming
--Dependent on the HW manufacturer's implementation
--Results may vary from device to device
++No licensing problems
3. MediaCodec
+/-Most of the points for 2. MediaRecorder apply here as well (apart from ease of use)
++Most flexible access to HW-en-/decoding
--Hard to use for cases that were not thought of (e.g. mixing videos from different sources)
+/-Delay for streaming can be eliminated (it's tricky, though)
--HW manufacturers sometimes don't implement things correctly (e.g. the Samsung Galaxy S5 sometimes produces a SIGSEGV if live data from some DSLRs is fed to the encoder: it works fine for a while, then all of a sudden it's a SIGSEGV. This might be the DSLR's fault, but the crash is unavoidable and takes down the app, which in the end is the app developer's fault ;) )
--If used without MediaMuxer, you need either a good understanding of media containers or to rely on 3rd-party libraries
The list is obviously not complete and some points might not be correct. The last time I worked with video was almost half a year ago.
As for your use-case, I would recommend MediaRecorder, since it is the easiest to implement, supported on all devices, and offers a good range of quality/size options. FFmpeg produces better results for the same storage size, but takes longer (in one extreme case, the hardware encoder handled DSLR live footage 30 times faster) and consumes more energy.
As far as I understand your use-case, there is no need to fiddle around with MediaCodec since you want to encode and decode only.
I suggest using VP8 or VP9, since you won't run into licensing problems. Again, I'm no lawyer, but distributing H.264 from your own server might make you a broadcasting station, so I was told.
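As a rough illustration of how little code the MediaRecorder route needs, here's a minimal sketch for a 10-second clip; it assumes API 21+ for the WEBM/VP8 constants, and the resolution, bitrate, and camera/preview setup are placeholders you'd tune and wire up for your app:

    import android.media.MediaRecorder;
    import java.io.File;
    import java.io.IOException;

    // Sketch: record a 10-second (max) VP8/WebM clip with MediaRecorder.
    // Assumes API 21+ (WEBM/VP8 constants); resolution and bitrate are
    // placeholders. A real app would also set an (unlocked) Camera and
    // a preview surface before prepare().
    public static MediaRecorder startClip(File outputFile) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.WEBM);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.VP8);
        recorder.setVideoSize(1280, 720);          // quality vs. size trade-off
        recorder.setVideoEncodingBitRate(2000000); // ~2 Mbit/s, placeholder
        recorder.setMaxDuration(10000);            // your 10-second cap
        recorder.setOutputFile(outputFile.getAbsolutePath());
        recorder.prepare();
        recorder.start();
        return recorder;                           // stop() and release() later
    }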
Hope this helps you in your decision making.
This may be an already answered question, but I guess I'm still a bit confused. I've read most of the questions related to storage, I understand the pros and cons of each solution, I want to use the internal storage, but I still feel stuck because of the space constraints.
I've got a working app that can save up to 200 images, which can be over 100 KB each (so more or less 20 MB of data). The app will also be restricted to the new generation of phones (Samsung Galaxy S3/S4, iPhone 5s) because of camera requirements.
I'd like to save the images in internal storage because I don't want users messing about with them, and I'm guessing that should be fine, but I know there can be quite restrictive limits to the allowed memory. Is 20 MB too much? Where can I find out how much internal storage an app is allocated?
Here I found that the storage is no longer physically distinct (external vs. internal), which raises the question: how is it allocated?
A little insight would be great! Thanks
I know there can be quite restrictive limits to the allowed memory
Not since Android 3.0, for most devices. Android 3.0+ has internal and external storage sharing one partition by default, so if a device is advertised as having 8GB of space, that is available to both internal and external storage. You will find the occasional oddball device that has separate internal and external storage, but they are the exception, not the rule, for Android 3.0+.
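If you want to check at runtime how much of that shared partition is actually available to your app, a small sketch using the standard java.io.File APIs (API 9+) would be:

    import android.content.Context;
    import android.util.Log;
    import java.io.File;

    // Sketch: query at runtime how much space the app can actually use.
    // getFilesDir() is the app's private internal directory;
    // getUsableSpace() and getTotalSpace() are standard java.io.File
    // methods available since API 9.
    public static void logInternalSpace(Context context) {
        File internalDir = context.getFilesDir();
        long freeBytes = internalDir.getUsableSpace();
        long totalBytes = internalDir.getTotalSpace();
        Log.d("Storage", "internal: " + freeBytes + " free of " + totalBytes + " bytes");
    }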
I'm curious if someone can point me to (or just describe) how, given that an Android application has an extremely limited memory space to play with (I think it's around 20 megs), a video player can load and play videos that are an order of magnitude or more larger in size. Is the app loading the video in some sort of chunks?
I ask because I have an app with some video assets embedded; the app has grown to about 80 megs and is a total monster to compile and debug (without the assets it would be only about 2 megs). I was thinking I should remove the assets and have them download separately and sit on the sdcard rather than ship inside the apk, but I'm worried that loading and playing them at runtime will bust through the app's memory allotment, and I was hoping someone could shed some light on my options.
TIA
They buffer the video in manageable chunks, yes. Even if your videos are GBs in size, you won't hit a memory wall using the standard video-playing APIs. You can use the standard calls to setVideoURI or setVideoPath to point at the file, and it will handle things from there. The same goes for MediaPlayer in general if you're not using a VideoView.
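For instance, a minimal sketch of the VideoView route (the view ID and file path are placeholders):

    import android.widget.VideoView;

    // Sketch, inside an Activity after setContentView(...). The framework
    // streams the file in chunks, so the whole video never sits in the
    // app's heap. R.id.video_view and the path are placeholders.
    VideoView videoView = (VideoView) findViewById(R.id.video_view);
    videoView.setVideoPath("/sdcard/Movies/big_video.mp4");
    videoView.start();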
Downloading the videos outside the apk is still a good idea, though. If I had to download a new 80MB file for each incremental upgrade, I'd probably just uninstall instead. If you don't want to host them yourself, look into the supplemental apk option.
You can take your game into the native layer, where you can bypass this limit. Take a look at a blog post I wrote: http://jnispot.blogspot.com/2012/10/jnindk-arrays-bitmaps-and-games-oh-my.html
I have no idea if this is what video apps do, but you can actually increase the memory beyond the normal limit by using malloc statements in native code. This bypasses the normal max of around 20 MB.
I have only used this for encryption on Android devices, where AES encryption needs rather large amounts of memory when dealing with large encrypted files (even files around 10 MB).
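For what it's worth, the Java side of that approach is just a thin JNI wrapper; here is a minimal sketch where the library name and method signatures are hypothetical, and the C side would do the actual malloc()/free():

    // Sketch: the Java side of a JNI wrapper around native allocation.
    // The library name and native methods are hypothetical; the C side
    // would implement them with malloc()/free(), outside the managed heap.
    public class NativeBuffer {
        static {
            System.loadLibrary("nativebuffer"); // hypothetical libnativebuffer.so
        }
        public static native long alloc(int sizeBytes); // returns native pointer
        public static native void free(long ptr);       // releases it again
    }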
The title says most of it. I believe packaging the basic data set into the app will result in a better user experience, rather than having people download files before they can start using the app; this is where one can start losing users. At the same time, 20MB is considered kind of a lot for Android, so I wonder if this will cause issues for some users.
I am not sure if this will cause an issue. I am an Android developer who uses an Android phone, and the Facebook app on my phone is almost 21MB. It does not cause any issue. However, as a developer, a better approach would be to build an app that does not exceed 10MB (unless your app is outstanding, like Facebook). You can do this by using smaller images and making sure you do not ship any resources you are not using (classes, layouts, etc.).
The size never causes an issue by itself, but you may want to consider more:
I am an Android developer and a long-time Android user too. Not all Android phones have high-end processors that run apps fast.
A lot of Android phones have internal storage of only 100-250MB, and old versions of Android don't allow the user to install apps to the SD card, so users may hesitate to install your app.
Unless the extra size is necessary, try to reduce the app size.
In my personal experience, if you are designing something astonishing that costs even a few hundred MB on my phone, I really wouldn't mind giving it a try. New phones, faster processors, and higher storage capacities keep appearing in consumers' hands, so how can we expect applications to stay the same (tiny) size? Let them grow (but not without a valid reason) and people will still try/buy them. There are no fixed rules or guidelines limiting app size, but a simple proportionality sums it up:
High-end graphics and feature-rich application ∝ Extra size/memory
What I think is:
The size of the app never creates an issue by itself; if it's an extraordinary app, then users will surely be attracted and download it.
But on the other side, think about the internal memory of the phone. There are lots of phones available with very low internal memory (many have 150 or 180MB). Because of too little internal memory, those users won't be able to use your application, and hence you may not get big traffic.
You've got a lot of answers here so I'm just going to give you my perspective.
I would be frustrated, to say the least, if I downloaded a 10MB app and then opened it to find I needed to download another 10MB of necessary materials. Just make the app 20MB so I know what I'm getting into when I start the download.
Only put the bare essentials into the app if it's going to be that big. Don't require users to download high-res images, language packs, etc.; just publish the bare minimum your app requires to run if it's going to be larger than 10MB. You could even publish two versions of your app, the bare minimum at 7MB or the HOLY SH*T package at 20MB; at least users would have a choice when they go to download your app.
Spend some time looking up common practices for saving space when building an app; every little bit counts, and if you can ship the same app and save 5MB, your users will appreciate it. If it comes down to a lot of images, consider using this tool: http://www.getpaint.net. However, I would suggest reducing the JPEG quality rather than compressing the files; JPEGs aren't very squishy.
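If you also generate or post-process images at runtime, the same idea applies in code; a small sketch, where the quality value of 60 is a placeholder to experiment with:

    import android.graphics.Bitmap;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Sketch: shrink an image by lowering JPEG quality at save time.
    // The quality value (60) is a placeholder; trade it off against
    // how the images look in your app.
    public static void saveSmaller(Bitmap bitmap, String path) throws IOException {
        FileOutputStream out = new FileOutputStream(path);
        try {
            bitmap.compress(Bitmap.CompressFormat.JPEG, 60, out);
        } finally {
            out.close();
        }
    }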
Going along with #3, think about universally accepted methods of communication: a sideways triangle for a play button, an X for a delete button. Be sneaky... save space. Users love that crap :]
So anybody worth their salt in the Android development community knows about issue 3434, relating to low-latency audio in Android. For those who don't, you can educate yourself here: http://code.google.com/p/android/issues/detail?id=3434
I'm looking for any sort of temporary workaround for my personal project. I've heard tell of exposing private interfaces to the NDK by rolling your own build of Android and modifying the NDK.
All I need is a way to access the low-level ALSA drivers, which are already packaged with the standard 2.2 build. I'd like the ability to send PCM directly to the audio hardware on my device. I don't care that the resulting app won't be distributable through the marketplace and likely won't run on any device other than mine.
Anybody have any useful ideas?
-Griff
EDIT: I should mention that I know AudioTrack provides this functionality, but I'd like much lower latency -- AudioTrack sits around 300ms; I'd like somewhere around 20-30ms.
Griff, that's just the problem: the NDK will not improve the known latency issue (that's even documented). The hardware abstraction layer in native code currently adds to the latency, so it's not just about access to the low-level drivers (by the way, you shouldn't rely on ALSA drivers being there anyway).
Android: sound API (deterministic, low latency) covers the tradeoffs pretty well. TL;DR: NDK gives you a minor benefit because the threads can run at higher priority, but this benefit is meaningless pre-Jellybean because the entire audio system is tuned for Java.
The Galaxy Nexus running 4.1 can get fairly close to 30ms of output latency.
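For reference, here is a minimal sketch of the standard Java-side AudioTrack setup with the smallest buffer the API reports; note this only minimizes app-side buffering and won't get you below the platform's own latency floor (the sample rate and format are placeholders):

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    // Sketch: stream raw PCM with the smallest buffer AudioTrack reports.
    // This only minimizes app-side buffering; the platform pipeline still
    // sets the latency floor. 44.1 kHz mono 16-bit PCM is a placeholder.
    int sampleRate = 44100;
    int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBuf, AudioTrack.MODE_STREAM);
    track.play();
    short[] pcm = new short[minBuf / 2]; // fill with your PCM samples
    track.write(pcm, 0, pcm.length);     // blocking write of one buffer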