My app needs to store MP4 files on a server and be able to quickly grab them and stream them via MediaPlayer. I have been using AWS S3 to store the recordings from my app, but when I grab them to stream through MediaPlayer, playback is sometimes fast but often slow. I was just hoping for a little guidance on the best approach to this. The videos need to stream back quickly, but S3 seems a bit slow and can be costly. What are the alternatives or best solutions?
S3's key selling point is high availability storage, rather than speed of access, especially if you need to access the content in many different geographic locations.
For reliable low-latency distribution of video (i.e. minimal stops for buffering) you want a Content Delivery Network (CDN). In very simple terms, a CDN creates cached copies of your content at the edge of the network so it can be accessed quickly.
Amazon's CDN solution is CloudFront, and it is designed to integrate with content stored on S3. The link below gives a good walkthrough of setting up CloudFront for some S3 content. Note that it does have a cost, so you will need to check it fits your budget (other CDNs are available; they are all similar in concept):
http://www.shootingbusiness.com/amazon-video-streaming-slow/
If your needs are small and localised, and you can test and confirm that performance is OK, you may be fine simply hosting the video on EC2/EBS, with a backup on S3 in case of issues. Again, you would probably need to run the different scenarios through the Amazon price calculator to decide the best approach for your needs.
I have never seen performance issues with this EC2/EBS approach for a small user base, certainly for users in the same general geographic area as the AWS availability zone, but it does not necessarily scale well, especially with a more distributed user base.
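For what it's worth, the client side looks the same whichever backend you pick; only the host in the URL changes. Here is a minimal sketch of streaming through MediaPlayer from a CloudFront distribution, assuming a hypothetical domain and file path:

```java
// Minimal sketch: streaming an MP4 through MediaPlayer from a CloudFront edge.
// The distribution domain and path below are hypothetical placeholders.
import android.media.AudioManager;
import android.media.MediaPlayer;

public class StreamPlayer {
    public static MediaPlayer play() throws Exception {
        MediaPlayer player = new MediaPlayer();
        player.setAudioStreamType(AudioManager.STREAM_MUSIC);
        // CloudFront serves the S3 object from an edge location near the user.
        player.setDataSource("https://d1234example.cloudfront.net/videos/clip.mp4");
        player.setOnPreparedListener(mp -> mp.start()); // start once enough is buffered
        player.prepareAsync(); // buffer in the background instead of blocking the UI thread
        return player;
    }
}
```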
Related
I am working on a video recording and sharing application for Android. The specifications of the app are as follows:
Recording a 10 second (maximum) video from inside the app (not using the device's camera app)
No further editing on the video
Storing the video in a Firebase Cloud Storage (GCS) bucket
Downloading and playing of the said video by other users
From the research I did on SO and other sources, I have found the following (please correct me if I am wrong):
The three options and their respective features are:
1. FFmpeg
Capable of achieving the above goal and has extensive answers and explanations on sites like SO, however
Increases the APK size by 20-30 MB (large library)
Runs the risk of not working properly on certain 64-bit devices
2. MediaRecorder
Reliable and supported by most devices
Stores files in .mp4 container format (with H.264 as the typical video codec)
Easier for playback (the output is directly playable, no extra container work needed)
Adds the MP4 and 3GP headers
Increases latency according to this question
3. MediaCodec
Low level
Will require MediaCodec, MediaMuxer, and MediaExtractor
Outputs a raw H.264 stream (not directly playable unless wrapped in a container with MediaMuxer)
Good for video manipulations (though, not required in my use case)
Requires Android 4.3 (API 18) for MediaMuxer (MediaCodec itself is available from API 16)
More difficult to implement and code (my opinion - please correct me if I am wrong)
Lack of extensive information, tutorials, answers, or samples (Bigflake.com being the only exception)
After spending days on this, I still can't figure out which approach suits my particular use case. Please elaborate on what I should do for my application. If there's a completely different approach, then I am open to that as well.
My biggest criteria are that the video encoding process be as efficient as possible and that the video stored in the cloud use the lowest possible space without compromising video quality.
Also, I'd be grateful if you could suggest the appropriate format for saving and distributing the video in Firebase Storage, and point me to tutorials or samples of your suggested approach.
Thank you in advance! And sorry for the long read.
Your overview of the topic is accurate and to the point.
I'll just add my 2 cents on a few points you might have missed:
1. FFmpeg
+/-If you build your own .so you can reduce the size to about 2-3 MB, depending on the use case. Editing a 6000-line build script takes time and effort, though
++Supports wide range of formats (almost everything)
++Results are the same for every device
++Any resolution supported
--High energy consumption due to SW en-/decoding, which also makes it slow. There is a plugin to support libstagefright, but it doesn't work on many devices (as of May 2016)
--Licensing can be problematic depending on your location and use-case. I'm not a lawyer, but we had legal consulting on this topic and it's quite complex.
2. MediaRecorder
++Easiest to implement (simplified access to MediaCodec/libstagefright). Raw data gets passed to the encoder directly, so no messing around there
++HW-accelerated on most devices, which makes it fast and energy-efficient
++Delay only applies to live streaming
--Dependent on implementation of HW-manufacturers
--Results may vary from device to device
++No licensing problems
3.MediaCodec
+/-Most of 2.MediaRecorder applies to this as well (apart from ease of use)
++Most flexible access to HW-en-/decoding
--Hard to use for cases that were not thought of (e.g. mixing videos from different sources)
+/-Delay for streaming can be eliminated (is tricky though)
--HW manufacturers sometimes don't implement things correctly (e.g. the Samsung Galaxy S5 sometimes produces a SIGSEGV if live data from some DSLRs is fed to the encoder. It works fine for a while, then all of a sudden it's a SIGSEGV. This might be the DSLR's fault, but the crash is unavoidable and takes the app down, which in the end is the app developer's fault ;) )
--If used without MediaMuxer, you need either a good understanding of media containers or to rely on 3rd-party libraries
The list is obviously not complete and some points might not be correct. The last time I worked with video was almost half a year ago.
As for your use case, I would recommend MediaRecorder, since it is the easiest to implement, is supported on all devices, and offers a good range of quality/size options. FFmpeg produces better results for the same storage size, but takes longer (in one extreme case with DSLR live footage, the hardware path encoded 30 times faster) and consumes more energy.
As far as I understand your use case, there is no need to fiddle around with MediaCodec, since you only want to encode and decode.
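To make that concrete, here is a minimal MediaRecorder sketch for your 10-second clips. It assumes the Camera and preview Surface are already set up; the bitrate and resolution values are placeholders you would tune for your quality/size trade-off:

```java
// Minimal sketch: recording a 10-second-max H.264/MP4 clip with MediaRecorder.
// Assumes the Camera is open and a preview Surface exists; values are placeholders.
import android.media.MediaRecorder;

public class ClipRecorder {
    public static MediaRecorder startClip(android.hardware.Camera camera,
                                          android.view.Surface previewSurface,
                                          String outputPath) throws Exception {
        camera.unlock(); // hand the camera over to MediaRecorder
        MediaRecorder recorder = new MediaRecorder();
        recorder.setCamera(camera);
        recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
        recorder.setVideoEncodingBitRate(1_000_000); // tune for quality vs. upload size
        recorder.setVideoSize(640, 480);             // placeholder resolution
        recorder.setMaxDuration(10_000);             // enforce the 10-second cap
        recorder.setOutputFile(outputPath);
        recorder.setPreviewDisplay(previewSurface);
        recorder.prepare();
        recorder.start();
        return recorder;
    }
}
```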
I suggest using VP8 or VP9, since you won't run into licensing problems. Again, I'm no lawyer, but distributing H.264 from your own server might make you a broadcasting station, or so I was told.
Hope this helps you in your decision making
We are developing a synchronous multiplayer game. As it stands, one of the players is selected as the server instead of connecting the clients to a dedicated server.
With the restricted environment of mobile apps, should we still be worried about cheating (from the player running the server), or is this a non-issue in the mobile space? Are there any other major concerns we should look out for if we decide to stick with players hosting the game?
All of the below is about Android. iOS is more secure, but the server load issue still applies there too.
If you store game data on the SD card, any app can access that data. You could encrypt it, but it would still be a liability (like the Whatsapp hack here: techcrunch.com/2014/03/12/hole-in-whatsapp-for-android-lets-hackers-steal-your-conversations/)
If someone were to implement a low-level interception / modification of your game server network traffic, this could also be a problem. (http://www.justbeck.com/modifying-data-in-transit-to-android-apps-using-burp-and-backtrack-5/)
If you are using a Service, make sure it's a local service so it's only accessible from your app.
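To illustrate the local-service point, here is a minimal sketch of a bound service that never leaves your process (GameHostService is a hypothetical name; you would also set android:exported="false" on it in the manifest):

```java
// Minimal sketch of a local-only bound service. GameHostService is a hypothetical
// name; mark it android:exported="false" in the manifest so other apps cannot bind.
import android.app.Service;
import android.content.Intent;
import android.os.Binder;
import android.os.IBinder;

public class GameHostService extends Service {
    private final IBinder binder = new LocalBinder();

    public class LocalBinder extends Binder {
        public GameHostService getService() { return GameHostService.this; }
    }

    @Override
    public IBinder onBind(Intent intent) {
        return binder; // only callers inside this process receive this binder
    }
}
```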
Also, the "restricted" aspect of Android systems can be easily removed by rooting the device.
Another thing to consider is network and CPU load. Both could grow quickly, making the server laggy or even crashing it, given the relatively low capacity of Android devices compared to dedicated servers. Of course, this depends on the amount of work the server has to do per client.
In general, dedicated servers are a good idea, even for Android games I think.
I'd look into this from two different point of views:
Cost/benefit: bear in mind that a dedicated server will impact your budget, so ask yourself whether cheating is really a concern or not. I'd treat the mobile space like any other space.
Game quality: while #1 is your point of view, this is your players' point of view. Are they going to feel something is wrong and suspect cheating? Maybe. You could address this with a reputation system for the player hosting the server.
I'm currently working on an app with the end goal of being roughly analogous to an Android version of Air Play for the iDevices.
Streaming media and all that is easy enough, but I'd like to be able to include games as well. The problem with that is that to do so I'd have to stream the screen.
I've looked around at various things about taking screenshots (this question and the derivatives from it in particular), but I'm concerned about the frequency/latency. When gaming, anything less than 15-20 fps simply isn't going to cut it, and I'm not certain such is possible with the methods I've seen so far.
Does anyone know if such a thing is plausible, and if so what it would take?
Edit: To make it more clear, I'm basically trying to create a more limited form of "remote desktop" for Android. Essentially, capture what the device is currently doing (movie, game, whatever) and replicate it on another device.
My initial thoughts are to simply grab the audio buffer and the frame buffer and pass them through a socket to the other device, but I'm concerned that the methods I've seen for capturing the frame buffer are too slow for the intended use. I've seen people throwing around comments of 3 FPS limits and whatnot on some of the more common ways of accessing the frame buffer.
What I'm looking for is a way to get at the buffer without those limitations.
I am not sure what you are trying to accomplish when you refer to "streaming" a video game.
But if you are trying to mimic AirPlay, all you need to do is connect to a device via a Bluetooth/internet connection and allow sound. Then save the results or handle them accordingly.
But video games do not "stream" a screen, because the mobile device cannot handle much of a workload. There are other problems too: how will you handle the game if the person loses their internet connection while playing? On top of that, this would require a lot of servers and bandwidth to support the game workload on the backend.
But if you are trying to create an online game, essentially all you need to do is send and receive messages from a server. That is simple. If you want to "stream" to another device, simply connect the mobile device to speakers or a TV. Just about all mobile video games and applications send simple messages via JSON or something similar. This reduces overhead, uses simple syntax, and works across multiple platforms.
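As a hypothetical sketch of that "simple messages" idea, you could send one newline-delimited JSON object per game event over a plain TCP socket (all field names below are made up for illustration):

```java
// Hypothetical sketch: one newline-delimited JSON message per game event over TCP.
// Field names ("type", "player", "x", "y") are invented for illustration.
import java.io.PrintWriter;
import java.net.Socket;
import org.json.JSONObject;

public class EventSender {
    public static void sendMove(Socket socket, int playerId, int x, int y) throws Exception {
        JSONObject msg = new JSONObject();
        msg.put("type", "move");
        msg.put("player", playerId);
        msg.put("x", x);
        msg.put("y", y);
        // Autoflush so each message is sent immediately; the socket stays open
        // for the next event, owned and closed by the caller.
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        out.println(msg.toString()); // the server reads one JSON object per line
    }
}
```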
It sounds like you should take a look at this (repost):
https://stackoverflow.com/questions/2885533/where-to-start-game-programming-for-android
If not, this is more of an open question about how to implement a video game.
I have some experience putting small recorded clips in an app, storing them and replaying them with a simple button press.
Now I have the requirement to bundle some videos with the app so they are available at installation time. What would be the best way to store longer video material in an iPhone/Android multimedia app?
The material should be available offline. So far I just have some basic question:
Which target conversion format and settings would be the most appropriate for iPhone/Android?
(Is there a considerable difference between phone/tablet device resolutions, or can all practically be served with one single format?)
Is there some experience how many minutes of "good" (not HD) video can be stored in total within such an app?
Do you know of any tutorial/sample app one could use as a starting point for ideas and/or coding ideally with some kind of innovative "sexy" features, such as overlay of touchable areas?
Many thanks!
I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage.
Looking over the Android API, it looks like there are calls in it to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in before they get fed into a movie (plus, the images come way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want it to do (at least without touching the NDK).
Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?
Is there any way I can use the API to do something like this?

No.

Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?
It is possible you can find a Java library that lets you assemble movies out of stills and annotations, but I would be rather surprised if it met your needs, would run on Android, and would run acceptably on mobile phone hardware.
IMHO, the best route is to use a web service. Use the device for data collection, and use the server to do all the heavy lifting of assembling the movie out of the parts.
If you have to do it on-device, the NDK seems like the only practical route.
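As a sketch of that web-service route, the device side can be as simple as one HTTP POST per still frame, with the server assembling the movie. The endpoint URL and query parameter below are invented for illustration:

```java
// Hypothetical sketch: uploading one still frame to a server that assembles the
// movie. The endpoint URL and the "index" parameter are made up for illustration.
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class FrameUploader {
    public static int uploadFrame(String jpegPath, int frameIndex) throws Exception {
        URL url = new URL("https://example.com/assemble/frames?index=" + frameIndex);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "image/jpeg");
        try (FileInputStream in = new FileInputStream(jpegPath);
             OutputStream out = conn.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read); // stream the frame without loading it all
            }
        }
        int status = conn.getResponseCode(); // e.g. 200 when the frame was accepted
        conn.disconnect();
        return status;
    }
}
```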
Do you just want to create movie files or do you want to display them on the phone?
If you just want to display the post-processed annotated images as a movie, then it's possible. What is the format of your images? Currently, I'm able to display MJPEG video on a Nexus One (running 2.1) without any noticeable lag and without using the NDK; in my case the images are coming from the network.
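A minimal sketch of that no-NDK approach, assuming each frame arrives as a complete JPEG byte array (e.g. parsed out of an MJPEG network stream; the class and method names are placeholders):

```java
// Minimal sketch: displaying a stream of JPEG frames (MJPEG-style) without the NDK.
// Assumes each frame is already a complete JPEG byte array from the network.
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.widget.ImageView;

public class MjpegRenderer {
    private final ImageView view;

    public MjpegRenderer(ImageView view) { this.view = view; }

    // Call on the UI thread for each received frame.
    public void onFrame(byte[] jpegBytes) {
        Bitmap frame = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
        if (frame != null) {
            view.setImageBitmap(frame); // replace the previous frame
        }
    }
}
```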
On the other hand, if you just want to create movie files and store them on the phone or some other place, then CommonsWare's idea of "delegating" this to a server makes more sense, since you will have more processing power and storage on the server. This will require that you have access to a network and don't mind sending all the images from the phone to the server and then downloading the movie file back to the phone.