Encode photos into (some) video format in Android

In my app I need to convert photos (taken by the user) into some video format that is playable on both an Android device and a PC web browser.
My first and obvious choice was GIF. I managed to get it working using this encoder: https://github.com/nbadal/android-gif-encoder but the result was of very poor quality. What I didn't know was that GIF is a terrible format - only 256 colors per frame and virtually no compression - so for my purposes it's useless.
I know it is possible to use ffmpeg for this, but I have no experience with NDK (and I've used C only at the university).
Are there any other options worth exploring?
EDIT: it needs to work on ICS (minSdkVersion=15)

You can use MediaMuxer (on Android 4.3 and above) to convert your images to mp4.
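A minimal sketch of that route, assuming an API 21+ device (for MediaCodec.getInputBuffer()) and frames that have already been converted to a YUV420 layout the device's encoder accepts; note that MediaMuxer itself requires API 18, so it cannot cover the ICS requirement above. The codec choice, bitrate and overall structure here are assumptions, not a drop-in implementation:

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;
import java.util.List;

public class ImagesToMp4 {
    public static void encode(List<byte[]> yuvFrames, int width, int height,
                              int fps, String outPath) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, fps);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();

        MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        int trackIndex = -1;
        boolean muxerStarted = false;
        int frameIndex = 0;
        boolean inputDone = false;
        boolean outputDone = false;

        while (!outputDone) {
            // Feed one frame (or the end-of-stream marker) whenever an input buffer is free.
            if (!inputDone) {
                int inIndex = encoder.dequeueInputBuffer(10_000);
                if (inIndex >= 0) {
                    long ptsUs = 1_000_000L * frameIndex / fps; // timestamp derived from fps
                    if (frameIndex == yuvFrames.size()) {
                        encoder.queueInputBuffer(inIndex, 0, 0, ptsUs,
                                MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                        inputDone = true;
                    } else {
                        byte[] frame = yuvFrames.get(frameIndex);
                        ByteBuffer in = encoder.getInputBuffer(inIndex);
                        in.clear();
                        in.put(frame);
                        encoder.queueInputBuffer(inIndex, 0, frame.length, ptsUs, 0);
                        frameIndex++;
                    }
                }
            }
            // Drain the encoder and hand encoded samples to the muxer.
            int outIndex = encoder.dequeueOutputBuffer(info, 10_000);
            if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                trackIndex = muxer.addTrack(encoder.getOutputFormat());
                muxer.start();
                muxerStarted = true;
            } else if (outIndex >= 0) {
                ByteBuffer out = encoder.getOutputBuffer(outIndex);
                if (muxerStarted && info.size > 0
                        && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                    muxer.writeSampleData(trackIndex, out, info);
                }
                outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
                encoder.releaseOutputBuffer(outIndex, false);
            }
            // Other negative return values (e.g. INFO_TRY_AGAIN_LATER) simply loop again.
        }
        encoder.stop();
        encoder.release();
        muxer.stop();
        muxer.release();
    }
}
```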

Related

How to use Qt Multimedia and C++ to save an .mp4 video out of OpenGL textures

I am using a Qt 5.9 based app which runs on embedded Linux & Android. The application processes real-time data using OpenGL ES 3.0 & displays OpenGL textures in real time. I am displaying at a rate of 30+ frames per second, which makes it pretty much real time & appears like a video.
I need to save an mp4 from the 30 to 40 frames that are displayed using OpenGL textures. As I understand it, I can leverage Qt Multimedia to do this, but I lack the knowledge of how. I am trying to read & understand the how part from links like here & here.
Once the mp4 is saved, playback can be done using QMediaPlayer as explained here. That looks darn simple. But I am struggling to figure out how to get my OpenGL textures saved into a .mp4 when I need them to be.
So, how do I save a .mp4 video out of the OpenGL textures that are displayed on a QML item?
Pointing to any basic example that exists would also help.
I don't think Qt will do you any favors when it comes to content creation; Qt's multimedia facilities are purely for content consumption. You can play multimedia, not make it.
You will have to explicitly use one of the many available multimedia libraries out there - vlc, ffmpeg, gstreamer, libav, to name a few.

create video from series of images android

I am trying to create a video from series of images in android.
I have come across these three options: MediaCodec, ffmpeg using the NDK, and jcodec. Can someone let me know which one of them is the best and easiest? I didn't find any proper documentation, so could somebody please post a working example?
If you are talking about API 4.3+, in general you need to get the input surface from the encoder, copy your image to the texture that comes along with the surface, set the correct timestamp and send it back to the encoder, and do this
fps (frames per second) * resulting video duration in seconds
times. The bitstream coming out of the encoder should then go to the muxer, so finally you will get an mp4 file.
It requires rather a lot of coding :)
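A rough skeleton of that surface-input flow, under the assumption that the EGL/OpenGL plumbing and the muxer drain loop are filled in where the comments indicate (width, height, fps and durationSeconds are placeholders):

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

public class SurfaceInputSkeleton {
    public static void encode(int width, int height, int fps, int durationSeconds) throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, fps);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface(); // must be called between configure() and start()
        encoder.start();

        int totalFrames = fps * durationSeconds; // "fps * resulting video duration in seconds"
        for (int i = 0; i < totalFrames; i++) {
            // 1. Render the current image into inputSurface with OpenGL ES
            //    (an EGL surface/context bound to inputSurface).
            // 2. Stamp the frame: EGLExt.eglPresentationTimeANDROID(eglDisplay, eglSurface,
            //    i * 1_000_000_000L / fps);  // nanoseconds
            // 3. EGL14.eglSwapBuffers(eglDisplay, eglSurface) submits the frame to the encoder.
            // 4. Drain the encoder's output buffers and write them to a MediaMuxer.
        }
        encoder.signalEndOfInputStream(); // then drain the remaining output and release everything
    }
}
```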
I suggest you try the free Intel Media Pack: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It has a sample, JpegSubstituteEffect, which allows you to create videos from images. The idea is to take a dummy video (black video and silent audio) and to substitute every black frame by copying an image over it. It could easily be enhanced to create a video from a series of images. I know a couple of applications in Google Play that do the same using Media Pack.
I tried JCodec 1.7 for Android. It is very simple compared to the other two options and it works. There is a class SequenceEncoder in the android package that accepts Bitmap instances and encodes them into video. I ended up cloning this class into my app to override some of the settings, e.g. fps. The problem with JCodec is that performance is dismal - encoding a single 720x480 pixel frame takes about 45 seconds. I wanted to do timelapse videos, possibly at full HD, and was initially thinking any encoder would do, as I was not expecting encoding a frame to take more than a second (the minimal interval between frames in my app is 3 seconds). As you can guess, at 45 seconds per frame JCodec is not a fit.
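For reference, this is roughly what the JCodec path looks like. It is a hedged sketch: in newer jcodec-android releases the class is org.jcodec.api.android.AndroidSequenceEncoder rather than the SequenceEncoder mentioned above, so the exact names depend on the version you pull in:

```java
import android.graphics.Bitmap;
import org.jcodec.api.android.AndroidSequenceEncoder;
import java.io.File;
import java.util.List;

public class JCodecTimelapse {
    public static void encode(List<Bitmap> frames, File out, int fps) throws Exception {
        AndroidSequenceEncoder encoder = AndroidSequenceEncoder.createSequenceEncoder(out, fps);
        for (Bitmap frame : frames) {
            encoder.encodeImage(frame); // pure-Java H.264 encode; slow, as noted above
        }
        encoder.finish();               // writes the mp4 header/index and closes the file
    }
}
```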
I am monitoring your question for other answers that may be helpful.
The MediaCodec/MediaMuxer way seems OK but it is insanely complex. I need to learn quite a bit about OpenGL ES, video formats and some Android voodoo to get this going. Oh, and it only works on the latest crop of phones, 4.3+. This is a real shame for Google with all of their claims to fame. I found some Stackoverflow discussions on the topic. Two sub-paths exist - the OpenGL way, which is device independent, and another way which involves transcoding your RGB Bitmap data to YUV. The catch with YUV is that there are 3 flavours of it depending on the device HW - planar, semi-planar and semi-planar compressed (and I am not sure a 4th flavour is not coming in the future, so...).
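For the YUV sub-path, the RGB-to-YUV conversion itself is straightforward; the device-dependent part is only the plane layout. A sketch assuming the semi-planar (NV12, Y plane followed by interleaved U/V) layout - query the encoder's supported color formats before settling on one:

```java
public class YuvConvert {
    // argb: pixels from Bitmap.getPixels(); returns a width*height*3/2 byte buffer in NV12 order.
    public static byte[] argbToYuv420SemiPlanar(int[] argb, int width, int height) {
        byte[] yuv = new byte[width * height * 3 / 2];
        int yIndex = 0;
        int uvIndex = width * height;
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = argb[j * width + i];
                int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
                // Integer approximation of the BT.601 RGB -> YUV transform
                int y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
                yuv[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                if (j % 2 == 0 && i % 2 == 0) { // one U/V pair per 2x2 block (4:2:0 subsampling)
                    int u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
                    int v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
                    yuv[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                }
            }
        }
        return yuv;
    }
}
```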
Here are a couple of useful links on the OpenGL way:
CTS test - https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java
Many CTS like tests - http://bigflake.com/mediacodec/#EncodeDecodeTest
Two of them seem to be relevant and useful
EncodeAndMuxTest - http://bigflake.com/mediacodec/EncodeAndMuxTest.java.txt
CameraToMpegTest - http://bigflake.com/mediacodec/CameraToMpegTest.java.txt (I believe this to be closest to what I need, just need to understand all the OpenGL voodoo and get my Bitmap in as texture filling up the entire frame i.e. projections, cameras and what not comes into play)
The ffmpeg route does not seem direct enough either. Something in C++ accepting stuff from Java... I guess some weird conversions of the Bitmap to byte[]/ByteBuffer will be needed first - CPU intensive and slow. I actually have JPEG byte[] but am not sure this will come in handy for the ffmpeg library. I am also not sure whether ffmpeg takes advantage of the GPU or other HW acceleration, so it may well end up at 10 seconds per frame and literally baking the phone.
FFmpeg can implement this task. You first need to compile the ffmpeg library for Android (refer to the article "How to use Vitamio with your own FFmpeg build").
You could refer to the samples in FFmpeg to figure out how to implement your task.
On Android, implement your task in C++, then use JNI to integrate the C++ code into your Android app.
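The Java side of that JNI glue is small; a sketch below, where the library name "ffmpegwrapper" and the encodeImages() entry point are hypothetical and would have to match whatever you actually build with the NDK:

```java
public class FfmpegEncoder {
    static {
        System.loadLibrary("ffmpegwrapper"); // loads libffmpegwrapper.so built with the NDK
    }
    // Implemented in C/C++ as Java_<package>_FfmpegEncoder_encodeImages and linked against FFmpeg;
    // returns 0 on success.
    public static native int encodeImages(String[] imagePaths, String outputMp4, int fps);
}
```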

Optimizing Painting video encoding in Android

I want to create a live painting video as an export feature for a painting application.
I can create a video from a series of images using a library (FFMPEG or MediaCodec), but this would require too much processing power to compare the images and encode the video.
While drawing, I know exactly which pixels have changed, so I could save a lot of processing if I could pass this info to FFMPEG instead of having FFMPEG figure it out from the images.
Is there a way to efficiently encode the video for this purpose?
It should not require "too much processing power" with MediaCodec, because devices are capable of writing video in real time - some of them write full HD video. There's another thing: each MediaCodec encoder requires pixel data in a specific format, so you should query the API for supported capabilities before using it. It will also be tricky to make your app work on all devices with MediaCodec if your app produces only one pixel format, because probably not all devices will support it (in other words: different vendors have different MediaCodec implementations).
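A small sketch of that capability query, using the pre-API-21 MediaCodecList calls since older devices are in scope here; it just logs the raw color-format constants (e.g. COLOR_FormatYUV420Planar is 19, COLOR_FormatYUV420SemiPlanar is 21):

```java
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.util.Log;

public class CodecCaps {
    // Logs every color format each H.264 encoder on the device accepts.
    public static void dumpAvcEncoderColorFormats() {
        for (int i = 0; i < MediaCodecList.getCodecCount(); i++) {
            MediaCodecInfo info = MediaCodecList.getCodecInfoAt(i);
            if (!info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (!type.equalsIgnoreCase("video/avc")) continue;
                MediaCodecInfo.CodecCapabilities caps = info.getCapabilitiesForType(type);
                for (int colorFormat : caps.colorFormats) {
                    Log.i("CodecCaps", info.getName() + " supports color format " + colorFormat);
                }
            }
        }
    }
}
```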

Video camera interface between beaglebone and android

I am trying to develop an application in which a Beaglebone platform captures video images from a camera connected to it and then sends them (through an internet socket) to an Android application, such that the application shows the video images.
I have read that OpenCV may be a very good option for capturing the images from a camera, but I am not sure how the images can be sent through a socket.
On the other end, I think that the video images received by the Android application could be treated as simple images. With this in mind, I think I can refresh the image every second or so.
I am not sure if I am in the right way for the implementation, so I really appreciate any suggestion and help you could provide.
Thanks in advance, Gus.
The folks at OpenROV have done something like what you've described. Instead of using a custom Android app, which is certainly possible, they've simply used a web browser to display the captured images.
https://github.com/OpenROV/openrov-software
This application uses OpenCV to perform the capture and analysis, a Node.JS application to transmit the data over socket.io to the web browser, and a web client to display the video. An architecture description of how this works is given here:
http://www.youtube.com/watch?v=uvnAYDxbDUo
You can also look at running something like mjpg-streamer:
http://digitalcommons.calpoly.edu/cgi/viewcontent.cgi?article=1051&context=cpesp
Note that displaying the video stream as a set of images can have a big performance impact. For example, if you are not careful how you encode each frame, you can more than double the traffic between the two systems. ARGB takes 32 bits to encode a pixel while YUV takes 12 bits, so even accounting for frame compression you are still more than doubling the storage per frame. Also, rendering ARGB is much, much slower than rendering YUV, as most Android phones actually have hardware-optimized YUV rendering (as in, the GPU can directly blit the YUV into display memory). In addition, rendering separate frames as an approach usually makes one take the easy way and render a Bitmap on a Canvas, which works if you are content with something in the order of 10-15 fps, but can never get to 60 fps, and can reach a peak (not sustained) of 30 fps only on very few phones.
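The raw per-frame numbers behind that 32-bit vs 12-bit comparison, for a hypothetical 640x480 frame:

```java
public class FrameSizeMath {
    public static void main(String[] args) {
        int width = 640, height = 480;            // example frame size (assumption)
        int argbBytes = width * height * 4;       // 32 bpp -> 1,228,800 bytes (~1.2 MB per frame)
        int yuv420Bytes = width * height * 3 / 2; // 12 bpp ->   460,800 bytes (~450 KB per frame)
        // 32 / 12 gives roughly 2.7x more raw data per frame before any compression is applied.
        System.out.println("ARGB: " + argbBytes + " bytes, YUV420: " + yuv420Bytes + " bytes");
    }
}
```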
If you have a hardware MPEG encoder on the Beaglebone board, you should use it to encode and stream the video. This would allow you to directly pass the MPEG stream to the standard Android media player for rendering. Of course, using the standard media player will not allow you to process the video stream in real time, so depending on your scenario this might not be an option for you.

Format of sound/image files to be used in Android Apps

While developing an Android app, what format of sound/image should I be using so that I can control the overall size of the app after completion?
Here is a link to all the media types supported by Android.
For sound I would probably use a low-bitrate .mp3 or a .midi and for images either a compressed .jpg or .gif
For supported media formats see this.
For images you'll probably end up with JPG or PNG (if you need transparency). You should also strip the images of all unnecessary metadata etc. For Linux, a nice tool for this is Trimage.
Take a look at Supported Media Formats.
My choice would be:
Images: go with JPG for compression or PNG for quality and transparency support.
Audio: go with MP3-VBR (variable bit rate) for compression and quality.
The size of your files will be greatly affected by the compression level. At some point, if you compress too much, you will see/hear artifacts. The acceptable level of compression is subjective and really depends on the input data (image or audio). You should test different levels of compression to see what works.
