I am trying to resize a video file on an Android device using FFmpeg. Remark: I can't use FFmpeg as a binary; in my case it has to be used as a shared library (only the FFmpeg C API is accessible).
I did not find any documentation on video resizing, but it looks like the algorithm is as follows:
10 OPEN video_stream FROM video.mp4
20 READ packet FROM video_stream
30 DECODE packet INTO frame
40 IF frame NOT COMPLETE GOTO 20
50 RESIZE frame
60 ENCODE frame INTO packet
70 WRITE packet TO converted_video.mp4
80 GOTO 20
Should I use the sws_scale() function to resize each frame? Is there any other (easier?) way to resize a video file using the FFmpeg C API?
From what I remember, sws_scale() is the way to do this with recent FFmpeg versions. Despite the extra steps like preparing an SwsContext it's not that hard to use, and there are examples such as the tutorial on the site you referenced.
Older versions also had an img_convert() function which was a bit simpler to use, and for a while it was still in the library but not in the usual headers -- it still worked if you supplied your own prototype (taken from an older version). This may still work, if you're willing to chance it -- though I haven't tried it with the latest versions. It's probably safer and better to use sws_scale(), though.
It's also possible to handle the scaling outside of the FFmpeg libraries, but it's likely more trouble than it's worth in this case. The Doxygen documentation for the libraries describes the AVPicture structure well enough to work with it directly, or transfer the image to/from some other form. The main difficulty is ensuring you can work with or convert the color/pixel format used -- if you would have to convert it to another format, you should use sws_scale() for at least that much, if not the resizing as well.
Related
I am trying to build a network implementing YOLO object detection using TensorFlow, and I want it to be usable on Android. After building the structure, I use tf.train.write_graph to get the graph file and want to replace the original file in the Android demo.
But the .pb file is too large (1.1 GB), which is not usable on Android. So, how can I reduce the size?
I would suggest you first try quantizing your graph; for that you'll only need an official TensorFlow script. Here's a great tutorial by Pete Warden:
https://petewarden.com/2016/05/03/how-to-quantize-neural-networks-with-tensorflow/
In theory, if your model uses 32-bit floats, it will end up roughly 4x smaller (~275 MB), since the values in the graph are converted to 8-bit integers (for inference this has no significant effect on accuracy). Note that the size reduction comes into play when you compress the Protocol Buffer file.
I'm working on a feature where I want to add a picture over a video and save it to the SD card.
In general, the user selects an image with a semi-transparent background and puts that image above the video; after the user presses the save button, he gets a new video, but with the image overlaid on top of it.
I have heard about ffmpeg and have seen some of the commands ffmpeg provides, but I don't know where to start. Can anyone provide me with an example?
Thank you.
One common approach is to use an ffmpeg wrapper to access ffmpeg functionality from your Android app.
There are several fairly well used wrappers available on GitHub - the ones below are particularly well featured and documented. (Note: I have not used these, as they were not so mature when I was looking at this previously, but if I were doing something like this again now I would definitely build on one of them.)
http://writingminds.github.io/ffmpeg-android-java/
https://github.com/guardianproject/android-ffmpeg
Using one of the well supported and used libraries will take care of some common issues that you might otherwise encounter - having to load different binaries for different processor types, and some tricky issues with native library reloading to avoid crashes on subsequent invocations of the wrapper.
Because this approach uses the standard ffmpeg cmd line syntax for commands, it also means you should be able to search for and find help easily on many different operations (anyone using ffmpeg in 'normal' mode will use the same syntax for the ffmpeg command itself).
For example, for your add-an-image case, here are some results from a quick search (ffmpeg syntax can change over time, so it is worth doing a current check):
https://stackoverflow.com/a/32250369/334402
https://superuser.com/a/678171
I have a video (.mp4) file on my SD card. I want to reduce the size of the .mp4 file and upload it to a server.
One way you can do this is to use ffmpeg.
There are several ways of using ffmpeg in an Android program:
use the native libraries directly from C using JNI
use a library which provides a wrapper around the 'ffmpeg' cmd line utility (the wrapper also uses JNI internally)
call the ffmpeg cmd line via 'exec' from within your Android app
Of the three, I personally have used the wrapper approach in the past and found it worked well. IMHO, the documentation and examples available with the native libraries represented quite a steep learning curve.
Note, if you do use 'exec' there are some things it is worth being aware of - see bottom of this answer: https://stackoverflow.com/a/25002844/334402.
The wrapper does have limitations - at heart, the ffmpeg cmd line tool is not intended to be used this way, and you have to keep that in mind, but it does work. There is an example project available on GitHub which seems to have a reasonable user base - I did not use it myself, but I did refer to it and found it useful, especially for an issue you will hit if you need to call your ffmpeg wrapper more than once from the same activity or task:
https://github.com/jhotovy/android-ffmpeg
See this answer (and the questions and answers it is part of) for some more specifics on the 'calling ffmpeg two times' solution:
https://stackoverflow.com/a/28752190/334402
I have a lot of data (text format) to send from a device, which obviously means that I should compress it. But my question is whether there are any ways of doing it other than with the zip algorithm (like this). The reason I am asking is over here - for a text file, 7-zip is twice (!) as good as zip, which is a significant gain. And maybe there are even better algorithms.
So are there any effective ways of data compression (better than zip) available for Android?
You would need to compile another library into your code, since I doubt that compression algorithms other than zlib are available as part of the standard libraries on Android.
The 7-zip algorithm you refer to is actually called LZMA, which you can get in library form in the LZMA SDK. The source code is available in Java as well as C. If you can link C code into your application, that would be preferable for speed.
Since there's no such thing as a free lunch, the speed is important. LZMA will require much more memory and much more execution time to achieve the improved compression. You should experiment with LZMA and zlib on your data to see where you would like the tradeoff to fall between execution time and compression, both to choose a package and to pick compression levels within a package.
If you find that you'd like to go the other way, to less compression and even higher speed than zlib, you can look at lz4.
Your question is too general.
You can use any library, as long as it is in Java or C/C++ (via the NDK). If you don't want to use external libraries, you have to stick to what's in the SDK. Depending on how you are sending the data, there might be standard ways to do this. For example, HTTP uses gzip and has the necessary headers already defined.
In short, test different things with your expected data format and size, find the best one and integrate it in your app.
Since the PVR file format and the PVRTexTool utility support ETC compression, I want to use it for the textures in my Android project.
Unfortunately I found no libs or samples showing how to load an ETC1 OpenGL texture from a PVR file.
One source I have is an Objective-C PVR loader for iOS, but I need a C++ example for the Android NDK.
Unfortunately there doesn't seem to be much info on this out there; however, there are a couple of things. http://www.brokenteapotstudios.com/android-game-development-blog/2011/05/loading-opengl-textures-in-c-and-etc1-texture-compression.html provides most of the info needed. If you want to use the header version, the header format is described here: http://www.mhgames.org/2012/03/android-development-loading-etc1-textures-from-ndk/
Hopefully that's helpful :)
Read this first regarding compressed textures on android:
http://developer.android.com/guide/topics/graphics/opengl.html (scroll down to the "OpenGL Versions and Device Compatibility" chapter)
There's also the ETC1Util class (as referred from the link above) :
http://developer.android.com/reference/android/opengl/ETC1Util.html
The logical thing to do would be to use ETC1Util.isETC1Supported() to check whether ETC1 is supported on your device and, if not, provide a fallback option.
I also recommend you take a look (if you haven't done so already) at the PowerVR android sdk:
http://www.imgtec.com/powervr/insider/sdkdownloads/index.asp
I haven't taken a look at it myself, but I'm sure it's got what you are looking for.
So, I don't think there's any need for Objective-C...
Good luck!