I'm looking for a way to process video images (without recording) at 120fps - on Android.
Going over all relevant SO questions, I couldn't find an answer to my question.
The docs say one must use CameraConstrainedHighSpeedCaptureSession to work with high-speed video sessions.
For a regular capture session I am using an ImageReader and processing the images in C++.
But for high-speed sessions I can't use it.
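Roughly, my regular-session setup looks like the sketch below (simplified; processFrameNative is just a placeholder for my JNI handoff, and the size/format are examples):

    import android.graphics.ImageFormat;
    import android.media.Image;
    import android.media.ImageReader;
    import android.os.Handler;
    import android.view.Surface;

    // Simplified sketch: an ImageReader delivers YUV frames which are handed
    // off to native code. processFrameNative() is only a placeholder for my
    // JNI call; size and format are examples.
    class FrameGrabber {
        private ImageReader reader;

        Surface createProcessingSurface(Handler backgroundHandler) {
            reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 4);
            reader.setOnImageAvailableListener(r -> {
                Image image = r.acquireLatestImage();
                if (image == null) return;
                // processFrameNative(image.getPlanes()[0].getBuffer(), ...); // JNI handoff
                image.close();
            }, backgroundHandler);
            // This surface is added as a target of a normal capture session; a
            // CameraConstrainedHighSpeedCaptureSession won't accept it as an output.
            return reader.getSurface();
        }
    }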
Is there a way to solve this issue using Camera2 API?
If not, is there a c++/opengl way to do it?
Important - my goal is to NOT record the video.
I've looked at Grafika & bigflake but they use the old Camera API.
Any help is much appreciated.
Thank you.
Related
I want to know if we can manage the video capture speed of Google's new CameraX API, for example to record video at 2x speed or in slow motion. Any suggestions?
Currently you can't record a video at 2x speed, or at any speed other than 1x, using CameraX.
Alternatively you can use FFmpeg to create a slow/fast motion video from an input video.
There are a few ways to create a slow/fast motion video using ffmpeg; see the guide. The first method mentioned in the linked page is lossless and requires no re-encoding, so it is very fast.
If you don't know much about ffmpeg, there are plenty of articles and guides online.
For using it in Android I personally prefer this library, but there are plenty more options out there.
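As a rough illustration, a slow-motion pass with ffmpeg's setpts filter could look like the sketch below. It assumes the mobile-ffmpeg wrapper (its FFmpeg.execute call) and hypothetical file paths, and unlike the lossless re-mux method it does re-encode the video:

    import com.arthenica.mobileffmpeg.FFmpeg;

    public class SlowMotion {
        // Doubles every video presentation timestamp, producing a 0.5x (slow-motion) clip.
        // -an drops the audio track, since setpts only stretches the video; audio would
        // need a separate atempo filter. Paths are hypothetical examples.
        public static boolean makeSlowMotion(String inputPath, String outputPath) {
            String cmd = "-i " + inputPath + " -filter:v setpts=2.0*PTS -an " + outputPath;
            int rc = FFmpeg.execute(cmd); // 0 means success
            return rc == 0;
        }
    }

For a fast-motion (2x) clip the same idea applies with setpts=0.5*PTS.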
I do not know much about the Camera API, but I need to grab frames from a video capture at more than 30fps with a good quality camera (S9).
Can anybody suggest code for this?
I tried to find suitable code for it but failed.
Thanks in advance,
Mohit
You are lucky: not so long ago a component of Android Jetpack was released, called CameraX. Sadly it is still in the alpha stage, meaning that you should avoid using it in production since it might have breaking changes in the future. This component was built on top of the Camera2 API, which is a low-level API for working with the camera.
If you plan to use your app in production I highly recommend using the Camera2 API; it is low level, but you have full control over the camera.
Here is an example to get you started.
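For example, since your concern is frame rate, a first step with Camera2 is to check which AE target fps ranges the device advertises and pick one above 30fps. A rough sketch (the class and method names are just for illustration):

    import android.content.Context;
    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraManager;
    import android.util.Range;

    // Queries the AE fps ranges supported by a camera and picks one above 30 fps.
    public class FpsRangeHelper {
        public static Range<Integer> pickHighFpsRange(Context context, String cameraId)
                throws CameraAccessException {
            CameraManager manager =
                    (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
            CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
            Range<Integer>[] ranges =
                    chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
            Range<Integer> best = null;
            for (Range<Integer> r : ranges) {
                if (r.getUpper() > 30 && (best == null || r.getUpper() > best.getUpper())) {
                    best = r;
                }
            }
            return best; // may be null if the device caps out at 30 fps
        }
    }

Once the camera is open, the chosen range is applied to the request builder with builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, range).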
I've been working with the Ionic Framework for quite a while now, and while I have been able to find tons of tutorials and examples for taking a picture or video, I haven't seen anyone discuss taking video from the camera and displaying it live in the app so that the user can see what they're about to take a picture or video of. This would be essentially like the native camera app on iOS or Android, where the output from the camera is displayed live while the user is getting ready to take a photo or start taking video. I understand that some people have tried to just take single pictures from the camera and update the UI several times a second to make it seem like the video is being streamed straight from the camera, yet I also understand that these attempts are usually plagued with memory leaks, crashes, and tend to have quite a low framerate, resulting in choppy video. If anyone has experience with solving this sort of problem or has some clues for me, thanks for your input!
It's live streaming, and WebRTC is the way to go.
https://webrtchacks.com/webrtc-hybrid-applications/
I am trying to create a video from a series of images in Android.
I have come across these three options: MediaCodec, ffmpeg using the NDK, and jcodec. Can someone let me know which of them is the best and easiest? I didn't find any proper documentation, so could somebody please post a working example?
If you are talking about API 4.3+, in general you need to get the input surface from the encoder, copy the image to the texture that comes along with the surface, set the correct timestamp, and send it back to the encoder, and do it
fps (frames per second) * resulting video duration in seconds
times. The encoded bitstream coming out of the encoder should go to the muxer, so finally you get an mp4 file.
It requires rather a lot of coding :)
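A condensed sketch of that pipeline is below: the encoder is configured for surface input and its output is drained into a MediaMuxer. The rendering of each image onto the input surface (via OpenGL ES or Canvas) is omitted, the constants are placeholders, and in a real implementation the drain loop runs interleaved with the rendering so the encoder does not stall:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Condensed encoder -> muxer pipeline (this exact form needs API 21+ for
    // getOutputBuffer); frame rendering onto the input surface is omitted.
    public class ImagesToVideoSketch {
        private static final int WIDTH = 1280, HEIGHT = 720, FPS = 30, BIT_RATE = 4_000_000;

        public static void encode(String outputPath) throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", WIDTH, HEIGHT);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, FPS);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            Surface inputSurface = encoder.createInputSurface();
            encoder.start();

            MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            int trackIndex = -1;
            boolean muxerStarted = false;
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

            // ... render each image onto inputSurface here, one frame per image, with a
            //     presentation time of frameIndex * 1_000_000L / FPS microseconds ...

            encoder.signalEndOfInputStream();

            // Drain loop: encoded H.264 buffers go straight into the muxer.
            while (true) {
                int outIndex = encoder.dequeueOutputBuffer(info, 10_000);
                if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    trackIndex = muxer.addTrack(encoder.getOutputFormat());
                    muxer.start();
                    muxerStarted = true;
                } else if (outIndex >= 0) {
                    ByteBuffer encoded = encoder.getOutputBuffer(outIndex);
                    if (muxerStarted && info.size > 0
                            && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                        muxer.writeSampleData(trackIndex, encoded, info);
                    }
                    encoder.releaseOutputBuffer(outIndex, false);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
                }
            }
            encoder.stop(); encoder.release();
            muxer.stop(); muxer.release();
        }
    }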
I suggest you try the free Intel Media Pack: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials
It has a sample, JpegSubstituteEffect, which allows you to create videos from images. The idea is to take a dummy video (black video and quiet audio) and substitute every black frame by copying in an image. It could easily be enhanced to create a video from a series of images. I know a couple of applications on Google Play doing the same thing using the Media Pack.
I tried JCodec 1.7 for Android. It is very simple compared to the other two options, and it works. There is a SequenceEncoder class in the android package that accepts Bitmap instances and encodes them into video. I ended up cloning this class into my app to override some of the settings, e.g. fps. The problem with JCodec is that performance is dismal: encoding a single 720x480 frame takes about 45 seconds. I wanted to do timelapse videos, possibly at full HD, and was initially thinking any encoder would do, as I was not expecting encoding a frame to take more than a second (the minimal interval between frames in my app is 3 seconds). As you can guess, at 45 seconds per frame JCodec is not a fit.
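For reference, the kind of usage I mean looks roughly like the snippet below; this assumes the AndroidSequenceEncoder class shipped in newer jcodec-android releases, and the exact class and helper names differ between versions:

    import android.graphics.Bitmap;
    import java.io.File;
    import java.io.IOException;
    import org.jcodec.api.android.AndroidSequenceEncoder;
    import org.jcodec.common.io.NIOUtils;
    import org.jcodec.common.io.SeekableByteChannel;
    import org.jcodec.common.model.Rational;

    // Rough usage sketch: feed Bitmaps one by one, then finish() to close the MP4.
    public class JCodecTimelapse {
        public static void writeVideo(File out, Iterable<Bitmap> frames, int fps) throws IOException {
            SeekableByteChannel ch = NIOUtils.writableChannel(out);
            try {
                AndroidSequenceEncoder encoder = new AndroidSequenceEncoder(ch, Rational.R(fps, 1));
                for (Bitmap frame : frames) {
                    encoder.encodeImage(frame); // each Bitmap becomes one video frame
                }
                encoder.finish();
            } finally {
                NIOUtils.closeQuietly(ch);
            }
        }
    }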
I am monitoring your question for other answers that may be helpful.
The MediaCodec/MediaMuxer way seems OK, but it is insanely complex. I need to learn quite a bit about OpenGL ES, video formats and some Android voodoo to get this going. Oh, and this only works on the latest crop of phones (4.3+), which is a real shame for Google with all of their claims to fame. I found some Stack Overflow discussions on the topic. Two sub-paths exist: the OpenGL way is device independent, and there is another way which involves transcoding your RGB Bitmap data to YUV. The catch with YUV is that there are 3 flavours of it depending on the device HW - planar, semi-planar and semi-planar compressed (and I am not sure a 4th one is not coming in the future, so...).
Here are a couple of useful links on the OpenGL way:
CTS test - https://android.googlesource.com/platform/cts/+/jb-mr2-release/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java
Many CTS like tests - http://bigflake.com/mediacodec/#EncodeDecodeTest
Two of them seem to be relevant and useful
EncodeAndMuxTest - http://bigflake.com/mediacodec/EncodeAndMuxTest.java.txt
CameraToMpegTest - http://bigflake.com/mediacodec/CameraToMpegTest.java.txt (I believe this to be closest to what I need, just need to understand all the OpenGL voodoo and get my Bitmap in as texture filling up the entire frame i.e. projections, cameras and what not comes into play)
The ffmpeg route does not seem direct enough either. Something in C++ accepting stuff from Java... I guess some weird conversions of the Bitmap to byte[]/ByteBuffer will be needed first - CPU intensive and slow. I actually have JPEG byte[] but am not sure this will be of use to the ffmpeg library. I am also not sure whether ffmpeg takes advantage of the GPU or other HW acceleration, so it may well end up at 10 seconds per frame and literally baking the phone.
FFmpeg can handle this task. You first need to compile the ffmpeg library for Android (refer to the article "How to use Vitamio with your own FFmpeg build").
You could refer to the samples in FFmpeg to figure out how to implement your task.
On Android, implement your task in C++, then use JNI to integrate the C++ code into your Android app.
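As a rough sketch of the Java side of that bridge (the library name and native method here are hypothetical; the C++ side would be built with the NDK against the ffmpeg libraries):

    // Java side of the JNI bridge. "videomaker" and encodeImages() are made-up names.
    public class VideoMaker {
        static {
            System.loadLibrary("videomaker"); // loads libvideomaker.so built with the NDK
        }

        // Implemented in C++ on top of ffmpeg; returns 0 on success.
        public static native int encodeImages(String[] imagePaths, String outputPath, int fps);
    }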
I'm making an Android game (using AndEngine).
I need to record the gameplay screen.
This is not for making a promotional video; it is for players to review their gameplay.
My app should record the video by itself.
So I can't solve this problem using the recording apps available on the market.
I already checked the code below.
http://code.google.com/p/andengineexamples/source/browse/src/org/anddev/andengine/examples/ScreenCaptureExample.java?spec=svn66d3057f672175a8a21f9d06e7a045942331f65c&r=66d3057f672175a8a21f9d06e7a045942331f65c
It works very well.
But I want to record a gameplay video, not a single screenshot.
I need at least 24fps for a smooth replay, but if I use glReadPixels I only get 5 fps on my Xoom device.
I searched various websites to solve this optimization problem.
Most people say glReadPixels is too slow for recording video.
http://www.gamedev.net/topic/473794-glreadpixel-takes-tooooo-much-time/
They recommend glCopyTexImage2D instead of glReadPixels,
because glCopyTexImage2D is much faster than glReadPixels.
But I can't find out how to use glCopyTexImage2D in AndEngine,
and someone even says that Android OpenGL ES does not support glCopyTexImage2D.
Maybe another method exists to record smooth video:
reading the framebuffer of the Android device.
Most recording apps on the market use this method, but these apps need root permission to grab the framebuffer.
I've read news that Android will support screen capture from SurfaceFlinger after Gingerbread.
But I can't find out how to use the framebuffer without root permission. T_T
These are my guessed solutions:
Use another OpenGL API that is faster than glReadPixels.
Find some Android API that can get the framebuffer without root permission
(maybe I can access Android's SurfaceFlinger?).
Render to another offscreen texture and use it to record the video.
But I don't know how to implement these methods.
Which approach is correct?
Do you have any example code to record video on Android?
Please help me solve this problem.
If you know any other method, that would be helpful too.
Any help will be appreciated.
Does the GPU of your device support ES 3.0? If it does, you can try to use a PBO (pixel buffer object).
Here is a topic you can refer to: Low readback performance with PBO, help!!!!
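Roughly, a double-buffered PBO readback with GLES 3.0 looks like the sketch below (the class name and buffer handling are illustrative). glReadPixels writes into a pixel-pack buffer asynchronously, and you map the buffer that was filled during the previous frame, so the copy overlaps with other work instead of stalling the GL thread:

    import android.opengl.GLES30;
    import java.nio.ByteBuffer;

    // Double-buffered PBO readback sketch. Call readFrame() once per frame on the
    // GL thread, after the scene has been drawn.
    public class PboReader {
        private final int[] pbos = new int[2];
        private final int width, height, size;
        private final ByteBuffer pixelCopy;
        private int index = 0;
        private long framesIssued = 0;

        public PboReader(int width, int height) {
            this.width = width;
            this.height = height;
            this.size = width * height * 4; // RGBA8
            this.pixelCopy = ByteBuffer.allocateDirect(size);
            GLES30.glGenBuffers(2, pbos, 0);
            for (int pbo : pbos) {
                GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo);
                GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, size, null, GLES30.GL_STREAM_READ);
            }
            GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
        }

        // Returns the pixels of the previous frame, or null on the very first call.
        public ByteBuffer readFrame() {
            int writePbo = pbos[index];
            int readPbo = pbos[(index + 1) % 2];
            index = (index + 1) % 2;

            // Start an asynchronous readback into one PBO (the last argument is a byte
            // offset into the bound pack buffer, not a client-side pointer).
            GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, writePbo);
            GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);

            ByteBuffer result = null;
            if (framesIssued > 0) {
                // Map the PBO filled during the previous frame and copy it out.
                GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, readPbo);
                ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
                        GLES30.GL_PIXEL_PACK_BUFFER, 0, size, GLES30.GL_MAP_READ_BIT);
                pixelCopy.clear();
                pixelCopy.put(mapped);
                pixelCopy.flip();
                GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
                result = pixelCopy;
            }
            GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
            framesIssued++;
            return result;
        }
    }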