I'm making an Android game (using AndEngine).
I need to record the gameplay screen.
This is not for making a promotional video; it is for players to review their own gameplay.
My app should record the video by itself,
so I can't solve this with an existing recording app from the market.
I already checked the code below:
http://code.google.com/p/andengineexamples/source/browse/src/org/anddev/andengine/examples/ScreenCaptureExample.java?spec=svn66d3057f672175a8a21f9d06e7a045942331f65c&r=66d3057f672175a8a21f9d06e7a045942331f65c
It works very well,
but I want to record a gameplay video, not a single screenshot.
I need at least 24 fps for a smooth replay, but with glReadPixels I only get about 5 fps on my Xoom device.
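To be concrete, the path I mean is basically the ScreenCaptureExample approach called once per frame; a simplified sketch (class and method names here are illustrative, not my actual code):

```java
// Naive per-frame capture: block the GL thread with glReadPixels every frame.
// This is the approach that only reaches ~5 fps on my Xoom.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

public class NaiveFrameGrabber {
    private final int width;
    private final int height;
    private final ByteBuffer pixels;

    public NaiveFrameGrabber(int width, int height) {
        this.width = width;
        this.height = height;
        // RGBA8888 buffer, reused every frame.
        this.pixels = ByteBuffer.allocateDirect(width * height * 4)
                                .order(ByteOrder.nativeOrder());
    }

    // Called at the end of onDrawFrame(); stalls the pipeline until the read completes.
    public ByteBuffer grab() {
        pixels.rewind();
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        return pixels; // would then be handed to an encoder / file writer
    }
}
```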
I searched various websites for a solution to this performance problem.
Most people say glReadPixels is too slow for recording video:
http://www.gamedev.net/topic/473794-glreadpixel-takes-tooooo-much-time/
They recommend glCopyTexImage2D instead of glReadPixels,
because glCopyTexImage2D is much faster,
but I can't find out how to use glCopyTexImage2D in AndEngine.
Some even say that Android's OpenGL ES does not support glCopyTexImage2D.
Maybe another method exists to record smooth video:
reading the framebuffer of the Android device.
Most recording apps in the market use this method, but they need root permission to grab the framebuffer.
I've read news that Android would support screen capture through SurfaceFlinger after Gingerbread,
but I can't find out how to access the framebuffer without root permission. T_T
These are my guesses at a solution:

1. Use another OpenGL API that is faster than glReadPixels.
2. Find an Android API that can access the framebuffer without root permission (maybe I can access Android's SurfaceFlinger?).
3. Draw to another offscreen texture to record the video.

But I don't know how to implement any of these.
Which approach is correct?
Do you have any example code for recording video on Android?
Please help me solve this problem.
If you know of any other method, that would be helpful too.
Any help will be appreciated.
Does the GPU vendor of your device support ES 3.0? If it does, you can try using a PBO (pixel buffer object) for asynchronous readback.
Here is a topic you can refer to: "Low readback performance with PBO, help!!!!!"
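A minimal sketch of the double-buffered PBO readback idea, assuming a GLES 3.0 context (names and buffer sizes are illustrative):

```java
// Double-buffered PBO readback (GLES 3.0): glReadPixels writes into a pixel pack
// buffer asynchronously, and we map the *previous* frame's PBO, so the GL thread
// is not stalled waiting for the current frame's pixels.
import java.nio.ByteBuffer;
import android.opengl.GLES30;

public class PboFrameReader {
    private final int width, height, bytes;
    private final int[] pbo = new int[2];
    private final ByteBuffer copy;
    private int index = 0;
    private boolean primed = false;

    public PboFrameReader(int width, int height) {
        this.width = width;
        this.height = height;
        this.bytes = width * height * 4;
        this.copy = ByteBuffer.allocateDirect(bytes);
        GLES30.glGenBuffers(2, pbo, 0);
        for (int b : pbo) {
            GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, b);
            GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, bytes, null, GLES30.GL_STREAM_READ);
        }
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
    }

    // Call once per frame after rendering. Returns the pixels of the frame that was
    // read one call earlier, or null on the very first call.
    public ByteBuffer readFrame() {
        int current = index;
        int previous = (index + 1) % 2;

        // Kick off an asynchronous read of the current frame into the current PBO.
        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[current]);
        GLES30.glReadPixels(0, 0, width, height,
                GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0 /* offset into PBO */);

        ByteBuffer result = null;
        if (primed) {
            // Map the other PBO, which by now holds the previous frame.
            GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[previous]);
            ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
                    GLES30.GL_PIXEL_PACK_BUFFER, 0, bytes, GLES30.GL_MAP_READ_BIT);
            copy.rewind();
            copy.put(mapped);   // copy out while the buffer is still mapped
            copy.rewind();
            GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
            result = copy;      // hand this to your encoder / file writer
        }

        GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
        primed = true;
        index = previous;
        return result;
    }
}
```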
I'm looking for a way to process video frames (without recording) at 120 fps on Android.
Going over all relevant SO questions, I couldn't find an answer to my question.
The docs say one must use a CameraConstrainedHighSpeedCaptureSession to work with high-speed video sessions.
For a regular capture session I am using an ImageReader and processing the images in C++,
but for high-speed sessions I can't use it.
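For context, my regular-session setup is roughly along these lines (simplified sketch; class and names are illustrative, not my actual code):

```java
// Regular-session path: an ImageReader surface is added as a capture target and
// frames are handed off for native (C++) processing.
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

public class RegularSessionTarget {
    private final ImageReader reader;

    public RegularSessionTarget(int width, int height, Handler backgroundHandler) {
        reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, /*maxImages=*/4);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image != null) {
                // Hand the YUV planes to the native processing code here.
                image.close();
            }
        }, backgroundHandler);
    }

    // This surface is added to the CaptureRequest and the regular CameraCaptureSession;
    // a constrained high-speed session does not accept this kind of surface.
    public Surface getSurface() {
        return reader.getSurface();
    }
}
```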
Is there a way to solve this issue using Camera2 API?
If not, is there a C++/OpenGL way to do it?
Important: my goal is NOT to record the video.
I've looked at Grafika & bigflake but they use the old Camera API.
Any help is much appreciated.
Thank you.
I do not know much about the Camera API, but I need to use frames from a video capture at more than 30 fps with a good-quality camera (S9).
Can anybody suggest code for this?
I tried to find suitable code for this but failed.
Thanks in Advance
Mohit
You are in luck: not so long ago a component of Android Jetpack called CameraX was released. Sadly it is still in alpha, meaning you should avoid using it in production since it may have breaking changes in the future. This component is built on top of the Camera2 API, which is a low-level API for working with the camera.
If you plan to ship your app to production, I highly recommend using the Camera2 API; it is low level, but it gives you full control over the camera.
Here is an example to get you started.
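As a rough illustration of the bare Camera2 flow (a simplified sketch, not the linked example; error handling and permission checks are omitted):

```java
// Open a camera, create a capture session with one target surface, and start
// a repeating preview request.
import android.content.Context;
import android.hardware.camera2.*;
import android.os.Handler;
import android.view.Surface;
import java.util.Collections;

public class SimpleCamera2 {
    public static void start(Context context, Surface target, Handler handler)
            throws CameraAccessException, SecurityException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0]; // first (usually back-facing) camera

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice camera) {
                try {
                    camera.createCaptureSession(Collections.singletonList(target),
                            new CameraCaptureSession.StateCallback() {
                                @Override public void onConfigured(CameraCaptureSession session) {
                                    try {
                                        CaptureRequest.Builder builder =
                                                camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                                        builder.addTarget(target);
                                        session.setRepeatingRequest(builder.build(), null, handler);
                                    } catch (CameraAccessException e) {
                                        e.printStackTrace();
                                    }
                                }
                                @Override public void onConfigureFailed(CameraCaptureSession session) { }
                            }, handler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }
            @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
            @Override public void onError(CameraDevice camera, int error) { camera.close(); }
        }, handler);
    }
}
```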
I need to play a video on an OpenGL surface. I think I will need to render each frame of the video to a texture in a loop and then draw it via OpenGL. Is this possible under iOS and/or Android?
It is possible on iOS, but it's pretty tricky business to get it to run fast enough to keep up with a video stream.
There is an old demo app from Apple called ChromaKey that takes a CVPixelBuffer from Core Video and maps it directly into an OpenGL texture without having to copy the data. That makes performance MUCH better, and is the approach I would suggest.
I don't know if there is more current sample code available that shows how it's done. That code is back from the days of iOS 6, and was written in Objective-C. (I would suggest doing new iOS development in Swift, since that's where Apple is putting its emphasis.)
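On the Android side (which the answer above does not cover), the usual zero-copy analogue is to let MediaPlayer decode into a SurfaceTexture bound to a GL_TEXTURE_EXTERNAL_OES texture; a minimal sketch, assuming you already have a GL thread and context:

```java
// Android analogue of the zero-copy approach: MediaPlayer decodes into a
// SurfaceTexture backed by a GL_TEXTURE_EXTERNAL_OES texture, so video frames
// reach OpenGL without a CPU-side copy.
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;
import java.io.IOException;

public class VideoOesTexture {
    private int textureId;
    private SurfaceTexture surfaceTexture;
    private MediaPlayer player;

    // Call on the GL thread.
    public void prepare(String videoPath) throws IOException {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        textureId = tex[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        surfaceTexture = new SurfaceTexture(textureId);
        Surface surface = new Surface(surfaceTexture);

        player = new MediaPlayer();
        player.setDataSource(videoPath);
        player.setSurface(surface);
        surface.release();
        player.prepare();
        player.start();
    }

    // Call once per frame on the GL thread before drawing the textured quad;
    // the fragment shader must use samplerExternalOES and declare
    // "#extension GL_OES_EGL_image_external : require".
    public void updateFrame() {
        surfaceTexture.updateTexImage();
    }
}
```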
Regards, community.
I just want to build an app similar to this,
with my own content of course.

1. How do I capture 360-degree video (cameras, format, ingest, audio)?
2. Implementation:
2.1 Which Cardboard SDK works best for my purposes (Android or Unity)?
2.2 Do you know of any blogs, websites, tutorials, or samples I can lean on?
Thank you
MovieTextures are a great way to do this in Unity, but unfortunately MovieTextures are not implemented on Android (maybe this will change in Unity 5). See the docs here:
For a simple wrap-a-texture-on-a-sphere app, the Cardboard Java SDK should work. But if you would rather use Unity due to other aspects of the app, the next best way to do this would be to allocate a RenderTexture and then grab the GL id and pass it to a native plugin that you would write.
This native code would be decoding the video stream, and for each frame it would fill the texture. Then Unity can handle the rest of the rendering, as detailed by the previous answer.
First of all, you need content, and to record stereo 360 video, you'll need a rig of at least 12 cameras. Such rigs can be purchased for GoPro cams. That's gonna be expensive.
The recently released Unity 5 is a great option and I strongly suggest using it. The most basic way of doing 360 stereo video in Unity is to create two spheres with MovieTextures showing your 360 video. Then you turn them "inside out" so that they display their back faces instead of their front faces. This can be done with a simple shader, turning on front-face culling and removing the mirror effect. Then you place your cameras inside the spheres. If you are using the Google Cardboard SDK camera rig, put the spheres on different culling layers and make each camera see only the appropriate sphere. Remember to put the spheres in the proper positions relative to the cameras.
There may be other ways to do this, leading to better results, but they won't be as simple. You can also look for some paid scripts/plugins/assets to do 360 video in VR.
One of the features of Android 4.4 (KitKat) is that it provides a way for developers to capture an MP4 video of the screen using adb shell screenrecord. Does Android 4.4 provide any new APIs for applications to capture and encode video, or does it just provide the screenrecord utility/binary?
I ask because I would like to do some screen capture work in an application I'm writing. Before anyone asks, yes, the application would have framebuffer access. However, the only Android-provided capturing/encoding API that I've seen (MediaRecorder) seems to be limited to recording video from the device's camera.
The only screen capture solutions I've seen mentioned on Stack Overflow seem to revolve around taking screenshots at a regular interval or using JNI to encode the framebuffer with a ported version of ffmpeg. Are there more elegant, native solutions?
The screenrecord utility uses private APIs, so you can't do exactly what it does.
The way it works is to create a virtual display, route the virtual display to a video encoder, and then save the output to a file. You can do essentially the same thing, but because you're not running as the "shell" user you'd only be able to see the layers you created. The relevant APIs are designed around creating a Presentation, which may not be exactly what you want.
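A rough sketch of that pipeline, assuming you only need to capture content your own app renders (the encoder drain loop and MediaMuxer wiring are omitted; names are illustrative):

```java
// The encoder's input surface becomes the virtual display's surface, so everything
// rendered to that display is encoded to H.264. Draining the encoder output into a
// MediaMuxer on another thread is left out.
import android.content.Context;
import android.hardware.display.DisplayManager;
import android.hardware.display.VirtualDisplay;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

public class VirtualDisplayRecorder {
    public static VirtualDisplay start(Context context, int width, int height, int dpi)
            throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 6000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();

        DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        // Without the "shell" uid this display only shows surfaces your own app owns
        // (e.g. via a Presentation targeting it), not the whole screen.
        return dm.createVirtualDisplay("recording", width, height, dpi,
                inputSurface, DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
        // Next step: drain 'encoder' output buffers into a MediaMuxer on another thread.
    }
}
```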
See the source code for a CTS test with a trivial example (just uses an ImageView).
Of course, if you happen to be a GLES application, you can just record the output directly (see e.g. EncodeAndMuxTest and the "Record GL app" activity in Grafika).
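The core trick behind that direct-recording path, very roughly (the real details are in EncodeAndMuxTest and Grafika), is to wrap the encoder's input Surface in an EGL window surface and render your frames into it; a condensed sketch, assuming an encoder configured as in the previous sketch and an existing EGL display/config/context:

```java
// Wrap the video encoder's input Surface in an EGL window surface. Each frame the
// GLES app renders into it, stamped with a presentation time and swapped, is
// submitted to the encoder. Draining into a MediaMuxer is omitted.
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.view.Surface;

public class EncoderEglTarget {
    private final EGLDisplay display;
    private final EGLSurface surface;

    public EncoderEglTarget(EGLDisplay display, EGLConfig config, Surface encoderInputSurface) {
        this.display = display;
        int[] attribs = { EGL14.EGL_NONE };
        this.surface = EGL14.eglCreateWindowSurface(display, config, encoderInputSurface, attribs, 0);
    }

    public void makeCurrent(EGLContext context) {
        EGL14.eglMakeCurrent(display, surface, surface, context);
    }

    // Call after drawing the frame into this surface.
    public void submitFrame(long presentationTimeNs) {
        EGLExt.eglPresentationTimeANDROID(display, surface, presentationTimeNs);
        EGL14.eglSwapBuffers(display, surface);
    }
}
```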
Well, AFAIK, I don't see any API support equivalent to capturing what's happening on the screen.