Adding effects to an Android camera preview

I am looking to get a live camera feed, add effects to it, and display it. Which is the right technology to go forward with? Are there any open source options?

You can configure the Camera class to provide you with the preview buffers (as copies). This is provided through the Camera.PreviewCallback interface. You have to implement the interface and set it on the camera; during preview you will receive the preview buffers.
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
Then you can apply a custom processing algorithm to each buffer and draw the result using either a SurfaceView or an OpenGL surface.
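A minimal sketch of that setup, assuming an NV21 preview format (the camera open/startPreview plumbing is omitted, and applyEffect() and drawToSurface() are placeholders for your own processing and drawing code):

    import android.hardware.Camera;

    // Sketch only: receive preview frames, process them, and hand the result
    // to your own drawing code. applyEffect() and drawToSurface() are placeholders.
    public class EffectPreview implements Camera.PreviewCallback {

        public void start(Camera camera) {
            // Re-use a buffer instead of letting the framework allocate one per frame.
            Camera.Size size = camera.getParameters().getPreviewSize();
            int bufferSize = size.width * size.height * 3 / 2;   // NV21 is 12 bits per pixel
            camera.addCallbackBuffer(new byte[bufferSize]);
            camera.setPreviewCallbackWithBuffer(this);
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            byte[] processed = applyEffect(data);   // your custom effect
            drawToSurface(processed);               // draw via SurfaceView or OpenGL
            camera.addCallbackBuffer(data);         // return the buffer for re-use
        }

        private byte[] applyEffect(byte[] nv21) { /* ... */ return nv21; }
        private void drawToSurface(byte[] frame) { /* ... */ }
    }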

Check out OpenCV... it will require you to do some work in C/C++, JNI and the Android NDK but it is a really nice library and should do what you need pretty easily.

Kieran is right, OpenCV would be a good and easy way with a lot of capabilities.
See http://opencv.willowgarage.com/wiki/AndroidTrunk for details of the android implementation.
And checkout the sample application: https://code.ros.org/svn/opencv/trunk/opencv/android/apps/OpenCV_SAMPLE/
This should give you a good start, as it is an example that applies processors to the live image.

Related

Implement camera preview in native and use in flutter

I want to implement a native android camera preview (using c++) and display this preview on a flutter application.
I couldn't find any direct way to do so. Up to this point, the only solution I could find (which I am still not sure will work, and which I think might be too complex) is the following:
Implement a native stream using ACameraCaptureSession_setRepeatingRequest and ACameraDevice_createCaptureRequest, pass it to Java, convert it to an Android view, and then use Flutter's PlatformView.
As I mentioned, I'm not sure it will work, and even if it does, I wonder if there is some simpler way to do it.

Using webrtc for android,how to save a image in the video call?

I don't know how to get the video frame , so I can't save the image.
Give me some tips. Thanks a lot.
As the canvas and related facilities are unavailable on Android, we can dodge this situation by taking screenshots and introducing an animation in the app's UI. The screenshot image can be stored at a configured location and reused later for exchange with the other party.
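As a rough illustration of that screenshot approach for an ordinary (non-GL) view, something along these lines draws the view into a bitmap and saves it to a configured location; note the edit below about GLSurfaceView, and the output file name here is only an example:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.view.View;
    import java.io.File;
    import java.io.FileOutputStream;

    // Sketch only: draw an ordinary View into a Bitmap and save it as a JPEG.
    // This does NOT work for GL-backed video renderers (see the edit below).
    public final class ViewScreenshot {
        public static File save(View view, File outputDir) throws Exception {
            Bitmap bitmap = Bitmap.createBitmap(
                    view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
            view.draw(new Canvas(bitmap));          // render the view into the bitmap

            File file = new File(outputDir, "call_snapshot.jpg");
            try (FileOutputStream out = new FileOutputStream(file)) {
                bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out);
            }
            return file;
        }
    }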
Edit: One can take the AppRTC code as a reference for capturing the SurfaceView:
https://codereview.webrtc.org/1257043004/
GLSurfaceView should not work, as the WebRTC library holds the camera and the screen. One has to build an extended class to get the VideoRenderer and grab a snapshot of the frame; once done, one can display the frame using the customized displayFrame() API mentioned by cferran in the OpenTok Android examples.
You can also use the OpenTok library, but it is a paid product, whereas WebRTC is free.
If you are interested in using a third party library here is an example on how to implement this use case: https://github.com/opentok/opentok-android-sdk-samples/tree/master/Live-Photo-Capture
If you prefer to use WebRTC directly, here you can find generic information about how to build WebRTC on Android: https://webrtc.org/native-code/android/

Adding ARToolkit Marker tracking into Tango

I have been trying to integrate ARToolkit marker tracking into a Tango application.
So far I have created a build so that a Tango app can access and use the ARToolkit native library or the ARToolkit Unity wrappers.
However, they both seem to require exclusive access to the camera in their default configurations.
How could you feed the same Android video feed to both libraries?
- Could you create a dummy camera device which duplicates the feed?
- Could you take the Tango feed as normal, and then resend it into ARToolkit with a special video configuration?
[edit]
ARToolkit uses the older Camera1 API: it takes an onPreviewFrame() callback and passes that byte[] data to its own native library call, which does the actual work.
Along the lines of the second bullet point, could Tango provide a copy of each frame's raw camera data using something like ITangoVideoOverlay?
(ARToolkit's NDK functionality seems to expect NV21, but it can also accept other formats.)
If that data were extractable from Tango, I believe the ARToolkit NDK functionality could be used without actually owning the camera.
I am afraid that neither of the methods you mentioned would work. Tango has exclusive access to the camera, and I believe ARToolkit also occupies the camera exclusively through the Camera2 API. With the current Tango SDK, I think the workaround would be to use ARToolkit for camera rendering and Tango for pose tracking.
However, this exposes a timestamping problem: Tango and ARToolkit have different timestamps. The solution is to take a timestamp offset at the very beginning, when the application starts, and constantly apply that offset when querying a pose from Tango based on a timestamp.
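A rough sketch of that offset idea, assuming you have some way to read a current timestamp from each SDK (the sampling of the two start times is up to you, and both are assumed to be in the same unit, e.g. seconds):

    // Hedged sketch of the timestamp-offset idea. How you obtain
    // tangoTimeAtStart and artoolkitTimeAtStart depends on the SDKs;
    // both are assumed to be in seconds.
    public class ClockAligner {
        private final double offset;   // tangoTime - artoolkitTime, sampled once at startup

        public ClockAligner(double tangoTimeAtStart, double artoolkitTimeAtStart) {
            this.offset = tangoTimeAtStart - artoolkitTimeAtStart;
        }

        // Convert an ARToolkit frame timestamp into the Tango time base
        // before querying a pose from Tango for that moment.
        public double toTangoTime(double artoolkitTimestamp) {
            return artoolkitTimestamp + offset;
        }
    }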
This blog shows an example integrating the two.
It also links to example source code, but I haven't tidied it up at all after testing - proceed with caution!
You cannot feed the same camera source to both libraries (first bullet point), but you can forward the camera feed from Tango (ITangoVideoOverlay) into ARToolkit (AcceptVideoImage) (second bullet point).
This is not ideal, because it is fairly inefficient to send the data from C# to Java. The Phab 2 Pro has to downsample the video 4x to achieve a decent framerate.
A better answer would replace the AndroidJavaClass calls with pipes/sockets.
Also there are many little problems - it's a pretty hacky workaround.
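If you do go the pipes/sockets route instead of AndroidJavaClass calls, a minimal Java-side receiver might look like the sketch below. The socket name and the framing (a 4-byte length prefix before each raw NV21 frame) are assumptions; the producer side would have to match them:

    import android.net.LocalServerSocket;
    import android.net.LocalSocket;
    import java.io.DataInputStream;

    // Hypothetical Java-side receiver for raw camera frames pushed over a
    // local (UNIX domain) socket. Socket name and frame framing are assumptions.
    public class FrameSocketServer extends Thread {
        @Override
        public void run() {
            try {
                LocalServerSocket server = new LocalServerSocket("tango_artoolkit_frames");
                LocalSocket client = server.accept();
                DataInputStream in = new DataInputStream(client.getInputStream());
                while (!isInterrupted()) {
                    int length = in.readInt();          // frame size sent by the producer
                    byte[] frame = new byte[length];
                    in.readFully(frame);                // raw NV21 bytes
                    // hand `frame` to ARToolkit's native entry point here
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }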

How to use OpenGL without displaying it?

For an application I want to render things in the background, even when the app is not currently displayed. The official docs describe opening a GL context via a GLSurfaceView. For rendering into another target without displaying the graphics, there does not seem to be a ready-made solution.
So the question is: how do you create a GL context without a GLSurfaceView on Android?
Use case: record a video and add the current time as text directly into the video. CPU-based image manipulation is simply too slow to do this live, at least if the video should also be displayed while recording. OpenGL could render everything straight into a framebuffer/renderbuffer.
You don't have to use GLSurfaceView to do OpenGL rendering. If you look at the source code, you can see that it uses only publicly available APIs. It is merely a convenience class that makes the most common use of OpenGL on Android (drawing the content of a view) very easy. If your use case is different, then you just... don't use it.
The API you use for creating contexts and rendering surfaces more directly is EGL. There are two versions of it available in the Android Java frameworks: EGL10 and EGL14. EGL10 is very old, and I would strongly recommend using EGL14.
The EGL calls are not really documented in the Android SDK documentation, but you can use the man pages on www.khronos.org to see the calls explained.
Using EGL directly, you can create a context and an off-screen rendering surface that will allow you to use OpenGL without any kind of view.
I posted complete code showing how to create a context with an off-screen rendering surface in a previous answer here: GLES10.glGetIntegerv returns 0 in Lollipop only.
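Condensed to its essentials, a GLES 2.0 context with a small off-screen pbuffer surface can be created roughly like this with EGL14 (error checking trimmed; treat it as a sketch rather than production code):

    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLContext;
    import android.opengl.EGLDisplay;
    import android.opengl.EGLSurface;

    // Minimal sketch of an off-screen GLES 2.0 context via EGL14.
    public final class OffscreenEgl {
        public static void makeCurrentOffscreen(int width, int height) {
            EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
            int[] version = new int[2];
            if (!EGL14.eglInitialize(display, version, 0, version, 1)) {
                throw new RuntimeException("eglInitialize failed");
            }

            // Ask for a config that supports pbuffer (off-screen) surfaces and GLES 2.
            int[] configAttribs = {
                    EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                    EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
                    EGL14.EGL_RED_SIZE, 8,
                    EGL14.EGL_GREEN_SIZE, 8,
                    EGL14.EGL_BLUE_SIZE, 8,
                    EGL14.EGL_ALPHA_SIZE, 8,
                    EGL14.EGL_NONE
            };
            EGLConfig[] configs = new EGLConfig[1];
            int[] numConfigs = new int[1];
            EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

            int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
            EGLContext context = EGL14.eglCreateContext(
                    display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

            // The pbuffer surface; you would typically render into your own
            // framebuffer/renderbuffer objects after making the context current.
            int[] surfaceAttribs = {
                    EGL14.EGL_WIDTH, width,
                    EGL14.EGL_HEIGHT, height,
                    EGL14.EGL_NONE
            };
            EGLSurface surface =
                    EGL14.eglCreatePbufferSurface(display, configs[0], surfaceAttribs, 0);

            if (!EGL14.eglMakeCurrent(display, surface, surface, context)) {
                throw new RuntimeException("eglMakeCurrent failed");
            }
            // GLES calls can now be issued on this thread, with no view involved.
        }
    }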
Already answered here and here
I think the simplest approach is the one described in the second link, where they say:
set it's view size 1pixel&1pixel, put it visible(user cannot see it) and in OnDraw bind FBO to the main texture and render it
So you would still create a GLSurfaceView (just to get a context) but you would make it 1x1 and invisible.
Or you could use a translucent GLSurfaceView as suggested in my comment (I am not sure this will work on every device; it seems to be a bit tricky to set up).

OpenCV eye tracking on Android

I'm looking to do basic eye tracking on Android using the OpenCV API. I've found that there seem to be two ways to use OpenCV on Android: either by using its C++ wrapper or by using the JavaCV API. I'm willing to do either, but I'm looking for some ideas or sample code on how I would track basic eye movement with either platform. I'm leaning toward the JavaCV API because it looks easier to use, but I could really use some sort of tutorial on the basics of using it with Android.
Assuming you have already looked into JNI (the Java Native Interface), JavaCV exposes essentially the same functionality as OpenCV. As for eye tracking, you will need to get the live video feed from the camera and locate the participant's eyes in the frames using template matching and blink detection.
You will just have to make your View implement Camera.PreviewCallback in order to get hold of the camera feed.
The OpenCV site's page on eye tracking provides some sample code that will help you track the eyes.
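For instance, once the OpenCV Java bindings are initialised, a Haar cascade can be run on each grayscale frame roughly like this. Copying haarcascade_eye.xml out of your APK to a readable location is up to you; the path below is only a placeholder:

    import org.opencv.core.Mat;
    import org.opencv.core.MatOfRect;
    import org.opencv.core.Rect;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.objdetect.CascadeClassifier;

    // Sketch of per-frame eye detection with OpenCV's Java bindings.
    // The cascade file path is a placeholder; load it from your app's files dir.
    public class EyeDetector {
        private final CascadeClassifier eyeCascade =
                new CascadeClassifier("/data/data/your.app/files/haarcascade_eye.xml");

        // rgbaFrame is the current camera frame as an RGBA Mat.
        public Rect[] detect(Mat rgbaFrame) {
            Mat gray = new Mat();
            Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);

            MatOfRect eyes = new MatOfRect();
            eyeCascade.detectMultiScale(gray, eyes);   // bounding boxes of detected eyes
            return eyes.toArray();
        }
    }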
If you want to see an example of OpenCV on Android, take a look at this open source code.
Hope it helps
