SurfaceView or TextureView: which is better for a camera preview? - android

I'm stuck in a situation where I'm working on a camera app for Android. The camera should be a customized camera, not just the built-in one. That's fine. I want the camera to detect eyes while capturing the photo.
But I have a few questions:
What to do for the preview of the camera?
Whether to use TextureView or SurfaceView?
After capturing the image, where should it be shown?
What is OpenCV? If I work with TextureView or SurfaceView, do I still need OpenCV?

You can start with one of the many tutorials, or pick up some boilerplate from GitHub. There is no big difference between TextureView and SurfaceView for this. You don't need OpenCV unless you need some real-time image processing. Whether, where, and when to show the captured picture is entirely your choice.
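To make the first two points concrete, here is a minimal sketch of a camera preview on a TextureView using the old android.hardware.Camera API (the classes are real, but permission checks, orientation handling, and most error handling are omitted; treat it as a starting point rather than a finished implementation):

import java.io.IOException;
import android.app.Activity;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.TextureView;

public class PreviewActivity extends Activity implements TextureView.SurfaceTextureListener {
    private Camera mCamera;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextureView textureView = new TextureView(this);
        textureView.setSurfaceTextureListener(this);
        setContentView(textureView);
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        mCamera = Camera.open();                // default back-facing camera
        try {
            mCamera.setPreviewTexture(surface); // route preview frames into the TextureView
            mCamera.startPreview();
        } catch (IOException e) {
            mCamera.release();
            mCamera = null;
        }
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
        }
        return true;
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {}
}

With a SurfaceView the structure is the same, except you implement SurfaceHolder.Callback and call setPreviewDisplay(holder) instead of setPreviewTexture(surface).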

OpenCV is a library that provides implementations of various features, like face detection, image processing etc.
If you plan to use OpenCV, it provides its own CameraView implementation that can be used for the camera preview.
OpenCV is worth using here because its detection methods are well tested and efficient.
You may want to refer to this project:
https://github.com/hosek/eyeTrackSample
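For a sense of what the OpenCV route looks like: a JavaCameraView (OpenCV's CameraView implementation) delivers frames to a listener, and a CascadeClassifier loaded with one of OpenCV's bundled eye cascades detects eyes in each frame. This is a sketch against the OpenCV 3.x/4.x Java API; copying the cascade file to local storage and initializing OpenCV via OpenCVLoader are omitted:

import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class EyeDetector implements CameraBridgeViewBase.CvCameraViewListener2 {
    private final CascadeClassifier mEyeCascade;

    public EyeDetector(String cascadePath) {
        // e.g. haarcascade_eye.xml from the OpenCV distribution, copied to local storage
        mEyeCascade = new CascadeClassifier(cascadePath);
    }

    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        Mat rgba = inputFrame.rgba();
        MatOfRect eyes = new MatOfRect();
        mEyeCascade.detectMultiScale(inputFrame.gray(), eyes);
        for (Rect r : eyes.toArray()) {    // outline each detected eye
            Imgproc.rectangle(rgba, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
        }
        return rgba;                       // what JavaCameraView will draw
    }

    @Override public void onCameraViewStarted(int width, int height) {}
    @Override public void onCameraViewStopped() {}
}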

Related

Is it possible to run ARCamera in the background without previewing it on Android?

I am fairly new to Android and especially to the various camera systems on this platform. I'm building an app where I need to integrate ARCore only to track the camera pose (among other things like objects in the scene, planes, etc.). I don't want to augment anything in the "real world", so I am not looking to preview the frames being fed to the camera.
I've looked through all of the examples in the arcore-sdk and the sample code in Google's documentation. None of them cover my use case, where I want to fetch the camera's pose without previewing the camera images onto a SurfaceView or similar. I also don't want to 'fake' it by creating a view and hiding it. Has anyone achieved such a thing, and does ARCore even support this?
UPDATE: I found this https://github.com/google-ar/arcore-android-sdk/issues/259 where they mention that it's possible with just an OpenGL context. But I have no clue how to get started. Any samples or pointers would be appreciated!
You can run an ArSession for tracking. ArSession doesn't depend on any View.
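A sketch of what that can look like in practice (Java; it assumes ARCore is installed and the CAMERA permission has been granted). ARCore still wants a GL texture to stream camera images into, so you create an off-screen EGL context (e.g. a 1x1 pbuffer via EGL14, omitted here) plus an OES texture you never display, then poll Session.update() for the pose:

import android.content.Context;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

public class HeadlessTracker {
    // Call on a thread that has a current (off-screen) EGL context.
    public static void track(Context context) throws Exception {
        Session session = new Session(context);

        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);      // this texture is never rendered anywhere
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        session.setCameraTextureName(tex[0]); // ARCore streams camera frames here

        session.resume();
        while (!Thread.interrupted()) {
            Frame frame = session.update();   // blocks until a new camera frame
            if (frame.getCamera().getTrackingState() == TrackingState.TRACKING) {
                Pose pose = frame.getCamera().getPose();  // camera pose in world space
                // use pose.tx(), pose.ty(), pose.tz(), pose.getRotationQuaternion(), ...
            }
        }
        session.pause();
    }
}

This matches the hint in the arcore-android-sdk issue linked above: the hard requirement is a current OpenGL context that owns the texture, not a visible view.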

camera preview with camera2 library

Is there any way to show a camera preview using the camera2 API while the application is running? I only need a single method that shows the camera preview inside the app (not taking pictures or opening the camera app).
Take a look at CameraView, an unofficial support library widget for drawing a camera preview easily (it can also take snapshots, but you can ignore that part).
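If you'd rather stay on the raw camera2 API, a minimal repeating preview request looks roughly like this (a sketch: it assumes the CAMERA permission has already been granted and a TextureView's SurfaceTexture is available, and it ignores background handlers and most error cases):

import java.util.Collections;
import android.content.Context;
import android.graphics.SurfaceTexture;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.CaptureRequest;
import android.view.Surface;

public class Camera2Preview {
    public static void start(Context context, SurfaceTexture texture) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];   // first (usually back) camera
        final Surface surface = new Surface(texture);

        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice camera) {
                try {
                    final CaptureRequest.Builder request =
                            camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    request.addTarget(surface);
                    camera.createCaptureSession(Collections.singletonList(surface),
                            new CameraCaptureSession.StateCallback() {
                                @Override
                                public void onConfigured(CameraCaptureSession session) {
                                    try {
                                        // keep streaming preview frames until the session closes
                                        session.setRepeatingRequest(request.build(), null, null);
                                    } catch (CameraAccessException ignored) {}
                                }
                                @Override
                                public void onConfigureFailed(CameraCaptureSession session) {}
                            }, null);
                } catch (CameraAccessException ignored) {}
            }
            @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
            @Override public void onError(CameraDevice camera, int error) { camera.close(); }
        }, null);
    }
}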

Android multiple camera preview

Is it possible to broadcast the Android camera preview into two different SurfaceView controls at the same time? I have seen some apps that show effects in different previews in real time; how do they achieve that? I read about TextureView; is this the view to use? Where can I find examples of multiple simultaneous camera previews?
Thanks
Well, as answered in this question, I downloaded the grafika project and revised the "texture from camera" example.
In the RenderThread there is a Sprite2d attribute called mRect.
I just made another instance called mRect2 and configured it with the same parameters that mRect has, except for the rotation, which I set to double:
mRect.setRotation(rotAngle);
mRect2.setRotation(rotAngle * 2);
This is the result.
There is still a lot of code to understand, but it works and seems a very promising path to continue down.
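For completeness, the second sprite also has to be instantiated and drawn each frame inside RenderThread. Based on the field names in grafika's texture-from-camera example at the time (verify them against the current repo; the scale and position values here are hypothetical), the additions look roughly like this:

// in RenderThread, next to the existing mRect setup:
mRect2 = new Sprite2d(mRectDrawable);    // reuse the same Drawable2d geometry
mRect2.setTexture(mTextureId);           // same camera OES texture as mRect
mRect2.setScale(scaleX, scaleY);         // hypothetical: match mRect's scale
mRect2.setPosition(posX2, posY2);        // hypothetical: place it beside mRect

// in the per-frame draw method, right after mRect is drawn:
mRect.draw(mTexProgram, mDisplayProjectionMatrix);
mRect2.draw(mTexProgram, mDisplayProjectionMatrix);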
I don't think it is possible to independently open two camera previews at the same time, since the camera is treated as a shared resource. However, it is possible to draw the same camera feed to multiple SurfaceViews, which is what the apps you describe do.

First steps in creating a chroma key effect using android camera

I'd like to create a chroma key effect using the Android camera. I don't need a step-by-step, but I'd like to know the best way to hijack the Android camera and apply the filters. I've checked out the API and haven't found anything definitive on how to manipulate data coming from the camera. At first I looked into using a SurfaceTexture, but I'm not fully sure how that helps or how to even use it. Then I checked out using a GLSurfaceView, which may be the right direction, but I'm not really sure.
Also, to add to my question: how would I handle both previewing and saving of the image? Would I process the image at minimum twice, once while previewing and once while saving? I think that's probably the best solution.
Lastly, would it make sense to create a C/C++ wrapper to handle the processing, to optimize speed?
Any help at all would be greatly appreciated. A link to some examples would also be greatly appreciated.
Thanks.
Your only real option is to use OpenGL ES with a fragment shader (it will require at least OpenGL ES 2.0) and do the chroma key effect on the GPU. The shader itself will be quite easy (google it).
But to do that, you need to display the camera preview with a callback. You will have to implement Camera.PreviewCallback, create a buffer for the image data, and use the setPreviewCallbackWithBuffer method. You can get the basic idea from my answer to a similar question. Note that there is a significant performance problem with this kind of custom camera preview, but it might work on hardware that supports ES 2.0.
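The callback-with-buffer setup looks roughly like this (a sketch against the old android.hardware.Camera API; the default preview format is NV21):

import android.graphics.ImageFormat;
import android.hardware.Camera;

public class PreviewBufferSetup {
    public static void install(final Camera camera) {
        Camera.Parameters params = camera.getParameters();
        Camera.Size size = params.getPreviewSize();
        int bitsPerPixel = ImageFormat.getBitsPerPixel(params.getPreviewFormat());
        byte[] buffer = new byte[size.width * size.height * bitsPerPixel / 8];

        camera.addCallbackBuffer(buffer);
        camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // hand the NV21 frame to the GL thread for texture upload here,
                // then return the buffer so the next frame can reuse it
                cam.addCallbackBuffer(data);
            }
        });
        camera.startPreview();
    }
}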
To display the preview with OpenGL, you will need to extend GLSurfaceView and also implement GLSurfaceView.Renderer. Then you bind the camera preview frame as a texture with glTexImage2D to a simple rectangle, and the rest is handled by shaders. See how to use shaders in ES here, or if you have no experience with shaders, this tutorial might be a good start.
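The chroma key shader itself really is short. A sketch, kept as a Java string constant so it can live next to the renderer (the uniform and varying names are made up for this example, and the threshold needs tuning):

public final class ChromaKeyShader {
    // GLSL ES 2.0 fragment shader: pixels close to the key color become transparent
    public static final String FRAGMENT =
            "precision mediump float;\n" +
            "uniform sampler2D uTexture;\n" +   // the uploaded preview frame
            "uniform vec3 uKeyColor;\n" +       // e.g. pure green: (0.0, 1.0, 0.0)
            "uniform float uThreshold;\n" +     // similarity cutoff, e.g. 0.4
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "    vec4 color = texture2D(uTexture, vTexCoord);\n" +
            "    float diff = distance(color.rgb, uKeyColor);\n" +
            "    gl_FragColor = vec4(color.rgb, diff < uThreshold ? 0.0 : color.a);\n" +
            "}\n";
}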
To the other question: you could save the current image from the preview, but the preview has a lower resolution than a taken picture, so you will probably want to take a picture and then process it separately (you can reuse the same shader for it).
As for C++, it's a lot of additional effort with a questionable payoff, though it can improve performance if done right. Try this article on a similar topic; it describes how to use the NDK to process the camera preview and display it in OpenGL. But if you were thinking of doing the chroma key effect itself in C++, it would be significantly slower than shaders.
You can check out this library: https://github.com/cyberagent/android-gpuimage.
It provides a framework for doing image processing on the device's GPU using GL shaders.
There is also a sample showing how to use the library with the camera.
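Typical usage of the library is only a few lines (a sketch based on its README; GPUImageGrayscaleFilter is just a stand-in here, since you would plug in a chroma key filter instead, for example a custom GPUImageFilter subclass built around a shader like the one above):

import android.content.Context;
import android.graphics.Bitmap;
import jp.co.cyberagent.android.gpuimage.GPUImage;
import jp.co.cyberagent.android.gpuimage.GPUImageGrayscaleFilter;

public class GpuImageDemo {
    public static Bitmap apply(Context context, Bitmap input) {
        GPUImage gpuImage = new GPUImage(context);
        gpuImage.setImage(input);                          // also accepts a Uri or File
        gpuImage.setFilter(new GPUImageGrayscaleFilter()); // stand-in; use a chroma key filter
        return gpuImage.getBitmapWithFilterApplied();
    }
}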
There is a Chroma-Key-Project on Google Code: http://code.google.com/p/chroma-key-project/ It includes a way to upload pictures that are taken using chroma key:
After an exhaustive search online, I have failed to find any open source projects working with Chroma-keying for Android devices. The aim of this project is to provide a useful Chroma-key library, that will make it easy to implement applications and games that can take pictures in front of a Green or Blue screen, and apply the pictures on a chosen background. Furthermore, the application will also allow the user to upload the picture using Intent.

Why is the OpenCV camera faster than the Android camera when capturing video on Android

In a project on Android, I'm trying to capture video and process it in real time (like a Kinect). I tried two methods: using OpenCV, repeatedly calling mCamera.grab() and capture.retrieve(mRgba, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA); or using Android's Camera by repeatedly capturing images.
I feel that the OpenCV camera can capture images faster than the Android one. But why?
OpenCV uses a hack to get low-level access to the Android camera. It avoids several data copies and transitions between the native and managed layers.
