Is it possible to broadcast the Android camera preview to 2 different SurfaceView controls at the same time? I have seen some apps that show effects in different previews in real time; how do they achieve that? I read about TextureView, is this the view to use? Where can I find examples of multiple simultaneous camera previews?
Thanks
Well, as they answered in this question, I downloaded the grafika project and looked into the "texture from camera" example.
In the RenderThread there is a Sprite2d attribute called mRect.
I just made another instance called mRect2 and configured it with the same parameters as mRect, except the rotation, which I doubled:
mRect.setRotation(rotAngle);
mRect2.setRotation(rotAngle*2);
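For reference, this is roughly how the second sprite gets wired up (a sketch based on Grafika's Sprite2d API; mRectDrawable, mTextureId, mTexProgram and mDisplayProjectionMatrix are the fields the example's RenderThread already has, and the scale/position values just mirror whatever is set on mRect):

mRect2 = new Sprite2d(mRectDrawable);          // built from the same Drawable2d as mRect
mRect2.setTexture(mTextureId);                 // same camera texture as mRect
mRect2.setScale(scaled, scaled);               // same scale/position the example computes for mRect
mRect2.setPosition(mPosX, mPosY);
mRect2.setRotation(rotAngle * 2);              // the only difference: doubled rotation

// In the draw pass, render both sprites with the same program and projection:
mRect.draw(mTexProgram, mDisplayProjectionMatrix);
mRect2.draw(mTexProgram, mDisplayProjectionMatrix);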
This is the result
There is still a lot of code to understand, but it works and seems a very promising path to continue down.
I don't think it is possible to independently open 2 camera previews at the same time, as the camera is treated as a shared resource. However, it is possible to draw to multiple SurfaceViews, which is what the apps you describe do.
Related
I am fairly new to Android and especially to the various camera systems on this platform. I'm building an app where I need to integrate ARCore only to track the camera pose (among other things like objects in the scene, planes, etc.). I don't want to augment anything in the "real world", so I am not looking to preview the frames coming from the camera. I've looked through all of the examples in the arcore-sdk and the sample code in Google's documentation. None of them cover my use case, where I want to fetch the camera's pose without previewing the camera images on a SurfaceView or something similar. I also don't want to 'fake' it by creating a view and hiding it. I would like to know if anyone has experience with such a thing, any ideas on how to achieve it, or whether it can be achieved at all. Does ARCore even support this?
UPDATE: I found this https://github.com/google-ar/arcore-android-sdk/issues/259 where they mention that it's possible with just an OpenGL context. But I have no clue how to get started. Any samples or pointers would be appreciated!
You can run an ArSession just for tracking; it doesn't depend on any View.
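A minimal sketch of what that can look like with the Java API (com.google.ar.core.*); the offscreen EGL context and the cameraTextureId OES texture are assumptions on my part, since ARCore still needs a texture to receive camera images even if you never render it:

import android.content.Context;
import com.google.ar.core.Camera;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Tracking-only ARCore session: no SurfaceView, no rendering.
public class PoseTracker {
    private final Session session;

    public PoseTracker(Context context, int cameraTextureId) throws Exception {
        session = new Session(context);
        Config config = new Config(session);
        config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
        session.configure(config);
        session.setCameraTextureName(cameraTextureId);  // still required, even without a preview
        session.resume();
    }

    /** Call repeatedly from your own loop; returns null until ARCore is tracking. */
    public Pose pollPose() throws Exception {
        Frame frame = session.update();
        Camera camera = frame.getCamera();
        if (camera.getTrackingState() != TrackingState.TRACKING) {
            return null;
        }
        return camera.getPose();   // pose.tx()/ty()/tz() give the camera translation
    }
}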
I am currently working on an application in Android Studio that messes with the colors of the live camera feed from a phone's camera. For example, I may want to filter out all the reds, or maybe I want to make the displayed camera image black-and-white.
However, I haven't really found much on how to do this. I've found tutorials on using both the deprecated Camera class and the android.hardware.camera2 package. My preferred sample code was for camera2, found directly here (takes you directly to the Java class files, not the whole project).
So does anyone know how to use camera2 to do what I want? Do I need to use the deprecated Camera class instead? My idea is that I need an activity whose main job is displaying images, while behind the scenes the phone camera is running and sending each image (in whatever format, e.g. a Bitmap) to have its colors messed with (by some code I will write), which then sends the image to be displayed in the main activity.
So that is three main pieces: (1) Camera to Bitmap, to get what is currently seen by the phone Camera and store it in code; (2) mess with the colors of the Bitmap to distort the current view in my desired way; and (3) then a way of taking the resulting distorted view and displaying that on the screen. Of course, as mentioned, it's the first and last of the three just mentioned that I really need help with.
Please let me know what other details will be helpful to know about.
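Not a full answer, but a hedged sketch of pieces (2) and (3) may help frame it: assuming piece (1) already hands you a Bitmap per frame (typically via an ImageReader with camera2, or a preview callback with the old API), a ColorMatrix can do the color manipulation and an ImageView can display the result. mImageView and the grayscale matrix below are just placeholders.

// Piece (2): filter the frame. setSaturation(0) gives black-and-white; a custom
// ColorMatrix could zero out the red channel instead.
// (Uses android.graphics.Bitmap/Canvas/Paint/ColorMatrix/ColorMatrixColorFilter.)
private Bitmap applyFilter(Bitmap source) {
    Bitmap result = Bitmap.createBitmap(source.getWidth(), source.getHeight(),
            Bitmap.Config.ARGB_8888);
    ColorMatrix matrix = new ColorMatrix();
    matrix.setSaturation(0f);
    Paint paint = new Paint();
    paint.setColorFilter(new ColorMatrixColorFilter(matrix));
    new Canvas(result).drawBitmap(source, 0, 0, paint);
    return result;
}

// Piece (3): show the filtered frame from the UI thread.
private void showFrame(final Bitmap filtered) {
    runOnUiThread(new Runnable() {
        @Override public void run() {
            mImageView.setImageBitmap(filtered);   // mImageView: an ImageView in the layout
        }
    });
}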
I am currently working on an Augmented Reality application for Android devices and want to modify the camera preview with information I gained in previous steps.
Now there seem to be 2 ways to do this:
I could modify the data[] array to directly apply my overlay on the actual preview frame.
I could make a second view which holds the information to be displayed and display it over the camera preview.
Which of these approaches is the better one, and for what reason? I cannot see any advantages of one over the other yet.
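For reference, a minimal sketch of the second approach (all names here are placeholders; the point is just that a transparent View stacked above the SurfaceView in a FrameLayout is drawn on top of the preview):

// A plain View whose onDraw() paints the AR information over the preview.
public class OverlayView extends View {
    public OverlayView(Context context) { super(context); }

    @Override protected void onDraw(Canvas canvas) {
        // draw the markers/labels computed in the earlier processing steps
    }
}

// In the Activity:
@Override protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    FrameLayout root = new FrameLayout(this);
    SurfaceView preview = new SurfaceView(this);   // camera preview target
    OverlayView overlay = new OverlayView(this);
    root.addView(preview);
    root.addView(overlay);                         // later children draw on top
    setContentView(root);
    // then open the camera and call setPreviewDisplay(preview.getHolder()) as usual
}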
I'd like to create a chroma key effect using the Android camera. I don't need a step-by-step, but I'd like to know the best way to hijack the Android camera and apply the filters. I've checked out the API and haven't found anything super definitive on how to manipulate data coming from the camera. At first I looked into using a SurfaceTexture, but I'm not fully aware of how that helps or how to even use it. Then I checked out using a GLSurfaceView, which may be the right direction, but I'm not really sure.
Also, to add to my question, how would I handle both previewing and saving the image? Would I process the image at minimum twice: once while previewing and once while saving? I think that's probably the best solution.
Lastly, would it make sense to create a C/C++ wrapper to handle the processing to optimize speed?
Any help at all would be greatly appreciated. A link to some examples would also be greatly appreciated.
Thanks.
The only real option is to use OpenGL ES and a fragment shader (it will require at least OpenGL ES 2.0) and do the chroma key effect on the GPU. The shader itself will be quite easy (Google for examples).
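For illustration, such a shader might look roughly like this (a sketch only: the key color and tolerance are arbitrary, and it assumes the preview frame is uploaded as a plain RGB sampler2D, as described below):

// Chroma-key fragment shader (GLSL ES 2.0), embedded as a Java string the way
// Android GLES code usually carries shaders. Pixels close to the key color are discarded.
private static final String CHROMA_KEY_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "varying vec2 vTextureCoord;\n" +
        "uniform sampler2D sTexture;\n" +
        "void main() {\n" +
        "    vec4 color = texture2D(sTexture, vTextureCoord);\n" +
        "    vec3 keyColor = vec3(0.0, 1.0, 0.0);\n" +        // pure green; tune for your screen
        "    if (distance(color.rgb, keyColor) < 0.4) {\n" +  // 0.4 = arbitrary tolerance
        "        discard;\n" +
        "    }\n" +
        "    gl_FragColor = color;\n" +
        "}\n";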
But to do that, you need to handle the camera preview yourself with a callback. You will have to implement Camera.PreviewCallback, create a buffer for the image data and use the setPreviewCallbackWithBuffer method. You can get the basic idea from my answer to a similar question. Note that there is a significant problem with the performance of this custom camera preview, but it might work on hardware that supports ES 2.0.
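The callback-with-buffer setup mentioned above looks roughly like this (sketch; mCamera is assumed to be an already-opened android.hardware.Camera, and ImageFormat is android.graphics.ImageFormat):

// Allocate one preview buffer sized for the current preview format and register
// a callback that recycles the buffer after each frame.
private void setUpPreviewCallback() {
    Camera.Parameters params = mCamera.getParameters();
    Camera.Size previewSize = params.getPreviewSize();
    int bufferSize = previewSize.width * previewSize.height
            * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
    mCamera.addCallbackBuffer(new byte[bufferSize]);
    mCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // hand `data` (NV21 by default) to the GL thread for upload/processing
            camera.addCallbackBuffer(data);   // return the buffer for the next frame
        }
    });
}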
To display the preview with OpenGL, you will need to extend GLSurfaceView and also implement GLSurfaceView.Renderer. Then you bind the camera preview frame as a texture with glTexImage2D onto a simple rectangle, and the rest is handled by the shaders. See how to use shaders in ES here, or if you have no experience with shaders, this tutorial might be a good start.
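A bare skeleton of that renderer (sketch only; shader compilation, geometry and the full YUV-to-RGB handling are omitted, and the luminance-only upload is just to keep the glTexImage2D call short; uses android.opengl.GLSurfaceView/GLES20, javax.microedition.khronos.*, java.nio.ByteBuffer):

public class PreviewRenderer implements GLSurfaceView.Renderer {
    private int mTextureId;
    private ByteBuffer mFrame;     // latest preview frame, filled from onPreviewFrame
    private int mWidth, mHeight;   // preview size

    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        mTextureId = tex[0];
        // compile the shaders and set up the rectangle geometry here
    }

    @Override public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override public void onDrawFrame(GL10 gl) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureId);
        // Upload the current frame; GL_LUMINANCE = 1 byte per pixel (Y plane only).
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                mWidth, mHeight, 0, GLES20.GL_LUMINANCE,
                GLES20.GL_UNSIGNED_BYTE, mFrame);
        // draw the textured rectangle with the chroma-key shader
    }
}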
To the other question: you could save the current image from the preview, but the preview has lower resolution than a taken picture, so you will probably want to take a picture and then process it separately (you could use the same shader for it).
As for C++, it's a lot of additional effort with questionable benefit. But it can improve performance if done right. Check out this article; it's on a similar topic and describes how to use the NDK to process the camera preview and display it with OpenGL. But if you were thinking about doing the chroma key effect itself in C++, it would be significantly slower than shaders.
You can check this library: https://github.com/cyberagent/android-gpuimage.
It provides a framework to do image processing on the device's GPU using GL shaders.
There is also a sample showing how to use the library with a camera.
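A hedged sketch of the camera wiring with that library (method names follow its sample code and may differ between versions; the layout id and the grayscale filter are placeholders for whatever filter you end up using):

@Override protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_camera);

    GLSurfaceView glView = (GLSurfaceView) findViewById(R.id.gl_surface_view);
    GPUImage gpuImage = new GPUImage(this);
    gpuImage.setGLSurfaceView(glView);
    gpuImage.setFilter(new GPUImageGrayscaleFilter());  // swap in your own GPUImageFilter here

    Camera camera = Camera.open();
    gpuImage.setUpCamera(camera);   // routes preview frames through the filter onto glView
}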
There is a Chroma-Key-Project on Google Code: http://code.google.com/p/chroma-key-project/ It includes a way to upload pictures that are taken using chroma key:
After an exhaustive search online, I have failed to find any open source projects working with Chroma-keying for Android devices. The aim of this project is to provide a useful Chroma-key library, that will make it easy to implement applications and games that can take pictures in front of a Green or Blue screen, and apply the pictures on a chosen background. Furthermore, the application will also allow the user to upload the picture using Intent.
I want to write an activity that:
Shows the camera preview (viewfinder), and has a "capture" button.
When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that work on every device? A link to a simple open source application that takes pictures would be the ideal answer.
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE intent. See this question (a minimal sketch of this approach follows the list).
Use the Camera API directly. See this example or this question (with lots of references).
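A minimal sketch of approach 1 (thumbnail-only variant; REQUEST_IMAGE_CAPTURE is a placeholder constant):

static final int REQUEST_IMAGE_CAPTURE = 1;

private void dispatchTakePictureIntent() {
    Intent takePicture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePicture.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePicture, REQUEST_IMAGE_CAPTURE);
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        Bitmap thumbnail = (Bitmap) data.getExtras().get("data");  // small thumbnail only
        // hand it back to the caller, e.g. setResult(RESULT_OK, resultIntent); finish();
    }
}

For a full-size image you would pass a MediaStore.EXTRA_OUTPUT Uri in the intent instead of reading the "data" thumbnail.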
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and then a new device would be released with a different problem.
Some of the problems I've encountered:
Handling autofocus
Handling flash
Supporting devices with a front camera, back camera or both
Each device has a different combination of screen resolution, preview resolutions (which don't always match the screen resolution) and picture resolutions.
So in general, I'd not recommend going this route at all unless there is no other way. After two years I dumped my custom code and switched back to the Intent-based approach. Since then I've had much less trouble. The issues I had with the Intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well when used as-is, but here are the issues I ran into when modifying it for my use case:
1. Never take a second picture before the first one has completed, in other words before PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose.
2. Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview size selection logic (a typical sketch follows this list), otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
3. To add more components to the preview screen, a FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke rule #2. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
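A typical preview-size-selection helper, similar in spirit to the one in the CommonsWare example (sketch; the aspect-ratio handling is deliberately naive):

private Camera.Size getBestPreviewSize(int width, int height, Camera.Parameters parameters) {
    Camera.Size best = null;
    for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
        if (size.width <= width && size.height <= height) {
            if (best == null || size.width * size.height > best.width * best.height) {
                best = size;   // keep the largest size that still fits the view
            }
        }
    }
    return best;
}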
I also posted a minor issue relating to Camera.open() on GitHub.
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why to achieve best performance with camera preview callbacks (e.g. to encode them in a low-latency video for live communication), I recommend to open camera in a new HandlerThread, as shown here.