I'm building an Augmented Reality app (using Google Cardboard) that uses WebRTC (I'm using pristine.io's compiled sources) to stream the camera preview to a web client. The user should be able to see "through" the camera.
My problem is: to let the user see through the camera, I need to render the camera preview onto a SurfaceTexture (which is used for the virtual reality view) using a Camera instance. That by itself is easy (and already done), but WebRTC already creates its own Camera instance (in a class called VideoCapturerAndroid), and two of them can't be open at the same time. Also, VideoCapturerAndroid has a private constructor (it offers a factory method instead) and no getter for the camera field. Because of this, I can't reach the field to set the preview texture...
I tried to find the source code so I could simply change that class (e.g. add a getter), but with no success.
I also thought about creating a GLSurfaceView.Renderer to render the GLSurfaceView that the WebRTC library draws on to a SurfaceTexture (explained here: https://coderwall.com/p/6koh_g/rendering-any-android-view-directly-to-an-opengl-texture), but the renderer is already set in VideoRendererGui, and according to the docs you can only set a view's renderer once.
Does anyone have an idea how I can access the camera field in VideoCapturerAndroid or get the preview data elsewhere to render it onto the SurfaceTexture?
I have a Service running in the background which is started by an Activity. The Service itself is a background camera recorder: it starts Camera2 and writes to a Surface provided by the MediaRecorder.
Now that the Activity is running, I want to show the live stream along with the background recording. So far, I have been creating a SurfaceView in the Activity and passing it as a target to Camera2 when the surface is created from the Activity. But I have to reinitialize the Camera2 API each time the Surface gets destroyed (i.e. the Activity goes to the background). Is this the right approach to solve this problem? Is it possible for the Service to own the SurfaceView and pass a reference to its Surface back to the Activity, so that it can display the live feed without reinitializing the camera device?
Not directly; the SurfaceView has to live in the app process because it's closely tied to the app's UI state (and as you've noticed, it gets torn down every time the app goes into the background), so it can't live in the Service.
However, you could have the Service receive the camera frames and resend them to the app while it's in the foreground. For example, you could use an ImageReader to read YUV frames from the camera, and then write those (with an ImageWriter) to a Surface the app provides from its own ImageReader (or a SurfaceView, if you can do the YV12 bit described below). Then the camera doesn't need to be reconfigured when the app appears or disappears; while the app is gone, the service simply drops the images (see the sketch below).
That does require the app to draw its own YUV frames, which is annoying, since there's unfortunately no requirement that a TextureView or SurfaceView will accept YUV_420_888 buffers. If your camera device supports YV12 output, then that can be written directly to a SurfaceView or TextureView; otherwise, you'll probably need to implement some custom translation. Most efficiently that can be done in JNI code, where you can use the NDK ANativeWindow APIs to lock the Surface from a SurfaceView, set its format to YV12, and write the YUV_420_888 image data into it (translating pixel stride/row stride as needed).
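A rough sketch of that frame-forwarding idea, assuming API 23+ for ImageWriter; the FrameForwarder class name and the Surface handoff from the Activity are invented for illustration, and the camera session is assumed to already be streaming YUV_420_888 into the Service's ImageReader:

```java
import android.media.Image;
import android.media.ImageReader;
import android.media.ImageWriter;
import android.view.Surface;

class FrameForwarder implements ImageReader.OnImageAvailableListener {
    private ImageWriter appWriter;   // null while the app is in the background

    // Called by the Service when the Activity binds and hands over a Surface
    // (e.g. the Surface of the Activity's own ImageReader), or null when it goes away.
    synchronized void setAppSurface(Surface appSurface) {
        if (appWriter != null) {
            appWriter.close();
            appWriter = null;
        }
        if (appSurface != null) {
            appWriter = ImageWriter.newInstance(appSurface, 2);
        }
    }

    @Override
    public synchronized void onImageAvailable(ImageReader serviceReader) {
        Image image = serviceReader.acquireLatestImage();
        if (image == null) return;
        if (appWriter != null) {
            // Hands the buffer over to the app's Surface; the format and size of the
            // two sides must match. Ownership transfers to the writer, so no close() here.
            appWriter.queueInputImage(image);
        } else {
            // No foreground consumer: just drop the frame; the camera keeps running.
            image.close();
        }
    }
}
```

The Service would register this as the listener on its own ImageReader (serviceReader.setOnImageAvailableListener(forwarder, handler)) and call setAppSurface(null) whenever the Activity unbinds.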
I'm developing an AR application using ARCore and got stuck on the following issue.
I want to get the Camera instance that the ARCore session initializes, in order to configure it myself (change the preview resolution, bitrate, white balance, and so on).
Unfortunately, ARCore uses an object called Session (which is also part of the ARCore library), which in turn uses a Tango object (taken from the Tango project). The Tango object handles the hardware camera through JNI calls.
ARCore doesn't have an API to configure the camera, and getting it through reflection seems like a bad idea.
Anybody?
I am new to Android. I want to take pictures in the background, without a surface view/preview. I have searched online, but the methods I found don't seem to work for me. I want to use the latest Camera2 API.
Regards!
Muhammad Awais
Just create an ImageReader, and a camera capture session with that ImageReader's Surface. No need to have a SurfaceView or TextureView as well.
You'll need to stream some number of captures before starting to save any, though, to ensure that the auto-exposure/focus/etc routines of the camera have time to converge.
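A minimal sketch of what that could look like; this assumes the CAMERA permission has already been granted, and context, cameraId, width, height, backgroundHandler, and saveJpeg() are placeholders for whatever your app already has:

```java
// Sketch only: error handling is reduced to comments.
void takePictureWithoutPreview(Context context, String cameraId,
                               int width, int height, Handler backgroundHandler)
        throws CameraAccessException {
    final int warmupFrames = 30;  // let auto-exposure/focus/white-balance converge
    final ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2);
    reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
        int frames = 0;
        @Override public void onImageAvailable(ImageReader r) {
            try (Image image = r.acquireNextImage()) {
                if (image != null && ++frames > warmupFrames) {
                    saveJpeg(image);  // copy image.getPlanes()[0].getBuffer() to a file
                }
            }
        }
    }, backgroundHandler);

    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    manager.openCamera(cameraId, new CameraDevice.StateCallback() {
        @Override public void onOpened(final CameraDevice camera) {
            try {
                camera.createCaptureSession(Collections.singletonList(reader.getSurface()),
                        new CameraCaptureSession.StateCallback() {
                            @Override public void onConfigured(CameraCaptureSession session) {
                                try {
                                    CaptureRequest.Builder builder = camera.createCaptureRequest(
                                            CameraDevice.TEMPLATE_STILL_CAPTURE);
                                    builder.addTarget(reader.getSurface());
                                    // Stream continuously; the listener skips the warm-up frames.
                                    session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                                } catch (CameraAccessException e) { /* handle */ }
                            }
                            @Override public void onConfigureFailed(CameraCaptureSession session) { /* handle */ }
                        }, backgroundHandler);
            } catch (CameraAccessException e) { /* handle */ }
        }
        @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
        @Override public void onError(CameraDevice camera, int error) { camera.close(); }
    }, backgroundHandler);
}
```

A cleaner variant would run a lower-resolution YUV stream for the warm-up and then issue a single TEMPLATE_STILL_CAPTURE, but the overall shape is the same: the only output target is the ImageReader's Surface.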
The Media Projection package is new in Lollipop, and allows an app to capture the device's screen in real time for streaming to video. I was hoping this could also be used to capture a single still screenshot, but so far I have not been successful. Of course, the first frame of a captured video could work, but I'm aiming for a perfect, lossless screenshot matching the pixel resolution of the device. A still from a captured video cannot provide that.
I've tried a lot of things, but the closest I came to a solution was to first launch an invisible activity. This activity then follows the API example for starting screen capture, which can include asking the user's permission. Once screen capture is enabled, the screen image is live in a SurfaceView. However, I cannot find a way to capture a bitmap from the SurfaceView. There are lots of questions and discussions about this, but no solutions seem to work, and there is some evidence that it is impossible.
Any ideas?
You can't capture the contents of a SurfaceView.
What you can do is replace the SurfaceView with a Surface object that has an in-process consumer, such as SurfaceTexture. In the android-ScreenCapture example linked from the question, mMediaProjection.createVirtualDisplay() wants a Surface to send images to. If you create a SurfaceTexture, and use that to construct a Surface, the images generated by the MediaProjection will be available from an OpenGL ES texture.
If GLES isn't your thing, the ImageReader class can be used. It also provides a Surface that can be passed to createVirtualDisplay(), but it's easier to access the pixels from software.
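For instance, a rough sketch of the ImageReader route might look like this; mMediaProjection, width, height, density, and handler are assumed to be set up already, much as in the android-ScreenCapture sample:

```java
// Send the virtual display's output to an ImageReader instead of a SurfaceView.
ImageReader reader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
VirtualDisplay display = mMediaProjection.createVirtualDisplay("screenshot",
        width, height, density, DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        reader.getSurface(), null, null);

reader.setOnImageAvailableListener(r -> {
    try (Image image = r.acquireLatestImage()) {
        if (image == null) return;
        Image.Plane plane = image.getPlanes()[0];
        int pixelStride = plane.getPixelStride();
        int rowStride = plane.getRowStride();
        // Rows may be padded, so build the bitmap at the padded width and crop.
        int paddedWidth = rowStride / pixelStride;
        Bitmap padded = Bitmap.createBitmap(paddedWidth, height, Bitmap.Config.ARGB_8888);
        padded.copyPixelsFromBuffer(plane.getBuffer());
        Bitmap screenshot = Bitmap.createBitmap(padded, 0, 0, width, height);
        // ... save or display `screenshot`, then release the virtual display and reader.
    }
}, handler);
```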
As you may have noticed, the camera in Android phones stops working when we minimize the app (for example when we start a new application). My question is: is there any way to create an app with the Android camera which keeps recording even when it is minimized, so it could be recording videos while we are doing something different on the phone? Or maybe it is only possible if we create such a camera app without using MediaStore? If you share some links or code which might help me, I'll be grateful. Thanks in advance.
I believe the answer to this is that one must use
public final void setPreviewTexture (SurfaceTexture surfaceTexture)
Added in API level 11
Sets the SurfaceTexture to be used for live preview.
Either a surface or surface texture is necessary for preview,
and preview is necessary to take pictures.
from https://developer.android.com/reference/android/hardware/Camera.html. And
from https://developer.android.com/reference/android/graphics/SurfaceTexture.html:
The image stream may come from either camera preview or video decode.
A SurfaceTexture may be used in place of a SurfaceHolder when specifying the
output destination of a Camera or MediaPlayer object. Doing so will cause all the
frames from the image stream to be sent to the SurfaceTexture object rather than
to the device's display.
and I would really like to try this and send you some code, but I have no phone more recent than Gingerbread, and this was introduced with Honeycomb.
When a surface is associated with an Activity, surfaceDestroyed is called sometime between onPause and onStop as the Activity is being minimised, although, oddly, not when the phone is put to sleep (see How SurfaceHolder callbacks are related to Activity lifecycle?). But I hope that a SurfaceTexture is not destroyed in this way.
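Purely as an illustration of the idea (and with the caveat above that I can't test it myself), a Service along these lines might look like the following, using the android.hardware.Camera API and a SurfaceTexture that isn't attached to any view; whether frames keep flowing in the background is device-dependent:

```java
import android.app.Service;
import android.content.Intent;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.media.CamcorderProfile;
import android.media.MediaRecorder;
import android.os.IBinder;

public class BackgroundRecorderService extends Service {
    private Camera camera;
    private SurfaceTexture previewTexture;
    private MediaRecorder recorder;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        try {
            camera = Camera.open();
            // A SurfaceTexture not attached to any view, just to satisfy the
            // "preview is necessary" requirement quoted above.
            previewTexture = new SurfaceTexture(10);
            camera.setPreviewTexture(previewTexture);
            camera.startPreview();

            camera.unlock();                      // hand the camera to MediaRecorder
            recorder = new MediaRecorder();
            recorder.setCamera(camera);
            recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
            recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
            recorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
            recorder.setOutputFile(getExternalFilesDir(null) + "/background.mp4");
            recorder.prepare();
            recorder.start();
        } catch (Exception e) {
            stopSelf();
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        if (recorder != null) { recorder.stop(); recorder.release(); }
        if (camera != null) { camera.lock(); camera.release(); }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) { return null; }
}
```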