I have a Service running in the background, started by an Activity. The Service is a background camera recorder: it starts Camera2 and writes to a Surface provided by the MediaRecorder.
When the Activity is running, I want a live preview alongside the background recording. So far, I have been creating a SurfaceView in the Activity and passing it as a target to Camera2 once the surface is created. But I have to reinitialize the Camera2 API every time the Surface gets destroyed (i.e. the Activity goes into the background). Is this the right approach to solve this problem? Is it possible for the Service to own the SurfaceView and pass a reference to its Surface back to the Activity, so that it can display the live feed without reinitializing the camera device?
Not directly; the SurfaceView has to live in the app process because it's closely tied to the app's UI state (and, as you've noticed, it gets torn down every time the app goes into the background). So it can't live in the service.
However, you could have the Service receive camera frames and resend them to the app when it's in the foreground. For example, you could use an ImageReader to read YUV frames from the camera, and then write those (with an ImageWriter) to a Surface the app provides from its own ImageReader (or SurfaceView, if you can do the YV12 bit described below). Then the camera doesn't need to be reconfigured when the app appears or disappears; while the app is gone, the service simply drops the frames.
That does require the app to draw its own YUV frames, which is annoying, since there's unfortunately no requirement that a TextureView or SurfaceView will accept YUV_420_888 buffers. If your camera device supports YV12 output, then that can be written directly to a SurfaceView or TextureView; otherwise, you'll probably need to implement some custom translation. Most efficiently that can be done in JNI code, where you can use the NDK ANativeWindow APIs to lock the Surface from a SurfaceView, set its format to YV12, and write the YUV_420_888 image data into it (translating pixel stride/row stride as needed).
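Sketching that forwarding idea (ImageWriter requires API 23; the field names, the setUpForwarding/onAppSurfaceAvailable methods, and the Binder hand-off are assumptions for illustration, not from the answer above):

    // Inside the recording Service. The camera session is configured once,
    // with mServiceReader.getSurface() as one of its targets.
    private ImageReader mServiceReader;
    private ImageWriter mAppWriter;  // null while the app is in the background

    void setUpForwarding(int width, int height, Handler handler) {
        mServiceReader = ImageReader.newInstance(
                width, height, ImageFormat.YUV_420_888, /*maxImages=*/ 4);
        mServiceReader.setOnImageAvailableListener(reader -> {
            Image frame = reader.acquireLatestImage();
            if (frame == null) return;
            if (mAppWriter != null) {
                // Forward the buffer to the app's Surface; the writer takes
                // ownership and closes the Image once it has been consumed.
                mAppWriter.queueInputImage(frame);
            } else {
                frame.close();  // app not visible: drop the frame
            }
        }, handler);
    }

    // Called (e.g. over a Binder) when the foreground app hands over the
    // Surface of its own ImageReader; the camera session stays untouched.
    void onAppSurfaceAvailable(Surface appSurface) {
        mAppWriter = ImageWriter.newInstance(appSurface, /*maxImages=*/ 4);
    }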
Related
I want to extend an app from Camera1 to Camera2 depending on the API. One core mechanism of the app consists in taking preview pictures at a rate of about 20 pics per second. With Camera1, I implemented that by creating a SurfaceView, adding a Callback on its holder and, after creation of the surface, accessing the preview pics via periodic setOneShotPreviewCallback calls. That was pretty easy and reliable.
Now, when studying Camera2, I came "from the end" and managed to convert YUV_420_888 to Bitmap (see YUV420_888 to Bitmap Conversion). However, I am now struggling with the "capture technique". From the Google example I see that you need to make a setRepeating CaptureRequest with CameraDevice.TEMPLATE_PREVIEW for displaying the preview, e.g. on a SurfaceView. That is fine. However, in order to take an actual picture I need to make another capture request with (this time) builder.addTarget(imageReader.getSurface()), i.e. the data will be available within the onImageAvailable method of the ImageReader.
The problem: creating the capture request is a rather heavy operation, taking about 200 ms on my device. Therefore, using a capture request (whether with TEMPLATE_STILL_CAPTURE or TEMPLATE_PREVIEW) cannot be a feasible approach for capturing 20 images per second, as I need. The proposals I found here on SO are primarily based on the (educationally moderately efficient) Google example, which I don't really understand...
I feel the solution must be to feed the ImageReader with a continuous stream of preview pics, which can then be picked from there at a given frequency. Can someone please give some guidance on how to implement this? Many thanks.
If you want to send a buffer to both the preview SurfaceView and to your YUV ImageReader for every frame, simply add both Surfaces to the repeating preview request as targets.
Generally, a capture request can target any subset (or all) of the session's configured output targets.
Also, if you do want to only capture an occasional frame to your YUV ImageReader with .capture(), you don't have to recreate the capture request builder each time; just call .build() again on the same builder, or just reuse the actual constructed CaptureRequest if you're not changing any settings.
Even with this occasional capture, you probably want to include the preview Surface as a target in the YUV capture request, so that there's no skipped frame in the displayed preview.
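A minimal sketch of such a dual-target repeating request, assuming cameraDevice, session, previewSurface, yuvReader, and handler already exist and that both Surfaces were passed to createCaptureSession():

    void startPreview() throws CameraAccessException {
        CaptureRequest.Builder builder =
                cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(previewSurface);          // the SurfaceView's Surface
        builder.addTarget(yuvReader.getSurface());  // YUV_420_888 ImageReader

        // Every frame now reaches both outputs, so onImageAvailable fires at
        // the full preview rate with no extra capture() calls. For occasional
        // capture() calls instead, reuse this builder (or the built request)
        // rather than recreating it each time.
        session.setRepeatingRequest(builder.build(), /*callback=*/ null, handler);
    }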
I'm using libvlc in an Android app to play a network stream; the native code draws the image on a SurfaceView.
Let's assume the video stops, and I have no access to the native code. How can I detect whether the SurfaceView is still updating (i.e. the video is not frozen)?
I tried getViewTreeObserver().addOnDrawListener(); and getViewTreeObserver().addOnPreDrawListener(); but they do not have the effect I'm looking for.
Thanks
You can't get that information from the SurfaceView, because the SurfaceView itself does not know.
The SurfaceView's job is to set up the Surface, and create a hole in the View layout that you can see through. Once it has done these things, it is no longer actively involved in the process of displaying video. The content flows from the decoder to the Surface, which is managed by SurfaceFlinger (the system graphics compositor).
This is, for example, how DRM video works. The app "playing" the video has no access to DRM-protected video frames. (Neither does SurfaceFlinger, actually, but that's a longer story.)
The best way to know if content is still arriving is to ask the video source if it is still sending you content. Another approach would be to change your SurfaceView to a TextureView, and provide an onSurfaceTextureUpdated() callback method.
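A sketch of that TextureView route; the textureView and mLastFrameNanos names are illustrative, not from the answer above:

    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) {
            // Fires once per new frame; if it stops firing, the video is frozen.
            mLastFrameNanos = System.nanoTime();
        }

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) { }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) { }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
            return true;  // true = let the TextureView release the texture
        }
    });

A periodic check of mLastFrameNanos (say, from a Handler that re-posts itself every second) then tells you whether frames have stopped arriving.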
I am not sure exactly what you are trying to achieve here, but you can see whether the surface itself is alive by implementing the SurfaceHolder.Callback interface, which gives you access to the following methods:

surfaceCreated() - called immediately after the surface is first created.
surfaceChanged() - called immediately after any structural changes (format or size) have been made to the surface.
surfaceDestroyed() - called immediately before the surface is destroyed.

Take a look at the documentation for SurfaceView; for SurfaceHolder take a look at this link. Basically, in order to know the state of the surface, you register this callback on the view's holder (see the sketch below); note that it reports the surface's lifecycle, not whether frames are still arriving.
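A minimal sketch of that registration, assuming a surfaceView field and a TAG log tag (both illustrative):

    surfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            Log.d(TAG, "surface created");
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
            Log.d(TAG, "surface changed to " + width + "x" + height);
        }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) {
            Log.d(TAG, "surface destroyed");
        }
    });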
I am trying to decode video samples using the MediaCodec API. I am using a SurfaceView to show the rendered samples. If I press the home button, the app goes into the paused state and the surface is destroyed. When I come back to the resumed state, a new SurfaceView reference is created, but the decoder is unable to pump samples to the SurfaceView, so the screen appears black.
video configure:
videoDecoder.configure(format, surface, null, 0);
So how can I reconfigure videoDecoder in the above statement? It is similar to the following problem:
How to keep decoding alive during screen orientation?
The MediaCodec API does not currently (API 19) provide a way to replace the output Surface.
As in the other question you refer to, I think the way to deal with this will be to decode to a Surface that isn't tied to the view hierarchy (and, hence, doesn't get torn down when the Activity is destroyed).
If you direct the output of the MediaCodec to a SurfaceTexture, you can then render that texture onto the SurfaceView. This will require a bit of GLES code. You can find the necessary pieces in the Grafika sources, but there isn't currently a full implementation of what you want (e.g. PlayMovieActivity decodes video to a SurfaceTexture, but that ST is part of a TextureView, which will get torn down).
The additional rendering step will increase the GPU load, and won't work for DRM-protected video. For most devices and apps this won't matter.
See also the bigflake examples.
Update: I've added this to Grafika, with a twist. See the "Double decode" example. The output goes to a SurfaceTexture associated with a TextureView. If the screen is rotated (or, currently, blanked by hitting the power button), decoding continues. If you leave the activity with the "back" or "home" button, decoding stops. It works by retaining the SurfaceTexture and attaching it to the new TextureView.
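The core of the retention trick looks roughly like this (field names are illustrative; see the "Double decode" source for the full version):

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
        mSavedTexture = st;  // keep it; the decoder continues writing into it
        return false;        // false = don't let the TextureView release it
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
        if (mSavedTexture != null) {
            // Discard the brand-new texture and plug the retained one
            // (still being fed by the decoder) into the new TextureView.
            mTextureView.setSurfaceTexture(mSavedTexture);
        }
    }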
As you may have noticed, the camera in Android phones stops working when we minimize the app (for example, when we start a new application). My question is: is there any way to create an app with the Android camera which records even if we minimize it, so it could be recording videos while we are doing something different on our phone? Or maybe it is only possible if we create such a camera without using MediaStore? If you share some links or code which might help me, I'll be grateful. Thanks in advance.
I believe the answer to this is that one must use
public final void setPreviewTexture (SurfaceTexture surfaceTexture)
Added in API level 11
Sets the SurfaceTexture to be used for live preview.
Either a surface or surface texture is necessary for preview, and preview is necessary to take pictures.
from https://developer.android.com/reference/android/hardware/Camera.html . And
from https://developer.android.com/reference/android/graphics/SurfaceTexture.html :
The image stream may come from either camera preview or video decode. A SurfaceTexture may be used in place of a SurfaceHolder when specifying the output destination of a Camera or MediaPlayer object. Doing so will cause all the frames from the image stream to be sent to the SurfaceTexture object rather than to the device's display.
and I would really like to try this and send you some code, but I have no phone more recent than Gingerbread, and this was introduced with Honeycomb.
Using a surface associated with an Activity, surfaceDestroyed() is called sometime between onPause() and onStop() when the Activity is being minimized, although, oddly, not when the phone is being put to sleep: How SurfaceHolder callbacks are related to Activity lifecycle? But I hope that a SurfaceTexture is not destroyed in this way.
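Untested (see above), but the idea would look something like this with the legacy Camera API; passing a dummy GL texture name to the SurfaceTexture constructor is a common trick rather than documented usage:

    Camera camera = Camera.open();
    // No view involved, so minimizing the Activity doesn't destroy the target.
    SurfaceTexture previewTexture = new SurfaceTexture(/*texName=*/ 10);
    try {
        camera.setPreviewTexture(previewTexture);
    } catch (IOException e) {
        throw new RuntimeException("failed to set preview texture", e);
    }
    camera.startPreview();
    // From here, a foreground Service could attach a MediaRecorder to keep
    // recording while other apps are in front.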
I was wondering whether it is possible to have 2 instances of the camera preview in Android, i.e. running 2 instances of the camera at the same time. If it is, how would one go about this? Would there be a need to implement each instance on a different thread? I have not used the camera API before, so I would appreciate a heads-up on the issue so I don't waste time on it.
Thank you.
It is not possible to have two open connections to the camera - you have to lock the camera in order to get a preview and it can only be locked once. Indeed, if you have the camera locked, and your app crashes before you've unlocked it, then nobody can use the camera!
See http://developer.android.com/reference/android/hardware/Camera.html#open%28int%29
You must call release() when you are done using the camera, otherwise it will remain locked and be unavailable to other applications.
...
RuntimeException: if connection to the camera service fails (for example, if the camera is in use by another process).
That said, you can certainly register a preview callback and take the preview data from your single camera instance to use in multiple views. But be aware of the issues with the YUV format of the raw byte[] data provided by the preview callback: Getting frames from Video Image in Android (note that the preview data is raw from the camera driver and may vary from device to device)
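For illustration, a sketch of that fan-out, assuming the default NV21 preview format; viewOne and viewTwo are hypothetical ImageViews, and the JPEG round trip is the simple-but-slow way to get a displayable Bitmap:

    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            Camera.Size size = cam.getParameters().getPreviewSize();
            YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
                    size.width, size.height, null);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
            byte[] jpeg = out.toByteArray();
            Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
            // One camera, one callback, as many consumers as you like.
            viewOne.setImageBitmap(frame);
            viewTwo.setImageBitmap(frame);
        }
    });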
Ignoring the big Why question, your best bet would be to make a service that interacts with the camera, and go from there.