I can't figure out how to correctly display filtered video from the camera on Android...
I wrote this for SDK 8, so I used the scheme below:
Camera.setPreviewDisplay(null); // pass a null surface holder to indicate that I don't want to see the raw camera preview
Camera.setPreviewCallbackWithBuffer() + Camera.addCallbackBuffer() // get the camera data, modify it, and draw it on my GLSurfaceView
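Put together, a minimal sketch of that scheme looks like this (the buffer size assumes the default NV21 preview format, and the filter/render step is only hinted at):

Camera camera = Camera.open();
Camera.Size size = camera.getParameters().getPreviewSize();
// One NV21 frame takes width * height * 12 bits
int bufferSize = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // filter `data` and hand it to the GLSurfaceView renderer here...
        cam.addCallbackBuffer(data); // recycle the buffer for the next frame
    }
});
try {
    camera.setPreviewDisplay(null); // null on purpose: I don't want the raw preview shown
} catch (IOException e) {
    // ignored for this sketch
}
camera.startPreview();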
And this scheme works wonderfully on Android 2.2.*... and I was happy, until I tried the application on 4.* =) my callback for receiving frame data is never called!
According to the documentation, I shouldn't pass null as the argument to setPreviewDisplay: without a surface instance, the video stream won't run. But if I give it a surface, it starts drawing the raw camera preview on that surface...
The question is: how can I correctly draw the filtered camera video myself?!
I am new to the Camera2 framework and am trying to understand the logic behind the creation of capture sessions.
I need a simple thing: preview and record video. I also want to set the correct orientation hint at the time I start recording the video. But I've run into a chicken-and-egg problem.
Here is my logic:
In order to start recording, I am doing this:
val recordRequest = session.device.createCaptureRequest(CameraDevice.TEMPLATE_RECORD).apply {
    // Add the preview and recording surface targets
    addTarget(viewFinder.holder.surface)
    addTarget(recorder.surface)
}.build()
session.setRepeatingRequest(recordRequest, null, cameraHandler)
recorder.setOrientationHint(it) // NOT allowed after getSurface()!
recorder.prepare()              // NOT allowed after getSurface()!
recorder.start()
However, I already called recorder.surface (i.e. getSurface()) when I added the targets above. One might think that I could prepare first and then add the targets; however, the documentation for addTarget() says that "The Surface added must be one of the surfaces included in the most recent call to CameraDevice#createCaptureSession".
That leads to an interesting problem. Whenever I open the app, I need to create the capture session to start previewing the camera image. However, at the point of creation, createCaptureSession() needs to include all the surfaces that will appear in future capture requests. That means I also need to include the recording surface, even if I simply open the camera without recording yet. So how do I get this Surface for recording? Well, the documentation says I can get it from MediaRecorder or from MediaCodec. I want to get it from MediaRecorder, since I want to use CamcorderProfiles. However, as I showed in the code above, once I get the surface from the recorder at the point of session creation, I cannot make any further changes to the recorder at the point of starting the recording, like setting the orientation hint.
The official Camera2Video sample app does a trick: it uses createPersistentInputSurface. However, in their example the camera is fixed, which allows them to allocate enough memory for it up front and use that surface throughout the app lifecycle.
How can this be solved? Am I misunderstanding the concepts here? How can I create the recorder at a later point, when I start recording, but still have the surface for it created earlier, when I open the camera for preview?
Using a persistent input surface is the right approach: create a new MediaRecorder once you know the recording orientation, and set its input surface to the persistent one.
That's exactly what the Camera2Video sample does, as well:
// React to user touching the capture button
capture_button.setOnTouchListener { view, event ->
    when (event.action) {
        MotionEvent.ACTION_DOWN -> lifecycleScope.launch(Dispatchers.IO) {
            // Prevents screen rotation during the video recording
            requireActivity().requestedOrientation =
                    ActivityInfo.SCREEN_ORIENTATION_LOCKED
            // Start recording repeating requests, which will stop the ongoing preview
            // repeating requests without having to explicitly call `session.stopRepeating`
            session.setRepeatingRequest(recordRequest, null, cameraHandler)
            // Finalizes recorder setup and starts recording
            recorder.apply {
                // Sets output orientation based on current sensor value at start time
                relativeOrientation.value?.let { setOrientationHint(it) }
                prepare()
                start()
            }
and the recorder is created with the earlier-created persistent surface:
/** Saves the video recording */
private val recorder: MediaRecorder by lazy { createRecorder(recorderSurface) }
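For illustration, here is a rough Java sketch of the same idea; profile, outputFile, and orientation are placeholders for this sketch, not names from the sample:

// Created once, when the camera is opened, and passed to createCaptureSession()
Surface recorderSurface = MediaCodec.createPersistentInputSurface();

// Later, each time recording starts and the orientation is known,
// build a fresh MediaRecorder on top of the same persistent surface:
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setProfile(profile);              // e.g. a CamcorderProfile
recorder.setOutputFile(outputFile);
recorder.setInputSurface(recorderSurface); // reuse the persistent surface
recorder.setOrientationHint(orientation);  // allowed: getSurface() is never called
recorder.prepare();
recorder.start();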
When you say the camera is fixed, do you mean the app orientation being fixed, or that the sample doesn't support switching front/back cameras? None of that should particularly matter for persistent surfaces; you can create a new one when you switch cameras or change orientations, if you need to.
I have used JavaCV (and OpenCV too) to implement a live face-detection preview on Android, and it works OK. Now I want to take a picture or record a video from the live preview with the face detection included (I mean, when I take a picture, the picture will contain the person and a rectangle around his/her face). I have researched a lot but got no results. Can anyone help me, please?!
What you're looking for is the imwrite() method.
Since your question isn't clear on the use-case, I'll give a generic algorithm, as shown:
imwrite writes a specified Mat object to a file. It accepts 2 arguments, a file name and a Mat object, for example: imwrite("output.jpg", img);
Here's the logic you can follow:
Receive an input frame (Mat input) from the video and run face detection on it using your existing method.
Draw a rectangle on an output image (Mat output).
Use imwrite as: imwrite("face.jpg", output)
In case you want to record all the frames that have a face in them, replace "face.jpg" with a string variable that is updated on each loop iteration, and run imwrite in a loop.
If you wish to record a video, have a look at the VideoWriter class. A sketch of the whole loop body follows below.
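A rough Java sketch of that algorithm, assuming OpenCV 3.x Java bindings and an already-loaded CascadeClassifier named detector (the method name saveFrameWithFaces is just illustrative):

import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

void saveFrameWithFaces(Mat input, CascadeClassifier detector, String fileName) {
    // Run face detection on the input frame
    MatOfRect faces = new MatOfRect();
    detector.detectMultiScale(input, faces);
    // Draw a rectangle around each detected face on an output copy
    Mat output = input.clone();
    for (Rect face : faces.toArray()) {
        Imgproc.rectangle(output, face.tl(), face.br(), new Scalar(0, 255, 0), 2);
    }
    // Write the annotated frame to disk, e.g. "face.jpg"
    Imgcodecs.imwrite(fileName, output);
}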
I'm experimenting with the following Google sample: https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
The sample uses the new Play Services face-detection APIs and draws a square around detected faces in the camera video stream.
I'm trying to figure out whether it is possible to save the frames that have detected faces in them. Following the code, it seems that the face detector's processor is a good place to perform the 'saving', but it only supplies the detection metadata and not the actual frame.
Your guidance will be appreciated.
You can get it in the following way:
Bitmap source = ((BitmapDrawable) yourImageView.getDrawable()).getBitmap();
// detect faces ...
Bitmap faceBitmap = Bitmap.createBitmap(source,
        (int) face.getPosition().x,
        (int) face.getPosition().y,
        (int) face.getWidth(),
        (int) face.getHeight());
Yes, it is possible. I answered a question about getting frames from CameraSource here. The trickiest parts are accessing the CameraSource frames and converting the Frame datatype to a Bitmap. Then, having the frames as Bitmaps, you can pass them to your FaceGraphic class and save them in its draw() method, since draw() is called only when faces are detected.
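For reference, the Frame-to-Bitmap conversion usually looks something like this (a sketch with `frame` assumed to come from the CameraSource pipeline; despite its name, getGrayscaleImageData() returns the full NV21 buffer for camera frames, and buffer.array() assumes an array-backed buffer):

ByteBuffer buffer = frame.getGrayscaleImageData();
int width = frame.getMetadata().getWidth();
int height = frame.getMetadata().getHeight();

// Re-encode the NV21 data as JPEG, then decode it into a Bitmap
YuvImage yuv = new YuvImage(buffer.array(), ImageFormat.NV21, width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
byte[] jpeg = out.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);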
I am reading the code of the Android Camera2 API sample from here:
https://github.com/googlesamples/android-Camera2Basic
And these lines are confusing:
https://github.com/googlesamples/android-Camera2Basic/blob/master/Application/src/main/java/com/example/android/camera2basic/Camera2BasicFragment.java#L570-L574
The previewRequest builder adds only one target, the surface of the TextureView used for display. But the following line actually passes both surfaces to the session. As I understand it, this should not fire the OnImageAvailableListener during preview, no? So why is the ImageReader's surface added here?
I tried removing the ImageReader's surface there, but got an error when I actually wanted to capture an image...
SOOO CONFUSING!!!
You need to declare all output Surfaces that image data might be sent to at the time you create a CameraCaptureSession. This is just the way the framework is designed.
Whenever you create a CaptureRequest, you add a (list of) target output Surface(s). This is where the image data from the captured frame will go- it may be a Surface associated with a TextureView for displaying, or with an ImageReader for saving, or with an Allocation for processing, etc. (A Surface is really just a buffer which can take the data output by the camera. The type of object that buffer is associated with determines how you can access/work with the data.)
You don't have to send the data from each frame to all registered Surfaces, only to a subset of them. But you can't add a Surface as a target to a CaptureRequest if it wasn't registered with the CameraCaptureSession when the session was created. Well, you can, but passing that request to the session will cause a crash, so don't.
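A minimal sketch of that flow (previewSurface, imageReader, and backgroundHandler are assumed to exist already):

// Declare every surface the camera might ever write to for this session
List<Surface> outputs = Arrays.asList(previewSurface, imageReader.getSurface());
cameraDevice.createCaptureSession(outputs, new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            // The repeating preview request targets only the preview surface,
            // so the OnImageAvailableListener does not fire during preview
            CaptureRequest.Builder builder =
                    cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            builder.addTarget(previewSurface);
            session.setRepeatingRequest(builder.build(), null, backgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) { /* handle error */ }
}, backgroundHandler);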
Environment:
Nexus 7 Jelly Bean 4.1.2
Problem:
I'm trying to make a Motion Detection application that works with RTSP using VideoView.
I wish there was something like an onNewFrameListener:
videoView.onNewFrame(Frame frame)
I've tried to get access to the raw frames of an RTSP stream via VideoView but couldn't find any support for that in the Android SDK.
I found out that VideoView encapsulates Android's MediaPlayer class.
So I dove into the media_jni lib to try to find a way to access the raw frames, but couldn't find the byte buffer or whatever it is that represents a frame.
Question:
Does anyone have an idea where or how I can find this buffer and get access to it?
Or any other idea for implementing motion detection over a VideoView?
Even if it means that I need to recompile AOSP.
You can extend the VideoView and override its draw(Canvas canvas) method.
Set your own bitmap on the canvas received through draw().
Call super.draw(), which will get the frame drawn onto your bitmap.
Access the frame pixels from the bitmap.
class MotionDetectorVideoView extends VideoView {
    public Bitmap mFrameBitmap;
    ...
    @Override
    public void draw(Canvas canvas) {
        // Redirect drawing into our own member bitmap
        // (mFrameBitmap must be allocated to the view's size beforehand)
        canvas.setBitmap(mFrameBitmap);
        super.draw(canvas);
        // Do whatever you want with mFrameBitmap. It now contains the frame.
        ...
        // Allocate `buffer` big enough to hold the whole frame.
        mFrameBitmap.copyPixelsToBuffer(buffer);
        ...
    }
}
I don't know whether this will work. Avoid doing heavy calculations in draw(); start a worker thread there instead.
In your case, if you are working with live motion rather than recorded videos, I would use the camera preview instead of the VideoView. You can use a camera preview callback to catch every frame captured by your camera. The callback interface declares:
onPreviewFrame(byte[] data, Camera camera)
Called as preview frames are displayed.
I think it could be useful for you.
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
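A minimal sketch of that setup (surfaceHolder is your SurfaceView's holder, and the processFrame() helper is just a placeholder for your motion-detection logic):

Camera camera = Camera.open();
try {
    camera.setPreviewDisplay(surfaceHolder);
} catch (IOException e) {
    e.printStackTrace();
}
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // `data` is in the preview format (NV21 by default);
        // compare consecutive frames here to detect motion
        processFrame(data);
    }
});
camera.startPreview();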
Tell me if that is what you are searching for.
Good luck.