Editing Android VideoView frames

Environment:
Nexus 7 Jelly Bean 4.1.2
Problem:
I'm trying to make a motion detection application that works with RTSP using VideoView.
I wish there were something like an onNewFrameListener:
videoView.onNewFrame(Frame frame)
I've tried to get access to the raw frames of an RTSP stream via VideoView but couldn't find any support for that in the Android SDK.
I found out that VideoView encapsulates Android's MediaPlayer class.
So I dug into the media_jni library to try to find a way to access the raw frames, but couldn't find the byte buffer, or whatever it is that represents a frame.
Question:
Does anyone have an idea where or how I can find this buffer and get access to it?
Or any other idea for implementing motion detection over a VideoView?
Even if it means I need to recompile the AOSP.

You can extend VideoView and override its draw(Canvas canvas) method.
Set your own bitmap on the canvas received through draw().
Call super.draw(), which will get the frame drawn onto your bitmap.
Access the frame pixels from the bitmap.
class MotionDetectorVideoView extends VideoView {
    public Bitmap mFrameBitmap;
    ...
    @Override
    public void draw(Canvas canvas) {
        // Redirect the canvas to your own member bitmap.
        canvas.setBitmap(mFrameBitmap);
        super.draw(canvas);
        // Do whatever you want with mFrameBitmap. It now contains the frame.
        ...
        // Allocate `buffer` big enough to hold the whole frame.
        mFrameBitmap.copyPixelsToBuffer(buffer);
        ...
    }
}
I don't know whether this will work. Either way, avoid doing heavy calculations in draw(); hand the frame off to a worker thread instead.
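A minimal sketch of that hand-off, assuming the MotionDetectorVideoView above (detectMotion() is a hypothetical analysis hook, and I haven't verified that redirecting the canvas like this behaves the same on every Android version):
    // Inside MotionDetectorVideoView
    // (imports: java.util.concurrent.ExecutorService, Executors)
    private final ExecutorService mExecutor = Executors.newSingleThreadExecutor();

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        // Allocate the frame bitmap once the view size is known.
        mFrameBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    }

    @Override
    public void draw(Canvas canvas) {
        canvas.setBitmap(mFrameBitmap);
        super.draw(canvas);
        // Copy the frame so draw() returns quickly, then analyze it off the UI thread.
        final Bitmap frame = mFrameBitmap.copy(Bitmap.Config.ARGB_8888, false);
        mExecutor.execute(new Runnable() {
            @Override
            public void run() {
                detectMotion(frame);  // hypothetical motion-analysis hook
            }
        });
    }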

In your case I would use the camera preview instead of the VideoView, since you are working with live motion rather than recorded videos. You can use a camera preview callback to catch every frame captured by your camera. The callback interface declares:
onPreviewFrame(byte[] data, Camera camera)
Called as preview frames are displayed.
I think this could be useful for you:
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
Let me know if that is what you are looking for. Good luck.
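For illustration, a minimal sketch of wiring that up (surfaceHolder is assumed to come from a SurfaceView you already display):
    Camera camera = Camera.open();
    try {
        camera.setPreviewDisplay(surfaceHolder);  // the SurfaceView's holder
    } catch (IOException e) {
        // preview surface unavailable
    }
    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // data holds the frame in the preview format (NV21 by default);
            // diff it against the previous frame here, or on a worker thread.
        }
    });
    camera.startPreview();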

Related

Get detected face bitmap

I'm experimenting with the following Google sample: https://github.com/googlesamples/android-vision/tree/master/visionSamples/FaceTracker
The sample uses the new Play Services face detection APIs and draws a square on detected faces in the camera video stream.
I'm trying to figure out whether it is possible to save the frames that have detected faces in them. Following the code, it seems that the face detector's processor is a good place to perform the saving, but it only supplies the detection metadata and not the actual frame.
Your guidance will be appreciated.
You can get it in the following way:
    Bitmap source = ((BitmapDrawable) yourImageView.getDrawable()).getBitmap();
    // detect faces ...
    Bitmap faceBitmap = Bitmap.createBitmap(source,
            (int) face.getPosition().x,
            (int) face.getPosition().y,
            (int) face.getWidth(),
            (int) face.getHeight());
Yes, it is possible. I answered a question about getting frames from a CameraSource here. The trickiest parts are accessing the CameraSource frames and converting the Frame datatype to a Bitmap. Once you have the frames as Bitmaps, you can pass them to your FaceGraphic class and save them in its draw() method, because draw() is called only when faces are detected.
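For the Frame-to-Bitmap step, here is a rough sketch, assuming the frame carries NV21 data (as CameraSource camera frames normally do); the JPEG round-trip is slow but simple:
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import com.google.android.gms.vision.Frame;
    import java.io.ByteArrayOutputStream;

    public static Bitmap frameToBitmap(Frame frame) {
        int width = frame.getMetadata().getWidth();
        int height = frame.getMetadata().getHeight();
        // Despite the name, this buffer holds the full NV21 frame for camera input.
        byte[] nv21 = frame.getGrayscaleImageData().array();
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        byte[] jpeg = out.toByteArray();
        return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
    }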

Adapting Grafika RecordFBOActivity to work with Android GPUImage

I have an application that uses the Android port of GPUImage as the OpenGL renderer and manager of several filters.
It currently does not have a video implementation, so I am trying to adapt the RecordFBOActivity from the Google grafika repository to work with the GPUImage architecture.
The base GPUImage class manages the GLContext and GLSurfaceView, and the GPUImageRenderer class implements the Renderer class.
This is the class where I am trying to adapt the RenderThread from grafika's RecordFBOActivity. There are a few problems.
First, in the prepareGl() method, I am passing a SurfaceTexture instead of a Surface, as GPUImage doesn't use the SurfaceHolder at all (I think I could implement it, but am trying not to change the base code too much, as I would like to push my implementation back to the aforementioned repo). I know that WindowSurface.java has an overloaded constructor that builds a WindowSurface from a SurfaceTexture as well as from a Surface, but if I do this the mSurface instance variable is always null, as I never have a Surface to pass to it, which causes an NPE in the makeCurrent() method when recording.
Second, GPUImage attaches itself to a GLSurfaceView, not a SurfaceView like the grafika example uses, so I'm a little uncertain whether there are any low-level inconsistencies that may be causing conflicts for me...
Third, and I think this is the main issue, at least at the moment: I can't seem to reconcile GPUImage's camera preview with grafika's WindowSurface. If I comment out the prepareGl() method, GPUImage's setUpSurfaceTexture() sets the camera's preview texture from the SurfaceTexture that is created by glGenTextures(), and the preview works fine, as well as being attached to the filter render chain. However, if I call prepareGl() and pass the exact same SurfaceTexture to the constructor of mWindowSurface, the camera service dies and I get an EGL_BAD_SURFACE error.
Long question with a few moving parts, I know... I will try to edit/update as I clarify the issues and approaches to myself, but I would love to hear anyone's thoughts... particularly @fadden's :D
I was also trying to achieve the same thing and tried what fadden suggested: integrating the CameraSurfaceRenderer functionality into GPUImageRenderer. The preview is fine, but the recording is just a video with black frames. EGL14.eglGetCurrentContext() returns null for the following call, and my guess is that if a new context is created it will not be the same as the one GPUImage uses:
    mVideoEncoder.startRecording(new TextureMovieEncoder.EncoderConfig(
            mOutputFile, 640, 480, 1000000, EGL14.eglGetCurrentContext()));
@Jesses.co.tt, were you able to achieve it?
(As I can't add a comment, this is added as an answer.)
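One hedged guess: eglGetCurrentContext() only returns a real context on the thread where that context is current, so calling startRecording() from the UI thread yields EGL_NO_CONTEXT. Queuing the call onto the GLSurfaceView's render thread might help (mGlSurfaceView is assumed to be the view GPUImage attaches to; untested):
    mGlSurfaceView.queueEvent(new Runnable() {
        @Override
        public void run() {
            // On the GL thread the renderer's EGL context is current,
            // so the encoder can share it.
            mVideoEncoder.startRecording(new TextureMovieEncoder.EncoderConfig(
                    mOutputFile, 640, 480, 1000000, EGL14.eglGetCurrentContext()));
        }
    });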

Android filtered video from camera

I can't understand how to display filtered video from the camera on Android correctly...
I wrote it for SDK 8, so I used the scheme below (a sketch follows at the end):
Camera.setPreviewDisplay(null); // null surface holder to indicate that I don't want to see the raw camera preview.
Camera.setPreviewCallbackWithBuffer() + Camera.addCallbackBuffer() // to get camera data, modify it, and draw it on my GLSurfaceView.
This scheme works wonderfully on 2.2.* Androids... and I was happy until I tried the application on 4.*: my callback function for receiving frame data is not called at all!
According to the documentation, I shouldn't use null as the argument for setPreviewDisplay... without a surface instance, the video stream will not run... but if I give it a surface, it starts drawing the raw camera preview on that surface...
The question is: how can I correctly draw filtered camera video myself?
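For reference, the scheme described above looks roughly like this; the dummy SurfaceTexture at the end is a commonly suggested workaround for 3.0+ (API 11), where a null preview display stops frame delivery (an assumption, not something verified here on 4.*):
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    Camera.Size size = params.getPreviewSize();
    int bufferSize = size.width * size.height
            * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;
    camera.addCallbackBuffer(new byte[bufferSize]);
    camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            // Filter `data`, hand it to the GLSurfaceView for drawing,
            // then recycle the buffer.
            camera.addCallbackBuffer(data);
        }
    });
    try {
        // Dummy, never-displayed SurfaceTexture instead of the null
        // display that 2.2 tolerated.
        camera.setPreviewTexture(new SurfaceTexture(0));
    } catch (IOException e) {
        // handle
    }
    camera.startPreview();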

really confused with setPreviewCallback in Android, need advice

I'm building an application on Android to take frames from the camera, process them, and then display each frame on a SurfaceView, as well as drawing on the SurfaceView via the Canvas, drawBitmap(), and so on.
Just to check: are SurfaceView, Bitmaps, and Canvases the best way to do it? I'm after speed.
Assuming the answer to the above is yes, the question would be: where should I place the following code?
    camera_object.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            ...
        }
    });
Should I place it in onCreate(), or should I place it in surfaceCreated() or surfaceChanged()?
I declared my MainActivity class as follows:
    public class MainActivity extends Activity implements SurfaceHolder.Callback, Camera.PreviewCallback {
and in that class Eclipse forces me to create an override for onPreviewFrame in the MainActivity class as follows:
    public void onPreviewFrame(byte[] data, Camera camera) {
    }
but it never gets called. Should I try to use this function? Is it better to use it? Or is it just an Eclipse thing?
Please advise.
Are you calling setPreviewDisplay(), startPreview(), and setPreviewCallback(this) from the app? Without those you will not get any calls to onPreviewFrame(). In fact, if you are using a SurfaceView, the callback preview buffers are a copy of the actual buffers that are being displayed on the screen, so if you want to display these copied buffers you need to create a new view and overwrite it, which would be inefficient. I would suggest you use a SurfaceTexture instead and use the onFrameAvailable callback to get the frames, then draw and display them manually. An example of this can be found in the PanoramaActivity code of the default Android Camera app.
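A rough sketch of that SurfaceTexture route (API 11+; textureId is assumed to be a GL texture you generated, and mGlSurfaceView a GLSurfaceView in RENDERMODE_WHEN_DIRTY):
    SurfaceTexture texture = new SurfaceTexture(textureId);
    texture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture st) {
            // Called on an arbitrary thread: just schedule a render pass.
            mGlSurfaceView.requestRender();
        }
    });
    try {
        camera.setPreviewTexture(texture);
    } catch (IOException e) {
        // handle
    }
    camera.startPreview();
    // In the renderer's onDrawFrame(), call texture.updateTexImage() on the
    // GL thread, then draw the external texture yourself.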
Without camera_object.setPreviewDisplay(surface_holder); you cannot receive camera callbacks. Also, don't forget:
    surface_view.setVisibility(View.VISIBLE);
    surface_holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
You can hide the camera preview under another view; on 3.0 and higher you can even push the surface out of the screen (display it below the bottom of the screen). I am not sure if the latter trick works on 2.3.6.
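A tiny sketch of that layering trick (my_drawing_view is a hypothetical view showing your filtered output; FrameLayout draws children in order, last on top):
    FrameLayout root = new FrameLayout(this);
    root.addView(surface_view);     // hidden camera preview at the bottom
    root.addView(my_drawing_view);  // your filtered output covers it
    setContentView(root);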

Android image processing get image data quick

I am creating an Android app to do some image processing techniques with the camera and it needs to be fast. This is the pseudo-code of how the entire system works:
1. loop while not finished
1.1 get image frame
1.2 process image for object detection
2. end loop
I actually have questions on the basics of the Camera class:
Is previewing the perceived image from the camera faster than having no preview at all? (The former means using a SurfaceView to preview the image.)
Say I use the takePicture() method: can the image data array be obtained without a preview?
My real question is: what is the best way to obtain the image data (say, a byte[] array) quickly and repeatedly, processing each image as outlined above?
I planned to use the takePicture() method to get the image data, but I need your opinion on whether this is the only way or if there are better ways.
You can set up a SurfaceView as the Camera's preview display and get the data of every preview frame using the PreviewCallback. This would be better than using takePicture() if you don't need the high resolution that takePicture() captures. In other words: if you want to capture images of lower quality at a faster rate, use PreviewCallback; if you want to capture images of higher quality at a very slow rate, use takePicture().
As for your questions, I don't think you can take pictures without using a preview display, but I could be wrong.
class MainActivity extends Activity implements Camera.PreviewCallback, SurfaceHolder.Callback {
    ...
    public void surfaceCreated(SurfaceHolder holder) {
        camera = Camera.open();
        try {
            camera.setPreviewDisplay(holder);  // preview needs a surface
        } catch (IOException e) { /* handle */ }
        camera.setPreviewCallback(this);
        camera.startPreview();                 // frames start flowing from here
        ...
    }

    public void onPreviewFrame(byte[] data, Camera camera) {
        // image data contained in data (NV21 by default)... do as you wish
    }
}
