Android MediaCodec encode and decode using vp8 format - android

I want to develop an app that has two buttons and a SurfaceView (actually a class that extends SurfaceView and implements SurfaceHolder.Callback).
When the user clicks button1, I capture an image via screen capture, encode it to VP8 with MediaCodec, and keep the encoded output in a ByteBuffer (I am not saving it to a file).
When the user clicks button2, I need to show the captured output from that ByteBuffer on the SurfaceView.
I have tried:
MediaCodec decoder = MediaCodec.createDecoderByType("video/x-vnd.on2.vp8");
decoder.dequeueOutputBuffer(mBufferInfo, DEFAULT_TIMEOUT_US);
.....
but I am not able to update the SurfaceView.
How can I update the SurfaceView using the ByteBuffer data?

I found the answer myself.
In mMediaCodec.releaseOutputBuffer(index, render) I had been passing render as false. If the render value is set to true, the decoded frame is drawn to the Surface and the captured image shows up, as in the sketch below.
With releaseOutputBuffer(int index, long renderTimestampNs) we can also render the frame at a specific timestamp, but that overload is only available from API level 21.
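For reference, a minimal sketch of the decode-and-render path, assuming the encoded VP8 frame is already in a ByteBuffer; width, height, surface, encodedFrame, presentationTimeUs and DEFAULT_TIMEOUT_US are placeholder names:
MediaCodec decoder = MediaCodec.createDecoderByType("video/x-vnd.on2.vp8"); // throws IOException
MediaFormat format = MediaFormat.createVideoFormat("video/x-vnd.on2.vp8", width, height);
decoder.configure(format, surface, null, 0); // pass the Surface so decoded frames can be rendered
decoder.start();

// Feed one encoded VP8 frame from the ByteBuffer.
int inIndex = decoder.dequeueInputBuffer(DEFAULT_TIMEOUT_US);
if (inIndex >= 0) {
    ByteBuffer inBuf = decoder.getInputBuffers()[inIndex];
    inBuf.clear();
    int size = encodedFrame.remaining();
    inBuf.put(encodedFrame);
    decoder.queueInputBuffer(inIndex, 0, size, presentationTimeUs, 0);
}

// Drain the decoder and render the frame onto the Surface.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = decoder.dequeueOutputBuffer(info, DEFAULT_TIMEOUT_US);
if (outIndex >= 0) {
    decoder.releaseOutputBuffer(outIndex, true); // true = render to the configured Surface
}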
Thanks..

Related

Is it possible to grab one pixel from the CameraPreview?

I followed the Android Studio tutorial to get the CameraPreview to work (Camera API Android Developer Guide). This works fine for me and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a Bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values of a pixel from the camera preview?
If you are using the Camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. You then override the onImageAvailable function, which receives an ImageReader as its argument, and access the image just recorded with imageReader.acquireNextImage().
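A minimal sketch of that, assuming the capture session is already configured to stream into the reader; width, height and backgroundHandler are placeholder names:
ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader imageReader) {
        Image image = imageReader.acquireNextImage();
        if (image == null) return;
        ByteBuffer yPlane = image.getPlanes()[0].getBuffer();
        ByteBuffer uPlane = image.getPlanes()[1].getBuffer();
        ByteBuffer vPlane = image.getPlanes()[2].getBuffer();
        // Convert the pixel you care about from YUV to RGB (see the formulas below).
        image.close(); // always close the Image, or the reader will stall
    }
}, backgroundHandler);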
With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
The Wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong coefficients for the YUV->RGB conversion; they should be:
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
(Cb = U, Cr = V)
You can also take a look at this stackoverflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.
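For the older API, here is a minimal sketch of reading one pixel straight out of an NV21 preview buffer using those coefficients; the sampled coordinates and the clamp helper are illustrative:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    int width = size.width, height = size.height;
    int x = width / 2, y = height / 2; // sample the centre pixel, for example

    int yVal = data[y * width + x] & 0xFF;
    // In NV21 an interleaved VU plane follows the Y plane, one VU pair per 2x2 block.
    int uvIndex = width * height + (y / 2) * width + (x / 2) * 2;
    int v = (data[uvIndex] & 0xFF) - 128;
    int u = (data[uvIndex + 1] & 0xFF) - 128;

    int r = clamp((int) (yVal + 1.402f * v));
    int g = clamp((int) (yVal - 0.34414f * u - 0.71414f * v));
    int b = clamp((int) (yVal + 1.772f * u));
    // r, g, b now hold the colour of the chosen pixel.
}

private static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}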

Take photo and record video of real-time face detection preview

I have used JavaCV (and OpenCV too) to implement a live face-detection preview on Android, and it works OK. Now I want to take a picture or record a video from the live preview that includes the face detection (I mean that when I take a picture, the picture shows the person with a rectangle around his/her face). I have researched a lot but got no result. Can anyone help me please?
What you're looking for is the imwrite() method.
Since your question isn't clear on the use case, I'll give a generic algorithm, as shown:
imwrite writes a given Mat object to a file and accepts 2 arguments, the file name and the Mat object, for example imwrite("output.jpg", img);
Here's the logic you can follow (see the sketch after this list):
Receive an input frame (Mat input) from the video and run face detection using your existing method.
Draw a rectangle on an output image (Mat output).
Use imwrite as imwrite("face.jpg", output).
In case you want to record all the frames with a face in them, replace "face.jpg" with a string variable that is updated on each loop iteration, and run imwrite in a loop.
If you wish to record a video, have a look at the VideoWriter class.
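A hedged sketch of those steps using OpenCV 3+'s Java bindings (the question mentions both JavaCV and OpenCV, so adapt the imports to whichever wrapper you use); faceRect is assumed to come from your existing detection step:
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class FaceFrameSaver {
    // Draws a rectangle around the detected face and writes the frame to disk.
    public static void saveFrameWithFace(Mat inputFrame, Rect faceRect, String fileName) {
        Mat output = inputFrame.clone();
        Imgproc.rectangle(output, faceRect.tl(), faceRect.br(), new Scalar(0, 255, 0), 2); // green box
        Imgcodecs.imwrite(fileName, output);
    }
}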

GPUimageVideoCamera for android

I am using the GPUImage library to compress a video in my iOS app (GPUImageVideoCamera):
https://github.com/BradLarson/GPUImage/
I have worked with it on iOS and it is very fast.
I want to do the same in my Android app, but it seems that the GPUImageMovie class doesn't exist in the Android library:
https://github.com/CyberAgent/android-gpuimage/tree/master/library/src/jp/co/cyberagent/android/gpuimage
It seems that the Android library only works on images (no video).
Does anyone know if this library can do the job? If not, has someone ported the full GPUImage library? If not, what is the best library I can use that can do the job as fast as GPUImage does?
This is what GPUImageVideoCamera does in iOS (filtering live video):
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, like the following:
videoCamera.audioEncodingTarget = movieWriter;
Is there a library that can do the same in Android?

Change keyframe interval using android camera

Is there a way to change the keyframe frequency when using the Android camera? I'm using an intent to record video and then the MediaMetadataRetriever class to extract frames, but the keyframes are too far apart for my liking.
You can call getFrameAtTime(long timeUs, int option) with OPTION_CLOSEST to get a frame that is not a keyframe, so you can access any frame you want.
The only problem is that, since the frame is not a keyframe, it can come out pixelated.
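A minimal sketch of that call, assuming the recorded file's path is in videoPath and the target position in timeUs:
MediaMetadataRetriever retriever = new MediaMetadataRetriever();
try {
    retriever.setDataSource(videoPath);
    // OPTION_CLOSEST returns the frame nearest to timeUs, even if it is not a sync (key) frame.
    Bitmap frame = retriever.getFrameAtTime(timeUs, MediaMetadataRetriever.OPTION_CLOSEST);
    // ... use the Bitmap ...
} finally {
    retriever.release();
}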

Editing android VideoView frames

Environment:
Nexus 7 Jelly Bean 4.1.2
Problem:
I'm trying to make a Motion Detection application that works with RTSP using VideoView.
I wish that there was something like an onNewFrameListener
videoView.onNewFrame(Frame frame)
I've tried to get access to the raw frames of an RTSP stream via VideoView but couldn't find any support for that in the Android SDK.
I found out that VideoView encapsulates Android's MediaPlayer class.
So I dived into the media_jni lib to try to find a way to access the raw frames, but couldn't find the byte buffer (or whatever it is) that represents a frame.
Question:
Does anyone have an idea of where or how I can find this buffer and get access to it?
Or any other idea for implementing motion detection over a VideoView?
Even if it means I need to recompile the AOSP.
You can extend the VideoView and override its draw(Canvas canvas) method.
Set your own bitmap on the canvas received in draw().
Call super.draw(), which will draw the frame onto your bitmap.
Access the frame pixels from the bitmap.
class MotionDetectorVideoView extends VideoView {
    public Bitmap mFrameBitmap;
    ...
    @Override
    public void draw(Canvas canvas) {
        // Point the canvas at your own member bitmap.
        canvas.setBitmap(mFrameBitmap);
        super.draw(canvas);
        // Do whatever you want with mFrameBitmap. It now contains the frame.
        ...
        // Allocate `buffer` big enough to hold the whole frame.
        mFrameBitmap.copyPixelsToBuffer(buffer);
        ...
    }
}
I don't know whether this will work. Avoid doing heavy calculations in draw(); start a worker thread from there instead.
In your case, if you are working with live motion rather than recorded videos, I would use the camera preview instead of the VideoView. You can use a camera preview callback to catch every frame captured by your camera. The callback declares:
onPreviewFrame(byte[] data, Camera camera)
Called as preview frames are displayed.
I think this could be useful for you.
http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html
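A minimal sketch of wiring that up with the deprecated android.hardware.Camera API; surfaceHolder is assumed to come from your preview SurfaceView:
Camera camera = Camera.open();
try {
    camera.setPreviewDisplay(surfaceHolder);
} catch (IOException e) {
    e.printStackTrace();
}
camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // `data` holds the raw NV21 frame; hand it to your motion-detection code
        // on a worker thread to keep the preview smooth.
    }
});
camera.startPreview();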
Let me know if that is what you are searching for.
Good luck.
