Android -- analyze encoded frames while recording video

I have a camera app that analyzes video while recording (to detect camera movement).
I use the camera2 API and get frames from the camera while recording video (with MediaRecorder). But the video file also has to be analyzed once more on a server.
The problem is that I analyze uncompressed frames on the phone, while the server analyzes the frames after compression and decompression, and those frames are definitely different.
My MotionDetector takes frames from the camera and calculates the shift between the current frame and the previous frame using phase correlation.
After the video file is uploaded to the server, it is analyzed with the same MotionDetector.
If I compare the shift-between-frames on uncompressed frames to the shift-between-frames on compressed frames -- they are obviously different.
Is there a way to get frames after compression?
Currently the data flow looks like:
1. Frames from the camera
2.1 frames from the camera are displayed on the screen
2.2 frames from the camera are fed to MediaRecorder
2.3 frames from the camera are fed to my MotionDetector
3. MediaRecorder encodes the camera frames into H.264 format.
4. MediaRecorder output is stored to a file
I need the data flow to look like:
1. Frames from the camera
2.1 frames from the camera are displayed on the screen
2.2 frames from the camera are fed into MediaRecorder
3. MediaRecorder encodes the camera frames into H.264 format.
4.1 MediaRecorder output is stored to a file
4.2 MediaRecorder output goes to MediaCodec
5. MediaCodec decodes the frames.
6. frames from MediaCodec are fed to MotionDetector.
Is there a way to do that?

If you stop using MediaRecorder, and just use MediaCodec directly for encoding video, you should be able to inspect the encoded video buffers before writing them to a MediaMuxer for storage to disk.
That's a lot more complicated than just using MediaRecorder directly, but a lot more flexible for cases like this.
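A rough sketch of what that drain loop might look like, assuming an encoder already configured with an input Surface, a MediaMuxer created for the output file, and a trackIndex field; analyzeEncodedFrame() is a hypothetical hook for your own inspection step:
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    int index = encoder.dequeueOutputBuffer(info, 10000 /* us */);
    if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
        break; // no encoded output available right now
    } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        trackIndex = muxer.addTrack(encoder.getOutputFormat()); // happens once, before the first frame
        muxer.start();
    } else if (index >= 0) {
        ByteBuffer encoded = encoder.getOutputBuffer(index);
        if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0 && info.size > 0) {
            analyzeEncodedFrame(encoded, info);               // hypothetical: inspect the H.264 data (leave position/limit untouched)
            muxer.writeSampleData(trackIndex, encoded, info); // still written to the .mp4
        }
        encoder.releaseOutputBuffer(index, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
    }
}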

Related

Extracting YUV frames from a HEVC stream on android

I have an incoming HEVC video stream (over the network), and I am able to render it to a SurfaceView and show it on the screen. But instead I would like to extract the YUV (or RGB) frames without rendering them on screen. Is this possible?
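For what it's worth, a rough sketch of one possible approach (width, height and the input-feeding loop are placeholders): configure the decoder without an output Surface, so decoded frames come back as CPU-accessible YUV images instead of being rendered.
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_HEVC, width, height);
MediaCodec decoder = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_HEVC);
decoder.configure(format, null /* no Surface: buffer output */, null, 0);
decoder.start();
// ... queue the encoded input from the network into the decoder's input buffers ...
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int index = decoder.dequeueOutputBuffer(info, 10000);
if (index >= 0) {
    Image image = decoder.getOutputImage(index); // YUV_420_888: planes 0..2 are Y, U, V
    // process image.getPlanes() here (or convert to RGB yourself)
    decoder.releaseOutputBuffer(index, false /* don't render */);
}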

android - MediaCodec record video with timestamp on each video frame

I need to record video with a timestamp on each video frame. I see an example in CTS which uses InputSurface.java and OutputSurface.java to connect a decoder and an encoder to transcode video files. Is it possible to reuse these two Android Java classes to implement a timestamped video recorder?
I tried to use OutputSurface as the camera preview output and InputSurface as the MediaCodec encoder input, but it seems to record only 2 or 3 frames and then hangs there forever!
Take your time and explore this link to get an idea of how to feed the camera preview into a video file. Once you are confident about the mechanism, you should be able to feed the MediaCodec input surface with some kind of OpenGL magic to put extra graphics on top of the camera's preview. I would recommend tweaking the example code's drawExtra() as a start.
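As a very rough illustration of the kind of drawExtra() tweak meant here (plain GLES20 calls; rendering an actual timestamp string would additionally require drawing text into a texture), something like:
private void drawExtra(int frameNum, int width, int height) {
    // draw a small box whose color changes with the frame number, so it is
    // visible in the encoded output; replace with your own overlay drawing
    float r = (frameNum % 90) / 90.0f;
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    GLES20.glScissor(0, 0, width / 8, height / 8);
    GLES20.glClearColor(r, 0.2f, 0.2f, 1.0f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
}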

Camera Preview Frames interlaced with Media playback Error

I took the code from grafika for Double Decoder and changed it so that one TextureView outputs the MediaPlayer video and the other TextureView outputs the camera preview. It would work fine except that the TextureView with the MediaPlayer output gets frames of the camera preview interlaced into it. It's really funky behavior, and I am not doing anything dynamic to reconfigure the output viewport.
Both TextureViews & SurfaceTextures are separate and are independently assigned to their respective outputs -> MediaPlayer output and camera preview output:
mediaPlayer.setSurface(surface);
camera.setPreviewTexture(surfaceTexture);
Running the MediaPlayer code and the camera code on separate threads does not help. It seems like the SurfaceTexture frame buffers are somehow being shared, and frames are being dropped or interfering with each other? Could this be a performance issue, even though the original example code was designed to decode movies in parallel?
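For reference, the independent wiring described above would look roughly like this (playerView, cameraView, mediaPlayer and camera are assumed to exist already):
playerView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
        mediaPlayer.setSurface(new Surface(st)); // playback output only
    }
    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
    @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
    @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
});
cameraView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override public void onSurfaceTextureAvailable(SurfaceTexture st, int w, int h) {
        try {
            camera.setPreviewTexture(st);        // preview output only
            camera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
    @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
    @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
});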

How can https://github.com/google/grafika be used to re-encode existing videos to have all keyframes

I am trying to re-encode existing video frames to create a video in which all frames are keyframes (I-frames). The current grafika project has a CameraCaptureActivity which in turn uses VideoEncoderCore to encode video. I have modified this to record a video from the camera so that all frames are keyframes, but this module has a lot of moving parts, and that makes it hard to decouple it from the renderer and the camera to get a smooth pipeline of frames going into an encoded video with all supplied frames as keyframes. Any ideas?
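Independently of how the grafika plumbing is untangled, the all-keyframes part itself is mainly an encoder format setting. A rough sketch, assuming a Surface-input MediaCodec encoder; behaviour can vary by vendor, and some encoders instead need KEY_I_FRAME_INTERVAL of 1 plus per-frame PARAMETER_KEY_REQUEST_SYNC_FRAME requests:
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 0); // 0 requests a stream of all key frames (per the docs)
MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // feed the renderer's frames into this
encoder.start();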

How to capture H.264 encoded frame in android?

I understand there are two ways of capturing video in Android:
1) using the SurfaceView API
2) using the MediaRecorder API
I want to capture H.264 encoded frames using Android's (3.0+) default encoder, to send them over the network using RTP.
While using preview callbacks with the SurfaceView and SurfaceHolder classes, we are able to get the raw frames shown as the preview to the user. We were getting the frames in the "onPreviewFrame" method of the "PreviewCallback" class.
But those frames are not H.264 encoded.
So, I tried the "MediaRecorder" API to set H.264 encoding and "SurfaceView" to get the preview frames.
In this case, the preview callbacks are not getting called.
Can you please let me know how to achieve this? Our main aim is to get the H.264 encoded frames (which have been encoded using Android's default codec).
Ref: 1) https://stackoverflow.com/a/8655244/698316
2) Similar issue: http://permalink.gmane.org/gmane.comp.handhelds.android.devel/214422
Can you suggest a way to capture the H.264 encoded frames using Android's default H.264 codec support?
See Spydroid http://code.google.com/p/spydroid-ipcamera/
Basically you let the video encoder write a .mp4 with H.264 to a special file descriptor that calls your code on write. Then strip off the MP4 header and turn the H.264 NALUs into RTP packets.
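The file-descriptor part of that trick can be sketched roughly like this (Spydroid itself uses a LocalSocket; the ParcelFileDescriptor pipe shown here achieves the same idea, and the MP4-stripping/RTP packetizing step is left out):
ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
ParcelFileDescriptor readSide = pipe[0];
ParcelFileDescriptor writeSide = pipe[1];
MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setOutputFile(writeSide.getFileDescriptor()); // write into the pipe instead of a file
recorder.prepare();
recorder.start();
InputStream encodedStream = new ParcelFileDescriptor.AutoCloseInputStream(readSide);
// read the container byte stream here, locate the H.264 NALUs, and send them over RTP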
