Extracting YUV frames from a HEVC stream on Android

I have an incoming HEVC video stream (over the network). I can render it to a SurfaceView and show it on the screen, but I would instead like to extract YUV (or RGB) frames without rendering them on screen. Is this possible?
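A minimal sketch of one way this might work, assuming the network stream is already split into HEVC access units (the FrameSource interface, the width/height values and the timeouts are illustrative placeholders): configure a MediaCodec decoder with a null output Surface so the decoded YUV ends up in its output ByteBuffers instead of being rendered.

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import java.nio.ByteBuffer;

    public class HevcToYuv {
        // Sketch: decode HEVC access units to YUV without rendering.
        // VPS/SPS/PPS are assumed to arrive in-band at the start of the stream.
        public static void decode(FrameSource source, int width, int height) throws Exception {
            MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_HEVC, width, height);
            MediaCodec decoder = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_HEVC);
            decoder.configure(format, /* surface= */ null, null, 0);  // no Surface -> ByteBuffer output
            decoder.start();

            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            boolean inputDone = false;
            while (true) {
                if (!inputDone) {
                    int inIndex = decoder.dequeueInputBuffer(10_000);
                    if (inIndex >= 0) {
                        ByteBuffer in = decoder.getInputBuffer(inIndex);
                        byte[] accessUnit = source.nextAccessUnit();   // hypothetical stream reader
                        if (accessUnit == null) {
                            decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                            inputDone = true;
                        } else {
                            in.put(accessUnit);
                            decoder.queueInputBuffer(inIndex, 0, accessUnit.length, source.ptsUs(), 0);
                        }
                    }
                }
                int outIndex = decoder.dequeueOutputBuffer(info, 10_000);
                if (outIndex >= 0) {
                    ByteBuffer yuv = decoder.getOutputBuffer(outIndex);
                    // yuv now holds one decoded frame; check the color format reported by
                    // decoder.getOutputFormat() before interpreting the bytes.
                    decoder.releaseOutputBuffer(outIndex, /* render= */ false);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
                }
            }
            decoder.stop();
            decoder.release();
        }

        interface FrameSource {            // hypothetical wrapper around the network stream
            byte[] nextAccessUnit();
            long ptsUs();
        }
    }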

Related

How to feed the video frames directly to Encoder surface, without using Decoder?

I am developing a video compressor app. The general architecture everyone follows is to first decode the video frames onto the decoder's output surface, pass those buffers directly to the encoder's input surface, and then encode them. This uses both a decoder and an encoder, which takes twice the time.
My question is: is it possible to pass my video frames directly to the encoder's input surface without going through the decoder's output surface? I don't want to use a decoder.
If it is possible, kindly suggest a solution.
I looked into the EncodeAndMux example, where the program creates frames and passes them directly to the encoder's input surface. I want to pass my video frames to the encoder surface in a similar way.
Thank you in advance.
If you already have decoded or raw frames, then yes, you can pass them to the encoder's input surface without using a decoder (check this answer); otherwise you have to decode the frames first using the decoder's input/output buffers.
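A minimal sketch of the no-decoder path, assuming the raw frames already live in memory as YUV byte arrays (the resolution, bitrate and YUV layout here are illustrative assumptions, and the exact layout the encoder accepts varies by device). Instead of rendering to the encoder's input surface, this variant queues the bytes straight into the encoder's input buffers:

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import java.nio.ByteBuffer;

    public class RawFrameEncoder {
        // Sketch: encode raw YUV frames held in memory, with no decoder and no input Surface.
        public static MediaCodec create(int width, int height) throws Exception {
            MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
            MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            encoder.start();
            return encoder;
        }

        // Queue one raw YUV frame; frameData's layout must match the color format
        // the encoder actually reports, which differs between devices.
        public static void queueFrame(MediaCodec encoder, byte[] frameData, long ptsUs) {
            int index = encoder.dequeueInputBuffer(10_000);
            if (index >= 0) {
                ByteBuffer input = encoder.getInputBuffer(index);
                input.clear();
                input.put(frameData);
                encoder.queueInputBuffer(index, 0, frameData.length, ptsUs, 0);
            }
            // The encoded output is then drained with dequeueOutputBuffer()
            // and written to a MediaMuxer, as in the EncodeAndMux example.
        }
    }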

Android -- analyze encoded frames while recording video

I have a camera app that analyzes video while recording (to detect camera movement).
I use the camera2 API and I get frames from the camera while recording video (with MediaRecorder). But the video file has to be analyzed once more on a server.
The problem is that I analyze uncompressed frames on the phone, while the server analyzes video frames after compression and decompression. And those frames are definitely different.
My MotionDetector takes frames from the camera and calculates the shift between the current frame and the previous frame using phase correlation.
After the video file is uploaded to the server, it is analyzed with the same MotionDetector.
If I compare the shift-between-frames on uncompressed frames to the shift-between-frames on compressed frames, they are obviously different.
Is there a way to get frames after compression?
Currently the data flow looks like this:
1. Frames come from the camera
2.1 Camera frames are displayed on the screen
2.2 Camera frames are fed to MediaRecorder
2.3 Camera frames are fed to my MotionDetector
3. MediaRecorder encodes the camera frames into H.264
4. The MediaRecorder output is stored to a file
I need the data flow to look like this:
1. Frames come from the camera
2.1 Camera frames are displayed on the screen
2.2 Camera frames are fed to MediaRecorder
3. MediaRecorder encodes the camera frames into H.264
4.1 The MediaRecorder output is stored to a file
4.2 The MediaRecorder output also goes to MediaCodec
5. MediaCodec decodes the frames
6. Frames from MediaCodec are fed to my MotionDetector
Is there a way to do that?
If you stop using MediaRecorder, and just use MediaCodec directly for encoding video, you should be able to inspect the encoded video buffers before writing them to a MediaMuxer for storage to disk.
That's a lot more complicated than just using MediaRecorder directly, but a lot more flexible for cases like this.
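A rough sketch of that drain loop, assuming the encoder was created with MediaCodec.createEncoderByType() and configured with CONFIGURE_FLAG_ENCODE, and that this method runs once until end of stream (in a real implementation the track index and muxer state would live across calls). The compressed frame is visible as a plain ByteBuffer before it reaches the file, so it could also be handed to a decoder instance for the MotionDetector:

    import android.media.MediaCodec;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import java.nio.ByteBuffer;

    public class EncoderDrain {
        // Sketch: drain a MediaCodec encoder and write its output to a MediaMuxer,
        // inspecting each encoded buffer before it is written to disk.
        public static void drain(MediaCodec encoder, MediaMuxer muxer, boolean endOfStream) {
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int trackIndex = -1;
            boolean muxerStarted = false;
            while (true) {
                int index = encoder.dequeueOutputBuffer(info, 10_000);
                if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
                    if (!endOfStream) break;          // no output yet; only keep spinning when draining EOS
                } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    MediaFormat newFormat = encoder.getOutputFormat();
                    trackIndex = muxer.addTrack(newFormat);
                    muxer.start();
                    muxerStarted = true;
                } else if (index >= 0) {
                    ByteBuffer encoded = encoder.getOutputBuffer(index);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                        info.size = 0;                // codec config data is carried in the format, not muxed
                    }
                    // <-- Inspect or copy the compressed frame here (e.g. feed it to a
                    //     MediaCodec decoder) before it is written to the file.
                    if (info.size > 0 && muxerStarted) {
                        encoded.position(info.offset);
                        encoded.limit(info.offset + info.size);
                        muxer.writeSampleData(trackIndex, encoded, info);
                    }
                    encoder.releaseOutputBuffer(index, false);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
                }
            }
        }
    }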

Processing Frames from MediaCodec Output and Updating the Frames on Android

I am doing a project on image processing. I receive a raw H.264 video stream in real time and decode it using MediaCodec. I have successfully displayed the decoded video on a TextureView or SurfaceView. Now I want to process each frame, do something to it using OpenCV4Android, and show the updated video frame on the screen. I know OpenCV has a sample project that demonstrates how to process video frames from the phone camera, but I wonder how to do it if I have another video source.
Also I have some questions on TextureView:
What does the onSurfaceTextureUpdated() from SurfaceTextureListener do? If I call getBitmap() in this function, then does that mean I get each frame of the video? And what about SurfaceTexture.onFrameAvailableListener?
Is it possible to use a hidden TextureView as an intermediate, extract its frames for processing, and render them back to another surface, say, an OpenGL ES texture for displaying?
The various examples in Grafika that use Camera as input can also work with input from a video stream. Either way you send the video frame to a Surface.
If you want to work with a frame of video in software, rather than on the GPU, things get more difficult. You either have to receive the frame on a Surface and copy it to a memory buffer, probably performing an RGB-to-YUV color conversion in the process, or you have to get the YUV buffer output from MediaCodec. The latter is tricky because a few different formats are possible, including Qualcomm's proprietary tiled format.
With regard to TextureView:
onSurfaceTextureUpdated() is called whenever TextureView receives a new frame. You can use getBitmap() to get every frame of the video, but you need to pace the video playback to match your filter speed -- TextureView will drop frames if you fall behind.
You could create a "hidden TextureView" by putting other View elements on top of it, but that would be silly. TextureView uses a SurfaceTexture to convert the video frames to OpenGL ES textures, then renders them as part of drawing the View UI. The bitmap data is retrieved with glReadPixels(). You can just use these elements directly. The bigflake ExtractMpegFramesTest demonstrates this.
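A small sketch of the getBitmap()-per-frame route described above (the processFrame() hook and the OpenCV conversion are assumptions for illustration, not part of any particular sample):

    import android.graphics.Bitmap;
    import android.graphics.SurfaceTexture;
    import android.view.TextureView;

    public class FrameGrabber implements TextureView.SurfaceTextureListener {
        private final TextureView textureView;

        public FrameGrabber(TextureView textureView) {
            this.textureView = textureView;
            textureView.setSurfaceTextureListener(this);
        }

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
            // The decoder can now be given new Surface(surface) as its output.
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture surface) {
            // Called once per frame delivered to the TextureView.
            Bitmap frame = textureView.getBitmap();   // RGB copy of the current frame
            processFrame(frame);                      // hypothetical processing hook
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {}

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) { return true; }

        private void processFrame(Bitmap frame) {
            // e.g. convert to an OpenCV Mat with Utils.bitmapToMat(frame, mat)
        }
    }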

Is there any API to display raw YUV video frames using android Surface?

I have a raw video file in YUV 420 format and I want to read it frame by frame and display the video on an Android device. I am not capturing any video frames using the camera.
Question: Is there any API using Surface or SurfaceView to display the raw YUV 420 video without any encoding or decoding?
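One simple sketch, assuming each frame can be repacked as NV21 (interleaved VU chroma): convert the frame to a Bitmap with YuvImage and draw it on the SurfaceView's Canvas. This does the YUV-to-RGB conversion on the CPU, so it will not be fast, but it needs no encoder or decoder. The JPEG quality value and the repacking step are assumptions:

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.Canvas;
    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import android.view.SurfaceHolder;
    import java.io.ByteArrayOutputStream;

    public class YuvFrameRenderer {
        // Sketch: draw one NV21 frame onto a SurfaceView. Planar I420 frames must be
        // repacked to NV21 before calling this.
        public static void drawFrame(SurfaceHolder holder, byte[] nv21, int width, int height) {
            YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);   // CPU-side conversion
            byte[] jpeg = out.toByteArray();
            Bitmap bitmap = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);

            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                canvas.drawBitmap(bitmap, 0, 0, null);
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }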

How to capture H.264 encoded frames in Android?

I understand that there are two ways of capturing video in Android:
1) using the SurfaceView API
2) using the MediaRecorder API
I want to capture H.264-encoded frames using Android (3.0+)'s default encoder, to send them over the network using RTP.
While using preview callbacks with the SurfaceView and SurfaceHolder classes, we are able to get the raw frames shown as the preview to the user. We were getting the frames in the "onPreviewFrame" method of the "PreviewCallback" class.
But those frames are not H.264 encoded.
So I tried the "MediaRecorder" API to set H.264 encoding and "SurfaceView" to get the preview frames.
In this case, the preview callbacks are not getting called.
Can you please let me know how to achieve this? Our main aim is to get the H.264-encoded frames (which have been encoded using Android's default codec).
Ref: 1) https://stackoverflow.com/a/8655244/698316
2) Similar issue: http://permalink.gmane.org/gmane.comp.handhelds.android.devel/214422
Can you suggest a way to capture H.264-encoded frames using Android's default H.264 codec support?
See Spydroid http://code.google.com/p/spydroid-ipcamera/
Basically, you let the video encoder write an H.264 .mp4 to a special file descriptor that calls your code on write. Then you strip off the MP4 header and turn the H.264 NALUs into RTP packets.
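A hedged sketch of just the NALU-splitting step, assuming the stripped stream is in Annex-B format with four-byte 0x00000001 start codes (Spydroid's own parser handles more cases than this):

    import java.util.ArrayList;
    import java.util.List;

    public class NaluSplitter {
        // Sketch: split an Annex-B H.264 byte stream into individual NAL units by
        // scanning for 0x00 0x00 0x00 0x01 start codes. Each NALU would then be
        // fragmented into RTP packets (RFC 6184) for sending.
        public static List<byte[]> split(byte[] stream) {
            List<Integer> starts = new ArrayList<>();
            for (int i = 0; i + 3 < stream.length; i++) {
                if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 0 && stream[i + 3] == 1) {
                    starts.add(i + 4);   // payload begins right after the start code
                }
            }
            List<byte[]> nalus = new ArrayList<>();
            for (int n = 0; n < starts.size(); n++) {
                int begin = starts.get(n);
                int end = (n + 1 < starts.size()) ? starts.get(n + 1) - 4 : stream.length;
                byte[] nalu = new byte[end - begin];
                System.arraycopy(stream, begin, nalu, 0, nalu.length);
                nalus.add(nalu);
            }
            return nalus;
        }
    }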
