I am a bit confused after reading a lot of resources on the internet.
1) I have a TextureView in the application (tv below).
2) I have attached a SurfaceTextureListener to tv:
tv.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        mySurface = new Surface(surface);  // this Surface is sent to the native layer
        ...
    }
    ...
});
3) I pass mySurface into the native layer, obtain an ANativeWindow for it, and use ANativeWindow_lock and ANativeWindow_unlockAndPost to copy my data into the surface.
So far so good: it displays the data I copy into the ANativeWindow.
Now I want to record all of these frames into an MP4 file.
What I did:
1) I used the link below as a reference:
http://www.bigflake.com/mediacodec/EncodeAndMuxTest.java.txt
2) I retrieved a Surface from the MediaCodec encoder (obtained roughly as sketched below) and passed it to the native layer, copying into it the same way I copy into the display surface. I see no output in the MP4 file, just a black screen.
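Roughly, the encoder Surface comes from MediaCodec.createInputSurface(), along the lines of this sketch (the MIME type and format values here are placeholders, not my actual settings):

MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);    // placeholder
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);       // placeholder
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 5);  // placeholder

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
// createInputSurface() must be called after configure() and before start().
Surface encoderSurface = encoder.createInputSurface();
encoder.start();
// encoderSurface is what I pass down to the native layer.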
QUESTIONS:
1) Is this the right approach? That is, given raw data, copy it into two surfaces: one coming from the application's TextureView, and the other from the MediaCodec encoder.
2) Or did I overlook an easier way of recording the data into MP4 format?
3) Or is there a concept I have missed completely?
Kindly provide your valuable input.
I'm currently trying to display a video frame using OpenGL. So far it works, but I have a color problem.
I'm using this as the reference for my logic.
I have this code:
//YUV420SP data
uint8_t *decodedBuff = AMediaCodec_getOutputBuffer(d->codec, status, &bufSize);
buildTexture(decodedBuff, decodedBuff+w*h, decodedBuff+w*h, w, h);
renderFrame();
but it displays with the wrong colors.
decodedBuff = Y
decodedBuff + w*h = U
decodedBuff + w*h*5/4 = V
But this separation is for YUV420P (planar).
Do you happen to know what the layout is for YUV420SP?
Your help is very much appreciated
If you are doing it this way, you are doing it wrong: you should not need to read the raw YUV data out of the decoder and split the planes manually at all.
Generate a SurfaceTexture, bind it to an OpenGL ES texture, and use the GL_OES_EGL_image_external extension to access that texture through an external image sampler in your fragment shader.
This gives you direct access to the video data in your shader, including automatic handling of the memory format and color conversion, in many cases for "free" because it's backed by GPU hardware acceleration.
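As a rough illustration of that setup (the identifiers here are mine, and this is just the standard external-sampler pattern, not code from the question):

// Fragment shader using the external image sampler; the extension must be
// declared at the top of the shader.
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(sTexture, vTexCoord);\n" +
        "}\n";

// Bind the texture as GL_TEXTURE_EXTERNAL_OES (not GL_TEXTURE_2D), wrap it in
// a SurfaceTexture, and have the decoder render into the Surface built from it;
// the driver then handles the YUV layout and color conversion.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
Surface decoderOutputSurface = new Surface(surfaceTexture);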
I followed the Android Studio tutorial to get the camera preview to work (Camera API, Android Developer Guide). This works fine for me, and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a Bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values of a camera preview pixel?
If you are using the Camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. You then override the onImageAvailable method, which gets an ImageReader as its argument, and access the image just captured with imageReader.acquireNextImage().
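A rough sketch of that wiring (the sizes, handler, and variable names are placeholders of mine):

ImageReader reader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2);
reader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader r) {
        Image image = r.acquireNextImage();
        if (image == null) return;
        // YUV_420_888: plane 0 is Y, planes 1 and 2 are U and V
        ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
        // ... read the pixel(s) you need here ...
        image.close();  // must close, or the reader runs out of buffers
    }
}, backgroundHandler);
// Add reader.getSurface() as an output target of the camera capture session.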
With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
The wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong conversion coefficients for YUV->RGB conversion; they should be:
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
(Cb = U, Cr = V)
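Using those coefficients with the default NV21 preview layout (a full-size Y plane followed by interleaved V/U pairs at quarter resolution), a single-pixel lookup could be sketched like this; the helper names are mine:

// data is the NV21 byte[] from onPreviewFrame(); (x, y) is the pixel of interest.
static int nv21PixelToArgb(byte[] data, int w, int h, int x, int y) {
    int yVal = data[y * w + x] & 0xFF;
    int chromaBase = w * h + (y / 2) * w + (x / 2) * 2;
    int v = data[chromaBase] & 0xFF;        // Cr comes first in NV21
    int u = data[chromaBase + 1] & 0xFF;    // Cb second
    int r = clamp(Math.round(yVal + 1.402f * (v - 128)));
    int g = clamp(Math.round(yVal - 0.34414f * (u - 128) - 0.71414f * (v - 128)));
    int b = clamp(Math.round(yVal + 1.772f * (u - 128)));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int c) {
    return c < 0 ? 0 : (c > 255 ? 255 : c);
}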
You can also take a look at this stackoverflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.
I would like to access the YUV images of a video, and passing an ImageReader surface to a MediaCodec (as referred to in the documentation) looked like a really smart way to do it. However, I can't make sense of the data inside the Image instance supplied by the onImageAvailable callback. Just looking at its Y plane, it seems to be mostly 0 values, no matter what video I provide.
I read some @fadden comments that look a bit old by now, saying that the ImageReader surface was not available yet for MediaCodec; is this still the case? Has anyone succeeded in implementing MediaCodec decoding to an ImageReader surface?
To illustrate, I was hoping for the Y plane to look like this:
and it comes out as:
Thanks for any pointers.
I met the same problem. In my case, it was because I did not specify that the MediaCodec should output the same image format as the ImageReader.
int imageFormat = ImageFormat.YUV_420_888;
mImageReader = ImageReader.newInstance(
        videoFormat.getInteger(MediaFormat.KEY_WIDTH),
        videoFormat.getInteger(MediaFormat.KEY_HEIGHT),
        imageFormat,
        3   // maxImages
);

// I added the following line of code:
videoFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
// Then initialize the MediaCodec with videoFormat.
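To complete the picture, here is a sketch of how the decoder can then be wired to that ImageReader; the decoder creation and the drain-loop note are my own assumptions, not part of the original fix:

// Configure the decoder to render into the ImageReader's Surface, so decoded
// frames show up in onImageAvailable() as YUV_420_888 Images.
MediaCodec decoder = MediaCodec.createDecoderByType(
        videoFormat.getString(MediaFormat.KEY_MIME));
decoder.configure(videoFormat, mImageReader.getSurface(), null, 0);
decoder.start();

mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireNextImage();
        if (image != null) {
            // Check Plane.getRowStride() when copying; it may exceed the width.
            image.close();
        }
    }
}, handler);

// In the decode loop, release output buffers with render=true so the frame is
// actually sent to the Surface: decoder.releaseOutputBuffer(index, true);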
I am trying to apply face detection on camera preview frames. I am using OpenGL and OpenCV to process these camera frames at run-time.
@Override
public void onDrawFrame(GL10 unused) {
if (VERBOSE) {
Log.d(TAG, "onDrawFrame tex=" + mTextureId);
}
mSurfaceTexture.updateTexImage();
mSurfaceTexture.getTransformMatrix(mSTMatrix);
// TODO: need to implement
//JniCppManager.processFrame();
drawFrame(mTextureId, mSTMatrix);
}
I am trying to implement processFrame() in C++. How can I get an OpenCV Mat object in C++ from the texture and its transformation matrix? Could anyone provide me some pointers to a solution?
Your pipeline is currently:
Camera (produces frame)
SurfaceTexture (receives frame, converts to GLES "external" texture)
[missing stuff]
Array of RGB bytes passed to C++
What you need to do for [missing stuff] is render the pixels to an off-screen pbuffer and read them back with glReadPixels(). You can do this from code written in Java or native; for the former you'd want to read them into a "direct" ByteBuffer so you can easily access the pixels from native code. The EGL context used by GLES is held in thread-local storage, so the native code running on the GLSurfaceView render thread will be able to access it.
An example of this can be found in the bigflake ExtractMpegFramesTest, which differs primarily in that it's grabbing frames from a video rather than a Camera.
For API 19+, if you can process frames in YV12 or NV21 rather than RGB, you can feed the Camera to an ImageReader and get access to the data without having to copy/convert it.
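For the glReadPixels() route, a minimal sketch of the readback step on the GL thread might look like this (the buffer and size names are mine):

// Allocate once: a direct ByteBuffer so native code can access the pixels
// without an extra copy.
ByteBuffer pixelBuf = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());

// After rendering the external texture into the off-screen EGL surface:
pixelBuf.rewind();
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
pixelBuf.rewind();
// Hand pixelBuf (RGBA, bottom-up row order) to the native code, e.g. wrap it
// in a cv::Mat on the C++ side.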
I would like to take "instant" screenshots of a Presentation object in Android. My Presentation normally renders to a virtual display (PRIVATE) which is backed by the Surface from a MediaRecorder that is set up to record video. Recording the presentation as a video works great.
mMediaRecorder = new MediaRecorder();
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mMediaRecorder.setOutputFile(getScratchFile().getAbsolutePath());
mMediaRecorder.setVideoEncodingBitRate(bitrateInBitsPerSecond);
mMediaRecorder.setVideoFrameRate(30);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mVirtualDisplay = mDisplayManager.createVirtualDisplay(
DISPLAY_NAME, // string name required
mVideoSize.getWidth(),
mVideoSize.getHeight(),
160, // screen densityDpi, not sure what it means in this context
mMediaRecorder.getSurface(), // the media recorder must already be {@code prepare()}'d
DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY); // only we can use it
mPresentation = new MyPresentation(getActivity(), mVirtualDisplay.getDisplay());
How can I get a screenshot of the mMediaRecorder.getSurface() at any time, including when the mMediaRecorder is set up but not recording?
I've tried a number of methods related to view.getDrawingCache() on the Presentation root view object, and I only get clear/black output. The presentation itself contains TextureView objects, which I'm guessing are messing up this strategy.
I also tried using an ImageReader with the DisplayPreview of the mMediaRecorder, but it never receives any images in the callback:
mMediaRecorder.setDisplayPreview(mImageReader.getSurface());
I'd really like some way to mirror the Surface which backs the presentation into an ImageReader and use that as a consumer, I just can't see how to "mirror" one surface as a producer into another "consumer" class. It seems there should be an easy way with SurfaceFlinger.
Surfaces are the producer end of a producer-consumer data structure. A producer can't pull data back out of the pipe, so attempting to read frames back from a Surface isn't possible.
When feeding MediaCodec or MediaRecorder, the consumer end is in a different process (mediaserver) that manages the media hardware. For a SurfaceView, the consumer is in SurfaceFlinger. For a TextureView, both ends are in your app, which is why you can easily get a frame from TextureView (call getBitmap()).
To intercept the incoming data you'd need to have both producer and consumer in the same process. The SurfaceTexture class (also known as "GLConsumer") provides this -- it's a consumer that converts the frames it receives into GLES textures.
So the idea would be to create a SurfaceTexture, create a new Surface from that (you'll note that Surface's only public constructor takes a SurfaceTexture), and pass that Surface as the virtual display output Surface. Then as frames come in you "forward" them to the MediaRecorder by rendering the textures with OpenGL ES.
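A rough sketch of that intermediate piece (the names are mine, and the actual GL drawing code is omitted):

// Create a GLES external texture and a SurfaceTexture consumer for it.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
SurfaceTexture frameConsumer = new SurfaceTexture(tex[0]);
frameConsumer.setDefaultBufferSize(mVideoSize.getWidth(), mVideoSize.getHeight());

// Pass this Surface to createVirtualDisplay() instead of mMediaRecorder.getSurface().
Surface displaySurface = new Surface(frameConsumer);

frameConsumer.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // On the thread that owns the EGL context:
        //   st.updateTexImage();
        //   draw the external texture into an EGL window surface that wraps
        //   mMediaRecorder.getSurface(), then eglSwapBuffers();
        //   for a "screenshot", also glReadPixels() after drawing.
    }
});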
This is not entirely straightforward, especially if you haven't worked with OpenGL ES before. Various examples can be found in Grafika (e.g. "texture from camera" and "record GL app").
I don't know if setDisplayPreview() is expected to work for anything other than Camera.