Recreate muxer for new video file while encoder works - android

I use CameraToMpegTest from BigFlake, but I modified it so that the muxer can be started and stopped many times while the encoder keeps working. When video is not being recorded, I still use the encoder to get frames from the texture for other processing.
The first start/stop works fine, but there is a problem with the second start/stop: I get an exception and then the app closes: "stop() is called in invalid state 3", "...Caused by: java.lang.IllegalStateException: Failed to stop the muxer...". Also, the second video file doesn't open, but it's not empty.
Is there a possibility to restart the muxer while the encoder runs continuously? What am I doing wrong?
UPDATE: I found out that the problem is in keyframes. When I set format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 0), then all is fine. And so, how do I request a keyframe when the muxer starts recording again?
public class MainFrameProcessor {
public static final String TAG = "MainFrameProcessor";
public static final boolean VERBOSE = false; // lots of logging
// parameters for the encoder
private static final String MIME_TYPE = "video/avc"; // H.264 Advanced Video Coding
private static final int FRAME_RATE = 30; // 30fps
private static final int IFRAME_INTERVAL = 5; // 5 seconds between I-frames
public static final int encWidth = 1920;
public static final int encHeight = 1080;
private static final int encBitRate = 10000000; //6000000; // bits per second; see http://static.googleusercontent.com/media/source.android.com/en//compatibility/android-cdd.pdf
public ByteBuffer mPixelBuf; // used by saveFrame()
private boolean isWrite=false;
private boolean isWork=true;
// encoder / muxer state
private MediaCodec mEncoder;
private CodecInputSurface mInputSurface;
private Writer writer;
private String path;
// camera state
private Camera mCamera;
private SurfaceTextureManager mStManager;
private Handler messageHandler;
public MainFrameProcessor(String path, Handler messageHandler){
mPixelBuf = ByteBuffer.allocateDirect(encHeight * encWidth * 4);
mPixelBuf.order(ByteOrder.LITTLE_ENDIAN);
this.path=path;
this.messageHandler=messageHandler;
}
/** test entry point */
public void startProcessor() throws Throwable {
CameraToMpegWrapper.runTest(this);
}
public void stopProcessor() {
isWork=false;
}
synchronized public void release(){
stopProcessor();
releaseCamera();
releaseEncoderAndWriter();
releaseSurfaceTexture();
}
/**
* Wraps processCameraFrames(). This is necessary because SurfaceTexture will try to use
* the looper in the current thread if one exists, and the CTS tests create one on the
* test thread.
*
* The wrapper propagates exceptions thrown by the worker thread back to the caller.
*/
private static class CameraToMpegWrapper implements Runnable {
private Throwable mThrowable;
private MainFrameProcessor mTest;
private CameraToMpegWrapper(MainFrameProcessor test) {
mTest = test;
}
@Override
public void run() {
try {
mTest.processCameraFrames();
} catch (Throwable th) {
mThrowable = th;
}
}
/** Entry point. */
public static void runTest(MainFrameProcessor obj) throws Throwable {
CameraToMpegWrapper wrapper = new CameraToMpegWrapper(obj);
Thread th = new Thread(wrapper, "codec test");
th.start();
//th.join(); //https://stackoverflow.com/questions/22457623/surfacetextures-onframeavailable-method-always-called-too-late
if (wrapper.mThrowable != null) {
throw wrapper.mThrowable;
}
}
}
boolean to_start=false;
public void startRecord(){
to_start=true;
}
boolean to_stop=false;
public void stopRecord(){
to_stop=true;
}
/**
* Tests encoding of AVC video from Camera input. The output is saved as an MP4 file.
*/
private void processCameraFrames() {
// arbitrary but popular values
Log.d(TAG, MIME_TYPE + " output " + encWidth + "x" + encHeight + " #" + encBitRate);
try {
prepareCamera(encWidth, encHeight);
prepareEncoderAndWriter(encWidth, encHeight, encBitRate);
mInputSurface.makeCurrent();
prepareSurfaceTexture();
mCamera.startPreview();
SurfaceTexture st = mStManager.getSurfaceTexture();
while (isWork) {
if (to_start){
writer.startWriter();
isWrite=true;
to_start=false;
}
if (to_stop){
isWrite=false;
writer.stopWriter();
to_stop=false;
}
// Acquire a new frame of input, and render it to the Surface. If we had a
// GLSurfaceView we could switch EGL contexts and call drawImage() a second
// time to render it on screen. The texture can be shared between contexts by
// passing the GLSurfaceView's EGLContext as eglCreateContext()'s share_context
// argument.
mStManager.awaitNewImage();
mStManager.drawImage();
synchronized (mPixelBuf) {
mPixelBuf.rewind();
GLES20.glReadPixels(0, 0, encWidth, encHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
mPixelBuf);
}
if (isWrite) {
if (writer.checkDurationEnd()) {
stopRecord();
Message msg = messageHandler.obtainMessage(MainActivity.MESSAGE_STOP_REC);
messageHandler.sendMessage(msg);
} else
writer.write(st, mInputSurface);
}
}
} finally {
// release everything we grabbed
release();
}
}
/**
* Configures Camera for video capture. Sets mCamera.
* <p>
* Opens a Camera and sets parameters. Does not start preview.
*/
private void prepareCamera(int encWidth, int encHeight) {
if (mCamera != null) {
throw new RuntimeException("camera already initialized");
}
Camera.CameraInfo info = new Camera.CameraInfo();
mCamera = Camera.open(); // opens first back-facing camera
if (mCamera == null) {
throw new RuntimeException("Unable to open camera");
}
Camera.Parameters parms = mCamera.getParameters();
choosePreviewSize(parms, encWidth, encHeight);
// leave the frame rate set to default
mCamera.setParameters(parms);
Camera.Size size = parms.getPreviewSize();
Log.d(TAG, "Camera preview size is " + size.width + "x" + size.height);
}
/**
* Attempts to find a preview size that matches the provided width and height (which
* specify the dimensions of the encoded video). If it fails to find a match it just
* uses the default preview size.
* <p>
* TODO: should do a best-fit match.
*/
private static void choosePreviewSize(Camera.Parameters parms, int width, int height) {
// We should make sure that the requested MPEG size is less than the preferred
// size, and has the same aspect ratio.
Camera.Size ppsfv = parms.getPreferredPreviewSizeForVideo();
if (VERBOSE && ppsfv != null) {
Log.d(TAG, "Camera preferred preview size for video is " +
ppsfv.width + "x" + ppsfv.height);
}
for (Camera.Size size : parms.getSupportedPreviewSizes()) {
if (size.width == width && size.height == height) {
parms.setPreviewSize(width, height);
return;
}
}
Log.w(TAG, "Unable to set preview size to " + width + "x" + height);
if (ppsfv != null) {
parms.setPreviewSize(ppsfv.width, ppsfv.height);
}
}
/**
* Stops camera preview, and releases the camera to the system.
*/
private void releaseCamera() {
if (VERBOSE) Log.d(TAG, "releasing camera");
if (mCamera != null) {
mCamera.stopPreview();
mCamera.release();
mCamera = null;
}
}
/**
* Configures SurfaceTexture for camera preview. Initializes mStManager, and sets the
* associated SurfaceTexture as the Camera's "preview texture".
* <p>
* Configure the EGL surface that will be used for output before calling here.
*/
private void prepareSurfaceTexture() {
mStManager = new SurfaceTextureManager(mPixelBuf);
SurfaceTexture st = mStManager.getSurfaceTexture();
try {
mCamera.setPreviewTexture(st);
} catch (IOException ioe) {
throw new RuntimeException("setPreviewTexture failed", ioe);
}
}
/**
* Releases the SurfaceTexture.
*/
private void releaseSurfaceTexture() {
if (mStManager != null) {
mStManager.release();
mStManager = null;
}
}
/**
* Configures encoder and muxer state, and prepares the input Surface. Initializes
* mEncoder, mMuxer, mInputSurface, mBufferInfo, mTrackIndex, and mMuxerStarted.
*/
private void prepareEncoderAndWriter(int width, int height, int bitRate) {
MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);
// Set some properties. Failing to specify some of these can cause the MediaCodec
// configure() call to throw an unhelpful exception.
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
if (VERBOSE) Log.d(TAG, "format: " + format);
// Create a MediaCodec encoder, and configure it with our format. Get a Surface
// we can use for input and wrap it with a class that handles the EGL work.
//
// If you want to have two EGL contexts -- one for display, one for recording --
// you will likely want to defer instantiation of CodecInputSurface until after the
// "display" EGL context is created, then modify the eglCreateContext call to
// take eglGetCurrentContext() as the share_context argument.
try {
mEncoder = MediaCodec.createEncoderByType(MIME_TYPE);
} catch (IOException e) {
e.printStackTrace();
}
mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = new CodecInputSurface(mEncoder.createInputSurface());
mEncoder.start();
writer=new Writer(path, mEncoder);
}
/**
* Releases encoder resources.
*/
private void releaseEncoderAndWriter() {
if (VERBOSE) Log.d(TAG, "releasing encoder objects");
if (writer!=null)
writer.releaseWriter();
if (mEncoder != null) {
mEncoder.stop();
mEncoder.release();
mEncoder = null;
}
if (mInputSurface != null) {
mInputSurface.release();
mInputSurface = null;
}
}
}
Class for Muxer:
public class Writer {
private static final long DURATION_SEC = 1000; // seconds of video
private MediaCodec mEncoder;
private MediaMuxer mMuxer;
private String path;
private int mTrackIndex;
private long startWhen;
private long desiredEnd;
private boolean mMuxerStarted;
// allocate one of these up front so we don't need to do it every time
private MediaCodec.BufferInfo mBufferInfo;
boolean new_track=false;
public Writer(String path, MediaCodec mEncoder){
this.path=path;
this.mEncoder=mEncoder;
}
public void startWriter(){
// Output filename. Ideally this would use Context.getFilesDir() rather than a
// hard-coded output directory.
String outputPath= CameraUtils.createPathFile(path, "mp4");
Log.i(MainFrameProcessor.TAG, "Output file is " + outputPath);
// Create a MediaMuxer. We can't add the video track and start() the muxer here,
// because our MediaFormat doesn't have the Magic Goodies. These can only be
// obtained from the encoder after it has started processing data.
//
// We're not actually interested in multiplexing audio. We just want to convert
// the raw H.264 elementary stream we get from MediaCodec into a .mp4 file.
try {
mMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
} catch (IOException ioe) {
throw new RuntimeException("MediaMuxer creation failed", ioe);
}
mBufferInfo = new MediaCodec.BufferInfo();
mTrackIndex = -1;
mMuxerStarted = false;
new_track=true;
startWhen = System.nanoTime();
desiredEnd = startWhen + DURATION_SEC * 1000000000L;
}
synchronized public void stopWriter(){
// send end-of-stream to encoder, and drain remaining output
if (mMuxer != null) {
Log.w(MainFrameProcessor.TAG,"stop");
//if (mEncoder!=null) drainEncoder(true);
mMuxer.stop();
mMuxer.release();
mMuxer = null;
}
}
public void releaseWriter() {
stopWriter();
}
public void write(SurfaceTexture st, CodecInputSurface mInputSurface) {
// Set the presentation time stamp from the SurfaceTexture's time stamp. This
// will be used by MediaMuxer to set the PTS in the video.
if (MainFrameProcessor.VERBOSE) {
Log.d(MainFrameProcessor.TAG, "present: " +
((st.getTimestamp() - startWhen) / 1000000.0) + "ms");
}
mInputSurface.setPresentationTime(st.getTimestamp());
// Submit it to the encoder. The eglSwapBuffers call will block if the input
// is full, which would be bad if it stayed full until we dequeued an output
// buffer (which we can't do, since we're stuck here). So long as we fully drain
// the encoder before supplying additional input, the system guarantees that we
// can supply another frame without blocking.
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "sending frame to encoder");
mInputSurface.swapBuffers();
drainEncoder(false);
}
/**
* Extracts all pending data from the encoder and forwards it to the muxer.
* <p>
* If endOfStream is not set, this returns when there is no more data to drain. If it
* is set, we send EOS to the encoder, and then iterate until we see EOS on the output.
* Calling this with endOfStream set should be done once, right before stopping the muxer.
* <p>
* We're just using the muxer to get a .mp4 file (instead of a raw H.264 stream). We're
* not recording audio.
*/
private void drainEncoder(boolean endOfStream) {
final int TIMEOUT_USEC = 10000;
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "drainEncoder(" + endOfStream + ")");
if (endOfStream) {
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "sending EOS to encoder");
mEncoder.signalEndOfInputStream();
}
ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
while (true) {
int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
Log.w("ddd", ""+encoderStatus+"; "+new_track);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
if (!endOfStream) {
break; // out of while
} else {
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "no output available, spinning to await EOS");
}
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
encoderOutputBuffers = mEncoder.getOutputBuffers();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// should happen before receiving buffers, and should only happen once
/*if (mMuxerStarted) {
throw new RuntimeException("format changed twice");
}
MediaFormat newFormat = mEncoder.getOutputFormat();
Log.d(MainFrameProcessor.TAG, "encoder output format changed: " + newFormat);
// now that we have the Magic Goodies, start the muxer
mTrackIndex = mMuxer.addTrack(newFormat);
mMuxer.start();
mMuxerStarted = true;*/
} else if (encoderStatus < 0) {
Log.w(MainFrameProcessor.TAG, "unexpected result from encoder.dequeueOutputBuffer: " +
encoderStatus);
// let's ignore it
} else if (new_track){
MediaFormat newFormat = mEncoder.getOutputFormat();
Log.d(MainFrameProcessor.TAG, "encoder output format changed: " + newFormat);
// now that we have the Magic Goodies, start the muxer
mTrackIndex = mMuxer.addTrack(newFormat);
mMuxer.start();
mMuxerStarted = true;
new_track=false;
} else {
ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + encoderStatus +
" was null");
}
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
mBufferInfo.size = 0;
}
if (mBufferInfo.size != 0) {
if (!mMuxerStarted) {
throw new RuntimeException("muxer hasn't started");
}
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(mBufferInfo.offset);
encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "sent " + mBufferInfo.size + " bytes to muxer");
}
mEncoder.releaseOutputBuffer(encoderStatus, false);
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
if (!endOfStream) {
Log.w(MainFrameProcessor.TAG, "reached end of stream unexpectedly");
} else {
if (MainFrameProcessor.VERBOSE) Log.d(MainFrameProcessor.TAG, "end of stream reached");
}
break; // out of while
}
}
}
}
public boolean checkDurationEnd() {
return System.nanoTime() >= desiredEnd;
}
}
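Regarding the UPDATE: instead of disabling the I-frame interval, you can ask a running encoder for an immediate sync frame with MediaCodec.setParameters() and PARAMETER_KEY_REQUEST_SYNC_FRAME (API 19+). A minimal sketch, assuming it is invoked whenever Writer.startWriter() runs; the helper class name is mine:
import android.media.MediaCodec;
import android.os.Bundle;
// Sketch: request a sync (key) frame from the still-running encoder so that a
// freshly restarted muxer gets an I-frame at the start of the new file.
public final class SyncFrameRequester {
    private SyncFrameRequester() {}
    public static void requestSyncFrame(MediaCodec encoder) {
        Bundle params = new Bundle();
        params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
        encoder.setParameters(params); // takes effect on one of the next queued frames
    }
}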

Related

How to access the information of a video while it is captured on Android?

I want to encrypt a video while it is captured by the camera of an Android device. For this I will use MediaCodec and MediaMuxer, since MediaPlayer does not let me work with the bytes or buffer directly. My problem is that MediaMuxer is passed the output file as a parameter, which prevents me from encrypting the information beforehand. So my question is whether there is any way to encrypt the information before it is passed to the output file, either using this method or another.
public class VideoEncoderCore {
private static final String TAG = "";
private static final boolean VERBOSE = false;
// TODO: these ought to be configurable as well
private static final String MIME_TYPE = "video/avc"; // H.264 Advanced Video Coding
private static final int FRAME_RATE = 30; // 30fps
private static final int IFRAME_INTERVAL = 5; // 5 seconds between I-frames
private Surface mInputSurface;
private MediaMuxer mMuxer;
private MediaCodec mEncoder;
private MediaCodec.BufferInfo mBufferInfo;
private int mTrackIndex;
private boolean mMuxerStarted;
/**
* Configures encoder and muxer state, and prepares the input Surface.
*/
public VideoEncoderCore(int width, int height, int bitRate, File outputFile)
throws IOException {
mBufferInfo = new MediaCodec.BufferInfo();
MediaFormat format = MediaFormat.createVideoFormat(MIME_TYPE, width, height);
// Set some properties. Failing to specify some of these can cause the MediaCodec
// configure() call to throw an unhelpful exception.
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
format.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
if (VERBOSE) Log.d(TAG, "format: " + format);
// Create a MediaCodec encoder, and configure it with our format. Get a Surface
// we can use for input and wrap it with a class that handles the EGL work.
mEncoder = MediaCodec.createEncoderByType(MIME_TYPE);
mEncoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mInputSurface = mEncoder.createInputSurface();
mEncoder.start();
// Create a MediaMuxer. We can't add the video track and start() the muxer here,
// because our MediaFormat doesn't have the Magic Goodies. These can only be
// obtained from the encoder after it has started processing data.
//
// We're not actually interested in multiplexing audio. We just want to convert
// the raw H.264 elementary stream we get from MediaCodec into a .mp4 file.
mMuxer = new MediaMuxer(outputFile.toString(),
MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
mTrackIndex = -1;
mMuxerStarted = false;
}
/**
* Returns the encoder's input surface.
*/
public Surface getInputSurface() {
return mInputSurface;
}
/**
* Releases encoder resources.
*/
public void release() {
if (VERBOSE) Log.d(TAG, "releasing encoder objects");
if (mEncoder != null) {
mEncoder.stop();
mEncoder.release();
mEncoder = null;
}
if (mMuxer != null) {
// TODO: stop() throws an exception if you haven't fed it any data. Keep track
// of frames submitted, and don't call stop() if we haven't written anything.
mMuxer.stop();
mMuxer.release();
mMuxer = null;
}
}
/**
* Extracts all pending data from the encoder and forwards it to the muxer.
* <p>
* If endOfStream is not set, this returns when there is no more data to drain. If it
* is set, we send EOS to the encoder, and then iterate until we see EOS on the output.
* Calling this with endOfStream set should be done once, right before stopping the muxer.
* <p>
* We're just using the muxer to get a .mp4 file (instead of a raw H.264 stream). We're
* not recording audio.
*/
public void drainEncoder(boolean endOfStream) {
final int TIMEOUT_USEC = 10000;
if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");
if (endOfStream) {
if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
mEncoder.signalEndOfInputStream();
}
ByteBuffer[] encoderOutputBuffers = mEncoder.getOutputBuffers();
while (true) {
int encoderStatus = mEncoder.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
if (!endOfStream) {
break; // out of while
} else {
if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
}
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
encoderOutputBuffers = mEncoder.getOutputBuffers();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// should happen before receiving buffers, and should only happen once
if (mMuxerStarted) {
throw new RuntimeException("format changed twice");
}
MediaFormat newFormat = mEncoder.getOutputFormat();
Log.d(TAG, "encoder output format changed: " + newFormat);
// now that we have the Magic Goodies, start the muxer
mTrackIndex = mMuxer.addTrack(newFormat);
mMuxer.start();
mMuxerStarted = true;
} else if (encoderStatus < 0) {
Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " +
encoderStatus);
// let's ignore it
} else {
ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + encoderStatus +
" was null");
}
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
mBufferInfo.size = 0;
}
if (mBufferInfo.size != 0) {
if (!mMuxerStarted) {
throw new RuntimeException("muxer hasn't started");
}
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(mBufferInfo.offset);
encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
mMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
if (VERBOSE) {
Log.d(TAG, "sent " + mBufferInfo.size + " bytes to muxer, ts=" +
mBufferInfo.presentationTimeUs);
}
}
mEncoder.releaseOutputBuffer(encoderStatus, false);
if ((mBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
if (!endOfStream) {
Log.w(TAG, "reached end of stream unexpectedly");
} else {
if (VERBOSE) Log.d(TAG, "end of stream reached");
}
break; // out of while
}
}
}
}
}
The above code was pulled from the Grafika project, an Android graphics and media hack dump.
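As for encrypting before the data reaches the output file: one possibility (a sketch under assumptions, not a confirmed approach) is to skip MediaMuxer entirely, copy each encoded buffer out of drainEncoder(), and write it through an encrypting stream such as javax.crypto.CipherOutputStream. The class name, key handling and cipher choice below are illustrative only:
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
// Hypothetical helper: encrypts encoded H.264 buffers before they ever reach disk.
public final class EncryptedElementaryStreamWriter {
    private final OutputStream out;
    public EncryptedElementaryStreamWriter(String path, byte[] key, byte[] iv) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        out = new CipherOutputStream(new BufferedOutputStream(new FileOutputStream(path)), cipher);
    }
    public void write(byte[] encodedChunk) throws IOException { // a copy of one encoder output buffer
        out.write(encodedChunk);
    }
    public void close() throws IOException {
        out.close();
    }
}
Note that the result is an encrypted raw H.264 elementary stream, not an MP4; it would have to be decrypted and run through MediaMuxer (or a raw-H.264-capable player) afterwards.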

decode h264 raw stream using mediacodec

I receive h264 data from a server, and I want to decode this stream using MediaCodec and a TextureView on Android. I get the data from the server, parse it to extract the SPS, the PPS and the video frame data, then pass this data to MediaCodec, but the function dequeueOutputBuffer(info, 100000) always returns -1 and I get a dequeueOutputBuffer timeout.
Any help please, I've been stuck on this issue for three weeks.
This is the code used to decode the video frames.
public class H264PlayerActivity extends AppCompatActivity implements TextureView.SurfaceTextureListener {
private TextureView m_surface;// View that contains the Surface Texture
private H264Provider provider;// Object that connects to our server and gets H264 frames
private MediaCodec m_codec;// Media decoder
// private DecodeFramesTask m_frameTask;// AsyncTask that takes H264 frames and uses the decoder to update the Surface Texture
// the channel used to receive the partner's video
private ZMQ.Socket subscriber = null;
private ZMQ.Context context;
// thread handling the video reception
// byte[] byte_SPSPPS = null;
//byte[] byte_Frame = null;
public static String stringSubscribe=null;
public static String myIpAcquisition=null;
public static byte[] byte_SPSPPS = null;
public static byte[] byte_Frame = null;
boolean isIframe = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.h264player_activity);
Bundle extras = getIntent().getExtras();
if(extras!=null)
{
stringSubscribe=extras.getString("stringSubscribe");
myIpAcquisition=(extras.getString("myIpAcquisition"));
}
// Get a referance to the TextureView in the UI
m_surface = (TextureView) findViewById(R.id.textureView);
// Add this class as a call back so we can catch the events from the Surface Texture
m_surface.setSurfaceTextureListener(this);
}
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)
@Override
// Invoked when a TextureView's SurfaceTexture is ready for use
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
// when the surface is ready, we make a H264 provider Object. When its constructor runs it starts an AsyncTask to log into our server and start getting frames
provider = new H264Provider(stringSubscribe, myIpAcquisition,byte_SPSPPS,byte_Frame);
}
@Override
// Invoked when the SurfaceTexture's buffers size changed
public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
}
@Override
// Invoked when the specified SurfaceTexture is about to be destroyed
public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
return false;
}
@Override
// Invoked when the specified SurfaceTexture is updated through updateTexImage()
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
}
private class H264Provider {
String stringSubscribe = "";
String myIpAcquisition = "";
byte[] byte_SPSPPS = null;
byte[] byte_PPS = null;
byte[] byte_Frame = null;
H264Provider(String stringSubscribe, String myIpAcquisition, byte[] byte_SPS, byte[] byte_Frame) {
this.stringSubscribe = stringSubscribe;
this.myIpAcquisition = myIpAcquisition;
this.byte_SPSPPS = byte_SPS;
this.byte_PPS = byte_PPS;
this.byte_Frame = byte_Frame;
System.out.println(" subscriber client started");
//SetUpConnection setup=new SetUpConnection();
// setup.execute();
PlayerThread mPlayer = new PlayerThread();
mPlayer.start();
}
void release(){
// close ØMQ socket
subscriber.close();
//terminate 0MQ context
context.term();
}
byte[] getCSD( ) {
return byte_SPSPPS;
}
byte[] nextFrame( ) {
return byte_Frame;
}
private class PlayerThread extends Thread
{
public PlayerThread()
{
System.out.println(" subscriber client started");
}
@TargetApi(Build.VERSION_CODES.LOLLIPOP)
@RequiresApi(api = Build.VERSION_CODES.JELLY_BEAN)
@Override
public void run() {
/******************************************ZMQ****************************/
// Prepare our context and subscriber
ZMQ.Context context = ZMQ.context(1);
//create 0MQ socket
ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
//create outgoing connection from socket
String address = "tcp://" + myIpAcquisition + ":xxxx";
Boolean bbbb = subscriber.connect(address);
subscriber.setHWM(20);// the number of messages to queue.
Log.e("zmq_tag", "connect connect " + bbbb);
//boolean bbbb = subscriber.setSocketOpt(zmq.ZMQ.ZMQ_SNDHWM, 1);
subscriber.subscribe(stringSubscribe.getBytes(ZMQ.CHARSET));
Log.e("zmq_tag", " zmq stringSubscribe " + stringSubscribe);
boolean bRun = true;
while (bRun) {
ZMsg msg = ZMsg.recvMsg(subscriber);
String string_SPS = null;
String string_PPS = null;
String SPSPPS = null;
String string_Frame = null;
if (msg != null) {
// create a video message out of the zmq message
VideoMessage oVideoMsg = VideoMessage.fromZMsg(msg);
// wait until get Iframe
String szInfoPublisher = new String(oVideoMsg.szInfoPublisher);
Log.e("zmq_tag", "szInfoPublisher " + szInfoPublisher);
if (szInfoPublisher.contains("0000000167")) {
isIframe = true;
String[] split_IFrame = szInfoPublisher.split("0000000165");
String SPS__PPS = split_IFrame[0];
String [] split_SPSPPS=SPS__PPS.split("0000000167");
SPSPPS="0000000167" + split_SPSPPS[1];
Log.e("zmq_tag", "SPS+PPS " + SPSPPS);
String iFrame = "0000000165" + split_IFrame[1];
Log.e("zmq_tag", "IFrame " + iFrame);
string_Frame = iFrame;
} else {
if ((szInfoPublisher.contains("0000000161")||szInfoPublisher.contains("0000000141")) && isIframe) {
if (szInfoPublisher.contains("0000000161"))
{
String[] split_IFrame = szInfoPublisher.split("0000000161");
String newMSG = "0000000161" + split_IFrame[1];
Log.e("zmq_tag", " P Frame " + newMSG);
string_Frame = newMSG;
} else
if (szInfoPublisher.contains("0000000141"))
{
String[] split_IFrame = szInfoPublisher.split("0000000141");
String newMSG = "0000000141" + split_IFrame[1];
Log.e("zmq_tag", " P Frame " + newMSG);
string_Frame = newMSG;
}
} else {
isIframe = false;
}
}
}
if (SPSPPS != null) {
byte_SPSPPS = SPSPPS.getBytes();
Log.e("zmq_tag", " byte_SPSPPS " + new String(byte_SPSPPS));
}
if (string_Frame != null) {
byte_Frame = string_Frame.getBytes();
Log.e("zmq_tag", " byte_Frame " + new String(byte_Frame));
}
if(SPSPPS != null) {
// Create the format settinsg for the MediaCodec
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);// MIMETYPE: a two-part identifier for file formats and format contents
// Set the PPS and SPS frame
format.setByteBuffer("csd-0", ByteBuffer.wrap(byte_SPSPPS));
// Set the buffer size
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 100000);
try {
// Get an instance of MediaCodec and give it its Mime type
m_codec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
// Configure the Codec
m_codec.configure(format, new Surface(m_surface.getSurfaceTexture()), null, 0);
// Start the codec
m_codec.start();
// Create the AsyncTask to get the frames and decode them using the Codec
while (!Thread.interrupted()) {
// Get the next frame
byte[] frame = byte_Frame;
Log.e("zmq_tag", " frame " + new String(frame));
// Now we need to give it to the Codec to decode into the surface
// Get the input buffer from the decoder
int inputIndex = m_codec.dequeueInputBuffer(1);// Pass in -1 here as in this example we don't have a playback time reference
Log.e("zmq_tag", "inputIndex " + inputIndex);
// If the buffer number is valid use the buffer with that index
if (inputIndex >= 0) {
ByteBuffer buffer =m_codec.getInputBuffer(inputIndex);
buffer.put(frame);
// tell the decoder to process the frame
m_codec.queueInputBuffer(inputIndex, 0, frame.length, 0, 0);
}
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outputIndex = m_codec.dequeueOutputBuffer(info, 100000);
Log.e("zmq_tag", "outputIndex " + outputIndex);
if (outputIndex >= 0) {
m_codec.releaseOutputBuffer(outputIndex, true);
}
// wait for the next frame to be ready, our server makes a frame every 250ms
try {
Thread.sleep(250);
} catch (Exception e) {
e.printStackTrace();
}
}
} catch (Exception e) {
e.printStackTrace();
}
}
}
// close ØMQ socket
subscriber.close();
//terminate 0MQ context
context.term();
}
}
Sorry I can't comment, but I see some probable mistakes in your code:
Your KEY_MAX_INPUT_SIZE is wrong; it must be at least your Height * Width. In your case Height * Width = 1920 * 1080 = 2073600 > 100000, so you feed your decoder input buffer with data that can be larger than 100000 bytes, and since the decoder wants whole NALUs it probably won't like that.
You also do not clear the input buffer before pushing data (really needed?).
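A sketch of both fixes, reusing the question's names (byte_SPSPPS, m_codec, frame); the timeout value is arbitrary:
// 1) Size the decoder input buffers for at least one frame: 1920 * 1080 = 2073600 > 100000.
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
format.setByteBuffer("csd-0", ByteBuffer.wrap(byte_SPSPPS));
format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1920 * 1080);
// 2) Clear the input buffer before putting new frame data into it.
int inputIndex = m_codec.dequeueInputBuffer(10000);
if (inputIndex >= 0) {
    ByteBuffer buffer = m_codec.getInputBuffer(inputIndex);
    buffer.clear(); // reset position/limit left over from previous use
    buffer.put(frame);
    m_codec.queueInputBuffer(inputIndex, 0, frame.length, 0, 0);
}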

How to get YUV frames from Dji SDK?

I am working on an Android app which renders data from the drone. I am able to render the raw frames on a SurfaceView after decoding them with the DJIVideoStreamDecoder class. I want the YUV frames from the decoder class, i.e. DJIVideoStreamDecoder.
The problem is I am not getting continuous YUV frames from the YUV listener. As per the docs we need to set DJIVideoStreamDecoder.getInstance().changeSurface(null); it works for a few frames and then it stops producing the YUV data. Let me paste the decoding class which gives me the YUV frames.
private void initCodec() {
if (width == 0 || height == 0) {
return;
}
Log.e("OBJ","codec inside initcodec"+codec);
if (codec != null) {
releaseCodec();
}
loge("initVideoDecoder----------------------------------------------------------");
loge("initVideoDecoder video width = " + width + " height = " + height);
// create the media format
MediaFormat format = MediaFormat.createVideoFormat(VIDEO_ENCODING_FORMAT, width, height);
if (surface == null) {
Log.i(TAG,"initVideoDecoder: yuv output");
// The surface is null, which means that the yuv data is needed, so the color format should
// be set to YUV420.
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
} else {
Log.i(TAG,"initVideoDecoder: display");
// The surface is set, so the color format should be set to format surface.
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
}
try {
// Create the codec instance.
codec = MediaCodec.createDecoderByType(VIDEO_ENCODING_FORMAT);
Log.i(TAG, "initVideoDecoder create: " + (codec == null));
// Configure the codec. What should be noted here is that the hardware decoder will not output
// any yuv data if a surface is configured in, which means that if you want the yuv frames, you
// should set a "null" surface when calling the "configure" method of MediaCodec.
codec.configure(format, surface, null, 0);
Log.i(TAG, "initVideoDecoder configure");
// codec.configure(format, null, null, 0);
if (codec == null) {
Log.e(TAG, "Can't find video info!");
return;
}
// Start the codec
codec.start();
Log.i(TAG, "initVideoDecoder start");
// Get the input and output buffers of hardware decoder
inputBuffers = codec.getInputBuffers();
outputBuffers = codec.getOutputBuffers();
Log.i(TAG, "initVideoDecoder get buffers");
} catch (Exception e) {
Log.i(TAG, "init codec failed, do it again: "+ e.getMessage());
if (e instanceof MediaCodec.CodecException) {
MediaCodec.CodecException ce = (MediaCodec.CodecException) e;
ce.printStackTrace();
}
e.printStackTrace();
}
}
And decoder class :
private void decodeFrame() throws Exception {
DJIFrame inputFrame = frameQueue.poll();
if (inputFrame == null) {
return;
}
if (codec == null) {
if (dataHandler != null && !dataHandler.hasMessages(MSG_INIT_CODEC)) {
dataHandler.sendEmptyMessage(MSG_INIT_CODEC);
}
}
int inIndex = -1;
// Get input buffer index of the MediaCodec.
for (int i = 0; i < CODEC_DEQUEUE_INPUT_QUEUE_RETRY && inIndex < 0; i ++) {
try {
inIndex = codec.dequeueInputBuffer(0);
} catch (IllegalStateException e) {
logd(TAG, "decodeFrame: dequeue input: " + e);
codec.stop();
codec.reset();
initCodec();
e.printStackTrace();
}
}
logd(TAG, "decodeFrame: index=" + inIndex);
Log.e("OBJ","index "+inIndex);
// Decode the frame using MediaCodec
if (inIndex >= 0) {
ByteBuffer buffer = inputBuffers[inIndex];
buffer.clear();
buffer.rewind();
buffer.put(inputFrame.videoBuffer);
inputFrame.fedIntoCodecTime = System.currentTimeMillis();
long queueingDelay = inputFrame.getQueueDelay();
logd("input frame delay: " + queueingDelay);
// Feed the frame data to the decoder.
codec.queueInputBuffer(inIndex, 0, inputFrame.size, inputFrame.pts, 0);
hasIFrameInCodec = true;
// Get the output data from the decoder.
int outIndex = -1;
outIndex = codec.dequeueOutputBuffer(bufferInfo, 0);
Log.e("OBJ","Outputindex"+outIndex);
logd(TAG, "decodeFrame: outIndex: " + outIndex);
if (outIndex >= 0) {
if ( surface == null && yuvDataListener != null) {
//if (yuvDataListener != null) {
// If the surface is null, the yuv data should be get from the buffer and invoke the callback.
logd("decodeFrame: need callback");
ByteBuffer yuvDataBuf = outputBuffers[outIndex];
yuvDataBuf.position(bufferInfo.offset);
yuvDataBuf.limit(bufferInfo.size - bufferInfo.offset);
final byte[] bytes = new byte[bufferInfo.size - bufferInfo.offset];
yuvDataBuf.get(bytes);
callbackHandler.post(new Runnable() {
@Override
public void run() {
yuvDataListener.onYuvDataReceived(bytes, width, height);
}
});
}
// All the output buffer must be release no matter whether the yuv data is output or
// not, so that the codec can reuse the buffer.
codec.releaseOutputBuffer(outIndex, true);
} else if (outIndex == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// The output buffer set is changed. So the decoder should be reinitialized and the
// output buffers should be retrieved.
long curTime = System.currentTimeMillis();
bufferChangedQueue.addLast(curTime);
if (bufferChangedQueue.size() >= 10) {
long headTime = bufferChangedQueue.pollFirst();
if (curTime - headTime < 1000) {
// reset decoder
loge("Reset decoder. Get INFO_OUTPUT_BUFFERS_CHANGED more than 10 times within OnReceive second.");
bufferChangedQueue.clear();
dataHandler.removeCallbacksAndMessages(null);
dataHandler.sendEmptyMessage(MSG_INIT_CODEC);
return;
}
}
if (outputBuffers == null) {
return;
}
outputBuffers = codec.getOutputBuffers();
} else if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
loge("format changed, color: " + codec.getOutputFormat().getInteger(MediaFormat.KEY_COLOR_FORMAT));
}
}
}
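One more thing worth checking in decodeFrame() above (an observation, not from the DJI docs): the output buffer is sliced with limit(bufferInfo.size - bufferInfo.offset), while the usual MediaCodec convention is position = offset and limit = offset + size. A sketch of the conventional slicing:
// Standard BufferInfo slicing before copying the YUV bytes out of the codec.
ByteBuffer yuvDataBuf = outputBuffers[outIndex];
yuvDataBuf.position(bufferInfo.offset);
yuvDataBuf.limit(bufferInfo.offset + bufferInfo.size); // not size - offset
final byte[] bytes = new byte[bufferInfo.size];
yuvDataBuf.get(bytes);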

Recoding one H.264 video to another using opengl surfaces is very slow on my android

I'm developing a function that translates one video into another, with additional effects applied to each frame. I decided to use OpenGL ES to apply the effects to each frame. My input and output videos are MP4 files using the H.264 codec.
I use the MediaCodec API (Android API 18+) to decode the H.264 into an OpenGL texture, then draw on the surface using this texture with my shader.
I thought that using MediaCodec with H.264 would give hardware decoding on Android and would be fast. But it turned out that it is not.
Recoding a small 432x240, 15-second video consumed 28 seconds of total time!
Please take a look at my code + profile information and share some advice or criticism if I'm doing something wrong.
My code:
private void editVideoFile()
{
if (VERBOSE)
{
Log.d(TAG, "editVideoFile " + mWidth + "x" + mHeight);
}
MediaCodec decoder = null;
MediaCodec encoder = null;
InputSurface inputSurface = null;
OutputSurface outputSurface = null;
try
{
File inputFile = new File(FILES_DIR, INPUT_FILE); // must be an absolute path
// The MediaExtractor error messages aren't very useful. Check to see if the input
// file exists so we can throw a better one if it's not there.
if (!inputFile.canRead())
{
throw new FileNotFoundException("Unable to read " + inputFile);
}
extractor = new MediaExtractor();
extractor.setDataSource(inputFile.toString());
int trackIndex = inVideoTrackIndex = selectTrack(extractor);
if (trackIndex < 0)
{
throw new RuntimeException("No video track found in " + inputFile);
}
extractor.selectTrack(trackIndex);
MediaFormat inputFormat = extractor.getTrackFormat(trackIndex);
mWidth = inputFormat.getInteger(MediaFormat.KEY_WIDTH);
mHeight = inputFormat.getInteger(MediaFormat.KEY_HEIGHT);
if (VERBOSE)
{
Log.d(TAG, "Video size is " + mWidth + "x" + mHeight);
}
// Create an encoder format that matches the input format. (Might be able to just
// re-use the format used to generate the video, since we want it to be the same.)
MediaFormat outputFormat = MediaFormat.createVideoFormat(MIME_TYPE, mWidth, mHeight);
outputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT,
MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
outputFormat.setInteger(MediaFormat.KEY_BIT_RATE,
getFormatValue(inputFormat, MediaFormat.KEY_BIT_RATE, BIT_RATE));
outputFormat.setInteger(MediaFormat.KEY_FRAME_RATE,
getFormatValue(inputFormat, MediaFormat.KEY_FRAME_RATE, FRAME_RATE));
outputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL,
getFormatValue(inputFormat,MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL));
try
{
encoder = MediaCodec.createEncoderByType(MIME_TYPE);
}
catch (IOException iex)
{
throw new RuntimeException(iex);
}
encoder.configure(outputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
inputSurface = new InputSurface(encoder.createInputSurface());
inputSurface.makeCurrent();
encoder.start();
// Output filename. Ideally this would use Context.getFilesDir() rather than a
// hard-coded output directory.
String outputPath = new File(OUTPUT_DIR,
"transformed-" + mWidth + "x" + mHeight + ".mp4").toString();
Log.d(TAG, "output file is " + outputPath);
// Create a MediaMuxer. We can't add the video track and start() the muxer here,
// because our MediaFormat doesn't have the Magic Goodies. These can only be
// obtained from the encoder after it has started processing data.
//
// We're not actually interested in multiplexing audio. We just want to convert
// the raw H.264 elementary stream we get from MediaCodec into a .mp4 file.
try
{
mMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
}
catch (IOException ioe)
{
throw new RuntimeException("MediaMuxer creation failed", ioe);
}
mTrackIndex = -1;
mMuxerStarted = false;
// OutputSurface uses the EGL context created by InputSurface.
try
{
decoder = MediaCodec.createDecoderByType(MIME_TYPE);
}
catch (IOException iex)
{
throw new RuntimeException(iex);
}
outputSurface = new OutputSurface();
outputSurface.changeFragmentShader(FRAGMENT_SHADER);
decoder.configure(inputFormat, outputSurface.getSurface(), null, 0);
decoder.start();
editVideoData(decoder, outputSurface, inputSurface, encoder);
}
catch (Exception ex)
{
Log.e(TAG, "Error processing", ex);
throw new RuntimeException(ex);
}
finally
{
if (VERBOSE)
{
Log.d(TAG, "shutting down encoder, decoder");
}
if (outputSurface != null)
{
outputSurface.release();
}
if (inputSurface != null)
{
inputSurface.release();
}
if (encoder != null)
{
encoder.stop();
encoder.release();
}
if (decoder != null)
{
decoder.stop();
decoder.release();
}
if (mMuxer != null)
{
mMuxer.stop();
mMuxer.release();
mMuxer = null;
}
}
}
/**
* Selects the video track, if any.
*
* @return the track index, or -1 if no video track is found.
*/
private int selectTrack(MediaExtractor extractor)
{
// Select the first video track we find, ignore the rest.
int numTracks = extractor.getTrackCount();
for (int i = 0; i < numTracks; i++)
{
MediaFormat format = extractor.getTrackFormat(i);
String mime = format.getString(MediaFormat.KEY_MIME);
if (mime.startsWith("video/"))
{
if (VERBOSE)
{
Log.d(TAG, "Extractor selected track " + i + " (" + mime + "): " + format);
}
return i;
}
}
return -1;
}
/**
* Edits a stream of video data.
*/
private void editVideoData(MediaCodec decoder,
OutputSurface outputSurface, InputSurface inputSurface, MediaCodec encoder)
{
final int TIMEOUT_USEC = 10000;
ByteBuffer[] decoderInputBuffers = decoder.getInputBuffers();
ByteBuffer[] encoderOutputBuffers = encoder.getOutputBuffers();
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int inputChunk = 0;
boolean outputDone = false;
boolean inputDone = false;
boolean decoderDone = false;
while (!outputDone)
{
if (VERBOSE)
{
Log.d(TAG, "edit loop");
}
// Feed more data to the decoder.
if (!inputDone)
{
int inputBufIndex = decoder.dequeueInputBuffer(TIMEOUT_USEC);
if (inputBufIndex >= 0)
{
ByteBuffer inputBuf = decoderInputBuffers[inputBufIndex];
// Read the sample data into the ByteBuffer. This neither respects nor
// updates inputBuf's position, limit, etc.
int chunkSize = extractor.readSampleData(inputBuf, 0);
if (chunkSize < 0)
{
// End of stream -- send empty frame with EOS flag set.
decoder.queueInputBuffer(inputBufIndex, 0, 0, 0L,
MediaCodec.BUFFER_FLAG_END_OF_STREAM);
inputDone = true;
if (VERBOSE)
{
Log.d(TAG, "sent input EOS");
}
}
else
{
if (extractor.getSampleTrackIndex() != inVideoTrackIndex)
{
Log.w(TAG, "WEIRD: got sample from track " +
extractor.getSampleTrackIndex() + ", expected " + inVideoTrackIndex);
}
long presentationTimeUs = extractor.getSampleTime();
decoder.queueInputBuffer(inputBufIndex, 0, chunkSize,
presentationTimeUs, 0 /*flags*/);
if (VERBOSE)
{
Log.d(TAG, "submitted frame " + inputChunk + " to dec, size=" +
chunkSize);
}
inputChunk++;
extractor.advance();
}
}
else
{
if (VERBOSE)
{
Log.d(TAG, "input buffer not available");
}
}
}
// Assume output is available. Loop until both assumptions are false.
boolean decoderOutputAvailable = !decoderDone;
boolean encoderOutputAvailable = true;
while (decoderOutputAvailable || encoderOutputAvailable)
{
// Start by draining any pending output from the encoder. It's important to
// do this before we try to stuff any more data in.
int encoderStatus = encoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER)
{
// no output available yet
if (VERBOSE)
{
Log.d(TAG, "no output from encoder available");
}
encoderOutputAvailable = false;
}
else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED)
{
encoderOutputBuffers = encoder.getOutputBuffers();
if (VERBOSE)
{
Log.d(TAG, "encoder output buffers changed");
}
}
else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED)
{
if (mMuxerStarted)
{
throw new RuntimeException("format changed twice");
}
MediaFormat newFormat = encoder.getOutputFormat();
Log.d(TAG, "encoder output format changed: " + newFormat);
// now that we have the Magic Goodies, start the muxer
mTrackIndex = mMuxer.addTrack(newFormat);
mMuxer.start();
mMuxerStarted = true;
}
else if (encoderStatus < 0)
{
throw new RuntimeException("unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
}
else
{ // encoderStatus >= 0
ByteBuffer encodedData = encoderOutputBuffers[encoderStatus];
if (encodedData == null)
{
throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
}
if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0)
{
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
if (VERBOSE)
{
Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
}
info.size = 0;
}
// Write the data to the output "file".
if (info.size != 0)
{
if (!mMuxerStarted)
{
throw new RuntimeException("muxer hasn't started");
}
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(info.offset);
encodedData.limit(info.offset + info.size);
mMuxer.writeSampleData(mTrackIndex, encodedData, info);
if (VERBOSE)
{
Log.d(TAG, "sent " + info.size + " bytes to muxer");
}
}
outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
encoder.releaseOutputBuffer(encoderStatus, false);
}
if (encoderStatus != MediaCodec.INFO_TRY_AGAIN_LATER)
{
// Continue attempts to drain output.
continue;
}
// Encoder is drained, check to see if we've got a new frame of output from
// the decoder. (The output is going to a Surface, rather than a ByteBuffer,
// but we still get information through BufferInfo.)
if (!decoderDone)
{
int decoderStatus = decoder.dequeueOutputBuffer(info, TIMEOUT_USEC);
if (decoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER)
{
// no output available yet
if (VERBOSE)
{
Log.d(TAG, "no output from decoder available");
}
decoderOutputAvailable = false;
}
else if (decoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED)
{
//decoderOutputBuffers = decoder.getOutputBuffers();
if (VERBOSE)
{
Log.d(TAG, "decoder output buffers changed (we don't care)");
}
}
else if (decoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED)
{
// expected before first buffer of data
MediaFormat newFormat = decoder.getOutputFormat();
if (VERBOSE)
{
Log.d(TAG, "decoder output format changed: " + newFormat);
}
}
else if (decoderStatus < 0)
{
throw new RuntimeException("unexpected result from decoder.dequeueOutputBuffer: " + decoderStatus);
}
else
{ // decoderStatus >= 0
if (VERBOSE)
{
Log.d(TAG, "surface decoder given buffer "
+ decoderStatus + " (size=" + info.size + ")");
}
// The ByteBuffers are null references, but we still get a nonzero
// size for the decoded data.
boolean doRender = (info.size != 0);
// As soon as we call releaseOutputBuffer, the buffer will be forwarded
// to SurfaceTexture to convert to a texture. The API doesn't
// guarantee that the texture will be available before the call
// returns, so we need to wait for the onFrameAvailable callback to
// fire. If we don't wait, we risk rendering from the previous frame.
decoder.releaseOutputBuffer(decoderStatus, doRender);
if (doRender)
{
// This waits for the image and renders it after it arrives.
if (VERBOSE)
{
Log.d(TAG, "awaiting frame");
}
outputSurface.awaitNewImage();
outputSurface.drawImage();
// Send it to the encoder.
inputSurface.setPresentationTime(info.presentationTimeUs * 1000);
if (VERBOSE)
{
Log.d(TAG, "swapBuffers");
}
inputSurface.swapBuffers();
}
if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0)
{
// forward decoder EOS to encoder
if (VERBOSE)
{
Log.d(TAG, "signaling input EOS");
}
if (WORK_AROUND_BUGS)
{
// Bail early, possibly dropping a frame.
return;
}
else
{
encoder.signalEndOfInputStream();
}
}
}
}
}
}
}
And profile information:
Tested on Samsung Galaxy Note3 Intl (Qualcomm)
Your issue probably is in how you synchronously wait for events on one single thread, with a nonzero timeout.
You could probably get better throughput if you lower the timeout. Most of the hardware codecs work with a bit of latency; you can have a good total throughput, but don't expect to have a result (a frame encoded or decoded) immediately.
Ideally, you would use a zero timeout to check all inputs/outputs of both encoder and decoder, and in case there's no free buffers on either points, wait with a nonzero timeout on e.g. encoder output or decoder output.
If you can target Android 5.0, with asynchronous mode in MediaCodec, it's much easier to get this done right. See e.g. https://github.com/mstorsjo/android-decodeencodetest for an example on how to do this. See also https://stackoverflow.com/a/35885471/3115956 for a longer discussion on this issue.
You can also have a look at some similar questions.
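A minimal sketch of that asynchronous mode (API 21+); the callback bodies are left as comments:
// setCallback() must be called before configure(); buffers then arrive through
// callbacks instead of dequeue*Buffer polling with timeouts.
codec.setCallback(new MediaCodec.Callback() {
    @Override
    public void onInputBufferAvailable(MediaCodec mc, int index) {
        // fill mc.getInputBuffer(index), then mc.queueInputBuffer(...)
    }
    @Override
    public void onOutputBufferAvailable(MediaCodec mc, int index, MediaCodec.BufferInfo info) {
        // consume mc.getOutputBuffer(index), then mc.releaseOutputBuffer(index, render)
    }
    @Override
    public void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
        // e.g. add the track to the muxer and start it here
    }
    @Override
    public void onError(MediaCodec mc, MediaCodec.CodecException e) {
        // handle the failure
    }
});
codec.configure(format, surface, null, 0); // or with CONFIGURE_FLAG_ENCODE for the encoder
codec.start();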

How to play raw h264 produced by MediaCodec encoder?

I'm a bit new when it comes to MediaCodec (and video encoding/decoding in general), so correct me if anything I say here is wrong.
I want to play the raw h264 output of MediaCodec with VLC/ffplay. I need this to play because my end goal is to stream some live video to a computer, and MediaMuxer only produces a file on disk rather than something I can stream with (very) low latency to a desktop. (I'm open to other solutions, but I have not found anything else that fits the latency requirement.)
Here is the code I'm using to encode the video and write it to a file (it's based on the MediaCodec example found here, only with the MediaMuxer part removed):
package com.jackos2500.droidtop;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;
import android.opengl.EGLExt;
import android.opengl.EGLSurface;
import android.opengl.GLES20;
import android.os.Environment;
import android.util.Log;
import android.view.Surface;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
public class StreamH264 {
private static final String TAG = "StreamH264";
private static final boolean VERBOSE = true; // lots of logging
// where to put the output file (note: /sdcard requires WRITE_EXTERNAL_STORAGE permission)
private static final File OUTPUT_DIR = Environment.getExternalStorageDirectory();
public static int MEGABIT = 1000 * 1000;
private static final int IFRAME_INTERVAL = 10;
private static final int TEST_R0 = 0;
private static final int TEST_G0 = 136;
private static final int TEST_B0 = 0;
private static final int TEST_R1 = 236;
private static final int TEST_G1 = 50;
private static final int TEST_B1 = 186;
private MediaCodec codec;
private CodecInputSurface inputSurface;
private BufferedOutputStream out;
private MediaCodec.BufferInfo bufferInfo;
public StreamH264() {
}
private void prepareEncoder() throws IOException {
bufferInfo = new MediaCodec.BufferInfo();
MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2 * MEGABIT);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, IFRAME_INTERVAL);
codec = MediaCodec.createEncoderByType("video/avc");
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
inputSurface = new CodecInputSurface(codec.createInputSurface());
codec.start();
File dst = new File(OUTPUT_DIR, "test.264");
out = new BufferedOutputStream(new FileOutputStream(dst));
}
private void releaseEncoder() throws IOException {
if (VERBOSE) Log.d(TAG, "releasing encoder objects");
if (codec != null) {
codec.stop();
codec.release();
codec = null;
}
if (inputSurface != null) {
inputSurface.release();
inputSurface = null;
}
if (out != null) {
out.flush();
out.close();
out = null;
}
}
public void stream() throws IOException {
try {
prepareEncoder();
inputSurface.makeCurrent();
for (int i = 0; i < (30 * 5); i++) {
// Feed any pending encoder output into the file.
drainEncoder(false);
// Generate a new frame of input.
generateSurfaceFrame(i);
inputSurface.setPresentationTime(computePresentationTimeNsec(i, 30));
// Submit it to the encoder. The eglSwapBuffers call will block if the input
// is full, which would be bad if it stayed full until we dequeued an output
// buffer (which we can't do, since we're stuck here). So long as we fully drain
// the encoder before supplying additional input, the system guarantees that we
// can supply another frame without blocking.
if (VERBOSE) Log.d(TAG, "sending frame " + i + " to encoder");
inputSurface.swapBuffers();
}
// send end-of-stream to encoder, and drain remaining output
drainEncoder(true);
} finally {
// release encoder, muxer, and input Surface
releaseEncoder();
}
}
private void drainEncoder(boolean endOfStream) throws IOException {
final int TIMEOUT_USEC = 10000;
if (VERBOSE) Log.d(TAG, "drainEncoder(" + endOfStream + ")");
if (endOfStream) {
if (VERBOSE) Log.d(TAG, "sending EOS to encoder");
codec.signalEndOfInputStream();
}
ByteBuffer[] outputBuffers = codec.getOutputBuffers();
while (true) {
int encoderStatus = codec.dequeueOutputBuffer(bufferInfo, TIMEOUT_USEC);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
if (!endOfStream) {
break; // out of while
} else {
if (VERBOSE) Log.d(TAG, "no output available, spinning to await EOS");
}
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED) {
// not expected for an encoder
outputBuffers = codec.getOutputBuffers();
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// should happen before receiving buffers, and should only happen once
MediaFormat newFormat = codec.getOutputFormat();
Log.d(TAG, "encoder output format changed: " + newFormat);
} else if (encoderStatus < 0) {
Log.w(TAG, "unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
// let's ignore it
} else {
ByteBuffer encodedData = outputBuffers[encoderStatus];
if (encodedData == null) {
throw new RuntimeException("encoderOutputBuffer " + encoderStatus + " was null");
}
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
// The codec config data was pulled out and fed to the muxer when we got
// the INFO_OUTPUT_FORMAT_CHANGED status. Ignore it.
if (VERBOSE) Log.d(TAG, "ignoring BUFFER_FLAG_CODEC_CONFIG");
bufferInfo.size = 0;
}
if (bufferInfo.size != 0) {
// adjust the ByteBuffer values to match BufferInfo (not needed?)
encodedData.position(bufferInfo.offset);
encodedData.limit(bufferInfo.offset + bufferInfo.size);
byte[] data = new byte[bufferInfo.size];
encodedData.get(data);
out.write(data);
if (VERBOSE) Log.d(TAG, "sent " + bufferInfo.size + " bytes to file");
}
codec.releaseOutputBuffer(encoderStatus, false);
if ((bufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
if (!endOfStream) {
Log.w(TAG, "reached end of stream unexpectedly");
} else {
if (VERBOSE) Log.d(TAG, "end of stream reached");
}
break; // out of while
}
}
}
}
private void generateSurfaceFrame(int frameIndex) {
frameIndex %= 8;
int startX, startY;
if (frameIndex < 4) {
// (0,0) is bottom-left in GL
startX = frameIndex * (1280 / 4);
startY = 720 / 2;
} else {
startX = (7 - frameIndex) * (1280 / 4);
startY = 0;
}
GLES20.glClearColor(TEST_R0 / 255.0f, TEST_G0 / 255.0f, TEST_B0 / 255.0f, 1.0f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
GLES20.glScissor(startX, startY, 1280 / 4, 720 / 2);
GLES20.glClearColor(TEST_R1 / 255.0f, TEST_G1 / 255.0f, TEST_B1 / 255.0f, 1.0f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
}
private static long computePresentationTimeNsec(int frameIndex, int frameRate) {
final long ONE_BILLION = 1000000000;
return frameIndex * ONE_BILLION / frameRate;
}
/**
* Holds state associated with a Surface used for MediaCodec encoder input.
* <p>
* The constructor takes a Surface obtained from MediaCodec.createInputSurface(), and uses that
* to create an EGL window surface. Calls to eglSwapBuffers() cause a frame of data to be sent
* to the video encoder.
* <p>
* This object owns the Surface -- releasing this will release the Surface too.
*/
private static class CodecInputSurface {
private static final int EGL_RECORDABLE_ANDROID = 0x3142;
private EGLDisplay mEGLDisplay = EGL14.EGL_NO_DISPLAY;
private EGLContext mEGLContext = EGL14.EGL_NO_CONTEXT;
private EGLSurface mEGLSurface = EGL14.EGL_NO_SURFACE;
private Surface mSurface;
/**
* Creates a CodecInputSurface from a Surface.
*/
public CodecInputSurface(Surface surface) {
if (surface == null) {
throw new NullPointerException();
}
mSurface = surface;
eglSetup();
}
/**
* Prepares EGL. We want a GLES 2.0 context and a surface that supports recording.
*/
private void eglSetup() {
mEGLDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
if (mEGLDisplay == EGL14.EGL_NO_DISPLAY) {
throw new RuntimeException("unable to get EGL14 display");
}
int[] version = new int[2];
if (!EGL14.eglInitialize(mEGLDisplay, version, 0, version, 1)) {
throw new RuntimeException("unable to initialize EGL14");
}
// Configure EGL for recording and OpenGL ES 2.0.
int[] attribList = {
EGL14.EGL_RED_SIZE, 8,
EGL14.EGL_GREEN_SIZE, 8,
EGL14.EGL_BLUE_SIZE, 8,
EGL14.EGL_ALPHA_SIZE, 8,
EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
EGL_RECORDABLE_ANDROID, 1,
EGL14.EGL_NONE
};
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(mEGLDisplay, attribList, 0, configs, 0, configs.length,
numConfigs, 0);
checkEglError("eglCreateContext RGB888+recordable ES2");
// Configure context for OpenGL ES 2.0.
int[] attrib_list = {
EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
EGL14.EGL_NONE
};
mEGLContext = EGL14.eglCreateContext(mEGLDisplay, configs[0], EGL14.EGL_NO_CONTEXT,
attrib_list, 0);
checkEglError("eglCreateContext");
// Create a window surface, and attach it to the Surface we received.
int[] surfaceAttribs = {
EGL14.EGL_NONE
};
mEGLSurface = EGL14.eglCreateWindowSurface(mEGLDisplay, configs[0], mSurface,
surfaceAttribs, 0);
checkEglError("eglCreateWindowSurface");
}
/**
* Discards all resources held by this class, notably the EGL context. Also releases the
* Surface that was passed to our constructor.
*/
public void release() {
if (mEGLDisplay != EGL14.EGL_NO_DISPLAY) {
EGL14.eglMakeCurrent(mEGLDisplay, EGL14.EGL_NO_SURFACE, EGL14.EGL_NO_SURFACE,
EGL14.EGL_NO_CONTEXT);
EGL14.eglDestroySurface(mEGLDisplay, mEGLSurface);
EGL14.eglDestroyContext(mEGLDisplay, mEGLContext);
EGL14.eglReleaseThread();
EGL14.eglTerminate(mEGLDisplay);
}
mSurface.release();
mEGLDisplay = EGL14.EGL_NO_DISPLAY;
mEGLContext = EGL14.EGL_NO_CONTEXT;
mEGLSurface = EGL14.EGL_NO_SURFACE;
mSurface = null;
}
/**
* Makes our EGL context and surface current.
*/
public void makeCurrent() {
EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
checkEglError("eglMakeCurrent");
}
/**
* Calls eglSwapBuffers. Use this to "publish" the current frame.
*/
public boolean swapBuffers() {
boolean result = EGL14.eglSwapBuffers(mEGLDisplay, mEGLSurface);
checkEglError("eglSwapBuffers");
return result;
}
/**
* Sends the presentation time stamp to EGL. Time is expressed in nanoseconds.
*/
public void setPresentationTime(long nsecs) {
EGLExt.eglPresentationTimeANDROID(mEGLDisplay, mEGLSurface, nsecs);
checkEglError("eglPresentationTimeANDROID");
}
/**
* Checks for EGL errors. Throws an exception if one is found.
*/
private void checkEglError(String msg) {
int error;
if ((error = EGL14.eglGetError()) != EGL14.EGL_SUCCESS) {
throw new RuntimeException(msg + ": EGL error: 0x" + Integer.toHexString(error));
}
}
}
}
However, the file produced by this code does not play with VLC or ffplay. Can anyone tell me what I'm doing wrong? I believe it is due to an incorrect format (or total lack) of the headers required to play raw h264, as I have had success playing .264 files downloaded from the internet with ffplay. Also, I'm not sure exactly how I'm going to stream this video to a computer, so if somebody could give me some suggestions as to how I might do that, I would be very grateful! Thanks!
You should be able to play back a raw H264 stream (as you wrote, other raw .264 files play back just fine with VLC or ffplay), but you are missing the parameter sets. These are passed in two different ways, and you happen to be missing both. First they are returned in MediaFormat when you get MediaCodec.INFO_OUTPUT_FORMAT_CHANGED (which you don't handle, you just log a message about it), secondly they are returned in a buffer with MediaCodec.BUFFER_FLAG_CODEC_CONFIG set (which you ignore by setting the size to 0). The simplest solution here is to remove the special case handling of MediaCodec.BUFFER_FLAG_CODEC_CONFIG, and it should all work just fine.
The code you've based it on does things this way in order to test all the different ways of doing things - where you copied it from, the parameter sets were carried in the MediaFormat from MediaCodec.INFO_OUTPUT_FORMAT_CHANGED. If you wanted to use that in your case with a raw H264 bytestream, you could write the byte buffers with keys csd-0 and csd-1 from the MediaFormat and keep ignoring the buffers with MediaCodec.BUFFER_FLAG_CODEC_CONFIG set.
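A sketch of that second approach against the question's drainEncoder() (out is the question's BufferedOutputStream; for AVC encoders the parameter sets are exposed as "csd-0" and "csd-1" and typically already carry Annex-B start codes):
if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    MediaFormat newFormat = codec.getOutputFormat();
    for (String key : new String[] {"csd-0", "csd-1"}) {
        ByteBuffer csd = newFormat.getByteBuffer(key);
        if (csd != null) {
            byte[] header = new byte[csd.remaining()];
            csd.get(header);
            out.write(header); // SPS/PPS written once, at the start of the stream
        }
    }
}
With the headers written this way, keep ignoring the BUFFER_FLAG_CODEC_CONFIG buffers as the question's code already does.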
You cannot play just raw h264; it does not have any information about the format. You can also find several great examples here. In order to stream, you need to implement some streaming protocol like RTSP (in the case of real-time streaming) or the more flexible HLS (if real time is not required).
