I've compiled an FFmpeg library with libx264 for use on Android, using the NDK.
I want to encode an MPEG video file; however, the application fails when opening the encoder codec in avcodec_open2.
The FFmpeg logs I receive from avcodec_open2 are below, with the function returning -22.
Picture size %ux%u is invalid.
ignoring invalid width/height values
Specified pix_fmt is not supported
On Windows this code works fine; it's only on Android that there is a failure. Any ideas why this would fail on Android?
if (!(codec = avcodec_find_encoder(AV_CODEC_ID_MPEG1VIDEO)))
{
    return -1;
}

// Allocate context based on codec
if (!(context = avcodec_alloc_context3(codec)))
{
    return -2;
}

// Set up context
// put sample parameters
context->bit_rate = 4000000;
// resolution must be a multiple of two
context->width = 1280;
context->height = 720;
// frames per second
context->time_base = (AVRational){1, 25};
context->inter_quant_bias = 96;
context->gop_size = 10;
context->max_b_frames = 1;
// IDs
context->pix_fmt = AV_PIX_FMT_YUV420P;
context->codec_id = AV_CODEC_ID_MPEG1VIDEO;
context->codec_type = AVMEDIA_TYPE_VIDEO;

if (context->codec_id == AV_CODEC_ID_H264)
{
    av_opt_set(context->priv_data, "preset", "slow", 0);
}

if ((result = avcodec_open2(context, codec, NULL)) < 0)
{
    // Failed opening codec!
}
This problem was caused by building FFmpeg from outdated source code.
I got the most recent source from https://www.ffmpeg.org/, compiled it the same way, and the new library works fine.
Note: I hadn't considered the full licensing implications of using libx264. I've since dropped it.
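In case anyone hits the same thing: a quick way to confirm which libavcodec your app actually loads is to log its version at runtime before doing anything else. A minimal sketch (av_version_info() needs a reasonably recent FFmpeg; the log tag is arbitrary):

#include <android/log.h>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>
}

static void logFfmpegVersion()
{
    // avcodec_version() packs MAJOR.MINOR.MICRO into a single integer
    unsigned v = avcodec_version();
    __android_log_print(ANDROID_LOG_INFO, "ffmpeg",
                        "libavcodec %u.%u.%u, FFmpeg %s",
                        AV_VERSION_MAJOR(v), AV_VERSION_MINOR(v), AV_VERSION_MICRO(v),
                        av_version_info());
}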
I'm developing an Android native app where I record the screen video stream (encoding it with the native AMediaCodec library to video/avc) and I mux it with an AAC / audio/mp4a-latm audio track. My code works just fine on several devices, but I got some problems with some devices (Huawei P8 and P10 lite, running Android 6.0.0 and 7.0 respectively, and Nexus 5 running Android 6.0.1). The issue is that, whenever I try to add the second track to the muxer (no matter the order I add them), it fails returning a -10000 error code.
I've simplified the problem, trying to just mux an audio and a video file together; the results are the same. In this simplified version I use two AMediaExtractor's to get the audio and video formats to configure the AMediaMuxer, but when I add the second track I still get an error. Here's the code:
const auto videoSample = "videosample.mp4";
const auto audioSample = "audiosample.aac";
const auto filePath = "muxed_file.mp4";
auto* extractorV = AMediaExtractor_new();
AMediaExtractor_setDataSource(extractorV, videoSample);
AMediaExtractor_selectTrack(extractorV, 0U); // here I take care to select the right "video/avc" track
auto* videoFormat = AMediaExtractor_getTrackFormat(extractorV, 0U);
auto* extractorA = AMediaExtractor_new();
AMediaExtractor_setDataSource(extractorA, audioSample);
AMediaExtractor_selectTrack(extractorA, 0U); // here I take care to select the right "mp4a-latm" track
auto* audioFormat = AMediaExtractor_getTrackFormat(extractorA, 0U);
auto fd = open(filePath, O_WRONLY | O_CREAT | O_TRUNC, 0666);
auto* muxer = AMediaMuxer_new(fd, AMEDIAMUXER_OUTPUT_FORMAT_MPEG_4);
auto videoTrack = AMediaMuxer_addTrack(muxer, videoFormat); // the operation succeeds: videoTrack is 0
auto audioTrack = AMediaMuxer_addTrack(muxer, audioFormat); // error: audioTrack is -10000
AMediaExtractor_seekTo(extractorV, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
AMediaExtractor_seekTo(extractorA, 0, AMEDIAEXTRACTOR_SEEK_CLOSEST_SYNC);
AMediaMuxer_start(muxer);
Is there something wrong with my code? Is it something that is not supposed to work on Android prior to 8, or is it a pure coincidence? I've read a lot of posts (especially by @fadden) here on SO, but I'm not able to figure it out.
Let me give you some context:
the failure is independent of the order in which I add the two tracks: it is always the second AMediaMuxer_addTrack() call that fails
the audio and video tracks themselves should be OK: when I mux only one of the tracks, everything works well even on the Huaweis and the Nexus 5, and I obtain correct output files with either the audio or the video track alone
I tried to move the AMediaExtractor_seekTo() calls to other positions, without success
the same code works just fine on other devices (OnePlus 5 and Nokia 7 plus, both running Android >= 8.0)
Just for completeness, this is the code I later use to obtain the output mp4 file:
AMediaMuxer_start(muxer);

// mux the VIDEO track
std::array<uint8_t, 256U * 1024U> videoBuf;
AMediaCodecBufferInfo videoBufInfo{};
videoBufInfo.flags = AMediaExtractor_getSampleFlags(extractorV);

bool videoEos{};
while (!videoEos) {
    auto ts = AMediaExtractor_getSampleTime(extractorV);
    videoBufInfo.presentationTimeUs = std::max(videoBufInfo.presentationTimeUs, ts);
    videoBufInfo.size = AMediaExtractor_readSampleData(extractorV, videoBuf.data(), videoBuf.size());
    if (videoBufInfo.presentationTimeUs == -1 || videoBufInfo.size < 0) {
        videoEos = true;
    } else {
        AMediaMuxer_writeSampleData(muxer, videoTrack, videoBuf.data(), &videoBufInfo);
        AMediaExtractor_advance(extractorV);
    }
}

// mux the audio track
std::array<uint8_t, 256U * 1024U> audioBuf;
AMediaCodecBufferInfo audioBufInfo{};
audioBufInfo.flags = AMediaExtractor_getSampleFlags(extractorA);

bool audioEos{};
while (!audioEos) {
    audioBufInfo.size = AMediaExtractor_readSampleData(extractorA, audioBuf.data(), audioBuf.size());
    if (audioBufInfo.size < 0) {
        audioEos = true;
    } else {
        audioBufInfo.presentationTimeUs = AMediaExtractor_getSampleTime(extractorA);
        AMediaMuxer_writeSampleData(muxer, audioTrack, audioBuf.data(), &audioBufInfo);
        AMediaExtractor_advance(extractorA);
    }
}

AMediaMuxer_stop(muxer);
AMediaMuxer_delete(muxer);
close(fd);

AMediaFormat_delete(audioFormat);
AMediaExtractor_delete(extractorA);
AMediaFormat_delete(videoFormat);
AMediaExtractor_delete(extractorV);
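For debugging, I suppose one could dump both track formats right before adding them and diff the output of a working device against a failing one. A minimal sketch (AMediaFormat_toString is the only call not already used above):

#include <android/log.h>
#include <media/NdkMediaFormat.h>

// Log the formats returned by the extractors so the output of a working
// device can be compared against a failing one.
static void dumpFormat(const char* label, AMediaFormat* fmt)
{
    __android_log_print(ANDROID_LOG_INFO, "muxer-debug",
                        "%s: %s", label, AMediaFormat_toString(fmt));
}

// ...called right before AMediaMuxer_addTrack():
// dumpFormat("video", videoFormat);
// dumpFormat("audio", audioFormat);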
I am using the MediaCodec API to decode an H.264 video stream, using a SurfaceView as the output surface. The decoder is configured successfully without any errors. When I try to finally render the decoded video frame onto the SurfaceView using releaseOutputBuffer(bufferIndex, true), it throws a MediaCodec.CodecException; however, the video is rendered correctly.
Calling getDiagnosticInfo() and getErrorCode() on the exception object return an error code of -34, but I can't find in the docs what this error code means. The documentation is also very unclear about when this exception is thrown.
Has anyone faced this exception/error code before? How can I fix this?
PS: Although the video works fine, this exception is thrown on every releaseOutputBuffer(bufferIndex, true) call.
Android MediaCodec behaviour is very dependent on the device vendor. Samsung devices are incredibly problematic; other devices running the same code will run fine. This has been my life for the last 6 months.
The best approach, although it can feel wrong, is try + catch + retry. There are 4 distinct places where the MediaCodec will throw exceptions:
Configuration - NativeDecoder.Configure(...);
Start - NativeDecoder.Start();
Render output - NativeDecoder.ReleaseOutputBuffer(...);
Input - codec.QueueInputBuffer(...);
NOTE: my code is in Xamarin, but the calls map very closely to raw Java.
The way you configure your format description also matters. MediaCodec can crash on Nexus devices if you don't specify:
formatDescription.SetInteger(MediaFormat.KeyMaxInputSize, currentPalette.Width * currentPalette.Height);
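If you are driving the codec from native code instead, I believe the NDK equivalent would be something along these lines (untested sketch; width and height are placeholders):

#include <media/NdkMediaFormat.h>

// Same idea as MediaFormat.KEY_MAX_INPUT_SIZE, but via the NDK API.
void setMaxInputSize(AMediaFormat* format, int width, int height)
{
    AMediaFormat_setInt32(format, AMEDIAFORMAT_KEY_MAX_INPUT_SIZE, width * height);
}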
When you catch any exception you will need to ensure the MediaCodec is reset. Unfortunately reset() isn't available on older API levels, but you can simulate the same effect with:
#region Close + Release Native Decoder

void StopAndReleaseNativeDecoder() {
    FlushNativeDecoder();
    StopNativeDecoder();
    ReleaseNativeDecoder();
}

void FlushNativeDecoder() {
    if (NativeDecoder != null) {
        try {
            NativeDecoder.Flush();
        } catch {
            // ignore
        }
    }
}

void StopNativeDecoder() {
    if (NativeDecoder != null) {
        try {
            NativeDecoder.Stop();
        } catch {
            // ignore
        }
    }
}

void ReleaseNativeDecoder() {
    while (NativeDecoder != null) {
        try {
            NativeDecoder.Release();
        } catch {
            // ignore
        } finally {
            NativeDecoder = null;
        }
    }
}

#endregion
Once you catch the error, the next time you pass new input you can check:

if (!DroidDecoder.IsRunning && streamView != null && streamView.VideoLayer.IsAvailable) {
    DroidDecoder.StartDecoder(streamView.VideoLayer.SurfaceTexture);
}
DroidDecoder.DecodeH264FrameBuffer(payload, payloadSize, frameDuration, presentationTime, isKeyFrame);
Rendering to a TextureView seems to be the most stable option currently, but device fragmentation has really hurt Android in this area. We have found cheaper devices such as the Tesco Hudl to be among the most stable for video; we even had up to 21 concurrent videos on screen at one time. A Samsung S4 can handle around 4-6 depending on the resolution/fps, while something like the HTC can work as well as the Hudl. It's been a wake-up call and made me realise Samsung devices are literally copying Apple's design while twiddling with the Android SDK and breaking a lot of functionality along the way.
It is most probably an issue with the codec you are using. Try using something like this to select a codec that supports your MIME type:
private static MediaCodecInfo selectCodec(String mime) {
    int numCodecs = MediaCodecList.getCodecCount();
    for (int i = 0; i < numCodecs; i++) {
        MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
        if (!codecInfo.isEncoder()) {
            continue;
        }
        String[] types = codecInfo.getSupportedTypes();
        for (int j = 0; j < types.length; j++) {
            if (types[j].equalsIgnoreCase(mime)) {
                return codecInfo;
            }
        }
    }
    return null;
}
And then setting your encoder with:
MediaCodecInfo codecInfo = selectCodec(MIME_TYPE);
mEncoder = MediaCodec.createByCodecName(codecInfo.getName());
That may resolve your error by ensuring that the Codec you've chosen is fully supported.
I tried to get the SLDeviceVolumeItf interface of the RecorderObject on Android but I got the error: SL_RESULT_FEATURE_UNSUPPORTED.
I read that the Android implementation of OpenSL ES does not support volume setting for the AudioRecorder. Is that true?
If yes, is there a workaround? I have a VoIP application that does not work well on the Galaxy Nexus because of the very high mic gain.
I also tried to get SL_IID_ANDROIDCONFIGURATION to set the streamType to the new VOICE_COMMUNICATION audio source, but again I get error 12 (not supported).
// create audio recorder
const SLInterfaceID id[2] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_ANDROIDCONFIGURATION };
const SLboolean req[2] = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };
result = (*engine)->CreateAudioRecorder(engine, &recorderObject, &audioSrc, &audioSnk, 2, id, req);
if (SL_RESULT_SUCCESS != result) {
    return false;
}

SLAndroidConfigurationItf recorderConfig;
result = (*recorderObject)->GetInterface(recorderObject, SL_IID_ANDROIDCONFIGURATION, &recorderConfig);
if (result != SL_RESULT_SUCCESS) {
    error("failed to get SL_IID_ANDROIDCONFIGURATION interface. e == %d", result);
}
The recorderObject is created but I can't get the SL_IID_ANDROIDCONFIGURATION interface.
I tried it on Galaxy Nexus (ICS), HTC sense (ICS) and Motorola Blur (Gingerbread).
I'm using NDK version 6.
Now I can get the interface: I had to use NDK r8 and target android-14.
When I tried to use android-10 as the target, I got an error compiling the native code (dirent.h was not found), so I had to use android-14.
I ran into a similar problem: my calls were returning the error code for "feature unsupported". In my case the problem was that I wasn't creating the recorder with the SL_IID_ANDROIDCONFIGURATION interface requested.
apiLvl = (*env)->GetStaticIntField(env, versionClass, sdkIntFieldID);

SLint32 streamType = SL_ANDROID_RECORDING_PRESET_GENERIC;
if (apiLvl > 10) {
    streamType = SL_ANDROID_RECORDING_PRESET_VOICE_COMMUNICATION;
    I("set SL_ANDROID_RECORDING_PRESET_VOICE_COMMUNICATION");
}

result = (*recorderConfig)->SetConfiguration(recorderConfig, SL_ANDROID_KEY_RECORDING_PRESET, &streamType, sizeof(SLint32));
if (SL_RESULT_SUCCESS != result) {
    return 0;
}
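For completeness, my understanding of the required order is: request SL_IID_ANDROIDCONFIGURATION when creating the recorder, get the interface, call SetConfiguration, and only then Realize the recorder object. A rough sketch with error handling trimmed (audioSrc and audioSnk are assumed to be set up as in the question):

// Sketch only: error handling trimmed, audioSrc/audioSnk as in the question.
const SLInterfaceID ids[2] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE, SL_IID_ANDROIDCONFIGURATION };
const SLboolean req[2]     = { SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE };

(*engine)->CreateAudioRecorder(engine, &recorderObject, &audioSrc, &audioSnk, 2, ids, req);

// The configuration interface has to be used before Realize().
SLAndroidConfigurationItf recorderConfig;
(*recorderObject)->GetInterface(recorderObject, SL_IID_ANDROIDCONFIGURATION, &recorderConfig);

SLint32 preset = SL_ANDROID_RECORDING_PRESET_VOICE_COMMUNICATION;
(*recorderConfig)->SetConfiguration(recorderConfig, SL_ANDROID_KEY_RECORDING_PRESET,
                                    &preset, sizeof(preset));

// Only now realize the recorder object.
(*recorderObject)->Realize(recorderObject, SL_BOOLEAN_FALSE);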
I also tried to find a way to change the gain in OpenSL; it looks like there is no API/interface for that. I implemented a workaround with a simple shift-based gain multiplier:
void multiply_gain(void *buffer, int bytes, int gain_val)
{
    int i = 0, j = 0;
    short *buffer_samples = (short *)buffer;

    for (i = 0, j = 0; i < bytes; i += 2, j++)
    {
        buffer_samples[j] = (buffer_samples[j] >> gain_val);
    }
}
But here the gain is multiplied or divided (depending on << or >>) by a power of 2. If you need a smoother gain curve, you need to write a more complex digital gain function.
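For example, a fractional gain with clipping (an untested sketch, just to illustrate the idea) could look like this:

#include <stdint.h>

// Apply a fractional gain (e.g. 1.5f) to 16-bit PCM samples, clamping the
// result to avoid wrap-around distortion.
static void apply_gain(int16_t* samples, int sample_count, float gain)
{
    for (int i = 0; i < sample_count; ++i) {
        int32_t v = (int32_t)(samples[i] * gain);
        if (v > 32767)  v = 32767;
        if (v < -32768) v = -32768;
        samples[i] = (int16_t)v;
    }
}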
I am using the IP Webcam program on Android and receiving its stream on my PC over WiFi. What I want is to use OpenCV in Visual Studio (C++) to get that video stream; there is an option to get an MJPG stream at the following URL: http://MyIP:port/videofeed
How can I get it using OpenCV?
Old question, but I hope this can help someone (same as my answer here)
OpenCV expects a filename extension for its VideoCapture argument, even though one isn't always necessary (like in your case). You can "trick" it by passing in a dummy parameter which ends in the mjpg extension:
So perhaps try:
VideoCapture ipCam;
ipCam.open("http://MyIP:port/videofeed/?dummy=param.mjpg");
Install IP Camera Adapter and configure it to capture the video stream. Then install ManyCam and you'll see "MPEG Camera" in the camera section (you'll see the same instructions if you go to the link on how to set up IP Webcam for Skype).
Now you can access your MJPG stream just like a webcam through OpenCV. I tried this with OpenCV 2.2 + Qt and it works well.
Hope this helps.
I did a dirty patch to make OpenCV work with the Android IP Webcam app:
In the file OpenCV-2.3.1/modules/highgui/src/cap_ffmpeg_impl.hpp
In the function bool CvCapture_FFMPEG::open( const char* _filename )
replace:
int err = av_open_input_file(&ic, _filename, NULL, 0, NULL);
by
AVInputFormat* iformat = av_find_input_format("mjpeg");
int err = av_open_input_file(&ic, _filename, iformat, 0, NULL);
ic->iformat = iformat;
and comment out:
err = av_seek_frame(ic, video_stream, 10, 0);
if (err < 0)
{
    filename = (char*)malloc(strlen(_filename) + 1);
    strcpy(filename, _filename);
    // reopen videofile to 'seek' back to first frame
    reopen();
}
else
{
    // seek seems to work, so we don't need the filename,
    // but we still need to seek back to filestart
    filename = NULL;
    int64_t ts = video_st->first_dts;
    int flags = AVSEEK_FLAG_FRAME | AVSEEK_FLAG_BACKWARD;
    av_seek_frame(ic, video_stream, ts, flags);
}
That should work. Hope it helps.
This is the solution (I'm using IP Webcam on Android):
CvCapture* capture = 0;
capture = cvCaptureFromFile("http://IP:Port/videofeed?dummy=param.mjpg");
I am not able to comment, so I'm posting a new answer. The original answer has an error: it uses a / before dummy. Thanks for the solution.
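The same idea with the C++ API, in case you prefer cv::VideoCapture (a sketch; IP, port and path are placeholders):

#include "opencv2/highgui/highgui.hpp"

int main()
{
    // Dummy ".mjpg" suffix so OpenCV picks the right backend, as above.
    cv::VideoCapture ipCam("http://IP:Port/videofeed?dummy=param.mjpg");
    if (!ipCam.isOpened())
        return -1;

    cv::Mat frame;
    while (ipCam.read(frame)) {
        cv::imshow("IP Webcam", frame);
        if (cv::waitKey(10) == 27) // ESC to quit
            break;
    }
    return 0;
}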
Working example for me
// OpenCVTest.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include "opencv2/highgui/highgui.hpp"

/**
 * @function main
 */
int main( int argc, const char** argv )
{
    // Open the video stream once, outside the loop
    CvCapture* capture = cvCaptureFromFile("http://192.168.1.129:8080/webcam.mjpeg");
    IplImage* frame = 0;

    // create a window to display the stream
    cvNamedWindow("Sample Program", CV_WINDOW_AUTOSIZE);

    while (true)
    {
        // read and display the next frame
        frame = cvQueryFrame( capture );
        if (!frame)
            break;
        cvShowImage("Sample Program", frame);

        int c = cvWaitKey(10);
        if( (char)c == 27 ) { break; }
    }

    // clean up and release resources
    cvReleaseCapture(&capture);
    return 0;
}
Broadcast MJPEG from a webcam with VLC, as described at http://tumblr.martinml.com/post/2108887785/how-to-broadcast-a-mjpeg-stream-from-your-webcam-with
I am working on an android application in which a video is dynamically generated by compositing a sequence of animation frames. I tried to use the Android Media Recorder API for this but have not found a way to get it to accept a non-camera source as input. I have been attempting to use a FFMPEG port (based on the Rockplayer build) but am running into difficulties with missing functions since I am using it as an encoder, not a decoder.
The iPhone version of this app uses AVAssetWriter from the AVFoundation framework.
Is there an easier way to do this or am I stuck slugging it out with FFMPEG?
This may help (see the note on resolution though):
How to encode using the FFMpeg in Android (using H263)
I'm not sure whether they did a custom build of ffmpeg or not; if so, they may be able to offer advice on porting a more feature-complete version.
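If you do end up slugging it out with FFmpeg, the encoding loop itself is fairly small once the codec context is open. A rough sketch using the send/receive API from current FFmpeg builds (older versions use avcodec_encode_video2 instead; the onPacket callback is a placeholder for whatever muxing you do):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Encode one AVFrame and hand the resulting packets to a callback.
// Pass frame == nullptr at the end to flush the encoder.
static int encodeFrame(AVCodecContext* ctx, AVFrame* frame,
                       void (*onPacket)(AVPacket*))
{
    int ret = avcodec_send_frame(ctx, frame);
    if (ret < 0)
        return ret;

    AVPacket* pkt = av_packet_alloc();
    while ((ret = avcodec_receive_packet(ctx, pkt)) == 0) {
        onPacket(pkt);          // e.g. write to an MP4 muxer / file
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);

    // AVERROR(EAGAIN)/AVERROR_EOF just mean "no more packets right now".
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}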
-Anthony
OpenCV has a ViewBase class which takes the input from the camera as a frame and represents the frame as a bitmap. You can extend the ViewBase class and adapt it for your own use, even though installing OpenCV on Android isn't very easy.
When you extend SampleCvViewBase you override the following function, which you can use for this. It's pretty much hard work, but it's the best I can think of.
@Override
protected Bitmap processFrame(VideoCapture capture) {
    capture.retrieve(picture, Highgui.CV_CAP_ANDROID_COLOR_FRAME_RGBA);
    if (Utils.matToBitmap(picture, bmp))
        return bmp;
    bmp.recycle();
    return null;
}
You can use a pure Java open source library called JCodec ( http://jcodec.org ).
It contains a simple yet working H.264 encoder and MP4 muxer. The class below uses the JCodec low-level API and should be what you need (corrected):
public class SequenceEncoder {
    private SeekableByteChannel ch;
    private Picture toEncode;
    private RgbToYuv420 transform;
    private H264Encoder encoder;
    private ArrayList<ByteBuffer> spsList;
    private ArrayList<ByteBuffer> ppsList;
    private CompressedTrack outTrack;
    private ByteBuffer _out;
    private int frameNo;
    private MP4Muxer muxer;

    public SequenceEncoder(File out) throws IOException {
        this.ch = NIOUtils.writableFileChannel(out);

        // Transform to convert between RGB and YUV
        transform = new RgbToYuv420(0, 0);

        // Muxer that will store the encoded frames
        muxer = new MP4Muxer(ch, Brand.MP4);

        // Add video track to muxer
        outTrack = muxer.addTrackForCompressed(TrackType.VIDEO, 25);

        // Allocate a buffer big enough to hold output frames
        _out = ByteBuffer.allocate(1920 * 1080 * 6);

        // Create an instance of encoder
        encoder = new H264Encoder();

        // Encoder extra data ( SPS, PPS ) to be stored in a special place of
        // MP4
        spsList = new ArrayList<ByteBuffer>();
        ppsList = new ArrayList<ByteBuffer>();
    }

    public void encodeImage(BufferedImage bi) throws IOException {
        if (toEncode == null) {
            toEncode = Picture.create(bi.getWidth(), bi.getHeight(), ColorSpace.YUV420);
        }

        // Perform conversion
        for (int i = 0; i < 3; i++)
            Arrays.fill(toEncode.getData()[i], 0);
        transform.transform(AWTUtil.fromBufferedImage(bi), toEncode);

        // Encode image into H.264 frame, the result is stored in '_out' buffer
        _out.clear();
        ByteBuffer result = encoder.encodeFrame(_out, toEncode);

        // Based on the frame above form correct MP4 packet
        spsList.clear();
        ppsList.clear();
        H264Utils.encodeMOVPacket(result, spsList, ppsList);

        // Add packet to video track
        outTrack.addFrame(new MP4Packet(result, frameNo, 25, 1, frameNo, true, null, frameNo, 0));

        frameNo++;
    }

    public void finish() throws IOException {
        // Push saved SPS/PPS to a special storage in MP4
        outTrack.addSampleEntry(H264Utils.createMOVSampleEntry(spsList, ppsList));

        // Write MP4 header and finalize recording
        muxer.writeHeader();
        NIOUtils.closeQuietly(ch);
    }

    public static void main(String[] args) throws IOException {
        SequenceEncoder encoder = new SequenceEncoder(new File("video.mp4"));
        for (int i = 1; i < 100; i++) {
            BufferedImage bi = ImageIO.read(new File(String.format("folder/img%08d.png", i)));
            encoder.encodeImage(bi);
        }
        encoder.finish();
    }
}
You can get the JCodec jar from the project web-site.