I have an IP camera that delivers 30 MJPEG frames per second, and I want to record those frames to an MP4 file. I already have a library that records to AVI, but that isn't preferred; converting the AVI to MP4 after recording works, but it's a bit slow.
Can you help, please?
Here you go: https://github.com/bytedeco/javacv
Android sample: https://github.com/bytedeco/sample-projects/tree/master/JavaCV-android-example
FFmpegFrameGrabber g = new FFmpegFrameGrabber("textures/video/anim.mp4");
Java2DFrameConverter converter = new Java2DFrameConverter(); // Frame -> BufferedImage (JavaCV 1.x)
g.start();
// Dump the first 30 frames as PNG files.
for (int i = 0; i < 30; i++) {
    ImageIO.write(converter.convert(g.grab()), "png",
            new File("frame-dump/video-frame-" + System.currentTimeMillis() + ".png"));
}
g.stop();
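The snippet above reads frames back out of a file. For the original question, writing incoming camera frames to MP4, the counterpart class is FFmpegFrameRecorder. Below is a minimal sketch, not a drop-in solution: the camera URL, output path, and dimensions are assumptions, and the imports are for JavaCV 1.5+.

import org.bytedeco.javacv.FFmpegFrameGrabber;
import org.bytedeco.javacv.FFmpegFrameRecorder;
import org.bytedeco.javacv.Frame;
import static org.bytedeco.ffmpeg.global.avcodec.AV_CODEC_ID_H264;

public void recordToMp4() throws Exception {
    // Grab MJPEG frames straight from the camera (URL is a placeholder).
    FFmpegFrameGrabber grabber = new FFmpegFrameGrabber("http://192.168.1.10/video.mjpeg");
    grabber.start();

    // Re-encode the frames as H.264 in an MP4 container.
    FFmpegFrameRecorder recorder = new FFmpegFrameRecorder(
            "out.mp4", grabber.getImageWidth(), grabber.getImageHeight());
    recorder.setFormat("mp4");
    recorder.setVideoCodec(AV_CODEC_ID_H264);
    recorder.setFrameRate(30); // the camera delivers 30 fps
    recorder.start();

    Frame frame;
    while ((frame = grabber.grab()) != null) { // null = end of stream
        recorder.record(frame);
    }

    recorder.stop();
    recorder.release();
    grabber.stop();
}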
I have an RTMP stream I want to play in my app using the Exoplayer library. My setup for that is as follows:
TrackSelector trackSelector = new DefaultTrackSelector();
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new ExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000, // min buffer
        3000, // max buffer
        1000, // playback
        2000, // playback after rebuffer
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addListener(mVideoStreamHandler);
mPlayer.addVideoListener(new VideoListener() {
    @Override
    public void onVideoSizeChanged(int width, int height, int unappliedRotationDegrees, float pixelWidthHeightRatio) {
        Log.d("hasil", "onVideoSizeChanged: w:" + width + ", h:" + height);
        String res = width + "x" + height;
        resolution.setText(res);
    }

    @Override
    public void onRenderedFirstFrame() {
    }
});
Where createSource() is as follows:
private void createSource() {
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720));
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL));
}
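For reference, a source created this way is then attached to the player with prepare(); for example (the three-argument ExoPlayer 2.x overload, resume-position handling omitted):

// Attach one of the created sources; switching streams means calling
// prepare() again with another source.
mPlayer.prepare(mMediaSource180, /* resetPosition= */ true, /* resetState= */ true);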
My current problem is that only the first three ExtractorMediaSources work fine in ExoPlayer. mMediaSourceAudio refuses to play in ExoPlayer, but works just fine in VLC Media Player for Android.
Right now I suspect that the format is AAC-LTP, or some other AAC variant that requires a codec available in VLC but not in stock Android. However, I don't have access to the encoding process, so I can't know for sure.
If this isn't the case, what is it?
EDIT:
I've been debugging the BandwidthMeter and added a MediaSourceEventListener. When I use the normal video sources, onDownstreamFormatChanged() gets called, but not when I use the audio stream source.
In addition, the BandwidthMeter works fine: bytes are downloaded throughout the stream, with more bytes arriving when the video stream comes in. But with the audio-only stream, mPlayer.getBufferedPosition() always returns 0, and no OMX code is called, i.e. no decoders are set up.
Am I looking at a malformed audio stream, or do I need to change my ExoPlayer settings?
EDIT 2:
Further debugging reveals that the same FlvExtractor is used for all the video streams and for the audio stream, even though the video streams carry an avc video track and an mp4a-latm audio track. Is this normal?
Turns out it's because the stream was recognized as having two tracks/SampleQueues: one audio track, and one track with a null format. That null track was supposed to be the video track, which was expected to exist according to the stream's flvHeader flags.
For now, I work around this by creating a custom MediaSource using a custom MediaPeriod. The custom MediaPeriod has code to separate the video and audio tracks of the SampleQueues, and it uses the audio-only SampleQueue[] instead of the source SampleQueue[] when I want to play the audio-only stream.
This leaves me with another point of concern, though: there must be something one can do to alter the 'has audio track (flag & 0x04)' and 'has video track (flag & 0x01)' flags in the RTMP stream, right?
Thanks for the comments; I'm new to ExoPlayer, but they helped me debug and arrive at multiple workarounds for this issue.
I first tried a custom MediaSource and custom MediaPeriod to address the audio issue. I observed that the video format data arrives after the audio data in a video+audio Wowza stream, so the function maybeFinishPrepare() will wait for both the video and the audio format tag data before invoking onPrepared() when the video tagData is received first. When the audio data is received first, it won't wait and calls onPrepared() right away.
With those changes, I was able to play both audio-only and video+audio Wowza streams, where the RTMP tagHeaders arrive with the video tagData first, followed by the audio data.
I wasn't able to use the same patch with an SRS server to play both audio-only and video+audio streams, because the SRS server delivers the audio tagData first and then the video tagData.
So I debugged further in FlvExtractor. In readFlvHeader, I overrode the hasAudio and hasVideo variables, setting them based on the first few (5 or 6) tagHeaders instead. I call peekFully on the input six times in a loop; in each iteration, after fetching the tagType and tagDataSize, the tagDataSize is passed to input.advancePeekPosition(), and the tagType identifies whether the tagData holds audio or video format data. After peeking at the first six consecutive tagHeaders, I had the actual values of hasAudio and hasVideo and could ignore flvHeaders.flags, which had been used to set these variables.
The custom FlvExtractor workaround looked cleaner than the custom MediaSource/MediaPeriod one, since by setting proper hasVideo/hasAudio values we create only as many tracks as are actually needed.
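To make the peeking idea concrete, here is a hedged sketch against the ExoPlayer 2.x ExtractorInput API. It assumes the peek position sits at the first FLV tag (the 9-byte file header and the 4-byte PreviousTagSize0 have already been consumed), that six tags are enough to see both track types, and that hasAudio/hasVideo are the extractor's own fields:

private static final int TAG_TYPE_AUDIO = 8;
private static final int TAG_TYPE_VIDEO = 9;
// Tag header layout: type(1) + dataSize(3) + timestamp(3) + tsExt(1) + streamId(3).
private static final int TAG_HEADER_SIZE = 11;

private void sniffTrackTypes(ExtractorInput input) throws IOException, InterruptedException {
    byte[] header = new byte[TAG_HEADER_SIZE];
    hasAudio = false;
    hasVideo = false;
    for (int i = 0; i < 6; i++) {
        input.peekFully(header, 0, TAG_HEADER_SIZE);
        int tagType = header[0] & 0x1F; // lower 5 bits: 8 = audio, 9 = video
        int tagDataSize = ((header[1] & 0xFF) << 16) | ((header[2] & 0xFF) << 8) | (header[3] & 0xFF);
        if (tagType == TAG_TYPE_AUDIO) hasAudio = true;
        if (tagType == TAG_TYPE_VIDEO) hasVideo = true;
        // Skip the tag body plus the trailing 4-byte PreviousTagSize field.
        input.advancePeekPosition(tagDataSize + 4);
    }
    input.resetPeekPosition(); // leave the actual read position untouched
}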
I've managed to combine multiple videos with audio tracks, but then I realized that if one of the videos being combined has no audio track, I have to add silence to the combined audio track.
So, how do I go about doing it? Should I encode a ByteBuffer filled with 0s with timestamps for silence?
Essentially yes. I am using the function below to encode silence at a certain presentation time.
For the length of your video with no audio, you should encode silence at a regular interval. I determined that the interval should match the audio before it; in my case, the period between audio presentation times of my first video was 21333 us.
Using that info I started encoding silence:
- starting from the last presentation time of the first video's audio + 21333 us,
- at intervals of 21333 us, until I had encoded enough silence to last the full video.
I am still trying to figure out how to handle a video with no audio (as the first video) followed by a video with audio. I will update my answer if I figure that out.
// Used to encode silent audio. Not really sure how big this should be;
// 2048 bytes (e.g. one 1024-sample frame of 16-bit mono PCM) works for me.
private byte[] zerodArray = new byte[2048];

private void encodeSilenceForFrame(long presentationTime) {
    // mAudioEncoder is the audio encoder you are using to combine the other videos' audio.
    final int TIMEOUT_USEC = 10000;
    int encoderInputBufferIndex = mAudioEncoder.dequeueInputBuffer(TIMEOUT_USEC);
    if (encoderInputBufferIndex == MediaCodec.INFO_TRY_AGAIN_LATER) {
        if (VERBOSE) Log.d(TAG, "no audio encoder input buffer");
        return; // don't fall through with an invalid buffer index
    }
    if (VERBOSE) {
        Log.d(TAG, "audio encoder: returned input buffer: " + encoderInputBufferIndex);
    }
    // Fill the input buffer with zeroes, i.e. one frame of silence.
    ByteBuffer encoderInputBuffer = mAudioEncoder.getInputBuffer(encoderInputBufferIndex);
    encoderInputBuffer.position(0);
    encoderInputBuffer.put(zerodArray);
    Log.d(TAG, "audio silence: pending buffer for time " + presentationTime);
    mAudioEncoder.queueInputBuffer(
            encoderInputBufferIndex,
            0,
            zerodArray.length,
            presentationTime,
            0);
}
I have a video file (.mp4) with a video track only.
I'm using MediaExtractor and MediaMuxer to add an audio file. This works well.
To the processed file I now want to add another audio track.
So I'm again using MediaExtractor and MediaMuxer to essentially copy the file (creating the video and audio tracks, reading with the extractor and writing with the muxer). In addition, I'm trying to add the second audio track to the muxer, but this throws the error "Failed to add the track to the muxer".
In this link we can see that the muxer does not support multiple audio tracks.
Code from the link:
// Throws exception b/c 2 audio tracks were added.
muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
muxer.addTrack(MediaFormat.createAudioFormat("audio/mp4a-latm", 48000, 1));
try {
    muxer.addTrack(MediaFormat.createAudioFormat("audio/mp4a-latm", 48000, 1));
    fail("should throw IllegalStateException.");
} catch (IllegalStateException e) {
    // expected
}
Is there another way to do it? An elegant way?
BTW, I'm trying to avoid third parties like FFmpeg, but I'll use one if it turns out to be my only solution...
--EDIT--
Relevant piece of my code:
MediaMuxer muxer = new MediaMuxer(outputFile, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
List<Integer> tracksIdx = new ArrayList<>();

// Copy the track formats of the already-muxed file (video + first audio).
MediaExtractor extractor = new MediaExtractor();
extractor.setDataSource(videoAndAudioFile);
for (int currTrackIdx = 0; currTrackIdx < extractor.getTrackCount(); currTrackIdx++) {
    MediaFormat trackFormat = extractor.getTrackFormat(currTrackIdx);
    tracksIdx.add(muxer.addTrack(trackFormat));
}

// Add the second audio track's format.
MediaExtractor extractor2 = new MediaExtractor();
extractor2.setDataSource(secondAudioFile);
MediaFormat trackFormat = extractor2.getTrackFormat(0);
tracksIdx.add(muxer.addTrack(trackFormat)); // Crashes here
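For context, the extractor-to-muxer copy mentioned above ("reading [extractor] and writing [muxer]") looks roughly like the sketch below. copyTrack() is a hypothetical helper, and mapping getSampleFlags() straight onto BufferInfo.flags is a simplification (the sync-frame flag happens to line up):

// Copy every sample of the currently selected extractor track into the
// given muxer track. Assumes muxer.start() has already been called and
// extractor.selectTrack(...) has selected exactly one track.
private void copyTrack(MediaExtractor extractor, MediaMuxer muxer, int muxerTrackIdx) {
    ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024); // large enough for one sample
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        int size = extractor.readSampleData(buffer, 0);
        if (size < 0) {
            break; // no more samples in the selected track
        }
        info.set(0, size, extractor.getSampleTime(), extractor.getSampleFlags());
        muxer.writeSampleData(muxerTrackIdx, buffer, info);
        extractor.advance();
    }
}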
For anyone who reaches here: I found the official doc at this link. Muxing multiple video/audio tracks is not supported in old API versions, and is still restricted in the latest version.
I have integrated FFmpeg into my application and I want to convert videos to audio files. But I want to do it with a native implementation (JNI); I don't want to use ffmpeg command-line invocations. I have already tried this.
You can't convert video to audio. You can, however, extract and store only the audio sub-streams of an AVFormatContext. Pseudocode:
// Look for the first audio substream and save its index:
for (size_t i = 0; i < AvFormatContextInstance->nb_streams; ++i) {
    if (AvFormatContextInstance->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
        streamindex = i;
        break; // stop at the first audio stream
    }
}
Now all you need to do is discard the AVPackets read from all other stream indexes and save only the packets belonging to the recognized audio stream.