Transcode video to lower bitrate and stream - Android

I have a working app that streams video to Chromecast (using NanoHTTPD) and everything is working fine. Now my problem is: videos recorded on newer devices are too large to stream, so I want to re-encode them to a lower bitrate.
I tried FFmpeg, but the results are not satisfactory, and it increases the APK size by 14 MB.
Now I am trying the MediaCodec API. It is faster than FFmpeg, but it reads an input file and writes an output file, whereas I want to re-encode the byte data that NanoHTTPD serves.
One solution that comes to mind is to transcode the video first and then stream the output file, but that has two drawbacks:
What if the file is large and the user doesn't watch the whole video? Much of the CPU and battery spent transcoding is wasted.
What if the user fast-forwards a long video to a position that has not been re-encoded yet?

1. MediaCodec does just one thing: decode or encode. It hands you the raw bytes of the newly encoded data, and it is up to the programmer to decide whether to dump those bytes into a container (an .mp4 file) using a muxer or to serve them directly, so there is no need to write everything back into a file first.
2. Seek to the proper chunk of data and restart MediaCodec from there (see the encoder sketch below).
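As a hedged illustration of the encoder side (the resolution and bitrate values are placeholders, not from the question), configuring MediaCodec to re-encode at a lower bitrate looks roughly like this:

```java
// Minimal sketch, assuming API 21+: an H.264 encoder configured for a lower
// bitrate. Wiring the decoder's output Surface to inputSurface avoids any
// manual color-format handling on the decode->encode hop.
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);  // target ~2 Mbit/s
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);  // keyframe every 2 s

MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface inputSurface = encoder.createInputSurface(); // feed decoded frames here
encoder.start();
// Drain the encoder's output buffers and either hand them to a MediaMuxer
// or serve them over NanoHTTPD.
```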

Related

Convert audio/video stream to mp4 on-the-fly

I have an audio and video stream and would like to generate an mp4 from them. However, all the samples I've seen generate the mp4 only once they have the entire stream. What I want to accomplish is to encode the stream to mp4 while the audio/video is still being received from the web.
It isn't clear whether an mp4 can be created on-the-fly or whether the encoder needs to wait until the final input data has been received. I believe it has to be possible: when you record a video on your device with the camera, the video is available as soon as you stop recording, which means it was being encoded on-the-fly. So it really shouldn't matter whether the frames come from the camera or from a video stream over the web. Or am I possibly missing something?
Without knowing how mp4 is actually generated, I'm guessing that even if the encoder has received only a few seconds of frames and audio, it can still generate that portion of the mp4 and save it to disk. At any point in time, my app should be able to stop the streaming, and the encoder just saves the final portion of the stream. What would be nice is that if the app were to crash during encoding, at least the part already encoded would still be saved to disk as a valid mp4. Not sure if that is even possible.
Ultimately I would like to do this using MediaCodec, OpenGL ES and a Surface.
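On the crash-safety point: a standard MP4 only becomes playable once its index (the moov box) has been written, and Android's MediaMuxer writes that index when stop() is called, so a crash mid-encode typically leaves an invalid file; fragmented MP4 avoids this, but as far as I know MediaMuxer does not emit it. Writing samples incrementally as they arrive is otherwise exactly what MediaMuxer does. A minimal sketch, assuming an already-running video encoder (EncodedFrame and nextFrame() are hypothetical stand-ins for your network/codec source):

```java
// Hedged sketch: mux encoded video frames into an MP4 as they arrive.
MediaMuxer muxer = new MediaMuxer("/sdcard/out.mp4",
        MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int videoTrack = muxer.addTrack(videoFormat); // MediaFormat from your encoder
muxer.start();

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
EncodedFrame frame;                           // hypothetical frame holder
while ((frame = nextFrame()) != null) {       // null at end of stream
    info.set(0, frame.data.remaining(), frame.presentationTimeUs,
            frame.isKeyFrame ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0);
    muxer.writeSampleData(videoTrack, frame.data, info);
}
muxer.stop();    // writes the moov index; the file is valid only after this
muxer.release();
```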

How to record the microphone to a more compressed format during a WebRTC call on Android?

I have an app that makes calls using WebRTC. During a call, I need to record the microphone. WebRTC has a class, WebRtcAudioRecord, for recording audio, but the recorded file is very large (raw 16-bit PCM). I want to record at a smaller size.
I've tried MediaRecorder, but it doesn't work, because WebRTC is already recording and MediaRecorder cannot access the microphone during the call.
Has anyone done this, or have any idea that could help me?
WebRTC is considered a comparatively strong pre-processing tool for audio and video. Its native development includes fully optimized C and C++ classes designed to maintain excellent speech quality and intelligibility.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
I want to record but at a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is already recording and MediaRecorder does not have permission to record while calling.
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at speech codecs, which reduce the size of recorded data while keeping sound quality at an acceptable level. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it basically depends on the sample rate and the duration of each recorded chunk (audio packet).
Suppose sample rate = 8000 Hz (16-bit mono) and chunk time = 40 ms; then recorded data = 8000 × 0.04 × 2 = 640 bytes (or 320 shorts).
The size of the recorded data is **directly proportional** to both the chunk duration and the sample rate.
Sample rate = 8000 Hz, 16000 Hz, etc. (the greater the sample rate, the greater the size).
For more detail, see the fundamentals of audio data representation. Note that WebRTC mainly processes audio in 10 ms chunks for pre-processing, which at 8000 Hz brings the packet size down to 160 bytes.
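As a tiny sketch of that arithmetic (the helper name is mine, not from the answer):

```java
// Bytes of 16-bit mono PCM for a chunk of the given duration.
static int pcmChunkBytes(int sampleRateHz, int chunkMs) {
    int samples = sampleRateHz * chunkMs / 1000; // 8000 Hz * 40 ms -> 320 samples
    return samples * 2;                          // 2 bytes per sample -> 640 bytes
}
```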
Secondly, if you want to use multiple recorder instances at the same time, that is practically impossible: since WebRTC is already recording from the microphone, a MediaRecorder instance will not be able to do anything, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC has the following methods for managing audio bytes:
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send to far-end.
3. The far end pushes the received data into `ProcessRenderStream`.
I maintain a complete tutorial on audio processing with WebRTC; see Android-Audio-Processing-Using-Webrtc for more details.
There are two parts to the solution:
Get the raw PCM audio frames from WebRTC.
Save them to a local file in a compressed format so that they can be played back later.
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule. This callback gives you the raw audio frames captured from the mic by WebRTC's AudioRecord, as in the sketch below.
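A minimal sketch, assuming the org.webrtc Android library; audioEncoder is a hypothetical hand-off to your own encoder thread:

```java
// Hedged sketch: capture WebRTC's raw microphone PCM via JavaAudioDeviceModule.
JavaAudioDeviceModule adm = JavaAudioDeviceModule.builder(appContext)
        .setSamplesReadyCallback(samples -> {
            // Raw PCM exactly as WebRTC's internal AudioRecord captured it.
            byte[] pcm = samples.getData();
            int sampleRate = samples.getSampleRate();
            int channels = samples.getChannelCount();
            audioEncoder.enqueue(pcm, sampleRate, channels); // hypothetical
        })
        .createAudioDeviceModule();

// Use the module when building the PeerConnectionFactory:
PeerConnectionFactory factory = PeerConnectionFactory.builder()
        .setAudioDeviceModule(adm)
        .createPeerConnectionFactory();
```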
For the second part, you have to encode the raw frames and write them to a file. Check out this sample from Google on how to do it: https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
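A minimal sketch of the encoding side, assuming 48 kHz mono PCM input; the drain loop into a MediaMuxer is abbreviated:

```java
// Hedged sketch: compress raw PCM to AAC with MediaCodec.
MediaFormat fmt = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1);
fmt.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
fmt.setInteger(MediaFormat.KEY_BIT_RATE, 64_000); // ~64 kbit/s, far smaller than PCM

MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
encoder.configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
encoder.start();

// For each PCM chunk delivered by the callback above:
int in = encoder.dequeueInputBuffer(10_000);
if (in >= 0) {
    encoder.getInputBuffer(in).put(pcm);
    encoder.queueInputBuffer(in, 0, pcm.length, presentationTimeUs, 0);
}
// ...then drain the encoder's output buffers into a MediaMuxer
// (an .m4a/.mp4 file), much like the linked ScreenInternalAudioRecorder does.
```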

Decoding only some PCM bytes at a time from an mp3 file

How do I decode something on the order of 1000 bytes of PCM audio from an mp3 file, without decoding the whole thing?
I need to mix four to six tracks down to one, so that they play simultaneously on an AudioTrack in the Android app.
This can be done if I can get a stream of PCM samples from each: simply add the decoded tracks together (adjusting for clipping and volume), then write the result to an AudioTrack buffer.
That part is simple.
But how do I decode the individual mp3 files into input streams I can read byte arrays from? I've found something called JLayer, but it's not quite clear to me how to do this.
I'd rather avoid doing it in C++ (I'm a bit rusty, and my team doesn't like it), though if that's needed I can do it. In that case I'd need a short example of how to get, say, 240 decoded bytes from a file via mpg123 or a similar library.
Any help is appreciated.
The smallest unit you can decode is 576 samples, which is the smallest MP3 frame size. However, most MP3 streams use the bit reservoir, meaning you likely have to decode the frames around the frame you want as well.
Complicating things further, bare MP3 streams don't have any internal timestamping, so if you want to drop accurately into the middle of a file, you have to decode up until that point. (MP3 frame headers don't contain byte lengths, so you can't just skim frame headers accurately.) You can try to needle-drop into the middle of the file based on byte offset, but this isn't an accurate way of seeking and can be off by several seconds, even for CBR; for VBR, it's all over the place.
It sounds like all you need is a streaming decoder, so that decoding happens as playback occurs. I'm no Android developer, but it seems you can use AudioTrack from the framework in streaming mode (https://developer.android.com/reference/android/media/AudioTrack.html) and MediaCodec to do the actual decoding (https://developer.android.com/reference/android/media/MediaCodec.html). Android devices support MP3, so you don't need anything else.
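A minimal sketch, assuming API 21+ and that track 0 is the audio track ("path" is a placeholder; error handling and INFO_OUTPUT_FORMAT_CHANGED are glossed over). The track.write call is where you would mix the PCM from several decoders instead of playing a single one:

```java
// Hedged sketch: decode an MP3 with MediaExtractor + MediaCodec and play
// the resulting PCM through AudioTrack in streaming mode.
void playMp3(String path) throws IOException {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(path);
    MediaFormat format = extractor.getTrackFormat(0); // assume track 0 is audio
    extractor.selectTrack(0);

    MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    codec.configure(format, null, null, 0);
    codec.start();

    int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
    int channelCfg = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT) == 1
            ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelCfg,
            AudioFormat.ENCODING_PCM_16BIT,
            AudioTrack.getMinBufferSize(sampleRate, channelCfg, AudioFormat.ENCODING_PCM_16BIT),
            AudioTrack.MODE_STREAM);
    track.play();

    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    boolean inputDone = false, outputDone = false;
    while (!outputDone) {
        if (!inputDone) {
            int in = codec.dequeueInputBuffer(10_000);
            if (in >= 0) {
                int size = extractor.readSampleData(codec.getInputBuffer(in), 0);
                if (size < 0) {
                    codec.queueInputBuffer(in, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                    inputDone = true;
                } else {
                    codec.queueInputBuffer(in, 0, size, extractor.getSampleTime(), 0);
                    extractor.advance();
                }
            }
        }
        int out = codec.dequeueOutputBuffer(info, 10_000);
        if (out >= 0) {
            ByteBuffer buf = codec.getOutputBuffer(out);
            byte[] pcm = new byte[info.size];
            buf.get(pcm);
            track.write(pcm, 0, pcm.length); // mix several decoders' PCM here instead
            codec.releaseOutputBuffer(out, false);
            if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) outputDone = true;
        }
    }
    codec.stop(); codec.release(); extractor.release(); track.release();
}
```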

WebM with VP9 vs MP4 with H.264 AVC: which one is best overall?

I have used VideoView to load an MP4 file one minute in length.
The problem is that it starts after a delay.
I want it to start immediately, so which codec and bitrate should I choose?
Share your experience, if you have any, with these video codecs.
I want to see a comparison between the two formats:
loading speed, length, file size, and quality ratio.
MP4 usually has all of its index tables at the end of the file, so the player may need to scan the whole file on disk before starting playback.
You can convert to an MP4 file optimized for streaming, so that the tables are at the beginning.
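If you produce the file with ffmpeg, the streaming-optimized layout is one re-mux away: `ffmpeg -i in.mp4 -c copy -movflags +faststart out.mp4` moves the index (the moov atom) to the front without re-encoding (file names are placeholders).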
MPEG-TS (transport stream) also loads quickly.
WebM will probably load faster than "standard" MP4, but I am not so familiar with the WebM format.
All PCs and smartphones have a hardware AVC (H.264) video decoder, while VP9 is mostly decoded in software. So, presumably, AVC will be easier for your device to decode.
The quality or size of VP9 can be better than AVC only at HD resolutions; on smaller videos the quality should be more or less equal.
There are many useful tools for encoding AVC, and not so many for VP9. Using ffmpeg with the proper settings, such as 2-pass encoding, you can compress AVC harder than VP9.
So I recommend using AVC in a streaming-optimized MP4.
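For reference, a typical ffmpeg 2-pass AVC encode looks like `ffmpeg -y -i in.mp4 -c:v libx264 -b:v 1M -pass 1 -an -f mp4 /dev/null` followed by `ffmpeg -i in.mp4 -c:v libx264 -b:v 1M -pass 2 -c:a aac out.mp4` (file names and the 1 Mbit/s target are placeholders; use NUL instead of /dev/null on Windows).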

Get audio from one mp4 and use it in a resampled (smaller) mp4 in Android

I have found a solution for resampling an .mp4 video taken with the device's camera to make it smaller (reducing the resolution, bitrate, and framerate). The problem is that it doesn't carry the audio over.
I have looked at several different options for getting the audio out of my large source mp4 and pushing it into the smaller mp4, and I can't seem to get any of these procedures to work correctly.
I've tried the following:
1) extracting the PCM audio from the source using: How do I extractor audio to mp3 from mp4 using java in Android?
2) converting the PCM to M4A and then adding the M4A to the smaller MP4 using: https://github.com/tqnst/MP4ParserMergeAudioVideo/blob/master/Mp4ParserSample-master/src/jp/classmethod/sample/mp4parser/MainActivity.java
That's the method I got closest with, but the audio was really slow and didn't match up at all with the video in the smaller mp4.
I also tried a "direct copy" from one mp4 to the other with a variation of this: Concatenate multiple mp4 audio files using android's MediaMuxer.
That made my smaller mp4 actually larger in file size than the source mp4, and it still didn't carry the sound over.
The Android documentation for MediaMuxer is pretty terrible, and I can't make heads or tails of what I need to do to get this to work. It seems like it should be a pretty trivial task...
Any suggestions or advice would be greatly appreciated.
TIA
I ended up just using ffmpeg with this solution:
https://github.com/WritingMinds/ffmpeg-android-java
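For reference, the "direct copy" approach from the question can also be made to work with MediaExtractor and MediaMuxer. A hedged sketch (srcPath/dstPath are placeholders; the re-encoded video track would be added alongside, and keeping the extractor's original timestamps is what avoids the slow, out-of-sync audio described above):

```java
// Hedged sketch: copy the audio track from the source MP4 into a new muxed
// file alongside a (re-encoded) video track.
MediaExtractor audio = new MediaExtractor();
audio.setDataSource(srcPath);
int audioTrack = -1;
for (int i = 0; i < audio.getTrackCount(); i++) {
    if (audio.getTrackFormat(i).getString(MediaFormat.KEY_MIME).startsWith("audio/")) {
        audioTrack = i;
        break;
    }
}
audio.selectTrack(audioTrack); // assumes the source actually has an audio track

MediaMuxer muxer = new MediaMuxer(dstPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int outAudio = muxer.addTrack(audio.getTrackFormat(audioTrack));
// int outVideo = muxer.addTrack(encoderOutputFormat); // your re-encoded video
muxer.start();

ByteBuffer buf = ByteBuffer.allocate(256 * 1024);
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
while (true) {
    int size = audio.readSampleData(buf, 0);
    if (size < 0) break;
    // Keep the original timestamps; rescaled or mismatched timestamps are the
    // usual cause of slow or out-of-sync audio.
    info.set(0, size, audio.getSampleTime(),
            (audio.getSampleFlags() & MediaExtractor.SAMPLE_FLAG_SYNC) != 0
                    ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0);
    muxer.writeSampleData(outAudio, buf, info);
    audio.advance();
}
muxer.stop();
muxer.release();
audio.release();
```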
