I'm making an app that uses the MediaCodec APIs.
The app runs on two phones. The first phone reads a video from the sdcard, uses a MediaCodec encoder to encode the frames in AVC format, and then streams the frames to the other device. The second device runs a MediaCodec decoder, which decodes the frames and renders them on a Surface.
The code runs fine, but after some time, when the frames get larger, the first device is sometimes unable to stream the video and the encoder stalls, reporting the following log:
E/OMX-VENC-720p( 212): Poll timedout, pipeline stalled due to client/firmware ETB: 496, EBD: 491, FTB: 492, FBD: 492
So I want to implement frame skipping on the encoder side.
What's the best way to skip frames and not stream them to the other device?
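For context, here is a minimal sketch of one possible approach: decide per frame, inside your per-frame feeding routine and before touching the encoder, whether to drop it, based on whatever back-pressure signal the streaming side exposes. `sendQueueFull()`, `frameData`, `presentationTimeUs`, and `TIMEOUT_US` are placeholders, and a ByteBuffer input path into the encoder is assumed:

```java
// Sketch: skip a frame entirely while the sender is backed up, so the encoder's
// ETB/EBD pipeline never fills with frames that cannot be streamed anyway.
if (sendQueueFull()) {        // placeholder: your streaming code's back-pressure check
    return;                   // drop this frame; do not dequeue an encoder input buffer
}
int inIndex = encoder.dequeueInputBuffer(TIMEOUT_US);
if (inIndex >= 0) {
    ByteBuffer in = encoder.getInputBuffer(inIndex);   // API 21+; use getInputBuffers() before that
    in.clear();
    in.put(frameData);                                  // raw frame bytes read from the sdcard video
    encoder.queueInputBuffer(inIndex, 0, frameData.length, presentationTimeUs, 0);
}
```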
PS: On a separate note, if anyone can suggest any other way of streaming a video to the other device, that would be really nice.
Please try the Intel INDE Media Pack, with tutorials at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has Camera, File, and Game streaming components, which enable streaming with the help of Wowza, plus a set of samples demonstrating how to use it as both a server and a client.
Related
I have a calling app that uses WebRTC. During a call, I need to record the microphone. WebRTC has an object, WebRTCAudioRecord, to record audio, but the resulting audio file is very large (16-bit PCM). I want to record, but at a smaller size.
I've tried MediaRecorder, but it doesn't work because WebRTC is already recording and MediaRecorder does not have permission to record while the call is in progress.
Has anyone done this, or does anyone have an idea that could help me?
WebRTC is considered a comparatively strong pre-processing tool for audio and video.
WebRTC native development includes fully optimized native C and C++ classes that maintain high speech quality and intelligibility for audio and video.
For reference, see: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
I want to record, but at a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is recording and MediaRecorder does not have permission to record while calling.
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at speech codecs, which reduce the size of the recorded data while maintaining sound quality at an acceptable level. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the raw audio data is concerned, it depends on the sample rate and on the length of time you record in each chunk (audio packet).
Suppose time = 40 ms at a sample rate of 8000 Hz (16-bit mono) ---then---> recorded data = 640 bytes (or 320 shorts).
The size of the recorded data is **directly proportional** to both the time and the sample rate.
Sample rate = 8000 Hz, 16000 Hz, etc. (the greater the sample rate, the greater the size).
For more detail, see: fundamentals of audio data representation. Note that WebRTC mainly processes audio in 10 ms chunks for pre-processing, which at 8000 Hz brings the packet size down to 160 bytes.
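For illustration, here is the arithmetic behind those numbers as a small sketch (the 40 ms / 8000 Hz figures are the ones used above):

```java
// Size of one recorded PCM chunk = sampleRate * bytesPerSample * channels * seconds.
int sampleRate = 8000;        // Hz (8000, 16000, ...)
int bytesPerSample = 2;       // 16-bit PCM
int channels = 1;             // mono
double chunkSeconds = 0.040;  // 40 ms

int chunkBytes = (int) (sampleRate * bytesPerSample * channels * chunkSeconds);
System.out.println(chunkBytes + " bytes"); // 640 bytes (320 shorts); a 10 ms chunk is 160 bytes
```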
Secondly, using multiple audio recorder instances at the same time is practically impossible. Since WebRTC is already recording from the microphone, a MediaRecorder instance cannot do anything useful, as this answer explains: audio-record-multiple-audio-at-a-time. WebRTC has the following methods for managing audio bytes:
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send it to the far end.
3. The far end pushes the received data into `ProcessRenderStream`.
I maintain a complete tutorial on audio processing with WebRTC; for more details, see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from WebRTC
Save them to a local file in a compressed format so that they can be played back later
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule's builder. This callback will give you the raw audio frames captured from the mic by WebRTC's AudioRecord.
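A minimal sketch of that first part, assuming the org.webrtc Android library's JavaAudioDeviceModule (the builder and callback names below are the ones exposed by recent versions of that library; `appContext` and `pcmWriter` are placeholders):

```java
// Sketch: receive every raw PCM chunk that WebRTC's AudioRecord captures from the mic.
// (PeerConnectionFactory.initialize(...) is assumed to have been called already.)
JavaAudioDeviceModule adm = JavaAudioDeviceModule.builder(appContext)
        .setSamplesReadyCallback(samples -> {
            byte[] pcm = samples.getData();                  // raw 16-bit PCM bytes
            int sampleRate = samples.getSampleRate();
            int channelCount = samples.getChannelCount();
            pcmWriter.write(pcm, sampleRate, channelCount);  // hand off to part 2 (placeholder)
        })
        .createAudioDeviceModule();

PeerConnectionFactory factory = PeerConnectionFactory.builder()
        .setAudioDeviceModule(adm)
        .createPeerConnectionFactory();
```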
For the second part, you have to encode the raw frames and write them into a file. Check out this sample from Google on how to do it: https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
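One common shape for that second part (similar in spirit to the linked sample) is to push the PCM chunks through a MediaCodec AAC encoder and write the encoded frames out with MediaMuxer. A condensed sketch of the setup, assuming 48 kHz mono 16-bit input; `outputPath` is a placeholder and the feed/drain loop is only described in comments:

```java
// Sketch: AAC-encode the PCM chunks from the callback and write them to an .m4a file.
MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);

MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
encoder.start();

// For each PCM chunk from the SamplesReadyCallback:
//   1. copy it into a dequeued encoder input buffer and call queueInputBuffer();
//   2. drain the output; on the first INFO_OUTPUT_FORMAT_CHANGED, call muxer.addTrack()
//      and muxer.start(), then write every encoded buffer with muxer.writeSampleData().
// On teardown: queue an end-of-stream buffer, drain, then stop()/release() both objects.
```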
I'm using MediaCodec to decode an H.264 video at 30 FPS that I receive from an RTSP live stream; the decoder runs on an Android device.
However, I see latency in the output of MediaCodec's decoder.
It looks like the decoder waits until it receives about 15 frames before providing the decoded frames, resulting in ~500 ms of latency in the rendered video.
This latency is not acceptable for my project, as the user expects to see the live video immediately when it arrives on their device.
Is there a way to configure MediaCodec so that it doesn't buffer the incoming frames and outputs the decoded frames as soon as they are ready to be displayed?
Thanks for the help.
If possible, try to change the encoding of the videos. For example, a stream encoded with the baseline profile (no B-frames) does not force the decoder to hold frames back for reordering, which reduces output latency.
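A minimal sketch of that encoder-side change, assuming you control the encoder that produces the RTSP stream (the resolution and bitrate values are placeholders; KEY_MAX_B_FRAMES needs API 29, and the decoder-side KEY_LOW_LATENCY hint needs API 30 and is only honoured by some codecs):

```java
// Encoder side (sketch): baseline profile, no B-frames, so decoded frames can be
// emitted in arrival order without reordering delay.
MediaFormat encFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
encFormat.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
encFormat.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
encFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
encFormat.setInteger(MediaFormat.KEY_PROFILE, MediaCodecInfo.CodecProfileLevel.AVCProfileBaseline);
encFormat.setInteger(MediaFormat.KEY_MAX_B_FRAMES, 0);   // API 29+

// Decoder side (sketch): additionally ask for low-latency operation where supported.
MediaFormat decFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
decFormat.setInteger(MediaFormat.KEY_LOW_LATENCY, 1);     // API 30+
```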
I need to stream RTSP video from an IP camera on the local network to my Android app. It's very easy to use a VideoView and play the URL, or a SurfaceView and play the stream on it with the native MediaPlayer. But when I stream that way, I get a 6-second delay while my phone buffers the video. As far as I've read, there is no way to change MediaPlayer's buffer size. Yet I've seen several apps that stream video from my camera in almost real time. I've read a lot about this, since I'm not the first to run into the problem, but didn't find any useful info.
Many thanks for any help!
I'm using vlc-android; it works well for playing my cameras' RTSP links:
https://github.com/mrmaffen/vlc-android-sdk#get-it-via-maven-central
The delay is about 1 second.
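For illustration, a minimal sketch of wiring this up with the libvlc Android SDK. The `--network-caching` option (in milliseconds) is what keeps the buffering delay small; `context`, the camera URL, and the caching value are placeholders, and the exact view-attachment call depends on the SDK version you pull in:

```java
// Sketch: create LibVLC with a small network cache so RTSP playback starts almost immediately.
ArrayList<String> options = new ArrayList<>();
options.add("--network-caching=300");    // milliseconds of buffering

LibVLC libVlc = new LibVLC(context, options);
MediaPlayer mediaPlayer = new MediaPlayer(libVlc);

Media media = new Media(libVlc, Uri.parse("rtsp://192.168.1.10:554/stream1")); // placeholder URL
media.setHWDecoderEnabled(true, false);
mediaPlayer.setMedia(media);
media.release();

// Attach the player to your video view as shown in the SDK's sample, then start playback.
mediaPlayer.play();
```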
I am streaming live video from the camera on my Android phone to my computer using the MediaRecorder class.
// Use the camera preview as the video source and encode it as H.264 in a 3GP container.
recorder.setCamera(mCamera);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
// Write the output to a socket's file descriptor instead of a local file.
recorder.setOutputFile(uav_UDP_Client.pfd.getFileDescriptor());
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
That's the basic idea. I would like to show this stream in real time. My plan is to use FFmpeg to turn the latest frame into a .bmp and show the .bmp in my C# program every time there is a new frame.
The problem is that there is no header until I stop the recording, so I can't use FFmpeg unless there is a header. I've looked at spydroid and at using RTP, but I don't want to use that method for various reasons.
Any ideas on how I can do this easily?
The 3GP/MP4 output you are writing only gets its header (the moov box) when recording stops, which is why FFmpeg cannot parse the live stream. You can consider streaming an MPEG-2 TS instead and playing it back on your screen, or you can stream the H.264 data over RTP and use a client to decode and display it.
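A minimal sketch of the MPEG-2 TS option using MediaRecorder itself, assuming a device running API 26 or newer (OutputFormat.MPEG_2_TS was added in Android 8.0; on older releases you would have to packetize the H.264 stream yourself, as in the RTP approach below):

```java
// Sketch: an MPEG-2 transport stream is self-describing, so the receiver can start parsing
// immediately instead of waiting for the MP4/3GP header that is only written on stop().
MediaRecorder recorder = new MediaRecorder();
recorder.setCamera(mCamera);                                     // same legacy camera path as in the question
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_2_TS);  // API 26+
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setOutputFile(uav_UDP_Client.pfd.getFileDescriptor());  // same socket descriptor as before
recorder.prepare();
recorder.start();
```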
In Android, there is a sample executable that performs RTP packetization of an H.264 stream and streams it over the network. You can find more details about MyTransmitter in this file, which could serve as a good reference for your solution.
Additional Information
From the Android 4.2 release onwards, the framework supports a similar feature called Miracast, or Wi-Fi Display, which is standardized by the Wi-Fi Alliance; it is a slightly more complex use case.
I'm writing an audio streaming app that buffers AAC file chunks, decodes those chunks to PCM byte arrays, and writes the PCM audio data to AudioTrack. Occasionally, I get the following error when I try to either skip to a different song, call AudioTrack.pause(), or AudioTrack.flush():
obtainbuffer timed out -- is cpu pegged?
Then a split second of audio continues to play. I've tried reading a set of AAC files from the sdcard and got the same result. The behavior I'm expecting is that the audio stops immediately. Does anyone know why this happens? I wonder if it's an audio latency issue with Android 2.3.
Edit: the AAC audio contains an ADTS header. The header plus the audio payload constitute what I'm calling an ADTSFrame. These are fed to the decoder one frame at a time. The resulting PCM byte array returned from the C layer to the Java layer is fed to Android's AudioTrack API.
Edit 2: I got my Nexus 7 (Android 4.1) today, loaded the same app onto the device, and didn't have any of these problems at all.
It is quite possibly about the sample rate: one of your devices might support the sample rate you used while the other does not. Please check it. I had the same issue, and it was the sample rate. Use 44.1 kHz (44100) and try again.
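A minimal sketch of that check, using the streaming AudioTrack constructor that matches the Android 2.3-era code in the question (the stereo channel mask and 16-bit encoding are assumptions; use whatever your decoder actually outputs):

```java
// Sketch: compare the device's preferred output rate with the one you are using,
// then size the AudioTrack buffer from getMinBufferSize() for a 44.1 kHz stream.
int nativeRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
Log.d("AudioCheck", "Native output sample rate: " + nativeRate + " Hz");

int sampleRate = 44100;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);
track.play();
// Write the decoded PCM with track.write(...); pause()/flush() should then return promptly.
```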