H.264 Real-time Streaming, Timestamp in NAL Units? - android

I'm trying to build a system that live-streams video and audio captured by Android phones. Video and audio are captured on the Android side using MediaRecorder, and then pushed directly to a server written in Python. Clients should access this live feed using their browser, so I implemented the streaming part of the system using Flash. Right now both video and audio content appear on the client side, but the problem is that they are out of sync. I'm sure this is caused by wrong timestamp values in Flash (currently I increment the timestamp by 60 ms per video frame, but clearly this value should be variable).
The audio is encoded into AMR on the Android phone, so I know each AMR frame is exactly 20 ms. However, this is not the case with video, which is encoded into H.264. To synchronize them, I would have to know exactly how many milliseconds each H.264 frame lasts, so that I can timestamp the frames later when delivering the content in Flash. My question is: is this kind of information available in the NAL units of H.264? I tried to find the answer in the H.264 standard, but the information there is just overwhelming.
Can someone please point me in the right direction? Thanks.

Timestamps are not carried in the NAL units; they are typically part of RTP. RTP/RTCP also takes care of media synchronisation: RTCP sender reports map each stream's RTP timestamps onto a common wall clock.
The RTP payload format for H.264 might also be of interest to you.
If you are not using RTP, are you just sending raw data units over the network?
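For illustration, here is a minimal sketch of how per-frame millisecond timestamps could be derived once the media is carried over RTP, assuming the standard 90 kHz RTP clock for H.264 (RFC 6184) and the 8 kHz clock for AMR-NB (RFC 4867); the function names are illustrative, not a real API:

```kotlin
const val VIDEO_CLOCK_HZ = 90_000L // RFC 6184: H.264 over RTP uses a 90 kHz clock
const val AUDIO_CLOCK_HZ = 8_000L  // RFC 4867: AMR-NB over RTP uses an 8 kHz clock

// Convert an RTP timestamp to milliseconds relative to the first packet of the stream.
// 32-bit wrap-around handling is omitted in this sketch.
fun rtpToMillis(rtpTimestamp: Long, firstRtpTimestamp: Long, clockHz: Long): Long =
    (rtpTimestamp - firstRtpTimestamp) * 1000 / clockHz

fun main() {
    val firstVideoTs = 123_456L
    // A frame 3003 ticks after the first one is ~33 ms later (29.97 fps material).
    println(rtpToMillis(firstVideoTs + 3003, firstVideoTs, VIDEO_CLOCK_HZ)) // 33
    // One 20 ms AMR frame advances the audio clock by 160 ticks.
    println(rtpToMillis(160, 0, AUDIO_CLOCK_HZ)) // 20
}
```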

Related

Real time audio analysis Android

I've got a rather complicated problem that I need to solve at work. It's pretty far out of my remit of "Android App Developer" - I would class it as a very specialized audio engineering problem.
I am tasked with developing an application, which needs to be able to stream either a local audio file or audio from streaming service apps such as, but not limited to, Spotify, to another device over Bluetooth.
In addition, the app needs to be able to estimate the BPM of the streamed audio (it is assumed all audio will be musical) and use this BPM value to control the playback speed of a lighting sequence.
This question is about how to estimate the BPM of the streamed music.
For the case where the audio file is local, I can think of some solutions, such as hardcoding the BPM into the app in a map keyed by the audio resource's URL.
I have also investigated and experimented with a "static" library (aubio) that can estimate BPM from an audio file, but not on the fly, and it assumes .wav format. This won't be sufficient for what we are trying to achieve here.
However, given the requirement to stream external audio from streaming-service apps such as Spotify, a static-analysis solution is pointless anyway: it wouldn't work for the streaming-service case, while a solution for the streaming-service case would work for both.
Therefore, I have come to the conclusion that I somehow need to analyze the streamed audio on the fly, perhaps with FFT or peak-detection algorithms.
This question isn't about the actual BPM-estimation algorithm itself (or the implementation details of how I would get there), but about the basic starting point of such a solution:
A) How might I get the raw bytes of streamed audio, for both the local-file case and the external streaming-service-app case, and B) how might I process those bytes into a data structure that represents the audio stream in a way amenable to running audio-analysis algorithms on it?
I realize this is a very open-ended, quite vague question, but this is so far out of my comfort zone that I've no idea how to even formulate a more coherent one.
Any help would be greatly appreciated!
I'd start by creating some separate, more tightly defined questions for the different pieces. For example, ask how to get access to the raw bytes when streaming a local file, or when streaming URL-sourced audio. Android has some nice support for streaming, including the ability to stream PCM, so I'd be pretty surprised if getting a hook for access to the byte stream were not possible.
Once you have a hooking point, to convert the bytes to "something useful" I'd look at the audio format to tell you how to read the incoming bytes. The format should tell you how many channels there are (mono or stereo), the encoding (signed PCM is common, but it might be normalized floats), the number of bits per value (16 is common) and the byte order (big-endian vs. little-endian).
I know there are posts that explain how to convert the raw audio bytes to PCM values based on this info, including some on Stack Overflow; they should be reachable via search. I think signed normalized floats are the most common data representation used for processing audio signals.
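As a minimal sketch of that conversion step (assuming 16-bit signed little-endian mono PCM; read the real channel count, bit depth and byte order from your stream's format):

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Turn raw 16-bit signed little-endian PCM bytes into normalized floats in [-1.0, 1.0].
fun pcm16leToFloats(raw: ByteArray): FloatArray {
    val shorts = ShortArray(raw.size / 2)
    ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts)
    return FloatArray(shorts.size) { i -> shorts[i] / 32768.0f }
}
```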

What is the amplification factor required for android's media recorder to match the output of iOS's AVAudioRecorder?

I have a cross-platform (iOS and Android) app in which I record audio clips and then send them to the server for some machine-learning operations. In the iOS app I use AVAudioRecorder for recording the audio; in the Android app I use MediaRecorder. On the mobile side I initially use the m4a format because of size constraints. After the clip reaches the server I convert it to wav format before using it in the ML operations.
My problem is that on iOS, AVAudioRecorder by default applies some amplification to the raw audio data before the developer gets access to it, whereas on Android MediaRecorder does not apply any default amplification. In other words, on iOS I never get the raw audio stream from the microphone, whereas on Android I always get only the raw audio stream from the microphone. The distinction is clearly visible if you record the same audio source on an iPhone and an Android phone side by side and then import both recordings into Audacity for visual comparison. I have attached a sample representation screenshot below.
In the image, the first track is the Android recording and the second track is the iOS recording. When I listen to both recordings through headphones I can only vaguely distinguish them, but when the data points are visualized the difference is clearly visible. These distinctions are bad for the ML operations.
Clearly the iPhone applies a certain amplification factor, which I would like to replicate on Android.
Is anyone aware of the amplification factor? OR are there any other possible alternatives?
It's quite possible that the difference is the effect of Automatic Gain Control (AGC).
You can disable this in your app's AVAudioSession by setting its mode to AVAudioSessionModeMeasurement, which you do once in your application, usually at startup. This disables a great deal of input signal processing.
Reading your problem description, though, you might be better off enabling AGC on Android instead (a sketch of that follows the iOS snippet below).
If neither of these yields results, you might want to gain scale both signals so they are just below clipping.
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setMode(AVAudioSessionModeMeasurement)
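On the Android side, a hedged sketch of enabling AGC: AutomaticGainControl is an audio effect that attaches to an AudioRecord session, so this only applies if you capture with AudioRecord rather than MediaRecorder, and availability varies by device:

```kotlin
import android.media.audiofx.AutomaticGainControl

// Enable AGC on an existing AudioRecord session, if the device supports it.
// audioSessionId would come from AudioRecord.getAudioSessionId().
fun enableAgcIfAvailable(audioSessionId: Int): AutomaticGainControl? {
    if (!AutomaticGainControl.isAvailable()) return null
    return AutomaticGainControl.create(audioSessionId)?.also { it.setEnabled(true) }
}
```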

How to record microphone to more compressed format during WebRTC call on Android?

I have an app that makes calls using WebRTC. During a call I need to record the microphone. WebRTC has an object, WebRTCAudioRecord, for recording audio, but the resulting audio file is very large (16-bit PCM). I want to record to a smaller size.
I've tried MediaRecorder, but it doesn't work because WebRTC is already recording and MediaRecorder cannot access the microphone while the call is in progress.
Has anyone done this, or have any idea that could help me?
WebRTC is widely regarded as a very good pre-processing tool for audio and video.
WebRTC's native code base consists of heavily optimized C and C++ classes designed to maintain speech quality and intelligibility.
Reference link: https://github.com/jitsi/webrtc/tree/master/examples
As the problem states:
"I want to record but at a smaller size. I've tried MediaRecorder and it doesn't work because WebRTC is already recording and MediaRecorder does not have permission to record while calling."
First of all, to reduce or minimize the size of your recorded data (audio bytes), you should look at speech codecs, which reduce the size of the recorded data while maintaining sound quality at an acceptable level. Well-known speech codecs include:
OPUS
SPEEX
G.711 (G-series speech codecs)
As far as the size of the audio data is concerned, it basically depends on the sample rate and the duration of each recorded chunk or audio packet.
Suppose time = 40 ms at a sample rate of 8000 Hz (16-bit mono); then the recorded data is 640 bytes (or 320 shorts).
Size of recorded data is **directly proportional** to both time and sample rate.
Sample rate = 8000 or 16000 etc. (the greater the sample rate, the greater the size).
For more detail, see: fundamentals of audio data representation. Note that WebRTC mainly processes audio in 10 ms chunks for pre-processing, in which case the packet size drops to 160 bytes.
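A small worked example of that size relationship (assuming 16-bit mono PCM; the parameters are illustrative):

```kotlin
// bytes = sampleRate * (durationMs / 1000) * bytesPerSample
fun pcmChunkSizeBytes(sampleRateHz: Int, durationMs: Int, bytesPerSample: Int = 2): Int =
    sampleRateHz * durationMs / 1000 * bytesPerSample

fun main() {
    println(pcmChunkSizeBytes(8000, 40))  // 640 bytes (320 shorts)
    println(pcmChunkSizeBytes(8000, 10))  // 160 bytes, WebRTC's 10 ms frame
    println(pcmChunkSizeBytes(16000, 10)) // 320 bytes at 16 kHz
}
```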
Secondly, using multiple AudioRecord instances at the same time is practically impossible. Since WebRTC is already recording from the microphone, a MediaRecorder instance cannot do anything useful, as this answer depicts: audio-record-multiple-audio-at-a-time. WebRTC has the following methods for managing the audio bytes:
1. Push input PCM data into `ProcessCaptureStream` to process in place.
2. Get the processed PCM data from `ProcessCaptureStream` and send to far-end.
3. The far end pushes the received data into `ProcessRenderStream`.
I maintain a complete tutorial on audio processing with WebRTC; for more details see Android-Audio-Processing-Using-Webrtc.
There are two parts to the solution:
Get the raw PCM audio frames from webrtc
Save them to a local file in compressed size so that it can be played out later
For the first part, you have to attach a SamplesReadyCallback while creating the audio device module, by calling the setSamplesReadyCallback method of JavaAudioDeviceModule's builder. This callback will give you the raw audio frames captured from the mic by WebRTC's AudioRecord.
For the second part, you have to encode the raw frames and write them into a file. Check out this sample from Google on how to do it - https://android.googlesource.com/platform/frameworks/base/+/master/packages/SystemUI/src/com/android/systemui/screenrecord/ScreenInternalAudioRecorder.java#234
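A minimal sketch of the first part, assuming the org.webrtc Android library (the handlePcm helper is hypothetical; that is where the encoding and file writing from the second part would go):

```kotlin
import android.content.Context
import org.webrtc.audio.JavaAudioDeviceModule

// Build an audio device module with a callback that receives raw PCM frames
// captured by WebRTC's AudioRecord. Pass the module to your PeerConnectionFactory.
fun createAudioDeviceModule(context: Context): JavaAudioDeviceModule =
    JavaAudioDeviceModule.builder(context)
        .setSamplesReadyCallback { samples ->
            handlePcm(samples.data, samples.sampleRate, samples.channelCount)
        }
        .createAudioDeviceModule()

// Hypothetical helper: encode the PCM (e.g. with MediaCodec) and write it to a file.
fun handlePcm(pcm: ByteArray, sampleRateHz: Int, channelCount: Int) {
    // encoding/writing omitted in this sketch
}
```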

Saving h.264 streaming video using MediaMuxer on Android

(I'm not really good in English but I'll do my best.)
I'm working on an Android app that saves received H.264 streaming video/audio frames into a clip, and I ran into a problem mentioned here: "Missing codec specific data."
I tried several methods to solve this:
Assign pre-defined codec specific data.
I borrowed this from this post. It worked surprisingly well - but only on my personal phone (Sony Xperia Z3, Android 5.1.1). Most test devices just crash (Android 4.3/4.4).
Parse codec specific data from the video stream itself (see the sketch after this list).
On my phone it crashed, but somehow it works on some devices.
I used this code as an example.
Create an encoder to encode the received video frames, then pass them to MediaMuxer.
Yes, it's a stupid idea, and it doesn't work.
Create a decoder to decode the received video frames, pass them to an encoder, and then pass the encoded frames to MediaMuxer and save the result.
The app isn't able to get any free buffer from the encoder. DEADLOCK.
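A hedged sketch of approach 2: extract the SPS and PPS NAL units (types 7 and 8) from the H.264 stream and hand them to MediaMuxer as codec specific data via csd-0/csd-1. The sps/pps byte arrays (with their 00 00 00 01 start codes) and the width/height are assumed to come from your own stream parsing:

```kotlin
import android.media.MediaFormat
import java.nio.ByteBuffer

// Build the video track format for MediaMuxer.addTrack() from parsed SPS/PPS.
fun buildAvcFormat(sps: ByteArray, pps: ByteArray, width: Int, height: Int): MediaFormat {
    val format = MediaFormat.createVideoFormat("video/avc", width, height)
    format.setByteBuffer("csd-0", ByteBuffer.wrap(sps))
    format.setByteBuffer("csd-1", ByteBuffer.wrap(pps))
    return format
}
```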
Now I'm running out of ideas.
The last hope I have is using ffmpeg.
But the resources I found encode/decode videos from video files, not from a stream.
Any suggestions?
Thanks in advance. :)
After a week of trying, weekend included, studying the H.264 spec and more,
I found out the problem isn't about the video.
It's the AUDIO that is causing the app to crash.
Case closed, thanks for viewing. :\

Android: How to show a stream from an IP camera in a SurfaceView

I am trying to develop a simple application that shows the video stream from an IP camera in a SurfaceView.
I am totally new to video decoding/encoding. In the last few days I have read a lot about the MediaCodec API and how to implement it, but I cannot find the right way. I still have to fully understand how the buffers work, how to depacketize the RTP packets from UDP, and how to pass each frame to MediaCodec.
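For the last step, a minimal sketch of the general MediaCodec decode loop, assuming complete H.264 access units are already being extracted from the stream and a Surface has been obtained from the SurfaceView (the resolution and the nextAccessUnit source are placeholders):

```kotlin
import android.media.MediaCodec
import android.media.MediaFormat
import android.view.Surface

// Decode H.264 access units and render them directly onto the given Surface.
fun decodeLoop(surface: Surface, nextAccessUnit: () -> ByteArray?) {
    val format = MediaFormat.createVideoFormat("video/avc", 1280, 720) // size is a guess
    val codec = MediaCodec.createDecoderByType("video/avc")
    codec.configure(format, surface, null, 0)
    codec.start()

    val info = MediaCodec.BufferInfo()
    var presentationUs = 0L
    while (true) {
        val unit = nextAccessUnit() ?: break
        val inIndex = codec.dequeueInputBuffer(10_000)
        if (inIndex >= 0) {
            codec.getInputBuffer(inIndex)?.apply { clear(); put(unit) }
            codec.queueInputBuffer(inIndex, 0, unit.size, presentationUs, 0)
            presentationUs += 33_000 // rough 30 fps spacing; real timing should come from the stream
        }
        val outIndex = codec.dequeueOutputBuffer(info, 10_000)
        if (outIndex >= 0) codec.releaseOutputBuffer(outIndex, true) // true = render to the Surface
    }
    codec.stop()
    codec.release()
}
```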
I have a couple of Sony EP521 IP cameras. From the CGI Command Manual I gather that the cameras support an MPEG-4/H.264 HTTP bit stream ("GET /h264..."; the camera sends H.264 raw data as its response) or an RTP (UDP) bit stream.
My problem is that I do not know where to start:
Which is the "best" way to implement this? (By best I mean the most reliable/correct but still reasonably easy way.)
Should I use HTTP bit stream or RTP?
Is MediaCodec strictly needed, or can I implement this another way? (e.g., does the android.media.MediaPlayer class already support H.264 raw data over RTP? I do not know whether it actually does.)
How can I extract the video data from an HTTP bit stream?
I know that there are a lot of similar questions, but none seems to fully answer my doubts.
The cameras also support MJPEG. That would be easier to implement, but for the moment I do not want to use MJPEG encoding.
Here the Camera CGI manual: http://wikisend.com/download/740040/G5%20Camera%20CGI%20manual.pdf
Thank you, and sorry if this has already been discussed.
