We can distinguish between audio and video when an APK uses the standard Android APIs to play music or movies, whether we hook in under libaudioflinger or in the decoder's library.
When decoding audio/video in AwesomePlayer.cpp, we can determine the source data's type (audio or video).
We can also identify which app is calling under libaudioflinger
by using getCallingPid().
Question:
How can we determine a third-party app's data source type (audio or video) under AudioFlinger?
Yes, AudioFlinger processes the PCM data.
However, if you want to pass some parameters down from the application, you can use AudioManager's setParameters API and then add handling for that parameter in AudioFlinger.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
am.setParameters("key=value");
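For example, the app could tag what it is about to play before starting playback. The key name below is made up purely for illustration; AudioFlinger will only see it if you add matching parsing in its setParameters() path.

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
// "app_media_type" is a hypothetical key; it needs corresponding handling in AudioFlinger::setParameters()
am.setParameters("app_media_type=video");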
Context
I'm creating an Android application that plays Media Source Extensions streams using Multimedia Tunneling. I'm following the API call flow as described in the documentation. The audio part is handled with an AudioTrack, and the audio session ID is shared between the video MediaCodec and the AudioTrack. The Android SDK version is 26.
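In outline, the setup looks like this (a simplified sketch; variable names such as audioManager and surface are placeholders, not my actual code):

// Shared session id for the tunneled video decoder and the AudioTrack
int audioSessionId = audioManager.generateAudioSessionId();

// Video decoder configured for tunneled playback, bound to the shared session
MediaFormat videoFormat = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
videoFormat.setFeatureEnabled(MediaCodecInfo.CodecCapabilities.FEATURE_TunneledPlayback, true);
videoFormat.setInteger(MediaFormat.KEY_AUDIO_SESSION_ID, audioSessionId);
MediaCodec videoCodec = MediaCodec.createDecoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
videoCodec.configure(videoFormat, surface, null, 0);

// AudioTrack on the same session, flagged for hardware A/V sync
AudioTrack audioTrack = new AudioTrack.Builder()
        .setAudioAttributes(new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_MEDIA)
                .setContentType(AudioAttributes.CONTENT_TYPE_MOVIE)
                .setFlags(AudioAttributes.FLAG_HW_AV_SYNC)
                .build())
        .setAudioFormat(new AudioFormat.Builder()
                .setSampleRate(48000)
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build())
        .setSessionId(audioSessionId)
        .setTransferMode(AudioTrack.MODE_STREAM)
        .build();
audioTrack.play();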
Problem
Video is being played correctly but no audio can be heard.
I do not have any error reported by the API.
Decoder output buffers are written to the AudioTrack using AudioTrack.write.
Audio plays fine with non-tunneled playback.
audio_hal does not produce any error in the logs.
Question
I've looked into the ExoPlayer implementation and see that it writes a sync header before writing the buffer to the AudioTrack in tunneled playback.
ByteBuffer avSyncHeader = ByteBuffer.allocate(16);
avSyncHeader.order(ByteOrder.BIG_ENDIAN);
avSyncHeader.putInt(0x55550001);
avSyncHeader.putInt(4, size);
avSyncHeader.putLong(8, presentationTimeUs * 1000);
avSyncHeader.position(0);
audioTrack.write(avSyncHeader, avSyncHeader.remaining(), WRITE_NON_BLOCKING);
I have tried adding that header too but audio was still not heard.
Is this sync header necessary?
Is there any other undocumented requirement for Multimedia Tunneling?
You don't need to build the AV sync header yourself: use the other AudioTrack.write overload, the one that takes the timestamp of each buffer, and it will generate the AV sync header automatically.
Try:
int write(ByteBuffer audioData, int sizeInBytes, int writeMode, long timestamp)
Writes the audio data to the audio sink for playback in streaming mode on a HW_AV_SYNC track
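A rough sketch of what that looks like, assuming the AudioTrack was created with AudioAttributes.FLAG_HW_AV_SYNC and the shared audio session ID (pcmBuffer and presentationTimeUs are placeholders for your decoder output and its timestamp):

// presentationTimeUs comes from the decoder's BufferInfo; the write timestamp is in nanoseconds
long ptsNs = presentationTimeUs * 1000;
int written = audioTrack.write(pcmBuffer, pcmBuffer.remaining(),
        AudioTrack.WRITE_NON_BLOCKING, ptsNs);
if (written < 0) {
    // Negative return values are error codes such as AudioTrack.ERROR_BAD_VALUE
}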
The AudioRecord class allows recording of phone calls with one of the following options as the recording source:
VOICE_UPLINK: The audio transmitted from your end to the other party. In other words, what you speak into the microphone.
VOICE_DOWNLINK: The audio transmitted from the other party to your end.
VOICE_CALL: VOICE_UPLINK + VOICE_DOWNLINK.
I'd like to build an app that records both VOICE_UPLINK and VOICE_DOWNLINK and identifies the source of the voice.
1. When using VOICE_CALL as the AudioSource option, the UP/DOWNLINK streams are bundled together into the received data buffer, which makes it hard to identify the source of the voice.
2. Using two AudioRecords with VOICE_UPLINK and VOICE_DOWNLINK does not work: the second AudioRecord fails to start because the first AudioRecord locks the recording stream (see the sketch below).
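For reference, the failing approach in case (2) looks roughly like this (simplified, no error handling):

int sampleRate = 8000;
int bufSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioRecord uplink = new AudioRecord(MediaRecorder.AudioSource.VOICE_UPLINK,
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
AudioRecord downlink = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK,
        sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);

uplink.startRecording();    // starts fine
downlink.startRecording();  // the second recorder fails to start / never enters RECORDSTATE_RECORDING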
Is there any creative way to bypass the locking problem described in case (2), so that the VOICE_UPLINK and VOICE_DOWNLINK streams can be recorded simultaneously while easily identifying the source?
I need to stream audio from an external Bluetooth device and video from the camera to a Wowza server, so that I can then access the live stream through a web app.
I've been able to successfully send other streams to Wowza using the GoCoder library, but as far as I can tell, this library only sends streams that come from the device's camera and mic.
Does anyone have a good suggestion for implementing this?
In the GoCoder Android SDK, the setAudioSource method of WZAudioSource allows you to specify an audio input source other than the default. Here's the relevant API doc for this method:
public void setAudioSource(int audioSource)
Sets the actively configured input device for capturing audio.
Parameters:
audioSource - An identifier for the active audio source. Possible values are those listed at MediaRecorder.AudioSource. The default value is MediaRecorder.AudioSource.CAMCORDER. Note that setting this while audio is actively being captured will have no effect until a new capture session is started. Setting this to an invalid value will cause an error to occur at session begin.
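So, for example, something along these lines should work (the variable name is just illustrative, i.e. whichever WZAudioSource instance you registered with your broadcast config):

// wzAudioSource is the WZAudioSource instance used by your broadcast configuration;
// which MediaRecorder.AudioSource value actually picks up the Bluetooth input depends on how the device is routed
wzAudioSource.setAudioSource(MediaRecorder.AudioSource.VOICE_COMMUNICATION);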
I want to make an app that records incoming and outgoing calls. I am using MediaRecorder to do so, along with a service/BroadcastReceiver to detect phone state changes. I have set the audio source to AudioSource.VOICE_CALL. I am able to record my own voice, but not the voice from the other end. The same happens when the audio source is set to AudioSource.MIC.
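For reference, the recording part is set up roughly like this (simplified sketch; outputPath is chosen elsewhere in the service):

try {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    recorder.setOutputFile(outputPath);
    recorder.prepare();
    recorder.start();
} catch (IOException e) {
    // prepare() failed
}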
Please suggest a solution.
You cannot access the in-call audio stream in Android using the public SDK. Call audio is not exposed to apps; you can only do this if you modify Android at the source level, then build and install a new image from the customized source on your device.
I am developing an Android IP phone application on Android 2.1.
The application provides simple IP phone functionality. My program consists of a listener thread that receives commands and prompts the user to start the call.
An AudioTrack receives audio data and plays it back,
while an AudioRecord captures audio data and streams it out to the other side.
When only one user is streaming to the other, the sound quality is good. However, when the receiving side also starts recording and streaming, both sides hear weird sounds and loud noise, although both sides can still hear what the other says.
Is it not suitable to use the AudioTrack and AudioRecord classes on the same side? I cannot figure out the problem. Can anyone suggest a solution?
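For reference, the capture/playback setup on each side looks roughly like this (simplified sketch; the real code runs capture and playback on separate threads):

// Playback side: PCM received from the network is fed to an AudioTrack
int playBufSize = AudioTrack.getMinBufferSize(8000,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 8000,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        playBufSize, AudioTrack.MODE_STREAM);
track.play();

// Capture side: microphone PCM is read from an AudioRecord and sent over the socket
int recBufSize = AudioRecord.getMinBufferSize(8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, recBufSize);
record.startRecording();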