I am trying to implement a WebRTC DataChannel on Android. I want to create a simple PeerConnection object that opens a DataChannel to send data over the network using WebRTC. I am getting an error when I try to create my PeerConnection object. I learned that a factory is used to create the PeerConnection object, via factory.createPeerConnection().
For this, I have to create the PeerConnectionFactory object first; after that, I can use it to create the PeerConnection object. I get the errors "Could not find method android.media.MediaCodec.setParameters" and "Fatal signal 11 (SIGSEGV) at 0x00000000 (code=1)" when I try to create the PeerConnectionFactory object. I also tried the following code together with PeerConnectionFactory.initializeAndroidGlobals(this, false, false, false). This is what I am trying to do:
PeerConnectionFactory factory = new PeerConnectionFactory();
peer = new Peer();
This is what my Peer object looks like:
public class Peer implements SdpObserver, PeerConnection.Observer, DataChannel.Observer {
private PeerConnection pc;
private DataChannel dc;
public Peer() {
this.pc = factory.createPeerConnection(RTCConfig.getIceServer(),
RTCConfig.getMediaConstraints(), this);
dc = this.pc.createDataChannel("sendDataChannel", new DataChannel.Init());
}
@Override
public void onAddStream(MediaStream arg0) {
// TODO Auto-generated method stub
}
@Override
public void onDataChannel(DataChannel dataChannel) {
this.dc = dataChannel;
}
@Override
public void onIceCandidate(final IceCandidate candidate) {
try {
JSONObject payload = new JSONObject();
payload.put("type", "candidate");
payload.put("label", candidate.sdpMLineIndex);
payload.put("id", candidate.sdpMid);
payload.put("candidate", candidate.sdp);
sendSocketMessageDataChannel(payload.toString());
} catch (JSONException e) {
e.printStackTrace();
}
}
@Override
public void onIceConnectionChange(IceConnectionState iceConnectionState) {
}
@Override
public void onIceGatheringChange(IceGatheringState arg0) {
// TODO Auto-generated method stub
}
@Override
public void onRemoveStream(MediaStream arg0) {
// TODO Auto-generated method stub
}
@Override
public void onRenegotiationNeeded() {
// TODO Auto-generated method stub
}
@Override
public void onSignalingChange(SignalingState arg0) {
// TODO Auto-generated method stub
}
@Override
public void onCreateFailure(String msg) {
Toast.makeText(getApplicationContext(),
msg, Toast.LENGTH_SHORT)
.show();
}
@Override
public void onCreateSuccess(SessionDescription sdp) {
try {
JSONObject payload = new JSONObject();
payload.put("type", sdp.type.canonicalForm());
payload.put("sdp", sdp.description);
sendSocketMessageDataChannel(payload.toString());
pc.setLocalDescription(Peer.this, sdp);
} catch (JSONException e) {
e.printStackTrace();
}
}
@Override
public void onSetFailure(String arg0) {
// TODO Auto-generated method stub
}
@Override
public void onSetSuccess() {
// TODO Auto-generated method stub
}
@Override
public void onMessage(Buffer data) {
Log.w("FILE", data.toString());
}
@Override
public void onStateChange() {
Toast.makeText(getApplicationContext(),
"State Got Changed", Toast.LENGTH_SHORT)
.show();
/*
byte[] bytes = new byte[10];
bytes[0] = 0;
bytes[1] = 1;
bytes[2] = 2;
bytes[3] = 3;
bytes[4] = 4;
bytes[5] = 5;
bytes[6] = 6;
bytes[7] = 7;
bytes[8] = 8;
bytes[9] = 9;
ByteBuffer buf = ByteBuffer.wrap(bytes);
Buffer b = new Buffer(buf, true);
dc.send(b);
*/
}
}
Can anybody point me to sample source code that implements a DataChannel on Android? Please also let me know if I am not doing it the right way. I could not find documentation for Android native WebRTC that explains how to do this, so I am applying what I have learned from using WebRTC on the web.
Please let me know if my question is not clear.
PeerConnectionFactory no longer requires the audio and video engines to be initialized:
PeerConnectionFactory.initializeAndroidGlobals(this, false, false, false);
Now you can disable audio and video and still use data channels.
This is a known bug in the WebRTC code for Android. The following threads discuss it in more detail:
https://code.google.com/p/webrtc/issues/detail?id=3416
https://code.google.com/p/webrtc/issues/detail?id=3234
The bug is currently open. However, there is a workaround that works for now: in initializeAndroidGlobals, pass the audio and video parameters as true:
PeerConnectionFactory.initializeAndroidGlobals(getApplicationContext(), true, true, VideoRendererGui.getEGLContext());
Use this instead: PeerConnectionFactory.initializeAndroidGlobals(acontext, true, false, false, null);
Then create the factory: factory = new PeerConnectionFactory();
Then, in your Peer class, create the peer connection like this: factory.createPeerConnection(iceServers, sdpMediaConstraints, this);
This worked for me to establish ONLY a DataChannel, without the EGLContext used for video streaming.
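Putting those steps together, here is a minimal sketch of the data-channel-only setup with the old initializeAndroidGlobals() API. RTCConfig.getIceServer() and RTCConfig.getMediaConstraints() are the asker's own helpers and are assumed to exist as in the question; adjust names to your project.

// Initialize globals once, e.g. in your Activity, before creating the factory.
PeerConnectionFactory.initializeAndroidGlobals(
        getApplicationContext(), // application context
        true,                    // audio engine on (workaround for the crash above)
        false,                   // no video engine needed for a pure DataChannel
        false,                   // no hardware video acceleration
        null);                   // no EGL context, since nothing is rendered

PeerConnectionFactory factory = new PeerConnectionFactory();

PeerConnection pc = factory.createPeerConnection(
        RTCConfig.getIceServer(),        // List<PeerConnection.IceServer>
        RTCConfig.getMediaConstraints(), // MediaConstraints
        this);                           // your PeerConnection.Observer

DataChannel dc = pc.createDataChannel("sendDataChannel", new DataChannel.Init());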
UPDATE: If you still get this error, move to a newer version of the library; this API is long deprecated.
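For reference, on a current org.webrtc release the initialization looks roughly like this instead; treat the exact builder calls below as an assumption and check them against the version you actually depend on:

// Newer-style initialization (initializeAndroidGlobals() no longer exists there).
PeerConnectionFactory.initialize(
        PeerConnectionFactory.InitializationOptions.builder(getApplicationContext())
                .createInitializationOptions());
PeerConnectionFactory factory = PeerConnectionFactory.builder().createPeerConnectionFactory();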
Related
I'm trying to get the audio byte[] that's created when the TextToSpeech engine synthesises text.
I've tried creating a Visualizer and assigning an OnDataCaptureListener, but the byte[] it provides is always the same, so I don't believe the array is connected to the spoken text.
This is my implementation:
AudioManager audioManager = (AudioManager) this.getSystemService(Context.AUDIO_SERVICE);
audioManager.requestAudioFocus(focusChange -> Log.d(TAG, "focusChange is: is: " + focusChange), AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK);
int audioSessionId = audioManager.generateAudioSessionId();
mVisualizer = new Visualizer(audioSessionId);
mVisualizer.setEnabled(false);
mVisualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[0]);
mVisualizer.setDataCaptureListener(
new Visualizer.OnDataCaptureListener() {
public void onWaveFormDataCapture(Visualizer visualizer,
byte[] bytes, int samplingRate) {
//here the bytes are always equal to the bytes received in the last call
}
public void onFftDataCapture(Visualizer visualizer, byte[] bytes, int samplingRate) {
}
}, Visualizer.getMaxCaptureRate(), true, true);
mVisualizer.setEnabled(true);
I also found that you can use the SynthesisCallback to receive the byte[] via its audioAvailable() method but I can't seem to implement it properly.
I created a TextToSpeechService, but its onSynthesizeText() method is never called. However, I can tell that the service is working, since onLoadLanguage() is called.
My question in a nutshell: How do I get the byte[] representation of the audio created when the TextToSpeech engine synthesizes text?
Thanks in advance.
I heard that onAudioAvailable() was deprecated, and my callback is never called either.
So a workaround is:
In Activity:
try
{
tts.shutdown();
tts = null;
}
catch (Exception e)
{}
tts = new TextToSpeech(this, this);
In the onInit() method:
@Override
public void onInit(int p1)
{
HashMap<String,String> mTTSMap = new HashMap<String,String>();
tts.setOnUtteranceProgressListener(new UtteranceProgressListener()
{
@Override
public void onStart(final String p1)
{
// TODO: Implement this method
Log.e(TAG, "START");
}
@Override
public void onDone(final String p1)
{
if (p1.compareTo("abcde") == 0)
{
synchronized (MainActivity.this)
{
MainActivity.this.notifyAll();
}
}
}
@Override
public void onError(final String p1)
{
//this is also deprecated...
}
@Override
public void onAudioAvailable(final String id, final byte[] bytes)
{
// never called!
runOnUiThread(new Runnable(){
@Override
public void run()
{
// TODO: Implement this method
Toast.makeText(MainActivity.this, "id:" + id /*"bytes:" + Arrays.toString(bytes)*/, Toast.LENGTH_LONG).show();
Log.v(TAG, "BYTES");
}});
//super.onAudioAvailable(id,bytes);
}
});
Locale enEn = Locale.ENGLISH; // new Locale("en_EN") is not a valid language/country pair
if (tts.isLanguageAvailable(enEn) == TextToSpeech.LANG_AVAILABLE)
{
tts.setLanguage(enEn);
}
/*public int synthesizeToFile(java.lang.CharSequence text, android.os.Bundle params, java.io.File file, java.lang.String utteranceId);*/
//#java.lang.Deprecated()
// public int synthesizeToFile(java.lang.String text, java.util.HashMap<java.lang.String, java.lang.String> params, java.lang.String filename);
mTTSMap.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "abcde");
tts.synthesizeToFile("Hello", mTTSMap, "/storage/emulated/0/a.wav");
synchronized(MainActivity.this){
try{
MainActivity.this.wait();
}catch(InterruptedException e){}
ReadTheFile();
}
}
Then your job is to load a.wav into whatever buffer you want, for example using libraries like the one mentioned in this SO answer (a minimal file-reading sketch follows the summary below).
Summary:
Create the TTS engine.
Initialize it.
onInit() is called.
In onInit(), set up a new HashMap and put the utterance id in it.
Register the UtteranceProgressListener via setOnUtteranceProgressListener().
Synthesize something to a file.
Call wait().
In the onDone() callback, call notify().
After wait() returns, read the synthesized file into a buffer.
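For that last step, here is a minimal sketch of ReadTheFile(), assuming the same /storage/emulated/0/a.wav path used in the synthesizeToFile() call above (needs java.io.File, java.io.FileInputStream, java.io.IOException):

private byte[] readTheFile() {
    File wav = new File("/storage/emulated/0/a.wav");
    byte[] buffer = new byte[(int) wav.length()];
    FileInputStream in = null;
    try {
        in = new FileInputStream(wav);
        int read = 0;
        // read() may return fewer bytes than requested, so loop until the file is consumed
        while (read < buffer.length) {
            int n = in.read(buffer, read, buffer.length - read);
            if (n < 0) break;
            read += n;
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (in != null) try { in.close(); } catch (IOException ignored) {}
    }
    // For a standard PCM WAV the first 44 bytes are the header; the raw samples follow.
    return buffer;
}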
I'm running a simple http server on my local network and trying to use NsdManager from an Android device to discover it. I've followed the Android training guide on this matter, but my device is not finding any services.
Here's my Android code:
@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
private void setupNetworkDiscovery()
{
discoveryListener = new NsdManager.DiscoveryListener()
{
@Override
public void onStopDiscoveryFailed(String serviceType, int errorCode)
{
// TODO Auto-generated method stub
}
@Override
public void onStartDiscoveryFailed(String serviceType, int errorCode)
{
Log.d(getPackageName(), "Start failed");
}
@Override
public void onServiceLost(NsdServiceInfo serviceInfo)
{
// TODO Auto-generated method stub
}
@Override
public void onServiceFound(NsdServiceInfo serviceInfo)
{
Log.d(getPackageName(), "Found a service");
// display the service info
StringBuilder sb = new StringBuilder();
sb.append(serviceInfo.toString());
sb.append(" - name: ");
sb.append(serviceInfo.getServiceName());
sb.append("; type: ");
sb.append(serviceInfo.getServiceType());
sb.append("\n");
servicesLabel.append(sb.toString());
}
@Override
public void onDiscoveryStopped(String serviceType)
{
// TODO Auto-generated method stub
}
@Override
public void onDiscoveryStarted(String serviceType)
{
Log.d(getPackageName(), "Start succeeded");
}
};
nsdManager = (NsdManager) getSystemService(Context.NSD_SERVICE);
nsdManager.discoverServices("_http._tcp", NsdManager.PROTOCOL_DNS_SD, discoveryListener);
}
The only log message I get is "Start succeeded" from onDiscoveryStarted.
The server is certainly running, as I can connect to it using a browser. nmap also confirms the port is open:
Nmap scan report for 192.168.1.104
Host is up (0.00011s latency).
PORT STATE SERVICE
8080/tcp open http-proxy
What am I doing wrong?
Thanks in advance!
I don't think it's your primary issue, but the service type should be "_http._tcp." (you are missing the dot at the end).
For me (API 21), your code works fine either way: discovering "_http._tcp" still returns services registered as "_http._tcp.".
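As a minimal sketch, here is the corrected call plus the resolve step you typically need before you can connect; the resolve listener below is illustrative, not from the question:

// Note the trailing dot in the service type.
nsdManager.discoverServices("_http._tcp.", NsdManager.PROTOCOL_DNS_SD, discoveryListener);

// Inside onServiceFound(), resolve the service to get its host and port:
nsdManager.resolveService(serviceInfo, new NsdManager.ResolveListener() {
    @Override
    public void onResolveFailed(NsdServiceInfo serviceInfo, int errorCode) {
        Log.d(getPackageName(), "Resolve failed: " + errorCode);
    }
    @Override
    public void onServiceResolved(NsdServiceInfo serviceInfo) {
        Log.d(getPackageName(), "Resolved " + serviceInfo.getServiceName()
                + " at " + serviceInfo.getHost() + ":" + serviceInfo.getPort());
    }
});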
/**
* Minimalist Phono native client for Android.
* Illustrates most of the features of Phono Native in the
* simplest possible way.
*
* @author tim
*
*/
public class Audio {
/* pretty much any phono native usage will need these */
PhonoNative _pn;
private Context _ctx;
public PhonoMessaging _mess;
public PhonoPhone _phone;
public PhonoCall _call;
protected String _sessionId;
/* app specific */
protected AudioTrack _audioTrack;
private Data _data;
private static Audio audio;
// Note that the constructor is private
public static Audio getSingletonObject(Context ctx, Data data) {
if (audio == null) {
audio = new Audio(ctx, data);
}
return audio;
}
/*
* Standard Android stuff...
*/
Audio(Context ctx, Data data) {
_ctx = ctx;
_data = data;
/* we set some audio UX behaviour here - adjust as needed */
AudioManager as = (AudioManager) (_ctx
.getSystemService(Context.AUDIO_SERVICE));
if (as.isWiredHeadsetOn()) {
as.setSpeakerphoneOn(false);
} else {
as.setSpeakerphoneOn(true);
}
startPhono();
}
/*
* Phono native config and setup.
*/
private void startPhono() {
// we will need an implementation of the (Abstract) PhonoPhone class
// Our needs are simple enough to do that inline in an Anon class.
_phone = new PhonoPhone() {
@Override
// invoked when a new call is created
public PhonoCall newCall() {
// implement the abstract PhonoCall class with our behaviours
// again simple enough to do inline.
PhonoCall acall = new PhonoCall(_phone) {
@Override
public void onAnswer() {
android.util.Log.d("newCall", "Answered");
}
@Override
public void onError() {
android.util.Log.d("newCall", "Call Error");
}
@Override
public void onHangup() {
android.util.Log.d("newCall", "Hung up");
_call = null;
}
@Override
public void onRing() {
android.util.Log.d("newCall", "Ringing");
}
};
// set other initialization of the call - default volume etc.
acall.setGain(100);
acall.setVolume(100);
return acall;
}
@Override
public void onError() {
android.util.Log.d("PhonoPhone", "Phone Error");
}
@Override
public void onIncommingCall(PhonoCall arg0) {
android.util.Log.d("PhonoPhone", "Incomming call");
_call = arg0;
_call.answer();
}
};
// and we need an implementation of the (Abstract) PhonoMessaging class
_mess = new PhonoMessaging() {
@Override
public void onMessage(PhonoMessage arg0) {
android.util.Log.d("message", arg0.getBody());
}
};
// Likewise PhonoNative - optionally set the address of the phono server
_pn = new PhonoNative() {
// 3 boilerplate android methods - we return platform specific implementations of the AudioFace and PlayFace and DeviceInfoFace
// interfaces.
@Override
public AudioFace newAudio() {
DroidPhonoAudioShim das = new DroidPhonoAudioShim();
_audioTrack = das.getAudioTrack(); // in theory you might want to manipulate the audio track.
return das;
}
@Override
public PlayFace newPlayer(String arg0) {
PlayFace f = null;
try {
f = new Play(arg0, _ctx);
} catch (Exception e) {
e.printStackTrace();
}
return f;
}
@Override
public DeviceInfoFace newDeviceInfo(){
return new DeviceInfo();
}
// What to do when an error occurs
@Override
public void onError() {
android.util.Log.d("PhonoNative", "Error");
}
// we have connected to the Phono server so we now set the UI into motion.
@Override
public void onReady() {
// once we are connected, apply the messaging and phone instances we built.
_pn.setApiKey("******************************************");
_pn.setMessaging(_mess);
_pn.setPhone(_phone);
// This is where your Id mapping code would go
_sessionId = this.getSessionID();
if(audio._sessionId != null) {
JSONObject obj2 = new JSONObject();
try {
obj2.putOpt("action", "phone");
obj2.putOpt("id", _data.getUuid());
obj2.putOpt("name", _data.getMyName());
obj2.putOpt("roomId", _data.getRoomId());
obj2.putOpt("userId", _data.getUuid());
obj2.putOpt("ip", audio._sessionId);
obj2.putOpt("sip", "sip:"+audio._sessionId);
_data.broadcast(obj2);
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}//if
Message msg = Message.obtain();
msg.what = 0;
JSONObject obj = new JSONObject();
try {
obj.putOpt("action", "dialog");
obj.putOpt("message", "My call info: "+audio._sessionId+", sip:"+audio._sessionId);
} catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
msg.obj = obj;
_data.getMyViewUpdateHandler().sendMessage(msg);
android.util.Log.d("SESSION ID", _sessionId);
}
// we got disconnected from the phono server -
// retry logic goes here.
@Override
public void onUnready() {
android.util.Log.d("PhonoNative", "Disconnected");
}
};
// set the ringtones (ideally these are local resources - not remote URLS.)
Uri path = Uri.parse("android.resource://com.mypackage/"+R.raw.ring);
Uri path2 = Uri.parse("android.resource://com.mypackage/"+R.raw.ringback);
_phone.setRingTone(path.getPath());
_phone.setRingbackTone(path2.getPath());
// and request a connection.
// phono native will ensure that this (and all other network activity)
// occurs on a non UI thread.
_pn.connect();
}
}
/*Now the audio call
audio = new Audio(this, data);
audio._phone.dial(_callsip,null);
*/
Have you put the right permissions on the manifest file?
Since you are using phono you may want to use:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
EDIT:
Also make sure you are targeting the right API levels in the manifest file, for example:
<uses-sdk
android:minSdkVersion="8"
android:targetSdkVersion="19" />
I am working on an Android application using RecognizerIntent.ACTION_RECOGNIZE_SPEECH. My problem is that I don't know how to create the buffer that will capture the voice the user inputs. I have read a lot on Stack Overflow, but I just don't understand how to include the buffer and the recognition service callback in my code, or how to play back the contents that were saved into the buffer.
This is my code:
public class Voice extends Activity implements OnClickListener {
byte[] sig = new byte[500000] ;
int sigPos = 0 ;
ListView lv;
static final int check =0;
protected static final String TAG = null;
@Override
protected void onCreate(Bundle savedInstanceState) {
// TODO Auto-generated method stub
super.onCreate(savedInstanceState);
setContentView(R.layout.voice);
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
"com.domain.app");
SpeechRecognizer recognizer = SpeechRecognizer
.createSpeechRecognizer(this.getApplicationContext());
RecognitionListener listener = new RecognitionListener() {
@Override
public void onResults(Bundle results) {
ArrayList<String> voiceResults = results
.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
if (voiceResults == null) {
Log.e(TAG, "No voice results");
} else {
Log.d(TAG, "Printing matches: ");
for (String match : voiceResults) {
Log.d(TAG, match);
}
}
}
@Override
public void onReadyForSpeech(Bundle params) {
Log.d(TAG, "Ready for speech");
}
@Override
public void onError(int error) {
Log.d(TAG,
"Error listening for speech: " + error);
}
@Override
public void onBeginningOfSpeech() {
Log.d(TAG, "Speech starting");
}
@Override
public void onBufferReceived(byte[] buffer) {
// TODO Auto-generated method stub
TextView display=(TextView)findViewById (R.id.Text1);
display.setText("True");
System.arraycopy(buffer, 0, sig, sigPos, buffer.length) ;
sigPos += buffer.length ;
}
@Override
public void onEndOfSpeech() {
// TODO Auto-generated method stub
}
@Override
public void onEvent(int eventType, Bundle params) {
// TODO Auto-generated method stub
}
@Override
public void onPartialResults(Bundle partialResults) {
// TODO Auto-generated method stub
}
@Override
public void onRmsChanged(float rmsdB) {
// TODO Auto-generated method stub
}
};
recognizer.setRecognitionListener(listener);
recognizer.startListening(intent);
startActivityForResult(intent,check);
}
@Override
public void onClick(View arg0) {
// TODO Auto-generated method stub
}
}
The Android speech recognition API (as of API level 17) does not offer a reliable way to capture audio.
You can use the "buffer received" callback but note that
RecognitionListener says about onBufferReceived:
More sound has been received. The purpose of this function is to allow
giving feedback to the user regarding the captured audio. There is no
guarantee that this method will be called.
buffer: a buffer containing a sequence of big-endian 16-bit
integers representing a single channel audio stream. The sample rate
is implementation dependent.
and RecognitionService.Callback says about bufferReceived:
The service should call this method when sound has been received. The
purpose of this function is to allow giving feedback to the user
regarding the captured audio.
buffer: a buffer containing a sequence of big-endian 16-bit
integers representing a single channel audio stream. The sample rate
is implementation dependent.
So this callback is for feedback regarding the captured audio and not necessarily the captured audio itself, i.e. maybe a reduced version of it for visualization purposes. Also, "there is no guarantee that this method will be called", i.e. Google Voice Search might provide it in v1 but then decide to remove it in v2.
Note also that this method can be called multiple times during recognition. It is not documented however if the buffer represents the complete recorded audio or only the snippet since the last call. (I'd assume the latter, but you need to test it with your speech recognizer.)
So, in your implementation you should copy the buffer into a global variable to be saved e.g. into a wav-file once the recognition has finished.
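A minimal sketch of that idea: accumulate every onBufferReceived() chunk in memory and write it out once recognition finishes. The 16 kHz sample rate and the wavHeader() helper are assumptions, not part of the API (the docs quoted above say the rate is implementation dependent, and you have to build the 44-byte RIFF header yourself); needs java.io.ByteArrayOutputStream, java.io.File, java.io.FileOutputStream, java.io.IOException.

private final ByteArrayOutputStream captured = new ByteArrayOutputStream();

@Override
public void onBufferReceived(byte[] buffer) {
    // Big-endian 16-bit mono PCM according to the documentation quoted above.
    captured.write(buffer, 0, buffer.length);
}

@Override
public void onResults(Bundle results) {
    try {
        byte[] pcm = captured.toByteArray();
        FileOutputStream out = new FileOutputStream(new File(getFilesDir(), "speech.wav"));
        out.write(wavHeader(pcm.length, 16000, 1, 16)); // hypothetical helper that builds the RIFF header
        out.write(pcm); // WAV expects little-endian samples, so swap each byte pair if needed
        out.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}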
I have implemented a service which retrieves the inbox and implements the MessageCountListener interface to listen for new email arrivals. But on new email arrival, it does not get notified. What may be the reason, and what else can be done?
Here is the code:
public class EmailRetreiverService extends Service implements MessageCountListener{
public static final Vector v=new Vector();
public static final Vector nwmsg=new Vector();
Message[] m=null;
@Override
public IBinder onBind(Intent intent) {
// TODO Auto-generated method stub
return null;
}
@Override
public void onCreate(){
Log.d("EmailRetreiverStarted"," ");
ConvertToSmtp cts=new ConvertToSmtp("myemail@gmail.com","mypassword"," "," "," ", " ");
Folder folder=cts.retreiveInbox();
try {
m=folder.getMessages();
} catch (MessagingException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
for(int i=0;i<m.length;i++){
v.add(m[i]);
}
Log.d("EmailRetreiverMessageCount",new Integer(m.length).toString());
Collections.reverse(v);
/*folder.addMessageCountListener(new MessageCountAdapter(){
public void messagesAdded(MessageCountEvent ev) {
Log.d("MessageListener","message listner invoked.");
Message[] msgs = ev.getMessages();
TTSservice.say("Attention! "+msgs.length+" new messages have arrived now.Kindly retreive inbox again!");
Collections.reverse(v);
for (int i = 0; i < msgs.length; i++) {
v.add(msgs[i]);
//System.out.println("Got " + msgs.length + " new messages");
}
Collections.reverse(v);
// Just dump out the new messages
}
});*/
folder.addMessageCountListener(this);
}
@Override
public void onDestroy(){
v.removeAllElements();
}
@Override
public void messagesAdded(MessageCountEvent arg0) {
// TODO Auto-generated method stub
Log.d("EmailService","MessageArrived!");
}
@Override
public void messagesRemoved(MessageCountEvent arg0) {
// TODO Auto-generated method stub
Log.d("EmailService","MessageRemoved!");
}
}
It must be noted that the service is successfully retrieving the inbox, but it just does not get notified.
You need to do something to cause JavaMail to receive the notification from the server of new messages. A simple approach is to call the getMessageCount() method periodically. Another approach is to use the IMAP-specific idle() method, which requires dedicating a thread to calling that method.
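A minimal sketch of the idle() approach, assuming the folder is actually an IMAPFolder (com.sun.mail.imap.IMAPFolder) and is already open. idle() blocks until the server pushes an event, which is what ends up firing your MessageCountListener, so it must run on its own thread:

final IMAPFolder imapFolder = (IMAPFolder) folder;
imapFolder.addMessageCountListener(this);

new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                imapFolder.idle(); // returns after each server notification, so loop to keep listening
            }
        } catch (MessagingException e) {
            e.printStackTrace(); // connection dropped; reconnect/retry logic goes here
        }
    }
}).start();

Alternatively, simply call folder.getMessageCount() on a timer: it issues a NOOP to the server, which typically causes JavaMail to deliver any pending new-message events to your listener.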