I have to develop a mobile application that monitors call information, to keep a company's users from spending too much time with the phone held to their ear. After x minutes, it should suggest using earphones.
1st question: is it possible to monitor data like this (call duration, start and end times, and whether earphones, the internal speaker, or the external speaker are in use) without jailbreaking or other hacks?
2nd question: is it possible to do this on both iOS and Android?
3rd question: do you know if Ionic has the capability to do that?
Thank you.
Answering your questions:
Question 1: Yes, it's possible on Android; it's not possible on iOS. On Android you can get call information if the user grants permission, with no jailbreaking or other tricks needed. On iOS there is no way to access call info.
Question 2: My first answer covers this, i.e. Android: possible, iOS: not possible.
Question 3: AFAIK the Ionic framework only exposes basic details such as call duration and the contacts framework; you should explore further on the Android side. Even with Ionic you can't access this info on iPhone at all: native iOS itself doesn't provide these details, so we can't expect them from the Ionic framework.
For Android:
You can easily get the call history, including incoming and outgoing call times (see the sketch below).
So it is possible on Android.
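For illustration, a minimal Kotlin sketch of reading call durations from the call log. This is not part of the original answer, and it assumes the READ_CALL_LOG runtime permission has already been granted:
import android.content.Context
import android.provider.CallLog

// Sketch: read recent call durations (in seconds) from the Android call log.
// Assumes the READ_CALL_LOG permission has been granted at runtime.
fun readCallDurations(context: Context): List<Long> {
    val durations = mutableListOf<Long>()
    val projection = arrayOf(CallLog.Calls.NUMBER, CallLog.Calls.TYPE, CallLog.Calls.DURATION)
    context.contentResolver.query(
        CallLog.Calls.CONTENT_URI, projection, null, null,
        CallLog.Calls.DATE + " DESC"
    )?.use { cursor ->
        val durationIdx = cursor.getColumnIndexOrThrow(CallLog.Calls.DURATION)
        while (cursor.moveToNext()) {
            durations.add(cursor.getLong(durationIdx)) // call length in seconds
        }
    }
    return durations
}
You can then compare each duration (or a live call timer) against your x-minute threshold.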
For iOS:
According to your question, you want to limit how long the current call is held near the ear.
You can also do this on iOS with a bit of cleverness.
iOS 10 introduced a new framework for calling: CallKit.
First, you have to load all contacts into your application, so that the user calls from your app.
For dialing, also add a custom phone dialer.
With CallKit's methods you can then do the following:
Add a call observer:
@property (nonatomic) CXCallObserver *callObserver;
Initialize the call observer:
- (instancetype)init
{
    self = [super init];
    if (self) {
        // Initialize the call observer
        _callObserver = [CXCallObserver new];
        [_callObserver setDelegate:self queue:dispatch_get_main_queue()];
    }
    return self;
}
Implement the CallKit observer delegate:
#pragma mark - CXCallObserverDelegate
- (void)callObserver:(CXCallObserver *)callObserver callChanged:(CXCall *)call {
    [self callStateValue:call];
}
#pragma mark - Callkit State
- (void)callStateValue:(CXCall *)call {
    NSLog(@"Call UUID: %@", call.UUID);
    NSLog(@"hasEnded %@", call.hasEnded ? @"YES" : @"NO");
    NSLog(@"isOutgoing %@", call.isOutgoing ? @"YES" : @"NO");
    NSLog(@"isOnHold %@", call.isOnHold ? @"YES" : @"NO");
    NSLog(@"hasConnected %@", call.hasConnected ? @"YES" : @"NO");
    if (call == nil || call.hasEnded == YES) {
        NSLog(@"CXCallState : Disconnected");
        [timer1 invalidate];
        NSLog(@"%ld", (long)self.duration);
        if (self.duration > 1) {
            self.duration = 1;
        }
    }
    if (call.isOutgoing == YES && call.hasConnected == NO) {
        // Outgoing call is dialing
    }
    if (call.isOutgoing == NO && call.hasConnected == NO && call.hasEnded == NO && call != nil) {
        self.duration = 0;
        NSLog(@"CXCallState : Incoming");
        NSLog(@"Call Details: %@", call);
    }
    if (call.hasConnected == YES && call.hasEnded == NO) {
        NSLog(@"CXCallState : Connected");
        timer1 = [NSTimer scheduledTimerWithTimeInterval:1.0 repeats:YES block:^(NSTimer * _Nonnull timer) {
            self.duration++;
            NSLog(@"%ld", (long)self.duration);
            // Once self.duration passes your x-minute threshold, suggest earphones here.
        }];
    }
}
You can get the call duration this way, and then add your own condition so that after x minutes the app suggests using earphones.
I'm making an app right now that interacts with an ESP32 through Bluetooth Classic. I'm reading the hall sensor and sending a 0 when its value is above 0, and a 1 when it's below. Now, when I register that 1 in App Inventor 2 (AI2) and some things happen in the app because of it, the ESP32 malfunctions or something: I stop getting readings from the sensor in the serial monitor, and the Bluetooth connection drops. It seems like the whole ESP just stops dead in its tracks. I'm also not sending any data to the ESP32, just receiving from it. The ESP code is super small, but the app code not so much.
The only way to fix this issue is resetting the ESP, which isn't really doable in my use case. Any way to fix this?
#include "BluetoothSerial.h"
#if !defined(CONFIG_BT_ENABLED) || !defined(CONFIG_BLUEDROID_ENABLED)
#error Bluetooth is not enabled! Please run `make menuconfig` to and enable it
#endif
BluetoothSerial SerialBT;
void setup() {
Serial.begin(115200);
SerialBT.begin("Sport Spel"); //Bluetooth device name
}
void loop() {
Serial.println(hallRead());
if (SerialBT.available)
{
if (hallRead() < 0)
{
SerialBT.write('1');
}
else
{
SerialBT.write('0');
}
delay(20);
}
}
(Images 1–6: screenshots of the App Inventor app code.)
The line
if (SerialBT.available)
should be
if (SerialBT.available())
As it's written in your question, you're testing whether the address of the method named available on the SerialBT object is true, which it always will be. You want to actually call that method, so you need to include the parentheses in order to invoke it.
When the phone is ringing from an incoming call, I want to show my custom UI if the phone number is a specific number. If it is not, I want to pass the call to the built-in system call app (or any other call app is okay).
I should use 'InCallService', with the device setting my app as the default call app, so that my custom-UI activity is shown even when the phone screen is locked. The following Kotlin source code is my goal.
override fun onCallAdded(call: Call) {
    // The app should receive a new incoming call via 'onCallAdded'
    super.onCallAdded(call)
    val phoneNumber = getPhoneNumber(call)
    if (isMyTargetNumber(phoneNumber)) {
        // show my custom UI
    } else {
        // run a built-in call app
    }
}
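(getPhoneNumber above is my own helper, not a framework method; a minimal sketch of it, assuming the incoming call's handle uses the tel: scheme, could be:)
// Hypothetical helper: extract the caller's number from the android.telecom.Call.
// Assumes the call's handle is a tel: URI.
private fun getPhoneNumber(call: Call): String? =
    call.details?.handle?.schemeSpecificPart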
The problem I want to solve is how to launch the built-in call app appropriately. I mean, I want to fill in the blank in the 'else' branch:
else {
//run a built-in call app
}
Apps on the Android market like Truecaller or Whoscall work exactly the way I want to achieve. I want to make my app work like those apps. Please help me and advise me.
I'm trying to incorporate Pepper's built-in Android tablet more in Dialogflow interactions. In particular, my goal is to open applications installed on the tablet itself for people to use while they're talking with Pepper. I'm aware there is a 'j-tablet-browser' app installed on Pepper's end that can let a person browse the tablet like an ordinary Android device, but I would like to take it one step further and directly launch an Android app, like Amazon's Alexa.
The best solution I can come up with is:
User says specific utterance (e.g. "Pepper, open Alexa please")
Dialogflow launches the j-tablet-browser behavior
{
    "speak": "Sure, just a second",
    "action": "startApp",
    "action_parameters": {
        "appId": "j-tablet-browser/."
    }
}
User navigates the Android menu manually to tap the Alexa icon
My ideal goal is to make the process seamless:
User says specific utterance (e.g. "Pepper, open Alexa please")
Dialogflow launches the Alexa app installed on the Android tablet
Does anyone have an idea how this could be done?
This is quite a broad question so I'll try and focus on the specifics for launching an app with a Dialogflow chatbot. If you don't already have a QiSDK Dialogflow chatbot running on Pepper, there is a good tutorial here which details the full process. If you already have a chatbot implemented I hope the below steps are general enough for you to apply to your project.
This chatbot only returns text results for Pepper to say, so you'll need to make some modifications to allow particular actions to be launched.
Modifying DialogflowDataSource
Step 2 on this page of the tutorial details how to send a text query to Dialogflow and get a text response. You'll want to modify it to return the full response object (including actions), not just the text. Define a new function called detectIntentFullResponse, for example.
// Change this
return response.queryResult.fulfillmentText
// to this
return response.queryResult
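For reference, a rough sketch of what detectIntentFullResponse might look like with the Dialogflow v2 Java client; sessionsClient and projectId are assumed to exist as in the tutorial's data source class:
import com.google.cloud.dialogflow.v2.QueryInput
import com.google.cloud.dialogflow.v2.QueryResult
import com.google.cloud.dialogflow.v2.SessionName
import com.google.cloud.dialogflow.v2.TextInput

fun detectIntentFullResponse(text: String, sessionId: String, languageCode: String): QueryResult {
    // sessionsClient and projectId come from the surrounding class (see the tutorial)
    val session = SessionName.of(projectId, sessionId)
    val textInput = TextInput.newBuilder().setText(text).setLanguageCode(languageCode)
    val queryInput = QueryInput.newBuilder().setText(textInput).build()
    // Return the whole QueryResult (action, parameters, fulfillmentText),
    // not just the fulfillment text.
    return sessionsClient.detectIntent(session, queryInput).queryResult
}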
Modifying DialogflowChatbot
Step 2 on this page shows how to implement a QiSDK Chatbot. Add some logic to check for actions in the replyTo function.
var response: QueryResult? = null
// ...
response = dataSource.detectIntentFullResponse(input, dialogflowSessionId, language)
// ...
return if (!response?.action.isNullOrEmpty()) {
    StandardReplyReaction(
        ActionReaction(qiContext, response!!), ReplyPriority.NORMAL
    )
} else if (!response?.fulfillmentText.isNullOrEmpty()) {
    StandardReplyReaction(
        SimpleSayReaction(qiContext, response!!.fulfillmentText), ReplyPriority.NORMAL
    )
} else {
    StandardReplyReaction(
        EmptyChatbotReaction(qiContext), ReplyPriority.FALLBACK
    )
}
Now make a new class, ActionReaction. Note that the code below is incomplete, but it should serve as an example of how to determine which action to run (if you want others). Look at SimpleSayReaction for more implementation details.
class ActionReaction internal constructor(context: QiContext, private val response: QueryResult) :
    BaseChatbotReaction(context) {
    override fun runWith(speechEngine: SpeechEngine) {
        if (response.action == "launch-app") {
            // Dialogflow parameters arrive as a protobuf Struct
            val appId = response.parameters.fieldsMap["app"]?.stringValue
            // launch the app identified by appId
        }
    }
}
As for launching the app, various approaches are detailed in other questions, such as here. It is possible to extend this approach to do other actions, such as running or retrieving online data.
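For the launch itself, a hedged Kotlin sketch using PackageManager.getLaunchIntentForPackage; the Alexa package name below is an assumption, so verify what is actually installed on Pepper's tablet:
import android.content.Context
import android.content.Intent

// Sketch: launch an installed app by package name via its launch intent.
// "com.amazon.dee.app" is an assumed package name for the Alexa app.
fun launchApp(context: Context, packageName: String = "com.amazon.dee.app") {
    val intent = context.packageManager.getLaunchIntentForPackage(packageName)
    if (intent != null) {
        intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
        context.startActivity(intent)
    } else {
        // App not installed; fall back to the j-tablet-browser behavior.
    }
}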
I am developing an application for my friend, who is in sales. This application will make phone calls one after another: as soon as one call gets disconnected, it will automatically call the next number from the list. This list can be read from an XML data source, JSON, MongoDB, or even an Excel sheet.
This could be an iOS app that reads data from an endpoint, stores it, and can initiate the calls at any point; it won't stop until all the calls are made.
The next call will be made only after the previous call has finished.
I am thinking about using a Node-based web app with Google Voice to trigger the chain.
I've no experience with the iOS / Android APIs, but I'm willing to work on that if it's viable on those platforms.
Note: what we're trying to avoid is the whole process of
looking up the phone number,
touching hang-up and then tapping another phone number.
It should trigger the next call by itself as soon as the current call gets disconnected.
Also, we're trying to avoid any paid services like Twilio.
Thanks in advance :)
For iOS, you could use CTCallCenter:
self.callCenter = [[CTCallCenter alloc] init];
self.callCenter.callEventHandler = ^(CTCall *call) {
    if ([call.callState isEqualToString:CTCallStateConnected])
    {
        // NSLog(@"call connected");
    }
    else if ([call.callState isEqualToString:CTCallStateDialing])
    {
        // call is dialing
    }
    else if ([call.callState isEqualToString:CTCallStateDisconnected])
    {
        // NSLog(@"call disconnected"); trigger the next call from here
    }
    else if ([call.callState isEqualToString:CTCallStateIncoming])
    {
        // incoming call
    }
};
Download the phone list, loop over it, and make a call; listen for CTCallCenter and app delegate events to detect that the user has finished the last call and our app is active again, then make the next call.
Or you can try the demo here!
I've been learning Cordova, and now I'm developing an app to record voice; I'd like to get the volume/dB/amplitude of the sound being recorded.
I know there's no official Cordova plugin for this, so I searched and tested some plugins out there:
Wavesurfer.js:
It's easy and has a lot of features, but it doesn't work on Android; I don't know if the problem is the WebView or what (I have Android 4.1.2).
Here are the details of my problem with this plugin:
https://github.com/katspaugh/wavesurfer.js/issues/341
MicVolume.js:
https://github.com/shukriadams/micVolume
I tried this without success; I don't know exactly what the problem is, but I think in this case it's in the cordova.exec call.
I can't find any more plugins. Is there something else I can do or use, or am I doing something wrong? I find it strange that I can't easily find this kind of plugin; maybe the solution is to start learning Java from scratch? >:(
Thanks in advance.
You should use Crosswalk so that you can access the Web Audio API without a Cordova plugin. From there you could use something like this example:
http://tyleregeto.com/article/extracting-audio-data-with-the-web-audio-api
As of the current Crosswalk version, the Web Audio API is fully supported, and since you are using Cordova, you won't have any Cross-Origin issues to worry about (e.g. you will be on 'localhost' so there won't be any HTTPS problems).
I have built a Metronome in Angular that I used in a recent Ionic app with Crosswalk, and it works perfectly (even on a pretty bad XGODY device I used for testing). See the Gist below:
https://gist.github.com/MT--/ece6e388a693416aa7e7
I don't know how relevant this is since the question was asked over a year ago, but it might help other people looking for an answer. MicVolume.js works, but it is poorly documented; I spent a lot of time figuring it out. The following code should work:
function audioSuccessCallback(e) {
    setInterval(function() {
        micVolume.read(function(reading) {
            console.log(reading.volume);
        },
        function(error) {
            console.log(error);
        });
    }, 200);
}

function audioErrorCallback(e) {
    console.log(e);
}

micVolume.start(audioSuccessCallback, audioErrorCallback);
All you need to do is call micVolume.read() in a loop, and it will return the "loudness" of the recording, through either the internal mic or an external mic. Remember to call micVolume.start() after deviceready fires.
If you need the frequency of the sound, you could add the following code to the plugin MicVolumePlugin.java:
// Now we need to decode the PCM data using the zero-crossings method
int numCrossing = 0; // initialize the number of zero crossings to 0
for (int p = 0; p < bufferSize/4; p += 4) {
    if (buffer[p]   > 0 && buffer[p+1] <= 0) numCrossing++;
    if (buffer[p]   < 0 && buffer[p+1] >= 0) numCrossing++;
    if (buffer[p+1] > 0 && buffer[p+2] <= 0) numCrossing++;
    if (buffer[p+1] < 0 && buffer[p+2] >= 0) numCrossing++;
    if (buffer[p+2] > 0 && buffer[p+3] <= 0) numCrossing++;
    if (buffer[p+2] < 0 && buffer[p+3] >= 0) numCrossing++;
    if (buffer[p+3] > 0 && buffer[p+4] <= 0) numCrossing++;
    if (buffer[p+3] < 0 && buffer[p+4] >= 0) numCrossing++;
}
for (int p = (bufferSize/4)*4; p < bufferSize-1; p++) {
    if (buffer[p] > 0 && buffer[p+1] <= 0) numCrossing++;
    if (buffer[p] < 0 && buffer[p+1] >= 0) numCrossing++;
}
// Set the audio frequency to half the number of zero crossings, times the
// number of times our buffer fits in one second of samples.
float frequency = (44100*4/bufferSize) * (numCrossing/2);
returnObj.put("frequency", frequency);
FFT is more accurate, but it's a lot slower; this does the job pretty well. (I can't remember where I found this Java code, but props to whoever wrote it!)
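For clarity, here is the same zero-crossing estimate restated as a compact Kotlin function; a sketch assuming a mono 16-bit PCM buffer at 44.1 kHz:
// Zero-crossing frequency estimate: each full cycle of a waveform produces
// two zero crossings, so frequency is roughly (crossings / 2) divided by
// the number of seconds of audio in the buffer.
fun estimateFrequency(buffer: ShortArray, sampleRate: Int = 44100): Float {
    var crossings = 0
    for (i in 0 until buffer.size - 1) {
        if (buffer[i] > 0 && buffer[i + 1] <= 0) crossings++
        if (buffer[i] < 0 && buffer[i + 1] >= 0) crossings++
    }
    val seconds = buffer.size.toFloat() / sampleRate
    return (crossings / 2f) / seconds
}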