Qt positioning: no updates from QGeoPositionInfoSource - android

I'm starting out with Qt and I'm trying to get GPS coordinates on my iOS and Android devices with C++, following the example in the official documentation.
The source is not null, but the positionUpdated slot is never called.
Any tips would be welcome, thank you.
ConnectionDeviceContextForDebug::ConnectionDeviceContextForDebug(StringDebugDisplayer* debugDisplayer) :
    _debugDisplayer(debugDisplayer)
{
    QGeoPositionInfoSource *source = QGeoPositionInfoSource::createDefaultSource(this);
    if (source) {
        connect(source, SIGNAL(positionUpdated(QGeoPositionInfo)),
                this, SLOT(positionUpdated(QGeoPositionInfo)));
        source->setUpdateInterval(100);
        source->startUpdates();
        _debugDisplayer->setText("source found");
        //source->requestUpdate();
    }
}

void ConnectionDeviceContextForDebug::positionUpdated(const QGeoPositionInfo &info)
{
    _debugDisplayer->setText("position updated");
}
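One common cause on Android is a missing location permission: without ACCESS_FINE_LOCATION declared in AndroidManifest.xml and granted at runtime, createDefaultSource() can still return a valid source that never delivers a fix. Below is a minimal sketch that defers startUpdates() until the permission is granted, assuming Qt 6.5+ (the QPermissions API; on Qt 5 the QtAndroidExtras module plays this role). startWithPermission is a hypothetical helper, not part of the question's code.

// Hedged sketch, assuming Qt 6.5+ and ACCESS_FINE_LOCATION already
// declared in AndroidManifest.xml: request the runtime permission and
// only start updates once it has been granted.
#include <QCoreApplication>
#include <QGeoPositionInfoSource>
#include <QPermissions>

// hypothetical helper
void startWithPermission(QGeoPositionInfoSource *source)
{
    QLocationPermission permission;
    permission.setAccuracy(QLocationPermission::Precise);
    qApp->requestPermission(permission, [source](const QPermission &result) {
        if (result.status() == Qt::PermissionStatus::Granted)
            source->startUpdates(); // updates only flow once granted
    });
}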

Related

rtlsdr doesn't open in Qt Android C++

I have a Qt Android project in C++. When I call the "rtlsdr_get_device_name" function it returns "Generic RTL2832U OEM". But when I call the "rtlsdr_open" function it returns -3. Please help me solve this problem.
Thank you, silicontrip.
My project is a Qt Android project.
rtlsdr_dev_t *RtlSdrDevice;
int devicecount = rtlsdr_get_device_count();
if (devicecount != 0)
{
    QString rtlname = rtlsdr_get_device_name(0);
    // this function returns "Generic RTL2832U OEM "
    retvalue = rtlsdr_open(&RtlSdrDevice, 0);
    // this function returns -3
    if (retvalue == 0) // the RTL opened correctly
    {
        ....
    }
    ...
}
The rtlsdr device doesn't open successfully.
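A hedged hint: rtlsdr_open() passes libusb error codes through on failure, and -3 corresponds to LIBUSB_ERROR_ACCESS (insufficient permissions). On Android that usually means the app was never granted USB host permission for the dongle. A minimal sketch for decoding the return value, assuming librtlsdr is built on libusb; printOpenError is a hypothetical helper:

#include <libusb.h>
#include <stdio.h>

// Hypothetical helper: turn the failing return value of rtlsdr_open()
// into a readable libusb error name. For -3 this prints
// "LIBUSB_ERROR_ACCESS", i.e. the process may not access the device.
void printOpenError(int retvalue)
{
    printf("rtlsdr_open failed: %s\n", libusb_error_name(retvalue));
}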

Microphone Timer in Android Flutter

Good morning guys, I'm using the speech_to_text lib to use the microphone and do voice recognition. I added a Timer to keep the microphone active for longer after the person speaks, but it only works on iOS; from what I saw, Android already has a native timer. Does anyone know what I can do? Thank you!
@action
onPressedMic({bool continous = false}) {
  this.initSpeech();
  if (this.hasSpeech) {
    try {
      stt.listen(
        localeId: LocaleUtils.getPtBR().localeId,
        onSoundLevelChange: _sttOnSoundLevelChange,
        onResult: _sttResultListener,
        cancelOnError: true,
      );
      // displays the wave indicating the audio capture
      status = StatusFooter.capturing_speech;
      if (Platform.isIOS) _startTimerListen();
      if (_startAudioCapture != null) _startAudioCapture();
    } catch (e) {
      status = StatusFooter.open_speech;
      print("Pressed Mic error: $e");
    }
  }
}

@computed
bool get canShowSuggestions => (this.suggestionChips?.length ?? 0) > 0;

@action
_sttResultListener(SpeechRecognitionResult result) async {
  // restart the timer, if the stt stop command has not been issued
  if (Platform.isIOS && !result.finalResult) _startTimerListen();
  this.textCaptureAudio = result.recognizedWords;
  // sends the question, as the stt stop command has been issued
  if (result.finalResult && this.textCaptureAudio.trim().isNotEmpty)
    this.sendQuestion(this.textCaptureAudio);
}

void _startTimerListen() {
  _cancelTimerListen();
  timerListen = Timer(Duration(seconds: 3), () {
    if (this.textCaptureAudio.trim().isNotEmpty) {
      stt.stop();
    } else {
      _defineOpenSpeechStatus();
    }
  });
}
As far as I know, there is really no way for an app to extend the period of time the microphone listens on Android, neither with Flutter nor with native Android development. I tried to solve this problem for an app of my own a few years ago, but speech recognition on Android simply does not support it. I'm sorry, but I hope this helps clarify things.

How to turn on the torch/flashlight with GooglePlay Services Vision API Xamarin Android

I have been trying to implement the flashlight/torch feature of the camera using the Google Play Services Vision API (via NuGet from Visual Studio) for the past few days without success. I have noticed that there is a GitHub implementation of this API which has such functionality, but it is only available to Java users.
I was wondering if there is anything similar for C# Xamarin users.
The Camera object is not exposed by this API, so I am not able to alter the camera parameters needed to activate the flashlight.
I would like to be sure that this functionality is not available so I don't waste more time on it. It might just be that the Xamarin developers have not attended to this functionality yet and will in the near future.
UPDATE
https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/BarcodeCaptureActivity.java
There you can see that line 214 makes this method call:
mCameraSource = builder.setFlashMode(useFlash ? Camera.Parameters.FLASH_MODE_TORCH : null).build();
SetFlashMode is not a method of CameraSource in the NuGet package, but it is in the GitHub (open-source) version.
The Xamarin Vision library didn't expose the method to set the flash mode.
Workaround:
Using reflection, you can get the Camera object from the CameraSource, add the flash parameter, and then set the updated parameters back on the camera.
This should be called after the SurfaceView has been created.
Code
public Camera getCameraObject(CameraSource _camSource)
{
    // The camera field is obfuscated ("zzbNN" in this build of the
    // library), so search the declared fields for it via reflection.
    Field[] cFields = _camSource.Class.GetDeclaredFields();
    Camera _cam = null;
    try {
        foreach (Field item in cFields) {
            if (item.Name.Equals("zzbNN")) {
                Console.WriteLine("Camera");
                item.Accessible = true;
                try {
                    _cam = (Camera)item.Get(_camSource);
                } catch (Exception e) {
                    Logger.LogException(this, e);
                }
            }
        }
    } catch (Exception e) {
        Logger.LogException(this, e);
    }
    return _cam;
}

public void setFlash(bool isEnable)
{
    try {
        isTorch = !isEnable;
        var _cam = getCameraObject(mCameraSource);
        if (_cam == null) return;
        var _pareMeters = _cam.GetParameters();
        var _listOfSuppo = _cam.GetParameters().SupportedFlashModes;
        // fixed indexes into the supported flash modes (fragile across devices)
        _pareMeters.FlashMode = isTorch ? _listOfSuppo[0] : _listOfSuppo[3];
        _cam.SetParameters(_pareMeters);
    } catch (Exception e) {
        Logger.LogException(this, e);
    }
}
Basically, anything you can do with Android can be done with Xamarin.Android. All the underlying APIs are available.
Since you have existing Java code, you can create a binding project that enables you to call the code from your Xamarin.Android project. Here's a good article on how to get started: Binding a Java Library
On the other hand, I don't think you need a library to do what you want to. If you only want torch/flashlight functionality, you just need to adapt the Java code from this answer to work in Xamarin.Android with C#.

Android - face recognition using OpenCV?

In my application I'm going to implement face recognition login, so I went with the OpenCV library for recognizing faces. Please help me do this with sample code and tutorials.
Thanks in advance
Well, my colleagues and I did some investigation on face recognition last year, and these are some of our considerations about using integrated recognition tools vs. JavaCV (the Java bindings for OpenCV):
Please check the tutorials below:
Face Detection on Android Part-I (Wayback link)
Face Detection on Android Part-II (Wayback link)
Hope it helps :)
You can use the NDK to access the C/C++ OpenCV API:
docs
beginner tutorial
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

void DetectMyFace()
{
    // image structure in OpenCV
    IplImage *inImg = 0;
    // face detector classifier
    CvHaarClassifierCascade *clCascade = 0;
    CvMemStorage *mStorage = 0;
    CvSeq *faceRectSeq;

    inImg = cvLoadImage("2.jpg");
    mStorage = cvCreateMemStorage(0);
    clCascade = (CvHaarClassifierCascade *)cvLoad("haarcascade_frontalface_default.xml", 0, 0, 0);

    if (!inImg || !mStorage || !clCascade)
    {
        printf("Initialization error: %s", (!inImg) ? "can't load image" :
               (!clCascade) ? "can't load haar cascade" :
               "unable to locate memory storage");
        return;
    }

    faceRectSeq = cvHaarDetectObjects(inImg, clCascade, mStorage,
                                      1.2,
                                      3,
                                      CV_HAAR_DO_CANNY_PRUNING,
                                      cvSize(25, 25));

    const char *winName = "Display Face";
    cvNamedWindow(winName, CV_WINDOW_AUTOSIZE);

    // draw a rectangle around every detected face
    for (int i = 0; i < (faceRectSeq ? faceRectSeq->total : 0); i++)
    {
        CvRect *r = (CvRect *)cvGetSeqElem(faceRectSeq, i);
        CvPoint p1 = { r->x, r->y };
        CvPoint p2 = { r->x + r->width, r->y + r->height };
        cvRectangle(inImg, p1, p2, CV_RGB(0, 255, 0), 1, 4, 0);
    }

    cvShowImage(winName, inImg);
    cvWaitKey(0);
    cvDestroyWindow(winName);

    // release the resources
    cvReleaseImage(&inImg);
    if (clCascade) cvReleaseHaarClassifierCascade(&clCascade);
    if (mStorage) cvReleaseMemStorage(&mStorage);
}
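A side note: the IplImage/CvHaar* calls above belong to OpenCV's legacy C API, which was deprecated and later removed. A rough equivalent with the C++ API, as a sketch only (assumes OpenCV 2.4+ and the same image and cascade files; detectMyFaceCpp is a hypothetical name):

#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>

// Sketch of the same detection using the C++ API: load an image, run
// the Haar cascade, and draw a green rectangle around each face.
void detectMyFaceCpp()
{
    cv::Mat img = cv::imread("2.jpg");
    cv::CascadeClassifier cascade("haarcascade_frontalface_default.xml");
    if (img.empty() || cascade.empty())
        return; // image or cascade failed to load

    std::vector<cv::Rect> faces;
    cascade.detectMultiScale(img, faces, 1.2, 3, 0, cv::Size(25, 25));
    for (size_t i = 0; i < faces.size(); i++)
        cv::rectangle(img, faces[i], cv::Scalar(0, 255, 0));

    cv::imshow("Display Face", img);
    cv::waitKey(0);
}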
I have already made an Android app for Face Recognition using OpenCV. You can check it out: https://github.com/yaylas/AndroidFaceRecognizer

Unity3d Input.location from documentation not available in code

I am working on a Unity3d for Android project. Using the documentation from Unity:
http://unity3d.com/support/documentation/ScriptReference/Input-location.html
I should be able to use Input.location to get access to GPS location data. But instead I get an error telling me that Input.location is not part of Unity.
Assets/Scripts/Prototype1.js(27,29): BCE0019: 'location' is not a member of 'UnityEngine.Input'.
I've checked for updates and it tells me the system is fully up to date. I'm running version 3.4.2f3.
Is the documentation outdated? Is there a different reference to the LocationService? How can I get the location data?
The files at unity3d.com contain the documentation for the most recent release, Unity3D 3.5. If you look at the documentation on your local file system, you won't find Input.location. It seems they changed the interface in Unity3D 3.5.
I haven't used GPS until now; maybe this thread and the links provided can throw some light on this:
How to import GPS location coordinates from Android device?
I found documentation for the old API, based on iPhoneSettings. This is the code I'm using now; it's not complete (it doesn't handle location-service timeouts and other such well-rounded behavior), but it demonstrates the approach.
function Start () {
    locationStatus = LocationServiceStatus.Stopped;
    StartCoroutine(startLocationService());
}

function Update () {
    if (locationStatus == LocationServiceStatus.Running)
    {
        /*
        lat = Input.location.latitude;
        lon = Input.location.longitude;
        alt = Input.location.altitude;
        */
        lat = iPhoneInput.lastLocation.latitude;
        lon = iPhoneInput.lastLocation.longitude;
        alt = iPhoneInput.lastLocation.altitude;
    }
}

function startLocationService()
{
    //Input.location.Start(accuracy, distance);
    iPhoneSettings.StartLocationServiceUpdates(accuracy, distance);

    //while (Input.location.status == LocationServiceStatus.Initializing && maxWait > 0) {
    while (iPhoneSettings.locationServiceStatus == LocationServiceStatus.Initializing && maxWait > 0)
    {
        yield WaitForSeconds(1);
        maxWait--;
    }

    //locationStatus = Input.location.status;
    locationStatus = iPhoneSettings.locationServiceStatus;
}
