play video when text is recognized AR - android

I am trying to develop an Android app using the Vuforia SDK and Unity.
The app should:
detect a string and, as soon as the string is detected, play a video on a video prefab (not full screen).
However, I could not figure out where TextEventHandler.cs handles whether text has been detected...
Sorry guys, I forgot to post the code.
Below is what I found in the TextEventHandler that Vuforia provides.
I was guessing that maybe this is what handles whether the text is detected or not:
// Once the text tracker has initialized and every time the video background changed,
// set the region of interest
if (mVideoBackgroundChanged)
{
    TextTracker textTracker = TrackerManager.Instance.GetTracker<TextTracker>();
    if (textTracker != null)
    {
        CalculateLoupeRegion();
        textTracker.SetRegionOfInterest(mDetectionAndTrackingRect, mDetectionAndTrackingRect);
        //v.SetActive (true);
    }
    mVideoBackgroundChanged = false;
}

Use Text Recognition instead of this method. See the Vuforia Text Recognition documentation for how to recognize text, and load your video when the text is recognized instead.
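For illustration, here is a minimal sketch of the playback side (this is not Vuforia's sample code): a small component you call from wherever your text-recognition handler reports a detected word. The class name, the targetWord field and the OnWordDetected hook are assumptions; UnityEngine.Video.VideoPlayer is standard Unity API, but if you use Vuforia's video playback prefab instead, trigger its play method at the same point.

using UnityEngine;
using UnityEngine.Video;

// Sketch: call OnWordDetected(...) from the place in your handler where
// the text tracker reports a new word.
public class PlayVideoOnWord : MonoBehaviour
{
    public string targetWord = "hello"; // the string to react to (example value)
    public VideoPlayer videoPlayer;     // assign your video prefab's player in the Inspector

    public void OnWordDetected(string word)
    {
        if (string.Equals(word, targetWord, System.StringComparison.OrdinalIgnoreCase)
            && !videoPlayer.isPlaying)
        {
            videoPlayer.gameObject.SetActive(true); // show the (non-fullscreen) video prefab
            videoPlayer.Play();
        }
    }
}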

Related

react-native-video doesn't load videos with a dynamic uri

I'm working on an app where one part of the process is shooting a video, then uploading it. I'm using react-native-video to display the preview after the user has finished recording, and react-native-camera for the capturing process. I also use react-navigation to move between screens.
Currently I can get to the preview screen and set the video component's source uri from Redux. However, there is no player to be seen. The uri is in format "file:///path/video.mp4", so apparently it should be in the app cache as intended.
First the user is presented with a camera, where s/he can capture the video.
const recordVideo = async () => {
  if (camera) {
    const data = await camera.current.recordAsync()
    if (data) {
      dispatch(saveVideo(data)) // <-- contains the URI
      navigation.navigate(CONFIRM)
    }
  }
}
When stopRecording() is called, the promise obviously resolves and the video's URI will be dispatched to Redux. Afterwards we navigate to the "confirmation screen", where the user can preview the video and choose whether to shoot another or go with this one.
My problem is, I can't get that preview video to play at all. I think I've tried pretty much everything within my power by now and I'm getting really tired of something so seemingly simple being so overly difficult to do. I've gotten the video to play a few times for some odd reason, so it's not the player's fault. At best what I've achieved is show the preview once, but when you go back and shoot another, there's no video preview anymore. Also, the "confirm" screen loads photos normally (that were taken in the same manner: camera -> confirm), but when it's the video's turn, it just doesn't work. The video component's onError handler also gives me this: {"error": {"extra": -2147483648, "what": 1}} which seems like just gibberish.
PS. yes, I've read through every related post here without finding a proper solution.
Use ExoPlayer
Instead of using the older MediaPlayer on Android, try using the more modern ExoPlayer. If you're on React Native 0.60+, you can specify this in your react-native.config.js as follows:
module.exports = {
  dependencies: {
    "react-native-video": {
      platforms: {
        android: {
          sourceDir: "../node_modules/react-native-video/android-exoplayer"
        }
      }
    }
  }
};
I was experiencing the same issue and this solution worked for us. Note that we're only supporting Android 5+, so I'm not sure whether this will work with devices older than that.
https://github.com/react-native-video/react-native-video/issues/1747#issuecomment-572512595
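For reference, here is a minimal preview-screen sketch. The Redux selector path (state.video.uri) and the component name are illustrative; the Video props used (source, controls, resizeMode, onError) are standard react-native-video API:

import React from 'react';
import Video from 'react-native-video';
import { useSelector } from 'react-redux';

const Preview = () => {
  // e.g. "file:///path/video.mp4" saved by the recording screen
  const uri = useSelector(state => state.video.uri);
  return (
    <Video
      source={{ uri }}
      style={{ flex: 1 }}
      controls
      resizeMode="contain"
      onError={e => console.warn('video error', e)}
    />
  );
};

export default Preview;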

Android Fotoapparat library

I'm new to Android development and I would like to make a camera app. I found this library (this is the GitHub page).
But I don't know how to add a library to my project. I followed these steps (method 2), but I'm getting an error in a popup window called 'IDE Fatal Errors'. It says: 'To investigate / fix the problem IDE wants to attach following files to the bug report. We recommend to include all the files providing maximum information. Note: all the data you send will be kept private.' Then I can select a 'diagnostic.txt'. There is a section 'file content' where 'rootsChanged' is written. I can report the whole window to Google.
The next step is to configure the 'Fotoapparat' instance. What is an instance? When I search on Google I only find articles about making a library.
I'm sorry if these are stupid questions, but I am a beginner and I would like to learn more about Android development. Thanks in advance for your time and help.
Add this line in your build.gradle (Module: app) file:
dependencies {
    // Your other dependencies...
    implementation 'io.fotoapparat:fotoapparat:2.3.3'
}
Then start using your code; the library works fine.
EDIT:
You need to learn the basics of Java.
To set up an instance of an object, you need to create a variable.
Hence, in your case:
Fotoapparat yourVariableName = Fotoapparat
        .with(context)
        .into(cameraView)                                   // view which will draw the camera preview
        .previewScaleType(ScaleType.CenterCrop)             // we want the preview to fill the view
        .photoResolution(ResolutionSelectorsKt.highestResolution()) // we want to have the biggest photo possible
        .lensPosition(LensPositionSelectorsKt.back())       // we want the back camera
        .focusMode(SelectorsKt.firstAvailable(              // (optional) use the first focus mode supported by the device
                FocusModeSelectorsKt.continuousFocusPicture(),
                FocusModeSelectorsKt.autoFocus(),           // if continuous focus is not available on the device, auto focus will be used
                FocusModeSelectorsKt.fixed()                // if even auto focus is not available, fixed focus mode will be used
        ))
        .flash(SelectorsKt.firstAvailable(                  // (optional) similar to focus mode, this time for flash
                FlashSelectorsKt.autoRedEye(),
                FlashSelectorsKt.autoFlash(),
                FlashSelectorsKt.torch()
        ))
        .frameProcessor(myFrameProcessor)                   // (optional) receives each frame from the preview stream
        .logger(LoggersKt.loggers(                          // (optional) log camera events in 2 places at once
                LoggersKt.logcat(),                         // ... in logcat
                LoggersKt.fileLogger(this)                  // ... and to a file
        ))
        .build();
And start using yourVariableName.
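To go with it, a sketch of typical lifecycle usage inside your Activity, based on the library's README (verify against the version you installed; the output path is just an example):

import java.io.File;
import io.fotoapparat.result.PhotoResult;

// Start the camera when the screen becomes visible, stop it when it is not.
@Override
protected void onStart() {
    super.onStart();
    yourVariableName.start();
}

@Override
protected void onStop() {
    super.onStop();
    yourVariableName.stop();
}

// Take a photo and save it to a file.
private void takePicture() {
    PhotoResult photoResult = yourVariableName.takePicture();
    photoResult.saveToFile(new File(getExternalFilesDir("photos"), "photo.jpg"));
}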

Taking Screenshots Using Qt C++ on Android

thanks for checking my question out!
I'm currently working on a project using Qt C++, which is designed to be multi-platform. I'm a bit of a newcomer to it; I've been asked to set up the ability to take screenshots from within the menu structure, and I'm having issues with the Android version of the companion app.
As a quick overview, it's a bit of software that sends the contents of a host PC's screen to our app. I've been able to take screenshots on the Windows version just fine, using QScreen and QPixmap, like so:
overlaywindow.cpp
{
    QPixmap screenSnapData = screenGrab->currentBackground();
}
screenGrabber.cpp
{
    QScreen *screen = QGuiApplication::primaryScreen();
    return screen->grabWindow( QApplication::desktop()->winId() );
}
Unfortunately, Android seems to reject QScreen, and with most suggestions from past Google searches suggesting the now-deprecated QPixmap::grab(), I've gotten a little stuck.
What luck I have had is within the code for the menu itself, and QWidget, but that isn't without issue, of course!
QFile doubleCheckFile("/storage/emulated/0/Pictures/Testing/checking.png");
doubleCheckFile.open(QIODevice::ReadWrite);
QPixmap checkingPixmap = QWidget::grab();
checkingPixmap.save(&doubleCheckFile);
doubleCheckFile.close();
This code does take a screenshot, but only of the button strip currently implemented, and not for the whole screen. I've also taken a 'screenshot' of just a white box with the screen's dimensions by using:
QDesktopWidget dw;
QWidget *screen=dw.screen();
QPixmap checkingPixmap = screen->grab();
Would anyone know of whether there was an alternative to using QScreen to take a screenshot in Android, or whether there's a specific way to get it working as compared to Windows? Or would QWidget be the right track? Any help's greatly appreciated!
As I read in the Qt docs, in your screenGrabber.cpp, replace:
QScreen *screen = QGuiApplication::primaryScreen();
return screen->grabWindow( QApplication::desktop()->winId() );
with:
QScreen *screen = QGuiApplication::primaryScreen();
return screen->grabWindow( 0 ); // 0 is the id of the main screen
If you want to take a screenshot of your own widget, you can use the method QWidget::render (Qt Doc):
QPixmap pixmap(widget->size());
widget->render(&pixmap);
If you want to take a screenshot of another app/widget than your app, you should use the Android API...
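Putting both suggestions together, a minimal sketch of a grab function with a widget fallback (the function name is illustrative; QScreen::grabWindow and QWidget::render are standard Qt API, but what grabWindow can actually capture on Android is platform-dependent):

#include <QGuiApplication>
#include <QScreen>
#include <QPixmap>
#include <QWidget>

QPixmap grabScreenOrWidget(QWidget *widget)
{
    // Try to grab the whole screen first (0 = entire screen).
    if (QScreen *screen = QGuiApplication::primaryScreen())
        return screen->grabWindow(0);

    // Fallback: render just our own widget into a pixmap.
    QPixmap pixmap(widget->size());
    widget->render(&pixmap);
    return pixmap;
}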

Cardboard Android

Is it possible to focus on an object and open up its information, just like we do on click using the magnet, in Cardboard on Android?
Like gaze raycasting in Unity, is there an alternative for Android?
I want to do something like the demo shown in Chrome Experiments.
Finally solved!
In the treasure hunt sample, you can find the isLookingAtObject() method, which detects whether the user is looking at the object,
and another method called onNewFrame, which performs some action on each frame.
My solution to our problem is:
in the onNewFrame method I've added this snippet of code:
if (isLookingAtObject()) {
    selecting++; // selecting is an integer field initialized to zero
} else {
    selecting = 0;
}
if (selecting == 100) {
    startYourFunction(); // edit it on your own
    selecting = 0;
}
So when the user gazes at the object for 100 frames, your function is called; if the user's gaze breaks before selecting reaches 100, selecting resets to zero.
Hope that this also works for you.
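One design note: a frame count depends on the device's frame rate, so a wall-clock threshold behaves more consistently across devices. A sketch of the same dwell logic using time instead of frames (field and method names are illustrative):

private long gazeStartMs = 0;
private static final long DWELL_MS = 2000; // 2 seconds of continuous gaze

// Call this from onNewFrame(...).
private void checkGazeDwell() {
    if (isLookingAtObject()) {
        if (gazeStartMs == 0) {
            gazeStartMs = System.currentTimeMillis(); // gaze just started
        } else if (System.currentTimeMillis() - gazeStartMs >= DWELL_MS) {
            startYourFunction();
            gazeStartMs = 0; // reset so it doesn't re-trigger every frame
        }
    } else {
        gazeStartMs = 0; // gaze broken, reset the timer
    }
}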
Hope this helps. (I did a little research; fingers crossed that the links shared below directly answer your question.)
You could check GazeInputModule.cs from GoogleSamples for Cardboard-Unity on GitHub. As the documentation of that class says:
This script provides an implementation of Unity's BaseInputModule class, so that Canvas-based UI elements (uGUI) can be selected by looking at them and pulling the trigger or touching the screen. This uses the player's gaze and the magnet trigger as a raycast generator.
Please check a tutorial regarding Google Cardboard for Unity here.
Please check the Google-Samples posted on GitHub here.

Unity3D Webcam Autofocus Callback?

I want to take a picture from the camera using Unity. Taking the picture itself is not a big deal, but I want a more accurate one using an autofocus callback method like in Android (onAutoFocus(boolean success, Camera camera)), so I can take a picture when the callback returns success == true. Is there any way to do this in Unity, or do I need a plugin for that? If there is one, can somebody reference it? Thanks a lot!
There is a plugin called Camera Capture Kit available on the Asset Store that seems to be able to do what you want. We used the code to make these features available on both Android and iPhone.
https://www.assetstore.unity3d.com/en/#!/content/56673
Camera Capture Kit comes with a plug-n-play camera app which enables autofocus; you can set the autofocus mode by calling:
CameraCapture.UnitySetFocusMode( webCamTextureReferance, FocusModes.Autofocus );
That will represent AVCaptureFocusModeAutoFocus, and you should be able to trigger a callback for the focus event yourself by adding a piece of code like this in the initCapture function in the file Assets/Tastybits/Native/iOS/CameraCapture.mm:
[camDevice addObserver:self forKeyPath:@"adjustingFocus" options:flags context:nil];
Now, Camera Capture Kit doesn't give you a callback when focusing happens on iOS, so you will have to add it yourself, calling back into Unity with UnitySendMessage.
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if ([keyPath isEqualToString:@"adjustingFocus"]) {
        BOOL adjustingFocus = [[change objectForKey:NSKeyValueChangeNewKey] isEqualToNumber:[NSNumber numberWithInt:1]];
        NSLog(@"Is adjusting focus? %@", adjustingFocus ? @"YES" : @"NO");
        if (adjustingFocus)
            UnitySendMessage("FocusController", "FocusChanged", "1");
        else
            UnitySendMessage("FocusController", "FocusChanged", "0");
    }
}
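On the Unity side, a minimal receiver sketch; it assumes a GameObject in the scene named FocusController, matching the UnitySendMessage target above:

using UnityEngine;

public class FocusController : MonoBehaviour
{
    // Invoked from native code via UnitySendMessage("FocusController", "FocusChanged", ...).
    public void FocusChanged(string adjusting)
    {
        if (adjusting == "0")
        {
            // "0" means the camera just finished adjusting focus:
            // this is the moment it is safe to take the picture.
            Debug.Log("Focus settled - capture now");
        }
    }
}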
