I want to take a picture from the camera using Unity. Taking the picture itself is not a big deal, but I want a more accurate one by using an autofocus callback like Android's onAutoFocus(boolean success, Camera camera), so that I only take the picture once the callback reports success. Is there any way to do this in Unity, or do I need a plugin for it? If there is one, can somebody point me to it? Thanks a lot!
There is a plugin called Camera Capture Kit available on the Asset Store that seems to be able to do what you want. We used its code to make these features available on both Android and iPhone.
https://www.assetstore.unity3d.com/en/#!/content/56673
Camera Capture Kit comes with a plug-and-play camera app which enables autofocus - you can set the autofocus mode by calling:
CameraCapture.UnitySetFocusMode( webCamTextureReferance, FocusModes.Autofocus );
That corresponds to AVCaptureFocusModeAutoFocus, and you should be able to trigger a callback for the focus event yourself by adding a piece of code like this to the initCapture function in Assets/Tastybits/Native/iOS/CameraCapture.mm:
NSKeyValueObservingOptions flags = NSKeyValueObservingOptionNew; // define the observing options if not already present
[camDevice addObserver:self forKeyPath:@"adjustingFocus" options:flags context:nil];
Now, Camera Capture Kit doesn't give you a callback when focusing happens on iOS, so you will have to add it yourself and call back into Unity using UnitySendMessage:
- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object change:(NSDictionary *)change context:(void *)context {
    if( [keyPath isEqualToString:@"adjustingFocus"] ){
        BOOL adjustingFocus = [ [change objectForKey:NSKeyValueChangeNewKey] isEqualToNumber:[NSNumber numberWithInt:1] ];
        NSLog(@"Is adjusting focus? %@", adjustingFocus ? @"YES" : @"NO" );
        if(adjustingFocus)
            UnitySendMessage( "FocusController", "FocusChanged", "1" );
        else
            UnitySendMessage( "FocusController", "FocusChanged", "0" );
    }
}
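On the Unity side you also need a GameObject that receives those messages. A minimal sketch of that receiver, assuming you name the GameObject "FocusController" to match the UnitySendMessage calls above:
// Minimal Unity-side receiver for the native focus callback above.
// The GameObject this script is attached to must be named "FocusController",
// because that is the target UnitySendMessage uses.
using UnityEngine;

public class FocusController : MonoBehaviour
{
    // Called from native code with "1" while focusing and "0" when focusing has finished.
    public void FocusChanged( string adjusting )
    {
        bool isAdjusting = adjusting == "1";
        if( !isAdjusting )
        {
            Debug.Log( "Autofocus finished - safe to take the picture now." );
            // trigger your capture here
        }
    }
}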
Related
I want to run some code when a photo is captured, but there isn't an obvious way to do this. I set tapPhoto to true so that the plugin captures a picture when the user taps the screen, and it seems to be working since it triggers the camera sound. My issue is that there does not seem to be a way to run code once the capture happens, so that I can close the camera, retrieve the image data, etc. This is my code:
angular.module('myApp').factory('photoService', function(...){
    var cameraOptions = {
        camera: CameraPreview.CAMERA_DIRECTION.BACK,
        tapPhoto: true
    }
    CameraPreview.startCamera(cameraOptions);
    // onCapture isn't a real part of the plugin, but this is just the code I want to run each
    // time the user taps the phone screen
    CameraPreview.onCapture = function(imgData){
        showPhoto(imgData);
        CameraPreview.stopCamera();
    }
});
I saw online that takePicture() is basically what I want, but it looks like you have to call it once before it works on subsequent calls (https://github.com/cordova-plugin-camera-preview/cordova-plugin-camera-preview/issues/364). I don't want to do that, because my app essentially opens the camera, lets the user take a single photo, and then closes the camera, so taking twice as many pictures as necessary is not ideal. Is there another way to achieve the functionality I want?
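For reference, if takePicture() did work reliably on the first call, the flow I'm after would look roughly like this (based on the plugin's README; option names and the callback payload vary between plugin versions):
// Rough sketch only - takePicture() and its options/callback are taken from the
// cordova-plugin-camera-preview README and may differ between plugin versions.
CameraPreview.takePicture({ width: 640, height: 640, quality: 85 }, function(imgData){
    showPhoto(imgData);        // same handler as in the factory above
    CameraPreview.stopCamera();
});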
I'm working on an app where one part of the process is shooting a video, then uploading it. I'm using react-native-video to display the preview after the user has finished recording, and react-native-camera for the capturing process. I also use react-navigation to move between screens.
Currently I can get to the preview screen and set the video component's source URI from Redux. However, there is no player to be seen. The URI is in the format "file:///path/video.mp4", so apparently the file is in the app cache as intended.
First the user is presented with a camera, where s/he can capture the video.
const recordVideo = async () => {
  if (camera) {
    const data = await camera.current.recordAsync() // resolves once stopRecording() is called
    if (data) {
      dispatch(saveVideo(data)) // <-- data contains the URI
      navigation.navigate(CONFIRM)
    }
  }
}
When stopRecording() is called, the promise resolves and the video's URI is dispatched to Redux. Afterwards we navigate to the "confirmation screen", where the user can preview the video and choose whether to shoot another or keep this one.
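For context, the preview screen reads that URI back from Redux and feeds it to react-native-video roughly like this (simplified; the selector and styling are illustrative):
// Simplified sketch of the confirmation screen; the selector name is made up,
// but the URI is the "file:///path/video.mp4" string saved above.
import React from 'react'
import Video from 'react-native-video'
import { useSelector } from 'react-redux'

const ConfirmScreen = () => {
  const videoUri = useSelector(state => state.media.videoUri) // hypothetical selector
  return (
    <Video
      source={{ uri: videoUri }}
      style={{ width: '100%', height: 300 }}
      controls
      onError={e => console.log('video error', e)}
    />
  )
}

export default ConfirmScreen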
My problem is that I can't get that preview video to play at all. I think I've tried pretty much everything within my power by now, and I'm getting really tired of something so seemingly simple being so difficult. I've gotten the video to play a few times for some odd reason, so it's not the player's fault. At best I've managed to show the preview once, but when you go back and shoot another video, there's no preview anymore. Also, the "confirm" screen loads photos normally (taken in the same manner: camera -> confirm), but when it's the video's turn, it just doesn't work. The video component's onError handler gives me this: {"error": {"extra": -2147483648, "what": 1}}, which seems like gibberish.
PS. yes, I've read through every related post here without finding a proper solution.
Use ExoPlayer
Instead of using the older MediaPlayer on Android, try the more modern ExoPlayer. If you're on React Native 0.60+, you can specify this in your react-native.config.js as follows:
module.exports = {
  dependencies: {
    "react-native-video": {
      platforms: {
        android: {
          sourceDir: "../node_modules/react-native-video/android-exoplayer"
        }
      }
    }
  }
};
I was experiencing the same issue and this solution worked for us. Note that we only support Android 5+, so I'm not sure whether this will work on older devices.
https://github.com/react-native-video/react-native-video/issues/1747#issuecomment-572512595
I'm new to Android development and I would like to make a camera app. I found this library (this is the GitHub page).
But I don't know how to implement a library. I followed these steps (method 2) but I'm getting an error in a popup window called 'IDE Fatal Errors'. It says: 'To investigate / fix the problem IDE wants to attach following files to the bug report. We recommend to include all the files providing maximum information. Note: all the data you send will be kept private.' Then I can select a 'diagnostic.txt'. There is a section 'file content' where 'rootsChanged' is written. I can report the whole window to Google.
The next step is to configure the 'Fotoapparat' instance. What is an instance? When I search on Google I only find articles about creating a library.
I'm sorry if these are stupid questions, but I am a beginner and I would like to learn more about Android development. Thanks in advance for your time and help.
Add this line to your build.gradle (Module: app) file:
dependencies {
//Your other dependencies...
implementation 'io.fotoapparat:fotoapparat:2.3.3'
}
Then start using your code. The library works fine.
EDIT:
You need to learn the basics of Java.
To set up an instance of an object you need to create a variable.
Hence, in your case:
Fotoapparat yourVariableName = Fotoapparat
.with(context)
.into(cameraView) // view which will draw the camera preview
.previewScaleType(ScaleType.CenterCrop) // we want the preview to fill the view
.photoResolution(ResolutionSelectorsKt.highestResolution()) // we want to have the biggest photo possible
.lensPosition(LensPositionSelectorsKt.back()) // we want back camera
.focusMode(SelectorsKt.firstAvailable( // (optional) use the first focus mode which is supported by device
FocusModeSelectorsKt.continuousFocusPicture(),
FocusModeSelectorsKt.autoFocus(), // in case if continuous focus is not available on device, auto focus will be used
FocusModeSelectorsKt.fixed() // if even auto focus is not available - fixed focus mode will be used
))
.flash(SelectorsKt.firstAvailable( // (optional) similar to how it is done for focus mode, this time for flash
FlashSelectorsKt.autoRedEye(),
FlashSelectorsKt.autoFlash(),
FlashSelectorsKt.torch()
))
.frameProcessor(myFrameProcessor) // (optional) receives each frame from preview stream
.logger(LoggersKt.loggers( // (optional) we want to log camera events in 2 places at once
LoggersKt.logcat(), // ... in logcat
LoggersKt.fileLogger(this) // ... and to file
))
.build();
And start using yourVariableName.
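As a minimal usage sketch (method names per the Fotoapparat README): the cameraView passed to .into() is an io.fotoapparat.view.CameraView from your layout, and you typically start and stop the camera with the activity lifecycle, roughly like this:
// Minimal lifecycle sketch: start the camera when the activity becomes visible
// and stop it when the activity goes to the background.
@Override
protected void onStart() {
    super.onStart();
    yourVariableName.start();
}

@Override
protected void onStop() {
    super.onStop();
    yourVariableName.stop();
}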
I am trying to develop an Android app with the Vuforia SDK and Unity.
The app should:
detect a string and, as soon as the string is detected, play a video on a video prefab (not full screen).
However, I could not figure out where in TextEventHandler.cs the detection of text is handled...
Sorry guys, I forgot to post the code.
Below is what I found in the TextEventHandler that Vuforia provides.
I was guessing this might be the part that handles whether the text is detected or not:
// Once the text tracker has initialized and every time the video background changed,
// set the region of interest
if (mVideoBackgroundChanged)
{
    TextTracker textTracker = TrackerManager.Instance.GetTracker<TextTracker>();
    if (textTracker != null)
    {
        CalculateLoupeRegion();
        textTracker.SetRegionOfInterest(mDetectionAndTrackingRect, mDetectionAndTrackingRect);
        //v.SetActive (true);
    }
    mVideoBackgroundChanged = false;
}
Use Text Recognition instead of this approach. Visit here to see how to recognize text, and load your video when the text is recognized instead.
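A rough sketch of that idea, based on the text-recognition sample's ITextRecoEventHandler interface (interface and method names may differ between Vuforia SDK versions, and the actual video playback call is left as a placeholder):
// Sketch only: register for word-detection events and start the video prefab's
// playback when a word is recognized. Names follow the Vuforia text-recognition
// sample and may vary between SDK versions.
using UnityEngine;
using Vuforia;

public class PlayVideoOnText : MonoBehaviour, ITextRecoEventHandler
{
    void Start()
    {
        var textReco = FindObjectOfType<TextRecoBehaviour>();
        if (textReco != null)
            textReco.RegisterTextRecoEventHandler(this);
    }

    public void OnInitialized() { }

    public void OnWordDetected(WordResult wordResult)
    {
        Debug.Log("Detected word: " + wordResult.Word.StringValue);
        // start playback on your (not full screen) video prefab here
    }

    public void OnWordLost(Word word) { }
}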
I'm working with Titanium SDK 3.1.2 and deploying for iOS and Android.
You can find the code for my overlay HERE. I did this because the code is large and I wanted to keep my question clean and clear.
I'm trying to create my own overlay for the camera with the following functions:
Take a picture.
Show video camera.
Open gallery.
Close camera.
I'm able to close the camera and take a picture, but I'm unable to open the photo gallery. My galleryButton has a singletap event like this:
galleryButton.addEventListener("singletap", function(e){
openKineduPhotoGallery();
Ti.Media.hideCamera();
});
But nothing happens when I do this, and afterwards I'm not able to close the camera or take a picture. If I try to take a picture I get the following error:
Script Error {
backtrace = "#0 () at file://localhost/var/mobile/Applications/79D9256C-7782-4323-A371-1AD45B37D037/Full.app/ui/common/GenericWindow.js:1\n#1 () at file://localhost/var/mobile/Applications/79D9256C-7782-4323-A371-1AD45B37D037/Full.app/ui/common/CreateMoment.js:1";
line = 1;
message = "'null' is not an object (evaluating 'o.type')";
name = TypeError;
sourceId = 81147840;
sourceURL = "file://localhost/var/mobile/Applications/79D9256C-7782-4323-A371-1AD45B37D037/Full.app/ui/common/GenericWindow.js";
}
I can't figure out what object is turning null for this to appear.
I tried to swap the order in which I called the methods to make it look like this:
galleryButton.addEventListener("singletap", function(e){
Ti.Media.hideCamera();
openKineduPhotoGallery();
});
But that just hides the camera and doesn't show the gallery at all, plus I get the following warning in iOS:
Nov 6 18:37:20 Nenvo-iPod Full[3240] <Warning>: *** Assertion failure in -[UIWindowController transition:fromViewController:toViewController:target:didEndSelector:], /SourceCache/UIKit/UIKit-2380.17/UIWindowController.m:211
Nov 6 18:37:20 Nenvo-iPod Full[3240] <Warning>: Warning: Attempt to dismiss from view controller <UIImagePickerController: 0x1e5e17a0> while a presentation or dismiss is in progress!
I thought it was the hideCamera method's fault, so I commented it out, but that just triggers the error callback of the showCamera method and I get a JSON error object like this:
{
"type": "error",
"code": 1,
"source": [object MediaModule],
"success": false
}
I tried to stringify MediaModule but it just returned an empty object.
Is it even possible to open the gallery from a camera overlay? What are my options? I'm trying to achieve a workflow similar to how Instagram takes pictures, records video and selects a picture from the gallery.
The problem is that when you call hideCamera and then open the photo gallery, both are presented as modal windows, so you need to call the openKineduPhotoGallery function a fraction of a second later. Instead of this:
galleryButton.addEventListener("singletap", function(e){
Ti.Media.hideCamera();
openKineduPhotoGallery();
});
you should try the code below:
galleryButton.addEventListener("singletap", function(e){
Ti.Media.hideCamera();
setTimeout(function(){
openKineduPhotoGallery();
},500);
});