Unable to change FOV of the Google Cardboard camera - Android

I am developing a sniper game for Android using the Google Cardboard Unity SDK. I need to tweak the camera's FOV, which led me to a variable named 'mockFieldOfView' in Cardboard.cs. Tweaking that value in the Unity editor works fine, but as soon as I make a build for Android it doesn't take effect at all. I'm unable to figure out the issue. Any idea or suggestion would be highly appreciated.
Apologies for the late reply. @ouflak, you can see the complete Cardboard.cs here: Cardboard.cs

You don't want to change "mockFieldOfView". That only affects the in-editor FOV. The value you want to change is "matchMonoFOV" on the StereoController. You also have to set a "CenterOfInterest" game object on the StereoController. It makes the stereo FOV attempt to match the FOV on the Main Camera (or whichever camera has the StereoController script).
See StereoController.cs
Update: v0.4.5 of Cardboard SDK supports your use case. Use "matchByZoom" and set the FOV you want on the StereoController's camera. No center of interest is needed.
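For reference, here is a minimal sketch of the matchByZoom approach. It assumes that matchMonoFOV and matchByZoom are the 0-to-1 float fields on StereoController.cs in v0.4.5+, and that this script sits on the mono camera that carries the StereoController; treat the exact field names as an assumption against your SDK version.
using UnityEngine;

public class SniperScopeFOV : MonoBehaviour {
    void Start() {
        var stereo = GetComponent<StereoController>();
        var cam = GetComponent<Camera>();
        cam.fieldOfView = 20f;      // the narrow "scope" FOV you actually want on device
        stereo.matchMonoFOV = 1f;   // fully match the mono camera's FOV in stereo
        stereo.matchByZoom = 1f;    // match by zooming; no CenterOfInterest needed in v0.4.5+
    }
}
Changing cam.fieldOfView at runtime (for example when the player aims down the scope) should work the same way.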

I had the same issue, and in my case it helped to move the MainCamera closer to the object, which was the cockpit of a car in my case.
In order to put the MainCamera closer than 1 real-world meter to the object, you must change the default minimum value in Cardboard.cs - I use the following setting:
private readonly Vector2 defaultComfortableViewingRange = new Vector2(0.0f, 100000.0f);

Related

Xamarin ARCore change position of 3D object

I am trying to implement ARCore with Xamarin and want to place a 3D object at a specific geolocation (like in Pokémon GO). I tried to go through this sample that I found in this forum: https://blog.xamarin.com/augmented-reality-xamarin-android-arcore/ but it seems that I can't change the position of the 3D object; it is placed according to the tap gesture only, on a detected plane.
Is there a way to place an object and track it? I did manage to do that with ARkit, but until now no success for the ARcore Android.
Any ideas would be helpful.
It looks like the Xamarin wrapper for ARCore simply wraps OpenGL. As a result, drawing the object requires setting multiple matrices (Model, View and Projection):
objectRenderer.UpdateModelMatrix(anchorMatrix, scaleFactor);
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);
If you simply move those calls out of the foreach (var planeAttachment in planeAttachments) loop, you can set the anchorMatrix (a.k.a. the modelMatrix) to a fixed/hardcoded translation, and it'll then stay fixed relative to the camera.
Here's a decent article on View matrices: https://www.3dgep.com/understanding-the-view-matrix/#The_View_Matrix
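As a rough, untested illustration of that idea: the snippet below hard-codes a model matrix 1 meter in front of the world origin. The names objectRenderer, viewMatrix, projectionMatrix and lightIntensity are assumed to come from the linked blog sample; only the two Android.Opengl.Matrix calls are added here.
// Build a model matrix by hand instead of using a plane-attachment anchor.
float[] modelMatrix = new float[16];
Android.Opengl.Matrix.SetIdentityM(modelMatrix, 0);
Android.Opengl.Matrix.TranslateM(modelMatrix, 0, 0f, 0f, -1f);   // 1 m along -Z

objectRenderer.UpdateModelMatrix(modelMatrix, 1.0f);              // scaleFactor = 1
objectRenderer.Draw(viewMatrix, projectionMatrix, lightIntensity);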
-- Begin Shameless Plug --
However, if you are open to trying new platforms, my team has built a cross-platform React-Native library for AR/VR development (Viro React): https://viromedia.com/viroreact/
If you're more familiar with SceneKit on iOS, we have built an analogous solution on Android w/ AR/VR support (ViroCore): https://viromedia.com/virocore/
Either solution would allow you to skip over the intricacies of OpenGL and simply position your objects/models with relative ease.
For example, placing your model 1 meter in front of you would be as simple as (in Viro React):
<Viro3dObject source={require("./res/model.obj")} position={[0,0,-1]} type="OBJ" />

About dlib::frontal_face_detector optimization

Hi, I'm making an app which detects face landmarks (68 points), and I'm having trouble optimizing it. I'm using the HOG method to detect faces.
In detector(cv_grayscale, face_detections, -0.2);, detector has the type dlib::frontal_face_detector&.
There are so many computations in that call that the Android CPU cannot keep up.
Has anybody solved this problem or a related issue?
bool DetectFacesHOG(vector<cv::Rect_<double> >& o_regions, const cv::Mat_<uchar>& intensity, dlib::frontal_face_detector& detector, std::vector<double>& o_confidences)
{
    double scaling = 1.3;
    cv::Mat_<uchar> upsampled_intensity;
    cv::resize(intensity, upsampled_intensity, cv::Size((int)(intensity.cols*scaling), (int)(intensity.rows*scaling)));
    dlib::cv_image<uchar> cv_grayscale(upsampled_intensity);
    std::vector<dlib::full_detection> face_detections;
    // millions of computations happen here !!!
    detector(cv_grayscale, face_detections, -0.2);
    ....
}
Download the latest OpenCV Android SDK from here.
It contains a lot of debugged samples. One of them is face detection, and it detects faces at 22 frames per second on my Xperia Z5 phone. If OpenCV throws errors because of the camera rotation, use this code; it is very clear and finds the best frame resolution for your camera view. If you also want face recognition you can download the C++ modules, but then you must use the NDK (C++), because the Android SDK does not include face.h or the other modules. You can combine detecting a face in Java with recognizing it in C++. Don't worry about speed; OpenCV optimizes that. The lbpcascade classifier XMLs for face detection perform very well, but if you want more accurate detection use a haarcascade.

Is this a Digital Compass or Unity limitation?

I'm interested in AR applications of mobile devices and naturally I would like to make better use of the compass.
The issue I've been working against isn't how twitchy the compass is (Angular Smoothing seems to solve that just fine). My main issue is that when the device is held vertically, the compass values start freaking out, causing an on-screen compass to flip about all over the place. I don't have a lot of experience with mobile application development, so I'm not sure what is causing this issue, whether it's a Unity issue or just a limitation of the digital compass. I know other apps do seem to be able to use the compass fine in any orientation, but this is all stupidly new to me.
I've definitely tried moving the phone in a figure of 8. The device I have to play around with is a Nexus 4.
using UnityEngine;
using System.Collections;

public class Compass : MonoBehaviour {

    // Use this for initialization
    void Start () {
        Input.location.Start ();
        Input.compass.enabled = true;
    }

    // Update is called once per frame
    void Update ()
    {
        var heading = Input.compass.trueHeading;
        transform.eulerAngles = new Vector3 (0, 0, heading);
    }
}
Preamble :)
First off, I'm not an expert (unfortunately) in the subjects I'm about to talk about, but I've still decided to share my thoughts.
Theory
The problem can be generalized in the following way. You want a continuous function that takes a 3D vector (the device orientation in your case) and returns another vector that is orthogonal to the original vector. Theory says (see the hairy ball theorem) that for some arguments such a function must return a zero vector. When that function is a compass, the zero vectors occur when the device is held vertically (which feels quite natural if you have ever used an ordinary compass).
Practice
Sometimes you want your app to tell which side of the world the phone's back (rear camera) is pointing to.
Or maybe you even want a combined approach:
If the phone is held flat, show what the phone's top is pointing to.
If the phone is held vertically, show what the phone's back is pointing to.
In both cases you need to use the gyroscope in addition to the compass, along the lines of the sketch below.
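A very rough Unity sketch of that combined approach follows. The 0.8 gravity threshold, the gyro axis, and the signs are my own assumptions and will likely need tuning per device; it is meant to show the structure, not a finished sensor-fusion solution.
using UnityEngine;

public class HybridCompass : MonoBehaviour {
    private float heading;   // degrees, updated from the compass or the gyro depending on orientation

    void Start() {
        Input.location.Start();          // trueHeading needs location services on Android
        Input.compass.enabled = true;
        Input.gyro.enabled = true;
    }

    void Update() {
        // Roughly flat: gravity lies mostly along the device's z axis.
        bool heldFlat = Mathf.Abs(Input.acceleration.z) > 0.8f;

        if (heldFlat) {
            // The magnetometer is reliable here, so take the heading directly.
            heading = Input.compass.trueHeading;
        } else {
            // Held upright: the raw heading goes crazy, so integrate the gyro's yaw rate instead.
            heading += Input.gyro.rotationRateUnbiased.y * Mathf.Rad2Deg * Time.deltaTime;
        }

        transform.eulerAngles = new Vector3(0, 0, heading);
    }
}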

AS3 AIR (ios and android) CameraRoll issue

I've been trying to sort out an issue for a week or so now, and have Googled to no avail. I'm currently working on an iOS/Android app that has an in-game feature to take a screenshot and have it show up in the mobile device's gallery.
I'm using the CameraRoll object, and the issue is that some objects on screen have smoothing applied, but the CameraRoll screenshot ignores it, which leaves some objects in the resulting screenshot with jaggies.
I've found a number of cries for help on the same issue while googling, but no answers.
Any help is much appreciated.
Jaggies in Flash are common since smoothing on bitmaps is disabled by default (it is more CPU intensive). I'd recommend creating a new bitmap from the CameraRoll MediaEvent.SELECT event. The event's event.data is a MediaPromise object, and inside that you should find a read-only file property where you should be able to find the image.
Then it's just a matter of creating your new image with smoothing.
var img:Bitmap = new Bitmap();
img.bitmapData = file.bitmapData;
img.smoothing = true;
addChild(img);
I've never tried this on mobile before, but it's a common issue which I believe you're encountering.
Addendum:
If you're having an issue with the system based screenshot services, you could create your own using pure AS3. The logic being, AS3 should do a pixel-by-pixel block copy of the stage (thereby respecting the smoothing values of your images).
Try this:
var myBitmapData:BitmapData = new BitmapData(stage.stageWidth, stage.stageHeight);
myBitmapData.draw(stage);
// then hand the captured bitmap to the device gallery (untested on mobile, but this is the standard AIR call):
new CameraRoll().addBitmapData(myBitmapData);

Android Camera.takePicture - Possible to disable shutter sound and preview surface?

I am working on an app that will allow a user to take quick click-and-forget snapshots. Most of the app is done except for getting the camera to work the way I would like. Right now I have the camera working, but I can't seem to find a way to disable the shutter sound, and I can't find a way to disable displaying the preview. I was able to cover the preview up with a control, but I would rather just not have it displayed at all if possible.
To sum things up, these are the items that I would like to disable while utilizing the built in Camera controls.
Shutter sound
Camera screen display
Image preview onPictureTaken
Does anyone know of a resource that could point me in the right direction? I would greatly appreciate it. I have been following CommonsWare's example from this sample fairly closely.
Thank you.
This is actually a property in the build.prop of a phone, and I'm unsure if it's possible to change it, unless you completely override it and use your own camera code, working with what is available in the SDK.
Take a look at this:
CameraService.cpp
. . .
CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid) {
    mPreviewCallbackFlag = FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mOrientationChanged = false;
    cameraService->setCameraBusy(cameraId);
    cameraService->loadSound();
    LOG1("Client::Client X (pid %d)", callingPid);
}

void CameraService::loadSound() {
    Mutex::Autolock lock(mSoundLock);
    LOG1("CameraService::loadSound ref=%d", mSoundRef);
    if (mSoundRef++) return;

    mSoundPlayer[SOUND_SHUTTER] = newMediaPlayer("/system/media/audio/ui/camera_click.ogg");
    mSoundPlayer[SOUND_RECORDING] = newMediaPlayer("/system/media/audio/ui/VideoRecord.ogg");
}
As can be noted, the click sound is started without your interaction.
This is the service used in the Gingerbread Source code.
The reason they DON'T allow this is because it is illegal in some countries. The only way to achieve what you want is to use a custom ROM.
Update
If what is being said here: http://androidforums.com/t-mobile-g1/6371-camera-shutter-sound-effect-off.html
still applies, then you could write a timer that turns off the sound (silent mode) for a couple of seconds and then turns it back on each time you take a picture.
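Something along these lines, sketched in Xamarin.Android C# for consistency with the ARCore thread above (the same AudioManager and Handler calls exist in the Java SDK). Whether silent mode actually mutes the shutter still depends on the device and region, so treat this as an illustration of the timer idea only.
using System;
using Android.App;
using Android.Content;
using Android.Media;
using Android.OS;

public static class ShutterMuter {
    // Call with a lambda that triggers Camera.TakePicture(); the ringer mode is
    // restored a couple of seconds later, after the shutter would have fired.
    public static void TakePictureSilently(Activity activity, Action takePicture) {
        var audio = (AudioManager)activity.GetSystemService(Context.AudioService);
        RingerMode previous = audio.RingerMode;
        audio.RingerMode = RingerMode.Silent;   // silence system sounds, including the shutter on many devices
        takePicture();
        new Handler().PostDelayed(() => audio.RingerMode = previous, 2000);
    }
}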
You may use the data from the preview callback and save it as a picture on some type of trigger, such as a button with an OnClickListener. You could compress the image to JPEG or PNG. This way there is no shutterCallback to implement, and therefore you can play any sound you want, or none at all, when taking a picture.
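A rough sketch of that approach, again in Xamarin.Android C# (the equivalent Java calls are the same). The OutputPath and SaveNextFrame members are hypothetical names introduced here for illustration; it assumes the default NV21 preview format.
using System.IO;
using Android.Graphics;
using Android.Hardware;

public class SnapshotFromPreview : Java.Lang.Object, Camera.IPreviewCallback {
    public string OutputPath { get; set; }     // hypothetical: where to write the JPEG
    public bool SaveNextFrame { get; set; }    // set to true from your button's click handler

    public void OnPreviewFrame(byte[] data, Camera camera) {
        if (!SaveNextFrame) return;
        SaveNextFrame = false;

        var size = camera.GetParameters().PreviewSize;
        // Preview data arrives as NV21 by default; compress it straight to JPEG.
        var yuv = new YuvImage(data, ImageFormatType.Nv21, size.Width, size.Height, null);
        using (var stream = File.OpenWrite(OutputPath)) {
            yuv.CompressToJpeg(new Rect(0, 0, size.Width, size.Height), 90, stream);
        }
    }
}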
You can effectively hide the preview surface by giving it dimensions of 1p in the xml file (I found an example that said 0p, but for some reason that was giving me errors).
It may be illegal to have a silent shutter in some places, but it doesn't appear that the US is such a place, as my HTC One gives me an option to silence it, and in fact, since Android 4.2 you can do this:
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);   // fill in the info for the camera you opened
if (info.canDisableShutterSound) {
    camera.enableShutterSound(false);
}
