I'm trying to display a 3D stereo video in a Google VR app in a clean fashion, without showing the UI. I know about the usability guidelines, but the device running the app will always be kept inside a viewer as a sort of demo, so no touch interaction is expected.
I'm using a VrVideoView.
So far I have gotten rid of the fullscreen button, the info button, the stereo mode button, the Google Cardboard tutorial screen (the "transition view"), and the touch tracking that moves the view.
videoWidgetView.setFullscreenButtonEnabled(false);
videoWidgetView.setInfoButtonEnabled(false);
videoWidgetView.setStereoModeButtonEnabled(false);
videoWidgetView.setTransitionViewEnabled(false);
videoWidgetView.setTouchTrackingEnabled(false);
I also enabled fullscreen stereo by default.
videoWidgetView.setDisplayMode(VrWidgetView.DisplayMode.FULLSCREEN_STEREO);
But I can't remove the close button ("X") or the settings button.
I think the "X" is the fullscreenBackButton of VrWidgetView, the parent of VrVideoView, which has no method to control its visibility.
Is there a way to remove those two buttons?
Maybe subclassing and rewriting part of the widget code?
Or maybe just a little hack, putting a black overlay over those corners?
I've also tried, as suggested:
findViewById(R.id.ui_back_button).setVisibility(GONE);
or even
findViewById(com.google.vr.widgets.common.R.id.ui_back_button).setVisibility(GONE);
without success; both give:
NullPointerException: Attempt to invoke virtual method 'void android.view.View.setVisibility(int)' on a null object reference
Please check this post: Google VR Unity Divider, Settings and Back button hiding in v0.9.
VrVideoView extends VrWidgetView. There you will find a clue about how the settings button is toggled, in the updateButtonVisibility() method: vrUiLayer.setSettingsButtonEnabled(displayMode == 3).
Alternatively, try tracing the resource IDs of the buttons and call:
findViewById(R.id.ui_back_button).setVisibility(GONE);
findViewById(R.id.ui_settings_button).setVisibility(GONE);
You can also iterate over all drawable resources and try disabling them one by one:
final R.drawable drawableResources = new R.drawable();
final Class<R.drawable> c = R.drawable.class;
final Field[] fields = c.getDeclaredFields();

for (int i = 0, max = fields.length; i < max; i++) {
    final int resourceId;
    try {
        resourceId = fields[i].getInt(drawableResources);
    } catch (Exception e) {
        continue;
    }
    /* make use of resourceId for accessing the Drawables here */
}
The following solution works for me.
It uses reflection to get vrUiLayer, a private member of VrWidgetView, which is the parent of VrVideoView.
Field f;
try {
    f = this.videoWidgetView.getClass().getSuperclass().getDeclaredField("vrUiLayer");
    f.setAccessible(true);
    UiLayer vrLayer = (UiLayer) f.get(this.videoWidgetView);
    // here you have direct access to the UiLayer instance

    // hide the upper-right settings button
    vrLayer.setSettingsButtonEnabled(false);

    // setting the listener to null hides the "X" button
    vrLayer.setBackButtonListener(null);

    // these visibility options are frequently reset while a video plays,
    // so you have to re-apply them often (see the sketch below)

    // OR
    vrLayer.setEnabled(false); // <- just hide the whole UI layer
} catch (NoSuchFieldException e) {
    e.printStackTrace();
} catch (IllegalAccessException e) {
    e.printStackTrace();
}
Don't forget the imports:
import com.google.vr.cardboard.UiLayer;
import java.lang.reflect.Field;
Beware that vrLayer.setEnabled(false) also hides the center divider line, leaving a completely clean VR experience.
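Since the widget keeps re-applying its own button visibility while a video plays, one way to keep the UI hidden is to re-run the code above on a timer. A minimal sketch, assuming you wrap the reflection block in a helper called hideVrUi() (that name is just a placeholder, not part of the SDK):
import android.os.Handler;
import android.os.Looper;

// In the Activity that hosts videoWidgetView:
private final Handler uiHandler = new Handler(Looper.getMainLooper());
private final Runnable hideUiRunnable = new Runnable() {
    @Override
    public void run() {
        hideVrUi();                        // the reflection code shown above
        uiHandler.postDelayed(this, 1000); // re-apply roughly once a second
    }
};

// start the loop in onResume():  uiHandler.post(hideUiRunnable);
// stop it in onPause():          uiHandler.removeCallbacks(hideUiRunnable);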
Related
What does setting android:screenOrientation in the activity of the AndroidManifest.xml actually do?
If I set it, I can still change my screen orientation, so my question is: what is its purpose?
I have a Unity game in portrait, and in one section I want to enable rotation. I can do that from Unity without changing the manifest, so the attribute doesn't appear to be preventing me from changing the screen orientation. So what is it for?
Should my game be SensorPortrait, or FullSensor because I enable rotation at one point? What will the difference be?
The docs indicate that it's used for filtering purposes in the Play store, but surely it serves some other purpose?
The android:screenOrientation attribute in the manifest is intended to control the orientation of an activity.
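For reference, the attribute is declared per activity in AndroidManifest.xml; a minimal example (the activity name here is just a placeholder):
<activity
    android:name=".MainActivity"
    android:screenOrientation="sensorPortrait" />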
The reason it doesn't seem to do anything appears to be Unity. This forum post describes a similar problem and solution:
We currently do override the Android manifest with a custom one, which works well in most cases. However, I found that when I try to override the screenOrientation value, it doesn't seem to stick through the build pipeline. At some point, I think Unity overwrites the screenOrientation attribute depending on the PlayerSettings values. Unfortunately though, there doesn't seem to be a PlayerSettings configuration that allows us to use the "userPortrait" setting.
// as an Android plugin, called through Unity
public static int GetAutorotateSetting(Activity activity)
{
    int setting = 0;
    try
    {
        setting = Settings.System.getInt(activity.getContentResolver(), Settings.System.ACCELEROMETER_ROTATION);
    }
    catch (Exception e)
    {
        Log.i("Unity", "Couldn't retrieve auto rotation setting: " + e.getMessage());
    }
    return setting;
}
// on the Unity side:
public static bool AllowAutorotation()
{
    bool doAutorotation = false;
#if !UNITY_EDITOR && UNITY_ANDROID
    AndroidJavaClass unity = new AndroidJavaClass("com.unity3d.player.UnityPlayer");
    AndroidJavaObject unityActivity = unity.GetStatic<AndroidJavaObject>("currentActivity");
    using (AndroidJavaClass andClass = new AndroidJavaClass("PUT YOUR JAVA CLASS HERE"))
    {
        int allowAutorotation = andClass.CallStatic<int>("GetAutorotateSetting", unityActivity);
        if (allowAutorotation == 0)
        {
            doAutorotation = false;
        }
        else
        {
            doAutorotation = true;
        }
    }
#endif
    return doAutorotation;
}
The person who suggested this solution also put a .unitypackage file on GitHub.
I am building an app that will recognize paintings and display info about them with the help of AR.
I need to use multiple image targets, but not simultaneously: an image target should only be activated when it is detected by the AR camera.
I've tried creating many scenes with an image target in each, but I can't activate different image targets; it keeps reverting to only one image target.
This is what you can see in the menu:
Main menu
Start AR camera (this part should have many image targets, but not detect them simultaneously)
Help (how to use the app)
Exit
I'm using Vuforia to create the AR.
Thanks in advance to those who will help me.
This is the image target and its database (screenshot posted on imgur.com).
Run the multi-target scene sample. There are three targets (stone, wood, and road).
Each contains the TrackableBehaviour component.
Grab it and disable it in Start. If you do it in Awake, it will most likely be set back to active in the Awake of the component itself or by some other manager.
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class TrackerController : MonoBehaviour
{
    private IDictionary<string, TrackableBehaviour> trackers = null;

    private void Start()
    {
        this.trackers = new Dictionary<string, TrackableBehaviour>();
        // find every image target in the scene and disable it until it is needed
        var found = FindObjectsOfType<TrackableBehaviour>();
        foreach (TrackableBehaviour tb in found)
        {
            this.trackers.Add(tb.TrackableName, tb);
            tb.enabled = false;
        }
    }

    public bool SetTracker(string name, bool value)
    {
        if (string.IsNullOrEmpty(name) == true) { return false; }
        if (this.trackers.ContainsKey(name) == false) { return false; }
        this.trackers[name].enabled = value;
        return true;
    }
}
The Start method finds all TrackableBehaviour components and places them in a dictionary for easy access. SetTracker returns a boolean; you can change it to throw an exception or handle failures differently. A short usage sketch follows.
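For example, you could enable a single target by name from your own detection or menu logic. A minimal sketch (the class, method, and target names here are placeholders for your own setup, not part of Vuforia):
using UnityEngine;

// Hypothetical caller: enable one image target by name once it is needed.
public class PaintingSelector : MonoBehaviour
{
    public TrackerController trackerController; // assign in the Inspector

    public void OnPaintingDetected(string targetName)
    {
        // targetName must match the TrackableName in your Vuforia database
        if (!trackerController.SetTracker(targetName, true))
        {
            Debug.LogWarning("Unknown image target: " + targetName);
        }
    }
}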
I've created a minimal working example of an input box I'd like to develop using a QGraphicsItem. Here is the code (I figure the .h is not necessary):
TestEditor::TestEditor()
{
    text = "";
    boundingBox = QRectF(0, 0, 200, 100);
}

QRectF TestEditor::boundingRect() const {
    return boundingBox;
}

void TestEditor::paint(QPainter *painter, const QStyleOptionGraphicsItem *option, QWidget *widget) {
    painter->setBrush(QBrush(Qt::gray));
    painter->drawRect(boundingBox);
    painter->setBrush(QBrush(Qt::black));
    painter->drawText(boundingBox, text);
}

void TestEditor::keyReleaseEvent(QKeyEvent *event) {
    qDebug() << "Aca toy";
    text = text + event->text();
    update();
}
My tester application is simply adding it to a graphics view to test it:
TestEditor *editor = new TestEditor();
editor->setText("Algo de texto como para empezar");
editor->setFlag(QGraphicsItem::ItemAcceptsInputMethod,true);
editor->setFlag(QGraphicsItem::ItemIsFocusable,true);
editor->setFlag(QGraphicsItem::ItemIsSelectable,true);
ui->gvScreen->scene()->addItem(editor);
When I test this on my PC it works fine. When I compile it for Android, the problem is that the keyboard doesn't appear, so I can't try it out. How can I force the keyboard to appear?
Well, in case anyone is wondering, I've found a way to force the Android keyboard to show:
QInputMethod *keyboard = QGuiApplication::inputMethod();
keyboard->show();
I've lost the code where I used it, so I don't remember whether QGuiApplication can be called from anywhere. But if it can't, you can simply store the pointer to the keyboard in your main form/class and pass it as a parameter to any item or class that needs it.
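A minimal sketch of wiring this into the item itself, assuming TestEditor derives directly from QGraphicsItem with ItemIsFocusable and ItemAcceptsInputMethod set as in the question (the two handlers would also need declaring in the header):
#include <QGuiApplication>
#include <QInputMethod>
#include <QFocusEvent>

// Request the virtual keyboard when the item gains focus, hide it when focus is lost.
void TestEditor::focusInEvent(QFocusEvent *event)
{
    QGraphicsItem::focusInEvent(event);
    QGuiApplication::inputMethod()->show();
}

void TestEditor::focusOutEvent(QFocusEvent *event)
{
    QGuiApplication::inputMethod()->hide();
    QGraphicsItem::focusOutEvent(event);
}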
In a Xamarin Forms app, I am trying to create a custom Entry implementation that does not automatically display the soft keyboard when it is focused. The goal is to use one instance of this entry alongside other conventional entries on a page.
I am familiar with the recommended Xamarin Forms pattern for custom view rendering, and have successfully created both the Entry and its renderer, as follows:
public class BlindEntry : Entry
{
}

[assembly: ExportRenderer(typeof(BlindEntry), typeof(BlindEntryRenderer))]
public class BlindEntryRenderer : EntryRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
    {
        base.OnElementChanged(e);

        if (Control != null)
        {
            Control.FocusChange += Control_FocusChange;
        }
    }

    private void Control_FocusChange(object sender, FocusChangeEventArgs e)
    {
        if (e.HasFocus)
        {
            // What goes here?
        }
        else
        {
            // What goes here?
        }
    }
}
To show and hide the soft keyboard, I imagine one of the recommendations from this question will provide the solution, but there are many different opinions on which is the best approach. Also, even after choosing a suitable pattern, I am not clear how to access the required native Android APIs from within the above custom renderer.
For example, I know that I can obtain a reference to an InputMethodManager using the following call (from within an Activity), but it is not obvious how to reference the containing activity from inside the renderer:
var imm = GetSystemService(InputMethodService);
Thanks, in advance, for your suggestions.
Tim
Try this instead inside OnElementChanged():
Control.InputType = Android.Text.InputTypes.Null;
This will prevent the keyboard from appearing when selecting the Entry without having to check its focus.
=== Edit ===
Turns out there is actually the ShowSoftInputOnFocus property available for doing exactly this.
Control.ShowSoftInputOnFocus = false;
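If you ever do need to toggle the keyboard manually from inside the renderer (the question's point about reaching InputMethodManager), the native context is reachable through the platform control itself, so no separate Activity reference is required. A rough sketch, assuming the renderer shown above (the HideKeyboard helper is just an illustrative name):
using Android.Content;
using Android.Views.InputMethods;

// Inside BlindEntryRenderer: hide the soft keyboard for the native EditText.
private void HideKeyboard()
{
    var imm = (InputMethodManager)Control.Context.GetSystemService(Context.InputMethodService);
    imm?.HideSoftInputFromWindow(Control.WindowToken, HideSoftInputFlags.None);
}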
I am using SDL 2.0.3 along with NDK r10e. I'm attempting to make the back button switch the app to the background, so I tried to use the function SDL_MinimizeWindow(), but it does nothing! Is this a bug, or am I missing something?
Here is my code:
if (event.key.keysym.sym == SDLK_AC_BACK)
{
    SDL_MinimizeWindow(window);
    SDL_Log("window minimized!\n");
}
Everything else works fine and I get the log message when the button is pressed, but the window is not minimized.
That doesn't appear to be supported on Android (there's not really anything corresponding to minimizing a "window" on Android, unless you count finishing an Activity).
The SDL_MinimizeWindow function looks like this:
void
SDL_MinimizeWindow(SDL_Window * window)
{
    CHECK_WINDOW_MAGIC(window, );

    if (window->flags & SDL_WINDOW_MINIMIZED) {
        return;
    }

    SDL_UpdateFullscreenMode(window, SDL_FALSE);

    if (_this->MinimizeWindow) {
        _this->MinimizeWindow(_this, window);
    }
}
Where _this is an SDL_VideoDevice *, which is set to point to an SDL_VideoDevice for the appropriate platform at runtime. The Android video driver only sets up the following 3 Window-related functions:
device->CreateWindow = Android_CreateWindow;
device->SetWindowTitle = Android_SetWindowTitle;
device->DestroyWindow = Android_DestroyWindow;
Trying to perform any other operations on an SDL_Window on Android is likely to do nothing.
Some further information in the form of a couple of lines of code from SDL_androidwindow.c:
window->flags &= ~SDL_WINDOW_RESIZABLE; /* window is NEVER resizeable */
window->flags |= SDL_WINDOW_FULLSCREEN; /* window is always fullscreen */
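If the actual goal is just to send the app to the background when the back button is pressed, one workaround is to bypass the SDL window API entirely and call the Activity's moveTaskToBack() through JNI. A rough sketch (only the two SDL_Android* helpers below are SDL API; the rest is plain JNI):
#include <jni.h>
#include "SDL.h"
#include "SDL_system.h" /* SDL_AndroidGetJNIEnv(), SDL_AndroidGetActivity() */

/* Send the whole app to the background, since SDL_MinimizeWindow() is a no-op on Android. */
static void MoveAppToBackground(void)
{
    JNIEnv *env = (JNIEnv *)SDL_AndroidGetJNIEnv();
    jobject activity = (jobject)SDL_AndroidGetActivity();
    jclass clazz = (*env)->GetObjectClass(env, activity);
    jmethodID moveTaskToBack = (*env)->GetMethodID(env, clazz, "moveTaskToBack", "(Z)Z");

    (*env)->CallBooleanMethod(env, activity, moveTaskToBack, JNI_TRUE);

    /* clean up local references */
    (*env)->DeleteLocalRef(env, activity);
    (*env)->DeleteLocalRef(env, clazz);
}
You could call a function like this from the SDLK_AC_BACK handler in the question instead of SDL_MinimizeWindow().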