Call a touch function using NGUI/Unity3D? - android

I have a game that uses GUITextures for buttons, but it isn't working across different resolutions. I was told NGUI would handle this very easily, but I already have the code written for the GUITexture buttons.
How do I do the same thing for NGUI buttons using the same textures? I've searched everywhere and cannot find any answers; the NGUI forums are not much help.

Your GameObject needs a collider component of some kind; then, in a script, do the following:
// Use UIEventListener to bind your GameObject to your desired method
public GameObject YOUR_BUTTON_GAMEOBJECT;

UIEventListener.Get(YOUR_BUTTON_GAMEOBJECT).onClick += YOUR_METHOD_NAME;

// Method signature
void YOUR_METHOD_NAME(GameObject gameObject) {
    // on-click stuff
}
Hope this helps

Related

WebRTC for Android : VideoRendererGUI take a screenshot in the video call

The requirement is to save a video frame during a video call. I have made a demo that takes a screenshot through GLSurfaceView's onDrawFrame method, but WebRTC has its own renderer, VideoRendererGui, and when I tried to override it I found that it cannot be overridden. The main part of the code:
vsv = (GLSurfaceView) findViewById(R.id.glviewchild_call);
vsv.setPreserveEGLContextOnPause(true);
vsv.setKeepScreenOn(true);

VideoRendererGui.setView(vsv, new Runnable() {
    @Override
    public void run() {
        init();
    }
});
If you have another way to take a screenshot, please share it with me. Thanks a lot!
If you use a SurfaceViewRenderer to display the stream, you can use the solution in this answer to capture a frame during the call.
Basically, use surfaceViewRenderer.addFrameListener with a class that implements EglRenderer.FrameListener.
I will assume that the SurfaceViewRenderer of the peer you want to take a screenshot of is called remotePeerSurfaceViewRenderer, and that the button used to take the screenshot is called btnScreenshot.
So all you need to do is use the FrameListener, as "Webo80" said in the answer above. The catch is that the WebRTC implementation of FrameListener captures only the first frame available after the listener is attached to the SurfaceViewRenderer, so my approach is to attach the WebRTC FrameListener at the moment I need a screenshot and remove it once the frame is taken:
btnScreenshot.setOnClickListener((view) -> {
    remotePeerSurfaceViewRenderer.addFrameListener(new EglRenderer.FrameListener() {
        @Override
        public void onFrame(Bitmap bitmap) {
            runOnUiThread(() -> {
                /*
                do whatever you want with the bitmap, for example:
                imgView.setImageBitmap(bitmap);
                */
                remotePeerSurfaceViewRenderer.removeFrameListener(this);
            });
        }
    }, 1);
});
Important notes:
1. Please don't forget to use runOnUiThread as I am doing.
2. Please don't forget to remove the listener inside the button's onClick() method.
I have tried this solution and it works more than fine. If you want to make a custom solution, you have to implement the EglRenderer.FrameListener interface yourself.
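The attach-once, capture, detach pattern above can be sketched without the Android or WebRTC dependencies. This is a plain-Java sketch, not WebRTC's API: FrameListener and FakeRenderer here are simplified stand-ins (the real types are EglRenderer.FrameListener and SurfaceViewRenderer), and the frame is a String rather than a Bitmap.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for EglRenderer.FrameListener (hypothetical, for illustration).
interface FrameListener {
    void onFrame(String bitmap); // the real listener receives an android.graphics.Bitmap
}

// Simplified stand-in for the renderer holding frame listeners.
class FakeRenderer {
    private final List<FrameListener> listeners = new ArrayList<>();

    void addFrameListener(FrameListener l) { listeners.add(l); }
    void removeFrameListener(FrameListener l) { listeners.remove(l); }

    // Deliver one rendered frame to a copy of the listener list, so a listener
    // may remove itself from inside onFrame without breaking iteration.
    void renderFrame(String bitmap) {
        for (FrameListener l : new ArrayList<>(listeners)) {
            l.onFrame(bitmap);
        }
    }
}

public class OneShotCapture {
    static String captured;

    // Attach a listener that detaches itself after the first frame, mirroring
    // the "add on click, remove inside onFrame" approach from the answer above.
    public static void takeScreenshot(FakeRenderer renderer) {
        renderer.addFrameListener(new FrameListener() {
            @Override
            public void onFrame(String bitmap) {
                captured = bitmap;                  // e.g. imgView.setImageBitmap(bitmap)
                renderer.removeFrameListener(this); // one-shot: detach immediately
            }
        });
    }

    public static void main(String[] args) {
        FakeRenderer renderer = new FakeRenderer();
        takeScreenshot(renderer);
        renderer.renderFrame("frame-1"); // captured by the one-shot listener
        renderer.renderFrame("frame-2"); // ignored: listener already removed
        System.out.println(captured);    // prints "frame-1"
    }
}
```

Only the first frame after the click is kept; later frames are ignored because the listener removed itself, which is exactly why the answer stresses removing the listener inside onFrame.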
I am sorry to say that, due to the unavailability of a canvas and other facilities on Android, it's not possible to capture a screenshot programmatically using WebRTC. One can dodge this situation by animating the app's UI, capturing the screenshot manually, storing it at a configured location, and exchanging it with the other party.

Get model object from ImageTarget Android

Currently, I'm trying to work with augmented reality on Android. For this task I'm using Unity + Vuforia.
I have made a scene that works: when I point my camera at a specific object, it shows my model (basically a 3D cat model with an animation). I've done this according to tutorials like this
text-format tutorial and videos on YouTube like this video tutorial.
After this I made an Android application based on this scene, like this:
The result is an Android project that basically has one Activity and a bunch of assets and libs. The only connection with Unity that I see so far is the UnityPlayer class, but it's just a ViewGroup extended from FrameLayout:
public class UnityPlayer extends FrameLayout implements com.unity3d.player.a.a
My goal: I need to override onClick on the view from Unity that I've created (my 3D cat), so that when you click on the cat on your phone, it makes a sound and plays an animation. I have a model in the scene; logically, I assumed it had been converted to a View class inside Android as a child of UnityPlayer, but code like this:
mUnityPlayer.getChildAt(0).setOnClickListener
has no effect.
I want to either get some object that contains all the animations and other properties the model has in Unity, or, if that's impossible, learn how to set onClick listeners in Unity itself.
I realize that this question might be unclear, and I would like to explain it in more detail for those who try to help.
If you need more info, just ask for it in the comments. Thanks
Edit: As the answer suggests, I could simply write a script for this, which I did, using a VirtualButton. It looks like this:
using UnityEngine;
using System.Collections.Generic;
using Vuforia;

public class VirtualButtonEventHandler : MonoBehaviour, IVirtualButtonEventHandler {
    // Private fields to store the models
    private GameObject kitten;
    private GameObject btn;

    /// Called when the scene is loaded
    void Start() {
        // Search all children of this ImageTarget for components of type VirtualButtonBehaviour
        VirtualButtonBehaviour[] vbs = GetComponentsInChildren<VirtualButtonBehaviour>();
        for (int i = 0; i < vbs.Length; ++i) {
            // Register with the virtual button's TrackableBehaviour
            vbs[i].RegisterEventHandler(this);
        }
        // Find the models based on their names in the Hierarchy
        kitten = transform.FindChild("kitten").gameObject;
        btn = transform.FindChild("btn").gameObject;
        kitten.SetActive(false);
        btn.SetActive(true);
    }

    /// <summary>
    /// Called when the virtual button has just been pressed:
    /// </summary>
    public void OnButtonPressed(VirtualButtonAbstractBehaviour vb) {
        //Debug.Log(vb.VirtualButtonName);
        //GUI.Label(new Rect(0, 0, 10, 5), "Hello World!");
    }

    /// Called when the virtual button has just been released:
    public void OnButtonReleased(VirtualButtonAbstractBehaviour vb) {
    }
}
As you can see, in the Start() method I want to find and hide the model called kitten, but it's not hiding.
I've attached this script to the virtual button object; I will provide a screenshot:
Edit: My mistake, actually. For some reason I had to attach the VirtualButtonEventHandler script to the ImageTarget. It's not that simple for me to understand, but I think I see some logic behind it now.
But, for some unknown reason, if I add this code:
public void OnButtonPressed(VirtualButtonAbstractBehaviour vb) {
    //Debug.Log(vb.VirtualButtonName);
    switch (vb.VirtualButtonName) {
        case "btn":
            kitten.SetActive(true);
            break;
    }
}
it fires instantly, even without touching the button.
Final edit: This was happening because I had added my button to the database .xml; when I removed the button from it, everything worked. I'm marking the only answer as correct because it helped me.
Brother, everything is possible if we work at it. As I understand it, what you want to do is:
First: you need to clear up some basic concepts by reading blogs and tutorials.
The object on which your white cute :) cat renders is the "marker".
In Unity everything is a GameObject, and you can write a script to manipulate that GameObject (the cat). The script will be in either C# (Mono) or JavaScript; for this work you can use Visual Studio or MonoDevelop, which ships with Unity.
But before that, please search Google for these keywords:
a) Touch events and raycasting in Unity: to handle touch
b) The MonoBehaviour class and the Start(), Update(), and OnGUI() methods in Unity
You can identify any GameObject by its name or tag, which you can see or change in the Inspector window.
These are some basic things. Please follow the Vuforia developer portal to learn more:
https://developer.vuforia.com/library/
Now, coming to your question:
As I understand it, you want to do some stuff on click of your sweet cat.
It's simple. If you just want to launch an Android activity on click of the cat, there are two possible ways:
Create an Android project and import it into Unity as a library project.
OR
Create an Android activity from the Unity project with the help of a C# script, and attach this script to any GameObject in the scene.
Here I am providing an example of the second one: on click of a button, it launches an Android activity.
What you have to do is: replace the button GameObject with the cat GameObject, export the project as Android, and write an activity with the same name and package as mentioned in the C# code to do whatever you want.
In my example I have explained:
How to pop up a GUI when the marker is detected, using Unity + Vuforia
How to launch an Android activity from Unity code on a specific event
How to handle events in Unity
How to keep the GUI the same across multiple resolutions
Please study the code carefully and read the comments too :)
using UnityEngine;
using System.Collections;
using Vuforia; // import Vuforia
using System;

public class ButtonPopup : MonoBehaviour, ITrackableEventHandler
{
    float native_width = 1920f; // native resolution, to keep the layout the same on different screen sizes
    float native_height = 1080f;
    public Texture btntexture; // drag and drop any texture in the Inspector window

    private TrackableBehaviour mTrackableBehaviour;
    private bool mShowGUIButton = false;

    void Start () {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour) {
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
        }
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
        {
            mShowGUIButton = true; // button shown only when the marker is detected, same as your cat
        }
        else
        {
            mShowGUIButton = false;
        }
    }

    void OnGUI() {
        // set up scaling
        float rx = Screen.width / native_width;
        float ry = Screen.height / native_height;
        GUI.matrix = Matrix4x4.TRS(new Vector3(0, 0, 0), Quaternion.identity, new Vector3(rx, ry, 1));

        Rect mButtonRect = new Rect(1920 - 215, 5, 210, 110);
        if (!btntexture) // this is the button that triggers the AR and UI camera on/off
        {
            Debug.LogError("Please assign a texture in the Inspector");
            return;
        }

        if (mShowGUIButton) {
            // different screen positions for your reference
            //GUI.Box (new Rect (0, 0, 100, 50), "Top-left");
            //GUI.Box (new Rect (1920 - 100, 0, 100, 50), "Top-right");
            //GUI.Box (new Rect (0, 1080 - 50, 100, 50), "Bottom-left");
            //GUI.Box (new Rect (Screen.width - 100, Screen.height - 50, 100, 50), "Bottom right");

            // draw the GUI button
            if (GUI.Button(mButtonRect, btntexture)) {
                // do something on button click
                OpenVideoActivity();
            }
        }
    }

    public void OpenVideoActivity()
    {
        var androidJC = new AndroidJavaClass("com.unity3d.player.UnityPlayer"); // keep the package name the same in Android Studio
        var jo = androidJC.GetStatic<AndroidJavaObject>("currentActivity");
        // Accessing the class to call a static method on it
        var jc = new AndroidJavaClass("com.mobiliya.gepoc.StartVideoActivity"); // name of the Android activity
        // Calling a Call method to which the current activity is passed
        jc.CallStatic("Call", jo);
    }
}
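The resolution trick in OnGUI above is just uniform scaling: every coordinate is authored against a 1920x1080 layout and multiplied by the actual-screen scale factors rx and ry. A minimal plain-Java sketch of that arithmetic (the 1280x720 screen size is a made-up example, and scaleRect is an illustrative helper, not a Unity API):

```java
public class GuiScale {
    // Scale factors mapping a layout authored at 1920x1080 onto the real screen,
    // as the GUI.matrix assignment does in the OnGUI method above.
    static float[] scaleRect(float screenW, float screenH,
                             float x, float y, float w, float h) {
        float rx = screenW / 1920f;
        float ry = screenH / 1080f;
        return new float[] { x * rx, y * ry, w * rx, h * ry };
    }

    public static void main(String[] args) {
        // The button Rect(1920 - 215, 5, 210, 110) mapped onto a 1280x720 screen:
        float[] r = scaleRect(1280f, 720f, 1920f - 215f, 5f, 210f, 110f);
        System.out.printf(java.util.Locale.US, "%.1f %.1f %.1f %.1f%n",
                          r[0], r[1], r[2], r[3]); // prints "1136.7 3.3 140.0 73.3"
    }
}
```

The same button rectangle therefore lands at the same relative position on any screen, which is what "maintain GUI with multiple resolutions" means here.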
Remember: in Unity everything is a GameObject, and you can write scripts to manipulate any GameObject.
Edit: Info for Virtual Button
Virtual Buttons detect when underlying features of the target image are obscured from the camera view. You will need to place your button over an area of the image that is rich in features in order for it to reliably fire its OnButtonPressed event. To determine where these features are in your image, use the Show Features link for your image in the Target Manager.
Choose areas in the images that have dimensions of approximately 10% of the image target’s size.
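As a sanity check, you can compute a button rectangle's size from its corner coordinates and compare it against that 10% guideline. This plain-Java sketch uses the "red" button coordinates from the sample XML shown later in this answer; sizeOf is a hypothetical helper, not a Vuforia API:

```java
public class VirtualButtonSizing {
    // A virtual button rectangle is given as (left, top, right, bottom) in the
    // target's coordinate space. The guideline above says its sides should be
    // roughly 10% of the image target's dimensions.
    static float[] sizeOf(float left, float top, float right, float bottom) {
        return new float[] { Math.abs(right - left), Math.abs(top - bottom) };
    }

    public static void main(String[] args) {
        // The "red" button from the sample XML, on a 247x173 target:
        float[] wh = sizeOf(-108.68f, -53.52f, -75.75f, -65.87f);
        System.out.printf(java.util.Locale.US, "%.2f x %.2f%n", wh[0], wh[1]);
        // prints "32.93 x 12.35" -- about 13% x 7% of the target, near the guideline
    }
}
```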
Here is an example image I have simplified for you:
Register the Virtual Button:
To add a virtual button to an image target, add the VirtualButton element and its attributes to the ImageTarget element in the .xml file.
XML attributes:
Name - a unique name for the button
Rectangle - defined by the four corners of the rectangle in the target's coordinate space
Enabled - a boolean indicating whether the button should be enabled by default
Sensitivity - HIGH, MEDIUM, or LOW sensitivity to occlusion
You can find the .xml file in the StreamingAssets folder of the Unity project.
<ImageTarget size="247 173" name="wood">
<VirtualButton name="red" sensitivity="HIGH" rectangle="-108.68 -53.52 -75.75 -65.87"
enabled="true" />
<VirtualButton name="blue" sensitivity="LOW" rectangle="-45.28 -53.52 -12.35 -65.87"
enabled="true" />
<VirtualButton name="yellow" sensitivity="MEDIUM" rectangle="14.82 -53.52 47.75 -65.87"
enabled="true" />
<VirtualButton name="green" rectangle="76.57 -53.52 109.50 -65.87"
enabled="true" />
</ImageTarget>
After registering the virtual button, the code is simple:
public class Custom_VirtualButton : MonoBehaviour, IVirtualButtonEventHandler
{
    // Use this for initialization
    void Start () {
        // Find any VirtualButton attached to the ImageTarget and register its event handler.
        // In the OnButtonPressed and OnButtonReleased methods you can handle the click states
        // of the different buttons via the "vb.VirtualButtonName" variable and do some
        // really awesome stuff with it.
        VirtualButtonBehaviour[] vbs = GetComponentsInChildren<VirtualButtonBehaviour>();
        foreach (VirtualButtonBehaviour item in vbs)
        {
            item.RegisterEventHandler(this);
        }
    }

    // Update is called once per frame
    void Update () {
    }

    #region VirtualButton
    public void OnButtonPressed(VirtualButtonAbstractBehaviour vb)
    {
        Debug.Log("Helllllloooooooooo");
    }

    public void OnButtonReleased(VirtualButtonAbstractBehaviour vb)
    {
        Debug.Log("Goooooodbyeeee");
    }
    #endregion // VirtualButton
}
After writing this code, go to StreamingAssets/QCAR, find the XML associated with your ImageTarget, and do something like this:
<?xml version="1.0" encoding="UTF-8"?>
<QCARConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="qcar_config.xsd">
<Tracking>
<ImageTarget name="marker01" size="100.000000 100.000000">
<VirtualButton name="red" rectangle="-49.00 -9.80 -18.82 -40.07" enabled="true" />
</ImageTarget>
</Tracking>
</QCARConfig>
Best of luck :) By the way, the cat is so cute :)

Cardboard Android

Is it possible to focus on an object and open up its information, just like we do on click, using the magnet trigger in Cardboard on Android?
Like gaze raycasting in Unity, is there an alternative for Android?
I want to do something like what's shown in Chrome Experiments.
Finally solved!
In the treasure-hunt sample, you can find the isLookingAtObject() method, which detects where the user is looking, and another method called onNewFrame, which performs some action on each frame.
My solution to the problem is to add this snippet to the onNewFrame method:
if (isLookingAtObject()) {
    selecting++; // selecting is an integer field initialized to zero
} else {
    selecting = 0;
}
if (selecting == 100) {
    startYourFunction(); // edit this on your own
    selecting = 0;
}
So when the user gazes at the object for 100 frames, your function is called; if the user's gaze breaks before selecting reaches 100, selecting resets to zero.
Hope this also works for you.
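The dwell logic above is self-contained enough to test outside Cardboard. Here is a plain-Java sketch of it; GazeDwell, triggers, and the boolean parameter standing in for isLookingAtObject() are illustrative names, not Cardboard APIs:

```java
public class GazeDwell {
    // Frame-count based gaze selection, as in the answer above: each frame the
    // user looks at the object increments a counter; looking away resets it;
    // reaching the threshold (100 frames) triggers the action and resets.
    static final int THRESHOLD = 100;
    int selecting = 0;
    int triggers = 0;

    // Call once per rendered frame (the onNewFrame equivalent).
    void onNewFrame(boolean lookingAtObject) {
        if (lookingAtObject) {
            selecting++;
        } else {
            selecting = 0;
        }
        if (selecting == THRESHOLD) {
            triggers++;   // startYourFunction() would go here
            selecting = 0;
        }
    }

    public static void main(String[] args) {
        GazeDwell gaze = new GazeDwell();
        for (int i = 0; i < 99; i++) gaze.onNewFrame(true);
        gaze.onNewFrame(false);              // looked away at frame 99: counter resets
        for (int i = 0; i < 100; i++) gaze.onNewFrame(true);
        System.out.println(gaze.triggers);   // prints 1: only the unbroken gaze fires
    }
}
```

Note that a gaze broken even one frame short of the threshold fires nothing; only an unbroken 100-frame dwell triggers the action once.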
Hope this helps. (I did a little research; fingers crossed that the link shared below directly answers your question.)
You could check GazeInputModule.cs from GoogleSamples'
Cardboard-Unity on GitHub. As the documentation of that class says:
This script provides an implementation of Unity's BaseInputModule
class, so that Canvas-based UI elements (_uGUI_) can be selected by
looking at them and pulling the trigger or touching the screen.
This uses the player's gaze and the magnet trigger as a raycast
generator.
Please check a tutorial regarding Google Cardboard in Unity here.
Please check the Google-Samples posted on GitHub here.

Andengine, can't find a way to work with tilting/accelerometer

I saw this code used a lot online:
public void onAccelerometerChanged(final AccelerometerData myAccelerometerData) { }
When I try to use it, Eclipse does not recognize the AccelerometerData class.
I'm having a hard time:
Detecting tilt.
Using it to change the world's physics with Box2D.
It would help me if anyone could show me ways of detecting tilt and using it.
Thank you.
You can see this code used in the PhysicsExample, where it is used to change the center of gravity.
You must use the same code branch as the one in the example you found. Notice that I linked the GLES2-AnchorCenter branch version of PhysicsExample; this branch is the newest. There is no AccelerometerData class there: it has been renamed to AccelerationData.
Tilt (phone orientation) can be detected in a similar fashion. You have to call the following methods in your Activity and pass the correct listener.
protected boolean enableOrientationSensor(final IOrientationListener pOrientationListener) {
    return this.mEngine.enableOrientationSensor(this, pOrientationListener);
}

protected boolean enableOrientationSensor(final IOrientationListener pOrientationListener, final OrientationSensorOptions pLocationSensorOptions) {
    return this.mEngine.enableOrientationSensor(this, pOrientationListener, pLocationSensorOptions);
}
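What the PhysicsExample does with the acceleration data boils down to feeding the sensor vector into the physics world's gravity, so tilting the phone "tilts" the simulated world. A plain-Java sketch of that mapping, with no AndEngine or Box2D dependencies (gravityFrom and SENSOR_TO_WORLD are made-up names for illustration):

```java
public class TiltGravity {
    // Hypothetical scale factor between sensor units and world units.
    static final float SENSOR_TO_WORLD = 1f;

    // The device acceleration vector becomes the Box2D world's gravity,
    // PhysicsExample-style: tilt the phone, and objects "fall" sideways.
    static float[] gravityFrom(float accelX, float accelY) {
        return new float[] { accelX * SENSOR_TO_WORLD, accelY * SENSOR_TO_WORLD };
    }

    public static void main(String[] args) {
        // In AndEngine this would run inside onAccelerationChanged(AccelerationData pData),
        // followed by something like physicsWorld.setGravity(new Vector2(gx, gy)).
        float[] g = gravityFrom(2.5f, 9.0f);
        System.out.println(g[0] + " " + g[1]); // prints "2.5 9.0"
    }
}
```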

Moving between "pages" ( CCLayer ) in cocos2dx

I have two MyGameScreen objects that extend cocos2d::CCLayer. I am capturing ccTouchesMoved on the first screen so that I can create a moving effect exactly like sliding between pages of an iOS application screen.
My class is like so:
class MyGameScreen : public cocos2d::CCLayer {
    cocos2d::CCLayer* m_pNextScreen;
};

bool MyGameScreen::init() {
    m_pNextScreen = MyOtherScreen::create();
    return true;
}

void MyGameScreen::ccTouchesMoved(CCSet *touches, CCEvent *event) {
    // it crashes here, on setPosition... m_pNextScreen is a valid pointer, though I am not sure that MyOtherScreen::create() is all I need to do...
    m_pNextScreen->setPosition( CCPointMake( (fMoveTo - (2 * fScreenHalfWidth)), 0.0f ) );
}
EDIT: adding clear question
It crashes when I try to setPosition on m_pNextScreen...
I have no idea why it crashes, as m_pNextScreen is a valid pointer and is properly initialized. Could anybody explain why?
EDIT: adding a progress report
I remodelled the whole system and made a class CContainerLayer : public cocos2d::CCLayer that contains both MyGameScreen and MyOtherScreen side by side. However, this doesn't look like an efficient approach: as the app grows I may need more than two pages scrollable side by side, and I'd prefer to load the next page only when it's needed, rather than an entire CContainerLayer containing all the upcoming pages whether the user scrolls to them or not... Do you have a better idea, or a GitHub open-source sample that does this?
Thank you very much for your input!
Use a paging-enabled scroll view. Download the files from the following link and place them in cocos2d/extension/gui/; after that, set the scroll view's paging property to true along with a paging view size.
https://github.com/shauket/paging-scrollview
For Scene Transitions you can do this:
void MyGameScreen::ccTouchesMoved(CCSet *touches, CCEvent *event)
{
    CCScene* MyOtherScene = CCTransitionFadeUp::create(0.2f, MyOtherScreen::scene());
    CCDirector::sharedDirector()->replaceScene(MyOtherScene);
}
