Currently, I'm trying to work with augmented reality on Android. For this task I'm using Unity + Vuforia.
So, I have made a scene that works: when I point my camera at a specific object, it shows my model (basically a 3D cat model with animation). I've done this following tutorials like this
text tutorial and YouTube videos like this: video tutorial.
After that, I built an Android application based on this scene, like this:
The result is an Android project that basically has one Activity and a bunch of assets and libs. The only connection with Unity that I see so far is the UnityPlayer class, but it's just a ViewGroup extended from FrameLayout:
public class UnityPlayer extends FrameLayout implements com.unity3d.player.a.a
My goal: I need to override onClick for the view from Unity that I've created (my 3D cat), something like: when you tap the cat on your phone, it makes a sound and plays an animation. I have a model in the scene, and I assumed it would logically be converted to a View class inside Android and simply be a child of UnityPlayer, but code like this:
mUnityPlayer.getChildAt(0).setOnClickListener
has no effect.
I want to either get some object that contains all the animations and other properties my Unity model has, or, if that's impossible, learn how to set onClick listeners in Unity itself.
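Ideally, what I'm after on the Unity side is something as simple as this (just a sketch of what I mean; I'm assuming the cat model has a Collider, and the class name here is only a placeholder):
using UnityEngine;

// Attached to the cat model (which needs a Collider).
// OnMouseDown also fires for single taps on mobile via Unity's mouse simulation.
public class CatClickHandler : MonoBehaviour
{
    void OnMouseDown()
    {
        // play a sound / trigger an animation here
        Debug.Log("Cat was tapped: " + gameObject.name);
    }
}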
I realize this question might be unclear, and I'm happy to explain it in more detail for anyone trying to help.
If you need more info, just ask in the comments. Thanks.
Edit: As the answer suggests, I could simply write a script for this, which I did, using a VirtualButton. It looks like this:
using UnityEngine;
using System.Collections.Generic;
using Vuforia;
public class VirtualButtonEventHandler : MonoBehaviour, IVirtualButtonEventHandler {
// Private fields to store the models
private GameObject kitten;
private GameObject btn;
/// Called when the scene is loaded
void Start() {
// Search for all Children from this ImageTarget with type VirtualButtonBehaviour
VirtualButtonBehaviour[] vbs = GetComponentsInChildren<VirtualButtonBehaviour>();
for (int i = 0; i < vbs.Length; ++i) {
// Register with the virtual buttons TrackableBehaviour
vbs[i].RegisterEventHandler(this);
}
// Find the models based on the names in the Hierarchy
kitten = transform.FindChild("kitten").gameObject;
btn = transform.FindChild("btn").gameObject;
kitten.SetActive(false);
btn.SetActive(true);
}
/// <summary>
/// Called when the virtual button has just been pressed:
/// </summary>
public void OnButtonPressed(VirtualButtonAbstractBehaviour vb) {
//Debug.Log(vb.VirtualButtonName);
//GUI.Label(new Rect(0, 0, 10, 5), "Hello World!");
}
/// Called when the virtual button has just been released:
public void OnButtonReleased(VirtualButtonAbstractBehaviour vb) {
}
}
As you can see, in the Start() method I want to find and hide the model called kitten, but it's not hiding.
I've attached this script to the virtual button object; here is a screenshot:
Edit: My mistake, actually: for some reason I had to attach the VirtualButtonEventHandler script to the ImageTarget. It's not that easy for me to understand, but I think I see some logic behind it now.
But, for some unknown reason, if I add this code:
public void OnButtonPressed(VirtualButtonAbstractBehaviour vb) {
    //Debug.Log(vb.VirtualButtonName);
    switch (vb.VirtualButtonName) {
        case "btn":
            kitten.SetActive(true);
            break;
    }
}
It fires instantly, even without touching the button.
Final edit: This was happening because I had added my button to the database .xml; when I removed the button from it, everything worked. I'm marking the only answer as correct because it helped me.
Brother, everything is possible if we put in the work. As I understand it, here is what you want to do.
First: you need to get some basic concepts clear by reading blogs and tutorials.
As you mentioned, the object your cute white :) cat is rendered on is the "marker".
In Unity everything is a GameObject, and you can write a script to manipulate that GameObject (the cat). The script will be in either C# (Mono) or JavaScript, and for this you can use Visual Studio or MonoDevelop (bundled with Unity).
But before that, please search Google for these keywords:
a) Touch events and raycast-based control in Unity, to handle touch input (see the sketch after this list)
b) The MonoBehaviour class and its Start(), Update(), and OnGUI() methods in Unity
You can identify any GameObject by its name or tag, which you can see or change in the Inspector window.
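A minimal sketch of that idea (assuming the cat GameObject has a Collider and is tagged "Cat"; both names are placeholders):
using UnityEngine;

public class TouchRaycaster : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            // Cast a ray from the camera through the touch position
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit) && hit.collider.CompareTag("Cat"))
            {
                // The cat was tapped: play a sound, trigger an animation, etc.
                Debug.Log("Hit " + hit.collider.gameObject.name);
            }
        }
    }
}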
These are some basic things. Please follow the Vuforia developer portal to learn more:
https://developer.vuforia.com/library/
Now, coming to your question:
As I understand it, you want to do some stuff on click of your sweet cat.
It's simple. If you just want to launch an Android activity on click of the cat, there are two possible ways:
Create an Android project and import it into Unity as a library project.
OR
Create the Android activity from the Unity project with the help of a C# script. Attach this script to any GameObject in the scene.
Here I am providing an example of the second one: on click of a button it will launch an Android activity.
What you have to do is:
Replace the button GameObject with your cat GameObject, export the project as an Android project, and write an activity with the same name and package as mentioned in the C# code, which can then do whatever you want.
In my example I have explained:
How to pop up a GUI when the marker is detected, using Unity + Vuforia
How to launch an Android activity from Unity code on a specific event
How to handle events in Unity
How to keep the GUI consistent across multiple resolutions
Please study the code carefully and read the comments as well :)
using UnityEngine;
using System.Collections;
using Vuforia; //import Vuforia
using System;
public class ButtonPopup : MonoBehaviour, ITrackableEventHandler
{
float native_width= 1920f;// Native Resolution to maintain resolution on different screen size
float native_height= 1080f;
public Texture btntexture;// drag and drop any texture in inspector window
private TrackableBehaviour mTrackableBehaviour;
private bool mShowGUIButton = false;
void Start () {
mTrackableBehaviour = GetComponent<TrackableBehaviour>();
if (mTrackableBehaviour) {
mTrackableBehaviour.RegisterTrackableEventHandler(this);
}
}
public void OnTrackableStateChanged(
TrackableBehaviour.Status previousStatus,
TrackableBehaviour.Status newStatus)
{
if (newStatus == TrackableBehaviour.Status.DETECTED ||
newStatus == TrackableBehaviour.Status.TRACKED ||
newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
{
mShowGUIButton = true;// Button Shown only when marker detected same as your cat
}
else
{
mShowGUIButton = false;
}
}
void OnGUI() {
//set up scaling
float rx = Screen.width / native_width;
float ry = Screen.height / native_height;
GUI.matrix = Matrix4x4.TRS (new Vector3(0, 0, 0), Quaternion.identity, new Vector3 (rx, ry, 1));
Rect mButtonRect = new Rect(1920-215,5,210,110);
if (!btntexture) // This is the button that triggers AR and UI camera On/Off
{
Debug.LogError("Please assign a texture on the inspector");
return;
}
if (mShowGUIButton) {
// different screen position for your reference
//GUI.Box (new Rect (0,0,100,50), "Top-left");
//GUI.Box (new Rect (1920 - 100,0,100,50), "Top-right");
//GUI.Box (new Rect (0,1080- 50,100,50), "Bottom-left");
//GUI.Box (new Rect (Screen.width - 100,Screen.height - 50,100,50), "Bottom right");
// draw the GUI button
if (GUI.Button(mButtonRect, btntexture)) {
// do something on button click
OpenVideoActivity();
}
}
}
public void OpenVideoActivity()
{
var androidJC = new AndroidJavaClass("com.unity3d.player.UnityPlayer"); // Unity's player class; this name stays the same
var jo = androidJC.GetStatic<AndroidJavaObject>("currentActivity");
// Accessing the class to call a static method on it
var jc = new AndroidJavaClass("com.mobiliya.gepoc.StartVideoActivity"); // name of your Android activity; keep the package and class name in sync with your Android Studio project
// Calling a Call method to which the current activity is passed
jc.CallStatic("Call", jo);
}
}
Remember: in Unity everything is a GameObject, and you can write a script to manipulate any GameObject.
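For example, the sound + animation reaction the question asks about could be wrapped in a small helper like this (a rough sketch with placeholder names: "kitten" is assumed to be a child of the object this sits on, with an AudioSource and an Animator that has a "Meow" trigger):
using UnityEngine;

public class CatReactions : MonoBehaviour
{
    // Call this from OnButtonPressed (or any other event handler)
    public void PlayCatReaction()
    {
        GameObject kitten = transform.Find("kitten").gameObject; // child of this GameObject
        kitten.SetActive(true);                                  // make sure the model is visible
        kitten.GetComponent<AudioSource>().Play();               // play the attached sound clip
        kitten.GetComponent<Animator>().SetTrigger("Meow");      // fire an Animator trigger parameter
    }
}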
Edit: Info for Virtual Button
Virtual Buttons detect when underlying features of the target image are obscured from the camera view. You will need to place your button over an area of the image that is rich in features in order for it to reliably fire its OnButtonPressed event. To determine where these features are in your image, use the Show Features link for your image in the Target Manager.
Choose areas in the images that have dimensions of approximately 10% of the image target’s size.
Here is an example image I have simplified for you:
Register the Virtual Button:
To add a virtual button to an image target, add the VirtualButton element and its attributes to the ImageTarget element in the .xml file.
XML attributes:
Name - a unique name for the button
Rectangle - defined by the four corners of the rectangle in the target's coordinate space
Enabled - a boolean indicating whether the button should be enabled by default
Sensitivity - HIGH, MEDIUM, or LOW sensitivity to occlusion
You can find this .xml file in the StreamingAssets folder of your Unity project.
<ImageTarget size="247 173" name="wood">
<VirtualButton name="red" sensitivity="HIGH" rectangle="-108.68 -53.52 -75.75 -65.87"
enabled="true" />
<VirtualButton name="blue" sensitivity="LOW" rectangle="-45.28 -53.52 -12.35 -65.87"
enabled="true" />
<VirtualButton name="yellow" sensitivity="MEDIUM" rectangle="14.82 -53.52 47.75 -65.87"
enabled="true" />
<VirtualButton name="green" rectangle="76.57 -53.52 109.50 -65.87"
enabled="true" />
</ImageTarget>
After registering the virtual button, the code is simple:
public class Custom_VirtualButton : MonoBehaviour, IVirtualButtonEventHandler
{
// Use this for initialization
void Start () {
// here it finds any VirtualButton Attached to the ImageTarget and register it's event handler and in the
//OnButtonPressed and OnButtonReleased methods you can handle different buttons Click state
//via "vb.VirtualButtonName" variable and do some really awesome stuff with it.
VirtualButtonBehaviour[] vbs = GetComponentsInChildren<VirtualButtonBehaviour>();
foreach (VirtualButtonBehaviour item in vbs)
{
item.RegisterEventHandler(this);
}
}
// Update is called once per frame
void Update () {
}
#region VirtualButton
public void OnButtonPressed(VirtualButtonAbstractBehaviour vb)
{
Debug.Log("Helllllloooooooooo");
}
public void OnButtonReleased(VirtualButtonAbstractBehaviour vb)
{
Debug.Log("Goooooodbyeeee");
}
#endregion //VirtualButton
}
After writing this code, go to StreamingAssets/QCAR, find the .xml file associated with your ImageTarget, and do something like this:
<?xml version="1.0" encoding="UTF-8"?>
<QCARConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="qcar_config.xsd">
<Tracking>
<ImageTarget name="marker01" size="100.000000 100.000000">
<VirtualButton name="red" rectangle="-49.00 -9.80 -18.82 -40.07" enabled="true" />
</ImageTarget>
</Tracking>
</QCARConfig>
Best of luck :) Btw, the cat is so cute :)
Related
I am trying to build an Android app that uses Unity for UI rendering where the user can perform some actions from outside the scene.
What this means is: say I have an activity with a Unity scene as a subview. The rest of the screen is composed of native Android views (buttons etc.). I want to use these views to make changes to the Unity scene, such as showing a text field, adding text, hiding text, adding text effects, etc.
So far I have added the Unity scene as a subview by following this answer and it works fine. I am new to Unity and I am not sure how I can manage and change the scene from views outside it. Is this even possible? I know that if I add buttons to the scene itself when I create it in Unity, I can use those buttons to alter the scene.
But is this possible by UI components that are not a part of the scene? If yes how can I do this? I have tried to find info on this but couldn't find much. Thanks for reading.
Make a communicator between the Unity side and Android.
It's technically possible if you create the following function in Unity. I've only written down here how Unity and Android communicate, so
before using this code you need to read a brief overview of integrating Unity and Android with a native plugin; see this link for that:
https://docs.unity3d.com/Manual/PluginsForAndroid.html
Once you understand how to make a plugin for Android, look at the code below. It will help you.
GameObject Name: LoadGameSceneObject, loadScene.cs
using UnityEngine;
using UnityEngine.SceneManagement;

public class loadScene : MonoBehaviour
{
    public static void LoadScene(string sceneName)
    {
        SceneManager.LoadScene(sceneName);
    }
}
SceneLoadActivie.java
public static void SendMessageToUnity(String gameObjectName, String methodName, String message) throws Exception
{
Log.d("unity", gameObjectName + " " + methodName + " " + message);
final Class<?> player = Class.forName("com.unity3d.player.UnityPlayer");
player.getMethod("UnitySendMessage", String.class, String.class,
String.class).invoke(null, gameObjectName, methodName, message);
}
...
// call this function.
public void LoadScene()
{
SceneLoadActivie.SendMessageToUnity("LoadGameSceneObject", "LoadScene", "SomeScene");
}
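The same pattern covers the UI changes asked about above (show text, hide text, set text): expose small public methods on a named GameObject and target them from Java with UnitySendMessage. A hedged sketch with placeholder names ("UiBridge", SetText, HideText):
using UnityEngine;
using UnityEngine.UI;

// Attach to a GameObject named "UiBridge" and call from Android with, e.g.:
// UnitySendMessage("UiBridge", "SetText", "Hello") or UnitySendMessage("UiBridge", "HideText", "")
public class UiBridge : MonoBehaviour
{
    public Text targetText; // assign a UI Text in the Inspector

    public void SetText(string value) { targetText.text = value; }
    public void ShowText(string unused) { targetText.gameObject.SetActive(true); }
    public void HideText(string unused) { targetText.gameObject.SetActive(false); }
}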
I'm new to Xamarin.Forms and mobile app development, so patience & kindness is appreciated! Am building a barcode scanner app with Xamarin.Forms PCL, trying to use MVVM. The scanner is an EXTERNAL bluetooth device (so can't use ZXing).
This project has a fixed requirement to use the scanner as a keyboard-type input and for the user to be able to quickly swap out one bluetooth device for another brand (so no device-specific APIs can be used). A second requirement is for the user to never be allowed to type anything directly into the Entry control. Input should come from the scanner and only the scanner, so therefore we don't ever want the keyboard showing on the scanning page.
There are other pages that have Entry controls where the user WILL need access to the keyboard, and the scanner should be able to stay connected to bluetooth even when a non-scanning screen is displayed. Therefore, I need a reliable way to set the soft keyboard to never be displayed on the scanning page (there is only one input control on this page, and it's intended for scanner use only), but to allow the keyboard to be accessed on other pages.
When on the scanning page, we want focus to always be set on the scanner's Entry control, so when the control gets a Completed event, we do stuff with the value received, then clear out the control and re-set focus on it to prepare for the next scan.
I have been stumbling around writing custom controls and android renderers, and with setting up dependencies (preferred), both with partial success. Either way, there's a timing issue related to how soon focus is set on the control. If there's not enough of a delay before focus is set, the soft keyboard stays visible. In the code sample provided, I added a short sleep delay, which mostly works to keep the keyboard hidden. However, the keyboard still "flashes" on the screen briefly with each scan, which looks terrible. Would really prefer a solution that is less hacky and ugly.
Is there a good, simple way to remove the soft keyboard entirely for a page, while still allowing an input control to receive focus, so that a scanned barcode can be received? And/or any other suggestions that will allow me to still meet the requirements?
(PS: the scanning page does not currently use MVVM binding. Just trying to get the keyboard to go away first, then will work on binding.)
Below is one way I tried to solve it. There were others as well. NOTE: Ultimately I went with a completely different approach which I'll post as an answer.
The custom control (in PCL):
using Xamarin.Forms;
namespace MyPCL.Views
{
//See ScanEntryRenderer in the Android project.
public class ScanEntryControl : Entry
{
public ScanEntryControl() { }
}
}
The XAML page (notice InputTransparent="True" on the custom control; this is so the user cannot directly enter input on the Android device, since all input must come from the Bluetooth scanner):
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:local="clr-namespace:MyPCL.Views"
x:Class="MyPCL.Views.ScanTestPage"
Title="Scan Test Page" >
<ContentPage.Content>
<StackLayout>
<Label Text="Scanner Test" />
<local:ScanEntryControl x:Name="BarcodeEntry"
Completed="BarcodeEntryCompleted"
InputTransparent="True"/>
<Label x:Name="ResultLabel" />
</StackLayout>
</ContentPage.Content>
</ContentPage>
The code behind for the form:
using System;
using Xamarin.Forms;
using Xamarin.Forms.Xaml;
namespace MyPCL.Views
{
[XamlCompilation(XamlCompilationOptions.Compile)]
public partial class ScanTestPage : ContentPage
{
public ScanTestPage()
{
InitializeComponent();
BarcodeEntry.Focus();
}
protected override void OnAppearing()
{
base.OnAppearing();
BarcodeEntry.Focus();
}
private void BarcodeEntryCompleted(object sender, EventArgs e)
{
if (!string.IsNullOrWhiteSpace(BarcodeEntry.Text))
{
ResultLabel.Text = "You entered: " + BarcodeEntry.Text;
BarcodeEntry.Text = string.Empty;
}
BarcodeEntry.Focus();
}
}
}
The Android renderer:
using Android.Content;
using Xamarin.Forms;
using MyPCL.Views;
using MyPCL.Droid;
using Xamarin.Forms.Platform.Android;
using Android.Views.InputMethods;
[assembly: ExportRenderer(typeof(ScanEntryControl), typeof(ScanEntryRenderer))]
namespace MyPCL.Droid
{
public class ScanEntryRenderer : EntryRenderer
{
protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
{
base.OnElementChanged(e);
if (e.NewElement != null)
{
((ScanEntryControl)e.NewElement).PropertyChanging += OnPropertyChanging;
}
if (e.OldElement != null)
{
((ScanEntryControl)e.OldElement).PropertyChanging -= OnPropertyChanging;
}
// Disable the Keyboard on Focus
this.Control.ShowSoftInputOnFocus = false;
}
private void OnPropertyChanging(object sender, PropertyChangingEventArgs propertyChangingEventArgs)
{
// Check if the view is about to get Focus
if (propertyChangingEventArgs.PropertyName == VisualElement.IsFocusedProperty.PropertyName)
{
// Dismiss the Keyboard
InputMethodManager imm = (InputMethodManager)this.Context.GetSystemService(Context.InputMethodService);
imm.HideSoftInputFromWindow(this.Control.WindowToken, 0);
}
}
}
}
I have been stumbling around writing custom controls and android renderers, and with setting up dependencies (preferred), both with partial success.
You can use EditText.ShowSoftInputOnFocus to achieve it in your scanning page, then the keyboard will not appear when your entry gets the focus:
using Android.Content;
using Android.Views.InputMethods;
using Edi;
using Edi.Droid;
using Xamarin.Forms;
using Xamarin.Forms.Platform.Android;
[assembly: ExportRenderer(typeof(ScanEntryControl), typeof(ScanEntryRenderer))]
namespace Edi.Droid
{
public class ScanEntryRenderer : EntryRenderer
{
protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
{
base.OnElementChanged(e);
if (e.NewElement != null)
{
((ScanEntryControl)e.NewElement).PropertyChanging += OnPropertyChanging;
}
if (e.OldElement != null)
{
((ScanEntryControl)e.OldElement).PropertyChanging -= OnPropertyChanging;
}
// Disable the Keyboard on Focus
this.Control.ShowSoftInputOnFocus = false;
}
private void OnPropertyChanging(object sender, PropertyChangingEventArgs propertyChangingEventArgs)
{
// Check if the view is about to get Focus
if (propertyChangingEventArgs.PropertyName == VisualElement.IsFocusedProperty.PropertyName)
{
// incase if the focus was moved from another Entry
// Forcefully dismiss the Keyboard
InputMethodManager imm = (InputMethodManager)this.Context.GetSystemService(Context.InputMethodService);
imm.HideSoftInputFromWindow(this.Control.WindowToken, 0);
}
}
}
}
On other pages you can still use the regular Entry, so the keyboard will appear there.
UPDATE:
ScanEntryControl class in PCL:
using Xamarin.Forms;
namespace Edi
{
public class ScanEntryControl : Entry
{
}
}
.xaml file:
<?xml version="1.0" encoding="utf-8" ?>
<ContentPage xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
xmlns:local="clr-namespace:Edi"
x:Class="Edi.MainPage">
<ContentPage.Content>
<StackLayout>
<local:ScanEntryControl Text="ScanEntryControl"/>
<Entry Text="Entry"/>
</StackLayout>
</ContentPage.Content>
</ContentPage>
This answer does not solve the original issue directly, in the sense that it does not involve an Entry control. However, it was the only thing that worked for me, and ended up being a more elegant solution:
The Bluetooth scanner was in HID mode (Human Interface Device) by default, meaning the only way it could interact with the app was by imitating key presses, thereby necessitating an Entry (EditText) control, or similar. I switched the scanner to SPP mode (Serial Port Profile) and adapted the code from this page (see also the GitHub repo here, and for more info on HID vs SPP see this document).
The resulting code activates the scanner and then "listens" for input. When input is received, it is displayed in a Label rather than an Entry control.
There were other problems with the Entry control that I didn't mention prior: often it would add a repeat character to the front of the barcode and/or chop off one or more characters from the end. The SPP solution solved all that as well. If anyone wants the code I came up with, let me know. It will take some work to put together in a generic example, so not posting it at the moment.
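For anyone curious what the SPP approach looks like in broad strokes, here is an illustrative sketch only (not my actual code; the device name and the carriage-return terminator are assumptions): a Xamarin.Android class that connects to a bonded scanner over the standard SPP UUID and raises an event per scanned barcode.
using System;
using System.Text;
using Android.Bluetooth;
using Java.Util;

// Illustrative sketch, not production code: reads barcodes from an SPP scanner.
public class SppScannerReader
{
    static readonly UUID SppUuid = UUID.FromString("00001101-0000-1000-8000-00805F9B34FB"); // standard SPP UUID

    public event Action<string> BarcodeReceived;

    public void Listen(string deviceName) // assumes the scanner is already bonded under this name
    {
        BluetoothDevice scanner = null;
        foreach (BluetoothDevice d in BluetoothAdapter.DefaultAdapter.BondedDevices)
            if (d.Name == deviceName) scanner = d;
        if (scanner == null) return;

        using (BluetoothSocket socket = scanner.CreateRfcommSocketToServiceRecord(SppUuid))
        {
            socket.Connect(); // blocking; run this method on a background thread
            byte[] buffer = new byte[1024];
            var pending = new StringBuilder();
            int read;
            while ((read = socket.InputStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                pending.Append(Encoding.ASCII.GetString(buffer, 0, read));
                int end = pending.ToString().IndexOf('\r'); // assumes the scanner sends CR after each code
                if (end >= 0)
                {
                    BarcodeReceived?.Invoke(pending.ToString(0, end).Trim());
                    pending.Clear();
                }
            }
        }
    }
}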
I was facing the same problem. I had found one sample over in the Xamarin forums that IMHO contained the key solution:
You must override Focus() and must not call the base method. This gives you full control over the virtual keyboard. In all other solutions I have seen the virtual keyboard appears sometimes.
Of course your custom Entry needs methods to show/hide the keyboard. You would call them in your OnFocus() method. My sample control (see below) also has a bindable property that allows you to show the virtual keyboard automatically on Focus. So you may decide for every field if the keyboard should appear automatically or not.
In addition I have included another object that informs you if the virtual keyboard is currently visible and its size in case you need to size your layout accordingly.
Since this is quite a common question in several different forums I have decided to create a sample control and a small app to show the features.
In addition I wrote a detailed Readme that explains all crucial points of the implementation.
You will find it here: https://github.com/UweReisewitz/XamarinAndroidEntry
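For illustration only (this is not the code from the linked repository), the bindable switch could look roughly like this; the Android renderer then reads the flag and sets ShowSoftInputOnFocus accordingly:
using Xamarin.Forms;

// Hypothetical custom Entry: EnableKeyboard decides whether the soft keyboard
// should appear when the control receives focus (the platform renderer reads this flag).
public class KeyboardAwareEntry : Entry
{
    public static readonly BindableProperty EnableKeyboardProperty =
        BindableProperty.Create(nameof(EnableKeyboard), typeof(bool), typeof(KeyboardAwareEntry), true);

    public bool EnableKeyboard
    {
        get { return (bool)GetValue(EnableKeyboardProperty); }
        set { SetValue(EnableKeyboardProperty, value); }
    }
}
The usage snippet below (where EnableKeyboard, GetFocus, and OnEntryScanned are members of the custom control from the linked sample) shows how such a control is driven from the page: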
protected override void OnAppearing()
{
base.OnAppearing();
txtLotID.Focus();
}
private void OnLoad()
{
Init();
swScanMode.IsToggled = Global.IsScannable;
txtLotID.EnableKeyboard = !Global.IsScannable;
txtLotID.OnEntryScanned += BtnSearch_Clicked;
Device.StartTimer(TimeSpan.FromSeconds(1), () =>
{
if (txtLotID.IsReadOnly)
{
txtLotID.Text = "";
txtLotID.IsReadOnly = false;
txtLotID.GetFocus();
}
return true;
});
}
I have a game that is using GUITextures for buttons, but it isn't working on different resolutions. I was told NGUI would handle this very easily, but I already have the code written for the GUITexture buttons.
How do I do the same with NGUI buttons using the same textures? I've searched everywhere and cannot find any answers... the NGUI forums are not much help.
Your GameObject should have a collider component of some type; then in a script do the following:
// Use UIEventListener to bind your button GameObject to the handler below
// (attach this script to any object and assign the button in the Inspector).
public GameObject YOUR_BUTTON_GAMEOBJECT;

void Start()
{
    UIEventListener.Get(YOUR_BUTTON_GAMEOBJECT).onClick += YOUR_METHOD_NAME;
}

// Method signature expected by NGUI's onClick delegate
void YOUR_METHOD_NAME(GameObject go)
{
    // on-click stuff
}
Hope this helps
I have 2 MyGameScreen objects that extends cocos2d::CCLayer. I am capturing the ccTouchesMove of the first screen so that I can create the moving effect exactly like sliding between pages of iOS application screen.
My class is like so:
class MyGameScreen: public cocos2d::CCLayer {
cocos2d::CCLayer* m_pNextScreen;
}
bool MyGameScreen::init() {
m_pNextScreen = MyOtherScreen::create();
}
void MyGameScreen::ccTouchesMoved(CCSet *touches, CCEvent *event){
// it crashes here... on the setPosition... m_pNextScreen is valid pointer though I am not sure that MyOtherScreen::create() is all I need to do...
m_pNextScreen->setPosition( CCPointMake( (fMoveTo - (2*fScreenHalfWidth)), 0.0f ) );
}
EDIT: adding clear question
It crashed when I try to setPosition on m_pNextScreen...
I have no idea why it crashed as m_pNextScreen is a valid pointer and is properly initialized. Could anybody explain why?
EDIT: adding progress report
I remodelled the whole system and made a class CContainerLayer : public cocos2d::CCLayer that contains both MyGameScreen and MyOtherScreen side by side. However, this doesn't look like an efficient approach: as the app grows I may need more than 2 pages scrollable side by side, and I'd prefer to load the next page only when it is needed, rather than an entire CContainerLayer containing all the upcoming pages whether the user scrolls there or not... Do you have any better idea, or a GitHub open-source sample that does this?
Thank you very much for your input!
Use a paging-enabled scroll view. Download the files from the following link and place them in cocos2d/extension/gui/; after that, set the scroll view's enablePaging property to true along with the paging view size.
https://github.com/shauket/paging-scrollview
For Scene Transitions you can do this:
void MyGameScreen::ccTouchesMoved(CCSet *touches, CCEvent *event)
{
CCScene* MyOtherScene = CCTransitionFadeUp::create(0.2f, MyOtherScreen::scene());
CCDirector::sharedDirector()->replaceScene(MyOtherScene);
}
We are developing an Android application where the requirement is to have a single image with different navigation depending on which part of the image is clicked. E.g. a full image of a building: clicking on the different flats should show the amenities offered in that flat (built-up area, number of rooms, number of balconies, terrace (if penthouse), etc.).
We have thought of the following two possibilities:
Slice the image and put the pieces together (so that it looks like a single image) in the layout XML; once the user clicks, the ImageView id tells us which portion was clicked.
Slice the image and put the pieces together (so that it looks like a single image) in HTML with onclick events, load the HTML in a WebView, and create a JavaScript interface class that handles the clicks.
With the above two approaches it is difficult because the parts are not proper rectangles or squares.
ASP.Net has ImageMap control for this purpose, do we have anything of that sort in Android?
There is no such component to do this in Android, but you can easily create one:
import java.util.ArrayList;
import java.util.List;

import android.content.Context;
import android.graphics.Rect;
import android.view.MotionEvent;
import android.view.View;
import android.widget.ImageView;

public class ImageMap extends ImageView implements View.OnTouchListener
{
    private final List<Rect> hotspots;

    public ImageMap(Context c)
    {
        super(c);
        hotspots = new ArrayList<Rect>();
        setOnTouchListener(this);
    }

    public void addHotSpot(Rect r)
    {
        hotspots.add(r);
    }

    @Override
    public boolean onTouch(View v, MotionEvent e)
    {
        if (e.getAction() != MotionEvent.ACTION_DOWN)
            return false;
        int currentPosition = 0;
        for (Rect spot : hotspots) {
            if (spot.contains((int) e.getX(), (int) e.getY())) {
                triggerHotSpot(currentPosition);
                return true;
            }
            currentPosition++;
        }
        return false;
    }

    public void triggerHotSpot(int hotspotIndex)
    {
        // TODO: do something useful, e.g. call a listener registered for this hotspot
    }
}
Be aware that this is not a finished component; it's more of a starting point. The important thing is that you get the idea.
You could do this with a single ImageView by tapping into its onTouch event handler.
See the MotionEvent documentation for specifics on how to get coordinates of the touch event.