I wanted to know if there is a way to get Glass's current orientation/position in degrees or something similar.
With that value, I want to be able to move an object (it could be an image, which I will rotate).
Thanks.
I agree with Alex K: go through a tutorial about sensors, or look at the official documentation here.
Additionally, consider Sensor.TYPE_GRAVITY.
Regarding moving your object, refer to rotate() on this page.
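Not from the linked docs, but as a rough sketch of what reading the orientation in degrees can look like (Sensor.TYPE_ROTATION_VECTOR and the rotate-an-ImageView idea are my assumptions about your setup):

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

public class OrientationActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float[] rotationMatrix = new float[9];
        float[] orientation = new float[3];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        // Convert radians to degrees: azimuth, pitch, roll.
        float azimuth = (float) Math.toDegrees(orientation[0]);
        float pitch = (float) Math.toDegrees(orientation[1]);
        float roll = (float) Math.toDegrees(orientation[2]);
        // Use these angles to rotate your image, e.g. via ImageView.setRotation(roll).
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}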
I want to trigger a touch or tap somewhere on a widget without making the user explicitly touch the screen at that point. Is there any way to do so?
I've checked SO answers, and some recommend using "integration tests", but those can't be run on devices that aren't physically (or in some other way) connected to the laptop (couldn't find a better wording).
I also attempted a hitTest, wondering whether it actually touches or taps the screen at that point while interacting with the UI, but it seems it doesn't.
onPressed: () {
  // Attempt: hit-test the render tree at a fixed position.
  final RenderObject? rb = context.findRenderObject();
  if (rb is RenderBox) {
    final hit = BoxHitTestResult();
    // hitTest only reports what *would* be hit at that position;
    // it does not dispatch any pointer events, so nothing is tapped.
    if (rb.hitTest(hit, position: const Offset(300, 500))) {
      print(hit.path);
    }
  }
},
Also, please understand the question correctly: many of the answers I've read regarding similar queries say "you don't need it, just call the function/method". I want to "simulate" a tap at a certain specified position on the screen, maybe given as X,Y coordinates as I attempted in the code.
An honest thank you for any directions.
If anything is lacking in my question, please let me know.
Edit: (the following is an update on what I'm finding)
I went diving into the Flutter sources and found a few files:
Under flutter/src/gestures/hit_test.dart I found void handleEvent(PointerEvent event, HitTestEntry entry);. I assume it consumes a PointerEvent. Checking further,
under flutter/src/gestures/events.dart I find class PointerDownEvent extends PointerEvent, and under flutter_test/src/test_pointer.dart I find class TestPointer.
Would any of these files give any light to what I want to do? I'm checking through.
Thanks to #pskink, the following code works best for me/this situation. I've been able to make the taps as wanted. Thank you very much.
// Synthesize a tap at (100, 100): dispatch a pointer-down followed
// by a pointer-up through the widgets binding.
WidgetsBinding.instance!.handlePointerEvent(
  PointerDownEvent(pointer: 0, position: Offset(100, 100)),
);
WidgetsBinding.instance!.handlePointerEvent(
  PointerUpEvent(pointer: 0, position: Offset(100, 100)),
);
The documentation online does not cover these two actions (CAMERA_FOCUS and CAMERA_ZOOM). Could anyone tell me what parameters are acceptable for these actions? I'm assuming they would only work with the Inspire drones, which is what I'm working with. Thank you.
Current documentation:
CAMERA_FOCUS and CAMERA_ZOOM available in SDK:
The zoom waypoint action basically works the same way as the setOpticalZoomFocalLength function in DJICamera.
You will need to calculate the focal length value to pass in based on the lens you are currently using. The max and min focal length values can be found via the getOpticalZoomSpec function.
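As a hedged sketch of that (assuming the 4.x-era Android MSDK class names, which may differ in your SDK version), you could clamp a requested focal length to the lens spec before applying it:

import dji.common.camera.SettingsDefinitions;
import dji.common.error.DJIError;
import dji.common.util.CommonCallbacks;
import dji.sdk.camera.Camera;

public class ZoomHelper {
    // Read the lens spec, clamp the requested focal length to its
    // min/max range, then apply it.
    public static void setZoom(final Camera camera, final int requestedFocalLength) {
        camera.getOpticalZoomSpec(
                new CommonCallbacks.CompletionCallbackWith<SettingsDefinitions.OpticalZoomSpec>() {
            @Override
            public void onSuccess(SettingsDefinitions.OpticalZoomSpec spec) {
                int clamped = Math.max(spec.getMinFocalLength(),
                        Math.min(requestedFocalLength, spec.getMaxFocalLength()));
                camera.setOpticalZoomFocalLength(clamped,
                        new CommonCallbacks.CompletionCallback() {
                    @Override
                    public void onResult(DJIError error) {
                        // error == null means the zoom was applied.
                    }
                });
            }

            @Override
            public void onFailure(DJIError error) {
                // The lens does not support optical zoom, or the query failed.
            }
        });
    }
}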
More information on setOpticalZoomFocalLength in DJI android sdk api:
https://developer.dji.com/mobile-sdk/documentation/cn/faq/cn/api-reference/android-api/Components/Camera/DJICamera.html#djicamera_camerasettings_setopticalzoomfocallength_inline
Hope that helps.
Not really sure what you mean.
I guess you want to know how to control camera focus/zoom using the DJI MSDK? Refer to their class reference and call the corresponding function. EZ as ABC.
https://developer.dji.com/cn/mobile-sdk/documentation/faq/cn/api-reference/android-api/Components/Camera/DJICamera.html
There are a few more missing. I see the following (including the ones you mention):
CAMERA_ZOOM(7),
CAMERA_FOCUS(8),
FINE_TUNE_GIMBAL_PITCH(16),
RESET_GIMBAL_YAW(17);
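For context, attaching one of these would presumably follow the standard WaypointAction API (this is my assumption; the parameter semantics for CAMERA_ZOOM and CAMERA_FOCUS are undocumented, so the focal-length guess below is speculative):

import dji.common.mission.waypoint.Waypoint;
import dji.common.mission.waypoint.WaypointAction;
import dji.common.mission.waypoint.WaypointActionType;

public class WaypointZoomAction {
    // Attach a CAMERA_ZOOM action like any other waypoint action;
    // the int parameter might be a focal length, but that is a guess.
    public static void addZoomAction(Waypoint waypoint, int focalLength) {
        waypoint.addAction(new WaypointAction(WaypointActionType.CAMERA_ZOOM, focalLength));
    }
}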
I'll see what I can find out and get back to you.
DJI got back to me the other day; apparently these values are currently inactive. They do nothing in the current release, but it's possible that a future release will include the functionality.
No date, version, or timeline was indicated for when they might work.
For now, ignore them; they will likely be removed in the next update.
I have a newbie question. I just started learning libgdx and I'm a bit confused. I read the documentation/wiki and followed some tutorials, like the one from gamefromscratch, and I still have a question.
What's the best way to check and do something for a touch/tap event?
I'm using Scenes and Actors, and so far I've found at least 4 ways of interacting with an Actor:
1) myActor.addListener(new ClickListener(){...});
2) myActor.setTouchable(Touchable.enabled); and putting the code in the act() method
3) verifying Gdx.input.isTouched() in the render() method
4) overriding touchDown, touchUp methods
Any help with some details and suggestions when to use one over the other, or what's the difference between them would be very appreciated.
Thanks.
I've always been using the first method and I think from an OOP viewpoint, it's the "best" way to do it.
The second approach will not work. Whether you set an Actor to be touchable or not, Actor.act(float) will still be called whenever you do stage.act(float). That means you would execute your code in every frame.
Gdx.input.isTouched() will only tell you that a touch event has happened anywhere on the screen. It would not be a good idea to try to find out which actor has been hit by that touch, as they are already able to determine that themselves (Actor.hit()).
I'm not sure where you'd override touchDown and touchUp. Actors don't have these methods, so I'm assuming you are talking about a standard InputProcessor. In that case you will have the same problem as with your 3rd approach.
So adding a ClickListener to the actors you want to monitor for these kind of events is probably the best way to go.
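For reference, a minimal sketch of that first approach (myActor is just a placeholder name):

import com.badlogic.gdx.scenes.scene2d.InputEvent;
import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;

// ...

myActor.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        // x and y are in the actor's local coordinate system.
        System.out.println("Actor clicked at " + x + ", " + y);
    }
});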
I have been working on an Android project.
Then I came up with 2 questions.
Q1. How do I implement driving navigation?
My logic and some work
- I am able to draw a path between 2 addresses. My thought is to use the onLocationChanged(current) callback, then call https://maps.googleapis.com/maps/api/directions/output?parameters with the current location and the destination, and draw the returned path on the map through some method.
Upon every onLocationChanged() method call, I redraw the path on the map again.
" Is it how we would do it for navigation ? "
Q2. How do I implement voice navigation to work with Q1?
- I did some research but couldn't find anything clearly helpful. All I know is that the JSON returned from /api/directions contains direction instructions.
"Do I use the returned JSON for the voice? Or is there a better way?"
Some detailed links or examples would be very helpful.
Thanks in advance.
Here is what I know, hope it helps you out.
Regarding the first question:
After retrieving the directions and the necessary data, you have to draw the route once and only once! Yes, you have to use onLocationChanged(), but not to redraw the whole thing again. If you look at most navigation applications, they keep the main route; they don't remove the parts already passed. What you should use onLocationChanged() for is to check whether the user is off the drawn path (by maybe 100 m), in which case you re-calculate and redraw it. Redrawing the path every time the user moves is a costly operation and is better avoided.
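A rough sketch of that off-route check (the 100 m threshold and the routePoints list are assumptions; a more careful version would measure the distance to the path's segments rather than only its vertices):

import android.location.Location;
import java.util.List;

public class RouteMonitor {
    private static final float OFF_ROUTE_THRESHOLD_METERS = 100f;

    // Returns true when the user is farther than the threshold from
    // every vertex of the drawn route, i.e. a re-route is needed.
    public static boolean isOffRoute(Location current, List<Location> routePoints) {
        float best = Float.MAX_VALUE;
        for (Location p : routePoints) {
            best = Math.min(best, current.distanceTo(p));
        }
        return best > OFF_ROUTE_THRESHOLD_METERS;
    }
}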
For the second question:
As you said, the retrieved data already contains the navigation instructions, so what you have to do is create a class that maps each instruction to speech. If you look inside the legs -> steps tags, there are start and end coordinates for each sub-path, so you can use those to calculate the distance between them and, at say 200 m, speak a command telling the user how far away the left turn is, for example.
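For instance, a minimal sketch using Android's TextToSpeech (the instruction text would come from each step's html_instructions field, stripped of HTML; the class name here is just an example):

import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

public class VoiceNavigator {
    private TextToSpeech tts;

    public VoiceNavigator(Context context) {
        tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    tts.setLanguage(Locale.US);
                }
            }
        });
    }

    // Call this when the user is ~200 m from the next maneuver.
    public void speak(String instruction) {
        tts.speak(instruction, TextToSpeech.QUEUE_FLUSH, null, "nav");
    }
}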
Hope this gives you a general idea of how it works. Good luck and happy programming.
// Render a sprite into an off-screen render texture.
CCSprite texture1 = CCSprite.sprite("menu_background.png");
CCRenderTexture layerRenderTexture = CCRenderTexture.renderTexture(width, height);
layerRenderTexture.begin();
texture1.visit(CCDirector.gl); // draw the sprite into the render texture
layerRenderTexture.end();
// Attach the render texture to this node so it gets drawn.
this.addChild(layerRenderTexture);
I haven't seen any CCRenderTexture example on the Internet. When I try to use it as above, I expect to see a nice background. Instead I see black :)
What am I doing wrong?
Thanks for your help.
I haven't seen any CCRenderTexture example on the Internet.
I think you may have been looking on the wrong Internet? :)
Check out my article and Ray's article. Both come out on top when you google for CCRenderTexture. They use cocos2d-iphone, but the same principles apply.
In your particular case, I don't see you adding the layerRenderTexture as a child of the scene or another node. That would explain why you're not getting any results.