Currently I am developing a game using AndEngine. In my game I have to create a trail effect that follows the finger, like in Fruit Ninja. I have tried rendering with BaseGameActivity and with LayoutGameActivity plus an XML layout.
But when combining OpenGL rendering with AndEngine's RendererSurfaceView it shows some delay, and a sprite in the RendererSurfaceView doesn't receive onAreaTouched().
I am confused. Could you please tell me how to create a trail effect like Fruit Ninja's in my game?
Well, I'm not sure if I totally understand, but I may be able to start you off...
In your BaseGameActivity's onCreateScene method, after you create your scene, set a touch listener. In the listener, check whether it's a move event so you can get the x and y coordinates.
scene.setOnSceneTouchListener(new IOnSceneTouchListener() {
    @Override
    public boolean onSceneTouchEvent(Scene pScene, TouchEvent pSceneTouchEvent) {
        if (pSceneTouchEvent.isActionMove()) {
            Log.d(App.LOG_TAG, "x: " + pSceneTouchEvent.getX() + " y: " + pSceneTouchEvent.getY());
            return true;
        }
        return false;
    }
});
You'll need to keep track of the coordinates and probably do a smoothing algorithm on them, then draw some graphic over that path.
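The point-tracking and smoothing could be sketched like this. This is a plain-Java illustration, not AndEngine API; the class and method names are made up for the example, and it uses a simple moving average as the smoothing step.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a touch-trail buffer: stores the most recent touch points and
// exposes a smoothed point (simple moving average) to draw the trail through.
public class TrailBuffer {
    private final Deque<float[]> points = new ArrayDeque<>();
    private final int capacity;

    public TrailBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void add(float x, float y) {
        points.addLast(new float[] {x, y});
        if (points.size() > capacity) {
            points.removeFirst(); // drop the oldest point so the trail has a fixed length
        }
    }

    // Average of the buffered points; draw your trail graphic through these.
    public float[] smoothedPoint() {
        float sx = 0, sy = 0;
        for (float[] p : points) {
            sx += p[0];
            sy += p[1];
        }
        int n = Math.max(points.size(), 1);
        return new float[] {sx / n, sy / n};
    }
}
```

You would feed `add()` from the onSceneTouchEvent move branch and read `smoothedPoint()` (or the whole buffer) when rendering the trail graphic.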
I created something similar a while ago [it wasn't intentional, but somehow I made it], but I've since lost the code, so I'll just explain the process.
To create a trail effect, all you have to do is use a particle system: attach it on touch, move it with the finger [change its location on isActionMove()], and detach it on isActionUp() [or have it fade out first so you get a nicer effect].
I used this process to create a flame effect that follows your finger; the rest is up to you and how you play with the particle emitter to make the particles appear as a slash.
I am new to Unity and I am trying to learn the basics. I study physics at school (tenth grade).
What I have done so far: added a ball to my project and gave it gravity with a Rigidbody.
I want to make the ball jump suddenly into the air when there is some touch input, as in Flappy Bird, for example.
My script is basic:
void Update()
{
    if (Input.touchCount == 1)
    {
        GetComponent<Rigidbody>().AddForce(new Vector2(0, 10), ForceMode.Impulse);
    }
}
With this script, the ball falls (gravity), and when I touch the screen its Y coordinate changes, but it happens suddenly (no animation), changes by only about 1, and the ball keeps falling (I can't keep it on screen). Also, I can make it jump only once; if I press multiple times it still jumps only once, as you can see here:
https://vid.me/aRfk
Thank you for helping.
I created the same scene in the Unity3D editor and played a little with the same setup you have. And yes, I had similar problems adding force in Update, and (to a lesser degree) in FixedUpdate. But adding force in LateUpdate seems to work okay.
Here is my code:
public Rigidbody2D rb2dBall;

void LateUpdate()
{
    if (Input.GetKeyUp(KeyCode.Space))
    {
        rb2dBall.AddForce(Vector2.up * 10f, ForceMode2D.Impulse);
    }
}
I also turned on interpolation on the Rigidbody.
I can't say why the physics behaves like this; it could be a bug in Unity3D 5.x, since it should work fine in FixedUpdate.
Firstly, when working with physics in Unity, it's highly recommended to use the FixedUpdate method instead of Update. You can check this link: http://docs.unity3d.com/ScriptReference/MonoBehaviour.FixedUpdate.html
Secondly, maybe you are not applying enough force to the ball; the force needed for a decent impulse depends on the mass of your ball's Rigidbody. So try something like this:
GetComponent<Rigidbody>().AddForce(Vector2.up * 100, ForceMode.Impulse);
Change the fixed value 100 to suit your needs.
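As a back-of-the-envelope check: ForceMode.Impulse applies a change in momentum, so the impulse needed scales linearly with mass (J = m · Δv). Unity scripts are C#, but the arithmetic is the same in any language; here is a tiny Java sketch of the sizing calculation (the desired take-off velocity is an assumption you pick yourself):

```java
// Impulse needed to give a body of mass m an instantaneous velocity change dv:
// J = m * dv (this is what ForceMode.Impulse consumes directly).
public class JumpImpulse {
    public static float impulseFor(float mass, float desiredVelocity) {
        return mass * desiredVelocity;
    }
}
```

So a ball with mass 2 that should leave the ground at 5 units/s needs an impulse of 10, which is why a fixed value like 100 may be far too much or too little depending on the Rigidbody's mass.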
I am a fresh LibGDX learner. I am trying to learn LibGDX by developing a demo game. In the game, when an army unit and an enemy are visible to each other, I want to draw a line of sight between them to show that they see each other. The line of sight should grow gradually, something like the green portion of the progress bar when transferring a file in Windows 7. I am working with Scene2D and have implemented its Screen interface.
You may want to look into a physics library and use it explicitly in your app (like Box2D or libgdx's Bullet physics wrapper). Both of these have a concept of raycasting and some form of raycast callback. This lets you pick a starting point for your line of sight and see what the ray hits or collides with.
If you don't want a physics library in your app, you could at least look at the source code of both and roll your own slimmed-down functionality to achieve your line-of-sight goals.
So you are looking for something both visual for the player and usable for calculations? (I'm not sure what you mean by the Windows 7 file-transfer comparison.)
It all depends on how accurate you want things to be, but you need some kind of raycasting, as Peter R says. There are libraries for this, but depending on what you want, it can be easy to implement yourself.
Take the vector of a unit or army and the vector of the enemy, then check for obstructions between them. You should have a float for the step distance along the line: the higher it is, the more efficient but also the less accurate the check, since it could step over a small object.
Some crude untested pseudo code for 2D:
boolean rayCast(Vector2 v1, Vector2 v2)
{
    Vector2 p = new Vector2(v1);
    Vector2 direction = new Vector2(v2).sub(v1).nor();
    float stepDistance = 0.5f;
    float totalDistance = 200;
    while (p.dst(v2) > stepDistance && p.dst(v1) < totalDistance)
    {
        p.add(direction.x * stepDistance, direction.y * stepDistance);
        if (obstructionAt(p)) // your collision check here
        {
            return false; // no line of sight
        }
    }
    return true;
}
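The same idea can be made fully self-contained. The sketch below marches along the ray in fixed steps; the obstruction test is a hypothetical circular obstacle standing in for whatever collision check your game actually uses:

```java
public class RayMarch {
    // Marches from (x1,y1) toward (x2,y2) in fixed steps; returns false as soon
    // as a step lands inside the (hypothetical) circular obstacle at (obsX,obsY).
    public static boolean hasLineOfSight(float x1, float y1, float x2, float y2,
                                         float obsX, float obsY, float obsR,
                                         float step) {
        float dx = x2 - x1, dy = y2 - y1;
        float len = (float) Math.sqrt(dx * dx + dy * dy);
        if (len == 0) return true;
        dx /= len; dy /= len;                  // unit direction
        float px = x1, py = y1;
        for (float d = 0; d < len; d += step) {
            px += dx * step;
            py += dy * step;
            float ox = px - obsX, oy = py - obsY;
            if (ox * ox + oy * oy < obsR * obsR) {
                return false;                  // obstruction blocks the sight line
            }
        }
        return true;
    }
}
```

Note the accuracy/efficiency trade-off mentioned above: with a large `step`, the march can jump clean over an obstacle smaller than the step size.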
I developed my game using AndEngine with its Box2D physics, and I continuously update the position between the two players as follows.
I have two actors, a player and an enemy. I draw the line with its end positions given by the player and enemy positions, obtained from their coordinates:
//line of sight
final Line e1line = new Line(player.getX(), player.getY(), enemy.getX(), enemy.getY());
And finally I use an update handler to continuously move the line as the player moves:
IUpdateHandler updatehand = new IUpdateHandler() {
    @Override
    public void onUpdate(float pSecondsElapsed) {
        // line of sight and notifier update
        e1line.setPosition(player.getX(), player.getY(), enemy1.getX(), enemy1.getY());
    }

    @Override
    public void reset() {}
};
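For the "gradually growing" part of the original question, one option is to interpolate the line's far endpoint from the player toward the enemy over time instead of setting it outright. A minimal plain-Java sketch (the class is illustrative; you would call `endpoint` from an update handler and feed the result to something like `Line.setPosition`):

```java
public class GrowingSightLine {
    private float progress = 0f;            // 0 = line still at player, 1 = line reaches enemy
    private final float growPerSecond;      // fraction of the full length grown per second

    public GrowingSightLine(float growPerSecond) {
        this.growPerSecond = growPerSecond;
    }

    // Returns {endX, endY} for the line this frame, given the player (px,py)
    // and enemy (ex,ey) positions and the elapsed seconds since last frame.
    public float[] endpoint(float px, float py, float ex, float ey, float secondsElapsed) {
        progress = Math.min(1f, progress + growPerSecond * secondsElapsed);
        return new float[] {px + (ex - px) * progress, py + (ey - py) * progress};
    }
}
```

With `growPerSecond = 0.5f`, the line takes two seconds to fully reach the enemy, then stays pinned there while both ends keep tracking the moving actors.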
I have an OpenGL app with two threads (not created by me; these are the standard provided threads):
the GL rendering thread
the main/UI thread
Let's say that in my rendering thread I draw a sprite like so (the methods shown are my own):
sprite.draw(); // draws the sprite at its own internal X and Y coordinates
Now, on the UI thread, I capture the current finger position (the following occurs within onTouchEvent(MotionEvent event)):
case MotionEvent.ACTION_MOVE: {
    sprite.setX(event.getX());
    sprite.setY(event.getY());
}
So, what happens is that my sprite is drawn where the finger is allowing the user to drag it around the screen.
However, clearly, the two threads aren't guaranteed to run 'alternately' like so:
UI Thread: Capture finger position and set (X:10, Y:10)
Renderer Thread: Draw at 10, 10
UI Thread: Capture finger position and set (X:22, Y:31)
Renderer Thread: Draw at 22, 31, and so on.
The above is great, but what happens when this occurs:
UI Thread: Capture finger position and set (X:10, Y:10)
Renderer Thread: Draw at 10, 10
Renderer Thread: Draw at 10, 10 << draws again at the same position even though the finger has physically moved, causing an offset between the object's on-screen position and the finger's physical position on the screen
UI Thread: Capture finger position and set (X:22, Y:31)
Renderer Thread: Draw at 22, 31, and so on.
This can seem to cause 'choppiness' in the drawing of the object.
So, what I'm asking is: is there any way to force the rendering thread to retrieve the actual current physical position of the currently-down pointer/finger before it draws? Or some other solution to the problem? (Put simply, the rendering thread should always have access to the current position, not the previous position.)
The short answer is NO!
The problem you are experiencing is not due to the two threads running in parallel. Instead, it is due to one of two things:
Your frame rate is too low.
Your movement is so fast that the choppiness becomes visible.
The solution to the first problem is to profile your app and reduce the lag; try to hit the nominal frame rate of 60fps.
The solution to the second problem, however, is a small trick. Instead of setting the sprite's position to exactly your touch point, make it move towards the touch point.
A crude implementation is like this:
case MotionEvent.ACTION_MOVE: {
    sprite.setNewX(event.getX());
    sprite.setNewY(event.getY());
}
In your Sprite class, add these:
private int mNewX, mNewY;

public void setNewX(int x) {
    mNewX = x;
}

public void setNewY(int y) {
    mNewY = y;
}
Then add these two lines to the draw method of your Sprite class:
public void draw() {
    this.x = this.x + (int) (0.5 * (mNewX - this.x));
    this.y = this.y + (int) (0.5 * (mNewY - this.y));
    // of course, you can drop the "this" keywords; they're just here for clarity

    // drawing stuff
}
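The 0.5 factor is exponential smoothing: each draw call the sprite covers half the remaining distance to the latest touch point, so it converges quickly but never snaps. A standalone sketch of the same idea (float version, class and field names invented for the example):

```java
public class SmoothedSprite {
    float x;         // drawn position
    float targetX;   // latest touch position (set from the UI thread)

    // One draw step: move halfway toward the target (exponential smoothing).
    void step() {
        x = x + 0.5f * (targetX - x);
    }
}
```

Starting at x = 0 with targetX = 100, successive steps land at 50, 75, 87.5, ...; within a handful of frames the sprite is visually on the finger, and any missed UI-thread update only shortens one of those half-steps instead of freezing the sprite.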
I am attempting to translate an object based on the user's touch position.
The problem is that when I test it, the object disappears as soon as I drag my finger on the phone screen. I am not entirely sure what's going on.
If somebody can guide me, that would be great :)
Thanks.
This is the code:
#pragma strict

function Update () {
    for (var touch : Touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world; it's a point on the touch screen. What you want is probably Camera.main.ScreenToWorldPoint(touch.position).y, which gives you the in-world position of wherever you touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply plugging the above into it still won't work as you intend.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth looking.
You'll want to ask this question at Unity's dedicated questions-and-answers site: http://answers.unity3d.com/index.html
Very few people come to Stack Overflow for Unity-specific questions unless they relate to Android/iOS-specific features.
As for the cause of your problem: touch.position.y is defined in screen space (pixels), whereas transform.Translate expects world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position and the screen point's world position. With this vector you can then either intersect some geometry in the scene or simply use it as a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html
I thought I had understood this, but something is quite wrong here. When the user (me, so far) tries to press keys, nothing really happens, and I am having a lot of trouble understanding what I've missed.
Consider this before I present some code to help clarify my problem: I am using Android's Lunar Lander example to make my first "real" Android program. In that example, of course, there exist a class LunarView, and class nested therein LunarThread. In my code the equivalents of these classes are Graphics and GraphicsThread, respectively.
Also I can make sprite animations in 2D just fine on Android. I have a Player class, and let's say GraphicsThread has a Player member referred to as "player". This class has four coordinates - x1, y1, x2, and y2 - and they define a rectangle in which the sprite is to be drawn. I've worked it out so that I can handle that perfectly. Whenever the doDraw(Canvas canvas) method is invoked, it'll just look at the values of those coordinates and draw the sprite accordingly.
Now let's say - and this isn't really what I'm trying to do with the program - I'm trying to make the program where all it does is display the Player sprite at one location of the screen UNTIL the FIRST time the user presses the Dpad's left button. Then the location will be changed to another set position on the screen, and the sprite will be drawn at that position for the rest of the program invariably.
Also note that the GraphicsThread member in Graphics is called "thread", and that the SurfaceHolder member in GraphicsThread is called "mSurfaceHolder".
So consider this method in class Graphics:
@Override
public boolean onKeyDown(int keyCode, KeyEvent msg) {
    return thread.keyDownHandler(keyCode, msg);
}
Also please consider this method in class GraphicsThread:
boolean keyDownHandler(int keyCode, KeyEvent msg) {
    synchronized (mSurfaceHolder) {
        if (keyCode == KeyEvent.KEYCODE_DPAD_LEFT) {
            player.x1 = 100;
            player.y1 = 100;
            player.x2 = 120;
            player.y2 = 150;
        }
    }
    return true;
}
Now then, assuming the player's coordinates start off as (200, 200, 220, 250), why won't he do anything different when I press Dpad Left?
Thanks!
Before worrying about actual movement and the like, I would reach for Log...
Something like:
Log.d("lunar", "keyCode = [" + String.valueOf(keyCode) + "] // msg = [" + String.valueOf(msg) + "]");
That way you get a feel for what the system is registering before worrying about what you do with the registered data. After that you can decide whether you're even sending it the right stuff, and then worry about the thread work, etc.
Hopefully that helps with diagnosis. (All of this was written freehand and may contain errors.)
Throw away LunarLander and use a real guide: Playing with graphics in Android