LibGDX Input - Number of fingers touching the screen - Android

I'd like to know how to get the total number of fingers currently touching the screen in my game.
Thank you.

If you use an InputProcessor for event-based input processing, just increment a counter at touchDown and decrement the counter at touchUp.
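A minimal sketch of that counter approach (assuming libGDX's InputAdapter; the class and field names are illustrative, not the only way to do it):

import com.badlogic.gdx.InputAdapter;

public class TouchCounter extends InputAdapter {
    private int activeTouches = 0;

    @Override
    public boolean touchDown(int screenX, int screenY, int pointer, int button) {
        activeTouches++;
        return true;
    }

    @Override
    public boolean touchUp(int screenX, int screenY, int pointer, int button) {
        activeTouches--;
        return true;
    }

    public int getActiveTouches() {
        return activeTouches;
    }
}

Register it with Gdx.input.setInputProcessor(new TouchCounter()), or add it to an InputMultiplexer if you already have another processor.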
If you're using Gdx.input for polling-based input processing, use the isTouched(int) call to test if pointer N is down. The libGDX implementation tracks at most 20 pointers. I don't think any hardware supports that many (and your game may have a lower limit, too). You'll have to check all the pointer IDs, though, as pointer id N+1 can remain active after pointer id N has left. Something like:
int activeTouch = 0;
for (int i = 0; i < 20; i++) {
    if (Gdx.input.isTouched(i)) activeTouch++;
}

I don't know the canonical way of doing it, but a quick and simple method comes to mind.
Implement InputProcessor (see the reference for InputProcessor) and keep a counter variable. Inside its touchDown method increment the counter by 1, and inside touchUp decrement it by 1. The value of the counter gives the total number of fingers currently touching the screen. Another way of doing it is via the pointer argument of InputProcessor, but I find this method simpler :)

You can try something like this:
float Ipsi = 0.5f;
if (Gdx.input.isTouched()) {
    int xTouch = Gdx.input.getX();
    int yTouch = Gdx.input.getY();
    int count = 0;
    ArrayList<Integer> lx = new ArrayList<Integer>();
    ArrayList<Integer> ly = new ArrayList<Integer>();
    lx.add(xTouch);
    ly.add(yTouch);
    // lx.size() == ly.size()
    if (lx.size() > 2) {
        for (int i = 0; i < lx.size() - 1; i++) {
            if (Math.abs(lx.get(i) - lx.get(i + 1)) < Ipsi
                    && Math.abs(ly.get(i) - ly.get(i + 1)) < Ipsi) {
                count++;
            }
        }
    }
}
I did not test the code, but I hope it works. Good luck.

Related

How to smooth out the Android compass sensor

I am working on a semi-augmented-reality app where smooth and accurate data is very important. The sensor returns values that jump around by 0 to 4 degrees, which unfortunately is making life difficult.
I have tried implementing a temporary solution:
private float[] Total = new float[11];

// counter is a field that tracks how many readings have been added to Total (updated elsewhere)
private float Average() {
    if (counter == Total.length - 1) {
        counter--;
        for (int i = 0; i < Total.length - 1; i++) {
            Total[i] = Total[i + 1];
        }
    }
    float tot = 0;
    for (int i = 0; i < Total.length - 1; i++) {
        tot = tot + Total[i];
    }
    return tot / counter;
}
but this does not meet my needs. Any advice or help?
You are using a moving mean filter in a FIR implementation. In spite of its simplicity, the moving average filter is optimal for a common task: reducing random noise while retaining a sharp step response.
A disadvantage of the FIR implementation is that the computation time increases with the size of the filter. You could look into implementing it as an IIR filter instead, but for this simple application I would not recommend it.
Another improvement might be to use a window function to minimize edge effects.
You may find a good example of a low pass filter using a Hamming window here
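For illustration, here is a minimal moving-average sketch (the class, field names and window size are assumptions, not from the question). Averaging sin/cos instead of the raw angle avoids the jump where the azimuth wraps from 359 back to 0 degrees:

public class AzimuthSmoother {
    // circular buffers of the last N readings, stored as sin/cos to handle wrap-around
    private final float[] sines;
    private final float[] cosines;
    private int index = 0;
    private int filled = 0;

    public AzimuthSmoother(int windowSize) {
        sines = new float[windowSize];
        cosines = new float[windowSize];
    }

    public float smooth(float azimuthDegrees) {
        double rad = Math.toRadians(azimuthDegrees);
        sines[index] = (float) Math.sin(rad);
        cosines[index] = (float) Math.cos(rad);
        index = (index + 1) % sines.length;
        if (filled < sines.length) filled++;

        float sinSum = 0, cosSum = 0;
        for (int i = 0; i < filled; i++) {
            sinSum += sines[i];
            cosSum += cosines[i];
        }
        float averaged = (float) Math.toDegrees(Math.atan2(sinSum / filled, cosSum / filled));
        return averaged < 0 ? averaged + 360f : averaged;
    }
}

A larger window gives a smoother but laggier reading; a weighted (e.g. Hamming) window can be dropped in by multiplying each buffered sample by its window coefficient before summing.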

Cross correlation to find sonar echoes

I'm trying to detect echoes of my chirp in a sound recording on Android. Cross-correlation seems to be the most appropriate way to find where the two signals are similar; from there I can identify peaks in the cross-correlation array, which will correspond to distances.
From my understanding, I have come up with the following cross-correlation function. Is this correct? I wasn't sure whether to add zeros to the beginning and start a few elements back.
public double[] xcorr1(double[] recording, double[] chirp) {
    double[] recordingZeroPadded = new double[recording.length + chirp.length];
    for (int i = recording.length; i < recording.length + chirp.length; ++i)
        recordingZeroPadded[i] = 0;
    for (int i = 0; i < recording.length; ++i)
        recordingZeroPadded[i] = recording[i];
    double[] result = new double[recording.length + chirp.length - 1];
    for (int offset = 0; offset < recordingZeroPadded.length - chirp.length; ++offset)
        for (int i = 0; i < chirp.length; ++i)
            result[offset] += chirp[i] * recordingZeroPadded[offset + i];
    return result;
}
Secondary question:
According to this answer, it can also be calculated like
corr(a, b) = ifft(fft(a_and_zeros) * fft(b_and_zeros[reversed]))
which I don't understand at all but seems easy enough to implement. That said, I have failed (assuming my xcorr1 is correct). Have I completely misunderstood this?
public double[] xcorr2(double[] recording, double[] chirp) {
    // assume same length arguments for now
    DoubleFFT_1D fft = new DoubleFFT_1D(recording.length);
    fft.realForward(recording);
    reverse(chirp);
    fft.realForward(chirp);
    double[] result = new double[recording.length];
    for (int i = 0; i < result.length; ++i)
        result[i] = recording[i] * chirp[i];
    fft.realInverse(result, true);
    return result;
}
Assuming I got both working, which function would be most appropriate given that the arrays will contain a few thousand elements?
EDIT: Btw, I have tried adding zeros to both ends of both arrays for the FFT version.
EDIT after SleuthEye's response:
Can you just verify that, because I'm dealing with 'actual' data, I need only do half the computations (the real parts) by doing a real transform?
From your code, it looks as though the odd-numbered elements in the array returned by the real transform are the imaginary parts. What's going on here?
How am I going from an array of real numbers to complex numbers? Or is this the purpose of a transform; to move real numbers into the complex domain? (But the real numbers are just a subset of the complex numbers, so wouldn't they already be in that domain?)
If realForward is in fact returning imaginary/complex numbers, how does it differ from complexForward? And how do I interpret the results? The magnitude of the complex number?
I apologise for my lack of understanding with regard to transforms, I have only so far studied fourier series.
Thanks for the code. Here is 'my' working implementation:
public double[] xcorr2(double[] recording, double[] chirp) {
    // pad to a power of 2 for optimisation
    int y = 1;
    while (Math.pow(2, y) < recording.length + chirp.length)
        ++y;
    int paddedLength = (int) Math.pow(2, y);
    double[] paddedRecording = new double[paddedLength];
    double[] paddedChirp = new double[paddedLength];
    for (int i = 0; i < recording.length; ++i)
        paddedRecording[i] = recording[i];
    for (int i = recording.length; i < paddedLength; ++i)
        paddedRecording[i] = 0;
    for (int i = 0; i < chirp.length; ++i)
        paddedChirp[i] = chirp[i];
    for (int i = chirp.length; i < paddedLength; ++i)
        paddedChirp[i] = 0;
    // no reverse(chirp) needed: the conjugate multiplication below accounts for the time reversal
    DoubleFFT_1D fft = new DoubleFFT_1D(paddedLength);
    fft.realForward(paddedRecording);
    fft.realForward(paddedChirp);
    double[] result = new double[paddedLength];
    result[0] = paddedRecording[0] * paddedChirp[0]; // value at f=0Hz is real-valued
    result[1] = paddedRecording[1] * paddedChirp[1]; // value at f=fs/2 is real-valued and packed at index 1
    for (int i = 1; i < result.length / 2; ++i) {
        double a = paddedRecording[2 * i];
        double b = paddedRecording[2 * i + 1];
        double c = paddedChirp[2 * i];
        double d = paddedChirp[2 * i + 1];
        // (a+b*j)*(c-d*j) = (a*c+b*d) + (b*c-a*d)*j
        result[2 * i] = a * c + b * d;
        result[2 * i + 1] = b * c - a * d;
    }
    fft.realInverse(result, true);
    // discard trailing zeros
    double[] result2 = new double[recording.length + chirp.length - 1];
    for (int i = 0; i < result2.length; ++i)
        result2[i] = result[i];
    return result2;
}
However, until about 5000 elements each, xcorr1 seems to be quicker. Am I doing anything particularly slow (perhaps the constant 'new'ing of memory -- maybe I should cast to an ArrayList)? Or the arbitrary way in which I generated the arrays to test them? Or should I do the conjugates instead of reversing it? That said, performance isn't really an issue so unless there's something obvious you needn't bother pointing out optimisations.
Your implementation of xcorr1 does correspond to the standard signal-processing definition of cross-correlation.
Regarding your question about adding zeros at the beginning: adding chirp.length-1 zeros would make index 0 of the result correspond to the start of transmission. Note however that the peak of the correlation output occurs chirp.length-1 samples after the start of an echo (the chirp has to be aligned with the full received echo). If you use the peak index to obtain echo delays, you then have to adjust for that correlator delay, either by subtracting it or by discarding the first chirp.length-1 output values. Since the additional zeros just produce that many extra outputs at the beginning, you are probably better off not adding them in the first place.
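As an illustration of that adjustment (not part of the original answer), a sketch for turning the strongest peak into a one-way distance could look like this; sampleRate and speedOfSound are assumed parameters, and the strongest peak may of course be the direct chirp rather than an echo:

// Illustrative sketch: locate the strongest correlation peak and convert it to a distance.
static double echoDistance(double[] xcorr, int chirpLength, double sampleRate, double speedOfSound) {
    int peakIndex = 0;
    for (int i = 1; i < xcorr.length; i++) {
        if (Math.abs(xcorr[i]) > Math.abs(xcorr[peakIndex])) peakIndex = i;
    }
    int delaySamples = peakIndex - (chirpLength - 1);   // remove the correlator delay
    double delaySeconds = delaySamples / sampleRate;
    return delaySeconds * speedOfSound / 2.0;           // round trip -> one-way distance
}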
For xcorr2 however, a few things need to be addressed. First, if the recording and chirp inputs are not already zero-padded to at least chirp+recording data length you would need to do so (preferably to a power of 2 length for performance reasons). As you are aware, they would both need to be padded to the same length.
Second, you didn't take into account that the multiplication indicated in the posted reference answer, correspond in fact to complex multiplications (whereas DoubleFFT_1D.realForward API uses doubles). Now if you are going to implement something such as a complex multiplication with the chirp's FFT, you might as well actually implement the multiplication with the complex conjugate of the chirp's FFT (the alternate implementation indicated in the reference answer), removing the need to reverse the time-domain values.
Also accounting for DoubleFFT_1D.realForward packing order for even length transforms, you would get:
// [...]
fft.realForward(paddedRecording);
fft.realForward(paddedChirp);
result[0] = paddedRecording[0] * paddedChirp[0]; // value at f=0Hz is real-valued
result[1] = paddedRecording[1] * paddedChirp[1]; // value at f=fs/2 is real-valued and packed at index 1
for (int i = 1; i < result.length / 2; ++i) {
    double a = paddedRecording[2 * i];
    double b = paddedRecording[2 * i + 1];
    double c = paddedChirp[2 * i];
    double d = paddedChirp[2 * i + 1];
    // (a+b*j)*(c-d*j) = (a*c+b*d) + (b*c-a*d)*j
    result[2 * i] = a * c + b * d;
    result[2 * i + 1] = b * c - a * d;
}
fft.realInverse(result, true);
// [...]
Note that the result array would be of the same size as paddedRecording and paddedChirp, but only the first recording.length+chirp.length-1 should be kept.
Finally, relative to which function is the most appropriate for arrays of a few thousand elements, the FFT version xcorr2 is likely going to be much faster (provided you restrict array lengths to powers of 2).
The direct version doesn't require zero-padding first. You just take recording of length M and chirp of length N and calculate result of length N+M-1. Work through a tiny example by hand to grok the steps:
recording = [1, 2, 3]
chirp     = [4, 5]

  1 2 3
4 5         ->  1*5        =  5
  1 2 3
  4 5       ->  1*4 + 2*5  = 14
  1 2 3
    4 5     ->  2*4 + 3*5  = 23
  1 2 3
      4 5   ->  3*4        = 12

result = [1*5, 1*4 + 2*5, 2*4 + 3*5, 3*4] = [5, 14, 23, 12]
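A tiny direct implementation matching the sliding picture above (illustrative only, not tied to any particular library):

// Direct cross-correlation: output length is M + N - 1.
static double[] xcorrDirect(double[] recording, double[] chirp) {
    int m = recording.length, n = chirp.length;
    double[] result = new double[m + n - 1];
    // lag runs from -(n-1) to m-1; result index 0 corresponds to lag -(n-1)
    for (int lag = -(n - 1); lag < m; lag++) {
        double sum = 0;
        for (int i = 0; i < n; i++) {
            int r = lag + i;
            if (r >= 0 && r < m) sum += recording[r] * chirp[i];
        }
        result[lag + n - 1] = sum;
    }
    return result;
}

xcorrDirect(new double[]{1, 2, 3}, new double[]{4, 5}) returns {5, 14, 23, 12}, matching the hand-worked result.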
The FFT method is much faster if you have long arrays. In this case you have to zero-pad each input to size M+N-1 so that both input arrays are the same size before taking the FFT.
Also, the FFT output is complex numbers, so you need to use complex multiplication. (1+2j)*(3+4j) is -5+10j, not 3+8j. I don't know how your complex numbers are arranged or handled, but make sure this is right.
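A small helper makes the difference explicit (illustrative only, for complex numbers stored as separate re/im values):

// Multiply two complex numbers (aRe + aIm*j) * (bRe + bIm*j).
static double[] complexMultiply(double aRe, double aIm, double bRe, double bIm) {
    return new double[] { aRe * bRe - aIm * bIm, aRe * bIm + aIm * bRe };
}

complexMultiply(1, 2, 3, 4) returns {-5, 10}, i.e. (1+2j)*(3+4j) = -5+10j.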
Or is this the purpose of a transform; to move real numbers into the complex domain?
No, the Fourier transform transforms from the time domain to the frequency domain. The time domain data can be either real or complex, and the frequency domain data can be either real or complex. In most cases you have real data with a complex spectrum. You need to read up on the Fourier transform.
If realForward is in fact returning imaginary/complex numbers, how does it differ from complexForward?
The real FFT takes a real input, while the complex FFT takes a complex input. Both transforms produce complex numbers as their output. That's what the DFT does. The only time a DFT produces real output is if the input data is symmetrical (in which case you can use the DCT to save even more time).

libgdx smooth movement of rectangles

I want to move two objects smoothly when the screen is touched.
Here is my code:
for (int i = 0; i < 96; i++) {
    Assets.rect_pipe_down.y--;
}
This should move the rect 96 pixels down, smoothly.
But it just closes with no smoothing...
What did I do wrong?
When you touch, the pipes should close, but not instantly; they should close smoothly.
But with the following code they just snap shut...
Here is the full touch-handling code:
if (Gdx.input.isTouched()) {
    Assets.rect_pipe_down.y = 512 - 320/2;
    Assets.rect_pipe_up.y = -320 + 320/2;
    for (int i = 0; i < 96; i++) {
        smoothTime = TimeUtils.millis();
        if (TimeUtils.millis() - smoothTime > 10) {
            Assets.rect_pipe_down.y--;
            Assets.rect_pipe_up.y++;
            batch.begin();
            batch.draw(Assets.region_pipe_down, Assets.rect_pipe_down.x, Assets.rect_pipe_down.y);
            batch.draw(Assets.region_pipe_up, Assets.rect_pipe_up.x, Assets.rect_pipe_up.y);
            batch.end();
        }
    }
    closed = true;
}
You cannot render multiple frames in one render() call; one call draws exactly one frame. In your current code, the later draws simply overwrite the previous ones.
What you could do is have a variable which persists between frames which stores whether or not the pipes are currently closing, a constant for the speed and some condition to tell when they can stop - maybe when they are some given distance from each other, not sure what you would want here. Anyway, that's what I'll use in my example.
Then in the render() method, before drawing anything, you can do this:
if (closing) {
    Assets.rect_pipe_down.y -= CLOSE_SPEED * delta;
    Assets.rect_pipe_up.y += CLOSE_SPEED * delta;
    if (Assets.rect_pipe_down.y - Assets.rect_pipe_up.y < TARGET_DIST) {
        Assets.rect_pipe_down.y = Assets.rect_pipe_up.y + TARGET_DIST;
        closing = false;
    }
}
Here, closing is a variable you set to true when you want them to start closing, and the others are constants. You could add some more variables/constants if you want to make sure they end up at a specific height independent of framerate.
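A minimal sketch of how that could be wired into render() (the field names, constant values and the use of justTouched() are illustrative assumptions, not part of the original answer):

private boolean closing = false;
private static final float CLOSE_SPEED = 96f;   // pixels per second
private static final float TARGET_DIST = 128f;  // stop when the pipes are this far apart

@Override
public void render() {
    float delta = Gdx.graphics.getDeltaTime();

    // start closing on a new touch; the flag persists across frames
    if (Gdx.input.justTouched()) {
        closing = true;
    }

    if (closing) {
        Assets.rect_pipe_down.y -= CLOSE_SPEED * delta;
        Assets.rect_pipe_up.y += CLOSE_SPEED * delta;
        if (Assets.rect_pipe_down.y - Assets.rect_pipe_up.y < TARGET_DIST) {
            Assets.rect_pipe_down.y = Assets.rect_pipe_up.y + TARGET_DIST;
            closing = false;
        }
    }

    batch.begin();
    batch.draw(Assets.region_pipe_down, Assets.rect_pipe_down.x, Assets.rect_pipe_down.y);
    batch.draw(Assets.region_pipe_up, Assets.rect_pipe_up.x, Assets.rect_pipe_up.y);
    batch.end();
}

Because the movement is scaled by the frame delta, the pipes close at the same speed regardless of framerate.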

Add dynamic path to sprite

I have to move a sprite along a path that is drawn on touch. For that I'm using Path and PathModifier:
case MotionEvent.ACTION_UP:
    int historySize = pSceneTouchEvent.getMotionEvent().getHistorySize();
    pointX = new float[historySize];
    pointY = new float[historySize];
    for (int i = 1; i < historySize; i++) {
        pointX[i] = pSceneTouchEvent.getMotionEvent().getHistoricalX(i);
        pointY[i] = pSceneTouchEvent.getMotionEvent().getHistoricalY(i);
    }
    path = new Path(pointX, pointY);
    PathModifier pathModifier = new PathModifier(2.5f, path);
    pathModifier.setRemoveWhenFinished(true);
    sprite1.clearEntityModifiers();
    sprite1.registerEntityModifier(pathModifier);
    break;
It's giving me an error saying the path needs at least 2 waypoints.
Any idea why?
Normally this shouldn't happen, since a motion event very often contains more than just one coordinate. Maybe you should test whether historySize is really bigger than 2. In addition, you can add the sprite's starting coordinates, otherwise the sprite will "jump" towards the first touch point (but that wasn't your question).
This isn't actually different – just another possibility:
path = new Path(historySize);
for (int i = 0; i < historySize; i++) {
    float x = pSceneTouchEvent.getMotionEvent().getHistoricalX(i);
    float y = pSceneTouchEvent.getMotionEvent().getHistoricalY(i);
    path.to(x, y);
}
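As a rough sketch of the starting-coordinate idea mentioned above (sprite1.getX()/getY() are assumed here, following the names used elsewhere in this thread):

// Prepend the sprite's current position so it doesn't jump to the first touch point.
Path path = new Path(historySize + 1);
path.to(sprite1.getX(), sprite1.getY());
for (int i = 0; i < historySize; i++) {
    path.to(pSceneTouchEvent.getMotionEvent().getHistoricalX(i),
            pSceneTouchEvent.getMotionEvent().getHistoricalY(i));
}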
In addition, I noticed you start your for loop with int i = 1, so if your historySize is 2, the loop only iterates once!
EDIT
I couldn't find the problem, but I found a solution:
Instead of using the MotionEvent history, save the coordinates of the touch events on the go, as they occur:
ArrayList<Float> xCoordinates; // this is where you store all x coordinates of the touch events
ArrayList<Float> yCoordinates; // and here will be the y coordinates

onSceneTouchEvent(TouchEvent sceneTouchEvent) {
    switch (sceneTouchEvent.getAction()) {
        case (TouchEvent.ACTION_DOWN): {
            // init the lists every time a new touchDown is registered
            xCoordinates = new ArrayList<Float>();
            yCoordinates = new ArrayList<Float>();
            break;
        }
        case (TouchEvent.ACTION_MOVE): {
            // while moving, store all touch points in the lists
            xCoordinates.add(sceneTouchEvent.getX());
            yCoordinates.add(sceneTouchEvent.getY());
            break;
        }
        case (TouchEvent.ACTION_UP): {
            // when the gesture is finished, create the path and make the sprite move;
            // instead of the history size, use the size of your own lists
            Path path = new Path(xCoordinates.size());
            for (int i = 0; i < xCoordinates.size(); i++) {
                path.to(xCoordinates.get(i), yCoordinates.get(i)); // add the coordinates to the path one by one
            }
            // do the rest and make the sprite move
            PathModifier pathModifier = new PathModifier(2.5f, path);
            pathModifier.setAutoUnregisterWhenFinished(true);
            sprite1.clearEntityModifiers();
            sprite1.registerEntityModifier(pathModifier);
            break;
        }
    }
}
I tested this on my phone (which does not run in debug mode) and it works fine. But to make sure that no exception is thrown, you should always check that the xCoordinates list has more than one element, although it very probably does.
Well, I hope it at least helps you work around your original problem. I noticed that some methods are named differently (e.g. setAutoUnregisterWhenFinished(true)); I guess you are using AndEngine GLES1? I use GLES2, so when a method has another name in my code, don't worry and just look for the GLES1 equivalent (I didn't rename them because the code works as it is for me).
Christoph

In ACTION_MOVE MotionEvents, are pointers guaranteed to be indexed from low to high id?

I am logging touch-based interactions to a database, and cannot afford to record all pointers. Therefore, as I process the pointers (and historical pointers) contained in the MotionEvent, I simply ignore pointers after a certain index. However, pointers with the lowest ids are the most relevant to me. Is it safe to assume that the pointers are ordered (indexed) in order of pointer id, even if those ids are not guaranteed to start at zero?
Testing indicates that this assumption is correct, but I can't find any verification in the documentation. Is anyone able to shed some light on this?
From the Android MotionEvent documentation, how to process all pointers:
void printSamples(MotionEvent ev) {
    final int historySize = ev.getHistorySize();
    final int pointerCount = ev.getPointerCount();
    for (int h = 0; h < historySize; h++) {
        System.out.printf("At time %d:", ev.getHistoricalEventTime(h));
        for (int p = 0; p < pointerCount; p++) {
            System.out.printf(" pointer %d: (%f,%f)",
                    ev.getPointerId(p), ev.getHistoricalX(p, h), ev.getHistoricalY(p, h));
        }
    }
    System.out.printf("At time %d:", ev.getEventTime());
    for (int p = 0; p < pointerCount; p++) {
        System.out.printf(" pointer %d: (%f,%f)",
                ev.getPointerId(p), ev.getX(p), ev.getY(p));
    }
}
My code changes the line
final int pointerCount = ev.getPointerCount();
to
final int pointerCount = Math.min(ev.getPointerCount(), MAX_POINTER_COUNT);
which effectively gets only the first MAX_POINTER_COUNT pointers within each pointer loop. Can I rely on those pointers having the lowest pointer ids?
In my experience, the order of pointers is the order in which they touched the screen, so the first pointer is the first touch, and the 3rd finger to touch simultaneously is pointer 3.
So if you want to limit yourself to 2 or 3 finger gestures, you can set your max pointer count to 2 or 3.
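If you would rather not rely on that ordering, a defensive sketch (the method name and the Comparator-based sort are my own, not from the documentation) is to collect the indices, sort them by pointer id, and keep only the lowest ones:

// Explicitly pick the pointers with the lowest ids instead of assuming index order matches id order.
void logLowestPointers(final MotionEvent ev, int maxPointerCount) {
    int pointerCount = ev.getPointerCount();
    Integer[] indices = new Integer[pointerCount];
    for (int p = 0; p < pointerCount; p++) indices[p] = p;
    // sort the pointer indices by their pointer id, ascending
    java.util.Arrays.sort(indices, new java.util.Comparator<Integer>() {
        @Override
        public int compare(Integer a, Integer b) {
            return ev.getPointerId(a) - ev.getPointerId(b);
        }
    });
    int keep = Math.min(pointerCount, maxPointerCount);
    for (int k = 0; k < keep; k++) {
        int p = indices[k];
        System.out.printf(" pointer %d: (%f,%f)", ev.getPointerId(p), ev.getX(p), ev.getY(p));
    }
}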

Categories

Resources