I have a few timing problems in my game. If you play my game (see link; I won't post all the code) to around 50 points, the tubes stay closed. Why?
The tubes only open when a bird spawns. Why? They are supposed to stay closed for only one second; do I have something wrong in the code?
The most important code is in this file: SquishyBird/SquishyBird/src/com/CoreTek/squishybird/GameScreen.java
Secondly, how can I add a delay when the background is touched: touch -> door closes -> 1 sec -> door opens?
During that second the enemies in my game should keep moving, so it cannot be a blocking delay. How can I do this?
Maybe the error is here:
delayTime = TimeUtils.millis();
if(Gdx.input.isTouched()){
Assets.rect_pipe_down.y = 512 - 320/2 - 96;
Assets.rect_pipe_up.y = -320 + 320/2 + 96;
Assets.rect_hitbox.x = 288/2 - 52/2 + 2;
}
if(TimeUtils.millis() - delayTime > 1000){
Assets.rect_pipe_down.y = 512 - 320/2;
Assets.rect_pipe_up.y = -320 + 320/2;
Assets.rect_hitbox.x = 288/2 - 52/2 + 2 + 500;
}
My whole render method:
@Override
public void render(float delta){
Gdx.gl.glClearColor(0F, 0F, 0F, 1F);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
camera.update();
stateTime += Gdx.graphics.getDeltaTime();
Assets.region_current_bird = Assets.animation_bird.getKeyFrame(stateTime, true);
batch.setProjectionMatrix(camera.combined);
batch.begin();
batch.draw(Assets.region_bg, 0, 0);
batch.draw(Assets.region_pipe_down, Assets.rect_pipe_down.x, Assets.rect_pipe_down.y);
batch.draw(Assets.region_pipe_up, Assets.rect_pipe_up.x, Assets.rect_pipe_up.y);
Assets.font_points.draw(batch, "Points: " + String.valueOf(points), 10, 512 - 10);
for(Rectangle rect_bird: Assets.rect_birds_array){
batch.draw(Assets.region_current_bird, rect_bird.x, rect_bird.y);
}
batch.end();
if(Gdx.input.isTouched()){
Assets.rect_pipe_down.y = 512 - 320/2 - 96;
Assets.rect_pipe_up.y = -320 + 320/2 + 96;
Assets.rect_hitbox.x = 288/2 - 52/2 + 2;
}
if(TimeUtils.millis() - delayTime > 1000){
Assets.rect_pipe_down.y = 512 - 320/2;
Assets.rect_pipe_up.y = -320 + 320/2;
Assets.rect_hitbox.x = 288/2 - 52/2 + 2 + 500;
}
if(TimeUtils.millis() - lastSpawnTime > spawnTime) spawnBirds();
Iterator<Rectangle> iter = Assets.rect_birds_array.iterator();
while(iter.hasNext()){
Rectangle bird = iter.next();
bird.x += 200 * Gdx.graphics.getDeltaTime();
if(bird.x - 34 > 288) iter.remove();
if(bird.overlaps(Assets.rect_hitbox)){
points++;
if(spawnTime > 10){
spawnTime-=10;
}
iter.remove();
}
}
}
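A non-blocking delay can be built by recording the touch time once and comparing against it every frame. The sketch below is my own (class and field names are assumptions, not from the game); it only relies on millisecond timestamps of the kind `TimeUtils.millis()` provides:

```java
// Minimal sketch of a non-blocking timed door. The timestamp is updated
// ONLY when a touch happens, never unconditionally every frame.
public class DoorTimer {
    private long closedAt = -1_000_000L; // far in the past, so the door starts open
    private final long closeDurationMs;

    public DoorTimer(long closeDurationMs) {
        this.closeDurationMs = closeDurationMs;
    }

    // Call this from the touch branch, NOT at the top of render():
    public void onTouch(long nowMs) {
        closedAt = nowMs;
    }

    // Call this every frame; nothing blocks, so enemies keep moving.
    public boolean isOpen(long nowMs) {
        return nowMs - closedAt > closeDurationMs;
    }
}
```

The likely bug in the snippet above is that `delayTime = TimeUtils.millis();` runs unconditionally every frame, so `TimeUtils.millis() - delayTime` only ever measures one frame; the timestamp should be set only inside the `isTouched()` branch, as in this sketch.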
I have tried to use the logic and pictorial representation from this SO answer. I am confused by the images, though, since one of them follows 4:1:1 while the other uses 4:2:2 nomenclature for the YUV image (NV21).
Right now the issue is that I get an image (converted to Bitmap/PNG) with YUV components scattered all over it, essentially an unusable image.
Any recommendations to fix this?
private byte[] cropImage(byte[] data, Rect cropRect) {
int dataHeight = 480;
int dataWidth = 640;
int totalWH = dataWidth * dataHeight;
// make the rect coordinates even; the crop width & height are already even
// adjust the x coordinates to make them even
if (cropRect.left % 2 != 0 || cropRect.right % 2 != 0) {
cropRect.left -= 1;
cropRect.right -= 1;
}
// adjust y coordinates to make them even
if (cropRect.top % 2 != 0 || cropRect.bottom % 2 != 0) {
cropRect.top -= 1;
cropRect.bottom -= 1;
}
int area = cropRect.width() * cropRect.height() * 3/2;
Logger.getLogger().d("Size of byte array " + data.length + " Size of alloc area " + area);
byte[] pixels = new byte[area];//the size of the array is the dimensions of the sub-photo
// size.total = size.width * size.height;
// y = yuv[position.y * size.width + position.x];
// u = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total];
// v = yuv[(position.y / 2) * (size.width / 2) + (position.x / 2) + size.total + (size.total / 4)];
try {
// copy Y plane first
int srcOffset = cropRect.top * dataWidth;
int destOffset = 0;
int lengthToCopy = cropRect.width();
int y = 0;
for (; y < cropRect.height(); y++, srcOffset += dataWidth, destOffset += cropRect.width()) {
// Logger.getLogger().d("IO " + srcOffset + cropRect.left + " oO " + destOffset + " LTC " + lengthToCopy);
System.arraycopy(data, srcOffset + cropRect.left, pixels, destOffset, lengthToCopy);
}
Logger.getLogger().d("Completed Y copy");
// U and V components are not interleaved here, so each plane is just 1/4 the original size
// copy U plane
int nonYPlanerHeight = dataHeight / 4;
int nonYPlanerWidth = dataWidth / 4;
srcOffset = totalWH + (cropRect.top / 4 * nonYPlanerWidth);
for (y = 0; y < cropRect.height();
y++, srcOffset += nonYPlanerWidth, destOffset += cropRect.width() / 4) {
System.arraycopy(data, srcOffset + cropRect.left / 4, pixels, destOffset, cropRect.width() / 4);
}
Logger.getLogger().d("Completed U copy " + y + " destOffset=" + destOffset);
// copy V plane
srcOffset = totalWH + totalWH / 4 + (cropRect.top / 4 * nonYPlanerWidth);
for (y = 0; y < cropRect.height();
y++, srcOffset += nonYPlanerWidth, destOffset += cropRect.width() / 4) {
System.arraycopy(data, srcOffset + cropRect.left / 4, pixels, destOffset, cropRect.width() / 4);
}
Logger.getLogger().d("Completed V copy " + y + " destOffset=" + destOffset);
} catch (ArrayIndexOutOfBoundsException ae) {
// do nothing
Logger.getLogger().e("Exception " + ae.getLocalizedMessage());
}
return pixels;
}
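Separately from the cropping logic, it may be worth double-checking the chroma geometry: in planar YUV 4:2:0 (I420) each chroma plane is subsampled by 2 in both directions, so its row stride is width/2 (not width/4) and the chroma copy loop should run height/2 times. A small sketch of the offset math (class and method names are my own):

```java
// Sketch of I420 plane geometry: each chroma plane is (w/2) x (h/2),
// i.e. 1/4 of the pixels, but its row stride is w/2, not w/4.
public class I420Geometry {
    // byte offset where the U plane starts
    public static int uPlaneOffset(int w, int h) {
        return w * h;
    }
    // byte offset where the V plane starts
    public static int vPlaneOffset(int w, int h) {
        return w * h + (w / 2) * (h / 2);
    }
    // offset of the chroma sample covering pixel (x, y), within its plane
    public static int chromaIndex(int w, int x, int y) {
        return (y / 2) * (w / 2) + (x / 2);
    }
}
```

Note also that raw camera NV21 stores V and U interleaved in a single plane after Y, so the "separate U plane, then V plane" layout assumed above only applies if the buffer has been converted to I420 first.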
I have created a custom image view to remove a selected part from bitmap images.
There is an operation that selects the area to remove from the current bitmap via a path built from a collection of points.
Here is the code snippet:
for (int i = points.size() - 2; i < points.size(); i++) {
if (i >= 0) {
Point point = points.get(i);
if (i == 0) {
Point next = points.get(i + 1);
point.dx = ((next.x - point.x) / 3);
point.dy = ((next.y - point.y) / 3);
} else if (i == points.size() - 1) {
Point prev = points.get(i - 1);
point.dx = ((point.x - prev.x) / 3);
point.dy = ((point.y - prev.y) / 3);
} else {
Point next = points.get(i + 1);
Point prev = points.get(i - 1);
point.dx = ((next.x - prev.x) / 3);
point.dy = ((next.y - prev.y) / 3);
}
}
}
path.cubicTo(prev.x + prev.dx, prev.y + prev.dy, point.x - point.dx,
point.y - point.dy, point.x, point.y);
paramCanvas.drawPath(path, paint);
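The loop above computes Bézier control offsets from each point's neighbors (a Catmull-Rom-style smoothing: the tangent at an interior point is one third of the vector between its neighbors). As a standalone sketch of that interior rule (class and method names are my own):

```java
// Sketch of the interior control-point rule from the loop above:
// the tangent at a point is one third of the vector between its neighbors.
public class Smoothing {
    public static int[] tangent(int prevX, int prevY, int nextX, int nextY) {
        return new int[] { (nextX - prevX) / 3, (nextY - prevY) / 3 };
    }
}
```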
Look at this output:
I used clipPath to crop this part, but it is not working for me.
I am stuck on clipping the selected part, so can you help me solve this?
I would greatly appreciate any help. Thanks.
I am trying to use OpenMP to parallelize the deblocking filter of OpenHEVC, but the OpenMP version is slower than the serial one. I even tried an empty body in the for loop, and it still took four times as long as the serial version. I don't know why this happens.
Serial code
for (y = y0; y < y_end; y += 8) {
for (x = x0 ? x0 : 8; x < x_end; x += 8) {
const int bs0 = s->vertical_bs[(x >> 3) + (y >> 2) * s->bs_width];
const int bs1 = s->vertical_bs[(x >> 3) + ((y + 4) >> 2) * s->bs_width];
int c_tc[2], beta[2], tc[2];
uint8_t no_p[2] = { 0 };
uint8_t no_q[2] = { 0 };
if (bs0 || bs1) {
const int qp0 = (get_qPy(s, x - 1, y) + get_qPy(s, x, y) + 1) >> 1;
const int qp1 = (get_qPy(s, x - 1, y + 4) + get_qPy(s, x, y + 4) + 1) >> 1;
beta[0] = betatable[av_clip(qp0 + (beta_offset >> 1 << 1), 0, MAX_QP)];
beta[1] = betatable[av_clip(qp1 + (beta_offset >> 1 << 1), 0, MAX_QP)];
tc[0] = bs0 ? TC_CALC(qp0, bs0) : 0;
tc[1] = bs1 ? TC_CALC(qp1, bs1) : 0;
src = &s->frame->data[LUMA][y * s->frame->linesize[LUMA] + (x << s->sps->pixel_shift)];
if (pcmf) {
no_p[0] = get_pcm(s, x - 1, y);
no_p[1] = get_pcm(s, x - 1, y + 4);
no_q[0] = get_pcm(s, x, y);
no_q[1] = get_pcm(s, x, y + 4);
omp_set_lock(&writelock);
s->hevcdsp.hevc_v_loop_filter_luma_c(src,
s->frame->linesize[LUMA],
beta, tc, no_p, no_q);
omp_unset_lock(&writelock);
} else{
omp_set_lock(&writelock);
s->hevcdsp.hevc_v_loop_filter_luma(src,
s->frame->linesize[LUMA],
beta, tc, no_p, no_q);
omp_unset_lock(&writelock);
}
}
}
}
Openmp code
omp_set_num_threads(4);
#pragma omp parallel shared(s) private(src)
{
#pragma omp for
for (y = y0; y < y_end; y += 8) {
for (x = x0 ? x0 : 8; x < x_end; x += 8) {
const int bs0 = s->vertical_bs[(x >> 3) + (y >> 2) * s->bs_width];
const int bs1 = s->vertical_bs[(x >> 3) + ((y + 4) >> 2) * s->bs_width];
int c_tc[2], beta[2], tc[2];
uint8_t no_p[2] = { 0 };
uint8_t no_q[2] = { 0 };
if (bs0 || bs1) {
const int qp0 = (get_qPy(s, x - 1, y) + get_qPy(s, x, y) + 1) >> 1;
const int qp1 = (get_qPy(s, x - 1, y + 4) + get_qPy(s, x, y + 4) + 1) >> 1;
beta[0] = betatable[av_clip(qp0 + (beta_offset >> 1 << 1), 0, MAX_QP)];
beta[1] = betatable[av_clip(qp1 + (beta_offset >> 1 << 1), 0, MAX_QP)];
tc[0] = bs0 ? TC_CALC(qp0, bs0) : 0;
tc[1] = bs1 ? TC_CALC(qp1, bs1) : 0;
src = &s->frame->data[LUMA][y * s->frame->linesize[LUMA] + (x << s->sps->pixel_shift)];
if (pcmf) {
no_p[0] = get_pcm(s, x - 1, y);
no_p[1] = get_pcm(s, x - 1, y + 4);
no_q[0] = get_pcm(s, x, y);
no_q[1] = get_pcm(s, x, y + 4);
s->hevcdsp.hevc_v_loop_filter_luma_c(src,
s->frame->linesize[LUMA],
beta, tc, no_p, no_q);
} else{
s->hevcdsp.hevc_v_loop_filter_luma(src,
s->frame->linesize[LUMA],
beta, tc, no_p, no_q);
}
}
}
}
}
Time (longest):
Serial: 1004 ns
OpenMP: 4150 ns
A blank loop will take longer in parallel than in serial. You don't have nearly enough work inside the loop for parallelization to benefit you; the overhead required to spawn and join the threads takes up most of that time.
Try putting a really heavy workload in there and see what happens! For example, I use OpenMP in Fortran code with loops that take 5 minutes per iteration.
You could even put a 5 second sleep in just to test that they're actually running in parallel.
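The overhead argument is easy to demonstrate outside OpenMP as well. This Java sketch (purely an illustration, not HEVC code) compares a trivial serial loop with the same work distributed across threads via parallel streams, where the fork/join setup is pure overhead:

```java
import java.util.stream.IntStream;

// Illustration of the answer's point: with a near-empty loop body, the cost
// of distributing work across threads dominates the useful work.
public class Overhead {
    public static long serialSum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += i; // trivial per-element work
        return s;
    }
    public static long parallelSum(int n) {
        // fork/join setup and result combination are pure overhead here
        return IntStream.range(0, n).parallel().asLongStream().sum();
    }
}
```

Timing the two typically shows the parallel version losing badly for small n and trivial bodies; only when the per-element work grows does the parallel version pull ahead.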
I have a problem calculating the x coordinate for plotting on the iPhone screen. When points are within a range of 300 meters, all the points of interest appear close together even though they are actually spread out. I have even changed the width of the viewport from 0.5 to 0.17 radians (in degrees, from 28.647889757 to 10.0). Can anyone suggest how to place every point of interest correctly with respect to its actual position?
The standard way (Mixare, ARToolkit) of calculating points in AR is:
Calculation using ARKit:
double pointAzimuth = coordinate.coordinateAzimuth;
//our x numbers are left based.
double leftAzimuth = self.currentCoordinate.coordinateAzimuth - VIEWPORT_WIDTH_RADIANS / 2.0;
if (leftAzimuth < 0.0) {
leftAzimuth = 2 * M_PI + leftAzimuth;
}
if (pointAzimuth < leftAzimuth) {
//it's past the 0 point.
point.x = ((2 * M_PI - leftAzimuth + pointAzimuth) / VIEWPORT_WIDTH_RADIANS) * 480.0;
} else {
point.x = ((pointAzimuth - leftAzimuth) / VIEWPORT_WIDTH_RADIANS) * 480.0;
}
In Mixare:
CGPoint point;
CGRect viewBounds = self.overlayView.bounds;
//NSLog(#"pointForCoordinate: viewBounds.size.width = %.3f, height = %.3f", viewBounds.size.width, viewBounds.size.height );
double currentAzimuth = self.currentCoordinate.coordinateAzimuth;
double pointAzimuth = coordinate.coordinateAzimuth;
//NSLog(#"pointForCoordinate: location = %#, pointAzimuth = %.3f, pointInclination = %.3f, currentAzimuth = %.3f", coordinate.coordinateTitle, point.x, point.y, radiansToDegrees(pointAzimuth), radiansToDegrees(currentAzimuth), radiansToDegrees(pointInclination) );
double deltaAzimuth = [self deltaAzimuthForCoordinate:coordinate];
BOOL isBetweenNorth = [self isNorthForCoordinate:coordinate];
//NSLog(#"pointForCoordinate: (1) currentAzimuth = %.3f, pointAzimuth = %.3f, isNorth = %d", radiansToDegrees(currentAzimuth), radiansToDegrees(pointAzimuth), isBetweenNorth );
// NSLog(#"pointForCoordinate: deltaAzimuth = %.3f", radiansToDegrees(deltaAzimuth));
//NSLog(#"pointForCoordinate: (2) currentAzimuth = %.3f, pointAzimuth = %.3f, isNorth = %d", radiansToDegrees(currentAzimuth), radiansToDegrees(pointAzimuth), isBetweenNorth );
if ((pointAzimuth > currentAzimuth && !isBetweenNorth) ||
(currentAzimuth > degreesToRadians(360-self.viewRange) &&
pointAzimuth < degreesToRadians(self.viewRange))) {
// Right side of Azimuth
point.x = (viewBounds.size.width / 2) + ((deltaAzimuth / degreesToRadians(1)) * 12);
} else {
// Left side of Azimuth
point.x = (viewBounds.size.width / 2) - ((deltaAzimuth / degreesToRadians(1)) * 12);
}
I have created a method which performs Sobel edge detection.
I use the camera's YUV byte array to perform the detection on.
My problem is that I only get about 5 fps, which is really low.
I know it can be done faster, because other apps on the market manage good frame rates at good quality.
I pass images at a resolution of 800x400.
Can anyone check whether my algorithm can be made shorter or more performant?
I have already moved the algorithm into native code, but there seems to be no difference in fps.
public void process() {
progress=0;
index = 0;
// calculate size
// pixel index
size = width*(height-2) - 2;
// pixel loop
while (size>0)
{
// get Y matrix values from YUV
ay = input[index];
by = input[index+1];
cy = input[index+2];
gy = input[index+doubleWidth];
hy = input[index+doubleWidth+1];
iy = input[index+doubleWidth+2];
// get X matrix values from YUV
ax = input[index];
cx = input[index+2];
dx = input[index+width];
fx = input[index+width+2];
gx = input[index+doubleWidth];
ix = input[index+doubleWidth+2];
// 1 2 1
// 0 0 0
// -1 -2 -1
sumy = ay + (by*2) + cy - gy - (2*hy) - iy;
// -1 0 1
// -2 0 2
// -1 0 1
sumx = -ax + cx -(2*dx) + (2*fx) - gx + ix;
total[index] = (int) Math.sqrt(sumx*sumx+sumy*sumy);
// Math.atan2(sumx,sumy);
if(max < total[index])
max = total[index];
// sum = - a -(2*b) - c + g + (2*h) + i;
if (total[index] <0)
total[index] = 0;
// zero out values above 255 (note: this zeroes rather than clamping)
if (total[index] >255)
total[index] = 0;
sum = (int) (total[index]);
output[index] = 0xff000000 | (sum << 16) | (sum << 8) | sum;
size--;
// next
index++;
}
//ratio = max/255;
}
Thanks in advance!
Greetings
So I have two suggestions:
I would consider losing the Math.sqrt() call: if you are only
interested in edge detection, I see no need for it,
as the sqrt function is monotonic and really costly to
compute.
I would consider another algorithm; in particular, I have had good results with a separable convolution filter: http://www.songho.ca/dsp/convolution/convolution.html#separable_convolution as this might bring down the number of arithmetic operations (which are probably your bottleneck).
I hope this helps, or at least sparks some inspiration. Good luck.
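For reference, the Sobel kernels themselves are separable; e.g. the x kernel is the outer product of the column [1 2 1] and the row [-1 0 1]. A toy sketch of the two-pass idea on a grayscale array (my own names, not code from the linked page):

```java
// Toy sketch of a separable Sobel-x pass on a grayscale image:
// a row pass with [-1 0 1], then a column pass with [1 2 1].
public class SeparableSobel {
    public static int[] sobelX(int[] img, int w, int h) {
        int[] tmp = new int[w * h];
        int[] out = new int[w * h];
        // horizontal pass: [-1 0 1]
        for (int y = 0; y < h; y++)
            for (int x = 1; x < w - 1; x++)
                tmp[y * w + x] = img[y * w + x + 1] - img[y * w + x - 1];
        // vertical pass: [1 2 1]
        for (int y = 1; y < h - 1; y++)
            for (int x = 0; x < w; x++)
                out[y * w + x] = tmp[(y - 1) * w + x]
                        + 2 * tmp[y * w + x]
                        + tmp[(y + 1) * w + x];
        return out;
    }
}
```

For a 3x3 kernel the arithmetic saving is modest, but it grows quickly with kernel size, which is why the separable form is worth knowing.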
If you are using your algorithm in real time, call it less often, maybe every ~20 frames instead of every frame.
Do more work per iteration: at 800x400 your algorithm runs 318,398 iterations. Each iteration pulls from the input array in a way that looks random to the processor, which hurts caching. Try pulling ay, ay2, by, by2, cy, cy2 and doing twice the calculation per loop; you'll notice that the variables in the next iteration relate to the previous one (ay is now ay2, etc.).
Here's a rewrite of your algorithm doing twice the work per iteration. It saves a bit of redundant memory access and replaces Math.sqrt with a fast integer-only version.
public void process() {
progress=0;
index = 0;
// calculate size
// pixel index
size = width*(height-2) - 2;
// do FIRST iteration outside of loop
// grab input avoid redundant memory accesses
ay = ax = input[index];
by = ay2 = ax2 = input[index+1];
cy = by2 = cx = input[index+2];
cy2 = cx2 = input[index+3];
gy = gx = input[index+doubleWidth];
hy = gy2 = gx2 = input[index+doubleWidth+1];
iy = hy2 = ix = input[index+doubleWidth+2];
iy2 = ix2 = input[index+doubleWidth+3];
dx = input[index+width];
dx2 = input[index+width+1];
fx = input[index+width+2];
fx2 = input[index+width+3];
//
sumy = ay + (by*2) + cy - gy - (2*hy) - iy;
sumy2 = ay2 + (by2*2) + cy2 - gy2 - (2*hy2) - iy2;
sumx = -ax + cx -(2*dx) + (2*fx) - gx + ix;
sumx2 = -ax2 + cx2 -(2*dx2) + (2*fx2) - gx2 + ix2;
// fast integer square root instead of Math.sqrt
total[index] = fastSqrt(sumx*sumx+sumy*sumy);
total[index+1] = fastSqrt(sumx2*sumx2+sumy2*sumy2);
max = Math.max(max, Math.max(total[index], total[index+1]));
// skip the test for a negative value; sqrt output can never be negative
if(total[index] > 255) total[index] = 0;
if(total[index+1] > 255) total[index+1] = 0;
sum = (int) (total[index]);
sum2 = (int) (total[index+1]);
output[index] = 0xff000000 | (sum << 16) | (sum << 8) | sum;
output[index+1] = 0xff000000 | (sum2 << 16) | (sum2 << 8) | sum2;
size -= 2;
index += 2;
while (size>0)
{
// grab input avoid redundant memory accesses
ay = ax = cy;
by = ay2 = ax2 = cy2;
cy = by2 = cx = input[index+2];
cy2 = cx2 = input[index+3];
gy = gx = iy;
hy = gy2 = gx2 = iy2;
iy = hy2 = ix = input[index+doubleWidth+2];
iy2 = ix2 = input[index+doubleWidth+3];
dx = fx;
dx2 = fx2;
fx = input[index+width+2];
fx2 = input[index+width+3];
//
sumy = ay + (by*2) + cy - gy - (2*hy) - iy;
sumy2 = ay2 + (by2*2) + cy2 - gy2 - (2*hy2) - iy2;
sumx = -ax + cx -(2*dx) + (2*fx) - gx + ix;
sumx2 = -ax2 + cx2 -(2*dx2) + (2*fx2) - gx2 + ix2;
// fast integer square root instead of Math.sqrt
total[index] = fastSqrt(sumx*sumx+sumy*sumy);
total[index+1] = fastSqrt(sumx2*sumx2+sumy2*sumy2);
max = Math.max(max, Math.max(total[index], total[index+1]));
// skip the test for a negative value; sqrt output can never be negative
if(total[index] > 255) total[index] = 0;
if(total[index+1] > 255) total[index+1] = 0;
sum = (int) (total[index]);
sum2 = (int) (total[index+1]);
output[index] = 0xff000000 | (sum << 16) | (sum << 8) | sum;
output[index+1] = 0xff000000 | (sum2 << 16) | (sum2 << 8) | sum2;
size -= 2;
index += 2;
}
}
// A faster, integer-only square root (Newton's method), returning floor(sqrt(x)):
public static int fastSqrt(int x) {
    if (x < 2) return x;
    int r = x;
    int g = (r + x / r) >> 1;
    while (g < r) {        // iterate until the guess stops shrinking
        r = g;
        g = (r + x / r) >> 1;
    }
    return r;
}
Please note, the above code was not tested, it was written inside the browser window and may contain syntax errors.
EDIT: You could try using a fast integer-only square root function to avoid Math.sqrt.
http://atoms.alife.co.uk/sqrt/index.html