How to add the scene in a particular region in an Android game using AndEngine? - android

I am new to game development. I am using AndEngine to develop a game. I create a scene where objects fall randomly from the top of the screen using the code below.
TimerHandler spriteTimerHandler;
float mEffectSpawnDelay = 3f;
spriteTimerHandler = new TimerHandler(mEffectSpawnDelay, true,
        new ITimerCallback() {
            @Override
            public void onTimePassed(TimerHandler pTimerHandler) {
                Random rand = new Random();
                int y = (int) (resourcesManager.camera.getHeight() + resourcesManager.ball2.getHeight());
                int minx = (int) (resourcesManager.ball1.getHeight());
                int maxx = (int) (resourcesManager.camera.getWidth() - resourcesManager.ball2.getWidth());
                int rangex = maxx - minx;
                int x = rand.nextInt(rangex) + minx;
                Sprite target = new Sprite(x, y, resourcesManager.ball2.deepCopy(), vbom);
                attachChild(target);
                int minDuration = 4;
                int maxDuration = 8;
                int rangeDuration = maxDuration - minDuration;
                int actualDuration = rand.nextInt(rangeDuration) + minDuration;
                MoveYModifier mod = new MoveYModifier(actualDuration, target.getY(), -target.getHeight());
                target.registerEntityModifier(mod.deepCopy());
                TargetsToBeAdded.add(target);
            }
        });
registerUpdateHandler(spriteTimerHandler);
With this code the objects fall from the top of the screen, but they spawn across the whole width. I want the objects to fall only from the middle of the screen to the right edge. Can anyone help me solve this issue?

int minx = (int) (resourcesManager.camera.getWidth() / 2);
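That is, keep the rest of the question's spawn code the same and only raise the lower bound of the random x range to the middle of the camera. A minimal sketch of the affected lines (using the same resourcesManager fields as in the question):
int minx = (int) (resourcesManager.camera.getWidth() / 2);                                  // start at mid-screen
int maxx = (int) (resourcesManager.camera.getWidth() - resourcesManager.ball2.getWidth());
int rangex = maxx - minx;
int x = rand.nextInt(rangex) + minx;   // x now falls in [width / 2, width - ball width]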

Related

YoloV3 object detection with tflite model giving around 160 bounding boxes all tagged with first class from the label text

The shape of the tflite model is [1, 2535, 85]. You can find the tflite model here and label text here.
This is how the bug looks
This is the project I used https://github.com/hunglc007/tensorflow-yolov4-tflite/tree/master/android with a few changes. The changes are as follows:
Added the tflite model and the label text in the assets folder (the label text is already present in the project; it's the same)
Line 57 DetectorActivity.java
private static final String TF_OD_API_MODEL_FILE = "yolov3-tiny.tflite";
private static final String TF_OD_API_LABELS_FILE = "file:///android_asset/coco.txt";
Line 181 tflite/YoloV4Classifier.java
private static boolean isTiny = true;
Line 426 tflite/YoloV4Classifier.java (replace the function with the one below)
This is the code
private ArrayList<Recognition> getDetectionsForTiny(ByteBuffer byteBuffer, Bitmap bitmap) {
    ArrayList<Recognition> detections = new ArrayList<Recognition>();
    Map<Integer, Object> outputMap = new HashMap<>();
    // outputMap.put(0, new float[1][OUTPUT_WIDTH_TINY[0]][4]);
    outputMap.put(0, new float[1][OUTPUT_WIDTH_TINY[1]][labels.size() + 5]);
    Object[] inputArray = {byteBuffer};
    tfLite.runForMultipleInputsOutputs(inputArray, outputMap);
    int gridWidth = OUTPUT_WIDTH_TINY[0];
    float[][][] bboxes = (float[][][]) outputMap.get(0);
    // float[][][] out_score = (float[][][]) outputMap.get(1);
    int count = 0;
    for (int i = 0; i < gridWidth; i++) {
        float maxClass = 0;
        int detectedClass = -1;
        final float[] classes = new float[labels.size()];
        for (int c = 0; c < labels.size(); c++) {
            classes[c] = bboxes[0][i][c + 5];
        }
        for (int c = 0; c < labels.size(); ++c) {
            if (classes[c] > maxClass) {
                detectedClass = c;
                maxClass = classes[c];
            }
        }
        final float score = maxClass;
        if (score > getObjThresh()) {
            final float xPos = bboxes[0][i][0];
            final float yPos = bboxes[0][i][1];
            final float w = bboxes[0][i][2];
            final float h = bboxes[0][i][3];
            final RectF rectF = new RectF(
                    Math.max(0, xPos - w / 2),
                    Math.max(0, yPos - h / 2),
                    Math.min(bitmap.getWidth() - 1, xPos + w / 2),
                    Math.min(bitmap.getHeight() - 1, yPos + h / 2));
            detections.add(new Recognition("" + i, labels.get(detectedClass), score, rectF, detectedClass));
            count++;
        }
    }
    Log.d("Count", " " + count);
    return detections;
}
I don't know where I'm going wrong; I've been struggling with this for days. Thanks for helping.
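For reference (my own note, not part of the original post): the 85 values per candidate box in a [1, 2535, 85] YOLOv3 COCO output are conventionally laid out as 4 box coordinates, 1 objectness score, and 80 class scores, and decoders typically combine objectness with the best class probability rather than using the class probability alone. A rough sketch, reusing outputMap and labels from the code above:
float[][][] out = (float[][][]) outputMap.get(0);       // shape [1][2535][85]
for (int i = 0; i < out[0].length; i++) {
    float objectness = out[0][i][4];                    // index 4 = objectness
    int best = -1;
    float bestProb = 0f;
    for (int c = 0; c < labels.size(); c++) {           // indices 5..84 = class scores
        float p = out[0][i][c + 5];
        if (p > bestProb) { bestProb = p; best = c; }
    }
    float confidence = objectness * bestProb;           // combined score
    // keep the box (out[0][i][0..3] = x, y, w, h) only if confidence clears the threshold
}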

How to change the hue of a selected range in Android OpenCV?

I am writing a program where I can change a specific color range to another color, e.g. changing green to blue.
I have used the Core.inRange function and my desired pixels are captured, but now I want to change only the hue of the selected color pixels:
@Override
public boolean onTouch(View view, MotionEvent motionEvent) {
    int cols = mHsv.cols();
    int rows = mHsv.rows();
    int xOffset = (view.getWidth() - cols) / 2;
    int yOffset = (view.getHeight() - rows) / 2;
    int x = (int) motionEvent.getX() - xOffset;
    int y = (int) motionEvent.getY() - yOffset;
    double[] data = mHsv.get(x, y);
    int i = 12;
    int H, S, V;
    H = (int) data[0];
    S = (int) data[1];
    V = (int) data[2];
    Mat thresh = new Mat();
    Core.inRange(mHsv, new Scalar(H - i, 0, 20), new Scalar(H + i, 255, 255), thresh);
    Utils.matToBitmap(thresh, mBmp);
    mImageView.setImageBitmap(mBmp);
    return false;
}
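For reference (a minimal sketch of mine, not part of the original post), one way to change only the masked pixels is to split the HSV Mat, add an offset to the H channel using thresh as the mask, and merge the channels back. The shiftHue helper name and the shift amount are my own:
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;

// Shift the H channel of the pixels selected by 'mask' in an 8-bit HSV Mat.
// Note: OpenCV stores H in [0, 180) for 8-bit images and Core.add saturates
// rather than wraps, so large shifts may need manual wrap-around handling.
static void shiftHue(Mat hsv, Mat mask, double hueShift) {
    List<Mat> channels = new ArrayList<>();
    Core.split(hsv, channels);                  // channels: H, S, V planes
    Core.add(channels.get(0), new Scalar(hueShift),
            channels.get(0), mask);             // add only where mask != 0
    Core.merge(channels, hsv);                  // recombine into the HSV Mat
}
Calling something like shiftHue(mHsv, thresh, 60) after the Core.inRange line, and then converting mHsv back to RGB with Imgproc.cvtColor for display, should recolor only the selected range.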

Circle drawing out of screen size

Can someone explain how I can make sure the canvas doesn't draw circles outside the screen?
In a screenshot it looks like this:
Click here to see image
As you can see, some of the circles are half outside the screen, but I want all of the circles to be fully inside the screen.
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    Random random = new Random();
    int minRadius = 50;
    int w = this.getWidth();
    int h = this.getHeight();
    Paint paint = new Paint();
    for (int i = 0; i < resultInt; i++) {
        int red = (int) (Math.random() * 255);
        int green = (int) (Math.random() * 255);
        int blue = (int) (Math.random() * 255);
        int randX = random.nextInt(w);
        int randY = random.nextInt(h);
        int color = Color.rgb(red, green, blue);
        paint.setColor(color);
        canvas.drawCircle(randX, randY, minRadius, paint);
    }
}
}
}
You have to subtract double your radius:
int w = this.getWidth() - 2 * minRadius;
int h = this.getHeight() - 2 * minRadius;
And then offset the random point:
int randX = random.nextInt(w) + minRadius;
int randY = random.nextInt(h) + minRadius;
int w = this.getWidth() - minRadius * 2;
int h = this.getHeight() - minRadius * 2;
...
int randX = random.nextInt(w) + minRadius;
int randY = random.nextInt(h) + minRadius;
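Putting that change back into the question's onDraw, a minimal sketch (assuming the same resultInt field from the question) would look like:
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    Random random = new Random();
    Paint paint = new Paint();
    int minRadius = 50;
    // Shrink the random range by one radius on each side so every circle
    // stays fully on screen.
    int w = this.getWidth() - 2 * minRadius;
    int h = this.getHeight() - 2 * minRadius;
    for (int i = 0; i < resultInt; i++) {
        int red = (int) (Math.random() * 255);
        int green = (int) (Math.random() * 255);
        int blue = (int) (Math.random() * 255);
        paint.setColor(Color.rgb(red, green, blue));
        int randX = random.nextInt(w) + minRadius;   // minRadius .. width - minRadius
        int randY = random.nextInt(h) + minRadius;   // minRadius .. height - minRadius
        canvas.drawCircle(randX, randY, minRadius, paint);
    }
}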

How to warp images in Android?

I am developing an application in which there is a module for image warping. I have referred to several sites but could not find any solution to my problem.
Any tutorials/links or suggestions for face warping would be helpful.
This is from the samples shipped with the Android SDK. From your question it's not clear whether you want to know the Android API or the warping algorithm itself.
public class BitmapMesh extends GraphicsActivity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(new SampleView(this));
    }

    private static class SampleView extends View {
        private static final int WIDTH = 20;
        private static final int HEIGHT = 20;
        private static final int COUNT = (WIDTH + 1) * (HEIGHT + 1);
        private final Bitmap mBitmap;
        private final float[] mVerts = new float[COUNT * 2];
        private final float[] mOrig = new float[COUNT * 2];
        private final Matrix mMatrix = new Matrix();
        private final Matrix mInverse = new Matrix();

        private static void setXY(float[] array, int index, float x, float y) {
            array[index * 2 + 0] = x;
            array[index * 2 + 1] = y;
        }

        public SampleView(Context context) {
            super(context);
            setFocusable(true);
            mBitmap = BitmapFactory.decodeResource(getResources(),
                    R.drawable.beach);
            float w = mBitmap.getWidth();
            float h = mBitmap.getHeight();
            // construct our mesh
            int index = 0;
            for (int y = 0; y <= HEIGHT; y++) {
                float fy = h * y / HEIGHT;
                for (int x = 0; x <= WIDTH; x++) {
                    float fx = w * x / WIDTH;
                    setXY(mVerts, index, fx, fy);
                    setXY(mOrig, index, fx, fy);
                    index += 1;
                }
            }
            mMatrix.setTranslate(10, 10);
            mMatrix.invert(mInverse);
        }

        @Override protected void onDraw(Canvas canvas) {
            canvas.drawColor(0xFFCCCCCC);
            canvas.concat(mMatrix);
            canvas.drawBitmapMesh(mBitmap, WIDTH, HEIGHT, mVerts, 0,
                    null, 0, null);
        }

        private void warp(float cx, float cy) {
            final float K = 10000;
            float[] src = mOrig;
            float[] dst = mVerts;
            for (int i = 0; i < COUNT * 2; i += 2) {
                float x = src[i + 0];
                float y = src[i + 1];
                float dx = cx - x;
                float dy = cy - y;
                float dd = dx * dx + dy * dy;
                float d = FloatMath.sqrt(dd);
                float pull = K / (dd + 0.000001f);
                pull /= (d + 0.000001f);
                // android.util.Log.d("skia", "index " + i + " dist=" + d + " pull=" + pull);
                if (pull >= 1) {
                    dst[i + 0] = cx;
                    dst[i + 1] = cy;
                } else {
                    dst[i + 0] = x + dx * pull;
                    dst[i + 1] = y + dy * pull;
                }
            }
        }

        private int mLastWarpX = -9999; // don't match a touch coordinate
        private int mLastWarpY;

        @Override public boolean onTouchEvent(MotionEvent event) {
            float[] pt = { event.getX(), event.getY() };
            mInverse.mapPoints(pt);
            int x = (int) pt[0];
            int y = (int) pt[1];
            if (mLastWarpX != x || mLastWarpY != y) {
                mLastWarpX = x;
                mLastWarpY = y;
                warp(pt[0], pt[1]);
                invalidate();
            }
            return true;
        }
    }
}
Image warping generally consists of two main stages. In the first stage you look for points that match on each image. The second stage involves finding a transformation between the set of matched points. Neither stage is trivial and image warping (generally speaking) remains a difficult problem. I have had to solve this problem in the past and so can speak from experience.
By dividing the problem into two parts you can devise solutions for each part independently. It would be helpful to read some material on the web, http://groups.csail.mit.edu/graphics/classes/CompPhoto06/html/lecturenotes/14_WarpMorph_6.pdf, for example.
In stage one, cross correlation is often used as the basis for finding matching points on the two images.
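To make the cross-correlation idea concrete, here is a small illustrative sketch (my own, not from the answer) of the normalized cross-correlation score between two equally sized grayscale patches; values near 1 indicate a strong match:
// Normalized cross-correlation between two flattened, equally sized patches.
static double ncc(double[] a, double[] b) {
    double meanA = 0, meanB = 0;
    for (int i = 0; i < a.length; i++) { meanA += a[i]; meanB += b[i]; }
    meanA /= a.length;
    meanB /= b.length;
    double num = 0, varA = 0, varB = 0;
    for (int i = 0; i < a.length; i++) {
        num  += (a[i] - meanA) * (b[i] - meanB);
        varA += (a[i] - meanA) * (a[i] - meanA);
        varB += (b[i] - meanB) * (b[i] - meanB);
    }
    return num / Math.sqrt(varA * varB + 1e-12); // ~1 = strong match, ~-1 = inverted
}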
The transformations used in stage two will determine how accurately you can warp one image onto another. A linear transformation will not be very good, while a two-dimensional transformation that uses spline approximation will cope with nonlinearities.

using bitmap overlays

I have an app that places a fisheye distortion effect on a bitmap. To create the distortion I must loop through the entire bitmap, checking whether a given pixel falls within a circle's bounds; if it does, I manipulate that pixel. This process is labour intensive and takes up to 50 seconds. I have been thinking of different ways to do this so I don't have to loop through the whole bitmap to apply the effect.
One idea I have is to draw the bitmap first and display it, then create a second bitmap overlay which only has the effect on it. I could then overlay the second bitmap on the first. I'm just trying to think of ways in which I can apply this effect without looping through as many pixels, to speed things up. I'll post the distortion class below. Thanks.
class Filters {
    private float xscale;
    private float yscale;
    private float xshift;
    private float yshift;
    private int[] s;
    private int[] scalar;
    private int[] s1;
    private int[] s2;
    private int[] s3;
    private int[] s4;
    private String TAG = "Filters";
    long getRadXStart = 0;
    long getRadXEnd = 0;
    long startSample = 0;
    long endSample = 0;

    public Filters() {
        Log.e(TAG, "***********inside filter constructor");
        s = new int[4];
        scalar = new int[4];
        s1 = new int[4];
        s2 = new int[4];
        s3 = new int[4];
        s4 = new int[4];
    }

    public Bitmap barrel(Bitmap input, float k, float cenx, float ceny) {
        //Log.e(TAG, "***********INSIDE BARREL METHOD ");
        Debug.startMethodTracing("barrel");
        //float centerX=input.getWidth()/2; //center of distortion
        //float centerY=input.getHeight()/2;
        float centerX = cenx;
        float centerY = ceny;
        int width = input.getWidth(); //image bounds
        int height = input.getHeight();
        Bitmap dst = Bitmap.createBitmap(width, height, input.getConfig()); //output pic
        // Log.e(TAG, "***********dst bitmap created ");
        xshift = calc_shift(0, centerX - 1, centerX, k);
        float newcenterX = width - centerX;
        float xshift_2 = calc_shift(0, newcenterX - 1, newcenterX, k);
        yshift = calc_shift(0, centerY - 1, centerY, k);
        float newcenterY = height - centerY;
        float yshift_2 = calc_shift(0, newcenterY - 1, newcenterY, k);
        xscale = (width - xshift - xshift_2) / width;
        // Log.e(TAG, "***********xscale ="+xscale);
        yscale = (height - yshift - yshift_2) / height;
        // Log.e(TAG, "***********yscale ="+yscale);
        // Log.e(TAG, "***********filter.barrel() about to loop through bm");
        /*for(int j=0;j<dst.getHeight();j++){
            for(int i=0;i<dst.getWidth();i++){
                float x = getRadialX((float)i,(float)j,centerX,centerY,k);
                float y = getRadialY((float)i,(float)j,centerX,centerY,k);
                sampleImage(input,x,y);
                int color = ((s[1]&0x0ff)<<16)|((s[2]&0x0ff)<<8)|(s[3]&0x0ff);
                // System.out.print(i+" "+j+" \\");
                dst.setPixel(i, j, color);
            }
        }*/
        int origPixel;
        long startLoop = System.currentTimeMillis();
        for (int j = 0; j < dst.getHeight(); j++) {
            for (int i = 0; i < dst.getWidth(); i++) {
                origPixel = input.getPixel(i, j);
                getRadXStart = System.currentTimeMillis();
                float x = getRadialX((float) j, (float) i, centerX, centerY, k);
                getRadXEnd = System.currentTimeMillis();
                float y = getRadialY((float) j, (float) i, centerX, centerY, k);
                sampleImage(input, x, y);
                int color = ((s[1] & 0x0ff) << 16) | ((s[2] & 0x0ff) << 8) | (s[3] & 0x0ff);
                // System.out.print(i+" "+j+" \\");
                //if( Math.sqrt( Math.pow(i - centerX, 2) + ( Math.pow(j - centerY, 2) ) ) <= 150 ){
                if (Math.pow(i - centerX, 2) + (Math.pow(j - centerY, 2)) <= 22500) {
                    dst.setPixel(i, j, color);
                } else {
                    dst.setPixel(i, j, origPixel);
                }
            }
        }
        long endLoop = System.currentTimeMillis();
        long loopDuration = endLoop - startLoop;
        long radXDuration = getRadXEnd - getRadXStart;
        long sampleDur = endSample - startSample;
        Log.e(TAG, "sample method took " + sampleDur + "ms");
        Log.e(TAG, "getRadialX took " + radXDuration + "ms");
        Log.e(TAG, "loop took " + loopDuration + "ms");
        // Log.e(TAG, "***********filter.barrel() looped through bm about to return dst bm");
        Debug.stopMethodTracing();
        return dst;
    }

    void sampleImage(Bitmap arr, float idx0, float idx1) {
        startSample = System.currentTimeMillis();
        // s = new int [4];
        if (idx0 < 0 || idx1 < 0 || idx0 > (arr.getHeight() - 1) || idx1 > (arr.getWidth() - 1)) {
            s[0] = 0;
            s[1] = 0;
            s[2] = 0;
            s[3] = 0;
            return;
        }
        float idx0_fl = (float) Math.floor(idx0);
        float idx0_cl = (float) Math.ceil(idx0);
        float idx1_fl = (float) Math.floor(idx1);
        float idx1_cl = (float) Math.ceil(idx1);
        /* float idx0_fl=idx0;
        float idx0_cl=idx0;
        float idx1_fl=idx1;
        float idx1_cl=idx1;*/
        /* int [] s1 = getARGB(arr,(int)idx0_fl,(int)idx1_fl);
        int [] s2 = getARGB(arr,(int)idx0_fl,(int)idx1_cl);
        int [] s3 = getARGB(arr,(int)idx0_cl,(int)idx1_cl);
        int [] s4 = getARGB(arr,(int)idx0_cl,(int)idx1_fl);*/
        s1 = getARGB(arr, (int) idx0_fl, (int) idx1_fl);
        s2 = getARGB(arr, (int) idx0_fl, (int) idx1_cl);
        s3 = getARGB(arr, (int) idx0_cl, (int) idx1_cl);
        s4 = getARGB(arr, (int) idx0_cl, (int) idx1_fl);
        float x = idx0 - idx0_fl;
        float y = idx1 - idx1_fl;
        s[0] = (int) (s1[0] * (1 - x) * (1 - y) + s2[0] * (1 - x) * y + s3[0] * x * y + s4[0] * x * (1 - y));
        s[1] = (int) (s1[1] * (1 - x) * (1 - y) + s2[1] * (1 - x) * y + s3[1] * x * y + s4[1] * x * (1 - y));
        s[2] = (int) (s1[2] * (1 - x) * (1 - y) + s2[2] * (1 - x) * y + s3[2] * x * y + s4[2] * x * (1 - y));
        s[3] = (int) (s1[3] * (1 - x) * (1 - y) + s2[3] * (1 - x) * y + s3[3] * x * y + s4[3] * x * (1 - y));
        endSample = System.currentTimeMillis();
    }

    int[] getARGB(Bitmap buf, int x, int y) {
        int rgb = buf.getPixel(y, x); // Returns by default ARGB.
        // int [] scalar = new int[4];
        scalar[0] = (rgb >>> 24) & 0xFF;
        scalar[1] = (rgb >>> 16) & 0xFF;
        scalar[2] = (rgb >>> 8) & 0xFF;
        scalar[3] = (rgb >>> 0) & 0xFF;
        return scalar;
    }

    float getRadialX(float x, float y, float cx, float cy, float k) {
        x = (x * xscale + xshift);
        y = (y * yscale + yshift);
        float res = x + ((x - cx) * k * ((x - cx) * (x - cx) + (y - cy) * (y - cy)));
        return res;
    }

    float getRadialY(float x, float y, float cx, float cy, float k) {
        x = (x * xscale + xshift);
        y = (y * yscale + yshift);
        float res = y + ((y - cy) * k * ((x - cx) * (x - cx) + (y - cy) * (y - cy)));
        return res;
    }

    float thresh = 1;

    float calc_shift(float x1, float x2, float cx, float k) {
        float x3 = (float) (x1 + (x2 - x1) * 0.5);
        float res1 = x1 + ((x1 - cx) * k * ((x1 - cx) * (x1 - cx)));
        float res3 = x3 + ((x3 - cx) * k * ((x3 - cx) * (x3 - cx)));
        if (res1 > -thresh && res1 < thresh)
            return x1;
        if (res3 < 0) {
            return calc_shift(x3, x2, cx, k);
        } else {
            return calc_shift(x1, x3, cx, k);
        }
    }
} // end of filters class
[update]
Hi, I've not watched all of the video yet because I only have so much data allowance on my dongle, so I'm going to wait until I'm at work to watch it. I've modified the code to the version below. This stores the pixel data in an int array, so there's no call to dst.setPixel. It's still very slow (14 seconds on a 3.2 MP camera), not at all like the few seconds your code takes. Can you share that code, or tell me if this is not what you meant? Thanks, Matt.
int origPixel = 0;
int[] arr = new int[input.getWidth() * input.getHeight()];
int color = 0;
int p = 0;
int i = 0;
for (int j = 0; j < dst.getHeight(); j++) {
    for (i = 0; i < dst.getWidth(); i++, p++) {
        origPixel = input.getPixel(i, j);
        float x = getRadialX((float) j, (float) i, centerX, centerY, k);
        float y = getRadialY((float) j, (float) i, centerX, centerY, k);
        sampleImage(input, x, y);
        color = ((s[1] & 0x0ff) << 16) | ((s[2] & 0x0ff) << 8) | (s[3] & 0x0ff);
        // System.out.print(i+" "+j+" \\");
        //if( Math.sqrt( Math.pow(i - centerX, 2) + ( Math.pow(j - centerY, 2) ) ) <= 150 ){
        if (Math.pow(i - centerX, 2) + (Math.pow(j - centerY, 2)) <= 22500) {
            //dst.setPixel(i, j, color);
            arr[p] = color;
            Log.e(TAG, "***********arr = " + arr[i] + " i = " + i);
        } else {
            //dst.setPixel(i,j,origPixel);
            arr[p] = origPixel;
        }
    }
}
// Log.e(TAG, "***********filter.barrel() looped through bm about to return dst bm");
Debug.stopMethodTracing();
Bitmap dst2 = Bitmap.createBitmap(arr, width, height, input.getConfig());
return dst2;
}
I bet you'd cut down considerably on your execution time if you eliminated the call to dst.setPixel inside your inner loop. Instead of operating on the Bitmaps inside your loop, stuff the values into integer arrays during your loop and call setPixels at the end passing in the array.
I have image manipulation code that can loop through an entire 2MP image in a few seconds.
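As a rough illustration of that approach (my own sketch, not the answerer's actual code; applyEffect is a hypothetical stand-in for the fisheye sampling), the per-pixel Bitmap calls can be replaced by one bulk getPixels up front and one bulk setPixels at the end:
// Sketch of batching pixel access: read everything once, work on the int
// array, write everything once.
int width = input.getWidth();
int height = input.getHeight();
int[] pixels = new int[width * height];
input.getPixels(pixels, 0, width, 0, 0, width, height);    // one bulk read
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int idx = y * width + x;
        pixels[idx] = applyEffect(pixels[idx], x, y);       // pure array work, no Bitmap calls
    }
}
Bitmap out = Bitmap.createBitmap(width, height, input.getConfig());
out.setPixels(pixels, 0, width, 0, 0, width, height);       // one bulk write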
On older Android APIs (I believe earlier than 2.3, but it might even include 2.3), the actual image data does not reside in the managed heap, so there's probably some expensive operation going on to find the actual location of the bits you're overwriting in the call to setPixel. The source of my information is the Google I/O 2011 video on memory management in Android. If you're doing this kind of work in Android, it's a must watch:
http://www.youtube.com/watch?v=_CruQY55HOk
