Overlapping GLSurfaceView on SurfaceView and vice versa - android

ExoPlayer - SurfaceView
Camera2 + MediaCodec - GLSurfaceView
I am using the above views for video playback and camera recording.
UI-1: Exo-Surf at the center and Cam-GLS in the top right corner.
UI-2: Cam-GLS at the center and Exo-Surf in the top right corner.
To achieve this I am using setZOrderOnTop to set the z-order, as both views are inside a RelativeLayout.
(exoPlayerView.videoSurfaceView as? SurfaceView)?.setZOrderOnTop(true/false)
It seems to work fine on a Samsung S9+ with API 29 (Android 10), and also on API 28.
But on API 21-27 it behaves randomly, with issues such as:
- The top part of the SurfaceView/GLSurfaceView is not visible
- The bottom part of the SurfaceView/GLSurfaceView is not visible
- The entire SurfaceView/GLSurfaceView becomes completely transparent in the top right corner
I also tried setZOrderMediaOverlay, but no luck.
I am sure two SurfaceViews can work together, as WhatsApp and Google Duo use them in video calls. But I am wondering if GLSurfaceView is causing the issue ("something about locking the GL thread"), as mentioned in a comment below this answer.
I'm hoping for a working solution for API 21+; any reference link or suggestion would be highly appreciated.

Instead of using the built-in GLSurfaceView, you'll have to create multiple SurfaceViews, and manage how OpenGL draws on one (or more) of those.
The Grafika code (that I mentioned in my comment to the answer you link) is here:
https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/MultiSurfaceActivity.java
In that code, onCreate creates the surfaces:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_multi_surface_test);

    // #1 is at the bottom; mark it as secure just for fun. By default, this will use
    // the RGB565 color format.
    mSurfaceView1 = (SurfaceView) findViewById(R.id.multiSurfaceView1);
    mSurfaceView1.getHolder().addCallback(this);
    mSurfaceView1.setSecure(true);

    // #2 is above it, in the "media overlay"; must be translucent or we will totally
    // obscure #1 and it will be ignored by the compositor. The addition of the alpha
    // plane should switch us to RGBA8888.
    mSurfaceView2 = (SurfaceView) findViewById(R.id.multiSurfaceView2);
    mSurfaceView2.getHolder().addCallback(this);
    mSurfaceView2.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    mSurfaceView2.setZOrderMediaOverlay(true);

    // #3 is above everything, including the UI. Also translucent.
    mSurfaceView3 = (SurfaceView) findViewById(R.id.multiSurfaceView3);
    mSurfaceView3.getHolder().addCallback(this);
    mSurfaceView3.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    mSurfaceView3.setZOrderOnTop(true);
}
The initial draw code is in:
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height)
which calls different local methods depending on some local flags. For example, it demonstrates GL drawing in this method:
private void drawRectSurface(Surface surface, int left, int top, int width, int height) {
    EglCore eglCore = new EglCore();
    WindowSurface win = new WindowSurface(eglCore, surface, false);
    win.makeCurrent();
    GLES20.glClearColor(0, 0, 0, 0);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    for (int i = 0; i < 4; i++) {
        int x, y, w, h;
        if (width < height) {
            // vertical
            w = width / 4;
            h = height;
            x = left + w * i;
            y = top;
        } else {
            // horizontal
            w = width;
            h = height / 4;
            x = left;
            y = top + h * i;
        }
        GLES20.glScissor(x, y, w, h);
        switch (i) {
            case 0: // 50% blue at 25% alpha, pre-multiplied
                GLES20.glClearColor(0.0f, 0.0f, 0.125f, 0.25f);
                break;
            case 1: // 100% blue at 25% alpha, pre-multiplied
                GLES20.glClearColor(0.0f, 0.0f, 0.25f, 0.25f);
                break;
            case 2: // 200% blue at 25% alpha, pre-multiplied (should get clipped)
                GLES20.glClearColor(0.0f, 0.0f, 0.5f, 0.25f);
                break;
            case 3: // 100% white at 25% alpha, pre-multiplied
                GLES20.glClearColor(0.25f, 0.25f, 0.25f, 0.25f);
                break;
        }
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);

    win.swapBuffers();
    win.release();
    eglCore.release();
}
I haven't used this code, so I can only suggest you search for additional details about the various calls you see in that code.
FIRST, try to get a simple example working that has two overlapping SurfaceViews, WITHOUT any OpenGL calls. E.g. solid background color views that overlap. And I reiterate the key point: Do Not make either of them a GLSurfaceView!
THEN attempt to change one of the views to initialize and use OpenGL. (Using logic similar to the code I describe above; still NOT a GLSurfaceView.)
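If it helps, here is a minimal sketch of that first step: two plain SurfaceViews that overlap, each filled with a solid color through lockCanvas, with no OpenGL anywhere. The layout file and view ids (activity_overlap, bottomSurface, topSurface) are hypothetical placeholders:

import android.app.Activity;
import android.graphics.Canvas;
import android.graphics.PixelFormat;
import android.graphics.PorterDuff;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class OverlapActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Hypothetical layout: a RelativeLayout holding two overlapping SurfaceViews.
        setContentView(R.layout.activity_overlap);

        SurfaceView bottom = (SurfaceView) findViewById(R.id.bottomSurface);
        SurfaceView top = (SurfaceView) findViewById(R.id.topSurface);

        // The top surface must be translucent and in the media overlay layer,
        // or it will completely obscure the bottom one.
        top.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        top.setZOrderMediaOverlay(true);

        bottom.getHolder().addCallback(fillWith(0xFF0000FF)); // opaque blue
        top.getHolder().addCallback(fillWith(0x8000FF00));    // half-transparent green
    }

    // Returns a callback that fills the surface with a single color once it exists.
    private static SurfaceHolder.Callback fillWith(final int color) {
        return new SurfaceHolder.Callback() {
            @Override
            public void surfaceCreated(SurfaceHolder holder) {
                Canvas c = holder.lockCanvas();
                c.drawColor(color, PorterDuff.Mode.SRC); // SRC writes the alpha as-is
                holder.unlockCanvasAndPost(c);
            }

            @Override
            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}

            @Override
            public void surfaceDestroyed(SurfaceHolder holder) {}
        };
    }
}

Once both views render where you expect across your target API levels, swap the drawing in one of them for EGL/GLES calls on a dedicated thread, along the lines of the Grafika sample above.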

Related

Which is the best approach for dynamic drawing in Android?

I want to make a waveform drawing for an audio recorder in Android. The usual one with lines/bars, like this one:
More importantly, I want it live, while the song is being recorded. My app already computes the RMS through AudioRecord. But I am not sure which is the best approach for the actual drawing in terms of processing, resources, battery, etc.
The Visualizer does not show anything meaningful, IMO (are those graphs more or less random stuff??).
I've seen the canvas approach and the layout approach (there are probably more?). In the layout approach you add thin vertical layouts to a horizontal layout. The advantage is that you don't need to redraw the whole thing every 1/n seconds; you just add one layout every 1/n seconds... but you need hundreds of layouts (depending on n). In the canvas approach, you need to redraw the whole thing (right??) n times per second. Some even create bitmaps for each drawing...
So, which is cheaper, and why? Is there anything better nowadays? What update frequency (i.e., what n) is too much for generic low-end devices?
EDIT1
Thanks to the beautiful trick @cactustictacs taught me in his answer, I was able to implement this with ease. Yet the image is strangely rendered with a kind of motion blur:
The waveform runs from right to left. You can easily see the blur in motion, and the left-most and right-most pixels get "contaminated" by the other end. I guess I can just cut off both extremes...
This renders better if I make my Bitmap bigger (i.e., increase widthBitmap), but then onDraw gets heavier...
This is my full code:
package com.floritfoto.apps.ave;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.util.AttributeSet;

import java.util.Arrays;

public class Waveform extends androidx.appcompat.widget.AppCompatImageView {
    //private float lastPosition = 0.5f; // 0.5 for drawLine method, 0 for the others
    private int lastPosition = 0;
    private final int widthBitmap = 50;
    private final int heightBitmap = 80;
    private final int[] transpixels = new int[heightBitmap];
    private final int[] whitepixels = new int[heightBitmap];
    //private float top, bot; // float for drawLine method, int for the others
    private int aux, top;
    //private float lpf;
    private int width = widthBitmap;
    private float proportionW = (float) width / widthBitmap;
    boolean firstLoopIsFinished = false;
    Bitmap MyBitmap = Bitmap.createBitmap(widthBitmap, heightBitmap, Bitmap.Config.ARGB_8888);
    //Canvas canvasB = new Canvas(MyBitmap);
    Paint MyPaint = new Paint();
    Paint MyPaintTrans = new Paint();
    Rect rectLbit, rectRbit, rectLdest, rectRdest;

    public Waveform(Context context, AttributeSet attrs) {
        super(context, attrs);
        MyPaint.setColor(0xffFFFFFF);
        MyPaint.setStrokeWidth(1);
        MyPaintTrans.setColor(0xFF202020);
        MyPaintTrans.setStrokeWidth(1);
        Arrays.fill(transpixels, 0xFF202020);
        Arrays.fill(whitepixels, 0xFFFFFFFF);
    }

    public void drawNewBar() {
        // For drawRect or drawLine
        /*
        top = ((1.0f - Register.tone) * heightBitmap / 2.0f);
        bot = ((1.0f + Register.tone) * heightBitmap / 2.0f);

        // Using drawRect
        //if (firstLoopIsFinished) canvasB.drawRect(lastPosition, 0, lastPosition+1, heightBitmap, MyPaintTrans); // Delete last stuff
        //canvasB.drawRect(lastPosition, top, lastPosition+1, bot, MyPaint);

        // Using drawLine
        if (firstLoopIsFinished) canvasB.drawLine(lastPosition, 0, lastPosition, heightBitmap, MyPaintTrans); // Delete previous stuff
        canvasB.drawLine(lastPosition, top, lastPosition, bot, MyPaint);
        */

        // Using setPixel (pointless; setPixels is much better)
        /*
        int top = (int) ((1.0f - Register.tone) * heightBitmap / 2.0f);
        int bot = (int) ((1.0f + Register.tone) * heightBitmap / 2.0f);
        if (firstLoopIsFinished) {
            for (int i = 0; i < top; ++i) {
                MyBitmap.setPixel(lastPosition, i, 0xFF202020);
                MyBitmap.setPixel(lastPosition, heightBitmap - i - 1, 0xFF202020);
            }
        }
        for (int i = top; i < bot; ++i) {
            MyBitmap.setPixel(lastPosition, i, 0xffFFFFFF);
        }
        //System.out.println("############## "+top+" "+bot);
        */

        // Using setPixels. Works!!
        top = (int) ((1.0f - Register.tone) * heightBitmap / 2.0f);
        if (firstLoopIsFinished)
            MyBitmap.setPixels(transpixels, 0, 1, lastPosition, 0, 1, heightBitmap);
        MyBitmap.setPixels(whitepixels, top, 1, lastPosition, top, 1, heightBitmap - 2 * top);
        lastPosition++;
        aux = (int) (width - proportionW * lastPosition);
        rectLbit.right = lastPosition;
        rectRbit.left = lastPosition;
        rectLdest.right = aux;
        rectRdest.left = aux;
        if (lastPosition >= widthBitmap) { firstLoopIsFinished = true; lastPosition = 0; }
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        width = w;
        proportionW = (float) width / widthBitmap;
        rectLbit = new Rect(0, 0, widthBitmap, heightBitmap);
        rectRbit = new Rect(0, 0, widthBitmap, heightBitmap);
        rectLdest = new Rect(0, 0, width, h);
        rectRdest = new Rect(0, 0, width, h);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        drawNewBar();
        canvas.drawBitmap(MyBitmap, rectLbit, rectRdest, MyPaint);
        canvas.drawBitmap(MyBitmap, rectRbit, rectLdest, MyPaint);
    }
}
EDIT2
I was able to prevent the blurring just by using null as the Paint in canvas.drawBitmap:
canvas.drawBitmap(MyBitmap, rectLbit, rectRdest, null);
canvas.drawBitmap(MyBitmap, rectRbit, rectLdest, null);
No Paints needed.
Your basic custom view approach would be to implement onDraw and redraw your current data each frame. You'd probably keep some kind of circular Buffer holding your most recent n amplitude values, so each frame you'd iterate over those, and use drawRect to draw the bars (you'd calculate things like width, height scaling, start and end positions etc in onSizeChanged, and use those values when defining the coordinates for the Rects).
That in itself might be fine! The only way you can really tell how expensive draw calls are is to benchmark them, so you could try this approach out and see how it goes. Profile it to see how much time it takes, how much the CPU spikes etc.
There are a few things you can do to make onDraw as efficient as possible, mostly things like avoiding object allocations - so watch out for loop functions that create Iterators, and in the same way you're supposed to create a Paint once instead of creating them over and over in onDraw, you could reuse a single Rect object by setting its coordinates for each bar you need to draw.
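As a concrete illustration, a minimal sketch of that drawRect approach might look like this (the field names and the 0..1 amplitude range are assumptions, not the asker's code):

private final float[] amplitudes = new float[100]; // most recent values, range 0..1
private final Paint barPaint = new Paint();
private final Rect bar = new Rect(); // reused to avoid allocations in onDraw
private float barWidth;

@Override
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
    super.onSizeChanged(w, h, oldw, oldh);
    barWidth = (float) w / amplitudes.length; // precompute scaling here, not per frame
}

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    int mid = getHeight() / 2;
    for (int i = 0; i < amplitudes.length; i++) {
        int half = (int) (amplitudes[i] * mid); // half the bar height, centered vertically
        bar.set((int) (i * barWidth), mid - half, (int) ((i + 1) * barWidth), mid + half);
        canvas.drawRect(bar, barPaint);
    }
}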
Another approach you could try is creating a working Bitmap in your custom view, which you control, and calling drawBitmap inside onDraw to paint it onto the Canvas. That should be a pretty inexpensive call, and it can easily be stretched as required to fit the view.
The idea there is that every time you get new data, you paint it onto the bitmap. Because of how your waveform looks (like blocks), and the fact you can scale it up, really all you need is a single vertical line of pixels for each value, right? So as the data comes in, you paint an extra line onto your already-existing bitmap, adding to the image. Instead of painting the entire waveform block by block every frame, you're just adding the new blocks.
The complication there is when you "fill" the bitmap - now you have to "shift" all the pixels to the left, dropping the oldest ones on the left side, so you can draw the new ones on the right. So you'll need a way to do that!
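One way to do that shift, as a sketch (Bitmap.getPixels/setPixels are real APIs; the helper itself is hypothetical and assumes a scratch int[] of width * height ints):

// Shift the whole bitmap one column to the left, freeing the rightmost
// column for the newest value.
void shiftLeft(Bitmap bmp, int[] scratch) {
    int w = bmp.getWidth();
    int h = bmp.getHeight();
    // Read every column except the leftmost (the oldest data)...
    bmp.getPixels(scratch, 0, w, 1, 0, w - 1, h);
    // ...and write the block back starting one column earlier.
    bmp.setPixels(scratch, 0, w, 0, 0, w - 1, h);
}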
Another approach would be something similar to the circular buffer idea. If you don't know what that is, the idea is you take a normal buffer with a start and an end, but you treat one of the indices as your data's start point, wrap around to 0 when you hit the last index of the buffer, and stop when you hit the index you're calling your end point:
Partially filled buffer:

|start
123400
   |end

Data: 1234

Full buffer:

|start
123456
     |end

Data: 123456

After adding one more item:

 |start
723456
|end

Data: 234567
See how once it's full, you shift the start and end one step "right", wrapping around if necessary? So you always have the most recent 6 values added. You just have to handle reading from the correct index ranges, from start -> lastIndex and then firstIndex -> end
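In code, a minimal circular buffer along those lines might be (illustrative, with a tiny 6-slot buffer to match the diagram):

float[] buf = new float[6];
int start = 0; // index of the oldest value
int size = 0;  // how many slots are filled

void add(float value) {
    buf[(start + size) % buf.length] = value; // when full, this overwrites the oldest slot
    if (size < buf.length) {
        size++;                               // still filling up
    } else {
        start = (start + 1) % buf.length;     // full: advance start past the overwritten value
    }
}
// Read oldest-to-newest as buf[(start + i) % buf.length] for i in [0, size).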
You could do the same thing with a bitmap - start "filling" it from the left, increasing end so you can draw the next vertical line. Once it's full, start filling from the left by moving end there. When you actually draw the bitmap, instead of drawing the whole thing as-is (723456) you draw it in two parts (23456 then 7). Make sense? When you draw a bitmap to the canvas, there's a call that takes a source Rect and a destination one, so you can draw it in two chunks.
You could always redraw the bitmap from scratch each frame (clear it and draw the vertical lines), so you're basically redrawing your whole data buffer each time. Probably still faster than the drawRect approach for each value, but honestly not much easier than the "treat the bitmap as another circular buffer" method. If you're already managing one circular buffer, it's not much more work - since the buffer and the bitmap will have the same number of values (horizontal pixels in the bitmap's case) you can use the same start and end values for both
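The two-chunk draw itself could look roughly like this sketch (bitmap, start, and a view of size vw x vh are assumed; in real code you would reuse the Rect/RectF objects rather than allocate them in onDraw, per the advice above):

int bw = bitmap.getWidth(), bh = bitmap.getHeight();
float scale = (float) vw / bw;
// Oldest columns (start .. end of bitmap) land on the left of the view...
canvas.drawBitmap(bitmap, new Rect(start, 0, bw, bh),
        new RectF(0, 0, (bw - start) * scale, vh), null);
// ...newest columns (0 .. start) land on the right.
canvas.drawBitmap(bitmap, new Rect(0, 0, start, bh),
        new RectF((bw - start) * scale, 0, vw, vh), null);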
You would never do this with layouts. Layouts are for premade components. They're high-level combinations of components, and you don't want to dynamically add or remove views from them frequently. For this, you use a custom view with a canvas. Layouts aren't even an option for something like this.

Draw a segmented circle in Android: OpenGL vs Cavans?

I need to draw something like this:
I was hoping that this guy posted some code of how he drew his segmented circle to begin with, but alas he didn't.
I also need to know which segment is where after interaction with the wheel - for instance if the wheel is rotated, I need to know where the original segments are after the rotation action.
Two questions:
Do I draw this segmented circle (with varying colours and content placed on the segment) with OpenGL or using Android Canvas?
Using either of the options, how do I register which segment is where?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EDIT:
Ok, so I've figured out how to draw the segmented circle using Canvas (I'll post the code as an answer). And I'm sure I'll figure out how to rotate the circle soon. But I'm still unsure how I'll recognize a separate segment of the drawn wheel after the rotation action.
What I'm thinking of doing is drawing the segmented circle with these wedges, and then handling the entire Canvas like an ImageView when I want to rotate it as if it's spinning. But when the spinning stops, how do I differentiate between the original segments drawn on the Canvas?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I've read about how to draw a segment on its own (here also), with OpenGL, with Canvas, and even by drawing shapes and layering them, but I've yet to see someone explain how to recognize the separate segments.
Can drawBitmap() or createBitmap() perhaps be used?
If I go with OpenGL, I'll probably be able to rotate the segmented wheel using OpenGL's rotation, right?
I've also read that OpenGL might be too powerful for what I'd like to do, so should I rather consider "the graphic components of a game library built on top of OpenGL"?
This kind of answers my first question above - how to draw the segmented circle using Android Canvas:
Using the code found here, I do this in the onDraw function:
// Starting values
private int startAngle = 0;
private int numberOfSegments = 11;
private int sweepAngle = 360 / numberOfSegments;

@Override
protected void onDraw(Canvas canvas) {
    setUpPaint();
    setUpDrawingArea();
    colours = getColours();
    Log.d(TAG, "Draw the segmented circle");
    for (int i = 0; i < numberOfSegments; i++) {
        // Pick a colour that is not the previous colour
        paint.setColor(colours.get(pickRandomColour()));
        // Draw arc
        canvas.drawArc(rectF, startAngle, sweepAngle, true, paint);
        // Set variable values
        startAngle -= sweepAngle;
    }
}
This is how I set up the drawing area based on the device's screen size:
private void setUpDrawingArea() {
    Log.d(TAG, "Set up drawing area.");

    // First get the screen dimensions
    Point size = new Point();
    Display display = DrawArcActivity.this.getWindowManager().getDefaultDisplay();
    display.getSize(size);
    int width = size.x;
    int height = size.y;
    Log.d(TAG, "Screen size = " + width + " x " + height);

    // Set up the padding
    int paddingLeft = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
    int paddingTop = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
    int paddingRight = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);
    int paddingBottom = (int) DrawArcActivity.this.getResources().getDimension(R.dimen.padding_large);

    // Then get the left, top, right and bottom Xs and Ys for the rectangle we're going to draw in
    int left = 0 + paddingLeft;
    int top = 0 + paddingTop;
    int right = width - paddingRight;
    int bottom = width - paddingBottom;
    Log.d(TAG, "Rectangle placement -> left = " + left + ", top = " + top + ", right = " + right + ", bottom = " + bottom);
    rectF = new RectF(left, top, right, bottom);
}
That (and the other functions, which are pretty straightforward, so I'm not going to paste their code here) draws this:
The segments are different colours on every run.
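As for telling which segment is where after the wheel rotates: if you track the wheel's accumulated rotation in degrees, you can convert a touch point to an angle around the center and undo the rotation. A sketch (the rotation-tracking field is assumed; segment 0 is taken to start at angle 0 before any rotation):

private int segmentAt(float touchX, float touchY, float centerX, float centerY,
                      float rotationDegrees, int numberOfSegments) {
    // atan2 in screen coordinates (y grows downward) increases clockwise,
    // which matches Canvas.drawArc's angle convention.
    double angle = Math.toDegrees(Math.atan2(touchY - centerY, touchX - centerX));
    // Undo the wheel's rotation and normalize into [0, 360).
    double local = ((angle - rotationDegrees) % 360 + 360) % 360;
    return (int) (local / (360.0 / numberOfSegments));
}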

Converting Camera Coordinates to Custom View Coordinates

I am trying to make a simple face detection app consisting of a SurfaceView (essentially a camera preview) and a custom View (for drawing purposes) stacked on top. The two views are essentially the same size, stacked on one another in a RelativeLayout. When a person's face is detected, I want to draw a white rectangle on the custom View around their face.
The Camera.Face.rect object returns the face bound coordinates using the coordinate system explained here and the custom View uses the coordinate system described in the answer to this question. Some sort of conversion is needed before I can use it to draw on the canvas.
Therefore, I wrote an additional method ScaleFacetoView() in my custom view class (below). I redraw the custom view every time a face is detected by overriding the onFaceDetection() method. The result is that the white box appears correctly when a face is in the center. The problem I noticed is that it does not correctly track my face when it moves to other parts of the screen.
Namely, if I move my face:
Up - the box goes left
Down - the box goes right
Right - the box goes upwards
Left - the box goes down
I seem to have incorrectly mapped the values when scaling the coordinates. The Android docs provide this method of converting using a matrix, but it is rather confusing and I have no idea what it is doing. Can anyone provide some code showing the correct way of converting Camera.Face coordinates to view coordinates?
Here's the code for my ScaleFacetoView() method.
public void ScaleFacetoView(Face[] data, int width, int height) {
    // Extract data from the face object and account for the 1000-value offset
    mLeft = data[0].rect.left + 1000;
    mRight = data[0].rect.right + 1000;
    mTop = data[0].rect.top + 1000;
    mBottom = data[0].rect.bottom + 1000;

    // Compute the scale factors
    float xScaleFactor = 1;
    float yScaleFactor = 1;
    if (height > width) {
        xScaleFactor = (float) width / 2000.0f;
        yScaleFactor = (float) height / 2000.0f;
    } else if (height < width) {
        xScaleFactor = (float) height / 2000.0f;
        yScaleFactor = (float) width / 2000.0f;
    }

    // Scale the face parameters
    mLeft = mLeft * xScaleFactor;     // X-coordinate
    mRight = mRight * xScaleFactor;   // X-coordinate
    mTop = mTop * yScaleFactor;       // Y-coordinate
    mBottom = mBottom * yScaleFactor; // Y-coordinate
}
As mentioned above, I call the custom view like so:
@Override
public void onFaceDetection(Face[] arg0, Camera arg1) {
    if (arg0.length == 1) {
        // Get aspect ratio of the screen
        View parent = (View) mRectangleView.getParent();
        int width = parent.getWidth();
        int height = parent.getHeight();

        // Modify xy values in the view object
        mRectangleView.ScaleFacetoView(arg0, width, height);
        mRectangleView.setInvalidate();
        //Toast.makeText( cc ,"Redrew the face.", Toast.LENGTH_SHORT).show();
        mRectangleView.setVisibility(View.VISIBLE);
        //rest of code
Using the explanation Kenny gave, I managed to do the following.
This example works with the front-facing camera.
RectF rectF = new RectF(face.rect);
Matrix matrix = new Matrix();
matrix.setScale(1, 1);
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(rectF);
The rectangle returned by the matrix has all the right coordinates to draw onto the canvas.
If you are using the back camera, I think it is just a matter of changing the scale to:
matrix.setScale(-1, 1);
But I haven't tried that.
The Camera.Face class returns the face bound coordinates using the image frame that the phone would save to its internal storage, rather than the image displayed in the camera preview. In my case, the images were saved in a different orientation from the camera preview, resulting in an incorrect mapping. I had to manually account for the discrepancy by taking the coordinates, rotating them counter-clockwise 90 degrees, and flipping them on the y-axis prior to scaling them to the canvas used for the custom view.
EDIT:
It would also appear that you can't change the way the face bound coordinates are returned by modifying the camera capture orientation using the Camera.Parameters.setRotation(int) method either.
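For completeness, the matrix-based conversion from the Camera.Face documentation handles the mirroring and the display orientation in one transform; roughly like this (displayOrientation is whatever you passed to Camera.setDisplayOrientation, and view is the overlay being drawn on):

Matrix matrix = new Matrix();
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);
// Mirror horizontally for the front-facing camera.
boolean mirror = (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT);
matrix.setScale(mirror ? -1 : 1, 1);
// Rotate by the same value passed to Camera.setDisplayOrientation(int).
matrix.postRotate(displayOrientation);
// Driver coordinates run from (-1000, -1000) to (1000, 1000);
// view coordinates run from (0, 0) to (width, height).
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);

RectF faceRect = new RectF(face.rect);
matrix.mapRect(faceRect); // faceRect is now in view coordinates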

Painting text over background image is dim

I'm painting text over a background image on a canvas. I move the image interactively (like a Ouija board pointer). I've set the canvas to black, the pointer is red and I want to write white text over it so that the pointer has a player's name on it.
In Android 2.3.4 it appears as solid white text on top of the red pointer which is pretty clear, but I'd like to use any color. In Android 4.1.2 I can barely see the white text. Here's my code:
public Pointer(Context context) {
    super(context);
    paintBg = new Paint();
    paintBg.setColor(Color.BLACK);
    paintName = new Paint();
    paintName.setColor(Color.WHITE);
    paintName.setTextSize(50); // set text size
    paintName.setStrokeWidth(5);
    paintName.setTextAlign(Paint.Align.CENTER);
    this.setImageResource(res); // pointer.png in res/drawable folder
    Drawable d = getResources().getDrawable(res);
    h = d.getIntrinsicHeight();
    w = d.getIntrinsicWidth();
    canvas = new Canvas();
    canvas.drawPaint(paintBg); // make background black
    // float imageScale = width / w; // how image size scales with screen
}

@Override
public void onDraw(Canvas canvas) {
    y = this.getHeight() / 2; // center of screen
    x = this.getWidth() / 2;
    int left = Math.round(x - 0.8f * w);
    int right = Math.round(x + 0.8f * w);
    canvas.save();
    canvas.rotate((direction + 180) % 360, x, y); // rotate to normal
    canvas.drawText(s, x, y + 20, paintName); // draw name
    canvas.restore();
    canvas.rotate(direction, x, y); // rotate back
    super.onDraw(canvas);
}
What changed in 4.1.2 that would affect this, or am I doing something incorrectly? Thanks for your help; this is driving me crazy.
Edit to include screen shots:
Android 2.3.4
Android 4.1.2
Note how the white text appears to be on top in 2.3.4, while it appears below or muddy in 4.1.2.
As free3dom points out, it is related to alpha. I do change alpha, because if I don't, the text does not appear on top of the arrow. It appears that the ImageView holding the pointer image is always on top; could this be what's going on?
Here is how I handle setting alpha:
public static void setAlpha(View view, float alpha, int duration) {
    if (Build.VERSION.SDK_INT < 11) {
        final AlphaAnimation animation = new AlphaAnimation(alpha, alpha);
        animation.setDuration(duration);
        animation.setFillAfter(true);
        view.startAnimation(animation);
    } else { // for 11 and above
        view.setAlpha(alpha);
    }
}
Maybe it has something to do with using this.setImageResource(res) to set the image? According to the Android developer guide, I can only set alpha on the view as a whole, so everything in the view is changed. Yet if I lower the alpha, the arrow image becomes transparent enough to let me see the text.
You set a stroke width, but never indicate that stroke should be used for the Paint.
Try adding
paintName.setStyle(Paint.Style.FILL_AND_STROKE);

Android Opengl ES tiling engine, smooth scrolling

Following this: Best approach for oldschool 2D zelda-like game
I got a simple 2D tile generator working. I'm reading an int map[100][100] filled with either 1's or 0's and drawing tiles according to their tile id: 0 is water, 1 is grass.
I'm using a basic numpad control handler; with a camIncr (32.0f), I set the camera position according to the movement:
case KeyEvent.KEYCODE_DPAD_RIGHT:
    cameraPosX = (float) (cameraPosX + camIncr);
    break;
In my draw loop, I'm just drawing enough tiles to fit on the screen, and I track the top left tile using cameraOffsetX and cameraOffsetY (the camera position / tile size).
I'm using GLU.gluOrtho2D for my projection.
Here is the draw loop inside my custom renderer :
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, 0, scrWidth, scrHeight, 0);

repere.draw(gl, 100.0f); // this is just a helper, draws 2 lines at the origin

// Call the drawing methods
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
tiledBackground.draw(gl, filtering);
My tiledBackground draw function:
int cols = (569 / 32) + 2; // how many columns can fit on the screen
int rows = (320 / 32) + 1; // how many rows can fit on the screen
int cameraPosX = (int) Open2DRenderer.getCameraPosX();
int cameraPosY = (int) Open2DRenderer.getCameraPosY();
tileOffsetX = (int) (cameraPosX / 32);
tileOffsetY = (int) (cameraPosY / -32);

gl.glPushMatrix();
for (int y = 0; y < rows; y++) {
    gl.glPushMatrix(); // save the row's start position
    for (int x = 0; x < cols; x++) {
        try {
            tile = map[y + tileOffsetY][x + tileOffsetX];
        } catch (Exception e) {
            e.printStackTrace(); // when out of the array
            tile = 0;
        }
        if (tile == 0) {
            waterTile.draw(gl, filter);
        }
        if (tile == 4) {
            grassTile.draw(gl, filter);
        }
        gl.glTranslatef(32.0f, 0.0f, 0.0f);
    }
    gl.glPopMatrix(); // back to the row's start
    gl.glTranslatef(0.0f, 32.0f, 0.0f);
}
gl.glPopMatrix();
}
The waterTile and grassTile draw functions each draw a 32x32 textured tile; I might post the code if relevant.
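For context, a GL ES 1.x tile draw function typically looks something like the following hypothetical sketch (not the project's actual code; textureId is assumed to be loaded elsewhere):

private final FloatBuffer vertices = asFloatBuffer(new float[] {
        0f, 0f,   32f, 0f,   0f, 32f,   32f, 32f });  // 32x32 quad, triangle strip order
private final FloatBuffer texCoords = asFloatBuffer(new float[] {
        0f, 0f,   1f, 0f,   0f, 1f,   1f, 1f });      // full texture

public void draw(GL10 gl, int filter) {
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, filter);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertices);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoords);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
    gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
}

private static FloatBuffer asFloatBuffer(float[] data) {
    FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    fb.put(data).position(0);
    return fb;
}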
Everything is fine: I can move using the numpad arrows, and my map 'moves' with me. Since I'm only drawing what I can see, it's fast (see android OpenGL ES simple Tile generator performance problem, where Aleks pointed me to a simple 'culling' idea).
I would like my engine to 'smooth scroll' now. I've tried tweaking the camIncr variable, the GLU.gluOrtho2D call, etc.; nothing worked.
Any ideas? :)
I finally figured it out.
I added a glTranslatef call right before entering the loop:
gl.glPushMatrix();
gl.glTranslatef(-cameraPosX % 32, -cameraPosY % 32, 0);
for (int y = 0; y < rows; y++) {
    ...
First, I unsuccessfully tried to translate the scene using a brute cameraPosX / TILE_HEIGHT division; that didn't work.
We have to translate by the offset by which the leftmost tile extends beyond the screen, not by the total cameraPosX offset, so we use the mod (%) operator instead of division. For example, with cameraPosX = 100 and 32-pixel tiles, tileOffsetX is 100 / 32 = 3 and the scene is shifted left by the remaining 100 % 32 = 4 pixels.
Sorry for my bad English ^^
