How to load a texture for meshes (not A3D objects!) in RenderScript? - Android
I have been creating meshes using equations, but applying textures to them seems to be an issue.
I have created a cube and a cylinder, but when a texture is applied it never shows up and only the plain object is visible. I created the cylinder successfully with the help of this link - How to make a cylinder in renderscript - and I built the cube myself. Both objects are visible, but not the textures applied to them. I am able to apply textures to the A3D objects I have created, but not to the meshes generated from equations. Any ideas where I'm going wrong? Please suggest what the problem could be, or is it simply not possible to apply a texture to such meshes?
The custom shaders I'm using are shaderv.glsl and shaderf.glsl, the same as the ones in the examples.
public Mesh cubeMesh(float ltf, float lbf, float rbf, float rtf, float ltb, float lbb, float rbb, float rtb) {
    Mesh.TriangleMeshBuilder mbo = new Mesh.TriangleMeshBuilder(mRSGL, 3, Mesh.TriangleMeshBuilder.TEXTURE_0);
    // front
    mbo.setTexture(0, 0);
    mbo.addVertex(-lbf, -lbf, lbf);  // lbf
    mbo.setTexture(1, 0);
    mbo.addVertex(rbf, -rbf, rbf);   // rbf
    mbo.setTexture(1, 1);
    mbo.addVertex(rtf, rtf, rtf);    // rtf
    mbo.setTexture(0, 1);
    mbo.addVertex(-ltf, ltf, ltf);   // ltf
    // top
    mbo.setTexture(0, 0);
    mbo.addVertex(-ltf, ltf, ltf);   // ltf
    mbo.setTexture(1, 0);
    mbo.addVertex(rtf, rtf, rtf);    // rtf
    mbo.setTexture(1, 1);
    mbo.addVertex(rtb, rtb, -rtb);   // rtb
    mbo.setTexture(0, 1);
    mbo.addVertex(-ltb, ltb, -ltb);  // ltb
    // back
    mbo.setTexture(0, 0);
    mbo.addVertex(rbb, -rbb, -rbb);  // rbb
    mbo.setTexture(1, 0);
    mbo.addVertex(-lbb, -lbb, -lbb); // lbb
    mbo.setTexture(1, 1);
    mbo.addVertex(-ltb, ltb, -ltb);  // ltb
    mbo.setTexture(0, 1);
    mbo.addVertex(rtb, rtb, -rtb);   // rtb
    // bottom
    mbo.setTexture(0, 0);
    mbo.addVertex(-lbb, -lbb, -lbb); // lbb
    mbo.setTexture(1, 0);
    mbo.addVertex(rbb, -rbb, -rbb);  // rbb
    mbo.setTexture(1, 1);
    mbo.addVertex(rbf, -rbf, rbf);   // rbf
    mbo.setTexture(0, 1);
    mbo.addVertex(-lbf, -lbf, lbf);  // lbf
    // left
    mbo.setTexture(0, 0);
    mbo.addVertex(-lbb, -lbb, -lbb); // lbb
    mbo.setTexture(1, 0);
    mbo.addVertex(-lbf, -lbf, lbf);  // lbf
    mbo.setTexture(1, 1);
    mbo.addVertex(-ltf, ltf, ltf);   // ltf
    mbo.setTexture(0, 1);
    mbo.addVertex(-ltb, ltb, -ltb);  // ltb
    // right
    mbo.setTexture(0, 0);
    mbo.addVertex(rbf, -rbf, rbf);   // rbf
    mbo.setTexture(1, 0);
    mbo.addVertex(rbb, -rbb, -rbb);  // rbb
    mbo.setTexture(1, 1);
    mbo.addVertex(rtb, rtb, -rtb);   // rtb
    mbo.setTexture(0, 1);
    mbo.addVertex(rtf, rtf, rtf);    // rtf
    // two triangles per face
    mbo.addTriangle(0, 1, 2);    // front
    mbo.addTriangle(2, 3, 0);
    mbo.addTriangle(4, 5, 6);    // top
    mbo.addTriangle(6, 7, 4);
    mbo.addTriangle(8, 9, 10);   // back
    mbo.addTriangle(10, 11, 8);
    mbo.addTriangle(12, 13, 14); // bottom
    mbo.addTriangle(14, 15, 12);
    mbo.addTriangle(16, 17, 18); // left
    mbo.addTriangle(18, 19, 16);
    mbo.addTriangle(20, 21, 22); // right
    mbo.addTriangle(22, 23, 20);
    return mbo.create(true);
}
private Mesh cylinder() {
    float radius = 1.25f, halfLength = 5;
    int slices = 16;
    Mesh.TriangleMeshBuilder mbo = new Mesh.TriangleMeshBuilder(mRSGL, 3, Mesh.TriangleMeshBuilder.TEXTURE_0);
    // vertex at the middle of each end
    mbo.addVertex(0.0f, halfLength, 0.0f);
    mbo.addVertex(0.0f, -halfLength, 0.0f);
    for (int i = 0; i < slices; i++) {
        float theta = (float) (i * 2.0 * Math.PI / slices);
        float nextTheta = (float) ((i + 1) * 2.0 * Math.PI / slices);
        // vertices at the edge of the top circle
        mbo.addVertex((float) (radius * Math.cos(theta)), halfLength, (float) (radius * Math.sin(theta)));
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), halfLength, (float) (radius * Math.sin(nextTheta)));
        // the same vertices at the bottom of the cylinder
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), -halfLength, (float) (radius * Math.sin(nextTheta)));
        mbo.addVertex((float) (radius * Math.cos(theta)), -halfLength, (float) (radius * Math.sin(theta)));
        // Add the faces for the ends, ordered for back-face culling.
        // The +2/+3/+4/+5 offsets account for the first two indices being the
        // center points; the sector number (i) is multiplied by 4 because four
        // vertices are added with each sector.
        mbo.addTriangle(4 * i + 3, 4 * i + 2, 0);
        mbo.addTriangle(4 * i + 5, 4 * i + 4, 1);
        // Add the faces for the side.
        mbo.addTriangle(4 * i + 2, 4 * i + 4, 4 * i + 5);
        mbo.addTriangle(4 * i + 4, 4 * i + 2, 4 * i + 3);
    }
    return mbo.create(true);
}
public void initMesh() {
    m1 = cubeMesh(1, 1, 1, 1, 1, 1, 1, 1);
    mScript.set_gCubeMesh(m1);     // cube
    m2 = cylinder();
    mScript.set_gCylinder(m2);     // cylinder
}
I'm loading the textures initially like this:
private void loadImages() {
    cubetex = loadTextureRGB(R.drawable.crate);
    cylindertex = loadTextureRGB(R.drawable.torusmap);
    mScript.set_cubetex(cubetex);
    mScript.set_gTexCylinder(cylindertex);
}
On the RenderScript side, these are the cube and cylinder functions called by root():
static void displayCustomCube()
{
    // Update vertex shader constants.
    // Apply a rotation to our mesh.
    gTorusRotation += 50.0f * gDt;
    if (gTorusRotation > 360.0f)
    {
        gTorusRotation -= 360.0f;
    }
    // Set up the projection matrix (once).
    if (once < 1)
    {
        float aspect = (float)rsgGetWidth() / (float)rsgGetHeight();
        rsMatrixLoadPerspective(&gVSConstants->proj, 40.0f, aspect, 0.1f, 1000.0f);
        once++;
    }
    // Load the model matrix to position our model on the screen.
    rsMatrixLoadTranslate(&gVSConstants->model, 0, 2, -10);
    setupCustomShaderLights();
    rsgBindProgramVertex(gProgVertexCustom);
    // Fragment shader with texture
    rsgBindProgramStore(gProgStoreBlendNoneDepth);
    rsgBindProgramFragment(gProgFragmentCustom);
    rsgBindSampler(gProgFragmentCustom, 0, gLinearClamp);
    rsgBindTexture(gProgFragmentCustom, 0, cubetex);
    // Use no face culling
    rsgBindProgramRaster(gCullNone);
    rsgDrawMesh(gCubeMesh); // draw the cube mesh
}
static void displayCustomCylinder()
{
    // Update vertex shader constants.
    // Apply a rotation to our mesh.
    gTorusRotation += 50.0f * gDt;
    if (gTorusRotation > 360.0f)
    {
        gTorusRotation -= 360.0f;
    }
    // Set up the projection matrix (once).
    if (once < 1)
    {
        float aspect = (float)rsgGetWidth() / (float)rsgGetHeight();
        rsMatrixLoadPerspective(&gVSConstants->proj, 40.0f, aspect, 0.1f, 1000.0f);
        once++;
    }
    // Load the model matrix to position our model on the screen.
    rsMatrixLoadTranslate(&gVSConstants->model, 0, 2, -10);
    setupCustomShaderLights();
    rsgBindProgramVertex(gProgVertexCustom);
    // Fragment shader with texture
    rsgBindProgramStore(gProgStoreBlendNoneDepth);
    rsgBindProgramFragment(gProgFragmentCustom);
    rsgBindSampler(gProgFragmentCustom, 0, gLinearClamp);
    rsgBindTexture(gProgFragmentCustom, 0, gTexCylinder);
    // Use no face culling
    rsgBindProgramRaster(gCullNone);
    rsgDrawMesh(gCylinder); // draw the cylinder mesh
}
The definition of setupCustomShaderLights() is:
static void setupCustomShaderLights()
{
    float4 light0Pos = {xLight0Pos, yLight0Pos, zLight0Pos, aLight0Pos};
    float4 light1Pos = {0.0f, 0.0f, 20.0f, 1.0f};
    float4 light0DiffCol = {xDiffColLight0, yDiffColLight0, zDiffColLight0, aDiffColLight0};
    float4 light0SpecCol = {xSpecColLight0, ySpecColLight0, zSpecColLight0, aSpecColLight0};
    float4 light1DiffCol = {0.5f, 0.5f, 0.9f, 1.0f};
    float4 light1SpecCol = {0.5f, 0.5f, 0.9f, 1.0f};
    gLight0Rotation += 50.0f * gDt;
    if (gLight0Rotation > 360.0f)
    {
        gLight0Rotation -= 360.0f;
    }
    gLight1Rotation -= 50.0f * gDt;
    if (gLight1Rotation > 360.0f)
    {
        gLight1Rotation -= 360.0f;
    }
    // Set light 0 properties
    gVSConstants->light0_Posision = light0Pos;
    gVSConstants->light0_Diffuse = DiffLight0Val;
    gVSConstants->light0_Specular = SpecLight0Val;
    gVSConstants->light0_CosinePower = Light0Cos;
    // Set light 1 properties
    gVSConstants->light1_Posision = light1Pos;
    gVSConstants->light1_Diffuse = 1.0f;
    gVSConstants->light1_Specular = 0.7f;
    gVSConstants->light1_CosinePower = 25.0f;
    rsgAllocationSyncAll(rsGetAllocation(gVSConstants));
    // Update fragment shader constants
    // Set light 0 colors
    gFSConstants->light0_DiffuseColor = light0DiffCol;
    gFSConstants->light0_SpecularColor = light0SpecCol;
    // Set light 1 colors
    gFSConstants->light1_DiffuseColor = light1DiffCol;
    gFSConstants->light1_SpecularColor = light1SpecCol;
    rsgAllocationSyncAll(rsGetAllocation(gFSConstants));
}
And loadTextureRGB() is:
private Allocation loadTextureRGB(int id)
{
    return Allocation.createFromBitmapResource(mRSGL, mRes, id,
            Allocation.MipmapControl.MIPMAP_ON_SYNC_TO_TEXTURE,
            Allocation.USAGE_GRAPHICS_TEXTURE);
}
Texturing these meshes is definitely possible. It likely isn't the only cause of your problem, but one reason you are not getting any texturing on the cylinder is that you never declare texture coordinates for that mesh when you create it. You do for the cube, though, so just carry that method over. As for why the cube texture is not showing up, I don't see anything immediately wrong with your code. What is the GLSL code for your custom shaders? Is it the same as in the MiscSamples example? What about your definitions of setupCustomShaderLights() and loadTextureRGB() - are they also the same as the example code?
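For example, here is a minimal sketch of your cylinder() builder with setTexture() calls added before each addVertex(). The UV layout here (u wrapping around the circumference, v running along the height, and the end-cap centers pinned at 0.5, 0.5) is just one reasonable choice on my part, not the only valid mapping:
private Mesh cylinderWithTexCoords() {
    float radius = 1.25f, halfLength = 5;
    int slices = 16;
    Mesh.TriangleMeshBuilder mbo = new Mesh.TriangleMeshBuilder(mRSGL, 3, Mesh.TriangleMeshBuilder.TEXTURE_0);
    // Center vertices of the two end caps; their UVs are arbitrary.
    mbo.setTexture(0.5f, 0.5f);
    mbo.addVertex(0.0f, halfLength, 0.0f);
    mbo.setTexture(0.5f, 0.5f);
    mbo.addVertex(0.0f, -halfLength, 0.0f);
    for (int i = 0; i < slices; i++) {
        float theta = (float) (i * 2.0 * Math.PI / slices);
        float nextTheta = (float) ((i + 1) * 2.0 * Math.PI / slices);
        float u = (float) i / slices;           // wrap the texture around the side
        float nextU = (float) (i + 1) / slices;
        // Top edge of this sector (v = 1), then the same edge at the bottom (v = 0).
        mbo.setTexture(u, 1.0f);
        mbo.addVertex((float) (radius * Math.cos(theta)), halfLength, (float) (radius * Math.sin(theta)));
        mbo.setTexture(nextU, 1.0f);
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), halfLength, (float) (radius * Math.sin(nextTheta)));
        mbo.setTexture(nextU, 0.0f);
        mbo.addVertex((float) (radius * Math.cos(nextTheta)), -halfLength, (float) (radius * Math.sin(nextTheta)));
        mbo.setTexture(u, 0.0f);
        mbo.addVertex((float) (radius * Math.cos(theta)), -halfLength, (float) (radius * Math.sin(theta)));
        // Same index layout as your original cylinder().
        mbo.addTriangle(4 * i + 3, 4 * i + 2, 0);
        mbo.addTriangle(4 * i + 5, 4 * i + 4, 1);
        mbo.addTriangle(4 * i + 2, 4 * i + 4, 4 * i + 5);
        mbo.addTriangle(4 * i + 4, 4 * i + 2, 4 * i + 3);
    }
    return mbo.create(true);
}
With texture coordinates present in the mesh, binding gTexCylinder via rsgBindTexture() as you already do gives the fragment shader real UVs to sample with.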