I'm using cocos2d.
Now I've added some images to the layer and played around a bit.
I'm trying to save the whole screen as an image file.
How can I do this?
The only way to capture the content of a SurfaceView is if you are rendering into it using OpenGL; you can use glReadPixels() to grab the content of the surface. If you are drawing onto the SurfaceView using a Canvas, you can simply create a Bitmap, wrap it in a new Canvas, and execute your drawing code against that Canvas.
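For the OpenGL case, a minimal sketch of the glReadPixels approach might look like the following (shown here with the GLES20 static bindings; the equivalent call also exists on the GL10 interface for GLES 1.x). Run it on the GL thread after the frame has been rendered:

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class GlScreenshot {
    // width/height are the surface dimensions; call this after the scene is drawn.
    public static Bitmap capture(int width, int height) {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        pixels.rewind();
        bitmap.copyPixelsFromBuffer(pixels);

        // glReadPixels returns rows bottom-up, so flip the result vertically.
        Matrix flip = new Matrix();
        flip.preScale(1f, -1f);
        return Bitmap.createBitmap(bitmap, 0, 0, width, height, flip, true);
    }
}
```

The returned Bitmap can then be written out to a file with Bitmap.compress().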
It is my understanding that cocos2d-android also has a CCRenderTexture class with the saveBuffer method. If so, have a look at my CCRenderTexture demo program and blog post for cocos2d-iphone, which show how to create a screenshot using CCRenderTexture and saveBuffer. The same principle should apply to cocos2d-android.
I want to crop the camera preview in Android using the camera2 API. I am using the official android-Camera2Basic sample.
This is the result I am getting.
And this is the result I actually want to achieve.
I don't want to overlay the object on the TextureView; I want the preview itself to be this size, without stretching.
You'll need to edit the image yourself before drawing it, since the default behavior of a TextureView is to just draw the whole image sent to its Surface.
And adjusting the TextureView's transform matrix will only scale or move the whole image, not crop it.
Doing this requires quite a bit of boilerplate, since you need to re-implement most of what a TextureView does. For best efficiency, you likely want to implement the cropping in OpenGL ES. That means using a GLSurfaceView, using its OpenGL context to create a SurfaceTexture object, and then drawing a quadrilateral textured with that SurfaceTexture, applying the cropping behavior you want in the fragment shader.
That's fairly basic EGL and GLES work, but it's quite a bit to take in if you've never done any OpenGL programming before. There's a small test program within the Android OS tree that uses this kind of path: https://android.googlesource.com/platform/frameworks/native/+/master/opengl/tests/gl2_cameraeye/#
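As a rough illustration of the cropping idea, one way to express the crop is to map only a sub-rectangle of the camera texture onto the quad you draw (this ignores the SurfaceTexture transform matrix and the shader setup, which you still need). Everything below, including the class and method names, is an assumed sketch rather than an existing API:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public final class CropTexCoords {
    // Texture coordinates for a full-screen quad (triangle-strip vertex order)
    // that sample only a centered sub-rectangle of the camera texture.
    public static FloatBuffer centeredCrop(float cropWidthFraction, float cropHeightFraction) {
        float sMin = (1f - cropWidthFraction) / 2f;
        float sMax = sMin + cropWidthFraction;
        float tMin = (1f - cropHeightFraction) / 2f;
        float tMax = tMin + cropHeightFraction;
        float[] coords = {
                sMin, tMin,   // quad vertex 0
                sMax, tMin,   // quad vertex 1
                sMin, tMax,   // quad vertex 2
                sMax, tMax,   // quad vertex 3
        };
        FloatBuffer buffer = ByteBuffer.allocateDirect(coords.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        buffer.put(coords).position(0);
        return buffer;
    }
}
```

The fragment shader then just samples the external texture at these coordinates; the crop fractions control how much of the preview is kept.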
I'm making an Android game using AndEngine. I'm making some sprites as objects that I can rotate. But when I touch a transparent part of the image, it still rotates the sprite.
To prevent this, I need to get the Bitmap from the Sprite. Can anyone provide a method for converting a sprite to a Bitmap?
I looked into this link, but it uses GLES1, and I'm working with GLES2.
To get a bitmap for a sprite, follow this link for more details.
If you have not seen this already, there is a GLES2 version of pixel-perfect collision. It's an AndEngine extension called "AndEngine Collisions Extension":
https://github.com/MakersF/AndEngineCollisionsExtension
It should handle the case you are describing.
NOTE: You have to modify your copy of the AndEngine source code to use the extension.
I want to make the brushes shown in the image below for a drawing application. Which is the more suitable approach, OpenGL or Canvas, and how can it be implemented?
I'd say Canvas, as you'll want to modify an image. OpenGL ES is good for displaying images, but does not (as far as I know) have methods for modifying its textures (unless you render to a texture and then render that to the screen with some modifications, which is not always very efficient).
With Canvas you have methods for drawing your brush strokes directly onto the Bitmap you're painting on. In GLES you would have to modify a texture (by using a Canvas) and then upload it to the GPU again before it could be rendered, and the rendering would most likely just consist of drawing a square with your texture on it (since the fill rate of most mobile GPUs is quite poor, you don't want to draw the strokes separately).
What I'm trying to say is: the most convenient way to let the user draw on an OpenGL ES surface is to build the texture by drawing on a Canvas; see the sketch at the end of this answer.
But there might still be some gain in using GL for drawing, since the Canvas operations can be performed off-screen and you can push that data to a GL renderer to (possibly) speed up the on-screen drawing.
However, if you are developing for Android 3.x+ you should take a look at RenderScript (which I personally have never had a chance to use); it seems like it would be a good fit in this case.
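To make the Canvas approach concrete, here is a minimal, hypothetical sketch (the class and field names are mine, not an existing API): keep a backing Bitmap, stamp a soft brush image along the touch path, and afterwards draw the backing Bitmap to the screen or upload it as a texture.

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;

public final class PaintingSurface {
    private final Bitmap backing;
    private final Canvas canvas;
    private final Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);
    private final Bitmap brushBitmap; // a small, soft brush stamp image

    public PaintingSurface(int width, int height, Bitmap brushBitmap) {
        this.backing = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        this.canvas = new Canvas(backing);
        this.brushBitmap = brushBitmap;
    }

    // Stamp the brush between two touch points so fast strokes stay solid.
    public void stroke(float x0, float y0, float x1, float y1) {
        float dx = x1 - x0, dy = y1 - y0;
        float dist = (float) Math.hypot(dx, dy);
        int steps = Math.max(1, (int) (dist / 4f)); // one stamp every ~4 px
        for (int i = 0; i <= steps; i++) {
            float t = i / (float) steps;
            float x = x0 + dx * t - brushBitmap.getWidth() / 2f;
            float y = y0 + dy * t - brushBitmap.getHeight() / 2f;
            canvas.drawBitmap(brushBitmap, x, y, paint);
        }
    }

    public Bitmap getBitmap() { return backing; } // draw this in onDraw(), or upload as a texture
}
```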
Your best solution is going to be using native code. That's how Sketchbook does it. You could probably figure out how by browsing through the GIMP source code http://www.gimp.org/source . Out of Canvas vs OpenGL, Canvas would be the way to go.
It depends. If you want to edit the image statically, go with Canvas. But if you want the ability to edit, scale, and rotate strokes after brushing them onto the screen, it would be easier with OpenGL.
An example with OpenGL: store the motions the user makes with touches. Create a class that stores a motion and has fields for size, rotation, etc. To draw it, just trace the selected brush image along the stored motion path.
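As a rough sketch of that idea (all names here are illustrative, not an existing API):

```java
import android.graphics.PointF;
import java.util.ArrayList;
import java.util.List;

// One recorded brush gesture, kept as an object so it can be
// edited, scaled, or rotated later and then re-drawn along its path.
public class BrushStroke {
    public final List<PointF> points = new ArrayList<>(); // recorded touch path
    public float size = 1f;        // scale applied to the brush image
    public float rotationDeg = 0f; // rotation applied to the whole stroke
    public int brushTextureId;     // which brush image to trace along the path

    public void addPoint(float x, float y) {
        points.add(new PointF(x, y));
    }
}
```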
I'm quite new to animations in Android. For 3D animations I have to use OpenGL to make them look more fluid.
Is it possible to take a Drawable (say, a rectangle or circle I draw on a Canvas) and turn it into a View rendered with OpenGL? Is that possible, and if so, how?
Can anyone please tell me what the first point in the FEATURES section of this page means: http://developer.android.com/reference/android/opengl/GLSurfaceView.html
Well, you can try converting the Drawable to a Bitmap and then mapping that Bitmap onto a 3D surface in OpenGL as a texture.
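A minimal sketch of that approach, assuming GLES 2.0 and that this runs on the GL thread (the class and method names are mine):

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.drawable.Drawable;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public final class TextureUploader {
    // Rasterize the Drawable into a Bitmap, then upload it as a GL texture.
    public static int drawableToTexture(Drawable drawable) {
        int w = Math.max(1, drawable.getIntrinsicWidth());
        int h = Math.max(1, drawable.getIntrinsicHeight());
        Bitmap bitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(bitmap);
        drawable.setBounds(0, 0, w, h);
        drawable.draw(canvas);

        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
        return tex[0]; // bind this texture when drawing your 3D surface
    }
}
```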
OpenGL doesn't operate on that level. OpenGL is just a drawing API, giving you "pens and brushes" to draw on some canvas provided by the operating system. Any OS specific concepts like Drawables and Views are out of the scope of OpenGL and won't be dealt with by it.
However maybe if you described in detail what it is you want to achieve we may be able to help.
I am trying to make a game for Android. I currently have all of my art assets loaded into the drawables folder, but my question is how do I actually reference a specific one to render it?
I know that the files each have a unique ID, and I probably have to do the drawing in an @Override of the onDraw(Canvas) method. So the question is: how do I actually display the image on the screen?
I have looked through a good half dozen books, and all they talk about is getting an image off the web or manually drawing it with the Paint functionality. But I have the assets ready to go as .bmp files, and they are complex enough that coding Paint calls to draw them by hand would be a very great migraine.
I would prefer a direction to look in (a specific book, tutorial, blog, or open-source code with quality comments). I am still learning Java as my second programming language, so I am not yet at the point of translating pseudocode directly into Java.
[added in edit]
Does drawing using Paint() every frame have a lower overhead than rendering a bitmap and only changing the display when it changes?
For 2D games I would recommend using a SurfaceView.
Load your image as a bitmap and let the canvas draw that bitmap.
To animate the images there are two possible ways:
You could invalidate the view
Create a looping thread that calls the draw function (sketched below)
Here's a good starting tutorial about displaying images in Android, especially for games.
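A rough sketch of the looping-thread approach with a SurfaceView; R.drawable.player is a placeholder for one of your own drawable resources:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class GameView extends SurfaceView implements SurfaceHolder.Callback, Runnable {
    private Thread loop;
    private volatile boolean running;
    private final Bitmap sprite;

    public GameView(Context context) {
        super(context);
        getHolder().addCallback(this);
        sprite = BitmapFactory.decodeResource(getResources(), R.drawable.player); // placeholder asset
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        running = true;
        loop = new Thread(this);
        loop.start();
    }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try { loop.join(); } catch (InterruptedException ignored) { }
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    @Override public void run() {
        while (running) {
            Canvas canvas = getHolder().lockCanvas();
            if (canvas == null) continue;
            canvas.drawColor(Color.BLACK);           // clear the frame
            canvas.drawBitmap(sprite, 50, 50, null); // draw the loaded asset
            getHolder().unlockCanvasAndPost(canvas);
        }
    }
}
```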
You can use one of the drawBitmap variants of the Canvas class. A Canvas object is passed into the onDraw method. You can use the BitmapFactory class to load a resource.
You can read a nice tutorial here.
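For example, a minimal custom View along these lines (R.drawable.enemy is a placeholder resource name):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.view.View;

public class SpriteView extends View {
    private final Bitmap sprite;

    public SpriteView(Context context) {
        super(context);
        // Load the drawable resource once, not on every frame.
        sprite = BitmapFactory.decodeResource(getResources(), R.drawable.enemy); // placeholder asset
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Draw the bitmap at (0, 0); call invalidate() whenever it should be redrawn.
        canvas.drawBitmap(sprite, 0f, 0f, null);
    }
}
```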