I am trying to understand the code below from AndEngine for loading a texture. I would like to know why it is 512 x 512 (I know those are the width and height) when the image is only 480 x 320.
public void onLoadResources() {
    this.mTexture = new Texture(512, 512,
            TextureOptions.BILINEAR_PREMULTIPLYALPHA);
    this.mSplashTextureRegion = TextureRegionFactory
            .createFromAsset(this.mTexture, this, "image.png", 0, 0);
    this.mEngine.getTextureManager().loadTexture(this.mTexture);
}
I searched the net but found no satisfying explanation.
I don't know AndEngine well, but this is likely because many libraries assume that image dimensions are powers of two.
Here you can find a better explanation:
About power of 2 rule.
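For illustration, a minimal sketch (plain Java, not an AndEngine API) of rounding a dimension up to the next power of two, which is where the 512 comes from:

// Smallest power of two >= n (assumes n >= 1).
public static int nextPowerOfTwo(int n) {
    int p = 1;
    while (p < n) {
        p <<= 1;
    }
    return p;
}
// nextPowerOfTwo(480) == 512 and nextPowerOfTwo(320) == 512,
// so a 480x320 image needs at least a 512x512 texture.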
I am implementing an app that does real-time image processing on live images from the camera. It was working, with limitations, using the now-deprecated android.hardware.Camera; for improved flexibility and performance I'd like to use the new android.hardware.camera2 API. I'm having trouble getting the raw image data for processing, however. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware.)
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(Class), the recommended surface for obtaining image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(mSurfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried all of them (plugged in for 'format' above); all support the configured resolution according to getOutputSizes(format), but none of them gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result -- I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled down considerably horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears to not crop the image in any way, but instead scale it non-uniformly, to reach the requested frame size. The images have the correct proportions when requesting a 16:9 format (of which there are only two higher-res ones, unfortunately). Seems a bit odd to me -- it doesn't appear to happen when requesting JPEG, or with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats, but instead have to find the sensor size first, find formats with the same aspect ratio, then downsample the image yourself (as needed)...
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
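Roughly, the workaround could look like this (an untested sketch using the same mCameraInfo as above; it just picks a YUV_420_888 output size whose aspect ratio matches the sensor's):

Rect active = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float sensorAspect = (float) active.width() / active.height();
StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size chosen = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    float aspect = (float) s.getWidth() / s.getHeight();
    // Keep the smallest size whose aspect ratio matches the sensor's.
    if (Math.abs(aspect - sensorAspect) < 0.01f
            && (chosen == null || s.getWidth() < chosen.getWidth())) {
        chosen = s;
    }
}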
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment that call out, and instead scale the image up yourself when displaying it.
int[] rgba = new int[width * height];
// Convert the camera frame into an RGBA pixel array (native helper).
nativeLoader.convertImage(width, height, data, rgba);
Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    // Scale the frame up by hand while drawing, instead of letting
    // setDefaultBufferSize() distort the stream.
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.
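One more thing worth checking in this situation (an assumption on my part, not something verified above): with YUV_420_888 the Y plane's row stride can be larger than the image width, and copying the buffer as if it were tightly packed produces exactly this kind of horizontal distortion. A stride-aware copy of the grayscale plane might look like:

Image img = reader.acquireLatestImage();
if (img != null) {
    Image.Plane yPlane = img.getPlanes()[0];
    ByteBuffer buf = yPlane.getBuffer();
    int rowStride = yPlane.getRowStride();
    int width = img.getWidth();
    int height = img.getHeight();
    byte[] gray = new byte[width * height];
    for (int row = 0; row < height; row++) {
        buf.position(row * rowStride);     // jump over any per-row padding
        buf.get(gray, row * width, width); // copy exactly 'width' luma bytes
    }
    img.close();
}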
I noticed that I keep one atlas per texture region, although I can draw the same region multiple times as a sprite. Is it possible to keep all the texture regions of a scene in one texture atlas?
My special case is that I am generating images instead of using image files. I do this with BaseBitmapTextureAtlasSourceDecorator and generate the region from an IBitmapTextureAtlasSource.
Yes. Generally, though, you should only create atlases with a max width/height of 1024 (these sizes must be powers of 2, by the way) to be efficient.
On another note, I've found it easier to use BuildableBitmapTextureAtlas. With this kind of atlas, you don't have to specify where in the atlas you are placing your textures. I think it also might take care of sprite-bleeding to some degree (not sure, though). It's the same idea really... Here is an example from my project:
BuildableBitmapTextureAtlas buttonAtlas = new BuildableBitmapTextureAtlas(
        getTextureManager(), 512, 512, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
model.moveLeftButtonTR = BitmapTextureAtlasTextureRegionFactory.createFromAsset(buttonAtlas, this, "moveleft_button.png");
model.moveRightButtonTR = BitmapTextureAtlasTextureRegionFactory.createFromAsset(buttonAtlas, this, "moveright_button.png");
model.handleBlockButtonTR = BitmapTextureAtlasTextureRegionFactory.createFromAsset(buttonAtlas, this, "handleblock_button.png");
model.restartButtonTR = BitmapTextureAtlasTextureRegionFactory.createFromAsset(buttonAtlas, this, "restart_button.png");
try {
    buttonAtlas.build(new BlackPawnTextureAtlasBuilder<IBitmapTextureAtlasSource, BitmapTextureAtlas>(0, 1, 1));
} catch (Exception e) {
    e.printStackTrace();
}
buttonAtlas.load();
In your case, use the following method:
BitmapTextureAtlasTextureRegionFactory.createFromSource(BuildableBitmapTextureAtlas atlas, IBitmapTextureAtlasSource source)
In summary, the atlas just holds all the textures you add to it. Then you load this atlas into memory so that these textures can be retrieved quickly. You can then use a single instance of a texture to build as many independent sprites as you wish.
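For your generated images, that could look roughly like this (a sketch built around the createFromSource overload named above; 'myGeneratedSource' stands for whatever IBitmapTextureAtlasSource your BaseBitmapTextureAtlasSourceDecorator produces):

BuildableBitmapTextureAtlas atlas = new BuildableBitmapTextureAtlas(
        getTextureManager(), 512, 512, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
ITextureRegion generatedTR =
        BitmapTextureAtlasTextureRegionFactory.createFromSource(atlas, myGeneratedSource);
try {
    atlas.build(new BlackPawnTextureAtlasBuilder<IBitmapTextureAtlasSource, BitmapTextureAtlas>(0, 1, 1));
} catch (Exception e) {
    e.printStackTrace();
}
atlas.load();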
Honestly, I don't like to ask things this way, but I have no clue about this one!
Have you seen this before?
The image comes out scrambled, following some defined pattern. This happens only on some (low-end) devices with non-power-of-two images (FBO); it works well on other devices.
What I do is load an Android Bitmap into an FBO (this works OK, as it displays fine on the screen). I do some editing (I paste a sticker, which appears in the right place in the image), and finally save the FBO into a Bitmap again. It works fine for a 512x512 FBO (the FBO has the size of the image), but not for this one (507x800).
Any ideas? I don't post code because I have no clue where the problem is; tell me what would be relevant and I'll add it.
This is the GL code that reads the pixels back from the FBO:
public Buffer toPixelBuffer(){
    final int w = this.getWidth(); // colorTexture width
    final int h = this.getHeight();
    final ByteBuffer pixels = BufferUtils.newByteBuffer(w * h * 4);
    Gdx.gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1); // no row padding on readback
    Gdx.gl.glReadPixels(0, 0, w, h, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixels);
    pixels.clear(); // rewind so the caller can read from the start
    return pixels;
}
I also don't have a buggy device with me to test right now :(
Thank you!
I had the exact same problem. I experienced it on the Galaxy Ace, the Galaxy Y, and some other devices.
After a lot of testing I found out that POT textures weren't even required; keeping the texture size at 64-pixel increments did the trick. So, say I have a 122x53 texture: I need to convert it to 128x64, and so on.
Below is the function I use to get a valid texture dimension. Call it for both the width and the height.
/**
 * Some GPUs such as the "VideoCore IV HW" on the Samsung Galaxy Ace
 * require texture (FBO) sizes to be in '64' increments (WTF!!!!)
 *
 * @param dimension
 *            Base dimension to calculate
 * @return Resolved 64-increment dimension
 */
public static int calculate64Dimension(final int dimension)
{
    return (((dimension - 1) >> 6) << 6) + 64;
}
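Applied to the 507x800 FBO from the question, this would give:

int fboWidth  = calculate64Dimension(507); // -> 512
int fboHeight = calculate64Dimension(800); // -> 832
// Create the FBO at 512x832, draw the 507x800 content into its corner,
// and read back only the 507x800 sub-rectangle afterwards.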
I'm new to Android game development; I started with AndEngine and am facing a problem while using createTiledFromAsset.
The code where I'm getting the problem is:
@Override
public void onLoadResources() {
    mBitmapTextureAtlas = new BitmapTextureAtlas(128, 128,
            TextureOptions.BILINEAR);
    BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
    mPlayerTextureRegion = BitmapTextureAtlasTextureRegionFactory
            .createTiledFromAsset(this.mBitmapTextureAtlas, this,
                    "move.png", 0, 0, 10, 1);
    mEngine.getTextureManager().loadTexture(mBitmapTextureAtlas);
}
@Override
public Scene onLoadScene() {
    mEngine.registerUpdateHandler(new FPSLogger());
    mMainScene = new Scene();
    mMainScene
            .setBackground(new ColorBackground(0.09804f, 0.6274f, 0.8784f));
    player = new AnimatedSprite(0, 0, mPlayerTextureRegion);
    mMainScene.attachChild(player);
    return mMainScene;
}
I don't understand the error. My BitmapTextureAtlas is 128x128, and each tile coming from createTiledFromAsset should be about 78x85: the arguments I pass ask for 10 columns and 1 row, and my source image is 779x85, so splitting the width into 10 parts gives 779/10 = 78 (approx.) for each tile's width, and with a single row, 85/1 = 85 for its height. Each tile to be placed on the BitmapTextureAtlas is therefore 78x85, and the atlas itself is 128x128, so why the error saying 'Supplied pTextureAtlasSource must not exceed bounds of Texture'?
What is happening here? Or am I misunderstanding these functions? If I'm wrong, how does createTiledFromAsset actually work?
I ran into the same problem: I am using a large sprite sheet and tried to work with it the same way. After searching, I found a clue in this tutorial: getting-started-working-with-images. I realized that the width and height used in
BitmapTextureAtlas(WIDTH, HEIGHT, TextureOptions.BILINEAR);
must be at least as large as the image. For example, if I use a 1200x100 sprite sheet, I must use a width and height larger than 1200x100:
BitmapTextureAtlas(2048, 128, TextureOptions.BILINEAR);
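Applied to the question's 779x85 sprite sheet, the fix would be an atlas large enough to hold the whole sheet; 1024x128 is the smallest power-of-two size that fits it (a sketch, reusing the question's field names):

mBitmapTextureAtlas = new BitmapTextureAtlas(1024, 128,
        TextureOptions.BILINEAR);
mPlayerTextureRegion = BitmapTextureAtlasTextureRegionFactory
        .createTiledFromAsset(mBitmapTextureAtlas, this, "move.png", 0, 0, 10, 1);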
If I understand BitmapTextureAtlas correctly, you are trying to put a 779x85 image into the small space of 128x128. A TextureAtlas is a large canvas on which you are supposed to place many images. These images are later accessed using an object called TextureRegion, which basically specifies the size and coordinates of the smaller picture on the canvas. The method createTiledFromAsset probably copies the original image 1:1 onto the TextureAtlas and saves the coordinates of the tiles.
Please note that a TextureRegion has nothing to do with the image data itself; it is merely a 'pointer' to the place on the TextureAtlas where the image is stored.
To get the idea of what a TextureAtlas actually is, look at the awesome pictures at the bottom of this page:
http://www.blackpawn.com/texts/lightmaps/default.html
When I activate mipmapping on an uncompressed texture, everything works perfectly.
When I do it on an ETC1 texture, the texture is blank, most likely because the complete set of mipmaps was not given.
The code is very simple and works on iPhone (with PVR compression, of course).
It doesn't work on Android. The mipmaps were built with an external tool and pasted together.
I stop generating mipmaps at a size of 4, because glCompressedTexImage2D returns an OpenGL error if I try to use smaller mipmap levels.
for(u32 i=0; i<=levels; i++)
{
    size = KC_TexByte(pagex, pagey, tex_type); // byte size of this mip level
    glCompressedTexImage2D(GL_TEXTURE_2D, i, type, pagex, pagey, 0, size, ptr);
    pagex = MAX(pagex/2, 4); // never go below 4x4
    pagey = MAX(pagey/2, 4);
    ptr += size; // advance to the next level's data
    KC_Error(); // test OpenGL error
}
The reason your texture is blank is that the mipmap chain is required to go all the way down to 1x1.
I would imagine that the error you're getting with small compressed textures is because the texture format you're attempting to use (ETC1?) doesn't support those sizes. You'd have to use non-compressed images at those small sizes...
Thanks, but your solution is not the right one; I found another solution.
You're right that the complete mipmap chain is required, down to 1x1.
You're wrong about mixing formats: the format can't differ between mipmap levels.
The right way is:
Go all the way down to 1x1.
Keep in mind the data is compressed in blocks, so the size in bytes does not divide by 4 at each step; below 8x8 (i.e. 4x4, 2x2, 1x1) it stays at the same value, a single block.
sx = width in pixels
sy = height in pixels
bytes = ((sx+3)/4) * ((sy+3)/4) * 8; // 8 bytes per 4x4 ETC1 block (4 bits per pixel)
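In Android/Java terms the upload loop could look like this (a sketch, not the original poster's code; android.opengl.ETC1.getEncodedDataSize computes ((w+3)/4)*((h+3)/4)*8 for you, and 'data' is assumed to hold all levels back to back, largest first):

import java.nio.ByteBuffer;

import android.opengl.ETC1;
import android.opengl.GLES20;

// numLevels is the full mip count down to 1x1, e.g. 11 for a 1024x1024 base.
static void uploadEtc1MipChain(ByteBuffer data, int baseWidth, int baseHeight, int numLevels) {
    int w = baseWidth, h = baseHeight;
    for (int level = 0; level < numLevels; level++) {
        int size = ETC1.getEncodedDataSize(w, h); // bytes in this level
        GLES20.glCompressedTexImage2D(GLES20.GL_TEXTURE_2D, level,
                ETC1.ETC1_RGB8_OES, w, h, 0, size, data);
        data.position(data.position() + size);    // step to the next level's data
        w = Math.max(w / 2, 1);                    // halve all the way down to 1x1
        h = Math.max(h / 2, 1);
    }
}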
for(u32 i=0; i<=levels; i++)
Seems you'd want i < levels instead of <=.