Sprite dimensions in a generic way not working - Android

I have to place a new sprite on a background sprite. The background sprite's dimensions are set according to the device's dimensions, obtained through the pixel format. I am using AndEngine with a ratio resolution policy as well, but the problem is a bit complex. I tried to position by pixels. Say the Galaxy S3 has dimensions 720x1280 and the required sprite location is (584, 608), so I set it as (CAMERA_WIDTH/1.233f, CAMERA_HEIGHT/2.112f).
But the HTC Xperia has dimensions 320x480, so the position computed from (CAMERA_WIDTH/1.233f, CAMERA_HEIGHT/2.112f) is (259.5, 227.27), which is wrong for that display: when I set it to (244, 172) on the Xperia it works perfectly. Please help.

In the method:
@Override
public Engine onLoadEngine() {
    this.camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
    return new Engine(new EngineOptions(true, ScreenOrientation.LANDSCAPE,
            new FillResolutionPolicy(), this.camera)
            .setNeedsSound(true).setNeedsMusic(true));
}
The parameter new FillResolutionPolicy() automatically resizes the width and height; you only need to lay things out for one resolution and the app runs on any smartphone.

Related

Image data from Android camera2 API flipped & squished on Galaxy S5

I am implementing an app that uses real-time image processing on live images from the camera. It was working, with limitations, using the now deprecated android.hardware.Camera; for improved flexibility & performance I'd like to use the new android.hardware.camera2 API. I'm having trouble getting the raw image data for processing however. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware).
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(class), the recommended surface to obtain image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(mSurfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried them all (plugged in for 'format' above); all support the set resolution according to getOutputSizes(format), but none of them gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result -- I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled significantly horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears to not crop the image in any way, but instead scale it non-uniformly, to reach the requested frame size. The images have the correct proportions when requesting a 16:9 format (of which there are only two higher-res ones, unfortunately). Seems a bit odd to me -- it doesn't appear to happen when requesting JPEG, or with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats, but instead have to find the sensor size first, find formats with the same aspect ratio, then downsample the image yourself (as needed)...
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
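Following that reasoning, here is a sketch of the workaround described above: pick a YUV_420_888 output size whose aspect ratio matches the sensor's active array. The helper name and the tolerance value are illustrative, not from the original post:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.util.Size;

// Illustrative helper: avoid non-uniform scaling by requesting a frame size
// with the same aspect ratio as the sensor's active pixel array.
static Size chooseSensorRatioSize(CameraCharacteristics info) {
    Rect active = info.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
    float sensorRatio = (float) active.width() / active.height();
    StreamConfigurationMap map = info.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size best = null;
    for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
        float ratio = (float) s.getWidth() / s.getHeight();
        if (Math.abs(ratio - sensorRatio) < 0.01f
                && (best == null || s.getWidth() > best.getWidth())) {
            best = s; // keep the largest matching size found so far
        }
    }
    return best; // may be null; fall back to downsampling yourself if so
}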
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment it out, and then scale the image up yourself when displaying it.
int[] rgba = new int[width * height];
//getImage(rgba);
nativeLoader.convertImage(width, height, data, rgba); // the author's own native YUV-to-RGBA converter
Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    //canvas.drawBitmap(bmp, 0, 0, null); //configureTransform(width, height), null);
    //canvas.drawBitmap(bmp, configureTransform(width, height), null);
    // Draw the 320x240 source rect scaled up to the destination rect.
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    //canvas.drawBitmap(bmp, (canvas.getWidth() - 320) / 2, (canvas.getHeight() - 240) / 2, null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.
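As a related aside (not from either answer): when reading the Y plane from an ImageReader, the row stride may be larger than the image width, so a stride-aware copy avoids another class of skew. An illustrative sketch, assuming YUV_420_888 (whose Y plane is guaranteed to have a pixel stride of 1):

// Illustrative: copy the Y plane row by row, honoring the row stride.
Image img = reader.acquireLatestImage();
Image.Plane yPlane = img.getPlanes()[0];
ByteBuffer buf = yPlane.getBuffer();
int rowStride = yPlane.getRowStride();
int width = img.getWidth();
int height = img.getHeight();
byte[] gray = new byte[width * height];
for (int row = 0; row < height; row++) {
    buf.position(row * rowStride); // skip any padding at the end of each row
    buf.get(gray, row * width, width);
}
img.close();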

Get OpenGL max texture size

I'm developing an Android app that's going to work with bitmaps extensively and I'm looking for a reliable way to get the maximum texture size for OpenGL on different devices.
I know the guaranteed minimum is 2048x2048, but that's not good enough, since there are already tablets out there with much higher resolutions (2560x1600, for example).
So is there a reliable way to get this information?
So far I've tried:
Canvas.getMaximumBitmapWidth() (Returns 32766, instead of 2048)
GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE ...) (Returns 0)
I'm working with minimum-sdk = 15 (ICS) and I'm testing it on an Asus Transformer TF700T Infinity.
Does anyone know another way to get it?
Or will I have to compile a list of known GPUs with their max canvas size?
Try using this code:
int[] maxTextureSize = new int[1];
GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
maxTextureSize[0] then holds the GPU's texture size limit, such as 4096 or 8192. Remember to run this piece of code on a thread with a current OpenGL context (for example, inside a GLSurfaceView.Renderer callback); otherwise you will get zero.
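For instance, here is a minimal sketch of querying the limit from a place where a GL context is guaranteed to be current (the class name is illustrative):

import android.opengl.GLES10;
import android.opengl.GLSurfaceView;
import android.util.Log;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Illustrative renderer: a GL context exists by the time onSurfaceCreated runs.
class MaxTextureSizeRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        int[] maxTextureSize = new int[1];
        GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
        Log.d("GL", "GL_MAX_TEXTURE_SIZE = " + maxTextureSize[0]); // e.g. 4096
    }

    @Override public void onSurfaceChanged(GL10 gl, int width, int height) { }

    @Override public void onDrawFrame(GL10 gl) { }
}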
This will give you the maximum height allowed:
Canvas canvas = new Canvas();
int maxHeight = canvas.getMaximumBitmapHeight() / 8;

getDesiredMinimumHeight/Width() in WallpaperManager

I looked at these two functions in the documentation here.
I want to get the desired wallpaper dimensions. Running these functions on an SGS3 (1280x720) with the stock launcher, I got 1280x1280 for both getDesiredMinimumWidth() and getDesiredMinimumHeight(); the same with a Note 3 (1920x1080), where I got 1920x1920.
I want to know the desired wallpaper aspect ratio for the device, and I thought I would get it from these two functions.
Both devices' stock launchers have a static background image at their respective screen resolutions, so why doesn't getDesiredMinimumWidth() give me 1280 or 1080 for each device respectively?
How do I find the proper ratio for the device?
This is the intended result of these methods; the code used in the WallpaperManager class is:
return sGlobals.mService.getHeightHint();
and
return sGlobals.mService.getWidthHint();
It isn't mentioned anywhere why they return the same value, but to get the true values of WxH, you should use:
Point displaySize = new Point();
getWindowManager().getDefaultDisplay().getRealSize(displaySize);
and refer to the values with int width = displaySize.x and int height = displaySize.y
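Since the question is ultimately about the ratio, here is a short illustrative continuation (assumed to run inside an Activity):

import android.graphics.Point;

// Illustrative: derive the screen's true aspect ratio from the real size.
Point displaySize = new Point();
getWindowManager().getDefaultDisplay().getRealSize(displaySize);
float ratio = (float) displaySize.x / (float) displaySize.y;
// e.g. 720/1280 = 0.5625 on the SGS3 and 1080/1920 = 0.5625 on the Note 3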

android & libgdx - disable blurry rendering

I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering:
For some reason, all images (textures) on the Android device look a little blurred when using libgdx. This also includes text (fonts).
For text, I thought it was because I use bitmap fonts, but I can't find an alternative. I've found a library called "gdx-stb-truetype", but I can't find where to download it or how to use it.
For normal images, even when I show the entire image without any scaling, I expect it to look as sharp as it does on a computer's screen, especially given such a good screen on the device (a Galaxy Nexus).
I've tried to turn anti-aliasing off, using the following code:
final AndroidApplicationConfiguration androidApplicationConfiguration=new AndroidApplicationConfiguration();
androidApplicationConfiguration.numSamples=0; //tried the value of 1 too.
...
I've also tried setting the scaling filter to various methods, but with no luck. Example:
texture.setFilter(TextureFilter.Nearest,TextureFilter.Nearest);
As a test, I found a sharp image that exactly matches the visible resolution on the device (720x1184 for the Galaxy Nexus, because of the buttons bar) and set it as the background of the libgdx app. Of course, I had to add extra blank space in order for the texture to be loaded, so the final size of the image (content plus empty space) is still a power of two in both width and height (1024x2048 in this case).
In the desktop app it looks fine; on the device it looks blurred.
A weird thing I've noticed: when I change the device's orientation (horizontal <=> vertical), for the very short time before the rotation animation starts, I can see both the image and the text very well.
Surely libgdx can handle this, since the OpenGL part of Android's api-tests project shows images just fine.
Can anyone please help me?
@user1130529: I do use SpriteBatch. Also, here's what I do to set the viewport. It occurs whether or not I choose to keep the aspect ratio.
public static final int VIRTUAL_WIDTH = 720;
public static final int VIRTUAL_HEIGHT = 1280 - 96;
private static final float ASPECT_RATIO = (float) VIRTUAL_WIDTH / (float) VIRTUAL_HEIGHT;
...
@Override
public void resize(final int width, final int height) {
    // calculate the new viewport
    if (!KEEP_ASPECT_RATIO) {
        _viewport = new Rectangle(0, 0, Gdx.app.getGraphics().getWidth(), Gdx.app.getGraphics().getHeight());
        Gdx.app.log("DEBUG", "size:" + _viewport);
        return;
    }
    final float currentAspectRatio = (float) width / (float) height;
    float scale = 1f;
    final Vector2 crop = new Vector2(0f, 0f);
    if (currentAspectRatio > ASPECT_RATIO) {
        // screen is wider than the virtual aspect: bars left/right
        scale = (float) height / (float) VIRTUAL_HEIGHT;
        crop.x = (width - VIRTUAL_WIDTH * scale) / 2f;
    } else if (currentAspectRatio < ASPECT_RATIO) {
        // screen is taller than the virtual aspect: bars top/bottom
        scale = (float) width / (float) VIRTUAL_WIDTH;
        crop.y = (height - VIRTUAL_HEIGHT * scale) / 2f;
    } else {
        scale = (float) width / (float) VIRTUAL_WIDTH;
    }
    final float w = VIRTUAL_WIDTH * scale;
    final float h = VIRTUAL_HEIGHT * scale;
    _viewport = new Rectangle(crop.x, crop.y, w, h);
    Gdx.app.log("DEBUG", "viewport:" + _viewport + " originalSize:" + VIRTUAL_WIDTH + "," + VIRTUAL_HEIGHT
            + " aspectRatio:" + ASPECT_RATIO + " currentAspectRatio:" + currentAspectRatio);
}
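For completeness, the _viewport computed above is typically applied with glViewport before drawing each frame; an illustrative one-liner (not from the original post):

// Illustrative: apply the computed letterbox viewport before rendering.
Gdx.gl.glViewport((int) _viewport.x, (int) _viewport.y,
        (int) _viewport.width, (int) _viewport.height);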
Try this (assuming textureRegion is your TextureRegion instance):
textureRegion.getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
Try the following:
texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
There are several TextureFilter types. I assume the Linear one (is that the default?) is what blurs.
If you have the Chainfire 3D application, or another one that reduces textures or converts them to 16-bit, turn it off; that worked for me when I had the same problem.

AndEngine getting error: Supplied pTextureAtlasSource must not exceed bounds of Texture

I'm new to Android game development and started with AndEngine; I'm facing a problem when using createTiledFromAsset.
The code where I'm getting the problem is:
@Override
public void onLoadResources() {
    mBitmapTextureAtlas = new BitmapTextureAtlas(128, 128, TextureOptions.BILINEAR);
    BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
    mPlayerTextureRegion = BitmapTextureAtlasTextureRegionFactory
            .createTiledFromAsset(this.mBitmapTextureAtlas, this, "move.png", 0, 0, 10, 1);
    mEngine.getTextureManager().loadTexture(mBitmapTextureAtlas);
}
@Override
public Scene onLoadScene() {
    mEngine.registerUpdateHandler(new FPSLogger());
    mMainScene = new Scene();
    mMainScene.setBackground(new ColorBackground(0.09804f, 0.6274f, 0.8784f));
    player = new AnimatedSprite(0, 0, mPlayerTextureRegion);
    mMainScene.attachChild(player);
    return mMainScene;
}
I don't understand the error: my BitmapTextureAtlas is 128x128, and each tile coming from createTiledFromAsset should be about 78x85. The arguments passed specify 10 columns and 1 row, and my source image is 779x85, so splitting the width into 10 parts gives 779/10 ≈ 78 for each tile's width, and with 1 row the height is 85/1 = 85. Each 78x85 tile placed on the BitmapTextureAtlas therefore fits well within its 128x128 size, so why the error saying the supplied pTextureAtlasSource must not exceed the bounds of the Texture?
What is happening here? Am I misunderstanding these functions? If so, how does createTiledFromAsset actually work?
I ran into the same problem: I am using a large sprite sheet and tried to use it. After searching, I found a clue in this tutorial: getting-started-working-with-images, and I realized that the width and height used in:
BitmapTextureAtlas(WIDTH, HEIGHT, TextureOptions.BILINEAR);
must be at least as large as the whole image, not one tile. For example, if I use a 1200x100 sprite sheet, I must choose a width and height of at least 1200x100, rounded up to power-of-two dimensions:
BitmapTextureAtlas(2048, 128, TextureOptions.BILINEAR);
If I understand BitmapTextureAtlas correctly, you are trying to put a 779x85 image into the small space of 128x128. A TextureAtlas is a large canvas on which you are supposed to place many images. These images are later accessed using an object called a TextureRegion, which basically specifies the size and coordinates of the smaller picture on the canvas. The method createTiledFromAsset probably copies the original image 1:1 onto the TextureAtlas and saves the coordinates of the tiles, so the whole sheet must fit.
Please note that the TextureRegion has nothing to do with the image data itself; it is merely a "pointer" to the place on the TextureAtlas where the image is stored.
To get an idea of what a TextureAtlas actually is, look at the pictures at the bottom of this page:
http://www.blackpawn.com/texts/lightmaps/default.html
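Applying that to the original snippet, a corrected atlas size would look like this (a sketch; 1024x128 is the smallest power-of-two box that fits the 779x85 sheet):

// The atlas must hold the whole 779x85 sheet, not just one 78x85 tile.
mBitmapTextureAtlas = new BitmapTextureAtlas(1024, 128, TextureOptions.BILINEAR);
BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
mPlayerTextureRegion = BitmapTextureAtlasTextureRegionFactory
        .createTiledFromAsset(mBitmapTextureAtlas, this, "move.png", 0, 0, 10, 1);
mEngine.getTextureManager().loadTexture(mBitmapTextureAtlas);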
