I'm using a for loop to load and handle the Sprite objects that make up an on-screen keyboard for a game of hangman. The loop makes it to the 4th iteration and then crashes. The error it gives me says:
Texture must not exceed the bounds of the atlas
This should actually work as all the images are 64x64 and the atlas is declared as such:
this.mAtlas[i] = new BitmapTextureAtlas(this.getTextureManager(),256, 256,TextureOptions.BILINEAR);
I'm using an array of atlases and an array of textures into which I load the images, and then I load the atlas. After that I pass the texture into a custom class that extends Sprite, and finally I attach the loaded sprite to the scene. Here is the whole code for the loop:
for(int i = 0; i < 28; i++)
{
    String name = Integer.toString(i);
    name += ".png";

    this.mAtlas[i] = new BitmapTextureAtlas(this.getTextureManager(), 256, 256, TextureOptions.BILINEAR);
    this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mAtlas[i], this, name, (i * 64) + 5, 0);
    this.mAtlas[i].load();

    if(i % 13 == 0)
    {
        yPos -= 64;
    }

    if(i < 26)
    {
        letterPass = alphabet.substring(i);
    }
    else if(i == 26)
    {
        letterPass = "BackSpace";
    }
    else if(i == 27)
    {
        letterPass = "return";
    }

    letters[i] = new Letter((i * 64) + 5.0f, yPos, this.mTexture[i].getHeight(), this.mTexture[i].getHeight(), this.mTexture[i], this.mEngine.getVertexBufferObjectManager());
    letters[i].setLetter(letterPass);
    mScene.attachChild(letters[i]);
}
The line where the crash occurs is:
this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mAtlas[i], this, name, (i*64) + 5,0);
I can't seem to figure out why it's crashing, and I'd appreciate any help.
Your texture atlas is 256x256 pixels. Your sprites are 64x64 pixels and you create an atlas for each of them... That means you are wasting a lot of space. And it doesn't even work, because on this line:
this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(this.mAtlas[i], this, name, (i*64) + 5,0);
you are placing the texture onto the atlas at position [i * 64 + 5, 0]. I bet it fails on the 4th texture: 3 * 64 + 5 + 64 = 261, which is outside the 256-pixel bounds.
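For reference, here's a minimal sketch of the single-atlas fix (untested, reusing the question's names): pack all 28 regions into one atlas and derive each region's position from its row and column, so no offset can ever leave the atlas. A 512x256 atlas fits 8 columns by 4 rows of 64x64 tiles.
BitmapTextureAtlas atlas = new BitmapTextureAtlas(this.getTextureManager(), 512, 256, TextureOptions.BILINEAR);
for (int i = 0; i < 28; i++) {
    String name = i + ".png";
    int x = (i % 8) * 64; // column within the atlas
    int y = (i / 8) * 64; // row within the atlas
    this.mTexture[i] = BitmapTextureAtlasTextureRegionFactory.createFromAsset(atlas, this, name, x, y);
}
atlas.load();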
I'm trying to convert an YUV image to grayscale, so basically I just need the Y values.
To do so I wrote this little piece of code (with frame being the YUV image):
imageConversionTime = System.currentTimeMillis();
size = frame.getSize();
byte nv21ByteArray[] = frame.getImage();
int lol;
for (int i = 0; i < size.width; i++) {
    for (int j = 0; j < size.height; j++) {
        lol = size.width * j + i;
        yMatrix.put(j, i, nv21ByteArray[lol]);
    }
}
bitmap = Bitmap.createBitmap(size.width, size.height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(yMatrix, bitmap);
imageConversionTime = System.currentTimeMillis() - imageConversionTime;
However, this takes about 13500 ms. I need it to be A LOT faster; on my computer the same conversion takes 8.5 ms in Python. (I work on a Motorola Moto E 4G 2nd generation; it's not super powerful, but it should be enough for converting images, right?)
Any suggestions?
Thanks in advance.
First of all, I would assign size.width and size.height to local variables; I don't think the compiler will hoist them out of the loops by default, but I am not sure about this.
Furthermore, create a plain int[] for the result instead of going through a Matrix.
Then you could do something like this:
int[] grayScalePixels = new int[size.width * size.height];
int cntPixels = 0;
In your inner loop, set:
int y = nv21ByteArray[lol] & 0xFF; // mask to get the unsigned luma value
grayScalePixels[cntPixels] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque ARGB gray
cntPixels++;
To get your final image do the following:
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, size.width, size.height, Bitmap.Config.ARGB_8888);
Hope it works properly (I have not tested it, but at least the shown principle should be applicable: rely on a plain int[] instead of a Matrix).
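Pulled together, a hedged end-to-end sketch (untested; it assumes an NV21-style layout where the first width*height bytes are the Y plane, which also lets the per-pixel index arithmetic disappear):
int w = size.width, h = size.height;            // hoisted out of the loops
byte[] nv21ByteArray = frame.getImage();
int[] grayScalePixels = new int[w * h];
for (int p = 0; p < w * h; p++) {
    int y = nv21ByteArray[p] & 0xFF;            // unsigned luma
    grayScalePixels[p] = 0xFF000000 | (y << 16) | (y << 8) | y; // opaque ARGB gray
}
Bitmap grayScaleBitmap = Bitmap.createBitmap(grayScalePixels, w, h, Bitmap.Config.ARGB_8888);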
Probably 2 years too late but anyways ;)
To convert to gray scale, all you need to do is set the u/v values to 128 and leave the y values as is. Note that this code is for YUY2 format. You can refer to this document for other formats.
private void convertToBW(byte[] ptrIn, String filePath) {
    // change all u and v values to 127 (cause 128 will cause byte overflow)
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    for (int i = 0, ptrInLength = ptrOut.length; i < ptrInLength; i++) {
        if (i % 2 != 0) {
            ptrOut[i] = (byte) 127;
        }
    }
    convertToJpeg(ptrOut, filePath);
}
For NV21/NV12, the chroma plane is not interleaved with the luma but starts two-thirds of the way into the buffer (after the width*height luma bytes), so I think the loop would change to:
for (int i = ptrOut.length * 2 / 3, ptrInLength = ptrOut.length; i < ptrInLength; i++) {
    ptrOut[i] = (byte) 127;
}
Note: (didn't try this myself)
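Expanded into a complete method, the NV21 variant might look like this (equally untested; it assumes the interleaved VU plane starts at width * height and reuses the convertToJpeg helper from above):
private void convertToBWNv21(byte[] ptrIn, int width, int height, String filePath) {
    byte[] ptrOut = Arrays.copyOf(ptrIn, ptrIn.length);
    // neutralize the whole chroma plane, which follows the luma plane
    for (int i = width * height; i < ptrOut.length; i++) {
        ptrOut[i] = (byte) 127;
    }
    convertToJpeg(ptrOut, filePath);
}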
Also, I would suggest profiling your Utils method and the createBitmap call separately.
I'm new to LibGDX and was trying to implement a parallax background.
Everything went well until I faced this issue: I get some stripes when scrolling the background. You can see them in the attached image:
So I looked deeper into the issue and figured out that this is some sort of texture bleeding. But my textures already have the [Linear, Nearest] filter set, and TexturePacker uses duplicatePadding. I don't know any other methods to solve this issue. Please help!
Here's some of my code:
TexturePacker
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.minWidth = 256;
settings.minHeight = 256;
settings.duplicatePadding = true;
TexturePacker.process(settings, "../../design", "./", "textures");
AssetLoader
textureAtlas = new TextureAtlas(Gdx.files.internal("textures.atlas"));
for (int i = 0; i < 2; i++) {
    Background.skies.add(textureAtlas.findRegion("background/sky", i));
    Background.skies.get(i).getTexture().setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Nearest);
}
for (int i = 0; i < 2; i++) {
    Background.clouds.add(textureAtlas.findRegion("background/cloud", i));
    Background.clouds.get(i).getTexture().setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Nearest);
}
for (int i = 0; i < 8; i++) {
    Background.cities.add(textureAtlas.findRegion("background/city", i));
    Background.cities.get(i).getTexture().setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Nearest);
}
Background.moon = textureAtlas.findRegion("background/moon");
Background.forest = textureAtlas.findRegion("background/forest");
Background.road = textureAtlas.findRegion("background/road");
Background.moon.getTexture().setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Nearest);
Background.forest.getTexture().setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Nearest);
Background.road.getTexture().setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Nearest);
BackgroundDrawer
private void drawParallaxTextureList(Batch batch, List<TextureAtlas.AtlasRegion> list,
                                     float moveX, float posY) {
    for (int i = 0; i < list.size(); i++) {
        boolean needDraw = false;
        float shift = GameScreen.VIEWPORT_WIDTH * i;
        float drawX = 0.0f;
        if (shift - moveX <= -(GameScreen.VIEWPORT_WIDTH)) { // If it's behind the screen
            if (i == 0) { // If it's the first element
                if (moveX >= GameScreen.VIEWPORT_WIDTH * (list.size() - 1)) { // We need to show the first after the last
                    needDraw = true;
                    drawX = (GameScreen.VIEWPORT_WIDTH) - (moveX - ((GameScreen.VIEWPORT_WIDTH) * (list.size() - 1)));
                }
            }
        } else if (shift - moveX < (GameScreen.VIEWPORT_WIDTH - 1)) {
            needDraw = true;
            drawX = shift - moveX;
        }
        if (needDraw) {
            batch.draw(list.get(i), (int) drawX, (int) posY);
        }
    }
}
NOTE: I don't use any camera for drawing right now; I only use a FitViewport with a size of 1920x1280. Also, the bleeding sometimes appears even at FullHD resolution.
UPDATE: Setting both the minification and magnification filters to Nearest, increasing paddingX, and disabling antialiasing solved the issue, but the final image became too ugly! Is there a way to avoid disabling antialiasing? Without it, downscaling looks awful.
Try setting both the min and mag filters to Nearest:
.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
In GUI TexturePacker there is an option to extrude graphics, which repeats every border pixel of a texture. Then you can set both filters to Linear:
.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
but unfortunately I cannot see this option in the TexturePacker.Settings object you are using. You can try setting Linear for both, but I'm pretty sure it won't work (a Linear filter blends the nearest 4 texels to generate one, so it will probably still produce these issues). So maybe try the GUI TexturePacker with the extrude option.
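If you want to stay with the code-driven packer, a hedged alternative: duplicatePadding already copies border pixels into the padding (much like extrude), so increasing paddingX and paddingY, which are real fields of TexturePacker.Settings, may be enough. The values below are guesses:
TexturePacker.Settings settings = new TexturePacker.Settings();
settings.paddingX = 4;            // more room between regions
settings.paddingY = 4;
settings.duplicatePadding = true; // fill the padding with duplicated edge pixels
TexturePacker.process(settings, "../../design", "./", "textures");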
A few possible reasons for this artifact:
Maybe the padding is not big enough when the sprite resolution is shrunk down. Try changing your texture packer's filterMin to MipMapLinearNearest. And also try increasing the size of paddingX and paddingY.
Maybe you're seeing dim or brightened pixels at the edges of your sprites because you're not using pre-multiplied alpha and your texture's background color (where its alpha is zero) is white. Try setting premultiplyAlpha: true. If you do this, you also need to change the SpriteBatch's blend function to (GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA) to render properly (see the sketch after this list).
You seem to be rounding your sprite positions and sizes to integers when you draw them. This would work in a pixel-perfect game, where you're sure the sprites are rendered at exactly 1:1 resolution to the screen. But once the screen size does not match exactly, your rounding might produce gaps that are less than 1 pixel wide, which look like semi-transparent pixels.
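For the pre-multiplied alpha point above, the blend-function change is a one-liner; this sketch only assumes you draw with a standard SpriteBatch:
// Pre-multiplied alpha: the source color is already scaled by alpha,
// so the blend function must not multiply by it again.
batch.setBlendFunction(GL20.GL_ONE, GL20.GL_ONE_MINUS_SRC_ALPHA);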
I am new to OpenCV and am trying to count the number of objects in an image. I have done this before using the MATLAB Image Processing Toolbox and adapted the same approach in OpenCV (Android) as well.
The first step was to convert the image to grayscale, then threshold it, and then count the number of blobs. In MATLAB there is a command, bwlabel, which gives the number of blobs. I couldn't find such a thing in OpenCV (again, I am a noob at OpenCV as well as Android).
Here is my code,
//JPG to Bitmap to MAT
Bitmap i = BitmapFactory.decodeFile(imgPath + "mms.jpg");
Bitmap bmpImg = i.copy(Bitmap.Config.ARGB_8888, false);
Mat srcMat = new Mat ( bmpImg.getHeight(), bmpImg.getWidth(), CvType.CV_8UC3);
Utils.bitmapToMat(bmpImg, srcMat);
//convert to gray scale and save image
Mat gray = new Mat(srcMat.size(), CvType.CV_8UC1);
Imgproc.cvtColor(srcMat, gray, Imgproc.COLOR_RGB2GRAY,4);
//write bitmap
Boolean bool = Highgui.imwrite(imgPath + "gray.jpg", gray);
//thresholding
Mat threshed = new Mat(bmpImg.getWidth(),bmpImg.getHeight(), CvType.CV_8UC1);
Imgproc.adaptiveThreshold(gray, threshed, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 75, 5);//15, 8 were original tests. Casey was 75,10
Core.bitwise_not(threshed, threshed);
Utils.matToBitmap(threshed, bmpImg);
//write bitmap
bool = Highgui.imwrite(imgPath + "threshed.jpg", threshed);
Toast.makeText(this, "Thresholded image saved!", Toast.LENGTH_SHORT).show();
In the next step, I tried to fill the holes and letters using dilation followed by erosion, but the blobs get attached to each other, which will ultimately give a wrong count. Tuning the parameters for dilation and erosion involves a tradeoff between filling holes and getting the blobs attached to each other.
Here is the code,
//morphological operations
//dilation
Mat dilated = new Mat(bmpImg.getWidth(),bmpImg.getHeight(), CvType.CV_8UC1);
Imgproc.dilate(threshed, dilated, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size (16, 16)));
Utils.matToBitmap(dilated, bmpImg);
//write bitmap
bool = Highgui.imwrite(imgPath + "dilated.jpg", dilated);
Toast.makeText(this, "Dilated image saved!", Toast.LENGTH_SHORT).show();
//erosion
Mat eroded = new Mat(bmpImg.getWidth(),bmpImg.getHeight(), CvType.CV_8UC1);
Imgproc.erode(dilated, eroded, Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size(15, 15)));
Utils.matToBitmap(eroded, bmpImg);
//write bitmap
bool = Highgui.imwrite(imgPath + "eroded.jpg", eroded);
Toast.makeText(this, "Eroded image saved!", Toast.LENGTH_SHORT).show();
Because sometimes my M&Ms might be just next to each other! ;)
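(Aside: the dilation-followed-by-erosion pair above is what OpenCV calls a morphological closing, so it can also be written as a single call; a hedged sketch using the question's Mats:)
// Closing = dilation followed by erosion with the same kernel.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new org.opencv.core.Size(15, 15));
Mat closed = new Mat();
Imgproc.morphologyEx(threshed, closed, Imgproc.MORPH_CLOSE, kernel);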
I also tried to use Hough Circles but the result is very unreliable (tested with coin images as well as real coins)
Here is the code,
//hough circles
Mat circles = new Mat();
// parameters
int iCannyUpperThreshold = 100;
int iMinRadius = 20;
int iMaxRadius = 400;
int iAccumulator = 100;
Imgproc.HoughCircles(gray, circles, Imgproc.CV_HOUGH_GRADIENT,
1.0, gray.rows() / 8, iCannyUpperThreshold, iAccumulator,
iMinRadius, iMaxRadius);
// draw
if (circles.cols() > 0)
{
Toast.makeText(this, "Coins : " +circles.cols() , Toast.LENGTH_LONG).show();
}
else
{
Toast.makeText(this, "No coins found", Toast.LENGTH_LONG).show();
}
The problem with this approach is that the algorithm is limited to near-perfect circles only (AFAIK), so it doesn't work well when I try to scan and count M&Ms or coins lying on my desk (because the angle of the device changes). With this approach I sometimes get fewer coins detected and sometimes more (I don't get why more??).
On scanning this image, the app sometimes counts 19 coins and sometimes 38... I know there are other features which may be detected as circles, but I totally don't get why 38..?
So my questions...
Is there a better way to fill holes without joining adjacent blobs?
How do I count the number of objects accurately? I don't want to limit my app to counting only circles with HoughCircles approach.
FYI : OpenCV-2.4.9-android-sdk. Kindly keep in mind that I am a newbie in OpenCV and Android too.
Any help is much appreciated.
Thanks & Cheers!
Jainam
To proceed, we take the threshold image you have generated as input and modify it further. The present code is in C++, but I guess you can easily port it to the Android platform.
Instead of dilation or blurring, you can try flood fill, and then apply the contour detection algorithm.
The code for the above is:
Mat dst = imread($path to the threshold image); // image should be a single channel black and white image
imshow("dst", dst);

// An image with size greater than the present object is created
cv::Mat mask = cv::Mat::zeros(dst.rows + 2, dst.cols + 2, CV_8U);
cv::floodFill(dst, mask, cv::Point(0,0), 255, 0, cv::Scalar(), cv::Scalar(), 4 + (255 << 8) + cv::FLOODFILL_MASK_ONLY);
erode(mask, mask, Mat());

// Now to remove the outer boundary
rectangle(mask, Rect(0, 0, mask.cols, mask.rows), Scalar(255,255,255), 2, 8, 0);
imshow("Mask", mask);

Mat copy;
mask.copyTo(copy);

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours( copy, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0) );

vector<vector<Point> > contours_poly( contours.size() );
vector<Rect> boundRect( contours.size() );
vector<Point2f> center( contours.size() );
vector<float> Distance( contours.size() );
vector<float> radius( contours.size() );

Mat drawing = cv::Mat::zeros(mask.rows, mask.cols, CV_8U);
int num_object = 0;
for( int i = 0; i < contours.size(); i++ ){
    approxPolyDP( Mat(contours[i]), contours_poly[i], 3, true );
    // To get rid of the smaller objects and the outer rectangle created
    // because of the additional mask image, we enforce a lower limit on area
    // to remove noise and an upper limit to remove the outer border.
    if (contourArea(contours_poly[i]) > (mask.rows*mask.cols/10000) && contourArea(contours_poly[i]) < mask.rows*mask.cols*0.9){
        boundRect[i] = boundingRect( Mat(contours_poly[i]) );
        minEnclosingCircle( (Mat)contours_poly[i], center[i], radius[i] );
        circle(drawing, center[i], (int)radius[i], Scalar(255,255,255), 2, 8, 0);
        rectangle(drawing, boundRect[i], Scalar(255,255,255), 2, 8, 0);
        num_object++;
    }
}
cout << "No. of objects detected = " << num_object << endl;
imshow("drawing", drawing);

waitKey(2);
char key = (char) waitKey(20);
if(key == 32){
    // You can save your images here using a space
}
I hope this helps you in solving your problem
Just check it out,
Blur source.
Threshold binary inverted on gray.
Find contours, note that you should use CV_RETR_EXTERNAL as contour retrieval mode.
You can take the contours size as your object count.
Code:
Mat tmp, thr;
Mat src = imread("img.jpg", 1);
blur(src, src, Size(3,3));
cvtColor(src, tmp, CV_BGR2GRAY);
threshold(tmp, thr, 220, 255, THRESH_BINARY_INV);
imshow("thr", thr);

vector< vector <Point> > contours; // Vector for storing contours
vector< Vec4i > hierarchy;
findContours( thr, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image
for( int i = 0; i < contours.size(); i = hierarchy[i][0] ) // iterate through each top-level contour
{
    Rect r = boundingRect(contours[i]);
    rectangle(src, r, Scalar(0,0,255), 2, 8, 0);
}
cout << "Number of contours = " << contours.size() << endl;
imshow("src", src);
waitKey();
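Since the asker is on Android, here is a hedged translation of the counting core into the OpenCV Java bindings (untested; threshed is the binary inverted image from the question, and findContours modifies its input, so pass a copy if you still need it):
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
// Count external contours only, mirroring the C++ answer above.
Imgproc.findContours(threshed, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
int objectCount = contours.size(); // each external contour is one blob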
I was trying to implement an actually simple action, namely changing the color of a normal bitmap.
Unfortunately, I ran into a few problems. In my case I want a bitmap of grey steel to turn into redder steel, so I wrote some code which gets the color int of each pixel and raises the red value of each one. Now two things happen:
Firstly, it really takes a long, long time to convert all the pixels, even if I use an AsyncTask.
Secondly, when it finishes one cycle, the whole bitmap kind of rotates and multiplies, like in the picture below.
Is there any way to achieve my aim smoothly? The thing is that I often see this action in other apps without problems, so there must be a way to reach my goal.
Thank you!
PS: Please do not be irritated by the comments; they are just attempts to find another way!
"steel image" https://drive.google.com/file/d/0B72QIg-baxzjakJsQkFRZFVFOFU/edit?usp=sharing
public void adjustColor(Bitmap bmp)
{
    /*for(int i=0; i<bmp.getHeight()-1; i++) {
        for(int j=0; j<bmp.getWidth()-1; j++) {
            if(bmp.getPixel(j, i) != Color.TRANSPARENT) {
                if(Color.green(bmp.getPixel(j, i)) <= 175 && Color.green(bmp.getPixel(j, i)) >= 65) {
                    red = Color.red(bmp.getPixel(j, i));
                    green = Color.green(bmp.getPixel(j, i));
                    blue = Color.blue(bmp.getPixel(j, i));
                    if (i == bmp.getHeight()-1) {
                        red = Color.red(bmp.getPixel(j, bmp.getHeight()));
                        green = Color.green(bmp.getPixel(j, bmp.getHeight()));
                        blue = Color.blue(bmp.getPixel(j, bmp.getHeight()));}
                    if ((red + mOfen.heatQ/10) <= 205) red = red + mOfen.heatQ/10;
                    bmp.setPixel(j, i, Color.rgb(red, green, blue));
                }}
        }
    }*/
    for(int h=0; h<bmp.getWidth()*bmp.getHeight(); h++) {
        int x = h - bmp.getWidth()*(((int)h/bmp.getWidth()+1)-1);
        int y = h/bmp.getWidth();
        Log.d("roh", "y " + Integer.toString(y));
        Log.d("roh", "height " + Integer.toString(bmp.getHeight()));
        if(Color.green(bmp.getPixel(x, y)) <= 175 && Color.green(bmp.getPixel(x, y)) >= 65) {
            bmp.setPixel(x, y, Color.WHITE);
            red = Color.red( allpixels[h] );
            green = Color.green( allpixels[h] );
            blue = Color.blue( allpixels[h] );
            /* for(int n=1; n<=10; n++) {
                /*Log.d("roh", "left " + Float.toString(mOfen.rRoh[n-1].left));
                Log.d("roh", "right " + Float.toString(mOfen.rRoh[n-1].right));
                Log.d("roh", "n " + Integer.toString(n));
                Log.d("roh", "x+RohX " + Float.toString(x+RohX));
                Log.d("roh", "top " + Float.toString(mOfen.rRoh[n-1].top));
                Log.d("roh", "bottom " + Float.toString(mOfen.rRoh[n-1].bottom));
                Log.d("roh", "n " + Integer.toString(n));
                Log.d("roh", "y+RohY " + Float.toString(y+RohY)); */
            /* if( mOfen.rRoh[n-1].left < x+RohX && y+RohY > mOfen.rRoh[n-1].top &&
                    mOfen.rRoh[n-1].right > x+RohX && y+RohY < mOfen.rRoh[n-1].bottom) {
                /*if ((mOfen.heat[n-1]) <= 245 && (mOfen.heat[n-1]) > red ) red = mOfen.heat[n-1]; */ if(red<255) red++;
            /* Log.d("red", "red" + Integer.toString(red));
                Log.d("red", Float.toString(x+mOfen.rRoh[n-1].left));
                Log.d("red", Float.toString((mOfen.rRoh[n-1].left)));
            }} */
            allpixels[h] = Color.rgb(red, green, blue);
        }
    }
    copyArrayIntoBitmap(bmp);
    //bmp.setPixels(allpixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
}
public void copyBitmapIntoArray(Bitmap bmp)
{
    allpixels = new int[bmp.getWidth()*bmp.getHeight()];
    int count = 0;
    for(int i=0; i<bmp.getHeight()-1; i++) {
        for(int j=0; j<bmp.getWidth()-1; j++) {
            allpixels[count] = bmp.getPixel(j, i);
            count++;
        }
    }
}

public void copyArrayIntoBitmap(Bitmap bmp)
{
    int count = 0;
    for(int i=0; i<bmp.getHeight()-1; i++) {
        for(int j=0; j<bmp.getWidth()-1; j++) {
            bmp.setPixel(j, i, allpixels[count]);
            count++;
        }
    }
}
Your code and your question describe different operations. It looks like you are doing a little more than just globally making all pixels a bit more red; both the commented and uncommented code in your question have a conditional part that determines how much more red to make a pixel. I also assume this is why a simple LightingColorFilter will not work, since you need to choose which pixels are affected.
Some issues I spotted immediately:
I am betting you are running into a lot of overhead from all the memory allocation and memory copying in this code. W x H is your number of pixels, obviously, but that is also a fairly large number of entities to run an operation on. Your iterative calls to getPixel() and setPixel() are very, very slow, and doing this for each pixel is impractical. You should use the bulk getPixels() and setPixels() if you are going to do it this way; that alone should speed up a large chunk of your process.
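A hedged sketch of the bulk version (untested; heatDelta stands in for the question's mOfen.heatQ / 10 and is an assumption):
int w = bmp.getWidth(), h = bmp.getHeight();
int[] pixels = new int[w * h];
bmp.getPixels(pixels, 0, w, 0, 0, w, h);   // one bulk read
for (int p = 0; p < pixels.length; p++) {
    int c = pixels[p];
    int green = Color.green(c);
    if (green <= 175 && green >= 65) {     // same gate as the question
        int red = Math.min(255, Color.red(c) + heatDelta);
        pixels[p] = Color.argb(Color.alpha(c), red, green, Color.blue(c));
    }
}
bmp.setPixels(pixels, 0, w, 0, 0, w, h);   // one bulk write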
Now if you want a forward compatible performance boost (like one that scales better with better hardware), you could look into RenderScript This allows you to do work on a per pixel level, but across multiple CPU's and GPU's. It's sort of like a map-reduce framework for your image buffers. You will have to write some C, but if you find this component to be used a lot then this will probably help out quite a bit and be pretty snappy (especially for larger images).
Try to use LightingColorFilter, some examples are here:
LightingColorFilter example or how to use the LightingColorFilter to make the image form dark to light
I have a requirement to display somewhat big images on an Android app.
Right now I'm using an ImageView with a source Bitmap.
I understand OpenGL has a certain device-dependent limitation on how big an image's dimensions can be for it to process it. Is there ANY way to display these images (with fixed width, without cropping) regardless of this limit, other than splitting the image into multiple ImageView elements?
Thank you.
UPDATE 01 Apr 2013
Still no luck; so far all suggestions were to reduce image quality. One person suggested it might be possible to bypass this limitation by using the CPU to do the processing instead of the GPU (though it might take more time).
I don't understand: is there really no way to display long images with a fixed width without reducing image quality? I bet there is, and I'd love it if anyone would at least point me in the right direction.
Thanks everyone.
You can use BitmapRegionDecoder to break apart larger bitmaps (requires API level 10). I've written a method that will utilize that class and return a single Drawable that can be placed inside an ImageView:
private static final int MAX_SIZE = 1024;

private Drawable createLargeDrawable(int resId) throws IOException {
    InputStream is = getResources().openRawResource(resId);
    BitmapRegionDecoder brd = BitmapRegionDecoder.newInstance(is, true);
    try {
        if (brd.getWidth() <= MAX_SIZE && brd.getHeight() <= MAX_SIZE) {
            // Decode through the region decoder rather than reusing the
            // InputStream, which may already have been consumed.
            Bitmap b = brd.decodeRegion(new Rect(0, 0, brd.getWidth(), brd.getHeight()), null);
            return new BitmapDrawable(getResources(), b);
        }
        int rowCount = (int) Math.ceil((float) brd.getHeight() / (float) MAX_SIZE);
        int colCount = (int) Math.ceil((float) brd.getWidth() / (float) MAX_SIZE);
        BitmapDrawable[] drawables = new BitmapDrawable[rowCount * colCount];
        for (int i = 0; i < rowCount; i++) {
            int top = MAX_SIZE * i;
            int bottom = i == rowCount - 1 ? brd.getHeight() : top + MAX_SIZE;
            for (int j = 0; j < colCount; j++) {
                int left = MAX_SIZE * j;
                int right = j == colCount - 1 ? brd.getWidth() : left + MAX_SIZE;
                Bitmap b = brd.decodeRegion(new Rect(left, top, right, bottom), null);
                BitmapDrawable bd = new BitmapDrawable(getResources(), b);
                bd.setGravity(Gravity.TOP | Gravity.LEFT);
                drawables[i * colCount + j] = bd;
            }
        }
        LayerDrawable ld = new LayerDrawable(drawables);
        for (int i = 0; i < rowCount; i++) {
            for (int j = 0; j < colCount; j++) {
                ld.setLayerInset(i * colCount + j, MAX_SIZE * j, MAX_SIZE * i, 0, 0);
            }
        }
        return ld;
    }
    finally {
        brd.recycle();
    }
}
The method checks whether the drawable resource is smaller than MAX_SIZE (1024) on both axes. If it is, it just decodes and returns the whole drawable. If it's not, it breaks the image apart, decodes chunks of it, and places them in a LayerDrawable.
I chose 1024 because I believe most available phones will support images at least that large. If you want to find the actual texture size limit for a phone, you have to do some funky stuff through OpenGL, and it's not something I wanted to dive into.
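For what it's worth, the "funky stuff" boils down to a single query, though it must run on a thread with a current GL context (hedged sketch):
// Query the device's maximum texture dimension (GL thread only).
int[] maxTextureSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);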
I wasn't sure how you were accessing your images, so I assumed they were in your drawable folder. If that's not the case, it should be fairly easy to refactor the method to take in whatever parameter you need.
You can use BitmapFactory.Options to reduce the size of the picture. You can use something like this:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 3; //reduce size 3 times
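To complete the sketch, the options only take effect when passed to a decode call; path here is a placeholder:
Bitmap smaller = BitmapFactory.decodeFile(path, options); // decode honoring inSampleSize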
Have you seen how your maps app works? I once made a renderer for maps; you can use the same trick to display your image.
Divide your image into square tiles (e.g. 128x128 pixels). Create a custom ImageView that supports rendering from tiles. Your ImageView knows which part of the bitmap it should show at the moment and displays only the required tiles, loading them from your SD card. Using such a tile map you can display endless images; a rough sketch follows.
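A rough sketch of that custom view's drawing logic (all names are illustrative: TILE is the tile edge in pixels, scrollXpx the current horizontal scroll, and loadTile(col, row) a hypothetical helper that decodes a tile from the SD card, ideally through an LRU cache):
@Override
protected void onDraw(Canvas canvas) {
    // Draw only the tiles that intersect the visible window.
    int firstCol = scrollXpx / TILE;
    int lastCol = (scrollXpx + getWidth()) / TILE;
    for (int col = firstCol; col <= lastCol; col++) {
        Bitmap tile = loadTile(col, 0); // hypothetical: decode/cache tile (col, row)
        canvas.drawBitmap(tile, col * TILE - scrollXpx, 0, null);
    }
}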
It would help if you gave us the dimensions of your bitmap.
Please understand that OpenGL runs against natural mathematical limits.
For instance, there is a very good reason a texture in OpenGL must be 2 to the power of x. This is really the only way the math of any downscaling can be done cleanly without any remainder.
So if you give us the exact dimensions of the smallest bitmap that's giving you trouble, some of us may be able to tell you what kind of actual limit you're running up against.