I am taking a screenshot with glReadPixels to perform a "cross-over" effect between two images.
On the Marmalade SDK simulator, the screenshot is taken just fine and the "cross-over" effect works a treat.
However, on iOS and Android devices the screenshot comes out corrupted.
I always read the screen as RGBA with 1 byte per channel, since the documentation says that combination is ALWAYS accepted.
Here is the code used to take the screenshot:
uint8* Gfx::ScreenshotBuffer(int& deviceWidth, int& deviceHeight, int& dataLength) {
    deviceWidth = IwGxGetDeviceWidth();
    deviceHeight = IwGxGetDeviceHeight();
    int rowLength = deviceWidth * 4; // data is always returned by GL as RGBA, 1 byte per channel
    dataLength = rowLength * deviceHeight;
    // read back tightly packed rows
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    uint8* buffer = new uint8[dataLength];
    glReadPixels(0, 0, deviceWidth, deviceHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    return buffer;
}
void Gfx::ScreenshotImage(CIwImage* img, uint8*& pbuffer) {
    int deviceWidth, deviceHeight, dataLength;
    pbuffer = ScreenshotBuffer(deviceWidth, deviceHeight, dataLength);
    img->SetFormat(CIwImage::ABGR_8888);
    img->SetWidth(deviceWidth);
    img->SetHeight(deviceHeight);
    img->SetBuffers(pbuffer, dataLength, 0, 0);
}
That is a driver bug. Simple as that.
The driver got the pitch of the surface in video memory wrong; you can see this clearly in the upper lines. The garbage in the lower part of the image is memory where the driver thinks the image is stored, but which actually holds different data, maybe textures or vertex data.
And sorry, I know of no way to fix that. You may have better luck with a different surface format or by enabling/disabling multisampling.
In the end, it was lack of memory: the new uint8[dataLength] never returned a valid pointer, so the whole process worked on corrupted data.
TomA, your idea of clearing the buffer actually helped me solve the problem. Thanks.
I don't know about Android or the SDK you're using, but on iOS, when I take a screenshot, I have to make the buffer the size of the next POT texture, something like this:
int x = NextPot((int)screenSize.x * retina);
int y = NextPot((int)screenSize.y * retina);
void *buffer = malloc(x * y * 4);
glReadPixels(0, 0, x, y, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
The function NextPot just gives me the next POT size, so if the screen size was 320x480, the x,y would be 512x512.
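For reference, such a helper is tiny. A minimal sketch (written in Java here, as a hypothetical stand-in for TomA's C version):

// Hypothetical helper: rounds a dimension up to the next power of two,
// e.g. 320 -> 512 and 480 -> 512; an exact POT such as 512 maps to itself.
static int nextPot(int v) {
    int pot = 1;
    while (pot < v) {
        pot <<= 1; // keep doubling until we reach or pass v
    }
    return pot;
}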
Maybe what you're seeing is the buffer wrapping around, because GL expects a bigger buffer size?
This could also be why it works in the simulator and not on the device; my graphics card doesn't have the POT size limitation, and I get a similar (weird-looking) result.
What I assume is happening is that you are trying to use glReadPixels on a window that is covered. If the view area is covered, the result of glReadPixels is undefined.
See "How do I use glDrawPixels() and glReadPixels()?" and "The Pixel Ownership Problem".
As said there: "The solution is to make an offscreen buffer (FBO) and render to the FBO."
Another option is to make sure the window is not covered when you use glReadPixels.
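A minimal sketch of that FBO route on Android (GLES20, Java); width, height and the scene-drawing step are placeholders:

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch: render into an offscreen FBO so glReadPixels is not subject
// to the pixel-ownership test of the window surface.
static ByteBuffer readOffscreen(int width, int height) {
    int[] fbo = new int[1];
    int[] tex = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glGenTextures(1, tex, 0);

    // color attachment backed by a texture
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, tex[0], 0);
    GLES20.glViewport(0, 0, width, height);
    // (check glCheckFramebufferStatus() here in real code)

    // ... draw the scene here, then read back ...
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the window surface
    return pixels;
}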
I am taking screenshots of my Android game with glReadPixels without any problems on device.
I am not sure yet what the problem is in your case; more information is needed. So let's start:
I would recommend not specifying the PixelStore format. I am worried about your 1-byte alignment: do you really use it, and do you know what it does? It seems you get exactly what you specify: an extra byte (look at your image, there is one extra pixel all the time!) instead of a fully packed image. So try removing this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
I am not sure about the C++ code, as I have only worked in Java, but this looks like a possible problem point:
// width/height
deviceWidth = IwGxGetDeviceWidth();
deviceHeight = IwGxGetDeviceHeight();
Are you using the device size? You should use your OpenGL surface size instead, like this:
public void onSurfaceChanged(GL10 gl, int width, int height) {
    int surfaceWidth = width;
    int surfaceHeight = height;
}
What are you doing next with the captured image? Are you aware that the memory block you get from OpenGL is RGBA, but most non-OpenGL image operations expect ARGB?
For example, here in your code you expect alpha to be the first byte, not the last:
img->SetFormat(CIwImage::ABGR_8888);
In case 1, 2 and 3 did not help, you might want to save the captured screen to the phone's SD card to examine later. I have a program that converts an OpenGL RGBA block to a normal bitmap for examination on a PC; I can share it with you.
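For reference, such a converter is short. A sketch of what it might look like (the vertical flip is needed because glReadPixels returns rows bottom-up):

import android.graphics.Bitmap;

// Hypothetical converter: turns the raw RGBA block from glReadPixels
// into an Android Bitmap, flipping rows and repacking bytes as ARGB ints.
static Bitmap rgbaToBitmap(byte[] rgba, int width, int height) {
    int[] argb = new int[width * height];
    for (int y = 0; y < height; y++) {
        int srcRow = (height - 1 - y) * width; // flip vertically
        for (int x = 0; x < width; x++) {
            int i = (srcRow + x) * 4;
            int r = rgba[i] & 0xFF;
            int g = rgba[i + 1] & 0xFF;
            int b = rgba[i + 2] & 0xFF;
            int a = rgba[i + 3] & 0xFF;
            argb[y * width + x] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }
    return Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
}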
I don't have a solution for fixing glReadPixels. My suggestion is to change your algorithm to avoid the need to read data back from the screen.
Take a look at this page. These guys have done a page-flip effect entirely in Flash. It's all 2D; the illusion is achieved just with shadow gradients.
I think you can use a similar approach, but a little better, in 3D. Basically you have to split the effect into three parts: the front-facing top page (the clouds), the bottom page (the girl), and the back side of the front page, and draw each part separately. You can easily draw the front-facing top page and the bottom page together in the same screen: just invoke the drawing code for each with a preset clipping region aligned with the split line where the top page bends. Once the top and bottom sections are drawn, you can draw the gray back-facing portion on top, also aligned to the split line.
With this approach the only thing you lose is the slight deformation where the clouds image starts to bend up; no deformation will occur with my method. Hopefully that will not diminish the effect; I think the shadows are far more important in giving the depth illusion and will hide this minor inconsistency.
I am working on an Android app that will recognize a Go board and create an SGF file of it.
I made a version that is able to detect a board and warp the perspective to make it square (code and example image below); unfortunately, it gets a bit harder when stones are added (image below).
Important things about an average Go board:
round black and white stones
black lines on the board
board color ranges from white to light brown and sometimes with a wood grain
stones are placed on intersections of two lines
Correct me if I am wrong, but I think my current approach is not a good one.
Does somebody have a general idea of how I can separate the stones and lines from the rest of the picture?
My code:
Mat input = inputFrame.rgba(); // original image
Mat gray = new Mat();          // grayscale image
// convert image to grayscale
Imgproc.cvtColor(input, gray, Imgproc.COLOR_RGB2GRAY);
// try to improve the histogram (more contrast)
Imgproc.equalizeHist(gray, gray);
// blur image
Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
// apply adaptive threshold
Imgproc.adaptiveThreshold(gray, gray, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 11, 2);
// a secondary threshold removes a lot of noise
Imgproc.threshold(gray, gray, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
EDIT: 05-03-2016
Yay! I managed to detect lines, stones and color correctly. Precondition: the picture has to show only the board itself, without any other background visible.
I use HoughLinesP (60 lines) and HoughCircles (17 circles); duration on my phone (1st-gen Moto G) is about 5 seconds.
Detecting the board and warping it turns out to be quite a challenge when it has to work under different angles and lighting conditions... still working on that.
Suggestions for different approaches are still welcome!
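For anyone curious, the line/circle pass looks roughly like this (OpenCV Java; HOUGH_GRADIENT is the OpenCV 3.x constant name, gray is the preprocessed image from above, and every numeric parameter is a guess that needs tuning per board and lighting):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

Mat lines = new Mat();   // each row: x1, y1, x2, y2
Imgproc.HoughLinesP(gray, lines, 1, Math.PI / 180, 80, 100, 10);

Mat circles = new Mat(); // each row: centerX, centerY, radius
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
        1, 20,   // dp, minimum distance between stone centers
        100, 30, // Canny threshold, accumulator threshold
        10, 25); // min/max stone radius in pixels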
EDIT: 15-03-2016
I found a nice way to get line intersections with cross-type morphological transformations; it works amazingly well when the picture is taken directly above the board, but unfortunately not at an angle (see below).
In my last update I showed line and stone detection with a picture taken from directly above. Since then I have been working on detecting the board and warping it so that my line and stone detection becomes useful.
harris corner detection
I struggled to get the right parameter settings, and I am still not sure they are optimal; I can't find much information on how to prepare an image before using Harris corners. Right now it detects too many corners to be useful, though it feels like it could work. (Upper row of pictures in the example.)
Mat corners = new Mat();
Imgproc.cornerHarris(image, corners, 5, 3, 0.03);
Mat mask = new Mat(corners.size(), CvType.CV_8U, new Scalar(1));
Core.MinMaxLocResult maxVal = Core.minMaxLoc(corners);
Core.inRange(corners, new Scalar(maxVal.maxVal * 0.01), new Scalar(maxVal.maxVal), mask);
cross type morphological transformations
Works great when the picture is taken directly from above; from an angle or with a rotated board it does not work. (Middle row of pictures in the example.)
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
int morph_elem = 1;     // 0: Rect - 1: Cross - 2: Ellipse
int morph_size = 5;
int morph_operator = 0; // 0: Opening - 1: Closing - 2: Gradient - 3: Top Hat - 4: Black Hat
Mat element = Imgproc.getStructuringElement(morph_elem, new Size(2 * morph_size + 1, 2 * morph_size + 1), new Point(morph_size, morph_size));
// +2 maps the operator index onto OpenCV's MORPH_* constants (MORPH_OPEN == 2)
Imgproc.morphologyEx(image, image, morph_operator + 2, element);
contour and houghlines
If there are no stones on the outer board line and the lighting conditions are not too harsh, this works pretty well, although quite often the contours cover only part of the board. (Lower row of pictures in the example.)
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
Mat hierarchy = new Mat();
MatOfPoint biggest = null;
int contourId = 0;
double biggestArea = 0;
double minSize = 2000;
List<MatOfPoint> contours = new ArrayList<>();
// "image" is the inverted (THRESH_BINARY_INV) image from above
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// find the biggest contour
for (int x = 0; x < contours.size(); x++) {
    double area = Imgproc.contourArea(contours.get(x));
    if (area > minSize && area > biggestArea) {
        biggestArea = area;
        biggest = contours.get(x);
        contourId = x;
    }
}
Given the right picture, all three methods work, but not well enough to be reliable. Any thoughts on parameters, image pre-processing, different approaches or anything else that might improve the detection are welcome =)
link to picture
EDIT: 31-03-2016
Detecting lines and stones is pretty much solved, so I will close this question. I created a new one for detecting and warping the board accurately.
Anybody interested in my progress: this is my GOSU Snap Alpha channel. Don't expect too much of it right now!
EDIT: 16-10-2016
Update: I saw that some people are still following this question.
I tested some more things and started using TensorFlow; my neural network looks promising, you can have a look at it here.
A lot of work still has to be done; my current image dataset is awful, and right now I am working on building a big one.
The app works best with a square board with thick lines and decent lighting.
Assuming you don't want to force your end user to take the cleanest possible picture (for example with an overlay, as some QR-code scanners use), perhaps you could use morphological transformations with different kernels, as sketched after this list:
Opening and closing with a rectangular kernel for the lines
Opening and closing with an ellipse kernel to get the stones (it should be possible to invert the image at some point to get back the white or the black ones)
Take a look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/opening_closing_hats/opening_closing_hats.html (sorry, it's in C++, but it is almost the same in Java).
I tried these operations to remove the grid from a Sudoku to avoid noise in cell extraction, and it worked like a charm.
Let me know if this information was useful to you (this is for sure a very interesting case).
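A sketch of the two-kernel idea (OpenCV Java; binary is assumed to be the thresholded board image, and the kernel sizes are guesses to tune):

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// A wide flat rectangle responds to horizontal lines; a tall one to vertical.
Mat horizKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(15, 1));
Mat vertKernel  = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 15));
Mat horizLines = new Mat();
Mat vertLines  = new Mat();
Imgproc.morphologyEx(binary, horizLines, Imgproc.MORPH_OPEN, horizKernel);
Imgproc.morphologyEx(binary, vertLines,  Imgproc.MORPH_OPEN, vertKernel);

// An elliptical kernel roughly the size of a stone keeps the round blobs.
Mat stoneKernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(11, 11));
Mat stones = new Mat();
Imgproc.morphologyEx(binary, stones, Imgproc.MORPH_OPEN, stoneKernel);
// Invert "binary" and repeat to pick up stones of the other color.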
I'm working on the same program. I avoid finding lines at all.
First use a perspective transform to get the board into a square, as you have done. Find the edges of the 19x19 grid. Then, assuming the board is 19x19, you can just compute the positions of the lines; this works well for me. Then you find the intersection closest to the center of each stone to determine which row and column the stone is on. That works pretty well for me too. The only problem is calibrating the program for different lighting conditions and different colors of stones and boards.
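For illustration, snapping a detected stone center to the nearest intersection might look like this (a hypothetical helper, assuming the board has been warped to a boardSize x boardSize square whose outer grid lines lie on the edges; subtract the margin first if there is one):

// Maps a stone center (cx, cy) to a 0..18 column and row index.
static int[] snapToGrid(double cx, double cy, double boardSize) {
    double spacing = boardSize / 18.0;        // 19 lines -> 18 gaps
    int col = (int) Math.round(cx / spacing);
    int row = (int) Math.round(cy / spacing);
    col = Math.max(0, Math.min(18, col));     // clamp to the board
    row = Math.max(0, Math.min(18, row));
    return new int[] { col, row };
}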
I want to make the camera preview occupy half of the SurfaceView, so I modified the code of ContinuousCaptureActivity.
Specifically, I replaced the glViewport call at https://github.com/google/grafika/blob/master/src/com/android/grafika/ContinuousCaptureActivity.java#L436 with GLES20.glViewport(0, 0, viewWidth / 2, viewHeight / 2);
But the result is strange (see the picture below). I really cannot understand it. What is the right way to do this? Can anyone give me some advice?
My first thought would be that you need a glClear() in there (with the viewport set to cover the entire surface) since you're no longer filling the entire surface with the blit. Otherwise you get uninitialized data, and on a tile-based architecture things can get strange.
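If you do keep the viewport approach, a minimal sketch of that order of operations, assuming a GLES20 renderer (the method and variable names are placeholders, not grafika's actual code):

import android.opengl.GLES20;

private void drawFrame(int viewWidth, int viewHeight) {
    // clear with the viewport covering the whole surface, so the area
    // outside the blit holds defined pixels
    GLES20.glViewport(0, 0, viewWidth, viewHeight);
    GLES20.glClearColor(0f, 0f, 0f, 1f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // then restrict the viewport to the quarter you want to fill
    GLES20.glViewport(0, 0, viewWidth / 2, viewHeight / 2);
    // ... draw the camera frame here ...
}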
Fiddling with the viewport isn't really the right way to go. Just draw a smaller rectangle. Use Sprite2d and change the X/Y scale factors. See TextureFromCameraActivity for an example -- you can scale the rect, zoom in, rotate, etc.
I am learning how to make live wallpapers, but I have a dilemma I'm sure everyone who starts off has as well.
There are so many screen resolutions; how can I make just one set of artwork that gets rescaled in code for all versions? I know it's been done, as I've seen the images in the APKs of a lot of wallpapers, and they get rescaled.
If it were just one image that did not need any positioning, that would be easy. My problem is that I have to rescale the background image to fit all devices, and I also have animations that sit at a certain x and y position on that background image, so that it looks like the whole background is animated when only parts of it are (my way of staying away from 300 frame-by-frame images).
So the background image needs to be rescaled, and the animations need to be rescaled by the exact same percentage as the background image and sit at a specific x and y position.
Any help would be appreciated so I can get this going.
I tried a few things and figured I would make a scaler for everything. For example: declare int scaler; then in onSurfaceChanged set scaler = width / 1024 (if the biggest image is 1024 wide). That gives me a ratio to work with everywhere: scale each bitmap with scaleBitmap by multiplying the scaler by the image height and width, and use the same scaler for positioning, so if an image's x is 50, scale it with x = scaler * 50. That should take care of scaling and positioning; how to translate all this into Java is the next lesson, since I'm new to Java (I used to program for Flash and PHP, and this is a lot different, so it takes some getting used to). The next puzzle is how to pan the width: when you swipe the screen from side to side, how do I control which part of the image shows? Right now it shows the same slice no matter what, even though the image is double the width the surface shows. If you have an answer, or can point me somewhere I can find this out, that would be greatly appreciated.
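For what it's worth, a sketch of that ratio idea inside a WallpaperService.Engine (the 1024 authoring width is the example's assumption; note the division must be floating-point, since with ints 480 / 1024 == 0):

import android.graphics.Bitmap;
import android.view.SurfaceHolder;

private float scaler = 1f;

@Override
public void onSurfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    super.onSurfaceChanged(holder, format, width, height);
    scaler = width / 1024f; // float division, unlike the int version above
}

// Scale artwork and positions by the same factor.
private Bitmap scaled(Bitmap src) {
    return Bitmap.createScaledBitmap(src,
            (int) (src.getWidth() * scaler),
            (int) (src.getHeight() * scaler), true);
}

private float scaledX(float authoredX) { // e.g. an element authored at x = 50
    return authoredX * scaler;
}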
Well, um, all I can say is "Welcome to the real world." You get your screen dimensions passed to you via onSurfaceChanged, and yes, it is your job to figure out how to scale everything based on this data. That's why they pay us the big bucks. :-)
You will want to make sure your resources are large enough to fit the biggest display you intend to support, so you will always be shrinking things (which distorts much less than expanding things).
Suggest starting with "best practices for screen independence" here: http://developer.android.com/guide/practices/screens_support.html
Additional comments in re your request for more help...
You cannot (necessarily) scale your artwork just using the width, because you need to support multiple aspect ratios. If the screen proportions do not match your artwork, you must decide if you want to distort your artwork, leave blank spaces, etc.
I'm not sure how to interpret your trouble passing around the screen dimensions. Most of us put all of our active code within a single engine class, so our methods can share data via private variables. For example, in the Cube wallpaper in the SDK, onSurfaceChanged() sets mCenterX for later use in drawCube(). I suggest beginning with a similar, simple approach.
Handling scrolling takes some "intelligence" and a careful assessment of the data you receive via onOffsetsChanged(). xStep indicates how many screens your launcher supports. Normally xStep will be 0.25, indicating 5 screens (i.e. xOffset = 0, 0.25, 0.5, 0.75, or 1) but it can be any value from 0 to 1; 0.5 would indicate 3 screens. xPixels gives you an indication of how much the launcher "wants" you to shift your imagery based on the screen you're on; normally you should respect this. On my phone, the launcher "desires" a virtual wallpaper with twice the pixels of the physical screen, so each scroll is supposed to shift things only one quarter of one screen's pixels. All this, and more, is documented in http://developer.android.com/reference/android/app/WallpaperManager.html
This is not "easy" coding--apps are easier than wallpaper. :-)
Good luck...George
P.S. I'll throw in one more thing: somewhere along the line you might want to retrieve the "desired minimum width" of the wallpaper desired by the launcher, so you can explicitly understand the virtualization implicit in xPixels. For example, in my engine constructor, I have
mContext = getApplicationContext();
mWM = WallpaperManager.getInstance(mContext);
mDW = mWM.getDesiredMinimumWidth();
My device has a 320-pixel width; I get mDW = 640. As I scroll from screen to screen, xPixels changes by 80 each time, because four scrolls (across five screens) are supposed to double the amount of revealed artwork (this effect is called "parallax scrolling"). The rightmost section has xPixels equal to 0; the center (of five) sections has xPixels = -160, etc.
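To make that concrete, a sketch of an engine honoring xPixels (names are placeholders; xPixels is normally zero or negative, so adding it shifts the oversized artwork left as you scroll right):

import android.graphics.Bitmap;
import android.graphics.Canvas;

private float mXPixels; // last pixel offset reported by the launcher

@Override
public void onOffsetsChanged(float xOffset, float yOffset,
        float xStep, float yStep, int xPixels, int yPixels) {
    mXPixels = xPixels;
    // schedule a redraw here
}

private void drawFrame(Canvas canvas, Bitmap artwork) {
    // the artwork is wider than the screen; the launcher-supplied offset
    // decides which slice is visible (parallax scrolling)
    canvas.drawBitmap(artwork, mXPixels, 0, null);
}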
I've used this code snippet to scale one image to fit on different screen sizes.
Bitmap image1, pic1;
image1 = BitmapFactory.decodeResource(getResources(), R.drawable.image1);
float xScale = (float) canvas.getWidth() / image1.getWidth();
float yScale = (float) canvas.getHeight() / image1.getHeight();
float scale = Math.max(xScale, yScale); // pick the larger factor so the image covers the whole screen
//scale = (float) (scale * 1.1); // optionally over-scale to guarantee full coverage
float scaledWidth = scale * image1.getWidth();
float scaledHeight = scale * image1.getHeight();
pic1 = Bitmap.createScaledBitmap(image1, (int) scaledWidth, (int) scaledHeight, true);
Make sure that the edges don't contain vital information as it will be scaled out of the picture on some screen ratios.
I have an ImageView whose color I want to change based on a user choice.
From examples on the internet, I see the only way to really do this is by going through and modifying each pixel; however, that seems to be EXTREMELY slow.
If I add this to my code, it takes long enough that the user is prompted to force close or wait:
for (int i = 0; i < mBitmap.getHeight(); ++i)
{
    for (int g = 0; g < mBitmap.getWidth(); ++g)
    {
        // per-pixel work goes here
    }
}
What is the best way to change the color of the image?
The image is small, 320x100, and mostly transparent, with a smaller image inside; it is that inner image whose color I want to change.
The problem lies in using getPixel(x, y). Grabbing each pixel one by one is a very slow process. Instead, use getPixels:
void getPixels(int[] pixels, int offset, int stride, int x, int y, int width, int height)
Returns in pixels[] a copy of the data in the bitmap.
It returns an array of integers with the pixel values; operate on that array and then use setPixels to write it back. This is much faster, although it requires more memory.
For a small image this method will do. The stride is equal to the image width:
mBitmap.getPixels(pixels, 0, mBitmap.getWidth(), 0, 0, mBitmap.getWidth(), mBitmap.getHeight());
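Putting it together, a sketch of the bulk version (assumes mBitmap is mutable, e.g. obtained via Bitmap.copy(), and newColor is the user's chosen color):

int w = mBitmap.getWidth();
int h = mBitmap.getHeight();
int[] pixels = new int[w * h];
mBitmap.getPixels(pixels, 0, w, 0, 0, w, h);
for (int i = 0; i < pixels.length; i++) {
    int alpha = pixels[i] & 0xFF000000;
    if (alpha != 0) {                                // leave the transparent border untouched
        pixels[i] = alpha | (newColor & 0x00FFFFFF); // keep alpha, swap RGB
    }
}
mBitmap.setPixels(pixels, 0, w, 0, 0, w, h);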
In order of complexity:
Check the samples in the API demos for usage of ColorFilter and ColorMatrix (see the sketch after this list). But since you described it as an image within an image that you are trying to modify, this may not apply.
Put your processing code on its own thread to avoid the Application Not Responding dialog; look into AsyncTask. You may need a wait animation running while it processes.
Consider OpenGL ES 1.x: use the image as a texture and overlay a color with alpha to get the effect. Although this would perform better, the complexity of adding UI elements would need to be taken into account (i.e. you would build your own).
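For option 1, if a uniform tint of the opaque part is all you need, a color filter avoids touching pixels entirely; a sketch (imageView and userColor are placeholders):

import android.graphics.PorterDuff;

// SRC_IN keeps the view's alpha and replaces the color of opaque pixels.
imageView.setColorFilter(userColor, PorterDuff.Mode.SRC_IN);
// To undo later:
imageView.clearColorFilter();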