AndroidPlot: How to render a chart in an AppWidget? - android

I'm trying to use AndroidPlot to draw a chart in a homescreen widget.
I know that in a normal app it uses a custom view, and from what I've seen (Android: AppWidget with custom view not working), the workaround is to render the plot as a bitmap in an ImageView.
Now I've taken the AndroidPlot quickstart code and put it into the provider class, but it doesn't seem to render anything when I drop the widget on the homescreen.
The difference from the original quickstart code is that the quickstart uses Activity.findViewById, which obviously can't be used here.
Can anyone see something I'm doing wrong that may be causing the empty rendering?
Appreciate any help you can provide!
private Bitmap getChartImage(Context context)
{
    // initialize our XYPlot reference:
    mySimpleXYPlot = new XYPlot(context, "My Simple XYPlot");
    mySimpleXYPlot.setDrawingCacheEnabled(true);

    // add a new series
    mySimpleXYPlot.addSeries(new SimpleXYSeries(), LineAndPointRenderer.class,
            new LineAndPointFormatter(Color.rgb(0, 200, 0), Color.rgb(200, 0, 0)));

    // reduce the number of range labels
    mySimpleXYPlot.getGraphWidget().setRangeTicksPerLabel(4);

    // reposition the domain label to look a little cleaner:
    Widget domainLabelWidget = mySimpleXYPlot.getDomainLabelWidget();
    mySimpleXYPlot.position(domainLabelWidget,      // the widget to position
            45,                                     // x position value, in this case 45 pixels
            XLayoutStyle.ABSOLUTE_FROM_LEFT,        // how the x position value is applied, in this case from the left
            0,                                      // y position value
            YLayoutStyle.ABSOLUTE_FROM_BOTTOM,      // how the y position is applied, in this case from the bottom
            AnchorPosition.LEFT_BOTTOM);            // point to use as the origin of the widget being positioned

    // get rid of the visual aids for positioning:
    mySimpleXYPlot.disableAllMarkup();

    //mySimpleXYPlot.measure(150, 150);
    //mySimpleXYPlot.layout(0, 0, 150, 150);

    Bitmap bmp = mySimpleXYPlot.getDrawingCache();
    return bmp;
}

Have you stepped through to see if the bitmap is actually empty? My guess is that the Bitmap is fine and the problem exists outside of this chunk of code. Take a look at this example usage of a widget with AndroidPlot - it's super minimal and it definitely works. Hopefully there's a solution in there for you :)
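For reference, this is roughly how the rendered bitmap usually ends up in the widget; the layout and view ids here (R.layout.widget_layout, R.id.chart_image) and the appWidgetId variable are placeholders, not names from your code:

// Sketch of the RemoteViews side, typically inside onUpdate() of the AppWidgetProvider.
Bitmap chart = getChartImage(context);
RemoteViews views = new RemoteViews(context.getPackageName(), R.layout.widget_layout);
views.setImageViewBitmap(R.id.chart_image, chart);
AppWidgetManager.getInstance(context).updateAppWidget(appWidgetId, views);

If the Bitmap does come back null or blank, one thing worth checking is that the plot has actually been measured and laid out before getDrawingCache() is called; a view that was never attached to a window needs exactly the kind of measure()/layout() calls that are commented out in your snippet.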
Nick

Related

PDFTron: Drawing Ink Annotation programmatically

I am drawing an ink annotation from points stored in a database; those points were extracted from a shape previously drawn over the PDF. I have referred to this example given by PDFTron, but I am not able to see the annotation drawn on the page properly.
(The original post includes two screenshots: the actual image and the one drawn programmatically.)
Here is the code I have used for drawing the annotation.
for (Integer integer : uniqueShapeIds) {
    Config.debug("Shape Id's unique " + integer);
    pdftron.PDF.Annots.Ink ink = pdftron.PDF.Annots.Ink.create(
            mPDFViewCtrl.getDoc(),
            getAnnotationRect(pointsArray, integer));
    for (SaveAnnotationState annot : pointsArray) {
        Config.debug("Draw " + annot.getxCord() + " " + annot.getyCord() + " "
                + annot.getPathIndex() + " " + annot.getPointIndex());
        Point pt = new Point(annot.getxCord(), annot.getyCord());
        ink.setPoint(annot.getPathIndex(), annot.getPointIndex(), pt);
        ink.setColor(
                new ColorPt(annot.getR() / 255, annot.getG() / 255, annot.getB() / 255), 3);
        ink.setOpacity(annot.getOpacity());
        BorderStyle border = ink.getBorderStyle();
        border.setWidth(annot.getThickness());
        ink.setBorderStyle(border);
    }
    ink.refreshAppearance();
    Page page = mPDFViewCtrl.getDoc().getPage(mPDFViewCtrl.getCurrentPage());
    Annot mAnnot = ink;
    page.annotPushBack(mAnnot);
    mPDFViewCtrl.update(mAnnot, mPDFViewCtrl.getCurrentPage());
}
Can anyone tell me what is going wrong here?
On a typical PDF page, the bottom left corner of the page is coordinate (0,0). For annotations, however, the origin is the bottom left corner of the rectangle specified in the BBox entry. The BBox entry is the third parameter of your call to Ink.create, which is unfortunately called pos.
This means the Rect passed into Ink.create is supposed to be the minimum axis-aligned bounding box of all the points that make up the Ink annot.
I suspect that in your call to getAnnotationRect you start with Rect(), which is really Rect(0,0,0,0), so when you union all the other points you end up with an inflated Rect.
What you should do is store the BBox in your database, by calling Annot.getRect().
If that is not possible, or it is too late for that, then initialize the Rect with the first point in your database:
Rect rect = new Rect(pt.x, pt.y, pt.x, pt.y);
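For illustration, a rough sketch of what getAnnotationRect could look like with that fix, reusing the SaveAnnotationState getters from the question; the getShapeId() getter is assumed, and this is just one way to build the box, not a PDFTron-prescribed implementation:

// Build the minimum axis-aligned bounding box of all points belonging to one shape.
private pdftron.PDF.Rect getAnnotationRect(List<SaveAnnotationState> points, int shapeId) throws PDFNetException {
    double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE;
    double maxX = -Double.MAX_VALUE, maxY = -Double.MAX_VALUE;
    for (SaveAnnotationState annot : points) {
        if (annot.getShapeId() != shapeId) continue; // assumed getter for the shape id
        minX = Math.min(minX, annot.getxCord());
        minY = Math.min(minY, annot.getyCord());
        maxX = Math.max(maxX, annot.getxCord());
        maxY = Math.max(maxY, annot.getyCord());
    }
    // Seeded from the points themselves (assuming at least one point per shape),
    // so the box never gets inflated towards (0,0).
    return new pdftron.PDF.Rect(minX, minY, maxX, maxY);
}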
API:
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron/PDF/Annot.html#getRect%28%29
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron/PDF/Annot.html#create%28pdftron.SDF.Doc,%20int,%20pdftron.PDF.Rect%29

Detach child, update, and attach? AndEngine, Android

engine.registerUpdateHandler(new TimerHandler(0.2f,
        new ITimerCallback() {
            public void onTimePassed(final TimerHandler pTimerHandler) {
                pTimerHandler.reset();
                Rectangle xpRect = new Rectangle(30, 200,
                        (float) (((player.getXp()) / (player.getNextLevelxp())) * 800),
                        40, vbom);
                HUD.attachChild(xpRect);
            }
        }));
I have this so far in my createHUD method. It's pretty simple, it creates a rectangle showing the player's xp in relation to the xp needed for the next level and attaches it to the HUD. The only problem is that the old rectangle is never deleted. How can I have a rectangle like that one that updates itself and removes old ones?
If you use detachChild() or any other detach method too often, you might run into problems sooner or later, especially because detaching can only be done on the update thread; you never know exactly when your rectangle will actually be detached. So, to save yourself a lot of attaching and detaching, reuse the rectangle:
i) Save a reference to the Rectangle somewhere (as a field in your Player class, for example).
ii) At the beginning, when you load your other stuff, also initialize the rectangle:
Rectangle xpRect = new Rectangle(30, 200, 0, 40, vbom); // initialize it
HUD.attachChild(xpRect); // attach it where it belongs
xpRect.setVisible(false); // hide it from the player
xpRect.setIgnoreUpdate(true); // hide it from the update thread, because you don't use it.
At this point it doesn't matter where you put your rectangle or how big it is. It's only important that it is there.
iii) Now when you want to show the player his XP, you only have to make it visible:
public void showXP(int playerXP, int nextXP) {
    float width = (playerXP / (float) nextXP) * 800; // calculate your new width (cast so it isn't integer division)
    xpRect.setIgnoreUpdate(false); // make the update thread aware of your rectangle
    xpRect.setWidth(width);        // now change the width of your rectangle
    xpRect.setVisible(true);       // make the rectangle visible again
}
iv) When you no longer need it, just make it invisible again:
xpRect.setVisible(false);      // hide it from the player
xpRect.setIgnoreUpdate(true);  // hide it from the update thread, because you don't use it
Of course you can now use the showXP() method any way you like, including from your TimerHandler. If you want a fancier appearance, do something like this instead:
public void showXP(int playerXP, int nextXP) {
    float width = (playerXP / (float) nextXP) * 800; // calculate your new width (cast so it isn't integer division)
    xpRect.setIgnoreUpdate(false); // make the update thread aware of your rectangle
    xpRect.setWidth(width);        // now change the width of your rectangle
    xpRect.setVisible(true);
    xpRect.registerEntityModifier(new FadeInModifier(1f)); // only this line is new
}
It's the same as the method above with just a small change in the last line, which makes the rectangle fade in a little more smoothly.
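For illustration, here is how the reusable rectangle could be wired into the TimerHandler from the question (assuming the XP getters return values matching showXP's int parameters):

// Update the existing xpRect periodically instead of attaching a new Rectangle each time.
engine.registerUpdateHandler(new TimerHandler(0.2f, new ITimerCallback() {
    public void onTimePassed(final TimerHandler pTimerHandler) {
        pTimerHandler.reset();
        showXP(player.getXp(), player.getNextLevelxp());
    }
}));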
To detach a child from a HUD, you can write:
aHUD.detachChild(rectangle);
To clear all children from the HUD:
aHUD.detachChildren();
To clear the HUD from the camera, you can write:
cCamera.getHUD().setCamera(null);
After using one of the above, you can create a HUD and attach children as usual.
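One caveat worth adding, echoing the first answer: detaching is normally done on the update thread so the scene graph isn't modified mid-draw. A sketch, assuming you have the Engine reference from the question:

// Detach safely from the update thread rather than directly from a UI callback.
engine.runOnUpdateThread(new Runnable() {
    @Override
    public void run() {
        aHUD.detachChild(rectangle);
    }
});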

android - set starting point to top left on libgdx

I want to use libgdx as a solution to scaling apps based on the screen's aspect ratio.
I've found this link, and I find it really useful:
http://blog.acamara.es/2012/02/05/keep-screen-aspect-ratio-with-different-resolutions-using-libgdx/
I'm quite rusty with OpenGL (haven't written for it in years) and I wish to use the example from this link so that it is easy to place images and shapes.
Sadly, the origin is in the middle. I want to use it like on many other platforms: the top left corner should be (0,0) and the bottom right corner should be (targetWidth-1, targetHeight-1).
From what I remember, I need to move (translate) and rotate the camera to achieve this, but I'm not sure.
Here's my modified version of the link's example for the create() method:
@Override
public void create()
{
    camera = new OrthographicCamera(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    // camera.translate(VIRTUAL_WIDTH/2, VIRTUAL_HEIGHT/2, 0);
    // camera.rotate(90, 0, 0, 1);
    camera.update();
    //
    font = new BitmapFont(Gdx.files.internal("data/fonts.fnt"), false);
    font.setColor(Color.RED);
    //
    screenQuad = new Mesh(true, 4, 4,
            new VertexAttribute(Usage.Position, 3, "attr_position"),
            new VertexAttribute(Usage.ColorPacked, 4, "attr_color"));
    Point bottomLeft = new Point(0, 0);
    Point topRight = new Point(VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
    screenQuad.setVertices(new float[] {//
            bottomLeft.x, bottomLeft.y, 0f, Color.toFloatBits(255, 0, 0, 255),//
            topRight.x, bottomLeft.y, 0f, Color.toFloatBits(255, 255, 0, 255),//
            bottomLeft.x, topRight.y, 0f, Color.toFloatBits(0, 255, 0, 255),//
            topRight.x, topRight.y, 0f, Color.toFloatBits(0, 0, 255, 255)});
    screenQuad.setIndices(new short[] {0, 1, 2, 3});
    //
    bottomLeft = new Point(VIRTUAL_WIDTH/2 - 50, VIRTUAL_HEIGHT/2 - 50);
    topRight = new Point(VIRTUAL_WIDTH/2 + 50, VIRTUAL_HEIGHT/2 + 50);
    quad = new Mesh(true, 4, 4,
            new VertexAttribute(Usage.Position, 3, "attr_position"),
            new VertexAttribute(Usage.ColorPacked, 4, "attr_color"));
    quad.setVertices(new float[] {//
            bottomLeft.x, bottomLeft.y, 0f, Color.toFloatBits(255, 0, 0, 255),//
            topRight.x, bottomLeft.y, 0f, Color.toFloatBits(255, 255, 0, 255),//
            bottomLeft.x, topRight.y, 0f, Color.toFloatBits(0, 255, 0, 255),//
            topRight.x, topRight.y, 0f, Color.toFloatBits(0, 0, 255, 255)});
    quad.setIndices(new short[] {0, 1, 2, 3});
    //
    texture = new Texture(Gdx.files.internal(IMAGE_FILE));
    spriteBatch = new SpriteBatch();
    spriteBatch.getProjectionMatrix().setToOrtho2D(0, 0, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);
}
So far I've succeeded in using this code to work with scaled coordinates while still keeping the aspect ratio (which is great), but I haven't succeeded in moving the origin (0,0) to the top left corner.
Please help me.
EDIT: OK, after some testing I've found that the reason it isn't working is that I use the spriteBatch; I think it ignores the camera. This code runs in the render part. No matter what I do to the camera, it still shows the same results.
@Override
public void render()
{
    if (Gdx.input.isKeyPressed(Keys.ESCAPE) || Gdx.input.justTouched())
        Gdx.app.exit();
    // update camera
    // camera.update();
    // camera.apply(Gdx.gl10);
    // set viewport
    Gdx.gl.glViewport((int) viewport.x, (int) viewport.y, (int) viewport.width, (int) viewport.height);
    // clear previous frame
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    //
    final String msg = "test";
    final TextBounds textBounds = font.getBounds(msg);
    spriteBatch.begin();
    screenQuad.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
    quad.render(GL10.GL_TRIANGLE_STRIP, 0, 4);
    Gdx.graphics.getGL10().glEnable(GL10.GL_TEXTURE_2D);
    spriteBatch.draw(texture, 0, 0, texture.getWidth(), texture.getHeight(),
            0, 0, texture.getWidth(), texture.getHeight(), false, false);
    spriteBatch.draw(texture, 0, VIRTUAL_HEIGHT - texture.getHeight(), texture.getWidth(), texture.getHeight(),
            0, 0, texture.getWidth(), texture.getHeight(), false, false);
    font.draw(spriteBatch, msg, VIRTUAL_WIDTH - textBounds.width, VIRTUAL_HEIGHT);
    spriteBatch.end();
}
These lines:
camera.translate(VIRTUAL_WIDTH/2,VIRTUAL_HEIGHT/2,0);
camera.rotate(90,0,0,1);
should move the camera such that it does what you want. However, my code has an additional:
camera.update()
call after it calls translate, and it looks like you're missing that. The doc for Camera.update says:
update
public abstract void update()
Recalculates the projection and view matrix of this camera and the Frustum planes. Use this after you've manipulated any of the attributes of the camera.
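One more detail that may explain the EDIT in the question: a SpriteBatch only follows the camera if you give it the camera's combined matrix each frame, and the create() above sets the batch's projection once with setToOrtho2D and never touches it again. A sketch of the usual wiring, using the same VIRTUAL_WIDTH/VIRTUAL_HEIGHT constants (setToOrtho(true, ...) is libgdx's built-in y-down shortcut, offered here as an alternative to the translate/rotate approach):

// create(): a y-down camera puts (0,0) at the top left corner.
camera = new OrthographicCamera();
camera.setToOrtho(true, VIRTUAL_WIDTH, VIRTUAL_HEIGHT);

// render(): recompute the matrices and hand them to the batch every frame.
camera.update();
spriteBatch.setProjectionMatrix(camera.combined);
spriteBatch.begin();
// ... draw calls ...
spriteBatch.end();

Note that with a y-down projection, textures drawn through the batch come out vertically flipped, so you may need to flip them in the draw() call.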

How to add text to image and save as new image

I'm trying to create an Android app that adds a random quote to images.
The general process is this:
Start from a given custom image that is shown when the app starts.
From this image, all the user can do is tap on it to generate a new random "quote" that gets overlaid on the image.
The user can save the newly created image with the quote he chose and set it as the wallpaper.
I have got to the point where I can display the image in an ImageView.
My list of quotes is stored in my strings.xml file.
I do something like this in an app. Use Canvas.
I edited down a piece of my code, which actually adds a couple of other images on the background and stuff too.
Meat of code:
private static Bitmap getPoster(...) {
    Bitmap background = BitmapFactory.decodeResource(res, background_id)
            .copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(background);

    Typeface font = Typeface.createFromAsset(res.getAssets(), FONT_PATH);
    font = Typeface.create(font, Typeface.BOLD);

    Paint paint = new Paint();
    paint.setTypeface(font);
    paint.setAntiAlias(true);
    paint.setColor(Color.WHITE);
    paint.setStyle(Style.FILL);
    paint.setShadowLayer(2.0f, 1.0f, 1.0f, Color.BLACK);

    float fontSize = getFontSize(background.getWidth(), THE_QUOTE, paint); // You'll have to define a way to find a size that fits, or just use a constant size.
    paint.setTextSize(fontSize);

    canvas.drawText(THE_QUOTE, (background.getWidth() - paint.measureText(THE_QUOTE)) / 2,
            background.getHeight() - FILLER_HEIGHT, paint); // You might want to do something different. In my case every image has a filler in the bottom which is 50px.

    return background;
}
Put your own version of that in a class and feed it the image id and anything else it needs. It returns a Bitmap for you to do whatever you want with (display it in an ImageView, let the user save it and set it as wallpaper).
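For example, assuming this runs inside an Activity, that getPoster() takes the Resources and a drawable id (adjust to whatever parameters you actually pass), and that R.drawable.background and R.id.imageview1 are placeholder ids, the result could be displayed and applied roughly like this:

// Display the composed image and, if the user asks for it, set it as the wallpaper.
Bitmap poster = getPoster(getResources(), R.drawable.background);
ImageView iv = (ImageView) findViewById(R.id.imageview1);
iv.setImageBitmap(poster);
try {
    WallpaperManager.getInstance(this).setBitmap(poster);
} catch (IOException e) {
    e.printStackTrace();
}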
I know I did this on the PC with ImageMagick a few years ago (saving an image with text on it).
It seems ImageMagick has been ported to Android, so I would start digging into their documentation.
https://github.com/lilac/Android-ImageMagick
Ok! Francesco my friend, I've an idea, although not working code ('cuz I'm not really good at it). So, here it is:
Implement an OnClickListener on your ImageView, like below:
ImageView iv = (ImageView) findViewById(R.id.imageview1);
iv.setOnClickListener(new View.OnClickListener()
{
    public void onClick(View v)
    {
        /** When I say do your stuff here, I mean read the user input and set your wallpaper here. I'm sorry that I don't really know how to save/set the wallpaper */
    }
});
When it comes to reading user input and generating random quotes, you can do this:
You said you already have the quotes saved in your strings.xml file. Using the ids of those strings, I think you can implement a switch-case scenario using java.util.Scanner and java.util.Random. Ultimately, using these in your ImageView's OnClickListener could/should produce the desired output.
I know my answer is vague, but I have a faint hope it gives you a small lead on what you can implement. I seriously hope there are better answers than this; if not, I hope this helps you some, and I also hope I'm not leading you in the wrong direction, since this is just speculation. Sorry, but this is all I've got.
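As a concrete variation on that idea, picking a random quote is usually simpler with a <string-array> resource than with a switch over individual string ids; a sketch, assuming the quotes are moved into an array named quotes in strings.xml and this runs inside an Activity:

// res/values/strings.xml would contain something like:
// <string-array name="quotes"> <item>First quote</item> <item>Second quote</item> </string-array>
String[] quotes = getResources().getStringArray(R.array.quotes);
String randomQuote = quotes[new java.util.Random().nextInt(quotes.length)];
// randomQuote can then be handed to the Canvas-drawing code above as the text to draw.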

How can I tell if a closed path contains a given point?

In Android, I have a Path object which I happen to know defines a closed path, and I need to figure out if a given point is contained within the path. What I was hoping for was something along the lines of
path.contains(int x, int y)
but that doesn't seem to exist.
The specific reason I'm looking for this is because I have a collection of shapes on screen defined as paths, and I want to figure out which one the user clicked on. If there is a better way to be approaching this such as using different UI elements rather than doing it "the hard way" myself, I'm open to suggestions.
I'm open to writing an algorithm myself if I have to, but that means different research I guess.
Here is what I did and it seems to work:
RectF rectF = new RectF();
path.computeBounds(rectF, true);
region = new Region();
region.setPath(path, new Region((int) rectF.left, (int) rectF.top, (int) rectF.right, (int) rectF.bottom));
Now you can use the region.contains(x,y) method.
Point point = new Point();
mapView.getProjection().toPixels(geoPoint, point);
if (region.contains(point.x, point.y)) {
// Within the path.
}
** Update on 6/7/2010 **
The region.setPath method will cause my app to crash (no warning message) if the rectF is too large. Here is my solution:
// Get the screen rect. If this intersects with the path's rect
// then let's display this zone. The rectF will become the
// intersection of the two rects. This decreases the size, so no more crashes.
Rect drawableRect = new Rect();
mapView.getDrawingRect(drawableRect);
if (rectF.intersects(drawableRect.left, drawableRect.top, drawableRect.right, drawableRect.bottom)) {
    // ... Display Zone.
}
The android.graphics.Path class doesn't have such a method. The Canvas class does have a clipping region that can be set to a path, but there is no way to test it against a point. You might try Canvas.quickReject, testing against a single-point rectangle (or a 1x1 Rect). I don't know if that would really check against the path or just the enclosing rectangle, though.
The Region class clearly only keeps track of the containing rectangle.
You might consider drawing each of your regions into an 8-bit alpha-layer Bitmap, with each Path filled in its own 'color' value (make sure anti-aliasing is turned off in your Paint). This creates a kind of mask for each path, filled with an index to the path that filled it. Then you could just use the pixel value as an index into your list of paths.
Bitmap lookup = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
// do this so that regions outside any path have a default
// path index of 255
lookup.eraseColor(0xFF000000);

Canvas canvas = new Canvas(lookup);
Paint paint = new Paint();
// these are defaults, you only need them if reusing a Paint
paint.setAntiAlias(false);
paint.setStyle(Paint.Style.FILL);

for (int i = 0; i < paths.size(); i++)
{
    paint.setColor(i << 24); // use only alpha value for color 0xXX000000
    canvas.drawPath(paths.get(i), paint);
}
Then look up points,
int pathIndex = lookup.getPixel(x, y);
pathIndex >>>= 24;
Be sure to check for 255 (no path) if there are unfilled points.
WebKit's SkiaUtils has a C++ work-around for Randy Findley's bug:
bool SkPathContainsPoint(SkPath* originalPath, const FloatPoint& point, SkPath::FillType ft)
{
    SkRegion rgn;
    SkRegion clip;

    SkPath::FillType originalFillType = originalPath->getFillType();

    const SkPath* path = originalPath;
    SkPath scaledPath;
    int scale = 1;

    SkRect bounds = originalPath->getBounds();

    // We can immediately return false if the point is outside the bounding rect
    if (!bounds.contains(SkFloatToScalar(point.x()), SkFloatToScalar(point.y())))
        return false;

    originalPath->setFillType(ft);

    // Skia has trouble with coordinates close to the max signed 16-bit values
    // If we have those, we need to scale.
    //
    // TODO: remove this code once Skia is patched to work properly with large
    // values
    const SkScalar kMaxCoordinate = SkIntToScalar(1<<15);
    SkScalar biggestCoord = std::max(std::max(std::max(bounds.fRight, bounds.fBottom), -bounds.fLeft), -bounds.fTop);

    if (biggestCoord > kMaxCoordinate) {
        scale = SkScalarCeil(SkScalarDiv(biggestCoord, kMaxCoordinate));

        SkMatrix m;
        m.setScale(SkScalarInvert(SkIntToScalar(scale)), SkScalarInvert(SkIntToScalar(scale)));
        originalPath->transform(m, &scaledPath);
        path = &scaledPath;
    }

    int x = static_cast<int>(floorf(point.x() / scale));
    int y = static_cast<int>(floorf(point.y() / scale));
    clip.setRect(x, y, x + 1, y + 1);

    bool contains = rgn.setPath(*path, clip);

    originalPath->setFillType(originalFillType);
    return contains;
}
I know I'm a bit late to the party, but I would solve this problem by thinking of it as determining whether or not a point is in a polygon.
http://en.wikipedia.org/wiki/Point_in_polygon
The math computes more slowly when you're looking at Bezier splines instead of line segments, but drawing a ray from the point still works.
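If you flatten the path (or already have the polygon's vertices), the even-odd ray-casting test is only a few lines. A sketch in plain Java, with the vertices passed as arrays; this illustrates the algorithm itself, it is not an Android API call:

// Even-odd ray-casting test: cast a horizontal ray from (px, py) and count edge crossings.
static boolean pointInPolygon(float px, float py, float[] xs, float[] ys) {
    boolean inside = false;
    for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
        if ((ys[i] > py) != (ys[j] > py)) {
            // x coordinate where edge (j, i) crosses the horizontal line y = py
            float xAtRay = (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i];
            if (px < xAtRay) {
                inside = !inside; // each crossing toggles inside/outside
            }
        }
    }
    return inside;
}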
For completeness, I want to make a couple of notes here:
As of API 19, there is an intersection operation for Paths. You could create a very small square path around your test point, intersect it with the Path, and see whether the result is empty (see the sketch after this list).
You can convert Paths to Regions and do a contains() operation. However, Regions work in integer coordinates, and I think they use transformed (pixel) coordinates, so you'll have to work with that. I also suspect that the conversion process is computationally intensive.
The edge-crossing algorithm that Hans posted is good and quick, but you have to be very careful with certain corner cases, such as when the ray passes directly through a vertex, or intersects a horizontal edge, or when round-off error is a problem, which it always is.
The winding number method is pretty much foolproof, but involves a lot of trig and is computationally expensive.
This paper by Dan Sunday gives a hybrid algorithm that's as accurate as the winding number but as computationally simple as the ray-casting algorithm. It blew me away how elegant it was.
See https://stackoverflow.com/a/33974251/338479 for my code which will do point-in-path calculation for a path consisting of line segments, arcs, and circles.
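Here is a sketch of the API 19 Path.op idea mentioned in the first note above; the probe size is arbitrary and you'd tune it to your coordinate scale:

// Requires API 19: intersect a tiny square around the test point with the path.
static boolean pathContains(Path path, float x, float y) {
    final float eps = 0.5f; // half-size of the probe square
    Path probe = new Path();
    probe.addRect(x - eps, y - eps, x + eps, y + eps, Path.Direction.CW);
    probe.op(path, Path.Op.INTERSECT); // probe now holds the overlap of the square and the path
    return !probe.isEmpty();
}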
