I have a drawing app (targeting Android 3.x+) that performs a full-screen copy of a Bitmap to the Canvas in onDraw(), and I want to ensure I am copying from 32-bit to 32-bit, per this article. I want a 32-bit -> 32-bit copy so that I get the best performance and don't have to supply any Paint to the Canvas.drawBitmap() call.
When I create my Bitmap, I ensure that it is done via:
mBitmap = Bitmap.createBitmap(screenWidth, screenHeight, Bitmap.Config.ARGB_8888);
Now, in my drawing Activity, I query the Window via getWindow().getAttributes().format, but am returned OPAQUE (the default value). My question: is this 32-bit? In Romain's article above, he mentions that as of Android 2.3 windows are 32-bit by default, but a return value of OPAQUE is not so reassuring.
If someone could clarify what I am seeing here it would be greatly appreciated.
You are creating a 32-bit bitmap, but you are not modifying the window format.
Add a getWindow().setAttributes(attr) call to ensure full 32-bit compatibility.
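For example, a minimal sketch of what that could look like in the drawing Activity's onCreate() (assuming you want RGBA_8888, the 32-bit format):

// Request a 32-bit window so the Bitmap -> window copy stays 32-bit to 32-bit
WindowManager.LayoutParams attr = getWindow().getAttributes();
attr.format = PixelFormat.RGBA_8888;
getWindow().setAttributes(attr);
// Equivalent shortcut: getWindow().setFormat(PixelFormat.RGBA_8888);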
The Problem
I have been working on implementing a super-resolution model with TensorFlow Lite. I have an empty bitmap 4x the size of the input bitmap (which is bmp):
Bitmap out = Bitmap.createBitmap(bmp.getWidth() * 4, bmp.getHeight() * 4, Bitmap.Config.ARGB_8888);
And I converted both bitmaps to TensorImages
TensorImage originalImage = TensorImage.fromBitmap(bmp);
TensorImage superImage = TensorImage.fromBitmap(out);
However, when I run the model (InterpreterApi tflite):
tflite.run(originalImage.getBuffer(), superImage.getBuffer());
The bitmap from superImage has not changed, and it holds the blank bitmap I made at the start.
superImage.getBitmap();
Things I've tried
I looked at basic examples and documentation; most are geared toward classification, but they all seemed to do it this way.
I fed the input bitmap to the output, and my app showed the input, so I know that the file picking and preview works.
I tested with different data types to store the output, and they either left it blank or weren't compatible with TensorFlow.
What I think
I suspect the problem has something to do with tflite.run() writing to a separate instance of superImage, leaving me with the old one. I may also need a different data format that I haven't tried yet.
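For reference, a rough sketch of what I am considering trying next (untested; the output shape and FLOAT32 data type are my assumptions about the model): run the interpreter into an explicit TensorBuffer and load that into a TensorImage afterwards.

// Assumed output shape [1, h*4, w*4, 3] and FLOAT32 output - adjust to the model
TensorBuffer outputBuffer = TensorBuffer.createFixedSize(
        new int[]{1, bmp.getHeight() * 4, bmp.getWidth() * 4, 3}, DataType.FLOAT32);

tflite.run(originalImage.getBuffer(), outputBuffer.getBuffer());

// Wrap the filled buffer so it can be turned back into a Bitmap
TensorImage superImage = new TensorImage(DataType.FLOAT32);
superImage.load(outputBuffer);
Bitmap result = superImage.getBitmap(); // may need value scaling depending on the model's output range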
Thank you for your time.
I am creating an Android Tile that is meant to display custom and dynamically created graphics, i.e. a chart.
However, due to several limitations I have yet to find a way to do so. Tiles seem to work fundamentally differently from Activities, and the Tiles API only allows a handful of predefined UI elements to be created. The only usable one for me seems to be the Image LayoutElement.
The Image can be created by passing either a resource or a ByteArray. The former is not possible when dealing with dynamically created graphs.
Thus, my only hope (I think) is to create an Image in the form of a ByteArray myself.
How can I do this? Is there any Java framework to draw graphics directly?
I have considered the following:
Using the provided UI elements: wouldn't work since the placement is way too imprecise and the exact position of an element cannot be controlled. Also, these elements are not meant for drawing.
Using AWT: doesn't work on Android. Thus, almost any drawing and/or charting library is out of the game.
JavaFX: would probably work, but there seems to be no way to draw directly on ByteArrays/BufferedImages, as the application needs to be rendered first. Rendering JavaFX doesn't seem possible for Tiles.
Using Android's Canvas: again, an Activity is needed.
Turns out I was wrong: you can very well use the Canvas within a Tile. Converting it to a resource is, however, a little tricky, so here's some code:
final Bitmap bitmap = Bitmap.createBitmap(chart.getWidth(), chart.getHeight(),
        Bitmap.Config.RGB_565);
final Canvas canvas = new Canvas(bitmap);

// Sets the background color
final Color background = Color.valueOf(chart.getBackgroundColor());
canvas.drawRGB(
        Math.round(background.red() * 255),
        Math.round(background.green() * 255),
        Math.round(background.blue() * 255)
);

// YOUR DRAWING OPERATIONS: e.g. canvas.drawRect

final ByteBuffer byteBuffer = ByteBuffer.allocate(bitmap.getByteCount());
bitmap.copyPixelsToBuffer(byteBuffer);
final byte[] bytes = byteBuffer.array();

return new ResourceBuilders.ImageResource.Builder()
        .setInlineResource(
                new ResourceBuilders.InlineImageResource.Builder()
                        .setData(bytes)
                        .setWidthPx(chart.getWidth())
                        .setHeightPx(chart.getHeight())
                        .setFormat(ResourceBuilders.IMAGE_FORMAT_RGB_565)
                        .build()
        )
        .build();
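For completeness, here is a rough sketch (my own addition, not part of the original code; the resource id "chart" is arbitrary) of how the returned ImageResource can be registered and then referenced from an Image layout element using the androidx.wear.tiles builders:

// Register the ImageResource under an id in the tile's Resources callback
ResourceBuilders.Resources resources = new ResourceBuilders.Resources.Builder()
        .setVersion("1")
        .addIdToImageMapping("chart", chartImageResource)
        .build();

// Reference the same id from the tile layout
LayoutElementBuilders.Image image = new LayoutElementBuilders.Image.Builder()
        .setResourceId("chart")
        .setWidth(DimensionBuilders.dp(chart.getWidth()))
        .setHeight(DimensionBuilders.dp(chart.getHeight()))
        .build();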
This example shows using Compose Canvas to render charts for Tiles.
https://github.com/google/horologist/pull/249
You can also encode the bitmap to PNG instead of raw RGB_565 data:
Remove
setFormat(ResourceBuilders.IMAGE_FORMAT_RGB_565)
and use
val bytes = ByteArrayOutputStream().apply {
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, this)
}.toByteArray()
On an ICS device, I tried the following code to draw two rectangles.
Paint paint = new Paint();

Path p1 = new Path();
p1.moveTo(0, 0);
p1.lineTo(0, 100);
p1.lineTo(100, 100);
p1.lineTo(100, 0);
p1.close();

Path p2 = new Path();
Matrix scaling = new Matrix();
scaling.preScale(2, 2);
p1.transform(scaling, p2);

canvas.drawPath(p1, paint);
canvas.drawPath(p2, paint);
Running the above code on an ICS device with hardware acceleration enabled (as it is by default), p1 is drawn whereas p2 is not.
In general, what I am seeing is that as long as a Path is not hand-wired (i.e. built by calling lineTo(), quadTo(), etc.) but is instead obtained by copying or transforming (i.e. via the copy constructor, transform(matrix, dest), offset(x, y, dest), etc.), it is not drawn.
I found a "widely known" issue that is similar but not exactly the same as my problem: https://groups.google.com/forum/#!msg/android-developers/eTxV4KPy1G4/tAe2zUPCjMcJ
Therefore, can anyone tell me what issue I am running into? In my case, I have to resort to path transformation, otherwise code complexity would greatly increase. Thanks!
Try setting android:layerType="software" in XML on the view to see if that fixes it. Some methods aren't available with hardware acceleration on all API levels.
the list is here:
http://developer.android.com/guide/topics/graphics/hardware-accel.html
Note that if changing the layer type fixes it, you should create a separate layout for the newer APIs for optimal performance.
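A minimal sketch of both ways to force software rendering for just that view (the view class and id here are placeholders):

<!-- in the layout XML -->
<com.example.MyPathView
    android:id="@+id/path_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layerType="software" />

// or programmatically, e.g. in the view's constructor
setLayerType(View.LAYER_TYPE_SOFTWARE, null);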
I'm currently working on a project where I have to use RenderScript, so I started learning about it, and it's a great technology: much like OpenGL, it lets your computational code run at the native level rather than on the Dalvik VM, so that part of the code is processed much faster than normal Android code.
I started working with image processing, and what I was wondering is:
Is it possible to resize a bitmap using RenderScript? This should be much faster than resizing a bitmap using regular Android code. Plus, RenderScript can process data larger than 48 MB (the per-process memory limit on some phones).
While you could use RenderScript to do the bitmap resize, I'm not sure that's the best choice. A quick look at the Android code base shows that the Java API does go into native code to do a bitmap resize, although if the resize algorithm doesn't meet your needs you'll have to implement your own.
There are a number of answers on SO for getting a bitmap scaled efficiently. My recommendation is to try those, and if they still aren't doing what you want, either in speed or in how the results look, then investigate writing your own. If you do write your own, use the available performance tools to check whether you really are faster or are just reinventing the wheel.
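For comparison, the plain framework resize (which already goes through native code) is a one-liner; a quick sketch, with srcBitmap and the target size standing in for your own values:

// filter = true enables bilinear filtering, usually what you want when scaling photos
Bitmap scaled = Bitmap.createScaledBitmap(srcBitmap, targetWidth, targetHeight, true);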
You can use the below function to resize the image.
private Bitmap resize(Bitmap inBmp) {
    RenderScript mRs = RenderScript.create(getApplication());
    // Keep the aspect ratio: scale the height by the same factor as the width.
    Bitmap outBmp = Bitmap.createBitmap(OUTPUT_IMAGE_WIDTH,
            inBmp.getHeight() * OUTPUT_IMAGE_WIDTH / inBmp.getWidth(), inBmp.getConfig());

    ScriptIntrinsicResize siResize = ScriptIntrinsicResize.create(mRs);
    Allocation inAlloc = Allocation.createFromBitmap(mRs, inBmp);
    Allocation outAlloc = Allocation.createFromBitmap(mRs, outBmp);

    siResize.setInput(inAlloc);
    siResize.forEach_bicubic(outAlloc);
    outAlloc.copyTo(outBmp);

    // Release native resources so they don't leak.
    inAlloc.destroy();
    outAlloc.destroy();
    siResize.destroy();
    mRs.destroy();

    return outBmp;
}
OUTPUT_IMAGE_WIDTH is an integer constant specifying the width of the output image.
NOTE: Be careful when using RenderScript Allocations; they lead to memory leaks if you don't destroy them.
I have a little experimentation app (essentially a very cut-down version of the LunarLander demo in the Android SDK), with a single SurfaceView. I have a Drawable "sprite" which I periodically draw into the SurfaceView's Canvas object in different locations, without attempting to erase the previous image. Thus:
private class MyThread extends Thread {
    SurfaceHolder holder;  // Initialised in ctor (acquired via getHolder())
    Drawable sprite;       // Initialised in ctor
    Rect bounds;           // Initialised in ctor
    ...

    @Override
    public void run() {
        while (true) {
            Canvas c = holder.lockCanvas();
            synchronized (bounds) {
                sprite.setBounds(bounds);
            }
            sprite.draw(c);
            holder.unlockCanvasAndPost(c);
        }
    }

    /**
     * Periodically called from activity thread
     */
    public void updatePos(int dx, int dy) {
        synchronized (bounds) {
            bounds.offset(dx, dy);
        }
    }
}
Running in the emulator, what I'm seeing is that after a few updates have occurred, several old "copies" of the image begin to flicker, i.e. appearing and disappearing. I initially assumed that perhaps I was misunderstanding the semantics of a Canvas, and that it somehow maintains "layers", and that I was thrashing it to death. However, I then discovered that I only get this effect if I try to update faster than roughly every 200 ms. So my next best theory is that this is perhaps an artifact of the emulator not being able to keep up, and tearing the display. (I don't have a physical device to test on, yet.)
Is either of these theories correct?
Note: I don't actually want to do this in practice (i.e. draw hundreds of overlaid copies of the same thing). However, I would like to understand why this is happening.
Environment:
Eclipse 3.6.1 (Helios) on Windows 7
JDK 6
Android SDK Tools r9
App is targeting Android 2.3.1
Tangential question:
My run() method is essentially a stripped-down version of how the LunarLander example works (with all the excess logic removed). I don't quite understand why this isn't going to saturate the CPU, as there seems to be nothing to prevent it running at full pelt. Can anyone clarify this?
Ok, I've butchered Lunar Lander in a similar way to you, and having seen the flickering I can tell you that what you are seeing is a simple artefact of the double-buffering mechanism that every Surface has.
When you draw anything on a Canvas attached to a Surface, you are drawing to the 'back' buffer (the invisible one). And when you unlockCanvasAndPost() you are swapping the buffers over... what you drew suddenly becomes visible as the "back" buffer becomes the "front", and vice versa. And so your next frame of drawing is done to the old "front" buffer...
The point is that you always draw to separate buffers on alternate frames. I guess there's an implicit assumption in the graphics architecture that you're always going to be writing every pixel.
Having understood this, I think the real question is why it doesn't flicker on hardware. Having worked on graphics drivers in years gone by, I can guess at the reasons but hesitate to speculate too far. Hopefully the above is sufficient to satisfy your curiosity about this rendering artefact. :-)
You need to clear the previous position of the sprite, as well as the new position. This is what the View system does automatically. However, if you use a Surface directly and do not redraw every pixel (either with an opaque color or using a SRC blending mode) you must clear the content of the buffer yourself. Note that you can pass a dirty rectangle to lockCanvas() and it will do the union for you of the previous dirty rectangle and the one you are passing (this is the mechanism used by the UI toolkit.) It will also set the clip rect of the Canvas to be the union of these two rectangles.
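As a minimal sketch of that idea (my illustration, based on the question's run() loop), clearing the whole buffer at the start of each frame before drawing the sprite:

Canvas c = holder.lockCanvas();
try {
    // Redraw every pixel: wipe the previous frame first
    c.drawColor(Color.BLACK); // or c.drawColor(0, PorterDuff.Mode.CLEAR) to clear to transparent
    synchronized (bounds) {
        sprite.setBounds(bounds);
    }
    sprite.draw(c);
} finally {
    holder.unlockCanvasAndPost(c);
}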
As for your second question, unlockCanvasAndPost() will do a vsync, so you will never draw at more than ~60fps (most devices that I've seen have a display refresh rate set around 55Hz.)