I have asked similar questions a few times and still have not resolved my issue, so I thought I'd come at it another way and see if anyone can help me out.
I am writing a game for Android; this is my first attempt at a program this large/complex. The game is a 2d action/puzzler, and I am using Canvas rather than OpenGL ES for drawing.
Everything is going well, except when I draw my own imported images (PNGs, BMPs, JPGs). I can draw shapes and animations using the built-in Canvas drawShape methods (including many Paints with alpha) till the cows come home and maintain over 60 fps, but as soon as I add one of my own images (say, a 60 kB PNG saved from Photoshop) I immediately see a major performance hit. The bigger the PNG is on screen, the bigger the hit (which makes sense).
I have a simple question that may help me understand if I'm doing something wrong here:
If you use the Canvas draw methods to draw a red rectangle on the screen, should I expect to be able to instead import and display a red-rectangle image of the same dimensions without a loss in performance? I have done a lot of research on this, but it is still not clear to me why Android (or the Nexus 7) has such a hard time with my images.
Is Canvas the problem? Do I need to port to libGDX or AndEngine (that will be a process, I think...)?
If it helps, this is how I'm loading my assets:
@Override
public Image newImage(String fileName, ImageFormat format) {
    Config config;
    if (format == ImageFormat.RGB565)
        config = Config.RGB_565;
    else if (format == ImageFormat.ARGB4444)
        config = Config.ARGB_4444;
    else
        config = Config.ARGB_8888;

    Options options = new Options();
    options.inPreferredConfig = config;

    InputStream in = null;
    Bitmap bitmap = null;
    try {
        in = assets.open(fileName);
        bitmap = BitmapFactory.decodeStream(in, null, options);
        if (bitmap == null)
            throw new RuntimeException("Couldn't load bitmap from asset '"
                    + fileName + "'");
    } catch (IOException e) {
        throw new RuntimeException("Couldn't load bitmap from asset '"
                + fileName + "'", e);
    } finally {
        if (in != null) {
            try {
                in.close();
            } catch (IOException e) {
                // nothing useful to do if close() fails
            }
        }
    }

    if (bitmap.getConfig() == Config.RGB_565)
        format = ImageFormat.RGB565;
    else if (bitmap.getConfig() == Config.ARGB_4444)
        format = ImageFormat.ARGB4444;
    else
        format = ImageFormat.ARGB8888;
    return new AndroidImage(bitmap, format);
}
Canvas is slower than GLSurfaceView, but a Nexus 7 should handle 300 bitmaps on one screen without breaking a sweat.
Look for your performance problem in onDraw.
Watch this for tips and tricks on real-time game design:
http://www.google.com/events/io/2010/sessions/writing-real-time-games-android.html
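One concrete thing to check is the bitmap config: the in-memory size of a decoded bitmap has nothing to do with the 60 kB file on disk, and if the decoded config doesn't match the window's pixel format, a conversion cost is paid on every draw. A back-of-envelope sketch (plain Java; the 1280x800 screen size is an assumption for a Nexus 7-class device):

```java
public class BitmapBytes {
    // Uncompressed in-memory size of a decoded bitmap:
    // width * height * bytes per pixel (ARGB_8888 = 4, RGB_565 = 2).
    // The compressed on-disk PNG size is irrelevant once decoded.
    public static long bytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }
}
```

So a "60 kB" PNG stretched over a 1280x800 screen is really about 4 MB of pixels per frame in ARGB_8888, versus about 2 MB in RGB_565.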
I need to get the preview image data from an Android phone's camera and publish it via ROS. Here is my sample code:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (data != null) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        if (yuvImage != null) {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ChannelBufferOutputStream stream = new ChannelBufferOutputStream(MessageBuffers.dynamicBuffer());
            yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 80, baos);
            yuvImage = null;
            stream.buffer().writeBytes(baos.toByteArray());
            try {
                baos.flush();
                baos.close();
                baos = null;
            } catch (IOException e) {
                e.printStackTrace();
            }
            // compressedImage type
            sensor_msgs.CompressedImage compressedImage = compressedImagePublisher.newMessage();
            compressedImage.getHeader().setFrameId("xxx"); // frame id
            Time curTime = connectedNode.getCurrentTime();
            compressedImage.getHeader().setStamp(curTime); // time
            compressedImage.setFormat("jpeg"); // format
            compressedImage.setData(stream.buffer().copy()); // data
            stream.buffer().clear();
            try {
                stream.flush();
                stream.close();
                stream = null;
            } catch (IOException e) {
                e.printStackTrace();
            }
            // publish
            System.out.println("-----Publish: " + compressedImage.getData().array().length + "-----");
            compressedImagePublisher.publish(compressedImage);
            compressedImage = null;
            System.gc();
        } else {
            Log.v("Log_Tag", "-----Failed to get yuvImage!-----");
        }
    } else {
        Log.v("Log_Tag", "-----Failed to get the preview frame!-----");
    }
}
I then subscribed to the topic, just to check whether the messages had been published completely and correctly, with the following code:
@Override
public void onStart(ConnectedNode node) {
    this.connectedNode = node;
    // publisher
    this.compressedImagePublisher = connectedNode.newPublisher(topic_name, sensor_msgs.CompressedImage._TYPE);
    // subscriber
    this.compressedImageSubscriber = connectedNode.newSubscriber(topic_name, sensor_msgs.CompressedImage._TYPE);
    compressedImageSubscriber.addMessageListener(new MessageListener<CompressedImage>() {
        @Override
        public void onNewMessage(final CompressedImage compressedImage) {
            byte[] receivedImageBytes = compressedImage.getData().array();
            if (receivedImageBytes != null && receivedImageBytes.length != 0) {
                System.out.println("-----Subscribe(+46?): " + receivedImageBytes.length + "-----");
                // decode a bitmap from byte[]; a strange offset turns out to be necessary
                Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
                ...
            }
        }
    });
}
I'm confused about this offset. It means the size of the image bytes changed after being packaged and published by ROS, and if I don't set the offset, decoding the bitmap fails. Stranger still, the offset sometimes changes too.
I don't know why. I have read some articles about the JPEG structure and suspect the extra bytes may be header information of the JPEG byte messages. However, this problem only happens in the ros-android scenario.
Does anyone have a good idea about this?
OK, I know the question and the problem as described above were terrible; that's why it got two downvotes. I'm sorry about that, so let me make up for it with more information and a clearer statement of the problem.
First, forget all the code I pasted before. The problem happened in my ros-android project. In this project, I need to send sensor messages of compressed image type to a ROS server and get back the processed image bytes (JPEG format) via publish/subscribe. In theory the size of the image bytes should stay the same, and in fact this is what I saw in my ros-C and ros-C# projects under the same conditions.
However, in ros-android the data gets bigger! Because of this, I can't simply decode a bitmap from the subscribed image bytes; I have to skip the extra bytes using an offset into the byte array. I don't know why this happens in ros-android (or ros-java), or what the added part is.
I can't find the reason in my code, which is why I pasted it in such detail. I really need help! Thanks in advance!
Maybe I really should have checked the API before asking here. This is a simple question if you check the properties of ChannelBuffer in the API, because the arrayOffset property already answers it!
Still, for the sake of being thorough, and since some of you asked for clarification of my answer, here is a more detailed explanation.
First, I have to admit I still don't know how ChannelBuffer packages the image byte array, which means I still don't know why there is an arrayOffset before the array data. Why can't we just get the data directly? Is there an important reason for the arrayOffset to exist, for safety or for efficiency? I can't find an answer in the API docs, and honestly I'm tired of this question now; vote on it as you like.
Back to the subject: the problem can be solved this way:
int offset = compressedImage.getData().arrayOffset();
Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
I still hope someone with good knowledge of this can tell me why; I'd really appreciate it! And if you're as tired of this question as I am, let's just vote to close it; I'd appreciate that too. Thanks anyway!
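The contract is easier to see with a plain java.nio.ByteBuffer (a stand-in sketch; rosjava's ChannelBuffer exposes the same idea): when a buffer is a view onto a larger backing array, array() returns the whole backing array, and arrayOffset() tells you where the view's bytes actually begin. Decoding from index 0 therefore feeds garbage header bytes to BitmapFactory.

```java
import java.nio.ByteBuffer;

public class OffsetDemo {
    // Copy out only the bytes that belong to the view, skipping the
    // backing array's leading bytes via arrayOffset().
    public static byte[] viewBytes(ByteBuffer view) {
        byte[] backing = view.array();       // the WHOLE backing array
        int offset = view.arrayOffset();     // where this view's data starts
        byte[] out = new byte[view.remaining()];
        System.arraycopy(backing, offset + view.position(), out, 0, out.length);
        return out;
    }

    public static ByteBuffer makeView() {
        ByteBuffer whole = ByteBuffer.allocate(8);   // backing array of 8 bytes
        for (byte b = 0; b < 8; b++) whole.put(b);
        whole.position(3);      // pretend the first 3 bytes are bookkeeping
        return whole.slice();   // view whose arrayOffset() is 3
    }
}
```

This is exactly why `decodeByteArray(receivedImageBytes, offset, length - offset)` works: the message payload starts at arrayOffset(), not at index 0.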
My question is how to handle an OutOfMemory error when decoding a byte array into a bitmap so I can rotate it. My code is below. Before you call this a duplicate: I have tried BitmapFactory.Options with the sample size set to 2, but the quality loss was far too bad to be acceptable. It also appears to happen on only one device, so maybe it's a one-off, but I'm inclined to believe that if it affects one, there will be 25 more like it later. It happens on the FIRST photo taken, and this is the only bitmap work this activity does. I'm working in Monodroid, but Java answers are welcome too, as I can usually translate them to C# fairly easily.
public void GotImage(byte[] image)
{
    try
    {
        Android.Graphics.Bitmap thePicture = Android.Graphics.BitmapFactory.DecodeByteArray(image, 0, image.Length);
        Array.Clear(image, 0, image.Length);
        image = null;
        GC.Collect();

        Android.Graphics.Matrix m = new Android.Graphics.Matrix();
        m.PostRotate(90);
        Android.Graphics.Bitmap rotatedPicture = Android.Graphics.Bitmap.CreateBitmap(thePicture, 0, 0, thePicture.Width, thePicture.Height, m, true);
        thePicture.Dispose();
        thePicture = null;
        GC.Collect();

        using (MemoryStream ms = new MemoryStream())
        {
            rotatedPicture.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);
            image = ms.ToArray();
        }
        rotatedPicture.Dispose();
        rotatedPicture = null;
        GC.Collect();

        listOfImages.Add(image);
        storeButton.Text = " Store " + listOfImages.Count + " Pages ";
        storeButton.Enabled = true;
        takePicButton.Enabled = true;
        gotImage = false;
        cameraPreviewArea.camera.StartPreview();
    }
    catch (Exception ex)
    {
        AlertDialog.Builder alertDialog = new AlertDialog.Builder(this);
        alertDialog.SetTitle("Error Taking Picture");
        alertDialog.SetMessage(ex.ToString());
        alertDialog.SetPositiveButton("OK", delegate { });
        alertDialog.Show();
    }
}
What's rotatedPicture.Dispose()? Does this just set the reference to null? The best and quickest way to get rid of a Bitmap's memory is via the recycle() method.
Well after a long day of learning, I discovered a fix/workaround. This involved setting the resolution of the picture being taken by the camera before the picture was taken instead of trying to scale it after the fact. I also set the option in settings for the user to try different resolutions till they get one that works best for them.
Camera.Parameters parameters = camera.GetParameters();
parameters.SetPictureSize(parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Width,
parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Height);
camera.SetParameters(parameters);
camera.StartPreview();
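One caveat: taking the last element of SupportedPictureSizes assumes the list is sorted smallest-to-largest, which is not guaranteed on every device. A safer sketch (plain Java, with a minimal stand-in Size class since Camera.Size isn't available off-device) picks the largest supported size whose pixel count fits a budget:

```java
import java.util.List;

public class SizePicker {
    // Minimal stand-in for Camera.Size; fields mirror Android's width/height.
    public static class Size {
        public final int width, height;
        public Size(int width, int height) { this.width = width; this.height = height; }
    }

    // Return the largest supported size whose pixel count does not exceed
    // maxPixels; fall back to the smallest size if everything is too big.
    public static Size pick(List<Size> supported, long maxPixels) {
        Size best = null, smallest = null;
        for (Size s : supported) {
            long px = (long) s.width * s.height;
            if (smallest == null || px < (long) smallest.width * smallest.height) smallest = s;
            if (px <= maxPixels && (best == null || px > (long) best.width * best.height)) best = s;
        }
        return best != null ? best : smallest;
    }
}
```

The same logic translates line-for-line to C# over `parameters.SupportedPictureSizes`.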
The problem
Hi there,
I'm developing an application where the user specifies some pictures and how long each one stays on the screen. Sometimes the user wants to create something like a small animation, or to show the images for a very short time. The problem is that after some time the images are not shown when they should be, and we get a few ms of error. In this application timing matters, so I would like some help figuring out what the problem might be.
The code
So let me explain how it works. I take the pictures from my web app and then save them in a HashMap:
Bitmap image = ImageOperations(url,String.valueOf(frameNum) + ".jpg");
ImageMap.put(String.valueOf(frameNum), image);
where the method ImageOperations looks like this:
private Bitmap ImageOperations(String url, String saveFilename) {
    try {
        Display display = getWindowManager().getDefaultDisplay();
        InputStream is = (InputStream) this.fetch(url);
        Bitmap theImage = BitmapFactory.decodeStream(is);
        if (theImage.getHeight() >= 700 || theImage.getWidth() >= 700) {
            theImage = Bitmap.createScaledBitmap(theImage,
                    display.getWidth(), display.getHeight() - 140, true);
        }
        return theImage;
    } catch (MalformedURLException e) {
        e.printStackTrace();
        return null;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
Later I run a thread that updates the UI at the times the user specified. The method that updates it is this one:
public void setPictures(int NumOfFrame) {
    if (frameArray.get(NumOfFrame - 1).frame_pic.contains("n/a") != true) {
        ImagePlace.setImageBitmap(ImageMap.get(String.valueOf(NumOfFrame)));
    } else {
        ImagePlace.setImageDrawable(null);
    }
}
After updating the image we put the thread to sleep, and when it runs again it updates the image once more. Is there something here that creates the problem? Does it have to do with garbage collection?
Thank you in advance
The issue is probably heap growth as additional images are loaded. I suggest you do some profiling; things will be much clearer and you'll get a full picture of the app's timings.
First, you are missing a null check here:
ImageMap.get(String.valueOf(NumOfFrame))
And you do not recycle the old bitmap here. Note that the source must be recycled only after createScaledBitmap returns, and only if a new bitmap was actually created (createScaledBitmap may return the source unchanged):
Bitmap scaled = Bitmap.createScaledBitmap(theImage,
        display.getWidth(), display.getHeight() - 140, true);
if (scaled != theImage)
    theImage.recycle(); // free the full-size source once the scaled copy exists
theImage = scaled;
This can lead to OutOfMemory exceptions, which is the most likely cause given your description of the problem.
Also, BitmapFactory.decodeStream does not necessarily throw an exception when it fails; it can return null, so you need to add a null check there too.
I'm wanting to write a whiteboard app. I have a beginning that renders a bitmap (the drawing page) and then copies that bitmap to the surfaceView. It works perfectly in the emulator, but when I run it on my Samsung Galaxy Ace, it unexpectedly closes. This code:
public void surfaceCreated(SurfaceHolder holder) {
    Log.d(TAG, "Create surface");
    mo_paper = BitmapFactory.decodeResource(getResources(), R.drawable.paper);
    Log.d(TAG, "Created paper");
    mo_easel = new Canvas();
    Log.d(TAG, "Created easel");
    mo_easel.setBitmap(mo_paper);
    Log.d(TAG, "Set easel");
    mo_matrix = new Matrix();
    Log.d(TAG, "Assets loaded");
    mainThread.setRunning(true);
    mainThread.start();
    Log.d(TAG, "Threads started");
}
outputs 'Created easel' but not 'Set easel', so it appears the .setBitmap() method is causing the error.
Bitmaps loaded from resources are immutable. You need to pass a BitmapFactory.Options that tells BitmapFactory you want the resulting bitmap to be mutable.
Romain Guy was correct. On the phone the bitmap was being loaded as immutable, but in the emulator and on another phone it was being loaded as mutable (contrary to the documentation!). Setting the inMutable option isn't possible on APIs before 11, so in my case the simple solution was to create an empty bitmap with
mo_paper = Bitmap.createBitmap(paperWidth, paperHeight, Bitmap.Config.ARGB_8888);
and then draw whatever other bitmaps I want onto it.
Abstract:
reading images from file
with toggled bits to make them unusable for preview tools
can't use encryption, too much power needed
can I either optimize the code below, or is there a better approach?
Longer description:
I am trying to improve my code; maybe you have some ideas or improvements for the following situation. Please be aware that I am not trying to beat the CIA, nor do I care much if somebody "breaks" the encryption.
The background is simple: My app loads a bunch of images from a server into a folder on the SD card. I do NOT want the images to be simple JPG files, because in this case the media indexer would list them in the library, and a user could simply copy the whole folder to his harddrive.
The obvious way to go is encryption. But a full blown AES or other encryption does not make sense, for two reasons: I would have to store the passkey in the app, so anyone could get the key with some effort anyway. And the price for decrypting images on the fly is way too high (we are talking about e.g. a gallery with 30 200kB pictures).
So I decided to toggle some bits in the image. This makes the format unreadable for image tools (or previews), but is easily undone when reading the images. For "encrypting" I use a C# tool; the "decrypt" lines are the following:
public class CustomInputStream extends InputStream {
    private String _fileName;
    private BufferedInputStream _stream;

    public CustomInputStream(String fileName) {
        _fileName = fileName;
    }

    public void Open() throws IOException {
        int len = (int) new File(_fileName).length();
        _stream = new BufferedInputStream(new FileInputStream(_fileName), len);
    }

    @Override
    public int read() throws IOException {
        int value = _stream.read();
        // Don't flip bits on the end-of-stream marker (-1), only on real bytes.
        return value < 0 ? value : value ^ (1 << 7);
    }

    @Override
    public void close() throws IOException {
        _stream.close();
    }
}
I tried overriding the other read methods (the ones that read more than one byte) too, but that kills BitmapFactory; I'm not sure why, maybe I did something wrong. Here is the code for creating the image bitmap:
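A likely reason the bulk read "kills" BitmapFactory: the override must XOR only the bytes actually read (not the whole buffer) and must pass the end-of-stream value (-1) through untouched. A sketch of a correct override, in plain Java streams so it can be tried off-device (the single-bit XOR matches the question's scheme):

```java
import java.io.IOException;
import java.io.InputStream;

public class XorInputStream extends InputStream {
    private final InputStream in;

    public XorInputStream(InputStream in) {
        this.in = in;
    }

    @Override
    public int read() throws IOException {
        int value = in.read();
        return value < 0 ? -1 : value ^ (1 << 7); // never XOR the EOF marker
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        int n = in.read(b, off, len);
        for (int i = 0; i < n; i++) {   // touch only the bytes actually read
            b[off + i] ^= (1 << 7);
        }
        return n;                       // may be -1 at EOF; pass it through
    }

    @Override
    public void close() throws IOException {
        in.close();
    }
}
```

With the bulk read correct, BitmapFactory.decodeStream can consume the stream in large chunks instead of byte-by-byte, which is also noticeably faster.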
Bitmap bitmap = null;
try {
    InputStream i = CryptoProvider.GetInstance().GetDecoderStream(path);
    bitmap = BitmapFactory.decodeStream(i);
    i.close();
} catch (Exception e1) {
    _logger.Error("Cant load image " + path + " ERROR " + e1);
}
if (bitmap == null) {
    _logger.Error("Image is NULL for path " + path);
}
return bitmap;
Do you have any feedback on the chosen approach? Any way to optimize it, or a completely different approach for Android devices?
You could try XORing the bytestream with the output of a fast PRNG. Just use a different seed for each file and you're done.
note: As already noted in the question, such methods are trivial to bypass.
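A minimal sketch of that idea in plain Java, using java.util.Random as the "fast PRNG" (an assumption; any keyed generator would do) with a per-file seed, e.g. derived from the file name. Since XOR is symmetric, the same function both obfuscates and restores:

```java
import java.util.Random;

public class PrngXor {
    // XOR data against a PRNG keystream derived from seed.
    // Applying it twice with the same seed restores the original bytes.
    public static byte[] apply(byte[] data, long seed) {
        Random prng = new Random(seed); // NOT cryptographically secure; by design
        byte[] out = data.clone();
        for (int i = 0; i < out.length; i++) {
            out[i] ^= (byte) prng.nextInt(256);
        }
        return out;
    }
}
```

As the note above says, this is trivial to bypass for anyone who reads the APK; it only defeats the media indexer and casual copying, which matches the stated goal.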