The problem
Hi there,
I'm developing an application where the user specifies some pictures and how long each one stays on the screen. So sometimes they want to create something like a small animation, or to show each image for a very short time. The problem is that after a while the images are not shown when they should be, and we accumulate a few ms of error. Timing matters in this application, so I would appreciate some help figuring out what the problem might be.
The code
So let me explain how it works. I take the pictures from my web app and then I save them in a HashMap:
Bitmap image = ImageOperations(url,String.valueOf(frameNum) + ".jpg");
ImageMap.put(String.valueOf(frameNum), image);
where the method ImageOperations looks like this:
private Bitmap ImageOperations(String url, String saveFilename) {
    try {
        Display display = getWindowManager().getDefaultDisplay();
        InputStream is = (InputStream) this.fetch(url);
        Bitmap theImage = BitmapFactory.decodeStream(is);
        if (theImage.getHeight() >= 700 || theImage.getWidth() >= 700) {
            theImage = Bitmap.createScaledBitmap(theImage,
                    display.getWidth(), display.getHeight() - 140, true);
        }
        return theImage;
    } catch (MalformedURLException e) {
        e.printStackTrace();
        return null;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
Later I run a thread that updates the UI at the times the user specified. The method that updates it is this one:
public void setPictures(int NumOfFrame) {
    if (!frameArray.get(NumOfFrame - 1).frame_pic.contains("n/a")) {
        ImagePlace.setImageBitmap(ImageMap.get(String.valueOf(NumOfFrame)));
    } else {
        ImagePlace.setImageDrawable(null);
    }
}
After we update the image we put the thread to sleep, and when it wakes up it performs the next update. Is there something here that creates the problem? Does it have to do with garbage collection?
Thank you in advance
The issue is probably heap growth as the additional images are loaded. I would suggest you do some profiling so things become much clearer and you get a full picture of the app's timings.
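As a minimal first step, a crude timing probe (just a sketch, assuming the setPictures call from your question) will show whether the lost milliseconds are in the UI update itself or in the sleep-based scheduling:

// log how long each frame update actually takes; if updates are fast but
// frames still drift, the sleep-based scheduling is the suspect
long t0 = System.nanoTime();
setPictures(frameNum);
long t1 = System.nanoTime();
Log.d("FrameTiming", "frame " + frameNum + " took " + ((t1 - t0) / 1e6) + " ms");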
First, you are missing a null check here:

ImageMap.get(String.valueOf(NumOfFrame))

And you do not recycle the old bitmap here:

Bitmap scaled = Bitmap.createScaledBitmap(theImage,
        display.getWidth(), display.getHeight() - 140, true);
theImage.recycle(); // free the full-size bitmap once the scaled copy exists
theImage = scaled;

This may lead to OutOfMemoryError, which is the most likely cause given your description of the problem.

Also, BitmapFactory.decodeStream does not throw when it fails to decode; it returns null, so you need a null check there too.
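Putting both fixes together, a minimal sketch of the method from your question (keeping your fetch(url) helper; note that MalformedURLException is a subclass of IOException, so one catch is enough):

private Bitmap ImageOperations(String url, String saveFilename) {
    try {
        Display display = getWindowManager().getDefaultDisplay();
        InputStream is = (InputStream) this.fetch(url);
        Bitmap decoded = BitmapFactory.decodeStream(is);
        if (decoded == null) {
            // decodeStream signals failure by returning null, not by throwing
            return null;
        }
        if (decoded.getHeight() >= 700 || decoded.getWidth() >= 700) {
            Bitmap scaled = Bitmap.createScaledBitmap(decoded,
                    display.getWidth(), display.getHeight() - 140, true);
            if (scaled != decoded) {
                decoded.recycle(); // free the full-size pixels once the scaled copy exists
            }
            return scaled;
        }
        return decoded;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}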
Related
I am working on an app that uses a RecyclerView to display mp3 files, showing the cover art image along with other info. It works, but it gets slow once it has a dozen or more cover arts to retrieve, as I am currently reading them from the ID3 tags on the main thread, which I know is not a good idea.

Ideally, I would work with placeholders so that the images can be added as they become available. I've been looking into moving the retrieval to a background thread and have looked at different options: AsyncTask, Service, WorkManager. AsyncTask seems not to be the way to go, as I face memory leaks (I need a Context to retrieve the cover art through MediaMetadataRetriever), so I am leaning away from it. Yet I am struggling to figure out which approach is best in my case.
From what I understand I need to find an approach that allows multithreading and also a means to cancel the retrieval in case the user has already moved on (scrolling or navigating away). I am already using Glide, which I understand should help with the caching.
I know I could rework the whole approach and provide the cover art as images separately, but that seems a last resort to me, as I would rather not weigh down the app with even more data.
The current version of the app is here (please note it will not run as I cannot openly divulge certain aspects). I am retrieving the cover art as follows (on the main thread):
public static Bitmap getCoverArt(Uri medUri, Context ctxt) {
    MediaMetadataRetriever mmr = new MediaMetadataRetriever();
    mmr.setDataSource(ctxt, medUri);
    byte[] data = mmr.getEmbeddedPicture();
    if (data != null) {
        return BitmapFactory.decodeByteArray(data, 0, data.length);
    } else {
        return null;
    }
}
I've found many examples with AsyncTask, or with the MediaMetadataRetriever kept on the main thread, but I have yet to find one that retrieves a dozen or more cover arts without slowing down the main thread. I would appreciate any help and pointers.
It turns out it does work with AsyncTask, as long as it is not a class unto itself but set up and called from a class with a Context. Here is a whittled-down version of my approach (I am calling this from within my Adapter):
//set up titles and placeholder image so we needn't wait on the image to load
titleTv.setText(selectedMed.getTitle());
subtitleTv.setText(selectedMed.getSubtitle());
imageIv.setImageResource(R.drawable.ic_launcher_foreground);
imageIv.setAlpha((float) 0.2);
final long[] duration = new long[1];
//a caching system that helps reduce the amount of loading needed. See: https://github.com/cbonan/BitmapFun?files=1
if (lruCacheManager.getBitmapFromMemCache(selectedMed.getId() + position) != null) {
    //is there an earlier cached image to reuse?
    imageIv.setImageBitmap(lruCacheManager.getBitmapFromMemCache(selectedMed.getId() + position));
    imageIv.setAlpha((float) 1.0);
    titleTv.setVisibility(View.GONE);
    subtitleTv.setVisibility(View.GONE);
} else {
    //time to load and show the image. For good measure, the duration is also queried,
    //as it too needs setDataSource, which causes the slowdown
    new AsyncTask<Uri, Void, Bitmap>() {
        @Override
        protected Bitmap doInBackground(Uri... uris) {
            MediaMetadataRetriever mmr = new MediaMetadataRetriever();
            mmr.setDataSource(ctxt, medUri);
            byte[] data = mmr.getEmbeddedPicture();
            Log.v(TAG, "async data: " + Arrays.toString(data));
            String durationStr = mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
            duration[0] = Long.parseLong(durationStr);
            if (data != null) {
                InputStream is = new ByteArrayInputStream(data);
                return BitmapFactory.decodeStream(is);
            } else {
                return null;
            }
        }

        @Override
        protected void onPostExecute(Bitmap bitmap) {
            super.onPostExecute(bitmap);
            durationTv.setVisibility(View.VISIBLE);
            durationTv.setText(getDisplayTime(duration[0], false));
            if (bitmap != null) {
                imageIv.setImageBitmap(bitmap);
                imageIv.setAlpha((float) 1.0);
                titleTv.setVisibility(View.GONE);
                subtitleTv.setVisibility(View.GONE);
            } else {
                titleTv.setVisibility(View.VISIBLE);
                subtitleTv.setVisibility(View.VISIBLE);
            }
            lruCacheManager.addBitmapToMemCache(bitmap, selectedMed.getId() + position);
        }
    }.execute(medUri);
}
I have tried working with Glide for the caching, but I haven't been able to link the showing/hiding of the TextViews to whether there is a bitmap. In a way, though, this is sleeker, as I don't need to pull in the bulk of the Glide library. So I am happy with this for now.
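As an aside for future readers: AsyncTask has since been deprecated. Here is a minimal sketch of the same pattern with a plain ExecutorService and a main-thread Handler (the CoverArtLoader name and the pool size are illustrative; the returned Future is what a rebound ViewHolder would use to cancel a stale request):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.MediaMetadataRetriever;
import android.net.Uri;
import android.os.Handler;
import android.os.Looper;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: a shared thread pool plus a main-thread Handler
// stand in for AsyncTask; keep the Future so a recycled row can cancel it.
class CoverArtLoader {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    interface Callback {
        void onLoaded(Bitmap bitmap); // invoked on the main thread; may be null
    }

    Future<?> load(Context ctxt, Uri medUri, Callback callback) {
        return pool.submit(() -> {
            MediaMetadataRetriever mmr = new MediaMetadataRetriever();
            Bitmap bmp = null;
            try {
                mmr.setDataSource(ctxt, medUri);
                byte[] data = mmr.getEmbeddedPicture();
                if (data != null) {
                    bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
                }
            } finally {
                try { mmr.release(); } catch (Exception ignored) { }
            }
            final Bitmap result = bmp;
            mainHandler.post(() -> callback.onLoaded(result));
        });
    }
}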
I need to get the preview image data from the Android phone camera and publish it via ROS. Here is my sample code:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (data != null) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        if (yuvImage != null) {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ChannelBufferOutputStream stream = new ChannelBufferOutputStream(MessageBuffers.dynamicBuffer());
            yuvImage.compressToJpeg(new Rect(0, 0, yuvImage.getWidth(), yuvImage.getHeight()), 80, baos);
            yuvImage = null;
            stream.buffer().writeBytes(baos.toByteArray());
            try {
                baos.flush();
                baos.close();
                baos = null;
            } catch (IOException e) {
                e.printStackTrace();
            }
            // compressedImage type
            sensor_msgs.CompressedImage compressedImage = compressedImagePublisher.newMessage();
            compressedImage.getHeader().setFrameId("xxx"); // frame id
            Time curTime = connectedNode.getCurrentTime();
            compressedImage.getHeader().setStamp(curTime); // time
            compressedImage.setFormat("jpeg"); // format
            compressedImage.setData(stream.buffer().copy()); // data
            stream.buffer().clear();
            try {
                stream.flush();
                stream.close();
                stream = null;
            } catch (IOException e) {
                e.printStackTrace();
            }
            // publish
            System.out.println("-----Publish: " + compressedImage.getData().array().length + "-----");
            compressedImagePublisher.publish(compressedImage);
            compressedImage = null;
            System.gc();
        } else {
            Log.v("Log_Tag", "-----Failed to get yuvImage!-----");
        }
    } else {
        Log.v("Log_Tag", "-----Failed to get the preview frame!-----");
    }
}
Then I subscribed to the topic, just to check whether the messages had been published completely and correctly, like this:
@Override
public void onStart(ConnectedNode node) {
    this.connectedNode = node;
    // publisher
    this.compressedImagePublisher = connectedNode.newPublisher(topic_name, sensor_msgs.CompressedImage._TYPE);
    // subscriber
    this.compressedImageSubscriber = connectedNode.newSubscriber(topic_name, sensor_msgs.CompressedImage._TYPE);
    compressedImageSubscriber.addMessageListener(new MessageListener<CompressedImage>() {
        @Override
        public void onNewMessage(final CompressedImage compressedImage) {
            byte[] receivedImageBytes = compressedImage.getData().array();
            if (receivedImageBytes != null && receivedImageBytes.length != 0) {
                System.out.println("-----Subscribe(+46?): " + receivedImageBytes.length + "-----");
                // decode bitmap from byte[], with a strange but necessary offset
                Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
                ...
            }
        }
    });
}
I'm confused about this offset. It means the size of the image bytes changed after being packaged and published by ROS, and if I don't set the offset, decoding the bitmap fails. Stranger still, the offset sometimes changes from message to message.

I don't know why. I have read some articles about the JPEG file structure and suspect it may be the header information of the JPEG byte message. However, this problem only happens in the ros-android scene.

Does anyone have a good idea about this?
OK, I know the question and problem as described previously were terrible; that's why it got two downvotes. I'm sorry about that, and I'll make up for it by giving you more information and making the problem clearer now.

First, forget all the code I pasted before. The problem happened in my ros-android project. In this project, I need to send sensor messages of the compressed image type to the ROS server and get the processed image bytes (JPEG format) back, in a publish/subscribe way. In theory the size of the image bytes should stay the same, and in fact this held in my ros-C and ros-C# projects under the same conditions.

However, in ros-android they differ: the array gets bigger! For this reason I can't just decode a bitmap from the image bytes I subscribed to; I have to skip the extra bytes with an offset into the image byte array. I don't know why this happens in ros-android or ros-java, or what the added part is.

I can't find the reason in my code, which is why I pasted it for you in such detail. I really need help! Thanks in advance!
Maybe I really should have checked the API before asking here. This would have been a simple question if I had checked the properties of ChannelBuffer in the API, because the arrayOffset property already holds the answer!

Still, for the sake of being thorough, and because some of you may want a clarification of my answer, let me explain in detail.

First, I have to say that I still don't know how the ChannelBuffer packages the image byte array, which means I still don't know why there is an arrayOffset before the array data. Why can't we just get the data? Is there really some important reason for the arrayOffset to exist, for safety or for efficiency? I don't know; I couldn't find an answer in the API. I'm tired of this question now, and whether you downvote it or not, just let it go!
Back to the subject: the problem can be solved this way:
int offset = compressedImage.getData().arrayOffset();
Bitmap bmp = BitmapFactory.decodeByteArray(receivedImageBytes, offset, receivedImageBytes.length - offset);
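If it helps future readers: in the Netty buffers that rosjava uses, array() returns the buffer's whole backing array, which can be shared and larger than this one message's payload; arrayOffset() is simply the index where this buffer's first byte sits inside that backing array, which would also explain why the offset varies from message to message.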
I still hope that someone with good knowledge of this can tell me why; I would really appreciate it! And if you are as tired of this question as I am, let's just vote to close it; I'd appreciate that too. Thanks anyway!
I have an array of ParseObjects being displayed in a ListView. Three text views are loaded into my custom cell, and there is also an image in a ParseFile that should be loaded. With the code I have, the first cell loads correctly, but in every other cell the image doesn't load. Here's my code:
this.origImage = (ParseFile) posts.get(position).get("image");
try {
    Log.d("MyMessage", "Gonna convert image");
    this.imageData = this.origImage.getData();
    this.options = new BitmapFactory.Options();
    this.options.inDither = true;
    this.options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    this.options.inSampleSize = 8;
    this.notConvertedYet = BitmapFactory.decodeByteArray(this.imageData, 0, this.imageData.length, this.options);
    if (this.notConvertedYet != null)
        this.myBitmap = rotateImage(90, this.notConvertedYet);
    else
        this.myBitmap = this.notConvertedYet;
    mHolder.picImageView.setImageBitmap(this.myBitmap);
    Log.d("MyMessage", "Converted image");
} catch (ParseException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
What is happening that's messing it up?
When it's bound to a list adapter, what does

posts.get(position).get("image");

actually do? Is the Parse SDK running a series of AsyncTasks for the network HTTP GET on the actual URL in Parse's file CDN? You may want to find out more about what it's doing, because as-is it may not be very efficient when used from Adapter.getView().

Any image loader framework (Volley, Universal Image Loader, AQuery) works like an API where you make a call providing the ImageView and the CDN URL for the image as params. The framework handles multithreading, pooled Parse.com connections, memCache, and fileCache for all images. When using Parse to store bitmaps in the file CDN, you can still use any of the image loading frameworks.
An AQuery sample in an adapter:

ImageView thumbnail = (ImageView) vi.findViewById(R.id.imageView1); // thumb image
mAquery.id(thumbnail).width(110).image($thumbUrl_CDN,
        true, true, 0, R.drawable.default_video, null, 0, mAspectRatio);
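For comparison, a sketch of the same bind with Picasso, assuming the ParseFile exposes its CDN URL via getUrl() (it does in the Parse Android SDK) and reusing the placeholder drawable from the AQuery sample:

// one call per bind: Picasso handles the thread pool, memory/disk cache,
// and cancels the request when the row's ImageView is rebound
ParseFile imageFile = (ParseFile) posts.get(position).get("image");
Picasso.with(context)
       .load(imageFile.getUrl())
       .placeholder(R.drawable.default_video)
       .into(mHolder.picImageView);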
Your code looks OK. Maybe the reason all the ImageViews are black is that the adapter did not have time to load the files across the network, since the loop did not block the UI thread?
I have asked similar questions a few times and still have not resolved my issue, so I thought I'd come at it another way and see if anyone can help me out.
I am writing a game for Android; this is my first attempt at a program this large/complex. The game is a 2d action/puzzler, and I am using Canvas rather than OpenGL ES for drawing.
Everything is going well, except when I try to draw PNGs/BMPs/JPGs, that is, any images I imported myself. I can draw shapes and animations using the built-in Android Canvas drawShape methods (including many Paints with alpha) till the cows come home and maintain over 60 fps, but as soon as I try to add my own image (say, a 60 kB PNG saved from Photoshop) I immediately see a major performance hit. The bigger the PNG is on the screen, the bigger the hit (which makes sense).
I have a simple question that may help me understand if I'm doing something wrong here:
If I use the Canvas draw method to draw a red rectangle on the screen, should I expect to be able to instead import and display a red rectangle image of the same dimensions without a loss in performance? I have done a lot of research on this issue, but it is still not clear to me why Android (or the Nexus 7) would have such a hard time with my images.
Is Canvas the problem? Do I need to port to libGDX or AndEngine (that would be quite a process, I think)?
If it helps, this is how I'm loading my assets:
@Override
public Image newImage(String fileName, ImageFormat format) {
    Config config = null;
    if (format == ImageFormat.RGB565)
        config = Config.RGB_565;
    else if (format == ImageFormat.ARGB4444)
        config = Config.ARGB_4444;
    else
        config = Config.ARGB_8888;

    Options options = new Options();
    options.inPreferredConfig = config;

    InputStream in = null;
    Bitmap bitmap = null;
    try {
        in = assets.open(fileName);
        bitmap = BitmapFactory.decodeStream(in, null, options);
        if (bitmap == null)
            throw new RuntimeException("Couldn't load bitmap from asset '"
                    + fileName + "'");
    } catch (IOException e) {
        throw new RuntimeException("Couldn't load bitmap from asset '"
                + fileName + "'");
    } finally {
        if (in != null) {
            try {
                in.close();
            } catch (IOException e) {
            }
        }
    }

    if (bitmap.getConfig() == Config.RGB_565)
        format = ImageFormat.RGB565;
    else if (bitmap.getConfig() == Config.ARGB_4444)
        format = ImageFormat.ARGB4444;
    else
        format = ImageFormat.ARGB8888;

    return new AndroidImage(bitmap, format);
}
Canvas is slower than GLSurfaceView, but a Nexus 7 should handle 300 bitmaps on one screen without breaking a sweat. Look for your performance problem in onDraw.

Watch this for tips and tricks on writing real-time games for Android:
http://www.google.com/events/io/2010/sessions/writing-real-time-games-android.html
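One common thing to check there, as a hedged illustration: whether bitmaps are being created or resampled inside the draw loop. A minimal sketch that scales once up front and then blits 1:1 in onDraw (the class and field names are illustrative):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// Illustrative sketch: pre-scale the bitmap once; a 1:1 drawBitmap in onDraw
// is a plain pixel copy, while scaling per frame (src/dst Rects, or a bitmap
// config that mismatches the window surface) redoes the resampling every draw.
class SpriteView extends View {
    private final Bitmap scaledSprite;

    SpriteView(Context context, Bitmap raw, int targetW, int targetH) {
        super(context);
        scaledSprite = Bitmap.createScaledBitmap(raw, targetW, targetH, true);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // no allocation and no scaling on the hot path
        canvas.drawBitmap(scaledSprite, 0f, 0f, null);
    }
}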
My question is how to handle an out-of-memory error when decoding a byte array into a bitmap so I can rotate it. My code is as follows, and before you say it's a duplicate: I have tried using BitmapFactory.Options and setting the sample size to 2, but the quality loss was far too bad to be acceptable. It also appears to only be happening on one device, so maybe it's a one-off, but I'm inclined to believe that if it affects one device, there will be 25 more like it later. This happens on the FIRST photo taken, and this is the only bitmap work this activity does. While I'm working in Monodroid, Java answers are welcome too, as I can usually translate them to C# fairly easily.
public void GotImage(byte[] image)
{
    try
    {
        Android.Graphics.Bitmap thePicture = Android.Graphics.BitmapFactory.DecodeByteArray(image, 0, image.Length);
        Array.Clear(image, 0, image.Length);
        image = null;
        GC.Collect();
        Android.Graphics.Matrix m = new Android.Graphics.Matrix();
        m.PostRotate(90);
        Android.Graphics.Bitmap rotatedPicture = Android.Graphics.Bitmap.CreateBitmap(thePicture, 0, 0, thePicture.Width, thePicture.Height, m, true);
        thePicture.Dispose();
        thePicture = null;
        GC.Collect();
        using (MemoryStream ms = new MemoryStream())
        {
            rotatedPicture.Compress(Android.Graphics.Bitmap.CompressFormat.Jpeg, 100, ms);
            image = ms.ToArray();
        }
        rotatedPicture.Dispose();
        rotatedPicture = null;
        GC.Collect();
        listOfImages.Add(image);
        storeButton.Text = " Store " + listOfImages.Count + " Pages ";
        storeButton.Enabled = true;
        takePicButton.Enabled = true;
        gotImage = false;
        cameraPreviewArea.camera.StartPreview();
    }
    catch (Exception ex)
    {
        AlertDialog.Builder alertDialog = new AlertDialog.Builder(this);
        alertDialog.SetTitle("Error Taking Picture");
        alertDialog.SetMessage(ex.ToString());
        alertDialog.SetPositiveButton("OK", delegate { });
        alertDialog.Show();
    }
}
What does rotatedPicture.Dispose() actually do? Does it just set the reference to null? The best and quickest way to free a Bitmap's memory is its recycle() method.
Well, after a long day of learning, I discovered a fix/workaround: set the resolution of the picture before it is taken, instead of trying to scale it down after the fact. I also added a setting that lets the user try different resolutions until they find one that works best for their device.
Camera.Parameters parameters = camera.GetParameters();
parameters.SetPictureSize(parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Width,
parameters.SupportedPictureSizes[parameters.SupportedPictureSizes.Count - 1].Height);
camera.SetParameters(parameters);
camera.StartPreview();
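One caution on this, as a hedged note: the ordering of the supported-sizes list is not guaranteed by the API, so indexing the last element may not give the smallest size on every device. A small Java sketch (Java answers were welcomed above) that picks the smallest size explicitly:

// the list order of supported sizes is device-dependent, so scan for the
// smallest by pixel count instead of assuming it is first or last
Camera.Parameters parameters = camera.getParameters();
Camera.Size smallest = null;
for (Camera.Size s : parameters.getSupportedPictureSizes()) {
    if (smallest == null || s.width * s.height < smallest.width * smallest.height) {
        smallest = s;
    }
}
parameters.setPictureSize(smallest.width, smallest.height);
camera.setParameters(parameters);
camera.startPreview();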