Display a stream of bitmaps at 60fps smoothly on Android 4.x - android

(This is due to the limitations of the server software I will be using; if I could change it, I would).
I am receiving a sequence of 720x480 JPEG files (about 6kb in size), over a socket. I have benchmarked the network, and have found that I am capable of receiving those JPEGs smoothly, at 60FPS.
My current drawing operation is on a Nexus 10 display of 2560x1600, and here's my decoding method, once I have received the byte array from the socket:
public static void decode(byte[] tmp, Long time) {
    try {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inPreferQualityOverSpeed = false;
        options.inDither = false;
        Bitmap bitmap = BitmapFactory.decodeByteArray(tmp, 0, tmp.length, options);
        Bitmap background = Bitmap.createScaledBitmap(
                bitmap, MainActivity.screenwidth, MainActivity.screenheight, false);
        background.setHasAlpha(false);
        Canvas canvas = MainActivity.surface.getHolder().lockCanvas();
        canvas.drawColor(Color.BLACK);
        canvas.drawBitmap(background, 0, 0, new Paint());
        MainActivity.surface.getHolder().unlockCanvasAndPost(canvas);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
As you can see, I am clearing the canvas from a SurfaceView and then drawing the Bitmap to the SurfaceView. My issue is that it is very, very slow.
Some tests based on adding System.currentTimeMillis() before and after the lock operation result in approximately a 30ms difference between getting the canvas, drawing the bitmap, and then pushing the canvas back. The displayed SurfaceView is very laggy, sometimes it jumps back and forth, and the frame rate is terrible.
Is there a preferred method for drawing like this? Again, I can't modify what I'm getting from the server, but I'd like the bitmaps to be displayed at 60FPS when possible.
(I've tried setting the contents of an ImageView, and am receiving similar results). I have no other code in the SurfaceView that could impact this. I have set the holder to the RGBA_8888 format:
getHolder().setFormat(PixelFormat.RGBA_8888);
Is it possible to convert this stream of Bitmaps into a VideoView? Would that be faster?
Thanks.

Whenever you run into performance questions, use Traceview to figure out exactly where your problem lies. Using System.currentTimeMillis() is like attempting to trim a steak with a hammer.
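For example, a minimal way to capture a trace for Traceview around the decode path (the trace file name here is arbitrary, and Debug.stopMethodTracing() must be reached for the file to be written):

import android.os.Debug;

// Wrap the suspect code; the resulting .trace file can be opened in Traceview
Debug.startMethodTracing("decode_frame");
decode(tmp, time);              // the method from the question
Debug.stopMethodTracing();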
The #1 thing here is to get the bitmap decoding off the main application thread. Do that in a background thread. Your main application thread should just be drawing the bitmaps, pulling them off of a queue populated by that background thread. Android has the main application thread set to render on a 60fps basis as of Android 4.1 (a.k.a., "Project Butter"), so as long as you can draw your Bitmap in a couple of milliseconds, and assuming that your network and decoding can keep your queue current, you should get 60fps results.
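A minimal sketch of that producer/consumer split (readNextJpegFromSocket() and holder are placeholders, not from your code):

private final BlockingQueue<Bitmap> frameQueue = new ArrayBlockingQueue<Bitmap>(3);

// Background thread: receive and decode, never touching the UI
Thread decoder = new Thread(new Runnable() {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            byte[] jpeg = readNextJpegFromSocket();   // placeholder for your socket read
            Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
            frameQueue.offer(frame);                  // drop the frame if the queue is full
        }
    }
});
decoder.start();

// Drawing side: only dequeue and draw (scaling is discussed below)
Bitmap next = frameQueue.poll();                      // null if no frame is ready yet
if (next != null) {
    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        canvas.drawBitmap(next, 0, 0, null);
        holder.unlockCanvasAndPost(canvas);
    }
}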
Also, always use inBitmap with BitmapFactory.Options on Android 3.0+ when you have images of consistent size, as part of your problem will be GC stealing CPU time. Work off a pool of Bitmap objects that you rotate through, so that you generate less garbage and do not fragment your heap so much.
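A hedged sketch of that reuse via inBitmap (firstJpeg and nextJpeg are placeholders; on Android 3.x/4.x the reuse only works when every frame decodes to exactly the same dimensions):

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inMutable = true;                    // the first decode produces a reusable, mutable bitmap
Bitmap reusable = BitmapFactory.decodeByteArray(firstJpeg, 0, firstJpeg.length, opts);

// Every later frame decodes into the same pixel memory instead of allocating a new bitmap
opts.inBitmap = reusable;
Bitmap frame = BitmapFactory.decodeByteArray(nextJpeg, 0, nextJpeg.length, opts);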
I suspect that you are better served letting Android scale the image for you in an ImageView (or just by drawing to a View canvas) than you are in having BitmapFactory scale the image, as Android can take advantage of hardware graphics acceleration for rendering, which BitmapFactory cannot. Again, Traceview is your friend here.
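For instance, instead of createScaledBitmap, the decoded 720x480 frame can be drawn straight into a destination rectangle the size of the canvas (a sketch, using the same placeholder holder as above):

Canvas canvas = holder.lockCanvas();
if (canvas != null) {
    Rect dst = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
    canvas.drawBitmap(frame, null, dst, null);   // null src rect = use the whole bitmap
    holder.unlockCanvasAndPost(canvas);
}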
With regards to:
and have found that I am capable of receiving those JPEGs smoothly, at 60FPS.
that will only be true sometimes. Mobile devices tend to be mobile. Assuming that by "6kb" you mean 6KB (six kilobytes), you are assuming a ~3Mbps (three megabits per second) connection, and that's far from certain.
With regards to:
Is it possible to convert this stream of Bitmaps into a VideoView?
VideoView is a widget that plays videos, and you do not have a video.
Push come to shove, you might need to drop down to the NDK and do this in native code, though I would hope not.

Related

How to improve OpenCV face detection performance in android?

I am working on a project in Android in which I am using OpenCV to detect faces in all of the images in the gallery. The face detection runs in a service, which keeps working until all the images have been processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        defaultBitmap = BitmapFactory.decodeFile(file.getAbsolutePath(), bitmapFactoryOptions);
        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());
        Mat image = new Mat(defaultBitmap.getHeight(), defaultBitmap.getWidth(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);
        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20, 20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        if (faceDetections.toArray().length > 0)
        {
            // store the detected faces / update the grid view
        }
    }
}
Everything works, but it is detecting faces very slowly; the performance is poor. When I debugged the code, I found that the line taking most of the time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem, but I didn't find a solution.
Please tell me what I should do to solve this problem.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid. It is necessary because the model has a fixed size during training. Without the pyramid, the only detectable size would be this fixed one (which can also be read from the XML). However, face detection can be made scale-invariant by using a multi-scale representation, i.e., detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector, but in fact, you need to set it as high as possible while still getting "good" results, so this should be determined empirically.
Your value of 1.1 can be a good value for this purpose. It means a relatively small step is used for resizing (the size is reduced by 10%), which increases the chance that a size matching the model is found. If your trained detector has the size 10x10, then you can detect faces of size 11x11, 12x12 and so on. However, a factor of 1.1 requires roughly double the number of pyramid layers (and about 2x the computation time) compared to 1.2.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have to retain it.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then rescale and search again, until you cannot rescale any further. In every iteration the true outputs (of the cascade classifier) are stored, but unfortunately it also detects many false positives. To eliminate the false positives and get the proper face rectangle out of the detections, a neighbourhood approach is applied. 3-6 is a good value for it. If the value is too high, you can lose true positives too.
minSize – Related to the sliding-window approach described under minNeighbors, this is the smallest window that the cascade can detect. Objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
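Putting the parameter guidance above together, a tuned call might look like this (the values are illustrative starting points, not the only correct ones):

MatOfRect faces = new MatOfRect();
mJavaDetector.detectMultiScale(
        image,                                    // input Mat, ideally already grayscale
        faces,
        1.2,                                      // scaleFactor: coarser pyramid than 1.1, roughly half the layers
        4,                                        // minNeighbors: somewhere in the 3-6 range
        0,                                        // flags, unused by newer cascades
        new Size(40, 40),                         // minSize: skip very small faces to save time
        new Size(image.width(), image.height())); // maxSize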
Finally you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar's, but also less accurate.
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
Instead of reading the images as a Bitmap and then converting them to a Mat via Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath); You can check here for the imread() function.
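For example (OpenCV 2.4.x Java bindings; the grayscale flag also saves the classifier a colour conversion):

Mat image = Highgui.imread(imagepath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);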
Also, the line below takes too much time because the detector is looking for faces of at least Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));

Resize an image on Android

After googling a lot, I have not yet found a way to resize an image while preserving quality.
I have my image - stored by camera in full resolution - in
String filePath = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES) + "/my_directory/my_file_name.jpg";
Now, I need to resize it preserving aspect ratio and then save to another path.
What's the best way to do this without getting the error "Out of memory on a xxxxxxx-byte allocation."?
I keep getting this error on Samsung devices; I have tried everything, even the Picasso library.
Thanks!
1st things 1st: depending on the device and bitmap size, no matter what magic code you write, it will crash! Especially on cheap Samsung phones, which usually give the VM no more than 16 MB of heap.
You can use the code from How to get current memory usage in android? to check the amount of memory available and deal with it properly.
When doing those calculations, remember that bitmaps are uncompressed images; that means that even though the JPG might be 100 KB, the Bitmap might take several MB.
You'll use the code shown here https://developer.android.com/training/displaying-bitmaps/load-bitmap.html to read the bitmap boundaries, and do an approximate scale down as close as possible to the size you actually need, or enough to make the device not crash. That's why it's important to properly measure the memory.
That first decode takes virtually no RAM, as it creates the bitmap straight from disk and makes it smaller by simply skipping pixels from the image. That's why it's approximate: it only scales in powers of 2.
Then you'll use the standard API to scale down to the size you actually need https://developer.android.com/reference/android/graphics/Bitmap.html#createScaledBitmap(android.graphics.Bitmap, int, int, boolean)
so the pseudo-code for it will be:
try {
    Info info = getImageInfo(file);
    int power2scale = calculateScale(info, w, h);
    Bitmap smaller = preScaleFromDisk(file, power2scale);
    Bitmap bitmap = Bitmap.createScaledBitmap(smaller, w, h, f);
} catch (OutOfMemoryError ooe) {
    // call GC
    // sleep to let GC run
    // try again with a higher power2scale
}
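A more concrete sketch of that pseudo-code, assuming the helpers map onto the standard bounds-only decode plus power-of-2 subsampling from the linked training doc (reqWidth/reqHeight play the role of w and h above):

public static Bitmap decodeScaled(String path, int reqWidth, int reqHeight) {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;              // pass 1: read bounds only, no pixel allocation
    BitmapFactory.decodeFile(path, opts);

    int inSampleSize = 1;                        // power-of-2 pre-scale, as the decoder expects
    while (opts.outWidth / (inSampleSize * 2) >= reqWidth
            && opts.outHeight / (inSampleSize * 2) >= reqHeight) {
        inSampleSize *= 2;
    }

    opts.inJustDecodeBounds = false;             // pass 2: real decode, already subsampled
    opts.inSampleSize = inSampleSize;
    Bitmap rough = BitmapFactory.decodeFile(path, opts);
    return Bitmap.createScaledBitmap(rough, reqWidth, reqHeight, true);
}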

Parallel image detection and camera preview OpenCV Android

I'm using OpenCV to detect an image. Here is my problem: my function detect_image(mRgba) needs some time to perform its operations and return results. While the function is computing, the camera preview is frozen, because it only shows the image when the code reaches return inputFrame.rgba(). I would like to know how to make those operations parallel, so the function computes in the background while the camera preview keeps running at normal speed.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();
    detect_image(mRgba);
    return inputFrame.rgba();
}
To get a taste of parallelization, the simple approach would be to just use an AsyncTask to process your images:
AsyncTask reference page
A more friendly introduction can be found here:
http://android-developers.blogspot.co.il/2010/07/multithreading-for-performance.html
while this:
http://developer.att.com/developer/forward.jsp?passedItemId=11900176
is a nice all-around introduction to multi-threading on Android.
If you want to just get started, a simple algorithm should work like this:
from within your "onCameraFrame" method, check whether an AsyncTask for processing the image is already running
if the answer is "yes", just show mRgba in the preview window and return
if the answer is "no", start a new AsyncTask and let it run detect_image on mRgba, making sure that the results are saved in the onPostExecute method (see the sketch after the timeline below).
With this algorithm, if your system can detect 4 images per second while taking a preview at 60fps (for example), you will be able to get smooth video with a new result roughly every 15 frames on a single-processor device, under the realistic assumption that detect_image is CPU intensive while the camera preview/display are I/O intensive.
Capture: x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x....
Processing: 1.......1.......1.......1.....1.......1....
time ------------------------------------>
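A minimal sketch of that single-task version (detect_image is your method; the field name and result handling are illustrative):

private AsyncTask<Mat, Void, Void> detectTask;

@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    if (detectTask == null || detectTask.getStatus() == AsyncTask.Status.FINISHED) {
        detectTask = new AsyncTask<Mat, Void, Void>() {
            @Override
            protected Void doInBackground(Mat... frames) {
                detect_image(frames[0]);        // heavy work runs off the camera thread
                return null;
            }
            @Override
            protected void onPostExecute(Void result) {
                // publish/store the detection results here
            }
        }.execute(rgba.clone());                // clone so the camera can reuse its buffer
    }
    return rgba;                                // preview frame is returned immediately
}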
Starting with Honeycomb, a more refined approach would be to account for the number of cores in your CPU (multicore phones/tablets are becoming increasingly common) and start N AsyncTasks in parallel (one per core), feeding a different preview image to each one (maybe using a thread pool...).
If you separate the threads by a fixed delay (about the duration of detect_image divided by N), you should get a constant stream of results at a frequency that is a multiple of the single-threaded version.
Capture: x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x.x....
Processing: 1.2.3.4.1.2.3.4.1.2.3.4.1.2.3.4.1.2.3.4....
time ------------------------------------>
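A hedged sketch of that multi-core variant, using a fixed thread pool instead of raw AsyncTasks (the in-flight counter is illustrative bookkeeping, not part of the original answer):

private final int cores = Runtime.getRuntime().availableProcessors();
private final ExecutorService pool = Executors.newFixedThreadPool(cores);
private final AtomicInteger inFlight = new AtomicInteger(0);

@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    if (inFlight.get() < cores) {               // at most one job per core
        inFlight.incrementAndGet();
        final Mat copy = rgba.clone();
        pool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    detect_image(copy);         // results published from here (e.g. via a Handler)
                } finally {
                    inFlight.decrementAndGet();
                }
            }
        });
    }
    return rgba;
}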
Hope this helps

Android Live Wallpaper Animation

What's the best way to display an animation as a live wallpaper? Right now I have a GIF split into 11 PNGs (one per frame), and then I am just doing
public Bitmap frame0;
ArrayList<Bitmap> frameArray = new ArrayList<Bitmap>();
frame0 = BitmapFactory.decodeResource(getResources(), R.drawable.nyancat0);
frame0 = Bitmap.createScaledBitmap(frame0, minWidth, minHeight, true);
frameArray.add(frame0);
Then I just use a for loop to loop through the frames and draw them on a canvas
canvas.drawBitmap(frameArray.get(indexnumber), 0, 0, mPaint);
and then I just increment indexnumber, unless it's at 11, in which case I go back to 1.
That works, but of course, storing that many Bitmaps is very memory inefficient. This stops me from doing multiple layers or other cool effects without lagging and battery drain. Is there a better way to display an animation on the Android Live wallpaper? I tried Movie for displaying the whole GIF but that's not supported for live wallpapers.
How long does the loading of images take? If it's negligible then why not load each image in right before you display it, discarding the old one? That way you only have 1 image in memory at any one stage.
Alternatively, do something akin to using a back buffer: have two spaces in memory, one for the image being displayed now, and another into which you're loading the next image. When it's time to change, you make the newly loaded bitmap visible, unload the other, and then load the next frame into that slot.
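A small sketch of that two-slot idea (minWidth/minHeight come from the question; the method and its resource-id argument are illustrative, and it assumes it runs where getResources() is available, e.g. the wallpaper service):

private Bitmap current;                          // frame currently being drawn
private Bitmap next;                             // frame being prepared

private void advanceFrame(int nextFrameResId) {
    next = BitmapFactory.decodeResource(getResources(), nextFrameResId);
    next = Bitmap.createScaledBitmap(next, minWidth, minHeight, true);

    Bitmap old = current;
    current = next;                              // the new frame becomes visible on the next draw
    next = null;
    if (old != null && old != current) {
        old.recycle();                           // free the pixels of the frame just replaced
    }
}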
Despite what people say, you actually can have a lot of images in your live wallpaper. The only tricky thing is the memory limit. I had as many as 40 PNGs loaded in my application, and I reloaded them once a minute.
But when you're handling that many images in your application, you have to load them in a smart way:
public BitmapResult decodeResource(int file, int scale){
    //Decode image size
    BitmapFactory.Options o = new BitmapFactory.Options();
    o.inPurgeable = true;
    o.inInputShareable = true;
    o.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(resources, file, o);

    BitmapFactory.Options o2 = new BitmapFactory.Options();
    o2.inPreferredConfig = Bitmap.Config.ARGB_8888;
    o2.inSampleSize = scale;
    return new BitmapResult(BitmapFactory.decodeResource(resources, file, o2), o2.outWidth, o2.outHeight);
}
You see that scale variable? It should be a power of 2 and it scales your bitmap down.
In case things go wrong, clean up the bitmaps and reload them at a lower quality:
void init()
{
    try
    {
        loadFirstBitmap();
        loadSecondBitmap();
    }
    catch (java.lang.OutOfMemoryError error)
    {
        /* some infinite loop breaker */
        scale *= 2;
        cleanup();
        init();
    }
}
Also, the system won't get rid of the bitmaps for you; you have to clean them up yourself and then probably call the garbage collector:
bitmap1.recycle();
bitmap2.recycle();
System.gc();
Resizing your bitmaps to the size you need is also a good idea, because otherwise the system would have to scale the bitmap each time you try to draw it, which requires additional memory.
I never figured out what the memory cap for this kind of app is, or whether it is the per-app heap limit that most often equals 24 MB, but I can tell you that my app takes up to 13 MB of memory and no one has ever reported a crash on Android devices >= 2.2.
So if you follow some optimization rules, you can load as much bitmaps in your application as you need.

Android OpenGL buffering and glFlush

So I'm making an Android 2.2 app that uses GLSurfaceView. My question is, since OpenGL tends to buffer commands, does that mean it requires associated memory (e.g. the bitmap in a call to glTexSubImage2D() ) to stick around until it is done? Or does it make itself a copy of any memory needed for buffered commands?
I ask as this code tends to cause a long stall and an eventual crash on hardware (HTC Desire) but not on the emulator:
//bm is a Bitmap stored in a vector from previous commands
//pt is a Point stored at the same time
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, pt.x, pt.y, bm);
bm.recycle();
Now if I add glFlush() like so:
//bm is a Bitmap stored in a vector from previous commands
//pt is a Point stored at the same time
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, pt.x, pt.y, bm);
AGL.glFlush(); //or glFinish
bm.recycle();
It appears to work great. Now is this an actual functionality for glFlush/glFinish, to prevent memory from being cleared out from underneath OpenGL?
Good question. With textures (and vertex buffer objects, incidentally), once you have called the methods that actually load the image data for the texture, you do not need to hold onto the buffer/array you have in client memory.
Don't confuse the rendering command buffer with the memory associated with textures. The texture data is kept in memory for the GPU; it is not part of the command buffer that is being flushed to finally draw.
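As a sketch of the pattern this describes (gl, pt and bm follow the question's naming; textureId is assumed to have been created earlier; the glFinish() line is optional and reflects the question's own finding that flushing before recycle avoided the crash on some hardware):

gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);                 // bind the texture created earlier
GLUtils.texSubImage2D(GL10.GL_TEXTURE_2D, 0, pt.x, pt.y, bm);    // pixel data is handed to the GL driver
gl.glFinish();                                                   // optional, per the question's findings
bm.recycle();                                                    // the client-side Bitmap is no longer needed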
