I'm making a simple game with my friends for Android using the libGDX game engine.
We use the ConeLight object from the box2dlights extension. Our problem is that after adding a ConeLight to the game, our app starts to consume too much battery.
Is there a way to prevent this?
Any help will be appreciated.
Thank you.
Calculating lights is computationally intensive and thus increases energy consumption.
There are a few things you can check to try to minimize that:
Check that you only create those lights once (for example, not in your render method).
Reduce the distance, cone degree, and number of rays.
Do you actually use Box2D, or do you cast shadows? If not, you can get an equivalent effect with light textures.
Beyond that, it may also depend on the Android version or how old the phone is.
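The "create once" point can be sketched like this. The Light class below is a hypothetical stand-in for a box2dlights ConeLight (so the sketch runs without libGDX), but the lifecycle mirrors libGDX's create()/render() methods:

```java
// Sketch of the "create lights once" pattern. Light is a hypothetical
// stand-in for box2dlights' ConeLight; the counter exists only to make
// the pattern verifiable without the libGDX runtime.
public class LightHolder {
    static int created = 0; // counts constructions, to show render() never creates

    static class Light {
        Light() { created++; }
    }

    private Light cone; // created once, reused every frame

    /** Called once, like ApplicationListener.create(). */
    public void create() {
        cone = new Light();
    }

    /** Called every frame, like ApplicationListener.render(). */
    public void render() {
        // Only *use* the existing light here; never construct a new one.
        if (cone == null) throw new IllegalStateException("create() not called");
    }
}
```

If the constructor call were inside render() instead, you would allocate (and eventually dispose) a new light 30-60 times per second, which is exactly the kind of hidden per-frame work that drains batteries.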
I am developing a game for Android. It uses a SurfaceView and draws objects with Bitmaps each frame, redrawing the Bitmap objects only when required.
While using the app I noticed that the battery is draining.
I would like to know if there is a library that can tell me which part of my code is consuming the most battery, or a way to apply a battery-saving strategy in my SurfaceView.
Each Bitmap in the game is recycled as soon as it is no longer needed. Apart from this little trick, I don't know how to optimize my code.
Can anyone suggest what I can do?
Thanks in advance.
There is no exact recipe for minimizing battery usage; it depends on your code, and it mostly comes down to minimizing CPU and network usage. Android Studio already ships with a powerful Profiler: it shows CPU, RAM, network, and energy usage on dashboards, and you can use it to find the heaviest parts of your code. The more you optimize your code, the more efficiently your app will use the battery.
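To complement the Profiler, you can automate battery tests by sampling the battery percentage before and after a timed run and converting it to a drain rate. A small helper for the arithmetic, assuming you already obtained the two level readings (e.g. from Android's BatteryManager):

```java
// Converts two battery-level readings into a drain rate, so different
// builds of the app can be compared with timed test runs.
public class DrainRate {
    /** Percent of battery drained per hour, given two readings and elapsed minutes. */
    public static double percentPerHour(int startLevel, int endLevel, double minutes) {
        if (minutes <= 0) throw new IllegalArgumentException("minutes must be positive");
        return (startLevel - endLevel) * 60.0 / minutes;
    }
}
```

For example, dropping from 100% to 90% over a 30-minute run corresponds to 20% per hour; tracking this number across builds tells you whether an optimization actually helped.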
I'm working on a Google Cardboard game. I have 5-10 sprites, a terrain with 15-30 pieces of bush on it (single bushes, like grass), 15 trees (low poly, <64 vertices each), and a camera (provided by the GVR SDK).
In the Editor the frame rate is good, but when I test it on my Galaxy S6 the FPS is as low as 10-20 (I've attached a script that displays the frame rate on a Text).
Any optimization tips?
Here are the detailed stats in the Unity Editor:
CPU: main 6.0 ms
FPS: 160 (average)
Batches: 117
Tris: 6.1k
Verts: 3.3k
I believe you have used the methods GameObject.Find() and gameObject.GetComponent() a lot at runtime.
This is one of the most common mistakes Unity developers make.
Better not to use them at runtime at all; call them only inside Awake() or Start() methods.
Make all the GetComponent<>() calls you need inside Awake() or Start() and assign the results to member variables. Afterwards, just use those variables at runtime.
This will really increase your game's speed on mobile devices.
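The caching pattern above can be sketched in plain Java. The find() method here is a hypothetical stand-in for an expensive scene-graph lookup like GameObject.Find()/GetComponent(); the point is that the lookup happens once in the init method, never in the per-frame method:

```java
// Sketch of "look up once in Awake/Start, reuse in Update".
// find() stands in for an expensive Unity-style scene lookup.
public class Enemy {
    static int lookups = 0; // counts lookups, to show update() never searches

    // Hypothetical stand-in for GameObject.Find()/GetComponent().
    static Object find(String name) {
        lookups++;
        return new Object();
    }

    private Object playerTransform; // cached once

    /** Like Unity's Awake()/Start(): do the expensive lookup once. */
    void awake() {
        playerTransform = find("Player");
    }

    /** Like Update(): runs every frame, reuses the cached reference. */
    void update() {
        if (playerTransform == null) throw new IllegalStateException("awake() not called");
    }
}
```

Without the cache, a scene with dozens of objects each doing a lookup per frame multiplies into thousands of searches per second, which is exactly what kills frame rate on mobile.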
Make sure your device is not overheating. It might seem obvious, but you can't always tell when it is strapped to your face (I also own an S6).
Also, make sure you are not in energy saver mode; it sounds dumb, but I fell for it already ;)
Among the huge number of things that can ruin the performance of a game on a smartphone are:
A script doing too much work in Update() (especially Instantiate()/Destroy())
Moving static objects (just don't)
High-resolution textures (in my limited experience, anything above 512x512 is); also make sure they are square and their resolution is a power of two
As a side note, GetComponent calls can be an issue; the alternative was already posted by Andrew: just call GetComponent in Start()/Awake() and store the results to use later on.
I am developing an app in which I need to get face landmark points from a live camera, like a mirror cam or makeup cam. I want it to be available for iOS too. Please guide me toward a robust solution.
I have used Dlib and Luxand.
DLIB: https://github.com/tzutalin/dlib-android-app
Luxand: http://www.luxand.com/facesdk/download/
Dlib is slow, with a lag of approximately 2 seconds (please look at the demo video on the Git page), and Luxand is OK but it's paid. My priority is to use an open source solution.
I have also used Google Vision, but it does not offer many face landmark points.
So please give me a solution for making dlib work fast, or any other option, keeping cross-platform support a priority.
Thanks in advance.
You can make Dlib detect face landmarks in real-time on Android (20-30 fps) if you take a few shortcuts. It's an awesome library.
Initialization
Firstly, you should follow all the recommendations in Evgeniy's answer, especially making sure that you only initialize the frontal_face_detector and shape_predictor objects once instead of every frame. The frontal_face_detector will initialize faster if you deserialize it from a file instead of using the get_serialized_frontal_faces() function.
The shape_predictor needs to be initialized from a 100 MB file and takes several seconds. The serialize and deserialize functions are written to be cross-platform and perform validation on the data, which is robust but makes them quite slow. If you are prepared to make assumptions about endianness, you can write your own deserialization function that will be much faster.
The file is mostly made up of matrices of 136 floating point values (about 120,000 of them, meaning 16,320,000 floats in total). If you quantize these floats down to 8 or 16 bits you can make big space savings (e.g. you can store the min value and (max-min)/255 as floats for each matrix and quantize each one separately). This reduces the file size to about 18 MB, and it loads in a few hundred milliseconds instead of several seconds. The decrease in quality from using quantized values seems negligible to me, but YMMV.
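The 8-bit quantization idea can be sketched as follows: per matrix, store the min value and a step of (max-min)/255, map each float to one byte, and reverse the mapping on load. This is a self-contained illustration of the technique, not dlib's actual file format:

```java
// Illustrates 8-bit quantization of a float array: 4 bytes/value -> 1 byte/value,
// plus two floats of metadata (min and step) per array.
public class Quantizer {
    /** Quantize floats to bytes; writes min and step into minStepOut[0..1]. */
    public static byte[] quantize(float[] values, float[] minStepOut) {
        float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
        for (float v : values) { min = Math.min(min, v); max = Math.max(max, v); }
        float step = (max - min) / 255f;
        if (step == 0f) step = 1f; // all values equal; any step works
        byte[] out = new byte[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = (byte) Math.round((values[i] - min) / step);
        }
        minStepOut[0] = min;
        minStepOut[1] = step;
        return out;
    }

    /** Reverse the mapping on load; error is bounded by one step. */
    public static float[] dequantize(byte[] q, float min, float step) {
        float[] out = new float[q.length];
        for (int i = 0; i < q.length; i++) {
            out[i] = min + (q[i] & 0xFF) * step; // & 0xFF: bytes are unsigned here
        }
        return out;
    }
}
```

The round trip loses at most one step of precision per value, which matches the observation above that the quality loss is negligible for this kind of model data.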
Face Detection
You can scale the camera frames down to something small like 240x160 (or whatever, keeping aspect ratio correct) for faster face detection. It means you can't detect smaller faces but it might not be a problem depending on your app. Another more complex approach is to adaptively crop and resize the region you use for face detections: initially check for all faces in a higher res image (e.g. 480x320) and then crop the area +/- one face width around the previous location, scaling down if need be. If you fail to detect a face one frame then revert to detecting the entire region the next one.
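Computing the detection resolution while keeping the aspect ratio correct is a simple proportional scale. A small helper (the 240/480 figures are the example sizes mentioned above, not dlib constants):

```java
// Computes a downscaled frame size for face detection, preserving aspect ratio.
public class DetectSize {
    /** Scale (w, h) so the longer side becomes targetLongSide; never upscales. */
    public static int[] scaled(int w, int h, int targetLongSide) {
        int longSide = Math.max(w, h);
        if (longSide <= targetLongSide) return new int[] {w, h};
        return new int[] {
            Math.round(w * (float) targetLongSide / longSide),
            Math.round(h * (float) targetLongSide / longSide)
        };
    }
}
```

For example, a 480x320 preview frame scaled to a long side of 240 gives 240x160, the size suggested above for fast detection.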
Face Tracking
For faster face tracking, you can run face detections continuously in one thread, and then in another thread track the detected face(s) and perform face feature detections using the tracked rectangles. In my testing, face detection took between 100 and 400 ms depending on the phone (at about 240x160), and I could do 7 or 8 face feature detections on the intermediate frames in that time.
This can get a bit tricky if the face is moving a lot, because when you get a new face detection (which will be from 400 ms ago) you have to decide whether to keep tracking from the newly detected location or from the tracked location of the previous detection. Dlib includes a correlation_tracker, but unfortunately I wasn't able to get it to run faster than about 250 ms per frame, and scaling down the resolution (even drastically) didn't make much of a difference. Tinkering with its internal parameters increased speed but gave poor tracking.
I ended up using a CAMShift tracker based on the chroma UV planes of the preview frames, generating the color histogram from the detected face rectangles. There is an implementation of CAMShift in OpenCV, but it's also pretty simple to roll your own.
Hope this helps. It's mostly a matter of picking the low-hanging fruit for optimization first and continuing until you're happy it's fast enough. On a Galaxy Note 5, Dlib does face + feature detection in about 100 ms, which might be good enough for your purposes even without all this extra complication.
Dlib is fast enough for most cases. Most of the processing time is spent detecting the face region in the image, and that is slow because modern smartphones produce high-resolution images (10MP+).
Yes, face detection can take 2+ seconds on a 3-5MP image, but that is because it tries to find very small faces, down to 80x80 pixels. I am quite sure you don't need such small faces in high-resolution images, so the main optimization here is to reduce the size of the image before finding faces.
After the face region is found, the next step, face landmark detection, is extremely fast and takes under 3 ms per face; this time does not depend on resolution.
The dlib-android port is not using dlib's detector the right way for now. Here is a list of recommendations for making the dlib-android port work much faster:
https://github.com/tzutalin/dlib-android/issues/15
It's very simple and you can implement it yourself. I am expecting a performance gain of about 2x-20x.
Apart from OpenCV and Google Vision, there are widely available web services like Microsoft Cognitive Services. The advantage is that they are completely platform-independent, which you've listed as a major design goal. I haven't personally used them in an implementation yet, but based on playing with their demos for a while they seem quite powerful; they're pretty accurate and can offer quite a few details depending on what you want to know. (There are similar solutions available from other vendors as well, by the way.)
The two major potential downsides to something like that are the added network traffic and the API pricing (depending on how heavily you'll be using them).
Pricing-wise, Microsoft currently offers up to 5,000 transactions a month for free, with added transactions beyond that costing some fraction of a penny each (depending on traffic, you can actually get a discount for high volume), but if you're doing, say, millions of transactions per month, the fees can add up surprisingly quickly. This is actually a fairly typical pricing model. Before you select a vendor or implement this kind of solution, make sure you understand how they're going to charge you, how much you're likely to end up paying, and how much you could be paying as your user base scales. Depending on your traffic and business model, it could be either very reasonable or cost-prohibitive.
The added network traffic may or may not be a problem, depending on how your app is written and how much data you're sending. If you can do the processing asynchronously and be guaranteed reasonably fast Wi-Fi access, that obviously wouldn't be a problem, but unfortunately you may not have that luxury.
I am currently working with the Google Vision API, and it seems able to detect landmarks out of the box. Check out the FaceTracker here:
google face tracker
This solution detects the face, happiness, and the left and right eyes as-is. For other landmarks, you can call getLandmarks() on a Face, and according to their documentation it should return everything you need (though I have not tried it): Face reference
My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing. My question here is: About how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be.
(2) I'm trying to understand what are the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me but: I have been reading about shaders and OpenCL, and I want to understand if I were to unload some of the CPU processing to the GPU, whether that would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that that disabling is not trivial and there are enough potential culprits that I thought I'd ask for general advice first.
Try reading this article:
Android documentation on optimization
What works well for me is reducing the work the garbage collector has to do. For example, when programming for a desktop computer you're (or I'm) used to defining variables inside loops when they are not needed outside the loop; this causes massive garbage collection (and I'm not talking about primitive variables, but big objects).
Try to avoid things like that.
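The point about loop allocations can be illustrated directly. Below, the "bad" version allocates one scratch object per iteration while the "good" one hoists the allocation out and reuses a single instance; both compute the same result. This is a toy example, and on Android the real culprits are usually Bitmaps, Paths, Strings, and similar heavy objects:

```java
// Demonstrates hoisting an allocation out of a hot loop to reduce GC pressure.
public class GcPressure {
    static class Scratch { double[] buf = new double[256]; }

    /** Allocates a Scratch per iteration: lots of garbage for the collector. */
    static int sumBad(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            Scratch s = new Scratch(); // one allocation per iteration
            s.buf[0] = i;
            total += (int) s.buf[0];
        }
        return total;
    }

    /** Hoists the allocation out of the loop: no per-iteration garbage. */
    static int sumGood(int n) {
        int total = 0;
        Scratch s = new Scratch(); // allocated once, reused
        for (int i = 0; i < n; i++) {
            s.buf[0] = i;
            total += (int) s.buf[0];
        }
        return total;
    }
}
```

In a game loop running 30-60 times per second, the "bad" pattern forces frequent GC pauses, which cost both frame time and battery.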
One little tip that really helped me get battery usage (and the warmth of the device!) down was to throttle FPS in my custom OpenGL engine.
Especially while the scene is static (e.g. in a turn-based game, or when the user has tapped pause), throttle the FPS down.
Or throttle when the user hasn't interacted for more than 10 seconds, like a screensaver on a desktop PC. In the real world, users often get distracted while using mobile devices. Don't let your app drain the battery while your user figures out which subway station he's in ;)
Also, on the iPhone 60 FPS is sometimes the default; manually throttling this to 30 FPS is barely noticeable and saves you about half of the GPU cycles (and therefore a lot of battery!).
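A minimal frame limiter is just arithmetic on the frame budget. This sketch computes how long to sleep after each frame's work is done (wiring it into an actual game loop, e.g. via Thread.sleep, is left out):

```java
// Computes the per-frame sleep needed to cap the frame rate at a target FPS.
public class FrameLimiter {
    /** Milliseconds to sleep so each frame lasts at least 1000/targetFps ms. */
    public static long sleepMillis(int targetFps, long frameWorkMillis) {
        long budget = 1000L / targetFps; // ms available per frame
        return Math.max(0, budget - frameWorkMillis); // never negative
    }
}
```

At a 30 FPS cap, a frame whose rendering took 5 ms sleeps for 28 ms; a frame that overran its budget sleeps 0 ms. The device spends the slept time idle instead of burning GPU cycles on frames nobody can perceive.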
Is the technology there for the camera of a smartphone to detect a flashing light and decode it as Morse code, at a maximum distance of 100 m?
There's already at least one app in the iPhone App Store that does this over some unspecified distance. And the camera can detect luminance at a much greater distance, given enough contrast between the on and off light levels, a slow enough dot rate to not alias against the frame rate (remember Nyquist sampling), and maybe a tripod to keep the light centered on some small set of pixels. So the answer is probably yes.
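The Nyquist point translates into a simple constraint: the shortest symbol (a dot) must span at least two camera frames, otherwise flashes can fall between samples. A tiny checker, with the figures chosen purely for illustration:

```java
// Sanity-checks a Morse dot rate against a camera's frame rate (Nyquist limit).
public class MorseSampling {
    /** Fastest safe dot rate (dots per second) for a camera running at fps. */
    public static double maxDotsPerSecond(double fps) {
        return fps / 2.0; // need at least two samples per shortest symbol
    }

    /** True if a dot lasting dotMillis is safely sampled at fps. */
    public static boolean safelySampled(double fps, double dotMillis) {
        double frameMillis = 1000.0 / fps;
        return dotMillis >= 2.0 * frameMillis;
    }
}
```

At a typical 30 fps preview, anything up to 15 dots per second is in principle resolvable; a 300 ms dot (the figure used in the answer further below) spans about 9 frames and is comfortably safe.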
I think it's possible in ideal conditions: clear air and no other light noise, like on a dark night in the mountains. The problem is that users would try to use it in the city, in discos, etc., where it would obviously fail.
If you can record a video of the light and easily visually decode it upon watching, then there's a fair chance you may be able to do so programmatically with enough work.
The first challenge would be finding the light against the background, especially if it's small and/or there's any movement of the camera or source. You might actually be able to leverage some kinds of video compression technology to help filter out the movement.
The second question is if the phone has enough horsepower and your algorithm enough efficiency to decode it in real time. For a slow enough signaling rate, the answer would be yes.
Finally there might be things you could do to make it easier. For example, if you could get the source to flash at exactly half the camera frame rate when it is on instead of being steady on, it might be easier to identify since it would be in every other frame. You can't synchronize that exactly (unless both devices make good use of GPS time), but might get close enough to be of help.
Yes, the technology is definitely there. I wrote an Android application for my "Advanced Internet Technology" class that does exactly what you describe.
The application still has problems with bright noise (when other light sources leave or enter the camera view while recording). The approach I'm using just relies on overall brightness changes to extract the Morse signal.
There are some more or less complicated algorithms in place to correct the auto-exposure problem (the image darkens shortly after the light is "turned on") and to detect the thresholds for the Morse signal's strength and speed.
Overall, the performance of the application is good. I tested it at night in the mountains, and as long as the sent signal is strong enough there is no problem. In the library (with various light sources around) it was less accurate; I had to be careful not to have additional light sources at the edge of the camera view. The application required a "short" Morse signal to be at least 300 ms long.
A better approach would be to search the screen for the actual light source. For my project that turned out to be too much work, but it should give you good detection in noisy environments.
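The overall-brightness approach can be sketched as thresholding a per-frame brightness series into on/off runs, then classifying each run by length. Timing recovery, auto-exposure correction, and letter decoding are omitted; the threshold and the dot/dash ratio below are illustrative choices, not values from the original application:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Morse extraction from per-frame average brightness values.
public class BrightnessDecoder {
    /**
     * Collapse brightness samples into run lengths of light-on frames.
     * E.g. with threshold 128, {10, 200, 210, 10, 220, 10} -> [2, 1].
     */
    public static List<Integer> onRuns(int[] brightness, int threshold) {
        List<Integer> runs = new ArrayList<>();
        int run = 0;
        for (int b : brightness) {
            if (b > threshold) {
                run++;
            } else if (run > 0) {
                runs.add(run);
                run = 0;
            }
        }
        if (run > 0) runs.add(run); // signal still on when capture ended
        return runs;
    }

    /** Classify a run as dot or dash; Morse dashes are three dot lengths. */
    public static char symbol(int runFrames, int dotFrames) {
        return runFrames >= 3 * dotFrames ? '-' : '.';
    }
}
```

With a 300 ms minimum dot at a 30 fps preview, a dot is about 9 on-frames and a dash about 27, so the 3x classification boundary is easy to hit even with a few noisy frames.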