Android Activity Animations Timing

I want to put some animations on my activities when they open and close. But so far, if I add them, it just makes the phone feel like it's lagging, and if I speed them up, you barely notice them.
What is the smallest time interval the average human can perceive? 50 ms? 100 ms?
How can I make an animation noticeable without taking away from the responsiveness of the application? Obviously the animations themselves probably slow down the app a lot.
Maybe I'm asking a stupid question; if so, I'm sorry. But I thought this is a fairly important aspect of designing a good app.

http://en.wikipedia.org/wiki/Frame_rate says:
"The human eye and its brain interface, the human visual system, can process 10 to 12 separate images per second, perceiving them individually."
The 1896 standard movie frame rate was 16 fps. Today it is at least 24 fps.
I also remember a rate of 18 events per second, though I'm not sure whether that was from MS-DOS or old TV.
A car driver's reaction time is estimated at about 0.2 s (in the case where the driver is prepared to react). This means that if something happens faster than that, a human has no time to move, despite seeing it.
You can base your design on these figures.
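As a hedged illustration (my own sketch, not part of the original answer): an activity transition somewhere in the 200-300 ms range tends to register without reading as lag. The platform fade resources are used below; to pin the duration you would supply your own anim resources with android:duration set accordingly.

    import android.app.Activity;
    import android.content.Intent;

    // Sketch: launch another activity with a short fade transition.
    // To control the exact duration (~250 ms), define your own anim XML
    // resources and set android:duration there instead of using the
    // platform android.R.anim defaults.
    public class Transitions {
        public static void openWithShortFade(Activity current, Class<? extends Activity> target) {
            current.startActivity(new Intent(current, target));
            // Must be called immediately after startActivity() / finish() to take effect.
            current.overridePendingTransition(android.R.anim.fade_in, android.R.anim.fade_out);
        }
    }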

Related

Is there a right way to programmatically prevent a brief wrong recognition (in an object detection app) from triggering an action?

Context
I'm building an app which performs real-time object detection through the camera module of the device. The rendered output looks like the image below.
Let's say I try to recognize an apple: most of the time the app will recognize an apple. However, sometimes the app will recognize the wrong fruit (say, a lemon) on a few camera frames.
Goal
As the recognition of a fruit triggers an action in my code, my goal is to programmatically prevent a brief wrong recognition from triggering an action, and only take into account the majority result.
What I've tried
I tried this: if the same fruit is recognized several frames in a row, I assumed the result must be the right one. But as my device processes image recognition several times per second, even a wrong guess can be recognized several times in a row and lead to the wrong action.
Question
Are there any known techniques for avoiding this behavior?
I feel like you've already answered your own question. In general, interpreting a model's inference is its own tuning step. You know, for example, in logistic regression tasks that the threshold does NOT have to be 0.5. In fact, it's quite common to flex the threshold to see what the recall and precision are at various thresholds, and you can pick a threshold that works given your business/product problem. (Fraud detection might favor high recall if you never want to miss any fraud... or high precision if you don't want to annoy users with lots of false positives.)
In video, this broad concept is extended to multiple frames, as you know. You now have to tune the hyperparameters "how many frames total?" and "how many frames voting [apple]?"
If you are analyzing fruit going down a conveyor belt one by one, and you know each piece of fruit will be in frame for X seconds while you are shooting at 60 fps, maybe you want 60 * X frames. And maybe you want 90% of the frames to agree.
You'll want to visualize how often your detector "flips" detections so you can make a business/product judgement call on what your threshold ought to be.
This answer hasn't been very helpful in giving you a bright line rule here, but I hope it's helpful in suggesting that there is in fact NO bright line rule. You have to understand the problem to set the key hyperparameters:
For each frame, is top-1 acc sufficient, or do I need [.75] or higher confidence?
How many frames get to vote? Say [100].
How many correlated votes are necessary to trigger an actual signal? maybe it's [85].
The above algorithm assumes you take a hardmax after step 1. Another option would be to just average all 100 frames' scores and pick a threshold; that's kind of a soft-label version of the same algorithm.
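To make the hard-vote variant concrete, here is a rough sketch (mine, not from the answer): a sliding window over the last N per-frame labels, where an action fires only when one label holds at least the chosen fraction of the window. Window size and vote fraction are exactly the hyperparameters discussed above.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a sliding-window majority vote over per-frame detection labels.
    public class DetectionVoter {
        private final int windowSize;        // how many frames get to vote, e.g. 100
        private final double voteFraction;   // fraction needed to trigger, e.g. 0.85
        private final Deque<String> window = new ArrayDeque<>();
        private final Map<String, Integer> counts = new HashMap<>();

        public DetectionVoter(int windowSize, double voteFraction) {
            this.windowSize = windowSize;
            this.voteFraction = voteFraction;
        }

        // Feed the top-1 label of one frame; returns a label only once it has enough votes.
        public String onFrame(String label) {
            window.addLast(label);
            counts.merge(label, 1, Integer::sum);
            if (window.size() > windowSize) {
                String oldest = window.removeFirst();
                counts.merge(oldest, -1, Integer::sum);
            }
            if (window.size() < windowSize) {
                return null;                 // not enough evidence yet
            }
            int needed = (int) Math.ceil(voteFraction * windowSize);
            return counts.getOrDefault(label, 0) >= needed ? label : null;
        }
    }

Called once per camera frame with the detector's top-1 label, and acting only on non-null results, this filters out the occasional few-frame misrecognition.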

Camera2 API - How to set long exposure times

I'm trying to capture images with 30 seconds exposure times in my app (I know it's possible since the stock camera allows it).
But SENSOR_INFO_EXPOSURE_TIME_RANGE (which is supposed to be in nanoseconds) gives me the range:
13272 - 869661901
in seconds it would be just
0.000013272 - 0.869661901
Which obviously is less than a second.
How can I use longer exposure times?
Thanks in advance!
The answer to your question:
You can't. You checked exactly the right information and interpreted it correctly. Any value you set for the exposure time longer than that will be clipped to that max amount.
The answer you want:
You can still get what you want, though, by faking it. You want 30 continuous seconds' worth of photons falling on the sensor, which you can't get. But you can get something (virtually) indistinguishable from it by accumulating 30 seconds' worth of photons with tiny missing intervals interspersed.
At a high level, what you need to do is create a List of CaptureRequests and pass it to CameraCaptureSession.captureBurst(...). This will take the shots with as little interstitial time as possible. When each frame of image data is available, pass it to some new buffer somewhere and accumulate the information (simple point-wise addition). This is probably most properly done with an Allocation as the output Surface and some RenderScript.
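A rough sketch of the burst setup (my own, assuming the camera device, capture session, output Surface, capture callback and background Handler have already been set up elsewhere; maxExposureNs would be the top of SENSOR_INFO_EXPOSURE_TIME_RANGE):

    import android.hardware.camera2.CameraAccessException;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CameraMetadata;
    import android.hardware.camera2.CaptureRequest;
    import android.os.Handler;
    import android.view.Surface;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch only: queue ~30 s worth of back-to-back maximum-length exposures.
    public class LongExposureHelper {
        static void captureLongExposureBurst(CameraDevice cameraDevice,
                                             CameraCaptureSession session,
                                             Surface outputSurface,
                                             CameraCaptureSession.CaptureCallback callback,
                                             Handler handler,
                                             long maxExposureNs,
                                             int iso) throws CameraAccessException {
            CaptureRequest.Builder builder =
                    cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_MANUAL);
            builder.addTarget(outputSurface);
            builder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
            builder.set(CaptureRequest.SENSOR_EXPOSURE_TIME, maxExposureNs);
            builder.set(CaptureRequest.SENSOR_SENSITIVITY, iso);

            long totalNs = 30_000_000_000L;                                   // 30 s target
            int shots = (int) Math.ceil(totalNs / (double) maxExposureNs);    // ~35 shots for ~0.87 s max
            List<CaptureRequest> burst = new ArrayList<>();
            for (int i = 0; i < shots; i++) {
                burst.add(builder.build());
            }
            session.captureBurst(burst, callback, handler);                   // frames accumulated elsewhere
        }
    }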
Notes on data format:
The right way to do this is to use the RAW_SENSOR output format if you can. That way the accumulated output really is directly proportional to the light that was incident to the sensor over the whole 30s.
If you can't use that for some reason, I would recommend using YUV_420_888 output, and make sure you set the tone map curve to be linear (unfortunately you have to do this manually by creating a curve with two points; a sketch of that follows these notes). Otherwise the non-linearity introduced will ruin our scheme. (Although I'm not sure simple addition is exactly right in a linear YUV space, it's a first approach at least.) Whether you use this approach or RAW_SENSOR, you'll probably want to apply your own gamma curve/tone map after accumulation to make it "look right."
For the love of Pete, don't use JPEG output, for many reasons, not the least of which is that it will most likely add a LOT of interstitial time between exposures, thereby ruining our approximation of a continuous 30 s exposure.
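A minimal sketch of forcing that linear tone map on the same request builder (this could sit alongside the burst helper above; the two-point curve is the assumption being illustrated):

    import android.hardware.camera2.CameraMetadata;
    import android.hardware.camera2.CaptureRequest;
    import android.hardware.camera2.params.TonemapCurve;

    // Sketch only: a two-point (0,0)-(1,1) curve per channel gives a linear
    // tone map, so the YUV values stay roughly proportional to sensor exposure.
    public class TonemapHelper {
        static void applyLinearTonemap(CaptureRequest.Builder builder) {
            float[] linear = {0.0f, 0.0f, 1.0f, 1.0f};   // (Pin, Pout) pairs
            builder.set(CaptureRequest.TONEMAP_MODE, CameraMetadata.TONEMAP_MODE_CONTRAST_CURVE);
            builder.set(CaptureRequest.TONEMAP_CURVE, new TonemapCurve(linear, linear, linear));
        }
    }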
Note on exposure equivalence:
This will produce almost exactly the exposure you want, but not quite. It differs in two ways.
There will be small missing periods of photon information in the middle of this chunk of exposure time. But on the time scale you are talking about (30s), missing a few milliseconds of light here and there is trivial.
The image will be slightly noisier than if you had taken a true single 30 s exposure. This is because each time you read out the pixel values from the actual sensor, a little electronic noise gets added to the information. So in the end you'll have 35 times as much of this additive noise (from the 35 exposures for your specific problem) as a single exposure would. There's no way around this, sorry, but it might not even be noticeable; this is usually fairly small relative to the meaningful photographic signal. It depends on the camera sensor quality (and ISO, but I imagine for this application you need that to be high).
(Bonus!) This exposure will actually be superior in one way: Areas that might have been saturated (pure white) in a 30s exposure will still retain definition in these far shorter exposures, so you're basically guaranteed not to lose your high end details. :-)
You can't always trust SENSOR_INFO_EXPOSURE_TIME_RANGE as of May 2017. Try manually increasing the time and see what happens. I know my Pixel will actually take a 1.9 sec shot but SENSOR_INFO_EXPOSURE_TIME_RANGE has a value in the sub second range.

bad Accelerometer data with vibration

I am working on a bike computer app. I was hoping to work out the inclination of the slope using the accelerometer, but things are not working too well.
I have put in test code getting the sensor data. I am just sampling at the UI rate and keeping a moving average over 128 samples, which is about 6 seconds' worth. With the phone in hand the data is good and I can calculate a good angle compared to my calibration flat vector.
With the phone mounted on the bike things are not at all good. I expect a good bit of noise, but I was hoping that the large number of samples over the big time window would remove the vibration effects and general bike movements. Unfortunately this just is not working; the magnitude of the acceleration vector is not really staying around the 9.8 mark but is dropping lower, which indicates to me that something is not right somewhere.
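For concreteness, the kind of angle calculation described above looks roughly like this (a sketch; the smoothed vector and the "flat" calibration vector are my assumptions about the exact setup):

    // Sketch: incline angle between the averaged acceleration vector and a
    // "flat" calibration vector recorded while the bike is level. Both are
    // 3-element arrays of smoothed accelerometer readings (m/s^2).
    public class Incline {
        static double inclineDegrees(double[] avg, double[] flat) {
            double dot = 0, na = 0, nf = 0;
            for (int i = 0; i < 3; i++) {
                dot += avg[i] * flat[i];
                na  += avg[i] * avg[i];
                nf  += flat[i] * flat[i];
            }
            return Math.toDegrees(Math.acos(dot / Math.sqrt(na * nf)));
        }
    }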
Here is a plot of the data from part of a test ride.
As you can see, when stationary at the start the magnitude is OK, but once I get going it drops. I am fairly sure the problem is vibration related: I initially descend and there was heavy vibration; I then climb, where the vibration is less, and the magnitude gets back towards 9.8; but then I drop down quickly on a bad road and the magnitude ends up at less than 3.
This is with a Sony Ericsson Xperia Active, which uses a BMA250 sensor; the datasheet suggests the sensor should be capable enough. My only theory for the cause of the problem is that the range is set to the 2g range and the vibration is causing data to go out of range, and this is causing my problems.
Has anyone seen anything like this?
Has anyone got any ideas on the cause of the problem?
Is there any way to change the sensitivity that I have not found?
Additional information.
OK, I logged the raw sensor data before my filtering. A very small portion is presented here.
The major axis is in green; on the flat, without the vibration, I believe this should be about 8.5. There is no obvious clamping of the data, but I get more values below 8.5 than above 8.5. Even if the sensor is set up for its most sensitive 2g range, it looks like the vibration should be OK: I have a maximum value here of just over 15 and a minimum of -10, well within a ±20 range, just not centered correctly on the 8.5 it should be.
I will dig out my other phone, which looks to have a slightly different sensor (a BMA150), and try with that, but unless it is perfect I think I will have to give up on the idea.
I suspect the accelerometer is not linear over such large g ranges. If so, and if there is any asymmetry, it will do what you see.
The solution for that is to pad the accelerometer mount a bit more: foam rubber, bungee cord, whatever; possibly mount it on a heavier stage to filter the vibration more.
Or (not a good solution) try to model the error and compensate for it.
I used the same handset and by coincidence the same averaging interval of 6 seconds for an application a few years ago and I don't recall seeing the behaviour in the graph.
I'm wondering whether the issue is in the way the 6-second averages are being accumulated. One problem I had is that the sampling interval was not constant but depended on how busy the processor was. A sample is acquired in the specified time, but the calling of the event handler depends on the scheduler. When the processor is unloaded, sampling occurs at a constant frequency, but as the processor works harder the sampling frequency becomes slower and more erratic.
You can write your app to keep processor load low while sampling. What we did was sample for 6 seconds, doing nothing else, then stop sampling and process the last sample set. This was only partially successful, as you can't control other apps running at the same time and the scheduler shares processor resources across them all. On the Xperia Active I found it could occasionally go out to seconds between samples, which I attributed to garbage collection in one of the JVMs.
The solution for us was to time-stamp each sample, then run some quality checks over a sample set and discard those that failed the quality check (sketched below). This is a poor solution, as defining what is good enough is imprecise, and when the user runs another app that uses a lot of resources most sample sets can be discarded, so the app needs additional logic to handle that.
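A minimal sketch of that timestamp-and-reject approach (the 6 s window and the maximum allowed gap are illustrative values of mine, not from the original app; the listener would be registered with SensorManager elsewhere):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: collect ~6 s of accelerometer samples with their timestamps and
    // reject the whole window if any inter-sample gap is suspiciously long.
    public class WindowedSampler implements SensorEventListener {
        private static final long WINDOW_NS = 6_000_000_000L;   // 6 s window
        private static final long MAX_GAP_NS = 100_000_000L;    // reject gaps > 100 ms (illustrative)

        private final List<float[]> samples = new ArrayList<>();
        private long windowStartNs = -1;
        private long lastTimestampNs = -1;
        private boolean gapDetected = false;

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (windowStartNs < 0) {
                windowStartNs = event.timestamp;                 // nanoseconds, sensor clock
            }
            if (lastTimestampNs > 0 && event.timestamp - lastTimestampNs > MAX_GAP_NS) {
                gapDetected = true;                              // scheduler/GC stalled us
            }
            lastTimestampNs = event.timestamp;
            samples.add(event.values.clone());

            if (event.timestamp - windowStartNs >= WINDOW_NS) {
                if (!gapDetected) {
                    processWindow(samples);                      // e.g. average and compute incline
                }
                samples.clear();
                windowStartNs = -1;
                lastTimestampNs = -1;
                gapDetected = false;
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }

        private void processWindow(List<float[]> window) {
            // Placeholder for the averaging / angle calculation.
        }
    }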
The current Android API, unavailable on the Xperia Active, should have eliminated this, as samples can be batched as described at https://source.android.com/devices/sensors/hal-interface.html#batch_sensor_flags_sampling_period_maximum_report_latency.
If the algorithm assumed a particular number of samples rather than counting them, and the processor worked harder as the bike went faster (though I'm not sure why it would), it would produce something like the first graph, because when the bike is going downhill the magnitude goes down and when going uphill it goes up. There is a lot of speculation there, but a 6-second average giving a magnitude of less than 3 m/s^2 looks implausible from my experience with this sensor.

how to measure and improve battery use in iPhone/iPad game (Android also)

My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing (a sketch of such a read appears after this list of questions). My question here is: about how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes, is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be.
(2) I'm trying to understand what are the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me but: I have been reading about shaders and OpenCL, and I want to understand if I were to unload some of the CPU processing to the GPU, whether that would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that that disabling is not trivial and there are enough potential culprits that I thought I'd ask for general advice first.
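For part (1) on Android, one way to read the level for automated runs is the sticky ACTION_BATTERY_CHANGED broadcast (a sketch of my own; note that on most devices the reported level moves in whole-percent steps, which is one reason a test run needs to drain several percent to give a meaningful number):

    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.os.BatteryManager;

    // Sketch: read the current battery level as a percentage from the sticky
    // ACTION_BATTERY_CHANGED broadcast (no receiver needed for a one-off read).
    public class BatteryReader {
        static float batteryPercent(Context context) {
            Intent status = context.registerReceiver(null,
                    new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
            int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
            return 100f * level / scale;
        }
    }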
Try reading this article:
Android Documents on optimization
What works well for me is decreasing the amount of garbage collection. E.g., when programming for a desktop computer, you're (or I'm) used to defining variables inside loops when they are not needed outside the loop; this causes a massive amount of garbage collection (and I'm not talking about primitive vars, but big objects).
Try avoiding things like that.
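A small illustration of the idea (my own example, not from the linked article): reuse one scratch object across iterations instead of allocating a new one per sprite per frame.

    import android.graphics.RectF;

    // Sketch: a per-frame loop that reuses a single RectF instead of allocating
    // one per sprite, so the GC has far less short-lived garbage to collect.
    public class SpriteRenderer {
        private final RectF scratch = new RectF();   // allocated once, reused every frame

        void drawFrame(Sprite[] sprites) {
            for (Sprite s : sprites) {
                // Bad: new RectF(...) inside the loop, every frame.
                scratch.set(s.x, s.y, s.x + s.width, s.y + s.height);
                draw(scratch);
            }
        }

        void draw(RectF bounds) { /* submit to the renderer */ }

        // Minimal stand-in sprite type for the sketch.
        static class Sprite { float x, y, width, height; }
    }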
One little tip that really helped me get battery usage (and warmth of the device!) down was to throttle the FPS in my custom OpenGL engine.
Especially while the scene is static (e.g. a turn-based game, or the user tapped pause), throttle down the FPS.
Or throttle if the user isn't responsive for more than 10 seconds, like a screensaver on a desktop PC. In the real world users often get distracted while using mobile devices. Don't let your app drain the battery while your user figures out which subway station he's in ;)
Also, on the iPhone 60 FPS is sometimes the default; throttling this manually to 30 FPS is barely visible and saves you about half of the GPU cycles (and therefore a lot of battery!).
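One way to implement that throttle in a custom render loop (a sketch, assuming you drive rendering yourself rather than letting the view render continuously):

    import android.os.SystemClock;

    // Sketch: cap a custom render loop at a target FPS by sleeping away the
    // unused part of each frame's time budget.
    public class FrameThrottler {
        private long targetFrameMs = 33;   // ~30 FPS; raise to 50-100 ms when the scene is static

        public void setTargetFps(int fps) {
            targetFrameMs = 1000L / Math.max(1, fps);
        }

        // Call once per frame, after drawing and swapping buffers.
        public void endFrame(long frameStartMs) {
            long elapsed = SystemClock.uptimeMillis() - frameStartMs;
            long sleepMs = targetFrameMs - elapsed;
            if (sleepMs > 0) {
                SystemClock.sleep(sleepMs);   // idle the CPU/GPU for the rest of the budget
            }
        }
    }

On Android a cruder alternative with GLSurfaceView is setRenderMode(RENDERMODE_WHEN_DIRTY) and calling requestRender() only when something actually changed.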

Scheduling android graphics events

I asked this question over in the Android Developer's user group, last week. Nobody responded, so I thought I'd ask it over here.
Does anyone have any suggestions about how to schedule video events to happen at an exact clock time? I've been thinking about an application that would require two adjacent phones to display the same thing at exactly the same time. I'm wondering what that granularity of "exactly" is going to be.
I've done some testing on a couple of devices, and it seems that the delay between an invalidate and the subsequent redraw can be as much as 16 ms. Perhaps I can do better with OpenGL?
Ideas? Anyone?
OpenGL itself is capable of very high frame rates (unless I am mistaken). What I can tell you is that plenty of games have been written to run and maintain 30 frames per second; that's one frame every 33.3 ms. At that speed the motion should look smooth to the human eye, or so I've heard (though I've seen estimates of the perceptual limit as low as 5 ms).
However, there is a major difference between what OpenGL can do and what the device running OpenGL can do. Again, unless I am mistaken, you should be able to instruct OpenGL to run at 200 frames per second. The caveat is that if the machine you are running the animation on can't handle that frame rate, it will either skip frames or lag, and in either case will hog the processor and GPU like no other.
Again, as I don't know the specifics, I can only guess, but I would think that this is less of an issue with OpenGL vs. the other leading brand, and more of an issue of the devices you are trying to sync. With the right code, a proven framework, two powerful machines, and high-speed data transfer capability (read: LAN at the least), there is no reason why you shouldn't be able to sync up the video. If any of these things are not the case, all bets are off.
-Cody
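For the scheduling half of the question, a hedged sketch (my own, not from the answers above): if the two devices agree on a wall-clock target instant (for example via NTP or a shared server, which is the hard part), each can convert it to its local uptime base and post the draw with Handler.postAtTime; the residual error is then the clock sync plus the ~16 ms vsync granularity mentioned above.

    import android.os.Handler;
    import android.os.Looper;
    import android.os.SystemClock;
    import android.view.View;

    // Sketch: schedule an invalidate() as close as possible to an agreed
    // wall-clock instant (targetWallClockMs, distributed to both phones).
    public class SyncedDraw {
        static void scheduleDrawAt(final View view, long targetWallClockMs) {
            long delayMs = targetWallClockMs - System.currentTimeMillis();
            long targetUptimeMs = SystemClock.uptimeMillis() + Math.max(0, delayMs);
            new Handler(Looper.getMainLooper()).postAtTime(new Runnable() {
                @Override
                public void run() {
                    view.invalidate();   // actual pixels still land on the next vsync (~16 ms)
                }
            }, targetUptimeMs);
        }
    }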
