My goal is to synchronize a movie's frames with device orientation data from the phone's gyroscope/accelerometer. Android provides orientation data with proper timestamps, but a video's frame rate is only known approximately. Experiments show that changes in orientation measured by the accelerometer don't match changes in the scene: within the same video, the scene sometimes runs faster and sometimes slower.
Is there any way to find out how much time passed between two consecutive frames?
I’m going to answer this question myself.
First, yes, it is possible that the frame rate is not constant. Here is a quote from the Android Developers Guide: “NOTE: On some devices that have auto-frame rate, this sets the maximum frame rate, not a constant frame rate.”
http://developer.android.com/reference/android/media/MediaRecorder.html#setVideoFrameRate%28int%29
Second, the OpenCV VideoCapture class has a get(CAP_PROP_POS_MSEC) function (CV_CAP_PROP_POS_MSEC in older OpenCV versions) that reads the current frame's time position:
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture capture("test.mp4");
    double curFrameTime;
    double prevFrameTime = 0;
    double dt;
    while (capture.grab())
    {
        curFrameTime = capture.get(cv::CAP_PROP_POS_MSEC); // current frame position in milliseconds
        dt = curFrameTime - prevFrameTime;                 // time difference between consecutive frames
        prevFrameTime = curFrameTime;                      // remember the previous frame position
    }
    capture.release();
    return 0;
}
If there is a better suggestion I’ll be glad to see it.
Related
I am using Unity to create a game, with an Android phone as the controller. The phone functions as a WebSocket server that the Unity app connects to, listening for the gyroscope sensor.
The problem is that when the Unity app runs at a lower fps it sees the phone rotate more, and at a higher fps the phone rotates less.
This is where I translate the incoming data and apply it to a capsule:
while (_ws.State == WebSocketState.Open)
{
    ...
    //Data comes in as rad/sec, and I want it to be degrees/sec
    float _radiansToDegrees = Time.deltaTime * Mathf.Rad2Deg;
    var gyroRotation = new Vector3(g.values[0], g.values[1], g.values[2]) * _radiansToDegrees;
    //I only care to rotate along the x-axis
    _capsule.Rotate(_uiController.IsPhoneAxis() ? new Vector3(gyroRotation.z, 0, 0) : new Vector3(-gyroRotation.x, 0, 0));
    ...
}
I had assumed that using Time.deltaTime when converting radians to degrees would make the incoming data independent of the game's fps.
If I make the websocket's sampling rate similar to the Unity app's fps I get the expected rotation, but I need to prevent the fps from impacting my rotation.
After reading my own question I realized that Time.deltaTime was the incorrect multiplier. I needed to multiply by my sampling interval (1 / sampling rate) instead.
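A minimal sketch of the fix, assuming the phone streams gyroscope samples at a fixed, known rate (the 50 Hz sampleRateHz below is a hypothetical value; use whatever rate your server actually samples at):
// Hypothetical fixed sampling rate of the phone's gyroscope stream.
const float sampleRateHz = 50f;

// Each sample covers 1/sampleRateHz seconds of rotation, regardless of the game's fps.
float radiansToDegrees = (1f / sampleRateHz) * Mathf.Rad2Deg;
var gyroRotation = new Vector3(g.values[0], g.values[1], g.values[2]) * radiansToDegrees;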
I'm using CameraX and I'd like to play an animation over a canvas at the same fps that CameraX uses to show the preview.
question 1:
How can I play a 60-frame animation in 2 seconds when CameraX runs at 30 fps (for example), if that is possible at all?
question 2:
How can I get the CameraX fps?
Regarding the first question, add a listener to your SurfaceTexture and listen to onSurfaceTextureUpdated. You can use this method as reference to know when to render your animation.
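A minimal sketch of such a listener, assuming the preview is rendered into a TextureView (textureView and renderNextAnimationFrame() are illustrative names, not CameraX APIs):
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) { }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) { return true; }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        // Called once per displayed preview frame: advance your animation here.
        renderNextAnimationFrame(); // illustrative helper
    }
});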
About the second question: as far as I know there is no API in CameraX to get the FPS, because a camera preview is variable in terms of FPS by nature. Only video recording has a fixed FPS value; for the preview modes the FPS is variable. On the other hand, the camera2 API can be configured with an FPS range (min-max), so I guess the CameraX API will eventually offer something similar.
Alternatively, you can compute the FPS dynamically using the onSurfaceTextureUpdated method. On the internet you will find many pages showing how to calculate an FPS value.
Here is a small example. The code is untested but should guide you: call the method from onSurfaceTextureUpdated, and after a few iterations you should get the current FPS.
private long lastFpsTime = 0L;
private float fps;

private void computeFPS()
{
    if (this.lastFpsTime != 0L)
    {
        // Instantaneous FPS from the time between two consecutive calls.
        this.fps = 1000.0f / (SystemClock.elapsedRealtime() - this.lastFpsTime);
    }
    this.lastFpsTime = SystemClock.elapsedRealtime(); // remember when this frame arrived
}
I have some code that allows me to detect faces in a live camera preview and draw a few GIFs over their landmarks using the play-services-vision library provided by Google.
It works well enough when the face is static, but when the face moves at a moderate speed, the face detector takes longer than one camera frame to detect the landmarks at the face's new position. I know it might have something to do with the bitmap drawing speed, but I have taken steps to minimize the lag.
(Basically I get complaints that the GIFs' repositioning isn't 'smooth enough')
EDIT: I did try getting the coordinate detection code...
List<Landmark> landmarksList = face.getLandmarks();
for (int i = 0; i < landmarksList.size(); i++)
{
    Landmark current = landmarksList.get(i);
    //canvas.drawCircle(translateX(current.getPosition().x), translateY(current.getPosition().y), FACE_POSITION_RADIUS, mFacePositionPaint);
    //canvas.drawCircle(current.getPosition().x, current.getPosition().y, FACE_POSITION_RADIUS, mFacePositionPaint);
    if (current.getType() == Landmark.LEFT_EYE)
    {
        //Log.i("current_landmark", "l_eye");
        leftEyeX = translateX(current.getPosition().x);
        leftEyeY = translateY(current.getPosition().y);
    }
    if (current.getType() == Landmark.RIGHT_EYE)
    {
        //Log.i("current_landmark", "r_eye");
        rightEyeX = translateX(current.getPosition().x);
        rightEyeY = translateY(current.getPosition().y);
    }
    if (current.getType() == Landmark.NOSE_BASE)
    {
        //Log.i("current_landmark", "n_base");
        noseBaseY = translateY(current.getPosition().y);
        noseBaseX = translateX(current.getPosition().x);
    }
    if (current.getType() == Landmark.BOTTOM_MOUTH) {
        botMouthY = translateY(current.getPosition().y);
        botMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "b_mouth "+translateX(current.getPosition().x)+" "+translateY(current.getPosition().y));
    }
    if (current.getType() == Landmark.LEFT_MOUTH) {
        leftMouthY = translateY(current.getPosition().y);
        leftMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "l_mouth "+translateX(current.getPosition().x)+" "+translateY(current.getPosition().y));
    }
    if (current.getType() == Landmark.RIGHT_MOUTH) {
        rightMouthY = translateY(current.getPosition().y);
        rightMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "r_mouth "+translateX(current.getPosition().x)+" "+translateY(current.getPosition().y));
    }
}
eyeDistance = (float) Math.sqrt(Math.pow((double) Math.abs(rightEyeX - leftEyeX), 2) + Math.pow(Math.abs(rightEyeY - leftEyeY), 2));
eyeCenterX = (rightEyeX + leftEyeX) / 2;
eyeCenterY = (rightEyeY + leftEyeY) / 2;
noseToMouthDist = (float) Math.sqrt(Math.pow((double) Math.abs(leftMouthX - noseBaseX), 2) + Math.pow(Math.abs(leftMouthY - noseBaseY), 2));
...in a separate thread within the View draw method, but it just nets me a SIGSEGV error.
My questions:
Is syncing the Face Detector's processing speed with the camera preview framerate the right thing to do here, or should it be the other way around, or done in some other way?
While the Face Detector is processing a camera preview frame, should I drop the frames that the preview delivers before it finishes? If so, how can I do that?
Should I just use setClassificationMode(NO_CLASSIFICATIONS) and setTrackingEnabled(false) in a camera preview just to make the detection faster?
Does the play-services-vision library use OpenCV, and which is actually better?
EDIT 2:
I read one research paper claiming that, using OpenCV, face detection and the other functions available in OpenCV are faster on Android. I was wondering whether I can leverage that to speed up the face detection.
There is no way you can guarantee that face detection will be fast enough to show no visible delay, even when the head motion is moderate. Even if you manage to optimize it heavily on your development device, you will surely find another model, among the thousands out there, that is too slow.
Your code should be resilient to such situations. You can predict the face position a second ahead, assuming that it moves smoothly. If the users decide to twitch their head or device, no algorithm can help.
If you use the deprecated Camera API, you should pre-allocate a buffer and use setPreviewCallbackWithBuffer(). This way you can guarantee that frames arrive at your image processor one at a time. You should also not forget to open the Camera on a background thread, so that the onPreviewFrame() callback (http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html#onPreviewFrame(byte[], android.hardware.Camera)), where your heavy image processing takes place, does not block the UI thread.
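A minimal sketch of that buffered setup, assuming the default NV21 preview format (previewWidth, previewHeight, and processFrame() are illustrative names):
// One NV21 preview frame needs width * height * 1.5 bytes.
int bufferSize = previewWidth * previewHeight
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);

camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        processFrame(data);             // your heavy image processing
        camera.addCallbackBuffer(data); // hand the buffer back; no new frame arrives until you do
    }
});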
Yes, OpenCV face detection may be faster in some cases, but more importantly it is more robust than the Google face detector.
Yes, it's better to turn the classifier off if you don't care about smiles and open eyes. The performance gain may vary.
I believe that turning tracking off will only slow the Google face detector down, but you should make your own measurements, and choose the best strategy.
The most significant gain can be achieved by turning setProminentFaceOnly() on, but again I cannot predict the actual effect of this setting for your device.
There's always going to be some lag, since any face detector takes some amount of time to run. By the time you draw the result, you will usually be drawing it over a future frame in which the face may have moved a bit.
Here are some suggestions for minimizing lag:
The CameraSource implementation provided by Google's vision library automatically handles dropping preview frames when needed so that it can keep up the best that it can. See the open source version of this code if you'd like to incorporate a similar approach into your app: https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/ui/camera/CameraSource.java#L1144
Using a lower camera preview resolution, such as 320x240, will make face detection faster.
If you're only tracking one face, using the setProminentFaceOnly() option will make face detection faster. Using this and LargestFaceFocusingProcessor as well will make this even faster.
To use LargestFaceFocusingProcessor, set it as the processor of the face detector. For example:
Tracker<Face> tracker = new Tracker<Face>() { /* your face tracker implementation */ };
detector.setProcessor(
        new LargestFaceFocusingProcessor.Builder(detector, tracker).build());
Your tracker implementation will receive face updates for only the largest face that it initially finds. In addition, it will signal back to the detector that it only needs to track that face for as long as it is visible.
If you don't need to detect smaller faces, setting setMinFaceSize() larger will make face detection faster. It's faster to detect only larger faces, since the detector doesn't need to spend time looking for smaller ones.
You can turn off classification if you don't need eyes-open or smile indication. However, this only gives you a small speed advantage.
Using the tracking option will make this faster as well, but at some expense of accuracy. It uses a predictive algorithm for some intermediate frames, to avoid the cost of running full face detection on every frame. A combined configuration sketch follows below.
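Putting several of these suggestions together, a sketch of a detector and camera source configured for speed (the 0.15f minimum face size and the 320x240 preview size are illustrative values, not recommendations):
FaceDetector detector = new FaceDetector.Builder(context)
        .setProminentFaceOnly(true)                             // track only one large face
        .setMinFaceSize(0.15f)                                  // skip faces smaller than 15% of the image
        .setClassificationType(FaceDetector.NO_CLASSIFICATIONS) // no smile / eyes-open classification
        .setTrackingEnabled(true)                               // predictive tracking between frames
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)            // still needed for eye/nose/mouth positions
        .build();

CameraSource cameraSource = new CameraSource.Builder(context, detector)
        .setRequestedPreviewSize(320, 240) // lower preview resolution speeds up detection
        .build();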
I am using the following code to calculate the frame rate in Unity3d 4.0. It's applied to an NGUI label.
// Fields assumed by the script (as in the standard FPS counter scripts):
public float updateInterval = 0.5f; // seconds between label updates
private float timeleft;             // time left in the current interval
private float accum = 0.0f;         // accumulated fps values over the interval
private int frames = 0;             // frames counted over the interval

void Update () {
    timeleft -= Time.deltaTime;
    accum += Time.timeScale/Time.deltaTime;
    ++frames;
    // Interval ended - update GUI text and start new interval
    if( timeleft <= 0.0 )
    {
        // display two fractional digits (f2 format)
        float fps = accum/frames;
        string format = System.String.Format("{0:F2} FPS",fps);
        FPSLabel.text = format;
        timeleft = updateInterval;
        accum = 0.0F;
        frames = 0;
    }
}
It was working previously, or at least seemed to be. Then I had some problems with physics, so I changed the fixed timestep to 0.005 and the max timestep to 0.017. Yes, I know that's very low, but my game works fine with it.
Now the problem is that the above FPS code returns 58.82 all the time. I've checked on separate (Android) devices; it just doesn't budge. I thought it might be correct, but in the profiler I can clearly see ups and downs. So obviously something is fishy.
Am I doing something wrong? I copied the code from somewhere (probably the script wiki). Is there any other way to get the correct FPS?
Taking cues from this question, I've tried all the methods in its first answer. Even the following code returns a constant 58.82 FPS. This happens only on the Android device; in the editor I can see the fps vary.
float fps = 1.0f/Time.deltaTime;
So I checked the value of Time.deltaTime, and it's a constant 0.017 on the device. How can this be possible? :-/
It seems to me that the fps counter is correct, and that the constant 58.82 FPS is caused by the changes in your physics time settings. The physics engine probably cannot finish its computation within the available timestep (0.005 s, which is very low), which means it keeps computing until it reaches the maximum timestep, in your case 0.017 s. So every frame takes 0.017 s, plus any other overhead from rendering / scripts you may have. And 1 / 0.017 equals exactly 58.82.
Maybe you can fix any problems you have with the physics in other ways, without lowering the fixed timestep so much.
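If you would rather experiment from code than from the project settings, a sketch (the values below are close to Unity's defaults, shown only for illustration):
// Restore less aggressive physics timing; values close to Unity's defaults.
Time.fixedDeltaTime = 0.02f;    // physics step of 20 ms (50 Hz) instead of 5 ms
Time.maximumDeltaTime = 0.333f; // cap on how much time a single frame may simulate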
I'll try to explain what I mean.
I'm developing a 2D game. When I run the code below on a small screen it works faster than the same code on a big screen. I think this is because an iteration of the game loop takes more time on the big screen than on the small one. How can I introduce a time unit, or something else, so that the behavior doesn't depend on iterations of the game loop?
private void createDebris(){
    if (dx <= 0) return;
    if (stepDebris == 2) {
        Debris debris = new Debris(gameActivity, dx -= 1280*coefX/77, 800*coefY - 50*coefY, coefX, coefY);
        synchronized (necessaryObjects) {
            necessaryObjects.add(debris);
        }
        stepDebris = -1;
        Log.e("COUNT", (count++) + "");
    }
    stepDebris++;
}
P.S. Debris is a visual object that is drawn on the canvas. I'll appreciate your answers. Thanks.
If stepDebris is the unit by which you move objects on the screen, then incrementing it per draw call is wrong, because it ties the rate of movement to the framerate.
What you want is something like this
stepDebris += elapsedMilliseconds * speedFactor
where elapsedMilliseconds is the time elapsed since the previous update (in ms). Once you find the correct speedFactor for stepDebris, it will move at the same speed on different machines/resolutions irrespective of framerate.
Hope this helps!
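A minimal sketch of the idea, assuming the game loop records when its previous iteration ran (SPEED_FACTOR is a tuning constant you choose):
private long lastUpdateMs = System.currentTimeMillis();
private float stepDebris = 0f;
private static final float SPEED_FACTOR = 0.05f; // units per millisecond; tune to taste

private void update() {
    long now = System.currentTimeMillis();
    long elapsedMilliseconds = now - lastUpdateMs; // time since the previous iteration
    lastUpdateMs = now;

    // Movement now depends on real elapsed time, not on how often the loop runs.
    stepDebris += elapsedMilliseconds * SPEED_FACTOR;
}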
You can make an 'iteration' take x milliseconds by measuring the time the actual iteration takes (= y), and afterwards sleeping for (x - y) milliseconds, as in the sketch below.
See also step 3 of this tutorial
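A sketch of that fixed-duration loop, assuming a target of x = 16 ms per iteration (roughly 60 fps; updateAndDraw() is an illustrative stand-in for one iteration of your loop):
private static final long TARGET_FRAME_MS = 16; // x: desired iteration length

while (running) {
    long start = System.currentTimeMillis();
    updateAndDraw();                                   // the actual iteration
    long elapsed = System.currentTimeMillis() - start; // y: time it really took
    long sleepMs = TARGET_FRAME_MS - elapsed;          // x - y
    if (sleepMs > 0) {
        try {
            Thread.sleep(sleepMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}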
Android provides the Handler API, which implements timed event loops, to control the timing of computation. This article is pretty good.
You will save battery life by implementing a frame rate controller with this interface rather than redrawing as fast as you can.
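A minimal frame rate controller with Handler, assuming roughly 30 updates per second (FRAME_MS and updateAndDraw() are illustrative):
private static final long FRAME_MS = 33; // ~30 updates per second
private final Handler handler = new Handler(Looper.getMainLooper());

private final Runnable frameTick = new Runnable() {
    @Override
    public void run() {
        updateAndDraw();                     // advance and render one frame
        handler.postDelayed(this, FRAME_MS); // schedule the next tick
    }
};

// Start the loop with: handler.post(frameTick);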