I have written a program that uses goodFeaturesToTrack and calcOpticalFlowPyrLK to track features from frame to frame. The program works reliably and can estimate the optical flow in the preview image on an Android camera from the previous frame. Here are some snippets that illustrate the general process:
goodFeaturesToTrack(grayFrame, corners, MAX_CORNERS, quality_level,
                    min_distance, cv::noArray(), eig_block_size,
                    use_harris, 0.06);
...
if (first_time == true) {
    first_time = false;
    old_corners = corners;
    safe_corners = corners;
    mLastImage = grayFrame;
} else {
    if (old_corners.size() > 0 && corners.size() > 0) {
        safe_corners = corners;
        calcOpticalFlowPyrLK(mLastImage, grayFrame, old_corners, corners,
                             status, error, Size(21, 21), 5,
                             TermCriteria(TermCriteria::COUNT + TermCriteria::EPS,
                                          30, 0.01));
    } else {
        // no features found, so let's start over
        first_time = true;
    }
}
The code above runs over and over again in a loop where a new preview frame is grabbed at each iteration. safe_corners, old_corners, and corners are all of type vector<Point2f>. The above code works great.
Now, for each feature that I've identified, I'd like to be able to attach some information to it... number of times found, maybe a descriptor of the feature, who knows... My first approach to doing this was:
class Feature : public Point2f {
private:
    // things about a feature that I want to track
public:
    // getters and fetchers and of course:
    Feature() {
        Point2f();
    }
    Feature(float a, float b) {
        Point2f(a, b);
    }
};
Next, all of my output arrays are changed from vector<Point2f> to vector<Feature>, which in my own twisted world ought to work because Feature is defined to be a descendant class of Point2f. Polymorphism applied, I can't imagine any good reason why this should puke on me unless I did something else horribly wrong.
Here's the error message I get.
OpenCV Error: Assertion failed (func != 0) in void cv::Mat::convertTo(cv::OutputArray, int, double, double) const, file /home/reports/ci/slave50-SDK/opencv/modules/core/src/convert.cpp, line 1095
So, my question to the forum is: do the OpenCV functions truly require a Point2f vector, or will a descendant class of Point2f work just as well? The next step would be to get gdb working with mobile code on the Android phone and see more precisely where it crashes, but I don't want to go down that road if my approach is fundamentally flawed.
Alternatively, if a feature is tracked across multiple frames using the approach above, does the address in memory for each point change?
Thanks in advance.
The short answer is YES: OpenCV functions do require std::vector<cv::Point2f> as arguments.
Note that the vectors contain cv::Point2f objects themselves, not pointers to cv::Point2f, so there is no polymorphic behavior.
Additionally, having your Feature inherit from cv::Point2f is probably not an ideal solution. It would be simpler to use composition in this case, which also models the correct relationship (a Feature has-a cv::Point2f).
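As a rough sketch of the composition approach (a sketch only; Feature, timesFound, and syncFeatures are hypothetical names, and keeping a parallel vector<cv::Point2f> for the OpenCV calls is one possible design):

#include <opencv2/core/core.hpp>
#include <vector>

// Hypothetical Feature that has-a Point2f instead of is-a Point2f.
struct Feature {
    cv::Point2f pt;   // position handed to/from the OpenCV calls
    int timesFound;   // the kind of per-feature bookkeeping asked about
    Feature(float a, float b) : pt(a, b), timesFound(0) {}
};

// Keep the plain points in their own vector for goodFeaturesToTrack /
// calcOpticalFlowPyrLK, then sync the Feature records afterwards:
void syncFeatures(std::vector<Feature> &features,
                  const std::vector<cv::Point2f> &corners,
                  const std::vector<unsigned char> &status)
{
    for (size_t i = 0; i < corners.size() && i < features.size(); ++i) {
        if (status[i]) {
            features[i].pt = corners[i];
            ++features[i].timesFound;
        }
    }
}

This way the OpenCV functions see exactly the element type they assert on, and the extra data lives next to, rather than inside, the point.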
Relying on an object's location in memory is also probably not a good idea; containers are free to move their elements around. Rather, read up on the guarantees of your data structure of choice.
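For instance, with std::vector the answer to the address question above is "yes, it can change"; a minimal illustration, independent of OpenCV:

#include <opencv2/core/core.hpp>
#include <vector>

void addressesAreNotStable()
{
    std::vector<cv::Point2f> pts(4, cv::Point2f(0.f, 0.f));
    cv::Point2f *first = &pts[0];

    // push_back may exceed the current capacity and reallocate,
    // moving every element to a different block of memory...
    pts.push_back(cv::Point2f(1.f, 2.f));

    // ...so 'first' may now dangle; dereferencing it after a
    // reallocation is undefined behavior.
}

In the tracking loop above, corners is also rewritten on every frame, so feature identity has to be carried by index (together with the status vector), not by address.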
I'm just getting into OpenCV myself, so I can't address that aspect of the code, but your problem might be a bug in your code that results in an uninitialized base class (at least not initialized as you might expect). Your code should look like this:
Feature()
    : Point2f()
{
}

Feature(float a, float b)
    : Point2f(a, b)
{
}
Your implementation creates a temporary Point2f object inside each constructor body. That temporary does not initialize the Feature object's Point2f base class, and it is destroyed at the end of the constructor.
I'm stretching my very limited ARCore knowledge.
My question is similar (but different) to this question.
I want to work out if my device's camera node intersects/overlaps with my other nodes, but I've not been having any luck so far.
I'm trying something like this (the camera is another node):
scene.setOnUpdateListener(frameTime -> {
    Node x = scene.overlapTest(scene.getCamera());
    if (x != null) {
        Log.i(TAG, "setUpArComponents: CAMERA HIT DETECTED at: " + x.getName());
        logNodeStatus(x);
    }
});
Firstly, does this make sense?
I can detect all node collisions in my scene using:
for (Node node : nodes) {
    ...
    ArrayList<Node> results = scene.overlapTestAll(node);
    ...
}
Assuming that there isn't a renderable for the camera node (and therefore no default collision shape), I tried to set my own collision shape, but this was actually catching all the tap events I was trying to perform, so I figured I must be doing it wrong.
I'm thinking about things like fixing a deactivated node in front of the camera.
I may be asking too much of ARCore, but has anyone found a way to detect a collision between the "user" (i.e. the camera node) and another node? Or should I be doing this "collision detection" via indoor positioning instead?
Thanks in advance :)
UPDATE: it's really hacky and performance-heavy, but you can actually compare the camera's and the node's world-space positions from within onUpdate inside a node. You'll probably have to manage some tolerance and other things to smooth out the interactions.
Another idea for achieving the same thing is to cast a ray from the camera and react when the hit object is close enough. You could use something like this in the onUpdateListener:
Camera camera = arSceneView.getScene().getCamera();
Ray ray = new Ray(camera.getWorldPosition(), camera.getForward());
HitTestResult result = arSceneView.getScene().hitTest(ray);
if (result.getNode() != null && result.getDistance() <= SOME_THRESHOLD) {
    // Hit something
    doSomething(result.getNode());
}
Based on this thread: is there a way to process an image from the camera in QML without saving it?
Starting from the example in the docs, the capture() function saves the image to the Pictures location.
What I would like to achieve is to process the camera image every second using onImageCaptured, but I don't want to save it to the drive.
I've tried to implement a cleanup operation using the onImageSaved signal, but it's affecting onImageCaptured too.
As explained in this answer, you can bridge C++ and QML via the mediaObject. That can be done via objectName (as in the linked answer) or by using a dedicated Q_PROPERTY (more on that later). In either case you should end up with code like this:
QObject *source; // QML camera pointer obtained as described above
QObject *cameraRef = qvariant_cast<QMediaObject *>(source->property("mediaObject"));
Once you've got the hook to the camera, use it as the source for a QVideoProbe object, i.e.
QVideoProbe *probe = new QVideoProbe;
probe->setSource(cameraRef);
Connect the videoFrameProbed signal to an appropriate slot, i.e.
connect(probe, SIGNAL(videoFrameProbed(QVideoFrame)), this, SLOT(processFrame(QVideoFrame)));
and that's it: you can now process your frames inside the processFrame function. An implementation of such a function looks like this:
void YourClass::processFrame(QVideoFrame frame)
{
    QVideoFrame cFrame(frame);
    cFrame.map(QAbstractVideoBuffer::ReadOnly);

    int w {cFrame.width()};
    int h {cFrame.height()};

    QImage::Format f;
    if ((f = QVideoFrame::imageFormatFromPixelFormat(cFrame.pixelFormat())) == QImage::Format_Invalid)
    {
        QImage image(cFrame.size(), QImage::Format_ARGB32);
        // NV21-to-ARGB32 conversion!
        //
        // DECODING HAPPENS HERE on "image"
    }
    else
    {
        QImage image(cFrame.bits(), w, h, f);
        //
        // DECODING HAPPENS HERE on "image"
    }

    cFrame.unmap();
}
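To make the conversion placeholder in the first branch concrete, here is a rough sketch of an NV21-to-ARGB32 conversion (assumptions: the frame really is NV21, as is typical on Android cameras, and BT.601-style integer coefficients are acceptable; verify both on your device):

#include <QImage>

static QImage nv21ToArgb32(const uchar *data, int width, int height)
{
    QImage image(width, height, QImage::Format_ARGB32);
    const uchar *yPlane  = data;                  // width*height luma bytes
    const uchar *vuPlane = data + width * height; // interleaved V/U, half resolution

    for (int y = 0; y < height; ++y) {
        QRgb *line = reinterpret_cast<QRgb *>(image.scanLine(y));
        for (int x = 0; x < width; ++x) {
            const int yy = yPlane[y * width + x];
            // each V/U pair covers a 2x2 block of luma pixels
            const int vuIndex = (y / 2) * width + (x & ~1);
            const int v = vuPlane[vuIndex] - 128;
            const int u = vuPlane[vuIndex + 1] - 128;
            // integer approximation of the BT.601 YUV -> RGB transform
            const int r = qBound(0, yy + ((351 * v) >> 8), 255);
            const int g = qBound(0, yy - ((179 * v + 86 * u) >> 8), 255);
            const int b = qBound(0, yy + ((443 * u) >> 8), 255);
            line[x] = qRgb(r, g, b);
        }
    }
    return image;
}

The invalid-format branch above would then construct its image as QImage image = nv21ToArgb32(cFrame.bits(), w, h); before decoding.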
Two important implementation details here:
Android devices use a YUV format which is currently not supported by QImage, so it must be converted by hand. I've made the strong assumption here that all the invalid formats are YUV; that would be better managed via ifdef conditionals on the current OS.
The decoding can be quite costly, so you can skip frames (simply add a counter to this method, as in the sketch below) or offload the work to a dedicated thread. That also depends on the pace at which frames arrive. Reducing their size, e.g. taking only a portion of the QImage, can also greatly improve performance.
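A minimal sketch of the frame-skipping idea (the counter and the every-fifth-frame rate are illustrative, not prescriptive):

void YourClass::processFrame(QVideoFrame frame)
{
    static int frameCounter = 0;
    if (++frameCounter % 5 != 0)  // drop 4 frames out of 5
        return;

    // ... map, convert and decode as shown above ...
}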
For that matter, I would avoid the objectName approach for fetching the mediaObject altogether; instead I would register a new type so that the Q_PROPERTY approach can be used. I'm thinking of something along the lines of this:
class FrameAnalyzer : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QObject* source READ source WRITE setSource)

    QObject *m_source; // added for the sake of the READ function
    QVideoProbe probe;
    // ...

public slots:
    void processFrame(QVideoFrame frame);
};
where the setSource is simply:
bool FrameAnalyzer::setSource(QObject *source)
{
    m_source = source;
    return probe.setSource(qvariant_cast<QMediaObject *>(source->property("mediaObject")));
}
Once registered as usual, i.e.
qmlRegisterType<FrameAnalyzer>("FrameAnalyzer", 1, 0, "FrameAnalyzer");
you can directly set the source property in QML as follows:
// other imports
import FrameAnalyzer 1.0

Item {
    Camera {
        id: camera

        // camera stuff here

        Component.onCompleted: analyzer.source = camera
    }

    FrameAnalyzer {
        id: analyzer
    }
}
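For completeness, the qmlRegisterType call typically lives in main() before the QML is loaded; a minimal sketch (the qrc path is just an example):

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QtQml>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Must happen before the engine parses any QML that uses the type.
    qmlRegisterType<FrameAnalyzer>("FrameAnalyzer", 1, 0, "FrameAnalyzer");

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}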
A great advantage of this approach is readability and the tighter coupling between the Camera code and the processing code. That comes at the expense of a (slightly) higher implementation effort.
I am an Android programmer. Because of the many objects in my scene, my game is lagging.
I have a theory for removing the lag from my game:
if I can control rendering in Unity, I can remove the lag.
using UnityEngine;
using System.Collections;

public class Enemy : MonoBehaviour {
    GameObject object2;

    void Start() {
        GetComponent<Renderer>().enabled = false;
    }

    void Update() {
        object2 = GameObject.Find("TR");
        var distance = Vector3.Distance(gameObject.transform.position, object2.transform.position);
        print(distance);
        if (distance <= 80) {
            GetComponent<Renderer>().enabled = true;
        }
    }
}
It doesn't work. How can I have a boolean "render" flag so that an object is rendered when there is a collision
and hidden otherwise?
I want a zone such that every object inside the zone is rendered and everything outside it is not.
void OnTriggerEnter(Collider collision)
{
    if (collision.gameObject.tag == "zone")
    {
        GetComponent<Renderer>().enabled = true;
    }
    else
    {
        GetComponent<Renderer>().enabled = false;
    }
}
That doesn't work, and neither does this:
void OnTriggerEnter(Collider collision)
{
    if (collision.gameObject.tag == "zone")
    {
        gameObject.SetActive(false);
    }
    else
    {
        gameObject.SetActive(true);
    }
}
This is either already implemented in Unity (the engine frustum-culls renderers outside the camera's view) or implementing it yourself is a bad idea, because raycasts are expensive and you would need a lot of them. Try finding other problems that cause the lag in your game: disable feature after feature and note how many frames you get; this will give you the best overview of the problem. Look online for which methods are expensive (Instantiate, Destroy; also try merging the models you have, using fewer and faster shaders, loading fewer textures, and avoiding per-frame lookups such as GameObject.Find or the tag-based variants).
Here you will find a great document about optimization. It's prepared for mobile devices, but I hope you will find what you need: Unity Optimization Guide for x86 Android
I would recommend having your blue blobs in an object pool, and disabling the ones that leave your screen.
You know your position and you know the positions of the objects in the pool, so you can math out the distance in one direction, for instance behind you, and disable objects after a certain amount.
Raycasting or collision checks are unnecessary for this.
In your terrain-generation scripts, check for disabled pool objects; if one exists, it should be put ahead in the level and repositioned, or whatever logic you have there.
Don't instantiate and destroy unless you really need to; do it at level load instead of on the fly.
(It's expensive.)
There are some really good tutorials on the Unity page; have a look there.
They cover things like endless runners.
I am trying to create a game for Android and I have a problem with high-speed objects: they don't want to collide.
I have a Sphere with a Sphere Collider and Bouncy material, and a Rigidbody with these parameters: Gravity = false, Interpolate = Interpolate, Collision Detection = Continuous Dynamic.
I also have 3 walls with Box Colliders and Bouncy material.
This is my code for the Sphere:
function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function Awake() {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}
In Project Settings I set "Min Penetration For Penalty Force" = 0.001 and "Solver Iteration Count" = 50.
When I play, at the start it works fine (it bounces), but when the speed gets too high the Sphere just passes through the wall.
Can anyone help me?
Thanks.
Edited
var hit : RaycastHit;
var mainGameScript : MainGame;
var particles_splash : GameObject;

function Awake() {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}

function Update() {
    if (rigidbody.SweepTest(transform.forward, hit, 0.5))
        Debug.Log(hit.distance + "mts distance to obstacle");

    if (transform.position.y < -3) {
        mainGameScript.GameOver();
        //Application.LoadLevel("Menu");
    }
}

function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function OnCollisionEnter(collision : Collision) {
    Instantiate(particles_splash, transform.position, transform.rotation);
}
EDIT: added more info, answering the questions below
Fixed Timestep = 0.02, Maximum Allowed Timestep = 0.333
There is no difference between running the game in the editor player and on Android.
No. It looks OK when I set 0.01.
My paddle is a Box Collider without a Rigidbody; the walls are the same.
They are all in the same layer (when the speed is normal it all works). The values in PhysicsManager are the defaults (same as in the image) except "Solver Iteration Count" = 50.
No. When I change the speed it passes through the other wall.
I am using a standard cube, but I expand/shrink it to fit my screen and other objects. When I expand the wall more, it's OK, it bounces.
No. It's a simple project, a simple example from this video: http://www.youtube.com/watch?v=edfd1HJmKPY
I don't use gravity.
See:
Similar SO Question
A community script that uses raycasting to help manage fast objects
UnityAnswers post leading to the script in (2)
You could also try changing the fixed time step for physics. The smaller this value, the more often Unity calculates the physics of the scene. But be warned: making this value too small, say <= 0.005, will likely result in an unstable game, especially on a portable device.
The script above is best for bullets or small objects. You can also manually force rigid body collision tests:
public class example : MonoBehaviour {
    public RaycastHit hit;

    void Update() {
        if (rigidbody.SweepTest(transform.forward, out hit, 10))
            Debug.Log(hit.distance + "mts distance to obstacle");
    }
}
I think the main problem is the manipulation of the Rigidbody's velocity. I would try the following to solve the problem.
Redesign your code to ensure that IncreaseBallVelocity and every other manipulation of the Rigidbody are called within FixedUpdate. Check that there are no other manipulations of Transform.position.
Try replacing setting the velocity directly with AddForce or similar methods, so the physics engine has a better chance to calculate all dependencies.
If more items (the main player character, ...) are involved in the physics calculation, ensure that their code runs in FixedUpdate too.
Another point I stumbled upon is meshes that are scaled a lot. Having a GameObject with scale <= 0.01 or >= 100 definitely has a negative impact on physics calculations. According to the docs and this Unity forum entry from one of the gurus, you should avoid Transform.scale values != 1.
Still not happy? OK, then the next test is starting with high velocities but no acceleration. At this stage we want to know whether the high velocity itself or the acceleration is to blame for the problem. It would be interesting to know the velocity values at which the physics engine starts to fail; please post them so that we can compare them.
EDIT: Some more things to investigate
6.7 m/sec does not sound like that much, so I guess there is a special reason, or a combination of reasons, why things go wrong.
Is your Maximum Allowed Timestep high enough? For testing I suggest 5 to 10x the Fixed Timestep. Note that this might kill the frame rate, but that can be fixed later.
Is there any difference between running the game in editor player and on Android?
Did you notice any drop in frame rate because of the 0.01 Fixed Timestep? That would indicate that the physics engine might be in trouble.
Could it be that there are static colliders (objects having a collider but no Rigidbody) that are moved around or otherwise manipulated? This would cause heavy recalculations within PhysX.
What about the layers: are all the walls on the same layer, and are the involved layers configured appropriately in the collision detection matrix?
Does the no-bounce effect always happen at the same wall? If so, just copy the first wall and put it in place of the second one to see if there is something wrong with that specific wall.
If it's not too much effort, I would try to set up some standard cubes as walls, just to be sure that transform.scale is not to blame (I have had really bad experiences with this).
Do you manipulate gravity or TimeManager.timeScale from within a script?
BTW: are you using gravity? (Should be no problem just
I am developing an Android game with Box2D and use a fixed-timestep system for advancing the physics.
However, this system requires the Box2D positions to be interpolated. I read this article
and have implemented an interpolation method very much like the one it describes.
The method seems to work nicely on the computer, but on my phone the positions of objects are very jumpy. There is of course a big frame-rate difference between the PC and the phone, but I think this algorithm should not mind that.
Here is the gist of the code, if you don't feel like looking at the article:
void PhysicsSystem::smoothStates_()
{
    const float oneMinusRatio = 1.f - fixedTimestepAccumulatorRatio_;

    for (b2Body *b = world_->GetBodyList(); b != NULL; b = b->GetNext())
    {
        if (b->GetType() == b2_staticBody)
        {
            continue;
        }

        PhysicsComponent &c = PhysicsComponent::b2BodyToPhysicsComponent(*b);
        c.smoothedPosition_ =
            fixedTimestepAccumulatorRatio_ * b->GetPosition() +
            oneMinusRatio * c.previousPosition_;
        c.smoothedAngle_ =
            fixedTimestepAccumulatorRatio_ * b->GetAngle() +
            oneMinusRatio * c.previousAngle_;
    }
}
Does anyone know why my game is acting like this?
Thanks for the help
That code in and of itself doesn't appear to have any issues compared to the article. You might want to try posting this on https://gamedev.stackexchange.com/ and see if they have any recommendations.
Alternatively, here is a very well-written article about having a semi-fixed time step and decoupling the physics from the frame rate, which I imagine could be the problem. It isn't for Box2D, but reading it over might help you pinpoint the issue with your physics.
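As a cross-check against the first article, the update loop that feeds fixedTimestepAccumulatorRatio_ should look roughly like this (a sketch only; FIXED_TIMESTEP, MAX_STEPS, and the iteration counts stand in for your own values, and resetSmoothStates_ is the article's snapshot of previousPosition_/previousAngle_):

#include <algorithm>
#include <cmath>

void PhysicsSystem::update(float dt)
{
    // Accumulate real frame time and consume it in fixed-size slices.
    fixedTimestepAccumulator_ += dt;
    const int steps = static_cast<int>(
        std::floor(fixedTimestepAccumulator_ / FIXED_TIMESTEP));
    if (steps > 0)
    {
        fixedTimestepAccumulator_ -= steps * FIXED_TIMESTEP;
    }
    // Leftover fraction of a step; smoothStates_() blends with this.
    fixedTimestepAccumulatorRatio_ = fixedTimestepAccumulator_ / FIXED_TIMESTEP;

    const int stepsClamped = std::min(steps, MAX_STEPS);
    for (int i = 0; i < stepsClamped; ++i)
    {
        resetSmoothStates_();  // snapshot previousPosition_ / previousAngle_
        world_->Step(FIXED_TIMESTEP, velocityIterations_, positionIterations_);
    }
    world_->ClearForces();

    smoothStates_();           // the interpolation shown in the question
}

A common source of jumpiness is snapshotting the previous state at the wrong point (e.g. after Step instead of before it), so it may be worth diffing your loop against the article's version line by line.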