I've just started developing a face-detection project based on augmented reality (AR) for Android phones. I'm new to AR: so far I have implemented and evaluated algorithms for facial features, but I have no idea how to build the AR part. Could you tell me where to start, whether I need any additional tools to create an AR application (for example plugins for the Eclipse IDE), and whether there is another IDE that works better than Eclipse for AR? Please check the links below and give your comments, because my project looks very much like what is described there:
http://www.readwriteweb.com/archives/recognizr_facial_recognition_coming_to_android_phones.php
http://www.blackweb20.com/2010/03/01/recognizr-facial-recognition-on-android/#.TzNswE7xodM
thank you!
The AR part itself is easy. It is basically just an overlay on top of the camera preview, and you can put whatever you like on that overlay. One working example is contained in this project:
http://sourceforge.net/projects/javaocr/
(there are countless others)
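To illustrate the overlay idea, here is a minimal sketch (the class names CameraPreview and OverlayView are mine, not from the project above): a camera preview with a transparent custom View stacked on top of it.

public class AROverlayActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        FrameLayout root = new FrameLayout(this);
        root.addView(new CameraPreview(this)); // your SurfaceView-based camera preview at the back
        root.addView(new OverlayView(this));   // transparent View that draws markers in onDraw()
        setContentView(root);
    }
}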
The tricky part starts from here. For face detection one typically uses Haar cascades, and there are implementations in OpenCV (and countless others), but it is questionable whether you can make it performant enough in Android Java code to be really useful (you will have to do it in native code). And this is only face detection; it tells you "hey dude, here is a face, maybe", not whose face it is (identification).
As for the IDE, I prefer IntelliJ IDEA, as it is simply a better Java IDE (somebody will lynch me for this ;) ) and it has better Android support. It is a commercial product, but the free Community Edition is available and an individual license is not that expensive.
You can try this code:
import android.app.Activity;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.PointF;
import android.media.FaceDetector;
import android.os.Bundle;
import android.view.View;

public class FaceDetectionActivity extends Activity
{
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        //setContentView(R.layout.main);
        setContentView(new MyView(this));
    }

    private class MyView extends View
    {
        private Bitmap myBitmap;
        private int width, height;
        private FaceDetector.Face[] detectedFaces;
        private int NUMBER_OF_FACES = 4;
        private FaceDetector faceDetector;
        private int NUMBER_OF_FACE_DETECTED;
        private float eyeDistance;

        public MyView(Context context)
        {
            super(context);
            // FaceDetector requires an RGB_565 bitmap
            BitmapFactory.Options bitmapFactoryOptions = new BitmapFactory.Options();
            bitmapFactoryOptions.inPreferredConfig = Bitmap.Config.RGB_565;
            myBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.faces, bitmapFactoryOptions);
            width = myBitmap.getWidth();
            height = myBitmap.getHeight();
            detectedFaces = new FaceDetector.Face[NUMBER_OF_FACES];
            faceDetector = new FaceDetector(width, height, NUMBER_OF_FACES);
            NUMBER_OF_FACE_DETECTED = faceDetector.findFaces(myBitmap, detectedFaces);
        }

        @Override
        protected void onDraw(Canvas canvas)
        {
            canvas.drawBitmap(myBitmap, 0, 0, null);
            Paint myPaint = new Paint();
            myPaint.setColor(Color.GREEN);
            myPaint.setStyle(Paint.Style.STROKE);
            myPaint.setStrokeWidth(3);
            for (int count = 0; count < NUMBER_OF_FACE_DETECTED; count++)
            {
                FaceDetector.Face face = detectedFaces[count];
                PointF midPoint = new PointF();
                face.getMidPoint(midPoint);
                eyeDistance = face.eyesDistance();
                // draw a square around each detected face, centered between the eyes
                canvas.drawRect(midPoint.x - eyeDistance, midPoint.y - eyeDistance,
                        midPoint.x + eyeDistance, midPoint.y + eyeDistance, myPaint);
            }
        }
    }
}
This code detects faces in a Bitmap, so you will also have to apply the same technique to frames coming from the camera.
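If you target API level 14 or higher, a simpler route for live frames is the camera's built-in face detection (android.hardware.Camera). A minimal sketch, assuming camera is an already-opened Camera whose preview is running on a SurfaceView:

camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        // faces[i].rect is in the camera's coordinate space (-1000..1000);
        // map it to view coordinates before drawing your overlay rectangles
        Log.d("FaceDetection", "faces found: " + faces.length);
    }
});
camera.startFaceDetection(); // call only after startPreview()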
Cheers.
Related
I am developing a watch face for Android Wear and I want to make the Canvas hardware accelerated.
In the manifest I declared it like below:
<application
android:hardwareAccelerated="true"
..>
<service
android:name=".WatchFaceMain"
android:hardwareAccelerated="true"
..>
</application>
But when I check it from inside my onDraw(Canvas canvas, Rect bounds) method, canvas.isHardwareAccelerated() returns false, which clearly means my canvas is not hardware accelerated.
@Override
public void onDraw(Canvas canvas, Rect bounds) {
    ...
    boolean isAccelerated = canvas.isHardwareAccelerated();
    .......
}
How do I make this canvas Hardware Accelerated?
Finally found a solution for this issue, though it wasn't a straightforward or easy one.
First I had to copy the whole android.support.wearable.watchface.CanvasWatchFaceService class from the library; this is the class my WatchFaceMain extends.
Then I had to edit the draw() method inside the Engine class like below:
public abstract class CanvasWatchFaceService extends WatchFaceService {
.....
public class Engine extends android.support.wearable.watchface.WatchFaceService.Engine {
...
private void draw(SurfaceHolder holder) {
this.mDrawRequested = false;
Canvas canvas = holder.getSurface().lockHardwareCanvas();
if(canvas != null) {
try {
this.onDraw(canvas, holder.getSurfaceFrame());
} finally {
holder.getSurface().unlockCanvasAndPost(canvas);
}
}
}
}
}
I edited two lines inside draw(), like below.
from
Canvas canvas = holder.lockCanvas();
To
Canvas canvas = holder.getSurface().lockHardwareCanvas();
And From
holder.unlockCanvasAndPost(canvas);
To
holder.getSurface().unlockCanvasAndPost(canvas);
I still wonder why CanvasWatchFaceService doesn't offer an easier way to request a hardware-accelerated canvas!
A more detailed solution is available in this article.
Hopefully this answer helps someone else too.
With newer versions of the SDK it's much easier. The CanvasWatchFaceService.Engine class has a boolean useHardwareCanvas constructor parameter. In your Engine subclass, pass true to the superclass constructor:
public Engine() {
super(true);
}
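For context, a minimal sketch of where that call lives (the class name MyWatchFace is just an example):

public class MyWatchFace extends CanvasWatchFaceService {
    @Override
    public Engine onCreateEngine() {
        return new Engine();
    }

    private class Engine extends CanvasWatchFaceService.Engine {
        Engine() {
            super(true); // useHardwareCanvas: request a hardware-accelerated canvas
        }

        @Override
        public void onDraw(Canvas canvas, Rect bounds) {
            // canvas.isHardwareAccelerated() should now return true on supported devices
        }
    }
}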
With the new APIs (https://developer.android.com/reference/kotlin/androidx/wear/watchface/WatchFaceService), use CanvasType.HARDWARE:
override suspend fun createWatchFace(
surfaceHolder: SurfaceHolder,
watchState: WatchState,
complicationSlotsManager: ComplicationSlotsManager,
currentUserStyleRepository: CurrentUserStyleRepository
) = WatchFace(
WatchFaceType.ANALOG,
object : Renderer.CanvasRenderer2<MySharedAssets>(
surfaceHolder,
currentUserStyleRepository,
watchState,
CanvasType.HARDWARE,
interactiveDrawModeUpdateDelayMillis = 16,
clearWithBackgroundTintBeforeRenderingHighlightLayer = true
) {
    // render() and renderHighlightLayer() overrides omitted
}
)
I have 2 separate Android apps (APKs).
App 1 creates a SurfaceView and should provide AIDL methods for other apps to obtain a SurfaceHolder for that SurfaceView, so the other apps will be able to draw on the view displayed inside app 1.
I was able to transfer the Surface itself via AIDL easily, since it implements the Parcelable interface.
// IMainService.aidl
package com.commonsware.cwac.preso.demo.service;
import android.view.Surface;
interface IMainService {
Surface getSurf();
}
But the third-party SDK needs a SurfaceHolder to draw on. So the question is: how can I create a SurfaceHolder for a given Surface instance, or how can I transfer a SurfaceHolder via AIDL? Are there any examples of how to implement Parcelable for SurfaceHolder?
My use case (in case it matters): app 1 starts as a Service and draws its UI on a Presentation screen, and I need other apps to be able to display navigation data via the Nokia HERE Mobile SDK inside app 1. I need a SurfaceHolder in order to use that API.
Any help will be appreciated.
After a while I finally created a solution that works well with the HERE OffscreenRender API.
One app creates the surface and provides the AIDL interface (see the question above). Through this AIDL I was able to send the Surface object correctly and then, on the app 2 side, build a SurfaceHolder from it.
public class MySurfaceHolder implements SurfaceHolder {

    private Surface surface;
    private Rect rect;

    MySurfaceHolder(Surface surface, Rect rect) {
        this.surface = surface;
        this.rect = rect;
    }

    ///....

    @Override
    public Canvas lockCanvas() {
        return surface.lockCanvas(this.rect);
    }

    @Override
    public Canvas lockCanvas(Rect rect) {
        return surface.lockCanvas(rect);
    }

    @Override
    public void unlockCanvasAndPost(Canvas canvas) {
        surface.unlockCanvasAndPost(canvas);
    }

    @Override
    public Rect getSurfaceFrame() {
        return rect;
    }

    @Override
    public Surface getSurface() {
        return surface;
    }
}
Basically it's my own SurfaceHolder, which is initialized by me instead of by the system. This approach also requires extending the AIDL so the Rect size can be requested as well.
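For reference, a rough sketch of the two sides (the getSurfFrame() method and the variable names here are assumptions, not the exact interface from the question):

// App 1 (service side): hand out the Surface of the hosted SurfaceView.
// "surfaceView" is assumed to be the SurfaceView created by app 1.
private final IMainService.Stub binder = new IMainService.Stub() {
    @Override
    public Surface getSurf() {
        return surfaceView.getHolder().getSurface();
    }

    @Override
    public Rect getSurfFrame() { // hypothetical extra AIDL method for the size
        return surfaceView.getHolder().getSurfaceFrame();
    }
};

// App 2 (client side), after binding to the service:
// SurfaceHolder holder = new MySurfaceHolder(service.getSurf(), service.getSurfFrame());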
Hope that will be helpful for someone.
In Android I have taken the rotating sphere example given here. It creates a simple app showing a rotating sphere (the Earth).
Now, in a class derived from GLSurfaceView, I wait for an event (like a touch-screen event) in order to exchange the renderer. I want the current renderer to stop rendering, and the GLSurfaceView should use a different renderer instead (to display some other object).
I have tried to use the following code:
public class MyGLSurfaceView extends GLSurfaceView {
Context mycontext;
MyGLSurfaceView(Context context) {
super(context);
mycontext = context;
}
public boolean onTouchEvent(MotionEvent event) {
int x = (int)event.getX();
int y = (int)event.getY();
Log.d("onTouchEvent",String.format("keyCode: %d coords: %d %d", event.getActionMasked(), x, y));
GlRenderer renderer = new GlRenderer(mycontext);
setRenderer(renderer);
return super.onTouchEvent(event);
}
}
which gives the following error:
FATAL EXCEPTION: main
Process: com.jimscosmos.opengltexturedsphere, PID: 25708
java.lang.IllegalStateException: setRenderer has already been called for this instance.
I guess I have to stop/remove/destroy the 'old' renderer, but I did not find anything useful for that in the documentation. Maybe my approach is completely wrong? What else could I do? How do I do it right?
If you insist, technically it can be achieved using a 'mediator', as suggested here: Change GlSurfaceView renderer.
However, I think this is overhead you really do not need. Simply prepare the various objects up front and change the rendering properties within a single renderer.
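A minimal sketch of that idea (the names are mine, not from the question): keep one renderer for the whole lifetime of the view and switch what it draws, posting the change onto the GL thread with queueEvent().

public class SwitchingRenderer implements GLSurfaceView.Renderer {
    private volatile int scene = 0; // 0 = sphere, 1 = other object

    public void setScene(int scene) { this.scene = scene; }

    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { /* load resources for both scenes */ }
    @Override public void onSurfaceChanged(GL10 gl, int width, int height) { gl.glViewport(0, 0, width, height); }

    @Override
    public void onDrawFrame(GL10 gl) {
        if (scene == 0) {
            // draw the rotating sphere
        } else {
            // draw the other object
        }
    }
}

// In onTouchEvent(), instead of calling setRenderer() a second time:
// queueEvent(new Runnable() { public void run() { renderer.setScene(1); } });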
I'm building a simple application which acts similar to the built-in camera application: it takes a photo and saves it locally, with some additional actions. First I took FireMonkey's TCameraComponent, placed it on the form and transferred the image from it to a TImage. It works OK; I can change the resolution, control the flash and so on. The problem is that the preview frame rate is very low at good quality. If I switch the camera to 320x240, it works fast enough, but the quality is poor and I have to switch to a higher resolution to take a good photo, losing time on re-focusing and light adaptation. If I set a high resolution (such as 1280x720) for the preview, it slows down to about 3-5 fps.
So I need to use another technique to access the camera. The app interface must remain unchanged during usage, so I can't use TakePhotoFromCameraAction. I'm looking at the JNI interfaces: I need to access the camera via JNI calls and show a preview on some image object on my form. I use the following code to get the preview (I lack understanding of native Android, so it may look crazy):
J.Cam := TJCamera.JavaClass.open(0); // J.Cam is JCamera object
J.View := TJView.JavaClass.init( TAndroidHelper.Context ); // JView
J.ViewParams := TJViewGroup_LayoutParams.JavaClass.init(Width, Height); // i got Width and Height from J.Cam.getParameters()
TAndroidHelper.Activity.addContentView( J.View, J.ViewParams );
J.SurfaceView := TJSurfaceView.JavaClass.init( J.View.getContext );
J.LayoutParameters := TJViewGroup_LayoutParams.JavaClass.init(Width, Height);
J.SurfaceView.setLayoutParams( J.LayoutParameters );
J.Cam.setPreviewDisplay( J.SurfaceView.getHolder );
J.Cam.startPreview;
and it doesn't show anything except a black screen. The Google developer reference says: "If you are using SurfaceView, you will need to register a SurfaceHolder.Callback with addCallback(SurfaceHolder.Callback) and wait for surfaceCreated(SurfaceHolder) before calling setPreviewDisplay() or starting preview."
So as the next step I should create the callback class, extending SurfaceView and implementing SurfaceHolder.Callback. Many samples can be found, like this one:
public class TestActivity extends Activity {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        setContentView(new MySurfaceView(this));
    }

    public class MySurfaceView extends SurfaceView implements SurfaceHolder.Callback {

        public MySurfaceView(Context context) {
            super(context);
            getHolder().addCallback(this);
        }

        @Override
        public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

        @Override
        public void surfaceCreated(SurfaceHolder holder) { }

        @Override
        public void surfaceDestroyed(SurfaceHolder holder) { }
    }
}
Now I have to translate this code to Delphi. The Java code is classes only, but Delphi's JNI bridge uses interfaces AND classes, so I can't figure out how to represent this:
class MySurfaceView extends SurfaceView implements SurfaceHolder.Callback
since both JNI's JSurfaceView and JSurfaceHolder_Callback are interfaces.
So, does anybody know how to do this (or a completely different way to access the camera)?
[EDIT] Ignore this post, it was a noob's mistake [/EDIT]
I started Android last week (= I'm pretty new to it :) ) and I'm banging my head on a problem:
I'm trying to perform raycasting to get the objects under a point on the screen.
I found out that the GLU.gluUnProject method was what I needed and, after a decent amount of failures, I found a solution.
I've copied the MatrixGrabber, MatrixStack and MatrixTrackingGL classes into my project; they seem fine.
The method I use goes as follows:
static public Vertex unProject( float x, float y, World world )
{
world.mg = new MatrixGrabber();
world.mg.getCurrentState(world.gl);
float[] pos = new float[4];
GLU.gluUnProject( x, y, 0f,
world.mg.mModelView, 0,
world.mg.mProjection, 0,
world.view().get_size(), 0,
pos, 0);
return new Vertex(pos[0], pos[1], pos[2]);
}
Vertex is a data holder with x, y, z floats, and world.mg is a MatrixGrabber.
World extends GLSurfaceView; I do the GLWrapper replacement in the constructor:
public World( Context context )
{
super(context);
[...]
//allows matrix manipulation
setGLWrapper( new GLWrapper()
{
public GL wrap(GL gl)
{
return new MatrixTrackingGL(gl);
}
});
}
and I make sure that all the variables are instantiated when I make the call
but still I can't get this to work: the app crashes badly on the
world.mg.getCurrentState(world.gl);
call.
It also crashes on getCurrentModelView(gl) and getCurrentProjection(gl).
I'm using Android 1.5, but I tried other versions up to 3: same thing.
I don't really know which version of OpenGL ES I'm using; GL10 is used everywhere. I don't know whether that is important; everything I've read concerned the GL10 type.
If anyone has a clue, a piece of advice, a workaround or a solution, I'd be happy happy happy.
And anyway, thanks for reading :)
private void getMatrix(GL10 gl, int mode, float[] mat) {
MatrixTrackingGL gl2 = new MatrixTrackingGL(gl);
gl2.glMatrixMode(mode);
gl2.getMatrix(mat, 0);
}
Replace the cast on the first line of getMatrix() in MatrixGrabber.java with the version above.
My World class extended GLSurfaceView and was using MatrixTrackingGL as a wrapper.
The problem was that the Viewport also extended GLSurfaceView and was NOT using MatrixTrackingGL... stupid.
Now World doesn't extend anything, the Viewport (which extends GLSurfaceView) installs the wrapper, and everything's fine.
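In other words, whichever view actually owns the GL context must install the wrapper. A minimal sketch (the class name Viewport is assumed), mirroring the setGLWrapper call from the question:

public class Viewport extends GLSurfaceView {
    public Viewport(Context context) {
        super(context);
        // install the tracking wrapper on the view that owns the GL context,
        // otherwise MatrixGrabber cannot read the matrices
        setGLWrapper(new GLWrapper() {
            public GL wrap(GL gl) {
                return new MatrixTrackingGL(gl);
            }
        });
    }
}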