In my application I'm going to implement face-recognition login, and I plan to use the OpenCV library for face recognition. Please point me to sample code and tutorials.
Thanks in advance.
Well, my colleagues and I did some investigation on face recognition last year, and these are some of our considerations about using integrated recognition tools vs. JavaCV (the Java bindings for OpenCV):
Please check the tutorials below:
Face Detection on Android Part-I ( Wayback link )
Face Detection on Android Part-II ( Wayback link )
Hope it helps :)
You can use the NDK to access the C/C++ OpenCV API:
docs
beginner tutorial
void DetectMyFace()
{
    // image structure in OpenCV
    IplImage *inImg = 0;
    // face detector classifier
    CvHaarClassifierCascade *clCascade = 0;
    CvMemStorage *mStorage = 0;
    CvSeq *faceRectSeq;

    inImg = cvLoadImage("2.jpg");
    mStorage = cvCreateMemStorage(0);
    clCascade = (CvHaarClassifierCascade *)cvLoad("haarcascade_frontalface_default.xml", 0, 0, 0);

    if (!inImg || !mStorage || !clCascade)
    {
        printf("Initialization error: %s", (!inImg) ? "can't load image" :
                                           (!clCascade) ? "can't load Haar cascade" :
                                           "unable to allocate memory storage");
        return;
    }

    faceRectSeq = cvHaarDetectObjects(inImg, clCascade, mStorage,
                                      1.2,                      // scale factor
                                      3,                        // minimum neighbors
                                      CV_HAAR_DO_CANNY_PRUNING, // skip regions unlikely to contain a face
                                      cvSize(25, 25));          // minimum face size

    const char *winName = "Display Face";
    cvNamedWindow(winName, CV_WINDOW_AUTOSIZE);

    // draw a rectangle around each detected face
    for (int i = 0; i < (faceRectSeq ? faceRectSeq->total : 0); i++)
    {
        CvRect *r = (CvRect *)cvGetSeqElem(faceRectSeq, i);
        CvPoint p1 = { r->x, r->y };
        CvPoint p2 = { r->x + r->width, r->y + r->height };
        cvRectangle(inImg, p1, p2, CV_RGB(0, 255, 0), 1, 4, 0);
    }

    cvShowImage(winName, inImg);
    cvWaitKey(0);
    cvDestroyWindow(winName);

    // release the resources
    cvReleaseImage(&inImg);
    if (clCascade) cvReleaseHaarClassifierCascade(&clCascade);
    if (mStorage) cvReleaseMemStorage(&mStorage);
}
I have already made an Android app for Face Recognition using OpenCV. You can check it out: https://github.com/yaylas/AndroidFaceRecognizer
I have a Qt Android project in C++. When I call the "rtlsdr_get_device_name" function it returns "Generic RTL2832U OEM", but when I call the "rtlsdr_open" function it returns -3. Please help me figure out how to solve this problem.
Thank you, silicontrip.
My project is a Qt Android project.
rtlsdr_dev_t *RtlSdrDevice;
int devicecount = rtlsdr_get_device_count();
if (devicecount != 0)
{
    QString rtlname = rtlsdr_get_device_name(0);
    // this function returns "Generic RTL2832U OEM"
    retvalue = rtlsdr_open(&RtlSdrDevice, 0);
    // this function returns -3
    if (retvalue == 0) // if the RTL opened correctly
    {
        ....
    }
    ...
}
The RTL-SDR device doesn't open successfully.
We have a native Android app that uses WebRTC, and we need to find out what video codecs are supported by the host device. (VP8 is always supported but H.264 is subject to the device having a compatible chipset.)
The idea is to create an offer and get the supported video codecs from the SDP. We can do this in a web app as follows:
const pc = new RTCPeerConnection();
if (pc.addTransceiver) {
    pc.addTransceiver('video');
    pc.addTransceiver('audio');
}
pc.createOffer(...);
Is there a way to do something similar on Android? It's important that we don't need to request camera access to create the offer.
Create a VideoEncoderFactory object and call getSupportedCodecs(). This will return a list of codecs that can be used. Be sure to create the PeerConnectionFactory first.
PeerConnectionFactory.InitializationOptions initializationOptions =
    PeerConnectionFactory.InitializationOptions.builder(this)
        .setEnableVideoHwAcceleration(true)
        .createInitializationOptions();
PeerConnectionFactory.initialize(initializationOptions);

VideoEncoderFactory videoEncoderFactory =
    new DefaultVideoEncoderFactory(eglBase.getEglBaseContext(), true, true);

VideoCodecInfo[] supportedCodecs = videoEncoderFactory.getSupportedCodecs();
for (VideoCodecInfo codec : supportedCodecs) {
    Log.d("Codecs", "Supported codec: " + codec.name);
}
I think this is what you are looking for:
private static void codecs() {
    MediaCodecInfo[] codecInfos = new MediaCodecList(MediaCodecList.ALL_CODECS).getCodecInfos();
    for (MediaCodecInfo codecInfo : codecInfos) {
        Log.i("Codec", codecInfo.getName());
        for (String supportedType : codecInfo.getSupportedTypes()) {
            Log.i("Codec", supportedType);
        }
    }
}
You can check the example at https://developer.android.com/reference/android/media/MediaCodecInfo.html
After struggling for a few hours to make my app detect this QR code:
I realized that the problem was in the QR code's appearance. After inverting the colors, detection worked perfectly.
Is there a way to make the Vision API detect the first QR code? I tried enabling all symbologies, but it did not work. I guess it is possible, because the QR Code Reader app detects it.
I improved Google's example app "barcode-reader" to detect both color-inverted barcodes and regular ones.
Here is a link to Google's example app:
https://github.com/googlesamples/android-vision/tree/master/visionSamples/barcode-reader
I did so by editing the "CameraSource" class,
package: "com.google.android.gms.samples.vision.barcodereader.ui.camera".
I added a field: private boolean isInverted = false;
and changed the function void setNextFrame(byte[] data, Camera camera):
void setNextFrame(byte[] data, Camera camera) {
    synchronized (mLock) {
        if (mPendingFrameData != null) {
            camera.addCallbackBuffer(mPendingFrameData.array());
            mPendingFrameData = null;
        }

        if (!mBytesToByteBuffer.containsKey(data)) {
            Log.d(TAG,
                "Skipping frame. Could not find ByteBuffer associated with the image " +
                "data from the camera.");
            return;
        }

        mPendingTimeMillis = SystemClock.elapsedRealtime() - mStartTimeMillis;
        mPendingFrameId++;

        // Invert every other frame so that both regular and
        // color-inverted barcodes get a chance to be detected.
        if (!isInverted) {
            for (int y = 0; y < data.length; y++) {
                data[y] = (byte) ~data[y];
            }
            isInverted = true;
        } else {
            isInverted = false;
        }

        mPendingFrameData = mBytesToByteBuffer.get(data);

        // Notify the processor thread if it is waiting on the next frame (see below).
        mLock.notifyAll();
    }
}
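For reference, the inversion in that loop is just a bitwise NOT over the frame bytes. Strictly, for NV21 preview frames only the leading Y (luma) plane matters to a barcode detector, so the inversion can be limited to the first width * height bytes, leaving the interleaved VU chroma plane untouched. A minimal stand-alone sketch of the idea (the class and method names are illustrative):

```java
import java.util.Arrays;

public class FrameInvert {
    // NV21 stores width*height luma bytes first, followed by the
    // interleaved VU chroma plane. Inverting only the luma prefix
    // turns a light-on-dark code into the dark-on-light form the
    // detector expects, without disturbing the chroma data.
    static void invertLuma(byte[] nv21, int width, int height) {
        int lumaLength = width * height;
        for (int i = 0; i < lumaLength; i++) {
            nv21[i] = (byte) ~nv21[i];
        }
    }

    public static void main(String[] args) {
        // A hypothetical 2x2 frame: 4 luma bytes + 2 chroma bytes.
        byte[] frame = {0, (byte) 255, 100, (byte) 200, 127, 127};
        invertLuma(frame, 2, 2);
        // Luma bytes are flipped; the last two (chroma) bytes are unchanged.
        System.out.println(Arrays.toString(frame));
    }
}
```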
I think this is still an open issue; please see the link for details. One workaround, as stated by a developer:
Right, the barcode API generally doesn't support color-inverted codes. There's no parameter or option to control this at the moment. Though some APIs support them, I don't believe it's a common feature.
For a workaround, you could preprocess the colors in the bitmap before passing them to the barcode API (perhaps inverting colors on alternate frames).
Hope this helps.
I'm starting to use Qt and I'm trying to get GPS coordinates on my iOS and Android devices with C++, following the example in the official documentation.
The source is not null, but the positionUpdated slot is never called.
Any tips would be welcome, thank you.
ConnectionDeviceContextForDebug::ConnectionDeviceContextForDebug(StringDebugDisplayer *debugDisplayer) :
    _debugDisplayer(debugDisplayer)
{
    QGeoPositionInfoSource *source = QGeoPositionInfoSource::createDefaultSource(this);
    if (source) {
        connect(source, SIGNAL(positionUpdated(QGeoPositionInfo)),
                this, SLOT(positionUpdated(QGeoPositionInfo)));
        source->setUpdateInterval(100);
        source->startUpdates();
        _debugDisplayer->setText("source found");
        //source->requestUpdate();
    }
}

void ConnectionDeviceContextForDebug::positionUpdated(const QGeoPositionInfo &info)
{
    _debugDisplayer->setText("position updated");
}
I'm currently working with AS3 and Flex 4.6 to create an Android application.
I'm using the front camera and attaching it to a local Video object that I add as a child to a VideoDisplay object.
When I debug on my computer everything works perfectly, but when I build the project and run it on my Android device, my local video display becomes a gray grid.
As an example, I took a picture of the device.
I wrote this method, based on a post here on Stack Overflow, to initialize the front and back cameras.
private function InitCamera():void {
    var CamCount:int = ( Camera.isSupported ) ? Camera.names.length : 0;
    for ( var i:int = 0; i < CamCount; i++ ) {
        var cam:Camera = Camera.getCamera( String( i ) );
        if ( cam ) {
            if ( cam.position == CameraPosition.FRONT ) {
                CamFront = cam;
                continue;
            }
            if ( cam.position == CameraPosition.BACK ) {
                CamBack = cam;
                continue;
            }
            if ( cam.position == CameraPosition.UNKNOWN ) {
                CamFront = cam;
                continue;
            }
        }
    }
}
And I wrote this method to create a Video object, attach the front camera as the default camera, and add the Video as a child to the VideoDisplay:
private function SetUpLocalVideo():void {
    Debug( "Setting up local video" );
    LocalVideo = new Video( this.LVideo.width, this.LVideo.height );
    LocalVideo.attachCamera( CamFront );
    LVideo.addChild( LocalVideo ); // LVideo is the VideoDisplay
}
I've been searching the internet for a solution, but so far I have failed to find one.
Has anyone else had this problem before? Can you share your solution with me?
I appreciate the help.
Thanks.
Set the render mode to direct in your application.xml:
<renderMode>direct</renderMode>
If it still doesn't work, change the DPI setting of your main Flex application to 240.