I am preparing an Android application in which I have to detect the movement of eyes. I am able to achieve this on still images, but I want it to work on live eyes.
I am not sure whether the proximity sensor can be used to detect eyes, the way the Smart Stay feature does.
Please suggest ideas on how to implement this.
You can use the front camera to detect the eyes and eye blinks. Use the Mobile Vision API (FaceDetector) for detecting the eyes.
Code for eye tracking:
public class FaceTracker extends Tracker<Face> {

    private static final float PROB_THRESHOLD = 0.7f;
    private static final String TAG = FaceTracker.class.getSimpleName();

    private boolean leftClosed;
    private boolean rightClosed;

    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        // Track the open/closed state of each eye from the classifier probabilities.
        if (leftClosed && face.getIsLeftEyeOpenProbability() > PROB_THRESHOLD) {
            leftClosed = false;
        } else if (!leftClosed && face.getIsLeftEyeOpenProbability() < PROB_THRESHOLD) {
            leftClosed = true;
        }
        if (rightClosed && face.getIsRightEyeOpenProbability() > PROB_THRESHOLD) {
            rightClosed = false;
        } else if (!rightClosed && face.getIsRightEyeOpenProbability() < PROB_THRESHOLD) {
            rightClosed = true;
        }

        // Publish the resulting state (left eye closed, right eye closed, or both eyes open).
        if (leftClosed && !rightClosed) {
            EventBus.getDefault().post(new LeftEyeClosedEvent());
        } else if (rightClosed && !leftClosed) {
            EventBus.getDefault().post(new RightEyeClosedEvent());
        } else if (!leftClosed && !rightClosed) {
            EventBus.getDefault().post(new NeutralFaceEvent());
        }
    }
}
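The LeftEyeClosedEvent, RightEyeClosedEvent and NeutralFaceEvent classes are not part of the Vision API; they are assumed here to be plain marker classes that EventBus subscribers elsewhere in the app listen for, e.g.:

// Hypothetical marker events consumed by EventBus subscribers elsewhere in the app.
public final class LeftEyeClosedEvent {}
public final class RightEyeClosedEvent {}
public final class NeutralFaceEvent {}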
// Method that creates the detector and camera source and hooks up the FaceTracker
private void createCameraResources() {
    Context context = getApplicationContext();

    // Create and set up the face detector
    mFaceDetector = new FaceDetector.Builder(context)
            .setProminentFaceOnly(true) // optimize for a single, relatively large face
            .setTrackingEnabled(true) // enable face tracking
            .setClassificationType(/* eyes open and smile */ FaceDetector.ALL_CLASSIFICATIONS)
            .setMode(FaceDetector.FAST_MODE) // for one face this is OK
            .build();

    // Now that we've got a detector, create a processor pipeline to receive the detection results
    mFaceDetector.setProcessor(new LargestFaceFocusingProcessor(mFaceDetector, new FaceTracker()));

    // Check whether the detector's native dependencies are available yet
    if (!mFaceDetector.isOperational()) {
        Log.w(TAG, "createCameraResources: detector NOT operational");
    } else {
        Log.d(TAG, "createCameraResources: detector operational");
    }

    // Create a camera source that will capture video frames, using the front camera
    mCameraSource = new CameraSource.Builder(this, mFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30f)
            .build();
}
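Note that the CameraSource built above still has to be started before any frames reach the detector. A minimal sketch, assuming this sits in the same activity and the CAMERA runtime permission has already been granted:

// Start and stop the camera source with the activity lifecycle.
@Override
protected void onResume() {
    super.onResume();
    try {
        mCameraSource.start(); // begins delivering frames to mFaceDetector
    } catch (IOException e) {
        Log.e(TAG, "Unable to start camera source", e);
        mCameraSource.release();
        mCameraSource = null;
    }
}

@Override
protected void onPause() {
    super.onPause();
    if (mCameraSource != null) {
        mCameraSource.stop();
    }
}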
No, you can't use the proximity sensor for eye detection or tracking. Give OpenCV a shot.
Link: OpenCV
GitHub: OpenCV on GitHub
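For reference, a minimal sketch of what eye detection with OpenCV's Java bindings could look like on a camera frame. The cascade path and the assumption that the frame arrives as an RGBA Mat are mine, not from the original answer:

// Hypothetical sketch: detect eyes in a camera frame with OpenCV's Java bindings.
// Assumes the OpenCV Android SDK is initialized and haarcascade_eye.xml has been
// copied to a readable path (cascadePath).
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class EyeDetector {
    private final CascadeClassifier eyeCascade;

    public EyeDetector(String cascadePath) {
        eyeCascade = new CascadeClassifier(cascadePath);
    }

    // Returns the bounding boxes of eyes found in an RGBA camera frame.
    public Rect[] detect(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
        MatOfRect eyes = new MatOfRect();
        eyeCascade.detectMultiScale(gray, eyes);
        return eyes.toArray();
    }
}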
How do I capture images or videos from the camera2 API's wide-angle camera, or the telephoto camera?
I know how to handle camera capture for the front and back cameras.
I just can't understand how to open the camera and choose the wide-angle/telephoto camera.
I guess it has something to do with setting one of the following:
CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
CameraCharacteristics.getPhysicalCameraIds()
CameraCharacteristics.getAvailablePhysicalCameraRequestKeys()
CameraDevice.createCaptureSession(SessionConfiguration config)
CameraCharacteristics.LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE
But I fail to understand the scenario for setting these up, and I didn't find any good explanation.
I would appreciate any kind of tutorial or explanation.
Last question: how do I test this with no physical device? That is, how do I set up the AVD/emulator?
So I asked this on the CameraX discussion group, and here is the reply from Google:
For CameraX to support wide angle cameras we are working with manufacturers to expose those via camera2 first. Some devices indeed do so today in a non-deterministic manner. Will keep you posted as we progress, thanks!
So, if somebody is still looking for the answer:
Almost no manufacturer supported this before Android 10. From Android 10 onward, devices can expose their extra physical cameras (either as standalone IDs or through a logical multi-camera).
This means you can see those cameras via manager.getCameraIdList(): you get a list of all available cameras; just look at the CameraCharacteristics.LENS_FACING direction of each one and populate a list.
Here is the full code:
public CameraItem[] GetCameraListFirstTime() {
    List<CameraItem> listValuesItems = new ArrayList<CameraItem>();
    boolean IsMain = false;
    boolean IsSelfie = false;
    if (manager == null)
        manager = (CameraManager) mContext.getSystemService(CAMERA_SERVICE);
    try {
        for (String cameraId : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
            // Compare against the camera2 LENS_FACING constants (not the old Camera.CameraInfo ones)
            if (!IsMain && chars.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK) {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Main"));
                IsMain = true;
            } else if (!IsSelfie && chars.get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_FRONT) {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Selfie"));
                IsSelfie = true;
            } else {
                // Any remaining camera (wide, telephoto, depth, ...); assumes numeric camera IDs
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Wide or Other"));
            }
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, e.toString());
    }
    return listValuesItems.toArray(new CameraItem[0]);
}

public class CameraItem implements java.io.Serializable {
    public int Key;
    public String Description;

    public CameraItem(int key, String desc) {
        Key = key;
        Description = desc;
    }
}
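To go one step further and actually stream from a specific physical lens (for example a wide-angle module behind a logical camera), camera2 on API 28+ lets you bind an output to a physical camera ID. A rough sketch, where logicalId, physicalId, previewSurface, executor and stateCallback are placeholders you would supply yourself:

// Sketch (API 28+): request frames from one physical lens of a logical multi-camera.
CameraCharacteristics chars = manager.getCameraCharacteristics(logicalId);
Set<String> physicalIds = chars.getPhysicalCameraIds(); // often empty before Android 10

OutputConfiguration output = new OutputConfiguration(previewSurface);
output.setPhysicalCameraId(physicalId); // e.g. the ID of the wide-angle lens

SessionConfiguration sessionConfig = new SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        Collections.singletonList(output),
        executor,
        stateCallback);
cameraDevice.createCaptureSession(sessionConfig);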
I am working on a project and facing an issue with ARCore. I am using ARCore Location in my project and set the location of an object using latitude and longitude, but when I view it on the device, the object's location varies in AR.
CompletableFuture<ViewRenderable> exampleLayout = ViewRenderable.builder()
.setView(this, R.layout.example_layout)
.build();
// When you build a Renderable, Sceneform loads its resources in the background while returning
// a CompletableFuture. Call thenAccept(), handle(), or check isDone() before calling get().
CompletableFuture<ModelRenderable> andy = ModelRenderable.builder()
.setSource(this, R.raw.andy)
.build();
CompletableFuture.allOf(
exampleLayout,
andy)
.handle(
(notUsed, throwable) -> {
// When you build a Renderable, Sceneform loads its resources in the background while
// returning a CompletableFuture. Call handle(), thenAccept(), or check isDone()
// before calling get().
if (throwable != null) {
DemoUtils.displayError(this, "Unable to load renderables", throwable);
return null;
}
try {
exampleLayoutRenderable = exampleLayout.get();
andyRenderable = andy.get();
hasFinishedLoading = true;
} catch (InterruptedException | ExecutionException ex) {
DemoUtils.displayError(this, "Unable to load renderables", ex);
}
return null;
});
// Set an update listener on the Scene that will hide the loading message once a Plane is
// detected.
arSceneView
.getScene()
.setOnUpdateListener(
frameTime -> {
if (!hasFinishedLoading) {
return;
}
if (locationScene == null) {
// If our locationScene object hasn't been setup yet, this is a good time to do it
// We know that here, the AR components have been initiated.
locationScene = new LocationScene(this, this, arSceneView);
// Now lets create our location markers.
// First, a layout
LocationMarker layoutLocationMarker = new LocationMarker(
77.398151,
28.540926,
getExampleView()
);
// An example "onRender" event, called every frame
// Updates the layout with the markers distance
layoutLocationMarker.setRenderEvent(new LocationNodeRender() {
@SuppressLint("SetTextI18n")
@Override
public void render(LocationNode node) {
View eView = exampleLayoutRenderable.getView();
TextView distanceTextView = eView.findViewById(R.id.textView2);
distanceTextView.setText(node.getDistance() + "M");
}
});
// Adding the marker
locationScene.mLocationMarkers.add(layoutLocationMarker);
// Adding a simple location marker of a 3D model
locationScene.mLocationMarkers.add(
new LocationMarker(
77.398151,
28.540926,
getAndy()));
}
Frame frame = arSceneView.getArFrame();
if (frame == null) {
return;
}
if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
return;
}
if (locationScene != null) {
locationScene.processFrame(frame);
}
if (loadingMessageSnackbar != null) {
for (Plane plane : frame.getUpdatedTrackables(Plane.class)) {
if (plane.getTrackingState() == TrackingState.TRACKING) {
hideLoadingMessage();
}
}
}
});
// Lastly request CAMERA & fine location permission which is required by ARCore-Location.
ARLocationPermissionHelper.requestPermission(this);
The major problem is that it detects a surface and places the image according to that. If there were a way to disable surface detection here, it would work perfectly.
Modify the session configuration with EnablePlaneFinding = false and then disable and reenable the ARCoreSession. That would disable plane finding but would keep existing planes as they were at the moment.
If you don't want to disable the session you could force an OnEnable() call on the session without disabling it:
var session = GameObject.Find("ARCore Device").GetComponent<ARCoreSession>();
session.SessionConfig.EnablePlaneFinding = false;
session.OnEnable();
You can use the hide() method to hide the plane discovery instructions in Android, and pass false to setEnabled() on the plane renderer to stop rendering planes.
Try it like this:
arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment);
arFragment.getPlaneDiscoveryController().hide();
arFragment.getPlaneDiscoveryController().setInstructionView(null);
arFragment.getArSceneView().getPlaneRenderer().setEnabled(false);
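Note that the snippet above only hides the visuals. If you also want ARCore to stop detecting new planes altogether, you can reconfigure the session; a sketch, assuming the ArFragment's session has already been created:

// Sketch: actually stop ARCore from finding new planes (Sceneform), not just hide them.
Session session = arFragment.getArSceneView().getSession();
if (session != null) {
    Config config = session.getConfig();
    config.setPlaneFindingMode(Config.PlaneFindingMode.DISABLED);
    session.configure(config);
}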
I'm trying to make a screen-sharing app using WebRTC. I have code that can get and share a video stream from the camera. I need to modify it to instead get video via the MediaProjection API. Based on this post I modified my code to use org.webrtc.ScreenCapturerAndroid, but no video output is shown, only a black screen. If I use the camera, everything works fine (I can see the camera output on screen). Could someone please check my code and maybe point me in the right direction? I have been stuck on this for three days already.
Here is my code:
public class MainActivity extends AppCompatActivity {
private static final String TAG = "VIDEO_CAPTURE";
private static final int CAPTURE_PERMISSION_REQUEST_CODE = 1;
private static final String VIDEO_TRACK_ID = "video_stream";
PeerConnectionFactory peerConnectionFactory;
SurfaceViewRenderer localVideoView;
ProxyVideoSink localSink;
VideoSource videoSource;
VideoTrack localVideoTrack;
EglBase rootEglBase;
boolean camera = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
rootEglBase = EglBase.create();
localVideoView = findViewById(R.id.local_gl_surface_view);
localVideoView.init(rootEglBase.getEglBaseContext(), null);
startScreenCapture();
}
@TargetApi(21)
private void startScreenCapture() {
MediaProjectionManager mMediaProjectionManager = (MediaProjectionManager) getApplication().getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mMediaProjectionManager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE);
}
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode != CAPTURE_PERMISSION_REQUEST_CODE) { return; }
    start(data);
}
private void start(Intent permissionData) {
//Initialize PeerConnectionFactory globals.
PeerConnectionFactory.InitializationOptions initializationOptions =
PeerConnectionFactory.InitializationOptions.builder(this)
.setEnableVideoHwAcceleration(true)
.createInitializationOptions();
PeerConnectionFactory.initialize(initializationOptions);
//Create a new PeerConnectionFactory instance - using Hardware encoder and decoder.
PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
DefaultVideoEncoderFactory defaultVideoEncoderFactory = new DefaultVideoEncoderFactory(
rootEglBase.getEglBaseContext(), true,true);
DefaultVideoDecoderFactory defaultVideoDecoderFactory = new DefaultVideoDecoderFactory(rootEglBase.getEglBaseContext());
peerConnectionFactory = PeerConnectionFactory.builder()
.setOptions(options)
.setVideoDecoderFactory(defaultVideoDecoderFactory)
.setVideoEncoderFactory(defaultVideoEncoderFactory)
.createPeerConnectionFactory();
VideoCapturer videoCapturerAndroid;
if (camera) {
videoCapturerAndroid = createCameraCapturer(new Camera1Enumerator(false));
} else {
videoCapturerAndroid = new ScreenCapturerAndroid(permissionData, new MediaProjection.Callback() {
@Override
public void onStop() {
super.onStop();
Log.e(TAG, "user has revoked permissions");
}
});
}
videoSource = peerConnectionFactory.createVideoSource(videoCapturerAndroid);
DisplayMetrics metrics = new DisplayMetrics();
MainActivity.this.getWindowManager().getDefaultDisplay().getRealMetrics(metrics);
videoCapturerAndroid.startCapture(metrics.widthPixels, metrics.heightPixels, 30);
localVideoTrack = peerConnectionFactory.createVideoTrack(VIDEO_TRACK_ID, videoSource);
localVideoTrack.setEnabled(true);
//localVideoTrack.addRenderer(new VideoRenderer(localRenderer));
localSink = new ProxyVideoSink().setTarget(localVideoView);
localVideoTrack.addSink(localSink);
}
//find first camera, this works without problem
private VideoCapturer createCameraCapturer(CameraEnumerator enumerator) {
final String[] deviceNames = enumerator.getDeviceNames();
// First, try to find front facing camera
Logging.d(TAG, "Looking for front facing cameras.");
for (String deviceName : deviceNames) {
if (enumerator.isFrontFacing(deviceName)) {
Logging.d(TAG, "Creating front facing camera capturer.");
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
if (videoCapturer != null) {
return videoCapturer;
}
}
}
// Front facing camera not found, try something else
Logging.d(TAG, "Looking for other cameras.");
for (String deviceName : deviceNames) {
if (!enumerator.isFrontFacing(deviceName)) {
Logging.d(TAG, "Creating other camera capturer.");
VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
if (videoCapturer != null) {
return videoCapturer;
}
}
}
return null;
}
}
ProxyVideoSink
public class ProxyVideoSink implements VideoSink {
private VideoSink target;
synchronized ProxyVideoSink setTarget(VideoSink target) { this.target = target; return this; }
@Override
public void onFrame(VideoFrame videoFrame) {
if (target == null) {
Log.w("VideoSink", "Dropping frame in proxy because target is null.");
return;
}
target.onFrame(videoFrame);
}
}
In logcat I can see that some frames are rendered, but nothing is shown (black screen).
06-18 17:42:44.750 11357-11388/com.archona.webrtcscreencapturetest I/org.webrtc.Logging: EglRenderer: local_gl_surface_viewDuration: 4000 ms. Frames received: 117. Dropped: 0. Rendered: 117. Render fps: 29.2. Average render time: 4754 μs. Average swapBuffer time: 2913 μs.
06-18 17:42:48.752 11357-11388/com.archona.webrtcscreencapturetest I/org.webrtc.Logging: EglRenderer: local_gl_surface_viewDuration: 4001 ms. Frames received: 118. Dropped: 0. Rendered: 118. Render fps: 29.5. Average render time: 5015 μs. Average swapBuffer time: 3090 μs.
I'm using latest version of WebRTC library: implementation 'org.webrtc:google-webrtc:1.0.23546'.
My device has API level 24 (Android 7.0), but I have tested this code on 3 different devices with different API levels, so I don't suspect a device-specific problem.
I have tried building another app that uses MediaProjection API (without WebRTC) and I can see correct output inside SurfaceView.
I have tried downgrading webrtc library, but nothing seems to work.
Thanks for any help.
I faced the same issue using the WebRTC library org.webrtc:google-webrtc:1.0.22672 on an Android 7.0 device. Video calling worked fine; the issue was with screen sharing, which always showed a black screen.
Then I added the following:
peerConnectionFactory.setVideoHwAccelerationOptions(rootEglBase.getEglBaseContext(), rootEglBase.getEglBaseContext());
Now it is working perfectly.
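For what it's worth, in newer releases of the library setVideoHwAccelerationOptions is no longer available; there the usual pattern is to hand the capturer a SurfaceTextureHelper backed by the same EGL context yourself. A sketch under that assumption, reusing the names from the question's code:

// Newer google-webrtc versions: initialize the capturer with an EGL-backed helper.
SurfaceTextureHelper surfaceTextureHelper =
        SurfaceTextureHelper.create("ScreenCaptureThread", rootEglBase.getEglBaseContext());
VideoSource videoSource =
        peerConnectionFactory.createVideoSource(videoCapturerAndroid.isScreencast());
videoCapturerAndroid.initialize(surfaceTextureHelper, getApplicationContext(),
        videoSource.getCapturerObserver());
videoCapturerAndroid.startCapture(metrics.widthPixels, metrics.heightPixels, 30);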
I'm making a 2-player Android game using UNET. All the movements of the host's objects are syncing across the network, so that part works fine. But when an object on the client's side moves, it moves locally but not on the host's screen, so that movement is not syncing.
I already attached a NetworkIdentity, a NetworkTransform, and the PlayerController script to it, as well as a box collider (for the raycast).
The server and the client have the same PlayerController script; the only difference is that the host can only move objects tagged Player, and the client can only move objects tagged Tagger.
void Update () {
    if (!isLocalPlayer) {
        return;
    }
    if (isServer) {
        Debug.Log("Server here.");
        if (Input.GetMouseButtonDown(0))
        {
            Vector2 cubeRay = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            RaycastHit2D cubeHit = Physics2D.Raycast(cubeRay, Vector2.zero);
            if (cubeHit)
            {
                if (cubeHit.transform.tag == "Player")
                {
                    if (this.target != null)
                    {
                        SelectMove sm = this.target.GetComponent<SelectMove>();
                        if (sm != null) { sm.enabled = false; }
                    }
                    target = cubeHit.transform.gameObject;
                    selectedPlayer();
                }
            }
        }
    }
    if (!isServer) {
        Debug.Log("Client here.");
        if (Input.GetMouseButtonDown(0))
        {
            Vector2 cubeRay = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            RaycastHit2D cubeHit = Physics2D.Raycast(cubeRay, Vector2.zero);
            if (cubeHit)
            {
                if (cubeHit.transform.tag == "Tagger")
                {
                    if (this.target != null)
                    {
                        SelectMove sm = this.target.GetComponent<SelectMove>();
                        if (sm != null) { sm.enabled = false; }
                    }
                    target = cubeHit.transform.gameObject;
                    selectedPlayer();
                }
            }
        }
    }
}
I'm using (!isServer) to identify the client because isClient sometimes doesn't work reliably in my project. I also tried using it again to test it out, but still no luck.
You don't need to use tags to move players; this one PlayerController script is enough using only an isLocalPlayer check, and try disabling the script with !isLocalPlayer on both clients. Use this for reference: http://docs.unity3d.com/Manual/UNetSetup.html and check their sample tutorial.
I want my Android app to recognize sound. For example, I want to know if the sound from the microphone is clapping, knocking, or something else.
Do I need to use math, or can I just use some library for that?
If there are any libraries for sound analysis please let me know. Thanks.
The musicg library is useful for whistle detection. For claps, I wouldn't recommend it, because it reacts to every loud sound (even speech).
For detecting claps and other percussive sounds I recommend TarsosDSP. It has a simple API with rich functionality (pitch detection and so on). For clap detection you can use something like this (if you use TarsosDSPAndroid-v3):
MicrophoneAudioDispatcher mDispatcher = new MicrophoneAudioDispatcher((int) SAMPLE_RATE, BUFFER_SIZE, BUFFER_OVERLAP);
double threshold = 8;
double sensitivity = 20;
mPercussionDetector = new PercussionOnsetDetector(22050, 1024,
        new OnsetHandler() {
            @Override
            public void handleOnset(double time, double salience) {
                Log.d(TAG, "Clap detected!");
            }
        }, sensitivity, threshold);
mDispatcher.addAudioProcessor(mPercussionDetector);
new Thread(mDispatcher).start();
You can tune your detector by adjusting sensitivity (0-100) and threshold (0-20).
Good luck!
There is an API that works very well for your needs, in my opinion.
http://code.google.com/p/musicg/
Good Luck!!!
You don't need math and you don't need AudioRecord. Just check MediaRecorder.getMaxAmplitude() every 1000 milliseconds.
This code and this code might be helpful.
Here is some code you will need.
public class Clapper
{
private static final String TAG = "Clapper";
private static final long DEFAULT_CLIP_TIME = 1000;
private long clipTime = DEFAULT_CLIP_TIME;
private AmplitudeClipListener clipListener;
private boolean continueRecording;
/**
* how much louder is required to hear a clap 10000, 18000, 25000 are good
* values
*/
private int amplitudeThreshold;
/**
* requires a little of noise by the user to trigger, background noise may
* trigger it
*/
public static final int AMPLITUDE_DIFF_LOW = 10000;
public static final int AMPLITUDE_DIFF_MED = 18000;
/**
* requires a lot of noise by the user to trigger. background noise isn't
* likely to be this loud
*/
public static final int AMPLITUDE_DIFF_HIGH = 25000;
private static final int DEFAULT_AMPLITUDE_DIFF = AMPLITUDE_DIFF_MED;
private MediaRecorder recorder;
private String tmpAudioFile;
public Clapper() throws IOException
{
this(DEFAULT_CLIP_TIME, "/tmp.3gp", DEFAULT_AMPLITUDE_DIFF, null, null);
}
public Clapper(long snipTime, String tmpAudioFile,
int amplitudeDifference, Context context, AmplitudeClipListener clipListener)
throws IOException
{
this.clipTime = snipTime;
this.clipListener = clipListener;
this.amplitudeThreshold = amplitudeDifference;
this.tmpAudioFile = tmpAudioFile;
}
public boolean recordClap()
{
Log.d(TAG, "record clap");
boolean clapDetected = false;
try
{
recorder = AudioUtil.prepareRecorder(tmpAudioFile);
}
catch (IOException io)
{
Log.d(TAG, "failed to prepare recorder ", io);
throw new RecordingFailedException("failed to create recorder", io);
}
recorder.start();
int startAmplitude = recorder.getMaxAmplitude();
Log.d(TAG, "starting amplitude: " + startAmplitude);
do
{
Log.d(TAG, "waiting while recording...");
waitSome();
int finishAmplitude = recorder.getMaxAmplitude();
if (clipListener != null)
{
clipListener.heard(finishAmplitude);
}
int ampDifference = finishAmplitude - startAmplitude;
if (ampDifference >= amplitudeThreshold)
{
Log.d(TAG, "heard a clap!");
clapDetected = true;
}
Log.d(TAG, "finishing amplitude: " + finishAmplitude + " diff: "
+ ampDifference);
} while (continueRecording || !clapDetected);
Log.d(TAG, "stopped recording");
done();
return clapDetected;
}
private void waitSome()
{
try
{
// wait a while
Thread.sleep(clipTime);
} catch (InterruptedException e)
{
Log.d(TAG, "interrupted");
}
}
/**
* need to call this when completely done with recording
*/
public void done()
{
Log.d(TAG, "stop recording");
if (recorder != null)
{
if (isRecording())
{
stopRecording();
}
//now stop the media player
recorder.stop();
recorder.release();
}
}
public boolean isRecording()
{
return continueRecording;
}
public void stopRecording()
{
continueRecording = false;
}
}
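The AudioUtil.prepareRecorder() helper referenced above isn't included in the answer; a minimal sketch of what it might look like (the class name and the output format are assumptions):

// Hypothetical helper: configure a MediaRecorder so getMaxAmplitude() can be polled.
public class AudioUtil {
    public static MediaRecorder prepareRecorder(String outputFile) throws IOException {
        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setOutputFile(outputFile); // amplitude polling still needs an output sink
        recorder.prepare();
        return recorder;
    }
}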
I realize this is a year old, but I stumbled across it. I'm pretty sure that general, open-domain sound recognition is not a solved problem. So, no, you're not going to find any kind of library to do what you want on Android, because such code doesn't exist anywhere yet. If you pick some restricted domain, you could train a classifier to recognize the kinds of sounds you're interested in, but that would require lots of math, and lots of examples of each of the potential sounds. It would be pretty cool if the library you wanted existed, but as far as I know, the technology just isn't there yet.