How to disable surface detection in ARCore Android

I am working on a project and facing an issue with ARCore. I am using ARCore Location in my project: I set an object's location using latitude and longitude, but when I view it on the device, the object's position in AR varies.
CompletableFuture<ViewRenderable> exampleLayout = ViewRenderable.builder()
        .setView(this, R.layout.example_layout)
        .build();

// When you build a Renderable, Sceneform loads its resources in the background while returning
// a CompletableFuture. Call thenAccept(), handle(), or check isDone() before calling get().
CompletableFuture<ModelRenderable> andy = ModelRenderable.builder()
        .setSource(this, R.raw.andy)
        .build();

CompletableFuture.allOf(
        exampleLayout,
        andy)
        .handle(
                (notUsed, throwable) -> {
                    // When you build a Renderable, Sceneform loads its resources in the background while
                    // returning a CompletableFuture. Call handle(), thenAccept(), or check isDone()
                    // before calling get().
                    if (throwable != null) {
                        DemoUtils.displayError(this, "Unable to load renderables", throwable);
                        return null;
                    }
                    try {
                        exampleLayoutRenderable = exampleLayout.get();
                        andyRenderable = andy.get();
                        hasFinishedLoading = true;
                    } catch (InterruptedException | ExecutionException ex) {
                        DemoUtils.displayError(this, "Unable to load renderables", ex);
                    }
                    return null;
                });
// Set an update listener on the Scene that will hide the loading message once a Plane is
// detected.
arSceneView
        .getScene()
        .setOnUpdateListener(
                frameTime -> {
                    if (!hasFinishedLoading) {
                        return;
                    }

                    if (locationScene == null) {
                        // If our locationScene object hasn't been set up yet, this is a good time to do it.
                        // We know that here the AR components have been initiated.
                        locationScene = new LocationScene(this, this, arSceneView);

                        // Now let's create our location markers.
                        // First, a layout
                        LocationMarker layoutLocationMarker = new LocationMarker(
                                77.398151,
                                28.540926,
                                getExampleView());

                        // An example "onRender" event, called every frame.
                        // Updates the layout with the marker's distance.
                        layoutLocationMarker.setRenderEvent(new LocationNodeRender() {
                            @SuppressLint("SetTextI18n")
                            @Override
                            public void render(LocationNode node) {
                                View eView = exampleLayoutRenderable.getView();
                                TextView distanceTextView = eView.findViewById(R.id.textView2);
                                distanceTextView.setText(node.getDistance() + "M");
                            }
                        });

                        // Adding the marker
                        locationScene.mLocationMarkers.add(layoutLocationMarker);

                        // Adding a simple location marker of a 3D model
                        locationScene.mLocationMarkers.add(
                                new LocationMarker(
                                        77.398151,
                                        28.540926,
                                        getAndy()));
                    }

                    Frame frame = arSceneView.getArFrame();
                    if (frame == null) {
                        return;
                    }

                    if (frame.getCamera().getTrackingState() != TrackingState.TRACKING) {
                        return;
                    }

                    if (locationScene != null) {
                        locationScene.processFrame(frame);
                    }

                    if (loadingMessageSnackbar != null) {
                        for (Plane plane : frame.getUpdatedTrackables(Plane.class)) {
                            if (plane.getTrackingState() == TrackingState.TRACKING) {
                                hideLoadingMessage();
                            }
                        }
                    }
                });
// Lastly request CAMERA & fine location permission which is required by ARCore-Location.
ARLocationPermissionHelper.requestPermission(this);
The major problem is that ARCore detects a surface and places the image relative to it. If there is any way to disable surface detection here, it would work perfectly.

Modify the session configuration with EnablePlaneFinding = false and then disable and re-enable the ARCoreSession. That disables plane finding but keeps existing planes as they were at that moment.
If you don't want to disable the session, you can force an OnEnable() call on the session without disabling it:
var session = GameObject.Find("ARCore Device").GetComponent<ARCoreSession>();
session.SessionConfig.EnablePlaneFinding = false; session.OnEnable();

You can use the hide() method to hide the plane discovery instructions on Android, and disable the plane renderer by calling setEnabled(false) on it.
Try like this,
arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.ux_fragment);
arFragment.getPlaneDiscoveryController().hide();
arFragment.getPlaneDiscoveryController().setInstructionView(null);
arFragment.getArSceneView().getPlaneRenderer().setEnabled(false);
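If you also want ARCore to stop searching for new planes, rather than just hiding the visualization, you could additionally disable plane finding on the session configuration. A minimal sketch, assuming the Session is available from the ArSceneView (it may be null before the view has resumed, and depending on the ARCore version you may need to pause/resume the session around the change):

Session session = arFragment.getArSceneView().getSession();
if (session != null) {
    Config config = session.getConfig();
    // stop ARCore from detecting new planes; existing anchors keep working
    config.setPlaneFindingMode(Config.PlaneFindingMode.DISABLED);
    session.configure(config);
}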

Related

Eliminating volume envelope retrigger clicks - Jsyn on Android

I am looking for ideas on how to handle envelope re-triggering of new notes in a monophonic sampler setup, which causes clicks if the previous note's envelope hasn't finished. In the current setup the previous note's instance is killed on the spot when a new note is triggered (the synth.stop() call), causing a click because the envelope doesn't get a chance to finish and reach 0 volume. Any hints are welcome.
I have also included below my own unsatisfactory workaround: setting the gain of the voice to 0 and then putting the synth to sleep for 70 ms. This introduces 70 ms of latency to the user interaction but gets rid of the clicks. Any value below 70 ms in the sleep doesn't eliminate the clicking.
The variables are public static at the moment just so I can still play around with where I'm calling them.
Here is my listener code:
buttonNoteC1Get.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            buttonNoteC1Get.setBackgroundColor(myColorWhite); // reset gui color
            if (sample.getSustainBegin() > 0) { // trigger release for looping sample
                ampEnv.dataQueue.queue(ampEnvelope, 3, 1); // release called
            }
            limit = 0; // reset action down limiter
            return true;
        }
        if (limit == 0) { // respond only to first touch event
            if (samplePlayer != null) { // check if a previous note exists
                synth.stop(); // stop instance of previous note
            }
            buttonNoteC1Get.setBackgroundColor(myColorGrey); // key pressed gui color
            samplePitch = octave * 1; // set samplerate multiplier
            Sampler.player(); // call setup code for new note
            Sampler.play(); // play new note
            limit = 1; // prevent stacking of action down touch events
        }
        return false;
    }
}); // end listener
Here is my Sampler code
public class Sampler {
    public static VariableRateDataReader samplePlayer;
    public static LineOut lineOut;
    public static FloatSample sample;
    public static SegmentedEnvelope ampEnvelope;
    public static VariableRateMonoReader ampEnv;
    public static MixerMonoRamped mixerMono;
    public static double[] ampData;
    public static FilterStateVariable mMainFilter;
    public static Synthesizer synth = JSyn.createSynthesizer(new JSynAndroidAudioDevice());

    // load the chosen sample, called by instrument select spinner
    static void loadSample() {
        SampleLoader.setJavaSoundPreferred(false);
        try {
            sample = SampleLoader.loadFloatSample(sampleFile);
        } catch (IOException e) {
            e.printStackTrace();
        }
    } // end load sample

    // initialize sampler voice
    static void player() {
        // Create an amplitude envelope and fill it with data.
        ampData = new double[] {
                envA, 0.9,  // pair 0, "attack"
                envD, envS, // pair 2, "decay"
                0, envS,    // pair 3, "sustain"
                envR, 0.0,  // pair 4, "release"
                /* 0.04, 0.0 // pair 5, "silence"*/
        };
        // initialize voice
        ampEnvelope = new SegmentedEnvelope(ampData);
        synth.add(ampEnv = new VariableRateMonoReader());
        synth.add(lineOut = new LineOut());
        synth.add(mixerMono = new MixerMonoRamped(2));
        synth.add(mMainFilter = new FilterStateVariable());
        // connect signal flow
        mixerMono.output.connect(mMainFilter.input);
        mMainFilter.output.connect(0, lineOut.input, 0);
        mMainFilter.output.connect(0, lineOut.input, 1);
        // set control values
        mixerMono.amplitude.set(sliderVal / 100.0f);
        mMainFilter.amplitude.set(0.9);
        mMainFilter.frequency.set(mainFilterCutFloat);
        mMainFilter.resonance.set(mainFilterResFloat);
        // initialize and connect sampler voice
        if (sample.getChannelsPerFrame() == 1) {
            synth.add(samplePlayer = new VariableRateMonoReader());
            ampEnv.output.connect(samplePlayer.amplitude);
            samplePlayer.output.connect(0, mixerMono.input, 0);
            samplePlayer.output.connect(0, mixerMono.input, 1);
        } else if (sample.getChannelsPerFrame() == 2) {
            synth.add(samplePlayer = new VariableRateStereoReader());
            ampEnv.output.connect(samplePlayer.amplitude);
            samplePlayer.output.connect(0, mixerMono.input, 0);
            samplePlayer.output.connect(1, mixerMono.input, 1);
        } else {
            throw new RuntimeException("Can only play mono or stereo samples.");
        }
    } // end player

    // play the sample
    public static void play() {
        if (samplePlayer != null) {
            samplePlayer.dataQueue.clear();
            samplePlayer.rate.set(sample.getFrameRate() * samplePitch); // set pitch
        }
        // start the synth engine
        synth.start();
        lineOut.start();
        ampEnv.start();
        // play one shot sample
        if (sample.getSustainBegin() < 0) {
            samplePlayer.dataQueue.queue(sample);
            ampEnv.dataQueue.queue(ampEnvelope);
        // play sustaining sample
        } else {
            samplePlayer.dataQueue.queueOn(sample);
            ampEnv.dataQueue.queue(ampEnvelope, 0, 3);
            ampEnv.dataQueue.queueLoop(ampEnvelope, 1, 2);
        }
    }
}
Unsatisfactory solution that introduces 70ms of latency, changing the action down listener handling of a previous note to this:
if (limit == 0) {
    if (samplePlayer != null) {
        mixerMono.amplitude.set(0);
        try {
            synth.sleepFor(0.07);
            synth.stop(); // stop instance of previous note
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
You should not call synth.start() and synth.stop() for every note. Think of it like powering on a physical synthesizer. Just start the synth and the lineOut once. If the ampEnv is connected indirectly to something else that is start()ed then you do not need to start() the ampEnv.
Then just queue your samples and envelopes when you want to start a note.
When you are all done playing notes then call synth.stop().
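For example, a rough sketch of that flow, reusing the field names from the Sampler class above (assumption: the engine is started once at app start-up, and a note-on only queues data):

// one-time start-up, e.g. in onCreate(): power the engine on once
synth.start();
lineOut.start();

// per note-on: just (re)queue the sample and its envelope
samplePlayer.dataQueue.clear();
samplePlayer.rate.set(sample.getFrameRate() * samplePitch);
samplePlayer.dataQueue.queue(sample);
ampEnv.dataQueue.queue(ampEnvelope);

// only when the app shuts down:
// synth.stop();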

Microblink recognizer set up RegexParserSettings

I am trying to scan an image taken from resources using a Recognizer with RegexParserSettings inside a fragment. The problem is that the BaseRecognitionResult obtained through the onScanningDone callback is always null. I have tried setting up the RecognitionSettings with an MRTDRecognizer and it worked fine, so I think the library is properly integrated. This is the source code that I am using:
@Override
public void onAttach(Context context) {
    ...
    try {
        mRecognizer = Recognizer.getSingletonInstance();
        mRecognizer.setLicenseKey(context, LICENSE_KEY);
    } catch (FeatureNotSupportedException | InvalidLicenceKeyException e) {
        Log.d(TAG, e.getMessage());
    }
    buildRecognitionSettings();
    mRecognizer.initialize(context, mRecognitionSettings, new DirectApiErrorListener() {
        @Override
        public void onRecognizerError(Throwable t) {
            // Handle exception
        }
    });
}
private void buildRecognitionSettings() {
    mRecognitionSettings = new RecognitionSettings();
    mRecognitionSettings.setRecognizerSettingsArray(setupSettingsArray());
}

private RecognizerSettings[] setupSettingsArray() {
    RegexParserSettings regexParserSettings = new RegexParserSettings("[A-Z0-9]{17}");
    BlinkOCRRecognizerSettings sett = new BlinkOCRRecognizerSettings();
    sett.addParser("myRegexParser", regexParserSettings);
    return new RecognizerSettings[] { sett };
}
I scan the image like:
mRecognizer.recognizeBitmap(bitmap, Orientation.ORIENTATION_PORTRAIT, FragMicoblink.this);
And this is the callback handled in the fragment
@Override
public void onScanningDone(RecognitionResults results) {
    BaseRecognitionResult[] dataArray = results.getRecognitionResults();
    // dataArray is null
    for (BaseRecognitionResult baseResult : dataArray) {
        if (baseResult instanceof BlinkOCRRecognitionResult) {
            BlinkOCRRecognitionResult result = (BlinkOCRRecognitionResult) baseResult;
            if (result.isValid() && !result.isEmpty()) {
                String parsedAmount = result.getParsedResult("myRegexParser");
                if (parsedAmount != null && !parsedAmount.isEmpty()) {
                    Log.d(TAG, "Result: " + parsedAmount);
                }
            }
        }
    }
}
Thanks in advance!
Hello Spirrow.
The difference between your code and SegmentScanActivity is that your code uses DirectAPI, which can process only the single bitmap image you send for processing, while SegmentScanActivity processes camera frames as they arrive from the camera. While doing so, it can exploit temporally redundant information to improve OCR quality, i.e. it combines consecutive OCR results from multiple video frames to obtain a better-quality OCR result.
This feature is not available via DirectAPI - you need to use either SegmentScanActivity or a custom scan activity with our camera management.
You can also find out more here:
https://github.com/BlinkID/blinkid-android/issues/54
Regards

Android : Detect movement of eyes using sensor at real time

I am preparing an Android application in which I have to detect the movement of eyes. I have managed to achieve this on images, but I want it to work on live eyes.
I am not sure whether the proximity sensor can be used to detect eyes, similar to the Smart Stay feature.
Please suggest ideas on how to implement this.
We can use the front camera to detect the eyes and eye blinks. Use the Vision API for detecting the eyes.
Code for eye tracking:
public class FaceTracker extends Tracker<Face> {
    private static final float PROB_THRESHOLD = 0.7f;
    private static final String TAG = FaceTracker.class.getSimpleName();
    private boolean leftClosed;
    private boolean rightClosed;

    @Override
    public void onUpdate(Detector.Detections<Face> detections, Face face) {
        if (leftClosed && face.getIsLeftEyeOpenProbability() > PROB_THRESHOLD) {
            leftClosed = false;
        } else if (!leftClosed && face.getIsLeftEyeOpenProbability() < PROB_THRESHOLD) {
            leftClosed = true;
        }
        if (rightClosed && face.getIsRightEyeOpenProbability() > PROB_THRESHOLD) {
            rightClosed = false;
        } else if (!rightClosed && face.getIsRightEyeOpenProbability() < PROB_THRESHOLD) {
            rightClosed = true;
        }
        if (leftClosed && !rightClosed) {
            EventBus.getDefault().post(new LeftEyeClosedEvent());
        } else if (rightClosed && !leftClosed) {
            EventBus.getDefault().post(new RightEyeClosedEvent());
        } else if (!leftClosed && !rightClosed) {
            EventBus.getDefault().post(new NeutralFaceEvent());
        }
    }
}
// method to call the FaceTracker
private void createCameraResources() {
    Context context = getApplicationContext();

    // create and setup the face detector
    mFaceDetector = new FaceDetector.Builder(context)
            .setProminentFaceOnly(true) // optimize for single, relatively large face
            .setTrackingEnabled(true) // enable face tracking
            .setClassificationType(/* eyes open and smile */ FaceDetector.ALL_CLASSIFICATIONS)
            .setMode(FaceDetector.FAST_MODE) // for one face this is OK
            .build();

    // now that we've got a detector, create a processor pipeline to receive the detection
    // results
    mFaceDetector.setProcessor(new LargestFaceFocusingProcessor(mFaceDetector, new FaceTracker()));

    // operational...?
    if (!mFaceDetector.isOperational()) {
        Log.w(TAG, "createCameraResources: detector NOT operational");
    } else {
        Log.d(TAG, "createCameraResources: detector operational");
    }

    // Create camera source that will capture video frames
    // Use the front camera
    mCameraSource = new CameraSource.Builder(this, mFaceDetector)
            .setRequestedPreviewSize(640, 480)
            .setFacing(CameraSource.CAMERA_FACING_FRONT)
            .setRequestedFps(30f)
            .build();
}
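To actually receive frames, the CameraSource still has to be started once a preview surface is available. A minimal sketch, assuming a SurfaceView named preview (a hypothetical name) and that the CAMERA permission has already been granted:

try {
    // start pushing front-camera frames into the face detector
    mCameraSource.start(preview.getHolder());
} catch (IOException e) {
    Log.e(TAG, "Unable to start camera source.", e);
    mCameraSource.release();
    mCameraSource = null;
}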
No, you can't use the proximity sensor for eye detection or tracking. Give OpenCV a shot.
Link: OpenCv
GitHub: OpenCv github
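For illustration, a minimal sketch of eye detection with OpenCV's Java bindings (assumptions: the OpenCV Android SDK has been initialised and haarcascade_eye.xml has been copied to a readable path; the class name and path are hypothetical):

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class EyeDetector {
    private final CascadeClassifier eyeCascade;

    public EyeDetector(String cascadePath) {
        // cascadePath points at haarcascade_eye.xml extracted to local storage
        eyeCascade = new CascadeClassifier(cascadePath);
    }

    // Returns bounding boxes of detected eyes in a camera frame.
    public Rect[] detectEyes(Mat rgbaFrame) {
        Mat gray = new Mat();
        Imgproc.cvtColor(rgbaFrame, gray, Imgproc.COLOR_RGBA2GRAY);
        MatOfRect eyes = new MatOfRect();
        eyeCascade.detectMultiScale(gray, eyes);
        return eyes.toArray();
    }
}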

Client's objects movement not syncing across the Network

I'm making a 2-player Android game using UNET. All movements of the host's objects sync across the network, so that part works fine. But when an object on the client's side moves, it moves locally yet doesn't move on the host's screen, so the movement is not syncing.
I have already attached a NetworkIdentity, a NetworkTransform, and the PlayerController script to it, as well as a box collider (for the raycast).
The server and the client have the same PlayerController script; the only difference is that the host can only move objects with the Player tag, while the client can only move objects with the Tagger tag.
void Update () {
    if (!isLocalPlayer) {
        return;
    }
    if (isServer) {
        Debug.Log("Server here.");
        if (Input.GetMouseButtonDown(0))
        {
            Vector2 cubeRay = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            RaycastHit2D cubeHit = Physics2D.Raycast(cubeRay, Vector2.zero);
            if (cubeHit)
            {
                if (cubeHit.transform.tag == "Player")
                {
                    if (this.target != null)
                    {
                        SelectMove sm = this.target.GetComponent<SelectMove>();
                        if (sm != null) { sm.enabled = false; }
                    }
                    target = cubeHit.transform.gameObject;
                    selectedPlayer();
                }
            }
        }
    }
    if (!isServer) {
        Debug.Log("Client here.");
        if (Input.GetMouseButtonDown(0))
        {
            Vector2 cubeRay = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            RaycastHit2D cubeHit = Physics2D.Raycast(cubeRay, Vector2.zero);
            if (cubeHit)
            {
                if (cubeHit.transform.tag == "Tagger")
                {
                    if (this.target != null)
                    {
                        SelectMove sm = this.target.GetComponent<SelectMove>();
                        if (sm != null) { sm.enabled = false; }
                    }
                    target = cubeHit.transform.gameObject;
                    selectedPlayer();
                }
            }
        }
    }
}
I'm using (!isServer) to identify the client because isClient sometimes doesn't work reliably in my project. I also tried using it again to test, but still no luck.
You don't need to use tags to move players; this one PlayerController script is enough using only the isLocalPlayer check. Try disabling the script via !isLocalPlayer on both clients. Use this for reference http://docs.unity3d.com/Manual/UNetSetup.html and check their sample tutorial.

AndEngine Sprite/Box2D Body removal removes a particular body (as it should), but removes all instances of the sprite?

I am making a game using AndEngine with the Box2D physics extension. I experienced random crashes due to the addition/movement/deletion of Box2D bodies during the world step calculation, so I implemented code that flags sprites/bodies for removal using the setUserData method - I attach a JSON object to each body and sprite that contains the type of sprite, the body, the sprite itself, and its delete status:
private JSONObject makeUserData(int type, Body body, Object sprite)
{
    JSONObject myObject = new JSONObject();
    try {
        myObject.put("type", type);
        myObject.put("sprite", sprite);
        myObject.put("body", body);
        myObject.put("deleteStatus", false);
    } catch (JSONException e) {
        Log.d(TAG, "Exception creating user data:" + e);
    }
    return myObject;
}
Then, in an update thread, I iterate through all the bodies in my world looking for these flags and delete the flagged sprites/bodies. The bodies are removed correctly, but the sprite removal seems to delete every instance of that particular sprite rather than just the one I flagged for removal! I can tell the bodies are still present without their sprites because my player collides with invisible objects! Here is the code for removal:
private void removeObjectsSetForDestruction()
{
    if (this.mPhysicsWorld != null)
    {
        Iterator<Body> allMyBodies = this.mPhysicsWorld.getBodies();
        boolean isDelete = false;
        JSONObject currentBodyData;
        while (allMyBodies.hasNext())
        {
            try {
                currentBodyData = (JSONObject) allMyBodies.next().getUserData();
                if (currentBodyData != null)
                {
                    isDelete = (Boolean) currentBodyData.get("deleteStatus");
                    if (isDelete)
                    {
                        destroyObstruction((Body) currentBodyData.get("body"));
                    }
                }
            } catch (JSONException e) {
                Log.d(TAG, "Error getting world bodies data:" + e);
            }
        }
    }
}

private void destroyObstruction(Body obstructionBody) throws JSONException
{
    obstructionBody.setActive(false);
    JSONObject secondBodyData = null;
    if (obstructionBody.getUserData() != null)
    {
        secondBodyData = (JSONObject) obstructionBody.getUserData();
        //explodeObstruction(((IEntity) secondBodyData.get("sprite")).getX(),((IEntity) secondBodyData.get("sprite")).getY());
        if (secondBodyData.get("sprite") instanceof AnimatedSprite)
        {
            removeObject((AnimatedSprite) secondBodyData.get("sprite"));
        }
        else
        {
            removeObject((Sprite) secondBodyData.get("sprite"));
        }
    }
}

private void removeObject(final AnimatedSprite myAnimSprite)
{
    final PhysicsConnector myPhysicsConnector = this.mPhysicsWorld.getPhysicsConnectorManager().findPhysicsConnectorByShape(myAnimSprite);
    this.mPhysicsWorld.unregisterPhysicsConnector(myPhysicsConnector);
    this.mPhysicsWorld.destroyBody(myPhysicsConnector.getBody());
    this.mScene.unregisterTouchArea(myAnimSprite);
    this.mScene.detachChild(myAnimSprite);
    System.gc();
}

private void removeObject(final Sprite mySprite)
{
    final PhysicsConnector myPhysicsConnector = this.mPhysicsWorld.getPhysicsConnectorManager().findPhysicsConnectorByShape(mySprite);
    this.mPhysicsWorld.unregisterPhysicsConnector(myPhysicsConnector);
    this.mPhysicsWorld.destroyBody(myPhysicsConnector.getBody());
    this.mScene.unregisterTouchArea(mySprite);
    this.mScene.detachChild(mySprite);
    System.gc();
}
I would like to take a look at your object-creation code. I assume every sprite uses the same TextureRegion, so when the region of one sprite is changed, the same region on the other sprites changes too, since they share the same TextureRegion object. For every sprite with the same TextureRegion you should use textureRegion.clone() as the last parameter of the constructor. Hope this helps.
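For illustration, a minimal sketch of that suggestion (assumptions: a GLES1-style AndEngine Sprite constructor and a shared field named mObstacleTextureRegion; both names are hypothetical):

// Give each sprite its own copy of the region so detaching or modifying one
// sprite does not affect the others built from the same region.
Sprite obstruction = new Sprite(posX, posY, mObstacleTextureRegion.clone());
this.mScene.attachChild(obstruction);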
