Tracking an Android device in space using the device camera

I am looking to get the device's position (x, y, and z) in space in the most efficient way possible. Currently I am using ArSceneView (ARCore) to get the camera's pose as it updates.
The problem is that ArSceneView, or the class it extends, seems to have a memory leak that eventually crashes my app. I have tried forcing garbage collection and increasing the heap size, but the app still crashes.
I do not need a view to display the camera stream, so a class with a view is not necessary; I just need a way to get the device's x, y, and z coordinates. Does anyone know of a way to get these using ML Kit or pure ARCore (no view)?
Side note: my app does not crash when the Android Studio profiler is attached... does anyone know why?
WHERE I START AND STOP AR MODULE:
public void initializeSession() {
    if (sessionInitializationFailed) {
        return;
    }
    UnavailableException sessionException;
    try {
        session = new Session(context);
        Config config = new Config(session);
        config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
        setupSession(session);
        session.configure(config);
        return;
    } catch (Exception e) {
        e.printStackTrace();
        sessionException = new UnavailableException();
        sessionException.initCause(e);
    }
    sessionInitializationFailed = true;
}
void start() {
    System.gc();
    if (isStarted) {
        return;
    }
    isStarted = true;
    try {
        initializeSession();
        this.resume();
    } catch (CameraNotAvailableException ex) {
        sessionInitializationFailed = true;
    }
}

void stop() {
    if (!isStarted) {
        return;
    }
    isStarted = false;
    this.pause();
    this.stop(); // re-entrant call; returns immediately since isStarted is now false
    System.gc();
}
WHERE I LISTEN FOR CHANGES:
public void onUpdate(FrameTime frameTime) {
    if (arModule == null) {
        return;
    }
    Frame frame = arModule.getArFrame();
    if (frame == null) {
        return;
    }
    Camera camera = frame.getCamera();
    Pose pose = camera.getPose();
    TrackingState trackingState = camera.getTrackingState();
    if (trackingState == TrackingState.TRACKING) {
        float x = pose.tx();
        float y = pose.ty();
        float z = pose.tz();
    }
}

In the HelloAR sample app, you can clearly see that they are getting the pose of the camera. Their app does not crash, so this is not an ARCore bug. It's hard to tell from your code whether you are leaking memory somewhere else. A pragmatic approach: take the HelloAR sample app and add just the bit that stores the camera pose.
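For the view-less route, here is a rough sketch of driving a bare Session directly. Two assumptions to flag: ARCore still expects a valid GL_TEXTURE_EXTERNAL_OES texture via setCameraTextureName() even when the stream is never displayed, so dummyOesTextureId is a hypothetical texture you would create on a thread with a GL context; and Session.close() releases the session's native memory, which the garbage collector cannot see, so pausing alone may not return it.

try {
    Session session = new Session(context);
    Config config = new Config(session);
    config.setUpdateMode(Config.UpdateMode.LATEST_CAMERA_IMAGE);
    session.configure(config);
    // dummyOesTextureId: hypothetical GL_TEXTURE_EXTERNAL_OES texture on a GL thread.
    session.setCameraTextureName(dummyOesTextureId);
    session.resume();

    // Per frame (e.g. from a Choreographer callback):
    Frame frame = session.update();
    Camera camera = frame.getCamera();
    if (camera.getTrackingState() == TrackingState.TRACKING) {
        Pose pose = camera.getPose();
        float x = pose.tx();
        float y = pose.ty();
        float z = pose.tz();
    }

    // When finished, release native memory explicitly; System.gc() cannot reclaim it.
    session.pause();
    session.close();
} catch (UnavailableException | CameraNotAvailableException e) {
    e.printStackTrace();
}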

Related

Is it possible (if yes, how) to access a MIUI phone's depth camera directly?

I know that my phone and other models have a depth camera. I have used Portrait mode and extracted the depth information from the image using desktop tools. I have attempted to use Unity's WebCamTexture.depthCameraName to do this on the device, but to no avail. Is this possible, or is the depth camera reserved for the camera app on MIUI?
Certainly, there might be the possibility of making the user take a photograph in the camera app and import it, but my application would benefit greatly from being able to read out this data in real time. I would appreciate any pointers on what to research; thank you in advance.
I would just like to add that if this is doable in Unity, that would be my preferred solution. However, if it has to be, I can make do with any other XR solution for Android (position info will be relevant to the project).
As far as I know, there is a way to get a depth image on Android. With the camera2 API, you can use CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT to find the depth camera's camera ID and use it, such as:
private String DepthCameraID() {
    try {
        for (String camera : cameraManager.getCameraIdList()) {
            CameraCharacteristics chars = cameraManager.getCameraCharacteristics(camera);
            final int[] capabilities = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
            boolean facingBack = chars.get(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_BACK;
            boolean depthCapable = false;
            for (int capability : capabilities) {
                boolean capable = capability == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT;
                depthCapable = depthCapable || capable;
            }
            if (depthCapable && facingBack) {
                SizeF sensorSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
                Log.i(TAG, "Sensor size: " + sensorSize);
                float[] focalLengths = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
                if (focalLengths.length > 0) {
                    float focalLength = focalLengths[0];
                    double fov = 2 * Math.atan(sensorSize.getWidth() / (2 * focalLength));
                    Log.i(TAG, "Calculated FoV: " + fov);
                }
                return camera;
            }
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, "Could not initialize Camera Cache");
        e.printStackTrace();
    }
    return null;
}
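Once you have the depth camera's ID, you would open it like any other camera2 device and attach an ImageReader with the DEPTH16 format. A minimal sketch, not tested on MIUI: the 240x180 size is a placeholder (read real sizes from the StreamConfigurationMap), and backgroundHandler is assumed to be a Handler on a background thread.

// Sketch: reading DEPTH16 frames from the depth camera found above.
ImageReader depthReader = ImageReader.newInstance(240, 180, ImageFormat.DEPTH16, 2);
depthReader.setOnImageAvailableListener(reader -> {
    try (Image image = reader.acquireLatestImage()) {
        if (image == null) return;
        ShortBuffer depth = image.getPlanes()[0].getBuffer().asShortBuffer();
        short sample = depth.get(0);
        int rangeMm = sample & 0x1FFF;         // bits 0-12: distance in millimeters
        int confidence = (sample >> 13) & 0x7; // bits 13-15: confidence code
        Log.i(TAG, "Range: " + rangeMm + " mm, confidence code: " + confidence);
    }
}, backgroundHandler);
// Add depthReader.getSurface() as a target of a capture request on the depth camera.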

MediaCodec's Persistent Input Surface unsupported by Camera2 Session?

I am writing an Android app that saves RAW/JPEG and records video at the same time. I tried passing 4 surfaces when creating the CameraCaptureSession: preview, 2x ImageSaver, and 1x PersistentInputSurface created by MediaCodec#createPersistentInputSurface. By using a persistent input surface, I intend to avoid a stoppage between two captures.
When creating the session it fails with:
W/CameraDevice-JV-0: Stream configuration failed due to: endConfigure:380: Camera 0: Unsupported set of inputs/outputs provided
Session 0: Failed to create capture session; configuration failed
I have tried taking out all the other surfaces, leaving only the PersistentInputSurface; it still fails.
@Override
public void onResume() {
    super.onResume();
    // Some other setup...
    if (persistentRecorderSurface == null) {
        persistentRecorderSurface = MediaCodec.createPersistentInputSurface();
    }
    startBackgroundThread();
    startCamera();
    if (mPreviewView.isAvailable()) {
        configureTransform(mPreviewView.getWidth(), mPreviewView.getHeight());
    } else {
        mPreviewView.setSurfaceTextureListener(mSurfaceTextureListener);
    }
    if (mOrientationListener != null && mOrientationListener.canDetectOrientation()) {
        mOrientationListener.enable();
    }
}
private void createCameraPreviewSessionLocked() {
    try {
        SurfaceTexture texture = mPreviewView.getSurfaceTexture();
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        Surface surface = new Surface(texture);
        mPreviewRequestBuilder = mBackCameraDevice.createCaptureRequest(
                CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(surface);
        mBackCameraDevice.createCaptureSession(Arrays.asList(
                surface,
                mJpegImageReader.get().getSurface(),
                mRAWImageReader.get().getSurface(),
                persistentRecorderSurface
        ), new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                synchronized (mCameraStateLock) {
                    if (mBackCameraDevice == null) {
                        return;
                    }
                    try {
                        setup3AControlsLocked(mPreviewRequestBuilder);
                        session.setRepeatingRequest(mPreviewRequestBuilder.build(),
                                mPreCaptureCallback, mBackgroundHandler);
                        mState = CameraStates.PREVIEW;
                    } catch (CameraAccessException | IllegalStateException e) {
                        e.printStackTrace();
                        return;
                    }
                    mSession = session;
                }
            }

            @Override
            public void onConfigureFailed(CameraCaptureSession session) {
                showToast("Failed to configure camera.");
            }
        }, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
It'd be helpful to see the system log lines right before that error line to confirm, but most likely:
You need to actually tie the persistentRecorderSurface to a MediaRecorder or MediaCodec, and call prepare() on it, before you create the camera capture session.
Otherwise, there's nothing actually at the other end of the persistent surface, and the camera can't tell what resolution or other settings are required.
Also keep in mind that there are limits on how many concurrent outputs the camera supports, depending on its supported hardware level and capabilities. There is currently no requirement that a device support your combination of outputs (preview, record, JPEG, RAW), unfortunately, so it's very likely that many or all devices will still give you an error.
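To illustrate the first point, a hedged sketch of wiring the persistent surface to a MediaRecorder before session creation. The encoder parameters and outputFile are illustrative placeholders; in practice they would come from a CamcorderProfile.

// Sketch: give the persistent surface a consumer before creating the session.
persistentRecorderSurface = MediaCodec.createPersistentInputSurface();

MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setOutputFile(outputFile.getAbsolutePath()); // outputFile: placeholder
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setVideoSize(1920, 1080);       // placeholder; use a CamcorderProfile
recorder.setVideoFrameRate(30);
recorder.setVideoEncodingBitRate(10_000_000);
recorder.setInputSurface(persistentRecorderSurface); // must precede prepare()
try {
    recorder.prepare(); // now the surface has a consumer with a known configuration
} catch (IOException e) {
    e.printStackTrace();
}
// Only after prepare() succeeds, include persistentRecorderSurface in
// createCaptureSession(...).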

Burst capture at 10 FPS with preview without lag

I need to build an application with an activity which captures images in burst mode at 10 FPS.
My problem is that I can capture pictures very quickly in burst mode, but my image buffer fills up before I have time to process them (I just want to put the images into an ArrayList).
So my buffer fills up and my app crashes.
The buffer size I've chosen is 50 (I cannot take too much memory).
It seems that my burst mode can capture at 30 FPS, but I can only process images at 10 FPS.
So I want to slow down the capturing, but when I do that (by waiting before repeating another capture), it slows down my preview too, and I get lag in the preview.
I don't want to save the images; I just want to store them so I can process them later (in another activity).
Do you have any ideas, either to slow down the capturing or to speed up the processing?
Code:
My code is based on the Camera2Basic sample.
private void captureStillPicture() {
    try {
        final Activity activity = getActivity();
        if (null == activity || null == mCameraDevice) {
            return;
        }
        mPreviewRequestBuilder.set(CaptureRequest.EDGE_MODE, CaptureRequest.EDGE_MODE_OFF);
        mPreviewRequestBuilder.set(CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE, CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE_ON);
        mPreviewRequestBuilder.set(CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE, CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
        mPreviewRequestBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AWB_LOCK, true);
        mCaptureSession.stopRepeating();
        mCaptureSession.abortCaptures();
        mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
        mCaptureSession.capture(mPreviewRequestBuilder.build(), captureCallback, null);
        mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
blockCapture() is meant to insert a wait between captures (I said 100 ms, though the code below compares against 1000 ms):
private void blockCapture() {
    mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
    // Busy-wait until the interval has elapsed (this blocks the calling thread).
    while (Math.abs(timerBurst - System.currentTimeMillis()) < 1000) {}
    System.out.println("ELAPSED :::::::: " + Math.abs(timerBurst - System.currentTimeMillis()));
    timerBurst = System.currentTimeMillis();
    mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
    for (int i = 0; i < NB_BURST; i++) {
        captureList.add(mPreviewRequestBuilder.build());
    }
}
private final CameraCaptureSession.CaptureCallback captureCallback
        = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        showToast("Saved: " + mFile);
        Log.d(TAG, "Saved: " + mFile);
        mCounterImageForSave++;
        mCounterBurst++;
        if (mCounterImageForSave <= 1) {
            timerBurst = System.currentTimeMillis();
        }
        if (mCounterBurst >= NB_BURST && isPlayed) {
            mCounterBurst = 0;
            try {
                blockCapture();
                mCaptureSession.capture(mPreviewRequestBuilder.build(), captureCallback, null);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
        if (mCounterBurst >= NB_BURST || !isPlayed)
            unlockFocus();
    }
};
I have tried capturing YUV_420_888 images instead of JPEG, but my images were all black, and I could not convert them.
Sorry for any mistakes in my English...
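One way to pace captures at roughly 10 FPS without stalling the preview is to leave the preview running as the repeating request (targeting only the preview surface) and post still captures to the ImageReader from a Handler at a fixed interval, instead of busy-waiting inside the capture callback. A minimal sketch, assuming the Camera2Basic-style fields used above:

// Sketch: pace captures with a Handler instead of busy-waiting.
private static final long CAPTURE_INTERVAL_MS = 100; // ~10 FPS

private final Runnable captureTick = new Runnable() {
    @Override
    public void run() {
        try {
            // Separate builder: the repeating preview request stays untouched.
            CaptureRequest.Builder builder =
                    mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
            builder.addTarget(mImageReader.getSurface());
            mCaptureSession.capture(builder.build(), captureCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
        mBackgroundHandler.postDelayed(this, CAPTURE_INTERVAL_MS);
    }
};

// Start: mBackgroundHandler.post(captureTick);
// Stop:  mBackgroundHandler.removeCallbacks(captureTick);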

onFaceDetection(Camera.Face[] faces, Camera camera) keeps on executing continuously

I am creating an application that captures an image when it detects a face, and I am able to achieve that, with only one issue: the onFaceDetection function of FaceDetectionListener keeps executing even when there is no face in front of the camera. Here is my code:
mCamera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        try {
            if (lastCaptureTime + 10000 <= System.currentTimeMillis() || !faceCaptured) {
                mCamera.takePicture(null, null, jpegCallback);
                lastCaptureTime = System.currentTimeMillis();
                faceCaptured = true;
            }
        } catch (Exception e) {
        }
    }
});
The issue is that it keeps taking pictures even though there is no face in front of the camera.
This behavior differs between devices: on my Note 3, onFaceDetection keeps executing even without a face, while on a Nexus phone it performs perfectly.
Well, I didn't find any other solution, so I added a face-check condition:
if (faces != null && faces.length > 0) {
    // Do code here
}
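Applied to the listener above, the guard would look something like this (a sketch; lastCaptureTime, faceCaptured, and jpegCallback are the fields from the question):

mCamera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        // Some drivers (e.g. the Note 3 above) fire with an empty array.
        if (faces == null || faces.length == 0) {
            return;
        }
        if (lastCaptureTime + 10000 <= System.currentTimeMillis() || !faceCaptured) {
            mCamera.takePicture(null, null, jpegCallback);
            lastCaptureTime = System.currentTimeMillis();
            faceCaptured = true;
        }
    }
});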

Out of memory in thread repetition?

Friends,
I am using the following code to display a single full-screen image in an activity, using a thread. The scenario: in a custom image gallery, clicking each thumbnail displays the large image on this screen.
The problem: the user clicks on an image, the thread loads it, the user presses the back button to go to the previous page, then keeps clicking thumbnails one by one to display full-screen images, repeating this scenario.
Eventually, the application crashes with an out-of-memory error on a bitmap.
Please guide me: what mistake am I making?
public void Move(final String path) {
    if (!isConnected()) {
        Constants.DisplayMessage(getApplicationContext(),
                Constants.CONNECTION_ERROR_MESSAGE);
        return;
    }
    progressBar = (ProgressBar) findViewById(R.id.progressbar_default);
    progressBar.setVisibility(View.VISIBLE);
    myThread = new Thread(new Runnable() {
        public void run() {
            while (serviceData == null) {
                serviceData = DisplayLiveImage(path);
                callComplete = true;
            }
            mHandler.post(MoveNow());
        }
    });
    if (myThread.getState() == Thread.State.NEW)
        myThread.start();
}
private Runnable MoveNow() {
    return new Runnable() {
        public void run() {
            if (callComplete) {
                try {
                    if (!serviceData.equals("")) {
                        bm = (Bitmap) serviceData;
                        float imageHeight = bm.getHeight();
                        float imageWidth = bm.getWidth();
                        float totalHeight = (imageHeight / imageWidth) * CurrentScreenWidth;
                        LinearLayout.LayoutParams params = new LinearLayout.LayoutParams(
                                LayoutParams.FILL_PARENT, LayoutParams.WRAP_CONTENT);
                        params.width = LayoutParams.FILL_PARENT;
                        params.height = (int) totalHeight;
                        img.setLayoutParams(params);
                        img.setImageBitmap(bm);
                    } else {
                        // show error
                    }
                } catch (Exception ex) {
                    // show error
                } finally {
                    progressBar.setVisibility(View.GONE);
                    serviceData = null;
                    callComplete = false;
                }
            }
        }
    };
}
public void stopThread() {
    try {
        if (myThread != null)
            myThread.interrupt();
    } catch (Exception ex) {
    }
}

@Override
public void onDestroy() {
    if (bm != null) {
        bm.recycle();
        bm = null;
    }
    stopThread();
    super.onDestroy();
}
Bitmaps are the most annoying thing to manage on Android.
To avoid most OOM exceptions I use:
// Try to free up some memory
System.gc();
System.runFinalization();
System.gc();
Yes, you read that right, two System.gc() calls... go figure.
Put this just before decoding the bitmap. So in your case it would be before setImageBitmap, I presume. But look into your logcat and put it just before the line that makes your app crash.
If it still happens, try to decode a smaller version of your bitmap by setting BitmapFactory.Options.inSampleSize when you decode it.
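For that second suggestion, a minimal sketch of downsampled decoding (the factor 4 is illustrative, and decodeFile assumes the image is on disk; pick a power of two that fits your screen):

// Sketch: decode at reduced resolution to cut memory use.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 4; // 1/4 width and height = 1/16 of the pixels
Bitmap bm = BitmapFactory.decodeFile(path, options);
img.setImageBitmap(bm);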
Are you running the debugger when the out-of-memory error occurs? If so, the debugger might be the source of the problem. If you run the app under the debugger, any threads created will still be retained by the debugger, even after they finish running. This leads to memory errors that won't occur when the app is running without the debugger. Try running without the debugger to see if you still get the memory leak.
http://code.google.com/p/android/issues/detail?id=7979
https://android.googlesource.com/platform/dalvik/+/master/docs/debugger.html
If this isn't the case, try using the Eclipse Memory Analyzer to track down the memory issue: http://www.eclipse.org/mat/
