While also trying to get access to the color data bytes from the Tango's color camera, I was stuck with the Java API: I could connect the Tango camera to a surface for display, but that was only good for display, with no easy access to the raw data or a timestamp. So I finally switched to the C API in native code (latest Fermat library and headers) and followed a recommendation I found on Stack Overflow, adapting sample code to register a callback with connectOnFrameAvailable() (I started from the PointCloudActivity sample for this test).
The first problem I found is somewhat a side effect of registering that callback: the callback itself fires regularly, but another callback that I also registered, to receive XYZ point clouds, starts failing to fire. As in the sample code I mentioned, clouds arrive through an onXYZijAvailable() callback, which the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback does not always fail to fire, but it does roughly half of the time during my tests. An awful workaround is to send the app to the background and then bring it to the foreground again. This is curious; is this "recovery" related to low-level onPause/onResume behavior? If someone has clues...
By the way, I observed the same side effect with the Java API once the camera texture was connected for display (through the appropriate Tango API).
But here is my second "problem", back to acquiring the YV12 color data from the camera. I register with TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable) and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    LOGI("OnFrameAvailable(): Cam frame data received");
    // Check if the data format is of the expected type: YV12, i.e.
    // TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
    // = 0x32315659 // YCrCb 4:2:0 planar
    //LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
    ...
}
The problem is that the width, height, and stride of the received TangoImageBuffer structure look valid (1280x720, ...), BUT the format field changes every time, and never matches the expected magic number (here 0x32315659).
Am I doing something wrong there? (The other fields are fine...)
Also, only one data format (YV12) is apparently defined here, but the fisheye images from the demo app look like grey-level images; does the fisheye camera use the same low-level (color) capture format as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did - the only access to the image data is through the C API
2) Regarding the image itself - I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG code. The format is naked, i.e. it's an organizational structure with no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here (see also the YV12 layout sketch at the end of this answer)
3) Regarding point cloud returns -
Please note this information is anecdotal, and to some degree the product of superstition - what works for me only does so sometimes, and may not work at all for you
Tango does seem to have a remarkable knack for simply ceasing to produce point clouds. I think a lot of it has to do with very sensitive internal timing (I wonder if anyone mentioned that Linux isn't an RTOS when this was first crafted)
Almost all issues I encounter can be attributed to disturbing that timing:
A. Debugging at the C level can make point clouds stop coming
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can make point clouds stop coming
C. Excessive load can cause the system to lose sync, at which point the point clouds stop. This is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system recovers if the load decreases - the silvery pattern goes away and point clouds come back. More commonly, the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image; at minimum a restart of the app is required for me, and a full tablet reboot every third time or so
Summarizing: these are my suspicions and countermeasures, but they're based entirely on personal experience.
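To illustrate what "naked" means in point 2, here is a minimal sketch of the YV12 plane layout. This is my own illustration (not the linked code), assuming the standard Android convention that chroma rows are aligned to 16 bytes:

// Plane offsets inside a YV12 (YCrCb 4:2:0 planar) buffer: a full-resolution
// Y plane, then a quarter-resolution Cr (V) plane, then a Cb (U) plane.
public final class Yv12Layout {
    public final int yOffset = 0;
    public final int vOffset; // Cr plane starts right after Y
    public final int uOffset; // Cb plane follows Cr

    public Yv12Layout(int height, int yStride) {
        int cStride = (yStride / 2 + 15) & ~15; // chroma stride, 16-byte aligned
        int ySize = yStride * height;
        int cSize = cStride * (height / 2);
        vOffset = ySize;
        uOffset = ySize + cSize;
    }
}

The height and stride come straight from the TangoImageBuffer fields; there is no header to parse.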
Related
I have an app which has to create Sprite instances on the fly from data contained in byte arrays (PNGs and JPGs). The following code is used to create the sprites:
Texture2D texture = new Texture2D(2, 2, TextureFormat.RGBA32, false, false);
texture.LoadImage(data);
Vector2 pivot = new Vector2(0.5f, 0.5f);
Rect tRect = new Rect(0, 0, texture.width, texture.height);
return Sprite.Create(texture, tRect, pivot);
This works fine; however, depending on the device and the size of the images, after a random number of images the app freezes and is then shut down by the OS. It's always a different image that fails, and the data source is irrelevant.
Looking into the logs of the app via adb shows nothing. If I write to the debug log, I can see that the last statement which gets called is texture.LoadImage. However, there is no exception or other information about the error. Catching the exception does not work either.
The error does not occur in the editor; it occurs on both Android devices I tested, in development builds and in production builds.
Searching the web, I found the entry below, which describes the very same problem, but no solution has been posted (they circled around the WWW part; however, the problem is not with that):
https://forum.unity.com/threads/android-crash-when-using-multiple-www.483941/
UPDATE
One interesting finding is that if I set the markNonReadable parameter of the texture.LoadImage() method to true, the error occurs less frequently, but it is still there.
texture.LoadImage(data, true);
Textures are not garbage collected, so if you create a texture using new Texture you need to destroy it with Destroy(texture) when you no longer need it. I believe the Sprite object also needs to be destroyed.
In your case, the loaded textures stayed in memory until the Android OS closed your app because of memory pressure.
UnloadUnusedAssets() should also destroy all textures and sprites that are no longer referenced, but it takes a long time (about 1 second), so it only makes sense to call it when changing scenes.
I'm attempting to extract the white balance parameters from the auto white balance algorithm on the S9. On every other device I've tested, it gives meaningful parameters back (the numbers have a floating-point precision of about 6 digits and are constantly changing), but the S9 appears to round its result parameters to the nearest whole number, which ends up giving some very poor results in terms of color balance. Here's the code I am using to do this:
if (result.get(CaptureResult.COLOR_CORRECTION_GAINS) != null) {
    channelVector = result.get(CaptureResult.COLOR_CORRECTION_GAINS);
}
Has anybody else run into this issue, and if so, are there any solutions out there?
Consider working with the custom Samsung Camera API. These days, it is based on camera2.
Specifically, they provide their own COLOR_CORRECTION_GAINS. They also explain that
… the camera device may do additional processing but android.colorCorrection.gains and android.colorCorrection.transform will still be provided by the camera device (in the results) and be roughly correct.
(the emphasis is mine)
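For reference, here is a minimal plain-camera2 sketch (not the Samsung SDK) for logging the gains on every completed capture, so you can see how coarsely a given device quantizes them. CaptureResult.COLOR_CORRECTION_GAINS and RggbChannelVector are standard camera2 API; the tag string and callback wiring are just illustrative:

CameraCaptureSession.CaptureCallback callback = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request, TotalCaptureResult result) {
        RggbChannelVector gains = result.get(CaptureResult.COLOR_CORRECTION_GAINS);
        if (gains != null) {
            // On most devices these vary with ~6 digits of precision;
            // on the S9 they reportedly come back as whole numbers.
            Log.d("AWB", String.format("R=%.6f Ge=%.6f Go=%.6f B=%.6f",
                    gains.getRed(), gains.getGreenEven(),
                    gains.getGreenOdd(), gains.getBlue()));
        }
    }
};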
After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it as .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example from Google's GitHub.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y, and z values, i.e. the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3s from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
int size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine: no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud, everything is messed up; the area I scanned is not recognizable, although the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // Camera-to-IMU transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-depth-camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics, this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds, you can get a 3D representation of your room.
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up the environment and start using JNI (in fact, I'm stuck there at the moment).
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then draw your trajectory in a spreadsheet or whatever you prefer, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It will cover lots of points and give you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not with respect to where the device was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
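For completeness, "as usual" in my case means passing it to connectListener. A trimmed sketch (the listener bodies are obviously app-specific; connectListener and OnTangoUpdateListener are the standard Tango Java API):

mTango.connectListener(FRAME_PAIRS, new Tango.OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        // pose is now expressed wrt COORDINATE_FRAME_AREA_DESCRIPTION
    }

    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) { /* ... */ }

    @Override
    public void onFrameAvailable(int cameraId) { /* ... */ }

    @Override
    public void onTangoEvent(TangoEvent event) { /* ... */ }
});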
Then you have to modify your TangoConfig to tell Tango to use Area Learning, using the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drift.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key no longer exists (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
I am working on an app that will allow a user to take quick click-and-forget snapshots. Most of the app is done except for getting the camera to work the way I would like. Right now I have the camera working, but I can't seem to find a way to disable the shutter sound, and I can't find a way to disable displaying the preview. I was able to cover the preview up with a control, but I would rather just not have it displayed at all.
To sum things up, these are the items that I would like to disable while using the built-in camera controls:
Shutter sound
Camera screen display
Image preview onPictureTaken
Does anyone know of a resource that could point me in the right direction? I would greatly appreciate it. I have been following CommonsWare's example from this sample fairly closely.
Thank you.
This is actually controlled by a property in the build.prop of the phone. I'm unsure if it's possible to change it, short of completely overriding the camera and using your own camera code with what is available in the SDK.
Take a look at this:
CameraService.cpp
. . .
CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid) {
    mPreviewCallbackFlag = FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mOrientationChanged = false;
    cameraService->setCameraBusy(cameraId);
    cameraService->loadSound();
    LOG1("Client::Client X (pid %d)", callingPid);
}

void CameraService::loadSound() {
    Mutex::Autolock lock(mSoundLock);
    LOG1("CameraService::loadSound ref=%d", mSoundRef);
    if (mSoundRef++) return;
    mSoundPlayer[SOUND_SHUTTER] = newMediaPlayer("/system/media/audio/ui/camera_click.ogg");
    mSoundPlayer[SOUND_RECORDING] = newMediaPlayer("/system/media/audio/ui/VideoRecord.ogg");
}
As can be noted, the sound players are loaded without any interaction from you.
This is the service used in the Gingerbread source code.
The reason they DON'T allow this is that it is illegal in some countries. The only way to achieve what you want is to use a custom ROM.
Update
If what is being said here: http://androidforums.com/t-mobile-g1/6371-camera-shutter-sound-effect-off.html
still applies, then you could write a timer that turns the sound off (silent mode) for a couple of seconds each time you take a picture and then turns it back on.
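Something along these lines (a rough sketch, written inside your Activity; pictureCallback is whatever callback you already use, and note that muting the ringer may behave differently on newer Android versions):

final AudioManager audio = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
final int previousMode = audio.getRingerMode();
// Silence the device, take the picture, restore the mode shortly afterwards.
audio.setRingerMode(AudioManager.RINGER_MODE_SILENT);
camera.takePicture(null, null, pictureCallback);
new Handler().postDelayed(new Runnable() {
    @Override
    public void run() {
        audio.setRingerMode(previousMode);
    }
}, 2000);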
You may use the data from the preview callback and save it as a picture on some trigger, such as a button with an OnClickListener; you could compress the image to JPEG or PNG. This way there is no ShutterCallback to be implemented, and therefore you can play any sound you want, or none, when taking a picture.
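A rough sketch of that approach, assuming the default NV21 preview format and a hypothetical output path:

camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        Camera.Size size = cam.getParameters().getPreviewSize();
        // Wrap the raw NV21 frame and compress it straight to JPEG.
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21, size.width, size.height, null);
        try {
            FileOutputStream out = new FileOutputStream("/sdcard/snapshot.jpg");
            yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, out);
            out.close();
        } catch (IOException e) {
            Log.e("Snapshot", "Failed to save preview frame", e);
        }
    }
});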
You can effectively hide the preview surface by giving it dimensions of 1px in the XML layout file (I found an example that said 0px, but for some reason that was giving me errors).
It may be illegal to have a silent shutter in some places, but the US doesn't appear to be one of them, as my HTC One gives me an option to silence it. In fact, since Android 4.2 you can do this:
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info); // populate info for the camera you opened
if (info.canDisableShutterSound) {
    camera.enableShutterSound(false);
}
So I am using the Android camera to take pictures within an Android app. About 90% of my users have no issues, but the other 10% get back a picture that is either pure black or a weird jumble of pixels.
Has anyone else seen this behavior, or have any ideas why it happens?
Examples: one capture comes back entirely black, another as a jumble of pixels (example images omitted).
I've had similar problems.
The problem, in short, is missing data.
It happens to a Bitmap/Stream when the data stream is interrupted for too long or suddenly becomes unavailable.
Another example where it may occur: downloading and uploading images. If the user all of a sudden disables Wi-Fi or the mobile network, no more data can be transmitted, and you end up with a splattered image.
The image will still display (where "display" means black/splattered - it's still viewable!) but is invalid internally (missing or corrupted information).
If it's not too critical, you can try to decode all the data into a Bitmap object (BitmapFactory.decode*) and test whether the returned Bitmap is null. If it is, the data is probably corrupted.
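A minimal sketch of that check (the tag and messages are just illustrative):

Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
if (bmp == null) {
    // Decoding failed: the byte array is likely truncated or corrupted.
    Log.w("ImageCheck", "Corrupted image data");
} else {
    // Data decoded cleanly; safe to display.
}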
This only addresses the consequences of the problem, as you can guess.
The better way is to tackle the problem at its root:
Ensure a good connection to your data source (with a large enough, robust buffer).
Avoid unnecessary casts (e.g. from char to int).
Use the correct type of buffer (Reader/Writer for character streams, InputStream/OutputStream for byte streams).
From Android 4.0 on, hardwareAccelerated is set to true by default in the manifest. A hardware-accelerated Canvas does not support Picture objects, and you will get a black screen...
Please also check whether you use a BitmapFactory.Options object when generating the bitmap, because a few settings on that object can also corrupt the bitmap.
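If you do use BitmapFactory.Options, the conventional safe pattern is a two-pass decode: read only the bounds first, then decode for real with a sample size. A sketch (the 1024-pixel target is an arbitrary example):

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inJustDecodeBounds = true;  // pass 1: read dimensions only, no pixel allocation
BitmapFactory.decodeByteArray(data, 0, data.length, opts);

opts.inJustDecodeBounds = false; // pass 2: real decode, subsampled to save memory
opts.inSampleSize = Math.max(1, opts.outWidth / 1024); // decoder rounds down to a power of two
Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length, opts);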