Samsung S9: Reports rounded RggbChannelVector values - android

I'm attempting to extract the white balance parameters from the auto white balance algorithm on the S9. On every other device I've tested, it returns meaningful parameters (the values have around six digits of floating-point precision and change constantly), but the S9 appears to round its result parameters to the nearest whole number, which ends up giving some very poor results in terms of color balance. Here's the code I am using to do this:
if (result.get(CaptureResult.COLOR_CORRECTION_GAINS) != null) {
channelVector = result.get(CaptureResult.COLOR_CORRECTION_GAINS);
}
Has anybody else run into this issue, and if so, are there any solutions out there?

Consider working with the custom Samsung Camera SDK. These days, it is based on camera2.
Specifically, it provides its own COLOR_CORRECTION_GAINS. The documentation also explains that
… the camera device may do additional processing but android.colorCorrection.gains and android.colorCorrection.transform will still be provided by the camera device (in the results) and be roughly correct.
(the emphasis is mine)
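Under the assumption that the Samsung SDK mirrors the stock camera2 callback shapes used elsewhere in this thread (SCameraCaptureSession, SCaptureRequest, STotalCaptureResult) and exposes its own SCaptureResult.COLOR_CORRECTION_GAINS key, reading the gains might look roughly like this sketch (unverified):
new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session, SCaptureRequest request,
                                   STotalCaptureResult result) {
        // Assumed key on the SDK's result class; the stock equivalent is
        // CaptureResult.COLOR_CORRECTION_GAINS as used in the question's code.
        RggbChannelVector gains = result.get(SCaptureResult.COLOR_CORRECTION_GAINS);
        if (gains != null) {
            channelVector = gains; // hopefully unrounded on the S9
        }
    }
};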

Related

Android Camera2 API - Set AE-regions not working

In my Camera2 API project for Android, I want to set a region for my exposure calculation. Unfortunately it doesn't work. On the other hand, the focus region works without any problems.
Device: Samsung S7 / Nexus 5
1.) Initial values for CONTROL_AF_MODE & CONTROL_AE_MODE
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
2.) Create the MeteringRectangle List
meteringFocusRectangleList = new MeteringRectangle[]{new MeteringRectangle(0,0,500,500,1000)};
3.) Check if it is supported by the device and set the CONTROL_AE_REGIONS (same for CONTROL_AF_REGIONS)
if (camera2SupportHandler.cameraCharacteristics.get(CameraCharacteristics.CONTROL_MAX_REGIONS_AE) > 0) {
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_REGIONS, meteringFocusRectangleList);
}
4.) Tell the camera to start Exposure control
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START);
The CONTROL_AE_STATE always stays in CONTROL_AE_STATE_SEARCHING, and the configured regions aren't used...
After long testing & development I've found an answer.
The coordinate system - Camera 1 API vs. Camera 2 API
RED = CAM1; GREEN = CAM2. As shown in the image below, the blue rect marks the coordinates of a possible focus/exposure area in the Camera1 API. With the Camera2 API, you first have to query the maximum width and height of the active sensor array, because Camera2 metering regions are given in sensor pixel coordinates rather than Camera1's fixed [-1000, 1000] range. Please find more info here.
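To make the conversion concrete, here is a hedged sketch (the helper name and the weight parameter are mine, not from the answer) that maps a Camera1-style [-1000, 1000] area onto the Camera2 active sensor array:
import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.params.MeteringRectangle;

static MeteringRectangle cam1AreaToCam2Region(CameraCharacteristics characteristics, Rect cam1Area, int weight) {
    // Camera1 areas span -1000..1000 on both axes; Camera2 wants active-array pixels.
    Rect active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
    float scaleX = active.width() / 2000f;
    float scaleY = active.height() / 2000f;
    int left = active.left + Math.round((cam1Area.left + 1000) * scaleX);
    int top = active.top + Math.round((cam1Area.top + 1000) * scaleY);
    return new MeteringRectangle(left, top,
            Math.round(cam1Area.width() * scaleX),
            Math.round(cam1Area.height() * scaleY),
            weight);
}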
Initial values for CONTROL_AF_MODE & CONTROL_AE_MODE: see the question above.
Set the CONTROL_AE_REGIONS: see the question above.
Set the CONTROL_AE_PRECAPTURE_TRIGGER.
// This is how to tell the camera to start AE control.
// Keep the preview repeating without the trigger first ...
CaptureRequest previewRequest = camera2SupportHandler.mPreviewRequestBuilder.build();
camera2SupportHandler.mCaptureSession.setRepeatingRequest(previewRequest, captureCallbackListener, camera2SupportHandler.mBackgroundHandler);
// ... then fire a single request that actually carries the precapture trigger.
// Note: the trigger must be set before build(), otherwise the captured request won't contain it.
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
camera2SupportHandler.mCaptureSession.capture(camera2SupportHandler.mPreviewRequestBuilder.build(), captureCallbackListener, camera2SupportHandler.mBackgroundHandler);
The captureCallbackListener gives feedback on the AE control (and of course on AF control as well).
So this configuration works for most Android phones. Unfortunately it doesn't work for the Samsung S6/S7. For this reason I've tested their Camera SDK, which can be found here.
After deep investigation I found the config field SCaptureRequest.METERING_MODE. By setting this to the value SCaptureRequest.METERING_MODE_MANUAL, the AE area also works on the Samsung phones.
I'll add an example to github asap.
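Until that example lands, here is a minimal, hedged sketch of the Samsung-specific part. The field names come from the answer above, and it is assumed they can be set on the same preview request builder and session (with the Samsung SDK these would be the S-prefixed builder/session equivalents):
// Assumption: METERING_MODE_MANUAL makes the device honor the manually set regions.
camera2SupportHandler.mPreviewRequestBuilder.set(SCaptureRequest.METERING_MODE, SCaptureRequest.METERING_MODE_MANUAL);
camera2SupportHandler.mPreviewRequestBuilder.set(SCaptureRequest.CONTROL_AE_REGIONS, meteringFocusRectangleList);
camera2SupportHandler.mCaptureSession.setRepeatingRequest(camera2SupportHandler.mPreviewRequestBuilder.build(), captureCallbackListener, camera2SupportHandler.mBackgroundHandler);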
Recently I had the same problem and finally found a solution that helped me.
All I needed to do was step in 1 pixel from the edges of the active sensor rectangle. In your example, instead of this rectangle:
meteringRectangleList = new MeteringRectangle[]{new MeteringRectangle(0,0,500,500,1000)};
I would use this:
meteringRectangleList = new MeteringRectangle[]{new MeteringRectangle(1,1,500,500,1000)};
and it started working like magic on both Samsung and the Nexus 5!
(Note that you should also step in 1 pixel from the right/bottom edges if you use maximum values there.)
It seems that many vendors have poorly implemented this part of the documentation:
If the metering region is outside the used android.scaler.cropRegion returned in capture result metadata, the camera device will ignore the sections outside the crop region and output only the intersection rectangle as the metering region in the result metadata. If the region is entirely outside the crop region, it will be ignored and not reported in the result metadata.
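As an illustration of the workaround (the helper name is mine, not from the answer), a metering rectangle covering the whole active array but inset by one pixel on every edge could be built like this:
import android.graphics.Rect;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.params.MeteringRectangle;

static MeteringRectangle fullArrayInsetByOnePixel(CameraCharacteristics characteristics, int weight) {
    Rect active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
    return new MeteringRectangle(
            active.left + 1,        // 1 px in from the left edge
            active.top + 1,         // 1 px in from the top edge
            active.width() - 2,     // stays 1 px away from the right edge
            active.height() - 2,    // stays 1 px away from the bottom edge
            weight);
}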

Android camera API blurry image on Samsung devices

After implementing the camera2 API for the in-app camera, I noticed that on Samsung devices the images appear blurry. After searching about that I found the Samsung Camera SDK (http://developer.samsung.com/galaxy#camera). After implementing the SDK, the images on the Samsung Galaxy S7 are fine now, but on the Galaxy S6 they are still blurry. Has anyone experienced these kinds of issues with Samsung devices?
EDIT:
To complement @rcsumner's comment: I am setting autofocus by using
mPreviewBuilder.set(SCaptureRequest.CONTROL_AF_TRIGGER, SCaptureRequest.CONTROL_AF_TRIGGER_START);
mSCameraSession.capture(mPreviewBuilder.build(), new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session, SCaptureRequest request, STotalCaptureResult result) {
        isAFTriggered = true;
    }
}, mBackgroundHandler);
It is a long-exposure image where the user has to take a picture of a static, non-moving object. For this I am using CONTROL_AF_MODE_MACRO
mCaptureBuilder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_MACRO);
and also I am enabling auto flash if it is available
requestBuilder.set(SCaptureRequest.CONTROL_AE_MODE,
SCaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
I am not really an expert in this API, I mostly followed the SDK example app.
There could be a number of issues causing this problem. One prominent one is the dimensions of your output image.
I ran into this with the Camera2 API: the preview was clear, but the output was quite blurry.
val characteristics: CameraCharacteristics? = cameraManager.getCameraCharacteristics(cameraId)
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // The issue
var width = imageDimension.width
var height = imageDimension.height
if (size != null) {
    width = size[0].width; height = size[0].height
}
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5)
The line below was returning a dimension of about 245*144, which was far too small to hand to the image reader. Somehow the output was being stretched, which made it end up blurry, so I removed this line:
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // this was returning a far too small size
Setting the width and height manually resolved the issue.
You're setting the AF trigger for one frame, but then are you waiting for AF to complete? For AF_MODE_MACRO (are you verifying the device lists support for this AF mode?) you need to wait for AF_STATE_FOCUSED_LOCKED before the image is guaranteed to be stable and sharp. (You may also receive NOT_FOCUSED_LOCKED if the AF algorithm can't reach sharp focus, which could be because the object is just too close for the lens, or the scene is too confusing)
On most modern devices, it's recommended to use CONTINUOUS_PICTURE and not worry about AF triggering unless you really want to lock focus for some time period. In that mode, the device will continuously try to focus to the best of its ability. I'm not sure all that many devices support MACRO, to begin with.
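To illustrate the "wait for AF" point, here is a hedged sketch in plain camera2 (with the Samsung SDK the S-prefixed classes would be used instead); captureStillPicture() is a hypothetical method that issues the actual still request:
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;

CameraCaptureSession.CaptureCallback afWaitCallback = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                   TotalCaptureResult result) {
        Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
        if (afState == null) return;
        if (afState == CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED
                || afState == CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED) {
            // Focus has converged (or given up); only now is the frame guaranteed
            // to be as sharp as the AF algorithm can make it.
            captureStillPicture(); // hypothetical: build and submit the still request here
        }
    }
};
After setting CONTROL_AF_TRIGGER_START on a single request, keep this callback on the repeating preview requests and only take the long-exposure shot once one of the locked states is reported.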

Mobile - Scan text from camera, without taking a picture

Is there a known API or way to SCAN the text from a card without actually manually saving (and uploading) the picture? (iOS and Android)
Then I would need to know if that API can determine the marquee within the camera that should be scanned.
I want behaviour similar to that of QR scanners or augmented reality apps, where the user just points the camera and the action occurs.
I have printed cards with a redeem code as text, and adding a QR code would require changing the current card production.
The text is inside a white box, which may make it easier to recognise.
On iOS, you would use CIDetector with an AVCaptureSession. It can process capture-session output buffers as they come in from the camera, without having to take a picture, and provides text detection.
For text detection, using CIDetector with CIDetectorTypeText will return areas that are likely to have text in them, but you would have to perform additional processing for Optical Character Recognition.
You could also use OpenCV, though that is not an out-of-the-box solution.
You can try this: https://github.com/gali8/Tesseract-OCR-iOS
Usage:
// Specify the image Tesseract should recognize on
tesseract.image = [[UIImage imageNamed:@"image_sample.jpg"] g8_blackAndWhite];
// Optional: Limit the area of the image Tesseract should recognize on to a rectangle
tesseract.rect = CGRectMake(20, 20, 100, 100);
// Optional: Limit recognition time with a few seconds
tesseract.maximumRecognitionTime = 2.0;
// Start the recognition
[tesseract recognize];

connectOnFrameAvailable() provides TangoImageBuffer with curious format infos

While also trying to get access to the color camera's data bytes on Tango, I was stuck with the Java API: I could connect the Tango camera to a surface for display, but that was really only good for display, with no easy access to the raw data or a timestamp. So I finally switched to the C API in native code (latest FERMAT lib and headers) and followed a recommendation I found on Stack Overflow, registering a callback derived from sample code with connectOnFrameAvailable(). (I started from the PointCloudActivity sample for that test.)
The first problem I found is somewhat a side effect of registering that callback. It usually works fine (the callbacks fire regularly), but then another callback that I also registered, to get XYZ point clouds, starts failing to fire. As in the sample code I mentioned, clouds are delivered through an onXYZijAvailable() callback that the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The XYZ callback doesn't always fail to fire, but it does about half of the time during tests, and the only (awful) workaround is to send the app to the background and then bring it to the foreground again. This is curious; is this "recovery" related to low-level on-pause/on-resume stuff? If someone has clues ...
By the way, the same side effect was observed with the Java API once the camera texture was connected for display (through the corresponding Tango API).
But here is my second "problem", back to acquiring YV12 color data from the camera:
by registering with TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable)
and providing a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
...
LOGI("OnFrameAvailable(): Cam frame data received");
// Check if data format of expected type : YV12 , i.e.
// TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
// i.e. = 0x32315659 // YCrCb 4:2:0 Planar
//LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
....
}
The problem is that the width, height, and stride information in the received TangoImageBuffer structure seem valid (1280x720, ...), BUT the format returned changes every time and is not the expected magic number (here 0x32315659).
Am I doing something wrong there? (The other info is OK ...)
Also, there is apparently only one data format defined here (YV12), but seeing the fish-eye images from the demo app, they look like grey-level images; does that camera use the same (color) format at low-level capture as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did - the only access to the image data is through the C API.
2) Regarding the image - I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG conversion code. The format is naked, i.e. it's purely an organizational structure with no header information, save for the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another question here.
3) Regarding point cloud returns -
Please note this information is anecdotal, and to some degree the product of superstition - what works for me only does that sometimes, and may not work at all for you
Tango does seem to have a remarkable knack for simply stopping the production of point clouds. I think a lot of it has to do with very sensitive internal timing (I wonder if anyone mentioned that Linux ain't an RTOS when this was first crafted).
Almost all issues I encounter can be attributed to screwing up the timing, where:
A. Debugging at the C level can make point clouds stop coming
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can cause point clouds to stop coming
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming - this is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system recovers if the load decreases, the silvery pattern goes away, and point clouds come back - more commonly the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image. At that point at least a restart of the app is required for me, and a full tablet reboot every third time or so
Summarizing, those are my suspicions and countermeasures, but they're based completely on personal experience -

Is this a Digital Compass or Unity limitation?

I'm interested in AR applications on mobile devices, and naturally I would like to make better use of the compass.
The issue I've been working against isn't how twitchy the compass is (angular smoothing seems to solve that just fine). My main issue is that when the device is held vertically, the compass values start freaking out, causing an on-screen compass to flip about all over the place. I don't have a lot of experience with mobile application development, so I'm not sure what would be causing this issue, whether it's a Unity issue, or whether it's just a limitation of the digital compass. I know other apps do seem to be able to use the compass fine in any orientation, but this is all stupidly new to me.
I've definitely tried moving the phone in a figure of 8. The device I have to play around with is a Nexus 4.
using UnityEngine;
using System.Collections;

public class Compass : MonoBehaviour {

    // Use this for initialization
    void Start () {
        Input.location.Start ();
        Input.compass.enabled = true;
    }

    // Update is called once per frame
    void Update ()
    {
        var heading = Input.compass.trueHeading;
        transform.eulerAngles = new Vector3 (0, 0, heading);
    }
}
Preamble :)
First off, I'm not an expert (unfortunately) in the subjects I'm about to talk about, but I've still decided to share my thoughts.
Theory
The problem can be generalized in the following way. You want some continuous function that takes a 3D vector (the device orientation, in your case) and returns another vector that is orthogonal to the original vector. Theory says (see the hairy ball theorem) that for some arguments such a function must return zero vectors. When that function is a compass, zero vectors are returned when the device is oriented vertically (and this feels quite natural if you have ever used an ordinary compass).
Practice
Sometimes you want your app to tell which direction the back of the phone (the rear camera) is pointing.
Or maybe you even want a combined approach:
If the phone is oriented flat, show what is the phone's top pointing to.
If the phone is oriented vertical, show what is the phone's back pointing to.
In both cases you need to use the gyroscope in addition to the compass.
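Outside Unity, a hedged sketch of that idea on plain Android (using the fused TYPE_ROTATION_VECTOR sensor, which combines the gyroscope, accelerometer, and magnetometer) might look like the following; the class name is mine, and in Unity you would reach for Input.gyro together with the compass instead:
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class VerticalHeadingListener implements SensorEventListener {
    private final float[] rotationMatrix = new float[9];
    private final float[] remapped = new float[9];
    private final float[] orientation = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // Remap the axes so the camera axis becomes the reference; the resulting
        // azimuth is then the direction the back of the phone points when held vertically.
        SensorManager.remapCoordinateSystem(rotationMatrix,
                SensorManager.AXIS_X, SensorManager.AXIS_Z, remapped);
        SensorManager.getOrientation(remapped, orientation);
        float headingDegrees = (float) Math.toDegrees(orientation[0]); // azimuth in degrees
        // feed headingDegrees into your on-screen compass here
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed for this sketch */ }
}
Register it with SensorManager.registerListener() for the TYPE_ROTATION_VECTOR sensor; when the phone is held flat you would skip the remapping and use the azimuth from the unmapped rotation matrix, which corresponds to what the phone's top is pointing to.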
