Camera preview is dark using Camera2 - android

I am trying to use Camera2 to let an app take a simple picture. I have a working sample based on the android-Camera2Basic sample code, but the camera preview is very dark (the same problem as this other question). Following some answers I found a suitable FPS range, [15, 15], and setting it in the lockFocus() method lets the app take a clear picture with correct brightness:
private void lockFocus() {
    try {
        // This is how to tell the camera to lock focus.
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_START);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, Range.create(15, 15));
        // Tell #mCaptureCallback to wait for the lock.
        mState = STATE_WAITING_LOCK;
        mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
But the preview before taking the picture is still very dark. I tried setting the same line of code in other parts of the sample, but it is not working. How can I fix it to get the same result in the preview? I am working with a Samsung SM-P355M tablet.

Using an FPS range with equal lower and upper bounds, such as [15,15], [30,30], etc., constrains how much the AE algorithm can adjust to light changes, which may produce dark results. Ranges of that kind are meant for video recording, where a constant FPS matters. For photos you need a range with a wide spread between the lower and upper bound, such as [7,30] or [15,25].
The following method can help you find an optimal FPS range. Note that it is meant for photos, not for video recording, since it discards FPS ranges with equal lower and upper bounds.
(Adjust MIN_FPS_RANGE and MAX_FPS_RANGE to your requirements.)
@Nullable
public static Range<Integer> getOptimalFpsRange(@NonNull final CameraCharacteristics characteristics) {
    final int MIN_FPS_RANGE = 0;
    final int MAX_FPS_RANGE = 30;
    final Range<Integer>[] rangeList = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
    if ((rangeList == null) || (rangeList.length == 0)) {
        Log.e(TAG, "Failed to get FPS ranges.");
        return null;
    }
    Range<Integer> result = null;
    for (final Range<Integer> entry : rangeList) {
        int candidateLower = entry.getLower();
        int candidateUpper = entry.getUpper();
        if (candidateUpper > 1000) {
            Log.w(TAG, "Device reports FPS ranges in a 1000 scale. Normalizing.");
            candidateLower /= 1000;
            candidateUpper /= 1000;
        }
        // Discard candidates with equal or out-of-range bounds.
        final boolean discard = (candidateLower == candidateUpper)
                || (candidateLower < MIN_FPS_RANGE)
                || (candidateUpper > MAX_FPS_RANGE);
        if (!discard) {
            // Update if nothing is selected yet, or if the candidate has an upper bound
            // and a spread that are both >= those of the current result.
            final boolean update = (result == null)
                    || ((candidateUpper >= result.getUpper())
                            && ((candidateUpper - candidateLower) >= (result.getUpper() - result.getLower())));
            if (update) {
                result = Range.create(candidateLower, candidateUpper);
            }
        }
    }
    return result;
}
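For the preview itself (the remaining problem in the question), the same range has to go into the repeating preview request, not only into the one-shot capture issued by lockFocus(). A minimal sketch, assuming the CameraCharacteristics of the opened camera is available as characteristics, placed inside the existing try/catch of Camera2Basic's onConfigured() callback:
// Sketch only: characteristics is assumed to be the CameraCharacteristics of the opened camera.
Range<Integer> fpsRange = getOptimalFpsRange(characteristics);
if (fpsRange != null) {
    // A wide range lets AE adapt to the scene, so the live preview is no longer dark.
    mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, fpsRange);
}
mPreviewRequest = mPreviewRequestBuilder.build();
mCaptureSession.setRepeatingRequest(mPreviewRequest, mCaptureCallback, mBackgroundHandler);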

After lots of research it seems there is no easy fix for this (at least not with our hardware), so we implemented a new version of the camera activities, this time using the deprecated Camera API, and everything works as expected. Not really a clean solution, but so far it works for me.

Related

Is it possible to capture the images without texture view using the camera 2 API?

In my case I don't need to show the preview to the user and would like to capture the image from a service. To achieve this I have used ImageFormat.JPEG to capture the images, but the output images are really very dark. I have tried this link on StackOverflow but it is not working.
val streamConfigurationMap =
mCameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP) // Available stream configuration.
mPreviewSize = streamConfigurationMap!!.getOutputSizes(ImageFormat.JPEG)[0]
mCameraID = cameraId
mImageReader =
ImageReader.newInstance(mPreviewSize!!.width, mPreviewSize!!.height, ImageFormat.JPEG, 1)
mImageReader!!.setOnImageAvailableListener(onImageAvailable, mBackgroundHandler)
If I use a dummy surface texture view, I get the error below a few seconds after app launch:
E/BufferQueueProducer: [SurfaceTexture-1-20857-1] cancelBuffer: BufferQueue has been abandoned
First of all, you don't have to use a TextureView. The reason your preview is really dark is probably your CaptureRequest.Builder: you need to control the auto exposure, for example as I explain below.
First, when you set your surface, you should set it as such:
builder.addTarget(mImageReader.getSurface());
Now on to the brightness issue, you can control your AE like this:
builder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE,getRange());
where getRange() is:
private Range<Integer> getRange() {
    CameraCharacteristics chars = null;
    try {
        CameraManager manager = (CameraManager) ((Activity) getContext()).getSystemService(Context.CAMERA_SERVICE);
        chars = manager.getCameraCharacteristics(mCameraId);
        Range<Integer>[] ranges = chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
        Range<Integer> result = null;
        for (Range<Integer> range : ranges) {
            int upper = range.getUpper();
            // 10 - min range upper for my needs
            if (upper >= 10) {
                if (result == null || upper < result.getUpper().intValue()) {
                    result = range;
                }
            }
        }
        if (result == null) {
            result = ranges[0];
        }
        return result;
    } catch (CameraAccessException e) {
        e.printStackTrace();
        return null;
    }
}
mImageReader = ImageReader.newInstance(hardcoded_width, hardcoded_height, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(mVideoCapture, mBackgroundHandler);
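If there is no TextureView at all, the ImageReader surface itself can be the target of a repeating request, so AE has frames to converge on before the still capture. A minimal Java sketch, assuming mCameraDevice, mCaptureSession, mImageReader and mBackgroundHandler, and that the session was created with the ImageReader surface as an output (a smaller YUV_420_888 reader, as above, is a better fit for the repeating stream than a full-size JPEG one):
try {
    CaptureRequest.Builder previewBuilder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    // Stream frames into the ImageReader so auto-exposure can settle before capturing.
    previewBuilder.addTarget(mImageReader.getSurface());
    previewBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange());
    mCaptureSession.setRepeatingRequest(previewBuilder.build(), null, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}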
If you want to know more about custom brightness etc., check this out.

Android Camera2 increase brightness

I am using android camera2 in my application to take continuous images. When I use camera2, the image preview brightness is very dark compared to the original camera. I have seen this, but there is no similar requirement in that answer.
I tried to set brightness in camera2 as suggested here:
Note that this control will only be effective if android.control.aeMode != OFF. This control will take effect even when android.control.aeLock == true.
captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
captureRequestBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
captureRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, 6);
But it is still showing the preview as a dark image. The difference can be seen in the attached screenshots (original camera vs. Camera2).
And what value do I need to pass as the second parameter in:
captureRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, 6);
I used 6 because, as suggested in the docs:
For example, if the exposure value (EV) step is 0.333, '6' will mean an exposure compensation of +2 EV; -3 will mean an exposure compensation of -1 EV.
But it still has no effect on brightness.
Here it is:
Add the code below in onConfigured() and unlockFocus():
captureRequestBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE,getRange());
Using the above code you will get a better preview, but your captured picture will remain as it is. To get a better picture as well, use the same line below in captureStillPicture():
captureBuilder.set(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, getRange());
getRange is:
private Range<Integer> getRange() {
    CameraCharacteristics chars = null;
    try {
        chars = mCameraManager.getCameraCharacteristics(mCameraId);
        Range<Integer>[] ranges = chars.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES);
        Range<Integer> result = null;
        for (Range<Integer> range : ranges) {
            int upper = range.getUpper();
            // 10 - min range upper for my needs
            if (upper >= 10) {
                if (result == null || upper < result.getUpper().intValue()) {
                    result = range;
                }
            }
        }
        if (result == null) {
            result = ranges[0];
        }
        return result;
    } catch (CameraAccessException e) {
        e.printStackTrace();
        return null;
    }
}
CONTROL_AE_LOCK should be off. You have misinterpreted the doc; possibly the document itself is a bit confusing.
Note that this control will only be effective if
android.control.aeMode != OFF. This control will take effect even when
android.control.aeLock == true.
What it means is that when AE lock is ON, the exposure compensation is applied to the locked exposure, not to the instantaneous exposure at the time the picture is taken.
Even in your repeating request the exposure is locked, so it doesn't help.
Remove the AE lock and it should work.
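A minimal sketch of the corrected repeating request with the lock removed (previewSurface, captureSession, backgroundHandler and exposureCompensation are placeholder names here, not from the question's code):
try {
    captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    captureRequestBuilder.addTarget(previewSurface);
    captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
    // No CONTROL_AE_LOCK here: AE stays free to adapt, and the compensation is applied on top of it.
    captureRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, exposureCompensation);
    captureSession.setRepeatingRequest(captureRequestBuilder.build(), null, backgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}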
When setting CONTROL_AE_EXPOSURE_COMPENSATION, the second parameter is, as defined by the docs, relative to CameraCharacteristics.CONTROL_AE_COMPENSATION_STEP:
The adjustment is measured as a count of steps, with the step size defined by android.control.aeCompensationStep and the allowed range by android.control.aeCompensationRange.
The value of 6 for +2 EV is correct only when the step is 0.333, which is just an example.
The following code will give you the exposure compensation value to use for +2 EV:
CameraManager manager = (CameraManager)this.getSystemService(Context.CAMERA_SERVICE);
CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
double exposureCompensationSteps = characteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_STEP).doubleValue();
int exposureCompensation = (int)( 2.0 / exposureCompensationSteps );
I would also suggest checking that the value is within the range specified by CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE.
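A minimal sketch of that check, clamping the computed value into the advertised range before applying it (captureRequestBuilder is assumed to be the question's builder):
Range<Integer> compensationRange =
        characteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE);
if (compensationRange != null) {
    // Range.clamp() pins the value to [lower, upper] if the device's range is narrower than +2 EV.
    exposureCompensation = compensationRange.clamp(exposureCompensation);
}
captureRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, exposureCompensation);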
You can try this
public void setBrightness(int value) {
    int brightness = (int) (minCompensationRange + (maxCompensationRange - minCompensationRange) * (value / 100f));
    previewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, brightness);
    applySettings();
}

private void applySettings() {
    try {
        captureSession.setRepeatingRequest(previewRequestBuilder.build(), null, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
I messed around with CaptureRequest.SENSOR_SENSITIVITY and it worked great on my Samsung s3, s7 and s8 phones.
You can get the CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE
sensitivity_range = chars.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
On my S7, the range is from the mid-50s to more than 3000. I then set it to 1500 as follows:
mCaptureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, 1500);
It brightened the preview by a few factors.
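For reference, a hedged sketch of doing that while staying inside the advertised range. Note that on many devices SENSOR_SENSITIVITY is only honored once auto-exposure is switched off and an exposure time is set manually, so whether the AE-off lines are needed depends on the device; chars and mCaptureRequest are assumed to be the CameraCharacteristics and CaptureRequest.Builder already in use above:
Range<Integer> sensitivityRange = chars.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
if (sensitivityRange != null) {
    int iso = sensitivityRange.clamp(1500); // keep the value inside the device's range
    // On many devices manual sensitivity only takes effect with AE disabled.
    mCaptureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
    mCaptureRequest.set(CaptureRequest.SENSOR_SENSITIVITY, iso);
    mCaptureRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, 16_666_666L); // ~1/60 s, in nanoseconds
}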
First, don't lock autoexposure - that's not needed when adjusting exposure compensation.
Second, did you call CameraCaptureSession.setRepeatingRequest with your new capture request?

Android Camera.Parameters setPictureSize not working

I am trying to set the best possible output picture size on my camera object, so that I can get a perfectly downscaled sample image and display it.
While debugging I observed that I am setting the output picture size to exactly my screen dimensions, but when I decode the bounds of the image returned by the camera, I get some larger numbers!
Also, note that I am not setting my display dimensions directly as the output picture size. The code used to calculate and set the output picture size is given below.
I am using this code for devices with API level < 21, so using Camera shouldn't be a problem.
I have no idea why I am getting this behavior. Thanks in advance for the help!
Defining Camera parameter
Camera.Parameters parameters = mCamera.getParameters();
setOutputPictureSize(parameters.getSupportedPictureSizes(), parameters); // updates parameters in this function
// set the modified parameters back on mCamera
mCamera.setParameters(parameters);
Optimal picture size calculation
private void setOutputPictureSize(List<Camera.Size> availablePicSize, Camera.Parameters parameters) {
    if (availablePicSize != null) {
        int bestScore = (1 << 30); // set an impossible value
        Camera.Size bestPictureSize = null;
        for (Camera.Size pictureSize : availablePicSize) {
            int curScore = calcOutputScore(pictureSize); // calculate the score of the current picture size
            if (curScore < bestScore) { // update best picture size
                bestScore = curScore;
                bestPictureSize = pictureSize;
            }
        }
        if (bestPictureSize != null) {
            parameters.setPictureSize(bestPictureSize.width, bestPictureSize.height);
        }
    }
}

// Calculates the score of a target picture size compared to the screen dimensions.
// Scores are non-negative, where 0 is the best score.
private int calcOutputScore(Camera.Size pictureSize) {
    Point displaySize = AppData.getDiaplaySize();
    int score = (1 << 30); // set an impossible value
    if (pictureSize.height < displaySize.x || pictureSize.width < displaySize.y) {
        return score; // return the worst possible score
    }
    for (int i = 1; ; ++i) {
        if (displaySize.x * i > pictureSize.height || displaySize.y * i > pictureSize.width) {
            break;
        }
        score = Math.min(score, Math.max(pictureSize.height - displaySize.x * i, pictureSize.width - displaySize.y * i));
    }
    return score;
}
Finally I resolved the issue after many attempts! Below are my findings:
Step 1. If we are already previewing, call mCamera.stopPreview()
Step 2. Set the modified parameters by calling mCamera.setParameters(...)
Step 3. Start previewing again by calling mCamera.startPreview()
If I call mCamera.setParameters() without stopping the preview (assuming the camera is previewing), the camera seems to ignore the updated parameters.
I came up with this solution after several trials and errors. If anyone knows a better way to update parameters during preview, please share.
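For clarity, a minimal sketch of that stop/set/start sequence (mCamera and isPreviewRunning are placeholder names):
// Steps 1-3 from above: stop the preview, push the new parameters, restart the preview.
if (isPreviewRunning) {
    mCamera.stopPreview();
}
Camera.Parameters parameters = mCamera.getParameters();
setOutputPictureSize(parameters.getSupportedPictureSizes(), parameters);
mCamera.setParameters(parameters);
mCamera.startPreview();
isPreviewRunning = true;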

Samsung Galaxy SIII mediaRecorder() issues. (Corrupt Video)

I have a problem with the Samsung Galaxy SIII. In the app we are creating, we use a MediaRecorder to record a video of the user with the front camera. I have looked thoroughly through the documentation and all over the forums, and I have seen a few similar posts for the SII or about crashes in general, but those fixes unfortunately did not work for us.
The recording process is as follows: there is a function (code provided below) that checks each device's compatible camera resolutions, then we check whether they meet our specification (currently 480p or less). If one meets this criterion, we use that size in videoSize() (the function Android provides to set the recording size of a video). This seems pretty trivial and looks like it should work on almost any device, and it does work on a couple of other devices we've tested (e.g. Galaxy S4 and Galaxy Stellar). But for some reason the SIII is being very difficult. When you record on the SIII at any resolution lower than 720p, the video becomes corrupt and plays back as a multi-colored screen (screenshots in the link below). Why not just record the video at 720p or above, then? Unfortunately we need lower video sizes so the upload is not such a heavy data load over a cell provider's network.
So my question is: why does recording corrupt the video at any resolution lower than 720p, when the resolution used is pulled from a list of device-supported resolutions?
This is the function to pull supported resolutions from the device.
public Camera.Size getSupportedRecordingSizes() {
    Camera.Size result = null;
    Camera.Parameters params = camera.getParameters();
    List<Size> sizes = params.getSupportedPictureSizes();
    for (Size s : sizes) {
        if (s.height < 481 && s.width < 721) {
            if (result == null) {
                result = s;
            } else {
                int resultVideoSize = result.width * result.height;
                int newVideoSize = s.width * s.height;
                if (newVideoSize > resultVideoSize) {
                    result = s;
                }
            }
        }
    }
    if (!sizes.isEmpty() && result == null) {
        Context context = getApplicationContext();
        CharSequence text = "Used default first value";
        int duration = Toast.LENGTH_SHORT;
        Toast toast = Toast.makeText(context, text, duration);
        toast.show();
        for (Size size : sizes) {
            if (result == null) {
                result = size;
            }
            int previousSize = result.width * result.height;
            int newSize = size.width * size.height;
            if (newSize < previousSize) {
                result = size;
            }
        }
    }
    return (result);
}
This is the code for our mediaRecorder (shortened for simplicity)
mediaRecorder = new MediaRecorder();
setCameraDisplayOrientaion();
// sets device-supported video size less than or equal to 480p (720x480 resolution)
Camera.Size vidSize = getSupportedRecordingSizes();
camera.unlock();
mediaRecorder.setCamera(camera);
mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
mediaRecorder.setOutputFormat(videoFormat);
mediaRecorder.setAudioEncoder(audioEncoder);
mediaRecorder.setVideoEncoder(videoEncoder);
mediaRecorder.setOutputFile(fullQuestionPath);
if (vidSize == null) {
    mediaRecorder.setVideoSize(480, 360);
} else {
    mediaRecorder.setVideoSize(vidSize.width, vidSize.height);
}
mediaRecorder.setVideoFrameRate(videoFrameRate);
mediaRecorder.setPreviewDisplay(surfaceHolder.getSurface());
// set the bitrate manually if possible
if (android.os.Build.VERSION.SDK_INT > 7) {
    mediaRecorder.setVideoEncodingBitRate(videoBitrate);
}
try {
    mediaRecorder.prepare();
    mediaRecorder.start();
} catch (Exception e) {
    Log.e(ResponseActivity.class.toString(), e.getMessage());
    releaseMediaRecorder();
}
The images that correlate to this problem are here: http://imgur.com/a/8F7Tb (sorry about not posting them earlier).
The order of the images is as follows: 1) Before recording, the preview is running. 2) Recording, the preview shows the current recording. 3) Stop recording; the recording has stopped, as has the preview. 4) Playback, this is where the issue is: it shows the multi-colored corrupt image, which is the same if you pull the file from the device directly.
EDIT: Note, I have also tried using CamcorderProfile and setting the quality to low or high. Setting QUALITY_HIGH forces 720p, at which recording works, but QUALITY_LOW, despite everyone else having quite the opposite problem, does not work for me.
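For reference, a rough sketch of what that CamcorderProfile attempt looks like, preferring an explicit 480p profile when the device advertises one (cameraId here is assumed to be the front camera's id); setProfile() replaces the individual output format, encoder, size and bitrate calls and must come after the audio/video sources are set:
CamcorderProfile profile;
if (CamcorderProfile.hasProfile(cameraId, CamcorderProfile.QUALITY_480P)) {
    // An explicit 480p profile, if present, sits between QUALITY_LOW and QUALITY_HIGH.
    profile = CamcorderProfile.get(cameraId, CamcorderProfile.QUALITY_480P);
} else {
    profile = CamcorderProfile.get(cameraId, CamcorderProfile.QUALITY_HIGH); // 720p, known to work on this SIII
}
mediaRecorder.setProfile(profile);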
EDIT: Anyone have an idea to point me in the right direction?

Camera preview processing on Android

I'm making a line follower for my robot on Android (to learn Java/Android programming). Currently I'm facing an image-processing problem: the camera preview returns an image in a format called YUV, which I want to threshold in order to know where the line is. How would one do that?
As of now I've succeeded in getting something: I can definitely read data from the camera preview and, by some miracle, even tell whether the light intensity is over or under a certain value at a certain area of the screen. My goal is to draw the robot's path on an overlay over the camera preview; that too works to some extent, but the problem is the YUV management.
As you can see, not only is the dark area drawn sideways, it also repeats itself four times, and the preview image is stretched. I cannot figure out how to fix these problems.
Here's the relevant part of code:
public void surfaceCreated(SurfaceHolder arg0) {
    // TODO Auto-generated method stub
    // camera setup
    mCamera = Camera.open();
    Camera.Parameters parameters = mCamera.getParameters();
    List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
    for (int i = 0; i < sizes.size(); i++) {
        Log.i("CS", i + " - width: " + sizes.get(i).width + " height: " + sizes.get(i).height + " size: " + (sizes.get(i).width * sizes.get(i).height));
    }
    // change preview size
    final Camera.Size cs = sizes.get(8);
    parameters.setPreviewSize(cs.width, cs.height);
    // initialize image data array
    imgData = new int[cs.width * cs.height];
    // make picture gray scale
    parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    mCamera.setParameters(parameters);
    // change display size
    LayoutParams params = (LayoutParams) mSurfaceView.getLayoutParams();
    params.height = (int) (mSurfaceView.getWidth() * cs.height / cs.width);
    mSurfaceView.setLayoutParams(params);
    LayoutParams overlayParams = (LayoutParams) swOverlay.getLayoutParams();
    overlayParams.width = mSurfaceView.getWidth();
    overlayParams.height = mSurfaceView.getHeight();
    swOverlay.setLayoutParams(overlayParams);
    try {
        mCamera.setPreviewDisplay(mSurfaceHolder);
        mCamera.setDisplayOrientation(90);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
        mCamera.stopPreview();
        mCamera.release();
    }
    // callback every time a new frame is available
    mCamera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            // create bitmap from camera preview
            int pixel, pixVal, frameSize = cs.width * cs.height;
            for (int i = 0; i < frameSize; i++) {
                pixel = (0xff & ((int) data[i])) - 16;
                if (pixel < threshold) {
                    pixVal = 0;
                } else {
                    pixVal = 1;
                }
                imgData[i] = pixVal;
            }
            int cp = imgData[(int) (cs.width * (0.5 + (cs.height / 2)))];
            //Log.i("CAMERA", "Center pixel RGB: " + cp);
            debug.setText("Center pixel: " + cp);
            // process preview image data
            Paint paint = new Paint();
            paint.setColor(Color.YELLOW);
            int start, finish, last;
            start = finish = last = -1;
            float x_ratio = mSurfaceView.getWidth() / cs.width;
            float y_ratio = mSurfaceView.getHeight() / cs.height;
            // display calculated path on overlay using canvas
            Canvas overlayCanvas = overlayHolder.lockCanvas();
            overlayCanvas.drawColor(0, Mode.CLEAR);
            // start by finding the tape from the bottom of the screen
            for (int y = cs.height; y > 0; y--) {
                for (int x = 0; x < cs.width; x++) {
                    pixel = imgData[y * cs.height + x];
                    if (pixel == 1 && last == 0 && start == -1) {
                        start = x;
                    } else if (pixel == 0 && last == 1 && finish == -1) {
                        finish = x;
                        break;
                    }
                    last = pixel;
                }
                //overlayCanvas.drawLine(start * x_ratio, y * y_ratio, finish * x_ratio, y * y_ratio, paint);
                //start = finish = last = -1;
            }
            overlayHolder.unlockCanvasAndPost(overlayCanvas);
        }
    });
}
This code sometimes generates an error when quitting the application, due to some method being called after release, which is the least of my problems.
UPDATE:
Now that the orientation problem is fixed (CCD sensor orientation), I'm still facing the repetition problem; this is probably related to my YUV data management...
Your surface and camera management looks correct, but I would double-check that the camera actually accepted the preview size settings (some camera implementations silently reject some settings).
As you are working in portrait mode, keep in mind that the camera does not care about the phone's orientation: its coordinate origin is determined by the CCD chip, sits at the right corner, and the scan direction is top to bottom and right to left, quite different from your overlay canvas. (If you were in landscape mode, everything would match.) This is certainly the source of the odd drawing result.
Your thresholding is a bit naive and not very useful in real life; I would suggest adaptive thresholding. In our javaocr project (pure Java, also with Android demos) we implemented efficient Sauvola binarisation (see the demos):
http://sourceforge.net/projects/javaocr/
Binarisation performance can be improved by working only on single image rows (patches welcome).
The issue with the UV part of the image is easy: the default format is NV21, luminance comes first, and it is just a byte stream; you do not need the UV part of the image at all (look into the demos above).
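Concretely, in the default NV21 preview format the first width * height bytes of the buffer are the Y (luminance) plane and the interleaved VU bytes follow, so a simple threshold only needs those bytes. A small sketch of the thresholding loop, using the question's cs, imgData and threshold, and indexing rows by width rather than height:
public void onPreviewFrame(byte[] data, Camera camera) {
    int width = cs.width;
    int height = cs.height;
    // NV21: the first (width * height) bytes are luminance; the UV data after them can be ignored.
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int luma = data[y * width + x] & 0xff;   // row offset uses width, not height
            imgData[y * width + x] = (luma < threshold) ? 0 : 1;
        }
    }
}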
