How to set Exposure on Camera2 api - android

I want to set the camera exposure: when the camera starts I want to set a higher value, and when it stops, a lower value, so I used the code below. On the emulator it shows a range of -9 to 9, but when I attach a physical USB camera it shows 0 for both the lower and upper bounds. When I try to get the exposure time range, it returns null as well:
Range exposure_time = cameraCharacteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
public void setExposure(Context context, double exposureAdjustment) {
    CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    try {
        camId = manager.getCameraIdList()[0];
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    try {
        cameraCharacteristics = manager.getCameraCharacteristics(camId);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    Range<Integer> range1 = cameraCharacteristics.get(CameraCharacteristics.CONTROL_AE_COMPENSATION_RANGE);
    Log.d(TAG, "range1 " + range1);
    Integer minExposure = range1.getLower();
    Log.d(TAG, "minExposure " + minExposure);
    Integer maxExposure = range1.getUpper();
    Log.d(TAG, "maxExposure " + maxExposure);
    if (minExposure != 0 || maxExposure != 0) {
        float newCalculatedValue = 0;
        if (exposureAdjustment >= 0) {
            newCalculatedValue = (float) (maxExposure * exposureAdjustment);
        } else {
            newCalculatedValue = (float) (minExposure * exposureAdjustment);
        }
        if (requestBuilder != null) {
            CaptureRequest captureRequest = requestBuilder.build();
            try {
                captureSession.setRepeatingRequest(captureRequest, captureCallback, null);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
            requestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, (int) newCalculatedValue);
            Log.d(TAG, "New Calculated Value " + newCalculatedValue);
            try {
                captureSession.capture(captureRequest, captureCallback, null);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
    }
}

First, you need to find out the available auto-exposure modes:
final int[] availableAeModes = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_MODES);
for (int mode : availableAeModes) {
    Timber.d("AE mode : %d", mode);
}
The meaning of these integer values is documented under CONTROL_AE_MODE.
You will only be able to control the exposure manually if CONTROL_AE_MODE_OFF is among the available AE modes; otherwise, you won't be able to control the exposure.
From here on, I am assuming that CONTROL_AE_MODE_OFF is an available mode on your camera.
You can control the exposure by manipulating these two parameters (there are other parameters as well through which you can control the exposure, but these two have worked perfectly for me):
SENSOR_EXPOSURE_TIME
SENSOR_SENSITIVITY
For setting SENSOR_SENSITIVITY, check the range supported by your camera:
final Range<Integer> isoRange =
        characteristics.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE);
if (null != isoRange) {
    Timber.d("iso range => lower : %d, higher : %d", isoRange.getLower(), isoRange.getUpper());
} else {
    Timber.d("iso range => NULL NOT SUPPORTED");
}
For setting SENSOR_EXPOSURE_TIME, check the range supported by your camera:
final Range<Long> exposureTimeRange = characteristics.get(CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE);
if (null != exposureTimeRange) {
    Timber.d("exposure time range => lower : %d, higher : %d", exposureTimeRange.getLower(), exposureTimeRange.getUpper());
} else {
    Timber.d("exposure time range => NULL NOT SUPPORTED");
}
Now, you have the range of both exposure time and sensitivity.
The next step is to configure the preview with these values.
This is how you configure your preview:
final CaptureRequest.Builder previewRequest =
        // it's important to use the manual template, because you want to change the exposure manually
        this.cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_MANUAL);
previewRequest.addTarget(this.previewReader.getSurface());
previewRequest.set(CaptureRequest.JPEG_ORIENTATION, 0);
// setting the AE mode to the off state
previewRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
// we don't want to handle white balance manually, so set the white-balance mode to auto
previewRequest.set(CaptureRequest.CONTROL_AWB_MODE,
        CaptureRequest.CONTROL_AWB_MODE_AUTO);
// setting the sensor sensitivity (ISO)
previewRequest.set(CaptureRequest.SENSOR_SENSITIVITY, <YOUR_SELECTED_SENSITIVITY>);
// setting the sensor exposure time
previewRequest.set(CaptureRequest.SENSOR_EXPOSURE_TIME, <YOUR_SELECTED_EXPOSURE_TIME>);
this.previewCaptureSession.setRepeatingRequest(previewRequest.build(),
        null, this.cameraHandler);
Note: you can see that I use TEMPLATE_MANUAL. Once the template is set to manual, all three auto processes (auto-exposure, auto-white-balance, and auto-focus) become manual.
The code above doesn't take care of setting focus, since I used it on a camera that didn't have auto-focus.
If your camera has auto-focus, then you will have to handle setting the focus separately.
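As a supplement: the framework rejects SENSOR_EXPOSURE_TIME and SENSOR_SENSITIVITY values outside the advertised ranges, so it is worth clamping any requested value into the queried range before calling set(). A minimal sketch of that step (plain Java, no Android dependency; the class and method names are my own):

```java
// Hypothetical helper: clamp manual-exposure values into the ranges reported
// by SENSOR_INFO_EXPOSURE_TIME_RANGE / SENSOR_INFO_SENSITIVITY_RANGE before
// they are handed to CaptureRequest.set(...).
final class ManualExposure {
    // exposure time is expressed in nanoseconds in camera2
    static long clampExposureTimeNs(long requestedNs, long minNs, long maxNs) {
        return Math.max(minNs, Math.min(maxNs, requestedNs));
    }

    static int clampIso(int requestedIso, int minIso, int maxIso) {
        return Math.max(minIso, Math.min(maxIso, requestedIso));
    }
}
```

For example, asking for a 1/10 s exposure on a sensor whose advertised range tops out at 33 ms would be clamped to 33 ms instead of the framework rejecting the request.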

Getting 0 for both the lower and upper bound of the exposure compensation range means that the device running your app doesn't support exposure adjustments.
A null value for SENSOR_INFO_EXPOSURE_TIME_RANGE is likewise returned when the device doesn't support this feature or doesn't let you adjust it.
You can read more about that from the official docs.
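One way to make a method like the original setExposure() robust against such devices is to treat the degenerate [0, 0] range as "no compensation support". A sketch of that mapping logic (plain Java; the class and method names are hypothetical):

```java
// Hypothetical helper: maps an adjustment in [-1, 1] onto the device's
// CONTROL_AE_COMPENSATION_RANGE. A [0, 0] range means the device does not
// support exposure compensation, so the neutral value 0 is returned.
final class ExposureCompensation {
    static int fromAdjustment(double adjustment, int minExposure, int maxExposure) {
        if (minExposure == 0 && maxExposure == 0) {
            return 0; // unsupported: only the neutral setting exists
        }
        double clamped = Math.max(-1.0, Math.min(1.0, adjustment));
        // positive adjustments scale toward the upper bound,
        // negative ones toward the lower bound
        return (int) Math.round(clamped >= 0 ? maxExposure * clamped
                                             : minExposure * -clamped);
    }
}
```

The returned value is what would be passed to CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION before building the request.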


Is it possible (if yes, how) to access a MIUI phone's depth camera directly?

I know that my phone and other models have a depth camera. I have used Portrait mode and extracted the depth information from the image using Desktop tools. I have attempted to use Unity's WebCamTexture.depthCameraName to do this on the device, but to no avail. Is this possible, or is the depth camera reserved for the camera app on MIUI?
Certainly, there might be the possibility to make the user take a photograph in the camera app and import it, but my application would benefit greatly from being able to read out this data in real time. I would appreciate any pointers on what to research, thank you in advance.
I would just like to add that if this is doable in Unity, that would be my preferred solution. However, if it has to be, I can make do with any other XR solution for android (position info will be relevant to the project)
As far as I know, there is a way to get a depth image on Android. With the camera2 API, you can use CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT to find the depth camera's camera ID and use it.
For example:
private String DepthCameraID() {
    try {
        for (String camera : cameraManager.getCameraIdList()) {
            CameraCharacteristics chars = cameraManager.getCameraCharacteristics(camera);
            final int[] capabilities = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
            boolean facingBack = chars.get(CameraCharacteristics.LENS_FACING) == CameraMetadata.LENS_FACING_BACK;
            boolean depthCapable = false;
            for (int capability : capabilities) {
                boolean capable = capability == CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT;
                depthCapable = depthCapable || capable;
            }
            if (depthCapable && facingBack) {
                SizeF sensorSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
                Log.i(TAG, "Sensor size: " + sensorSize);
                float[] focalLengths = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
                if (focalLengths.length > 0) {
                    float focalLength = focalLengths[0];
                    double fov = 2 * Math.atan(sensorSize.getWidth() / (2 * focalLength));
                    Log.i(TAG, "Calculated FoV: " + fov);
                }
                return camera;
            }
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, "Could not initialize Camera Cache");
        e.printStackTrace();
    }
    return null;
}
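Once a depth-capable camera ID is found, frames are typically delivered as ImageFormat.DEPTH16, where each 16-bit sample packs the depth in millimeters into the low 13 bits and a 3-bit confidence code into the high bits. A small decoder for that layout (plain Java; the class name is my own):

```java
// Decodes ImageFormat.DEPTH16 samples: bits 0-12 hold the depth in
// millimeters, bits 13-15 hold a confidence code.
final class Depth16 {
    static int depthMillimeters(short sample) {
        return sample & 0x1FFF;
    }

    static int confidenceCode(short sample) {
        return (sample >> 13) & 0x7;
    }
}
```

On Android you would read these samples from the ShortBuffer of plane 0 of an Image acquired from an ImageReader created with ImageFormat.DEPTH16.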

Burst capture at 10 FPS with preview without lag

I need to build an application with an activity which captures images in burst mode at 10 FPS.
My problem is that I am able to capture pictures very quickly in burst mode, but my buffer of images fills up before I have time to process them (I just want to put the images into an ArrayList).
So my buffer fills up and my app crashes.
The buffer size I've chosen is 50 (I cannot use too much memory).
It seems that my burst mode can capture at 30 FPS, but I can only process the images at 10 FPS.
So I want to slow down the capturing, but when I do that (by waiting before repeating another capture), it slows down my preview too, so the preview lags.
I don't want to save the images; I just want to store them so I can process them later (in another activity).
Do you have any ideas, either to slow down the capturing or to speed up the processing?
Code:
My code is based on the Camera2Basic sample.
private void captureStillPicture() {
    try {
        final Activity activity = getActivity();
        if (null == activity || null == mCameraDevice) {
            return;
        }
        mPreviewRequestBuilder.set(CaptureRequest.EDGE_MODE, CaptureRequest.EDGE_MODE_OFF);
        mPreviewRequestBuilder.set(CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE, CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE_ON);
        mPreviewRequestBuilder.set(CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE, CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
        mPreviewRequestBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE, CaptureRequest.NOISE_REDUCTION_MODE_OFF);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
        mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AWB_LOCK, true);
        mCaptureSession.stopRepeating();
        mCaptureSession.abortCaptures();
        mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
        mCaptureSession.capture(mPreviewRequestBuilder.build(), captureCallback, null);
        mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
blockCapture() makes it possible to wait between captures:
private void blockCapture() {
    mPreviewRequestBuilder.removeTarget(mImageReader.getSurface());
    while (Math.abs(timerBurst - System.currentTimeMillis()) < 1000) {}
    System.out.println("ELAPSED :::::::: " + Math.abs(timerBurst - System.currentTimeMillis()));
    timerBurst = System.currentTimeMillis();
    mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
    for (int i = 0; i < NB_BURST; i++) {
        captureList.add(mPreviewRequestBuilder.build());
    }
}
private final CameraCaptureSession.CaptureCallback captureCallback
        = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        showToast("Saved: " + mFile);
        Log.d(TAG, "Saved: " + mFile);
        mCounterImageForSave++;
        mCounterBurst++;
        if (mCounterImageForSave <= 1) {
            timerBurst = System.currentTimeMillis();
        }
        if (mCounterBurst >= NB_BURST && isPlayed) {
            mCounterBurst = 0;
            try {
                blockCapture();
                mCaptureSession.capture(mPreviewRequestBuilder.build(), captureCallback, null);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
        if (mCounterBurst >= NB_BURST || !isPlayed)
            unlockFocus();
    }
};
I have tried capturing YUV_420_888 images instead of JPEG, but my images were all black, and I cannot convert them.
Sorry if I made mistakes with my English...

Size not in valid set when recording without preview

I am trying to record video using a Vivo X20 (Android 7.1.1) and the camera2 API, without using a preview and without recording sound (strictly recording HD video only).
I'm currently stuck because I cannot figure out how to successfully call MediaRecorder.setVideoSize() and record a video in HD. Currently when I run the app the log shows the error: Surface with size (w=1920, h=1080) and format 0x1 is not valid, size not in valid set: [1440x1080, 1280x960, 1280x720, 960x540, 800x480, 720x480, 768x432, 640x480, 384x288, 352x288, 320x240, 176x144]
The phone's stock camera app can record video up to 4K so I'm definitely missing something here. There are a total of two camera devices identified by CameraManager. When I use getOutPutFormats() from CameraCharacteristics it shows the same valid set of resolutions for both cameras and it is the same range as the above error message.
Below is the code I am using to initialize MediaRecorder and initiate a capture session:
public void StartRecordingVideo() {
    Initialize();
    recordingVideo = true;
    cameraManager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
    try {
        if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
            String[] cameraIDs = cameraManager.getCameraIdList();
            //LogAllCameraInfo();
            if (cameraIDs != null) {
                for (int x = 0; x < cameraIDs.length; x++) {
                    Log.d(LOG_ID, "ID: " + cameraIDs[x]);
                }
            }
            cameraManager.openCamera(deviceCameraID, cameraStateCallback, handler);
            Log.d(LOG_ID, "Successfully opened camera");
        } else {
            throw new IllegalAccessException();
        }
    } catch (Exception e) {
        recordingVideo = false;
        Log.e(LOG_ID, "Error during record video start: " + e.getMessage());
    }
}
private void Initialize() {
    videoRecordThread = new HandlerThread("video_capture");
    videoRecordThread.start();
    handler = new Handler(videoRecordThread.getLooper());
    try {
        vidRecorder = new MediaRecorder();
        vidRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
        vidRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        vidRecorder.setVideoFrameRate(30);
        vidRecorder.setCaptureRate(30);
        vidRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);
        vidRecorder.setVideoEncodingBitRate(10000000);
        vidRecorder.setVideoSize(1920, 1080);
        String videoFilename = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS) + File.separator + System.currentTimeMillis() + ".mp4";
        vidRecorder.setOutputFile(videoFilename);
        Log.d(LOG_ID, "Starting video: " + videoFilename);
        vidRecorder.prepare();
    } catch (Exception e) {
        Log.e(LOG_ID, "Error during Initialize: " + e.getMessage());
    }
}
And the onReady/onSurfacePrepared/Camera onOpened callbacks:
@Override
public void onReady(CameraCaptureSession session) {
    Log.d(LOG_ID, "onReady: ");
    super.onReady(session);
    try {
        CaptureRequest.Builder builder = deviceCamera.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
        builder.addTarget(vidRecorder.getSurface());
        CaptureRequest request = builder.build();
        session.setRepeatingRequest(request, null, handler);
        vidRecorder.start();
    } catch (CameraAccessException e) {
        Log.d(LOG_ID, "Error on Ready: " + e.getMessage());
    }
}

@Override
public void onSurfacePrepared(CameraCaptureSession session, Surface surface) {
    Log.d(LOG_ID, "onSurfacePrepared: ");
    super.onSurfacePrepared(session, surface);
}

@Override
public void onOpened(CameraDevice camera) {
    Log.d(LOG_ID, "onOpened: ");
    deviceCamera = camera;
    try {
        camera.createCaptureSession(Arrays.asList(vidRecorder.getSurface()), recordSessionStateCallback, handler);
    } catch (CameraAccessException e) {
        Log.d(LOG_ID, "onOpened: " + e.getMessage());
    }
}
I've tried messing with the order of calls and the output format/encoder with no luck. I am sure that I have all the required permissions. Thanks in advance for your time!
This device most likely supports camera2 at the LEGACY level; check what the output of INFO_SUPPORTED_HARDWARE_LEVEL is.
LEGACY devices are effectively running camera2 on top of the legacy android.hardware.Camera API (more complex than that, but roughly true); as a result, their capabilities via camera2 are restricted.
The maximum recording resolution is one significant problem; android.hardware.Camera records videos via a magic path that the LEGACY mapping layer cannot directly use (there's no Surface involved). As a result, camera2 LEGACY can only record at the maximum preview resolution supported by android.hardware.Camera, not at the maximum recording resolution.
Sounds like this device has no support for 1:1 1080p preview, which is pretty unusual for a device launched so recently.
You can verify whether the set of supported preview sizes in the deprecated Camera API matches the list you get in your error; if it doesn't, then there may be an OS bug in generating the list, so it'd be good to know.
But in general, you can't request sizes that aren't enumerated in the CameraCharacteristics StreamConfigurationMap for the camera, no matter what the feature list on the side of the box says. Sometimes the OEM camera app has magic hooks to enable features that normal apps can't get to, often because the feature only works in some very specific set of circumstances which normal apps wouldn't know how to replicate.
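To check the level the answer mentions, read INFO_SUPPORTED_HARDWARE_LEVEL from the device's CameraCharacteristics (characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)). A sketch that maps the returned integer to a readable name (plain Java so it runs anywhere; the constant values mirror CameraMetadata and the class name is my own):

```java
// Maps INFO_SUPPORTED_HARDWARE_LEVEL values to names. The integers mirror
// the CameraMetadata constants: LIMITED=0, FULL=1, LEGACY=2, LEVEL_3=3,
// EXTERNAL=4 (EXTERNAL was added in API 28).
final class HardwareLevel {
    static String name(int level) {
        switch (level) {
            case 0:  return "LIMITED";
            case 1:  return "FULL";
            case 2:  return "LEGACY";
            case 3:  return "LEVEL_3";
            case 4:  return "EXTERNAL";
            default: return "UNKNOWN(" + level + ")";
        }
    }
}
```

If this logs LEGACY on the Vivo X20, the recording-size restriction described above is the expected behavior rather than a bug in your code.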

How to get each frame data using camera2 API in Android 5.0 in realtime

I am working with Camera2Basic now and trying to get each frame's data to do some image processing. I am using the camera2 API on Android 5.0; everything is fine when only doing the camera preview, and it is fluid. But the preview stutters when I use the ImageReader.OnImageAvailableListener callback to get each frame's data, and this causes a bad user experience.
The following is my related codes:
This is the setup for the camera and ImageReader; I set the image format to YUV_420_888:
public <T> Size setUpCameraOutputs(CameraManager cameraManager, Class<T> kClass, int width, int height) {
    boolean flagSuccess = true;
    try {
        for (String cameraId : cameraManager.getCameraIdList()) {
            CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(cameraId);
            // choose the front or back camera
            if (FLAG_CAMERA.BACK_CAMERA == mChosenCamera &&
                    CameraCharacteristics.LENS_FACING_BACK != characteristics.get(CameraCharacteristics.LENS_FACING)) {
                continue;
            }
            if (FLAG_CAMERA.FRONT_CAMERA == mChosenCamera &&
                    CameraCharacteristics.LENS_FACING_FRONT != characteristics.get(CameraCharacteristics.LENS_FACING)) {
                continue;
            }
            StreamConfigurationMap map = characteristics.get(
                    CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            Size largestSize = Collections.max(
                    Arrays.asList(map.getOutputSizes(ImageFormat.YUV_420_888)),
                    new CompareSizesByArea());
            mImageReader = ImageReader.newInstance(largestSize.getWidth(), largestSize.getHeight(),
                    ImageFormat.YUV_420_888, 3);
            mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
            ...
            mCameraId = cameraId;
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (NullPointerException e) {
    }
    ......
}
When the camera has opened successfully, I create a CameraCaptureSession for the camera preview:
private void createCameraPreviewSession() {
    if (null == mTexture) {
        return;
    }
    // We configure the size of default buffer to be the size of camera preview we want.
    mTexture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    // This is the output Surface we need to start preview
    Surface surface = new Surface(mTexture);
    // We set up a CaptureRequest.Builder with the output Surface.
    try {
        mPreviewRequestBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        mPreviewRequestBuilder.addTarget(mImageReader.getSurface());
        mPreviewRequestBuilder.addTarget(surface);
        // We create a CameraCaptureSession for camera preview
        mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        if (null == mCameraDevice) {
                            return;
                        }
                        // when the session is ready, we start displaying the preview
                        mCaptureSession = session;
                        // Finally, we start displaying the camera preview
                        mPreviewRequest = mPreviewRequestBuilder.build();
                        try {
                            mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                    mCaptureCallback, mBackgroundHandler);
                        } catch (CameraAccessException e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                    }
                }, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
The last part is the ImageReader.OnImageAvailableListener callback:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Log.d(TAG, "The onImageAvailable thread id: " + Thread.currentThread().getId());
        Image readImage = reader.acquireLatestImage();
        readImage.close();
    }
};
Maybe my setup is wrong, but I have tried several times and it doesn't work. Maybe there is another way to get frame data rather than ImageReader, but I don't know it.
Does anybody know how to get each frame's data in real time?
I do not believe that Chen is correct. The image format has almost no effect on the speed on the devices I have tested. Instead, the problem seems to be the image size. On an Xperia Z3 Compact with the image format YUV_420_888, I am offered a bunch of different options in the StreamConfigurationMap's getOutputSizes() method:
[1600x1200, 1280x720, 960x720, 720x480, 640x480, 480x320, 320x240, 176x144]
For these respective sizes, the maximum fps I get when setting mImageReader.getSurface() as a target for the mPreviewRequestBuilder are:
[13, 18, 25, 28, 30, 30, 30, 30 ]
So one solution is to use a lower resolution to achieve the rate you want. For the curious... note that these timings do not seem to be affected by the line
mPreviewRequestBuilder.addTarget(surface);
...
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
I was worried that adding the surface on the screen might be adding overhead, but if I remove that first line and change the second to
mCameraDevice.createCaptureSession(Arrays.asList(mImageReader.getSurface()),
then I see the timings change by less than 1 fps. So it doesn't seem to matter whether you are also displaying the image on the screen.
I think there is simply some overhead in the camera2 API or ImageReader's framework that makes it impossible to get the full rate that the TextureView is clearly getting.
One of the most disappointing things of all is that, if you switch back to the deprecated Camera API, you can easily get 30 fps by setting up a PreviewCallback via the Camera.setPreviewCallbackWithBuffer method. With that method, I am able to get 30fps regardless of the resolution. Specifically, although it does not offer me 1600x1200 directly, it does offer 1920x1080, and even that is 30fps.
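To flesh out the setPreviewCallbackWithBuffer route mentioned above: the deprecated API delivers NV21 frames into buffers you pre-allocate, so each buffer must hold width * height * 3 / 2 bytes (NV21 is 12 bits per pixel). A minimal sketch of the size computation, with the Android wiring shown only as comments since it needs a device (the helper name is my own):

```java
// NV21 holds a full-resolution Y plane plus a half-resolution interleaved
// VU plane, i.e. 12 bits per pixel, so one frame needs w * h * 3 / 2 bytes.
final class PreviewBuffers {
    static int nv21FrameBytes(int width, int height) {
        return width * height * 3 / 2;
    }
}
// On Android (deprecated android.hardware.Camera API, not runnable here):
//   byte[] buffer = new byte[PreviewBuffers.nv21FrameBytes(1920, 1080)];
//   camera.addCallbackBuffer(buffer);
//   camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
//       public void onPreviewFrame(byte[] data, Camera cam) {
//           // process the NV21 frame, then recycle the buffer
//           cam.addCallbackBuffer(data);
//       }
//   });
```

Recycling the buffer in the callback is what keeps the 30 fps rate steady: no per-frame allocation or garbage collection occurs.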
I'm trying the same thing; I think you may need to change the format, like:
mImageReader = ImageReader.newInstance(largestSize.getWidth(),
        largestSize.getHeight(),
        ImageFormat.FLEX_RGB_888, 3);
Using YUV may cause the CPU to compress the data, which can cost some time; RGB can be shown directly on the device. Also, detecting faces in the image should be done on another thread, as you surely know.

Screens flickers and resizes when starting video recording

I'm integrating photo/video capture in my app and am having an issue with video capture. Whenever video recording starts, the screen flickers, I get a short pause, then video capture begins. However, using the phone's camera app, there isn't a flicker/pause at all.
Also, my camera preview display gets resized as soon as recorder.start() is called. I don't see why that is. It makes the preview distorted (everything looks squished and wider).
My Questions: How do I prevent the flicker/pause when starting video recording? How do I prevent recorder.start() from resizing my preview display?
Whenever "video mode" is enabled, initRecording() is immediately called. Once the user presses a button, startRecording() is called. Finally, when the button is pressed again, stopRecording() is called. Less importantly, when switching back to "picture mode", destroyRecorder() is called.
@Override
public void onResume() {
    super.onResume();
    Camera camera = null;
    try {
        camera = Camera.open();
    } catch (Exception e) {
        // Camera isn't available
        Toast.makeText(getActivity(), "Camera is not available at this time.", Toast.LENGTH_SHORT).show();
        getActivity().finish();
        return;
    }
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.GINGERBREAD) {
        setCameraDisplayOrientation(camera);
    } else {
        camera.setDisplayOrientation(90);
    }
    setCamera(camera);
    setCameraZoomDisplay(camera);
    if (getSurfaceHolder() != null) {
        startPreview();
        if (getMode() == MODE_VIDEO) {
            initRecording();
        }
    }
}
private void setCameraDisplayOrientation(Camera camera) {
    CameraInfo info = new CameraInfo();
    Camera.getCameraInfo(0, info);
    int rotation = getActivity().getWindowManager().getDefaultDisplay().getRotation();
    int degrees = 0;
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
    }
    int result = (info.orientation - degrees + 360) % 360;
    camera.setDisplayOrientation(result);
}
private void initRecording() {
    MediaRecorder recorder = new MediaRecorder();
    setRecorder(recorder);
    Camera camera = getCamera();
    camera.unlock();
    recorder.setCamera(camera);
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    CamcorderProfile cp = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
    recorder.setProfile(cp);
    String extension;
    switch (cp.fileFormat) {
        case MediaRecorder.OutputFormat.MPEG_4:
            extension = "mp4";
            break;
        case MediaRecorder.OutputFormat.THREE_GPP:
            extension = "3gp";
            break;
        default:
            extension = "tmp";
    }
    setVideoMimeType(MimeTypeMap.getSingleton().getMimeTypeFromExtension(extension));
    File toFile = new File(getActivity().getCacheDir(), "tempvideo.tmp");
    if (toFile.exists()) {
        toFile.delete();
    }
    setTempFile(toFile);
    recorder.setOutputFile(toFile.getPath());
    recorder.setPreviewDisplay(getSurfaceHolder().getSurface());
    try {
        recorder.prepare();
        setRecorderInitialized(true);
    } catch (IllegalStateException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
private boolean startRecording() {
    try {
        getRecorder().start();
        setRecording(true);
        ImageView actionImageView = getActionImageView();
        actionImageView.setImageResource(R.drawable.record_red);
    } catch (Exception e) {
        getCamera().lock();
    }
    return true;
}
private void stopRecording() {
    MediaRecorder recorder = getRecorder();
    if (recorder != null && isRecording()) {
        recorder.stop();
        setRecording(false);
        setRecorderInitialized(false);
        try {
            insertVideo();
        } catch (IOException e) {
            e.printStackTrace();
        }
        initRecording();
        ImageView actionImageView = getActionImageView();
        actionImageView.setImageResource(R.drawable.record_green);
    }
}
private void destroyRecorder() {
    MediaRecorder recorder = getRecorder();
    recorder.release();
    setRecorder(null);
    getCamera().lock();
}
The reason for the slight zoom when MediaRecorder.start() is invoked is that the Camera resizes its preview to match the resolution of the video being recorded. This problem can be fixed by setting the preview and video resolution during setup. I think I've also found a way to stop the flicker, although I've found that when working with Camera and MediaRecorder a little bit of lag or flicker can originate from any number of places, so that may be a bit tougher to track down. The Android documentation for setting up a camera/video recorder is a good place to start to ensure the main pieces are set up correctly, but I've found that it's necessary to delve into some of the class API documentation to debug and make the experience really smooth.
The Camera.Parameters class is the key to maintaining a smooth video recording experience. Once you have a Camera object, you can use Camera.getParameters() to get the current parameters to modify them. Camera.setParameters(Camera.Parameters) can then be used to trigger any changes that have been made.
To prevent the video resize, we need to ensure that the Parameters' preview size is in line with the video resolution to be recorded. To get a list of supported Video/Preview sizes, we can use Camera.Parameters.getSupportedPreviewSizes() on our current Parameters object, which will return a list of Camera.Size objects. Each of these objects will have a width and height property, accessed directly via Camera.Size.width and Camera.Size.height (no getter methods). The getSupportedPreviewSizes() method is guaranteed to return at least one result, and it seems as though the results are ordered from highest resolution to lowest.
(For API level > 11 there is also a getSupportedVideoSizes() method, but it seems like this is only if the device has some video sizes that are different from preview sizes, else it returns null. I haven't had success with this method, so I'd stick with using PreviewSizes for now as it guarantees returning a size good for both video and preview, but it's something to be aware of going forward.)
Once we have the Camera.Size that corresponds to the video resolution we want, we can set that size using Camera.Parameters.setPreviewSize(width, height). Additionally, to help with the flicker, I use Camera.Parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO). Once these steps are taken, use Camera.setParameters() to notify the Camera of your changes. I've had success setting these parameters once, right after getting the camera, as setting parameters while the user is interacting with this Activity has caused some lag. If you're using the same Activity for video and picture capture, you can also set picture parameters here; the Camera object will handle using the proper params for each mode.
Almost done! Now that the preview is taken care of, all that's left is to ensure that the MediaRecorder uses the same resolution as the preview. When preparing your media recorder, between the calls to MediaRecorder.setProfile() (or setting the encoders, for API Level < 8) and MediaRecorder.setOutputFile(), place a call to MediaRecorder.setVideoSize(width, height) using the same values as your preview. Now, the transition from preview to record using MediaRecorder.start() should be seamless, as they're both using the same resolution.
Here're some quick code snippets so you can see everything in action:
Getting and setting the params:
Camera.Parameters params = camera.getParameters();
params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
//myVideoSize is an instance of Camera.Size
List<Camera.Size> previewSizes = params.getSupportedPreviewSizes();
myVideoSize = previewSizes.get(0);
params.setPreviewSize(myVideoSize.width, myVideoSize.height);
camera.setParameters(params);
Then setting the size on the media recorder:
//After setting the profile....
mediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
//Use myVideoSize from above
mediaRecorder.setVideoSize(myVideoSize.width, myVideoSize.height);
//Before setting the output file
mediaRecorder.setOutputFile(myFile);
