Android CameraX - Get camera info (view angle, preview image size)

I'm trying to switch from the old Android Camera API to the new CameraX API. I'm using the preview mode for an augmented reality app, and I need some information about the camera used by the Preview, such as its angle of view and preview size.
This is my code so far:
PreviewConfig config = new PreviewConfig.Builder()
.setLensFacing(CameraX.LensFacing.BACK)
.setTargetResolution(new Size(dsiWidth, dsiHeight))
.build();
Preview preview = new Preview(config);
preview.setOnPreviewOutputUpdateListener(new Preview.OnPreviewOutputUpdateListener() {
@Override
public void onUpdated(Preview.PreviewOutput output) {
tvCameraView.setSurfaceTexture(output.getSurfaceTexture());
}
});
CameraX.bindToLifecycle(this, preview);
This works so far. But how do I get information on the camera used by the Preview? Thanks a lot in advance!

When you use the "androidx.camera:camera-camera2:1.0.0-alpha02" dependency, you can have a look at the class Camera2CameraFactory. There you can see how the front- and back-facing cameras are determined.
@Override
public Set<String> getAvailableCameraIds() throws CameraInfoUnavailableException {
List<String> camerasList = null;
try {
camerasList = Arrays.asList(mCameraManager.getCameraIdList());
} catch (CameraAccessException e) {
throw new CameraInfoUnavailableException(
"Unable to retrieve list of cameras on device.", e);
}
// Use a LinkedHashSet to preserve order
return new LinkedHashSet<>(camerasList);
}
@Nullable
@Override
public String cameraIdForLensFacing(LensFacing lensFacing)
throws CameraInfoUnavailableException {
Set<String> cameraIds = getAvailableCameraIds();
// Convert from CameraX enum to Camera2 CameraMetadata
Integer lensFacingInteger = -1;
switch (lensFacing) {
case BACK:
lensFacingInteger = CameraMetadata.LENS_FACING_BACK;
break;
case FRONT:
lensFacingInteger = CameraMetadata.LENS_FACING_FRONT;
break;
}
for (String cameraId : cameraIds) {
CameraCharacteristics characteristics = null;
try {
characteristics = mCameraManager.getCameraCharacteristics(cameraId);
} catch (CameraAccessException e) {
throw new CameraInfoUnavailableException(
"Unable to retrieve info for camera with id " + cameraId + ".", e);
}
Integer cameraLensFacing = characteristics.get(CameraCharacteristics.LENS_FACING);
if (cameraLensFacing == null) {
continue;
}
if (cameraLensFacing.equals(lensFacingInteger)) {
return cameraId;
}
}
return null;
}
It boils down to picking the first camera from the camera service that matches the requested lens facing. I would assume that they will expand those APIs in a future CameraX release.
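As for the information itself: once you know which camera is in use, you can query the Camera2 CameraCharacteristics directly and compute the angle of view from the sensor's physical size and focal length. Below is a minimal sketch that looks up the back camera via CameraManager and computes the horizontal field of view; the helper name is illustrative, not part of CameraX.
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.util.SizeF;
// Horizontal angle of view in degrees: fov = 2 * atan(sensorWidth / (2 * focalLength))
static double getHorizontalFieldOfView(Context context) throws CameraAccessException {
CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
for (String id : manager.getCameraIdList()) {
CameraCharacteristics chars = manager.getCameraCharacteristics(id);
Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
if (facing == null || facing != CameraCharacteristics.LENS_FACING_BACK) {
continue;
}
float[] focalLengths = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
SizeF sensorSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
if (focalLengths == null || focalLengths.length == 0 || sensorSize == null) {
break;
}
return Math.toDegrees(2.0 * Math.atan(sensorSize.getWidth() / (2.0 * focalLengths[0])));
}
return -1; // no back camera found, or characteristics unavailable
}
For the preview size, note that the alpha API's Preview.PreviewOutput also exposes the resolution that was actually selected (which may differ from the requested target resolution), so inside onUpdated you should be able to read it from the output, e.g. via output.getTextureSize().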

Related

Android CameraX throwing exception when enabling video capture usecase

I'm using CameraX for the first time and following the Android documentation guide, but I'm having issues. I searched a lot but did not find anything helpful. Basically, I'm trying to capture video using CameraX, and my captureVideo() method code is:
@RequiresApi(api = Build.VERSION_CODES.P)
public void takeVideo() {
ContentValues contentValues;
if (videoCapture == null) {
return;
}
Recording curRecording = recording;
if (curRecording != null) {
// Stop the current recording session.
recording.stop();
recording = null;
return;
}
// create and start a new recording session
String name = DateFormat.getInstance().format(new Date().getTime()).toString();
contentValues = new ContentValues();
contentValues.put(MediaStore.MediaColumns.DISPLAY_NAME, name);
contentValues.put(MediaStore.MediaColumns.MIME_TYPE, "video/mp4");
if (Build.VERSION.SDK_INT > Build.VERSION_CODES.P) {
contentValues.put(MediaStore.Video.Media.RELATIVE_PATH, "Movies/CameraX-Video");
}
MediaStoreOutputOptions mediaStoreOutputOptions = new MediaStoreOutputOptions.Builder(getContentResolver(), MediaStore.Video.Media.EXTERNAL_CONTENT_URI)
.setContentValues(contentValues)
.build();
PendingRecording pendingRecording = (videoCapture.getOutput()
.prepareRecording(this, mediaStoreOutputOptions));
if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO) == PackageManager.PERMISSION_GRANTED) {
pendingRecording.withAudioEnabled();
recording = pendingRecording.start(getMainExecutor(), new Consumer<VideoRecordEvent>() {
@Override
public void accept(VideoRecordEvent videoRecordEvent) {
if (videoRecordEvent instanceof VideoRecordEvent.Start) {
btnTakeVide.setText("Stop Video");
} else if (videoRecordEvent instanceof VideoRecordEvent.Pause) {
// Handle the case where the active recording is paused
} else if (videoRecordEvent instanceof VideoRecordEvent.Resume) {
// Handles the case where the active recording is resumed
} else if (videoRecordEvent instanceof VideoRecordEvent.Finalize) {
btnTakeVide.setText("Start Video");
VideoRecordEvent.Finalize finalizeEvent =
(VideoRecordEvent.Finalize) videoRecordEvent;
// Handles a finalize event for the active recording, checking Finalize.getError()
if (!finalizeEvent.hasError()) {
String msg = "Video capture succeeded: " + ((VideoRecordEvent.Finalize) videoRecordEvent).getOutputResults().getOutputUri();
Toast.makeText(MainActivity.this, msg, Toast.LENGTH_SHORT)
.show();
} else {
if (recording != null) {
recording.close();
recording = null;
Log.e("TAG", "Video capture ends with error: ");
}
}
}
}
});
}
}
The code inside my startCameraPreview() method is:
private void startCameraPreview() {
listenableFuture = ProcessCameraProvider.getInstance(MainActivity.this);
listenableFuture.addListener(new Runnable() {
@Override
public void run() {
try {
cameraProvider = listenableFuture.get();
preview = new Preview.Builder().build();
preview.setSurfaceProvider(cameraView.getSurfaceProvider());
recorder = new Recorder.Builder()
.setQualitySelector(QualitySelector.from(Quality.LOWEST))
.build();
videoCapture = VideoCapture.withOutput(recorder);
cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA;
cameraProvider.unbindAll();
cameraProvider.bindToLifecycle(MainActivity.this, cameraSelector, preview, videoCapture);
} catch (ExecutionException e) {
e.printStackTrace();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}, ContextCompat.getMainExecutor(this));
}
The exception being thrown is:
I added the following dependencies:
def camerax_version = "1.2.0-alpha04"
implementation "androidx.camera:camera-lifecycle:$camerax_version"
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"
implementation "androidx.camera:camera-view:1.1.0"
Sorry developers, I forgot to add the video dependency:
implementation "androidx.camera:camera-video:${camerax_version}"
After adding the dependency, it works fine.

OpenGL ES: Rotate the complete scene to match portrait mode

Note: I'm new to Android and OpenGL.
I'm building an augmented reality app based on ARToolKitX (GitHub: https://github.com/artoolkitx/artoolkitx/tree/8c6bd4e7be5e80c8439066b23473506aebbb496c/Source/ARXJ/ARXJProj/arxj/src/main/java/org/artoolkitx/arx/arxj).
The application shows the camera frame and displays objects with opengl on top.
My Problem:
ARToolKitX forces the app to be in landscape mode:
setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
but when I change the screen orientation to SCREEN_ORIENTATION_PORTRAIT, the camera image and the OpenGL objects don't rotate to the correct orientation and stay in landscape mode.
Inside the ARRenderer I can use the drawVideoSettings method to rotate the camera image by itself, but that doesn't apply to the OpenGL objects.
ARToolKitX also provides a SurfaceChanged method inside the CameraSurface class, with the comment: "This is where [...] to create transformation matrix to scale and then rotate surface view, if the app is going to handle orientation changes."
But I don't have any idea what the transformation matrix has to look like or how to apply it (see the sketch after the code listings below).
Any help is appreciated.
ARRenderer:
public abstract class ARRenderer implements GLSurfaceView.Renderer {
private MyShaderProgram shaderProgram;
private int width, height, cameraIndex;
private int[] viewport = new int[4];
private boolean firstRun = true;
private final static String TAG = ARRenderer.class.getName();
/**
* Allows subclasses to load markers and prepare the scene. This is called after
* initialisation is complete.
*/
public boolean configureARScene() {
return true;
}
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Transparent background
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.f);
this.shaderProgram = new MyShaderProgram(new MyVertexShader(), new MyFragmentShader());
GLES20.glUseProgram(shaderProgram.getShaderProgramHandle());
}
public void onSurfaceChanged(GL10 unused, int w, int h) {
this.width = w;
this.height = h;
if(ARController.getInstance().isRunning()) {
//Update the frame settings for native rendering
ARController.getInstance().drawVideoSettings(cameraIndex, w, h, false, false, false, ARX_jni.ARW_H_ALIGN_CENTRE, ARX_jni.ARW_V_ALIGN_CENTRE, ARX_jni.ARW_SCALE_MODE_FILL, viewport);
}
}
public void onDrawFrame(GL10 unused) {
if (ARController.getInstance().isRunning()) {
// Initialize artoolkitX video background rendering.
if (firstRun) {
boolean isDisplayFrameInited = ARController.getInstance().drawVideoInit(cameraIndex);
if (!isDisplayFrameInited) {
Log.e(TAG, "Display Frame not inited");
}
if (!ARController.getInstance().drawVideoSettings(cameraIndex, this.width, this.height, false, false,
false, ARX_jni.ARW_H_ALIGN_CENTRE, ARX_jni.ARW_V_ALIGN_CENTRE,
ARX_jni.ARW_SCALE_MODE_FILL, viewport)) {
Log.e(TAG, "Error during call of displayFrameSettings.");
} else {
Log.i(TAG, "Viewport {" + viewport[0] + ", " + viewport[1] + ", " + viewport[2] + ", " + viewport[3] + "}.");
}
firstRun = false;
}
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
if (!ARController.getInstance().drawVideoSettings(cameraIndex)) {
Log.e(TAG, "Error during call of displayFrame.");
}
draw();
}
}
/**
* Should be overridden in subclasses and used to perform rendering.
*/
public void draw() {
GLES20.glViewport(viewport[0], viewport[1], viewport[2], viewport[3]);
//TODO: Check how to refactor near and far plane
shaderProgram.setProjectionMatrix(ARController.getInstance().getProjectionMatrix(10.0f, 10000.0f));
float[] camPosition = {1f, 1f, 1f};
shaderProgram.render(camPosition);
}
@SuppressWarnings("unused")
public ShaderProgram getShaderProgram() {
return shaderProgram;
}
public void setCameraIndex(int cameraIndex) {
this.cameraIndex = cameraIndex;
}
}
CameraSurface
class CameraSurfaceImpl implements CameraSurface {
/**
* Android logging tag for this class.
*/
private static final String TAG = CameraSurfaceImpl.class.getSimpleName();
private CameraDevice mCameraDevice;
private ImageReader mImageReader;
private Size mImageReaderVideoSize;
private final Context mAppContext;
private final CameraDevice.StateCallback mCamera2DeviceStateCallback = new CameraDevice.StateCallback() {
@Override
public void onOpened(@NonNull CameraDevice camera2DeviceInstance) {
mCameraDevice = camera2DeviceInstance;
startCaptureAndForwardFramesSession();
}
@Override
public void onDisconnected(@NonNull CameraDevice camera2DeviceInstance) {
camera2DeviceInstance.close();
mCameraDevice = null;
}
@Override
public void onError(@NonNull CameraDevice camera2DeviceInstance, int error) {
camera2DeviceInstance.close();
mCameraDevice = null;
}
};
/**
* Listener to inform of camera related events: start, frame, and stop.
*/
private final CameraEventListener mCameraEventListener;
/**
* Tracks if SurfaceView instance was created.
*/
private boolean mImageReaderCreated;
public CameraSurfaceImpl(CameraEventListener cameraEventListener, Context appContext){
this.mCameraEventListener = cameraEventListener;
this.mAppContext = appContext;
}
private final ImageReader.OnImageAvailableListener mImageAvailableAndProcessHandler = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader)
{
Image imageInstance = reader.acquireLatestImage();
if (imageInstance == null) {
//Note: This seems to happen quite often.
Log.v(TAG, "onImageAvailable(): unable to acquire new image");
return;
}
// Get a ByteBuffer for each plane.
final Image.Plane[] imagePlanes = imageInstance.getPlanes();
final int imagePlaneCount = Math.min(4, imagePlanes.length); // We can handle up to 4 planes max.
final ByteBuffer[] imageBuffers = new ByteBuffer[imagePlaneCount];
final int[] imageBufferPixelStrides = new int[imagePlaneCount];
final int[] imageBufferRowStrides = new int[imagePlaneCount];
for (int i = 0; i < imagePlaneCount; i++) {
imageBuffers[i] = imagePlanes[i].getBuffer();
// For ImageFormat.YUV_420_888 the order of planes in the array returned by Image.getPlanes()
// is guaranteed such that plane #0 is always Y, plane #1 is always U (Cb), and plane #2 is always V (Cr).
// The Y-plane is guaranteed not to be interleaved with the U/V planes (in particular, pixel stride is
// always 1 in yPlane.getPixelStride()). The U/V planes are guaranteed to have the same row stride and
// pixel stride (in particular, uPlane.getRowStride() == vPlane.getRowStride() and uPlane.getPixelStride() == vPlane.getPixelStride(); ).
imageBufferPixelStrides[i] = imagePlanes[i].getPixelStride();
imageBufferRowStrides[i] = imagePlanes[i].getRowStride();
}
if (mCameraEventListener != null) {
mCameraEventListener.cameraStreamFrame(imageBuffers, imageBufferPixelStrides, imageBufferRowStrides);
}
imageInstance.close();
}
};
@Override
public void surfaceCreated() {
Log.i(TAG, "surfaceCreated(): called");
SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(mAppContext);
int defaultCameraIndexId = mAppContext.getResources().getIdentifier("pref_defaultValue_cameraIndex","string", mAppContext.getPackageName());
mCamera2DeviceID = Integer.parseInt(prefs.getString("pref_cameraIndex", mAppContext.getResources().getString(defaultCameraIndexId)));
Log.i(TAG, "surfaceCreated(): will attempt to open camera \"" + mCamera2DeviceID +
"\", set orientation, set preview surface");
/*
Set the resolution from the settings as size for the glView. Because the video stream capture
is requested based on this size.
WARNING: While coding the preferences are taken from the res/xml/preferences.xml!!!
When building for Unity the actual used preferences are taken from the UnityARPlayer project!!!
*/
int defaultCameraValueId = mAppContext.getResources().getIdentifier("pref_defaultValue_cameraResolution","string",mAppContext.getPackageName());
String camResolution = prefs.getString("pref_cameraResolution", mAppContext.getResources().getString(defaultCameraValueId));
String[] dims = camResolution.split("x", 2);
mImageReaderVideoSize = new Size(Integer.parseInt(dims[0]),Integer.parseInt(dims[1]));
// Note that maxImages should be at least 2 for acquireLatestImage() to be any different than acquireNextImage() -
// discarding all-but-the-newest Image requires temporarily acquiring two Images at once. Or more generally,
// calling acquireLatestImage() with less than two images of margin, that is (maxImages - currentAcquiredImages < 2)
// will not discard as expected.
mImageReader = ImageReader.newInstance(mImageReaderVideoSize.getWidth(),mImageReaderVideoSize.getHeight(), ImageFormat.YUV_420_888, /* The maximum number of images the user will want to access simultaneously:*/ 2 );
mImageReader.setOnImageAvailableListener(mImageAvailableAndProcessHandler, null);
mImageReaderCreated = true;
} // end: public void surfaceCreated(SurfaceHolder holder)
/* Interface implemented by this SurfaceView subclass
holder: SurfaceHolder instance associated with SurfaceView instance that changed
format: pixel format of the surface
width: of the SurfaceView instance
height: of the SurfaceView instance
*/
@Override
public void surfaceChanged() {
Log.i(TAG, "surfaceChanged(): called");
// This is where to calculate the optimal size of the display and set the aspect ratio
// of the surface view (probably the service holder). Also where to Create transformation
// matrix to scale and then rotate surface view, if the app is going to handle orientation
// changes.
if (!mImageReaderCreated) {
surfaceCreated();
}
if (!isCamera2DeviceOpen()) {
openCamera2(mCamera2DeviceID);
}
if (isCamera2DeviceOpen() && (null == mYUV_CaptureAndSendSession)) {
startCaptureAndForwardFramesSession();
}
}
private void openCamera2(int camera2DeviceID) {
Log.i(TAG, "openCamera2(): called");
CameraManager camera2DeviceMgr = (CameraManager)mAppContext.getSystemService(Context.CAMERA_SERVICE);
try {
if (PackageManager.PERMISSION_GRANTED == ContextCompat.checkSelfPermission(mAppContext, Manifest.permission.CAMERA)) {
camera2DeviceMgr.openCamera(Integer.toString(camera2DeviceID), mCamera2DeviceStateCallback, null);
return;
}
} catch (CameraAccessException ex) {
Log.e(TAG, "openCamera2(): CameraAccessException caught, " + ex.getMessage());
} catch (Exception ex) {
Log.e(TAG, "openCamera2(): exception caught, " + ex.getMessage());
}
if (null == camera2DeviceMgr) {
Log.e(TAG, "openCamera2(): Camera2 DeviceMgr not set");
}
Log.e(TAG, "openCamera2(): abnormal exit");
}
private int mCamera2DeviceID = -1;
private CaptureRequest.Builder mCaptureRequestBuilder;
private CameraCaptureSession mYUV_CaptureAndSendSession;
private void startCaptureAndForwardFramesSession() {
if ((null == mCameraDevice) || (!mImageReaderCreated) /*|| (null == mPreviewSize)*/) {
return;
}
closeYUV_CaptureAndForwardSession();
try {
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<>();
Surface surfaceInstance;
surfaceInstance = mImageReader.getSurface();
surfaces.add(surfaceInstance);
mCaptureRequestBuilder.addTarget(surfaceInstance);
mCameraDevice.createCaptureSession(
surfaces, // Output surfaces
new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
try {
if (mCameraEventListener != null) {
mCameraEventListener.cameraStreamStarted(mImageReaderVideoSize.getWidth(), mImageReaderVideoSize.getHeight(), "YUV_420_888", mCamera2DeviceID, false);
}
mYUV_CaptureAndSendSession = session;
// Session to repeat request to update passed in camSensorSurface
mYUV_CaptureAndSendSession.setRepeatingRequest(mCaptureRequestBuilder.build(), /* CameraCaptureSession.CaptureCallback cameraEventListener: */null, /* Background thread: */ null);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(@NonNull CameraCaptureSession session) {
Toast.makeText(mAppContext, "Unable to setup camera sensor capture session", Toast.LENGTH_SHORT).show();
}
}, // Callback for capture session state updates
null); // Secondary thread message queue
} catch (CameraAccessException ex) {
ex.printStackTrace();
}
}
@Override
public void closeCameraDevice() {
closeYUV_CaptureAndForwardSession();
if (null != mCameraDevice) {
mCameraDevice.close();
mCameraDevice = null;
}
if (null != mImageReader) {
mImageReader.close();
mImageReader = null;
}
if (mCameraEventListener != null) {
mCameraEventListener.cameraStreamStopped();
}
mImageReaderCreated = false;
}
private void closeYUV_CaptureAndForwardSession() {
if (mYUV_CaptureAndSendSession != null) {
mYUV_CaptureAndSendSession.close();
mYUV_CaptureAndSendSession = null;
}
}
/**
* Indicates whether or not camera2 device instance is available, opened, enabled.
*/
@Override
public boolean isCamera2DeviceOpen() {
return (null != mCameraDevice);
}
@Override
public boolean isImageReaderCreated() {
return mImageReaderCreated;
}
}
Edit:
/**
* Override the draw function from ARRenderer.
*/
@Override
public void draw() {
super.draw();
fpsCounter.frame();
if(maxfps<fpsCounter.getFPS()){
maxfps= fpsCounter.getFPS();
}
logger.log(Level.INFO, "FPS: " + maxfps);
// Initialize GL
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glFrontFace(GLES20.GL_CCW);
// Look for trackables, and draw on each found one.
for (int trackableUID : trackables.keySet()) {
// If the trackable is visible, apply its transformation, and render the object
float[] modelViewMatrix = new float[16];
if (ARController.getInstance().queryTrackableVisibilityAndTransformation(trackableUID, modelViewMatrix)) {
float[] projectionMatrix = ARController.getInstance().getProjectionMatrix(10.0f, 10000.0f);
trackables.get(trackableUID).draw(projectionMatrix, modelViewMatrix);
}
}
}
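One possible approach for the transformation matrix mentioned above: since the scene's orientation comes from the projection matrix, you can pre-multiply it with a Z-axis rotation before handing it to the shader. Below is a minimal sketch using android.opengl.Matrix, based on the draw() code above; the 90-degree angle and the multiplication order are assumptions you would need to verify against your device orientation.
import android.opengl.Matrix;
// Rotate the whole scene by 90 degrees around the Z axis to compensate
// for the landscape-locked camera when the activity runs in portrait.
float[] rotation = new float[16];
float[] rotatedProjection = new float[16];
Matrix.setRotateM(rotation, 0, 90.0f, 0f, 0f, 1f);
float[] projectionMatrix = ARController.getInstance().getProjectionMatrix(10.0f, 10000.0f);
Matrix.multiplyMM(rotatedProjection, 0, rotation, 0, projectionMatrix, 0);
// Use rotatedProjection instead of projectionMatrix when drawing:
trackables.get(trackableUID).draw(rotatedProjection, modelViewMatrix);
The camera image itself would still need to be rotated separately (e.g. via drawVideoSettings), since it is rendered by the native ARToolKitX code.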

Android Camera 2: ImageReader's Images Have no Stride Values

I've been trying to route the images from the camera to an ImageReader so that I can manipulate the images directly using the Camera2 API. When I have the capture session stream to a SurfaceView, the stream works just fine. When I then set the capture session stream to my ImageReader, I notice that the images are somehow invalid.
In my ImageReader's OnImageAvailable callback function, I pull the next available Image and try to read it. This is where I have the problem. The Image isn't null and the planes are there, but the planes' buffers are null at first. When I try to grab the buffers, they are suddenly not null, but trying to read from them crashes the app without a stack trace. Further, the pixel and row strides in the planes are set to 0. The width and height of the image are properly set, though.
Therefore, I think that I'm not setting my ImageReader up correctly. The question is then what am I not doing correctly?
Code:
public class CompatibleCamera {
private static final int CAMERA2_API_LEVEL = 23;
public static final int FORMAT_RAW = ImageFormat.RAW_SENSOR;
public static final int FORMAT_JPEG = ImageFormat.JPEG;
private static final int MAX_IMAGES = 2;
// Interface for the user to use. User supplies the function to manipulate the image
public interface ImageTransform
{
void doTransform(Image image);
}
//***********Camera 2 API Members***********
// The camera2 API CameraManager. Used to access the camera device
private CameraManager mCamera2Manager;
// The information used by the device to reference the camera. Not a camera object itself
private CameraDevice mCamera2Device;
private String mCamera2DeviceID = "";
// The class that allows us to get the camera's image
private ImageReader mImageReader;
// This listener is where we have the programmer deal with the image. Just edit the interface
private ImageReader.OnImageAvailableListener mListener;
// This is the thread for the handler. It keeps it off the UI thread so we don't block the GUI
private HandlerThread mCameraCaptureHandlerThread;
// This runs in the background and handles the camera feed, activating the OnImageAvailableListener
private Handler mCameraCaptureHandler;
private HandlerThread mImageAvailableHandlerThread;
// This runs in the background and handles the camera feed, activating the OnImageAvailableListener
private Handler mImageAvailableHandler;
// This object is the camera feed, essentially. We store it so we can properly close it later
private CameraCaptureSession cameraCaptureSession;
// DEBUG
private boolean TEST_SURFACE_VIEW = false;
private Surface dbSurface;
// Mutex lock. Locks and unlocks when the ImageReader is pulling and processing an image
private Semaphore imageReaderLock = new Semaphore(1);
//***********Common Members***********
// The context of the activity holding this object
private Context mContext;
// Our ImageTransform implementation to alter the image as it comes in
private ImageTransform mTransform;
private int iImageFormat= FORMAT_RAW;
//==========Methods==========
public CompatibleCamera(Context context, ImageTransform transform, int imageFormat)
{
mContext = context;
mTransform = transform;
mListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader imageReader) {
try {
imageReaderLock.acquire();
Image image = imageReader.acquireNextImage();
//<--------------Problem With Image is Here-------------->
mTransform.doTransform(image);
image.close();
imageReaderLock.release();
}
catch(InterruptedException ex)
{
ex.printStackTrace();
}
}
};
}
private boolean camera2GetManager()
{
//----First, get the CameraManager and a Camera Device----
mCamera2Manager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
if (mCamera2Manager == null) {
System.out.println(" DEBUG: Manager is null");
return false;
}
else {
System.out.println(" DEBUG: Camera Manager obtained");
try {
String[] cameraIDs = mCamera2Manager.getCameraIdList();
for (String cameraID : cameraIDs) {
CameraCharacteristics cameraCharacteristics = mCamera2Manager.getCameraCharacteristics(cameraID);
if (cameraCharacteristics.get(CameraCharacteristics.LENS_FACING) ==
CameraCharacteristics.LENS_FACING_BACK) {
mCamera2DeviceID = cameraID;
break;
}
}
if (mCamera2DeviceID.equals("")) {
System.out.println("No back camera, exiting");
return false;
}
System.out.println(" DEBUG: Camera Device obtained");
// Open the Camera Device
} catch (Exception ex) {
ex.printStackTrace();
return false;
}
return camera2OpenCamera();
}
}
private boolean camera2SetupImageReader()
{
// Get the largest image size available
CameraCharacteristics cameraCharacteristics;
try {
cameraCharacteristics= mCamera2Manager.getCameraCharacteristics(mCamera2DeviceID);
} catch(Exception e) {
e.printStackTrace();
return false;
}
StreamConfigurationMap map = cameraCharacteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size largestSize = Collections.max(
Arrays.asList(map.getOutputSizes(iImageFormat)),
new CompareSizesByArea());
// Set up the handler
mCameraCaptureHandlerThread = new HandlerThread("cameraCaptureHandlerThread");
mCameraCaptureHandlerThread.start();
mCameraCaptureHandler = new Handler(mCameraCaptureHandlerThread.getLooper());
mImageAvailableHandlerThread = new HandlerThread("imageReaderHandlerThread");
mImageAvailableHandlerThread.start();
mImageAvailableHandler = new Handler(mImageAvailableHandlerThread.getLooper());
mImageReader = ImageReader.newInstance( largestSize.getWidth(),
largestSize.getHeight(),
iImageFormat,
MAX_IMAGES);
mImageReader.setOnImageAvailableListener(mListener, mImageAvailableHandler);
// This callback is used to asynchronously set up the capture session on our end
final CameraCaptureSession.StateCallback captureStateCallback = new CameraCaptureSession.StateCallback() {
// When configured, set the target surface
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
try
{
CaptureRequest.Builder requestBuilder = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
if (TEST_SURFACE_VIEW)
requestBuilder.addTarget(dbSurface);
else
requestBuilder.addTarget(mImageReader.getSurface());
//set to null - image data will be produced but will not receive metadata
session.setRepeatingRequest(requestBuilder.build(), null, mCameraCaptureHandler);
cameraCaptureSession = session;
}
catch (Exception ex)
{
ex.printStackTrace();
}
}
@Override
public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
System.out.println("Failed to configure the capture session :(");
}
};
ArrayList<Surface> surfaces = new ArrayList<>();
if (TEST_SURFACE_VIEW)
surfaces.add(dbSurface);
else
surfaces.add(mImageReader.getSurface());
try
{
mCamera2Device.createCaptureSession(surfaces, captureStateCallback, mCameraCaptureHandler);
}
catch(Exception ex)
{
ex.printStackTrace();
}
return true;
}
}
RAW_SENSOR is a special beast among formats. From its documentation:
General raw camera sensor image format, usually representing a single-channel Bayer-mosaic image. Each pixel color sample is stored with 16 bits of precision.
The layout of the color mosaic, the maximum and minimum encoding values of the raw pixel data, the color space of the image, and all other needed information to interpret a raw sensor image must be queried from the android.hardware.camera2.CameraDevice which produced the image.
You should not attempt to use its stride info directly, as if it were a YUV frame.
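For context, a RAW_SENSOR Image carries a single plane of 16-bit samples, and the metadata needed to interpret it lives in the CameraCharacteristics rather than in the plane strides. Here is a minimal sketch of pulling both, under the assumption that the buffer uses native byte order; the method name is illustrative:
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraCharacteristics;
import android.media.Image;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;
// RAW_SENSOR delivers one plane of 16-bit Bayer-mosaic samples.
static void readRawImage(Image image, CameraCharacteristics chars) {
if (image.getFormat() != ImageFormat.RAW_SENSOR) {
return;
}
ShortBuffer pixels = image.getPlanes()[0].getBuffer()
.order(ByteOrder.nativeOrder())
.asShortBuffer();
// Interpretation metadata comes from the characteristics, not the plane:
Integer cfaPattern = chars.get(CameraCharacteristics.SENSOR_INFO_COLOR_FILTER_ARRANGEMENT);
Integer whiteLevel = chars.get(CameraCharacteristics.SENSOR_INFO_WHITE_LEVEL);
// ... demosaic / inspect pixels here using cfaPattern and whiteLevel ...
}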

Turning on/off flash with Android camera2 API not working

I'm creating an Android app with a custom camera, and I'm switching to the new camera2 API. I have a button that allows turning the flash ON and OFF while the back camera is on (without stopping the camera, like any classic camera app).
When I tap the flash icon, nothing happens and this is what the logcat returns:
D/ViewRootImpl: ViewPostImeInputStage processPointer 0
D/ViewRootImpl: ViewPostImeInputStage processPointer 1
I don't know why it's not working. Here is the code:
I have a RecordVideoActivity using a RecordVideoFragment. Here is the fragment's XML part that contains the flash button code:
<ImageButton
android:id="@+id/button_flash"
android:src="@drawable/ic_flash_off"
android:layout_alignParentLeft="true"
style="@style/actions_icons_camera"
android:onClick="actionFlash"/>
And the Java code:
ImageButton flashButton;
private boolean hasFlash;
private boolean isFlashOn = false;
With in the onViewCreated:
@Override
public void onViewCreated(final View view, Bundle savedInstanceState) {
...
[some code]
...
// Flash on/off button
flashButton = (ImageButton) view.findViewById(R.id.button_flash);
// Listener for Flash on/off button
flashButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
actionFlash();
}
});
And here is the actionFlash() function definition:
private void actionFlash() {
/* First check if device is supporting flashlight or not */
hasFlash = getActivity().getApplicationContext().getPackageManager()
.hasSystemFeature(PackageManager.FEATURE_CAMERA_FLASH);
if (!hasFlash) {
// device doesn't support flash
// Show alert message and close the application
AlertDialog alert = new AlertDialog.Builder(this.getActivity())
.create();
alert.setMessage("Sorry, your device doesn't support flash light!");
alert.setButton(DialogInterface.BUTTON_POSITIVE, "OK", new DialogInterface.OnClickListener() {
public void onClick(DialogInterface dialog, int which) {
dialog.dismiss();
}
});
alert.show();
return;
}
else { // the device support flash
CameraManager mCameraManager = (CameraManager) getActivity().getSystemService(Context.CAMERA_SERVICE);
try {
String mCameraId = mCameraManager.getCameraIdList()[0];
if (mCameraId.equals("1")) { // currently on back camera
if (!isFlashOn) { // if flash light was OFF
// Turn ON flash light
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
mCameraManager.setTorchMode(mCameraId, true);
}
} catch (Exception e) {
e.printStackTrace();
}
// Change isFlashOn boolean value
isFlashOn = true;
// Change button icon
flashButton.setImageResource(R.drawable.ic_flash_off);
} else { // if flash light was ON
// Turn OFF flash light
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
mCameraManager.setTorchMode(mCameraId, false);
}
} catch (Exception e) {
e.printStackTrace();
}
// Change isFlashOn boolean value
isFlashOn = false;
// Change button icon
flashButton.setImageResource(R.drawable.ic_flash_on);
}
}
} catch (CameraAccessException e) {
Toast.makeText(getActivity(), "Cannot access the camera.", Toast.LENGTH_SHORT).show();
getActivity().finish();
}
}
}
Any idea what could be wrong?
(I already looked at this question but it doesn't address my problem)
Thank you very much for your help. This is driving me crazy.
Create these variables:
public static final String CAMERA_FRONT = "1";
public static final String CAMERA_BACK = "0";
private String cameraId = CAMERA_BACK;
private boolean isFlashSupported;
private boolean isTorchOn;
then add these methods:
public void switchFlash() {
try {
if (cameraId.equals(CAMERA_BACK)) {
if (isFlashSupported) {
if (isTorchOn) {
mPreviewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF);
mPreviewSession.setRepeatingRequest(mPreviewBuilder.build(), null, null);
flashButton.setImageResource(R.drawable.ic_flash_off);
isTorchOn = false;
} else {
mPreviewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH);
mPreviewSession.setRepeatingRequest(mPreviewBuilder.build(), null, null);
flashButton.setImageResource(R.drawable.ic_flash_on);
isTorchOn = true;
}
}
}
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
public void setupFlashButton() {
if (cameraId.equals(CAMERA_BACK) && isFlashSupported) {
flashButton.setVisibility(View.VISIBLE);
if (isTorchOn) {
flashButton.setImageResource(R.drawable.ic_flash_off);
} else {
flashButton.setImageResource(R.drawable.ic_flash_on);
}
} else {
flashButton.setVisibility(View.GONE);
}
}
After the line where you retrieve the CameraCharacteristics, add this code:
Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
isFlashSupported = available == null ? false : available;
setupFlashButton();
at the end call switchFlash() in your desired click listener.
et voilà!
PS. This might be helpful - front/back camera switcher
I see that for CameraX, ImageCapture.flashMode only takes effect when it is set in the initial configuration, e.g. via ImageCapture.Builder().
But if you want to enable/disable the flash dynamically, you will have to use the following:
camera?.cameraControl?.enableTorch(enableFlash)
If you are wondering what camera is, the snippet below is captured from the documentation.
// A variable number of use-cases can be passed here -
// camera provides access to CameraControl & CameraInfo
camera = cameraProvider.bindToLifecycle(
this, cameraSelector, preview, imageCapture
)
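Since the rest of this thread is Java, the equivalent would look roughly like this (a sketch, assuming the same use-case objects as above; camera is an androidx.camera.core.Camera):
// bindToLifecycle returns a Camera whose CameraControl can toggle the torch at runtime.
Camera camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageCapture);
camera.getCameraControl().enableTorch(enableFlash); // returns a ListenableFuture<Void>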
The answer by @MrOnyszko is correct, but the flash-availability check needs to be updated to use FLASH_INFO_AVAILABLE, as in the following snippet:
CameraManager camManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
String cameraId = camManager.getCameraIdList()[0];
CameraCharacteristics camChars = camManager.getCameraCharacteristics(cameraId);
boolean isTorchAvailable = camChars.get(CameraCharacteristics.FLASH_INFO_AVAILABLE).booleanValue();
if(isTorchAvailable) {
camManager.setTorchMode(cameraId, true); // Turn ON
camManager.setTorchMode(cameraId, false); // Turn OFF
}
} catch (CameraAccessException e) {
e.printStackTrace();
}
The problem with your code is the call mCameraManager.setTorchMode(mCameraId, true);
This call requires API 23+, so you cannot call it on devices running lower API levels.
What you could do is use the deprecated Camera API. It works on all devices and will keep on working for quite some time in the near future.
Here is what you could do: add this code using if-else statements. If the device is API 23+, use camera2; otherwise use the old Camera API. That's what I do.
Add hardware.Camera in your imports
import android.hardware.Camera;
Initialize variables
Camera camera;
Camera.Parameters params;
Now, Get the Camera and turn on the flash
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
mCameraManager.setTorchMode(CameraId, true);
} else {
camera = Camera.open();
params = camera.getParameters();
params.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
camera.setParameters(params);
camera.startPreview();
}
} catch (Exception e) {
e.printStackTrace();
}
And you are good to go.
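For completeness, turning the torch off again on the legacy path is the mirror image (a sketch; remember to release the camera when you are done with it):
// Turn OFF the legacy-API torch and free the camera.
params.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
camera.setParameters(params);
camera.stopPreview();
camera.release();
camera = null;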
The logic that you are using is not backward compatible, and I have also verified that it does not work even on some 23+ devices. It's better to use the camera2 capture request API (supported from API 21 onwards):
mPreviewBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF);
Refer to the accepted answer for details.

How does one retrieve the cameraID and use setTorchMode?

So Android M recently came out, and it now has a built-in camera light function called setTorchMode. I was curious how this works, as the parameters are (String cameraId, boolean enabled). The boolean obviously dictates whether the light is on or off, but how do you get the camera ID? I know there's a method called getCameraIdList, but that returns an array of IDs, not just one. How do you know which one in that list to use?
You should use the CameraManager getCameraIdList() function, which will retrieve a list of strings, each representing an available camera.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
mCameraManager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
for (String camID : mCameraManager.getCameraIdList()) {
CameraCharacteristics cameraCharacteristics = mCameraManager.getCameraCharacteristics(camID);
int lensFacing = cameraCharacteristics.get(CameraCharacteristics.LENS_FACING);
if (lensFacing == CameraCharacteristics.LENS_FACING_FRONT && cameraCharacteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE)) {
mCameraId = camID;
break;
} else if (lensFacing == CameraCharacteristics.LENS_FACING_BACK && cameraCharacteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE)) {
mCameraId = camID;
}
}
if (mCameraId != null) {
mCameraManager.setTorchMode(mCameraId, true);
}
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
This will turn on the front camera flash if available, or else the back camera flash if available. If no flash is available at all, mCameraId will be null and setTorchMode will not be called.
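If you also need to keep track of the torch state (for example to keep a button icon in sync, including changes made by other apps), CameraManager lets you register a TorchCallback on API 23+. A minimal sketch; the callback body is illustrative:
// Track torch state changes system-wide.
CameraManager.TorchCallback torchCallback = new CameraManager.TorchCallback() {
@Override
public void onTorchModeChanged(String cameraId, boolean enabled) {
// Update UI state here, e.g. swap the flash button icon.
}
};
mCameraManager.registerTorchCallback(torchCallback, null); // null = current thread's looper
// ... and when done:
mCameraManager.unregisterTorchCallback(torchCallback);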
