The title may be unclear, but I'm using this awesome library by CommonsWare (nice meeting you at DroidCon, btw) to deal with the notorious issues around Android's fragmented camera API.
I want to take 5 photos, or frames, but not simultaneously. Each frame should be captured a few milliseconds apart, or presumably after the previous photo has been successfully captured. Can this be done?
I'm following the standalone implementation in the demos, and simply taking a photo using
mCapture.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        try {
            takePicture(true, false);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
});
Passing in true to takePicture() because I will need the resulting Bitmap. I also disabled single-shot mode, since I will want to take another photo right after the previous one has been snapped and the preview has resumed.
By default, the result of taking a picture is to return the
CameraFragment to preview mode, ready to take the next picture.
If, instead, you only need the one picture, or you want to send the
user to some other bit of UI first and do not want preview to start up
again right away, override useSingleShotMode() in your CameraHost to
return true. Or, call useSingleShotMode() on your
SimpleCameraHost.Builder, passing in a boolean to use by default. Or,
call useSingleShotMode() on your PictureTransaction, to control this
for an individual picture.
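For reference, the builder route from that quote looks roughly like this (a sketch; the exact wiring of SimpleCameraHost.Builder is an assumption on my part):

SimpleCameraHost host =
        new SimpleCameraHost.Builder(getActivity())
                .useSingleShotMode(false) // keep restarting the preview after each shot
                .build();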
I was looking for a callback like onPictureTaken() or something similar inside CameraHost that would allow me to go ahead and snap another photo right away, before releasing the camera, but I don't see anything like this. Has anyone ever done something like this using this library? Can the illustrious CommonsWare please shed some light on this as well (if you see this)?
Thank you!
Read past the quoted paragraph to the next one, which begins with:
You will then probably want to use your own saveImage() implementation in your CameraHost to do whatever you want instead of restarting the preview. For example, you could start another activity to do something with the image.
If what you want is possible, you would call takePicture() again in saveImage() of your CameraHost, in addition to doing something with the image you received.
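A rough sketch of that idea (the burst counting, the host subclass, and the fragment reference below are illustrative assumptions, not library API):

import android.content.Context;
import android.graphics.Bitmap;
import java.util.ArrayList;
import java.util.List;
import com.commonsware.cwac.camera.CameraFragment;
import com.commonsware.cwac.camera.PictureTransaction;
import com.commonsware.cwac.camera.SimpleCameraHost;

// Illustrative only: a host that collects five frames by calling
// takePicture() again from saveImage().
public class BurstCameraHost extends SimpleCameraHost {
    private static final int FRAME_COUNT = 5;
    private final List<Bitmap> frames = new ArrayList<>();
    private final CameraFragment fragment;

    public BurstCameraHost(Context ctxt, CameraFragment fragment) {
        super(ctxt);
        this.fragment = fragment;
    }

    @Override
    public void saveImage(PictureTransaction xact, Bitmap bitmap) {
        frames.add(bitmap);
        if (frames.size() < FRAME_COUNT) {
            // the preview should be back up by now; request the next frame
            fragment.takePicture(true, false);
        }
        // else: all five frames collected, process them here
    }
}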
However:
Even with large heap enabled, you may not have enough heap space for what you are trying to do. You may need to explicitly choose a lower resolution image for the pictures.
This isn't exactly within the scope of the library. It may work, and I don't have a problem with it working, but being able to take N pictures in M seconds isn't part of the library's itch that I am (very very slowly) scratching. In particular, I don't think I have tested taking a picture with the preview already off, and there may be some issues in my code in that area.
Long-term, you may be better served with preview frame processing, rather than actually taking pictures.
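To sketch that alternative, using the android.hardware.Camera API that the library wraps (the frame count and the conversion step are assumptions):

// Illustrative sketch: collect five consecutive NV21 preview frames instead
// of five full still captures. 'camera' is an open android.hardware.Camera.
final List<byte[]> frames = new ArrayList<>();

camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        if (frames.size() < 5) {
            frames.add(data.clone()); // defensive copy of the NV21 buffer
        } else {
            camera.setPreviewCallback(null); // done collecting
            // convert the NV21 frames to Bitmaps off the main thread
        }
    }
});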
I need to implement text recognition using MLKit on Android, and I have decided to use the new CameraX API as the camera lib. I am struggling with the correct "pipeline" of classes or data flow of the image, because CameraX is quite new and not many resources are out there. The use case is that I take the picture, crop it in the middle by some bounds that are visible in the UI, and then pass this cropped image to MLKit, which will process it.
Given that, is there some place for the ImageAnalysis.Analyzer API? From my understanding, this analyzer is used only for previews, not for the captured image.
My first idea was to use the takePicture method that accepts an OnImageCapturedCallback, but when I tried to access e.g. ImageProxy.height, the app crashed with the exception java.lang.IllegalStateException: Image is already closed, and I could not find any fix for that.
Then I decided to use another overload of the takePicture method, so now I save the image to a file, read it into a Bitmap, crop it, and end up with an image that can be passed to MLKit. But when I take a look at the FirebaseVisionImage that is passed to FirebaseVisionTextRecognizer, it has a factory method to which I can pass the Image that I get from OnImageCapturedCallback, which suggests I am doing some unnecessary steps.
So my questions are:
Is there some class (CaptureProcessor?) that will take care of cropping the taken image? I suppose that then I could use OnImageCapturedCallback, where I would receive an already cropped image.
Should I even use ImageAnalysis.Analyzer if I am not doing real-time processing, only post-processing?
I suppose that I can achieve what I want with my current approach, but I feel that I could be using much more of CameraX than I currently am.
Thanks!
Is there some class (CaptureProcessor?) that will take care of cropping the taken image?
You can set the crop aspect ratio after you build the ImageCapture use case by using the setCropAspectRatio(Rational) method. This method crops from the center of the rotated output image. So basically what you'd get back after calling takePicture() is what I think you're asking for.
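For illustration, that could look roughly like the following sketch (CameraX's builder API has shifted across the alpha releases, so treat the exact construction as an assumption; the 16:9 ratio is just an example):

// Build the ImageCapture use case, then request a center crop of the
// rotated output image.
ImageCapture imageCapture = new ImageCapture.Builder().build();
imageCapture.setCropAspectRatio(new Rational(16, 9));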
Should I even use ImageAnalysis.Analyzer if I am not doing real-time processing, only post-processing?
No, it wouldn't make sense in your scenario. As you mentioned, only when doing real-time image processing would you want to use ImageAnalysis.Analyzer.
P.S.: I'd be interested in seeing the code you use for takePicture() that caused the IllegalStateException.
[Edit]
Taking a look at your code
imageCapture?.takePicture(executor, object : ImageCapture.OnImageCapturedCallback() {
    override fun onCaptureSuccess(image: ImageProxy) {
        // 1
        super.onCaptureSuccess(image)
        // 2
        Log.d("MainActivity", "Image captured: ${image.width}x${image.height}")
    }
})
At (1), if you take a look at the implementation of super.onCaptureSuccess(image), it actually closes the ImageProxy passed to the method. Accessing the image's width and height at (2) therefore throws an exception, which is expected, since the image has been closed. The documentation states:
The application is responsible for calling ImageProxy.close() to close
the image.
So when using this callback, you should probably not call super...; just use the imageProxy, and then, before returning from the method, manually close it (ImageProxy.close()).
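In other words, something along these lines (sketched here in Java; the logging is only a placeholder):

// Skip super.onCaptureSuccess() and close the ImageProxy yourself once
// you are done with it.
imageCapture.takePicture(executor, new ImageCapture.OnImageCapturedCallback() {
    @Override
    public void onCaptureSuccess(ImageProxy image) {
        try {
            Log.d("MainActivity",
                    "Image captured: " + image.getWidth() + "x" + image.getHeight());
            // hand the image off to MLKit here
        } finally {
            image.close(); // the app owns the ImageProxy and must close it
        }
    }
});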
Also trying to get access to the color data bytes from the color camera of Tango, I was stuck with the Java API: I was able to connect the Tango camera to a surface for display, but only for display, with no easy access to the raw data, nor a timestamp. So finally I switched to using the C API in native code (latest Fermat lib and headers) and followed a recommendation I found on Stack Overflow, adapting sample code to register a callback via connectOnFrameAvailable() (I started from the PointCloudActivity sample for that test).
The first problem I found is somewhat a side effect of registering that callback: it usually works fine (the callbacks get fired regularly), but then another callback that I also registered, to get the XYZ point clouds, starts to fail to fire. As in the sample code I mentioned, clouds are delivered through an onXYZijAvailable() callback, which the app registers using TangoService_connectOnXYZijAvailable(onXYZijAvailable).
The failure to fire the XYZ callback does not happen always, but it does about half the time during tests. An awful workaround is to send the app to the background and then bring it to the foreground again. This is curious; is this "recovery" related to low-level onPause/onResume stuff? If someone has clues...
By the way, the same side effect was observed in the Java API, once the camera texture was connected for display (through the appropriate Tango API).
But here is my second "problem", back to acquiring YV12 color data from the camera:
I register through TangoService_connectOnFrameAvailable(TangoCameraId::TANGO_CAMERA_COLOR, nullptr, onFrameAvailable) and provide a static function onFrameAvailable defined like this:
static void onFrameAvailable(void* ctx, TangoCameraId id, const TangoImageBuffer* buffer)
{
    ...
    LOGI("OnFrameAvailable(): Cam frame data received");
    // Check if data format is of the expected type: YV12, i.e.
    // TangoImageFormatType::TANGO_HAL_PIXEL_FORMAT_YV12
    // i.e. = 0x32315659 // YCrCb 4:2:0 Planar
    //LOGI("OnFrameAvailable(): Frame data format (%x)", buffer->format);
    ...
}
The problem is that the width, height, and stride information of the received TangoImageBuffer structure seem valid (1280x720, ...), BUT the format returned changes every time, and is not the expected magic number (here 0x32315659).
Am I doing something wrong there? (The other info is OK...)
Also, there is apparently only one data format defined here (YV12), but judging from the fisheye images in the demo app, that camera seems to produce grey-level images. Does its low-level capture use the same (color) format as the RGB camera?
1) Regarding the image from the camera, I came to the same conclusion you did: the only access to image data is through the C API.
2) Regarding the image: I haven't had any issues with YUV, and my last encounter with this stuff was when I wrote JPEG code. The format is naked, i.e. it's an organizational structure with no header information, save the undefined metadata in the first line of pixels mentioned here. Here's a link to some code that may help you decode the image, in a response to another message here.
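To make the "naked" layout concrete, here is a hedged sketch of unpacking a YV12 buffer (full-resolution Y plane, then quarter-resolution V, then U); it assumes even dimensions and ignores row padding, which a real Tango buffer may have via its stride field:

// YV12 is 4:2:0 planar: Y plane, then V, then U (V before U, unlike YU12).
static int[] yv12ToArgb(byte[] yv12, int width, int height) {
    int[] argb = new int[width * height];
    int ySize = width * height;
    int vOffset = ySize;              // V plane follows the Y plane
    int uOffset = ySize + ySize / 4;  // U plane follows the V plane

    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            int y = yv12[row * width + col] & 0xFF;
            int chromaIndex = (row / 2) * (width / 2) + (col / 2);
            int v = (yv12[vOffset + chromaIndex] & 0xFF) - 128;
            int u = (yv12[uOffset + chromaIndex] & 0xFF) - 128;

            // Approximate BT.601 YUV-to-RGB conversion
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u) - (int) (0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }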
3) Regarding point cloud returns -
Please note this information is anecdotal, and to some degree the product of superstition: what works for me only works sometimes, and may not work at all for you.
Tango does seem to have a remarkable knack for simply ceasing to produce point clouds. I think a lot of it has to do with very sensitive internal timing (I wonder if anyone mentioned that Linux ain't an RTOS when this was first crafted).
Almost all issues I encounter can be attributed to screwing up the timing, where:
A. Debugging at the C level can make point clouds stop coming
B. Bugs in the native or Java code that cause hiccups in the threads handling the callbacks can cause point clouds to stop coming
C. Excessive load can cause the system to lose sync, at which point the point clouds will stop coming. This is detectable: you will start to see a silvery grid pattern appear in rectangular areas of the image, and point clouds will cease. Rarely, the system recovers if the load decreases; the silvery pattern goes away, and the point clouds come back. More commonly, the silvery pattern (I think it's the 3D spatializing grid) grows to cover more of the image. At that point a restart of the app is required for me, and a full tablet reboot every third time or so.
Summarizing, those are my suspicions and countermeasures, but it's based completely on personal experience.
I'm using the cwac-camera library to take photos with a custom in-app camera.
I'm overriding adjustPreviewParameters in SimpleCameraHost and am setting the JPEG quality.
@Override
public Parameters adjustPreviewParameters(Parameters parameters) {
    super.adjustPreviewParameters(parameters);
    parameters.setJpegQuality(80);
    return (parameters);
}
Unfortunately, as per this question, the setJpegQuality method doesn't work on some devices (e.g. the S3).
I can see that the cwac-camera ImageCleanupTask always saves the manipulated image at 100% JPEG quality.
What's the best way to customize the ImageCleanupTask?
Should I expose a setJpegQuality method in PictureTransaction? Or do we want a more versatile solution (like allowing the ImageCleanupTask to be injected)?
I can see that the cwac-camera ImageCleanupTask always saves the manipulated image at 100% JPEG quality.
Ideally, that would be configurable. There are lots of things the library would ideally do. :-)
What's the best way to customize the ImageCleanupTask?
If you mean "how would one get the JPEG percentage in there?", augment PictureTransaction.
Should I expose a setJpegQuality method in PictureTransaction?
I would do jpegQuality(), as PictureTransaction uses the builder/fluent API pattern.
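For illustration, usage might wind up looking like this (jpegQuality() being the proposed addition, not something the library has today):

// Hypothetical usage once a fluent jpegQuality() setter exists on
// PictureTransaction; 0-100, as with Camera.Parameters.
PictureTransaction xact =
        new PictureTransaction(getCameraHost())
                .needBitmap(true)
                .jpegQuality(80);

takePicture(xact);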
Note that with this change, you would want to remove parameters.setJpegQuality(80); from your existing code. Otherwise, the image will be degraded twice, once on capture (for devices that support it) and once when the image is written to disk, and that's probably not what you want.
I am working on an app that will allow a user to take quick click-and-forget snapshots. Most of the app is done, except the camera does not work the way I would like. Right now I have the camera working, but I can't seem to find a way to disable the shutter sound, and I can't find a way to disable displaying the preview. I was able to cover the preview up with a control, but I would rather just not have it displayed at all, if possible.
To sum things up, these are the items that I would like to disable while utilizing the built in Camera controls.
Shutter sound
Camera screen display
Image preview onPictureTaken
If anyone knows of a resource that could point me in the right direction, I would greatly appreciate it. I have been following CommonsWare's example from this sample fairly closely.
Thank you.
This is actually a property in the build.prop of a phone. I'm unsure if it's possible to change it, unless you completely override it and use your own camera code, using what is available in the SDK.
Take a look at this:
CameraService.cpp
. . .
CameraService::Client::Client(const sp<CameraService>& cameraService,
        const sp<ICameraClient>& cameraClient,
        const sp<CameraHardwareInterface>& hardware,
        int cameraId, int cameraFacing, int clientPid) {
    mPreviewCallbackFlag = FRAME_CALLBACK_FLAG_NOOP;
    mOrientation = getOrientation(0, mCameraFacing == CAMERA_FACING_FRONT);
    mOrientationChanged = false;
    cameraService->setCameraBusy(cameraId);
    cameraService->loadSound();
    LOG1("Client::Client X (pid %d)", callingPid);
}

void CameraService::loadSound() {
    Mutex::Autolock lock(mSoundLock);
    LOG1("CameraService::loadSound ref=%d", mSoundRef);
    if (mSoundRef++) return;

    mSoundPlayer[SOUND_SHUTTER] = newMediaPlayer("/system/media/audio/ui/camera_click.ogg");
    mSoundPlayer[SOUND_RECORDING] = newMediaPlayer("/system/media/audio/ui/VideoRecord.ogg");
}
As can be noted, the click sound is started without your interaction.
This is the service used in the Gingerbread Source code.
The reason they DON'T allow this is because it is illegal in some countries. The only way to achieve what you want is to have a custom ROM.
Update
If what is being said here: http://androidforums.com/t-mobile-g1/6371-camera-shutter-sound-effect-off.html
still applies, then you could write a timer that turns off the sound (silent mode) for a couple of seconds and then turns it back on each time you take a picture.
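For illustration, such a timer might look like the following sketch (RINGER_MODE_SILENT affects more than the shutter, and newer Android versions may restrict it; this is purely an assumption-laden example):

// Mute system sounds around the capture, then restore the previous mode.
final AudioManager audio =
        (AudioManager) getSystemService(Context.AUDIO_SERVICE);
final int previousMode = audio.getRingerMode();

audio.setRingerMode(AudioManager.RINGER_MODE_SILENT);
camera.takePicture(null, null, pictureCallback);

// restore the ringer a couple of seconds later
new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
    @Override
    public void run() {
        audio.setRingerMode(previousMode);
    }
}, 2000);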
You may use the data from the preview callback to save a picture on some trigger, such as a button with an OnClickListener. You could compress the image to JPEG or PNG. This way, there is no ShutterCallback to be implemented, and therefore you can play any sound you want, or none, when taking a picture.
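A rough sketch of that approach with the classic Camera API (the names and the quality value are just examples):

// Grab one NV21 preview frame on a button press and compress it to JPEG,
// bypassing takePicture() and its shutter sound entirely.
camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(
                data, ImageFormat.NV21, size.width, size.height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
        byte[] jpeg = out.toByteArray(); // save or display as needed
    }
});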
You can effectively hide the preview surface by giving it dimensions of 1px in the XML file (I found an example that said 0px, but for some reason that was giving me errors).
It may be illegal to have a silent shutter in some places, but it doesn't appear that the US is such a place, as my HTC One gives me an option to silence it, and in fact, since Android 4.2 you can do this:
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info); // populate info for the camera in use
if (info.canDisableShutterSound) {
    camera.enableShutterSound(false);
}
I am doing some image processing on previews captured by the camera. It is a CPU-consuming task, and I have to stop the preview to make it faster. Before a new frame is processed I call Camera.stopPreview(), and afterwards Camera.startPreview().
However, I would like to have the last captured frame displayed on the SurfaceView after stopping the preview. It works 'out of the box' on 2.3 devices; however, the SurfaceView goes black after calling Camera.stopPreview() on older versions of the SDK. Does anyone know what has changed, and what to do?
Yes, this was an improvement in 2.3.
I had this problem in 2.2 as well; there was no way to work on a preview image, despite the fact that this was theoretically possible according to the API. To solve it, I had to actually take a picture using Camera.takePicture(null, null, Camera.PictureCallback myCallback) (see info here) and then implement a callback to handle the taken picture. The instance of the class that implements this callback is the parameter to pass to Camera.takePicture(), and the callback method itself looks like this:
public void onPictureTaken(byte[] JPEGData, Camera camera) {
    final Bitmap bitmap = createBitmapFromView(JPEGData);
    // do something with the Bitmap
}
Doing it that way prevents the picture from being saved to external storage alongside the regular pictures taken with the camera application. Should you need to serialize the Bitmap, you'll have to do it explicitly. But it doesn't prevent the camera's trigger sound from being emitted.
Camera.takePicture() has to be called while the preview is running. stopPreview() can be called right after.
One thing to be careful with /!\:
Camera.takePicture() is not reentrant (at all). The callback must have returned before any subsequent call to Camera.takePicture(). This was freezing my phone; I had to shut it down and restart it before it was usable again. As the action was triggered by a button on my side, I had to shield it with a boolean:
if (!mPictureTaken) {
    mPictureTaken = true; // absolutely NOT reentrant. Any double click sticks the phone otherwise.
    mCameraView.takePicture(callback);
}
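As a complementary sketch (my assumption about where to re-arm the guard, not something from the original answer), the flag would be reset only once the callback has finished:

// Re-arm the guard after the picture callback returns, so the next
// takePicture() call is allowed again.
public void onPictureTaken(byte[] JPEGData, Camera camera) {
    try {
        // process JPEGData here
    } finally {
        mPictureTaken = false;
    }
}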