Is it possible to save pictures from a SurfaceView? I want a custom camera (smaller than the screen) with a preview and the ability to take pictures.
I have tried a lot of code, but nothing works properly.
How to capture image from custom CameraView in Android?
Yes, it's possible, and the code at your link is correct in the common case. But there is one more thing you should check if you are trying to convert the data to a specific format in the callback:
public void onPictureTaken(byte[] arg0, Camera arg1)
You can check the picture format of the Camera argument with this method:
public int getPictureFormat ()
This returns the ImageFormat (such as JPEG, NV21, or YUV) used by the camera on your device. You should use this value to interpret the byte data from onPictureTaken correctly, because BitmapFactory.decodeByteArray(arg0, 0, arg0.length) only works with compressed data such as JPEG or PNG.
From the Android Developer BitmapFactory documentation:
Prior to KITKAT additional constraints apply: The image being decoded (whether as a resource or as a stream) must be in jpeg or png format. Only equal sized bitmaps are supported, with inSampleSize set to 1. Additionally, the configuration of the reused bitmap will override the setting of inPreferredConfig, if set.
Additionally, I suggest using TextureView instead of SurfaceView, because then there is a much simpler way to get a picture, directly from the TextureView, with this method:
public Bitmap getBitmap ()
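The format check described above can be sketched as a small helper. This is a minimal illustration, not Android API code: the constant values duplicate Android's `android.graphics.ImageFormat` (JPEG = 0x100, NV21 = 0x11) so the example compiles without the Android SDK.

```java
// Sketch: deciding whether raw camera bytes are safe to pass to
// BitmapFactory.decodeByteArray. Constant values mirror Android's
// android.graphics.ImageFormat; they are duplicated here only so the
// example is self-contained.
public class PictureFormatCheck {
    static final int FORMAT_JPEG = 0x100; // ImageFormat.JPEG
    static final int FORMAT_NV21 = 0x11;  // ImageFormat.NV21

    /** decodeByteArray understands compressed data (JPEG/PNG) only. */
    static boolean isDecodableByBitmapFactory(int pictureFormat) {
        return pictureFormat == FORMAT_JPEG;
    }

    public static void main(String[] args) {
        // A JPEG buffer can go straight to decodeByteArray...
        System.out.println(isDecodableByBitmapFactory(FORMAT_JPEG)); // true
        // ...but NV21 must be converted first (e.g. via YuvImage.compressToJpeg).
        System.out.println(isDecodableByBitmapFactory(FORMAT_NV21)); // false
    }
}
```

If getPictureFormat() reports NV21, one common route is to wrap the bytes in a YuvImage and compress to JPEG before decoding.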
Related
I'm currently using an OnImageCaptured callback to get my image instead of saving it to the device. I'm having trouble understanding when it's necessary to rotate an image when it comes from an ImageProxy.
I use the following method to convert the data from an ImageProxy to a Bitmap:
...
val buffer: ByteBuffer = imageProxy.planes[0].buffer // Only first plane because of JPEG format.
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)
return BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
The resulting bitmap is sometimes rotated, and sometimes not, depending on the device the picture is taken from. ImageProxy.getImageInfo().getRotationDegrees() returns the correct rotation, but I don't know when it's necessary to apply it, since sometimes it's applied in the bitmap, and sometimes not.
The ImageCapture.OnCapturedImageListener documentation also says:
The image is provided as captured by the underlying ImageReader without rotation applied. rotationDegrees describes the magnitude of clockwise rotation, which if applied to the image will make it match the currently configured target rotation.
which leads me to think that I'm getting the bitmap incorrectly, because sometimes it has the rotation applied. Is there something I'm missing here?
Well, as it turns out, the only necessary information is the Exif metadata. rotationDegrees contains the final orientation the image should be in, starting from the base orientation, but the Exif metadata only records the rotation needed to reach that final result. So rotating according to TAG_ORIENTATION solved the issue.
UPDATE: This was an issue with the CameraX library itself. It was fixed in 1.0.0-beta02, so now the Exif metadata and rotationDegrees contain the same information.
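The TAG_ORIENTATION value is an enumerated Exif code, not a degree count, so it has to be mapped before rotating. A minimal sketch of that mapping, with the constant values duplicated from ExifInterface (NORMAL = 1, ROTATE_180 = 3, ROTATE_90 = 6, ROTATE_270 = 8) so it compiles without Android:

```java
// Sketch: mapping an Exif TAG_ORIENTATION value to the clockwise rotation
// (in degrees) to apply to the decoded bitmap. Constants mirror
// androidx ExifInterface; duplicated here so the example is self-contained.
public class ExifRotation {
    static final int ORIENTATION_NORMAL = 1;
    static final int ORIENTATION_ROTATE_180 = 3;
    static final int ORIENTATION_ROTATE_90 = 6;
    static final int ORIENTATION_ROTATE_270 = 8;

    static int rotationDegrees(int exifOrientation) {
        switch (exifOrientation) {
            case ORIENTATION_ROTATE_90:  return 90;
            case ORIENTATION_ROTATE_180: return 180;
            case ORIENTATION_ROTATE_270: return 270;
            default:                     return 0; // NORMAL, undefined, etc.
        }
    }
}
```

On Android, the returned degrees would feed a Matrix passed to Bitmap.createBitmap to produce the correctly oriented image.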
I am working with the Camera2 API and real-time image processing. I found the method
onCaptureProgressed(CameraCaptureSession, CaptureRequest, CaptureResult)
which is called for every captured frame, but I have no idea how to get a byte[] of image data from the CaptureResult.
You can't get image data from CaptureResult; it only provides image metadata.
Take a look at the Camera2Basic sample app, which captures JPEG images with an ImageReader. If you change the JPEG format to YUV, set the resolution to preview size, and set the ImageReader Surface as a target for the preview repeating request, you'll get an ImageReader.Image for every frame captured.
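When copying each YUV frame out of the ImageReader, it helps to know how big the buffers are. A back-of-envelope sketch for YUV_420_888, assuming tightly packed planes (rowStride == width, pixelStride == 1); real ImageReader buffers may add padding, so always consult the plane strides:

```java
// Sketch: expected sample counts for the three planes of a YUV_420_888
// image. The Y plane is full resolution; the U and V planes are each
// subsampled by 2 in both dimensions, so the total is width * height * 3/2.
public class Yuv420Size {
    static int ySamples(int width, int height) {
        return width * height;
    }
    static int chromaSamples(int width, int height) {
        return (width / 2) * (height / 2); // per chroma plane
    }
    static int totalSamples(int width, int height) {
        return ySamples(width, height) + 2 * chromaSamples(width, height);
    }
}
```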
In my Android application, I use Camera2 API to allow the user to take a snapshot. The code mostly is straight out of standard Camera2 sample application.
When the image is available, it is obtained by calling acquireNextImage method:
public void onImageAvailable(ImageReader reader) {
    mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
}
In my case, when I obtain the width and height of the Image object, it reports it as 4160x3120. However, in reality, it is 3120x4160. The real size can be seen when I dump the buffer into a jpg file:
ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
// Dump "bytes" to file
For my purpose, I need the correct width and height. Wondering if the width and height are getting swapped because the sensor orientation is 90 degrees.
If so, I can simply swap the dimensions if I detect that the sensor orientation is 90 or 270. I already have this value:
mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
I feel this is a general problem and not specific to my code. Regards.
Edit:
Turns out the image size reported is correct. JPEG image metadata stores a field called "Orientation," and most image viewers know how to interpret this value.
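The swap described in the question can be sketched as a tiny helper. Note the caveat from the edit above: for JPEG output the buffer is typically stored unrotated and the Exif "Orientation" field tells viewers how to rotate it, so this swap is only needed when you want the on-screen dimensions rather than the buffer dimensions.

```java
// Sketch: swapping reported width/height when the sensor orientation is
// 90 or 270 degrees, i.e. when the sensor is mounted sideways relative
// to the device's natural orientation.
public class SensorSize {
    static int[] displaySize(int width, int height, int sensorOrientation) {
        if (sensorOrientation == 90 || sensorOrientation == 270) {
            return new int[] { height, width };
        }
        return new int[] { width, height };
    }
}
```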
From the Android camera, I take the YUV array and decode it to RGB (via JNI/NDK). Then I apply a black-and-white filter to the RGB matrix and show it on the camera preview in YCbCr_420_SP format:
lParameters.setPreviewFormat(PixelFormat.YCbCr_420_SP);
Now I need to take a photo, but when I take the photo, I get this error:
CAMERA-JNI Manually set buffer was too small! Expected 1138126 bytes, but got 165888!
Because you cannot get the image from the Surface. You must get a bitmap from the layout and then save it to the SD card in some folder as a compressed JPG. Thanks all; this question is closed.
I'm drawing an overlay on an image from the camera and saving the result to a file. To do this, I am passing a callback containing the code below to takePicture(). With larger image sizes, I am getting crashes with an OutOfMemoryError at the first line of the method.
Is there any way I can do this more efficiently? It seems that it's not possible to make a mutable Bitmap from the byte[], which doubles my memory usage immediately. If it can't be done this way at high resolutions, how can I produce an overlay on a large captured image without running out of memory?
public void onPictureTaken(byte[] rawPlainImage, Camera camera) {
    Bitmap plainImage = BitmapFactory.decodeByteArray(rawPlainImage, 0, rawPlainImage.length);
    plainImage = plainImage.copy(plainImage.getConfig(), true);
    Canvas combinedImage = new Canvas(plainImage);
    combinedImage.drawBitmap(mOverlay, mOverlayTransformation, null);
    // Write plainImage (now modified) out to a file
    plainImage.recycle();
}
You don't actually need to decode the image. Instead, draw the overlay onto its own canvas, save that canvas as a bitmap, convert the bitmap to a byte array, and then combine that byte array with the captured image's byte array before saving the result.
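To see why the decode-then-copy approach in the question hits OutOfMemoryError, it helps to run the numbers. A back-of-envelope sketch, assuming the default ARGB_8888 bitmap config (4 bytes per pixel):

```java
// Sketch: heap cost of decoding a full-resolution capture and then
// duplicating it with Bitmap.copy(), as in the onPictureTaken code above.
public class BitmapMemory {
    static long decodedBytes(int width, int height) {
        return (long) width * height * 4; // ARGB_8888: 4 bytes per pixel
    }
    static long decodeThenCopyBytes(int width, int height) {
        return 2 * decodedBytes(width, height); // original + mutable copy
    }
}
```

For a 4160x3120 capture, the decode alone needs roughly 52 MB, and the copy doubles that, which easily exceeds a typical per-app heap. Decoding with BitmapFactory.Options.inMutable set to true avoids the copy entirely, and inSampleSize can shrink the decode when full resolution is not required.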