We can detect objects and so on with VisionImageProcessor like this:
imageProcessor.processImageProxy(imageProxy, graphicOverlay)
and it shows the results on a GraphicOverlay view, which we can add to an Activity/Fragment.
But how can I get the result as a Bitmap instead, so that I could save it to an image file (PNG/JPG) or a video file? Basically, I want to blend the camera frame with the image result from VisionImageProcessor and save it to an image/video file.
I have a method that returns an image selected from the camera or gallery as a MultipartBody.Part to be sent to an API, which works fine. At the same time, I need to take this MultipartBody.Part and show it in an ImageView.
What I am looking for is how to use the MultipartBody.Part to display the image in my ImageView.
I am working with the Camera2 API doing real-time image processing. I found the method
onCaptureProgressed(CameraCaptureSession, CaptureRequest, CaptureResult)
which is called for every captured frame, but I have no idea how to get the byte[] data from the CaptureResult.
You can't get image data from a CaptureResult; it only provides image metadata.
Take a look at the Camera2Basic sample app, which captures JPEG images with an ImageReader. If you change the format from JPEG to YUV, set the resolution to the preview size, and add the ImageReader's Surface as a target of the repeating preview request, you'll get an Image from the ImageReader for every captured frame.
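A minimal sketch of that change, assuming cameraDevice, previewSurface, captureSession, backgroundHandler, and previewSize already exist from your own Camera2 session setup (those names are placeholders, not part of the sample):

```java
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.Image;
import android.media.ImageReader;

// Create a YUV ImageReader at the preview resolution instead of a JPEG one.
ImageReader imageReader = ImageReader.newInstance(
        previewSize.getWidth(), previewSize.getHeight(),
        ImageFormat.YUV_420_888, /* maxImages */ 2);

imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    // image.getPlanes() exposes the Y/U/V byte buffers for this frame.
    image.close(); // always close, or the reader's queue fills up
}, backgroundHandler);

// Add the reader's Surface as a second target of the repeating preview request.
// createCaptureRequest/setRepeatingRequest throw CameraAccessException,
// so wrap this in try/catch in real code.
CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(previewSurface);
builder.addTarget(imageReader.getSurface());
captureSession.setRepeatingRequest(builder.build(), null, backgroundHandler);
```

With this in place the listener fires once per preview frame, which is where you can hand the data to your processing code.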
As far as I know, when you use the camera it crops part of the image; the application cuts out the part of the photo that goes beyond the preview rectangle.
Is there any way to get the original, full-sized image received directly from the camera sensor?
Root access on my device is available.
I did a small demo years ago:
https://sourceforge.net/p/javaocr/code/HEAD/tree/trunk/demos/camera-utils/src/main/java/net/sf/javaocr/demos/android/utils/camera/CameraManager.java#l8
The basic idea is to set up a callback; your raw image data is then delivered via a byte array (getPreviewFrame() / onPreviewFrame). This data arrives as a memory-mapped buffer directly from the address space of the camera process, so no root access is necessary.
Since this byte array does not carry any metadata, you have to read all the parameters (preview size, format, etc.) from the Camera object yourself.
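A sketch of that callback setup with the legacy android.hardware.Camera API (here "camera" is assumed to be an already-opened Camera instance with preview configured):

```java
import android.hardware.Camera;

// Read the parameters up front, because the frame byte array
// delivered to the callback carries no metadata of its own.
Camera.Parameters params = camera.getParameters();
final int width = params.getPreviewSize().width;
final int height = params.getPreviewSize().height;
final int format = params.getPreviewFormat(); // typically ImageFormat.NV21

camera.setPreviewCallback(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        // "data" is the raw frame; interpret it using the
        // width/height/format captured above.
    }
});
```

Note that this API is deprecated in favor of Camera2/CameraX, but the callback approach still illustrates how the raw frames are delivered.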
Is it possible to save pictures from a SurfaceView? I want a custom camera preview (smaller than the screen), and I want to show the preview and take pictures from it.
I have tried a lot of code but nothing works properly.
How to capture an image from a custom CameraView in Android?
Yes, it's possible, and the code at your link is correct in the common case. But there is one more thing you should check if you are trying to convert the data to a specific format in the callback:
public void onPictureTaken(byte[] arg0, Camera arg1)
You can check the picture format of the Camera argument with the method:
public int getPictureFormat ()
This returns the ImageFormat (JPEG, NV21, YUV, etc.) used by the camera on your device. You should use this value to interpret the byte data from onPictureTaken correctly, because BitmapFactory.decodeByteArray(arg0, 0, arg0.length) can only work with JPEG or PNG data.
From Android Developer BitmapFactory documentation:
Prior to KITKAT additional constraints apply: The image being decoded (whether as a resource or as a stream) must be in jpeg or png format. Only equal sized bitmaps are supported, with inSampleSize set to 1. Additionally, the configuration of the reused bitmap will override the setting of inPreferredConfig, if set.
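Putting the two pieces together, a sketch of the callback with the format check (assuming "camera" is your open android.hardware.Camera instance; most devices deliver JPEG here by default):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.ImageFormat;
import android.hardware.Camera;

Camera.PictureCallback jpegCallback = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera cam) {
        int format = cam.getParameters().getPictureFormat();
        if (format == ImageFormat.JPEG) {
            // decodeByteArray handles compressed formats such as JPEG/PNG
            Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
            // ... save or display the bitmap
        } else {
            // Raw formats (e.g. NV21) must be converted first,
            // for example via YuvImage.compressToJpeg(...)
        }
        cam.startPreview(); // preview stops after takePicture()
    }
};
camera.takePicture(null, null, jpegCallback);
```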
Additionally, I suggest using a TextureView instead of a SurfaceView, because then you can simply grab the current preview frame from the TextureView with this method:
public Bitmap getBitmap ()
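For example (a sketch; "textureView" is assumed to be the TextureView hosting your camera preview, and the call runs inside an Activity so getFilesDir() is available):

```java
import android.graphics.Bitmap;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

// Grab whatever is currently rendered in the TextureView as a Bitmap
// and write it out as a PNG file in the app's private storage.
Bitmap preview = textureView.getBitmap();
try (FileOutputStream out =
        new FileOutputStream(new File(getFilesDir(), "capture.png"))) {
    preview.compress(Bitmap.CompressFormat.PNG, 100, out);
} catch (IOException e) {
    e.printStackTrace();
}
```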
It looks like with camera image capture, one can only capture either the thumbnail or the full image in one pass, but not both, because
public void startCamera() {
...
camera.putExtra("output", imageUri); (step 1)
...
needs to be declared before
...
startActivityForResult(camera, IMAGE_CAPTURE); (step 2)
...
Bundle extras = camera.getExtras();
mImageBitmap = (Bitmap) extras.get("data");
imageView.setImageBitmap(mImageBitmap);
...
Once "onActivityResult" returns, the full image has already been saved to imageUri and the buffer cleared. But the thumbnail-capture code has to run after "startActivityForResult". The problem is that the image buffer is cleared once the image is saved in step 2, so to capture the thumbnail one would need to skip saving the full image in step 1.
As an alternative I can save the full image, reload it into a Bitmap, scale it down to thumbnail size, and save it again, but that seems redundant. Any idea how I can do both in one pass?
Check out MediaStore.Images.Thumbnails, and specifically getThumbnail (near the bottom): http://developer.android.com/reference/android/provider/MediaStore.Images.Thumbnails.html .
If that doesn't work, yes, you will have to manually re-scale and save the thumbnail yourself.
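A sketch of both approaches (assuming you can resolve the saved image's MediaStore row ID, here called "imageId", from imageUri via a ContentResolver query; the 96x96 thumbnail size is an arbitrary example):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.ThumbnailUtils;
import android.provider.MediaStore;
import java.io.FileNotFoundException;

// Try the MediaStore thumbnail first.
Bitmap thumb = MediaStore.Images.Thumbnails.getThumbnail(
        getContentResolver(), imageId,
        MediaStore.Images.Thumbnails.MINI_KIND, null);

if (thumb == null) {
    // Fallback: reload the saved full-size image and rescale it manually.
    try {
        Bitmap full = BitmapFactory.decodeStream(
                getContentResolver().openInputStream(imageUri));
        thumb = ThumbnailUtils.extractThumbnail(full, 96, 96);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    }
}
imageView.setImageBitmap(thumb);
```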