Camera intent with resolution parameters in Android

I am building an Android application where part of the functionality involves users taking images and recording video.
The application needs to use a specific resolution for both the images and the video.
Is it possible to specify resolution parameters and then use a camera intent to capture images and video, or do I need to build my own camera activity?
Any advice would be greatly appreciated.
Edit: I did some additional research and had a look at http://developer.android.com/guide/topics/media/camera.html#intents.
If I understand correctly there is no option to specify resolution parameters when using the Image capture intent http://developer.android.com/reference/android/provider/MediaStore.html#ACTION_IMAGE_CAPTURE.
For the video capture intent it seems I have the option to use the EXTRA_VIDEO_QUALITY parameter, but that only lets me choose between high quality and low quality (and I am not quite sure what those correspond to in terms of resolution): http://developer.android.com/reference/android/provider/MediaStore.html#EXTRA_VIDEO_QUALITY
It seems I had best get started developing my own image and video capture activities then, unless I missed some other options with the image and video intents.

The camera intent starts an external camera application, which MAY use your hints (but MIGHT NOT). That activity/application is non-standard (phone-vendor dependent), as is the concrete implementation of the camera software.
You can also use the camera API (working examples are in this project: http://sourceforge.net/projects/javaocr/ ), which allows you to:
- query supported image formats and resolutions (you guessed it: vendor dependent)
- set up preview and capture resolutions and formats (but the camera software is free to ignore these settings, and some formats and resolutions can produce weird exceptions despite being advertised as supported)
Conclusion: cameras in Android devices differ, and the camera API is an under-documented mess, so be as defensive as possible.
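For illustration, here is a minimal defensive sketch with the pre-Camera2 android.hardware.Camera API: query what the driver advertises, pick the largest picture size, and still be prepared for setParameters to fail. The class and method names are just example scaffolding:

import android.hardware.Camera;
import java.util.List;

public class DefensiveCameraSetup {
    public static Camera openWithLargestPictureSize() {
        Camera camera = Camera.open(); // can throw if the camera is absent or busy
        Camera.Parameters params = camera.getParameters();
        // Ask the vendor's driver what it claims to support.
        List<Camera.Size> sizes = params.getSupportedPictureSizes();
        Camera.Size best = sizes.get(0);
        for (Camera.Size s : sizes) {
            if (s.width * s.height > best.width * best.height) {
                best = s;
            }
        }
        params.setPictureSize(best.width, best.height);
        try {
            camera.setParameters(params);
        } catch (RuntimeException e) {
            // Some drivers reject sizes they advertised; fall back to defaults.
        }
        return camera;
    }
}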

Related

(Camera2 API) Can I run 2 ImageReader instances of different configs at the same time?

I am modifying the TF Lite sample app for object detection (Java). It has a live video feed that shows boxes around common objects, and it takes in ImageReader frames at 640x480.
I want to use these bounds to crop the items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run 2 instances of ImageReader: one low-quality video feed (used by TF Lite) and one for capturing full-quality still images? I also can't pin the 2nd one to any Surface for user preview; the pic has to be captured in the background.
This Medium article (https://link.medium.com/2oaIYoY58db) says: "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android here, so I couldn't make much sense of this.
Thanks for your time!
PS: as far as I know, this isn't possible with CameraX, yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-res still images. Depending on the hardware, this 'switch' may take time, or it may be really quick.
In your case, I would run a preview capture session at maximum resolution, and shrink (resize) the frames to feed into TFLite when necessary.
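Roughly, the two-stream approach could look like the sketch below: one session with two ImageReader targets, where only the low-res YUV stream repeats and the high-res JPEG reader is hit on demand. Whether this exact stream combination is guaranteed depends on the device's supported hardware level, and cameraDevice, handler, and the 4000x3000 size are assumed placeholders (query StreamConfigurationMap for the real maximum):

void startDualStreams(CameraDevice cameraDevice, Handler handler)
        throws CameraAccessException {
    // Low-res stream that feeds TF Lite continuously.
    ImageReader tfliteReader = ImageReader.newInstance(
            640, 480, ImageFormat.YUV_420_888, /*maxImages=*/2);
    // High-res reader; no visible preview Surface is needed for it.
    ImageReader stillReader = ImageReader.newInstance(
            4000, 3000, ImageFormat.JPEG, /*maxImages=*/1);

    cameraDevice.createCaptureSession(
            Arrays.asList(tfliteReader.getSurface(), stillReader.getSurface()),
            new CameraCaptureSession.StateCallback() {
                @Override public void onConfigured(CameraCaptureSession session) {
                    try {
                        // Repeating request targets only the low-res stream.
                        CaptureRequest.Builder preview = cameraDevice
                                .createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                        preview.addTarget(tfliteReader.getSurface());
                        session.setRepeatingRequest(preview.build(), null, handler);

                        // On demand (e.g. when TF Lite finds an object): one still.
                        CaptureRequest.Builder still = cameraDevice
                                .createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
                        still.addTarget(stillReader.getSurface());
                        session.capture(still.build(), null, handler);
                    } catch (CameraAccessException e) {
                        // handle/log
                    }
                }
                @Override public void onConfigureFailed(CameraCaptureSession session) { }
            }, handler);
}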

Is it possible to capture a High-Res image while using ArCore?

In my app I'm trying to use ArCore as sort of a "camera assistant" in a custom camera view.
To be clear: I want to display the camera feed to the user and have them capture images that don't contain the AR models.
From what I understand, in order to capture an image with ArCore I'll have to use the Camera2 API which is enabled by configuring the session to use the "shared Camera".
However, I can't seem to configure the camera to use any high-end resolutions (I'm using a Pixel 3, so I should be able to go as high as 12MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In the ArCore mode the image is blurry (I assume that's because the depth sensor is disabled as stated in their documentation)
In the Camera2 mode I can't enhance the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera in ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
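For example, selecting the config with the largest CPU image might look like this sketch (exact method availability depends on your ARCore SDK version):

CameraConfigFilter filter = new CameraConfigFilter(session);
List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);
CameraConfig best = configs.get(0);
for (CameraConfig config : configs) {
    Size cpu = config.getImageSize();    // CPU image (what acquireCameraImage() returns)
    Size gpu = config.getTextureSize();  // GPU texture used for rendering
    if (cpu.getWidth() * cpu.getHeight()
            > best.getImageSize().getWidth() * best.getImageSize().getHeight()) {
        best = config;
    }
}
session.setCameraConfig(best); // apply while the session is paused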
Probably you are currently using (maybe by default?) a CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but that has nothing to do with the depth sensor).
Sounds like you could just select a higher CPU image and you're good to go... but unfortunately that's not the case because that configuration applies to every frame. Getting higher resolution CPU images will result in much lower performance. When I tested this I got about 3-4 frames per second on my test device, definitely not ideal.
So now what? I think you have 2 options:
Pause the ARCore session, switch to a higher CPU image size for 1 frame, get the image, and switch back to the "normal" configuration (see the sketch after this list).
Probably you are already getting a nice GPU image, maybe not the best due to camera Preview, but hopefully good enough? Not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing... but rendering it onto another framebuffer and then reading the texture attached to it could work. Of course you might need to deal with texture coordinates yourself (full image vs visible area) but that's another topic.
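For option 1, the flow could look roughly like this. It is a sketch only: the config objects are assumed to come from getSupportedCameraConfigs as above, and expect a visible hiccup because reconfiguring restarts the camera:

void captureHighResOnce(Session session, CameraConfig fastConfig,
                        CameraConfig highResConfig)
        throws CameraNotAvailableException {
    session.pause();
    session.setCameraConfig(highResConfig);
    session.resume();

    // The first frames after resume may not carry an image yet, so a few
    // retries of session.update() may be needed in practice.
    Frame frame = session.update();
    try (Image image = frame.acquireCameraImage()) {
        // ... encode/save the YUV image here ...
    } catch (NotYetAvailableException e) {
        // retry on the next frame
    }

    session.pause();
    session.setCameraConfig(fastConfig); // restore the responsive config
    session.resume();
}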
Regarding CameraX, note that it wraps the Camera2 API in order to provide some camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, CameraX would not be suitable for ARCore, as I imagine they need full control of the camera.
I hope that helps a bit!

How to improve captured image resolution with Camera2 API android?

I am using this project android-camera2-secret-picture-taker to capture an image without opening the camera view, but the captured images are very bad.
any help to make this better?
thanks
[Edit]
I tried other phones and it works fine; I get these bad images on the Huawei Y6II only, and I don't know why. The phone camera is 13 MP and works fine with the native camera app.
Did you issue only a single capture request to the camera device? (No free-running preview or such).
Generally, the auto-exposure, focus, and white-balance routines take a second or so of streaming before they stabilize to good values.
Even if you don't want a preview on screen, you need to request 10-30 frames of data from the camera to start before you save a final image. Or to be more robust, set a repeating request targeting some low-resolution SurfaceTexture, and wait until the CaptureResult CONTROL_AE_STATE / AWB_STATE fields reach CONVERGED, and the AF_STATE field is what you want as well (depends on what AF mode you're using). Then capture your image.
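A rough sketch of that wait-for-convergence flow (cameraDevice, captureSession, previewBuilder, lowResSurface, jpegReader, and handler are assumed placeholders from your session setup):

CameraCaptureSession.CaptureCallback waitFor3A =
        new CameraCaptureSession.CaptureCallback() {
    @Override public void onCaptureCompleted(CameraCaptureSession session,
            CaptureRequest request, TotalCaptureResult result) {
        Integer ae = result.get(CaptureResult.CONTROL_AE_STATE);
        Integer awb = result.get(CaptureResult.CONTROL_AWB_STATE);
        // LEGACY devices may report null 3A state; treat that as "ready".
        boolean aeReady = ae == null
                || ae == CaptureResult.CONTROL_AE_STATE_CONVERGED
                || ae == CaptureResult.CONTROL_AE_STATE_FLASH_REQUIRED; // settled, flash wanted
        boolean awbReady = awb == null
                || awb == CaptureResult.CONTROL_AWB_STATE_CONVERGED;
        if (aeReady && awbReady) {
            try {
                session.stopRepeating();
                CaptureRequest.Builder still = cameraDevice
                        .createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
                still.addTarget(jpegReader.getSurface());
                session.capture(still.build(), null, handler);
            } catch (CameraAccessException e) {
                // handle/log
            }
        }
    }
};
// Warm-up stream into a low-resolution target until 3A converges:
previewBuilder.addTarget(lowResSurface);
captureSession.setRepeatingRequest(previewBuilder.build(), waitFor3A, handler);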
This is a wildly blind guess, but hey, worth a try.
If you used some code snippet from the web that suggests getting the list of supported image sizes and just picking the first one: well, this has backfired for me on Huawei devices (more than one model), because Huawei seems to provide the list in ascending order of resolution (i.e. smallest first), whereas most other devices I've seen do it in descending order (i.e. largest first).
So if this is a resolution issue, it might be worth a check.
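A defensive sketch: sort the sizes yourself instead of trusting the order the vendor returns (characteristics is an assumed CameraCharacteristics for the camera you opened):

StreamConfigurationMap map = characteristics.get(
        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] sizes = map.getOutputSizes(ImageFormat.JPEG);
// Sort by pixel area, descending, so sizes[0] is the largest on every vendor.
Arrays.sort(sizes, (a, b) -> Long.signum(
        (long) b.getWidth() * b.getHeight()
      - (long) a.getWidth() * a.getHeight()));
Size largest = sizes[0];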

Android - Setting Camera Resolution to 320 * 240 For Video recording programmatically using Intent

In Android, I want to set the camera resolution to 320x240 programmatically, with recording limited to 30 seconds. Please suggest some logic. Remember that I want to do all of this by using an Intent only, i.e. I am launching the video camera using an intent, so I need a way to lower the camera resolution programmatically using intents only.
You cannot set the camera resolution when invoking a third-party camera app for taking pictures or recording videos. Third-party camera apps are written by actual programmers, and those programmers can do what they want.
If you are using ACTION_VIDEO_CAPTURE, you are welcome to include extras, like EXTRA_VIDEO_QUALITY, to request certain characteristics. But camera apps are welcome to ignore those extras (and some do), and there is no extra to force a particular resolution.
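For reference, the extras look like this; note there is a duration-limit extra but no resolution extra, and the camera app may ignore both (REQUEST_VIDEO is just a hypothetical request code):

Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
intent.putExtra(MediaStore.EXTRA_VIDEO_QUALITY, 0);   // 0 = low quality, 1 = high quality
intent.putExtra(MediaStore.EXTRA_DURATION_LIMIT, 30); // request a 30-second cap, in seconds
startActivityForResult(intent, REQUEST_VIDEO);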
You are also welcome to implement your own video recording using MediaRecorder, skipping the third-party apps. Here, you can control the resolution... but only within the roster of resolutions supported by the device. There is no requirement that every Android device support recording any particular resolution, such as 320x240.
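If you go the MediaRecorder route, a minimal sketch might look like this, assuming the device actually supports 320x240 (check CamcorderProfile or getSupportedVideoSizes() first; outputPath is a placeholder):

MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setVideoSize(320, 240);    // only honored if the device supports this size
recorder.setMaxDuration(30 * 1000); // auto-stop after 30 seconds
recorder.setOutputFile(outputPath);
recorder.prepare();
recorder.start();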

grab frame from video in Android

I've been looking at different ways of grabbing a YUV frame from a video stream, but most of what I've seen relies on getting the width and height from previewSize. However, a cell phone can shoot video at 720p while a lot of phones can only display it at a lower resolution (e.g. 800x480), so is it possible to grab a screenshot that's closer to 1920x1080 (if video is being shot at 720p)? Or am I forced to use the preview resolution (800x480 on some phones)?
Thanks
Yes, you can. *
* Conditions Apply -
You need access to the middle layer, the media framework to be more precise
No, it cannot be done through the application alone
Now, if you want to do it at the media-framework level, here are the steps:
Assuming you are on Froyo or above, the default media framework used is Stagefright
In Stagefright, go to the method onVideoEvent; after a buffer is read from mVideoSource, use mVideoBuffer to access the video frame at its original resolution
Linking this with your application -
You will need a button in the application to indicate screen capture
Once the user presses this button, you read the video frame from the location mentioned above and return this buffer to the Java layer
From there you can use the JPEG encoder to convert the raw video frame to an image (see the sketch after the edit below).
EDIT:
Re-reading your question: you were asking about screen capture during recording, i.e. the camera path. Even for this, there is no way to achieve it in the application alone; you would have to do something similar, but you will need access to CameraSource in the Stagefright framework.
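For the JPEG-encoding step on the Java side, android.graphics.YuvImage can compress an NV21 buffer directly; this assumes the frame handed up from the native side is (or has been converted to) NV21:

import android.graphics.ImageFormat;
import android.graphics.Rect;
import android.graphics.YuvImage;
import java.io.ByteArrayOutputStream;

static byte[] encodeFrameToJpeg(byte[] nv21, int width, int height) {
    // YuvImage only accepts NV21 or YUY2; convert other YUV layouts first.
    YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    yuv.compressToJpeg(new Rect(0, 0, width, height), /*quality=*/90, out);
    return out.toByteArray();
}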
