I am creating an app where the user has to take multiple pictures of an object. I need to guide them through the process by adding a message at the top of the screen when the camera is opened (something like: Right side, Left side, Front, etc.). Toasts are not suitable since I need the message to stay there until the picture is taken. Is there a way to add a layout over the default camera? I considered creating a camera activity, but with android.hardware.Camera deprecated and my minimum SDK not supporting camera2, it seems quite the predicament.
Any suggestion is appreciated.
Thank you
Is there a way to add a layout over the default camera?
There is no single "default camera". There are several thousand Android device models. There are hundreds of different camera apps that come pre-installed on those device models. Plus, the user may elect to use a different camera app, one that they installed, when you try starting an ACTION_IMAGE_CAPTURE activity (or however you plan on taking a picture). As a result, you have no way of knowing what the UI is of the camera activity to be able to annotate it, even if annotating it were practical (which, on the whole, it isn't).
I considered creating a camera activity but with android.hardware.Camera deprecated, and my minimum SDK not supporting camera2, it seems quite the predicament.
In Android, "deprecated" means "we have something else that we think that you should use". Unless the documentation specifically states that the deprecated stuff does not work starting with such-and-so API level, the deprecated APIs still work. They may not be the best choice, in light of the alternative, but they still work.
In particular, deprecated APIs sometimes are your only option. The camera is a great example. android.hardware.Camera is the only way to take pictures on Android 4.4 and below. At the present time, that still represents the better part of a billion devices. Here, "deprecated" means "on Android 5.0 and higher, you should consider using android.hardware.camera2". Whether you choose to use both APIs (android.hardware.camera2 on API Level 21+ and android.hardware.Camera before then), or whether you choose to use android.hardware.Camera across the board, is your decision to make.
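If you go the dual-API route, the dispatch is a simple runtime version check. A minimal sketch, where Camera2Engine and LegacyCameraEngine are hypothetical wrapper classes you would write around the two APIs:

```java
// Sketch: choose a camera backend at runtime.
// Camera2Engine and LegacyCameraEngine are hypothetical wrappers, not
// framework classes.
CameraEngine engine;
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
    // API 21+: android.hardware.camera2
    engine = new Camera2Engine(context);
} else {
    // Older devices: the deprecated but still functional android.hardware.Camera
    engine = new LegacyCameraEngine(context);
}
```

The rest of your app talks only to the wrapper interface, so the version split stays in one place.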
If you want to offer some sort of guided picture-taking, implementing your own camera UI in your own app is pretty much your only choice.
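Overlaying guidance text on your own preview is straightforward with a FrameLayout, since children stack in z-order. A sketch for an Activity's onCreate, assuming CameraPreview is your own SurfaceView subclass that drives the camera:

```java
// Sketch: stack an instruction label on top of the camera preview.
// CameraPreview is an assumed custom SurfaceView subclass, not a framework class.
FrameLayout root = new FrameLayout(this);
CameraPreview preview = new CameraPreview(this);
root.addView(preview, new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.MATCH_PARENT,
        FrameLayout.LayoutParams.MATCH_PARENT));

TextView hint = new TextView(this);
hint.setText("Right side");        // update this as the user progresses
hint.setTextColor(Color.WHITE);
root.addView(hint, new FrameLayout.LayoutParams(
        FrameLayout.LayoutParams.WRAP_CONTENT,
        FrameLayout.LayoutParams.WRAP_CONTENT,
        Gravity.TOP | Gravity.CENTER_HORIZONTAL));

setContentView(root);
```

Unlike a Toast, the label stays visible until you change or remove it yourself, e.g. after each capture.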
Related
I have quite a bit of experience with the camera API, but I could not find any documentation to answer this question...
Most phones already have a front and a back camera. Would it be possible to simulate a third camera in software (probably using a service), and register it with the API?
The idea would be that we define a custom camera, register it with the API, and then any camera app would be able to find it by looping through the available cameras.
I imagine several cases where we might want this...
There are some external cameras (such as the FLIR thermal camera) that could provide this.
We might want to concatenate the front and back camera images into a single image, and preview that. I know not all phones support opening both cameras concurrently, but some do, and I could imagine this functionality being cool for 3rd-party video chat apps like Skype. Specifically, since Skype doesn't natively support this, by registering directly with the Android Camera API we could get around the limitations of the Skype API, since our custom camera would just look like one of the default Android cameras.
So would this be possible? Or what are the technical limitations that prevent us from doing it? Perhaps the Android API simply doesn't let us define a custom source (I know the Sensor API doesn't, so I would not be surprised if this were the case for the Camera API as well).
I'm at the point in my app where I am ready to implement the camera. I've used the intent method before, and there is a noticeable delay when launching the phone's default camera activity.
I thought I'd take a look at baking the camera into a fragment, but it looks like it's not exactly a trivial task, especially with Camera and Camera2 and keeping backward compatibility in mind.
Is there a significant performance increase in creating my own camera fragment within my app that would make the effort worthwhile? I'll be taking a lot of pictures and then uploading them via JCIFS on background threads, which means I'll be calling the default camera activity via intent quite a bit.
Is there a significant performance increase in creating my own camera fragment within my app that would make the effort worthwhile?
Performance? Marginally, at best.
However, ACTION_IMAGE_CAPTURE implementations are notoriously buggy. Relying upon them is unrealistic.
My general recommendation is:
If taking photos is a "nice to have" capability for your app, such that if ACTION_IMAGE_CAPTURE does not work reliably for the user, you and the user are willing to shrug your virtual shoulders and move on, use ACTION_IMAGE_CAPTURE. An example of this would be a note-taking app, where you want to allow users to attach photos.
If taking photos is more important than that, but it is still somewhat of a side feature of your app, try using a library for integrating with the camera, such as mine. Or, give the user the choice between ACTION_IMAGE_CAPTURE and the library's camera capability, so the user can pick between a full-featured camera (that might not work) and something fairly basic from the library (but is more likely to succeed). An example of this would be an app for an insurance claims adjuster, where she needs to be able to document the claim with photos, but the point of the app is bigger than the photos.
If taking photos is the point of your app, integrate with the APIs directly, so you can have complete control over the experience. An example of this would be Snapchat. Or InstantFace, presumably.
What I'm trying to achieve: access both front and back cameras at the same time.
What I've researched: I know the Android camera API doesn't support using multiple instances of Camera, and you have to release one camera before using the other. I've read tens of questions about this, and I know on some devices it's possible (like the Samsung S4, or other newer devices from them).
I've also found out that it's possible to have access to both of them in Android KitKat on SOME devices.
I also know that on api >= 21, using the camera2 API, it's possible to access both of them at the same time because it's thread safe.
What I've got so far: implementation for accessing the cameras one at the time in order to provide a picture-in-picture.
I know it's not possible to implement simultaneous dual cameras on every device; I just want a way to make it available on the devices that support it.
How can I test to see if the device is capable of accessing both of them?
I've also searched for a library that allows such a thing, but I didn't find anything. Is there such a library?
I would like to make this feature available for as many devices as possible, and for the others I'll leave the feature in its current state (one camera at a time).
Can anyone please help me, at least with some advice?
Thanks!
The Android camera APIs generally allow multiple cameras to be used at the same time, but most devices do not have enough hardware resources to support that in practice - for example, there's often only one camera image processor shared by both cameras.
There's no query that's included in the Android APIs that'll tell you up front if you can use multiple cameras at the same time.
The only way to tell is to try to open a second camera when you already have one open. If you can open the second camera, then you can do picture-in-picture, etc. If you get an exception trying to open the second camera, then that particular device doesn't support having both cameras open.
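The probe described above can be sketched with the deprecated android.hardware.Camera API; the camera IDs 0 (back) and 1 (front) are assumptions and should really come from Camera.getCameraInfo():

```java
// Sketch: probe whether a second camera can be opened while the first is held.
// IDs 0 = back, 1 = front are assumptions; enumerate with getCameraInfo() in
// real code. Uses the deprecated android.hardware.Camera API.
Camera back = Camera.open(0);
Camera front = null;
boolean dualSupported;
try {
    front = Camera.open(1);
    dualSupported = true;      // both pipelines opened: PiP is possible
} catch (RuntimeException e) {
    // Most devices land here: the hardware can't drive both cameras at once.
    dualSupported = false;
} finally {
    if (front != null) front.release();
    back.release();
}
```

Run the probe once, cache the result, and fall back to one-at-a-time capture when it fails.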
It is possible using the Android Camera2 API, but as indicated above most devices don't have hardware support. If you have a Nexus 5X, Nexus 6, or Nexus 6P it will work and you can test with this BothCameras app. I've implemented blitting to allow video recording as well (in addition to still pictures) using the hardware h264 encoder.
You can not access both the cameras in all android mobile phones due to hardware limitations. The best alternative can be using both the camera one by one. For that you can use single camera object and can change camera face to take another photo.
I have done this in one of my application.
https://play.google.com/store/apps/details?id=com.ushaapps.bothie
It's worth mentioning that in some cases just opening two cameras with the Camera2 API is not enough to determine support.
Some devices do not throw an error on opening: the second camera opens correctly, but the first one starts invoking the onCaptureFailed callback.
So the most accurate check is to start both cameras, wait for frames from each of them, and verify that there are no capture failures.
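That silent failure mode can be watched for with a camera2 CaptureCallback attached to the first camera's session; dualCameraBroken is a hypothetical flag your session-management code would check:

```java
// Sketch (camera2): opening both devices can succeed even when dual capture
// is unsupported; the first session then starts failing silently. Attach this
// callback to the first session's repeating request and watch for failures.
// dualCameraBroken is a hypothetical field, not a framework member.
CameraCaptureSession.CaptureCallback watchdog =
        new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureFailed(CameraCaptureSession session,
                                CaptureRequest request,
                                CaptureFailure failure) {
        // The first camera began failing after the second one opened:
        // treat dual-camera mode as unsupported on this device.
        dualCameraBroken = true;
    }
};
```

Only after frames arrive from both sessions without this firing can you conclude the device really supports it.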
I would like to use the mobile camera and develop a smart magnifier that can zoom and freeze-frame what we are viewing, so we don't have to keep holding the device steady while we read. Also should be able to change colors as given in the image in the link below.
https://lh3.ggpht.com/XhSCrMXS7RCJH7AYlpn3xL5Z-6R7bqFL4hG5R3Q5xCLNAO0flY3Fka_xRKb68a2etmhL=h900-rw
Since I'm new to Android I have no idea how to start; do you have any suggestions?
Thanks in advance for your help :)
I've done something similar and published it here. I have to warn you, though: this is not a task to start Android development with. Not because of development skills; the showstopper here is the need for a massive number of devices to test on.
Basically, two reasons:
The Camera API is quite complicated, and different hardware devices behave differently. Forget about using the emulator; you would need a bunch of real hardware devices.
There is a new API, Camera2, for platform 21 and higher, and the old Camera API is deprecated (kind of an 'in limbo' state).
I have posted some custom Camera code on GitHub here, to show some of the hurdles involved.
So the easiest way out in your situation would be the camera intent approach: when you get your picture back (it is a JPEG file), just decode it and zoom in to the center of the resulting bitmap.
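That approach can be sketched as follows; photoFile, REQUEST_IMAGE, and imageView are assumed to be defined elsewhere in your Activity:

```java
// Sketch: capture via intent with EXTRA_OUTPUT, then decode the JPEG and
// crop the central region for a 2x "zoom". photoFile, REQUEST_IMAGE and
// imageView are assumed fields, not framework members.
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photoFile));
startActivityForResult(intent, REQUEST_IMAGE);

// ...later, in onActivityResult, after checking the request code and result:
Bitmap full = BitmapFactory.decodeFile(photoFile.getAbsolutePath());
int w = full.getWidth(), h = full.getHeight();
// Keep the central half in each dimension; the ImageView scales it up.
Bitmap zoomed = Bitmap.createBitmap(full, w / 4, h / 4, w / 2, h / 2);
imageView.setImageBitmap(zoomed);
```

For the freeze-frame requirement, the cropped bitmap is already static, so no extra work is needed there.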
Good Luck
I want to write an activity that:
Shows the camera preview (viewfinder), and has a "capture" button.
When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that works on every device? A link to a simple open source application that takes pictures would be the ideal answer.
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE intent. See this question
Use the Camera API directly. See this example or this question (with lots of references).
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and again a new device would be released with a different problem.
Some of the problems I've encountered:
Handling autofocus
Handling flash
Supporting devices with a front camera, back camera or both
Each device has a different combination of screen resolution, preview resolutions (doesn't always match the screen resolution) and picture resolutions.
So in general, I'd not recommend going this route at all unless there is no other way. After two years I dumped my custom code and switched back to the intent-based approach. Since then I've had much less trouble. The issues I'd had with the intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well as-is, but here are the issues I ran into when modifying it for my use case:
Never take a second picture before the first picture has completed, in other words PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose.
Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview size selection logic, otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
To add more components to the preview screen, FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke rule #2. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
I also posted a minor issue relating to Camera.open() on GitHub.
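The preview size selection logic mentioned above boils down to plain arithmetic: match the screen's aspect ratio, then prefer the largest area. A minimal sketch, with PreviewSizeChooser as a hypothetical helper name and sizes passed as {width, height} pairs:

```java
// Sketch: pick the supported preview size whose aspect ratio best matches
// the screen, preferring the largest area among equally good matches.
// PreviewSizeChooser is a hypothetical helper name.
public class PreviewSizeChooser {
    public static int[] choose(int[][] supported, int screenW, int screenH) {
        // Camera preview sizes are reported landscape; compare in landscape.
        double target = (double) Math.max(screenW, screenH)
                      / Math.min(screenW, screenH);
        int[] best = supported[0];
        double bestDiff = Double.MAX_VALUE;
        for (int[] s : supported) {
            double diff = Math.abs((double) s[0] / s[1] - target);
            boolean closer = diff < bestDiff - 1e-6;
            boolean tiedButLarger = Math.abs(diff - bestDiff) <= 1e-6
                    && s[0] * s[1] > best[0] * best[1];
            if (closer || tiedButLarger) {
                best = s;
                bestDiff = diff;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] r = choose(new int[][]{{640, 480}, {1280, 720}, {1920, 1080}},
                         1080, 1920);
        System.out.println(r[0] + "x" + r[1]); // picks the largest 16:9 size
    }
}
```

With a 1080x1920 portrait screen and the sizes above, both 16:9 entries tie on ratio and the larger 1920x1080 wins. Feed in getSupportedPreviewSizes() converted to pairs in real code.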
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why to achieve best performance with camera preview callbacks (e.g. to encode them in a low-latency video for live communication), I recommend to open camera in a new HandlerThread, as shown here.