I'm at the point in my app where I am ready to implement the camera. I've used the intent method before, and there is a noticeable delay when launching the phone's default camera activity.
I thought I'd take a look at baking the camera into a fragment, but it looks like it's not exactly a trivial task, especially with Camera and Camera2 and keeping backward compatibility in mind.
Is there a significant performance increase in creating my own camera fragment within my app that would make the effort worthwhile? I'll be taking a lot of pictures and then uploading them via JCIFS on background threads, which means I'll be calling the default camera activity via intent quite a bit.
Is there a significant performance increase in creating my own camera fragment within my app that would make the effort worthwhile?
Performance? Marginally, at best.
However, ACTION_IMAGE_CAPTURE implementations are notoriously buggy. Relying upon them is unrealistic.
My general recommendation is:
If taking photos is a "nice to have" capability for your app, such that if ACTION_IMAGE_CAPTURE does not work reliably for the user, you and the user are willing to shrug your virtual shoulders and move on, use ACTION_IMAGE_CAPTURE. An example of this would be a note-taking app, where you want to allow users to attach photos.
If taking photos is more important than that, but it is still somewhat of a side feature of your app, try using a library for integrating with the camera, such as mine. Or, give the user the choice between ACTION_IMAGE_CAPTURE and the library's camera capability, so the user can pick between a full-featured camera (that might not work) and something fairly basic from the library (but is more likely to succeed). An example of this would be an app for an insurance claims adjuster, where she needs to be able to document the claim with photos, but the point of the app is bigger than the photos.
If taking photos is the point of your app, integrate with the APIs directly, so you can have complete control over the experience. An example of this would be Snapchat. Or InstantFace, presumably.
I have quite a bit of experience with the camera API, but I could not find any documentation to answer this question...
Most phones already have a front and a back camera. Would it be possible to simulate a third camera via software (probably using a service), and register that with the API?
The idea would be that we define a custom camera, register it with the API, and then any camera app would be able to get it by looping through the available cameras.
I imagine several cases where we might want this...
There are some external cameras (such as the FLIR thermal camera) that could provide this.
We might want to concatenate the front and back camera images into a single image, and preview that. I know not all phones support opening both cameras concurrently, but some do, and I could imagine this functionality would be cool for third-party video chat apps like Skype. Specifically, since Skype doesn't natively support this, by registering directly with the Android Camera API we could get around the limitations of the Skype API, since our custom camera would just look like one of the default Android cameras.
So would this be possible? Or what are the technical limitations that prevent us from doing it? Perhaps the Android API simply doesn't let us define a custom source (I know the Sensor API doesn't, so I would not be surprised if this was the case for the Camera API as well).
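Setting aside whether a virtual camera can be registered with the framework, the concatenation step itself is ordinary image composition. A hedged sketch of that idea, using java.awt.image.BufferedImage purely for illustration (on Android this would be Bitmap and Canvas); ConcatFrames and sideBySide are hypothetical names:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class ConcatFrames {
    // Hypothetical helper: draw two frames side by side into one image.
    // On Android this would use Bitmap/Canvas instead of BufferedImage.
    static BufferedImage sideBySide(BufferedImage left, BufferedImage right) {
        int h = Math.max(left.getHeight(), right.getHeight());
        BufferedImage out = new BufferedImage(
                left.getWidth() + right.getWidth(), h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(left, 0, 0, null);                // front-camera frame
        g.drawImage(right, left.getWidth(), 0, null); // back-camera frame
        g.dispose();
        return out;
    }
}
```

This only produces the combined frame; exposing it to other apps as a camera is the part the framework does not provide a public hook for.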
I am creating an app where the user has to take multiple pictures of an object. I need to guide them through the process by adding a message at the top of the screen when the camera is opened (something like "Right side", "Left side", "Front", etc.). Toasts are not good, since I need the message to stay there until the picture is taken. Is there a way to add a layout over the default camera? I considered creating a camera activity, but with android.hardware.Camera deprecated and my minimum SDK not supporting camera2, it seems quite the predicament.
Any suggestion is appreciated.
Thank you
Is there a way to add a layout over the default camera?
There is no single "default camera". There are several thousand Android device models. There are hundreds of different camera apps that come pre-installed on those device models. Plus, the user may elect to use a different camera app, one that they installed, when you try starting an ACTION_IMAGE_CAPTURE activity (or however you plan on taking a picture). As a result, you have no way of knowing what the UI is of the camera activity to be able to annotate it, even if annotating it were practical (which, on the whole, it isn't).
I considered creating a camera activity, but with android.hardware.Camera deprecated and my minimum SDK not supporting camera2, it seems quite the predicament.
In Android, "deprecated" means "we have something else that we think that you should use". Unless the documentation specifically states that the deprecated stuff does not work starting with such-and-so API level, the deprecated APIs still work. They may not be the best choice, in light of the alternative, but they still work.
In particular, deprecated APIs sometimes are your only option. The camera is a great example. android.hardware.Camera is the only way to take pictures on Android 4.4 and below. At the present time, that still represents the better part of a billion devices. Here, "deprecated" means "on Android 5.0 and higher, you should consider using android.hardware.camera2". Whether you choose to use both APIs (android.hardware.camera2 on API Level 21+ and android.hardware.Camera before then), or whether you choose to use android.hardware.Camera across the board, is your decision to make.
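The dual-API decision described above can be made at runtime. A minimal sketch, with pickCameraApi as a hypothetical helper; on a device the value passed in would come from Build.VERSION.SDK_INT:

```java
public class CameraApiChoice {
    // API 21 (Android 5.0) introduced android.hardware.camera2;
    // below that, android.hardware.Camera is the only option.
    static String pickCameraApi(int sdkInt) {
        return sdkInt >= 21 ? "android.hardware.camera2" : "android.hardware.Camera";
    }
}
```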
If you want to offer some sort of guided picture-taking, implementing your own camera UI in your own app is pretty much your only choice.
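For the guided-overlay part of that UI, one common pattern is a FrameLayout that stacks a text prompt over the preview surface. A minimal layout sketch, assuming a SurfaceView-based preview; the ids and attributes here are illustrative, not prescriptive:

```xml
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Camera preview fills the screen -->
    <SurfaceView
        android:id="@+id/camera_preview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <!-- Instruction text ("Right side", "Left side", ...) stays on top -->
    <TextView
        android:id="@+id/instruction"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="top|center_horizontal"
        android:textColor="#FFFFFF" />
</FrameLayout>
```

Because the children of a FrameLayout are simply stacked, the prompt can be updated from code after each shot without disturbing the preview underneath.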
So, I've come to find that the method of starting a camera intent provided by:
http://developer.android.com/training/camera/photobasics.html
will eventually fail if you take a photo and then hit cancel multiple times. I've tried it in apps other than the one I'm building, and sure enough, those applications also crash after a few photos are taken and cancelled. I assume this is because each photo remains in memory until control returns to another activity. Combined with the fact that hitting the cancel button does not produce an activity result in which to handle the previously taken picture, it seems like a limitation of the camera application itself.
My question is: Is there a way of getting around this while still using the built in camera application via intent? Is there a method I can call to keep it from caching the canceled images into memory?
it seems like a limitation of the camera application itself
Please bear in mind that there are hundreds of pre-installed camera apps (across thousands of device models and well over a billion active Android users), and countless more such apps on the Play Store and elsewhere, that might respond to ACTION_IMAGE_CAPTURE. While many have bugs, the bugs vary, and so the behavior that you are seeing is tied to the specific camera app that you happen to be invoking.
I assume this is because each photo remains in memory until returning to another activity
I certainly would not assume this, but if the other app is what is crashing, there is little that you or I can do about it.
Is there a way of getting around this while still using the built in camera application via intent?
You are not in position to fix all the bugs across all the camera apps across all the devices in use.
Is there a method I can call to keep it from caching the canceled images into memory?
There is nothing in the Android SDK for you to tell another app that it has a bug and should stop breaking.
According to Eric Ahn (Android - How to stop camera intent from saving on the phone) the only way around this is to create your own camera activity and handle the callbacks yourself via the camera's API. I'll probably look into it some more later, but I suspect Eric is right.
Is it basically the same process done in a different way? Or do they yield different results under certain circumstances?
It's not really a question of app performance per se. Using the intent approach means you use the phone's camera interface; when building an in-app camera, you are responsible for creating the UI/UX yourself.
I suppose a poorly implemented custom camera could perform worse than the built-in camera interface.
Choosing a custom camera over an intent should be more about whether you want the user to leave the app and come back, or stay in a customized UX.
Hope this helps.
I want to write an activity that:
Shows the camera preview (viewfinder), and has a "capture" button.
When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that works on every device? A link to a simple open source application that takes pictures would be the ideal answer.
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE event. See this question
Use the Camera API directly. See this example or this question (with lots of references).
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and again a new device would be released with a different problem.
Some of the problems I've encountered:
Handling autofocus
Handling flash
Supporting devices with a front camera, back camera or both
Each device has a different combination of screen resolution, preview resolutions (doesn't always match the screen resolution) and picture resolutions.
So in general, I'd not recommend going this route at all, unless there is no other way. After two years I dumped my custom code and switched back to the Intent-based approach. Since then I've had much less trouble. The issues I'd had with the Intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well. The example works when using it as-is, but here are the issues I ran into when modifying it for my use case:
Never take a second picture before the first one has completed, that is, before PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose.
Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview size selection logic, otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
To add more components to the preview screen, FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke rule #2. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
I also posted a minor issue relating to Camera.open() on GitHub.
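The preview-size concern in point 2 boils down to picking, from the camera's supported sizes, the one whose aspect ratio best matches the target surface. A hedged sketch of that selection logic; PreviewSizePicker and its Size type are illustrative stand-ins for the Camera.Size values returned by getSupportedPreviewSizes():

```java
import java.util.Arrays;
import java.util.List;

public class PreviewSizePicker {
    // Illustrative stand-in for android.hardware.Camera.Size.
    static final class Size {
        final int width, height;
        Size(int width, int height) { this.width = width; this.height = height; }
    }

    // Pick the supported size whose aspect ratio is closest to the target
    // surface; among equally close ratios, prefer the largest area.
    static Size bestMatch(List<Size> supported, int targetWidth, int targetHeight) {
        double targetRatio = (double) targetWidth / targetHeight;
        Size best = null;
        double bestDiff = Double.MAX_VALUE;
        for (Size s : supported) {
            double diff = Math.abs((double) s.width / s.height - targetRatio);
            boolean closer = diff < bestDiff - 1e-9;
            boolean tieButBigger = Math.abs(diff - bestDiff) <= 1e-9
                    && best != null
                    && (long) s.width * s.height > (long) best.width * best.height;
            if (closer || tieButBigger) {
                best = s;
                bestDiff = diff;
            }
        }
        return best;
    }
}
```

With a full-screen SurfaceView the target is simply the display size, which is why keeping the preview full-screen sidesteps most of this logic.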
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why, to achieve the best performance with camera preview callbacks (e.g. to encode them into low-latency video for live communication), I recommend opening the camera in a new HandlerThread, as shown here.
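To illustrate the "open on a separate thread" advice in plain Java, here is a sketch using a single-threaded executor as a stand-in for Android's HandlerThread; the slow Camera.open() call is simulated with a sleep, and all names here are hypothetical:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CameraOpenOffUiThread {
    // Open the (simulated) camera on a dedicated background thread so the
    // calling thread, e.g. the UI thread, is not blocked by the slow open.
    static String openCameraInBackground() throws Exception {
        ExecutorService cameraThread = Executors.newSingleThreadExecutor();
        Future<String> opened = cameraThread.submit(() -> {
            Thread.sleep(50); // stand-in for a slow Camera.open()
            return "camera-opened";
        });
        // The UI thread could keep drawing here; we just wait for the result.
        String result = opened.get();
        cameraThread.shutdown();
        return result;
    }
}
```

On Android, a HandlerThread adds a Looper so that the camera's callbacks are also delivered on that same background thread, which is the point of the quoted documentation.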