I have a working bit of Android native code which processes video frames on the fly via the CameraX API. The basic workflow is this:
User points phone at things
Code takes a single frame, processes it, then returns a result. While this happens, all other frames are dropped (but the user doesn't notice - they just see smooth video)
When available, the result is displayed to the user as on-screen text
Rinse and repeat
My app will be in React Native, but I need to keep this bit of code native for performance reasons.
So I'm wondering if I can use a Native Module as a sort of black box that takes in a camera stream and returns results. If so, can I keep the flow as above? I particularly want to avoid the user having to do anything other than point the camera.
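For what it's worth, here is a minimal sketch of what such a module could look like using CameraX's ImageAnalysis use case; its STRATEGY_KEEP_ONLY_LATEST backpressure mode gives exactly the drop-frames-while-busy behaviour described above. FrameProcessor and the "FrameResult" event name are placeholders for your own code:

```java
import android.util.Log;
import androidx.camera.core.CameraSelector;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.core.content.ContextCompat;
import androidx.lifecycle.LifecycleOwner;
import com.facebook.react.bridge.ReactApplicationContext;
import com.facebook.react.bridge.ReactContextBaseJavaModule;
import com.facebook.react.bridge.ReactMethod;
import com.facebook.react.modules.core.DeviceEventManagerModule;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FrameAnalyzerModule extends ReactContextBaseJavaModule {
    private final ExecutorService analysisExecutor = Executors.newSingleThreadExecutor();

    public FrameAnalyzerModule(ReactApplicationContext context) {
        super(context);
    }

    @Override
    public String getName() {
        return "FrameAnalyzer";
    }

    @ReactMethod
    public void start() {
        ReactApplicationContext context = getReactApplicationContext();
        ListenableFuture<ProcessCameraProvider> future =
                ProcessCameraProvider.getInstance(context);
        future.addListener(() -> {
            try {
                ProcessCameraProvider provider = future.get();
                ImageAnalysis analysis = new ImageAnalysis.Builder()
                        // Drop frames while one is being processed; the
                        // analyzer only ever sees the latest frame.
                        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                        .build();
                analysis.setAnalyzer(analysisExecutor, image -> {
                    String result = FrameProcessor.process(image); // your native logic
                    image.close(); // frees the slot so the next frame is delivered
                    getReactApplicationContext()
                            .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter.class)
                            .emit("FrameResult", result);
                });
                // getCurrentActivity() is a LifecycleOwner when using ReactActivity.
                provider.bindToLifecycle((LifecycleOwner) getCurrentActivity(),
                        CameraSelector.DEFAULT_BACK_CAMERA, analysis);
            } catch (Exception e) {
                Log.e("FrameAnalyzer", "Camera binding failed", e);
            }
        }, ContextCompat.getMainExecutor(context));
    }
}
```

On the JS side you would subscribe to the "FrameResult" event with DeviceEventEmitter, so the user never has to do anything other than point the camera.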
I am adding the Zebra scanning SDK to my app. I see that when the camera is open, the hardware scanner does not work. I have implemented Scanner.StatusListener, but I see that it is not invoked when the camera is open. I am looking for a way to know when the user clicks the hardware button while the camera is open, so I can show them a toast. How can I get that callback?
Unfortunately it is not possible to use both the camera and the scanner within the same app because of a low-level hardware dependency (even if you are using the 2D imager for scanning rather than the camera, this dependency exists). There is no easy way to programmatically determine that the user has pressed the trigger in this scenario. To display the toast as you describe, the only way I can think of would be for your app to remap the trigger to some other action using the KeyMapping Manager, then revert the trigger back to its original behaviour when the camera is dismissed. Rather than trying to manage EMDK enabling and disabling when the camera is used, I would recommend using DataWedge for scanning in your app. You still can't scan while the camera is displayed, but it should make your application logic simpler.
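For reference, if you go the DataWedge route, the receiving side is just a BroadcastReceiver, assuming your DataWedge profile is configured with Intent output, Broadcast delivery, and a custom action (the action string below is a placeholder; com.symbol.datawedge.data_string is the standard extra carrying the decoded data):

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.widget.Toast;

public class ScanReceiver extends BroadcastReceiver {
    // Placeholder: must match the intent action set in the DataWedge profile.
    public static final String ACTION = "com.example.myapp.SCAN";
    // Standard DataWedge extra carrying the decoded barcode string.
    private static final String DATA_STRING = "com.symbol.datawedge.data_string";

    @Override
    public void onReceive(Context context, Intent intent) {
        if (ACTION.equals(intent.getAction())) {
            String barcode = intent.getStringExtra(DATA_STRING);
            // Hand the scan to your app logic; the toast is just for illustration.
            Toast.makeText(context, "Scanned: " + barcode, Toast.LENGTH_SHORT).show();
        }
    }
}
```

Register it with registerReceiver(new ScanReceiver(), new IntentFilter(ScanReceiver.ACTION)) while your activity is resumed.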
I am doing basic object detection on the camera preview screen in Android (greater than 3.2). For devices which do not support processing on the preview screen, I am buffering the preview frames, processing them, and clearing the buffer. This part is working as desired.
What I now want is for this app to run in the background while any other app is running in the foreground. I am using an Android service and am able to run a small test app in the background. However, my concern is with the camera preview app.
I don't want to display the preview screen, but I do want to use the preview data for processing. This might be too much to ask, but I wanted to know if this is even possible. I came across this link, which gives some hope. Basically I want to process the video (preview) stream without displaying it on the screen. If this is doable, then I can think of putting this app in the background and some other app in the foreground.
Unfortunately I won't be able to share the code, however it is the standard logic of creating a surface view and starting the preview.
I would really appreciate any insight into this.
Check the comments here. Basically, he opens the camera hardware, sets a preview callback, and calls startPreview() without setting the previewDisplay (this might not work on every device). You can try this from your background service. All this will work only if your foreground app doesn't access the camera. Please update this if it works - I am interested to know.
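A minimal sketch of that approach, assuming the deprecated android.hardware.Camera API from the answer above; on API 11+ a throwaway SurfaceTexture tends to be more reliable than omitting the preview surface entirely, though behaviour still varies by device:

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

public class HeadlessPreview {
    private Camera camera;

    // Call this from your background service once it is running.
    public void start() throws IOException {
        camera = Camera.open();
        // A dummy texture satisfies the preview-surface requirement
        // without rendering anything on screen.
        camera.setPreviewTexture(new SurfaceTexture(0));
        camera.setPreviewCallback((data, cam) -> {
            // data is an NV21-encoded frame by default; process it here.
        });
        camera.startPreview();
    }

    public void stop() {
        if (camera != null) {
            camera.setPreviewCallback(null);
            camera.stopPreview();
            camera.release();
            camera = null;
        }
    }
}
```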
I want to write an activity that:
Shows the camera preview (viewfinder), and has a "capture" button.
When the "capture" button is pressed, takes a picture and returns it to the calling activity (setResult() & finish()).
Are there any complete examples out there that work on every device? A link to a simple open source application that takes pictures would be the ideal answer.
My research so far:
This is a common scenario, and there are many questions and tutorials on this.
There are two main approaches:
1. Use the android.provider.MediaStore.ACTION_IMAGE_CAPTURE intent. See this question
2. Use the Camera API directly. See this example or this question (with lots of references).
Approach 1 would have been perfect, but the issue is that the intent is implemented differently on each device. On some devices it works well. However, on some devices you can take a picture but it is never returned to your app. On some devices nothing happens when you launch the intent. Typically it also saves the picture to the SD card, and requires the SD card to be present. The user interaction is also different on every device.
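For reference, the Intent-based launch typically looks like the sketch below (outputFile and the request code are placeholders; on newer Android versions you would pass a FileProvider content Uri instead of Uri.fromFile):

```java
import android.content.Intent;
import android.net.Uri;
import android.provider.MediaStore;
import java.io.File;

// Inside your activity:
private static final int REQUEST_IMAGE_CAPTURE = 1;

private void dispatchTakePicture(File outputFile) {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    // Ask the camera app to write the full-size image to our file;
    // without EXTRA_OUTPUT many devices return only a small thumbnail.
    intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(outputFile));
    startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // The picture should now be in outputFile; load it or hand it back
        // to the calling activity via setResult() and finish().
    }
}
```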
With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black.
I would have used ZXing as an example application (I work with it a lot), but it only uses the preview (viewfinder), and doesn't take any pictures. I also found that on some devices, ZXing did not automatically adjust the white balance when the lighting conditions changed, while the native camera app did it properly (not sure if this can be fixed).
Update:
For a while I used the Camera API directly. This gives more control (custom UI, etc.), but I would not recommend it to anyone. It would work on 90% of devices, but every now and again a new device would be released with a different problem.
Some of the problems I've encountered:
Handling autofocus
Handling flash
Supporting devices with a front camera, back camera or both
Each device has a different combination of screen resolution, preview resolutions (doesn't always match the screen resolution) and picture resolutions.
So in general, I'd not recommend going this route at all unless there is no other way. After two years I dumped my custom code and switched back to the Intent-based approach. Since then I've had much less trouble. The issues I had with the Intent-based approach in the past were probably just my own incompetence.
If you really need to go this route, I've heard it's much easier if you only support devices with Android 4.0+.
"With approach 2 the issue is stability. I tried some examples, but I've managed to stop the camera from working (until a restart) on some devices and completely freeze another device. On another device the capture worked, but the preview stayed black."
Either there is a bug in the examples or there is a compatibility issue with the devices.
The example that CommonsWare gave works well as-is, but here are the issues I ran into when modifying it for my use case:
1. Never take a second picture before the first one has completed, i.e. before PictureCallback.onPictureTaken() has been called. The CommonsWare example uses the inPreview flag for this purpose.
2. Make sure that your SurfaceView is full-screen. If you want a smaller preview you might need to change the preview-size selection logic (see the sketch after this list), otherwise the preview might not fit into the SurfaceView on some devices. Some devices only support a full-screen preview size, so keeping it full-screen is the simplest solution.
3. To add more components to the preview screen, a FrameLayout works well in my experience. I started by using a LinearLayout to add text above the preview, but that broke rule #2. When using a FrameLayout to add components on top of the preview, you don't have any issues with the preview resolution.
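For issue #2, here is a sketch of the kind of preview-size selection logic I mean, using the deprecated android.hardware.Camera API the example is built on (it picks the largest supported size that fits inside the view):

```java
import android.hardware.Camera;

// Picks the largest supported preview size that fits the given view.
private Camera.Size choosePreviewSize(Camera.Parameters params,
                                      int viewWidth, int viewHeight) {
    Camera.Size best = null;
    for (Camera.Size size : params.getSupportedPreviewSizes()) {
        if (size.width <= viewWidth && size.height <= viewHeight) {
            if (best == null
                    || size.width * size.height > best.width * best.height) {
                best = size;
            }
        }
    }
    // Fall back to the first supported size if nothing fits.
    return best != null ? best : params.getSupportedPreviewSizes().get(0);
}
```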
I also posted a minor issue relating to Camera.open() on GitHub.
"the recommended way to access the camera is to open Camera on a separate thread". Otherwise, Camera.open() can take a while and might bog down the UI thread.
"Callbacks will be invoked on the event thread open(int) was called from". That's why to achieve best performance with camera preview callbacks (e.g. to encode them in a low-latency video for live communication), I recommend to open camera in a new HandlerThread, as shown here.
I developed a small video camera application. It all works fine except focus.
I understand I need to call camera.autoFocus(), but I don't really know where the right place is to put the call.
Has anyone ever succeeded in autofocusing a video camera on Android?
Thanks,
Eli
It's probably a matter of preference based on how you think users will use your app.
I think a common convention is to do an auto-focus when the user touches the scene in the preview. Most OEM camera apps seem to do this.
Doing auto-focus after zooming is also a common thing.
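A minimal tap-to-focus sketch with the old android.hardware.Camera API (previewView and camera are assumed to already exist in your activity; error handling omitted):

```java
// Trigger a single auto-focus pass when the user taps the preview.
previewView.setOnTouchListener((view, event) -> {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        camera.autoFocus((success, cam) -> {
            // success says whether the lens locked focus; for video
            // you usually just let recording continue either way.
        });
        return true;
    }
    return false;
});
```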
Finally, you might want to have a look at the zxing project (a barcode scanner app), which has a nifty continuous auto-focus approach that might be of use. Though since you're capturing video, it might not be ideal, as the focus transitions might be too noticeable.
http://code.google.com/p/zxing/source/browse/trunk/android/src/com/google/zxing/client/android/camera/AutoFocusCallback.java?r=1698