I'm trying to open the camera and display its preview using a SurfaceView. This delays the loading of the activity for a really long time, so I'm wondering what the best practices are for opening the camera.
From http://developer.android.com/training/camera/cameradirect.html :
Getting an instance of the Camera object is the first step in the
process of directly controlling the camera. As Android's own Camera
application does, the recommended way to access the camera is to open
Camera on a separate thread that's launched from onCreate(). This
approach is a good idea since it can take a while and might bog down
the UI thread. In a more basic implementation, opening the camera can
be deferred to the onResume() method to facilitate code reuse and keep
the flow of control simple.
So the official recommendation is to use a separate thread. This will mean modifying the activity to be able to deal with a state where the camera isn't open yet, and could even fail to open entirely.
If you aren't comfortable with multithreading and Android app development, it's probably best to just accept the stall during Activity startup. On most devices the camera opens very quickly.
While it's very difficult to make the camera start faster, you can make the Activity start faster by offloading camera.open() onto a background thread via an AsyncTask or some other method. You can also delay the camera.open() call by some arbitrary number of milliseconds after onResume(), so that the Activity is already visible before camera loading happens.
I don't recommend the AsyncTask method -- making camera loading an asynchronous operation is very prone to errors.
The latter method is also pretty useless as the camera won't be usable until after it loads anyway.
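If you do decide to open the camera off the main thread, a minimal sketch might look something like this. It assumes the legacy android.hardware.Camera API; the CameraActivity class and the onCameraOpened() callback are placeholder names, not something from the docs:

```java
// Sketch only: open the camera on a background thread from onResume() and
// hand the result back to the UI thread. onCameraOpened() is a placeholder.
import android.app.Activity;
import android.hardware.Camera;

public class CameraActivity extends Activity {
    private Camera mCamera;

    @Override
    protected void onResume() {
        super.onResume();
        new Thread(new Runnable() {
            @Override
            public void run() {
                final Camera camera;
                try {
                    camera = Camera.open();   // the slow call, now off the UI thread
                } catch (RuntimeException e) {
                    return;                   // camera busy or unavailable
                }
                runOnUiThread(new Runnable() {
                    @Override
                    public void run() {
                        if (isFinishing()) {
                            camera.release(); // activity went away while we waited
                        } else {
                            mCamera = camera;
                            onCameraOpened(); // e.g. attach the preview to the SurfaceView
                        }
                    }
                });
            }
        }).start();
    }

    private void onCameraOpened() { /* start the preview here */ }
}
```

The important part is that the Activity has to tolerate the window of time where mCamera is still null, which is exactly the extra state handling mentioned above.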
Related
I am working on an Android app that performs OpenCL/OpenGL interop on the camera preview. I am using GLSurfaceView.Renderer. Naturally the code to create and initialize the OpenCL running environment (from OpenGL) is called from onSurfaceCreated, and the actual processing of each preview frame happens in onDrawFrame.
All works well, except that when I am finished, I want to clean up the OpenCL state. Ideally an onSurfaceDestroyed method would be the perfect place for this, but there is no such method in GLSurfaceView.Renderer. So the cleanup code has nowhere to go, and there is probably a memory leak in my app.
Here are my questions:
Why is there no onSurfaceDestroyed method in GLSurfaceView.Renderer? There are onSurfaceCreated and onSurfaceChanged. One would expect onSurfaceDestroyed to be there.
Given the fact that no onSurfaceDestroyed exists in GLSurfaceView.Renderer, where should my cleanup code go, and why?
GLSurfaceView is a collection of helper code that simplifies the use of OpenGL ES with a SurfaceView. You're not required to use it to use GLES, and if you've got a bunch of other stuff going on at the same time I recommend that you don't.
If you compare the complexity of Grafika's "show + capture camera", which uses GLSurfaceView, to "continuous capture", which uses plain SurfaceView, you can see that the latter requires a bunch of extra code to manage EGL and the renderer thread, but it also has fewer hoops to jump through because it doesn't have to fight with GLSurfaceView's EGL and thread management. (Just read the comments at the top of the CameraCaptureActivity class.)
As one of the commenters noted, I suspect there's no "on destroyed" callback because the class aggressively destroys its EGL context, so there's no GLES cleanup needed. It would certainly have been useful for the renderer thread to have an opportunity to clean up non-GLES resources, but it doesn't, so you have to handle that through the Activity lifecycle callbacks. (At one point in development, CameraCaptureActivity handled the camera on the renderer thread, but the lack of a reliable shutdown callback made that difficult.)
Your cleanup code should probably be based on the Activity lifecycle callbacks. Note these are somewhat dissociated from the SurfaceView callbacks. A full explanation can be found in an architecture doc appendix.
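As a rough sketch of that idea (assuming your OpenCL teardown lives in a method called releaseClResources(), which is just a placeholder name, as is mGLSurfaceView), you can post the cleanup onto the renderer thread from the Activity's onPause():

```java
// Sketch only: run non-GLES cleanup (e.g. OpenCL state) on the renderer thread
// from the Activity lifecycle. mGLSurfaceView and releaseClResources() are placeholders.
@Override
protected void onPause() {
    mGLSurfaceView.queueEvent(new Runnable() {
        @Override
        public void run() {
            releaseClResources(); // runs on the renderer thread
        }
    });
    mGLSurfaceView.onPause();
    super.onPause();
}
```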
This question is for Android developers who are familiar with the Activity Lifecycle.
I'm developing an app that performs face detection and facial landmark recognition.
The corresponding machine learning models take a long time to be parsed from SD storage and loaded into memory; on current average Android devices, it easily takes up to 20 seconds. All of this face analysis and model loading happens in C++ native code, which is integrated using the Android NDK + JNI.
Because the model loading takes a long time, the actual parsing and loading is scheduled early in the background via AsyncTasks, so that the user does not notice a huge delay.
Before the actual face analysis is performed, the user can take a selfie via the MediaStore.ACTION_IMAGE_CAPTURE. This will call a separate camera app installed on the device and receive the picture via onActivityResult.
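For reference, the flow is roughly the standard capture-intent pattern (REQUEST_IMAGE_CAPTURE and photoUri below are placeholder names, not my actual code):

```java
// Rough sketch of the flow described above (inside my Activity);
// REQUEST_IMAGE_CAPTURE and photoUri are placeholder names.
private static final int REQUEST_IMAGE_CAPTURE = 1;

private void takeSelfie(Uri photoUri) {
    Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    intent.putExtra(MediaStore.EXTRA_OUTPUT, photoUri); // ask the camera app to write here
    startActivityForResult(intent, REQUEST_IMAGE_CAPTURE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        // By this point the process may have been killed and recreated,
        // which is exactly when the models have to be reloaded.
    }
}
```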
Now the problem starts: Almost always the whole app process will be killed while the user is in the separate camera Activity/App. Mostly it seems to happen right before returning from the camera app (the timing seems odd). I did another test to confirm that it happens when the capture button is pressed inside the camera app; at that moment, my app is killed. When pressing the 'Accept image' button, the app is recreated. The reason given in logcat by the ActivityManager for the process kill is 'prev LAST' (I've found nothing via Google on the meaning of this, but I saw that many other apps are also killed with this reason, so it seems to happen quite often).
Thus, all of the Activities of my app need to be recreated by Android (fine by me, because it happens fast), but also the face analysis models must be loaded again from scratch, and the user will notice a huge delay before his selfie can be processed.
My question is: Is there any way to tell Android that an Activity/App has a legitimate reason not to be killed while it is temporarily in the background to get a camera picture? After all, the ActivityManager makes a wrong decision to kill the app. Having to reload the models so frequently takes up a lot of CPU and memory resources.
It seems like an oversight in the Android lifecycle architecture. I know that few apps have the specific requirements of my app, but still, it seems stupid. The only way I can think of to 'fix' this issue is to implement my own camera Activity inside the app, but this goes counter to Android's own best practices.
There is also an 'android:persistent' flag that you can attach to your Activity via AndroidManifest.xml, but the docs are totally unclear about the implications of setting it.
By the way: onDestroy is not called when the app process is killed. I've read somewhere that there is no guarantee that onDestroy will be called, and this is actually not a problem for me. Although I wonder why the Android docs do not state this clearly.
Almost always the whole app process will be killed while the user is in the separate camera Activity/App
This is not surprising. A camera app can consume quite a bit of memory, so Android needs to free up memory by terminating background apps' processes.
After all, the ActivityManager makes a wrong decision to kill the app
Given that a likely alternative is the OS crashing, I suspect the user would agree with the OS decision to terminate your process.
Having to reload the models so frequently takes up a lot of CPU and memory resources.
Then perhaps you should not be starting another app from yours. Take the photo yourself. Use the camera APIs directly, or use libraries like Fotoapparat and CameraKit-Android as simpler wrappers around those APIs.
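For example, with the classic android.hardware.Camera API, the capture itself is just a callback. A sketch, with preview setup and error handling omitted:

```java
// Sketch only: take the photo in-process instead of delegating to another app.
// Assumes the preview is already running on an open Camera instance.
private void capture(Camera camera) {
    camera.takePicture(null, null, new Camera.PictureCallback() {
        @Override
        public void onPictureTaken(byte[] jpegData, Camera cam) {
            // jpegData is the encoded JPEG; hand it to your native analysis code
            cam.startPreview(); // the preview stops after takePicture()
        }
    });
}
```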
The only way I can think of to 'fix' this issue is to implement my own camera Activity inside the app, but this goes counter to Android's own best practices
By that argument, no devices would ever have a camera app, as writing any camera app "goes counter to Android's own best practices".
Any app that needs a camera must use the camera APIs (directly or indirectly) to have any shot at reliable behavior. You are assuming that thousands of camera apps are all properly written and will correctly honor your ACTION_IMAGE_CAPTURE Intent (e.g., putting the results in the place that you designate with EXTRA_OUTPUT). Many camera apps have buggy ACTION_IMAGE_CAPTURE implementations. ACTION_IMAGE_CAPTURE is not unreasonable for cases where you and the user can live without the picture being taken (e.g., a note-taker app that has an "attach photo" feature), but that would not seem to be the case with your app.
I have imported into Eclipse Juno the OpenCv4Android sample, ver. 2.4.5, called "cameracontrol". It can be found here: Camera Control OpenCv4Android Sample.
Now I want to use this project as a base for mine. I want to process every frame with image-processing techniques, so, in order to improve performance, I want to split the project's main activity into two classes: one that is merely the activity, and one (a thread) that is responsible for the preview.
How can I do this? Are there any examples of this?
This might not be a complete answer as I'm only learning about this myself at the moment, but I'll provide as much info as I can.
You will likely have to grab the images from the camera yourself and dispatch them to threads. This is because your activity in the example gets called with a frame from the camera and has to return the frame to be displayed (immediately) as the return value. You can't get 2+ frames to process in parallel without showing a blank screen in the meantime or some other hacky stuff. You'll probably want to allocate a (fixed-size) buffer somewhere, then start processing a frame with a worker thread when you get one (this would be the dispatcher). Once your worker thread is done, it notifies the dispatcher, which gives the image to the view. If frames come from the camera while all worker threads are busy (i.e. there are no free slots in the buffer), the frame is dropped. Once a space in the buffer frees up again, the next frame is accepted and processed.
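A very rough sketch of that dispatcher idea, assuming a small fixed thread pool (java.util.concurrent) whose queue simply drops frames when all workers are busy; process() and postToView() are placeholders for your own code, and Mat is OpenCV's image type:

```java
// Sketch only: a bounded worker pool that drops frames instead of queueing them up.
private final ThreadPoolExecutor mWorkers = new ThreadPoolExecutor(
        2, 2, 0L, TimeUnit.MILLISECONDS,
        new ArrayBlockingQueue<Runnable>(1),        // at most one frame waiting
        new ThreadPoolExecutor.DiscardPolicy());    // queue full: drop the frame

// Called for every frame delivered by the camera.
void onFrame(final Mat frame) {
    mWorkers.execute(new Runnable() {
        @Override
        public void run() {
            Mat processed = process(frame);  // heavy image processing off the camera thread
            postToView(processed);           // hand the result back to be displayed
        }
    });
}
```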
You can look at the code of the initializeCamera function of JavaCameraView and NativeCameraView to get an idea of how to do this (Google should help as well, since this is how apps without OpenCV have to do it too). For me the native camera performs significantly better though (even without heavy processing it's just much smoother), but YMMV...
I can't help with actual details about the implementation since I'm not that far into it myself yet. I hope this provides some ideas though.
I have an app whose main activity shows a GLSurfaceView. Every time a new activity is launched, say for example a Settings activity, the OpenGL surface is destroyed, and a new one is created when the user returns to the main activity.
This is VERY slow, as I need to regenerate textures each time so that they can be bound to the new Surface. (Caching the textures is possible, but it would be my second choice, because of limited availability of memory.)
Is there a way to prevent the Surface being recreated every time?
My own analysis by looking at the code is:
There are two triggers for the Surface to be destroyed:
GLSurfaceView.onPause() is called by the activity
The view gets detached from the window
Is there a way to prevent #2 from happening when launching a new activity?
If you're targeting 3.0 or later, take a look at GLSurfaceView.setPreserveEGLContextOnPause(). Keep in mind that devices might still support only one OpenGL context at a time, so if you're switching to another activity that uses OpenGL and back to yours, you will have to reupload anyway – so I'd recommend keeping a cache and dropping it when your Activity's onLowMemory() is called.
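A quick sketch of that setup (mRenderer and mTextureCache are placeholders for your own renderer and cache):

```java
// Sketch only: keep the EGL context across onPause() where the device allows it,
// and drop the texture cache when memory gets tight.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mGLSurfaceView = new GLSurfaceView(this);
    mGLSurfaceView.setEGLContextClientVersion(2);
    mGLSurfaceView.setPreserveEGLContextOnPause(true); // API 11+
    mGLSurfaceView.setRenderer(mRenderer);
    setContentView(mGLSurfaceView);
}

@Override
public void onLowMemory() {
    super.onLowMemory();
    mTextureCache.clear(); // drop the cache; re-decode and re-upload lazily later
}
```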
Caching the textures is possible, but it would be my second choice, because of limited availability of memory
If there are a lot of textures, the only solution AFAIK is to cache them; there's no way to keep the surface and textures alive across an activity restart. By the way, you can use a lot more memory in native code than in Java.
There has been a long discussion about this on android-ndk. Especially, I posted some benchmarks, which show that bitmap decoding can take up to 85% of texture loading time, texImage2D taking only the remaining 15%.
So caching decoded image data is likely to be a great performance booster, and by coupling this with memory mapped files (mmap), you may even cache a real lot of image data while staying memory-friendly.
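As a hedged sketch of that idea, assuming you cache the decoded pixels as raw RGBA in a file of known size (the cache format and the TextureCache class name are made up for illustration):

```java
// Sketch only: memory-map cached raw RGBA pixels and upload them directly,
// skipping the expensive bitmap decode on every texture reload.
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import android.opengl.GLES20;

class TextureCache {
    static MappedByteBuffer map(String path, long sizeBytes) throws IOException {
        RandomAccessFile file = new RandomAccessFile(path, "r");
        try {
            return file.getChannel().map(FileChannel.MapMode.READ_ONLY, 0, sizeBytes);
        } finally {
            file.close(); // the mapping remains valid after the channel is closed
        }
    }

    // Call on the GL thread, with the target texture already bound.
    static void upload(MappedByteBuffer pixels, int width, int height) {
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    }
}
```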
Short answer: No
Long Answer:
You could cache/save all the textures when you get onPause(), and restore them in onResume(). But otherwise the Android Activity lifecycle requires the view to be restored from scratch in onResume().
From the Activity lifecycle documentation:
If an activity is paused or stopped, the system can drop the activity from memory by either asking it to finish, or simply killing its process. When it is displayed again to the user, it must be completely restarted and restored to its previous state.
One way to get around it, though, is that if you are within your own application, instead of starting a new activity, you can create an overlay/dialog over the current activity and put the settings view in there.
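A small sketch of that approach, using a DialogFragment so the GLSurfaceView underneath is never detached (SettingsDialogFragment and R.layout.settings_view are hypothetical names):

```java
// Sketch only: show settings as a dialog over the GL activity instead of
// launching a separate Activity, so the surface stays attached.
import android.app.DialogFragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

public class SettingsDialogFragment extends DialogFragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        return inflater.inflate(R.layout.settings_view, container, false);
    }
}
```

From the main activity you would then call new SettingsDialogFragment().show(getFragmentManager(), "settings") instead of starting a Settings activity.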
It is necessary for my application to keep the camera captured until released manually by an Activity (I realize this is bad practice, as no other applications will be able to use the camera). I used to be able to do this by avoiding the camera.release() call in the surfaceDestroyed function from CameraPreview, but this no longer works after 2.1.
Is there any way to keep the camera in captivity, without it being automatically released after surfaceDestroyed?
As a workaround question to answer instead of the previous one, is there any way to take a picture from within a service, without the preview view inflated?
You may be able to call camera.lock() to re-obtain the lock on the camera hardware. Failing that you can reopen the camera.
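A minimal sketch of that lock-or-reopen fallback (assuming mCamera is the instance you kept across surfaceDestroyed):

```java
// Sketch only: try to re-acquire the camera lock, and fall back to reopening it.
private void reclaimCamera() {
    try {
        mCamera.lock();        // re-obtain the hardware lock, if still possible
    } catch (RuntimeException e) {
        mCamera.release();     // lock failed: start over with a fresh instance
        mCamera = Camera.open();
    }
}
```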
However, the drawback to this is much worse than preventing other apps from accessing the camera. This will also rapidly drain the battery, because it keeps the camera sensor and DSP powered. According to this thread, that could potentially kill a battery dead in two hours.