My app uses the camera stream to perform some live image classification on the incoming images. For debugging purposes, I'd like to use some prerecorded videos instead.
Does anyone know how to use a video in a similar way to the camera? The video should play continuously while the classifier runs in parallel and always takes the most recent frame (dropping intermediate frames for as long as the classification is running).
I just need some high-level information on how to do it and can figure out the coding myself.
It seems to me that I would need to run the classifier on a background thread that never dies, but rather waits for an input image -> processes the image -> posts the results back to the UI thread for display -> takes the most recent image from the video stream and classifies it.
In the camera-based version of the app, the infinite classification loop described above is triggered by the camera's OnImageAvailableListener and its onImageAvailable method.
The question is: what is the best way to imitate this behaviour? Using threads, Handlers, Loopers? Or is there some way to attach a listener to the classifier that notifies the UI thread when image processing has completed?
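A minimal sketch of that loop, assuming hypothetical classify() and showResult() methods: a HandlerThread does the work, and an AtomicReference holds only the newest frame, so intermediate frames are overwritten rather than queued.

    import android.graphics.Bitmap;
    import android.os.Handler;
    import android.os.HandlerThread;
    import android.os.Looper;
    import java.util.concurrent.atomic.AtomicReference;

    public class LatestFrameClassifier {
        // Holds only the most recent frame; newer frames overwrite older ones.
        private final AtomicReference<Bitmap> latestFrame = new AtomicReference<>();
        private final Handler uiHandler = new Handler(Looper.getMainLooper());
        private final Handler workerHandler;

        public LatestFrameClassifier() {
            HandlerThread thread = new HandlerThread("classifier");
            thread.start();
            workerHandler = new Handler(thread.getLooper());
        }

        // Call this from the video decoder for every decoded frame.
        public void onFrame(Bitmap frame) {
            // If nothing was pending, kick the loop; otherwise the pending
            // frame is silently replaced (i.e., dropped).
            if (latestFrame.getAndSet(frame) == null) {
                workerHandler.post(this::classifyOnce);
            }
        }

        private void classifyOnce() {
            Bitmap frame = latestFrame.getAndSet(null);
            if (frame == null) return;                // already consumed
            String result = classify(frame);          // hypothetical classifier call
            uiHandler.post(() -> showResult(result)); // hypothetical UI update
            workerHandler.post(this::classifyOnce);   // pick up any newer frame
        }

        private String classify(Bitmap frame) { return "..."; }
        private void showResult(String result) { }
    }

The same class works unchanged whether onFrame() is fed from the camera's onImageAvailable or from a video decoder.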
I have a working bit of Android native code which processes video frames on the fly via the CameraX API. The basic workflow is this:
User points phone at things
The code takes a single frame, processes it, then returns a result. While this happens, all other frames are dropped (but the user doesn't notice - they just see smooth video)
When available, the result is displayed for the user as on screen text
Rinse and repeat
My app will be in React Native, but I need to keep this bit of code native for performance reasons.
So I'm wondering if I can use a Native Module as a sort of black box that takes in a camera stream and returns results. If so, can I keep the flow as above? I particularly want to avoid the user having to do anything other than point the camera.
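For what it's worth, a rough sketch of such a module, assuming CameraX's ImageAnalysis use case (its STRATEGY_KEEP_ONLY_LATEST backpressure strategy gives exactly the drop-frames-while-busy behaviour) and a hypothetical processFrame(); the camera binding code is omitted:

    import androidx.camera.core.ImageAnalysis;
    import androidx.camera.core.ImageProxy;
    import com.facebook.react.bridge.ReactApplicationContext;
    import com.facebook.react.bridge.ReactContextBaseJavaModule;
    import com.facebook.react.modules.core.DeviceEventManagerModule;
    import java.util.concurrent.Executors;

    // The analysis loop stays entirely in native code; only results cross the bridge.
    public class FrameAnalyzerModule extends ReactContextBaseJavaModule {

        public FrameAnalyzerModule(ReactApplicationContext context) {
            super(context);
        }

        @Override
        public String getName() { return "FrameAnalyzer"; }

        // Build the use case to bind alongside the Preview use case (binding omitted).
        ImageAnalysis buildAnalysis() {
            ImageAnalysis analysis = new ImageAnalysis.Builder()
                    // KEEP_ONLY_LATEST drops frames while the analyzer is busy,
                    // matching the "smooth video, occasional result" flow.
                    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                    .build();
            analysis.setAnalyzer(Executors.newSingleThreadExecutor(), this::analyze);
            return analysis;
        }

        private void analyze(ImageProxy image) {
            String result = processFrame(image);   // hypothetical native processing
            image.close();                         // must close to receive more frames
            getReactApplicationContext()
                    .getJSModule(DeviceEventManagerModule.RCTDeviceEventEmitter.class)
                    .emit("onFrameResult", result); // JS side subscribes to this event
        }

        private String processFrame(ImageProxy image) { return "..."; }
    }

On the JS side you would subscribe to the "onFrameResult" event through a NativeEventEmitter, so the user never does anything beyond pointing the camera.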
I am writing some code that adds a watermark to an already existing video using OpenGL.
I took most of the code from ContinuousCaptureActivity in Grafika - https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/ContinuousCaptureActivity.java
Instead of using the camera to render onto a SurfaceTexture, I use the MoviePlayer class, also present in Grafika. And instead of rendering random boxes, I render the watermark.
I run MoviePlayer at its full speed, i.e., reading from the source and rendering onto the surface as soon as a frame is decoded. This does the job for a 30 s video in 2 s or less.
Now the issue comes with the onFrameAvailable callback. It is called only once for every 4 or 5 frames rendered by the MoviePlayer class, which makes me lose frames in the output video. If I make the MoviePlayer thread sleep until the corresponding onFrameAvailable is called, everything is fine and no frames are missed; however, processing my 30 s video then takes around 5 s.
My question is how do I make SurfaceTexture faster? Or is there some completely different approach that I have to look into?
Note that I do not need to render anything on the screen.
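For reference, the decoder/consumer handshake described above looks roughly like this (class and method names are made up; updateTexImage() must run on the GL thread):

    import android.graphics.SurfaceTexture;

    // Rough sketch of the lock-step handshake between the decoder thread
    // (rendering into the SurfaceTexture's Surface) and the GL consumer.
    public class FrameSync implements SurfaceTexture.OnFrameAvailableListener {
        private final Object lock = new Object();
        private boolean frameAvailable = false;

        @Override
        public void onFrameAvailable(SurfaceTexture st) {
            synchronized (lock) {
                frameAvailable = true;
                lock.notifyAll(); // wake the thread blocked in awaitFrame()
            }
        }

        // Block until the decoder's frame has landed, then latch it into GL.
        // SurfaceTexture keeps only the latest buffer, so skipping this wait
        // is what lets frames coalesce and get lost.
        public void awaitFrame(SurfaceTexture st) throws InterruptedException {
            synchronized (lock) {
                while (!frameAvailable) {
                    lock.wait();
                }
                frameAvailable = false;
            }
            st.updateTexImage();
        }
    }

Since a SurfaceTexture holds only the latest buffer, the one-frame-at-a-time handshake is unavoidable if every frame must survive; the main remaining speed lever is pipelining, e.g., keeping the decode and the encode on separate threads.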
I'm trying to create an Android app using the Camera2 API. As part of the functionality I want to develop a module that saves multiple images produced by ImageReader, as follows:
Image image = reader.acquireLatestImage();
I'm getting the following exception:
IllegalStateException too many images are currently acquired
As mentioned in the documentation:
https://developer.android.com/reference/android/media/ImageReader#acquireLatestImage()
This is because the image returned from acquireLatestImage still belongs to the ImageReader queue.
Is there any way to detach images returned from the ImageReader?
Is there a way to copy an image, preferably without storing it on disk, which is a resource-consuming operation?
Thanks
If this is a YUV_420_888 image, you can copy each of the plane ByteBuffers to keep the data around indefinitely. That is somewhat expensive, of course.
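A minimal sketch of that copy, assuming a YUV_420_888 Image (the strides must be kept alongside the bytes so the data can be reinterpreted later):

    import android.media.Image;
    import java.nio.ByteBuffer;

    // Deep-copies the three plane buffers so the Image itself can be closed
    // and its buffer returned to the ImageReader's queue.
    public class YuvCopy {
        public final byte[][] planes = new byte[3][];
        public final int[] rowStrides = new int[3];
        public final int[] pixelStrides = new int[3];

        public YuvCopy(Image image) {
            for (int i = 0; i < 3; i++) {
                Image.Plane plane = image.getPlanes()[i];
                ByteBuffer buffer = plane.getBuffer();
                planes[i] = new byte[buffer.remaining()];
                buffer.get(planes[i]);                  // copy the pixel data out
                rowStrides[i] = plane.getRowStride();   // needed to walk the rows
                pixelStrides[i] = plane.getPixelStride(); // needed for chroma layout
            }
        }
    }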
Unfortunately, there's no way to easily detach the Image from the ImageReader. The Reader is a circular buffer queue internally, so removing an Image would require the Reader to allocate a new image to replace the one removed, which is somewhat expensive as well.
It can be done by using ImageWriter.queueInputImage, connected to another ImageReader with the same configuration as the original ImageReader. When you receive image A from your first ImageReader and you want to keep it, you can queue it into the ImageWriter and then get the Image handle again from the second ImageReader. You still need to set maxImages on the second reader high enough to account for all the Images you want to keep around at once, of course.
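A sketch of that round-trip, with made-up names (the size and format passed to the second reader must match the original one):

    import android.graphics.ImageFormat;
    import android.media.Image;
    import android.media.ImageReader;
    import android.media.ImageWriter;

    public class ImageDetacher {
        private final ImageReader keepAlive;
        private final ImageWriter writer;

        public ImageDetacher(int width, int height, int maxKept) {
            // A second queue that will own the "kept" images; size and format
            // must match the original ImageReader.
            keepAlive = ImageReader.newInstance(width, height,
                    ImageFormat.YUV_420_888, maxKept);
            writer = ImageWriter.newInstance(keepAlive.getSurface(), 2);
        }

        // Moves an Image out of the source reader's queue into keepAlive's.
        public Image detach(Image image) {
            writer.queueInputImage(image);       // hands the buffer over; closes image
            // Simplification: in production you'd take this from keepAlive's
            // OnImageAvailableListener, since the transfer is asynchronous.
            return keepAlive.acquireNextImage(); // same frame, new owner
        }
    }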
That's fairly cumbersome, of course; if you're doing this continually you'll cause a lot of memory reallocation, and simply copying out the ByteBuffers may end up just as efficient (and simpler in many ways).
I want to run image processing on Android front-camera frames inside a service.
I use OpenCV, and therefore CameraBridgeViewBase, which asks for a camera view.
I don't want to record the video; I need to process each frame in real time.
Any solution?
According to fadden's available examples (have a look here), you can do it using a SurfaceTexture instead of a SurfaceView to get the frames. It was also answered here before, but unfortunately never validated.
I could get frames from a service this way.
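A sketch of that setup, using the old android.hardware.Camera API that CameraBridgeViewBase is built on; the OpenCV hand-off is left as a comment:

    import android.app.Service;
    import android.content.Intent;
    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import android.os.IBinder;

    public class FrameService extends Service implements Camera.PreviewCallback {
        private Camera camera;
        private SurfaceTexture texture;

        @Override
        public int onStartCommand(Intent intent, int flags, int startId) {
            camera = Camera.open(findFrontCameraId());
            texture = new SurfaceTexture(10); // arbitrary texture name; never drawn
            try {
                camera.setPreviewTexture(texture); // preview goes nowhere visible
            } catch (java.io.IOException e) {
                throw new RuntimeException(e);
            }
            camera.setPreviewCallback(this);
            camera.startPreview();
            return START_STICKY;
        }

        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            // data is NV21 by default; hand it to OpenCV here (e.g. wrap in a Mat).
        }

        private static int findFrontCameraId() {
            Camera.CameraInfo info = new Camera.CameraInfo();
            for (int i = 0; i < Camera.getNumberOfCameras(); i++) {
                Camera.getCameraInfo(i, info);
                if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) return i;
            }
            return 0; // fall back to the first camera
        }

        @Override
        public void onDestroy() {
            camera.stopPreview();
            camera.release(); // always release, or the camera stays locked
        }

        @Override
        public IBinder onBind(Intent intent) { return null; }
    }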
I was wondering whether it is possible to have two instances of the camera preview in Android, that is, running two instances of the camera at the same time. If it is, how would one go about it? Would each instance need to run on a different thread? I have not used the camera API before, so I would appreciate a heads-up on the issue so I don't waste time on it.
Thank you.
It is not possible to have two open connections to the camera - you have to lock the camera in order to get a preview, and it can only be locked once. Indeed, if you have the camera locked and your app crashes before you've unlocked it, then nobody can use the camera!
See http://developer.android.com/reference/android/hardware/Camera.html#open%28int%29
You must call release() when you are done using the camera, otherwise it will remain locked and be unavailable to other applications.
...
RuntimeException: if connection to the camera service fails (for example, if the camera is in use by another process).
That said, you can certainly register a preview callback and take the preview data from your single camera instance to use in multiple views. But be aware of the issues with the YUV format of the raw byte[] data provided by the preview callback: Getting frames from Video Image in Android (note that the preview data comes raw from the camera driver and may vary from device to device)
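As a sketch of that idea (view names are hypothetical; NV21 is assumed, since it is the default preview format):

    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import android.hardware.Camera;
    import java.io.ByteArrayOutputStream;

    // Register with camera.setPreviewCallback(new MultiViewPreview()).
    public class MultiViewPreview implements Camera.PreviewCallback {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();
            // Convert the NV21 bytes to a Bitmap via an in-memory JPEG.
            YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
                    size.width, size.height, null);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            yuv.compressToJpeg(new Rect(0, 0, size.width, size.height), 80, out);
            byte[] jpeg = out.toByteArray();
            Bitmap frame = BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
            // Hand the same Bitmap to as many views as you like (hypothetical):
            // viewA.setImageBitmap(frame); viewB.setImageBitmap(frame);
        }
    }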
Ignoring the big Why question, your best bet would be to make a service that interacts with the camera, and go from there.