I'm trying to make an application with a camera delay. It should work like this:
the user sees a live camera preview
the user chooses a delay, for example 5 seconds
after that delay has elapsed, the user sees what the camera saw 5 seconds ago.
I thought about capturing frames from the preview and showing them after the delay, but I'm not sure how to grab them and turn them into a "movie". I was considering CameraX, but I'm not sure it's the best option. I'd really appreciate any help.
You can certainly use CameraX for your project. I won't go into much detail, but you can follow a path like this:
Do not bind a Preview use case and do not use PreviewView.
Instead, use an ImageView (or equivalent) that you can set/draw an image into.
Bind an ImageAnalysis use case.
Take the frames from ImageAnalysis and draw them onto the view mentioned above. (An ImageView might be too expensive; I'm not sure.)
In that step you can add a delay so that drawing lags the capture by as much as you wish.
This is probably not the most efficient or elegant way to solve this, but it would work.
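A minimal sketch of the buffering behind the last two steps, assuming a hypothetical DelayedFrameBuffer helper (the name and API are mine, not CameraX's): frames go in with a timestamp and come out only once they are at least delayMs old.

```java
import java.util.ArrayDeque;

// Hypothetical buffer: frames go in with a timestamp and come out only once
// they are at least delayMs old. In a real Analyzer you would push the bitmap
// converted from each ImageProxy and draw whatever poll() returns onto the view.
class DelayedFrameBuffer<F> {
    private static final class Entry<F> {
        final F frame;
        final long timestampMs;
        Entry(F frame, long timestampMs) {
            this.frame = frame;
            this.timestampMs = timestampMs;
        }
    }

    private final ArrayDeque<Entry<F>> queue = new ArrayDeque<>();
    private final long delayMs;

    DelayedFrameBuffer(long delayMs) {
        this.delayMs = delayMs;
    }

    void push(F frame, long nowMs) {
        queue.addLast(new Entry<>(frame, nowMs));
    }

    // Returns the oldest frame once it has aged past the delay, else null.
    F poll(long nowMs) {
        Entry<F> head = queue.peekFirst();
        if (head != null && nowMs - head.timestampMs >= delayMs) {
            queue.pollFirst();
            return head.frame;
        }
        return null;
    }
}
```

Keep memory in mind: a 5-second delay at 30 fps means roughly 150 buffered frames, so you will likely want to downscale each frame before queuing it.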
You might consider this question a sequel to another one I recently asked. I have a custom device whose front camera is mirrored by default, and I would like it to always behave as if it were flipped horizontally. Note that the end goal of my app is to perform face, barcode and face-mask detection (the last one with a custom .tflite model), and I'm trying to understand whether that's possible with the current state of CameraX and the quirks of the device I'm using.
For the preview, I can force PreviewView to use a TextureView by calling PreviewView#implementationMode = PreviewView.ImplementationMode.COMPATIBLE so I can just call PreviewView#setScaleX(-1) and it gets displayed like I want.
For taking pictures, I haven't tried yet but I know I can pass setReversedHorizontal(true) to the ImageCapture.OutputFileOptions used in the ImageCapture.Builder, so the image should be saved in reverse.
For image analysis I really don't know. If the input image is taken directly from the preview stream, then it should still be in its default (mirrored) state, because PreviewView#setScaleX(-1) only flips the View on screen, right? So in my Analyzer's analyze() I would need to convert the ImageProxy#image to a bitmap, flip it, and pass the result to InputImage.fromBitmap().
Is this everything I need? Is there a better way to do this?
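The flip step described above boils down to mirroring each row of pixels; a framework-free sketch (FlipUtil is a hypothetical helper of mine, operating on the ARGB int[] you would extract from the bitmap; on Android, applying Matrix.preScale(-1f, 1f) to the Bitmap achieves the same thing):

```java
// Hypothetical helper: mirror each row of an ARGB pixel array, i.e. the
// horizontal flip the Analyzer would apply before InputImage.fromBitmap().
class FlipUtil {
    static int[] flipHorizontally(int[] pixels, int width, int height) {
        int[] out = new int[pixels.length];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // The pixel at column x moves to column (width - 1 - x).
                out[y * width + (width - 1 - x)] = pixels[y * width + x];
            }
        }
        return out;
    }
}
```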
I am trying to add a real-time overlay to video captured from the camera feed. Which API should I use, and how?
The idea is as follows:
Get the camera feed (left)
Generate an overlay from the feed; I'm using deep models here (middle)
Add the overlay on top of the original video feed in real time (right)
OpenCV (https://opencv.org) will let you take your video feed and, frame by frame:
load the frame
analyse it and generate your overlay
add your overlay to the frame (or replace the frame with your merged overlay)
display and/or save the frame
Depending on how much processing you need to do, the platform or device you are running on, and whether you need it in real time, you may find it hard to complete all of this for every frame at high frame rates. One solution, if it is acceptable for your problem domain, is to only do the heavy processing every nth frame.
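The every-nth-frame idea can be sketched as a small counter (FrameThrottle is a hypothetical helper of mine): run the expensive overlay generation only when it fires, and reuse the last overlay for the frames in between.

```java
// Hypothetical throttle: returns true on every nth call, so the expensive
// per-frame work (model inference, overlay generation) runs only on a
// subset of frames while cheap work (drawing the cached overlay) runs on all.
class FrameThrottle {
    private final int n;
    private long frameCount = 0;

    FrameThrottle(int n) {
        this.n = n;
    }

    // True when this frame should get fresh (expensive) processing.
    boolean shouldProcess() {
        return frameCount++ % n == 0;
    }
}
```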
I have used something similar with the GraphicOverlay for text.
Also, ViewOverlay may be something to look into.
I've been fiddling around with a RenderScript blur on an image for my login screen. I've gotten everything to work with a Runnable and a delay to blur the image immediately; however, I'm trying to make it a slow blur while the username and password fields come into view.
I've had a bit of a look around (I could be looking in the wrong places) but I haven't found anything related to what I'm after. I've attempted to use a while loop that increments a blur-radius value, with a delay after each pass, and sends it to the blurBitmap class method, but it either still blurs the image immediately (meaning I probably messed something up somewhere, and I'll most likely keep trying this method until a better solution is found) or it crashes.
Does anyone know if this is possible with RenderScript in the first place? And if so, what should I be searching for?
Thanks for any help you can provide.
Resources: https://futurestud.io/blog/how-to-use-the-renderscript-support-library-with-gradle-based-android-projects
You can do this with RenderScript, though the way you are doing it now doesn't sound like a good idea. Look into using a custom Animator, from which you can run an RS blur against the image each frame. Using an Animator will be more flexible in the long run, and it ties in automatically with the view system rather than requiring you to handle View or Activity state explicitly.
The approach @JasonWihardja outlined will also work, but again I would suggest driving it from an Animator or a similar mechanism.
Blurring an image repeatedly might cause some performance issues with your app.
Here's what you could do:
Arrange 2 images (one clear, and the other a maximum-blur version) so that the blurred image is placed exactly on top of the clear image. The easiest way is to put 2 ImageViews in a FrameLayout.
To achieve the "slow" blur effect, initially set the blurred image's alpha to 0.
Then, using one of the views, issue postDelayed events to slowly increase the blurred image's alpha to 255.
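If you drive the fade by hand with postDelayed, the values you would set on each tick might look like this (alphaRamp is a hypothetical helper of mine; note that ImageView#setImageAlpha takes 0-255, while View#setAlpha takes 0f-1f):

```java
// Hypothetical fade schedule: the alpha values to apply to the blurred
// ImageView on each postDelayed tick, ramping linearly up to fully opaque.
class FadeSchedule {
    static int[] alphaRamp(int steps) {
        int[] alphas = new int[steps];
        for (int i = 0; i < steps; i++) {
            // Last step lands exactly on 255 (blurred image fully visible).
            alphas[i] = Math.round(255f * (i + 1) / steps);
        }
        return alphas;
    }
}
```

In practice, the simpler route on Android is a single property animation on the blurred view, e.g. an alpha animation over your chosen duration, which is essentially the Animator approach the other answer recommends.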
I've seen many threads on this but cannot figure out the best way of doing camera capture without showing a preview.
The way that I am doing it now is that I create a SurfaceTexture like this:
SurfaceTexture fakePreview = new SurfaceTexture(10);
The fakePreview object is not used for any other purpose; it is just set as the preview texture and released when I'm finished with the Activity.
The fakePreview SurfaceTexture is set and the preview then started like this:
mCamera.setPreviewTexture(fakePreview);
mCamera.startPreview();
This seems to work fine, but I'm using a rather low-quality sensor and would like to get the highest possible capture quality out of it. So I'm always looking for ways to improve upon this.
I've seen a few methods other than this for starting the camera without showing a preview, such as setting up a preview normally but setting its size to 0x0dp: https://stackoverflow.com/a/9745780/593340
As well as using a dummy surfaceholder.callback: https://stackoverflow.com/a/23621826/593340
My question is: are there any advantages or disadvantages to the method I described above?
If my method has issues, do the two other examples posted represent a better answer, or are they just as bad?
Thirdly, am I gaining any performance by not showing a preview, or am I just causing potential bugs by doing this?
Thanks for looking!
-Rob
I am creating an Android App that produces random images based on complex mathematical expressions. The color of a pixel depends on its location and the expression chosen. (I have seen many iPhone apps that produce "random art" but relatively few Android apps.)
It takes 5 to 15 seconds for the image to be drawn on a Nexus S dev phone.
To keep the UI thread responsive, this seems like exactly what the SurfaceView class is for, yet almost all the examples of SurfaceView deal with animation, not a single complex image. Once the image is done being drawn/rendered I won't change it until the user requests a new one.
So, is SurfaceView the right component to use? If so, can I get a callback from the SurfaceView or its internal thread when it is done rendering the image? The callback is so I know it's okay to swap the low-resolution, blocky version of the art for the high-resolution one.
Is there an alternative to SurfaceView that is better for complex rendering of single images. (Not animation.)
Cheers!
If all you want to do is render a single complex image on another thread to keep the UI responsive, and then actually draw it once rendering is done, you might consider just doing this the standard keep-work-off-the-UI-thread way, using something like an AsyncTask. It's not as if you're doing compositing or anything GPU-specific (unless, as others have suggested, you can offload the actual rendering calculations to the GPU).
I would at least experiment with building an array representing your pixels in an AsyncTask, then, when you're done, creating a bitmap from it with setPixels and setting it as the source of an ImageView.
If, on the other hand, you want your image to appear pixel by pixel, then SurfaceView might be a reasonable choice, in which case it's basically an animation, so you can follow other tutorials. There's some other setup, but the main thing is to override onDraw, and then you'll probably have to use Canvas.drawPoint to draw each pixel.
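The array-building step can be sketched without the Android framework (renderPixels and its color expression are hypothetical stand-ins for your own math): compute an ARGB int[] off the UI thread, then on Android the AsyncTask's onPostExecute would call Bitmap.setPixels with it and hand the bitmap to an ImageView.

```java
// Hypothetical worker body: compute an ARGB pixel array from an example
// expression of (x, y). On Android this would run in doInBackground(), and
// the resulting array would be fed to Bitmap.setPixels on completion.
class ArtRenderer {
    static int[] renderPixels(int width, int height) {
        int[] pixels = new int[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Example expression: each channel from a different function of (x, y).
                int r = (int) (127.5 * (1 + Math.sin(x * 0.05)));
                int g = (int) (127.5 * (1 + Math.cos(y * 0.05)));
                int b = (x * y) % 256;
                pixels[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return pixels;
    }
}
```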