I am new to Android. I got a task to display the FPS (dynamic) and the elapsed time since the camera was opened.
I have a basic Android sample project that displays the camera preview in a TextureView (backed by a SurfaceTexture),
but I did not find a way to get the frame count or FPS rate so that I can overlay this data on the TextureView.
I just wanted to know whether we can get this data from the Camera2 API. If yes, please share related links, a code snippet, or an idea of how to get it.
You can look at the output of CaptureResults for each frame. They contain a variety of metadata about captured camera frames, including the start of exposure time for each frame. You can use that to calculate frame rates.
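For example, here is a minimal sketch (the class name FpsCaptureCallback and the cameraOpenMs parameter are purely illustrative, not from any sample) that reads CaptureResult.SENSOR_TIMESTAMP in the capture callback and derives the FPS plus the elapsed time since the camera was opened:

    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CaptureRequest;
    import android.hardware.camera2.CaptureResult;
    import android.hardware.camera2.TotalCaptureResult;
    import android.os.SystemClock;
    import android.util.Log;

    /** Pass an instance of this as the CaptureCallback to setRepeatingRequest(). */
    public class FpsCaptureCallback extends CameraCaptureSession.CaptureCallback {
        private final long cameraOpenMs;    // e.g. SystemClock.elapsedRealtime() recorded in onOpened()
        private long prevTimestampNs = -1;  // start-of-exposure of the previous frame
        private long frameCount = 0;

        public FpsCaptureCallback(long cameraOpenMs) {
            this.cameraOpenMs = cameraOpenMs;
        }

        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                                       CaptureRequest request,
                                       TotalCaptureResult result) {
            // Start of exposure for this frame, in nanoseconds.
            Long timestampNs = result.get(CaptureResult.SENSOR_TIMESTAMP);
            if (timestampNs == null) {
                return;
            }
            frameCount++;
            if (prevTimestampNs > 0) {
                // Instantaneous FPS from the gap between consecutive start-of-exposure times;
                // average over a window of frames if you want a smoother number.
                double fps = 1e9 / (timestampNs - prevTimestampNs);
                long elapsedMs = SystemClock.elapsedRealtime() - cameraOpenMs;
                Log.d("FpsCallback", "frame " + frameCount + " fps=" + fps
                        + " elapsed=" + elapsedMs + " ms");
                // TODO: post fps and elapsedMs to an overlay View drawn on top of the TextureView.
            }
            prevTimestampNs = timestampNs;
        }
    }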
I am modifying the TF Lite sample app (Java) for object detection. It has a live video feed that shows boxes around common objects, and it takes in ImageReader frames at 640×480.
I want to use these bounds to crop the detected items, but I want to crop them from a high-quality image. I think the 5T is capable of 4K.
So, is it possible to run two instances of ImageReader: one for the low-quality video feed (used by TF Lite) and one for capturing full-quality still images? I also can't attach the second one to any Surface for user preview; the picture has to be captured in the background.
This Medium article (https://link.medium.com/2oaIYoY58db) says: "Due to hardware constraints, only a single configuration can be active in the camera sensor at any given time; this is called the active configuration."
I'm new to Android, so I couldn't make much sense of this.
Thanks for your time!
PS: As far as I know, this isn't possible with CameraX yet.
As the cited article explains, you can use a lower-resolution preview stream and periodically capture higher-resolution still images. Depending on the hardware, this 'switch' may take time or be really quick.
In your case, I would run a preview capture session at maximum resolution, and shrink (resize) the frames to feed into TFLite when necessary.
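As a rough sketch of that approach (FullResAnalyzer is just an illustrative name, and yuvToBitmap() is a placeholder for your own YUV_420_888-to-Bitmap conversion, not a library call): pick the largest YUV output size, run one ImageReader at that size, downscale each frame for TFLite, and crop the detections out of the full-resolution frame.

    import android.graphics.Bitmap;
    import android.graphics.ImageFormat;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.params.StreamConfigurationMap;
    import android.media.Image;
    import android.media.ImageReader;
    import android.util.Size;

    public class FullResAnalyzer {
        /** One full-resolution stream; its getSurface() is the capture session's target. */
        public static ImageReader createMaxSizeReader(CameraCharacteristics chars) {
            StreamConfigurationMap map =
                    chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            // Pick the largest YUV output size the device advertises.
            Size[] sizes = map.getOutputSizes(ImageFormat.YUV_420_888);
            Size max = sizes[0];
            for (Size s : sizes) {
                if ((long) s.getWidth() * s.getHeight()
                        > (long) max.getWidth() * max.getHeight()) {
                    max = s;
                }
            }
            return ImageReader.newInstance(max.getWidth(), max.getHeight(),
                    ImageFormat.YUV_420_888, /*maxImages=*/ 3);
        }

        /** Call from onImageAvailable() after acquiring the Image. */
        public void onFrame(Image image) {
            Bitmap fullRes = yuvToBitmap(image);           // full sensor resolution
            Bitmap forTfLite = Bitmap.createScaledBitmap(  // small copy for the detector
                    fullRes, 640, 480, /*filter=*/ true);
            // Run TFLite on forTfLite, scale each detection box back up to fullRes
            // coordinates, then crop with Bitmap.createBitmap(fullRes, left, top, w, h).
            image.close();
        }

        private Bitmap yuvToBitmap(Image image) {
            // Placeholder: convert YUV_420_888 to Bitmap (e.g. via RenderScript or libyuv).
            throw new UnsupportedOperationException("conversion helper omitted");
        }
    }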
I have a simple Android app with a camera preview.
I would like to set up the preview so that it shows what happened x seconds earlier.
I'm trying to implement a buffer, but it looks like there is no way to control what is shown in the preview.
I use Camera2 and a TextureView for the preview.
Do you have any ideas, or a library that could help me?
Thanks!
You need to cache ~30 preview buffers somehow.
One possible way is to use an ImageReader where you wait for 30 onImageAvailable callbacks to fire before you acquire the first Image. But this requires you to draw the Image yourself to the preview TextureView once you start acquiring them, which is difficult to do correctly before Android Q's ImageReader with usage flags constructor.
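A rough sketch of that delayed-acquisition idea, assuming a 30-frame delay (about one second at 30 fps); drawToTextureView() is a placeholder for the rendering step described above:

    import android.media.Image;
    import android.media.ImageReader;

    public class DelayedPreview implements ImageReader.OnImageAvailableListener {
        private static final int DELAY_FRAMES = 30;
        private int queuedFrames = 0;

        /** Use this reader's getSurface() as the only preview target of the capture session,
         *  and register this listener with reader.setOnImageAvailableListener(...). */
        public static ImageReader createReader(int width, int height, int format) {
            // maxImages must exceed the delay, or the camera will stall once every
            // buffer is sitting unacquired in the queue.
            return ImageReader.newInstance(width, height, format, DELAY_FRAMES + 2);
        }

        @Override
        public void onImageAvailable(ImageReader reader) {
            if (queuedFrames < DELAY_FRAMES) {
                // Don't acquire yet; let the image sit in the ImageReader's queue.
                queuedFrames++;
                return;
            }
            // From now on, each new frame pushes the oldest queued one out to the screen,
            // so what is displayed stays ~DELAY_FRAMES behind the sensor.
            Image oldest = reader.acquireNextImage();
            if (oldest != null) {
                drawToTextureView(oldest);
                oldest.close();
            }
        }

        private void drawToTextureView(Image image) {
            // Placeholder: getting the pixels onto the TextureView is the hard part
            // mentioned above (Android Q's usage-flag ImageReader, or an OpenGL path).
        }
    }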
You can also cache things in OpenGL: use a SurfaceTexture and a GLSurfaceView, and copy the SurfaceTexture frames to a list of 30 other textures in a circular buffer, then start rendering when the 30th one is drawn. But this requires quite a bit of scaffolding code to implement.
I am building an application that will use a live filter on the camera's preview and I need to record a video of this filtered preview.
I have already checked the following:
Grafika CameraCaptureActivity has a preview with a filter, but the recording does not include the filter. Unfortunately, GLSurfaceView does not support this. I could render the filter twice - once for the preview and once for the recording frames - but this is not very efficient.
Grafika RecordFBOActivity records OpenGL output, but not from the camera.
bigflake CameraToMpegTest records from the camera, but the preview is not displayed on the screen.
There are many possible solutions and component combinations for this, and I am quite confused, so how can this be implemented?
I am new to this area, so it would be great if you could point me to a working example.
Thank you
What I'm working on is an application that gets frames and then modifies them (for example, moving pixels within a frame), and after that shows the edited frames on the screen at a good speed. GLSurfaceView will handle that part, I think, and MediaCodec will help with getting the frames.
I'm looking at a good MediaCodec example, ExtractMpegFramesTest from http://bigflake.com/mediacodec/. Can I edit the received frames directly with GLES code and show them on the screen without getting a Bitmap?
This is my scenario: I am trying to take a picture with the front camera when someone enters an incorrect password on the lock screen. Basically, I need to be able to take a picture with the front camera without a preview.
After much googling, I figured out that the way to do it is with OpenGL and a SurfaceTexture: you direct the camera preview to a SurfaceTexture and later extract the picture from this texture somehow. I found this out from the following resources:
https://stackoverflow.com/a/10776349/902572 (suggestion 1)
http://www.freelancer.com/projects/Android-opengl/Android-OpenGL-App-Access-Raw.html, which is the same as (1)
https://groups.google.com/forum/#!topic/android-developers/U5RXFGpAHPE (See Romain's post on 12/22/11)
I understand what needs to be done, but I have been unable to put it into code correctly, as I am new to OpenGL.
The CameraToMpegTest example has most of what you need, though it goes well beyond your use case (it's recording a series of preview frames as a video).
The rest of what you need is in ExtractMpegFramesTest. In particular, you want to render to an offscreen pbuffer (rather than a MediaCodec encoder input surface), and you can save the pbuffer contents as a PNG file using saveFrame().
The above are written as small "headless" test cases. You can see similar code in a full app in Grafika.
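For the readback step, here is a simplified sketch of what ExtractMpegFramesTest's saveFrame() does (the real code also corrects for glReadPixels returning the image bottom-up; the class name here is just illustrative): read the pbuffer contents with glReadPixels and compress them to a PNG.

    import android.graphics.Bitmap;
    import android.opengl.GLES20;
    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class PbufferSaver {
        /** Call after drawing the SurfaceTexture frame into the current (pbuffer) surface. */
        public static void saveFrame(String filename, int width, int height) throws IOException {
            ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
            buf.order(ByteOrder.LITTLE_ENDIAN);
            // Read back from the currently bound framebuffer (the offscreen pbuffer).
            GLES20.glReadPixels(0, 0, width, height,
                    GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
            buf.rewind();

            Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            bmp.copyPixelsFromBuffer(buf);  // note: vertically flipped (GL origin is bottom-left)

            BufferedOutputStream bos = null;
            try {
                bos = new BufferedOutputStream(new FileOutputStream(filename));
                bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
            } finally {
                if (bos != null) bos.close();
                bmp.recycle();
            }
        }
    }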