I am currently working on an Augmented Reality application for Android devices and want to modify the camera preview with information I gained in previous steps.
Now there seem to be 2 ways to do this:
I could modify the data[] array to directly apply my overlay on the actual preview frame.
I could make a second view which holds the information to be displayed and display it over the camera preview.
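For the second approach, what I picture is roughly this (just a sketch; OverlayView is a hypothetical custom View with a transparent background that draws the AR information in onDraw()):

import android.app.Activity;
import android.os.Bundle;
import android.view.SurfaceView;
import android.widget.FrameLayout;

public class PreviewActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Stack a transparent overlay on top of the camera preview instead of
        // touching the preview frames themselves.
        FrameLayout root = new FrameLayout(this);
        SurfaceView preview = new SurfaceView(this);   // the camera renders into this
        OverlayView overlay = new OverlayView(this);   // hypothetical custom View drawing the AR info
        root.addView(preview);
        root.addView(overlay);                         // added last, so it is drawn on top
        setContentView(root);
    }
}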
Which of these approaches is the better one, and for what reason? I cannot see any advantages of one over the other yet.
I am fairly new to Android and especially to the various camera systems on this platform. I'm building an app where I need to integrate ARCore only to track the camera pose (among other things like objects in the scene, planes, etc.). I don't want to augment anything in the "real world", so I am not looking to preview the frames coming from the camera. I've looked through all of the examples in the arcore-sdk and the sample code in Google's documentation. None of them cover my use case, where I want to fetch the camera's pose without previewing the camera images onto a SurfaceView or something similar. I also don't want to 'fake' it by creating a view and hiding it. I would like to know if anyone has experience with such a thing, has any ideas on how to achieve it, or knows whether it can be achieved at all. Does ARCore even support this?
UPDATE: I found this https://github.com/google-ar/arcore-android-sdk/issues/259 where they mention that it's possible with just an OpenGL context. But I have no clue how to get started. Any samples or pointers would be appreciated!
You can run an ArSession for tracking; an ArSession doesn't depend on a View.
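A rough sketch of that idea (class, method and variable names here are illustrative; as the GitHub issue linked in the question points out, the session still needs an OpenGL texture to write camera frames into, so you need a GL context even though no View is involved):

import android.content.Context;
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

public class PoseTracker {
    private final Session session;

    public PoseTracker(Context context) throws Exception {
        session = new Session(context);            // no View or SurfaceView involved
    }

    // Call on a thread that owns an EGL context; cameraTextureId comes from
    // glGenTextures() with GL_TEXTURE_EXTERNAL_OES.
    public void start(int cameraTextureId) throws Exception {
        session.setCameraTextureName(cameraTextureId);
        session.resume();
    }

    public Pose latestPose() throws Exception {
        Frame frame = session.update();            // waits for the next camera frame
        Camera camera = frame.getCamera();
        return camera.getTrackingState() == TrackingState.TRACKING
                ? camera.getPose()                 // camera pose in world coordinates
                : null;
    }
}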
In a fragment I have a TextureView displaying the camera preview, and on top of that I draw several other views.
My goal is to record a short video of what the user sees (all views) or save several screenshots to compile later into a video.
Since I don't want any disclaimer to show up, or the Intent associated with it, I don't want to use MediaProjection.
I've tried many things, but they either don't work or screenshot/record all views except for the TextureView, which turns out black in the result. Note that I don't wish to use MediaRecorder either, because it would only let me record the TextureView, and I want all of the content to be recorded/screenshotted.
I understand that this is the reason TextureView comes out black.
I have actually managed to get screenshots with the PixelCopy API, particularly this call, but its minimum SDK version is 26 and I need a solution that works down to SDK version 24, otherwise it would be an option for me... Also, the ideal scenario would be getting a video directly rather than frames to compile into a video later.
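For reference, the PixelCopy call I got working looks roughly like this (API 26+, which is exactly the limitation; the class and method names here are just for illustration):

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.Rect;
import android.os.Handler;
import android.os.Looper;
import android.view.PixelCopy;
import android.view.View;

public final class WindowCapture {
    // Copies the part of the window that 'root' covers into a Bitmap,
    // TextureView content included.
    public static void capture(Activity activity, View root) {
        Bitmap bitmap = Bitmap.createBitmap(root.getWidth(), root.getHeight(), Bitmap.Config.ARGB_8888);
        int[] location = new int[2];
        root.getLocationInWindow(location);
        Rect srcRect = new Rect(location[0], location[1],
                location[0] + root.getWidth(), location[1] + root.getHeight());
        PixelCopy.request(activity.getWindow(), srcRect, bitmap, copyResult -> {
            if (copyResult == PixelCopy.SUCCESS) {
                // bitmap now holds one frame of everything on screen
            }
        }, new Handler(Looper.getMainLooper()));
    }
}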
So, can anyone point out a better way of doing this? I'm currently not seeing any alternatives...
Again, I want to give the user a small video of the entire screen display (all views).
Thanks a lot in advance!
I am currently working on an application in Android Studio that messes with the colors of the live camera feed from a phone's camera. For example, I may want to filter out all the reds, or maybe I want to make the displayed camera image black-and-white.
However, I haven't really found much on how to do this. I've found tutorials on using both the deprecated Camera class and the android.hardware.camera2 package. My preferred sample code was for camera2, found directly here (the link takes you directly to the Java class files, not the whole project).
So does anyone know how to use camera2 to do what I want? Do I need to use the deprecated Camera class instead? My idea is that I need an activity whose main job is displaying images, while behind the scenes the phone camera is running, sending each image (in whatever format, e.g. a Bitmap) to have its colors messed with (by some code I will write), which then sends the image to be displayed in the main activity.
So that is three main pieces: (1) Camera to Bitmap, to get what is currently seen by the phone Camera and store it in code; (2) mess with the colors of the Bitmap to distort the current view in my desired way; and (3) then a way of taking the resulting distorted view and displaying that on the screen. Of course, as mentioned, it's the first and last of the three just mentioned that I really need help with.
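For step (2) I have something like this in mind (just a sketch using ColorMatrix for the black-and-white case; step (3) could then be as simple as handing the filtered Bitmap to an ImageView via setImageBitmap()):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.ColorMatrix;
import android.graphics.ColorMatrixColorFilter;
import android.graphics.Paint;

public final class ColorFilters {
    // Redraws the source Bitmap through a ColorMatrix filter.
    // setSaturation(0) gives black-and-white; other matrices could drop the reds, etc.
    public static Bitmap toGrayscale(Bitmap src) {
        Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(out);
        ColorMatrix matrix = new ColorMatrix();
        matrix.setSaturation(0f);
        Paint paint = new Paint();
        paint.setColorFilter(new ColorMatrixColorFilter(matrix));
        canvas.drawBitmap(src, 0f, 0f, paint);   // src drawn through the filter
        return out;
    }
}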
Please let me know what other details will be helpful to know about.
I need to scan a special object within my android application.
I thought about using OpenCV, but it is scanning all objects inside the view of the camera. I only need the camera to recognize a rectangular piece of paper.
How can I do that?
My first thought was: how do barcode scanners work? They are able to recognize the barcode area and automatically take a picture when the barcode is inside a predefined area of the screen and when it's sharp. I guess it must be possible to transfer that to my problem (tell me if I'm wrong).
So step by step:
Open custom camera application
Scan objects inside the view of the camera
Recognize the rectangular piece of paper
If paper is inside a predefined area and sharp -> take a picture
I would combine this with audio: if the camera recognizes the paper, play some noise like a beep, and the better the object fits the predefined area, the faster the beep is repeated. That would make taking pictures possible for blind people.
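Something like this is what I have in mind for the audio part (a sketch; fitFraction would come from however well the detected paper fills the target area):

import android.media.AudioManager;
import android.media.ToneGenerator;
import android.os.Handler;
import android.os.Looper;

public class BeepFeedback {
    private final ToneGenerator toneGen = new ToneGenerator(AudioManager.STREAM_MUSIC, 80);
    private final Handler handler = new Handler(Looper.getMainLooper());

    // fitFraction in 0..1: the better the paper fits, the shorter the pause between beeps.
    public void beepLoop(double fitFraction) {
        long interval = (long) (1000 - 800 * fitFraction);   // 1000 ms down to 200 ms
        handler.postDelayed(() -> {
            toneGen.startTone(ToneGenerator.TONE_PROP_BEEP, 100);
            beepLoop(fitFraction);   // in a real app, re-read the current fit here
        }, interval);
    }
}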
Hope someone got ideas on that.
OpenCV is an image processing framework/library. It does not "scan all objects inside the view of the camera". By itself it does nothing, but it gives you a number of useful functions, many of which could be used for your application.
If the image is not cluttered and nothing is on the paper, I would look into using edge detection (e.g. Canny or similar) or even colour blobs (even though colour is never a good idea, if your application always deals with white, uncovered paper it should work robustly).
OpenCV does add some overhead, but it would allow you to quickly use functions for a simple solution.
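To make that concrete, here is a sketch of what the edge-detection route could look like with OpenCV's Java bindings (the thresholds and the area check are placeholders to tune):

import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public final class PaperDetector {
    // Returns true if the grayscale frame contains a large four-cornered contour,
    // i.e. a candidate sheet of paper.
    public static boolean containsPaper(Mat grayFrame) {
        Mat edges = new Mat();
        Imgproc.Canny(grayFrame, edges, 75, 200);                 // edge map

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(edges, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        for (MatOfPoint contour : contours) {
            MatOfPoint2f curve = new MatOfPoint2f(contour.toArray());
            double perimeter = Imgproc.arcLength(curve, true);
            MatOfPoint2f approx = new MatOfPoint2f();
            Imgproc.approxPolyDP(curve, approx, 0.02 * perimeter, true);
            // Four corners and a reasonably large area: likely the piece of paper.
            if (approx.total() == 4 && Imgproc.contourArea(contour) > 10000) {
                return true;
            }
        }
        return false;
    }
}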
Is it possible to broadcast the Android camera preview into 2 different SurfaceView controls at the same time? I have seen some apps that show effects in different previews in real time; how do they achieve that? I read about TextureView, is this the view to use? Where can I find examples of multiple simultaneous camera previews?
Thanks
Well, as they answered in this question, I downloaded the grafika project and revised the "texture from camera" example.
In the RenderThread there is a Sprite2d attribute called mRect.
I just made another instance called mRect2 and configured it with the same parameters as mRect, except for the rotation, which I set to double:
mRect.setRotation(rotAngle);
mRect2.setRotation(rotAngle*2);
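The rest of mRect2's setup just mirrors what the example already does for mRect, roughly like this (the field and method names follow grafika's RenderThread and Sprite2d as I found them, so check them against your copy of the project):

// Alongside the existing mRect setup (mRect2 is declared as another Sprite2d field):
mRect2 = new Sprite2d(mRectDrawable);          // same rectangle geometry as mRect
mRect2.setTexture(mTextureId);                 // same camera texture
mRect2.setScale(width, height);                // placeholder: use the same scale values the example computes for mRect
mRect2.setPosition(mPosX, mPosY);

// In the draw pass, right after mRect is drawn:
mRect.draw(mTexProgram, mDisplayProjectionMatrix);
mRect2.draw(mTexProgram, mDisplayProjectionMatrix);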
This is the result
There is still a lot of code to understand, but it works and seems like a very promising path to continue down.
I don't think it is possible to independently open 2 camera previews at the same time, as the camera is treated as a shared resource. However, it is possible to draw to multiple SurfaceViews, which is what the apps you describe do.