I want to develop a custom camera application like Snapchat, without using SurfaceView.
I first used SurfaceView to build the app, but I was unable to get good image quality, and I could not reproduce all the features the default camera app provides, such as zoom, focus, and face recognition. Can anyone suggest a way to achieve this?
(Sorry for my English.)
https://github.com/xplodwild/android_packages_apps_Focal
https://github.com/almalence/OpenCamera
https://github.com/troop/FreeDCam
https://github.com/rexstjohn/UltimateAndroidCameraGuide
Maybe one of those might help.
I inserted a WebView into my 3D game. I can retrieve a URL, display it on a canvas, and texture it into my 3D rendering.
Everything works fine except for video, WebGL, and Three.js rendering.
In those three cases I cannot see anything except the two canvases from Three.js (the frame canvas information and the checkbox option); the areas where the WebGL or video content should be rendered are empty.
For the video, I found that I can only see the image when I touch the seek bar under the play button and audio button.
So it looks like the rendering view is hidden, not visible, or something else.
I tried to get the view rendered by the WebView to check its status, but a WebView exposes only one View, and it is not possible to get the child views inside it to check them.
So I am not sure whether the problem comes from the view status or from a conflict with the 3D environment.
I would like to know if anyone has an idea about this problem, or could tell me how to retrieve the detailed view hierarchy from a WebView, if that is possible.
NEW INFORMATION
I think the problem could come from running WebGL (OpenGL 1.4) from the WebView inside a GvrActivity (OpenGL ES 2); rendering both Android OpenGL contexts at the same time may cause a conflict.
Concerning media and audio: I am also running voice recognition (Context.AUDIO_SERVICE) and a MediaPlayer to watch the loaded video.
LAST TESTING:
If I run the WebView in another activity, everything is fine. The problem is that I would like access to that activity so I can get the WebView's VIEW displayed in the main activity's layout.
Is that possible?
LAST TESTING:
Changing the Context/Activity that starts the WebView does not resolve the problem.
I found out that if I attach the WebView via getWindow().addContentView() there is no problem, but if I add the view to MyGLSurfaceView extends GvrView I get the problem. I think it is because I already use the MediaPlayer to render video onto a 3D mesh and OpenGL to draw the 3D scene, but I am not sure of that.
LAST TESTING: ("je rame", French for "I'm struggling")
I have tried everything I can think of, and after 3 weeks I am starting to run out of ideas.
Concerning the audio from the WebView, I am sure I get a conflict with the voice recognition (mAudioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE)), which I think is also used by the WebView, because I can use it to display video inside a canvas in 3D OpenGL ES.
Concerning the video from the WebView, I need to touch the progress bar to see anything. But since the WebView is a black box, there is no way to get the progress bar view and act on it.
Concerning WebGL and Three.js: I can display web text and images, but nothing related to OpenGL; I get a white display (not transparent; I set transparency to check). It can only be displayed outside of the OpenGL environment and outside of Surface and SurfaceTexture.
Concerning the WebView: it has only one output view, so all the view rendering seems to be done internally, using the tag parser for positioning and canvas construction, so no child views are exposed. But if I assume that the output view of the WebView is a complete bitmap of all the tag canvases (that is my own interpretation; it is easy to use the tag tree for fast rendering access and a well-structured design), then I wonder why the WebGL and Three.js content cannot be copied using surfaceTexture.updateTexImage(), if the WebView's output is a bitmap canvas.
After all, everything is a 2D canvas when displayed inside a view.
So, 3 weeks trying to find an answer. I hope that someone will find it.
I was planning to build an art gallery in VR where anyone could watch video or 3D art. For video and 360° video I could make it work, but not for WebGL and Three.js.
3D inside 3D is the peak of art technology. Imagine being able to go into any 3D shop, or access any web URL with a video from an artist, or whatever. The latter is possible, but not WebGL and Three.js. Watching YouTube 360 VR is easy; I can make it work very well with voice commands, just a hack to get the download address. Why block this, when the best approach is to be open and let imagination create tomorrow's tools?
So I give up and go back to my OpenCL recognition application, which is nearly finished. Shape recognition could be a great new tool.
By the way, do not hesitate to ask me for my APK if you want to check it yourself. Stack Overflow is allowed to contact me, as they have my email address.
LAST TEST (07/07/2020)
I think I now understand the problem concerning the inaccessibility of the video and WebGL display. For video, the first intention was surely to prevent copying of the video and to optimize rendering. For WebGL there is no need to prevent copying, but it needs to be optimized to avoid redrawing the whole WebView picture when only one part has changed. So it looks like the rendering of video and WebGL is done in another thread, independent of the other HTML tags.
Some say that because it is done in another thread it is not possible to access it. That is false: the canvas where the rendering is done could be made accessible if they wanted it to be. It is just a bitmap for the video and a framebuffer (viewport) for WebGL. I agree that it is done by the GPU and not in software, but the resulting canvas (rectangle) could be made accessible, because in the end it is displayed. The problem is that it is no longer part of the WebView's final view; they just send the position where the display must be done to the GPU as viewport coordinates.
So when I call SurfaceTexture.updateTexImage(), I just get some parts of the web page: no video and no WebGL.
Here is an example to help understand:
Load the URL "https://webglfundamentals.org/webgl/webgl-2d-triangle-with-position-for-color.html" and look at the code in your browser.
I can see and manipulate the slider without problems, but the GL result is not visible. That is because the canvas is an OpenGL context: it is processed by the GPU and copied directly to the final framebuffer at the right place, but it is not part of the WebView's final view, which is unique. For the video, if I can get the slider I can get some pictures, but I would have to grab it and track its changes, which is hard work, and I am not really interested in video; I found another way to watch it in OpenGL.
So I think Android could make an effort to give access to the canvas used for WebGL rendering, and find a way to expose it inside the WebView. 3D inside 3D is already a fourth dimension. It would be good, but it is not available at the moment. It will be, though.
I will keep looking at it, just out of curiosity, but I don't think there is much chance.
Here is the only answer I have found:
https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/3wrULcul8lw
Today (24/09/2020) I read this:
DRM-protected video can be presented only on an overlay plane. Video players that support protected content must be implemented with SurfaceView. Software running on unprotected hardware can't read or write the buffer; hardware-protected paths must appear on the Hardware Composer overlay (that is, protected videos disappear from the display if Hardware Composer switches to OpenGL ES composition).
30/09/2020 :
I am able to see and manipulate the WebGL display using Surface.lockHardwareCanvas, but I lose focus during manipulation and sometimes it does not work. A small problem ;))
But when using lockHardwareCanvas to display a YouTube web page, with or without video, everything freezes; I cannot update the view or scroll it.
But it's making progress ;))
Here is my solution.
WebGL and Three.js can be displayed using Surface.lockHardwareCanvas.
The problem is using Surface.lockHardwareCanvas with HTML other than accelerated views: in that case there is no frame to listen for, so the view is never refreshed and must be updated some other way.
Concerning OpenGL, WebGL, and Three.js: it depends on the application; some are not displayed well for various reasons (no GL_TEXTURE_EXTERNAL_OES, canvas dimensions not a power of 2, bad view size). And the problem is that there is no new onFrameAvailable callback, so you need another way to refresh the view.
It took quite a long time to understand, but it works, even in a GvrActivity ;))
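A minimal sketch of the approach described above, drawing the WebView into a Surface through Surface.lockHardwareCanvas(). The class and method names here are illustrative, not from any official API; the Surface is assumed to wrap a SurfaceTexture bound to a GL_TEXTURE_EXTERNAL_OES texture in the 3D scene.

```java
import android.graphics.Canvas;
import android.view.Surface;
import android.webkit.WebView;

public class WebViewSurfaceRenderer {
    // Copies the WebView's current content into the given Surface.
    // Must be called on the thread that owns the WebView.
    public static void renderWebViewFrame(WebView webView, Surface surface) {
        if (surface == null || !surface.isValid()) return;
        // lockHardwareCanvas (API 23+) gives a hardware-accelerated canvas,
        // which is what allows WebGL content to be captured here, unlike
        // the software canvas returned by lockCanvas().
        Canvas canvas = surface.lockHardwareCanvas();
        try {
            webView.draw(canvas);
        } finally {
            surface.unlockCanvasAndPost(canvas);
        }
    }
}
```

Since there is no onFrameAvailable callback for non-accelerated HTML, this method would have to be called periodically (for example from a Choreographer frame callback) to keep the texture fresh.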
Regards.
I am working on an app where I need to use ARCore. Since I don't have any prior experience with ARCore, I don't know how to do it.
REQUIREMENT
I have a requirement to open a camera and place an AR object at given x,y coordinates.
I know the AR object needs a 'z' coordinate as well.
If my camera view shows a shelf with items placed on it, I want to show an AR object at the 3rd item from the left. I already have the x,y points of that item on the shelf and now just have to place the AR object.
Please let me know whether this is even possible.
What I Tried
The Anchor object. It is not created using x and y coordinates.
The Hello Sceneform example from Google. But to use that, I need to calibrate the AR scene first by moving the camera in a specific manner.
The Augmented Images example from Medium, which lets me add an ARObject/AugmentedImage to the camera without calibrating. But it needs an augmented image's position as a reference, and the item on the shelf won't be an AugmentedImage.
Also, the camera view is very blurry in the AR scene.
Please guide me on how I can achieve this. Any tutorial links are welcome.
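For reference, ARCore's documented way to go from 2D screen coordinates to a world-space anchor is Frame.hitTest(x, y), which supplies the missing z from detected geometry. This is a hedged sketch, not the asker's code; it still assumes the session has reached tracking state first.

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import java.util.List;

public class TapToAnchor {
    // xPx/yPx are screen pixels, e.g. the known position of the shelf item.
    // Returns null if ARCore has not yet detected geometry at that point.
    public static Anchor anchorAt(Frame frame, float xPx, float yPx) {
        List<HitResult> hits = frame.hitTest(xPx, yPx);
        for (HitResult hit : hits) {
            // Take the first hit against detected planes/points and anchor
            // there; the hit pose provides depth (z) automatically.
            return hit.createAnchor();
        }
        return null;
    }
}
```

The anchor returned here can then be attached to an AnchorNode in Sceneform to hold the renderable at the shelf item's position.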
I want to superimpose an outline of a face when the camera launches in Android, kind of like an empty avatar or a silhouette. I'm NOT looking for face detection. I just want the user to be able to align the outline with the person's face when taking the picture, so that all pictures are taken at the same distance. I already have the camera launching. I'm thinking of putting the facial outline in my main layout and having the camera preview as the background of that layout. Any ideas?
Here you go; there is a section detailing how to do this:
http://developer.android.com/guide/topics/media/camera.html#custom-camera
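A minimal sketch of the layering the question describes: the camera preview in the background with a transparent silhouette image stacked on top. `CameraPreview` is assumed to be the SurfaceView-based preview class from the linked guide, and `R.drawable.face_outline` is a placeholder asset name.

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.FrameLayout;
import android.widget.ImageView;

public class OverlayCameraActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        FrameLayout root = new FrameLayout(this);

        // Camera preview fills the background (CameraPreview is assumed to be
        // the SurfaceView subclass from the linked custom-camera guide).
        root.addView(new CameraPreview(this));

        // A transparent PNG with the face outline sits on top of the preview;
        // children added later to a FrameLayout draw above earlier ones.
        ImageView outline = new ImageView(this);
        outline.setImageResource(R.drawable.face_outline);
        root.addView(outline);

        setContentView(root);
    }
}
```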
So, I'm trying to create an augmented reality app on Android (client/server).
My question is whether I can overlay images and text boxes over the camera in real time, or only if I capture a frame, display it on the screen, and then add the extra information.
If the first approach can be implemented, can someone help me with some starting code or links with suggestions?
I did something similar for one of my apps.
The way I did it is by placing an empty View on top of the camera SurfaceView using a FrameLayout, then overriding the onDraw method in the View class to draw on the canvas and put anything I wanted on top of the camera view. You can do practically everything with Canvas, and since your view is literally overlaying the camera SurfaceView, it will do exactly the trick you are trying to accomplish here.
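The overlay approach above can be sketched roughly like this (all names are illustrative; the view would be stacked over the camera SurfaceView inside a FrameLayout):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class CameraOverlayView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CameraOverlayView(Context context) {
        super(context);
        paint.setColor(Color.RED);
        paint.setTextSize(48f);
        // Transparent background so the camera preview shows through.
        setBackgroundColor(Color.TRANSPARENT);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Anything drawn here appears over the live preview.
        canvas.drawText("AR label", 40f, 80f, paint);
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, 60f, paint);
    }
}
```

Calling invalidate() on the view whenever the overlay data changes triggers a redraw without touching the camera preview underneath.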
Regards!
I have a camera preview in my Android app. As you probably know, it is implemented with a SurfaceView.
In my photo app, which allows users to take pictures, I want to blur the camera preview (the SurfaceView) if the user has not logged in yet; if the user is logged in, I will display the normal preview (without blur).
I want a blur effect something like the example image, but there seems to be no way to do it.
A couple of things come to mind, but I am not sure how to achieve them:
use a blur overlay and place it on top of the SurfaceView; but how do you create such a blur overlay?
another approach is to change an attribute of the window, but I can't apply it to a SurfaceView:
getWindow().addFlags(WindowManager.LayoutParams.FLAG_BLUR_BEHIND);
So can we create a window overlaying the SurfaceView and set that flag? I don't think so.
Can someone tell me how to blur a camera preview, which is a SurfaceView?
Note: I am trying to blur an area which is the output of the camera preview, so it is not like blurring a static image; the blurred area will change depending on where you point your phone's camera.
The best way to do this is to take a screenshot of the control, apply a blur to it, and then show that image on top of the original control. This is how the Yahoo Weather app does it, and it's how Google suggests you do things like this.
RenderScript does blurring fast. I've also got some code, but it's not at hand right now.
These might help:
http://blog.neteril.org/blog/2013/08/12/blurring-images-on-android
http://docs.xamarin.com/recipes/android/other_ux/drawing/blur_an_image_with_renderscript/
I've also read that there are methods built into Android that do this, but the API isn't public, so we cannot use them... which sucks.
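A hedged sketch of the RenderScript blur the links above describe: take a Bitmap (for example a snapshot of the preview area) and blur it with ScriptIntrinsicBlur, then display the result on top of the original view. Note that RenderScript is deprecated on recent Android versions, but this matches the approach in the linked articles.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

public class BlurHelper {
    // radius must be in (0, 25] for ScriptIntrinsicBlur.
    public static Bitmap blur(Context context, Bitmap src, float radius) {
        Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
                src.getConfig());
        RenderScript rs = RenderScript.create(context);
        try {
            Allocation in = Allocation.createFromBitmap(rs, src);
            Allocation outAlloc = Allocation.createFromBitmap(rs, out);
            ScriptIntrinsicBlur blur =
                    ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
            blur.setRadius(radius);
            blur.setInput(in);
            blur.forEach(outAlloc);   // run the blur kernel on the GPU/CPU
            outAlloc.copyTo(out);     // copy the result back into the Bitmap
        } finally {
            rs.destroy();
        }
        return out;
    }
}
```

For a live preview you would re-capture and re-blur each frame (or at some interval), which is why the snapshot-based approach is usually combined with a low blur resolution for performance.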
Although your 2nd option is losing support in later versions of the Android OS, it may be a good option. I would think to bring a window IN FRONT of your SurfaceView and use the blur_behind property of that new in-front window.
From the Android API documentation:
"This constant was deprecated in API level 14.
Blurring is no longer supported.
Window flag: blur everything behind this window.
Constant Value: 4 (0x00000004)"
I do have to admit, I'm not 100% sure this would work, but it's definitely worth a try.
You have to change the camera parameters, in this case with Camera.Parameters.setPictureSize().
The basic workflow here is :
Camera.Parameters cp = mCamera.getParameters(); // get the current params
cp.set...(); // change something
mCamera.setParameters(cp); // write the params back
Make sure that every resolution you set via this function is supported. You can get a list of the resolutions supported on a device via Camera.Parameters.getSupportedPictureSizes(). Also check the Camera documentation.
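Putting the workflow above together, here is a hedged sketch that picks the largest supported picture size and writes it back. It uses only documented android.hardware.Camera APIs (deprecated since API 21, but consistent with the code in this answer); the helper name is illustrative.

```java
import android.hardware.Camera;
import java.util.List;

public class PictureSizeHelper {
    public static void applyLargestPictureSize(Camera camera) {
        Camera.Parameters cp = camera.getParameters();  // get the current params

        // Only sizes from this list are guaranteed to work on the device.
        List<Camera.Size> sizes = cp.getSupportedPictureSizes();
        Camera.Size best = sizes.get(0);
        for (Camera.Size s : sizes) {
            if (s.width * s.height > best.width * best.height) {
                best = s;  // keep the size with the most pixels
            }
        }

        cp.setPictureSize(best.width, best.height);     // change something
        camera.setParameters(cp);                       // write the params back
    }
}
```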