I use WebRTC's Android library and I want to mirror the image (footage) displayed on a SurfaceView. (It is front-camera footage.)
I did the same on iOS easily by changing the scale of the view, like this: self.LocalView.transform = CGAffineTransformMakeScale(-1.0, 1.0); But on Android, localRenderer.scaleX = -1f results in a black screen.
This is the only source I found that talks about this: link
It says something like this:
WebRTC Android provides VideoRendererGui as a video rendering interface.
VideoRendererGui's update method provides a mirror parameter. Set it to true to mirror (flip) the image when rendering.
public static void update(Callbacks renderer, int x, int y, int width, int height, VideoRendererGui.ScalingType scalingType, boolean mirror)
But I can't find an example of how to use this VideoRendererGui class.
I am not sure whether you just want to mirror your local view or you want the camera stream itself to be mirrored. But if you just want to show the front-camera footage mirrored, then the solution below will do the work.
Here localVideoView is a SurfaceViewRenderer.
localVideoView?.setMirror(true)
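For context, here is a rough Java sketch of where that call typically goes in a WebRTC setup. The view id, eglBase and localVideoTrack names are placeholders for your own objects, not code from the question:

// Hedged sketch of a typical local-renderer setup; the names below are placeholders.
SurfaceViewRenderer localVideoView = findViewById(R.id.local_video_view);
localVideoView.init(eglBase.getEglBaseContext(), null);
localVideoView.setScalingType(RendererCommon.ScalingType.SCALE_ASPECT_FILL);
localVideoView.setMirror(true); // same effect as the iOS CGAffineTransformMakeScale(-1, 1)
localVideoTrack.addSink(localVideoView);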
Background
I have a web-based animation that I've turned into an Android app with live wallpaper:
http://pixfabrik.com/livingworlds/
I've done so by creating a WebView and periodically copying its contents to the WallpaperService's Surface. Here's the code for that:
https://gist.github.com/iangilman/71650d46384a2d4ae6387f2d4087cc37
… And here's how I got to that solution:
Android: Use WebView for WallpaperService
That's been working great for the last four months, but WebView 76 broke the live wallpaper by introducing this bug:
https://bugs.chromium.org/p/chromium/issues/detail?id=991078
The bug is being worked on, so I'm optimistic that WebView 77 (due to be released September 10) will have it fixed, but it would be nice to fix my app sooner than that, if possible!
The Issue
In the bug report above, one of the Chromium developers suggested using VirtualDisplay to connect the WebView to the WallpaperService instead, so now I'm pursuing that. I'm relatively new to Android, so I'm doing it naïvely, and so far it's not working. I'm writing here seeking help!
Here's what I currently have, in my Engine's onSurfaceChanged() (so I can take advantage of the width/height it gives me):
@Override
public void onSurfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    super.onSurfaceChanged(holder, format, width, height);

    DisplayManager mDisplayManager = (DisplayManager) getSystemService(Context.DISPLAY_SERVICE);
    int flags = DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY;
    int density = DisplayMetrics.DENSITY_DEFAULT;
    VirtualDisplay virtualDisplay = mDisplayManager.createVirtualDisplay("MyVirtualDisplay",
            width, height, density, holder.getSurface(), flags);

    Presentation myPresentation = new Presentation(myContext, virtualDisplay.getDisplay());
    WebView myWebView = new WebView(myPresentation.getContext());
    myWebView.loadUrl("file:///android_asset/index.html");

    ViewGroup.LayoutParams params = new ViewGroup.LayoutParams(width, height);
    myPresentation.setContentView(myWebView, params);
}
I'm using a Presentation to connect the VirtualDisplay to the WebView (on the recommendation of the Chromium developer), but I don't know for sure if that's the right way to go about it.
The WallpaperService runs, and I don't get any errors, but I also don't see my webpage; it's just a white screen.
Hopefully I'm just doing something dumb… Please enlighten me! :-)
You must call the show() method on your Presentation object, otherwise it won't be displayed on the VirtualDisplay.
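As a sketch, applied to the end of the onSurfaceChanged() code from the question (same variable names), the only change is the added call:

ViewGroup.LayoutParams params = new ViewGroup.LayoutParams(width, height);
myPresentation.setContentView(myWebView, params);
myPresentation.show(); // without this, the Presentation's window is never rendered to the VirtualDisplay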
I followed the Android Studio tutorial to get the camera preview to work (Camera API, Android Developer Guide). This works fine for me, and I can view the camera stream in my FrameLayout.
But I would like to get the RGB values of a specific pixel in the preview every time it changes. I did not find a method that gives me the preview image as a bitmap, and I was not able to understand the usage of the onPreviewFrame method:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {}
How can I get the RGB values of a camera preview pixel?
If you are using the Camera2 API, you can implement the ImageReader.OnImageAvailableListener interface in your application. After that, you override the onImageAvailable function, which gets an ImageReader as its argument. Then you can access the image just recorded with imageReader.acquireNextImage().
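A minimal sketch of that setup, assuming a YUV-capable ImageReader whose Surface is added as a target of the capture session elsewhere; previewWidth, previewHeight and backgroundHandler are placeholders:

ImageReader imageReader = ImageReader.newInstance(previewWidth, previewHeight,
        ImageFormat.YUV_420_888, 2 /* maxImages */);
imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireNextImage();
    if (image == null) return;
    try {
        // image.getPlanes() gives the Y, U and V planes; convert to RGB as described below
    } finally {
        image.close(); // always close, or the reader runs out of buffers
    }
}, backgroundHandler);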
With either API, you need to handle processing YUV data yourself, unfortunately.
Camera devices natively produce YUV data, not RGB, so the API doesn't spend extra resources to auto-convert the data. The main easy exception is piping data to the GPU, where the GPU driver auto-converts YUV to RGB for you within your pixel shader.
But if you're just in regular app code, you need to parse the data.
For the deprecated android.hardware.Camera API, the output is NV21 by default, and you can usually select YV12 as another option.
The wikipedia article on YUV is relatively helpful: https://en.wikipedia.org/wiki/YUV
But it does have the wrong conversion coefficients for YUV->RGB conversion; they should be:
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
(Cb = U, Cr = V)
You can also take a look at this stackoverflow post:
Extract black and white image from android camera's NV21 format
which has code that looks to be correct for the conversion.
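As an illustration of those coefficients applied to a single NV21 pixel (which is what the question asks for), here is a hedged sketch; the method name and parameters are made up for the example, and it assumes data is the byte[] from onPreviewFrame and width/height are the preview size:

// Extract the ARGB value of one pixel (px, py) from an NV21 preview frame.
public static int nv21PixelToArgb(byte[] data, int width, int height, int px, int py) {
    int frameSize = width * height;
    int y = data[py * width + px] & 0xFF;
    // NV21: full-resolution Y plane, followed by interleaved V/U pairs at half resolution
    int uvIndex = frameSize + (py / 2) * width + (px / 2) * 2;
    int v = data[uvIndex] & 0xFF;     // Cr
    int u = data[uvIndex + 1] & 0xFF; // Cb

    int r = clamp(Math.round(y + 1.402f * (v - 128)));
    int g = clamp(Math.round(y - 0.34414f * (u - 128) - 0.71414f * (v - 128)));
    int b = clamp(Math.round(y + 1.772f * (u - 128)));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

private static int clamp(int c) {
    return Math.max(0, Math.min(255, c));
}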
I am trying to understand the following piece of code. According to the author, he is trying to reset the camera position based on the gutter width and height. By gutter, I take it the author means the black bars on the screen.
The problem is that I can't seem to find the methods setViewport(int, int, boolean), getGutterWidth() and getGutterHeight() on the Stage class. I think this code was written against an outdated libGDX API. What I am looking for is the equivalent code that will perform the same task as this outdated code:
private Stage stage;

public void resize(int width, int height) {
    stage.setViewport(MyGame.WIDTH, MyGame.HEIGHT, true);
    stage.getCamera().translate(-stage.getGutterWidth(),
                                -stage.getGutterHeight(), 0);
}
These black bars are now handled by the viewport classes; see https://github.com/libgdx/libgdx/wiki/Viewports for an overview and short descriptions.
In your case I would suggest a FitViewport:
Viewport viewport = new FitViewport(MyGame.WIDTH, MyGame.HEIGHT, camera);
Stage stage = new Stage(viewport);
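The equivalent of the outdated resize() code is then just an update call on the viewport; passing true centers the camera, which replaces the manual gutter translation. A minimal sketch, assuming an ApplicationAdapter-style game class and an OrthographicCamera:

private Stage stage;

@Override
public void create() {
    OrthographicCamera camera = new OrthographicCamera();
    Viewport viewport = new FitViewport(MyGame.WIDTH, MyGame.HEIGHT, camera);
    stage = new Stage(viewport);
}

@Override
public void resize(int width, int height) {
    // true = center the camera, replacing the old getGutterWidth()/getGutterHeight() translation
    stage.getViewport().update(width, height, true);
}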
I'm extremely new to Unity and VR, and I've just finished working through some Unity tutorials from YouTube. Unfortunately, there isn't much documentation on the process one should follow to make a VR app with Unity.
What I need is to be able to replicate the Photosphere App in the Cardboard App for Android. I need to do this using Unity if possible.
The photospheres have been taken with a Nexus 4's camera using the photo sphere option and look like the image below:
I tried following this really nice walkthrough, which attaches a cubemap skybox to the lighting. The problem is that the top and bottom parts of the cube don't seem to show the proper image.
I tried doing it with a six-sided skybox too, but I'm pretty lost about the way I should proceed with that, primarily because I've just got one photosphere image and the six-sided skybox has six input texture parameters.
I also tried following this link but the information there is slightly overwhelming.
Any help or pointers in the right direction would be extremely appreciated!
Thank you :)
I also went through the tutorial at the link you provided, but it seems that they did it "manually".
Since you have Unity 3D and the Cardboard SDK for Unity, you don't have to configure the cameras yourself.
Please follow this tutorial.
There is an alternative way: create a sphere and put the camera inside it:
http://zhvillues.tumblr.com/post/126331275376/creating-a-360-viewer-using-unity-3d
You have to apply a custom shader to the Sphere to render the inside of it.
Shader "Custom/sphereShader" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_Color ("Main Color", Color) = (1,1,1,0.5)
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Front
CGPROGRAM
#pragma surface surf Lambert vertex:vert
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
float4 color : COLOR;
};
void vert(inout appdata_full v)
{
v.normal.xyz = v.normal * -1;
}
void surf (Input IN, inout SurfaceOutput o) {
fixed3 result = tex2D(_MainTex, IN.uv_MainTex);
o.Albedo = result.rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "Diffuse"
}
I am using the GPUImage library to compress a video in my iOS app (GPUImageVideoCamera):
https://github.com/BradLarson/GPUImage/
I have worked with it on iOS and it is very fast.
I want to do the same in my Android app, but it seems that the GPUImageMovie class doesn't exist in the Android library:
https://github.com/CyberAgent/android-gpuimage/tree/master/library/src/jp/co/cyberagent/android/gpuimage
It seems that the Android library only works on images (no video).
Does anyone know if this library can do the job? If not, has someone ported the full GPUImage library to Android? If not, what is the best library I can use that does the job as fast as the GPUImage library does?
This is what GPUImageVideoCamera does in iOS (filtering live video):
To filter live video from an iOS device's camera, you can use code like the following:
GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageFilter *customFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromFile:@"CustomShader"];
GPUImageView *filteredVideoView = [[GPUImageView alloc] initWithFrame:CGRectMake(0.0, 0.0, viewWidth, viewHeight)];
// Add the view somewhere so it's visible
[videoCamera addTarget:customFilter];
[customFilter addTarget:filteredVideoView];
[videoCamera startCameraCapture];
This sets up a video source coming from the iOS device's back-facing camera, using a preset that tries to capture at 640x480. This video is captured with the interface being in portrait mode, where the landscape-left-mounted camera needs to have its video frames rotated before display. A custom filter, using code from the file CustomShader.fsh, is then set as the target for the video frames from the camera. These filtered video frames are finally displayed onscreen with the help of a UIView subclass that can present the filtered OpenGL ES texture that results from this pipeline.
The fill mode of the GPUImageView can be altered by setting its fillMode property, so that if the aspect ratio of the source video is different from that of the view, the video will either be stretched, centered with black bars, or zoomed to fill.
For blending filters and others that take in more than one image, you can create multiple outputs and add a single filter as a target for both of these outputs. The order with which the outputs are added as targets will affect the order in which the input images are blended or otherwise processed.
Also, if you wish to enable microphone audio capture for recording to a movie, you'll need to set the audioEncodingTarget of the camera to be your movie writer, like the following:
videoCamera.audioEncodingTarget = movieWriter;
Is there a library that can do the same on Android?