I am using an Android video chat SDK in Unity, and I want to provide Unity camera frames to the SDK.
Currently I set a RenderTexture as Camera.targetTexture and pass RenderTexture.GetNativeTexturePtr() to Android.
What should I do next? I have googled a lot; should I use the GLES20 API, or is there some other solution?
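On the Android side, one possible next step is sketched below. This is a hedged sketch only, assuming the value from GetNativeTexturePtr() is a GLES texture name and that this code runs on a thread sharing Unity's rendering context (e.g. triggered via GL.IssuePluginEvent); the class name UnityFrameReader and the final format conversion are illustrative, not part of any particular SDK.

import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class UnityFrameReader {
    private final int[] fbo = new int[1];

    // texId is the texture name Unity passed over; width/height match the RenderTexture.
    public ByteBuffer readFrame(int texId, int width, int height) {
        if (fbo[0] == 0) {
            GLES20.glGenFramebuffers(1, fbo, 0);
        }
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
        // Attach the Unity texture as the color attachment so its pixels can be read back.
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, texId, 0);

        ByteBuffer rgba = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgba);

        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        // Convert RGBA to whatever format the video SDK expects before pushing it.
        return rgba;
    }
}

Many Android video SDKs expect NV21/I420 input, so the RGBA buffer would typically still need a color conversion before being pushed to the SDK.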
Related
In my WebRTC app I am sending the captured video stream to a remote browser. My goal is to augment some AR content with Wikitude (I am using Wikitude 7.2.1 for compatibility reasons) on that same video stream on the Android side and show it on the display (it currently only shows the camera output). I know that I can't just open another camera instance since WebRTC already uses one, so I need to pass the output from the WebRTC VideoCapturer (or alternatively the contents of some surface that output gets rendered to) to the Wikitude SDK; however, I am having difficulties doing so.
I have seen that the input plugin Wikitude needs for an external camera input uses an ImageReader with an OnImageAvailableListener, like this:
mWikitudeCamera2.start(new ImageReader.OnImageAvailableListener() { ... });
and on the WikitudeCamera2 side:
mManager.openCamera(getCamera(), cameraStateCallback, null);
mImageReader = ImageReader.newInstance(mFrameWidth, mFrameHeight, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(onImageAvailableListener, null);
That ImageReader's Surface then gets attached to the actual camera output:
CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(mImageReader.getSurface());
However, since I need to use the provided CameraCapturer from WebRTC, I am not able to do that.
Is there a way to get the camera stream from, say, WebRTC's SurfaceViewRenderer, or to render it to another (maybe fake?) SurfaceView and attach that to an ImageReader? Or some other way to pass the output that gets sent to the client to Wikitude without the use of an ImageReader?
Or generally some other way I could get this to work that I haven't thought of. I would really appreciate some help, since I'm stuck on this part right now.
You're on the right path, but unfortunately, since the Wikitude SDK is consuming the camera API, it is not possible (yet, as of SDK 8.2) to access the same stream via the WebView / WebRTC.
You have to stick to the ImageReader approach for now and find a way to process the frames in a native environment.
Best regards
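For reference, here is a minimal sketch of the ImageReader callback that approach relies on, assuming the YUV_420_888 ImageReader from the question; the plane handling is simplified (row and pixel strides are ignored) and the hand-off to the Wikitude input plugin is only indicated as a comment.

ImageReader.OnImageAvailableListener listener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) {
            return; // no new frame yet
        }
        try {
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer y = planes[0].getBuffer();
            ByteBuffer u = planes[1].getBuffer();
            ByteBuffer v = planes[2].getBuffer();
            // Copy/convert the planes here (e.g. to NV21) and hand the buffer to the
            // Wikitude input plugin for this SDK version.
            byte[] yBytes = new byte[y.remaining()];
            y.get(yBytes);
        } finally {
            image.close(); // release the Image so the reader can deliver further frames
        }
    }
};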
Does the ARCore SDK allow configuring the video source for processing to something other than the embedded cameras? Similar functionality is available in Wikitude and ARToolKit, for instance, where the camera input can be replaced by custom video frames from other sources.
I don't think there is such an option. Remember that ARCore uses not only the camera frames but also the position of your device as one set of input data. If you push video frames as the source of what the app sees, they will not correspond with the device's position, which will probably lead to a situation where your app is not able to map the world.
Remember that ARCore does not work based only on the camera/video; it combines it with other inputs such as the gyroscope and accelerometer.
I am trying to develop a VR video player using the latest Google VR SDK for Android (v1.0.3), but there is no high-level API to build VR playback controls.
The YouTube VR player uses an old version of the GVR toolkit and renders controls (for example, com.google.android.libraries.youtube.common.ui.TouchImageView) in some way.
What is the best way to implement such controls using the latest VR SDK? Do I need to use a custom renderer with OpenGL or the NDK?
I would be very grateful for implementation details.
The GVR SDK does not provide a way to draw something over VrVideoView, so we need to implement VR video ourselves.
The main idea of the solution is to use GvrView with a custom StereoRenderer.
First of all, we need to implement a VR video renderer (using VR shaders and MediaPlayer/ExoPlayer).
Then we need to implement custom controls in the scene using OpenGL ES and the GVR SDK (HeadTracking, Eye, etc.), as in the skeleton below.
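As a rough illustration of these two steps, here is a hedged skeleton against the Google VR SDK for Android 1.x (com.google.vr.sdk.base): a GvrActivity hosting a GvrView with a custom StereoRenderer. The video texture / ExoPlayer wiring and the control geometry are only indicated as comments, and the class name VrVideoPlayerActivity is illustrative.

import android.os.Bundle;
import com.google.vr.sdk.base.Eye;
import com.google.vr.sdk.base.GvrActivity;
import com.google.vr.sdk.base.GvrView;
import com.google.vr.sdk.base.HeadTransform;
import com.google.vr.sdk.base.Viewport;
import javax.microedition.khronos.egl.EGLConfig;

public class VrVideoPlayerActivity extends GvrActivity implements GvrView.StereoRenderer {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        GvrView gvrView = new GvrView(this);
        gvrView.setRenderer(this);
        setContentView(gvrView);
        setGvrView(gvrView);
    }

    @Override
    public void onSurfaceCreated(EGLConfig config) {
        // Compile the shaders, create the external video texture (SurfaceTexture for
        // MediaPlayer/ExoPlayer) and the geometry for the playback-control quads here.
    }

    @Override
    public void onNewFrame(HeadTransform headTransform) {
        // Update the video SurfaceTexture and use the head pose for gaze-based input
        // (e.g. hit-testing the control quads).
    }

    @Override
    public void onDrawEye(Eye eye) {
        // Draw the video sphere/quad first, then the controls, using
        // eye.getEyeView() and eye.getPerspective(zNear, zFar).
    }

    @Override
    public void onFinishFrame(Viewport viewport) { }

    @Override
    public void onSurfaceChanged(int width, int height) { }

    @Override
    public void onRendererShutdown() { }
}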
You need to use OpenGL or another engine such as Unity3D to show the video texture: decode the video in Android and pass the texture to OpenGL (or the engine) to display.
Recently I have been developing an Android app. I use a MediaPlayer to play video; while the video is playing, I want to get the image data of each frame, do some processing, and then render it to the screen. Does anybody have suggestions? Thank you in advance!
I am not sure about your Android version. From Android Jelly Bean 4.2.0 onwards, there is a new component, CpuConsumer (reference: 4.2_r1 sources), which can enable a host program to access an underlying gralloc frame.
Some examples to test this functionality are also available in Android here.
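If the internal CpuConsumer route is not usable from application code, a common app-level alternative (a hedged sketch, not taken from the answer above) is to route the MediaPlayer output into a SurfaceTexture bound to a GL_TEXTURE_EXTERNAL_OES texture, process each frame in a fragment shader, and then draw the result to the screen. The class name VideoFrameProcessor is illustrative, and an existing GL context/renderer is assumed.

import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

public class VideoFrameProcessor implements SurfaceTexture.OnFrameAvailableListener {
    private SurfaceTexture surfaceTexture;
    private int oesTexId;
    private volatile boolean frameAvailable;

    // Call on the GL thread once the context is ready; pass the returned Surface
    // to MediaPlayer.setSurface(...) so the decoder renders into our texture.
    public Surface createInputSurface() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        oesTexId = tex[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTexId);
        surfaceTexture = new SurfaceTexture(oesTexId);
        surfaceTexture.setOnFrameAvailableListener(this);
        return new Surface(surfaceTexture);
    }

    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        frameAvailable = true; // signal the GL thread that a new decoded frame is ready
    }

    // Call on the GL thread every render pass.
    public void drawFrame() {
        if (frameAvailable) {
            surfaceTexture.updateTexImage(); // latch the newest frame into the OES texture
            frameAvailable = false;
        }
        // Draw a full-screen quad sampling the OES texture with a samplerExternalOES
        // fragment shader; do the per-frame processing there, then render to the screen.
    }
}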
I am developing an Android application which receives a video stream from a server. I want to use the QCAR SDK to track frame markers in the video. However, it seems that QCAR can only work with video from the camera device. How can I use QCAR to do AR with a specific video instead of the camera video? Or is there any other SDK that can do this?
QCAR can only use the live camera feed in v1.5. The video goes directly to the tracker without any hooks for developer intervention or redirection from a video source.
This feature has been requested on the wish list.
You may be able to do it with the Metaio UnifeyeMobile SDK. It is more configurable in that way, but it can be quite expensive (unless you are OK with the limitations of the free version).