I am developing an Android application which receives a video stream from a server. I want to use the QCAR SDK to track the Frame Marker in the video. However, it seems that QCAR can only work with video from the camera device. How can I use QCAR to do AR on a specific video instead of the camera video? Or is there any other SDK that can do this?
QCAR can only use the live camera feed in v1.5. The video goes directly to the tracker, with no hooks for developer intervention or for substituting a different video source.
This feature has been requested on the wish list.
You may be able to do it with the Metaio UnifeyeMobile SDK. It is more configurable in that way - but it can be quite expensive (unless you are ok with the limitations of the free version).
In my WebRTC app I am sending the captured VideoStream to a remote browser. My goal now is to augment some AR content with Wikitude (I am using Wikitude 7.2.1 for compatibility reasons) on that same video stream on the Android side and show it on the display (it currently only shows the camera output). I know that I can't just open another camera instance since WebRTC already uses one, so I need to pass the output from the WebRTC VideoCapturer (or alternatively the contents of some surface that output gets rendered to?) to the Wikitude SDK; however, I am having difficulty doing so.
I have seen that the input plugin Wikitude needs for an external camera input uses an ImageReader with an OnImageAvailableListener, like this:
mWikitudeCamera2.start(new ImageReader.OnImageAvailableListener() { ... });
and on the WikitudeCamera2 side:
// open the camera and create a YUV ImageReader whose surface will receive the frames
mManager.openCamera(getCamera(), cameraStateCallback, null);
mImageReader = ImageReader.newInstance(mFrameWidth, mFrameHeight, ImageFormat.YUV_420_888, 2);
mImageReader.setOnImageAvailableListener(onImageAvailableListener, null);
That ImageReader's surface then gets attached to the actual camera output:
CaptureRequest.Builder builder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
builder.addTarget(mImageReader.getSurface());
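For context, the listener body in that kind of setup essentially copies the YUV planes out of each Image and forwards them to the SDK's frame input. A minimal sketch, assuming the same YUV_420_888 ImageReader as above; the notifyNewFrame hand-off is a hypothetical placeholder for whatever frame callback the Wikitude input plugin exposes in your version:

import android.media.Image;
import android.media.ImageReader;
import java.nio.ByteBuffer;

private final ImageReader.OnImageAvailableListener onImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // Grab the newest frame; acquireLatestImage() drops stale ones for us.
        Image image = reader.acquireLatestImage();
        if (image == null) {
            return;
        }
        try {
            // YUV_420_888 delivers three planes: Y, U and V.
            Image.Plane[] planes = image.getPlanes();
            ByteBuffer y = planes[0].getBuffer();
            ByteBuffer u = planes[1].getBuffer();
            ByteBuffer v = planes[2].getBuffer();
            // Hypothetical hand-off: forward the planes to the input plugin's frame callback.
            notifyNewFrame(y, u, v, image.getWidth(), image.getHeight());
        } finally {
            // Always close the Image, otherwise the reader's queue fills up.
            image.close();
        }
    }
};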
However, since I need to use the CameraCapturer provided by WebRTC, I am not able to do that.
Is there a way to get the camera stream from, say, WebRTC's SurfaceViewRenderer, or to render it to another (maybe fake?) SurfaceView and attach that to an ImageReader? Or some other way to pass the output that gets sent to the client to Wikitude without the use of an ImageReader?
Or, more generally, some other way I could get this to work that I haven't thought of. I would really appreciate some help, since I'm stuck on this part right now.
You're on the right path but, unfortunately, since the Wikitude SDK is consuming the camera API, it is not possible (yet, as of SDK 8.2) to access the same stream via the WebView / WebRTC.
You have to stick to the ImageReader approach for now and find a way to process the frames in a native environment.
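If opening a second camera for the ImageReader is not an option, one way to at least get hold of the frames on the Java side is to attach an extra VideoSink to the local WebRTC VideoTrack, which then receives every captured frame in parallel with the peer connection. A rough sketch under that assumption; localVideoTrack is your existing track and processI420Frame is a hypothetical stand-in for the hand-off to your AR/native processing:

import org.webrtc.VideoFrame;
import org.webrtc.VideoSink;
import org.webrtc.VideoTrack;

// Attach a second consumer to the track that is already being sent to the peer.
VideoSink arSink = new VideoSink() {
    @Override
    public void onFrame(VideoFrame frame) {
        // Convert whatever buffer the capturer produced into plain I420 planes.
        VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();
        try {
            // Hypothetical hand-off to your own frame-processing code.
            processI420Frame(i420.getDataY(), i420.getDataU(), i420.getDataV(),
                    i420.getWidth(), i420.getHeight(), frame.getRotation());
        } finally {
            i420.release();
        }
    }
};
localVideoTrack.addSink(arSink);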
Best regards
Does the ARCore SDK allow configuring a video source other than the embedded cameras as the input for processing? Similar functionality is available in Wikitude and ARToolkit, for instance, where the camera input can be replaced by custom video frames from other sources.
I don't think there is such an option. Remember that ARCore uses not only the frames from the camera but also the position of your device as one combined set of input data. If you push video frames from another source as what the app "sees", they will not correspond with the device's motion, which will most likely leave the app unable to map the world.
Remember that ARCore does not work from the camera/video alone; it combines it with the other sensors, such as the gyroscope and accelerometer.
I'm building an app using the CWAC-Cam2 library. I need to test different application behaviour depending on which objects exist in the captured pictures. Is it possible to feed the library video from a video file instead of the actual camera output, to simulate real environment conditions?
Thank you.
Sincerely,
Sergey
I want to detect a target in a saved video feed or a video from a URL. For this, I have been looking at AR SDKs but have found no SDK that supports it. All SDKs support target detection on the live camera only, not on a saved video or a URL-streamed video.
Is there any way to do this?
Which SDK supports this feature?
I'm not aware of any SDK doing this natively, but instead of using camera frames you need to write a piece of code that reads the stored video (or connects to a given URL) and uses those frames. In other words, all you need to do is replace the input source. Therefore I assume that any SDK that allows you to provide your own frames would suit your needs.
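As a rough illustration of that idea, on Android you could pull frames out of a local file or HTTP URL with MediaMetadataRetriever and hand each bitmap to whichever frame-input API your chosen SDK offers; the URL and the trackFrame call below are hypothetical stand-ins:

import android.graphics.Bitmap;
import android.media.MediaMetadataRetriever;
import java.util.HashMap;

MediaMetadataRetriever retriever = new MediaMetadataRetriever();
// Works with a local file path or an HTTP(S) URL (hypothetical URL shown here).
retriever.setDataSource("https://example.com/video.mp4", new HashMap<String, String>());

long durationMs = Long.parseLong(
        retriever.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION));

// Sample roughly 10 frames per second and feed each one to the tracker.
for (long t = 0; t < durationMs; t += 100) {
    Bitmap frame = retriever.getFrameAtTime(t * 1000,  // getFrameAtTime expects microseconds
            MediaMetadataRetriever.OPTION_CLOSEST);
    if (frame != null) {
        trackFrame(frame);  // hypothetical: pass the frame to the SDK's image input
    }
}
retriever.release();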
I am developing an Android application in which a specific video is played when the poster of a specific movie is shown in front of the camera. I have found many AR tutorials that just show a 3D object when a pattern is detected; I need some advice on making an application that can play a video in an AR application using the Android camera and the QCAR SDK.
I don't know QCAR, but of course you can put a SurfaceView on top of the existing SurfaceView of your camera preview and play a video in it.
What I would do is implement your poster-recognition logic first; this will be tricky enough. Then, if it works reliably, I would add another SurfaceView in which the video is played.
You will find that the camera preview surface actually has to be on top of the video surface.
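A minimal sketch of that layering idea, assuming the poster detection already exists and a second SurfaceView has been added to the layout for MediaPlayer to render into; the view id R.id.video_surface and the file path are made up, and the relative z-order of the two surfaces can be tuned with setZOrderMediaOverlay():

import android.media.MediaPlayer;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

SurfaceView videoSurface = (SurfaceView) findViewById(R.id.video_surface);  // hypothetical id
videoSurface.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        MediaPlayer player = new MediaPlayer();
        try {
            player.setDataSource("/sdcard/trailer.mp4");  // hypothetical path
            player.setDisplay(holder);                    // render the video into this surface
            player.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                @Override
                public void onPrepared(MediaPlayer mp) {
                    mp.start();
                }
            });
            player.prepareAsync();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) { }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) { }
});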
I assume you are trying to play video on a GL surface; this is possible using a SurfaceTexture as a video texture, but only ICS (4.0) and above devices support it. Vuforia has a very good sample of this here; you can handle what to do for lower devices, such as playing the video fullscreen, etc.
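A bare-bones sketch of that SurfaceTexture route (API 14+), assuming you already have a GL context and an OES texture id; the file path is made up, and drawing the textured quad at the tracked marker's pose is left to your renderer:

import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.view.Surface;
import java.io.IOException;

// textureId must be a texture created with GLES11Ext.GL_TEXTURE_EXTERNAL_OES.
SurfaceTexture videoTexture = new SurfaceTexture(textureId);
Surface surface = new Surface(videoTexture);

MediaPlayer player = new MediaPlayer();
try {
    player.setDataSource("/sdcard/movie.mp4");  // hypothetical path
    player.setSurface(surface);                 // decode straight into the GL texture
    surface.release();                          // MediaPlayer keeps its own reference
    player.prepare();
    player.start();
} catch (IOException e) {
    e.printStackTrace();
}

// Then, once per rendered frame on the GL thread:
videoTexture.updateTexImage();                  // latch the latest decoded video frame
// ...draw a quad sampling the external texture wherever the target was detected.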