I'm working on a robot that is controlled via a VR headset and sends a real-time video feed back to the headset.
I've chosen to go the native way on Android and now have everything I need to receive and decode the video stream (using GStreamer), and also to send the control data to the robot via UDP.
The last thing to do (and the one I struggle with most, as I have no prior experience with computer graphics) is to draw the image (the decoded camera feed) to the screen. Over the last few days I've been reading about how Vulkan and OpenGL work, and I've also gone through the examples provided in the Oculus Mobile SDK (mainly VRCubeWorld_SurfaceView), but that's way too complex for what I need. I tried to simplify it so I could just draw two images, but then I thought:
Do I even need any of that? And this question might sound stupid, but I really don't have any prior experience doing this.
I mean, the example uses OpenGL to basically compute all the layers of the 3D scene, apply colors, and then fuse them into a final frame that is passed to the VrApi via the function:
vrapi_SubmitFrame2(appState.Ovr, &frameDesc);
Can I just take those images, and somehow force them into the frameDesc structure to skip the whole OpenGL pipeline? If so, can anyone knowledgeable enough point me to a working solution?
I don't need any kind of panning over the images, just to render them. Later I'll be using head sensor data, but it won't actually do anything with the "scene".
I am fairly new to Android and especially to the various camera systems on this platform. I'm building an app where I need to integrate ARCore only to track the camera pose (among other things like objects in the scene, planes, etc.). I don't want to augment anything in the "real world", so I am not looking to preview the frames coming from the camera. I've looked through all of the examples in the arcore-sdk and the sample code in Google's documentation. None of them cover my use case, where I want to fetch the camera's pose without previewing the camera images on a SurfaceView or similar. I also don't want to "fake" it by creating a view and hiding it. Has anyone done something like this, or has any idea how it could be achieved? Does ARCore even support this?
UPDATE: I found this https://github.com/google-ar/arcore-android-sdk/issues/259 where they mention that it's possible with just an OpenGL context. But I have no clue how to get started. Any samples or pointers would be appreciated!
You can run an ArSession just for tracking; an ArSession doesn't depend on a View.
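To make the "just an OpenGL context" part concrete, here's a rough sketch (not a definitive implementation) of pose-only tracking with the ARCore Java API. It assumes you already have a current GL context on the calling thread (e.g. from a GLSurfaceView renderer or an offscreen EGL surface) and that camera permission has been granted; the PoseTracker class name is just for illustration, and exception handling is omitted.

import android.content.Context;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Pose-only ARCore tracking; nothing is ever rendered.
public class PoseTracker {
    private Session session;

    // Call once on the GL thread, after the GL context is current and camera permission is granted.
    public void start(Context context) throws Exception {
        session = new Session(context);

        // ARCore still needs an external OES texture to stream camera images into,
        // even if that texture is never drawn anywhere.
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0]);
        session.setCameraTextureName(textures[0]);

        session.resume();
    }

    // Call once per frame on the GL thread; returns the camera pose, or null if not tracking yet.
    public Pose currentPose() throws Exception {
        Frame frame = session.update();          // blocks until a new camera frame is available
        Camera camera = frame.getCamera();
        if (camera.getTrackingState() != TrackingState.TRACKING) {
            return null;
        }
        return camera.getPose();                 // camera pose in world coordinates
    }
}

The only GL work here is creating the texture ARCore writes camera frames into; you never have to draw it or attach it to any view.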
I want to develop a Pictionary-style app. I've figured out the drawing part on the device (using Canvas, Paint and related classes), and now I need to update the drawings in real time on all connected devices.
The approach I have in mind is to take screenshots at very short intervals and upload them to the server (Firebase), with the app constantly checking for server-side updates. I know this is a horrible way to keep things in relative synchronization. Is there any other way I can do this?
Maybe like a video stream or something.
If you are drawing using paths, then you could send a list of paths to the other devices and redraw them there.
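As a rough illustration (the Stroke class and the transport are placeholders; you'd push the point lists through your own Firebase/socket channel instead of screenshots), each stroke can be reduced to the touch points it was drawn from and rebuilt into a Path on the receiving device:

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import java.util.ArrayList;
import java.util.List;

// A stroke is just the touch points it was drawn from, plus its color and width.
// This is what you serialize (e.g. to JSON) and send to the other devices.
class Stroke {
    int color;
    float width;
    List<float[]> points = new ArrayList<>();   // each entry is {x, y}
}

class StrokeRenderer {
    // On the receiving device: rebuild a Path from the points and draw it.
    static void draw(Canvas canvas, Stroke stroke) {
        Path path = new Path();
        for (int i = 0; i < stroke.points.size(); i++) {
            float[] p = stroke.points.get(i);
            if (i == 0) {
                path.moveTo(p[0], p[1]);
            } else {
                path.lineTo(p[0], p[1]);
            }
        }
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setStyle(Paint.Style.STROKE);
        paint.setColor(stroke.color);
        paint.setStrokeWidth(stroke.width);
        canvas.drawPath(path, paint);
    }
}

A few dozen points per stroke is far less data than a screenshot, and the receivers redraw at native quality.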
I don't think there is a fast way to convert a series of bitmaps into a video (by bitmaps I mean images generated using the Android Canvas).
If you do your drawing using OpenGL, then you could convert the output of an OpenGL surface into a video using a video encoder.
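On Android, the usual building block for that is MediaCodec with an input Surface: configure a hardware H.264 encoder, get a Surface from it, and render your OpenGL output into that Surface through an EGL window surface. A rough sketch of just the encoder setup (the EGL plumbing and the muxing into an MP4 with MediaMuxer are left out, and the GlVideoEncoder class name is only illustrative):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

class GlVideoEncoder {
    MediaCodec encoder;
    Surface inputSurface;   // render your OpenGL output into this Surface

    void start(int width, int height) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
        // The encoder takes its frames from a Surface, not from byte buffers.
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

        encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        inputSurface = encoder.createInputSurface();  // wrap this in an EGL window surface and draw into it
        encoder.start();
        // Each frame: draw with GL into inputSurface, swap buffers, then drain the encoder's
        // output buffers (dequeueOutputBuffer) into a MediaMuxer or straight onto the network.
    }
}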
I need to play a video on an OpenGL surface. I think I will need to render each frame of the video to a texture in a loop and then render it via OpenGL. Is this possible under iOS and/or Android?
It is possible on iOS, but it's pretty tricky business to get it to run fast enough to keep up with a video stream.
There is an old demo app from Apple called ChromaKey that takes a CVPixelBuffer from Core Video and maps it directly into an OpenGL texture without having to copy the data. That makes performance MUCH better, and is the approach I would suggest.
I don't know if there is more current sample code available that shows how it's done. That code is back from the days of iOS 6, and was written in Objective-C. (I would suggest doing new iOS development in Swift, since that's where Apple is putting its emphasis.)
Regards, community,
I just want to build an app similar to this one,
with my own content of course.
1. How do I capture 360-degree video (cameras, format, ingest, audio)?
2. Implementation:
2.1 Which Cardboard SDK works best for my purposes (Android or Unity)?
2.2 Do you know of any blogs, websites, tutorials, or samples I can lean on?
Thank you
MovieTextures are a great way to do this in Unity, but unfortunately MovieTextures are not implemented on Android (maybe this will change in Unity 5). See the docs here:
For a simple wrap-a-texture-on-a-sphere app, the Cardboard Java SDK should work. But if you would rather use Unity due to other aspects of the app, the next best way to do this would be to allocate a RenderTexture and then grab the GL id and pass it to a native plugin that you would write.
This native code would be decoding the video stream, and for each frame it would fill the texture. Then Unity can handle the rest of the rendering, as detailed by the previous answer.
First of all, you need content, and to record stereo 360 video, you'll need a rig of at least 12 cameras. Such rigs can be purchased for GoPro cams. That's gonna be expensive.
The recently released Unity 5 is a great option and I strongly suggest using it. The most basic way of doing 360 stereo videos in Unity is to create two spheres with MovieTextures showing your 360 video. Then you turn them "inside out", so that they display their back faces instead of the front faces. This can be done with a simple shader, turning on front face culling and removing the mirror effect. Then you place your cameras inside the spheres. If you are using the google cardboard sdk camera rig, put the spheres on different culling layers and make the cameras only see the appropriate spheres. Remember to put the spheres in proper positions regarding the cameras.
There may be other ways to do this, leading to better results, but they won't be as simple. You can also look for some paid scripts/plugins/assets to do 360 video in VR.
I need to provide some 3D rotation of images, like a credit card (please check the video in the link).
I want to know whether this is feasible on Android, and if so, how I can do it.
The card must have some thickness.
It definitely is feasible, but you will have to do some studying :-).
Start here: http://developer.android.com/guide/topics/graphics/opengl.html
But you may also achieve your goal by just using the video you have posted in your link.
Some context would be useful: will this be a loading screen? Something in a video? Etc.?
For instance, if you are trying to make a website-style layout and have the card up at the top always spinning, I would advise against that on any mobile device, as it's a waste of performance.
If instead you are using it as a loading screen, then again I would advise against it, as you are going to spend a lot of time initializing OpenGL, loading the texture and mesh for the card (as well as any lighting you need), and then setting up animators to do the spinning, etc.
As previously stated, OpenGL would be a way of doing this; however, it is not a simple few lines of code. This would be quite the undertaking for someone unfamiliar with OpenGL and 3D modeling to accomplish in a short time frame.
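To give a sense of the scale, here's a rough sketch of just the spinning part with a plain GLSurfaceView.Renderer; the card itself (a thin textured box) and its shader program are left out and stood in for by a hypothetical drawCard() method:

import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class SpinningCardRenderer implements GLSurfaceView.Renderer {
    private final float[] projection = new float[16];
    private final float[] view = new float[16];
    private final float[] model = new float[16];
    private final float[] viewModel = new float[16];
    private final float[] mvp = new float[16];
    private float angleDegrees = 0f;

    @Override public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        GLES20.glEnable(GLES20.GL_DEPTH_TEST);
        // Compile shaders and upload the card mesh + texture here.
    }

    @Override public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        float aspect = (float) width / height;
        Matrix.frustumM(projection, 0, -aspect, aspect, -1f, 1f, 2f, 10f);
        Matrix.setLookAtM(view, 0, 0f, 0f, 4f, 0f, 0f, 0f, 0f, 1f, 0f);
    }

    @Override public void onDrawFrame(GL10 unused) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        angleDegrees = (angleDegrees + 1f) % 360f;              // ~60 degrees/second at 60 fps
        Matrix.setRotateM(model, 0, angleDegrees, 0f, 1f, 0f);  // spin around the Y axis
        Matrix.multiplyMM(viewModel, 0, view, 0, model, 0);
        Matrix.multiplyMM(mvp, 0, projection, 0, viewModel, 0);
        drawCard(mvp);  // hypothetical: bind the program, set the MVP uniform, draw the thin box
    }

    private void drawCard(float[] mvpMatrix) {
        // Shader binding, texture binding and the actual draw call are omitted here;
        // this is the part that is "quite the undertaking".
    }
}

Hooking it up is just glSurfaceView.setEGLContextClientVersion(2) followed by setRenderer(new SpinningCardRenderer()); the real work is everything hidden inside drawCard().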
Question: do you require a native Android app, or would it be all right to use Flash Player? You can find tons of interactive 3D geometry demos on http://wonderfl.net - I forked one that had a plane, switched it to a cube, and you can download the results -
3d box on wonderfl
Not OpenGL - the example I found was Papervision3D (which has been out of date for a couple of years) - but software rendering is fine for 12 triangles. You, of course, would have to import your card faces as texture images if you want to make it look like a credit card.