OpenGL Model Overlay of Camera Preview - Android

I am trying to display a .md2 model on top of the camera preview on my Android phone. I don't need to use the accelerometers or anything. If anyone could even just point me in the right direction as to how to set up an OpenGL overlay, that would be fantastic. If you are able to provide code that shows how to enable this, that would be even better! It would be greatly appreciated.

I'm not able to provide code until later this week, but you might want to check out a library called min3d, because I believe it already has a parser written for .md2 files. Then, I believe that if you use a GLSurfaceView, the background can be set to be transparent, and you can put a view of the camera behind it. Are you trying to get some kind of augmented reality effect? There are Android-specific libraries for that too, but they're pretty laggy (at least on my Motorola Droid).
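A minimal sketch of that layering, in case it helps. ModelRenderer (your .md2 renderer) and CameraPreviewView (a SurfaceView wrapping Camera.setPreviewDisplay) are hypothetical names of your own classes; the real work is requesting an EGL config with an alpha channel, a translucent surface format, and media-overlay Z order:

    import android.graphics.PixelFormat;
    import android.opengl.GLSurfaceView;
    import android.widget.FrameLayout;

    // In your Activity's onCreate():
    GLSurfaceView glView = new GLSurfaceView(this);
    glView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);         // RGBA8888: request an alpha channel
    glView.setRenderer(new ModelRenderer());               // hypothetical .md2 renderer
    glView.getHolder().setFormat(PixelFormat.TRANSLUCENT); // translucent surface
    glView.setZOrderMediaOverlay(true);                    // draw above the camera surface

    FrameLayout root = new FrameLayout(this);
    root.addView(new CameraPreviewView(this)); // hypothetical SurfaceView showing the camera
    root.addView(glView);                      // GL view layered on top
    setContentView(root);

In the renderer, clear with glClearColor(0f, 0f, 0f, 0f) so everything you don't draw stays transparent and the camera preview shows through.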

Related

Is it possible to run ARCamera in the background without previewing it on Android?

I am fairly new to Android and especially to the various camera systems on this platform. I'm building an app where I need to integrate ARCore only to track the camera pose (among other things like objects in the scene, planes, etc.). I don't want to augment anything in the "real world", so I am not looking to preview the frames being fed to the camera. I've looked through all of the examples in the arcore-sdk and the sample code in Google's documentation. None of them covers my use case, where I want to fetch the camera's pose without previewing the camera images on a SurfaceView or the like. I also don't want to 'fake' it by creating a view and hiding it. I would like to know if anyone has experience with such a thing, has any ideas on how to achieve it, or knows whether it can be achieved at all. Does ARCore even support this?
UPDATE: I found this https://github.com/google-ar/arcore-android-sdk/issues/259 where they mention that it's possible with just an OpenGL context. But I have no clue how to get started. Any samples or pointers would be appreciated!
You can run an ArSession for tracking; an ArSession doesn't depend on a View.
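A hedged sketch of what that can look like, following the approach from the linked issue (an offscreen OpenGL context, nothing rendered). The eglSetup() call is a placeholder for the usual EGL boilerplate, "running" is your own loop flag, and exception handling is omitted; ARCore still needs a GL texture name to receive camera frames, so we hand it a throwaway OES texture that is never drawn:

    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import com.google.ar.core.Camera;
    import com.google.ar.core.Frame;
    import com.google.ar.core.Pose;
    import com.google.ar.core.Session;
    import com.google.ar.core.TrackingState;

    Session session = new Session(context);  // throws UnavailableException subclasses

    eglSetup();  // placeholder: make an offscreen EGL context current on this thread

    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
    session.setCameraTextureName(tex[0]);    // ARCore writes frames here; we never draw it

    session.resume();
    while (running) {
        Frame frame = session.update();      // blocks until the next camera frame
        Camera camera = frame.getCamera();
        if (camera.getTrackingState() == TrackingState.TRACKING) {
            Pose pose = camera.getPose();    // camera pose in world space
            // read it via pose.tx(), pose.ty(), pose.tz(), pose.getRotationQuaternion(q, 0)
        }
    }
    session.pause();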

Real time mark recognition on Android

I'm building an Android app that has to identify, in realtime, a mark/pattern which will be on the four corners of a visiting card. I'm using a preview stream of the rear camera of the phone as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots will be shown on screen by a QR reader at the corner points of the QR code preview.
I'm aware about how to get the frames from camera using native Android SDK, but I have no clue about the processing which needs to be done and optimization for real time detection. I tried messing around with OpenCV and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on an algorithm to use to find the pattern.
The image below shows my pattern along with some details (ratios) about it (the same as the one used in QR codes, but I'm placing it at 4 corners instead of 3).
I think one approach is to find black and white pixels in the ratio mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code this on Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing which needs to be done, and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet is delayed (Bluetooth); the Google Glass preview is much faster than the tablet's, but there is still some lag.
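If you do go the raw-pixel route instead, the classic trick for QR-style finder patterns is to scan each row of the luma plane for five consecutive runs of dark/light pixels in the 1:1:3:1:1 ratio (which your pattern shares with QR codes). A hedged Java sketch; the fixed binarization threshold is a simplifying assumption:

    import java.util.ArrayList;
    import java.util.List;

    // Test five consecutive run lengths (black, white, black, white, black)
    // against the finder-pattern ratio 1:1:3:1:1, with some tolerance.
    static boolean matchesFinderRatio(int[] runs) {
        int total = 0;
        for (int r : runs) {
            if (r == 0) return false;
            total += r;
        }
        if (total < 7) return false;
        int module = total / 7;   // expected width of one "module"
        int tol = module / 2;
        return Math.abs(runs[0] - module) <= tol
            && Math.abs(runs[1] - module) <= tol
            && Math.abs(runs[2] - 3 * module) <= 3 * tol
            && Math.abs(runs[3] - module) <= tol
            && Math.abs(runs[4] - module) <= tol;
    }

    // Collect alternating dark/light runs along one row of luma values (the
    // first width*height bytes of an NV21 preview frame are the luma plane),
    // then test every window of five runs that starts on a dark run. Returns
    // the x coordinates of candidate pattern centers.
    static List<Integer> findCentersInRow(byte[] luma, int rowStart, int width, int threshold) {
        List<int[]> runs = new ArrayList<>();  // {startX, length}
        boolean firstDark = (luma[rowStart] & 0xFF) < threshold;
        boolean dark = firstDark;
        int runStart = 0;
        for (int x = 1; x <= width; x++) {
            boolean d = x < width && (luma[rowStart + x] & 0xFF) < threshold;
            if (x < width && d == dark) continue;
            runs.add(new int[] { runStart, x - runStart });
            runStart = x;
            dark = d;
        }

        List<Integer> centers = new ArrayList<>();
        int firstDarkIndex = firstDark ? 0 : 1;  // dark runs sit at every other index
        for (int i = firstDarkIndex; i + 5 <= runs.size(); i += 2) {
            int[] lens = new int[5];
            int total = 0;
            for (int j = 0; j < 5; j++) {
                lens[j] = runs.get(i + j)[1];
                total += lens[j];
            }
            if (matchesFinderRatio(lens)) {
                centers.add(runs.get(i)[0] + total / 2);
            }
        }
        return centers;
    }

To cut false positives, confirm each candidate by running the same ratio check vertically through its center before drawing the overlay circle; this is essentially how zxing's finder-pattern locator works.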

Android Camera API Advanced Features

I am about to start working on an Android custom camera app. I just want to know whether there is any way to add the following features to my app:
Beauty Level
Red Eye Removal
Acne Removal
I just want to know whether this is possible; if so, can someone suggest or give me any idea of how I can code it into my app?
I am already familiar with the Android Camera API functions and have worked on several simple custom camera apps.
Thanks in advance.
To begin, you can use FaceDetector to detect faces in the picture. For example, you can remove red eyes by searching for red pixels in the picture and trying to decrease their red level. You can also use OpenCV for detecting eyes; I found a sample for eye detection here.
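A hedged sketch of that idea. android.media.FaceDetector only accepts RGB_565 bitmaps (with an even width), so detection runs on a converted copy while the edit happens on a mutable ARGB copy; the eye-box size and the "much redder than green/blue" test are naive assumptions for illustration, not a production red-eye algorithm:

    import android.graphics.Bitmap;
    import android.graphics.Color;
    import android.graphics.PointF;
    import android.media.FaceDetector;

    static Bitmap removeRedEyes(Bitmap source) {
        Bitmap detectCopy = source.copy(Bitmap.Config.RGB_565, false); // detector needs RGB_565
        Bitmap result = source.copy(Bitmap.Config.ARGB_8888, true);    // mutable copy to edit

        FaceDetector detector =
                new FaceDetector(detectCopy.getWidth(), detectCopy.getHeight(), 5);
        FaceDetector.Face[] faces = new FaceDetector.Face[5];
        int found = detector.findFaces(detectCopy, faces);

        for (int i = 0; i < found; i++) {
            PointF mid = new PointF();
            faces[i].getMidPoint(mid);
            float d = faces[i].eyesDistance();
            // assumption: eyes sit roughly d/2 to either side of the midpoint
            dampRed(result, (int) (mid.x - d / 2), (int) mid.y, (int) (d / 4));
            dampRed(result, (int) (mid.x + d / 2), (int) mid.y, (int) (d / 4));
        }
        return result;
    }

    // Naive red damping: inside a box of radius r, halve the red channel of
    // any pixel that is much redder than its green and blue channels.
    static void dampRed(Bitmap bmp, int cx, int cy, int r) {
        for (int y = Math.max(0, cy - r); y < Math.min(bmp.getHeight(), cy + r); y++) {
            for (int x = Math.max(0, cx - r); x < Math.min(bmp.getWidth(), cx + r); x++) {
                int c = bmp.getPixel(x, y);
                int red = Color.red(c), g = Color.green(c), b = Color.blue(c);
                if (red > 2 * Math.max(g, b)) {
                    bmp.setPixel(x, y, Color.argb(Color.alpha(c), red / 2, g, b));
                }
            }
        }
    }

Beauty and acne smoothing are a different beast: they usually involve skin detection plus selective smoothing (for example, a bilateral filter over skin-toned regions), which is much easier to build on top of OpenCV than from raw pixels.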

First steps in creating a chroma key effect using the Android camera

I'd like to create a chroma key effect using the Android camera. I don't need a step-by-step, but I'd like to know the best way to hijack the Android camera and apply the filters. I've checked out the API and haven't found anything definitive on how to manipulate the data coming from the camera. At first I looked into using a SurfaceTexture, but I'm not fully aware of how that helps or how to even use it. Then I checked out using a GLSurfaceView, which may be the right direction, but I'm not really sure.
Also, to add to my question: how would I handle both previewing and saving the image? Would I process the image at minimum twice - once while previewing and once while saving? I think that's probably the best solution.
Lastly, would it make sense to create a C/C++ wrapper to handle the processing, to optimize speed?
Any help at all would be greatly appreciated. A link to some examples would also be greatly appreciated.
Thanks.
Your only real option is to use OpenGL ES with a fragment shader (it will require at least OpenGL ES 2.0) and do the chroma key effect on the GPU. The shader itself will be quite easy (google it).
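For illustration, a sketch of such a fragment shader, kept as a Java string ready for glShaderSource. The uniform names (uTexture, uKeyColor, uThreshold) are placeholders of mine, and it assumes the preview frame is uploaded as a plain RGBA texture, matching the glTexImage2D approach described below:

    // GLES 2.0 fragment shader: pixels close to the key color become transparent.
    static final String CHROMA_KEY_FRAGMENT_SHADER =
            "precision mediump float;\n"
          + "uniform sampler2D uTexture;\n"  // camera frame uploaded as RGBA
          + "uniform vec3 uKeyColor;\n"      // e.g. vec3(0.0, 1.0, 0.0) for a green screen
          + "uniform float uThreshold;\n"    // color distance below which pixels are keyed out
          + "varying vec2 vTexCoord;\n"
          + "void main() {\n"
          + "    vec4 color = texture2D(uTexture, vTexCoord);\n"
          + "    float dist = distance(color.rgb, uKeyColor);\n"
          + "    gl_FragColor = vec4(color.rgb, step(uThreshold, dist));\n"
          + "}\n";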
But to do that, you need to display the camera preview with a callback. You will have to implement Camera.PreviewCallback, create a buffer for the image data, and use the setPreviewCallbackWithBuffer method. You can get the basic idea from my answer to a similar question. Note that there is a significant performance problem with this kind of custom camera preview, but it might work on hardware that supports ES 2.0.
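A sketch of that buffered callback setup, using the (now legacy) android.hardware.Camera API this answer is about; renderer.pushFrame is a hypothetical hand-off of your own:

    import android.hardware.Camera;

    Camera.Size size = camera.getParameters().getPreviewSize();
    // NV21 uses 12 bits per pixel, so one frame is width * height * 3 / 2 bytes
    byte[] buffer = new byte[size.width * size.height * 3 / 2];
    camera.addCallbackBuffer(buffer);
    camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            renderer.pushFrame(data);    // hypothetical: hand the NV21 frame to the GL renderer
            cam.addCallbackBuffer(data); // recycle the buffer for the next frame
        }
    });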
To display the preview with OpenGL, you will need to extend GLSurfaceView and also implement GLSurfaceView.Renderer. Then you will upload the camera preview frame as a texture with glTexImage2D, map it onto a simple rectangle, and the rest will be handled by the shaders. See how to use shaders in ES here, or if you have no experience with shaders, this tutorial might be a good start.
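A skeleton of that renderer, assuming the NV21 frame has already been converted to RGBA somewhere (the YUV-to-RGBA conversion, thread synchronization, and the quad/shader setup are elided):

    import java.nio.ByteBuffer;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;

    class ChromaKeyRenderer implements GLSurfaceView.Renderer {
        private int textureId;
        private int width, height;  // preview frame size, set from the camera thread
        private ByteBuffer rgba;    // RGBA pixels converted from the latest NV21 frame

        @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            textureId = tex[0];
            // compile CHROMA_KEY_FRAGMENT_SHADER plus a pass-through vertex shader here
        }

        @Override public void onSurfaceChanged(GL10 gl, int w, int h) {
            GLES20.glViewport(0, 0, w, h);
        }

        @Override public void onDrawFrame(GL10 gl) {
            if (rgba == null) return;  // no frame delivered yet
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
                    width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, rgba);
            // bind the shader program, set uKeyColor/uThreshold, draw a fullscreen quad
        }
    }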
To the other question: you could save the current image from the preview, but the preview has a lower resolution than a captured picture, so you will probably want to take a picture and then process it separately (you could use the same shader for it).
As for C++, it's a lot of additional effort with a questionable payoff, but it can improve performance if done right. Try to check this article; it's on a similar topic and describes how to use the NDK to process the camera preview and display it in OpenGL. But if you were thinking about doing the chroma key effect itself in C++, it would be significantly slower than shaders.
You can check this library: https://github.com/cyberagent/android-gpuimage.
It provides a framework to do image processing on device's GPU using GL shaders.
There is also a sample showing how to use the library with a camera.
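A hedged example of wiring the library up, assuming your version still ships the chroma-key filter under the name GPUImageChromaKeyBlendFilter with a setColorToReplace method (check the sources for your release; package paths have moved between versions):

    import jp.co.cyberagent.android.gpuimage.GPUImage;
    import jp.co.cyberagent.android.gpuimage.GPUImageChromaKeyBlendFilter;

    GPUImage gpuImage = new GPUImage(context);
    gpuImage.setGLSurfaceView(glSurfaceView);    // render into an existing GLSurfaceView

    GPUImageChromaKeyBlendFilter filter = new GPUImageChromaKeyBlendFilter();
    filter.setColorToReplace(0.0f, 1.0f, 0.0f);  // key out green
    gpuImage.setFilter(filter);
    gpuImage.setImage(bitmap);                   // or feed camera frames as in the camera sample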
There is a Chroma-Key-Project on Google Code: http://code.google.com/p/chroma-key-project/ It includes a way to upload pictures that are taken using chroma key:
After an exhaustive search online, I have failed to find any open source projects working with chroma-keying for Android devices. The aim of this project is to provide a useful chroma-key library that will make it easy to implement applications and games that can take pictures in front of a green or blue screen, and apply the pictures on a chosen background. Furthermore, the application will also allow the user to upload the picture using Intent.

Vuforia & Unity 1.5 not rendering object on the scene on Android

I am very frustrated with this problem, and the Unity3D community isn't being very helpful because no one there is answering my question. I have done a ton of searching to find what the problem could be, but I didn't succeed. I installed Qualcomm Vuforia 1.5 and Unity3D 1.5.0.f and used the Unity extension. I imported their demo app called vuforia-imagetargets-android-1-5-10.unitypackage, put their wood chips image target on the scene along with their AR camera, and added a box object on top of the image target. Then I built it and sent it to my Samsung Galaxy tablet. However, when I open the app on the tablet and point the tablet at the image target, nothing shows up - the box isn't there, as if I hadn't added any objects to the scene. I just see what the device camera sees.
Has anybody experienced this before? Do you have any ideas what could be wrong? No one online seems to be complaining about it.
Thank you!
Make sure you have your ImageTarget dataset activated and loaded for your ARCamera (in the inspector, under Data Set Load Behaviour) and that the checkbox next to the Camera subheading in the inspector is ticked.
Also ensure that the cube (or other 3D object) is a child of the ImageTarget in the hierarchy, and that your ImageTarget object has its "Image Target Behaviour" set to the intended image.
You may also have to point your ARCamera in such a way that your scene is not visible to it.
Okay, I got the solution. I asked on Qualcomm's forum, and one gentleman was nice enough to explain that I had missed setting up the Data Set Load Behaviour on Unity's AR camera. I had to activate the image target data set and load the corresponding image target. Once I set these two things up, built, and deployed, everything worked well.
Good luck!
