.obj file viewer with interaction - Android

I'm looking for an Android component that loads .obj files and supports rotation and zoom interaction.
I need this component to be placed on a camera preview (e.g., a SurfaceView) so I can capture a photo by merging the .obj viewer component with the camera preview.
Has anyone found a solution to this? Can you give me some links and examples?
Sorry for my English.

This is the base project!
Starting from it, I'm adding the camera so the photo can be taken together with the 3D object. The project is already well underway in loading and interacting with 3D objects.
https://github.com/andresoviedo/android-3D-model-viewer
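At its core, loading an .obj file is plain text parsing: `v` lines carry vertex coordinates and `f` lines carry 1-based vertex indices for each face. A minimal sketch in plain Java (class and field names are mine, not from the linked project, and texture/normal indices are ignored):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal .obj parser sketch: handles "v x y z" vertex lines and
// "f i j k" triangular face lines. .obj indices are 1-based, so they
// are converted to 0-based; "i/t/n" style indices keep only the vertex part.
class ObjParser {
    final List<float[]> vertices = new ArrayList<>();
    final List<int[]> faces = new ArrayList<>();

    void parseLine(String line) {
        String[] t = line.trim().split("\\s+");
        if (t.length == 4 && t[0].equals("v")) {
            vertices.add(new float[] {
                Float.parseFloat(t[1]),
                Float.parseFloat(t[2]),
                Float.parseFloat(t[3]) });
        } else if (t.length == 4 && t[0].equals("f")) {
            faces.add(new int[] {
                Integer.parseInt(t[1].split("/")[0]) - 1,
                Integer.parseInt(t[2].split("/")[0]) - 1,
                Integer.parseInt(t[3].split("/")[0]) - 1 });
        }
    }
}
```

The parsed vertex and face lists are what you would then upload into OpenGL buffers for rendering.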

Related

How to render 3D models from CT scan DICOM files on a marker for medical augmented reality in an Android application

I am new to Android and ARToolkit. I have to develop an Android application that can augment and render 3D models from CT scan images in DICOM format on a detected marker. I am using the ARToolkit SDK for this purpose, but I don't know how to proceed with the DICOM files and render the 3D model on the marker. Could someone please suggest an approach? Any sort of help will be highly appreciated.
Thanks
I recommend the following process:
Figure out a tool for segmentation. This is the process whereby you build a 3D model of a subset of the data depending on density; for example, a model of the ribs from a chest CT. You should do this outside of Android first and figure out how to port it later. You can use tools like ITK and VTK to learn how to do this stage.
If you want to avoid the ITK/VTK learning curve, use GDCM (Grassroots DICOM) to learn how to load a DICOM series. With this approach you can have a 3D array of density points in your app in a few hours. At that point you can forget about DICOM and just work on the numbers. You still have the segmentation problem.
You can look at the NIH app ImageVis3D, which has source code, and see what their approach is.
Once you have a segmented dataset, conversion to a standard format is not too hard and you will be on your way.
What is the 'detected marker' you refer to? If you have a marker in the image set to aid in segmentation, you can work on detecting it in the 3D dataset you get back from loading the DICOM data.
Once you have the processes worked out, you can then see how to apply it all to Android.
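As a toy illustration of the density-based segmentation step described above: the simplest possible approach is a global threshold over the 3D density array you get from loading the DICOM series. Real segmentation in ITK/VTK is far more sophisticated; this sketch (names are mine) only shows the idea:

```java
// Toy segmentation sketch: mark every voxel whose density value
// (e.g. Hounsfield units from a CT series) reaches a threshold.
// Bone in CT is much denser than soft tissue, so a simple threshold
// already roughly isolates structures like ribs.
class DensitySegmenter {
    static boolean[][][] threshold(short[][][] volume, short minDensity) {
        int nz = volume.length, ny = volume[0].length, nx = volume[0][0].length;
        boolean[][][] mask = new boolean[nz][ny][nx];
        for (int z = 0; z < nz; z++)
            for (int y = 0; y < ny; y++)
                for (int x = 0; x < nx; x++)
                    mask[z][y][x] = volume[z][y][x] >= minDensity;
        return mask;
    }
}
```

The resulting boolean mask is what you would feed to a surface-extraction step (e.g. marching cubes) to get a mesh in a standard format.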
It's a little old, but recommended as a starting point: Android OpenGL .OBJ file loader
I was also wondering about building a custom View to address your needs, since in a custom View you can display anything.
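If you do roll your own view, the zoom interaction largely reduces to scaling by the ratio of pinch distances between successive touch events, clamped to a sane range. A minimal sketch of that math in plain Java (class name and clamp bounds are mine):

```java
// Pinch-zoom math sketch: the new scale factor is the old one multiplied
// by the ratio of the current two-finger distance to the previous one,
// clamped so the model can neither vanish nor explode.
class PinchZoom {
    static final double MIN_SCALE = 0.1;
    static final double MAX_SCALE = 10.0;

    static double newScale(double scale, double prevDist, double currDist) {
        double s = scale * (currDist / prevDist);
        return Math.max(MIN_SCALE, Math.min(MAX_SCALE, s));
    }
}
```

In an Android view you would compute the finger distance from the two pointer positions in `onTouchEvent` and apply the resulting scale to your model matrix.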

Android custom camera api shoot pictures with custom png as background

I want to make an Android app with the custom camera API which can take pictures with some PNG files as frames (like some webcam apps on PCs). Also, first I want to take a picture of a ball (or something) which acts as a frame for the second photo I am going to take. Does anybody have an idea?
Most devices already have a camera application, which you can start for a result if that suits your requirement.
But if you have more extensive requirements, Android also allows you to control the camera directly. Directly controlling the camera is much more involved, and you should assess your requirements before deciding on either approach.
You can refer to the following developer guides for details on both:
http://developer.android.com/training/camera/photobasics.html
http://developer.android.com/training/camera/cameradirect.html
Once you get the Bitmap, you can use a Canvas to combine the two bitmaps.
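On Android the merge itself is a `Canvas.drawBitmap` call of the PNG overlay onto the photo bitmap. The per-pixel compositing that performs for a translucent overlay is ordinary source-over alpha blending, sketched here in plain Java on packed ARGB ints (class name is mine):

```java
// Source-over alpha blend of one ARGB pixel onto another: the same
// compositing a Canvas performs when a translucent PNG frame is drawn
// over an (opaque) photo. Each color channel is mixed by the source alpha.
class PixelBlend {
    static int over(int src, int dst) {
        int sa = (src >>> 24) & 0xFF;        // source (overlay) alpha
        int blended = 0xFF000000;            // destination photo is opaque
        for (int shift = 0; shift <= 16; shift += 8) {
            int s = (src >>> shift) & 0xFF;  // overlay channel
            int d = (dst >>> shift) & 0xFF;  // photo channel
            int out = (s * sa + d * (255 - sa)) / 255;
            blended |= out << shift;
        }
        return blended;
    }
}
```

Fully opaque overlay pixels replace the photo pixel; fully transparent ones leave it untouched, which is exactly why a PNG frame with a transparent middle works as a camera overlay.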

How do you record AR video processed by OpenCV on Android?

I'm creating an Android app that makes use of OpenCV to implement augmented reality. One of the needed features is that it saves the processed video. I can't seem to find any sample code on real-time saving while using OpenCV.
If the above scenario isn't possible, another option is to save the video first and have it post-processed by OpenCV and saved back as a new file. But I can't find any sample code for this either.
Could someone be kind enough to point me to either direction, or give me an alternative? It's ok if the alternative doesn't use OpenCV.
The typical OpenCV flow is: you receive frames from the camera, convert them to RGB format, perform matrix operations, then return them to the activity for display in a view. You can store the modified frames as images somewhere on the SD card and use JCodec to create your MP4 out of those images. See Android make animated video from list of images.
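One practical detail when dumping frames to the SD card for a later image-sequence encoding pass (an assumption on my part about the workflow, not something the answer spells out): the file names need a fixed-width, zero-padded index so that lexicographic order matches frame order. A small helper sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Generates zero-padded frame file names (frame_0000.png, frame_0001.png, ...)
// so that sorting the saved files alphabetically yields the correct playback
// order when feeding them to an image-sequence encoder such as JCodec.
class FrameNamer {
    static List<String> names(int frameCount) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < frameCount; i++)
            out.add(String.format("frame_%04d.png", i));
        return out;
    }
}
```

Without the padding, `frame_10.png` would sort before `frame_2.png` and scramble the video.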

Phonegap overlay image on camera preview for Android

I've been searching for days without much success; all the PhoneGap-overlay-on-Android topics fade out miserably with no real answer, so I'll just add another one to the mix! :)
Is there an easy way to work with the PhoneGap camera API so you can add a PNG to the camera preview,
and return the URI as usual, etc.?
You should be able to use absolute positioning to place a PNG with a transparent background over the image returned from the camera API.

Android Video from camera

What should I read to create a basic augmented reality app for Android?
I read the Android reference articles, and I learned that I could use an Intent (using the built-in app) or construct my own customized camera app.
I wanted to know what I should read more about so that I could create something basic, like a shape on the screen.
By the way:
Can't I just see the current image given by the camera without needing to save it? All of the articles want me to save the captured files, but as you know, augmented reality (in my case) does not need to save the file; it works "on the fly". Am I correct?
You can see the preview using a SurfaceView while recording from a MediaRecorder.
The preview can be set using MediaRecorder's setPreviewDisplay function. It's pretty simple to use.
I highly recommend you have a look at OpenCV. I have not used it with Android, but I know it to be a fairly painless and accessible approach to image processing.
