I would like to reconstruct 3D images from a set of DICOM images. I hope you are aware of DICOM images. I am planning to use OpenGL ES to generate a 3D view of the images. I have a set of images as an image stack or image array, and I want to generate a 3D view of those images in Android. The images are the output of a CT scan or MRI scan. I am planning to target Android 2.3 or 3.0. So my first question is: is it possible in Android to generate a 3D view from an array of images? Can you give me some hints? I am new to Android and OpenGL. Please help.
Well, it will not be an easy task, but first of all you have to decide what you want to do.
3D volume rendering needs a lot of CPU power; there are some Android devices that can do it, but it also needs a lot of memory, and that would be a problem.
Surface rendering is far less demanding, and you can benefit from OpenGL. But today, medical applications are on the 3D volume rendering side.
There are open source libraries like the one used in the OsiriX viewer (Mac) and other Linux implementations (e.g. VTK); you should go that way.
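To make "volume rendering" a bit more concrete, here is a minimal CPU-side sketch of one of its simplest forms, a maximum intensity projection (MIP) through a stack of slices. The array layout and dimensions are hypothetical stand-ins for decoded DICOM pixel data; a real renderer would do this on the GPU:

```java
/** Minimal maximum-intensity projection (MIP) along the slice axis.
 *  slices[z][y * width + x] holds one grayscale value per voxel. */
public class Mip {
    public static int[] project(int[][] slices, int width, int height) {
        int[] out = new int[width * height];
        for (int[] slice : slices) {
            for (int i = 0; i < width * height; i++) {
                // Keep the brightest voxel seen along each viewing ray.
                if (slice[i] > out[i]) out[i] = slice[i];
            }
        }
        return out;
    }
}
```

Even this trivial projection touches every voxel of every slice, which is why memory and CPU power become the limiting factors on a phone.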
Related
I wanna develop a pictionary style app. I've figured out the drawing part (using canvas, paint and related libraries) on the device, and now I need to update the drawings in real time on all devices that are connected.
The approach I have in mind is to take screenshots at very close intervals and upload them to the server (Firebase). The app will constantly check for server-side updates. I know this is a horrible way to keep things in relative synchronization; is there any other way I can do this?
Maybe like a video stream or something.
If you are drawing using paths, then you could send a list of paths to the other devices and redraw them there.
I do not think there is a fast way to convert a series of bitmaps into a video (by bitmaps I mean images that are generated using the Android canvas).
If you do your drawing using OpenGL, then you could convert the output of an OpenGL surface into a video using a video encoder.
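The path-replay idea above can be sketched as a tiny wire format: instead of screenshots, each stroke is flattened to its points and sent as text. The format and class names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

/** Toy wire format for replaying strokes on other devices:
 *  each stroke is sent as "x0,y0;x1,y1;..." instead of a screenshot. */
public class StrokeCodec {
    public static String encode(List<float[]> points) {
        StringBuilder sb = new StringBuilder();
        for (float[] p : points) {
            if (sb.length() > 0) sb.append(';');
            sb.append(p[0]).append(',').append(p[1]);
        }
        return sb.toString();
    }

    public static List<float[]> decode(String s) {
        List<float[]> points = new ArrayList<>();
        for (String pair : s.split(";")) {
            String[] xy = pair.split(",");
            points.add(new float[]{Float.parseFloat(xy[0]), Float.parseFloat(xy[1])});
        }
        return points;
    }
}
```

On the receiving device you would feed the decoded points into `Path.moveTo`/`lineTo` and redraw. A few dozen bytes per stroke is far cheaper to push through Firebase than repeated screenshots.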
I am trying to write a camera app where I can have a custom filter applied at runtime. Currently the SDK offers effects such as greyscale, sepia, etc., and applying them is as simple as setting a parameter.
However, I need to apply our own custom filter (where I would edit some pixel values) to both still images and videos; it shouldn't matter which, as the concept is the same. I managed to do this on iOS using OpenGL ES, and I was hoping the same could be done on Android.
The approach we tried so far is using the SDK and applying a simple greyscale filter on a frame-by-frame basis, but the camera preview was much slower (much lower FPS); it probably needs to be done at a lower level, with the NDK or OpenGL ES.
OpenCV for Android is what you are looking for :)
http://opencv.org/platforms/android.html
If you want to further improve the performance, you can use the NDK to compile your C/C++ code so it runs natively on the Android device. There are many samples of how to use the OpenCV camera or the Android native camera to do image/video manipulation.
OpenGL is fully functional and fast on Android. You can probably reuse much of your iOS code.
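As a baseline for what the shader or NDK version has to beat, a per-pixel greyscale pass over ARGB pixels (the format returned by `Bitmap.getPixels`) looks like this; doing exactly this in Java for every preview frame is what makes the FPS drop:

```java
/** Naive per-pixel greyscale over ARGB_8888 pixels (BT.601 luma weights). */
public class GrayscaleFilter {
    public static int[] apply(int[] argb) {
        int[] out = new int[argb.length];
        for (int i = 0; i < argb.length; i++) {
            int p = argb[i];
            int a = (p >>> 24) & 0xFF;
            int r = (p >>> 16) & 0xFF;
            int g = (p >>> 8) & 0xFF;
            int b = p & 0xFF;
            // Weighted luma, then replicate into all three channels.
            int y = (299 * r + 587 * g + 114 * b) / 1000;
            out[i] = (a << 24) | (y << 16) | (y << 8) | y;
        }
        return out;
    }
}
```

The same arithmetic expressed as a GLSL fragment shader, or as a C loop behind JNI, runs orders of magnitude faster per frame, which is why the answers above point you at OpenGL ES and the NDK.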
I need to provide a 3D rotation of an image, like a credit card (please check the video in the link).
I want to know whether this is feasible on Android, and if so, how I can do it.
The card must have some thickness.
It definitely is feasible, but you will have to do some studying:-).
Start here: http://developer.android.com/guide/topics/graphics/opengl.html
But you may also achieve your goal by just using the video you have posted in your link.
Some context would be useful: will this be a loading screen? Something in a video?
For instance, if you are trying to make a website-style layout with the card always spinning at the top, I would advise against that on any mobile device, as it is a waste of performance.
If instead you are using it as a loading screen, then again I would advise against it: you are going to spend a lot of time initializing OpenGL, loading the texture and mesh for the card as well as any lighting you need, and then setting up animators to do the spinning.
As previously stated OpenGL would be a way of doing this; however, this is not a simple few lines of code. This would be quite the undertaking for someone unfamiliar with OpenGL and 3D modeling to accomplish in a short time frame.
Question: do you require a native Android app, or would it be all right to use Flash Player? You can find tons of interactive 3D geometry demos on http://wonderfl.net - I forked one that had a plane, switched it to a cube, and you can download the results -
3d box on wonderfl
Not OpenGL - the example I found used Papervision3D (which is a couple of years out of date) - but software rendering is fine for 12 triangles. You would, of course, have to import your card faces as texture images if you want it to look like a credit card.
I am able to display an image using OpenGL ES in the Android NDK. Now I want to display two or four images using multithreading in OpenGL ES through the Android NDK.
I have searched extensively and came across the claim that a SurfaceView can only have one picture. So what is the way to display multiple pictures on a GLSurfaceView?
Can anybody please tell me how it can be done..
Thanks in Advance
It seems there are several issues here.
First of all, if you are trying to display "pictures" through OpenGL (ES), you mean textures (the OpenGL-readable format for pictures or images), right? If you are not sure what I am talking about, find a tutorial on displaying images using OpenGL ES. Learn how to display just one and you will be able to display four.
a Surfaceview can only have one picture
You may have misunderstood something. A GLSurfaceView can draw as many textures as your video memory can handle.
Basically, to display your textures, you will draw 2 or 4 quads and bind the appropriate textures to them.
About the multithreading: I guess you gather your pictures asynchronously. Just wait for a complete picture, and then, on the OpenGL thread, create a texture and bind it to a quad.
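To illustrate the "2 or 4 quads" layout, here is a small helper that computes normalized-device-coordinate corners for a grid of quads; each quad would then get its own texture bound before its draw call. The helper is plain Java and independent of any actual GLES calls, so the class name and return layout are just illustrative:

```java
/** Computes NDC (x, y) corners for a cols x rows grid of quads filling [-1, 1].
 *  Returns one float[8] per quad: x0,y0, x1,y1, x2,y2, x3,y3 (counter-clockwise). */
public class QuadGrid {
    public static float[][] layout(int cols, int rows) {
        float[][] quads = new float[cols * rows][];
        float w = 2f / cols, h = 2f / rows;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                float x = -1f + c * w, y = -1f + r * h;
                quads[r * cols + c] = new float[]{
                    x, y,          // bottom-left
                    x + w, y,      // bottom-right
                    x + w, y + h,  // top-right
                    x, y + h       // top-left
                };
            }
        }
        return quads;
    }
}
```

With `layout(2, 2)` you get four quads tiling the view; in the render loop you would loop over them, call `glBindTexture` with the matching texture ID, and draw each quad.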
I have dozens of DICOM images, and I have to convert them into a 3D image on an Android device. Any advice on how to do this? I have never handled imaging like this before, so I have no idea where to start.
The search term you are looking for is "Volume Rendering". This is a deep, complicated topic. If you're looking for a place to get started learning, I'd recommend reading the relevant sections of "Handbook of Medical Imaging: Processing and Analysis", Isaac Bankman (ed.), ISBN# 0-12-077790-8.
Volume rendering of DICOM images is computationally very demanding, and DICOM is a complex protocol. Partly for these reasons, only a few implementations of DICOM viewers exist on mobile devices, versus hundreds on desktop/laptops. And most of these are 2D viewers which are more commonly used in clinical applications. There is no open source DICOM toolkit I know of for mobile devices (there are plenty for Unix/Mac/Windows).
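As a taste of why DICOM viewing has its own complexity even in 2D: raw scanner values are not displayable gray levels, so every viewer applies a window/level (VOI) transform first. A simplified linear version looks like this; the center/width values in the test are illustrative, while real ones come from the DICOM Window Center/Window Width tags:

```java
/** Simplified linear window/level mapping of a raw DICOM value to 0..255. */
public class WindowLevel {
    public static int toDisplay(int raw, double center, double width) {
        double low = center - width / 2.0;      // bottom of the display window
        double v = (raw - low) / width * 255.0; // linear ramp across the window
        if (v < 0) return 0;                    // clamp below the window
        if (v > 255) return 255;                // clamp above the window
        return (int) Math.round(v);
    }
}
```

Values below the window clamp to black and values above it clamp to white, which is how radiologists "window" a CT to show bone versus soft tissue from the same data.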
For an example of a good implementation, look at the iPad/iPhone versions of OsiriX. Further resources are available on my website, I Do Imaging.
Your best bet is probably to get involved with Xbox360 Kinect hacking/research and use the Kinect's 3D mapping functionality to convert your 2D images to a 3D scene. Kinect is probably the most robust 2D to 3D mapping system in the world right now - you're not likely to find much else.