This is a very theoretical question about general knowledge.
First of all, I don't have a lot of understanding of OpenGL so far, so please forgive me:
Is the best way to load a 3D model into Android to use Wavefront .obj files?
I downloaded a sample model from SketchUp (a robot model with a lot of parts), and the .obj file is 3 MB. I loaded it into a vector of strings (almost 100k lines), and the application's RAM usage grew by more than 15 MB! So I am a bit concerned about this whole method and approach.
Once I have loaded the model, is there a simple way of rotating and moving it? Will it be a single object in OpenGL, or do I need to multiply all of the thousands of vertices by a matrix?
Is there anything else I should add to my understanding?
I can't answer all of your questions, but:
3) Yes. You can combine the Android framework's onTouchEvent() handling with OpenGL. In OpenGL you can rotate things very easily with a single glRotatef(angle, x, y, z) call (which rotates everything drawn after it for you), where the angle is driven by your touch interaction.
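A minimal sketch of that idea, assuming a GLSurfaceView with the fixed-function GL10 pipeline (ModelView is a made-up name, and drawModel() stands in for your own glDrawArrays/glDrawElements calls):

```java
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class ModelView extends GLSurfaceView {
    private volatile float angle = 0f; // degrees, updated by touch
    private float lastX;

    public ModelView(Context context) {
        super(context);
        setRenderer(new Renderer() {
            public void onSurfaceCreated(GL10 gl, EGLConfig config) {}
            public void onSurfaceChanged(GL10 gl, int w, int h) {
                gl.glViewport(0, 0, w, h);
            }
            public void onDrawFrame(GL10 gl) {
                gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
                gl.glLoadIdentity();
                // One call rotates the whole model; no per-vertex math needed.
                gl.glRotatef(angle, 0f, 1f, 0f);
                // drawModel(gl);
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        if (e.getAction() == MotionEvent.ACTION_MOVE) {
            angle += (e.getX() - lastX) * 0.5f; // pixels dragged -> degrees
        }
        lastX = e.getX();
        return true;
    }
}
```

The GPU applies the rotation to every vertex for you, so you never touch the vertex data yourself.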
EDIT:
2) Why are you loading it into Strings? I don't know model formats very well, but I parse many files, and you should load the data into the smallest variables you can: for instance, an array of shorts or floats rather than Strings. I don't know your data, but that is the best approach. Consider parsing in portions if you have memory issues.
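As a rough sketch of that approach (assuming a Wavefront .obj as in the question, and handling only the "v x y z" vertex lines; faces, normals, etc. would be parsed the same way), you could stream the file and keep just the parsed floats:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class ObjVertexLoader {
    // Streams the file line by line, so the raw Strings are discarded
    // immediately instead of being held in a 100k-line vector.
    public static float[] loadVertices(String path) throws IOException {
        List<Float> verts = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (line.startsWith("v ")) {
                    String[] parts = line.trim().split("\\s+");
                    verts.add(Float.parseFloat(parts[1])); // x
                    verts.add(Float.parseFloat(parts[2])); // y
                    verts.add(Float.parseFloat(parts[3])); // z
                }
            }
        }
        float[] out = new float[verts.size()];
        for (int i = 0; i < out.length; i++) out[i] = verts.get(i);
        return out; // ready to wrap in a FloatBuffer for OpenGL
    }
}
```

Three floats per vertex cost 12 bytes, versus the dozens of bytes each retained String line costs you.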
I am currently working on an HDR application that requires the use of Camera2 to be able to customize HDR settings.
I have developed a customized algorithm to retrieve certain data from Raw DNG images and I would like to implement it on Android.
I am unfortunately not an expert in Java/Android, so I taught myself how to code. With other formats I have usually worked with bitmaps to retrieve pixel data (which was a relatively easy task given the existing methods).
Concerning DNG files, I have found no documentation showing me how to retrieve the pixel data. I thought of buffering the image, but the DNG file format contains a lot of information other than pixels, and I'm afraid I am unable to find an extraction strategy using a buffered stream. (I just want to store the pixels inside an array.)
Does anyone have an idea? I would highly appreciate some tips.
Best regards
Camera2 does not produce DNGs directly - it produces plain RAW buffers, which you can then save to a DNG via DngCreator.
Are you operating on the initial RAW buffers, or saving DNGs and then loading them back?
In general, DNGs are not fully baked images, so quite a bit of code is needed to render them completely - see for example Adobe's DNG SDK.
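If you do operate on the initial RAW buffers, you can skip parsing the DNG container entirely. A rough sketch under that assumption (DngCreator and Image are the real Camera2 classes; RawHelper and the output path are placeholders, and the byte order of the RAW plane is assumed to be the native one):

```java
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.DngCreator;
import android.hardware.camera2.TotalCaptureResult;
import android.media.Image;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class RawHelper {
    // Save the RAW_SENSOR Image as a DNG for later use.
    static void saveDng(CameraCharacteristics characteristics,
                        TotalCaptureResult result, Image rawImage,
                        String path) throws IOException {
        try (FileOutputStream out = new FileOutputStream(path);
             DngCreator dng = new DngCreator(characteristics, result)) {
            dng.writeImage(out, rawImage);
        }
    }

    // Or read the 16-bit sensor samples straight out of the buffer.
    static short[] rawPixels(Image rawImage) {
        Image.Plane plane = rawImage.getPlanes()[0];
        ByteBuffer buf = plane.getBuffer().order(ByteOrder.nativeOrder());
        short[] pixels = new short[buf.remaining() / 2];
        buf.asShortBuffer().get(pixels);
        return pixels; // note: rows may be padded to plane.getRowStride()
    }
}
```

Keep in mind these are mosaiced sensor values, not demosaiced RGB pixels.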
I am new to Android and ARToolKit. I have to develop an Android application which can render 3D models, built from CT scan images in DICOM format, on a detected marker. I am using the ARToolKit SDK for this purpose, but I don't know how to proceed with the DICOM files and render the 3D model on the marker. Could someone please suggest an approach? Any sort of help will be highly appreciated.
Thanks
I recommend the following process:
Figure out a tool for segmentation. This is the process whereby you build a 3D model of a subset of the data based on density; for example, a model of the ribs from a chest CT. You should do this outside of Android first and figure out how to port it later. You can use tools like ITK and VTK to learn how to do this stage.
If you want to avoid the ITK/VTK learning curve, use GDCM (Grassroots DICOM) to learn how to load a DICOM series. With this approach you can have a 3D array of density points in your app in a few hours. At that point you can forget about DICOM and just work with the numbers. You still have the segmentation problem, though.
You can look at the NIH app ImageVis3D, which has source code available, and see what their approach is.
Once you have a segmented dataset, conversion to a standard format is not too hard and you will be on your way.
What is the 'detected marker' you refer to? If you have a marker in the image set to aid in segmentation, you can work on detecting it in the 3D dataset you get back from loading the DICOM data.
Once you have the processes worked out, you can then see how to apply it all to Android.
It seems a little old, but it's recommended for a start: Android OpenGL .OBJ file loader
I was also wondering about building a custom View to address your needs, since in a custom View you can display anything.
I have seen a lot of developers put all the assets, images, or drawables they need into one PNG file, like this:
The question is: how can a developer split out each image to use it in Android?
And what is the advantage of this technique?
This technique is mainly used in game development, and the file you linked is called a Texture Atlas.
Its main advantage is that the game engine only has to load and bind a single texture, which saves a lot of memory traffic and texture switching, making the game run smoother.
Splitting is normally done with the help of an XML/JSON file which contains the coordinates and size of every image, that way the engine knows where each image is located in the atlas.
You can find more information about Texture Atlases here
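As a rough illustration outside a game engine, one sprite can be cut out of an atlas on Android with plain Bitmap calls; the resource id and coordinates below are placeholders for what the atlas's XML/JSON file would give you:

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class AtlasSplitter {
    // Cuts the (x, y, width, height) region out of the atlas bitmap.
    public static Bitmap extract(Resources res, int atlasResId,
                                 int x, int y, int width, int height) {
        Bitmap atlas = BitmapFactory.decodeResource(res, atlasResId);
        return Bitmap.createBitmap(atlas, x, y, width, height);
    }
}
```

For example, extract(getResources(), R.drawable.atlas, 0, 0, 64, 64) would return a 64x64 sprite stored at the atlas origin.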
Hope you are all well.
I am at somewhat of a crossroads in my current project: I need to extract grayscale pixel values that will be sorted as per the discussion in my previous post (which was very kindly and thoroughly answered).
The two main methods that I am aware of are:
Extract the grayscale from the Yuv preview.
Take the photo, and convert the RGB values to grayscale.
One of my main aims is simplicity, which the project as a whole needs. So my question: which of these two (or another method I am not aware of) would be the most reliable/stable, while being less taxing on battery and processing time?
Please note, I am not after any code samples; I am looking for what people may have experienced, may have read (in articles etc.), or have an intuitive hunch about.
Thank you for taking the time to read this.
I'm currently working on a project which also uses pixel values to do some calculations, and I noticed that it's better to take the values directly from the YUV preview if you only need the grayscale, or if you need the entire preview for your calculation.
If you want the RGB values, or only calculate something based on a certain part of the preview, it's better to convert just the area you need, for instance to a Bitmap, and work with that.
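For the YUV route, here is a rough sketch with the (pre-Camera2) Camera API: the default NV21 preview format stores the luminance plane first, so grayscale extraction is just a copy with no conversion at all:

```java
import android.hardware.Camera;

public class GrayscalePreview implements Camera.PreviewCallback {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        int[] gray = new int[size.width * size.height];
        // The first width*height bytes of an NV21 frame are the Y (luma) plane.
        for (int i = 0; i < gray.length; i++) {
            gray[i] = data[i] & 0xFF; // unsigned 0..255
        }
        // ... run your calculations on 'gray'
    }
}
```

Register it with camera.setPreviewCallback(new GrayscalePreview()); this avoids both the JPEG pipeline and an RGB-to-grayscale pass, which is why it tends to be the cheaper option.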
However, it all depends on what you're trying to achieve, since no two projects are alike. If you have the time, why not (roughly) implement both methods and do a quick test to see which works better in terms of CPU usage and total processing time? That's how I found the best method for my particular problem.
I have a texture in my LibGDX-based Android app that is created procedurally through FrameBuffers, and I need to preserve it through context loss. It seems that the only efficient way to do this is to simply save the data out, whether as a full image or as raw data, and load it back in when the time comes. I'm struggling to find any way to achieve this, though, as every route I've taken has led to complete failure in one way or another.
I've searched around quite a bit, but nothing I've come across has worked out. I am mostly just looking for a hint in the right direction, rather than the aimless searching and attempts I have been making thus far. I would assume the best approach would be to convert all of the data from the texture into a buffer of some sort, save the data internally, and then reload it and recreate the texture, but I'm not sure of the best way to go about that.
The PixmapIO class is supposed to help with writing a run-time generated Pixmap out. It's not quite what you're looking for with an FBO texture, though (it's easy to go from Pixmap to Texture, but not so easy to go the other way). If the primitives you use to generate the data in your FBO are available on Pixmap (e.g., the basic geometric primitives), that may be an alternative. I believe this is the closest libGDX comes to a supported mechanism for saving a run-time texture, but I'm not positive.
There is some libGDX code around for scraping bytes off the framebuffer (the texture data of an FBO all lives on your GPU, so you need to jump through some hoops to copy it into normal memory). See ScreenUtils and the links here about screenshots and PNGs.
It should be easy to adapt PixmapIO to write out a "CIM"-formatted file using the byte[] returned from one of the ScreenUtils methods.
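A rough sketch of that combination (ScreenUtils, PixmapIO, and BufferUtils are real libGDX classes; FboSaver and "texture.cim" are placeholders):

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.PixmapIO;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;
import com.badlogic.gdx.utils.BufferUtils;
import com.badlogic.gdx.utils.ScreenUtils;

public class FboSaver {
    // Copy the FBO's pixels off the GPU and write them out as CIM.
    public static void save(FrameBuffer fbo) {
        fbo.begin(); // make the FBO the current framebuffer
        byte[] bytes = ScreenUtils.getFrameBufferPixels(
                0, 0, fbo.getWidth(), fbo.getHeight(), true);
        fbo.end();
        Pixmap pixmap = new Pixmap(fbo.getWidth(), fbo.getHeight(),
                Pixmap.Format.RGBA8888);
        BufferUtils.copy(bytes, 0, pixmap.getPixels(), bytes.length);
        PixmapIO.writeCIM(Gdx.files.local("texture.cim"), pixmap);
        pixmap.dispose();
    }

    // After context loss, rebuild the Texture from the saved file.
    public static Texture load() {
        return new Texture(PixmapIO.readCIM(Gdx.files.local("texture.cim")));
    }
}
```

CIM is libGDX's own fast, lossless format, so it round-trips RGBA data cheaply compared to PNG.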
Alternatively, you could track the list of "operations" that were done to the FBO, so you can just replay them to reconstruct the content later. It depends a lot on what's going into your texture, though...