I have a texture in my LibGDX-based Android app that is created procedurally through FrameBuffers, and I need to preserve it through context loss. It seems the only efficient way to do this is to save the data out, whether as a full image or as raw data, and load it back in when the time comes. I'm struggling to find any way to achieve this, though, as every route I've taken has led to complete failure in one way or another.
I've searched around quite a bit, but nothing I've come across has worked out. I am mostly looking for a hint in the right direction, rather than continuing the aimless searching and attempts I have made thus far. I would assume the best approach is to convert all of the data from the texture into a buffer of some sort, save it internally, and then reload it and recreate the texture, but I'm not sure of the best way to go about that.
The PixmapIO class is supposed to help with writing a run-time generated Pixmap out. It's not quite what you're looking for with an FBO texture, though. (It's easy to go from Pixmap to Texture, but not so easy to go the other way.) If the primitives you use to generate the data in your FBO are available on Pixmap (e.g., the basic geometric primitives), that may be an alternative. I believe this is the closest libGDX comes to a supported mechanism for saving a run-time texture, but I'm not positive.
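If your content can be rebuilt from such primitives, a minimal sketch of that alternative (the sizes, colors, and file name here are made up for illustration):

    // Sketch: draw the same primitives into a CPU-side Pixmap instead of
    // an FBO; a Pixmap is trivially persisted and turned into a Texture.
    Pixmap pixmap = new Pixmap(256, 256, Pixmap.Format.RGBA8888);
    pixmap.setColor(1f, 0.5f, 0f, 1f);
    pixmap.fillCircle(128, 128, 64);
    Texture texture = new Texture(pixmap);   // the easy direction: Pixmap -> Texture
    PixmapIO.writePNG(Gdx.files.local("generated.png"), pixmap);
    pixmap.dispose();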
There is some libGDX code around for scraping bytes off the framebuffer (the texture data of an FBO all lives on your GPU, so you need to jump through some hoops to copy it into normal memory). See ScreenUtils and the links here about screenshots and PNGs.
It should be easy to adapt PixmapIO to write out a CIM-formatted file using the byte[] returned from one of the ScreenUtils methods.
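For instance, a minimal sketch of that combination (the file name and the width/height variables are made up; ScreenUtils.getFrameBufferPixmap(), PixmapIO.writeCIM()/readCIM(), and FrameBuffer.begin()/end() are standard libGDX calls):

    // Sketch: copy the FBO contents into a Pixmap, persist it as CIM
    // (libGDX's fast, lossless, deflate-compressed format), and rebuild
    // the Texture later, e.g. after a context loss.
    fbo.begin();
    Pixmap pixmap = ScreenUtils.getFrameBufferPixmap(0, 0, width, height);
    fbo.end();

    FileHandle file = Gdx.files.local("fboBackup.cim");
    PixmapIO.writeCIM(file, pixmap);
    pixmap.dispose();

    // Later: reload and recreate the texture.
    Pixmap restored = PixmapIO.readCIM(file);
    Texture texture = new Texture(restored);
    restored.dispose();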
Alternatively, you could track the list of "operations" that were done to the FBO, so you can just replay them (reconstructing the content later). It depends a lot on what's going into your texture, though ...
I'm working on an OpenGL program and I must calculate a bounding box. I wrote the code to do it, but I can't get the vertex coordinates from the vertex buffer. Can someone explain an easy way to get data from a vertex buffer?
I'm using Java on Android with OpenGL ES.
If you use OpenGL ES 3.0 or later, you can use glMapBufferRange() to access buffer data directly. See the man page for details about the functionality, and the GLES30 documentation for details about the Java bindings in Android.
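A minimal sketch of that (using android.opengl.GLES30 and java.nio; the buffer handle and byte size are assumed to be values you already have):

    // Sketch: map a VBO for reading on GLES 3.0+ and copy the floats out.
    static FloatBuffer readVertexData(int vboId, int sizeInBytes) {
        GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, vboId);
        ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
                GLES30.GL_ARRAY_BUFFER, 0, sizeInBytes, GLES30.GL_MAP_READ_BIT);
        FloatBuffer vertices = mapped.order(ByteOrder.nativeOrder()).asFloatBuffer();
        // The mapping is only valid until glUnmapBuffer, so copy first.
        FloatBuffer copy = FloatBuffer.allocate(vertices.remaining());
        copy.put(vertices).rewind();
        GLES30.glUnmapBuffer(GLES30.GL_ARRAY_BUFFER);
        GLES30.glBindBuffer(GLES30.GL_ARRAY_BUFFER, 0);
        return copy;
    }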
I don't think there's any reasonable way to do this in ES 2.0. I can think of absolutely awful ways, but I would feel bad steering you in that direction. Well, for completeness, but please do not do this: you could render something that ends up leaving the vertex data in a render target, and read it back with glReadPixels().
If you need frequent access to the vertex data in your own code, it will most likely work better if you keep a copy of it. You already had the data when you called glBufferData(). If you're currently throwing it away after the glBufferData() call, simply keep it around, and use it whenever you need access to vertex data.
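For the bounding box specifically, that CPU-side copy is all you need; a sketch (assuming tightly packed x,y,z position triples, which is my assumption about your layout):

    // Sketch: axis-aligned bounding box from the same float[] that was
    // passed to glBufferData(). Returns {minX, minY, minZ, maxX, maxY, maxZ}.
    static float[] boundingBox(float[] verts) {
        float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE, minZ = Float.MAX_VALUE;
        float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE, maxZ = -Float.MAX_VALUE;
        for (int i = 0; i < verts.length; i += 3) {
            minX = Math.min(minX, verts[i]);     maxX = Math.max(maxX, verts[i]);
            minY = Math.min(minY, verts[i + 1]); maxY = Math.max(maxY, verts[i + 1]);
            minZ = Math.min(minZ, verts[i + 2]); maxZ = Math.max(maxZ, verts[i + 2]);
        }
        return new float[] { minX, minY, minZ, maxX, maxY, maxZ };
    }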
I've found how to do it:

    mFb = (FloatBuffer) vb.getData(UlVertexBuffer.VERTEX_FIELD_POSITION).position(0);
    getmFloatArray(mFb);
    mSb = (ShortBuffer) ib.getData();
    getmShortArray(mSb);
Hope you are all well.
I am at somewhat of a crossroads in my current project: I need to extract grayscale pixel values that will be sorted as per the discussion in my previous post (which was very kindly and thoroughly answered).
The two main methods that I am aware of are:
Extract the grayscale from the YUV preview.
Take the photo, and convert the RGB values to grayscale.
One of my main aims is simplicity; the project as a whole needs it. Thus my question: which of these two (or another method I am not aware of) would be the most reliable and stable, while being less taxing on the battery and processing time?
Please note, I am not after any code samples, but am looking for what people may have experienced, may have read (in articles etc.), or have an intuitive hunch about.
Thank you for taking the time to read this.
I'm currently working on a project which also uses pixel values to do some calculations, and I noticed that it's better to take the values directly from the YUV preview if you only need the grayscale, or if you need the entire preview for your calculation.
If you want to use the RGB values, or only calculate something based on a certain part of the preview, it's better to convert just the area you need to a Bitmap and use that instead.
However, it all depends on what you're trying to achieve, since no two projects are alike. If you have the time, why not (roughly) implement both methods and do a quick test to see which works better in terms of CPU usage and total processing time? That's how I found the best method for my particular problem.
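For reference, a rough sketch of what taking the grayscale straight from the preview looks like with the (old) android.hardware.Camera API; NV21 is the default preview format, and its first width*height bytes are the Y (luma) plane, i.e. exactly the grayscale values:

    // Sketch: the Y plane of an NV21 preview frame is already grayscale.
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // In real code, cache the preview size instead of querying per frame.
        Camera.Size size = camera.getParameters().getPreviewSize();
        int pixelCount = size.width * size.height;
        int[] gray = new int[pixelCount];
        for (int i = 0; i < pixelCount; i++) {
            gray[i] = data[i] & 0xFF;   // unsigned luma, 0..255
        }
        // ...run the calculation on "gray"...
    }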
I'm currently writing Bitmaps to a png file and also reading them back to a Bitmap. I'm looking for ways to improve the speed at which writing and reading happens. The images need to be lossless since I'm reading them back to edit them.
The place where I see the worst performance is the actual BitmapFactory.decode(...).
A few questions:
1. Is there a faster solution to read/write from file to a Bitmap using NDK?
2. Is there a better library to decode a Bitmap faster?
3. What is the best way to store and read a Bitmap?
Trying to find the best/fastest possible way to read/write an image to file came down to using plain old BitmapFactory. I tried using the NDK to do the encoding/decoding, but that really didn't make a difference.
Essentially, the format to use was lossless PNG, since I didn't want to lose any quality after editing an image.
The main thing I needed to understand was how long encoding took versus decoding. The encoding numbers were in the 300-600 ms range, depending on image size, while decoding was just fast, around 10-23 ms.
After understanding all that, I just created a worker thread to which I passed images needing encoding, and let it do the work without affecting the user experience. The image was kept cached in memory in case it was needed right away, before it was completely encoded and saved to file.
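A rough sketch of that worker-thread idea (the method and file names are illustrative; Bitmap.compress() with CompressFormat.PNG is the standard lossless encode):

    // Sketch: encode PNGs on a single background thread so the UI thread
    // never waits for the slow (300-600 ms) encoding step.
    private final ExecutorService encoder = Executors.newSingleThreadExecutor();

    void saveAsync(final Bitmap bitmap, final File file) {
        encoder.execute(new Runnable() {
            @Override public void run() {
                try (FileOutputStream out = new FileOutputStream(file)) {
                    // The quality parameter is ignored for PNG; it is always lossless.
                    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
                } catch (IOException e) {
                    Log.e("BitmapSave", "encode failed", e);
                }
            }
        });
    }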
I'm planning to write an app for Android which performs a simple cell counting. The method I'm planning to use is a type of Blob analysis.
The steps of my procedure would be;
Histogram computation to identify the threshold values for thresholding.
Thresholding to create a binary image where cells are white and the background is black.
Filtering to remove noise and excess particles.
Particle (blob) analysis to count cells.
I got this sequence from this site where functions from the software IMAQ Vision are used to perform those steps.
I'm aware that on Android I can use OpenCV's similar functions to replicate the above procedure. But I would like to know whether I'd be able to implement the histogram computation, thresholding, and blob analysis myself, writing the required algorithms without calling API functions. Is that possible? And how hard would it be?
It is possible. From a PNG image (e.g. from disk or camera), you can generate a Bitmap object. The Bitmap gives you direct access to the pixel color values. You can also create new Bitmap objects based on raw data.
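A rough sketch of that round trip (the path and coordinates are placeholders):

    // Sketch: PNG -> Bitmap, pixel access, and Bitmap from raw ARGB data.
    Bitmap bmp = BitmapFactory.decodeFile("/sdcard/cells.png");
    int argb = bmp.getPixel(10, 10);                 // one pixel's color
    int[] raw = new int[bmp.getWidth() * bmp.getHeight()];
    bmp.getPixels(raw, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
    Bitmap copy = Bitmap.createBitmap(raw, bmp.getWidth(), bmp.getHeight(),
            Bitmap.Config.ARGB_8888);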
Then it is up to you to implement the algorithms. Creating a histogram and thresholding should be easy; filtering and blob analysis are more difficult. It depends on your exposure to algorithms and data structures, but a hands-on approach is not bad either.
Just make sure to downscale large images (Bitmap can do that too). This saves memory (which can be critical on Android) and gives better results.
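To make the first two steps concrete, a sketch of the histogram and a fixed-threshold binarization (android.graphics.Color does the channel extraction; the 0.299/0.587/0.114 weights are the standard luminance conversion):

    // Sketch: grayscale histogram of a Bitmap via bulk pixel access
    // (getPixels is much faster than per-pixel getPixel calls).
    static int[] histogram(Bitmap bmp) {
        int[] pixels = new int[bmp.getWidth() * bmp.getHeight()];
        bmp.getPixels(pixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
        int[] hist = new int[256];
        for (int p : pixels) {
            int gray = (int) (0.299 * Color.red(p) + 0.587 * Color.green(p) + 0.114 * Color.blue(p));
            hist[gray]++;
        }
        return hist;
    }

    // Sketch: binarize in place; cells become white, background black.
    static void threshold(int[] pixels, int t) {
        for (int i = 0; i < pixels.length; i++) {
            int p = pixels[i];
            int gray = (int) (0.299 * Color.red(p) + 0.587 * Color.green(p) + 0.114 * Color.blue(p));
            pixels[i] = gray >= t ? Color.WHITE : Color.BLACK;
        }
    }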
This is a very theoretical question about general knowledge.
First of all, I don't yet have a lot of understanding of things in OpenGL, so please forgive me:
1. Is the best way to load a 3D model into Android to use Wavefront .obj files?
2. I downloaded a sample model for SketchUp (some robot model with a lot of parts), and the .obj file is 3 MB. I loaded it into a vector of Strings (almost 100k lines) and the application is over 15 MB heavier in RAM usage, so I am a bit concerned about this whole method and approach.
3. Once I load the model, is there a simple way of rotating and moving it? Will it be like a single object in OpenGL, or do I need to multiply all those thousands of vertices by a matrix?
Is there anything else I should add to my understanding?
I can't answer all of your questions, but:
3) Yes. You can combine the Android framework's onTouchEvent() functionality with OpenGL. In OpenGL ES 1.x you can rotate things very easily with simple glRotatef() calls (which will rotate everything for you), where the provided angle is varied based on your touch interaction.
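A rough sketch of that wiring (GLES 1.x style; the class name and the 0.5 degrees-per-pixel factor are made up):

    // Sketch: accumulate a rotation angle from horizontal drags.
    public class ModelSurfaceView extends GLSurfaceView {
        volatile float angle;   // read by the renderer each frame
        private float lastX;

        public ModelSurfaceView(Context context) { super(context); }

        @Override
        public boolean onTouchEvent(MotionEvent e) {
            if (e.getAction() == MotionEvent.ACTION_MOVE) {
                angle += (e.getX() - lastX) * 0.5f;   // tune sensitivity to taste
            }
            lastX = e.getX();
            return true;
        }
    }

    // ...and in the Renderer's onDrawFrame(GL10 gl):
    gl.glLoadIdentity();
    gl.glRotatef(view.angle, 0f, 1f, 0f);   // rotates the whole model about Y
    // ...then issue the model's draw calls as usual...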
EDIT:
2) Why are you loading it into Strings? I don't know model formats very well, but I parse many files. You should load the data into the smallest-sized types you can, for instance an array of shorts or floats. I don't know your data, but that is the best way. Consider parsing in portions if you have memory issues.
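As an illustration of parsing into compact types instead of Strings, a rough sketch that keeps only the "v x y z" position lines of a Wavefront .obj in a direct FloatBuffer (ready for glBufferData()):

    // Sketch: stream the .obj and keep only vertex positions as floats.
    static FloatBuffer loadPositions(InputStream in) throws IOException {
        List<Float> coords = new ArrayList<>();
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.startsWith("v ")) {              // vertex position line
                String[] parts = line.split("\\s+");
                coords.add(Float.parseFloat(parts[1]));
                coords.add(Float.parseFloat(parts[2]));
                coords.add(Float.parseFloat(parts[3]));
            }
        }
        FloatBuffer fb = ByteBuffer.allocateDirect(coords.size() * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        for (float f : coords) fb.put(f);
        fb.rewind();
        return fb;
    }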