How to prevent OpenGL context loss when a new activity is loaded - Android

I am developing a 3D Android app where I need to do rendering in two different activities (normal rendering in one activity and VR rendering in the other). I have found that once I move from one activity to another, my 3D model data (vertices, indices) is lost. If I come back to the first activity I have to reload all the data from files. Is there any workaround for this specific issue? Also, which format should I save the models in to get the quickest loading speed?

You can use GLSurfaceView.setPreserveEGLContextOnPause. Preserving the EGL context is not guaranteed on every device, but it is widely supported on modern Android hardware.
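For reference, a minimal sketch of a GLSurfaceView subclass that requests context preservation (Kotlin; ModelSurfaceView and MyRenderer are placeholder names, not part of any API):

import android.content.Context
import android.opengl.GLSurfaceView

class ModelSurfaceView(context: Context) : GLSurfaceView(context) {
    init {
        setEGLContextClientVersion(2)
        // Ask the view to keep the EGL context (and the GL resources it owns)
        // alive across onPause()/onResume() instead of destroying it.
        // Not guaranteed on every device, so keep a reload path as a fallback.
        preserveEGLContextOnPause = true
        setRenderer(MyRenderer()) // MyRenderer: your own GLSurfaceView.Renderer
    }
}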
As for model loading speed, you're treading dangerously into 'opinion based' territory. That said, a model format laid out exactly as your GLES buffers expect on the device could be streamed directly from disk without any modification, so that would likely be your fastest loading solution. However, many developers use another format (e.g. FBX, OBJ, etc.) because those are more flexible and export directly from DCC tools.
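To illustrate the "laid out exactly as your GLES buffers expect" idea, here is a rough Kotlin/GLES2 sketch that memory-maps a pre-baked vertex file and hands the bytes straight to glBufferData. The function name and file layout are hypothetical; it assumes the file already contains tightly packed, native-endian attribute data and that this runs on the GL thread.

import android.opengl.GLES20
import java.io.FileInputStream
import java.nio.ByteOrder
import java.nio.channels.FileChannel

fun loadRawVertexBuffer(path: String): Int {
    // Map the file; no parsing or conversion, the bytes are the VBO contents.
    val channel = FileInputStream(path).channel
    val data = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
        .order(ByteOrder.nativeOrder())
    channel.close()

    val vbo = IntArray(1)
    GLES20.glGenBuffers(1, vbo, 0)
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0])
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, data.capacity(), data, GLES20.GL_STATIC_DRAW)
    return vbo[0]
}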

Related

AOSP / Android 7: How is EGL utilized in detail?

I am trying to understand the Android (7) graphics system from the system integrator's point of view. My main focus is the minimum functionality that needs to be provided by libegl.
I understand that SurfaceFlinger is the main actor in this domain. SurfaceFlinger initializes EGL, creates the actual EGL surface, and acts as a consumer for buffers (frames) created by the app. The app, in turn, executes the main part of the required GLES calls. Obviously, this leads to restrictions, as SurfaceFlinger and apps live in separate processes, which is not the typical use case for GLES/EGL.
Things I do not understand:
Do apps on Android 7 always render into EGL_KHR_image buffers which are sent to SurfaceFlinger? This would mean there is always an extra copy step (even when no composition is needed), as far as I understand. Or is there also some kind of optimized fullscreen mode, where apps render directly into the final EGL surface?
Which inter-process sharing mechanisms are used here? My guess is that EGL_KHR_image, used with EGL_NATIVE_BUFFER_ANDROID, defines the exact binary format, so that an image object may be created in each process, with the memory shared via ashmem. Is this already the complete/correct picture, or am I missing something here?
I'd guess these are the main points I am lacking confident knowledge about at the moment. I certainly have some follow-up questions (like how gralloc and composition fit into this), but, in keeping with this platform, I'd like to keep this question as compact as possible. Still, besides the main documentation page, I am missing documentation clearly targeted at system integrators, so further links would be really appreciated.
My current focus is typical use cases that would cover the vast majority of apps compatible with Android 7. If there are corner cases like long-deprecated compatibility shims, I'd like to ignore them for now.

What is a good approach to implementing a real time raster plot on Android?

What is a suggested implementation approach for a real time scrolling raster plot on Android?
I'm not looking for a full source code dump or anything, just some implementation guidance or an outline on the "what" and "how".
what: Should I use built-in Android components for drawing or go straight to OpenGL ES 2? Or maybe something else I haven't heard of. This is my first bout with graphics of any sort, but I'm not afraid to get a little dirty with OpenGL.
how: Given a certain set of drawing components, how would I approach implementation? I feel like the plot is basically a texture that needs updating and translating.
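Roughly, I imagine something like this untested sketch (GLES2 called from Kotlin; the names are just placeholders for whatever renderer this ends up in):

import android.opengl.GLES20
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Writes one spectrum (one byte per frequency bin) into a single column of an
// existing GL_LUMINANCE texture; the shader would offset its texture
// coordinates each frame to produce the scrolling effect.
fun uploadSpectrumColumn(textureId: Int, columnIndex: Int, magnitudes: ByteArray) {
    val pixels = ByteBuffer.allocateDirect(magnitudes.size).order(ByteOrder.nativeOrder())
    pixels.put(magnitudes).position(0)

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
    // Rows of a width-1 upload are 1 byte each, so drop the default 4-byte alignment.
    GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1)
    GLES20.glTexSubImage2D(
        GLES20.GL_TEXTURE_2D, 0,
        columnIndex, 0,        // x offset = column being replaced
        1, magnitudes.size,    // width 1, height = number of frequency bins
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, pixels
    )
}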
Background
I need to design an Android application that, as part of its functionality, displays a real-time scrolling raster plot (i.e. a spectrogram or waterfall plot). The data will first be coming out of libUSB and passing through native C++, where the signal processing will happen. Then, I assume, the plotting can happen either in C++ or Kotlin, depending on what is easier and whether passing the data over the JNI is a big enough bottleneck or not.
My main concern is drawing the base raster itself in real time, not extras such as zooming, axes, or other added functionality. I'm trying to start simple.
Constraints
I'm limited to free software.
Platform: Android 7.0+ on modern devices
GPU hardware acceleration is preferred, as the CPU will already be doing a good amount of number crunching to bring the streaming data to the plot.
Thanks in advance!

Starling for iOS & Android: Very low FPS in a static situation

I created an application with Starling. On new mobile devices it performs amazingly well; however, on older devices (e.g. iPhone 4) I encounter a very odd lag.
As far as I can tell, I have a completely static situation:
There are quite a few display objects added to the stage (many of them are buttons, in case it matters), and their properties are not changed at all after initialization (x, y, rotation, etc.).
There are no enterframes / timeouts / intervals / requests of any kind in the background.
I'm not allocating / deallocating any memory.
In this situation, there's an average of 10 FPS out of 30, which is very odd.
Since Starling is a well established framework, I imagine it's me who's doing something wrong / not understanding something / not aware of something.
Any idea what might be causing it?
Has anyone else experienced this sort of problem?
Edit:
After reading a little, I've optimized in every possible way according to this thread:
http://wiki.starling-framework.org/manual/performance_optimization
I reduced the draw calls from around 90 to 12, flattened sprites, set the blend mode to none in specific cases to ease the load on the CPU, and so on...
To my surprise when I tested again, the FPS was unaffected:
fps: 6 / 60
mem: 19
drw: 12
Is it even possible to get normal fps with Starling on mobile? What am I missing?
I am using big textures that are scaled down to the size of the device; is it possible that such a thing affects the FPS that much?
Regarding "Load textures from files/URLs", I'm downloading different piles of assets for different situations, therefore I assumed compiling each pile into a SWF would be way faster than sending a separate request for each file. The problem is, for that I can only use embed, which apparently uses twice the memory. Do you have any solution in mind to enjoy the best of both worlds?
Instead of downloading your assets 'over the wire' and manually caching them for re-use, you can package the asset files in your app bundle (rather than embedding them in code) and then use the Starling AssetManager to load the textures at the resolution/scale that you need for the device:
i.e.:
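// 'assets' is a starling.utils.AssetManager; 'appDir' and 'scaleFactor' are
// defined elsewhere in the linked sample (the application directory and
// Starling's content scale factor). After enqueueing, assets.loadQueue(...)
// performs the actual loading.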
assets.enqueue(
    appDir.resolvePath("audio"),
    appDir.resolvePath(formatString("fonts/{0}x", scaleFactor)),
    appDir.resolvePath(formatString("textures/{0}x", scaleFactor))
);
Ref: https://github.com/Gamua/Starling-Framework/blob/master/samples/scaffold_mobile/src/Scaffold_Mobile.as
Your application bundle gets bigger, of course, but you do not take the 2x RAM hit of using 'embed'.
Misc perf ideas from my comment:
Are you testing FPS in "Release" mode?
Are you using textures that are scaled down to match the resolution of the device before loading them?
Are you mixing BLEND modes that are causing additional draw calls?
Ref: The Performance Optimization page is great reading for optimizing your usage of Starling.
Starling is not a miracle solution for mobile devices. There's quite a lot of code running in the background in order to make the GPU display anything. You, the coder, have to make sure the number of draw calls is kept to a minimum: the weaker the device, the fewer draw calls you can afford. It's not rare to see people using Starling without paying any attention to their draw calls.
The size of the graphics used is mostly relevant for GPU upload time and not so much for GPU display time. So of course all relevant textures need to be uploaded prior to displaying any scene; you simply cannot try to upload a new texture while a scene is playing. Even uploading a small texture will cause idling.
Displaying everything through Starling is not always a smart choice. In render mode the GPU gets a lot of power, but the CPU still has some headroom left. You can reduce the amount of GPU uploading and GPU load simply by displaying static UI elements with the classic display list (which is where the Starling framework's design falls short: it makes it very difficult to use both display systems together, and that is one of the downsides of using this framework). Most professionals I know, including myself, don't use Starling for that reason.
Your system must be flexible: embed your assets for mobile, avoid external SWFs as much as possible, and be able to switch to another system for the web. If you expect to use one asset system for the mobile/desktop/web versions of your app, you are setting yourself up for failure. Embedding on mobile is critical for memory management, since the AIR platform internally manages the cache of those embedded assets. Thanks to that, memory consumption stays under control when creating new instances of those assets; if you don't embed, you are on your own.
Regarding overall performance, a very weak Android device will probably never get past 10 FPS when using Starling or any Stage3D framework, because of the amount of code those frameworks need to run (draw calls) in the background. On a weak device that amount of code is already enough to completely overload the CPU. On the other hand, on a weak device you can still get good performance and a good user experience by using GPU mode instead of render mode (so no Stage3D) and displaying mostly raster graphics.
IN RESPONSE TO YOUR EDIT:
12 draw calls is very good (90 was pretty high).
That you still get low FPS on some devices is not that surprising. Low-end Android devices in particular will always have low FPS in render mode with a Stage3D framework, because of the amount of code those frameworks have to run to render one frame. The size of the textures you are using should not affect the FPS that much (that's the point of Stage3D), but reducing the size of those graphics would help with GPU upload time.
Now, optimization is the key, and optimizing on a low-end device with low FPS is the best way to go, since whatever you do will also have a great effect on better devices. Start by running tests that display only static graphics, with little or no code on your part, just to see how far the Stage3D framework can go on its own on those weak devices without losing any FPS, and then optimize from there. The number of objects displayed on screen plus the number of draw calls is what affects FPS with these Stage3D frameworks, so keep a count of both and always look for ways to reduce them. On some low-end devices it's not practical to try to keep 60 FPS, so switch to 30 and adjust your rendering accordingly.

How to properly use glDiscardFramebufferEXT

This question relates to the OpenGL ES 2.0 Extension EXT_discard_framebuffer.
It is unclear to me which cases justify the use of this extension. If I call glDiscardFramebufferEXT() and it puts the specified attachable images in an undefined state, this means that either:
- I don't care about the content anymore since it has been used with glReadPixels() already,
- I don't care about the content anymore since it has been used with glCopyTexSubImage() already,
- I shouldn't have made the render in the first place.
Clearly, only the first two cases make sense. Or are there other cases in which glDiscardFramebufferEXT() is useful? If yes, which ones?
glDiscardFramebufferEXT is a performance hint to the driver. Mobile GPUs generally use tile-based (deferred) rendering; in that context, marking framebuffer attachments as discarded saves the GPU work and memory bandwidth, since it does not need to write their contents back to main memory.
Typically you will discard:
the depth buffer, since it is not presented on screen; it is only used during rendering on the GPU.
the MSAA buffer, since it is resolved into a smaller buffer for presenting to screen.
More generally, any buffer that is only used during rendering on the GPU should be discarded so it is not written back to main memory.
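For example, a typical end-of-frame discard might look like the sketch below. It uses the ES 3.0 core equivalent glInvalidateFramebuffer, since the standard Android Java/Kotlin bindings do not expose glDiscardFramebufferEXT; on a pure ES 2.0 context you would call the extension from native code instead.

import android.opengl.GLES30

fun discardDepthAfterRendering() {
    // ...all draw calls for the frame have already been issued at this point...
    // For an application-created FBO the token is GL_DEPTH_ATTACHMENT; for the
    // default framebuffer it would be GL_DEPTH instead.
    val attachments = intArrayOf(GLES30.GL_DEPTH_ATTACHMENT)
    GLES30.glInvalidateFramebuffer(GLES30.GL_FRAMEBUFFER, attachments.size, attachments, 0)
}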
The main situation where I've seen DiscardFramebuffer used is when you have a multisampled renderbuffer that you have just resolved to a texture using BlitFramebuffer (or ResolveMultisampleFramebufferAPPLE on iOS), in which case you no longer care about the contents of the original buffer.
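A rough Kotlin sketch of that resolve-then-discard pattern, again using the ES 3.0 entry points (msaaFbo, resolveFbo, width and height are assumed to be created elsewhere):

import android.opengl.GLES30

fun resolveAndDiscard(msaaFbo: Int, resolveFbo: Int, width: Int, height: Int) {
    // Resolve the multisampled contents into the single-sample FBO.
    GLES30.glBindFramebuffer(GLES30.GL_READ_FRAMEBUFFER, msaaFbo)
    GLES30.glBindFramebuffer(GLES30.GL_DRAW_FRAMEBUFFER, resolveFbo)
    GLES30.glBlitFramebuffer(
        0, 0, width, height, 0, 0, width, height,
        GLES30.GL_COLOR_BUFFER_BIT, GLES30.GL_NEAREST
    )
    // The resolved copy is what gets used from here on, so the original
    // multisampled color and depth contents can be thrown away.
    val attachments = intArrayOf(GLES30.GL_COLOR_ATTACHMENT0, GLES30.GL_DEPTH_ATTACHMENT)
    GLES30.glInvalidateFramebuffer(GLES30.GL_READ_FRAMEBUFFER, attachments.size, attachments, 0)
}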

Turning a series of raw images into movie frames in Android

I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage.
Looking over the Android API, it looks like there are calls to create movie files, but those seem entirely geared toward making a live recording from the camera. While nice, I can't use that for my purposes, as I need to put annotations and other post-production touches on the images as they come in, before they get fed into a movie (plus, the images come in far too slowly for a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want (at least without touching the NDK).
Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?
Is there any way I can use the API to do something like this?
No.
Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?
It is possible you can find a Java library that lets you assemble movies out of stills and annotations, but I would be rather surprised if it met your needs, would run on Android, and would run acceptably on mobile phone hardware.
IMHO, the best route is to use a Web service. Use the device for data collection, use the server to do all the heavy lifting of assembling the movie out of the parts.
If you have to do it on-device, the NDK seems like the only practical route.
Do you just want to create movie files or do you want to display them on the phone?
If you just want to display the post-processed, annotated images as a movie, then it's possible. What is the format of your images? Currently, I'm able to display MJPEG video on a Nexus One (running 2.1) without any noticeable lag and without using the NDK. In my case the images are coming from the network.
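To give an idea of why no NDK is needed: the core of it is just decoding each JPEG frame out of the MJPEG stream and blitting it to a SurfaceView. A simplified Kotlin sketch (function and parameter names are illustrative):

import android.graphics.BitmapFactory
import android.graphics.Rect
import android.view.SurfaceHolder

fun drawJpegFrame(holder: SurfaceHolder, jpegBytes: ByteArray) {
    // Decode one frame pulled out of the MJPEG stream.
    val bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size) ?: return
    val canvas = holder.lockCanvas() ?: return
    try {
        // Scale the frame to fill the surface.
        canvas.drawBitmap(bitmap, null, Rect(0, 0, canvas.width, canvas.height), null)
    } finally {
        holder.unlockCanvasAndPost(canvas)
    }
}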
On the other hand, if you just want to create movie files and store them on the phone or somewhere else, then CommonsWare's idea of delegating this to a server makes more sense, since you will have more processing power and storage there. This requires that you have network access and don't mind sending all the images from the phone to the server and then downloading the movie file back to the phone.
