This question is more about looking for advice.
I've developed an almost complete user interface in pure C++ and OpenGL ES 2, and I'm currently working on making click/touch events propagate the way they do in HTML. I can't remember why I decided to build a UI in OpenGL, but now I realize that I'm reinventing the wheel.
So currently I have many "elements": buttons, containers, scrolls, pictures, animated pictures, borders, text, drag-and-drops. I can easily set where I want each element and attach sounds. I really don't know how fast and efficient it all is right now, but I've made it as fast and memory-efficient as I can, and I'm planning to use multi-threading in the future. My goal is to approach a system like HTML.
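As an illustration of the HTML-style propagation being described, here is a minimal sketch in Java (chosen only for comparison with the Android side; Element, onTouch and dispatch are hypothetical names, not the asker's API): the touch is delivered to the deepest element under the point, then bubbles up through the parents until one of them consumes it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of HTML-style event bubbling in a custom element tree.
class Element {
    Element parent;
    final List<Element> children = new ArrayList<>();
    float x, y, width, height;               // absolute screen-space bounds, for simplicity

    boolean contains(float px, float py) {
        return px >= x && px < x + width && py >= y && py < y + height;
    }

    // Return true to consume the event and stop it from bubbling further up.
    boolean onTouch(float px, float py) { return false; }

    static boolean dispatch(Element root, float px, float py) {
        // Target phase: descend to the deepest element under the touch point.
        Element target = root;
        boolean descended = true;
        while (descended) {
            descended = false;
            for (Element child : target.children) {
                if (child.contains(px, py)) { target = child; descended = true; break; }
            }
        }
        // Bubble phase: walk back up until an element consumes the event.
        for (Element e = target; e != null; e = e.parent) {
            if (e.onTouch(px, py)) return true;
        }
        return false;
    }
}
```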
And now that I'm looking at how to implement my UI in native Android (Java), I realize that I was on the right track: all the properties, methods and structure I'm implementing in C++ and OpenGL are almost the same as in Android. I even think my implementation makes images and sounds easier to use, because I've automated all of it: I can put a file in the assets folder, refer to it by name in the code, and the program returns it, with efficient memory management.
The current elements are the ones I need for a game interface, so if I needed another kind of element I'd have to write it, but right now I have all the elements I need, I think.
Maybe my C++ code is faster than Java.
But even with all this, I know I still have a lot to do before I have a full UI. So can you give me some advice, or share your experience with the advantages and disadvantages? Is it worth continuing with my own UI, or would it be better to learn Android UI (because I only know a little bit of it) and modify my whole game to use the new architecture/system? What do you think?
I'm porting a vector graphics editor from iOS to Android. The app must draw a complex hierarchy of graphical objects in an efficient manner, so that the graphics can be edited with gestures in real time. The edited work commonly consists of images, text and graphical primitives (lines, circles etc.). UI elements like selection highlights are rendered on a separate layer on the top.
On the iOS app, if one component of the graphic changes (for example a small text element changes its content), only that text element is re-rendered.
On iOS, we use CALayer objects from the CoreAnimation framework. This works very well. What framework can be used on Android for this use case? Is there an established "native" way to do it, or are third-party frameworks usually used?
Android does not have anything similar out of the box. There is an animation framework, but it is limited to simple behavioral animations. To create what you want, you need to use a SurfaceView or GLSurfaceView together with plain OpenGL. You may also try the ordinary Canvas of a View, though your possibilities will be more limited.
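For reference, a minimal sketch of the GLSurfaceView route (the activity name is hypothetical and the renderer bodies are placeholders; RENDERMODE_WHEN_DIRTY plus requestRender() is one way to approximate the "only re-render what changed" behaviour asked about):

```java
import android.app.Activity;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class EditorActivity extends Activity {
    private GLSurfaceView glView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        glView = new GLSurfaceView(this);
        glView.setEGLContextClientVersion(2);          // request an OpenGL ES 2.0 context
        glView.setRenderer(new GLSurfaceView.Renderer() {
            @Override public void onSurfaceCreated(GL10 unused, EGLConfig config) {
                GLES20.glClearColor(1f, 1f, 1f, 1f);   // compile shaders, upload textures here
            }
            @Override public void onSurfaceChanged(GL10 unused, int width, int height) {
                GLES20.glViewport(0, 0, width, height);
            }
            @Override public void onDrawFrame(GL10 unused) {
                GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                // draw the object hierarchy here, compositing layers back to front
            }
        });
        // Render on demand: call glView.requestRender() whenever a layer's content changes.
        glView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
        setContentView(glView);
    }

    @Override protected void onResume() { super.onResume(); glView.onResume(); }
    @Override protected void onPause()  { super.onPause();  glView.onPause();  }
}
```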
There are also wrappers around OpenGL and SurfaceView, like libgdx. It is mostly used for games, though, so it has much broader capabilities than you need, but it is less complicated than raw OpenGL.
Hope it helps.
I would like to create a UI similar to the one that Apple created for their Music app. I am specifically talking about the rounded items that you can scroll through to explore different types of songs.
There is a video here that will help:
https://www.youtube.com/watch?v=gr2dn6IAVzU
Apple has a very strong graphics API that makes this UI really easy to code. For instance, you can use physics, etc.
I am very new to Android. Does Canvas allow me to do the same thing? Do you have any recommendations?
No, there are no UI tools like that, and definitely no built-in physics engine like that on Android. You can probably find libraries, but it will all be custom code. And I wouldn't recommend Canvas for it; I'd go for OpenGL.
As an aside: that is the ugliest, most annoying UI I've seen in my life. I wouldn't use it if you paid me to. UI elements shouldn't bounce around and move; it provides no benefit and makes the app harder to use. Can you imagine using that if you had reduced vision or motor control issues? Whoever made that should be fired. It's a great argument for forcing software to comply with the ADA.
Hello everyone and thanks for your time.
Introduction to the problem: Basically, the main problem is that I'm designing an Android tablet application and I don't know exactly what the best way is to implement the kind of design I've been thinking about. I've read that it's possible to use other platforms apart from plain Android, like OpenGL or HTML, but I don't actually know how to do it or whether it's necessary.
What am I attempting to do? I didn't know how to explain my idea using words, so I decided to prepare a small image of it; here it is:
So, the idea is to drag and drop the buttons onto the colored square and detect whether they are colliding; if that happens, an event starts. Then all the buttons reorder themselves.
What is the question, then? I'm new to Android, and I have no idea what the best way is to start implementing this, whether I should bet on OpenGL or program directly in plain Android. And in either case, what is the best way to START CREATING something like this?
Last things: I want to make something clear, just in case: this IS NOT school work that someone asked me to do; I'm doing it on my own. I'm NOT ASKING PEOPLE TO DO MY WORK, I'm just asking for your opinion, help and guidance on how to start approaching this. (So I want your professional opinion, not a single line of code.) :D
Thank you very much to you all. :)
Martí.
This sounds like it'd be pretty easy to do in HTML/JavaScript using touch events, maybe with a support library for drag-and-drop events. I don't know if it matters to you, but the nice thing about that solution is that it would port pretty easily from Android to other mobile devices.
Like Quintin said, if performance is a big concern you can optimize and go to OpenGL, but you can probably knock out a simple prototype in HTML in under a day...
In my opinion you don't need any framework; you can do it all with plain Android, and I would advise you to do so. I'm not a good designer, but my suggestion would be the following (a rough code sketch follows after the list):
Two FrameLayouts: one for the color tools on the left side and one for the right side.
For the color tools, you can use a LinearLayout for every two colors. And if you want the exact background design, you can create it with image-editing software and set it as the background.
For the colors, you can use an ImageButton.
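For what it's worth, here is a rough sketch of that structure built programmatically in Java (an XML layout would usually be cleaner; the class name, the counts and the commented-out drawable are hypothetical):

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.ViewGroup;
import android.widget.FrameLayout;
import android.widget.ImageButton;
import android.widget.LinearLayout;

public class BoardActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Root: two panels side by side (color tools on the left, work area on the right).
        LinearLayout root = new LinearLayout(this);
        root.setOrientation(LinearLayout.HORIZONTAL);

        FrameLayout leftPanel = new FrameLayout(this);    // color tools
        FrameLayout rightPanel = new FrameLayout(this);   // drop target / work area

        // Color tools: one vertical column, one LinearLayout row per pair of colors.
        LinearLayout toolColumn = new LinearLayout(this);
        toolColumn.setOrientation(LinearLayout.VERTICAL);
        for (int row = 0; row < 3; row++) {
            LinearLayout pair = new LinearLayout(this);
            pair.setOrientation(LinearLayout.HORIZONTAL);
            for (int col = 0; col < 2; col++) {
                ImageButton colorButton = new ImageButton(this);
                // colorButton.setImageResource(R.drawable.color_swatch); // hypothetical drawable
                pair.addView(colorButton);
            }
            toolColumn.addView(pair);
        }
        leftPanel.addView(toolColumn);

        root.addView(leftPanel, new LinearLayout.LayoutParams(
                0, ViewGroup.LayoutParams.MATCH_PARENT, 1f));
        root.addView(rightPanel, new LinearLayout.LayoutParams(
                0, ViewGroup.LayoutParams.MATCH_PARENT, 2f));
        setContentView(root);
    }
}
```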
I also advise you to read tutorials on how to design apps; here is a good one: http://mobile.tutsplus.com/tutorials/android/android-layout/
OpenGL seems to be your best bet here. I haven't had much experience with Android OpenGL, but it is going to be the most efficient way to write the app, since you are dealing with collision detection and colours.
Android OpenGL will also be the better option for performance, since the rendering runs natively on the GPU, whereas HTML is going to be interpreted at runtime.
As for using plain, standard Android layouts, I would not recommend that, since this is not a "standard" UI built from the standard components. You are creating an entirely unique, very graphical interface, so I would recommend an approach that matches.
EDIT: You will find that with standard layouts, ALL of your user interface components will have to be customized, and you will spend a lot of time in your onDraw() methods dealing with collision detection.
One problem with this is that you need to check whether each object collides with another, which makes certain types of objects aware of each other and risks circular references. With OpenGL, by contrast, you can keep one list of "Shape" objects and have a "checkCollision(Shape draggedObject)" method that iterates through the list and checks each collision with a test optimized for the simplest object.
The second problem is that doing collision detection in the onDraw() method will make your app lag and feel sticky to the user during a drag. The more components, the more problems.
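As a rough illustration of that approach (Shape, CollisionWorld and the bounding-box test are illustrative names, not from any particular framework):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the "one list of shapes" collision check described above.
class Shape {
    float x, y, width, height;           // axis-aligned bounding box

    boolean intersects(Shape other) {    // cheapest possible first-pass test
        return x < other.x + other.width && x + width > other.x
            && y < other.y + other.height && y + height > other.y;
    }
}

class CollisionWorld {
    private final List<Shape> shapes = new ArrayList<>();

    void add(Shape s) { shapes.add(s); }

    // Returns the first shape the dragged object overlaps, or null if none.
    // Called from the touch/drag handler, not from onDraw(), to keep drawing cheap.
    Shape checkCollision(Shape draggedObject) {
        for (Shape s : shapes) {
            if (s != draggedObject && s.intersects(draggedObject)) {
                return s;
            }
        }
        return null;
    }
}
```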
EDIT 2:
Here are some resources for Android OpenGL:
http://developer.android.com/guide/topics/graphics/opengl.html
http://www.learnopengles.com/android-lesson-one-getting-started/
2D example with OpenGL
I'm creating an application for Android 3.0 that relies heavily on property animations. I have it working, but there are significant slowdowns in certain parts. I believe that multi-threading the UI would help a lot; naturally, you can't really do that with Android's design. What I was wondering is: is it possible to render my View objects in a SurfaceView and still use the property animation framework that's already in place? I've seen examples of drawing objects using the Canvas class, but I don't want to re-implement all the animations when they're already right there. I haven't seen anyone use any of the Android animations (frame, tween, property) in a SurfaceView.
Android animations don't work in a SurfaceView; you have to do them manually with a secondary thread. It's not that hard, but you will need to rewrite almost everything.
If you use a lot of animations (like in a game), then you should really consider switching to a SurfaceView, because it's much faster (and the only way to get smooth animations in this case, since you have probably hit the limit of the built-in animations, which have a lot of overhead). If you don't switch now, it will only get harder later...
I can provide you with some samples if you decide to go this way.
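(Not the samples mentioned above, just a minimal sketch of the manual approach, assuming a simple looping animation: a secondary thread locks the SurfaceView's canvas each frame and interpolates the animated value itself.)

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

// Sketch of a SurfaceView animated from a secondary thread, as the answer describes.
public class AnimatedSurfaceView extends SurfaceView implements SurfaceHolder.Callback {
    private Thread renderThread;
    private volatile boolean running;
    private final Paint paint = new Paint();

    public AnimatedSurfaceView(Context context) {
        super(context);
        getHolder().addCallback(this);
        paint.setColor(Color.RED);
    }

    @Override public void surfaceCreated(SurfaceHolder holder) {
        running = true;
        renderThread = new Thread(() -> {
            long start = System.currentTimeMillis();
            while (running) {
                Canvas canvas = holder.lockCanvas();
                if (canvas == null) continue;
                try {
                    // Manual "property animation": interpolate x over a 1-second loop.
                    float t = ((System.currentTimeMillis() - start) % 1000) / 1000f;
                    float x = t * canvas.getWidth();
                    canvas.drawColor(Color.BLACK);
                    canvas.drawCircle(x, canvas.getHeight() / 2f, 40f, paint);
                } finally {
                    holder.unlockCanvasAndPost(canvas);
                }
            }
        });
        renderThread.start();
    }

    @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }

    @Override public void surfaceDestroyed(SurfaceHolder holder) {
        running = false;
        try { renderThread.join(); } catch (InterruptedException ignored) { }
    }
}
```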
What considerations should one be mindful of when constructing a GLSurfaceView-centric UI?
This is for a game, and the bulk of the UI will be an intro screen (start, options, about, exit) and a level selector screen. I've put a lot of time into the rendering/animation for the game using OpenGL, and I'm no graphic artist, so taking the OpenGL UI route seems to make sense to me. But I'm an Android novice and need some outside input. Thanks for reading.
There is nothing wrong with that, especially for a game. The only problem is that you will have to do everything yourself. Most games seem to do this.
Given how easily one activity can start another, I would say it is worthwhile to separate your options and level selection from the game itself. If you're unfamiliar with starting activities and/or passing information between them, there are plenty of good tutorials and examples to help. You could try the ubiquitous Notepad tutorial if you haven't already (http://developer.android.com/resources/tutorials/notepad/index.html).
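A minimal sketch of that separation (the activity names and the extra key are hypothetical): the menu activity launches the OpenGL game activity and passes the selected level along as an Intent extra.

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

// Hypothetical menu activity that launches the OpenGL game activity,
// passing the chosen level along as an Intent extra.
public class MenuActivity extends Activity {
    static final String EXTRA_LEVEL = "com.example.game.EXTRA_LEVEL"; // hypothetical key

    void startLevel(int levelIndex) {
        Intent intent = new Intent(this, GameActivity.class);
        intent.putExtra(EXTRA_LEVEL, levelIndex);
        startActivity(intent);
    }
}

// The game activity reads the extra and hands it to the GL renderer.
class GameActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        int level = getIntent().getIntExtra(MenuActivity.EXTRA_LEVEL, 0);
        // ... set up the GLSurfaceView and load the selected level ...
    }
}
```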
The advantages of this method are that it keeps your OpenGL/game activity less cluttered, and that you can use tried-and-true Android UI elements instead of building your own from scratch.