The new Jetpack Compose component added to the Architecture Components is similar to the way Flutter builds its UI.
How does it render the UI, though?
Does it use a native rendering engine like Skia, or does it still follow the ViewGroup approach as before?
Compose creates a single view, currently named AndroidComposeView, which inherits from ViewGroup and draws the widget tree onto its canvas. It also processes motion and keyboard events for this view.
There may be additional helper views attached to it due to implementation details, but for the "widgets" of Compose you won't see classical Views in the view hierarchy. The Layout Inspector currently doesn't help with Compose; you can try it, but you won't see your widgets.
Developers are promised the ability to create their own customized widgets, which can paint directly on a Canvas, lay out themselves or their children, and process input events.
However, the Canvas and many of the other classes used here are not the standard framework classes. For example, the Canvas used by Compose is redefined in Kotlin, and in the same way there are new Paint, Shape, and other classes. Internally they use the framework classes for their work, but that's an implementation detail; when drawing, you use these new classes.
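The "redefined class that delegates to the framework" idea can be illustrated with a minimal pure-Kotlin sketch. To be clear, FrameworkCanvas and ComposeCanvas below are hypothetical stand-ins, not the real Compose API; they only show the wrapper pattern being described:

```kotlin
// Hypothetical sketch of the wrapper pattern described above.
// Neither class is the real Compose API.

// Stand-in for the platform's android.graphics.Canvas; records draw calls.
class FrameworkCanvas {
    val ops = mutableListOf<String>()
    fun drawCircle(x: Float, y: Float, r: Float) {
        ops += "circle($x, $y, $r)"
    }
}

// Stand-in for a Kotlin-defined Canvas: it exposes its own API and
// forwards to the underlying framework object as an implementation detail.
class ComposeCanvas(private val backing: FrameworkCanvas) {
    fun drawCircle(centerX: Float, centerY: Float, radius: Float) {
        backing.drawCircle(centerX, centerY, radius)
    }
}

fun main() {
    val framework = FrameworkCanvas()
    val canvas = ComposeCanvas(framework)
    // Client code only ever talks to the new class...
    canvas.drawCircle(10f, 20f, 5f)
    // ...but the framework object did the actual work.
    println(framework.ops)
}
```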
Since Compose is a library and not present natively on Android devices, the library is included in each app that uses Compose.
Also, there is no native code involved here; everything is done in Kotlin and becomes part of your app's dexed code. By using Compose, your app won't contain any additional native library (probably, as long as the creators don't change their mind).
No, it doesn't use anything from the old UI toolkit. In fact, they are building it specifically to overcome the old toolkit's problems.
Compose is not Views; it's a new set of Jetpack UI widgets. Basically, it's a Kotlin compiler plugin that renders to the Android Canvas (I suppose there's no documentation for this yet) with full compatibility with the existing Android View system. The last Dev Summit had a talk covering how it works internally, and I/O had another talk too.
I've developed an Android library which exposes some activities, including one named AuthenticateActivity, along with some helper classes.
I have a second Android project that pulls in this library. I am trying to create a custom React Native UI component to display the AuthenticateActivity from JavaScript.
I have followed the documentation on creating native UI components as best I can; however, my use case is slightly different. As I understand it, I need to create a ViewManager that extends SimpleViewManager, but SimpleViewManager takes a custom view class as a generic parameter. In my case I'm simply trying to display the activity defined in the library, not create a fully custom View implementation.
Any ideas how I can achieve this?
For displaying a completely new Activity you should just use a native module.
There's a good example that shows how to wrap a custom Activity.
We can create animations using both the android.animation package and the android.transition package, but I would like to know the main difference between these packages, since even custom transitions use Animators from the android.animation package.
From the documentation of android.animation:
These classes provide functionality for the property animation system, which allows you to animate object properties of any type.
From the documentation of android.transition:
The classes in this package enable "scenes & transitions" functionality for view hierarchies.
From there, the conclusion can be drawn that android.animation mostly handles individual View animation (e.g. a FAB moving left on click), while android.transition is mostly concerned with view hierarchy/layout transition animation (e.g. Material Design shared elements).
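Conceptually, the property animation system boils down to computing an intermediate value of a property from an elapsed-time fraction. Here is a minimal pure-Kotlin sketch of that idea; `animatedValue` is a hypothetical helper, not the actual android.animation implementation:

```kotlin
// Minimal sketch of what a property animator computes: an intermediate
// value between a start and an end value for a given elapsed fraction.
// This is an illustration, not the real ValueAnimator internals.

// Linear interpolation, as a TypeEvaluator for floats would do.
fun animatedValue(start: Float, end: Float, fraction: Float): Float =
    start + (end - start) * fraction

fun main() {
    // Animating a property such as translationX from 0f to 100f:
    println(animatedValue(0f, 100f, 0.0f)) // start of the animation
    println(animatedValue(0f, 100f, 0.5f)) // halfway through
    println(animatedValue(0f, 100f, 1.0f)) // end of the animation
}
```

A transition, by contrast, captures the start and end states of a whole view hierarchy (the two Scenes) and generates animators like this one for every property that changed.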
Do read about the fundamental difference at http://developer.android.com/about/versions/android-4.4.html in the 'Animation & Graphics' section. Basically, you can transition between different states of the UI by defining Scene objects.
I don't have any code to support this as I haven't used it myself yet, but the link above should get you started.
I'm writing a cross-platform app for iOS and Android using MvvmCross.
The Android version makes use of nested Fragments. For example, the home view is a navigation drawer, its various navigation hub views are Fragments which may be split views containing other fragments, and on top of that, each view may show a dialog fragment as well.
Additionally, not all ViewModels are shown via ShowViewModel(); some of them are used more like PropertyChanged event providers, as demonstrated in the N=32 video.
This works fine until a configuration change takes place (typically, rotating the device). When the fragment views are recreated, their ViewModels aren't, and are set to null. This is hinted at in MvvmCross issue #636, where Stuart also mentions he'd like the project to come up with some best-practice advice.
My question now is what are the best practices for this? What do you do if you have to properly support Android configuration changes in MvvmCross?
I've tried working around the problem as outlined in the issue linked above, i.e. by some form of ViewModel registry in the parent ViewModels, and also by trying to serialize the Fragment's ViewModel when saving its instance state, with limited success. The results felt hackish at best. The problem remains that a Fragment doesn't know how to recreate its View Model in MvvmCross. Oh, and disabling configuration changes on device rotation doesn't count as an answer. ;-)
Obviously this answer is not a direct answer to your question but I feel its related enough to mention here.
In my Android apps I have started to inject a Controller (or MVA-style Adapter) into a View / Fragment / Activity using the Dagger dependency injection library. This has the crucial property of maintaining the instance of the Controller class, so on rotation / configuration change the same Controller is re-injected.
It seems that Mvx.Resolve() should ideally be able to do this, or you're gonna have a bad time. If it does not, introducing a caching layer between your view classes and the Mvx class seems the only option to me. This is only my first hour or so reading about Xamarin, so I may be off the mark, but I have been an Android dev for 5 years now, so I thought I would add my 2 pence. :)
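The retained-controller idea above can be sketched in plain Kotlin. ControllerCache and HomeController are hypothetical names for illustration only; this is not an MvvmCross or Dagger API, just the bare mechanism of a cache that outlives the views:

```kotlin
// Hypothetical sketch of a retained-controller cache: the cache outlives
// the views, so a recreated view gets the same controller instance back.

object ControllerCache {
    private val controllers = mutableMapOf<String, Any>()

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> getOrCreate(key: String, create: () -> T): T =
        controllers.getOrPut(key, create) as T

    // Call when the screen is finished for good, not on rotation.
    fun clear(key: String) {
        controllers.remove(key)
    }
}

class HomeController {
    var counter = 0
}

fun main() {
    // First creation of the view:
    val first = ControllerCache.getOrCreate("home") { HomeController() }
    first.counter = 42

    // After a configuration change, the recreated view asks again and
    // receives the very same instance, with its state intact:
    val second = ControllerCache.getOrCreate("home") { HomeController() }
    println(second === first) // same instance
    println(second.counter)   // state survived the recreation
}
```

In a real app the tricky part is choosing the key and deciding when to call `clear`, which is exactly the lifecycle bookkeeping the question is about.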
Java is used to build GUI components such as Views and widgets. The official site says they don't include AWT/Swing as part of their Java bundle, so what implementation (a native wrapper, if any?) do they have in place? Also, is it possible to create a user interface from scratch for Android apps without extending any View class?
It's a custom UI toolkit unrelated to AWT or Swing.
You can create custom subclasses of the View class to draw whatever custom components you would like, but most of the time you can set attributes on the existing components to change the way they're drawn (like setting the drawables for a button).
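For example, restyling a stock component is often just a matter of attributes in the layout XML rather than subclassing View. In this sketch, @drawable/my_button_bg is a placeholder drawable name, not a real resource:

```xml
<!-- Layout fragment: restyling a stock Button via attributes instead of
     subclassing View. @drawable/my_button_bg is a placeholder name. -->
<Button
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Submit"
    android:background="@drawable/my_button_bg"
    android:textColor="#FFFFFF" />
```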
The 'Android Design' site recommends 'boundary feedback' for scrollable views.
http://developer.android.com/design/style/touch-feedback.html
http://i.stack.imgur.com/TuBkX.png
Is there any API or library that lets a custom view implement this easily and consistently, or should I implement it from scratch?
Are you "building custom"? If you stick to the UI elements from the API you should be fine. All the scroll views can already be configured to do different things for boundary cases (such as overscroll).
If you are building UI elements from scratch, you might consider simply overriding or subclassing existing UI elements to make them function the way you want. If not, you can examine the source to see how the various boundary cases (again, overscrolling) are implemented. But I get the feeling you're in the first category.
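If you do stay with the stock widgets, boundary feedback is mostly configuration. A minimal layout sketch using the standard `android:overScrollMode` attribute (values `always`, `ifContentScrolls`, `never`):

```xml
<!-- Layout fragment: a stock ScrollView already provides the glow-style
     boundary feedback; android:overScrollMode controls when it appears. -->
<ScrollView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:overScrollMode="always">

    <!-- scrollable content goes here -->

</ScrollView>
```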