androidplot: multiple y axes on the same graph - android

I am trying to display two sets of data on the same graph using androidplot; however, the data sets have very different scales. Whilst I could do some kind of normalisation of the data sets to make them compatible, I would rather plot them on multiple y axes - e.g.
http://www.cohort.com/2yaxes.gif
I can pretty much do this in achartengine, but I think that androidplot produces better looking graphs and provides a "better" (i.e. more android like) architecture.
Does anybody have any ideas:
a) if it can be done?
and, b) how to do it?

I'm guessing that by now the answer is a little late to be of use, but here it is in any case:
Androidplot does not provide any built-in support for displaying multiple y-axis scales with different tick intervals simultaneously. As you already noted, the most popular workaround is normalisation, which forces you to use a shared tick interval. Having said that, one Androidplot user was able to find a workaround by overlaying data:
Here's a link to the discussion:
http://old.androidplot.com/forum/index.php/topic,116.0.html
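For illustration, here is a minimal sketch of the normalisation workaround in Kotlin: rescale the second series into the value range of the first so both can share one y axis (the second scale then has to be labelled manually). It is plain Kotlin with placeholder values; wiring the resulting lists into Androidplot series is left to your existing plot setup.

```kotlin
// Sketch of the normalisation workaround: map the second data set into the
// value range of the first so both fit on a single shared y axis.
fun rescale(values: List<Double>, targetMin: Double, targetMax: Double): List<Double> {
    val min = values.minOrNull() ?: return values
    val max = values.maxOrNull() ?: return values
    if (max == min) return values.map { (targetMin + targetMax) / 2 }
    return values.map { targetMin + (it - min) / (max - min) * (targetMax - targetMin) }
}

fun main() {
    val temperature = listOf(18.0, 19.5, 21.0, 20.2)      // roughly 0..40
    val pressure = listOf(1001.0, 1003.5, 998.0, 1005.0)  // roughly 990..1030, a very different scale

    val tMin = temperature.minOrNull() ?: 0.0
    val tMax = temperature.maxOrNull() ?: 1.0
    // Plot this rescaled list as the second series; the right-hand axis labels
    // then have to be drawn manually in the original pressure units.
    val pressureOnTempScale = rescale(pressure, tMin, tMax)
    println(pressureOnTempScale)
}
```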

Related

How to hand off animation to Android developers?

I'm a designer interested in the different ways I can hand off animation to Android developers, and the best way to do that depending on the particular case.
1. JSON
I know Lottie works best for animating micro-interactions and creating animated illustrations, like those on onboarding pages. For a designer it's easy to provide the JSON file, since it can be generated with the Bodymovin plugin in After Effects. The developer just gets the file and uses it as is; no additional effort is needed on their side.
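For reference, a minimal sketch of what the developer typically does with that JSON, assuming the Lottie library (com.airbnb.lottie); the layout id and asset name are placeholders.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.airbnb.lottie.LottieAnimationView

// Sketch: play a designer-supplied Bodymovin/Lottie JSON shipped in src/main/assets.
// R.layout.activity_onboarding and R.id.lottie_view are placeholder names.
class OnboardingActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_onboarding)

        val animationView = findViewById<LottieAnimationView>(R.id.lottie_view)
        animationView.setAnimation("onboarding.json") // the file exported from Bodymovin
        animationView.repeatCount = 0                 // play once
        animationView.playAnimation()
    }
}
```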
2. Java or Kotlin
UI elements that require complex interaction are usually built with code, like BubblePicker, since it has changeable content in those bubbles and different rules for how it can be interacted with. Since design tools don't generate production-ready code, designers export video recordings from tools like Principle, or build clickable prototypes in ProtoPie or other tools. Designers try different ways to convey the idea of the animation, but in this case all of the implementation work is left to the developer.
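For context, this is roughly what that hand-built animation looks like on the developer's side; a minimal sketch using the standard property animation API, where the view, duration and interpolator are illustrative values a designer might specify.

```kotlin
import android.animation.ObjectAnimator
import android.view.View
import android.view.animation.OvershootInterpolator

// Sketch of a code-driven animation: scale a bubble-like view in with a slight
// overshoot. Duration and interpolator are the kind of values a designer hands off.
fun popIn(view: View) {
    ObjectAnimator.ofFloat(view, View.SCALE_X, 0f, 1f).apply {
        duration = 250
        interpolator = OvershootInterpolator()
        start()
    }
    ObjectAnimator.ofFloat(view, View.SCALE_Y, 0f, 1f).apply {
        duration = 250
        interpolator = OvershootInterpolator()
        start()
    }
}
```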
3. XML
I don't know when developers use this type, or whether designers can provide it by exporting from some design tool.
What are other technologies developers use to create animations?
What type of files, prototypes designers should provide for the developer considering different cases?
The Android animation API is really diverse, meaning there are lots of ways a developer may choose to deliver an animation. I dare say this should never be dictated by the nature or limitations of the provided resources. By resources, let's mean anything that's not actual code: bitmap images, audio files, and even text. Knowing which file types or formats the developer can or wants to use requires communication, and you can expect them not to always be the same.
Always provide a video of the animation, unless it can be described with a single word.
The most common animations in Android are:
Drawable animations. This type of animation usually happens inside a pre-defined area of the screen and is achieved by loading a series of images, one after the other. A common file type here is PNG, one image for each step of the animation (probably the same number of sprites you used for the video, never as many as 24 per second!). Keep in mind that, to support different screen sizes and densities, a different set of images will have to be provided for each series. If the image is simple, vector graphics will simplify the job for both the coder and the designer; regular SVGs are supported.
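As a rough sketch of that drawable (frame-by-frame) case, assuming the exported PNG series has been dropped into res/drawable under placeholder names:

```kotlin
import android.graphics.drawable.AnimationDrawable
import android.widget.ImageView
import androidx.core.content.ContextCompat

// Sketch: assemble a frame-by-frame animation from a PNG sprite series.
// R.drawable.frame_0 .. frame_3 are placeholder names for the exported images.
fun startFrameAnimation(imageView: ImageView) {
    val frames = AnimationDrawable().apply {
        val frameIds = listOf(R.drawable.frame_0, R.drawable.frame_1,
                              R.drawable.frame_2, R.drawable.frame_3)
        for (id in frameIds) {
            addFrame(ContextCompat.getDrawable(imageView.context, id)!!, 80) // 80 ms per frame
        }
        isOneShot = false
    }
    imageView.setImageDrawable(frames)
    frames.start()
}
```

The same series is more often declared as an animation-list drawable in XML; the programmatic form is shown here only to keep the example in one place.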
One can also animate the paths of the vector images, and even morph between several of them, as long as the paths are compatible for morphing: according to the documentation, they must have the same number of commands and the same number of parameters for each command. This takes a deeper understanding of the internals of the vector file format; if you can see the image just by reading the SVG code, go for it!
Another major group comprises animations of the application's UI elements (acting on properties like color, position and size). This type may or may not involve image resources, and is usually applied to components of pre-defined types, e.g. all buttons should have a ripple effect starting where the pointer clicks. Android has pre-defined effects with particular names (flip, zoom), so it can be useful to know this vocabulary.
Finally, layout changes are animations that happen when you reorder things on screen to better convey information or hint the user towards actions. Similar to these are Transitions, which happen when switching screens but can also be used to create animations that move images around, altering their positions and properties. They are really simple to implement and may require resource files of the same type as mentioned in point 1.
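For the layout-change case, a minimal sketch using the transition framework; the container and the revealed view are placeholders:

```kotlin
import android.view.View
import android.view.ViewGroup
import androidx.transition.AutoTransition
import androidx.transition.TransitionManager

// Sketch: any view added, removed or resized inside `container` after
// beginDelayedTransition() is animated to its new state automatically.
fun revealDetails(container: ViewGroup, details: View) {
    TransitionManager.beginDelayedTransition(container, AutoTransition())
    details.visibility = View.VISIBLE
}
```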
For reference, check the following which has some code but also illustrative examples:
https://developer.android.com/training/animation/overview
To know how to support different screen sizes, check:
https://developer.android.com/training/multiscreen/screensizes
To know more regarding SVG support in the Android platform: https://developer.android.com/studio/write/vector-asset-studio

Creating an interactive seating map in Android Studio using either a custom layout or webview

So I'm on my first Android project, implementing a native app. One of the components lets the user book a seat on a seating map.
General specifications:
Handle venues that have different seating layouts and numbers of seats (over 200)
The seats can have different sizes and shapes, e.g. large round VIP seats and standard square seats. Imagine a small round stadium with a lot of custom seating in different orientations, with a stage in the middle. (I have an image but can't post it because I don't have enough reputation.)
What I have tried so far:
Created a custom seat class with size, seat number, orientation and seat type
Used a StaggeredGridLayout and a view adapter to load each of these objects dynamically from a DB onto the layout.
My concerns: No matter how much I worked on it, it never came out the way I wanted. Basically, I think this approach is better suited to grid maps with uniformly sized objects, like bus seats spaced evenly apart, and without a huge irrelevant object like a stage in the middle.
After doing some research I was thinking about changing direction completely: use a WebView? Each venue would be a web page, linked from the venue object in the DB. In that web page I could lay this sort of venue out a lot more easily, because I could place the layout manually and style it with different div elements, or make an interactive JavaScript map, attach a button, and make a call from jQuery/JavaScript back to my native Android app.
What are your opinions, is this a feasible solution?
To be honest, interactive seat map development was the most challenging task I have ever had in my development life, but somehow I have done it for an asymmetric seat plan.
This type of work can be achieved in the following two ways.
1. Using GPU rendering - the much easier approach, since almost every device has a GPU by default. You detect the interaction point and check its RGB value to work out which path was touched, then return that through a client interface (with some drawbacks), e.g. a WebView talking to a JS interface in Android, among others.
2. Using CPU-based drawing - draw each and every path on the canvas and, for every single interaction, check whether the touched point is inside one of the paths. The more complex the paths are to render/draw on the canvas, the more CPU each interaction costs (huge CPU usage and some other constraints).
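To make option 2 concrete, here is one way the CPU-based hit test could look, using the framework's Path/Region classes; the Seat class and the per-touch rebuild of the Region are purely illustrative (in practice you would cache one Region per seat rather than rebuilding it on every touch).

```kotlin
import android.graphics.Path
import android.graphics.Rect
import android.graphics.RectF
import android.graphics.Region

// Illustrative seat model: each seat keeps the Path it is drawn with.
class Seat(val id: String, val path: Path)

// Test a touch point against every seat's path; returns the first hit, if any.
fun findTouchedSeat(seats: List<Seat>, x: Int, y: Int): Seat? {
    return seats.firstOrNull { seat ->
        val bounds = RectF()
        seat.path.computeBounds(bounds, true)
        val clip = Region(Rect(bounds.left.toInt(), bounds.top.toInt(),
                               bounds.right.toInt(), bounds.bottom.toInt()))
        Region().apply { setPath(seat.path, clip) }.contains(x, y)
    }
}
```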
I am tired of searching for an Android library as useful as Macaw is in iOS development; that library handles the interactive paths inside an SVG file and helps the client side interact with them.
Anyway, for me neither of the two options is really satisfying. I would go with importing/downloading the .svg file into your Android application and making it interactive using a JavascriptInterface in your app. Unfortunately, that is the least bad solution I have found so far.
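A minimal sketch of that WebView/JavascriptInterface wiring, with the bridge name, page and seat ids as placeholders:

```kotlin
import android.annotation.SuppressLint
import android.webkit.JavascriptInterface
import android.webkit.WebView
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity

// The venue page (a local or downloaded HTML/SVG file) calls
// SeatBridge.onSeatSelected(id) from its click handlers.
class SeatBridge(private val activity: AppCompatActivity) {
    @JavascriptInterface
    fun onSeatSelected(seatId: String) {
        activity.runOnUiThread {
            Toast.makeText(activity, "Selected seat $seatId", Toast.LENGTH_SHORT).show()
        }
    }
}

@SuppressLint("SetJavaScriptEnabled")
fun setUpSeatMap(webView: WebView, activity: AppCompatActivity) {
    webView.settings.javaScriptEnabled = true
    webView.addJavascriptInterface(SeatBridge(activity), "SeatBridge")
    // In the page's JS: SeatBridge.onSeatSelected(svgElement.id) from an onclick handler.
    webView.loadUrl("file:///android_asset/venue_42.html")
}
```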
UPDATE:
Here is my approach to making it workable; see my Medium blog post. I hope it helps you.

Do I need a classification method for my Android gesture recognition app?

I'm developing an Android application which recognizes accelerometer gestures. For now I'm just using dynamic time warping to get the smallest distance between the input gesture and about 200 unique gestures stored in a database. My application loops through the database and compares the input gesture with each stored gesture one by one. It finds the smallest distance and recognizes the gesture in about 5 seconds on average. The problem is: can I speed recognition up to half a second or less? Do I have to use a classification method like KNN and combine it with the DTW method? An example or references would be appreciated.
What you are currently doing is 1NN. In other words, you are already running the simplest possible KNN method, with K=1. Changing K won't speed anything up; it can only change the quality of the result. To speed up the process you can think about two approaches:
Using some indexing method, which will reduce the computational complexity of your distance-based search. This problem is called Nearest Neighbour Search (NNS), and even Wikipedia provides quite a lot of information on ways to speed it up;
Using a completely different classification method, which builds a much simpler model (possibly SVM or even some decision tree; it depends on your actual data).
My intuition is that Locality-Sensitive Hashing could be applied quite easily here. For instance, you could design the hash functions by picking K points randomly and checking whether the time series is not too "far" away from them.
I would go into more detail on that idea, but instead I found this paper: http://dtai.cs.kuleuven.be/events/MLSA13/papers/mlsa13_submission_13.pdf , and it seems to use a much simpler LSH function.
So this is one way out; I hope it works out. You can also implement a simple classifier and accept its answer when it is very certain about the gesture (I would recommend SVM here, as in the answer above), and fall back to looking for the closest neighbour when the sample is close to the decision boundary.
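For reference, a minimal sketch of the 1NN-DTW search the question describes, in Kotlin on 1-D signals, with one cheap speed-up: abandon a DTW computation as soon as a whole row of the cost matrix already exceeds the best distance found so far. Names and the single-axis signal are illustrative assumptions.

```kotlin
import kotlin.math.abs
import kotlin.math.min

// Plain O(n*m) DTW with early abandoning against the best distance so far.
fun dtw(a: DoubleArray, b: DoubleArray, bestSoFar: Double = Double.POSITIVE_INFINITY): Double {
    val n = a.size
    val m = b.size
    var prev = DoubleArray(m + 1) { Double.POSITIVE_INFINITY }
    var curr = DoubleArray(m + 1)
    prev[0] = 0.0
    for (i in 1..n) {
        curr[0] = Double.POSITIVE_INFINITY
        var rowMin = Double.POSITIVE_INFINITY
        for (j in 1..m) {
            val cost = abs(a[i - 1] - b[j - 1])
            curr[j] = cost + minOf(prev[j], curr[j - 1], prev[j - 1])
            rowMin = min(rowMin, curr[j])
        }
        if (rowMin > bestSoFar) return Double.POSITIVE_INFINITY // cannot beat the current best
        val tmp = prev; prev = curr; curr = tmp
    }
    return prev[m]
}

// The 1NN search the question already performs, made explicit.
fun classify(input: DoubleArray, templates: Map<String, DoubleArray>): String? {
    var best = Double.POSITIVE_INFINITY
    var label: String? = null
    for ((name, template) in templates) {
        val d = dtw(input, template, best)
        if (d < best) { best = d; label = name }
    }
    return label
}
```

Cheap lower bounds (e.g. LB_Keogh) prune far more aggressively than this; they are the kind of trick behind the very fast DTW mentioned in the next answer.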
You can do DTW at 10,000 Hz, even on a phone; see this video:
http://www.youtube.com/watch?v=d_qLzMMuVQg
eamonn

Any idea on the simplest graphing technique?

I need to create a View that represents an x/y axis. I will read in 9 separate sets of (x, y) coordinates and will have 9 checkboxes, one for each set of data. When the user checks the checkboxes, the corresponding waves will appear.
I have done some research and found three possibilities: Google Charts, AChartEngine, and Canvas.
Does anyone have ideas about what I could use for the simplest implementation of this?
I suggest you use AChartEngine rather than drawing to canvas.
You can download the library, javadocs and a demo application here.
The main advantage of using AChartEngine is that you won't need a data connection for rendering charts.
There are tutorials on youtube on getting started with AChartEngine. The library is free and open-source.
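For orientation, a minimal sketch of an AChartEngine line chart with one series per data set; toggling a checkbox then just means adding or removing the corresponding series and repainting the view. The class names are AChartEngine's; the colour scheme and data shape are placeholders.

```kotlin
import android.content.Context
import android.graphics.Color
import org.achartengine.ChartFactory
import org.achartengine.GraphicalView
import org.achartengine.model.XYMultipleSeriesDataset
import org.achartengine.model.XYSeries
import org.achartengine.renderer.XYMultipleSeriesRenderer
import org.achartengine.renderer.XYSeriesRenderer

// Build a line chart with one XYSeries per (x, y) data set.
fun buildLineChart(context: Context, dataSets: List<List<Pair<Double, Double>>>): GraphicalView {
    val dataset = XYMultipleSeriesDataset()
    val renderer = XYMultipleSeriesRenderer()
    dataSets.forEachIndexed { index, points ->
        val series = XYSeries("Set ${index + 1}")
        points.forEach { (x, y) -> series.add(x, y) }
        dataset.addSeries(series)
        renderer.addSeriesRenderer(XYSeriesRenderer().apply {
            color = Color.rgb((40 * index) % 256, 90, 200) // placeholder palette
        })
    }
    // A checkbox listener can call dataset.removeSeries(...) / addSeries(...) and
    // then repaint() on the returned view to show or hide a wave.
    return ChartFactory.getLineChartView(context, dataset, renderer)
}
```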

Image classification with opencv

We're currently working on an Android OCR app using OpenCV. The pre-processing, segmentation and feature extraction steps are done; classification is the remaining step and we're stuck. We're using a DB table which is filled with the features of each letter. At first we had only 1 feature per letter and used Euclidean distance, but the results weren't accurate, so more features were needed and we extracted them. The problem now is that we have 7 features per letter and absolutely no idea how to classify input based on them. Some have recommended using KNN, but we can't figure out how, and the OpenCV documentation on that part isn't clear, so if anybody can help it would be great.
Thanks in advance
Briefly, and without going into the details: vector spaces come in handy here. You need to build a feature vector
<feature1, feature2, feature3, ..., featureN> for each of the instances in your training set.
From each of these images you extract the features that you think, or have read in research articles, are important for image classification. For example you can compute the centroid, apply a Gaussian blur, build histograms, etc.
Once you have these values, linear algebra comes into play with some classification algorithm (kNN, SVM, naive Bayes, etc.) that you run on your training set; that is, you build your model.
Once the model is ready, you run it on your test set.
Use cross validation for more comprehensive results.
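To make the kNN step concrete for the 7-feature letter vectors, here is a minimal sketch assuming OpenCV's Java/Kotlin bindings (org.opencv.ml.KNearest); how the features and labels are read out of the DB table is left out, and the OpenCV native library must already be loaded.

```kotlin
import org.opencv.core.CvType
import org.opencv.core.Mat
import org.opencv.ml.KNearest
import org.opencv.ml.Ml

// Each training row is the 7-feature vector of one letter sample; the response
// is a numeric label for that letter (e.g. its index in your alphabet table).
fun trainKnn(features: Array<FloatArray>, labels: IntArray): KNearest {
    val samples = Mat(features.size, features[0].size, CvType.CV_32F)
    val responses = Mat(labels.size, 1, CvType.CV_32F)
    features.forEachIndexed { row, vector -> samples.put(row, 0, *vector) }
    labels.forEachIndexed { row, label -> responses.put(row, 0, label.toFloat()) }
    return KNearest.create().apply { train(samples, Ml.ROW_SAMPLE, responses) }
}

// Classify one input letter by majority vote among its k nearest training vectors.
fun classifyLetter(knn: KNearest, featureVector: FloatArray, k: Int = 3): Int {
    val sample = Mat(1, featureVector.size, CvType.CV_32F)
    sample.put(0, 0, *featureVector)
    return knn.findNearest(sample, k, Mat()).toInt()
}
```

Normalising each of the 7 features to a comparable range first usually matters more than the choice of k, since otherwise the largest-valued feature dominates the Euclidean distance.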
For more details check the course notes:
http://www.inf.ed.ac.uk/teaching/courses/iaml/slides/knn-2x2.pdf
or
http://www.inf.ed.ac.uk/teaching/courses/inf2b/lectureSchedule.html
I would like to add that OpenCV may not have the sort of classifiers you might prefer.
There are several libraries out there, though you may have to see which works best on a mobile platform. Could you give some details on the features you are using?
The simplest KNN (k-nearest neighbors) measure would be to find the Euclidean distance in n dimensions (for an n-dimensional feature vector) between the input sample's features and each of the vectors in your DB table. Also explore Mahalanobis distance (used to measure distance between a point and a dataset/class) if you have multiple classes and the input image is to be classified as one such 'type' or 'class' of image.
As @matcheek mentioned, more sophistication is possible using machine learning techniques such as SVMs, neural nets, etc. However, first you might consider something simpler like kNN, given that it's a mobile platform where computational resources are limited.
