Android SQLite database to a server

I have an Android application which stores user's data in a SQLite database (containing 4 tables). Each table has an Integer Primary Key (Auto-incremented).
To get an idea of the scale:
Three of those tables have fewer than 7 fields each and aren't expected to hold more than 10 entries.
The fourth table has 8 fields and gets roughly 5 entries per day on average.
What I want is for the database to be synced to the cloud, for several reasons, including:
To keep the data in sync across all of the user's devices
To collect anonymous data, for research purposes, if the user has opted in.
I realize I would require a multi-tenant database. I want to know an efficient way to do this.
If I use a MySQL database on a web server, would this be efficient? How do I approach multi-tenancy in this case?
Or, do I need to use a cloud service like Google App Engine? (Which is completely new to me)

Yes, you should consider using GAE and its datastore. It also supports SQL, but I think your application is well suited to the datastore.
Your datastore would have 5 'kinds' (these are like tables) - the four you listed above and a User kind. Your four kinds would have a key path that would group them beneath their respective user. This would take care of the multi-tenancy and it would be very efficient: retrieving and writing these groups (defined by the key path hierarchy) is efficient and cheap.
For a quick intro to these concepts of the datastore I recommend this page of the objectify docs.
GAE supports Java, so not only can you use the same language on client and server, but you can also share code. It requires a bit of care when organizing your code (and watch out for logging) but this has considerable benefits particularly if Android is your only client.
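For a feel of what the key-path idea looks like in practice, here is a minimal sketch using Objectify; the kind name Record, its fields, and the sample user key are hypothetical placeholders for one of your four tables, not a prescribed schema (entities must also be registered with ObjectifyService at startup):

// A minimal sketch, assuming Objectify; "Record" stands in for one of your kinds.
import java.util.List;
import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import com.googlecode.objectify.annotation.Parent;
import static com.googlecode.objectify.ObjectifyService.ofy;

@Entity
class User {
    @Id String email;                 // one entity per user
}

@Entity
class Record {
    @Id Long id;                      // auto-allocated when left null
    @Parent Key<User> owner;          // key path: groups every Record under its User
    String payload;
}

class Example {
    static void demo() {
        Key<User> userKey = Key.create(User.class, "someone@example.com");
        // All of one user's records live in one entity group and can be read together.
        List<Record> records = ofy().load().type(Record.class).ancestor(userKey).list();
    }
}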

Multiple instances of Firebase

I am developing an Android application for societies/apartment complexes, and I am using Firebase to store all of the app's data. I have a Firebase database and I want to assign one database instance to each society: Society A would have one instance, Society B another instance of the same database, and so on. A master database would hold links to all of these instances.
How can I achieve this?
I would also like to know: how many database instances can we create in Firebase for one project?
firebaser here
The ability to have multiple instances of the Realtime Database exists to allow it to scale beyond the capabilities of a single database. This is known as sharding; the key to this approach is dividing your entire user base into separate shards that have minimal interaction with each other.
You should consider, however, whether you really need a separate shard for each society/apartment. I recommend studying When to shard your data carefully and checking whether your usage will really exceed what a single Realtime Database instance can handle. While sharding is technically very possible for your scenario, using shards complicates matters significantly, so you should only do it when it is actually needed.
For example, you could easily start off by segmenting the data for the societies/apartments into separate branches in a single database instance:
/apt1
    users
    groups
    places
/apt2
    users
    groups
    places
/apt3
    users
    groups
    places
Then if you ever reach the need to shard, you can move some of these apartments into another database instance, and use a master database (or an implicit key mapping, which typically scales better) to tie users/apartments to a database instance.
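As a rough sketch of what that looks like from the Android SDK, assuming the branch layout above; the shard URL and the mapping from apartment to shard are hypothetical, not a Firebase feature:

import com.google.firebase.database.DatabaseReference;
import com.google.firebase.database.FirebaseDatabase;

class ApartmentDb {
    // Start simple: every apartment is a branch in the default database instance.
    static DatabaseReference usersOf(String apartmentId) {
        return FirebaseDatabase.getInstance().getReference(apartmentId + "/users");
    }

    // If you later shard, resolve the apartment to its database instance first
    // (shardUrl would come from your master database or an implicit key mapping).
    static DatabaseReference usersOfSharded(String apartmentId, String shardUrl) {
        return FirebaseDatabase.getInstance(shardUrl).getReference(apartmentId + "/users");
    }
}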
There is no documented limit on the number of shards. Each database instance is quite literally that: a separate database with its own URL, (possibly) on different infrastructure, etc. There probably is a physical limit to how many instances can be created, but I would again refer back to the previous paragraph: if you find yourself creating hundreds of shards, you're probably using them for more than is needed.

App local database

Not sure if this is important: the app is written with React Native.
The question is about both iOS and Android apps.
I have a static database which currently contains ~9000 rows; each row has 45 columns and about 280 characters in total. So, basically, the database is relatively small. I'll need to perform pattern searches (the equivalent of ILIKE in Postgres) and sorting based on various numeric columns. No insertions, no modifications, and no relations with other "tables" are required.
Should I write a Node.js server, instantiate a PG database, connect the app through a WebSocket and start querying data from PG, or should I just create a local database in the app and search through it right there? The local DB is way simpler, but I'm worried about performance. And if performance is fine now, what happens if the database grows to 15,000 rows? 50,000?
If the database can be local and there's no need for it to ever be persisted on servers, then you should probably explore Realm. I can't speak to how well it performs with a lot of data, but it acts as a database engine on the phone and has indexing as well.

What is the best primary key strategy for an online/offline multi-client mobile application with SQLite and Azure SQL database as the central store?

What primary key strategy would be best to use for a relational database model given the following?
tens of thousands of users
multiple clients per user (phone, tablet, desktop)
millions of rows per table (continually growing)
Azure SQL will be the central data store which will be exposed via Web API. The clients will include a web application and a number of native apps including iOS, Android, Mac, Windows 8, etc. The web application will require an “always on” connection and will not have a local data store but will instead retrieve and update via the api - think CRUD via RESTful API.
All other clients (phone, tablet, desktop) will have a local db (SQLite). On first use of this type of client the user must authenticate and sync. Once authenticated and synced, these clients can operate in an offline mode (creating, deleting and updating records in the local SQLite db). These changes will eventually sync with the Azure backend.
The distributed nature of the databases leaves us with a primary key problem and the reason for asking this question.
Here is what we have considered thus far:
GUID
Each client creates its own keys. On sync, there is a very small chance of a duplicate key, but we would need to account for it by writing functionality into each client to update all relationships with a new key. GUIDs are big, and when multiple foreign keys per table are considered, storage may become an issue over time. Likely the biggest problem is the random nature of GUIDs, which means they cannot (or should not) be used as the clustered index due to fragmentation. This means we would need to create a clustered index (perhaps arbitrary) for each table.
Identity
Each client creates its own primary keys. On sync, these keys are replaced with server-generated keys. This adds additional complexity to the syncing process and forces each client to "fix" its keys, including all foreign keys on related tables.
Composite
Each client is assigned a client id on first sync. This client id is used in conjunction with a local auto-incrementing id as a composite primary key for each table. This composite key will be unique, so there should be no conflicts on sync, but it does mean that most tables will require a composite primary key. Performance and query complexity are the concerns here.
HiLo (Merged Composite)
Like the composite approach, each client is assigned a client id (int32) on the first sync. The client id is merged with a unique local id (int32) into a single column to make an application-wide unique id (int64), as sketched below. This should result in no conflicts during sync. While there is more order to these keys than to GUIDs, since the ids generated by each client are sequential, there will be thousands of unique client ids, so do we still run the risk of fragmentation on our clustered index?
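For illustration, here is one way such a merged key could be packed and unpacked; the helper names are hypothetical:

// Hypothetical helper: the high 32 bits carry the client id, the low 32 bits the
// local sequence, so ids generated by different clients cannot collide.
public final class MergedId {
    private MergedId() {}

    public static long make(int clientId, int localId) {
        return ((long) clientId << 32) | (localId & 0xFFFFFFFFL);
    }

    public static int clientIdOf(long id) {
        return (int) (id >>> 32);
    }

    public static int localIdOf(long id) {
        return (int) id;
    }
}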
Are we overlooking something? Are there any other approaches worth investigating? A discussion of the pros and cons of each approach would be quite helpful.
I've considered this question at length and came to the conclusion that a GUID is usually the best solution. Here's a little information on why:
Identity
The Identity option sounds like it removes all the negatives, but having implemented a Single Page Web App that implemented this system, I can tell you it adds a significant amount of complexity to the code. A temporary id can spread through your client side data quite quickly, and it's really hard to create a system that has no holes in it when it comes to finding every single possible usage. It usually leads to application and data specific hard-coded information to track foreign keys on the client (which is tedious and error prone as the database changes and you forget to update this information). It also adds a lot of overhead to every sync, as it might have to run through multiple tables each sync to check for temporary ids. There might be a better way to implement this system, but I haven't seen a good approach that doesn't add a ton of complexity and possible ugly error states in your data.
Composite
The composite approaches also add a lot of complexity to your code in generating session ids and creating ids from them, and they don't really offer any advantages over GUIDs other than guaranteed uniqueness. But a GUID is effectively unique in practice: while I was initially scared of the possibility of repeats, I realized that the chance is infinitesimally small, and there's actually a really easy way to handle the small possibility that a key isn't unique.
GUIDs
My biggest worries about using a GUID were
they have a large size and aren't traditional ints, which will make transferring large bits of data slower and degrade database performance
if you actually ever do run into a conflict, it can ruin your app, so you have to write complex code to handle a situation you will probably never encounter.
Then I realized that in an offline style web app, you're not usually transferring large amounts of data at once because it's all stored on the client.
You also don't worry about server database performance much either because that's done behind the scenes in a sync - you just worry about client side data performance.
Last, I realized that handling a conflict is really a trivial thing. Just test for a conflict, and if you get one, create a new GUID on the server and continue with the operation. Then send a message back to the client that causes it to show a small error message, delete all client-side data, and re-download it fresh from the server. This is really quick and easy to implement, and you probably already want this as a possible operation in an offline web app anyway. While it might sound inconvenient for the user, the likelihood of the user ever seeing this error is almost 0%.
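A rough server-side sketch of that conflict check might look like this; the persistence helpers and the resync flag are hypothetical placeholders, not any particular framework's API:

import java.util.UUID;

class GuidSyncHandler {
    // Insert a client-supplied row; on the (vanishingly rare) key collision,
    // mint a fresh GUID and tell the client to wipe its local data and resync.
    SyncResult insert(String clientGuid, Object row) {
        UUID id = UUID.fromString(clientGuid);
        boolean resyncRequired = false;
        if (keyExists(id)) {
            id = UUID.randomUUID();
            resyncRequired = true;
        }
        insertRow(id, row);
        return new SyncResult(id, resyncRequired);
    }

    // --- hypothetical persistence layer ---
    boolean keyExists(UUID id) { return false; }
    void insertRow(UUID id, Object row) {}

    static class SyncResult {
        final UUID id;
        final boolean resyncRequired;
        SyncResult(UUID id, boolean resyncRequired) {
            this.id = id;
            this.resyncRequired = resyncRequired;
        }
    }
}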
Conclusion
In the end, I think that for this type of app, GUIDs are the easiest to implement and work the best, with the least possibility for error and without creating overly complex code.
If your application doesn't have to run offline, but you have a client-side database for performance or other reasons, you can also consider throwing up a loading gif and pausing client side execution until the id is returned via ajax from the server.
The key (pun intended) thing to remember is to simply have a unique key for each object you are storing on the persistent store. How you handle the storage of that object is completely up to you and up to the methodology of how you access that key. Each of the strategies you listed have their own reasons for why they do what they do but in the end they are storing a key for a certain object in the db so all of its attributes can be changed while retaining the same object reference in the database.

What would I need to do in order to connect to a central database with Android?

I'm about to build a GPS Spot Finder application with Android and I am trying to decide what requirements are feasible and what aren't. The app would enable users to essentially add different spots on a Google Map. One of the problems would be fetching the data, adding new spots, etc. This, of course, would mean the database would have to be online and it would have to be central. My question is, what kind of technologies would I need to make this happen? I am mostly familiar with XAMPP, phpMyAdmin and the like. Can I just use that and connect Android to the database? I assume I would not need to create a website... just the database?
What different approaches can I take with this? Be great if people can point me in the right direction.
Sorry if I don't make any sense and if this type of question is inappropriate for Stackoverflow :S
Create a website to access the database locally, and have Android send requests to the website.
If users are adding spots to a map that only they see, then it makes sense to keep the data local to Android using a built-in database (SQLite). That looks like
ANDROID -> DATABASE
You can read up about SQLite options here.
If users need to see all the spots added by all other users, or even a subset of spots added by users, then you need a web service to handle queries to the database (i.e. connect to a remote/online database):
ANDROID -> HTTP -> APPLICATION SERVER -> DATABASE
Not only is trying to interface directly to a database less stable, but it may pose risks in terms of security and accessibility.
Never never use a database driver across an Internet connection, for any database, for any platform, for any client, anywhere. That goes double for mobile. Database drivers are designed for LAN operations and are not designed for flaky/intermittent connections or high latency.
Additionally, Android does not come with built-in clients to access databases such as MySQL. So while it may seem like more work to run a web service somewhere, you will actually be way better off than trying to talk to the database directly. Here is a tutorial showing how to interface these two.
There is a hidden benefit to using HTTP routes. You will need a programming mindset to think through what type of data is being sent in the POST and what is being retrieved in the GET. This alone will improve your application architecture and results.
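As a minimal client-side sketch of that pattern: the endpoint http://example.com/spots and the JSON shape are made up for illustration, and this should run off the UI thread:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class SpotUploader {
    // POST a new spot to a hypothetical web service that does the actual SQL.
    static int postSpot(double lat, double lng, String name) throws Exception {
        URL url = new URL("http://example.com/spots");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String body = "{\"lat\":" + lat + ",\"lng\":" + lng + ",\"name\":\"" + name + "\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        int status = conn.getResponseCode();   // e.g. 201 Created from the hypothetical API
        conn.disconnect();
        return status;
    }
}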
Why not try using something that is already built into Android, like SQLite? Save the coordinates of these "spots" into a database there. This way everything is local and should be speedy. Unless one of your features is to share spots with other users? Even then, you can send these "spots" through methods other than a central database.
And yes, you just need an open database, not a website, exactly. You could technically host a database from your home computer, but I do not suggest it.
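If you do go the local route mentioned above, a bare-bones helper for Android's built-in SQLite could look something like this; the table and column names are made up for illustration:

import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

class SpotDbHelper extends SQLiteOpenHelper {
    SpotDbHelper(Context context) {
        super(context, "spots.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE spots (_id INTEGER PRIMARY KEY AUTOINCREMENT, "
                + "name TEXT, lat REAL, lng REAL)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS spots");
        onCreate(db);
    }

    // Insert one spot and return its row id.
    long addSpot(String name, double lat, double lng) {
        ContentValues values = new ContentValues();
        values.put("name", name);
        values.put("lat", lat);
        values.put("lng", lng);
        return getWritableDatabase().insert("spots", null, values);
    }
}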
If you are looking at storing the data on the user's device, there is nothing better than the built-in SQLite.
If you are looking for a centralized database to store information, Parse.com is an easy way to store your application's user data in a centralized repository.
Parse.com is not exactly a SQL-based database; however, you can create tables, insert/update, and retrieve rows from Android.
The best part is that it is free up to 1 GB. They claim 400,000 apps are built on Parse.com. I have used it in a few of my applications, typically for user management, and it has worked great for me.

How do REST URIs work internally?

I'm trying to create a REST API to interact with a MySQL database. I want to use this API to access the database from an Android or iOS device without (obviously) exposing the database directly through the application. But I'm having problems wrapping my head around a key aspect about REST and the implementation of an API designed on its principles.
I understand the concepts of REST from the theory stand point. What I've been struggling for days trying to grasp is how a REST URI maps to something located on a database server.
If I make a GET request to a server for a resource with a given URI, say http://www.example.com/resource, internally, where does this go to on the server? The way I understand it, is that it goes to the root directory, then to the "resource" directory. From there it returns all the files within that "resource" directory. I'm simply confused because the resource is located on the database server and not the server where the API is being called from. Does the resource path/hierarchy represent actual directories on the server or is it an abstraction of the resource? If the latter, then what do I do with that abstracted resource name to make it map to a table or row in a database? It's been frustrating not being able to find concrete implementation examples of this where I can easily understand how this URI path works internally.
You can use a framework to do a lot of the work for you, but what happens under the hood is no magic. In some way, the URIs map to some database tables. They don't refer to a certain directory structure, but try to explain a hierarchical relationship between the resources.
For example, let's say that we're modelling a university. The elements in the database are stored in one of two tables, either Faculties or Courses. The Faculties table consists of rows describing the Faculty of Law, Faculty of Medicine and so on. It has a unique faculty_id column and then columns to describe whatever we need. The Courses table has a unique course_id column and a foreign key faculty_id column, to tell which faculty the course belongs to.
A RESTful way of designing this API might be
/faculties to get a list of all faculties, retrieved with SELECT * FROM Faculties
/faculties/2 to get the information about a certain faculty, retrieved with SELECT * FROM Faculties WHERE faculty_id=2
/faculties/2/courses to get all the courses belonging to a certain faculty, retrieved with SELECT * FROM Courses WHERE faculty_id=2
/faculties/2/courses/15 to retrieve a certain course, if it indeed belongs to faculty 2, retrieved with SELECT * FROM Courses WHERE faculty_id=2 AND course_id=15
The exact implementation of this depends on the programming language (and possibly framework) that you choose, but at some point you need to make choices about how you should be able to query the database. This is not obvious. You need to plan it carefully for it to make sense!
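For instance, with a Java framework such as RESTEasy (JAX-RS), the routing layer could be sketched roughly like this; Faculty, Course and the db helper are hypothetical placeholders included only so the sketch stands alone, not a prescribed design:

import java.util.List;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/faculties")
public class FacultyResource {

    private final Db db = new Db();   // hypothetical thin wrapper around JDBC

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<Faculty> listFaculties() {
        // GET /faculties  ->  SELECT * FROM Faculties
        return db.query("SELECT * FROM Faculties", Faculty.class);
    }

    @GET
    @Path("{facultyId}/courses/{courseId}")
    @Produces(MediaType.APPLICATION_JSON)
    public Course getCourse(@PathParam("facultyId") int facultyId,
                            @PathParam("courseId") int courseId) {
        // GET /faculties/2/courses/15
        //  -> SELECT * FROM Courses WHERE faculty_id = 2 AND course_id = 15
        return db.queryOne("SELECT * FROM Courses WHERE faculty_id = ? AND course_id = ?",
                Course.class, facultyId, courseId);
    }

    // --- hypothetical placeholders so the sketch compiles on its own ---
    static class Faculty {}
    static class Course {}
    static class Db {
        <T> List<T> query(String sql, Class<T> type, Object... args) { return List.of(); }
        <T> T queryOne(String sql, Class<T> type, Object... args) { return null; }
    }
}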
The result from the database will of course have to be encoded in some way, typically XML or JSON (but other representations are just as fine, although maybe not as common).
Apart from this, you should also make sure to implement the four verbs correctly so that they match the SQL commands (GET=SELECT, POST=INSERT, PUT=UPDATE, DELETE=DELETE), handle encoding negotiation correctly, return proper HTTP response codes and all the other things that are expected of a RESTful API.
As a final piece of advice: If you do this neatly, it'll become so much easier for you to design your mobile apps. I really can't stress this enough. For example, if you on a POST request return the full entry as it now looks in the database, you can immediately store it on the phone with the correct ID, and you can use the same code for rendering the content as you would if it had been downloaded using a GET request. Also, you won't trick the user by updating prematurely, before you know whether that request was successful (mobile phones lose connection a lot).
EDIT: To answer your question in the comments: Creating an API can be seen as a form of art, and should probably not involve any coding in the design stage. The API should be meaningful and not rely on a particular implementation (i.e. a different database choice shouldn't affect your API). Your next task will be to create ties between the human-readable structure of the API and your database (regardless of whether it's relational or something else). So yes, you will need to do some translation, but I don't see how the query string would help you. The typical structure is api.my.website/collection/element/collection/element. Queries can be used for filtering. You could for example write example.com/resource?since=2012-06-01 to retrieve a subset of the elements from your 'resource' collection, but the meaning of this particular query is to retrieve something you couldn't express with an unique ID.
The way I understand it, you think that incoming requests must always go to separate files based on the way PHP and HTTP servers work. This is not the case. You can configure your web server to route every request to a single PHP file and then parse $_SERVER['REQUEST_URI']. Depending on your choice of HTTP server your mileage might vary, but this is essentially what you want to do.
By googling I found a list of frameworks for PHP, but I don't know any of them. There are others, though, and I also recently heard someone mention Apify, although I can't tell you much about that either. PHP is probably one of the more common choices for implementing an API. cURL, however, is a library/tool that's only designed to connect to other websites, as far as I know. You can certainly use the command-line version of it to debug your API, but I don't think you'll have much use for it on the server side.
I think you should start by deciding which framework you want to build the REST application with. Rails, RESTEasy for Java, and CodeIgniter all have the basis for good REST routing capabilities.
This includes abstracting URLs from the underlying resource, whether that comes from a database or even a business process. They accomplish this using URL mapping, or by creating a facade for the abstraction.
Generally, REST does not really differ from traditional GET/PUT/POST with query parameters. In fact, Apache URL rewrites are usually used to support REST-style routing. I recommend picking up one of the frameworks and studying how they implement these functions. Rails, I think, has significant strengths among these frameworks.
