cordovaSQLite retrieving data slowly using ionic framework - android

I am making an application using the Ionic framework and I am using SQLite to store a list of about 150 rows. Each row has two attributes, ID and Name.
I am retrieving this data through a database factory which runs a query.
It works; however, when I test it on my Galaxy Tab 3 the list takes about 5-10 seconds to load, and once it has loaded, scrolling through the list items is extremely laggy.
Here's my controller:
.controller('ActionSearchCtrl', function($scope, ActionSearchDataService, DBA, $cordovaSQLite) {
  var tablet = true;
  var query = "SELECT action FROM actions;";
  $scope.items = [];
  $scope.runSQL = function() {
    DBA.query(query).then(function(result) {
      $scope.items = DBA.getAll(result);
    });
  };
  if (tablet) { $scope.runSQL(); }
})
Here's my database factory:
.factory('DBA', function($cordovaSQLite, $q, $ionicPlatform) {
  var self = this;

  // Handle queries and potential errors.
  // (herbsDatabase is presumably the handle returned by $cordovaSQLite.openDB
  // elsewhere in the app; it is not shown in this snippet.)
  self.query = function(query, parameters) {
    parameters = parameters || [];
    var q = $q.defer();
    $ionicPlatform.ready(function() {
      $cordovaSQLite.execute(herbsDatabase, query, parameters)
        .then(function(result) {
          q.resolve(result);
        }, function(error) {
          console.warn('I found an error');
          console.warn(error);
          alert(error.message);
          q.reject(error);
        });
    });
    return q.promise;
  };

  // Process a result set
  self.getAll = function(result) {
    var output = [];
    for (var i = 0; i < result.rows.length; i++) {
      output.push(result.rows.item(i));
    }
    return output;
  };

  // Process a single result
  self.getById = function(result) {
    return angular.copy(result.rows.item(0));
  };

  return self;
})
So the query returns about 150 entries, which I need all on one page. (I've looked into infinite scrolling and pagination, but my client wants all the items on one page, so this is not an option.) From what I've read, 150 entries shouldn't be too slow in terms of watchers; I am using ng-repeat to display the list items. If anyone knows a way to display this many items using the cordovaSQLite plugin while keeping the list fast, let me know! At the moment the list is pretty much unusable. I've tried it on other devices too, with the same result.
I've also tried creating about 200 dummy objects in the controller (without making a call to the database to get the data) and the performance is fine. That's why I think it's an SQLite performance issue. This is my first post on here, so apologies if I am not clear enough.

Okay, so I used collection-repeat instead of ng-repeat in the template where the list was being generated. This dramatically increased the performance of the list. I hope this helps someone, as it was driving me crazy!
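For anyone looking for the concrete change, a minimal template sketch; the `items`/`action` names come from the question's controller, and the surrounding markup is hypothetical:

```html
<!-- Before: every row gets its own set of watchers
<ion-item ng-repeat="item in items">{{item.action}}</ion-item>
-->

<!-- After: collection-repeat only renders the rows currently on screen -->
<ion-content>
  <ion-item collection-repeat="item in items">
    {{item.action}}
  </ion-item>
</ion-content>
```

collection-repeat recycles a fixed pool of DOM nodes while scrolling, so the watcher count stays roughly constant regardless of how many rows the query returns.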

Your problem may come from several points.
Are you running on Android?
First, if you are running on Android, be aware that list management and scrolling are pretty weak in Ionic at present. In my case, only native scrolling gave good results while keeping ng-repeat:
app.config(function($ionicConfigProvider) {
  if (ionic.Platform.isAndroid())
    $ionicConfigProvider.scrolling.jsScrolling(false);
});
Speeding up SQLite
I clearly improved my SQLite performance by defining a correct index with this command:
'CREATE INDEX IF NOT EXISTS ' + _indexUniqueId + ' ON ' + _tableName + ' ' + IndexDefinition
For your case, if you have two columns (A and B) and you only use A to query your table, then you need only one index on column A, and the IndexDefinition above would be equal to: (A)
Moreover, check the official SQLite doc to understand how indexes are managed:
https://www.sqlite.org/lang_createindex.html
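Applied to the question's schema, the statement could look like the sketch below; the index name is made up, while the `actions` table and `action` column come from the question:

```javascript
// Build a CREATE INDEX statement for the actions table.
// SQLite skips creation if the index already exists.
var indexName = 'idx_actions_action'; // hypothetical name
var createIndex =
  'CREATE INDEX IF NOT EXISTS ' + indexName + ' ON actions (action);';

// With the DBA factory from the question, this could be run once at startup:
// DBA.query(createIndex);
```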
Update to Ionic RC5
If you have not done it yet, do it.
There are huge improvements to scrolling ng-repeat in 1.0.0-rc5.
Else
If this information does not help, please provide more info on your issue:
- Is the list unscrollable? Does it delay a lot before displaying? Or both?


What's the best way to keep offline devices consistent with AWS/DynamoDB

Basically
Incrementing values seems like it should be simple, but it seems it's not in my case. I know I've still got so much to learn, especially with VTL, which is entirely new to me, so please excuse my ignorance.
All I want is to be able to add to/subtract from an existing value, possibly offline, and for the server to capture all those changes when coming online, keeping multiple (intermittently online) devices consistent.
Use cases:
- Keep count of inventory
- Keep balances for accounts
- Votes
- Visits
- etc.
Scenario
- Device 1 makes n changes to Item A.quantityOnHand (i.e. a delivery is made by a driver using the app).
- Device 2 makes n changes to Item A.quantityOnHand (i.e. some sales of the item are made over time).
- Device n makes n changes to Items/Customers/Cash/Votes/Visits/some other counting operation.
All changes need to be captured, and at any time devices could go offline.
So Far..
I have looked at a custom resolver, something like this simple change in my resolver:
#if ( $entryKeyAttributeName == "transacted")
##perhaps use regex and filter a set prefix
$util.qr($expAdd.put("#$entryKeyAttributeName", ":$entryKeyAttributeName"))
#else
$util.qr($expSet.put("#$entryKeyAttributeName", ":$entryKeyAttributeName"))
#end
I found that this only works once, for the most current version of the model. The result is that only the latest update is synced, depending on current server versions, resulting in inconsistent data.
I thought custom conflict handling could help. Perhaps if I kept a locally stored 'last-synced value', I could use the differences to create a modified model that takes the changed local values into account.
Could this work?
What is the downside of creating _increment logic locally in a DB?
It seems like it would be a simple process:
- if amplify.isConnected -> save normally, immediate update using the model
- if amplify.isNotConnected:
  -> diff of the model saved and queued separately in a local DB (with an 'incremented value' field)
  -> query(model) finds and sums any relevant model during the unsynced/offline state
- upon Amplify.isNowSynced && connected:
  -> save the now-synced and updated models with the relevant increments
  -> delete the increment rows once a response from the server is received and checked for consistency
I did something similar in the extension below; I used another sqflite DB with one table.
Can you see any downside to something like the following extension?
extension AmplifyHelper on DataStoreCategory {
  Future<bool> incrementValue<T extends Model>(T model, incrementValue, QueryField incField,
      {QueryPredicate? where}) async {
    /// If online and connected, just save in the normal way
    if (_networkService.connectivityResult.value != ConnectivityResult.none) {
      try {
        save(model, where: where);
        return true;
      } catch (e) {
        debugPrint('*** An error occurred trying to save while online ***');
        return false;
      }
    } else {
      /// Otherwise, create a map to save in a sqflite DB
      var map = <String, dynamic>{};
      map['dyn_tableName'] = model.getInstanceType().modelName();
      map['dyn_rowId'] = model.getId();
      map['dyn_increment'] = incrementValue;
      map['dyn_field'] = incField.fieldName;
      map['dyn_fromValue'] = model.toJson()[incField.fieldName];
      return _dbService.updateOrInsertTable(map);
    }
  }

  Future<List> queryWithOffline<T extends Model>(ModelType<T> modelType,
      {QueryPredicate? where, QueryPagination? pagination, List<QuerySortBy>? sortBy}) async {
    /// Get normal results (basically unchanged from increments, contains the last-synced value for any increment)
    List<T> amplifyResult = await query(modelType, where: where, pagination: pagination, sortBy: sortBy);
    /// Get any increments from the other DB
    List<Map<String, dynamic>> offlineList = await _dbService.getOfflineTableRows(tableName: modelType.modelName());
    if (offlineList.isNotEmpty && amplifyResult.isNotEmpty) {
      /// If there is something in there, SUM the relevant fields for each row and return.
      List<T> listWithOffline = [];
      for (var rowMap in offlineList) {
        ModelField field = modelProvider.modelSchemas
            .firstWhere((mdl) => mdl.name == rowMap['dyn_tableName'])
            .fields![rowMap['dyn_field']]!;
        Map<String, dynamic> modelMap =
            amplifyResult.firstWhere((item) => item.getId() == rowMap['dyn_rowId']).toJson();
        modelMap[field.name] = modelMap[field.name] + rowMap['dyn_increment'];
        listWithOffline.add(modelType.fromJson(modelMap));
      }
      return listWithOffline;
    } else {
      /// There is nothing in the sync DB, just return the data.
      return amplifyResult;
    }
  }

  Future<bool> returnToOnline<T extends Model>() async {
    /// Called when Amplify is synced/ready after coming online
    /// Check if Amplify is resynced
    if (!_amplifySyncService.amplifyHasDirt) {
      List<Map<String, dynamic>> offlineList = await _dbService.getOfflineTableRows();
      if (offlineList.isNotEmpty) {
        List<T> listWithOffline = [];
        ModelType<T>? modelType;
        List<T> amplifyResult = [];
        for (var rowMap in offlineList) {
          /// Basically the same process of match and sum as above
          if (modelType == null || modelType.modelName() != rowMap['dyn_tableName']) {
            modelType = modelProvider.getModelTypeByModelName(rowMap['dyn_tableName']) as ModelType<T>?;
            amplifyResult = await Amplify.DataStore.query(modelType!);
          }
          ModelField field = modelProvider.modelSchemas
              .firstWhere((mdl) => mdl.name == rowMap['dyn_tableName'])
              .fields![rowMap['dyn_field']]!;
          Map<String, dynamic> modelMap =
              amplifyResult.firstWhere((item) => item.getId() == rowMap['dyn_rowId']).toJson();
          modelMap[field.name] = modelMap[field.name] + rowMap['dyn_increment'];
          listWithOffline.add(modelType.fromJson(modelMap));
        }
        // Iterate over a copy so processed models can be removed from the list.
        for (var mdl in List<T>.from(listWithOffline)) {
          /// Saving the updated model with the increments added
          await Amplify.DataStore.save(mdl);
          if (await _dbService.deleteRow(mdl.getId())) {
            debugPrint('${mdl.getId()} has been processed from the jump queue');
            // TODO: final looks...
            // if (isNowSynced(mdl)) {
            listWithOffline.remove(mdl);
            // } else {
            //   rollback(mdl);
            // }
          }
        }
      } else {
        debugPrint('No jump queue to process');
      }
    } else {
      print('*** Amplify Had Dirt! ***');
    }
    return true;
  }
}
Thanks for reading

Can't extract cities of country OpenWeather

I want to use the OpenWeather API in my application. For that, I thought I might save cities and their IDs in a JSON file in my own app; whenever a user wants the weather for a location, the app selects the city's ID from the JSON and makes the API call.
So I downloaded a 30 MB JSON file provided by OpenWeather, containing all countries and all their cities. Putting a 30 MB file in my app isn't a good idea, apparently, so I decided to extract only my country's cities. But that turned out not to be doable: many cities in different countries have the same names, the extracted JSON was huge again, and some country codes are even city names in other countries.
I wonder if there is a better implementation, or any idea or way to extract just the cities of one country.
Any help implementing the weather call in my app for different cities would be appreciated.
I know this question is old, but I recently bumped into this problem too. The way I ended up doing it was turning city.list.json into a default-exported JSON object and writing a Node script to strip cities out by country code:
var fs = require('fs');
var cityList = require('./city.list.js');

let output = {};
let countryCodes = [];

cityList.forEach((city) => {
  const country = city.country;
  if (country) {
    if (!countryCodes.includes(country)) {
      countryCodes.push(country);
    }
    if (!output[country]) {
      output[country] = [];
    }
    output[country].push({
      id: city.id,
      name: city.name,
    });
  }
});

for (const [key, value] of Object.entries(output)) {
  const fileName = 'city.' + key.toLowerCase() + '.json';
  fs.writeFile(fileName, JSON.stringify(value), function (err) {
    if (err) console.error(err.message);
    console.log(fileName + ' - saved!');
  });
}

fs.writeFile('countrycodes.json', JSON.stringify(countryCodes), function (err) {
  if (err) console.error(err.message);
  console.log('countrycodes.json - saved!');
});
This totally worked! The only problem I then ran into is that city.list.json includes country names as entries, and they are not differentiated in the data...
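If the missing piece is human-readable country names for the extracted ISO codes, one option (assuming a Node or browser runtime with full Intl/ICU support) is Intl.DisplayNames, rather than shipping another lookup file:

```javascript
// Map ISO 3166-1 alpha-2 country codes (as used in city.list.json's
// "country" field) to English display names.
const regionNames = new Intl.DisplayNames(['en'], { type: 'region' });

const name = regionNames.of('GB');
console.log(name); // "United Kingdom"
```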

How to batch read set of documents in Firestore? [duplicate]

I am wondering if it's possible to get multiple documents by a list of ids in one round trip (network call) to the Firestore database.
if you're within Node:
https://github.com/googleapis/nodejs-firestore/blob/master/dev/src/index.ts#L978
/**
 * Retrieves multiple documents from Firestore.
 *
 * @param {...DocumentReference} documents - The document references
 * to receive.
 * @returns {Promise<Array.<DocumentSnapshot>>} A Promise that
 * contains an array with the resulting document snapshots.
 *
 * @example
 * let documentRef1 = firestore.doc('col/doc1');
 * let documentRef2 = firestore.doc('col/doc2');
 *
 * firestore.getAll(documentRef1, documentRef2).then(docs => {
 *   console.log(`First document: ${JSON.stringify(docs[0])}`);
 *   console.log(`Second document: ${JSON.stringify(docs[1])}`);
 * });
 */
This is specifically for the server SDK
UPDATE: Cloud Firestore Now Supports IN Queries!
myCollection.where(firestore.FieldPath.documentId(), 'in', ["123","456","789"])
In practice you would use firestore.getAll like this:
async getUsers({userIds}) {
const refs = userIds.map(id => this.firestore.doc(`users/${id}`))
const users = await this.firestore.getAll(...refs)
console.log(users.map(doc => doc.data()))
}
or with promise syntax
getUsers({userIds}) {
const refs = userIds.map(id => this.firestore.doc(`users/${id}`))
this.firestore.getAll(...refs).then(users => console.log(users.map(doc => doc.data())))
}
They have just announced this functionality: https://firebase.googleblog.com/2019/11/cloud-firestore-now-supports-in-queries.html
Now you can use queries like the one below, but note that the input size can't be greater than 10:
userCollection.where('uid', 'in', ["1231","222","2131"])
With Firebase Version 9 (Dec, 2021 Update):
You can get multiple documents by multiple IDs in one round trip using documentId() and in with a where clause:
import {
  query,
  collection,
  where,
  documentId,
  getDocs
} from "firebase/firestore";

const q = query(
  collection(db, "products"),
  where(documentId(), "in",
    [
      "8AVJvG81kDtb9l6BwfCa",
      "XOHS5e3KY9XOSV7YYMw2",
      "Y2gkHe86tmR4nC5PTzAx"
    ]
  ),
);

const productsDocsSnap = await getDocs(q);

productsDocsSnap.forEach((doc) => {
  console.log(doc.data()); // "doc1", "doc2" and "doc3"
});
You could use a function like this:
function getById (path, ids) {
return firestore.getAll(
[].concat(ids).map(id => firestore.doc(`${path}/${id}`))
)
}
It can be called with a single ID:
getById('collection', 'some_id')
or an array of IDs:
getById('collection', ['some_id', 'some_other_id'])
No, right now there is no way to batch multiple read requests using the Cloud Firestore SDK and therefore no way to guarantee that you can read all of the data at once.
However as Frank van Puffelen has said in the comments above this does not mean that fetching 3 documents will be 3x as slow as fetching one document. It is best to perform your own measurements before reaching a conclusion here.
If you are using Flutter, you can do the following:
Firestore.instance.collection('your_collection_name')
.where(FieldPath.documentId, whereIn:["list", "of", "document", "ids"])
.getDocuments();
This will return a Future containing List<DocumentSnapshot>, which you can iterate as you see fit.
Surely the best way to do this is by implementing the actual query of Firestore in a Cloud Function? There would then only be a single round trip call from the client to Firebase, which seems to be what you're asking for.
You really want to be keeping all of your data access logic like this server side anyway.
Internally there will likely be the same number of calls to Firebase itself, but they would all be across Google's super-fast interconnects, rather than the external network, and combined with the pipelining which Frank van Puffelen has explained, you should get excellent performance from this approach.
You can perform an IN query with the document IDs (up to ten):
import {
  query,
  collection,
  where,
  getDocs,
  documentId,
} from 'firebase/firestore';

export async function fetchAccounts(
  ids: string[]
) {
  // use lodash _.chunk, for example
  const result = await Promise.all(
    chunk(ids, 10).map(async (chunkIds) => {
      const accounts = await getDocs(
        query(
          collection(firestore, 'accounts'),
          where(documentId(), 'in', chunkIds)
        ));
      return accounts.docs.filter(doc => doc.exists()).map(doc => doc.data());
    })
  );
  return result.flat(1);
}
Here's how you would do something like this in Kotlin with the Android SDK.
May not necessarily be in one round trip, but it does effectively group the result and avoid many nested callbacks.
val userIds = listOf("123", "456")
val userTasks = userIds.map { firestore.document("users/${it!!}").get() }
Tasks.whenAllSuccess<DocumentSnapshot>(userTasks).addOnSuccessListener { documentList ->
//Do what you need to with the document list
}
Note that fetching specific documents is much better than fetching all documents and filtering the result. This is because Firestore charges you for the query result set.
For those who are stuck on the same problem, here is sample code:
List<String> documentsIds = {your document ids};
FirebaseFirestore.getInstance().collection("collection_name")
    .whereIn(FieldPath.documentId(), documentsIds).get()
    .addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
      @Override
      public void onComplete(@NonNull Task<QuerySnapshot> task) {
        if (task.isSuccessful()) {
          for (DocumentSnapshot document : Objects.requireNonNull(task.getResult())) {
            YourClass object = document.toObject(YourClass.class);
            // add to your custom list
          }
        }
      }
    }).addOnFailureListener(new OnFailureListener() {
      @Override
      public void onFailure(@NonNull Exception e) {
        e.printStackTrace();
      }
    });
For the ones who want to do it using Angular, here is an example:
First some library imports are needed: (must be preinstalled)
import * as firebase from 'firebase/app'
import { AngularFirestore, AngularFirestoreCollection } from '@angular/fire/firestore'
Some configuration for the collection:
yourCollection: AngularFirestoreCollection;
constructor(
private _db : AngularFirestore,
) {
// this is your firestore collection
this.yourCollection = this._db.collection('collectionName');
}
Here is the method to do the query ('products_ids' is an array of IDs):
getProducts(products_ids) {
var queryId = firebase.firestore.FieldPath.documentId();
this.yourCollection.ref.where(queryId, 'in', products_ids).get()
.then(({ docs }) => {
console.log(docs.map(doc => doc.data()))
})
}
I hope this helps you, it works for me.
getCartGoodsData(id) {
const goodsIDs: string[] = [];
return new Promise((resolve) => {
this.fs.firestore.collection(`users/${id}/cart`).get()
.then(querySnapshot => {
querySnapshot.forEach(doc => {
goodsIDs.push(doc.id);
});
const getDocs = goodsIDs.map((id: string) => {
return this.fs.firestore.collection('goods').doc(id).get()
.then((docData) => {
return docData.data();
});
});
Promise.all(getDocs).then((goods: Goods[]) => {
resolve(goods);
});
});
});
}
Yes, it is possible. A sample in the .NET SDK for Firestore:
/*List of document references, for example:
FirestoreDb.Collection(ROOT_LEVEL_COLLECTION).Document(DOCUMENT_ID);*/
List<DocumentReference> docRefList = YOUR_DOCUMENT_REFERENCE_LIST;
// Required fields of documents, not necessary while fetching entire documents
FieldMask fieldMask = new FieldMask(FIELD-1, FIELD-2, ...);
// With field mask
List<DocumentSnapshot> documentSnapshotsMasked = await FirestoreDb.GetAllSnapshotsAsync(docRefList, fieldMask);
// Without field mask
List<DocumentSnapshot> documentSnapshots = await FirestoreDb.GetAllSnapshotsAsync(docRefList);
Documentation in .NET:
Get all snapshots
Field mask
This doesn't seem to be possible in Firestore at the moment. I don't understand why Alexander's answer is accepted, the solution he proposes just returns all the documents in the "users" collection.
Depending on what you need to do, you should look into duplicating the relevant data you need to display and only request a full document when needed.
If you are using the Python Firebase Admin SDK, this is how you query for multiple documents using their UIDs:
from firebase_admin import firestore
import firebase_admin
from google.cloud.firestore_v1.field_path import FieldPath
app = firebase_admin.initialize_app(cred)
client = firestore.client(app)
collection_ref = client.collection('collection_name')
query = collection_ref.where(FieldPath.document_id(), 'in', listOfIds)
docs = query.get()
for doc in docs:
print(doc.id, doc.to_dict())
Instead of importing FieldPath you can also simply use the string __name__. Now your query will be collection_ref.where('__name__', 'in', listOfIds)
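Since 'in' accepts at most 10 values, a longer listOfIds has to be split into groups and the results merged. A minimal sketch; the query line is commented out because it needs live credentials, while the chunking itself is plain Python:

```python
def chunk(ids, size=10):
    """Split ids into groups small enough for a single 'in' query."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# docs = []
# for group in chunk(listOfIds):
#     docs.extend(collection_ref.where(FieldPath.document_id(), 'in', group).get())

print(chunk(list(range(25))))  # three groups: 10, 10 and 5 ids
```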
The best you can do is not use Promise.all as your client then must wait for .all the reads before proceeding.
Iterate the reads and let them resolve independently. On the client side, this probably boils down to the UI having several progress loader images resolve to values independently. However, this is better than freezing the whole client until .all the reads resolve.
Therefore, dump all the synchronous results to the view immediately, then let the asynchronous results come in as they resolve, individually. This may seem like petty distinction, but if your client has poor Internet connectivity (like I currently have at this coffee shop), freezing the whole client experience for several seconds will likely result in a 'this app sucks' experience.
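As a sketch of that idea (fetchDoc and render are hypothetical stand-ins for your SDK call and UI update, not real APIs):

```javascript
// Kick off every read at once, but let each one update the UI as soon as
// it resolves, instead of gating everything on Promise.all.
function loadEach(refs, fetchDoc, render) {
  return refs.map((ref) =>
    fetchDoc(ref).then((doc) => {
      render(doc); // this document is shown without waiting for the others
      return doc;
    })
  );
}
```

Promise.all(loadEach(...)) can still be applied afterwards if some final step really does need every document.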

Firebase - query by grandchild key [duplicate]

Given the data structure below in Firebase, I want to run a query to retrieve the blog 'efg'. I don't know the user ID at this point.
{Users :
"1234567": {
name: 'Bob',
blogs: {
'abc':{..},
'zyx':{..}
}
},
"7654321": {
name: 'Frank',
blogs: {
'efg':{..},
'hij':{..}
}
}
}
The Firebase API only allows you to filter children one level deep (or with a known path) with its orderByChild and equalTo methods.
So without modifying/expanding your current data structure that just leaves the option to retrieve all data and filter it client-side:
var ref = firebase.database().ref('Users');
ref.once('value', function(snapshot) {
  snapshot.forEach(function(userSnapshot) {
    var blogs = userSnapshot.val().blogs;
    var daBlog = blogs['efg'];
  });
});
This is of course highly inefficient and won't scale when you have a non-trivial number of users/blogs.
So the common solution is to add a so-called index to your tree that maps the key you are looking for to the path where it resides:
{Blogs:
"abc": "1234567",
"zyx": "1234567",
"efg": "7654321",
"hij": "7654321"
}
Then you can quickly access the blog using:
var ref = firebase.database().ref();
ref.child('Blogs/efg').once('value', function(snapshot) {
  var user = snapshot.val();
  ref.child('Users/' + user + '/blogs/efg').once('value', function(blogSnapshot) {
    var daBlog = blogSnapshot.val();
  });
});
You might also want to reconsider if you can restructure your data to better fit your use-case and Firebase's limitations. They have some good documentation on structuring your data, but the most important one for people new to NoSQL/hierarchical databases seems to be "avoid building nests".
Also see my answer on Firebase query if child of child contains a value for a good example. I'd also recommend reading about many-to-many relationships in Firebase, and this article on general NoSQL data modeling.
Given your current data structure, you can retrieve the User that contains the blog post you are looking for:
const db = firebase.database()
const usersRef = db.ref('users')
const query = usersRef.orderByChild('blogs/efg').limitToLast(1)
query.once('value').then((ss) => {
console.log(ss.val()) //=> { '7654321': { blogs: {...}}}
})
You need to use limitToLast, since objects are sorted last when using orderByChild (see the docs).
It's actually super easy; just use a forward slash in the path:
db.ref('Users').child("userid/name")
db.ref('Users').child("userid/blogs")
db.ref('Users').child("userid/blogs/abc")
No need for loops or anything more.

How to retrieve data from all pages?

I want to get some data from all users in the Users table. I've found that I have to use data paging. I've written the same code as described in Feature 47 -> https://backendless.com/feature-47-loading-data-objects-from-server-with-sorting/ (because I also have to sort), but then I figured out that this code takes data only from the first page. So I decided that I have to go to the next page and read it, until its size is equal to zero. Below you can see my wrong solution:
QueryOptions queryOptions = new QueryOptions();
List<String> list = new ArrayList<String>();
list.add("point DESC");
queryOptions.setSortBy(list);

BackendlessDataQuery backendlessDataQuery = new BackendlessDataQuery();
backendlessDataQuery.setQueryOptions(queryOptions);

Backendless.Data.of(BackendlessUser.class).find(backendlessDataQuery, new AsyncCallback<BackendlessCollection<BackendlessUser>>() {
  @Override
  public void handleResponse(BackendlessCollection<BackendlessUser> one) {
    while (one.getCurrentPage().size() > 0) {
      Iterator<BackendlessUser> it = one.getCurrentPage().iterator();
      while (it.hasNext()) {
        // something here, not so important
      }
      one.nextPage(this); // here I want to get the next page,
      // but it seems it does not work, since my loop becomes infinite
    }
  }
});
I think I have to use the nextPage method with a fresh AsyncCallback instead of one.nextPage(this), but if I do so, the method can't keep up with the loop. So how can I solve my problem?
I actually can't find the problem with your solution, but I solved this problem using:
int tot = response.getTotalObjects();
to get the total number of objects in the first response. Then loop until your list of objects has size == tot, making a query in each iteration with the offset set to the current size of the list.
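A language-neutral sketch of that loop (written in JavaScript; fetchPage stands in for a Backendless find() call at a given offset and is not a real API):

```javascript
// Accumulate pages until we've collected totalObjects items, bumping the
// offset by however many items we already have.
async function fetchAll(fetchPage) {
  const first = await fetchPage(0);
  const total = first.totalObjects;
  const items = [...first.data];
  while (items.length < total) {
    const page = await fetchPage(items.length);
    if (page.data.length === 0) break; // safety net against a short page
    items.push(...page.data);
  }
  return items;
}
```

Using the accumulated length as the next offset also keeps the loop correct if the server returns pages smaller than the requested page size.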
