I am using parse.com to populate a ListView in Android. Each item in the list view has a TextView that shows the like count and another one that shows the comment count.
Now, according to parse.com:
"For classes with over 1000 objects, count operations are limited by timeouts. They may routinely yield timeout errors or return results that are only approximately correct. Thus, it is preferable to architect your application to avoid this sort of count operation."
What would be the recommended/ideal way of going about it, then?
What I did is create a column called commentCount and a column called likeCount. Then, in afterSave, I modified the appropriate cell:
Parse.Cloud.afterSave("Activity", function(request) {
    Parse.Cloud.useMasterKey(); // bypasses ACL requirements
    // After commenting, increment commentCount on the parent post
    if (request.object.get("type") == "comment") {
        var query = new Parse.Query("Posts");
        query.get(request.object.get("post").id, {
            success: function(post) {
                post.increment("commentCount", 1);
                post.save();
            },
            error: function(error) {
                console.error("Got an error " + error.code + " : " + error.message);
            }
        });
    }
});
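On the Android side, the stored counters can then be read directly when binding each row, instead of issuing count queries. A minimal sketch using the Parse Android SDK, assuming the "Posts" class and counter columns from the setup above; the actual view binding is only indicated in a comment:

import java.util.List;
import com.parse.FindCallback;
import com.parse.ParseException;
import com.parse.ParseObject;
import com.parse.ParseQuery;

// Hedged sketch: fetch posts and read the pre-computed counters.
ParseQuery<ParseObject> query = ParseQuery.getQuery("Posts");
query.findInBackground(new FindCallback<ParseObject>() {
    public void done(List<ParseObject> posts, ParseException e) {
        if (e != null) return; // handle the error properly in a real app
        for (ParseObject post : posts) {
            int likes = post.getInt("likeCount");       // maintained by afterSave
            int comments = post.getInt("commentCount"); // maintained by afterSave
            // bind likes/comments to the row's TextViews here
        }
    }
});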
I have an SQLite table from which I am selecting one row at a time with LIMIT 1, like:
cursor = sqLiteDatabase.rawQuery("SELECT * FROM " + SQLiteHelper.TABLE_NAME + " WHERE status in ('new') LIMIT 1", null);
Now I want to read the values of all columns of that row with previous/next options.
I tried with a String list, but it's not working.
I am using this for a voice-based application, so if the user says next/previous it should say/display the next value.
I have done the speech-to-text and text-to-speech parts, but I am stuck at previous/next.
If I can filter the previous and next values from that row, I can add voice to them.
My column values are like:
1,Android,Oreo,4gb,64gb,2.2Ghz,4000mhz,$800,May2019
I want to get these column values one by one.
I googled a lot, but I only found previous/next for rows, not for column values.
You could use the following as the basis:

// Field tracking the index of the column currently being shown/spoken
private int current_column = 0;

........ existing code
    show_value();
}

// Called when an event requires the next column
private void next_column() {
    if (current_column < (cursor.getColumnCount() - 1)) {
        current_column++;
        show_value();
    }
}

// Called when an event requires the previous column
private void prev_column() {
    if (current_column > 0) {
        current_column--;
        show_value();
    }
}

// Says/displays the value of the current column of the current row
private void show_value() {
    String current_value = cursor.getString(current_column);
    your_appropriate_view.setText(current_value);
}
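To tie it together, a hedged sketch of how the cursor and these methods might be wired up; sqLiteDatabase and SQLiteHelper.TABLE_NAME come from the question, while the voice-command hookup is only indicated in comments:

// Fetch the single 'new' row and position the cursor on it.
cursor = sqLiteDatabase.rawQuery(
        "SELECT * FROM " + SQLiteHelper.TABLE_NAME + " WHERE status IN ('new') LIMIT 1", null);
if (cursor.moveToFirst()) {
    current_column = 0;
    show_value(); // say/display the first column
}
// A recognized "next" voice command would then call next_column(),
// and "previous" would call prev_column().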
The app I'm working on gets data from a .csv file (20k-30k records) on a server, and it needs to persist the data into an SQLiteDatabase.
It works, but some records are missing and appear to have been skipped.
I/Choreographer( 2555): Skipped 46 frames! The application may be doing too much work on its main thread.
I know that this message indicates a heavy load (too much work being done on the main thread). Is there a more efficient way of persisting data in an SQLiteDatabase than the classic approach of reading the CSV and processing it from there?
Code for writing to the DB:
String sql = "INSERT INTO " + tableName
        + " VALUES (?,?,?,?,?,?);";
SQLiteDatabase db = openHelper.getWritableDatabase();
SQLiteStatement statement = db.compileStatement(sql);
try {
    db.beginTransaction();
    String[] sa = null;
    for (final String csvline : arrCSV) {
        statement.clearBindings();
        sa = csvline.split(",");
        if (sa.length == 6) {
            statement.bindString(1, sa[0]);
            statement.bindString(2, sa[1]);
            statement.bindString(3, sa[2]);
            statement.bindString(4, sa[3]);
            statement.bindString(5, sa[4]);
            statement.bindString(6, sa[5]);
        }
        statement.execute();
    }
    db.setTransactionSuccessful();
    Log.d("Transaction", "Successful");
} catch (Exception e) {
    e.printStackTrace();
} finally {
    statement.releaseReference();
    statement.close();
    db.endTransaction();
    db.releaseMemory();
}
UPDATE
The missing records were not loaded into the Collection.
Is the skipping of frames the culprit here?
Loading the collection is just simple parsing of a CSV file, and the problem is not always reproducible, so I'm assuming it is due to the skipped frames.
I believe the issue is not linked to the skipped frames; fewer than 100 skipped frames is considered a small/insignificant number, at least according to The application may be doing too much work on its main thread.
I see the message frequently and it has never been the cause of any issues. I've even seen it when the app was doing basically nothing other than returning a result from one activity to the activity that had just started it.
As you have commented, the number of elements that result from the split is on occasion not 6. The issue is likely that the insert does not happen on such an occasion, perhaps due to constraints (without seeing how the columns are defined, only guesses can be made).
However, you appear to expect each csvline to split into exactly 6 elements, so you should investigate why some do not.
To investigate, I'd suggest capturing the original data before the split and the resultant data after the split whenever the number of elements created by the split is not 6, e.g. by changing :-
sa = csvline.split(",");
if (sa.length == 6) {
    statement.bindString(1, sa[0]);
    statement.bindString(2, sa[1]);
    statement.bindString(3, sa[2]);
    statement.bindString(4, sa[3]);
    statement.bindString(5, sa[4]);
    statement.bindString(6, sa[5]);
}
statement.execute();
to
sa = csvline.split(",");
if (sa.length == 6) {
    statement.bindString(1, sa[0]);
    statement.bindString(2, sa[1]);
    statement.bindString(3, sa[2]);
    statement.bindString(4, sa[3]);
    statement.bindString(5, sa[4]);
    statement.bindString(6, sa[5]);
} else {
    Log.d("NOT6SPLIT", "CSVLINE WAS ===>" + csvline + "<===");
    Log.d("NOT6SPLIT", "CSVLINE WAS SPLIT INTO " + Integer.toString(sa.length) + " ELEMENTS :-");
    for (String s : sa) {
        Log.d("NOT6SPLIT", "\tElement Value ===>" + s + "<===");
    }
}
statement.execute();
Changing statement.execute() to :-
if (statement.executeInsert() < 1) {
    Log.d("INSERTFAIL", "Couldn't insert where CSVLINE was ===>" + csvline + "<===");
}
may also assist (executeInsert returns the rowid of the inserted record, otherwise -1; I'm not sure of the consequences for a table defined with WITHOUT ROWID).
It wouldn't surprise me at all if the issue boils down to your data containing characters that split considers special, i.e. metacharacters :-

    there are 12 characters with special meanings: the backslash \, the caret ^, the dollar sign $, the period or dot ., the vertical bar or pipe symbol |, the question mark ?, the asterisk or star *, the plus sign +, the opening parenthesis (, the closing parenthesis ), the opening square bracket [, and the opening curly brace {.
    These special characters are often called "metacharacters". Most of them are errors when used alone.
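For context, split(String) treats its argument as a regular expression, so these metacharacters are special when they appear in the delimiter pattern (a plain comma, as used above, is not one of them). A small hypothetical illustration, using a pipe-delimited line rather than the question's actual data, of how java.util.regex.Pattern.quote guards against this:

import java.util.regex.Pattern;

public class SplitDemo {
    public static void main(String[] args) {
        String line = "1|Android|Oreo";
        // "|" alone is regex alternation and matches the empty string,
        // so a naive split yields one element per character:
        System.out.println(line.split("|").length);                 // not 3
        // Quoting (or escaping) the delimiter splits as intended:
        System.out.println(line.split(Pattern.quote("|")).length);  // 3
        System.out.println(line.split("\\|").length);               // 3
    }
}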
I need to insert a lot of data into an SQLite database, and I use transactions for it. There is unfortunately no way I can fill the tables without these transactions. My code works like this:
database.transaction(function(transaction) {
    for (var i = 0; i < 300; i++) {
        sql = "INSERT INTO test (fielda, fieldb, fieldc) VALUES (111, 1, 23.53)";
        transaction.executeSql(sql, function(result) {
        });
    }
}, function() {
    database.transaction(function(transaction) {
        for (var i = 0; i < 300; i++) {
            sql = "INSERT INTO test (fielda, fieldb, fieldc) VALUES (111, 1, 23.53)";
            transaction.executeSql(sql, function(result) {
            });
        }
    }, function() {
        // .................and so on..............
        alert("done");
    }, function(error) {
        alert("error");
    });
}, function(error) {
    alert("error");
});
It works fast and easily on my iPad 2 but crashes just as fast on my Galaxy Tab 3. It seems that the number of successive transactions doesn't matter, but the total number of inserts they perform does. It always crashes around the 900th entry: with loops of i < 300 it crashes in the 4th transaction, but with loops of i < 100 it takes 10 transactions to crash.
I even tried calling 3 transactions with one button click, then waiting for over ten minutes and calling the 4th transaction by clicking another button. It still crashes.
Other strange things happen as well:
When I call "DELETE FROM test" at some point before the transactions, the app throws the error message "Cannot perform this operation because there is no current transaction." during the transaction, just before it crashes. But only then.
I have already played around with every PRAGMA there is, like page_size, journal_mode, etc. It doesn't change a thing.
Calling the ORMLite RuntimeExceptionDao's createOrUpdate(...) method in my app is very slow.
I have a very simple object (Item) with 2 ints (one is the generatedId), a String and a double. I measure roughly how long it takes to update the object in the database 100 times with the code below. The log statement prints:
time to update 1 row 100 times: 3069
Why does it take 3 seconds to update an object 100 times in a table with only 1 row? Is this the normal ORMLite speed? If not, what might be the problem?
RuntimeExceptionDao<Item, Integer> dao =
        DatabaseManager.getInstance().getHelper().getReadingStateDao();
Item item = new Item();
long start = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
    item.setViewMode(i);
    dao.createOrUpdate(item);
}
long update = System.currentTimeMillis();
Log.v(TAG, "time to update 1 row 100 times: " + (update - start));
If I create 100 new rows then the speed is even slower.
Note: I am already using ormlite_config.txt. It logs "Loaded configuration for class ...Item", so this is not the problem.
Thanks.
This may be the "expected" speed, unfortunately. Make sure you are using ORMLite version 4.39 or higher; before that, createOrUpdate(...) used a more expensive method to test for the existence of the object in the database beforehand. But I suspect this will be a minimal speed improvement.
If I create 100 new rows then the speed is even slower.
By default SQLite is in auto-commit mode. One thing to try is to wrap your inserts (or your createOrUpdates) using the ORMLite Dao.callBatchTasks(...) method.
In my BulkInsertsTest Android unit test, the following doInserts(...) method inserts 1000 items. When I just call it:
doInserts(dao);
It takes 7.3 seconds in my emulator. If I call it using the callBatchTasks(...) method, which wraps a transaction around the call in Android SQLite:
dao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        doInserts(dao);
        return null;
    }
});
It takes 1.6 seconds. The same performance can be had by using the dao.setSavePoint(...) method. This starts a transaction, but is not as good as the callBatchTasks(...) method because you have to make sure you close your own transaction:
DatabaseConnection conn = dao.startThreadConnection();
Savepoint savePoint = null;
try {
    savePoint = conn.setSavePoint(null);
    doInserts(dao);
} finally {
    // commit at the end
    conn.commit(savePoint);
    dao.endThreadConnection(conn);
}
This also takes ~1.7 seconds.
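Applied to the createOrUpdate loop from the question, a minimal sketch might look like the following, assuming the same dao and item from that code (both must be final so the anonymous Callable can capture them):

import java.util.concurrent.Callable;

// Wrap all 100 createOrUpdate(...) calls in a single transaction.
dao.callBatchTasks(new Callable<Void>() {
    public Void call() throws Exception {
        for (int i = 0; i < 100; i++) {
            item.setViewMode(i);
            dao.createOrUpdate(item);
        }
        return null;
    }
});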
I am creating an application which uses the Google Books API. Whenever I search for a book, it returns a JSON response and I load those results into my table view. A search can return thousands of books, but I don't want to load everything into my tableview at once; whenever I scroll down, it should load only the next books.
Can anyone give me code or a rough idea of how to do this in Android using Titanium? I have checked this post: https://github.com/appcelerator/KitchenSink/blob/master/Resources/examples/table_view_dynamic_scroll.js but this is for iPhone; I need it for Android as well. Help me out...
After looking around, I have implemented the following solution for Android:
tableView.addEventListener('scroll',
    function(e) {
        if (!e.source.__doneUpdating && e.totalItemCount % e.source.__pageSize === 0) {
            var distance = e.totalItemCount - e.firstVisibleItem;
            if (distance <= e.visibleItemCount) {
                if (!e.source.__updating) {
                    e.source.__updating = true;
                    e.source.fireEvent('beginUpdate', e);
                }
            }
        }
        Ti.API.info('-------------------');
        Ti.API.info('e.firstVisibleItem: ' + e.firstVisibleItem);
        Ti.API.info('e.totalItemCount: ' + e.totalItemCount);
        Ti.API.info('e.visibleItemCount: ' + e.visibleItemCount);
    }
);
Where e.source.__pageSize, e.source.__doneUpdating, and e.source.__updating are internal variables that are maintained by the code inserting rows into the tableView.