How to open jetpack DataStore file (.preferences_pb) - android

I am trying out the new Jetpack DataStore library. I saved data using the library, which created a file settings.preferences_pb in the app data directory (/data/data/my.package.name/files/datastore/settings.preferences_pb). settings is the file name I gave it. The data doesn't display properly in a text viewer: I can make out the key names, but the values are garbage. How do I open this file and view it?
Here is the drive link for file settings.preferences_pb

Reference: https://medium.com/swlh/get-your-hand-dirty-with-jetpack-datastore-b1f1dfb0a5c1
The protobuf files will be located in /data/data/{application.package}/files/datastore/. These are protobuf-formatted files, so we cannot read them with an ordinary editor. To decode them, we can use the protoc command-line tool.
To be able to use the protoc command, we first have to pull these files to our workstation using adb.
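For example, on a debuggable build you can pull the file in one step with run-as (package and file name as in the question; adjust the path for your app):
adb exec-out run-as my.package.name cat files/datastore/settings.preferences_pb > settings.preferences_pb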
For preference datastore
protoc --decode_raw < app_name.preferences_pb
The result will be similar to this:
1 {
  1: "app_name"
  2 {
    5: "Datastore sample"
  }
}
1 {
  1: "is_demo_mode"
  2 {
    1: 1
  }
}
Note: the value 1 for is_demo_mode represents true.

Here is the current format for the preferences_pb file: link
You can parse the file using this schema and print it out if you need to.
Alternatively, you can just use the toString() method on the Preferences object and you should get a nice readable output.
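For example, a minimal Kotlin sketch of that (assuming the DataStore was created with preferencesDataStore(name = "settings"), to match the file name in the question):
import android.content.Context
import android.util.Log
import androidx.datastore.preferences.core.Preferences
import androidx.datastore.preferences.preferencesDataStore
import kotlinx.coroutines.flow.first

// Assumption: the DataStore is declared like this somewhere at top level.
val Context.dataStore by preferencesDataStore(name = "settings")

suspend fun dumpPreferences(context: Context) {
    val prefs: Preferences = context.dataStore.data.first()
    Log.d("PrefsDump", prefs.toString())        // readable key/value dump
    prefs.asMap().forEach { (key, value) ->     // or iterate the entries yourself
        Log.d("PrefsDump", "${key.name} = $value")
    }
}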

I used a hex editor ("Hex Fiend" for macOS) and it seems understandable.
You may refer to the preferences.proto here (thanks @rohit-sathyanarayana).
The binary file is a map of <string, value> pairs.
Each pair starts with 0x0A followed by its length in bytes. For example, 0x26 means the next 38 bytes.
The name field starts with 0x0A followed by its length in bytes. For example, 0x04 for "name" and 0x05 for "token".
The value field starts with 0x12 followed by its length in bytes. For example, 0x1E = the next 30 bytes.
The first byte of the value might indicate the field type. For example, 0x2A = string field.
The second byte is the length of the value. For example, 0x1C = 28 bytes.
Since 0x0A is the same as a line feed, you can open the preferences_pb file as text if most of your fields are strings.
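If you prefer to inspect the bytes from inside the app instead of a hex editor, a quick Kotlin sketch (file name as in the question):
import android.content.Context
import android.util.Log
import java.io.File

fun dumpPreferencesHex(context: Context) {
    val file = File(context.filesDir, "datastore/settings.preferences_pb")
    // Print each byte as two hex digits, e.g. "0A 26 0A 04 ..." per the layout described above.
    val hex = file.readBytes().joinToString(" ") { "%02X".format(it.toInt() and 0xFF) }
    Log.d("PrefsHex", hex)
}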

I found a workaround in the DataStore source code. Just change the pbFile to File(filesDir, "datastore/settings.preferences_pb").

Related

Unable to download file with special character from Amazon S3

I have been trying to download a file from Amazon S3 whose name ends with a special character.
The file name ends with an "=" as a result of Base64 encoding. When I try to download this file, I receive an error:
The specified key does not exist. (Service: Amazon S3; Status Code: 404; Error Code: NoSuchKey;
I tried URL-encoding the string, so the "=" becomes "%3D", but I still receive the same error.
If I remove the "=" from the file name, I am able to download the file without issues. But this is a shared file and it has to be accessed from iOS as well.
NOTE: The iOS Amazon SDK works even when the file name has "=" in it.
The issue occurs only with the Android SDK.
According to AWS documentation
Safe Characters
The following character sets are generally safe for use in key names:
Alphanumeric characters [0-9a-zA-Z]
Special characters !, -, _, ., *, ', (, and )
and
Characters That Might Require Special Handling
The following characters in a key name may require additional code handling and will likely need to be URL encoded or referenced as HEX. Some of these are non-printable characters and your browser may not handle them, which will also require special handling:
Ampersand ("&")
Dollar ("$")
ASCII character ranges 00–1F hex (0–31 decimal) and 7F (127 decimal)
'At' symbol ("@")
Equals ("=")
Semicolon (";")
Colon (":")
Plus ("+")
Space – Significant sequences of spaces may be lost in some uses (especially multiple spaces)
Comma (",")
Question mark ("?")
So this confirms that "=" requires special handling.
It would be better to replace the last "=" character with another safe character to avoid the issue.
Please try to change the "=" to "&#61".
Since there is no issue on iOS, I expect it could be related to the Android environment.
Note that some characters could also be forbidden because of the sh/bash/Android shell environment,
and please also take into consideration that the disk format (FAT32 on a typical Android external memory card) could be a factor that forbids some characters in the filename.
If you take a look here, and especially at the answer by kreker:
According to the wiki, and assuming that you are using external data storage formatted as FAT32:
Allowable characters in directory entries are:
any byte except for values 0–31, 127 (DEL) and: " * / : < > ? \ | + , . ; = [ ] (lowercase a-z are stored as A-Z). With VFAT LFN, any Unicode except NUL.
You will note that = is not an allowed character on an Android FAT32 partition.
Since I expect that Android will treat = as a restricted character, you may try to escape it with \= or quote the file name in your code.
An example with a copy:
cp filename=.png mynewfile=.png #before
cp "filename=.png" "mynewfile=.png" #after
"VCS...=.png"
If none of these tricks work, you will have to change the filename to remove the "=" when you create those files.
Regards
The following characters in a key name may require additional code handling and will likely need to be URL encoded or referenced as HEX. Some of these are non-printable characters and your browser may not handle them, which will also require special handling:
The best practice to ensure compatibility between applications when defining key names is to use:
- Alphanumeric characters [0-9a-zA-Z]
- Special characters !, -, _, ., *, ', (, and )
Using Android, you need to encode the file name; the character = (commonly used as an operator) becomes %3D.
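For example, a minimal Kotlin sketch of that encoding step (the file name is a made-up example):
import java.net.URLEncoder

val fileName = "cGF5bG9hZA==.png"                      // hypothetical key ending in "="
val encodedKey = URLEncoder.encode(fileName, "UTF-8")  // "=" becomes "%3D"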
First of all, I think you are using the CopyObject method of S3, or you received a file name from an S3 event (or somewhere else) which you are trying to download. The issue is that AWS handles special characters differently when it stores the names. If you go to the S3 console and click on the file name, you'll see the URI, which will have different values for special characters; for example, a space will be replaced by +. So you need to handle the special characters accordingly. Misleading examples won't help you: AWS has constraints on file names, and if you save them otherwise it will replace them with acceptable characters, so your actual file name will be different from the one you uploaded, hence the file-not-found error.

Access the file from salesforce

I am using Salesforce as the backend for my Android app,
and somewhere while using SOQL, I am getting a response something like this:
Article_FileNames":"Template.doc"
Article_FileContentTypes":"application/msword
Article_FileBodys":"/services/data/v35.0/sobjects/Hub_Knowledge_Articlekav/xxxxxxxxxxx/Article_FileBodys
ArticleNumber":"000001010
LastPublishedDate":"2015-12-09T20:34:48.000+0000
Now I can't figure out how to let the user open the file named "Template.doc" on click.
How do I open this file?
Is there any file path or ID which will link to this filename?
Any help will be appreciated!
The only file types that are directly available from outside of your Salesforce instance are images, which are made so by setting the Externally Available Image flag on the Document object itself.
If you need to work with other file types, you're going to have to build them yourself to facilitate their download by the end user. This can be done by querying your instance for the Document, which will contain the Base64 encoded file contents (Body), that array's length (BodyLength), the file extension (Type), and also the file name itself (Name).
Given what you provided, your SOQL query would look something like:
SELECT Body, BodyLength, Name, Type FROM Document WHERE Name = 'Template' AND Type = 'doc'
From there, you should have everything you need to build the file in memory.
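A minimal Kotlin sketch of that last step (the function and variable names are placeholders; bodyBase64 stands for the Base64-encoded Body value returned by the query):
import android.content.Context
import android.util.Base64
import java.io.File

fun saveDocument(context: Context, bodyBase64: String, name: String, type: String): File {
    val bytes = Base64.decode(bodyBase64, Base64.DEFAULT)  // decode the file contents
    val outFile = File(context.filesDir, "$name.$type")    // e.g. Template.doc
    outFile.writeBytes(bytes)
    return outFile  // hand this to an ACTION_VIEW intent, a document viewer, etc.
}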

What is the meaning of %©»ªµ in a PDF code header?

I am trying to create a PDF in my Android application using the Android PDF Writer. This is a very basic library that allows you to create simple PDF files. It works quite well, but there is one thing I do not understand:
When I look at the generated PDF source code, I can see that the file starts with the following lines:
%PDF-1.4
%©»ªµ
1 0 obj
<<
/Type /Catalog
/Pages 2 0 R
>>
endobj
...
What does the second line mean? I searched a lot of different PDF syntax documentation but have found no hint of what that line could mean. In all the examples I found, the %PDF-VersionXY line is directly followed by the first object/the catalog.
I am not sure if this is valid PDF code at all, or if this is an error due to some charset/encoding problem in the library's source code.
Any idea what this could be about? What information could be included in this place, and is %©»ªµ valid PDF or some encoding error?
Taking a look at the PDF 1.4 reference here (or also the current 1.7 here), section 3.4.1 says:
Note: If a PDF file contains binary data, as most do (see Section 3.1, “Lexical Conventions”),
it is recommended that the header line be immediately followed by a
comment line containing at least four binary characters—that is, characters whose
codes are 128 or greater. This will ensure proper behavior of file transfer applications
that inspect data near the beginning of a file to determine whether to treat the file’s
contents as text or as binary.
So your generator seems to include this additional comment line by default, even if there is no binary data to follow. What's in there doesn't matter as long as each byte value is 128 or greater (that is, outside the ASCII range). In your case the hex values are A9 BB AA B5, so everything is fine and you don't have to worry about this line.
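For illustration, a minimal Kotlin sketch that writes such a header by hand (pdfFile is a placeholder; the library already does this for you):
import java.io.File
import java.io.FileOutputStream

fun writePdfHeader(pdfFile: File) {
    FileOutputStream(pdfFile).use { out ->
        out.write("%PDF-1.4\n".toByteArray(Charsets.US_ASCII))
        // Binary comment line: '%' followed by four bytes with values of 128 or greater.
        out.write(byteArrayOf('%'.code.toByte(), 0xA9.toByte(), 0xBB.toByte(), 0xAA.toByte(), 0xB5.toByte(), '\n'.code.toByte()))
        // ... the PDF objects (catalog, pages, ...) would follow here ...
    }
}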

Concatenate formatted string in macro

I am doing some algorithm development on the Android platform. I want to modify a past developer's code and add a keyword to it, since he put so much useful log info in the code, and I want to grep logcat for a new keyword to see all the log output I want.
1. The idea is to use adb logcat | grep 'keyword' to see the log output. For example, the keyword can be a person's name, James.
2. The past developer wrapped ALOGE in the header file like this, and added many LOG_ACD calls in the .c file:
#define LOG_ACD(fmt, args...) if (acd->stats_debug_mask & STATS_DEBUG_MASK_ACD_LOG) ALOGE(fmt, ##args)
An example in the .c file is:
LOG_ACD("%s: acd_enable %d, monitor %d, freq %d, afd_state %d, acd_atb %d",
        func, output->acd_enable, output->acd_monitor,
        output->freq, output->acd_state, output->acd_atb);
3. How can I add the keyword to the above line of code so that every LOG_ACD line in the .c file carries my new keyword? The interesting part for me is that ALOGE itself is not passed a single fixed string; the format string is supplied at each call site in the .c file.
I hope I have described the problem clearly. Thank you, guys.
You say that the format string will be generated in the C file. I think you don't mean quite what you say.
For printf-like functions, it is common to specify a literal format string. (The format string is the string with all the format specifiers like %d. A literal string is a zero-terminated string constant between double quotes.) If that is the case (and your example backs this assumption), you can use string-literal concatenation:
#define LOG_ACD(fmt, args...) ALOGE("ACD: " fmt, ##args)
Two adjacent string literals are compiled as one, e.g. "A" "B" is essentially the same as "AB". The macro will generate a compile-time error when the format string is not a literal, but, as said above, that's unusual.

How to remove accent characters from an InputStream

I am trying to parse an RSS 2.0 feed on Android using a pull parser.
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), null);
The prolog of the feed XML says the encoding is "utf-8". When I open the remote stream and pass it to my pull parser, I get "invalid token, document not well formed" exceptions.
When I save the XML file and open it in the browser (Firefox), the browser reports the presence of the Unicode 0x12 character (grave accent?) in the file and fails to render the XML.
What is the best way to handle such cases assuming that I do not have any control over the XML being returned?
Thanks.
Where did you find that 0x12 is the grave accent? UTF-8 has the character range 0x00-0x7F encoded the same as ASCII, and ASCII code point 0x12 is a control character, DC2, or CTRL+R.
It sounds like an encoding problem of some sort. The simplest way to resolve that is to look at the file you've saved in a hex editor. There are some things to check:
the byte order mark (BOM) at the beginning might confuse some XML parsers
even though the XML declaration says the encoding is UTF-8, the file may not actually be in that encoding, and it will be decoded incorrectly.
not all Unicode characters are legal in XML, which is why Firefox refuses to render it. In particular, the XML spec says that 0x9, 0xA and 0xD are the only valid characters less than 0x20, so 0x12 will definitely cause compliant parsers to grumble.
If you can upload the file to pastebin or similar, I can help find the cause and suggest a resolution.
EDIT: Ok, you can't upload. That's understandable.
The XML you're getting is corrupted somehow, and the ideal course of action is to contact the party responsible for producing it, to see if the problem can be resolved.
One thing to check before doing that, though: are you sure you are getting the data undisturbed? Some forms of communication (SMS) allow only 7-bit characters. This would turn 0x92 (the Windows-1252 right single quote/apostrophe) into 0x12. Seems like quite a coincidence, particularly if these appear in the file where you would expect an accent.
Otherwise, you will have to try to make the best of what you have:
although not strictly necessary, be defensive and pass "UTF-8" as the second parameter to setInput on the parser.
similarly, force the parser to use another character encoding by passing a different encoding as the second parameter. Encodings to try in addition to "UTF-8" are "iso-8859-1" and "UTF-16". A full list of supported encodings for Java is given on the Sun site; you could try all of these. (I couldn't find a definitive list of supported encodings for Android.)
As a last resort, you can strip out invalid characters, e.g. remove all characters below 0x20 that are not whitespace (0x9, 0xA and 0xD are all whitespace). If removing them is difficult, you can replace them instead.
For example
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

class ReplacingInputStream extends FilterInputStream
{
    ReplacingInputStream(InputStream in) { super(in); }

    @Override
    public int read() throws IOException
    {
        int read = super.read();
        // Replace control characters invalid in XML (below 0x20, except tab 0x9, LF 0xA, CR 0xD) with a space.
        if (read != -1 && read < 0x20 && !(read == 0x9 || read == 0xA || read == 0xD))
            read = 0x20;
        return read;
    }
}
You wrap this around your existing input stream, and it filters out the invalid characters. Note that you could easily do more damage to the XML, or end up with nonsense XML, but equally it may allow you to get out the data you need or to more easily see where the problems lie.
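A quick usage sketch in Kotlin (the feed URL is a placeholder):
import android.util.Xml
import java.net.URL

val feedUrl = "https://example.com/feed.xml"            // placeholder URL
val parser = Xml.newPullParser()
val raw = URL(feedUrl).openStream()
parser.setInput(ReplacingInputStream(raw), "UTF-8")     // wrap the raw stream before parsing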
I used to filter it with a regex, but the trick is not to try to find and replace the accents. That depends on the encoding, and you don't want to change the content.
Try to wrap the content of the tags in CDATA sections, like this:
<title>My title</title>
<link>http://mylink.com</link>
<description>My description</description>
To this
<title><![CDATA[My title]]></title>
<link><![CDATA[http://mylink.com]]></link>
<description><![CDATA[My Description]]></description>
The regex shouldn't be very hard to figure out. It works for me; hope it helps you.
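A rough Kotlin sketch of that idea (the tag list is an assumption, and this naive regex will break if the content itself contains "]]>"):
fun wrapInCdata(xml: String, tags: List<String> = listOf("title", "link", "description")): String {
    var result = xml
    for (tag in tags) {
        val pattern = Regex("<$tag>(.*?)</$tag>", RegexOption.DOT_MATCHES_ALL)
        result = pattern.replace(result) { m ->
            "<$tag><![CDATA[${m.groupValues[1]}]]></$tag>"   // wrap the captured content
        }
    }
    return result
}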
The problem with UTF-8 is that it is a multibyte encoding. As such it needs a way to indicate when a character is formed by more than one byte (maybe two, three, four, ...). The way of doing this is by reserving some byte values to signal multibyte characters. Thus encoding follows some basic rules:
One byte characters have no MSB set (codes compatible with 7-bit ASCII).
Two byte characters are represented by sequence: 110xxxxx 10xxxxxx
Three bytes: 1110xxxx 10xxxxxx 10xxxxxx
Four bytes: 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
Your problem is that you may be reading a character string supposedly encoded as UTF-8 (as the XML encoding declaration states), but the byte chunk might not really be encoded in UTF-8 (it is a common mistake to declare something as UTF-8 but encode the text with a different encoding, such as Cp1252). Your XML parser tries to interpret the byte chunks as UTF-8 characters but finds something that does not fit the encoding rules (an illegal character). E.g. a lead byte of the form 110xxxxx must always be followed by a continuation byte of the form 10xxxxxx; sequences such as 01xxxxxx 11xxxxxx 00xxxxxx would be illegal.
This problem does not arise when fixed-length (non-variable) encodings are used. That is, if you state in your XML declaration that your file uses Windows-1252 encoding but you actually use ANSI, your only problem will be that non-ASCII characters (values > 127) render incorrectly.
The solution:
1. Try to detect the actual encoding by other means.
If you will always be reading data from the same source, you could sample some files and use an advanced text editor that tries to infer the actual encoding of the file (e.g. Notepad++, jEdit, etc.).
Or do it programmatically: preprocess the raw bytes before doing any actual XML processing (a small sketch of this follows after the code example below).
2. Force the actual encoding at the XML processor.
Alternatively, if you do not mind about non-ASCII characters (no matter if strange symbols appear now and then), you could go directly to step 2 and force XML processing with any ASCII-compatible 8-bit fixed-length encoding (ANSI, any Windows-XXXX codepage, Mac Roman, etc.). With your present code you could just try:
XmlPullParser parser = Xml.newPullParser();
parser.setInput(url.open(), "ISO-8859-1");
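And a minimal Kotlin sketch for the "do it programmatically" option above: check whether the downloaded bytes really decode as UTF-8 before handing them to the parser (the function name is a placeholder):
import java.nio.ByteBuffer
import java.nio.charset.CharacterCodingException
import java.nio.charset.CodingErrorAction
import java.nio.charset.StandardCharsets

fun isValidUtf8(bytes: ByteArray): Boolean =
    try {
        StandardCharsets.UTF_8.newDecoder()
            .onMalformedInput(CodingErrorAction.REPORT)
            .onUnmappableCharacter(CodingErrorAction.REPORT)
            .decode(ByteBuffer.wrap(bytes))                 // throws on malformed UTF-8
        true
    } catch (e: CharacterCodingException) {
        false
    }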
Calling setInput(istream, null) already means for the pull parser to try to detect the encoding on its own. It obviously fails, due to the fact that there is an actual problem with the file. So it's not like your code is wrong - you can't be expected to be able to parse all incorrect documents, whether ill-formed or with wrong encodings.
If however it's mandatory that you try to parse this particular document, what you can do is modify your parsing code so it's in a function that takes the encoding as a parameter and is wrapped in a try/catch block. The first time through, do not specify an encoding, and if you get an encoding error, relaunch it with ISO-8859-1. If it's mandatory to have it succeed, repeat for other encodings, otherwise call it quits after two.
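A minimal Kotlin sketch of that retry approach (the actual parsing work and the exceptions you care about depend on your code):
import android.util.Xml
import java.net.URL
import org.xmlpull.v1.XmlPullParserException

fun parseWithFallback(url: URL, encodings: List<String?> = listOf(null, "ISO-8859-1")) {
    for (encoding in encodings) {
        try {
            val parser = Xml.newPullParser()
            parser.setInput(url.openStream(), encoding)   // null lets the parser detect the encoding
            // ... walk the document with parser.next() here ...
            return                                        // success: stop trying other encodings
        } catch (e: XmlPullParserException) {
            // parsing failed, try the next encoding
        }
    }
}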
Before parsing your XML, you may tweak it and manually remove the accents. Maybe not the best solution, but it will do the job.
