I'm trying to generate filenames in my Android app from a 4-byte byte array. I'm Base64-encoding the byte array with the URL_SAFE option. However, the generated string ends with a newline character, which makes it unusable as a filename. Is there any way to remove the newline?
My code is as follows:
byte[] myByteArray = new byte[4];
myByteArray = generateBytes(myByteArray); // fills the byte array with some data
final String byteString = Base64.encodeToString(myByteArray, Base64.URL_SAFE);
After some Googling, I found out that Android's Base64 encoding automatically appends a newline to the string, and that using the NO_WRAP flag would solve this. However, is the output generated with the NO_WRAP flag still filename-safe?
Thanks.
OK, turns out I can use (Base64.URL_SAFE | Base64.NO_WRAP) to apply both flags.
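For reference, a minimal sketch of the combined call (same myByteArray as above; android.util.Base64 also has a NO_PADDING flag if the trailing '=' characters are unwanted):
// URL_SAFE swaps '+' and '/' for '-' and '_'; NO_WRAP suppresses the trailing newline.
final String fileName = Base64.encodeToString(myByteArray, Base64.URL_SAFE | Base64.NO_WRAP);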
How do I add Cyrillic characters to an EXIF file?
My code always writes the character "?" instead.
This code:
String userComment = "АБВГДЕЁЖЗИЙКЛСНОПРСТУФХЦЧШЩЪЫЬЭЮЯ";
exifInterface.setAttribute("UserComment", userComment );
exifInterface.saveAttributes();
or
String userComment = "АБВГДЕЁЖЗИЙКЛСНОПРСТУФХЦЧШЩЪЫЬЭЮЯ";
exifInterface.setAttribute("UserComment", new String(userComment.getBytes(), "UTF-8"));
exifInterface.saveAttributes();
I think the ExifInterface class does not support Cyrillic.
I tried to get the bytes from that tag with
byte [] bytes1 = exifInterface.getAttribute("UserComment").getBytes();
byte [] bytes2 = exifInterface.getAttribute("UserComment").getBytes("utf-8");
// byte [] bytes3 = exifInterface.getAttribute("UserComment").getBytes("utf-16");
// byte [] bytes3 = exifInterface.getAttribute("UserComment").getBytes("ISO-8859-1");
byte [] bytes3 = exifInterface.getAttribute("UserComment").getBytes("windows-1252");
I then displayed the bytes in hexadecimal notation. They were all rubbish.
There were 66 bytes for your 33 characters; I don't know which encoding is used.
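For reference, a small helper along these lines can dump a byte array as hex for comparison (the method name toHex is just an illustration):
// Render a byte array as space-separated hex, e.g. {0x41, 0x42} -> "41 42".
static String toHex(byte[] bytes) {
    StringBuilder sb = new StringBuilder();
    for (byte b : bytes) {
        sb.append(String.format("%02X ", b));
    }
    return sb.toString().trim();
}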
I wanted to compare them with the bytes of your alphabet string.
I also tried compiling for Android 7, but the result was the same.
I give up ;-).
"UserComment" tag in Exif support ASCII or Unicode.
Unfortunately, Android's ExifInterface only use ASCII to write or read the tag.
So, cyrillic symbol is not support by Android's ExifInterface.
But this lib may help you:
https://github.com/ddyos/UnicodeExifInterface
I'm working on Android, developing an app in which I upload files to Dropbox. As I don't want the names of these files to be readable, I'm encrypting them and then encoding the resulting byte array. The problem is that when I use these statements:
String fileNameEncrypted = Base64.encodeToString(encrypted, Base64.DEFAULT);
File file = new File(mDirectoryPath + "/" + fileNameEncrypted);
The string "fileNameEncrypted" contains forward and back slashes and maybe other characters that are not allowed for a file name. Besides, the forward slashes are confused with subfolders.
How could I solve this problem?
PS: my goal is that the filename can't be read in the Dropbox app.
[EDIT: rewrote the whole answer according to the comments]
Because Base64 encoding uses a special character (/) as well as both lower- and upper-case letters, it is not well suited to filenames on some operating systems such as Windows, where the file "aaa.txt" is the same as "AAA.txt".
Even the URL-safe variant of Base64 uses both lower- and upper-case characters.
The ASCII hex format (base16) provides a more compatible character set, 0-9 and A-F, for storing a byte array:
the character 'A' is 0x41 in base16, so you can write it as "41".
A more complete example: "test.txt" translates to 746573742E747874.
If you really need to hide the name, you can combine the encoding with a hash function. Because a hash is a one-way function, you will definitely hide the filename, but you will not be able to recover the real name from it.
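A minimal sketch of the one-way variant, assuming SHA-256 as the hash and reusing the Guava base16 encoder shown further below:
import java.security.MessageDigest;

// One-way: the stored name cannot be converted back to the original filename.
byte[] digest = MessageDigest.getInstance("SHA-256").digest("test.txt".getBytes("UTF-8"));
String hashedName = BaseEncoding.base16().lowerCase().encode(digest);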
If you need a two-way function, you can use a simple encryption method such as AES with an internal key.
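And a minimal sketch of the two-way variant, assuming a hard-coded 16-byte key (AES in ECB mode is used only to keep the example short; real code should prefer AES/GCM with a random IV):
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Encrypt the plain filename with an internal key; the result still needs a
// filename-safe encoding such as base16/base32 (see below).
static byte[] encryptName(String plainName, byte[] key16) throws Exception {
    Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key16, "AES"));
    return cipher.doFinal(plainName.getBytes("UTF-8"));
}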
You can use the Guava library to perform the transformation to base16 or base32, which have a more Windows-friendly character set than base64.
byte[] encrypted = "test.txt".getBytes();
BaseEncoding encoder = BaseEncoding.base16().lowerCase();
String newFilename = encoder.encode(encrypted);
If you want to use base32, just change the encoder.
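To get the original bytes back from such a filename later, the same encoding can decode it:
byte[] original = BaseEncoding.base16().lowerCase().decode(newFilename);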
You can use the base64 encoder in filename safe mode with
Base64.encodeToString(encrypted, Base64.URL_SAFE)
Documentation:
Encoder/decoder flag bit to indicate using the "URL and filename safe" variant of Base64 (see RFC 3548 section 4) where - and _ are used in place of + and /.
I'm encountering an odd situation: strings with Spanish characters that I load from my resource XML file display correctly in my TextViews, but strings that I fetch from a JSON file loaded via HTTP at runtime display the missing-character [] boxes.
ESPAÑOL, for example, works fine when embedded in my XML strings, but when pulled from my JSON it is rendered as ESPAÃ[]OL, so the Ñ is transformed into an Ã and a missing character!
I'm not sure at what point I need to intercept these strings and set the correct encoding on them. The JSON text file itself is generated on the server via Node, so, I'm not entirely sure if that's the point at which I should be encoding it, or if I should be encoding the fileReader on the Android side, or perhaps setting the TextView itself to be of some special encoding type (I'm unaware that this is an option, just sort of throwing my hands in the air, really).
[EDIT]
As per ianhanniballake's suggestion I am logging and seeing that the screwy characters are actually showing up in the log as well. However, when I look at the JSON file with a text viewer on the Android file system (it's sitting on the SDCARD) it appears correct.
So, it turned out that the text file was, indeed, encoded correctly and the issue was that I wasn't setting UTF-8 as my encoding on the FileInputStream...
The solution is to read the file thusly:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;

static String readInput() {
    StringBuffer buffer = new StringBuffer();
    try {
        FileInputStream fis = new FileInputStream("myfile.json");
        // The crucial part: decode the stream explicitly as UTF-8.
        InputStreamReader isr = new InputStreamReader(fis, "UTF-8");
        Reader in = new BufferedReader(isr);
        int ch;
        while ((ch = in.read()) > -1) {
            buffer.append((char) ch);
        }
        in.close();
        return buffer.toString();
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
I just found out that Android can correctly read in a file encoded using Windows ANSI (the so-called multi-byte encoding) and convert it to Java Unicode strings. But it fails when reading a Unicode file. It seems that Android is reading it in a byte-by-byte fashion. A Unicode string "ABC" in the file would be read into a Java String of length 6, and the characters are 0x41, 0x00, 0x42, 0x00, 0x43, 0x00.
BufferedReader in = new BufferedReader(new FileReader(pathname));
String str = in.readLine();
Please, is there a way to read Windows Unicode files correctly on Android? Thank you.
[Edited]
Experiments: I saved two Chinese characters, "難哪", in two Windows text files:
ANSI.txt -- C3 F8 AD FE
UNICODE.txt -- FF FE E3 96 EA 54
Then I put these files on the emulator's SD card and used the following program to read them in (note that the locale of the emulator has already been set to zh_TW):
BufferedReader in = new BufferedReader(new FileReader("/sdcard/ANSI.txt"));
String szLine = in.readLine();
int n = szLine.length(), j, i;
in.close();
for (i = 0; i < n; i++) {
    j = szLine.charAt(i);
}
Here is what I saw on the Emulator:
ANSI.txt -- FFFD FFFD FFFD
UNICODE.txt -- FFFD FFFD FFFD FFFD 0084
Apparently Android (or Java) is unable to properly decode the Chinese characters. So, how do I do this? Thank you in advance.
FileReader uses the platform default charset and therefore assumes the file is ASCII-compatible (UTF-8, or one of the older ASCII extensions).
Also, it is not a "Unicode file": it is a UTF-16-encoded file.
You will have to use an InputStreamReader and specify the encoding yourself:
BufferedReader in = new BufferedReader(new InputStreamReader(new FileInputStream(pathname), "UTF-16LE"));
You should also really read up on character sets and encodings; it seems to me that there is a lot you misunderstand about them.
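Since the UNICODE.txt file in the question starts with a byte-order mark (FF FE), the generic "UTF-16" charset also works and picks the endianness from the BOM; a minimal sketch using the same classes as the line above:
// "UTF-16" consumes the BOM (FF FE here) and selects little-endian automatically.
BufferedReader in = new BufferedReader(
        new InputStreamReader(new FileInputStream("/sdcard/UNICODE.txt"), "UTF-16"));
String line = in.readLine();
in.close();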
You can try the following code.
Normally, a Windows ANSI text file that contains Chinese characters may not be processed correctly on Android, because stream processing defaults to UTF-8. Once such a file is placed on an Android system, the normal stream processing cannot correctly recognize the Chinese portion.
The following code correctly parses strings from a Windows ANSI text file containing Chinese characters that has been placed on the Android SD card or in the assets folder.
It is very simple: just use the "BIG5" charset in the InputStreamReader.
I have verified this and it works well. Try it! FYI, KNC.
String pathname = "AAA.txt";
BufferedReader inBR = new BufferedReader(
        new InputStreamReader(new FileInputStream(pathname), "BIG5"));
String sData;
while ((sData = inBR.readLine()) != null) {
    System.out.println(sData);
}
A Unicode string "ABC" in the file would be read in to a Java String of length 6, and the characters are 0x41, 0x00, 0x42, 0x00, 0x43, 0x00.
How are you getting the length? What you have described is absolutely correct for a Java String. Java strings are UTF-16 (i.e., Unicode). This means that ABC will be stored in a Java string exactly as you describe (0x41, 0x00, 0x42, 0x00, 0x43, 0x00).
The String 'length', however, as returned by int String.length() will be 3 even though it is 6 bytes long.
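A quick illustration of that distinction:
import java.nio.charset.StandardCharsets;

String s = "ABC";
System.out.println(s.length());                                    // 3 (UTF-16 code units)
System.out.println(s.getBytes(StandardCharsets.UTF_16LE).length);  // 6 (bytes 41 00 42 00 43 00)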
In my Android app I can record audio and save it on the phone/SD card. I checked that it is audible and clear when I play it back on the phone. The size of the audio file created is 5.9 KB (.amr format).
Next I upload the file to the server, which stores the audio in a SQL database. The upload is successful. But when the uploaded audio is played back, it is all garbled...
In the database I store the audio in a column with datatype image and length 16.
My question is: why is the audio garbled after upload? How do I verify that the audio is saved correctly without any noise added?
Code for the file upload:
DataInputStream inputStream = new DataInputStream(new FileInputStream(fileName));
byte[] responseData = new byte[10000];
int length = 0;
StringBuffer rawResponse = new StringBuffer();
while (-1 != (length = inputStream.read(responseData))) {
    rawResponse.append(new String(responseData, 0, length));
}
String finalstring = rawResponse.toString();
voicedataArray = finalstring.getBytes();
Your problem is very likely due to the use of StringBuffer to buffer the response. A char in Java is a two-byte UTF-16 code unit, not a raw byte. The documentation for String#getBytes() says:
Returns a new byte array containing the characters of this string
encoded using the system's default charset.
So there's no guarantee that the bytes you pass in, converted to characters and then back to bytes, will be the same stream you started with.
I think you would need to code your solution using a dynamically expanding byte buffer in place of the StringBuffer.
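A minimal sketch of that approach, using ByteArrayOutputStream as the expanding buffer (assuming the same fileName variable as in the question):
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.InputStream;

// Buffer the raw bytes directly; no character conversion is involved.
InputStream in = new FileInputStream(fileName);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
byte[] chunk = new byte[10000];
int length;
while ((length = in.read(chunk)) != -1) {
    bos.write(chunk, 0, length);
}
in.close();
byte[] voicedataArray = bos.toByteArray();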
Also, two notes about the usage of StringBuffer:
1) All accesses to the StringBuffer are synchronized, so you're paying a performance penalty. StringBuilder is a modern-day replacement that doesn't do synchronization under the hood.
2) Each time you append to the StringBuffer:
rawResponse.append(new String(responseData, 0, length));
you are allocating a new string and throwing it away. That's really abusive to the garbage collector. StringBuffer actually has a form of append() that will directly take a char array, so there is no need to use an intermediate String. (But you probably don't want to use a StringBuffer in the first place).