Encrypt logcat output on the kernel side - Android

I am trying to add simple encryption to the log read by logcat on Android. For performance, I add the encryption in the read path instead of the write path (drivers/staging/android/logger.c):
/*
 * do_read_log_to_user - reads exactly 'count' bytes from 'log' into the
 * user-space buffer 'buf'. Returns 'count' on success.
 *
 * Caller must hold log->mutex.
 */
static ssize_t do_read_log_to_user(struct logger_log *log,
                                   struct logger_reader *reader,
                                   char __user *buf,
                                   size_t count)
{
    size_t len;
    unsigned int i;
    size_t _size;
    unsigned char *str;

    /*
     * We read from the log in two disjoint operations. First, we read from
     * the current read head offset up to 'count' bytes or to the end of
     * the log, whichever comes first.
     */
    len = min(count, log->size - reader->r_off);

    // added
    _size = strlen(log->buffer);
    printk(KERN_INFO "_size=%d, len=%d, count=%d, logsize=%d, r_off=%d\n",
           _size, len, count, log->size, reader->r_off);
    str = kmalloc(len, GFP_KERNEL);
    if (str == NULL)
        printk(KERN_ERR "logger: failed to allocate buffer\n");
    memcpy(str, log->buffer, len);
    for (i = 0; i < _size; i++)
        str[i] ^= 14; // XOR encryption here

    // if (copy_to_user(buf, log->buffer + reader->r_off, len))
    if (copy_to_user(buf, str + reader->r_off, len)) // changed here
        return -EFAULT;

    /*
     * Second, we read any remaining bytes, starting back at the head of
     * the log.
     */
    if (count != len) {
        // if (copy_to_user(buf + len, log->buffer, count - len))
        if (copy_to_user(buf + len, str, count - len)) // changed here
            return -EFAULT;
    }

    reader->r_off = logger_offset(reader->r_off + count);

    if (str != NULL) {
        kfree(str);
        str = NULL;
    }

    return count;
}
but it does not work as expected. When I run adb logcat, the error message comes from this piece of code (/system/core/logcat/logcat.cpp):
else if (entry->entry.len != ret - sizeof(struct logger_entry)) {
    fprintf(stderr, "read: unexpected length. Expected %d, got %d\n",
            entry->entry.len, ret - sizeof(struct logger_entry));
    exit(EXIT_FAILURE);
}
Did I miss something here?
Update:
I modified the code to store the data in a temporary buffer str, XOR it there, and only then copy it to user space.
/* Allocate the memory for storing plain text */
str = kmalloc(len, GFP_KERNEL);
if (str == NULL) {
    printk(KERN_ERR "logger: failed to allocate buffer\n");
    return -ENOMEM;
}
memcpy(str, log->buffer + reader->r_off, len);

/* Start: Add a simple XOR encryption here */
for (i = 0; i < strlen(str); i++)
    str[i] ^= 14;
/* End: Add a simple XOR encryption here */

// if (copy_to_user(buf, log->buffer + reader->r_off, len)) // Original code
if (copy_to_user(buf, str, len)) // Modified code
    return -EFAULT;
However, the same error is still displayed, and I'm sure the problem comes from the following code
// Start: Add a simple XOR encryption here
for (i = 0; i < strlen(str); i++)
    str[i] ^= 14;
// End: Add a simple XOR encryption here
because if I remove it, everything works as in the original code. I can't find anything wrong here. Does anyone see something weird?

First of all, you should show a unified diff against the original sources. Anyone familiar with kernel development and these sources will be used to reading diffs.
I can see a few issues with the code:
When kmalloc() fails, you print an error message but copy into str anyway. A sure recipe for disaster.
_size = strlen(log->buffer): I don't know whether the buffer is NUL-terminated, especially since there is a log->size member. If it is not NUL-terminated, strlen() will either report a size that is too large or run until it hits an inaccessible page, which may result in an access fault.
You allocate a buffer of min(...) size, but the XOR loop then writes _size bytes into it. Whenever there is a mismatch between the two, you corrupt heap memory.
You use strlen() to determine the buffer size, but do a memcpy() afterwards. You usually shouldn't mix str*() and mem*() functions; stay with one or the other.
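A minimal userspace sketch of the corrected approach, assuming the intent is to XOR exactly the bytes being returned, bounded by the explicit lengths and the circular-buffer wraparound rather than by strlen() (the function name read_xored and the fixed key 14 are taken from the question; everything else is illustrative):

```c
#include <string.h>

#define XOR_KEY 14

/* Copy 'count' bytes out of a circular buffer of 'size' bytes starting at
 * offset 'r_off', XOR-ing each byte. All bounds are explicit lengths; the
 * buffer contents are treated as binary data, never as a NUL-terminated
 * string. In the kernel this would fill a kmalloc'd bounce buffer before
 * copy_to_user(), and return -ENOMEM when the allocation fails. */
static void read_xored(const unsigned char *log, size_t size, size_t r_off,
                       unsigned char *out, size_t count)
{
    size_t len = (count < size - r_off) ? count : size - r_off;
    size_t i;

    memcpy(out, log + r_off, len);       /* first span: up to end of buffer */
    if (count != len)
        memcpy(out + len, log, count - len);  /* second span: wrap to head */
    for (i = 0; i < count; i++)          /* XOR exactly 'count' bytes */
        out[i] ^= XOR_KEY;
}
```

Note that this still garbles the binary struct logger_entry headers (including the len field logcat checks), which is why even a correct XOR of the whole payload will trip the "unexpected length" check unless the reader undoes it.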

Related

"Heap corruption detected" when I splice together strings in JNI

I used JNI to concatenate strings like this, and the app crashed with "heap corruption detected by dlmalloc_real", i.e. the runtime told me there was a problem with the memory heap. I am a newbie, and I don't know whether my JNI concatenation is correct, or whether I need the +1 when allocating the char block. The crash leaves no error log to track; I only get "heap corruption detected by dlmalloc_real". The error currently appears only on Android 5.x; no other version shows it. So I want to know whether my JNI string concatenation is wrong.
This is my original code; the re value is returned directly to the Java layer at the end.
const char *username = (*env)->GetStringUTFChars(env, user_name, NULL);
const char *password = (*env)->GetStringUTFChars(env, pass_word, NULL);
const char *http = "thi is a string***";
const char *pas = "*thi is a string****";
const char *httpTail = "thi is a string***";
char *host = (char *) malloc(strlen(http) + strlen(username) + strlen(pas) +
                             strlen(password) + strlen(httpTail) + 1);
strcat(host, http);
strcat(host, username);
strcat(host, pas);
strcat(host, password);
strcat(host, httpTail);
correctUtfBytes(host);
(*env)->ReleaseStringUTFChars(env, user_name, username);
(*env)->ReleaseStringUTFChars(env, pass_word, password);
jstring re = (*env)->NewStringUTF(env, host);
return re;
This method only checks whether the bytes conform to the modified UTF-8 format C code must hand to JNI. Without this detection and conversion, NewStringUTF may abort because the encoding does not conform: Java strings and C char strings use different encodings.
void correctUtfBytes(char *bytes) {
    char three = 0;
    while (*bytes != '\0') {
        unsigned char utf8 = *(bytes++);
        three = 0;
        // Switch on the high four bits.
        switch (utf8 >> 4) {
            case 0x00:
            case 0x01:
            case 0x02:
            case 0x03:
            case 0x04:
            case 0x05:
            case 0x06:
            case 0x07:
                // Bit pattern 0xxx. No need for any extra bytes.
                break;
            case 0x08:
            case 0x09:
            case 0x0a:
            case 0x0b:
            case 0x0f:
                /*
                 * Bit pattern 10xx or 1111, which are illegal start bytes.
                 * Note: 1111 is valid for normal UTF-8, but not the
                 * modified UTF-8 used here.
                 */
                *(bytes - 1) = '?';
                break;
            case 0x0e:
                // Bit pattern 1110, so there are two additional bytes.
                utf8 = *(bytes++);
                if ((utf8 & 0xc0) != 0x80) {
                    --bytes;
                    *(bytes - 1) = '?';
                    break;
                }
                three = 1;
                // Fall through to take care of the final byte.
            case 0x0c:
            case 0x0d:
                // Bit pattern 110x, so there is one additional byte.
                utf8 = *(bytes++);
                if ((utf8 & 0xc0) != 0x80) {
                    --bytes;
                    if (three) --bytes;
                    *(bytes - 1) = '?';
                }
                break;
        }
    }
}
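A likely culprit in the concatenation code above: malloc() returns uninitialized memory, and strcat() appends after the first NUL byte it happens to find in the destination, so the very first strcat(host, http) starts writing at an arbitrary position and can overrun the allocation, which is exactly what dlmalloc later detects as heap corruption. A minimal sketch of the fix (the build_url helper name is hypothetical):

```c
#include <stdlib.h>
#include <string.h>

/* Concatenate parts into a freshly allocated, NUL-terminated string.
 * The destination is set to the empty string first, so strcat() has a
 * defined starting point; the +1 reserves room for the trailing NUL. */
static char *build_url(const char *parts[], size_t n)
{
    size_t i, total = 1;            /* 1 byte for the trailing NUL */

    for (i = 0; i < n; i++)
        total += strlen(parts[i]);
    char *host = malloc(total);
    if (host == NULL)
        return NULL;
    host[0] = '\0';                 /* the fix: terminate before strcat() */
    for (i = 0; i < n; i++)
        strcat(host, parts[i]);
    return host;
}
```

That the crash only surfaced on Android 5.x is plausible: whether the uninitialized block happens to start with a NUL byte depends on the allocator, so the bug can stay latent for years.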

I can't convert Android HPROF to standard HPROF on Lollipop

My test device runs 5.0.1 (Lollipop).
I got an Android heap dump file in Android Studio 1.3, but I saw an error message.
So I tried to get the dump file (example file name: android.hprof) in DDMS instead.
Then I tried to convert the Android HPROF to a standard HPROF file:
hprof-conv android.hprof standard.hprof
hprof-conv then prints the message ERROR: read 40070 of 65559 bytes.
Can somebody help me?
https://android.googlesource.com/platform/dalvik.git/+/android-4.2.2_r1/tools/hprof-conv/HprofConv.c#221
/*
 * Read some data, adding it to the expanding buffer.
 *
 * This will ensure that the buffer has enough space to hold the new data
 * (plus the previous contents).
 */
static int ebReadData(ExpandBuf* pBuf, FILE* in, size_t count, int eofExpected)
{
    size_t actual;

    assert(count > 0);
    ebEnsureCapacity(pBuf, count);
    actual = fread(pBuf->storage + pBuf->curLen, 1, count, in);
    if (actual != count) {
        if (eofExpected && feof(in) && !ferror(in)) {
            /* return without reporting an error */
        } else {
            fprintf(stderr, "ERROR: read %d of %d bytes\n", actual, count);
            return -1;
        }
    }
    pBuf->curLen += count;
    assert(pBuf->curLen <= pBuf->maxLen);
    return 0;
}
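The error path above fires when fread() returns fewer bytes than requested and EOF was not expected at that point, i.e. the file on disk is shorter than a record length recorded inside it. "read 40070 of 65559 bytes" therefore usually means the .hprof dump itself is truncated (an interrupted DDMS pull or an incomplete dump), and re-capturing the dump is the fix, not hprof-conv. A small standalone demo of the same short-read check:

```c
#include <stdio.h>

/* Try to read exactly 'count' bytes; report a short read the same way
 * hprof-conv's ebReadData() does. Returns -1 on short read, 0 on success. */
static int read_exact(FILE *in, char *buf, size_t count)
{
    size_t actual = fread(buf, 1, count, in);
    if (actual != count) {
        fprintf(stderr, "ERROR: read %zu of %zu bytes\n", actual, count);
        return -1;
    }
    return 0;
}
```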

Why does SKIA not use a custom FilterInputStream?

I'm trying to decode a bitmap from an extended FilterInputStream. I have to perform on-the-fly byte manipulation on the image data to provide a decodable image to SKIA, but it seems SKIA ignores my custom InputStream and initializes one of its own...
When I run my test application, attempting to load a 2 MB JPEG results in ObfuscatedInputStream.read(byte[]) being called only once from BitmapFactory.decodeStream().
It seems that once the file type is determined from the first 16 KB of data retrieved from my ObfuscatedInputStream, SKIA initializes its own native stream and reads from that, effectively rendering all my changes to how the input stream works useless...
Here is the buffered read function in my extended FilterInputStream class. The Log.d at the top of the function is only executed once.
@Override
public int read(byte b[], int off, int len) throws IOException
{
    Log.d(TAG, "called read[] with aval + " + super.available() + " len " + len);
    int numBytesRead = -1;
    if (pos == 0)
    {
        numBytesRead = fill(b);
        if (numBytesRead < len)
        {
            int j;
            numBytesRead += ((j = super.read(b, numBytesRead, len - numBytesRead)) == -1) ? 0 : j;
        }
    }
    else
        numBytesRead = super.read(b, 0, len);
    if (numBytesRead > -1)
        pos += numBytesRead;
    Log.d(TAG, "actually read " + numBytesRead);
    return numBytesRead;
}
Has anyone ever encountered this issue? It seems like the only way to get my desired behavior is to rewrite portions of the SKIA library... I would really like to know what the point of the InputStream parameter is if the native implementation initializes a stream of its own...
It turns out SKIA wasn't able to tell that the data was an actual image from the first 1024 bytes it takes in. If it doesn't detect that the file is an image, it won't bother decoding the rest, hence read[] being called only once.
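This matches how image decoders generally sniff formats: they compare the leading bytes against known magic numbers, so an obfuscated header matches nothing and decoding is abandoned before the rest of the stream is ever read. A sketch of the idea, using JPEG's SOI marker (0xFF 0xD8) as the example (the helper name is illustrative, not SKIA's API):

```c
#include <stddef.h>

/* Return 1 if the buffer starts with the JPEG SOI marker (0xFF 0xD8),
 * the kind of magic-number check a decoder's format sniffer performs. */
static int looks_like_jpeg(const unsigned char *buf, size_t len)
{
    return len >= 2 && buf[0] == 0xFF && buf[1] == 0xD8;
}
```

So as long as the obfuscation touches the header bytes, the sniffer rejects the stream; leaving the magic bytes intact (or de-obfuscating before handing data to BitmapFactory) avoids the problem.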

Getting segmentation fault SIGSEGV in memcpy after mmap

I wrote a simple Android native function that gets a filename and some other arguments and reads the file by mmap()ing it.
Because the file is mmap'd, I don't really need to call read(); I just memcpy() from the address returned by mmap().
But somewhere I'm getting a SIGSEGV, probably because I'm accessing memory I'm not permitted to. I don't understand why: I already asked for all of the file's memory to be mapped!
I'm attaching my code and the error I got:
EDIT
I fixed the non-terminating loop, but I still get a SIGSEGV after 25001984 bytes have been read.
The function runs with these arguments:
jn_bytes = 100,000,000
jbuffer_size = 8192
jshared = jpopulate = jadvice = 0
void Java_com_def_benchmark_Benchmark_testMmapRead(JNIEnv* env, jobject javaThis,
        jstring jfile_name, unsigned int jn_bytes, unsigned int jbuffer_size,
        jboolean jshared, jboolean jpopulate, jint jadvice) {
    const char *file_name = env->GetStringUTFChars(jfile_name, 0);
    /* *** start count *** */
    int fd = open(file_name, O_RDONLY);

    // get the size of the file
    size_t length = lseek(fd, 0L, SEEK_END);
    lseek(fd, 0L, SEEK_SET);
    length = length > jn_bytes ? jn_bytes : length;

    // man 2 mmap: MAP_POPULATE is only supported for private mappings since Linux 2.6.23
    int flags = 0;
    if (jshared) flags |= MAP_SHARED; else flags |= MAP_PRIVATE;
    if (jpopulate) flags |= MAP_POPULATE;
    //int flags = MAP_PRIVATE;

    int *addr = reinterpret_cast<int *>(mmap(NULL, length, PROT_READ, flags, fd, 0));
    if (addr == MAP_FAILED) {
        __android_log_write(ANDROID_LOG_ERROR, "NDK_FOO_TAG", strerror(errno));
        return;
    }
    int *initaddr = addr;
    if (jadvice > 0)
        madvise(addr, length, jadvice == 1 ? (MADV_SEQUENTIAL | MADV_WILLNEED) : (MADV_DONTNEED));
    close(fd);

    char buffer[jbuffer_size];
    void *ret_val = buffer;
    int read_length = length;
    while (ret_val == buffer || read_length < jbuffer_size) {
        /***** GETTING SIGSEGV SOMEWHERE HERE IN THE WHILE *****/
        ret_val = memcpy(buffer, addr, jbuffer_size);
        addr += jbuffer_size;
        read_length -= jbuffer_size;
    }
    munmap(initaddr, length);
    /* stop count */
    env->ReleaseStringUTFChars(jfile_name, file_name);
}
and the error log:
15736^done
(gdb)
15737 info signal SIGSEGV
&"info signal SIGSEGV\n"
~"Signal Stop\tPrint\tPass to program\tDescription\n"
~"SIGSEGV Yes\tYes\tYes\t\tSegmentation fault\n"
15737^done
(gdb)
15738-stack-list-arguments 0 0 0
15738^done,stack-args=[frame={level="0",args=[]}]
(gdb)
15739-stack-list-locals 0
15739^done,locals=[]
(gdb)
There is a big problem here:
addr+=jbuffer_size;
You're bumping addr by sizeof(int) * jbuffer_size bytes whereas you just want to increment it by jbuffer_size bytes.
My guess is sizeof(int) is 4 on your system, hence you crash at around 25% of the way through your loop, because you're incrementing addr by a factor of 4x too much on each iteration.
This loop never terminates, because ret_val always equals buffer:
void *ret_val = buffer;
int read_length = length;
while (ret_val == buffer || read_length < jbuffer_size) {
    /***** GETTING SIGSEGV SOMEWHERE HERE IN THE WHILE *****/
    ret_val = memcpy(buffer, addr, jbuffer_size);
    addr += jbuffer_size;
    read_length -= jbuffer_size;
}
memcpy always returns its first argument, so ret_val never changes.
The while loop is infinite:
while (ret_val == buffer || read_length < jbuffer_size) {
    ret_val = memcpy(buffer, addr, jbuffer_size);
    addr += jbuffer_size;
    read_length -= jbuffer_size;
}
as memcpy() always returns the destination buffer, so ret_val == buffer is always true (and is therefore useless as part of the terminating condition). This means addr is incremented by jbuffer_size ints (sizeof(int) * jbuffer_size bytes) on every iteration of the loop and passed to memcpy(), resulting in accesses to invalid memory.
The condition in while (ret_val == buffer || read_length < jbuffer_size) is wrong. ret_val == buffer is always true, and if read_length < jbuffer_size is true when the loop is reached, it remains true forever, because read_length is only ever reduced (well, until it underflows INT_MIN).
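Putting the two fixes together (advance by bytes rather than ints, and terminate on the number of bytes remaining rather than on memcpy's return value), a userspace sketch of the corrected copy loop:

```c
#include <string.h>

/* Copy 'length' bytes from 'src' through a bounce buffer of 'buf_size'
 * bytes. Pointer arithmetic is done on char *, so each step advances by
 * exactly the number of bytes copied; the loop ends when nothing remains.
 * Returns the total number of bytes copied. */
static size_t copy_in_chunks(const char *src, size_t length,
                             char *buffer, size_t buf_size)
{
    size_t copied = 0;

    while (copied < length) {
        size_t chunk = length - copied;
        if (chunk > buf_size)
            chunk = buf_size;               /* final partial chunk is smaller */
        memcpy(buffer, src + copied, chunk); /* char * arithmetic: bytes */
        copied += chunk;
    }
    return copied;
}
```

In the original function the equivalent change is `addr = (char *)addr + jbuffer_size;` (or declaring addr as char * in the first place) plus a `read_length > 0` loop condition, with the final chunk clamped to the bytes left so the copy never runs past the mapping.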

Android/PHP - Encryption and Decryption

I'm struggling with code from this page: http://www.androidsnippets.com/encrypt-decrypt-between-android-and-php
I want to send data from a server to an Android application and vice versa, but it should be sent as an encrypted string. I manage to encrypt and decrypt the string in PHP, but on Android the application crashes with the following error message when decrypting:
java.lang.Exception: [decrypt] unable to parse ' as integer.
This error occurs here, in the for loop:
public static byte[] hexToBytes(String str) {
    if (str == null) {
        return null;
    } else if (str.length() < 2) {
        return null;
    } else {
        int len = str.length() / 2;
        byte[] buffer = new byte[len];
        for (int i = 0; i < len; i++) {
            buffer[i] = (byte) Integer.parseInt(str.substring(i*2, i*2+2), 16);
        }
        System.out.println("Buffer: " + buffer);
        return buffer;
    }
}
This is by the way the string that should be decrypted: f46d86e65fe31ed46920b20255dd8ea6
You're talking about encrypting and decrypting, but you're showing code that simply turns hex strings (such as "4F") back into numeric bytes (such as 0x4F); that may be relevant to how you transfer the data (if you cannot transfer binary), but it is completely unrelated to encryption/decryption.
Since the Android code you have contains only a single Integer parse, have you examined the input you're giving it? str.substring(i*2,i*2+2) apparently contains data other than [0-9A-F] when the exception occurs. You should start by examining the string you've received and comparing it to what you sent, to make sure they agree and they only contain hexadecimal characters.
Edit -- passing the string "f46d86e65fe31ed46920b20255dd8ea6" through your hexToBytes() function works flawlessly. Your input is probably not what you think it is.
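A defensive variant of that parsing step, sketched in C for illustration: reject any non-hex character up front instead of throwing mid-parse, which makes bad input (whitespace, URL-encoding artifacts, truncation) visible immediately. The function names are hypothetical:

```c
#include <stddef.h>

/* Parse one hex digit; return -1 if the character is not [0-9a-fA-F]. */
static int hex_digit(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Decode a hex string into 'out'. Returns the number of bytes written,
 * or -1 on odd-length input, any non-hex character, or overflow of 'out'. */
static int hex_to_bytes(const char *str, unsigned char *out, size_t out_len)
{
    size_t i, n = 0;

    for (i = 0; str[i] != '\0'; i += 2) {
        int hi = hex_digit(str[i]);
        int lo = str[i + 1] ? hex_digit(str[i + 1]) : -1;
        if (hi < 0 || lo < 0 || n >= out_len)
            return -1;
        out[n++] = (unsigned char)((hi << 4) | lo);
    }
    return (int)n;
}
```

Running the received string through a validator like this before decoding would have pointed straight at the stray character that made Integer.parseInt throw.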
