My C++ code contains a JNI function in which I wish to convert a jstring to a const char*. This is the code I am using:
extern "C" {
void Java_com_sek_test_JNITest_printSomething(JNIEnv * env, jclass cl, jstring str) {
const char* mystring = env->GetStringUTFChars(env, str, 0);
PingoScreen::notify();
}
I get this error:
no matching function for call to '_JNIEnv::GetStringUTFChars(JNIEnv*&, _jstring*&, int)'
What am I doing wrong?
There are several things that aren't quite right with your code and approach:
As you've discovered, (*env)->JNIFunc(env, ...) in C should be env->JNIFunc(...) in C++. Your vendor's (Google Android's) jni.h simplifies the C++ syntax over the C syntax.
You're not calling the "Release" function (ReleaseStringUTFChars) corresponding to the "pinning" function (GetStringUTFChars). This is very important because pinned objects reduce the memory efficiency of the JVM's garbage collector.
You've misinterpreted the final argument to GetStringUTFChars. It's a pointer for an output parameter (a jboolean* telling you whether a copy was made). The result isn't very interesting, so pass nullptr.
You're using JNI functions that deal with the modified UTF-8 encoding (GetStringUTFChars et al). There should be no need to ever use that encoding. Java classes are very capable at converting encodings. They also give you control over what happens when a character cannot be encoded in the target encoding. (The default is to convert it to a ?.)
The idea of converting a JVM object reference (jstring) to a pointer to one-byte storage (char*) needs a lot of refinement. You probably want to copy the characters of a JVM java.lang.String to a "native" string using a specific or OS-default encoding. The Java string holds Unicode characters in a UTF-16 encoding. Android generally uses the Unicode character set with the UTF-8 encoding. If you do need something else, you can specify it with a Charset object.
Also, in C++, it is more convenient to use STL std::string to hold counted byte-sequences for a string. You can get a pointer to a null-terminated buffer from a std::string if you need it.
Be sure to read Android's JNI Tips.
Here is an implementation of your function that lets the vendor's JVM implementation pick the target encoding (which is UTF-8 for Android):
extern "C" JNIEXPORT void Java_com_sek_test_JNITest_printSomething
(JNIEnv * env, jclass cl, jstring str) {
// TODO check for JVM exceptions where appropriate
// javap -s -public java.lang.String | egrep -A 2 "getBytes"
const auto stringClass = env->FindClass("java/lang/String");
const auto getBytes = env->GetMethodID(stringClass, "getBytes", "()[B");
const auto stringJbytes = (jbyteArray) env->CallObjectMethod(str, getBytes);
const auto length = env->GetArrayLength(stringJbytes);
const auto pBytes = env->GetByteArrayElements(stringJbytes, nullptr);
std::string s((char *)pBytes, length);
env->ReleaseByteArrayElements(stringJbytes, pBytes, JNI_ABORT);
const auto pChars = s.c_str(); // if you really do need a pointer
}
However, I'd probably do the call to String.getBytes on the Java side, defining the native method to take a byte array instead of a string; a sketch of that follows below.
(Of course, implementations that use GetStringUTFChars do work for some subset of Unicode strings, but why impose an esoteric and needless limit?)
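For illustration, here is a minimal sketch of that byte-array variant. The names are hypothetical; it assumes the Java side declares private native void printSomething(byte[] bytes) and calls it as printSomething(myString.getBytes()):
#include <jni.h>
#include <string>

extern "C" JNIEXPORT void JNICALL
Java_com_sek_test_JNITest_printSomething(JNIEnv* env, jclass, jbyteArray bytes) {
    // The encoding was already chosen on the Java side by String.getBytes().
    const jsize length = env->GetArrayLength(bytes);
    const auto pBytes = env->GetByteArrayElements(bytes, nullptr);
    std::string s(reinterpret_cast<const char*>(pBytes), length);
    env->ReleaseByteArrayElements(bytes, pBytes, JNI_ABORT); // read-only: no write-back
    // use s ...
}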
According to the documentation,
GetStringUTFChars
const jbyte* GetStringUTFChars(JNIEnv *env, jstring string,
jboolean *isCopy);
Returns a pointer to an array of bytes representing the string in modified UTF-8 encoding. This array is valid until it is released by ReleaseStringUTFChars().
If isCopy is not NULL, then *isCopy is set to JNI_TRUE if a copy is made; or it is set to JNI_FALSE if no copy is made.
So the last parameter should be a jboolean* (pass NULL if you don't care whether a copy was made).
Try changing this line
const char* mystring = env->GetStringUTFChars(env, str, 0);
to
const char* mystring = env->GetStringUTFChars(str, 0);
In C++, env-> is already the call through the JNIEnv, so you don't pass env again as the first argument. Hope it works :)
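Putting the fixes together, a minimal corrected version of the function from the question, with the Release call the first answer insists on (PingoScreen::notify() kept from the question):
extern "C" JNIEXPORT void JNICALL
Java_com_sek_test_JNITest_printSomething(JNIEnv* env, jclass cl, jstring str) {
    const char* mystring = env->GetStringUTFChars(str, nullptr);
    // ... use mystring ...
    env->ReleaseStringUTFChars(str, mystring); // always pair Get with Release
    PingoScreen::notify();
}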
Related
A library written in C++ produces a continuous stream of data, and the same library has to be ported to different platforms. Now, integrating the lib into an Android application, I am trying to create shared memory between the NDK and SDK sides.
Below is a working snippet.
Native code:
#include <jni.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/ashmem.h>
#include <android/log.h>
#include <string>
char *buffer;
constexpr size_t BufferSize=100;
extern "C" JNIEXPORT jobject JNICALL
Java_test_com_myapplication_MainActivity_getSharedBufferJNI(
JNIEnv* env,
jobject /* this */) {
int fd = open("/dev/ashmem", O_RDWR);
ioctl(fd, ASHMEM_SET_NAME, "shared_memory");
ioctl(fd, ASHMEM_SET_SIZE, BufferSize);
buffer = (char*) mmap(NULL, BufferSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
return (env->NewDirectByteBuffer(buffer, BufferSize));
}
extern "C" JNIEXPORT void JNICALL
Java_test_com_myapplication_MainActivity_TestBufferCopy(
JNIEnv* env,
jobject /* this */) {
for(size_t i = 0; i < BufferSize; i += 2) {
__android_log_print(ANDROID_LOG_INFO, "native_log", "Count %zu value:%d", i, buffer[i]);
}
//pass `buffer` to dynamically loaded library to update share memory
//
}
SDK code:
//MainActivity.java
public class MainActivity extends AppCompatActivity {
// Used to load the 'native-lib' library on application startup.
static {
System.loadLibrary("native-lib");
}
final int BufferSize = 100;
@RequiresApi(api = Build.VERSION_CODES.Q)
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ByteBuffer byteBuffer = getSharedBufferJNI();
//update the command to shared memory here
//byteBuffer updated with commands
//Call JNI to inform update and get the response
TestBufferCopy();
}
/**
* A native method that is implemented by the 'native-lib' native library,
* which is packaged with this application.
*/
public native ByteBuffer getSharedBufferJNI();
public native void TestBufferCopy(); // the native implementation returns void
}
Question:
Accessing primitive arrays from Java in native code is by reference only if the garbage collector supports pinning. Is it true the other way around?
Is it guaranteed by the Android platform that a reference is ALWAYS shared from NDK to SDK without a redundant copy?
Is this the right way to share memory?
You only need /dev/ashmem to share memory between processes. NDK and SDK (Java/Kotlin) code run in the same Linux process and have full access to the same memory space.
The usual way to define memory that can be used both from C++ and Java is by creating a Direct ByteBuffer. You don't need JNI for that, Java API has ByteBuffer.allocateDirect(int capacity). If it's more natural for your logical flow to allocate the buffer on the C++ side, JNI has the NewDirectByteBuffer(JNIEnv* env, void* address, jlong capacity) function that you used in your question.
Working with a Direct ByteBuffer is very easy on the C++ side, but not so efficient on the JVM side. The reason is that this buffer is not backed by an array, and the only API you have involves ByteBuffer.get() and its typed variations (getting a byte array, char, int, …). You have control of the current position in the buffer, but working this way requires a certain discipline: every get() operation updates the current position. Also, random access to this buffer is rather slow, because it involves calling both positioning and get APIs. Therefore, in some cases of non-trivial data structures, it may be easier to write your custom access code in C++ and have 'intelligent' getters called through JNI.
It's important not to forget to set ByteBuffer.order(ByteOrder.nativeOrder()). The order of a newly created byte buffer is, counterintuitively, BIG_ENDIAN. This applies both to buffers created from Java and from C++; a JNI sketch follows below.
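As a sketch, setting the native order from the C++ side right after creating the buffer (the equivalent of calling buffer.order(ByteOrder.nativeOrder()) in Java; directBuffer is assumed to come from NewDirectByteBuffer as in the question):
jclass orderClass = env->FindClass("java/nio/ByteOrder");
jmethodID nativeOrder = env->GetStaticMethodID(orderClass, "nativeOrder",
    "()Ljava/nio/ByteOrder;");
jobject order = env->CallStaticObjectMethod(orderClass, nativeOrder);
jclass bufferClass = env->GetObjectClass(directBuffer);
jmethodID orderMethod = env->GetMethodID(bufferClass, "order",
    "(Ljava/nio/ByteOrder;)Ljava/nio/ByteBuffer;");
env->CallObjectMethod(directBuffer, orderMethod, order); // returns the same buffer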
If you can isolate the instances when C++ needs access to such shared memory, and don't really need it to be pinned all the time, it's worth considering working with a byte array. In Java, you get more efficient random access. On the NDK side, you will call GetByteArrayElements() or GetPrimitiveArrayCritical(). The latter is more efficient, but its use imposes restrictions on which Java functions you can call until the array is released. On Android, neither method involves memory allocation and copying (with no official guarantee, though). Even though the C++ side uses the same memory as Java, your JNI code must call the appropriate Release…() function, and better do that as early as possible. It's a good practice to handle this Get/Release via RAII, as sketched below.
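A minimal RAII sketch of that Get/Release pairing (the class name is illustrative, not a standard API):
#include <jni.h>

class ScopedByteArray {
public:
    ScopedByteArray(JNIEnv* env, jbyteArray array)
        : env_(env), array_(array),
          data_(env->GetByteArrayElements(array, nullptr)) {}
    ~ScopedByteArray() {
        // Mode 0: copy back changes (if any) and release the native view.
        if (data_) env_->ReleaseByteArrayElements(array_, data_, 0);
    }
    jbyte* data() const { return data_; }
private:
    JNIEnv* env_;
    jbyteArray array_;
    jbyte* data_;
};
With this, any early return releases the elements automatically.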
Let me summarize my findings.
Accessing primitive arrays from Java in native code is by reference only if the garbage collector supports pinning. Is it true the other way around?
The contents of a direct buffer can, potentially, reside in native memory outside of the ordinary garbage-collected heap, and hence the garbage collector can't move or reclaim that memory.
Is it guaranteed by the Android platform that a reference is ALWAYS shared from NDK to SDK without a redundant copy?
Yes, as per the documentation of NewDirectByteBuffer:
jobject NewDirectByteBuffer(JNIEnv* env, void* address, jlong capacity);
Allocates and returns a direct java.nio.ByteBuffer referring to the block of memory starting at the memory address address and extending capacity bytes.
I am building a video system, which contains video capture from camera, video encode and video mux.
I want to do the encoding at the C level, since there is an algorithm I want to apply before the data is sent to the encoder.
There is a 'native-media' project in the NDK samples, which calls OMX functions at the C level to do the video decode-and-play work, but it seems that the NDK doesn't support OMX encoding now; is that true?
I've successfully used the MediaCodec API at the Java level. If the NDK doesn't support OMX encoding, can I use the MediaCodec API from C code?
Yes, from C code you can call Java methods.
For example, if we want to call the method x.doSomething(5), supposing x is an instance of class MyClass in the "com.example.ndk" package and the method returns void, we can use:
jclass cls = (*env)->FindClass(env, "com/example/ndk/MyClass");
jmethodID mid = (*env)->GetMethodID(env, cls, "doSomething", "(I)V");
(*env)->CallVoidMethod(env, x, mid, 5);
Where:
"env" is the JNIEnv pointer which you receive in the C JNI method (read here for information about the JNIEnv pointer and native methods).
"(I)V" is the method signature, which in this case it says that the method has an int parameter (I), and returns void (V).
"x" is a jobject obtained in some previous *env function (here you can find all of JNI functions pointed by env).
I'd like to convert an ASCII char* to a wchar_t* in C++ on Linux without using mbstowcs(). On iOS and Windows this works perfectly; on Android, however, mbstowcs seems to convert things quite literally, one-to-one. Even using different variations of setlocale(), I've been unable to convert successfully.
I might end up just converting it manually on Android by copying one byte and filling the rest with zeroes. But is this proper for ASCII? Are the first 256 code points of UTF-32/Unicode the same as the ISO 8859-1 (Latin-1) character set?
To make things a bit clearer:
ASCII is a character encoding using values from 0..127 to encode a single character.
Latin-1 is another character set that extends ASCII, using the values from 128..255 to encode its own characters.
Indeed, on most architectures a byte is 8 bits, so there are still 128 values available when storing an ASCII character in a byte.
Several different character sets were thus designed to extend ASCII with values from 128..255. By happy accident, the one referred to as Latin-1 was used for the first 256 code points of Unicode (as pointed out by BoBTFish). So if you have, on one hand, a string of chars that you know is encoded using Latin-1, you can just assign each value to a wchar_t (which will ensure correct "zero filling" with regard to endianness on your architecture), and it will be a valid wstring of Unicode code points corresponding to the same characters. The consumer of your wstring then has to interpret its content as Unicode code points.
However, as soon as you cannot guarantee that the encoding of the original string is Latin-1, you will run into problems (e.g., UTF-8 does not map byte-per-byte to Latin-1). A minimal widening sketch follows below.
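Here is such a per-character widening, as a sketch; it is only valid if the input really is Latin-1 (or plain ASCII):
#include <string>

std::wstring latin1ToWide(const std::string& in) {
    std::wstring out;
    out.reserve(in.size());
    for (unsigned char c : in)                  // unsigned char avoids sign extension
        out.push_back(static_cast<wchar_t>(c)); // zero-extend: Latin-1 == first 256 code points
    return out;
}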
If you don't mind taking an STL dependency and using string and wstring instead of raw char * and wchar_t * pointers, you can use a function like the following to perform string conversions:
template<typename TARGET, typename SOURCE>
TARGET convertString(const SOURCE &s)
{
TARGET result;
// Copies code units one-for-one; only correct for 7-bit ASCII data.
result.assign(s.begin(), s.end());
return result;
}
Use this as follows:
#include <string>
#include <iostream>
using namespace std;
int main()
{
wstring wstr(L"HELLO WORLD");
string str(convertString<string, wstring>(wstr));
cout << str << endl;
return 0;
}
This performs a character-by-character conversion and is platform-independent. This has been tested on Windows using GCC 4.7.3 and Visual C++ 2012 as well as on Linux using GCC 4.7.3.
The following code can be shortened using std::wstring_convert (deprecated since C++17, but still available); a sketch of the shortened form follows after the demo:
#include <string>
#include <locale>
std::string convert(std::wstring str, std::locale loc = std::locale(),
std::mbstate_t state = std::mbstate_t())
{
const wchar_t* a; char* b;
std::string res;
res.resize(str.size() * 4); // room for multibyte target encodings
std::use_facet<std::codecvt<wchar_t, char, std::mbstate_t>>(loc)
.out(state, &str[0], &str[str.size()], a, &res[0], &res[res.size()], b);
res.resize(b - &res[0]); // keep only the bytes actually produced
return res;
}
int main()
{
std::wstring a = L"abcdef";
std::string b = convert(a);
}
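For comparison, a sketch of the shortened std::wstring_convert form mentioned above; note it targets UTF-8 specifically rather than the locale's encoding, and both std::wstring_convert and std::codecvt_utf8 are deprecated since C++17:
#include <codecvt>
#include <locale>
#include <string>

std::string convertShort(const std::wstring& str) {
    std::wstring_convert<std::codecvt_utf8<wchar_t>> conv;
    return conv.to_bytes(str); // throws std::range_error on conversion failure
}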
I have C code that prints with something clever like
printf("hello ");
// do something
printf(" world!\n");
which outputs
hello world!
I want to reuse that code with Android and iOS, but Log.d() and NSLog() effectively add a newline at the end of every string I pass them, so that the output of this code:
NSLog(#"hello ");
// do something
NSLog(#"world!\n");
comes out (more or less) as:
hello
world!
I'm willing to replace printf with some macro to make Log.d and NSLog emulate printf's handling of '\n'; any suggestions?
One solution that might work is to define a global log function that doesn't flush its buffer until it finds a newline.
Here's a (very) simple version in Java for Android:
import android.util.Log;
class CustomLogger {
private static final StringBuilder buffer = new StringBuilder();
public static void log(String message) {
buffer.append(message);
if (message.indexOf('\n') != -1) {
Log.d("SomeTag", buffer.toString());
buffer.setLength(0);
}
}
}
...
CustomLogger.log("Hello, ");
// Stuff
CustomLogger.log("world!\n"); // Now the message gets logged
It's completely untested, but you get the idea.
This particular implementation has some performance issues; it might be better, for example, to only check whether the last character is a newline.
I just realized that you wanted this in C. It shouldn't be too hard to port, though a standard library wouldn't hurt (to get things like a string buffer).
For posterity, this is what I did: store logged strings in a buffer, and print the part before the newline whenever there is a newline in the buffer.
Yes, the NDK logcat is dumb about this. There are ways to redirect stderr/stdout to logcat, but there are drawbacks: you either need "adb shell setprop", which only works on rooted devices, or a dup()-like technique; creating a thread just for that purpose is not a good idea on embedded devices IMHO, though you can look further below for this technique.
So I did my own function/macros for that purpose. Here are snippets. In a debug.c, do this:
#include "debug.h"
#include <stdio.h>
#include <stdarg.h>
static const char LOG_TAG[] = "jni";
void android_log(android_LogPriority type, const char *fmt, ...)
{
static char buf[1024];
static char *bp = buf;
va_list vl;
va_start(vl, fmt);
int available = sizeof(buf) - (bp - buf);
int nout = vsnprintf(bp, available, fmt, vl);
if (nout >= available) {
__android_log_write(type, LOG_TAG, buf);
__android_log_write(ANDROID_LOG_WARN, LOG_TAG, "previous log line has been truncated!");
bp = buf;
} else {
char *lastCR = strrchr(bp, '\n');
bp += nout;
if (lastCR) {
*lastCR = '\0';
__android_log_write(type, LOG_TAG, buf);
char *rest = lastCR+1;
int len = bp - rest; // strlen(rest)
memmove(buf, rest, len+1); // no strcpy (may overlap)
bp = buf + len;
}
}
va_end(vl);
}
Then in debug.h do this:
#include <android/log.h>
void android_log(android_LogPriority type, const char *fmt, ...);
#define LOGI(...) android_log(ANDROID_LOG_INFO, __VA_ARGS__)
#define LOGW(...) android_log(ANDROID_LOG_WARN, __VA_ARGS__)
...
Now you just need to include debug.h and call LOGI() with printf-like semantics; output is buffered until a '\n' is encountered (or the buffer is full).
This is not perfect though: if the string generated by a single call is longer than the buffer, it will be truncated and output anyway. But frankly, 1024 chars should be enough in most cases (even less than this). Anyway, if this happens, a warning is output so you know about it.
Also note that vsnprintf() only became standard with C99 (it works in the Android NDK). We could use vsprintf() instead (standard since C89), but it is unsafe on its own.
======================================================================
Now for the dup() technique, you can look here (James Moore's answer); a sketch is also included below.
Then you can get rid of the function above and define your macro as:
#define LOG(...) fprintf(stderr, __VA_ARGS__)
and you're done.
Advantages:
C/C++ libraries often use stderr for their logs. Using dup() is the only way to get their output into logcat without modifying their code (some big ones make hundreds of direct calls to fprintf(stderr, ...)).
stderr is standard C and has been in use for decades. All standard C library functions related to streams can be used with it. The same goes for C++: you can even use cerr with the << operator. It works because, under the hood, it is still stderr.
Very long lines are not truncated (instead, they are split), which is a good reason to use a shorter buffer (256 in the example).
Disadvantages:
A thread of its own (though it's an I/O-only thread, so the impact is close to nothing).
No log priority (INFO, WARN, ERROR, etc.) can be chosen at the call site. It uses a default one (INFO), so DDMS will always show stderr lines in the same color.
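For reference, here is a hedged sketch of that dup() technique (simplified, error handling omitted): redirect stderr into a pipe and drain it from a dedicated thread into logcat.
#include <android/log.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int sPipe[2];

static void* stderrLogger(void*) {
    char buf[256];
    ssize_t n;
    while ((n = read(sPipe[0], buf, sizeof(buf) - 1)) > 0) {
        if (buf[n - 1] == '\n') --n;   // logcat adds its own newline
        buf[n] = '\0';
        __android_log_write(ANDROID_LOG_INFO, "stderr", buf);
    }
    return NULL;
}

static void redirectStderrToLogcat(void) {
    pipe(sPipe);
    dup2(sPipe[1], STDERR_FILENO);     // stderr now feeds the pipe
    setvbuf(stderr, NULL, _IOLBF, 0);  // flush on every newline
    pthread_t t;
    pthread_create(&t, NULL, stderrLogger, NULL);
    pthread_detach(t);
}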
You could always just build the string one segment at a time:
String message = "Hello";
// Do Something
message += " World!";
Log.v("Example", message);
On Android, a direct ByteBuffer does not ever seem to release its memory, not even when calling System.gc().
Example: doing
Log.v("?", Long.toString(Debug.getNativeHeapAllocatedSize()));
ByteBuffer buffer = allocateDirect(LARGE_NUMBER);
buffer=null;
System.gc();
Log.v("?", Long.toString(Debug.getNativeHeapAllocatedSize()));
gives two numbers in the log, the second one being at least LARGE_NUMBER larger than the first.
How do I get rid of this leak?
Added:
Following the suggestion by Gregory to handle alloc/free on the C++ side, I then defined
JNIEXPORT jobject JNICALL Java_com_foo_bar_allocNative(JNIEnv* env, jobject thiz, jlong size)
{
void* buffer = malloc(size);
jobject directBuffer = env->NewDirectByteBuffer(buffer, size);
jobject globalRef = env->NewGlobalRef(directBuffer);
return globalRef;
}
JNIEXPORT void JNICALL Java_com_foo_bar_freeNative(JNIEnv* env, jobject thiz, jobject globalRef)
{
void *buffer = env->GetDirectBufferAddress(globalRef);
free(buffer);
env->DeleteGlobalRef(globalRef);
}
I then get my ByteBuffer on the JAVA side with
ByteBuffer myBuf = allocNative(LARGE_NUMBER);
and free it with
freeNative(myBuf);
Unfortunately, while it does allocate fine, it a) still keeps the memory allocated according to Debug.getNativeHeapAllocatedSize() and b) leads to an error
W/dalvikvm(26733): JNI: DeleteGlobalRef(0x462b05a0) failed to find entry (valid=1)
I am now thoroughly confused; I thought I at least understood the C++ side of things... Why is free() not returning the memory? And what am I doing wrong with DeleteGlobalRef()?
There is no leak.
ByteBuffer.allocateDirect() allocates memory from the native heap / free store (think malloc()) which is in turn wrapped in to a ByteBuffer instance.
When the ByteBuffer instance gets garbage collected, the native memory is reclaimed (otherwise you would leak native memory).
You're calling System.gc() in the hope that the native memory is reclaimed immediately. However, calling System.gc() is only a request, which explains why your second log statement doesn't tell you the memory has been released: it simply hasn't been yet!
In your situation, there is apparently enough free memory in the Java heap and the garbage collector decides to do nothing: as a consequence, unreachable ByteBuffer instances are not collected yet, their finalizer is not run and native memory is not released.
Also, keep in mind this bug in the JVM (not sure how it applies to Dalvik though) where heavy allocation of direct buffers leads to unrecoverable OutOfMemoryError.
You commented about controlling things from JNI. This is actually possible; you could implement the following:
publish a native ByteBuffer allocateNative(long size) entry point that:
calls void* buffer = malloc(size) to allocate native memory
wraps the newly allocated array into a ByteBuffer instance with a call to (*env)->NewDirectByteBuffer(env, buffer, size);
converts the ByteBuffer local reference to a global one with (*env)->NewGlobalRef(env, directBuffer);
publish a native void disposeNative(ByteBuffer buffer) entry point that:
calls free() on the direct buffer address returned by (*env)->GetDirectBufferAddress(env, directBuffer);
deletes the global ref with (*env)->DeleteGlobalRef(env, directBuffer);
Once you call disposeNative on the buffer, you're not supposed to use the reference anymore, so it could be very error prone. Reconsider whether you really need such explicit control over the allocation pattern.
Forget what I said about global references. Actually, global references are a way to store a reference in native code (e.g. in a global variable) so that a later call to a JNI method can use that reference. So you would have, for instance:
from Java, call native method foo(), which creates a global reference out of a local reference (obtained by creating an object from the native side) and stores it in a native global variable (as a jobject)
once back in Java, call native method bar(), which gets the jobject stored by foo() and processes it further
finally, still from Java, a last call to native baz() deletes the global reference
Sorry for the confusion.
I was using TurqMage's solution until I tested it on an Android 4.0.3 emulator (Ice Cream Sandwich). For some reason, the call to DeleteGlobalRef fails with a JNI warning: JNI WARNING: DeleteGlobalRef on non-global 0x41301ea8 (type=1), followed by a segmentation fault.
I took out the calls to NewGlobalRef and DeleteGlobalRef (see below) and it seems to work fine on the Android 4.0.3 emulator. As it turns out, I'm only using the created byte buffer on the Java side, which holds a Java reference to it anyway, so I think the call to NewGlobalRef() was not needed in the first place.
JNIEXPORT jobject JNICALL Java_com_foo_allocNativeBuffer(JNIEnv* env, jobject thiz, jlong size)
{
void* buffer = malloc(size);
jobject directBuffer = env->NewDirectByteBuffer(buffer, size);
return directBuffer;
}
JNIEXPORT void JNICALL Java_com_foo_freeNativeBuffer(JNIEnv* env, jobject thiz, jobject bufferRef)
{
void *buffer = env->GetDirectBufferAddress(bufferRef);
free(buffer);
}
Not sure if your last comments are old or what, Kasper. I did the following...
JNIEXPORT jobject JNICALL Java_com_foo_allocNativeBuffer(JNIEnv* env, jobject thiz, jlong size)
{
void* buffer = malloc(size);
jobject directBuffer = env->NewDirectByteBuffer(buffer, size);
jobject globalRef = env->NewGlobalRef(directBuffer);
return globalRef;
}
JNIEXPORT void JNICALL Java_com_foo_freeNativeBuffer(JNIEnv* env, jobject thiz, jobject globalRef)
{
void *buffer = env->GetDirectBufferAddress(globalRef);
env->DeleteGlobalRef(globalRef);
free(buffer);
}
Then in Java...
mImageData = (ByteBuffer)allocNativeBuffer( mResX * mResY * mBPP );
and
freeNativeBuffer(mImageData);
mImageData = null;
and everything seems to be working fine for me. Thanks a lot Gregory for this idea. The link to the referenced Bug in the JVM has gone bad.
Use reflection to call java.nio.DirectByteBuffer.free(). Remember that the Android DVM is based on Apache Harmony, which supports the method above.
Direct NIO buffers are allocated on the native heap, not on the Java heap managed by garbage collection. It's up to the developer to release their native memory. It's a bit different with OpenJDK and Oracle Java, because they try to run the garbage collector when the creation of a direct NIO buffer fails, but there is no guarantee that it helps.
N.B.: You'll have to tinker a bit more if you use asFloatBuffer(), asIntBuffer(), ..., because only the direct byte buffer can be "freed". A JNI sketch follows below.
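For illustration, a hedged sketch of invoking that internal free() method through JNI from C++ (it relies on the non-public Harmony/Dalvik DirectByteBuffer.free(), which does not exist on other runtimes):
// directBuffer is a jobject referring to a direct ByteBuffer.
jclass cls = env->GetObjectClass(directBuffer);
jmethodID freeMethod = env->GetMethodID(cls, "free", "()V");
if (freeMethod != nullptr) {
    env->CallVoidMethod(directBuffer, freeMethod);
} else {
    env->ExceptionClear(); // no free() on this runtime; rely on GC instead
}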