How to convert char[] to ByteBuffer in JNI? - android

I want to pass a ByteBuffer over JNI to C++ as the buffer that receives an image decoded by AVDecode. The buffer is correctly filled on the C++ side, but the ByteBuffer on the Java side stays empty.
Please help me find where the error is. Thanks.
pOutBuffer is the ByteBuffer passed via JNI.
jclass ByteBufferClass = env->GetObjectClass(pOutBuffer);
jmethodID ArraryMethodId = env->GetMethodID(ByteBufferClass,"array","()[B");
jmethodID ClearMethodId = env->GetMethodID(ByteBufferClass,"clear","()Ljava/nio/Buffer;");
//clear buffer
env->CallObjectMethod(pOutBuffer,ClearMethodId);
jbyteArray OutByteArrary = (jbyteArray)env->CallObjectMethod(pOutBuffer,ArraryMethodId);
jbyte* OutJbyte = env->GetByteArrayElements(OutByteArrary, NULL);
Out = (unsigned char*)OutJbyte;
DecodeSize = AVDecode(m_pVideoDecode, (unsigned char *)In, inputSize, (unsigned char **)&Out, (int *)&pBFrameKey);
The decoding is correct and I can see that 'Out' is filled with the output image; however, when this function returns, pOutBuffer on the Java side is still empty.

How was the ByteBuffer created? Is it a direct or a non-direct ByteBuffer?
If it is a direct ByteBuffer, created in Java with the allocateDirect method, you can use GetDirectBufferAddress in your native code to get the buffer's backing address; any changes written there are immediately reflected in Java.
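For example, if pOutBuffer was created with allocateDirect, a minimal sketch of the decode call would be (AVDecode and the In/inputSize/pBFrameKey variables are taken from the question; expectedFrameSize is a placeholder for your frame size):
unsigned char* Out = (unsigned char*)env->GetDirectBufferAddress(pOutBuffer);
jlong capacity = env->GetDirectBufferCapacity(pOutBuffer);
if (Out == NULL || capacity < expectedFrameSize) {
    return; // not a direct buffer, or too small for one frame
}
// Decode straight into the buffer's backing memory; Java sees the bytes immediately.
DecodeSize = AVDecode(m_pVideoDecode, (unsigned char*)In, inputSize,
                      (unsigned char**)&Out, (int*)&pBFrameKey);
If the buffer is non-direct, note that GetByteArrayElements may hand you a copy of the array; changes are only guaranteed to be written back after ReleaseByteArrayElements with mode 0, which the snippet in the question never calls.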

Related

Comparing a jbytearray with a string in JNI

I have a JNI C function with a jbyteArray input parameter. It is a byte array of size 128 that I wish to compare with a #define string. How do I achieve this?
I tried to memcpy the jbyteArray to an unsigned char data[128] and then memcmp() data against the #define, but the memcpy crashed my app.
Thanks.
You can use GetByteArrayElements() to get the byte array contents and then compare using strncmp or memcmp or whatever:
#define COMPARE_STRING "somestring" // can be up to 128 bytes long
// JNIEnv *pEnv
// jbyteArray byteArray
// get the byte array contents:
jbyte* pBuf = (jbyte*)(*pEnv)->GetByteArrayElements(pEnv, byteArray, 0);
if (pBuf)
{
    // compare up to a maximum of 128 bytes:
    int result = strncmp((char*)pBuf, COMPARE_STRING, 128);
    // we only read the data, so release without copying back:
    (*pEnv)->ReleaseByteArrayElements(pEnv, byteArray, pBuf, JNI_ABORT);
}
I ended up copying the jbytearray using GetByteArrayRegion instead.
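That avoids the pin/release bookkeeping entirely. A sketch of that variant (assuming the array holds at least 128 bytes):
unsigned char data[128];
// copy the array contents into a local buffer:
(*pEnv)->GetByteArrayRegion(pEnv, byteArray, 0, 128, (jbyte*)data);
// compare only the bytes of the literal (excluding its terminating NUL):
int result = memcmp(data, COMPARE_STRING, sizeof(COMPARE_STRING) - 1);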

How to copy decoded frame from C to Android

I used ffmpeg library to decode the video and got a frame buffer data.
I want to copy the frame buffer into Android byte array (format is RGB565).
How to copy the frame buffer data from C into Android byte array?
Can anyone give me an example or some advice?
You could use java.nio.ByteBuffer for that:
ByteBuffer theVideoFrame = ByteBuffer.allocateDirect(frameSize);
...
CopyFrame(theVideoFrame);
And the native code could be something like:
JNIEXPORT void JNICALL Java_blah_blah_blah_CopyFrame(JNIEnv *ioEnv, jobject ioThis, jobject byteBuffer)
{
    // requires #include <android/log.h>
    char *buffer = (char*)ioEnv->GetDirectBufferAddress(byteBuffer);
    if (buffer == NULL) {
        __android_log_write(ANDROID_LOG_VERBOSE, "foo", "failed to get NIO buffer address");
        return;
    }
    // copy the decoded frame into the direct buffer's backing memory
    memcpy(buffer, theNativeVideoFrame, frameSize);
}
To copy the data from the ByteBuffer to a byte[] you'd then use something like:
theVideoFrame.get(byteArray);

use ffmpeg api to convert audio files. crash on avcodec_encode_audio2

From the examples I got the basic idea of this code.
However I am not sure what I am missing, as muxing.c, demuxing.c, and decoding_encoding.c all use different approaches.
The process of converting an audio file to another file should go roughly like this:
inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:

1. Can I expect a frame retrieved by calling one of the decoder functions, e.g. avcodec_decode_audio4, to hold values suitable for feeding directly into an encoder, or is the resampling step mentioned in the comment mandatory?

2. Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function open_file_for_input, there might not be a function open_file_for_output. Also there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming schemes, so it is very hard to tell whether the general approach is right, or actually an ugly mixture of techniques that were used at different version bumps of ffmpeg.

3. ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for them. Is it possible to write working code without deep knowledge of audio?
Here is my code so far; right now it crashes at avcodec_encode_audio2 and I don't know why.
int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
    jboolean isCopy;
    jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
    jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
    logging = (*env)->GetStaticIntField(env, configClass, fid);

    /// open input
    const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
    AVFormatContext* pInputCtx;
    AVStream* pInputStream;
    open_input(sourceFile, &pInputCtx, &pInputStream);

    // open output
    const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
    const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
    const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
    AVFormatContext* pOutputCtx;
    AVOutputFormat* pOutputFmt;
    AVStream* pOutputStream;
    open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

    /// decode/encode
    error = avformat_write_header(pOutputCtx, NULL);
    DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));

    AVFrame* frame = avcodec_alloc_frame();
    DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
    frame->pts = 0;

    LOGI("allocate packet");
    AVPacket pktIn;
    AVPacket pktOut;
    LOGI("done");

    int got_frame, got_packet, len, frame_count = 0;
    int64_t processed_time = 0, duration = pInputStream->duration;

    while (av_read_frame(pInputCtx, &pktIn) >= 0) {
        do {
            len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
            DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
            if (len < 0) break;
            len = FFMIN(len, pktIn.size);
            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
            LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));
            if (got_frame) {
                do {
                    av_init_packet(&pktOut);
                    pktOut.data = NULL;
                    pktOut.size = 0;
                    LOGI("encode frame");
                    DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                    DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                    DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                    LOGI("tests done");
                    len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                    LOGI("encode done");
                    DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                } while (!got_packet);
                // write packet
                LOGI("write packet");
                /* Write the compressed frame to the media file. */
                error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                av_free_packet(&pktOut);
            }
            pktIn.data += len;
            pktIn.size -= len;
        } while (pktIn.size > 0);
        av_free_packet(&pktIn);
    }

    LOGI("write trailer");
    av_write_trailer(pOutputCtx);
    LOGI("end");

    /// close resources
    avcodec_free_frame(&frame);
    avcodec_close(pInputStream->codec);
    av_free(pInputStream->codec);
    avcodec_close(pOutputStream->codec);
    av_free(pOutputStream->codec);
    avformat_close_input(&pInputCtx);
    avformat_free_context(pOutputCtx);
    return 0;
}
Meanwhile I have figured this out and written an Android Library Project that does this (for audio files): https://github.com/fscz/FFmpeg-Android
See the file /jni/audiodecoder.c for details.
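On the planar/packed question above: the conversion the demuxing.c comment refers to is usually done with libswresample. A minimal sketch matching the API generation used in the question (error handling omitted; inCodecCtx is the opened decoder context, and out is assumed to point to a packed S16 buffer large enough for frame->nb_samples per channel):
#include <libswresample/swresample.h>

SwrContext* swr = swr_alloc_set_opts(NULL,
        inCodecCtx->channel_layout, AV_SAMPLE_FMT_S16, inCodecCtx->sample_rate,      // packed output
        inCodecCtx->channel_layout, inCodecCtx->sample_fmt, inCodecCtx->sample_rate, // (possibly planar) input
        0, NULL);
swr_init(swr);
// convert one decoded frame; extended_data carries one plane per channel for planar formats
swr_convert(swr, &out, frame->nb_samples,
            (const uint8_t**)frame->extended_data, frame->nb_samples);
swr_free(&swr);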

A correct way to convert byte[] in java to unsigned char* in C++, and vice versa?

I'm a newbie in C++ and JNI, and I am trying to find the correct way to convert a byte[] in Java to an unsigned char* in C++ using JNI, and vice versa! (I'm working on Android.)
After searching Google and SO, I haven't found a good, detailed description of how to convert a byte[] from Java to C++. Please help me, and provide a solution for the reverse direction as well (unsigned char* in C++ to byte[] in Java). Thanks very much.
byte[] in java to unsigned char* in C++:
JAVA :
private static native void nativeReceiveDataFromServer(byte[] value, int length);
JNI:
... (JNIEnv* env, jobject thiz, jbyteArray array, jint array_length)
{
???
}
PS: I modified my question to make it a real question about my problem :(
You can use this to convert an unsigned char array into a jbyteArray (env here is the JNIEnv* of the current thread, passed in explicitly so the helpers are self-contained):
jbyteArray as_byte_array(JNIEnv* env, unsigned char* buf, int len) {
    jbyteArray array = env->NewByteArray(len);
    env->SetByteArrayRegion(array, 0, len, reinterpret_cast<jbyte*>(buf));
    return array;
}
And to convert the other way around:
unsigned char* as_unsigned_char_array(JNIEnv* env, jbyteArray array) {
    int len = env->GetArrayLength(array);
    unsigned char* buf = new unsigned char[len]; // caller takes ownership
    env->GetByteArrayRegion(array, 0, len, reinterpret_cast<jbyte*>(buf));
    return buf;
}
Note that buf in the second function is allocated on the heap with new[], so ownership passes to the caller, who must delete[] it, otherwise the memory leaks. An alternative is to split the function in two: one that returns the required size, and another that takes a pointer to a caller-allocated array and fills in the values.
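Putting the pieces together, the native side of nativeReceiveDataFromServer from the question could look like this (a sketch; the package/class in the symbol name is a placeholder for your own, and the processing step is up to you):
#include <jni.h>
#include <vector>

extern "C" JNIEXPORT void JNICALL
Java_com_example_Client_nativeReceiveDataFromServer(JNIEnv* env, jobject thiz,
                                                    jbyteArray array, jint length) {
    std::vector<unsigned char> buf(length);
    // copy the Java byte[] into native memory; nothing to release afterwards
    env->GetByteArrayRegion(array, 0, length, reinterpret_cast<jbyte*>(buf.data()));
    // ... process buf.data() / length bytes here ...
}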

How to send int array to Java code from C

On Android platform in my native code I have allocated an int array
mBuffer = new int[BUFSIZE];
I want to send this to Java side, Java method is this
public void WriteBuffer(int[] buffer, int size)
{
}
I call back to java code like this
const char* callback = "WriteBuffer";
mWriteMethod = env->GetMethodID(cls, callback, "([II)V");
This calls the Java method, but in my Java code the buffer is null. Since I am really passing a pointer to dynamically allocated memory rather than an actual Java array, that is probably why it doesn't work, but I don't know how to pass a pointer to Java.
I need the buffer parameter to be an integer array on the Java side anyway.
Does anyone know how I can modify the above to get it to work?
Thanks
My understanding of your question is that you want to call the Java method WriteBuffer and pass an int[] to it.
Some pseudo-code for what you will need in JNI:
jintArray buffer = (*env)->NewIntArray(env, BUFSIZE);
(*env)->SetIntArrayRegion(env, buffer, 0, BUFSIZE, mBuffer);
SetIntArrayRegion() will copy from mBuffer into the Java array.
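A fuller sketch of the call site, including the actual method invocation (obj is assumed to be a valid jobject reference to the Java instance that declares WriteBuffer):
jintArray buffer = (*env)->NewIntArray(env, BUFSIZE);
if (buffer == NULL) return; /* allocation failed */
(*env)->SetIntArrayRegion(env, buffer, 0, BUFSIZE, (const jint*)mBuffer); /* copy native ints in */
(*env)->CallVoidMethod(env, obj, mWriteMethod, buffer, (jint)BUFSIZE);    /* WriteBuffer(int[], int) */
(*env)->DeleteLocalRef(env, buffer);                                      /* drop the local reference */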
