How to copy decoded frame from C to Android

I used the ffmpeg library to decode the video and got the frame buffer data.
I want to copy the frame buffer into an Android byte array (the format is RGB565).
How do I copy the frame buffer data from C into an Android byte array?
Can anyone give me an example or some advice?

You could use java.nio.ByteBuffer for that:
ByteBuffer theVideoFrame = ByteBuffer.allocateDirect(frameSize);
...
CopyFrame(theVideoFrame);
And the native code could be something like:
JNIEXPORT void JNICALL Java_blah_blah_blah_CopyFrame(JNIEnv *ioEnv, jobject ioThis, jobject byteBuffer)
{
    // theNativeVideoFrame/frameSize stand for the decoder's RGB565 output and its size
    char *buffer = (char *) ioEnv->GetDirectBufferAddress(byteBuffer);
    if (buffer == NULL) {
        __android_log_write(ANDROID_LOG_VERBOSE, "foo", "failed to get NIO buffer address");
        return;
    }
    memcpy(buffer, theNativeVideoFrame, frameSize);
}
To copy the data from the ByteBuffer to a byte[] you'd then use something like:
theVideoFrame.get(byteArray);
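Putting the Java side together (a sketch, assuming you know the frame's width and height; CopyFrame is the native binding above), you can also blit the RGB565 buffer straight into a Bitmap:
int frameSize = width * height * 2; // RGB565 stores 2 bytes per pixel
ByteBuffer theVideoFrame = ByteBuffer.allocateDirect(frameSize);
CopyFrame(theVideoFrame); // the native code above fills the buffer
byte[] byteArray = new byte[frameSize];
theVideoFrame.rewind();
theVideoFrame.get(byteArray); // copy into the Java byte array
// or skip the byte[] and copy the pixels directly into a Bitmap:
Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
theVideoFrame.rewind();
bmp.copyPixelsFromBuffer(theVideoFrame);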

Related

How to play back a byte stream in libVLC for Android

I would like to play back a byte stream with the media player in libVLC for Android, but I can't find any interface or class where I could "inject" a byte stream. The only options for playback are providing a file descriptor, a path to a file, or a URI.
Android's native media player provides the interface setDataSource(MediaDataSource dataSource), where a byte stream can be injected by extending the class MediaDataSource. Is there a similar possibility in libVLC for Android?
The libVLC API you are looking for is libvlc_media_new_callbacks.
However, it seems it is not currently exposed to Java to be used with a Java stream parameter. This would need to be implemented by you in the libvlcjni bindings, I believe.
You could get inspiration from this existing code, which makes use of that API:
void
Java_org_videolan_libvlc_Media_nativeNewFromFdWithOffsetLength(
        JNIEnv *env, jobject thiz, jobject libVlc, jobject jfd, jlong offset, jlong length)
{
    vlcjni_object *p_obj;
    int fd = FDObject_getInt(env, jfd);

    if (fd == -1)
        return;

    p_obj = VLCJniObject_newFromJavaLibVlc(env, thiz, libVlc);
    if (!p_obj)
        return;

    p_obj->u.p_m =
        libvlc_media_new_callbacks(p_obj->p_libvlc,
                                   media_cb_open,
                                   media_cb_read,
                                   media_cb_seek,
                                   media_cb_close,
                                   p_obj);

    if (Media_nativeNewCommon(env, thiz, p_obj) == 0)
    {
        vlcjni_object_sys *p_sys = p_obj->p_sys;

        p_sys->media_cb.fd = fd;
        p_sys->media_cb.offset = offset;
        p_sys->media_cb.length = length >= 0 ? length : UINT64_MAX;
    }
}
https://github.com/videolan/vlc-android/blob/f05db3f9b51e64061ff73c794e6a7bfb44f34f65/libvlc/jni/libvlcjni-media.c#L284-L313
libvlcsharp has this implemented, including for Android platforms, but it's .NET.
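For reference, the open/read callbacks that libvlc_media_new_callbacks expects look roughly like this (a minimal sketch; stream_ctx is a hypothetical struct holding your in-memory byte stream, not part of libVLC):
static int media_open_cb(void *opaque, void **datap, uint64_t *sizep)
{
    struct stream_ctx *ctx = opaque; // hypothetical: your stream state
    ctx->pos = 0;
    *datap = ctx;       // handed to the read/seek/close callbacks
    *sizep = ctx->size; // total stream size, if known
    return 0;
}
static ssize_t media_read_cb(void *opaque, unsigned char *buf, size_t len)
{
    struct stream_ctx *ctx = opaque;
    size_t left = ctx->size - ctx->pos;
    if (left == 0)
        return 0; // end of stream
    if (len > left)
        len = left;
    memcpy(buf, ctx->data + ctx->pos, len);
    ctx->pos += len;
    return len;
}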

Converting char* to cv::Mat in NDK Android studio

I have a native C++ method that I am using to read an image called "hi.jpg". The code below finds the asset and loads the data into a char* buffer. (I've tried other methods such as imread(), but the file is not found.) I would then like to convert this data into Mat format, so I've followed some instructions to put the char* buffer into a std::vector and then use cv::imdecode to convert the data to a Mat.
JNIEXPORT jint JNICALL Java_com_example_user_application_MainActivity_generateAssets(JNIEnv* env, jobject thiz, jobject assetManager) {
    AAsset* img = NULL;
    AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
    AAssetDir* assetDir = AAssetManager_openDir(mgr, "");
    const char* filename;
    while ((filename = AAssetDir_getNextFileName(assetDir)) != NULL) {
        AAsset* asset = AAssetManager_open(mgr, filename, AASSET_MODE_UNKNOWN);
        if (strcmp(filename, "hi.jpg") == 0) {
            img = asset;
        } else {
            AAsset_close(asset); // don't leak assets we aren't keeping
        }
    }
    long sizeOfImg = AAsset_getLength(img);
    char* buffer = (char*) malloc(sizeof(char) * sizeOfImg);
    AAsset_read(img, buffer, sizeOfImg);
    // use the asset's length, not strlen(buffer): JPEG data is binary and can contain '\0'
    std::vector<char> data(buffer, buffer + sizeOfImg);
    cv::Mat dataToMat = cv::imdecode(data, IMREAD_UNCHANGED);
    return 0;
}
My problem is that I don't know how to test whether the data has been successfully converted into a Mat. How can I test this? I have run the debugger and inspected dataToMat, but it isn't making much sense.
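One quick sanity check (a sketch, assuming OpenCV and <android/log.h> are available): cv::Mat::empty() reports whether imdecode produced any data, and the decoded dimensions can be logged to logcat:
if (dataToMat.empty()) {
    __android_log_print(ANDROID_LOG_ERROR, "assets", "imdecode failed");
} else {
    __android_log_print(ANDROID_LOG_INFO, "assets", "decoded %dx%d, %d channels",
                        dataToMat.cols, dataToMat.rows, dataToMat.channels());
}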

Comparing a jbytearray with a string in JNI

I have a JNI C function that has a jbyteArray input parameter. This is a byte array of size 128 that I wish to compare with a #define string. How do I achieve this?
I tried to memcpy the jbyteArray to an unsigned char data[128] and then do a memcmp() of data and the #define, but the memcpy crashed my app.
Thanks.
You can use GetByteArrayElements() to get the byte array contents and then compare using strncmp, memcmp, or whatever:
#define COMPARE_STRING "somestring" // can be up to 128 bytes long
// JNIEnv *pEnv
// jbyteArray byteArray
// get the byte array contents:
jbyte* pBuf = (*pEnv)->GetByteArrayElements(pEnv, byteArray, NULL);
if (pBuf)
{
    // compare up to a maximum of 128 bytes:
    int result = strncmp((char*)pBuf, COMPARE_STRING, 128);
    // release the elements without copying back; the data was only read
    (*pEnv)->ReleaseByteArrayElements(pEnv, byteArray, pBuf, JNI_ABORT);
}
I ended up copying the jbytearray using GetByteArrayRegion instead.
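For the record, the GetByteArrayRegion variant looks roughly like this (a sketch; it copies into a stack buffer, so nothing needs to be released afterwards):
unsigned char data[128];
(*pEnv)->GetByteArrayRegion(pEnv, byteArray, 0, sizeof(data), (jbyte*)data);
int result = memcmp(data, COMPARE_STRING, strlen(COMPARE_STRING));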

use ffmpeg api to convert audio files. crash on avcodec_encode_audio2

From the examples I got the basic idea of this code. However, I am not sure what I am missing, as muxing.c, demuxing.c and decoding_encoding.c all use different approaches.
The process of converting an audio file to another file should go roughly like this:
inputfile -demux-> audiostream -read-> inPackets -decode2frames-> frames -encode2packets-> outPackets -write-> audiostream -mux-> outputfile
However I found the following comment in demuxing.c:
/* Write the raw audio data samples of the first plane. This works
* fine for packed formats (e.g. AV_SAMPLE_FMT_S16). However,
* most audio decoders output planar audio, which uses a separate
* plane of audio samples for each channel (e.g. AV_SAMPLE_FMT_S16P).
* In other words, this code will write only the first audio channel
* in these cases.
* You should use libswresample or libavfilter to convert the frame
* to packed data. */
My questions about this are:
1. Can I expect a frame that was retrieved by calling one of the decoder functions, e.g. avcodec_decode_audio4, to hold suitable values to put directly into an encoder, or is the resampling step mentioned in the comment mandatory?
2. Am I taking the right approach? ffmpeg is very asymmetric, i.e. if there is a function open_file_for_input there might not be a function open_file_for_output. Also there are different versions of many functions (avcodec_decode_audio[1-4]) and different naming schemes, so it's very hard to tell whether the general approach is right, or actually an ugly mixture of techniques that were used at different version bumps of ffmpeg.
3. ffmpeg uses a lot of specific terms, like 'planar sampling' or 'packed format', and I am having a hard time finding definitions for them. Is it possible to write working code without deep knowledge of audio?
Here is my code so far, which right now crashes at avcodec_encode_audio2, and I don't know why.
int Java_com_fscz_ffmpeg_Audio_convert(JNIEnv * env, jobject this, jstring jformat, jstring jcodec, jstring jsource, jstring jdest) {
    jboolean isCopy;
    jclass configClass = (*env)->FindClass(env, "com.fscz.ffmpeg.Config");
    jfieldID fid = (*env)->GetStaticFieldID(env, configClass, "ffmpeg_logging", "I");
    logging = (*env)->GetStaticIntField(env, configClass, fid);

    /// open input
    const char* sourceFile = (*env)->GetStringUTFChars(env, jsource, &isCopy);
    AVFormatContext* pInputCtx;
    AVStream* pInputStream;
    open_input(sourceFile, &pInputCtx, &pInputStream);

    // open output
    const char* destFile = (*env)->GetStringUTFChars(env, jdest, &isCopy);
    const char* cformat = (*env)->GetStringUTFChars(env, jformat, &isCopy);
    const char* ccodec = (*env)->GetStringUTFChars(env, jcodec, &isCopy);
    AVFormatContext* pOutputCtx;
    AVOutputFormat* pOutputFmt;
    AVStream* pOutputStream;
    open_output(cformat, ccodec, destFile, &pOutputCtx, &pOutputFmt, &pOutputStream);

    /// decode/encode
    error = avformat_write_header(pOutputCtx, NULL);
    DIE_IF_LESS_ZERO(error, "error writing output stream header to file: %s, error: %s", destFile, e2s(error));

    AVFrame* frame = avcodec_alloc_frame();
    DIE_IF_UNDEFINED(frame, "Could not allocate audio frame");
    frame->pts = 0;

    LOGI("allocate packet");
    AVPacket pktIn;
    AVPacket pktOut;
    LOGI("done");

    int got_frame, got_packet, len, frame_count = 0;
    int64_t processed_time = 0, duration = pInputStream->duration;

    while (av_read_frame(pInputCtx, &pktIn) >= 0) {
        do {
            len = avcodec_decode_audio4(pInputStream->codec, frame, &got_frame, &pktIn);
            DIE_IF_LESS_ZERO(len, "Error decoding frame: %s", e2s(len));
            if (len < 0) break;
            len = FFMIN(len, pktIn.size);
            size_t unpadded_linesize = frame->nb_samples * av_get_bytes_per_sample(frame->format);
            LOGI("audio_frame n:%d nb_samples:%d pts:%s\n", frame_count++, frame->nb_samples, av_ts2timestr(frame->pts, &(pInputStream->codec->time_base)));
            if (got_frame) {
                do {
                    av_init_packet(&pktOut);
                    pktOut.data = NULL;
                    pktOut.size = 0;
                    LOGI("encode frame");
                    DIE_IF_UNDEFINED(pOutputStream->codec, "no output codec");
                    DIE_IF_UNDEFINED(frame->nb_samples, "no nb samples");
                    DIE_IF_UNDEFINED(pOutputStream->codec->internal, "no internal");
                    LOGI("tests done");
                    len = avcodec_encode_audio2(pOutputStream->codec, &pktOut, frame, &got_packet);
                    LOGI("encode done");
                    DIE_IF_LESS_ZERO(len, "Error (re)encoding frame: %s", e2s(len));
                } while (!got_packet);
                // write packet;
                LOGI("write packet");
                /* Write the compressed frame to the media file. */
                error = av_interleaved_write_frame(pOutputCtx, &pktOut);
                DIE_IF_LESS_ZERO(error, "Error while writing audio frame: %s", e2s(error));
                av_free_packet(&pktOut);
            }
            pktIn.data += len;
            pktIn.size -= len;
        } while (pktIn.size > 0);
        av_free_packet(&pktIn);
    }

    LOGI("write trailer");
    av_write_trailer(pOutputCtx);
    LOGI("end");

    /// close resources
    avcodec_free_frame(&frame);
    avcodec_close(pInputStream->codec);
    av_free(pInputStream->codec);
    avcodec_close(pOutputStream->codec);
    av_free(pOutputStream->codec);
    avformat_close_input(&pInputCtx);
    avformat_free_context(pOutputCtx);
    return 0;
}
Meanwhile I have figured this out and written an Android Library Project that does this (for audio files): https://github.com/fscz/FFmpeg-Android
See the file /jni/audiodecoder.c for details.
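On the planar/packed terminology: packed means the samples of all channels are interleaved in a single buffer (LRLRLR...), while planar means each channel has its own buffer (frame->extended_data[ch]). When the decoder's sample format doesn't match what the encoder expects, a conversion step is required; a minimal libswresample sketch of that step, assuming stereo float-planar input and packed S16 output at 44100 Hz (error handling omitted):
SwrContext *swr = swr_alloc_set_opts(NULL,
        AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_S16, 44100,  /* output */
        AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_FLTP, 44100, /* input */
        0, NULL);
swr_init(swr);
uint8_t *out_buf;
av_samples_alloc(&out_buf, NULL, 2, frame->nb_samples, AV_SAMPLE_FMT_S16, 0);
swr_convert(swr, &out_buf, frame->nb_samples,
            (const uint8_t **) frame->extended_data, frame->nb_samples);
/* feed out_buf to the encoder here, then clean up */
av_freep(&out_buf);
swr_free(&swr);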

How to convert char[] to ByteBuffer in JNI?

I want to pass a ByteBuffer over JNI to C++ as the buffer that receives an image decoded by AVDecode. The buffer is correctly filled in C++, but the ByteBuffer on the Java side is still empty.
Please help me find out where the error is. Thanks.
pOutBuffer is the ByteBuffer passed via JNI.
jclass ByteBufferClass = env->GetObjectClass(pOutBuffer);
jmethodID ArraryMethodId = env->GetMethodID(ByteBufferClass, "array", "()[B");
jmethodID ClearMethodId = env->GetMethodID(ByteBufferClass, "clear", "()Ljava/nio/Buffer;");

// clear buffer
env->CallObjectMethod(pOutBuffer, ClearMethodId);
jbyteArray OutByteArrary = (jbyteArray) env->CallObjectMethod(pOutBuffer, ArraryMethodId);
jbyte* OutJbyte = env->GetByteArrayElements(OutByteArrary, 0);
Out = (unsigned char*) OutJbyte;
DecodeSize = AVDecode(m_pVideoDecode, (unsigned char*) In, inputSize, (unsigned char**) &Out, (int*) &pBFrameKey);
The decoding is correct and I can see that 'Out' is filled with the output image; however, when this function returns, the pOutBuffer on the Java side is still empty.
How was the ByteBuffer created? Is it a direct or non-direct ByteBuffer?
If it's a direct ByteBuffer, created in Java using the allocateDirect method, you can use GetDirectBufferAddress in your native code to get the direct address of the ByteBuffer, and any changes made there should be reflected in Java.
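Applied to the code above, that would look roughly like this (a sketch, assuming pOutBuffer was created with ByteBuffer.allocateDirect(); In, inputSize, m_pVideoDecode and pBFrameKey are as in the original code):
// write straight into the buffer's backing memory; no array()/GetByteArrayElements
// calls and no copy-back step are needed
unsigned char* Out = (unsigned char*) env->GetDirectBufferAddress(pOutBuffer);
if (Out != NULL) {
    DecodeSize = AVDecode(m_pVideoDecode, (unsigned char*) In, inputSize,
                          (unsigned char**) &Out, (int*) &pBFrameKey);
}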
