Using Facemark from OpenCV contrib in Android native C++

I am new to OpenCV, and I am trying to use Facemark from the OpenCV contrib modules in my Android native C++ app. However, I am getting the error
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x1788 in tid
21567(my_app)
when creating an instance of Facemark using
Ptr<Facemark> facemark = FacemarkLBF::create();
I am using the OpenCV library from https://github.com/chaoyangnz/opencv3-android-sdk-with-contrib.
Here is my implementation:
C++
void
Java_com_makeover_makeover_1opencv_MainActivity_nativeDetectFaceLandmarks(
JNIEnv *env,
jobject , jlong srcAddr, jlong retAddr,
jstring faceCascadePath, jstring faceYamlPath)
{
const char *faceCascadeFile = env->GetStringUTFChars(faceCascadePath,NULL);
const char *yamlFile = env->GetStringUTFChars(faceYamlPath,NULL);
LOGI("nativeDetectFace called");
string cascadePath(faceCascadeFile);
LOGI("nativeDetectFace called");
string yamlPath(yamlFile);
Mat& colorMat = *(Mat*)srcAddr;
Mat& retValMat = *(Mat*)retAddr;
Mat gray;
// Load Face Detector
CascadeClassifier faceDetector(cascadePath);
LOGI("cascade file loaded");
// Create an instance of Facemark
Ptr<Facemark> facemark = FacemarkLBF::create();
LOGI("face instance created");
// Load landmark detector
facemark->loadModel(yamlPath);
LOGI("yalm model loaded");
// Find face
vector<Rect> faces;
// Convert frame to grayscale because
// faceDetector requires grayscale image.
cvtColor(colorMat, gray, COLOR_BGR2GRAY);
// Detect faces
faceDetector.detectMultiScale(gray, faces);
// Variable for landmarks.
// Landmarks for one face is a vector of points
// There can be more than one face in the image. Hence, we
// use a vector of vector of points.
vector< vector<Point2f> > landmarks;
// Run landmark detector
bool success = facemark->fit(colorMat,faces,landmarks);
if(success)
{
// If successful, render the landmarks on the face
for(int i = 0; i < landmarks.size(); i++)
{
drawLandmarks(colorMat, landmarks[i]);
}
}
}
Java implementation:
drawFaces.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
Mat colorMat,grayMat;
colorMat = new Mat();
grayMat = new Mat();
Utils.bitmapToMat(bmp,colorMat);
nativeDetectFaceLandmarks(colorMat.getNativeObjAddr(), grayMat.getNativeObjAddr(),
getCascade("face"),getCascade("yaml"));
Bitmap new_bmp2 = Bitmap.createBitmap(bmp);
Utils.matToBitmap(colorMat,new_bmp2);
img_face.setImageBitmap(new_bmp2);
}
});
getCascade method
public String getCascade(String cascadeType){
String fileName;
File mCascadeFile;
final InputStream is;
FileOutputStream os;
switch (cascadeType){
case "mouth":
fileName="haarcascade_mcs_mouth.xml";
break;
case "face":
fileName = "haarcascade_frontalface_alt2.xml";
break;
case "right_eye":
fileName = "haarcascade_mcs_righteye.xml";
break;
case "yaml":
fileName = "lbfmodel.yaml";
break;
case "left_eye":
fileName = "haarcascade_mcs_lefteye.xml";
break;
default:
fileName = null;
}
if(fileName==null) {
return null;
}
try {
is = getResources().getAssets().open(fileName);
File cascadeDir = getDir("cascade", Context.MODE_PRIVATE);
mCascadeFile = new File(cascadeDir,fileName);
os = new FileOutputStream(mCascadeFile);
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = is.read(buffer)) != -1) {
os.write(buffer, 0, bytesRead);
}
is.close();
os.close();
Log.i("TAG", "getCascade: face cascade found");
return mCascadeFile.getAbsolutePath();
} catch (IOException e) {
Log.e("TAG", "face cascade not found", e);
return null;
}
}
Does anyone know what I am doing wrong, or a better way to use Facemark from the OpenCV contrib modules in Android native code?

Every tutorial I have seen so far implements this differently, so to clear things up I will go through what works for me.
First, when declaring the facemark, the method below returns the SIGSEGV error:
Ptr<Facemark> facemark = FacemarkLBF::create();
Instead use:
Ptr<Facemark> facemark = createFacemarkLBF();
Second, I have seen many tutorials use:
facemark->load(filename);
But the correct syntax would be:
facemark->loadModel(filename);
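Putting both fixes together, here is a rough minimal sketch (not the exact code from the question; it assumes the cv::face headers from the contrib SDK and uses the contrib helper drawFacemarks() instead of the question's drawLandmarks() helper):
#include <opencv2/face.hpp>
#include <opencv2/objdetect.hpp>
#include <opencv2/imgproc.hpp>
#include <string>
#include <vector>

using namespace cv;
using namespace cv::face;

// Detect faces and landmarks on a BGR Mat and draw them in place.
void detectLandmarks(Mat &colorMat, const std::string &cascadePath, const std::string &modelPath)
{
    CascadeClassifier faceDetector(cascadePath);

    // Factory function instead of FacemarkLBF::create()
    Ptr<Facemark> facemark = createFacemarkLBF();
    // loadModel(), not load()
    facemark->loadModel(modelPath);

    Mat gray;
    cvtColor(colorMat, gray, COLOR_BGR2GRAY);

    std::vector<Rect> faces;
    faceDetector.detectMultiScale(gray, faces);

    std::vector<std::vector<Point2f>> landmarks;
    if (facemark->fit(colorMat, faces, landmarks)) {
        for (size_t i = 0; i < landmarks.size(); i++)
            drawFacemarks(colorMat, landmarks[i], Scalar(0, 0, 255));
    }
}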
If you still have the same issue, follow the link below and download the latest SDK with contrib:
https://pullrequest.opencv.org/buildbot/export/opencv_releases/


How to parse a zipped file completely from RAM?

Background
I need to parse some zip files of various types (getting some inner files content for one purpose or another, including getting their names).
Some of the files are not reachable via a file path, since Android uses a Uri to reach them, and sometimes the zip file is inside another zip file. With the push to use SAF, using a file path is even less possible in some cases.
For this, we have two main ways to handle zips: the ZipFile class and the ZipInputStream class.
The problem
When we have a file-path, ZipFile is a perfect solution. It's also very efficient in terms of speed.
However, for the rest of the cases, ZipInputStream can run into issues, such as this one with a problematic zip file, causing this exception:
java.util.zip.ZipException: only DEFLATED entries can have EXT descriptor
at java.util.zip.ZipInputStream.readLOC(ZipInputStream.java:321)
at java.util.zip.ZipInputStream.getNextEntry(ZipInputStream.java:124)
What I've tried
The only always-working solution is to copy the file somewhere else where you can parse it using ZipFile, but this is inefficient, requires free storage, and means removing the file when you are done with it.
So, what I've found is that Apache has a nice, pure Java library (here) to parse Zip files, and for some reason its InputStream solution (called "ZipArchiveInputStream") seems even more efficient than the native ZipInputStream class.
As opposed to what we have in the native framework, the library offers a bit more flexibility. I could, for example, load the entire zip file into a byte array and let the library handle it as usual, and this works even for the problematic Zip files I've mentioned:
org.apache.commons.compress.archivers.zip.ZipFile(SeekableInMemoryByteChannel(byteArray)).use { zipFile ->
for (entry in zipFile.entries) {
val name = entry.name
... // use the zipFile like you do with native framework
}
}
gradle dependency:
// http://commons.apache.org/proper/commons-compress/ https://mvnrepository.com/artifact/org.apache.commons/commons-compress
implementation 'org.apache.commons:commons-compress:1.20'
Sadly, this isn't always possible, because it depends on the heap being able to hold the entire zip file, and on Android this is even more limited, because the heap size can be relatively small (the heap could be 100MB while the file is 200MB). Unlike a PC, where a huge heap can be configured, on Android it's not flexible at all.
So, I searched for a solution that uses JNI instead, loading the entire ZIP file into a byte array there, not on the heap (at least not entirely). This could be a nicer workaround, because if the ZIP fits in the device's RAM instead of the heap, it could prevent me from reaching OOM while also not needing an extra file.
I've found this library called "larray" which seems promising, but sadly when I tried using it, it crashed, because it requires a full JVM, meaning it is not suitable for Android.
EDIT: Seeing that I can't find any library or built-in class, I tried to use JNI myself. Sadly I'm very rusty with it, so I looked at an old repository I made a long time ago for performing some operations on Bitmaps (here). This is what I came up with:
native-lib.cpp
#include <jni.h>
#include <android/log.h>
#include <cstdio>
#include <android/bitmap.h>
#include <cstring>
#include <unistd.h>
class JniBytesArray {
public:
uint32_t *_storedData;
JniBytesArray() {
_storedData = NULL;
}
};
extern "C" {
JNIEXPORT jobject JNICALL Java_com_lb_myapplication_JniByteArrayHolder_allocate(
JNIEnv *env, jobject obj, jlong size) {
auto *jniBytesArray = new JniBytesArray();
auto *array = new uint32_t[size];
for (int i = 0; i < size; ++i)
array[i] = 0;
jniBytesArray->_storedData = array;
return env->NewDirectByteBuffer(jniBytesArray, 0);
}
}
JniByteArrayHolder.kt
class JniByteArrayHolder {
external fun allocate(size: Long): ByteBuffer
companion object {
init {
System.loadLibrary("native-lib")
}
}
}
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
thread {
printMemStats()
val jniByteArrayHolder = JniByteArrayHolder()
val byteBuffer = jniByteArrayHolder.allocate(1L * 1024L)
printMemStats()
}
}
fun printMemStats() {
val memoryInfo = ActivityManager.MemoryInfo()
(getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager).getMemoryInfo(memoryInfo)
val nativeHeapSize = memoryInfo.totalMem
val nativeHeapFreeSize = memoryInfo.availMem
val usedMemInBytes = nativeHeapSize - nativeHeapFreeSize
val usedMemInPercentage = usedMemInBytes * 100 / nativeHeapSize
Log.d("AppLog", "total:${Formatter.formatFileSize(this, nativeHeapSize)} " +
"free:${Formatter.formatFileSize(this, nativeHeapFreeSize)} " +
"used:${Formatter.formatFileSize(this, usedMemInBytes)} ($usedMemInPercentage%)")
}
This doesn't seem right, because if I try to create a 1GB byte array using jniByteArrayHolder.allocate(1L * 1024L * 1024L * 1024L), it crashes without any exception or error logs.
The questions
Is it possible to use JNI for Apache's library, so that it will handle the ZIP file content which is contained within JNI's "world"?
If so, how can I do it? Is there any sample of how to do it? Is there a class for it? Or do I have to implement it myself? If so, can you please show how it's done in JNI?
If it's not possible, what other way is there to do it? Maybe alternative to what Apache has?
As for the JNI solution, why doesn't it work well? How could I efficiently copy the bytes from the stream into the JNI byte array (my guess is that it would be via a buffer)?
I took a look at the JNI code you posted and made a couple of changes, mostly defining the size argument for NewDirectByteBuffer and using malloc().
Here is the log output after allocating 800 MB:
D/AppLog: total:1.57 GB free:1.03 GB used:541 MB (34%)
D/AppLog: total:1.57 GB free:247 MB used:1.32 GB (84%)
After the allocation, the debugger reports a buffer limit of 800 MB, which is what we expect.
My C is very rusty, so I am sure that there is some work to be done. I have updated the code to be a little more robust and to allow for the freeing of memory.
native-lib.cpp
extern "C" {
static jbyteArray *_holdBuffer = NULL;
static jobject _directBuffer = NULL;
/*
This routine is not re-entrant and can handle only one buffer at a time. If a buffer is
allocated then it must be released before the next one is allocated.
*/
JNIEXPORT
jobject JNICALL Java_com_example_zipfileinmemoryjni_JniByteArrayHolder_allocate(
JNIEnv *env, jobject obj, jlong size) {
if (_holdBuffer != NULL || _directBuffer != NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Call to JNI allocate() before freeBuffer()");
return NULL;
}
// Max size for a direct buffer is the max of a jint even though NewDirectByteBuffer takes a
// long. Clamp max size as follows:
if (size > SIZE_T_MAX || size > INT_MAX || size <= 0) {
jlong maxSize = SIZE_T_MAX < INT_MAX ? SIZE_T_MAX : INT_MAX;
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Native memory allocation request must be >0 and <= %lld but was %lld.\n",
maxSize, size);
return NULL;
}
jbyteArray *array = (jbyteArray *) malloc(static_cast<size_t>(size));
if (array == NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Failed to allocate %lld bytes of native memory.\n",
size);
return NULL;
}
jobject directBuffer = env->NewDirectByteBuffer(array, size);
if (directBuffer == NULL) {
free(array);
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Failed to create direct buffer of size %lld.\n",
size);
return NULL;
}
// memset() is not really needed but we call it here to force Android to count
// the consumed memory in the stats since it only seems to "count" dirty pages. (?)
memset(array, 0xFF, static_cast<size_t>(size));
_holdBuffer = array;
// Get a global reference to the direct buffer so Java isn't tempted to GC it.
_directBuffer = env->NewGlobalRef(directBuffer);
return directBuffer;
}
JNIEXPORT void JNICALL Java_com_example_zipfileinmemoryjni_JniByteArrayHolder_freeBuffer(
JNIEnv *env, jobject obj, jobject directBuffer) {
if (_directBuffer == NULL || _holdBuffer == NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Attempt to free unallocated buffer.");
return;
}
jbyteArray *bufferLoc = (jbyteArray *) env->GetDirectBufferAddress(directBuffer);
if (bufferLoc == NULL) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"Failed to retrieve direct buffer location associated with ByteBuffer.");
return;
}
if (bufferLoc != _holdBuffer) {
__android_log_print(ANDROID_LOG_ERROR, "JNI Routine",
"DirectBuffer does not match that allocated.");
return;
}
// Free the malloc'ed buffer and the global reference. Java can not GC the direct buffer.
free(bufferLoc);
env->DeleteGlobalRef(_directBuffer);
_holdBuffer = NULL;
_directBuffer = NULL;
}
}
I also updated the array holder:
class JniByteArrayHolder {
external fun allocate(size: Long): ByteBuffer
external fun freeBuffer(byteBuffer: ByteBuffer)
companion object {
init {
System.loadLibrary("native-lib")
}
}
}
I can confirm that this code along with the ByteBufferChannel class provided by Botje here works for Android versions before API 24. The SeekableByteChannel interface was introduced in API 24 and is needed by the ZipFile utility.
The maximum buffer size that can be allocated is the size of a jint and is due to the limitation of JNI. Larger data can be accommodated (if available) but would require multiple buffers and a way to handle them.
Here is the main activity for the sample app. An earlier version assumed that the InputStream read buffer was always filled and errored out when trying to put it into the ByteBuffer. This has been fixed.
MainActivity.kt
class MainActivity : AppCompatActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
}
fun onClick(view: View) {
button.isEnabled = false
status.text = getString(R.string.running)
thread {
printMemStats("Before buffer allocation:")
var bufferSize = 0L
// testzipfile.zip is not part of the project but any zip can be uploaded through the
// device file manager or adb to test.
val fileToRead = "$filesDir/testzipfile.zip"
val inStream =
if (File(fileToRead).exists()) {
FileInputStream(fileToRead).apply {
bufferSize = getFileSize(this)
close()
}
FileInputStream(fileToRead)
} else {
// If testzipfile.zip doesn't exist, we will just look at this one which
// is part of the APK.
resources.openRawResource(R.raw.appapk).apply {
bufferSize = getFileSize(this)
close()
}
resources.openRawResource(R.raw.appapk)
}
// Allocate the buffer in native memory (off-heap).
val jniByteArrayHolder = JniByteArrayHolder()
val byteBuffer =
if (bufferSize != 0L) {
jniByteArrayHolder.allocate(bufferSize)?.apply {
printMemStats("After buffer allocation")
}
} else {
null
}
if (byteBuffer == null) {
Log.d("Applog", "Failed to allocate $bufferSize bytes of native memory.")
} else {
Log.d("Applog", "Allocated ${Formatter.formatFileSize(this, bufferSize)} buffer.")
val inBytes = ByteArray(4096)
Log.d("Applog", "Starting buffered read...")
while (inStream.available() > 0) {
byteBuffer.put(inBytes, 0, inStream.read(inBytes))
}
inStream.close()
byteBuffer.flip()
ZipFile(ByteBufferChannel(byteBuffer)).use {
Log.d("Applog", "Starting Zip file name dump...")
for (entry in it.entries) {
Log.d("Applog", "Zip name: ${entry.name}")
val zis = it.getInputStream(entry)
while (zis.available() > 0) {
zis.read(inBytes)
}
}
}
printMemStats("Before buffer release:")
jniByteArrayHolder.freeBuffer(byteBuffer)
printMemStats("After buffer release:")
}
runOnUiThread {
status.text = getString(R.string.idle)
button.isEnabled = true
Log.d("Applog", "Done!")
}
}
}
/*
This function is a little misleading since it does not reflect the true status of memory.
After native buffer allocation, it waits until the memory is used before counting is as
used. After release, it doesn't seem to count the memory as released until garbage
collection. (My observations only.) Also, see the comment for memset() in native-lib.cpp
which is a member of this project.
*/
private fun printMemStats(desc: String? = null) {
val memoryInfo = ActivityManager.MemoryInfo()
(getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager).getMemoryInfo(memoryInfo)
val nativeHeapSize = memoryInfo.totalMem
val nativeHeapFreeSize = memoryInfo.availMem
val usedMemInBytes = nativeHeapSize - nativeHeapFreeSize
val usedMemInPercentage = usedMemInBytes * 100 / nativeHeapSize
val sDesc = desc?.run { "$this:\n" }
Log.d(
"AppLog", "$sDesc total:${Formatter.formatFileSize(this, nativeHeapSize)} " +
"free:${Formatter.formatFileSize(this, nativeHeapFreeSize)} " +
"used:${Formatter.formatFileSize(this, usedMemInBytes)} ($usedMemInPercentage%)"
)
}
// Not a great way to do this but not the object of the demo.
private fun getFileSize(inStream: InputStream): Long {
var bufferSize = 0L
while (inStream.available() > 0) {
val toSkip = inStream.available().toLong()
inStream.skip(toSkip)
bufferSize += toSkip
}
return bufferSize
}
}
A sample GitHub repository is here.
You can steal LWJGL's native memory management functions. It is BSD3 licensed, so you only have to mention somewhere that you are using code from it.
Step 1: given an InputStream is and a file size ZIP_SIZE, slurp the stream into a direct byte buffer created by LWJGL's org.lwjgl.system.MemoryUtil helper class:
ByteBuffer bb = MemoryUtil.memAlloc(ZIP_SIZE);
byte[] buf = new byte[4096]; // Play with the buffer size to see what works best
int read = 0;
while ((read = is.read(buf)) != -1) {
bb.put(buf, 0, read);
}
Step 2: wrap the ByteBuffer in a ByteChannel.
Taken from this gist. You possibly want to strip the writing parts out.
package io.github.ncruces.utils;
import java.nio.ByteBuffer;
import java.nio.channels.NonWritableChannelException;
import java.nio.channels.SeekableByteChannel;
import static java.lang.Math.min;
public final class ByteBufferChannel implements SeekableByteChannel {
private final ByteBuffer buf;
public ByteBufferChannel(ByteBuffer buffer) {
if (buffer == null) throw new NullPointerException();
buf = buffer;
}
@Override
public synchronized int read(ByteBuffer dst) {
if (buf.remaining() == 0) return -1;
int count = min(dst.remaining(), buf.remaining());
if (count > 0) {
ByteBuffer tmp = buf.slice();
tmp.limit(count);
dst.put(tmp);
buf.position(buf.position() + count);
}
return count;
}
@Override
public synchronized int write(ByteBuffer src) {
if (buf.isReadOnly()) throw new NonWritableChannelException();
int count = min(src.remaining(), buf.remaining());
if (count > 0) {
ByteBuffer tmp = src.slice();
tmp.limit(count);
buf.put(tmp);
src.position(src.position() + count);
}
return count;
}
@Override
public synchronized long position() {
return buf.position();
}
@Override
public synchronized ByteBufferChannel position(long newPosition) {
if ((newPosition | Integer.MAX_VALUE - newPosition) < 0) throw new IllegalArgumentException();
buf.position((int)newPosition);
return this;
}
@Override
public synchronized long size() { return buf.limit(); }
@Override
public synchronized ByteBufferChannel truncate(long size) {
if ((size | Integer.MAX_VALUE - size) < 0) throw new IllegalArgumentException();
int limit = buf.limit();
if (limit > size) buf.limit((int)size);
return this;
}
@Override
public boolean isOpen() { return true; }
@Override
public void close() {}
}
Step 3: Use ZipFile as before:
ZipFile zf = new ZipFile(new ByteBufferChannel(bb));
for (ZipArchiveEntry ze : Collections.list(zf.getEntries())) {
...
}
Step 4: Manually release the native buffer (preferably in a finally block):
MemoryUtil.memFree(bb);

Passing FFmpeg AVFrame data from C++ to Java

I need to pass the FFMPEG 'raw' data back to my JAVA code in order to display it on the screen.
I have a native method that deals with FFmpeg and afterwards calls a method in Java that takes byte[] (so far) as an argument.
The byte array that is passed can be read by Java, but BitmapFactory.decodeByteArray(bitmap, 0, bitmap.length); returns null. I have printed out the array and I get 200k elements (which is expected), but it cannot be decoded. So far what I'm doing is taking the data from AVFrame->data, casting it to unsigned char *, and then casting that to jbyteArray. After all the casting, I pass the jbyteArray as an argument to my Java method. Is there something I'm missing here? Why won't BitmapFactory decode the array into an image for display?
EDIT 1.0
Currently I am trying to obtain my image via
public void setImage(ByteBuffer bmp) {
bmp.rewind();
Bitmap bitmap = Bitmap.createBitmap(1920, 1080, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(bmp);
runOnUiThread(() -> {
ImageView imgViewer = findViewById(R.id.mSurfaceView);
imgViewer.setImageBitmap(bitmap);
});
}
But I keep getting an exception
JNI DETECTED ERROR IN APPLICATION: JNI NewDirectByteBuffer called with pending exception java.lang.RuntimeException: Buffer not large enough for pixels
at void android.graphics.Bitmap.copyPixelsFromBuffer(java.nio.Buffer) (Bitmap.java:657)
at void com.example.asmcpp.MainActivity.setSurfaceImage(java.nio.ByteBuffer)
Edit 1.1
So, here is the full code that executes every time a frame comes in. Note that the ByteBuffer is created and passed from within this method:
void VideoClientInterface::onEncodedFrame(video::encoded_frame_t &encodedFrame) {
AVFrame *filt_frame = av_frame_alloc();
auto frame = std::shared_ptr<video::encoded_frame_t>(new video::encoded_frame_t,
[](video::encoded_frame_t *p) { if (p) delete p; });
if (frame) {
frame->size = encodedFrame.size;
frame->ssrc = encodedFrame.ssrc;
frame->width = encodedFrame.width;
frame->height = encodedFrame.height;
frame->dataType = encodedFrame.dataType;
frame->timestamp = encodedFrame.timestamp;
frame->frameIndex = encodedFrame.frameIndex;
frame->isKeyFrame = encodedFrame.isKeyFrame;
frame->isDroppable = encodedFrame.isDroppable;
frame->data = new char[frame->size];
if (frame->data) {
memcpy(frame->data, encodedFrame.data, frame->size);
AVPacket packet;
av_init_packet(&packet);
packet.dts = AV_NOPTS_VALUE;
packet.pts = encodedFrame.timestamp;
packet.data = (uint8_t *) encodedFrame.data;
packet.size = encodedFrame.size;
int ret = avcodec_send_packet(m_avCodecContext, &packet);
if (ret == 0) {
ret = avcodec_receive_frame(m_avCodecContext, m_avFrame);
if (ret == 0) {
m_transform = sws_getCachedContext(
m_transform, // previous context ptr
m_avFrame->width, m_avFrame->height, AV_PIX_FMT_YUV420P, // src
m_avFrame->width, m_avFrame->height, AV_PIX_FMT_RGB24, // dst
SWS_BILINEAR, nullptr, nullptr, nullptr // options
);
auto decodedFrame = std::make_shared<video::decoded_frame_t>();
decodedFrame->width = m_avFrame->width;
decodedFrame->height = m_avFrame->height;
decodedFrame->size = m_avFrame->width * m_avFrame->height * 3;
decodedFrame->timeStamp = m_avFrame->pts;
decodedFrame->data = new unsigned char[decodedFrame->size];
if (decodedFrame->data) {
uint8_t *dstSlice[] = {decodedFrame->data,
0,
0};// outFrame.bits(), outFrame.bits(), outFrame.bits()
const int dstStride[] = {decodedFrame->width * 3, 0, 0};
sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
0, m_avFrame->height, dstSlice, dstStride);
auto m_rawData = decodedFrame->data;
auto len = strlen(reinterpret_cast<char *>(m_rawData));
if (frameCounter == 10) {
jobject newArray = GetJniEnv()->NewDirectByteBuffer(m_rawData, len);
GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, newArray);
frameCounter = 0;
}
frameCounter++;
}
} else {
av_packet_unref(&packet);
}
} else {
av_packet_unref(&packet);
}
}
}
}
I am not entirely sure I am even doing that part correctly. If you see any errors in this, feel free to point them out.
You cannot cast native byte arrays to jbyteArray and expect it to work. A byte[] is an actual object with a length field, a reference count, and so on.
Use NewDirectByteBuffer instead to wrap your native buffer in a Java ByteBuffer, which the Java side can then read the pixel data from.
Note that this JNI operation is relatively expensive, so if you expect to do this on a per-frame basis, you might want to pre-allocate some bytebuffers and tell FFmpeg to write directly into those buffers.
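As a rough sketch of that idea (not part of the original answer; the member names m_rgbBuffer, m_rgbBufferSize and m_javaBuffer and the method ensureFrameBuffer() are hypothetical, while GetJniEnv(), m_obj and setSurfaceImage come from the question's code), you could allocate the RGB buffer once, wrap it in a direct ByteBuffer once, and have sws_scale() write into the same memory every frame:
// Allocate or refresh a reusable native frame buffer and its Java-visible wrapper.
void VideoClientInterface::ensureFrameBuffer(JNIEnv *env, int width, int height) {
    size_t needed = (size_t) width * height * 3;          // RGB24
    if (m_rgbBuffer == nullptr || m_rgbBufferSize < needed) {
        delete[] m_rgbBuffer;
        m_rgbBuffer = new uint8_t[needed];
        m_rgbBufferSize = needed;
        if (m_javaBuffer != nullptr) {
            env->DeleteGlobalRef(m_javaBuffer);
        }
        // Wrap the native buffer exactly once; Java sees the same memory every frame.
        jobject local = env->NewDirectByteBuffer(m_rgbBuffer, (jlong) needed);
        m_javaBuffer = env->NewGlobalRef(local);
        env->DeleteLocalRef(local);
    }
}

// Per frame: scale straight into the reusable buffer, then notify Java.
// uint8_t *dstSlice[] = { m_rgbBuffer, nullptr, nullptr };
// const int dstStride[] = { m_avFrame->width * 3, 0, 0 };
// sws_scale(m_transform, m_avFrame->data, m_avFrame->linesize,
//           0, m_avFrame->height, dstSlice, dstStride);
// GetJniEnv()->CallVoidMethod(m_obj, setSurfaceImage, m_javaBuffer);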

How can I get a raw file from JNI native C++

I'm making a simple app in Android. I'm using the NDK to make JNI calls. I have a file in a resource subfolder (raw) which I need to access from native C++ code. I want to read it from native code using, for example, ifstream, but I haven't managed to do that.
That's my Java code:
Algorithm algorithm = new Algorithm();
InputStream isModel = getResources().openRawResource(R.raw.model);
String model = algorithm.ReadResourceFile(isModel);
if(imgInput != null && txtResults != null)
{
Bitmap bmp = ((BitmapDrawable)imgInput.getDrawable()).getBitmap();
//Convert Bitmap to Mat
Mat image = new Mat(bmp.getHeight(), bmp.getWidth(), CvType.CV_8U);
//Print results on txtResults
String results = algorithm.DetectEmotionByImage(image.nativeObj, model);
txtResults.setText(results);
}
That's my C++ code:
JNIEXPORT jstring JNICALL
Java_org_ctic_emoplay_1android_algorithm_Algorithm_DetectEmotionByImage(JNIEnv *env,
jobject instance,
jlong image,
jstring fileModel_,
jstring fileRange_,
jstring fileModelFlandmarks_,
jstring fileHaarCascade_)
{
const char *fileModel = env->GetStringUTFChars(fileModel_, NULL);
SVM_testing testing;
Mat* imageInput= (Mat*)image;
Mat& inImageInput = *(Mat*) imageInput;
string results = testing.TestModel(inImageInput, fileModel);
const char* final_results = results.c_str();
env->ReleaseStringUTFChars(fileModel_, fileModel);
return env->NewStringUTF(final_results);
}
Can anyone help me? I'm desperate. Thanks!
The file will be stored inside the APK, but if you rename the file extension to something like .PNG then it will not be compressed. Put the file in the assets folder, not res/raw.
You can get the APK file path like this:
public static String getAPKFilepath(Context context) {
// Get the path
String apkFilePath = null;
ApplicationInfo appInfo = null;
PackageManager packMgmr = context.getPackageManager();
String packageName = context.getApplicationContext().getPackageName();
try {
appInfo = packMgmr.getApplicationInfo(packageName, 0);
apkFilePath = appInfo.sourceDir;
} catch (NameNotFoundException e) {
}
return apkFilePath;
}
Then find the offset of your resource inside the APK:
public static void findAPKFile(String filepath, Context context) {
String apkFilepath = getAPKFilepath(context);
// Get the offset and length for the file: theUrl, that is in your
// assets folder
AssetManager assetManager = context.getAssets();
try {
AssetFileDescriptor assFD = assetManager.openFd(filepath);
if (assFD != null) {
long offset = assFD.getStartOffset();
long fileSize = assFD.getLength();
assFD.close();
// **** offset and fileSize are the offset and size
// **** in bytes of the asset inside the APK
}
} catch (IOException e) {
e.printStackTrace();
}
}
Call like this:
findAPKFile("model.png", MyActivity.this);
You can then call your C++ code and pass offset, fileSize, and apkFilepath via JNI. Open the file, seek past offset bytes, and then read out fileSize bytes of data.
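As a rough illustration of that last step (my own sketch, not from the original answer; the function and parameter names are made up), the native side could look something like this:
#include <cstdio>
#include <vector>

// Read one asset stored uncompressed inside the APK, given its byte offset and size.
std::vector<char> readAssetFromApk(const char *apkFilepath, long offset, long fileSize) {
    std::vector<char> data;
    FILE *fp = std::fopen(apkFilepath, "rb");
    if (fp == nullptr) {
        return data;                                  // could not open the APK
    }
    if (std::fseek(fp, offset, SEEK_SET) == 0) {      // jump to the stored asset
        data.resize((size_t) fileSize);
        size_t bytesRead = std::fread(data.data(), 1, (size_t) fileSize, fp);
        data.resize(bytesRead);                       // shrink if the read was short
    }
    std::fclose(fp);
    return data;
}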
The accepted answer to this question shows an alternative method but I haven't tried doing it that way so I can't vouch for it.

How to properly pass an asset FileDescriptor to FFmpeg using JNI in Android

I'm trying to retrieve metadata in Android using FFmpeg, JNI, and a Java FileDescriptor, and it isn't working. I know FFmpeg supports the pipe protocol, so I'm trying to emulate "cat test.mp3 | ffmpeg -i pipe:0" programmatically. I use the following code to get a FileDescriptor from an asset bundled with the Android application:
FileDescriptor fd = getContext().getAssets().openFd("test.mp3").getFileDescriptor();
setDataSource(fd, 0, 0x7ffffffffffffffL); // native function, shown below
Then, in my native (C++) code, I get the FileDescriptor by calling:
static void wseemann_media_FFmpegMediaMetadataRetriever_setDataSource(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
//...
int fd = jniGetFDFromFileDescriptor(env, fileDescriptor); // function contents show below
//...
}
// function contents
static int jniGetFDFromFileDescriptor(JNIEnv * env, jobject fileDescriptor) {
jint fd = -1;
jclass fdClass = env->FindClass("java/io/FileDescriptor");
if (fdClass != NULL) {
jfieldID fdClassDescriptorFieldID = env->GetFieldID(fdClass, "descriptor", "I");
if (fdClassDescriptorFieldID != NULL && fileDescriptor != NULL) {
fd = env->GetIntField(fileDescriptor, fdClassDescriptorFieldID);
}
}
return fd;
}
I then pass the file descriptor pipe number (in C) to FFmpeg:
char path[256] = "";
FILE *file = fdopen(fd, "rb");
if (file && (fseek(file, offset, SEEK_SET) == 0)) {
char str[20];
sprintf(str, "pipe:%d", fd);
strcat(path, str);
}
State *state = av_mallocz(sizeof(State));
state->pFormatCtx = NULL;
if (avformat_open_input(&state->pFormatCtx, path, NULL, &options) != 0) { // Note: path is in the format "pipe:<the FD #>"
printf("Metadata could not be retrieved\n");
*ps = NULL;
return FAILURE;
}
if (avformat_find_stream_info(state->pFormatCtx, NULL) < 0) {
printf("Metadata could not be retrieved\n");
avformat_close_input(&state->pFormatCtx);
*ps = NULL;
return FAILURE;
}
// Find the first audio and video stream
for (i = 0; i < state->pFormatCtx->nb_streams; i++) {
if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && video_index < 0) {
video_index = i;
}
if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audio_index < 0) {
audio_index = i;
}
set_codec(state->pFormatCtx, i);
}
if (audio_index >= 0) {
stream_component_open(state, audio_index);
}
if (video_index >= 0) {
stream_component_open(state, video_index);
}
printf("Found metadata\n");
AVDictionaryEntry *tag = NULL;
while ((tag = av_dict_get(state->pFormatCtx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX))) {
printf("Key %s: \n", tag->key);
printf("Value %s: \n", tag->value);
}
*ps = state;
return SUCCESS;
My issue is that avformat_open_input doesn't fail, but it also doesn't let me retrieve any metadata or frames. The same code works if I use a regular file URI (e.g. file://sdcard/test.mp3) as the path. What am I doing wrong? Thanks in advance.
Note: if you would like to look at all of the code, I'm trying to solve the issue in order to provide this functionality for my library: FFmpegMediaMetadataRetriever.
Java
AssetFileDescriptor afd = getContext().getAssets().openFd("test.mp3");
setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
C
void ***_setDataSource(JNIEnv *env, jobject thiz,
jobject fileDescriptor, jlong offset, jlong length)
{
int fd = jniGetFDFromFileDescriptor(env, fileDescriptor);
char path[20];
sprintf(path, "pipe:%d", fd);
State *state = av_mallocz(sizeof(State));
state->pFormatCtx = avformat_alloc_context();
state->pFormatCtx->skip_initial_bytes = offset;
state->pFormatCtx->iformat = av_find_input_format("mp3");
and now we can continue as usual:
if (avformat_open_input(&state->pFormatCtx, path, NULL, &options) != 0) {
printf("Metadata could not be retrieved\n");
*ps = NULL;
return FAILURE;
}
...
Even better, use <android/asset_manager.h>, like this:
Java
setDataSource(getContext().getAssets(), "test.mp3");
C
#include <android/asset_manager_jni.h>
void ***_setDataSource(JNIEnv *env, jobject thiz,
jobject assetManager, jstring assetName)
{
AAssetManager* nativeManager = AAssetManager_fromJava(env, assetManager);
const char *szAssetName = (*env)->GetStringUTFChars(env, assetName, NULL);
AAsset* asset = AAssetManager_open(nativeManager, szAssetName, AASSET_MODE_RANDOM);
(*env)->ReleaseStringUTFChars(env, assetName, szAssetName);
off_t offset, length;
int fd = AAsset_openFileDescriptor(asset, &offset, &length);
AAsset_close(asset);
Disclaimer: error checking was omitted for brevity, but resources are released correctly, except for fd. You must close(fd) when finished.
Post Scriptum: note that some media formats, e.g. mp4 need seekable protocol, and pipe: cannot help. In such case, you may try sprintf(path, "/proc/self/fd/%d", fd);, or use the custom saf: protocol.
Thanks a lot for this post.
It helped me a lot to integrate Android 10 and scoped storage with FFmpeg using a FileDescriptor.
Here is the solution I'm using on Android 10:
Java
URI uri = ContentUris.withAppendedId(
MediaStore.Audio.Media.EXTERNAL_CONTENT_URI,
trackId // Coming from `MediaStore.Audio.Media._ID`
);
ParcelFileDescriptor parcelFileDescriptor = getContentResolver().openFileDescriptor(
uri,
"r"
);
int pid = android.os.Process.myPid();
String path = "/proc/" + pid + "/fd/" + parcelFileDescriptor.dup().getFd();
loadFFmpeg(path); // Call native code
CPP
// Native code, `path` coming from Java `loadFFmpeg(String)`
avformat_open_input(&format, path, nullptr, nullptr);
OK, I spent a lot of time trying to transfer media data to FFmpeg through an AssetFileDescriptor. Finally, I found that there may be a bug in mov.c: when mov.c parses the trak atom, the corresponding skip_initial_bytes is not set. I have tried to fix this problem.
For details, please refer to FFmpegForAndroidAssetFileDescriptor; for a demo, refer to WhatTheCodec.
FileDescriptor fd = getContext().getAssets().openFd("test.mp3").getFileDescriptor();
I think you should start with AssetFileDescriptor.
http://developer.android.com/reference/android/content/res/AssetFileDescriptor.html

NDK load assets

I'm using this method to load assets in NDK:
jclass localRefCls = myEnv->FindClass("(...)/AssetLoaderHelper");
helperClass = reinterpret_cast<jclass>(myEnv->NewGlobalRef(localRefCls));
myEnv->DeleteLocalRef(localRefCls);
helperMethod1ID = myEnv->GetStaticMethodID(helperClass, "getFileData", "(Ljava/lang/String;)[B");
...
myEnv->PushLocalFrame(10);
jstring pathString = myEnv->NewStringUTF(path);
jbyteArray data = (jbyteArray) myEnv->CallStaticObjectMethod(helperClass, helperMethod1ID, pathString);
jsize len = myEnv->GetArrayLength(data);
char* buffer = new char[len];
myEnv->GetByteArrayRegion(data, 0, len, (jbyte*)buffer);
myEnv->DeleteLocalRef(pathString);
myEnv->DeleteLocalRef(data);
jobject result;
myEnv->PopLocalFrame(result);
myEnv->DeleteLocalRef(result);
return buffer;
in java:
public static byte[] getFileData(String path)
{
InputStream asset = getAsset(path); //my method using InputStream.open
byte[] b = null;
try
{
int size = asset.available();
b = new byte[size];
asset.read(b, 0, size);
asset.close();
}
catch (IOException e1)
{
Log.e("getFileData", e1.getMessage());
}
return b;
}
It works, but when I load many assets there is a crash or the system locks up. Am I making any mistake, or does someone know a better method for loading assets in the NDK? Perhaps it is just a problem with low memory on my device?
I'm not sure about your exact problem, but I can offer an alternative solution for opening assets on the JNI side:
1. On the Java side, create an AssetFileDescriptor for each file in question (call this fd from now on).
2. Pass the values of fd.getFileDescriptor(), fd.getStartOffset(), and fd.getLength() to a JNI function.
3. On the JNI side, you can now use fdopen(), fseek(), fread(), etc. using the information from #2 (see the sketch below).
4. Don't forget to call fd.close() after your JNI work.
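A minimal sketch of step 3 (my own illustration, not the answerer's code; it pulls the raw int out of the java.io.FileDescriptor the same way jniGetFDFromFileDescriptor() does in the FFmpeg answer above, and the function name readAsset is made up):
#include <jni.h>
#include <cstdio>
#include <vector>

// Read 'length' bytes of an asset that starts at 'offset' within the descriptor.
static jbyteArray readAsset(JNIEnv *env, jint fd, jlong offset, jlong length) {
    FILE *file = fdopen(fd, "rb");                    // wrap the raw descriptor
    if (file == nullptr) return nullptr;
    jbyteArray out = nullptr;
    if (fseek(file, (long) offset, SEEK_SET) == 0) {  // skip to the asset's start in the APK
        std::vector<char> buf((size_t) length);
        size_t bytesRead = fread(buf.data(), 1, (size_t) length, file);
        out = env->NewByteArray((jsize) bytesRead);
        env->SetByteArrayRegion(out, 0, (jsize) bytesRead, (const jbyte *) buf.data());
    }
    fclose(file);                                     // closes the stream and its descriptor
    return out;
}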
Hope that helps
