I'm making a simple Android app and using the NDK to make JNI calls. I have a file in a resource subfolder (res/raw) which I need to access from native C++ code. I want to read it from native code using, for example, std::ifstream, but I can't get it to work.
This is my Java code:
Algorithm algorithm = new Algorithm();
InputStream isModel = getResources().openRawResource(R.raw.model);
String model = algorithm.ReadResourceFile(isModel);
if (imgInput != null && txtResults != null)
{
    Bitmap bmp = ((BitmapDrawable) imgInput.getDrawable()).getBitmap();
    // Convert Bitmap to Mat
    Mat image = new Mat(bmp.getHeight(), bmp.getWidth(), CvType.CV_8U);
    Utils.bitmapToMat(bmp, image);
    // Print results on txtResults
    String results = algorithm.DetectEmotionByImage(image.nativeObj, model);
    txtResults.setText(results);
}
This is my C++ code:
JNIEXPORT jstring JNICALL
Java_org_ctic_emoplay_1android_algorithm_Algorithm_DetectEmotionByImage(JNIEnv *env,
jobject instance,
jlong image,
jstring fileModel_,
jstring fileRange_,
jstring fileModelFlandmarks_,
jstring fileHaarCascade_)
{
    const char *fileModel = env->GetStringUTFChars(fileModel_, NULL);
    SVM_testing testing;
    Mat* imageInput = (Mat*) image;
    Mat& inImageInput = *imageInput;
    string results = testing.TestModel(inImageInput, fileModel);
    const char* final_results = results.c_str();
    env->ReleaseStringUTFChars(fileModel_, fileModel);
    return env->NewStringUTF(final_results);
}
Can anyone help me? I'm desperate. Thanks!
The file will be stored inside the APK, but if you rename the file extension to something like .PNG then it will not be compressed. Put the file in the assets folder, not res/raw.
You can get the APK file path like this:
public static String getAPKFilepath(Context context) {
    // Get the path
    String apkFilePath = null;
    ApplicationInfo appInfo = null;
    PackageManager packMgmr = context.getPackageManager();
    String packageName = context.getApplicationContext().getPackageName();
    try {
        appInfo = packMgmr.getApplicationInfo(packageName, 0);
        apkFilePath = appInfo.sourceDir;
    } catch (NameNotFoundException e) {
        e.printStackTrace();
    }
    return apkFilePath;
}
Then find the offset of your resource inside the APK:
public static void findAPKFile(String filepath, Context context) {
    String apkFilepath = getAPKFilepath(context);
    // Get the offset and length for the file: theUrl, that is in your
    // assets folder
    AssetManager assetManager = context.getAssets();
    try {
        AssetFileDescriptor assFD = assetManager.openFd(filepath);
        if (assFD != null) {
            long offset = assFD.getStartOffset();
            long fileSize = assFD.getLength();
            assFD.close();
            // **** offset and fileSize are the offset and size
            // **** in bytes of the asset inside the APK
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Call like this:
findAPKFile("model.png", MyActivity.this);
You can call your C++ code and pass offset, fileSize, and apkFilepath via JNI. Open the file, seek past offset bytes, and then read out fileSize bytes of data.
The accepted answer to this question shows an alternative method but I haven't tried doing it that way so I can't vouch for it.
I am new to OpenCV. I am trying to use Facemark from the OpenCV contrib modules in my Android native C++ app. However, I am getting the error
A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x1788 in tid
21567(my_app)
when creating an instance of Facemark using
Ptr<Facemark> facemark = FacemarkLBF::create();
I am using the OpenCV library from https://github.com/chaoyangnz/opencv3-android-sdk-with-contrib
Here is my implementation.
C++:
void
Java_com_makeover_makeover_1opencv_MainActivity_nativeDetectFaceLandmarks(
        JNIEnv *env,
        jobject, jlong srcAddr, jlong retAddr,
        jstring faceCascadePath, jstring faceYamlPath)
{
    const char *faceCascadeFile = env->GetStringUTFChars(faceCascadePath, NULL);
    const char *yamlFile = env->GetStringUTFChars(faceYamlPath, NULL);
    LOGI("nativeDetectFace called");
    string cascadePath(faceCascadeFile);
    string yamlPath(yamlFile);
    Mat& colorMat = *(Mat*) srcAddr;
    Mat& retValMat = *(Mat*) retAddr;
    Mat gray;
    // Load the face detector
    CascadeClassifier faceDetector(cascadePath);
    LOGI("cascade file loaded");
    // Create an instance of Facemark
    Ptr<Facemark> facemark = FacemarkLBF::create();
    LOGI("face instance created");
    // Load the landmark detector
    facemark->loadModel(yamlPath);
    LOGI("yaml model loaded");
    // Find faces
    vector<Rect> faces;
    // Convert the frame to grayscale because
    // faceDetector requires grayscale images.
    cvtColor(colorMat, gray, COLOR_BGR2GRAY);
    // Detect faces
    faceDetector.detectMultiScale(gray, faces);
    // Variable for landmarks.
    // The landmarks for one face are a vector of points.
    // There can be more than one face in the image, hence we
    // use a vector of vectors of points.
    vector< vector<Point2f> > landmarks;
    // Run the landmark detector
    bool success = facemark->fit(colorMat, faces, landmarks);
    if (success)
    {
        // If successful, render the landmarks on the face
        for (size_t i = 0; i < landmarks.size(); i++)
        {
            drawLandmarks(colorMat, landmarks[i]);
        }
    }
    env->ReleaseStringUTFChars(faceCascadePath, faceCascadeFile);
    env->ReleaseStringUTFChars(faceYamlPath, yamlFile);
}
Java implementation:
drawFaces.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        Mat colorMat, grayMat;
        colorMat = new Mat();
        grayMat = new Mat();
        Utils.bitmapToMat(bmp, colorMat);
        nativeDetectFaceLandmarks(colorMat.getNativeObjAddr(), grayMat.getNativeObjAddr(),
                getCascade("face"), getCascade("yaml"));
        Bitmap new_bmp2 = Bitmap.createBitmap(bmp);
        Utils.matToBitmap(colorMat, new_bmp2);
        img_face.setImageBitmap(new_bmp2);
    }
});
The getCascade method:
public String getCascade(String cascadeType) {
    String fileName;
    File mCascadeFile;
    final InputStream is;
    FileOutputStream os;
    switch (cascadeType) {
        case "mouth":
            fileName = "haarcascade_mcs_mouth.xml";
            break;
        case "face":
            fileName = "haarcascade_frontalface_alt2.xml";
            break;
        case "right_eye":
            fileName = "haarcascade_mcs_righteye.xml";
            break;
        case "yaml":
            fileName = "lbfmodel.yaml";
            break;
        case "left_eye":
            fileName = "haarcascade_mcs_lefteye.xml";
            break;
        default:
            fileName = null;
    }
    if (fileName == null) {
        return null;
    }
    try {
        is = getResources().getAssets().open(fileName);
        File cascadeDir = getDir("cascade", Context.MODE_PRIVATE);
        mCascadeFile = new File(cascadeDir, fileName);
        os = new FileOutputStream(mCascadeFile);
        byte[] buffer = new byte[4096];
        int bytesRead;
        while ((bytesRead = is.read(buffer)) != -1) {
            os.write(buffer, 0, bytesRead);
        }
        is.close();
        os.close();
        Log.i("TAG", "getCascade: face cascade found");
        return mCascadeFile.getAbsolutePath();
    } catch (IOException e) {
        Log.e("TAG", "face cascade not found", e);
        return null;
    }
}
Does anyone know what I am doing wrong, or a better way to use Facemark from the OpenCV contrib modules in native Android code?
Every tutorial I have seen so far has implemented this method in a different way. To clear this up, I will go through the methods that work for me.
First, declaring Facemark with the method below returns the SIGSEGV error:
Ptr<Facemark> facemark = FacemarkLBF::create();
Instead use:
Ptr<Facemark> facemark = createFacemarkLBF();
Second, I have seen many tutorials use:
facemark->load(filename);
But the correct syntax would be:
facemark->loadModel(filename);
If you still have the same issue, follow this link and download the latest SDK with contrib:
https://pullrequest.opencv.org/buildbot/export/opencv_releases/
I have a native C++ method which I am using to read an image called "hi.jpg". The code below finds the asset and loads the data into a char* buffer. (I've tried other methods such as imread(), but the file is not found.) I would then like to convert this data into Mat format, so I've followed some instructions to put the char* buffer into a std::vector, and then use cv::imdecode to convert the data to a Mat.
JNIEXPORT jint JNICALL Java_com_example_user_application_MainActivity_generateAssets(JNIEnv* env, jobject thiz, jobject assetManager) {
    AAsset* img = NULL;
    AAssetManager *mgr = AAssetManager_fromJava(env, assetManager);
    AAssetDir* assetDir = AAssetManager_openDir(mgr, "");
    const char* filename;
    while ((filename = AAssetDir_getNextFileName(assetDir)) != NULL) {
        AAsset *asset = AAssetManager_open(mgr, filename, AASSET_MODE_UNKNOWN);
        if (strcmp(filename, "hi.jpg") == 0) {
            img = asset;
        } else {
            AAsset_close(asset); // don't leak the assets we aren't using
        }
    }
    AAssetDir_close(assetDir);
    if (img == NULL) {
        return -1; // asset not found
    }
    long sizeOfImg = AAsset_getLength(img);
    char* buffer = (char*) malloc(sizeof(char) * sizeOfImg);
    AAsset_read(img, buffer, sizeOfImg);
    // Use the real size: strlen() would truncate binary data at the first zero byte.
    std::vector<char> data(buffer, buffer + sizeOfImg);
    cv::Mat dataToMat = cv::imdecode(data, IMREAD_UNCHANGED);
    AAsset_close(img);
    free(buffer);
    return 0;
}
My problem is that I don't know how to test whether the data has been successfully converted into a Mat. How can I test this? I have run the debugger and inspected dataToMat, but it isn't making much sense.
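One pitfall worth ruling out first: building the vector with strlen(buffer) truncates binary data, because JPEG bytes routinely contain zeros and strlen() stops at the first one. A small standalone check (the helper names are illustrative, not from the original code) demonstrates this:

```cpp
#include <cstring>
#include <vector>

// Binary data (such as JPEG bytes) can contain zero bytes, so strlen()
// under-reports its length. Always carry the explicit size from
// AAsset_getLength() instead. These helpers are illustrative only.
std::vector<char> toVectorWithStrlen(const char* buf) {
    return std::vector<char>(buf, buf + std::strlen(buf)); // truncates at '\0'
}
std::vector<char> toVectorWithSize(const char* buf, long size) {
    return std::vector<char>(buf, buf + size);             // keeps every byte
}
```

As for testing the conversion itself: after cv::imdecode, check dataToMat.empty(); a successful decode also reports plausible dataToMat.rows and dataToMat.cols.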
I am using the latest version of Android Studio (2.2.3) and I have loaded up the HelloGL2 sample project.
I now want to add a file (any type of file) to my app, and then be able to open it and read it in the C++ code using something like C's fopen, etc. (any direct file-access API is fine).
How do I do this?
There are two options; it depends on your use case.
If your file is a basic text configuration file, you can use either approach, but if your file is a 3D object such as .obj, .max, or .dae, you should use the AssetManager class.
First option: store your files in res/raw (you can then use fopen()).
Create a folder called raw inside the res directory (res/raw).
Write your files to the APK's private directory.
In Java:
public void writeFileToPrivateStorage(int fromFile, String toFile)
{
    InputStream is = mContext.getResources().openRawResource(fromFile);
    int bytes_read;
    byte[] buffer = new byte[4096];
    try
    {
        FileOutputStream fos = mContext.openFileOutput(toFile, Context.MODE_PRIVATE);
        while ((bytes_read = is.read(buffer)) != -1)
            fos.write(buffer, 0, bytes_read); // write
        fos.close();
        is.close();
    }
    catch (FileNotFoundException e)
    {
        e.printStackTrace();
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
}
Then call your function:
writeFileToPrivateStorage(R.raw.your_file,"your_output_file.txt");
Get your private path:
path=mContext.getApplicationContext().getFilesDir().toString();
Define your JNI function in Java:
public static native void setconfiguration(String yourpath);
Implement it in C/C++:
JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_setconfiguration(JNIEnv *env, jobject obj, jstring path)
{
    // Convert the jstring into a C string.
    const char *nativeString = env->GetStringUTFChars(path, 0);
    // Do your fopen() here.
    FILE *file = fopen(nativeString, "r");
    // ... read from the file ...
    if (file) fclose(file);
    env->ReleaseStringUTFChars(path, nativeString);
}
Second option: use AssetManager (usually for OpenGL resources).
The parameter, in this case, is not a directory path but the asset manager itself.
Store your files in the assets directory.
Declare your native function in Java:
public static native void loadYourFile(AssetManager assetManager);
Call this function from Java:
loadYourFile(m_context.getAssets());
Create your JNI function in C++:
JNIEXPORT void JNICALL Java_com_android_gl2jni_GL2JNILib_loadYourFile(JNIEnv *env, jobject obj, jobject java_asset_manager)
{
    AAssetManager* mgr = AAssetManager_fromJava(env, java_asset_manager);
    AAsset* asset = AAssetManager_open(mgr, "your_file.txt", AASSET_MODE_UNKNOWN); // your asset's file name
    if (NULL == asset) {
        __android_log_print(ANDROID_LOG_ERROR, NF_LOG_TAG, "_ASSET_NOT_FOUND_");
        return;
    }
    long size = AAsset_getLength(asset);
    char* buffer = (char*) malloc(sizeof(char) * (size + 1));
    AAsset_read(asset, buffer, size);
    buffer[size] = '\0'; // NUL-terminate before logging it as a string
    __android_log_print(ANDROID_LOG_ERROR, NF_LOG_TAG, "%s", buffer);
    free(buffer);
    AAsset_close(asset);
}
Note: Do not forget to add the permissions in your AndroidManifest.xml.
Note II: Do not forget to add:
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
I hope this answer helps you.
I'm trying to retrieve metadata in Android using FFmpeg, JNI, and a Java FileDescriptor, and it isn't working. I know FFmpeg supports the pipe protocol, so I'm trying to emulate "cat test.mp3 | ffmpeg -i pipe:0" programmatically. I use the following code to get a FileDescriptor from an asset bundled with the Android application:
FileDescriptor fd = getContext().getAssets().openFd("test.mp3").getFileDescriptor();
setDataSource(fd, 0, 0x7fffffffffffffffL); // native function, shown below
Then, in my native (C++) code I get the file descriptor by calling:
static void wseemann_media_FFmpegMediaMetadataRetriever_setDataSource(JNIEnv *env, jobject thiz, jobject fileDescriptor, jlong offset, jlong length)
{
    //...
    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor); // function contents shown below
    //...
}
// function contents
static int jniGetFDFromFileDescriptor(JNIEnv * env, jobject fileDescriptor) {
    jint fd = -1;
    jclass fdClass = env->FindClass("java/io/FileDescriptor");
    if (fdClass != NULL) {
        jfieldID fdClassDescriptorFieldID = env->GetFieldID(fdClass, "descriptor", "I");
        if (fdClassDescriptorFieldID != NULL && fileDescriptor != NULL) {
            fd = env->GetIntField(fileDescriptor, fdClassDescriptorFieldID);
        }
    }
    return fd;
}
I then pass the file descriptor pipe number (in C) to FFmpeg:
char path[256] = "";
FILE *file = fdopen(fd, "rb");
if (file && (fseek(file, offset, SEEK_SET) == 0)) {
    char str[20];
    sprintf(str, "pipe:%d", fd);
    strcat(path, str);
}
State *state = av_mallocz(sizeof(State));
state->pFormatCtx = NULL;
if (avformat_open_input(&state->pFormatCtx, path, NULL, &options) != 0) { // Note: path is in the format "pipe:<the FD #>"
    printf("Metadata could not be retrieved\n");
    *ps = NULL;
    return FAILURE;
}
if (avformat_find_stream_info(state->pFormatCtx, NULL) < 0) {
    printf("Metadata could not be retrieved\n");
    avformat_close_input(&state->pFormatCtx);
    *ps = NULL;
    return FAILURE;
}
// Find the first audio and video stream
for (i = 0; i < state->pFormatCtx->nb_streams; i++) {
    if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO && video_index < 0) {
        video_index = i;
    }
    if (state->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO && audio_index < 0) {
        audio_index = i;
    }
    set_codec(state->pFormatCtx, i);
}
if (audio_index >= 0) {
    stream_component_open(state, audio_index);
}
if (video_index >= 0) {
    stream_component_open(state, video_index);
}
printf("Found metadata\n");
AVDictionaryEntry *tag = NULL;
while ((tag = av_dict_get(state->pFormatCtx->metadata, "", tag, AV_DICT_IGNORE_SUFFIX))) {
    printf("Key %s: \n", tag->key);
    printf("Value %s: \n", tag->value);
}
*ps = state;
return SUCCESS;
My issue is that avformat_open_input doesn't fail, but it also doesn't let me retrieve any metadata or frames. The same code works if I use a regular file URI (e.g. file:///sdcard/test.mp3) as the path. What am I doing wrong? Thanks in advance.
Note: if you would like to look at all of the code, I'm trying to solve the issue in order to provide this functionality for my library: FFmpegMediaMetadataRetriever.
Java
AssetFileDescriptor afd = getContext().getAssets().openFd("test.mp3");
setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
C
void ***_setDataSource(JNIEnv *env, jobject thiz,
        jobject fileDescriptor, jlong offset, jlong length)
{
    int fd = jniGetFDFromFileDescriptor(env, fileDescriptor);
    char path[20];
    sprintf(path, "pipe:%d", fd);
    State *state = av_mallocz(sizeof(State));
    state->pFormatCtx = avformat_alloc_context();
    state->pFormatCtx->skip_initial_bytes = offset;
    state->pFormatCtx->iformat = av_find_input_format("mp3");
and now we can continue as usual:
    if (avformat_open_input(&state->pFormatCtx, path, NULL, &options) != 0) {
        printf("Metadata could not be retrieved\n");
        *ps = NULL;
        return FAILURE;
    }
...
Even better, use <android/asset_manager.h>, like this:
Java
setDataSource(getContext().getAssets(), "test.mp3");
C
#include <android/asset_manager_jni.h>
void ***_setDataSource(JNIEnv *env, jobject thiz,
        jobject assetManager, jstring assetName)
{
    // Use a different name for the local variable so it doesn't redeclare the parameter.
    AAssetManager* mgr = AAssetManager_fromJava(env, assetManager);
    const char *szAssetName = (*env)->GetStringUTFChars(env, assetName, NULL);
    AAsset* asset = AAssetManager_open(mgr, szAssetName, AASSET_MODE_RANDOM);
    (*env)->ReleaseStringUTFChars(env, assetName, szAssetName);
    off_t offset, length;
    int fd = AAsset_openFileDescriptor(asset, &offset, &length);
    AAsset_close(asset);
Disclaimer: error checking was omitted for brevity, but resources are released correctly, except for fd. You must close(fd) when finished.
Post scriptum: note that some media formats, e.g. mp4, need a seekable protocol, and pipe: cannot help there. In such cases, you may try sprintf(path, "/proc/self/fd/%d", fd);, or use the custom saf: protocol.
Thanks a lot for this post.
It helped me a lot in integrating Android 10 and scoped storage with FFmpeg using a FileDescriptor.
Here is the solution I'm using on Android 10:
Java
Uri uri = ContentUris.withAppendedId(
        MediaStore.Audio.Media.EXTERNAL_CONTENT_URI,
        trackId // Coming from `MediaStore.Audio.Media._ID`
);
ParcelFileDescriptor parcelFileDescriptor = getContentResolver().openFileDescriptor(
        uri,
        "r"
);
int pid = android.os.Process.myPid();
String path = "/proc/" + pid + "/fd/" + parcelFileDescriptor.dup().getFd();
loadFFmpeg(path); // Call native code
CPP
// Native code, `path` coming from Java `loadFFmpeg(String)`
avformat_open_input(&format, path, nullptr, nullptr);
OK, I spent a lot of time trying to transfer media data to FFmpeg through an AssetFileDescriptor. Finally, I found that there may be a bug in mov.c: when mov.c parses the trak atom, the corresponding skip_initial_bytes is not set. I have tried to fix this problem.
For details, please refer to FFmpegForAndroidAssetFileDescriptor; for a demo, refer to WhatTheCodec.
FileDescriptor fd = getContext().getAssets().openFd("test.mp3").getFileDescriptor();
I think you should start with AssetFileDescriptor.
http://developer.android.com/reference/android/content/res/AssetFileDescriptor.html
I'm using this method to load assets in NDK:
jclass localRefCls = myEnv->FindClass("(...)/AssetLoaderHelper");
helperClass = reinterpret_cast<jclass>(myEnv->NewGlobalRef(localRefCls));
myEnv->DeleteLocalRef(localRefCls);
helperMethod1ID = myEnv->GetStaticMethodID(helperClass, "getFileData", "(Ljava/lang/String;)[B");
...
myEnv->PushLocalFrame(10);
jstring pathString = myEnv->NewStringUTF(path);
jbyteArray data = (jbyteArray) myEnv->CallStaticObjectMethod(helperClass, helperMethod1ID, pathString);
jsize len = myEnv->GetArrayLength(data);
char* buffer = new char[len];
myEnv->GetByteArrayRegion(data, 0, len, (jbyte*) buffer);
// PopLocalFrame releases pathString, data, and every other local ref
// created since PushLocalFrame; they must not be deleted afterwards.
myEnv->PopLocalFrame(NULL);
return buffer;
In Java:
public static byte[] getFileData(String path)
{
    InputStream asset = getAsset(path); // my method that opens an InputStream for the asset
    byte[] b = null;
    try
    {
        int size = asset.available();
        b = new byte[size];
        asset.read(b, 0, size);
        asset.close();
    }
    catch (IOException e1)
    {
        Log.e("getFileData", e1.getMessage());
    }
    return b;
}
It works, but when I load many assets there is a crash or the system locks up. Am I making a mistake, or does someone know a better method for loading assets in the NDK? Perhaps it is only a problem with low memory on my device?
I'm not sure about your exact problem, but I can offer an alternative solution for opening assets on the JNI side:
Java side: create an AssetFileDescriptor for each file in question (call this fd from now on)
Pass the value of fd.getFileDescriptor(), fd.getStartOffset(), and fd.getLength() to a JNI function
JNI side you can now use fdopen(), fseek(), fread(), etc. using the information from #2
Don't forget to call fd.close() after your JNI work
Hope that helps