I'm currently developing an OpenGL ES application for Android using the NDK.
The application would greatly benefit from the following OpenGL extension:
GL_EXT_texture_array
The extension is supported by my Tegra 3 device (Asus Eee Pad Transformer Prime TF201).
The issue I'm facing now is that I have no clue how to make the extension available to my application, since it is not included in the OpenGL ES API registry
(see "Extension Specifications": http://www.khronos.org/registry/gles/).
However, I noticed an extension called "GL_NV_texture_array" which seems to serve the same purpose, but it is not supported by my Tegra 3 device.
I'm aware that extensions can be accessed through function pointers, but I thought there might be a more convenient way.
I have also found a header file (gl2ext_nv.h) which contains the necessary extension.
But when you search for it on Google, the file always turns up as part of particular projects, never as something official.
I have also downloaded the Tegra Android Development Pack (2.0), which includes neither this header file nor the desired extension.
Can anyone explain this to me, please?
How can I use OpenGL ES extensions supported by my Tegra 3 device
that are seemingly not covered by any official OpenGL ES headers (in the NDK)?
Thanks in advance!
When you say that your Tegra 3 device supports GL_EXT_texture_array but not GL_NV_texture_array, I'm assuming that you determined that through a call to glGetString(GL_EXTENSIONS).
GL_NV_texture_array is very similar to GL_EXT_texture_array, just limited to 2D texture arrays. Not surprisingly, it uses many of the same constants as GL_EXT_texture_array, just with different names.
GL_NV_texture_array:
TEXTURE_2D_ARRAY_NV 0x8C1A
TEXTURE_BINDING_2D_ARRAY_NV 0x8C1D
MAX_ARRAY_TEXTURE_LAYERS_NV 0x88FF
FRAMEBUFFER_ATTACHMENT_TEXTURE_LAYER_NV 0x8CD4
SAMPLER_2D_ARRAY_NV 0x8DC1
GL_EXT_texture_array:
TEXTURE_2D_ARRAY_EXT 0x8C1A
TEXTURE_BINDING_2D_ARRAY_EXT 0x8C1D
MAX_ARRAY_TEXTURE_LAYERS_EXT 0x88FF
FRAMEBUFFER_ATTACHMENT_TEXTURE_LAYER_EXT 0x8CD4
SAMPLER_2D_ARRAY_EXT 0x8DC1
This version of gl2ext_nv.h defines the constants for GL_EXT_texture_array but not for GL_NV_texture_array, so perhaps nVidia is using the old name now. If you can't find a more recent version of the header, just include this one.
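Since the two extensions share the same token values, a minimal workaround (a sketch, assuming you don't want to ship the NVIDIA header) is to define the EXT tokens yourself, using the values listed above:

// Token values taken from the extension listings above; only define them if no
// header (gl2ext_nv.h or similar) already provides them.
#ifndef GL_TEXTURE_2D_ARRAY_EXT
#define GL_TEXTURE_2D_ARRAY_EXT                     0x8C1A
#define GL_TEXTURE_BINDING_2D_ARRAY_EXT             0x8C1D
#define GL_MAX_ARRAY_TEXTURE_LAYERS_EXT             0x88FF
#define GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LAYER_EXT 0x8CD4
#define GL_SAMPLER_2D_ARRAY_EXT                     0x8DC1
#endif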
To gain access to functions offered by GL extensions, use eglGetProcAddress to assign the function to a function pointer.
// The function pointer, declared in a header.
// You can put this in a class instance or at global scope.
// If the latter, declare it with "extern", and define the actual function
// pointer without "extern" in a single source file.
PFNGLFRAMEBUFFERTEXTURELAYEREXTPROC glFramebufferTextureLayerEXT;
In your function that checks for the presence of the GL_EXT_texture_array extension, if it's found, get the address of the function and store it in your function pointer. With OpenGL-ES, that means asking EGL:
glFramebufferTextureLayerEXT = (PFNGLFRAMEBUFFERTEXTURELAYEREXTPROC) eglGetProcAddress("glFramebufferTextureLayerEXT");
Now you can use the function just as if it were part of regular OpenGL.
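Putting the pieces together, a minimal sketch of the check-and-lookup step could look like this (the function-pointer typedef normally comes from gl2ext_nv.h; it is declared inline here only to keep the sketch self-contained):

#include <cstring>
#include <EGL/egl.h>
#include <GLES2/gl2.h>

// Normally provided by gl2ext_nv.h; declared here so the sketch is self-contained.
typedef void (GL_APIENTRY *PFNGLFRAMEBUFFERTEXTURELAYEREXTPROC)(GLenum target, GLenum attachment, GLuint texture, GLint level, GLint layer);

PFNGLFRAMEBUFFERTEXTURELAYEREXTPROC glFramebufferTextureLayerEXT = nullptr;

// Simple substring check against the driver's extension string.
// Requires a current GL context; a strict parser would tokenize the string.
static bool hasExtension(const char* name) {
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext != nullptr && std::strstr(ext, name) != nullptr;
}

bool initTextureArrayExtension() {
    if (!hasExtension("GL_EXT_texture_array"))
        return false;
    glFramebufferTextureLayerEXT =
        reinterpret_cast<PFNGLFRAMEBUFFERTEXTURELAYEREXTPROC>(
            eglGetProcAddress("glFramebufferTextureLayerEXT"));
    return glFramebufferTextureLayerEXT != nullptr;
}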
The application I use (https://github.com/dangbo/ncnn-mobile.git) uses a native library that returns the inference result as a tag. I need it to give me the float array from which it determines the tag. The array is already implemented in the C++ files, but changing them does not affect the application itself. I wouldn't mind if the array were written into a string; I just need the numbers in a readable format. However, the method is native, so I don't know how to modify this behavior.
I use the newest versions of Android Studio and NCNN. Please advise.
Simply run ndk-build on the jni folder to rebuild the native library; otherwise the changes you make in the C++ files never end up in the packaged .so.
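For the other part (getting a float array out of the native side), here is a hedged JNI sketch; the Java package/class name and the hard-coded values are placeholders that you would adapt to the actual ncnn-mobile bindings:

#include <jni.h>
#include <vector>

// Hypothetical native method: returns the raw scores instead of just the tag.
// Rename Java_com_example_ncnn_Detector_getScores to match your Java class.
extern "C" JNIEXPORT jfloatArray JNICALL
Java_com_example_ncnn_Detector_getScores(JNIEnv* env, jobject /*thiz*/) {
    // In the real project these values would come from the ncnn inference output.
    std::vector<float> scores = {0.1f, 0.7f, 0.2f};

    jfloatArray result = env->NewFloatArray(static_cast<jsize>(scores.size()));
    if (result == nullptr) return nullptr;  // out of memory
    env->SetFloatArrayRegion(result, 0, static_cast<jsize>(scores.size()), scores.data());
    return result;
}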
I'm a student in computer science. As part of my master's project, I'm trying to intercept calls to functions in native libraries on the Android platform. The goal is to decide whether to allow the call or deny it in order to improve security.
Following the approach of a research paper [1], I want to modify the Procedure Linkage Table (PLT) and the Global Offset Table (GOT) of the ELF file. The idea is to make all function calls point to my own intercepting function, which decides whether to block the call or pass it through to the original target function.
The ELF specification [2] says (in Book III, Chapter 2 "Program Loading and Dynamic Linking", page 2-13, sections "Global Offset Table" and "Procedure Linkage Table") that the actual contents and form of the PLT and the GOT depend upon the processor. However, in the documentation "ELF for the ARM Architecture" [3], I was unable to find the exact specification of either of those tables. I am concentrating on ARM and not considering other architectures at the moment.
I have 3 questions:
How can I map a symbol to a GOT or PLT entry?
Where do I find the precise specification of the GOT and PLT for ARM processors?
As the PLT contains machine code, will I have to parse that code in order to modify the target address, or do all PLT entries look identical, so that I could just modify the memory at a constant offset for each PLT entry?
Thanks,
Manuel
You need to parse the ELF headers and look up the symbol's index by its string name in the SHT_DYNSYM section. Then iterate over the PLT relocation entries (the section usually named ".rel.plt" or ".rela.plt") and find the one with the matching symbol index; its offset points at the GOT slot you need to patch.
I don't know about a formal spec, but you can always study the Android linker source and disassemble some binaries to spot the patterns.
Usually the PLT is just common code and you don't need to modify it. It's actually designed this way because if the linker had to modify it, you would end up with RWX memory, which is undesirable. So you just need to rewrite the entry in the GOT. By default the GOT entries point to the resolver routine, which finds the needed function and writes its address into the GOT; that's how it works on Linux. On Android the addresses are already resolved.
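For illustration only, here's a rough C++ sketch of that GOT-rewrite step for 32-bit ARM. It assumes you already know the library's load bias and the address of its dynamic section (e.g. via dl_iterate_phdr or by parsing /proc maps), and it leaves out error handling:

#include <elf.h>
#include <cstring>
#include <cstdint>
#include <sys/mman.h>
#include <unistd.h>

#ifndef R_ARM_JUMP_SLOT
#define R_ARM_JUMP_SLOT 22  // from "ELF for the ARM Architecture"
#endif

// Rewrite the GOT slot behind the PLT entry for `symname`, returning the old target.
// Note: on some loaders the d_ptr values below are already absolute addresses,
// in which case the load bias must not be added again.
void* patch_got_entry(uintptr_t load_bias, const Elf32_Dyn* dyn,
                      const char* symname, void* hook) {
    const Elf32_Sym* symtab = nullptr;
    const char* strtab = nullptr;
    const Elf32_Rel* rel = nullptr;  // 32-bit ARM uses REL entries (.rel.plt)
    size_t relsz = 0;

    for (; dyn->d_tag != DT_NULL; ++dyn) {
        switch (dyn->d_tag) {
            case DT_SYMTAB:   symtab = (const Elf32_Sym*)(load_bias + dyn->d_un.d_ptr); break;
            case DT_STRTAB:   strtab = (const char*)(load_bias + dyn->d_un.d_ptr);      break;
            case DT_JMPREL:   rel    = (const Elf32_Rel*)(load_bias + dyn->d_un.d_ptr); break;
            case DT_PLTRELSZ: relsz  = dyn->d_un.d_val;                                 break;
        }
    }
    if (!symtab || !strtab || !rel) return nullptr;

    for (size_t i = 0; i < relsz / sizeof(Elf32_Rel); ++i) {
        if (ELF32_R_TYPE(rel[i].r_info) != R_ARM_JUMP_SLOT) continue;
        const Elf32_Sym* sym = &symtab[ELF32_R_SYM(rel[i].r_info)];
        if (std::strcmp(strtab + sym->st_name, symname) != 0) continue;

        void** slot = (void**)(load_bias + rel[i].r_offset);  // the GOT entry itself
        uintptr_t page = (uintptr_t)slot & ~(uintptr_t)(getpagesize() - 1);
        mprotect((void*)page, getpagesize(), PROT_READ | PROT_WRITE);  // GOT may be read-only (RELRO)
        void* old = *slot;
        *slot = hook;  // every later call through this PLT entry now lands in `hook`
        return old;
    }
    return nullptr;
}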
I did something similar for x86_64 Linux:
https://github.com/astarasikov/sxge/blob/vaapi_recorder/apps/src/sxge/apps/demo1_cube/hook-elf.c
There's also a blog post about doing what you want on Android:
https://www.google.de/amp/shunix.com/android-got-hook/amp/
In my Android project, I'm using std::thread.
I use the same C++ code also in some Linux and OSX projects.
For debugging purposes, I want to assign human-readable thread names, and I do that by calling pthread_setname_np() (because std::thread lacks a set_name()).
For later debug output, I try to obtain the current thread name by calling pthread_getname_np(), and this works, e.g., on the Linux target.
But to my surprise, there is no pthread_getname_np() in the Android NDK's pthread.h, neither in e.g. ndk-bundle/platforms/android-19/arch-arm/usr/include/pthread.h nor in ndk-bundle/platforms/android-21/arch-arm/usr/include/pthread.h.
A naive attempt with a forward declaration like:
extern "C" int pthread_getname_np(pthread_t, char*, size_t);
fails with a linker error (as expected).
Any idea how to obtain the human readable name of the current thread in Android from C/C++ code?
You can see how Dalvik sets them in dalvik/vm/Thread.cpp. It uses pthread_setname_np() if available, prctl(PR_SET_NAME) if not. So if pthread_getname_np() isn't available -- and bear in mind that "np" means "non-portable" -- you can use prctl(PR_GET_NAME) to get a 16-byte null-terminated string under Linux.
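A minimal sketch of that prctl-based fallback (assuming a Linux/Android target where PR_GET_NAME is available):

#include <sys/prctl.h>
#include <cstdio>

int main() {
    // What pthread_setname_np() boils down to for the current thread.
    prctl(PR_SET_NAME, "worker-1", 0, 0, 0);

    char name[16] = {0};  // PR_GET_NAME writes at most 16 bytes, NUL-terminated
    if (prctl(PR_GET_NAME, name, 0, 0, 0) == 0)
        std::printf("current thread name: %s\n", name);
    return 0;
}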
You can find other bits by fishing around in /proc entries.
If you have specific requirements for the size and format of the name then you may want to define a pthread key and tuck it into thread-local storage. It's more work, but it's consistent and portable.
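A minimal sketch of that pthread-key approach (each thread stashes a pointer to its own name under a shared key):

#include <pthread.h>
#include <cstdio>

static pthread_key_t g_name_key;
static pthread_once_t g_name_once = PTHREAD_ONCE_INIT;

static void create_name_key() { pthread_key_create(&g_name_key, nullptr); }

static void set_thread_name(const char* name) {
    pthread_once(&g_name_once, create_name_key);
    pthread_setspecific(g_name_key, name);  // `name` must outlive the thread's use of it
}

static const char* get_thread_name() {
    const void* p = pthread_getspecific(g_name_key);
    return p ? static_cast<const char*>(p) : "<unnamed>";
}

static void* worker(void*) {
    set_thread_name("worker-1");
    std::printf("running in %s\n", get_thread_name());
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    pthread_join(t, nullptr);
    return 0;
}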
I am working on a MATLAB project where I add effects to audio files (mp3, wav). Therefore, I load the files into arrays using the MATLAB function audioread(..).
Now I want to port this to Android. I read that the best way is to use MATLAB Coder to generate C/C++ (or Java) code from the MATLAB code and then integrate it into Android (more or less).
However, the function calls audioplayer (and play) are unsupported (that's what the code generation readiness report says).
What can I do? One idea was to play the sounds directly using C++ code (i.e. after the code generation). But how can I play sounds from arrays using C++?
Or if you have other ideas that avoid touching C++ code (i.e. fixing the problem directly in MATLAB), I would be glad to hear them!
Thanks and have a good day!
Typically what I recommend in cases like this is to factor your code into two pieces:
The part that does the audio file I/O and audio playing (namely the OS-specific part)
The computational kernel for which you will generate code using MATLAB Coder. This piece usually takes numeric arrays representing the image or audio data as arguments.
I've used this approach to leverage MATLAB Coder generated code to do image filtering on Android.
To do part (1), as Navan says, you'll need to use the Android APIs to read in audio files, write data back to files, and play them as desired. Note that I haven't done extensive Android development, so these tasks may take some research or be difficult.
Once you have the data in a format suitable for the function(s) in (2), likely a numeric array, you can call your generated code through JNI to add the desired effects. The generated code returns the data to the Java side, and you can then encode it, play it, or do as you please with it using the Android APIs.
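As a rough illustration of that JNI hand-off, here is a hedged sketch; the generated entry point (applyEffect) and the Java class name are placeholders for whatever MATLAB Coder actually emits for your kernel:

#include <jni.h>
#include <vector>
// #include "applyEffect.h"  // header generated by MATLAB Coder (assumed name)

extern "C" JNIEXPORT jdoubleArray JNICALL
Java_com_example_audiofx_NativeBridge_applyEffect(JNIEnv* env, jobject /*thiz*/,
                                                  jdoubleArray samples) {
    const jsize n = env->GetArrayLength(samples);
    std::vector<double> in(n), out(n);
    env->GetDoubleArrayRegion(samples, 0, n, in.data());

    // applyEffect(in.data(), n, out.data());  // call into the generated kernel
    out = in;                                  // placeholder pass-through

    jdoubleArray result = env->NewDoubleArray(n);
    if (result == nullptr) return nullptr;
    env->SetDoubleArrayRegion(result, 0, n, out.data());
    return result;
}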
Playing audio normally uses platform-dependent libraries. In the DSP System Toolbox there is an audio player object called dsp.AudioPlayer which supports C code generation. But I believe this uses platform-dependent libraries in the generated code, and it will not be straightforward to make it work on Android. You will be better off finding an audio player library for Android and hooking it in manually after generating the code.
I'm currently developing an algorithm for texture classification based on machine learning, primarily Support Vector Machines (SVM). I was able to get some very good results on my test data and now want to use the SVM in a production environment.
Production in my case means it is going to run on multiple desktop and mobile platforms (i.e. Android, iOS), always somewhere deep down in native threads. For reasons of software structure and the platforms' access policies, I'm not able to access the file system from where I use the SVM. However, my framework supports reading files in an environment where access to the file system is granted and channeling the file's content as a std::string to the SVM part of my application.
The standard way to configure an SVM is via a filename, and OpenCV reads directly from the file:
cv::SVM _svm;
_svm.load("/home/<usrname>/DEV/TrainSoftware/trained.cfg", "<trainSetName>");
I want this instead (basically, read the file somewhere else and pass its content as a string to the SVM):
cv::SVM _svm;
std::string trainedCfgContentStr="<get the content here>";
_svm.loadFromString(trainedCfgContentStr, "<trainSetName>"); // this method is desired
I couldn't find anything in OpenCV's docs or source suggesting that this is possible, but it wouldn't be the first OpenCV feature that exists without being documented or widely known. Of course, I could hack the OpenCV source and cross-compile it for each of my target platforms, but I'd like to avoid that since it is a hell of a lot of work; besides, I'm pretty convinced I'm not the first one with this problem.
All ideas (also unconventional) and/or hints are highly appreciated!
As long as you stick with the C++ API it's quite easy; FileStorage can read from memory:
string data_string; //containing xml/yml data
FileStorage fs( data_string, FileStorage::READ | FileStorage::MEMORY);
svm.read(fs.getFirstTopLevelNode()); // or the node with your trainset
(Unfortunately, this is not exposed to the Java API.)