android: ANativeWindow_lock api doesn't work for GLSurfaceView

I'm looking for a way to read the drawing buffer of an Android GLSurfaceView very quickly.
Even though I know glReadPixels can do this job, it is too slow for grabbing the drawing buffer; I want to read buffers while maintaining 30 fps.
The ANativeWindow API seems like what I am looking for (see: ANativeWindow api performance).
However, I couldn't find any example of the ANativeWindow API used with a GLSurfaceView.
My procedure:
1. Send the GLSurfaceView surface to JNI code (using GLSurfaceView.getHolder().getSurface())
2. Get the window handle using ANativeWindow_fromSurface
3. Set the window buffer geometry
4. Lock the surface and get the window buffer
5. Do something using this buffer
6. Unlock and post the window
I tried the JNI code below, based on the Android "BasicGLSurfaceView" sample.
JNIEXPORT void JNICALL Java_com_example_android_basicglsurfaceview_BasicGLSurfaceViewActivity_readSurface(JNIEnv* jenv, jobject obj, jobject surface)
{
    LOG_INFO("Java_com_example_android_basicglsurfaceview_BasicGLSurfaceViewActivity_readSurface");
    if (surface != 0) {
        ANativeWindow *window = ANativeWindow_fromSurface(jenv, surface);
        LOG_INFO("Got window %p", window);
        if (window != NULL)
        {
            int width = ANativeWindow_getWidth(window);
            int height = ANativeWindow_getHeight(window);
            LOG_INFO("Got window %d %d", width, height);
            ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);
            ANativeWindow_Buffer buffer;
            memset((void*)&buffer, 0, sizeof(buffer));
            int lockResult = -22;
            lockResult = ANativeWindow_lock(window, &buffer, NULL);
            if (lockResult == 0) {
                LOG_INFO("ANativeWindow_locked");
                ANativeWindow_unlockAndPost(window);
            }
            else
            {
                LOG_INFO("ANativeWindow_lock failed, error %d", lockResult);
            }
            LOG_INFO("Releasing window");
            ANativeWindow_release(window);
        }
    } else {
        LOG_INFO("surface is null");
    }
    return;
}
ANativeWindow_fromSurface, ANativeWindow_getWidth/getHeight, and ANativeWindow_setBuffersGeometry all work well,
but ANativeWindow_lock always fails, returning -22 (-EINVAL).
Error message:
[Surfaceview] connect: already connected(cur=1, req=2)
I tried this code in onDrawFrame of the Renderer, on the main thread, and in onSurfaceChanged, but it always returns -22.
Am I calling this API in the wrong place?
Is it possible to use ANativeWindow_lock with a GLSurfaceView?
Here is my example code.
Any help would be really appreciated~

Try doing this instead:
http://forums.arm.com/index.php?/topic/15782-glreadpixels/
Hopefully you can use an ANativeWindow for the buffer so you don't have to call gralloc directly...

I'm not sure that the memset operation is needed.
This worked for me:
ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
if (window != NULL) {
    unsigned char *data = ... // an RGBA(8888) image
    int32_t w = ANativeWindow_getWidth(window);
    int32_t h = ANativeWindow_getHeight(window);
    ANativeWindow_setBuffersGeometry(window, w, h, WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_Buffer buffer;
    int lockResult = ANativeWindow_lock(window, &buffer, NULL);
    if (lockResult == 0) {
        // write data (note: this assumes buffer.stride == w)
        memcpy(buffer.bits, data, w * h * 4);
        ANativeWindow_unlockAndPost(window);
    }
    // free data...
    ANativeWindow_release(window);
}
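One caveat with the single memcpy above: ANativeWindow_lock may hand back a buffer whose stride (in pixels) is larger than the requested width, in which case copying w * h * 4 bytes in one go skews the image or overruns a row. A hedged sketch of a row-by-row copy follows; the helper name and the test data are illustrative, not part of the original code, and in real use dst and dstStridePx would come from buffer.bits and buffer.stride:

```c
#include <stdint.h>
#include <string.h>

/* Copy a tightly packed RGBA8888 image into a destination whose rows
 * may be dstStridePx pixels apart (dstStridePx >= width), as in an
 * ANativeWindow_Buffer. */
static void copy_rgba_with_stride(uint8_t *dst, const uint8_t *src,
                                  int width, int height, int dstStridePx)
{
    for (int y = 0; y < height; y++) {
        /* destination rows start stride (not width) pixels apart */
        memcpy(dst + (size_t)y * dstStridePx * 4,
               src + (size_t)y * width * 4,
               (size_t)width * 4);
    }
}
```

When buffer.stride happens to equal the width, this degenerates to the single memcpy shown above.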

If you encounter the same error message,
[Surfaceview] connect: already connected
you can resolve it by calling GLSurfaceView.onPause() before calling into the native part.
You can then render via ANativeWindow_lock, write the frame buffer, and call ANativeWindow_unlockAndPost in native code.
// Java
SurfaceHolder holder = mGLSurfaceView.getHolder();
mPreviewSurface = holder.getSurface();
mGLSurfaceView.onPause();
camera.nativeSetPreviewDisplay(mPreviewSurface);
I call GLSurfaceView.onPause() only to release the EGL context; the GLSurfaceView is then rendered by native code.
At least I think this answers the question "is it possible to use ANativeWindow_lock for GLSurfaceView?"
Yes, it is.
But this way you can render the view only through the window buffer. You cannot use the OpenGL API, because GLSurfaceView.Renderer.onDrawFrame() is not called after the GLSurfaceView is paused. This still performs better than glReadPixels, but it is useless if you need the OpenGL API.
Shader calls must be made on the GL thread, that is, from onSurfaceChanged(), onSurfaceCreated(), or onDrawFrame().

Related

eglCreateWindowSurface() can only be called with an instance of Surface, SurfaceView, SurfaceTexture or SurfaceHolder

I am using RecordableSurfaceView:
https://github.com/spaceLenny/recordablesurfaceview/blob/master/recordablesurfaceview/src/main/java/com/uncorkedstudios/android/view/recordablesurfaceview/RecordableSurfaceView.java
On Android 6 (API 23) I get the following error. Is there a way to fix this?
eglCreateWindowSurface() can only be called with an instance of Surface, SurfaceView, SurfaceTexture or SurfaceHolder at the moment, this will be fixed later.
The potentially problematic code segment:
mEGLSurface = EGL14
        .eglCreateWindowSurface(mEGLDisplay, eglConfig, RecordableSurfaceView.this,
                surfaceAttribs, 0);
EGL14.eglMakeCurrent(mEGLDisplay, mEGLSurface, mEGLSurface, mEGLContext);
// guarantee to only report surface as created once GL context
// associated with the surface has been created, and call on the GL thread
// NOT the main thread but BEFORE the codec surface is attached to the GL context
if (mRendererCallbacksWeakReference != null
        && mRendererCallbacksWeakReference.get() != null) {
    mRendererCallbacksWeakReference.get().onSurfaceCreated();
}
mEGLSurfaceMedia = EGL14
        .eglCreateWindowSurface(mEGLDisplay, eglConfig, mSurface,
                surfaceAttribs, 0);
GLES20.glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
Is it enough to write a null check for mEGLSurface, i.e. (mEGLSurface != null), and be done?
In the code snippet of RecordableSurfaceView the second call to eglCreateWindowSurface passes in a mSurface variable which is initialized in doSetup via:
mSurface = MediaCodec.createPersistentInputSurface();
I would guess that your codec doesn't support this function and it's somehow returning null, which is causing the exception you see. Or perhaps it's being used by more than one codec or recorder instance?
The only other somewhat related question on SO I could find is: MediaCodec's Persistent Input Surface unsupported by Camera2 Session?
Can you at least clarify from the stack trace where in the library it's crashing? In other words, from which call of eglCreateWindowSurface?

OpenGL drawing on Android combining with Unity to transfer texture through frame buffer cannot work

I'm currently making an Android player plugin for Unity. The basic idea is that I play the video with MediaPlayer on Android, which provides a setSurface API; the Surface it receives is constructed from a SurfaceTexture, which in the end is bound to an OpenGL ES texture. In most other cases, like showing an image, we can just send this texture in the form of a pointer/id to Unity, call Texture2D.CreateExternalTexture there to generate a Texture2D object, and set that on a UI GameObject to render the picture. However, displaying video frames is a little different, since video playback on Android requires a texture of type GL_TEXTURE_EXTERNAL_OES while Unity only supports the universal type GL_TEXTURE_2D.
To solve the problem, I googled for a while and learned that I should adopt a technique called "render to texture". More precisely, I should generate two textures: one for the MediaPlayer and SurfaceTexture on Android to receive video frames, and another for Unity that should also have the picture data inside. The first one should be of type GL_TEXTURE_EXTERNAL_OES (let's call it the OES texture for short) and the second of type GL_TEXTURE_2D (let's call it the 2D texture). Both generated textures are empty in the beginning. When bound to the MediaPlayer, the OES texture is updated during video playback; we can then use a FrameBuffer to draw the content of the OES texture onto the 2D texture.
I've written a pure-Android version of this process, and it works pretty well when I finally draw the 2D texture on the screen. However, when I publish it as a Unity Android plugin and run the same code in Unity, no picture shows up. Instead, it only displays a preset color from glClearColor, which means two things:
The transfer process of OES texture -> FrameBuffer -> 2D texture is complete, and Unity does receive the final 2D texture, because glClearColor is called only when we draw the content of the OES texture into the FrameBuffer.
Something goes wrong in the drawing that happens after glClearColor, because we don't see the video frames. In fact, I also call glReadPixels after drawing and before unbinding the FrameBuffer, which reads data back from the FrameBuffer we bound, and it returns the single color value that matches the color we set in glClearColor.
To simplify the code I provide here, I'm going to draw a triangle to a 2D texture through a FrameBuffer. If we can figure out which part is wrong, we can easily solve the similar problem of drawing video frames.
The function will be called on Unity:
public int displayTriangle() {
    Texture2D texture = new Texture2D(UnityPlayer.currentActivity);
    texture.init();
    Triangle triangle = new Triangle(UnityPlayer.currentActivity);
    triangle.init();
    TextureTransfer textureTransfer = new TextureTransfer();
    textureTransfer.tryToCreateFBO();
    mTextureWidth = 960;
    mTextureHeight = 960;
    textureTransfer.tryToInitTempTexture2D(texture.getTextureID(), mTextureWidth, mTextureHeight);
    textureTransfer.fboStart();
    triangle.draw();
    textureTransfer.fboEnd();
    // Unity needs a native texture id to create its own Texture2D object
    return texture.getTextureID();
}
Initialization of 2D texture:
protected void initTexture() {
    int[] idContainer = new int[1];
    GLES30.glGenTextures(1, idContainer, 0);
    textureId = idContainer[0];
    Log.i(TAG, "texture2D generated: " + textureId);
    // texture.getTextureID() will return this textureId
    bindTexture();
    GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);
    unbindTexture();
}

public void bindTexture() {
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);
}

public void unbindTexture() {
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
}
draw() of Triangle:
public void draw() {
    float[] vertexData = new float[] {
            0.0f, 0.0f, 0.0f,
            1.0f, -1.0f, 0.0f,
            1.0f, 1.0f, 0.0f
    };
    vertexBuffer = ByteBuffer.allocateDirect(vertexData.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vertexData);
    vertexBuffer.position(0);
    GLES30.glClearColor(0.0f, 0.0f, 0.9f, 1.0f);
    GLES30.glClear(GLES30.GL_DEPTH_BUFFER_BIT | GLES30.GL_COLOR_BUFFER_BIT);
    GLES30.glUseProgram(mProgramId);
    GLES30.glEnableVertexAttribArray(aPosHandle);
    GLES30.glVertexAttribPointer(
            aPosHandle, 3, GLES30.GL_FLOAT, false, 12, vertexBuffer);
    GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 3);
}
vertex shader of Triangle:
attribute vec4 aPosition;
void main() {
gl_Position = aPosition;
}
fragment shader of Triangle:
precision mediump float;
void main() {
gl_FragColor = vec4(0.9, 0.0, 0.0, 1.0);
}
Key code of TextureTransfer:
public void tryToInitTempTexture2D(int texture2DId, int textureWidth, int textureHeight) {
    if (mTexture2DId != -1) {
        return;
    }
    mTexture2DId = texture2DId;
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, mTexture2DId);
    Log.i(TAG, "glBindTexture " + mTexture2DId + " to init for FBO");
    // make 2D texture empty
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, textureWidth, textureHeight, 0,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
    Log.i(TAG, "glTexImage2D, textureWidth: " + textureWidth + ", textureHeight: " + textureHeight);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
    fboStart();
    GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
            GLES30.GL_TEXTURE_2D, mTexture2DId, 0);
    Log.i(TAG, "glFramebufferTexture2D");
    int fboStatus = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
    Log.i(TAG, "fbo status: " + fboStatus);
    if (fboStatus != GLES30.GL_FRAMEBUFFER_COMPLETE) {
        throw new RuntimeException("framebuffer " + mFBOId + " incomplete!");
    }
    fboEnd();
}

public void fboStart() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, mFBOId);
}

public void fboEnd() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0);
}
And finally some code on Unity-side:
int textureId = plugin.Call<int>("displayTriangle");
Debug.Log("native textureId: " + textureId);
Texture2D triangleTexture = Texture2D.CreateExternalTexture(
        960, 960, TextureFormat.RGBA32, false, true, (IntPtr) textureId);
triangleTexture.UpdateExternalTexture(triangleTexture.GetNativeTexturePtr());
rawImage.texture = triangleTexture;
rawImage.color = Color.white;
Well, the code above does not display the expected triangle, only a blue background. I added glGetError after nearly every OpenGL call, but no errors are reported.
My Unity version is 2017.2.1. For the Android build, I turned off experimental multithreaded rendering; the other settings are all default (no texture compression, no development build, and so on). My app's minimum API level is 5.0 Lollipop and its target API level is 9.0 Pie.
I really need some help, thanks in advance!
Now I found the answer: if you want to do any drawing jobs in your plugin, you should do them at the native layer. So if you want to make an Android plugin, you should call OpenGL-ES APIs from JNI instead of the Java side. The reason is that Unity only allows drawing graphics on its rendering thread. If you simply call OpenGL-ES APIs from the Java side, as I did in the question description, they will actually run on the Unity main thread instead of the rendering thread. Unity provides a method, GL.IssuePluginEvent, to call your own functions on the rendering thread, but it needs native coding since this function requires a function pointer as its callback. Here is a simple example of using it:
At JNI side:
// you can copy these headers from https://github.com/googlevr/gvr-unity-sdk/tree/master/native_libs/video_plugin/src/main/jni/Unity
#include "IUnityInterface.h"
#include "UnityGraphics.h"
static void on_render_event(int event_type) {
    // do all of your jobs related to rendering, including initializing the context,
    // linking shaders, creating the program, finding handles, drawing and so on
}

// UnityRenderingEvent is an alias of void(*)(int) defined in UnityGraphics.h
UnityRenderingEvent get_render_event_function() {
    UnityRenderingEvent ptr = on_render_event;
    return ptr;
}

// notice you should return a jlong value to the Java side
extern "C" JNIEXPORT jlong JNICALL
Java_com_abc_xyz_YourPluginClass_getNativeRenderFunctionPointer(JNIEnv *env, jobject instance) {
    UnityRenderingEvent ptr = get_render_event_function();
    return (jlong) ptr;
}
At Android Java side:
class YourPluginClass {
    ...
    public native long getNativeRenderFunctionPointer();
    ...
}
At Unity side:
private void IssuePluginEvent(int pluginEventType) {
    long nativeRenderFuncPtr = Call_getNativeRenderFunctionPointer(); // call through the plugin class
    IntPtr ptr = (IntPtr) nativeRenderFuncPtr;
    GL.IssuePluginEvent(ptr, pluginEventType); // pluginEventType maps to the native function parameter event_type
}

void Start() {
    IssuePluginEvent(1); // let's assume 1 stands for initializing everything
    // get your texture2D id from the plugin, create a Texture2D object from it,
    // attach that to a GameObject, and start playing for the first time
}

void Update() {
    // call SurfaceTexture.updateTexImage in the plugin
    IssuePluginEvent(2); // let's assume 2 stands for transferring TEXTURE_EXTERNAL_OES to TEXTURE_2D through the FrameBuffer
    // call Texture2D.UpdateExternalTexture to update the GameObject's appearance
}
You still need to transfer the texture, and everything about it should happen at the JNI layer. But don't worry, it is nearly the same as what I did in the question description, only in a different language than Java, and there is a lot of material about this process, so you can surely make it.
Finally, let me state the key to solving this problem again: do your native stuff at the native layer and don't be addicted to pure Java... I'm totally surprised that there is no blog/answer/wiki telling us to just write our code in C++. Although there are open-source implementations like Google's gvr-unity-sdk that give a complete reference, you'll still doubt whether you could finish the task without writing any C++ code. Now we know that we can't. However, to be honest, I think Unity has the ability to make this process even easier.

Real-time image process and display using Android Camera2 api and ANativeWindow

I need to do some real-time image processing on the camera preview data, such as face detection using a C++ library, and then display the processed preview with the faces labeled on screen.
I have read http://nezarobot.blogspot.com/2016/03/android-surfacetexture-camera2-opencv.html and Eddy Talvala's answer to Android camera2 API - Display processed frame in real time. Following those two pages, I managed to build the app (without calling the face detection lib, only trying to display the preview using ANativeWindow), but every time I run it on a Google Pixel - 7.1.0 - API 25 running on Genymotion, the app crashes with the following log:
08-28 14:23:09.598 2099-2127/tau.camera2demo A/libc: Fatal signal 11 (SIGSEGV), code 2, fault addr 0xd3a96000 in tid 2127 (CAMERA2)
[ 08-28 14:23:09.599 117: 117 W/ ]
debuggerd: handling request: pid=2099 uid=10067 gid=10067 tid=2127
I googled this but found no answer.
The whole project is on GitHub: https://github.com/Fung-yuantao/android-camera2demo
Here is the key code (I think).
Code in Camera2Demo.java:
private void startPreview(CameraDevice camera) throws CameraAccessException {
    SurfaceTexture texture = mPreviewView.getSurfaceTexture();
    // to set PREVIEW size
    texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    surface = new Surface(texture);
    try {
        // to set request for PREVIEW
        mPreviewBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    mImageReader = ImageReader.newInstance(mImageWidth, mImageHeight, ImageFormat.YUV_420_888, 2);
    mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mHandler);
    mPreviewBuilder.addTarget(mImageReader.getSurface());
    // output Surface
    List<Surface> outputSurfaces = new ArrayList<>();
    outputSurfaces.add(mImageReader.getSurface());
    /*
    camera.createCaptureSession(
            Arrays.asList(surface, mImageReader.getSurface()),
            mSessionStateCallback, mHandler);
    */
    camera.createCaptureSession(outputSurfaces, mSessionStateCallback, mHandler);
}
private CameraCaptureSession.StateCallback mSessionStateCallback = new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            updatePreview(session);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) {
    }
};
private void updatePreview(CameraCaptureSession session)
        throws CameraAccessException {
    mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
    session.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
}
private ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // get the newest frame
        Image image = reader.acquireNextImage();
        if (image == null) {
            return;
        }
        // print image format
        int format = reader.getImageFormat();
        Log.d(TAG, "the format of captured frame: " + format);
        // HERE to call jni methods
        JNIUtils.display(image.getWidth(), image.getHeight(), image.getPlanes()[0].getBuffer(), surface);
        //ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        //byte[] bytes = new byte[buffer.remaining()];
        image.close();
    }
};
Code in JNIUtils.java:
import android.media.Image;
import android.view.Surface;
import java.nio.ByteBuffer;
public class JNIUtils {
    // TAG for JNIUtils class
    private static final String TAG = "JNIUtils";

    // Load native library.
    static {
        System.loadLibrary("native-lib");
    }

    public static native void display(int srcWidth, int srcHeight, ByteBuffer srcBuffer, Surface surface);
}
Code in native-lib.cpp:
#include <jni.h>
#include <string>
#include <android/log.h>
//#include <android/bitmap.h>
#include <android/native_window_jni.h>
#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "Camera2Demo", __VA_ARGS__)
extern "C" {
JNIEXPORT jstring JNICALL Java_tau_camera2demo_JNIUtils_display(
        JNIEnv *env,
        jobject obj,
        jint srcWidth,
        jint srcHeight,
        jobject srcBuffer,
        jobject surface) {
    /*
    uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(env->GetDirectBufferAddress(srcBuffer));
    if (srcLumaPtr == nullptr) {
        LOGE("srcLumaPtr null ERROR!");
        return NULL;
    }
    */
    ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
    ANativeWindow_acquire(window);
    ANativeWindow_Buffer buffer;
    ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight, 0 /* format unchanged */);
    if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
        LOGE("ANativeWindow_lock failed with error code: %d\n", err);
        ANativeWindow_release(window);
        return NULL;
    }
    memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
    ANativeWindow_unlockAndPost(window);
    ANativeWindow_release(window);
    return NULL;
}
}
After I commented out the memcpy, the app no longer crashes but displays nothing. So I guess the problem now turns into how to correctly use memcpy to copy the captured/processed buffer into buffer.bits.
Update:
I changed
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
to
memcpy(buffer.bits, srcLumaPtr, srcWidth * srcHeight * 4);
The app no longer crashes and starts to display, but it displays something strange.
As mentioned by yakobom, you're trying to copy a YUV_420_888 image directly into a RGBA_8888 destination (that's the default, if you haven't changed it). That won't work with just a memcpy.
You need to actually convert the data, and you need to ensure you don't copy too much - the sample code you have copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly). So when you copied, you were running way off the end of the buffer.
You also have to account for the stride provided at the Java level to correctly index into the buffer. This link from Microsoft has a useful diagram.
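To make that size mismatch concrete, here is a small sketch of the two sizes involved; the dimensions are illustrative (stride is often padded past the width by the camera HAL), and the helper names are mine, not from the original code:

```c
/* Bytes in one RGBA_8888 frame of the given size. */
static long rgba_bytes(int width, int height) {
    return (long)width * height * 4;
}

/* Approximate bytes in one YUV_420_888 frame: a full-resolution Y plane
 * plus quarter-resolution U and V planes, i.e. about 1.5 bytes per
 * pixel, with rows padded out to the plane's row stride. */
static long yuv420_bytes(int stride, int height) {
    return (long)stride * height * 3 / 2;
}
```

For a 640x480 preview with a 768-pixel stride, rgba_bytes gives 1,228,800 bytes while yuv420_bytes gives only 552,960, so the original memcpy reads more than twice the data that actually exists in the source buffer.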
If you just care about the luminance (so grayscale output is enough), just duplicate the luminance channel into the R, G, and B channels. The pseudocode would be roughly:
uint8_t *outPtr = buffer.bits;
for (size_t y = 0; y < height; y++) {
    uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
    for (size_t x = 0; x < width; x++) {
        *(outPtr++) = *rowPtr;
        *(outPtr++) = *rowPtr;
        *(outPtr++) = *rowPtr;
        *(outPtr++) = 255; // alpha for RGBA_8888
        ++rowPtr;
    }
}
You'll need to read srcLumaStride from the Image object (the row stride of the first Plane) and pass it down via JNI as well.
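That grayscale pseudocode can be written out as a self-contained C routine. This is a sketch under my own assumptions: the function name is hypothetical, and I also handle the output row stride (buffer.stride), which the pseudocode leaves out:

```c
#include <stdint.h>
#include <stddef.h>

/* Expand a Y (luminance) plane with row stride srcStride bytes into a
 * grayscale RGBA_8888 buffer whose rows are dstStridePx pixels apart,
 * as in an ANativeWindow_Buffer (dst = buffer.bits,
 * dstStridePx = buffer.stride). */
static void luma_to_rgba(uint8_t *dst, const uint8_t *srcLuma,
                         size_t width, size_t height,
                         size_t srcStride, size_t dstStridePx)
{
    for (size_t y = 0; y < height; y++) {
        const uint8_t *rowPtr = srcLuma + y * srcStride;
        uint8_t *outPtr = dst + y * dstStridePx * 4;
        for (size_t x = 0; x < width; x++) {
            uint8_t l = rowPtr[x];
            *(outPtr++) = l;   /* R */
            *(outPtr++) = l;   /* G */
            *(outPtr++) = l;   /* B */
            *(outPtr++) = 255; /* alpha for RGBA_8888 */
        }
    }
}
```

Because both strides are parameters, the same routine works whether or not the Y plane or the window buffer is padded.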
Just to put it as an answer, to avoid a long chain of comments: such a crash may be due to an improper number of bytes being copied by the memcpy (UPDATE following other comments: in this case it was due to a forbidden direct copy).
If you are now getting a weird image, that is probably a separate issue; I would suspect the image format, so try modifying that.

android: how to Update a SurfaceView from the ndk by updating the Surface buffer?

I am working on an image processing project. I have a SurfaceView where I want to show "images" from the JNI side.
I followed this blog, the NDK ANativeWindow API instructions, to get the pointer to the buffer and update it from the C side.
I got the code to run, but my SurfaceView is not updating (not showing any image...). Also, the surfaceChanged callback is not called when the buffer is updated.
Here is what I am doing:
JNI side:
/*
 * Class:     com_example_myLib
 * Method:    renderImage
 * Signature: (JI)V
 */
JNIEXPORT void JNICALL Java_com_example_myLib_renderImage
        (JNIEnv *mJNIEnv, jobject mjobject, jobject javaSurface) {
#ifdef DEBUG_I
    LOGI("renderImage attempt !");
#endif
    // load an ipl image. code tested and works well with ImageView.
    IplImage *iplimage = loadMyIplImage();
    int iplImageWidth = iplimage->width;
    int iplImageHeight = iplimage->height;
    char *javalBmpPointer = malloc(iplimage->width * iplimage->height * 4);
    int _javaBmpRowBytes = iplimage->width * 4;
    // code tested and works well with ImageView.
    copyIplImageToARGBPointer(iplimage, javalBmpPointer, _javaBmpRowBytes,
            iplimage->width, iplimage->height);
#ifdef DEBUG_I
    LOGI("ANativeWindow_fromSurface");
#endif
    ANativeWindow *window = ANativeWindow_fromSurface(mJNIEnv, javaSurface);
#ifdef DEBUG_I
    LOGI("Got window %p", window);
#endif
    if (window != 0) {
        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, NULL) == 0) {
#ifdef DEBUG_I
            LOGI("ANativeWindow_lock %p", &buffer);
#endif
            memcpy(buffer.bits, javalBmpPointer, iplimage->width * iplimage->height * 4); // ARGB_8888
            ANativeWindow_unlockAndPost(window);
        }
        ANativeWindow_release(window);
    }
}
Java side:
// every time I want to reload the image:
renderImage(mySurfaceView.getHolder().getSurface());
Thanks for your time and help!
One of the most common problems leading to a blank SurfaceView is setting a background for the View. The View contents are intended to be a transparent "hole", used only for layout. The Surface contents are on a separate layer, below the View layer, so they are only visible if the View remains transparent.
The View background should generally be disabled, and nothing drawn on the View, unless you want some sort of "mask" effect (like rounded corners).

android: how to get display resolution in native code

Is it possible in Android to get the display dimensions in native code?
I see that there are EGL-related functions:
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
surface = eglCreateWindowSurface(display, config, engine->app->window, NULL);
eglQuerySurface(display, surface, EGL_WIDTH, &w);
eglQuerySurface(display, surface, EGL_HEIGHT, &h);
But I need the dimensions without creating any surface.
You can get the window size while processing the APP_CMD_INIT_WINDOW "app command" like this in your native activity:
static void engine_handle_cmd(struct android_app* app, int32_t cmd) {
    struct engine* engine = (struct engine*)app->userData;
    switch (cmd) {
        case APP_CMD_INIT_WINDOW:
            if (engine->app->window != NULL) {
                int width = ANativeWindow_getWidth(engine->app->window);
                int height = ANativeWindow_getHeight(engine->app->window);
            }
            break;
    }
}
Check out main.c in the native-activity sample NDK project if you want to see how to set up the engine struct and the engine_handle_cmd callback.
Below is the relevant code taken from the Google source for screen capture: http://code.google.com/p/androidscreenshot/source/browse/trunk/Binary/screenshot.c?r=3
The vinfo structure contains the fields xres and yres.
int framebufferHandle = -1;
struct fb_var_screeninfo vinfo;
framebufferHandle = open("/dev/graphics/fb0", O_RDONLY);
if (framebufferHandle < 0)
{
    printf("failed to open /dev/graphics/fb0\n");
    // handle error
}
if (ioctl(framebufferHandle, FBIOGET_VSCREENINFO, &vinfo) < 0)
{
    printf("failed to open ioctl\n");
    // handle error
}
fcntl(framebufferHandle, F_SETFD, FD_CLOEXEC);
If you're looking for pixel values, the heightPixels and widthPixels fields of DisplayMetrics are probably what you want. See the developer docs for more info:
http://developer.android.com/reference/android/util/DisplayMetrics.html
