I am developing an Android application based on Qt 5.4/Qt Quick 2 and OpenGL ES 2.
I installed the textureinsgnode example to check how well it runs on my target device (as my app makes massive use of FBOs) and I am getting 18 FPS. I checked on a Samsung SM-T535 and I get around 47/48 FPS. On the target device my application appears to freeze whenever I try to perform any user action.
I checked the extensions available on both devices (my target and the Samsung tablet) with:
QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) {
    // Dump the extensions reported by the current context.
    QSet<QByteArray> extensions = QOpenGLContext::currentContext()->extensions();
    foreach (const QByteArray &extension, extensions) {
        qDebug() << extension;
    }

    // Create the FBO the scene graph will render into.
    QOpenGLFramebufferObjectFormat format;
    format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    format.setSamples(4);
    return new QOpenGLFramebufferObject(size, format);
}
I am getting a very short list of extensions on the target device compared with the Samsung tablet's list at the same point:
"GL_OES_rgb8_rgba8"
"GL_EXT_multisampled_render_to_texture" "GL_OES_packed_depth_stencil"
"GL_ARM_rgba8" "GL_OES_vertex_half_float" "GL_OES_EGL_image"
"GL_EXT_discard_framebuffer" "GL_OES_compressed_ETC1_RGB8_texture"
"GL_OES_depth_texture" "GL_KHR_debug" "GL_ARM_mali_shader_binary"
"GL_OES_depth24" "GL_EXT_texture_format_BGRA8888"
"GL_EXT_blend_minmax" "GL_EXT_shader_texture_lod"
"GL_OES_EGL_image_external" "GL_EXT_robustness" "GL_OES_texture_npot"
"GL_OES_depth_texture_cube_map" "GL_ARM_mali_program_binary"
"GL_EXT_debug_marker" "GL_OES_get_program_binary"
"GL_OES_standard_derivatives" "GL_OES_EGL_sync"
I installed an NME (3.4.4, so based on OpenGL ES 1.1) application (BunnyMark) and I get around 45-48 FPS.
Based on these tests, I think the target device has some problem with OpenGL ES 2, but I have not been able to find anywhere (I have been googling) which OpenGL ES 2 extensions Qt 5.4 needs to work properly.
The question: which OpenGL ES 2 extensions does Qt 5.4 require, and is there a way to check for them?
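For completeness, I can also probe for individual extensions with QOpenGLContext::hasExtension(); here is a minimal sketch (the extension names below are just examples I picked, not a list of what Qt requires):
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QByteArray>
#include <QList>
#include <QDebug>

void dumpInterestingExtensions() {
    QOpenGLContext *ctx = QOpenGLContext::currentContext();
    if (!ctx)
        return; // must be called with a current context, e.g. from the render thread

    QList<QByteArray> candidates;
    candidates << "GL_OES_packed_depth_stencil"
               << "GL_OES_rgb8_rgba8"
               << "GL_EXT_multisampled_render_to_texture";
    foreach (const QByteArray &name, candidates)
        qDebug() << name << (ctx->hasExtension(name) ? "supported" : "missing");

    // The reported version/renderer strings also help when comparing devices.
    QOpenGLFunctions *f = ctx->functions();
    qDebug() << reinterpret_cast<const char *>(f->glGetString(GL_VERSION))
             << reinterpret_cast<const char *>(f->glGetString(GL_RENDERER));
}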
Related
I am trying to extract the image directly from the GPU (without using AcquireCameraImageBytes()) for performance reasons (a Samsung S9 can't reach 10 fps) and to support the Xiaomi Pocophone I bought. I use the TextureReader class included in the ComputerVision example, but OnImageAvailableCallback is never called and the log shows this error during initialization:
camera_utility: Failed to create OpenGL frame buffer.
camera_utility: Failed to create OpenGL frame buffer.
I set a couple of breakpoints inside libarcore_camera_utility.so and I can see that glCheckFramebufferStatus inside TextureReader::create returns 0 (so an error occurred), but glGetError() doesn't return any error. How can this problem be resolved?
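For reference, this is roughly what I assume the check inside the library looks like in plain GLES2 code (an illustrative sketch only, not the actual TextureReader implementation). The point that confuses me is that glCheckFramebufferStatus() returns 0 only when the call itself fails (for example, as far as I understand, when there is no current GL context on the calling thread), which is different from returning an "incomplete" status:
#include <GLES2/gl2.h>

// Illustrative only -- not taken from libarcore_camera_utility.so.
static bool createFbo(GLuint texture, GLuint *fboOut) {
    glGenFramebuffers(1, fboOut);
    glBindFramebuffer(GL_FRAMEBUFFER, *fboOut);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status == 0) {
        // 0 means the query itself failed (e.g. no current context on this
        // thread); glGetError() may not report anything useful here.
        return false;
    }
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        // e.g. GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT or GL_FRAMEBUFFER_UNSUPPORTED.
        return false;
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return true;
}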
I have developed an Android game which is played by many people. One user out of 100-200 faces an Exception that I cannot make any sense of.
I use a RenderTexture which throws the following Exception when I try to initialize it:
Fatal Exception: org.andengine.opengl.exception.RenderTextureInitializationException
org.andengine.opengl.exception.GLFrameBufferException: GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT
It works on 99% of all devices. The init-method looks like this:
public void init(final GLState pGLState) throws GLFrameBufferException, GLException {
    this.savePreviousFramebufferObjectID(pGLState);

    try {
        this.loadToHardware(pGLState);
    } catch (final IOException e) {
        /* Can not happen. */
    }

    /* The texture to render to must not be bound. */
    pGLState.bindTexture(0);

    /* Generate FBO. */
    this.mFramebufferObjectID = pGLState.generateFramebuffer();
    pGLState.bindFramebuffer(this.mFramebufferObjectID);

    /* Attach texture to FBO. */
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, this.mHardwareTextureID, 0);

    try {
        pGLState.checkFramebufferStatus();
    } catch (final GLException e) {
        this.destroy(pGLState);
        throw new RenderTextureInitializationException(e);
    } finally {
        this.restorePreviousFramebufferObjectID(pGLState);
    }

    this.mInitialized = true;
}
It seems like something is wrong with the framebuffer status...
Update
A list of phones where the crash happened so far:
Sony - Sony Tablet S
TCT - ALCATEL ONE TOUCH 5020A
TCT - ALCATEL ONE TOUCH 6030N
VNPT Technology - VNPT Technology Smart Box
Q-Smart - S32
LGE - LG-E465g
LGE - LG-D682TR
LGE - LG-E451g
LGE - LG-D686
LGE - LG-E470f
HUAWEI - MediaPad 7 Youth
unknown - Bliss Pad B9712KB
samsung - GT-P5110
samsung - GT-I9505
samsung - Galaxy Nexus
samsung - GT-P3110
samsung - GT-P5100
samsung - GT-P3100
samsung - GT-I9105P
samsung - GT-I9082
samsung - GT-I9082L
samsung - GT-I9152
samsung - GT-P3113
E1A - E1A
LNV - LN1107
motorola - XT920
motorola - XT915
asus - ME172V
Based on the code you linked to, it looks like you are trying to render to an RGBA8888 texture. This isn't always available on OpenGL ES 2.0 devices, as it dates back to a time when most devices were using 16-bit displays.
The only mandatory formats in OpenGL ES 2.x are documented in the specification under the error code you are getting ...
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glCheckFramebufferStatus.xml
RGBA8 targets are only available if this extension is exposed:
https://www.khronos.org/registry/gles/extensions/OES/OES_rgb8_rgba8.txt
... so it's highly likely that some users are using an older device with a GPU which isn't exposing this extension. To check if the extension is supported, use glGetString (see below) with GL_EXTENSIONS:
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glGetString.xml
... and see if the OES_rgb8_rgba8 value is present in this list. If it isn't, then your only real option is to fall back to something else in the mandatory ES 2.x format set, such as RGB565 or RGB5_A1.
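For example, a rough native-side sketch of that check and fallback (in AndEngine the same query would go through GLES20.glGetString(GLES20.GL_EXTENSIONS)); the function name and parameters here are just placeholders:
#include <GLES2/gl2.h>
#include <cstring>

// Pick a render-target texture format based on whether GL_OES_rgb8_rgba8
// is advertised. Sketch only.
static void allocateRenderTargetTexture(GLuint tex, int width, int height) {
    const char *ext = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
    const bool hasRgba8 = ext && std::strstr(ext, "GL_OES_rgb8_rgba8") != NULL;

    glBindTexture(GL_TEXTURE_2D, tex);
    if (hasRgba8) {
        // 32-bit RGBA target.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    } else {
        // Fall back to a mandatory 16-bit format such as RGB565
        // (or RGB5_A1 if an alpha channel is needed).
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
    }
}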
I want to control a PTZ camera from Android. I actually do this in JNI using the Linux V4L2 API; the camera is connected directly to the Android TV box's USB port. Below is the code:
struct v4l2_ext_control xctrls[1];
struct v4l2_ext_controls ctrls;
memset(xctrls, 0, sizeof xctrls);
memset(&ctrls, 0, sizeof ctrls);
xctrls[0].id = V4L2_CID_PAN_ABSOLUTE;
xctrls[0].value = 20;
ctrls.ctrl_class = V4L2_CTRL_CLASS_CAMERA;
ctrls.count = 1;
ctrls.controls = xctrls;
//xioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
int result = ioctl(fd, VIDIOC_S_CTRL, &ctrls);
//LOGE("Cannot identify:%d , %d, %s", result, errno, strerror (errno));
LOGE("Cannot open '%d': %d, %s", result, errno, strerror (errno));
and it returns "invalid argument". Can anyone tell me which argument is wrong, or whether my code is incorrect?
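For reference, the two ioctls take different argument structures; here is a sketch of how I understand they are normally called from userspace (assuming the driver implements V4L2_CID_PAN_ABSOLUTE; I have not verified this on the TV box):
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <string.h>

static int set_pan_absolute(int fd, int pan) {
    /* VIDIOC_S_CTRL expects a struct v4l2_control... */
    struct v4l2_control ctrl;
    memset(&ctrl, 0, sizeof ctrl);
    ctrl.id = V4L2_CID_PAN_ABSOLUTE;
    ctrl.value = pan;
    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) == 0)
        return 0;

    /* ...while VIDIOC_S_EXT_CTRLS expects a struct v4l2_ext_controls,
     * as in the commented-out xioctl() call above. */
    struct v4l2_ext_control xctrls[1];
    struct v4l2_ext_controls ctrls;
    memset(xctrls, 0, sizeof xctrls);
    memset(&ctrls, 0, sizeof ctrls);
    xctrls[0].id = V4L2_CID_PAN_ABSOLUTE;
    xctrls[0].value = pan;
    ctrls.ctrl_class = V4L2_CTRL_CLASS_CAMERA;
    ctrls.count = 1;
    ctrls.controls = xctrls;
    return ioctl(fd, VIDIOC_S_EXT_CTRLS, &ctrls);
}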
I have solved this problem. You actually need to modify the UVC (USB Video Class) module in the Android source code; UVC lives in the kernel part of the Android source tree.
What's more, you have to cooperate with the camera vendor, because they have deep insight into the firmware of PTZ cameras, and Android's UVC version has to match the camera firmware, the data length and the control type (absolute or relative).
Furthermore, the Android kernel version is generally 3.10. When you drive the camera with absolute control it may move in one direction but not the other, because the Android UVC parameters for absolute control are unsigned: you should change them to signed parameters.
Relative control is a little more complicated, because you have to add relative control support to the Android source code yourself; the 3.10 Android kernel does not support it. You can get the Linux kernel patch that adds relative movements.
I am trying to write an Android ARM kernel module in which I need to do a virt_to_phys translation of a buffer allocated with kmalloc.
I know that I can use the virt_to_phys macro for this task. However, I don't have the full kernel source for the device, and because virt_to_phys is a macro
I can't get a function address from kallsyms to use in my module, so I would like to find another way to do this.
I've been trying to do it with the MMU (the ATS1Cxx and PAR registers) to perform the V=>P translation, since I am working on an ARMv7 processor, but I couldn't make it work.
This is my test code:
int hello_init_module(void) {
    printk("Virtual MEM: 0x%X\n", allocated_buf);

    // Trying to get the physical address through the MMU.
    asm("\t mcr p15, 0, %[value], c7, c8, 2\n"
        "\t isb\n"
        "\t mrc p15, 0, %[result], c7, c4, 0\n"
        : [result] "=r" (pa) : [value] "r" (allocated_buf));
    printk("Physical using MMU : %x\n", pa);

    // This shows the right address, but I want to do it without calling the macro.
    printk("Physical using virt_to_phys: 0x%X\n", virt_to_phys((int *) allocated_buf));

    return 0;
}
What I am actually doing is developing a module that is intended to work on two devices with the same 3.4.10 kernel but different memory architectures.
I can make the module work, as they have the same vermagic and function CRCs, so the module loads perfectly on both devices.
The main problem is that, because of the differences in their architecture, PAGE_OFFSET and PHYS_OFFSET differ between them.
So I am wondering if there is a way to perform the translation without defining these values as constants in my module. That is what I tried with the MMU-based V=>P translation, but it hasn't worked in my case; it always returns 0x1F.
According to cat /proc/cpuinfo, I am working with:
Processor : ARMv7 Processor rev 0 (v7l)
processor : 0
Features : swp half thumb fastmult vfp edsp neon vfpv3 tls
CPU implementer : 0x51
CPU architecture: 7
If it's not possible to use the MMU as an alternative to virt_to_phys, does somebody know another way to do it?
I'm optimizing my app, a face-detection algorithm that uses OpenCL and OpenGL. The OpenGL API is used to read/write images. I want to create one context with multiple devices (two devices: a GPU and a CPU) for CPU/GPU co-processing, but I can't create a CPU device. I suspect the cause is the contextProperties I use for OpenGL sharing (shown below).
What should I do to use multiple devices together with OpenGL?
cl_context_properties contextProperties[] = {
    CL_GL_CONTEXT_KHR, (cl_context_properties) eglGetCurrentContext(),
    CL_EGL_DISPLAY_KHR, (cl_context_properties) eglGetCurrentDisplay(),
    CL_CONTEXT_PLATFORM, (cl_context_properties) firstPlatformId,
    0 }; // If CL_DEVICE_TYPE_ALL is set, the program can't execute.

context = clCreateContextFromType(contextProperties, CL_DEVICE_TYPE_GPU,
                                  NULL, NULL, &errNum); // Creating a context for a GPU device.
if (errNum != CL_SUCCESS) {
    LOGE("[LYW] Could not create GPU context, trying CPU...");
    context = clCreateContextFromType(contextProperties,
                                      CL_DEVICE_TYPE_CPU, NULL, NULL, &errNum); // Creating a context for a CPU device.
    if (errNum != CL_SUCCESS) {
        LOGE("[LYW] Failed to create an OpenCL GPU or CPU context.");
        return NULL;
    }
}
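As a side note, this is the kind of probe I can use to see whether the platform exposes any CPU device at all before trying to create a context for it (assuming firstPlatformId comes from clGetPlatformIDs):
#include <CL/cl.h>

// Returns the number of CPU devices the platform reports; on most Android
// OpenCL implementations this is 0, since only the GPU is exposed.
static cl_uint countCpuDevices(cl_platform_id platform) {
    cl_uint numDevices = 0;
    cl_int err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 0, NULL, &numDevices);
    if (err != CL_SUCCESS) // CL_DEVICE_NOT_FOUND simply means no CPU device
        return 0;
    return numDevices;
}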
As noted above, OpenCL CPU devices are not supported by any vendor on Android. As such, if you have an Android device that does support OpenCL on the GPU, you could use one of the following options to integrate the CPU:
Use a simple thread pool to manage CPU threads (see the sketch after this list)
Use Qualcomm's MARE tasking runtime*, which provides an integrated view of the CPU and GPU, including access to OpenCL within a single framework.
Alternatively, you could use RenderScript, which can provide CPU and GPU support but is limited in functionality compared to OpenCL and/or MARE.
More information on Qualcomm's MARE can be found here: https://developer.qualcomm.com/mobile-development/maximize-hardware/parallel-computing-mare
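For the first option, a minimal sketch of a plain C++11 thread pool (nothing Android- or MARE-specific, just a starting point):
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Fixed-size pool: submit() enqueues std::function<void()> jobs and the
// worker threads drain the queue. Illustrative only.
class ThreadPool {
public:
    explicit ThreadPool(unsigned n) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            stop_ = true;
        }
        cv_.notify_all();
        for (std::thread &w : workers_) w.join();
    }
    void submit(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }
private:
    void workerLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return stop_ || !jobs_.empty(); });
                if (stop_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()> > jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool stop_ = false;
};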