Using glMapBufferRange to read data from a buffer - Android

I want to use glMapBufferRange to read out data. The way I use it is the following:
bindBuffer(vboIdx);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, GL_MAP_READ_BIT);
int err = glGetError();
assert(ptr);
memcpy(dst, ptr, size);
glUnmapBuffer(GL_ARRAY_BUFFER);
unbindBuffer();
However, ptr seems to be NULL. As stated in the Khronos documentation for glMapBufferRange, the function returns NULL if an error occurs. Therefore, I checked for an error with glGetError(), but the error I get is 0, which is the same as GL_NO_ERROR.
What am I doing wrong?

Honestly, I hadn't heard about the OpenGL context until now (I just read up on it); I started writing my code based on Android's sensor-graph sample project.
To answer your question: I haven't had a problem so far, and I guess yes, but I'm not 100% sure. Before requesting a data readout I write data to the buffer:
if(dataSize == 0) {
    return;
}
bindBuffer(vboIdx);
if(dataSize > vertexBufSize[vboIdx] - offset) {
    glBufferData(GL_ARRAY_BUFFER, dataSize, data, GL_STREAM_DRAW);
    vertexBufSize[vboIdx] = dataSize;
}
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, dataSize, GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
int err = glGetError();
assert(ptr);
if(data != nullptr) {
    memcpy(ptr, data, dataSize);
}
glUnmapBuffer(GL_ARRAY_BUFFER);
unbindBuffer();
This is where I call glBufferData()
Java Side:
mView = (GLSurfaceView) frameLayout.findViewById(R.id.plot_view);
mView.setEGLContextClientVersion(3);
mView.setEGLConfigChooser(true);
if(LOG) Log.d(TAG, "onCreateView(): get ID: " + mView.getId());
isStreaming = true;
mView.setRenderer(new GLSurfaceView.Renderer() {
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        mLiveViewJNI.surfaceCreated();
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        mLiveViewJNI.surfaceChanged(width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        mLiveViewJNI.drawFrame();
    }
});
@Matic Oblak was right: the context was actually the problem. As long as I only touched the buffers via JNI from within onSurfaceCreated, onSurfaceChanged, or onDrawFrame, everything was fine.
But as soon as I tried to access them from outside, for instance via mLiveViewJNI.dummyCall, I ran into that problem and everything returned 0 or NULL.
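In case it helps anyone else: one way to make such calls safe is to hand them to the GLSurfaceView's render thread, where the EGL context is current. A minimal sketch (dummyCall is the JNI entry point mentioned above):

mView.queueEvent(new Runnable() {
    @Override
    public void run() {
        // Runs on the GL thread, so a context is current here
        // (eglGetCurrentContext() != EGL_NO_CONTEXT) and glMapBufferRange can succeed.
        mLiveViewJNI.dummyCall();
    }
});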

Related

Reading from GL_TEXTURE_EXTERNAL_OES to GL_TEXTURE_2D has performance issues and glitches

I need to send data from GL_TEXTURE_EXTERNAL_OES to a plain GL_TEXTURE_2D (rendering an image from an Android player to a Unity texture), and currently I do it by reading the pixels back from a framebuffer with the source texture attached. This works correctly on my OnePlus 5, but on phones like the Xiaomi Note 4, Mi A2, and others the image has glitches (for example, it comes out very green). There are also performance issues, because the process runs every frame, and the more pixels there are to read the worse the performance gets (even my phone drops to low fps at 4K resolution). Any idea how to optimize this process or do it some other way?
Thanks and best regards!
GLuint FramebufferName;
glGenFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    LOGD("%s", "Error: Could not setup frame buffer.");
}
unsigned char* data = new unsigned char[g_SourceWidth * g_SourceHeight * 4];
glReadPixels(0, 0, g_SourceWidth, g_SourceHeight, GL_RGBA, GL_UNSIGNED_BYTE, data);
glBindTexture(GL_TEXTURE_2D, g_TexturePointer);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, g_SourceWidth, g_SourceHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glDeleteFramebuffers(1, &FramebufferName);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
delete[] data;
UPDATE.
The function that contains this code, and the function that calls it from the Unity side:
static void UNITY_INTERFACE_API OnRenderEvent(int eventID) { ... }
extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UMDGetRenderEventFunc()
{
return OnRenderEvent;
}
which is called from the Unity update loop like this:
[DllImport("RenderingPlugin")]
static extern IntPtr UMDGetRenderEventFunc();
IEnumerator UpdateVideoTexture()
{
    while (true)
    {
        ...
        androidPlugin.UpdateSurfaceTexture();
        GL.IssuePluginEvent(UMDGetRenderEventFunc(), 1);
    }
}
And the Android plugin does this on its side (surfaceTexture is the SurfaceTexture wrapping the external texture that ExoPlayer renders the video into):
public void exportUpdateSurfaceTexture() {
    synchronized (this) {
        if (this.mIsStopped) {
            return;
        }
        surfaceTexture.updateTexImage();
    }
}
On the C++ side:
You're allocating and freeing the pixel buffer every frame with new unsigned char[g_SourceWidth * g_SourceHeight * 4] and delete[] data, and that's expensive depending on the texture size. Allocate the buffer once, then re-use it.
One way to do this is to have static variables on the C++ side hold the texture information, plus a function to initialize those variables:
static void* pixelData = nullptr;
static int _x;
static int _y;
static int _width;
static int _height;
void initPixelData(void* buffer, int x, int y, int width, int height) {
    pixelData = buffer;
    _x = x;
    _y = y;
    _width = width;
    _height = height;
}
Then rewrite your capture function to drop the new unsigned char[g_SourceWidth * g_SourceHeight * 4] and delete[] data calls and use the static variables instead.
static void UNITY_INTERFACE_API OnRenderEvent(int eventID)
{
    if (pixelData == nullptr) {
        //Debug::Log("Pointer is null", Color::Red);
        return;
    }

    GLuint FramebufferName;
    glGenFramebuffers(1, &FramebufferName);
    glBindFramebuffer(GL_FRAMEBUFFER, FramebufferName);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, g_ExtTexturePointer, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        LOGD("%s", "Error: Could not setup frame buffer.");
    }

    glReadPixels(_x, _y, _width, _height, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);

    glBindTexture(GL_TEXTURE_2D, g_TexturePointer);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, _width, _height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);

    glDeleteFramebuffers(1, &FramebufferName);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
}
extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
UMDGetRenderEventFunc()
{
return OnRenderEvent;
}
On the C# side:
[DllImport("RenderingPlugin", CallingConvention = CallingConvention.Cdecl)]
public static extern void initPixelData(IntPtr buffer, int x, int y, int width, int height);
[DllImport("RenderingPlugin", CallingConvention = CallingConvention.StdCall)]
private static extern IntPtr UMDGetRenderEventFunc();
Create the Texture information, pin it and send the pointer to C++:
int width = 500;
int height = 500;
//Where Pixel data will be saved
byte[] screenData;
//Where handle that pins the Pixel data will stay
GCHandle pinHandler;
//Used to test the color
public RawImage rawImageColor;
private Texture2D texture;
// Use this for initialization
void Awake()
{
    Resolution res = Screen.currentResolution;
    width = res.width;
    height = res.height;

    //Allocate array to be used
    screenData = new byte[width * height * 4];
    texture = new Texture2D(width, height, TextureFormat.RGBA32, false, false);

    //Pin the Array so that it doesn't move around
    pinHandler = GCHandle.Alloc(screenData, GCHandleType.Pinned);

    //Register the screenshot and pass the array that will receive the pixels
    IntPtr arrayPtr = pinHandler.AddrOfPinnedObject();
    initPixelData(arrayPtr, 0, 0, width, height);

    StartCoroutine(UpdateVideoTexture());
}
Then, to update the texture, see the sample below. There are two methods to update it. If you run into issues with Method 1, comment out the two lines that use texture.LoadRawTextureData and texture.Apply and un-comment the Method 2 code, which uses ByteArrayToColor, texture.SetPixels and texture.Apply:
IEnumerator UpdateVideoTexture()
{
    while (true)
    {
        //Take screenshot of the screen
        GL.IssuePluginEvent(UMDGetRenderEventFunc(), 1);

        //Update Texture Method1
        texture.LoadRawTextureData(screenData);
        texture.Apply();

        //Update Texture Method2. Use this if the Method1 above crashes
        /*
        ByteArrayToColor();
        texture.SetPixels(colors);
        texture.Apply();
        */

        //Test it by assigning the texture to a raw image
        rawImageColor.texture = texture;

        //Wait for a frame
        yield return null;
    }
}
Color[] colors = null;

void ByteArrayToColor()
{
    if (colors == null)
    {
        colors = new Color[screenData.Length / 4];
    }

    for (int i = 0; i < screenData.Length; i += 4)
    {
        //Color expects float components in the 0..1 range, so scale the raw bytes
        colors[i / 4] = new Color(screenData[i] / 255f,
                screenData[i + 1] / 255f,
                screenData[i + 2] / 255f,
                screenData[i + 3] / 255f);
    }
}
Unpin the array when done or when the script is about to be destroyed:
void OnDisable()
{
    //Unpin the array when disabled
    pinHandler.Free();
}
Calling glReadPixels is always going to be slow; CPUs are not good at bulk data transfer.
Ideally you'd manage to convince Unity to accept an external image handle and do the whole process zero-copy, but failing that I would use GPU render-to-texture and a shader to transfer from the external image to the RGB surface.
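For illustration only, a rough sketch of that render-to-texture idea using the Android Java GL bindings (the principle is the same in the C++ plugin). extTexId, dstTexId and drawFullScreenQuad() are assumed names, and the shader program setup is omitted:

private static final String FRAGMENT_OES =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() { gl_FragColor = texture2D(sTexture, vTexCoord); }\n";

void blitExternalToTexture2D(int extTexId, int dstTexId, int width, int height) {
    int[] fbo = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    // Attach the destination GL_TEXTURE_2D (not the external texture) as the color buffer.
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, dstTexId, 0);
    GLES20.glViewport(0, 0, width, height);
    // Sample the external texture with a program built from FRAGMENT_OES and
    // draw a full-screen quad; the copy stays entirely on the GPU.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, extTexId);
    drawFullScreenQuad(); // assumed helper that issues the actual draw call
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glDeleteFramebuffers(1, fbo, 0);
}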

Surface Texture object is not getting the frames from a Surface Class

On the one hand, I have a Surface class which, when instantiated, automatically initializes a new thread and starts grabbing frames from a streaming source via native code based on FFMPEG. Here are the main parts of the code for the aforementioned Surface class:
public class StreamingSurface extends Surface implements Runnable {
...
public StreamingSurface(SurfaceTexture surfaceTexture, int width, int height) {
super(surfaceTexture);
screenWidth = width;
screenHeight = height;
init();
}
public void init() {
mDrawTop = 0;
mDrawLeft = 0;
mVideoCurrentFrame = 0;
this.setVideoFile();
this.startPlay();
}
public void setVideoFile() {
// Initialise FFMPEG
naInit("");
// Get stream video res
int[] res = naGetVideoRes();
mDisplayWidth = (int)(res[0]);
mDisplayHeight = (int)(res[1]);
// Prepare Display
mBitmap = Bitmap.createBitmap(mDisplayWidth, mDisplayHeight, Bitmap.Config.ARGB_8888);
naPrepareDisplay(mBitmap, mDisplayWidth, mDisplayHeight);
}
public void startPlay() {
thread = new Thread(this);
thread.start();
}
@Override
public void run() {
while (true) {
while (2 == mStatus) {
//pause
SystemClock.sleep(100);
}
mVideoCurrentFrame = naGetVideoFrame();
if (0 < mVideoCurrentFrame) {
//success, redraw
if(isValid()){
Canvas canvas = lockCanvas(null);
if (null != mBitmap) {
canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
}
unlockCanvasAndPost(canvas);
}
} else {
//failure, probably end of video, break
naFinish(mBitmap);
mStatus = 0;
break;
}
}
}
}
In my MainActivity class, I instantiated this class in the following way:
public void startCamera(int texture)
{
mSurface = new SurfaceTexture(texture);
mSurface.setOnFrameAvailableListener(this);
Surface surface = new StreamingSurface(mSurface, 640, 360);
surface.release();
}
I read the following line in the Android developer page, regarding the Surface class constructor:
"Images drawn to the Surface will be made available to the SurfaceTexture, which can attach them to an OpenGL ES texture via updateTexImage()."
That is exactly what I want to do, and I have everything ready for further rendering. But with the above code, the frames drawn in the Surface class never make it to the corresponding SurfaceTexture. I know this because, for instance, the onFrameAvailable listener associated with that SurfaceTexture is never called.
Any ideas? Maybe the fact that I am using a thread to call the drawing functions is messing everything up? In that case, what alternatives do I have to grab the frames?
Thanks in advance
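For reference, the producer/consumer flow that the quoted documentation describes usually looks roughly like this on the GL thread (a minimal sketch with assumed names, not the poster's code):

int texId = createOesTexture();               // glGenTextures + bind as GL_TEXTURE_EXTERNAL_OES (assumed helper)
SurfaceTexture st = new SurfaceTexture(texId);
st.setOnFrameAvailableListener(listener);     // fires once something has drawn into the Surface
Surface producerSurface = new Surface(st);    // keep this Surface alive while frames are produced

// Later, on the thread that owns the GL context, after the listener has fired:
st.updateTexImage();                          // latches the newest frame into texId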

Record an OpenGL FBO created in C++ with the Java MediaCodec

I'm trying to record the contents of an FBO that I create and fill in C++ with the MediaCodec Java object.
I understand that I need to get a Surface from MediaCodec and draw on it with a fragment shader, so I'm trying to feed the C++ FBO as a uniform to the shader.
At the moment I can write the video, and if I draw all the pixels red with the shader I get a red clip, so everything is OK on the Shader => Surface => MediaCodec side.
To use my FBO I'm passing the GLuint of the FBO's texture to the Java side with a JNI function, and everything seems OK, but all I get is just noise, like an uninitialized texture.
So the question is:
is it possible in any way to use a texture created in C++ as an input uniform to a GLES2 Java shader?
And if yes, what is the correct flow to do so?
To clarify, I'm currently doing this in my GLSurfaceView.Renderer class:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
Log.i("OF","onSurfaceCreated");
OFAndroid.onSurfaceCreated();
try{
((OFActivity)OFAndroid.getContext()).onGLSurfaceCreated();
}catch(Exception e){
Log.e("OF","couldn call onGLSurfaceCreated",e);
}
return;
}
@Override
public void onSurfaceChanged(GL10 gl, int w, int h) {
this.w = w;
this.h = h;
if(!setup && OFAndroid.unpackingDone){
try {
setup();
} catch (IOException e) {
e.printStackTrace();
}
}
OFGestureListener.swipe_Min_Distance = (int)(Math.max(w, h)*.04);
OFGestureListener.swipe_Max_Distance = (int)(Math.max(w, h)*.6);
OFAndroid.resize(w, h);
mRenderFbo = new RenderFbo(
800, 800, mSrfTexId);
mFboTexId = mRenderFbo.getFboTexId();
GlUtil.checkGlError("onSurfaceCreated_E");
}
@Override
public void onDrawFrame(GL10 gl) {
if(setup && OFAndroid.unpackingDone){
OFAndroid.render();
if (mRenderSrfTex != null) {
mRenderSrfTex.draw();
}
GlUtil.checkGlError("onDrawFrame_E RENDER SRF TEX");
}else if(!setup && OFAndroid.unpackingDone){
try {
setup();
} catch (IOException e) {
e.printStackTrace();
}
}else{
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glClearColor(.5f, .5f, .5f, 1.f);
}
}
public void setRecorder(MyRecorder recorder, int myTexID) {
texID = myTexID;
synchronized(this) {
if (recorder != null) {
mRenderSrfTex = new RenderSrfTex(
800, 804,
myTexID, recorder);
} else {
mRenderSrfTex = null;
}
}
}
where myTexID is the GLuint that I pass from C++ to Java via JNI, containing the ID of the FBO's texture.
For reference on the MyRecorder and RenderSrfTex classes, see https://github.com/MorihiroSoft/Android_MediaCodecTest/tree/master/MediaCodecTest18/src/jp/morihirosoft/mediacodectest18
and here is the source of the relevant part of the framework: https://github.com/openframeworks/openFrameworks/blob/master/addons/ofxAndroid/ofAndroidLib/src/cc/openframeworks/OFAndroidWindow.java
The answer is absolutely yes: it is possible to pass a texture ID (a GLuint) from C++ to Java. Here is my solution for recording an openFrameworks ofFbo with MediaCodec: https://github.com/rcavazza/openFramework-Android-Recording. Thanks to fadden for the useful answers.
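For anyone looking for the shape of the Java side, a minimal sketch of using a texture ID handed over from C++ (nativeGetFboTexture() and program are assumed names; the GL context that created the texture must be current or shared):

int fboTexId = nativeGetFboTexture();                 // assumed JNI call returning the C++ GLuint
GLES20.glUseProgram(program);                         // program declaring "uniform sampler2D uTexture"
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTexId); // bind the C++-created texture like any other ID
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "uTexture"), 0);
// ... issue the draw call that feeds the MediaCodec input surface ...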

Playing Video with OpenGL and MediaCodec

I'm trying to play the same video at the same time in two different textureviews. I've used code from grafika (MoviePlayer and ContinuousCaptureActivity) to try to get it to work (thanks fadden). To make the problem simpler, I'm trying to do it with just one TextureView first.
At the moment I've created a TextureView, and once it gets a SurfaceTexture, I create a WindowSurface and make it current. Then I generate a texture ID using a FullFrameRect object.
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
mSurfaceTexture = surface;
mEGLCore = new EglCore(null, EglCore.FLAG_TRY_GLES3);
Log.d("EglCore", "EGL core made");
mDisplaySurface = new WindowSurface(mEGLCore, mSurfaceTexture);
mDisplaySurface.makeCurrent();
Log.d("DisplaySurface", "mDisplaySurface made");
mFullFrameBlit = new FullFrameRect(new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
mTextureID = mFullFrameBlit.createTextureObject();
//mSurfaceTexture.attachToGLContext(mTextureID);
clickPlayStop(null);
}
Then I get an off-screen SurfaceTexture, link it with the TextureID that I got above and create a surface to pass to a MoviePlayer thus:
public void clickPlayStop(@SuppressWarnings("unused") View unused) {
if (mShowStopLabel) {
Log.d(TAG, "stopping movie");
stopPlayback();
// Don't update the controls here -- let the task thread do it after the movie has
// actually stopped.
//mShowStopLabel = false;
//updateControls();
} else {
if (mPlayTask != null) {
Log.w(TAG, "movie already playing");
return;
}
Log.d(TAG, "starting movie");
SpeedControlCallback callback = new SpeedControlCallback();
callback.setFixedPlaybackRate(24);
MoviePlayer player = null;
MovieTexture = new SurfaceTexture(mTextureID);
MovieTexture.setOnFrameAvailableListener(this);
Surface surface = new Surface(MovieTexture);
try {
player = new MoviePlayer(surface, callback, this);//TODO
} catch (IOException ioe) {
Log.e(TAG, "Unable to play movie", ioe);
return;
}
adjustAspectRatio(player.getVideoWidth(), player.getVideoHeight());
mPlayTask = new MoviePlayer.PlayTask(player, this);
mPlayTask.setLoopMode(true);
mShowStopLabel = true;
mPlayTask.execute();
}
}
The idea is that the SurfaceTexture receives a raw frame that I can sample from in OpenGL as an external OES texture. I can then call drawFrame() with my EGL context after making my WindowSurface current.
private void drawFrame() {
Log.d(TAG, "drawFrame");
if (mEGLCore == null) {
Log.d(TAG, "Skipping drawFrame after shutdown");
return;
}
// Latch the next frame from the camera.
mDisplaySurface.makeCurrent();
MovieTexture.updateTexImage();
MovieTexture.getTransformMatrix(mTransformMatrix);
// Fill the WindowSurface with it.
int viewWidth = mTextureView.getWidth();
int viewHeight = mTextureView.getHeight();
GLES20.glViewport(0, 0, viewWidth, viewHeight);
mFullFrameBlit.drawFrame(mTextureID, mTransformMatrix);
mDisplaySurface.swapBuffers();
}
If I wanted to do it with 2 TextureViews, the idea would be to call makeCurrent() and draw into each buffer for each view, then call swapBuffers() after the drawing is done.
This is what I want to do, but I am pretty sure this is not what my code is actually doing. Could somebody help me understand what I need to change to make it work?
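For what it's worth, the two-view flow described above would look roughly like this with the grafika classes already used here (a sketch only, assuming both WindowSurfaces were created from the same EglCore):

private void drawFrameToBothViews() {
    // updateTexImage() needs a current context; either surface will do.
    mDisplaySurfaceA.makeCurrent();
    MovieTexture.updateTexImage();
    MovieTexture.getTransformMatrix(mTransformMatrix);

    // First TextureView.
    GLES20.glViewport(0, 0, mDisplaySurfaceA.getWidth(), mDisplaySurfaceA.getHeight());
    mFullFrameBlit.drawFrame(mTextureID, mTransformMatrix);
    mDisplaySurfaceA.swapBuffers();

    // Second TextureView: same texture and transform, different window surface.
    mDisplaySurfaceB.makeCurrent();
    GLES20.glViewport(0, 0, mDisplaySurfaceB.getWidth(), mDisplaySurfaceB.getHeight());
    mFullFrameBlit.drawFrame(mTextureID, mTransformMatrix);
    mDisplaySurfaceB.swapBuffers();
}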
@fadden
Update: This is interesting. I changed the code in onSurfaceTextureAvailable to this:
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
mSurfaceTexture = surface;
TextureHeight = height;
TextureWidth = width;
//mEGLCore = new EglCore(null, EglCore.FLAG_TRY_GLES3);
Log.d("EglCore", "EGL core made");
//mDisplaySurface = new WindowSurface(mEGLCore, mSurfaceTexture);
//mDisplaySurface.makeCurrent();
Log.d("DisplaySurface", "mDisplaySurface made");
//mFullFrameBlit = new FullFrameRect(new Texture2dProgram(Texture2dProgram.ProgramType.OPENGL_TEST));
//mTextureID = mFullFrameBlit.createTextureObject();
//clickPlayStop(null);
// Fill the SurfaceView with it.
//int viewWidth = width;
//int viewHeight = height;
//GLES20.glViewport(0, 0, viewWidth, viewHeight);
//mFullFrameBlit.drawFrame(mTextureID, mTransformMatrix);
//mFullFrameBlit.openGLTest();
//mFullFrameBlit.testDraw(mDisplaySurface.getHeight(),mDisplaySurface.getWidth());
//mDisplaySurface.swapBuffers();
}
So, it shouldn't call anything else, just show the empty TextureView - and this is what I see...
Thanks to Fadden for the help.
So there seemed to be some unknown issue that was resolved when I used a new thread to decode and produce the frames. I haven't found out what caused the original problem, but I have found a way around it.
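For completeness, the workaround amounts to giving the decode/frame production its own thread, roughly like this (a sketch with assumed names, not the exact code used):

HandlerThread decodeThread = new HandlerThread("MovieDecode");
decodeThread.start();
Handler decodeHandler = new Handler(decodeThread.getLooper());
decodeHandler.post(new Runnable() {
    @Override
    public void run() {
        // Create the SurfaceTexture/MoviePlayer and produce frames here,
        // off the UI thread; the TextureView is only used for display.
        startPlayback();   // assumed helper wrapping the MoviePlayer setup shown above
    }
});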

How do I implement multi-texturing using OpenGL ES 1.1 to combine separate RGB and alpha PKMs?

I have been trying to reduce the memory footprint of my textures in an Android game that I wrote, without too much success. Based on the research I have done, it seems a good approach is to compress my textures using ETC1, since that is the most widely supported format on Android devices.
I am able to create the necessary PKMs from my PNGs using Mali ARM - no problems there. I can also render these PKMs just fine using ETC1Utils - again no problems so far.
The problem comes in when trying to handle alphas. I used Mali to create a separate alpha file for my PNGs, i.e. "xxx.png" is compressed into "xxx.pkm" and "xxx_alpha.pkm". One approach suggested to me in a different question I asked was to use multi-texturing to combine these two textures, since I can't use fragment shaders in OpenGL ES 1.1.
And this is where I am stuck. I am not too familiar with this stuff and I am not making much headway. Basically, as soon as I try to combine with my alpha texture, everything is just rendered white.
Here is a snippet of my code:
public class Texture {
GLGraphics glGraphics;
FileIO fileIO;
String fileName;
int textureId;
int minFilter;
int magFilter;
public int width;
public int height;
private boolean loaded = false;
public Texture(GLGame glGame, String fileName) {
this.glGraphics = glGame.getGLGraphics();
this.fileIO = glGame.getFileIO();
this.fileName = fileName;
load();
}
public void load() {
GL10 gl = glGraphics.getGL();
int[] textureIds = new int[2];
gl.glGenTextures(2, textureIds, 0);
textureId = textureIds[0];
InputStream inputStream = null;
try {
inputStream = fileIO.readAsset(fileName + ".pkm");
int rgbTexture = textureId;
gl.glActiveTexture(GLES10.GL_TEXTURE0);
gl.glBindTexture(GLES11.GL_TEXTURE_2D, rgbTexture);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_TEXTURE_ENV_MODE, GLES11.GL_MODULATE);
ETC1Texture etcTexture = ETC1Util.createTexture(inputStream);
ETC1Util.loadTexture(GLES11.GL_TEXTURE_2D, 0, 0, GLES11.GL_RGB, GLES11.GL_UNSIGNED_SHORT_5_6_5, etcTexture);
int alphaTexture = textureIds[1];
gl.glActiveTexture(GLES11.GL_TEXTURE1);
gl.glBindTexture(GLES11.GL_TEXTURE_2D, alphaTexture);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_TEXTURE_ENV_MODE, GLES11.GL_COMBINE);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_COMBINE_RGB, GLES11.GL_REPLACE);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_SRC0_RGB, GLES11.GL_PREVIOUS);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_OPERAND0_RGB, GLES11.GL_SRC_COLOR);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_COMBINE_ALPHA, GLES11.GL_MODULATE);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_SRC0_ALPHA, GLES11.GL_TEXTURE);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_OPERAND0_ALPHA, GLES11.GL_SRC_ALPHA);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_SRC1_ALPHA, GLES11.GL_PREVIOUS);
gl.glTexEnvf(GLES11.GL_TEXTURE_ENV, GLES11.GL_OPERAND1_ALPHA, GLES11.GL_SRC_ALPHA);
InputStream inputStreamAlpha = fileIO.readAsset(fileName + "_alpha.pkm");
ETC1Texture etcAlphaTexture = ETC1Util.createTexture(inputStreamAlpha);
ETC1Util.loadTexture(GLES11.GL_TEXTURE_2D, 0, 0, GLES11.GL_RGB, GLES11.GL_UNSIGNED_SHORT_5_6_5, etcAlphaTexture);
setFilters(GL10.GL_NEAREST, GL10.GL_NEAREST);
gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
width = etcTexture.getWidth();
height = etcTexture.getHeight();
} catch (IOException e) {
throw new RuntimeException("Couldn't load texture '" + fileName + "'", e);
} finally {
if (inputStream != null) {
try {
inputStream.close();
} catch (IOException e) {
// do nothing
}
}
}
loaded = true;
}
public void reload() {
load();
bind();
setFilters(minFilter, magFilter);
glGraphics.getGL().glBindTexture(GL10.GL_TEXTURE_2D, 0);
}
public void setFilters(int minFilter, int magFilter) {
this.minFilter = minFilter;
this.magFilter = magFilter;
GL10 gl = glGraphics.getGL();
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, minFilter);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, magFilter);
}
public void bind() {
GL10 gl = glGraphics.getGL();
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
}
public void dispose() {
loaded = false;
GL10 gl = glGraphics.getGL();
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
int[] textureIds = { textureId };
gl.glDeleteTextures(1, textureIds, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
}
public boolean isLoaded() {
return loaded;
}
public void setLoaded(boolean loaded) {
this.loaded = loaded;
}
}
My main concern is the load() method. This code was put together from snippets I found on the web, and coupled with my lack of understanding of multi-texturing in general, I have clearly gone wrong somewhere. Also note that when I render my textures I call:
GL10 gl = glGraphics.getGL();
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glEnable(GL10.GL_TEXTURE_2D);
camera.setViewportAndMatrices();
gl.glEnable(GL10.GL_BLEND);
gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
// call some objects that do my rendering stuff here
gl.glDisable(GL10.GL_BLEND);
gl.glDisable(GL10.GL_TEXTURE_2D);
When I render a texture I call the bind() method on my Texture class. As you can see, this binds my global textureId variable, which was used as the RGB PKM's ID when loading. I am not even sure if this is correct. Should I be binding the RGB's ID or the alpha's ID? Or is that not even close to being on the right track? My problem may also relate to how I am loading the alphas using ETC1Utils; I have no idea if this approach is correct or not.
I am really quite stuck, so any help pointing out where I have gone wrong, and some sort of explanation of how multi-texturing is supposed to be implemented to combine ETC1 alphas and RGBs, would really be awesome.
I'm not sure this is possible with the fixed pipeline in OpenGL ES 1.1, but there is a great summary of how to combine textures for both 1.1 and 2.0 here.
Also, the PowerVR SDK has a great example of this for 1.1 called OGLESMultitexture.cpp.
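Independent of the combiner setup, one thing the fixed-function path also needs at draw time is texture state and texture coordinates for both units; the bind() above only touches unit 0. A minimal sketch of the per-draw calls (buffer and ID names are assumed):

// Unit 0: the ETC1 RGB texture.
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, rgbTextureId);
gl.glClientActiveTexture(GL10.GL_TEXTURE0);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoordBuffer);

// Unit 1: the ETC1 alpha texture, reusing the same UVs.
gl.glActiveTexture(GL10.GL_TEXTURE1);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, alphaTextureId);
gl.glClientActiveTexture(GL10.GL_TEXTURE1);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoordBuffer);

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);

// Leave unit 0 active so later single-texture draws behave as before.
gl.glActiveTexture(GL10.GL_TEXTURE1);
gl.glDisable(GL10.GL_TEXTURE_2D);
gl.glActiveTexture(GL10.GL_TEXTURE0);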
