I'm very new to Android, Eclipse, and OpenGL ES 2.0 (and to object-oriented programming in general), but a few days ago I got a program working (based on a few examples I found on the Internet) that shows a cube with each face displaying a different 512 x 512 bitmap image (a PNG loaded and converted once during the initialization phase).
My system is a 32-bit Vista PC, and I've been using Bluestacks as my emulator (version 0.8.11, released June 22, 2014, Android version 4.0.4). I believe its API level is 15. I have no physical Android devices to test my programs on at this time.
I now want to try a dynamic texture, i.e. rendering to a texture.
I've looked for complete Android app examples using a dynamic texture (in Java), but I found only one, and when I run it, it doesn't work on my Bluestacks.
Here's the page that has a link to the complete project containing the program:
http://blog.shayanjaved.com/2011/05/13/android-opengl-es-2-0-render-to-texture/
When I run it on my Bluestacks, all I see is a black screen.
One thing I noticed is that the texture size is huge. I changed it to 512 x 512 (which I know works, because my cube program uses 512 x 512 textures), but I still see only a black screen.
(It would be nice if somebody could try running this program on their physical Android device(s) and/or Bluestacks and post the result.)
The following is the method in which the dynamic texture is created and prepared for use:
/**
 * Sets up the framebuffer and renderbuffer to render to texture.
 */
private void setupRenderToTexture() {
    fb = new int[1];
    depthRb = new int[1];
    renderTex = new int[1];

    // generate framebuffer, renderbuffer, and texture names
    GLES20.glGenFramebuffers(1, fb, 0);
    GLES20.glGenRenderbuffers(1, depthRb, 0);
    GLES20.glGenTextures(1, renderTex, 0);

    // generate color texture
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);

    // parameters
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
            GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
            GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
            GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
            GLES20.GL_LINEAR);

    // create it
    // create an empty intbuffer first?
    // (Note: RGB565 needs only 2 bytes per texel, so texW * texH ints of
    // FLOAT_SIZE_BYTES each over-allocates; the buffer only supplies unused
    // initial contents anyway, and null would also be accepted here.)
    int[] buf = new int[texW * texH];
    texBuffer = ByteBuffer.allocateDirect(buf.length * FLOAT_SIZE_BYTES)
            .order(ByteOrder.nativeOrder()).asIntBuffer();
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, texW, texH, 0,
            GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, texBuffer);

    // create render buffer and bind 16-bit depth buffer
    GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, depthRb[0]);
    GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,
            GLES20.GL_DEPTH_COMPONENT16, texW, texH);

    /*** PART BELOW SHOULD BE DONE IN onDrawFrame ***/
    // bind framebuffer
    /*GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
    // specify texture as color attachment
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, renderTex[0], 0);
    // attach render buffer as depth buffer
    GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
            GLES20.GL_RENDERBUFFER, depthRb[0]);
    // check status
    int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);*/
}
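For context, here is a minimal sketch of how the commented-out part might be used in onDrawFrame (the fb, renderTex, depthRb, texW, and texH fields come from the method above; screenW and screenH are hypothetical fields holding the surface size, and the scene-drawing calls are placeholders):

// First pass: redirect rendering into the offscreen framebuffer.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, renderTex[0], 0);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
        GLES20.GL_RENDERBUFFER, depthRb[0]);
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
if (status != GLES20.GL_FRAMEBUFFER_COMPLETE) {
    Log.e("RTT", "Framebuffer incomplete: " + status); // android.util.Log
}
GLES20.glViewport(0, 0, texW, texH);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// ... draw the scene that should end up in the texture ...

// Second pass: back to the default (window) framebuffer, sampling renderTex[0].
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glViewport(0, 0, screenW, screenH);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);
// ... draw the on-screen geometry textured with the result ...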
Does the above piece of code seem sound?
Oh, one more thing:
I've been using the following as my vertex and fragment shaders. They've been working fine for showing a cube with static textures. (Normal data is read in, but I'm not using any lights at the moment.)
private final String vertexShaderCode =
        // This matrix member variable provides a hook to manipulate
        // the coordinates of the objects that use this vertex shader.
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 a_Position;" +
        "attribute vec4 a_Color;" +
        "attribute vec3 a_Normal;" +
        "attribute vec2 a_TexCoordinate;" + // Per-vertex texture coordinate information we will pass in.
        "varying vec4 v_Color;" +
        "varying vec2 v_TexCoordinate;" + // This will be passed into the fragment shader.
        "void main() {" +
        // The matrix must be included as a modifier of gl_Position.
        // Note that the uMVPMatrix factor *must be first* in order
        // for the matrix multiplication product to be correct.
        "  gl_Position = uMVPMatrix * a_Position;" +
        "  v_Color = a_Color;" +
        "  vec3 diffuse = a_Normal;" + // currently unused (no lighting yet)
        "  v_TexCoordinate = a_TexCoordinate;" + // pass through
        "}";

private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform sampler2D u_Texture;" + // The input texture.
        "varying vec4 v_Color;" +
        "varying vec2 v_TexCoordinate;" + // Interpolated texture coordinate per fragment.
        "void main() {" +
        "  gl_FragColor = (v_Color * texture2D(u_Texture, v_TexCoordinate));" +
        "}";
Should they work when I render to a texture? Or do I have to have separate vertex and fragment shaders when I'm rendering to a texture?
Thank you.
Related
I am trying to create image filters like Instagram's for my Android application. I am new to image processing and just stumbled upon the term color mapping. After much research, I tried to create my own filter in OpenGL using a lookup table (LUT). But upon applying the filter to a camera preview, here is the result:
You can see a weird bluish color at the edge of my thumb. It only happens in overexposed areas of the image.
Here is the fragment shader code:
#extension GL_OES_EGL_image_external : require
precision lowp float;

varying highp vec2 vTextureCoord;
uniform samplerExternalOES inputImage;
uniform sampler2D lookup;

void main() {
    vec2 tiles = vec2(8.0);
    vec2 tileSize = vec2(64.0);
    vec4 texel = texture2D(inputImage, vTextureCoord);

    float index = texel.b * (tiles.x * tiles.y - 1.0);
    float index_min = min(62.0, floor(index));
    float index_max = index_min + 1.0;

    vec2 tileIndex_min;
    tileIndex_min.y = floor(index_min / tiles.x);
    tileIndex_min.x = floor(index_min - tileIndex_min.y * tiles.x);
    vec2 tileIndex_max;
    tileIndex_max.y = floor(index_max / tiles.x);
    tileIndex_max.x = floor(index_max - tileIndex_max.y * tiles.x);

    vec2 tileUV = mix(0.5/tileSize, (tileSize-0.5)/tileSize, texel.rg);
    vec2 tableUV_1 = tileIndex_min / tiles + tileUV / tiles;
    vec2 tableUV_2 = tileIndex_max / tiles + tileUV / tiles;

    vec3 lookUpColor_1 = texture2D(lookup, tableUV_1).rgb;
    vec3 lookUpColor_2 = texture2D(lookup, tableUV_2).rgb;
    vec3 lookUpColor = mix(lookUpColor_1, lookUpColor_2, index - index_min);

    gl_FragColor = vec4(lookUpColor, 1.0);
}
Here is the lookup table. This is a base (identity) lookup table. I tried editing the lookup tables and applying the filter, but the result is the same, irrespective of the table.
What is causing this issue? Can anyone show me how to create a simple fragment shader that maps colors from a lookup table onto the current texture? Any help would be appreciated. Regards.
Here is the code for loading the textures:
public static int loadTexture(final Bitmap img) {
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, img, 0);
    return textures[0];
}
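For reference, a typical call site might look like the sketch below (context and the R.drawable.lut_base resource name are assumptions here; inScaled = false matters because a density-scaled LUT bitmap would be corrupted):

BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inScaled = false; // a LUT must be loaded pixel-exact, not density-scaled
Bitmap lut = BitmapFactory.decodeResource(context.getResources(), R.drawable.lut_base, opt);
int lutTexture = loadTexture(lut);
lut.recycle();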
Not sure what color space or gamma your external sampler is using, but it looks like you are getting input values outside the zero-to-one range, so the overexposed areas are jumping across a LUT tile boundary.
As a diagnostic, clamp your inputs to the zero-to-one range. This is rarely the proper long-term fix, as the clipping in the colors will be obvious, but it will confirm the cause.
Something like this:
vec4 texel = texture2D(inputImage, vTextureCoord);
texel = clamp(texel, 0.0, 1.0);
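In the shader above, that clamp would go right after the line that reads texel in main(), before the index is computed from texel.b.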
I've been trying to apply the filters I use from the android-gpuimage library in a MediaCodec surface context. So far I've succeeded with filters that only require one extra texture map. However, when I try to apply a filter that needs at least two, the result is either a blue-colored or a rainbow-colored mess.
The following issue concerns the filter that uses a texture lookup map together with a vignette map.
The vertex shader I used is as follows:
uniform mat4 uMVPMatrix;
uniform mat4 textureTransform;
attribute vec4 vPosition;
attribute vec4 vTexCoordinate;
varying vec2 v_TexCoordinate;

void main() {
    gl_Position = uMVPMatrix * vPosition;
    v_TexCoordinate = (textureTransform * vTexCoordinate).xy;
}
The fragment shader I used is as follows:
#extension GL_OES_EGL_image_external : require
precision lowp float;

varying highp vec2 v_TexCoordinate;
uniform samplerExternalOES u_Texture;    // MediaCodec decoder provided data
uniform sampler2D inputImageTexture2;    // Amaro filter map
uniform sampler2D inputImageTexture3;    // Common vignette map

void main()
{
    vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
    vec2 red = vec2(texel.r, 0.16666);
    vec2 green = vec2(texel.g, 0.5);
    vec2 blue = vec2(texel.b, 0.83333);
    texel.rgb = vec3(
            texture2D(inputImageTexture2, red).r,
            texture2D(inputImageTexture2, green).g,
            texture2D(inputImageTexture2, blue).b);

    // After further research I found the problem is somewhere below
    vec2 tc = (2.0 * v_TexCoordinate) - 1.0;
    float d = dot(tc, tc);
    vec2 lookup = vec2(d, texel.r);
    texel.r = texture2D(inputImageTexture3, lookup).r;
    lookup.y = texel.g;
    texel.g = texture2D(inputImageTexture3, lookup).g;
    lookup.y = texel.b;
    texel.b = texture2D(inputImageTexture3, lookup).b;
    // The problem is somewhere above

    gl_FragColor = vec4(texel, 1.0);
}
The end result of that program looked like this:
Is this the result of a bad vignette map, or is it something to do with the vignette application part of the fragment shader?
EDIT:
The texture used for inputImageTexture2:
The texture used for inputImageTexture3:
Turns out the way I load my textures matters.
My current code for loading textures:
public int loadColormap(final Bitmap colormap) {
    IntBuffer textureIntBuf = IntBuffer.allocate(1);
    GLES20.glGenTextures(1, textureIntBuf);
    int textureHandle = textureIntBuf.get();

    //if (textures[2] != 0) {
    if (textureHandle != 0) {
        //GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[2]);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, colormap, 0);
    }

    //if (textures[2] == 0) {
    if (textureHandle == 0) {
        throw new RuntimeException("Error loading texture.");
    }

    //return textures[2];
    return textureHandle;
}
The previous incarnation used the textures array, the same array I use to load the data from MediaCodec and the watermark. For some reason, if I use that instead of generating a separate handle for each texture, the textures used in the fragment shader get jumbled.
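For completeness, a minimal sketch of how the two sampler uniforms would typically be bound to distinct texture units at draw time (filterTexture and vignetteTexture are hypothetical handles returned by loadColormap; the external u_Texture stays on unit 0):

// Give each sampler its own texture unit; leaving every sampler on unit 0
// is a common cause of jumbled multi-texture output.
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, filterTexture);   // hypothetical handle
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "inputImageTexture2"), 1);

GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vignetteTexture); // hypothetical handle
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "inputImageTexture3"), 2);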
I'm trying to render a simple 3D scene: basically a surface wrapped in a texture. I've tested it on 4 devices and only one (Galaxy Note 4) renders the texture properly. The other three phones (HTC Evo 3D, LG G2, Sony Xperia Z1) render the texture with all texels in one color, which seems to be the average color of the texture. E.g.: original image and rendered texture.
My first guess was that there's something wrong with my fragment shader. But it's very basic and copied from the book "OpenGL ES 2 for Android":
private final String vertexShaderCode =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "attribute vec2 a_TextureCoordinates;" +
        "varying vec2 v_TextureCoordinates;" +
        "void main()" +
        "{" +
        "  v_TextureCoordinates = a_TextureCoordinates;" +
        "  gl_Position = uMVPMatrix * vPosition;" +
        "}";

private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform sampler2D u_TextureUnit;" +
        "varying vec2 v_TextureCoordinates;" +
        "void main()" +
        "{" +
        "  gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);" +
        "}";
Loading texture:
texCoordHandle = glGetAttribLocation(program, "a_TextureCoordinates");
texUnitHandle = glGetUniformLocation(program, "u_TextureUnit");
texHandle = new int[1];
glGenTextures(1, texHandle, 0);
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inScaled = false;
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
R.drawable.way_texture_pink_square_512, opt);
glBindTexture(GL_TEXTURE_2D, texHandle[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
glGenerateMipmap(GL_TEXTURE_2D);
bitmap.recycle();
glBindTexture(GL_TEXTURE_2D, 0);
I hold the vertices and texture coords in different float buffers, and draw them in a loop:
glUseProgram(program);
glUniformMatrix4fv(mvpMatrixHandle, 1, false, vpMatrix, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texHandle[0]);
glUniform1i(texUnitHandle, 0);
glEnableVertexAttribArray(positionHandle);
WaySection[] ws = waySects.get(currWaySect);
for (int i = 0; i < ws.length; i++) {
    glVertexAttribPointer(positionHandle, COORDS_PER_VERTEX,
            GL_FLOAT, false, vertexStride, ws[i].vertexBuffer);
    glVertexAttribPointer(texCoordHandle, COORDS_PER_TEX_VER,
            GL_FLOAT, false, texStride, ws[i].texBuffer);
    glDrawArrays(GL_TRIANGLE_FAN, 0,
            ws[i].vertices.length / COORDS_PER_VERTEX);
}
glDisableVertexAttribArray(positionHandle);
What I've done:
Changed textures, tried different sizes (power of two and not), different alpha.
Tried switching on/off different options, like: GL_CULL_FACE, GL_BLEND, GL_TEXTURE_MIN(MAG)_FILTER, GL_TEXTURE_WRAP...
Checked for OpenGL errors in every possible place. glGetError() returns no error.
Checked the code in the debugger. The textures converted into Bitmaps looked fine before and after loading them into OpenGL.
Help, please :)
Problem solved. The erratic behavior was caused by the missing glEnableVertexAttribArray(texCoordHandle); call in onDraw(). The Note 4 was OK with that; it required only the positionHandle (vertices) to be enabled. The other phones apparently weren't too happy. Maybe because the Note 4 supports GL ES 3.1?
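For clarity, a minimal sketch of the corrected draw code (same handles and fields as in the loop above), with both attribute arrays enabled:

glEnableVertexAttribArray(positionHandle);
glEnableVertexAttribArray(texCoordHandle); // this call was missing

WaySection[] ws = waySects.get(currWaySect);
for (int i = 0; i < ws.length; i++) {
    glVertexAttribPointer(positionHandle, COORDS_PER_VERTEX,
            GL_FLOAT, false, vertexStride, ws[i].vertexBuffer);
    glVertexAttribPointer(texCoordHandle, COORDS_PER_TEX_VER,
            GL_FLOAT, false, texStride, ws[i].texBuffer);
    glDrawArrays(GL_TRIANGLE_FAN, 0,
            ws[i].vertices.length / COORDS_PER_VERTEX);
}

glDisableVertexAttribArray(positionHandle);
glDisableVertexAttribArray(texCoordHandle);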
Thanks for your time.
I'm working on a simple Pong-type game to get to grips with OpenGL and Android, and I seem to have hit a brick wall in terms of performance.
I've got my game logic on a separate thread, with draw commands sent to the gl thread through a blocking queue. The problem is that I'm stuck at around 40fps, and nothing I've tried seems to improve the framerate.
To keep things simple, I set up OpenGL with:
GLES20.glDisable(GLES20.GL_CULL_FACE);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_BLEND);
Setup of the OpenGL program and drawing is handled by the following class:
class GLRectangle {

    private final String vertexShaderCode =
            "precision lowp float;" +
            // This matrix member variable provides a hook to manipulate
            // the coordinates of the objects that use this vertex shader
            "uniform lowp mat4 uMVPMatrix;" +
            "attribute lowp vec4 vPosition;" +
            "void main() {" +
            // the matrix must be included as a modifier of gl_Position
            // (note: vPosition * uMVPMatrix multiplies by the transpose;
            // the conventional order is uMVPMatrix * vPosition)
            "  gl_Position = vPosition * uMVPMatrix;" +
            "}";

    private final String fragmentShaderCode =
            "precision lowp float;" +
            "uniform lowp vec4 vColor;" +
            "void main() {" +
            "  gl_FragColor = vColor;" +
            "}";

    protected static int mProgram = 0;
    private static ShortBuffer drawListBuffer;
    private static short drawOrder[] = { 0, 1, 2, 0, 2, 3 };//, 4, 5, 6, 4, 6, 7 }; // order to draw vertices

    // number of coordinates per vertex in this array
    private static final int COORDS_PER_VERTEX = 3;
    private static final int vertexStride = COORDS_PER_VERTEX * 4; // bytes per vertex

    GLRectangle() {
        int vertexShader = GameRenderer.loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
        int fragmentShader = GameRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);

        mProgram = GLES20.glCreateProgram();             // create empty OpenGL ES Program
        GLES20.glAttachShader(mProgram, vertexShader);   // add the vertex shader to program
        GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
        GLES20.glLinkProgram(mProgram);                  // creates OpenGL ES program executables

        // initialize byte buffer for the index list
        // (# of coordinate values * 2 bytes per short)
        ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
        dlb.order(ByteOrder.nativeOrder());
        drawListBuffer = dlb.asShortBuffer();
        drawListBuffer.put(drawOrder);
        drawListBuffer.position(0);
    }

    protected static void Draw(Drawable dObj, float mvpMatrix[]) {
        FloatBuffer vertexBuffer = dObj.vertexBuffer;

        GLES20.glUseProgram(mProgram);
        //GameRenderer.checkGlError("glUseProgram");

        // get handle to fragment shader's vColor member
        int mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
        //GameRenderer.checkGlError("glGetUniformLocation");

        // get handle to shape's transformation matrix
        int mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
        //GameRenderer.checkGlError("glGetUniformLocation");

        // get handle to vertex shader's vPosition member
        int mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
        //GameRenderer.checkGlError("glGetAttribLocation");

        // Apply the projection and view transformation
        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
        //GameRenderer.checkGlError("glUniformMatrix4fv");

        // Set color for drawing the quad
        GLES20.glUniform4fv(mColorHandle, 1, dObj.color, 0);
        //GameRenderer.checkGlError("glUniform4fv");

        // Enable a handle to the square vertices
        GLES20.glEnableVertexAttribArray(mPositionHandle);
        //GameRenderer.checkGlError("glEnableVertexAttribArray");

        // Prepare the square coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
        //GameRenderer.checkGlError("glVertexAttribPointer");

        // Draw the square
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
        //GameRenderer.checkGlError("glDrawElements");

        // Disable vertex array
        GLES20.glDisableVertexAttribArray(mPositionHandle);
        //GameRenderer.checkGlError("glDisableVertexAttribArray");
    }
}
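(Aside: one thing visible in Draw() above is that the glGetUniformLocation / glGetAttribLocation lookups run on every call. They can be cached once after glLinkProgram; a minimal sketch, assuming these are added as static fields of the class and initialized at the end of the constructor:)

// Cache shader handles once instead of querying them every frame.
private static int sMVPMatrixHandle;
private static int sColorHandle;
private static int sPositionHandle;

// ... at the end of GLRectangle(), after glLinkProgram:
sMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
sColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
sPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");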
I've done plenty of profiling and googling, but can't find anything to make this run faster. I've included a screenshot of the DDMS output:
To me, it looks like glClear is causing GLThread to sleep for 23 ms... though I doubt that's really the case.
I have absolutely no idea how to make this more efficient; there's nothing fancy going on. In my quest for better rendering performance I have switched to the multi-threaded approach described above, turned off alpha blending and depth testing, changed to a batched drawing approach (not applicable to this simple example), and switched everything to lowp in the shaders.
Any assistance with getting this to 60fps would be greatly appreciated!
Bruce
Edit: Well, talk about overthinking a problem. It turns out I've had power-saving mode switched on for the past week... it seems to lock rendering to 40 fps.
This behavior occurs when Power Saving mode is switched on, on a Galaxy S3. It appears the power-saving mode locks the framerate to 40 fps; switching it off easily achieved the desired 60 fps.
Not sure if you have control over the EGL setup for the device surface, but if you do, there is a way to set updates to run in "non-synced" mode:
egl.eglSwapInterval(dpy, 0);
This isn't available on all devices, but it allows some control over your rendering.
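For context, a minimal sketch of where that call could go, assuming the code runs on the renderer thread with a current EGL context (classes from javax.microedition.khronos.egl):

// Request a swap interval of 0 (no vsync wait) where the driver supports it.
EGL10 egl = (EGL10) EGLContext.getEGL();
EGLDisplay dpy = egl.eglGetCurrentDisplay();
egl.eglSwapInterval(dpy, 0);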
I created an app that uses GLES 2.0 on an HTC Desire S.
It works on the HTC, but not on a Samsung Galaxy Tab 10.1.
The program cannot be linked (GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, linOk, 0) gives -1) and glGetError() gives me error 1282 (GL_INVALID_OPERATION).
When I replace this line (in the shader):
graph_coord.z = (texture2D(mytexture, graph_coord.xy / 2.0 + 0.5).r);
by
graph_coord.z = 0.2;
it also works on the Galaxy Tab.
My shader looks like this:
private final String vertexShaderCode =
        "attribute vec2 coord2d;" +
        "varying vec4 graph_coord;" +
        "uniform mat4 texture_transform;" +
        "uniform mat4 vertex_transform;" +
        "uniform sampler2D mytexture;" +
        "void main(void) {" +
        "  graph_coord = texture_transform * vec4(coord2d, 0, 1);" +
        "  graph_coord.z = (texture2D(mytexture, graph_coord.xy / 2.0 + 0.5).r);" +
        "  gl_Position = vertex_transform * vec4(coord2d, graph_coord.z, 1);" +
        "}";
That's where the shaders are attached:
mProgram = GLES20.glCreateProgram();             // create empty OpenGL program
GLES20.glAttachShader(mProgram, vertexShader);   // add the vertex shader to program
GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
GLES20.glLinkProgram(mProgram);                  // create OpenGL program executables

int linOk[] = new int[1];
GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, linOk, 0);
And the texture is loaded here:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture_id[0]);
GLES20.glTexImage2D(
        GLES20.GL_TEXTURE_2D,     // target
        0,                        // level, 0 = base, no mipmap
        GLES20.GL_LUMINANCE,      // internalformat
        size,                     // width
        size,                     // height
        0,                        // border, always 0 in OpenGL ES
        GLES20.GL_LUMINANCE,      // format
        GLES20.GL_UNSIGNED_BYTE,  // type
        values
);
This seems to be a limitation of the Nvidia Tegra GPU. I was able to reproduce the error on a Tegra 3 GPU. Even though texture lookups in the vertex shader are in theory part of OpenGL ES 2.0, according to Nvidia the number of vertex shader texture units (GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS) on Tegra is 0 (PDF: OpenGL ES 2.0 Development for the Tegra Platform).
You have to use texture2DLod() instead of texture2D() to perform texture lookups in a vertex shader.
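Applied to the vertex shader above, the lookup line would become something like this (a sketch; the explicit lod of 0.0 samples the base level):

graph_coord.z = texture2DLod(mytexture, graph_coord.xy / 2.0 + 0.5, 0.0).r;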
GLSL specs, section 8.7 Texture Lookup Functions: http://www.khronos.org/files/opengles_shading_language.pdf