I have a 512*512 BMP and I want to use it to render a surface.
Since the surface is not flat, I cut it into small rectangles (num = rowMax * colMax). The code looks like this:
void draw(GL10 gl)
{
    int[] textures = new int[1];
    gl.glBindTexture(...);
    gl.glTexParameterf(...);
    for (int row = 0; row < maxRow; row++)
    {
        for (int col = 0; col < maxCol; col++)
        {
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // bitmap is the 512*512 bmp
            // generate 4 point coordinates
            ...
            // generate texture uv coordinates
            ...
            // draw it
            gl.glDrawArrays(...);
        }
    }
}
It works fine.
But when I move the statement
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
out of the loop (since I think it may take a lot of time), it doesn't work, and I don't know why.
Maybe I didn't understand the way your program works, but from what I understand you have num different textures that you need to draw.
If you took texImage2D out of the loop, you would load just one texture instead of num textures, so obviously that wouldn't work.
Another thing to know about OpenGL is that you don't need to call GLUtils.texImage2D every time you draw, only when the texture changes.
Once you call texImage2D, the current state of bitmap is saved in the texture memory of the GL context, under the texture name you previously passed to glBindTexture(GL10.GL_TEXTURE_2D, /*texture name here*/). The next time you need to draw that texture, you only need to call glBindTexture with its name, and that's it; no costly operations needed.
What you should do is make each texture's name (a unique integer identifying the texture) its position, i.e. (row * maxCol) + col + 1 (the +1 so that you don't have a texture named 0, which is a reserved texture name).
Also, you should create an array of booleans the size of num called mDirty, so that each cell is true when the texture corresponding to the position of the cell has changed and needs to be reloaded.
This way, inside the loop you can just check whether mDirty[(row * maxCol) + col] == true; if it is, call texImage2D and set mDirty[(row * maxCol) + col] to false to indicate that the texture is up to date in GL memory.
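To make that bookkeeping concrete, here is a minimal, GL-free sketch of the texture-naming and dirty-flag scheme (class and method names are mine, not from any library); real code would call glBindTexture/texImage2D at the point where markUploaded() runs:

```java
// Models the texture bookkeeping only; no GL calls are made here.
public class TileTextureCache {
    private final int maxRow, maxCol;
    private final boolean[] dirty;

    public TileTextureCache(int maxRow, int maxCol) {
        this.maxRow = maxRow;
        this.maxCol = maxCol;
        this.dirty = new boolean[maxRow * maxCol];
        java.util.Arrays.fill(dirty, true); // every tile needs an initial upload
    }

    // Texture name for a tile: its position + 1, so name 0 stays reserved.
    public int textureName(int row, int col) {
        return (row * maxCol) + col + 1;
    }

    // True if texImage2D must be called for this tile before the next draw.
    public boolean needsUpload(int row, int col) {
        return dirty[(row * maxCol) + col];
    }

    // Call when the tile's bitmap content changes.
    public void markDirty(int row, int col) {
        dirty[(row * maxCol) + col] = true;
    }

    // Call right after uploading the tile with texImage2D.
    public void markUploaded(int row, int col) {
        dirty[(row * maxCol) + col] = false;
    }
}
```

With this in place, the draw loop binds the tile's name every frame but only re-uploads when needsUpload() returns true.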
The main texture of my surface shader is a Google Maps image tile (example image omitted).
I want to replace pixels that are close to a specified color with that from a separate texture. What is working now is the following:
Shader "MyShader"
{
Properties
{
_MainTex("Base (RGB) Trans (A)", 2D) = "white" {}
_GrassTexture("Grass Texture", 2D) = "white" {}
_RoadTexture("Road Texture", 2D) = "white" {}
_WaterTexture("Water Texture", 2D) = "white" {}
}
SubShader
{
Tags{ "Queue" = "Transparent-1" "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "RenderType" = "Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha approxview halfasview noforwardadd nometa
uniform sampler2D _MainTex;
uniform sampler2D _GrassTexture;
uniform sampler2D _RoadTexture;
uniform sampler2D _WaterTexture;
struct Input
{
float2 uv_MainTex;
};
void surf(Input IN, inout SurfaceOutput o)
{
fixed4 ct = tex2D(_MainTex, IN.uv_MainTex);
// if the red (or blue) channel of the pixel is within a
// specific range, get either a 1 or a 0 (true/false).
int grassCond = int(ct.r >= 0.45) * int(0.46 >= ct.r);
int waterCond = int(ct.r >= 0.14) * int(0.15 >= ct.r);
int roadCond = int(ct.b >= 0.23) * int(0.24 >= ct.b);
// if none of the above conditions is a 1, then we want to keep our
// current pixel's color:
half defaultCond = 1 - grassCond - waterCond - roadCond;
// get the pixel from each texture, multiple by their check condition
// to get:
// fixed4(0,0,0,0) if this isn't the right texture for this pixel
// or fixed4(r,g,b,1) from the texture if it is the right pixel
fixed4 grass = grassCond * tex2D(_GrassTexture, IN.uv_MainTex);
fixed4 water = waterCond * tex2D(_WaterTexture, IN.uv_MainTex);
fixed4 road = roadCond * tex2D(_RoadTexture, IN.uv_MainTex);
fixed4 def = defaultCond * ct; // just used the MainTex pixel
// then use the found pixels as the Albedo
o.Albedo = (grass + road + water + def).rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "None"
}
This is the first shader I've ever written, and it probably isn't very performant. It seems counterintuitive to me to call tex2D on each texture for every pixel just to throw that data away, but I couldn't think of a better way to do this without if/else (which I read is bad for GPUs).
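The mask-multiply selection used in surf() can be sketched outside the shader to check the logic; this is only a model of the arithmetic, with the thresholds copied from the shader above and all names mine:

```java
// Branchless selection: each condition evaluates to 0 or 1, at most one
// mask is 1, and the output is the masked sum of the candidate values.
public class MaskSelect {
    // 1 if lo <= v <= hi, else 0 (mirrors int(v >= lo) * int(hi >= v)).
    static int inRange(float v, float lo, float hi) {
        return ((v >= lo) ? 1 : 0) * ((hi >= v) ? 1 : 0);
    }

    // Picks grass/water/road/default the same way the surf() function does,
    // reduced to a single channel value per candidate texture.
    static float select(float r, float b,
                        float grass, float water, float road, float def) {
        int grassCond = inRange(r, 0.45f, 0.46f);
        int waterCond = inRange(r, 0.14f, 0.15f);
        int roadCond  = inRange(b, 0.23f, 0.24f);
        int defCond   = 1 - grassCond - waterCond - roadCond;
        return grassCond * grass + waterCond * water
             + roadCond * road + defCond * def;
    }
}
```

The design point is that every candidate is always evaluated and the masks zero out the losers, which is exactly what the shader does to avoid divergent branches.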
This is a Unity Surface Shader, and not a fragment/vertex shader. I know there is a step that happens behind the scenes that will generate the fragment/vertex shader for me (adding in the scene's lighting, fog, etc.). This shader is applied to 100 256x256px map tiles (2560x2560 pixels in total). The grass/road/water textures are all 256x256 pixels as well.
My question is: is there a better, more performant way of accomplishing what I'm doing here? The game runs on Android and iOS.
I'm not a specialist in shader performance, but assuming you have a relatively small number of source tiles to render in the same frame, it might make more sense to store the result of the pixel replacement and reuse it.
Since the resulting image is the same size as your source tile, just render the source tile with your surface shader into a RenderTexture once (without any lighting; you may want to use a simple, flat pixel shader instead) and then use that RenderTexture as the source for your world rendering. That way you do the expensive work only once per source tile, so it no longer even matters whether your shader is well optimized.
If all textures are static, you might even consider not doing this at runtime, but just translate them once in the Editor.
I apologize in advance for my long post.
My purpose is to create a meshing app for a Project Tango Yellowstone device to create a 3D map of building interiors. I intend to make use of the experimental meshing API added in recent versions of the tango-examples-c code.
I'm using point-cloud-jni-example (turing) as a starting point and so far have done the following:
Set config_experimental_enable_scene_reconstruction tango config parameter in point_cloud_app.cc (see docs)
// Enable scene reconstruction
ret = TangoConfig_setBool(tango_config_,
"config_experimental_enable_scene_reconstruction", true);
if (ret != TANGO_SUCCESS) {
LOGE("PointCloudApp: config_experimental_enable_scene_reconstruction() failed "
"with error code: %d", ret);
return ret;
}
Added extractMesh native method in TangoJNINative.java
// Extracts the full mesh from the scene reconstruction.
public static native void extractMesh();
Added matching extractMesh function to the jni_interface.cc
JNIEXPORT void JNICALL
Java_com_projecttango_experiments_nativepointcloud_TangoJNINative_extractMesh(
JNIEnv*, jobject) {
app.ExtractMesh();
}
Added ExtractMesh method in point_cloud_app.cc
void PointCloudApp::ExtractMesh() {
// see line 1245 of tango_client_api.h
mesh_ptr = new TangoMesh_Experimental();
TangoService_Experimental_extractMesh(mesh_ptr);
mesh = *mesh_ptr;
LOGE("PointCloudApp: num_vertices: %d", mesh.num_vertices);
float float1, float2, float3;
float1 = mesh.vertices[1][0];
float2 = mesh.vertices[1][1];
float3 = float1 + float2; // these lines show I can use the vertex data
LOGE("PointCloudApp: First vertex, x: %f", mesh.vertices[1][0]); // this line causes app to crash; printing the vertex data seems to be the problem
}
Added TangoMesh_Experimental declaration to point_cloud_app.h
// see line 1131 of tango_client_api.h
TangoMesh_Experimental* mesh_ptr;
TangoMesh_Experimental mesh;
Added an additional button to call the extractMesh native method. (not showing this one as it is pretty straightforward)
For reference, here is the TangoMesh_Experimental Struct from the API:
// A mesh, described by vertices and face indices, with optional per-vertex
// normals and colors.
typedef struct TangoMesh_Experimental {
// Index into a three-dimensional fixed grid.
int32_t index[3];
// Array of vertices. Each vertex is an {x, y, z} coordinate triplet, in
// meters.
float (*vertices)[3];
// Array of faces. Each face is an index triplet into the vertices array.
uint32_t (*faces)[3];
// Array of per-vertex normals. Each normal is a normalized {x, y, z} vector.
float (*normals)[3];
// Array of per-vertex colors. Each color is a 4-tuple of 8-bit {R, G, B, A}
// values.
uint8_t (*colors)[4];
// Number of vertices, describing the size of the vertices array.
uint32_t num_vertices;
// Number of faces, describing the size of the faces array.
uint32_t num_faces;
// If true, each vertex will have an associated normal. In that case, the
// size of the normals array will be equal to num_vertices. Otherwise, the
// size of the normals array will be 0.
bool has_normals;
// If true, each vertex will have an associated color. In that case, the size
// of the colors array will be equal to num_vertices. Otherwise, the size of
// the colors array will be 0.
bool has_colors;
} TangoMesh_Experimental;
My current understanding of this struct is:
The three pointers in float (*vertices)[3]; point to the addresses at the start of three chunks of memory for the x, y, and z coordinates of the mesh vertices (the same is true for normals and colors). A specific vertex is composed of an x, y, and z component found at a specific index in the three arrays.
Similarly, the uint32_t (*faces)[3] array has three pointers to the beginning of three chunks of memory, but a specific set of three elements here instead contains index numbers that indicate which three vertices (from the vertices array (each with three coordinates)) make up that face.
The current status is that I am able to extract the mesh and print some of it to the console, but then the app crashes without errors:
PointCloudApp: PointCloudApp: num_vertices: 8044
If I omit the last line I added in point_cloud_app.cc (#4, above), the app doesn't crash. I am able to access the vertex data and do something with it, but printing it using LOGE causes a crash 9 times out of 10. Occasionally, it does print the value correctly without crashing. Could the vertex data have holes or invalid values?
I have tried returning test_float from JNI back to java, but it crashes again when I try to do so.
Suggestions?
vertices is a dynamic array of points, where each point is a float[3]. Try this example:
for (int i = 0; i < mesh.num_vertices; ++i) {
printf("%d: x=%f y=%f z=%f\n", i, mesh.vertices[i][0],
mesh.vertices[i][1], mesh.vertices[i][2]);
}
If you look at the memory layout, it would be x0 y0 z0 x1 y1 z1 etc, each of those a float.
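A Tango-free way to picture that flat layout: treat the buffer as one float array where vertex i occupies slots 3*i through 3*i+2 (the class and method names here are illustrative, not part of the Tango API):

```java
// Models the x0 y0 z0 x1 y1 z1 ... layout of mesh.vertices.
public class FlatVertices {
    private final float[] data; // length = 3 * numVertices

    public FlatVertices(float[] data) {
        if (data.length % 3 != 0)
            throw new IllegalArgumentException("length must be a multiple of 3");
        this.data = data;
    }

    public int numVertices() { return data.length / 3; }

    // Equivalent of mesh.vertices[i][axis], with axis 0..2 for x/y/z.
    public float get(int i, int axis) {
        return data[3 * i + axis];
    }
}
```

Indexing past num_vertices - 1 walks off the end of the buffer, which is one way the LOGE call above could read garbage or crash.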
I am having problems with dynamic loading of textures.
When a user double-taps on the screen, the background and other sprites are changed. No error is produced, but sometimes the textures are cleared and the new textures are simply not loaded.
This is my initial onCreateResource
ITextureRegion BackgroundTextureRegion;
BitmapTextureAtlas MainTexture1;
//Initiate Textures
MainTexture1 = new BitmapTextureAtlas(this.getTextureManager(),1000,1000, TextureOptions.BILINEAR);
//Clear Textures
MainTexture1.addEmptyTextureAtlasSource(0, 0, 1000,1000);
//Assign Image Files to TextureRegions
BackgroundTextureRegion = BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "Evening.jpg",0,0);
//Loading the Main Texture to memory
MainTexture1.load();
There is no problem up to this point. After this, when the user double-taps or swipes the background, I change the texture dynamically. Here is the code:
MainTexture1.clearTextureAtlasSources();
MainTexture1.addEmptyTextureAtlasSource(0, 0, 1000,1000);
BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "WinterNight.jpg",0,0);
This usually changes the texture and gives the desired result. But on some devices (e.g. the Samsung Tab 2), 1 time in 10 MainTexture1 is cleared but not loaded with the new image.
So it just gives a black screen. How do I correct this?
MainTexture1.clearTextureAtlasSources();
// MainTexture1.addEmptyTextureAtlasSource(0, 0, 1000,1000);
BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "WinterNight.jpg",0,0);
MainTexture1.load();
Try this
runOnUiThread(new Runnable() {
@Override
public void run() {
MainTexture1.clearTextureAtlasSources();
MainTexture1.addEmptyTextureAtlasSource(0, 0, 1024, 1024);
BitmapTextureAtlasTextureRegionFactory.createFromAsset(MainTexture1, this, "WinterNight.jpg",0,0);
MainTexture1.load();
}
});
Like this.
If that does not work, try creating another texture atlas instead of reusing the same one.
I had a small question. If I want to make a man run in Android, one way of doing this is to get images of the man in different positions and display them one after another. But often this does not work very well, and it looks like two different images being drawn. Is there any other way to implement custom animation (like creating a custom image and telling one part of that image to move)?
The way I do it is to use sprite sheets, for example (not my graphics; example image omitted):
You can then use a class like this to handle your animation:
public class AnimSpriteClass {
private Bitmap mAnimation;
private int mXPos;
private int mYPos;
private Rect mSRectangle;
private int mFPS;
private int mNoOfFrames;
private int mCurrentFrame;
private long mFrameTimer;
private int mSpriteHeight;
private int mSpriteWidth;
public AnimSpriteClass() {
mSRectangle = new Rect(0,0,0,0);
mFrameTimer =0;
mCurrentFrame =0;
mXPos = 80;
mYPos = 200;
}
public void Initalise(Bitmap theBitmap, int Height, int Width, int theFPS, int theFrameCount) {
mAnimation = theBitmap;
mSpriteHeight = Height;
mSpriteWidth = Width;
mSRectangle.top = 0;
mSRectangle.bottom = mSpriteHeight;
mSRectangle.left = 0;
mSRectangle.right = mSpriteWidth;
mFPS = 1000 /theFPS;
mNoOfFrames = theFrameCount;
}
public void Update(long GameTime) {
if(GameTime > mFrameTimer + mFPS ) {
mFrameTimer = GameTime;
mCurrentFrame +=1;
if(mCurrentFrame >= mNoOfFrames) {
mCurrentFrame = 0;
}
}
mSRectangle.left = mCurrentFrame * mSpriteWidth;
mSRectangle.right = mSRectangle.left + mSpriteWidth;
}
public void draw(Canvas canvas) {
Rect dest = new Rect(getXPos(), getYPos(), getXPos() + mSpriteWidth,
getYPos() + mSpriteHeight);
canvas.drawBitmap(mAnimation, mSRectangle, dest, null);
}
public int getXPos() { return mXPos; }
public int getYPos() { return mYPos; }
}
mAnimation - This will hold the actual bitmap containing the animation.
mXPos/mYPos - These hold the X and Y screen coordinates for where we want the sprite to be on the screen. These refer to the top left hand corner of the image.
mSRectangle - This is the source rectangle variable and controls which part of the image we are rendering for each frame.
mFPS - This is the number of frames we wish to show per second. 15-20 FPS is enough to fool the human eye into thinking that a still image is moving. However, on a mobile platform you are unlikely to have the resources for that, and 3-10 FPS is fine for most needs.
mNoOfFrames - This is simply the number of frames in the sprite sheet we are animating.
mCurrentFrame - We need to keep track of the current frame we are rendering so we can move to the next one in order.
mFrameTimer - This controls how long between frames.
mSpriteHeight/mSpriteWidth - These contain the height and width of an individual frame, not the entire bitmap, and are used to calculate the size of the source rectangle.
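The frame-selection arithmetic in Update() can be checked on its own; this small sketch (names mine) computes the source-rectangle edges for a given frame and the frame advance:

```java
// Computes which horizontal slice of the sprite sheet a given frame occupies.
public class FrameRect {
    final int left, right, top, bottom;

    FrameRect(int currentFrame, int spriteWidth, int spriteHeight) {
        this.left = currentFrame * spriteWidth;   // same as mSRectangle.left
        this.right = left + spriteWidth;          // same as mSRectangle.right
        this.top = 0;
        this.bottom = spriteHeight;
    }

    // Advances the frame counter the same way Update() does, wrapping to 0.
    static int nextFrame(int currentFrame, int noOfFrames) {
        int next = currentFrame + 1;
        return (next >= noOfFrames) ? 0 : next;
    }
}
```

For the 39-pixel-wide stick-man frames above, frame 3 maps to the slice from x=117 to x=156 of the sheet.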
Now, in order to use this class, you have to add a few things to your graphics thread. First declare a new variable of your class; it can then be initialised in the constructor as below.
Animation = new AnimSpriteClass();
Animation.Initalise(BitmapFactory.decodeResource(res, R.drawable.stick_man), 62, 39, 20, 20);
In order to pass the bitmap, you first have to use the BitmapFactory class to decode the resource. It decodes a bitmap from your resources folder so it can be passed as a variable. The rest of the values depend on your bitmap image.
In order to be able to time the frames correctly you first need to add a Game timer to the game code. You do this by first adding a variable to store the time as show below.
private long mTimer;
We now need this timer to be updated with the correct time every frame so we need to add a line to the run function to do this.
public void run() {
while (mRun) {
Canvas c = null;
mTimer = System.currentTimeMillis(); // this line updates the timer
try {
c = mSurfaceHolder.lockCanvas(null);
synchronized (mSurfaceHolder) {
Animation.Update(mTimer);
doDraw(c);
}....
Then you just have to call Animation.draw(canvas); in your draw function and the animation will draw the current frame in the right place.
When you describe "one way of doing this is to get images of the man in different positions and display them one after another", this is indeed not just a programming technique for rendering animation but a general principle applied in every form of animation: it applies to making movies, making comics, computer gaming, and so on.
Film runs at 24 images per second, and above roughly 12 frames per second your brain gets the feeling of real, fluid movement.
So yes, this is the way: if the movement does not feel fluid, increase the frame rate. It works.
Moving only one part of an image is not appropriate for a small sprite representing a running man. Nevertheless, keep this idea in mind for later: when you are more at ease with animation programming, you will see that it applies to bigger areas that are not entirely redrawn at every frame, in order to decrease the number of computations needed to make a frame. Some parts of the screen are not recomputed every time; this is usually done with dirty rectangles combined with double buffering, and you should soon be introduced to these techniques when making games.
But for now, you should start by making your man run, quickly replacing one picture with another. If the movement is not fluid, either increase the frame rate (optimize your program) or choose images that are closer to each other.
Regards,
Stéphane
We have been dealing with OpenCV for two weeks to make it work on Android.
Do you know where can we find an Android implementation of optical flow? It would be nice if it's implemented using OpenCV.
Openframeworks has OpenCV baked in, as well as many other interesting libraries. It has a very elegant structure, and I have used it with Android to make a virtual mouse for the phone using motion estimation from the camera.
See the ports to android here http://openframeworks.cc/setup/android-studio/
It seems they recently added support for Android Studio; otherwise Eclipse works great.
Try this
@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = inputFrame.rgba();
if (mMOP2fptsPrev.rows() == 0) {
//Log.d("Baz", "First time opflow");
// first time through the loop so we need prev and this mats
// plus prev points
// get this mat
Imgproc.cvtColor(mRgba, matOpFlowThis, Imgproc.COLOR_RGBA2GRAY);
// copy that to prev mat
matOpFlowThis.copyTo(matOpFlowPrev);
// get prev corners
Imgproc.goodFeaturesToTrack(matOpFlowPrev, MOPcorners, iGFFTMax, 0.05, 20);
mMOP2fptsPrev.fromArray(MOPcorners.toArray());
// get safe copy of this corners
mMOP2fptsPrev.copyTo(mMOP2fptsSafe);
}
else
{
//Log.d("Baz", "Opflow");
// we've been through before so
// this mat is valid. Copy it to prev mat
matOpFlowThis.copyTo(matOpFlowPrev);
// get this mat
Imgproc.cvtColor(mRgba, matOpFlowThis, Imgproc.COLOR_RGBA2GRAY);
// get the corners for this mat
Imgproc.goodFeaturesToTrack(matOpFlowThis, MOPcorners, iGFFTMax, 0.05, 20);
mMOP2fptsThis.fromArray(MOPcorners.toArray());
// retrieve the corners from the prev mat
// (saves calculating them again)
mMOP2fptsSafe.copyTo(mMOP2fptsPrev);
// and save this corners for next time through
mMOP2fptsThis.copyTo(mMOP2fptsSafe);
}
/*
Parameters:
prevImg first 8-bit input image
nextImg second input image
prevPts vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.
nextPts output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of input features in the second image; when OPTFLOW_USE_INITIAL_FLOW flag is passed, the vector must have the same size as in the input.
status output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has been found, otherwise, it is set to 0.
err output vector of errors; each element of the vector is set to an error for the corresponding feature, type of the error measure can be set in flags parameter; if the flow wasn't found then the error is not defined (use the status parameter to find such cases).
*/
Video.calcOpticalFlowPyrLK(matOpFlowPrev, matOpFlowThis, mMOP2fptsPrev, mMOP2fptsThis, mMOBStatus, mMOFerr);
cornersPrev = mMOP2fptsPrev.toList();
cornersThis = mMOP2fptsThis.toList();
byteStatus = mMOBStatus.toList();
y = byteStatus.size() - 1;
for (x = 0; x < y; x++) {
if (byteStatus.get(x) == 1) {
pt = cornersThis.get(x);
pt2 = cornersPrev.get(x);
Core.circle(mRgba, pt, 5, colorRed, iLineThickness - 1);
Core.line(mRgba, pt, pt2, colorRed, iLineThickness);
}
}
return mRgba;
}
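The status filtering at the end of onCameraFrame can be modeled without OpenCV; this sketch (class name is mine) keeps only the indices whose status byte is 1, which is exactly which point pairs the loop above draws:

```java
// Models the byteStatus filter: keep index i only when status[i] == 1,
// i.e. only features for which the optical flow was actually found.
public class FlowFilter {
    public static java.util.List<Integer> trackedIndices(byte[] status) {
        java.util.List<Integer> kept = new java.util.ArrayList<>();
        for (int i = 0; i < status.length; i++) {
            if (status[i] == 1) kept.add(i);
        }
        return kept;
    }
}
```

Each kept index selects the matching entries of cornersPrev and cornersThis, giving the motion vector drawn as a line in the frame.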