Tearing when not expected after resizing the texture - Android

In certain circumstances, I want to expand a texture while the user zooms into an image, until the shortest edge fills the entire screen (as in standard gallery apps). However, the input to the OpenGL shader is a matrix from an ImageView, so certain translations need to be applied, along with corrective calculations for the different coordinate spaces.
EDIT:
The vertex shader code is simplistic and is as follows:
private final String vertexShader_ =
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position;\n" +
" v_texCoord = a_texCoord.xy;\n" +
"}\n";
where a_position references verticesData_ below. When the image first loads onto the screen, its position is accurate, based upon calculations from the display width and the portion of the screen that it occupies.
Additionally, I have the following fragment shader code:
private final String fragmentShader_ =
"precision mediump float;\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D texture;\n" +
"uniform mat3 transform;\n" +
"void main() {\n" +
" vec2 uv = (transform * vec3(v_texCoord, 1.0)).xy;\n" +
" gl_FragColor = texture2D( texture, uv );\n" +
"}\n";
where the mat3 transform uniform is input coming from an ImageView. Essentially, there is an ImageView underneath the OpenGL SurfaceView. It holds a much lower resolution image, and when the user swipes, the SurfaceView is hidden and the ImageView beneath it shows the same position the user was at in the SurfaceView.
However, later on, when I want to expand this texture, I am finding myself with unexpected results.
When panning back and forth, the screen moves where it is expected to move, so the translation in the x and y coordinates from the matrix is coming across OK. However, when zooming in, the image is tearing. It only tears when the bounds of the texture have grown beyond the screen dimensions. As can be seen below, the height is not introducing extra pixels when growing, but the width is tearing as it progresses.
In order to pass an appropriate matrix, the values going from the ImageView to the OpenGL SurfaceView are inverted and transposed, since the conversion between spaces requires it. A scale factor is passed into a listener that the Activity is registered on.
@Override
public void onTranslation(float imageScaleF, float[] matrixTranslationF)
{
    currentScaleForImage_ = imageScaleF;
    // If we are currently using the entire screen, then there is no need anymore to resize the texture itself.
    if(surfaceViewWidth_ * imageScaleF > displayViewWidth_ && surfaceViewHeight_ * imageScaleF > displayViewHeight_)
    {
    }
    else // Expand the size of the texture to be displayed on the screen.
    {
        maximumScaleForImage_ = imageScaleF;
        photoViewRenderer_.updateTextureSize(imageScaleF);
        photoSurfaceView_.requestRender();
    }
    matrixTranslationF = updateTranslationBasedOnTextureSize(matrixTranslationF);
    photoViewRenderer_.updateTranslationOfTexture(matrixTranslationF);
    photoSurfaceView_.requestRender();
}
But with the code above, the portion of the image that scrolls is always cut short, and if I attempt to correct it based upon the scale by uncommenting the line for updating the translation, it causes tears. So it seems that at this point the user below has gotten me into a better position with their input, but I am still a single step away, and I think it is within this function. The now-updated code below makes the corrections necessary to provide an appropriate translation between the matrices in the ImageView and the OpenGL texture coordinates.
private float[] updateTranslationBasedOnTextureSize(float[] matrixTranslationsF)
{
    if(scaleDirection_ == ScaleDirection.WIDTH)
    {
        matrixTranslationsF[0] = matrixTranslationsF[0] * maximumScaleForImage_;
    }
    else if(scaleDirection_ == ScaleDirection.HEIGHT)
    {
        matrixTranslationsF[4] = matrixTranslationsF[4] * maximumScaleForImage_;
    }
    return matrixTranslationsF;
}
In the first iteration I was causing tearing in the photographs, as seen below, or it was clipping the photograph due to improper conversions between the two spaces.
The width seems to be maintained, so it suggests to me that maybe something happens with values outside the standard bounds, but I am not sure what to try.

This streaking is happening because you have the texture wrap set to clamp to edge. You have something like:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
GL_REPEAT is the default and will repeat the image, but you're looking to not have any "effect" at the bottom. That bottom streaking is happening because of your math. You are getting the streaking only on the bottom and not the sides. As someone said in the comments, your texture coordinate range isn't correct: the min and max are 0 and 1, not -1 and 1. So at the bottom of your picture it's streaking because you're calculating a texture coordinate outside of the 0-1 bounds, probably getting into the <0 area or the >1 area. When texture2D(u_textureSampler, v_fragmentTexCoord0) gets a texture coordinate outside of the 0-1 range, it falls back on that WRAP_S / WRAP_T setting to know what value to give. It's repeating the edge pixel because you have GL_CLAMP_TO_EDGE.
Your actual problem, though, isn't this setting. It's that you are calculating an S or T outside of the range to begin with. That's not good form, because even if you were writing black in there it's wasted cycles on the fragment shader and will slow down your app. Basically, your formulas are wrong.
As for your zooming in math... it's math. You'll have to work it out for whatever you're trying to do. (Or ask for help.) You could do something like this, which is a simplified version of what I'm doing in my new photo editing app:
GLfloat aspect = photo.width / photo.height;
if (aspect > 1) {
    displayWidth = panelWidth;
    displayHeight = panelWidth / aspect;
}
else {
    displayHeight = panelHeight;
    displayWidth = panelHeight * aspect;
}
Where panelWidth and panelHeight are the size of the area where you're displaying your photo - which could be the full screen or something smaller. Then build your vertex data based on the display height and width. If you zoom in beyond the size of the panel, you'll need to then do a crop to keep the image inside the panel, both by reducing the display width or height (whichever one is too large) and also by changing your texture coordinates to match. This is very rough but something like this:
GLfloat sSize = (destSMax-destSMin) / panelZoom;
GLfloat tSize = (destTMax-destTMin) / panelZoom;
displaySMin = destSMin + panelPanX/coreTextureSize;
displayTMin = destTMin + panelPanY/coreTextureSize;
displaySMax = displaySMin + sSize;
displayTMax = displayTMin + tSize;
Then of course build your vertex data, use matrix math to move the square where you want it, and draw it.
I recommend rethinking your approach and using something more like what I'm doing above. Don't remove a percentage of the photo when it goes off screen. Instead calculate what is going to be seen and build your data based on that.
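If it helps, here is a rough Java sketch of the "build your vertex data" step, reusing the variable names from the snippets above (they are only illustrative; buffer upload and the positioning matrix are omitted, and the t axis may need flipping depending on how the photo was uploaded):
// Quad (triangle strip) sized to the fitted display area, with texture
// coordinates taken from the cropped window computed above.
private float[] buildQuadPositions(float displayWidth, float displayHeight) {
    float halfW = displayWidth / 2.0f;
    float halfH = displayHeight / 2.0f;
    return new float[] {
        -halfW, -halfH,   // bottom left
         halfW, -halfH,   // bottom right
        -halfW,  halfH,   // top left
         halfW,  halfH    // top right
    };
}

private float[] buildQuadTexCoords(float sMin, float sMax, float tMin, float tMax) {
    return new float[] {
        sMin, tMax,
        sMax, tMax,
        sMin, tMin,
        sMax, tMin
    };
}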
EDIT - Also, I realized my code is iOS code and this is an Android question. The same concept applies, though. The glTexParameteri call might be slightly different and it might be float instead of GLfloat.
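On Android the wrap calls from the top of this answer look like this with the GLES20 bindings (GLES30 works the same way):
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);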

Related

Render FBO to same FBO

I am trying to create a ghost-like camera filter. This requires mixing the previous frame into the current one. I use one FBO to do the mixing and a second one to simply put the content on the screen.
My implementation works on 4 out of the 5 devices I have tried. On the fifth (Samsung Galaxy S7) I get some random pixels.
The simplest shader that reproduces the error is the following (the frame counter and the cropping are just for debugging). The result is that on the center of the screen I get one line gradually going up.
uniform samplerExternalOES camTexture;
uniform sampler2D fbo;
uniform int frame_no;
varying vec2 v_CamTexCoordinate;

void main ()
{
    vec2 uv = v_CamTexCoordinate;
    if(frame_no < 10){
        gl_FragColor = texture2D(camTexture, uv);
    }else{
        if(uv.y > 0.2 && uv.y < 0.8 && uv.x > 0.2 && uv.x < 0.8)
            gl_FragColor = texture2D(fbo, uv + vec2(0.0, +0.005));
        else
            gl_FragColor = texture2D(camTexture, uv);
    }
}
But on the Samsung I get some correct pixels and some random ones, as in the following sample: some black and other random pixels going up together with the camera's pixels. Any idea of what might be wrong?
Fault sample
Correct sample

GLSL - fragment shaders for image manipulation/blending

I'm trying to build a shader that allows you to combine two images by applying a gradient opacity mask to the one on top, like you do in photoshop. I've gotten to the point where I can overlay the masked image over the other but as a newbie am confused about a few things.
It seems that the images inside the shader are sometimes skewed to fit the canvas size, and they always start at position 0,0. I have played around with a few snippets I have found to try and scale the textures, but always end up with unsatisfactory results.
I am curious if there is a standard way to size, skew, and translate textures within a view, or if images in GLSL are necessarily limited in some way that will stop me from accomplishing my goal.
I'm also unsure of how I am applying the gradient/mask and if it is the right way to do it, because I do not have a lot of control over the shape of the gradient at the moment.
Here's what I have so far:
precision highp float;
varying vec2 uv;
uniform sampler2D originalImage;
uniform sampler2D image;
uniform vec2 resolution;

void main(){
    float mask;
    vec4 result;
    vec2 position = gl_FragCoord.xy / ((resolution.x + resolution.y) * 2.0);
    mask = vec4(1.0, 1.0, 1.0, 1.0 - position.x).a;
    vec4 B = texture2D(image, uv);
    vec4 A = texture2D(originalImage, uv) * (mask);
    result = mix(B, A, A.a);
    gl_FragColor = result;
}
Which produces an image like this:
What I would like to be able to do is change the positions of the images independently and also make sure that they conform to their proper dimensions.
I have tried naively shifting positions like this:
vec2 pos = uv;
pos.y = pos.y + 0.25;
texture2D(image, pos)
Which does shift the texture, but leads to a bunch of strange lines dragging:
I tried to get rid of them like this:
gl_FragColor = uv.y < 0.25 ? vec4(0.0,0.0,0.0,0.0) : result;
but it does nothing
You really need to decide what you want to happen when the images are not the same size. What you probably want is for it to appear as if there's no image, so check your UV coordinates and use 0,0,0,0 when outside of the image:
// vec4 B = texture2D(image, uv);
vec4 getImage(sampler2D img, vec2 uv) {
    if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) {
        return vec4(0);
    }
    return texture2D(img, uv);
}

vec4 B = getImage(image, uv);
As for a standard way to size/skew/translate images, use a matrix:
uniform mat4 u_imageMatrix;
...
vec2 newUv = (u_imageMatrix * vec4(uv, 0, 1)).xy;
An example of implementing canvas 2d's drawImage using a texture matrix.
In general, though, I don't think most image manipulation programs/libraries would try to do everything in the shader. Rather, they'd build up the image with very, very simple primitives. My best guess would be that they use a shader that's just A * MASK, then draw B followed by A * MASK with blending on.
To put it another way, if you have 30 layers in Photoshop, they wouldn't generate one giant shader that computes the final image by taking in all 30 layers at once. Instead each layer would be applied on its own with simpler shaders.
I also would expect them to create a texture for the mask instead of using math in the shader. That way the mask can be arbitrarily complex, not just a 2-stop ramp.
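For illustration, a mask-texture version of the blend from the question could look roughly like this (maskImage is a hypothetical third sampler holding the gradient; everything else keeps the names from the shader above):
precision highp float;
varying vec2 uv;
uniform sampler2D originalImage;
uniform sampler2D image;
uniform sampler2D maskImage;   // hypothetical: the gradient baked into a texture

void main(){
    float mask = texture2D(maskImage, uv).a;   // arbitrary shape, not just a ramp
    vec4 B = texture2D(image, uv);
    vec4 A = texture2D(originalImage, uv) * mask;
    gl_FragColor = mix(B, A, A.a);
}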
Note I'm not saying you're doing it wrong. You're free to do whatever you want. I'm only saying I suspect that if you want to build a generic image manipulation library you'll have more success with smaller building blocks you combine rather than trying to do more complex things in shaders.
ps: I think getImage can be simplified to
vec4 getImage(sampler2D img, vec2 uv) {
    vec2 inside = step(0.0, uv) * step(-1.0, -uv);  // 1.0 only where 0.0 <= uv <= 1.0
    return texture2D(img, uv) * inside.x * inside.y;
}

Converting pixel co-ordinates to normalized co-ordinates at draw time in OpenGL 3.0

I am drawing a triangle in OpenGL like:
MyGLRenderer( )
{
    fSampleVertices = ByteBuffer.allocateDirect( fSampleVerticesData.length * 4 )
            .order ( ByteOrder.nativeOrder( ) ).asFloatBuffer( );
    fSampleVertices.put( fSampleVerticesData ).position ( 0 );
    Log.d( TAG, "MyGLRender( )" );
}

private FloatBuffer fSampleVertices;

private final float[] fSampleVerticesData =
        { .8f, .8f, 0.0f, -.8f, .8f, 0.0f, -.8f, -.8f, 0.0f };

public void onDrawFrame( GL10 unused )
{
    GLES30.glViewport ( 0, 0, mWidth, mHeight );
    GLES30.glClear ( GLES30.GL_COLOR_BUFFER_BIT );
    GLES30.glUseProgram ( dProgramObject1 );
    GLES30.glVertexAttribPointer ( 0, 3, GLES30.GL_FLOAT, false, 0, fSampleVertices );
    GLES30.glEnableVertexAttribArray ( 0 );
    GLES30.glDrawArrays( GLES30.GL_TRIANGLES, 0, 3 );
    //Log.d( TAG, "onDrawFrame( )" );
}
Since I have experimented with the co-ordinates, it doesn't take long to figure out that the visible area of the screen is between -1 and 1, so the triangle takes up 80% of the screen. I have also determined that the pixel dimensions of my GLSurfaceView are 2560 in width and 1600 in height.
So then, given a triangle with these pixel-based co-ordinates (fBoardOuter):
 1112.0f,   800.0f, 0.0f,
-1280.0f,   800.0f, 0.0f,
-1280.0f,  -800.0f, 0.0f
I have to either convert those pixel co-ordinates to something between -1 and 1, or find a way to have GL convert those co-ordinates at the time they are drawn. Since I am very new to OpenGL, I am looking for some guidance on how to do this.
My vertex shader is like:
String sVertexShader1 =
"#version 300 es \n"
+ "in vec4 vPosition; \n"
+ "void main() \n"
+ "{ \n"
+ " gl_Position = vPosition; \n"
+ "} \n";
Would I be correct then in saying that a pixel-based system would be called world co-ordinates? What I am trying to do right now is just some 2D drawing for a board game.
I've discovered that Android has this function:
orthoM(float[] m, int mOffset, float left, float right, float bottom, float top, float near, float far)
However, there is nothing in the documentation I've read so far that explains how to use that matrix, i.e. how a float[] with pixel co-ordinates can be transformed to normalized co-ordinates with that matrix in GLES30.
I've also found the documentation here:
http://developer.android.com/guide/topics/graphics/opengl.html
Based off the documentation I have tried to create an example:
http://pastebin.com/5PTsfSdz
In the pastebin example, I thought fSampleVertices would be much smaller and at the center of the screen, but it isn't; it still covers almost the entire screen, and fBoardOuter just shows me a black screen if I try to put it into glDrawArrays.
You will probably need to find a book or some good tutorials to get a strong grasp on some of these concepts. But since there some specific items in your question, I'll try and explain them as well as I can within this format.
The coordinate system you discovered, where the range is [-1.0, 1.0] in the x- and y coordinate directions, is officially called Normalized Device Coordinates, often abbreviated as NDC. Which is very similar to the name you came up with, so some of the OpenGL terminology is actually very logical. :)
At least as long as you're dealing with 2D coordinates, this is the coordinate range your vertex shader needs to produce. I.e. the coordinates you assign to the built-in gl_Position variable need to be within this range to be visible in the output. Things get slightly more complicated if you're dealing with 3D coordinates and are applying perspective projections, but we'll skip over that part for now.
Now, as you already guessed, you have two main options if you want to specify your coordinates in a different coordinate system:
1. You transform them to NDC in your code before you pass them to OpenGL.
2. You have OpenGL apply transformations to your input coordinates.
Option 2 is clearly the better one, since GPUs are very efficient at performing this job.
On a very simple level, this means that you modify the coordinates in your vertex shader. If you look at your very simple first vertex shader:
in vec4 vPosition;
void main()
{
gl_Position = vPosition;
}
you get the coordinates provided by your app code in the vPosition input variable, and you assign exactly the same coordinates to the vertex shader output gl_Position.
If you want to use a different coordinate system, you process the input coordinates in the vertex shader code, and assign those processed coordinates to the output instead.
Modern versions of OpenGL don't really have a name for those coordinate systems anymore. There used to be "model coordinates" and "world coordinates" when some of this stuff was still hardwired into a fixed pipeline. Now that this is done with programmable shader code, those concepts are not relevant anymore from the OpenGL point of view. All it cares about are the coordinates that come out of the vertex shader. Everything that happens before that is your own business.
The canonical way of applying linear transformations, which includes the translations and scaling you need for your intended use, is by multiplying the coordinates with a transformation matrix. You already discovered the android.opengl.Matrix package that contains some utility functions for building transformation matrices if you don't want to write the (simple) code yourself.
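For example, a minimal sketch for the 2560 x 1600 surface mentioned in the question, with co-ordinates measured from the center of the screen as in fBoardOuter (mWidth and mHeight are the values already used in onDrawFrame()):
// Build an orthographic projection with android.opengl.Matrix that maps the
// pixel range (-mWidth/2 .. mWidth/2, -mHeight/2 .. mHeight/2) to NDC (-1 .. 1).
float[] projectionMatrix = new float[16];
Matrix.orthoM(projectionMatrix, 0,
        -mWidth / 2f,  mWidth / 2f,   // left, right
        -mHeight / 2f, mHeight / 2f,  // bottom, top
        -1f, 1f);                     // near, far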
Once you have a transformation matrix, you pass it into the vertex shader as a uniform variable, and apply the matrix in your shader code. The way this looks in the shader code is for example:
in vec4 vPosition;
uniform mat4 TransformMat;
void main()
{
gl_Position = TransformMat * vPosition;
}
To set the value of this matrix, you need to get the location of the uniform variable once after linking the shader, where prog is your shader program:
GLint transformLoc = GLES20.glGetUniformLocation(prog, "TransformMat");
Then, at least once, and every time you want to change the matrix, you call:
GLES20.glUniformMatrix4fv(transformLoc, 1, false, mat, 0);
where mat is the matrix you either built yourself, or got from one of the utility functions in android.opengl.Matrix. Note that this call needs to be after you make the program current with glUseProgram().
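Putting it together, the per-frame update might look like this (a sketch reusing prog, transformLoc and the projectionMatrix built above):
GLES20.glUseProgram(prog);   // make the program current first
GLES20.glUniformMatrix4fv(transformLoc, 1, false, projectionMatrix, 0);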

Shaders and Uniforms. Not behaving as expected on the Galaxy S6

I've got a distance field shader that I use for font rendering in LibGDX. It takes a uniform that sets how bold the text should be. All this has been working fine for ages, but in the last week or so after a recent Galaxy S6 update that seems to be rolling out (in the US I believe) I've had several reports of incorrect font rendering. (If you have an S6 and want to see the problem you can download here)
My font rendering involves 3 passes, once for a drop shadow, once for a stroke and once for the main text. The drop shadow and stroke are drawn bolder than the main text (depending on font settings)
The problem I'm having is the S6 would appear to be ignoring me changing the uniform to make the text less bold, so it's drawing the text too bold and merging the letters into each other.
Below is an example of incorrect and correct rendering (No drop shadow in this).
The game has been out for over a year and installed on over 500k devices and this problem has only just started occurring.
I don't have a problematic S6 to test on which makes it tricky. Here's my font rendering method.
private GlyphLayout drawText(float x, float y, CharSequence str, float alignmentWidth, int alignment, boolean wrap, SpriteBatch drawBatch) {
    if (dropShadowColour.a > 0) {
        font.setColor(dropShadowColour.r, dropShadowColour.g, dropShadowColour.b, dropShadowColour.a * originalColour.a);
        font.draw(drawBatch, str,
                x + font.getCapHeight() * shadowOffset.x,
                y + font.getCapHeight() * shadowOffset.y,
                alignmentWidth, alignment, wrap);
        font.setColor(originalColour);
    }
    if (strokeSize != 0) {
        font.setColor(strokeColour);
        font.draw(drawBatch, str, x, y, alignmentWidth, alignment, wrap);
        drawBatch.flush();
        drawBatch.getShader().setUniformf("u_boldness", 0);
    }
    font.setColor(originalColour);
    return font.draw(drawBatch, str, x, y, alignmentWidth, alignment, wrap);
}
And the fragment shader
uniform sampler2D u_texture;
uniform float u_boldness;
uniform float u_smoothing;
varying vec4 v_color;
varying vec2 v_texCoord;

void main()
{
    float distance = texture2D(u_texture, v_texCoord).b;
    float alpha = smoothstep(0.5 - u_boldness - u_smoothing, 0.5 - u_boldness + u_smoothing, distance);
    gl_FragColor = vec4(v_color.rgb * alpha, alpha * v_color.a);
}
Is there anything else I should be doing when setting the uniform? I'm not expected to begin and end the shader am I? Any pointers would be useful.
Update: If I call drawBatch.end(); drawBatch.begin(); instead of drawBatch.flush(); then the issue is resolved. This, however, is not efficient, so I'd like a better solution.
Another Update
I use this to get around the issue for now
public static void safeFlush(SpriteBatch spriteBatch){
    spriteBatch.flush();
    spriteBatch.getShader().end();
    spriteBatch.getShader().begin();
}
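So inside drawText() above, the flush before changing the boldness uniform becomes (a sketch of the substitution):
// Replace the plain flush() with the workaround before changing the uniform.
safeFlush(drawBatch);
drawBatch.getShader().setUniformf("u_boldness", 0);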

android.graphics.Matrix to OpenGL ES 2.0 texture translation

Currently, the application displays an ImageView that successfully zooms and pans around on the screen. On top of that image, I would like to provide a texture that I would like to update itself to zoom or pan when the image below it zooms or pans.
As I understand it, this should be possible by using the getImageMatrix() of my current ImageView setup and then applying that matrix to the textured bitmap that is on top of the original image.
Edit & Resolution: (with strong aide from the selected answer below)
Currently, the panning in the texture occurs at a different speed than that of the ImageView, but when that has been resolved I will update this posting with additional edits and provide the solution to the entirety of the application. Then only the solution and a quick problem description paragraph will remain. Perhaps I'll even post a surface view and renderer source code for using a zoomable surface view.
In order to map an ImageView to an OpenGL texture correctly, there were a couple of things that needed to be done. Hopefully, this will aid other people who might want to use a zoomable SurfaceView in the future.
The shader code is written below. The fragment shader takes a 3x3 transformation matrix from the getImageMatrix() of any ImageView, so the screen can be updated from within the onDraw() method.
private final String vertexShader_ =
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position;\n" +
" v_texCoord = a_texCoord.xy;\n" +
"}\n";
private final String fragmentShader_ =
"precision mediump float;\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D texture;\n" +
"uniform mat3 transform;\n" +
"void main() {\n" +
" vec2 uv = (transform * vec3(v_texCoord, 1.0)).xy;\n" +
" gl_FragColor = texture2D( texture, uv );\n" +
"}\n";
At the start, the translation matrix for OpenGL is simply an identity matrix, and our ImageView will update these shared values with the shader through a listener.
private float[] translationMatrix = {1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f};
...
uniforms_[TRANSLATION_UNIFORM] = glGetUniformLocation(program_, "transform");
checkGlError("glGetUniformLocation transform");
if (uniforms_[TRANSLATION_UNIFORM] == -1) {
throw new RuntimeException("Could not get uniform location for transform");
}
....
glUniformMatrix3fv(uniforms_[TRANSLATION_UNIFORM], 1, false, FloatBuffer.wrap(translationMatrix));
....
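For reference, the renderer-side method that the listener calls (updateTranslationOfTexture from the first posting) can be as simple as copying the nine values into translationMatrix, so that the glUniformMatrix3fv call above re-uploads them on the next frame; a minimal sketch:
// Store the latest 3x3 values coming from the ImageView listener.
public void updateTranslationOfTexture(float[] matrixValues)
{
    System.arraycopy(matrixValues, 0, translationMatrix, 0, 9);
}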
However, the OpenGL code needs the inverse of the matrix values that come from Android, as well as a transpose operation performed on them, before they are handed off to the renderer for display on the screen. This has to do with the way that information is stored.
protected void onMatrixChanged()
{
    //...
    float[] matrixValues = new float[9];
    Matrix imageViewMatrix = getImageViewMatrix();
    Matrix invertedMatrix = new Matrix();
    imageViewMatrix.invert(invertedMatrix);
    invertedMatrix.getValues(matrixValues);
    transpose(matrixValues);
    matrixChangedListener.onTranslation(matrixValues);
    //...
}
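The transpose(matrixValues) helper is not shown above; a minimal version for a 3x3 matrix flattened into a float[9] could look like this:
// In-place transpose of a row-major 3x3 matrix: swap the off-diagonal entries.
private void transpose(float[] m)
{
    float tmp;
    tmp = m[1]; m[1] = m[3]; m[3] = tmp;
    tmp = m[2]; m[2] = m[6]; m[6] = tmp;
    tmp = m[5]; m[5] = m[7]; m[7] = tmp;
}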
In addition to the values being in the wrong locations for input into the OpenGL renderer, we also have the problem that the translations are on the scale of imageWidth and imageHeight instead of the normalized [0, 1] range that OpenGL expects. So, in order to correct this, we have to divide the translation components (the last column) by the image width and height.
matrixValues[6] = matrixValues[6] / getImageWidth();
matrixValues[7] = matrixValues[7] / getImageHeight();
Then, after you have accomplished this, you can start mapping with the ImageView matrix functions or the updated ImageTouchView matrix functions (which have some additional nice features when handling images on Android).
The effect can be seen below with the ImageView behind it taking up the whole screen and the texture in front of it being updated based upon updates from the ImageView. This handles zooming and panning translations between the two.
The only other step in the process would be to change the size of the SurfaceView to match that of the ImageView in order to get the full-screen experience when the user is zooming in. In order to show only a SurfaceView, you can simply put an 'invisible' ImageView behind it in the background, and just let the ImageTouchView updates change the way that your OpenGL SurfaceView is shown.
Vote up on the answer below, as they helped me offline through chat significantly to get to this point in understanding what needs to be done for linking an imageview with a texture. They deserve it.
If you do end up going with the vertex shader modification below, instead of the fragment shader resolution, then it would seem that the values no longer need to be inverted to work with the pinch zoom (and in fact it works backwards if you leave the inversion in). Additionally, panning seems to work backwards from what is expected as well.
You could use the fragment shader for that: just set this matrix as a uniform parameter of the fragment shader and modify the texture coordinates using this matrix.
precision mediump float;
uniform sampler2D samp;
uniform mat3 transform;
varying vec2 texCoord;

void main()
{
    vec2 uv = (transform * vec3(texCoord, 1.0)).xy;
    gl_FragColor = texture2D(samp, uv);
}
Note that the matrix you get from the ImageView is in row-major order; you should transpose it to get OpenGL's column-major order. You will also have to divide the x and y translation components by the width and height, respectively.
Update: It just occurred to me that modifying the vertex shader would be better. You can just scale your quad with the same matrix you used in the fragment shader; it would be clipped to your screen if too big, it will not have a problem with edge clamping when small, and the matrix multiplication will happen only 4 times instead of once for each pixel. Just do this in the vertex shader:
gl_Position = vec4((transform * vec3(a_position.xy, 1.0)).xy, 0.0, 1.0);
and use the untransformed texCoord in the fragment shader.
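For completeness, the modified vertex shader would then look something like this (a sketch built from the attribute names above; the fragment shader goes back to sampling v_texCoord directly):
attribute vec4 a_position;
attribute vec4 a_texCoord;
uniform mat3 transform;
varying vec2 v_texCoord;

void main() {
    // Scale/translate the quad itself instead of the texture coordinates.
    gl_Position = vec4((transform * vec3(a_position.xy, 1.0)).xy, 0.0, 1.0);
    v_texCoord = a_texCoord.xy;   // left untransformed
}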
