Shader not rendering in Android export - android

My game is made with the Godot engine using GLES2.
This shader displays stars as many little white dots. It works fine in the editor, but after exporting the project to Android only the background colour shows.
shader_type canvas_item;
uniform vec4 bg_color: hint_color;
float rand(vec2 st) {
return fract(sin(dot(st.xy, vec2(12.9898,78.233))) * 43758.5453123);
}
void fragment() {
float size = 100.0;
float prob = 0.9;
vec2 pos = floor(1.0 / size * FRAGCOORD.xy);
float color = 0.0;
float starValue = rand(pos);
if (starValue > prob)
{
vec2 center = size * pos + vec2(size, size) * 0.5;
float t = 0.9 + 0.2 * sin(TIME * 8.0 + (starValue - prob) / (1.0 - prob) * 45.0);
color = 1.0 - distance(FRAGCOORD.xy, center) / (0.5 * size);
color = color * t / (abs(FRAGCOORD.y - center.y)) * t / (abs(FRAGCOORD.x - center.x));
}
else if (rand(SCREEN_UV.xy / 20.0) > 0.996)
{
float r = rand(SCREEN_UV.xy);
color = r * (0.85 * sin(TIME * (r * 5.0) + 720.0 * r) + 0.95);
}
COLOR = vec4(vec3(color),1.0) + bg_color;
}
Any help please?
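One common cause (an assumption, since I can't test on your device): on many Android GPUs, GLES2 fragment shaders run at mediump precision, which is often half-float (about 10 mantissa bits, max ~65504). The classic `fract(sin(...) * 43758.5453)` noise trick needs the fractional part of a number in the tens of thousands, which half floats cannot represent, so `rand` collapses to a constant and only `bg_color` survives. A small Java sketch that crudely simulates half-float mantissa truncation (helper names are mine) illustrates the collapse:

```java
public class MediumpRand {
    // Crudely simulate a mediump (half-precision-like) float by
    // keeping only the top 10 of the 23 float mantissa bits.
    static float toMediump(float v) {
        int bits = Float.floatToRawIntBits(v);
        bits &= 0xFFFFE000; // drop the low 13 mantissa bits
        return Float.intBitsToFloat(bits);
    }

    static float fract(float v) {
        return v - (float) Math.floor(v);
    }

    public static void main(String[] args) {
        // A typical intermediate of the rand() one-liner:
        // sin(...) * 43758.5453 with sin(...) ~ 0.5
        float big = 21879.2726f;
        System.out.println("float fract:   " + fract(big));
        System.out.println("mediump fract: " + fract(toMediump(big)));
        // At half precision the spacing between representable values
        // near 21879 is 16, so the fractional part is lost entirely.
    }
}
```

If this is the cause, the usual workarounds are keeping the noise input small (e.g. feeding `pos` rather than raw `FRAGCOORD`-scale values into the multiply) or using a precision-friendly hash instead of the `sin`-based one.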

Related

Rewriting an Android OpenGL filter to Metal (for CIFilter)

There are dozens of image filters written for the Android version of our app in GLSL (ES). As of iOS 12, OpenGL is deprecated, and CIFilter kernels have to be written in Metal.
I had some previous background in OpenGL, however writing CIFilter kernels in Metal is new to me.
Here is one of the filters. Could you help me in translating it to Metal as a CIFilter kernel? That would provide a good example for me so I could translate others.
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES sTexture;
uniform float texelWidth;
uniform float texelHeight;
uniform float intensivity;
void main() {
float SIZE = 1.25 + (intensivity / 100.0)*2.0;
vec4 color;
float min = 1.0;
float max = 0.0;
float val = 0.0;
for (float x = -SIZE; x < SIZE; x++) {
for (float y = -SIZE; y < SIZE; y++) {
color = texture2D(sTexture, vTextureCoord + vec2(x * texelWidth, y * texelHeight));
val = (color.r + color.g + color.b) / 3.;
if (val > max) { max = val; } else if (val < min) { min = val; }
}
}
float range = 5. * (max - min);
gl_FragColor = vec4(pow(1. - range, SIZE * 1.5));
gl_FragColor = vec4((gl_FragColor.r + gl_FragColor.g + gl_FragColor.b) / 3. > 0.75 ? vec3(1.) : gl_FragColor.rgb, 1.);
}
Here's the Metal source for a kernel that attempts to replicate your described filter:
#include <metal_stdlib>
#include <CoreImage/CoreImage.h>
using namespace metal;
extern "C" {
namespace coreimage {
float4 sketch(sampler src, float texelWidth, float texelHeight, float intensity40) {
float size = 1.25f + (intensity40 / 100.0f) * 2.0f;
float minVal = 1.0f;
float maxVal = 0.0f;
for (float x = -size; x < size; ++x) {
for (float y = -size; y < size; ++y) {
float4 color = src.sample(src.coord() + float2(x * texelWidth, y * texelHeight));
float val = (color.r + color.g + color.b) / 3.0f;
if (val > maxVal) {
maxVal = val;
} else if (val < minVal) {
minVal = val;
}
}
}
float range = 5.0f * (maxVal - minVal);
float4 outColor(pow(1.0f - range, size * 1.5f));
outColor = float4((outColor.r + outColor.g + outColor.b) / 3.0f > 0.75f ? float3(1.0f) : outColor.rgb, 1.0f);
return outColor;
}
}
}
I assume you're already familiar with the basics of how to correctly build Metal shaders into a library that can be loaded by Core Image.
You can instantiate your kernel at runtime by loading the default Metal library and requesting the "sketch" function (the name is arbitrary, so long as it matches the kernel source):
NSURL *libraryURL = [NSBundle.mainBundle URLForResource:@"default" withExtension:@"metallib"];
NSData *libraryData = [NSData dataWithContentsOfURL:libraryURL];
NSError *error;
CIKernel *kernel = [CIKernel kernelWithFunctionName:@"sketch" fromMetalLibraryData:libraryData error:&error];
You can then apply this kernel to an image by wrapping it in your own CIFilter subclass, or just invoke it directly:
CIImage *outputImage = [kernel applyWithExtent:CGRectMake(0, 0, width, height)
roiCallback:^CGRect(int index, CGRect destRect)
{ return destRect; }
arguments:@[inputImage, @(1.0f/width), @(1.0f/height), @(60.0f)]];
I've tried to select sensible defaults for each of the arguments (the first of which should be an instance of CIImage), but of course these can be adjusted to taste.

GLSL - a shader does not work using mediump precision

I have an atmospheric-scattering shader that I found on the Internet. In fact, it is a WebGL (three.js) shader, and I am trying to run it on mobile devices. I found out that it works great on Qualcomm and Kirin chipsets when I use highp precision, but doesn't work on Mali chipsets at all (just a black screen), although it compiles without any errors. I assume this happens because Mali GPUs (or similar ones) don't have enough highp precision for this shader. If I use mediump precision it doesn't work on any device (just a black screen). Is it possible to make this (or any) shader work using mediump precision? Or how can I make this shader work on all popular chipsets?
Vertex shader:
attribute vec3 a_position;
attribute vec2 a_texCoord0;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
varying vec3 vWorldPosition;
void main() {
vec4 worldPosition = u_worldTrans * vec4( a_position, 1.0 );
vWorldPosition = worldPosition.xyz;
gl_Position = u_projTrans * worldPosition;
}
Fragment shader:
#ifdef GL_ES
precision highp float;
#endif
uniform vec3 sunPosition;
varying vec3 vWorldPosition;
const float turbidity = 3.2;
const float reileigh = 2.7;
const float luminance = 1.0;
const float mieCoefficient = 0.007;
const float mieDirectionalG = .990;
#define e 2.718281828459
#define pi 3.141592653589
#define n 1.0003
#define pn 0.035
const vec3 lambda = vec3(0.00000068, 0.00000055, 0.00000045);
const vec3 K = vec3(0.686, 0.678, 0.666);
const vec3 up = vec3(0.0, 1.0, 0.0);
#define v 4.0
#define rayleighZenithLength 8400.0
#define mieZenithLength 1250.0
#define EE 1000.0
#define sunAngularDiameterCos 0.999956676946
#define cutoffAngle pi/1.95
#define steepness 1.5
vec3 simplifiedRayleigh()
{
return 0.0005 / vec3(94., 40., 18.);
}
float rayleighPhase(float cosTheta)
{
return (3.0 / (16.0*pi)) * (1.0 + pow(cosTheta, 2.0));
}
vec3 totalMie(vec3 lambda, vec3 K, float T)
{
float val = 10E-18;
float c = (0.2 * T ) * val;
return 0.434 * c * pi * pow((2.0 * pi) / lambda, vec3(v - 2.0)) * K;
}
float hgPhase(float cosTheta, float g)
{
return (1.0 / (4.0*pi)) * ((1.0 - pow(g, 2.0)) / pow(abs(1.0 - 2.0*g*cosTheta + pow(g, 2.0)), 1.5));
}
float sunIntensity(float zenithAngleCos)
{
return EE * max(0.0, 1.0 - exp(-((cutoffAngle - acos(zenithAngleCos))/steepness)));
}
#define A 0.15
#define B 0.50
#define C 0.10
#define D 0.20
#define E 0.02
#define F 0.30
#define W 1000.0
vec3 Uncharted2Tonemap(vec3 x)
{
return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}
void main()
{
vec3 cameraPos = vec3(0., 0., -500.);
float sunfade = 1.0-clamp(1.0-exp((sunPosition.y/4000.0)),0.0,1.0);
float reileighCoefficient = reileigh - (1.0* (1.0-sunfade));
vec3 sunDirection = normalize(sunPosition);
float sunE = sunIntensity(dot(sunDirection, up));
vec3 betaR = simplifiedRayleigh() * reileighCoefficient;
vec3 betaM = totalMie(lambda, K, turbidity) * mieCoefficient;
float zenithAngle = acos(max(0.0, dot(up, normalize(vWorldPosition - cameraPos))));
float sR = rayleighZenithLength / (cos(zenithAngle) + 0.15 * pow(abs(93.885 - ((zenithAngle * 180.0) / pi)), -1.253));
float sM = mieZenithLength / (cos(zenithAngle) + 0.15 * pow(abs(93.885 - ((zenithAngle * 180.0) / pi)), -1.253));
vec3 Fex = exp(-(betaR * sR + betaM * sM));
float cosTheta = dot(normalize(vWorldPosition - cameraPos), sunDirection);
float rPhase = rayleighPhase(cosTheta*0.5+0.5);
vec3 betaRTheta = betaR * rPhase;
float mPhase = hgPhase(cosTheta, mieDirectionalG);
vec3 betaMTheta = betaM * mPhase;
vec3 Lin = pow(abs(sunE * ((betaRTheta + betaMTheta) / (betaR + betaM)) * (1.0 - Fex)),vec3(1.5));
Lin *= mix(vec3(1.0),pow(sunE * ((betaRTheta + betaMTheta) / (betaR + betaM)) * Fex,vec3(1.0/2.0)),clamp(pow(1.0-dot(up,sunDirection),5.0),0.0,1.0));
vec3 L0 = vec3(0.1) * Fex;
float sundisk = smoothstep(sunAngularDiameterCos,sunAngularDiameterCos+0.00002,cosTheta);
L0 += (sunE * 19000.0 * Fex)*sundisk;
vec3 whiteScale = 1.0/Uncharted2Tonemap(vec3(W));
vec3 texColor = (Lin+L0);
texColor *= 0.04 ;
texColor += vec3(0.0,0.001,0.0025)*0.3;
vec3 curr = Uncharted2Tonemap((log2(2.0/pow(luminance,4.0)))*texColor);
vec3 color = curr*whiteScale;
vec3 retColor = pow(color,vec3(1.0/(1.2+(1.2*sunfade))));
gl_FragColor.rgb = retColor;
gl_FragColor.a = 1.0;
}
I'm using LibGDX, so my shader class is below:
public class SkyShader implements Shader{
ShaderProgram program;
Camera specCamera;
int WIDTH, HEIGHT;
int u_projTrans;
int u_worldTrans;
int u_sunPosition;
int u_res;
int u_luminance;
int u_turbidity;
int u_reileigh;
int u_mieCoefficient;
int u_mieDirectionalG;
public Vector3 sunDirection = new Vector3();
Matrix4 viewInvTraMatrix = new Matrix4();
@Override
public void init() {
program = new ShaderProgram(vert, frag);
if (!program.isCompiled())throw new RuntimeException(program.getLog());
else Gdx.app.log("sky program", "shader compiled successfully!");
u_projTrans = program.getUniformLocation("u_projTrans");
u_worldTrans = program.getUniformLocation("u_worldTrans");
u_sunPosition = program.getUniformLocation("sunPosition");
u_res = program.getUniformLocation("u_res");
}
@Override
public void dispose() {
if(program!=null){
program.dispose();
program = null;
}
}
static final float delta = 0.0028526667f;
Matrix4 projectionMatrix = new Matrix4();
@Override
public void begin(Camera camera, RenderContext context) {
program.begin();
projectionMatrix.set(camera.projection);
if(Settings.landscape)
projectionMatrix.rotate(0, 1, 0, (Settings.EarthRotation-100) * .4f);
else projectionMatrix.rotate(0, 1, 0, (Settings.EarthRotation-100) * .7f);
program.setUniformMatrix(u_projTrans, projectionMatrix);
if(Settings.EnableSlightShadowGradient)
program.setUniformf(u_res, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
float rotation = Settings.EarthRotation * delta -.0375f;//- .00674f;
float azimuth = rotation; //_azimuth;
float inclination = .3f; //skyParams[i][4];
float theta = (float) (Math.PI * ( inclination - 0.5f ));
float phi = (float) (2f * Math.PI * ( azimuth - 0.5f ));
float distance = 4000;//400000;
sunDirection.x = (float) (distance * Math.cos( phi ));
sunDirection.y = (float) (distance * Math.sin( phi ) * Math.sin( theta ));
sunDirection.z = (float) (distance * Math.sin( phi ) * Math.cos( theta ));
context.setDepthTest(GL20.GL_LEQUAL);
context.setCullFace(GL20.GL_FRONT);
}
@Override
public boolean canRender(Renderable arg0) {
return true;
}
@Override
public int compareTo(Shader arg0) {
return 0;
}
@Override
public void end() {
program.end();
}
@Override
public void render(Renderable renderable) {
program.setUniformMatrix(u_worldTrans, renderable.worldTransform);
program.setUniformf(u_sunPosition, sunDirection);
renderable.mesh.render(program, renderable.primitiveType,
renderable.meshPartOffset, renderable.meshPartSize);
}
}
And finally, I render it using the following code:
modelBatch.begin(cam);
modelBatch.render(sphere, skyShader);
modelBatch.end();
Where sphere is:
public ModelInstance createSphere() {
ModelBuilder modelBuilder = new ModelBuilder();
float radius = 4500;//450000;
Model tempSphere;
tempSphere = modelBuilder.createSphere(radius, radius, radius, 32, 8,
new Material(),
VertexAttributes.Usage.Position | VertexAttributes.Usage.Normal | VertexAttributes.Usage.TextureCoordinates
);
return new ModelInstance(tempSphere,0,0,0);
}
Original shader is here
EDIT
If I run the original shader (link above) on the problem devices in a browser, it works perfectly!
Thanks in advance.
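One likely suspect, which I can't verify on the affected hardware so treat it as a hypothesis: mediump floats are only guaranteed a range of roughly ±65504 (half-float), and this shader produces far larger intermediates, e.g. `sunE` up to `EE = 1000.0` multiplied by `19000.0` for the sun disk, and `Uncharted2Tonemap(vec3(W))` with `W = 1000.0`. Those overflow to infinity at mediump, and subsequent math turns into NaN, which renders black. A Java sketch (simulating a mediump float that overflows past the half-float maximum; the helper is mine, not LibGDX API):

```java
public class MediumpRange {
    static final float HALF_MAX = 65504f; // guaranteed mediump range in GLES

    // Simulate a mediump float that overflows to infinity past HALF_MAX.
    static float mediump(float v) {
        if (Math.abs(v) > HALF_MAX) {
            return v > 0 ? Float.POSITIVE_INFINITY : Float.NEGATIVE_INFINITY;
        }
        return v;
    }

    public static void main(String[] args) {
        float sunE = 1000.0f;                      // EE at full sun intensity
        float sunDisk = mediump(sunE * 19000.0f);  // L0 += sunE * 19000.0 * Fex
        System.out.println("sunE * 19000 at mediump = " + sunDisk); // Infinity
        // Later math then collapses: any expression that divides two such
        // overflowed values (as tone mapping effectively does) yields NaN.
        System.out.println("inf/inf = " + (sunDisk / (sunDisk + 1.0f)));
    }
}
```

If this is the cause, the usual fix is rescaling the big constants so every intermediate stays below ~6.5e4, or explicitly marking the critical computations `highp` where the driver honours it.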

Draw centered circle

I'm trying to figure out how to draw a centered circle using fragment shader. I don't quite understand how to accomplish this. This is what I got so far, but the result is a white screen.
I want to be able to draw it any size and be able to change the offsets as I like (move the circle around).
void main()
{
float radius = 10.0;
float viewWidth = 340.0;
float viewHeight = 500.0;
float offsetX = viewWidth / 2.0;
float offsetY = viewHeight / 2.0;
float factorX = viewWidth / ( 360.0 / 6.3 );
float factorY = viewHeight / ( 360.0 / 6.3 );
float angleX = gl_FragCoord.x / factorX;
float angleY = gl_FragCoord.y / factorY;
float x = offsetX + ( sin( angleX ) * radius );
float y = offsetY + ( cos( angleY ) * radius );
float c = x + y;
gl_FragColor = vec4( c, c, c, 1.0 );
}
Remember, this program runs separately for each individual fragment. Each one need only decide if it's in or out of the circle. No need to use sin and cos here, just measure the distance from the center of the viewport, to see if the fragment is in the circle.
Here's a disc, which is even simpler: http://glslsandbox.com/e#28997.0
uniform vec2 resolution;
void main( void ) {
vec2 position = ( gl_FragCoord.xy / resolution.xy ) - 0.5;
position.x *= resolution.x / resolution.y;
float circle = 1.0 - smoothstep(0.2, 0.21, length(position));
gl_FragColor = vec4( vec3( circle ), 1.0 );
}
And here's a circle, made by tweaking the disc a little: http://glslsandbox.com/e#28997.1
uniform vec2 resolution;
void main( void ) {
vec2 position = ( gl_FragCoord.xy / resolution.xy ) - 0.5;
position.x *= resolution.x / resolution.y;
float circle = 1.0 - smoothstep(0.003, 0.005, abs(length(position) - 0.2));
gl_FragColor = vec4( vec3( circle ), 1.0 );
}

Algorithm to draw arrowhead at the end of arbitrary line in an Android custom View

I've been trying to come up with an algorithm to draw an arrow in a custom View, using Path, but I haven't figured out how to get the coordinates of the arrowhead tips. The line startpoint and endpoint coordinates are arbitrary, the angle of the arrowhead relative to the line and the length of the arrowhead are fixed.
I think I have to use trigonometry somehow, but I'm not sure how.
My friend came up with a math equation, which I have translated into java code here:
public static void calculateArrowHead(Point start, Point end, double angleInDeg, double tipLength){
double x1 = end.getX();
double x2 = start.getX();
double y1 = end.getY();
double y2 = start.getY();
double alpha = Math.toRadians(angleInDeg);
double l1 = Math.sqrt(Math.pow(x2-x1, 2) + Math.pow(y2-y1, 2)); // length of the arrow line
double l2 = tipLength;
double a = Math.pow(y2-y1, 2) + Math.pow(x2-x1, 2);
double b = -2 * l1 * l2 * Math.cos(alpha) * (y2 - y1);
double c = Math.pow(l1, 2) * Math.pow(l2, 2) * Math.pow(Math.cos(alpha), 2) - Math.pow(l2, 2) * Math.pow(x2-x1, 2);
double s2a = (-b + Math.sqrt(Math.pow(b, 2) - 4 * a * c)) / (2 * a);
double s2b = (-b - Math.sqrt(Math.pow(b, 2) - 4 * a * c)) / (2 * a);
double s1a = (l1 * l2 * Math.cos(alpha) - s2a * (y2 - y1)) / (x2-x1);
double s1b = (l1 * l2 * Math.cos(alpha) - s2b * (y2 - y1)) / (x2-x1);
double x3a = s1a + x1;
double y3a = s2a + y1;
double x3b = s1b + x1;
double y3b = s2b + y1;
System.out.println("(A) x:" + (int)x3a + "; y:" + (int)y3a);
System.out.println("(B) x:" + (int)x3b + "; y:" + (int)y3b);
}
I haven't tested it thoroughly, but for the first few tests, it appears to be correct.
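For comparison, there is a standard approach that avoids solving a quadratic: take the line's angle with `Math.atan2`, then place each tip `tipLength` back from the end point at `angle ± alpha`. A self-contained sketch (plain Java, no Android classes; names are mine):

```java
public class ArrowHead {
    /**
     * Tips of an arrowhead at (endX, endY) for a line from (startX, startY).
     * Returns {ax, ay, bx, by} for the two tip points.
     */
    static double[] tips(double startX, double startY,
                         double endX, double endY,
                         double angleDeg, double tipLength) {
        double alpha = Math.toRadians(angleDeg);
        // angle of the line, pointing back from the end toward the start
        double back = Math.atan2(startY - endY, startX - endX);
        return new double[] {
            endX + tipLength * Math.cos(back - alpha), // tip A x
            endY + tipLength * Math.sin(back - alpha), // tip A y
            endX + tipLength * Math.cos(back + alpha), // tip B x
            endY + tipLength * Math.sin(back + alpha), // tip B y
        };
    }

    public static void main(String[] args) {
        // horizontal arrow to the right, 45-degree head of length 10:
        // both tips land ~7.07 px short of the end, above and below the line
        double[] t = tips(0, 0, 100, 0, 45, 10);
        System.out.printf("A=(%.2f, %.2f) B=(%.2f, %.2f)%n",
                t[0], t[1], t[2], t[3]);
    }
}
```

The resulting points can be fed straight into a `Path` (`moveTo`/`lineTo`) to stroke the head.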

Trying to convert Bilinear Interpolation code from Java to C/C++ on Android

Background
I've made a tiny Android library for handling bitmaps using JNI (link here)
A while back I wrote some Bilinear Interpolation code as a possible algorithm for scaling images. The algorithm is a bit involved and uses the surrounding pixels to form each target pixel.
The problem
Even though there are no errors (no compilation errors and no runtime errors), the output image looks like this (width scaled by x2):
The code
Basically, the original Java code used SWT and supported only RGB, but the logic is the same for the alpha channel. It used to work perfectly (though now that I look at it, it seems to create a lot of objects along the way).
Here's the Java code:
/** class for resizing imageData using the Bilinear Interpolation method */
public class BilinearInterpolation
{
/** the method for resizing the imageData using the Bilinear Interpolation algorithm */
public static void resize(final ImageData inputImageData,final ImageData newImageData,final int oldWidth,final int oldHeight,final int newWidth,final int newHeight)
{
// position of the top left pixel of the 4 pixels to use interpolation on
int xTopLeft,yTopLeft;
int x,y,lastTopLefty;
final float xRatio=(float)newWidth/(float)oldWidth,yratio=(float)newHeight/(float)oldHeight;
// Y color ratio to use on left and right pixels for interpolation
float ycRatio2=0,ycRatio1=0;
// pixel target in the src
float xt,yt;
// X color ratio to use on left and right pixels for interpolation
float xcRatio2=0,xcratio1=0;
// copy data from source image to RGB values:
RGB rgbTopLeft,rgbTopRight,rgbBottomLeft=null,rgbBottomRight=null,rgbTopMiddle=null,rgbBottomMiddle=null;
RGB[][] startingImageData;
startingImageData=new RGB[oldWidth][oldHeight];
for(x=0;x<oldWidth;++x)
for(y=0;y<oldHeight;++y)
{
rgbTopLeft=inputImageData.palette.getRGB(inputImageData.getPixel(x,y));
startingImageData[x][y]=new RGB(rgbTopLeft.red,rgbTopLeft.green,rgbTopLeft.blue);
}
// do the resizing:
for(x=0;x<newWidth;x++)
{
xTopLeft=(int)(xt=x/xRatio);
// when meeting the most right edge, move left a little
if(xTopLeft>=oldWidth-1)
xTopLeft--;
if(xt<=xTopLeft+1)
{
// we are between the left and right pixel
xcratio1=xt-xTopLeft;
// color ratio in favor of the right pixel color
xcRatio2=1-xcratio1;
}
for(y=0,lastTopLefty=Integer.MIN_VALUE;y<newHeight;y++)
{
yTopLeft=(int)(yt=y/yratio);
// when meeting the most bottom edge, move up a little
if(yTopLeft>=oldHeight-1)
yTopLeft--;
// we went down only one rectangle
if(lastTopLefty==yTopLeft-1)
{
rgbTopLeft=rgbBottomLeft;
rgbTopRight=rgbBottomRight;
rgbTopMiddle=rgbBottomMiddle;
rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
rgbBottomMiddle=new RGB((int)(rgbBottomLeft.red*xcRatio2+rgbBottomRight.red*xcratio1),(int)(rgbBottomLeft.green*xcRatio2+rgbBottomRight.green*xcratio1),(int)(rgbBottomLeft.blue*xcRatio2+rgbBottomRight.blue*xcratio1));
}
else if(lastTopLefty!=yTopLeft)
{
// we went to a totally different rectangle (happens in every loop start,and might happen more when making the picture smaller)
rgbTopLeft=startingImageData[xTopLeft][yTopLeft];
rgbTopRight=startingImageData[xTopLeft+1][yTopLeft];
rgbTopMiddle=new RGB((int)(rgbTopLeft.red*xcRatio2+rgbTopRight.red*xcratio1),(int)(rgbTopLeft.green*xcRatio2+rgbTopRight.green*xcratio1),(int)(rgbTopLeft.blue*xcRatio2+rgbTopRight.blue*xcratio1));
rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
rgbBottomMiddle=new RGB((int)(rgbBottomLeft.red*xcRatio2+rgbBottomRight.red*xcratio1),(int)(rgbBottomLeft.green*xcRatio2+rgbBottomRight.green*xcratio1),(int)(rgbBottomLeft.blue*xcRatio2+rgbBottomRight.blue*xcratio1));
}
lastTopLefty=yTopLeft;
if(yt<=yTopLeft+1)
{
// color ratio in favor of the bottom pixel color
ycRatio1=yt-yTopLeft;
ycRatio2=1-ycRatio1;
}
// prepared all pixels to look at, so finally set the new pixel data
newImageData.setPixel(x,y,inputImageData.palette.getPixel(new RGB((int)(rgbTopMiddle.red*ycRatio2+rgbBottomMiddle.red*ycRatio1),(int)(rgbTopMiddle.green*ycRatio2+rgbBottomMiddle.green*ycRatio1),(int)(rgbTopMiddle.blue*ycRatio2+rgbBottomMiddle.blue*ycRatio1))));
}
}
}
}
And here's the C/C++ code I've tried to make from it:
typedef struct
{
uint8_t alpha, red, green, blue;
} ARGB;
int32_t convertArgbToInt(ARGB argb)
{
// must be the exact inverse of convertIntToArgb below:
// R in bits 24-31, G in 16-23, B in 8-15, A in 0-7
return (argb.red << 24) | (argb.green << 16) | (argb.blue << 8)
| (argb.alpha);
}
void convertIntToArgb(uint32_t pixel, ARGB* argb)
{
argb->red = ((pixel >> 24) & 0xff);
argb->green = ((pixel >> 16) & 0xff);
argb->blue = ((pixel >> 8) & 0xff);
argb->alpha = (pixel & 0xff);
}
...
/**scales the image using a high-quality algorithm called "Bilinear Interpolation" */ //
JNIEXPORT void JNICALL Java_com_jni_bitmap_1operations_JniBitmapHolder_jniScaleBIBitmap(
JNIEnv * env, jobject obj, jobject handle, uint32_t newWidth,
uint32_t newHeight)
{
JniBitmap* jniBitmap = (JniBitmap*) env->GetDirectBufferAddress(handle);
if (jniBitmap->_storedBitmapPixels == NULL)
return;
uint32_t oldWidth = jniBitmap->_bitmapInfo.width;
uint32_t oldHeight = jniBitmap->_bitmapInfo.height;
uint32_t* previousData = jniBitmap->_storedBitmapPixels;
uint32_t* newBitmapPixels = new uint32_t[newWidth * newHeight];
// position of the top left pixel of the 4 pixels to use interpolation on
int xTopLeft, yTopLeft;
int x, y, lastTopLefty;
float xRatio = (float) newWidth / (float) oldWidth, yratio =
(float) newHeight / (float) oldHeight;
// Y color ratio to use on left and right pixels for interpolation
float ycRatio2 = 0, ycRatio1 = 0;
// pixel target in the src
float xt, yt;
// X color ratio to use on left and right pixels for interpolation
float xcRatio2 = 0, xcratio1 = 0;
ARGB rgbTopLeft, rgbTopRight, rgbBottomLeft, rgbBottomRight, rgbTopMiddle,
rgbBottomMiddle, result;
for (x = 0; x < newWidth; ++x)
{
xTopLeft = (int) (xt = x / xRatio);
// when meeting the most right edge, move left a little
if (xTopLeft >= oldWidth - 1)
xTopLeft--;
if (xt <= xTopLeft + 1)
{
// we are between the left and right pixel
xcratio1 = xt - xTopLeft;
// color ratio in favor of the right pixel color
xcRatio2 = 1 - xcratio1;
}
for (y = 0, lastTopLefty = -30000; y < newHeight; ++y)
{
yTopLeft = (int) (yt = y / yratio);
// when meeting the most bottom edge, move up a little
if (yTopLeft >= oldHeight - 1)
--yTopLeft;
if (lastTopLefty == yTopLeft - 1)
{
// we went down only one rectangle
rgbTopLeft = rgbBottomLeft;
rgbTopRight = rgbBottomRight;
rgbTopMiddle = rgbBottomMiddle;
//rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth) + xTopLeft],
&rgbBottomLeft);
//rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth)
+ (xTopLeft + 1)], &rgbBottomRight);
rgbBottomMiddle.alpha = rgbBottomLeft.alpha * xcRatio2
+ rgbBottomRight.alpha * xcratio1;
rgbBottomMiddle.red = rgbBottomLeft.red * xcRatio2
+ rgbBottomRight.red * xcratio1;
rgbBottomMiddle.green = rgbBottomLeft.green * xcRatio2
+ rgbBottomRight.green * xcratio1;
rgbBottomMiddle.blue = rgbBottomLeft.blue * xcRatio2
+ rgbBottomRight.blue * xcratio1;
}
else if (lastTopLefty != yTopLeft)
{
// we went to a totally different rectangle (happens in every loop start,and might happen more when making the picture smaller)
//rgbTopLeft=startingImageData[xTopLeft][yTopLeft];
convertIntToArgb(previousData[(yTopLeft * oldWidth) + xTopLeft],
&rgbTopLeft);
//rgbTopRight=startingImageData[xTopLeft+1][yTopLeft];
convertIntToArgb(
previousData[(yTopLeft * oldWidth) + (xTopLeft + 1)],
&rgbTopRight);
rgbTopMiddle.alpha = rgbTopLeft.alpha * xcRatio2
+ rgbTopRight.alpha * xcratio1;
rgbTopMiddle.red = rgbTopLeft.red * xcRatio2
+ rgbTopRight.red * xcratio1;
rgbTopMiddle.green = rgbTopLeft.green * xcRatio2
+ rgbTopRight.green * xcratio1;
rgbTopMiddle.blue = rgbTopLeft.blue * xcRatio2
+ rgbTopRight.blue * xcratio1;
//rgbBottomLeft=startingImageData[xTopLeft][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth) + xTopLeft],
&rgbBottomLeft);
//rgbBottomRight=startingImageData[xTopLeft+1][yTopLeft+1];
convertIntToArgb(
previousData[((yTopLeft + 1) * oldWidth)
+ (xTopLeft + 1)], &rgbBottomRight);
rgbBottomMiddle.alpha = rgbBottomLeft.alpha * xcRatio2
+ rgbBottomRight.alpha * xcratio1;
rgbBottomMiddle.red = rgbBottomLeft.red * xcRatio2
+ rgbBottomRight.red * xcratio1;
rgbBottomMiddle.green = rgbBottomLeft.green * xcRatio2
+ rgbBottomRight.green * xcratio1;
rgbBottomMiddle.blue = rgbBottomLeft.blue * xcRatio2
+ rgbBottomRight.blue * xcratio1;
}
lastTopLefty = yTopLeft;
if (yt <= yTopLeft + 1)
{
// color ratio in favor of the bottom pixel color
ycRatio1 = yt - yTopLeft;
ycRatio2 = 1 - ycRatio1;
}
// prepared all pixels to look at, so finally set the new pixel data
result.alpha = rgbTopMiddle.alpha * ycRatio2
+ rgbBottomMiddle.alpha * ycRatio1;
result.blue = rgbTopMiddle.blue * ycRatio2
+ rgbBottomMiddle.blue * ycRatio1;
result.red = rgbTopMiddle.red * ycRatio2
+ rgbBottomMiddle.red * ycRatio1;
result.green = rgbTopMiddle.green * ycRatio2
+ rgbBottomMiddle.green * ycRatio1;
newBitmapPixels[(y * newWidth) + x] = convertArgbToInt(result);
}
}
//get rid of old data, and replace it with new one
delete[] previousData;
jniBitmap->_storedBitmapPixels = newBitmapPixels;
jniBitmap->_bitmapInfo.width = newWidth;
jniBitmap->_bitmapInfo.height = newHeight;
}
The question
What am I doing wrong?
Is it also possible to make the code a bit more readable? I'm a bit rusty on C/C++ and I was more of a C developer than a C++ developer.
EDIT: it all works fine now; I've edited and fixed the code.
The only help still welcome is tips on how to make it better.
It all started with a bad conversion of the colours, then moved on to the usage of pointers, and then to the basic question of where to put the pixels.
The code I've written now works fine (all of the needed fixes are in).
Soon you will all be able to use the new code in the GitHub project.
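Since the first bug here was a bad colour conversion, it's worth noting that the pack and unpack functions must be exact inverses, which is easy to unit-test in isolation. A Java sketch of one consistent layout (R in the top byte, A in the lowest, matching `convertIntToArgb` above; helper names are mine):

```java
public class ArgbRoundTrip {
    // Pack: R in bits 24-31, G in 16-23, B in 8-15, A in 0-7.
    static int pack(int a, int r, int g, int b) {
        return (r << 24) | (g << 16) | (b << 8) | a;
    }

    // Unpack must read exactly the bytes pack wrote.
    static int[] unpack(int pixel) {
        return new int[] {
            pixel & 0xff,          // alpha
            (pixel >>> 24) & 0xff, // red (>>> avoids sign extension)
            (pixel >>> 16) & 0xff, // green
            (pixel >>> 8) & 0xff,  // blue
        };
    }

    public static void main(String[] args) {
        int[] c = unpack(pack(255, 200, 100, 50));
        System.out.printf("a=%d r=%d g=%d b=%d%n", c[0], c[1], c[2], c[3]);
        // A mismatched pair (e.g. packing blue into the top byte while
        // unpacking red from it) scrambles every pixel, producing exactly
        // the kind of broken output described above.
    }
}
```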
