I use the following procedure for transferring an array of floats to a RenderScript kernel, and it works fine.
float[] w = new float[10];
Allocation w_rs = Allocation.createSized(rs, Element.F32(rs), 10);
w_rs.copy1DRangeFrom(0, 10, w);
I want to use a similar procedure for transferring Float4 values, as follows:
Float4[] w = new Float4[10];
for (int i = 0; i < 10; i++) {
w[i] = new Float4(i, 2*i, 3*i, 4*i);
}
Allocation w_rs = Allocation.createSized(rs, Element.F32_4(rs), 10);
w_rs.copy1DRangeFromUnchecked(0, 10, w);
This results in the following error:
Object passed is not an Array of primitives
Apparently, w should be an array of primitives, but I want w to be an array of Float4.
You can simply use:
float[] w = new float[4 * 10];
for (int i = 0; i < 10; i++) {
    // flatten each logical Float4 into 4 consecutive floats
    w[i * 4 + 0] = i;
    w[i * 4 + 1] = i * 2;
    w[i * 4 + 2] = i * 3;
    w[i * 4 + 3] = i * 4;
}
Allocation w_rs = Allocation.createSized(rs, Element.F32_4(rs), 10);
w_rs.copyFrom(w);
// Or, equivalently (note: the count is in allocation elements, i.e. Float4 values, not floats)
w_rs.copy1DRangeFrom(0, 10, w);
Painless :)
Reference: RenderScript: parallel computing on Android, the easy way
Deeper explanation
Inside RenderScript Java source code, you'll see this middleware function:
public void copy1DRangeFromUnchecked(int off, int count, Object array) {
    copy1DRangeFromUnchecked(off, count, array,
                             validateObjectIsPrimitiveArray(array, false),
                             java.lang.reflect.Array.getLength(array));
}
validateObjectIsPrimitiveArray() is invoked on every copy-method call, so you can only pass arrays of primitives.
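If you want to keep Float4 objects on the Java side, a small helper that flattens them into a raw float[] before the copy is all that's needed. A minimal sketch under that assumption (the flatten() helper is mine, not part of the RenderScript API):

// Hypothetical helper: flatten a Float4[] into the raw float[] layout
// that Allocation's copy methods require (x, y, z, w per element).
static float[] flatten(Float4[] v) {
    float[] out = new float[v.length * 4];
    for (int i = 0; i < v.length; i++) {
        out[i * 4]     = v[i].x;
        out[i * 4 + 1] = v[i].y;
        out[i * 4 + 2] = v[i].z;
        out[i * 4 + 3] = v[i].w;
    }
    return out;
}

// Usage:
w_rs.copyFrom(flatten(w));
// Reading results back uses the same flattened layout: w_rs.copyTo(raw)
// fills a float[4 * n].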
I am trying to port a TensorFlow model to TensorFlow Lite to use it in an Android application. The conversion is successful and everything runs except for: Internal error: Failed to run on the given Interpreter: input must be 5-dimensional. The input in the original model was input_shape=(20, 320, 240, 1), i.e. 20 grayscale images of 320 x 240 pixels (hence the trailing 1). Here is the important code:
List<Mat> preprocessedFrames = preprocFrames(buf);
// has length of 20 -> no problem there (shouldn't affect dimensionality either...)
int[] output = new int[2];
float[][][] inputMatrices = new float[preprocessedFrames.size()][320][240];
for (int i = 0; i < preprocessedFrames.size(); i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    int[][] pixels = pixelsFromBitmap(data);
    float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
    // returns float[][] with floats from 0 to 1
    inputMatrices[i] = inputMatrix;
}
try {
    detector.run(inputMatrices, output);
    Debug("results: " + java.util.Arrays.toString(output));
} catch (Exception e) {
    Debug("error: " + e.getMessage());
}
The model gives me an output of two neurons, translating into two labels.
The model code is the following:
model = tf.keras.Sequential(name='detector')
model.add(tf.keras.layers.Conv3D(filters=(56), input_shape=(20, 320, 240, 1), strides=(2,2,2), kernel_size=(3,11,11), padding='same', activation="relu"))
model.add(tf.keras.layers.AveragePooling3D(pool_size=(1,4,4)))
model.add(tf.keras.layers.Conv3D(filters=(72), kernel_size=(4,7,7), strides=(1,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(81), kernel_size=(2,4,4), strides=(2,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(100), kernel_size=(1,2,2), strides=(3,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(128), kernel_size=(1,2,2), padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(768, activation='tanh', kernel_regularizer=tf.keras.regularizers.l2(0.011)))
model.add(tf.keras.layers.Dropout(rate=0.1))
model.add(tf.keras.layers.Dense(256, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.012)))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001),
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])
EDIT: I printed out the shape of the first input tensor as follows:
int[] shape = detector.getInputTensor(0).shape();
for (int r = 0; r < shape.length; r++) {
    Log.d("********" + r, "*******: " + r + " : " + shape[r]);
}
With that I first get the output [1, 20, 320, 240, 1] and after that I only get [20, 320, 240]. I am really quite desperate now...
So, I figured it out by myself: it seems I really only had to make the input 5-dimensional by wrapping the whole batch in a first dimension and putting every single pixel into a fifth dimension. I don't know why, but I will accept that xD.
float[][] output = new float[1][2];
float[][][][][] inputMatrices = new float[1][preprocessedFrames.size()][320][240][1];
for (int i = 0; i < preprocessedFrames.size(); i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    int[][] pixels = pixelsFromBitmap(data);
    float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
    // note: loop to < length (not < length - 1), otherwise the last
    // row and column are never copied
    for (int j = 0; j < inputMatrix.length; j++) {
        for (int k = 0; k < inputMatrix[0].length; k++) {
            inputMatrices[0][i][k][j][0] = inputMatrix[j][k];
        }
    }
}
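The general rule behind this (my own gloss, not from the answer above): the Java array passed to run() must match the shape reported by the interpreter exactly, including the leading batch dimension and the trailing channel dimension. A small sanity-check sketch (variable names are mine):

// Ask the interpreter for its expected input shape and allocate the Java
// array to match, so a dimensionality mismatch fails early and visibly.
int[] expected = detector.getInputTensor(0).shape(); // e.g. [1, 20, 320, 240, 1]
Log.d("detector", "expected input shape: " + java.util.Arrays.toString(expected));
float[][][][][] input =
        new float[expected[0]][expected[1]][expected[2]][expected[3]][expected[4]];
// ... fill input[0][frame][row][col][0] as above ...
float[][] output = new float[1][2];
detector.run(input, output);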
I'm trying to deal with ECG signal processing in Android. I want to implement simple digital filters (lowpass, highpass).
I've got a transfer function:

H(z) = (1 - 2z^{-6} + z^{-12}) / (1 - 2z^{-1} + z^{-2})
Here is what I've found:
Wikipedia - low-pass filter - it looks quite easy there:
for i from 1 to n
    y[i] := y[i-1] + α * (x[i] - y[i-1])
but there is nothing there about the transfer function which I want to use.
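(For reference, that pseudocode maps almost one-to-one to Java; a minimal sketch, assuming x and y are preallocated buffers of equal length and alpha is in (0, 1]:)

// Simple exponential (one-pole) low-pass filter, a direct port of the
// Wikipedia pseudocode quoted above.
static void lowPass(float[] x, float[] y, float alpha) {
    y[0] = x[0];
    for (int i = 1; i < x.length; i++) {
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1]);
    }
}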
I also found the following MATLAB code:
%% Low Pass Filter H(z) = (1 - 2z^(-6) + z^(-12)) / (1 - 2z^(-1) + z^(-2))
b = [1 0 0 0 0 0 -2 0 0 0 0 0 1];
a = [1 -2 1];
h_l = filter(b,a,[1 zeros(1,12)]);
ecg_l = conv(ecg, h_l);
but there are no functions like filter and conv in Java (or maybe I missed something). I also looked on Stack Overflow for an answer, but I didn't find anything about transfer functions.
So can someone help me? I just want to move on with my project.
Given a time-domain recurrence equation (such as the one you quoted from Wikipedia), the corresponding transfer function in the z-domain can relatively easily be obtained by using the time-shift property of the z-transform:

x[n-k] <--> z^{-k} X(z)
y[n-k] <--> z^{-k} Y(z)

where X(z) and Y(z) are the z-transforms of the time-domain input sequence x and output sequence y respectively.

Going the other way around, given a transfer function which can be expressed as a ratio of polynomials in z, such as:

H(z) = Y(z) / X(z) = (\sum_{i=0}^N b_i z^{-i}) / (1 + \sum_{i=1}^M a_i z^{-i})

the recurrence equation of the transfer function can be written as:

y[n] = -\sum_{i=1}^M a_i y[n-i] + \sum_{i=0}^N b_i x[n-i]
There are of course many different ways to implement such a recurrence equation, but a simple filter implementation following the Direct Form II would be along the lines of:
// Implementation of an Infinite Impulse Response (IIR) filter
// with recurrence equation:
//   y[n] = -\sum_{i=1}^M a_i y[n-i] + \sum_{i=0}^N b_i x[n-i]
// (the coefficients are assumed to be normalized so that a[0] == 1)
public class IIRFilter {
    public IIRFilter(float a_[], float b_[]) {
        // initialize memory elements
        int N = Math.max(a_.length, b_.length);
        memory = new float[N - 1];
        for (int i = 0; i < memory.length; i++) {
            memory[i] = 0.0f;
        }
        // copy filter coefficients, zero-padding the shorter array to length N
        a = new float[N];
        int i = 0;
        for (; i < a_.length; i++) {
            a[i] = a_[i];
        }
        for (; i < N; i++) {
            a[i] = 0.0f;
        }
        b = new float[N];
        i = 0;
        for (; i < b_.length; i++) {
            b[i] = b_[i];
        }
        for (; i < N; i++) {
            b[i] = 0.0f;
        }
    }

    // Filter samples from input buffer, and store result in output buffer.
    // Implementation based on Direct Form II.
    // Works similar to matlab's "output = filter(b,a,input)" command
    public void process(float input[], float output[]) {
        for (int i = 0; i < input.length; i++) {
            float in = input[i];
            float out = 0.0f;
            for (int j = memory.length - 1; j >= 0; j--) {
                in  -= a[j + 1] * memory[j];
                out += b[j + 1] * memory[j];
            }
            out += b[0] * in;
            output[i] = out;
            // shift memory
            for (int j = memory.length - 1; j > 0; j--) {
                memory[j] = memory[j - 1];
            }
            memory[0] = in;
        }
    }

    private float[] a;
    private float[] b;
    private float[] memory;
}
which you could use to implement your specific transfer function like so:
float g = 1.0f/32.0f; // overall filter gain
float[] a = {1, -2, 1};
float[] b = {g, 0, 0, 0, 0, 0, -2*g, 0, 0, 0, 0, 0, g};
IIRFilter filter = new IIRFilter(a, b);
filter.process(input, output);
Note that you can alternatively also factorize the numerator and denominator into 2nd order polynomials and obtain a cascade of 2nd order filters (known as biquad filters).
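As a quick sanity check (my own snippet, not part of the original answer), feeding a unit impulse through the filter should reproduce the impulse response computed by the MATLAB line h_l = filter(b, a, [1 zeros(1,12)]), scaled by the gain g:

// Impulse response over the first 13 samples; h should match MATLAB's
// h_l up to the overall gain g folded into the b coefficients above.
float[] impulse = new float[13];
impulse[0] = 1.0f;
float[] h = new float[13];
new IIRFilter(a, b).process(impulse, h);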
In order to align the intensity values of two grayscale images (as a first step for further processing) I wrote a Java method that:
1) converts the bitmaps of the two images into two int[] arrays containing the bitmap's intensities (I just take the red component here, since it's grayscale, i.e. r = g = b).
public static int[] bmpToData(Bitmap bmp) {
    int width = bmp.getWidth();
    int height = bmp.getHeight();
    int anzpixel = width * height;
    int[] pixels = new int[anzpixel];
    int[] data = new int[anzpixel];
    bmp.getPixels(pixels, 0, width, 0, 0, width, height);
    for (int i = 0; i < anzpixel; i++) {
        int p = pixels[i];
        int r = (p & 0xff0000) >> 16;
        //int g = (p & 0xff00) >> 8;
        //int b = p & 0xff;
        data[i] = r;
    }
    return data;
}
2) aligns the cumulated intensity distribution of Bitmap 2 to that of Bitmap 1.
// aligns the intensity distribution of a grayscale picture "moving" (given by
// int[] data2) to the intensity distribution of a reference picture "fixed"
// (given by int[] data1)
public static int[] histMatch(int[] data1, int[] data2) {
    int anzpixel = data1.length;
    int[] histogram_fixed = new int[256];
    int[] histogram_moving = new int[256];
    int[] cumhist_fixed = new int[256];
    int[] cumhist_moving = new int[256];
    int i = 0;
    int j = 0;
    // read intensities of fixed and moving into the histograms
    for (int n = 0; n < anzpixel; n++) {
        histogram_fixed[data1[n]]++;
        histogram_moving[data2[n]]++;
    }
    // calc cumulated distributions
    cumhist_fixed[0] = histogram_fixed[0];
    cumhist_moving[0] = histogram_moving[0];
    for (i = 1; i < 256; ++i) {
        cumhist_fixed[i] = cumhist_fixed[i - 1] + histogram_fixed[i];
        cumhist_moving[i] = cumhist_moving[i - 1] + histogram_moving[i];
    }
    // look-up-table lut[]. For each quantile i of the moving picture search the
    // value j of the fixed picture where the quantile is the same as that of moving
    int[] lut = new int[anzpixel];
    j = 0;
    for (i = 0; i < 256; ++i) {
        while (cumhist_fixed[j] < cumhist_moving[i]) {
            j++;
        }
        // check whether the distance to the next-lower intensity is even lower,
        // and if so, take that value (note: both candidates are compared against
        // cumhist_moving[i]; the original compared against cumhist_fixed[i])
        if ((j != 0) && ((cumhist_moving[i] - cumhist_fixed[j - 1]) < (cumhist_fixed[j] - cumhist_moving[i]))) {
            lut[i] = j - 1;
        } else {
            lut[i] = j;
        }
    }
    // apply the lut[] to the moving picture
    for (int n = 0; n < anzpixel; n++) {
        data2[n] = lut[data2[n]];
    }
    return data2;
}
3) converts the int[] arrays back to a Bitmap.
public static Bitmap dataToBitmap(int[] data, int width, int height) {
    int index = 0;
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            index = y * width + x;
            int c = data[index];
            bmp.setPixel(x, y, Color.rgb(c, c, c));
        }
    }
    return bmp;
}
While the core procedure 2) is straightforward and fast, the conversion steps 1) and 3) are rather inefficient. It would be more than cool to do the whole thing in RenderScript. But, honestly, I am completely lost in doing so because of the missing documentation, and while there are many impressive examples of what RenderScript COULD do, I don't see a way to benefit from these possibilities (no books, no docs). Any advice is highly appreciated!
As a starting point, use Android Studio to "Import Sample..." and select Basic Render Script. This will give you a working project that we will now modify.
First, let's add more Allocation references to MainActivity. We will use them to communicate image data, histograms and the LUT between Java and Renderscript.
private Allocation mInAllocation;
private Allocation mInAllocation2;
private Allocation[] mOutAllocations;
private Allocation mHistogramAllocation;
private Allocation mHistogramAllocation2;
private Allocation mLUTAllocation;
Then, in onCreate(), load another image, which you will also need to add to /res/drawable/.
mBitmapIn2 = loadBitmap(R.drawable.cat_480x400);
In createScript() create additional allocations:
mInAllocation2 = Allocation.createFromBitmap(mRS, mBitmapIn2);
mHistogramAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
mHistogramAllocation2 = Allocation.createSized(mRS, Element.U32(mRS), 256);
mLUTAllocation = Allocation.createSized(mRS, Element.U32(mRS), 256);
And now the main part (in RenderScriptTask):
/*
* Invoke histogram kernel for both images
*/
mScript.bind_histogram(mHistogramAllocation);
mScript.forEach_compute_histogram(mInAllocation);
mScript.bind_histogram(mHistogramAllocation2);
mScript.forEach_compute_histogram(mInAllocation2);
/*
* Variables copied verbatim from your code.
*/
int[] histogram_fixed = new int[256];
int[] histogram_moving = new int[256];
int[] cumhist_fixed = new int[256];
int[] cumhist_moving = new int[256];
int i = 0;
int j = 0;
// copy computed histograms to Java side
mHistogramAllocation.copyTo(histogram_fixed);
mHistogramAllocation2.copyTo(histogram_moving);
// your code again...
// calc cumulated distributions
cumhist_fixed[0] = histogram_fixed[0];
cumhist_moving[0] = histogram_moving[0];
for (i = 1; i < 256; ++i) {
    cumhist_fixed[i] = cumhist_fixed[i - 1] + histogram_fixed[i];
    cumhist_moving[i] = cumhist_moving[i - 1] + histogram_moving[i];
}
// look-up-table lut[]. For each quantile i of the moving picture search the
// value j of the fixed picture where the quantile is the same as that of moving
int[] lut = new int[256];
j = 0;
for (i = 0; i < 256; ++i) {
    while (cumhist_fixed[j] < cumhist_moving[i]) {
        j++;
    }
    // check whether the distance to the next-lower intensity is even lower,
    // and if so, take that value (same fix as in the question's code:
    // compare against cumhist_moving[i])
    if ((j != 0) && ((cumhist_moving[i] - cumhist_fixed[j - 1]) < (cumhist_fixed[j] - cumhist_moving[i]))) {
        lut[i] = j - 1;
    } else {
        lut[i] = j;
    }
}
// copy the LUT to Renderscript side
mLUTAllocation.copyFrom(lut);
mScript.bind_LUT(mLUTAllocation);
// Apply LUT to the destination image
mScript.forEach_apply_histogram(mInAllocation2, mInAllocation2);
/*
* Copy to bitmap and invalidate image view
*/
//mOutAllocations[index].copyTo(mBitmapsOut[index]);
// copy back to Bitmap in preparation for viewing the results
mInAllocation2.copyTo(mBitmapsOut[index]);
A couple of notes:
In your part of the code I also fixed the LUT allocation size - only 256 entries are needed.
As you can see, I left the computation of the cumulative histograms and the LUT on the Java side. These are rather difficult to parallelize efficiently due to data dependencies and the small scale of the calculations, but considering the latter I don't think it's a problem.
Finally, the Renderscript code. The only non-obvious part is the use of rsAtomicInc() to increase values in histogram bins - this is necessary due to potentially many threads attempting to increase the same bin concurrently.
#pragma version(1)
#pragma rs java_package_name(com.example.android.basicrenderscript)
#pragma rs_fp_relaxed

int32_t *histogram;
int32_t *LUT;

void __attribute__((kernel)) compute_histogram(uchar4 in)
{
    volatile int32_t *addr = &histogram[in.r];
    rsAtomicInc(addr);
}

uchar4 __attribute__((kernel)) apply_histogram(uchar4 in)
{
    uchar val = LUT[in.r];
    uchar4 result;
    result.r = result.g = result.b = val;
    result.a = in.a;
    return(result);
}
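As an aside (not part of the imported sample): newer API levels ship a built-in intrinsic for the histogram step, which may be worth trying before hand-rolling a kernel. A hedged sketch:

// Optional alternative (API 19+): let ScriptIntrinsicHistogram compute
// per-channel histograms instead of the hand-written kernel above.
ScriptIntrinsicHistogram hist = ScriptIntrinsicHistogram.create(mRS, Element.U8_4(mRS));
Allocation histOut = Allocation.createSized(mRS, Element.U32_4(mRS), 256);
hist.setOutput(histOut);
hist.forEach(mInAllocation);     // histograms of r, g, b, a
int[] bins = new int[256 * 4];   // interleaved r, g, b, a counts
histOut.copyTo(bins);            // for grayscale input, read the r counts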
I want to convert an image from the Android camera to HSI format using OpenCV. The problem occurs when I use the following method:
private Mat rgb2hsi(Mat rgbFrame) {
    Mat hsiFrame = rgbFrame.clone();
    for (int i = 0; i < rgbFrame.rows(); ++i) {
        for (int j = 0; j < rgbFrame.cols(); ++j) {
            double[] rgb = rgbFrame.get(i, j);
            Log.d(MAINTAG, "rgbFrame.get(i, j) array size = " + rgb.length);
            double colorR = rgb[0];
            double colorG = rgb[1];
            double colorB = rgb[2];
            double minRGB = min(colorR, colorG, colorB);
            double colorI = (colorR + colorG + colorB) / 3;
            double colorS = 0.0;
            if (colorI > 0) colorS = 1.0 - (minRGB / colorI);
            double colorH;
            double const1 = colorR - (colorG / 2) - (colorB / 2);
            // note: the last squared term must be colorB (the original squared colorR twice)
            double const2 = Math.sqrt(Math.pow(colorR, 2) + Math.pow(colorG, 2) + Math.pow(colorB, 2)
                    - (colorR * colorG) - (colorR * colorB) - (colorG * colorB));
            // acos() returns radians; convert to degrees before the "360 - H" branch
            colorH = Math.toDegrees(Math.acos(const1 / const2));
            if (colorB > colorG) colorH = 360 - colorH;
            double[] hsi = {colorH, colorS, colorI};
            hsiFrame.put(i, j, hsi);
        }
    }
    return hsiFrame;
}
It shows an error
java.lang.UnsupportedOperationException: Provided data element number (3) should be multiple of the Mat channels count (4)
I searched for a while to figure out the cause of this error and found that I had put in an array of size 3 instead of 4.
Android convert byte array from Camera API to color Mat object openCV
I wonder what type of image is received from the Android camera.
Why do I get an array of size 4?
How do I convert an image received from the Android camera to HSI and preview it on the screen?
The following is the overridden method onCameraFrame:
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat outputFrame = inputFrame.rgba();
    /* Get RGB color from the pixel at [index_row, index_column] */
    int index_row = 0;
    int index_column = 0;
    final double[] mRgb_pixel = outputFrame.get(index_row, index_column);
    /* Show the result */
    runOnUiThread(new Runnable() {
        @Override
        public void run() {
            int r = (int) mRgb_pixel[0];
            int g = (int) mRgb_pixel[1];
            int b = (int) mRgb_pixel[2];
            /* Set RGB color */
            mRred_textview.setText("Red\n" + Double.toString(mRgb_pixel[0]));
            mGreen_textview.setText("Green\n" + Double.toString(mRgb_pixel[1]));
            mBlue_textview.setText("Blue\n" + Double.toString(mRgb_pixel[2]));
            mColor_textview.setBackgroundColor(Color.rgb(r, g, b));
        }
    });
    if (mPreviewType == PreviewType.GB) {
        outputFrame.convertTo(outputFrame, CvType.CV_64FC3);
        return getGBColor(rgb2hsi(outputFrame));
    } else if (mPreviewType == PreviewType.HSI) {
        outputFrame.convertTo(outputFrame, CvType.CV_64FC3);
        return rgb2hsi(outputFrame);
    } else {
        return outputFrame;
    }
}
My MainActivity implements CameraBridgeViewBase.CvCameraViewListener2
[Edit]
I think the reason why it returns an array of size 4 is that the frame is in RGBA format, not RGB format.
Therefore, how do I convert RGBA to HSI and preview the frame on the screen?
The problem here is that your hsiFrame is a 4-channel image and your hsi array has only 3 values. You need to add one term corresponding to the alpha channel to your hsi array. Making either of the following changes should work for you:
1. double[] hsi = {colorH, colorS, colorI, rgb[3]};
2. Mat hsiFrame = new Mat(rgbFrame.size(), CvType.CV_8UC3);
Hope this helps.
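Alternatively (a sketch of my own, not from the fixes above), you can drop the alpha channel up front so rgb2hsi() only ever sees three channels:

// Drop the alpha channel before converting, so the Mat has exactly three
// channels when rgb2hsi() writes its {H, S, I} triples.
Mat rgbFrame = new Mat();
Imgproc.cvtColor(outputFrame, rgbFrame, Imgproc.COLOR_RGBA2RGB);
rgbFrame.convertTo(rgbFrame, CvType.CV_64FC3);
return rgb2hsi(rgbFrame);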
I'm attempting to call a method which is outside the class I'm working in from an OpenGL thread, but I'm getting a "Can't create handler inside thread that has not called Looper.prepare()" runtime exception. Does anyone know of a way around this without putting the method in the same class?
The program breaks at Extract cam = new Extract();
public void onDrawFrame(GL10 gl) {
    onDrawFrameCounter++;
    gl.glEnable(GL10.GL_TEXTURE_2D);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    bindCameraTexture(gl);
    System.out.println(Global.bearing);
    float bear = Global.bearing;
    gl.glLoadIdentity();
    gl.glNormal3f(0, 0, 1);
    System.out.println("ARRAY!: " + GLCamTest.array.length);
    p = 0;
    gl.glRotatef(bear, 1, 0, 0);
    int q = 0;
    int e = 0;
    if (q < Global.cubes) {
        Extract cam = new Extract();
        Bitmap image = cam.extractimage(q);
        final int TILE_WIDTH = 512;
        final int TILE_HEIGHT = 512;
        final int TILE_SIZE = TILE_WIDTH * TILE_HEIGHT;
        int[] pixels = new int[TILE_WIDTH];
        short[] rgb_565 = new short[TILE_SIZE];
        // Convert ARGB_8888 to RGB_565:
        int i = 0;
        for (int y = 0; y < TILE_HEIGHT; y++) {
            image.getPixels(pixels, 0, TILE_WIDTH, 0, y, TILE_WIDTH, 1);
            for (int x = 0; x < TILE_WIDTH; x++) {
                int argb = pixels[x];
                int r = 0x1f & (argb >> 19); // Take 5 bits from 23..19
                int g = 0x3f & (argb >> 10); // Take 6 bits from 15..10
                int b = 0x1f & (argb >> 3);  // Take 5 bits from 7..3
                int rgb = (r << 11) | (g << 5) | b;
                rgb_565[i] = (short) rgb;
                ++i;
            }
        }
        ShortBuffer textureBuffer = ShortBuffer.wrap(rgb_565, 0, TILE_SIZE);
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGB, 48, 48, 0,
                GL10.GL_RGB, GL10.GL_UNSIGNED_SHORT_5_6_5, textureBuffer);
        // draw the six quads of the cube
        for (int n = 0; n < 6; n++) {
            gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, e, 4);
            e = e + 4;
        }
        q++;
    }
}
The error indicates that you are calling some code that expects to be called from the UI thread, but it is not. onDrawFrame is not called from the UI thread, hence the error.
From the code this looks to be happening in the Extract class's constructor. What is the class's full name?
Assuming the Extract method loads an image from the camera or similar, I would move the code doing the "extracting" out of the GL thread. You likely don't want to limit your FPS by doing this conversion on the GL thread anyway :-) It's better to do it "as fast as possible" on a separate thread and have the GL thread draw the latest result.
If you do need to make this work here, use a Handler, as sketched below.
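A minimal sketch of that approach (assuming the Extract constructor is what needs a Looper; the names are taken from the question):

// Post construction of Extract to the main (UI) thread, which has a Looper,
// instead of creating it on the GL thread.
final android.os.Handler mainHandler =
        new android.os.Handler(android.os.Looper.getMainLooper());
mainHandler.post(new Runnable() {
    @Override
    public void run() {
        Extract cam = new Extract(); // now runs on a thread with a Looper
        // hand the resulting Bitmap back to the GL thread, e.g. via a
        // volatile field or GLSurfaceView.queueEvent(...)
    }
});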