Android - Issue in rendering bulk points using AndroidPlot

I am working on a requirement to display the luma values of a raw YUV image (1280 x 720) as a graph. That is, I separate the Y data and display it as a graph in which the x axis is the pixel position across the width and the y axis is the corresponding Y value.
// Code
int count = 0;
int byteValue = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        byteValue = pPictureIn[count++] & 0xff;
        series.addLast(x, byteValue);
    }
}

final PlotStatistics stats = new PlotStatistics(10, false);
plot.addListener(stats);
redrawer = new Redrawer(Arrays.asList(new Plot[]{plot}), 1, false);
format = new LineAndPointFormatter(this, R.xml.formatter);
plot.addSeries(series, format);
redrawer.start();
I am using AndroidPlot to plot the graph, and I am adding all the points to the series. My problem is that when I try to render the points, my app freezes, even though I am using the render mode USE_BACKGROUND_THREAD.
Could someone please help me render the points in one shot without any freeze? Thanks in advance.

I'm going to guess that you're using SimpleXYSeries, which is not optimized for efficiency or speed; the calls to addLast become extremely expensive as the number of points increases. Using a fixed-memory XYSeries implementation will provide far better performance. If your image data is dynamic (coming from a camera or some other image stream) then a ring buffer might be a good design to consider; I'd suggest taking a look at FixedSizeEditableXYSeries in particular.
Additionally, you might consider sampling your data to reduce the size using SampledXYSeries.
The Advanced XY Series Types doc has more details about the pros and cons of the above mentioned classes and a few others.
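For illustration, here is a minimal sketch of populating a fixed-size series instead of growing one with addLast. The constructor and setter signatures are assumptions about FixedSizeEditableXYSeries (a title-plus-size constructor and per-index setX/setY setters); check the API of your AndroidPlot version before relying on them.

// Hedged sketch: preallocate a fixed-size series and overwrite its values in
// place instead of growing a SimpleXYSeries point by point.
FixedSizeEditableXYSeries lumaSeries =
        new FixedSizeEditableXYSeries("Luma", width);      // assumed (title, size) ctor

int rowOffset = rowToPlot * width;   // rowToPlot is illustrative: which scanline to show
for (int x = 0; x < width; x++) {
    lumaSeries.setX(x, x);                                  // assumed setX(value, index)
    lumaSeries.setY(pPictureIn[rowOffset + x] & 0xff, x);   // assumed setY(value, index)
}
plot.addSeries(lumaSeries, format);
plot.redraw();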

Related

Using Rotation Matrix to rotate points in space

I'm using Android's rotation matrix to rotate multiple points in space.
Work so far
I start by reading the matrix from the SensorManager.getRotationMatrix function. Next I transform the rotation matrix into a quaternion using the explanation given in this link. I'm doing this because I read that Euler angles can lead to the gimbal lock issue and that operations with a 3x3 matrix can be expensive (source).
Problem
Now what I want to do is: imagine the phone is the origin of the reference frame; given a set of points (lat/lng coordinates projected into an xyz coordinate system, see the method below), I want to rotate them so I can check which ones are in my line of sight. For that I'm using this SO question, which returns an X and a Y (left and top respectively) to display the point on screen. It works fine, but only when facing north, because it doesn't take orientation into account and my projected vector uses north/south as X and east/west as Z. So my thought was to rotate all the objects. Also, even though the initial altitude (Y) is 0, I want to be able to position the point up/down according to the phone's orientation.
I think part of the solution may be in this post, but since it uses Euler angles I don't think that's the best method.
Conclusion
So, if it really is better to rotate each point's position, how can I achieve that using the rotation quaternion? Otherwise, what is the better way?
I'm sorry if I said anything wrong in this post; I'm not good at physics.
Code
// this function returns a 3d vector (0 for Y since I'm discarding altitude) using 2 coordinates
public static float[] convLocToVec(LatLng source, LatLng destination) {
    float[] z = new float[1];
    z[0] = 0;
    Location.distanceBetween(source.latitude, source.longitude,
            destination.latitude, source.longitude, z);
    float[] x = new float[1];
    Location.distanceBetween(source.latitude, source.longitude,
            source.latitude, destination.longitude, x);
    if (source.latitude < destination.latitude)
        z[0] *= -1;
    if (source.longitude > destination.longitude)
        x[0] *= -1;
    return new float[]{x[0], (float) 0, z[0]};
}
Thanks for your help and have a nice day.
UPDATE 1
According to Wikipedia:
Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v→. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector.
Should I really just use the rotation matrix to rotate a vector?
Since no one answered I'm here to answer myself.
After some research (a lot, actually) I came to the conclusion that yes, it is possible to rotate a vector using a quaternion, but it is better to transform the quaternion into a rotation matrix and use that.
Rotation matrix - 9 multiplications and 6 additions
Quaternion - 15 multiplications and 15 additions
Source: Performance comparisons
It's better to use the rotation matrix provided by Android. Also, if you are going to use a quaternion somehow (Sensor.TYPE_ROTATION_VECTOR + SensorManager.getQuaternionFromVector, for example) you can (and should) transform it into a rotation matrix. You can use the method SensorManager.getRotationMatrixFromVector to convert the rotation vector to a matrix. After you get the rotation matrix you just have to multiply it by the projected vector you want. You can use this function for that:
public float[] multiplyByVector(float[][] A, float[] x) {
    int m = A.length;
    int n = A[0].length;
    if (x.length != n) throw new RuntimeException("Illegal matrix dimensions.");
    float[] y = new float[m];
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            y[i] += (A[i][j] * x[j]);
    return y;
}
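As a usage sketch (hedged: the variable names here are illustrative, though SensorManager.getRotationMatrixFromVector is the real Android call mentioned above), you could convert the rotation vector to a 3x3 matrix and feed it to multiplyByVector together with the vector from convLocToVec:

// Sketch: rotate a projected point by the device rotation matrix.
// 'event' is a SensorEvent from Sensor.TYPE_ROTATION_VECTOR;
// 'myLocation' and 'targetLocation' are illustrative LatLng values.
float[] flatR = new float[9];
SensorManager.getRotationMatrixFromVector(flatR, event.values);

// Reshape the flat 3x3 matrix into the float[][] that multiplyByVector expects.
float[][] R = new float[3][3];
for (int i = 0; i < 3; i++) {
    for (int j = 0; j < 3; j++) {
        R[i][j] = flatR[3 * i + j];
    }
}

float[] pointVec = convLocToVec(myLocation, targetLocation);
float[] rotated = multiplyByVector(R, pointVec);   // point expressed relative to the device orientation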
Although I'm still not able to get this running correctly, I will mark this as the answer.

Changing colors from specific parts of a bitmap image

I am writing an Android application that must paint specific parts of a loaded bitmap image according to received events.
I need to paint (or change the current color of) a single part of the bitmap image without changing the rest of the image.
Let's say I have a car, which is divided by many parts: door, windows, wheels, etc.
Each time an event (received from the network) arrives, I need to change the color of that particular part with the color specified by the event data.
What would be the best technique to achieve that?
I first thought of flood fill, as suggested in many threads on SO, but given that the messages arrive quite fast (several per second) I fear it would drag performance down, as it seems to be a very CPU-intensive algorithm.
I also thought about having multiple pre-colored versions of the same image and showing the right one at the right time, but the car has at least 10 different parts and each could be painted in 4-6 colors, so I would end up with dozens of images, which would be impractical to handle, not to mention a waste of memory.
So, is there any other approach?
The fastest way to do it is with a shader. You'll need to use OpenGL ES 2 for that (some Androids only support ES 1). You'll need a temporary bitmap the same size as the image you want to change. Set it as the target. In the shader, retrieve a pixel from the sampler which is bound to the image you want to change. If it's within a small tolerance of the colour you want to change, set gl_FragColor to the new colour, otherwise just set gl_FragColor to the colour you retrieved from the sampler. You'll need to pass the desired colour and the new colour into the shader as vec4s with al_set_shader_float_vector. The fastest way to do this is to keep 2 bitmaps and swap between them as the "main one" that you're using each time a colour changes.
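Note that the al_* functions mentioned here come from the Allegro library; on Android with OpenGL ES 2 you would pass the two colours as uniforms instead (e.g. with GLES20.glUniform4fv). A minimal sketch of the fragment-shader idea, with illustrative uniform and varying names, written as a Java string constant you could hand to GLES20.glShaderSource:

// Hypothetical fragment shader for the colour-swap approach described above.
// uTexture, uTargetColor, uNewColor and vTexCoord are illustrative names; set
// the uniforms after compiling and linking as usual.
private static final String COLOR_SWAP_FRAGMENT_SHADER =
        "precision mediump float;\n"
      + "uniform sampler2D uTexture;\n"
      + "uniform vec4 uTargetColor;\n"   // colour to be replaced
      + "uniform vec4 uNewColor;\n"      // replacement colour
      + "varying vec2 vTexCoord;\n"
      + "void main() {\n"
      + "  vec4 c = texture2D(uTexture, vTexCoord);\n"
      + "  if (distance(c.rgb, uTargetColor.rgb) < 0.05) {\n"   // small tolerance
      + "    gl_FragColor = vec4(uNewColor.rgb, c.a);\n"
      + "  } else {\n"
      + "    gl_FragColor = c;\n"
      + "  }\n"
      + "}";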
If you can't use a shader, then you'll have to lock the bitmap and replace the colour. Use al_lock_bitmap to lock it, then you can use al_get_pixel and al_put_pixel to change colours. Then al_unlock_bitmap when you're done. You can also avoid using al_get_pixel/al_put_pixel and access the memory manually which will be faster. If you lock the bitmap with the format ALLEGRO_PIXEL_FORMAT_ABGR_8888_LE then the memory is laid out like so:
int w = al_get_bitmap_width(bitmap);
int h = al_get_bitmap_height(bitmap);
for (int y = 0; y < h; y++) {
    unsigned char *p = (unsigned char *)locked_region->data + locked_region->pitch * y;
    for (int x = 0; x < w; x++) {
        unsigned char r = p[0];
        unsigned char g = p[1];
        unsigned char b = p[2];
        unsigned char a = p[3];
        /* change r, g, b, a here if they match */
        p[0] = r;
        p[1] = g;
        p[2] = b;
        p[3] = a;
        p += 4;
    }
}
It's recommended that you lock the image in the format it was created in; that means picking an easy one like the one I mentioned, or else the inner part of the loop gets more complicated. The ABGR_8888 part of the pixel format describes the layout of the data: ABGR gives the order of the components. If you were to read a pixel into a single storage unit (an int in this case, but it works the same with a short) then the bit pattern would be AAAAAAAABBBBBBBBGGGGGGGGRRRRRRRR. However, when you're reading a byte at a time, most machines are little-endian, which means the least significant byte comes first. That's why in my sample code p[0] is red. The 8888 part tells how many bits per component.

Graphing a curve with any android library

I've found several graphing libraries for Android which would be very suitable for my school project, e.g. GraphView.
My task is to display 3 biorhythm curves on one graph. I looked at many examples and help pages, but I just couldn't get my head around how to change the code so it will display my biorhythm curves.
// sin curve
int num = 150;
GraphViewData[] data = new GraphViewData[num];
double v = 0;
for (int i = 0; i < num; i++) {
    v += 0.2;
    data[i] = new GraphViewData(i, Math.sin(v));
}
The three biorhythm cycle equations are:
physical: sin(2πt / 23)
emotional: sin(2πt / 28)
intellectual: sin(2πt / 33)
where t is the number of days since birth; this value has been calculated from the user's input and is stored in a shared preference named differenceInDays.
It would be awesome if someone could just give me an example. Please do ask if I need to be more specific (this is my first time posting on this website), and if you have a suggestion for THE perfect graphing library for Android, please let me know :D
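As a worked example of those formulas, here is a hedged sketch that reuses the GraphViewData pattern from the sin-curve snippet above; differenceInDays is the stored days-since-birth value, and the 60-day window is just an illustrative choice.

// Fill one GraphViewData array per biorhythm curve; each point is
// sin(2*pi*t / period) where t is the number of days since birth.
int num = 60;   // plot a 60-day window starting at differenceInDays
GraphViewData[] physical = new GraphViewData[num];
GraphViewData[] emotional = new GraphViewData[num];
GraphViewData[] intellectual = new GraphViewData[num];
for (int i = 0; i < num; i++) {
    double t = differenceInDays + i;
    physical[i] = new GraphViewData(i, Math.sin(2 * Math.PI * t / 23));
    emotional[i] = new GraphViewData(i, Math.sin(2 * Math.PI * t / 28));
    intellectual[i] = new GraphViewData(i, Math.sin(2 * Math.PI * t / 33));
}
// add each array as its own series to the GraphView, one series per curve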

Detecting periodic data from the phone's accelerometer

I am developing an Android app and I need to detect the user's context (whether walking or driving, at a minimum).
I am using the accelerometer and the sum of all axes to compute the acceleration vector. It works pretty well in the sense that I can see some periodic values while walking, but I need to detect these periods programmatically.
Is there any kind of math function to detect a period in a set of values? I heard the Fourier transform is usable for that, but I really don't know how to implement it. It looks pretty complicated :)
Please help.
The simplest way to detect periodicity in data is autocorrelation, and it is also fairly simple to implement. To get the autocorrelation at lag i you multiply each data point with the data point shifted by i and sum the products. Here is some pseudocode:
for i = 0 to length( data ) do
    autocorrel[ i ] = 0
    for j = 0 to length( data ) do
        autocorrel[ i ] += data( j ) * data( ( j + i ) mod length( data ) )
    done
done
This will give you an array of values. The strongest periodicity is at the index with the highest value. This way you can extract any periodic parts (there is usually more than one).
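A direct Java translation of the pseudocode above, as a sketch (data would be your summed accelerometer samples):

// Brute-force circular autocorrelation, O(n^2): autocorrel[i] measures how
// similar the signal is to itself when shifted by i samples.
public static double[] autocorrelate(double[] data) {
    int n = data.length;
    double[] autocorrel = new double[n];
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            autocorrel[i] += data[j] * data[(j + i) % n];
        }
    }
    return autocorrel;
}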
Also, I would suggest you do not try to implement your own FFT in an application. Although this algorithm is very good for learning, there is much one can do wrong which is hard to test, and it is also likely that your implementation will be much slower than those which are already available. If it is possible on your system, I would suggest you use FFTW, which is impossible to beat in any respect when it comes to FFT implementations.
EDIT:
Explanation of why this works even on values which do not repeat exactly:
The usual and fully correct way to calculate the autocorrelation is to subtract the mean from your data. Let's say you have [1, 2, 1.2, 1.8]. Then you would subtract 1.5 from each sample, leaving you with [-.5, .5, -.3, .3]. Now if you multiply this with itself at an offset of zero, negatives will be multiplied by negatives and positives by positives, yielding (-.5)^2 + (.5)^2 + (-.3)^2 + (.3)^2 = .68. At an offset of one, negatives will be multiplied with positives, yielding (-.5)*(.5) + (.5)*(-.3) + (-.3)*(.3) + (.3)*(-.5) = -.64. At an offset of two, negatives are again multiplied by negatives and positives by positives. At an offset of three something similar to the offset-of-one situation happens again. As you can see, you get positive values at offsets of 0 and 2 (the periods) and negative values at 1 and 3.
Now, to only detect the period it is not necessary to subtract the mean. If you just leave the samples as-is, the squared mean is added at each addition. Since the same value is added to each calculated coefficient, the comparison yields the same result as if you had first subtracted the mean. At worst, your datatype might overflow (if you use some integral type), or you might get round-off errors when the values start getting too big (if you use float; usually this is not a problem). If that happens, subtract the mean first and see whether your results improve.
The strongest drawback of using autocorrelation instead of some kind of fast Fourier transform is the speed: autocorrelation takes O(n^2), whereas an FFT only takes O(n log n). If you need to calculate the period of very long sequences very often, autocorrelation might not work in your case.
If you want to know how the Fourier transform works, and what all this stuff about real part, imaginary part, magnitude and phase means (have a look at the code posted by Manu, for example), I suggest you have a look at this book.
EDIT2:
In most cases data is neither fully periodic nor fully chaotic and aperiodic. Usually your data will be composed of several periodic components of varying strength. A period is a time difference by which you can shift your data to make it similar to itself. The autocorrelation calculates how similar the data is when shifted by a certain amount, and thus gives you the strength of all possible periods. This means there is no "index of the repeating value", because when the data is perfectly periodic, all indexes repeat. The index with the strongest value gives you the shift at which the data is most similar to itself; this index is a time offset, not an index into your data. To understand this, it helps to think of a time series as the sum of perfectly periodic functions (sinusoidal basis functions).
If you need to detect this for very long time series, it is usually also best to slide a window over your data and check the period of each smaller data frame. Be aware, however, that the window itself adds additional periods to your data.
More in the link I posted in the last edit.
There is also a way to compute the autocorrelation of your data using an FFT, which reduces the complexity from O(n^2) to O(n log n). The basic idea: take your periodic sample data, transform it with an FFT, compute the power spectrum by multiplying each FFT coefficient by its complex conjugate, then take the inverse FFT of the power spectrum. You can find pre-existing code to compute the power spectrum without much difficulty; for example, look at the Moonblink Android library, which contains a Java translation of FFTPACK (a good FFT library) and also has some DSP classes for computing power spectra. An autocorrelation method I have used with success is the McLeod Pitch Method (MPM), the Java source code for which is available here. I have edited a method in the class McLeodPitchMethod so that it computes the pitch using the FFT-optimized autocorrelation algorithm:
private void normalizedSquareDifference(final double[] data) {
    int n = data.length;
    // zero-pad the data so we get a number of autocorrelation function (acf)
    // coefficients equal to the window size
    double[] fft = new double[2 * n];
    for (int k = 0; k < n; k++) {
        fft[k] = data[k];
    }
    transformer.ft(fft);
    // the output of fft is 2n, symmetric complex;
    // multiply the first n outputs by their complex conjugates
    // to compute the power spectrum
    double[] acf = new double[n];
    acf[0] = fft[0] * fft[0] / (2 * n);
    for (int k = 1; k <= n - 1; k++) {
        acf[k] = (fft[2 * k - 1] * fft[2 * k - 1] + fft[2 * k] * fft[2 * k]) / (2 * n);
    }
    // inverse transform
    transformerEven.bt(acf);
    // the output of the ifft is symmetric real;
    // the first n coefficients are positive-lag acf coefficients,
    // so acf now contains the acf coefficients
    double[] divisorM = new double[n];
    for (int tau = 0; tau < n; tau++) {
        // subtract the first and last squared values from the previous divisor to get the new one
        double m = tau == 0 ? 2 * acf[0]
                : divisorM[tau - 1] - data[n - tau] * data[n - tau] - data[tau - 1] * data[tau - 1];
        divisorM[tau] = m;
        nsdf[tau] = 2 * acf[tau] / m;
    }
}
Where transformer is a private instance of the FFTTransformer class from the java FFTPACK translation, and transformerEven is a private instance of the FFTTransformer_Even class.
A call to McLeodPitchMethod.getPitch() with your data will give a very efficient estimate of the frequency.
Here is an example of calculating the Fourier transform on Android using the FFT class from libgdx:
package com.spec.example;

import android.app.Activity;
import android.os.Bundle;
import com.badlogic.gdx.audio.analysis.FFT;
import java.lang.String;
import android.util.FloatMath;
import android.widget.TextView;

public class spectrogram extends Activity {
    /** Called when the activity is first created. */
    float[] array = {1, 6, 1, 4, 5, 0, 8, 7, 8, 6, 1, 0, 5, 6, 1, 8,
                     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    float[] array_hat, res = new float[array.length / 2];
    float[] fft_cpx, tmpr, tmpi;
    float[] mod_spec = new float[array.length / 2];
    float[] real_mod = new float[array.length];
    float[] imag_mod = new float[array.length];
    double[] real = new double[array.length];
    double[] imag = new double[array.length];
    double[] mag = new double[array.length];
    double[] phase = new double[array.length];
    int n;
    float tmp_val;
    String strings;
    FFT fft = new FFT(32, 8000);

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        fft.forward(array);
        fft_cpx = fft.getSpectrum();
        tmpi = fft.getImaginaryPart();
        tmpr = fft.getRealPart();
        for (int i = 0; i < array.length; i++) {
            real[i] = (double) tmpr[i];
            imag[i] = (double) tmpi[i];
            mag[i] = Math.sqrt((real[i] * real[i]) + (imag[i] * imag[i]));
            phase[i] = Math.atan2(imag[i], real[i]);
            /**** Reconstruction ****/
            real_mod[i] = (float) (mag[i] * Math.cos(phase[i]));
            imag_mod[i] = (float) (mag[i] * Math.sin(phase[i]));
        }
        fft.inverse(real_mod, imag_mod, res);
    }
}
More info here: http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/

Android opengl-es fast way to render text

I want to show some text in OpenGL ES. I have a 512x512 font texture (texture atlas); every letter is 32x32 pixels.
My text is about 400 characters long.
My algorithm
opengl.setClearTransparentBGEnabled();
float y2 = 0;
float j = 0;
for (int i = 0; i < text.length(); i++) {
    int ch = (int) text.charAt(i);
    float x2 = ((float) j * 16 * scale / 50);
    j++;
    if ((text.charAt(i) + "").equals("\n")) {
        y2 += (16 * scale * 2) / 50;
        j = 0;
        x2 = 0;
    }
    opengl.saveMatrix();
    Sprites.selectVertex("font" + name)
            .setSprite(ch)
            .translate(x - x2, y + y2, -9)
            .scale(scale, scale, scale)
            .rotate(90, 0, 0, 1)
            .draw(true);
    opengl.loadMatrix();
}
opengl.setClearTransparentBGDisabled();
opengl.setClearTransparentBGDisabled();
My only problem is that this method is very slow: with it I get 15-20 FPS.
What is the best way to render text dynamically in OpenGL ES?
That's far too much work to be doing per-frame.
I'd use the 2D APIs to draw the 400 chars to a private Bitmap with Canvas.drawText() (or drawBitmap(), if you're not using a real font), and use that as my texture.
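A minimal sketch of that approach, assuming a GLES 2.0 renderer (the 512x512 bitmap size and text layout are illustrative; adapt them to your atlas-free text):

// Render the whole string once into a Bitmap with the 2D API, then upload it
// as a single texture; redo this only when the text actually changes.
Bitmap textBitmap = Bitmap.createBitmap(512, 512, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(textBitmap);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(32f);
paint.setColor(Color.WHITE);
canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR);  // clear to transparent
canvas.drawText(text, 0, 32, paint);                         // one line; wrap/split as needed

int[] texId = new int[1];
GLES20.glGenTextures(1, texId, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textBitmap, 0);
textBitmap.recycle();   // the pixel data now lives on the GPU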
You are repeating a lot of work every frame and for every character in the text. You should calculate all of the vertex and triangle data for a given string once, then submit it to OpenGL in one batch, and reuse the data for as long as the string stays the same.
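A sketch of that idea (geometry and atlas layout are assumptions: 16x16 cells of 32x32 pixels to match the 512x512 texture in the question, indexed by character code as the setSprite(ch) call above suggests; requires java.nio.ByteBuffer, ByteOrder and FloatBuffer):

// Build the position + UV data for the whole string once, keep the resulting
// buffer, and just re-draw it each frame while the string is unchanged.
// Newline handling is omitted for brevity.
public static FloatBuffer buildTextQuads(String text, float charSize) {
    final int cols = 16;                                // 512 / 32 atlas cells per row
    float[] verts = new float[text.length() * 6 * 4];   // 2 triangles * 3 verts * (x, y, u, v)
    int o = 0;
    for (int i = 0; i < text.length(); i++) {
        int ch = text.charAt(i);
        float u = (ch % cols) / (float) cols;
        float v = (ch / cols) / (float) cols;
        float du = 1f / cols;
        float x = i * charSize;
        float[] quad = {
            // two counter-clockwise triangles per character: x, y, u, v
            x,            0,        u,      v + du,
            x + charSize, 0,        u + du, v + du,
            x,            charSize, u,      v,
            x + charSize, 0,        u + du, v + du,
            x + charSize, charSize, u + du, v,
            x,            charSize, u,      v,
        };
        System.arraycopy(quad, 0, verts, o, quad.length);
        o += quad.length;
    }
    FloatBuffer fb = ByteBuffer.allocateDirect(verts.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    fb.put(verts).position(0);
    return fb;
}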
