I am making a music application in Android, and the music list comes from the server side. I don't know how to show the waveform of audio in Android, like on the SoundCloud website. I have attached an image below.
Perhaps you can implement this feature without libraries, if all you want is a visualisation of the audio sample.
For example:
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.view.View;
// Use android.support.* instead of androidx.* on older (pre-AndroidX) projects
import androidx.annotation.Nullable;
import androidx.core.content.ContextCompat;

public class PlayerVisualizerView extends View {
/**
* constant value for Height of the bar
*/
public static final int VISUALIZER_HEIGHT = 28;
/**
* bytes array converted from file.
*/
private byte[] bytes;
/**
* Percentage of audio sample scale
* Should updated dynamically while audioPlayer is played
*/
private float denseness;
/**
* Canvas painting for sample scale, filling played part of audio sample
*/
private Paint playedStatePainting = new Paint();
/**
* Canvas painting for sample scale, filling not played part of audio sample
*/
private Paint notPlayedStatePainting = new Paint();
private int width;
private int height;
public PlayerVisualizerView(Context context) {
super(context);
init();
}
public PlayerVisualizerView(Context context, @Nullable AttributeSet attrs) {
super(context, attrs);
init();
}
private void init() {
bytes = null;
playedStatePainting.setStrokeWidth(1f);
playedStatePainting.setAntiAlias(true);
playedStatePainting.setColor(ContextCompat.getColor(getContext(), R.color.gray));
notPlayedStatePainting.setStrokeWidth(1f);
notPlayedStatePainting.setAntiAlias(true);
notPlayedStatePainting.setColor(ContextCompat.getColor(getContext(), R.color.colorAccent));
}
/**
* update and redraw Visualizer view
*/
public void updateVisualizer(byte[] bytes) {
this.bytes = bytes;
invalidate();
}
/**
* Update player percent. 0 - file not played, 1 - full played
*
* @param percent
*/
public void updatePlayerPercent(float percent) {
denseness = (int) Math.ceil(width * percent);
if (denseness < 0) {
denseness = 0;
} else if (denseness > width) {
denseness = width;
}
invalidate();
}
@Override
protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
super.onLayout(changed, left, top, right, bottom);
width = getMeasuredWidth();
height = getMeasuredHeight();
}
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
if (bytes == null || width == 0) {
return;
}
float totalBarsCount = width / dp(3);
if (totalBarsCount <= 0.1f) {
return;
}
byte value;
int samplesCount = (bytes.length * 8 / 5);
float samplesPerBar = samplesCount / totalBarsCount;
float barCounter = 0;
int nextBarNum = 0;
int y = (height - dp(VISUALIZER_HEIGHT)) / 2;
int barNum = 0;
int lastBarNum;
int drawBarCount;
for (int a = 0; a < samplesCount; a++) {
if (a != nextBarNum) {
continue;
}
drawBarCount = 0;
lastBarNum = nextBarNum;
while (lastBarNum == nextBarNum) {
barCounter += samplesPerBar;
nextBarNum = (int) barCounter;
drawBarCount++;
}
int bitPointer = a * 5;
int byteNum = bitPointer / Byte.SIZE;
int byteBitOffset = bitPointer - byteNum * Byte.SIZE;
int currentByteCount = Byte.SIZE - byteBitOffset;
int nextByteRest = 5 - currentByteCount;
value = (byte) ((bytes[byteNum] >> byteBitOffset) & ((2 << (Math.min(5, currentByteCount) - 1)) - 1));
if (nextByteRest > 0) {
value <<= nextByteRest;
value |= bytes[byteNum + 1] & ((2 << (nextByteRest - 1)) - 1);
}
for (int b = 0; b < drawBarCount; b++) {
int x = barNum * dp(3);
float left = x;
float top = y + dp(VISUALIZER_HEIGHT - Math.max(1, VISUALIZER_HEIGHT * value / 31.0f));
float right = x + dp(2);
float bottom = y + dp(VISUALIZER_HEIGHT);
if (x < denseness && x + dp(2) < denseness) {
canvas.drawRect(left, top, right, bottom, notPlayedStatePainting);
} else {
canvas.drawRect(left, top, right, bottom, playedStatePainting);
if (x < denseness) {
canvas.drawRect(left, top, right, bottom, notPlayedStatePainting);
}
}
barNum++;
}
}
}
public int dp(float value) {
if (value == 0) {
return 0;
}
return (int) Math.ceil(getContext().getResources().getDisplayMetrics().density * value);
}
}
Sorry, the code has only a few comments, but it is a working visualizer. You can attach it to any player you want.
How you can use it: add this view to your XML layout, then update the visualizer state with these methods:
public void updateVisualizer(byte[] bytes) {
playerVisualizerView.updateVisualizer(bytes);
}
public void updatePlayerProgress(float percent) {
playerVisualizerView.updatePlayerPercent(percent);
}
In updateVisualizer you pass the byte array of your audio sample, and in updatePlayerProgress you repeatedly pass the played percentage while the audio sample is playing.
For converting a file to bytes you can use this helper method:
public static byte[] fileToBytes(File file) {
    int size = (int) file.length();
    byte[] bytes = new byte[size];
    try {
        BufferedInputStream buf = new BufferedInputStream(new FileInputStream(file));
        // A single read() may return fewer bytes than requested, so loop until the file is fully read
        int read = 0;
        while (read < size) {
            int count = buf.read(bytes, read, size - read);
            if (count == -1) {
                break;
            }
            read += count;
        }
        buf.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return bytes;
}
And, very briefly, here is how it looks with the Mosby library:
public class AudioRecorderPresenter extends MvpBasePresenter<AudioRecorderView> {
public void onStopRecord() {
// stopped and released MediaPlayer
// ...
// some preparation and saved audio file in audioFileName variable.
getView().updateVisualizer(FileUtils.fileToBytes(new File(audioFileName)));
}
}
UPD: I created a library for this use case: github.com/scrobot/SoundWaveView. It is still a work in progress ("WIP"), but I will complete it soon.
I believe Scrobot's answer does not work. It assumes the input audio is in a certain (quite peculiar) encoding (single-channel/mono linear PCM with 5-bit depth), and the algorithm that calculates amplitudes from the waveform is probably flawed. If you use that algorithm with any commonly used audio file format, you will get nothing but random data.
The truth is: it's just a bit more complicated than that.
Here's what's there to be done to achieve the OP's goal:
Use Android's MediaExtractor to read the input audio file (various formats/encodings are supported)
Use Android's MediaCodec to decode the input audio encoding to a linear PCM encoding with certain bit depth (usually 16 bit)
Only after this step do you get a byte array which you can read linearly and calculate amplitudes from.
Apply a loudness measure to the PCM-encoded data. There are many of them, some more complicated (e.g. LUFS/LKFS), some more basic (RMS). Let's take RMS (Root Mean Square) as an example (a sketch follows the list of steps below):
Determine the number of samples per bar.
Read all the samples for a single bar. Usually there are 2 channels, so for each sample you will get 2 short ints (16 bit) for PCM-16.
For each sample calculate the mean of all channels.
Maybe you will want to normalize the value in some way, e.g. to get float values between -1 and 1 you can divide each 16-bit sample by 32768 (i.e. multiply by 2 / (1 << 16))
Square each sample (hence the "S" in RMS)
Calculate the mean of all the samples for a bar (hence the "M" in RMS)
Calculate the square root of the resulting value (hence the "R" in RMS)
Now you get a value which you can base the height of the bar on.
Repeat steps 2-8 for all the bars.
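Here is a minimal sketch of steps 2-8 above, assuming you already have the decoded, interleaved 16-bit PCM samples (e.g. from MediaCodec) in a short[]; the names pcm, channels and samplesPerBar are placeholders:
// One RMS value per bar from interleaved 16-bit PCM.
float[] barHeights(short[] pcm, int channels, int samplesPerBar) {
    int frames = pcm.length / channels;           // one frame = one sample per channel
    int bars = frames / samplesPerBar;
    float[] heights = new float[bars];
    for (int bar = 0; bar < bars; bar++) {
        double sumOfSquares = 0;
        for (int s = 0; s < samplesPerBar; s++) {
            int frame = bar * samplesPerBar + s;
            double mean = 0;                      // mean of all channels for this frame
            for (int ch = 0; ch < channels; ch++) {
                mean += pcm[frame * channels + ch] / 32768.0;  // normalize to [-1, 1]
            }
            mean /= channels;
            sumOfSquares += mean * mean;          // "S": square each sample
        }
        double meanSquare = sumOfSquares / samplesPerBar;      // "M": mean over the bar
        heights[bar] = (float) Math.sqrt(meanSquare);          // "R": root
    }
    return heights;
}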
Implementing this is quite an involved task. I could not find any library that already provides this whole process. But at least Android's media APIs provide the machinery for reading audio files in many formats.
Note: RMS is considered a not very accurate loudness measure. But it seems to yield results which are at least somewhat related to what you can actually hear. For many applications it should be good enough.
JETPACK COMPOSE
AudioWaveform is a lightweight Jetpack Compose library that draws a waveform of audio.
XML
WaveformSeekBar is an Android library that draws a waveform from a local audio file, resource, or URL using android.view.View (the XML approach).
AUDIO PROCESSING
If you're looking for a fast audio-processing library, you could use the existing Amplituda library. Amplituda also has caching and compression features out of the box.
Related
I have the following function which receives an image as void*. How can I flip it upside down without using external libraries like OpenCV? The image width and height are known.
Note: this function will be called at least 30 times a second on Android, so it needs to be efficient.
PushVideoFrame(void *bytes, int width, int height) {
if (clientPtr == nullptr) {
return ErrorCodes::DEVICE_CONNECTION;
}
char* data = static_cast< char *>(bytes);
//////// CODE TO FLIP IMAGE /////////////
clientPtr->PushVideoFrameAsync(data, width * height * 4);
}
You can swap rows using a small additional buffer:
#include <cstdint>
#include <cstring>
#include <vector>

// Note: "width" here is the row size in bytes; for the RGBA frames from the
// question that is image_width * 4.
void flipVertically(uint8_t *data, int width, int height) {
    std::vector<uint8_t> temp(width);
    for (int i = 0; i < height / 2; i++) {
        uint8_t *ptr1 = data + i * width;
        uint8_t *ptr2 = data + (height - i - 1) * width;
        // swap(row1, row2) using the temp buffer
        memcpy(temp.data(), ptr1, width);
        memcpy(ptr1, ptr2, width);
        memcpy(ptr2, temp.data(), width);
    }
}
I have C++ code implementing media player behavior on Android.
I'm using the media player to play an mp4 file; however, I need to draw text above it.
For testing purposes, I've already tried doing it the way the drawText() function from BootAnimation.cpp does, but without success.
I'm guessing there are some OpenGL calls I'm missing. Is there something to add inside drawText() for it to draw above the mp4?
void BootAnimation::drawText(const char* str, const Font& font, bool bold, int* x, int* y) {
glEnable(GL_BLEND); // Allow us to draw on top of the animation
glBindTexture(GL_TEXTURE_2D, font.texture.name);
const int len = strlen(str);
const int strWidth = font.char_width * len;
if (*x == TEXT_CENTER_VALUE) {
*x = (mWidth - strWidth) / 2;
} else if (*x < 0) {
*x = mWidth + *x - strWidth;
}
if (*y == TEXT_CENTER_VALUE) {
*y = (mHeight - font.char_height) / 2;
} else if (*y < 0) {
*y = mHeight + *y - font.char_height;
}
int cropRect[4] = { 0, 0, font.char_width, -font.char_height };
for (int i = 0; i < len; i++) {
char c = str[i];
if (c < FONT_BEGIN_CHAR || c > FONT_END_CHAR) {
c = '?';
}
// Crop the texture to only the pixels in the current glyph
const int charPos = (c - FONT_BEGIN_CHAR); // Position in the list of valid characters
const int row = charPos / FONT_NUM_COLS;
const int col = charPos % FONT_NUM_COLS;
cropRect[0] = col * font.char_width; // Left of column
cropRect[1] = row * font.char_height * 2; // Top of row
// Move down to bottom of regular (one char_height) or bold (two char_height) line
cropRect[1] += bold ? 2 * font.char_height : font.char_height;
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, cropRect);
glDrawTexiOES(*x, *y, 0, font.char_width, font.char_height);
*x += font.char_width;
}
glDisable(GL_BLEND); // Return to the animation's default behaviour
glBindTexture(GL_TEXTURE_2D, 0);
}
PS: this is not an Android app, so it won't be done in the app layer.
BootAnimation.cpp's use of OpenGL ES has changed a bit, and it now uses a more modern way of dealing with graphics.
That being said, I found that my case needed some abstraction, as done here. Basic OpenGL manipulation, such as common vertex and fragment shaders (position and color, really nothing beyond the fundamentals) plus VBO/VAO for data buffering and glDrawArrays, is enough for my usage.
I still need to understand and apply some texturing and figure out the best way (in my scenario) to handle text, but I think that is all.
I am using the Android platform. From the following reference question I learned that, using the AudioRecord class which returns raw data, I can filter a range of audio frequencies depending on my need, but for that I need an algorithm. Can somebody please help me find an algorithm to filter the range between 14,400 bph and 16,200 bph?
I tried "JTransforms" but I don't know whether I can achieve this with JTransforms or not. Currently I am using "jfftpack" to display visual effects, which works very well, but I can't implement the audio filter with it.
Reference here
Help appreciated. Thanks in advance.
Following is my code. As I mentioned above, I am using the "jfftpack" library for the display; you may find references to this library in the code, so please don't get confused by that.
private class RecordAudio extends AsyncTask<Void, double[], Void> {
@Override
protected Void doInBackground(Void... params) {
try {
final AudioRecord audioRecord = findAudioRecord();
if(audioRecord == null){
return null;
}
final short[] buffer = new short[blockSize];
final double[] toTransform = new double[blockSize];
audioRecord.startRecording();
while (started) {
final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
}
transformer.ft(toTransform);
publishProgress(toTransform);
}
audioRecord.stop();
audioRecord.release();
} catch (Throwable t) {
Log.e("AudioRecord", "Recording Failed");
}
return null;
}
/**
* @param toTransform
*/
@Override
protected void onProgressUpdate(double[]... toTransform) {
canvas.drawColor(Color.BLACK);
for (int i = 0; i < toTransform[0].length; i++) {
int x = i;
int downy = (int) (100 - (toTransform[0][i] * 10));
int upy = 100;
canvas.drawLine(x, downy, x, upy, paint);
}
imageView.invalidate();
}
}
There are a lot of tiny details in this process that can potentially trip you up. This code isn't tested, and I don't do audio filtering very often, so you should be suspicious of it.
Get audio buffer
Possible audio buffer conversion (byte to float)
(optional) Apply windowing function i.e. Hanning
Take the FFT
Filter frequencies
Take inverse FFT
I'm assuming you have some basic knowledge of Android and audio recording, so I will only cover steps 4-6 here.
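If you do want the optional windowing from step 3 before the FFT, a minimal in-place Hann window might look like this (this helper is not part of the code below, just a sketch):
// Optional step 3: apply a Hann window in place to reduce spectral leakage.
void applyHannWindow(float[] audioBuffer) {
    int n = audioBuffer.length;
    for (int i = 0; i < n; i++) {
        audioBuffer[i] *= 0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (n - 1)));
    }
}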
//it is assumed that a float array audioBuffer exists with even length = to
//the capture size of your audio buffer
//The size of the FFT will be the size of your audioBuffer / 2
int FFT_SIZE = bufferSize / 2;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a jTransforms type
//Take the FFT
mFFT.realForward(audioBuffer);
//The first 1/2 of audioBuffer now contains bins that represent the frequency
//of your wave, in a way. To get the actual frequency from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE
//assuming the length of audioBuffer is even, the real and imaginary parts will be
//stored as follows
//audioBuffer[2*k] = Re[k], 0<=k<n/2
//audioBuffer[2*k+1] = Im[k], 0<k<n/2
//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;
//Loop through the fft bins and filter frequencies
for(int fftBin = 0; fftBin < FFT_SIZE; fftBin++){
//Calculate the frequency of this bin assuming a sampling rate of 44,100 Hz
float frequency = (float)fftBin * 44100F / (float)FFT_SIZE;
//Now filter the audio, I'm assuming you wanted to keep the
//frequencies of interest rather than discard them.
if(frequency < freqMin || frequency > freqMax){
//Calculate the index where the real and imaginary parts are stored
int real = 2 * fftBin;
int imaginary = 2 * fftBin + 1;
//zero out this frequency
audioBuffer[real] = 0;
audioBuffer[imaginary] = 0;
}
}
//Take the inverse FFT to convert signal from frequency to time domain
mFFT.realInverse(audioBuffer, false);
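After the inverse FFT, audioBuffer holds time-domain samples again. If you want to play or store the filtered signal, you would convert back to 16-bit PCM; a sketch, assuming the buffer was normalized to [-1, 1] when it was filled:
// Hypothetical helper: convert normalized float samples back to 16-bit PCM.
short[] toPcm16(float[] audioBuffer) {
    short[] out = new short[audioBuffer.length];
    for (int i = 0; i < audioBuffer.length; i++) {
        // Clamp to [-1, 1]; filtering can push samples slightly out of range
        float v = Math.max(-1f, Math.min(1f, audioBuffer[i]));
        out[i] = (short) (v * 32767f);
    }
    return out;
}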
I have some static images like the one below:
Now, what I want is: when I touch the face or a hand, the selected color should fill that skin portion.
See the image of the result below:
So how do I get a result like the one above?
Redo and undo functionality should also be there.
I have tried flood fill, but with that I am only able to color one particular region, because flood fill only fills as long as pixels match the touched pixel's color; once the color changes, it stops filling.
So using flood fill I got a result like the image below: if I press on the hand, only the hand portion is filled with color, whereas I want the other hand and the face to be filled as well.
So please help me with this case.
EDITED
After some replies I got a solution like this one.
But there is still a memory issue: it consumes a lot of memory to draw the color. So please, can anyone help me with it?
You can have a complete image colored the intended way, and when you fill a certain region with a color, it will fill in all the regions that are marked with that same color.
Layman's terms:
The user clicks on, say, the hand of the OUTLINE.
That click location is checked against another image with perfectly color-coded regions. Let's call it a MASK for this case. All the skin regions have the same color; the shirt areas have another color.
Wherever the user clicks, the color the user selected is applied to every pixel that has a similar color in the MASK, but instead of painting directly on the MASK, you paint onto the pixels of the OUTLINE.
I hope this helps.
Feel free to comment if you want an example and then I can update the answer with that, but I think you can get it from here.
EDIT:
Basically, start off with a simple image like this. We can call it the OUTLINE.
Then, as the developer, you have to do some work. Here, you color-code the OUTLINE; the result we call a MASK. To make it, you color-code the regions with the colors you want. This can be done in Paint or whatever. I used Photoshop to be cool lol :D.
Then there is the ALGORITHM to get it working on the phone. Before you read the code, look at this variable.
int ANTILAISING_TOLERANCE = 70; //Larger better coloring, reduced sensing
If you zoom in on the image, specifically on the black regions of the border, you can see that the computer sometimes blends the colors a little bit. To account for that, we use this tolerance value.
COLORINGANDROIDACTIVITY.JAVA
package mk.coloring;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.Bitmap.Config;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.View;
import android.widget.ImageView;
import android.view.View.OnTouchListener;
public class ColoringAndroidActivity extends Activity implements OnTouchListener{
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
findViewById(R.id.imageView1).setOnTouchListener(this);
}
int ANTILAISING_TOLERANCE = 70;
public boolean onTouch(View arg0, MotionEvent arg1) {
Bitmap mask = BitmapFactory.decodeResource(getResources(), R.drawable.mask);
int selectedColor = mask.getPixel((int)arg1.getX(),(int)arg1.getY());
int sG = (selectedColor & 0x0000FF00) >> 8;
int sR = (selectedColor & 0x00FF0000) >> 16;
int sB = (selectedColor & 0x000000FF);
Bitmap original = BitmapFactory.decodeResource(getResources(), R.drawable.empty);
Bitmap colored = Bitmap.createBitmap(mask.getWidth(), mask.getHeight(), Config.ARGB_8888);
Canvas cv = new Canvas(colored);
cv.drawBitmap(original, 0,0, null);
for(int x = 0; x<mask.getWidth();x++){
for(int y = 0; y<mask.getHeight();y++){
int g = (mask.getPixel(x,y) & 0x0000FF00) >> 8;
int r = (mask.getPixel(x,y) & 0x00FF0000) >> 16;
int b = (mask.getPixel(x,y) & 0x000000FF);
if(Math.abs(sR - r) < ANTILAISING_TOLERANCE && Math.abs(sG - g) < ANTILAISING_TOLERANCE && Math.abs(sB - b) < ANTILAISING_TOLERANCE)
colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFF000000) | 0x00458414);
}
}
((ImageView)findViewById(R.id.imageView1)).setImageBitmap(colored);
return true;
}
}
This code doesn't give the user much choice of colors. Instead, if the user touches a region, it looks at the MASK and paints the OUTLINE accordingly. But you can make it really interesting and interactive.
RESULT
When I touched the man's hair, it not only colored the hair, but colored his shirt and hand with the same color. Compare it with the MASK to get a good idea of what happened.
This is just a basic idea. I have created multiple Bitmaps, but there is not really a need for that; I used them for testing purposes and they take up unnecessary memory. And you don't need to recreate the mask on every click, etc.
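As a sketch of that "decode once, reuse" idea (the class and field names here are illustrative, not from the answer above):
// Same imports as ColoringAndroidActivity above.
public class ColoringActivityOptimized extends Activity implements OnTouchListener {
    private Bitmap mask;     // color-coded regions, decoded once
    private Bitmap colored;  // one mutable working copy of the outline

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        mask = BitmapFactory.decodeResource(getResources(), R.drawable.mask);
        Bitmap outline = BitmapFactory.decodeResource(getResources(), R.drawable.empty);
        colored = outline.copy(Config.ARGB_8888, true); // mutable, reused on every touch
        findViewById(R.id.imageView1).setOnTouchListener(this);
    }

    @Override
    public boolean onTouch(View v, MotionEvent event) {
        // ...same region-matching loop as above, but reading from "mask" and
        // writing into "colored" instead of decoding new bitmaps each time...
        ((ImageView) findViewById(R.id.imageView1)).setImageBitmap(colored);
        return true;
    }
}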
I hope this helps you :D
Good luck
Use a flood fill algorithm. Fill the complete canvas, but keep the bounded fill areas (circle, rectangle, etc.) as they are. You can also check this link: Android: How to fill color to the specific part of the Image only?. The general idea: get the x and y coordinates on click.
final Point p1 = new Point();
p1.x = (int) x;
p1.y = (int) y; // x and y are the coordinates where the user clicked on the screen
final int sourceColor = mBitmap.getPixel((int) x, (int) y);
final int targetColor = mPaint.getColor();
// Use an AsyncTask and do the flood fill in doInBackground()
new TheTask(mDrawingManager.mDrawingUtilities.mBitmap, p1, sourceColor, targetColor).execute();
Check the above links for the flood fill algorithm in Android; they should help you achieve what you want. Android FingerPaint Undo/Redo implementation should help you adapt this to your needs regarding undo and redo.
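As a rough idea for the undo/redo part, here is a sketch that snapshots the bitmap before each fill; the stack names and the snapshot-per-fill strategy are assumptions, and full-bitmap snapshots can themselves get memory-hungry on large images:
// Needs java.util.ArrayDeque and java.util.Deque.
Deque<Bitmap> undoStack = new ArrayDeque<>();
Deque<Bitmap> redoStack = new ArrayDeque<>();

void beforeFill(Bitmap current) {
    undoStack.push(current.copy(current.getConfig(), true)); // snapshot before the fill
    redoStack.clear();                                       // a new action invalidates redo
}

Bitmap undo(Bitmap current) {
    if (undoStack.isEmpty()) return current;
    redoStack.push(current);
    return undoStack.pop();
}

Bitmap redo(Bitmap current) {
    if (redoStack.isEmpty()) return current;
    undoStack.push(current);
    return redoStack.pop();
}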
Edit:
A post on Stack Overflow led me to an efficient way of using the flood fill algorithm without delays or OOM errors.
Quoting from the SO post:
Filling a small closed area works fine with the above flood fill algorithm. However, for a large area the algorithm is slow and consumes a lot of memory. Recently I came across a post which uses a queue-linear flood fill, which is way faster than the above.
Source :
http://www.codeproject.com/Articles/16405/Queue-Linear-Flood-Fill-A-Fast-Flood-Fill-Algorith
Code :
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;

import java.util.LinkedList;
import java.util.Queue;

public class QueueLinearFloodFiller {
protected Bitmap image = null;
protected int[] tolerance = new int[] { 0, 0, 0 };
protected int width = 0;
protected int height = 0;
protected int[] pixels = null;
protected int fillColor = 0;
protected int[] startColor = new int[] { 0, 0, 0 };
protected boolean[] pixelsChecked;
protected Queue<FloodFillRange> ranges;
// Construct using an image only: a copy will be made to fill into (use getImage to retrieve it).
// Construct with target/new colors as well: the flood fill will write directly to the
// provided Bitmap.
public QueueLinearFloodFiller(Bitmap img) {
copyImage(img);
}
public QueueLinearFloodFiller(Bitmap img, int targetColor, int newColor) {
useImage(img);
setFillColor(newColor);
setTargetColor(targetColor);
}
public void setTargetColor(int targetColor) {
startColor[0] = Color.red(targetColor);
startColor[1] = Color.green(targetColor);
startColor[2] = Color.blue(targetColor);
}
public int getFillColor() {
return fillColor;
}
public void setFillColor(int value) {
fillColor = value;
}
public int[] getTolerance() {
return tolerance;
}
public void setTolerance(int[] value) {
tolerance = value;
}
public void setTolerance(int value) {
tolerance = new int[] { value, value, value };
}
public Bitmap getImage() {
return image;
}
public void copyImage(Bitmap img) {
// Copy data from the provided image into a new Bitmap that the flood fill
// writes to; use getImage to retrieve the result
// cache data in member variables to decrease overhead of property calls
width = img.getWidth();
height = img.getHeight();
image = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
Canvas canvas = new Canvas(image);
canvas.drawBitmap(img, 0, 0, null);
pixels = new int[width * height];
image.getPixels(pixels, 0, width, 0, 0, width, height);
}
public void useImage(Bitmap img) {
// Use the pre-existing provided Bitmap and write directly to it
// cache data in member variables to decrease overhead of property calls
width = img.getWidth();
height = img.getHeight();
image = img;
pixels = new int[width * height];
image.getPixels(pixels, 0, width, 0, 0, width, height);
}
protected void prepare() {
// Called before starting flood-fill
pixelsChecked = new boolean[pixels.length];
ranges = new LinkedList<FloodFillRange>();
}
// Fills the specified point on the bitmap with the currently selected fill
// color.
// int x, int y: The starting coords for the fill
public void floodFill(int x, int y) {
// Setup
prepare();
if (startColor[0] == 0) {
// ***Get starting color.
int startPixel = pixels[(width * y) + x];
startColor[0] = (startPixel >> 16) & 0xff;
startColor[1] = (startPixel >> 8) & 0xff;
startColor[2] = startPixel & 0xff;
}
// ***Do first call to floodfill.
LinearFill(x, y);
// ***Call floodfill routine while floodfill ranges still exist on the
// queue
FloodFillRange range;
while (ranges.size() > 0) {
// **Get Next Range Off the Queue
range = ranges.remove();
// **Check Above and Below Each Pixel in the Floodfill Range
int downPxIdx = (width * (range.Y + 1)) + range.startX;
int upPxIdx = (width * (range.Y - 1)) + range.startX;
int upY = range.Y - 1;// so we can pass the y coord by ref
int downY = range.Y + 1;
for (int i = range.startX; i <= range.endX; i++) {
// *Start Fill Upwards
// if we're not above the top of the bitmap and the pixel above
// this one is within the color tolerance
if (range.Y > 0 && (!pixelsChecked[upPxIdx])
&& CheckPixel(upPxIdx))
LinearFill(i, upY);
// *Start Fill Downwards
// if we're not below the bottom of the bitmap and the pixel
// below this one is within the color tolerance
if (range.Y < (height - 1) && (!pixelsChecked[downPxIdx])
&& CheckPixel(downPxIdx))
LinearFill(i, downY);
downPxIdx++;
upPxIdx++;
}
}
image.setPixels(pixels, 0, width, 0, 0, width, height);
}
// Finds the furthermost left and right boundaries of the fill area
// on a given y coordinate, starting from a given x coordinate, filling as
// it goes.
// Adds the resulting horizontal range to the queue of floodfill ranges,
// to be processed in the main loop.
// int x, int y: The starting coords
protected void LinearFill(int x, int y) {
// ***Find Left Edge of Color Area
int lFillLoc = x; // the location to check/fill on the left
int pxIdx = (width * y) + x;
while (true) {
// **fill with the color
pixels[pxIdx] = fillColor;
// **indicate that this pixel has already been checked and filled
pixelsChecked[pxIdx] = true;
// **decrement
lFillLoc--; // decrement counter
pxIdx--; // decrement pixel index
// **exit loop if we're at edge of bitmap or color area
if (lFillLoc < 0 || (pixelsChecked[pxIdx]) || !CheckPixel(pxIdx)) {
break;
}
}
lFillLoc++;
// ***Find Right Edge of Color Area
int rFillLoc = x; // the location to check/fill on the right
pxIdx = (width * y) + x;
while (true) {
// **fill with the color
pixels[pxIdx] = fillColor;
// **indicate that this pixel has already been checked and filled
pixelsChecked[pxIdx] = true;
// **increment
rFillLoc++; // increment counter
pxIdx++; // increment pixel index
// **exit loop if we're at edge of bitmap or color area
if (rFillLoc >= width || pixelsChecked[pxIdx] || !CheckPixel(pxIdx)) {
break;
}
}
rFillLoc--;
// add range to queue
FloodFillRange r = new FloodFillRange(lFillLoc, rFillLoc, y);
ranges.offer(r);
}
// Sees if a pixel is within the color tolerance range.
protected boolean CheckPixel(int px) {
int red = (pixels[px] >>> 16) & 0xff;
int green = (pixels[px] >>> 8) & 0xff;
int blue = pixels[px] & 0xff;
return (red >= (startColor[0] - tolerance[0])
&& red <= (startColor[0] + tolerance[0])
&& green >= (startColor[1] - tolerance[1])
&& green <= (startColor[1] + tolerance[1])
&& blue >= (startColor[2] - tolerance[2]) && blue <= (startColor[2] + tolerance[2]));
}
// Represents a linear range to be filled and branched from.
protected class FloodFillRange {
public int startX;
public int endX;
public int Y;
public FloodFillRange(int startX, int endX, int y) {
this.startX = startX;
this.endX = endX;
this.Y = y;
}
}
}
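Using the filler from a touch handler could look roughly like this (drawingBitmap, imageView and the touch coordinates are assumptions; the bitmap must be mutable because the second constructor writes to it directly):
// Fill the touched region of a mutable bitmap and show the result.
int targetColor = drawingBitmap.getPixel(touchX, touchY);
QueueLinearFloodFiller filler = new QueueLinearFloodFiller(drawingBitmap, targetColor, Color.RED);
filler.setTolerance(30);           // allow slight color variations around the target
filler.floodFill(touchX, touchY);
imageView.setImageBitmap(filler.getImage());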
One basic way would be something like the flood fill algorithm.
The Wikipedia article describes the algorithm and its variations pretty well.
Here you can find an implementation on SO, but depending on your specific needs it has to be modified.
[UPDATE]
To conclude this question, I implemented my graph using the following two methods (see below). drawCurve() receives a Canvas and an array of float. The array is properly filled (timestamps are assumed by the value index in the array) and varies from 0.0 to 1.0. The array is sent to prepareWindowArray() that takes a chunk of the array from position windowStart for windowSize-values, in a circular manner.
The array used by the GraphView and by the data provider (a Bluetooth device) is the same. A class in the middle ensures that the GraphView is not reading data that is currently being written by the Bluetooth device. Since the GraphView always loops through the array and redraws it on every iteration, it updates according to the data written by the Bluetooth device, and by forcing the write frequency of the Bluetooth device to match the refresh frequency of the graph, I obtain a smooth animation of my signal.
The GraphView's invalidate() method is called by the Activity, which runs a Timer to refresh the graph every x milliseconds. The frequency at which the graph is refreshed is set dynamically, so that it adapts to the flow of data from the Bluetooth device (which specifies the frequency of its signal in the header of its packets).
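That timer-driven refresh could look roughly like this in the Activity (a sketch; graphView and refreshPeriodMs are assumptions):
// Redraw the graph at a fixed period; invalidate() must run on the UI thread.
Timer refreshTimer = new Timer();
refreshTimer.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                graphView.invalidate();
            }
        });
    }
}, 0, refreshPeriodMs);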
Find the complete code of my GraphView in the answer I wrote below (in the answer section). If you find errors or ways to optimize it, please let me know; it would be greatly appreciated!
/**
* Read a buffer array of size greater than "windowSize" and create a window array out of it.
* A curve is then drawn from this array using "windowSize" points, from left
* to right.
* @param canvas is a Canvas object on which the curve will be drawn. Ensure the canvas is the
* later drawn object at its position or you will not see your curve.
* @param data is a float array of length > windowSize. The floats must range between 0.0 and 1.0.
* A value of 0.0 will be drawn at the bottom of the graph, while a value of 1.0 will be drawn at
* the top of the graph. The range is not tested, so you must ensure to pass proper values, or your
* graph will look terrible.
* 0.0 : draw at the bottom of the graph
* 0.5 : draw in the middle of the graph
* 1.0 : draw at the top of the graph
*/
private void drawCurve(Canvas canvas, float[] data){
// Create a reference value to determine the stepping between each points to be drawn
float incrementX = (mRightSide-mLeftSide)/(float) windowSize;
float incrementY = (mBottomSide - mTopSide);
// Prepare the array for the graph
float[] source = prepareWindowArray(data);
// Prepare the curve Path
curve = new Path();
// Move at the first point.
curve.moveTo(mLeftSide, source[0]*incrementY);
// Draw the remaining points of the curve
for(int i = 1; i < windowSize; i++){
curve.lineTo(mLeftSide + (i*incrementX), source[i] * incrementY);
}
canvas.drawPath(curve, curvePaint);
}
The prepareWindowArray() method implements the circular behavior of the array:
/**
* Extract a window array from the data array, and reposition the windowStart
* index for next iteration
* @param data the array of data from which we get the window
* @return an array of float that represent the window
*/
private float[] prepareWindowArray(float[] data){
// Prepare the source array for the graph.
float[] source = new float[windowSize];
// Copy the window from the data array into the source array
for(int i = 0; i < windowSize; i++){
if(windowStart+i < data.length) // If the windows holds within the data array
source[i] = data[windowStart + i]; // Simply copy the value in the source array
else{ // If the window goes beyond the data array
source[i] = data[(windowStart + i) % data.length]; // Loop back to the beginning of the data array and copy from there
}
}
// Reposition the buffer index
windowStart = windowStart + windowSize;
// If the index is beyond the end of the array
if(windowStart >= data.length){
windowStart = windowStart % data.length;
}
return source;
}
[/UPDATE]
I'm making an app that reads data from a Bluetooth device at a fixed rate. Every time I have new data, I want it to be plotted on the right of the graph and the rest of the graph to be translated to the left in real time. Basically, like an oscilloscope would do.
So I made a custom View with x-y axes, a title and units. To do this, I simply draw those things on the View's canvas. Now I want to draw the curve. I managed to draw a static curve from an already filled array using this method:
public void drawCurve(Canvas canvas){
int left = getPaddingLeft();
int bottom = getHeight()-getPaddingTop();
int middle = (bottom-10)/2 - 10;
curvePaint = new Paint();
curvePaint.setColor(Color.GREEN);
curvePaint.setStrokeWidth(1f);
curvePaint.setDither(true);
curvePaint.setStyle(Paint.Style.STROKE);
curvePaint.setStrokeJoin(Paint.Join.ROUND);
curvePaint.setStrokeCap(Paint.Cap.ROUND);
curvePaint.setPathEffect(new CornerPathEffect(10) );
curvePaint.setAntiAlias(true);
mCurve = new Path();
mCurve.moveTo(left, middle);
for(int i = 0; i < mData[0].length; i++)
mCurve.lineTo(left + ((float)mData[0][i] * 5), middle-((float)mData[1][i] * 20));
canvas.drawPath(mCurve, curvePaint);
}
It gives me something like this.
There are still things to fix on my graph (the sub-axes are not scaling properly), but these are details I can fix later.
Now I want to change this static graph (which receives a non-dynamic matrix of values) into something dynamic that redraws the curve every 40 ms, pushing the old data to the left and plotting the new data on the right, so I can visualize the information provided by the Bluetooth device in real time.
I know some graphing packages already exist, but I'm kind of a noob with these things and I'd like to practice by implementing this graph myself. Also, most of my GraphView class is done, except for the curve part.
Second question: I'm wondering how I should send the new values to the graph. Should I use something like a FIFO queue, or can I achieve what I want with a simple matrix of doubles?
On a side note, the 4 fields at the bottom are already dynamically updated. Well, they are kind of faking the "dynamic": they loop through the same double matrix again and again; they don't actually take fresh values.
Thanks for your time! If something's unclear about my question, let me know and I'll update it with more details.
As mentioned in my question, here's the class that I designed to solve my problems.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.CornerPathEffect;
import android.graphics.Paint;
import android.graphics.Paint.Align;
import android.graphics.Path;
import android.graphics.Typeface;
import android.text.TextPaint;
import android.util.AttributeSet;
import android.util.Log;
import android.view.View;

/**
* A View implementation that displays a scatter graph with
* automatic unit scaling.
*
* Call the <i>setupGraph()</i> method to modify the graph's
* properties.
* @author Antoine Grondin
*
*/
public class GraphView extends View {
//////////////////////////////////////////////////////////////////
// Configuration
//////////////////////////////////////////////////////////////////
// Set to true to impose the graph properties
private static final boolean TEST = false;
// Scale configuration
private float minX = 0; // When TEST is true, these values are used to
private float maxX = 50; // Draw the graph
private float minY = 0;
private float maxY = 100;
private String titleText = "A Graph...";
private String xUnitText = "s";
private String yUnitText = "Volts";
// Debugging variables
private boolean D = true;
private String TAG = "GraphView";
//////////////////////////////////////////////////////////////////
// Member fields
//////////////////////////////////////////////////////////////////
// Represent the borders of the View
private int mTopSide = 0;
private int mLeftSide = 0;
private int mRightSide = 0;
private int mBottomSide = 0;
private int mMiddleX = 0;
// Size of a DensityIndependentPixel
private float mDips = 0;
// Hold the position of the axis in regard to the range of values
private int positionOfX = 0;
private int positionOfY = 0;
// Index for the graph array window, and size of the window
private int windowStart = 0;
private int windowSize = 128;
private float[] dataSource;
// Painting tools
private Paint xAxisPaint;
private Paint yAxisPaint;
private Paint tickPaint;
private Paint curvePaint;
private Paint backgroundPaint;
private TextPaint unitTextPaint;
private TextPaint titleTextPaint;
// Object to be drawn
private Path curve;
private Bitmap background;
///////////////////////////////////////////////////////////////////////////////
// Constructors
///////////////////////////////////////////////////////////////////////////////
public GraphView(Context context) {
super(context);
init();
}
public GraphView(Context context, AttributeSet attrs){
super(context, attrs);
init();
}
public GraphView(Context context, AttributeSet attrs, int defStyle){
super(context, attrs, defStyle);
init();
}
///////////////////////////////////////////////////////////////////////////////
// Configuration methods
///////////////////////////////////////////////////////////////////////////////
public void setupGraph(String title, String nameOfX, float min_X, float max_X, String nameOfY, float min_Y, float max_Y){
if(!TEST){
titleText = title;
xUnitText = nameOfX;
yUnitText = nameOfY;
minX = min_X;
maxX = max_X;
minY = min_Y;
maxY = max_Y;
}
}
/**
* Set the array this GraphView is to work with.
* @param data is a float array of length > windowSize. The floats must range between 0.0 and 1.0.
* A value of 0.0 will be drawn at the bottom of the graph, while a value of 1.0 will be drawn at
* the top of the graph. The range is not tested, so you must ensure to pass proper values, or your
* graph will look terrible.
* 0.0 : draw at the bottom of the graph
* 0.5 : draw in the middle of the graph
* 1.0 : draw at the top of the graph
*/
public void setDataSource(float[] data){
this.dataSource = data;
}
///////////////////////////////////////////////////////////////////////////////
// Initialization methods
///////////////////////////////////////////////////////////////////////////////
private void init(){
initDrawingTools();
}
private void initConstants(){
mDips = getResources().getDisplayMetrics().density;
mTopSide = (int) (getTop() + 10*mDips);
mLeftSide = (int) (getLeft() + 10*mDips);
mRightSide = (int) (getMeasuredWidth() - 10*mDips);
mBottomSide = (int) (getMeasuredHeight() - 10*mDips);
mMiddleX = (mRightSide - mLeftSide)/2 + mLeftSide;
}
private void initWindowSetting() throws IllegalArgumentException {
// Don't do anything if the given values make no sense
if(maxX < minX || maxY < minY ||
maxX == minX || maxY == minY){
throw new IllegalArgumentException("Max and min values make no sense");
}
// Transform the values in scanable items
float[][] maxAndMin = new float[][]{
{minX, maxX},
{minY, maxY}};
int[] positions = new int[]{positionOfY, positionOfX};
// Place the X and Y axis in regard to the given max and min
for(int i = 0; i<2; i++){
if(maxAndMin[i][0] < 0f){
if(maxAndMin[i][1] < 0f){
positions[i] = (int) maxAndMin[i][0];
} else{
positions[i] = 0;
}
} else if (maxAndMin[i][0] > 0f){
positions[i] = (int) maxAndMin[i][0];
} else {
positions[i] = 0;
}
}
// Put the values back in their right place
minX = maxAndMin[0][0];
maxX = maxAndMin[0][1];
minY = maxAndMin[1][0];
maxY = maxAndMin[1][1];
positionOfY = mLeftSide + (int) (((positions[0] - minX)/(maxX-minX))*(mRightSide - mLeftSide));
positionOfX = mBottomSide - (int) (((positions[1] - minY)/(maxY-minY))*(mBottomSide - mTopSide));
}
private void initDrawingTools(){
xAxisPaint = new Paint();
xAxisPaint.setColor(0xff888888);
xAxisPaint.setStrokeWidth(1f*mDips);
xAxisPaint.setAlpha(0xff);
xAxisPaint.setAntiAlias(true);
yAxisPaint = new Paint(xAxisPaint); // copy so the tick color below doesn't change the axis paints
tickPaint = new Paint(xAxisPaint);
tickPaint.setColor(0xffaaaaaa);
curvePaint = new Paint();
curvePaint.setColor(0xff00ff00);
curvePaint.setStrokeWidth(1f*mDips);
curvePaint.setDither(true);
curvePaint.setStyle(Paint.Style.STROKE);
curvePaint.setStrokeJoin(Paint.Join.ROUND);
curvePaint.setStrokeCap(Paint.Cap.ROUND);
curvePaint.setPathEffect(new CornerPathEffect(10));
curvePaint.setAntiAlias(true);
backgroundPaint = new Paint();
backgroundPaint.setFilterBitmap(true);
titleTextPaint = new TextPaint();
titleTextPaint.setAntiAlias(true);
titleTextPaint.setColor(0xffffffff);
titleTextPaint.setTextAlign(Align.CENTER);
titleTextPaint.setTextSize(20f*mDips);
titleTextPaint.setTypeface(Typeface.MONOSPACE);
unitTextPaint = new TextPaint();
unitTextPaint.setAntiAlias(true);
unitTextPaint.setColor(0xff888888);
unitTextPaint.setTextAlign(Align.CENTER);
unitTextPaint.setTextSize(20f*mDips);
unitTextPaint.setTypeface(Typeface.MONOSPACE);
}
///////////////////////////////////////////////////////////////////////////////
// Overridden methods
///////////////////////////////////////////////////////////////////////////////
protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec){
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
}
protected void onSizeChanged(int w, int h, int oldw, int oldh) {
regenerateBackground();
}
public void onDraw(Canvas canvas){
drawBackground(canvas);
if(dataSource != null)
drawCurve(canvas, dataSource);
}
///////////////////////////////////////////////////////////////////////////////
// Drawing methods
///////////////////////////////////////////////////////////////////////////////
private void drawX(Canvas canvas){
canvas.drawLine(mLeftSide, positionOfX, mRightSide, positionOfX, xAxisPaint);
canvas.drawText(xUnitText, mRightSide - unitTextPaint.measureText(xUnitText)/2, positionOfX - unitTextPaint.getTextSize()/2, unitTextPaint);
}
private void drawY(Canvas canvas){
canvas.drawLine(positionOfY, mTopSide, positionOfY, mBottomSide, yAxisPaint);
canvas.drawText(yUnitText, positionOfY + unitTextPaint.measureText(yUnitText)/2 + 4*mDips, mTopSide + (int) (unitTextPaint.getTextSize()/2), unitTextPaint);
}
private void drawTick(Canvas canvas){
// No tick at this time
// TODO decide how I want to put those ticks, if I want them
}
private void drawTitle(Canvas canvas){
canvas.drawText(titleText, mMiddleX, mTopSide + (int) (titleTextPaint.getTextSize()/2), titleTextPaint);
}
/**
* Read a buffer array of size greater than "windowSize" and create a window array out of it.
* A curve is then drawn from this array using "windowSize" points, from left
* to right.
* @param canvas is a Canvas object on which the curve will be drawn. Ensure the canvas is the
* later drawn object at its position or you will not see your curve.
* @param data is a float array of length > windowSize. The floats must range between 0.0 and 1.0.
* A value of 0.0 will be drawn at the bottom of the graph, while a value of 1.0 will be drawn at
* the top of the graph. The range is not tested, so you must ensure to pass proper values, or your
* graph will look terrible.
* 0.0 : draw at the bottom of the graph
* 0.5 : draw in the middle of the graph
* 1.0 : draw at the top of the graph
*/
private void drawCurve(Canvas canvas, float[] data){
// Create a reference value to determine the stepping between each points to be drawn
float incrementX = (mRightSide-mLeftSide)/(float) windowSize;
float incrementY = mBottomSide - mTopSide;
// Prepare the array for the graph
float[] source = prepareWindowArray(data);
// Prepare the curve Path
curve = new Path();
// Move at the first point.
curve.moveTo(mLeftSide, source[0]*incrementY);
// Draw the remaining points of the curve
for(int i = 1; i < windowSize; i++){
curve.lineTo(mLeftSide + (i*incrementX), source[i] * incrementY);
}
canvas.drawPath(curve, curvePaint);
}
///////////////////////////////////////////////////////////////////////////////
// Intimate methods
///////////////////////////////////////////////////////////////////////////////
/**
* When asked to draw the background, this method will verify if a bitmap of the
* background is available. If not, it will regenerate one. Then, it will draw
* the background using this bitmap. The use of a bitmap to draw the background
* is to avoid unnecessary processing for static parts of the view.
*/
private void drawBackground(Canvas canvas){
if(background == null){
regenerateBackground();
}
canvas.drawBitmap(background, 0, 0, backgroundPaint);
}
/**
* Call this method to force the <i>GraphView</i> to redraw the cache of it's background,
* using new properties if you changed them with <i>setupGraph()</i>.
*/
public void regenerateBackground(){
initConstants();
try{
initWindowSetting();
} catch (IllegalArgumentException e){
Log.e(TAG, "Could not initalize windows.", e);
return;
}
if(background != null){
background.recycle();
}
background = Bitmap.createBitmap(getWidth(), getHeight(), Bitmap.Config.ARGB_8888);
Canvas backgroundCanvas = new Canvas(background);
drawX(backgroundCanvas);
drawY(backgroundCanvas);
drawTick(backgroundCanvas);
drawTitle(backgroundCanvas);
}
/**
* Extract a window array from the data array, and reposition the windowStart
* index for next iteration
* @param data the array of data from which we get the window
* @return an array of float that represent the window
*/
private float[] prepareWindowArray(float[] data){
// Prepare the source array for the graph.
float[] source = new float[windowSize];
// Copy the window from the data array into the source array
for(int i = 0; i < windowSize; i++){
if(windowStart+i < data.length) // If the windows holds within the data array
source[i] = data[windowStart + i]; // Simply copy the value in the source array
else{ // If the window goes beyond the data array
source[i] = data[(windowStart + i) % data.length]; // Loop back to the beginning of the data array and copy from there
}
}
// Reposition the buffer index
windowStart = windowStart + windowSize;
// If the index is beyond the end of the array
if(windowStart >= data.length){
windowStart = windowStart % data.length;
}
return source;
}
}
Well, I would start by just trying to redraw it all with the code you have and real dynamic data. Only if that is not quick enough do you need to try anything fancy like scrolling...
If you do need fancy, I would try something like this.
I would draw the dynamic part of the graph into a secondary Bitmap that you keep between frames, rather than directly to the canvas. I would keep the non-dynamic background part of the graph in another bitmap that only gets drawn on rescale etc.
In this secondary dynamic bitmap, when plotting new data you first need to clear the old data you are replacing. You do this by drawing the appropriate slice of the static background bitmap over the top of the stale data, thus clearing it and getting the background nice and fresh again. You then just need to draw your new bit of dynamic data. The trick is that you draw into this second bitmap left to right, then just wrap back to the left at the end and start over.
To get from the secondary bitmap to your canvas, draw the bitmap to the canvas in two parts. The older data to the right of what you just added needs to be drawn onto the left part of your final canvas, and the new data needs to be drawn immediately to the right of it.
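In code, that two-part copy could look roughly like this (a sketch; scrollBitmap and writeX, the column just after the newest sample, are assumptions, and Rect is android.graphics.Rect):
// Older data (right of writeX) goes to the left of the canvas,
// newer data (left of writeX) goes immediately after it.
int w = scrollBitmap.getWidth();
int h = scrollBitmap.getHeight();

Rect oldSrc = new Rect(writeX, 0, w, h);
Rect oldDst = new Rect(0, 0, w - writeX, h);
canvas.drawBitmap(scrollBitmap, oldSrc, oldDst, null);

Rect newSrc = new Rect(0, 0, writeX, h);
Rect newDst = new Rect(w - writeX, 0, w, h);
canvas.drawBitmap(scrollBitmap, newSrc, newDst, null);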
For sending the data, a circular buffer would be the normal choice for this sort of data, where once it's off the graph you don't care about it any more.
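A minimal circular buffer for the incoming samples might look like this (a sketch; the capacity and float samples are assumptions):
// Fixed-size ring buffer: new samples overwrite the oldest ones.
public class SampleRingBuffer {
    private final float[] data;
    private int writeIndex = 0;

    public SampleRingBuffer(int capacity) {
        data = new float[capacity];
    }

    public void add(float sample) {
        data[writeIndex] = sample;
        writeIndex = (writeIndex + 1) % data.length;
    }

    // Copy the samples oldest-to-newest into dst for drawing.
    public void snapshot(float[] dst) {
        for (int i = 0; i < data.length; i++) {
            dst[i] = data[(writeIndex + i) % data.length];
        }
    }
}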