How to change audio pitch on Android?

I'm using the Oboe C++ library to play sounds in my Android application.
I want to change the pitch of my audio samples.
So I created an "mPos" float value to hold the currently played frame, and I add the "mPitch" value to it at every step.
The audio seems to play correctly with the new pitch, but it doubles over itself when the pitch is high (e.g. 1.2) and makes a weird noise when the pitch is low (e.g. 0.212).
This is my first time doing audio programming.
I did a lot of research before posting this question. I even sent messages directly to Oboe support, but got no response.
Does anyone have any idea how to implement the pitch change correctly?
streamLength always 192
channelCount always 2
Code:
void Player::renderAudio(float *stream, int32_t streamLength){
    const int32_t channelCount = mSound->getChannelCount();
    if (mIsPlaying){
        float framesToRenderFromData = streamLength;
        float totalSourceFrames = mSound->getTotalFrames() / mPitch;
        const float *data = mSound->getData();

        // Check whether we're about to reach the end of the recording
        if (mPos + streamLength >= totalSourceFrames){
            framesToRenderFromData = (totalSourceFrames - mPos);
            mIsPlaying = false;
        }

        for (int i = 0; i < framesToRenderFromData; ++i) {
            for (int j = 0; j < channelCount; ++j) {
                if (j % 2 == 0){
                    stream[(i * channelCount) + j] = (data[(size_t)(mPos * channelCount) + j] * mLeftVol) * mVol;
                } else {
                    stream[(i * channelCount) + j] = (data[(size_t)(mPos * channelCount) + j] * mRightVol) * mVol;
                }
            }
            mPos += mPitch;
            if (mPos >= totalSourceFrames){
                mPos = 0;
            }
        }

        if (framesToRenderFromData < streamLength){
            renderSilence(&stream[(size_t)framesToRenderFromData], streamLength * channelCount);
        }
    } else {
        renderSilence(stream, streamLength * channelCount);
    }
}

void Player::renderSilence(float *start, int32_t numSamples){
    for (int i = 0; i < numSamples; ++i) {
        start[i] = 0;
    }
}

void Player::setPitch(float pitchData){
    mPitch = pitchData;
}

When you multiply a float variable (mPos) by an integer-type variable (channelCount), the result is a float. You are, at the least, messing up your channel interleaving. Instead of
(size_t)(mPos * channelCount)
try
((size_t)mPos) * channelCount
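To see why the order matters, here is a tiny standalone illustration (the values are made up, not taken from your code): with mPos = 1.5 and channelCount = 2, the first form truncates after the multiply and lands on an odd sample index, so the subsequent "+ j" reads the wrong channel of the wrong frame.
#include <cstdio>
int main() {
    float mPos = 1.5f;       // fractional playback position, as in the question
    int channelCount = 2;    // stereo, interleaved L/R
    std::printf("%zu\n", (size_t)(mPos * channelCount));   // prints 3: odd, not frame-aligned
    std::printf("%zu\n", ((size_t)mPos) * channelCount);   // prints 2: even, frame-aligned
    return 0;
}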
EDIT:
You are intentionally looping the source when reaching the end, with the if statement that results in mPos = 0;. Instead of doing this, you could calculate the number of source samples independently of the pitch, but break out of the loop when your source samples are exhausted. Also, your comparison of the source and destination samples isn't useful because of the pitch adjustment:
float framesToRenderFromData = streamLength;
float totalSourceFrames = mSound->getTotalFrames(); // Note change here
const float *data = mSound->getData();

// Note: the only check here is whether mPos has reached the end
// from a previous call
if (mPos >= totalSourceFrames) {
    framesToRenderFromData = 0.0f;
}

for (int i = 0; i < framesToRenderFromData; ++i) {
    for (int j = 0; j < channelCount; ++j) {
        if (j % 2 == 0){
            stream[(i * channelCount) + j] = (data[((size_t)mPos) * channelCount + j] * mLeftVol) * mVol;
        } else {
            stream[(i * channelCount) + j] = (data[((size_t)mPos) * channelCount + j] * mRightVol) * mVol;
        }
    }
    mPos += mPitch;
    if (((size_t)mPos) >= totalSourceFrames) { // Replace this 'if' and its contents
        framesToRenderFromData = i + 1;        // destination frames actually rendered
        mPos = 0.0f;
        break;
    }
}
A note, however, for completeness: You really shouldn't be accomplishing pitch change in this way for any serious application -- the sound quality will be terrible. There are free libraries for audio resampling to an arbitrary target rate; these will convert your source sample to a higher or lower number of samples, and provide quality pitch changes when replayed at the same rate as the source.
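If you want a small quality improvement before reaching for a resampling library, linear interpolation between the two neighbouring source frames is the usual first step up from the nearest-frame read above. A rough sketch of the inner read only, reusing the question's names (mPos, mPitch, channelCount, data, stream, mVol) and ignoring the per-channel volumes for brevity; this is still not a substitute for a proper resampler:
// Produce one output frame i from the fractional source position mPos.
size_t frame0 = (size_t)mPos;               // frame before the read position
size_t frame1 = frame0 + 1;                 // frame after it (must still be in range)
float frac = mPos - (float)frame0;          // 0..1, how far between the two frames
for (int j = 0; j < channelCount; ++j) {
    float s0 = data[frame0 * channelCount + j];
    float s1 = data[frame1 * channelCount + j];
    stream[(i * channelCount) + j] = (s0 + (s1 - s0) * frac) * mVol;
}
mPos += mPitch;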

Oboe stops rendering audio if there is too much data to be rendered

I'm trying to integrate the Oboe library into my application so I can do low-latency audio playback. I can already do panning, playback manipulation, sound scaling, etc. I've been asking a few questions about this topic because I'm completely new to the audio world.
Now I can do the basic things that Android's built-in audio classes such as SoundPool provide: I can play multiple sounds simultaneously without noticeable delays.
But now I've run into another problem. I made a very simple example application: there is a button on screen, and when the user taps it, it plays a simple piano sound. However fast the user taps the button, the app must be able to mix those same piano sounds, just like SoundPool does.
My code does this very well, until I tap the button too many times and there are too many audio queues to be mixed.
class OggPlayer;
class PlayerQueue {
private:
OggPlayer* player;
void renderStereo(float* audioData, int32_t numFrames);
void renderMono(float* audioData, int32_t numFrames);
public:
int offset = 0;
float pan;
float pitch;
int playScale;
bool queueEnded = false;
PlayerQueue(float pan, float pitch, int playScale, OggPlayer* player) {
this->pan = pan;
this->playScale = playScale;
this->player = player;
this->pitch = pitch;
if(this->pan < -1.0)
this->pan = -1.0;
else if(this->pan > 1.0)
this->pan = 1.0;
}
void renderAudio(float* audioData, int32_t numFrames, bool isStreamStereo);
};
class OggPlayer {
private:
std::vector<PlayerQueue> queues = std::vector<PlayerQueue>();
public:
int offset = 0;
bool isStereo;
float defaultPitch = 1.0;
OggPlayer(std::vector<float> data, bool isStereo, int fileSampleRate, int deviceSampleRate) {
this->data = data;
this->isStereo = isStereo;
defaultPitch = (float) (fileSampleRate) / (float) (deviceSampleRate);
}
void renderAudio(float* audioData, int32_t numFrames, bool reset, bool isStreamStereo);
static void smoothAudio(float* audioData, int32_t numFrames, bool isStreamStereo);
void addQueue(float pan, float pitch, int playerScale) {
queues.push_back(PlayerQueue(pan, defaultPitch * pitch, playerScale, this));
};
static void resetAudioData(float* audioData, int32_t numFrames, bool isStreamStereo);
std::vector<float> data;
};
OggPlayer holds decoded PCM data, with a defaultPitch value to match the audio file's sample rate to the speaker's sample rate. Each OggPlayer holds its own PCM data (that is, one audio file's data) and its own vector of PlayerQueue. PlayerQueue is the class which actually renders audio data; OggPlayer is the PCM data provider for its PlayerQueue instances. Each PlayerQueue has its own custom pitch, pan, and audio scale value. Since the AudioStream can only provide a limited-size array in its callback, I added offset so a PlayerQueue can continue rendering audio in the next callback without losing its position.
void OggPlayer::renderAudio(float *audioData, int32_t numFrames, bool reset, bool isStreamStereo) {
if(reset) {
resetAudioData(audioData, numFrames, isStreamStereo);
}
for(auto & queue : queues) {
if(!queue.queueEnded) {
queue.renderAudio(audioData, numFrames, isStreamStereo);
}
}
smoothAudio(audioData, numFrames, isStreamStereo);
queues.erase(std::remove_if(queues.begin(), queues.end(),
[](const PlayerQueue& p) {return p.queueEnded;}), queues.end());
}
This is how I render audio data currently: I go through each OggPlayer's PlayerQueue vector and make each queue render audio data by passing it the array pointer, provided it hasn't reached the end of the PCM data yet. After rendering I smooth the audio data to prevent clipping and other artifacts. Finally, I remove queues from the vector once they have completely finished rendering.
void PlayerQueue::renderAudio(float * audioData, int32_t numFrames, bool isStreamStereo) {
if(isStreamStereo) {
renderStereo(audioData, numFrames);
} else {
renderMono(audioData, numFrames);
}
}
void PlayerQueue::renderStereo(float *audioData, int32_t numFrames) {
for(int i = 0; i < numFrames; i++) {
if(player->isStereo) {
if((int) ((float) (offset + i) * pitch) * 2 + 1 < player->data.size()) {
float left = player->data.at((int)((float) (offset + i) * pitch) * 2);
float right = player->data.at((int)((float) (offset + i) * pitch) * 2 + 1);
if(pan < 0) {
audioData[i * 2] += (left + right * (float) sin(abs(pan) * M_PI / 2.0)) * (float) playScale;
audioData[i * 2 + 1] += right * (float) cos(abs(pan) * M_PI / 2.0) * (float) playScale;
} else {
audioData[i * 2] += left * (float) cos(pan * M_PI / 2.0) * (float) playScale;
audioData[i * 2 + 1] += (right + left * (float) sin(pan * M_PI / 2.0)) * (float) playScale;
}
} else {
break;
}
} else {
if((int) ((float) (offset + i) * pitch) < player->data.size()) {
float sample = player->data.at((int) ((float) (offset + i) * pitch));
if(pan < 0) {
audioData[i * 2] += sample * (1 + (float) sin(abs(pan) * M_PI / 2.0)) * (float) playScale;
audioData[i * 2 + 1] += sample * (float) cos(abs(pan) * M_PI / 2.0) * (float) playScale;
} else {
audioData[i * 2] += sample * (float) cos(pan * M_PI / 2.0) * (float) playScale;
audioData[i * 2 + 1] += sample * (1 + (float) sin(pan * M_PI / 2.0)) * (float) playScale;
}
} else {
break;
}
}
}
offset += numFrames;
if((float) offset * pitch >= player->data.size()) {
offset = 0;
queueEnded = true;
}
}
void PlayerQueue::renderMono(float *audioData, int32_t numFrames) {
for(int i = 0; i < numFrames; i++) {
if(player->isStereo) {
if((int) ((float) (offset + i) * pitch) * 2 + 1 < player->data.size()) {
audioData[i] += (player->data.at((int) ((float) (offset + i) * pitch) * 2) + player->data.at((int) ((float) (offset + i) * pitch) * 2 + 1)) / 2 * (float) playScale;
} else {
break;
}
} else {
if((int) ((float) (offset + i) * pitch) < player->data.size()) {
audioData[i] += player->data.at((int) ((float) (offset + i) * pitch)) * (float) playScale;
} else {
break;
}
}
if(audioData[i] > 1.0)
audioData[i] = 1.0;
else if(audioData[i] < -1.0)
audioData[i] = -1.0;
}
offset += numFrames;
if((float) offset * pitch >= player->data.size()) {
queueEnded = true;
offset = 0;
}
}
I render everything the queue has (panning, playback, scaling) in one pass, taking into account whether the stream and the audio file are mono or stereo.
using namespace oboe;
class OggPianoEngine : public AudioStreamCallback {
public:
void initialize();
void start(bool isStereo);
void closeStream();
void reopenStream();
void release();
bool isStreamOpened = false;
bool isStreamStereo;
int deviceSampleRate = 0;
DataCallbackResult
onAudioReady(AudioStream *audioStream, void *audioData, int32_t numFrames) override;
void onErrorAfterClose(AudioStream *audioStream, Result result) override ;
AudioStream* stream;
std::vector<OggPlayer>* players;
int addPlayer(std::vector<float> data, bool isStereo, int sampleRate) const;
void addQueue(int id, float pan, float pitch, int playerScale) const;
};
Finally, in OggPianoEngine I put a vector of OggPlayer, so my app can hold multiple sounds in memory; users can add sounds and play them anywhere, anytime.
DataCallbackResult
OggPianoEngine::onAudioReady(AudioStream *audioStream, void *audioData, int32_t numFrames) {
for(int i = 0; i < players->size(); i++) {
players->at(i).renderAudio(static_cast<float*>(audioData), numFrames, i == 0, audioStream->getChannelCount() != 1);
}
return DataCallbackResult::Continue;
}
Rendering audio in the engine is quite simple: as you may expect, I just go through the vector of OggPlayer and call each renderAudio method. The code below shows how I initialize the AudioStream.
void OggPianoEngine::start(bool isStereo) {
AudioStreamBuilder builder;
builder.setFormat(AudioFormat::Float);
builder.setDirection(Direction::Output);
builder.setChannelCount(isStereo ? ChannelCount::Stereo : ChannelCount::Mono);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
builder.setCallback(this);
builder.openStream(&stream);
stream->setBufferSizeInFrames(stream->getFramesPerBurst() * 2);
stream->requestStart();
deviceSampleRate = stream->getSampleRate();
isStreamOpened = true;
isStreamStereo = isStereo;
}
I watched basic video guides about Oboe like this or this, so I tried to configure the basic settings for LowLatency mode (for example, setting the buffer size to twice the burst size). But audio stops rendering when there are too many queues. At first the sound starts to stutter, as if some render callbacks were being skipped, and then it stops rendering completely if I keep tapping the play button. It starts rendering again after I wait for a while (5~10 seconds, enough time for the queues to be emptied). So I have several questions:
Does Oboe stop rendering audio if it takes too much time to render, as in the situation above?
Have I reached the limit of audio rendering, meaning that limiting the number of queues is the only solution? Or are there ways to get better performance?
This code is part of my Flutter plugin, so you can get the full code from this github link
Does Oboe stop rendering audio if it takes too much time to render, as in the situation above?
Yes. If you block onAudioReady for longer than the time represented by numFrames you will get an audio glitch. I bet if you ran systrace.py --time=5 -o trace.html -a your.app.packagename audio sched freq you'd see that you're spending too much time inside that method.
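For a sense of the time budget involved, a rough worked example (assuming 192 frames per callback on a 48 kHz stream; both numbers are assumptions, not values from your code):
budget per callback = numFrames / sampleRate = 192 / 48000 s ≈ 4 ms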
Have I reached the limit of audio rendering, meaning that limiting the number of queues is the only solution? Or are there ways to get better performance?
Looks like it. The problem is you're trying to do too much work inside the audio callback. Things I'd try immediately:
Different compiler optimisation: try -O2, -O3 and -Ofast
Profile the code inside the callback - identify where you're spending most time.
It looks like you're making a lot of calls to sin and cos. There may be faster versions of these functions.
I spoke about some of these debugging/optimisation techniques in this talk
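One cheap example of that last point: pan never changes for the lifetime of a PlayerQueue, so the two trigonometric pan factors could be computed once when the queue is created instead of twice per frame. This is only a sketch; sinPan and cosPan are hypothetical new members of PlayerQueue:
// In the PlayerQueue constructor, after pan has been clamped to [-1, 1]:
sinPan = (float) std::sin(std::abs(pan) * M_PI / 2.0);
cosPan = (float) std::cos(std::abs(pan) * M_PI / 2.0);
// renderStereo()/renderMono() can then use sinPan/cosPan wherever they currently
// call sin(abs(pan) * M_PI / 2.0) and cos(abs(pan) * M_PI / 2.0), removing all
// trigonometric calls from the per-frame loops.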
One other quick tip. Try to avoid raw pointers unless you really have no choice. For example AudioStream* stream; would be better as std::shared_ptr<AudioStream> and std::vector<OggPlayer>* players; can be refactored to std::vector<OggPlayer> players;

opencv median calculation in java

There is a comment on a Stack Overflow post here that answers the question of how to achieve super fast median calculation in OpenCV. The question is here:
super fast median of matrix in opencv (as fast as matlab)
The problem is that the code is in C/C++ and I can't find a way to convert it to OpenCV for Java or C#, since I'm trying to port this to OpenCV for Xamarin.Forms.
Does anyone know how to convert this?
double medianMat(cv::Mat Input, int nVals) {
    // COMPUTE HISTOGRAM OF SINGLE CHANNEL MATRIX
    float range[] = { 0, nVals };
    const float *histRange = { range };
    bool uniform = true;
    bool accumulate = false;
    cv::Mat hist;
    calcHist(&Input, 1, 0, cv::Mat(), hist, 1, &nVals, &histRange, uniform, accumulate);
    // COMPUTE CUMULATIVE DISTRIBUTION FUNCTION (CDF)
    cv::Mat cdf;
    hist.copyTo(cdf);
    for (int i = 1; i <= nVals - 1; i++) {
        cdf.at<float>(i) += cdf.at<float>(i - 1);
    }
    cdf /= Input.total();
    // COMPUTE MEDIAN
    double medianVal;
    for (int i = 0; i <= nVals - 1; i++) {
        if (cdf.at<float>(i) >= 0.5) {
            medianVal = i;
            break;
        }
    }
    return medianVal / nVals;
}
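The only part of that snippet that is awkward to port is the calcHist call; the same histogram can be built with a plain loop, which then maps one-to-one onto the Java and C# bindings. A sketch in C++ for an 8-bit single-channel Mat with nVals = 256 (untested; note it returns the median value itself, whereas the snippet above returns medianVal / nVals):
#include <vector>
#include <opencv2/core/core.hpp>

double medianMat8U(const cv::Mat &input, int nVals) {
    // Build the histogram manually instead of calling calcHist().
    std::vector<long long> hist(nVals, 0);
    for (int r = 0; r < input.rows; ++r) {
        const uchar *row = input.ptr<uchar>(r);
        for (int c = 0; c < input.cols; ++c) {
            hist[row[c]]++;
        }
    }
    // Walk the cumulative distribution until half of the pixels are covered.
    const double half = input.total() / 2.0;
    long long cumulative = 0;
    for (int i = 0; i < nVals; ++i) {
        cumulative += hist[i];
        if (cumulative >= half) {
            return i;   // divide by nVals here if you want the normalised value
        }
    }
    return nVals - 1;
}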
UPDATE:
As an example, this is the method I tried in C# to calculate the median of a greyscale Mat. It fails; can you see what I did wrong?
private static int Median2(Mat Input)
{
// COMPUTE HISTOGRAM OF SINGLE CHANNEL MATRIX
MatOfInt mHistSize = new MatOfInt(256);
MatOfFloat mRanges = new MatOfFloat(0f, 256f);
MatOfInt channel = new MatOfInt(0); // only 1 channel for grey
Mat temp = new Mat();
Mat hist = new Mat();
Imgproc.CalcHist(Java.Util.Arrays.AsList(Input).Cast<Mat>().ToList(), channel, temp, hist, mHistSize, mRanges);
float[] cdf = new float[(int)hist.Total()];
hist.Get(0, 0, cdf);
for (int i = 1; i < 256; i++)
{
cdf[i] += cdf[i - 1];
cdf[i - 1] = cdf[i - 1] / Input.Total();
}
// COMPUTE CUMULATIVE DISTRIBUTION FUNCTION (CDF)
float total = (float)Input.Total();
// COMPUTE MEDIAN
double medianVal=0;
for (int i = 0; i < 256; i++)
{
if (cdf[i] >= 0.5) {
medianVal = i;
break;
}
}
return (int)(medianVal / 256);
}

Fetching images of AIMAGE_FORMAT_JPEG fails

When fetching images:
assert(AIMAGE_FORMAT_JPEG == src_format, "Failed to get format");
AImage_getHeight(image, &height);
AImage_getWidth(image, &width);
AImage_getPlaneData(image, 0, &pixel, &y_len);
AImage_getPlaneRowStride(image, 0, &stride);
AImage_getPlanePixelStride(image, 0, &pixel_stride);
Note that the last two calls return AMEDIA_ERROR_UNSUPPORTED for that format (see the docs).
Now writing on the buffer:
uint8_t *out = buf.data;
for (int y = 0; y < height; y++) {
const uint8_t *pY = pixel + width*3*y;
for (int x = 0; x < width; x++) {
out[x*3 + 0] = pY[x*3 + 0];
out[x*3 + 1] = pY[x*3 + 1];
out[x*3 + 2] = pY[x*3 + 2];
}
out += width*3;
}
The app crashes very quickly after startup. Logcat does not output anything meaningful to me, at least nothing readable. Any thoughts on how to approach this issue?
getPlaneData() for JPEG gives you the undecoded buffer. You can see that the returned size is much smaller than width*height*3.
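A sketch of what that implies (an assumption about how you might handle it, not your original code): treat the single JPEG plane as an encoded byte stream and run it through a JPEG decoder before copying pixels. Here stb_image is used purely as an example decoder; libjpeg-turbo or a Java-side BitmapFactory would work just as well.
#include <cstring>
#include <media/NdkImage.h>
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

// Decode an AIMAGE_FORMAT_JPEG AImage into a caller-provided RGB buffer.
bool copyJpegImageToRgb(AImage *image, uint8_t *out, int outCapacity) {
    uint8_t *jpegBytes = nullptr;
    int jpegLen = 0;
    // For JPEG there is a single plane holding the still-encoded stream.
    if (AImage_getPlaneData(image, 0, &jpegBytes, &jpegLen) != AMEDIA_OK) return false;

    int w = 0, h = 0, channels = 0;
    unsigned char *rgb = stbi_load_from_memory(jpegBytes, jpegLen, &w, &h, &channels, 3);
    if (rgb == nullptr) return false;

    const int size = w * h * 3;
    const bool fits = size <= outCapacity;
    if (fits) std::memcpy(out, rgb, size);
    stbi_image_free(rgb);
    return fits;
}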

Implementation of FFT array

I have a problem implementing an FFT algorithm in Android.
Let's say that I have a WAV file 8,000 bytes in length.
I am aware that I have to select a size for the FFT (which also has to be a power of 2). My problem is that I am not really sure how to proceed from here.
Let's say that I have chosen an FFT size of N = 1024.
I basically have two options in mind:
1) Apply the FFT algorithm directly to the whole array of 8,000 bytes.
2) Divide the 8,000-byte array into chunks of 1,024 bytes (padding the last chunk with 0s until there are 8 exact chunks), then apply the FFT to each of these chunks and finally join the chunks together again into one single array to represent.
8000*2*1 sec = 8192
I think it's option 2, but I am not completely sure.
Here is the FFT class that I am using:
package com.example.acoustics;
public class FFT {
int n, m;
// Lookup tables. Only need to recompute when size of FFT changes.
double[] cos;
double[] sin;
public FFT(int n) {
this.n = n;
this.m = (int) (Math.log(n) / Math.log(2));
// Make sure n is a power of 2
if (n != (1 << m))
throw new RuntimeException("FFT length must be power of 2");
// precompute tables
cos = new double[n / 2];
sin = new double[n / 2];
for (int i = 0; i < n / 2; i++) {
cos[i] = Math.cos(-2 * Math.PI * i / n);
sin[i] = Math.sin(-2 * Math.PI * i / n);
}
}
/***************************************************************
* fft.c
* Douglas L. Jones
* University of Illinois at Urbana-Champaign
* January 19, 1992
* http://cnx.rice.edu/content/m12016/latest/
*
* fft: in-place radix-2 DIT DFT of a complex input
*
* input:
* n: length of FFT: must be a power of two
* m: n = 2**m
* input/output
* x: double array of length n with real part of data
* y: double array of length n with imag part of data
*
* Permission to copy and use this program is granted
* as long as this header is included.
****************************************************************/
public void fft(double[] x, double[] y) {
int i, j, k, n1, n2, a;
double c, s, t1, t2;
// Bit-reverse
j = 0;
n2 = n / 2;
for (i = 1; i < n - 1; i++) {
n1 = n2;
while (j >= n1) {
j = j - n1;
n1 = n1 / 2;
}
j = j + n1;
if (i < j) {
t1 = x[i];
x[i] = x[j];
x[j] = t1;
t1 = y[i];
y[i] = y[j];
y[j] = t1;
}
}
// FFT
n1 = 0;
n2 = 1;
for (i = 0; i < m; i++) {
n1 = n2;
n2 = n2 + n2;
a = 0;
for (j = 0; j < n1; j++) {
c = cos[a];
s = sin[a];
a += 1 << (m - i - 1);
for (k = j; k < n; k = k + n2) {
t1 = c * x[k + n1] - s * y[k + n1];
t2 = s * x[k + n1] + c * y[k + n1];
x[k + n1] = x[k] - t1;
y[k + n1] = y[k] - t2;
x[k] = x[k] + t1;
y[k] = y[k] + t2;
}
}
}
}
}
I think you can apply the FFT to the entire array. There is no problem with that: you can use 2^13 = 8192 and pad the array with zeros. This is called zero padding and is used in more than one implementation of the FFT. If your procedure works well, there is no problem with running the entire array. But if you compute the FFT over sections of size 1024, you get a segmented Fourier transform that does not describe the entire spectrum of the signal, because the FFT uses every position of the input array to compute each value of the transformed array; for example, the value at position one will not be correct if you don't use the entire signal.
This is my analysis of your question. I am not a hundred percent sure, but my knowledge of Fourier analysis tells me that this is roughly what will happen if you compute a segmented form of the Fourier transform instead of transforming the entire series.
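To make the zero padding concrete, here is a minimal sketch (in C++ for illustration; the same few lines translate directly to Java before calling the posted FFT class): the real samples are copied into an array whose length is the next power of two, and everything after them stays zero.
#include <algorithm>
#include <cstddef>
#include <vector>

// Zero-pad a real signal up to the next power of two, as required by a radix-2 FFT.
std::vector<double> zeroPadToPowerOfTwo(const std::vector<double> &signal) {
    std::size_t n = 1;
    while (n < signal.size()) {
        n <<= 1;                         // e.g. 8000 samples -> n = 8192 (2^13)
    }
    std::vector<double> padded(n, 0.0);  // trailing entries stay 0
    std::copy(signal.begin(), signal.end(), padded.begin());
    return padded;
}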

How can I draw 2D water in Android using OpenGL ES 2.0?

I am developing an Android application in Java. I want to draw dynamic images like those in the attached files (screenshots of a very old DOS program). I think it is water waves.
Can anyone explain how I can do this? I have no idea how these pictures were drawn.
Thanks!
p.s. Maybe it is a traveling wave in a compressible fluid?
EDITED: screen recording with the required animation: http://www.youtube.com/watch?v=_zeSQX_8grY
EDITED2: I have found the sources of this video effect here. There is a compiled program for DOS (it can be run in DOSBox) and sources in ASM. The folder "PART3" contains the sources of the required video effect (file WPLASMA.ASM). Unfortunately I don't know Turbo Assembler. Can somebody help me understand how this program draws this video effect? I published the content of WPLASMA.ASM here.
EDITED3: I have ported most parts of the code to C. But I don't know how VGA mode works, and I have difficulties with the PutBmp function.
#include <cmath>
#include <ctime>
#include <cstring>
#include <cstdlib>
#include <cstdint>   // for int8_t
#include <cassert>
#include <opencv2/highgui/highgui.hpp>
struct RGB {
char red, green, blue;
};
#define MAXH 60 // horiz wave length.
#define MAXVW 64 // vert wave length.
#define MAXHW 32 // max horiz wave amount.
#define MAXV (80 + MAXHW) // vert wave length.
static void UpdHWaves( char* HWave1, char* HWave2,
int& HWavPos1, int& HWavPos2,
int HWavInc1 ) // Updates the Horiz Waves.
{
for( int i = 0; i < MAXH - 1; ++i ) {
HWave1[ i ] = HWave1[ i + 1 ];
}
int8_t val = 127 * std::sin( HWavPos1 * M_PI / 180.0 );
HWave1[ MAXH - 1 ] = val >> 1;
HWavPos1 += HWavInc1;
if( HWavPos1 >= 360 ) {
HWavPos1 -= 360;
}
for( int i = 0; i < MAXH; ++i ) {
val = 127 * std::sin( ( HWavPos2 + i * 4 ) * M_PI / 180.0 );
val = ( val >> 1 ) + HWave1[ i ];
HWave2[ i ] = ( val >> 3 ) + 16;
}
HWavPos2 += 4;
if( HWavPos2 >= 360 ) {
HWavPos2 -= 360;
}
}
static void UpdVWaves( char *VWave1, char* VWave2,
int& VWavPos1, int& VWavPos2,
int VWavInc1 )
{
for( int i = 0; i < MAXV - 1; ++i ) {
VWave1[ i ] = VWave1[ i + 1 ];
}
int8_t val = 127 * std::sin( VWavPos1 * M_PI / 180.0 );
VWave1[ MAXV - 1 ] = val >> 1;
VWavPos1 += VWavInc1;
if( VWavPos1 >= 360 ) {
VWavPos1 -= 360;
}
for( int i = 0; i < MAXV; ++i ) {
val = 127 * std::sin( ( VWavPos2 + i * 3 ) * M_PI / 180.0 );
val = ( val >> 1 ) + VWave1[ i ];
VWave2[ i ] = ( val >> 2 ) + 32;
}
++VWavPos2;
if( VWavPos2 >= 360 ) {
VWavPos2 -= 360;
}
}
static void UpdBmp( char *Bitmap, const char *VWave2 ) // Updates the Plasma bitmap.
{
for( int k = 0; k < MAXV; ++k ) {
char al = VWave2[ k ];
int i = 0;
for( int l = 0; l < MAXH; ++l ) {
++al;
Bitmap[ i ] = al;
i += 256;
}
++Bitmap;
}
}
static void PutBmp( const RGB* palete,
const char* BitMap,
const char* HWave2 ) // Puts into the screen the Plasma bitmap.
{
RGB screen[320*200];
memset( screen, 0, sizeof( screen ) );
RGB *screenPtr = screen;
const char *dx = BitMap;
const char *si = HWave2;
for( int i = 0; i < MAXH; ++i ) {
char ax = *si;
++si;
const char *si2 = ax + dx;
for( int j = 0; j < 40; ++j ) {
assert( *si2 < MAXH + MAXVW );
*screenPtr = palete[ *si2 ];
++screenPtr;
++si2;
assert( *si2 < MAXH + MAXVW );
*screenPtr = palete[ *si2 ];
++screenPtr;
++si2;
}
dx += 256;
}
static cv::VideoWriter writer( "test.avi", CV_FOURCC('M','J','P','G'), 15, cv::Size( 320, 200 ) );
cv::Mat image( 200, 320, CV_8UC3 );
for( int i = 0; i < 200; ++i ) {
for( int j = 0; j < 320; ++j ) {
image.at<cv::Vec3b>(i, j )[0] = screen[ 320 * i + j ].blue;
image.at<cv::Vec3b>(i, j )[1] = screen[ 320 * i + j ].green;
image.at<cv::Vec3b>(i, j )[2] = screen[ 320 * i + j ].red;
}
}
writer.write( image );
}
int main( )
{
RGB palete[256];
// generation of the plasma palette.
palete[ 0 ].red = 0;
palete[ 0 ].green = 0;
palete[ 0 ].blue = 0;
RGB *ptr = palete + 1;
int ah = 0;
int bl = 2;
for( int i = 0; i < MAXH + MAXVW; ++i ) {
ptr->red = 32 - ( ah >> 1 );
ptr->green = 16 - ( ah >> 2 );
ptr->blue = 63 - ( ah >> 2 );
ah += bl;
if( ah >= 64 ) {
bl = - bl;
ah += 2 * bl;
}
ptr += 1;
}
//setup wave parameters.
int HWavPos1 = 0; // horiz waves pos.
int HWavPos2 = 0;
int VWavPos1 = 0; // vert waves pos.
int VWavPos2 = 0;
int HWavInc1 = 1; // horiz wave speed.
int VWavInc1 = 7; // vert wave speed.
char HWave1[ MAXH ]; // horiz waves.
char HWave2[ MAXH ];
char VWave1[ MAXV ]; // vert waves.
char VWave2[ MAXV ];
char Bitmap[ 256 * MAXH + MAXV ];
memset( Bitmap, 0, sizeof( Bitmap ) );
//use enough steps to update all the waves entries.
for( int i = 0; i < MAXV; ++i ) {
UpdHWaves( HWave1, HWave2, HWavPos1, HWavPos2, HWavInc1 );
UpdVWaves( VWave1, VWave2, VWavPos1, VWavPos2, VWavInc1 );
}
std::srand(std::time(0));
for( int i = 0; i < 200; ++i ) {
UpdHWaves( HWave1, HWave2, HWavPos1, HWavPos2, HWavInc1 );
UpdVWaves( VWave1, VWave2, VWavPos1, VWavPos2, VWavInc1 );
UpdBmp( Bitmap, VWave2 );
PutBmp( palete, Bitmap, HWave2 );
//change wave's speed.
HWavInc1 = ( std::rand( ) & 7 ) + 3;
VWavInc1 = ( std::rand( ) & 3 ) + 5;
}
return 0;
}
Can you name the DOS program? Or find a similar effect on YouTube?
At a guess, it's a "plasma" effect, which used to be very common on the demo scene. You can see one in the background of the menu of the PC version of Tempest 2000, including very briefly in this YouTube video. Does that look right?
If so then as with all demo effects, it's smoke and mirrors. To recreate one in OpenGL you'd need to produce a texture with a spherical sine pattern. So for each pixel, work out its distance from the centre. Get the sine of that distance multiplied by whatever number you think is aesthetically pleasing. Store that value to the texture. Make sure you're scaling to fill a full byte. You should end up with an image that looks like ripples on the surface of a pond.
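As a rough sketch of that texture-generation step (the size and frequency constants are arbitrary, chosen only for illustration):
#include <cmath>
#include <vector>

// Fill a size x size single-channel texture with a circular sine ("ripple") pattern,
// scaled so the values use the full 0-255 range.
std::vector<unsigned char> makeRippleTexture(int size, float frequency) {
    std::vector<unsigned char> texture(size * size);
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float dx = (x - size / 2.0f) / size;
            float dy = (y - size / 2.0f) / size;
            float distance = std::sqrt(dx * dx + dy * dy);  // distance from the centre
            float value = std::sin(distance * frequency);   // -1 .. 1
            texture[y * size + x] = (unsigned char)((value * 0.5f + 0.5f) * 255.0f);
        }
    }
    return texture;
}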
To produce the final output, you're going to additively composite at least three of those. If you're going to do three then multiply each by 1/3rd so that the values in your framebuffer end up in the range 0–255. You're going to move the three things independently to produce the animation, also by functions of sine — e.g. one might follow the path centre + (0.3 * sin(1.8 + time * 1.5), 0.8 * sin(0.2 + time * 9.2)), and the others will also obey functions of that form. Adjust the time multiplier, angle offset and axis multiplier as you see fit.
There's one more sine pattern to apply: if this were a DOS program, you'd further have set up your palette so that brightness comes and goes in a sine wave — e.g. colours 0–31 would be one complete cycle, 32–63 would be a repeat of the cycle, etc. You can't set a palette on modern devices and OpenGL ES doesn't do paletted textures so you're going to have to write a shader. On the plus side, the trigonometric functions are built into GLSL so it'll be a fairly simple one.
EDIT: I threw together a quick test project and wrote the following vertex shader:
attribute vec4 position;
attribute vec2 texCoord;
uniform mediump float time;
varying highp vec2 texCoordVarying1, texCoordVarying2, texCoordVarying3;
void main()
{
mediump float radiansTime = time * 3.141592654 * 2.0;
/*
So, coordinates here are of the form:
texCoord + vec2(something, variant of same thing)
Where something is:
<linear offset> + sin(<angular offset> + radiansTime * <multiplier>)
What we're looking to do is to act as though moving three separate sheets across
the surface. Each has its own texCoordVarying. Each moves according to a
sinusoidal pattern. Note that the multiplier is always a whole number so
that all patterns repeat properly as time goes from 0 to 1 and then back to 0,
hence radiansTime goes from 0 to 2pi and then back to 0.
The various constants aren't sourced from anything. Just play around with them.
*/
texCoordVarying1 = texCoord + vec2(0.0 + sin(0.0 + radiansTime * 1.0) * 0.2, 0.0 + sin(1.9 + radiansTime * 8.0) * 0.4);
texCoordVarying2 = texCoord - vec2(0.2 - sin(0.8 + radiansTime * 2.0) * 0.2, 0.6 - sin(1.3 + radiansTime * 3.0) * 0.8);
texCoordVarying3 = texCoord + vec2(0.4 + sin(0.7 + radiansTime * 5.0) * 0.2, 0.5 + sin(0.2 + radiansTime * 9.0) * 0.1);
gl_Position = position;
}
... and fragment shader:
varying highp vec2 texCoordVarying1, texCoordVarying2, texCoordVarying3;
void main()
{
/*
Each sheet is coloured individually to look like ripples on
the surface of a pond after a stone has been thrown in. So it's
a sine function on distance from the centre. We adjust the ripple
size with a quick multiplier.
Rule of thumb: bigger multiplier = smaller details on screen.
*/
mediump vec3 distances =
vec3(
sin(length(texCoordVarying1) * 18.0),
sin(length(texCoordVarying2) * 14.2),
sin(length(texCoordVarying3) * 11.9)
);
/*
We work out outputColour in the range 0.0 to 1.0 by adding them,
and using the sine of that.
*/
mediump float outputColour = 0.5 + sin(dot(distances, vec3(1.0, 1.0, 1.0)))*0.5;
/*
Finally the fragment colour is created by linearly interpolating
in the range of the selected start and end colours 48 36 208
*/
gl_FragColor =
mix( vec4(0.37, 0.5, 1.0, 1.0), vec4(0.17, 0.1, 0.8, 1.0), outputColour);
}
/*
Implementation notes:
it'd be smarter to adjust the two vectors passed to mix so as not
to have to scale the outputColour, leaving it in the range -1.0 to 1.0
but this way makes it clearer overall what's going on with the colours
*/
Putting that into a project that draws a quad to display the texCoord range [0, 1] in both dimensions (aspect ratio be damned) and setting time so that it runs from 0 to 1 once every minute gave me this:
YouTube link
It's not identical, clearly, but it's the same effect. You just need to tweak the various magic constants until you get something you're happy with.
EDIT 2: it's not going to help you all that much but I've put this GL ES code into a suitable iOS wrapper and uploaded to GitHub.
There is a sample program in the Android NDK called "bitmap-plasma" which produces very similar patterns to this. It is in C, but it could probably be converted into GLSL code.
