Understanding Android RingDroid WAV Calculations

I have been studying the RingDroid source, trying to figure out how to draw waveforms on an Android device. However, I got stuck on the section that reads the WAV file in CheapWAV.java.
public void ReadFile(File inputFile)
        throws java.io.FileNotFoundException,
               java.io.IOException {
    super.ReadFile(inputFile);
    mFileSize = (int) mInputFile.length();
    if (mFileSize < 128) {
        throw new java.io.IOException("File too small to parse");
    }
    FileInputStream stream = new FileInputStream(mInputFile);
    byte[] header = new byte[12];
    stream.read(header, 0, 12);
    mOffset += 12;
    if (header[0] != 'R' ||
        header[1] != 'I' ||
        header[2] != 'F' ||
        header[3] != 'F' ||
        header[8] != 'W' ||
        header[9] != 'A' ||
        header[10] != 'V' ||
        header[11] != 'E') {
        throw new java.io.IOException("Not a WAV file");
    }
    mChannels = 0;
    mSampleRate = 0;
    while (mOffset + 8 <= mFileSize) {
        byte[] chunkHeader = new byte[8];
        stream.read(chunkHeader, 0, 8);
        mOffset += 8;
        int chunkLen =
            ((0xff & chunkHeader[7]) << 24) |
            ((0xff & chunkHeader[6]) << 16) |
            ((0xff & chunkHeader[5]) << 8) |
            ((0xff & chunkHeader[4]));
        if (chunkHeader[0] == 'f' &&
            chunkHeader[1] == 'm' &&
            chunkHeader[2] == 't' &&
            chunkHeader[3] == ' ') {
            if (chunkLen < 16 || chunkLen > 1024) {
                throw new java.io.IOException(
                    "WAV file has bad fmt chunk");
            }
            byte[] fmt = new byte[chunkLen];
            stream.read(fmt, 0, chunkLen);
            mOffset += chunkLen;
            int format =
                ((0xff & fmt[1]) << 8) |
                ((0xff & fmt[0]));
            mChannels =
                ((0xff & fmt[3]) << 8) |
                ((0xff & fmt[2]));
            mSampleRate =
                ((0xff & fmt[7]) << 24) |
                ((0xff & fmt[6]) << 16) |
                ((0xff & fmt[5]) << 8) |
                ((0xff & fmt[4]));
            if (format != 1) {
                throw new java.io.IOException(
                    "Unsupported WAV file encoding");
            }
        } else if (chunkHeader[0] == 'd' &&
                   chunkHeader[1] == 'a' &&
                   chunkHeader[2] == 't' &&
                   chunkHeader[3] == 'a') {
            if (mChannels == 0 || mSampleRate == 0) {
                throw new java.io.IOException(
                    "Bad WAV file: data chunk before fmt chunk");
            }
            int frameSamples = (mSampleRate * mChannels) / 50;
            mFrameBytes = frameSamples * 2;
            mNumFrames = (chunkLen + (mFrameBytes - 1)) / mFrameBytes;
            mFrameOffsets = new int[mNumFrames];
            mFrameLens = new int[mNumFrames];
            mFrameGains = new int[mNumFrames];
            byte[] oneFrame = new byte[mFrameBytes];
            int i = 0;
            int frameIndex = 0;
            while (i < chunkLen) {
                int oneFrameBytes = mFrameBytes;
                if (i + oneFrameBytes > chunkLen) {
                    i = chunkLen - oneFrameBytes;
                }
                stream.read(oneFrame, 0, oneFrameBytes);
                int maxGain = 0;
                for (int j = 1; j < oneFrameBytes; j += 4 * mChannels) {
                    int val = java.lang.Math.abs(oneFrame[j]);
                    if (val > maxGain) {
                        maxGain = val;
                    }
                }
                mFrameOffsets[frameIndex] = mOffset;
                mFrameLens[frameIndex] = oneFrameBytes;
                mFrameGains[frameIndex] = maxGain;
                frameIndex++;
                mOffset += oneFrameBytes;
                i += oneFrameBytes;
                if (mProgressListener != null) {
                    boolean keepGoing = mProgressListener.reportProgress(
                        i * 1.0 / chunkLen);
                    if (!keepGoing) {
                        break;
                    }
                }
            }
        } else {
            stream.skip(chunkLen);
            mOffset += chunkLen;
        }
    }
}
Everything seems straightforward until I reach:
int frameSamples = (mSampleRate * mChannels) / 50;
mFrameBytes = frameSamples * 2;
mNumFrames = (chunkLen + (mFrameBytes - 1)) / mFrameBytes;
Q1. Where did the magic number 50 come from? Is it just assuming a frame duration of 1/50 of a second?
Q2. Why is mFrameBytes = frameSamples * 2? Is it assuming each sample is 2 bytes? But why?
for (int j = 1; j < oneFrameBytes; j += 4 * mChannels) {
    int val = java.lang.Math.abs(oneFrame[j]);
    if (val > maxGain) {
        maxGain = val;
    }
}
Q3. Why is j incrementing by 4 * mChannels? How was 4 justified?
Q4. What does frameGains actually mean? I've gone through articles/blogs such as
https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
http://blogs.msdn.com/b/dawate/archive/2009/06/23/intro-to-audio-programming-part-2-demystifying-the-wav-format.aspx
http://www.speakingcode.com/2011/12/31/primer-on-digital-audio-and-pulse-code-modulation-pcm/
But I don't see the term mentioned anywhere.
Hope someone can shed some light on this. Thank you.

Q1. Where did the magic number 50 come from? Is it just assuming a frame duration of 1/50 of a second?
A1. That calculates 1/50th of a second's worth of samples as one frame, so the app has to process 50 frame buffers of audio data per second; each frame covers 20 ms.
Q2. Why is mFrameBytes = frameSamples * 2? Is it assuming each sample is 2 bytes? But why?
A2. I'm guessing this is because he is assuming 16-bit samples, i.e. 2 bytes per sample.
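To make the arithmetic concrete, here is a quick worked example (the 44100 Hz stereo input is an assumption for illustration, not something the question specifies):
// Assumed example input: 44100 Hz, 2 channels, 16-bit PCM.
int mSampleRate = 44100;
int mChannels = 2;

int frameSamples = (mSampleRate * mChannels) / 50; // 1764 samples per 1/50 s
int mFrameBytes = frameSamples * 2;                // 3528 bytes at 2 bytes/sample

// A 10-second data chunk: 44100 samples * 2 channels * 2 bytes * 10 s = 1,764,000 bytes.
int chunkLen = 1764000;
int mNumFrames = (chunkLen + (mFrameBytes - 1)) / mFrameBytes; // 500 frames = 50/s * 10 s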
Q3. Why is j incrementing by 4 * mChannels? How was 4 justified?
A3. I think the key here is to note that it starts from offset 1, which means he is only sampling the high-order byte of each little-endian 16-bit sample. The 4 is probably just an optimisation so he's only processing half the buffer (remember, he's assuming 2 bytes per sample).
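To illustrate the indexing, here is the byte layout of interleaved 16-bit little-endian stereo PCM and the offsets the loop actually visits (my annotation, assuming mChannels = 2, so the step is 4 * mChannels = 8):
// byte:   0      1      2      3      4      5      6      7      8      ...
// data: [L0.lo][L0.hi][R0.lo][R0.hi][L1.lo][L1.hi][R1.lo][R1.hi][L2.lo] ...
//
// j = 1  -> high byte of left sample 0
// j = 9  -> high byte of left sample 2
// j = 17 -> high byte of left sample 4
//
// So the loop reads only the left channel's high-order bytes, and only
// every other sample frame: a cheap approximation of the frame's peak.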
Q4. What does frameGains actually mean?
A4. Well, it's exactly what it says: the gain (peak amplitude) of that frame (1/50th of a second). See http://en.m.wikipedia.org/wiki/Gain or Google for: Audio Gain.
This should also help: https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
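For a concrete picture of what a frame gain is, here is a sketch that computes a frame's peak from the full little-endian 16-bit samples rather than RingDroid's high-byte shortcut. This is my illustration, not RingDroid code:
// Peak amplitude of one frame of 16-bit little-endian PCM (all channels).
int maxGain = 0;
for (int j = 0; j + 1 < oneFrameBytes; j += 2) {
    // Assemble the little-endian sample: low byte, then sign-carrying high byte.
    short sample = (short) ((oneFrame[j] & 0xff) | (oneFrame[j + 1] << 8));
    int val = Math.abs((int) sample);
    if (val > maxGain) {
        maxGain = val;
    }
}
// RingDroid's version trades accuracy for speed: the high byte alone gives
// enough resolution to draw a small waveform.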

Related

Android - AudioRecord: How to separate left and right channels in stereo recording mode

I am using the AudioRecord class of the Android SDK to record raw audio and then I am encoding it with the LAME MP3 encoder. For this I have to separate the left and right channels in stereo recording mode, like this:
bytesEncoded = Lame.encode(left, isMono ? left : right, samplesRead, mp3Buf, mp3Buf.length);
How can I separate the left and right channels from a buffer into their own buffers?
I have tried using this (link) but it gives sound with gaps.
The following method is working, but it is adding noise in the background:
private int readStereo(short[] left, short[] right, int numSamples) throws IOException {
    byte[] buf = new byte[numSamples * 4];
    int index = 0;
    if (audioRecorder == null)
        return -1;
    int bytesRead = audioRecorder.read(buf, 0, numSamples * 4);
    for (int i = 0; i < bytesRead; i += 2) {
        short val = byteToShortLE(buf[0], buf[i+1]);
        if (i % 4 == 0) {
            left[index] = val;
        } else {
            right[index] = val;
            index++;
        }
    }
    return index;
}
private static short byteToShortLE(byte b1, byte b2) {
    return (short) (b1 & 0xFF | ((b2 & 0xFF) << 8));
}

MediaMuxer unable to make MP4s that are streamable

I'm editing an MP4 on Android, using MediaExtractor to fetch the audio and video tracks and then creating a new file using MediaMuxer. It works fine. I can play the new MP4 on the phone (and in other players) but am unable to stream the file on the web. When I stop the MediaMuxer it generates the log message:
"The mp4 file will not be streamable."
I looked at the underlying native code (MPEG4Writer.cpp) and it would appear that the writer is having trouble calculating the needed moov box size. It tries to guess using a heuristic if a bit rate is not supplied as a parameter to the writer. The problem is that MediaMuxer doesn't provide the ability to set MPEG4Writer's parameters. Am I missing something, or am I stuck looking at some other means of generating the file (or header)? Thanks.
In MPEG4Writer.cpp:
// The default MIN_MOOV_BOX_SIZE is set to 0.6% x 1MB / 2,
// where 1MB is the common file size limit for MMS application.
// The default MAX_MOOV_BOX_SIZE value is based on about 3
// minute video recording with a bit rate about 3 Mbps, because
// statistics also show that most of the video captured are going
// to be less than 3 minutes.
This is a bad assumption about how MediaMuxer might be used. We are recording a max of 15 seconds of higher-res video, and MIN_MOOV_BOX_SIZE is way too small. So to make the file streamable, I have to rewrite the file to move the moov header before mdat and patch up some offsets. Here is my code. It's not great: error paths aren't handled correctly and it makes assumptions about the order of the boxes.
public void fastPlay(String srcFile, String dstFile) {
    RandomAccessFile inFile = null;
    FileOutputStream outFile = null;
    try {
        inFile = new RandomAccessFile(new File(srcFile), "r");
        outFile = new FileOutputStream(new File(dstFile));
        int moovPos = 0;
        int mdatPos = 0;
        int moovSize = 0;
        int mdatSize = 0;
        byte[] boxSizeBuf = new byte[4];
        byte[] pathBuf = new byte[4];
        int boxSize;
        int dataSize;
        int bytesRead;
        int totalBytesRead = 0;
        int bytesWritten = 0;
        // First find the location and size of the moov and mdat boxes
        while (true) {
            try {
                boxSize = inFile.readInt();
                bytesRead = inFile.read(pathBuf);
                if (bytesRead != 4) {
                    Log.e(TAG, "Unexpected bytes read (path) " + bytesRead);
                    break;
                }
                String pathRead = new String(pathBuf, "UTF-8");
                dataSize = boxSize - 8;
                totalBytesRead += 8;
                if (pathRead.equals("moov")) {
                    moovPos = totalBytesRead - 8;
                    moovSize = boxSize;
                } else if (pathRead.equals("mdat")) {
                    mdatPos = totalBytesRead - 8;
                    mdatSize = boxSize;
                }
                totalBytesRead += inFile.skipBytes(dataSize);
            } catch (IOException e) {
                break;
            }
        }
        // Read the moov box into a buffer. This has to be patched up. Ug.
        inFile.seek(moovPos);
        byte[] moovBoxBuf = new byte[moovSize]; // This shouldn't be too big.
        bytesRead = inFile.read(moovBoxBuf);
        if (bytesRead != moovSize) {
            Log.e(TAG, "Couldn't read full moov box");
        }
        // Now locate the stco boxes (chunk offset box) inside the moov box and patch
        // them up. This ain't purdy.
        int pos = 0;
        while (pos < moovBoxBuf.length - 4) {
            if (moovBoxBuf[pos] == 0x73 && moovBoxBuf[pos + 1] == 0x74 &&
                moovBoxBuf[pos + 2] == 0x63 && moovBoxBuf[pos + 3] == 0x6f) {
                int stcoPos = pos - 4;
                int stcoSize = byteArrayToInt(moovBoxBuf, stcoPos);
                patchStco(moovBoxBuf, stcoSize, stcoPos, moovSize);
            }
            pos++;
        }
        inFile.seek(0);
        byte[] buf = new byte[(int) mdatPos];
        // Write out everything before mdat
        inFile.read(buf);
        outFile.write(buf);
        // Write moov
        outFile.write(moovBoxBuf, 0, moovSize);
        // Write out mdat
        inFile.seek(mdatPos);
        bytesWritten = 0;
        while (bytesWritten < mdatSize) {
            int bytesRemaining = (int) mdatSize - bytesWritten;
            int bytesToRead = buf.length;
            if (bytesRemaining < bytesToRead) bytesToRead = bytesRemaining;
            bytesRead = inFile.read(buf, 0, bytesToRead);
            if (bytesRead > 0) {
                outFile.write(buf, 0, bytesRead);
                bytesWritten += bytesRead;
            } else {
                break;
            }
        }
    } catch (IOException e) {
        Log.e(TAG, e.getMessage());
    } finally {
        try {
            if (outFile != null) outFile.close();
            if (inFile != null) inFile.close();
        } catch (IOException e) {}
    }
}
private void patchStco(byte[] buf, int size, int pos, int moovSize) {
    Log.e(TAG, "stco " + pos + " size " + size);
    // We are inserting the moov box before the mdat box, so all of the
    // offsets in the stco box need to be increased by the size of the moov box. The stco
    // box is variable in length: 4 byte size, 4 byte path, 4 byte version, 4 byte flags,
    // followed by a variable number of chunk offsets. So subtract 16 from size, then
    // divide the result by 4 to get the number of chunk offsets to patch up.
    int chunkOffsetCount = (size - 16) / 4;
    int chunkPos = pos + 16;
    for (int i = 0; i < chunkOffsetCount; i++) {
        int chunkOffset = byteArrayToInt(buf, chunkPos);
        int newChunkOffset = chunkOffset + moovSize;
        intToByteArray(newChunkOffset, buf, chunkPos);
        chunkPos += 4;
    }
}
public static int byteArrayToInt(byte[] b, int offset)
{
    return b[offset + 3] & 0xFF |
           (b[offset + 2] & 0xFF) << 8 |
           (b[offset + 1] & 0xFF) << 16 |
           (b[offset] & 0xFF) << 24;
}

public void intToByteArray(int a, byte[] buf, int offset)
{
    buf[offset] = (byte) ((a >> 24) & 0xFF);
    buf[offset + 1] = (byte) ((a >> 16) & 0xFF);
    buf[offset + 2] = (byte) ((a >> 8) & 0xFF);
    buf[offset + 3] = (byte) (a & 0xFF);
}
Currently MediaMuxer does not create streamable MP4 files
You can try Intel INDE at https://software.intel.com/en-us/intel-inde and the Media Pack for Android, which is a part of INDE; tutorials are at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has a sample that shows how to use the Media Pack to create and stream files over the network.
For example, for camera streaming it has the sample CameraStreamerActivity.java:
public void onCreate(Bundle icicle) {
    capture = new CameraCapture(new AndroidMediaObjectFactory(getApplicationContext()), progressListener);
    parameters = new StreamingParameters();
    parameters.Host = getString(R.string.streaming_server_default_ip);
    parameters.Port = Integer.parseInt(getString(R.string.streaming_server_default_port));
    parameters.ApplicationName = getString(R.string.streaming_server_default_app);
    parameters.StreamName = getString(R.string.streaming_server_default_stream);
    parameters.isToPublishAudio = false;
    parameters.isToPublishVideo = true;
}

public void startStreaming() {
    configureMediaStreamFormat();
    capture.setTargetVideoFormat(videoFormat);
    capture.setTargetAudioFormat(audioFormat);
    capture.setTargetConnection(prepareStreamingParams());
    capture.start();
}
In addition, there are similar samples for file streaming and for game-process capture and streaming.

Sending OpenCV::Mat image to websocket Java client

I have a C++ WebSocket server, and I want to send an OpenCV image (cv::Mat) to my Android client.
I understood that I should use a Base64 string, but I can't find out how to produce one from my OpenCV frames.
I don't know how to convert a cv::Mat to a byte array.
Thank you
Hi, you can use the code below, which works for me.
C++ Client
Here we will send raw BGR bytes into the socket by accessing the Mat data pointer.
Before sending, make sure that the Mat is continuous; otherwise, make it continuous.
int sendImage(Mat frame) {
    int imgSize = frame.total() * frame.elemSize();
    int bytes = 0;
    int clientSock;
    const char* server_ip = ANDROID_IP;
    int server_port = 2000;
    struct sockaddr_in serverAddr;
    socklen_t serverAddrLen = sizeof(serverAddr);
    if ((clientSock = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
        printf("\n--> socket() failed.");
        return -1;
    }
    serverAddr.sin_family = PF_INET;
    serverAddr.sin_addr.s_addr = inet_addr(server_ip);
    serverAddr.sin_port = htons(server_port);
    if (connect(clientSock, (sockaddr*) &serverAddr, serverAddrLen) < 0) {
        printf("\n--> connect() failed.");
        return -1;
    }
    frame = (frame.reshape(0, 1)); // to make it continuous
    /* start sending images */
    if ((bytes = send(clientSock, frame.data, imgSize, 0)) < 0) {
        printf("\n--> send() failed");
        return -1;
    }
    /* if something went wrong, restart the connection */
    if (bytes != imgSize) {
        cout << "\n--> Connection closed " << endl;
        close(clientSock);
        return -1;
    }
    return 0;
}
Java Server
You should know the size of the image you are going to receive.
Receive the stream from the socket and convert it to a byte array.
Convert the byte array from BGR and create a Bitmap.
Code for receiving the byte array from the C++ client:
public static byte imageByte[];
int imageSize = 921600; // expected image size 640x480x3
InputStream in = server.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte buffer[] = new byte[1024];
int remainingBytes = imageSize;
while (remainingBytes > 0) {
    int bytesRead = in.read(buffer);
    if (bytesRead < 0) {
        throw new IOException("Unexpected end of data");
    }
    baos.write(buffer, 0, bytesRead);
    remainingBytes -= bytesRead;
}
in.close();
imageByte = baos.toByteArray();
baos.close();
Code to convert the byte array to an RGB bitmap image:
int nrOfPixels = imageByte.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for (int i = 0; i < nrOfPixels; i++) {
    int r = imageByte[3 * i];
    int g = imageByte[3 * i + 1];
    int b = imageByte[3 * i + 2];
    if (r < 0)
        r = r + 256; // Convert to positive
    if (g < 0)
        g = g + 256; // Convert to positive
    if (b < 0)
        b = b + 256; // Convert to positive
    pixels[i] = Color.rgb(b, g, r); // swap R and B: the incoming data is BGR
}
Bitmap bitmap = Bitmap.createBitmap(pixels, 640, 480, Bitmap.Config.ARGB_8888);
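Since the question mentions Base64: if the WebSocket transport requires text frames rather than binary ones, the received bytes can be converted with android.util.Base64. A sketch (available since API level 8):
// Encode the raw image bytes for a text frame, and decode them back.
String encoded = android.util.Base64.encodeToString(imageByte, android.util.Base64.NO_WRAP);
byte[] decoded = android.util.Base64.decode(encoded, android.util.Base64.NO_WRAP);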
Check this answer to the question Serializing OpenCV Mat_. If it is not a problem for you to use Boost, it can solve your problem. You will probably need some additional JNI magic on the client side.
You should take into account which pieces of data matter to you: cols (number of columns), rows (number of rows), data (which contains the pixel information), and type (data type and channel count).
You have to vectorize your matrix, because it is not necessarily continuous, and you have to take into account the variation in pixel size in memory.
Suppose:
cv::Mat m;
Then to allocate:
int depth; // measured in bytes
switch (m.depth())
{
    // ... you might check for all of the possibilities
    case CV_16U:
        depth = 2;
        break;
}
char *array = new char[4 + 4 + 4 + m.cols * m.rows * m.channels() * depth]; // rows + cols + type + data
And then write the header information:
int *rows = (int *) &array[0];
int *cols = (int *) &array[4];
int *type = (int *) &array[8];
*rows = m.rows;
*cols = m.cols;
*type = m.type();
And finally the data:
char *mPtr;
int rowBytes = m.cols * m.channels() * depth; // bytes per row
for (int i = 0; i < m.rows; i++)
{
    mPtr = m.ptr<char>(i); // the element type doesn't matter, we copy raw bytes
    for (int j = 0; j < rowBytes; j++)
    {
        array[12 + i * rowBytes + j] = mPtr[j]; // 12 = header (rows + cols + type)
    }
}
Hopefully no bugs in the code.

How to change orientation of camera preview callback buffer?

This is a variation on a question often asked hereabouts but I don't see this exact situation, so I'll throw it out there.
I have an onPreviewFrame callback set up. This gets a byte[] with NV21 data in it. We H.264-encode it and send it out as a video stream. On the other side, we see the video skewed, either 90 or 270 degrees, depending on the phone.
So the question is, how do you rotate the data, not just the preview image? Camera.Parameters.setRotation only affects taking pictures, not video. Camera.setDisplayOrientation specifically says it only affects the preview display, not the frame bytes:
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
So is there a way, at any API level, to change the orientation of the byte array? Failing that, can you even rotate the NV21 (YVU) format that this comes in, or do I need to convert it to RGB first?
Turns out you do need to rotate each frame yourself before sending it off. We ended up using libyuv, which has a very convenient function that both rotates and converts it - libyuv::ConvertToI420
https://code.google.com/p/libyuv/
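If pulling in a native library is not an option, NV21 can also be rotated in plain Java. A minimal sketch of a 90-degree clockwise rotation (my code, assuming the usual width * height * 3 / 2 buffer size; it allocates per call, so reuse buffers in production code):
// Rotate an NV21 frame 90 degrees clockwise. The output dimensions are
// swapped: the result is height x width.
public static byte[] rotateNV21Cw90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    int frameSize = width * height;
    // Y plane: output row x is input column x, read bottom-up.
    int o = 0;
    for (int x = 0; x < width; x++) {
        for (int y = height - 1; y >= 0; y--) {
            output[o++] = input[y * width + x];
        }
    }
    // Interleaved VU plane: half vertical resolution, 2 bytes (V, U) per sample.
    o = frameSize;
    for (int x = 0; x < width; x += 2) {
        for (int y = height / 2 - 1; y >= 0; y--) {
            output[o++] = input[frameSize + y * width + x];     // V
            output[o++] = input[frameSize + y * width + x + 1]; // U
        }
    }
    return output;
}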
I think that you would need to rotate the picture yourself. I did it once using the NDK and the Leptonica library. A look at my code should get you started. Performance was okay-ish on a Samsung Galaxy S2 (I think I got around 15 frames per second or so). Since I was pushing the result into an OpenGL texture, I also had to swizzle the color bytes around.
You could speed it up by rotating the image directly in the loop which decodes the YUV data.
mPix32 and mPix8 were previously allocated to hold the converted data. You would need to replace them with your own image data structure, of course.
jint Java_de_renard_ImageFilter_nativeProcessImage(JNIEnv *env, jobject javathis, jbyteArray frame) {
    ....
    jbyte *data_buffer = env->GetByteArrayElements(frame, NULL);
    l_uint8 *byte_buffer = (l_uint8 *) data_buffer;
    yuvToPixFast(byte_buffer, mPix32, mPix8);
    env->ReleaseByteArrayElements(frame, data_buffer, JNI_ABORT);
    ....
}
static inline void yuvToPixFast(unsigned char* pY, Pix* pix32, Pix* pix8) {
    int i, j;
    int nR, nG, nB;
    int nY, nU, nV;
    l_uint32* data = pixGetData(pix32);
    l_uint32* data8 = pixGetData(pix8);
    l_int32 height = pixGetHeight(pix32);
    l_int32 width = pixGetWidth(pix32);
    l_int32 wpl = pixGetWpl(pix32);
    l_int32 wpl8 = pixGetWpl(pix8);
    l_uint8 **lineptrs = pixSetupByteProcessing(pix8, NULL, NULL);
    l_uint8* line8;
    //memcpy(data8, pY, height * width);
    unsigned char* pUV = pY + width * height;
    for (i = 0; i < height; i++) {
        nU = 0;
        nV = 0;
        unsigned char* uvp = pUV + (i >> 1) * width;
        line8 = lineptrs[i];
        memcpy(line8, pY, wpl8 * 4);
        for (j = 0; j < width; j++) {
            if ((j & 1) == 0) {
                nV = (0xff & *uvp++) - 128;
                nU = (0xff & *uvp++) - 128;
            }
            // YUV conversion
            nY = *(pY++);
            //*line8++ = (l_uint8) nY;
            nY -= 16;
            if (nY < 0) {
                nY = 0;
            }
            int y1192 = nY * 1192;
            /* double saturation to increase cartoon effect */
            //nU <<= 1;
            //nV <<= 1;
            nB = y1192 + 2066 * nU;
            nG = y1192 - 833 * nV - 400 * nU;
            nR = y1192 + 1634 * nV;
            if (nR < 0) {
                nR = 0;
            } else if (nR > 262143) {
                nR = 262143;
            }
            if (nG < 0) {
                nG = 0;
            } else if (nG > 262143) {
                nG = 262143;
            }
            if (nB < 0) {
                nB = 0;
            } else if (nB > 262143) {
                nB = 262143;
            }
            //RGBA
            //ABGR
            *data++ = ((nR << 14) & 0xff000000) | ((nG << 6) & 0xff0000) | ((nB >> 2) & 0xff00) | (0xff);
            //*data++ = (0x00 << 24) | (0xff << 16) | (0x00 << 8) | (0xff);
            //*data++ = (0xff << 24) | ((nB << 6) & 0xff0000) | ((nG >> 2) & 0xff00) | ((nR >> 10) & 0xff);
        }
    }
    pixCleanupByteProcessing(pix8, lineptrs);
}

shift operator syntax error in android

I was just wondering if you can use the shift operators in Android; I am getting a syntax error when trying them. The operators are >>, <<, and >>>. If they aren't supported, is there an Android SDK equivalent?
EDIT: here is the code I am using. I am trying to do per-pixel collision detection and was trying this out.
public void getBitmapData(Bitmap bitmap1, Bitmap bitmap2) {
    int[] bitmap1Pixels;
    int[] bitmap2Pixels;
    int bitmap1Height = bitmap1.getHeight();
    int bitmap1Width = bitmap1.getWidth();
    int bitmap2Height = bitmap1.getHeight();
    int bitmap2Width = bitmap1.getWidth();
    bitmap1Pixels = new int[bitmap1Height * bitmap1Width];
    bitmap2Pixels = new int[bitmap2Height * bitmap2Width];
    bitmap1.getPixels(bitmap1Pixels, 0, bitmap1Width, 1, 1, bitmap1Width - 1, bitmap1Height - 1);
    bitmap2.getPixels(bitmap2Pixels, 0, bitmap2Width, 1, 1, bitmap2Width - 1, bitmap2Height - 1);
    // Find the first line where the two sprites might overlap
    int linePlayer, lineEnemy;
    if (ninja.getY() <= enemy.getY()) {
        linePlayer = enemy.getY() - ninja.getY();
        lineEnemy = 0;
    } else {
        linePlayer = 0;
        lineEnemy = ninja.getY() - enemy.getY();
    }
    int line = Math.max(linePlayer, lineEnemy);
    // Get the shift between the two
    int x = ninja.getX() - enemy.getX();
    int maxLines = Math.max(bitmap1Height, bitmap2Height);
    for (; line <= maxLines; line++) {
        // if width > 32, then you need a second loop here
        long playerMask = bitmap1Pixels[linePlayer];
        long enemyMask = bitmap2Pixels[lineEnemy];
        // Reproduce the shift between the two sprites
        if (x < 0) playerMask << (-x);
        else enemyMask << x;
        // If the two masks have common bits, binary AND will return != 0
        if ((playerMask & enemyMask) != 0) {
            // Contact!
            Log.d("pixel collsion", "we have pixel on pixel");
        }
    }
}
If you're appending to a string you'll get an error unless you put the arithmetic operations in parentheses:
jcomeau@intrepid:/tmp$ cat test.java
public class test {
    public static void main(String args[]) {
        int test = 42;
        System.out.println("" + (test >> 1) + ", " + (test << 1) + ", " + (test >>> 1));
    }
}
jcomeau@intrepid:/tmp$ java test
21, 84, 21
Java, which is what Android uses, does support bitwise and shift operations.
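One more note on the code in the question: playerMask << (-x); on its own is an expression, not a statement, so javac rejects it with a "not a statement" error. The shifted value has to be assigned back, for example:
// A bare shift expression is not a valid Java statement; assign the result.
if (x < 0) playerMask <<= -x;
else enemyMask <<= x;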
