Sending OpenCV::Mat image to websocket Java client - android

I have a C++ websocket server, and I want to send an OpenCV image (cv::Mat) to my Android client.
I understood that I should use a base64 string, but I can't figure out how to produce one from my OpenCV frames.
I don't know how to convert a cv::Mat to a byte array.
Thank you

Hi, you can use the code below, which works for me.
C++ Client
Here we send the raw BGR bytes into the socket by accessing the Mat data pointer.
Before sending, make sure the Mat is continuous; if it is not, make it continuous first.
// Requires the usual BSD socket headers (<sys/socket.h>, <netinet/in.h>, <arpa/inet.h>, <unistd.h>) and OpenCV.
int sendImage(Mat frame){
int imgSize = frame.total()*frame.elemSize();
int bytes=0;
int clientSock;
const char* server_ip=ANDROID_IP;
int server_port=2000;
struct sockaddr_in serverAddr;
socklen_t serverAddrLen = sizeof(serverAddr);
if ((clientSock = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
printf("\n--> socket() failed.");
return -1;
}
serverAddr.sin_family = PF_INET;
serverAddr.sin_addr.s_addr = inet_addr(server_ip);
serverAddr.sin_port = htons(server_port);
if (connect(clientSock, (sockaddr*)&serverAddr, serverAddrLen) < 0) {
printf("\n--> connect() failed.");
return -1;
}
if (!frame.isContinuous()) frame = frame.clone(); // ensure the pixel data is one continuous block before sending
/* start sending images */
if ((bytes = send(clientSock, frame.data, imgSize, 0)) < 0){
printf("\n--> send() failed");
return -1;
}
/* if something went wrong, restart the connection */
if (bytes != imgSize) {
cout << "\n--> Connection closed " << endl;
close(clientSock);
return -1;
}
return 0;
}
Java Server
You should know the size of the image you are going to receive.
Receive the stream from the socket and convert it to a byte array.
Convert the BGR byte array and create a Bitmap.
Code for receiving the byte array from the C++ client
public static byte imageByte[];
int imageSize=921600;//expected image size 640X480X3
InputStream in = server.getInputStream();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte buffer[] = new byte[1024];
int remainingBytes = imageSize;
while (remainingBytes > 0) {
int bytesRead = in.read(buffer, 0, Math.min(buffer.length, remainingBytes)); // never read past this frame
if (bytesRead < 0) {
throw new IOException("Unexpected end of data");
}
baos.write(buffer, 0, bytesRead);
remainingBytes -= bytesRead;
}
in.close();
imageByte = baos.toByteArray();
baos.close();
Code to convert the BGR byte array to an RGB Bitmap
int nrOfPixels = imageByte.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for (int i = 0; i < nrOfPixels; i++) {
int b = imageByte[3 * i] & 0xFF;     // data arrives as BGR; mask to get unsigned values
int g = imageByte[3 * i + 1] & 0xFF;
int r = imageByte[3 * i + 2] & 0xFF;
pixels[i] = Color.rgb(r, g, b);
}
Bitmap bitmap = Bitmap.createBitmap(pixels, 640, 480, Bitmap.Config.ARGB_8888);
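If you do want the base64-over-WebSocket approach from the original question instead of a raw TCP socket, here is a minimal Android-side sketch, assuming the C++ server base64-encodes a JPEG-compressed frame (for example produced with cv::imencode(".jpg", ...)); the class and method names are illustrative only.
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.Base64;

public class FrameDecoder {
    // Decode one base64 text message received over the websocket into a Bitmap.
    public static Bitmap decodeBase64Frame(String base64Frame) {
        byte[] jpegBytes = Base64.decode(base64Frame, Base64.DEFAULT);
        // Returns null if the payload is not a valid compressed image.
        return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);
    }
}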

Check this answer to the question Serializing OpenCV Mat_. If it is not a problem for you to use Boost, it can solve your problem. You will probably need some additional JNI magic on the client side.

You should consider which pieces of data are important for you: cols (number of columns), rows (number of rows), data (which contains the pixel information), and type (data type and channel count).
You have to flatten (serialize) your matrix yourself, because it is not necessarily continuous in memory, and you have to take into account how many bytes each pixel occupies.
Suppose:
cv::Mat m;
Then to allocate:
int depth; // element size, measured in bytes
switch (m.depth())
{
// ... you should handle all of the possibilities
case CV_16U:
depth = 2;
break;
}
char *array = new char[4 + 4 + 4 + m.cols * m.rows * m.channels() * depth]; // rows + cols + type + data
And then write the header information (the char buffer has to be addressed through int pointers):
int *rows = (int*)&array[0];
int *cols = (int*)&array[4];
int *type = (int*)&array[8];
*rows = m.rows;
*cols = m.cols;
*type = m.type();
And finally the data:
char *mPtr;
int rowBytes = m.cols * m.channels() * depth; // number of bytes in one row
for (int i = 0; i < m.rows; i++)
{
mPtr = m.ptr<char>(i); // data type doesn't matter, we copy raw bytes
for (int j = 0; j < rowBytes; j++)
{
array[3 * 4 + i * rowBytes + j] = mPtr[j];
}
}
Hopefully no bugs in the code.
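The receiving side is not shown above; here is a rough Android-side sketch of reading that buffer back, assuming the sender is little-endian and the Mat is an 8-bit, 3-channel BGR image (class and method names are illustrative):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.graphics.Color;

public class MatPacketReader {
    // Parse the [rows][cols][type][data] buffer produced by the C++ code above.
    public static Bitmap fromMatBytes(byte[] packet) {
        ByteBuffer buf = ByteBuffer.wrap(packet).order(ByteOrder.LITTLE_ENDIAN);
        int rows = buf.getInt();
        int cols = buf.getInt();
        int type = buf.getInt(); // not used here; real code would branch on it
        int[] pixels = new int[rows * cols];
        for (int i = 0; i < pixels.length; i++) {
            int b = buf.get() & 0xFF; // OpenCV stores 8UC3 data as BGR
            int g = buf.get() & 0xFF;
            int r = buf.get() & 0xFF;
            pixels[i] = Color.rgb(r, g, b);
        }
        return Bitmap.createBitmap(pixels, cols, rows, Bitmap.Config.ARGB_8888);
    }
}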

Related

Xamarin tf.lite input objects

I'm trying to reproduce TensorFlow object detection on Xamarin.
private MappedByteBuffer LoadModelFile()
{
AssetFileDescriptor fileDescriptor = Assets.OpenFd("detect.tflite");
FileInputStream inputStream = new FileInputStream(fileDescriptor.FileDescriptor);
FileChannel fileChannel = inputStream.Channel;
long startOffset = fileDescriptor.StartOffset;
long declaredLength = fileDescriptor.DeclaredLength;
return fileChannel.Map(FileChannel.MapMode.ReadOnly, startOffset, declaredLength);
}
View view = (View) sender;
MappedByteBuffer buffer = LoadModelFile();
Interpreter interpreter = new Interpreter(buffer);
var sr = new StreamReader(Assets.Open("labels.txt"));
var labels = sr.ReadToEnd()
.Split('\n')
.Select(s => s.Trim())
.Where(s => !string.IsNullOrEmpty(s))
.ToList();
var bitmap = BitmapFactory.DecodeResource(Resources, 2130837608);
var resizedBitmap = Bitmap.CreateScaledBitmap(bitmap, 1000, 750, false)
.Copy(Bitmap.Config.Argb8888, false);
float[][][][] imgData = null;
imgData = new float[1][][][];
imgData[0] = new float[1000][][];
for (int i = 0; i < imgData[0].Length; i++)
{
imgData[0][i] = new float[750][];
for (int j = 0; j < imgData[0][i].Length; j++)
{
imgData[0][i][j] = new float[3];
}
}
var intValuess = new int[1000 * 750];
resizedBitmap.GetPixels(intValuess, 0, 1000, 0, 0, 1000, 750);
int pixels = 0;
for (int i = 0; i < imgData[0].Length; i++)
{
for (int j = 0; j < imgData[0][i].Length; j++)
{
var val = intValuess[pixels++];
imgData[0][i][j][0] = (float)((val >> 16) & 0xFF);
imgData[0][i][j][1] = (float)((val >> 8) & 0xFF);
imgData[0][i][j][2] = (float)(val & 0xFF);
}
}
var outputs = new float[labels.Count];
interpreter.Run(imgData, outputs);
but I have the error "cannot convert float[][][][] to Java.Lang.Object" on the line interpreter.Run(imgData, outputs);
How can I convert float[][][][] to Java.Lang.Object, or where can I find TensorFlow Lite examples with Xamarin?
I know it has been a while since you asked this question but maybe my response can be useful to someone.
I am also trying to use Xamarin with tflite, to run a simple CNN.
Here is my code:
private MappedByteBuffer LoadModelFile()
{
var assets = Application.Context.Assets;
AssetFileDescriptor fileDescriptor = assets.OpenFd("seed_model_no_qt.tflite");
FileInputStream inputStream = new FileInputStream(fileDescriptor.FileDescriptor);
FileChannel fileChannel = inputStream.Channel;
long startOffset = fileDescriptor.StartOffset;
long declaredLength = fileDescriptor.DeclaredLength;
return fileChannel.Map(FileChannel.MapMode.ReadOnly, startOffset, declaredLength);
}
private string Classify(MediaFile mediaFile)
{
var assets = Application.Context.Assets;
Bitmap bp = BitmapFactory.DecodeStream(mediaFile.GetStream());
var resizedBitmap = Bitmap.CreateScaledBitmap(bp, 1280, 1280, false).Copy(Bitmap.Config.Argb8888, false);
var bufint = new int[1280 * 1280];
resizedBitmap.GetPixels(bufint, 0, 1280, 0, 0, 1280, 1280);
int pixels = 0;
var input_buffer = new byte[4 * 1280 * 1280 * 3];
for(int i = 0; i < 1280; i++)
{
for(int k = 0; k < 1280; k++)
{
int val = bufint[pixels++];
Array.Copy(BitConverter.GetBytes(((val >> 16) & 0xFF) * (1f / 255f)), 0, input_buffer, (i * 1280 + k) * 12, 4);
Array.Copy(BitConverter.GetBytes(((val >> 8) & 0xFF) * (1f / 255f)), 0, input_buffer, (i * 1280 + k) * 12 + 4, 4);
Array.Copy(BitConverter.GetBytes((val & 0xFF) * (1f / 255f)), 0, input_buffer, (i * 1280 + k) * 12 + 8, 4);
}
}
var bytebuffer = Java.Nio.ByteBuffer.Wrap(input_buffer);
var output = Java.Nio.ByteBuffer.AllocateDirect(4*160*160);
interpreter.Run(bytebuffer, output);
var buffer = new byte[4 * 160 * 160];
Marshal.Copy(output.GetDirectBufferAddress(), buffer, 0, 4 * 160 * 160);
float sum = 0.0f;
for(int i = 0; i < 160*160; i++)
{
sum += BitConverter.ToSingle(buffer, i * 4);
}
return "Count : " + ((int)(sum/255)).ToString();
}
I reused your LoadModelFile() function as it is. The code takes an image from a MediaFile (coming from the phone camera), then resizes it to a 1280x1280 RGB image before feeding it to the CNN as an array of float32 values.
Your float[][][][] to Java.Lang.Object issue comes from the interpreter.Run() method expecting a Java Object. Some people online solve it by passing a Java.Nio.ByteBuffer as the parameter instead of an array. It implies some bitwise manipulation, but the Run method does accept the ByteBuffer object.
When filling the ByteBuffer, I advise you not to use its methods such as PutFloat(), but to fill a byte[] buffer and then use the Java.Nio.ByteBuffer.Wrap() method as I did. Using the ByteBuffer's methods caused large performance issues in my case.
The same thing happens when manipulating the output of my CNN (a 160x160 heatmap of float32 values). Using the ByteBuffer.Get() method to access the values was very slow. Instead, use Marshal.Copy to store the values in a byte array, then get the float values back with BitConverter.ToSingle.

how to filter audio by zeroing fft data in java(android)

I'm working on an audio-based application for the Android platform. In this code I first record my voice in WAV format, then compute the FFT (in the fftbegir method). Now I want to filter my voice to 0-4 kHz by zeroing FFT data and then performing an IFFT, but when I play the new WAV file I hear very poor quality sound with lots of noise. The FFT class is here: http://introcs.cs.princeton.edu/java/97data/FFT.java.html and here is my code:
private void filter(Complex[] x, int size) throws IOException {
double d, b;
String strI;
byte[] bytes = new byte[2];
int i = 0;
double k = -3.14159;
Complex[] f = new Complex[size];
Complex[] iff;
byte[] ddd;
double[] kkk = new double[size];
FFT q = new FFT();
short shor;
double data9[] = new double[size];
d = 2 * 3.14159 / size;
totalAudioLen = size;
totalDataLen = totalAudioLen + 36;
while (i < size) { // to make its length a power of 2
data9[i] = k;
k = k + d;
i++;
}
i = 0;
while (i < (size / 2) - 2000) {
f[i] = new Complex(x[i].re(), x[i].im());
i++;
}
while (i < (size / 2) + 2000) { // I want to remove these samples of the FFT
f[i] = new Complex(0, 0);
i++;
}
while (i < size) {
f[i] = new Complex(x[i].re(), x[i].im());
i++;
}
iff = q.ifft(f);
try {
out9 = new FileOutputStream(getridemal());
out10 = new FileOutputStream(getwavfilter());
out11 = new FileOutputStream(getkhodesh());
WriteWaveFileHeader(out10, totalAudioLen, totalDataLen,
longSampleRate, channels, byteRate);
for (i = 0; i < size; i++) {
b = iff[i].re();
shor = (short) (b * 32768.0);
bytes = ByteConvert.convertToByteArray(shor);
out10.write(bytes, 0, 2);
}
} finally {
out9.close();
out10.close();
out11.close();
}
}
private void fftbegir(String input, String output) {
double[] data8;
int i = 0;
int r, k, l;
double b;
int m = 2;
try {
in5 = new FileInputStream(input);
out5 = new FileOutputStream(output);
AppLog.logString("File size: " + totalDataLen);
totalAudioLen = in5.getChannel().size();
data8 = SoundDataUtils.load16BitPCMRawDataFileAsDoubleArray();
l = data8.length;
while (l > m) {
m = m * 2;
}
Complex[] x = new Complex[m];
while (i < l) {
x[i] = new Complex(data8[i], 0);
i++;
}
in5.close();
i--;
for (i = l; i < m; i++) {
x[i] = new Complex(0, 0);
}
FFT f = new FFT();
Complex[] y = f.fft(x);
filter(y, m);
out5.close();
} catch (IOException e) {
e.printStackTrace(); // the original snippet omitted the catch block required after try
}
}
thanks:)
Filtering in the frequency domain as you are doing does not work well.
Whilst applying the inverse FFT to the results of a FFT yields the same samples (that is to say, it is invertible), this no longer holds true when coefficients are modified.
There are (at least) a few issues here:
The Gibbs phenomenon, which results from the sharp transition from pass-band to stop-band
The fact that the FFT is a fairly lousy band-pass filter in the first place. Components of the frequencies in the stop-band appear in several adjacent bands, and therefore remain in the signal.
Each FFT bin contains a real and an imaginary component. Setting a bin to the complex value (0,0) not only zeroes its magnitude but also discards the phase information in the process.
You'd be better off with an IIR band-stop filter, which operates in the time domain. Besides working as expected, it is far cheaper to compute too.
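For reference, here is a minimal time-domain sketch of one band-stop (notch) biquad section, using coefficients from the RBJ Audio EQ Cookbook; the sample rate, centre frequency and Q are placeholders you would tune for your recording, and for a wider stop band you would cascade several sections or pick a different design.
// One second-order band-stop (notch) IIR section, Direct Form I.
public class NotchFilter {
    private final double b0, b1, b2, a1, a2; // normalized coefficients
    private double x1, x2, y1, y2;           // delay elements

    public NotchFilter(double sampleRate, double centreHz, double q) {
        double w0 = 2.0 * Math.PI * centreHz / sampleRate;
        double alpha = Math.sin(w0) / (2.0 * q);
        double a0 = 1.0 + alpha;
        b0 = 1.0 / a0;
        b1 = -2.0 * Math.cos(w0) / a0;
        b2 = 1.0 / a0;
        a1 = -2.0 * Math.cos(w0) / a0;
        a2 = (1.0 - alpha) / a0;
    }

    // Filter one sample.
    public double process(double x) {
        double y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
}
You would construct the filter once and call process() on every sample of the recording.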

Convert android.media.Image (YUV_420_888) to Bitmap

I'm trying to implement camera preview image data processing using camera2 api as proposed here: Camera preview image data processing with Android L and Camera2 API.
I successfully receive callbacks using onImageAvailableListener, but for further processing I need to obtain a Bitmap from the YUV_420_888 android.media.Image. I searched for similar questions, but none of them helped.
Could you please suggest how to convert android.media.Image (YUV_420_888) to Bitmap, or is there maybe a better way of listening for preview frames?
You can do this using the built-in Renderscript intrinsic, ScriptIntrinsicYuvToRGB. Code taken from Camera2 api Imageformat.yuv_420_888 results on rotated image:
#Override
public void onImageAvailable(ImageReader reader)
{
// Get the YUV data
final Image image = reader.acquireLatestImage();
final ByteBuffer yuvBytes = this.imageToByteBuffer(image);
// Convert YUV to RGB
final RenderScript rs = RenderScript.create(this.mContext);
final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);
final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
allocationYuv.copyFrom(yuvBytes.array());
ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
scriptYuvToRgb.setInput(allocationYuv);
scriptYuvToRgb.forEach(allocationRgb);
allocationRgb.copyTo(bitmap);
// Release everything (in real code you would use `bitmap` before recycling it)
bitmap.recycle();
allocationYuv.destroy();
allocationRgb.destroy();
rs.destroy();
image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
final Rect crop = image.getCropRect();
final int width = crop.width();
final int height = crop.height();
final Image.Plane[] planes = image.getPlanes();
final byte[] rowData = new byte[planes[0].getRowStride()];
final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);
int channelOffset = 0;
int outputStride = 0;
for (int planeIndex = 0; planeIndex < 3; planeIndex++)
{
if (planeIndex == 0)
{
channelOffset = 0;
outputStride = 1;
}
else if (planeIndex == 1)
{
channelOffset = width * height + 1;
outputStride = 2;
}
else if (planeIndex == 2)
{
channelOffset = width * height;
outputStride = 2;
}
final ByteBuffer buffer = planes[planeIndex].getBuffer();
final int rowStride = planes[planeIndex].getRowStride();
final int pixelStride = planes[planeIndex].getPixelStride();
final int shift = (planeIndex == 0) ? 0 : 1;
final int widthShifted = width >> shift;
final int heightShifted = height >> shift;
buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));
for (int row = 0; row < heightShifted; row++)
{
final int length;
if (pixelStride == 1 && outputStride == 1)
{
length = widthShifted;
buffer.get(output.array(), channelOffset, length);
channelOffset += length;
}
else
{
length = (widthShifted - 1) * pixelStride + 1;
buffer.get(rowData, 0, length);
for (int col = 0; col < widthShifted; col++)
{
output.array()[channelOffset] = rowData[col * pixelStride];
channelOffset += outputStride;
}
}
if (row < heightShifted - 1)
{
buffer.position(buffer.position() + rowStride - length);
}
}
}
return output;
}
For a simpler solution see my implementation here:
Conversion YUV 420_888 to Bitmap (full code)
The function takes the media.image as input, and creates three RenderScript allocations based on the y-, u- and v-planes. It follows the YUV_420_888 logic as shown in this Wikipedia illustration.
However, here we have three separate image planes for the Y, U and V channels, so I take these as three byte[], i.e. U8 allocations. The y-allocation has size width * height bytes, while the u- and v-allocations have size width * height / 4 bytes each, reflecting the fact that each u byte covers 4 pixels (ditto each v byte).
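As a rough illustration of that step (not the linked full code), the three planes can be copied into U8 allocations along these lines; note that the plane buffers may include row-stride padding, which the full implementation accounts for:
// Sketch only: copy the Y, U and V planes of a YUV_420_888 Image into U8 allocations
// (android.media.Image, android.renderscript.Allocation/Element/RenderScript, java.nio.ByteBuffer).
private Allocation[] planesToAllocations(RenderScript rs, Image image) {
    Image.Plane[] planes = image.getPlanes();
    Allocation[] allocations = new Allocation[3];
    for (int i = 0; i < 3; i++) {
        ByteBuffer buffer = planes[i].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        allocations[i] = Allocation.createSized(rs, Element.U8(rs), bytes.length);
        allocations[i].copyFrom(bytes);
    }
    return allocations; // handed to the conversion script from the linked answer
}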
I wrote some code for this: it takes the YUV preview data and converts it to JPEG data, which I can then save as a Bitmap, byte[], or other formats (see the class "Allocation").
The SDK documentation says: "For efficient YUV processing with android.renderscript: Create a RenderScript Allocation with a supported YUV type, the IO_INPUT flag, and one of the sizes returned by getOutputSizes(Allocation.class), then obtain the Surface with getSurface()."
Here is the code, I hope it will help you: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
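As a small sketch of what that SDK paragraph describes (width, height and context are placeholders here, and width/height must be one of the sizes returned by getOutputSizes(Allocation.class)):
// A YUV input Allocation whose Surface can be added as a camera output target.
RenderScript rs = RenderScript.create(context);
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width)
        .setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation yuvInput = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface cameraTarget = yuvInput.getSurface(); // add this Surface to the capture session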

MediaMuxer unable to make MP4s that are streamable

I'm editing an MP4 on Android using MediaExtractor to fetch audio and video tracks then creating a new file using MediaMuxer. It works fine. I can play the new MP4 on the phone (and other players) but am unable to stream the file on the web. When I stop the MediaMuxer it generates a log message
"The mp4 file will not be streamable."
I looked at the underlying native code (MPEG4Writer.cpp) and it would appear that the writer is having trouble calculating the needed moov box size. It tries to guess using a heuristic if a bit rate is not supplied as a parameter to the writer. The problem is that MediaMuxer doesn't provide the ability to set MPEG4Writer's parameters. Am I missing something, or am I stuck looking at some other means of generating the file (or header)? Thanks.
In MPEG4Writer.cpp:
// The default MIN_MOOV_BOX_SIZE is set to 0.6% x 1MB / 2,
// where 1MB is the common file size limit for MMS application.
// The default MAX_MOOV_BOX_SIZE value is based on about 3
// minute video recording with a bit rate about 3 Mbps, because
// statistics also show that most of the video captured are going
// to be less than 3 minutes.
This is a bad assumption about how MediaMuxer might be used. We are recording a maximum of 15 seconds of higher-resolution video, and MIN_MOOV_BOX_SIZE is way too small. So to make the file streamable I have to rewrite the file to move the moov header before mdat and patch up some offsets. Here is my code. It's not great: error paths aren't handled correctly and it makes assumptions about the order of the boxes.
public void fastPlay(String srcFile, String dstFile) {
RandomAccessFile inFile = null;
FileOutputStream outFile = null;
try {
inFile = new RandomAccessFile(new File(srcFile), "r");
outFile = new FileOutputStream(new File(dstFile));
int moovPos = 0;
int mdatPos = 0;
int moovSize = 0;
int mdatSize = 0;
byte[] boxSizeBuf = new byte[4];
byte[] pathBuf = new byte[4];
int boxSize;
int dataSize;
int bytesRead;
int totalBytesRead = 0;
int bytesWritten = 0;
// First find the location and size of the moov and mdat boxes
while (true) {
try {
boxSize = inFile.readInt();
bytesRead = inFile.read(pathBuf);
if (bytesRead != 4) {
Log.e(TAG, "Unexpected bytes read (path) " + bytesRead);
break;
}
String pathRead = new String(pathBuf, "UTF-8");
dataSize = boxSize - 8;
totalBytesRead += 8;
if (pathRead.equals("moov")) {
moovPos = totalBytesRead - 8;
moovSize = boxSize;
} else if (pathRead.equals("mdat")) {
mdatPos = totalBytesRead - 8;
mdatSize = boxSize;
}
totalBytesRead += inFile.skipBytes(dataSize);
} catch (IOException e) {
break;
}
}
// Read the moov box into a buffer. This has to be patched up. Ug.
inFile.seek(moovPos);
byte[] moovBoxBuf = new byte[moovSize]; // This shouldn't be too big.
bytesRead = inFile.read(moovBoxBuf);
if (bytesRead != moovSize) {
Log.e(TAG, "Couldn't read full moov box");
}
// Now locate the stco boxes (chunk offset box) inside the moov box and patch
// them up. This ain't purdy.
int pos = 0;
while (pos < moovBoxBuf.length - 4) {
if (moovBoxBuf[pos] == 0x73 && moovBoxBuf[pos + 1] == 0x74 &&
moovBoxBuf[pos + 2] == 0x63 && moovBoxBuf[pos + 3] == 0x6f) {
int stcoPos = pos - 4;
int stcoSize = byteArrayToInt(moovBoxBuf, stcoPos);
patchStco(moovBoxBuf, stcoSize, stcoPos, moovSize);
}
pos++;
}
inFile.seek(0);
byte[] buf = new byte[(int) mdatPos];
// Write out everything before mdat
inFile.read(buf);
outFile.write(buf);
// Write moov
outFile.write(moovBoxBuf, 0, moovSize);
// Write out mdat
inFile.seek(mdatPos);
bytesWritten = 0;
while (bytesWritten < mdatSize) {
int bytesRemaining = (int) mdatSize - bytesWritten;
int bytesToRead = buf.length;
if (bytesRemaining < bytesToRead) bytesToRead = bytesRemaining;
bytesRead = inFile.read(buf, 0, bytesToRead);
if (bytesRead > 0) {
outFile.write(buf, 0, bytesRead);
bytesWritten += bytesRead;
} else {
break;
}
}
} catch (IOException e) {
Log.e(TAG, e.getMessage());
} finally {
try {
if (outFile != null) outFile.close();
if (inFile != null) inFile.close();
} catch (IOException e) {}
}
}
private void patchStco(byte[] buf, int size, int pos, int moovSize) {
Log.e(TAG, "stco " + pos + " size " + size);
// We are inserting the moov box before the mdat box, so all of the
// offsets in the stco box need to be increased by the size of the moov box. The stco
// box is variable in length: 4-byte size, 4-byte type, 1-byte version plus 3 bytes of flags,
// and a 4-byte entry count, followed by a variable number of chunk offsets. So subtract 16
// from the size, then divide the result by 4 to get the number of chunk offsets to patch up.
int chunkOffsetCount = (size - 16) / 4;
int chunkPos = pos + 16;
for (int i = 0; i < chunkOffsetCount; i++) {
int chunkOffset = byteArrayToInt(buf, chunkPos);
int newChunkOffset = chunkOffset + moovSize;
intToByteArray(newChunkOffset, buf, chunkPos);
chunkPos += 4;
}
}
public static int byteArrayToInt(byte[] b, int offset)
{
return b[offset + 3] & 0xFF |
(b[offset + 2] & 0xFF) << 8 |
(b[offset + 1] & 0xFF) << 16 |
(b[offset] & 0xFF) << 24;
}
public void intToByteArray(int a, byte[] buf, int offset)
{
buf[offset] = (byte) ((a >> 24) & 0xFF);
buf[offset + 1] = (byte) ((a >> 16) & 0xFF);
buf[offset + 2] = (byte) ((a >> 8) & 0xFF);
buf[offset + 3] = (byte) (a & 0xFF);
}
Currently MediaMuxer does not create streamable MP4 files
You can try Intel INDE at https://software.intel.com/en-us/intel-inde and the Media Pack for Android, which is part of INDE; tutorials are at https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials. It has a sample that shows how to use the Media Pack to create and stream files over the network.
For example, for camera streaming it has the sample CameraStreamerActivity.java:
public void onCreate(Bundle icicle) {
capture = new CameraCapture(new AndroidMediaObjectFactory(getApplicationContext()), progressListener);
parameters = new StreamingParameters();
parameters.Host = getString(R.string.streaming_server_default_ip);
parameters.Port = Integer.parseInt(getString(R.string.streaming_server_default_port));
parameters.ApplicationName = getString(R.string.streaming_server_default_app);
parameters.StreamName = getString(R.string.streaming_server_default_stream);
parameters.isToPublishAudio = false;
parameters.isToPublishVideo = true;
}
public void startStreaming() {
configureMediaStreamFormat();
capture.setTargetVideoFormat(videoFormat);
capture.setTargetAudioFormat(audioFormat);
capture.setTargetConnection(prepareStreamingParams());
capture.start();
}
In addition there are similar samples for file streaming or game capture and streaming.

Android byte array to Bitmap How to

How can I convert a byte array received using a socket?
The C++ client sends image data of type uchar.
On the Android side I am receiving this uchar array as a byte[], whose values range from -128 to +127.
What I want to do is receive this data and display it. For that I was trying to convert it to a Bitmap using BitmapFactory.decodeByteArray(), but no luck: I get a null Bitmap. Am I doing this right, or is there another method available?
Thanks in advance....
From the comments to the answers above, it seems like you want to create a Bitmap object from a stream of RGB values, not from any image format like PNG or JPEG.
This probably means that you know the image size already. In this case, you could do something like this:
byte[] rgbData = ... // From your server
int nrOfPixels = rgbData.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for(int i = 0; i < nrOfPixels; i++) {
int r = rgbData[3*i] & 0xFF;     // mask to treat the signed byte as unsigned
int g = rgbData[3*i + 1] & 0xFF;
int b = rgbData[3*i + 2] & 0xFF;
pixels[i] = Color.rgb(r,g,b);
}
Bitmap bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
I've been using it like below in one of my projects and so far it's been pretty solid. I'm not sure how picky it is as far as it not being compressed as a PNG though.
byte[] bytesImage;
Bitmap bmpOld; // Contains original Bitmap
Bitmap bmpNew;
ByteArrayOutputStream baoStream = new ByteArrayOutputStream();
bmpOld.compress(Bitmap.CompressFormat.PNG, 100, baoStream);
bytesImage = baoStream.toByteArray();
bmpNew = BitmapFactory.decodeByteArray(bytesImage, 0, bytesImage.length);
edit: I've adapted the code from this post to use RGB, so the code below should work for you. I haven't had a chance to test it yet so it may need some adjusting.
byte[] bytesImage = {0,1,2, 0,1,2, 0,1,2, 0,1,2};
int intByteCount = bytesImage.length;
int[] intColors = new int[intByteCount / 3];
int intWidth = 2;
int intHeight = 2;
final int intAlpha = 255;
if ((intByteCount / 3) != (intWidth * intHeight)) {
throw new ArrayStoreException();
}
for (int intIndex = 0; intIndex < intByteCount - 2; intIndex = intIndex + 3) {
intColors[intIndex / 3] = (intAlpha << 24) | ((bytesImage[intIndex] & 0xFF) << 16) | ((bytesImage[intIndex + 1] & 0xFF) << 8) | (bytesImage[intIndex + 2] & 0xFF);
}
Bitmap bmpImage = Bitmap.createBitmap(intColors, intWidth, intHeight, Bitmap.Config.ARGB_8888);
InputStream is = new java.net.URL(urldisplay).openStream();
byte[] colors = IOUtils.toByteArray(is);
int nrOfPixels = colors.length / 3; // Three bytes per pixel.
int pixels[] = new int[nrOfPixels];
for(int i = 0; i < nrOfPixels; i++) {
int r = (int)(0xFF & colors[3*i]);
int g = (int)(0xFF & colors[3*i+1]);
int b = (int)(0xFF & colors[3*i+2]);
pixels[i] = Color.rgb(r,g,b);
}
imageBitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_4444);
bmImage.setImageBitmap(imageBitmap);
