I want to send a bitmap image over Bluetooth along with some other content like a char and an int. The problem is converting all of it into a single byte array. I tried making two byte arrays and merging them, but copyTo is not working. Is there some other way to do it?
Use the System.arraycopy method to copy one array into another:
int lenA = arrayA.length;
int lenB = arrayB.length;
byte[] outArray = new byte[lenA + lenB];
System.arraycopy(arrayA, 0, outArray, 0, lenA);    // copy arrayA into the start
System.arraycopy(arrayB, 0, outArray, lenA, lenB); // append arrayB right after it
I haven't tested it, but it should work.
edit:
And of course it's not recommended for big arrays; you're duplicating the data in memory that way. I don't know what exactly you're doing with this data, but if you can, use streaming instead.
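For example, writing the fields and the bitmap bytes straight to the Bluetooth socket's output stream avoids building one merged array at all. A minimal sketch, assuming the socket's stream and the bitmap's bytes are already available (names are illustrative):

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

void send(OutputStream out, char tag, int count, byte[] imageBytes) throws IOException {
    DataOutputStream dos = new DataOutputStream(out);
    dos.writeChar(tag);              // the char field
    dos.writeInt(count);             // the int field
    dos.writeInt(imageBytes.length); // length prefix so the receiver knows how much image data follows
    dos.write(imageBytes);           // stream the bitmap bytes without merging arrays first
    dos.flush();
}

The receiver can mirror this with a DataInputStream: readChar, readInt, readInt for the length, then readFully into a buffer of that length.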
I am working on an Android library for encoding/decoding raw data through ffmpeg. Every example I found uses files; it either reads from or writes to a file. However, I am using a raw byte array representing an RGBA image for the encoder input and a byte array for the encoder output. Let's focus on the encoding part for this question.
My function looks like this:
int encodeRGBA(uint8_t *image, int imageSize, int presentationTimestamp,
uint8_t *result, int resultSize)
where image is a byte array containing the raw RGBA image data, imageSize is the length of that array, presentationTimestamp is just a counter used by AVFrame for setting pts, result is a preallocated byte array with some defined length (currently sized width x height), and resultSize is that array's length (width x height). The returned int value represents the actually used length of the preallocated array. I am aware that this is not the best approach for sending data back to Java, and this is also part of the question. Is there a better way of returning the result?
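For reference, a hypothetical Java-side binding matching that C signature might look like this (the library and class names are assumptions); in Java the array lengths travel with the arrays, so the size parameters are not needed separately:

public class RgbaEncoder {
    static {
        System.loadLibrary("rgbaencoder"); // assumed native library name
    }

    // Returns the number of bytes actually written into `result`.
    public static native int encodeRGBA(byte[] image, int presentationTimestamp, byte[] result);
}

On the C side, this preallocated-result pattern usually pairs with JNI's GetByteArrayElements or SetByteArrayRegion to move bytes between the Java arrays and native buffers.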
The example found here for encoding writes byte data directly to frame->data[0] (with a different approach for different formats, RGBA or YUV). But a Google search for "ffmpeg read from memory" turns up examples like this, this, or this, all of them suggesting the use of AVIOContext.
I am confused about how AVFormatContext is supposed to be used with AVCodecContext for encoding.
Currently I have the encoder working using the first approach, and I am successfully returning results as described (with a preallocated byte array). I would like to know if that is the wrong approach. Should I be using AVIOContext for handling byte arrays?
Should I be using AVIOContext for handling byte arrays?
No. AVIOContext is for working with files and/or containers in memory. In that case avformat is required to read encoded frames out of a byte array. You are working with raw frames directly and don't require avformat.
Methods like Parcel#readByteArray tend to be used like this in examples I've seen:
byte[] _byte = new byte[in.readInt()];
in.readByteArray(_byte);
But there is no reason (that I see) to write this instead of
byte[] _byte = in.createByteArray();
It seems that read*Array is more useful when the length is known in advance, but write*Array always writes the length. Is there a way to avoid writing it?
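For context, a minimal Parcelable sketch of the second style; writeByteArray stores a length prefix followed by the bytes, which is exactly what createByteArray consumes:

import android.os.Parcel;
import android.os.Parcelable;

public class Blob implements Parcelable {
    private final byte[] data;

    public Blob(byte[] data) {
        this.data = data;
    }

    protected Blob(Parcel in) {
        data = in.createByteArray(); // reads the length prefix and the bytes in one call
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        dest.writeByteArray(data); // always writes the length before the contents
    }

    @Override
    public int describeContents() {
        return 0;
    }

    public static final Creator<Blob> CREATOR = new Creator<Blob>() {
        @Override
        public Blob createFromParcel(Parcel in) {
            return new Blob(in);
        }

        @Override
        public Blob[] newArray(int size) {
            return new Blob[size];
        }
    };
}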
I have a question: how do I convert multiple pictures to byte arrays (byte[])? My use case is saving many pictures in my SQLite database, and I have an ArrayList containing pictures from the drawable folder.
ArrayList<Integer> imageId = new ArrayList<Integer>();
imageId.add(R.drawable.a1);
imageId.add(R.drawable.a2);
imageId.add(R.drawable.a3);
imageId.add(R.drawable.a4);
imageId.add(R.drawable.a5);
Then I tried this code to convert one picture into a byte array:
Bitmap b = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.a1);
//calculate how many bytes our image consists of.
int bytes = b.getByteCount();
ByteBuffer buffer = ByteBuffer.allocate(bytes); //Create a new buffer
b.copyPixelsToBuffer(buffer); //Move the byte data to the buffer
byte[] array = buffer.array();
System.out.println(array); // note: this prints the array reference, not its contents
The code above works, but the problem is that it converts only one picture, and now I need to convert multiple pictures from the ArrayList. Can anybody help me? I have tried looping, but it gives me a java.lang.OutOfMemoryError.
Ok,
first of all, do not save whole pictures to SQLite; it is not a suitable approach. There's a very easy solution, but it's a little risky..
Just save the images to the SD card, as WhatsApp does, and save their paths to SQLite. Then whenever you want to access your pictures, you can read their paths from SQLite and load them.
The risk is that the pictures are under the user's control, so they can delete them. But you can hide the pictures in a deeply buried directory :)
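A minimal sketch of that approach, assuming an open SQLiteDatabase db with a table pictures(path TEXT) and access to a Context; all names are illustrative:

import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Environment;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

void savePicture(Context context, SQLiteDatabase db, int resId, String fileName) throws IOException {
    Bitmap b = BitmapFactory.decodeResource(context.getResources(), resId);
    File dir = context.getExternalFilesDir(Environment.DIRECTORY_PICTURES);
    File file = new File(dir, fileName);
    try (FileOutputStream out = new FileOutputStream(file)) {
        b.compress(Bitmap.CompressFormat.PNG, 100, out); // write the image to disk once
    }
    b.recycle(); // free the bitmap's pixel memory before decoding the next one

    ContentValues values = new ContentValues();
    values.put("path", file.getAbsolutePath()); // store only the path in SQLite
    db.insert("pictures", null, values);
}

Recycling each bitmap before decoding the next one also addresses the OutOfMemoryError you hit when looping over the whole list.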
I am in no way a seasoned programmer. I have been successful at getting AudioRecord to write microphone data directly to file (as a readable .wav file) as it comes in from the mic, with the help of many code snippets from the internet.
However, for what I want to do with my app, I need to save only portions of what comes in from the mic, and I thought I would be able to do that by saving the byte data into some sort of array first, so I could selectively use what I want and save it to file. Like many examples do, my class for reading microphone data reads the data into a byte array defined as:
byte data[] = new byte[audioBuffer];
and is read in with
read = audio.read(data, 0, audioBuffer);
My idea was to save each byte data array after it is read in to some sort of another array, and then read back each individual byte data array later to save to file when the user requests it. I tried an ArrayList to hold the data arrays:
private ArrayList<byte[]> grabArray = new ArrayList<byte[]>(grabArraySize);
but I am apparently only getting the last byte data array from the microphone for the whole .wav file. I am guessing I am misusing the ArrayList, but its description sounded like the best chance of being able to do what I need. I have tried to find another way to do what I want, including ByteBuffer, but that does not seem to provide the type of control that an array provides, where I can overwrite old data with new and then at any point retrieve any or all of the data.
Basically, I know how to do this if it were simple primitives like integers or floats, but byte arrays are apparently throwing me for a loop. On top of that, there is the byte primitive, and then there is the Byte class, which is a wrapper... all a bunch of ??? to someone who doesn't make a living programming in Java. What is the best way to manhandle byte arrays (or just bytes, for that matter) like you would plain numbers?
Some more code to show how I save the audio data (in my AudioRecord thread) to a temporary holding array, then try to retrieve the data (in another class) so I can save it to file. (My code is a big mess right now, with comments and various methods I've tried commented out; it would be too much to put it all here, and I don't have time to clean it up. I'm hoping this description of how I am trying to handle byte arrays will be enough to get the help I need.)
Reading audio data and saving to my temporary holding array:
while (recordState) {
    read = audio.read(data, 0, audioBuffer);
    if (AudioRecord.ERROR_INVALID_OPERATION != read) {
        if (i_read == grabArraySize) {
            i_read = 0; // reset index to 0 if hit end of array size
        }
        grabArray.set(i_read, data);
        i_read += 1;
    }
}
When asked to, reading audio data back from temporary holding array so I can save to file:
while (i < grabArraySize - 1) { // not writing the whole array - leaving out the last chunk
    if (i_write == grabArraySize) {
        i_write = 0;
    }
    os.write(tempArray.get(i_write));
    i += 1;
    i_write += 1;
}
My FileOutputStream os works fine; I am successfully writing to file with the .wav header. I need to figure out how to store the data[] byte arrays from the AudioRecorder somewhere other than directly to a file, so that I can retrieve them whenever I want and then write them to file. I am successfully getting audio data, but the whole file repeats one piece of that audio data (of size audioBuffer) over and over. The file is the correct length; everything else seems to be working; I can even recognize the sound I was making in the little bit that gets saved over and over...
Update again - It appears that ArrayLists just hold references (pointers), as opposed to holding values like a normal array does. I am now defining grabArray and tempArray both as byte[][]. For example, if I want to hold 10 separate byte arrays, each of size audioBuffer, I would define my array like this:
byte[][] grabArray = new byte[10][audioBuffer];
Now, in my AudioRecord thread, I am looping through my grabArray, setting each index = to the incoming audio byte array:
grabArray[i] = data;
Then, when I'm ready to write out to file (after setting tempArray = grabArray; I do this in case the AudioRecord thread writes a new audio chunk to grabArray before I finish writing to file), I loop through my tempArray:
os.write(tempArray[i]);
I am still getting only one instance of data[] (one audio chunk) repeated throughout the file. Am I at least on the right track?
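For what it's worth, grabArray[i] = data; still stores a reference to the same reusable buffer every iteration, so every slot ends up pointing at the latest chunk. A minimal sketch of copying each chunk instead, reusing the names from the snippets above:

import java.util.Arrays;

while (recordState) {
    read = audio.read(data, 0, audioBuffer);
    if (AudioRecord.ERROR_INVALID_OPERATION != read) {
        if (i_read == grabArraySize) {
            i_read = 0;
        }
        grabArray[i_read] = Arrays.copyOf(data, read); // independent copy of this chunk's bytes
        i_read += 1;
    }
}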
I am looking to transfer pixel data from a server to an Android program. On the server, the pixel data is in RGBA form, with one byte per color/transparency channel. Unfortunately, on Android the corresponding pixel format is ARGB, meaning the alpha channel comes before the color data instead of after it, like it does on the server. I am worried that shuffling the RGBA data to ARGB format on the server will be too slow, so I was hoping to find another way around that. The server is written in Python, by the way. I am capturing the screen data using the function presented here: Image.frombuffer with 16-bit image data. If there is a way to grab the screen capture using this method (or some other) in ARGB format or even RGB_565, I would love to hear about that as well.
One trick I thought of to solve this problem was to use the isPreMultiplied flag on Canvas.drawBitmap(int[], ...) and then send only the RGB bytes from the server. Then I could recompose the RGB bits into ints on the Android device and send that to drawBitmap, ignoring the alpha channel entirely.
However, this leaves me with another problem. Ints are made up of 4 bytes, and I have sequences of 3 bytes in my byte[] array (the RGB values). I was using some of the solutions proposed here: byte array to Int Array, to convert my byte[] to an int[] when I was transferring RGBA data. But now that they are 3-byte sequences, I'm not sure how to quickly convert them to ints. I am hoping for close to real-time image updating, so I need a way to do this quickly. Any ideas?
int rgbInt = ((byteArray[0] & 0xFF) << 16)
           | ((byteArray[1] & 0xFF) << 8)
           |  (byteArray[2] & 0xFF);
// not sure these are in the correct order, you may have to swap the indexes around
// the masks and parentheses matter: bytes are signed in Java, and + binds
// tighter than <<, so the unparenthesized version computes the wrong thing
You might also need to OR in
| (0xFF << 24)
to set the alpha value to opaque.
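Putting it together, a minimal sketch of the whole conversion (with illustrative names) that turns packed RGB triples into opaque ARGB ints suitable for Canvas.drawBitmap(int[], ...):

static int[] rgbToArgb(byte[] rgb) {
    int[] pixels = new int[rgb.length / 3];
    for (int i = 0, p = 0; p < pixels.length; i += 3, p++) {
        pixels[p] = (0xFF << 24)                // opaque alpha
                  | ((rgb[i] & 0xFF) << 16)     // red
                  | ((rgb[i + 1] & 0xFF) << 8)  // green
                  |  (rgb[i + 2] & 0xFF);       // blue
    }
    return pixels;
}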