Ensuring OpenGL Texture Memory is Released - android

My application is running out of memory while switching between two activities. The first activity runs an OpenGL scene, the second does not. I want to make sure I am releasing all of the textures used by the OpenGL scene.
Right now I am using getNativeHeapAllocatedSize() to track the relative amount of memory used by the textures. This number goes up by about 4 MB when I allocate textures, but it never seems to go back down again.
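For reference, this is roughly how I sample it (a minimal sketch; the log tag is arbitrary):
// Sample the native heap before and after allocating/releasing textures.
long allocated = android.os.Debug.getNativeHeapAllocatedSize();
Log.d( "MemTrack", "native heap allocated: " + ( allocated / 1024 ) + " KB" );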
In my first activity's onPause() I have the following code:
SurfaceView.onPause();
mTexture = null;
In the second activity I then call getNativeHeapAllocatedSize() several times. Even after the GC has run, the memory still has not dropped.
Edit:
After more research it appears it is something with the code that loads the data. I have removed OpenGL from the equation and the memory is still not being released.
try {
    InputStream is = null;
    {
        AssetManager am = MyActivity.getAssetMgr();
        is = am.open( fileName );
    }

    Bitmap b = BitmapFactory.decodeStream( is );
    if( b != null ) {
        mResX = b.getWidth();
        mResY = b.getHeight();

        Bitmap.Config bc = b.getConfig();
        if( bc == Bitmap.Config.ARGB_8888 )
            mBPP = 4;
        else
            mBPP = 2;

        mImageData = ByteBuffer.allocateDirect( mResX * mResY * mBPP );
        mImageData.order( ByteOrder.nativeOrder() );
        b.copyPixelsToBuffer( mImageData );
        mImageData.position( 0 );
        return true;
    }
} catch (IOException e) {
    e.printStackTrace();
} catch (Exception e) {
    e.printStackTrace();
}
return false;
}
Edit2:
I did end up applying all of your ideas. However, this seemed to be the problem in my case:
ByteBuffer not releasing memory

I am assuming you mean textures uploaded to the GPU via gl.glTexImage* or some other helper method. In that case the GC won't help you; it does not clean up the internal memory used by textures.
Have you tried manually deleting your textures via gl.glDeleteTextures?
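For example, a minimal sketch (assuming a GL10 instance named gl and that mTextureId holds a name returned by glGenTextures):
int[] textures = { mTextureId };
// Frees the GPU-side storage for the texture; the GC cannot do this for you.
gl.glDeleteTextures( 1, textures, 0 );
mTextureId = 0;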
Edit, regarding the new code:
There are several leaks in your code:
close the input stream
recycle your bitmap after you have copied its data to the ByteBuffer
I guess you use the ByteBuffer with the image data to upload the texture to the GPU; make sure you do not keep references to those buffers after the data has been uploaded.
I do not see any other problems in this code. If it still doesn't work after these fixes (see the sketch below), then look closely at any other bitmap usage in your app.
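A minimal sketch of the loader with those fixes applied (same fields and helpers as your original; the finally block and the recycle() call are the additions):
InputStream is = null;
Bitmap b = null;
try {
    AssetManager am = MyActivity.getAssetMgr();
    is = am.open( fileName );
    b = BitmapFactory.decodeStream( is );
    if( b != null ) {
        mResX = b.getWidth();
        mResY = b.getHeight();
        mBPP = ( b.getConfig() == Bitmap.Config.ARGB_8888 ) ? 4 : 2;
        mImageData = ByteBuffer.allocateDirect( mResX * mResY * mBPP );
        mImageData.order( ByteOrder.nativeOrder() );
        b.copyPixelsToBuffer( mImageData );
        mImageData.position( 0 );
        return true;
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if( is != null ) {
        try { is.close(); } catch (IOException e) { /* ignore */ }
    }
    if( b != null )
        b.recycle(); // release the bitmap's pixel memory as soon as the copy is done
}
return false;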

Related

Android USB host: interrupt does not respond immediately

I have a USB device which has a button, and I want an Android app to catch the button's signal.
I found the interface and endpoint number of the button.
It seemed to work fine on the Galaxy S3 and Galaxy Note.
But later I found that it has a delay on other phones.
I was able to receive instant responses about 10% of the time; usually there was a 2-second delay, with some cases where the response was lost entirely.
Although I couldn't figure out the exact reason, I noticed that the phones with response delays were those with kernel version 3.4 or later.
Here is the code that I used initially.
if (mConnection != null) {
    mConnection.claimInterface(mInterfaces.get(0), true);
    final UsbEndpoint endpoint = mInterfaces.get(0).getEndpoint(0);
    Thread getSignalThread = new Thread(new Runnable() {
        @Override
        public synchronized void run() {
            byte[] buffer = new byte[8];
            final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
            while (mConnection != null) {
                int len = mConnection.bulkTransfer(endpoint, buffer, buffer.length, 0);
                if (len >= 0) {
                    // do my own code
                }
            }
        }
    });
    getSignalThread.setPriority(Thread.MAX_PRIORITY);
    getSignalThread.start();
}
Edit: timeout
When the timeout was set to 50 ms, I wasn't able to receive responses most of the time. When the timeout was 500 ms, I was initially able to get some delayed responses; however, I lost all responses after several tries with this setting.
Using UsbRequest
In addition to the bulkTransfer method, I also tried using UsbRequest; below is the code that I used.
@Override
public synchronized void run() {
    byte[] buffer = new byte[8];
    final ByteBuffer byteBuffer = ByteBuffer.wrap(buffer);
    UsbRequest inRequest = new UsbRequest();
    inRequest.initialize(mConnection, endpoint);
    while (mConnection != null) {
        inRequest.queue(byteBuffer, buffer.length);
        if (mConnection.requestWait() == inRequest) {
            // do my own code
        }
    }
}
However, the same kind of delay happened even after using UsbRequest.
Using libusb
I also tried using libusb_interrupt_transfer from an open source library called libusb.
However this also produced the same type of delay that I had when using UsbDeviceConnection.
unsigned char data_bt[8] = { 0, };
uint32_t out[2];
int transfered = 0;
while (devh_usb != NULL) {
    libusb_interrupt_transfer(devh_usb, 0x83, data_bt, 8, &transfered, 0);
    memcpy(out, data_bt, 8);
    if (out[0] == PUSH) {
        LOGI("button pushed!!!");
        memset(data_bt, 0, 8);
        //(env)->CallVoidMethod( thiz, mid);
    }
}
After looking into how libusb processes libusb_interrupt_transfer, I was able to figure out the general steps of an interrupt transfer:
1. make a transfer object of type interrupt
2. make a urb object that points to the transfer object
3. submit the urb object to the device's fd
4. detect any changes in the fd object via urb object
5. read urb through ioctl
Steps 3, 4 and 5 are the steps involving file I/O.
I was able to find out that at step 4 the program waits for the button press before moving on to the next step.
Therefore I tried changing poll to epoll in order to check whether the poll function was causing the delay; unfortunately nothing changed.
I also tried setting the timeout of the poll function to 500 ms and always reading the value of the fd through ioctl, but only found that the value changed 2-3 seconds after pressing the button.
So, in conclusion, I suspect there is a delay in updating the value of the fd after the button is pressed. If there is anyone who could help me with this issue, please let me know. Thank you.
Thanks for reading

Store parts of huge ByteBuffer to file

I have implemented a loop buffer (or circular buffer) storing a total of 250 frames of raw video data (frame resolution 1280x720). As the buffer I am using the ByteBuffer class. The buffer runs in a separate thread using a Looper; every new frame is passed via a message to the thread's Handler object. When the limit is reached, the position is set to 0 and the whole buffer is overwritten from the beginning. That way, the buffer always contains the last 250 video frames.
As the amount of required heap space is huge (around 320 MB), I am using the tag android:largeHeap="true" in the manifest.
Now we come to the problem. The loop runs well and consumes slightly less than the allowed heap space (which is acceptable for me). But at some point I want to store the whole buffer to a raw binary file while respecting the current position of the buffer.
Let me explain that with a small graph:
The loop buffer looks like this:
|========== HEAD ==========|=============== TAIL ===============|
0                   buffer.position()                 buffer.limit()
At the time of saving, I want to first store the tail to the file (because it contains the beginning of the video) and afterwards the head, up to the current buffer.position(). I cannot allocate any more byte arrays for extracting the data from the ByteBuffer (the heap space is full); thus, I have to write the ByteBuffer to the file directly.
At the moment, a ByteBuffer only allows itself to be written to a file as a whole (the write() method). Does anybody know what the solution could be? Or is there even a better solution for my task?
I will give my code below:
public class FrameRecorderThread extends Thread {

    public int MAX_NUMBER_FRAMES_QUEUE = 25 * 10; // 25 fps * 10 seconds
    public Handler frameHandler;
    private ByteBuffer byteBuffer;
    byte[] image = new byte[1382400]; // bytes for one image

    @Override
    public void run() {
        Looper.prepare();
        byteBuffer = ByteBuffer.allocateDirect(MAX_NUMBER_FRAMES_QUEUE * 1382400); // A lot of memory is allocated
        frameHandler = new Handler() {
            @Override
            public void handleMessage(Message msg) {
                // Store message content (byte[]) to queue
                if (msg.what == 0) { // STORE FRAME TO BUFFER
                    if (byteBuffer.position() < MAX_NUMBER_FRAMES_QUEUE * 1382400) {
                        byteBuffer.put((byte[]) msg.obj);
                    } else {
                        byteBuffer.position(0); // Start overwriting from the beginning
                    }
                } else if (msg.what == 1) { // SAVE IMAGES
                    String fileName = "VIDEO_BUF_1.raw";
                    File directory = new File(Environment.getExternalStorageDirectory()
                            + "/FrameRecorder/");
                    directory.mkdirs();
                    try {
                        FileOutputStream outStream = new FileOutputStream(Environment.getExternalStorageDirectory()
                                + "/FrameRecorder/" + fileName);
                        // This is the current position of the split between head and tail
                        int position = byteBuffer.position();
                        try {
                            // This stores the whole buffer in a file but does
                            // not respect the order (tail before head)
                            outStream.getChannel().write(byteBuffer);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    } catch (FileNotFoundException e) {
                        Log.e("FMT", "File not found. (" + e.getLocalizedMessage() + ")");
                    }
                } else if (msg.what == 2) { // STOP LOOPER
                    Looper looper = Looper.myLooper();
                    if (looper != null) {
                        looper.quit();
                        byteBuffer = null;
                        System.gc();
                    }
                }
            }
        };
        Looper.loop();
    }
}
Thank you very much in advance!
Just create a subsection and write that to a file.
Or set the limit, then write it, then set it back.
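A sketch of the subsection approach against the layout from the question (duplicate() gives independent position/limit values, so the original buffer is left untouched):
int split = byteBuffer.position();        // head/tail boundary

ByteBuffer tail = byteBuffer.duplicate();
tail.position(split);                     // tail: split .. limit (oldest frames)

ByteBuffer head = byteBuffer.duplicate();
head.position(0);
head.limit(split);                        // head: 0 .. split (newest frames)

FileChannel fc = outStream.getChannel();
fc.write(tail);                           // write the beginning of the video first
fc.write(head);                           // then the most recent part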
OK, in the meantime I have investigated a little further and found a solution.
Instead of a ByteBuffer object I am using a simple byte[] array. At the beginning I allocate all the heap space required for the frames. At the time of storing, I can then write the tail and the head of the buffer using the current position. This works and is easier than expected. :) Roughly like this:
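(A sketch; buffer, writePos and outputFile are placeholder names for the frame array, the current write index, and the target file.)
FileOutputStream outStream = new FileOutputStream(outputFile);
outStream.write(buffer, writePos, buffer.length - writePos); // tail: oldest frames first
outStream.write(buffer, 0, writePos);                        // head: newest frames
outStream.close();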

Android NDK Pointer Arithmetic

I am trying to load a TGA file in Android NDK.
I open the file using AssetManager, read in the entire contents of the TGA file into a memory buffer, and then I try to extract the pixel data from it.
I can read the TGA header part of the file without any problems, but when I try to advance the memory pointer past the TGA header, the app crashes. If I don't try to advance the memory pointer, it does not crash.
Is there some sort of limitation in Android NDK for pointer arithmetic?
Here is the code:
This function opens the asset file:
char* GEAndroid::OpenAssetFile( const char* pFileName )
{
    char* pBuffer = NULL;

    AAssetManager* assetManager = m_pState->activity->assetManager;
    AAsset* assetFile = AAssetManager_open(assetManager, pFileName, AASSET_MODE_UNKNOWN);
    if (!assetFile) {
        // Log error as 'error in opening the input file from apk'
        LOGD( "Error opening file %s", pFileName );
    }
    else
    {
        LOGD( "File opened successfully %s", pFileName );
        const void* pData = AAsset_getBuffer(assetFile);
        off_t fileLength = AAsset_getLength(assetFile);
        LOGD("fileLength=%d", fileLength);
        pBuffer = new char[fileLength];
        memcpy( pBuffer, pData, fileLength * sizeof( char ) );
    }

    return pBuffer;
}
And down here in my texture class I try to load it:
char* pBuffer = g_pGEAndroid->OpenAssetFile( fileNameWithPath );

TGA_HEADER textureHeader;
char *pImageData = NULL;
unsigned int bytesPerPixel = 4;

textureHeader = *reinterpret_cast<TGA_HEADER*>(pBuffer);
// I double check that the textureHeader is valid and it is.

bytesPerPixel = textureHeader.bits / 8; // Divide by 8 to get the bytes per pixel
m_imageSize = textureHeader.width * textureHeader.height * bytesPerPixel; // Calculate the memory required for the TGA data

pImageData = new char[m_imageSize];
// the line below causes the crash
pImageData = reinterpret_cast<char*>(pBuffer + sizeof( TGA_HEADER )); // <-- causes a crash
If I replace the line above with the following line (even though it is incorrect), the app runs, although obviously the texture is messed up.
pImageData = reinterpret_cast<char*>(pBuffer); // <-- does not crash, but obviously the texture is messed up.
Anyone have any ideas?
Thanks.
Why reinterpret_cast? You're adding an integer to a char*; that operation produces a char*. No typecast necessary.
One caveat for pointer juggling on Android (and on ARM devices in general): ARM cannot read/write unaligned data from memory. If you read/write an int-sized variable, it needs to be at an address that's a multiple of 4; for short, a multiple of 2. Bytes can be at any address. This does not, as far as I can see, apply to the presented snippet. But do keep in mind. It does throw off binary format parsing occasionally, especially when ported from Intel PCs.
Simply assigning an unaligned value to a pointer does not crash. Dereferencing it might.
Sigh, I just realized the mistake. I allocate memory for pImageData, then set the pointer to the buffer. This does not sit well when I try to create an OpenGL texture with the pixel data. Modifying it so that I memcpy the pixel data from (pBuffer + sizeof( TGA_HEADER )) to pImageData fixes the problem.

Can 2 WritableByteChannels be used at the same time?

When I write directly to 2 OutputStreams, everything works fine. When I try to write to 2 channels, though, the second one seemingly does not receive anything.
Does anyone know whether 2 WritableByteChannels can be written to at the same time? If not, any other ideas of what I can do to perform the same action while still using NIO/channels?
connection2 = new Socket(Resource.LAN_DEV2_IP_ADDRESS, Resource.LAN_DEV2_SOCKET_PORT);
out2 = connection2.getOutputStream();
connection = new Socket(Resource.LAN_HOST_IP_ADDRESS, Resource.LAN_HOST_SOCKET_PORT);
out = connection.getOutputStream();

File f = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), filename);
in = new FileInputStream(f);
fic = in.getChannel();
fsize = fic.size();

channel2 = Channels.newChannel(out2);
channel = Channels.newChannel(out);

// Send header
byte[] p = createHeaderPacket(filename, f.length());
out2.write(p); // Received correctly
out.write(p); // Received correctly

// Send file
long currPos = 0;
while (currPos < fsize)
{
    if (fsize - currPos < Resource.MEMORY_ALLOC_SIZE)
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, fsize - currPos);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer); // Never received
        currPos = fsize;
    }
    else
    {
        mappedByteBuffer = fic.map(FileChannel.MapMode.READ_ONLY, currPos, Resource.MEMORY_ALLOC_SIZE);
        channel2.write(mappedByteBuffer); // Received correctly
        channel.write(mappedByteBuffer); // Never received
        currPos += Resource.MEMORY_ALLOC_SIZE;
    }
}
Try:
channel2.write(mappedByteBuffer.duplicate());
channel.write(mappedByteBuffer);
The way to understand NIO buffers is to keep in mind their basic properties:
the underlying data store (which is commonly an ordinary byte array, but can be other things, such as a memory-mapped region of a file);
the start and capacity within that underlying space;
your current position in the buffer; and
the limit of the buffer.
All buffer operations provided by NIO are documented in terms of how the operation affects these properties. For example, the WritableByteChannel.write() documentation tells us that:
Between 0 and src.remaining() (inclusive) bytes will be written to the channel; and
If count bytes were written, the ByteBuffer's position will be increased by count when write() returns.
So looking at your original code:
channel2.write(mappedByteBuffer); // Received correctly
channel.write(mappedByteBuffer); // Never received
If the first write writes the entire remaining mappedByteBuffer to channel2, after that statement mappedByteBuffer.remaining() will be zero, so the write to channel will not write any bytes at all.
Hence my suggestion above to use ByteBuffer.duplicate() on the first write. This method returns a new ByteBuffer object which:
shares the original buffer's underlying store (so you're not making an unnecessary copy in memory of the actual bytes you want to write twice); but
has its own position (and remaining) values, so when channel2.write() adjusts that (duplicate) ByteBuffer's position, it will leave the position unchanged in the original buffer,
so channel.write() will still receive the intended range of bytes.
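A tiny demonstration of that independence (a sketch; the "write" is simulated by moving the position by hand):
ByteBuffer buf = ByteBuffer.wrap(new byte[] { 1, 2, 3, 4 });
ByteBuffer dup = buf.duplicate();    // shares the bytes, but has its own position/limit

buf.position(buf.limit());           // simulate a write() that consumed the whole buffer
System.out.println(buf.remaining()); // 0 -- a second write() would send nothing
System.out.println(dup.remaining()); // 4 -- the duplicate is still fully readable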
As an alternative, you could also write:
mappedByteBuffer.mark(); // store the current position
channel2.write(mappedByteBuffer);
mappedByteBuffer.reset(); // move position to the previously marked position
channel.write(mappedByteBuffer);
I'm also inclined to agree with EJP's point that you're probably not making the best use of MappedByteBuffer here. You could simplify your copying loop to:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    channel2.write(buffer.duplicate());
    channel.write(buffer);
    buffer.clear(); // reset the buffer so the next read() fills it from the start
}
Here the read() method increases the position by the number of bytes read from the channel, then the flip() method sets the limit to that position and the position back to 0, which means the bytes you've just read are in the remaining range that write() will consume; clear() then resets the buffer for the next read.
However, you'll notice that EJP's loop is a little more complicated than that. That's because write operations on channels might not necessarily write every remaining byte. (The write() documentation gives the example of a networking socket opened in non-blocking mode.) However that sample code (and the similar sample in the documentation of ByteBuffer.compact()) relies on the fact that you're only writing to a single channel; when you're writing to two different channels, you have to handle the fact that the two channels might accept a different number of bytes. So:
ByteBuffer buffer = ByteBuffer.allocate(Resource.MEMORY_ALLOC_SIZE);
while (fic.read(buffer) >= 0) {
    buffer.flip();
    buffer.mark();
    while (buffer.hasRemaining()) {
        channel2.write(buffer);
    }
    buffer.reset();
    while (buffer.hasRemaining()) {
        channel.write(buffer);
    }
    buffer.clear();
}
Of course multiple channels can be used at the same time, but more to the point, that's a terrible way to send a file. Creating lots of MappedByteBuffers causes all kinds of problems, as the underlying mapped regions are never released. Just open it as a normal channel and use the canonical NIO copy loop:
while (in.read(buffer) >= 0 || buffer.position() > 0)
{
    buffer.flip();
    out.write(buffer);
    buffer.compact();
}

Speed up encryption/decryption?

I have encryption and decryption code which I use to encrypt and decrypt video files (mp4). I'm trying to speed up the decryption process, as the encryption one is not that relevant for my case. This is the code that I have for the decryption process:
private static void decryptFile() throws IOException, ShortBufferException, IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize];
    byte[] outBytes = new byte[outputSize];

    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);

    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        if (inLength == blockSize)
        {
            int outLength = cipher.update(inBytes, 0, blockSize, outBytes);
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
My question is how to speed up the decryption process in this code. I've tried decrypting a 10 MB mp4 file and it decrypts in 6-7 seconds; however, I'm aiming for under 1 second. Another thing I would like to know is whether my writing to the FileOutputStream out is actually slowing the process down, rather than the decryption itself. Any suggestions on how to go about speeding things up here?
I'm using AES for encryption/decryption.
Until I find a solution, I will be using a ProgressDialog which tells the user to wait until the video has been decrypted (obviously, I'm not going to use the word "decrypted").
Why are you decrypting data only in blockSize increments? You do not show what type of object cipher is, but I am guessing it is a javax.crypto.Cipher instance. It can handle update() calls over arrays of arbitrary length, and you will have much less overhead if you use longer arrays. You should process data in blocks of, say, 8192 bytes (that's the traditional length for a buffer; it interacts reasonably well with CPU inner caches).
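A sketch of that change against the code from the question (same cipher, inStream and out fields; the 8192-byte buffer is the only real difference):
byte[] inBytes = new byte[8192];
// getOutputSize() gives the maximum output update() can produce for this input length.
byte[] outBytes = new byte[cipher.getOutputSize(8192)];

int inLength;
while ((inLength = inStream.read(inBytes)) > 0) {
    int outLength = cipher.update(inBytes, 0, inLength, outBytes);
    out.write(outBytes, 0, outLength);
}
out.write(cipher.doFinal()); // the final, possibly padded, block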
bytebiscuit, your question gave me the solution I had been chasing for the past 6 days. I just modified your code a little bit, and my 52 MB video file now decrypts in just 4 seconds. The previous decryption technique took 45 seconds, which was a different logic (not yours). That's a massive difference, 45 seconds to 4 seconds. Wherever I have made a modification I have put a //modified comment line. I am sure if your video is a 10 MB video, it will get decrypted in 1 second for sure. Try applying this; it should work out.
private static void decryptFile() throws IOException, ShortBufferException, IllegalBlockSizeException, BadPaddingException
{
    int blockSize = cipher.getBlockSize();
    int outputSize = cipher.getOutputSize(blockSize);
    System.out.println("outputsize: " + outputSize);
    byte[] inBytes = new byte[blockSize * 1024]; //modified
    byte[] outBytes = new byte[outputSize * 1024]; //modified

    in = new FileInputStream(inputFile);
    out = new FileOutputStream(outputFile);
    BufferedInputStream inStream = new BufferedInputStream(in);

    int inLength = 0;
    boolean more = true;
    while (more)
    {
        inLength = inStream.read(inBytes);
        if (inLength / 1024 == blockSize) //modified
        {
            int outLength = cipher.update(inBytes, 0, blockSize * 1024, outBytes); //modified
            out.write(outBytes, 0, outLength);
        }
        else more = false;
    }
    if (inLength > 0)
        outBytes = cipher.doFinal(inBytes, 0, inLength);
    else
        outBytes = cipher.doFinal();
    out.write(outBytes);
}
I suggest you use the profiling tool provided in the Android SDK; it will tell you where you spend the most time (i.e. file writing or decoding).
See http://developer.android.com/guide/developing/debugging/debugging-tracing.html
This works on the emulator as well as on an actual device.
Consider using the NDK. On devices before Froyo (and even Froyo itself), decryption in Java would be really slow due to the lack of a JIT (or a very simple one in Froyo). Even with the JIT, native architecture-optimized crypto code will always outrun Dalvik.
See also this question.
As an aside, if you're using AES directly, you're probably doing something wrong. If this is part of an effort to do DRM, make sure you realize the full extent of the fact that decompiling an Android app is trivial. Your key will not be secure, which by definition defeats the encryption.
Instead of spending effort improving an inadequate architecture, you should consider a streaming solution: it has the great advantage of spreading the computation time of the decryption so that it is no longer noticeable. I mean: do not produce another file from your video source, but rather a stream, served by a local HTTP server. Unfortunately there is no such component in the SDK; you have to make your own implementation or search for an existing one.
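A very rough sketch of the shape of such a server (everything here is an assumption: the port, the cipherInputStreamFor() helper and the fixed content type are placeholders, and a real implementation would need HTTP Range support for seeking):
// MediaPlayer would be pointed at http://127.0.0.1:8787/ instead of at a file.
static void serveDecrypted(File encryptedFile) throws IOException {
    ServerSocket server = new ServerSocket(8787);
    while (true) {
        Socket client = server.accept();
        OutputStream os = client.getOutputStream();
        os.write("HTTP/1.0 200 OK\r\nContent-Type: video/mp4\r\n\r\n".getBytes());
        // cipherInputStreamFor() is a placeholder for wrapping a FileInputStream
        // in a javax.crypto.CipherInputStream initialized for decryption.
        InputStream decrypted = cipherInputStreamFor(encryptedFile);
        byte[] buf = new byte[8192];
        int n;
        while ((n = decrypted.read(buf)) > 0) {
            os.write(buf, 0, n); // the decryption cost is spread over playback
        }
        decrypted.close();
        client.close();
    }
}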
