I'm trying to create an Android app that uses the Camera2 API. As part of the functionality, I want to develop a module that saves multiple images produced by an ImageReader, as follows:
Image image = reader.acquireLatestImage();
I'm getting the following exception:
IllegalStateException: too many images are currently acquired
as mentioned in the documentation:
https://developer.android.com/reference/android/media/ImageReader#acquireLatestImage()
This is because the image returned from 'acquireLatestImage' still belongs to the ImageReader's queue.
Is there any way to detach images returned from the 'ImageReader'?
Is there a way to copy an image, preferably without storing it on disk, which is a resource-consuming operation?
Thanks!
If this is a YUV_420_888 image, you can copy each of the plane ByteBuffers to keep the data around indefinitely. That is somewhat expensive, of course.
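For example, a minimal sketch of that copy, so the Image can be closed right away and its queue slot freed (remember to also record the row and pixel strides if you need to interpret the data later):

    // Copy each plane's ByteBuffer out of a YUV_420_888 Image so the
    // Image itself can be closed and its queue slot returned.
    Image image = reader.acquireLatestImage();
    Image.Plane[] planes = image.getPlanes();
    byte[][] copies = new byte[planes.length][];
    for (int i = 0; i < planes.length; i++) {
        ByteBuffer buffer = planes[i].getBuffer();
        copies[i] = new byte[buffer.remaining()];
        buffer.get(copies[i]);  // bulk copy into our own array
        // planes[i].getRowStride() / getPixelStride() describe the layout
    }
    image.close();  // frees the slot in the ImageReader's queue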
Unfortunately, there's no way to easily detach the Image from the ImageReader. The Reader is a circular buffer queue internally, so removing an Image would require the Reader to allocate a new image to replace the one removed, which is somewhat expensive as well.
It can be done by using ImageWriter.queueInputImage, connected to another ImageReader with the same configuration as the original ImageReader. When you receive image A from your first ImageReader and you want to keep it, you can queue it into the ImageWriter and then get the Image handle again from the second ImageReader. You still need to set maxImages on the second reader high enough to account for all the Images you want to keep around at once, of course.
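A rough sketch of that round-trip, assuming API 23+ (ImageWriter's minimum); the sizes here are illustrative:

    // Second reader + writer pair used purely as a "holding" queue.
    ImageReader holdReader = ImageReader.newInstance(
            width, height, ImageFormat.YUV_420_888, /* maxImages= */ 10);
    ImageWriter writer = ImageWriter.newInstance(
            holdReader.getSurface(), /* maxImages= */ 2);

    Image image = reader.acquireLatestImage();  // from the first reader
    writer.queueInputImage(image);              // transfers and closes 'image'
    // The same buffer then arrives in holdReader's OnImageAvailableListener,
    // where you can acquire it and keep it as long as maxImages allows.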
That's fairly cumbersome, of course, and if you're doing this continually you'll cause a lot of memory reallocation, so plain buffer copies may end up costing about the same (and are simpler in many ways).
So, I've been researching bitmap scaling using BitmapFactory.
http://developer.android.com/training/displaying-bitmaps/load-bitmap.html
I'm doing so because the application I'm working on requires a gallery that allows users to submit their photos to be added to the gallery. These photos will then be read from a URL.
My theoretical problem is this: considering that Android devices can have as little as 16 MB of memory available to an app, even scaling down the images only delays the inevitable unless I'm handling a single image. In my case, hundreds of images could be loaded, meaning that even scaled down, they will eventually hit that limit.
My only idea thus far is to load one image at a time, which is not preferable since users would have to wait between photo transitions.
That being said, does anyone have experience developing Android applications that handle hundreds of images? If so, is there any theory you could share on handling all these images fluidly? It can obviously be done, as there are gallery applications available; I am just unsure how they accomplished it given the constraints.
Please note this is not a request for how to use BitmapFactory to scale images, as that question has been answered many times.
Rather, it's a request about handling amounts of data you know will exceed memory limitations.
Gallery apps should not be storing all those thousands of images in memory. Use the ViewHolder pattern so that the displayed image views get recycled (this is forced upon you if you use RecyclerView). On the backend, use an image cache and keep a limit on its size.
See e.g. What is the benefit of ViewHolder? and How to release memory of bitmap using imageloader in android?
The Android Gallery app source may be a good reference: https://android.googlesource.com/platform/packages/apps/Gallery/+/android-5.1.1_r18/src/com/android/camera
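A minimal sketch of such a size-bounded cache, using the framework's LruCache and the common one-eighth-of-heap rule of thumb (getByteCount() needs API 12+):

    // In-memory bitmap cache capped at 1/8 of the app's max heap.
    int cacheKb = (int) (Runtime.getRuntime().maxMemory() / 1024) / 8;
    LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(cacheKb) {
        @Override
        protected int sizeOf(String key, Bitmap bitmap) {
            return bitmap.getByteCount() / 1024;  // entry size in KB
        }
    };
    // cache.put(url, bitmap) after a download, cache.get(url) before one;
    // LruCache evicts least-recently-used entries once the cap is exceeded.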
I want to crop a large image and tried using Bitmap.createBitmap, but it gives an OOM error. I also tried multiple techniques around createBitmap, but none of them were successful.
Now I'm thinking of saving the image to the file system and cropping it without loading it into memory, which might solve the problem, but I don't know how to do it.
User flow: the user takes multiple pictures with the in-app camera; after each snap the user can crop it manually, or the app will silently crop it based on some predefined logic, and later it will send these images to the server.
Can anybody guide me on how to achieve this?
There is a class called BitmapRegionDecoder which might help you, but it's available from API 10 and above.
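For example, a minimal sketch of decoding only the crop rectangle from a file on disk (the path and rectangle are illustrative):

    // Decode just the cropped region; the full image never enters memory.
    try {
        BitmapRegionDecoder decoder =
                BitmapRegionDecoder.newInstance("/sdcard/photo.jpg", false);
        Rect cropRect = new Rect(100, 100, 900, 700);  // left, top, right, bottom
        Bitmap cropped = decoder.decodeRegion(cropRect, null);
        decoder.recycle();
    } catch (IOException e) {
        // file missing, or not a format the region decoder supports
    }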
If you can't use it:
Many image formats are compressed and therefore require some sort of loading into memory.
You will need to read about the image format that best fits your needs, and then read it yourself, using only the memory that you need.
A slightly easier option would be to do it all in JNI: even though you will still use a lot of memory, at least your app won't hit an OOM as soon, since native allocations aren't constrained by the max heap size imposed on normal apps.
Of course, since Android is open source, you can also try to take the BitmapRegionDecoder source and use it on any device.
I very much doubt you can solve this problem with the existing Android API.
What you need to do is obtain one of the available image-access libraries (libpng is probably your best bet) and link it to your application via JNI (check whether a Java binding is already available).
Use the low-level I/O operations to read the image a single scanline at a time. Discard any scanlines before or after the vertical cropped region. For those scanlines inside the vertical cropped region, take only those pixels inside the horizontal cropped region and write them out to the cropped image.
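In sketch form, the cropping loop looks like this; decoder.readScanline() stands in for a hypothetical JNI wrapper around libpng's row-by-row reads, not a real Android API:

    // Hypothetical: stream the image one row at a time and keep only
    // the pixels inside the crop rectangle.
    for (int y = 0; y < imageHeight; y++) {
        byte[] row = decoder.readScanline();  // one decoded row of pixels
        if (y < cropTop || y >= cropTop + cropHeight) {
            continue;  // row is outside the vertical crop region
        }
        output.write(row, cropLeft * bytesPerPixel,
                cropWidth * bytesPerPixel);   // horizontal crop
    }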
I am trying to use the data from an Android picture. I don't want the JPEG format, since I will eventually use grayscale data. The YUV format is fine with me, since the first part of the data is the grayscale plane.
From the Android documentation:
public final void takePicture (Camera.ShutterCallback shutter,
Camera.PictureCallback raw, Camera.PictureCallback postview,
Camera.PictureCallback jpeg)
Added in API level 5
Triggers an asynchronous image capture. The camera service will initiate a series of callbacks to the application as the image capture progresses. The shutter callback occurs after the image is captured. This can be used to trigger a sound to let the user know that image has been captured. The raw callback occurs when the raw image data is available (NOTE: the data will be null if there is no raw image callback buffer available or the raw image callback buffer is not large enough to hold the raw image). The postview callback occurs when a scaled, fully processed postview image is available (NOTE: not all hardware supports this). The jpeg callback occurs when the compressed image is available. If the application does not need a particular callback, a null can be passed instead of a callback method.
It talks about "the raw image data"; however, I can find no information about the format of that raw image data.
Do you have any idea about that?
I want to get the grayscale data of the picture taken by the camera, with the data kept in phone memory, so no time is spent writing/reading image files or converting between image formats. Or maybe I have to sacrifice something to get it?
After some search, I think I found the answer:
From the Android tutorial:
"The raw callback occurs when the raw image data is available (NOTE:
the data will be null if there is no raw image callback buffer
available or the raw image callback buffer is not large enough to hold
the raw image)."
See this link (2011/05/10)
Android: Raw image callback supported devices
Not all devices support the raw PictureCallback.
https://groups.google.com/forum/?fromgroups=#!topic/android-developers/ZRkeoCD2uyc (2009)
Google employee Dave Sparks said:
"The original intent was to return an uncompressed RGB565 frame, but this proved to be impractical." "I am inclined to deprecate that API entirely and replace it with hooks for native signal processing."
Many people report a similar problem. See:
http://code.google.com/p/android/issues/detail?id=10910
Since many image-processing algorithms are based on grayscale images, I am hoping for grayscale raw data in memory, produced for each picture taken by Android.
You may have some luck with getSupportedPictureFormats(). If it lists some YUV format, you can use setPictureFormat() and the desired resolution, and counterintuitively you will get the uncompressed high-quality image in the JPEG callback, from which the grayscale (a.k.a. luminance) channel can be easily extracted.
Most devices will only list JPEG as a valid choice, because they perform compression in hardware, on the camera side. Note that the data transfer from camera to application RAM is often the bottleneck; if you can use the Stagefright hardware JPEG decoder, you will actually get the result faster.
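A minimal sketch of that check with the pre-Camera2 android.hardware.Camera API (NV21 is used here just as the example YUV format):

    // Ask the driver which picture formats it supports before
    // requesting an uncompressed one.
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    List<Integer> formats = params.getSupportedPictureFormats();
    if (formats.contains(ImageFormat.NV21)) {
        params.setPictureFormat(ImageFormat.NV21);  // Y plane = grayscale
        camera.setParameters(params);
    }  // else JPEG is probably the only choice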
The biggest problem with using the raw callback is that many developers have trouble getting anything returned at all on many phones.
If you are satisfied with just the YUV array, your camera preview SurfaceView can implement PreviewCallback and you can add the onPreviewFrame method to your class. This function will allow you direct access to the YUV array for every frame. You can fetch it when you choose.
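For example, a sketch of pulling the luminance (grayscale) plane out of the default NV21 preview frames:

    // In NV21, the default preview format, the first width * height
    // bytes of each frame are the Y (grayscale) plane.
    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Size size = camera.getParameters().getPreviewSize();
            byte[] gray = new byte[size.width * size.height];
            System.arraycopy(data, 0, gray, 0, gray.length);
            // 'gray' now holds one byte of luminance per pixel
        }
    });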
EDIT: I should specify that I was assuming you were building a custom camera application in which you extended SurfaceView for a custom camera preview surface. To follow my advice you will need to build a custom camera. If you are trying to do things quickly, though, I suggest building a new bitmap out of the JPEG data and implementing the grayscale conversion yourself.
After a lot of searching and days of experimenting, I haven't found a straightforward solution.
I'm developing an app where the user interacts with a pet on the screen, and I want to let them save that interaction as a video.
Is there any "simple" way to capture the screen of the app itself?
I found a workaround (saving some bitmaps every second and then passing them to an encoder), but it seems too heavy. I would be happy even with a frame rate of 15 fps.
It seems to be possible; there is a similar app that does this, called "Talking Tom".
It really depends on the way you implement your "pet view". Are you drawing on a Canvas? OpenGL ES? A normal Android view hierarchy?
Anyway, there is no magical "recordScreenToVideo()" like one of the comments said.
You need to:
Obtain bitmaps representing your "frames".
This depends on how you implement your view. If you draw yourself (Canvas or OpenGL), then save your raw pixel buffers as frames.
If you use the normal view hierarchy, subclass Android's onDraw and save the "frames" that you get on the canvas (see the sketch after this list). The frequency of the system's calls to onDraw will be no less than the actual "framerate" of the screen. If needed, duplicate frames afterwards to supply a 15 fps video.
Encode your frames. It seems like you already have a mechanism to do that; you just need it to be more efficient.
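For step 1 with a normal view hierarchy, a minimal sketch of rendering a View into an offscreen bitmap (the helper name is illustrative):

    // Draw any View into a Bitmap-backed Canvas to use as a video frame.
    Bitmap captureFrame(View view) {
        Bitmap frame = Bitmap.createBitmap(view.getWidth(), view.getHeight(),
                Bitmap.Config.ARGB_8888);
        view.draw(new Canvas(frame));  // renders the view into 'frame'
        return frame;
    }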
Ways you can optimize encoding:
Cache your bitmaps (frames) and encode afterwards. This will only work if your expected video is relatively short; otherwise you'll run out of storage.
Record only at the frame rate that your app actually generates (depending on the way you draw) and use an encoder parameter to produce a 15 fps video without actually supplying 15 frames per second (see the sketch after this list).
Adjust quality settings to current device. Can be done by performing a hidden CPU cycle test on app startup and defining some thresholds.
Encode only the most relevant portion of the screen.
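For the encoder-parameter trick above, a sketch of declaring 15 fps in a MediaCodec format; this assumes MediaCodec (API 16+), and the resolution and bitrate are arbitrary examples (COLOR_FormatSurface needs API 18+ with an input Surface):

    // Declare 15 fps in the output format so the video plays back at
    // 15 fps even if frames are fed irregularly.
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 640, 480);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2000000);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");  // throws IOException
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);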
Again, it really depends on your implementation: if you can save some "history data" and then convert that to frames without having to do it in real time, that would be best.
For example, "move", "smile", "change color" - or whatever your business logic is, since you didn't elaborate on that. Your "generate movie" function will animate this history data as a frame sequence (without drawing to the screen) and then encode.
Hope that helps
I would like to capture an image with the Android camera, but because the image may contain sensitive data, I don't want it saved to the phone or SD card. Instead, I would like a Base64 string (compressed) that is sent to the server immediately.
In PhoneGap it seems files are saved to various places automatically.
Natively, I was never able to get the image stream: in onJpegPictureTaken() the byte[] parameter was always null.
Can anyone suggest a way?
See Camera.PreviewCallback.onPreviewFrame() and YuvImage.compressToJpeg() to get a byte array you can convert into a bitmap.
Note that YuvImage.compressToJpeg() is only available in SDK 8 or later, I think. For earlier versions you'll need to implement your own YUV decoder; there are several examples around, or I could provide one.
Those two methods will allow you to get a camera picture in memory and never persist it to SD. Beware that bitmaps at most camera preview sizes chew up memory pretty quickly; you'll need to be very careful to recycle them, and probably also scale them down a bit, to do much with them while still fitting inside the native heap restrictions on most devices.
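A sketch of the whole in-memory pipeline (NV21 preview frame to JPEG bytes to Base64 string), never touching disk; android.util.Base64 is likewise API 8+:

    // Compress an NV21 preview frame to JPEG in memory, then
    // Base64-encode it for upload. Nothing is written to storage.
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        YuvImage yuv = new YuvImage(data, ImageFormat.NV21,
                size.width, size.height, null);
        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        yuv.compressToJpeg(new Rect(0, 0, size.width, size.height),
                80, jpeg);  // quality 0-100
        String base64 = Base64.encodeToString(
                jpeg.toByteArray(), Base64.NO_WRAP);
        // upload 'base64', then drop the references so they can be GC'd
    }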
Good luck!