Android: Camera, onSurfaceTextureUpdated, Bitmap.getPixels - framerate drops from 30 to 3

I'm getting quite horrible performance when trying to read the pixels of the camera preview.
The image is around 600x900.
The preview rate is a stable 30 fps on my HTC One.
As soon as I try to get the pixels of the image, the framerate drops below 5!
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) {
    Bitmap bmp = mTextureView.getBitmap(); // this call turns out to be the bottleneck
    int width = bmp.getWidth();
    int height = bmp.getHeight();
    int[] pixels = new int[width * height];
    bmp.getPixels(pixels, 0, width, 0, 0, width, height);
}
The performance is so slow it's not really bearable.
For now my only 'easy' solution is to skip frames to at least keep some visual performance.
But I'd actually like that code to run faster.
I would appreciate any ideas and suggestions; maybe someone has solved this already?
UPDATE
getBitmap: 188.341ms
array: 122ms
getPixels: 12.330ms
recycle: 152ms
It takes almost 190 milliseconds just to get the bitmap! That's the problem.
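For reference, a rough sketch of how such per-step timings can be taken (SystemClock.elapsedRealtime() is Android's monotonic clock; the log tag is arbitrary):

long t0 = SystemClock.elapsedRealtime();
Bitmap bmp = mTextureView.getBitmap();
long t1 = SystemClock.elapsedRealtime();
int[] pixels = new int[bmp.getWidth() * bmp.getHeight()];
long t2 = SystemClock.elapsedRealtime();
bmp.getPixels(pixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
long t3 = SystemClock.elapsedRealtime();
bmp.recycle();
long t4 = SystemClock.elapsedRealtime();
Log.d("Timing", "getBitmap=" + (t1 - t0) + "ms array=" + (t2 - t1)
        + "ms getPixels=" + (t3 - t2) + "ms recycle=" + (t4 - t3) + "ms");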

I dug into this for several hours.
Short answer: I found no way to avoid getBitmap() and increase performance.
The function is known to be slow; I found many similar questions and no results.
However, I found another solution which is about 3 times faster and solves the problem for me.
I keep using the TextureView approach because it gives more freedom in how to display the camera preview (for example, I can display the live preview in a small window with my own aspect ratio without distortion).
But to work with the image data I no longer use onSurfaceTextureUpdated().
Instead, I registered a camera preview callback, which hands me the pixel data I need.
So no getBitmap() anymore, and a lot more speed.
Fast, new code:
Camera.PreviewCallback preview = new Camera.PreviewCallback()
{
    @Override
    public void onPreviewFrame(byte[] data, Camera camera)
    {
        Camera.Parameters parameters = camera.getParameters();
        Camera.Size size = parameters.getPreviewSize();
        // "data" already holds the raw preview pixels (NV21 by default);
        // Image here is the barcode library's image wrapper
        Image img = new Image(size.width, size.height, "Y800");
    }
};

myCamera.setPreviewCallback(preview);
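If the garbage collection caused by a fresh byte[] per frame becomes a problem, the buffered variant of the same callback avoids it. A sketch, assuming the default NV21 preview format (setPreviewCallbackWithBuffer and addCallbackBuffer are the stock android.hardware.Camera APIs):

// Reuse one preview buffer instead of letting the camera allocate
// a new byte[] for every frame. Assumes NV21, the default format.
Camera.Parameters params = myCamera.getParameters();
Camera.Size size = params.getPreviewSize();
byte[] buffer = new byte[size.width * size.height
        * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8];
myCamera.addCallbackBuffer(buffer);
myCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process data ...
        camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});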
Slow:
private int[] surface_pixels = null;
private int surface_width = 0;
private int surface_height = 0;

@Override
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture)
{
    Bitmap bmp = mTextureView.getBitmap();
    int height = bmp.getHeight();
    int width = bmp.getWidth();
    // (re)allocate the pixel buffer only when the preview size changes
    if (surface_pixels == null || width != surface_width || height != surface_height)
    {
        surface_pixels = new int[height * width];
        surface_width = width;
        surface_height = height;
    }
    bmp.getPixels(surface_pixels, 0, width, 0, 0, width, height);
    bmp.recycle();
    Image img = new Image(width, height, "RGB4");
}
I hope this helps some people with the same problem.
If someone finds a way to create a bitmap quickly within onSurfaceTextureUpdated, please respond with a code sample.

Please try:
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    if (isBusy) {
        return; // skip this frame if the previous one is still being processed
    }
    isBusy = true;
    doBigWork();
    isBusy = false;
}
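If doBigWork() is handed off to a worker thread, a plain boolean flag can race between threads. A sketch of the same idea with an AtomicBoolean (the executor is an assumption, not part of the original answer):

private final AtomicBoolean isBusy = new AtomicBoolean(false);

@Override
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    // only start processing if the previous frame is done
    if (!isBusy.compareAndSet(false, true)) {
        return; // still busy: skip this frame
    }
    executor.execute(new Runnable() {
        @Override
        public void run() {
            try {
                doBigWork();
            } finally {
                isBusy.set(false);
            }
        }
    });
}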

Related

DJI mobile SDK Mavic 2 Pro - Live Stream 4k

I'm new here on Stack Overflow and I hope someone can help me.
I developed an application with the DJI Mobile SDK and implemented a live stream. The problem is that the resolution of the live stream is not 4K, and I need 4K. I think the drone provides the secondary stream for the live preview. Is it possible to switch to the primary stream, which has 4K resolution? If so, how can I do that? Or is it simply possible to increase the resolution of the live stream / secondary stream?
Here is my current implementation:
Initialization of surface texture element for live stream preview:
SurfaceTextureListener surfaceTextureListener = new SurfaceTextureListener(getApplicationContext());
this.videoStreamPreviewTtView.setSurfaceTextureListener(surfaceTextureListener);
This is my listener:
public class SurfaceTextureListener implements TextureView.SurfaceTextureListener {

    private final Context context;

    public SurfaceTextureListener(Context context) {
        this.context = context;
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        if (DroneControl.getCodecManager() == null) {
            DroneControl.setCodecManager(new DJICodecManager(this.context, surface, width, height));
            DroneControl.getCodecManager().resetKeyFrame();
            DroneControl.getCodecManager().enabledYuvData(true);
            DroneControl.getCodecManager().setYuvDataCallback(new LiveStreamDataCallback(this.context));
        }
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        if (DroneControl.getCodecManager() != null) {
            DroneControl.getCodecManager().cleanSurface();
            DroneControl.setCodecManager(null);
        }
        return false;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    }
}
And here is my callback function:
public class LiveStreamDataCallback implements Base, DJICodecManager.YuvDataCallback {

    private final Context context;
    private long lastUpdate;

    public LiveStreamDataCallback(Context context) {
        this.context = context;
        this.lastUpdate = System.currentTimeMillis();
    }

    @Override
    public void onYuvDataReceived(MediaFormat format, final ByteBuffer yuvFrame, int dataSize, final int width, final int height) {
        long differenceInMillis = System.currentTimeMillis() - this.lastUpdate;
        if (differenceInMillis > SCREEN_SHOT_PERIOD && yuvFrame != null) {
            this.lastUpdate = System.currentTimeMillis(); // reset the throttle timer
            final byte[] bytes = new byte[dataSize];
            yuvFrame.get(bytes);
            newSaveYuvDataToJPEG(bytes, width, height);
        }
    }

    private void newSaveYuvDataToJPEG(byte[] yuvFrame, int width, int height) {
        if (yuvFrame.length < width * height) {
            return;
        }
        // the incoming frame is planar YUV420 (Y plane, then U, then V);
        // interleave the chroma planes as V/U pairs to get NV21
        int length = width * height;
        byte[] u = new byte[width * height / 4];
        byte[] v = new byte[width * height / 4];
        for (int i = 0; i < u.length; i++) {
            u[i] = yuvFrame[length + i];
            v[i] = yuvFrame[length + u.length + i];
        }
        for (int i = 0; i < u.length; i++) {
            yuvFrame[length + 2 * i] = v[i];
            yuvFrame[length + 2 * i + 1] = u[i];
        }
        screenShot(yuvFrame, width, height);
    }

    private void screenShot(byte[] buf, int width, int height) {
        ByteArrayOutputStream bOutput = new ByteArrayOutputStream();
        YuvImage yuvImage = new YuvImage(buf, ImageFormat.NV21, width, height, null);
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, bOutput);
        insertIntoDB(Base64.getEncoder().encodeToString(bOutput.toByteArray()));
    }

    private void insertIntoDB(String base64EncodedContent) {
        // only a limited number of images is kept in the DB to avoid using too much space
        DatabaseUtil.reduceTableContentToMaxContentIfNecessary(this.context, ScreenShotModel.ScreenShotEntry.TABLE_NAME, MAX_KEEP_COUNT_FOR_LIVE_STREAM_SCREEN_SHOTS);
        Date now = new Date();
        SimpleDateFormat dateFormat = new SimpleDateFormat(DATE_FORMAT_FOR_LOGGING, Locale.GERMANY);
        SQLiteDatabase db = DroneControl.getDbWriteAccess(this.context);
        // create a new map of values, where column names are the keys
        ContentValues values = new ContentValues();
        values.put(ScreenShotModel.ScreenShotEntry.COLUMN_NAME_DATA, base64EncodedContent);
        values.put(ScreenShotModel.ScreenShotEntry.COLUMN_NAME_CREATED, dateFormat.format(now));
        db.insert(ScreenShotModel.ScreenShotEntry.TABLE_NAME, null, values);
    }
}
Can't be done.
1080p is the maximum; OcuSync can't do any higher than that. The bandwidth isn't high enough, and the hardware in the drone doesn't support it.
I don't know any way to do what you ask.
The only thing you can do is take a still image and download it. That will be slow, of course, but it can be used for image recognition, for example. You don't say what you are going to use it for, but since you seem to be looking at frames, that may be a (slow) solution.
From DJI web:
OcuSync
Part of the Lightbridge family, DJI’s newly developed OcuSync transmission system performs far better than Wi-Fi transmission at all transmission speeds. OcuSync also uses more effective digital compression and channel transmission technologies, allowing it to transmit HD video reliably even in environments with strong radio interference. Compared to traditional analog transmission, OcuSync can transmit video at 720p and 1080p – equivalent to a 4-10 times better quality, without a color cast, static interference, flickering or other problems associated with analog transmission. Even when using the same amount of radio transmission power, OcuSync transmits further than analog at 4.1mi (7km)
Thank you very much for your answer and your help!
Actually, I tried capturing 4K images first and transferring them. As you already said, I want to use the frames for object detection and I need the best performance I can get. Capturing images and transferring them is very time-consuming: I need approximately 1.5 seconds to capture and save the image, 3 more seconds to transfer it and, the biggest surprise for me, reading the SD card to find the newest image takes almost 11 seconds (I tried different SD cards, up to class 10). In sum, the whole process takes 15.5 seconds per image, which is way too long for my purpose…
Then I thought I could do it with the live stream. The whole process with the live stream takes 500 milliseconds, which is an acceptable value for my project. It is very sobering that this seems to be impossible…
Is there perhaps another way to transfer images from the drone very quickly?
I get stills from the live stream, but in another way.
I use the FPV widget (the live stream is shown on the device) and read the bitmap directly from the widget.
This way you don't have to handle so much data in Java.
I even do it from Python, and with some quirks I got it into OpenCV without any marshalling. I can read out 100 frames/second in Python, so Java should be at least that fast.
You might reconsider your approach; try to avoid repacking the image data.
It's still only 1080p, but the quality is very good, I must say.
<dji.ux.widget.FPVWidget
    android:id="@+id/fpv_widget"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:layout_centerInParent="true"
    custom:sourceCameraNameVisibility="true" />

public Bitmap getFrameBitmap() {
    fpvWidget = findViewById(R.id.fpv_widget);
    return fpvWidget.getBitmap();
}
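For completeness, a sketch of how the widget could be polled at a fixed rate; the 100 ms interval and the processFrame() consumer are assumptions, not DJI APIs:

// Poll the FPV widget roughly 10 times per second on the main thread.
private final Handler handler = new Handler(Looper.getMainLooper());
private final Runnable grabFrame = new Runnable() {
    @Override
    public void run() {
        Bitmap frame = getFrameBitmap(); // from the snippet above
        if (frame != null) {
            processFrame(frame); // hypothetical consumer
        }
        handler.postDelayed(this, 100);
    }
};
// start polling with: handler.post(grabFrame);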

How to handle image capture with MediaProjection on orientation change?

I wrote a test application for capturing images with the MediaProjection class.
imageReader = ImageReader.newInstance(currentWidth, currentHeight, PixelFormat.RGBA_8888, 2);
imageReader.setOnImageAvailableListener(this, null);
virtualDisplay = mediaProjection.createVirtualDisplay("captureDisplay",
        currentWidth, currentHeight, Screen.getDensity(mContext),
        DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY |
        DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC |
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR |
        DisplayManager.VIRTUAL_DISPLAY_FLAG_SECURE |
        DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION,
        imageReader.getSurface(), null, null);
// the DisplayManager flags above are trials only
In the onImageAvailable(ImageReader reader) method I tried to get the image as follows:
Bitmap bitmap;
Image mImage = null;
try {
    mImage = mImageReader.acquireLatestImage();
    if (mImage != null) {
        final Image.Plane[] planes = mImage.getPlanes();
        final ByteBuffer buffer = planes[0].getBuffer();
        int pixelStride = planes[0].getPixelStride();
        int rowStride = planes[0].getRowStride();
        // each row may be padded beyond width * pixelStride
        int rowPadding = rowStride - pixelStride * mImage.getWidth();
        bitmap = Bitmap.createBitmap(mImage.getWidth() + rowPadding / pixelStride, mImage.getHeight(), Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(buffer);
        ByteArrayOutputStream stream = new ByteArrayOutputStream();
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
        byte[] rawData = lastImageAcquiredRaw = stream.toByteArray();
        if (rawData != null) {
            Bitmap fullScreen = BitmapFactory.decodeByteArray(rawData, 0, rawData.length);
            savebitmap(fullScreen, "image", i); // saving bitmap to storage
        }
    }
} finally {
    if (mImage != null) {
        mImage.close();
    }
}
Up to this point everything is OK, and I get a correct image while my app is in landscape orientation. The problem comes on orientation change: I no longer get proper images, and sometimes, after changing back to landscape, it still doesn't capture properly.
I went through the ImageReader.java class; nothing is mentioned about handling orientation changes.
I tried using acquireNextImageNoThrowISE() and acquireNextImage(), but with no luck.
Has anyone managed to get a proper image across orientation changes?
Please help me get a proper image.
I know this post is a bit old, but for the last couple of days I have been struggling with the same thing. I am also using the MediaProjection API, a VirtualDisplay and an ImageReader's surface. The solution I am going to demonstrate is not 100% foolproof, so any useful suggestions are more than welcome.
Step one: add an OrientationChangeCallback after you create Virtual Display
OrientationChangeCallback orientationChangeCallback = new OrientationChangeCallback(context);
if (orientationChangeCallback.canDetectOrientation()) {
    orientationChangeCallback.enable();
}
Step two: define the orientation callback, which in essence restarts the capture
private class OrientationChangeCallback extends OrientationEventListener {

    OrientationChangeCallback(Context context) {
        super(context);
    }

    @Override
    public void onOrientationChanged(int orientation) {
        final int rotation = mDisplay.getRotation();
        if (rotation != mRotation) {
            mRotation = rotation;
            // tear down the old capture pipeline and rebuild it for the new rotation
            if (mVirtualDisplay != null) {
                mVirtualDisplay.release();
            }
            if (mImageReader != null) {
                mImageReader.setOnImageAvailableListener(null, null);
            }
            if (!mProjectionStopped) {
                createVirtualDisplay();
            }
        }
    }
}
Note that I am holding references to the ImageReader and VirtualDisplay, and I also have a volatile int mRotation to check whether the screen actually rotated.
The problem I have is that on some devices (Nexus 5X included) the image can become corrupt after a certain number of rotations (usually 10-20). If I rotate the device one more time, the image is captured correctly. On fast devices (Google Pixel) I cannot reproduce the corrupt-image issue, so I am guessing it might be a synchronisation issue.
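For reference, the createVirtualDisplay() helper called above could look roughly like this; mDisplay, mDensity, mHandler and mImageListener are assumed fields of the surrounding class, not shown in the original answer:

private void createVirtualDisplay() {
    // Re-query the display size: rotation swaps width and height.
    Point size = new Point();
    mDisplay.getRealSize(size);
    mImageReader = ImageReader.newInstance(size.x, size.y,
            PixelFormat.RGBA_8888, 2);
    mImageReader.setOnImageAvailableListener(mImageListener, mHandler);
    mVirtualDisplay = mMediaProjection.createVirtualDisplay("capture",
            size.x, size.y, mDensity,
            DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
            mImageReader.getSurface(), null, mHandler);
}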

ImageReader in Android takes too long to make one frame available

I am developing an Android app in which I'm using an ImageReader to get images from a Surface. The Surface's data comes from a VirtualDisplay created for screen recording on Lollipop. The problem is that images become available at a very low rate (about 1 fps, judging by how often the OnImageAvailableListener.onImageAvailable() function is invoked). When I instead used a media encoder with this surface as its input, the output video looked smooth at 30 fps.
Is there any suggestion for reading the surface's image data at a higher fps?
mImageReader = ImageReader.newInstance(width, height, PixelFormat.RGBA_8888, 2);
mImageReader.setOnImageAvailableListener(onImageListener, null);
mVirtualDisplay = mMediaProjection.createVirtualDisplay("VideoCap",
        mDisplayWidth, mDisplayHeight, mScreenDensity,
        DisplayManager.VIRTUAL_DISPLAY_FLAG_AUTO_MIRROR,
        mImageReader.getSurface(), null /*Callbacks*/, null /*Handler*/);
//
//
OnImageAvailableListener onImageListener = new OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        if (reader != mImageReader)
            return;
        Image image = reader.acquireLatestImage();
        if (image == null)
            return;
        // do some stuff
        image.close();
    }
};
FPS increased dramatically when I switched to another format:
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(), mVideoSize.getHeight(), ImageFormat.YUV_420_888, 2);
Hope this will help you.
I found that in addition to selecting the YUV format, I also had to select the smallest image size available for the device in order to get a significant speed increase.
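For illustration, a sketch of choosing a reduced capture size for the virtual display; the divisor of 2 is an arbitrary assumption, and the YUV format is the one from the answer above:

// Capture at a fraction of the native display size to cut per-frame cost.
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getRealMetrics(metrics);
int captureWidth = metrics.widthPixels / 2;
int captureHeight = metrics.heightPixels / 2;
mImageReader = ImageReader.newInstance(captureWidth, captureHeight,
        ImageFormat.YUV_420_888, 2);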

How come a camera preview in a TextureView is much more fuzzy than in a SurfaceView?

I have found that when using a TextureView instead of a SurfaceView as a camera preview (both hooked up to the camera via a MediaRecorder), the preview is much more fuzzy.
What I mean by fuzzy is that in a TextureView you can see the pixels, especially when zooming. That is not the case when using a SurfaceView. Why is that?
UPD:
Sorry, but after rewriting my messy code, it turned out the key is that a too-small preview size caused the "fuzziness", not the (struck-out) reason below. So you should set a reasonable preview size; auto-focus is still suggested, though...
Size size = getBestSupportSize(parameters.getSupportedPreviewSizes(), width, height);
parameters.setPreviewSize(size.width, size.height);
As for the method getBestSupportSize(): how to get the best size depends on your project's needs. In this case it is as large as the screen width and the ratio is 4:3; yours may be different. I calculate the ratio by dividing width/height.
private Size getBestSupportSize(List<Size> sizes, int width, int height) {
    Size bestsize = sizes.get(0);
    int screenWidth = getResources().getDisplayMetrics().widthPixels;
    int dt = Integer.MAX_VALUE;
    for (int i = sizes.size() - 1; i >= 0; i--) {
        Log.d(TAG, "-index : " + i);
        Size s = sizes.get(i);
        if (s.width * 3.0f / 4 == s.height) { // only consider 4:3 sizes
            int newDT = Math.abs(screenWidth - s.width);
            if (newDT < dt && screenWidth < s.width) {
                dt = newDT;
                bestsize = s;
            }
        }
    }
    return bestsize; // note: if no 4:3 size is supported, this returns sizes[0] by default
}
So this "fuzziness" was caused by a small previewSize calcualate a best size for the camera using this getSupportedPreviewSizes() method
And I will keep the autoFocus snippet below, strikeout though, FYR if is needed.
Well i got the solution for this "fuzzy" problem,and my case is just using TextureView andsurfaceTexture to take a pic instead of old surfaceView withsurfaceHolderway.
The key is set this mCamera.autofocus(), why the pic is"fuzzy" is bacause we lack of this autoFocus setting.
like below :
mCamera.setPreviewTexture(surface);
// enable auto-focus while the camera is moving
mCamera.setAutoFocusMoveCallback(new AutoFocusMoveCallback() {
    @Override
    public void onAutoFocusMoving(boolean start, Camera camera) {
        if (start) { // true means the camera is moving
            mCamera.autoFocus(myAutoFocus);
        }
    }
});
mCamera.startPreview();
The autoFocusCallback like this:
AutoFocusCallback myAutoFocus = new AutoFocusCallback() {
    @Override
    public void onAutoFocus(boolean success, Camera camera) {
    }
};

Camera preview processing on Android

I'm making a line follower for my robot on Android (to learn Java/Android programming). Currently I'm facing an image-processing problem: the camera preview returns an image in a format called YUV, which I want to threshold in order to know where the line is. How would one do that?
As of now I've succeeded in getting something: I can read data from the camera preview and, by some miracle, even tell whether the light intensity is over or under a certain value at a certain area of the screen. My goal is to draw the robot's path on an overlay over the camera preview. That also works to some extent, but the problem is the YUV management.
As you can see, not only is the dark area drawn sideways, it also repeats itself 4 times, and the preview image is stretched. I cannot figure out how to fix these problems.
Here's the relevant part of code:
public void surfaceCreated(SurfaceHolder arg0) {
    // camera setup
    mCamera = Camera.open();
    Camera.Parameters parameters = mCamera.getParameters();
    List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
    for (int i = 0; i < sizes.size(); i++) {
        Log.i("CS", i + " - width: " + sizes.get(i).width + " height: " + sizes.get(i).height + " size: " + (sizes.get(i).width * sizes.get(i).height));
    }
    // change preview size
    final Camera.Size cs = sizes.get(8);
    parameters.setPreviewSize(cs.width, cs.height);
    // initialize image data array
    imgData = new int[cs.width * cs.height];
    // make picture gray scale
    parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    mCamera.setParameters(parameters);
    // change display size
    LayoutParams params = (LayoutParams) mSurfaceView.getLayoutParams();
    params.height = (int) (mSurfaceView.getWidth() * cs.height / cs.width);
    mSurfaceView.setLayoutParams(params);
    LayoutParams overlayParams = (LayoutParams) swOverlay.getLayoutParams();
    overlayParams.width = mSurfaceView.getWidth();
    overlayParams.height = mSurfaceView.getHeight();
    swOverlay.setLayoutParams(overlayParams);
    try {
        mCamera.setPreviewDisplay(mSurfaceHolder);
        mCamera.setDisplayOrientation(90);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
        mCamera.stopPreview();
        mCamera.release();
    }
    // callback every time a new frame is available
    mCamera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            // threshold the luminance plane of the camera preview
            int pixel, pixVal, frameSize = cs.width * cs.height;
            for (int i = 0; i < frameSize; i++) {
                pixel = (0xff & ((int) data[i])) - 16;
                if (pixel < threshold) {
                    pixVal = 0;
                } else {
                    pixVal = 1;
                }
                imgData[i] = pixVal;
            }
            int cp = imgData[(int) (cs.width * (0.5 + (cs.height / 2)))];
            //Log.i("CAMERA", "Center pixel RGB: " + cp);
            debug.setText("Center pixel: " + cp);
            // process preview image data
            Paint paint = new Paint();
            paint.setColor(Color.YELLOW);
            int start, finish, last;
            start = finish = last = -1;
            float x_ratio = mSurfaceView.getWidth() / cs.width;
            float y_ratio = mSurfaceView.getHeight() / cs.height;
            // display calculated path on overlay using canvas
            Canvas overlayCanvas = overlayHolder.lockCanvas();
            overlayCanvas.drawColor(0, Mode.CLEAR);
            // start by finding the tape from bottom of the screen
            for (int y = cs.height; y > 0; y--) {
                for (int x = 0; x < cs.width; x++) {
                    pixel = imgData[y * cs.height + x];
                    if (pixel == 1 && last == 0 && start == -1) {
                        start = x;
                    } else if (pixel == 0 && last == 1 && finish == -1) {
                        finish = x;
                        break;
                    }
                    last = pixel;
                }
                //overlayCanvas.drawLine(start * x_ratio, y * y_ratio, finish * x_ratio, y * y_ratio, paint);
                //start = finish = last = -1;
            }
            overlayHolder.unlockCanvasAndPost(overlayCanvas);
        }
    });
}
This code sometimes generates an error when quitting the application, due to some method being called after release, but that is the least of my problems.
UPDATE:
Now that the orientation problem is fixed (it was the CCD sensor orientation), I'm still facing the repetition problem, which is probably related to my YUV data management...
Your surface and camera management looks correct, but I would double-check that the camera actually accepted the preview size settings (some camera implementations silently reject some settings).
As you are working in portrait mode, keep in mind that the camera does not care about phone orientation: its coordinate origin is determined by the CCD chip and is always the top-right corner, with the scan direction running from top to bottom and right to left, which is quite different from your overlay canvas. (If you were in landscape mode, everything would line up.) This is certainly a source of the odd drawing results.
Your thresholding is a bit naive and not very useful in real life; I would suggest adaptive thresholding. In our javaocr project (pure Java, with Android demos) we implemented efficient Sauvola binarisation (see the demos):
http://sourceforge.net/projects/javaocr/
Binarisation performance can be improved by working only on single image rows (patches welcome).
The issue with the UV part of the image is easy: the default format is NV21, so the luminance plane comes first as a plain byte stream, and you do not need the UV part of the image at all (look into the demos above).
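To make the NV21 point concrete, a minimal sketch of thresholding only the luminance plane; note the row-major y * width + x indexing (imgData, cs and threshold are the names from the question):

public void onPreviewFrame(byte[] data, Camera camera) {
    // NV21 stores the full-resolution Y (luminance) plane first,
    // one byte per pixel, row-major; the UV bytes after it can be ignored.
    int width = cs.width, height = cs.height;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int lum = 0xff & data[y * width + x]; // row-major: width, not height
            imgData[y * width + x] = (lum < threshold) ? 0 : 1;
        }
    }
}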
