Android Take Screenshot of SurfaceView Shows Black Screen

I am attempting to take a screenshot of my game through code and share it through an Intent. I am able to do both of those things, but the screenshot always appears black. Here is the code related to sharing the screenshot:
View view = MainActivity.getView();
view.setDrawingCacheEnabled(true);
Bitmap screen = Bitmap.createBitmap(view.getDrawingCache(true));
.. save Bitmap
This is in the MainActivity:
view = new GameView(this);
view.setLayoutParams(new RelativeLayout.LayoutParams(
RelativeLayout.LayoutParams.FILL_PARENT,
RelativeLayout.LayoutParams.FILL_PARENT));
public static SurfaceView getView() {
return view;
}
And the View itself:
public class GameView extends SurfaceView implements SurfaceHolder.Callback {
private static SurfaceHolder surfaceHolder;
...etc
And this is how I am drawing everything:
Canvas canvas = surfaceHolder.lockCanvas(null);
if (canvas != null) {
Game.draw(canvas);
...
OK, based on some answers, I have constructed this:
public static void share() {
Bitmap screen = GameView.SavePixels(0, 0, Screen.width, Screen.height);
Calendar c = Calendar.getInstance();
Date d = c.getTime();
String path = Images.Media.insertImage(
Game.context.getContentResolver(), screen, "screenShotBJ" + d
+ ".png", null);
System.out.println(path + " PATH");
Uri screenshotUri = Uri.parse(path);
final Intent emailIntent = new Intent(
android.content.Intent.ACTION_SEND);
emailIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
emailIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
emailIntent.setType("image/png");
Game.context.startActivity(Intent.createChooser(emailIntent,
"Share High Score:"));
}
The GameView contains the following method:
public static Bitmap SavePixels(int x, int y, int w, int h) {
EGL10 egl = (EGL10) EGLContext.getEGL();
GL10 gl = (GL10) egl.eglGetCurrentContext().getGL();
int b[] = new int[w * (y + h)];
int bt[] = new int[w * h];
IntBuffer ib = IntBuffer.wrap(b);
ib.position(0);
gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
for (int i = 0, k = 0; i < h; i++, k++) {
for (int j = 0; j < w; j++) {
int pix = b[i * w + j];
int pb = (pix >> 16) & 0xff;
int pr = (pix << 16) & 0x00ff0000;
int pix1 = (pix & 0xff00ff00) | pr | pb;
bt[(h - k - 1) * w + j] = pix1;
}
}
Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
return sb;
}
The screenshot is still black. Is there something wrong with the way I am saving it, perhaps?
I have attempted several different methods to take the screenshot, but none of them worked.
The one shown in the code above was the most commonly suggested one, but it does not seem to work.
Is this an issue with using SurfaceView? If so, why does view.getDrawingCache(true) even exist if I can't use it, and how do I fix this?
My code:
public static void share() {
// GIVES BLACK SCREENSHOT:
Calendar c = Calendar.getInstance();
Date d = c.getTime();
Game.update();
Bitmap.Config conf = Bitmap.Config.RGB_565;
Bitmap image = Bitmap.createBitmap(Screen.width, Screen.height, conf);
Canvas canvas = GameThread.surfaceHolder.lockCanvas(null);
canvas.setBitmap(image);
Paint backgroundPaint = new Paint();
backgroundPaint.setARGB(255, 40, 40, 40);
canvas.drawRect(0, 0, canvas.getWidth(), canvas.getHeight(),
backgroundPaint);
Game.draw(canvas);
Bitmap screen = Bitmap.createBitmap(image, 0, 0, Screen.width,
Screen.height);
canvas.setBitmap(null);
GameThread.surfaceHolder.unlockCanvasAndPost(canvas);
String path = Images.Media.insertImage(
Game.context.getContentResolver(), screen, "screenShotBJ" + d
+ ".png", null);
System.out.println(path + " PATH");
Uri screenshotUri = Uri.parse(path);
final Intent emailIntent = new Intent(
android.content.Intent.ACTION_SEND);
emailIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
emailIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
emailIntent.setType("image/png");
Game.context.startActivity(Intent.createChooser(emailIntent,
"Share High Score:"));
}
Thank you.

There is a great deal of confusion about this, and a few correct answers.
Here's the deal:
A SurfaceView has two parts, the Surface and the View. The Surface is on a completely separate layer from all of the View UI elements. The getDrawingCache() approach works on the View layer only, so it doesn't capture anything on the Surface.
The buffer queue has a producer-consumer API, and it can have only one producer. Canvas is one producer, GLES is another. You can't draw with Canvas and read pixels with GLES. (Technically, you could if the Canvas were using GLES and the correct EGL context was current when you went to read the pixels, but that's not guaranteed. Canvas rendering to a Surface is not accelerated in any released version of Android, so right now there's no hope of it working.)
(Not relevant for your case, but I'll mention it for completeness:) A Surface is not a frame buffer, it is a queue of buffers. When you submit a buffer with GLES, it is gone, and you can no longer read from it. So if you were rendering with GLES and capturing with GLES, you would need to read the pixels back before calling eglSwapBuffers().
With Canvas rendering, the easiest way to "capture" the Surface contents is to simply draw it twice. Create a screen-sized Bitmap, create a Canvas from the Bitmap, and pass it to your draw() function.
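For example, a minimal sketch of this "draw twice" idea in Java, assuming a static Game.draw(Canvas) like the one in the question (names are illustrative, not a drop-in):
// Render the same frame into an offscreen Bitmap instead of the Surface.
Bitmap capture = Bitmap.createBitmap(view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
Canvas bitmapCanvas = new Canvas(capture);
Game.draw(bitmapCanvas); // the exact same draw call you already use for the Surface
// "capture" now holds the frame; save or share it as needed.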
With GLES rendering, you can use glReadPixels() before the buffer swap to grab the pixels. There's a (less-expensive than the code in the question) implementation of the grab code in Grafika; see saveFrame() in EglSurfaceBase.
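A hedged sketch of that read-back, run on the GL thread just before eglSwapBuffers() (width and height are assumed to be the surface dimensions):
// glReadPixels returns the image bottom-up; flip it vertically afterwards if needed.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
Bitmap frame = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
pixels.rewind();
frame.copyPixelsFromBuffer(pixels);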
If you were sending video directly to a Surface (via MediaPlayer) there would be no way to capture the frames, because your app never has access to them -- they go directly from mediaserver to the compositor (SurfaceFlinger). You can, however, route the incoming frames through a SurfaceTexture, and render them twice from your app, once for display and once for capture. See this question for more info.
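A rough sketch of that routing, assuming you have already created a GLES texture name textureId on your GL thread:
SurfaceTexture surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(st -> {
    // call st.updateTexImage() on the GL thread, then render the frame twice
});
mediaPlayer.setSurface(new Surface(surfaceTexture)); // frames now flow through your texture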
One alternative is to replace the SurfaceView with a TextureView, which can be drawn on like any other Surface. You can then use one of the getBitmap() calls to capture a frame. TextureView is less efficient than SurfaceView, so this is not recommended for all situations, but it's straightforward to do.
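With a TextureView the capture itself is a single call; a minimal sketch, assuming the view has already rendered at least one frame:
// Returns null if the TextureView is not attached or has no content yet.
Bitmap frame = textureView.getBitmap(textureView.getWidth(), textureView.getHeight());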
If you were hoping to get a composite screen shot containing both the Surface contents and the View UI contents, you will need to capture the Canvas as above, capture the View with the usual drawing cache trick, and then composite the two manually. Note this won't pick up the system parts (status bar, nav bar).
Update: on Lollipop and later (API 21+) you can use the MediaProjection class to capture the entire screen with a virtual display. There are some trade-offs with this approach, e.g. you're capturing the rendered screen, not the frame that was sent to the Surface, so what you get may have been up- or down-scaled to fit the window. In addition, this approach involves an Activity switch, since you have to create an intent (by calling createScreenCaptureIntent on the MediaProjectionManager object) and wait for its result.
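A condensed sketch of that handshake (REQUEST_SCREENSHOT is an arbitrary request code of your own):
MediaProjectionManager mpm = (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mpm.createScreenCaptureIntent(), REQUEST_SCREENSHOT);
// later, in onActivityResult(), once the user has granted permission:
MediaProjection projection = mpm.getMediaProjection(resultCode, data);
// projection.createVirtualDisplay(...) can then render the screen into a Surface you supply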
If you want to learn more about how all this stuff works, see the Android System-Level Graphics Architecture doc.

I know it's a late reply, but for those who face the same problem:
we can use PixelCopy to fetch the snapshot. It's available in API level 24 and above.
PixelCopy.request(surfaceViewObject, bitmapDest, listener, new Handler());
where
surfaceViewObject is the SurfaceView object,
bitmapDest is the Bitmap object where the image will be saved (it can't be null), and
listener is an OnPixelCopyFinishedListener.
For more info, refer to https://developer.android.com/reference/android/view/PixelCopy

Update 2020: view.setDrawingCacheEnabled(true) is deprecated as of API 28.
If you are using a normal View, you can create a Canvas backed by a Bitmap, ask the view to draw itself into that Canvas, and return the Bitmap:
/**
 * Draw the View into a Bitmap-backed Canvas and return the Bitmap.
 */
fun getBitmapFromView(view: View): Bitmap {
    val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    view.draw(canvas)
    return bitmap
}
Or you can fill the canvas with a default color before letting the view draw into it:
/**
 * Draw the View into a Bitmap-backed Canvas, pre-filled with a default color.
 */
fun getBitmapFromView(view: View, defaultColor: Int): Bitmap {
    val bitmap = Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
    val canvas = Canvas(bitmap)
    canvas.drawColor(defaultColor)
    view.draw(canvas)
    return bitmap
}
The above approach will not work for a SurfaceView, though; it will be drawn as a hole in the screenshot.
For a SurfaceView, since API 24 you need to use PixelCopy:
/**
 * Use PixelCopy to copy a SurfaceView/VideoView into a Bitmap.
 */
fun usePixelCopy(videoView: SurfaceView, callback: (Bitmap?) -> Unit) {
    val bitmap: Bitmap = Bitmap.createBitmap(
        videoView.width,
        videoView.height,
        Bitmap.Config.ARGB_8888
    )
    try {
        // Create a handler thread to offload the processing of the image.
        val handlerThread = HandlerThread("PixelCopier")
        handlerThread.start()
        PixelCopy.request(
            videoView, bitmap,
            PixelCopy.OnPixelCopyFinishedListener { copyResult ->
                if (copyResult == PixelCopy.SUCCESS) {
                    callback(bitmap)
                }
                handlerThread.quitSafely()
            },
            Handler(handlerThread.looper)
        )
    } catch (e: IllegalArgumentException) {
        // PixelCopy.request may throw IllegalArgumentException; make sure to handle it.
        callback(null)
        e.printStackTrace()
    }
}
This approach can take a screenshot of any subclass of SurfaceView, e.g. VideoView:
Screenshot.usePixelCopy(videoView) { bitmap: Bitmap? ->
processBitMap(bitmap)
}

Here is a complete method to take a screenshot of a SurfaceView using PixelCopy. It requires API 24 (Android N).
@RequiresApi(api = Build.VERSION_CODES.N)
private void capturePicture() {
    Bitmap bmp = Bitmap.createBitmap(surfaceView.getWidth(), surfaceView.getHeight(), Bitmap.Config.ARGB_8888);
    PixelCopy.request(surfaceView, bmp, copyResult -> {
        imageView.setImageBitmap(bmp); // "imageView" is the ImageView that shows the result
    }, new Handler(Looper.getMainLooper()));
}

This is because SurfaceView uses an OpenGL thread for drawing and draws directly into a hardware buffer. You have to use glReadPixels (and probably a GLWrapper).
See the thread: Android OpenGL Screenshot

Related

How to draw objects on TextureView Camera stream preview and record the stream with objects?

I need help with an application I am working on. The application has to have a custom camera interface to record a video with audio, and it has to add some objects in real time on the TextureView canvas. The old Camera API is deprecated, so I have to use the Camera2 API to render the live preview on a TextureView. My goal is to draw some objects on top of the TextureView canvas, which could be some text/jpg/gif, while the camera stream renders in the background, and to be able to record the video with both my overlay canvas content and the camera feed.
The problem is that I can draw custom content in a transparent overlay view, but that is just for the user's viewing purposes. I have been researching this for a few days, but I am not able to find the right approach for my purpose.
I tried the following code after calling the openCamera() method, but then I just see the rectangle drawn, not the camera preview:
Canvas canvas = mTextureView.lockCanvas();
Paint myPaint = new Paint();
myPaint.setColor(Color.WHITE);
myPaint.setStrokeWidth(10);
canvas.drawRect(100, 100, 300, 300, myPaint);
mTextureView.unlockCanvasAndPost(canvas);
I also tried a custom TextureView class and overrode the onDrawForeground(Canvas canvas) method, but it doesn't work.
The onDraw() method in the TextureView class is final, so I am not able to do anything at this point except stream the camera feed:
/**
 * Subclasses of TextureView cannot do their own rendering
 * with the {@link Canvas} object.
 *
 * @param canvas The Canvas to which the View is rendered.
 */
@Override
protected final void onDraw(Canvas canvas) {
}
In short, I want user to be able to record video through my camera app with some props here and there.
Modifying a video in real time is a high-processor and hence high-battery-overhead operation. I am sure you know this, but it is worth saying that if you can add your modifications on the server side, perhaps by sending the stream along with a timestamped set of text overlays to the server, you will have more horsepower available server-side.
The following code will add text and an image to a still picture or frame captured by Camera2 on Android. I have not used it with video, so I can't comment on speed or whether it is practical for a real-time video stream; it wasn't optimised for that, but it should be a starting point for you:
// Copy the captured image into a ByteBuffer
ByteBuffer buffer = mCameraImage.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
Log.d(TAG,"ImageSaver bytes.length: " + bytes.length);
buffer.get(bytes);
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inMutable = true;
Bitmap cameraBitmap = BitmapFactory.decodeByteArray(bytes,0,bytes.length, opt);
if (cameraBitmap == null) {
Log.d(TAG,"ImageSaver cameraBitmap is null");
return;
} else {
camImgBitmap = cameraBitmap;
}
//Modify captured picture by drawing on canvas
Canvas camImgCanvas = new Canvas(camImgBitmap);
//Draw an image in the middle
Drawable d = ContextCompat.getDrawable(this, R.drawable.image_to_add);
int bitMapWidthCenter = camImgBitmap.getWidth()/2;
int bitMapheightCenter = camImgBitmap.getHeight()/2;
int imageToDrawSize = camImgBitmap.getWidth()/10;
int rTop = bitMapheightCenter - imageToDrawSize;
int rLeft = bitMapWidthCenter - imageToDrawSize;
int rRight = bitMapWidthCenter + imageToDrawSize;
int rBot = bitMapheightCenter + imageToDrawSize;
d.setBounds(rLeft, rTop, rRight, rBot);
d.draw(camImgCanvas);
//Now Draw in some text
Paint paint = new Paint();
paint.setColor(Color.GREEN);
int textSize = camImgBitmap.getHeight()/20;
int textPadding = 40;
paint.setTextSize(textSize);
camImgCanvas.drawText("Name: " + text1, textPadding, (camImgBitmap.getHeight() - (textSize * 2) ) - textPadding, paint);
camImgCanvas.drawText("Time: " + text2 + " degrees", textPadding, (camImgBitmap.getHeight() - textSize) - textPadding, paint);
Likely the most performant option is to pipe the camera feed straight into the GPU, draw on top of it there, and from there render to the display and a video encoder directly.
This is what many video chat apps do, for example, for any effects.
You can use a SurfaceTexture to connect camera2 to EGL, and then you can render the preview onto a quad, and then your additions on top.
Then you can render to a screen buffer (GLSurfaceView for example), and to a separate EGLImage from a MediaRecorder/MediaCodec Surface.
There's a lot of code involved there, and a lot of scaffolding for EGL setup, so it's hard to point to any simple examples.

Android Camera2 API Showing Processed Preview Image

The new Camera2 API is very different from the old one. Showing the manipulated camera frames to the user is the part of the pipeline that confuses me. I know there is a very good explanation in Camera preview image data processing with Android L and Camera2 API, but showing the frames is still not clear. My question is: what is the way to show frames on screen that come from an ImageReader's callback function after some processing, while preserving efficiency and speed in the Camera2 API pipeline?
Example flow:
camera.addTarget(imageReader.getSurface()) -> do some processing in the ImageReader's callback -> (show that processed image on screen?)
Workaround idea: sending bitmaps to an ImageView every time a new frame is processed.
Edit after clarification of the question; original answer at bottom
Depends on where you're doing your processing.
If you're using RenderScript, you can connect a Surface from a SurfaceView or a TextureView to an Allocation (with setSurface), and then write your processed output to that Allocation and send it out with Allocation.ioSend(). The HDR Viewfinder demo uses this approach.
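A hedged sketch of that Allocation setup, assuming an existing RenderScript context rs and an output Surface obtained from your view:
Type rgba = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
Allocation out = Allocation.createTyped(rs, rgba, Allocation.USAGE_SCRIPT | Allocation.USAGE_IO_OUTPUT);
out.setSurface(surface); // connect the Allocation to the view's Surface
// ... run your script with "out" as the output allocation, then per frame:
out.ioSend(); // pushes the processed frame to the Surface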
If you're doing EGL shader-based processing, you can connect a Surface to an EGLSurface with eglCreateWindowSurface, with the Surface as the native_window argument. Then you can render your final output to that EGLSurface and when you call eglSwapBuffers, the buffer will be sent to the screen.
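A minimal EGL14 sketch of that connection (eglDisplay, eglConfig and eglContext are assumed to be already initialized):
int[] attribs = { EGL14.EGL_NONE };
EGLSurface eglSurface = EGL14.eglCreateWindowSurface(eglDisplay, eglConfig, surface, attribs, 0);
EGL14.eglMakeCurrent(eglDisplay, eglSurface, eglSurface, eglContext);
// ... render the processed frame with GLES, then:
EGL14.eglSwapBuffers(eglDisplay, eglSurface); // hands the buffer to the consumer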
If you're doing native processing, you can use the NDK ANativeWindow methods to write to a Surface you pass from Java and convert to an ANativeWindow.
If you're doing Java-level processing, that's really slow and you probably don't want to; but you can use the new Android M ImageWriter class, or upload a texture to EGL every frame.
Or, as you say, draw to an ImageView every frame, but that'll be slow.
Original answer:
If you are capturing JPEG images, you can simply copy the contents of the ByteBuffer from Image.getPlanes()[0].getBuffer() into a byte[], and then use BitmapFactory.decodeByteArray to convert it to a Bitmap.
If you are capturing YUV_420_888 images, then you need to write your own conversion code from the 3-plane YCbCr 4:2:0 format to something you can display, such as a int[] of RGB values to create a Bitmap from; unfortunately there's not yet a convenient API for this.
If you are capturing RAW_SENSOR images (Bayer-pattern unprocessed sensor data), then you need to do a whole lot of image processing or just save a DNG.
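For the YUV_420_888 case mentioned above, a minimal CPU-side conversion sketch that respects the per-plane strides might look like this (slow, for illustration only; the BT.601 coefficients are an assumption):
static Bitmap yuvImageToBitmap(Image image) {
    int w = image.getWidth(), h = image.getHeight();
    Image.Plane yP = image.getPlanes()[0], uP = image.getPlanes()[1], vP = image.getPlanes()[2];
    ByteBuffer yB = yP.getBuffer(), uB = uP.getBuffer(), vB = vP.getBuffer();
    int[] argb = new int[w * h];
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            int y = yB.get(row * yP.getRowStride() + col * yP.getPixelStride()) & 0xff;
            int u = (uB.get((row / 2) * uP.getRowStride() + (col / 2) * uP.getPixelStride()) & 0xff) - 128;
            int v = (vB.get((row / 2) * vP.getRowStride() + (col / 2) * vP.getPixelStride()) & 0xff) - 128;
            int r = clamp(y + (int) (1.402f * v));
            int g = clamp(y - (int) (0.344f * u + 0.714f * v));
            int b = clamp(y + (int) (1.772f * u));
            argb[row * w + col] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }
    return Bitmap.createBitmap(argb, w, h, Bitmap.Config.ARGB_8888);
}
static int clamp(int x) { return x < 0 ? 0 : Math.min(x, 255); }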
I had the same need and wanted a quick and dirty manipulation for a demo. I was not worried about efficient processing for a final product. This was easily achieved using the following Java solution.
My original code to connect the Camera2 preview to a TextureView is commented out and replaced with a surface from an ImageReader:
// Get the surface of the TextureView on the layout
//SurfaceTexture texture = mTextureView.getSurfaceTexture();
//if (null == texture) {
// return;
//}
//texture.setDefaultBufferSize(mPreviewWidth, mPreviewHeight);
//Surface surface = new Surface(texture);
// Capture the preview to the memory reader instead of a UI element
mPreviewReader = ImageReader.newInstance(mPreviewWidth, mPreviewHeight, ImageFormat.JPEG, 1);
Surface surface = mPreviewReader.getSurface();
// This part stays the same regardless of where we render
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mCaptureRequestBuilder.addTarget(surface);
mCameraDevice.createCaptureSession(...
Then I registered a listener for the image:
mPreviewReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Image image = reader.acquireLatestImage();
if (image != null) {
Image.Plane plane = image.getPlanes()[0];
ByteBuffer buffer = plane.getBuffer();
byte[] bytes = new byte[buffer.capacity()];
buffer.get(bytes);
Bitmap preview = BitmapFactory.decodeByteArray(bytes, 0, buffer.capacity());
image.close();
if(preview != null ) {
// This gets the canvas for the same mTextureView we would have connected to the
// Camera2 preview directly above.
Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
float[] colorTransform = {
0, 0, 0, 0, 0,
.35f, .45f, .25f, 0, 0,
0, 0, 0, 0, 0,
0, 0, 0, 1, 0};
ColorMatrix colorMatrix = new ColorMatrix();
colorMatrix.set(colorTransform); //Apply the monochrome green
ColorMatrixColorFilter colorFilter = new ColorMatrixColorFilter(colorMatrix);
Paint paint = new Paint();
paint.setColorFilter(colorFilter);
canvas.drawBitmap(preview, 0, 0, paint);
mTextureView.unlockCanvasAndPost(canvas);
}
}
}
}
}, mBackgroundPreviewHandler);

Take a screenshot using MediaProjection

With the MediaProjection APIs available in Android L it's possible to
capture the contents of the main screen (the default display) into a Surface object, which your app can then send across the network
I have managed to get the VirtualDisplay working, and my SurfaceView is correctly displaying the content of the screen.
What I want to do is capture a frame displayed in the Surface and write it to a file. I have tried the following, but all I get is a black file:
Bitmap bitmap = Bitmap.createBitmap
(surfaceView.getWidth(), surfaceView.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bitmap);
surfaceView.draw(canvas);
printBitmapToFile(bitmap);
Any idea on how to retrieve the displayed data from the Surface?
EDIT
So, as @j__m suggested, I'm now setting up the VirtualDisplay using the Surface of an ImageReader:
Display display = getWindowManager().getDefaultDisplay();
Point size = new Point();
display.getSize(size);
displayWidth = size.x;
displayHeight = size.y;
imageReader = ImageReader.newInstance(displayWidth, displayHeight, ImageFormat.JPEG, 5);
Then I create the virtual display passing the Surface to the MediaProjection:
int flags = DisplayManager.VIRTUAL_DISPLAY_FLAG_OWN_CONTENT_ONLY | DisplayManager.VIRTUAL_DISPLAY_FLAG_PUBLIC;
DisplayMetrics metrics = getResources().getDisplayMetrics();
int density = metrics.densityDpi;
mediaProjection.createVirtualDisplay("test", displayWidth, displayHeight, density, flags,
imageReader.getSurface(), null, projectionHandler);
Finally, in order to get a "screenshot" I acquire an Image from the ImageReader and read the data from it:
Image image = imageReader.acquireLatestImage();
byte[] data = getDataFromImage(image);
Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
The problem is that the resulting bitmap is null.
This is the getDataFromImage method:
public static byte[] getDataFromImage(Image image) {
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
byte[] data = new byte[buffer.capacity()];
buffer.get(data);
return data;
}
The Image returned from acquireLatestImage always has data with a default size of 7672320, and the decoding returns null.
More specifically, when the ImageReader tries to acquire an image, the status ACQUIRE_NO_BUFS is returned.
After spending some time and learning about the Android graphics architecture a bit more than desirable, I have got it to work. All the necessary pieces are well documented, but can cause headaches if you aren't already familiar with OpenGL, so here is a nice summary "for dummies".
I am assuming that you
Know about Grafika, an unofficial Android media API test-suite, written by Google's work-loving employees in their spare time;
Can read through Khronos GL ES docs to fill gaps in OpenGL ES knowledge, when necessary;
Have read this document and understood most of what is written there (at least the parts about hardware composers and BufferQueue).
The BufferQueue is what ImageReader is about. That class was poorly named to begin with; it would be better to call it "ImageReceiver", a dumb wrapper around the receiving end of a BufferQueue (inaccessible via any other public API). Don't be fooled: it does not perform any conversions. It does not allow querying the formats supported by the producer, even though the C++ BufferQueue exposes that information internally. It may fail in simple situations, for example if the producer uses a custom, obscure format (such as BGRA).
The above-listed issues are why I recommend using OpenGL ES glReadPixels as a generic fallback, but still attempting to use ImageReader if available, since it potentially allows retrieving the image with minimal copies/transformations.
To get a better idea of how to use OpenGL for the task, let's look at the Surface returned by ImageReader/MediaCodec. It is nothing special, just a normal Surface on top of a SurfaceTexture with two gotchas: OES_EGL_image_external and EGL_ANDROID_recordable.
OES_EGL_image_external
Simply put, OES_EGL_image_external is a flag that must be passed to glBindTexture to make the texture work with a BufferQueue. Rather than defining a specific color format etc., it is an opaque container for whatever is received from the producer. The actual contents may be in a YUV colorspace (mandatory for the Camera API), RGBA/BGRA (often used by video drivers) or another, possibly vendor-specific format. The producer may offer some niceties, such as a JPEG or RGB565 representation, but don't hold your hopes high.
The only producer covered by CTS tests as of Android 6.0 is the Camera API (AFAIK only its Java facade). The reason there are many MediaProjection + RGBA8888 ImageReader examples flying around is that RGBA8888 is the frequently encountered common denominator and the only format mandated by the OpenGL ES spec for glReadPixels. Still, don't be surprised if the display composer decides to use a completely unreadable format, or simply one unsupported by the ImageReader class (such as BGRA8888), and you have to deal with it.
EGL_ANDROID_recordable
As is evident from reading the specification, it is a flag passed to eglChooseConfig in order to gently push the producer towards generating YUV images. Or to optimize the pipeline for reading from video memory. Or something. I am not aware of any CTS tests ensuring its correct treatment (and even the specification itself suggests that individual producers may be hard-coded to give it special treatment), so don't be surprised if it happens to be unsupported (see the Android 5.0 emulator) or silently ignored. There is no definition in the Java classes; just define the constant yourself, like Grafika does.
Getting to the hard part
So what is one supposed to do to read from a VirtualDisplay in the background "the right way"?
Create EGL context and EGL display, possibly with "recordable" flag, but not necessarily.
Create an offscreen buffer for storing image data before it is read from video memory.
Create GL_TEXTURE_EXTERNAL_OES texture.
Create a GL shader for drawing the texture from step 3 into the buffer from step 2. The video driver will (hopefully) ensure that anything contained in the "external" texture is safely converted to conventional RGBA (see the spec).
Create a Surface + SurfaceTexture, using the "external" texture.
Install an OnFrameAvailableListener on the said SurfaceTexture (this must be done before the next step, or else the BufferQueue will be screwed up!)
Supply the Surface from step 5 to the VirtualDisplay.
Your OnFrameAvailableListener callback will contain the following steps:
Make the context current (e.g. by making your offscreen buffer current);
updateTexImage to request an image from the producer;
getTransformMatrix to retrieve the transformation matrix of the texture, fixing whatever madness may be plaguing the producer's output. Note that this matrix will fix the OpenGL upside-down coordinate system, but we will reintroduce the upside-downness in the next step.
Draw the "external" texture onto our offscreen buffer, using the previously created shader. The shader needs to additionally flip its Y coordinate unless you want to end up with a flipped image.
Use glReadPixels to read from your offscreen video buffer into a ByteBuffer.
Most of the above steps are performed internally when reading video memory with ImageReader, but some differ. Alignment of rows in the created buffer can be defined with glPixelStorei (and defaults to 4, so you don't have to account for it when using 4-byte RGBA8888).
Note that, aside from processing a texture with shaders, GL ES does no automatic conversion between formats (unlike desktop OpenGL). If you want RGBA8888 data, make sure to allocate the offscreen buffer in that format and request it from glReadPixels.
EglCore eglCore;
Surface producerSide;
SurfaceTexture texture;
int textureId;
OffscreenSurface consumerSide;
ByteBuffer buf;
Texture2dProgram shader;
FullFrameRect screen;
...
// dimensions of the Display, or whatever you wanted to read from
int w, h = ...
// feel free to try FLAG_RECORDABLE if you want
eglCore = new EglCore(null, EglCore.FLAG_TRY_GLES3);
consumerSide = new OffscreenSurface(eglCore, w, h);
consumerSide.makeCurrent();
shader = new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT);
screen = new FullFrameRect(shader);
texture = new SurfaceTexture(textureId = screen.createTextureObject(), false);
texture.setDefaultBufferSize(w, h);
producerSide = new Surface(texture);
texture.setOnFrameAvailableListener(this);
buf = ByteBuffer.allocateDirect(w * h * 4);
buf.order(ByteOrder.nativeOrder());
currentBitmap = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
Only after doing all of above you can initialize your VirtualDisplay with producerSide Surface.
Code of frame callback:
float[] matrix = new float[16];
boolean closed;
public void onFrameAvailable(SurfaceTexture surfaceTexture) {
// there may still be pending callbacks after shutting down EGL
if (closed) return;
consumerSide.makeCurrent();
texture.updateTexImage();
texture.getTransformMatrix(matrix);
consumerSide.makeCurrent();
// draw the image to framebuffer object
screen.drawFrame(textureId, matrix);
consumerSide.swapBuffers();
buf.rewind();
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
buf.rewind();
currentBitmap.copyPixelsFromBuffer(buf);
// congrats, you should have your image in the Bitmap
// you can release the resources or continue to obtain
// frames for whatever poor-man's video recorder you are writing
}
The code above is a greatly simplified version of the approach found in this GitHub project, but all the referenced classes come directly from Grafika.
Depending on your hardware, you may have to jump through a few extra hoops to get things done: using setSwapInterval, calling glFlush before making the screenshot, etc. Most of these can be figured out on your own from the contents of LogCat.
In order to avoid Y-coordinate reversal, replace the vertex shader used by Grafika with the following one:
String VERTEX_SHADER_FLIPPED =
"uniform mat4 uMVPMatrix;\n" +
"uniform mat4 uTexMatrix;\n" +
"attribute vec4 aPosition;\n" +
"attribute vec4 aTextureCoord;\n" +
"varying vec2 vTextureCoord;\n" +
"void main() {\n" +
" gl_Position = uMVPMatrix * aPosition;\n" +
" vec2 coordInterm = (uTexMatrix * aTextureCoord).xy;\n" +
// "OpenGL ES: how flip the Y-coordinate: 6542nd edition"
" vTextureCoord = vec2(coordInterm.x, 1.0 - coordInterm.y);\n" +
"}\n";
Parting words
The above-described approach can be used when ImageReader does not work for you, or if you want to perform some shader processing on Surface contents before moving images from the GPU.
Its speed may be harmed by doing an extra copy to the offscreen buffer, but the impact of running the shader would be minimal if you know the exact format of the received buffer (e.g. from ImageReader) and use the same format for glReadPixels.
For example, if your video driver is using BGRA as the internal format, you would check whether EXT_texture_format_BGRA8888 is supported (it likely would be), allocate the offscreen buffer in that format, and retrieve the image in it with glReadPixels.
If you want to perform a complete zero-copy, or to employ formats not supported by OpenGL (e.g. JPEG), you are still better off using ImageReader.
The various "how do I capture a screen shot of a SurfaceView" answers (e.g. this one) all still apply: you can't do that.
The SurfaceView's surface is a separate layer, composited by the system, independent of the View-based UI layer. Surfaces are not buffers of pixels, but rather queues of buffers, with a producer-consumer arrangement. Your app is on the producer side. Getting a screen shot requires you to be on the consumer side.
If you direct the output to a SurfaceTexture, instead of a SurfaceView, you will have both sides of the buffer queue in your app process. You can render the output with GLES and read it into an array with glReadPixels(). Grafika has some examples of doing stuff like this with the Camera preview.
To capture the screen as video, or send it over a network, you would want to send it to the input surface of a MediaCodec encoder.
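A hedged sketch of wiring up such an encoder input surface with MediaCodec (H.264; the bitrate and frame-rate values are illustrative):
MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
// createEncoderByType throws IOException; handle or declare it.
MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInput = encoder.createInputSurface(); // hand this Surface to the producer
encoder.start();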
More details on the Android graphics architecture are available here.
I have this working code:
mImageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5);
mProjection.createVirtualDisplay("test", width, height, density, flags, mImageReader.getSurface(), new VirtualDisplayCallback(), mHandler);
mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Image image = null;
FileOutputStream fos = null;
Bitmap bitmap = null;
try {
image = mImageReader.acquireLatestImage();
fos = new FileOutputStream(getFilesDir() + "/myscreen.jpg");
final Image.Plane[] planes = image.getPlanes();
final Buffer buffer = planes[0].getBuffer().rewind();
bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
bitmap.compress(CompressFormat.JPEG, 100, fos);
} catch (Exception e) {
e.printStackTrace();
} finally {
if (fos!=null) {
try {
fos.close();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
if (bitmap!=null)
bitmap.recycle();
if (image!=null)
image.close();
}
}
}, mHandler);
I believe that the rewind() on the ByteBuffer did the trick, though I'm not really sure why. I am testing it against an Android emulator running API 21, as I do not have an Android 5.0 device at hand at the moment.
Hope it helps!
ImageReader is the class you want.
https://developer.android.com/reference/android/media/ImageReader.html
I have this working code for tablet and mobile devices:
private void createVirtualDisplay() {
// get width and height
Point size = new Point();
mDisplay.getSize(size);
mWidth = size.x;
mHeight = size.y;
// start capture reader
if (Util.isTablet(getApplicationContext())) {
mImageReader = ImageReader.newInstance(metrics.widthPixels, metrics.heightPixels, PixelFormat.RGBA_8888, 2);
}else{
mImageReader = ImageReader.newInstance(mWidth, mHeight, PixelFormat.RGBA_8888, 2);
}
// mImageReader = ImageReader.newInstance(450, 450, PixelFormat.RGBA_8888, 2);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP) {
mVirtualDisplay = sMediaProjection.createVirtualDisplay(SCREENCAP_NAME, mWidth, mHeight, mDensity, VIRTUAL_DISPLAY_FLAGS, mImageReader.getSurface(), null, mHandler);
}
mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
int onImageCount = 0;
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Override
public void onImageAvailable(ImageReader reader) {
Image image = null;
FileOutputStream fos = null;
Bitmap bitmap = null;
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT) {
image = reader.acquireLatestImage();
}
if (image != null) {
Image.Plane[] planes = new Image.Plane[0];
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.KITKAT) {
planes = image.getPlanes();
}
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * mWidth;
// create bitmap
//
if (Util.isTablet(getApplicationContext())) {
bitmap = Bitmap.createBitmap(metrics.widthPixels, metrics.heightPixels, Bitmap.Config.ARGB_8888);
}else{
bitmap = Bitmap.createBitmap(mWidth + rowPadding / pixelStride, mHeight, Bitmap.Config.ARGB_8888);
}
// bitmap = Bitmap.createBitmap(mImageReader.getWidth() + rowPadding / pixelStride,
// mImageReader.getHeight(), Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
// write bitmap to a file
SimpleDateFormat df = new SimpleDateFormat("dd-MM-yyyy_HH:mm:ss");
String formattedDate = df.format(Calendar.getInstance().getTime()).trim();
String finalDate = formattedDate.replace(":", "-");
String imgName = Util.SERVER_IP + "_" + SPbean.getCurrentImageName(getApplicationContext()) + "_" + finalDate + ".jpg";
String mPath = Util.SCREENSHOT_PATH + imgName;
File imageFile = new File(mPath);
fos = new FileOutputStream(imageFile);
bitmap.compress(Bitmap.CompressFormat.JPEG, 100, fos);
Log.e(TAG, "captured image: " + IMAGES_PRODUCED);
IMAGES_PRODUCED++;
SPbean.setScreenshotCount(getApplicationContext(), ((SPbean.getScreenshotCount(getApplicationContext())) + 1));
if (imageFile.exists())
new DbAdapter(LegacyKioskModeActivity.this).insertScreenshotImageDetails(SPbean.getScreenshotTaskid(LegacyKioskModeActivity.this), imgName);
stopProjection();
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (fos != null) {
try {
fos.close();
} catch (IOException ioe) {
ioe.printStackTrace();
}
}
if (bitmap != null) {
bitmap.recycle();
}
if (image != null) {
image.close();
}
}
}
}, mHandler);
}
2) In the onActivityResult callback:
if (Util.isTablet(getApplicationContext())) {
metrics = Util.getScreenMetrics(getApplicationContext());
} else {
metrics = getResources().getDisplayMetrics();
}
mDensity = metrics.densityDpi;
mDisplay = getWindowManager().getDefaultDisplay();
3) Helper methods:
public static DisplayMetrics getScreenMetrics(Context context) {
WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
Display display = wm.getDefaultDisplay();
DisplayMetrics dm = new DisplayMetrics();
display.getMetrics(dm);
return dm;
}
public static boolean isTablet(Context context) {
boolean xlarge = ((context.getResources().getConfiguration().screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK) == Configuration.SCREENLAYOUT_SIZE_XLARGE);
boolean large = ((context.getResources().getConfiguration().screenLayout & Configuration.SCREENLAYOUT_SIZE_MASK) == Configuration.SCREENLAYOUT_SIZE_LARGE);
return (xlarge || large);
}
Hope this helps anyone who is getting distorted images on a device while capturing through the MediaProjection API.

32-bit Bitmaps slow on Google TV. Bad pixel format?

I am wondering what I'm doing wrong here.
I create a bitmap with dimensions 1080x1080:
Bitmap level = Bitmap.createBitmap(1080, 1080, Config.ARGB_8888);
then I draw some lines and rects into it and put it on the canvas in my SurfaceView:
c.drawBitmap(LevelBitmap.level, 0, 0, null);
and this operation takes 20 ms on my Google TV NSZ-GS7, which is much too long.
Setting the pixel format in surfaceCreated to RGBX_8888 or RGBA_8888 makes things even worse: 30 ms for drawing.
holder.setFormat(PixelFormat.RGBX_8888);
Setting it to RGB_888
holder.setFormat(PixelFormat.RGB_888);
leads to NullPointerExceptions when I draw something.
Only the combination of Config.RGB_565 for the bitmap and PixelFormat.RGB_565 for the window draws the bitmap to the canvas in an acceptable 10 ms, but the quality of RGB_565 is horrible.
Am I missing something? Is Google TV not capable of 32-bit graphics? What is its "natural" pixel and bitmap format? Is there any documentation on this topic I missed on Google?
Here is the onDraw method I use to measure the timings:
private static void doDraw(Canvas c) {
if (LevelBitmap.level == null){
LevelBitmap.createBitmap();
}
long time = System.currentTimeMillis();
c.drawBitmap(LevelBitmap.level, 0, 0, null);
Log.e("Timer:",""+(System.currentTimeMillis()-time) );
if(true) return;
and here is the class creating the bitmap:
public final class LevelBitmap {
public static Bitmap level;
public static void createBitmap() {
level = Bitmap.createBitmap(GameViewThread.mCanvasHeight, GameViewThread.mCanvasHeight, Config.ARGB_8888);
Canvas canvas = new Canvas(level);
canvas.scale(GameViewThread.fieldScaleFactor, GameViewThread.fieldScaleFactor);
canvas.translate(500, 500);
// floor covering
int plankWidth = 50;
int plankSpace = 500;
short size = TronStatics.FIELDSIZE;
for (int i = plankSpace; i < size; i = i + plankSpace) {
canvas.drawRect(i, 0, i + plankWidth, size, GameViewThread.bgGrey);
canvas.drawRect(0, i, size, i + plankWidth, GameViewThread.bgGrey);
}
// draw field
canvas.drawRect(-10, -10, TronStatics.FIELDSIZE + 10, TronStatics.FIELDSIZE + 10, GameViewThread.wallPaint);
}
}
