I'm trying to capture a screenshot of a Genymotion instance which has an OpenGL ES window inside.
The problem is that injecting a DLL (hooking) to capture the framebuffer didn't work as expected: Genymotion crashes, and I can't hook the framebuffer because I have no idea how to write a DLL that proxies SwapBuffers. Public solutions like GLIntercept or glproxy didn't work either (no idea how to use them). Using EasyHook to inject a DLL as in http://spazzarama.com/2011/03/14/c-screen-capture-and-overlays-for-direct3d-9-10-and-11-using-api-hooks/ didn't work either, since that targets DirectX, not OpenGL.
Now my goal is to take a screenshot while the window is in the background. This works well on all windows containing no OpenGL/DirectX content.
Usually the captured screen is just black. I have found several solutions using SRCCOPY | CAPTUREBLT, but this results in a white screenshot.
I've tried different screenshot mechanisms, but all come out black or white (tested on Genymotion x86 with dual monitors).
I have also tried capturing the parent window of the Genymotion content window, which means that instead of the OpenGL HWND, the main window of Genymotion is captured. Same result: everything looks fine, but the OpenGL window stays black. Could anyone tell me how to hook a DLL that captures the framebuffer (gDEBugger can display the graphics just fine), or why the screenshots are black or white?
Here is my screenshot code.
public Image TakeScreenshotFromHandle(IntPtr handle)
{
    // Get a device context for the whole window and measure it
    IntPtr hdcSrc = User32.GetWindowDC(handle);
    User32.RECT windowRect = new User32.RECT();
    User32.GetWindowRect(handle, out windowRect);
    int width = windowRect.Right - windowRect.Left;
    int height = windowRect.Bottom - windowRect.Top;

    // Create a compatible bitmap and blit the window contents into it
    IntPtr hdcDest = GDI32.CreateCompatibleDC(hdcSrc);
    IntPtr hBitmap = GDI32.CreateCompatibleBitmap(hdcSrc, width, height);
    IntPtr hOld = GDI32.SelectObject(hdcDest, hBitmap);
    GDI32.BitBlt(hdcDest, 0, 0, width, height, hdcSrc, 0, 0,
        GDI32.TernaryRasterOperations.SRCCOPY | GDI32.TernaryRasterOperations.CAPTUREBLT);
    GDI32.SelectObject(hdcDest, hOld);
    GDI32.DeleteDC(hdcDest);
    User32.ReleaseDC(handle, hdcSrc);

    // Convert the GDI bitmap into a managed Image and free the native handle
    Image img = Image.FromHbitmap(hBitmap);
    GDI32.DeleteObject(hBitmap);
    return img;
}
public Image TakeScreenshotFromHandle_2(IntPtr hwnd)
{
    // Measure the window and create a bitmap of the same size
    Clash_of_Clans_Genymotion_Bot.User32.RECT rc;
    User32.GetWindowRect(hwnd, out rc);
    Bitmap bmp = new Bitmap(rc.Right - rc.Left, rc.Bottom - rc.Top, System.Drawing.Imaging.PixelFormat.Format32bppPArgb);
    Graphics gfxBmp = Graphics.FromImage(bmp);
    IntPtr hdcBitmap = gfxBmp.GetHdc();

    // Ask the window to render itself into our device context
    User32.PrintWindow(hwnd, hdcBitmap, 0);
    gfxBmp.ReleaseHdc(hdcBitmap);
    gfxBmp.Dispose();

    bmp.Save("screenshothandle2.png");
    return bmp;
}
Now my idea was to send Alt+Print Screen to the window and grab the clipboard contents, since the graphic placed in the clipboard does show the screenshot, but this is a horrible solution. Best would be capturing the framebuffer. Does anyone have a simple example of how to proxy the buffer (with injecting/hooking)?
Related
I use WebRTC's Android library and I want to mirror the image (footage) displayed on a SurfaceView. (It is front-camera footage.)
I did the same on iOS easily by changing the scale of the surface view like this: self.LocalView.transform = CGAffineTransformMakeScale(-1.0, 1.0); but on Android, localRenderer.scaleX = -1f gives a black screen.
This is the only source I found that talks about this: link
It says something like this:
WebRTC Android provides VideoRendererGui as a video rendering interface.
VideoRendererGui's update interface provides a mirroring parameter. Set it to true to mirror (reverse) when rendering.
public static void update(Callbacks renderer, int x, int y, int width, int height, VideoRendererGui.ScalingType scalingType, boolean mirror)
But I can't find an example of how to use this VideoRendererGui class.
I am not sure whether you just want to mirror your local view or you want the camera stream itself to be mirrored. But if you just want to show the front-camera footage mirrored, then the solution below will do the job.
Here localVideoView is a SurfaceViewRenderer.
localVideoView?.setMirror(true)
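For context, here is a minimal Java sketch of how this typically fits together with WebRTC's SurfaceViewRenderer (the eglBase and localVideoTrack objects are assumed to come from your existing WebRTC setup):
import org.webrtc.EglBase;
import org.webrtc.SurfaceViewRenderer;
import org.webrtc.VideoTrack;

// Hypothetical helper: wire up a mirrored local preview.
void attachMirroredLocalView(SurfaceViewRenderer localVideoView,
                             EglBase eglBase,
                             VideoTrack localVideoTrack) {
    // Initialize the renderer with the shared EGL context.
    localVideoView.init(eglBase.getEglBaseContext(), null);
    // Mirror the rendered frames horizontally (typical for the front camera).
    localVideoView.setMirror(true);
    // Attach the local video track so its frames are rendered into this view.
    localVideoTrack.addSink(localVideoView);
}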
I'm currently working on an app in C++ using the Android NDK, and I need to create a sampler to access the camera output image.
I have done this using AIMAGE_FORMAT_YUV_420_888 and a VkSamplerYcbcrConversion for accessing the image in the hardware buffer. I do the YUV -> RGB conversion in a shader, and it all looks good on my phone.
I have since discovered that this doesn't work on Samsung phones, in my case specifically the Samsung Galaxy S10/S10+.
The reason is that when I set up an image reader with AIMAGE_FORMAT_YUV_420_888, I get a camera error on the Samsung. On my OnePlus and on another phone I tried, the pipeline worked entirely as expected. I created a very simple test setup just to try to open the camera with that image format in the ImageReader on the Samsung S10 and got the error, but when I changed the ImageReader format to AIMAGE_FORMAT_JPEG the error went away and the camera seemed to start as expected.
AImageReader* SimpleCamera::CreateJpegReader()
{
    AImageReader* reader = nullptr;
    // media_status_t status = AImageReader_new(640, 480, AIMAGE_FORMAT_JPEG,
    //AIMAGE_FORMAT_RGBA_8888
    //media_status_t status = AImageReader_new(640, 480, AIMAGE_FORMAT_RGB_565,4, &reader);
    media_status_t status = AImageReader_newWithUsage(640, 480,
            //AIMAGE_FORMAT_RGBA_8888,
            //AIMAGE_FORMAT_RGB_565,
            //AIMAGE_FORMAT_RGB_888,
            AIMAGE_FORMAT_JPEG,
            AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE | AHARDWAREBUFFER_USAGE_CPU_READ_RARELY,
            4, &reader);

    if (status != AMEDIA_OK) {
        LOGE("Couldn't create new image reader");
        return nullptr;
    }

    AImageReader_ImageListener listener{
        .context = nullptr,
        .onImageAvailable = imageCallback1,
    };
    AImageReader_setImageListener(reader, &listener);
    return reader;
}
None of the other formats is guaranteed to be supported except AIMAGE_FORMAT_JPEG, but this format doesn't seem to work with the VkSamplerYcbcrConversion because the image layout is different.
Has anyone come up against this issue before? And if so, how did you resolve it?
At a high level the goal is: in C++, get the image out of the camera2 API and onto a VkImage. If anyone knows an alternative way of doing that, I'm also all ears.
Try using ImageFormat.PRIVATE with the USAGE_GPU_SAMPLED_IMAGE flag. This used to work fine on the mentioned Samsung devices in particular.
Please make sure to read the Vulkan specification, as there are quite a few Android-specific and VkSamplerYcbcrConversion requirements.
I can also recommend taking a look at this great project, which uses the Android camera2 API and Vulkan.
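The question is NDK-based, but as a rough illustration of the suggested combination, here is a hedged Java sketch of the equivalent ImageReader setup (in the NDK the corresponding constants are AIMAGE_FORMAT_PRIVATE and AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE, passed to AImageReader_newWithUsage):
import android.graphics.ImageFormat;
import android.hardware.HardwareBuffer;
import android.media.ImageReader;

// Sketch only (API 26+): a PRIVATE-format reader whose buffers are GPU-sampleable,
// so the producer can pick an opaque, GPU-friendly layout that can be imported
// into Vulkan instead of a CPU-visible YUV layout.
static ImageReader createPrivateReader() {
    return ImageReader.newInstance(
            640, 480,
            ImageFormat.PRIVATE,
            4, // maxImages
            HardwareBuffer.USAGE_GPU_SAMPLED_IMAGE);
}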
After implementing the camera2 API for the in-app camera, I noticed that on Samsung devices the images appear blurry. After searching about that, I found the Samsung Camera SDK (http://developer.samsung.com/galaxy#camera). After implementing the SDK, the images are fine now on the Samsung Galaxy S7, but on the Galaxy S6 they are still blurry. Has anyone experienced these kinds of issues with Samsung devices?
EDIT:
To complement @rcsumners comment: I am setting autofocus by using
mPreviewBuilder.set(SCaptureRequest.CONTROL_AF_TRIGGER, SCaptureRequest.CONTROL_AF_TRIGGER_START);
mSCameraSession.capture(mPreviewBuilder.build(), new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session, SCaptureRequest request, STotalCaptureResult result) {
        isAFTriggered = true;
    }
}, mBackgroundHandler);
It is a long-exposure image where the user has to take an image of a static, non-moving object. For this I am using CONTROL_AF_MODE_MACRO:
mCaptureBuilder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_MACRO);
and I am also enabling auto flash if it is available:
requestBuilder.set(SCaptureRequest.CONTROL_AE_MODE,
SCaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
I am not really an expert in this API; I mostly followed the SDK example app.
There could be a number of issues causing this problem. One prominent one is the dimensions of your output image.
I ran the Camera2 API and the preview was clear, but the output was quite blurry:
val characteristics: CameraCharacteristics? = cameraManager.getCameraCharacteristics(cameraId)
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // The issue
var width = imageDimension.width
var height = imageDimension.height
if (size != null) {
    width = size[0].width; height = size[0].height
}
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5)
The line below was returning a dimension of about 245*144, which was way too small to be sent to the image reader. Somehow the output was being stretched, which made it end up blurry. Therefore I removed this line:
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // this was returning a size that was too small
Setting the width and height manually resolved the issue.
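Rather than hard-coding the dimensions, another common approach (shown here as a Java sketch, not taken from the original answer) is to pick the largest JPEG size the device actually advertises:
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.media.ImageReader;
import android.util.Size;
import java.util.Arrays;
import java.util.Comparator;

// Sketch: choose the advertised JPEG output size with the largest pixel area.
static ImageReader createLargestJpegReader(CameraCharacteristics characteristics) {
    StreamConfigurationMap map =
            characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size largest = Arrays.stream(map.getOutputSizes(ImageFormat.JPEG))
            .max(Comparator.comparingLong((Size s) -> (long) s.getWidth() * s.getHeight()))
            .get();
    return ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
            ImageFormat.JPEG, 5);
}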
You're setting the AF trigger for one frame, but then are you waiting for AF to complete? For AF_MODE_MACRO (are you verifying the device lists support for this AF mode?) you need to wait for AF_STATE_FOCUSED_LOCKED before the image is guaranteed to be stable and sharp. (You may also receive NOT_FOCUSED_LOCKED if the AF algorithm can't reach sharp focus, which could be because the object is just too close for the lens, or the scene is too confusing)
On most modern devices, it's recommended to use CONTINUOUS_PICTURE and not worry about AF triggering unless you really want to lock focus for some time period. In that mode, the device will continuously try to focus to the best of its ability. I'm not sure all that many devices support MACRO, to begin with.
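As a rough illustration of that "trigger, then wait for the AF state" flow, here is a sketch using the plain camera2 classes (the SCamera types mirror these; captureStillPicture() is a hypothetical method in your own capture flow):
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;

// Watch the repeating results after sending CONTROL_AF_TRIGGER_START and only
// take the still picture once autofocus reports a locked state.
CameraCaptureSession.CaptureCallback afWaitCallback =
        new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request,
                                   TotalCaptureResult result) {
        Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
        if (afState != null
                && (afState == CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED
                    || afState == CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED)) {
            // Focus has settled (sharp, or as close as the lens can get):
            // it is now safe to submit the long-exposure still capture.
            captureStillPicture(); // hypothetical method in your capture flow
        }
    }
};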
I'm trying out libgdx as an OpenGL wrapper, and I have some issues with its graphical rendering:
For some reason, all images (textures) on the Android device look a little blurred using libgdx. This also includes text (fonts).
For text, I thought it's because I use bitmap fonts, but I can't find an alternative. I've found out that there is a library called "gdx-stb-truetype", but I can't find out how to download and use it.
For normal images, even when I show the entire image without any scaling, I expect it to look as sharp as on a computer's screen, especially since I have such a good screen on the device (it's a Galaxy Nexus).
I've tried to turn anti-aliasing off, using the following code:
final AndroidApplicationConfiguration androidApplicationConfiguration=new AndroidApplicationConfiguration();
androidApplicationConfiguration.numSamples=0; //tried the value of 1 too.
...
I've also tried various scaling (filter) methods, but with no luck. Example:
texture.setFilter(TextureFilter.Nearest,TextureFilter.Nearest);
As a test, I've found a sharp image that exactly matches the visible resolution of the device (720x1184 for the Galaxy Nexus, because of the buttons bar), and I've set it as the background of the libgdx app. Of course, I had to add extra blank space in order for the texture to be loaded, so the final size of the image (which includes content and empty space) is still a power of 2 for both width and height (1024x2048 in this case).
In the desktop app, it looks OK. On the device, it looked blurred.
A weird thing I've noticed is that when I change the device's orientation (horizontal <=> vertical), for the very short time before the rotation animation starts, I see both the image and the text perfectly well.
Surely libgdx can handle this, since the OpenGL part of Android's API tests project shows images just fine.
Can anyone please help me?
@user1130529: I do use SpriteBatch. Also, here's what I do for setting the viewport. The issue occurs whether I choose to keep the aspect ratio or not.
public static final int VIRTUAL_WIDTH = 720;
public static final int VIRTUAL_HEIGHT = 1280 - 96;
private static final float ASPECT_RATIO = (float) VIRTUAL_WIDTH / (float) VIRTUAL_HEIGHT;
...
@Override
public void resize(final int width, final int height)
{
    // calculate new viewport
    if (!KEEP_ASPECT_RATIO)
    {
        _viewport = new Rectangle(0, 0, Gdx.app.getGraphics().getWidth(), Gdx.app.getGraphics().getHeight());
        Gdx.app.log("DEBUG", "size:" + _viewport);
        return;
    }
    final float currentAspectRatio = (float) width / (float) height;
    float scale = 1f;
    final Vector2 crop = new Vector2(0f, 0f);
    if (currentAspectRatio > ASPECT_RATIO)
    {
        scale = (float) height / (float) VIRTUAL_HEIGHT;
        crop.x = (width - VIRTUAL_WIDTH * scale) / 2f;
    }
    else if (currentAspectRatio < ASPECT_RATIO)
    {
        scale = (float) width / (float) VIRTUAL_WIDTH;
        crop.y = (height - VIRTUAL_HEIGHT * scale) / 2f;
    }
    else
        scale = (float) width / (float) VIRTUAL_WIDTH;
    final float w = VIRTUAL_WIDTH * scale;
    final float h = VIRTUAL_HEIGHT * scale;
    _viewport = new Rectangle(crop.x, crop.y, w, h);
    Gdx.app.log("DEBUG", "viewport:" + _viewport + " originalSize:" + VIRTUAL_WIDTH + "," + VIRTUAL_HEIGHT
            + " aspectRatio:" + ASPECT_RATIO + " currentAspectRatio:" + currentAspectRatio);
}
Try this:
TextureRegion.getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
Try the following:
texture.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
There are several types of TextureFilter. I assume that the Linear one (is that the default?) is what is blurring the result.
If you have the Chainfire 3D application, or another one that reduces textures or changes them to 16-bit, turn it off; that worked for me, as I had the same problem.
Possible Duplicate:
How to programmatically take a screenshot on Android?
How can I capture the Android device screen content and make an image file from the snapshot data? Which API should I use, or where can I find related resources?
BTW:
Not a camera snapshot, but the device screen.
Use the following code:
Bitmap bitmap;
View v1 = MyView.getRootView();
v1.setDrawingCacheEnabled(true);
bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
Here MyView is the View that we want to include in the screenshot. You can also get the drawing cache of any View this way (without getRootView()).
There is also another way. If we have a ScrollView as the root view, then it's better to use the following code:
LayoutInflater inflater = (LayoutInflater) this.getSystemService(LAYOUT_INFLATER_SERVICE);
FrameLayout root = (FrameLayout) inflater.inflate(R.layout.activity_main, null); // activity_main is UI(xml) file we used in our Activity class. FrameLayout is root view of my UI(xml) file.
root.setDrawingCacheEnabled(true);
Bitmap bitmap = getBitmapFromView(this.getWindow().findViewById(R.id.frameLayout)); // here give id of our root layout (here its my FrameLayout's id)
root.setDrawingCacheEnabled(false);
Here is the getBitmapFromView() method
public static Bitmap getBitmapFromView(View view) {
    // Define a bitmap with the same size as the view
    Bitmap returnedBitmap = Bitmap.createBitmap(view.getWidth(), view.getHeight(), Bitmap.Config.ARGB_8888);
    // Bind a canvas to it
    Canvas canvas = new Canvas(returnedBitmap);
    // Get the view's background
    Drawable bgDrawable = view.getBackground();
    if (bgDrawable != null)
        // has background drawable, then draw it on the canvas
        bgDrawable.draw(canvas);
    else
        // does not have background drawable, then draw white background on the canvas
        canvas.drawColor(Color.WHITE);
    // draw the view on the canvas
    view.draw(canvas);
    // return the bitmap
    return returnedBitmap;
}
It will capture the entire screen, including the content hidden in your ScrollView.
UPDATED ON 20-04-2016
There is another, better way to take a screenshot. Here I have taken a screenshot of a WebView:
WebView w = new WebView(this);
w.setWebViewClient(new WebViewClient() {
    public void onPageFinished(final WebView webView, String url) {
        new Handler().postDelayed(new Runnable() {
            @Override
            public void run() {
                webView.measure(View.MeasureSpec.makeMeasureSpec(
                                View.MeasureSpec.UNSPECIFIED, View.MeasureSpec.UNSPECIFIED),
                        View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
                webView.layout(0, 0, webView.getMeasuredWidth(), webView.getMeasuredHeight());
                webView.setDrawingCacheEnabled(true);
                webView.buildDrawingCache();
                Bitmap bitmap = Bitmap.createBitmap(webView.getMeasuredWidth(),
                        webView.getMeasuredHeight(), Bitmap.Config.ARGB_8888);
                Canvas canvas = new Canvas(bitmap);
                Paint paint = new Paint();
                int height = bitmap.getHeight();
                canvas.drawBitmap(bitmap, 0, height, paint);
                webView.draw(canvas);
                if (bitmap != null) {
                    try {
                        String filePath = Environment.getExternalStorageDirectory().toString();
                        OutputStream out = null;
                        File file = new File(filePath, "/webviewScreenShot.png");
                        out = new FileOutputStream(file);
                        bitmap.compress(Bitmap.CompressFormat.PNG, 50, out);
                        out.flush();
                        out.close();
                        bitmap.recycle();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }
        }, 1000);
    }
});
Hope this helps!
AFAIK, all of the current methods to capture a screenshot on Android use the /dev/graphics/fb0 framebuffer. This includes ddms. Reading from this stream does require root. ddms uses adbd to request the information, so root is not required there, as adb has the permissions needed to request the data from /dev/graphics/fb0.
The framebuffer contains 2+ "frames" of RGB565 images. If you are able to read the data, you have to know the screen resolution in order to know how many bytes are needed for one image. Each pixel is 2 bytes, so if the screen resolution is 480x800, you have to read 768,000 bytes for the image, since a 480x800 RGB565 image has 384,000 pixels.
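As a rough sketch of that arithmetic in code (assuming a rooted process that can open /dev/graphics/fb0, a 480x800 RGB565 framebuffer with no row padding, and that the visible frame starts at offset 0 — all of which vary by device):
import android.graphics.Bitmap;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.nio.ByteBuffer;

// Sketch: read one RGB565 frame from the framebuffer device and wrap it in a Bitmap.
static Bitmap readFramebufferRgb565() throws Exception {
    final int width = 480, height = 800;        // assumed screen resolution
    final int frameBytes = width * height * 2;  // RGB565 = 2 bytes per pixel -> 768,000 bytes

    byte[] raw = new byte[frameBytes];
    try (DataInputStream in = new DataInputStream(new FileInputStream("/dev/graphics/fb0"))) {
        in.readFully(raw);                      // read exactly one frame
    }

    // Byte order, stride and pixel format are device-specific; adjust as needed.
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    bmp.copyPixelsFromBuffer(ByteBuffer.wrap(raw));
    return bmp;
}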
For newer Android platforms, one can execute a system utility screencap in /system/bin to get the screenshot without root permission.
You can try /system/bin/screencap -h to see how to use it under adb or any shell.
By the way, I think this method is only good for a single snapshot.
If we want to capture multiple frames for screen recording, it will be too slow.
I don't know whether there is another approach for faster screen capture.
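For illustration, a hedged sketch of invoking it programmatically — this only works where the caller has the necessary privileges, e.g. from an adb shell or via su on a rooted device; a normal unprivileged app cannot use it directly:
// Sketch: run the platform screencap utility and wait for it to finish.
// "su -c" assumes a rooted device; from an adb shell you can run screencap directly.
static void captureWithScreencap() throws Exception {
    Process p = Runtime.getRuntime().exec(
            new String[] { "su", "-c", "/system/bin/screencap -p /sdcard/screen.png" });
    p.waitFor();
}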
[Based on Android source code:]
On the C++ side, SurfaceFlinger implements the captureScreen API. This is exposed over the binder IPC interface, each time returning a new ashmem area that contains the raw pixels of the screen. The actual screenshot is taken through OpenGL.
For system C++ clients, the interface is exposed through the ScreenshotClient class, defined in <surfaceflinger_client/SurfaceComposerClient.h> for Android < 4.1; for Android >= 4.1 use <gui/SurfaceComposerClient.h>.
Before JB, to take a screenshot in a C++ program, this was enough:
ScreenshotClient ssc;
ssc.update();
With JB and multiple displays, it becomes slightly more complicated:
ssc.update(
android::SurfaceComposerClient::getBuiltInDisplay(
android::ISurfaceComposer::eDisplayIdMain));
Then you can access it:
do_something_with_raw_bits(ssc.getPixels(), ssc.getSize(), ...);
Using the Android source code, you can compile your own shared library to access that API, and then expose it to Java through JNI. To create a screenshot from your app, the app has to have the READ_FRAME_BUFFER permission.
But even then, apparently you can create screenshots only from system applications, i.e. ones that are signed with the same key as the system. (This part I still don't quite understand, since I'm not familiar enough with the Android permissions system.)
Here is a piece of code, for JB 4.1 / 4.2:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#include <utils/RefBase.h>
#include <binder/IBinder.h>
#include <binder/MemoryHeapBase.h>
#include <gui/ISurfaceComposer.h>
#include <gui/SurfaceComposerClient.h>

static void do_save(const char *filename, const void *buf, size_t size) {
    int out = open(filename, O_RDWR|O_CREAT, 0666);
    int len = write(out, buf, size);
    printf("Wrote %d bytes to out.\n", len);
    close(out);
}

int main(int ac, char **av) {
    android::ScreenshotClient ssc;
    const void *pixels;
    size_t size;

    if (ssc.update(
            android::SurfaceComposerClient::getBuiltInDisplay(
                android::ISurfaceComposer::eDisplayIdMain)) == NO_ERROR) {
        printf("Captured: w=%d, h=%d, format=%d\n",
               ssc.getWidth(), ssc.getHeight(), ssc.getFormat());
        pixels = ssc.getPixels();
        size = ssc.getSize();
        do_save(av[1], pixels, size);
    }
    else
        printf("Screenshot capture failed\n");
    return 0;
}
You can try the following library: the Android Screenshot Library (ASL) enables you to programmatically capture screenshots from Android devices without requiring root access privileges. Instead, ASL utilizes a native service running in the background, started via the Android Debug Bridge (ADB) once per device boot.
According to this link, it is possible to use ddms, in the tools directory of the Android SDK, to take screen captures.
To do this from within an application (and not during development), there are also applications that do so. But as @zed_0xff points out, it certainly requires root.
The framebuffer seems the way to go, but it will not always contain 2+ frames as mentioned by Ryan Conrad. In my case it contained only one. I guess it depends on the frame/display size.
I tried to read the framebuffer continuously, but it seems to return a fixed amount of bytes per read. In my case that is 3,410,432 bytes, which is enough to store a display frame of 854*480 RGBA (3,279,360 bytes). Yes, the binary frame output from fb0 is RGBA on my device. This will most likely vary from device to device, and it will be important for you when decoding it =)
On my device, the permissions on /dev/graphics/fb0 are such that only root and users in the graphics group can read fb0. graphics is a restricted group, so you will probably only be able to access fb0 on a rooted phone using the su command.
Android apps have the user id (uid) app_## and group id (gid) app_##.
The adb shell has uid shell and gid shell, which has many more permissions than an app.
You can actually check those permissions at /system/permissions/platform.xml
This means you will be able to read fb0 from the adb shell without root, but you will not be able to read it from within an app without root.
Also, declaring the READ_FRAME_BUFFER and/or ACCESS_SURFACE_FLINGER permissions in AndroidManifest.xml will do nothing for a regular app, because these are granted only to 'signature' apps.
If you want to do screen capture from Java code in an Android app, AFAIK you must have root privileges.