I tried:
process = Runtime.getRuntime().exec("su -c cat /dev/graphics/fb0 > /sdcard/frame.raw");
process.waitFor();
but it doesn't work. My device is rooted.
I see many answers saying this requires root access, but no actual code for reading the framebuffer.
I also tried glReadPixels(), but no luck:
public void TakeScreen() {
    DisplayMetrics dm = new DisplayMetrics();
    getWindowManager().getDefaultDisplay().getMetrics(dm);
    int width = dm.widthPixels;
    int height = dm.heightPixels;
    int screenshotSize = width * height;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    // gl is a GL10 instance; this only works while a GL context is current
    gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA,
            GL10.GL_UNSIGNED_BYTE, bb);
    int[] pixelsBuffer = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    Bitmap bitmap = Bitmap.createBitmap(width, height,
            Bitmap.Config.RGB_565);
    // the negative stride flips the image vertically (GL's origin is bottom-left)
    bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0,
            width, height);
    pixelsBuffer = null;
    short[] sBuffer = new short[screenshotSize];
    ShortBuffer sb = ShortBuffer.wrap(sBuffer);
    bitmap.copyPixelsToBuffer(sb);
    // swap the red and blue channels (BGR-565 to RGB-565)
    for (int i = 0; i < screenshotSize; ++i) {
        short v = sBuffer[i];
        sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
    }
    sb.rewind();
    bitmap.copyPixelsFromBuffer(sb);
    saveBitmap(bitmap, "/screenshots", "capturedImage");
}
Seems to me like your problem is this sign: >. You cannot redirect output using exec, because it does not run the command through a shell. What you need to do instead is grab the output stream of the process (which is an input stream on your side) and write it to a file:
process = Runtime.getRuntime().exec("su -c cat /dev/graphics/fb0");
InputStream is = process.getInputStream();
...
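For completeness, a minimal sketch of that stream copy (the target path and buffer size are arbitrary assumptions, and real code needs IOException/InterruptedException handling):
Process process = Runtime.getRuntime().exec(new String[]{"su", "-c", "cat /dev/graphics/fb0"});
InputStream is = process.getInputStream();
FileOutputStream fos = new FileOutputStream("/sdcard/frame.raw"); // assumed target path
byte[] buffer = new byte[8192];
int read;
while ((read = is.read(buffer)) != -1) {
    fos.write(buffer, 0, read); // copy the raw framebuffer bytes to the file
}
fos.close();
is.close();
process.waitFor();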
The answer lies in replicating the way the device itself handles it:
fb = open("/dev/graphics/fb0", O_RDONLY);
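That open() call is native C; on the Java side, once a raw dump exists (for example via the su cat approach above), here is a hedged sketch of turning it into a Bitmap, assuming the framebuffer is RGBA_8888 at the display resolution (real devices vary in pixel format and row stride, so this may need adjusting):
byte[] raw = new byte[width * height * 4]; // assumption: 4 bytes per pixel
DataInputStream dis = new DataInputStream(new FileInputStream("/sdcard/frame.raw"));
dis.readFully(raw);
dis.close();
Bitmap fb = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
fb.copyPixelsFromBuffer(ByteBuffer.wrap(raw));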
I am working on TensorFlow image stylization, but the problem I am facing is that it resizes my actual image. I want to apply the style to the whole image itself; for example, if my image resolution is 1280x960, it should stay 1280x960 after I apply the style.
I am not using the default INPUT_SIZE value of 256; using the default value it works fine. Here is the code I am using to prevent the image from being resized.
private TensorFlowInferenceInterface inferenceInterface;

private void applyStyle() {
    inferenceInterface = new TensorFlowInferenceInterface(mActivity.getAssets(), "bossK_float.pb");
    Bitmap bitmap = getBitmapFromPath();
    bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, true);
    INPUT_SIZE_WIDTH = bitmap.getWidth();
    INPUT_SIZE_HEIGHT = bitmap.getHeight();
    mStyledBitmap = stylizeImage(bitmap);
}
private Bitmap stylizeImage(Bitmap bitmap) {
    Bitmap scaledBitmap = scaleBitmap(bitmap, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT);
    intValues = new int[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT];
    floatValues = new float[INPUT_SIZE_WIDTH * INPUT_SIZE_HEIGHT * 3];
    scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    scaledBitmap = scaledBitmap.copy(Bitmap.Config.ARGB_8888, true);
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        floatValues[i * 3 + 0] = ((val >> 16) & 0xFF) * 1.0f;
        floatValues[i * 3 + 1] = ((val >> 8) & 0xFF) * 1.0f;
        floatValues[i * 3 + 2] = (val & 0xFF) * 1.0f;
    }
    Trace.beginSection("feed");
    inferenceInterface.feed(INPUT_NAME, floatValues, INPUT_SIZE_WIDTH, INPUT_SIZE_HEIGHT, 3);
    Trace.endSection();
    Trace.beginSection("run");
    inferenceInterface.run(new String[]{OUTPUT_NAME});
    Trace.endSection();
    Trace.beginSection("fetch");
    inferenceInterface.fetch(OUTPUT_NAME, floatValues);
    Trace.endSection();
    for (int i = 0; i < intValues.length; ++i) {
        intValues[i] = 0xFF000000
                | (((int) (floatValues[i * 3 + 0])) << 16)
                | (((int) (floatValues[i * 3 + 1])) << 8)
                | ((int) (floatValues[i * 3 + 2]));
    }
    scaledBitmap.setPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0, scaledBitmap.getWidth(), scaledBitmap.getHeight());
    return scaledBitmap;
}
private Bitmap scaleBitmap(Bitmap origin, int newWidth, int newHeight) {
    if (origin == null) {
        return null;
    }
    int height = origin.getHeight();
    int width = origin.getWidth();
    float scaleWidth = ((float) newWidth) / width;
    float scaleHeight = ((float) newHeight) / height;
    Matrix matrix = new Matrix();
    matrix.postScale(scaleWidth, scaleHeight);
    Bitmap newBitmap = Bitmap.createBitmap(origin, 0, 0, width, height, matrix, false);
    return newBitmap;
}
When I change my INPUT_SIZE values to INPUT_SIZE_WIDTH and INPUT_SIZE_HEIGHT, my application stops without an error message. I debugged this code; it gets stuck on this piece of code and kills my app:
Trace.beginSection("run");
inferenceInterface.run(new String[]{OUTPUT_NAME});
Trace.endSection();
Please let me know how I can style the whole image using TensorFlow.
Thank you!
Your code stops there because of the difference in sizes; you are probably getting an ArrayIndexOutOfBoundsException.
The model is trained to accept images of a particular size, so whenever you run it, the image has to be reduced to that particular size.
Even your training data, when the pb/lite/tflite file is created, is converted to the same image size you specify at model creation. Scaling the styled result back up afterwards will not affect it to a large extent. You can give that a try.
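For illustration, a minimal sketch of that approach, reusing the scaleBitmap() and stylizeImage() helpers from the question (TRAINED_SIZE is an assumed constant that must match the size the model was actually trained with):
private static final int TRAINED_SIZE = 256; // assumption: the model's training input size

private Bitmap stylizeKeepingResolution(Bitmap original) {
    int originalWidth = original.getWidth();
    int originalHeight = original.getHeight();
    // feed the network the size it was trained for...
    INPUT_SIZE_WIDTH = TRAINED_SIZE;
    INPUT_SIZE_HEIGHT = TRAINED_SIZE;
    Bitmap styled = stylizeImage(original);
    // ...then scale the styled output back up to the original resolution
    return scaleBitmap(styled, originalWidth, originalHeight);
}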
This is an old question, but I would like an answer with code.
The following is too slow for real-time use. I intend to use it later with OpenTok screen sharing. Is there a faster alternative?
ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
bb.order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GL_RGBA,
        GL10.GL_UNSIGNED_BYTE, bb);
int[] pixelsBuffer = new int[screenshotSize];
bb.asIntBuffer().get(pixelsBuffer);
final Bitmap bitmap = Bitmap.createBitmap(width, height,
        Bitmap.Config.RGB_565);
bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width,
        0, 0, width, height);
pixelsBuffer = null;
short[] sBuffer = new short[screenshotSize];
ShortBuffer sb = ShortBuffer.wrap(sBuffer);
bitmap.copyPixelsToBuffer(sb);
for (int i = 0; i < screenshotSize; ++i) {
    short v = sBuffer[i];
    sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
sb.rewind();
bitmap.copyPixelsFromBuffer(sb);
PS: I already tried GL_RGB and GL_BGRA, but it is still slow and I only get a black screen.
First off, the glReadPixels call isn't what is causing your code to slow down; all the allocating of buffers and converting of the image to another format is.
Reuse buffers. Allocate them once and reuse them.
ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
bb.order(ByteOrder.nativeOrder());
Not a problem with this bit.
GLES20.glReadPixels(0, 0, width, height, GL_RGBA,
        GL10.GL_UNSIGNED_BYTE, bb);
Now you're preparing to convert to another format, which has a lot of overhead. Stick with the format you receive. And again, you're allocating buffers instead of reusing them.
However, the Bitmap can be allocated once and reused.
int[] pixelsBuffer = new int[screenshotSize];
bb.asIntBuffer().get(pixelsBuffer);
final Bitmap bitmap = Bitmap.createBitmap(width, height,
        Bitmap.Config.RGB_565);
bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width,
        0, 0, width, height);
pixelsBuffer = null;
short[] sBuffer = new short[screenshotSize];
ShortBuffer sb = ShortBuffer.wrap(sBuffer);
bitmap.copyPixelsToBuffer(sb);
You then won't need this unnecessary conversion.
for (int i = 0; i < screenshotSize; ++i) {
    short v = sBuffer[i];
    sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
copyPixelsFromBuffer() can then be used directly, as long as your Bitmap is in the same format:
bitmap.copyPixelsFromBuffer(bb);
Android ARGB_8888 Bitmaps generally have the same byte layout as GL_RGBA, so it's unlikely you will need to convert. The above reduces everything down to just a read and a copy.
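Putting that together, a minimal sketch of the reduced capture path (the method and field names are illustrative; the buffer and bitmap are allocated once, for example in onSurfaceChanged, and reused every frame):
// allocated once and reused across frames
private ByteBuffer bb;
private Bitmap bitmap;

private void initCapture(int width, int height) {
    bb = ByteBuffer.allocateDirect(width * height * 4);
    bb.order(ByteOrder.nativeOrder());
    bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
}

private void captureFrame(int width, int height) {
    bb.rewind();
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE, bb);
    bb.rewind();
    // ARGB_8888 bitmaps use the same RGBA byte order, so no per-pixel conversion
    bitmap.copyPixelsFromBuffer(bb);
    // note: the image is still vertically flipped relative to screen coordinates
}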
I am using a PBO to take a screenshot; however, the resulting image is all black. It works perfectly fine without the PBO. Is there anything I need to take care of before doing this?
I even tried rendering to an FBO and then using GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT0), with no luck.
public void SetupPBO() {
    GLES30.glGenBuffers(1, pbuffers, 0);
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
    int size = (int) this.mScreenHeight * (int) this.mScreenWidth * 4;
    GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, size, null, GLES30.GL_DYNAMIC_READ);
    checkGlError("glReadBuffer");
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);
}
private void Render(float[] m) {
    .......//Normal render logic
    exportBitmap();
}
private void exportBitmap() {
    int screenshotSize = (int) this.mScreenWidth * (int) this.mScreenHeight;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    // set the target framebuffer to read
    GLES30.glReadBuffer(GLES30.GL_FRONT);
    checkGlError("glReadBuffer");
    GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
    GLES30.glReadPixels(0, 0, (int) mScreenWidth, (int) mScreenHeight, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb); //<------ not working ?????
    int[] pixelsBuffer = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    for (int i = 0; i < screenshotSize; ++i) {
        // The alpha and green channels' positions are preserved while the
        // red and blue are swapped
        pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
                | ((pixelsBuffer[i] & 0x000000ff) << 16)
                | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
    }
    Bitmap bitmap = Bitmap.createBitmap((int) mScreenWidth, (int) mScreenHeight, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixelsBuffer, screenshotSize - (int) mScreenWidth, -(int) mScreenWidth, 0, 0, (int) mScreenWidth, (int) mScreenHeight);
    SaveBitmap(bitmap);
}
GLES30.glReadPixels(0, 0, (int)mScreenWidth, (int)mScreenHeight, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
When a buffer is bound to GL_PIXEL_PACK_BUFFER, the last argument of glReadPixels is interpreted as a byte offset into your PBO. Thus you're writing out of the buffer (on some drivers this code causes a crash). You should pass 0 instead of bb. To retrieve the data from the PBO, use glMapBufferRange.
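A minimal sketch of that fix (assuming API 24+, where the offset overload of glReadPixels is available, and the PBO from SetupPBO() above, sized to width * height * 4 bytes):
int width = (int) mScreenWidth;
int height = (int) mScreenHeight;
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbuffers[0]);
// with a PBO bound, the final parameter is a byte offset into the buffer, not a client pointer
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
// map the PBO to read the pixels back on the CPU
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, width * height * 4, GLES30.GL_MAP_READ_BIT);
if (pixels != null) {
    pixels.order(ByteOrder.nativeOrder());
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(pixels); // still vertically flipped, as with plain glReadPixels
}
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);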
I want to programmatically take a screenshot of my Android application, which is a video-calling application using the openSIPS protocol. While on a video call, I need to take screenshots. I have already tried something, but it captures everything except the video-call fragment.
Here is my try:
public static Bitmap takeScreenshot() {
    View rootView = mVideoView.getRootView();
    rootView.setDrawingCacheEnabled(true);
    //rootView.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.EXACTLY),
    //        MeasureSpec.makeMeasureSpec(0, MeasureSpec.EXACTLY));
    //rootView.layout(0, 0, getMeasuredWidth(), getMeasuredHeight());
    rootView.buildDrawingCache(true);
    //rootView.destroyDrawingCache();
    return rootView.getDrawingCache();
}
The videoView extends a SurfaceView, whose content does not go through the drawing cache, so grabbing the cache returns only a black screen instead of a capture of the video layout. Any help will be appreciated.
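As a side note on that limitation: on API 24+ the PixelCopy API can read SurfaceView content directly, since it copies from the surface rather than the drawing cache. A hedged sketch, assuming mVideoView is the SurfaceView and a Looper is available:
Bitmap bitmap = Bitmap.createBitmap(mVideoView.getWidth(), mVideoView.getHeight(),
        Bitmap.Config.ARGB_8888);
PixelCopy.request(mVideoView, bitmap, new PixelCopy.OnPixelCopyFinishedListener() {
    @Override
    public void onPixelCopyFinished(int copyResult) {
        if (copyResult == PixelCopy.SUCCESS) {
            // bitmap now holds the video frame; save or display it here
        }
    }
}, new Handler(Looper.getMainLooper()));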
OK, now I have the answer that works for me.
Here it is:
if (InCallActivity.capture) {
    int widthx = width;
    int heightx = height;
    int screenshotSize = widthx * heightx;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, widthx, heightx, GL10.GL_RGBA,
            GL10.GL_UNSIGNED_BYTE, bb);
    int[] pixelsBuffer = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    Bitmap bitmap = Bitmap.createBitmap(widthx, heightx,
            Bitmap.Config.RGB_565);
    bitmap.setPixels(pixelsBuffer, screenshotSize - widthx,
            -widthx, 0, 0, widthx, heightx);
    pixelsBuffer = null;
    short[] sBuffer = new short[screenshotSize];
    ShortBuffer sb = ShortBuffer.wrap(sBuffer);
    bitmap.copyPixelsToBuffer(sb);
    // Making the created bitmap (from OpenGL points) compatible with
    // an Android bitmap
    for (int i = 0; i < screenshotSize; ++i) {
        short v = sBuffer[i];
        sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
    }
    sb.rewind();
    bitmap.copyPixelsFromBuffer(sb);
    InCallActivity.captureBmp = bitmap.copy(
            Bitmap.Config.ARGB_8888, false);
    InCallActivity.capture = false;
}
I put this code inside the onDrawFrame(GL10 gl) method of my Renderer class, and it seems to work. Here InCallActivity is the class where I use the GLSurfaceView.
I got this answer from this link. Thanks for the support. Happy coding :)
You can try something like this:
Bitmap bitmap;
View v1 = findViewById(R.id.rlid); // get your root view id
v1.setDrawingCacheEnabled(true);
bitmap = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);
and then for saving
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 40, bytes);
File f = new File(Environment.getExternalStorageDirectory()
        + File.separator + "test.jpg");
f.createNewFile();
FileOutputStream fo = new FileOutputStream(f);
fo.write(bytes.toByteArray());
fo.close();
Source: how to take screenshot programmatically and save it on gallery?
or you can also check this link How to programmatically take a screenshot in Android?
I'm trying to take a screenshot of an Android OpenGL surface.
The code I found is as follows:
int size = width * height;
ByteBuffer buf = ByteBuffer.allocateDirect(size * 4);
buf.order(ByteOrder.nativeOrder());
glContext.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, buf);
int[] data = new int[size];
buf.asIntBuffer().get(data);
buf = null;
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
bitmap.setPixels(data, size - width, -width, 0, 0, width, height);
data = null;
short[] sdata = new short[size];
ShortBuffer sbuf = ShortBuffer.wrap(sdata);
bitmap.copyPixelsToBuffer(sbuf);
for (int i = 0; i < size; ++i) {
    // BGR-565 to RGB-565
    short v = sdata[i];
    sdata[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
sbuf.rewind();
bitmap.copyPixelsFromBuffer(sbuf);
try {
    FileOutputStream fos = new FileOutputStream("/sdcard/screenshot.png");
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos);
    fos.flush();
    fos.close();
} catch (Exception e) {
    // handle
}
I also tried code from another site.
In each case the result is a PNG file that is completely black.
It seems there is some problem with the glReadPixels method, but I don't know how to work around it.
Sorry for the late response...
In order to take a correct screenshot, you have to put the following code into your onDrawFrame(GL10 gl) handler:
if (screenshot) {
    int screenshotSize = width * height;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
    int[] pixelsBuffer = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
    pixelsBuffer = null;
    short[] sBuffer = new short[screenshotSize];
    ShortBuffer sb = ShortBuffer.wrap(sBuffer);
    bitmap.copyPixelsToBuffer(sb);
    // Making the created bitmap (from OpenGL points) compatible with an Android bitmap
    for (int i = 0; i < screenshotSize; ++i) {
        short v = sBuffer[i];
        sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
    }
    sb.rewind();
    bitmap.copyPixelsFromBuffer(sb);
    lastScreenshot = bitmap;
    screenshot = false;
}
The "screenshot" class field is set to true whenever the user presses the button to create a screenshot
or at any other circumstances You want. Inside the "if" body You may place any screenshot creating code sample You find in th internet - the most important thing is having the current instance of GL10. For example when You just save the GL10 instance to the class variable and then use it outside the event to create the screenshot You'll end up with the completely blank image. That's why You have to take a screenshot inside the OnDrawFrame event handler where the GL10 instance is the current one.
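A minimal sketch of that flag pattern (the names are illustrative, and the capture code from the answer above goes inside the "if"):
public class CaptureRenderer implements GLSurfaceView.Renderer {
    private volatile boolean screenshot = false; // set from the UI thread
    public Bitmap lastScreenshot;

    // call this from a button handler on the UI thread
    public void requestScreenshot() {
        screenshot = true;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) { }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) { }

    @Override
    public void onDrawFrame(GL10 gl) {
        // ...normal rendering happens here...
        if (screenshot) {
            // gl is current on this thread, so glReadPixels works here
            // (insert the capture code from the answer above)
            screenshot = false;
        }
    }
}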
Hope that it helps.
Best regards, Gordon.
Here is the way to do it if you want to preserve the quality (8 bits for every colour channel: red, green, blue and alpha too):
if (this.screenshot) {
    int screenshotSize = this.width * this.height;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    gl.glReadPixels(0, 0, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, bb);
    int[] pixelsBuffer = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    bb = null;
    for (int i = 0; i < screenshotSize; ++i) {
        // The alpha and green channels' positions are preserved while the red and blue are swapped
        pixelsBuffer[i] = ((pixelsBuffer[i] & 0xff00ff00))
                | ((pixelsBuffer[i] & 0x000000ff) << 16)
                | ((pixelsBuffer[i] & 0x00ff0000) >> 16);
    }
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width, 0, 0, width, height);
    this.screenshot = false;
}
Got it!
My mistake was that I was keeping the GL context in a class variable. In order to take a screenshot, I have to use the GL context passed to onDrawFrame in the class implementing the GLSurfaceView.Renderer interface. I simply run my code inside the "if" clause and everything works as expected. Hope that remark helps someone.
Best regards,
Gordon