Android - WebRTC: Video call live drawing

I am trying to make a simple feature with WebRTC on an Android (mobile) app.
The app can currently make a simple video call: it connects two devices to each other and lets the users see and hear one another.
What I am trying to achieve is live drawing during the call. To put it simply: User1 calls User2, the call gets connected, then User1 taps a draw button which freezes the video frame and lets him draw on this frozen frame. Obviously, User2 should see this drawing happening live on his phone.
Right now I can freeze the frame (by calling videoCapture.stopCapture()) and draw on it with a custom SurfaceViewRenderer. The problem is that User2 does NOT see the drawing, only the frozen frame.
First I tried to create a new video track containing both the drawing canvas AND the frozen frame to draw on, but I couldn't get it to work.
When creating a video track with peerConnectionFactory.createVideoTrack("ARDAMSv1_" + rand, videoSource);
I am supposed to specify the video source of the track, but the source can only be a VideoSource, and a VideoSource can only be created from a VideoCapturer, which is directly linked to a device camera (without any drawing on it, of course). This explains why User2 is not seeing any drawing on his device.
My question here is: how can I create a VideoCapturer that can stream both the camera stream (the frozen frame) AND a canvas with the drawing on it?
So I tried to implement my own VideoCapturer to either:
1) capture a View (for example the layout containing the drawing and the frozen frame) and stream it to the VideoSource,
OR 2) capture the camera view but also add the drawing to the frame before streaming it.
I couldn't make either of these work because I have no idea how to manipulate the I420Frame object to draw on it and return it with the right callback.
Maybe I am totally wrong with this approach and need to do something completely different; I am open to any suggestion.
PS: I am using Android API 25 with WebRTC 1.0.19742. I do NOT want to use any paid third-party SDK/lib.
Does anyone have a clue how to proceed to achieve simple WebRTC live drawing from one Android app to another Android app?

We came back to this feature a couple of weeks ago and I managed to find a way.
I extended my own CameraCapturer class to get hold of the camera frame before rendering. I then created my own CanvasView to be able to draw on it.
From there, I merged the two bitmaps together (the camera view + my canvas with the drawing), drew the result with OpenGL into the buffer, and displayed it on the SurfaceView.
If someone is interested I could potentially post some code.

@Override
public void startCapture(int width, int height, int fps) {
    Log.d("InitialsClass", "startCapture");
    surTexture.stopListening();

    cameraHeight = 480;
    cameraWidth = 640;

    int horizontalSpacing = 16;
    int verticalSpacing = 20;
    int x = horizontalSpacing;
    int y = cameraHeight - verticalSpacing;

    cameraBitmap = Bitmap.createBitmap(640, 480, Bitmap.Config.ARGB_8888);

    YuvFrame frame = new YuvFrame(null, PROCESSING_NONE, appContext);
    surTexture.startListening(new VideoSink() {
        @Override
        public void onFrame(VideoFrame videoFrame) {
            frame.fromVideoFrame(videoFrame, PROCESSING_NONE);
        }
    });

    if (captureThread == null || !captureThread.isInterrupted()) {
        captureThread = new Thread(() -> {
            try {
                if (matrix == null) {
                    matrix = new Matrix();
                }
                long start = System.nanoTime();
                capturerObs.onCapturerStarted(true);

                int[] textures = new int[1];
                GLES20.glGenTextures(1, textures, 0);

                YuvConverter yuvConverter = new YuvConverter();
                WindowManager windowManager = (WindowManager) appContext.getSystemService(Context.WINDOW_SERVICE);

                GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
                // Set filtering
                GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
                GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);

                // The bitmap is drawn on the GPU at this point.
                TextureBufferImpl buffer = new TextureBufferImpl(cameraWidth, cameraHeight - 3, VideoFrame.TextureBuffer.Type.RGB, textures[0], matrix, surTexture.getHandler(), yuvConverter, null);

                Resources resources = appContext.getResources();
                float scale = resources.getDisplayMetrics().density;

                Log.d("InitialsClass before", "camera start capturer width- " + cameraWidth + " height- " + cameraHeight);

                while (!Thread.currentThread().isInterrupted()) {
                    ByteBuffer gBuffer = frame.getByteBuffer();
                    if (gBuffer != null) {
                        Log.d("InitialsClass ", "gBuffer not null");
                        cameraBitmap.copyPixelsFromBuffer(gBuffer);
                    }

                    if (cameraBitmap != null) {
                        if (canvas == null) {
                            canvas = new Canvas();
                        }

                        if (appContext.getResources().getConfiguration().orientation == ORIENTATION_PORTRAIT) {
                            rotationDegree = -90;
                        } else {
                            assert windowManager != null;
                            if (windowManager.getDefaultDisplay().getRotation() == Surface.ROTATION_0) {
                                // clockwise
                                rotationDegree = 0;
                            } else if (windowManager.getDefaultDisplay().getRotation() == Surface.ROTATION_90) {
                                // anti-clockwise
                                rotationDegree = -180;
                            }
                        }

                        canvas.save(); // save the position of the canvas
                        canvas.rotate(rotationDegree, (cameraBitmap.getWidth() / 2), (cameraBitmap.getHeight() / 2)); // rotate the canvas
                        canvas.drawBitmap(cameraBitmap, 0, 0, null); // draw the image on the rotated canvas
                        canvas.restore(); // restore the canvas position

                        matrix.setScale(-1, 1);
                        matrix.postTranslate(cameraBitmap.getWidth(), 0);
                        matrix.setScale(1, -1);
                        matrix.postTranslate(0, cameraBitmap.getHeight());
                        canvas.setMatrix(matrix);

                        if (textPaint == null) {
                            textPaint = new TextPaint();
                        }
                        textPaint.setColor(Color.WHITE);
                        textPaint.setTypeface(Typeface.create(typeFace, Typeface.BOLD));
                        textPaint.setTextSize((int) (11 * scale));
                        if (textBounds == null) {
                            textBounds = new Rect();
                        }
                        textPaint.getTextBounds(userName, 0, userName.length(), textBounds);
                        textPaint.setTextAlign(Paint.Align.LEFT);
                        textPaint.setAntiAlias(true);
                        canvas.drawText(userName, x, y, textPaint);

                        if (paint == null) {
                            paint = new Paint();
                        }
                        if (isLocalCandidate) {
                            paint.setColor(Color.GREEN);
                        } else {
                            paint.setColor(Color.TRANSPARENT);
                        }
                        paint.setStrokeWidth(8);
                        paint.setStyle(Paint.Style.STROKE);
                        canvas.drawRect(0, 8, cameraWidth - 8, cameraHeight - 8, paint);

                        if (surTexture != null && surTexture.getHandler() != null && surTexture.getHandler().getLooper().getThread().isAlive()) {
                            surTexture.getHandler().post(() -> {
                                // Load the bitmap into the bound texture.
                                GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, cameraBitmap, 0);
                                // We transfer it to the VideoFrame.
                                VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);
                                long frameTime = System.nanoTime() - start;
                                VideoFrame videoFrame = new VideoFrame(i420Buf, 0, frameTime);
                                capturerObs.onFrameCaptured(videoFrame);
                            });
                        }
                    }
                    Thread.sleep(100);
                }
            } catch (InterruptedException ex) {
                Log.d("InitialsClass camera", ex.toString());
                Thread.currentThread().interrupt();
                return;
            }
        });
    }
    captureThread.start();
}
@Anael. Check it out.

I was working on a similar application, so I'll share the draw-on-WebRTC-stream part:
What you need to do is get the stream onto a canvas.
Then have a drawing application edit the canvas; I looked at a William Malone project. (If you want to import pictures, make them transparent!)
Finally (what you missed, I guess), stream from the canvas as you would with any WebRTC source.
A little demo I cooked up specially for you here (local WebRTC, see the log).
PS: I use getDisplayMedia, not getUserMedia, as my webcam is kaput...

Related

screenshot of Video on surface view shows black screen [duplicate]

I am attempting to take a screenshot of my game through code and share it through an Intent. I am able to do both of those things; however, the screenshot always appears black. Here is the code related to sharing the screenshot:
View view = MainActivity.getView();
view.setDrawingCacheEnabled(true);
Bitmap screen = Bitmap.createBitmap(view.getDrawingCache(true));
.. save Bitmap
This is in the MainActivity:
view = new GameView(this);
view.setLayoutParams(new RelativeLayout.LayoutParams(
RelativeLayout.LayoutParams.FILL_PARENT,
RelativeLayout.LayoutParams.FILL_PARENT));
public static SurfaceView getView() {
return view;
}
And the View itself:
public class GameView extends SurfaceView implements SurfaceHolder.Callback {
private static SurfaceHolder surfaceHolder;
...etc
And this is how I am Drawing everything:
Canvas canvas = surfaceHolder.lockCanvas(null);
if (canvas != null) {
Game.draw(canvas);
...
OK, based on some answers, I have constructed this:
public static void share() {
Bitmap screen = GameView.SavePixels(0, 0, Screen.width, Screen.height);
Calendar c = Calendar.getInstance();
Date d = c.getTime();
String path = Images.Media.insertImage(
Game.context.getContentResolver(), screen, "screenShotBJ" + d
+ ".png", null);
System.out.println(path + " PATH");
Uri screenshotUri = Uri.parse(path);
final Intent emailIntent = new Intent(
android.content.Intent.ACTION_SEND);
emailIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
emailIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
emailIntent.setType("image/png");
Game.context.startActivity(Intent.createChooser(emailIntent,
"Share High Score:"));
}
The GameView contains the following method:
public static Bitmap SavePixels(int x, int y, int w, int h) {
    EGL10 egl = (EGL10) EGLContext.getEGL();
    GL10 gl = (GL10) egl.eglGetCurrentContext().getGL();
    int b[] = new int[w * (y + h)];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
    for (int i = 0, k = 0; i < h; i++, k++) {
        for (int j = 0; j < w; j++) {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - k - 1) * w + j] = pix1;
        }
    }
    Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    return sb;
}
The screenshot is still black. Is there something wrong with the way I am saving it, perhaps?
I have attempted several different methods to take the screenshot, but none of them worked:
The one shown in the code above was the most commonly suggested one, but it does not seem to work.
Is this an issue with using SurfaceView? And if so, why does view.getDrawingCache(true) even exist if I can't use it, and how do I fix this?
My code:
public static void share() {
// GIVES BLACK SCREENSHOT:
Calendar c = Calendar.getInstance();
Date d = c.getTime();
Game.update();
Bitmap.Config conf = Bitmap.Config.RGB_565;
Bitmap image = Bitmap.createBitmap(Screen.width, Screen.height, conf);
Canvas canvas = GameThread.surfaceHolder.lockCanvas(null);
canvas.setBitmap(image);
Paint backgroundPaint = new Paint();
backgroundPaint.setARGB(255, 40, 40, 40);
canvas.drawRect(0, 0, canvas.getWidth(), canvas.getHeight(),
backgroundPaint);
Game.draw(canvas);
Bitmap screen = Bitmap.createBitmap(image, 0, 0, Screen.width,
Screen.height);
canvas.setBitmap(null);
GameThread.surfaceHolder.unlockCanvasAndPost(canvas);
String path = Images.Media.insertImage(
Game.context.getContentResolver(), screen, "screenShotBJ" + d
+ ".png", null);
System.out.println(path + " PATH");
Uri screenshotUri = Uri.parse(path);
final Intent emailIntent = new Intent(
android.content.Intent.ACTION_SEND);
emailIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
emailIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
emailIntent.setType("image/png");
Game.context.startActivity(Intent.createChooser(emailIntent,
"Share High Score:"));
}
Thank you.
There is a great deal of confusion about this, and a few correct answers.
Here's the deal:
A SurfaceView has two parts, the Surface and the View. The Surface is on a completely separate layer from all of the View UI elements. The getDrawingCache() approach works on the View layer only, so it doesn't capture anything on the Surface.
The buffer queue has a producer-consumer API, and it can have only one producer. Canvas is one producer, GLES is another. You can't draw with Canvas and read pixels with GLES. (Technically, you could if the Canvas were using GLES and the correct EGL context was current when you went to read the pixels, but that's not guaranteed. Canvas rendering to a Surface is not accelerated in any released version of Android, so right now there's no hope of it working.)
(Not relevant for your case, but I'll mention it for completeness:) A Surface is not a frame buffer, it is a queue of buffers. When you submit a buffer with GLES, it is gone, and you can no longer read from it. So if you were rendering with GLES and capturing with GLES, you would need to read the pixels back before calling eglSwapBuffers().
With Canvas rendering, the easiest way to "capture" the Surface contents is to simply draw it twice. Create a screen-sized Bitmap, create a Canvas from the Bitmap, and pass it to your draw() function.
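For the Canvas case this only takes a few lines. A minimal sketch, assuming your existing draw routine (Game.draw(Canvas) in the question) can render into any Canvas it is handed, and that width/height are your Surface dimensions:
static Bitmap captureFrame(int width, int height) {
    // Render the same content a second time, into a Bitmap-backed Canvas.
    Bitmap capture = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas captureCanvas = new Canvas(capture);
    Game.draw(captureCanvas); // the same draw call used for the Surface
    return capture;           // save or share this Bitmap as usual
}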
With GLES rendering, you can use glReadPixels() before the buffer swap to grab the pixels. There's a (less-expensive than the code in the question) implementation of the grab code in Grafika; see saveFrame() in EglSurfaceBase.
If you were sending video directly to a Surface (via MediaPlayer) there would be no way to capture the frames, because your app never has access to them -- they go directly from mediaserver to the compositor (SurfaceFlinger). You can, however, route the incoming frames through a SurfaceTexture, and render them twice from your app, once for display and once for capture. See this question for more info.
One alternative is to replace the SurfaceView with a TextureView, which can be drawn on like any other Surface. You can then use one of the getBitmap() calls to capture a frame. TextureView is less efficient than SurfaceView, so this is not recommended for all situations, but it's straightforward to do.
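As a rough sketch, assuming the SurfaceView has already been swapped for a TextureView (textureView below is a hypothetical field) and content is rendering to it:
Bitmap frame = textureView.getBitmap();          // current frame at the view's size
Bitmap scaled = textureView.getBitmap(640, 480); // or request an explicit size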
If you were hoping to get a composite screen shot containing both the Surface contents and the View UI contents, you will need to capture the Canvas as above, capture the View with the usual drawing cache trick, and then composite the two manually. Note this won't pick up the system parts (status bar, nav bar).
Update: on Lollipop and later (API 21+) you can use the MediaProjection class to capture the entire screen with a virtual display. There are some trade-offs with this approach, e.g. you're capturing the rendered screen, not the frame that was sent to the Surface, so what you get may have been up- or down-scaled to fit the window. In addition, this approach involves an Activity switch since you have to create an intent (by calling createScreenCaptureIntent on the ProjectionManager object) and wait for its result.
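A hedged sketch of that flow (the Activity, request code, and field names below are illustrative, and the virtual-display/ImageReader plumbing that actually receives the pixels is omitted):
import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.media.projection.MediaProjection;
import android.media.projection.MediaProjectionManager;

public class ScreenCaptureActivity extends Activity {
    private static final int REQUEST_MEDIA_PROJECTION = 1; // arbitrary request code
    private MediaProjectionManager projectionManager;

    void startCapture() {
        projectionManager = (MediaProjectionManager)
                getSystemService(Context.MEDIA_PROJECTION_SERVICE);
        // This triggers the system permission dialog (the Activity switch mentioned above).
        startActivityForResult(projectionManager.createScreenCaptureIntent(),
                REQUEST_MEDIA_PROJECTION);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_MEDIA_PROJECTION && resultCode == RESULT_OK) {
            MediaProjection projection =
                    projectionManager.getMediaProjection(resultCode, data);
            // projection.createVirtualDisplay(...) with an ImageReader Surface
            // would go here to actually receive the screen frames.
        }
    }
}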
If you want to learn more about how all this stuff works, see the Android System-Level Graphics Architecture doc.
I know it's a late reply, but for those who face the same problem:
we can use PixelCopy to fetch the snapshot. It's available on API level 24 and above.
PixelCopy.request(surfaceViewObject,BitmapDest,listener,new Handler());
where:
surfaceViewObject is the SurfaceView object,
BitmapDest is the bitmap object the image will be saved into (it can't be null),
listener is an OnPixelCopyFinishedListener.
For more info, refer to https://developer.android.com/reference/android/view/PixelCopy
Update 2020: view.setDrawingCacheEnabled(true) is deprecated as of API 28.
If you are using a normal View, you can create a Canvas backed by a specified bitmap, ask the view to draw itself into that Canvas, and return the bitmap filled by the Canvas:
/**
* Copy View to Canvas and return bitMap
*/
fun getBitmapFromView(view: View): Bitmap? {
var bitmap =
Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
val canvas = Canvas(bitmap)
view.draw(canvas)
return bitmap
}
Or you can fill the canvas with a default color before drawing the view into it:
/**
* Copy View to Canvas and return bitMap and fill it with default color
*/
fun getBitmapFromView(view: View, defaultColor: Int): Bitmap? {
var bitmap =
Bitmap.createBitmap(view.width, view.height, Bitmap.Config.ARGB_8888)
var canvas = Canvas(bitmap)
canvas.drawColor(defaultColor)
view.draw(canvas)
return bitmap
}
The above approach will not work for a SurfaceView; it will be drawn as a hole (black area) in the screenshot.
For a SurfaceView, since Android API 24, you need to use PixelCopy.
/**
* Pixel copy to copy SurfaceView/VideoView into BitMap
*/
fun usePixelCopy(videoView: SurfaceView, callback: (Bitmap?) -> Unit) {
    val bitmap: Bitmap = Bitmap.createBitmap(
        videoView.width,
        videoView.height,
        Bitmap.Config.ARGB_8888
    )
    try {
        // Create a handler thread to offload the processing of the image.
        val handlerThread = HandlerThread("PixelCopier")
        handlerThread.start()
        PixelCopy.request(
            videoView, bitmap,
            PixelCopy.OnPixelCopyFinishedListener { copyResult ->
                if (copyResult == PixelCopy.SUCCESS) {
                    callback(bitmap)
                }
                handlerThread.quitSafely()
            },
            Handler(handlerThread.looper)
        )
    } catch (e: IllegalArgumentException) {
        callback(null)
        // PixelCopy may throw IllegalArgumentException, make sure to handle it
        e.printStackTrace()
    }
}
This approach can take a screenshot of any subclass of SurfaceView, e.g. VideoView:
Screenshot.usePixelCopy(videoView) { bitmap: Bitmap? ->
processBitMap(bitmap)
}
Here is a complete method to take a screenshot of a SurfaceView using PixelCopy. It requires API 24 (Android N).
@RequiresApi(api = Build.VERSION_CODES.N)
private void capturePicture() {
    Bitmap bmp = Bitmap.createBitmap(surfaceView.getWidth(), surfaceView.getHeight(), Bitmap.Config.ARGB_8888);
    PixelCopy.request(surfaceView, bmp, i -> {
        imageView.setImageBitmap(bmp); // "iv_Result" is the image view
    }, new Handler(Looper.getMainLooper()));
}
This is because SurfaceView uses an OpenGL thread for drawing and draws directly to a hardware buffer. You have to use glReadPixels (and probably a GLWrapper).
See the thread: Android OpenGL Screenshot

Why do I always get a completely black picture when taking a screenshot of a GLSurfaceView?

I use a GLSurfaceView to display the camera preview data. I use createBitmapFromGLSurface, which aims to grab the pixels and save them to a Bitmap.
However, I always get a completely black picture after saving the bitmap to a file. Where am I going wrong?
Following is my code snippet.
@Override
public void onDrawFrame(GL10 gl) {
    if (mIsNeedCaptureFrame) {
        mIsNeedCaptureFrame = false;
        createBitmapFromGLSurface(width, height);
    }
}

private void createBitmapFromGLSurface(int w, int h) {
    ByteBuffer buf = ByteBuffer.allocateDirect(w * h * 4);
    buf.position(0);
    buf.order(ByteOrder.LITTLE_ENDIAN);
    GLES20.glReadPixels(0, 0, w, h,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();
    Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);
    Log.i(TAG, "createBitmapFromGLSurface w:" + w + ",h:" + h);
    mFrameCapturedCallback.onFrameCaptured(bmp);
}
Update:
public void captureFrame(FrameCapturedCallback callback) {
    mIsNeedCaptureFrame = true;
    mCallback = callback;
}

public void takeScreenshot() {
    final int width = mIncomingWidth;
    final int height = mIncomingHeight;
    EglCore eglCore = new EglCore(EGL14.eglGetCurrentContext(), EglCore.FLAG_RECORDABLE);
    OffscreenSurface surface = new OffscreenSurface(eglCore, width, height);
    surface.makeCurrent();
    Bitmap bitmap = surface.captureFrame();
    for (int x = 0, y = 0; x < 100; x++, y++) {
        Log.i(TAG, "getPixel:" + bitmap.getPixel(x, y));
    }
    surface.release();
    eglCore.release();
    mCallback.onFrameCaptured(bitmap);
}

@Override
public void onDrawFrame(GL10 gl) {
    mSurfaceTexture.updateTexImage();
    if (mIsNeedCaptureFrame) {
        mIsNeedCaptureFrame = false;
        takeScreenshot();
        return;
    }
    ....
}
The logs are as follows:
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
getPixel:0
...
This won't work.
To understand why, bear in mind that a SurfaceView Surface is a queue of buffers with a producer-consumer relationship. When displaying camera preview data, the Camera is the producer, and the system compositor (SurfaceFlinger) is the consumer. Only the producer can send data to the Surface -- there can be only one producer at a time -- and only the consumer can examine buffers from the Surface.
If you were drawing on the Surface yourself, so your app would be the producer, you would be able to see what you've drawn, but only while in onDrawFrame(). When it returns, GLSurfaceView calls eglSwapBuffers(), and the frame you've drawn is sent off to the consumer. (Technically, because the buffers are in a pool and are re-used, you can read frames outside onDrawFrame(); but what you're reading would be stale data from 1-2 frames back, not the one you just drew.)
What you're doing here is reading data from an EGLSurface that has never been drawn on and isn't connected to the SurfaceView. That's why it's always reading black. The Camera doesn't "draw" the preview, it just takes a buffer of YUV data and shoves it into the BufferQueue.
If you want to show the preview and capture frames using GLSurfaceView, see the "show + capture camera" example in Grafika. You can replace the MediaCodec code with a glReadPixels() (see EglSurfaceBase.saveFrame(), which looks very much like what you have).

Draw in FrameBuffer

I am working with a program which draws some 2D textures and works with them. The program uses libgdx. I have a problem using FrameBuffer: I draw some texture into my FrameBuffer, and after that I need to save the changed texture (or drawing) and use this texture in the same FrameBuffer one more time. I tried saving the texture via
Texture texture = mFrameBuffer.getColorBufferTexture()
and I also tried just binding the texture from the FrameBuffer:
mFilterBuffer.getColorBufferTexture().bind();
For the first iteration everything works fine. But when I try to use the FrameBuffer's own ColorBufferTexture as the input texture, I get a fully black texture.
Code:
public void process(MySprite psObject, float startX, float startY, 
float endX, float endY, int mWidth, int mHeight) {
         boolean frst = false;
         if(psObject.getFrameBuffer() == null){
             psObject.setFrameBuffer(new FrameBuffer(Pixmap.Format.RGBA8888, psObject.getTexture().getWidth(), psObject.getTexture().getHeight(), true));
         }
         if(pSprite == null || pSprite != psObject){
             mFrameBuffer = psObject.getFrameBuffer();
             frst = true;
             pSprite = psObject;
         }
         mFrameBuffer.begin();
         Gdx.gl.glViewport(0, 0, psObject.getTexture().getWidth(), psObject.getTexture().getHeight());
         Gdx.graphics.getGL20().glClearColor(0f, 0f, 0f, 1f);
         Gdx.graphics.getGL20().glClear(GL20.GL_COLOR_BUFFER_BIT);
         ShaderProgram shader = MyUtils.newInstance().getCurrentShader();
         if(!shader.isCompiled()){
             Log.i("ERROR", "SHERROR " + shader.getLog());
         }
         if(shader != null){
             if(frst){
                 psObject.getTexture().bind();
             }else{
                 mFrameBuffer.getColorBufferTexture().bind();
             }
             shader.begin();
             Matrix4 matrix = new Matrix4();
             matrix.setToRotation(1, 0, 0, 180);
             matrix.scale(scaleSizeInFilterProcessor, scaleSizeInFilterProcessor, 1);
             shader.setUniformMatrix("u_worldView", matrix);
             shader.setUniformi("u_texture", 0);
             float [] start = new float[]{0f,0};
             float [] end = new float[]{1f,1f};
             MyUtils.newInstance().getShaderData(shader, start, end, mWidth, mHeight);
             psObject.getMesh().render(shader, GL20.GL_TRIANGLES);
             shader.end();
         }
         mFrameBuffer.end();
     }
Your code needs some refactoring ;). Anyway, you can't read and write from/to the same FBO if that's your question.
You'll need two FBOs (say A and B):
1) Draw the scene to A.
2) Bind A's color texture.
3) Draw the scene to B (now you can read from A).
Note that you can extend libgdx FBO to have many textures associated with the same FBO.
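As an illustration of the ping-pong idea, here is a sketch only (it uses a SpriteBatch instead of the custom shader/mesh from the question; src and dst are two FrameBuffers created elsewhere with the same size and format, and the batch's projection is assumed to match that size):
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.graphics.glutils.FrameBuffer;

public class PingPongFilter {
    private FrameBuffer src, dst; // e.g. new FrameBuffer(Pixmap.Format.RGBA8888, w, h, false)
    private final SpriteBatch batch = new SpriteBatch();

    public Texture applyPass() {
        dst.begin();                                   // write into dst...
        batch.begin();
        batch.draw(src.getColorBufferTexture(), 0, 0); // ...while reading only from src (a filter shader could be set on the batch)
        batch.end();
        dst.end();

        // Swap so the next pass reads what was just written.
        FrameBuffer tmp = src;
        src = dst;
        dst = tmp;
        return src.getColorBufferTexture();            // latest result
    }
}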

32Bit Bitmaps slow on Google TV. Bad Pixel format?

I am wondering what I'm doing wrong here.
I create a bitmap with dimensions 1080x1080:
Bitmap level = Bitmap.createBitmap(1080, 1080, Config.ARGB_8888);
then I draw some lines and rects into it and put it on canvas in my SurfaceView:
c.drawBitmap(LevelBitmap.level, 0, 0, null);
and this operation takes 20 ms on my Google TV NSZ-GS7, which is much too long.
Setting the pixel format in surfaceCreated to RGBX_8888 or RGBA_8888 makes things even worse: 30 ms for drawing.
holder.setFormat(PixelFormat.RGBX_8888);
Setting it to RGB_888
holder.setFormat(PixelFormat.RGB_888);
leads to NullPointerExceptions when I draw something.
Only the combination of Config.RGB_565 for the bitmap and PixelFormat.RGB_565 for the window
draws the bitmap to the canvas in an acceptable 10 ms, but the quality of RGB_565 is horrible.
Am I missing something? Is Google TV not capable of 32-bit graphics? What is its "natural" pixel and bitmap format? Is there any documentation on this topic I missed on Google?
Here is the onDraw method I use to measure the timings:
private static void doDraw(Canvas c) {
if (LevelBitmap.level == null){
LevelBitmap.createBitmap();
}
long time = System.currentTimeMillis();
c.drawBitmap(LevelBitmap.level, 0, 0, null);
Log.e("Timer:",""+(System.currentTimeMillis()-time) );
if(true) return;
and here the class creating the bitmap:
public final class LevelBitmap {
public static Bitmap level;
public static void createBitmap() {
level = Bitmap.createBitmap(GameViewThread.mCanvasHeight, GameViewThread.mCanvasHeight, Config.ARGB_8888);
Canvas canvas = new Canvas(level);
canvas.scale(GameViewThread.fieldScaleFactor, GameViewThread.fieldScaleFactor);
canvas.translate(500, 500);
// floor covering
int plankWidth = 50;
int plankSpace = 500;
short size = TronStatics.FIELDSIZE;
for (int i = plankSpace; i < size; i = i + plankSpace) {
canvas.drawRect(i, 0, i + plankWidth, size, GameViewThread.bgGrey);
canvas.drawRect(0, i, size, i + plankWidth, GameViewThread.bgGrey);
}
// draw field
canvas.drawRect(-10, -10, TronStatics.FIELDSIZE + 10, TronStatics.FIELDSIZE + 10, GameViewThread.wallPaint);
}
}

How to load a PNG image on the SD card into a Canvas to draw on in Android?

My app is a basic drawing app. The user can draw on a Canvas and save the image as a PNG. He can load previously drawn images and edit them.
I was able to do the first part; that is, the user can draw and save the image to the SD card. I'm having trouble loading the saved PNG file onto the Canvas and drawing on it.
Here is the run method in my SurfaceView class.
public void run() {
    Canvas canvas = null;
    while (running) {
        try {
            canvas = holder.lockCanvas(null);
            synchronized (holder) {
                if (mBitmap == null) {
                    mBitmap = Bitmap.createBitmap(1, 1, Bitmap.Config.ARGB_8888);
                }
                final Canvas c = new Canvas(mBitmap);
                c.drawColor(Color.WHITE);
                //pad.onDraw(canvas);
                Paint p = new Paint();
                p.setColor(Color.GRAY);
                for (double x = 0.5; x < c.getWidth(); x += 30) {
                    c.drawLine((float) x, 0, (float) x, c.getHeight(), p);
                }
                for (double y = 0.5; y < c.getHeight(); y += 30) {
                    c.drawLine(0, (float) y, c.getWidth(), (float) y, p);
                }
                pad.onDraw(c);
                canvas.drawBitmap(mBitmap, 0, 0, null);
            }
        } finally {
            if (canvas != null) {
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }
}
I tried loading the PNG into mBitmap, but it didn't work.
Any help appreciated.
Thank you!
In your code you are not loading the image from the SD card at all; is this intentional? This is how you open an image from the SD card:
mBitmap = BitmapFactory.decodeFile("/sdcard/test.png");
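One extra caveat worth noting for your use case (this is a sketch on my part, not something you missed in your posted code): BitmapFactory returns an immutable bitmap, so to keep drawing on the loaded image you need a mutable copy before attaching a Canvas to it:
Bitmap loaded = BitmapFactory.decodeFile("/sdcard/test.png");
mBitmap = loaded.copy(Bitmap.Config.ARGB_8888, true); // true = mutable copy you can draw on
// Now "new Canvas(mBitmap)" in your run() loop will draw on top of the loaded image.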
