UPDATE:
Works on the Emulator with Android Oreo (8.x). I have the option of making changes directly to the Android sources, so it would also help if someone knew what I have to change or update in the Android sources to get this working (so I don't strictly need an Android 7 workaround; an update to Android 8, however, is not possible).
I have a SurfaceView inside a FrameLayout. The SurfaceView usually displays a video; for the purposes of this example I'm drawing an image instead.
The problem is that if I set the width of the FrameLayout (or the SurfaceView) above 10,000 pixels, the content gets cropped on the left side.
Tested on Android 7.1.1 (on a device and on the Emulator: Android TV (1080p), API 25).
public class TestActivity extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//add surfaceview
TestSurfaceView testSurfaceView = new TestSurfaceView(this);
setContentView(testSurfaceView);
//resize
FrameLayout.LayoutParams bgParams = (FrameLayout.LayoutParams) testSurfaceView.getLayoutParams();
//testcase full hd - working
bgParams.width = 1920;
//testcase 2 - working - each segment is 330px as 9900 / 1920 * 64 (default segment width) == 330
//bgParams.width = 9900;
//testcase 3 - working
//bgParams.width = 10000;
//testcase 4 - not working - each segment is 335px which is correct but first cell gets cropped by exactly 50px to the left
//bgParams.width = 10050;
bgParams.height = (int)Math.floor(bgParams.width * (9d/16d)); //16:9
testSurfaceView.setX(0); //doesn't help
/*
Also, the position counts toward the 10,000px limitation, as you can see in the following test cases
*/
/*
bgParams.width = 9900;
bgParams.height = (int)Math.floor(bgParams.width * (9d/16d)); //16:9
//works - as 9900 + 90 < 10000
testSurfaceView.setX(90);
//doesn't work, crops 50px on the left - 9900 + 150 -> 10050
testSurfaceView.setX(150);
*/
}
}
public class TestSurfaceView extends SurfaceView implements SurfaceHolder.Callback
{
public TestSurfaceView(TestActivity context) {
super(context);
SurfaceHolder holder = this.getHolder();
holder.addCallback(this);
}
@Override
public void surfaceCreated(SurfaceHolder surfaceHolder)
{
//load bitmap from file
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap bitmap = BitmapFactory.decodeFile(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS).getAbsolutePath() + "/testimg.png", options);
Canvas c = surfaceHolder.lockCanvas();
Rect rect = new Rect();
rect.set(0,0, 1920, 1080); //image size
Rect destRect = new Rect();
destRect.set(0, 0, this.getWidth(), this.getHeight());
//draw the image on the surface
c.drawBitmap(bitmap, rect, destRect, null);
surfaceHolder.unlockCanvasAndPost(c);
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
}
styles.xml - for the theme
<resources>
<!-- Base application theme. -->
<style name="AppTheme" parent="android:Theme.Material.Light.NoActionBar.Fullscreen">
<!-- Customize your theme here. -->
</style>
</resources>
The code above produces the following output, which is correct.
If I change bgParams.width to 9600, it scales up correctly and still displays starting from the left edge of the image:
But if I change it to e.g. 10050, the image gets cropped by 50 pixels on the left.
If I set:
destRect.set(50, 0, this.getWidth(), this.getHeight());
It gets displayed correctly, but since I can't do that for the MediaPlayer, and it's a strange workaround anyway, I'm trying to find a proper solution.
I also tried setting the sizes directly on the SurfaceView, and instead of changing the LayoutParams I tried setting scaleX and scaleY on the FrameLayout, but I ended up with the same results.
(By the way, the OpenGL maximum texture size is around 16,000px; setting the width above that results in a black screen and an exception, so that is not the cause of the problem.)
Update:
Posted all sources. Here is the complete Android Studio project as well:
WeTransfer project download link
Related
ExoPlayer - SurfaceView
Camera2 + MediaCodec - GLSurfaceView
I am using the two views above for video playback and camera recording.
UI-1: Exo-Surf at the center and Cam-GLS in the top right corner.
UI-2: Cam-GLS at the center and Exo-Surf in the top right corner.
To achieve this I am using setZOrderOnTop to set the z-index, as both are inside a RelativeLayout.
(exoPlayerView.videoSurfaceView as? SurfaceView)?.setZOrderOnTop(true/false)
It seems to work fine on a Samsung S9+ with API 29 (Android 10), and also on API 28.
But on API 21-27 it misbehaves with some random issues:
Dash-A: the top part of the SurfaceView/GLSurfaceView is not visible
Dash-B: the bottom part of the SurfaceView/GLSurfaceView is not visible
The entire SurfaceView/GLSurfaceView becomes completely transparent in the top right corner
Also tried using setZOrderMediaOverlay but no luck.
I am sure two SurfaceViews can work together, since WhatsApp and Google Duo use them in video calls. But I am wondering whether the GLSurfaceView is causing an issue ("something about locking the GL thread", as commented below in this answer).
Hoping for a working solution for API 21+; any reference link or suggestion would be highly appreciated.
Instead of using the built-in GLSurfaceView, you'll have to create multiple SurfaceViews, and manage how OpenGL draws on one (or more) of those.
The Grafika code (that I mentioned in my comment to the answer you link) is here:
https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/MultiSurfaceActivity.java
In that code, onCreate creates the surfaces:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_multi_surface_test);
// #1 is at the bottom; mark it as secure just for fun. By default, this will use
// the RGB565 color format.
mSurfaceView1 = (SurfaceView) findViewById(R.id.multiSurfaceView1);
mSurfaceView1.getHolder().addCallback(this);
mSurfaceView1.setSecure(true);
// #2 is above it, in the "media overlay"; must be translucent or we will totally
// obscure #1 and it will be ignored by the compositor. The addition of the alpha
// plane should switch us to RGBA8888.
mSurfaceView2 = (SurfaceView) findViewById(R.id.multiSurfaceView2);
mSurfaceView2.getHolder().addCallback(this);
mSurfaceView2.getHolder().setFormat(PixelFormat.TRANSLUCENT);
mSurfaceView2.setZOrderMediaOverlay(true);
// #3 is above everything, including the UI. Also translucent.
mSurfaceView3 = (SurfaceView) findViewById(R.id.multiSurfaceView3);
mSurfaceView3.getHolder().addCallback(this);
mSurfaceView3.getHolder().setFormat(PixelFormat.TRANSLUCENT);
mSurfaceView3.setZOrderOnTop(true);
}
The initial draw code is in:
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height)
which calls different local methods depending on some local flags. For example, it calls an example of GL drawing here:
private void drawRectSurface(Surface surface, int left, int top, int width, int height) {
EglCore eglCore = new EglCore();
WindowSurface win = new WindowSurface(eglCore, surface, false);
win.makeCurrent();
GLES20.glClearColor(0, 0, 0, 0);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
for (int i = 0; i < 4; i++) {
int x, y, w, h;
if (width < height) {
// vertical
w = width / 4;
h = height;
x = left + w * i;
y = top;
} else {
// horizontal
w = width;
h = height / 4;
x = left;
y = top + h * i;
}
GLES20.glScissor(x, y, w, h);
switch (i) {
case 0: // 50% blue at 25% alpha, pre-multiplied
GLES20.glClearColor(0.0f, 0.0f, 0.125f, 0.25f);
break;
case 1: // 100% blue at 25% alpha, pre-multiplied
GLES20.glClearColor(0.0f, 0.0f, 0.25f, 0.25f);
break;
case 2: // 200% blue at 25% alpha, pre-multiplied (should get clipped)
GLES20.glClearColor(0.0f, 0.0f, 0.5f, 0.25f);
break;
case 3: // 100% white at 25% alpha, pre-multiplied
GLES20.glClearColor(0.25f, 0.25f, 0.25f, 0.25f);
break;
}
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
}
GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
win.swapBuffers();
win.release();
eglCore.release();
}
I haven't used this code, so I can only suggest you search for additional details about the various calls you see in that code.
FIRST, try to get a simple example working that has two overlapping SurfaceViews, WITHOUT any OpenGL calls. E.g. solid background color views that overlap. And I reiterate the key point: Do Not make either of them a GLSurfaceView!
THEN attempt to change one of the views to initialize and use OpenGL. (Using logic similar to the code I describe above; still NOT a GLSurfaceView.)
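For reference, here is a minimal sketch of that first step: two plain overlapping SurfaceViews filled with solid colors and no OpenGL at all. All class and variable names are illustrative, not taken from Grafika:
public class OverlapActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        FrameLayout root = new FrameLayout(this);
        // Bottom surface: opaque solid color, default z-order
        SurfaceView bottom = new SurfaceView(this);
        fillWhenCreated(bottom, Color.BLUE);
        root.addView(bottom);
        // Top surface: media overlay + translucent format, so it does not
        // fully obscure the bottom surface
        SurfaceView top = new SurfaceView(this);
        top.setZOrderMediaOverlay(true);
        top.getHolder().setFormat(PixelFormat.TRANSLUCENT);
        fillWhenCreated(top, Color.argb(128, 255, 0, 0));
        root.addView(top, new FrameLayout.LayoutParams(400, 400));
        setContentView(root);
    }
    // Fills a surface with a solid color once it exists
    private void fillWhenCreated(SurfaceView view, final int color) {
        view.getHolder().addCallback(new SurfaceHolder.Callback() {
            @Override
            public void surfaceCreated(SurfaceHolder holder) {
                Canvas canvas = holder.lockCanvas();
                canvas.drawColor(color, PorterDuff.Mode.SRC);
                holder.unlockCanvasAndPost(canvas);
            }
            @Override
            public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { }
            @Override
            public void surfaceDestroyed(SurfaceHolder holder) { }
        });
    }
}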
Following are the screenshots when using a TextureView with the Camera2 APIs. In full screen the preview stretches, but it works when using a lower resolution (second image).
How can I use this preview in full screen without stretching it?
The answer below assumes you are in portrait mode only.
Your question is
How to use the preview in full-screen without stretching it
Let's break it down to 2 things:
You want the preview to fill the screen
The preview cannot be distorted
First you need to know that this is logically impossible without cropping: if your device's viewport has a different aspect ratio from every resolution the camera provides, the preview must be either distorted or cropped (or letterboxed).
So I would assume you accept cropping the preview.
Step 1: Get a list of available resolutions
StreamConfigurationMap map = mCameraCharacteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map == null) {
throw new IllegalStateException("Failed to get configuration map: " + mCameraId);
}
Size[] sizes = map.getOutputSizes(SurfaceTexture.class);
Now you have a list of the available resolutions (Sizes) of your device's camera.
Step 2: Find the best aspect ratio
The idea is to loop over the sizes and see which one fits best. You will probably need to write your own definition of "best fit".
I am not going to provide exact code here since what I have is quite different from your use case. But ideally it should be something like the stub below (a fleshed-out sketch follows it):
Size findBestSize (Size[] sizes) {
//Logic goes here
}
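As a rough illustration only; the "best fit" policy is yours to define, and the targetRatio parameter is an addition for this sketch:
// Illustrative sketch: picks the largest output size whose aspect ratio
// matches the target ratio within a small tolerance
Size findBestSize(Size[] sizes, float targetRatio) {
    Size best = null;
    for (Size size : sizes) {
        float ratio = (float) size.getWidth() / size.getHeight();
        boolean matches = Math.abs(ratio - targetRatio) < 0.01f;
        if (matches && (best == null || size.getWidth() > best.getWidth())) {
            best = size;
        }
    }
    // Fall back to the first entry if nothing matched
    return best != null ? best : sizes[0];
}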
Step 3: Tell the Camera API that you want to use this size
//...
//TextureView has no setBufferSize()/getSurface(); the buffer size is set on
//its SurfaceTexture, which is then wrapped in a Surface
SurfaceTexture texture = textureView.getSurfaceTexture();
texture.setDefaultBufferSize(bestSize.getWidth(), bestSize.getHeight());
Surface surface = new Surface(texture);
try {
mPreviewRequestBuilder = mCamera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(surface);
mCamera.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
mSessionCallback, null);
} catch (final Exception e) {
//...
}
Step 4: Make your preview extend beyond your viewport
This part has nothing to do with the Camera2 API: we "crop" the preview by letting the SurfaceView / TextureView extend beyond the device's viewport.
First place your SurfaceView or TextureView in a RelativeLayout.
Use the code below to extend it beyond the screen, once you have the aspect ratio from Step 2.
Note that in this case you probably need to know this aspect ratio before you even start the camera.
//Suppose this value is obtained from Step 2.
//I simply test here by hardcoding a 3:4 aspect ratio, where my phone has a thinner aspect ratio.
float cameraAspectRatio = (float) 0.75;
//Preparation
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
int screenWidth = metrics.widthPixels;
int screenHeight = metrics.heightPixels;
int finalWidth = screenWidth;
int finalHeight = screenHeight;
int widthDifference = 0;
int heightDifference = 0;
float screenAspectRatio = (float) screenWidth / screenHeight;
//Determines whether we crop width or crop height
if (screenAspectRatio > cameraAspectRatio) { //Keep width crop height
finalHeight = (int) (screenWidth / cameraAspectRatio);
heightDifference = finalHeight - screenHeight;
} else { //Keep height crop width
finalWidth = (int) (screenHeight * cameraAspectRatio);
widthDifference = finalWidth - screenWidth;
}
//Apply the result to the Preview
RelativeLayout.LayoutParams lp = (RelativeLayout.LayoutParams) cameraView.getLayoutParams();
lp.width = finalWidth;
lp.height = finalHeight;
//The below 2 lines center the preview, since cropping by default occurs at the right and bottom
lp.leftMargin = - (widthDifference / 2);
lp.topMargin = - (heightDifference / 2);
cameraView.setLayoutParams(lp);
If you don't care about the result of Step 2, you can actually skip Steps 1 to 3 and simply use an existing library, as long as you can configure its aspect ratio. (This one looks like the best, but I haven't tried it yet.)
I have tested this using my forked library. Without modifying any of the library's code, I managed to make the preview fullscreen just by applying Step 4:
Before using Step 4:
After using Step 4:
And the preview shown just after taking a photo will not be distorted either, because the preview also extends beyond your screen.
But the output image will include an area that you cannot see in the preview, which makes perfect sense.
The code for Steps 1 to 3 is generally referenced from Google's CameraView.
That's a common problem on some devices; I've noticed it mostly on Samsung. You can use a trick: set a transformation matrix on your TextureView to get centerCrop behaviour like ImageView's, as sketched below.
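A minimal sketch of that transform, assuming you already know the view size and the camera buffer size (the helper name is mine, not from any library):
// Computes a center-crop Matrix for a TextureView, which by default
// stretches its buffer to fill the view bounds
private Matrix centerCropMatrix(int viewW, int viewH, int bufferW, int bufferH) {
    Matrix matrix = new Matrix();
    // Scale until the buffer covers the view in both dimensions, expressed
    // relative to the default stretch-to-fit mapping, centered in the view
    float scale = Math.max((float) viewW / bufferW, (float) viewH / bufferH);
    float scaleX = scale * bufferW / viewW;
    float scaleY = scale * bufferH / viewH;
    matrix.setScale(scaleX, scaleY, viewW / 2f, viewH / 2f);
    return matrix;
}
// Usage:
// textureView.setTransform(centerCropMatrix(textureView.getWidth(),
//         textureView.getHeight(), previewSize.getWidth(), previewSize.getHeight()));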
I also faced a similar situation, but this one line solved my problem:
view_finder.preferredImplementationMode = PreviewView.ImplementationMode.TEXTURE_VIEW
in your xml:
<androidx.camera.view.PreviewView
android:id="#+id/view_finder"
android:layout_width="match_parent"
android:layout_height="match_parent" />
For a camera implementation using CameraX you can refer to:
https://github.com/android/camera-samples/tree/master/CameraXBasic
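For context, here is a minimal sketch of binding a CameraX Preview use case to that PreviewView; the method names follow recent CameraX releases and may differ slightly in older versions:
// Assumes 'this' is an AppCompatActivity (i.e. a LifecycleOwner) and
// viewFinder is the PreviewView declared in the layout above
ListenableFuture<ProcessCameraProvider> future = ProcessCameraProvider.getInstance(this);
future.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = future.get();
        Preview preview = new Preview.Builder().build();
        preview.setSurfaceProvider(viewFinder.getSurfaceProvider());
        cameraProvider.bindToLifecycle(this, CameraSelector.DEFAULT_BACK_CAMERA, preview);
    } catch (Exception e) {
        Log.e("CameraX", "Failed to bind preview", e);
    }
}, ContextCompat.getMainExecutor(this));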
I figured out what your problem was. You were probably trying something like this:
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
@Override
public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int i, int j) {
cam.startPreview(surfaceTexture, i, j);
cam.takePicture();
}
public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int i, int i1) { }
public boolean onSurfaceTextureDestroyed(SurfaceTexture surfaceTexture) { return false; }
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) { }
});
I am creating a document scanning application in Android, using OpenCV and a scan library for cropping. I have drawn a rectangle with drawRect in the camera view; now I need to capture only the image inside that rectangular portion and display it in another activity.
The image in question:
For me, I would take the whole image, then crop.
Your question: "how do I know which part of the image is inside the rectangular portion, then only I can pass it". My answer: you can use the relative scaling between the whole image's dimensions and the camera display screen's dimensions. Then you will know which part of the rectangle to crop.
This is the code example.
Note that you still need to fill in some code to save the captured file as a .jpg, and to save it again after cropping (sketches for both are included below).
// 1. Save your bitmap to file
public class MyPictureCallback implements Camera.PictureCallback {
@Override
public void onPictureTaken(byte[] data, Camera camera) {
try {
//mPictureFile is a file to save the captured image
FileOutputStream fos = new FileOutputStream(mPictureFile);
fos.write(data);
fos.close();
} catch (FileNotFoundException e) {
    Log.d(TAG, "File not found: " + e.getMessage());
} catch (IOException e) {
    //write()/close() throw the checked IOException, which must be handled too
    Log.d(TAG, "Error accessing file: " + e.getMessage());
}
}
}
// Somewhere in your code
// 2.1 Load bitmap from your .jpg file
Bitmap bitmap = BitmapFactory.decodeFile(path+"/mPictureFile_name.jpg");
// 2.2 Rotate the bitmap to be the same as display, if need.
... Add some bitmap rotate code
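// One possible sketch of that step (an assumption: the capture needs a
// fixed 90-degree turn to match the display orientation)
Matrix rotateMatrix = new Matrix();
rotateMatrix.postRotate(90);
bitmap = Bitmap.createBitmap(bitmap, 0, 0,
        bitmap.getWidth(), bitmap.getHeight(), rotateMatrix, true);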
// 2.3 Size of rotated bitmap
int bitWidth = bitmap.getWidth();
int bitHeight = bitmap.getHeight();
// 3. Size of camera preview on screen
int preWidth = preview.getWidth();
int preHeight = preview.getHeight();
// 4. Scale it.
// Assume you draw Rect as "canvas.drawRect(60, 50, 210, 297, paint);" command
int startx = 60 * bitWidth / preWidth;
int starty = 50 * bitHeight / preHeight;
int endx = 210 * bitWidth / preWidth;
int endy = 297 * bitHeight / preHeight;
// 5. Crop image - note createBitmap() takes (x, y, width, height), not end coordinates
Bitmap blueArea = Bitmap.createBitmap(bitmap, startx, starty, endx - startx, endy - starty);
// 6. Save Crop bitmap to file
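// A sketch of this step (file name and JPEG quality are arbitrary):
try (FileOutputStream out = new FileOutputStream(path + "/cropped.jpg")) {
    blueArea.compress(Bitmap.CompressFormat.JPEG, 90, out);
} catch (IOException e) {
    Log.d(TAG, "Could not save cropped bitmap: " + e.getMessage());
}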
This will work for you: How to programmatically take a screenshot in Android?
Make sure that the view (v1 in that code sample's case) passed to Bitmap.createBitmap(v1.getDrawingCache()) is a ViewGroup that contains the image you want to send to the second activity.
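A minimal sketch of that approach, with v1 standing for the container ViewGroup as in the linked answer (note the drawing cache API is deprecated from API 28 onwards):
// Capture the view's current drawing into a Bitmap via the drawing cache
v1.setDrawingCacheEnabled(true);
v1.buildDrawingCache();
Bitmap screenshot = Bitmap.createBitmap(v1.getDrawingCache());
v1.setDrawingCacheEnabled(false);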
Edit:
I don't think your intended flow is feasible. As far as I know, camera intents don't take arguments that allow drawing such a rectangle (I could be wrong though).
Instead, I suggest you take the picture first and then edit it, either with a library such as Android-Image-Cropper (https://github.com/ArthurHub/Android-Image-Cropper) or programmatically as suggested above.
I've got an issue that has been nagging me for quite some time now.
I have a custom camera app that shows a live preview. I am also using FaceDetection to get a better focus on people's faces. When I check my taken pictures in the Gallery, I can see the rectangle correctly. The next step is to make the FaceDetection rectangle visible in the live preview, so I decided to use a canvas that takes the coordinates of the preview rectangle and transforms them into coordinates usable by the canvas.
My problem is that I had to rotate the preview by 90 degrees for it to display correctly. So when I also rotate the canvas view before drawing, the rectangle shows correctly and moves along the right axes as well. BUT the rectangle can move off-screen on the left and right, and only uses about half of the available height. I assume the rotation causes the trouble, but I can't manage to put things right. Does anyone have an idea?
Screenshot (I added purple lines to show the top/bottom parts that can't be reached by the red rectangle):
Preview:
mCamera = Camera.open();
mCamera.getParameters();
mCamera.setDisplayOrientation(90);
mCamera.setFaceDetectionListener(new FaceDetectionListener() {
    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (mDraw != null && faces.length > 0) {
            //the original snippet referenced an undefined 'f'; use the first detected face
            Face f = faces[0];
            mDraw.update(f.rect, f);
        }
    }
});
mCamera.startFaceDetection();
}
private DrawOnTop mDraw = null;
public void setDrawOnTop(DrawOnTop d) {
this.mDraw = d;
}
DrawOnTop:
public DrawOnTop(Context context) {
super(context);
myColor = new Paint();
myColor.setStyle(Paint.Style.STROKE);
myColor.setColor(Color.RED);
}
@Override
protected void onDraw(Canvas canvas) {
rect.set((((rect.left + 1000) * 1000) / WIDTH_DIVISOR), (((rect.top + 1000) * 1000) / HEIGHT_DIVISOR), (((rect.right + 1000) * 1000) / WIDTH_DIVISOR), (((rect.bottom + 1000) * 1000) / HEIGHT_DIVISOR));
setRotation(90);
canvas.drawRect(rect, myColor);
}
public void update(Rect rect, Face face) {
this.invalidate();
this.rect = rect;
this.face = face;
}
----------------------------------------------------------------------------------------
EDIT:
I came to the conclusion that this is a rare but known bug, and that so far there is no solution other than forcing the application into landscape mode. That works OK, but dimensions look a bit stretched or squashed depending on the perspective the user is operating from.
EDIT: I misread the question and talked about the wrong rectangle. This is what I meant:
Basically, you just need to scale the purple rectangle. Find out where it is defined, then put it onto a canvas and do the following:
float screenWidth = /*get the width of your screen here*/;
float xScale = rect.width() / screenWidth;
float screenHeight = /*get the height of your screen here*/;
float yScale = rect.height() / screenHeight;
//Canvas has no setScaleX()/setScaleY(); apply both factors with scale()
canvas.scale(xScale, yScale);
This way, the coordinates will be translated properly.
SECOND EDIT (in response to your comment): You can also do this with views, if you like.
Have fun.
[UPDATED with additional code]
I'm having a major problem with Android correctly rendering some bitmaps in my custom view's onDraw() method on some devices (Nexus 7 and 10, as far as I know) but not all. It renders properly on the Android phones I have for testing. Here is the relevant snippet:
/* set up mImagePaint earlier */
mImagePaint.setAntiAlias(true);
mImagePaint.setFilterBitmap(true);
mImagePaint.setDither(true);
mImagePaint.setStyle(Paint.Style.FILL_AND_STROKE);
mImagePaint.setStrokeWidth(0);
mImagePaint.setColor(Color.WHITE);
protected void onDraw(Canvas canvas) {
final float vw = getWidth();
final float vh = getHeight();
final float bw = mBitmap.getWidth();
final float bh = mBitmap.getHeight();
final float ba = bw / bh;
final float va = vw / vh;
if (va > ba) {
final float top = (bh - ba / va * bh) / 2;
mSrcRect.set(0, (int) top, (int) bw, (int) (bh - top));
} else {
final float left = (bw - va / ba * bw) / 2;
mSrcRect.set((int) left, 0, (int) (bw - left), (int) bh);
}
mContentRect.set(0, 0, vw, vh);
canvas.drawBitmap(mBitmap, mSrcRect, mContentRect, mImagePaint);
}
The results on the Nexus 7 and 10 are incorrect, rendering a wide white border as shown below. The border is part of the bitmap rendering, but not part of the original bitmap or source rect.
The correct (desired) result on a Samsung Galaxy phone:
The image and code shown in both examples above are exactly the same. I've already tried variations using a null paint, a null srcRect, and even the alternate method drawBitmap(bitmap, 0, 0, null), and I get the same results. Looking at the framework code, drawBitmap() of course calls directly into native methods whose source I can't view.
Only a small number of images seem to exhibit this problem, and they are mostly square. But here is one other non-square image that exhibits it:
Most of these images are slightly rotated in the custom view, and it now occurs to me that rotation might be part of the problem; but maybe not, since the canvas itself isn't rotated, only copied to the parent's canvas bitmap backing by Android.
Any ideas? This is nuts!
Are you sure the pictures are large enough to display without having to scale them up on those devices?