What are widthDp and heightDp as @Preview parameters?
【My environment】
Android Studio Arctic Fox | 2020.3.1 Patch3 build on October 1, 2021
Gradle: 7.0.2
AGP: 7.0.3
androidx.compose.ui:ui-tooling-preview:1.0.1
Here is my code.
@Preview(
    showBackground = true,
    widthDp = 200,
    heightDp = 200,
)
@Composable
fun DefaultPreview() {
    Box(modifier = Modifier.size(100.dp).background(Color.Red))
}
And the preview shows the following.
But I expected the following.
The box seems to be larger than I expected.
Can anyone explain this?
This seems to be a bug with @Preview: the first composable takes the whole space available, and I can't explain why. Even without the two parameters widthDp = 200, heightDp = 200, the first Box takes all the space. So for now, to get the result you want, you have to wrap the main composable in a Box which "protects" it.
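A minimal sketch of that workaround, using the same sizes as in the question (the wrapper simply fixes the outer bounds so the inner Box keeps its own size):

@Preview(showBackground = true, widthDp = 200, heightDp = 200)
@Composable
fun DefaultPreviewWrapped() {
    // The outer Box "protects" the inner one, so the inner Box keeps its
    // declared 100.dp size instead of filling the whole preview viewport.
    Box(modifier = Modifier.size(200.dp)) {
        Box(modifier = Modifier.size(100.dp).background(Color.Red))
    }
}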
From the Preview.kt source:

@param widthDp Max width in DP the annotated [Composable] will be rendered in. Use this to restrict the size of the rendering viewport.
@param heightDp Max height in DP the annotated [Composable] will be rendered in. Use this to restrict the size of the rendering viewport.
The parameters restrict the maximum rendering viewport. It seems the composable is scaled if the given dimensions are larger or smaller than the composable's actual dimensions.
Related
I wish to check that an Android display is showing the correct tone curve. If I generate a patch with alternating black and white lines surrounded by a 50% grey, and the two grey levels match, then the first stop of the tone curve is right. I can then generate a patch with alternating black and 50% grey lines surrounded by a 25% grey, and so on.
I can find the current display size. I can make RGB raster data that fits. If need be, I can make the image with 2x2 pixel replication for things like Retina displays. But I cannot see how to get the raster data to the display without risking resizing.
The particular image I have described might be generated using a matte texture, or some other trick. I have other vision test images that are more complicated, which I currently generate as raster data in another program. So, I am really looking for something that can take a rectangle of custom RGB data and stick it onto the screen.
Maybe the tool is there in Android Studio, staring me in the face. But I can't see it.
(the following day)
I have found the Bitmap class. This is probably what I wanted.
(several days later)
No. I am still not there. I can generate an ImageView. I would like to generate a region of bitmap data with alternate black and white lines.
My experiments are in Kotlin.
I have found the problems in getting the dimensions of an ImageView in pixels. The layout effectively works the other way around: you define your layout, and the library works out the dimensions and the resize parameters for you. The size cannot usually be calculated until the view has been laid out. See for instance...
Why does setting view dimensions asynchronously not work in Kotlin?
There is a Java solution that uses the ViewTreeObserver. I can use this to get the dimensions in pixels and show them in a Toast, but I can't get the value out of the observer and into the surrounding context.
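For reference, a minimal Kotlin sketch of that observer approach (imageView is assumed to be the view in question; the listener removes itself once the size is known):

imageView.viewTreeObserver.addOnGlobalLayoutListener(
    object : ViewTreeObserver.OnGlobalLayoutListener {
        override fun onGlobalLayout() {
            // Remove the listener so this only runs once
            imageView.viewTreeObserver.removeOnGlobalLayoutListener(this)
            val widthPx = imageView.width
            val heightPx = imageView.height
            Toast.makeText(imageView.context, "$widthPx x $heightPx", Toast.LENGTH_SHORT).show()
        }
    })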
The exact pixel size is not the issue here. I could make the ImageView 50% of the display height and width, and calculate the number of pixels as a fraction of the screen dimensions. This would be accurate enough for the general layout of the grey border and the block of stripes. I could use that to lay out the canvas, and then force the view to fit it at 1:1 scale.
It feels that these tools are not supposed to work this way. Is there some completely different way of doing this that I am missing?
Postscript:
There was a way to do this...
override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    binding = ActivityToneBinding.inflate(layoutInflater)
    setContentView(binding.root)
    stripeColour = lastColour
    // One stripe per dp, so the stripe width in pixels follows the density
    step = resources.displayMetrics.density
    // Initialize the ImageView
    stripeView = findViewById(R.id.stripeView)
    stripeView.doOnLayout {
        // The view has been laid out, so its pixel dimensions are known
        val myBitmap = Bitmap.createBitmap(
            it.measuredWidth,
            it.measuredHeight,
            Bitmap.Config.ARGB_8888
        )
        val myCanvas = Canvas(myBitmap)
        val xMax = it.measuredWidth.toFloat()
        val yMax = it.measuredHeight.toFloat()
        val myPaint = Paint()
        myPaint.isAntiAlias = false
        myPaint.color = Color.rgb(0, 0, 0)
        // Draw opaque black stripes; the background colour shows through between them
        var y = 0.0f
        while (y < yMax) {
            myCanvas.drawRect(0.0f, y, xMax, y + step, myPaint)
            y += 2.0f * step
        }
        stripeView.setImageBitmap(myBitmap)
    }
}
The black stripes are opaque, and I could update the other stripes by changing the background. The stripes all had nice sharp edges, without the interpolation I had been getting at the edges. This is a rather specific solution for my particular problem, but it seemed to work fine.
If you want to try it yourself, be warned: I am now seeing something I do not understand. It seems the light and dark stripes are on average brighter than they should be, particularly with dark colours. The tone curve for the display seems to fit the sRGB standard well when I measure large patches, but the sum of the light and dark stripes isn't what it should be. So, this is not the test for the tone curve I was hoping for.
Once I found the terms 'Bitmap' and 'Canvas' I found the right sort of tools, but it was hard to see how to string them together to make something that works. Here is a simple exercise that helped me...
https://developer.android.com/codelabs/advanced-android-kotlin-training-canvas#10
Ignore the bit that sets SYSTEM_UI_FLAG_FULLSCREEN. This is no longer supported. There are other ways to do the same thing, but this example works perfectly well without it.
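For what it's worth, here is one possible modern replacement for that flag, assuming androidx.core is available (the codelab works fine without any of this):

// Hides the system bars instead of using SYSTEM_UI_FLAG_FULLSCREEN;
// call this from an Activity after setContentView().
WindowCompat.setDecorFitsSystemWindows(window, false)
WindowInsetsControllerCompat(window, window.decorView)
    .hide(WindowInsetsCompat.Type.systemBars())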
If you want to render something at pixel resolution, do the rendering in a doOnLayout() call. You can then get the view height and width. The ImageView layout width and height are 0dp (match_constraints). I am still not quite there: my double-height stripe pattern is still being interpolated.
class MainActivity : Activity() {
    // Here are all the objects (instances)
    // of classes that we need to do some drawing
    lateinit var myImageView: ImageView
    var background = Color.rgb(180, 180, 180)
    var darkStripe = Color.rgb(0, 0, 0)
    var lightStripe = Color.rgb(255, 255, 255)

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)
        // Initialize the ImageView
        myImageView = findViewById(R.id.imageView)
        myImageView.doOnLayout {
            val myBitmap = Bitmap.createBitmap(
                it.measuredWidth,
                it.measuredHeight,
                Bitmap.Config.ARGB_8888
            )
            val w = (it.measuredWidth / 4).toFloat()
            val h = (it.measuredHeight / 4).toFloat()
            val myCanvas = Canvas(myBitmap)
            myCanvas.drawColor(background)
            // Toast.makeText(applicationContext, w.toString(), Toast.LENGTH_SHORT).show()
            // A 1x4 texture of two light and two dark pixels, tiled as a shader
            val tex = Bitmap.createBitmap(
                intArrayOf(lightStripe, lightStripe, darkStripe, darkStripe),
                0, 1, 1, 4, Bitmap.Config.ARGB_8888
            )
            val shader: Shader = BitmapShader(tex, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT)
            val myPaint = Paint()
            myPaint.isAntiAlias = false
            myPaint.shader = shader
            myCanvas.drawRect(w, h, w * 3.0f, h * 3.0f, myPaint)
            myImageView.setImageBitmap(myBitmap)
        }
    }
}
Here's a revised version that gives my sharp stripes without the hardcoded 3.0 value...
step = resources.displayMetrics.density
// Initialize the ImageView
stripeView = findViewById(R.id.stripeView)
stripeView.doOnLayout {
val myBitmap = Bitmap.createBitmap(
it.measuredWidth,
it.measuredHeight,
Bitmap.Config.ARGB_8888
)
val myCanvas = Canvas(myBitmap)
val xMax = it.measuredWidth.toFloat()
val yMax = it.measuredHeight.toFloat()
val myPaint = Paint()
myPaint.isAntiAlias = false
myPaint.color = Color.rgb(0, 0, 0)
var y = 0.0f
while (y < yMax) {
myCanvas.drawRect(0.0f, y, xMax, y+step, myPaint)
y += 2.0f*step
}
stripeView.setImageBitmap(myBitmap)
}
I am drawing the black stripes as opaque rectangles. The background colour gives the other stripes. This means I can update the stripe colour without having to redraw.
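A one-line sketch of that update, assuming stripeView is the ImageView above (only the background changes; the opaque black stripes drawn into the bitmap stay put):

// Example: switch the light stripes to a mid grey without redrawing the bitmap
stripeView.setBackgroundColor(Color.rgb(128, 128, 128))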
Does anyone know if there is any way to set the width and height of the focus region with Android CameraX?
I have successfully set the position of the focus point with a MeteringPoint. As far as I know, by default the member variable size of MeteringPoint is used to calculate the metering rectangle:
metering rectangle width = size * sensorSizeOrCropRegion.width
metering rectangle height = size * sensorSizeOrCropRegion.height
But I want to have a metering rectangle with, let's say, 50dp width and 50dp height. How can I do that?
Or, if there is no way to do that with CameraX directly, is it possible with the camera2 interop APIs?
Thank you,
val factory: MeteringPointFactory = SurfaceOrientedMeteringPointFactory(
    previewView.width.toFloat(), previewView.height.toFloat())
val autoFocusPoint = factory.createPoint(PointX, PointY, size)
From the doc string of method createPoint(x, y, size):
Size is the size of the MeteringPoint width and height(ranging from 0
to 1). It is the (normalized) percentage of the sensor width/height
(or crop region width/height if crop region is set).
Then:

camera.cameraControl.startFocusAndMetering(
    FocusMeteringAction.Builder(
        autoFocusPoint,
        FocusMeteringAction.FLAG_AF
    ).build()
)
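To get a metering region of roughly 50dp, one rough sketch (not an exact mapping) is to convert 50dp to pixels and pass it as a fraction of the preview width given to the factory. This assumes the preview roughly matches the sensor crop region, which is not guaranteed:

// Rough sketch: approximate a ~50dp metering square. PointX/PointY are the
// coordinates used above; the exact mapping to the crop region may differ.
val sizePx = 50f * previewView.resources.displayMetrics.density
val normalizedSize = sizePx / previewView.width.toFloat()
val autoFocusPoint = factory.createPoint(PointX, PointY, normalizedSize)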
I'm using Android Studio to design a game.
I used the 'dp' unit in my 'xml' file for defining my elements.
In Java code, I want to move those elements with the animate() function, like image_button_red1.animate().xBy(first value).yBy(second value);
This function only takes float values, but the animation turns out different on each device.
I want to use the 'dp' unit to solve this problem.
Is there a function that takes another unit like 'dp'?
I've found an answer.
I designed my layout in pixels, and when I used, for example, animate().xBy(10.0f), the translation was right on each device. The relation between px and dp is: px = dp * (dpi / 160).
Then I used dp instead of px, and used Resources.getSystem().getDisplayMetrics().density to get the density of whichever screen is running the code.
So I used a float variable instead of 10.0f, with its value calculated from the equation above. My object size was 360 dp, so the equation becomes px = 2.25 * dpi.
And in my case, I used a variable like this:
float House_size = 2.25f * 14.44f * Resources.getSystem().getDisplayMetrics().density;
Then I used animate().xBy(House_size);
And now the result works properly on every single device.
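For reference, a minimal Kotlin sketch of the same dp-to-px conversion using TypedValue, inside an Activity (the 10dp offset is just an example value):

// Convert a dp offset to pixels before handing it to animate(), so the
// translation covers the same physical distance at every density.
val movePx = TypedValue.applyDimension(
    TypedValue.COMPLEX_UNIT_DIP, 10f, resources.displayMetrics)
image_button_red1.animate().xBy(movePx).yBy(movePx)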
I have a game that I made at 480x320 resolution (I have set it in the build settings) in Unity. But I would like to publish my game for every Android device, at every resolution. How can I tell Unity to scale my game up to the device's resolution? Is that possible?
Thanks in advance!
The answer to your question largely depends on how you've implemented the game. If you've created it using GUI textures, then it largely depends on how you've placed/sized your objects versus screen size, which makes things a little tricky.
If the majority of your game is done using objects (such as planes, cubes, etc) then there's two methods I usually choose to use.
1) The first method is very easy to implement, though it doesn't always look too good. You can simply change the camera's aspect ratio to match the one you've designed your game around. So in your case, since you've designed your game at 4:3, you'd do something like this:

Camera.main.aspect = 4f / 3f;

However, if someone's playing on a screen meant for 16:9, the game will end up looking distorted and stretched.
2) The second method isn't as easy, requiring quite a bit of work and calculations, but it will give a much cleaner-looking result. If you're using an orthographic camera, one important thing to keep in mind is that regardless of the screen resolution, the orthographic camera keeps the height fixed and only changes the width. For example, with an orthographic camera size of 10, the view is always 20 units high (twice the orthographic size). With this in mind, what you'd need to do is compensate for the widest possible camera within each level (for example, have a wide background), or dynamically change the orthographic size of the camera until its width matches what you've created.
If you've done a 3D game with a stereoscopic camera, screen resolution shouldn't really affect how it looks, but I guess that depends on the game, so more info would be required.
The way I did it is to change the camera viewport according to the device's aspect ratio.
Consider that you made the game for 800x1280.
Then you can do this in any one of your scripts:

float xFactor = Screen.width / 800f;
float yFactor = Screen.height / 1280f;
Camera.main.rect = new Rect(0, 0, 1, xFactor / yFactor);
and this works like magic
An easy way to do this is to consider your target. I mean, if you're making a game for the iPhone 5, then the aspect ratio is 9:16 in portrait or 16:9 in landscape.
public float targetRatio = 9f/16f; // The aspect ratio you designed the game for.

void Start()
{
    Camera cam = GetComponent<Camera>();
    cam.aspect = targetRatio;
}
Here is my script for scaling the orthographic camera in 2D games:

public float screenHeight = 1920f;
public float screenWidth = 1080f;
public float targetAspect = 9f / 16f;
public float orthographicSize;
private Camera mainCamera;

// Use this for initialization
void Start () {
    // Initialize variables
    mainCamera = Camera.main;
    orthographicSize = mainCamera.orthographicSize;
    // Calculate the orthographic width
    float orthoWidth = orthographicSize / screenHeight * screenWidth;
    // Adjust for the aspect ratio
    orthoWidth = orthoWidth / (targetAspect / mainCamera.aspect);
    // Set the size
    Camera.main.orthographicSize = (orthoWidth / Screen.width * Screen.height);
}
I assume it's 2D rather than 3D; this is what I do:
Create a Canvas object
Set the Canvas Scaler to Scale with Screen Size
Set the Reference Resolution to for example: 480x320
Set the Screen Match Mode to match width or height
Set the match to 1 if your current screen width is smaller (0 if height is smaller)
Create an Image as background inside the Canvas
Add Aspect Ratio Fitter script
Set the Aspect Mode to Fit in Parent (so the UI anchor can be anywhere)
Set the Aspect Ratio to 480/320 = 1.5
And add this snippet to the main Canvas' Awake method:
var canvasScaler = GetComponent<CanvasScaler>();
var ratio = Screen.height / (float) Screen.width;
var rr = canvasScaler.referenceResolution;
canvasScaler.matchWidthOrHeight = (ratio < rr.x / rr.y) ? 1 : 0;
// Make sure to add "using UnityEngine.UI" at the top of your Aspect Ratio script!
For 3D objects you can use any of the answers above
The best solution for me is to use the intercept theorem (theorem of intersecting lines), so that there is neither a cut-off on the sides nor a distortion of the game view. That means you have to move the camera back or forward depending on the aspect ratio.
If you like, I have an asset on the Unity asset store which automatically corrects the camera distance so you never have a distortion or a cut off no matter which handheld device you are using.
I'm making an app widget for Android which, due to being composed of custom elements such as graphs, must be rendered as a bitmap.
However, I've run into a few snags in the process.
1) Is there any way to find the maximum available space for an app widget? (Or: is it possible to calculate the dimensions correctly for the minimum space available in WVGA or similarly wide cases?)
I don't know how to calculate the maximum available space for an app widget. With a conventional app widget it is possible to fill_parent, and all the space will be used. However, when rendering the widget as a bitmap, and to avoid stretching, the correct dimensions must be calculated. The documentation outlines how to calculate the minimum dimensions, but for cases such as WVGA, there will be unused space in landscape mode - causing the widget to look shorter than other widgets which stretch naturally.
float density = getResources().getDisplayMetrics().density;
int cx = ((int)Math.ceil(appWidgetInfo.minWidth / density) + 2) / 74;
int cy = ((int)Math.ceil(appWidgetInfo.minHeight / density) + 2) / 74;
int portraitWidth = (int)Math.round((80.0f * cx - 2.0f) * density);
int portraitHeight = (int)Math.round((100.0f * cy - 2.0f) * density);
int landscapeWidth = (int)Math.round((106.0f * cx - 2.0f) * density);
int landscapeHeight = (int)Math.round((74.0f * cy - 2.0f) * density);
Calculating cx and cy gives the number of horizontal and vertical cells. Subtracting 2 from the calculated dp value (e.g. 74 * cy - 2) is to avoid cases where the resulting number of pixels is rounded down. (For example, in landscape mode on a Nexus One the height is 110, not 111 (74 * 1.5).)
2) When assigning a bitmap to an ImageView which is used as part of the RemoteViews to view the image, there are 2 methods:
2.1) By using setImageViewUri, and saving the bitmap to a PNG file. The image is then served using an openFile() implementation in a ContentProvider:
@Override
public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
    // Code to set up imageFileName
    File file = new File(getContext().getCacheDir(), imageFileName);
    return ParcelFileDescriptor.open(file, ParcelFileDescriptor.MODE_READ_ONLY);
}
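On the widget side, the ImageView can then be pointed at the content URI served by that provider. A Kotlin sketch, with the layout, view id and authority all hypothetical:

// Names are hypothetical; the PNG is served by the ContentProvider's openFile() above.
val views = RemoteViews(context.packageName, R.layout.widget_layout)
views.setImageViewUri(
    R.id.widget_image,
    Uri.parse("content://com.example.graphwidget/graph.png")
)
appWidgetManager.updateAppWidget(appWidgetId, views)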
This works, and it's the approach I'm currently using. However, if I set the scaleType of the ImageView to "center", which by the documentation is supposed to "Center the image in the view, but perform no scaling.", the image is incorrectly scaled. Setting the density of the bitmap to DENSITY_NONE or getResources().getDisplayMetrics().densityDpi doesn't make any difference when saving the bitmap to PNG, it seems to be ignored when the file is loaded by the ImageView. The result is that the image is scaled down, due to some dpi issue. This seems to describe the case:
http://code.google.com/p/android/issues/detail?id=6957&can=1&q=widget%20size&colspec=ID%20Type%20Status%20Owner%20Summary%20Stars
Because it is not possible to use scaleType:center, the only way I've found to work is to set the layout_width and layout_height of the ImageView statically to a given number of dpis, then rendering the bitmap to the same dpi. This requires the use of scaleType:fitXY. This approach works, but it is a very static setup - and it will not work for resizable 3.1 app widgets (I haven't tested this yet, but unless onUpdate() is called on each resize, this is true).
Is there any way to load an image to an ImageView unscaled, or is this impossible due to a bug in the framework?
2.2) By using setImageViewBitmap directly. Using this method with Bitmap.DENSITY_NONE set on the bitmap, the image can be shown correctly without scaling. The problem with this approach is that there is a limit on how large an image can be passed through the IPC mechanism:
http://groups.google.com/group/android-developers/browse_thread/thread/e8d84920b999291f/d12eb1d0eaca93ac#01d5c89e5e7b4060
(not allowed more links) http://groups.google.com/group/android-developers/browse_thread/thread/b11550601e6b1dd3#4bef4fa8908f7e6a
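A minimal sketch of this second approach (Kotlin, names hypothetical); it is exactly here that the transaction size limit bites for large widgets:

// Set the bitmap directly on the widget's ImageView. Large bitmaps can exceed
// the binder transaction limit described in the links above.
bitmap.density = Bitmap.DENSITY_NONE
val views = RemoteViews(context.packageName, R.layout.widget_layout)
views.setImageViewBitmap(R.id.widget_image, bitmap)
appWidgetManager.updateAppWidget(appWidgetId, views)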
I attempted a bit of a hack to get past this issue by splitting the widget into a matrix of images that could be set in 100x100 pixel blocks. This did allow larger widgets to work, but it ended up being very heavy and failed on large (4x4) widgets.
Sorry for a very long post. I've tried to explain a few of the different issues encountered when attempting to use a bitmap-rendered app widget. If anyone has attempted the same and has found more solutions to these issues, or has any helpful comments, it will be highly appreciated.
An approach that worked for us for a similar situation was to generate our graph as a 9-patch png, with the actual graph part as the scalable central rectangle, and the caption text and indication icons (which we did not want all stretched out of shape) and border effects, placed on the outer rectangles of the image.