This is a simple example of how to achieve the Android multi-screen wallpaper effect on iOS, written in Swift. Apologies for any awkward or redundant English; I am a non-native speaker and new to coding.
Many Android phones have a multi-screen home screen with the feature that, when you swipe between screens, the wallpaper moves horizontally with your gesture. Usually the wallpaper moves a smaller distance than the icons or search boxes, which conveys a sense of depth.
To do this, I use a UIScrollView instead of a UIPageViewController. I tried the latter first, but it makes the view controller hierarchy very complex, and I couldn't use the touchesMoved method correctly: when you pan between pages, a new child view controller comes in and interrupts the method.
Instead, I tried UIScrollView, and here is what I ended up with.
Drag a wallpaper image view into the view controller, set its image, and set its frame to (0, 0, width of image, height of screen).
Set the view controller's simulated size to Freeform and set the width to theNumberOfPagesYouWantToDisplay * screenWidth (this only makes the next steps easier; the width of the view controller will not affect the final result).
Drag in a scroll view that fits the view controller.
Add the "content UI" elements into the scroll view as its child views, or add them in code.
Add a page control if you would like.
In your ViewController.swift, add the following code:
import UIKit

class ViewController: UIViewController, UIScrollViewDelegate {

    let pagesToDisplay = 10 // You can modify this to test the result.
    let scrollViewWidth = CGFloat(320)
    let scrollViewHeight = CGFloat(568) // You can also modify these based on the screen size of your device; this is an example for iPhone 5/5s.

    @IBOutlet weak var backgroundView: UIImageView!
    @IBOutlet weak var scrollView: UIScrollView!
    @IBOutlet weak var pageControl: UIPageControl!

    override func viewDidLoad() {
        super.viewDidLoad()
        scrollView.delegate = self
        scrollView.isPagingEnabled = true // Makes the scrollView feel like a pageViewController.
        scrollView.frame = CGRect(x: 0, y: 0, width: scrollViewWidth, height: scrollViewHeight) // Remember to set the frame back to the screen size.
        scrollView.contentSize = CGSize(width: CGFloat(pagesToDisplay) * scrollViewWidth, height: scrollViewHeight)
        pageControl.numberOfPages = pagesToDisplay
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }

    func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
        let pageIndex = Int(scrollView.contentOffset.x / scrollView.frame.size.width)
        pageControl.currentPage = pageIndex
    }

    func scrollViewDidScroll(_ scrollView: UIScrollView) {
        // The background moves over a shorter distance than the scroll view's content,
        // which creates the parallax depth effect.
        let bgvMaxOffset = scrollView.frame.width - backgroundView.frame.width
        let svMaxOffset = scrollView.frame.width * CGFloat(pagesToDisplay - 1)
        if scrollView.contentOffset.x >= 0 &&
            scrollView.contentOffset.x <= svMaxOffset {
            backgroundView.frame.origin.x = scrollView.contentOffset.x * (bgvMaxOffset / svMaxOffset)
        }
    }
}
Feel free to try this on your device, and it would be really appreciated if someone could give me some advice on this. Thank you.
I've been getting my butt kicked trying to get a vertically placed 3D model in GLB format placed properly on a vertical surface.
Just to be clear, I am not referring to the difficulty of identifying a vertical surface; that is a whole other problem in itself.
I'm removing the common setup boilerplate to keep this post short.
I am using a fragment that extends ArFragment:
class SceneFormARFragment: ArFragment() {
Then of course I have supplied the config with a few tweaks.
override fun getSessionConfiguration(session: Session?): Config {
    val config = super.getSessionConfiguration(session)
    // By default we are not tracking and tracking is driven by startTracking()
    config.planeFindingMode = Config.PlaneFindingMode.DISABLED
    config.focusMode = Config.FocusMode.AUTO
    return config
}
And to start and stop my AR experience, I wrote a couple of methods inside the fragment, as follows:
private fun startTracking() = viewScope.launchWhenResumed {
    try {
        arSceneView.session?.apply {
            val changedConfig = config
            changedConfig.planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
            configure(changedConfig)
        }
        logv("startTracking")
        planeDiscoveryController.show()
        arSceneView.planeRenderer.isVisible = true
        arSceneView.cameraStreamRenderPriority = 7
    } catch (ex: Exception) {
        loge("error starting ar session: ${ex.message}")
    }
}

private fun stopTracking() = viewScope.launchWhenResumed {
    try {
        arSceneView.session?.apply {
            val changedConfig = config
            changedConfig.planeFindingMode = Config.PlaneFindingMode.DISABLED
            configure(changedConfig)
        }
        logv("stopTracking")
        planeDiscoveryController.hide()
        arSceneView.planeRenderer.isVisible = false
        arSceneView.cameraStreamRenderPriority = 0
    } catch (ex: Exception) {
        loge("error stopping ar session: ${ex.message}")
    }
}
In case you are wondering, the reason for "starting and stopping" the AR experience is to free up GPU cycles for other UX interactions that are heavy on this overlaid screen, so we start or stop based on the current LiveData state of other things that are happening.
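For illustration only, a minimal sketch of the kind of wiring I mean, where viewModel.arActive is an invented LiveData name and not something from the actual project:

// Hypothetical example: a Boolean LiveData on the ViewModel decides when the AR
// session should be live; the name viewModel.arActive is made up for illustration.
viewModel.arActive.observe(viewLifecycleOwner) { active ->
    if (active) startTracking() else stopTracking()
}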
Ok moving on.
Let's review the HitResult handling:
In this method I do a few things:
- Load the two variations of the TV 3D model from the cloud (wall mount and stand mount).
- Remove any active models if the user has tapped a new area (a sketch of this helper is shown right after this list).
- Create an anchor node from the HitResult and assign it a name so I can remove it later.
- Add a TV TransformableNode to it and assign it a name so I can retrieve and manipulate it later.
- Determine the look direction of the horizontal stand-mount 3D model TV and set the worldRotation of the anchorNode to the new lookRotation. (NOTE: I feel like the rotation should be applied to the TV node, but it only seems to work when I apply it to the AnchorNode, for whatever reason.) This camera-position math also seems to help the vertical wall-mount TV face outwards and anchor correctly. (I have reviewed the GLB models and I know they are properly anchored: from the back on the wall model and from the bottom on the floor model.)
- Limit the plane movement of the node to its own respective plane type, so that a floor model doesn't slide up onto a wall and a wall model doesn't slide down onto the floor.
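The removeActive3DTVModel() helper isn't shown in this post; a minimal sketch of what it could look like, assuming the previously placed model hangs off the AnchorNode named TV_ANCHOR_NAME created in the listener below:

// Sketch only, not the exact helper from the project: look up the previously placed
// anchor node by name, release its ARCore anchor, and remove it from the scene.
private fun removeActive3DTVModel() {
    val previous = arSceneView.scene.findByName(TV_ANCHOR_NAME) as? AnchorNode ?: return
    previous.anchor?.detach()               // release the underlying ARCore anchor
    arSceneView.scene.removeChild(previous) // removes the anchor node and its TV child node
}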
That's about it. The horizontal placement works great, but the vertical placement is always randomized.
OnTapArPlane Code below:
private fun onARSurfaceTapped() {
    setOnTapArPlaneListener { hitResult, plane, _ ->
        var isHorizontal = false
        val renderable = when (plane.type) {
            Plane.Type.HORIZONTAL_UPWARD_FACING -> {
                isHorizontal = true
                standmountTVRenderable
            }
            Plane.Type.VERTICAL -> wallmountTVRenderable
            else -> {
                activity?.toast("Do you want it to fall on your head really?")
                return@setOnTapArPlaneListener
            }
        }
        lastSelectedPlaneOrientation = plane.type
        removeActive3DTVModel()
        val anchorNode = AnchorNode(hitResult.createAnchor())
        anchorNode.name = TV_ANCHOR_NAME
        anchorNode.setParent(arSceneView.scene)
        val tvNode = TransformableNode(this.transformationSystem)
        tvNode.scaleController.isEnabled = false
        tvNode.setParent(anchorNode)
        tvNode.name = TV_NODE_NAME
        tvNode.select()

        // Set orientation towards camera
        // Ref: https://github.com/google-ar/sceneform-android-sdk/issues/379
        val cameraPosition = arSceneView.scene.camera.worldPosition
        val tvPosition = anchorNode.worldPosition
        val direction = Vector3.subtract(cameraPosition, tvPosition)

        if (isHorizontal) {
            tvNode.translationController.allowedPlaneTypes.clear()
            tvNode.translationController.allowedPlaneTypes.add(Plane.Type.HORIZONTAL_UPWARD_FACING)
        } else {
            tvNode.translationController.allowedPlaneTypes.clear()
            tvNode.translationController.allowedPlaneTypes.add(Plane.Type.VERTICAL)
        }

        val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
        anchorNode.worldRotation = lookRotation
        tvNode.renderable = renderable
        addVideoTo3DModel(renderable)
    }
}
Ignore the addVideoTo3DModel call, as that works fine; I commented it out just to ensure it doesn't play a role.
Things I've tried:
- Extracting translation without rotation, as described here. Interestingly enough, it does cause the TV to appear level with the floor each time, but then the TV is always mounted as if the anchor were at the base instead of the center back. So it's bad.
- Reviewing various posts and translating Unity or ARCore code directly into Sceneform, but I failed to get anything to affect the outcome (example).
- Creating the anchor from the plane and the pose, as indicated in this answer, with no luck.
- Reviewing this link, but I never found anything useful.
- Tracking this issue and trying the solutions recommended by people in the thread, but no luck.
- The last thing I tried, and this is a bit embarrassing (lol): I opened all 256 questions tagged "SceneForm" on Stack Overflow and reviewed EVERY SINGLE one of them for anything that would help.
So I've exhausted the internet. All I have left is to ask the community and, of course, send a plea for help to the Sceneform team at Android, which I'm also going to do.
My best guess is that I need to use Quaternion.axisRotation(Vector3, Float), but everything I have guessed at or trial-and-errored has not worked. I assume I need to set the localRotation using worldPosition values for the x/y/z of the phone, maybe to help identify gravity. I really just don't know anymore, lol.
I know Sceneform is pretty new, and the documentation is HORRIBLE and may as well not exist given the lack of content or doc headers. The developers must really not want people to use it yet, I'm guessing :(.
The last thing I'll say is that everything is working perfectly in my current implementation, with the exception of the rotated vertical placement. Just to avoid rabbit trails in this discussion: I'm not having any other issues.
Oh, and one last clue that I've noticed: the TV almost seems to pivot around the center of the vertical plane; based on where I tap, the bottom almost seems to point towards the arbitrary center of the plane, if that helps anyone figure it out.
Oh, and yes, I know my textures are missing from the GLBs; I packaged them incorrectly and intend to fix that later.
Screenshots attached.
Well, I finally got it. It took a while and some serious trial and error, rotating every node, axis, angle, and rotation, before I finally got it to place nicely. So I'll share my results in case anyone else needs this as well.
The end result is, of course, mildly subjective to how you hold the phone and its understanding of the surroundings, but it's always pretty darn close to level now, without fail, in both the landscape and portrait testing that I have done.
So here's what I've learned.
Setting the worldRotation on the anchorNode will help keep the 3D model facing towards the camera view, using a little subtraction:
val cameraPosition = arSceneView.scene.camera.worldPosition
val tvPosition = anchorNode.worldPosition
val direction = Vector3.subtract(cameraPosition, tvPosition)
val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
anchorNode.worldRotation = lookRotation
However, this did not fix the orientation issue on the vertical placement. I found that if I applied an X rotation of 90 degrees on top of the look rotation, it worked every time. It may differ based on your 3D model, but my anchor is at the center middle back, so I'm not sure how it determined which way was up. However, I noticed that whenever I set a worldRotation on the tvNode, it would place the TV level but leaning forward 90 degrees. So after playing with the various rotations, I finally got the answer.
val tvRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), 90f)
tvNode.worldRotation = tvRotation
That fixed my problem. So the end result of the onSurfaceTap and placement code was this:
setOnTapArPlaneListener { hitResult, plane, _ ->
    var isHorizontal = false
    val renderable = when (plane.type) {
        Plane.Type.HORIZONTAL_UPWARD_FACING -> {
            isHorizontal = true
            standmountTVRenderable
        }
        Plane.Type.VERTICAL -> wallmountTVRenderable
        else -> {
            activity?.toast("Do you want it to fall on your head really?")
            return@setOnTapArPlaneListener
        }
    }
    lastSelectedPlaneOrientation = plane.type
    removeActive3DTVModel()
    val anchorNode = AnchorNode(hitResult.createAnchor())
    anchorNode.name = TV_ANCHOR_NAME
    anchorNode.setParent(arSceneView.scene)
    val tvNode = TransformableNode(this.transformationSystem)
    tvNode.scaleController.isEnabled = false // disable scaling
    tvNode.setParent(anchorNode)
    tvNode.name = TV_NODE_NAME
    tvNode.select()

    val cameraPosition = arSceneView.scene.camera.worldPosition
    val tvPosition = anchorNode.worldPosition
    val direction = Vector3.subtract(cameraPosition, tvPosition)

    // restrict moving node to active surface orientation
    if (isHorizontal) {
        tvNode.translationController.allowedPlaneTypes.clear()
        tvNode.translationController.allowedPlaneTypes.add(Plane.Type.HORIZONTAL_UPWARD_FACING)
    } else {
        tvNode.translationController.allowedPlaneTypes.clear()
        tvNode.translationController.allowedPlaneTypes.add(Plane.Type.VERTICAL)
        // x 90 degree rotation to flat mount TV vertical with gravity
        val tvRotation = Quaternion.axisAngle(Vector3(1f, 0f, 0f), 90f)
        tvNode.worldRotation = tvRotation
    }

    // set anchor node's world rotation to face the camera view and up
    val lookRotation = Quaternion.lookRotation(direction, Vector3.up())
    anchorNode.worldRotation = lookRotation
    tvNode.renderable = renderable
    viewModel.updateStateTo(AriaMainViewModel.ARFlowState.REPOSITIONING)
}
This has been tested pretty thoroughly, without issues so far, in portrait and landscape. I still have other issues with Sceneform, such as the dots only showing up about half the time even when there is a valid surface, and of course vertical detection on a single-color wall is not possible with the current SDK without a picture on the wall or something else to distinguish the wall.
Also, taking screenshots is not good, as it doesn't include the 3D model, so that required custom PixelCopy work; my screenshots are a bit slow, but at least they work, no thanks to the SDK.
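For anyone curious, a simplified sketch of the kind of PixelCopy capture I mean (not the exact code from the project; it relies on android.view.PixelCopy, available from API 24, plus android.os.HandlerThread and android.graphics.Bitmap):

// Rough sketch: ArSceneView is a SurfaceView, so PixelCopy can capture the GL surface,
// including the rendered 3D model, which an ordinary View-cache screenshot cannot.
private fun captureArView(onCaptured: (Bitmap) -> Unit) {
    val bitmap = Bitmap.createBitmap(arSceneView.width, arSceneView.height, Bitmap.Config.ARGB_8888)
    val copyThread = HandlerThread("pixelCopy").apply { start() }
    PixelCopy.request(arSceneView, bitmap, { result ->
        if (result == PixelCopy.SUCCESS) onCaptured(bitmap)
        copyThread.quitSafely()
    }, Handler(copyThread.looper))
}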
So they have a long way to go, and it's frustrating to blaze the trail with their product given the lack of documentation and the definite lack of responsiveness to customer service as well as GitHub-logged issues, but hey, at least I got it, and I hope this helps someone else.
Happy Coding!
I'm working on an Android library to display an image as a fixed background image. To do this, I'm dynamically adjusting the position of the image every 10 ms based on its locationOnScreen. I understand that it's an awful solution, but I'm here to improve it :)
The issue with this is that there is a glitch when the parent scrollable view scrolls too fast: the image jumps while the other views are moving (click on the GIF for the full demo).
As it's a library, I don't want to add complexity when integrating it, meaning no scroll listener, no theme or window override, etc.
Solutions tried:
- changing the loop delay
- using window background is not possible for a library
- no access to activity theme or similar
The handler:
override fun run() {
    fixedBackgroundImageLayout.getLocationOnScreen(locationOnScreen)
    fixedBackgroundImagePlugin.update(locationOnScreen)
    // Re-post itself so the position is refreshed every 10 ms.
    handler.postDelayed(this, 10)
}
FixedBackgroundImagePlugin#update
override fun update(locationOnScreen: IntArray) {
    if (backgroundImageFrameLayout == null) {
        return
    }
    yPosition = parentLocationOnScreen[1] - locationOnScreen[1]
    // Extrapolate the next position from the last delta (2 * y - lastY = y + delta),
    // presumably to compensate for the 10 ms polling lag; the -10 looks like a hand-tuned offset.
    backgroundImageFrameLayout.top = 2 * yPosition - lastYPosition - 10
    lastYPosition = yPosition
}
The backgroundImageFrameLayout has the image set as its background.
I've also set up a sample repository to help you dig in if you want.
I'm open to any advice or leads.
Update: The previous code I posted had some interaction issues that, surprisingly, made the code work but pegged the CPU. The following code is substantially like the previous code, but behaves properly. Although the new code involves a scroll listener that causes the view to redraw itself, it is self-contained. The scroll listener is needed to detect when the view is repositioned on the screen. When called, the scroll listener simply invalidates the view.
You can do what you need entirely within the custom view BackgroundImageFrameLayout. Here is an onDraw() override that will take care of drawing the background of the FrameLayout that holds the image.
Add the following to the init block:
viewTreeObserver.addOnScrollChangedListener {
    invalidate()
}
Add the following override method to BackgroundImageFrameLayout:
override fun onDraw(canvas: Canvas) { // draw(canvas) may be a better choice.
    if (background == null) {
        return
    }

    // Get the location of this view on the screen to compute its top and bottom coordinates.
    val location = IntArray(2)
    getLocationOnScreen(location)
    val imageTop = location[1]
    val imageBottom = imageTop + height

    // Draw the slice of the image that should be viewable.
    canvas.save()
    canvas.translate(0f, -imageTop.toFloat())
    background?.setBounds(0, imageTop, width, imageBottom)
    background?.draw(canvas)
    canvas.restore()
}
Remove everything else that tries to manipulate the background - you won't need it.
Here is a demo of the app with the new onDraw():
I think there may be some other issues with the size of the screen vs. the size of the image, but this is the direction you should go in (IMHO).
My custom control can sometimes change its size. In this case I call RequestLayout and Invalidate. Then, in OnMeasure, I call SetMeasuredDimension with the new height. But here I face two issues:
- OnMeasure is called with MeasureSpecMode.Exactly.
- Even if I ignore the MeasureSpecMode and call SetMeasuredDimension with new values, the control's size stays the same.
So my question is: is there a way to tell the Forms layout engine to update my control's size?
EDIT:
It is possible to force Forms to update the layout by hiding and then showing the control, but I'm looking for a less hacky way to fix this.
If we are really talking about Forms and not Xamarin.Android, you can read about how it works here: https://learn.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/layouts/custom
So basically, inside your Forms control, after you have calculated the size you desire, call InvalidateMeasure();
Override OnMeasure and pass the desired size; Forms will do the rest:
//fields used to store the desired size
private double _widthRequest = -1;
private double _heightRequest = -1;

protected override SizeRequest OnMeasure(double widthConstraint, double heightConstraint)
{
    if (_widthRequest > 0 && _heightRequest > 0)
    {
        var wish = new Size(_widthRequest, _heightRequest);
        _widthRequest = -1;
        _heightRequest = -1;
        return new SizeRequest(wish);
    }
    return base.OnMeasure(widthConstraint, heightConstraint);
}
I'm having a hard time panning the view of a gameObject in Unity3D. I'm new to scripting, and I'm trying to develop an AR (augmented reality) application for Android.
I need to have a gameObject (e.g. a model of a floor) rendered, from the normal top-down view, in a "pseudo" isometric view, inclined at 45 degrees. As the gameObject is inclined, I need a panning function on its view, using four (4) buttons (left, right, forward/up, backward/down).
The problem is that I cannot use any of the known panning script snippets from around the forum, as the AR camera has to be static in the scene.
I need to mention that the panning function should be active only in the isometric view (which I already compute with another script), not in the top-down view. So there should be no problem with the inclination of the gameObject's axes, right?
Following are two mockup images of the states in which the gameObject (floor model) is rendered, and the script code (from the Unity reference) that I'm currently using, which is not very functional for my needs.
Here is the code snippet for moving the gameObject left. I use the same code with changed -/+ speed values for the other movements, but I can only get it to move up and down, not forward and backward:
#pragma strict

// The target gameObject.
var target: Transform;
// Speed in units per sec.
var speedLeft: float = -10;

private static var isPanLeft = false;

function FixedUpdate()
{
    if (isPanLeft == true)
    {
        // The step size is equal to speed times frame time.
        var step = speedLeft * Time.deltaTime;
        // Move model position a step closer to the target.
        transform.position = Vector3.MoveTowards(transform.position, target.position, step);
    }
}

static function doPanLeft()
{
    isPanLeft = !isPanLeft;
}
It would be great if someone would be kind enough to take a look at this post and suggest the easiest way this functionality can be coded, as I'm a newbie.
Furthermore, if sample code or a tutorial can be provided, it would be appreciated, as I can learn a lot from it. Thank you all in advance for your time and answers.
If I understand correctly, you have a camera with some fixed rotation and position, and you have an object you want to move up/down/left/right from the camera's perspective.
To rotate an object to a set of angles, you simply do:
transform.rotation = Quaternion.Euler(45, 45, 45);
Then, to move it, you use the camera's up/right/forward vectors in world space. For example, to move it up and to the left:
transform.position += camera.transform.up;    // one unit up, relative to the camera
transform.position -= camera.transform.right; // one unit left, relative to the camera
If you only have one camera in your scene, you can access its transform via Camera.main.transform.
An example of how to move it when someone presses the left arrow key:
if (Input.GetKeyDown(KeyCode.LeftArrow))
{
    transform.position -= camera.transform.right;
}
I'm using Flash CS5 and Flex 4, both to build an AIR application for Android. I would like to know how to allow the user to move content (an image or text) up and down (like a map, but in this case only vertically).
There are no touch UI controls available yet, so you need to implement it yourself. Here's a little bit of code that might help get you started. I wrote it on the timeline so that I could test it quickly. You'll need to make a couple adjustments if you're using it in a class.
The variable content is a MovieClip that is on the stage. If it is larger than the height of the stage, you'll be able to scroll it by dragging it with the mouse (or with your finger on a touch screen). If it is smaller than the height of the stage, then it won't scroll at all because it doesn't need to.
var maxY:Number = 0;
var minY:Number = Math.min(0, stage.stageHeight - content.height);
var _startY:Number;
var _startMouseY:Number;

addEventListener(MouseEvent.MOUSE_DOWN, mouseDownHandler);

function mouseDownHandler(event:MouseEvent):void
{
    _startY = content.y;
    _startMouseY = mouseY;
    stage.addEventListener(MouseEvent.MOUSE_MOVE, stage_mouseMoveHandler, false, 0, true);
    stage.addEventListener(MouseEvent.MOUSE_UP, stage_mouseUpHandler, false, 0, true);
}

function stage_mouseMoveHandler(event:MouseEvent):void
{
    var offsetY:Number = mouseY - _startMouseY;
    content.y = Math.max(Math.min(maxY, _startY + offsetY), minY);
}

function stage_mouseUpHandler(event:MouseEvent):void
{
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, stage_mouseMoveHandler);
    stage.removeEventListener(MouseEvent.MOUSE_UP, stage_mouseUpHandler);
}
Alternatively, you could use the scrollRect property. That one is pretty nice because it will mask the content to a rectangular region for you. If you just change y like in the code above, you can draw other display objects on top of the scrolling content to simulate masking. It's faster than scrollRect too.