I would like to create an Android app for viewing images.
The idea is that users keep their tablet flat on the table (for the sake of simplicity, only X and Y for now) and scroll a picture that is too big to fit the screen by moving the tablet (yes, this app has to use tablet movement; sorry, no fingers allowed :) ).
I managed to get a basic framework implemented (implementing sensor listeners is easy), but I'm not sure how to translate "LINEAR_ACCELERATION" into pixels. I'm pretty sure it can be done (for example, check "photo sphere" or "panorama" apps that move content exactly as you move your phone), but I can't find any working prototype online.
Where can I see how that kind of "magic" is done in the real world?
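For anyone attempting this, here is a minimal sketch of the naive approach: double-integrate TYPE_LINEAR_ACCELERATION into a displacement and map metres to pixels. The class name, the PIXELS_PER_METER constant and the scrollImageTo() hook are placeholders of mine, and in practice this drifts within seconds, which is presumably why the panorama apps mentioned above rely on rotation sensors rather than raw acceleration.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Naive double integration of linear acceleration into a pixel offset.
// Drifts quickly in practice; a real app would add filtering or use rotation sensors.
public class MotionScroller implements SensorEventListener {
    private static final float PIXELS_PER_METER = 4000f; // arbitrary mapping, tune for your screen

    private long lastTimestampNs = 0;
    private final float[] velocity = new float[2];  // m/s on X and Y (tablet flat on the table)
    private final float[] offsetPx = new float[2];  // accumulated scroll offset in pixels

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1e-9f; // ns -> s
            for (int axis = 0; axis < 2; axis++) {
                velocity[axis] += event.values[axis] * dt;                  // acceleration -> velocity
                offsetPx[axis] += velocity[axis] * dt * PIXELS_PER_METER;   // velocity -> displacement -> pixels
            }
            // scrollImageTo(offsetPx[0], offsetPx[1]); // hypothetical hook into your image view
        }
        lastTimestampNs = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```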
Related
I am relatively new to React Native and am not looking for actual code; I am wondering about the best way to implement the layout/styling of a page so that it looks the same, proportion-wise, across multiple devices and operating systems. For example, the iPhone X through 13 have different screen sizes, and if you include the newer Samsungs, Google Pixels and Huawei phones there are a lot of different screen sizes.
I can create different styles by detecting which device is being used and getting its logical resolution, but I am wondering if there is a better way than just doing this for 10 different phone sizes. For example, in the picture you can see that the front wheel of the car sits on the edge of the blue box, which gives it an almost 3D effect. To get this effect across two different devices I have to change the absolute positioning of the car for each of them; the more screen sizes/devices I want it to look the same on, the more code it becomes, and it feels like a long-winded/redundant method.
Another alternative I have thought about is turning that section into an image and using it as a background image, but I am not sure how that would affect performance.
Any thoughts?
Related:
Is there a HTML5/ jQuery Spherical Panorama viewer that works with touch mobile devices
Indirectly related:
Orbiting around the origin using a device's orientation
I am looking for a way to view different parts of a spherical panorama / equirectangular projection by moving the smartphone or tablet (PDA) in space.
Basically just like http://takemetotomorrowland.com/explore/bridgeway-plaza where different parts of the environment can be seen by turning the phone towards its direction in space.
Ideally I'd like to be able to view different parts of this image by moving the device in space, i.e. turning and moving the device in real space to view the different parts of the projection on the web.
Now I know this is done with WebGL and a spherical panorama viewer like Pannellum, but how is the part about moving the device in space done? Is there a library or plugin for that?
What is this part of the device technology called that senses movements in 3D space, and how can I translate those movements to HTML/JS?
Now I know this is not a specific programming question, and I am sorry for that. But I am really looking for a plugin or library, or at least the name of the sensor responsible for the device's movements, so that I can then work with it.
The feature you are looking for has been implemented in DeviceOrientationControls.
See this example.
three.js r.73
So I am making a game in Processing, but I don't know how to make it fit every Android phone's screen. I've looked online, but I can't seem to find anything about it for Processing.
I'm fully aware of displayWidth/displayHeight, but I'm not sure how to keep all my buttons pressable without their positions changing. I would have thought it could be done by expressing all the coordinates as something like displayHeight/2, etc., but when I tested the game on my phone (Samsung Galaxy S3), the text was not aligned properly.
If anyone could please help me, I'd be super grateful!
Android devices have different screen sizes, so instead of using px values you can use values that are relative to the screen, e.g. a percentage of the width/height (in Android layouts the scale-independent units are sp/dp rather than px). By doing this, the properties you set for an element stay proportionally the same on all devices.
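In Processing terms the same idea looks like the sketch below: every position and size is a fraction of width/height rather than a fixed pixel count, so the layout keeps its proportions on any screen (the 0.5/0.1/0.8 fractions and the "PLAY" label are just placeholders).

```java
// Processing sketch: all positions and sizes are fractions of the screen,
// so the layout keeps its proportions on different devices.
void setup() {
  fullScreen();                  // width/height now match the device screen
  textAlign(CENTER, CENTER);
}

void draw() {
  background(30);
  // button occupies 50% of the width and 10% of the height,
  // centred horizontally, at 80% of the screen height
  float bw = width * 0.5;
  float bh = height * 0.1;
  float bx = width * 0.5 - bw / 2;
  float by = height * 0.8 - bh / 2;

  fill(200);
  rect(bx, by, bw, bh);

  fill(0);
  textSize(bh * 0.5);            // text scales with the button, not a fixed pixel size
  text("PLAY", bx + bw / 2, by + bh / 2);
}

void mousePressed() {
  float bw = width * 0.5, bh = height * 0.1;
  float bx = width * 0.5 - bw / 2, by = height * 0.8 - bh / 2;
  if (mouseX > bx && mouseX < bx + bw && mouseY > by && mouseY < by + bh) {
    // button hit: start the game
  }
}
```

Scaling textSize() from the button height instead of using a fixed size is what keeps the text aligned on devices like the S3.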
So, here is the deal. I want to create a little game-like app (let me call it the game from now on). Before I begin, I'd like to show you the raw UI of the game.
So, as you can see, I would have:
a red ball in the middle
two walls, one on either side
Procedure:
The user moves the device to the right side and hits the right wall (the gray ball on the right side is simulated).
The device is then moved to the left so it can reach the left wall as well (like the gray ball on the left).
All I know regarding the programming side is the sensors, especially accelerometers. Most people on Stack Overflow are of the opinion that working with these sensors is not in a good state, that is, the data coming from them is very noisy and processing it can be hard. However, I think a lot of games use them to some extent...
Well, I tried moving an image by drawing it. I attempted to get data from the gyroscope, accelerometer and linear accelerometer, and used that data to draw my image. The problem is that if I move the device smoothly and slowly, I get almost no movement in the drawing, which makes for a horrible user experience.
The question is how I can implement my game so that it meets all the needs above, even if I move my device slowly. Any approaches are welcome.
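One common pattern, sketched below, is to treat the tilt of the device as the input rather than its true displacement: low-pass filter the raw accelerometer values, feed the filtered value into the ball's velocity every frame, and clamp the ball between the two walls. All class, constant and method names here are illustrative, and the assumption that tilt is acceptable input is mine; slow, level translation of the phone genuinely produces almost no acceleration signal, which is what you were seeing.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Smooths noisy accelerometer data with a simple low-pass filter and
// uses it to drive the ball's horizontal motion between two walls.
public class BallController implements SensorEventListener {
    private static final float ALPHA = 0.15f;       // low-pass smoothing factor
    private static final float SENSITIVITY = 120f;  // px/s^2 per m/s^2, tune to taste

    private float filteredAx = 0f;  // smoothed acceleration on X
    private float velocityX = 0f;   // ball velocity in px/s
    private float ballX;            // ball centre in px
    private final float leftWallX, rightWallX;

    public BallController(float leftWallX, float rightWallX) {
        this.leftWallX = leftWallX;
        this.rightWallX = rightWallX;
        this.ballX = (leftWallX + rightWallX) / 2f;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            // exponential moving average removes most of the jitter
            filteredAx = filteredAx + ALPHA * (event.values[0] - filteredAx);
        }
    }

    /** Call once per frame from the game loop with the frame time in seconds. */
    public float update(float dt) {
        velocityX += -filteredAx * SENSITIVITY * dt; // tilting right gives a negative sensor X, so negate it
        ballX += velocityX * dt;
        if (ballX < leftWallX)  { ballX = leftWallX;  velocityX = 0; } // hit left wall
        if (ballX > rightWallX) { ballX = rightWallX; velocityX = 0; } // hit right wall
        return ballX;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```

Tuning ALPHA trades responsiveness against jitter: a smaller value smooths more but reacts more slowly.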
I have a very creative requirement - I am not sure if this is feasible - but it would certainly spice up my app if it could be done.
Premise: On Android phones, if the screen is covered by a hand (not touching, just close to the screen), or if the phone is placed over the ear during a call, the phone locks or basically blacks out. So there must be some tech to recognize that my hand is near the screen.
Problem: I have an image in my app. If the user points to the image without touching the screen, just as an extension to the premise, I must be able to know that the user is pointing to the image and change the image. Is this possible?
UPDATE: An example use: say I want to build a fun app where touching the image leads to some other place. For example, I have two doors, one to a car and one to a lion. Now, just when the user is about to touch door 1, the door should show a message asking "are you sure?", and then actually touching it takes you to another place. Kind of a rudimentary example, but I hope you get the point.
The feature you are talking about is the proximity sensor. See Sensor and SensorEvent.values for Sensor.TYPE_PROXIMITY.
You could get the distance of the hand from the screen, but you won't really be sure where in the XY co-ordinate system the hand is. So you won't be able to figure out whether the user is pointing to the "car door" or to the "lion door".
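For reference, a minimal sketch of reading that sensor is below (the class name is mine). Note that on many devices values[0] is effectively binary: either 0 or the sensor's maximum range, rather than a true distance.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Registers for the proximity sensor and reports whether something is near the screen.
public class ProximityWatcher implements SensorEventListener {
    private final SensorManager sensorManager;
    private final Sensor proximity;

    public ProximityWatcher(SensorManager sensorManager) {
        this.sensorManager = sensorManager;
        this.proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    public void start() {
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float distanceCm = event.values[0];  // many devices only report 0 or the maximum range
        boolean handIsNear = distanceCm < proximity.getMaximumRange();
        // react to handIsNear, e.g. swap the image
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```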
You could make this work on a phone with a wide-angle front camera that can see the whole screen. You'd have to write the software for recognizing hand movements and translating them into screen actions.
Why not just use touch, if I may ask?