Car race 3D object (not image) rotation in Android

I want to develop a car race game in which I rotate the car object (not an image) clockwise on the car-selection screen. After that, I want to change the car object's color when the user drags the screen to the left, like image sliding in Android. I don't know of any third-party API for game development that would make this easier. I think the rotating car must be an animation, but from what I've found, Android only supports animations defined in XML.

Look into OpenGL ES on Android; I think it's the only reasonable way for you to complete your task, and to program games in general.
This link will help you get started: http://www.learnopengles.com/android-lesson-one-getting-started/
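For the rotating-car selection screen specifically, the usual OpenGL ES pattern is to advance an angle each frame inside GLSurfaceView.Renderer.onDrawFrame and rebuild the model matrix with android.opengl.Matrix.setRotateM. Below is a sketch of the same matrix math written in plain Java (class and method names are mine, not from any API), so the rotation itself is visible:

```java
// Sketch: rotating a car model around the vertical (Y) axis.
// In a real app you would call android.opengl.Matrix.setRotateM in
// onDrawFrame instead; this spells out the equivalent math.
public class CarRotation {

    /** Column-major 4x4 rotation matrix about the Y axis (OpenGL layout). */
    public static float[] rotateY(float degrees) {
        float r = (float) Math.toRadians(degrees);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] {
             c, 0, -s, 0,   // column 0
             0, 1,  0, 0,   // column 1
             s, 0,  c, 0,   // column 2
             0, 0,  0, 1 }; // column 3
    }

    /** Apply the matrix to a point (x, y, z). */
    public static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14] };
    }
}
```

Decreasing the angle a little per frame (e.g. `angle -= 1f` each `onDrawFrame`) spins the car clockwise as seen from above in a right-handed, Y-up coordinate system.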

Related

Augmented reality for a fitting room: where to start?

I have no experience with augmented reality or image processing. I know there are lots of documents on the internet, but to look in the right places I should know the basics first. I'm planning to code an Android app that will use augmented reality as a virtual fitting room, and I have determined some of the app's functionality. My question is: how could I implement this functionality, which topics should I look into, where should I start, which key features should the app achieve, and which open-source SDKs would you suggest? Then I can do deeper research.
-- Virtualize clothes, which I will provide, and make them usable in the app
-- Decide which attributes virtualized clothes should have and how to store them
-- Scan real-life clothes, virtualize them, and make them usable in the app
-- Track the person who will try on those clothes
-- Body sizes vary, so the clothes should be resized to fit each person
-- Clothes should look as realistic as possible
-- Whenever a person moves, the clothes should move with them (if the person bends, the clothes also bend and stay fitted on that person), and this should be as quick as possible
Have you tried Snapchat's face filters?
It's essentially the same problem. They need to:
Create a model of a face (where the eyes, nose, mouth, chin, etc. are)
Create a texture to map onto the model of the face
Extract faces from an image/video and map the 2D coordinates from the image to the model of the face you've defined
Draw/Render the texture on top of the image/video feed
Now you'd have to do the same, but instead you'd do it for a human body.
One issue you'd have to deal with is that only "half" of your body is visible to the camera at any time (because the other half faces away from it). Also, your textures would have to map onto a full 3D model, versus the relatively 2D model of a face (facial features lie mostly on a flat plane, which is a good enough approximation).
Good luck!
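To make the mapping step concrete: texture-onto-face rendering is typically done per triangle of a landmark mesh, computing the affine transform that carries each texture triangle onto the matching triangle of detected points. A sketch of that math in plain Java (names are mine, not from any SDK; a real app would feed the result to Canvas or OpenGL):

```java
// Sketch of per-triangle warping: three matched point pairs (texture
// triangle -> detected landmark triangle) define a unique affine map.
public class TriangleWarp {

    /**
     * Maps point (px, py) from the source triangle's frame into the
     * destination triangle's frame. src and dst each hold three {x, y}
     * vertices; degenerate (zero-area) triangles are not handled.
     */
    public static double[] warp(double[][] src, double[][] dst,
                                double px, double py) {
        // Edge vectors of both triangles, taken from vertex 0.
        double ux = src[1][0] - src[0][0], uy = src[1][1] - src[0][1];
        double vx = src[2][0] - src[0][0], vy = src[2][1] - src[0][1];
        double Ux = dst[1][0] - dst[0][0], Uy = dst[1][1] - dst[0][1];
        double Vx = dst[2][0] - dst[0][0], Vy = dst[2][1] - dst[0][1];
        double det = ux * vy - uy * vx;   // zero => degenerate triangle
        // 2x2 linear part M solving M*u = U and M*v = V.
        double a = (Ux * vy - Vx * uy) / det, b = (Vx * ux - Ux * vx) / det;
        double c = (Uy * vy - Vy * uy) / det, d = (Vy * ux - Uy * vx) / det;
        double qx = px - src[0][0], qy = py - src[0][1];
        return new double[] { dst[0][0] + a * qx + b * qy,
                              dst[0][1] + c * qx + d * qy };
    }
}
```

Iterating this over every triangle of the mesh (and every pixel, or letting the GPU interpolate) is what pastes a flat texture onto a moving face or body.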

How to do head-tracking in Android accurately?

I want to load an object using OpenGL ES and view it while rotating the device, as if a real object were floating in the real world.
For example, when I turn my head right, the object on the screen moves left.
I'm using the Android magnetic and accelerometer sensors, but the results are hardly good.
What's the general way of building a VR-like app such as Google Cardboard?
Is there any better way? An example would be highly appreciated, thank you!
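For what it's worth, the usual fix is to let Android fuse the sensors for you: the TYPE_ROTATION_VECTOR sensor delivers the device orientation as a quaternion, and SensorManager.getRotationMatrixFromVector turns it into a matrix whose transpose serves as the view rotation. The quaternion-to-matrix step looks like this (a plain-Java sketch of the standard formula, not the framework source):

```java
// Sketch: converting a unit quaternion (as delivered by the fused
// TYPE_ROTATION_VECTOR sensor) into a rotation matrix, the same job
// SensorManager.getRotationMatrixFromVector performs.
public class HeadTracking {

    /** Row-major 3x3 rotation matrix from a unit quaternion (w, x, y, z). */
    public static double[] toRotationMatrix(double w, double x,
                                            double y, double z) {
        return new double[] {
            1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y),
            2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x),
            2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)
        };
    }
}
```

The fused sensor already combines gyroscope, accelerometer, and magnetometer, so it drifts and jitters far less than hand-combining the raw magnetic and acceleration readings.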

Making scenes from real photos for Google Cardboard

I know about Google Cardboard, and I want to make, say, a campus tour of a company, navigated by head movements in Google Cardboard with Unity. How do I build the campus buildings from the real images I took with my camera? I'm new to Unity but quite familiar with Android coding. Could you link to a Unity tutorial?
Second, is my approach to this idea good with Unity, or should it be done with plain Android?
I want to make something like this YouTube link. Please suggest.
In theory you could position a great many images in 3D space and get a scene with a very modern-art-like look, but it's much easier to make a spherical image and drag it onto a sphere.
You can use a digital camera and Hugin to stitch photos manually for better quality, or just take any Android phone with a gyroscope and a semi-good camera and shoot a photo sphere.
After getting the spherical image, just drag it onto a sphere with reversed normals and put the VR camera at its center: voilà, you've got a VR app.
A good idea would be to allow the user some interaction, such as moving between scenes. Usually there is a point you look at and either wait a bit or press the Cardboard button.

Move OpenGL ES objects with accelerometer

I'm developing an Android application. I want to do the following:
I will have a black screen with an object at its center, for example, a vase.
With this app, I want a 360-degree view of the vase. Let me explain: imagine the vase is the center of an imaginary circle. I want the user to follow this circle, to see the vase from any point of view. I don't know if I'm explaining it well.
In real life, you can move around a vase and see it from the front, the back, and the other sides. I want to simulate this.
My problem is that I'm not sure I can simulate this using the accelerometer.
How can I know whether the user is describing a circle with the phone?
If you don't understand me or you need more details, please tell me.
You should combine the accelerometer with the compass (magnetometer). The compass gives you direction.
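Concretely, this combination is what SensorManager.getRotationMatrix and getOrientation compute: gravity fixes "down", the magnetic vector fixes "north", and their cross products give the heading you can use to walk the camera around the circle. A plain-Java sketch of that math (names are mine; on-device you would just call the SensorManager methods):

```java
// Sketch of combining accelerometer + magnetometer into a compass
// heading, mirroring SensorManager.getRotationMatrix/getOrientation.
public class Heading {

    static double[] cross(double[] p, double[] q) {
        return new double[] { p[1] * q[2] - p[2] * q[1],
                              p[2] * q[0] - p[0] * q[2],
                              p[0] * q[1] - p[1] * q[0] };
    }

    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[] { v[0] / len, v[1] / len, v[2] / len };
    }

    /**
     * Azimuth in degrees (0 = device top faces north, 90 = east), from
     * a gravity reading and a geomagnetic reading in device coordinates.
     */
    public static double azimuthDegrees(double[] gravity, double[] geomagnetic) {
        double[] h = normalize(cross(geomagnetic, gravity)); // device-frame east
        double[] m = cross(normalize(gravity), h);           // device-frame north
        return Math.toDegrees(Math.atan2(h[1], m[1]));
    }
}
```

As the user circles the vase, the azimuth sweeps through 360 degrees, so it can drive the viewing angle directly.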

Touching an object in a tweened animation?

I'm having trouble porting a simple game I developed for the iPhone over to Android. The game has an animated ball which moves from point A to point B; the user must touch the ball before it reaches point B or lose the game. This was easy to implement on the iPhone using Core Animation, since I could locate the current position of the ball by accessing its animation layer. On Android, I attempted to recreate the game using tweened animation, representing the ball as a Drawable. My issue is that I can't determine whether the user is touching the spot, because the Drawable's bounds apparently do not update as the ball visually moves, making the program think the ball is always in its original position. While searching these forums I saw an Android team dev confirm that you can't get the current location in a tweened animation, but no workaround was offered. Can I accomplish this on Android using my current approach? If not, what approach should I use?
Best regards,
Michael
Can I accomplish this on Android using my current approach?
I doubt it. Android's tweened animations are not really designed for games.
If not, what approach should I use?
Implement the game using 2D drawing on the Canvas -- see the LunarLander sample in your SDK.
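With that approach the game owns the ball's position itself, advancing it every frame and hit-testing touches against the current position, which is exactly what the tween couldn't provide. A minimal sketch (class and method names are my own, not from any sample; in a real app update() runs in the draw loop and isTouched() in onTouchEvent):

```java
// Sketch: a ball moving from A to B whose position the game tracks
// itself, so touches can be tested against where the ball is *now*.
public class Ball {
    final float ax, ay, bx, by;   // start (A) and end (B) points
    final float radius;
    final long durationMs;        // time to travel from A to B
    float x, y;                   // current position

    public Ball(float ax, float ay, float bx, float by,
                float radius, long durationMs) {
        this.ax = ax; this.ay = ay; this.bx = bx; this.by = by;
        this.radius = radius; this.durationMs = durationMs;
        this.x = ax; this.y = ay;
    }

    /** Advance along the A->B line; elapsed time is clamped to the duration. */
    public void update(long elapsedMs) {
        float t = Math.min(1f, elapsedMs / (float) durationMs);
        x = ax + (bx - ax) * t;
        y = ay + (by - ay) * t;
    }

    /** The lose condition: the ball has arrived at B. */
    public boolean reachedB(long elapsedMs) {
        return elapsedMs >= durationMs;
    }

    /** True if a touch at (tx, ty) hits the ball at its current position. */
    public boolean isTouched(float tx, float ty) {
        float dx = tx - x, dy = ty - y;
        return dx * dx + dy * dy <= radius * radius;
    }
}
```

Each pass through the loop you call update() with the elapsed time, draw the ball on the Canvas at (x, y), and check reachedB(); touches are tested with isTouched() instead of the stale Drawable bounds.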
