Create a CNN iOS app style menu - Android

I downloaded the CNN iOS app and saw its wonderful menu!
I've posted two screenshots.
Now I'm wondering how I can create that in my application. I'm developing for both iOS and Android, but the Android version of the app doesn't have that menu.
Is it possible to create this menu on Android?
Does anyone have advice on how to create it?
Thank you in advance.

Got solution: create one view containing a list view, and one button with a background image. Then move the view off-screen: if the view is 320 × 300, set y = -300 in the storyboard element's position settings and place the button in the top corner.
On the button action, animate both graphic objects by y = +300. That's all.
- (IBAction)makeTheMagicWithButton:(id)sender {
    // On the first tap the view slides in (y + 300);
    // on the second tap it slides back off-screen (y - 300).
    float y;
    [sender setSelected:![sender isSelected]];
    if ([sender isSelected]) {
        y = +300;
        self.buttonImage = [UIImage imageNamed:@"menuICONSreturn.png"];
    } else {
        y = -300;
        self.buttonImage = [UIImage imageNamed:@"menuICONS.png"];
    }
    // Animate the menu view
    [UIView transitionWithView:self.view
                      duration:0.5
                       options:UIViewAnimationOptionAllowAnimatedContent
                    animations:^{ [self.magicView setFrame:CGRectOffset(self.magicView.frame, 0, y)]; }
                    completion:nil];
    // Animate the menu button
    [UIView transitionWithView:self.view
                      duration:0.5
                       options:UIViewAnimationOptionAllowAnimatedContent
                    animations:^{ [self.magicButton setFrame:CGRectOffset(self.magicButton.frame, 0, y)]; }
                    completion:nil];
    // Swap the button icon
    [self.magicButton setBackgroundImage:self.buttonImage forState:UIControlStateNormal];
}
Hope this helps someone.
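Since the original question also asks about Android, here is a minimal sketch of the same slide-in behaviour using a ViewPropertyAnimator. The view IDs, drawables, and the 300-pixel offset are assumptions, not part of the original answer.

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.widget.ImageButton;

public class MenuActivity extends Activity {

    private View menuView;        // hidden menu container (assumed id: menu_container)
    private ImageButton toggle;   // menu toggle button (assumed id: menu_button)
    private boolean menuShown = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_menu);

        menuView = findViewById(R.id.menu_container);
        toggle = (ImageButton) findViewById(R.id.menu_button);

        // Start with the menu off-screen, like setting y = -300 in the storyboard.
        menuView.setTranslationY(-300f);

        toggle.setOnClickListener(v -> {
            float menuTarget = menuShown ? -300f : 0f;
            float buttonShift = menuShown ? -300f : 300f;

            // Slide the menu and the button by the same offset, mirroring the iOS animation.
            menuView.animate().translationY(menuTarget).setDuration(500).start();
            toggle.animate().translationYBy(buttonShift).setDuration(500).start();

            // Swap the button icon (drawable names are assumptions).
            toggle.setImageResource(menuShown ? R.drawable.menu_icons : R.drawable.menu_icons_return);
            menuShown = !menuShown;
        });
    }
}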

Related

Combine image with video stream on Android

I am investigating Augmented Reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say a .jpeg or .png) with the live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark: when the user zooms in, out, or pans, the overlaid image must also zoom in, out, and pan.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform, or will I need to use other third-party libraries and/or tools to achieve the desired result?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet of paper is orientated so that the user is comfortably drawing (either left or right handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user they have selected a static image to "trace" or "Copy".
This image is chosen on the Android device and is being combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing, and the combined live stream and selected static image will also zoom in and out. This enables the user to make an accurate copy of the selected static image by drawing free-hand.
When the user looks directly at the sheet of paper, they only see their drawing.
When the user views the cast live stream of them drawing on the TV or monitor they see their drawing and the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that the image stays in place, for example over a sheet of paper in order to act as a guide for drawing a copy of the image on the paper.
There are two parts to this: the first is to locate the sheet of paper, and the second is to place the image over the paper and keep it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane with the paper (having some contrast or pattern on it, rather than a plain white sheet, will help), then tapping where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you could tap the 4 corners of the paper and then create anchors there. To do this, register a plane-tapped listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then in onPlaneTapped, create the 4 anchorNodes. Once you have 4, initialize the drawing to be displayed.
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
    if (cornerAnchors.size() != 4) {
        AnchorNode corner = createCornerNode(hitResult.createAnchor());
        arFragment.getArSceneView().getScene().addChild(corner);
        cornerAnchors.add(corner);
    }
    if (cornerAnchors.size() == 4 && drawingNode == null) {
        initializeDrawing();
    }
}
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can be from a resource or a file URL. You want the texture to show the whole image, and scale as the model holding it is resized.
private void initializeDrawing() {
    Texture.Sampler sampler = Texture.Sampler.builder()
            .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
            .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
            .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
            .build();
    Texture.builder()
            .setSource(this, R.drawable.logo_google_developers)
            .setSampler(sampler)
            .build()
            .thenAccept(texture -> {
                MaterialFactory.makeTransparentWithTexture(this, texture)
                        .thenAccept(this::buildDrawingRenderable);
            });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
    Integer[] indices = {
            0, 1, 3, 3, 1, 2
    };

    // Calculate the bounding box of the corner anchors to size the quad.
    // Note: use -Float.MAX_VALUE (not Float.MIN_VALUE, which is the smallest
    // positive float) as the initial maximum so negative coordinates work.
    float min_x = Float.MAX_VALUE;
    float max_x = -Float.MAX_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = -Float.MAX_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    float width = Math.abs(max_x - min_x);
    float height = Math.abs(max_z - min_z);
    float extent = Math.min(width / 2, height / 2);

    Vertex[] vertices = {
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
                    .build()
    };

    RenderableDefinition.Submesh[] submeshes = {
            RenderableDefinition.Submesh.builder()
                    .setMaterial(material)
                    .setTriangleIndices(Arrays.asList(indices))
                    .build()
    };

    RenderableDefinition def = RenderableDefinition.builder()
            .setSubmeshes(Arrays.asList(submeshes))
            .setVertices(Arrays.asList(vertices))
            .build();

    ModelRenderable.builder().setSource(def)
            .setRegistryId("drawing").build()
            .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners, and create a Transformable node so the image can be nudged into position, rotated, or scaled to be the perfect size.
private void positionDrawing(ModelRenderable drawingRenderable) {
    // Calculate the center of the corner anchors (again using -Float.MAX_VALUE
    // as the initial maximum so negative world coordinates are handled).
    float min_x = Float.MAX_VALUE;
    float max_x = -Float.MAX_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = -Float.MAX_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    Vector3 center = new Vector3((min_x + max_x) / 2f,
            cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);

    // Hit-test from the screen point of the center back onto the plane to create an anchor.
    Anchor centerAnchor = null;
    Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
    List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
    for (HitResult hit : hits) {
        if (hit.getTrackable() instanceof Plane) {
            centerAnchor = hit.createAnchor();
            break;
        }
    }

    AnchorNode centerNode = new AnchorNode(centerAnchor);
    centerNode.setParent(arFragment.getArSceneView().getScene());

    drawingNode = new TransformableNode(arFragment.getTransformationSystem());
    drawingNode.setParent(centerNode);
    drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled using AR objects as reference points to size the template for the user.
More complex AR images will not work easily, since the AR image is overlaid on top of the user's tracing, and this will obscure the tip of their pen/pencil.
My solution is to chromakey the white paper. This will replace the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue, unless you have a means of tracking the paper position.
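As a very rough illustration of the white-keying idea, something like the following could be run over an ARGB camera frame. The thresholds here are assumptions, and a real implementation would do this per-fragment in a GL shader on the camera texture rather than on the CPU.

public final class WhiteKey {

    // Makes bright, low-saturation ("paper white") pixels fully transparent.
    public static void keyOutWhite(int[] argbPixels, int brightnessThreshold) {
        for (int i = 0; i < argbPixels.length; i++) {
            int p = argbPixels[i];
            int r = (p >> 16) & 0xFF;
            int g = (p >> 8) & 0xFF;
            int b = p & 0xFF;

            int min = Math.min(r, Math.min(g, b));
            int max = Math.max(r, Math.max(g, b));

            // Bright and nearly grey: treat as paper and zero the alpha channel.
            if (min > brightnessThreshold && (max - min) < 30) {
                argbPixels[i] = p & 0x00FFFFFF;
            }
        }
    }
}

The keyed frame can then be composited behind the chosen reference image or live feed.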
As you can see in the linked example below, AR objects are in front, while the chromakeyed content is in the background. The tracing surface (the paper) would be in the middle.
A reference to this example is at the link below.
RJ
YouTube - AR tracked environment

Tween class help for ActionScript 3

I'm trying to make my avatar (the thing you move with your finger) move via a tween. Right now, when you put your finger on it, it works, but when you take your finger off and put it back on, the avatar instantly teleports to my finger, and I want it to move toward it instead. Please tell me what I've done wrong and what I can improve to make this work. (This is in a class, which is why I use public.)
public var lastPosX:Number;
public var lastPosY:Number;

public function onTouchBegin(e:TouchEvent):void {
    var newPosX:Number = avatar.x; //the point of x your finger is
    var newPosY:Number = avatar.y; //the point of y your finger is
    //checks for first time putting finger down
    if (isNaN(lastPosX)) {
        avatar.x = e.stageX;
        avatar.y = e.stageY;
        //x and y values = to your finger
    }
    else {
        var myTweenX:Tween = new Tween(avatar, "x", Strong.easeOut, lastPosX, newPosX, 5, true);
        var myTweenY:Tween = new Tween(avatar, "y", Strong.easeOut, lastPosY, newPosY, 5, true);
        //makes the avatar move to your finger when you lift your finger off and on
    }
}

public function onTouchMove(e:TouchEvent):void {
    avatar.x = e.stageX;
    avatar.y = e.stageY;
    //x and y values = to your finger
}

public function onTouchFinish(e:TouchEvent):void {
    avatar.x = e.stageX;
    avatar.y = e.stageY;
    lastPosX = avatar.x;
    lastPosY = avatar.y;
    //x and y values = to your finger
    //Defines x and y value of finger
}
If you're developing a game, a Tween isn't a good choice for updating the avatar's position. Create a main game loop, set the target position in onTouchBegin, and correct it in onTouchMove. This tutorial should help you. Otherwise, you are forced to spawn new Tweens in onTouchMove.
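The question is ActionScript 3, but the game-loop idea is language independent; here it is sketched in Java purely for illustration (all names and the smoothing factor are made up). Each frame, the avatar moves a fraction of the remaining distance toward the last touch position instead of snapping or tweening:

public final class Avatar {

    private double x, y;             // current avatar position
    private double targetX, targetY; // last touch position
    private static final double SMOOTHING = 0.15; // fraction of the distance covered per frame (assumed)

    // Call from the touch handlers (onTouchBegin / onTouchMove equivalents).
    public void onTouch(double touchX, double touchY) {
        targetX = touchX;
        targetY = touchY;
    }

    // Call once per frame from the main game loop.
    public void update() {
        x += (targetX - x) * SMOOTHING;
        y += (targetY - y) * SMOOTHING;
    }

    public double getX() { return x; }
    public double getY() { return y; }
}

In AS3 the update() call would live in an ENTER_FRAME handler.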
The switch for action up and down has no default case because, right now, we only want to handle these two cases.
The first two lines give the touch coordinates.
The playable area is set so that touches only work inside that area.
Roughly the lower quarter of the area is occupied by the character, so we divide by 4.

Using Corona how can I get my app to register multiple button presses when the user performs a single drag gesture?

I am generating a 6x6 grid of buttons using widget.newButton. I would like the user to be able to add numbers to a selection by touching the screen and then dragging their finger over the desired buttons. For example, if I wanted to select "811030" (i.e. the top row of the grid), I would just drag my finger over it.
Here is the code I have so far:
local widget = require( "widget" )

local function handleButtonEvent( event )
    local phase = event.phase
    if "moved" == phase then
        print("Button Pressed")
    end
end

function tileRow(numTiles, padding)
    local tileWidth = (display.contentWidth / numTiles) - padding
    local x = padding/2
    local y = display.contentHeight - numTiles * (tileWidth + padding)
    for i = 1, numTiles, 1 do
        for j = 1, numTiles, 1 do
            local myButton = widget.newButton
            {
                left = x,
                top = y,
                width = tileWidth,
                height = tileWidth,
                id = "button_"..i..j,
                label = math.random(0,9),
                onEvent = handleButtonEvent,
            }
            x = x + tileWidth + padding
        end
        x = padding/2
        y = y + tileWidth + padding
    end
end

tileRow(6,1)
Just access the label inside the event handler, and build your full number from there.
I'll let you handle the corner cases, like resetting fullNumber to "" at some point :)
local fullNumber = ""

local function handleButtonEvent( event )
    local phase = event.phase
    local btn = event.target
    local btnLabel = btn.label
    if "moved" == phase then
        print("Button Pressed with label:"..btnLabel)
        fullNumber = fullNumber..btnLabel
        print("Full number: "..fullNumber)
    end
end
Cheers !
Get the button under the finger
In handleButtonEvent, you can retrieve the coordinates (x, y) where the user is touching and moving their finger. Indeed, event.target is not enough for your purpose, because event.target will always be the first button touched.
You have to implement a function GetMyButton(x, y) that returns the widget.newButton under those coordinates. It should not be hard.
Get the label of the found button
According to widget_button.lua, to get the label of a button you should do:
local btnLabel = btn:getLabel()
Concatenate the found labels (be aware of duplication)
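To make the idea concrete, here is a sketch of the grid hit-test and of avoiding duplicate labels. It is written in Java purely for illustration rather than Corona Lua, and the grid geometry parameters are assumptions:

import java.util.ArrayList;
import java.util.List;

public final class GridSelection {

    private final double originX, originY;   // top-left corner of the grid
    private final double tileSize, padding;  // per-tile size and spacing
    private final int numTiles;              // 6 for a 6x6 grid

    private final List<String> picked = new ArrayList<>();
    private int lastRow = -1, lastCol = -1;  // last tile visited, to skip duplicates

    public GridSelection(double originX, double originY,
                         double tileSize, double padding, int numTiles) {
        this.originX = originX;
        this.originY = originY;
        this.tileSize = tileSize;
        this.padding = padding;
        this.numTiles = numTiles;
    }

    // Call with the touch coordinates on every "moved" event; labels[row][col]
    // holds the digit shown on each button.
    public void onMoved(double x, double y, String[][] labels) {
        double cell = tileSize + padding;
        int col = (int) Math.floor((x - originX) / cell);
        int row = (int) Math.floor((y - originY) / cell);
        if (row < 0 || col < 0 || row >= numTiles || col >= numTiles) {
            return; // finger is outside the grid
        }
        if (row == lastRow && col == lastCol) {
            return; // still on the same button, avoid duplicate digits
        }
        lastRow = row;
        lastCol = col;
        picked.add(labels[row][col]);
    }

    public String fullNumber() {
        return String.join("", picked);
    }
}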

How to scroll a list in Unity3D with the touch screen on Android?

I have a list of items, and this list has a scroll bar. On Windows the scroll bar works nicely, but on an Android touch screen the scroll bar is very thin and the user can't touch it. I want users to be able to scroll the list by touching the list itself.
Thanks.
I found out: you must get the touch screen position and set a range for the move.
scrollPosition1 = GUI.BeginScrollView(Rect(0, 400, Screen.width, 175), scrollPosition1, Rect(0, 0, 650, 0));

// touch screen
if (Input.touchCount == 1
        && Screen.height - Input.GetTouch(0).position.y > 450 - scrollPositionHome.y
        && Screen.height - Input.GetTouch(0).position.y < 600 - scrollPositionHome.y)
{
    var touchDelta2 : Vector2 = Input.GetTouch(0).deltaPosition;
    scrollPosition1.x += touchDelta2.x;
}

GUI.skin.font = fnt;
style.normal.textColor = Color.black;
style.alignment = TextAnchor.MiddleRight;

for (i = 0; i < ImgSliderProducts.Length; i++)
{
    GUI.DrawTexture(Rect(20 + (i * 100), 10, 100, 100), ImgSliderProducts[i], ScaleMode.ScaleToFit, true);
}

GUI.EndScrollView();

Drag and drop image from and to fixed position on fixed path

I am trying to allow the user to drag and drop an image from one position to another. The screen layout is as follows:
1 2 3
4 5 6
7 8 9
I want the user to grab image 2, 4, 6, or 8 and drag it to image 5. Upon dragging to image 5 I want to load up a fragment. The user can only drag the image in a straight line from its current position to 5's position, i.e. image 2 can only drag down and only until it is over top of image 5, image 4 can only drag right until over top of 5, etc.
Any insight on how to do this is greatly appreciated.
Thanks,
DMan
So I figured out a way to get it to work. It's pretty hacky, but I basically use the image positions and transition the image via a margin in the direction that I want, from its center, until it reaches the desired position.
I created a temp image (mTempImage) and hid my main image, because the main image had layout parameters that forced it to stay above a specific item in the layout and wouldn't let its margin be less than that item's height.
Here is a sample from my onTouchListener:
int eventX = (int) event.getX();
int eventY = (int) event.getY();
int movingViewHeight = view.getHeight();
int movingViewWidth = view.getWidth();
int destinationTileTop = mTileHere.getTop();
int destinationTileLeft = mTileHere.getLeft();

// Transparent 'Tile Here' image (1)
// Transitional 'Tile Here' image (0 - 1)
// Opaque 'Tile Here' image (0)
float alphaMultiplier = 0;

switch (view.getId()) {
    case R.id.item_position_2:
        // Only allow sliding down from the center of the object (positive eventY)
        if (eventY > movingViewHeight / 2) {
            // Calculate the amount that the image has moved from its center point
            int posFromImgCenter = (eventY - movingViewHeight / 2);
            // Check if the image has reached the center point and stop its motion any farther
            if (posFromImgCenter >= destinationTileTop) {
                params.setMargins(view.getLeft(), destinationTileTop, 0, 0);
                alphaMultiplier = 1;
            }
            // Slide the image with the positioning of the person's finger
            else {
                params.setMargins(view.getLeft(), posFromImgCenter, 0, 0);
                alphaMultiplier = ((float) posFromImgCenter / (float) destinationTileTop);
            }
        }
        // Attempting to slide in an invalid direction leaves the image where it is
        else {
            params.setMargins(view.getLeft(), view.getTop(), 0, 0);
        }

    ... MORE CODE HERE FOR OTHER ITEM POSITIONS

// AFTER SWITCH STATEMENT COMPLETE, UPDATE VIEW ITEMS ACCORDINGLY
// default 0 if not set with valid movement
mTileHere.setAlpha((int) (255 - (255 * alphaMultiplier)));
mTempImage.setLayoutParams(params);
mTempImage.invalidate();
Thanks,
DMan
