360 panorama viewer with Rajawali + VR: how to change the FieldOfView (Android)

I am trying to create a 360 image viewer using Rajawali + VR (the Cardboard toolkit).
When I disable VR mode on the CardboardView, the changes I make to the Field of View property in the renderer have no effect.
In the Google Cardboard docs I found that the view will ignore it:
For monocular rendering, the implementor should feel free to ignore the FieldOfView and instead create a perspective matrix with whatever field of view is desired for monocular rendering
My question is: how can I do this?
And where should I implement it? Neither the renderer nor the CardboardView has a method to set a perspective matrix (float[]).

Updating the device params seems to always get overwritten by the GVR view,
but if you decompile the FieldOfView class, you get this:
public void toPerspectiveMatrix(float near, float far, float[] perspective, int offset) {
    if (offset + 16 > perspective.length) {
        throw new IllegalArgumentException("Not enough space to write the result");
    } else {
        float l = (float) (-Math.tan(Math.toRadians((double) this.left))) * near;
        float r = (float) Math.tan(Math.toRadians((double) this.right)) * near;
        float b = (float) (-Math.tan(Math.toRadians((double) this.bottom))) * near;
        float t = (float) Math.tan(Math.toRadians((double) this.top)) * near;
        Matrix.frustumM(perspective, offset, l, r, b, t, near, far);
    }
}
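So for monocular rendering you can bypass FieldOfView entirely and build the projection matrix yourself. A minimal sketch using android.opengl.Matrix (the method name and the 70-degree FOV are my own choices, not a Rajawali/GVR API):

// A sketch: build the monocular projection matrix yourself instead of
// letting FieldOfView build it.
float[] buildMonocularProjection(int viewportWidth, int viewportHeight) {
    float[] perspective = new float[16];
    float fovYDegrees = 70f;  // whatever monocular field of view you want
    float aspect = (float) viewportWidth / viewportHeight;
    android.opengl.Matrix.perspectiveM(perspective, 0, fovYDegrees, aspect, 0.1f, 100f);
    return perspective;
}
// Feed the result to your renderer wherever it previously consumed the
// matrix produced by FieldOfView.toPerspectiveMatrix().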

Related

Which is the best approach for dynamic drawing in Android?

I want to make a waveform drawing for an audio recorder in Android. The usual one with lines/bars, like this one:
More importantly, I want it live, while the song is being recorded. My app already computes the RMS through AudioRecord. But I am not sure which is the best approach for the actual drawing in terms of processing, resources, battery, etc.
The Visualizer does not show anything meaningful, IMO (are those graphs more or less random stuff??).
I've seen the canvas approach and the layout approach (there are probably more?). In the layout approach you add thin vertical layouts inside a horizontal layout. The advantage is that you don't need to redraw the whole thing every 1/n secs, you just add one layout every 1/n secs... but you need hundreds of layouts (depending on n). In the canvas approach, you need to redraw the whole thing (right??) n times per second. Some even create bitmaps for each drawing...
So, which is cheaper, and why? Is there anything better nowadays? How high an update frequency (i.e., n) is too much for generic low-end devices?
EDIT1
Thanks to the beautiful trick @cactustictacs taught me in his answer, I was able to implement this with ease. Yet, the image is strangely rendered with a kind of "motion blur":
The waveform runs from right to left. You can easily see the motion blur, and the left-most and right-most pixels get "contaminated" by the other end. I guess I can just cut off both extremes...
This renders better if I make my Bitmap bigger (i.e., making widthBitmap bigger), but then the onDraw will be heavier...
This is my full code:
package com.floritfoto.apps.ave;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.util.AttributeSet;
import java.util.Arrays;

public class Waveform extends androidx.appcompat.widget.AppCompatImageView {
    //private float lastPosition = 0.5f; // 0.5 for drawLine method, 0 for the others
    private int lastPosition = 0;
    private final int widthBitmap = 50;
    private final int heightBitmap = 80;
    private final int[] transpixels = new int[heightBitmap];
    private final int[] whitepixels = new int[heightBitmap];
    //private float top, bot; // float for drawLine method, int for the others
    private int aux, top;
    //private float lpf;
    private int width = widthBitmap;
    private float proportionW = (float) (width / widthBitmap);
    Boolean firstLoopIsFinished = false;
    Bitmap MyBitmap = Bitmap.createBitmap(widthBitmap, heightBitmap, Bitmap.Config.ARGB_8888);
    //Canvas canvasB = new Canvas(MyBitmap);
    Paint MyPaint = new Paint();
    Paint MyPaintTrans = new Paint();
    Rect rectLbit, rectRbit, rectLdest, rectRdest;

    public Waveform(Context context, AttributeSet attrs) {
        super(context, attrs);
        MyPaint.setColor(0xffFFFFFF);
        MyPaint.setStrokeWidth(1);
        MyPaintTrans.setColor(0xFF202020);
        MyPaintTrans.setStrokeWidth(1);
        Arrays.fill(transpixels, 0xFF202020);
        Arrays.fill(whitepixels, 0xFFFFFFFF);
    }

    public void drawNewBar() {
        // For drawRect or drawLine
        /*
        top = ((1.0f - Register.tone) * heightBitmap / 2.0f);
        bot = ((1.0f + Register.tone) * heightBitmap / 2.0f);
        // Using drawRect
        //if (firstLoopIsFinished) canvasB.drawRect(lastPosition, 0, lastPosition+1, heightBitmap, MyPaintTrans); // Delete last stuff
        //canvasB.drawRect(lastPosition, top, lastPosition+1, bot, MyPaint);
        // Using drawLine
        if (firstLoopIsFinished) canvasB.drawLine(lastPosition, 0, lastPosition, heightBitmap, MyPaintTrans); // Delete previous stuff
        canvasB.drawLine(lastPosition, top, lastPosition, bot, MyPaint);
        */
        // Using setPixel (makes no sense, setPixels is much better).
        /*
        int top = (int) ((1.0f - Register.tone) * heightBitmap / 2.0f);
        int bot = (int) ((1.0f + Register.tone) * heightBitmap / 2.0f);
        if (firstLoopIsFinished) {
            for (int i = 0; i < top; ++i) {
                MyBitmap.setPixel(lastPosition, i, 0xFF202020);
                MyBitmap.setPixel(lastPosition, heightBitmap - i-1, 0xFF202020);
            }
        }
        for (int i = top; i < bot; ++i) {
            MyBitmap.setPixel(lastPosition, i, 0xffFFFFFF);
        }
        //System.out.println("############## "+top+" "+bot);
        */
        // Using setPixels. Works!!
        top = (int) ((1.0f - Register.tone) * heightBitmap / 2.0f);
        if (firstLoopIsFinished)
            MyBitmap.setPixels(transpixels, 0, 1, lastPosition, 0, 1, heightBitmap);
        MyBitmap.setPixels(whitepixels, top, 1, lastPosition, top, 1, heightBitmap - 2 * top);
        lastPosition++;
        aux = (int) (width - proportionW * (lastPosition));
        rectLbit.right = lastPosition;
        rectRbit.left = lastPosition;
        rectLdest.right = aux;
        rectRdest.left = aux;
        if (lastPosition >= widthBitmap) { firstLoopIsFinished = true; lastPosition = 0; }
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        width = w;
        proportionW = (float) width / widthBitmap;
        rectLbit = new Rect(0, 0, widthBitmap, heightBitmap);
        rectRbit = new Rect(0, 0, widthBitmap, heightBitmap);
        rectLdest = new Rect(0, 0, width, h);
        rectRdest = new Rect(0, 0, width, h);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        drawNewBar();
        canvas.drawBitmap(MyBitmap, rectLbit, rectRdest, MyPaint);
        canvas.drawBitmap(MyBitmap, rectRbit, rectLdest, MyPaint);
    }
}
EDIT2
I was able to prevent the blurring by simply passing null as the Paint in canvas.drawBitmap:
canvas.drawBitmap(MyBitmap, rectLbit, rectRdest, null);
canvas.drawBitmap(MyBitmap, rectRbit, rectLdest, null);
No Paints needed.
Your basic custom view approach would be to implement onDraw and redraw your current data each frame. You'd probably keep some kind of circular buffer holding your most recent n amplitude values, so each frame you'd iterate over those, and use drawRect to draw the bars (you'd calculate things like width, height scaling, start and end positions etc. in onSizeChanged, and use those values when defining the coordinates for the Rects).
That in itself might be fine! The only way you can really tell how expensive draw calls are is to benchmark them, so you could try this approach out and see how it goes. Profile it to see how much time it takes, how much the CPU spikes etc.
There are a few things you can do to make onDraw as efficient as possible, mostly things like avoiding object allocations - so watch out for loop functions that create Iterators, and in the same way you're supposed to create a Paint once instead of creating them over and over in onDraw, you could reuse a single Rect object by setting its coordinates for each bar you need to draw.
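A minimal sketch of that allocation-free pattern (the class name BarWaveformView and the amplitudes field are illustrative, not from the post):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.util.AttributeSet;
import android.view.View;

public class BarWaveformView extends View {
    private final Paint barPaint = new Paint();         // created once, not per frame
    private final Rect barRect = new Rect();            // single reusable Rect
    private final float[] amplitudes = new float[128];  // most recent n values, 0..1
    private float barWidth, halfHeight;

    public BarWaveformView(Context context, AttributeSet attrs) {
        super(context, attrs);
        barPaint.setColor(0xFFFFFFFF);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        barWidth = (float) w / amplitudes.length;  // precompute scaling here, not in onDraw
        halfHeight = h / 2f;
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        for (int i = 0; i < amplitudes.length; i++) {  // indexed loop: no Iterator allocation
            int left = (int) (i * barWidth);
            int bar = (int) (amplitudes[i] * halfHeight);
            barRect.set(left, (int) (halfHeight - bar), (int) (left + barWidth), (int) (halfHeight + bar));
            canvas.drawRect(barRect, barPaint);
        }
    }
}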
Another approach you could try is creating a working Bitmap in your custom view, which you control, and calling drawBitmap inside onDraw to paint it onto the Canvas. That should be a pretty inexpensive call, and it can easily be stretched as required to fit the view.
The idea there is that every time you get new data, you paint it onto the bitmap. Because of how your waveform looks (like blocks), and the fact you can scale it up, really all you need is a single vertical line of pixels for each value, right? So as the data comes in, you paint an extra line onto your already-existing bitmap, adding to the image. Instead of painting the entire waveform block by block every frame, you're just adding the new blocks.
The complication there is when you "fill" the bitmap - now you have to "shift" all the pixels to the left, dropping the oldest ones on the left side, so you can draw the new ones on the right. So you'll need a way to do that!
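For illustration, one way to do that shift with Bitmap.getPixels/setPixels (assuming bmp is your working Bitmap; simple but costly, which is exactly why the circular-buffer idea below avoids it):

// Shift every column of the bitmap one pixel to the left, dropping column 0.
int w = bmp.getWidth(), h = bmp.getHeight();
int[] tmp = new int[(w - 1) * h];
bmp.getPixels(tmp, 0, w - 1, 1, 0, w - 1, h);  // read columns 1..w-1
bmp.setPixels(tmp, 0, w - 1, 0, 0, w - 1, h);  // write them back at columns 0..w-2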
Another approach would be something similar to the circular buffer idea. If you don't know what that is, the idea is you take a normal buffer with a start and an end, but you treat one of the indices as your data's start point, wrap around to 0 when you hit the last index of the buffer, and stop when you hit the index you're calling your end point:
Partially filled buffer (start = index 0, end = index 3):
1 2 3 4 0 0
Data: 1234

Full buffer (start = index 0, end = index 5):
1 2 3 4 5 6
Data: 123456

After adding one more item (start = index 1, end wraps to index 0):
7 2 3 4 5 6
Data: 234567
See how once it's full, you shift the start and end one step "right", wrapping around if necessary? So you always have the most recent 6 values added. You just have to handle reading from the correct index ranges, from start -> lastIndex and then firstIndex -> end
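A minimal sketch of that bookkeeping in Java (the class and field names are mine):

// Circular buffer over a float[]: once full, each add() overwrites the
// oldest value and advances 'start', so it always holds the newest values.
class CircularBuffer {
    private final float[] data;
    private int start = 0;  // index of the oldest value
    private int size = 0;   // number of valid values

    CircularBuffer(int capacity) { data = new float[capacity]; }

    void add(float value) {
        data[(start + size) % data.length] = value;
        if (size < data.length) size++;
        else start = (start + 1) % data.length;  // full: drop the oldest
    }

    float get(int i) {  // i = 0 is the oldest value, i = size - 1 the newest
        return data[(start + i) % data.length];
    }
}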
You could do the same thing with a bitmap - start "filling" it from the left, increasing end so you can draw the next vertical line. Once it's full, wrap around and move end back to the left. When you actually draw the bitmap, instead of drawing the whole thing as-is (723456) you draw it in two parts (23456 then 7). Make sense? When you draw a bitmap to the canvas, there's a call that takes a source Rect and a destination one, so you can draw it in two chunks.
You could always redraw the bitmap from scratch each frame (clear it and draw the vertical lines), so you're basically redrawing your whole data buffer each time. Probably still faster than the drawRect approach for each value, but honestly not much easier than the "treat the bitmap as another circular buffer" method. If you're already managing one circular buffer, it's not much more work - since the buffer and the bitmap will have the same number of values (horizontal pixels in the bitmap's case), you can use the same start and end values for both.
You would never do this with layouts. Layouts are for premade components. They're high level combinations of components and you don't want to dynamically add or remove views from it frequently. For this, you use a custom view with a canvas. Layouts aren't even an option for something like this.

Overlapping GLSurfaceView on SurfaceView and vice versa

ExoPlayer - SurfaceView
Camera2 + MediaCodec - GLSurfaceView
I am using the above view groups for playing video and camera recording.
UI-1: Exo-Surf at the center and Cam-GLS in the top right corner.
UI-2: Cam-GLS at the center and Exo-Surf in the top right corner.
To achieve this I am using setZOrderOnTop to set the z-index, as both are inside a RelativeLayout.
(exoPlayerView.videoSurfaceView as? SurfaceView)?.setZOrderOnTop(true/false)
It seems to work fine on a Samsung S9+ with API 29 (Android 10), and also on API 28.
But for API 21-27 it behaves with some random issues:
Dash-A: the top part of the SurfaceView/GLSurfaceView is not visible
Dash-B: the bottom part of the SurfaceView/GLSurfaceView is not visible
The entire SurfaceView/GLSurfaceView becomes completely transparent in the top right corner
Also tried using setZOrderMediaOverlay, but no luck.
I am sure two SurfaceViews can work together, as the WhatsApp and Google Duo apps use them in video calls. But I am wondering if GLSurfaceView is causing an issue ("something about locking the GL thread"), as commented below in this answer.
Hoping for a working solution for API 21+; any reference links or suggestions would be highly appreciated.
Instead of using the built-in GLSurfaceView, you'll have to create multiple SurfaceViews, and manage how OpenGL draws on one (or more) of those.
The Grafika code (that I mentioned in my comment to the answer you link) is here:
https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/MultiSurfaceActivity.java
In that code, onCreate creates the surfaces:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_multi_surface_test);

    // #1 is at the bottom; mark it as secure just for fun. By default, this will use
    // the RGB565 color format.
    mSurfaceView1 = (SurfaceView) findViewById(R.id.multiSurfaceView1);
    mSurfaceView1.getHolder().addCallback(this);
    mSurfaceView1.setSecure(true);

    // #2 is above it, in the "media overlay"; must be translucent or we will totally
    // obscure #1 and it will be ignored by the compositor. The addition of the alpha
    // plane should switch us to RGBA8888.
    mSurfaceView2 = (SurfaceView) findViewById(R.id.multiSurfaceView2);
    mSurfaceView2.getHolder().addCallback(this);
    mSurfaceView2.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    mSurfaceView2.setZOrderMediaOverlay(true);

    // #3 is above everything, including the UI. Also translucent.
    mSurfaceView3 = (SurfaceView) findViewById(R.id.multiSurfaceView3);
    mSurfaceView3.getHolder().addCallback(this);
    mSurfaceView3.getHolder().setFormat(PixelFormat.TRANSLUCENT);
    mSurfaceView3.setZOrderOnTop(true);
}
The initial draw code is in:
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height)
which calls different local methods depending on some local flags. For example, it calls an example of GL drawing here:
private void drawRectSurface(Surface surface, int left, int top, int width, int height) {
    EglCore eglCore = new EglCore();
    WindowSurface win = new WindowSurface(eglCore, surface, false);
    win.makeCurrent();
    GLES20.glClearColor(0, 0, 0, 0);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    for (int i = 0; i < 4; i++) {
        int x, y, w, h;
        if (width < height) {
            // vertical
            w = width / 4;
            h = height;
            x = left + w * i;
            y = top;
        } else {
            // horizontal
            w = width;
            h = height / 4;
            x = left;
            y = top + h * i;
        }
        GLES20.glScissor(x, y, w, h);
        switch (i) {
            case 0: // 50% blue at 25% alpha, pre-multiplied
                GLES20.glClearColor(0.0f, 0.0f, 0.125f, 0.25f);
                break;
            case 1: // 100% blue at 25% alpha, pre-multiplied
                GLES20.glClearColor(0.0f, 0.0f, 0.25f, 0.25f);
                break;
            case 2: // 200% blue at 25% alpha, pre-multiplied (should get clipped)
                GLES20.glClearColor(0.0f, 0.0f, 0.5f, 0.25f);
                break;
            case 3: // 100% white at 25% alpha, pre-multiplied
                GLES20.glClearColor(0.25f, 0.25f, 0.25f, 0.25f);
                break;
        }
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
    win.swapBuffers();
    win.release();
    eglCore.release();
}
I haven't used this code, so I can only suggest you search for additional details about the various calls you see in that code.
FIRST, try to get a simple example working that has two overlapping SurfaceViews, WITHOUT any OpenGL calls. E.g. solid background color views that overlap. And I reiterate the key point: Do Not make either of them a GLSurfaceView!
THEN attempt to change one of the views to initialize and use OpenGL. (Using logic similar to the code I describe above; still NOT a GLSurfaceView.)

Support multiple aspect ratio in Unity

I've been trying to create a Unity 2D game that supports every aspect ratio, for both Android phones and tablets. Is there a way to do so that's provided or recommended by Unity?
There are a few things to consider. The first is: which elements should be allowed to scale? There are two categories, namely UI and Game Elements.
The Game Elements portion can mean a lot of things. If the game space is limited, the key is typically to include a generous portion of "negative space", i.e. parts of the image that don't affect the gameplay significantly. For instance, the image below could be cropped from the left and right without affecting it significantly. Treat the center part of the image, or one side, as the key element.
One could also stretch the elements, although that might lead to undesirable effects. Having a surplus of image and testing with different aspect ratios is typically the best approach for such background elements. These background elements can be placed in the background, with the canvas set to "Scale With Screen Size" and the "Screen Match Mode" set to whatever works best for your image. See "Canvas Scaler" for more information.
As for the other UI elements, the key is to use anchor points. When you place a UI element, you can tell it to take up either a fixed number of pixels or a portion of the screen. Look at the "Rect Transform" component included with each such UI object. You can adjust these on the screen as well.
Lastly, you could do it programmatically: Screen.height and Screen.width exist, and you could adjust the objects as desired at run time. I suggest you don't do this for everything, but it might help in some cases.
In my case, I work by expressing everything as a scale factor,
so it can support any screen.
//Find Screen resolution at the splash or loading screen
float scalex = DataFactory.SCREEN_WIDTH / (float)DataFactory.OUR_FIXED_GAME_SCREEN;
float scaley = DataFactory.SCREEN_HEIGHT / (float)DataFactory.OUR_FIXED_GAME_SCREEN;
if (scalex >= scaley)
DataFactory.SCALE = scalex;
else
DataFactory.SCALE = scaley;
//Set all size in game at the start
private int gameWidth = (int) (1400 * DataFactory.SCALE);
private int gameHeight = (int) (800 * DataFactory.SCALE);
private int startGameX = (int) (300 * DataFactory.SCALE);
private int startGameY = (int) (280 * DataFactory.SCALE);
private int objectX = (int) (410 * DataFactory.SCALE) + DataFactory.BEGIN_X;
private int objectY = (int) (979 * DataFactory.SCALE) + DataFactory.BEGIN_Y;
private int objectGapX = (int) (400 * DataFactory.SCALE);
private int objectGapY = (int) (180 * DataFactory.SCALE);
private int objectWidth = (int) (560 * DataFactory.SCALE);
private int objectHeight = (int) (400 * DataFactory.SCALE);
private int xRing = (int) (1005 * DataFactory.SCALE) + DataFactory.BEGIN_X;
private int yRing = (int) (1020 * DataFactory.SCALE) + DataFactory.BEGIN_Y;
private int radiusOutside = (int) (740 * DataFactory.SCALE);
private int radiusInside = (int) (480 * DataFactory.SCALE);
private int radiusObject = (int) (600 * DataFactory.SCALE);
private int yObjectRing = (int) (920 * DataFactory.SCALE) + DataFactory.BEGIN_Y;
* ALL FIXED VALUES ARE VALUES I CHOSE BASED ON A SINGLE REFERENCE SCREEN *
This is a sample of a 3D game that I made; I still use the same concept for the GUI part.
This is a sample of a 2D game where I used this concept.
I know it's an old post, but I wanted to show an alternative. You could define towards which axis you would like to scale your game (e.g. the full width should always be visible, and the height should scale relative to the width): store all scene objects under a parent and scale the parent.
E.g. (my width was fixed and the height got cut off to fit the width):
Vector3 bottomLeftPosition = Camera.main.ScreenToWorldPoint(new Vector3(0, 0, -Camera.main.transform.position.z));
Vector3 topRightPosition = Camera.main.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, -Camera.main.transform.position.z));
float width = topRightPosition.x - bottomLeftPosition.x;
float scale = width / optimizedWorldDistance;
gameHolder.transform.localScale = new Vector3(scale, scale, 1); // for 2D
Note: my gameHolder is initially of scale (1,1,1);
You could put everything under a main game object and scale it according to the ratio difference using a simple script (the camera's aspect property can help you detect the screen ratio).
That's my idea.
Sorry for my bad English.
Try the new Unity UI with anchoring.
For all the UI elements you should use the Unity UI system, which is the best way to support multiple platforms and aspect ratios.
The following content is based on this article [1]:
The article says basically the same things that I'm saying:
1)
Regarding designing ONLY in high resolution, the article says:
"Another approach is to use higher resolution graphics (in fact the
one with the highest resolution of the device you want to target) and
scale it down on all devices. However, this is not a good idea because
you effectively need much more memory and will lose performance on
low-end devices."
So designing in high resolution and then scaling down is not a good approach.
As the article says, the best thing is to have different images for different resolutions (SD, HD, UHD) and load the right image when the game is loading. The article says:
"The best approach is to use a different image with the higher resolution and use this image version on the iPhone 4 and the low-res version on an iPhone 3GS, which is effectively what Apple is doing by using images with a #2x suffix for the file name.
In a similar way, you can create all your graphics in ultra-high-resolution needed for the iPad 3 for instance and append another suffix, and load the right image based on the screen resolution the device has. This is called content scaling, as the game was written only for a single "logical" scene size, and all the images & fonts are scaled to the device resolution."
So by using this approach we solved the problem of targeting devices with different RESOLUTIONS.
Now, as the article says, there is another problem, which is targeting devices with different ASPECT RATIOS.
From the article:
"However, this approach is not sufficient when you want to target devices with different aspect ratios"
To do that, I usually choose an aspect ratio that fits the design of my game and use the following script to maintain the same aspect ratio on different devices:
/* The MIT License (MIT)
Copyright (c) 2014, Marcel Căşvan
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. */
using System;
using System.Collections;
using UnityEngine;
[ExecuteInEditMode]
[RequireComponent (typeof (Camera))]
public class CameraFit : MonoBehaviour
{
#region FIELDS
public float UnitsForWidth = 1; // width of your scene in unity units
public static CameraFit Instance;
private float _width;
private float _height;
//*** bottom screen
private Vector3 _bl;
private Vector3 _bc;
private Vector3 _br;
//*** middle screen
private Vector3 _ml;
private Vector3 _mc;
private Vector3 _mr;
//*** top screen
private Vector3 _tl;
private Vector3 _tc;
private Vector3 _tr;
#endregion
#region PROPERTIES
public float Width {
get {
return _width;
}
}
public float Height {
get {
return _height;
}
}
// helper points:
public Vector3 BottomLeft {
get {
return _bl;
}
}
public Vector3 BottomCenter {
get {
return _bc;
}
}
public Vector3 BottomRight {
get {
return _br;
}
}
public Vector3 MiddleLeft {
get {
return _ml;
}
}
public Vector3 MiddleCenter {
get {
return _mc;
}
}
public Vector3 MiddleRight {
get {
return _mr;
}
}
public Vector3 TopLeft {
get {
return _tl;
}
}
public Vector3 TopCenter {
get {
return _tc;
}
}
public Vector3 TopRight {
get {
return _tr;
}
}
#endregion
#region METHODS
private void Awake()
{
try{
if((bool)GetComponent<Camera>()){
if (GetComponent<Camera>().orthographic) {
ComputeResolution();
}
}
}catch (Exception e){
Debug.LogException(e, this);
}
}
private void ComputeResolution()
{
float deviceWidth;
float deviceHeight;
float leftX, rightX, topY, bottomY;
#if UNITY_EDITOR
deviceWidth = GetGameView().x;
deviceHeight = GetGameView().y;
#else
deviceWidth = Screen.width;
deviceHeight = Screen.height;
#endif
//Debug.Log("Aspect Ratio " + GetComponent<Camera>().aspect);
if (GetComponent<Camera>().aspect >= 0.7f)
{
UnitsForWidth = 2.2f;
}
else
{
UnitsForWidth = 2f;
}
/* Set the orthographic size (which is half of the vertical size); when we change the ortho size
 * of the camera, the items will be scaled automatically to fit the camera's frame.
 */
GetComponent<Camera>().orthographicSize = 1f / GetComponent<Camera>().aspect * UnitsForWidth / 2f;
//Get the new height and Widht based on the new orthographicSize
_height = 2f * GetComponent<Camera>().orthographicSize;
_width = _height * GetComponent<Camera>().aspect;
float cameraX, cameraY;
cameraX = GetComponent<Camera>().transform.position.x;
cameraY = GetComponent<Camera>().transform.position.y;
leftX = cameraX - _width / 2;
rightX = cameraX + _width / 2;
topY = cameraY + _height / 2;
bottomY = cameraY - _height / 2;
//*** bottom
_bl = new Vector3(leftX, bottomY, 0);
_bc = new Vector3(cameraX, bottomY, 0);
_br = new Vector3(rightX, bottomY, 0);
//*** middle
_ml = new Vector3(leftX, cameraY, 0);
_mc = new Vector3(cameraX, cameraY, 0);
_mr = new Vector3(rightX, cameraY, 0);
//*** top
_tl = new Vector3(leftX, topY, 0);
_tc = new Vector3(cameraX, topY , 0);
_tr = new Vector3(rightX, topY, 0);
Instance = this;
}
private void Update()
{
#if UNITY_EDITOR
ComputeResolution();
#endif
}
private void OnDrawGizmos()
{
if (GetComponent<Camera>().orthographic) {
DrawGizmos();
}
}
private void DrawGizmos()
{
//*** bottom
Gizmos.DrawIcon(_bl, "point.png", false);
Gizmos.DrawIcon(_bc, "point.png", false);
Gizmos.DrawIcon(_br, "point.png", false);
//*** middle
Gizmos.DrawIcon(_ml, "point.png", false);
Gizmos.DrawIcon(_mc, "point.png", false);
Gizmos.DrawIcon(_mr, "point.png", false);
//*** top
Gizmos.DrawIcon(_tl, "point.png", false);
Gizmos.DrawIcon(_tc, "point.png", false);
Gizmos.DrawIcon(_tr, "point.png", false);
Gizmos.color = Color.green;
Gizmos.DrawLine(_bl, _br);
Gizmos.DrawLine(_br, _tr);
Gizmos.DrawLine(_tr, _tl);
Gizmos.DrawLine(_tl, _bl);
}
private Vector2 GetGameView()
{
System.Type T = System.Type.GetType("UnityEditor.GameView,UnityEditor");
System.Reflection.MethodInfo getSizeOfMainGameView =
T.GetMethod("GetSizeOfMainGameView",System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static);
System.Object resolution = getSizeOfMainGameView.Invoke(null, null);
return (Vector2)resolution;
}
#endregion
}
[1]: http://v-play.net/doc/vplay-different-screen-sizes/
This should solve the different-aspect-ratios problem. Now, if you want to anchor some game object so it always stays in a fixed position even when the game is resized on devices with different aspect ratios, you can use the following script:
/***
* This script will anchor a GameObject to a relative screen position.
* This script is intended to be used with CameraFit.cs by Marcel Căşvan, available here: http://gamedev.stackexchange.com/a/89973/50623
*
* Note: For performance reasons it's currently assumed that the game resolution will not change after the game starts.
* You could avoid this assumption by periodically calling UpdateAnchor() in the Update() function or a coroutine, but this is left as an exercise to the reader.
*/
/* The MIT License (MIT)
Copyright (c) 2015, Eliot Lash
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE. */
using UnityEngine;
using System.Collections;
[ExecuteInEditMode]
public class CameraAnchor : MonoBehaviour {
public enum AnchorType {
BottomLeft,
BottomCenter,
BottomRight,
MiddleLeft,
MiddleCenter,
MiddleRight,
TopLeft,
TopCenter,
TopRight,
};
public AnchorType anchorType;
public Vector3 anchorOffset;
// Use this for initialization
void Start () {
UpdateAnchor();
}
void UpdateAnchor() {
switch(anchorType) {
case AnchorType.BottomLeft:
SetAnchor(CameraFit.Instance.BottomLeft);
break;
case AnchorType.BottomCenter:
SetAnchor(CameraFit.Instance.BottomCenter);
break;
case AnchorType.BottomRight:
SetAnchor(CameraFit.Instance.BottomRight);
break;
case AnchorType.MiddleLeft:
SetAnchor(CameraFit.Instance.MiddleLeft);
break;
case AnchorType.MiddleCenter:
SetAnchor(CameraFit.Instance.MiddleCenter);
break;
case AnchorType.MiddleRight:
SetAnchor(CameraFit.Instance.MiddleRight);
break;
case AnchorType.TopLeft:
SetAnchor(CameraFit.Instance.TopLeft);
break;
case AnchorType.TopCenter:
SetAnchor(CameraFit.Instance.TopCenter);
break;
case AnchorType.TopRight:
SetAnchor(CameraFit.Instance.TopRight);
break;
}
}
void SetAnchor(Vector3 anchor) {
Vector3 newPos = anchor + anchorOffset;
if (!transform.position.Equals(newPos)) {
transform.position = newPos;
}
}
// Update is called once per frame
#if UNITY_EDITOR
void Update () {
UpdateAnchor();
}
#endif
}
Hope this can help; for more info please read the article I've linked above.

Converting Camera Coordinates to Custom View Coordinates

I am trying to make a simple face detection app consisting of a SurfaceView (essentially a camera preview) and a custom View (for drawing purposes) stacked on top. The two views are essentially the same size, stacked on one another in a RelativeLayout. When a person's face is detected, I want to draw a white rectangle on the custom View around their face.
The Camera.Face.rect object returns the face bound coordinates using the coordinate system explained here and the custom View uses the coordinate system described in the answer to this question. Some sort of conversion is needed before I can use it to draw on the canvas.
Therefore, I wrote an additional method ScaleFacetoView() in my custom view class (below). I redraw the custom view every time a face is detected by overriding the onFaceDetection() method. The result is that the white box appears correctly when a face is in the center. The problem I noticed is that it does not correctly track my face when it moves to other parts of the screen.
Namely, if I move my face:
Up - the box goes left
Down - the box goes right
Right - the box goes upwards
Left - the box goes down
I seem to have incorrectly mapped the values when scaling the coordinates. Android docs provide this method of converting using a matrix, but it is rather confusing and I have no idea what it is doing. Can anyone provide some code on the correct way of converting Camera.Face coordinates to View coordinates?
Here's the code for my ScaleFacetoView() method.
public void ScaleFacetoView(Face[] data, int width, int height) {
    // Extract data from the face object and account for the 1000-value offset
    mLeft = data[0].rect.left + 1000;
    mRight = data[0].rect.right + 1000;
    mTop = data[0].rect.top + 1000;
    mBottom = data[0].rect.bottom + 1000;
    // Compute the scale factors
    float xScaleFactor = 1;
    float yScaleFactor = 1;
    if (height > width) {
        xScaleFactor = (float) width / 2000.0f;
        yScaleFactor = (float) height / 2000.0f;
    } else if (height < width) {
        xScaleFactor = (float) height / 2000.0f;
        yScaleFactor = (float) width / 2000.0f;
    }
    // Scale the face parameters
    mLeft = mLeft * xScaleFactor;     // X-coordinate
    mRight = mRight * xScaleFactor;   // X-coordinate
    mTop = mTop * yScaleFactor;       // Y-coordinate
    mBottom = mBottom * yScaleFactor; // Y-coordinate
}
As mentioned above, I call the custom view like so:
@Override
public void onFaceDetection(Face[] arg0, Camera arg1) {
    if (arg0.length == 1) {
        // Get aspect ratio of the screen
        View parent = (View) mRectangleView.getParent();
        int width = parent.getWidth();
        int height = parent.getHeight();
        // Modify xy values in the view object
        mRectangleView.ScaleFacetoView(arg0, width, height);
        mRectangleView.setInvalidate();
        //Toast.makeText( cc ,"Redrew the face.", Toast.LENGTH_SHORT).show();
        mRectangleView.setVisibility(View.VISIBLE);
        //rest of code
Using the explanation Kenny gave, I managed to do the following.
This example works with the front-facing camera.
RectF rectF = new RectF(face.rect);
Matrix matrix = new Matrix();
matrix.setScale(1, 1);
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(rectF);
The RectF mapped by the matrix has all the right coordinates to draw onto the canvas.
If you are using the back camera, I think it is just a matter of changing the scale to:
matrix.setScale(-1, 1);
But I haven't tried that.
The Camera.Face class returns the face bound coordinates using the image frame the phone would save to internal storage, rather than the image displayed in the camera preview. In my case, the images were saved in a different manner from the camera preview, resulting in an incorrect mapping. I had to manually account for the discrepancy by taking the coordinates, rotating them 90 degrees counter-clockwise and flipping them on the y-axis before scaling them to the canvas used for the custom view.
EDIT:
It would also appear that you can't change the way the face bound coordinates are returned by modifying the camera capture orientation using the Camera.Parameters.setRotation(int) method either.
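For reference, the matrix recipe from the Camera.Face docs that the question found confusing boils down to roughly this (here displayOrientation is the value passed to Camera.setDisplayOrientation, and mirror should be true for the front camera; both are assumptions about your setup):

// Driver coordinates run from (-1000, -1000) to (1000, 1000); the view
// runs from (0, 0) to (width, height). Map one onto the other:
Matrix matrix = new Matrix();
matrix.setScale(mirror ? -1 : 1, 1);    // mirror horizontally for the front camera
matrix.postRotate(displayOrientation);  // compensate for the preview rotation
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
RectF faceRect = new RectF(face.rect);
matrix.mapRect(faceRect);               // faceRect is now in view coordinates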

Setting up a cover flow in Android

I created an iPhone application, and now I have been assigned to do the same application on Android. I used OpenFlow from thefaj on GitHub: https://github.com/thefaj/OpenFlow
However, I have yet to find a working cover flow for Android.
Does anyone have experience with this on Android, or know a good place to start?
I used this code in my project:
http://www.inter-fuser.com/2010/02/android-coverflow-widget-v2.html
You can adapt it to load the contents from some data source; it's not hard work.
Just came across http://code.google.com/p/android-coverflow/ which seems to be based on the inter-fuser code with some optimisations.
Extend Gallery and override this method like so:
protected boolean getChildStaticTransformation(View child, Transformation t) {
    t.clear();
    t.setTransformationType(Transformation.TYPE_MATRIX);
    final Matrix matrix = t.getMatrix();
    float childCenterPos = child.getLeft() + (child.getWidth() / 2f);
    float center = getWidth() / 2;
    float diff = Math.abs(center - childCenterPos);
    float scale = diff / getWidth();
    matrix.setScale(1 - scale, 1 - scale);
    return true;
}
Obviously you can do more interesting stuff with the matrix than just scaling, but this is just an example of how easily it can be done.
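One caveat worth noting (not stated in the original answer): ViewGroup only calls getChildStaticTransformation when static transformations are enabled, so the Gallery subclass should switch them on in its constructor. A minimal sketch, with a hypothetical class name:

public class CoverFlowGallery extends Gallery {
    public CoverFlowGallery(Context context, AttributeSet attrs) {
        super(context, attrs);
        // Without this flag, getChildStaticTransformation is never invoked.
        setStaticTransformationsEnabled(true);
    }
}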
