I feel like this question has already been answered many times, but I cannot figure it out. I was basically following this little tutorial about mobile vision and completed it. After that I tried to detect objects myself, starting with a color blob and drawing its borders.
The idea is to start in the middle of the frame (holding the object in the middle of the camera on purpose) and detect the edges of that object by its color. It works as long as I hold the phone in landscape mode (Frame.ROTATION_0). As soon as I'm in portrait mode (Frame.ROTATION_90), the bounding Rect gets drawn rotated, so an object with more height gets drawn with more width, and it is also a bit off.
The docs say that a detector always delivers coordinates relative to an unrotated, upright frame, so how am I supposed to calculate the bounding rectangle's coordinates relative to its rotation?
I don't think it matters much, but here is how I find the color Rect:
public Rect getBounds(Frame frame) {
    int w = frame.getMetadata().getWidth();
    int h = frame.getMetadata().getHeight();
    int scale = 50;
    int scaleX = w / scale;
    int scaleY = h / scale;
    int midX = w / 2;
    int midY = h / 2;
    float ratio = 10.0f; // hue tolerance
    Rect mBoundary = new Rect();
    float[] hsv = new float[3];
    Bitmap bmp = frame.getBitmap();
    // Sample the reference hue at the center of the frame.
    int px = bmp.getPixel(midX, midY);
    Color.colorToHSV(px, hsv);
    Log.d(TAG, "detect: mid hsv: " + hsv[0] + ", " + hsv[1] + ", " + hsv[2]);
    float hue = hsv[0];
    float nhue;
    int x, y;
    // Walk right from the center until the hue leaves the tolerance band.
    for (x = midX + scaleX; x < w; x += scaleX) {
        px = bmp.getPixel(x, midY);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.right = x;
        } else {
            break;
        }
    }
    // Walk left.
    for (x = midX - scaleX; x >= 0; x -= scaleX) {
        px = bmp.getPixel(x, midY);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.left = x;
        } else {
            break;
        }
    }
    // Walk down.
    for (y = midY + scaleY; y < h; y += scaleY) {
        px = bmp.getPixel(midX, y);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.bottom = y;
        } else {
            break;
        }
    }
    // Walk up.
    for (y = midY - scaleY; y >= 0; y -= scaleY) {
        px = bmp.getPixel(midX, y);
        Color.colorToHSV(px, hsv);
        nhue = hsv[0];
        if (nhue <= (hue + ratio) && nhue >= (hue - ratio)) {
            mBoundary.top = y;
        } else {
            break;
        }
    }
    return mBoundary;
}
Then I simply draw it on the canvas in the GraphicOverlay.Graphic draw method. I already use the transformX/Y methods on the Graphic and thought they would also account for the rotation.
I also use the CameraSource and CameraSourcePreview classes provided with the samples.
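For reference, here is a minimal sketch of the kind of remapping I'm looking for (my guess at the mapping, assuming the frame needs a 90-degree clockwise rotation to be upright; the direction may need flipping depending on the camera facing):

// Hedged sketch: a pixel (x, y) in the unrotated frame should land at
// (h - y, x) after a 90-degree clockwise rotation, where h is the unrotated
// frame height, so a Rect would map like this.
public Rect remapForRotation90(Rect r, int h) {
    return new Rect(h - r.bottom, r.left, h - r.top, r.right);
}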
I want to capture only the detected face image, not the whole image, from the CameraSource.
I am drawing the box like this:
public void draw(Canvas canvas) {
    Face face = mFace;
    if (face == null) {
        return;
    }
    // Draws a circle at the position of the detected face, with the face's track id below.
    float x = translateX(face.getPosition().x + face.getWidth() / 2);
    float y = translateY(face.getPosition().y + face.getHeight() / 2);
    canvas.drawCircle(x, y, FACE_POSITION_RADIUS, mFacePositionPaint);
    canvas.drawText("id: " + mFaceId, x + ID_X_OFFSET, y + ID_Y_OFFSET, mIdPaint);
    canvas.drawText("happiness: " + String.format("%.2f", face.getIsSmilingProbability()), x - ID_X_OFFSET, y - ID_Y_OFFSET, mIdPaint);
    canvas.drawText("right eye: " + String.format("%.2f", face.getIsRightEyeOpenProbability()), x + ID_X_OFFSET * 2, y + ID_Y_OFFSET * 2, mIdPaint);
    canvas.drawText("left eye: " + String.format("%.2f", face.getIsLeftEyeOpenProbability()), x - ID_X_OFFSET * 2, y - ID_Y_OFFSET * 2, mIdPaint);
    // Draws a bounding box around the face.
    float xOffset = scaleX(face.getWidth() / 2.0f);
    float yOffset = scaleY(face.getHeight() / 2.0f);
    float left = x - xOffset;
    float top = y - yOffset;
    float right = x + xOffset;
    float bottom = y + yOffset;
    this.faceTrackingCords.y = top;
    this.faceTrackingCords.width = right;
    this.faceTrackingCords.x = left - right;
    this.faceTrackingCords.height = bottom;
    canvas.drawRect(left, top, right, bottom, mBoxPaint);
}
and I am trying to capture the image like this, but the cropped image is not as expected:
mCameraSource.takePicture(null, new CameraSource.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] bytes) {
        Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
        Bitmap bitmapCropped = Bitmap.createBitmap(bitmap, (int) faceTrackingCords.x, (int) faceTrackingCords.y,
                (int) faceTrackingCords.width, (int) faceTrackingCords.height);
        screenImage.setImageBitmap(bitmapCropped);
    }
});
Kindly help.
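A likely culprit (my reading, not confirmed): faceTrackingCords mixes up x, width, and height, and the drawn box is in overlay-view coordinates while takePicture returns the full camera image. A minimal sketch of a corrected crop, assuming left/top/right/bottom are already expressed in the captured bitmap's pixel coordinates:

// Hedged sketch: build the crop rect as (left, top, width, height) and clamp
// it to the bitmap; createBitmap throws if the rect leaves the image bounds.
int cropX = Math.max(0, (int) left);
int cropY = Math.max(0, (int) top);
int cropW = Math.min(bitmap.getWidth() - cropX, (int) (right - left));
int cropH = Math.min(bitmap.getHeight() - cropY, (int) (bottom - top));
Bitmap bitmapCropped = Bitmap.createBitmap(bitmap, cropX, cropY, cropW, cropH);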
Thanks in advance for your help.
I have found many examples using the gyroscope, but I couldn't find one that fits my case.
I'd like to make a simple quiz game that performs actions when I tilt the device 90 degrees forward or backward. Many examples said I might use the "pitch" value of the gyroscope. Could you give me some advice?
I have done something similar, where I needed to draw a rectangle that includes nearby places, point at each place, and show its details.
public void onSensorChanged(SensorEvent event) {
    final Handler handler = new Handler();
    switch (event.sensor.getType()) {
        case Sensor.TYPE_ACCELEROMETER:
            mAcceleromterReading =
                    SensorUtilities.filterSensors(event.values, mAcceleromterReading);
            break;
        case Sensor.TYPE_MAGNETIC_FIELD:
            mMagnetometerReading =
                    SensorUtilities.filterSensors(event.values, mMagnetometerReading);
            break;
    }
    float[] orientation =
            SensorUtilities.computeDeviceOrientation(mAcceleromterReading, mMagnetometerReading);
    if (orientation != null) {
        float azimuth = (float) Math.toDegrees(orientation[0]);
        if (azimuth < 0) {
            azimuth += 360f;
        }
        // Convert pitch and roll from radians to degrees
        float pitch = (float) Math.toDegrees(orientation[1]);
        float roll = (float) Math.toDegrees(orientation[2]);
        if (abs(pitch - pitchPrev) > PITCH_THRESHOLD && abs(roll - rollPrev) > ROLL_THRESHOLD
                && abs(azimuth - azimuthPrev) > AZIMUTH_THRESHOLD) {
            if (DashplexManager.getInstance().mlocation != null) {
                mOverlayDisplayView.setHorizontalFOV(mPreview.getHorizontalFOV());
                mOverlayDisplayView.setVerticalFOV(mPreview.getVerticalFOV());
                mOverlayDisplayView.setAzimuth(azimuth);
                mOverlayDisplayView.setPitch(pitch);
                mOverlayDisplayView.setRoll(roll);
                // Redraw the OverlayDisplayView when the sensor data changes,
                // but only when the camera is not pointing straight up or down
                if (pitch <= 75 && pitch >= -75) {
                    //Log.d("issueAR", "invalidate: ");
                    mOverlayDisplayView.invalidate();
                }
            }
            pitchPrev = pitch;
            rollPrev = roll;
            azimuthPrev = azimuth;
        }
    }
}
The computeDeviceOrientation method:
public static float[] computeDeviceOrientation(float[] accelerometerReading, float[] magnetometerReading) {
    if (accelerometerReading == null || magnetometerReading == null) {
        return null;
    }
    final float[] rotationMatrix = new float[9];
    SensorManager.getRotationMatrix(rotationMatrix, null, accelerometerReading, magnetometerReading);
    // Remap the coordinates with the camera pointing along the Y axis.
    // This way, portrait and landscape orientation return the same azimuth to magnetic north.
    final float[] cameraRotationMatrix = new float[9];
    SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_X,
            SensorManager.AXIS_Z, cameraRotationMatrix);
    final float[] orientationAngles = new float[3];
    SensorManager.getOrientation(cameraRotationMatrix, orientationAngles);
    // Return a float array containing [azimuth, pitch, roll]
    return orientationAngles;
}
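For completeness, a short usage sketch (my addition, not part of the original answer): the two readings above come from registering one listener for both sensors, for example from an Activity:

// Hedged sketch: field and listener names assumed to match onSensorChanged above.
SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
        SensorManager.SENSOR_DELAY_UI);
sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
        SensorManager.SENSOR_DELAY_UI);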
The onDraw method:
@SuppressLint("DrawAllocation")
@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    // Log.d("issueAR", "onDraw: ");
    // Log.d("issueAR", "mVerticalFOV: "+mVerticalFOV+" "+"mHorizontalFOV"+mHorizontalFOV);
    // Get the viewports only once
    if (!mGotViewports && mVerticalFOV > 0 && mHorizontalFOV > 0) {
        mViewportHeight = canvas.getHeight() / mVerticalFOV;
        mViewportWidth = canvas.getWidth() / mHorizontalFOV;
        mGotViewports = true;
        //Log.d("onDraw", "mViewportHeight: " + mViewportHeight);
    }
    if (!mGotViewports) {
        return;
    }
    // Set the paints that remain constant only once
    if (!mSetPaints) {
        mTextPaint.setTextAlign(Paint.Align.LEFT);
        mTextPaint.setTextSize(getResources().getDimensionPixelSize(R.dimen.canvas_text_size));
        mTextPaint.setColor(Color.WHITE);
        mOutlinePaint.setStyle(Paint.Style.STROKE);
        mOutlinePaint.setStrokeWidth(mOutline);
        mBubblePaint.setStyle(Paint.Style.FILL);
        mSetPaints = true;
    }
    // Center of view
    float x = canvas.getWidth() / 2;
    float y = canvas.getHeight() / 2;
    /*
     * Uncomment the line below to allow rotation of the display around the center point
     * based on the roll. However, this "feature" is not very intuitive, and requires
     * locking device orientation to portrait or changing the sensor rotation matrix
     * on device rotation. It's really quite a nightmare.
     */
    //canvas.rotate((0.0f - mRoll), x, y);
    float dy = mPitch * mViewportHeight;
    if (mNearbyPlaces != null) {
        //Log.d("OverlayDisplayView", "mNearbyPlaces: "+mNearbyPlaces.size());
        // Iterate backwards to draw more distant places first
        for (int i = mNearbyPlaces.size() - 1; i >= 0; i--) {
            NearbyPlace nearbyPlace = mNearbyPlaces.get(i);
            float xDegreesToTarget = mAzimuth - nearbyPlace.getBearingToPlace();
            float dx = mViewportWidth * xDegreesToTarget;
            float iconX = x - dx;
            float iconY = y - dy;
            if (isOverlapping(iconX, iconY).isOverlapped()) {
                PointF point = calculateNewXY(new PointF(iconX, iconY + mViewportHeight));
                iconX = point.x;
                iconY = point.y;
            }
            nearbyPlace.setIconX(iconX);
            nearbyPlace.setIconY(iconY);
            Bitmap icon = getIcon(nearbyPlace.getIcon_id());
            float width = icon.getWidth() + mTextPaint.measureText(nearbyPlace.getName()) + mMargin;
            RectF recf = new RectF(iconX, iconY, iconX + width, iconY + icon.getHeight());
            nearbyPlace.setRect(recf);
            float angleToTarget = xDegreesToTarget;
            if (xDegreesToTarget < 0) {
                angleToTarget = 360 + xDegreesToTarget;
            }
            if (angleToTarget >= 0 && angleToTarget < 90) {
                nearbyPlace.setQuadrant(1);
                mQuad1Places.add(nearbyPlace);
            } else if (angleToTarget >= 90 && angleToTarget < 180) {
                nearbyPlace.setQuadrant(2);
                mQuad2Places.add(nearbyPlace);
            } else if (angleToTarget >= 180 && angleToTarget < 270) {
                nearbyPlace.setQuadrant(3);
                mQuad3Places.add(nearbyPlace);
            } else {
                nearbyPlace.setQuadrant(4);
                mQuad4Places.add(nearbyPlace);
            }
            //Log.d("TAG", " - X: " + iconX + " y: " + iconY + " angle: " + angleToTarget + " display: " + nearbyPlace.getIcon_id());
        }
        drawQuadrant(mQuad1Places, canvas);
        drawQuadrant(mQuad2Places, canvas);
        drawQuadrant(mQuad3Places, canvas);
        drawQuadrant(mQuad4Places, canvas);
    }
}
This doesn't contain the full code, but it should show how pitch and azimuth are used together with roll. Best of luck!
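For the original quiz-game question, a much smaller sketch may be enough (my addition; onTiltForward/onTiltBackward are hypothetical callbacks, mAccel/mMag are assumed fields, and the sign convention of pitch may need flipping on your device): derive the pitch the same way and trigger when it approaches 90 degrees.

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        mAccel = event.values.clone();
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        mMag = event.values.clone();
    }
    if (mAccel == null || mMag == null) return;
    float[] rotationMatrix = new float[9];
    if (SensorManager.getRotationMatrix(rotationMatrix, null, mAccel, mMag)) {
        float[] angles = new float[3];
        SensorManager.getOrientation(rotationMatrix, angles);
        float pitch = (float) Math.toDegrees(angles[1]);
        // Trigger slightly before the full 90 degrees so the gesture feels responsive.
        if (pitch > 80f) {
            onTiltForward();   // hypothetical quiz action
        } else if (pitch < -80f) {
            onTiltBackward();  // hypothetical quiz action
        }
    }
}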
I am rendering video on a GLSurfaceView with OpenGL. The OpenGL portion is written in C++ in the native layer. This is my render routine:
void VideoRenderOpenGL2::Render(const unsigned char *pData)
{
    ......................
    // GL_OPERATION is a macro, nothing special
    GL_OPERATION(glUseProgram(m_program))
    UpdateTextures(pData); // other routine, I will post the function if needed
    bool bClear = true;
    float vpx = 0.0f;
    float vpy = 0.0f;
    float vpw = 1.0f;
    float vph = 1.0f;
    int nOrientation = 0;
    float uLeft, uRight, vTop, vBottom;
    uLeft = vBottom = 0.0f;
    uRight = m_uvx;
    vTop = m_uvy;
    GLfloat squareUvs[] = {
        uLeft, vTop,
        uRight, vTop,
        uLeft, vBottom,
        uRight, vBottom
    };
    if (bClear) {
        GL_OPERATION(glViewport(0, 0, m_nDisplayWidth, m_nDisplayHeight))
        GL_OPERATION(glClearColor(0, 0, 0, 1))
        GL_OPERATION(glClear(GL_COLOR_BUFFER_BIT))
    }
    GLfloat squareVertices[8];
    // drawing surface dimensions
    int screenW = m_nDisplayWidth;
    int screenH = m_nDisplayHeight;
    if (nOrientation == 90 || nOrientation == 270) {
        screenW = m_nDisplayHeight;
        screenH = m_nDisplayWidth;
    }
    int x, y, w, h;
    // Fill the smallest dimension, then compute the other one using the image ratio
    if (screenW <= screenH) {
        float ratio = m_nTextureHeight / (float)m_nTextureWidth;
        w = screenW * vpw;
        h = w * ratio;
        if (h > screenH) {
            w *= screenH / (float)h;
            h = screenH;
        }
        x = vpx * m_nDisplayWidth;
        y = vpy * m_nDisplayHeight;
    } else {
        float ratio = m_nTextureWidth / (float)m_nTextureHeight;
        h = screenH * vph;
        w = h * ratio;
        if (w > screenW) {
            h *= screenW / (float)w;
            w = screenW;
        }
        x = vpx * screenW;
        y = vpy * screenH;
    }
    // here m_nDisplayWidth = 5536, m_nDisplayHeight = 3114, w = 5536, h = 3114,
    // x = 0, y = 0, screenW = 5536, screenH = 3114,
    // m_nTextureWidth = 1280, m_nTextureHeight = 720
    squareVertices[0] = (x - w * 0.5) / screenW - 0.;
    squareVertices[1] = (y - h * 0.5) / screenH - 0.;
    squareVertices[2] = (x + w * 0.5) / screenW - 0.;
    squareVertices[3] = (y - h * 0.5) / screenH - 0.;
    squareVertices[4] = (x - w * 0.5) / screenW - 0.;
    squareVertices[5] = (y + h * 0.5) / screenH - 0.;
    squareVertices[6] = (x + w * 0.5) / screenW - 0.;
    squareVertices[7] = (y + h * 0.5) / screenH - 0.;
    GL_OPERATION(glViewport(0, 0, m_nDisplayWidth, m_nDisplayHeight))
    GLfloat mat[16];
#define VP_SIZE 1.0f
    float vpDim = VP_SIZE / (2 * m_scaleFactor);
#define ENSURE_RANGE_A_INSIDE_RANGE_B(a, aSize, bMin, bMax) \
    if (2 * aSize >= (bMax - bMin)) \
        a = 0; \
    else if ((a - aSize < bMin) || (a + aSize > bMax)) { \
        float diff; \
        if (a - aSize < bMin) diff = bMin - (a - aSize); \
        else diff = bMax - (a + aSize); \
        a += diff; \
    }
    float zoom_cx = 0.0f;
    float zoom_cy = 0.0f;
    ENSURE_RANGE_A_INSIDE_RANGE_B(zoom_cx, vpDim, squareVertices[0], squareVertices[2])
    ENSURE_RANGE_A_INSIDE_RANGE_B(zoom_cy, vpDim, squareVertices[1], squareVertices[7])
    LoadOrthographicMatrix(
        zoom_cx - vpDim,
        zoom_cx + vpDim,
        zoom_cy - vpDim,
        zoom_cy + vpDim,
        0, 0.5, mat);
    GL_OPERATION(glUniformMatrix4fv(m_uniforms[UNIFORM_PROJ_MATRIX], 1, GL_FALSE, mat))
#define degreesToRadians(d) (2.0 * 3.14159265 * d / 360.0)
    float rad = degreesToRadians(nOrientation);
    GL_OPERATION(glUniform1f(m_uniforms[UNIFORM_ROTATION], rad))
    GL_OPERATION(glActiveTexture(GL_TEXTURE0))
    GL_OPERATION(glBindTexture(GL_TEXTURE_2D, m_textures[Y]))
    GL_OPERATION(glUniform1i(m_uniforms[UNIFORM_TEXTURE_Y], 0))
    GL_OPERATION(glActiveTexture(GL_TEXTURE1))
    GL_OPERATION(glBindTexture(GL_TEXTURE_2D, m_textures[U]))
    GL_OPERATION(glUniform1i(m_uniforms[UNIFORM_TEXTURE_U], 1))
    GL_OPERATION(glActiveTexture(GL_TEXTURE2))
    GL_OPERATION(glBindTexture(GL_TEXTURE_2D, m_textures[V]))
    GL_OPERATION(glUniform1i(m_uniforms[UNIFORM_TEXTURE_V], 2))
    GL_OPERATION(glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices))
    GL_OPERATION(glEnableVertexAttribArray(ATTRIB_VERTEX))
    GL_OPERATION(glVertexAttribPointer(ATTRIB_UV, 2, GL_FLOAT, 1, 0, squareUvs))
    GL_OPERATION(glEnableVertexAttribArray(ATTRIB_UV))
    GL_OPERATION(glDrawArrays(GL_TRIANGLE_STRIP, 0, 4))
}
And this is LoadOrthographicMatrix:
void VideoRenderOpenGL2::LoadOrthographicMatrix(float left, float right, float bottom, float top, float near, float far, float* mat)
{
    float r_l = right - left;
    float t_b = top - bottom;
    float f_n = far - near;
    float tx = -(right + left) / (right - left);
    float ty = -(top + bottom) / (top - bottom);
    float tz = -(far + near) / (far - near);
    mat[0] = (2.0f / r_l);
    mat[1] = mat[2] = mat[3] = 0.0f;
    mat[4] = 0.0f;
    mat[5] = (2.0f / t_b);
    mat[6] = mat[7] = 0.0f;
    mat[8] = mat[9] = 0.0f;
    mat[10] = -2.0f / f_n;
    mat[11] = 0.0f;
    mat[12] = tx;
    mat[13] = ty;
    mat[14] = tz;
    mat[15] = 1.0f;
}
Suppose my device's dimensions are 1080 x 1557 and I am trying to render a 2768 x 1557 video (height equal to the device height, with the corresponding width keeping the 1280 x 720 aspect ratio) on the GLSurfaceView. Everything works fine so far: Render(const unsigned char *pData) renders properly and glViewport(0, 0, m_nDisplayWidth, m_nDisplayHeight) works fine.
But when I load a video twice that size, i.e. 5536 x 3114 instead of 2768 x 1557, the video is squeezed (not truncated) along the X axis. Render(...) draws the full contents of the video but does not use the full canvas. I can't figure out what's wrong here. Why is the video squeezed along the X axis?
It should be noted that the video is rendered even more squeezed the further I increase the width and height beyond twice the original; it's fine up to 2768 x 1557.
You may be exceeding the limits of your OpenGL implementation. In particular, the maximum texture size and the maximum viewport dimensions could come into play.
To query the maximum texture size, use:
GLint maxTexSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
and for the maximum viewport dimensions:
GLint viewportDims[2] = {0};
glGetIntegerv(GL_MAX_VIEWPORT_DIMS, viewportDims);
Typical values for these limits are as low as 2K for current low-end devices, and possibly even lower for older devices. 4K and 8K are very common for current mainstream devices, and recent high-end mobile GPUs support sizes up to 16K.
So before you try sizes over 4K, you should definitely check these limits. It's very likely that your 5536x3114 size is beyond the limit of some fairly recent devices.
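If you drive the setup from the Java side, here is a rough sketch of the same checks using android.opengl.GLES20 (my addition; it must run on a thread with a current GL context, e.g. in GLSurfaceView.Renderer.onSurfaceCreated):

// Query the GPU limits, then clamp the requested video size to them.
int[] maxTexSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTexSize, 0);
int[] maxViewport = new int[2];
GLES20.glGetIntegerv(GLES20.GL_MAX_VIEWPORT_DIMS, maxViewport, 0);
int maxW = Math.min(maxTexSize[0], maxViewport[0]);
int maxH = Math.min(maxTexSize[0], maxViewport[1]);
int targetW = Math.min(5536, maxW);
int targetH = Math.min(3114, maxH);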
I am creating a game which involves shooting in the direction where the user clicked.
So from point A(x, y) to point B(x1, y1), I want the bullet bitmap to animate. I have done some calculation/math and figured out a way to do it, but it's not great-looking and doesn't feel natural.
My approach is to calculate the difference between x and x1 and between y and y1, and just scale the speeds accordingly.
For example, if the X difference between x and x1 is 100 and the Y difference between y and y1 is 50, I calculate X/Y and get 2.0, which equals 2:1, so I know that I should move X twice as fast as Y.
Here is my code; if anyone has any suggestions on how to make it better, let me know.
float proportion;
float diffX = (x1 - x);
if (diffX == 0) diffX = 0.00001f;
float diffY = (y1 - y);
if (diffY == 0) diffY = 0.00001f;
if (Math.abs(diffX) > Math.abs(diffY)) {
    proportion = Math.abs(diffX) / Math.abs(diffY);
    speedY = 2;
    speedX = proportion * speedY;
} else if (Math.abs(diffX) < Math.abs(diffY)) {
    proportion = Math.abs(diffY) / Math.abs(diffX);
    speedX = 2;
    speedY = proportion * speedX;
} else {
    speedX = speedY = 2;
}
if (diffY < 0) speedY = -speedY;
if (diffX < 0) speedX = -speedX;
if (speedX >= 10) speedX = 9;
if (speedX <= -10) speedX = -9;
if (speedY >= 10) speedY = 9;
if (speedY <= -10) speedY = -9;
The following implements LERP (linear interpolation) to move the bullet along a straight line:
// move from (x1, y1) to (x2, y2) with speed "speed" (which must be positive)
final double deltay = y2 - y1;
final double deltax = x2 - x1;
double deltalen = Math.sqrt(deltay * deltay + deltax * deltax);
if (deltalen <= speed) {
    display(x2, y2);
} else {
    double finalx = x1 + deltax * speed / deltalen; // deltalen > 0 here, since speed > 0
    double finaly = y1 + deltay * speed / deltalen;
    display(finalx, finaly);
}
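A usage sketch of the same idea as a per-frame update (my addition; bulletX/bulletY, targetX/targetY, and drawBullet are hypothetical names): replace (x1, y1) with the bullet's current position each frame until it reaches the target.

// Call once per frame: the bullet moves "speed" pixels along the straight
// line toward the click point and stops exactly on it.
void updateBullet(double speed) {
    double dx = targetX - bulletX;
    double dy = targetY - bulletY;
    double len = Math.sqrt(dx * dx + dy * dy);
    if (len <= speed) {
        bulletX = targetX;
        bulletY = targetY;
    } else {
        bulletX += dx * speed / len;
        bulletY += dy * speed / len;
    }
    drawBullet(bulletX, bulletY);
}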
Here is the code to elaborate on my comment:
float slope = (y2 - y1) / (x2 - x1); // rise over run
float dx = 0.1f; // tweak to set the bullet's speed
float x = x1;
while (x < x2) {
    float y = slope * (x - x1) + y1;
    DisplayBullet(x, y);
    x += dx;
}
// at this point x = x2 and, if everything went right, y = y2
Here I'm assuming that x1 < x2; you'll have to swap the points when that's not the case, and handle a vertical shot (x1 == x2) separately, since the slope is undefined there.
I have tried Romain Guy's TextureView sample code (http://pastebin.com/J4uDgrZ8), and it works great. But when I change lockCanvas(null) into lockCanvas(new Rect(x, y, x+20, y+20)), the example starts to flicker.
It seems that lockCanvas(Rect) does not work well with TextureView; or is there some other reason?
I am using a Motorola XOOM with Android 4.0.3.
Thanks for any help!
The code I modified is as follows:
public void run() {
    float x = 0.0f;
    float y = 0.0f;
    float speedX = 5.0f;
    float speedY = 3.0f;
    Paint paint = new Paint();
    paint.setColor(0xff00ff00);
    while (mRunning && !Thread.interrupted()) {
        //final Canvas canvas = mSurface.lockCanvas(null);
        final Canvas canvas = mSurface.lockCanvas(new Rect((int) x, (int) y,
                (int) (x + 20.0f), (int) (y + 20.0f)));
        try {
            canvas.drawColor(0x00000000, PorterDuff.Mode.CLEAR);
            canvas.drawRect(x, y, x + 20.0f, y + 20.0f, paint);
        } finally {
            mSurface.unlockCanvasAndPost(canvas);
        }
        if (x + 20.0f + speedX >= mSurface.getWidth() || x + speedX <= 0.0f) {
            speedX = -speedX;
        }
        if (y + 20.0f + speedY >= mSurface.getHeight() || y + speedY <= 0.0f) {
            speedY = -speedY;
        }
        x += speedX;
        y += speedY;
        try {
            //Thread.sleep(15);
            Thread.sleep(1);
        } catch (InterruptedException e) {
            // Interrupted
        }
    }
}
I checked it carefully and found that it is Thread.sleep(1) together with lockCanvas(Rect) that leads to the flicker. When I use lockCanvas(null), sleep(1) is fine. So can lockCanvas(Rect) not be refreshed as fast as lockCanvas(null)?
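One thing worth checking (my suggestion, not from the original post): with lockCanvas(Rect), pixels outside the dirty rect are not guaranteed to be preserved between buffers, and the rect must cover everything you repaint. A minimal sketch that locks the union of the square's previous and current bounds, so both the erase and the redraw fall inside the locked region:

// Hedged sketch: prevX/prevY are assumed fields holding last frame's position.
Rect dirty = new Rect((int) prevX, (int) prevY,
        (int) (prevX + 20.0f), (int) (prevY + 20.0f));
dirty.union((int) x, (int) y, (int) (x + 20.0f), (int) (y + 20.0f));
final Canvas canvas = mSurface.lockCanvas(dirty);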