I'm using the Android CoverFlow widget, and it works fine on most devices, but on Android 4.0.3 it does not put the center image back in the center once you slide back and forth.
The images remain "stuck" at the wrong angle.
Has anyone had similar issues? What could cause this behavior?
The middle image in the attached screenshot should be centered, not angled as it is.
I just added
child.invalidate()
before
final int childCenter = getCenterOfView(child);
in getChildStaticTransformation(View child, Transformation t), so it becomes:
protected boolean getChildStaticTransformation(View child, Transformation t) {
    child.invalidate();
    final int childCenter = getCenterOfView(child);
    final int childWidth = child.getWidth();
    int rotationAngle = 0;
Are you using Neil Davies' Coverflow Widget V2?
If yes, I found the problem. If not, I'm sorry, I can't help you.
The problem is in the function getCenterOfView. More precisely, it is a problem with view.getLeft() (please tell me if anyone knows why it behaves differently after 4.0).
The value returned from view.getLeft() is different every time, which breaks getChildStaticTransformation: it can't work out which ImageView is the center one.
My solution, a dirty fix, is to give it a range in which to detect the center.
if (childCenter <= mCoveflowCenter + 125
        && childCenter >= mCoveflowCenter - 125) {
    transformImageBitmap((ImageView) child, t, 0);
}
Please let me know if anyone has a better solution on this.
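Not necessarily better, but here is a sketch of the same range check with the tolerance derived from the child width instead of the hardcoded 125; mCoveflowCenter and transformImageBitmap come from the widget, the rest is my assumption:
final int tolerance = Math.max(1, child.getWidth() / 2); // half the child's width as the detection range
if (Math.abs(childCenter - mCoveflowCenter) <= tolerance) {
    transformImageBitmap((ImageView) child, t, 0);
}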
I resolved it by following this code:
private int offsetChildrenLeftAndRight() {
    int offset = 0;
    for (int i = getChildCount() - 1; i >= 0; i--) {
        getChildAt(i).offsetLeftAndRight(offset);
        // on Jelly Bean and later the children also need an explicit redraw
        if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.JELLY_BEAN)
            getChildAt(i).invalidate();
    }
    return offset;
}
final int childCenter = getCenterOfView(child) + offsetChildrenLeftAndRight();
I know there are some similar topics that have been posted, but I couldn't find a good solution for my problem.
I have a GridView which is filled with a custom ImageAdapter. Everything works fine, but whenever I click on an image contained in the GridView, I would like to move another ImageView to the click position.
However, the coordinates of the event, which I take with event.getX() and event.getY(), don't correspond to the click position.
I first thought it was a dp/px conversion problem, and I tried several solutions along those lines, but none of them worked.
Then I tried to use getXPrecision(), but I couldn't make a working solution out of it either.
Maybe there is another way?
I would like to compute the correct position programmatically, without adding constant ints, so my project will work on various phones and tablets with different densities and resolutions.
EDIT: Here is a screenshot where I clicked the 3rd cell of the first line and set the position of the pencil with getRawX()/getRawY(). As you can see, this is not the correct position; I want the red dot (the ImageView's center) to be positioned where I clicked.
The code used:
// getting the position of the onTouch event:
GridView centre = (GridView) findViewById(R.id.gridView);
adapter = new ImageAdapter(this, (dim * dim), tailleCell);
centre.setAdapter(adapter);
centre.setOnTouchListener(new View.OnTouchListener() {
    public boolean onTouch(View view, MotionEvent event) {
        int X = (int) event.getRawX();
        int Y = (int) event.getRawY();
        animation(X, Y, etat);
        return false;
    }
});
// launching the animation (and setting the position of the pencil):
private void animation(int posX, int posY, int etat)
{
    final ImageView img;
    if (etat == 0)
    {
        img = (ImageView) findViewById(R.id.imageView);
    }
    else
    {
        img = (ImageView) findViewById(R.id.imageView2);
    }
    img.clearAnimation();
    img.setVisibility(View.VISIBLE);
    img.setX(posX);
    img.setY(posY);
    [...]
}
EDIT 2: ~Solution:
Jesus Molina Rodríguez De Vera's solution wasn't working as expected, but I managed to put together a workable solution. I just changed my code in the event handler to adjust the image's position:
int[] offset = new int[2];
centre.getLocationOnScreen(offset);
int Xoffset = offset[0];
int Yoffset = offset[1];
int X = (int) event.getRawX();
int Y = (int) event.getRawY();
animation(X - ((int) Math.round(Xoffset / 1.15)), Y - ((int) Math.round(Yoffset / 1.5)), etat);
Sorry for my bad English :)
Thanks for your help!
Try using getRawX() and getRawY() instead of getX() and getY().
Edit
I think that I have found the problem.
You are obtaining X and Y relative to the GridView's top-left corner, not as absolute screen coordinates.
What you can do is the following:
int[] offset = new int[2];
center.getLocationOnScreen(offset);
int Xoffset = offset[0];
int Yoffset = offset[1];

private void animation(int posX, int posY, int etat) {
    // ...
    img.setX(posX + Xoffset);
    img.setY(posY + Yoffset);
    [...]
}
This is supposed to put the top-left corner of the ImageView at the selected point. In order to put its center at that point:
int ivWidth = img.getWidth();
int ivHeight = img.getHeight();

private void animation(int posX, int posY, int etat) {
    // ...
    int[] finalPosition = new int[2];
    finalPosition[0] = posX + Xoffset - (ivWidth / 2);
    finalPosition[1] = posY + Yoffset - (ivHeight / 2);
    img.setX(finalPosition[0]);
    img.setY(finalPosition[1]);
    [...]
}
I haven't tried it, but it should work.
Edit 2
Xoffset and Yoffset are only needed if you use getX()/getY() instead of getRawX()/getRawY().
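For completeness, here is a rough sketch of the getRawX()/getRawY() variant that also centers the image under the finger. Here img stands for whichever ImageView animation() resolves, and subtracting the parent's on-screen offset is my assumption about why the raw coordinates still looked shifted:
centre.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View view, MotionEvent event) {
        // getRawX()/getRawY() are screen coordinates, while setX()/setY() are
        // relative to img's parent, so subtract the parent's screen position.
        int[] parentOffset = new int[2];
        ((View) img.getParent()).getLocationOnScreen(parentOffset);
        img.setX(event.getRawX() - parentOffset[0] - img.getWidth() / 2f);
        img.setY(event.getRawY() - parentOffset[1] - img.getHeight() / 2f);
        return false;
    }
});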
In short, I want a preview similar to Google Play's, where you can slide up and the top image/video preview fades back while the rest of the view moves up.
I've got a view that I want to be hidden like the video in that example; the action bar can remain stationary at all times, but I need the bottom part to be draggable.
I can't seem to find out which layout that is or how it is done; all I managed to find were unrelated things such as ViewPager. My current min SDK version is 18, and the compile version is 21.
The following is the code I used in the app I am working on.
You will have to use the onScrollChanged function in your ScrollView. The ActionBar doesn't let you set its opacity, so set a background drawable on the ActionBar and change that drawable's opacity based on the amount of scroll in the ScrollView. I have given an example workflow below.
The function gives the appropriate alpha for the locationImage view based on its position with respect to the window.
this.getScrollY() gives you how much the ScrollView has scrolled.
@Override
protected void onScrollChanged(int l, int t, int oldl, int oldt) {
    // Code ...
    locationImage.setAlpha(getAlphaForView(locationImageInitialLocation - this.getScrollY()));
}

private float getAlphaForView(int position) {
    int diff = 0;
    float minAlpha = 0.4f, maxAlpha = 1f;
    float alpha = minAlpha; // minimum alpha
    if (position > screenHeight)
        alpha = minAlpha;
    else if (position + locationImageHeight < screenHeight)
        alpha = maxAlpha;
    else {
        // interpolate between minAlpha and maxAlpha as the view scrolls into view
        diff = screenHeight - position;
        alpha += ((diff * 1f) / locationImageHeight) * (maxAlpha - minAlpha);
    }
    // System.out.println(alpha + " " + screenHeight + " " + locationImageInitialLocation + " " + position + " " + diff);
    return alpha;
}
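The ActionBar half of the idea, fading a background drawable rather than the ActionBar itself, could look roughly like the sketch below. R.drawable.ab_background is an assumed resource name, the 0-255 scaling is my own addition, and passing the drawable reference from the Activity to wherever onScrollChanged lives is left to the reader:
// Somewhere in the Activity, before scrolling starts: install a drawable that can be faded.
Drawable actionBarBackground = getResources().getDrawable(R.drawable.ab_background);
getActionBar().setBackgroundDrawable(actionBarBackground);

// Then, inside onScrollChanged(), next to the locationImage fade:
float actionBarAlpha = getAlphaForView(locationImageInitialLocation - this.getScrollY());
actionBarBackground.setAlpha((int) (actionBarAlpha * 255)); // drawable alpha is 0-255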
You can download a working example at https://github.com/ramanadv/fadingActionBar
Credit: CommandSpace
I have a TextView with an OnTouchListener. What I want is the character index the user is pointing to when I get the MotionEvent. Is there any way to get to the underlying font metrics of the TextView?
Have you tried something like this:
Layout layout = this.getLayout();
if (layout != null)
{
    int line = layout.getLineForVertical(y);
    int offset = layout.getOffsetForHorizontal(line, x);
    // At this point, "offset" should be what you want - the character index
}
Hope this helps...
I am not aware of a simple, direct way to do this, but you should be able to put something together using the Paint object of the TextView via a call to TextView.getPaint().
Once you have the Paint object, you will have access to the underlying FontMetrics via a call to Paint.getFontMetrics(), and to other functions like Paint.measureText(), Paint.getTextBounds(), and Paint.getTextWidths() for measuring the actual size of the displayed text.
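A rough sketch of that Paint-based approach for a single-line TextView follows; the loop, the variable names, and the padding/scroll handling are my own assumptions rather than a tested solution (x is the touch x coordinate relative to the TextView):
// Walk the per-character advances until the touch x falls inside one of them.
Paint paint = textView.getPaint();
String text = textView.getText().toString();
float[] widths = new float[text.length()];
paint.getTextWidths(text, widths);

float pos = textView.getTotalPaddingLeft() - textView.getScrollX();
int index = -1; // -1 means the touch was not over any character
for (int i = 0; i < widths.length; i++) {
    if (x >= pos && x < pos + widths[i]) {
        index = i; // character under the pointer
        break;
    }
    pos += widths[i];
}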
While it generally works, I had a few problems with the answer from Tony Blues.
Firstly, getOffsetForHorizontal returns an offset even if the x coordinate is way beyond the last character of the line.
Secondly, the returned character offset sometimes belongs to the next character, not the character directly underneath the pointer. Apparently the method returns the offset of the nearest cursor position, which may be to the left or to the right of the character depending on what's closer.
My solution uses getPrimaryHorizontal instead to determine the cursor position of a given offset, and uses binary search to find the offset underneath the pointer's x coordinate.
public static int getCharacterOffset(TextView textView, int x, int y) {
    x += textView.getScrollX() - textView.getTotalPaddingLeft();
    y += textView.getScrollY() - textView.getTotalPaddingTop();

    final Layout layout = textView.getLayout();
    final int lineCount = layout.getLineCount();
    if (lineCount == 0 || y < layout.getLineTop(0) || y >= layout.getLineBottom(lineCount - 1))
        return -1;

    final int line = layout.getLineForVertical(y);
    if (x < layout.getLineLeft(line) || x >= layout.getLineRight(line))
        return -1;

    int start = layout.getLineStart(line);
    int end = layout.getLineEnd(line);
    while (end > start + 1) {
        int middle = start + (end - start) / 2;
        if (x >= layout.getPrimaryHorizontal(middle)) {
            start = middle;
        } else {
            end = middle;
        }
    }
    return start;
}
Edit: This updated version works better with unnatural line breaks, when a long word does not fit on a line and gets split somewhere in the middle.
Caveats: In hyphenated texts, clicking on the hyphen at the end of a line returns the index of the character next to it. Also, this method does not work well with RTL texts.
I want to ask for some ideas / study materials related to binarization. I am trying to create a system that detects human emotions. I am able to get areas such as brows, eyes, nose, mouth etc., but then comes another stage -> processing...
My images are taken in various places, at various times of day and in various weather conditions. That is problematic during binarization: with the same threshold value, some images come out fully black while others look fine and give me the information I want.
What I want to ask you about is:
1) Is there a known way to bring all images to the same level of brightness?
2) How can I make the threshold value depend on the brightness of the image?
What I have tried so far is normalizing the image... but it has no effect; maybe I'm doing something wrong. I'm using OpenCV (for Android).
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
EDIT:
I tried adaptive thresholding and OTSU - they didn't work for me. I have problems using CLAHE on Android, but I managed to implement the Niblack algorithm.
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
nibelBlackTresholding(cleanFaceMatGRAY, -0.2);
private void nibelBlackTresholding(Mat image, double parameter) {
    Mat meanPowered = image.clone();
    Core.multiply(image, image, meanPowered);

    Scalar mean = Core.mean(image);
    Scalar stdmean = Core.mean(meanPowered);

    double tresholdValue = mean.val[0] + parameter * stdmean.val[0];

    int totalRows = image.rows();
    int totalCols = image.cols();

    for (int cols = 0; cols < totalCols; cols++) {
        for (int rows = 0; rows < totalRows; rows++) {
            if (image.get(rows, cols)[0] > tresholdValue) {
                image.put(rows, cols, 255);
            } else {
                image.put(rows, cols, 0);
            }
        }
    }
}
The results are really good, but still not good enough for some images. I'm pasting links because the images are big and I don't want to take up too much screen space:
For example, this one is thresholded really well:
https://dl.dropboxusercontent.com/u/108321090/a1.png
https://dl.dropboxusercontent.com/u/108321090/a.png
But bad lighting sometimes produces shadows, which gives this effect:
https://dl.dropboxusercontent.com/u/108321090/b1.png
https://dl.dropboxusercontent.com/u/108321090/b.png
Do you have any ideas that could help me improve the thresholding of images with large lighting differences (shadows)?
EDIT2:
I found that my previous algorithm was implemented in the wrong way: the standard deviation was calculated incorrectly, and in Niblack thresholding the mean is a local value, not a global one. I repaired it according to this reference: http://arxiv.org/ftp/arxiv/papers/1201/1201.5227.pdf
private void niblackThresholding2(Mat image, double parameter, int window) {
    int totalRows = image.rows();
    int totalCols = image.cols();
    int offset = (window - 1) / 2;
    double tresholdValue = 0;
    double localMean = 0;
    double meanDeviation = 0;

    // y indexes rows and x indexes columns (Mat.get/put take the row first)
    for (int y = offset + 1; y < totalRows - offset; y++) {
        for (int x = offset + 1; x < totalCols - offset; x++) {
            localMean = calculateLocalMean(x, y, image, window);
            meanDeviation = image.get(y, x)[0] - localMean;
            tresholdValue = localMean * (1 + parameter * ((meanDeviation / (1 - meanDeviation)) - 1));
            Log.d("QWERTY", "TRESHOLD " + tresholdValue);
            if (image.get(y, x)[0] > tresholdValue) {
                image.put(y, x, 255);
            } else {
                image.put(y, x, 0);
            }
        }
    }
}

private double calculateLocalMean(int x, int y, Mat image, int window) {
    int offset = (window - 1) / 2;
    Mat tempMat;
    Rect tempRect = new Rect();
    Point leftTop, bottomRight;

    // window x window neighbourhood around (x, y); Point takes (column, row)
    leftTop = new Point(x - (offset + 1), y - (offset + 1));
    bottomRight = new Point(x + offset, y + offset);
    tempRect = new Rect(leftTop, bottomRight);
    tempMat = new Mat(image, tempRect);

    return Core.mean(tempMat).val[0];
}
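For reference, a call matching the parameters mentioned below might look like this (the Mat name is carried over from the earlier snippet):
niblackThresholding2(cleanFaceMatGRAY, 0.34, 7); // 7x7 window, k = 0.34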
Results for a 7x7 window and the k parameter proposed in the reference (k = 0.34). I still can't get rid of the shadow on the faces.
https://dl.dropboxusercontent.com/u/108321090/b2.png
https://dl.dropboxusercontent.com/u/108321090/b1.png
things to look at:
http://docs.opencv.org/java/org/opencv/imgproc/CLAHE.html
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#adaptiveThreshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20int,%20int,%20int,%20double)
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#threshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20double,%20int) (THRESH_OTSU)
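A minimal sketch of how those two suggestions fit together in the OpenCV Java API; the Mat names and parameter values here are assumptions to start from, not tested settings:
// imports: org.opencv.core.Mat, org.opencv.core.Size, org.opencv.imgproc.CLAHE, org.opencv.imgproc.Imgproc
Mat gray = cleanFaceMatGRAY;   // assumed single-channel CV_8U input
Mat equalized = new Mat();
Mat binary = new Mat();

// Equalize local contrast first, so shadowed regions get comparable brightness.
CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8)); // clip limit, tile grid size
clahe.apply(gray, equalized);

// Then binarize with a locally computed threshold (block size 15, constant C = 5 are starting values).
Imgproc.adaptiveThreshold(equalized, binary, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, 5);

// Or a global Otsu threshold on the equalized image:
// Imgproc.threshold(equalized, binary, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);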
I have made a class that extends View, and this is the onDraw method. The program creates a maze, and to get an accurate reading of the height and width I apparently needed to read them in the onDraw method (otherwise both would just return 0). This may be what is screwing everything up. However, it gets the correct height based on the spacing of the visible squares in the section of the view that is painted.
The section of the view that appears to be unpainted is about the size of the context menu and does not line up with the squares. I have looked for other people having this problem, and as best I can tell nobody else is, even though I am not doing anything particularly different from them. If there is any other insight I can provide, please let me know.
I can't post pictures yet because I'm new at this whole Stack Overflow thing =(
So I have tried to describe the phenomenon as best I can.
Thanks!
@Override
public void onDraw(Canvas canvas) {
    if (firstRun) {
        width = getMeasuredWidth();
        height = getMeasuredHeight();
        MazeMake();
        invalidate();
    } else {
        for (int i = 0; i < c; i++)
            for (int j = 0; j < r; j++) {
                grid[i][j].paintSq(canvas);
            }
    }
}

@Override
protected void onMeasure(int wMeasureSpec, int hMeasureSpec) {
    int measuredHeight = measure(hMeasureSpec);
    int measuredWidth = measure(wMeasureSpec);
    setMeasuredDimension(measuredHeight, measuredWidth);
}

private int measure(int measureSpec) {
    int specMode = MeasureSpec.getMode(measureSpec);
    int specSize = MeasureSpec.getSize(measureSpec);
    if (specMode == MeasureSpec.UNSPECIFIED)
        return 500;
    else {
        return specSize;
    }
}
Maybe override the onMeasure(int widthMeasureSpec, int heightMeasureSpec) method, and don't forget to call setMeasuredDimension(int, int) as described in the documentation:
http://developer.android.com/reference/android/view/View.html#onDraw(android.graphics.Canvas)
I got it. As it turns out, I had managed to switch the width and height in the setMeasuredDimension(int, int) call. I thought I had done it correctly, but after hours of pulling my hair out, I determined that this was not, in fact, the case.
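For anyone hitting the same wall, the corrected call simply passes the arguments in the documented (width, height) order:
setMeasuredDimension(measuredWidth, measuredHeight); // width first, then height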