I'm using the getOptimalPreviewSize() method in surfaceChanged(). After calling it, when I click the take-picture button the preview size reverts to the first supported size and stays there.
It should keep the same preview size. Why does it change?
Here is the code:
PictureCallback myPictureCallback_JPG = new PictureCallback(){
@Override
public void onPictureTaken(byte[] data, Camera camera) {
File pictureFile = getOutputMediaFile();
if (pictureFile == null){
return;
}
try {
FileOutputStream fos = new FileOutputStream(pictureFile);
fos.write(data);
fos.close();
} catch (FileNotFoundException e) {
Log.d("Method.PictureCallBack", "File not found: " + e.getMessage());
} catch (IOException e) {
Log.d("Method.PictureCallBack", "Error accessing file: " + e.getMessage());
}
camera.startPreview();
}
};
ShutterCallback myShutterCallback = new ShutterCallback(){
@Override
public void onShutter() {}
};
PictureCallback myPictureCallback_RAW = new PictureCallback(){
@Override
public void onPictureTaken(byte[] arg0, Camera arg1) {}
};
public void takePicture() {
mCamera.takePicture(myShutterCallback, myPictureCallback_RAW, myPictureCallback_JPG);
}
Preview size and picture size are different things. Use getPreviewSize() to get the dimensions of the preview shown on screen, and getPictureSize() to get the exact dimensions of the picture that will be taken when you call takePicture().
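For example, a quick sketch (assuming an already-opened Camera instance called mCamera) that logs both sizes:
Camera.Parameters params = mCamera.getParameters();
Camera.Size previewSize = params.getPreviewSize();   // what is drawn on screen
Camera.Size pictureSize = params.getPictureSize();   // what takePicture() will produce
Log.d("CameraSizes", "preview=" + previewSize.width + "x" + previewSize.height
        + "  picture=" + pictureSize.width + "x" + pictureSize.height);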
I know this may be a late answer, but anyways, I wanted to share my solution to this problem.
All you need to do in order to solve this problem is the following:
Get all supported picture sizes using the Camera.Parameters.getSupportedPictureSizes() method.
Find the most acceptable sizes and store them in a temporary container (an ArrayList will work just fine).
Find the size you want to use and actually give the camera that size through Camera.Parameters.setPictureSize(width, height).
This is not really a fourth step; it's rather an addition to the third one.
In most cases your camera will have the wrong orientation set by default, and so will the image if it isn't adjusted beforehand.
So I would strongly suggest using Camera.Parameters.setRotation(angle)
to change the picture orientation, and Camera.setDisplayOrientation(angle) to change the preview orientation (see the short sketch right below).
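Something like this (untested; the 90 here is just an example for a portrait activity with a typical back camera, in real code derive the angle from Camera.CameraInfo):
Camera.Parameters params = mCamera.getParameters();
params.setRotation(90);              // rotates the captured JPEG, or some drivers only write the EXIF orientation
mCamera.setParameters(params);
mCamera.setDisplayOrientation(90);   // rotates only the on-screen preview, not the saved picture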
And here is the code that picks the picture size:
public static final float CAMERA_SIZE_RATIO_CALIBRATION = 0.1f;
private Camera.Size findTheBestPictureSize() {
List<Camera.Size> supportedSizes = mCamera.getParameters().getSupportedPictureSizes();
ArrayList<Camera.Size> acceptableSizes = new ArrayList<Camera.Size>();
Camera.Size foundSize = null;
Camera.Size tmpSize = null;
float desiredRatio = (mDisplaySize[0] * 1f / mDisplaySize[1]);
float calculatedRatio;
float deltaRatio = 0f;
//Looking for the most acceptable sizes
int itemCount = supportedSizes.size();
for(int i = 0; i < itemCount; i++) {
tmpSize = supportedSizes.get(i);
calculatedRatio = (shouldSizeDimensionsBeFlipped ? (tmpSize.height * 1f / tmpSize.width)
: (tmpSize.width * 1f / tmpSize.height));
deltaRatio = Math.abs(calculatedRatio - desiredRatio);
if(deltaRatio <= CAMERA_SIZE_RATIO_CALIBRATION) {
acceptableSizes.add(tmpSize);
}
}
//Looking for the greatest acceptable size
itemCount = acceptableSizes.size();
for(int i = 0; i < itemCount; i++) {
tmpSize = acceptableSizes.get(i);
if(foundSize == null) {
foundSize = tmpSize;
continue;
}
if(tmpSize.width > foundSize.width && tmpSize.height > foundSize.height) {
foundSize = tmpSize;
}
}
return foundSize;
}
Here shouldSizeDimensionsBeFlipped is a boolean that determines whether the sizes fetched from the camera should be treated as flipped (width taken as height, and height as width). You will only need to do this if the default camera orientation is anything other than 0 or 270.
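For reference, a small sketch (assuming camera id 0 is the back camera) that reads the sensor orientation this flag can be derived from:
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(0, info);            // 0 assumed to be the back camera
int sensorOrientation = info.orientation; // 0, 90, 180 or 270
// decide shouldSizeDimensionsBeFlipped from this value and your activity orientation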
I hope this is going to help someone who's having the same problem :)
I'm getting quite horrible performance when trying to get the pixels of the camera preview.
The image size is around 600x900.
The preview rate is a fairly stable 30 fps on my HTC One.
As soon as I try to get the pixels of the image, the frame rate drops below 5!
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) {
Bitmap bmp = mTextureView.getBitmap();
int width = bmp.getWidth();
int height = bmp.getHeight();
int[] pixels = new int[bmp.getHeight() * bmp.getWidth()];
bmp.getPixels(pixels, 0, width, 0, 0, width, height);
}
The performance is so slow, it's not really bearable.
Now my only 'easy' solution is to skip frames to at least keep some visual performance.
But I'd actually like to have that code perform faster.
I would appreciate any ideas and suggestions, maybe someone solved this already ?
UPDATE
getbitmap: 188.341ms
array: 122ms
getPixels: 12.330ms
recycle: 152ms
It takes 190 milliseconds just to get the bitmap! That's the problem.
I dug into this for several hours.
So short answer: I found no way to avoid getBitmap() and increase performance.
The function is known to be slow, I found many similar questions and no results.
However I found another solution which is about 3 times faster and solves the problem for me.
I keep using the TextureView approach, because it gives more freedom in how the camera preview is displayed (for example, I can show the live camera preview in a small window with my own aspect ratio without distortion).
But to work with the image data I no longer use onSurfaceTextureUpdated().
I registered a camera preview-frame callback, which gives me the pixel data I need.
So no getBitmap anymore, and a lot more speed.
Fast, new code:
Camera.PreviewCallback preview = new Camera.PreviewCallback()
{
public void onPreviewFrame(byte[] data, Camera camera)
{
// data is the raw preview frame (NV21 by default), so no Bitmap is needed
Camera.Parameters parameters = camera.getParameters();
Camera.Size size = parameters.getPreviewSize();
// Image comes from the scanning library used here, not from the Android SDK
Image img = new Image(size.width, size.height, "Y800");
}
};
myCamera.setPreviewCallback(preview);
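As a side note: if allocating a new byte[] for every frame causes GC pressure, there is also setPreviewCallbackWithBuffer(), which reuses a buffer you hand back after each frame. A rough sketch:
Camera.Parameters p = myCamera.getParameters();
Camera.Size s = p.getPreviewSize();
// NV21 uses 12 bits per pixel, so one frame needs width*height*3/2 bytes
byte[] buffer = new byte[s.width * s.height * 3 / 2];
myCamera.addCallbackBuffer(buffer);
myCamera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        // ... process data ...
        camera.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});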
Slow:
private int[] surface_pixels=null;
private int surface_width=0;
private int surface_height=0;
@Override
public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture)
{
int width,height;
Bitmap bmp= mTextureView.getBitmap();
height=bmp.getHeight();
width=bmp.getWidth();
if (surface_pixels == null)
{
surface_pixels = new int[height * width];
} else
{
if ((width != surface_width) || (height != surface_height))
{
surface_pixels = null;
surface_pixels = new int[height * width];
}
}
if ((width != surface_width) || (height != surface_height))
{
surface_height = bmp.getHeight();
surface_width = bmp.getWidth();
}
bmp.getPixels(surface_pixels, 0, width, 0, 0, width, height);
bmp.recycle();
Image img = new Image(width, height, "RGB4");
}
I hope this helps some people with the same problem.
If someone should find a way to create a bitmap in a fast manner within onSurfaceTextureUpdated please respond with a code sample.
Please try:
public void onSurfaceTextureUpdated(SurfaceTexture surface) {
if (IsBusy) {
return;
}
IsBusy = true;
DoBigWork();
IsBusy = false;
}
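One caveat: if the heavy work runs on another thread, the busy flag should be an AtomicBoolean (or at least volatile) rather than a plain field. A rough sketch, keeping DoBigWork() as the placeholder for the processing:
// import java.util.concurrent.atomic.AtomicBoolean;
private final AtomicBoolean isBusy = new AtomicBoolean(false);

public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    if (!isBusy.compareAndSet(false, true)) {
        return; // a frame is still being processed, skip this one
    }
    new Thread(new Runnable() {
        public void run() {
            DoBigWork();       // the heavy processing from above
            isBusy.set(false); // allow the next frame through
        }
    }).start();
}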
I have found that when using a TextureView instead of a SurfaceView as a camera preview (both hooked up to the camera via a MediaRecorder), the preview is much fuzzier.
What I mean by fuzzy is that in a TextureView you can see the individual pixels, especially when zooming; that is not the case with a SurfaceView. Why is that?
UPD:
Sorry, but after rewriting my code I found that the key issue was a preview size that was too small, which caused the "fuzziness". So you should set a reasonable preview size; it was not the struck-out reason below, although auto-focus is still suggested...
Size size = getBestSupportSize(parameters.getSupportedPreviewSizes(), width, height);
parameters.setPreviewSize(size.width, size.height);
As for the method getBestSupportSize(), how to get the best size depends on your project's needs. In this case it is as close to the screen width as possible with a 4:3 ratio; yours may be different. I calculate the ratio by dividing width by height.
private Size getBestSupportSize(List<Size> sizes, int width, int height) {
Size bestsize = sizes.get(0);
int screenWidth = getResources().getDisplayMetrics().widthPixels;
int dt = Integer.MAX_VALUE;
for (int i = sizes.size() - 1; i >= 0; i--) {
Log.d(TAG, "-index : " + i);
Size s = sizes.get(i);
if (s.width * 3.0f / 4 == s.height) {
int newDT = Math.abs(screenWidth - s.width);
if (newDT < dt && screenWidth < s.width) {
dt = newDT;
bestsize = s;
}
}
}
return bestsize;//note that if no "4/3" size supported,default return size[0]
}
So this "fuzziness" was caused by a small previewSize calcualate a best size for the camera using this getSupportedPreviewSizes() method
And I will keep the autoFocus snippet below, strikeout though, FYR if is needed.
Well i got the solution for this "fuzzy" problem,and my case is just using TextureView andsurfaceTexture to take a pic instead of old surfaceView withsurfaceHolderway.
The key is set this mCamera.autofocus(), why the pic is"fuzzy" is bacause we lack of this autoFocus setting.
like below :
mCamera.setPreviewTexture(surface);
//enable autoFocus if moving
mCamera.setAutoFocusMoveCallback(new AutoFocusMoveCallback() {
@Override
public void onAutoFocusMoving(boolean start, Camera camera) {
if (start) { //true means you are moving the camera
mCamera.autoFocus(myAutoFocus);
}
}
});
mCamera.startPreview();
The autoFocusCallback like this:
AutoFocusCallback myAutoFocus = new AutoFocusCallback() {
@Override
public void onAutoFocus(boolean success, Camera camera) {
}
};
I have built a module similar to the Vine app's video recording, but I am not able to set the video size to 480x480 px. Is there any way to do that? Thanks.
The Android camera only offers a limited list of available sizes, so we need to select the best camera size and then take a 480x480 sub-image from the original camera image.
For example, on my HTC One M8 the camera offers these sizes:
1920x1088
1920x1080
1808x1080
....
720x480
640x360
640x480
576x432
480x320
384x288
352x288
320x240
240x160
176x144
You can retrieve the list of available sizes using the getSupportedPreviewSizes() method.
public Camera mCamera;//Your camera instance
public List<Camera.Size> cameraSizes;
private final int CAMERA_IMAGE_WIDTH = 480;
private final int CAMERA_IMAGE_HEIGHT = 480;
...
cameraSizes = mCamera.getParameters().getSupportedPreviewSizes();
After that, you need to find the most suitable camera size and set it as the camera's preview size.
Camera.Size findBestCameraSize(int width, int height){
Camera.Size bestSize = cameraSizes.get(0);
int minimalArea = bestSize.height * bestSize.width;
for(int i = 1;i < cameraSizes.size();i++){
Camera.Size size = cameraSizes.get(i);
int area = size.height * size.width;
if(size.width < width || size.height < height){
continue;
}
if(area < minimalArea){
bestSize = size;
minimalArea = area;
}
}
return bestSize;
}
...
SurfaceHolder.Callback surfaceCallback = new SurfaceHolder.Callback() {
public void surfaceCreated(SurfaceHolder holder) {
//Do something
}
public void surfaceChanged(SurfaceHolder holder,
int format, int width,
int height) {
Camera.Parameters params = mCamera.getParameters();
Camera.Size size = findBestCameraSize(CAMERA_IMAGE_WIDTH, CAMERA_IMAGE_HEIGHT);
params.setPreviewSize(size.width, size.height);
mCamera.setParameters(params);
if(mCamera != null){
mCamera.startPreview();
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
// Do something
}
};
After setting up the camera size, we need to get the sub-image from the resulting bitmap from the camera. Put this code where you receive the bitmap picture (usually I use the OpenCV library and matrices for better performance).
Bitmap imageFromCamera = //here we receive the image from the camera.
Camera.Size size = mCamera.getParameters().getPreviewSize();
int x = (size.width - CAMERA_IMAGE_WIDTH)/2;
int y = (size.height - CAMERA_IMAGE_HEIGHT)/2;
Bitmap resultBitmap = null;
if(x < 0 || y < 0){
resultBitmap = imageFromCamera;
}else{
resultBitmap = Bitmap.createBitmap(imageFromCamera, x, y, CAMERA_IMAGE_WIDTH, CAMERA_IMAGE_HEIGHT);
}
The video capture resolutions on Android are limited to the native resolutions supported by the camera.
You can try using a 3rd party library for video post processing. So you can crop or re-scale the video captured by the camera.
I am using this one
android-gpuimage-videorecording
and it works quite well.
Background
I need to rotate images taken by the camera so that they will always have a normal orientation.
For this, I use the following code (I used this post to get the image orientation):
//<= get the angle of the image , and decode the image from the file
final Matrix matrix = new Matrix();
//<= prepare the matrix based on the EXIF data (based on https://gist.github.com/9re/1990019 )
final Bitmap rotatedBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(),matrix,false);
bitmap.recycle();
fileOutputStream = new FileOutputStream(tempFilePath);
rotatedBitmap.compress(CompressFormat.JPEG, 100, fileOutputStream);
rotatedBitmap.recycle();
here the compression rate (AKA "quality" parameter) is 100.
The problem
The code works fine, but the result is larger than the original, much much larger.
The original file is around 600-700 KB, while the resulting file is around 3MB ...
This is even though both the input file and the output file are of the same format (JPEG).
The camera settings are at "super fine" quality; I'm not sure exactly what that means, but I think it has something to do with the compression ratio.
What I've tried
I've tried setting the "filter" parameter to both false and true; both resulted in large files.
Even without the rotation itself (just decode and encode), I get much larger file sizes...
Only when I set the compression ratio to around 85 do I get similar file sizes, but I wonder how the quality is affected compared to the original files.
The question
Why does it occur?
Is there a way to get the exact same size and quality as the input file?
Will using the same compression rate as the original file make it happen? Is it even possible to get the compression rate of the original file?
What does it mean to have a 100% compression rate?
EDIT: I've found this link talking about rotating JPEG files without losing quality or changing file size, but is there a solution for it on Android?
Here's another link that says it's possible, but I couldn't find any Android library that rotates JPEG files without losing quality.
I tried two methods, but I found both take too long in my case. I'll still share what I used.
Method 1: LLJTran for Android
Get the LLJTran from here:
https://github.com/bkhall/AndroidMediaUtil
The code:
public static boolean rotateJpegFileBaseOnExifWithLLJTran(File imageFile, File outFile){
try {
int operation = 0;
int degree = getExifRotateDegree(imageFile.getAbsolutePath());
//int degree = 90;
switch(degree){
case 90:operation = LLJTran.ROT_90;break;
case 180:operation = LLJTran.ROT_180;break;
case 270:operation = LLJTran.ROT_270;break;
}
if (operation == 0){
Log.d(TAG, "Image orientation is already correct");
return false;
}
OutputStream output = null;
LLJTran llj = null;
try {
// Transform image
llj = new LLJTran(imageFile);
llj.read(LLJTran.READ_ALL, false); //don't know why setting second param to true will throw exception...
llj.transform(operation, LLJTran.OPT_DEFAULTS
| LLJTran.OPT_XFORM_ORIENTATION);
// write out file
output = new BufferedOutputStream(new FileOutputStream(outFile));
llj.save(output, LLJTran.OPT_WRITE_ALL);
return true;
} catch(Exception e){
e.printStackTrace();
return false;
}finally {
if(output != null)output.close();
if(llj != null)llj.freeMemory();
}
} catch (Exception e) {
// Unable to rotate image based on EXIF data
e.printStackTrace();
return false;
}
}
public static int getExifRotateDegree(String imagePath){
try {
ExifInterface exif;
exif = new ExifInterface(imagePath);
String orientstring = exif.getAttribute(ExifInterface.TAG_ORIENTATION);
int orientation = orientstring != null ? Integer.parseInt(orientstring) : ExifInterface.ORIENTATION_NORMAL;
if(orientation == ExifInterface.ORIENTATION_ROTATE_90)
return 90;
if(orientation == ExifInterface.ORIENTATION_ROTATE_180)
return 180;
if(orientation == ExifInterface.ORIENTATION_ROTATE_270)
return 270;
} catch (IOException e) {
e.printStackTrace();
}
return 0;
}
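For completeness, a usage sketch for method 1 (the file paths here are just placeholders):
File original = new File("/sdcard/DCIM/Camera/IMG_0001.jpg");      // placeholder input path
File rotated  = new File("/sdcard/DCIM/Camera/IMG_0001_rot.jpg");  // placeholder output path
boolean ok = rotateJpegFileBaseOnExifWithLLJTran(original, rotated);
Log.d(TAG, "lossless rotate succeeded: " + ok);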
Method 2: Using libjpeg-turbo's jpegtran executable
1. Follow the steps described here:
https://stackoverflow.com/a/12296343/1099884
Except that you don't need obj/local/armeabi/libjpeg.a from the ndk-build, because I only want the jpegtran executable and don't want to deal with JNI and libjpeg.a.
2. Place the jpegtran executable in the assets folder.
The code:
public static boolean rotateJpegFileBaseOnExifWithJpegTran(Context context, File imageFile, File outFile){
try {
int operation = 0;
int degree = getExifRotateDegree(imageFile.getAbsolutePath());
//int degree = 90;
String exe = prepareJpegTranExe(context);
//chmod, otherwise permission denied
boolean ret = runCommand("chmod 777 "+exe);
if(ret == false){
Log.d(TAG, "chmod jpegTran failed");
return false;
}
//rotate the jpeg with jpegtran
ret = runCommand(exe+
" -rotate "+degree+" -outfile "+outFile.getAbsolutePath()+" "+imageFile.getAbsolutePath());
return ret;
} catch (Exception e) {
// Unable to rotate image based on EXIF data
e.printStackTrace();
return false;
}
}
public static String prepareJpegTranExe(Context context){
File exeDir = context.getDir("JpegTran", 0);
File exe = new File(exeDir, "jpegtran");
if(!exe.exists()){
try {
InputStream is = context.getAssets().open("jpegtran");
FileOutputStream os = new FileOutputStream(exe);
int bufferSize = 16384;
byte[] buffer = new byte[bufferSize];
int count;
while ((count=is.read(buffer, 0, bufferSize))!=-1) {
os.write(buffer, 0, count);
}
is.close();
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
return exe.getAbsolutePath();
}
public static boolean runCommand(String cmd){
try{
Process process = Runtime.getRuntime().exec(cmd);
BufferedReader reader = new BufferedReader(
new InputStreamReader(process.getInputStream()));
int read;
char[] buffer = new char[4096];
StringBuffer output = new StringBuffer();
while ((read = reader.read(buffer)) > 0) {
output.append(buffer, 0, read);
}
reader.close();
// Waits for the command to finish.
process.waitFor();
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}
Unfortunately, both take too long; it is 16 seconds on my Samsung Galaxy S1! But I found that this app (https://play.google.com/store/apps/details?id=com.lunohod.jpegtool) only takes 3-4 seconds, so there must be a faster way to do it.
Once you are done setting your best preview size, you now have to set the best picture size. Every phone supports different picture sizes, so to get the best picture quality you have to check the supported picture sizes and then set the best one on the camera parameters. You have to set those parameters in surfaceChanged() to get the width and height; surfaceChanged() is called at startup, and thus your new parameters will be set.
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
Camera.Parameters myParameters = camera.getParameters();
myPicSize = getBestPictureSize(width, height);
if (myBestSize != null && myPicSize != null) {
myParameters.setPictureSize(myPicSize.width, myPicSize.height);
myParameters.setJpegQuality(100);
camera.setParameters(myParameters);
Toast.makeText(getApplicationContext(),
"CHANGED:Best PICTURE SIZE:\n" +
String.valueOf(myPicSize.width) + " ::: " + String.valueOf(myPicSize.height),
Toast.LENGTH_LONG).show();
}
}
Now the getBestPictureSize ..
private Camera.Size getBestPictureSize(int width, int height)
{
Camera.Size result=null;
Camera.Parameters p = camera.getParameters();
for (Camera.Size size : p.getSupportedPictureSizes()) {
if (size.width>width || size.height>height) {
if (result==null) {
result=size;
} else {
int resultArea=result.width*result.height;
int newArea=size.width*size.height;
if (newArea>resultArea) {
result=size;
}
}
}
}
return result;
}
For rotation, try this..
final Matrix matrix = new Matrix();
matrix.setRotate(90);
final Bitmap rotatedBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(),matrix,false);
I am using PNG images and it is working fine.... for JPEG images, please check the above code.
A 100% quality setting is probably higher than the setting the files were originally saved with. This results in a larger file but (almost) the same image.
I'm not sure how to get exactly the same size; maybe just setting the quality to 85% will do (quick and dirty).
However, if you just want to rotate the picture in 90° steps, you could edit just the JPEG metadata without touching the pixel data itself.
I'm not sure how it's done on Android, but that is the general idea.
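On Android, one way to attempt this (untested sketch; it only helps with viewers that honor EXIF orientation, and the path is a placeholder) is to rewrite just the EXIF orientation tag with ExifInterface instead of re-encoding the pixels:
try {
    ExifInterface exif = new ExifInterface("/sdcard/DCIM/Camera/photo.jpg"); // placeholder path
    exif.setAttribute(ExifInterface.TAG_ORIENTATION,
            String.valueOf(ExifInterface.ORIENTATION_ROTATE_90));
    exif.saveAttributes(); // rewrites only the metadata, the pixel data stays untouched
} catch (IOException e) {
    e.printStackTrace();
}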
I'm making a line follower for my robot on Android (to learn Java/Android programming). Currently I'm facing an image-processing problem: the camera preview returns an image in a YUV format, which I want to threshold in order to know where the line is. How would one do that?
So far I've managed to get something working: I can read data from the camera preview and, by some miracle, even tell whether the light intensity is above or below a certain value at a certain area of the screen. My goal is to draw the robot's path on an overlay over the camera preview. That also works to some extent, but the problem is the YUV handling.
As you can see, not only is the dark area drawn sideways, it also repeats itself 4 times and the preview image is stretched; I cannot figure out how to fix these problems.
Here's the relevant part of code:
public void surfaceCreated(SurfaceHolder arg0) {
// TODO Auto-generated method stub
// camera setup
mCamera = Camera.open();
Camera.Parameters parameters = mCamera.getParameters();
List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
for(int i=0; i<sizes.size(); i++)
{
Log.i("CS", i+" - width: "+sizes.get(i).width+" height: "+sizes.get(i).height+" size: "+(sizes.get(i).width*sizes.get(i).height));
}
// change preview size
final Camera.Size cs = sizes.get(8);
parameters.setPreviewSize(cs.width, cs.height);
// initialize image data array
imgData = new int[cs.width*cs.height];
// make picture gray scale
parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
mCamera.setParameters(parameters);
// change display size
LayoutParams params = (LayoutParams) mSurfaceView.getLayoutParams();
params.height = (int) (mSurfaceView.getWidth()*cs.height/cs.width);
mSurfaceView.setLayoutParams(params);
LayoutParams overlayParams = (LayoutParams) swOverlay.getLayoutParams();
overlayParams.width = mSurfaceView.getWidth();
overlayParams.height = mSurfaceView.getHeight();
swOverlay.setLayoutParams(overlayParams);
try
{
mCamera.setPreviewDisplay(mSurfaceHolder);
mCamera.setDisplayOrientation(90);
mCamera.startPreview();
}
catch (IOException e)
{
e.printStackTrace();
mCamera.stopPreview();
mCamera.release();
}
// callback every time a new frame is available
mCamera.setPreviewCallback(new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera)
{
// create bitmap from camera preview
int pixel, pixVal, frameSize = cs.width*cs.height;
for(int i=0; i<frameSize; i++)
{
pixel = (0xff & ((int) data[i])) - 16;
if(pixel < threshold)
{
pixVal = 0;
}
else
{
pixVal = 1;
}
imgData[i] = pixVal;
}
int cp = imgData[(int) (cs.width*(0.5+(cs.height/2)))];
//Log.i("CAMERA", "Center pixel RGB: "+cp);
debug.setText("Center pixel: "+cp);
// process preview image data
Paint paint = new Paint();
paint.setColor(Color.YELLOW);
int start, finish, last;
start = finish = last = -1;
float x_ratio = mSurfaceView.getWidth()/cs.width;
float y_ratio = mSurfaceView.getHeight()/cs.height;
// display calculated path on overlay using canvas
Canvas overlayCanvas = overlayHolder.lockCanvas();
overlayCanvas.drawColor(0, Mode.CLEAR);
// start by finding the tape from bottom of the screen
for(int y=cs.height; y>0; y--)
{
for(int x=0; x<cs.width; x++)
{
pixel = imgData[y*cs.height+x];
if(pixel == 1 && last == 0 && start == -1)
{
start = x;
}
else if(pixel == 0 && last == 1 && finish == -1)
{
finish = x;
break;
}
last = pixel;
}
//overlayCanvas.drawLine(start*x_ratio, y*y_ratio, finish*x_ratio, y*y_ratio, paint);
//start = finish = last = -1;
}
overlayHolder.unlockCanvasAndPost(overlayCanvas);
}
});
}
This code generates an error sometimes when quitting the application due to some method being called after release, which is the least of my problems.
UPDATE:
Now that the orientation problem is fixed (CCD sensor orientation), I'm still facing the repetition problem; this is probably related to my YUV data handling...
Your surface and camera management looks correct, but I would double-check that the camera actually accepted the preview size settings (some camera implementations silently reject certain settings).
As you are working in portrait mode, keep in mind that the camera does not care about phone orientation: its coordinate origin is determined by the CCD chip, always in the right corner, with a scan direction from top to bottom and right to left, quite different from your overlay canvas. (If you are in landscape mode, everything lines up ;) ) This is certainly the source of the odd drawing result.
Your thresholding is a bit naive and not very useful in real life; I would suggest adaptive thresholding. In our javaocr project (pure Java, also with Android demos) we implemented efficient Sauvola binarisation (see the demos):
http://sourceforge.net/projects/javaocr/
The binarisation performance can be improved by working only on single image rows (patches welcome).
The issue with the UV part of the image is easy: the default preview format is NV21, the luminance plane comes first,
and it is just a byte stream; you do not need the UV part of the image at all (look at the demos above).
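To illustrate that last point, a rough sketch (assuming the documented NV21 default): only the first width*height bytes of the preview frame are used, and the interleaved VU bytes that follow are ignored.
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size s = camera.getParameters().getPreviewSize();
    int w = s.width, h = s.height;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int luma = data[y * w + x] & 0xff; // Y plane only; note the row stride is the preview width
            // threshold luma here
        }
    }
}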