YUV_NV21_TO_RGB not working? - android

I am starting to develop an app which monitors the camera preview, does some image processing on it, and displays the result on a canvas. Just as a diagnostic I have the following code:
camera = Camera.open();
ImageFormat imf = new ImageFormat();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);
preview_format = param.getPreviewFormat();
Camera.Size sz = param.getPreviewSize();
myimage = new int[sz.width*sz.height];
At run time it reports that preview_format is 17 which I understand is "NV21".
Later I have:
camera.setPreviewCallback(new PreviewCallback()
{
public void onPreviewFrame(byte[] _data, Camera _camera)
{
YUV_NV21_TO_RGB(myimage , _data, 128, 128) ;
}
});
The function YUV_NV21_TO_RGB was taken from here.
Meanwhile in another thread I have:
canvas.drawBitmap(
myimage, // the int array
0, // where to start in the array
128, // the stride ???
200, // x coord of where to display
200, // y coord of where to display
128, // wid
128, // ht
false, // alpha used?
null); // the paint used
The resulting image can be seen amongst other diagnostics in the square below. The stripes change as I move the phone around and appear to correspond, in some way, to what the camera is pointing at, but the image has clearly been mangled. I tried using an alternative function found here, and another from Wikipedia, but with seemingly identical results. Any ideas?
EDIT: One thought I had was that perhaps NV21 may not completely specify the format - maybe it's a class of formats, where you need to go on and specify the bits per pixel or similar.
EDIT: An extra clue - if I cover the camera completely, the square goes entirely pure green.

Your preview size is not 128 by 128 because you fail to set it. You set it on the Camera.Parameters instance but you don't apply it to the camera.
You need to add the following line:
camera.setParameters(param);
And it's probably safest to read the values back from the Camera instance's current parameters after applying them:
preview_format = camera.getParameters().getPreviewFormat();
Camera.Size sz = camera.getParameters().getPreviewSize();
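Putting it together, a minimal sketch of the corrected order (assuming the device actually offers a 128x128 preview size; in practice you should pick one of the sizes returned by getSupportedPreviewSizes()):
camera = Camera.open();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);        // only if 128x128 is in getSupportedPreviewSizes()
camera.setParameters(param);           // apply the change back to the camera
// Re-read the values the camera actually accepted.
preview_format = camera.getParameters().getPreviewFormat();
Camera.Size sz = camera.getParameters().getPreviewSize();
myimage = new int[sz.width * sz.height];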

Related

Android Video Processing - how to connect the ImageReader Surface to the preview?

I'm using Android's Camera2 API and would like to perform some image processing on camera preview frames and then display the changes back on the preview (TextureView).
Starting from the common camera2video example, I've set up an ImageReader in my openCamera().
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(),
mVideoSize.getHeight(), ImageFormat.YUV_420_888, mMaxBufferedImages);
mImageReader.setOnImageAvailableListener(mImageAvailable, mBackgroundHandler);
In my startPreview(), I've set up the Surfaces to receive frames from the CaptureRequest.
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<>();
// Here is where we connect the mPreviewSurface to the mTextureView.
mPreviewSurface = new Surface(texture);
surfaces.add(mPreviewSurface);
mPreviewBuilder.addTarget(mPreviewSurface);
// Connect our Image Reader to the Camera to get the preview frames.
Surface readerSurface = mImageReader.getSurface();
surfaces.add(readerSurface);
mPreviewBuilder.addTarget(readerSurface);
Then I'll modify the image data in the OnImageAvailableListener() callback.
ImageReader.OnImageAvailableListener mImageAvailable = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
try {
Image image = reader.acquireLatestImage();
if (image == null)
return;
final Image.Plane[] planes = image.getPlanes();
// Do something to the pixels.
// Black out part of the image.
ByteBuffer y_data_buffer = planes[0].getBuffer();
byte[] y_data = new byte[y_data_buffer.remaining()];
y_data_buffer.get(y_data);
byte y_value;
for (int row = 0; row < image.getHeight() / 2; row++) {
for (int col = 0; col < image.getWidth() / 2; col++) {
y_value = y_data[row * image.getWidth() + col];
y_value = 0;
y_data[row * image.getWidth() + col] = y_value;
}
}
image.close();
} catch (IllegalStateException e) {
Log.d(TAG, "mImageAvailable() Too many images acquired");
}
}
};
As I understand it, I am now sending images to two Surface instances: one for the mTextureView and the other for my ImageReader.
How can I get my mTextureView to use the same Surface as the ImageReader, or should I be manipulating the image data directly from the mTextureView's Surface?
Thanks
If you only want to display the modified output, then I'm not sure why you have two outputs configured (the TextureView and the ImageReader).
Generally, if you want something like
camera -> in-app edits -> display
You have several options, depending on the kinds of edits you want, and various tradeoffs between ease of coding, performance, and so on.
One of the most efficient options is to do your edits as an OpenGL shader.
In that case, a GLSurfaceView is probably the simplest option.
Create a SurfaceTexture object with a texture ID that's unused in the GLSurfaceView's EGL context, and pass a Surface created from the SurfaceTexture to the camera session and requests.
Then in the GLSurfaceView's drawing method, call the SurfaceTexture's updateTexImage() method, and then use the texture ID to render your output as you'd like.
That does require a lot of OpenGL code, so if you're not familiar with it, that can be challenging.
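For orientation, here is a rough sketch of just that wiring (the external-texture shader and the actual draw calls are omitted; mGLSurfaceView, mCameraTexId, mCameraTexture and mCameraSurface are made-up names):
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    int[] tex = new int[1];
    GLES20.glGenTextures(1, tex, 0);
    mCameraTexId = tex[0];
    mCameraTexture = new SurfaceTexture(mCameraTexId);
    mCameraTexture.setDefaultBufferSize(previewWidth, previewHeight);
    mCameraSurface = new Surface(mCameraTexture);   // pass this to the camera session/requests
    mCameraTexture.setOnFrameAvailableListener(st -> mGLSurfaceView.requestRender());
}

@Override
public void onDrawFrame(GL10 unused) {
    mCameraTexture.updateTexImage();                // latch the newest camera frame
    // ... bind mCameraTexId as GL_TEXTURE_EXTERNAL_OES and run your editing shader here ...
}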
You can also use RenderScript for a similar effect; there you'll have an output SurfaceView or TextureView, and then a RenderScript script that reads from an input Allocation from the Camera and writes to an output Allocation to the View; you can create such Allocations from a Surface.
The Google HdrViewfinderDemo camera2 sample app uses this approach. It's a lot less boilerplate.
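As a rough sketch of that approach, here is a condensed version that uses the stock ScriptIntrinsicYuvToRGB instead of a custom script (assumes API 19+; width/height is the preview size and mTextureView is the output view):
RenderScript rs = RenderScript.create(context);
ScriptIntrinsicYuvToRGB yuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

// Input Allocation backed by a Surface the camera can write YUV frames into.
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888).create();
Allocation input = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);
Surface cameraTarget = input.getSurface();   // add this to the CaptureRequest instead of the ImageReader

// Output Allocation that feeds the TextureView's Surface.
Type rgbType = new Type.Builder(rs, Element.RGBA_8888(rs))
        .setX(width).setY(height).create();
Allocation output = Allocation.createTyped(rs, rgbType,
        Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
output.setSurface(new Surface(mTextureView.getSurfaceTexture()));

input.setOnBufferAvailableListener(a -> {
    a.ioReceive();                 // latch the newest camera buffer
    yuvToRgb.setInput(a);
    yuvToRgb.forEach(output);      // or run your own .rs kernel here instead
    output.ioSend();               // push the result to the TextureView
});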
Third, you can just use an ImageReader like you're doing now, but you'll have to do a lot of conversion yourself to write it to the screen. The simplest (but slowest) option is to get a Canvas from a SurfaceView or a ImageView, and just write pixels to it one by one. Or you can do that via the ANativeWindow NDK, which is faster but requires writing JNI code and still requires you to do YUV->RGB conversions yourself (or use undocumented APIs to push YUV into the ANativeWindow and hope it works).
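For that simplest option, a minimal sketch of pushing self-converted pixels into a SurfaceView's canvas (the YUV-to-RGB conversion itself is up to you; holder, argb, width and height are assumed to come from your own code):
void drawFrame(SurfaceHolder holder, int[] argb, int width, int height) {
    // argb holds width*height packed ARGB pixels produced by your own YUV->RGB conversion
    Bitmap frame = Bitmap.createBitmap(argb, width, height, Bitmap.Config.ARGB_8888);
    Canvas canvas = holder.lockCanvas();
    if (canvas == null) return;                    // surface not ready yet
    canvas.drawBitmap(frame, 0, 0, null);
    holder.unlockCanvasAndPost(canvas);
}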

Android Camera Parameter setPictureSize causes streaked picture

I am trying to take a picture using the Android camera. I have a requirement to capture a 1600 (w) x 1200 (h) image (a 3rd party vendor requirement). My code seems to work fine for many phone cameras, but setPictureSize causes a crash on some phones (Samsung Galaxy S4, Samsung Galaxy Note) and a streaked picture on others (Nexus 7 tablet). On the Nexus, at least, the size I want does show up in the getSupportedPictureSizes list.
I have tried specifying the orientation but it didn't help. Taking the picture with the default picture size works fine.
Here is an example of the streaking:
For my image capture I have a requirement of 1600x1200, jpg, 30% compression, so I am capturing a JPG file.
I think I have three choices:
1) Figure out how to capture the 1600x1200 size without a crash or streaking, or
2) Figure out how to change the size of the default picture size to a JPG that is 1600x1200.
3) Something else that is currently unknown to me.
I have found some other postings that have similar issues, but not quite the same. I am on my second day of trying things and have not found a solution. Here is one posting that got close:
Camera picture to Bitmap results in messed up image (none of the suggestions helped me)
Here is the section of my code that worked fine until I ran into the S4/Note/Nexus 7. I have added a bit of debugging code for now:
Camera.Parameters parameters = mCamera.getParameters();
Camera.Size size = getBestPreviewSize(width, height, parameters);
if (size != null) {
int pictureWidth = 1600;
int pictureHeight = 1200;
// testing
Camera.Size test = parameters.getPictureSize();
List<Camera.Size> testSizes = parameters.getSupportedPictureSizes();
for ( int i = 0; i < testSizes.size(); i++ ) {
test = testSizes.get(i);
}
test = testSizes.get(3);
// get(3) is 1600 x 1200
pictureWidth = test.width;
pictureHeight = test.height;
parameters.setPictureFormat(ImageFormat.JPEG);
parameters.setPictureSize(pictureWidth, pictureHeight);
parameters.setJpegQuality(30);
parameters.setPreviewSize(size.width, size.height);
// catch any exception
try {
// make sure the preview is stopped
mCamera.stopPreview();
mCamera.setParameters(parameters);
didConfig = true;
} catch (Exception e) {
// some error presentation was removed for brevity
// since didConfig not set to TRUE it will fail gracefully
}
}
Here is the section of my code that saves the JPG file:
PictureCallback jpegCallback = new PictureCallback() {
public void onPictureTaken(byte[] data, Camera camera) {
if ( data.length > 0 ) {
String fileName = "image.jpg";
File file = new File(getFilesDir(), fileName);
String filePath = file.getAbsolutePath();
boolean goodWrite = false;
try {
OutputStream os = new FileOutputStream(file);
os.write(data);
os.close();
goodWrite = true;
} catch (IOException e) {
goodWrite = false;
}
if ( goodWrite ) {
// go on to the Preview
} else {
// TODO return an error to the calling activity
}
}
Log.d(TAG, "onPictureTaken - jpeg");
}
};
Any suggestions on how to correctly set up the camera parameters for taking photos or how to crop or resize the resulting photo would be great. Especially if it will work with older cameras (API level 8 or later)! Based on needing the full width of the picture I can only crop off the top.
Thanks!
EDIT: Here is what I ended up doing:
I started by processing the Camera.Parameters getSupportedPictureSizes list and used the first size whose height and width were both greater than my desired size AND that had the same width:height ratio (a sketch of that selection is below). I set the camera parameters to that picture size.
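Here is a sketch of that selection step (a hedged reconstruction rather than my exact code; pickCaptureSize is just an illustrative helper name):
private Camera.Size pickCaptureSize(Camera.Parameters parameters, int wantW, int wantH) {
    for (Camera.Size s : parameters.getSupportedPictureSizes()) {
        boolean bigEnough = s.width >= wantW && s.height >= wantH;
        boolean sameRatio = s.width * wantH == s.height * wantW;   // integer aspect-ratio check
        if (bigEnough && sameRatio) {
            return s;               // first supported size that can be scaled down to wantW x wantH
        }
    }
    return null;                    // caller falls back to the default picture size
}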
Then once the picture was taken:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPurgeable = true;
// convert the byte array to a bitmap, taking care to allow for garbage collection
Bitmap original = BitmapFactory.decodeByteArray(input, 0, input.length, options);
// resize the bitmap to my desired scale
Bitmap resized = Bitmap.createScaledBitmap(original, 1600, 1200, true);
// create a new byte array and output the bitmap to a compressed JPG
ByteArrayOutputStream blob = new ByteArrayOutputStream();
resized.compress(Bitmap.CompressFormat.JPEG, 30, blob);
// recycle the memory since bitmaps seem to have slightly different garbage collection
original.recycle();
resized.recycle();
byte[] desired = blob.toByteArray();
Then I write out the desired jpg to a file for upload.
test = testSizes.get(3);
// get(3) is 1600 x 1200
There is no requirement that the array have 4+ elements, let alone that the fourth element be 1600x1200.
1) Figure out how to capture the 1600x1200 size without a crash or streaking
There is no guarantee that every device is capable of taking a picture with that exact resolution. You cannot specify arbitrary values for the resolution -- it must be one of the supported picture sizes. Some devices support arbitrary values, while other devices will give you corrupted output (as is the case here) or will flat-out crash.
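In code, one defensive check along those lines (a sketch, using the 1600x1200 requirement from the question):
boolean supported = false;
for (Camera.Size s : parameters.getSupportedPictureSizes()) {
    if (s.width == 1600 && s.height == 1200) {
        supported = true;
        break;
    }
}
if (supported) {
    parameters.setPictureSize(1600, 1200);   // safe: the device advertises this size
} else {
    // fall back to a supported size and crop/scale the result afterwards
}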
2) Figure out how to change the size of the default picture size to a JPG that is 1600x1200
I am not aware that there is a "default picture size", and, beyond that, such a size will be immutable, since it is the default. Changing the picture size is your option #1 above.
3) Something else that is currently unknown to me.
For devices that support a resolution that is bigger on both axes, take a picture in that resolution, then crop to 1600x1200.
For all other devices, where one or both axes are smaller than desired, take a picture in whatever resolution suits you (largest, closest match to 4:3 aspect ratio, etc.), and then stretch/crop to get to 1600x1200.
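A sketch of that crop/scale step, assuming the captured JPEG has already been decoded into a Bitmap named captured that is at least as wide and tall as needed (the variable names are made up):
// Center-crop to a 4:3 region, then scale to exactly 1600x1200.
int cropW = captured.getWidth();
int cropH = cropW * 3 / 4;
if (cropH > captured.getHeight()) {
    cropH = captured.getHeight();
    cropW = cropH * 4 / 3;
}
int x = (captured.getWidth() - cropW) / 2;
int y = (captured.getHeight() - cropH) / 2;
Bitmap cropped = Bitmap.createBitmap(captured, x, y, cropW, cropH);
Bitmap result = Bitmap.createScaledBitmap(cropped, 1600, 1200, true);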

Manipulating android camera frame data in JNI

I'm trying to use the NDK to do some image processing. I am NOT using opencv.
I am fairly new to Android so I was doing this in steps. I started by writing a simple app that would let me capture video from the camera and display it to the screen. I have this done.
Then I tried to manipulate the camera data in native. However, onPreviewFrame uses a byte array to capture frame information. This is my code -
public void onPreviewFrame(byte[] arg0, Camera arg1)
{
if (imageFormat == ImageFormat.NV21)
{
if ( !bProcessing )
{
FrameData = arg0;
mHandler.post(callnative);
}
}
}
And the callnative runnable is like so -
private Runnable callnative = new Runnable()
{
public void run()
{
bProcessing = true;
String returnNative = callTorch(MainActivity.assetManager, PreviewSizeWidth, PreviewSizeHeight, FrameData, pixels);
bitmap.setPixels(pixels, 0, PreviewSizeWidth, 0, 0, PreviewSizeWidth, PreviewSizeHeight);
MycameraClass.setImageBitmap(bitmap);
bProcessing = false;
}
};
The problem is, I need to use FrameData in native code as the float datatype. However, it is in the form of a byte array. I wanted to know how the frame data is stored. Is it a 2-dimensional array of bytes? So the camera returns an 8-bit image and stores it as 640x480 bytes? If so, in what form does C interpret this byte data type? Can I simply convert it to float? I have this in native code -
jbyte *nativeData;
nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
__android_log_print(ANDROID_LOG_INFO, "Nativeprint", "nativedata is: %d",(int)nativeData[0]);
However, this prints -22 which leads me to believe that I am trying to print out a pointer. I am not sure why that is the case though.
I would appreciate any help on this.
You will not be able to get any float data type from the pixel buffer. The data is in bytes, which in C is the char datatype.
So this:
jbyte *nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
is the same as this:
char *nativeData = (char *)((env)->GetByteArrayElements(NV21FrameData, NULL));
The data is stored as a 1-dimensional array, so you retrieve each pixel by computing its index from the width, height, and the x and y coordinates.
Also remember that the preview camera frames from your sample are in YUV420sp (NV21); this means you will need to convert the data from YUV to RGB before you can set it on a bitmap.
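For reference, a sketch of how an NV21 buffer is indexed and converted for one pixel (written in Java, but the same arithmetic applies on the C side; the coefficients are one common YUV-to-RGB variant):
// NV21 layout: width*height Y (luminance) bytes, then width*height/2 interleaved V/U bytes.
static int nv21PixelToArgb(byte[] data, int width, int height, int x, int y) {
    int Y = data[y * width + x] & 0xFF;                     // mask to get an unsigned 0..255 value
    int uv = width * height + (y / 2) * width + (x & ~1);   // start of the V/U pair covering (x, y)
    int V = (data[uv] & 0xFF) - 128;
    int U = (data[uv + 1] & 0xFF) - 128;
    int r = clamp(Y + (int) (1.402f * V));
    int g = clamp(Y - (int) (0.344f * U + 0.714f * V));
    int b = clamp(Y + (int) (1.772f * U));
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }
For grayscale you only need the Y value; the point is that everything sits in one flat byte array indexed with this kind of arithmetic.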

Unable to use both cameras of Evo 4G using OpenCV4Android

I'm planning to calculate a disparity map by taking two pictures with the two back cameras of the Evo 3D. However, I'm able to use only one camera. I tried different indices:
index 0 gives me the left camera (one of the back cameras)
index 1 gives me the front camera
index -1 gives me the left camera (one of the back cameras)
I once got the other camera using the -1 index, but it's not working anymore. I'm using CameraBridgeViewBase.
I have seen on the android-opencv Google group that people have successfully used both cameras of the Evo 3D phone. I want to know how to do it. Is there some other index, or some other way I can use both cameras?
P.S. Native Camera doesn't work. (Android 4.0.3).
The stereoscopic camera ID in Android changed from 2 to 100 with the ICS upgrade. This is the constant used by the Android Camera.open call. I don't think there was ever any official way to get one camera or the other. You can only get one image or both images.
As the above answer suggests, I used 100 as the camera index, but it didn't work with OpenCV, so I tried using Android's Camera SDK and got some errors. Since this is part of the HTC OpenSense SDK, I downloaded it into my Eclipse and used http://www.htcdev.com/devcenter/opensense-sdk/stereoscopic-3d/s3d-sample-code/ . I took the base file of the S3D Camera Demo and added a few functions so that I could access the camera image data and convert it to an OpenCV Mat.
So I made a few changes to the onTouchEvent function in that code, and added more code there.
@Override
public boolean onTouchEvent(MotionEvent event) {
switch (event.getAction()) {
case MotionEvent.ACTION_DOWN:
// toggle();
//Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
//startActivityForResult(cameraIntent, 1337);
int bufferSize = width * height * 3;
byte[] mPreviewBuffer = null;
// New preview buffer.
mPreviewBuffer = new byte[bufferSize + 4096];
// with buffer requires addbuffer.
camera.addCallbackBuffer(mPreviewBuffer);
camera.setPreviewCallbackWithBuffer(mCameraCallback);
break;
default:
break;
}
return true;
}
private final Camera.PreviewCallback mCameraCallback = new Camera.PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera c) {
Log.d(TAG, "ON Preview frame");
img = new Mat(height, width, CvType.CV_8UC1);
gray = new Mat(height, width, CvType.CV_8UC1);
img.put(0, 0, data);
Imgproc.cvtColor(img, gray, Imgproc.COLOR_YUV420sp2GRAY);
String pixvalue = String.valueOf(gray.get(300, 400)[0]);
String pixval1 = String.valueOf(gray.get(300, 400+width/2)[0]);
Log.d(TAG, pixvalue);
Log.d(TAG, pixval1);
// to do the camera image split processing using "data"
}
};
The image that you get from the camera is in YUV420sp format, and I was initially having problems accessing the data because I had created a 4-channel Mat. It actually needs only a 1-channel Mat.
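A sketch of the single-channel approach (assuming OpenCV for Android, with data being the NV21 preview buffer and width/height the preview size):
// The full NV21 buffer is height*1.5 rows of single-channel bytes.
Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuv.put(0, 0, data);
Mat rgb = new Mat();
Imgproc.cvtColor(yuv, rgb, Imgproc.COLOR_YUV2RGB_NV21);
// For grayscale only, the first height rows of yuv are already the luminance plane.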

Camera preview processing on Android

I'm making a line follower for my robot on Android (to learn Java/Android programming). Currently I'm facing an image processing problem: the camera preview returns an image in a format called YUV, which I want to threshold in order to know where the line is. How would one do that?
So far I've succeeded in getting something; that is, I can definitely read data from the camera preview and, by some miracle, even tell whether the light intensity is over or under a certain value at a certain area of the screen. My goal is to draw the robot's path on an overlay over the camera preview; that too works to some extent, but the problem is the YUV management.
As you can see, not only is the dark area drawn sideways, but it also repeats itself 4 times and the preview image is stretched; I cannot figure out how to fix these problems.
Here's the relevant part of code:
public void surfaceCreated(SurfaceHolder arg0) {
// TODO Auto-generated method stub
// camera setup
mCamera = Camera.open();
Camera.Parameters parameters = mCamera.getParameters();
List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
for(int i=0; i<sizes.size(); i++)
{
Log.i("CS", i+" - width: "+sizes.get(i).width+" height: "+sizes.get(i).height+" size: "+(sizes.get(i).width*sizes.get(i).height));
}
// change preview size
final Camera.Size cs = sizes.get(8);
parameters.setPreviewSize(cs.width, cs.height);
// initialize image data array
imgData = new int[cs.width*cs.height];
// make picture gray scale
parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
mCamera.setParameters(parameters);
// change display size
LayoutParams params = (LayoutParams) mSurfaceView.getLayoutParams();
params.height = (int) (mSurfaceView.getWidth()*cs.height/cs.width);
mSurfaceView.setLayoutParams(params);
LayoutParams overlayParams = (LayoutParams) swOverlay.getLayoutParams();
overlayParams.width = mSurfaceView.getWidth();
overlayParams.height = mSurfaceView.getHeight();
swOverlay.setLayoutParams(overlayParams);
try
{
mCamera.setPreviewDisplay(mSurfaceHolder);
mCamera.setDisplayOrientation(90);
mCamera.startPreview();
}
catch (IOException e)
{
e.printStackTrace();
mCamera.stopPreview();
mCamera.release();
}
// callback every time a new frame is available
mCamera.setPreviewCallback(new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera)
{
// create bitmap from camera preview
int pixel, pixVal, frameSize = cs.width*cs.height;
for(int i=0; i<frameSize; i++)
{
pixel = (0xff & ((int) data[i])) - 16;
if(pixel < threshold)
{
pixVal = 0;
}
else
{
pixVal = 1;
}
imgData[i] = pixVal;
}
int cp = imgData[(int) (cs.width*(0.5+(cs.height/2)))];
//Log.i("CAMERA", "Center pixel RGB: "+cp);
debug.setText("Center pixel: "+cp);
// process preview image data
Paint paint = new Paint();
paint.setColor(Color.YELLOW);
int start, finish, last;
start = finish = last = -1;
float x_ratio = mSurfaceView.getWidth()/cs.width;
float y_ratio = mSurfaceView.getHeight()/cs.height;
// display calculated path on overlay using canvas
Canvas overlayCanvas = overlayHolder.lockCanvas();
overlayCanvas.drawColor(0, Mode.CLEAR);
// start by finding the tape from bottom of the screen
for(int y=cs.height; y>0; y--)
{
for(int x=0; x<cs.width; x++)
{
pixel = imgData[y*cs.height+x];
if(pixel == 1 && last == 0 && start == -1)
{
start = x;
}
else if(pixel == 0 && last == 1 && finish == -1)
{
finish = x;
break;
}
last = pixel;
}
//overlayCanvas.drawLine(start*x_ratio, y*y_ratio, finish*x_ratio, y*y_ratio, paint);
//start = finish = last = -1;
}
overlayHolder.unlockCanvasAndPost(overlayCanvas);
}
});
}
This code sometimes generates an error when quitting the application, due to some method being called after release, which is the least of my problems.
UPDATE:
Now that the orientation problem is fixed (CCD sensor orientation), I'm still facing the repetition problem; this is probably related to my YUV data management...
Your surface and camera management looks correct, but I would double-check that the camera actually accepted the preview size settings (some camera implementations reject some settings silently).
As you are working in portrait mode, you have to keep in mind that the camera does not give a fart about phone orientation - its coordinate origin is determined by the CCD chip and is always at the right corner, and the scan direction is from top to bottom and right to left - quite different from your overlay canvas. (If you were in landscape mode, everything would be correct ;) ) This is certainly a source of the odd drawing results.
Your thresholding is a bit naive and not very useful in real life - I would suggest adaptive thresholding. In our javaocr project (pure Java, also has Android demos) we implemented an efficient Sauvola binarisation (see the demos):
http://sourceforge.net/projects/javaocr/
Binarisation performance can be improved by working only on single image rows (patches welcome).
The issue with the UV part of the image is easy - the default format is NV21, where the luminance comes first as a plain byte stream, and you do not need the UV part of the image at all (look into the demos above).
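As an illustration only (a crude per-row threshold, not the Sauvola binarisation from javaocr), here is a sketch of thresholding just the luminance plane of an NV21 frame:
// data is the NV21 preview buffer; its first width*height bytes are the Y (luminance) plane.
for (int y = 0; y < height; y++) {
    int sum = 0;
    for (int x = 0; x < width; x++) {
        sum += data[y * width + x] & 0xFF;                  // unsigned luminance
    }
    int rowThreshold = sum / width;                         // per-row mean as the threshold
    for (int x = 0; x < width; x++) {
        int luma = data[y * width + x] & 0xFF;
        imgData[y * width + x] = (luma < rowThreshold) ? 0 : 1;   // note: index by width, not height
    }
}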
