Unable to use both cameras of Evo 3D using OpenCV4Android - android

I'm planning to compute a disparity map by taking two pictures with the two back cameras of the Evo 3D. However, I can only use one camera at a time. I tried different camera indices:

0 gives me the left camera (one of the back cameras)
1 gives me the front camera
-1 gives me the left camera (one of the back cameras)

I once got the other camera using the -1 index, but it's not working anymore. I'm using CameraBridgeViewBase.
I have seen on the android-opencv Google group that people have successfully used both cameras of the Evo 3D phone. I want to know how to do it. Is there some other index, or some other way I can access both cameras?
P.S. The native camera doesn't work (Android 4.0.3).

The stereoscopic camera ID in Android changed from 2 to 100 with the ICS upgrade. This is the constant used by the Android Camera.open call. I don't think there was ever any official way to get one camera or the other. You can only get one image or both images.
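For reference, a minimal sketch of opening the camera with that ID (the value 100 is HTC-specific as described above; the constant name is mine):

// Hypothetical sketch: open HTC's stereoscopic camera by its vendor-specific ID.
private static final int STEREO_CAMERA_ID = 100; // was 2 before the ICS upgrade

Camera stereoCamera = null;
try {
    // Camera.open(int) is the standard framework call; only the ID is HTC-specific.
    stereoCamera = Camera.open(STEREO_CAMERA_ID);
} catch (RuntimeException e) {
    // Devices without the HTC stereo driver will fail here.
}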

As the answer above suggests, I used 100 as the camera index, but it didn't work with OpenCV, so I tried Android's Camera SDK directly and got some errors. Since this camera is exposed through the HTC OpenSense SDK, I installed that SDK in Eclipse and used the S3D Camera Demo sample from http://www.htcdev.com/devcenter/opensense-sdk/stereoscopic-3d/s3d-sample-code/ . I took the demo's base file and added a few functions so that I could access the camera image data and convert it to an OpenCV Mat.
I made a few changes in the demo's onTouchEvent function, shown below along with the preview callback:
@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
    case MotionEvent.ACTION_DOWN:
        // toggle();
        // Intent cameraIntent = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
        // startActivityForResult(cameraIntent, 1337);
        int bufferSize = width * height * 3;
        byte[] mPreviewBuffer = null;
        // New preview buffer (padded; NV21 actually needs width * height * 3 / 2 bytes).
        mPreviewBuffer = new byte[bufferSize + 4096];
        // setPreviewCallbackWithBuffer requires addCallbackBuffer.
        camera.addCallbackBuffer(mPreviewBuffer);
        camera.setPreviewCallbackWithBuffer(mCameraCallback);
        break;
    default:
        break;
    }
    return true;
}
private final Camera.PreviewCallback mCameraCallback = new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera c) {
        Log.d(TAG, "ON Preview frame");
        img = new Mat(height, width, CvType.CV_8UC1);
        gray = new Mat(height, width, CvType.CV_8UC1);
        img.put(0, 0, data);
        Imgproc.cvtColor(img, gray, Imgproc.COLOR_YUV420sp2GRAY);
        // Sample one pixel from each half of the side-by-side stereo frame.
        String pixvalue = String.valueOf(gray.get(300, 400)[0]);
        String pixval1 = String.valueOf(gray.get(300, 400 + width / 2)[0]);
        Log.d(TAG, pixvalue);
        Log.d(TAG, pixval1);
        // TODO: split the stereo image and process it using "data"
    }
};
The image you get from the camera is in YUV420sp (NV21) format, and I initially had problems accessing the data because I had created a 4-channel Mat. It actually needs only a 1-channel Mat.
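As a side note, a common OpenCV4Android pattern (not from the original answer) is to size the one-channel Mat at height * 3 / 2 rows so it holds the entire NV21 buffer, Y plane plus interleaved chroma:

// One-channel Mat sized for the whole NV21 buffer (Y plane + VU plane).
Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuv.put(0, 0, data);

// Full color conversion, if RGBA is needed instead of gray:
Mat rgba = new Mat(height, width, CvType.CV_8UC4);
Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV420sp2RGBA, 4);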

Related

Image and video filters like snapchat in android

I am developing an application where I want filters to be applied the way Snapchat does it. From what I can tell, they are using a PagerAdapter, but I do not know how they apply filters over the image or video; it's not just another image with a filter applied to it. Any idea or code snippet which can do the same for both images and videos, and save them too, is highly appreciated. Thanks :D
What I am doing here is overlaying two bitmaps over one another. How much of either bitmap is visible is determined by the user's touch. I have an enum for the direction the user is scrolling: LEFT, RIGHT, or NONE. Depending on the scroll direction, a different bitmap is overlaid on the current bitmap.
@Override
public boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX, float distanceY) {
    if (mCurrentScrollDirection.ordinal() == ScrollDirection.NONE.ordinal()) {
        if (distanceX > 0) {
            mCurrentScrollDirection = ScrollDirection.LEFT;
        } else {
            mCurrentScrollDirection = ScrollDirection.RIGHT;
        }
    }
    mTouchX = (int) e2.getX();
    overlayBitmaps(mTouchX);
    return false;
}
private void overlayBitmaps(int coordinateX) {
    switch (mCurrentScrollDirection) {
        case NONE: {
            // do nothing here
            break;
        }
        case LEFT: {
            overlayNextBitmap(coordinateX);
            break;
        }
        case RIGHT: {
            overlayPreviousBitmap(coordinateX);
            break;
        }
    }
}

private void overlayPreviousBitmap(int coordinateX) {
    mImageCanvas.save();
    Bitmap OSBitmap = Bitmap.createBitmap(mCurrentBitmap, coordinateX, 0, mCurrentBitmap.getWidth() - coordinateX, mCurrentBitmap.getHeight());
    mImageCanvas.drawBitmap(OSBitmap, coordinateX, 0, null);
    Bitmap FSBitmap = Bitmap.createBitmap(mPreviousBitmap, 0, 0, coordinateX, mCurrentBitmap.getHeight());
    mImageCanvas.drawBitmap(FSBitmap, 0, 0, null);
    mImageCanvas.restore();
    mCapturedImageView.setImageDrawable(new BitmapDrawable(getResources(), mResultBitmap));
}

private void overlayNextBitmap(int coordinateX) {
    mImageCanvas.save();
    Bitmap OSBitmap = Bitmap.createBitmap(mCurrentBitmap, 0, 0, coordinateX, mCurrentBitmap.getHeight());
    mImageCanvas.drawBitmap(OSBitmap, 0, 0, null);
    Bitmap FSBitmap = Bitmap.createBitmap(mNextBitmap, coordinateX, 0, mCurrentBitmap.getWidth() - coordinateX, mCurrentBitmap.getHeight());
    mImageCanvas.drawBitmap(FSBitmap, coordinateX, 0, null);
    mImageCanvas.restore();
    mCapturedImageView.setImageDrawable(new BitmapDrawable(getResources(), mResultBitmap));
}
This works quite well; I just haven't tested it on low-memory devices, considering I could not find many :)
For a complete code reference, check out this link. It's my own library where you can capture images, apply filters, and get a callback to the calling activity. It's still a work in progress.
An alternative solution:
Render the image onto a SurfaceTexture. Use that SurfaceTexture as an OpenGL "GL_OES_EGL_image_external" texture input into an OpenGL fragment shader. Draw a full-screen quad using this fragment shader onto a secondary SurfaceTexture. Render the secondary SurfaceTexture into a TextureView.
Getting that first part working is the hard part. Once you have it working, you will be able to apply different shaders to images, but not switch between them as shown in the picture. To add smooth swapping between images, render two different fragment shaders onto the secondary SurfaceTexture, using the scissor test (glScissor) to slice the screen in half depending on an offset value.
The main advantage of this method is that it will use significantly less memory. The bitmap can be loaded once, and after being rendered onto the SurfaceTexture once, may be discarded.
A secondary advantage of this method is that more complicated filters can be applied, and that with a little bit of extra work, you will be able to render videos as well.
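To make the shader side concrete, here is a rough sketch (my own, not taken from any library mentioned here) of the external-texture fragment shader and the scissor-based split; the grayscale math is just a placeholder filter, and drawFullScreenQuad is a hypothetical helper:

// Fragment shader sampling a SurfaceTexture (GL_OES_EGL_image_external).
private static final String FRAGMENT_SHADER =
        "#extension GL_OES_EGL_image_external : require\n" +
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform samplerExternalOES sTexture;\n" +
        "void main() {\n" +
        "    vec4 c = texture2D(sTexture, vTexCoord);\n" +
        "    float gray = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n" +
        "    gl_FragColor = vec4(vec3(gray), c.a);\n" +
        "}\n";

// Swipe transition: draw each half with a different shader program.
GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
GLES20.glScissor(0, 0, offsetX, viewHeight);                   // left slice: filter A
drawFullScreenQuad(filterProgramA);                            // hypothetical helper
GLES20.glScissor(offsetX, 0, viewWidth - offsetX, viewHeight); // right slice: filter B
drawFullScreenQuad(filterProgramB);
GLES20.glDisable(GLES20.GL_SCISSOR_TEST);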
If you're interested in seeing an implementation of this technique (which includes video filtering as well), check out the Kfilter library for photo and video filtering/processing.

Android Camera Parameter setPictureSize causes streaked picture

I am trying to take a picture using the Android camera. I have a requirement to capture a 1600 (w) x 1200 (h) image (a 3rd-party vendor requirement). My code seems to work fine for many phone cameras, but setPictureSize causes a crash on some phones (Samsung Galaxy S4, Samsung Galaxy Note) and a streaked picture on others (Nexus 7 tablet). On at least the Nexus, the size I desire shows up in the getSupportedPictureSizes list.
I have tried specifying the orientation but it didn't help. Taking the picture with the default picture size works fine.
Here is an example of the streaking: [screenshot omitted]
For my image capture I have a requirement of 1600x1200, jpg, 30% compression, so I am capturing a JPG file.
I think I have three choices:
1) Figure out how to capture the 1600x1200 size without a crash or streaking, or
2) Figure out how to change the size of the default picture size to a JPG that is 1600x1200.
3) Something else that is currently unknown to me.
I have found some other postings that have similar issues but not quite the same. I am in my 2nd day of trying things but am not finding a solution. Here is one posting that got close:
Camera picture to Bitmap results in messed up image (none of the suggestions helped me)
Here is the section of my code that worked fine until I ran into the S4/Note/Nexus 7. I have added a bit of debugging code for now:
Camera.Parameters parameters = mCamera.getParameters();
Camera.Size size = getBestPreviewSize(width, height, parameters);
if (size != null) {
    int pictureWidth = 1600;
    int pictureHeight = 1200;
    // testing
    Camera.Size test = parameters.getPictureSize();
    List<Camera.Size> testSizes = parameters.getSupportedPictureSizes();
    for (int i = 0; i < testSizes.size(); i++) {
        test = testSizes.get(i);
    }
    test = testSizes.get(3);
    // get(3) is 1600 x 1200
    pictureWidth = test.width;
    pictureHeight = test.height;
    parameters.setPictureFormat(ImageFormat.JPEG);
    parameters.setPictureSize(pictureWidth, pictureHeight);
    parameters.setJpegQuality(30);
    parameters.setPreviewSize(size.width, size.height);
    // catch any exception
    try {
        // make sure the preview is stopped
        mCamera.stopPreview();
        mCamera.setParameters(parameters);
        didConfig = true;
    } catch (Exception e) {
        // some error presentation was removed for brevity
        // since didConfig not set to TRUE it will fail gracefully
    }
}
Here is the section of my code that saves the JPG file:
PictureCallback jpegCallback = new PictureCallback() {
    public void onPictureTaken(byte[] data, Camera camera) {
        if (data.length > 0) {
            String fileName = "image.jpg";
            File file = new File(getFilesDir(), fileName);
            String filePath = file.getAbsolutePath();
            boolean goodWrite = false;
            try {
                OutputStream os = new FileOutputStream(file);
                os.write(data);
                os.close();
                goodWrite = true;
            } catch (IOException e) {
                goodWrite = false;
            }
            if (goodWrite) {
                // go on to the Preview
            } else {
                // TODO return an error to the calling activity
            }
        }
        Log.d(TAG, "onPictureTaken - jpeg");
    }
};
Any suggestions on how to correctly set up the camera parameters for taking photos, or how to crop or resize the resulting photo, would be great. Especially if it will work with older cameras (API level 8 or later)! Based on needing the full width of the picture, I can only crop off the top.
Thanks!
EDIT: Here is what I ended up doing:
I started by iterating over the Camera.Parameters getSupportedPictureSizes list and using the first size whose width and height were both greater than my desired size AND that had the same width:height ratio, and I set the camera's picture size to that; a sketch of the selection follows.
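Here is roughly what that selection looks like, reconstructed from the description above (not the poster's exact code):

// Use the first supported size that covers the target and matches its aspect ratio.
int desiredW = 1600, desiredH = 1200;
for (Camera.Size s : parameters.getSupportedPictureSizes()) {
    boolean bigEnough = s.width >= desiredW && s.height >= desiredH;
    boolean sameRatio = s.width * desiredH == s.height * desiredW; // ratio check without floats
    if (bigEnough && sameRatio) {
        parameters.setPictureSize(s.width, s.height);
        break;
    }
}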
Then once the picture was taken:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inPurgeable = true;
// convert the byte array to a bitmap, taking care to allow for garbage collection
Bitmap original = BitmapFactory.decodeByteArray(input, 0, input.length, options);
// resize the bitmap to my desired scale
Bitmap resized = Bitmap.createScaledBitmap(original, 1600, 1200, true);
// create a new byte array and output the bitmap to a compressed JPG
ByteArrayOutputStream blob = new ByteArrayOutputStream();
resized.compress(Bitmap.CompressFormat.JPEG, 30, blob);
// recycle the memory since bitmaps seem to have slightly different garbage collection
original.recycle();
resized.recycle();
byte[] desired = blob.toByteArray();
Then I write out the desired jpg to a file for upload.
test = testSizes.get(3);
// get(3) is 1600 x 1200
There is no requirement that the array have 4+ elements, let alone that the fourth element be 1600x1200.
1) Figure out how to capture the 1600x1200 size without a crash or streaking
There is no guarantee that every device is capable of taking a picture with that exact resolution. You cannot specify arbitrary values for the resolution -- it must be one of the supported picture sizes. Some devices support arbitrary values, while other devices will give you corrupted output (as is the case here) or will flat-out crash.
2) Figure out how to change the size of the default picture size to a JPG that is 1600x1200
I am not aware that there is a "default picture size", and, beyond that, such a size will be immutable, since it is the default. Changing the picture size is your option #1 above.
3) Something else that is currently unknown to me.
For devices that support a resolution that is bigger on both axes, take a picture in that resolution, then crop to 1600x1200.
For all other devices, where one or both axes are smaller than desired, take a picture in whatever resolution suits you (largest, closest match to 4:3 aspect ratio, etc.), and then stretch/crop to get to 1600x1200.
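As an illustration of the crop step (my sketch, not part of the original answer; a centered crop is shown, though the asker's constraint may call for anchoring the crop at the bottom instead):

// Center-crop a larger capture down to exactly 1600x1200.
Bitmap big = BitmapFactory.decodeByteArray(data, 0, data.length);
int cropX = (big.getWidth() - 1600) / 2;
int cropY = (big.getHeight() - 1200) / 2;
Bitmap cropped = Bitmap.createBitmap(big, cropX, cropY, 1600, 1200);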

YUV_NV21_TO_RGB not working?

I am starting to develop an app which monitors the camera preview, does some image processing on it, and displays it on a canvas. Just as a diagnostic I have the following code:
camera = Camera.open();
ImageFormat imf = new ImageFormat();
Camera.Parameters param = camera.getParameters();
param.setPreviewSize(128, 128);
preview_format = param.getPreviewFormat();
Camera.Size sz = param.getPreviewSize();
myimage = new int[sz.width*sz.height];
At run time it reports that preview_format is 17 which I understand is "NV21".
Later I have:
camera.setPreviewCallback(new PreviewCallback()
{
    public void onPreviewFrame(byte[] _data, Camera _camera)
    {
        YUV_NV21_TO_RGB(myimage, _data, 128, 128);
    }
});
The function YUV_NV21_TO_RGB was taken from here.
Meanwhile in another thread I have:
canvas.drawBitmap(
        myimage, // the int array
        0,       // where to start in the array
        128,     // the stride ???
        200,     // x coord of where to display
        200,     // y coord of where to display
        128,     // wid
        128,     // ht
        false,   // alpha used?
        null);   // the paint used
The resulting image can be seen amongst other diagnostics in the square below. The stripes change as I move the phone around and appear to correspond in some way to what the camera is pointing at, but it has clearly been mangled. I tried using an alternative function found here, and another from Wikipedia, but with seemingly identical results. Any ideas?
EDIT: One thought I had was that perhaps NV21 may not completely specify the format. Maybe it's a class of formats, where you need to go on and specify the bits per pixel or similar.
EDIT: An extra clue - if I cover the camera completely, the square goes entirely pure green.
Your preview size is not 128 by 128, because you never apply it: you set it on the Camera.Parameters instance but don't pass it back to the camera.
You need to add the following line:
camera.setParameters(param);
And it's probably safest to read the values back after applying the parameters (note these getters live on Camera.Parameters, not on Camera itself):

Camera.Parameters applied = camera.getParameters();
preview_format = applied.getPreviewFormat();
Camera.Size sz = applied.getPreviewSize();

Manipulating android camera frame data in JNI

I'm trying to use the NDK to do some image processing. I am NOT using OpenCV.
I am fairly new to Android, so I was doing this in steps. I started by writing a simple app that would let me capture video from the camera and display it on the screen. I have this done.
Then I tried to manipulate the camera data in native code. However, onPreviewFrame delivers the frame information as a byte array. This is my code -
public void onPreviewFrame(byte[] arg0, Camera arg1)
{
    if (imageFormat == ImageFormat.NV21)
    {
        if (!bProcessing)
        {
            FrameData = arg0;
            mHandler.post(callnative);
        }
    }
}
And the callnative runnable is like so -
private Runnable callnative = new Runnable()
{
    public void run()
    {
        bProcessing = true;
        String returnNative = callTorch(MainActivity.assetManager, PreviewSizeWidth, PreviewSizeHeight, FrameData, pixels);
        bitmap.setPixels(pixels, 0, PreviewSizeWidth, 0, 0, PreviewSizeWidth, PreviewSizeHeight);
        MycameraClass.setImageBitmap(bitmap);
        bProcessing = false;
    }
};
The problem is, I need to use FrameData in native code as the float datatype, but it arrives as a byte array. I want to know how the frame data is stored. Is it a two-dimensional array of bytes? Does the camera return an 8-bit image, stored as 640x480 bytes? If so, how does C interpret this byte data type? Can I simply convert it to float? I have this in native code -
jbyte *nativeData;
nativeData = (env)->GetByteArrayElements(NV21FrameData, NULL);
__android_log_print(ANDROID_LOG_INFO, "Nativeprint", "nativedata is: %d", (int)nativeData[0]);
However, this prints -22 which leads me to believe that I am trying to print out a pointer. I am not sure why that is the case though.
I would appreciate any help on this.
You will not be able to get a float datatype from the pixel buffer directly. The data are in bytes, which in C is the char datatype.
So this:
jbyte *nativeData = (env)->GetByteArrayElements(NV21FrameData,NULL);
is the same as this:
char *nativeData = (char *)((env)->GetByteArrayElements(NV21FrameData, NULL));
The data is stored as a one-dimensional array, so you retrieve each pixel by computing an index from the width, the height, and the x and y coordinates.
Also remember that the preview camera frames from your sample are in YUV420sp; this means you will need to convert the data from YUV to RGB before you can set it into a bitmap.
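As a side note, the -22 the asker printed is not a pointer value: jbyte is a signed 8-bit type, so it is simply the first Y sample shown as a signed integer. A small sketch (mine, in Java for consistency with the rest of this page) of how the NV21 buffer is laid out and how a sample becomes a float:

// NV21 layout: width*height luma (Y) bytes first, then interleaved V/U pairs.
int width = 640, height = 480;
// byte[] data = ...;  // the buffer delivered to onPreviewFrame
int x = 10, y = 20;
int luma = data[y * width + x] & 0xFF;   // mask off the sign: Java bytes are signed
float lumaF = luma / 255.0f;             // normalized float sample
// Chroma for the 2x2 block containing (x, y):
int uvBase = width * height + (y / 2) * width + (x & ~1);
int v = data[uvBase] & 0xFF;
int u = data[uvBase + 1] & 0xFF;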

Android camera unexplainable rotation on capture for some devices (not in EXIF)

What I'm doing seems like it should be simple, but I'm still lost after I've read every possible Stackoverflow answer I can find and Googled every article I can find.
I'm using a preview SurfaceView and capturing an image from an activity that is set for screenOrientation="landscape" in my AndroidManifest.xml.
I followed the sample Camera app code and thought things were working until I tried my app on a few Motorola devices running 1.5.
I have the OrientationEventListener running OK, and I use reflection to set the rotation when the newer API is available, as such:
final int latchedOrientation = roundOrientation(mLastOrientation + 90);
Parameters parameters = preview.camera.getParameters();
JPLog.d("Setting camera rotation = %d", latchedOrientation);
try {
    // if >= 2.0
    Method method = Camera.Parameters.class.getMethod("setRotation", int.class);
    if (method != null) {
        method.invoke(parameters, latchedOrientation);
    }
} catch (Throwable t) {
    // if < 2.0
    parameters.set("rotation", latchedOrientation);
}
preview.camera.setParameters(parameters);
NexusOne (OS 2.2) - Works great. latchedOrientation = 0, picture OK without any rotation in the EXIF header.
T-Mobile G1 (OS 1.6) - Also works great. latchedOrientation = 0, picture OK.
Motorola Backflip (OS 1.5) - Image rotated. latchedOrientation = 0, picture has no EXIF rotation in it.
Motorola CLIQ (OS 1.5) - Image rotated. latchedOrientation = 0, picture has no EXIF rotation in it.
What's going on with these Motorola devices? I thought my problem was that the Motorola camera driver wasn't rotating the images, so I found the Sanselan EXIF-reading classes for Android and was preparing to rotate them myself. The funny thing is, there are EXIF headers but no rotation element.
If I set the rotation manually to 90 degrees, the images come out perfect on the Motorola devices, but then the G1 and the Nexus One have images that are rotated 90 degrees (not what I want). There has to be something I'm not getting here.
I doubt this is just a 1.5 issue, or else someone would've posted info on it, right?
I had this issue, and I used this method to capture the image (without creating a custom camera):
final Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(image));
startActivityForResult(intent, 0);
and did the rest in onActivityResult(int requestCode, int resultCode, Intent data) {}.
The original image (the actual SD-card image) was correct, but the Bitmap was rotated when I fetched it like this:
Bitmap bmp = BitmapFactory.decodeStream(..
The solution:
try {
    File f = new File(SD_CARD_IMAGE_PATH);
    ExifInterface exif = new ExifInterface(f.getPath());
    int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
    int angle = 0;
    if (orientation == ExifInterface.ORIENTATION_ROTATE_90) {
        angle = 90;
    } else if (orientation == ExifInterface.ORIENTATION_ROTATE_180) {
        angle = 180;
    } else if (orientation == ExifInterface.ORIENTATION_ROTATE_270) {
        angle = 270;
    }
    Matrix mat = new Matrix();
    mat.postRotate(angle);
    Bitmap bmp = BitmapFactory.decodeStream(new FileInputStream(f), null, null);
    Bitmap correctBmp = Bitmap.createBitmap(bmp, 0, 0, bmp.getWidth(), bmp.getHeight(), mat, true);
} catch (IOException e) {
    Log.w("TAG", "-- Error in setting image");
} catch (OutOfMemoryError oom) {
    Log.w("TAG", "-- OOM Error in setting image");
}
This is actually a device-specific issue that mostly affects Motorola devices. The Google devs included a setDisplayOrientation call in API level 8 to work around the issue. The main bug is filed here.
For those who can't move to API level 8, the two common solutions are:
Override onDraw
Override onDraw in a top-level ViewGroup and rotate the canvas by 90 degrees to compensate for the rotation (see the sketch after this list). Note there is a caveat here, as your touch events will also need to be rotated.
Use Landscape Mode
Lock the activity to landscape mode but draw assets as if they are in portrait. This means you do your layout and rotate your image assets to look like you are in portrait mode, so the view looks normal. Unfortunately, this makes it difficult to use the menu, since the menu will open horizontally.
I have also seen people use an animation controller to rotate the view. The drawback here, which I wasn't able to overcome, is that the rotated view doesn't stretch to fill the screen. See the answer by Georg for a sample implementation.
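A minimal sketch of the first workaround (my own illustration, using dispatchDraw since that is the hook that draws a ViewGroup's children; the exact pivot depends on your layout):

// Top-level ViewGroup that draws its children rotated 90 degrees.
@Override
protected void dispatchDraw(Canvas canvas) {
    canvas.save();
    // Rotate around the view's center to compensate for the camera rotation.
    canvas.rotate(90, getWidth() / 2f, getHeight() / 2f);
    super.dispatchDraw(canvas);
    canvas.restore();
}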
Here is the code I used in onActivityResult() in my activity. The intent returned was for picking an image of type image/*. Works well for me!
Uri imageUri = intent.getData();
String[] orientationColumn = {MediaStore.Images.Media.ORIENTATION};
Cursor cur = managedQuery(imageUri, orientationColumn, null, null, null);
int orientation = -1;
if (cur != null && cur.moveToFirst()) {
    orientation = cur.getInt(cur.getColumnIndex(orientationColumn[0]));
}
Matrix matrix = new Matrix();
matrix.postRotate(orientation);
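To complete the picture, the matrix would then be applied when building the bitmap, along these lines (my addition, not part of the original answer):

try {
    // Decode the picked image and bake the rotation into a new bitmap.
    Bitmap src = MediaStore.Images.Media.getBitmap(getContentResolver(), imageUri);
    Bitmap rotated = Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), matrix, true);
} catch (IOException e) {
    Log.w("TAG", "-- Error loading picked image");
}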
It looks like the 'use landscape mode' suggestion is the only thing that really works. It seems to be fine for this to live either in the manifest or in a call to setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE) in the activity's onCreate.
