4-channel IplImage (JavaCV) to Android Bitmap

I'm trying to record video by converting each camera preview frame to a bitmap with ARGB_8888 quality. Since that requires 4 channels, I created the IplImage with 4 channels as well. The output now has two major problems:
1) The bitmap created from the IplImage is grayscale, even though I converted it with BGR2RGBA.
2) The 4-channel IplImage gives me a bitmap divided into 4 parts, each showing the same frame.
Let me put my code here:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    if (yuvIplimage != null && recording) {
        videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);

        // Where imageWidth = 640 and imageHeight = 480 (as per the camera preview size)
        // Create the YUV IplImage
        IplImage yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U, 2);
        yuvimage.getByteBuffer().put(data);

        IplImage rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
        opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);

        Bitmap bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.RGB_565);
        bitmap.copyPixelsFromBuffer(rgbimage.getByteBuffer());

        // Save file to SD card
        File file = new File(Environment.getExternalStorageDirectory(), "rgbbitmap.png");
        FileOutputStream fOut;
        try {
            fOut = new FileOutputStream(file);
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, fOut);
            fOut.flush();
            fOut.close();
            // mybitmap.recycle();
        } catch (Exception e) { // TODO
        }

        try {
            // Get the correct time
            recorder.setTimestamp(videoTimestamp);
            // Record the image into FFmpegFrameRecorder
            recorder.record(yuvimage);
        } catch (FFmpegFrameRecorder.Exception e) {
            Log.v(LOG_TAG, e.getMessage());
            e.printStackTrace();
        }
    }
}
You can also see the output bitmap below, showing the same frame repeated in 4 parts.
What's wrong with my code, or what am I missing? Let me know your best suggestions.
Thanks,

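One suggested fix is to treat the NV21 preview buffer as a single-channel IplImage (with height * 3 / 2 rows) and convert that to a 3-channel BGR image: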
IplImage yuvImage = IplImage.create(width, height * 3 / 2, IPL_DEPTH_8U, 1);
yuvImage.getByteBuffer().put(data);
IplImage bgrImage = IplImage.create(width, height, IPL_DEPTH_8U, 3);
cvCvtColor(yuvImage, bgrImage, CV_YUV2BGR_NV21);
cvSaveImage("/mnt/sdcard/result.jpg", bgrImage);
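If the goal is an ARGB_8888 bitmap, as in the original question, a minimal sketch of the remaining step (an assumed extension of the snippet above, not code from the answer) would be to convert BGR to 4-channel RGBA and copy the pixels into the bitmap:

// Sketch: convert the 3-channel BGR image to 4-channel RGBA, then copy into
// an ARGB_8888 bitmap (whose in-memory layout is RGBA).
IplImage rgbaImage = IplImage.create(width, height, IPL_DEPTH_8U, 4);
cvCvtColor(bgrImage, rgbaImage, CV_BGR2RGBA);

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(rgbaImage.getByteBuffer());

Bitmap.Config.ARGB_8888 stores pixels in RGBA byte order, which is why the RGBA (not ARGB) conversion code is the one that matches copyPixelsFromBuffer.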

Related

Android OpenCV byte[] to Mat to byte[]

My goal is to add an overlay on the camera preview that will find book edges. For that, I override onPreviewFrame, where I do the following:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;

    Mat mat = new Mat((int) (height * 1.5), width, CvType.CV_8UC1);
    mat.put(0, 0, data);

    byte[] bytes = new byte[(int) (height * width * 1.5)];
    mat.get(0, 0, bytes);

    if (!test) { // to only do once
        File pictureFile = getOutputMediaFile();
        try {
            FileOutputStream fos = new FileOutputStream(pictureFile);
            fos.write(bytes);
            fos.close();
            Uri picUri = Uri.fromFile(pictureFile);
            updateGallery(picUri);
            test = true;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
For now I simply want to take one of the preview frames and save it after the conversion to Mat.
After spending countless hours trying to get the above to work, the saved picture cannot be viewed on my testing phone (an LG Leon). I can't seem to find the issue. Am I mixing up the height/width because I'm taking pictures in portrait mode? I tried switching them and it still doesn't work. Where is the problem?
The fastest method I managed to find is described HERE in my recently asked question. You can find the method to extract the image in the answer I wrote below that question. The thing is that the image you get through onPreviewFrame() is NV21. After receiving this image, you may need to convert it to RGB (depending on what you want to achieve; this is also done in the answer I linked).
Seems quite inefficient but it works for me (for now):
// get the camera parameters
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;

// convert the byte[] to Bitmap through YuvImage;
// make sure the previewFormat is NV21 (I set it so somewhere before)
YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 70, out);
Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());

// convert Bitmap to Mat; note the Bitmap.Config.ARGB_8888 conversion that
// allows you to use other image processing methods and still save at the end
Mat orig = new Mat();
bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp, orig);

// here you do whatever you want with the Mat

// Mat to Bitmap to OutputStream to byte[] to File
Utils.matToBitmap(orig, bmp);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 70, stream);
byte[] bytes = stream.toByteArray();

File pictureFile = getOutputMediaFile();
try {
    FileOutputStream fos = new FileOutputStream(pictureFile);
    fos.write(bytes);
    fos.close();
} catch (IOException e) {
    e.printStackTrace();
}
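If the JPEG round trip becomes a bottleneck, a more direct route (a sketch, assuming OpenCV's Android Java bindings and an NV21 preview format) is to convert the NV21 buffer straight to an RGBA Mat:

// Sketch: direct NV21 -> RGBA conversion, skipping the YuvImage/JPEG round trip.
Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
yuv.put(0, 0, data);
Mat rgba = new Mat();
Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);

Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
Utils.matToBitmap(rgba, bmp);

This also avoids the lossy JPEG compression at quality 70, so the Mat you process contains the original preview pixels.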

Bitmap is cropped wrong in onPictureTaken

I want to implement my own face detection/recognition Android app. When the camera finds a face, a rectangle is displayed on the camera preview in real time. The app has a method for taking photos too. However, I don't want to save the whole picture, only the area within the rectangle: the human face. When I give the rectangle coordinates to the Bitmap.createBitmap method to crop the picture, the correctness of the cropped photo depends on where on the display the rectangle was shown. When a detected face appears in the middle of the preview, createBitmap crops it roughly fine, but not when it appears on the left or right side of the display. It seems like the coordinates I pass to Bitmap.createBitmap are transformed somehow, but I cannot find the ratio. Any solutions?
Here is my onPictureTaken method:
@Override
public void onPictureTaken(byte[] data, Camera camera) {
    File pictureFile = getOutputMediaFile();
    if (pictureFile == null) {
        Log.d(TAG, "Error creating media file, check storage permissions: ");
        return;
    }

    Bitmap picture = BitmapFactory.decodeByteArray(data, 0, data.length);
    RectF faceRect = mPreview.getFaceRect();
    float x = faceRect.left;
    float y = faceRect.top;
    float w = faceRect.right - faceRect.left;
    float h = faceRect.bottom - faceRect.top;
    int intX = (int) x;
    int intY = (int) y;
    int intW = (int) w;
    int intH = (int) h;

    Bitmap croppedPicture = Bitmap.createBitmap(picture, intX, intY, intW, intH);
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    croppedPicture.compress(Bitmap.CompressFormat.JPEG, 100, stream);
    byte[] byteArrayFromPicture = stream.toByteArray();

    try {
        FileOutputStream fos = new FileOutputStream(pictureFile);
        fos.write(byteArrayFromPicture);
        //fos.write(data);
        fos.close();
    } catch (FileNotFoundException e) {
        Log.d(TAG, "File not found: " + e.getMessage());
    } catch (IOException e) {
        Log.d(TAG, "Error accessing file: " + e.getMessage());
    }
}
and here is an example of a cropped picture (I do not have enough reputation to post more links):
face close to the left edge of display
cropped pic1
(sorry about taking a picture of a picture; I was too lazy to implement saving the rectangle together with the photo)
Resolved
The solution was very simple: because the front camera was used, the captured image was always mirrored, so I added two if-clauses:
Bitmap picture = BitmapFactory.decodeByteArray(data, 0, data.length);
RectF faceRect = mPreview.getFaceRect();
Camera.Parameters parameters = mCamera.getParameters();
int picWidth = parameters.getPictureSize().width;

int intX = 0;
int intY = (int) faceRect.top;
int intW = (int) (faceRect.right - faceRect.left);
int intH = (int) (faceRect.bottom - faceRect.top);

if (faceRect.left > picWidth / 2) {
    intX = (int) (faceRect.right - (faceRect.right - picWidth / 2) * 2);
} else if (faceRect.left <= picWidth / 2) {
    intX = (int) (picWidth - faceRect.right);
}

Bitmap croppedPicture = Bitmap.createBitmap(picture, intX, intY, intW, intH);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
croppedPicture.compress(Bitmap.CompressFormat.JPEG, 100, stream);
byte[] byteArrayFromPicture = stream.toByteArray();
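Both if-clauses are special cases of mirroring the rectangle's x-range across the picture width. A hypothetical helper (not from the original post) that expresses this in one step:

// Hypothetical helper: mirror a rectangle horizontally across an image of width picWidth.
// A front-camera preview is mirrored, so the on-screen x-range [left, right]
// maps to [picWidth - right, picWidth - left] in the captured frame.
RectF mirrorHorizontally(RectF r, int picWidth) {
    return new RectF(picWidth - r.right, r.top, picWidth - r.left, r.bottom);
}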

Capturing Preview Frame in Portrait

Is there any way to acquire the preview frame directly in portrait inside the onPreviewFrame method?
I've tried:
camera.setDisplayOrientation(90);
but this seems to work only for the display. The doc reports:
Set the clockwise rotation of preview display in degrees. This affects the preview frames and the picture displayed after snapshot. This method is useful for portrait mode applications.
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
This method is not allowed to be called during preview.
I'm targeting API level >= 8, and I have a portrait-locked app. I want to avoid manually rotating the byte array of frame data.
Many thanks in advance.
Try this, it should work:
public void takeSnapPhoto() {
    camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int format = parameters.getPreviewFormat();
            // YUV formats require more conversion
            if (format == ImageFormat.NV21 || format == ImageFormat.YUY2 || format == ImageFormat.NV16) {
                int w = parameters.getPreviewSize().width;
                int h = parameters.getPreviewSize().height;
                // Get the YUV image
                YuvImage yuv_image = new YuvImage(data, format, w, h, null);
                // Convert YUV to JPEG
                Rect rect = new Rect(0, 0, w, h);
                ByteArrayOutputStream output_stream = new ByteArrayOutputStream();
                yuv_image.compressToJpeg(rect, 100, output_stream);
                byte[] byt = output_stream.toByteArray();
                FileOutputStream outStream = null;
                try {
                    // Write to SD card
                    File file = createFileInSDCard(FOLDER_PATH, "Image_" + System.currentTimeMillis() + ".jpg");
                    //Uri uriSavedImage = Uri.fromFile(file);
                    outStream = new FileOutputStream(file);
                    outStream.write(byt);
                    outStream.close();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    });
}
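Note that YuvImage.compressToJpeg does not rotate anything, so the saved JPEG is still in the sensor's landscape orientation. If a portrait image is required, one common (though not free) workaround is to rotate the decoded bitmap afterwards. A sketch, assuming a 90-degree rotation is what's needed for your device orientation:

// Sketch: decode the JPEG bytes produced above and rotate the bitmap into portrait.
Bitmap src = BitmapFactory.decodeByteArray(byt, 0, byt.length);
Matrix m = new Matrix(); // android.graphics.Matrix
m.postRotate(90);
Bitmap portrait = Bitmap.createBitmap(src, 0, 0, src.getWidth(), src.getHeight(), m, true);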

Android byte[] to image in Camera.onPreviewFrame

When trying to convert the byte[] from Camera.onPreviewFrame to a Bitmap using BitmapFactory.decodeByteArray, I get the error SkImageDecoder::Factory returned null.
Following is my code:
public void onPreviewFrame(byte[] data, Camera camera) {
    Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
}
This has been hard to find! But since API 8, there is a YuvImage class in android.graphics. It's not an Image descendant, so all you can do with it is save it to JPEG, but you can save it to a memory stream and then load it into a Bitmap if that's what you need.
import android.graphics.YuvImage;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        Camera.Parameters parameters = camera.getParameters();
        Size size = parameters.getPreviewSize();
        YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
                size.width, size.height, null);
        File file = new File(Environment.getExternalStorageDirectory()
                .getPath() + "/out.jpg");
        FileOutputStream filecon = new FileOutputStream(file);
        image.compressToJpeg(
                new Rect(0, 0, image.getWidth(), image.getHeight()), 90,
                filecon);
        filecon.close();
    } catch (FileNotFoundException e) {
        Toast toast = Toast.makeText(getBaseContext(), e.getMessage(), Toast.LENGTH_LONG);
        toast.show();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Since Android 3.0 you can use a TextureView and SurfaceTexture to display the camera, and then use mTextureView.getBitmap() to retrieve a friendly RGB preview frame.
A very skeletal example of how to do this is given in the TextureView docs. Note that you'll have to set your application or activity to be hardware accelerated by putting android:hardwareAccelerated="true" in the manifest.
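A minimal sketch of that approach, assuming a TextureView with the hypothetical id texture_view is already showing the camera preview:

TextureView mTextureView = (TextureView) findViewById(R.id.texture_view);
// getBitmap() returns an ARGB_8888 copy of the frame currently on screen,
// so no manual YUV conversion is needed.
Bitmap frame = mTextureView.getBitmap();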
I found the answer after a long time. Here it is...
Instead of using BitmapFactory, I used my own method to decode the byte[] data into a valid image format. To decode the image, you need to know which format the camera is using; for preview frames this is camera.getParameters().getPreviewFormat(). This returns a constant defined by ImageFormat. Once you know the format, use the appropriate decoder.
In my case, the byte[] data was in a YUV format, so I looked for YUV-to-BMP conversion and that solved my problem.
You can try this. The example below sends camera frames to a server:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        byte[] baos = convertYuvToJpeg(data, camera);
        StringBuilder dataBuilder = new StringBuilder();
        dataBuilder.append("data:image/jpeg;base64,").append(Base64.encodeToString(baos, Base64.DEFAULT));
        mSocket.emit("newFrame", dataBuilder.toString());
    } catch (Exception e) {
        Log.d("########", "ERROR");
    }
}

public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            camera.getParameters().getPreviewSize().width,
            camera.getParameters().getPreviewSize().height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // set quality
    // this line decreases the image quality
    image.compressToJpeg(new Rect(0, 0, camera.getParameters().getPreviewSize().width,
            camera.getParameters().getPreviewSize().height), quality, baos);
    return baos.toByteArray();
}
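One design note: camera.getParameters() is invoked three times per frame in convertYuvToJpeg, and it is not a cheap call since it queries the camera driver. A sketch of a variant that caches the preview size once (mPreviewSize is a hypothetical field, set during camera setup):

private Camera.Size mPreviewSize; // hypothetical field, assigned once after camera setup

public byte[] convertYuvToJpeg(byte[] data) {
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            mPreviewSize.width, mPreviewSize.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    image.compressToJpeg(new Rect(0, 0, mPreviewSize.width, mPreviewSize.height), 20, baos);
    return baos.toByteArray();
}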

How to capture preview image frames from Camera Application in Android Programming?

I am writing an app to capture the camera preview frames and convert them to a bitmap in Android. Here is my code:
Camera.PreviewCallback previewCallback = new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        try {
            BitmapFactory.Options opts = new BitmapFactory.Options();
            Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length); //, opts);
        } catch (Exception e) {
        }
    }
};

mCamera = Camera.open();
mCamera.setPreviewCallback(previewCallback);
After I start the preview, the callback gets called with data, but the bitmap is null.
What did I do wrong when converting the byte array to a Bitmap?
In the onPreviewFrame() function, you should check the image format first. Here is an NV21 example.
public void onPreviewFrame(byte[] data, Camera camera) {
    Parameters parameters = camera.getParameters();
    imageFormat = parameters.getPreviewFormat();
    if (imageFormat == ImageFormat.NV21) {
        Rect rect = new Rect(0, 0, PreviewSizeWidth, PreviewSizeHeight);
        YuvImage img = new YuvImage(data, ImageFormat.NV21, PreviewSizeWidth, PreviewSizeHeight, null);
        OutputStream outStream = null;
        File file = new File(NowPictureFileName);
        try {
            outStream = new FileOutputStream(file);
            img.compressToJpeg(rect, 100, outStream);
            outStream.flush();
            outStream.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
For another way to take pictures, check out this article: how to use camera in android
Have you tried decoding the preview frame data to RGB before you use BitmapFactory? The default format is YUV, which I'm not sure is compatible with BitmapFactory. Dave Manpearl's decode method can be found here:
Getting frames from Video Image in Android
Let me know if it works.
Cheers,
Paul
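For reference, the widely circulated NV21-to-ARGB decode loop (the kind of method Paul refers to) looks roughly like this. This is a sketch of the classic snippet, not necessarily the exact code behind the link:

// Classic NV21 (YUV420SP) -> ARGB_8888 conversion using fixed-point arithmetic.
static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        // uvp points at the interleaved VU plane row for this pair of Y rows
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & yuv420sp[yp]) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) { // one VU pair serves two horizontal pixels
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            // clamp to the 18-bit fixed-point range before packing
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

Usage inside onPreviewFrame, assuming an NV21 preview:

int[] rgb = new int[width * height];
decodeYUV420SP(rgb, data, width, height);
Bitmap bmp = Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888);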
