When I try to convert the byte[] from Camera.onPreviewFrame to a Bitmap using BitmapFactory.decodeByteArray, I get this error: SkImageDecoder::Factory returned null
Following is my code:
public void onPreviewFrame(byte[] data, Camera camera) {
    Bitmap bmp = BitmapFactory.decodeByteArray(data, 0, data.length);
}
This has been hard to find! But since API 8, there is a YuvImage class in android.graphics. It's not an Image descendant, so all you can do with it is save it to JPEG, but you could compress it into a memory stream and then decode that into a Bitmap if that's what you need.
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;
import android.hardware.Camera.Size;
import android.os.Environment;
import android.widget.Toast;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        Camera.Parameters parameters = camera.getParameters();
        Size size = parameters.getPreviewSize();
        YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
                size.width, size.height, null);
        File file = new File(Environment.getExternalStorageDirectory()
                .getPath() + "/out.jpg");
        FileOutputStream filecon = new FileOutputStream(file);
        image.compressToJpeg(
                new Rect(0, 0, image.getWidth(), image.getHeight()), 90,
                filecon);
        filecon.close();
    } catch (FileNotFoundException e) {
        Toast toast = Toast
                .makeText(getBaseContext(), e.getMessage(), Toast.LENGTH_LONG);
        toast.show();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
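As noted above, you can also skip the file and go through a memory stream. A minimal sketch, assuming the preview format is NV21 or YUY2 (the only formats YuvImage accepts):

import java.io.ByteArrayOutputStream;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Rect;
import android.graphics.YuvImage;
import android.hardware.Camera;

// Sketch: convert a preview frame to a Bitmap entirely in memory.
public static Bitmap previewToBitmap(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    Camera.Size size = parameters.getPreviewSize();
    YuvImage image = new YuvImage(data, parameters.getPreviewFormat(),
            size.width, size.height, null);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    // JPEG is lossy; 90 keeps the artifacts mostly invisible
    image.compressToJpeg(new Rect(0, 0, size.width, size.height), 90, out);
    byte[] jpeg = out.toByteArray();
    return BitmapFactory.decodeByteArray(jpeg, 0, jpeg.length);
}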
Since Android 3.0 you can use a TextureView and SurfaceTexture to display the camera, and then use mTextureView.getBitmap() to retrieve a friendly RGB preview frame.
A very skeletal example of how to do this is given in the TextureView docs. Note that you'll have to make your application or activity hardware accelerated by putting android:hardwareAccelerated="true" in the manifest.
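A condensed version of that docs example, with a getBitmap() call added in onSurfaceTextureUpdated(). This is a sketch only; real code needs camera error handling and lifecycle management:

import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.TextureView;
import java.io.IOException;

public class PreviewActivity extends Activity implements TextureView.SurfaceTextureListener {
    private Camera mCamera;
    private TextureView mTextureView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mTextureView = new TextureView(this);
        mTextureView.setSurfaceTextureListener(this);
        setContentView(mTextureView);
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        mCamera = Camera.open();
        try {
            mCamera.setPreviewTexture(surface);
            mCamera.startPreview();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
        // ignored: the Camera does all the work for us
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        mCamera.stopPreview();
        mCamera.release();
        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
        // Called once per new preview frame;
        // getBitmap() hands it over as a plain ARGB Bitmap.
        Bitmap frame = mTextureView.getBitmap();
        // ... process the frame here ...
    }
}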
I found the answer after a long time. Here it is...
Instead of using BitmapFactory, I used a custom method to decode this byte[] data into a valid image format. To do that, you need to know which picture format the camera is using for the preview, by calling camera.getParameters().getPreviewFormat(). This returns a constant defined in ImageFormat. Once you know the format, use the appropriate decoder.
In my case, the byte[] data was in a YUV format, so I looked for a YUV-to-RGB conversion and that solved my problem.
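For reference, that conversion can also be done by hand. This is the widely circulated decodeYUV420SP routine, included here as a sketch; it assumes NV21 (YUV420SP) preview data:

// Sketch: manual NV21 (YUV420SP) to ARGB conversion,
// based on the widely circulated decodeYUV420SP routine.
public static void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        // Chroma rows are shared by two luma rows (4:2:0 subsampling)
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) { // NV21 interleaves V then U
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            // Fixed-point YUV -> RGB with clamping
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
}

You can then wrap the result with Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888).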
You can try this. This example sends camera frames to a server:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    try {
        byte[] jpeg = convertYuvToJpeg(data, camera);
        StringBuilder dataBuilder = new StringBuilder();
        dataBuilder.append("data:image/jpeg;base64,")
                .append(Base64.encodeToString(jpeg, Base64.DEFAULT));
        mSocket.emit("newFrame", dataBuilder.toString());
    } catch (Exception e) {
        Log.d("########", "ERROR");
    }
}
};
public byte[] convertYuvToJpeg(byte[] data, Camera camera) {
    Camera.Size previewSize = camera.getParameters().getPreviewSize();
    YuvImage image = new YuvImage(data, ImageFormat.NV21,
            previewSize.width, previewSize.height, null);
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    int quality = 20; // lower quality = smaller payload per frame
    image.compressToJpeg(new Rect(0, 0, previewSize.width, previewSize.height),
            quality, baos);
    return baos.toByteArray();
}
Related
I am trying face detection and adding a mask (graphic overlay) using the Google Vision API. The problem is that I could not get the output from the camera after detecting and adding the mask. So far I have tried this solution from GitHub, https://github.com/googlesamples/android-vision/issues/24 . Based on this issue I added a custom detector class (Mobile Vision API - concatenate new detector object to continue frame processing) and added this in my detector class (How to create Bitmap from grayscaled byte buffer image?).
MyDetectorClass
class MyFaceDetector extends Detector<Face> {
    private Detector<Face> mDelegate;

    MyFaceDetector(Detector<Face> delegate) {
        mDelegate = delegate;
    }

    @Override
    public SparseArray<Face> detect(Frame frame) {
        // *** add your custom frame processing code here
        ByteBuffer byteBuffer = frame.getGrayscaleImageData();
        byte[] bytes = byteBuffer.array();
        int w = frame.getMetadata().getWidth();
        int h = frame.getMetadata().getHeight();
        YuvImage yuvimage = new YuvImage(bytes, ImageFormat.NV21, w, h, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvimage.compressToJpeg(new Rect(0, 0, w, h), 100, baos); // 100 is the quality of the generated JPEG
        byte[] jpegArray = baos.toByteArray();
        Bitmap bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.length);
        Log.e("got bitmap", "bitmap val " + bitmap);
        return mDelegate.detect(frame);
    }

    @Override
    public boolean isOperational() {
        return mDelegate.isOperational();
    }

    @Override
    public boolean setFocus(int id) {
        return mDelegate.setFocus(id);
    }
}
I am getting a rotated bitmap, and it is without the mask (graphic overlay) I have added. How can I get the camera output with the mask?
Thanks in advance.
The simple answer is: you can't.
Why? The Android camera delivers its frames as an NV21 ByteBuffer, and you must generate your mask from the landmark points in a separate Bitmap, then join the two.
Sorry, but that's how the Android Camera API works; nothing else can be done. You must do the composition manually.
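A minimal sketch of that manual join, assuming you already have the preview frame as a Bitmap and a transparent mask Bitmap positioned from the landmark points (all names here are illustrative):

import android.graphics.Bitmap;
import android.graphics.Canvas;

// Sketch: draw a (transparent) mask Bitmap on top of a camera-frame Bitmap.
public static Bitmap joinFrameAndMask(Bitmap frame, Bitmap mask, float maskLeft, float maskTop) {
    // Copy the frame into a mutable Bitmap so it can be drawn on
    Bitmap result = frame.copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(result);
    // Transparent pixels in the mask let the frame show through
    canvas.drawBitmap(mask, maskLeft, maskTop, null);
    return result;
}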
Also, I wouldn't grab the camera preview, convert it to a YuvImage, and then to a Bitmap. That process consumes a lot of resources and makes the preview very, very slow. Instead, I would use the method below, which is much faster and rotates your preview internally so you don't lose time doing it:
outputFrame = new Frame.Builder().setImageData(mPendingFrameData, mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.NV21)
.setId(mPendingFrameId)
.setTimestampMillis(mPendingTimeMillis)
.setRotation(mRotation)
.build();
mDetector.receiveFrame(outputFrame);
All the code can be found in CameraSource.java
My goal is to add an overlay on the camera preview that will find book edges. For that, I override the onPreviewFrame where I do the following:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    int width = parameters.getPreviewSize().width;
    int height = parameters.getPreviewSize().height;

    Mat mat = new Mat((int) (height * 1.5), width, CvType.CV_8UC1);
    mat.put(0, 0, data);

    byte[] bytes = new byte[(int) (height * width * 1.5)];
    mat.get(0, 0, bytes);

    if (!test) { // only do this once
        File pictureFile = getOutputMediaFile();
        try {
            FileOutputStream fos = new FileOutputStream(pictureFile);
            fos.write(bytes);
            fos.close();
            Uri picUri = Uri.fromFile(pictureFile);
            updateGallery(picUri);
            test = true;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
For now I simply want to take one of the preview frames and save it after the conversion to a Mat.
After spending countless hours getting the above to look right, the saved picture cannot be viewed on my test phone (LG Leon). I can't seem to find the issue. Am I mixing up the height and width because I'm shooting in portrait mode? I tried swapping them and it still doesn't work. Where is the problem?
The fastest method I managed to find is described HERE in my recently asked question. You can find the method to extract the image in the answer I wrote below that question. The thing is that the image you get through onPreviewFrame() is NV21; after receiving it, you may need to convert it to RGB (depending on what you want to achieve; this is also done in the answer I linked).
Seems quite inefficient but it works for me (for now):
// get the camera parameters
Camera.Parameters parameters = camera.getParameters();
int width = parameters.getPreviewSize().width;
int height = parameters.getPreviewSize().height;

// convert the byte[] to Bitmap through YuvImage;
// make sure the previewFormat is NV21 (I set it so somewhere before)
YuvImage yuv = new YuvImage(data, parameters.getPreviewFormat(), width, height, null);
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, width, height), 70, out);
Bitmap bmp = BitmapFactory.decodeByteArray(out.toByteArray(), 0, out.size());

// convert Bitmap to Mat; note the Bitmap.Config.ARGB_8888 copy that
// allows you to use other image processing methods and still save at the end
Mat orig = new Mat();
bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp, orig);

// here you do whatever you want with the Mat

// Mat to Bitmap to OutputStream to byte[] to File
Utils.matToBitmap(orig, bmp);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.JPEG, 70, stream);
byte[] bytes = stream.toByteArray();
File pictureFile = getOutputMediaFile();
try {
    FileOutputStream fos = new FileOutputStream(pictureFile);
    fos.write(bytes);
    fos.close();
} catch (IOException e) {
    e.printStackTrace();
}
Is there any way to acquire the preview frame directly in portrait inside onPreviewFrame method?
I've tried:
camera.setDisplayOrientation(90);
but this seems to affect only the display. The docs say:
Set the clockwise rotation of preview display in degrees. This affects
the preview frames and the picture displayed after snapshot. This
method is useful for portrait mode applications.
This does not affect the order of byte array passed in onPreviewFrame(byte[], Camera), JPEG pictures, or recorded videos.
This method is not allowed to be called during preview.
I'm targeting API level >= 8, and I have a portrait-locked app. I want to avoid manually rotating the byte array of data passed as the frame.
Many thanks in advance.
Try this; it should work:
public void takeSnapPhoto() {
    camera.setOneShotPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera camera) {
            Camera.Parameters parameters = camera.getParameters();
            int format = parameters.getPreviewFormat();
            // YUV formats require conversion before saving as JPEG
            if (format == ImageFormat.NV21 || format == ImageFormat.YUY2 || format == ImageFormat.NV16) {
                int w = parameters.getPreviewSize().width;
                int h = parameters.getPreviewSize().height;
                // Wrap the raw data in a YuvImage
                YuvImage yuvImage = new YuvImage(data, format, w, h, null);
                // Convert YUV to JPEG
                Rect rect = new Rect(0, 0, w, h);
                ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
                yuvImage.compressToJpeg(rect, 100, outputStream);
                byte[] bytes = outputStream.toByteArray();
                try {
                    // Write to SD card
                    File file = createFileInSDCard(FOLDER_PATH, "Image_" + System.currentTimeMillis() + ".jpg");
                    FileOutputStream outStream = new FileOutputStream(file);
                    outStream.write(bytes);
                    outStream.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    });
}
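Note that this saves the frame in its native landscape orientation. If you really do need the buffer itself in portrait, the only option the API leaves is rotating the NV21 data by hand; below is a sketch of the usual approach (rotateNV21Cw90 is an illustrative name, not an API call):

// Sketch: rotate an NV21 buffer 90 degrees clockwise.
public static byte[] rotateNV21Cw90(byte[] input, int width, int height) {
    byte[] output = new byte[input.length];
    final int frameSize = width * height;
    int i = 0;
    // Rotate the Y (luma) plane: read each input column bottom-up
    for (int x = 0; x < width; x++) {
        for (int y = height - 1; y >= 0; y--) {
            output[i++] = input[y * width + x];
        }
    }
    // Rotate the interleaved VU (chroma) plane, keeping V/U pairs together
    i = frameSize;
    for (int x = 0; x < width; x += 2) {
        for (int y = height / 2 - 1; y >= 0; y--) {
            output[i++] = input[frameSize + y * width + x];
            output[i++] = input[frameSize + y * width + x + 1];
        }
    }
    return output;
}

The rotated buffer describes an image that is height wide and width tall, so remember to swap the dimensions when you wrap it in a YuvImage afterwards.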
On Android, does anyone have any idea what trick Snapchat pulls to get such a high-fps camera preview? I have tried various methods:
using a textureview instead of surface view
forcing hardware acceleration
using lower resolutions
using different preview formats (YV12, NV21 drops frames)
changing focusing mode
None has gotten me anywhere close to the constant 30 fps (or maybe even more) that Snapchat seems to achieve. I can just about match the fps of the stock Google camera app, but that isn't great either, and mine displays at a much lower resolution.
EDIT:
The method I use is the same one used by the official Android video recording app. Its preview has the same image quality and is locked to 30 fps.
I believe they use the Android NDK; you can find more information on the Android developer site.
Pure C/C++ is faster than Java code in performance-critical tasks such as image and video processing.
You can also try to improve performance by compiling the application with another compiler, such as the Intel compiler.
I am writing an app to capture the camera preview frames and convert them to a Bitmap in Android. Here is my code:
Camera.PreviewCallback previewCallback = new Camera.PreviewCallback() {
    public void onPreviewFrame(byte[] data, Camera camera) {
        try {
            BitmapFactory.Options opts = new BitmapFactory.Options();
            Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length); //, opts);
        } catch (Exception e) {
        }
    }
};
mCamera = Camera.open();
mCamera.setPreviewCallback(previewCallback);
After I start the preview, the callback gets called with data, but the bitmap is null.
What did I do wrong when converting the byte array to a Bitmap?
In the onPreviewFrame() function, you should check the image format first. Here is an NV21 example:
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Parameters parameters = camera.getParameters();
    imageFormat = parameters.getPreviewFormat();
    if (imageFormat == ImageFormat.NV21) {
        Rect rect = new Rect(0, 0, PreviewSizeWidth, PreviewSizeHeight);
        YuvImage img = new YuvImage(data, ImageFormat.NV21, PreviewSizeWidth, PreviewSizeHeight, null);
        File file = new File(NowPictureFileName);
        try {
            OutputStream outStream = new FileOutputStream(file);
            img.compressToJpeg(rect, 100, outStream);
            outStream.flush();
            outStream.close();
        } catch (IOException e) { // also covers FileNotFoundException
            e.printStackTrace();
        }
    }
}
For another way to take pictures, check out this article: how to use camera in android
Have you tried decoding the preview frame data to RGB before handing it to BitmapFactory? The default preview format is YUV, and BitmapFactory can only decode compressed formats such as JPEG or PNG, not raw YUV. Dave Manpearl's decode method can be found here:
Getting frames from Video Image in Android
Let me know if it works.
Cheers,
Paul