I have the following function, which receives an image as a void*. How can I flip it upside down without using external libraries like OpenCV? The image width and height are known.
Note: this function will be called at least 30 times a second on Android, so it needs to be efficient.
ErrorCodes PushVideoFrame(void *bytes, int width, int height) {
    if (clientPtr == nullptr) {
        return ErrorCodes::DEVICE_CONNECTION;
    }
    char* data = static_cast<char*>(bytes);

    //////// CODE TO FLIP IMAGE /////////////

    clientPtr->PushVideoFrameAsync(data, width * height * 4);
}
You can swap rows in place using a temporary row buffer. Note that the row length must be in bytes, so for the 4-bytes-per-pixel frames in the question that is width * 4:
#include <cstdint>
#include <cstring>
#include <vector>

// rowBytes is the size of one row in bytes (width * 4 for RGBA data).
void flipVertically(uint8_t *data, int rowBytes, int height) {
    std::vector<uint8_t> temp(rowBytes);
    for (int i = 0; i < height / 2; i++) {
        uint8_t *row1 = data + i * rowBytes;
        uint8_t *row2 = data + (height - i - 1) * rowBytes;
        // swap(row1, row2) via the temp buffer
        memcpy(temp.data(), row1, rowBytes);
        memcpy(row1, row2, rowBytes);
        memcpy(row2, temp.data(), rowBytes);
    }
}
Can someone suggest a fast method or a library for a color splash effect? For example, I select a color and the photo gets desaturated for all colors except the one I have picked.
I have tried a pixel-by-pixel color check and then replacing the color, but it is too slow for big images.
int width = originalImage.getWidth();
int height = originalImage.getHeight();
int[] pixels = new int[width * height];
originalImage.getPixels(pixels, 0, width, 0, 0, width, height);
for (int x = 0; x < pixels.length; ++x) {
pixels[x] = Distance(pixels[x], fromColor) < 4 ? targetColor : pixels[x];
}
Bitmap newImage = Bitmap.createBitmap(width, height, originalImage.getConfig());
newImage.setPixels(pixels, 0, width, 0, 0, width, height);
public int Distance(int a, int b) {
return Math.abs(Color.red(a) - Color.red(b)) + Math.abs(Color.green(a) -
Color.green(b)) + Math.abs(Color.blue(a) - Color.blue(b));
}
EDIT:
Here are the original and the processed images, the color I am keeping is #ff9350:
You can try to go over the possible RGB color values and prepare a HashSet containing only those color values that should not be desaturated. There should not be many such values.
After that you will be able to check quickly whether a color should be desaturated or not. You get a rather long precalculation for each newly given color, but the code that converts the photo becomes faster.
It can be implemented like this (I haven't tested it under Android; it's just Java-like pseudo-code to show the idea):
// precalculation
Set<Color> keptColors = new HashSet<Color>();
final int MAX_DISTANCE = 4;
for (int r=red(fromColor)-MAX_DISTANCE; r<=red(fromColor)+MAX_DISTANCE; r++) {
for (int g=green(fromColor)-MAX_DISTANCE; g<=green(fromColor)+MAX_DISTANCE; g++) {
for (int b=blue(fromColor)-MAX_DISTANCE; b<=blue(fromColor)+MAX_DISTANCE; b++) {
if (Distance(rgb(r,g,b), fromColor) < MAX_DISTANCE) {
keptColors.add(rgb(r,g,b));
}
}
}
}
...
// in your photo processing code
pixels[x] = keptColors.contains(pixels[x]) ? pixels[x] : targetColor;
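Since getPixels() returns packed ARGB ints rather than Color objects, in practice you would key the set on Integer values. A minimal sketch of that variant (clamping is added so the ranges stay within 0-255, and the Distance() helper from the question is reused):

// precalculation: collect packed ARGB ints that should keep their color
Set<Integer> keptColors = new HashSet<>();
final int MAX_DISTANCE = 4;
int fr = Color.red(fromColor), fg = Color.green(fromColor), fb = Color.blue(fromColor);
for (int r = Math.max(0, fr - MAX_DISTANCE); r <= Math.min(255, fr + MAX_DISTANCE); r++) {
    for (int g = Math.max(0, fg - MAX_DISTANCE); g <= Math.min(255, fg + MAX_DISTANCE); g++) {
        for (int b = Math.max(0, fb - MAX_DISTANCE); b <= Math.min(255, fb + MAX_DISTANCE); b++) {
            int candidate = Color.rgb(r, g, b);
            if (Distance(candidate, fromColor) < MAX_DISTANCE) {
                keptColors.add(candidate); // autoboxed to Integer
            }
        }
    }
}

// per-pixel pass: one hash lookup per pixel
for (int x = 0; x < pixels.length; ++x) {
    int rgb = pixels[x] | 0xFF000000; // force alpha so the value matches Color.rgb() entries
    if (!keptColors.contains(rgb)) {
        pixels[x] = targetColor;
    }
}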
You can use the inRange function in OpenCV to isolate a color. For example, in HSV space:
Scalar lower(0,100,100);
Scalar upper(10,255,255);
Core.inRange(hsv, lower, upper, segmentedImage);
Check out this answer.
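Putting that together for the color-splash effect, a rough sketch using OpenCV's Java bindings could look like the following. The HSV bounds are placeholders you would derive from the picked color (#ff9350 in the question), and Utils, Imgproc, Core and Scalar come from the OpenCV Android SDK:

Mat rgba = new Mat();
Utils.bitmapToMat(originalImage, rgba);               // Bitmap -> RGBA Mat

// Convert to HSV so a hue range can be selected.
Mat rgb = new Mat();
Imgproc.cvtColor(rgba, rgb, Imgproc.COLOR_RGBA2RGB);
Mat hsv = new Mat();
Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);

// Mask of pixels near the picked hue (bounds here are illustrative).
Mat mask = new Mat();
Core.inRange(hsv, new Scalar(10, 100, 100), new Scalar(20, 255, 255), mask);

// Desaturate everything, then copy the original pixels back where the mask matches.
Mat gray = new Mat();
Imgproc.cvtColor(rgba, gray, Imgproc.COLOR_RGBA2GRAY);
Mat result = new Mat();
Imgproc.cvtColor(gray, result, Imgproc.COLOR_GRAY2RGBA);
rgba.copyTo(result, mask);

Bitmap newImage = Bitmap.createBitmap(rgba.cols(), rgba.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(result, newImage);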
EDIT: Solved! See below.
I need to crop my image (YUV_422_888 color space), which I obtain from the onImageAvailable listener of Camera2. I don't want or need to convert it to a Bitmap, as that hurts performance a lot, and I'm actually interested in the luma rather than the RGB information (the luma is contained in Plane 0 of the Image).
I came up with the following solution:
1. Get the Y' information contained in Plane 0 of the Image object made available by Camera2 in the listener.
2. Convert the Y' plane into a byte[] array in.
3. Convert the byte[] array to a 2D byte[][] array in order to crop.
4. Use some for loops to crop at the desired left, right, top and bottom coordinates.
5. Fold the 2D byte[][] array back into a 1D byte[] array out, containing the cropped luma Y' information.
Point 4 unfortunately yields a corrupt image. What am I doing wrong?
In the onImageAvailableListener of Camera2 (please note that although I am computing a bitmap, it's only to see what's happening, as I'm not interested in the Bitmap/RGB data):
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer(); // Grab just the Y' Plane.
buffer.rewind();
byte[] data = new byte[buffer.capacity()];
buffer.get(data);
Bitmap bitmap = cropByteArray(data, image.getWidth(), image.getHeight()); // Just for preview/sanity check purposes. The bitmap is **corrupt**.
runOnUiThread(new bitmapRunnable(bitmap) {
@Override
public void run() {
image_view_preview.setImageBitmap(this.bitmap);
}
});
The cropByteArray function needs fixing. It outputs a bitmap that is corrupt, and should output an out byte[] array similar to in, but containing only the cropped area:
public Bitmap cropByteArray(byte[] in, int inw, int inh) {
int l = 100; // left crop start
int r = 400; // right crop end
int t = 400; // top crop start
int b = 700; // bottom crop end
int outw = r-l;
int outh = b-t;
byte[][] in2d = new byte[inw][inh]; // input width and height are 1080 x 1920.
byte[] out = new byte[outw*outh];
int[] pixels = new int[outw*outh];
int i = 0;
for(int col = 0; col < inw; col++) {
for(int row = 0; row < inh; row++) {
in2d[col][row] = in[i++];
}
}
i = 0;
for(int col = l; col < r; col++) {
for(int row = t; row < b; row++) {
//out[i++] = in2d[col][row]; // out is the desired output of the function, but for now we output a bitmap instead
int grey = in2d[col][row] & 0xff;
pixels[i++] = 0xFF000000 | (grey * 0x00010101);
}
}
return Bitmap.createBitmap(pixels, outw, outh, Bitmap.Config.ARGB_8888);
}
EDIT: Solved thanks to the suggestion by Eddy Talvala. The following code will yield the Y' (luma plane 0 from the ImageReader) cropped to the desired coordinates. The cropped data is in the out byte array. The bitmap is generated just for confirmation. I am also attaching the handy YUVtoGrayscale() function below.
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int stride = planes[0].getRowStride();
buffer.rewind();
byte[] Y = new byte[buffer.capacity()];
buffer.get(Y);
int t=200; int l=600;
int out_h = 600; int out_w = 600;
byte[] out = new byte[out_w*out_h];
int firstRowOffset = stride * t + l;
for (int row = 0; row < out_h; row++) {
buffer.position(firstRowOffset + row * stride);
buffer.get(out, row * out_w, out_w);
}
Bitmap bitmap = YUVtoGrayscale(out, out_w, out_h);
Here goes the YUVtoGrayscale().
public Bitmap YUVtoGrayscale(byte[] yuv, int width, int height) {
int[] pixels = new int[yuv.length];
for (int i = 0; i < yuv.length; i++) {
int grey = yuv[i] & 0xff;
pixels[i] = 0xFF000000 | (grey * 0x00010101);
}
return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);
}
There are some remaining issues. I am using the front camera, and although the preview orientation is correct inside the TextureView, the image returned by the ImageReader is rotated clockwise and flipped vertically on my device, which has a sensor orientation of 270 degrees (a person lying on their right cheek in the preview comes out lying on what appears to be the left cheek because of the vertical flip). Is there an accepted way to get both the preview and the saved frames in the same, correct orientation with Camera2?
Cheers.
It'd be helpful if you described how the image is corrupt - do you see a valid image but it's distorted, or is it just total garbage, or just total black?
But I'm guessing you're not paying attention to the row stride of the Y plane (https://developer.android.com/reference/android/media/Image.Plane.html#getRowStride() ), which would typically result in an image that's skewed (vertical lines become angled lines).
When accessing the Y plane, the byte index of pixel (x,y) is:
y * rowStride + x
not
y * width + x
because row stride may be larger than width.
I'd also avoid copying so much; you really don't need the 2D array, and a large byte[] for the image also wastes memory.
You can instead seek() to the start of each output row, and then only read the bytes you need to copy straight into your destination byte[] out with ByteBuffer.get(byte[], offset, length).
That'd look something like
int stride = planes[0].getRowStride();
ByteBuffer img = planes[0].getBuffer();
int firstRowOffset = stride * t + l;
for (int row = 0; row < outh; row++) {
img.position(firstRowOffset + row * stride);
img.get(out, row * outw, outw);
}
I am trying to convert an Image received from an ImageReader using the Camera2 API to an OpenCV matrix and display it on screen using CameraBridgeViewBase, more specifically the function deliverAndDrawFrame. The ImageFormat for the reader is YUV_420_888, which, as far as I understand, has a Y plane with grayscale values for each pixel, and a second plane with interleaved U/V values, one pair for every 2x2 block of pixels. However, when I try to display this image it appears as if it is repeating and rotated 90 degrees. The code below is supposed to put the YUV data into an OpenCV matrix (just grayscale for now, not rgba):
/**
* Takes an {@link Image} in the {@link ImageFormat#YUV_420_888} and puts it into a provided {@link Mat} in rgba format.
*
* @param yuvImage {@link Image} in the {@link ImageFormat#YUV_420_888} format.
*/
public static void yuv420888imageToRgbaMat(final Image yuvImage, final Mat rgbaMat) {
final Image.Plane
Yp = yuvImage.getPlanes()[0],
UandVp = yuvImage.getPlanes()[1];
final ByteBuffer
Ybb = Yp .getBuffer(),
UandVbb = UandVp.getBuffer();
Ybb .get(mYdata , 0, 480*640 );
UandVbb.get(mUandVData, 0, 480*640 / 2 - 8);
for (int i = 0; i < 640*480; i++) {
for (int j = 0; j < 4; j++) {
mRawRGBAFrameData[i + 640*480*j] = mYdata[i];
}
mRawRGBAFrameData[i*4 ] = mYdata[i];
mRawRGBAFrameData[i*4+1] = mYdata[i];
mRawRGBAFrameData[i*4+2] = mYdata[i];
mRawRGBAFrameData[i*4+3] = -1;
}
}
Here is my code for the OpenCV frame:
private class CameraFrame implements CvCameraViewFrame {
private Mat mRgba;
@Override
public Mat gray() {
return null;
}
@Override
public Mat rgba() {
mRgbaMat.put(0, 0, mRawRGBAFrameData);
return mRgba;
}
public CameraFrame(final Mat rgba) {
super();
mRgba = rgba;
}
}
The code for receiving and drawing the frame:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
final Image yuvImage = reader.acquireLatestImage();
yuv420888imageToRgbaMat(yuvImage, mRgbaMat);
deliverAndDrawFrame(mFrame);
yuvImage.close();
}
};
And, this is the code for making the image reader:
mRgbaMat = new Mat(mFrameHeight, mFrameWidth, CvType.CV_8UC4);
mFrame = new CameraFrame(mRgbaMat);
mImageReader = ImageReader.newInstance(mFrameWidth, mFrameHeight, ImageFormat.YUV_420_888, 1);
mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);
AllocateCache();
This is the initialization of the arrays:
protected static byte[] mRawRGBAFrameData = new byte[640*480*4], mYdata = new byte[640*480], mUandVData = new byte[640*480 / 2];
Notes: mFrameWidth is 480 and mFrameHeight is 640. One weird thing is that the height and width for ImageReader and the Image received from it have inverted dimensions.
Here is the image with the code above: https://i.stack.imgur.com/lcdzf.png
Here is the image when the following is used instead in yuv420888imageToRgbaMat: https://i.stack.imgur.com/T2MOI.png
for (int i = 0; i < 640*480; i++) {
mRawRGBAFrameData[i] = mYdata[i];
}
We can see that the data is repeating in the Y frame, and for some reason this gives an actual good-looking image.
For anyone having the same problem of trying to use OpenCV with the Camera 2 API, I have come up with a solution. The first thing I discovered is that there is padding in the ByteBuffer that the ImageReader supplies, which can distort the output if you do not account for it. Another thing I chose to do was to create my own SurfaceView and draw to it using a Bitmap instead of using CameraViewBase, and so far it has worked out great. OpenCV has a function Utils.matToBitmap that takes a BGR matrix and converts it to an Android Bitmap, so that has been useful. I obtain the BGR matrix by putting the information from the first two Image.Planes supplied by the ImageReader into a one-channel OpenCV matrix formatted as YUV 420, and using Imgproc.cvtColor with Imgproc.COLOR_YUV420p2BGR. The important thing to know is that the Y plane of the image has full pixels, but the second UV plane has interleaved pixels that map one to four Y pixels, so the total length of the UV plane is half that of the Y plane. See here. Anyways, here is some code:
Initialization of matrices
m_BGRMat = new Mat(Constants.VISION_IMAGE_HEIGHT, Constants.VISION_IMAGE_WIDTH, CvType.CV_8UC3);
m_Yuv420FrameMat = new Mat(Constants.VISION_IMAGE_HEIGHT * 3 / 2, Constants.VISION_IMAGE_WIDTH, CvType.CV_8UC1);
Every frame:
// Convert image to YUV 420 matrix
ImageUtils.imageToMat(image, m_Yuv420FrameMat, m_RawFrameData, m_RawFrameRowData);
// Convert YUV matrix to BGR matrix
Imgproc.cvtColor(m_Yuv420FrameMat, m_BGRMat, Imgproc.COLOR_YUV420p2BGR);
// Flip width and height then mirror vertically
Core.transpose(m_BGRMat, m_BGRMat);
Core.flip(m_BGRMat, m_BGRMat, 0);
// Draw to Surface View
m_PreviewView.drawImageMat(m_BGRMat);
Here is the conversion to YUV 420 matrix:
/**
* Takes an Android {@link Image} in the {@link ImageFormat#YUV_420_888} format and returns an OpenCV {@link Mat}.
*
* @param image {@link Image} in the {@link ImageFormat#YUV_420_888} format
*/
public static void imageToMat(final Image image, final Mat mat, byte[] data, byte[] rowData) {
ByteBuffer buffer;
int rowStride, pixelStride, width = image.getWidth(), height = image.getHeight(), offset = 0;
Image.Plane[] planes = image.getPlanes();
if (data == null || data.length != width * height) data = new byte[width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
if (rowData == null || rowData.length != planes[0].getRowStride()) rowData = new byte[planes[0].getRowStride()];
for (int i = 0; i < planes.length; i++) {
buffer = planes[i].getBuffer();
rowStride = planes[i].getRowStride();
pixelStride = planes[i].getPixelStride();
int
w = (i == 0) ? width : width / 2,
h = (i == 0) ? height : height / 2;
for (int row = 0; row < h; row++) {
int bytesPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
if (pixelStride == bytesPerPixel) {
int length = w * bytesPerPixel;
buffer.get(data, offset, length);
// Advance buffer the remainder of the row stride, unless on the last row.
// Otherwise, this will throw an IllegalArgumentException because the buffer
// doesn't include the last padding.
if (h - row != 1)
buffer.position(buffer.position() + rowStride - length);
offset += length;
} else {
// On the last row only read the width of the image minus the pixel stride
// plus one. Otherwise, this will throw a BufferUnderflowException because the
// buffer doesn't include the last padding.
if (h - row == 1)
buffer.get(rowData, 0, width - pixelStride + 1);
else
buffer.get(rowData, 0, rowStride);
for (int col = 0; col < w; col++)
data[offset++] = rowData[col * pixelStride];
}
}
}
mat.put(0, 0, data);
}
And finally, drawing
/**
* Given an {@link Mat} that represents a BGR image, draw it on the surface canvas.
* use the OpenCV helper function {@link Utils#matToBitmap(Mat, Bitmap)} to create a {@link Bitmap}.
*
* @param bgrMat BGR frame {@link Mat}
*/
public void drawImageMat(final Mat bgrMat) {
if (m_HolderReady) {
// Create bitmap from BGR matrix
Utils.matToBitmap(bgrMat, m_Bitmap);
// Obtain the canvas and draw the bitmap on top of it
final SurfaceHolder holder = getHolder();
final Canvas canvas = holder.lockCanvas();
canvas.drawBitmap(m_Bitmap, null, new Rect(0, 0, m_HolderWidth, m_HolderHeight), null);
holder.unlockCanvasAndPost(canvas);
}
}
This way works, but I imagine the best way to do it is to set up an OpenGL rendering context and write some sort of simple shader to display the matrix.
I am making a music application in Android. The music list comes from the server side. I don't know how to show a waveform of the audio in Android, like on the SoundCloud website. I have attached an image below.
Perhaps you can implement this feature without libraries, if all you want is a visualisation of the audio sample.
For example:
public class PlayerVisualizerView extends View {
/**
* constant value for Height of the bar
*/
public static final int VISUALIZER_HEIGHT = 28;
/**
* bytes array converted from file.
*/
private byte[] bytes;
/**
* Percentage of audio sample scale
* Should be updated dynamically while the audioPlayer is playing
*/
private float denseness;
/**
* Canvas painting for sample scale, filling played part of audio sample
*/
private Paint playedStatePainting = new Paint();
/**
* Canvas painting for sample scale, filling not played part of audio sample
*/
private Paint notPlayedStatePainting = new Paint();
private int width;
private int height;
public PlayerVisualizerView(Context context) {
super(context);
init();
}
public PlayerVisualizerView(Context context, @Nullable AttributeSet attrs) {
super(context, attrs);
init();
}
private void init() {
bytes = null;
playedStatePainting.setStrokeWidth(1f);
playedStatePainting.setAntiAlias(true);
playedStatePainting.setColor(ContextCompat.getColor(getContext(), R.color.gray));
notPlayedStatePainting.setStrokeWidth(1f);
notPlayedStatePainting.setAntiAlias(true);
notPlayedStatePainting.setColor(ContextCompat.getColor(getContext(), R.color.colorAccent));
}
/**
* update and redraw Visualizer view
*/
public void updateVisualizer(byte[] bytes) {
this.bytes = bytes;
invalidate();
}
/**
* Update player percent. 0 - file not played, 1 - full played
*
* @param percent
*/
public void updatePlayerPercent(float percent) {
denseness = (int) Math.ceil(width * percent);
if (denseness < 0) {
denseness = 0;
} else if (denseness > width) {
denseness = width;
}
invalidate();
}
@Override
protected void onLayout(boolean changed, int left, int top, int right, int bottom) {
super.onLayout(changed, left, top, right, bottom);
width = getMeasuredWidth();
height = getMeasuredHeight();
}
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
if (bytes == null || width == 0) {
return;
}
float totalBarsCount = width / dp(3);
if (totalBarsCount <= 0.1f) {
return;
}
byte value;
int samplesCount = (bytes.length * 8 / 5);
float samplesPerBar = samplesCount / totalBarsCount;
float barCounter = 0;
int nextBarNum = 0;
int y = (height - dp(VISUALIZER_HEIGHT)) / 2;
int barNum = 0;
int lastBarNum;
int drawBarCount;
for (int a = 0; a < samplesCount; a++) {
if (a != nextBarNum) {
continue;
}
drawBarCount = 0;
lastBarNum = nextBarNum;
while (lastBarNum == nextBarNum) {
barCounter += samplesPerBar;
nextBarNum = (int) barCounter;
drawBarCount++;
}
int bitPointer = a * 5;
int byteNum = bitPointer / Byte.SIZE;
int byteBitOffset = bitPointer - byteNum * Byte.SIZE;
int currentByteCount = Byte.SIZE - byteBitOffset;
int nextByteRest = 5 - currentByteCount;
value = (byte) ((bytes[byteNum] >> byteBitOffset) & ((2 << (Math.min(5, currentByteCount) - 1)) - 1));
if (nextByteRest > 0) {
value <<= nextByteRest;
value |= bytes[byteNum + 1] & ((2 << (nextByteRest - 1)) - 1);
}
for (int b = 0; b < drawBarCount; b++) {
int x = barNum * dp(3);
float left = x;
float top = y + dp(VISUALIZER_HEIGHT - Math.max(1, VISUALIZER_HEIGHT * value / 31.0f));
float right = x + dp(2);
float bottom = y + dp(VISUALIZER_HEIGHT);
if (x < denseness && x + dp(2) < denseness) {
canvas.drawRect(left, top, right, bottom, notPlayedStatePainting);
} else {
canvas.drawRect(left, top, right, bottom, playedStatePainting);
if (x < denseness) {
canvas.drawRect(left, top, right, bottom, notPlayedStatePainting);
}
}
barNum++;
}
}
}
public int dp(float value) {
if (value == 0) {
return 0;
}
return (int) Math.ceil(getContext().getResources().getDisplayMetrics().density * value);
}
}
Sorry, the code has only a few comments, but it is a working visualizer. You can attach it to any player you want.
How to use it: add this view to your XML layout, then update the visualizer state with these methods:
public void updateVisualizer(byte[] bytes) {
playerVisualizerView.updateVisualizer(bytes);
}
public void updatePlayerProgress(float percent) {
playerVisualizerView.updatePlayerPercent(percent);
}
In updateVisualizer you pass the byte array with your audio sample, and in updatePlayerProgress you dynamically pass the percentage while the audio sample is playing.
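For example, one simple way to drive updatePlayerProgress while a track plays is to poll a MediaPlayer from a Handler; mediaPlayer and playerVisualizerView below are placeholder fields for your own player and view:

private final Handler progressHandler = new Handler(Looper.getMainLooper());
private final Runnable progressRunnable = new Runnable() {
    @Override
    public void run() {
        if (mediaPlayer != null && mediaPlayer.isPlaying()) {
            float percent = mediaPlayer.getCurrentPosition() / (float) mediaPlayer.getDuration();
            playerVisualizerView.updatePlayerPercent(percent);
        }
        // keep polling roughly 20 times per second; remove callbacks in onStop()/onDestroy()
        progressHandler.postDelayed(this, 50);
    }
};

// call this right after mediaPlayer.start()
progressHandler.post(progressRunnable);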
For converting a file to bytes you can use this helper method:
public static byte[] fileToBytes(File file) {
int size = (int) file.length();
byte[] bytes = new byte[size];
try {
BufferedInputStream buf = new BufferedInputStream(new FileInputStream(file));
buf.read(bytes, 0, bytes.length);
buf.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return bytes;
}
And, for example (very briefly), here is how it looks with the Mosby library:
public class AudioRecorderPresenter extends MvpBasePresenter<AudioRecorderView> {
public void onStopRecord() {
// stopped and released MediaPlayer
// ...
// some preparation and saved audio file in audioFileName variable.
getView().updateVisualizer(FileUtils.fileToBytes(new File(audioFileName)));
}
}
UPD: I created a library to address this case: github.com/scrobot/SoundWaveView. It is still a work in progress, but I will complete it soon.
I believe Scrobot's answer does not work. It assumes the input audio is in a certain (quite peculiar) encoding (single-channel/mono linear PCM with 5-bit depth), and the algorithm that calculates amplitudes from the waveform is probably flawed. If you use that algorithm with any commonly used audio file format, you will get nothing but random data.
The truth is: it's just a bit more complicated than that.
Here's what needs to be done to achieve the OP's goal:
Use Android's MediaExtractor to read the input audio file (various formats/encodings are supported).
Use Android's MediaCodec to decode the input audio encoding to linear PCM with a certain bit depth (usually 16 bit).
Only after this step do you have a byte array which you can read linearly and calculate amplitudes from.
Apply a loudness measure to the PCM-encoded data. There are many of them, some more complicated (e.g. LUFS/LKFS), some more basic (RMS). Let's take RMS (Root Mean Square) for example:
1. Determine the number of samples per bar.
2. Read all the samples for a single bar. Usually there are 2 channels, so for each sample you will get 2 short ints (16 bit) for PCM-16.
3. For each sample calculate the mean of all channels.
4. Maybe you will want to normalize the value in some way, e.g. to get float values between -1 and 1 you can divide by (1 << 15) (i.e. 32768, half of the 16-bit range).
5. Square each sample (hence the "S" in RMS).
6. Calculate the mean of all the samples for a bar (hence the "M" in RMS).
7. Calculate the square root of the resulting value (hence the "R" in RMS).
8. Now you have a value you can base the height of the bar on.
Repeat steps 2-8 for all the bars.
Implementing this is quite an involved task. I could not find any library that already provides this whole process, but at least Android's media API provides what is needed for reading audio files in any of the common formats.
Note: RMS is not considered a very accurate loudness measure, but it seems to yield results that are at least somewhat related to what you actually hear. For many applications it should be good enough.
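A minimal sketch of steps 2-8 above, assuming the audio has already been decoded by MediaCodec into interleaved 16-bit little-endian PCM (the method name is illustrative; java.nio.ByteBuffer/ShortBuffer are used for sample access):

// Computes one RMS value per bar from interleaved 16-bit little-endian PCM.
public static float[] computeRmsBars(byte[] pcm, int channels, int barCount) {
    ShortBuffer samples = ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer();
    int totalFrames = samples.remaining() / channels;      // one frame = one sample per channel
    int framesPerBar = Math.max(1, totalFrames / barCount);
    float[] bars = new float[barCount];
    for (int bar = 0; bar < barCount; bar++) {
        double sumOfSquares = 0;
        int start = bar * framesPerBar;
        int end = Math.min(totalFrames, start + framesPerBar);
        for (int frame = start; frame < end; frame++) {
            double mean = 0;
            for (int ch = 0; ch < channels; ch++) {
                mean += samples.get(frame * channels + ch);
            }
            mean /= channels;                               // average the channels
            double normalized = mean / (1 << 15);           // scale to roughly [-1, 1]
            sumOfSquares += normalized * normalized;        // the "S" in RMS
        }
        int count = Math.max(1, end - start);
        bars[bar] = (float) Math.sqrt(sumOfSquares / count); // the "M" and the "R" in RMS
    }
    return bars;
}

Each resulting value (between 0 and roughly 1) can then be scaled to the bar height in the view.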
JETPACK COMPOSE
AudioWaveform is a lightweight Jetpack Compose library that draws a waveform of audio.
XML
WaveformSeekBar is an android library that draws a waveform from a local audio file, resource, and URL using android.view.View (XML approach).
AUDIO PROCESSING
If you're looking for a fast audio processing library, you could use the existing Amplituda library. Amplituda also has caching and compressing features out of the box.
I'm trying to implement camera preview image data processing using camera2 api as proposed here: Camera preview image data processing with Android L and Camera2 API.
I successfully receive callbacks using onImageAvailableListener, but for further processing I need to obtain a bitmap from the YUV_420_888 android.media.Image. I searched for similar questions, but none of them helped.
Could you please suggest how to convert an android.media.Image (YUV_420_888) to a Bitmap, or is there maybe a better way of listening for preview frames?
You can do this using the built-in Renderscript intrinsic, ScriptIntrinsicYuvToRGB. Code taken from Camera2 api Imageformat.yuv_420_888 results on rotated image:
@Override
public void onImageAvailable(ImageReader reader)
{
// Get the YUV data
final Image image = reader.acquireLatestImage();
final ByteBuffer yuvBytes = this.imageToByteBuffer(image);
// Convert YUV to RGB
final RenderScript rs = RenderScript.create(this.mContext);
final Bitmap bitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
final Allocation allocationRgb = Allocation.createFromBitmap(rs, bitmap);
final Allocation allocationYuv = Allocation.createSized(rs, Element.U8(rs), yuvBytes.array().length);
allocationYuv.copyFrom(yuvBytes.array());
ScriptIntrinsicYuvToRGB scriptYuvToRgb = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
scriptYuvToRgb.setInput(allocationYuv);
scriptYuvToRgb.forEach(allocationRgb);
allocationRgb.copyTo(bitmap);
// Release
bitmap.recycle();
allocationYuv.destroy();
allocationRgb.destroy();
rs.destroy();
image.close();
}
private ByteBuffer imageToByteBuffer(final Image image)
{
final Rect crop = image.getCropRect();
final int width = crop.width();
final int height = crop.height();
final Image.Plane[] planes = image.getPlanes();
final byte[] rowData = new byte[planes[0].getRowStride()];
final int bufferSize = width * height * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
final ByteBuffer output = ByteBuffer.allocateDirect(bufferSize);
int channelOffset = 0;
int outputStride = 0;
for (int planeIndex = 0; planeIndex < 3; planeIndex++)
{
if (planeIndex == 0)
{
channelOffset = 0;
outputStride = 1;
}
else if (planeIndex == 1)
{
channelOffset = width * height + 1;
outputStride = 2;
}
else if (planeIndex == 2)
{
channelOffset = width * height;
outputStride = 2;
}
final ByteBuffer buffer = planes[planeIndex].getBuffer();
final int rowStride = planes[planeIndex].getRowStride();
final int pixelStride = planes[planeIndex].getPixelStride();
final int shift = (planeIndex == 0) ? 0 : 1;
final int widthShifted = width >> shift;
final int heightShifted = height >> shift;
buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));
for (int row = 0; row < heightShifted; row++)
{
final int length;
if (pixelStride == 1 && outputStride == 1)
{
length = widthShifted;
buffer.get(output.array(), channelOffset, length);
channelOffset += length;
}
else
{
length = (widthShifted - 1) * pixelStride + 1;
buffer.get(rowData, 0, length);
for (int col = 0; col < widthShifted; col++)
{
output.array()[channelOffset] = rowData[col * pixelStride];
channelOffset += outputStride;
}
}
if (row < heightShifted - 1)
{
buffer.position(buffer.position() + rowStride - length);
}
}
}
return output;
}
For a simpler solution see my implementation here:
Conversion YUV 420_888 to Bitmap (full code)
The function takes the media.Image as input and creates three RenderScript allocations based on the y-, u- and v-planes. It follows the YUV_420_888 logic as shown in this Wikipedia illustration.
However, here we have three separate image planes for the Y, U and V channels, so I take these as three byte[], i.e. U8 allocations. The y-allocation has size width * height bytes, while the u- and v-allocations have size width * height / 4 bytes each, reflecting the fact that each u byte covers 4 pixels (ditto each v byte).
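Just to illustrate the sizes involved (this is not the linked implementation; it assumes the planes are tightly packed, i.e. the row stride equals the width and the chroma pixel stride is 1; on devices that pad rows you must copy row by row as shown in the answers above):

Image.Plane[] planes = image.getPlanes();
int width = image.getWidth();
int height = image.getHeight();

byte[] y = new byte[width * height];       // full-resolution luma plane
byte[] u = new byte[width * height / 4];   // one U byte per 2x2 block of pixels
byte[] v = new byte[width * height / 4];   // one V byte per 2x2 block of pixels

ByteBuffer yBuf = planes[0].getBuffer();
ByteBuffer uBuf = planes[1].getBuffer();
ByteBuffer vBuf = planes[2].getBuffer();

yBuf.get(y, 0, Math.min(y.length, yBuf.remaining()));
uBuf.get(u, 0, Math.min(u.length, uBuf.remaining()));
vBuf.get(v, 0, Math.min(v.length, vBuf.remaining()));

// y, u and v can now back three U8 RenderScript allocations of those sizes.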
I wrote some code for this; it previews the YUV data and converts it to JPEG data, which I can then save as a bitmap, byte[], or other formats (see the class "Allocation").
And SDK document says: "For efficient YUV processing with android.renderscript: Create a RenderScript Allocation with a supported YUV type, the IO_INPUT flag, and one of the sizes returned by getOutputSizes(Allocation.class), Then obtain the Surface with getSurface()."
Here is the code, I hope it will help you: https://github.com/pinguo-yuyidong/Camera2/blob/master/camera2/src/main/rs/yuv2rgb.rs
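For what it's worth, a minimal sketch of the Allocation setup that the SDK documentation describes (not the linked sample itself) might look like this; rs, previewWidth and previewHeight are assumed to exist, and the resulting Surface is added as an output target of the camera capture session:

Type.Builder yuvTypeBuilder = new Type.Builder(rs, Element.YUV(rs))
        .setX(previewWidth)
        .setY(previewHeight)
        .setYuvFormat(ImageFormat.YUV_420_888);
Allocation yuvAllocation = Allocation.createTyped(rs, yuvTypeBuilder.create(),
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

// Hand this Surface to the capture session as one of its output targets.
Surface cameraOutputSurface = yuvAllocation.getSurface();

yuvAllocation.setOnBufferAvailableListener(new Allocation.OnBufferAvailableListener() {
    @Override
    public void onBufferAvailable(Allocation a) {
        yuvAllocation.ioReceive(); // latch the newest camera frame
        // ... run a script (for example the linked yuv2rgb kernel) that reads yuvAllocation ...
    }
});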