This question has been asked many times already, but I cannot seem to find exactly what I want.
I am trying to create a camera app where I want to log the YUV or RGB value when I point my camera at some color. The values must be in the 0...255 range for RGB, or the corresponding YUV color format. I can manage the conversion between them, as there are many such examples on Stack Overflow. However, I cannot store the values into three separate variables and display them in the log.
So far I have managed to get:
package com.example.virus.bpreader;
public class CaptureVideo extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] pixels;
public CaptureVideo(Context context, Camera cameraManager) {
super(context);
mCamera = cameraManager;
mCamera.setDisplayOrientation(90);
//get holder and set the class as callback
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
//when the surface is created, let the camera start the preview
try {
mCamera.setPreviewDisplay(holder);
mCamera.startPreview();
mCamera.cancelAutoFocus();
Camera.Parameters params = mCamera.getParameters();
//get fps
params.getSupportedPreviewFpsRange();
//get resolution
params.getSupportedPreviewSizes();
//stop auto exposure
params.setAutoExposureLock(false);
// Check what resolutions are supported by your camera
List<Camera.Size> sizes = params.getSupportedPictureSizes();
// Iterate through all available resolutions and choose one
for (Camera.Size size : sizes) {
Log.i("Resolution", "Available resolution: " + size.width + " " + size.height);
}
//set resolution at 320*240
params.setPreviewSize(320,240);
//set frame rate at 10 fps
List<int[]> frameRates = params.getSupportedPreviewFpsRange();
int last = frameRates.size() - 1;
params.setPreviewFpsRange(10000, 10000);
//set Image Format
//params.setPreviewFormat(ImageFormat.NV21);
mCamera.setParameters(params);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
//need to stop the preview and restart on orientation change
if (mHolder.getSurface() == null) {
return;
}
//stop preview
mCamera.stopPreview();
//start again
try {
mCamera.setPreviewDisplay(holder);
mCamera.startPreview();
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
//stop and release
mCamera.stopPreview();
mCamera.release();
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion: transforms NV21 pixel data into RGB pixels
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
Log.d("myPixel", String.valueOf(myPixels.length));
}
//yuv decode
int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
int r, g, b, y1192, y, i, uvp, u, v;
for (int j = 0, yp = 0; j < height; j++) {
uvp = frameSize + (j >> 1) * width;
u = 0;
v = 0;
for (i = 0; i < width; i++, yp++) {
y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
// read the interleaved chroma pair (note: U and V are swapped here relative to the other decodeYUV420SP variants below)
u = (0xff & yuv420sp[uvp++]) - 128;
v = (0xff & yuv420sp[uvp++]) - 128;
}
y1192 = 1192 * y;
r = (y1192 + 1634 * v);
g = (y1192 - 833 * v - 400 * u);
b = (y1192 + 2066 * u);
r = Math.max(0, Math.min(r, 262143));
g = Math.max(0, Math.min(g, 262143));
b = Math.max(0, Math.min(b, 262143));
// combine RGB
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
}
Here the decode method should give a hex (packed ARGB) representation of the RGB color space (I am fairly sure it is right, as most answers use the same code). The problem I am facing is that I am not sure how to call it inside the onPreviewFrame method so that it displays the RGB values separately in the log.
N.B. Like I said, I have seen a lot of similar questions but could not find a solution. I do not want to store a file (image/video), as I only need the RGB/YUV values from the live camera preview when I point the camera at some color.
I need the RGB or YUV values because I want to plot a graph of them against time.
Any help will be much appreciated.
Well, if the problem is getting separate values of R, G and B from the RGB array, check this SO post here.
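For example, here is a minimal sketch (not from the original post) of logging one pixel's channels from inside onPreviewFrame, assuming myPixels holds the packed 0xAARRGGBB ints returned by decodeYUV420SP and frameWidth/frameHeight are as above:
int centerIndex = (frameHeight / 2) * frameWidth + (frameWidth / 2); // centre pixel of the frame
int color = myPixels[centerIndex];
int r = (color >> 16) & 0xff; // 0..255
int g = (color >> 8) & 0xff;
int b = color & 0xff;
Log.d("RGB", "r=" + r + " g=" + g + " b=" + b);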
Hope it helps!
Related
I need help fixing these lines of code to find the brightest pixel and its coordinates (x, y) and to draw a bitmap over it.
I have a live Android camera preview and want to scan the screen. My processing code is inside onPreviewFrame: a for loop compares a preset float brightestValue against every value of pixels[]. My canvas statically draws a circle in the top left of the screen.
I expect the circle position to change dynamically on the camera SurfaceView as I change the direction of the camera. Sorry if my English is grammatically bad.
Thank you.
public class CameraDemo extends Activity {
private static final String TAG = "CameraDemo";
Preview preview;
int brightestX = 0; // X-coordinate of the brightest video pixel
int brightestY = 0; // Y-coordinate of the brightest video pixel
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
Paint p = new Paint ();
p.setColor(Color.WHITE);
Bitmap bg = Bitmap.createBitmap(480,800,Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(bg);
canvas.drawCircle(brightestX,brightestY,20,p);
preview = new Preview(this);
preview.setBackground(new BitmapDrawable(bg));
((FrameLayout) findViewById(R.id.preview)).addView(preview);
}
class Preview extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
SurfaceHolder mHolder;
Camera mCamera;
private Camera.Parameters parameters;
private Camera.Size previewSize;
private int[] pixels;
Preview(Context context) {
super(context);
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
public void surfaceCreated(SurfaceHolder holder) {
// The Surface has been created, acquire the camera and tell it where
// to draw.
mCamera = Camera.open();
try {
mCamera.startPreview();
mCamera.setPreviewDisplay(holder);
mCamera.setDisplayOrientation(90);
mCamera.setPreviewCallbackWithBuffer(this);
parameters = mCamera.getParameters();
previewSize = parameters.getPreviewSize();
} catch (IOException exception) {
mCamera.release();
mCamera = null;
// TODO: add more exception handling logic here
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
mCamera.release();
mCamera = null;
}
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
parameters.setPreviewSize(w, h);
//set the camera's settings
mCamera.setParameters(parameters);
mCamera.startPreview();
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//transforms NV21 pixel data into RGB pixels
decodeYUV420SP(pixels, data, previewSize.width, previewSize.height);
float brightestValue = 0; // Brightness of the brightest video pixel
for (int y = 0; y < previewSize.height ; y++) {
for (int x = 0; x < previewSize.width; x++) {
// Get the color stored in the pixel
float pixelBrightness = pixels[previewSize.width*previewSize.height];
// If that value is brighter than any previous, then store the
// brightness of that pixel, as well as its (x,y) location
if (pixelBrightness > brightestValue) {
brightestValue = pixelBrightness;
brightestY = y;
brightestX = x;
}
}
}
}
}
//Method from Ketai project! Not mine! See below...
void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0) r = 0; else if (r > 262143)
r = 262143;
if (g < 0) g = 0; else if (g > 262143)
g = 262143;
if (b < 0) b = 0; else if (b > 262143)
b = 262143;
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
}
}
You need to call canvas.drawCircle(brightestX,brightestY,20,p); after updating the position in the onPreviewFrame method, so the code looks like this:
if (pixelBrightness > brightestValue) {
brightestValue = pixelBrightness;
brightestY = y;
brightestX = x;
drawCircleOnNewPosition() // method that calls canvas.drawCircle
}
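As a rough idea of what the hypothetical drawCircleOnNewPosition() could do (this is an illustrative sketch, not code from the question): instead of drawing once into the background bitmap in onCreate, put a small transparent overlay View on top of the preview and redraw it whenever the coordinates change.
class CircleOverlay extends View {
    private final Paint paint = new Paint();
    private int cx, cy;
    CircleOverlay(Context context) {
        super(context);
        paint.setColor(Color.WHITE);
    }
    void moveTo(int x, int y) {
        cx = x;
        cy = y;
        postInvalidate(); // safe to call from the preview callback thread
    }
    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawCircle(cx, cy, 20, paint);
    }
}
drawCircleOnNewPosition() would then simply call overlay.moveTo(brightestX, brightestY), where overlay is an instance of this view added to the FrameLayout next to the Preview.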
I'm developing an Android app that gets the color of the current pixel on the screen.
The application concept is very simple: the CameraPreview class implements the SurfaceHolder.Callback interface, and onSurfaceChanged gets the color.
My problem is that the code works just fine on the Samsung S6, but on the Nexus 6P the callbacks surfaceCreated and surfaceChanged are never called.
Here is a sample of my code:
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
private SurfaceHolder mHolder;
private Camera mCamera;
String TAG = "TAG";
Context context;
OnColorDetected onColorDetected;
ColorDetectionPresenter presenter;
Camera.Size previewSize;
int midPixxel;
public CameraPreview(Context context, Camera camera, ColorDetectionPresenter presenter) {
super(context);
mCamera = camera;
this.context = context;
this.presenter = presenter;
// Install a SurfaceHolder.Callback so we get notified when the
// underlying surface is created and destroyed.
mHolder = getHolder();
mHolder.addCallback(this);
// deprecated setting, but required on Android versions prior to 3.0
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
public void setOnColorDetected(OnColorDetected onColorDetected) {
this.onColorDetected = onColorDetected;
}
public void surfaceDestroyed(SurfaceHolder holder) {
// empty. Take care of releasing the Camera preview in your activity.
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
// The Surface has been created, now tell the camera where to draw the preview.
try {
Log.d("SURFACE_STATE","created");
mCamera.setPreviewDisplay(holder);
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion: transforms NV21 pixel data into RGB pixels
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
for (int i = 0; i < myPixels.length; i++) {
//Toast.makeText(context, MainActivity.getBestMatchingColorName(myPixels[i]), Toast.LENGTH_SHORT).show();
}
}
});
mCamera.startPreview();
} catch (IOException e) {
Log.d(TAG, "Error setting camera preview: " + e.getMessage());
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
// If your preview can change or rotate, take care of those events here.
// Make sure to stop the preview before resizing or reformatting it.
Log.d("SURFACE_STATE","changed");
if (mHolder.getSurface() == null) {
// preview surface does not exist
return;
}
// stop preview before making changes
try {
mCamera.stopPreview();
} catch (Exception e) {
// ignore: tried to stop a non-existent preview
}
// set preview size and make any resize, rotate or
// reformatting changes here
// start preview with new settings
try {
Camera.Parameters parameters = mCamera.getParameters();
mCamera.setPreviewDisplay(mHolder);
List<Camera.Size> previewSizes = parameters.getSupportedPreviewSizes();
if (previewSizes == null)
previewSizes = parameters.getSupportedPictureSizes();
// You need to choose the most appropriate previewSize for your
previewSize = previewSizes.get(0);
for(int i=0;i<previewSizes.size();i++){
if(previewSizes.get(i).width>previewSize.width)
previewSize=previewSizes.get(i);
}
// .... select one of previewSizes here
cameraSetup(width, height);
parameters.setPictureSize(previewSize.width, previewSize.height);
mCamera.setParameters(parameters);
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion: transforms NV21 pixel data into RGB pixels
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
String colorName = presenter.getBestMatchingColorName(myPixels[myPixels.length / 2]);
onColorDetected.colorDetected(colorName);
}
});
mCamera.startPreview();
} catch (Exception e) {
Log.d(TAG, "Error starting camera preview: " + e.getMessage());
}
}
int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
// Pulled directly from:
// http://ketai.googlecode.com/svn/trunk/ketai/src/edu/uic/ketai/inputService/KetaiCamera.java
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)
r = 0;
else if (r > 262143)
r = 262143;
if (g < 0)
g = 0;
else if (g > 262143)
g = 262143;
if (b < 0)
b = 0;
else if (b > 262143)
b = 262143;
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
private void cameraSetup(int w, int h) {
// set the camera parameters, including the preview size
FrameLayout.LayoutParams lp = (FrameLayout.LayoutParams) getLayoutParams();
double cameraAspectRatio = ((double) previewSize.width) / previewSize.height;
if (((double) h) / w > cameraAspectRatio) {
lp.width = (int) (h / cameraAspectRatio + 0.5);
lp.height = h;
} else {
lp.height = (int) (w * cameraAspectRatio + 0.5);
lp.width = w;
lp.topMargin = (h - lp.height) / 2;
}
lp.gravity = Gravity.CENTER_HORIZONTAL | Gravity.TOP;
setLayoutParams(lp);
requestLayout();
}
}
And this is how I call it:
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
colorDetectionPresenter = new ColorDetectionPresenter();
camera = Camera.open();
cameraPreview = new CameraPreview(getActivity(), camera, colorDetectionPresenter);
params = camera.getParameters();
params.setPreviewFormat(ImageFormat.NV21);
camera.setDisplayOrientation(90);
siz = params.getSupportedPreviewSizes().get(0);
params.setPreviewSize(siz.width, siz.height);
}
I have tried initializing the camera preview in onCreate, onCreateView and onResume; none of them works.
Any suggestions?
It only fails on my Nexus 6P.
My application overrides the onPreviewFrame callback to pass the current camera frame to a WebRTC native function. This works perfectly; however, I want to be able to switch to sending a static frame instead of video, if that option has been selected in my app.
So far I have created a YUV NV21 image, which I am storing in the assets dir. All attempts to pass that frame down to the native function have resulted in purple/green stripes rather than the actual image.
This is what I have so far:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
previewBufferLock.lock();
if (mFrameProvider.isEnabled()) {
mFrameProvider.overwriteWithFrame(data, expectedFrameSize);
}
if (isCaptureRunning) {
if (data.length == expectedFrameSize) {
ProvideCameraFrame(data, expectedFrameSize, context);
cameraUtils.addCallbackBuffer(camera, data);
}
}
previewBufferLock.unlock();
}
@Override
public byte[] overwriteWithPreviewFrame(byte[] data, int expectedFrameSize) {
if (mFrameData == null) {
loadPreviewFrame();
}
for (int i=0; i < expectedFrameSize; i++) {
if (i < mFrameData.length) {
data[i] = mFrameData[i];
}
}
return data;
}
And
private void loadPreviewFrame() {
try {
InputStream open = mContext.getResources().getAssets().open(PREVIEW_FRAME_FILE);
mFrameData = IOUtils.toByteArray(open);
open.close();
} catch (Exception e) {
Log.e("", "", e);
}
}
I have tried converting the image to a bitmap too. So the question is: how can I open a YUV frame from assets and convert it into a suitable format to pass to the native methods?
Right, after a long fight with the Android API I have managed to get this working.
There were two issues that caused the green/purple output:
Loss of data: the generated YUV frame was larger than the original preview frame at the same resolution, so the data being passed down to the native code was missing around 30% of its image data.
Wrong resolution: the native code required the resolution of the preview frame and not the camera.
Below is a working solution for anyone who wishes to add a static frame;
So here is the updated code:
@Override
public byte[] getPreviewFrameData(int width, int height) {
if (mPreviewFrameData == null) {
loadPreviewFrame(width, height);
}
return mPreviewFrameData;
}
private void loadPreviewFrame(int width, int height) {
try {
Bitmap previewImage = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.frame);
Bitmap resizedPreviewImage = Bitmap.createScaledBitmap(previewImage, width, height, false);
BitmapConverter bitmapConverter = new BitmapConverter();
mPreviewFrameData = bitmapConverter.convertToNV21(resizedPreviewImage);
} catch (Exception e) {
Log.e("DisabledCameraFrameProvider", "Failed to loadPreviewFrame");
}
}
class BitmapConverter {
byte [] convertToNV21(Bitmap bitmap) {
int inputWidth = bitmap.getWidth();
int inputHeight = bitmap.getHeight();
int [] argb = new int[inputWidth * inputHeight];
bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
byte [] yuv = new byte[inputWidth*inputHeight*3/2];
encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
bitmap.recycle();
return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff);
Y = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
}
index ++;
}
}
}
}
Then finally in your callback;
public void onPreviewFrame(byte[] data, Camera camera) {
byte[] bytes = data;
if (!mProvider.isVideoEnabled()) {
Camera.Size previewSize = camera.getParameters().getPreviewSize();
bytes = mProvider.getPreviewFrameData(previewSize.width, previewSize.height);
}
ProvideCameraFrame(bytes, bytes.length, context);
}
The key was to scale the image to the camera preview size and convert the image to YUV colour space.
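As a quick sanity check related to issue 1 (this snippet is illustrative and not part of the original code): the NV21 buffer handed to the native side should be exactly width * height * 3 / 2 bytes, so a mismatch like the ~30% data loss described above can be caught early inside the callback.
Camera.Size previewSize = camera.getParameters().getPreviewSize();
int expectedFrameSize = previewSize.width * previewSize.height * 3 / 2; // NV21 uses 12 bits per pixel
if (bytes.length != expectedFrameSize) {
    Log.w("FrameProvider", "NV21 size mismatch: got " + bytes.length + ", expected " + expectedFrameSize);
}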
I have created a demo application which opens the camera. Now I want to get the color of a pixel when the user touches the live camera preview.
I have tried overriding onTouchEvent, and I succeed in getting the position of the pixel in x, y, but I am not getting the RGB color value from it. It always shows 0,0,0.
All suggestions are welcome, including any alternate way to achieve the same functionality (excluding OpenCV, because it also requires installing the OpenCvManager apk to support my application).
Code:
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback, PreviewCallback {
private Camera camera;
private SurfaceHolder holder;
int[] myPixels;
public CameraPreview(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public CameraPreview(Context context, AttributeSet attrs) {
super(context, attrs);
}
public CameraPreview(Context context) {
super(context);
}
public void init(Camera camera) {
this.camera = camera;
initSurfaceHolder();
}
@SuppressWarnings("deprecation") // needed for < 3.0
private void initSurfaceHolder() {
holder = getHolder();
holder.addCallback(this);
holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
initCamera(holder);
}
private void initCamera(SurfaceHolder holder) {
try {
camera.setPreviewDisplay(holder);
camera.getParameters().setPreviewFormat(ImageFormat.NV21);
camera.setPreviewCallback(this);
camera.startPreview();
} catch (Exception e) {
Log.d("Error setting camera preview", e);
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if(event.getAction() == MotionEvent.ACTION_DOWN)
{
android.util.Log.d("touched", "called");
/* int x = (int)event.getX();
int y = (int)event.getY();
android.util.Log.d("touched pixel :", x+" "+y);
setDrawingCacheEnabled(true);
buildDrawingCache();
Bitmap mBmp = getDrawingCache();
int pixel = mBmp.getPixel(x, y);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
android.util.Log.d("touched pixel color :", redValue+" "+greenValue+" "+blueValue);
android.util.Log.d("touched pixel color from preview:", redValue+" "+greenValue+" "+blueValue);
*/
//how to get particular pixel from myPixels[]
}
return false;
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
android.util.Log.d("onPreviewFrame", "called");
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion: transforms NV21 pixel data into RGB pixels
myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
}
public int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
// here we're using our own internal PImage attributes
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)
r = 0;
else if (r > 262143)
r = 262143;
if (g < 0)
g = 0;
else if (g > 262143)
g = 262143;
if (b < 0)
b = 0;
else if (b > 262143)
b = 262143;
// use internal buffer instead of pixels for UX reasons
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
| ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
}
I've followed a different approach to solve it. I'll post the code as soon as I get free.
Algorithm:
create an overlay on the live camera preview
when the user touches, update the overlay with the RGB data of the latest YUV buffer stream from the live camera
pick the RGB color from the overlay image (a rough sketch of this idea follows below)
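A minimal sketch of that overlay idea (illustrative only, not the original author's code; frameWidth, frameHeight, frameX and frameY are hypothetical names, and the touch coordinates must first be mapped from view coordinates into frame coordinates):
// build an ARGB bitmap from the latest decoded frame
Bitmap overlay = Bitmap.createBitmap(myPixels, frameWidth, frameHeight, Bitmap.Config.ARGB_8888);
// read the color under the touch point
int pixel = overlay.getPixel(frameX, frameY);
Log.d("overlay color", Color.red(pixel) + " " + Color.green(pixel) + " " + Color.blue(pixel));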
It seems like myPixels is a 1D representation of 2D data (width x height).
Which means myPixels has a length of (width * height).
Let's say the pixel is (x, y); then
int idx = (y * width) + x;
int color = myPixels[idx];
With the above information you can modify decodeYUV420SP method to output only the color of a particular pixel.
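Tying that back to onTouchEvent, a minimal sketch (an assumption on my part: the preview width is kept in a field such as frameWidth when onPreviewFrame runs, and the touch coordinates are already scaled from view coordinates to frame coordinates):
if (myPixels != null) {
    int x = (int) event.getX();
    int y = (int) event.getY();
    int pixel = myPixels[(y * frameWidth) + x];
    int redValue = Color.red(pixel);
    int greenValue = Color.green(pixel);
    int blueValue = Color.blue(pixel);
    android.util.Log.d("touched pixel color from preview", redValue + " " + greenValue + " " + blueValue);
}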
There are several tutorials out there which explain how to get a simple camera preview up and running on an Android device, but I couldn't find any example which explains how to manipulate the image before it is rendered.
What I want to do is implement custom color filters to simulate e.g. red and/or green deficiency.
I did some research on this and put together a working(ish) example. Here's what I found. It's pretty easy to get the raw data coming off of the camera. It's returned as a YUV byte array. You'd need to draw it manually onto a surface in order to be able to modify it. To do that you'd need to have a SurfaceView that you can manually run draw calls with. There are a couple of flags you can set that accomplish that.
In order to do the draw call manually you'd need to convert the byte array into a bitmap of some sort. Bitmaps and the BitmapDecoder don't seem to handle the YUV byte array very well at this point. There's been a bug filed for this, but I don't know what the status is on that. So people have been trying to decode the byte array into an RGB format themselves.
Seems like doing the decoding manually has been kinda slow and people have had various degrees of success with it. Something like this should probably really be done with native code at the NDK level.
Still, it is possible to get it working. Also, my little demo is just me spending a couple of hours hacking stuff together (I guess doing this caught my imagination a little too much ;)). So chances are with some tweaking you could much improve what I've managed to get working.
This little code snippet contains a couple of other gems I found as well. If all you want is to be able to draw over the surface, you can override the surface's onDraw function: you could potentially analyze the returned camera image and draw an overlay, which would be much faster than trying to process every frame. Also, I changed the holder to SurfaceHolder.SURFACE_TYPE_NORMAL from what would be needed if you wanted the camera preview to show up. So there are a couple of changes to the code. The commented-out code:
//try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
And the:
SurfaceHolder.SURFACE_TYPE_NORMAL //SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS - for preview to work
Should allow you to overlay frames based on the camera preview on top of the real preview.
Anyway, here's a working piece of code - Should give you something to start with.
Just put a line of code in one of your views like this:
<pathtocustomview.MySurfaceView android:id="@+id/surface_camera"
android:layout_width="fill_parent" android:layout_height="10dip"
android:layout_weight="1">
</pathtocustomview.MySurfaceView>
And include this class in your source somewhere:
package pathtocustomview;
import java.io.IOException;
import java.nio.Buffer;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;
public class MySurfaceView extends SurfaceView implements Callback,
Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private boolean isPreviewRunning = false;
private byte [] rgbbuffer = new byte[256 * 256];
private int [] rgbints = new int[256 * 256];
protected final Paint rectanglePaint = new Paint();
public MySurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
rectanglePaint.setARGB(100, 200, 0, 0);
rectanglePaint.setStyle(Paint.Style.FILL);
rectanglePaint.setStrokeWidth(2);
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
protected void onDraw(Canvas canvas) {
canvas.drawRect(new Rect((int) (Math.random() * 100),
(int) (Math.random() * 100), 200, 200), rectanglePaint);
Log.w(this.getClass().getName(), "On Draw Called");
}
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
}
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
this.setWillNotDraw(false); // This allows us to make our own draw
// calls to this canvas
mCamera = Camera.open();
Camera.Parameters p = mCamera.getParameters();
p.setPreviewSize(240, 160);
mCamera.setParameters(p);
//try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
public void onPreviewFrame(byte[] data, Camera camera) {
Log.d("Camera", "Got a camera frame");
Canvas c = null;
if(mHolder == null){
return;
}
try {
synchronized (mHolder) {
c = mHolder.lockCanvas(null);
// Do your drawing here
// So this data value you're getting back is formatted in YUV format and you can't do much
// with it until you convert it to rgb
int bwCounter=0;
int yuvsCounter=0;
for (int y=0;y<160;y++) {
System.arraycopy(data, yuvsCounter, rgbbuffer, bwCounter, 240);
yuvsCounter=yuvsCounter+240;
bwCounter=bwCounter+256;
}
for(int i = 0; i < rgbints.length; i++){
rgbints[i] = (int)rgbbuffer[i];
}
//decodeYUV(rgbbuffer, data, 100, 100);
c.drawBitmap(rgbints, 0, 256, 0, 0, 256, 256, false, new Paint());
Log.d("SOMETHING", "Got Bitmap");
}
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (c != null) {
mHolder.unlockCanvasAndPost(c);
}
}
}
}
I used walta's solution, but I had some problems with the YUV conversion, the camera frame output sizes, and a crash on camera release.
Finally the following code worked for me:
public class MySurfaceView extends SurfaceView implements Callback, Camera.PreviewCallback {
private static final String TAG = "MySurfaceView";
private int width;
private int height;
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] rgbints;
private boolean isPreviewRunning = false;
private int mMultiplyColor;
public MySurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
mHolder = getHolder();
mHolder.addCallback(this);
mMultiplyColor = getResources().getColor(R.color.multiply_color);
}
// #Override
// protected void onDraw(Canvas canvas) {
// Log.w(this.getClass().getName(), "On Draw Called");
// }
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
if (isPreviewRunning)
return;
this.setWillNotDraw(false); // This allows us to make our own draw calls to this canvas
mCamera = Camera.open();
isPreviewRunning = true;
Camera.Parameters p = mCamera.getParameters();
Size size = p.getPreviewSize();
width = size.width;
height = size.height;
p.setPreviewFormat(ImageFormat.NV21);
showSupportedCameraFormats(p);
mCamera.setParameters(p);
rgbints = new int[width * height];
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
//mHolder.removeCallback(this);
mCamera.setPreviewCallback(null);
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
// Log.d("Camera", "Got a camera frame");
if (!isPreviewRunning)
return;
Canvas canvas = null;
if (mHolder == null) {
return;
}
try {
synchronized (mHolder) {
canvas = mHolder.lockCanvas(null);
int canvasWidth = canvas.getWidth();
int canvasHeight = canvas.getHeight();
decodeYUV(rgbints, data, width, height);
// draw the decoded image, centered on canvas
canvas.drawBitmap(rgbints, 0, width, canvasWidth-((width+canvasWidth)>>1), canvasHeight-((height+canvasHeight)>>1), width, height, false, null);
// use some color filter
canvas.drawColor(mMultiplyColor, Mode.MULTIPLY);
}
} catch (Exception e){
e.printStackTrace();
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (canvas != null) {
mHolder.unlockCanvasAndPost(canvas);
}
}
}
/**
* Decodes a YUV frame to a buffer which can be used to create a bitmap. Use
* this for OS versions below FROYO (which added a native YUV decoder). Decodes the Y, U, and V
* values of the YUV 420 buffer described as YCbCr_422_SP by Android.
*
* @param out
* the outgoing array of ARGB pixels
* @param fg
* the incoming frame bytes
* @param width
* of source frame
* @param height
* of source frame
* @throws NullPointerException
* @throws IllegalArgumentException
*/
public void decodeYUV(int[] out, byte[] fg, int width, int height) throws NullPointerException, IllegalArgumentException {
int sz = width * height;
if (out == null)
throw new NullPointerException("buffer out is null");
if (out.length < sz)
throw new IllegalArgumentException("buffer out size " + out.length + " < minimum " + sz);
if (fg == null)
throw new NullPointerException("buffer 'fg' is null");
if (fg.length < sz)
throw new IllegalArgumentException("buffer fg size " + fg.length + " < minimum " + sz * 3 / 2);
int i, j;
int Y, Cr = 0, Cb = 0;
for (j = 0; j < height; j++) {
int pixPtr = j * width;
final int jDiv2 = j >> 1;
for (i = 0; i < width; i++) {
Y = fg[pixPtr];
if (Y < 0)
Y += 255;
if ((i & 0x1) != 1) {
final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
Cb = fg[cOff];
if (Cb < 0)
Cb += 127;
else
Cb -= 128;
Cr = fg[cOff + 1];
if (Cr < 0)
Cr += 127;
else
Cr -= 128;
}
int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
if (R < 0)
R = 0;
else if (R > 255)
R = 255;
int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
if (G < 0)
G = 0;
else if (G > 255)
G = 255;
int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
if (B < 0)
B = 0;
else if (B > 255)
B = 255;
out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
}
}
}
private void showSupportedCameraFormats(Parameters p) {
List<Integer> supportedPictureFormats = p.getSupportedPreviewFormats();
Log.d(TAG, "preview format:" + cameraFormatIntToString(p.getPreviewFormat()));
for (Integer x : supportedPictureFormats) {
Log.d(TAG, "suppoterd format: " + cameraFormatIntToString(x.intValue()));
}
}
private String cameraFormatIntToString(int format) {
switch (format) {
case PixelFormat.JPEG:
return "JPEG";
case PixelFormat.YCbCr_420_SP:
return "NV21";
case PixelFormat.YCbCr_422_I:
return "YUY2";
case PixelFormat.YCbCr_422_SP:
return "NV16";
case PixelFormat.RGB_565:
return "RGB_565";
default:
return "Unknown:" + format;
}
}
}
To use it, run the following code from your activity's onCreate:
SurfaceView surfaceView = new MySurfaceView(this, null);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(layoutParams);
mRelativeLayout.addView(surfaceView);
Have you looked at GPUImage?
It was originally an OS X/iOS library made by Brad Larson, which exists as an Objective-C wrapper around OpenGL ES.
https://github.com/BradLarson/GPUImage
The people at CyberAgent have made an Android port (which doesn't have complete feature parity), which is a set of Java wrappers on top of the OpenGLES stuff. It's relatively high level, and pretty easy to implement, with a lot of the same functionality mentioned above...
https://github.com/CyberAgent/android-gpuimage
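If you go that route, a minimal usage sketch of the Android port might look like this (hedged: class and method names are taken from the library's public examples and may differ between versions, so treat this as an assumption rather than a guaranteed API):
// apply a filter to a still image; the library also provides view classes for live previews
GPUImage gpuImage = new GPUImage(context);
gpuImage.setImage(bitmap); // a Bitmap grabbed from the camera, for example
gpuImage.setFilter(new GPUImageGrayscaleFilter()); // swap in a custom filter to simulate a color deficiency
Bitmap filtered = gpuImage.getBitmapWithFilterApplied();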