Can anyone please help me get the camera preview frame data without taking a picture? I want to get the current camera frame data without clicking the camera button.
I guess you are looking for this method in the Camera class:
public final void setPreviewCallback (Camera.PreviewCallback cb)
Define the callback:
private PreviewCallback mPreviewCallback = new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // process the raw preview frame here
    }
};
Once the preview is started, this callback is triggered for each frame. The data (byte[]) is in the preview format, which you can query and set through the Camera.Parameters.
First, get the list of supported preview formats:
List<Integer> Camera.Parameters.getSupportedPreviewFormats()
The default format is ImageFormat.NV21.
If you want to change the preview format, choose one of the supported formats and set it with:
Camera.Parameters.setPreviewFormat(int pixel_format)
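For illustration, here is a minimal sketch of how these pieces fit together (an outline only, assuming the usual android.hardware.Camera imports; the SurfaceHolder is a placeholder for whatever preview surface you already have):

void startPreviewWithCallback(SurfaceHolder holder) throws IOException {
    Camera camera = Camera.open();
    Camera.Parameters params = camera.getParameters();
    params.setPreviewFormat(ImageFormat.NV21); // the default, always supported for preview
    camera.setParameters(params);
    camera.setPreviewCallback(new Camera.PreviewCallback() {
        @Override
        public void onPreviewFrame(byte[] data, Camera cam) {
            // 'data' is one frame in the preview format set above (NV21 here)
        }
    });
    camera.setPreviewDisplay(holder); // the preview needs a surface to draw on
    camera.startPreview();            // the callback starts firing after this call
}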
Seeing as you're using MonoDevelop and not writing in Java, the procedure will be a little different.
You can create a camera preview handler class like so:
public class CameraListener : Java.Lang.Object, Camera.IPreviewCallback
{
public event PreviewFrameHandler PreviewFrame;
public void OnPreviewFrame(byte[] data, Camera camera)
{
if (PreviewFrame != null)
{
PreviewFrame(this, new PreviewFrameEventArgs(data, camera));
}
}
}
public delegate void PreviewFrameHandler(object sender, PreviewFrameEventArgs e);
public class PreviewFrameEventArgs : EventArgs
{
readonly byte[] _data;
readonly Camera _camera;
public byte[] Data { get { return _data; } }
public Camera Camera { get { return _camera; } }
public PreviewFrameEventArgs(byte[] data, Camera camera)
{
_data = data;
_camera = camera;
}
}
The class provides an event that is fired for each frame received.
In my own code I use the YUV420_NV21 format.
I decode the data using the following method:
unsafe public static void convertYUV420_NV21toRGB565(byte* yuvIn, Int16* rgbOut, int width, int height, bool monochrome)
{
int size = width * height;
int offset = size;
int u, v, y1, y2, y3, y4;
for (int i = 0, k = 0; i < size; i += 2, k += 2)
{
y1 = yuvIn[i];
y2 = yuvIn[i + 1];
y3 = yuvIn[width + i];
y4 = yuvIn[width + i + 1];
u = yuvIn[offset + k];
v = yuvIn[offset + k + 1];
u = u - 128;
v = v - 128;
if (monochrome)
{
convertYUVtoRGB565Monochrome(y1, u, v, rgbOut, i);
convertYUVtoRGB565Monochrome(y2, u, v, rgbOut, (i + 1));
convertYUVtoRGB565Monochrome(y3, u, v, rgbOut, (width + i));
convertYUVtoRGB565Monochrome(y4, u, v, rgbOut, (width + i + 1));
}
else
{
convertYUVtoRGB565(y1, u, v, rgbOut, i);
convertYUVtoRGB565(y2, u, v, rgbOut, (i + 1));
convertYUVtoRGB565(y3, u, v, rgbOut, (width + i));
convertYUVtoRGB565(y4, u, v, rgbOut, (width + i + 1));
}
if (i != 0 && (i + 2) % width == 0)
i += width;
}
}
unsafe private static void convertYUVtoRGB565Monochrome(int y, int u, int v, Int16* rgbOut, int index)
{
rgbOut[index] = (short)(((y & 0xf8) << 8) |
((y & 0xfc) << 3) |
((y >> 3) & 0x1f));
}
unsafe private static void convertYUVtoRGB565(int y, int u, int v, Int16* rgbOut, int index)
{
int r = y + (int)(1.402f * v);
int g = y - (int)(0.344f * u + 0.714f * v);
int b = y + (int)(1.772f * u);
r = r > 255 ? 255 : r < 0 ? 0 : r;
g = g > 255 ? 255 : g < 0 ? 0 : g;
b = b > 255 ? 255 : b < 0 ? 0 : b;
rgbOut[index] = (short)(((b & 0xf8) << 8) |
((g & 0xfc) << 3) |
((r >> 3) & 0x1f));
}
I've included both monochrome and colour decoders.
The resulting data from this code is in the OpenGL 565 RGB format and can be used to initialise OpenGL textures, or you can just mess with the pixels for image analysis, etc.
Bob Powell.
This recipe from Xamarin explains how to use the Camera class to get a preview and display it to the user.
Related
I'm working on a video conference app using OpenVidu. We are trying to include a Wikitude AR session in the call.
The problem is that both of them require access to the camera, so I have the following scenario: if I instantiate the local participant video first, I can't start the Wikitude AR session because the video doesn't load. If I instantiate the Wikitude session first, the other participants in the call don't see the device video.
I was able to create a custom video capturer for OpenVidu that imitates the camera. Every frame has to be sent to it manually for it to work.
package org.webrtc;
import android.content.Context;
import android.graphics.Bitmap;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicReference;
public class CustomVideoCapturer implements VideoCapturer {
private final static String TAG = "FileVideoCapturer";
//private final FileVideoCapturer.VideoReader videoReader;
private final Timer timer = new Timer();
private CapturerObserver capturerObserver;
private AtomicReference<Bitmap> image = new AtomicReference<Bitmap>();
private final TimerTask tickTask = new TimerTask() {
@Override
public void run() {
tick();
}
};
public CustomVideoCapturer() {
}
public void tick() {
Bitmap frame = image.get();
if (frame != null && !frame.isRecycled()) {
NV21Buffer nv21Buffer = new NV21Buffer(getNV21(frame),frame.getWidth(),frame.getHeight(), null);
VideoFrame videoFrame = new VideoFrame(nv21Buffer, 0, System.nanoTime());
capturerObserver.onFrameCaptured(videoFrame);
}
}
byte [] getNV21(Bitmap image) {
int [] argb = new int[image.getWidth() * image.getHeight()];
image.getPixels(argb, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
byte [] yuv = new byte[image.getWidth()*image.getHeight()*3/2];
encodeYUV420SP(yuv, argb, image.getWidth(), image.getHeight());
image.recycle();
return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int a, R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff) >> 0;
// well known RGB to YUV algorithm
Y = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
// NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
// meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
// pixel AND every other scanline.
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
}
index ++;
}
}
}
public void sendFrame(Bitmap bitmap) {
image.set(bitmap);
}
@Override
public void initialize(SurfaceTextureHelper surfaceTextureHelper, Context applicationContext,
CapturerObserver capturerObserver) {
this.capturerObserver = capturerObserver;
}
@Override
public void startCapture(int width, int height, int framerate) {
//timer.schedule(tickTask, 0, 1000 / framerate);
threadCV().start();
}
Thread threadCV() {
return new Thread() {
@Override
public void run() {
while (true) {
if (image.get() != null) {
tick();
}
try {
Thread.sleep(10);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
};
}
@Override
public void stopCapture() throws InterruptedException {
timer.cancel();
}
@Override
public void changeCaptureFormat(int width, int height, int framerate) {
// Empty on purpose
}
@Override
public void dispose() {
//videoReader.close();
}
@Override
public boolean isScreencast() {
return false;
}
private interface VideoReader {
VideoFrame getNextFrame();
void close();
}
/**
* Read video data from file for the .y4m container.
*/
}
On the local participant I then use this function to send the frame:
public void sendFrame(Bitmap frame) {
customVideoCapturer.sendFrame(frame);
}
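For context, this is roughly how such a capturer can be wired into the underlying WebRTC stack (a sketch under assumptions: peerConnectionFactory, eglBaseContext and appContext are placeholders from my setup, and the capture dimensions are illustrative):

CustomVideoCapturer capturer = new CustomVideoCapturer();
SurfaceTextureHelper helper =
        SurfaceTextureHelper.create("CaptureThread", eglBaseContext);
VideoSource videoSource =
        peerConnectionFactory.createVideoSource(capturer.isScreencast());
capturer.initialize(helper, appContext, videoSource.getCapturerObserver());
capturer.startCapture(640, 480, 30); // width / height / fps are illustrative
VideoTrack localTrack = peerConnectionFactory.createVideoTrack("video0", videoSource);
// from here on, every Bitmap passed to capturer.sendFrame(...) becomes a video frame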
But I wasn't able to take the frames from the Wikitude camera. Is there a way to access the frames and resend them?
As of the Native API SDK, version 9.10.0, according to the answer from Wikitude support
(https://support.wikitude.com/support/discussions/topics/5000096719?page=1), a custom plugin has to be created in order to access the camera frames:
https://www.wikitude.com/external/doc/documentation/latest/androidnative/pluginsapi.html#plugins-api
This question has been asked many times already, but I cannot seem to find exactly what I want.
I am trying to create a camera app where I want to write the YUV or RGB value to the log when I point my camera at some color. The values must be in the 0...255 range for RGB, or the corresponding YUV color format. I can manage the conversion between them, as there are many such examples on Stack Overflow. However, I cannot store the values in 3 separate variables and display them in the log.
So far I have managed to get this:
package com.example.virus.bpreader;
public class CaptureVideo extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] pixels;
public CaptureVideo(Context context, Camera cameraManager) {
super(context);
mCamera = cameraManager;
mCamera.setDisplayOrientation(90);
//get holder and set the class as callback
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
//when the surface is created, let the camera start the preview
try {
mCamera.setPreviewDisplay(holder);
mCamera.startPreview();
mCamera.cancelAutoFocus();
Camera.Parameters params = mCamera.getParameters();
//get fps
params.getSupportedPreviewFpsRange();
//get resolution
params.getSupportedPreviewSizes();
//stop auto exposure
params.setAutoExposureLock(false);
// Check what resolutions are supported by your camera
List<Camera.Size> sizes = params.getSupportedPictureSizes();
// Iterate through all available resolutions and choose one
for (Camera.Size size : sizes) {
Log.i("Resolution", "Available resolution: " + size.width + " " + size.height);
}
//set resolution at 320*240
params.setPreviewSize(320,240);
//set frame rate at 10 fps
List<int[]> frameRates = params.getSupportedPreviewFpsRange();
int last = frameRates.size() - 1;
params.setPreviewFpsRange(10000, 10000);
//set Image Format
//params.setPreviewFormat(ImageFormat.NV21);
mCamera.setParameters(params);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
//need to stop the preview and restart on orientation change
if (mHolder.getSurface() == null) {
return;
}
//stop preview
mCamera.stopPreview();
//start again
try {
mCamera.setPreviewDisplay(holder);
mCamera.startPreview();
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
//stop and release
mCamera.stopPreview();
mCamera.release();
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels; transform NV21 pixel data into RGB pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
Log.d("myPixel", String.valueOf(myPixels.length));
}
//yuv decode
int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
int r, g, b, y1192, y, i, uvp, u, v;
for (int j = 0, yp = 0; j < height; j++) {
uvp = frameSize + (j >> 1) * width;
u = 0;
v = 0;
for (i = 0; i < width; i++, yp++) {
y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
// the above answer is wrong at the following lines; just swap u and v
u = (0xff & yuv420sp[uvp++]) - 128;
v = (0xff & yuv420sp[uvp++]) - 128;
}
y1192 = 1192 * y;
r = (y1192 + 1634 * v);
g = (y1192 - 833 * v - 400 * u);
b = (y1192 + 2066 * u);
r = Math.max(0, Math.min(r, 262143));
g = Math.max(0, Math.min(g, 262143));
b = Math.max(0, Math.min(b, 262143));
// combine RGB
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
}
Here the decode method should give the RGB color space in hex format (I'm quite sure it's all right, as most of the answers are the same). The problem I am facing is that I am not quite sure how to call it inside the onPreviewFrame method so that it logs the RGB values separately.
N.B. Like I said, I have seen a lot of similar questions but could not find a solution in them. I do not want to store a file (image/video), as I only need the RGB/YUV values from the live camera preview when I point the camera at some color.
I need the RGB or YUV values because I want to plot them on a graph against time.
Any help will be much appreciated.
Well, if the problem is to get the separate values of R, G and B from the RGB array, check this SO post here.
Hope it helps!
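In case a concrete example helps, here is a minimal sketch of that idea: each entry of the array filled by decodeYUV420SP is a packed ARGB int, so the channels can be pulled out with Color (or bit shifts) and logged. Sampling the centre pixel here is just an illustrative choice:

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Camera.Size size = camera.getParameters().getPreviewSize();
    int[] rgb = new int[size.width * size.height];
    decodeYUV420SP(rgb, data, size.width, size.height);
    // row-major layout: index = y * width + x; sample the centre pixel
    int pixel = rgb[(size.height / 2) * size.width + (size.width / 2)];
    int r = Color.red(pixel);   // (pixel >> 16) & 0xff
    int g = Color.green(pixel); // (pixel >> 8) & 0xff
    int b = Color.blue(pixel);  // pixel & 0xff
    Log.d("RGB", "r=" + r + " g=" + g + " b=" + b);
}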
My application overrides the onPreviewFrame callback to pass the current camera frame to a WebRTC native function. This works perfectly; however, I want to be able to switch to sending a static frame instead of video if that option has been selected in my app.
So far I have created a YUV NV21 image, which I am storing in the assets dir. All attempts to pass that frame down to the native function have resulted in purple/green stripes rather than the actual image.
This is what I have so far:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
previewBufferLock.lock();
if (mFrameProvider.isEnabled()) {
mFrameProvider.overwriteWithFrame(data, expectedFrameSize);
}
if (isCaptureRunning) {
if (data.length == expectedFrameSize) {
ProvideCameraFrame(data, expectedFrameSize, context);
cameraUtils.addCallbackBuffer(camera, data);
}
}
previewBufferLock.unlock();
}
@Override
public byte[] overwriteWithPreviewFrame(byte[] data, int expectedFrameSize) {
if (mFrameData == null) {
loadPreviewFrame();
}
for (int i=0; i < expectedFrameSize; i++) {
if (i < mFrameData.length) {
data[i] = mFrameData[i];
}
}
return data;
}
And
private void loadPreviewFrame() {
try {
InputStream open = mContext.getResources().getAssets().open(PREVIEW_FRAME_FILE);
mFrameData = IOUtils.toByteArray(open);
open.close();
} catch (Exception e) {
Log.e("", "", e);
}
}
I have tried converting the image to a bitmap too. So the question is: how can I open a YUV frame from assets and convert it into a suitable format to pass to the native methods?
Right, after a long fight with the Android API I have managed to get this working.
There were two issues that caused the green/purple output:
Loss of data: the generated YUV frame was larger than the original preview frame at the same resolution, so the data being passed down to the native code was missing around 30% of its image data.
Wrong resolution: the native code required the resolution of the preview frame, not that of the camera.
Below is a working solution for anyone who wishes to add a static frame.
So, the updated code:
@Override
public byte[] getPreviewFrameData(int width, int height) {
if (mPreviewFrameData == null) {
loadPreviewFrame(width, height);
}
return mPreviewFrameData;
}
private void loadPreviewFrame(int width, int height) {
try {
Bitmap previewImage = BitmapFactory.decodeResource(mContext.getResources(), R.drawable.frame);
Bitmap resizedPreviewImage = Bitmap.createScaledBitmap(previewImage, width, height, false);
BitmapConverter bitmapConverter = new BitmapConverter();
mPreviewFrameData = bitmapConverter.convertToNV21(resizedPreviewImage);
} catch (Exception e) {
Log.e("DisabledCameraFrameProvider", "Failed to loadPreviewFrame");
}
}
class BitmapConverter {
byte [] convertToNV21(Bitmap bitmap) {
int inputWidth = bitmap.getWidth();
int inputHeight = bitmap.getHeight();
int [] argb = new int[inputWidth * inputHeight];
bitmap.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
byte [] yuv = new byte[inputWidth*inputHeight*3/2];
encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
bitmap.recycle();
return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff);
Y = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
}
index ++;
}
}
}
}
Then, finally, in your callback:
public void onPreviewFrame(byte[] data, Camera camera) {
byte[] bytes = data;
if (!mProvider.isVideoEnabled()) {
Camera.Size previewSize = camera.getParameters().getPreviewSize();
bytes = mProvider.getPreviewFrameData(previewSize.width, previewSize.height);
}
ProvideCameraFrame(bytes, bytes.length, context);
}
The key was to scale the image to the camera preview size and convert the image to YUV colour space.
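A quick way to catch this kind of problem is to check the buffer size: an NV21 frame is always width * height * 3 / 2 bytes for the preview size in use. A small sanity check along those lines (illustrative only, placed inside a callback like the one above):

Camera.Size previewSize = camera.getParameters().getPreviewSize();
int expectedFrameSize = previewSize.width * previewSize.height * 3 / 2;
if (bytes.length != expectedFrameSize) {
    Log.w("FrameProvider", "frame size mismatch: got " + bytes.length
            + " bytes, expected " + expectedFrameSize);
}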
I am writing an Android camera app.
The effect I want to implement is a preview that keeps the red parts of the scene in color while the rest of the screen is shown in grayscale.
After converting the raw data to RGB, I multiply the R, G and B components by the usual scalars to get grayscale while preserving the red part of the preview.
But after I override the onDraw(Canvas canvas) method in my custom View, the result looks like a blue-filter effect.
Can anybody give me a hint about which step I got wrong?
Thank you.
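To make the intended effect concrete, this is roughly the per-pixel logic I am aiming for (a sketch only, not my actual code; the red threshold and the luma weights are illustrative):

// keep "red" pixels, turn everything else into gray by packing the same
// luma value into the R, G and B channels of the ARGB int
static void keepRedOnly(int[] argb) {
    for (int k = 0; k < argb.length; k++) {
        int r = Color.red(argb[k]);
        int g = Color.green(argb[k]);
        int b = Color.blue(argb[k]);
        boolean isReddish = r > 150 && g < 100 && b < 100; // arbitrary threshold
        if (!isReddish) {
            int gray = (int) (0.2126f * r + 0.7152f * g + 0.0722f * b);
            argb[k] = 0xff000000 | (gray << 16) | (gray << 8) | gray;
        }
    }
}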
The code of my custom View is below
package com.example.macampreviewdemo;
public class CamPreviewDemoActivity extends Activity {
/** Called when the activity is first created. */
int h, w;
public String tag = "tag";
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Log.i(tag, "onCreate");
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
h = metrics.heightPixels;
w = metrics.widthPixels;
this.requestWindowFeature(Window.FEATURE_NO_TITLE);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
Display display = ((WindowManager)
getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
setContentView(R.layout.main);
setRequestedOrientation(0);
ViewToDraw dtw = (ViewToDraw) findViewById(R.id.vtd);
CameraView cameraView = new CameraView(this, dtw, w, h);
((FrameLayout) findViewById(R.id.preview)).addView(cameraView);
}
}
And this:
package com.example.macampreviewdemo;
public class ViewToDraw extends View{
public String tag = "tag";
public byte[] image;
public boolean isCameraSet = false;
public int imgWidth, imgHeight;
Bitmap overlayBitmap;
Matrix matrix;
public ViewToDraw(Context context, AttributeSet attrs) {
super(context, attrs);
matrix = new Matrix();
}
public void cameraSet(){
isCameraSet = true;
}
public void putImage(byte[] img){
image = img;
}
@Override
protected void onDraw(Canvas canvas){
Log.i(tag, "onDraw() ");
int size = imgWidth * imgHeight;
int[] rgb = new int[imgWidth * imgHeight];
if(isCameraSet){
rgb = convertYUV420_NV21toARGB8888(image, imgWidth, imgHeight);
for (int k = 0; k < size; k++) {
if(Color.red(rgb[k]) == 255 &&
Color.green(rgb[k]) == 0 &&
Color.blue(rgb[k]) == 50){}
else{
rgb[k] = (int) (
(0.2126 * Color.red(rgb[k])) +
(0.7152 * Color.green(rgb[k])) +
(0.0722 * Color.blue(rgb[k]))
);
}
}
Log.i("tag", "rgb length = " + rgb.length);
overlayBitmap =
Bitmap.createBitmap(rgb, 0, imgWidth,
imgWidth, imgHeight,
Bitmap.Config.RGB_565);
canvas.drawBitmap(overlayBitmap, matrix, null);
overlayBitmap.recycle();
}
}
static public void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
int rtmp, gtmp, btmp;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)r = 0;
else if (r > 262143)r = 262143;
if (g < 0)g = 0;
else if (g > 262143)g = 262143;
if (b < 0)b = 0;
else if (b > 262143)b = 262143;
rtmp = ((r << 6) & 0xff0000);
gtmp = ((g >> 2) & 0xff00);
btmp = ((b >> 10) & 0xff);
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00)
| ((b >> 10) & 0xff);
}
}
}
int[] yuv420ToGrayScale(byte[] yuv420, int width, int height){
int size = width * height;
int y1, y2, y3, y4;
int[] pixel = new int[size];
for (int i = 0; i < size; i+=2) {
y1 = yuv420[i] & 0xff;
y2 = yuv420[i + 1] & 0xff;
y3 = yuv420[i + width] & 0xff;
y4 = yuv420[i + width + 1] & 0xff;
pixel[i] = yuv420[i];
pixel[i + 1] = yuv420[i +1];
pixel[i + width ] = yuv420[width + i];
pixel[i + width + 1] = yuv420[i + width + 1];
if (i!=0 && (i+2)%width==0)
i+=width;
}
return pixel;
}
/**
* Converts YUV420 NV21 to ARGB8888
*
* @param data byte array in YUV420 NV21 format.
* @param width pixels width
* @param height pixels height
* @return an ARGB8888 pixel int array, where each int is one pixel's ARGB value.
*/
public static int[] convertYUV420_NV21toARGB8888(byte [] data, int width, int height) {
int size = width*height;
int offset = size;
int[] pixels = new int[size];
int u, v, y1, y2, y3, y4;
// i along Y and the final pixels
// k along pixels U and V
for(int i=0, k=0; i < size; i+=2, k+=1) {
y1 = data[i ]&0xff;
y2 = data[i+1]&0xff;
y3 = data[width+i ]&0xff;
y4 = data[width+i+1]&0xff;
v = data[offset+k ]&0xff;
u = data[offset+k+1]&0xff;
v = v-128;
u = u-128;
pixels[i ] = convertYUVtoARGB(y1, u, v);
pixels[i+1] = convertYUVtoARGB(y2, u, v);
pixels[width+i ] = convertYUVtoARGB(y3, u, v);
pixels[width+i+1] = convertYUVtoARGB(y4, u, v);
if (i!=0 && (i+2)%width==0)
i+=width;
}
return pixels;
}
private static int convertYUVtoARGB(int y, int u, int v) {
int r,g,b;
r = y + (int)(1.402f*u);
g = y - (int)(0.344f*v + 0.714f*u);
b = y + (int)(1.772f*v);
r = r>255? 255 : r<0 ? 0 : r;
g = g>255? 255 : g<0 ? 0 : g;
b = b>255? 255 : b<0 ? 0 : b;
return 0xff000000 | (r<<16) | (g<<8) | b;
}
public static void myNV21ToRGB(int width, int height){
int yy, u, v;
int frame_size = width * height;
}
}
And this:
package com.example.macampreviewdemo;
public class CameraView extends SurfaceView implements SurfaceHolder.Callback{
public Camera mycamera;
List<Camera.Size> cameraSize;
private SurfaceHolder mHolder;
public ViewToDraw vtd;
int pickedH, pickedW;
int defaultH, defaultW;
public String tag = "tag";
public CameraView(Context context, ViewToDraw _vtd, int width, int height) {
super(context);
// TODO Auto-generated constructor stub
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
vtd = _vtd;
defaultH = height;
defaultW = width;
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
Log.i("tag"," surfaceCreated");
int i;
mycamera = Camera.open();
cameraSize = mycamera.getParameters().getSupportedPreviewSizes();
if(cameraSize != null){
// pick resolution
pickedH = defaultH;
pickedW = defaultW;
for(i=0;i<cameraSize.size();i++){
if(cameraSize.get(i).width < defaultW){
break;
}else{
pickedH = cameraSize.get(i).height;
pickedW = cameraSize.get(i).width;
}
}
}else{
Log.e("tag","null");
};
try {
mycamera.setPreviewDisplay(holder);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
Log.i("tag","surfaceChanged");
Camera.Parameters parameters = mycamera.getParameters();
parameters.setPreviewSize(pickedW, pickedH);
mycamera.setParameters(parameters);
//create buffer
PixelFormat p = new PixelFormat();
PixelFormat.getPixelFormatInfo(parameters.getPreviewFormat(),p);
int bufSize = (pickedW*pickedH*p.bitsPerPixel)/8;
//add buffers
byte[] buffer = new byte[bufSize];
mycamera.addCallbackBuffer(buffer);
buffer = new byte[bufSize];
mycamera.addCallbackBuffer(buffer);
buffer = new byte[bufSize];
mycamera.addCallbackBuffer(buffer);
mycamera.setPreviewCallbackWithBuffer(new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera camera) {
Log.i("tag", "onPreviewFrame");
Log.i("tag", "pickedH = " + pickedH);
Log.i("tag", "pickedW = " + pickedW);
vtd.putImage(data);
vtd.cameraSet();
vtd.imgHeight = pickedH;
vtd.imgWidth = pickedW;
vtd.invalidate();
mycamera.addCallbackBuffer(data);
}
});
mycamera.startPreview();
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
Log.i("tag", "surfaceDestroyed");
mycamera.setPreviewCallback(null);
mycamera.release();
mycamera = null;
}
}
And this:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context=".CamPreviewDemoActivity" >
<FrameLayout android:id="@+id/FrameLayout01"
android:layout_height="fill_parent" android:layout_width="fill_parent">
<FrameLayout
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:id="#+id/preview">
</FrameLayout>
<com.example.macampreviewdemo.ViewToDraw
android:id="#+id/vtd"
android:layout_height="fill_parent"
android:layout_width="fill_parent"/>
</FrameLayout>
</RelativeLayout>
I have created a demo application which opens the camera. Now I want to get the color of the pixel where the user touches the live camera preview.
I have tried overriding onTouchEvent, and I succeed in getting the x, y position of the pixel, but I am not getting the RGB color value from it; it always shows 0,0,0.
All suggestions are welcome, including any alternative way to achieve the same functionality. [Excluding OpenCV, because it also requires installing the OpenCVManager APK to support my application.]
Code :
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback, PreviewCallback {
private Camera camera;
private SurfaceHolder holder;
int[] myPixels;
public CameraPreview(Context context, AttributeSet attrs, int defStyle) {
super(context, attrs, defStyle);
}
public CameraPreview(Context context, AttributeSet attrs) {
super(context, attrs);
}
public CameraPreview(Context context) {
super(context);
}
public void init(Camera camera) {
this.camera = camera;
initSurfaceHolder();
}
@SuppressWarnings("deprecation") // needed for < 3.0
private void initSurfaceHolder() {
holder = getHolder();
holder.addCallback(this);
holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
initCamera(holder);
}
private void initCamera(SurfaceHolder holder) {
try {
camera.setPreviewDisplay(holder);
camera.getParameters().setPreviewFormat(ImageFormat.NV21);
camera.setPreviewCallback(this);
camera.startPreview();
} catch (Exception e) {
Log.d("Error setting camera preview", e);
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
}
@Override
public boolean onTouchEvent(MotionEvent event) {
if(event.getAction() == MotionEvent.ACTION_DOWN)
{
android.util.Log.d("touched", "called");
/* int x = (int)event.getX();
int y = (int)event.getY();
android.util.Log.d("touched pixel :", x+" "+y);
setDrawingCacheEnabled(true);
buildDrawingCache();
Bitmap mBmp = getDrawingCache();
int pixel = mBmp.getPixel(x, y);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
android.util.Log.d("touched pixel color :", redValue+" "+greenValue+" "+blueValue);
android.util.Log.d("touched pixel color from preview:", redValue+" "+greenValue+" "+blueValue);
*/
//how to get particular pixel from myPixels[]
}
return false;
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
android.util.Log.d("onPreviewFrame", "called");
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels; transform NV21 pixel data into RGB pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion
myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
}
public int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
// here we're using our own internal PImage attributes
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)
r = 0;
else if (r > 262143)
r = 262143;
if (g < 0)
g = 0;
else if (g > 262143)
g = 262143;
if (b < 0)
b = 0;
else if (b > 262143)
b = 262143;
// use internal buffer instead of pixels for UX reasons
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
| ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
}
I've followed a different approach to solve it. I'll post the code as soon as I get free.
Algorithm:
Create an overlay on the live camera preview.
When the user touches the screen, update the overlay with the RGB data of the latest YUV buffer received from the live camera.
Pick the RGB color from the overlay image.
It seems like myPixels is a 1D representation of 2D data (width x height), which means myPixels has a length of (width * height).
Let's say the pixel is (x, y); then, since the pixels are stored row by row:
int idx = (width * y) + x;
int color = myPixels[idx];
With the above information you can modify the decodeYUV420SP method to output only the color of a particular pixel.
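To make that concrete, here is a minimal sketch of how the touch handler could look up the color once onPreviewFrame has filled myPixels (illustrative only; it assumes the preview image and the view have the same dimensions, otherwise the touch coordinates need to be scaled first):

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN && myPixels != null) {
        int width = camera.getParameters().getPreviewSize().width;
        int x = (int) event.getX();
        int y = (int) event.getY();
        int color = myPixels[(width * y) + x]; // row-major: index = y * width + x
        android.util.Log.d("touched pixel color", Color.red(color) + " "
                + Color.green(color) + " " + Color.blue(color));
    }
    return false;
}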