There are several tutorials out there that explain how to get a simple camera preview up and running on an Android device. But I couldn't find any example that explains how to manipulate the image before it's rendered.
What I want to do is implement custom color filters to simulate e.g. red and/or green deficiency.
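For the filter itself, independent of the camera plumbing discussed below, a minimal sketch: Android's ColorMatrixColorFilter can remix the color channels of anything drawn with a Paint. The coefficients here are illustrative only, not a calibrated deficiency model:
import android.graphics.ColorMatrix;
import android.graphics.ColorMatrixColorFilter;
import android.graphics.Paint;

// Rough protanopia-style remix: collapse red and green toward each other.
// The numbers are illustrative, not a clinically accurate simulation.
ColorMatrix cm = new ColorMatrix(new float[] {
        0.57f, 0.43f, 0f,    0f, 0f,  // R' from R,G
        0.56f, 0.44f, 0f,    0f, 0f,  // G' from R,G
        0f,    0.24f, 0.76f, 0f, 0f,  // B' from G,B
        0f,    0f,    0f,    1f, 0f   // alpha unchanged
});
Paint filterPaint = new Paint();
filterPaint.setColorFilter(new ColorMatrixColorFilter(cm));
// Any canvas.drawBitmap(..., filterPaint) call now applies the filter.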
I did some research on this and put together a working(ish) example. Here's what I found. It's pretty easy to get the raw data coming off the camera. It's returned as a YUV byte array. You need to draw it manually onto a surface in order to be able to modify it. To do that you need a SurfaceView that you can manually run draw calls on. There are a couple of flags you can set that accomplish that.
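Concretely, the two flags in question look like this (they show up again in the full example further down):
// Allow manual draw calls on this view (a SurfaceView normally skips onDraw).
setWillNotDraw(false);
// Deprecated, but a "normal" surface is what lets you lock the canvas and
// draw on it yourself instead of having the camera push buffers directly.
getHolder().setType(SurfaceHolder.SURFACE_TYPE_NORMAL);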
In order to do the draw call manually, you need to convert the byte array into a bitmap of some sort. Bitmaps and the BitmapDecoder don't seem to handle the YUV byte array very well at this point. There's been a bug filed for this, but I don't know what its status is. So people have been trying to decode the byte array into an RGB format themselves.
Doing the decoding manually seems to be kind of slow, and people have had various degrees of success with it. Something like this should probably really be done with native code at the NDK level.
Still, it is possible to get it working. Also, my little demo is just me spending a couple of hours hacking stuff together (I guess doing this caught my imagination a little too much ;)). So chances are that with some tweaking you could much improve on what I've managed to get working.
This little code snippet contains a couple of other gems I found as well. If all you want is to be able to draw over the surface, you can override the surface's onDraw function - you could potentially analyze the returned camera image and draw an overlay - which would be much faster than trying to process every frame. Also, I changed the surface type to SurfaceHolder.SURFACE_TYPE_NORMAL from what would be needed if you wanted the real camera preview to show up. So a couple of changes to the code - re-enabling the commented-out code:
//try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
And the:
SurfaceHolder.SURFACE_TYPE_NORMAL //SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS - for preview to work
Reverting those two changes should allow you to overlay frames based on the camera data on top of the real preview.
Anyway, here's a working piece of code - it should give you something to start with.
Just put an element like this in one of your layouts:
<pathtocustomview.MySurfaceView android:id="@+id/surface_camera"
    android:layout_width="fill_parent" android:layout_height="10dip"
    android:layout_weight="1">
</pathtocustomview.MySurfaceView>
And include this class in your source somewhere:
package pathtocustomview;
import java.io.IOException;
import java.nio.Buffer;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;
public class MySurfaceView extends SurfaceView implements Callback,
Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private boolean isPreviewRunning = false;
private byte [] rgbbuffer = new byte[256 * 256];
private int [] rgbints = new int[256 * 256];
protected final Paint rectanglePaint = new Paint();
public MySurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
rectanglePaint.setARGB(100, 200, 0, 0);
rectanglePaint.setStyle(Paint.Style.FILL);
rectanglePaint.setStrokeWidth(2);
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
protected void onDraw(Canvas canvas) {
canvas.drawRect(new Rect((int) (Math.random() * 100),
(int) (Math.random() * 100), 200, 200), rectanglePaint);
Log.w(this.getClass().getName(), "On Draw Called");
}
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
}
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
this.setWillNotDraw(false); // This allows us to make our own draw
// calls to this canvas
mCamera = Camera.open();
Camera.Parameters p = mCamera.getParameters();
p.setPreviewSize(240, 160);
mCamera.setParameters(p);
//try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
public void onPreviewFrame(byte[] data, Camera camera) {
Log.d("Camera", "Got a camera frame");
Canvas c = null;
if(mHolder == null){
return;
}
try {
synchronized (mHolder) {
c = mHolder.lockCanvas(null);
// Do your drawing here
// So this data value you're getting back is formatted in YUV format and you can't do much
// with it until you convert it to rgb
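// Note: only the luma (Y) plane is copied below (240 bytes per row into a
// 256-wide buffer), and each byte is then just widened to an int, so this
// is a quick hack to get something visible, not a correct color conversion.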
int bwCounter=0;
int yuvsCounter=0;
for (int y=0;y<160;y++) {
System.arraycopy(data, yuvsCounter, rgbbuffer, bwCounter, 240);
yuvsCounter=yuvsCounter+240;
bwCounter=bwCounter+256;
}
for(int i = 0; i < rgbints.length; i++){
rgbints[i] = (int)rgbbuffer[i];
}
//decodeYUV(rgbbuffer, data, 100, 100);
c.drawBitmap(rgbints, 0, 256, 0, 0, 256, 256, false, new Paint());
Log.d("SOMETHING", "Got Bitmap");
}
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (c != null) {
mHolder.unlockCanvasAndPost(c);
}
}
}
}
I used walta's solution, but I had some problems with the YUV conversion, the camera frame output sizes, and a crash on camera release.
Finally, the following code worked for me:
public class MySurfaceView extends SurfaceView implements Callback, Camera.PreviewCallback {
private static final String TAG = "MySurfaceView";
private int width;
private int height;
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] rgbints;
private boolean isPreviewRunning = false;
private int mMultiplyColor;
public MySurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
mHolder = getHolder();
mHolder.addCallback(this);
mMultiplyColor = getResources().getColor(R.color.multiply_color);
}
// @Override
// protected void onDraw(Canvas canvas) {
// Log.w(this.getClass().getName(), "On Draw Called");
// }
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
if (isPreviewRunning)
return;
this.setWillNotDraw(false); // This allows us to make our own draw calls to this canvas
mCamera = Camera.open();
isPreviewRunning = true;
Camera.Parameters p = mCamera.getParameters();
Size size = p.getPreviewSize();
width = size.width;
height = size.height;
p.setPreviewFormat(ImageFormat.NV21);
showSupportedCameraFormats(p);
mCamera.setParameters(p);
rgbints = new int[width * height];
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
//mHolder.removeCallback(this);
mCamera.setPreviewCallback(null);
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
// Log.d("Camera", "Got a camera frame");
if (!isPreviewRunning)
return;
Canvas canvas = null;
if (mHolder == null) {
return;
}
try {
synchronized (mHolder) {
canvas = mHolder.lockCanvas(null);
int canvasWidth = canvas.getWidth();
int canvasHeight = canvas.getHeight();
decodeYUV(rgbints, data, width, height);
// draw the decoded image, centered on canvas
canvas.drawBitmap(rgbints, 0, width, canvasWidth-((width+canvasWidth)>>1), canvasHeight-((height+canvasHeight)>>1), width, height, false, null);
// use some color filter
canvas.drawColor(mMultiplyColor, Mode.MULTIPLY);
}
} catch (Exception e){
e.printStackTrace();
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (canvas != null) {
mHolder.unlockCanvasAndPost(canvas);
}
}
}
/**
* Decodes a YUV frame into a buffer which can be used to create a bitmap.
* Use this for OS versions below FROYO, which lack a native YUV decoder.
* Decodes the Y, U, and V values of the YUV 420 buffer described as
* YCbCr_422_SP by Android.
*
* @param out
* the outgoing array of RGB pixels
* @param fg
* the incoming frame bytes
* @param width
* of source frame
* @param height
* of source frame
* @throws NullPointerException
* @throws IllegalArgumentException
*/
public void decodeYUV(int[] out, byte[] fg, int width, int height) throws NullPointerException, IllegalArgumentException {
int sz = width * height;
if (out == null)
throw new NullPointerException("buffer out is null");
if (out.length < sz)
throw new IllegalArgumentException("buffer out size " + out.length + " < minimum " + sz);
if (fg == null)
throw new NullPointerException("buffer 'fg' is null");
if (fg.length < sz * 3 / 2)
throw new IllegalArgumentException("buffer fg size " + fg.length + " < minimum " + sz * 3 / 2);
int i, j;
int Y, Cr = 0, Cb = 0;
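// NV21 layout: width*height luma (Y) bytes come first, followed by an
// interleaved chroma plane at quarter resolution (one chroma pair per
// 2x2 block of luma pixels). The shift-and-add expressions below are a
// fixed-point approximation of the YCbCr -> RGB conversion.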
for (j = 0; j < height; j++) {
int pixPtr = j * width;
final int jDiv2 = j >> 1;
for (i = 0; i < width; i++) {
Y = fg[pixPtr];
if (Y < 0)
Y += 255;
if ((i & 0x1) != 1) {
final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
Cb = fg[cOff];
if (Cb < 0)
Cb += 127;
else
Cb -= 128;
Cr = fg[cOff + 1];
if (Cr < 0)
Cr += 127;
else
Cr -= 128;
}
int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
if (R < 0)
R = 0;
else if (R > 255)
R = 255;
int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
if (G < 0)
G = 0;
else if (G > 255)
G = 255;
int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
if (B < 0)
B = 0;
else if (B > 255)
B = 255;
out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
}
}
}
private void showSupportedCameraFormats(Parameters p) {
List<Integer> supportedPictureFormats = p.getSupportedPreviewFormats();
Log.d(TAG, "preview format:" + cameraFormatIntToString(p.getPreviewFormat()));
for (Integer x : supportedPictureFormats) {
Log.d(TAG, "suppoterd format: " + cameraFormatIntToString(x.intValue()));
}
}
private String cameraFormatIntToString(int format) {
switch (format) {
case PixelFormat.JPEG:
return "JPEG";
case PixelFormat.YCbCr_420_SP:
return "NV21";
case PixelFormat.YCbCr_422_I:
return "YUY2";
case PixelFormat.YCbCr_422_SP:
return "NV16";
case PixelFormat.RGB_565:
return "RGB_565";
default:
return "Unknown:" + format;
}
}
}
To use it, run the following code from your activity's onCreate:
SurfaceView surfaceView = new MySurfaceView(this, null);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(layoutParams);
mRelativeLayout.addView(surfaceView);
Have you looked at GPUImage?
It was originally an OSX/iOS library made by Brad Larson, which exists as an Objective-C wrapper around OpenGL/ES.
https://github.com/BradLarson/GPUImage
The people at CyberAgent have made an Android port (which doesn't have complete feature parity): a set of Java wrappers on top of the OpenGL ES stuff. It's relatively high level and pretty easy to implement, with a lot of the same functionality mentioned above...
https://github.com/CyberAgent/android-gpuimage
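For a feel of how little code a filter takes with the Android port, a hedged sketch (package and class names as of the versions I've seen; check the project's README for the current artifact and filter list):
import jp.co.cyberagent.android.gpuimage.GPUImage;
import jp.co.cyberagent.android.gpuimage.GPUImageColorMatrixFilter;

// Apply a 4x4 color-matrix filter to a bitmap; the matrix is illustrative only.
GPUImage gpuImage = new GPUImage(context);
gpuImage.setImage(bitmap);
gpuImage.setFilter(new GPUImageColorMatrixFilter(1.0f, new float[] {
        0.57f, 0.43f, 0f,    0f,
        0.56f, 0.44f, 0f,    0f,
        0f,    0.24f, 0.76f, 0f,
        0f,    0f,    0f,    1f
}));
Bitmap filtered = gpuImage.getBitmapWithFilterApplied();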
I'm trying to replace the frames from the device camera (which are normally used in an AR session) with frames from a camera streamed via WebRTC. To render the WebRTC stream I'm using webrtc.SurfaceViewRenderer, and to render the AR session I'm using opengl.GLSurfaceView in activity_main.xml. These two views work as they are supposed to separately, but now I want to combine them. The problem is that I don't know how to extract the frames from the WebRTC stream. The closest function I have found is Bitmap bmp = surfaceViewRenderer.getDrawingCache(); to capture the pixels, but it always returns null.
If I can get the pixels from the surfaceViewRenderer, my idea is to bind them to a texture and then render that texture as a background in the AR scene.
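For the texture part of that idea, a minimal sketch, assuming a current GL context and a Bitmap bmp already obtained somehow (the extraction is exactly the unsolved part):
// Upload a Bitmap into a GL texture that a background quad could sample from.
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);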
The code I have been following can be found at https://github.com/google-ar/arcore-android-sdk/blob/master/samples/hello_ar_java/app/src/main/java/com/google/ar/core/examples/java/helloar/HelloArActivity.java
This is the code where the AR scene is rendered using the device camera:
public void onDrawFrame(GL10 gl10) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
try {
session.setCameraTextureName(backgroundRenderer.getTextureId());
//replace this frame with the frames that are rendered in webrtc's SurfaceViewRenderer
Frame frame = session.update();
Camera camera = frame.getCamera();
backgroundRenderer.draw(frame);
.
.
.
And this is what my activity_main.xml looks like. In the end I will remove the SurfaceViewRenderer section:
<LinearLayout
<org.webrtc.SurfaceViewRenderer
android:id="#+id/local_gl_surface_view"
android:layout_width="match_parent"
android:layout_height="248dp"
android:layout_gravity="bottom|end" />
<android.opengl.GLSurfaceView
android:id="#+id/surfaceview"
android:layout_width="match_parent"
android:layout_height="195dp"
android:layout_gravity="top" />
</LinearLayout>
I know this is an old question but if anyone requires the answer, here it is.
So after brainstorming for days and after a lot of R&D, I've found a workaround.
You don't need anything extra to add.
So the story here is to use ARCore and WebRTC at the same time, sharing what you see in your ARCore session over WebRTC so that the remote user sees it. Correct?
Well, the trick is to give your camera to ARCore and, instead of sharing the camera with WebRTC, create a screen-share video capturer. It's pretty simple, and WebRTC already supports it natively. (Let me know if you need the source code too.)
Open your ARCore activity in full screen by setting all those flags on your window, and start your ARCore session as normal. After that, initialise your WebRTC call and provide the screen-sharing logic instead of the camera video source, and voila! ARCore + WebRTC.
I implemented this with no issues, and the latency is good too (~100 ms).
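For reference, a minimal sketch of the screen-share capturer WebRTC ships with (org.webrtc.ScreenCapturerAndroid). SCREEN_CAPTURE_REQUEST and permissionData are placeholder names, surfaceTextureHelper and videoSource are assumed to exist as in the usage shown later in this answer, and the MediaProjection permission round-trip is mandatory:
// 1. Ask the user for screen-capture permission (inside an Activity).
MediaProjectionManager mpm =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mpm.createScreenCaptureIntent(), SCREEN_CAPTURE_REQUEST);

// 2. In onActivityResult, wrap the granted Intent in WebRTC's screen capturer.
VideoCapturer capturer = new ScreenCapturerAndroid(permissionData,
        new MediaProjection.Callback() {
            @Override
            public void onStop() { /* capture stopped */ }
        });
capturer.initialize(surfaceTextureHelper, getApplicationContext(), videoSource.getCapturerObserver());
capturer.startCapture(width, height, 30);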
Edit
I think sharing the screen isn't a good idea (the remote user would be able to see all your sensitive data, such as notifications).
Rather, what I did is:
extend the VideoCapturer class of WebRTC,
use the PixelCopy class to create bitmaps,
and feed it the SurfaceView in which your ARCore session is being rendered, converting the copies to bitmaps, those bitmaps to frames, and then sending them to WebRTC's onFrameCaptured function.
This way you can share just your SurfaceView instead of the complete screen, you don't need any extra permission such as screen recording, and it looks much more like actual real-time video sharing.
Edited 2 (Final Solution)
Step 1:
Create a custom video capturer that implements WebRTC's VideoCapturer interface:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import android.opengl.GLES20;
import android.opengl.GLUtils;
import android.os.Build;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import android.os.SystemClock;
import android.util.Log;
import android.view.PixelCopy;
import android.view.SurfaceView;
import androidx.annotation.RequiresApi;
import com.jaswant.webRTCIsAwesome.ArSessionActivity;
import org.webrtc.CapturerObserver;
import org.webrtc.JavaI420Buffer;
import org.webrtc.SurfaceTextureHelper;
import org.webrtc.TextureBufferImpl;
import org.webrtc.VideoCapturer;
import org.webrtc.VideoFrame;
import org.webrtc.VideoSink;
import org.webrtc.YuvConverter;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
@RequiresApi(api = Build.VERSION_CODES.N)
public class CustomVideoCapturer implements VideoCapturer, VideoSink {
private static int VIEW_CAPTURER_FRAMERATE_MS = 10;
private int width;
private int height;
private SurfaceView view;
private Context context;
private CapturerObserver capturerObserver;
private SurfaceTextureHelper surfaceTextureHelper;
private boolean isDisposed;
private Bitmap viewBitmap;
private Handler handlerPixelCopy = new Handler(Looper.getMainLooper());
private Handler handler = new Handler(Looper.getMainLooper());
private AtomicBoolean started = new AtomicBoolean(false);
private long numCapturedFrames;
private YuvConverter yuvConverter = new YuvConverter();
private TextureBufferImpl buffer;
private long start = System.nanoTime();
private final Runnable viewCapturer = new Runnable() {
@RequiresApi(api = Build.VERSION_CODES.N)
@Override
public void run() {
boolean dropFrame = view.getWidth() == 0 || view.getHeight() == 0;
// Only capture the view if the dimensions have been established
if (!dropFrame) {
// Draw view into bitmap backed canvas
final Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
final HandlerThread handlerThread = new HandlerThread(ArSessionActivity.class.getSimpleName());
handlerThread.start();
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
PixelCopy.request(view, bitmap, copyResult -> {
if (copyResult == PixelCopy.SUCCESS) {
viewBitmap = getResizedBitmap(bitmap, 500);
if (viewBitmap != null) {
Log.d("BITMAP--->", viewBitmap.toString());
sendToServer(viewBitmap, yuvConverter, start);
}
} else {
Log.e("Pixel_copy-->", "Couldn't create bitmap of the SurfaceView");
}
handlerThread.quitSafely();
}, new Handler(handlerThread.getLooper()));
} else {
Log.i("Pixel_copy-->", "Saving an image of a SurfaceView is only supported from API 24");
}
} catch (Exception ignored) {
}
}
}
};
private Thread captureThread;
public CustomVideoCapturer(SurfaceView view, int framePerSecond) {
if (framePerSecond <= 0)
throw new IllegalArgumentException("framePersecond must be greater than 0");
this.view = view;
float tmp = (1f / framePerSecond) * 1000;
VIEW_CAPTURER_FRAMERATE_MS = Math.round(tmp);
}
private static void bitmapToI420(Bitmap src, JavaI420Buffer dest) {
int width = src.getWidth();
int height = src.getHeight();
if (width != dest.getWidth() || height != dest.getHeight())
return;
int strideY = dest.getStrideY();
int strideU = dest.getStrideU();
int strideV = dest.getStrideV();
ByteBuffer dataY = dest.getDataY();
ByteBuffer dataU = dest.getDataU();
ByteBuffer dataV = dest.getDataV();
for (int line = 0; line < height; line++) {
if (line % 2 == 0) {
for (int x = 0; x < width; x += 2) {
int px = src.getPixel(x, line);
byte r = (byte) ((px >> 16) & 0xff);
byte g = (byte) ((px >> 8) & 0xff);
byte b = (byte) (px & 0xff);
dataY.put(line * strideY + x, (byte) (((66 * r + 129 * g + 25 * b) >> 8) + 16));
dataU.put(line / 2 * strideU + x / 2, (byte) (((-38 * r + -74 * g + 112 * b) >> 8) + 128));
dataV.put(line / 2 * strideV + x / 2, (byte) (((112 * r + -94 * g + -18 * b) >> 8) + 128));
px = src.getPixel(x + 1, line);
r = (byte) ((px >> 16) & 0xff);
g = (byte) ((px >> 8) & 0xff);
b = (byte) (px & 0xff);
dataY.put(line * strideY + x + 1, (byte) (((66 * r + 129 * g + 25 * b) >> 8) + 16));
}
} else {
for (int x = 0; x < width; x += 1) {
int px = src.getPixel(x, line);
byte r = (byte) ((px >> 16) & 0xff);
byte g = (byte) ((px >> 8) & 0xff);
byte b = (byte) (px & 0xff);
dataY.put(line * strideY + x, (byte) (((66 * r + 129 * g + 25 * b) >> 8) + 16));
}
}
}
}
public static Bitmap createFlippedBitmap(Bitmap source, boolean xFlip, boolean yFlip) {
try {
Matrix matrix = new Matrix();
matrix.postScale(xFlip ? -1 : 1, yFlip ? -1 : 1, source.getWidth() / 2f, source.getHeight() / 2f);
return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
} catch (Exception e) {
return null;
}
}
private void checkNotDisposed() {
if (this.isDisposed) {
throw new RuntimeException("capturer is disposed.");
}
}
@Override
public synchronized void initialize(SurfaceTextureHelper surfaceTextureHelper, Context context, CapturerObserver capturerObserver) {
this.checkNotDisposed();
if (capturerObserver == null) {
throw new RuntimeException("capturerObserver not set.");
} else {
this.context = context;
this.capturerObserver = capturerObserver;
if (surfaceTextureHelper == null) {
throw new RuntimeException("surfaceTextureHelper not set.");
} else {
this.surfaceTextureHelper = surfaceTextureHelper;
}
}
}
@Override
public void startCapture(int width, int height, int fps) {
this.checkNotDisposed();
this.started.set(true);
this.width = width;
this.height = height;
this.capturerObserver.onCapturerStarted(true);
this.surfaceTextureHelper.startListening(this);
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);
/*try {
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
HandlerThread handlerThread = new HandlerThread(CustomVideoCapturer.class.getSimpleName());
capturerObserver.onCapturerStarted(true);
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
YuvConverter yuvConverter = new YuvConverter();
TextureBufferImpl buffer = new TextureBufferImpl(width, height, VideoFrame.TextureBuffer.Type.RGB, textures[0], new Matrix(), surfaceTextureHelper.getHandler(), yuvConverter, null);
// handlerThread.start();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
new Thread(() -> {
while (true) {
PixelCopy.request(view, bitmap, copyResult -> {
if (copyResult == PixelCopy.SUCCESS) {
viewBitmap = getResizedBitmap(bitmap, 500);
long start = System.nanoTime();
Log.d("BITMAP--->", viewBitmap.toString());
sendToServer(viewBitmap, yuvConverter, buffer, start);
} else {
Log.e("Pixel_copy-->", "Couldn't create bitmap of the SurfaceView");
}
handlerThread.quitSafely();
}, new Handler(Looper.getMainLooper()));
}
}).start();
}
} catch (Exception ignored) {
}*/
}
private void sendToServer(Bitmap bitmap, YuvConverter yuvConverter, long start) {
try {
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
buffer = new TextureBufferImpl(width, height, VideoFrame.TextureBuffer.Type.RGB, textures[0], new Matrix(), surfaceTextureHelper.getHandler(), yuvConverter, null);
Bitmap flippedBitmap = createFlippedBitmap(bitmap, true, false);
surfaceTextureHelper.getHandler().post(() -> {
if (flippedBitmap != null) {
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]); // bind before configuring/uploading
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, flippedBitmap, 0);
VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);
long frameTime = System.nanoTime() - start;
VideoFrame videoFrame = new VideoFrame(i420Buf, 180, frameTime);
capturerObserver.onFrameCaptured(videoFrame);
videoFrame.release();
try {
viewBitmap.recycle();
} catch (Exception e) {
}
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);
}
});
} catch (Exception ignored) {
}
}
@Override
public void stopCapture() throws InterruptedException {
this.checkNotDisposed();
CustomVideoCapturer.this.surfaceTextureHelper.stopListening();
CustomVideoCapturer.this.capturerObserver.onCapturerStopped();
started.set(false);
handler.removeCallbacksAndMessages(null);
handlerPixelCopy.removeCallbacksAndMessages(null);
}
@Override
public void changeCaptureFormat(int width, int height, int framerate) {
this.checkNotDisposed();
this.width = width;
this.height = height;
}
@Override
public void dispose() {
this.isDisposed = true;
}
@Override
public boolean isScreencast() {
return true;
}
private void sendFrame() {
final long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
/*surfaceTextureHelper.setTextureSize(width, height);
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
Matrix matrix = new Matrix();
matrix.preTranslate(0.5f, 0.5f);
matrix.preScale(1f, -1f);
matrix.preTranslate(-0.5f, -0.5f);
YuvConverter yuvConverter = new YuvConverter();
TextureBufferImpl buffer = new TextureBufferImpl(width, height,
VideoFrame.TextureBuffer.Type.RGB, textures[0], matrix,
surfaceTextureHelper.getHandler(), yuvConverter, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, viewBitmap, 0);
long frameTime = System.nanoTime() - captureTimeNs;
VideoFrame videoFrame = new VideoFrame(buffer.toI420(), 0, frameTime);
capturerObserver.onFrameCaptured(videoFrame);
videoFrame.release();
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);*/
// Create video frame
JavaI420Buffer buffer = JavaI420Buffer.allocate(viewBitmap.getWidth(), viewBitmap.getHeight());
bitmapToI420(viewBitmap, buffer);
VideoFrame videoFrame = new VideoFrame(buffer,
0, captureTimeNs);
// Notify the listener
if (started.get()) {
++this.numCapturedFrames;
this.capturerObserver.onFrameCaptured(videoFrame);
}
if (started.get()) {
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);
}
}
public long getNumCapturedFrames() {
return this.numCapturedFrames;
}
/**
* Reduces the size of the image.
*
* @param image
* @param maxSize
* @return
*/
public Bitmap getResizedBitmap(Bitmap image, int maxSize) {
int width = image.getWidth();
int height = image.getHeight();
try {
// Scale the longer edge down to maxSize, preserving the aspect ratio,
// then re-encode as JPEG at quality 50 to shrink the payload further.
float ratio = (float) width / height;
if (width >= height) {
width = maxSize;
height = Math.round(maxSize / ratio);
} else {
height = maxSize;
width = Math.round(maxSize * ratio);
}
Bitmap bitmap = Bitmap.createScaledBitmap(image, width, height, true);
ByteArrayOutputStream out = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 50, out);
return BitmapFactory.decodeStream(new ByteArrayInputStream(out.toByteArray()));
} catch (Exception e) {
return null;
}
}
@Override
public void onFrame(VideoFrame videoFrame) {
}
}
Step 2 (Usage):
CustomVideoCapturer videoCapturer = new CustomVideoCapturer(arSceneView, 20);
videoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast());
videoCapturer.initialize(surfaceTextureHelper, context, videoSource.getCapturerObserver());
videoCapturer.startCapture(resolutionHeight, resolutionWidth, 30);
After this you should be able to stream your AR frames to the remote user.
Step 3 (Optional):
You can play around with getResizedBitmap() to change the resolution and size of your frames. REMEMBER: operations on bitmaps can be processor-intensive.
I'm open to any suggestions or optimisations for this code. This is something I came up with after weeks of hustle.
This question has been asked many times already, but I cannot seem to find exactly what I want.
I am trying to create a camera app where I want to display the YUV or RGB values in the log when I point my camera at some color. The values must be in the 0...255 range for RGB, or the corresponding YUV color format. I can manage the conversion between them, as there are many such examples on Stack Overflow. However, I cannot store the values into 3 separate variables and display them in the log.
So far I have managed to get:
package com.example.virus.bpreader;
public class CaptureVideo extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] pixels;
public CaptureVideo(Context context, Camera cameraManager) {
super(context);
mCamera = cameraManager;
mCamera.setDisplayOrientation(90);
//get holder and set the class as callback
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
//when the surface is created, let the camera start the preview
try {
mCamera.setPreviewDisplay(holder);
mCamera.startPreview();
mCamera.cancelAutoFocus();
Camera.Parameters params = mCamera.getParameters();
//get fps
params.getSupportedPreviewFpsRange();
//get resolution
params.getSupportedPreviewSizes();
//stop auto exposure
params.setAutoExposureLock(false);
// Check what resolutions are supported by your camera
List<Camera.Size> sizes = params.getSupportedPictureSizes();
// Iterate through all available resolutions and choose one
for (Camera.Size size : sizes) {
Log.i("Resolution", "Available resolution: " + size.width + " " + size.height);
}
//set resolution at 320*240
params.setPreviewSize(320,240);
//set frame rate at 10 fps
List<int[]> frameRates = params.getSupportedPreviewFpsRange();
int last = frameRates.size() - 1;
params.setPreviewFpsRange(10000, 10000);
//set Image Format
//params.setPreviewFormat(ImageFormat.NV21);
mCamera.setParameters(params);
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
//need to stop the preview and restart on orientation change
if (mHolder.getSurface() == null) {
return;
}
//stop preview
mCamera.stopPreview();
//start again
try {
mCamera.setPreviewDisplay(holder);
mCamera.startPreview();
} catch (IOException e) {
e.printStackTrace();
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
//stop and release
mCamera.stopPreview();
mCamera.release();
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// transforms NV21 pixel data into RGB pixels
int rgb[] = new int[frameWidth * frameHeight];
// conversion
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
Log.d("myPixel", String.valueOf(myPixels.length));
}
//yuv decode
int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
int r, g, b, y1192, y, i, uvp, u, v;
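// Fixed-point BT.601-style conversion: 1192 ~= 1.164 * 1024, so r/g/b below
// carry a 10-bit fractional part - hence the clamp to 262143 and the shifts
// when packing the channels into the final ARGB int.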
for (int j = 0, yp = 0; j < height; j++) {
uvp = frameSize + (j >> 1) * width;
u = 0;
v = 0;
for (i = 0; i < width; i++, yp++) {
y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
// the answer above is wrong at the following lines; just swap u and v
u = (0xff & yuv420sp[uvp++]) - 128;
v = (0xff & yuv420sp[uvp++]) - 128;
}
y1192 = 1192 * y;
r = (y1192 + 1634 * v);
g = (y1192 - 833 * v - 400 * u);
b = (y1192 + 2066 * u);
r = Math.max(0, Math.min(r, 262143));
g = Math.max(0, Math.min(g, 262143));
b = Math.max(0, Math.min(b, 262143));
// combine RGB
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
}
Here the decode method should give a hex format of the RGB color space (I'm quite sure it's all right, as most of the answers are the same). The problem I am facing is that I am not quite sure how to call it inside the onPreviewFrame method so that it displays the RGB values separately in the log.
N.B. Like I said, I have seen a lot of similar questions but could not find a solution in them. I do not want to store a file (image/video), as I only need the RGB/YUV values from the live camera preview when I point the camera at some color.
I need the RGB or YUV values because I want to plot a graph of them against time.
Any help will be much appreciated.
Well, if the problem is to get separate values of R, G and B from the RGB array, check this SO post here.
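In short, once the decoder has filled the array, each element is a packed ARGB int, so inside onPreviewFrame you can pull the channels apart with shifts and masks. A minimal sketch (the center pixel is chosen arbitrarily):
int color = myPixels[myPixels.length / 2]; // e.g. the center pixel
int r = (color >> 16) & 0xff;
int g = (color >> 8) & 0xff;
int b = color & 0xff;
Log.d("RGB", "r=" + r + " g=" + g + " b=" + b);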
Hope it helps!
I'm developing an Android app that gets the color of the current pixel on the screen.
The application concept is very simple: the CameraPreview class implements the SurfaceHolder.Callback interface,
and onSurfaceChanged gets the color.
My problem is that the code works just fine on the Samsung S6, but it doesn't work on the Nexus 6P: the surfaceCreated and surfaceChanged callbacks are never called.
Here is a sample of my code:
public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
private SurfaceHolder mHolder;
private Camera mCamera;
String TAG = "TAG";
Context context;
OnColorDetected onColorDetected;
ColorDetectionPresenter presenter;
Camera.Size previewSize;
int midPixxel;
public CameraPreview(Context context, Camera camera, ColorDetectionPresenter presenter) {
super(context);
mCamera = camera;
this.context = context;
this.presenter = presenter;
// Install a SurfaceHolder.Callback so we get notified when the
// underlying surface is created and destroyed.
mHolder = getHolder();
mHolder.addCallback(this);
// deprecated setting, but required on Android versions prior to 3.0
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
public void setOnColorDetected(OnColorDetected onColorDetected) {
this.onColorDetected = onColorDetected;
}
public void surfaceDestroyed(SurfaceHolder holder) {
// empty. Take care of releasing the Camera preview in your activity.
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
// The Surface has been created, now tell the camera where to draw the preview.
try {
Log.d("SURFACE_STATE","created");
mCamera.setPreviewDisplay(holder);
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels//transforms NV21 pixel data into RGB pixels
int rgb[] = new int[frameWidth * frameHeight];
// convertion
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
for (int i = 0; i < myPixels.length; i++) {
//Toast.makeText(context, MainActivity.getBestMatchingColorName(myPixels[i]), Toast.LENGTH_SHORT).show();
}
}
});
mCamera.startPreview();
} catch (IOException e) {
Log.d(TAG, "Error setting camera preview: " + e.getMessage());
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
// If your preview can change or rotate, take care of those events here.
// Make sure to stop the preview before resizing or reformatting it.
Log.d("SURFACE_STATE","changed");
if (mHolder.getSurface() == null) {
// preview surface does not exist
return;
}
// stop preview before making changes
try {
mCamera.stopPreview();
} catch (Exception e) {
// ignore: tried to stop a non-existent preview
}
// set preview size and make any resize, rotate or
// reformatting changes here
// start preview with new settings
try {
Camera.Parameters parameters = mCamera.getParameters();
mCamera.setPreviewDisplay(mHolder);
List<Camera.Size> previewSizes = parameters.getSupportedPreviewSizes();
if (previewSizes == null)
previewSizes = parameters.getSupportedPictureSizes();
// You need to choose the most appropriate previewSize for your
previewSize = previewSizes.get(0);
for(int i=0;i<previewSizes.size();i++){
if(previewSizes.get(i).width>previewSize.width)
previewSize=previewSizes.get(i);
}
// .... select one of previewSizes here
cameraSetup(width, height);
parameters.setPictureSize(previewSize.width, previewSize.height);
mCamera.setParameters(parameters);
mCamera.setPreviewCallback(new Camera.PreviewCallback() {
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
int frameHeight = camera.getParameters().getPreviewSize().height;
int frameWidth = camera.getParameters().getPreviewSize().width;
// number of pixels//transforms NV21 pixel data into RGB pixels
int rgb[] = new int[frameWidth * frameHeight];
// convertion
int[] myPixels = decodeYUV420SP(rgb, data, frameWidth, frameHeight);
String colorName = presenter.getBestMatchingColorName(myPixels[myPixels.length / 2]);
onColorDetected.colorDetected(colorName);
}
});
mCamera.startPreview();
} catch (Exception e) {
Log.d(TAG, "Error starting camera preview: " + e.getMessage());
}
}
int[] decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
// Pulled directly from:
// http://ketai.googlecode.com/svn/trunk/ketai/src/edu/uic/ketai/inputService/KetaiCamera.java
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0)
r = 0;
else if (r > 262143)
r = 262143;
if (g < 0)
g = 0;
else if (g > 262143)
g = 262143;
if (b < 0)
b = 0;
else if (b > 262143)
b = 262143;
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
return rgb;
}
private void cameraSetup(int w, int h) {
// set the camera parameters, including the preview size
FrameLayout.LayoutParams lp = (FrameLayout.LayoutParams) getLayoutParams();
double cameraAspectRatio = ((double) previewSize.width) / previewSize.height;
if (((double) h) / w > cameraAspectRatio) {
lp.width = (int) (h / cameraAspectRatio + 0.5);
lp.height = h;
} else {
lp.height = (int) (w * cameraAspectRatio + 0.5);
lp.width = w;
lp.topMargin = (h - lp.height) / 2;
}
lp.gravity = Gravity.CENTER_HORIZONTAL | Gravity.TOP;
setLayoutParams(lp);
requestLayout();
}
}
And this is how I call it:
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
colorDetectionPresenter = new ColorDetectionPresenter();
camera = Camera.open();
cameraPreview = new CameraPreview(getActivity(), camera, colorDetectionPresenter);
params = camera.getParameters();
params.setPreviewFormat(ImageFormat.NV21);
camera.setDisplayOrientation(90);
siz = params.getSupportedPreviewSizes().get(0);
params.setPreviewSize(siz.width, siz.height);
}
I have tried initializing the camera preview in onCreate, onCreateView and onResume; none of them works.
Any suggestions?
It only fails on my Nexus 6P.
Let me preface this by saying that I am pretty new to Android. I searched all over for help on this and pieced together what I have so far.
My intent is to use the camera preview to read an RGB value of whatever the user points the camera at, and to save that after prompting the user for each item. For example, I ask to be shown a shirt, and after the user confirms, I store that color in an array where I hold the colors. I'm planning to do something with those after this part.
Right now, the code I have opens the camera view with a close (x) button in the top right, but I have hit a wall on where to go next. I implemented a decode function from the Ketai project to decode the RGB value from the pixel data. I could use some clarification on that: how could I average what it gives me into a color? A for-loop, possibly? (See the sketch just below.)
I'm familiar with ImageView after seeing some of that, but I want to read the RGB values more in real time as the user shows me things.
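For the averaging asked about above, a minimal for-loop sketch over the decoded array, assuming decodeYUV420SP has already filled pixels with packed ARGB ints and the frame is non-empty:
// Average each channel across the frame into one representative color.
long rSum = 0, gSum = 0, bSum = 0;
for (int px : pixels) {
    rSum += (px >> 16) & 0xff;
    gSum += (px >> 8) & 0xff;
    bSum += px & 0xff;
}
int n = pixels.length;
int avgColor = 0xff000000
        | ((int) (rSum / n) << 16)
        | ((int) (gSum / n) << 8)
        | (int) (bSum / n);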
package ricardo.colormatch;
import android.content.Context;
import android.hardware.Camera;
import android.util.Log;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import java.io.IOException;
public class CameraView extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback
{
private SurfaceHolder mHolder;
private Camera mCamera;
private Camera.Parameters parameters;
private Camera.Size previewSize;
private int[] pixels;
private int[] myPixels;
private static final String TAG = "Cam";
public CameraView(Context context, Camera camera) {
super(context);
mCamera = camera;
mCamera.setDisplayOrientation(90);
// Here we want to get the holder and set this class as the callback where we will get camera data from
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
try {
//when we create the surface, we want the camera to draw images in this surface
mCamera.setPreviewDisplay(surfaceHolder);
mCamera.setPreviewCallback(this);
parameters = mCamera.getParameters();
previewSize = parameters.getPreviewSize();
pixels = new int[previewSize.width * previewSize.height];
mCamera.startPreview();
}catch(IOException e) {
Log.d("ERROR", "Camera error on surfaceCreated " + e.getMessage());
mCamera.release();
mCamera = null;
}
}
@Override
//make sure the preview is stopped before changing the surface; it is rotated, then restarted
public void surfaceChanged(SurfaceHolder surfaceHolder, int format, int w, int h) {
//checks if surface is ready to receive data
if (mHolder.getSurface() == null)
return;
try
{
mCamera.stopPreview(); //stops the preview
}
catch(Exception e)
{
// If camera is not running
}
//Recreate camera preview
try
{
parameters.setPreviewSize(w, h);
mCamera.setParameters(parameters);
mCamera.startPreview();
}
catch(Exception e)
{
Log.d("ERROR", "Camera error on surfaceChanged" + e.getMessage());
}
}
@Override
public void onPreviewFrame(byte[] data, Camera camera){
Log.d("Cam", data.toString());
decodeYUV420SP(pixels, data, previewSize.width, previewSize.height);
Log.d("Cam", pixels.toString());
}
//From the Ketai Project image processing
void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0) {
r = 0;
} else if (r > 262143) {
r = 262143;
}
if (g < 0) {
g = 0;
} else if (g > 262143) {
g = 262143;
}
if (b < 0) {
b = 0;
} else if (b > 262143) {
b = 262143;
}
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
}
//destroy the camera in the surface
//Note : Move code into activity if using more than one screen
@Override
public void surfaceDestroyed(SurfaceHolder surfaceHolder){
mCamera.stopPreview();
mCamera.release();
}
}
MainActivity:
package ricardo.colormatch;
import android.hardware.Camera;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.FrameLayout;
import android.widget.ImageButton;
public class MainActivity extends AppCompatActivity {
private Camera mCamera = null;
private CameraView mCameraView = null;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
try{
mCamera = Camera.open(); //open the camera
}catch(Exception e) {
Log.d("ERROR", "Failed to get camera: " + e.getMessage()); //get error message if problem opening camera
}
if(mCamera != null){
mCameraView = new CameraView(this,mCamera); //create our surfaceView for showing the camera data
FrameLayout camera_view = (FrameLayout)findViewById(R.id.camera_view);
camera_view.addView(mCameraView); // add surfaceView to layout
}
//button to close the app
ImageButton imgClose = (ImageButton)findViewById(R.id.imgClose);
imgClose.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view){
System.exit(0);
}
});
}
}
What I get from the logs:
[I@20928f56
[I@20928f56
And it just repeats as the app is running. Why is that?
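(That [I@... output is just the default toString() of an int[], i.e. the array's type tag and identity hash, not the pixel data. To see actual values you'd log elements explicitly; a minimal sketch:)
// Arrays don't override toString(); log the contents or a single pixel instead.
Log.d("Cam", "center pixel: 0x" + Integer.toHexString(pixels[pixels.length / 2]));
Log.d("Cam", java.util.Arrays.toString(java.util.Arrays.copyOf(pixels, 8)));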
I am trying to take the camera preview, alter it in onPreviewFrame(), and show it to the user. I have already achieved the required functionality, but the problem is the SIZE OF THE CAMERA PREVIEW. It always takes up a smaller part of the screen, and I want to make it fullscreen, i.e. I want the camera preview to fill the whole screen of the device. I have read and tried the solutions available on the net, but none of them works in my case. This is the SurfaceView class:
public class MySurfaceView extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback {
private static final String TAG = "MySurfaceView";
private int width;
private int height;
public SurfaceHolder mHolder;
private Camera mCamera;
private int[] rgbints;
private int mMultiplyColor;
public MySurfaceView(Context context, AttributeSet attrs , Camera camera ,
int width , int height)
{
super(context, attrs);
mCamera = camera;
this.width = width;
this.height = height;
mHolder = getHolder();
mHolder.addCallback(this);
mMultiplyColor = getResources().getColor(R.color.honeydew);
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height)
{
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
this.setWillNotDraw(false); // This allows us to make our own draw calls to this canvas
rgbints = new int[width * height];
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try
{
CameraActivity cameraActivity = new CameraActivity();
cameraActivity.releaseCamera();
cameraActivity = null;
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Canvas canvas = null;
if (mHolder == null)
{
return;
}
try {
synchronized (mHolder)
{
canvas = mHolder.lockCanvas(null);
int canvasWidth = canvas.getWidth();
int canvasHeight = canvas.getHeight();
decodeYUV(rgbints, data, width, height);
// draw the decoded image, centered on canvas
canvas.drawBitmap(rgbints, 0, width, canvasWidth-((width+canvasWidth)>>1), canvasHeight-((height+canvasHeight)>>1), width, height, false, null);
// use some color filter
canvas.drawColor(mMultiplyColor, Mode.MULTIPLY);
}
} catch (Exception e){
e.printStackTrace();
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (canvas != null)
{
mHolder.unlockCanvasAndPost(canvas);
canvas = null;
}
}
}
public void decodeYUV(int[] out, byte[] fg, int width, int height) throws NullPointerException, IllegalArgumentException {
int sz = width * height;
if (out == null)
throw new NullPointerException("buffer out is null");
if (out.length < sz)
throw new IllegalArgumentException("buffer out size " + out.length + " < minimum " + sz);
if (fg == null)
throw new NullPointerException("buffer 'fg' is null");
if (fg.length < sz * 3 / 2)
throw new IllegalArgumentException("buffer fg size " + fg.length + " < minimum " + sz * 3 / 2);
int i, j;
int Y, Cr = 0, Cb = 0;
for (j = 0; j < height; j++) {
int pixPtr = j * width;
final int jDiv2 = j >> 1;
for (i = 0; i < width; i++) {
Y = fg[pixPtr];
if (Y < 0)
Y += 255;
if ((i & 0x1) != 1) {
final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
Cb = fg[cOff];
if (Cb < 0)
Cb += 127;
else
Cb -= 128;
Cr = fg[cOff + 1];
if (Cr < 0)
Cr += 127;
else
Cr -= 128;
}
int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
if (R < 0)
R = 0;
else if (R > 255)
R = 255;
int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
if (G < 0)
G = 0;
else if (G > 255)
G = 255;
int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
if (B < 0)
B = 0;
else if (B > 255)
B = 255;
out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
}
}
}
public void showSupportedCameraFormats(Parameters p) {
List<Integer> supportedPictureFormats = p.getSupportedPreviewFormats();
Log.d(TAG, "preview format:" + cameraFormatIntToString(p.getPreviewFormat()));
for (Integer x : supportedPictureFormats) {
Log.d(TAG, "suppoterd format: " + cameraFormatIntToString(x.intValue()));
}
}
@SuppressWarnings("deprecation")
private String cameraFormatIntToString(int format) {
switch (format) {
case PixelFormat.JPEG:
return "JPEG";
case PixelFormat.YCbCr_420_SP:
return "NV21";
case PixelFormat.YCbCr_422_I:
return "YUY2";
case PixelFormat.YCbCr_422_SP:
return "NV16";
case PixelFormat.RGB_565:
return "RGB_565";
default:
return "Unknown:" + format;
}
}
}
This is the caller class:
public class CameraActivity extends Activity {
private Camera mCamera;
private MySurfaceView surfaceView;
private RelativeLayout relativeLayout;
@Override
protected void onCreate(Bundle savedInstanceState) {
// TODO Auto-generated method stub
super.onCreate(savedInstanceState);
getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
WindowManager.LayoutParams.FLAG_FULLSCREEN);
setContentView(R.layout.activity_main);
}
@Override
protected void onDestroy() {
// TODO Auto-generated method stub
super.onDestroy();
releaseCamera();
}
@Override
protected void onPause() {
// TODO Auto-generated method stub
super.onPause();
releaseCamera();
}
@Override
protected void onResume() {
// TODO Auto-generated method stub
super.onResume();
mCamera = Camera.open();
Camera.Parameters p = mCamera.getParameters();
Size size = p.getPreviewSize();
int width = size.width;
int height = size.height;
p.setPreviewFormat(ImageFormat.JPEG);
mCamera.setParameters(p);
surfaceView = new MySurfaceView(this, null , mCamera ,width , height);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(layoutParams);
relativeLayout = (RelativeLayout)findViewById(R.id.relativeLayout);
relativeLayout.addView(surfaceView);
surfaceView.showSupportedCameraFormats(p);
}
public void releaseCamera()
{
if (mCamera != null)
{
mCamera.stopPreview();
mCamera.setPreviewCallback(null);
surfaceView.getHolder().removeCallback(surfaceView);
mCamera.release(); // release the camera for other applications
mCamera = null;
surfaceView.mHolder.removeCallback(surfaceView);
surfaceView.mHolder = null;
surfaceView = null;
relativeLayout.removeAllViews();
relativeLayout.removeAllViewsInLayout();
relativeLayout = null;
}
}
}
Please help me. Thanks in advance.
EDIT:
The XML is:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:id="#+id/relativeLayout"
android:layout_height="match_parent" >
</RelativeLayout>
Instead of a RelativeLayout, take a LinearLayout and put a FrameLayout in it:
<FrameLayout
android:id="#+id/preview"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
android:layout_weight="1" >
</FrameLayout>
and then set the camera preview in it:
surfaceView = new MySurfaceView(this, null , mCamera ,width , height);
((FrameLayout) findViewById(R.id.preview)).addView(surfaceView);
CameraPreview.java
class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {
private static final String TAG = "Preview";
SurfaceHolder mHolder;
public Camera camera;
CameraPreview(Context context) {
super(context);
// Install a SurfaceHolder.Callback so we get notified when the
// underlying surface is created and destroyed.
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
public void surfaceCreated(SurfaceHolder holder) {
// The Surface has been created, acquire the camera and tell it where
// to draw.
if(camera == null){
camera = Camera.open();
camera.setDisplayOrientation(90);
try {
camera.setPreviewDisplay(holder);
camera.setPreviewCallback(new PreviewCallback() {
public void onPreviewFrame(byte[] data, Camera arg1) {
CameraPreview.this.invalidate();
}
});
} catch (IOException e) {
camera.release();
camera = null;
}
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
// Surface will be destroyed when we return, so stop the preview.
// Because the CameraDevice object is not a shared resource, it's very
// important to release it when the activity is paused.
if(camera!=null){
camera.stopPreview();
camera.setPreviewCallback(null);
camera.release();
camera = null;
}
}
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
// Now that the size is known, set up the camera parameters and begin
// the preview.
Camera.Parameters parameters = camera.getParameters();
// parameters.setPreviewSize(w, h);
camera.setParameters(parameters);
camera.startPreview();
}
@Override
public void draw(Canvas canvas) {
super.draw(canvas);
Paint p = new Paint();
p.setColor(Color.RED);
Log.d(TAG,"draw");
canvas.drawText("PREVIEW", canvas.getWidth()/2, canvas.getHeight()/2, p );
}
public void releaseCameraAndPreview() {
if (camera != null) {
camera.release();
camera = null;
}
}
}