I'm trying to replace the frames from the device camera (which is normally used in an AR session) with frames from a camera streamed via WebRTC. To render the WebRTC stream I'm using webrtc.SurfaceViewRenderer, and to render the AR session I'm using opengl.GLSurfaceView in activity_main.xml. These two views work as they are supposed to separately, but now I want to combine them. The problem is that I don't know how to extract the frames from the WebRTC stream. The closest function I have found is Bitmap bmp = surfaceViewRenderer.getDrawingCache(); to capture the pixels, but it always returns null.
If I can get the pixels from the SurfaceViewRenderer, my idea is to bind them to a texture and then render that texture as the background of the AR scene.
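One alternative sketch (assuming the stream arrives as an org.webrtc.VideoTrack, which is not stated above): add a second VideoSink to the track that feeds the SurfaceViewRenderer, so the decoded frames are delivered directly instead of being read back out of the view.
// Hedged sketch: "remoteVideoTrack" is assumed to be the VideoTrack you currently
// call addSink(surfaceViewRenderer) on; the same frames are delivered here too.
remoteVideoTrack.addSink(new VideoSink() {
    @Override
    public void onFrame(VideoFrame frame) {
        frame.retain();                                           // keep the frame alive while we use it
        VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();  // CPU-accessible Y/U/V planes
        // ... upload the planes (or the original texture buffer) to a GL texture here ...
        i420.release();
        frame.release();
    }
});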
The code I have been following can be found at https://github.com/google-ar/arcore-android-sdk/blob/master/samples/hello_ar_java/app/src/main/java/com/google/ar/core/examples/java/helloar/HelloArActivity.java
This is the code where the AR scene is rendered using the device camera:
public void onDrawFrame(GL10 gl10) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
try {
session.setCameraTextureName(backgroundRenderer.getTextureId());
//replace this frame with the frames that are rendered in webrtc's SurfaceViewRenderer
Frame frame = session.update();
Camera camera = frame.getCamera();
backgroundRenderer.draw(frame);
.
.
.
And this is what my activity_main.xml looks like. In the end I will remove the SurfaceViewRenderer section:
<LinearLayout
<org.webrtc.SurfaceViewRenderer
android:id="#+id/local_gl_surface_view"
android:layout_width="match_parent"
android:layout_height="248dp"
android:layout_gravity="bottom|end" />
<android.opengl.GLSurfaceView
android:id="#+id/surfaceview"
android:layout_width="match_parent"
android:layout_height="195dp"
android:layout_gravity="top" />
</LinearLayout>
I know this is an old question but if anyone requires the answer, here it is.
So after brainstorming for days and a lot of R&D, I've found a workaround.
You don't need to add anything extra.
So the story here is to use ARCore and WebRTC at the same time and share what you see in your ARCore session over WebRTC so that the remote user sees it. Correct?
Well, the trick is to give your camera to ARCore and, instead of sharing the camera with WebRTC, create a screen-share video capturer. It's pretty simple, and WebRTC already supports it natively. (Let me know if you need the source code too.)
Open your ARCore activity in full screen by setting all those flags on your window, and start your ARCore session as normal. After that, initialise your WebRTC call and provide the screen-sharing logic instead of the camera video source, and voila! ARCore + WebRTC.
I implemented this with no issues and the latency is good too, ~100 ms.
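For reference, a rough sketch of that screen-share capturer flow, using WebRTC's built-in ScreenCapturerAndroid (the request code, resolution, and surrounding variables are placeholders, not from the original post):
// 1) Ask for the MediaProjection permission.
MediaProjectionManager mpm =
        (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
startActivityForResult(mpm.createScreenCaptureIntent(), REQUEST_CODE_SCREEN_CAPTURE);

// 2) In onActivityResult, hand the returned Intent to WebRTC's screen capturer.
//    (createVideoSource's signature varies across WebRTC versions.)
VideoCapturer screenCapturer = new ScreenCapturerAndroid(
        permissionResultData, new MediaProjection.Callback() {});
VideoSource videoSource = peerConnectionFactory.createVideoSource(/* isScreencast= */ true);
screenCapturer.initialize(surfaceTextureHelper, getApplicationContext(),
        videoSource.getCapturerObserver());
screenCapturer.startCapture(1280, 720, 30);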
Edit
I think sharing the screen isn't a good idea (the remote user would be able to see all your sensitive data, such as notifications).
Rather, what I did is:
implement WebRTC's VideoCapturer interface,
use the PixelCopy class to create bitmaps,
and feed it the SurfaceView in which your ARCore session is being rendered, convert that to bitmaps, turn those bitmaps into frames, and send them to WebRTC's onFrameCaptured function.
This way you share just your SurfaceView instead of the complete screen, you don't need any extra permission such as screen recording, and it looks much more like actual real-time video sharing.
Edit 2 (Final Solution)
Step 1:
Create a custom video capturer that implements WebRTC's VideoCapturer interface:
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import android.opengl.GLES20;
import android.opengl.GLUtils;
import android.os.Build;
import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;
import android.os.SystemClock;
import android.util.Log;
import android.view.PixelCopy;
import android.view.SurfaceView;
import androidx.annotation.RequiresApi;
import com.jaswant.webRTCIsAwesome.ArSessionActivity;
import org.webrtc.CapturerObserver;
import org.webrtc.JavaI420Buffer;
import org.webrtc.SurfaceTextureHelper;
import org.webrtc.TextureBufferImpl;
import org.webrtc.VideoCapturer;
import org.webrtc.VideoFrame;
import org.webrtc.VideoSink;
import org.webrtc.YuvConverter;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
@RequiresApi(api = Build.VERSION_CODES.N)
public class CustomVideoCapturer implements VideoCapturer, VideoSink {
private static int VIEW_CAPTURER_FRAMERATE_MS = 10;
private int width;
private int height;
private SurfaceView view;
private Context context;
private CapturerObserver capturerObserver;
private SurfaceTextureHelper surfaceTextureHelper;
private boolean isDisposed;
private Bitmap viewBitmap;
private Handler handlerPixelCopy = new Handler(Looper.getMainLooper());
private Handler handler = new Handler(Looper.getMainLooper());
private AtomicBoolean started = new AtomicBoolean(false);
private long numCapturedFrames;
private YuvConverter yuvConverter = new YuvConverter();
private TextureBufferImpl buffer;
private long start = System.nanoTime();
private final Runnable viewCapturer = new Runnable() {
@RequiresApi(api = Build.VERSION_CODES.N)
@Override
public void run() {
boolean dropFrame = view.getWidth() == 0 || view.getHeight() == 0;
// Only capture the view if the dimensions have been established
if (!dropFrame) {
// Draw view into bitmap backed canvas
final Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
final HandlerThread handlerThread = new HandlerThread(ArSessionActivity.class.getSimpleName());
handlerThread.start();
try {
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
PixelCopy.request(view, bitmap, copyResult -> {
if (copyResult == PixelCopy.SUCCESS) {
viewBitmap = getResizedBitmap(bitmap, 500);
if (viewBitmap != null) {
Log.d("BITMAP--->", viewBitmap.toString());
sendToServer(viewBitmap, yuvConverter, start);
}
} else {
Log.e("Pixel_copy-->", "Couldn't create bitmap of the SurfaceView");
}
handlerThread.quitSafely();
}, new Handler(handlerThread.getLooper()));
} else {
Log.i("Pixel_copy-->", "Saving an image of a SurfaceView is only supported from API 24");
}
} catch (Exception ignored) {
}
}
}
};
private Thread captureThread;
public CustomVideoCapturer(SurfaceView view, int framePerSecond) {
if (framePerSecond <= 0)
throw new IllegalArgumentException("framePerSecond must be greater than 0");
this.view = view;
float tmp = (1f / framePerSecond) * 1000;
VIEW_CAPTURER_FRAMERATE_MS = Math.round(tmp);
}
private static void bitmapToI420(Bitmap src, JavaI420Buffer dest) {
int width = src.getWidth();
int height = src.getHeight();
if (width != dest.getWidth() || height != dest.getHeight())
return;
int strideY = dest.getStrideY();
int strideU = dest.getStrideU();
int strideV = dest.getStrideV();
ByteBuffer dataY = dest.getDataY();
ByteBuffer dataU = dest.getDataU();
ByteBuffer dataV = dest.getDataV();
for (int line = 0; line < height; line++) {
if (line % 2 == 0) {
for (int x = 0; x < width; x += 2) {
int px = src.getPixel(x, line);
byte r = (byte) ((px >> 16) & 0xff);
byte g = (byte) ((px >> 8) & 0xff);
byte b = (byte) (px & 0xff);
dataY.put(line * strideY + x, (byte) (((66 * r + 129 * g + 25 * b) >> 8) + 16));
dataU.put(line / 2 * strideU + x / 2, (byte) (((-38 * r + -74 * g + 112 * b) >> 8) + 128));
dataV.put(line / 2 * strideV + x / 2, (byte) (((112 * r + -94 * g + -18 * b) >> 8) + 128));
px = src.getPixel(x + 1, line);
r = (byte) ((px >> 16) & 0xff);
g = (byte) ((px >> 8) & 0xff);
b = (byte) (px & 0xff);
dataY.put(line * strideY + x + 1, (byte) (((66 * r + 129 * g + 25 * b) >> 8) + 16)); // second pixel of the pair goes to x + 1
}
} else {
for (int x = 0; x < width; x += 1) {
int px = src.getPixel(x, line);
byte r = (byte) ((px >> 16) & 0xff);
byte g = (byte) ((px >> 8) & 0xff);
byte b = (byte) (px & 0xff);
dataY.put(line * strideY + x, (byte) (((66 * r + 129 * g + 25 * b) >> 8) + 16));
}
}
}
}
public static Bitmap createFlippedBitmap(Bitmap source, boolean xFlip, boolean yFlip) {
try {
Matrix matrix = new Matrix();
matrix.postScale(xFlip ? -1 : 1, yFlip ? -1 : 1, source.getWidth() / 2f, source.getHeight() / 2f);
return Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
} catch (Exception e) {
return null;
}
}
private void checkNotDisposed() {
if (this.isDisposed) {
throw new RuntimeException("capturer is disposed.");
}
}
@Override
public synchronized void initialize(SurfaceTextureHelper surfaceTextureHelper, Context context, CapturerObserver capturerObserver) {
this.checkNotDisposed();
if (capturerObserver == null) {
throw new RuntimeException("capturerObserver not set.");
} else {
this.context = context;
this.capturerObserver = capturerObserver;
if (surfaceTextureHelper == null) {
throw new RuntimeException("surfaceTextureHelper not set.");
} else {
this.surfaceTextureHelper = surfaceTextureHelper;
}
}
}
@Override
public void startCapture(int width, int height, int fps) {
this.checkNotDisposed();
this.started.set(true);
this.width = width;
this.height = height;
this.capturerObserver.onCapturerStarted(true);
this.surfaceTextureHelper.startListening(this);
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);
/*try {
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
HandlerThread handlerThread = new HandlerThread(CustomVideoCapturer.class.getSimpleName());
capturerObserver.onCapturerStarted(true);
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
YuvConverter yuvConverter = new YuvConverter();
TextureBufferImpl buffer = new TextureBufferImpl(width, height, VideoFrame.TextureBuffer.Type.RGB, textures[0], new Matrix(), surfaceTextureHelper.getHandler(), yuvConverter, null);
// handlerThread.start();
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
new Thread(() -> {
while (true) {
PixelCopy.request(view, bitmap, copyResult -> {
if (copyResult == PixelCopy.SUCCESS) {
viewBitmap = getResizedBitmap(bitmap, 500);
long start = System.nanoTime();
Log.d("BITMAP--->", viewBitmap.toString());
sendToServer(viewBitmap, yuvConverter, buffer, start);
} else {
Log.e("Pixel_copy-->", "Couldn't create bitmap of the SurfaceView");
}
handlerThread.quitSafely();
}, new Handler(Looper.getMainLooper()));
}
}).start();
}
} catch (Exception ignored) {
}*/
}
private void sendToServer(Bitmap bitmap, YuvConverter yuvConverter, long start) {
try {
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0); // generate one texture id (passing 0 here generates none)
buffer = new TextureBufferImpl(width, height, VideoFrame.TextureBuffer.Type.RGB, textures[0], new Matrix(), surfaceTextureHelper.getHandler(), yuvConverter, null);
Bitmap flippedBitmap = createFlippedBitmap(bitmap, true, false);
surfaceTextureHelper.getHandler().post(() -> {
if (flippedBitmap != null) {
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, flippedBitmap, 0);
VideoFrame.I420Buffer i420Buf = yuvConverter.convert(buffer);
long frameTime = System.nanoTime() - start;
VideoFrame videoFrame = new VideoFrame(i420Buf, 180, frameTime);
capturerObserver.onFrameCaptured(videoFrame);
videoFrame.release();
try {
viewBitmap.recycle();
} catch (Exception e) {
}
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);
}
});
} catch (Exception ignored) {
}
}
@Override
public void stopCapture() throws InterruptedException {
this.checkNotDisposed();
CustomVideoCapturer.this.surfaceTextureHelper.stopListening();
CustomVideoCapturer.this.capturerObserver.onCapturerStopped();
started.set(false);
handler.removeCallbacksAndMessages(null);
handlerPixelCopy.removeCallbacksAndMessages(null);
}
@Override
public void changeCaptureFormat(int width, int height, int framerate) {
this.checkNotDisposed();
this.width = width;
this.height = height;
}
@Override
public void dispose() {
this.isDisposed = true;
}
@Override
public boolean isScreencast() {
return true;
}
private void sendFrame() {
final long captureTimeNs = TimeUnit.MILLISECONDS.toNanos(SystemClock.elapsedRealtime());
/*surfaceTextureHelper.setTextureSize(width, height);
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
Matrix matrix = new Matrix();
matrix.preTranslate(0.5f, 0.5f);
matrix.preScale(1f, -1f);
matrix.preTranslate(-0.5f, -0.5f);
YuvConverter yuvConverter = new YuvConverter();
TextureBufferImpl buffer = new TextureBufferImpl(width, height,
VideoFrame.TextureBuffer.Type.RGB, textures[0], matrix,
surfaceTextureHelper.getHandler(), yuvConverter, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, viewBitmap, 0);
long frameTime = System.nanoTime() - captureTimeNs;
VideoFrame videoFrame = new VideoFrame(buffer.toI420(), 0, frameTime);
capturerObserver.onFrameCaptured(videoFrame);
videoFrame.release();
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);*/
// Create video frame
JavaI420Buffer buffer = JavaI420Buffer.allocate(viewBitmap.getWidth(), viewBitmap.getHeight());
bitmapToI420(viewBitmap, buffer);
VideoFrame videoFrame = new VideoFrame(buffer,
0, captureTimeNs);
// Notify the listener
if (started.get()) {
++this.numCapturedFrames;
this.capturerObserver.onFrameCaptured(videoFrame);
}
if (started.get()) {
handler.postDelayed(viewCapturer, VIEW_CAPTURER_FRAMERATE_MS);
}
}
public long getNumCapturedFrames() {
return this.numCapturedFrames;
}
/**
 * Reduces the size of the image by recompressing it as a JPEG.
 * Note: maxSize is currently unused; the bitmap is scaled to its own
 * width/height and recompressed at quality 50.
 *
 * @param image
 * @param maxSize
 * @return the recompressed bitmap, or null on failure
 */
public Bitmap getResizedBitmap(Bitmap image, int maxSize) {
int width = image.getWidth();
int height = image.getHeight();
try {
Bitmap bitmap = Bitmap.createScaledBitmap(image, width, height, true);
ByteArrayOutputStream out = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 50, out);
return BitmapFactory.decodeStream(new ByteArrayInputStream(out.toByteArray()));
} catch (Exception e) {
return null;
}
}
@Override
public void onFrame(VideoFrame videoFrame) {
}
}
Step 2 (Usage):
CustomVideoCapturer videoCapturer = new CustomVideoCapturer(arSceneView, 20);
videoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast());
videoCapturer.initialize(surfaceTextureHelper, context, videoSource.getCapturerObserver());
videoCapturer.startCapture(resolutionHeight, resolutionWidth, 30);
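For completeness, a sketch of the pieces the snippet above assumes already exist (the EglBase instance, thread name, and track id here are placeholders, not from the original):
EglBase eglBase = EglBase.create();
SurfaceTextureHelper surfaceTextureHelper =
        SurfaceTextureHelper.create("ArCaptureThread", eglBase.getEglBaseContext());
VideoTrack localVideoTrack =
        peerConnectionFactory.createVideoTrack("ARDAMSv0", videoSource);
// add localVideoTrack to your PeerConnection / media stream as usual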
After this you should be able to stream your AR frames to the remote user.
Step 3 (Optional):
You can play around with getResizedBitmap() to change the resolution and size of your frames. REMEMBER: operations on bitmaps can be processor-intensive.
I'm open to any kind of suggestions or optimisations for this code. This is something I came up with after weeks of hustle.
Related
I'm working on a video conference app using OpenVidu. We are trying to include a Wikitude AR session in the call.
The problem is that both of them require access to the camera, so I have the following scenario: if I instantiate the local participant's video first, I can't start the Wikitude AR session because the video doesn't load. If I instantiate the Wikitude session first, the other participants in the call don't see the device's video.
I was able to create a custom video capturer for OpenVidu that imitates the camera. Every frame has to be sent to it for it to work.
package org.webrtc;
import android.content.Context;
import android.graphics.Bitmap;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicReference;
public class CustomVideoCapturer implements VideoCapturer {
private final static String TAG = "FileVideoCapturer";
//private final FileVideoCapturer.VideoReader videoReader;
private final Timer timer = new Timer();
private CapturerObserver capturerObserver;
private AtomicReference<Bitmap> image = new AtomicReference<Bitmap>();
private final TimerTask tickTask = new TimerTask() {
@Override
public void run() {
tick();
}
};
public CustomVideoCapturer() {
}
public void tick() {
Bitmap frame = image.get();
if (frame != null && !frame.isRecycled()) {
NV21Buffer nv21Buffer = new NV21Buffer(getNV21(frame),frame.getWidth(),frame.getHeight(), null);
VideoFrame videoFrame = new VideoFrame(nv21Buffer, 0, System.nanoTime());
capturerObserver.onFrameCaptured(videoFrame);
}
}
byte [] getNV21(Bitmap image) {
int [] argb = new int[image.getWidth() * image.getHeight()];
image.getPixels(argb, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
byte [] yuv = new byte[image.getWidth()*image.getHeight()*3/2];
encodeYUV420SP(yuv, argb, image.getWidth(), image.getHeight());
image.recycle();
return yuv;
}
void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int a, R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff) >> 0;
// well known RGB to YUV algorithm
Y = ( ( 66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ( ( -38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ( ( 112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
// NV21 has a plane of Y and interleaved planes of VU each sampled by a factor of 2
// meaning for every 4 Y pixels there are 1 V and 1 U. Note the sampling is every other
// pixel AND every other scanline.
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte)((V<0) ? 0 : ((V > 255) ? 255 : V));
yuv420sp[uvIndex++] = (byte)((U<0) ? 0 : ((U > 255) ? 255 : U));
}
index ++;
}
}
}
public void sendFrame(Bitmap bitmap) {
image.set(bitmap);
}
@Override
public void initialize(SurfaceTextureHelper surfaceTextureHelper, Context applicationContext,
CapturerObserver capturerObserver) {
this.capturerObserver = capturerObserver;
}
@Override
public void startCapture(int width, int height, int framerate) {
//timer.schedule(tickTask, 0, 1000 / framerate);
threadCV().start();
}
Thread threadCV() {
return new Thread() {
@Override
public void run() {
while (true) {
if (image.get() != null) {
tick();
}
try {
Thread.sleep(10);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
};
}
@Override
public void stopCapture() throws InterruptedException {
timer.cancel();
}
@Override
public void changeCaptureFormat(int width, int height, int framerate) {
// Empty on purpose
}
@Override
public void dispose() {
//videoReader.close();
}
@Override
public boolean isScreencast() {
return false;
}
private interface VideoReader {
VideoFrame getNextFrame();
void close();
}
/**
* Read video data from file for the .y4m container.
*/
}
On the local participant I then use this function to send the frame:
public void sendFrame(Bitmap frame) {
customVideoCapturer.sendFrame(frame);
}
But I wasn't able to take the frames from the Wikitude camera. Is there a way to access the frames and resend them?
As of the Native API SDK, version 9.10.0, according to the answer from Wikitude support
(https://support.wikitude.com/support/discussions/topics/5000096719?page=1), a custom plugin has to be created to access the camera frames:
https://www.wikitude.com/external/doc/documentation/latest/androidnative/pluginsapi.html#plugins-api
I am capturing frames in onPreviewFrame() and then processing them in a thread to check whether they are valid or not.
public void onPreviewFrame(byte[] data, Camera camera) {
if (imageFormat == ImageFormat.NV21) {
//We only accept the NV21(YUV420) format.
frameCount++;
if (frameCount > 19 && frameCount % 2 == 0) {
Camera.Parameters parameters = camera.getParameters();
FrameModel fModel = new FrameModel(data);
fModel.setPreviewWidth(parameters.getPreviewSize().width);
fModel.setPreviewHeight(parameters.getPreviewSize().height);
fModel.setPicFormat(parameters.getPreviewFormat());
fModel.setFrameCount(frameCount);
validateFrame(fModel);
}
}
}
In validateFrame(), I submit a ValidatorThread runnable instance to a ThreadPoolExecutor with 4 core and maximum threads, to process the frames in parallel.
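For reference, a sketch of that executor wiring (the field and helper names are placeholders, not the original code):
// "4 core and max threads": core and maximum pool size of 4, backed by an unbounded queue.
private final ThreadPoolExecutor validatorExecutor = new ThreadPoolExecutor(
        4, 4,                           // core and maximum pool size
        30, TimeUnit.SECONDS,           // keep-alive for idle threads
        new LinkedBlockingQueue<Runnable>());

private void validateFrame(FrameModel fModel) {
    validatorExecutor.execute(new ValidatorThread(fModel));
}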
public class ValidatorThread implements Runnable {
private FrameModel frame;
public ValidatorThread(FrameModel fModel) {
frame = fModel;
}
@Override
public void run() {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
processNV21Data();
}
private void processNV21Data() {
YuvImage yuv = new YuvImage(frame.getData(), frame.getPicFormat(),
frame.getPreviewWidth(), frame.getPreviewHeight(), null);
frame.releaseData();
ByteArrayOutputStream out = new ByteArrayOutputStream();
yuv.compressToJpeg(new Rect(0, 0, frame.getPreviewWidth(), frame.getPreviewHeight()), 100, out);
byte[] bytes = out.toByteArray();
yuv = null;
try {
if (out != null)
out.close();
out = null;
} catch (IOException e) {
e.printStackTrace();
}
Bitmap baseBitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
bytes = null;
// rotate bitmap
baseBitmap = rotateImage(baseBitmap, frame.getRotation());
//create copy of original bitmap to use later
Bitmap mCheckedBitmap = baseBitmap.copy(Bitmap.Config.ARGB_8888, true);
// convert base bitmap to greyscale for validation
baseBitmap = toGrayscale(baseBitmap);
boolean isBitmapValid = Util.isBitmapValid(baseBitmap);
if (isBitmapValid) {
baseBitmap.recycle();
mCheckedBitmap.recycle();
frame = null;
} else {
baseBitmap.recycle();
mCheckedBitmap.recycle();
frame = null;
}
}
public Bitmap toGrayscale(Bitmap bmpOriginal) {
int width, height;
height = bmpOriginal.getHeight();
width = bmpOriginal.getWidth();
Bitmap bmpGrayscale = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
Canvas c = new Canvas(bmpGrayscale);
Paint paint = new Paint();
// Draw the original through a zero-saturation color matrix so the copy comes out grayscale
// (needs android.graphics.ColorMatrix and ColorMatrixColorFilter imports).
ColorMatrix cm = new ColorMatrix();
cm.setSaturation(0);
paint.setColorFilter(new ColorMatrixColorFilter(cm));
c.drawBitmap(bmpOriginal, 0, 0, paint);
bmpOriginal.recycle();
return bmpGrayscale;
}
private Bitmap rotateImage(final Bitmap source, float angle) {
Matrix matrix = new Matrix();
matrix.postRotate(angle);
Bitmap rotatedBitmap = Bitmap.createBitmap(source, 0, 0, source.getWidth(), source.getHeight(), matrix, true);
source.recycle();
return rotatedBitmap;
}
}
The FrameModel class has such declaration :
public class FrameModel {
private byte[] data;
private int previewWidth;
private int previewHeight;
private int picFormat;
private int frameCount;
public void releaseData() {
data = null;
}
// getters and setters
}
I am getting an OutOfMemoryError while processing multiple frames.
Can anyone suggest what memory optimisation the code needs?
You can reduce memory usage if you produce the grayscale bitmap directly from the YUV data, without going through JPEG. This will also be significantly faster.
public Bitmap yuv2grayscale(byte[] yuv, int width, int height) {
int[] pixels = new int[width * height];
for (int i = 0; i < height*width; i++) {
int y = yuv[i] & 0xff;
pixels[i] = 0xFF000000 | y << 16 | y << 8 | y;
}
return Bitmap.createBitmap(pixels, width, height, Bitmap.Config.RGB_565);
}
Alternatively, you can create an RGB_565 bitmap without going through an int[width*height] pixel array, and manipulate the bitmap pixels in place using the NDK.
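For example, a hedged sketch of the first part of that idea in plain Java: pack the Y plane straight into an RGB_565 buffer so no int[width*height] ARGB array is allocated (the NDK route would do the same per-pixel work in C).
public Bitmap yuvToGrayscaleRgb565(byte[] yuv, int width, int height) {
    java.nio.ShortBuffer pixels = java.nio.ShortBuffer.allocate(width * height);
    for (int i = 0; i < width * height; i++) {
        int y = yuv[i] & 0xff;
        // RGB_565 packs 5 bits red, 6 bits green, 5 bits blue into one short
        pixels.put((short) (((y >> 3) << 11) | ((y >> 2) << 5) | (y >> 3)));
    }
    pixels.rewind();
    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    bmp.copyPixelsFromBuffer(pixels); // fills the bitmap directly from the packed buffer
    return bmp;
}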
Let me preface this by saying that I am pretty new to Android. I searched all over for help on this and pieced together what I have so far.
My intent is to use the camera preview to read the RGB value of whatever the user points the camera at and save it after prompting the user for each item. For example, I ask them to show me a shirt, and after the user confirms I store that color in an array where I hold the colors. I'm planning to do something with those after this part.
Right now, the code I have opens the camera view with a close (x) button in the top right, but I have hit a wall on where to go next. I implemented a decode function from the Ketai project to decode the RGB values from the pixel data. I could use some clarification on that: how could I average what it gives me into a single color? A for-loop, possibly?
I'm familiar with ImageView after seeing some of that, but I want to read the RGB values in closer to real time as the user shows me items.
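Not part of the original post, but a minimal sketch of the averaging being asked about, run over the int[] that decodeYUV420SP() fills in the code below:
// Collapse a decoded RGB frame into one average color (uses android.graphics.Color helpers).
private int averageColor(int[] rgb) {
    long r = 0, g = 0, b = 0;
    for (int px : rgb) {
        r += Color.red(px);
        g += Color.green(px);
        b += Color.blue(px);
    }
    int n = rgb.length;
    return Color.rgb((int) (r / n), (int) (g / n), (int) (b / n));
}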
package ricardo.colormatch;
import android.content.Context;
import android.hardware.Camera;
import android.util.Log;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;
import java.io.IOException;
public class CameraView extends SurfaceView implements SurfaceHolder.Callback, Camera.PreviewCallback
{
private SurfaceHolder mHolder;
private Camera mCamera;
private Camera.Parameters parameters;
private Camera.Size previewSize;
private int[] pixels;
private int[] myPixels;
private static final String TAG = "Cam";
public CameraView(Context context, Camera camera) {
super(context);
mCamera = camera;
mCamera.setDisplayOrientation(90);
// Here we want to get the holder and set this class as the callback where we will get camera data from
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
}
@Override
public void surfaceCreated(SurfaceHolder surfaceHolder) {
try {
//when we create the surface, we want the camera to draw images in this surface
mCamera.setPreviewDisplay(surfaceHolder);
mCamera.setPreviewCallback(this);
parameters = mCamera.getParameters();
previewSize = parameters.getPreviewSize();
pixels = new int[previewSize.width * previewSize.height];
mCamera.startPreview();
}catch(IOException e) {
Log.d("ERROR", "Camera error on surfaceCreated " + e.getMessage());
mCamera.release();
mCamera = null;
}
}
@Override
//make sure before changing the surface that the preview is stopped; it is rotated, then restarted
public void surfaceChanged(SurfaceHolder surfaceHolder, int format, int w, int h) {
//checks if surface is ready to receive data
if (mHolder.getSurface() == null)
return;
try
{
mCamera.stopPreview(); //stops the preview
}
catch(Exception e)
{
// If camera is not running
}
//Recreate camera preview
try
{
parameters.setPreviewSize(w, h);
mCamera.setParameters(parameters);
mCamera.startPreview();
}
catch(Exception e)
{
Log.d("ERROR", "Camera error on surfaceChanged" + e.getMessage());
}
}
@Override
public void onPreviewFrame(byte[] data, Camera camera){
Log.d("Cam", data.toString());
decodeYUV420SP(pixels, data, previewSize.width, previewSize.height);
Log.d("Cam", pixels.toString());
}
//From the Ketai Project image processing
void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width, int height) {
final int frameSize = width * height;
for (int j = 0, yp = 0; j < height; j++) {
int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
for (int i = 0; i < width; i++, yp++) {
int y = (0xff & ((int) yuv420sp[yp])) - 16;
if (y < 0)
y = 0;
if ((i & 1) == 0) {
v = (0xff & yuv420sp[uvp++]) - 128;
u = (0xff & yuv420sp[uvp++]) - 128;
}
int y1192 = 1192 * y;
int r = (y1192 + 1634 * v);
int g = (y1192 - 833 * v - 400 * u);
int b = (y1192 + 2066 * u);
if (r < 0) {
r = 0;
} else if (r > 262143) {
r = 262143;
}
if (g < 0) {
g = 0;
} else if (g > 262143) {
g = 262143;
}
if (b < 0) {
b = 0;
} else if (b > 262143) {
b = 262143;
}
rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
}
}
}
//destroy the camera in the surface
//Note : Move code into activity if using more than one screen
@Override
public void surfaceDestroyed(SurfaceHolder surfaceHolder){
mCamera.stopPreview();
mCamera.release();
}
}
MainActivity:
package ricardo.colormatch;
import android.hardware.Camera;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;
import android.util.Log;
import android.view.View;
import android.widget.FrameLayout;
import android.widget.ImageButton;
public class MainActivity extends AppCompatActivity {
private Camera mCamera = null;
private CameraView mCameraView = null;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
try{
mCamera = Camera.open(); //open the camera
}catch(Exception e) {
Log.d("ERROR", "Failed to get camera: " + e.getMessage()); //get error message if problem opening camera
}
if(mCamera != null){
mCameraView = new CameraView(this,mCamera); //create our surfaceView for showing the camera data
FrameLayout camera_view = (FrameLayout)findViewById(R.id.camera_view);
camera_view.addView(mCameraView); // add surfaceView to layout
}
//button to close the app
ImageButton imgClose = (ImageButton)findViewById(R.id.imgClose);
imgClose.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view){
System.exit(0);
}
});
}
}
What I get from the logs:
[I@20928f56
[I@20928f56
And it just repeats as the app is running. Why is that?
How can I convert images to video without using FFmpeg or JCodec, only with Android's MediaCodec? The images for the video are bitmap files that can be ARGB888 or YUV420 (my choice). The most important thing is that the video has to be playable on Android devices and the maximum API is 16. I know all about the API 18 MediaMuxer and I cannot use it.
Please help me, I have been stuck on this for many days.
(JCodec is too slow, and FFmpeg is very complicated to use.)
There is no simple way to do this in API 16 that works across all devices.
You will encounter problems with buffer alignment, color spaces, and the need to use different YUV layouts on different devices.
Consider the buffer alignment issue. On pre-API 18 devices with Qualcomm SOCs, you had to align the CbCr planes at a 2K offset from the start of the buffer. On API 18, all devices use the same layout; this is enforced by CTS tests that were added in Android 4.3.
Even with API 18 you still have to detect at runtime whether the encoder wants planar or semi-planar values. (It's probably not relevant for your situation, but none of the YUV formats output by the camera are accepted by MediaCodec.) Note there is no RGB input to MediaCodec.
If portability is not a concern, i.e. you're targeting a specific device, your code will be much simpler.
There are code snippets in the SO pages linked above. The closest "official" example is the buffer-to-buffer / buffer-to-surface tests in EncodeDecodeTest. These are API 18 tests, which means they don't do the "if QC and API16 then change buffer alignment" dance, but they do show how to do the planar vs. semi-planar layout detection. It doesn't include an RGB-to-YUV color converter, but there are examples of such around the web.
On the bright side, the encoder output seems to be just fine on any device and API version.
Converting the raw H.264 stream to a .mp4 file requires a 3rd-party library, since as you noted MediaMuxer is not available. I believe some people have installed ffmpeg as a command-line utility and executed it that way (maybe like this?).
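As a rough illustration of that last option (not from the original answer; the paths and the ffmpeg binary location are assumptions), the raw Annex-B H.264 output could be remuxed by invoking ffmpeg as an external process:
ProcessBuilder pb = new ProcessBuilder(
        ffmpegPath,                       // path to a bundled/installed ffmpeg binary (assumption)
        "-framerate", "30",
        "-i", "/sdcard/raw_stream.h264",  // raw H.264 elementary stream written by MediaCodec
        "-c:v", "copy",                   // no re-encode, just remux into an MP4 container
        "/sdcard/output.mp4");
pb.redirectErrorStream(true);
Process p = pb.start();
p.waitFor(); // throws InterruptedException; handle or declare it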
import android.app.Activity;
import android.app.ProgressDialog;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaCodecList;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import android.os.Environment;
import android.os.Handler;
import android.util.Log;
import java.io.File;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.text.DateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
public class MMediaMuxer {
private static final String MIME_TYPE = "video/avc"; // H.264 Advanced Video Coding
private static int _width = 512;
private static int _height = 512;
private static final int BIT_RATE = 800000;
private static final int INFLAME_INTERVAL = 1;
private static final int FRAME_RATE = 10;
private static boolean DEBUG = false;
private MediaCodec mediaCodec;
private MediaMuxer mediaMuxer;
private boolean mRunning;
private int generateIndex = 0;
private int mTrackIndex;
private int MAX_FRAME_VIDEO = 0;
private List<byte[]> bitList;
private List<byte[]> bitFirst;
private List<byte[]> bitLast;
private int current_index_frame = 0;
private static final String TAG = "CODEC";
private String outputPath;
private Activity _activity;
private ProgressDialog pd;
private String _title;
private String _mess;
public void Init(Activity activity, int width, int height, String title, String mess) {
_title = title;
_mess = mess;
_activity = activity;
_width = width;
_height = height;
Logd("MMediaMuxer Init");
ShowProgressBar();
}
private Handler aHandler = new Handler();
public void AddFrame(final byte[] byteFrame) {
CheckDataListState();
new Thread(new Runnable() {
@Override
public void run() {
Logd("Android get Frame");
Bitmap bit = BitmapFactory.decodeByteArray(byteFrame, 0, byteFrame.length);
Logd("Android convert Bitmap");
byte[] byteConvertFrame = getNV21(bit.getWidth(), bit.getHeight(), bit);
Logd("Android convert getNV21");
bitList.add(byteConvertFrame);
}
}).start();
}
public void AddFrame(byte[] byteFrame, int count, boolean isLast) {
CheckDataListState();
Logd("Android get Frames = " + count);
Bitmap bit = BitmapFactory.decodeByteArray(byteFrame, 0, byteFrame.length);
Logd("Android convert Bitmap");
byteFrame = getNV21(bit.getWidth(), bit.getHeight(), bit);
Logd("Android convert getNV21");
for (int i = 0; i < count; i++) {
if (isLast) {
bitLast.add(byteFrame);
} else {
bitFirst.add(byteFrame);
}
}
}
public void CreateVideo() {
current_index_frame = 0;
Logd("Prepare Frames Data");
bitFirst.addAll(bitList);
bitFirst.addAll(bitLast);
MAX_FRAME_VIDEO = bitFirst.size();
Logd("CreateVideo");
mRunning = true;
bufferEncoder();
}
public boolean GetStateEncoder() {
return mRunning;
}
public String GetPath() {
return outputPath;
}
public void onBackPressed() {
mRunning = false;
}
public void ShowProgressBar() {
_activity.runOnUiThread(new Runnable() {
public void run() {
pd = new ProgressDialog(_activity);
pd.setTitle(_title);
pd.setCancelable(false);
pd.setMessage(_mess);
pd.setCanceledOnTouchOutside(false);
pd.show();
}
});
}
public void HideProgressBar() {
new Thread(new Runnable() {
@Override
public void run() {
_activity.runOnUiThread(new Runnable() {
@Override
public void run() {
pd.dismiss();
}
});
}
}).start();
}
private void bufferEncoder() {
Runnable runnable = new Runnable() {
@Override
public void run() {
try {
Logd("PrepareEncoder start");
PrepareEncoder();
Logd("PrepareEncoder end");
} catch (IOException e) {
Loge(e.getMessage());
}
try {
while (mRunning) {
Encode();
}
} finally {
Logd("release");
Release();
HideProgressBar();
bitFirst = null;
bitLast = null;
}
}
};
Thread thread = new Thread(runnable);
thread.start();
}
public void ClearTask() {
bitList = null;
bitFirst = null;
bitLast = null;
}
private void PrepareEncoder() throws IOException {
MediaCodecInfo codecInfo = selectCodec(MIME_TYPE);
if (codecInfo == null) {
Loge("Unable to find an appropriate codec for " + MIME_TYPE);
}
Logd("found codec: " + codecInfo.getName());
int colorFormat;
try {
colorFormat = selectColorFormat(codecInfo, MIME_TYPE);
} catch (Exception e) {
colorFormat = MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar;
}
mediaCodec = MediaCodec.createByCodecName(codecInfo.getName());
MediaFormat mediaFormat = MediaFormat.createVideoFormat(MIME_TYPE, _width, _height);
mediaFormat.setInteger(MediaFormat.KEY_BIT_RATE, BIT_RATE);
mediaFormat.setInteger(MediaFormat.KEY_FRAME_RATE, FRAME_RATE);
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
mediaFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, INFLAME_INTERVAL);
mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
mediaCodec.start();
try {
String currentDateTimeString = DateFormat.getDateTimeInstance().format(new Date());
outputPath = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_MOVIES),
"pixel"+currentDateTimeString+".mp4").toString();
mediaMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
} catch (IOException ioe) {
Loge("MediaMuxer creation failed");
}
}
private void Encode() {
while (true) {
if (!mRunning) {
break;
}
Logd("Encode start");
long TIMEOUT_USEC = 5000;
int inputBufIndex = mediaCodec.dequeueInputBuffer(TIMEOUT_USEC);
long ptsUsec = computePresentationTime(generateIndex, FRAME_RATE);
if (inputBufIndex >= 0) {
byte[] input = bitFirst.get(current_index_frame);
final ByteBuffer inputBuffer = mediaCodec.getInputBuffer(inputBufIndex);
inputBuffer.clear();
inputBuffer.put(input);
mediaCodec.queueInputBuffer(inputBufIndex, 0, input.length, ptsUsec, 0);
generateIndex++;
}
MediaCodec.BufferInfo mBufferInfo = new MediaCodec.BufferInfo();
int encoderStatus = mediaCodec.dequeueOutputBuffer(mBufferInfo, TIMEOUT_USEC);
if (encoderStatus == MediaCodec.INFO_TRY_AGAIN_LATER) {
// no output available yet
Loge("No output from encoder available");
} else if (encoderStatus == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
// not expected for an encoder
MediaFormat newFormat = mediaCodec.getOutputFormat();
mTrackIndex = mediaMuxer.addTrack(newFormat);
mediaMuxer.start();
} else if (encoderStatus < 0) {
Loge("unexpected result from encoder.dequeueOutputBuffer: " + encoderStatus);
} else if (mBufferInfo.size != 0) {
ByteBuffer encodedData = mediaCodec.getOutputBuffer(encoderStatus);
if (encodedData == null) {
Loge("encoderOutputBuffer " + encoderStatus + " was null");
} else {
encodedData.position(mBufferInfo.offset);
encodedData.limit(mBufferInfo.offset + mBufferInfo.size);
mediaMuxer.writeSampleData(mTrackIndex, encodedData, mBufferInfo);
mediaCodec.releaseOutputBuffer(encoderStatus, false);
}
}
current_index_frame++;
if (current_index_frame > MAX_FRAME_VIDEO - 1) {
Log.d(TAG, "mRunning = false;");
mRunning = false;
}
Logd("Encode end");
}
}
private void Release() {
if (mediaCodec != null) {
mediaCodec.stop();
mediaCodec.release();
mediaCodec = null;
Logd("RELEASE CODEC");
}
if (mediaMuxer != null) {
mediaMuxer.stop();
mediaMuxer.release();
mediaMuxer = null;
Logd("RELEASE MUXER");
}
}
/**
* Returns the first codec capable of encoding the specified MIME type, or
* null if no match was found.
*/
private static MediaCodecInfo selectCodec(String mimeType) {
int numCodecs = MediaCodecList.getCodecCount();
for (int i = 0; i < numCodecs; i++) {
MediaCodecInfo codecInfo = MediaCodecList.getCodecInfoAt(i);
if (!codecInfo.isEncoder()) {
continue;
}
String[] types = codecInfo.getSupportedTypes();
for (int j = 0; j < types.length; j++) {
if (types[j].equalsIgnoreCase(mimeType)) {
return codecInfo;
}
}
}
return null;
}
/**
* Returns a color format that is supported by the codec and by this test
* code. If no match is found, this throws a test failure -- the set of
* formats known to the test should be expanded for new platforms.
*/
private static int selectColorFormat(MediaCodecInfo codecInfo,
String mimeType) {
MediaCodecInfo.CodecCapabilities capabilities = codecInfo
.getCapabilitiesForType(mimeType);
for (int i = 0; i < capabilities.colorFormats.length; i++) {
int colorFormat = capabilities.colorFormats[i];
if (isRecognizedFormat(colorFormat)) {
return colorFormat;
}
}
return 0; // not reached
}
/**
* Returns true if this is a color format that this test code understands
* (i.e. we know how to read and generate frames in this format).
*/
private static boolean isRecognizedFormat(int colorFormat) {
switch (colorFormat) {
// these are the formats we know how to handle for this test
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420PackedSemiPlanar:
case MediaCodecInfo.CodecCapabilities.COLOR_TI_FormatYUV420PackedSemiPlanar:
return true;
default:
return false;
}
}
private byte[] getNV21(int inputWidth, int inputHeight, Bitmap scaled) {
int[] argb = new int[inputWidth * inputHeight];
scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
scaled.recycle();
return yuv;
}
private void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
final int frameSize = width * height;
int yIndex = 0;
int uvIndex = frameSize;
int a, R, G, B, Y, U, V;
int index = 0;
for (int j = 0; j < height; j++) {
for (int i = 0; i < width; i++) {
a = (argb[index] & 0xff000000) >> 24; // a is not used obviously
R = (argb[index] & 0xff0000) >> 16;
G = (argb[index] & 0xff00) >> 8;
B = (argb[index] & 0xff) >> 0;
Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
yuv420sp[yIndex++] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
if (j % 2 == 0 && index % 2 == 0) {
yuv420sp[uvIndex++] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
yuv420sp[uvIndex++] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
}
index++;
}
}
}
private void CheckDataListState() {
if (bitList == null) {
bitList = new ArrayList<>();
}
if (bitFirst == null) {
bitFirst = new ArrayList<>();
}
if (bitLast == null) {
bitLast = new ArrayList<>();
}
}
private long computePresentationTime(long frameIndex, int framerate) {
return 132 + frameIndex * 1000000 / framerate;
}
private static void Logd(String Mess) {
if (DEBUG) {
Log.d(TAG, Mess);
}
}
private static void Loge(String Mess) {
Log.e(TAG, Mess);
}
}
This works with SDK 21. Don't pay attention to the several lists; it was done that way because the data is converted piece by piece at runtime.
There are several tutorials out there which explain how to get a simple camera preview up and running on an Android device, but I couldn't find any example that explains how to manipulate the image before it's rendered.
What I want to do is implementing custom color filters to simulate e.g. red and/or green deficiency.
I did some research on this and put together a working(ish) example. Here's what I found. It's pretty easy to get the raw data coming off of the camera. It's returned as a YUV byte array. You'd need to draw it manually onto a surface in order to be able to modify it. To do that you'd need to have a SurfaceView that you can manually run draw calls with. There are a couple of flags you can set that accomplish that.
In order to do the draw call manually you'd need to convert the byte array into a bitmap of some sort. Bitmaps and the BitmapDecoder don't seem to handle the YUV byte array very well at this point. There's been a bug filed for this, but I don't know what the status is on that. So people have been trying to decode the byte array into an RGB format themselves.
Seems like doing the decoding manually has been kinda slow and people have had various degrees of success with it. Something like this should probably really be done with native code at the NDK level.
Still, it is possible to get it working. Also, my little demo is just me spending a couple of hours hacking stuff together (I guess doing this caught my imagination a little too much ;)). So chances are with some tweaking you could much improve what I've managed to get working.
This little code snippet contains a couple of other gems I found as well. If all you want is to be able to draw over the surface, you can override the surface's onDraw function - you could potentially analyze the returned camera image and draw an overlay - it'd be much faster than trying to process every frame. Also, I changed to SurfaceHolder.SURFACE_TYPE_NORMAL from what would be needed if you wanted a camera preview to show up. So a couple of changes to the code - the commented-out code:
//try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
And the:
SurfaceHolder.SURFACE_TYPE_NORMAL //SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS - for preview to work
Should allow you to overlay frames based on the camera preview on top of the real preview.
Anyway, here's a working piece of code - Should give you something to start with.
Just put a line of code in one of your views like this:
<pathtocustomview.MySurfaceView android:id="@+id/surface_camera"
android:layout_width="fill_parent" android:layout_height="10dip"
android:layout_weight="1">
</pathtocustomview.MySurfaceView>
And include this class in your source somewhere:
package pathtocustomview;
import java.io.IOException;
import java.nio.Buffer;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;
public class MySurfaceView extends SurfaceView implements Callback,
Camera.PreviewCallback {
private SurfaceHolder mHolder;
private Camera mCamera;
private boolean isPreviewRunning = false;
private byte [] rgbbuffer = new byte[256 * 256];
private int [] rgbints = new int[256 * 256];
protected final Paint rectanglePaint = new Paint();
public MySurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
rectanglePaint.setARGB(100, 200, 0, 0);
rectanglePaint.setStyle(Paint.Style.FILL);
rectanglePaint.setStrokeWidth(2);
mHolder = getHolder();
mHolder.addCallback(this);
mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
}
@Override
protected void onDraw(Canvas canvas) {
canvas.drawRect(new Rect((int) Math.random() * 100,
(int) Math.random() * 100, 200, 200), rectanglePaint);
Log.w(this.getClass().getName(), "On Draw Called");
}
public void surfaceChanged(SurfaceHolder holder, int format, int width,
int height) {
}
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
this.setWillNotDraw(false); // This allows us to make our own draw
// calls to this canvas
mCamera = Camera.open();
Camera.Parameters p = mCamera.getParameters();
p.setPreviewSize(240, 160);
mCamera.setParameters(p);
//try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
public void onPreviewFrame(byte[] data, Camera camera) {
Log.d("Camera", "Got a camera frame");
Canvas c = null;
if(mHolder == null){
return;
}
try {
synchronized (mHolder) {
c = mHolder.lockCanvas(null);
// Do your drawing here
// So this data value you're getting back is formatted in YUV format and you can't do much
// with it until you convert it to rgb
int bwCounter=0;
int yuvsCounter=0;
for (int y=0;y<160;y++) {
System.arraycopy(data, yuvsCounter, rgbbuffer, bwCounter, 240);
yuvsCounter=yuvsCounter+240;
bwCounter=bwCounter+256;
}
for(int i = 0; i < rgbints.length; i++){
rgbints[i] = (int)rgbbuffer[i];
}
//decodeYUV(rgbbuffer, data, 100, 100);
c.drawBitmap(rgbints, 0, 256, 0, 0, 256, 256, false, new Paint());
Log.d("SOMETHING", "Got Bitmap");
}
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (c != null) {
mHolder.unlockCanvasAndPost(c);
}
}
}
}
I used walta's solution, but I had some problems with the YUV conversion, the camera frame output sizes, and a crash on camera release.
Finally the following code worked for me:
public class MySurfaceView extends SurfaceView implements Callback, Camera.PreviewCallback {
private static final String TAG = "MySurfaceView";
private int width;
private int height;
private SurfaceHolder mHolder;
private Camera mCamera;
private int[] rgbints;
private boolean isPreviewRunning = false;
private int mMultiplyColor;
public MySurfaceView(Context context, AttributeSet attrs) {
super(context, attrs);
mHolder = getHolder();
mHolder.addCallback(this);
mMultiplyColor = getResources().getColor(R.color.multiply_color);
}
// @Override
// protected void onDraw(Canvas canvas) {
// Log.w(this.getClass().getName(), "On Draw Called");
// }
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
synchronized (this) {
if (isPreviewRunning)
return;
this.setWillNotDraw(false); // This allows us to make our own draw calls to this canvas
mCamera = Camera.open();
isPreviewRunning = true;
Camera.Parameters p = mCamera.getParameters();
Size size = p.getPreviewSize();
width = size.width;
height = size.height;
p.setPreviewFormat(ImageFormat.NV21);
showSupportedCameraFormats(p);
mCamera.setParameters(p);
rgbints = new int[width * height];
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
// { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
mCamera.startPreview();
mCamera.setPreviewCallback(this);
}
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
synchronized (this) {
try {
if (mCamera != null) {
//mHolder.removeCallback(this);
mCamera.setPreviewCallback(null);
mCamera.stopPreview();
isPreviewRunning = false;
mCamera.release();
}
} catch (Exception e) {
Log.e("Camera", e.getMessage());
}
}
}
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
// Log.d("Camera", "Got a camera frame");
if (!isPreviewRunning)
return;
Canvas canvas = null;
if (mHolder == null) {
return;
}
try {
synchronized (mHolder) {
canvas = mHolder.lockCanvas(null);
int canvasWidth = canvas.getWidth();
int canvasHeight = canvas.getHeight();
decodeYUV(rgbints, data, width, height);
// draw the decoded image, centered on canvas
canvas.drawBitmap(rgbints, 0, width, canvasWidth-((width+canvasWidth)>>1), canvasHeight-((height+canvasHeight)>>1), width, height, false, null);
// use some color filter
canvas.drawColor(mMultiplyColor, Mode.MULTIPLY);
}
} catch (Exception e){
e.printStackTrace();
} finally {
// do this in a finally so that if an exception is thrown
// during the above, we don't leave the Surface in an
// inconsistent state
if (canvas != null) {
mHolder.unlockCanvasAndPost(canvas);
}
}
}
/**
 * Decodes a YUV frame to a buffer which can be used to create a bitmap. Use
 * this for OS < FROYO, which has a native YUV decoder. Decodes Y, U, and V
 * values of the YUV 420 buffer described as YCbCr_422_SP by Android.
 *
 * @param out
 *            the outgoing array of RGB pixels
 * @param fg
 *            the incoming frame bytes
 * @param width
 *            of source frame
 * @param height
 *            of source frame
 * @throws NullPointerException
 * @throws IllegalArgumentException
 */
public void decodeYUV(int[] out, byte[] fg, int width, int height) throws NullPointerException, IllegalArgumentException {
int sz = width * height;
if (out == null)
throw new NullPointerException("buffer out is null");
if (out.length < sz)
throw new IllegalArgumentException("buffer out size " + out.length + " < minimum " + sz);
if (fg == null)
throw new NullPointerException("buffer 'fg' is null");
if (fg.length < sz)
throw new IllegalArgumentException("buffer fg size " + fg.length + " < minimum " + sz * 3 / 2);
int i, j;
int Y, Cr = 0, Cb = 0;
for (j = 0; j < height; j++) {
int pixPtr = j * width;
final int jDiv2 = j >> 1;
for (i = 0; i < width; i++) {
Y = fg[pixPtr];
if (Y < 0)
Y += 255;
if ((i & 0x1) != 1) {
final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
Cb = fg[cOff];
if (Cb < 0)
Cb += 127;
else
Cb -= 128;
Cr = fg[cOff + 1];
if (Cr < 0)
Cr += 127;
else
Cr -= 128;
}
int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
if (R < 0)
R = 0;
else if (R > 255)
R = 255;
int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
if (G < 0)
G = 0;
else if (G > 255)
G = 255;
int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
if (B < 0)
B = 0;
else if (B > 255)
B = 255;
out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
}
}
}
private void showSupportedCameraFormats(Parameters p) {
List<Integer> supportedPictureFormats = p.getSupportedPreviewFormats();
Log.d(TAG, "preview format:" + cameraFormatIntToString(p.getPreviewFormat()));
for (Integer x : supportedPictureFormats) {
Log.d(TAG, "suppoterd format: " + cameraFormatIntToString(x.intValue()));
}
}
private String cameraFormatIntToString(int format) {
switch (format) {
case PixelFormat.JPEG:
return "JPEG";
case PixelFormat.YCbCr_420_SP:
return "NV21";
case PixelFormat.YCbCr_422_I:
return "YUY2";
case PixelFormat.YCbCr_422_SP:
return "NV16";
case PixelFormat.RGB_565:
return "RGB_565";
default:
return "Unknown:" + format;
}
}
}
To use it, run the following code from your activity's onCreate:
SurfaceView surfaceView = new MySurfaceView(this, null);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(RelativeLayout.LayoutParams.MATCH_PARENT, RelativeLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(layoutParams);
mRelativeLayout.addView(surfaceView);
Have you looked at GPUImage ?
It was originally an OSX/iOS library made by Brad Larson, which exists as an Objective-C wrapper around OpenGL ES.
https://github.com/BradLarson/GPUImage
The people at CyberAgent have made an Android port (which doesn't have complete feature parity): a set of Java wrappers on top of the OpenGL ES stuff. It's relatively high level and pretty easy to implement, with a lot of the same functionality mentioned above...
https://github.com/CyberAgent/android-gpuimage
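For example, a small hedged sketch of the GPUImage usage (class names as in the CyberAgent library; in newer releases the filters live under the jp.co.cyberagent.android.gpuimage.filter package, so adjust the imports accordingly):
GPUImage gpuImage = new GPUImage(context);
gpuImage.setImage(cameraFrameBitmap);               // or attach it to a GPUImageView
gpuImage.setFilter(new GPUImageGrayscaleFilter());  // swap in a color-matrix filter to simulate color deficiency
Bitmap filtered = gpuImage.getBitmapWithFilterApplied();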