Unable to blit from external texture to EGLSurface in Android

When I try to render the texture and its transformation matrix to the EGLSurface, nothing is displayed in the view.
As a follow-up to this issue, I have slightly modified the code, following grafika/fadden's ContinuousCapture sample.
Here is my code:
Here is the draw method, which runs on the RenderThread; it is invoked properly whenever data is produced at the producer end in native code.
public void drawFrame() {
    mOffScreenSurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    mSurfaceWindowUser.makeCurrent();
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mSurfaceWindowUser.swapBuffers();
}
The run() method of the RenderThread:
public void run() {
    Looper.prepare();
    mHandler = new RenderHandler(this);
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mOffScreenSurface = new OffscreenSurface(mEglCore, 640, 480);
    mOffScreenSurface.makeCurrent();
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    // This surface is sent to native code (the producer), which wraps it in an
    // ANativeWindow and copies data into it via lock/post.
    mCameraSurface = new Surface(mCameraTexture);
    mCameraTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
        @Override
        public void onFrameAvailable(SurfaceTexture surfaceTexture) {
            Log.d(TAG, "Long breath.. data is pumped by the native-layer producer..");
            mHandler.frameReceivedFromProducer();
        }
    });
    // mSurfaceUser is the Surface received from the MainActivity's TextureView.
    mSurfaceWindowUser = new WindowSurface(mEglCore, mSurfaceUser, false);
}
To confirm that the producer on the native side is actually producing data: if I pass the user surface directly, without any EGL configuration, the frames are rendered to the screen.
At the native level, I set the buffer geometry:
    geometryResult = ANativeWindow_setBuffersGeometry(userNaiveWindow, 640, 480, WINDOW_FORMAT_RGBA_8888);
To render each frame, I use ANativeWindow_lock() and ANativeWindow_unlockAndPost() to write the frame directly into the buffer.
I cannot figure out what could be wrong and where I have to dig deeper.
Thanks fadden for your help.

Related

Android Video Processing - how to connect the ImageReader Surface to the preview?

I'm using Android's Camera2 API and would like to perform some image processing on camera preview frames and then display the changes back on the preview (TextureView).
Starting from the common camera2video example, I've set up an ImageReader in my openCamera().
mImageReader = ImageReader.newInstance(mVideoSize.getWidth(),
mVideoSize.getHeight(), ImageFormat.YUV_420_888, mMaxBufferedImages);
mImageReader.setOnImageAvailableListener(mImageAvailable, mBackgroundHandler);
In my startPreview(), I've set up the Surfaces to receive frames from the CaptureRequest.
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
mPreviewBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
List<Surface> surfaces = new ArrayList<>();
// Here is where we connect the mPreviewSurface to the mTextureView.
mPreviewSurface = new Surface(texture);
surfaces.add(mPreviewSurface);
mPreviewBuilder.addTarget(mPreviewSurface);
// Connect our Image Reader to the Camera to get the preview frames.
Surface readerSurface = mImageReader.getSurface();
surfaces.add(readerSurface);
mPreviewBuilder.addTarget(readerSurface);
Then I'll modify the image data in the OnImageAvailableListener() callback.
ImageReader.OnImageAvailableListener mImageAvailable = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        try {
            Image image = reader.acquireLatestImage();
            if (image == null)
                return;
            final Image.Plane[] planes = image.getPlanes();

            // Do something to the pixels: black out the top-left quarter.
            // Note: this modifies a copy of the Y plane; the edits are never
            // written back anywhere, and the row indexing ignores the plane's
            // rowStride, which may differ from the image width.
            ByteBuffer y_data_buffer = planes[0].getBuffer();
            byte[] y_data = new byte[y_data_buffer.remaining()];
            y_data_buffer.get(y_data);
            for (int row = 0; row < image.getHeight() / 2; row++) {
                for (int col = 0; col < image.getWidth() / 2; col++) {
                    y_data[row * image.getWidth() + col] = 0;
                }
            }
            image.close();
        } catch (IllegalStateException e) {
            Log.d(TAG, "mImageAvailable() Too many images acquired");
        }
    }
};
As I understand it, I am now sending images to two Surface instances: one for the mTextureView and the other for my ImageReader.
How can I get my mTextureView to use the same Surface as the ImageReader, or should I be manipulating the image data directly from the mTextureView's Surface?
Thanks
If you only want to display the modified output, then I'm not sure why you have two outputs configured (the TextureView and the ImageReader).
Generally, if you want something like
camera -> in-app edits -> display
You have several options, depending on the kinds of edits you want, and various tradeoffs between ease of coding, performance, and so on.
One of the most efficient options is to do your edits as an OpenGL shader.
In that case, a GLSurfaceView is probably the simplest option.
Create a SurfaceTexture object with a texture ID that's unused in the GLSurfaceView's EGL context, and pass a Surface created from the SurfaceTexture to the camera session and requests.
Then in the GLSurfaceView's drawing method, call SurfaceTexture's updateTexImage() method, and then use the texture ID to render your output as you'd like it.
That does require a lot of OpenGL code, so if you're not familiar with it, that can be challenging.
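For a rough idea of the shape of that code, here is a minimal sketch (not a complete renderer): it assumes a GLSurfaceView in RENDERMODE_WHEN_DIRTY, an already-created GL_TEXTURE_EXTERNAL_OES texture id mTextureId, and a drawExternalTexture() helper standing in for your own shader setup and quad drawing.

// On the GL thread, after creating the external texture:
mSurfaceTexture = new SurfaceTexture(mTextureId);
mSurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        mGlSurfaceView.requestRender();  // wake the GL thread once per camera frame
    }
});
Surface cameraSurface = new Surface(mSurfaceTexture);
// Pass cameraSurface to createCaptureSession() and addTarget() it on each request.

// In the GLSurfaceView.Renderer:
@Override
public void onDrawFrame(GL10 gl) {
    mSurfaceTexture.updateTexImage();              // latch the newest camera frame
    mSurfaceTexture.getTransformMatrix(mTexMatrix);
    drawExternalTexture(mTextureId, mTexMatrix);   // hypothetical shader draw call
}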
You can also use RenderScript for a similar effect; there, you'll have an output SurfaceView or TextureView, and a RenderScript script that reads from an input Allocation fed by the camera and writes to an output Allocation bound to the View; you can create such Allocations from a Surface.
The Google HdrViewfinderDemo camera2 sample app uses this approach. It's a lot less boilerplate.
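For reference, here is a hedged sketch of the Allocation plumbing (an outline under assumptions, not HdrViewfinderDemo's exact code; viewSurface is assumed to come from your SurfaceView or TextureView):

RenderScript rs = RenderScript.create(context);

// Input Allocation that the camera writes into (YUV, with an IO Surface attached).
Type yuvType = new Type.Builder(rs, Element.YUV(rs))
        .setX(width).setY(height)
        .setYuvFormat(ImageFormat.YUV_420_888)
        .create();
Allocation input = Allocation.createTyped(rs, yuvType,
        Allocation.USAGE_IO_INPUT | Allocation.USAGE_SCRIPT);

// Output Allocation that pushes RGBA frames to the view's Surface.
Type rgbaType = Type.createXY(rs, Element.RGBA_8888(rs), width, height);
Allocation output = Allocation.createTyped(rs, rgbaType,
        Allocation.USAGE_IO_OUTPUT | Allocation.USAGE_SCRIPT);
output.setSurface(viewSurface);

// Hand input.getSurface() to the camera session as an output target; then, per
// frame: input.ioReceive(), run your script from input to output, output.ioSend().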
Third, you can just use an ImageReader like you're doing now, but you'll have to do a lot of conversion yourself to write it to the screen. The simplest (but slowest) option is to get a Canvas from a SurfaceView or an ImageView, and just write pixels into it one by one. Or you can do that via the NDK's ANativeWindow API, which is faster but requires writing JNI code and still requires you to do the YUV->RGB conversion yourself (or use undocumented APIs to push YUV into the ANativeWindow and hope it works).
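As a hedged sketch of that simplest-but-slowest route, assuming you have already repacked the YUV_420_888 planes into an NV21 byte array nv21Bytes (YuvImage only accepts NV21 or YUY2) and hold the SurfaceView's SurfaceHolder:

// Compress NV21 -> JPEG -> Bitmap; slow, but avoids hand-written YUV->RGB math.
YuvImage yuvImage = new YuvImage(nv21Bytes, ImageFormat.NV21, width, height, null);
ByteArrayOutputStream jpegStream = new ByteArrayOutputStream();
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, jpegStream);
byte[] jpegBytes = jpegStream.toByteArray();
Bitmap bitmap = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);

// Draw the Bitmap onto the SurfaceView, scaled to fill it.
Canvas canvas = surfaceHolder.lockCanvas();
if (canvas != null) {
    canvas.drawBitmap(bitmap, null,
            new Rect(0, 0, canvas.getWidth(), canvas.getHeight()), null);
    surfaceHolder.unlockCanvasAndPost(canvas);
}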

Rendering camera into multiple surfaces - on and off screen

I want to render the camera output into a view and once in a while save the camera output frame to a file, with the constraint being - the saved frame should be the same resolution as the camera is configured, while the view is smaller than the camera output (maintaining the aspect ratio).
Based on the ContinuousCaptureActivity example in grafika, I thought the best approach would be to send the camera to a SurfaceTexture and generally rendering the output and downscaling it into a SurfaceView, and when needed, render the full frame into a different Surface that has no view, in order to retrieve a byte buffer from it in parallel to the regular SurfaceView rendering.
The example is very similar to my situation - the preview is rendered to a view of smaller size and can be recorded and saved at the full resolution via a VideoEncoder.
I replaced the VideoEncoder logic with my own and got stuck trying to provide a Surface, like the encoder does, for the full-resolution rendering. How do I create such a Surface? Am I approaching this correctly?
Some code ideas based on the example:
Inside the surfaceCreated(SurfaceHolder holder) method (line 350):
@Override // SurfaceHolder.Callback
public void surfaceCreated(SurfaceHolder holder) {
    Log.d(TAG, "surfaceCreated holder=" + holder);
    mEglCore = new EglCore(null, EglCore.FLAG_RECORDABLE);
    mDisplaySurface = new WindowSurface(mEglCore, holder.getSurface(), false);
    mDisplaySurface.makeCurrent();
    mFullFrameBlit = new FullFrameRect(
            new Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT));
    mTextureId = mFullFrameBlit.createTextureObject();
    mCameraTexture = new SurfaceTexture(mTextureId);
    mCameraTexture.setOnFrameAvailableListener(this);

    Log.d(TAG, "starting camera preview");
    try {
        mCamera.setPreviewTexture(mCameraTexture);
    } catch (IOException ioe) {
        throw new RuntimeException(ioe);
    }
    mCamera.startPreview();

    // *** MY EDIT START ***
    // Encoder creation no longer needed
    //try {
    //    mCircEncoder = new CircularEncoder(VIDEO_WIDTH, VIDEO_HEIGHT, 6000000,
    //            mCameraPreviewThousandFps / 1000, 7, mHandler);
    //} catch (IOException ioe) {
    //    throw new RuntimeException(ioe);
    //}
    mEncoderSurface = new WindowSurface(mEglCore, mCameraTexture); // <-- Crashes with EGL error 0x3003
    // *** MY EDIT END ***

    updateControls();
}
The drawFrame() method (line 420):
private void drawFrame() {
    //Log.d(TAG, "drawFrame");
    if (mEglCore == null) {
        Log.d(TAG, "Skipping drawFrame after shutdown");
        return;
    }

    // Latch the next frame from the camera.
    mDisplaySurface.makeCurrent();
    mCameraTexture.updateTexImage();
    mCameraTexture.getTransformMatrix(mTmpMatrix);

    // Fill the SurfaceView with it.
    SurfaceView sv = (SurfaceView) findViewById(R.id.continuousCapture_surfaceView);
    int viewWidth = sv.getWidth();
    int viewHeight = sv.getHeight();
    GLES20.glViewport(0, 0, viewWidth, viewHeight);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    mDisplaySurface.swapBuffers();

    // *** MY EDIT START ***
    // Send it to the video encoder.
    if (someCondition) {
        mEncoderSurface.makeCurrent();
        GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
        mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
        mEncoderSurface.swapBuffers();
        try {
            mEncoderSurface.saveFrame(new File(getExternalFilesDir(null),
                    String.valueOf(System.currentTimeMillis()) + ".png"));
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // *** MY EDIT END ***
}
You're on the right track. The SurfaceTexture just does a quick bit of wrapping around the original YUV frame from the camera, so the "external" texture is the original image, with no changes. You can't read the pixels straight out of an external texture, so you have to render it somewhere first.
The easiest way to do this is to create an off-screen pbuffer surface. Grafika's gles/OffscreenSurface class does exactly this (with a call to eglCreatePbufferSurface()). Make that EGLSurface current, render the texture onto a FullFrameRect, then read the framebuffer with glReadPixels() (see EglSurfaceBase#saveFrame() for code). Don't call eglSwapBuffers().
Note that you're not creating an Android Surface for the output, just an EGLSurface. (They're different.)
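Applied to the edits above, a minimal sketch might look like this (using grafika's OffscreenSurface, which extends EglSurfaceBase and therefore already has saveFrame()):

// In surfaceCreated(), instead of a WindowSurface wrapping mCameraTexture:
mOffscreenSurface = new OffscreenSurface(mEglCore, VIDEO_WIDTH, VIDEO_HEIGHT);

// In drawFrame(), after drawing to the display surface:
if (someCondition) {
    mOffscreenSurface.makeCurrent();
    GLES20.glViewport(0, 0, VIDEO_WIDTH, VIDEO_HEIGHT);
    mFullFrameBlit.drawFrame(mTextureId, mTmpMatrix);
    try {
        // saveFrame() calls glReadPixels() on the current surface internally.
        mOffscreenSurface.saveFrame(new File(getExternalFilesDir(null),
                System.currentTimeMillis() + ".png"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    // Note: no swapBuffers() -- a pbuffer has no back buffer to swap.
}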

Camera preview image data processing with Android L and Camera2 API

I'm working on an Android app that processes the input image from the camera and displays it to the user. This is fairly simple: I register a PreviewCallback on the camera object with setPreviewCallbackWithBuffer().
This is easy and works smoothly with the old camera API:
public void onPreviewFrame(byte[] data, Camera cam) {
    // custom image data processing
}
I'm trying to port my app to take advantage of the new Camera2 API, and I'm not sure how exactly to do that. I followed the Camera2Video sample in the L Preview samples, which allows recording a video. However, there is no direct image-data transfer in the sample, so I don't understand where exactly I am supposed to get the image pixel data and how to process it.
Could anybody help me or suggest a way to get the functionality of PreviewCallback in Android L, or how it's possible to process preview data from the camera before displaying it on the screen? (There is no preview callback on the camera object.)
Thank you!
Combining a few answers into a more digestible one, because @VP's answer, while technically clear, is difficult to understand if it's your first time moving from Camera to Camera2:
Using https://github.com/googlesamples/android-Camera2Basic as a starting point, modify the following:
In createCameraPreviewSession(), init a new Surface from mImageReader:
Surface mImageSurface = mImageReader.getSurface();
Add that new surface as an output target of your CaptureRequest.Builder variable. Using the Camera2Basic sample, the variable will be mPreviewRequestBuilder:
mPreviewRequestBuilder.addTarget(mImageSurface);
Here's the snippet with the new lines (see my @AngeloS comments):
private void createCameraPreviewSession() {
    try {
        SurfaceTexture texture = mTextureView.getSurfaceTexture();
        assert texture != null;

        // We configure the size of default buffer to be the size of camera preview we want.
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

        // This is the output Surface we need to start preview.
        Surface surface = new Surface(texture);

        //@AngeloS - Our new output surface for preview frame data
        Surface mImageSurface = mImageReader.getSurface();

        // We set up a CaptureRequest.Builder with the output Surface.
        mPreviewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);

        //@AngeloS - Add the new target to our CaptureRequest.Builder
        mPreviewRequestBuilder.addTarget(mImageSurface);
        mPreviewRequestBuilder.addTarget(surface);
        ...
Next, in setUpCameraOutputs(), change the format from ImageFormat.JPEG to ImageFormat.YUV_420_888 when you init your ImageReader. (PS: I also recommend dropping your preview size for smoother operation - one nice feature of Camera2.)
mImageReader = ImageReader.newInstance(largest.getWidth() / 16, largest.getHeight() / 16, ImageFormat.YUV_420_888, 2);
Finally, in the onImageAvailable() method of your ImageReader.OnImageAvailableListener, be sure to use @Kamala's suggestion, because the preview will stop after a few frames if you don't close the image:
@Override
public void onImageAvailable(ImageReader reader) {
    Log.d(TAG, "I'm an image frame!");
    Image image = reader.acquireNextImage();
    ...
    if (image != null)
        image.close();
}
Since the Camera2 API is very different from the current Camera API, it might help to go through the documentation.
A good starting point is the camera2basic example. It demonstrates how to use the Camera2 API and configure an ImageReader to get JPEG images, and how to register an ImageReader.OnImageAvailableListener to receive those images.
To receive preview frames, you need to add your ImageReader's surface to setRepeatingRequest's CaptureRequest.Builder.
Also, you should set the ImageReader's format to YUV_420_888, which will give you 30 fps at 8 MP (the documentation guarantees 30 fps at 8 MP for the Nexus 5).
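In sketch form (previewSurface and the session/handler fields follow the camera2basic naming; treat this as an outline rather than the sample's exact code):

try {
    CaptureRequest.Builder builder =
            mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    builder.addTarget(previewSurface);              // the TextureView's Surface
    builder.addTarget(mImageReader.getSurface());   // YUV frames for processing
    mCaptureSession.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
} catch (CameraAccessException e) {
    e.printStackTrace();
}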
In the ImageReader.OnImageAvailableListener class, close the image after reading, as shown below (this will release the buffer for the next capture). You will have to handle the exception on close:
Image image = imageReader.acquireNextImage();
ByteBuffer buffer = image.getPlanes()[0].getBuffer();
byte[] bytes = new byte[buffer.remaining()];
buffer.get(bytes);
image.close();
I needed the same thing, so I used their example and added a call to a new function when the camera is in preview state.
private CameraCaptureSession.CaptureCallback mCaptureCallback
        = new CameraCaptureSession.CaptureCallback() {

    private void process(CaptureResult result) {
        switch (mState) {
            case STATE_PREVIEW: {
                if (buttonPressed) {
                    savePreviewShot();
                }
                break;
            }
        }
    }
    ...
};
The savePreviewShot() is simply a recycled version of the original captureStillPicture() adapted to use the preview template.
private void savePreviewShot() {
    try {
        final Activity activity = getActivity();
        if (null == activity || null == mCameraDevice) {
            return;
        }
        // This is the CaptureRequest.Builder that we use to take a picture.
        final CaptureRequest.Builder captureBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        captureBuilder.addTarget(mImageReader.getSurface());

        // Orientation
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, ORIENTATIONS.get(rotation));

        CameraCaptureSession.CaptureCallback CaptureCallback
                = new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                           TotalCaptureResult result) {
                SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd_HH:mm:ss:SSS");
                Date resultdate = new Date(System.currentTimeMillis());
                String mFileName = sdf.format(resultdate);
                mFile = new File(getActivity().getExternalFilesDir(null), "pic " + mFileName + " preview.jpg");
                Log.i("Saved file", "" + mFile.toString());
                unlockFocus();
            }
        };
        mCaptureSession.stopRepeating();
        mCaptureSession.capture(captureBuilder.build(), CaptureCallback, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
It's better to init the ImageReader with a max image buffer of 2 and then use reader.acquireLatestImage() inside onImageAvailable(), because acquireLatestImage() acquires the latest Image from the ImageReader's queue, dropping older ones. This function is recommended over acquireNextImage() for most use cases, as it's better suited to real-time processing. Note that the max image buffer should be at least 2.
And remember to close() your image after processing.
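A minimal sketch of that pattern (width, height, and mBackgroundHandler are assumed to come from your own setup):

mImageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, /* maxImages= */ 2);
mImageReader.setOnImageAvailableListener(new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();  // drops any stale frames in the queue
        if (image == null) {
            return;
        }
        try {
            // ... process image.getPlanes() here ...
        } finally {
            image.close();                          // always release the buffer
        }
    }
}, mBackgroundHandler);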

Is it possible to render an Android View to an OpenGL FBO or texture?

Is it possible to render a View (say, a WebView) to an FBO so it can be used as a texture in an OpenGL composition?
I put together a complete demo project that renders a view to a GL texture in real time in an efficient way; it can be found in this repo. It shows how to render a WebView to a GL texture in real time as an example.
A brief version of the code can look like the following (taken from the demo project in the repo above):
public class GLWebView extends WebView {

    private ViewToGLRenderer mViewToGLRenderer;
    ...

    // drawing magic
    @Override
    public void draw(Canvas canvas) {
        // returns canvas attached to gl texture to draw on
        Canvas glAttachedCanvas = mViewToGLRenderer.onDrawViewBegin();
        if (glAttachedCanvas != null) {
            // translate canvas to reflect view scrolling
            float xScale = glAttachedCanvas.getWidth() / (float) canvas.getWidth();
            glAttachedCanvas.scale(xScale, xScale);
            glAttachedCanvas.translate(-getScrollX(), -getScrollY());
            // draw the view to provided canvas
            super.draw(glAttachedCanvas);
        }
        // notify the canvas is updated
        mViewToGLRenderer.onDrawViewEnd();
    }
    ...
}
public class ViewToGLRenderer implements GLSurfaceView.Renderer {

    private SurfaceTexture mSurfaceTexture;
    private Surface mSurface;
    private int mGlSurfaceTexture;
    private Canvas mSurfaceCanvas;
    ...

    @Override
    public void onDrawFrame(GL10 gl) {
        synchronized (this) {
            // update texture
            mSurfaceTexture.updateTexImage();
        }
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        releaseSurface();
        mGlSurfaceTexture = createTexture();
        if (mGlSurfaceTexture > 0) {
            // attach the texture to a surface;
            // this is the key class for rendering an android view to the gl level
            mSurfaceTexture = new SurfaceTexture(mGlSurfaceTexture);
            mSurfaceTexture.setDefaultBufferSize(mTextureWidth, mTextureHeight);
            mSurface = new Surface(mSurfaceTexture);
        }
    }

    public Canvas onDrawViewBegin() {
        mSurfaceCanvas = null;
        if (mSurface != null) {
            try {
                mSurfaceCanvas = mSurface.lockCanvas(null);
            } catch (Exception e) {
                Log.e(TAG, "error while rendering view to gl: " + e);
            }
        }
        return mSurfaceCanvas;
    }

    public void onDrawViewEnd() {
        if (mSurfaceCanvas != null) {
            mSurface.unlockCanvasAndPost(mSurfaceCanvas);
        }
        mSurfaceCanvas = null;
    }
}
Yes, it is certainly possible; I have written up a how-to here:
http://www.felixjones.co.uk/neo%20website/Android_View/
However, for static elements that won't change, the bitmap option may be better.
At least someone managed to render text this way:
Rendering Text in OpenGL on Android
It describes the method I used for rendering high-quality dynamic text efficiently using OpenGL ES 1.0, with TrueType/OpenType font files.
[...]
The whole process is actually quite easy. We generate the bitmap (as a texture), calculate and store the size of each character, as well as its location on the texture (UV coordinates). There are some other finer details, but we'll get to that.
OpenGL ES 2.0 Version: https://github.com/d3kod/Texample2

slow face detection android

Hi, my face detection thread is working too slowly.
I call this thread from onPreviewFrame only if the thread is not already working; otherwise I just skip the call. After the thread detects a face, I call onDraw inside the view to draw a rectangle.
public void run() {
    FaceDetector faceDetector = new FaceDetector(bitmapImg.getWidth(), bitmapImg.getHeight(), 1);
    numOfFacesDetected = faceDetector.findFaces(bitmapImg, detectedFaces);
    if (numOfFacesDetected != 0) {
        detectedFaces.getMidPoint(eyesMidPoint);
        eyesDistance = detectedFaces.eyesDistance();
        handler.post(new Runnable() {
            public void run() {
                mPrev.invalidate();
                // turn off thread lock
            }
        });
        mPrev.setEyesDistance(eyesDistance);
        mPrev.setEyesMidPoint(eyesMidPoint);
    }
    isThreadWorking = false;
}
public void onPreviewFrame(byte[] yuv, Camera camera) {
    if (isThreadWorking)
        return;
    isThreadWorking = true;
    ByteBuffer bbuffer = ByteBuffer.wrap(yuv);
    bbuffer.get(grayBuff_, 0, bufflen_);
    detectThread = new FaceDetectThread(handler);
    detectThread.setBuffer(grayBuff_);
    detectThread.start();
}
My question is: could it be taking too long because I am working with a bitmap rather than grayscale data? How can I improve the speed?
The FaceDetector API is not really made to process frames in a live preview. It's way too slow for that.
If you are running on a fairly new device, a better option is the Camera.FaceDetectionListener API available since API level 14. It is very fast and can be used to create an overlay on a preview SurfaceHolder.
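A hedged sketch of that API with the old android.hardware.Camera class the question is using (mOverlay is a hypothetical custom View that draws the face rectangles):

mCamera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        // Face.rect is in the camera's coordinate space (-1000..1000);
        // map it to view coordinates before drawing.
        mOverlay.setFaces(faces);
        mOverlay.invalidate();
    }
});
mCamera.startPreview();
if (mCamera.getParameters().getMaxNumDetectedFaces() > 0) {
    mCamera.startFaceDetection();  // must be called after startPreview()
}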
