Samsung Galaxy Duos (GT-S7562) Camera.takePicture but no callback being called - android

The following code has been tested on an HTC Desire S, a Galaxy S II and the emulator, where it works fine, but surprisingly it doesn't work on the Galaxy S Duos (GT-S7562). All calls succeed with no exception, but the callbacks are never called.
public class CameraManager implements PictureCallback {
    private final static String DEBUG_TAG = "CameraManager";

    public void TakePicture() {
        try {
            _camera = Camera.open(cameraId);
            Log.d(DEBUG_TAG, "Camera.TakePicture.open");
            SurfaceView view = new SurfaceView(CameraManager.this.getContext());
            _camera.setPreviewDisplay(view.getHolder());
            Log.d(DEBUG_TAG, "Camera.TakePicture.setPreviewDisplay");
            _camera.startPreview();
            Log.d(DEBUG_TAG, "Camera.TakePicture.startPreview");
            AudioManager manager = (AudioManager) CameraManager.super.getContext().getSystemService(Context.AUDIO_SERVICE);
            Log.d(DEBUG_TAG, "Camera.TakePicture.AudioManager.ctor()");
            manager.setStreamVolume(AudioManager.STREAM_SYSTEM, 0, AudioManager.FLAG_REMOVE_SOUND_AND_VIBRATE);
            Log.d(DEBUG_TAG, "Camera.TakePicture.setStreamVolume");
            Camera.ShutterCallback shutter = new Camera.ShutterCallback() {
                @Override
                public void onShutter() {
                    AudioManager manager = (AudioManager) CameraManager.super.getContext().getSystemService(Context.AUDIO_SERVICE);
                    Log.d(DEBUG_TAG, "Camera.TakePicture.Shutter.AudioManager.ctor()");
                    manager.setStreamVolume(AudioManager.STREAM_SYSTEM, manager.getStreamMaxVolume(AudioManager.STREAM_SYSTEM), AudioManager.FLAG_ALLOW_RINGER_MODES);
                    Log.d(DEBUG_TAG, "Camera.TakePicture.Shutter.setStreamVolume");
                }
            };
            Camera.PictureCallback rawCallback = new Camera.PictureCallback() {
                @Override
                public void onPictureTaken(byte[] data, Camera camera) {
                    if (data != null) {
                        Log.i(DEBUG_TAG, "Picture taken::RAW");
                        _camera.stopPreview();
                        _camera.release();
                    } else {
                        Log.wtf(DEBUG_TAG, "Picture NOT taken::RAW");
                    }
                }
            };
            _camera.takePicture(shutter, rawCallback, CameraManager.this);
            Log.d(DEBUG_TAG, "Camera.TakePicture.taken");
        } catch (Exception err) {
            err.printStackTrace();
            Log.d(DEBUG_TAG, "Camera.TakePicture.Exception:: %s" + err.getMessage());
        }
    }

    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        if (data != null) {
            Log.i(DEBUG_TAG, "Picture taken::JPG");
            _camera.stopPreview();
            _camera.release();
        } else {
            Log.wtf(DEBUG_TAG, "Picture NOT taken::JPG");
        }
    }
}
Here's the logcat output from running the above code. As you can see, the callbacks are never called:
[ 10-16 01:39:18.711 3873:0xf21 D/CameraManager ]
Camera.TakePicture.open
[ 10-16 01:39:18.891 3873:0xf21 D/CameraManager ]
Camera.TakePicture.setFrontCamera
[ 10-16 01:39:18.901 3873:0xf21 D/CameraManager ]
Camera.TakePicture.setPreviewDisplay
[ 10-16 01:39:18.901 3873:0xf21 D/CameraManager ]
Camera.TakePicture.startPreview
[ 10-16 01:39:18.901 3873:0xf21 D/CameraManager ]
Camera.TakePicture.AudioManager.ctor()
[ 10-16 01:39:19.001 3873:0xf21 D/CameraManager ]
Camera.TakePicture.setStreamVolume
[ 10-16 01:39:19.041 3873:0xf21 D/CameraManager ]
Camera.TakePicture.taken
I have also checked SO for similar problems with the Galaxy S and found the following code, which I tried with no success:
Camera.Parameters parameters = camera.getParameters();
parameters.set("camera-id", 2);
// (800, 480) is also supported front camera preview size at Samsung Galaxy S.
parameters.setPreviewSize(640, 480);
camera.setParameters(parameters);
I was wondering if anyone could tell me what's wrong with my code, or whether this model has some limitation that doesn't allow taking pictures without showing a preview surface. If so, could you please let me know of any possible workaround? Note that this code is executed from an Android service.

The documentation is explicit: you must start the preview if you want to take a picture. From your code, it is not clear why the preview surface is not showing. IIRC, in Honeycomb and later you cannot play with the preview surface coordinates to move it off screen, but you can usually hide the preview surface behind some image view.
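A minimal sketch of that "hide it behind another view" idea might look like the following. The names previewFrame and coverImage are placeholders, the _camera field and CameraManager.this callback are borrowed from the question's code, and it assumes an Activity context; from a pure Service you would have to attach the view through WindowManager with the appropriate overlay permission instead:
    FrameLayout previewFrame = new FrameLayout(this);
    SurfaceView previewSurface = new SurfaceView(this);
    ImageView coverImage = new ImageView(this);
    coverImage.setBackgroundColor(Color.BLACK);   // fully covers the tiny preview surface

    previewFrame.addView(previewSurface, new FrameLayout.LayoutParams(1, 1));
    previewFrame.addView(coverImage, new FrameLayout.LayoutParams(
            FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT));
    setContentView(previewFrame);

    previewSurface.getHolder().addCallback(new SurfaceHolder.Callback() {
        @Override
        public void surfaceCreated(SurfaceHolder holder) {
            try {
                _camera.setPreviewDisplay(holder);   // the surface now actually exists
                _camera.startPreview();
                _camera.takePicture(null, null, CameraManager.this);
            } catch (IOException e) {
                Log.e(DEBUG_TAG, "setPreviewDisplay failed", e);
            }
        }
        @Override public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {}
        @Override public void surfaceDestroyed(SurfaceHolder holder) {}
    });
The important point is that the preview display is only set once the surface has really been created, and that the SurfaceView stays attached to a window even though the user never sees it.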

Camera.takePicture with a rawCallback requires calling addRawImageCallbackBuffer
(I ran into this problem too and had to go to the source code to figure it out.) When Camera.takePicture() is called with a non-null second argument (the raw PictureCallback), the user must call Camera.addRawImageCallbackBuffer() at least once before takePicture() to start the supply of buffers for the data to be returned in. If this is not done, the image is discarded (and apparently the callbacks are not called).
This is a block comment from android.hardware.Camera.java for addRawImageCallbackBuffer():
Adds a pre-allocated buffer to the raw image callback buffer queue.
Applications can add one or more buffers to the queue. When a raw image
frame arrives and there is still at least one available buffer, the
buffer will be used to hold the raw image data and removed from the
queue. Then raw image callback is invoked with the buffer. If a raw
image frame arrives but there is no buffer left, the frame is
discarded. Applications should add buffers back when they finish
processing the data in them by calling this method again in order
to avoid running out of raw image callback buffers.

The size of the buffer is determined by multiplying the raw image
width, height, and bytes per pixel. The width and height can be
read from {@link Camera.Parameters#getPictureSize()}. Bytes per pixel
can be computed from
{@link android.graphics.ImageFormat#getBitsPerPixel(int)} / 8,
using the image format from {@link Camera.Parameters#getPreviewFormat()}.

This method is only necessary when the PictureCallback for raw image
is used while calling {@link #takePicture(Camera.ShutterCallback,
Camera.PictureCallback, Camera.PictureCallback, Camera.PictureCallback)}.

Please note that by calling this method, the mode for
application-managed callback buffers is triggered. If this method has
never been called, null will be returned by the raw image callback since
there is no image callback buffer available. Furthermore, when a supplied
buffer is too small to hold the raw image data, raw image callback will
return null and the buffer will be removed from the buffer queue.

@param callbackBuffer the buffer to add to the raw image callback buffer
    queue. The size should be width * height * (bits per pixel) / 8. A
    null callbackBuffer will be ignored and won't be added to the queue.

@see #takePicture(Camera.ShutterCallback,
    Camera.PictureCallback, Camera.PictureCallback, Camera.PictureCallback)
Try your code with the 'raw' callback argument to takePicture() set to null.
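In the question's code, that change is a one-liner (a minimal sketch, keeping the original shutter and jpeg callbacks from the question):
    _camera.takePicture(shutter, null /* raw */, CameraManager.this /* jpeg */);
Alternatively, keep the raw callback and supply at least one buffer of size width * height * (bits per pixel) / 8 before calling takePicture(), as the quoted comment describes.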

Related

What is the correct way to get Preview Frames using Android NDK Camera2

Based on the NDK camera sample texture-view, I want to create an ImageReader to get preview frames.
What I've done
Create the ImageReader and the camera session:
yuvReader_ = new ImageReader(&compatibleCameraRes_, AIMAGE_FORMAT_YUV_420_888);
camera_->CreateSession(ANativeWindow_fromSurface(env_, surface), yuvReader_->GetNativeWindow());

void NDKCamera::CreateSession(ANativeWindow* previewWindow, ANativeWindow* yuvWindow) {
    // Create output from this app's ANativeWindow, and add into output container
    requests_[PREVIEW_REQUEST_IDX].outputNativeWindow_ = previewWindow;
    requests_[PREVIEW_REQUEST_IDX].template_ = TEMPLATE_PREVIEW;
    requests_[YUV_REQUEST_IDX].outputNativeWindow_ = yuvWindow;
    requests_[YUV_REQUEST_IDX].template_ = TEMPLATE_PREVIEW;

    CALL_CONTAINER(create(&outputContainer_));
    for (auto& req : requests_) {
        if (!req.outputNativeWindow_) continue;
        ANativeWindow_acquire(req.outputNativeWindow_);
        CALL_OUTPUT(create(req.outputNativeWindow_, &req.sessionOutput_));
        CALL_CONTAINER(add(outputContainer_, req.sessionOutput_));
        CALL_TARGET(create(req.outputNativeWindow_, &req.target_));
        CALL_DEV(createCaptureRequest(cameras_[activeCameraId_].device_,
                                      req.template_, &req.request_));
        CALL_REQUEST(addTarget(req.request_, req.target_));
    }

    // Create a capture session for the given preview request
    captureSessionState_ = CaptureSessionState::READY;
    CALL_DEV(createCaptureSession(cameras_[activeCameraId_].device_,
                                  outputContainer_, GetSessionListener(),
                                  &captureSession_));
}
Then start the preview:
void NDKCamera::StartPreview(bool start) {
    if (start) {
        ACaptureRequest* requests[] = { requests_[PREVIEW_REQUEST_IDX].request_,
                                        requests_[YUV_REQUEST_IDX].request_ };
        CALL_SESSION(setRepeatingRequest(captureSession_, nullptr, 2,
                                         requests,
                                         nullptr));
    } else if (!start && captureSessionState_ == CaptureSessionState::ACTIVE) {
        ACameraCaptureSession_stopRepeating(captureSession_);
    }
}
I set two requests in setRepeatingRequest: one for the TextureView display, and the other for receiving the preview frames in C++.
Now, the problem is that after setting two outputs, the preview performance goes down (it looks like a slideshow), which doesn't happen in Java:
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                // The camera is already closed
                if (null == mCameraDevice) {
                    return;
                }
                mCaptureSession = cameraCaptureSession;
                startPreview();
            }

            @Override
            public void onConfigureFailed(
                    @NonNull CameraCaptureSession cameraCaptureSession) {
                showToast("Failed");
            }
        }, null
);
I also tried one request with two output targets, but that code caused the screen to freeze:
void NDKCamera::CreateSession(ANativeWindow* textureViewWindow, ANativeWindow* imgReaderWindow) {
    auto& req = requests_[PREVIEW_REQUEST_IDX];
    req.outputNativeWindow_ = textureViewWindow;
    req.yuvWindow = imgReaderWindow;
    req.template_ = TEMPLATE_PREVIEW;

    ACaptureSessionOutputContainer_create(&outputContainer_);
    CALL_DEV(createCaptureRequest(cameras_[activeCameraId_].device_,
                                  req.template_, &req.request_));

    // Add the texture view surface to the container
    ANativeWindow_acquire(req.outputNativeWindow_);
    CALL_OUTPUT(create(req.outputNativeWindow_, &req.sessionOutput_));
    CALL_CONTAINER(add(outputContainer_, req.sessionOutput_));
    CALL_TARGET(create(req.outputNativeWindow_, &req.target_));
    CALL_REQUEST(addTarget(req.request_, req.target_));

    // Add the image reader surface to the container
    ANativeWindow_acquire(req.yuvWindow);
    CALL_OUTPUT(create(req.yuvWindow, &req.yuvOutput));
    CALL_CONTAINER(add(outputContainer_, req.yuvOutput));
    CALL_TARGET(create(req.yuvWindow, &req.yuvTarget));
    CALL_REQUEST(addTarget(req.request_, req.yuvTarget));

    captureSessionState_ = CaptureSessionState::READY;
    ACameraDevice_createCaptureSession(cameras_[activeCameraId_].device_,
                                       outputContainer_, GetSessionListener(),
                                       &captureSession_);
}

void NDKCamera::StartPreview(bool start) {
    if (start) {
        ACaptureRequest* requests[] = { requests_[PREVIEW_REQUEST_IDX].request_ };
        ACameraCaptureSession_setRepeatingRequest(captureSession_, nullptr, 1,
                                                  requests,
                                                  nullptr);
    } else if (!start && captureSessionState_ == CaptureSessionState::ACTIVE) {
        ACameraCaptureSession_stopRepeating(captureSession_);
    }
}
Here is the log:
2021-12-14 08:42:20.316 24536-24556/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 13, request ID 0, subseq ID 0
2021-12-14 08:42:21.319 24536-24556/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 14, request ID 0, subseq ID 0
2021-12-14 08:42:22.321 24536-24584/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 15, request ID 0, subseq ID 0
2021-12-14 08:42:23.323 24536-24584/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 16, request ID 0, subseq ID 0
2021-12-14 08:42:24.325 24536-24556/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 17, request ID 0, subseq ID 0
2021-12-14 08:42:25.328 24536-24584/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 18, request ID 0, subseq ID 0
2021-12-14 08:42:26.330 24536-24584/com.sample.textureview D/ACameraDevice: Device error received, code 3, frame number 19, request ID 0, subseq ID 0
Does anybody know why? Thanks!
I don't know how you've set up your Java code by comparison, but what you're doing in the NDK code will drop your frame rate by half. If you want to get both preview frames and frames to the native ImageReader at 30fps, you need to include both targets in a single capture request, not alternate between two capture requests that each target just one output. The latter will get you 15fps to each output at best.
So just create one request, and call addTarget twice on it with both the preview and the YUV windows. There are limits on how many targets you can add to a single request, but generally that's equal to the number of targets you can configure in a single session, which depends on the hardware capability of the device, and the resolution of each output.
2 streams, one preview and one app-bound YUV, should always work, however.
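For reference, this is what the single-request, two-target approach looks like on the Java side; a rough sketch only, where surface, mCameraDevice, mImageReader and mCaptureSession come from the question's Java snippet, and previewSurface/mBackgroundHandler stand in for whatever names your code uses:
    try {
        CaptureRequest.Builder builder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(surface);                    // the TextureView's preview surface
        builder.addTarget(mImageReader.getSurface());  // the app-bound YUV stream
        // One repeating request feeding both outputs at full frame rate.
        mCaptureSession.setRepeatingRequest(builder.build(), null, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
The NDK equivalent is the same shape: one ACaptureRequest, two ACameraOutputTarget addTarget calls, and a single setRepeatingRequest.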

Display stream of byte array frames to textureview

I am trying to display a stream of frames received over the network in a TextureView. My pipeline is as follows:
Receive video using GStreamer. I am using the NDK, and the GStreamer code is in C. I use a JNI callback to send individual frames received in appsink from C to Java. I do not want to use ANativeWindow from within the NDK to draw to the surface, as is done in the GStreamer Tutorial-3 example app.
In Java, these frames are added to an ArrayBlockingQueue. A separate thread pulls from this queue.
The following is that callback; the pullFromQueue thread stays alive as long as the app is running. The byte[] frame is an NV21-format frame of known width and height.
@DebugLog
private void pullFromQueueMethod() {
    try {
        long start = System.currentTimeMillis();
        byte frame[] = framesQueue.take();
    }
From here, I would like to use OpenGL to alter brightness and contrast and apply shaders to individual frames. Performance is of utmost concern to me, so I cannot convert the byte[] to a Bitmap and then display it in a SurfaceView; I have tried this and it takes nearly 50 ms for a 768x576 frame on a Nexus 5.
Surprisingly, I cannot find an example anywhere that does this. All examples use either the Camera or MediaPlayer built-in functions to direct their preview to a surface/texture, for example camera.setPreviewTexture(surfaceTexture);. This links the output to a SurfaceTexture, so you never have to handle displaying individual frames (never have to deal with byte arrays).
What I have attempted so far:
I have seen this answer on StackOverflow. It suggests using Grafika's createImageTexture(). Once I receive a texture handle, how do I pass it to the SurfaceTexture and continuously update it? Here is partial code of what I've implemented so far:
public class CameraActivity extends AppCompatActivity implements TextureView.SurfaceTextureListener {
    int textureId = -1;
    SurfaceTexture surfaceTexture;
    TextureView textureView;
    ...

    protected void onCreate(Bundle savedInstanceState) {
        textureView = new TextureView(this);
        textureView.setSurfaceTextureListener(this);
    }

    private void pullFromQueueMethod() {
        try {
            long start = System.currentTimeMillis();
            byte frame[] = framesQueue.take();
            if (textureId == -1) {
                textureId = GlUtil.createImageTexture(frame);
                surfaceTexture = new SurfaceTexture(textureId);
                textureView.setSurfaceTexture(surfaceTexture);
            } else {
                GlUtil.updateImageTexture(textureId); // self-defined method that doesn't create a new texture, but calls GLES20.glTexImage2D() to update that texture
            }
            surfaceTexture.updateTexImage();
            /* What do I do from here? Is this the right approach? */
        }
    }
To sum up: all I really need is an efficient way to display a stream of frames (byte arrays). How do I achieve this?

Get frames from camera's phone in android

I would like to get frames from the phone's camera. I tried capturing a video and used MATLAB to find the frames per second of that video; I got 250 frames per 10 seconds. But when I use
public void onPreviewFrame(byte[] data, Camera camera) {}
on Android, I only get 70 frames per 10 seconds.
Do you know why? My code is below:
private Camera.PreviewCallback previewCallBack = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        System.out.println("Get frame " + frameNumber);
        if (data == null)
            throw new NullPointerException();
        Camera.Parameters p = camera.getParameters();
        Camera.Size size = p.getPreviewSize();
        if (frameNumber == 0) {
            startTime = System.currentTimeMillis();
        }
        // Log.e("GetData", "Get frame " + frameNumber);
        frameNumber++;
        camera.addCallbackBuffer(data);
    }
}
That's true; the Android video recorder does not use Camera.PreviewCallback, and it may be much faster than what you get with Java callbacks. The reason is that it can send the video frame from the camera to the hardware encoder inside the kernel, without ever putting the pixels into user space.
However, I have reliably achieved 30 FPS in Java on advanced devices, like the Nexus 4 or Galaxy S3. The secrets are: avoid garbage collection by using Camera.setPreviewCallbackWithBuffer(), and push the callbacks off the UI thread by using a HandlerThread.
Naturally, the preview callback itself should be optimized as thoroughly as possible. In your sample, the call to camera.getParameters() is slow and can be avoided. No allocations (new) should be made.
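A minimal sketch of those two optimizations together might look like this (assumed setup, not the poster's exact code): the camera is opened on a HandlerThread so its callbacks arrive on that thread's looper rather than the UI thread, and a couple of pre-allocated buffers are recycled so the callback never allocates:
    HandlerThread cameraThread = new HandlerThread("CameraPreview");
    cameraThread.start();
    new Handler(cameraThread.getLooper()).post(new Runnable() {
        @Override
        public void run() {
            Camera camera = Camera.open();
            Camera.Parameters p = camera.getParameters();   // read once, not per frame
            Camera.Size size = p.getPreviewSize();
            int bufferSize = size.width * size.height
                    * ImageFormat.getBitsPerPixel(p.getPreviewFormat()) / 8;
            // Pre-register buffers so onPreviewFrame never triggers garbage collection.
            camera.addCallbackBuffer(new byte[bufferSize]);
            camera.addCallbackBuffer(new byte[bufferSize]);
            camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
                @Override
                public void onPreviewFrame(byte[] data, Camera cam) {
                    // ... process data here, allocating nothing ...
                    cam.addCallbackBuffer(data);            // return the buffer to the queue
                }
            });
            // set a preview display or SurfaceTexture here as usual, then:
            camera.startPreview();
        }
    });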

Getting the raw RGB data of the android camera

I'm working on a camera app on Android. I'm currently taking my capture with the JPEG callback. I'd like to know if there's a way to get at the raw data of the capture. I know there is a raw callback for the capture, but it always returns null.
So, from the JPEG callback, can I get access to the raw data (a succession of RGB pixels)?
EDIT:
So, from the JPEG callback, can I get access to the raw data (a succession of YUV pixels)?
I was successfully able to get a "raw" (YUV422) picture with Android 5.1 running on an RK3288.
Three steps to get the YUV image:
Initialize the buffer
Call addRawImageCallbackBuffer by reflection
Get the YUV picture in the dedicated callback
Code sample:
val weight = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8
val bytes = ByteArray(weight)
val camera = android.hardware.Camera.open()

try {
    val addRawImageCallbackBuffer = camera.javaClass
        .getDeclaredMethod("addRawImageCallbackBuffer", bytes.javaClass)
    addRawImageCallbackBuffer.invoke(camera, bytes)
} catch (e: Exception) {
    Log.e("RNG", "Error", e)
}

...

camera.takePicture(null, { data, camera ->
    val file = File("/sdcard/output.jpg")
    file.createNewFile()
    val yuv = YuvImage(data, ImageFormat.NV21, size.width, size.height, null)
    yuv.compressToJpeg(Rect(0, 0, size.width, size.height), 80, file.outputStream())
}, null)
Explanation
The Camera.takePicture() method takes a callback for raw as second parameter.
camera.takePicture(shutterCallback, rawCallback, jpegCallback);
This callback will return a null byte array unless I explicitly add a buffer for the raw image first.
So, you're supposed to call camera.addRawImageCallbackBuffer for this purpose.
Nevertheless, the method is not available (public but not exported, so you cannot call it directly).
Fortunately, the code sample demonstrates how to force-call this method by reflection.
This makes the raw callback deliver a consistent YUV picture in its data parameter.

Android: Jpeg saved from camera looks corrupted

I'm writing an Android application that saves a JPEG snapshot from the camera when the user clicks a button. Unfortunately, the JPEG file my code saves looks corrupted. It appears to be caused by my call to parameters.setPreviewSize (see the code snippet below): if I remove that call, the image saves fine; however, without it I can't set the preview size, and setDisplayOrientation also appears to have no effect without it.
My app is targeting API Level 8 (Android 2.2), and I'm debugging on an HTC Desire HD. Not quite sure what I'm doing wrong here... any help would be very much appreciated!
Cheers,
Scottie
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    // Now that the size is known, set up the camera parameters and begin
    // the preview.
    Camera.Parameters parameters = mCamera.getParameters();
    Camera.Size size = getBestPreviewSize(w, h);
    // This next call is required in order for preview size to be set and
    // setDisplayOrientation to take effect...
    // Unfortunately it's also causing JPEG to be created wrong
    parameters.setPreviewSize(size.width, size.height);
    parameters.setPictureFormat(ImageFormat.JPEG);
    mCamera.setParameters(parameters);
    mCamera.setDisplayOrientation(90);
    mCamera.startPreview();
}
// This is the snapshot button event handler
public void onSnapshotButtonClick(View target) {
    // void android.hardware.Camera.takePicture(ShutterCallback shutter,
    //     PictureCallback raw, PictureCallback jpeg)
    mPreview.mCamera.takePicture(null, null, mPictureCallback);
}
// This saves the camera snapshot as a JPEG file on the SD card
Camera.PictureCallback mPictureCallback = new Camera.PictureCallback() {
    public void onPictureTaken(byte[] imageData, Camera c) {
        if (imageData != null) {
            FileOutputStream outStream = null;
            try {
                String myJpgPath = String.format(
                        "/sdcard/%d.jpg", System.currentTimeMillis());
                outStream = new FileOutputStream(myJpgPath);
                outStream.write(imageData);
                outStream.close();
                Log.d("TestApp", "onPictureTaken - wrote bytes: "
                        + imageData.length);
                c.startPreview();
                Toast.makeText(getApplicationContext(), String.format("%s written", myJpgPath), Toast.LENGTH_SHORT).show();
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
            }
        }
    }
};
Another workaround is to match the aspect ratio between preview and picture sizes (i.e. setPreviewSize(w1, h1); setPictureSize(w2, h2) with w1/h1 ~ w2/h2; small differences seem to be OK). E.g. for the Desire HD S, w1=800, h1=480, w2=2592, h2=1552 works, as does w1=960, h1=720, w2=2592, h2=1952 (if you don't mind distorted images ;-)
I assume that you are using a common implementation of the getBestPreviewSize(w,h) method that is floating about, where you cycle through the different getSupportedPreviewSizes() to find the best match. Although I am not certain as to why it causes the images to be distorted, I have found that calling the parameters.setPreviewSize(size.width, size.height) method with the output of the getBestPreviewSize method is what is causing the problem on the HTC Desire. I have also verified that by commenting it out, the distorted image issue goes away.
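Combining both answers, one sketch of a fix is to pick the supported preview size whose aspect ratio is closest to the chosen picture size instead of the one that best fits the surface dimensions; this is a hypothetical helper, not the standard getBestPreviewSize implementation:
    private Camera.Size getPreviewSizeMatchingPicture(Camera.Parameters params) {
        Camera.Size picture = params.getPictureSize();
        double targetRatio = (double) picture.width / picture.height;
        Camera.Size best = null;
        double bestDiff = Double.MAX_VALUE;
        // Choose the preview size whose aspect ratio deviates least from the picture's.
        for (Camera.Size s : params.getSupportedPreviewSizes()) {
            double diff = Math.abs((double) s.width / s.height - targetRatio);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }
You would then pass the result to parameters.setPreviewSize() in surfaceChanged() in place of the getBestPreviewSize(w, h) output.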
