I would like to get frames from the phone's camera. I captured a video and used MATLAB to measure its frame rate: I got 250 frames per 10 seconds. But when I use
public void onPreviewFrame(byte[] data, Camera camera) {}
on Android, I only get 70 frames per 10 seconds.
Do you know why? My code is below:
private Camera.PreviewCallback previewCallBack = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        System.out.println("Get frame " + frameNumber);
        if (data == null)
            throw new NullPointerException();
        Camera.Parameters p = camera.getParameters();
        Camera.Size size = p.getPreviewSize();
        if (frameNumber == 0) {
            startTime = System.currentTimeMillis();
        }
        // Log.e("GetData", "Get frame " + frameNumber);
        frameNumber++;
        camera.addCallbackBuffer(data);
    }
};
That's true: the Android video recorder does not use Camera.PreviewCallback, and it may be much faster than what you can get with Java callbacks. The reason is that it can send the video frames from the camera to the hardware encoder inside the kernel, without ever copying the pixels into user space.
However, I have reliably achieved 30 FPS in Java on advanced devices like the Nexus 4 or Galaxy S3. The secrets are: avoid garbage collection by using Camera.setPreviewCallbackWithBuffer(), and push the callbacks off the UI thread by using a HandlerThread.
Naturally, the preview callback itself should be optimized as thoroughly as possible. In your sample, the call to camera.getParameters() is slow and can be avoided, and no allocations (new) should be made inside the callback.
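For illustration, here is a minimal sketch of those two points (my own code, not the answerer's): pre-allocated buffers via setPreviewCallbackWithBuffer(), with the camera opened on a background HandlerThread so the callbacks are delivered there instead of on the UI thread. Names like previewThread are illustrative, and a real app still has to set a preview display or texture.
HandlerThread previewThread = new HandlerThread("CameraPreview");
previewThread.start();
Handler previewHandler = new Handler(previewThread.getLooper());

previewHandler.post(new Runnable() {
    @Override
    public void run() {
        // Callbacks are delivered to the looper of the thread that called open().
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        Camera.Size size = params.getPreviewSize();   // cache once, outside the callback
        int bufferSize = size.width * size.height
                * ImageFormat.getBitsPerPixel(params.getPreviewFormat()) / 8;

        // Hand the camera a few reusable buffers so no garbage is created per frame.
        for (int i = 0; i < 3; i++) {
            camera.addCallbackBuffer(new byte[bufferSize]);
        }
        camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                // ... process data, keeping this method as cheap as possible ...
                cam.addCallbackBuffer(data);   // return the buffer to the queue
            }
        });
        // A preview display or texture must still be set before startPreview().
        camera.startPreview();
    }
});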
I want to use MediaRecorder for recording video instead of MediaCodec, because it's much easier to use, as we know.
I also want to use OpenGL to process the frames while recording.
I use the example code from Grafika's ContinuousCaptureActivity sample to init the EGL rendering context, create a cameraTexture and pass it to the Camera2 API as a Surface https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/ContinuousCaptureActivity.java#L392
and to create an EGLSurface encoderSurface from our recorderSurface https://github.com/google/grafika/blob/master/app/src/main/java/com/android/grafika/ContinuousCaptureActivity.java#L418
and so on (processing frames as in the Grafika sample; everything is the same as in the Grafika example code).
When I start recording (MediaRecorder.start()), it records video fine as long as no audio source has been set.
But if audio recording is also enabled
mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC)
...
mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC)
Then the final video has a huge duration (length) and it's not really playable. So the MediaRecorder audio encoder ruins everything when a Surface is used as input and GLES is used for adding and processing frames.
I have no idea how to fix it.
Here's my code for processing frames (based on the Grafika sample; it's almost the same):
class GLCameraFramesRender(
private val width: Int,
private val height: Int,
private val callback: Callback,
recorderSurface: Surface,
private val eglCore: EglCore // must be a property: release() below uses it
) : OnFrameAvailableListener {
private val fullFrameBlit: FullFrameRect
private val textureId: Int
private val encoderSurface: WindowSurface
private val tmpMatrix = FloatArray(16)
private val cameraTexture: SurfaceTexture
val cameraSurface: Surface
init {
encoderSurface = WindowSurface(eglCore, recorderSurface, true)
encoderSurface.makeCurrent()
fullFrameBlit = FullFrameRect(Texture2dProgram(Texture2dProgram.ProgramType.TEXTURE_EXT))
textureId = fullFrameBlit.createTextureObject()
cameraTexture = SurfaceTexture(textureId)
cameraSurface = Surface(cameraTexture)
cameraTexture.setOnFrameAvailableListener(this)
}
fun release() {
cameraTexture.setOnFrameAvailableListener(null)
cameraTexture.release()
cameraSurface.release()
fullFrameBlit.release(false)
eglCore.release()
}
override fun onFrameAvailable(surfaceTexture: SurfaceTexture) {
if (callback.isRecording()) {
drawFrame()
} else {
cameraTexture.updateTexImage()
}
}
private fun drawFrame() {
cameraTexture.updateTexImage()
cameraTexture.getTransformMatrix(tmpMatrix)
GLES20.glViewport(0, 0, width, height)
fullFrameBlit.drawFrame(textureId, tmpMatrix)
encoderSurface.setPresentationTime(cameraTexture.timestamp)
encoderSurface.swapBuffers()
}
interface Callback {
fun isRecording(): Boolean
}
}
It's very likely your timestamps aren't in the same timebase. The media recording system generally wants timestamps in the uptimeMillis timebase, but many camera devices produce data in the elapsedRealtime timebase. One counts time when the device is in deep sleep, and the other doesn't; the longer it's been since you rebooted your device, the bigger the discrepancy becomes.
It wouldn't matter until you add in the audio, since MediaRecorder's internal audio timestamps will be in uptimeMillis, while the camera frame timestamps will come in as elapsedRealtime. A discrepancy of more than a few fractions of a second would probably be noticeable as a bad A/V sync; a few minutes or more will just mess everything up.
When the camera talks to the media recording stack directly, it adjusts timestamps automatically; since you've placed the GPU in the middle, that doesn't happen (since the camera doesn't know that's where your frames are going eventually).
You can check if the camera is using elapsedRealtime as the timebase via SENSOR_INFO_TIMESTAMP_SOURCE. But in any case, you have a few choices:
If the camera uses TIMESTAMP_SOURCE_REALTIME, measure the difference between the two timebases at the start of recording, and adjust the timestamps you feed into setPresentationTime accordingly (delta = elapsedRealtime - uptimeMillis; timestamp = timestamp - delta).
Just use uptimeMillis() * 1000000 as the time for setPresentationTime. This may cause too much A/V skew, but it's easy to try.
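As a rough sketch of the first option (my own code, not from the answer; it assumes Grafika's WindowSurface, a CameraCharacteristics object named characteristics, and is written in Java although the question's class is Kotlin): query the timestamp source, compute the offset between the two timebases once when recording starts, and subtract it from every camera timestamp before it reaches the encoder surface.
// Which timebase does the camera use?
Integer source = characteristics.get(CameraCharacteristics.SENSOR_INFO_TIMESTAMP_SOURCE);
boolean isRealtime = source != null
        && source == CameraMetadata.SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME;

// Offset between the two timebases, measured once when recording starts.
long timestampDeltaNs = isRealtime
        ? SystemClock.elapsedRealtimeNanos() - SystemClock.uptimeMillis() * 1_000_000L
        : 0L;

// Inside drawFrame(): adjust the SurfaceTexture timestamp before swapping buffers.
cameraTexture.updateTexImage();
long adjustedNs = cameraTexture.getTimestamp() - timestampDeltaNs;
// ... GLES drawing as before ...
encoderSurface.setPresentationTime(adjustedNs);
encoderSurface.swapBuffers();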
I am trying to build a camera app that can apply a filter to the camera frames (just for learning purposes). For that, I used the Camera2 API and OpenGL ES. I was able to apply a grayscale filter to the frames so that the preview was shown in grayscale. Now I wanted to record that filtered preview using MediaRecorder, so I looked at the following sample to see how MediaRecorder works with the Camera2 API (I just added the OpenGL ES part).
But when I record, it records the unfiltered preview, not the filtered one.
Here is a demonstration. This is what the camera preview looks like when the grayscale filter is on:
And this is what it looks like when I play the recorded video after it has been stored:
To me, it seems that MediaRecorder just takes the unfiltered/unprocessed frames and stores them.
Here are the relevant parts of my code:
// basically the same code from the link above
// here: mSurfaceTexture is the surface texture I created via glGenTextures()
public void startRecordingVideo() {
if (null == mCameraDevice || null == mCameraSize) {
return;
}
try {
closePreviewSession();
setUpMediaRecorder();
SurfaceTexture texture = mSurfaceTexture;
assert texture != null;
texture.setDefaultBufferSize(mCameraSize.getWidth(), mCameraSize.getHeight());
mCaptureRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
List<Surface> surfaces = new ArrayList<>();
// Set up Surface for the camera preview
Surface previewSurface = new Surface(texture);
surfaces.add(previewSurface);
mCaptureRequestBuilder.addTarget(previewSurface);
// Set up Surface for the MediaRecorder
Surface recorderSurface = mMediaRecorder.getSurface();
surfaces.add(recorderSurface);
mCaptureRequestBuilder.addTarget(recorderSurface);
// Start a capture session
// Once the session starts, we can update the UI and start recording
mCameraDevice.createCaptureSession(surfaces, mCameraCaptureSessionCallbackForTemplateRecord, mBackgroundHandler);
} catch (CameraAccessException | IOException e) {
e.printStackTrace();
}
}
The MediaRecorder part is also from the sample above:
private void setUpMediaRecorder() throws IOException {
final Activity activity = mActivity;
if (null == activity) {
return;
}
mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
if (mNextVideoAbsolutePath == null || mNextVideoAbsolutePath.isEmpty()) {
mNextVideoAbsolutePath = getVideoFilePath(mActivity);
}
mMediaRecorder.setOutputFile(mNextVideoAbsolutePath);
mMediaRecorder.setVideoEncodingBitRate(10000000);
mMediaRecorder.setVideoFrameRate(30);
mMediaRecorder.setVideoSize(mVideoSize.getWidth(), mVideoSize.getHeight());
mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
switch (mSensorOrientation) {
case SENSOR_ORIENTATION_DEFAULT_DEGREES:
mMediaRecorder.setOrientationHint(DEFAULT_ORIENTATIONS.get(rotation));
break;
case SENSOR_ORIENTATION_INVERSE_DEGREES:
mMediaRecorder.setOrientationHint(INVERSE_ORIENTATIONS.get(rotation));
break;
}
mMediaRecorder.prepare();
}
So, how can I tell MediaRecorder to use the filtered/processed frames? Is that possible?
What I tried was to call setInputSurface() on the MediaRecorder instance, passing it the previewSurface variable (I first turned that variable into a field, of course, so that I could also use it in the setUpMediaRecorder() method). But I got an error indicating that this was not a persistent surface. The documentation for setInputSurface() states that a persistent surface should be used (whatever that means).
I hope someone can help.
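As an aside on the persistent-surface error mentioned in the question, here is a minimal sketch (mine, assuming API 23+) of what setInputSurface() expects: a surface created with MediaCodec.createPersistentInputSurface(), set before prepare(). A plain Surface built from a SurfaceTexture will be rejected.
Surface persistentSurface = MediaCodec.createPersistentInputSurface();

mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
// ... the other setXxx() calls as in setUpMediaRecorder() ...
mMediaRecorder.setInputSurface(persistentSurface);   // instead of calling getSurface() later
mMediaRecorder.prepare();
// The camera (or the GL renderer) must then draw into persistentSurface
// for its frames to end up in the recording.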
You cannot use MediaRecorder to work with such a stream, because it can either take input directly from a camera (in which case you have no control over the frames until you stop recording) or record from a Surface.
Well, in principle you could receive the color frames from the camera, convert them to grayscale, draw the result on a Surface and connect this Surface to a MediaRecorder, similar to how the Camera2Video example implements slow-motion recording.
Better still, compress the grayscale frames with MediaCodec and store the resulting encoded frames in a video file with MediaMuxer, similar to a camera recording example.
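To make the MediaCodec + MediaMuxer route more concrete, here is a rough sketch (mine, not the answerer's code; width, height, outputPath and the finished flag are assumed to exist, and EGL setup, audio and error handling are omitted): create an H.264 encoder with a Surface input, render the filtered frames into that surface with GLES, and drain the encoded output into MediaMuxer. Grafika's VideoEncoderCore implements essentially the same loop.
MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
format.setInteger(MediaFormat.KEY_BIT_RATE, 10_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
Surface encoderInputSurface = encoder.createInputSurface();  // draw the filtered frames here with GLES
encoder.start();

MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int trackIndex = -1;
boolean muxerStarted = false;
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();

// Drain loop, typically run on its own thread while frames are being rendered.
// When recording ends, call encoder.signalEndOfInputStream() and drain until EOS.
while (!finished) {
    int index = encoder.dequeueOutputBuffer(info, 10_000);
    if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        trackIndex = muxer.addTrack(encoder.getOutputFormat());
        muxer.start();
        muxerStarted = true;
    } else if (index >= 0) {
        ByteBuffer encoded = encoder.getOutputBuffer(index);
        if (muxerStarted && info.size > 0
                && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
            muxer.writeSampleData(trackIndex, encoded, info);
        }
        encoder.releaseOutputBuffer(index, false);
    }
}
encoder.stop(); encoder.release();
muxer.stop(); muxer.release();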
I am trying to display a stream of frames received over network and display them to TextureView. My pipeline is as follows:
Receive video using GStreamer. I am using the NDK; the GStreamer code is in C. I use a JNI callback to send the individual frames received in appsink from C to Java. I do not want to use ANativeWindow from within the NDK to display to a surface, as is done in the GStreamer Tutorial-3 example app.
In Java, these frames are added to an ArrayBlockingQueue. A separate thread pulls from this queue.
The following is the callback; the pullFromQueue thread stays alive as long as the app is running. The byte[] frame is an NV21-format frame of known width and height.
@DebugLog
private void pullFromQueueMethod() {
    try {
        long start = System.currentTimeMillis();
        byte[] frame = framesQueue.take();
        // ... (frame handling elided)
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
From here, I would like to use OpenGL to alter brightness, contrast and apply shaders to individual frames. Performance is of utmost concern to me and hence I cannot convert byte[] to Bitmap and then display to a SurfaceView. I have tried this and it takes nearly 50ms for a 768x576 frame on Nexus 5.
Surprisingly, I cannot find an example anywhere that does the same. All examples use either the Camera or MediaPlayer built-in functions to direct their preview to a surface/texture, for example camera.setPreviewTexture(surfaceTexture). This links the output to a SurfaceTexture, so you never have to handle displaying individual frames (never have to deal with byte arrays).
What I have attempted so far :
I have seen this answer on Stack Overflow. It suggests using Grafika's createImageTexture(). Once I receive a texture handle, how do I pass it to a SurfaceTexture and continuously update it? Here is partial code of what I've implemented so far:
public class CameraActivity extends AppCompatActivity implements TextureView.SurfaceTextureListener {
int textureId = -1;
SurfaceTexture surfaceTexture;
TextureView textureView;
...
protected void onCreate(Bundle savedInstanceState) {
textureView = new TextureView(this);
textureView.setSurfaceTextureListener(this);
}
private void pullFromQueueMethod() {
try {
long start = System.currentTimeMillis();
byte frame[] = framesQueue.take();
if (textureId == -1){
textureId = GlUtil.createImageTexture(frame);
surfaceTexture = new SurfaceTexture(textureId);
textureView.setSurfaceTexture(surfaceTexture);
} else {
GlUtil.updateImageTexture(textureId); // self defined method that doesn't create a new texture, but calls GLES20.glTexImage2D() to update that texture
}
surfaceTexture.updateTexImage();
/* What do I do from here? Is this the right approach? */
}
}
To sum up. All I really need is an efficient manner to display a stream of frames (byte arrays). How do I achieve this?
The following code is tested on the HTC Desire S, the Galaxy S II and the emulator. It works fine, but surprisingly it doesn't work on the Galaxy S Duos (GT-S7562). What happens is that all calls succeed without exceptions, but the callbacks are never called.
public class CameraManager implements PictureCallback {
private final static String DEBUG_TAG = "CameraManager";
public void TakePicture() {
try {
_camera = Camera.open(cameraId);
Log.d(DEBUG_TAG, "Camera.TakePicture.open");
SurfaceView view = new SurfaceView(CameraManager.this.getContext());
_camera.setPreviewDisplay(view.getHolder());
Log.d(DEBUG_TAG, "Camera.TakePicture.setPreviewDisplay");
_camera.startPreview();
Log.d(DEBUG_TAG, "Camera.TakePicture.startPreview");
AudioManager manager = (AudioManager) CameraManager.super.getContext().getSystemService(Context.AUDIO_SERVICE);
Log.d(DEBUG_TAG, "Camera.TakePicture.AudioManager.ctor()");
manager.setStreamVolume(AudioManager.STREAM_SYSTEM, 0 , AudioManager.FLAG_REMOVE_SOUND_AND_VIBRATE);
Log.d(DEBUG_TAG, "Camera.TakePicture.setStreamVolume");
Camera.ShutterCallback shutter = new Camera.ShutterCallback() {
@Override
public void onShutter() {
AudioManager manager = (AudioManager) CameraManager.super.getContext().getSystemService(Context.AUDIO_SERVICE);
Log.d(DEBUG_TAG, "Camera.TakePicture.Shutter.AudioManager.ctor()");
manager.setStreamVolume(AudioManager.STREAM_SYSTEM, manager.getStreamMaxVolume(AudioManager.STREAM_SYSTEM) , AudioManager.FLAG_ALLOW_RINGER_MODES);
Log.d(DEBUG_TAG, "Camera.TakePicture.Shutter.setStreamVolume");
}
};
Camera.PictureCallback rawCallback = new Camera.PictureCallback() {
@Override
public void onPictureTaken(byte[] data, Camera camera) {
if (data != null) {
Log.i(DEBUG_TAG, "Picture taken::RAW");
_camera.stopPreview();
_camera.release();
} else {
Log.wtf(DEBUG_TAG, "Picture NOT taken::RAW");
}
}
};
_camera.takePicture(shutter, rawCallback, CameraManager.this);
Log.d(DEBUG_TAG, "Camera.TakePicture.taken");
} catch (Exception err) {
err.printStackTrace();
Log.d(DEBUG_TAG, "Camera.TakePicture.Exception:: %s" + err.getMessage());
}
}
@Override
public void onPictureTaken(byte[] data, Camera camera) {
if (data != null) {
Log.i(DEBUG_TAG, "Picture taken::JPG");
_camera.stopPreview();
_camera.release();
} else {
Log.wtf(DEBUG_TAG, "Picture NOT taken::JPG");
}
}
}
Here's the logcat output for the execution of the above code. As you can see, the callbacks are not called:
[ 10-16 01:39:18.711 3873:0xf21 D/CameraManager ]
Camera.TakePicture.open
[ 10-16 01:39:18.891 3873:0xf21 D/CameraManager ]
Camera.TakePicture.setFrontCamera
[ 10-16 01:39:18.901 3873:0xf21 D/CameraManager ]
Camera.TakePicture.setPreviewDisplay
[ 10-16 01:39:18.901 3873:0xf21 D/CameraManager ]
Camera.TakePicture.startPreview
[ 10-16 01:39:18.901 3873:0xf21 D/CameraManager ]
Camera.TakePicture.AudioManager.ctor()
[ 10-16 01:39:19.001 3873:0xf21 D/CameraManager ]
Camera.TakePicture.setStreamVolume
[ 10-16 01:39:19.041 3873:0xf21 D/CameraManager ]
Camera.TakePicture.taken
I have also checked SO for similar problems with the Galaxy S and found the following code; I used it with no success:
Camera.Parameters parameters = camera.getParameters();
parameters.set("camera-id", 2);
// (800, 480) is also supported front camera preview size at Samsung Galaxy S.
parameters.setPreviewSize(640, 480);
camera.setParameters(parameters);
I was wondering if anyone could tell me what's wrong with my code, or whether there is some limitation on this model that doesn't allow taking pictures without showing a preview surface. If so, could you please let me know of any possible workaround? Note that this code is executed from an Android service.
The documentation is explicit: you must start the preview if you want to take a picture. From your code, it is not clear why the preview surface is not showing. IIRC, since Honeycomb you cannot play with the preview surface coordinates to move it off screen, but you can usually hide the preview surface behind some image view.
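One common variant of that idea, sketched below under assumptions (a 1x1-pixel overlay SurfaceView added from the service via the WindowManager, which typically needs the SYSTEM_ALERT_WINDOW permission), gives startPreview() a real surface without a visible preview. Note that it waits for surfaceCreated() before attaching the camera, which the question's code does not do.
WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
SurfaceView dummy = new SurfaceView(context);
WindowManager.LayoutParams lp = new WindowManager.LayoutParams(
        1, 1,
        WindowManager.LayoutParams.TYPE_SYSTEM_OVERLAY,
        WindowManager.LayoutParams.FLAG_NOT_FOCUSABLE
                | WindowManager.LayoutParams.FLAG_NOT_TOUCHABLE,
        PixelFormat.TRANSLUCENT);
wm.addView(dummy, lp);

dummy.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            _camera.setPreviewDisplay(holder);   // the surface actually exists now
            _camera.startPreview();
            // safe to call takePicture() once the preview is running
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {}
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {}
});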
Camera.takePicture with a rawCallback requires calling addRawImageCallbackBuffer
(I ran into the problem too and had to go to the source code to figure it out.) When Camera.takePicture() is called with a non-null second argument (the raw PictureCallback), the user must call Camera.addRawImageCallbackBuffer() at least once before takePicture() to start the supply of buffers for the data to be returned in. If this is not done, the image is discarded (and apparently the callbacks are not called).
This is a block comment from android.hardware.Camera.java for addRawImageCallbackBuffer():
Adds a pre-allocated buffer to the raw image callback buffer queue. Applications can add one or more buffers to the queue. When a raw image frame arrives and there is still at least one available buffer, the buffer will be used to hold the raw image data and removed from the queue. Then the raw image callback is invoked with the buffer. If a raw image frame arrives but there is no buffer left, the frame is discarded. Applications should add buffers back when they finish processing the data in them by calling this method again, in order to avoid running out of raw image callback buffers.
The size of the buffer is determined by multiplying the raw image width, height, and bytes per pixel. The width and height can be read from {@link Camera.Parameters#getPictureSize()}. Bytes per pixel can be computed from {@link android.graphics.ImageFormat#getBitsPerPixel(int)} / 8, using the image format from {@link Camera.Parameters#getPreviewFormat()}.
This method is only necessary when the PictureCallback for the raw image is used while calling {@link #takePicture(Camera.ShutterCallback, Camera.PictureCallback, Camera.PictureCallback, Camera.PictureCallback)}.
Please note that by calling this method, the mode for application-managed callback buffers is triggered. If this method has never been called, null will be returned by the raw image callback since there is no image callback buffer available. Furthermore, when a supplied buffer is too small to hold the raw image data, the raw image callback will return null and the buffer will be removed from the buffer queue.
@param callbackBuffer the buffer to add to the raw image callback buffer queue. The size should be width * height * (bits per pixel) / 8. A null callbackBuffer will be ignored and won't be added to the queue.
@see #takePicture(Camera.ShutterCallback, Camera.PictureCallback, Camera.PictureCallback, Camera.PictureCallback)
Try your code with the 'raw' callback argument to takePicture() set to null.
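For reference, a hedged sketch of both options against the question's code (mine, not the answerer's): either drop the raw callback, or supply a buffer of the size described in the quoted comment. Note that addRawImageCallbackBuffer() does not appear in the public SDK reference, so it may not be callable from a normal app build.
// Option 1: don't request the raw image at all; only the JPEG callback fires.
_camera.takePicture(shutter, null, CameraManager.this);

// Option 2: if the raw image is really needed, queue a buffer of the documented
// size before takePicture(): pictureWidth * pictureHeight * bitsPerPixel / 8.
Camera.Parameters p = _camera.getParameters();
Camera.Size pictureSize = p.getPictureSize();
int bitsPerPixel = ImageFormat.getBitsPerPixel(p.getPreviewFormat());
byte[] rawBuffer = new byte[pictureSize.width * pictureSize.height * bitsPerPixel / 8];
// _camera.addRawImageCallbackBuffer(rawBuffer);   // hidden on many SDK levels
_camera.takePicture(shutter, rawCallback, CameraManager.this);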
I'm writing an Android application that saves a JPEG snapshot from the camera when the user clicks a button. Unfortunately, the JPEG file my code saves looks corrupted. It appears to be caused by my call to parameters.setPreviewSize (see the code snippet below): if I remove it, the image saves fine; however, without it I can't set the preview size, and setDisplayOrientation also appears to have no effect without it.
My app is targeting API Level 8 (Android 2.2), and I'm debugging on an HTC Desire HD. Not quite sure what I'm doing wrong here... any help would be very much appreciated!
Cheers,
Scottie
public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
// Now that the size is known, set up the camera parameters and begin
// the preview.
Camera.Parameters parameters = mCamera.getParameters();
Camera.Size size = getBestPreviewSize(w,h);
// This next call is required in order for preview size to be set and
// setDisplayOrientation to take effect...
// Unfortunately it's also causing JPEG to be created wrong
parameters.setPreviewSize(size.width, size.height);
parameters.setPictureFormat(ImageFormat.JPEG);
mCamera.setParameters(parameters);
mCamera.setDisplayOrientation(90);
mCamera.startPreview();
}
// This is the snapshot button event handler
public void onSnapshotButtonClick(View target) {
//void android.hardware.Camera.takePicture(ShutterCallback shutter,
// PictureCallback raw, PictureCallback jpeg)
mPreview.mCamera.takePicture(null, null, mPictureCallback);
}
// This saves the camera snapshot as a JPEG file on the SD card
Camera.PictureCallback mPictureCallback = new Camera.PictureCallback() {
public void onPictureTaken(byte[] imageData, Camera c) {
if (imageData != null) {
FileOutputStream outStream = null;
try {
String myJpgPath = String.format(
"/sdcard/%d.jpg", System.currentTimeMillis());
outStream = new FileOutputStream(myJpgPath);
outStream.write(imageData);
outStream.close();
Log.d("TestApp", "onPictureTaken - wrote bytes: "
+ imageData.length);
c.startPreview();
Toast.makeText(getApplicationContext(), String.format("%s written", myJpgPath), Toast.LENGTH_SHORT).show();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} finally {
}
}
}
};
Another workaround is to match the aspect ratio between the preview and picture sizes (i.e. setPreviewSize(w1,h1); setPictureSize(w2,h2) with w1/h1 ~ w2/h2; small differences seem to be OK). E.g. for the Desire HD, w1=800, h1=480, w2=2592, h2=1552 works, as does w1=960, h1=720, w2=2592, h2=1952 (if you don't mind distorted images ;-)
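A small sketch of that idea (mine, reusing the question's mCamera, w, h and getBestPreviewSize() from surfaceChanged()): after choosing the preview size, pick the largest supported picture size whose aspect ratio roughly matches it before calling setParameters().
Camera.Parameters params = mCamera.getParameters();
Camera.Size preview = getBestPreviewSize(w, h);
params.setPreviewSize(preview.width, preview.height);

double targetRatio = (double) preview.width / preview.height;
Camera.Size bestPicture = null;
for (Camera.Size s : params.getSupportedPictureSizes()) {
    double ratio = (double) s.width / s.height;
    if (Math.abs(ratio - targetRatio) < 0.05
            && (bestPicture == null || s.width > bestPicture.width)) {
        bestPicture = s;   // largest picture size whose ratio matches the preview
    }
}
if (bestPicture != null) {
    params.setPictureSize(bestPicture.width, bestPicture.height);
}
mCamera.setParameters(params);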
I assume that you are using a common implementation of the getBestPreviewSize(w,h) method that is floating about, where you cycle through the different getSupportedPreviewSizes() to find the best match. Although I am not certain as to why it causes the images to be distorted, I have found that calling the parameters.setPreviewSize(size.width, size.height) method with the output of the getBestPreviewSize method is what is causing the problem on the HTC Desire. I have also verified that by commenting it out, the distorted image issue goes away.