In Android I use these lines of code to create a QR code, and everything works: it generates a QR code that my app can read, with the correct data. I'm trying to replicate the same code in Flutter, but although it generates a QR code, it doesn't work; it is not detected. Does it perhaps contain the wrong data?
qrCodeWriter = new QRCodeWriter();
try {
JSONObject jsonObject = new JSONObject();
jsonObject.put("type", 1);
jsonObject.put("content", email); // "esertest#clikapptest.it"
String display_name = ContentManager.getInstance(getContext()).getThisAttivita();
if (display_name != null) {
jsonObject.put("display_name", display_name);
} else
jsonObject.put("display_name", ContentManager.getInstance(getContext()).getMerchantData().getIdentifier());
if (scope.equals(Enums.SCOPES.COLLABORATORE.name()))
jsonObject.put("collaborator", DBHelper.getInstance(getActivity()).getSettingsField(DBHelper.SETTINGS_TABLE_FIELD_EMAIL));
jsonObject.put("mac", it.clikapp.toduba.network.Utils.getMacAddress(getContext()));
// BitMatrix bitMatrix = qrCodeWriter.encode(new String(Base64.encode(jsonObject.toString().getBytes(), Base64.DEFAULT)), BarcodeFormat.QR_CODE, iv_qr_code.getWidth(), iv_qr_code.getHeight(), hintsMap);
BitMatrix bitMatrix = qrCodeWriter.encode(Utils.encryptQRCode(ContentManager.getInstance(getContext()).getOauth().getQrCodeKey(), jsonObject.toString()), BarcodeFormat.QR_CODE, width, height, hintsMap);
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
for (int x = 0; x < width; x++) {
for (int y = 0; y < height; y++) {
//bitmap.setPixel(x, y, bitMatrix.get(x, y) ? Color.BLACK : Color.WHITE);//guest_pass_background_color
// bitmap.setPixel(x, y, bitMatrix.get(x, y) ? Color.WHITE : ContextCompat.getColor(getActivity(), R.color.colorPrimary));
if (getActivity() != null && !getActivity().isFinishing())
bitmap.setPixel(x, y, bitMatrix.get(x, y) ? Color.BLACK : ContextCompat.getColor(getActivity(), R.color.colorPrimary));
}
}
Flutter Version:
Map<String, dynamic> myData = {
'type': 1,
'content': connection.email, //TODO check
'display_name': "${(await contentManager.getUserInfo()).identifier}",
//'collaborator': connection.name, //TODO check
'mac': await Utilities.getDeviceMac(),
};
String encodedJson = jsonEncode(myData);
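Note that the Android code above never encodes the plain JSON string: it passes jsonObject.toString() through Utils.encryptQRCode(...) with the key from getOauth().getQrCodeKey() (the commented-out variant Base64-encodes it instead), while the Flutter snippet stops at jsonEncode(myData). For comparison, a minimal sketch of the payload pipeline the Android side applies, using only names that already appear in the question's code (the encryption helper itself is app-specific):
String json = jsonObject.toString();
// what actually goes into the QR code on Android:
String payload = Utils.encryptQRCode(ContentManager.getInstance(getContext()).getOauth().getQrCodeKey(), json);
BitMatrix bitMatrix = qrCodeWriter.encode(payload, BarcodeFormat.QR_CODE, width, height, hintsMap);
// the Flutter version presumably needs to produce this same payload rather than the raw JSON.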
You can use a Flutter plugin to add QR scanning capabilities to your app.
Refer to this link.
Future _scanBytes() async {
try {
String barcode = await scanner.scan();
setState(() => this.barcode = barcode);
} on PlatformException catch (e) {
if (e.code == scanner.CameraAccessDenied) {
setState(() {
this.barcode = 'The user did not grant the camera permission!';
});
} else {
setState(() => this.barcode = 'Unknown error: $e');
}
} on FormatException {
setState(() => this.barcode = 'null (User returned using the "back"-button before scanning anything. Result)');
} catch (e) {
setState(() => this.barcode = 'Unknown error: $e');
}
print("barcode "+barcode);
}
I'm trying to get the Android camera2 API running in a background service and then process each frame in the ImageReader.OnImageAvailableListener callback. I already use the suggested raw format YUV_420_888 to get maximum fps, yet I only get around 7 fps at 640x480. This is even slower than what I get with the old Camera interface (I want to upgrade to camera2 to get a higher frame rate) or with OpenCV's JavaCameraView (which I can't use because I need to run the processing in a background service).
Below is my service class. What am I missing?
My phone is a Redmi Note 3 running Android 5.0.2.
public class Camera2ServiceYUV extends Service {
protected static final String TAG = "VideoProcessing";
protected static final int CAMERACHOICE = CameraCharacteristics.LENS_FACING_BACK;
protected CameraDevice cameraDevice;
protected CameraCaptureSession captureSession;
protected ImageReader imageReader;
// A semaphore to prevent the app from exiting before closing the camera.
private Semaphore mCameraOpenCloseLock = new Semaphore(1);
public static final String RESULT_RECEIVER = "resultReceiver";
private static final int JPEG_COMPRESSION = 90;
public static final int RESULT_OK = 0;
public static final int RESULT_DEVICE_NO_CAMERA= 1;
public static final int RESULT_GET_CAMERA_FAILED = 2;
public static final int RESULT_ALREADY_RUNNING = 3;
public static final int RESULT_NOT_RUNNING = 4;
private static final String START_SERVICE_COMMAND = "startServiceCommands";
private static final int COMMAND_NONE = -1;
private static final int COMMAND_START = 0;
private static final int COMMAND_STOP = 1;
private boolean mRunning = false;
public Camera2ServiceYUV() {
}
public static void startToStart(Context context, ResultReceiver resultReceiver) {
Intent intent = new Intent(context, Camera2ServiceYUV.class);
intent.putExtra(START_SERVICE_COMMAND, COMMAND_START);
intent.putExtra(RESULT_RECEIVER, resultReceiver);
context.startService(intent);
}
public static void startToStop(Context context, ResultReceiver resultReceiver) {
Intent intent = new Intent(context, Camera2ServiceYUV.class);
intent.putExtra(START_SERVICE_COMMAND, COMMAND_STOP);
intent.putExtra(RESULT_RECEIVER, resultReceiver);
context.startService(intent);
}
// SERVICE INTERFACE
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
switch (intent.getIntExtra(START_SERVICE_COMMAND, COMMAND_NONE)) {
case COMMAND_START:
startCamera(intent);
break;
case COMMAND_STOP:
stopCamera(intent);
break;
default:
throw new UnsupportedOperationException("Cannot start the camera service with an illegal command.");
}
return START_STICKY;
}
@Override
public void onDestroy() {
try {
captureSession.abortCaptures();
} catch (CameraAccessException e) {
Log.e(TAG, e.getMessage());
}
captureSession.close();
}
@Override
public IBinder onBind(Intent intent) {
return null;
}
// CAMERA2 INTERFACE
/**
* 1. The android CameraManager class is used to manage all the camera devices in our android device
* Each camera device has a range of properties and settings that describe the device.
* It can be obtained through the camera characteristics.
*/
public void startCamera(Intent intent) {
final ResultReceiver resultReceiver = intent.getParcelableExtra(RESULT_RECEIVER);
if (mRunning) {
resultReceiver.send(RESULT_ALREADY_RUNNING, null);
return;
}
mRunning = true;
CameraManager manager = (CameraManager) getSystemService(CAMERA_SERVICE);
try {
if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
throw new RuntimeException("Time out waiting to lock camera opening.");
}
String pickedCamera = getCamera(manager);
Log.e(TAG,"AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA " + pickedCamera);
manager.openCamera(pickedCamera, cameraStateCallback, null);
CameraCharacteristics characteristics = manager.getCameraCharacteristics(pickedCamera);
Size[] jpegSizes = null;
if (characteristics != null) {
jpegSizes = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputSizes(ImageFormat.YUV_420_888);
}
int width = 640;
int height = 480;
// if (jpegSizes != null && 0 < jpegSizes.length) {
// width = jpegSizes[jpegSizes.length -1].getWidth();
// height = jpegSizes[jpegSizes.length - 1].getHeight();
// }
// for(Size s : jpegSizes)
// {
// Log.e(TAG,"Size = " + s.toString());
// }
// DEBUG
StreamConfigurationMap map = characteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map == null) {
return;
}
Log.e(TAG,"Width = " + width + ", Height = " + height);
Log.e(TAG,"output stall duration = " + map.getOutputStallDuration(ImageFormat.YUV_420_888, new Size(width,height)) );
Log.e(TAG,"Min output stall duration = " + map.getOutputMinFrameDuration(ImageFormat.YUV_420_888, new Size(width,height)) );
// Size[] sizeList = map.getInputSizes(ImageFormat.YUV_420_888);
// for(Size s : sizeList)
// {
// Log.e(TAG,"Size = " + s.toString());
// }
imageReader = ImageReader.newInstance(width, height, ImageFormat.YUV_420_888, 2 /* images buffered */);
imageReader.setOnImageAvailableListener(onImageAvailableListener, null);
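// NOTE: a null Handler here means the listener is invoked on the calling thread's Looper
// (the service's main thread in this case); a dedicated HandlerThread is usually preferred
// for per-frame callbacks.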
Log.i(TAG, "imageReader created");
} catch (CameraAccessException e) {
Log.e(TAG, e.getMessage());
resultReceiver.send(RESULT_DEVICE_NO_CAMERA, null);
}catch (InterruptedException e) {
resultReceiver.send(RESULT_GET_CAMERA_FAILED, null);
throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
}
catch(SecurityException se)
{
resultReceiver.send(RESULT_GET_CAMERA_FAILED, null);
throw new RuntimeException("Security permission exception while trying to open the camera.", se);
}
resultReceiver.send(RESULT_OK, null);
}
// We can pick the camera being used, i.e. rear camera in this case.
private String getCamera(CameraManager manager) {
try {
for (String cameraId : manager.getCameraIdList()) {
CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
int cOrientation = characteristics.get(CameraCharacteristics.LENS_FACING);
if (cOrientation == CAMERACHOICE) {
return cameraId;
}
}
} catch (CameraAccessException e) {
e.printStackTrace();
}
return null;
}
/**
* 1.1 Callbacks when the camera changes its state - opened, disconnected, or error.
*/
protected CameraDevice.StateCallback cameraStateCallback = new CameraDevice.StateCallback() {
@Override
public void onOpened(@NonNull CameraDevice camera) {
Log.i(TAG, "CameraDevice.StateCallback onOpened");
mCameraOpenCloseLock.release();
cameraDevice = camera;
createCaptureSession();
}
@Override
public void onDisconnected(@NonNull CameraDevice camera) {
Log.w(TAG, "CameraDevice.StateCallback onDisconnected");
mCameraOpenCloseLock.release();
camera.close();
cameraDevice = null;
}
@Override
public void onError(@NonNull CameraDevice camera, int error) {
Log.e(TAG, "CameraDevice.StateCallback onError " + error);
mCameraOpenCloseLock.release();
camera.close();
cameraDevice = null;
}
};
/**
* 2. To capture or stream images from a camera device, the application must first create
* a camera capture captureSession.
* The camera capture needs a surface to output what has been captured, in this case
* we use ImageReader in order to access the frame data.
*/
public void createCaptureSession() {
try {
cameraDevice.createCaptureSession(Arrays.asList(imageReader.getSurface()), sessionStateCallback, null);
} catch (CameraAccessException e) {
Log.e(TAG, e.getMessage());
}
}
protected CameraCaptureSession.StateCallback sessionStateCallback = new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession session) {
Log.i(TAG, "CameraCaptureSession.StateCallback onConfigured");
// The camera is already closed
if (null == cameraDevice) {
return;
}
// When the captureSession is ready, we start to grab the frame.
Camera2ServiceYUV.this.captureSession = session;
try {
session.setRepeatingRequest(createCaptureRequest(), null, null);
} catch (CameraAccessException e) {
Log.e(TAG, e.getMessage());
}
}
@Override
public void onConfigureFailed(@NonNull CameraCaptureSession session) {
Log.e(TAG, "CameraCaptureSession.StateCallback onConfigureFailed");
}
};
/**
* 3. The application then needs to construct a CaptureRequest, which defines all the capture parameters
* needed by a camera device to capture a single image.
*/
private CaptureRequest createCaptureRequest() {
try {
/**
* Check other templates for further details.
* TEMPLATE_MANUAL = 6
* TEMPLATE_PREVIEW = 1
* TEMPLATE_RECORD = 3
* TEMPLATE_STILL_CAPTURE = 2
* TEMPLATE_VIDEO_SNAPSHOT = 4
* TEMPLATE_ZERO_SHUTTER_LAG = 5
*
* TODO: can set camera features like auto focus, auto flash here
* captureRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
*/
CaptureRequest.Builder captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
// captureRequestBuilder.set(CaptureRequest.EDGE_MODE,
// CaptureRequest.EDGE_MODE_OFF);
// captureRequestBuilder.set(
// CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE,
// CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE_ON);
// captureRequestBuilder.set(
// CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE,
// CaptureRequest.COLOR_CORRECTION_ABERRATION_MODE_OFF);
// captureRequestBuilder.set(CaptureRequest.NOISE_REDUCTION_MODE,
// CaptureRequest.NOISE_REDUCTION_MODE_OFF);
// captureRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER,
// CaptureRequest.CONTROL_AF_TRIGGER_CANCEL);
//
// captureRequestBuilder.set(CaptureRequest.CONTROL_AE_LOCK, true);
// captureRequestBuilder.set(CaptureRequest.CONTROL_AWB_LOCK, true);
captureRequestBuilder.addTarget(imageReader.getSurface());
return captureRequestBuilder.build();
} catch (CameraAccessException e) {
Log.e(TAG, e.getMessage());
return null;
}
}
/**
* ImageReader provides a surface for the camera to output what has been captured.
* Upon the image available, call processImage() to process the image as desired.
*/
private long frameTime = 0;
private ImageReader.OnImageAvailableListener onImageAvailableListener = new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
Log.i(TAG, "called ImageReader.OnImageAvailable");
Image img = reader.acquireLatestImage();
if (img != null) {
if( frameTime != 0 )
{
Log.e(TAG, "fps = " + (float)(1000.0 / (float)(SystemClock.elapsedRealtime() - frameTime)) + " fps");
}
frameTime = SystemClock.elapsedRealtime();
img.close();
}
}
};
private void processImage(Image image) {
Mat outputImage = imageToMat(image);
Bitmap bmp = Bitmap.createBitmap(outputImage.cols(), outputImage.rows(), Bitmap.Config.ARGB_8888);
Utils.bitmapToMat(bmp, outputImage);
Point mid = new Point(0, 0);
Point inEnd = new Point(outputImage.cols(), outputImage.rows());
Imgproc.line(outputImage, mid, inEnd, new Scalar(255, 0, 0), 2, Core.LINE_AA, 0);
Utils.matToBitmap(outputImage, bmp);
Intent broadcast = new Intent();
broadcast.setAction("your_load_photo_action");
broadcast.putExtra("BitmapImage", bmp);
sendBroadcast(broadcast);
}
private Mat imageToMat(Image image) {
ByteBuffer buffer;
int rowStride;
int pixelStride;
int width = image.getWidth();
int height = image.getHeight();
int offset = 0;
Image.Plane[] planes = image.getPlanes();
byte[] data = new byte[image.getWidth() * image.getHeight() * ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
byte[] rowData = new byte[planes[0].getRowStride()];
for (int i = 0; i < planes.length; i++) {
buffer = planes[i].getBuffer();
rowStride = planes[i].getRowStride();
pixelStride = planes[i].getPixelStride();
int w = (i == 0) ? width : width / 2;
int h = (i == 0) ? height : height / 2;
for (int row = 0; row < h; row++) {
int bytesPerPixel = ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8;
if (pixelStride == bytesPerPixel) {
int length = w * bytesPerPixel;
buffer.get(data, offset, length);
// Advance buffer the remainder of the row stride, unless on the last row.
// Otherwise, this will throw an IllegalArgumentException because the buffer
// doesn't include the last padding.
if (h - row != 1) {
buffer.position(buffer.position() + rowStride - length);
}
offset += length;
} else {
// On the last row only read the width of the image minus the pixel stride
// plus one. Otherwise, this will throw a BufferUnderflowException because the
// buffer doesn't include the last padding.
if (h - row == 1) {
buffer.get(rowData, 0, width - pixelStride + 1);
} else {
buffer.get(rowData, 0, rowStride);
}
for (int col = 0; col < w; col++) {
data[offset++] = rowData[col * pixelStride];
}
}
}
}
// Finally, create the Mat.
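// data now holds the Y plane followed by the U and V planes, packed into a
// single-channel Mat of height * 3/2 rows.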
Mat mat = new Mat(height + height / 2, width, CV_8UC1);
mat.put(0, 0, data);
return mat;
}
private void stopCamera(Intent intent) {
ResultReceiver resultReceiver = intent.getParcelableExtra(RESULT_RECEIVER);
if (!mRunning) {
resultReceiver.send(RESULT_NOT_RUNNING, null);
return;
}
closeCamera();
resultReceiver.send(RESULT_OK, null);
mRunning = false;
Log.d(TAG, "Service is finished.");
}
/**
* Closes the current {@link CameraDevice}.
*/
private void closeCamera() {
try {
mCameraOpenCloseLock.acquire();
if (null != captureSession) {
captureSession.close();
captureSession = null;
}
if (null != cameraDevice) {
cameraDevice.close();
cameraDevice = null;
}
if (null != imageReader) {
imageReader.close();
imageReader = null;
}
} catch (InterruptedException e) {
throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
} finally {
mCameraOpenCloseLock.release();
}
}
}
I bumped into this problem recently when I tried to upgrade my AR app from the camera1 to the camera2 API. I used a mid-range device for testing (Meizu S6), which has an Exynos 7872 CPU and a Mali-G71 GPU. What I want to achieve is a steady 30 fps AR experience.
But during the migration I found that it is quite tricky to get a decent preview frame rate using the Camera2 API.
I configured my capture request using TEMPLATE_PREVIEW:
mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
Then I added two surfaces: one for the preview, a SurfaceTexture at 1280x720,
and an ImageReader at 1280x720 for image processing.
mImageReader = ImageReader.newInstance(
mVideoSize.getWidth(),
mVideoSize.getHeight(),
ImageFormat.YUV_420_888,
2);
List<Surface> surfaces =new ArrayList<>();
Surface previewSurface = new Surface(mSurfaceTexture);
surfaces.add(previewSurface);
mPreviewBuilder.addTarget(previewSurface);
Surface frameCaptureSurface = mImageReader.getSurface();
surfaces.add(frameCaptureSurface);
mPreviewBuilder.addTarget(frameCaptureSurface);
mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CameraMetadata.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
mPreviewSession.setRepeatingRequest(mPreviewBuilder.build(), captureCallback, mBackgroundHandler);
Everything works as expected: my TextureView gets updated and the frame callback gets called too. Except that the frame rate is about 10 fps, and I haven't even done any image processing yet.
I have experimented with many Camera2 API settings, including SENSOR_FRAME_DURATION and different ImageFormat and size combinations, but none of them improved the frame rate. However, if I simply remove the ImageReader from the output surfaces, the preview easily reaches 30 fps.
So I guess the problem is that adding the ImageReader as a Camera2 output surface decreased the preview frame rate drastically, at least in my case. So what is the solution?
My solution is glReadPixels.
I know glReadPixels is one of those evil calls, because it copies bytes from the GPU to main memory and also forces OpenGL to flush its draw commands, so for performance's sake we'd better avoid it. But surprisingly, glReadPixels is actually pretty fast and provides a much better frame rate than ImageReader's YUV_420_888 output.
In addition, to reduce the memory overhead I make another draw call into a smaller frame buffer, e.g. 360x640 instead of the preview's 720p, dedicated to feature detection.
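A minimal sketch of that readback path, assuming you already draw the camera's SurfaceTexture with an external-OES shader inside a valid EGL context (drawCameraTexture() is a placeholder for that existing draw code, android.opengl.GLES20 and java.nio are imported, and 640x360 stands for the smaller buffer mentioned above):
int w = 640, h = 360;                                   // downscaled buffer for feature detection
int[] fbo = new int[1], tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, w, h, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
GLES20.glViewport(0, 0, w, h);
drawCameraTexture();                                    // placeholder for your existing OES-texture draw call
ByteBuffer pixels = ByteBuffer.allocateDirect(w * h * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
// pixels now holds an RGBA copy of the downscaled frame for CPU-side processing.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);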
This is based on the camera2 implementation in the OpenCV library.
I had the same problem, then I noticed this piece of code in OpenCV's JavaCamera2View; you need to change the settings of the CaptureRequest.Builder this way:
CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
It changed the fps from 10 fps to around 28-30 fps for me. It worked with two target surfaces, one for the preview TextureView and the second for the ImageReader:
Surface readerSurface = imageReader.getSurface();
Surface surface = new Surface(surfaceTexture);
captureBuilder.addTarget(surface);
captureBuilder.addTarget(readerSurface);
I cannot post a comment (not enough reputation), but I'm running into the same issue with a Redmi 6.
If I use a TextureView for previewing the camera output I get around 30 fps, but replacing it with an ImageReader it drops to 8-9 fps. All the camera configs are the same in either case.
Interestingly, trying out CameraXBasic showed the same issue: the updates from the camera were sluggish. But android-Camera2Basic (using TextureView) ran without any issues.
Update 1:
Tested with the preview size lowered from 1280x720 to 640x480 and, as expected, saw better performance.
This is what I know after tweaking with it a little: the problem lies in ImageReader's maxImages parameter. I changed it from 2 to 3 and then to 56, and that changed the fps quite a lot. My understanding is that the surface the camera renders into via the ImageReader tends to block while an Image obtained from ImageReader.OnImageAvailableListener is being processed or has not yet been released. In other words, the camera wants to write into a buffer but doesn't have enough buffers, so increasing the ImageReader's maxImages gives camera2 room to store the incoming frames.
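For reference, a minimal sketch of the two knobs discussed in these answers, a larger maxImages value and a dedicated background HandlerThread for the listener, together with releasing each Image as quickly as possible (the concrete numbers are only illustrative; android.media and android.os imports assumed):
HandlerThread imageThread = new HandlerThread("ImageListener");
imageThread.start();
Handler imageHandler = new Handler(imageThread.getLooper());
ImageReader reader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888,
        4 /* more head-room than the 2 used in the question; tune per device */);
reader.setOnImageAvailableListener(r -> {
    Image img = r.acquireLatestImage();   // drop stale frames instead of queueing them
    if (img == null) return;
    try {
        // copy out only what is needed, as cheaply as possible
    } finally {
        img.close();                      // hand the buffer back to the camera promptly
    }
}, imageHandler);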
Help me, guys. I'm creating an augmented reality app (Unity + Vuforia). I have a button that takes a screenshot; it works, but the file ends up in (/Data/Data/com.companyname.gamename/Files). How do I change the folder to (storage/emulated/0/DCIM/Camera/)?
using UnityEngine;
using System.Collections;
using System.IO;
public class SnapshotShare : MonoBehaviour
{
private AndroidUltimatePluginController androidUltimatePluginController;
Camera mainCamera;
RenderTexture renderTex;
Texture2D screenshot;
Texture2D LoadScreenshot;
int width = Screen.width;
int height = Screen.height;
string fileName;
string screenShotName = "Animal3D_";
void Start ()
{
androidUltimatePluginController = AndroidUltimatePluginController.GetInstance ();
}
public void Snapshot ()
{
StartCoroutine (CaptureScreen ());
}
public IEnumerator CaptureScreen ()
{
yield return null;
GameObject.Find ("Canvas").GetComponent<Canvas> ().enabled = false;
yield return new WaitForEndOfFrame ();
if (Screen.orientation == ScreenOrientation.Portrait || Screen.orientation == ScreenOrientation.PortraitUpsideDown) {
mainCamera = Camera.main.GetComponent<Camera> ();
renderTex = new RenderTexture (height, width, 24);
mainCamera.targetTexture = renderTex;
RenderTexture.active = renderTex;
mainCamera.Render ();
screenshot = new Texture2D (height, width, TextureFormat.RGB24, false);
screenshot.ReadPixels (new Rect (0, 0, height, width ), 0, 0);
screenshot.Apply ();
RenderTexture.active = null;
mainCamera.targetTexture = null;
}
if (Screen.orientation == ScreenOrientation.LandscapeLeft || Screen.orientation == ScreenOrientation.LandscapeRight) {
mainCamera = Camera.main.GetComponent<Camera> ();
renderTex = new RenderTexture (width, height, 24);
mainCamera.targetTexture = renderTex;
RenderTexture.active = renderTex;
mainCamera.Render ();
screenshot = new Texture2D (width, height, TextureFormat.RGB24, false);
screenshot.ReadPixels (new Rect (0, 0, width, height), 0, 0);
screenshot.Apply (); //false
RenderTexture.active = null;
mainCamera.targetTexture = null;
}
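// Application.persistentDataPath is the app-private files directory on Android
// (/Data/Data/com.companyname.gamename/Files here), which is why the screenshot lands there.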
File.WriteAllBytes (Application.persistentDataPath + "/" +screenShotName+Time.frameCount+".jpg", screenshot.EncodeToJPG ());
GameObject.Find ("Canvas").GetComponent<Canvas> ().enabled = true;
}
public void LoadImage ()
{
string path = Application.persistentDataPath + "/" + screenShotName;
byte[] bytes;
bytes = System.IO.File.ReadAllBytes(path);
LoadScreenshot = new Texture2D(1,1);
LoadScreenshot.LoadImage(bytes);
GameObject.FindGameObjectWithTag ("Picture").GetComponent<Renderer> ().material.mainTexture = LoadScreenshot; // use the texture loaded above
}
public void close ()
{
Application.Quit ();
}
}
Taken from here (more details) and here (discussion).
I suggest you save your captured screenshot in the app location (/Data/Data/com.companyname.gamename/Files) and then use File.Move(source, dest) to move it:
if(Shot_Taken == true)
{
string Origin_Path = System.IO.Path.Combine(Application.persistentDataPath, Screen_Shot_File_Name);
// This is the path of my folder.
string Path = "/mnt/sdcard/DCIM/Inde/" + Screen_Shot_File_Name;
if(System.IO.File.Exists(Origin_Path))
{
System.IO.File.Move(Origin_Path, Path);
Shot_Taken = false;
}
}
I am developing an Android application in which I need to play an AAC live audio stream coming from a Red5 server.
I have successfully decoded the audio stream using javacv-ffmpeg.
But my problem is how to play the audio from the decoded samples.
I have tried the following:
int len = avcodec.avcodec_decode_audio4( audio_c, samples_frame, got_frame, pkt2);
if (len <= 0){
this.pkt2.size(0);
} else {
if (this.got_frame[0] != 0) {
long pts = avutil.av_frame_get_best_effort_timestamp(samples_frame);
int sample_format = samples_frame.format();
int planes = avutil.av_sample_fmt_is_planar(sample_format) != 0 ? samples_frame.channels() : 1;
int data_size = avutil.av_samples_get_buffer_size((IntPointer)null, audio_c.channels(), samples_frame.nb_samples(), audio_c.sample_fmt(), 1) / planes;
if ((samples_buf == null) || (samples_buf.length != planes)) {
samples_ptr = new BytePointer[planes];
samples_buf = new Buffer[planes];
}
BytePointer ptemp = samples_frame.data(0);
BytePointer[] temp_ptr = new BytePointer[1];
temp_ptr[0] = ptemp.capacity(sample_size);
ByteBuffer btemp = ptemp.asBuffer();
byte[] buftemp = new byte[sample_size];
btemp.get(buftemp, 0, buftemp.length);
// play buftemp[] with AudioTrack...
}
But only noise is heard from the speakers. Is there any processing that needs to be done on the AVFrame we get from decode_audio4(...)?
The incoming audio stream is correctly encoded with the AAC codec.
Any help or suggestion is appreciated.
Thanks in advance.
You can use the FFmpegFrameGrabber class to capture the stream and extract the audio using a FloatBuffer. This is a Java example:
public class PlayVideoAndAudio extends Application
{
private static final Logger LOG = Logger.getLogger(PlayVideoAndAudio.class.getName());
private static final double SC16 = (double) 0x7FFF + 0.4999999999999999;
private static volatile Thread playThread;
public static void main(String[] args)
{
launch(args);
}
@Override
public void start(Stage primaryStage) throws Exception
{
String source = "rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov";
StackPane root = new StackPane();
ImageView imageView = new ImageView();
root.getChildren().add(imageView);
imageView.fitWidthProperty().bind(primaryStage.widthProperty());
imageView.fitHeightProperty().bind(primaryStage.heightProperty());
Scene scene = new Scene(root, 640, 480);
primaryStage.setTitle("Video + audio");
primaryStage.setScene(scene);
primaryStage.show();
playThread = new Thread(() -> {
try {
FFmpegFrameGrabber grabber = new FFmpegFrameGrabber(source);
grabber.start();
primaryStage.setWidth(grabber.getImageWidth());
primaryStage.setHeight(grabber.getImageHeight());
AudioFormat audioFormat = new AudioFormat(grabber.getSampleRate(), 16, grabber.getAudioChannels(), true, true);
DataLine.Info info = new DataLine.Info(SourceDataLine.class, audioFormat);
SourceDataLine soundLine = (SourceDataLine) AudioSystem.getLine(info);
soundLine.open(audioFormat);
soundLine.start();
Java2DFrameConverter converter = new Java2DFrameConverter();
ExecutorService executor = Executors.newSingleThreadExecutor();
while (!Thread.interrupted()) {
Frame frame = grabber.grab();
if (frame == null) {
break;
}
if (frame.image != null) {
Image image = SwingFXUtils.toFXImage(converter.convert(frame), null);
Platform.runLater(() -> {
imageView.setImage(image);
});
} else if (frame.samples != null) {
FloatBuffer channelSamplesFloatBuffer = (FloatBuffer) frame.samples[0];
channelSamplesFloatBuffer.rewind();
ByteBuffer outBuffer = ByteBuffer.allocate(channelSamplesFloatBuffer.capacity() * 2);
for (int i = 0; i < channelSamplesFloatBuffer.capacity(); i++) {
short val = (short)((double) channelSamplesFloatBuffer.get(i) * SC16);
outBuffer.putShort(val);
}
/**
* We need this because soundLine.write ignores
* interruptions during writing.
*/
try {
executor.submit(() -> {
soundLine.write(outBuffer.array(), 0, outBuffer.capacity());
outBuffer.clear();
}).get();
} catch (InterruptedException interruptedException) {
Thread.currentThread().interrupt();
}
}
}
executor.shutdownNow();
executor.awaitTermination(10, TimeUnit.SECONDS);
soundLine.stop();
grabber.stop();
grabber.release();
Platform.exit();
} catch (Exception exception) {
LOG.log(Level.SEVERE, null, exception);
System.exit(1);
}
});
playThread.start();
}
@Override
public void stop() throws Exception
{
playThread.interrupt();
}
}
That is because the data you are getting in buftemp[] is in the AV_SAMPLE_FMT_FLTP format; you have to convert it to the AV_SAMPLE_FMT_S16 format using SwrContext, and then your problem will be solved.
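To make that concrete: AV_SAMPLE_FMT_FLTP means each channel is a separate plane of 32-bit floats, while AudioTrack with ENCODING_PCM_16BIT expects interleaved signed 16-bit samples. SwrContext does this conversion inside FFmpeg; purely as an illustration of the same transformation, here is a minimal plain-Java sketch that interleaves and scales the planar floats (it reuses samples_frame and audio_c from the decode loop in the question, assumes the frame really is planar float, and assumes the java.nio imports):
int n = samples_frame.nb_samples();
int channels = audio_c.channels();
short[] pcm = new short[n * channels];
for (int ch = 0; ch < channels; ch++) {
    // each plane holds n 32-bit float samples for one channel
    FloatBuffer plane = samples_frame.data(ch).capacity(n * 4)
            .asBuffer().order(ByteOrder.nativeOrder()).asFloatBuffer();
    for (int i = 0; i < n; i++) {
        float f = Math.max(-1f, Math.min(1f, plane.get(i)));   // clamp to [-1, 1]
        pcm[i * channels + ch] = (short) (f * 32767f);         // scale and interleave
    }
}
// pcm[] can now be written to an AudioTrack configured for ENCODING_PCM_16BIT.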
I'm using ffmpeg to capture video for 30 seconds.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
if (yuvIplimage != null && recording && rec)
{
new SaveFrame().execute(data);
}
}
}
The save-frame class is below:
private class SaveFrame extends AsyncTask<byte[], Void, File> {
long t;
protected File doInBackground(byte[]... arg) {
t = 1000 * (System.currentTimeMillis() - firstTime - pausedTime);
toSaveFrames++;
File pathCache = new File(Environment.getExternalStorageDirectory()+"/DCIM", (System.currentTimeMillis() / 1000L)+ "_" + toSaveFrames + ".tmp");
BufferedOutputStream bos;
try {
bos = new BufferedOutputStream(new FileOutputStream(pathCache));
bos.write(arg[0]);
bos.flush();
bos.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
pathCache = null;
toSaveFrames--;
} catch (IOException e) {
e.printStackTrace();
pathCache = null;
toSaveFrames--;
}
return pathCache;
}
@Override
protected void onPostExecute(File filename)
{
if(filename!=null)
{
savedFrames++;
tempList.add(new FileFrame(t,filename));
}
}
}
Finally I add all frames, with crop and rotation:
private class AddFrame extends AsyncTask<Void, Integer, Void> {
private int serial = 0;
@Override
protected Void doInBackground(Void... params) {
for(int i=0; i<tempList.size(); i++)
{
byte[] bytes = new byte[(int) tempList.get(i).file.length()];
try {
BufferedInputStream buf = new BufferedInputStream(new FileInputStream(tempList.get(i).file));
buf.read(bytes, 0, bytes.length);
buf.close();
IplImage image = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 2);
// final int startY = 640*(480-480)/2;
// final int lenY = 640*480;
// yuvIplimage.getByteBuffer().put(bytes, startY, lenY);
// final int startVU = 640*480+ 640*(480-480)/4;
// final int lenVU = 640* 480/2;
// yuvIplimage.getByteBuffer().put(bytes, startVU, lenVU);
if (tempList.get(i).time > recorder.getTimestamp()) {
recorder.setTimestamp(tempList.get(i).time);
}
image = cropImage(image);
image = rotate(image, 270);
// image = rotateImage(image);
recorder.record(image);
Log.i(LOG_TAG, "record " + i);
image = null;
serial++;
publishProgress(serial);
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
} catch (com.googlecode.javacv.FrameRecorder.Exception e) {
e.printStackTrace();
}
}
return null;
}
@Override
protected void onProgressUpdate(Integer... serial) {
int value = serial[0];
creatingProgress.setProgress(value);
}
@Override
protected void onPostExecute(Void v)
{
creatingProgress.dismiss();
if (recorder != null && recording) {
recording = false;
Log.v(LOG_TAG,"Finishing recording, calling stop and release on recorder");
try {
recorder.stop();
recorder.release();
finish();
startActivity(new Intent(RecordActivity.this,AnswerViewActivity.class));
} catch (FFmpegFrameRecorder.Exception e) {
e.printStackTrace();
}
recorder = null;
}
}
}
My crop and rotate methods are below:
private IplImage cropImage(IplImage src)
{
cvSetImageROI(src, r);
IplImage cropped = IplImage.create(imageHeight, imageHeight, IPL_DEPTH_8U, 2);
cvCopy(src, cropped);
return cropped;
}
public static IplImage rotate(IplImage image, double angle) {
IplImage copy = opencv_core.cvCloneImage(image);
IplImage rotatedImage = opencv_core.cvCreateImage(opencv_core.cvGetSize(copy), copy.depth(), copy.nChannels());
CvMat mapMatrix = opencv_core.cvCreateMat( 2, 3, opencv_core.CV_32FC1 );
//Define Mid Point
CvPoint2D32f centerPoint = new CvPoint2D32f();
centerPoint.x(copy.width()/2);
centerPoint.y(copy.height()/2);
//Get Rotational Matrix
opencv_imgproc.cv2DRotationMatrix(centerPoint, angle, 1.0, mapMatrix);
//Rotate the Image
opencv_imgproc.cvWarpAffine(copy, rotatedImage, mapMatrix, opencv_imgproc.CV_INTER_CUBIC + opencv_imgproc.CV_WARP_FILL_OUTLIERS, opencv_core.cvScalarAll(170));
opencv_core.cvReleaseImage(copy);
opencv_core.cvReleaseMat(mapMatrix);
return rotatedImage;
}
My final video is cropped and rotated, but green frames and discolored frames are mixed into it.
How do I fix this problem? I'm not familiar with IplImage. Some blogs mention it is a YUV format: first you need to convert Y, and then convert UV.
How can I solve this?
I have modified the onPreviewFrame method of this open-source Android Touch-To-Record library to transpose and resize a captured frame.
I defined "yuvIplImage" as follows in my setCameraParams() method:
IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
Also initialize your videoRecorder object as following, giving width as height and vice versa.
//call initVideoRecorder() method like this to initialize videoRecorder object of FFmpegFrameRecorder class.
initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);
//method implementation
public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
{
Log.e(TAG, "initVideoRecorder");
videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
videoRecorder.setAudioQuality(recorderParameters.getVideoQuality());
videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
videoRecorder.setVideoBitrate(1000000);
videoRecorder.setAudioBitrate(64000);
}
This is my onPreviewFrame() method:
@Override
public void onPreviewFrame(byte[] data, Camera camera)
{
long frameTimeStamp = 0L;
if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
{
frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
}
else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
{
frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
}
else
{
long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
}
synchronized(FragmentCamera.mVideoRecordLock)
{
if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
{
FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;
if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
{
FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
}
try
{
yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);// In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);
int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width, mPreviewSize.height);
bgrImage.getIntBuffer().put(_temp);
opencv_core.cvTranspose(bgrImage, transposed);
opencv_core.cvFlip(transposed, transposed, 1);
opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
opencv_core.cvCopy(transposed, squared, null);
opencv_core.cvResetImageROI(transposed);
videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
videoRecorder.record(squared);
}
catch(com.googlecode.javacv.FrameRecorder.Exception e)
{
e.printStackTrace();
}
}
lastSavedframe = new SavedFrames(data, frameTimeStamp);
}
}
This code uses a method, "YUV_NV21_TO_BGR", which I found at this link.
Basically this method is used to solve what I call "the Green Devil problem on Android", just like yours. I was having the same issue and wasted almost 3-4 days on it. Before adding the "YUV_NV21_TO_BGR" method, when I just took the transpose of the YuvIplImage, and more importantly a combination of transpose and flip (with or without resizing), there was a greenish output in the resulting video. This "YUV_NV21_TO_BGR" method saved the day. Thanks to @David Han from the Google Groups thread above.
You should also know that all this processing (transpose, flip and resize) in onPreviewFrame takes a lot of time, which causes a very serious hit on the frames-per-second (FPS) rate. When I used this code inside the onPreviewFrame method, the FPS of the recorded video dropped from 30 fps to 3 fps.
I would advise against this approach. Instead, you can do post-recording processing (transpose, flip and resize) of your video file using JavaCV in an AsyncTask. Hope this helps.
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
//IplImage newImage = cvCreateImage(cvGetSize(yuvIplimage), IPL_DEPTH_8U, 1);
if (recording) {
videoTimestamp = 1000 * (System.currentTimeMillis() - startTime);
yuvimage = IplImage.create(imageWidth, imageHeight * 3 / 2, IPL_DEPTH_8U,1);
yuvimage.getByteBuffer().put(data);
rgbimage = IplImage.create(imageWidth, imageHeight, IPL_DEPTH_8U, 3);
opencv_imgproc.cvCvtColor(yuvimage, rgbimage, opencv_imgproc.CV_YUV2BGR_NV21);
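// yuvimage wraps the raw NV21 preview bytes (height * 3/2 rows: the luma plane followed by
// interleaved chroma); cvCvtColor expands them into a 3-channel BGR image before rotating.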
IplImage rotateimage=null;
try {
recorder.setTimestamp(videoTimestamp);
int rot=0;
switch (degrees) {
case 0:
rot =1;
rotateimage=rotate(rgbimage,rot);
break;
case 180:
rot = -1;
rotateimage=rotate(rgbimage,rot);
break;
default:
rotateimage=rgbimage;
}
recorder.record(rotateimage);
} catch (FFmpegFrameRecorder.Exception e) {
e.printStackTrace();
}
}
}
IplImage rotate(IplImage IplSrc,int angle) {
IplImage img= IplImage.create(IplSrc.height(), IplSrc.width(), IplSrc.depth(), IplSrc.nChannels());
cvTranspose(IplSrc, img);
cvFlip(img, img, angle);
return img;
}
}
After many searches, this is what worked for me.