Is it possible to use the WebRTC VideoCapturer without a PeerConnection?
We have a working Android app (based on examples/androidapp). We have moved the following code from that app into a separate activity, where we use the camera capturer directly without creating a PeerConnection. We create a video capturer (Camera2) using an instance of CapturerObserver and then try to render it to org.webrtc.SurfaceViewRenderer. Below is the code.
As expected, onFrameCaptured of the CapturerObserver is called repeatedly with a valid videoFrame object. From there, we pass the frame to the SurfaceViewRenderer. However, the video does not render and the SurfaceViewRenderer stays black.
Is this a correct way of using VideoCapturer and SurfaceViewRenderer? Does the frame require any format conversion before being sent to the SurfaceViewRenderer?
private class MyCapturerObserver implements CapturerObserver {
@Override
public void onCapturerStarted(boolean b) {
Log.e(TAG, "capture started: " + b);
}
@Override
public void onCapturerStopped() {
Log.e(TAG, "capture stopped");
}
@Override
public void onFrameCaptured(final VideoFrame videoFrame) {
//fullscreenRenderer.onFrame(videoFrame);
runOnUiThread(new Runnable() {
@Override
public void run() {
fullscreenRenderer.onFrame(videoFrame);
}
});
}
}
capturer = createVideoCapturer();
captureObserver = new MyCapturerObserver();
surfaceTextureHelper =
SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());
capturer.initialize(surfaceTextureHelper, getApplicationContext(), captureObserver);
capturer.startCapture(1280, 720, 30);
Use factory.createVideoSource(). You can use it without creating a PeerConnection. You can refer to the source code in PeerConnectionClient.java:
public VideoTrack createVideoTrack(VideoCapturer capturer) {
surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", rootEglBase.getEglBaseContext());
videoSource = factory.createVideoSource(capturer.isScreencast());
capturer.initialize(surfaceTextureHelper, appContext, videoSource.getCapturerObserver());
capturer.startCapture(videoWidth, videoHeight, videoFps);
localVideoTrack = factory.createVideoTrack(VIDEO_TRACK_ID, videoSource);
localVideoTrack.setEnabled(renderVideo);
localVideoTrack.addSink(localRender);
return localVideoTrack;
}
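For completeness, a minimal sketch of doing this without any PeerConnection (assuming a recent libwebrtc Android build in which SurfaceViewRenderer implements VideoSink, and that PeerConnectionFactory.initialize() has already been called) could look like this:
// Sketch only: eglBase, appContext, fullscreenRenderer and createVideoCapturer() come from the question.
PeerConnectionFactory factory = PeerConnectionFactory.builder().createPeerConnectionFactory();
// The renderer must be initialized with the same EGL context the capturer uses,
// otherwise the texture frames cannot be drawn and the view stays black.
fullscreenRenderer.init(eglBase.getEglBaseContext(), null);
SurfaceTextureHelper helper =
        SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());
VideoCapturer capturer = createVideoCapturer();
VideoSource videoSource = factory.createVideoSource(capturer.isScreencast());
capturer.initialize(helper, appContext, videoSource.getCapturerObserver());
capturer.startCapture(1280, 720, 30);
VideoTrack localTrack = factory.createVideoTrack("local_video", videoSource);
localTrack.addSink(fullscreenRenderer); // no PeerConnection involved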
Related
public class AgoraEventHandler extends IRtcEngineEventHandler {
private ArrayList<EventHandler> mHandler = new ArrayList<>();
@Override
public void onNetworkQuality(int uid, int txQuality, int rxQuality) {
for (EventHandler handler : mHandler) {
handler.onNetworkQuality(uid, txQuality, rxQuality);
}
}
@Override
public void onRemoteVideoStats(RemoteVideoStats stats) {
for (EventHandler handler : mHandler) {
handler.onRemoteVideoStats(stats);
}
}
}
I found something about raw data saving, but I want to know whether that can help with saving the video on the server and replaying it.
You can save the videos locally in the app, but you need to access the low-level raw-data APIs in order to do that (https://docs.agora.io/en/Interactive%20Broadcast/raw_data_video_android?platform=Android) and write every videoFrame to a file yourself.
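As a rough illustration of that last step (this is not the Agora API itself; onRawFrame below is a hypothetical hook for whatever raw-data observer you register), the raw YUV bytes of each frame are simply appended to a file, which can later be re-encoded or replayed on the server:
import java.io.FileOutputStream;
import java.io.IOException;

// Hypothetical sink for raw YUV frames delivered by a raw-data callback.
// The resulting .yuv file can be re-encoded later, e.g. with ffmpeg's rawvideo demuxer:
//   ffmpeg -f rawvideo -pix_fmt nv21 -s 1280x720 -r 30 -i capture.yuv out.mp4
public class RawFrameWriter {
    private final FileOutputStream out;

    public RawFrameWriter(String path) throws IOException {
        out = new FileOutputStream(path, /* append = */ true);
    }

    // Call this from the raw-data callback with one frame's byte buffer.
    public void onRawFrame(byte[] yuvBytes) throws IOException {
        out.write(yuvBytes);
    }

    public void close() throws IOException {
        out.close();
    }
}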
Alternatively, if you are looking for a cloud-based solution, Agora offers two options:
1. Cloud Recording: https://docs.agora.io/en/cloud-recording/product_cloud_recording%20?platform=Linux
2. On-premise Recording: https://docs.agora.io/en/Recording/product_recording?platform=Linux
My drone: Matrice 210.
DJI Android SDK 4.7.1
Device: CrystalSky CS785, Android 5.1.1
I need to display the video streams from two cameras at the same time, like the DJI Pilot app does.
My solution:
I created two different DJICodecManager instances and used them in different VideoFeeder callbacks.
DJICodecManager primaryDJICodecManager = new DJICodecManager(Activity,
primarySurfaceTexture,
primarySurfaceTextureWidth,
primarySurfaceTextureHeight);
DJICodecManager secondaryDJICodecManager = new DJICodecManager(Activity,
secondarySurfaceTexture,
secondarySurfaceTextureWidth,
secondarySurfaceTextureHeight);
primarySurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
@Override
public void onFrameAvailable(SurfaceTexture surfaceTexture) {
surfaceTexture.updateTexImage();
}
});
secondarySurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
@Override
public void onFrameAvailable(SurfaceTexture surfaceTexture) {
surfaceTexture.updateTexImage();
}
});
VideoFeeder.VideoFeed videoFeed = VideoFeeder.getInstance().getPrimaryVideoFeed();
VideoFeeder.VideoFeed secondaryVideoFeed = VideoFeeder.getInstance().getSecondaryVideoFeed();
secondaryVideoFeed.setCallback(new VideoFeeder.VideoDataCallback() {
@Override
public void onReceive(byte[] videoBuffer, int size) {
if (secondaryDJICodecManager != null) {
secondaryDJICodecManager.sendDataToDecoder(videoBuffer, size);
}
}
});
videoFeed.setCallback(new VideoFeeder.VideoDataCallback() {
@Override
public void onReceive(byte[] videoBuffer, int size) {
if (primaryDJICodecManager != null) {
primaryDJICodecManager.sendDataToDecoder(videoBuffer, size);
}
}
});
But the primarySurfaceTexture callback does not work. And on the second texture, images from the different cameras (color and grayscale; I use a thermal imaging camera) appear alternately, but most of the time the texture is green.
Is it possible to create and use two DJICodecManager instances?
And if not, how can I show both video streams simultaneously?
DJI support answered me: to use two DJICodecManagers, you must use the other constructor:
primaryDJICodecManager = new DJICodecManager(Activity,
djiSdkWrapper.getSurfaceTexture(),
djiSdkWrapper.getSurfaceTextureWidth(),
djiSdkWrapper.getSurfaceTextureHeight(),
videoStreamSource);
where videoStreamSource is one of these:
UsbAccessoryService.VideoStreamSource.Camera
UsbAccessoryService.VideoStreamSource.Fpv
UsbAccessoryService.VideoStreamSource.SecondaryCamera
And when you send data for decoding, you must use another sendDataToDecoder method:
primaryDJICodecManager.sendDataToDecoder(array, size, index);
where index is one of these:
UsbAccessoryService.VideoStreamSource.Camera.getIndex()
UsbAccessoryService.VideoStreamSource.Fpv.getIndex()
UsbAccessoryService.VideoStreamSource.SecondaryCamera.getIndex()
in accordance with what you specified when creating the DJICodecManager.
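Based on that answer, a sketch of wiring both feeds could look like the following (activity, width, height and the surface textures are placeholders from the question, and the exact constructor and enum availability depend on the DJI SDK version):
DJICodecManager primaryCodec = new DJICodecManager(activity,
        primarySurfaceTexture, width, height,
        UsbAccessoryService.VideoStreamSource.Camera);
DJICodecManager secondaryCodec = new DJICodecManager(activity,
        secondarySurfaceTexture, width, height,
        UsbAccessoryService.VideoStreamSource.SecondaryCamera);
// Each feed sends its data to its own decoder, tagged with the matching source index.
VideoFeeder.getInstance().getPrimaryVideoFeed().setCallback(new VideoFeeder.VideoDataCallback() {
    @Override
    public void onReceive(byte[] videoBuffer, int size) {
        primaryCodec.sendDataToDecoder(videoBuffer, size,
                UsbAccessoryService.VideoStreamSource.Camera.getIndex());
    }
});
VideoFeeder.getInstance().getSecondaryVideoFeed().setCallback(new VideoFeeder.VideoDataCallback() {
    @Override
    public void onReceive(byte[] videoBuffer, int size) {
        secondaryCodec.sendDataToDecoder(videoBuffer, size,
                UsbAccessoryService.VideoStreamSource.SecondaryCamera.getIndex());
    }
});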
I am developing a mobile app in Xamarin for Android and I am trying to use the Camera2 API. Everything looks fine, but the line below causes a type-conversion problem. It says (Java.Lang.Object -> Android.Hardware.Camera2.Params.Face[]). The line works in Android Studio but not in C#.
This is the code I use in Xamarin (based on the sample below); other than face recognition, all the built requests work fine.
https://github.com/xamarin/monodroid-samples/tree/master/android5.0/Camera2Basic
Face[] faces = result.Get(CaptureResult.StatisticsFaces);
public class CameraCaptureListener : CameraCaptureSession.CaptureCallback
{
public FaceTrainActivityFragment Owner { get; set; }
public File File { get; set; }
public override void OnCaptureCompleted(CameraCaptureSession session, CaptureRequest request, TotalCaptureResult result)
{
Process(result);
}
public override void OnCaptureProgressed(CameraCaptureSession session, CaptureRequest request, CaptureResult partialResult)
{
Process(partialResult);
}
private void Process(CaptureResult result)
{
switch (Owner.mState)
{
case FaceTrainActivityFragment.STATE_PREVIEW:
{
if (result.Get(CaptureResult.StatisticsFaces) != null) {
//Face[] faces = result.Get(CaptureResult.StatisticsFaces);
//Face[] faces = (Face[])result.Get(CaptureResult.StatisticsFaces);
}
break;
}
}
}
}
It does not let me compile; even when I use a hard cast to (Face[]), it gives me the same Java.Lang.Object error.
public void CreateCameraPreviewSession()
{
try
{
SurfaceTexture texture = mTextureView.SurfaceTexture;
if (texture == null)
{
throw new IllegalStateException("texture is null");
}
if (null == mCameraDevice) {
return;
}
// We configure the size of default buffer to be the size of camera preview we want.
texture.SetDefaultBufferSize(mPreviewSize.Width, mPreviewSize.Height);
// This is the output Surface we need to start preview.
Surface surface = new Surface(texture);
// We set up a CaptureRequest.Builder with the output Surface.
mPreviewRequestBuilder = mCameraDevice.CreateCaptureRequest(CameraTemplate.Preview);
mPreviewRequestBuilder.AddTarget(surface);
// Here, we create a CameraCaptureSession for camera preview.
List<Surface> surfaces = new List<Surface>();
surfaces.Add(surface);
//surfaces.Add(mImageReader.Surface);
setFaceDetect(mPreviewRequestBuilder, mFaceDetectMode);
mCameraDevice.CreateCaptureSession(surfaces, new CameraCaptureSessionCallback(this), null);
}
catch (CameraAccessException e)
{
e.PrintStackTrace();
}
}
and I am calling CreateCameraPreviewSession inside the camera state listener like this:
public class CameraStateListener : CameraDevice.StateCallback
{
public FaceTrainActivityFragment owner;
public override void OnOpened(CameraDevice cameraDevice)
{
// This method is called when the camera is opened. We start camera preview here.
owner.mCameraOpenCloseLock.Release();
owner.mCameraDevice = cameraDevice;
owner.CreateCameraPreviewSession();
}
It says (Java.Lang.Object -> Android.Hardware.Camera.Params.Face[]). This line works in Android Studio but not in C#.
From the error you are getting, you are probably using the wrong namespace for Face. Instead of Android.Hardware.Camera.Params.Face, please use Android.Hardware.Camera2.Params.Face.
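For comparison, the plain Android (Java) line the question refers to needs no cast at all, because CaptureResult.STATISTICS_FACES is already typed as a Key<Face[]>:
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.params.Face;

// Java counterpart of the C# line in question; get() returns Face[] directly,
// so no cast is required in Android Studio.
class FaceResultHelper {
    static void logDetectedFaces(CaptureResult result) {
        Face[] faces = result.get(CaptureResult.STATISTICS_FACES);
        if (faces != null && faces.length > 0) {
            // faces[0].getBounds() is the face rectangle in sensor coordinates
        }
    }
}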
I am using the following code snippet to find similar faces via an async task:
class DetectionTask extends AsyncTask<InputStream, String, Face[]> {
private boolean mSucceed = true;
int mRequestCode;
DetectionTask(int requestCode) {
mRequestCode = requestCode;
}
@Override
protected Face[] doInBackground(InputStream... params) {
FaceServiceClient faceServiceClient = SampleApp.getFaceServiceClient();
try{
publishProgress("Detecting...");
// Start detection.
return faceServiceClient.detect(
params[0], /* Input stream of image to detect */
true, /* Whether to return face ID */
false, /* Whether to return face landmarks */
/* Which face attributes to analyze, currently we support:
age,gender,headPose,smile,facialHair */
null);
} catch (Exception e) {
mSucceed = false;
publishProgress(e.getMessage());
addLog(e.getMessage());
return null;
}
}
I am using the following class to create the Face service client. This is my code for SampleApp:
public class SampleApp extends Application {
@Override
public void onCreate() {
super.onCreate();
sFaceServiceClient = new FaceServiceRestClient(getString(R.string.subscription_key));
}
public static FaceServiceClient getFaceServiceClient() {
return sFaceServiceClient;
}
private static FaceServiceClient sFaceServiceClient;
}
While debugging I found that sFaceServiceClient is always null when calling SampleApp.getFaceServiceClient(), so I never get a response because of the null object. I tried using different keys from multiple accounts, but it did not help; the same issue still persists. Any help would be appreciated.
I am trying to get the camera feed from the DJI drone and use OpenCV with it. The problem is how to point OpenCV at the video preview that the DJI drone streams while it is active. The drone streams video to my phone fine, but when I try to use my OpenCV code to grab the video previewer's view ID from the layout in my Android Studio project, the app crashes every time I open the camera-view part of the app. Here is the code I use to bind the OpenCV object to the video previewer fed by the DJI camera.
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
setContentView(R.layout.activity_main);
openCvCameraView = (JavaCameraView)findViewById(R.id.video_previewer_surface);
openCvCameraView.setVisibility(SurfaceView.VISIBLE);
openCvCameraView.setCvCameraViewListener(this);
initUI();
// The callback for receiving the raw H264 video data for camera live view
mReceivedVideoDataCallBack = new CameraReceivedVideoDataCallback() {
@Override
public void onResult(byte[] videoBuffer, int size) {
if(mCodecManager != null){
// Send the raw H264 video data to codec manager for decoding
mCodecManager.sendDataToDecoder(videoBuffer, size);
}else {
Log.e(TAG, "mCodecManager is null");
}
}
};
DJICamera camera = FPVDemoApplication.getCameraInstance();
if (camera != null) {
camera.setDJICameraUpdatedSystemStateCallback(new DJICamera.CameraUpdatedSystemStateCallback() {
@Override
public void onResult(CameraSystemState cameraSystemState) {
if (null != cameraSystemState) {
int recordTime = cameraSystemState.getCurrentVideoRecordingTimeInSeconds();
int minutes = (recordTime % 3600) / 60;
int seconds = recordTime % 60;
final String timeString = String.format("%02d:%02d", minutes, seconds);
final boolean isVideoRecording = cameraSystemState.isRecording();
MainActivity.this.runOnUiThread(new Runnable() {
@Override
public void run() {
recordingTime.setText(timeString);
/*
* Update recordingTime TextView visibility and mRecordBtn's check state
*/
if (isVideoRecording){
recordingTime.setVisibility(View.VISIBLE);
}else
{
recordingTime.setVisibility(View.INVISIBLE);
}
}
});
}
}
});
}
}
It may be because you are using JavaCameraView, which, according to this post (What is the difference between `opencv.android.JavaCameraView` and `opencv.android.NativeCameraView`):
The org.opencv.android.JavaCameraView class is implemented inside OpenCV library. It is inherited from CameraBridgeViewBase, that extends SurfaceView and uses standard Android camera API.
You are using the video feed from the DJI SDK and not the phone's hardware camera, so that may explain the crash: when you invoke OpenCV, it conflicts with the incoming feed.
As I don't have a drone, my only suggestion is to look at the other DJI sample on video stream decoding:
https://github.com/DJI-Mobile-SDK-Tutorials/Android-VideoStreamDecodingSample
Instead of decoding the stream, send the data to OpenCV, perhaps in JNI (C/C++).
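As a rough illustration of that suggestion (the frame source here is hypothetical, since the exact DJI decoding callback depends on the SDK version, and it assumes the OpenCV native library has already been loaded, e.g. via OpenCVLoader.initDebug()), converting one decoded NV21 frame into an OpenCV Mat on the Java side could look like this:
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

// Hypothetical handler called with one decoded NV21 frame from the drone feed.
public class DroneFrameProcessor {

    // Wraps the raw YUV bytes in a Mat and converts them to RGBA for OpenCV processing.
    public Mat onYuvFrame(byte[] nv21, int width, int height) {
        Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
        yuv.put(0, 0, nv21);

        Mat rgba = new Mat();
        Imgproc.cvtColor(yuv, rgba, Imgproc.COLOR_YUV2RGBA_NV21);
        yuv.release();

        // ... run detection / tracking on rgba here ...
        return rgba;
    }
}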