Switch camera (back/front) in Android WebRTC

I have used the libjingle library for a WebRTC Android application and have successfully implemented two-way audio/video streaming.
Until now I was using the front camera for video streaming, but now I want to give users the option to select the front or back camera.
How can I achieve this? I have no idea how to approach it.
I have tried VideoCapturerAndroid's switchCamera method, but it is not working.
If anyone knows how to do this, please help me out.
Thanks in advance.

You need to use the same videoCapturer object that was created during the initial MediaStream creation.
CameraVideoCapturer cameraVideoCapturer = (CameraVideoCapturer) videoCapturer;
cameraVideoCapturer.switchCamera(null);
AppRTC Reference
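If you also need to know when the switch has finished (for example, to mirror the local preview only for the front camera), you can pass a CameraSwitchHandler instead of null. A minimal sketch, assuming localVideoView is your SurfaceViewRenderer:
CameraVideoCapturer cameraVideoCapturer = (CameraVideoCapturer) videoCapturer;
cameraVideoCapturer.switchCamera(new CameraVideoCapturer.CameraSwitchHandler() {
    @Override
    public void onCameraSwitchDone(boolean isFrontCamera) {
        // Mirror the local preview only while the front camera is active.
        localVideoView.setMirror(isFrontCamera);
    }

    @Override
    public void onCameraSwitchError(String errorDescription) {
        Log.e(TAG, "Failed to switch camera: " + errorDescription);
    }
});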

Using this version: org.webrtc:google-webrtc:1.0.22672
Create the VideoCapturer with this method:
VideoCapturer videoCaptor = createCameraCaptor(new Camera1Enumerator(false));
The trick is in isBackFacing(...) / isFrontFacing(...):
private VideoCapturer createCameraCaptor(CameraEnumerator enumerator) {
    final String[] deviceNames = enumerator.getDeviceNames();
    // First, try to find a back facing camera
    Logging.d(TAG, "Looking for back facing cameras.");
    for (String deviceName : deviceNames) {
        if (enumerator.isBackFacing(deviceName)) {
            Logging.d(TAG, "Creating back facing camera captor.");
            VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
            if (videoCapturer != null) {
                return videoCapturer;
            }
        }
    }
    // Back facing camera not found, try something else
    Logging.d(TAG, "Looking for other cameras.");
    for (String deviceName : deviceNames) {
        if (!enumerator.isBackFacing(deviceName)) {
            Logging.d(TAG, "Creating other camera captor.");
            VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
            if (videoCapturer != null) {
                return videoCapturer;
            }
        }
    }
    return null;
}
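For completeness, a rough sketch of how the capturer returned above is typically wired up with this library version (the same pattern appears in the screen-capture question further down; the resolution, frame rate, peerConnectionFactory and localVideoView names are assumptions you would replace with your own):
VideoCapturer videoCapturer = createCameraCaptor(new Camera1Enumerator(false));
VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturer);
VideoTrack localVideoTrack = peerConnectionFactory.createVideoTrack("video_stream", videoSource);
videoCapturer.startCapture(1280, 720, 30);
localVideoTrack.addSink(localVideoView);   // localVideoView: a SurfaceViewRenderer, as in the question below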

Here is an example using libjingle.
If you want to switch between the front and rear camera, you will need to get the name of the device you want to use. This can be done with VideoCapturerAndroid.getNameOfFrontFacingDevice() or VideoCapturerAndroid.getNameOfBackFacingDevice(), depending on whether you want the front or rear camera.
Here's a simple example of how to get the correct VideoCapturer using io.pristine.libjingle:9127
private VideoCapturer getCameraCapturer(boolean useFrontCamera) {
    String deviceName = useFrontCamera
            ? VideoCapturerAndroid.getNameOfFrontFacingDevice()
            : VideoCapturerAndroid.getNameOfBackFacingDevice();
    return VideoCapturerAndroid.create(deviceName);
}
If you're using a different version of libjingle, or this doesn't work for any reason, let me know and I'll be happy to help!
Cheers,

Create a new video capturer and start it. Don't forget to stop the old one first.
fun switchCamera() {
    cameraFacingFront = !cameraFacingFront
    try {
        videoCapturer!!.stopCapture()
    } catch (e: InterruptedException) {
        // Ignore: the old capturer is being discarded anyway.
    }
    videoCapturer = createVideoCapturer(cameraFacingFront)
    videoCapturer!!.initialize(
        surfaceTextureHelper,
        activity,
        videoSource!!.getCapturerObserver()
    )
    videoCapturer!!.startCapture(
        VIDEO_SIZE_WIDTH,
        VIDEO_SIZE_HEIGHT,
        VIDEO_FPS
    )
}
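The createVideoCapturer(...) helper called above is not shown in the answer. A minimal sketch of what it could look like (written in Java to match the other answers here, reusing the enumerator approach from the earlier answer; the Camera1Enumerator choice is an assumption):
// Hypothetical helper matching the call above: picks the first camera with the requested facing.
private VideoCapturer createVideoCapturer(boolean frontFacing) {
    CameraEnumerator enumerator = new Camera1Enumerator(false);
    for (String deviceName : enumerator.getDeviceNames()) {
        boolean matches = frontFacing
                ? enumerator.isFrontFacing(deviceName)
                : enumerator.isBackFacing(deviceName);
        if (matches) {
            VideoCapturer capturer = enumerator.createCapturer(deviceName, null);
            if (capturer != null) {
                return capturer;
            }
        }
    }
    return null;
}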

Related

ExoPlayer - How to check if an MP4 video has audio?

I'm using URLs from an API. Some of the URLs are MP4s without sound (the video plays, just with no sound). How do I check whether a video has sound or not? I've been searching through the SimpleExoPlayer docs and testing the methods on my URLs
https://exoplayer.dev/doc/reference/com/google/android/exoplayer2/SimpleExoPlayer.html for the past couple of hours,
but I can't figure out how to check if the playing video has sound or not.
I tried all the methods in getAudioAttributes() and getAudioComponents(), and have now just tried getAudioFormat(), but they all return null.
try {
    Log.d(TAG, "onCreateView: " + player.getAudioFormat().channelCount);
} catch (Exception e) {
    Log.d(TAG, "onCreateView: " + e);
}
And yes, I've made sure the links actually have audio.
You can track the current tracks with Player.EventListener#onTracksChanged and get the current ones with Player#getCurrentTrackGroups(). If you go through the track groups, you can look at each track's type; if you find an AUDIO type there, your video file contains an audio track.
If you additionally want to check whether any of the audio tracks was actually selected, then Player#getCurrentTrackSelections() is the place to look.
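A rough sketch of that second check, assuming an ExoPlayer 2.x player and the TrackSelection/TrackSelectionArray classes from com.google.android.exoplayer2.trackselection (method names can differ slightly between ExoPlayer versions):
// Returns true if any of the currently selected tracks is an audio track.
private boolean hasSelectedAudioTrack(Player player) {
    TrackSelectionArray selections = player.getCurrentTrackSelections();
    for (int i = 0; i < selections.length; i++) {
        TrackSelection selection = selections.get(i);
        if (selection == null) {
            continue;   // no track selected for this renderer
        }
        for (int j = 0; j < selection.length(); j++) {
            String sampleMimeType = selection.getFormat(j).sampleMimeType;
            if (sampleMimeType != null && sampleMimeType.contains("audio")) {
                return true;
            }
        }
    }
    return false;
}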
To complete Hamza Khan's answer, here is my code to check whether the loaded video has any audio:
override fun onTracksChanged(
    trackGroups: TrackGroupArray?,
    trackSelections: TrackSelectionArray?
) {
    if (trackGroups != null && !trackGroups.isEmpty) {
        for (arrayIndex in 0 until trackGroups.length) {
            for (groupIndex in 0 until trackGroups[arrayIndex].length) {
                val sampleMimeType = trackGroups[arrayIndex].getFormat(groupIndex).sampleMimeType
                if (sampleMimeType != null && sampleMimeType.contains("audio")) {
                    // video contains audio
                }
            }
        }
    }
}
player.addListener(new Player.EventListener() {
    @Override
    public void onTracksChanged(TrackGroupArray trackGroups, TrackSelectionArray trackSelections) {
        if (trackGroups != null && !trackGroups.isEmpty()) {
            for (int i = 0; i < trackGroups.length; i++) {
                for (int g = 0; g < trackGroups.get(i).length; g++) {
                    String sampleMimeType = trackGroups.get(i).getFormat(g).sampleMimeType;
                    if (sampleMimeType != null && sampleMimeType.contains("audio")) {
                        // video contains audio
                    }
                }
            }
        }
    }
});
Java version of mrj's answer above.
Came across this thread, which helped me a lot!

On Android - how to capture image/video from the wide angle camera?

How do I capture images or videos from the Camera2 API wide-angle camera,
or the telephoto camera?
I know how to handle camera capture for the front and back cameras.
I just can't understand how to open the camera and choose the wide-angle/telephoto camera.
I guess it has something to do with one of the following:
CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA
CameraCharacteristics.getPhysicalCameraIds()
CameraCharacteristics.getAvailablePhysicalCameraRequestKeys()
CameraDevice.createCaptureSession(SessionConfiguration config)
CameraCharacteristics.LOGICAL_MULTI_CAMERA_SENSOR_SYNC_TYPE
But I fail to understand how to set this up, and I didn't find any good explanation.
I would appreciate any kind of tutorial or explanation.
Last question: how do I test this without a physical device? I mean, how do I set up the AVD/emulator?
So I asked this on the CameraX discussion group, and here is the reply from Google:
For CameraX to support wide angle cameras we are working with manufacturers to expose those via camera2 first. Some devices indeed do so today in a non-deterministic manner. Will keep you posted as we progress, thanks!
So, if somebody is still looking for the answer:
Almost no manufacturer supported this before Android 10. From Android 10 on, the physical cameras are exposed as logical cameras, which means you can see them in
manager.getCameraIdList()
You will get a list of all available cameras; just look at the CameraCharacteristics.LENS_FACING direction and populate a list.
Here is the full code:
public CameraItem[] GetCameraListFirstTime() {
    List<CameraItem> listValuesItems = new ArrayList<CameraItem>();
    boolean IsMain = false;
    boolean IsSelfie = false;
    if (manager == null)
        manager = (CameraManager) mContext.getSystemService(CAMERA_SERVICE);
    try {
        for (String cameraId : manager.getCameraIdList()) {
            CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
            // Compare against the Camera2 LENS_FACING constants, not the old Camera1 CameraInfo values.
            Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
            if (!IsMain && facing != null && facing == CameraMetadata.LENS_FACING_BACK) {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Main"));
                IsMain = true;
            } else if (!IsSelfie && facing != null && facing == CameraMetadata.LENS_FACING_FRONT) {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Selfie"));
                IsSelfie = true;
            } else {
                listValuesItems.add(new CameraItem(Integer.parseInt(cameraId), "Wide or Other"));
            }
        }
    } catch (CameraAccessException e) {
        Log.e(TAG, e.toString());
    }
    return listValuesItems.toArray(new CameraItem[0]);
}

public class CameraItem implements java.io.Serializable {
    public int Key;
    public String Description;

    public CameraItem(int key, String desc) {
        Key = key;
        Description = desc;
    }
}

WebRTC cannot record screen

I'm trying to make a screen sharing app using WebRTC. I have code that can get and share a video stream from the camera. I need to modify it to instead get video via the MediaProjection API. Based on this post I have modified my code to use org.webrtc.ScreenCapturerAndroid, but there is no video output shown, only a black screen. If I use the camera, everything works fine (I can see the camera output on screen). Could someone please check my code and maybe point me in the right direction? I have been stuck on this for three days already.
Here is my code:
public class MainActivity extends AppCompatActivity {
    private static final String TAG = "VIDEO_CAPTURE";
    private static final int CAPTURE_PERMISSION_REQUEST_CODE = 1;
    private static final String VIDEO_TRACK_ID = "video_stream";

    PeerConnectionFactory peerConnectionFactory;
    SurfaceViewRenderer localVideoView;
    ProxyVideoSink localSink;
    VideoSource videoSource;
    VideoTrack localVideoTrack;
    EglBase rootEglBase;
    boolean camera = false;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        rootEglBase = EglBase.create();
        localVideoView = findViewById(R.id.local_gl_surface_view);
        localVideoView.init(rootEglBase.getEglBaseContext(), null);
        startScreenCapture();
    }

    @TargetApi(21)
    private void startScreenCapture() {
        MediaProjectionManager mMediaProjectionManager = (MediaProjectionManager) getApplication().getSystemService(Context.MEDIA_PROJECTION_SERVICE);
        startActivityForResult(mMediaProjectionManager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE);
    }

    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode != CAPTURE_PERMISSION_REQUEST_CODE) { return; }
        start(data);
    }

    private void start(Intent permissionData) {
        // Initialize PeerConnectionFactory globals.
        PeerConnectionFactory.InitializationOptions initializationOptions =
                PeerConnectionFactory.InitializationOptions.builder(this)
                        .setEnableVideoHwAcceleration(true)
                        .createInitializationOptions();
        PeerConnectionFactory.initialize(initializationOptions);
        // Create a new PeerConnectionFactory instance - using hardware encoder and decoder.
        PeerConnectionFactory.Options options = new PeerConnectionFactory.Options();
        DefaultVideoEncoderFactory defaultVideoEncoderFactory = new DefaultVideoEncoderFactory(
                rootEglBase.getEglBaseContext(), true, true);
        DefaultVideoDecoderFactory defaultVideoDecoderFactory = new DefaultVideoDecoderFactory(rootEglBase.getEglBaseContext());
        peerConnectionFactory = PeerConnectionFactory.builder()
                .setOptions(options)
                .setVideoDecoderFactory(defaultVideoDecoderFactory)
                .setVideoEncoderFactory(defaultVideoEncoderFactory)
                .createPeerConnectionFactory();
        VideoCapturer videoCapturerAndroid;
        if (camera) {
            videoCapturerAndroid = createCameraCapturer(new Camera1Enumerator(false));
        } else {
            videoCapturerAndroid = new ScreenCapturerAndroid(permissionData, new MediaProjection.Callback() {
                @Override
                public void onStop() {
                    super.onStop();
                    Log.e(TAG, "user has revoked permissions");
                }
            });
        }
        videoSource = peerConnectionFactory.createVideoSource(videoCapturerAndroid);
        DisplayMetrics metrics = new DisplayMetrics();
        MainActivity.this.getWindowManager().getDefaultDisplay().getRealMetrics(metrics);
        videoCapturerAndroid.startCapture(metrics.widthPixels, metrics.heightPixels, 30);
        localVideoTrack = peerConnectionFactory.createVideoTrack(VIDEO_TRACK_ID, videoSource);
        localVideoTrack.setEnabled(true);
        //localVideoTrack.addRenderer(new VideoRenderer(localRenderer));
        localSink = new ProxyVideoSink().setTarget(localVideoView);
        localVideoTrack.addSink(localSink);
    }

    // Find the first camera; this works without problems.
    private VideoCapturer createCameraCapturer(CameraEnumerator enumerator) {
        final String[] deviceNames = enumerator.getDeviceNames();
        // First, try to find a front facing camera
        Logging.d(TAG, "Looking for front facing cameras.");
        for (String deviceName : deviceNames) {
            if (enumerator.isFrontFacing(deviceName)) {
                Logging.d(TAG, "Creating front facing camera capturer.");
                VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
                if (videoCapturer != null) {
                    return videoCapturer;
                }
            }
        }
        // Front facing camera not found, try something else
        Logging.d(TAG, "Looking for other cameras.");
        for (String deviceName : deviceNames) {
            if (!enumerator.isFrontFacing(deviceName)) {
                Logging.d(TAG, "Creating other camera capturer.");
                VideoCapturer videoCapturer = enumerator.createCapturer(deviceName, null);
                if (videoCapturer != null) {
                    return videoCapturer;
                }
            }
        }
        return null;
    }
}
ProxyVideoSink
public class ProxyVideoSink implements VideoSink {
    private VideoSink target;

    synchronized ProxyVideoSink setTarget(VideoSink target) { this.target = target; return this; }

    @Override
    public void onFrame(VideoFrame videoFrame) {
        if (target == null) {
            Log.w("VideoSink", "Dropping frame in proxy because target is null.");
            return;
        }
        target.onFrame(videoFrame);
    }
}
In logcat I can see that some frames are rendered, but nothing is shown (black screen).
06-18 17:42:44.750 11357-11388/com.archona.webrtcscreencapturetest I/org.webrtc.Logging: EglRenderer: local_gl_surface_viewDuration: 4000 ms. Frames received: 117. Dropped: 0. Rendered: 117. Render fps: 29.2. Average render time: 4754 μs. Average swapBuffer time: 2913 μs.
06-18 17:42:48.752 11357-11388/com.archona.webrtcscreencapturetest I/org.webrtc.Logging: EglRenderer: local_gl_surface_viewDuration: 4001 ms. Frames received: 118. Dropped: 0. Rendered: 118. Render fps: 29.5. Average render time: 5015 μs. Average swapBuffer time: 3090 μs.
I'm using the latest version of the WebRTC library: implementation 'org.webrtc:google-webrtc:1.0.23546'.
My device has API level 24 (Android 7.0), but I have tested this code on 3 different devices with different API levels, so I don't suspect a device-specific problem.
I have tried building another app that uses the MediaProjection API (without WebRTC) and I can see correct output inside a SurfaceView.
I have tried downgrading the WebRTC library, but nothing seems to work.
Thanks for any help.
I faced the same issue using the WebRTC library org.webrtc:google-webrtc:1.0.22672 on an Android 7.0 device. Video calls were working fine; the issue was with screen sharing, which always showed a black screen.
Then I added the following:
peerConnectionFactory.setVideoHwAccelerationOptions(rootEglBase.getEglBaseContext(), rootEglBase.getEglBaseContext());
Now it is working perfectly.
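For context, a sketch of where that call would sit relative to the question's start() method, assuming the same rootEglBase and peerConnectionFactory fields (setVideoHwAccelerationOptions exists on these older 1.0.2x releases but has since been removed from newer versions of the library):
peerConnectionFactory = PeerConnectionFactory.builder()
        .setOptions(options)
        .setVideoDecoderFactory(defaultVideoDecoderFactory)
        .setVideoEncoderFactory(defaultVideoEncoderFactory)
        .createPeerConnectionFactory();
// Hand the shared EGL context to the factory so hardware video encode/decode
// and rendering use the same context as the SurfaceViewRenderer.
peerConnectionFactory.setVideoHwAccelerationOptions(
        rootEglBase.getEglBaseContext(), rootEglBase.getEglBaseContext());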

Microblink recognizer set up RegexParserSettings

I am trying to scan an image taken from resources using a Recognizer with a RegexParserSettings inside a fragment. The problem is that the BaseRecognitionResult obtained through the onScanningDone callback is always null. I have tried to set up the RecognitionSettings with MRTDRecognizer and it worked fine, so I think the library is properly integrated. This is the source code that I am using:
@Override
public void onAttach(Context context) {
    ...
    try {
        mRecognizer = Recognizer.getSingletonInstance();
        mRecognizer.setLicenseKey(context, LICENSE_KEY);
    } catch (FeatureNotSupportedException | InvalidLicenceKeyException e) {
        Log.d(TAG, e.getMessage());
    }
    buildRecognitionSettings();
    mRecognizer.initialize(context, mRecognitionSettings, new DirectApiErrorListener() {
        @Override
        public void onRecognizerError(Throwable t) {
            // Handle exception
        }
    });
}

private void buildRecognitionSettings() {
    mRecognitionSettings = new RecognitionSettings();
    mRecognitionSettings.setRecognizerSettingsArray(setupSettingsArray());
}

private RecognizerSettings[] setupSettingsArray() {
    RegexParserSettings regexParserSettings = new RegexParserSettings("[A-Z0-9]{17}");
    BlinkOCRRecognizerSettings sett = new BlinkOCRRecognizerSettings();
    sett.addParser("myRegexParser", regexParserSettings);
    return new RecognizerSettings[] { sett };
}
I scan the image like:
mRecognizer.recognizeBitmap(bitmap, Orientation.ORIENTATION_PORTRAIT, FragMicoblink.this);
And this is the callback handled in the fragment
@Override
public void onScanningDone(RecognitionResults results) {
    BaseRecognitionResult[] dataArray = results.getRecognitionResults();
    // dataArray is null
    for (BaseRecognitionResult baseResult : dataArray) {
        if (baseResult instanceof BlinkOCRRecognitionResult) {
            BlinkOCRRecognitionResult result = (BlinkOCRRecognitionResult) baseResult;
            if (result.isValid() && !result.isEmpty()) {
                String parsedAmount = result.getParsedResult("myRegexParser");
                if (parsedAmount != null && !parsedAmount.isEmpty()) {
                    Log.d(TAG, "Result: " + parsedAmount);
                }
            }
        }
    }
}
Thanks in advance!
Hello Spirrow.
The difference between your code and SegmentScanActivity is that your code uses the DirectAPI, which can only process the single bitmap image you send for processing, while SegmentScanActivity processes camera frames as they arrive from the camera. While doing so, it can use time-redundant information to improve OCR quality, i.e. it combines consecutive OCR results from multiple video frames to obtain a better-quality OCR result.
This feature is not available via the DirectAPI - you need to use either SegmentScanActivity, or a custom scan activity with our camera management.
You can also find out more here:
https://github.com/BlinkID/blinkid-android/issues/54
Regards

Android native webrtc: add video after already connected

I have successfully been running WebRTC in my Android app for a while, using libjingle.so and PeerConnectionClient.java, etc., from Google's code library. However, I am now running into a problem where a user starts a connection as audio only (i.e., an audio call), but then toggles video on. I augmented the existing setVideoEnabled() in PeerConnectionClient as follows:
public void setVideoEnabled(final boolean enable) {
    executor.execute(new Runnable() {
        @Override
        public void run() {
            renderVideo = enable;
            if (localVideoTrack != null) {
                localVideoTrack.setEnabled(renderVideo);
            } else {
                if (renderVideo) {
                    //AC: create a video track
                    String cameraDeviceName = VideoCapturerAndroid.getDeviceName(0);
                    String frontCameraDeviceName =
                            VideoCapturerAndroid.getNameOfFrontFacingDevice();
                    if (numberOfCameras > 1 && frontCameraDeviceName != null) {
                        cameraDeviceName = frontCameraDeviceName;
                    }
                    Log.i(TAG, "Opening camera: " + cameraDeviceName);
                    videoCapturer = VideoCapturerAndroid.create(cameraDeviceName);
                    if (createVideoTrack(videoCapturer) != null) {
                        mediaStream.addTrack(localVideoTrack);
                        localVideoTrack.setEnabled(renderVideo);
                        peerConnection.addStream(mediaStream);
                    } else {
                        Log.d(TAG, "Local video track is still null");
                    }
                } else {
                    Log.d(TAG, "Local video track is null");
                }
            }
            if (remoteVideoTrack != null) {
                remoteVideoTrack.setEnabled(renderVideo);
            } else {
                Log.d(TAG, "Remote video track is null");
            }
        }
    });
}
This allows me to successfully see a local inset of the device's video camera, but it doesn't send the video to the remote client. I thought the peerConnection.addStream() call would do that, but perhaps I am missing something else?
To avoid building an external signaling mechanism between peers, where the second peer would have to confirm that the new stream can be added, you can always start with an existing (but initially empty or disabled) video stream. Then it is just a matter of filling this stream with content when (and if) necessary.
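A minimal sketch of that idea against the question's own PeerConnectionClient fields (videoCapturer, mediaStream, localVideoTrack, remoteVideoTrack, createVideoTrack and the executor all come from the question's code; the exact call-setup location is an assumption):
// At call setup time, before the offer is created: always create and add the
// video track, even for an "audio only" call, but keep it disabled.
videoCapturer = VideoCapturerAndroid.create(
        VideoCapturerAndroid.getNameOfFrontFacingDevice());
localVideoTrack = createVideoTrack(videoCapturer);
localVideoTrack.setEnabled(false);          // nothing is rendered or sent yet
mediaStream.addTrack(localVideoTrack);
peerConnection.addStream(mediaStream);      // the stream is negotiated once, up front

// Toggling video later no longer changes the negotiated streams:
public void setVideoEnabled(final boolean enable) {
    executor.execute(new Runnable() {
        @Override
        public void run() {
            renderVideo = enable;
            if (localVideoTrack != null) {
                localVideoTrack.setEnabled(renderVideo);
            }
            if (remoteVideoTrack != null) {
                remoteVideoTrack.setEnabled(renderVideo);
            }
        }
    });
}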
