I have heard about screen sharing on desktop using WebRTC, but for Android there doesn't seem to be much information.
My question is:
Is it possible to use WebRTC for screen sharing on Android? I mean casting the current screen to another phone's screen.
If the answer to 1 is yes, how can I achieve this?
Thanks.
It is possible!
It can be done using the directions below.
I've used ScreenShareRTC in conjunction with ProjectRTC to stream the contents of the screen to a browser with decent quality and fairly low latency ~100ms.
I've added an example below that shows how to configure a screen share as a video source and add it as a track on a stream.
Get the VideoCapturer
@TargetApi(21)
private VideoCapturer createScreenCapturer() {
    if (mMediaProjectionPermissionResultCode != Activity.RESULT_OK) {
        report("User didn't give permission to capture the screen.");
        return null;
    }
    return new ScreenCapturerAndroid(
            mMediaProjectionPermissionResultData, new MediaProjection.Callback() {
                @Override
                public void onStop() {
                    report("User revoked permission to capture the screen.");
                }
            });
}
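The mMediaProjectionPermissionResultCode / mMediaProjectionPermissionResultData fields above are filled in by the MediaProjection permission flow. A minimal sketch of that flow inside the hosting Activity (the request-code constant and method name here are my own, not part of the project above):
private static final int CAPTURE_PERMISSION_REQUEST_CODE = 1; // arbitrary request code

private void startScreenCapturePermissionRequest() {
    MediaProjectionManager manager =
            (MediaProjectionManager) getSystemService(Context.MEDIA_PROJECTION_SERVICE);
    // Shows the system dialog asking the user to allow screen capturing.
    startActivityForResult(manager.createScreenCaptureIntent(), CAPTURE_PERMISSION_REQUEST_CODE);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == CAPTURE_PERMISSION_REQUEST_CODE) {
        // These are the two fields read by createScreenCapturer() above.
        mMediaProjectionPermissionResultCode = resultCode;
        mMediaProjectionPermissionResultData = data;
    }
}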
Initialize the capturer and add the tracks to the local media stream
private void initScreenCapturStream() {
    mLocalMediaStream = factory.createLocalMediaStream("ARDAMS");

    // Constrain the capture to the size/frame rate configured in the peer connection parameters.
    MediaConstraints videoConstraints = new MediaConstraints();
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxHeight", Integer.toString(mPeerConnParams.videoHeight)));
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxWidth", Integer.toString(mPeerConnParams.videoWidth)));
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxFrameRate", Integer.toString(mPeerConnParams.videoFps)));
    videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("minFrameRate", Integer.toString(mPeerConnParams.videoFps)));

    // videoCapturer is the ScreenCapturerAndroid returned by createScreenCapturer() above.
    mVideoSource = factory.createVideoSource(videoCapturer);
    videoCapturer.startCapture(mPeerConnParams.videoWidth, mPeerConnParams.videoHeight, mPeerConnParams.videoFps);

    // Add the screen-share video track to the local stream.
    VideoTrack localVideoTrack = factory.createVideoTrack(VIDEO_TRACK_ID, mVideoSource);
    localVideoTrack.setEnabled(true);
    mLocalMediaStream.addTrack(localVideoTrack);

    // Add a default microphone audio track as well.
    AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
    mLocalMediaStream.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));

    mListener.onStatusChanged("STREAMING");
}
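Once the stream is built, it still needs to be attached to the peer connection so it actually gets sent to the remote peer. With this older, stream-based API that is a single call; the mPeerConnection field name below is an assumption:
// Send the screen-share stream (video + audio tracks) to the remote peer.
mPeerConnection.addStream(mLocalMediaStream);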
For more information, this might be a good place to start. It's an Android project that connects to a ProjectRTC signalling server and shares the screen as video. I found it very helpful!
Android screen sharing project(Android client - Java)
https://github.com/Jeffiano/ScreenShareRTC
ProjectRTC(Node server)
https://github.com/pchab/ProjectRTC
Related
I'm using the webrtc-android-framework module provided by the Antmedia official website. I was able to make a connection and I can see the video published on the other side without any issues. However, I'm unable to switch from camera to screen sharing.
I'm using the code below to switch from camera capture to screen sharing.
public void MakeScreenCaptureReady() {
    final EglBase.Context eglBaseContext = eglBase.getEglBaseContext();
    PeerConnectionFactory peerConnectionFactory = peerConnectionClient.factory;

    // create AudioSource
    AudioSource audioSource = peerConnectionFactory.createAudioSource(new MediaConstraints());
    this.audioTrack = peerConnectionFactory.createAudioTrack("101", audioSource);

    surfaceTextureHelper = SurfaceTextureHelper.create("CaptureThread", eglBaseContext);

    // create VideoCapturer
    videoCapturer = createScreenCapturer();
    VideoSource videoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast());
    localVideoTrack = peerConnectionFactory.createVideoTrack("118", videoSource);
    videoCapturer.initialize(surfaceTextureHelper, context, videoSource.getCapturerObserver());
    videoCapturer.startCapture(720, 1280, 30);

    peerConnectionClient.setLocalVideoTrack(localVideoTrack);
    peerConnectionClient.localVideoSender.setTrack(localVideoTrack, true); // true for taking ownership and replacing the existing track
}
It buffers the screen-sharing video for 2-3 seconds and then stops, throwing a source error at the subscriber's end. Basically, no further chunks are available on the server to buffer.
I have already obtained the required screen-sharing permission before hitting the above code.
startActivityForResult(
    mMediaProjectionManager!!.createScreenCaptureIntent(),
    SCREEN_RECORD_REQUEST_CODE
)
This is the code I'm using to call the above method in onActivityResult:
intent.putExtra(CallActivity.EXTRA_SCREENCAPTURE, true)
webRTCClient.setMediaProjectionParams(resultCode, data)
webRTCClient.MakeScreenCaptureReady()
How do I achieve switching between camera and screen capture? Any help is much appreciated. Thanks!
I am developing a native Android WebRTC client that is supposed to stream audio from a custom device (I am getting the audio stream via Bluetooth from that device). I am using the libjingle library to implement WebRTC, and I wonder if and how it is possible to hook up a custom audio stream to the audio track.
Currently I am adding default audio track like this:
localMS = factory.createLocalMediaStream("ARDAMS");
AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));
I saw that there is WebRtcAudioRecord (https://github.com/pristineio/webrtc-android/blob/master/libjingle_peerconnection/src/main/java/org/webrtc/voiceengine/WebRtcAudioRecord.java) - is it possible to override it?
Anybody tried doing something like that?
Your post led me to the code below; I am going to try it and let you know if I get it to work. I am trying to send one audio stream to the Watson API and one to WebRTC, but Android only allows one InputStream to read from the microphone. I will update you if I get it to work.
private org.webrtc.MediaStream createMediaStream() {
    org.webrtc.MediaStream mediaStream = mFactory.createLocalMediaStream(ARDAMS);

    if (mEnableVideo) {
        mVideoCapturer = createVideoCapturer();
        if (mVideoCapturer != null) {
            mediaStream.addTrack(createVideoTrack(mVideoCapturer));
        } else {
            mEnableVideo = false;
        }
    }

    if (mEnableAudio) {
        createAudioCapturer();
        mediaStream.addTrack(mFactory.createAudioTrack(
                AUDIO_TRACK_ID,
                mFactory.createAudioSource(mAudioConstraints)));
    }
    return mediaStream;
}
/**
 * Creates an instance of WebRtcAudioRecord.
 */
private void createAudioCapturer() {
    if (mOption.getAudioType() == PeerOption.AudioType.EXTERNAL_RESOURCE) {
        WebRtcAudioRecord.setAudioRecordModuleFactory(new WebRtcAudioRecordModuleFactory() {
            @Override
            public WebRtcAudioRecordModule create() {
                AudioCapturerExternalResource module = new AudioCapturerExternalResource();
                module.setUri(mOption.getAudioUri());
                module.setSampleRate(mOption.getAudioSampleRate());
                module.setBitDepth(mOption.getAudioBitDepth());
                module.setChannel(mOption.getAudioChannel());
                return module;
            }
        });
    } else {
        WebRtcAudioRecord.setAudioRecordModuleFactory(null);
    }
}
Source:
https://www.programcreek.com/java-api-examples/?code=DeviceConnect/DeviceConnect-Android/DeviceConnect-Android-master/dConnectDevicePlugin/dConnectDeviceWebRTC/app/src/main/java/org/deviceconnect/android/deviceplugin/webrtc/core/MediaStream.java
Currently, I have a server that streams four RTMP MediaSources: one with a 720p video source, one with a 360p video source, one with a 180p video source, and one audio-only source. If I want to switch resolutions, I have to stop the ExoPlayer instance, prepare the other track I want to switch to, then play.
The code I use to prepare the ExoPlayer instance:
TrackSelection.Factory adaptiveTrackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
TrackSelector trackSelector = new DefaultTrackSelector(adaptiveTrackSelectionFactory);
RtmpDataSourceFactory rtmpDataSourceFactory = new RtmpDataSourceFactory(bandwidthMeter);
ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
factory = new AVControlExtractorMediaSource.Factory(rtmpDataSourceFactory);
factory.setExtractorsFactory(extractorsFactory);
createSource();
//noinspection deprecation
mPlayer = ExoPlayerFactory.newSimpleInstance(mActivity, trackSelector, new DefaultLoadControl(
        new DefaultAllocator(true, C.DEFAULT_BUFFER_SEGMENT_SIZE),
        1000, // min buffer
        2000, // max buffer
        1000, // playback
        1000, // playback after rebuffer
        DefaultLoadControl.DEFAULT_TARGET_BUFFER_BYTES,
        true
));
vwExoPlayer.setPlayer(mPlayer);
mPlayer.addAnalyticsListener(mAnalyticsListener);
With createSource() being:
private void createSource() {
    factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_BOTH_AV);
    mMediaSource180 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_180()));
    mMediaSource180.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource180"));
    mMediaSource360 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_360()));
    mMediaSource360.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource360"));
    mMediaSource720 = factory.createMediaSource(Uri.parse(API.GAME_VIDEO_STREAM_URL_720()));
    mMediaSource720.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSource720"));

    factory.setTrackPlaybackFlag(AVControlExtractorMediaSource.PLAYBACK_AUDIO_ONLY);
    mMediaSourceAudio = factory.createMediaSource(Uri.parse(API.GAME_AUDIO_STREAM_URL()));
    mMediaSourceAudio.addEventListener(getHandler(), new MSourceDebuggerListener("GameMediaSourceAudio"));
}

private void releaseSource() {
    mMediaSource180.releaseSource(null);
    mMediaSource360.releaseSource(null);
    mMediaSource720.releaseSource(null);
    mMediaSourceAudio.releaseSource(null);
}
And the code I currently use to switch between these MediaSources is:
private void changeTrack(MediaSource source) {
    if (currentMediaSource == source) return;
    try {
        this.currentMediaSource = source;
        mPlayer.stop(true);
        mPlayer.prepare(source, true, true);
        mPlayer.setPlayWhenReady(true);
        if (source == mMediaSourceAudio) {
            if (!audioOnly) {
                try {
                    TransitionManager.beginDelayedTransition(rootView);
                } catch (Exception ignored) {
                }
                layAudioOnly.setVisibility(View.VISIBLE);
                vwExoPlayer.setVisibility(View.INVISIBLE);
                audioOnly = true;
                try {
                    GameQnAFragment fragment = findFragment(GameQnAFragment.class);
                    if (fragment != null) {
                        fragment.signAudioOnly();
                    }
                } catch (Exception e) {
                    Trace.e(e);
                }
                try {
                    GamePollingFragment fragment = findFragment(GamePollingFragment.class);
                    if (fragment != null) {
                        fragment.signAudioOnly();
                    }
                } catch (Exception e) {
                    Trace.e(e);
                }
            }
        } else {
            if (audioOnly) {
                TransitionManager.beginDelayedTransition(rootView);
                layAudioOnly.setVisibility(View.GONE);
                vwExoPlayer.setVisibility(View.VISIBLE);
                audioOnly = false;
            }
        }
    } catch (Exception ignore) {
    }
}
I wanted to implement seamless switching between these MediaSources so that I don't need to stop and re-prepare, but it appears that this is not supported by ExoPlayer.
In addition, logging each MediaSource structure with the following code:
MappingTrackSelector.MappedTrackInfo info = ((DefaultTrackSelector) trackSelector).getCurrentMappedTrackInfo();
if (info != null) {
    for (int i = 0; i < info.getRendererCount(); i++) {
        TrackGroupArray trackGroups = info.getTrackGroups(i);
        if (trackGroups.length != 0) {
            for (int j = 0; j < trackGroups.length; j++) {
                TrackGroup tg = trackGroups.get(j);
                for (int k = 0; k < tg.length; k++) {
                    Log.i("track_info_" + i + "-" + j + "-" + k, tg.getFormat(k) + "");
                }
            }
        }
    }
}
Just nets me 1 video format and 1 audio format each.
My current workaround is to prepare another ExoPlayer instance in the background, replace the currently running instance with it once preparation is complete, and release the old instance. That reduces the lag between the MediaSources somewhat, but doesn't come close to the seamless resolution changes of YouTube.
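For reference, that background-swap workaround looks roughly like this (a sketch only; the method and listener wiring are mine, using the same deprecated ExoPlayer 2.x calls as above):
private void swapToSource(MediaSource nextSource) {
    // Prepare a second player in the background while the current one keeps playing.
    final SimpleExoPlayer nextPlayer =
            ExoPlayerFactory.newSimpleInstance(mActivity, new DefaultTrackSelector());
    nextPlayer.prepare(nextSource, true, true);
    nextPlayer.addListener(new Player.DefaultEventListener() {
        @Override
        public void onPlayerStateChanged(boolean playWhenReady, int playbackState) {
            if (playbackState == Player.STATE_READY) {
                nextPlayer.removeListener(this); // only swap once
                SimpleExoPlayer oldPlayer = mPlayer;
                vwExoPlayer.setPlayer(nextPlayer); // hand the view over to the prepared player
                nextPlayer.setPlayWhenReady(true);
                mPlayer = nextPlayer;
                oldPlayer.release();
            }
        }
    });
}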
Should I implement my own TrackSelector that packs all four sources into it, should I implement another MediaSource that handles all four sources, or should I just tell the colleague who maintains the streams to switch to a single RTMP MediaSource with a sort of manifest listing all the available resolutions so that AdaptiveTrackSelection can switch between them?
Adaptive Bit Rate Streaming is designed to allow easy switching between different bit rate streams, but it requires the streams to be segmented and the player to download the video segment by segment.
This way the player can decide which bit rate to choose for the next segment depending on the current network conditions (and the device display size and type). Apart from the difference in bitrate and quality, the player is able to move from one bit rate to another seamlessly.
See here for some more info: https://stackoverflow.com/a/42365034/334402
All the above relies on a delivery protocol which supports this segmentation and different bit rate streams. The most common ones today are HLS and MPEG-DASH.
The easiest way to support what I think you are looking for would be for your colleague who is supplying the stream to supply it using HLS and/or DASH.
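For illustration only, the four streams you describe would map onto an HLS master playlist along these lines (the URIs and bandwidth figures are made up):
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=900000,RESOLUTION=640x360
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=320x180
180p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=96000,CODECS="mp4a.40.2"
audio/index.m3u8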
Note that at the moment, both HLS and DASH are required as apple devices require HLS while other devices tend to default to DASH. Traditionally HLS used TS as the container for the video in the segments and DASH used fragmented MP4, but there is now a move for both to use CMAF, which is essentially fragmented MP4.
So in theory a single set of bit rate videos can now be used for both HLS and DASH. In practice this will depend on whether your content is encrypted, as HLS and Apple used one encryption mode and everyone else another in the past. This is changing too, but it will take time before all devices support the new approach where every device can use the same encryption mode, so if your streams are encrypted this is an added complication at the moment.
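On the player side, once the stream is exposed as HLS, ExoPlayer handles the quality switching for you through the AdaptiveTrackSelection you already create. A minimal sketch, assuming the ExoPlayer HLS extension is added as a dependency and using a hypothetical playlist URL:
// Sketch only: the user agent string and master playlist URL below are placeholders.
DataSource.Factory dataSourceFactory = new DefaultHttpDataSourceFactory("my-app");
MediaSource hlsSource = new HlsMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("https://example.com/game/master.m3u8"));
mPlayer.prepare(hlsSource);
mPlayer.setPlayWhenReady(true);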
I've been working on an Android application that shows live streaming video via RTSP.
Assume I have a well-functioning RTSP server that sends H.264 packets, and that to view the stream we should connect to rtsp://1.2.3.4:5555/stream
So I tried to use the native MediaPlayer/VideoView, but no luck (the video got stuck after 2-3 seconds of playback), so I loaded mrmaffen's vlc-android-sdk (can be found here) and used the following code:
ArrayList<String> options = new ArrayList<String>();
options.add("--no-drop-late-frames");
options.add("--no-skip-frames");
options.add("-vvv");
videoVlc = new LibVLC(options);
newVideoMediaPlayer = new org.videolan.libvlc.MediaPlayer(videoVlc);
final IVLCVout vOut = newVideoMediaPlayer.getVLCVout();
vOut.addCallback(this);
vOut.setVideoView(videoView); //videoView is a pre-defined view which is part of the layout
vOut.attachViews();
newVideoMediaPlayer.setEventListener(this);
Media videoMedia = new Media (videoVlc, Uri.parse(mVideoPath));
newVideoMediaPlayer.setMedia(videoMedia);
newVideoMediaPlayer.play();
The problem is that I see a blank screen.
Keep in mind that when I use an RTSP link with an audio stream only, it works fine.
Is someone familiar with this SDK and has an idea about this issue?
Thanks in advance
Try adding this option:
--rtsp-tcp
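In the code from the question, that just means adding the flag to the options list before creating the LibVLC instance:
ArrayList<String> options = new ArrayList<String>();
options.add("--no-drop-late-frames");
options.add("--no-skip-frames");
options.add("--rtsp-tcp");   // force RTSP over TCP instead of UDP
options.add("-vvv");
videoVlc = new LibVLC(options);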
I play RTSP streaming with the following code:
try {
    Uri rtspUri = Uri.parse("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov");
    final MediaWrapper mw = new MediaWrapper(rtspUri);
    mw.removeFlags(MediaWrapper.MEDIA_FORCE_AUDIO);
    mw.addFlags(MediaWrapper.MEDIA_VIDEO);
    MediaWrapperListPlayer.getInstance().getMediaList().add(mw);
    VLCInstance.getMainMediaPlayer().setEventListener(this);
    VLCInstance.get().setOnHardwareAccelerationError(this);

    final IVLCVout vlcVout = VLCInstance.getMainMediaPlayer().getVLCVout();
    vlcVout.addCallback(this);
    vlcVout.setVideoView(mSurfaceView);
    vlcVout.attachViews();

    final SharedPreferences pref = PreferenceManager.getDefaultSharedPreferences(this);
    final String aout = VLCOptions.getAout(pref);
    VLCInstance.getMainMediaPlayer().setAudioOutput(aout);
    MediaWrapperListPlayer.getInstance().playIndex(this, 0);
} catch (Exception e) {
    Log.e(TAG, e.toString());
}
When you get the playing event, you need to enable the video track.
private void onPlaying() {
    stopLoadingAnimation();
    VLCInstance.getMainMediaPlayer().setVideoTrackEnabled(true);
}
This may be helpful for you
Background
Android got a new API on KitKat and Lollipop to video-capture the screen. You can do it either via the ADB tool or via code (starting from Lollipop).
Ever since the new API came out, many apps have appeared that use this feature to record the screen, and Microsoft even made its own Google-Now-On-Tap competitor app.
Using ADB, you can use:
adb shell screenrecord /sdcard/video.mp4
You can even do it from within Android Studio itself.
The problem
I can't find any tutorial or explanation about how to do it using the API, meaning in code.
What I've found
The only place I've found is the documentation (here, under "Screen capturing and sharing"), which tells me this:
Android 5.0 lets you add screen capturing and screen sharing
capabilities to your app with the new android.media.projection APIs.
This functionality is useful, for example, if you want to enable
screen sharing in a video conferencing app.
The new createVirtualDisplay() method allows your app to capture the
contents of the main screen (the default display) into a Surface
object, which your app can then send across the network. The API only
allows capturing non-secure screen content, and not system audio. To
begin screen capturing, your app must first request the user’s
permission by launching a screen capture dialog using an Intent
obtained through the createScreenCaptureIntent() method.
For an example of how to use the new APIs, see the MediaProjectionDemo
class in the sample project.
The thing is, I can't find any "MediaProjectionDemo" sample. Instead, I found the "Screen Capture" sample, but I don't understand how it works: when I ran it, all I saw was a blinking screen, and I don't think it saves the video to a file. The sample seems very buggy.
The questions
How do I perform those actions using the new API:
start recording, optionally including audio (mic/speaker/both).
stop recording
take a screenshot instead of video.
Also, how do I customize it (resolution, requested fps, colors, time...)?
The first step, which Ken White rightly suggested and which you may have already covered, is the officially provided example code.
I have used this API before. I agree that taking a screenshot is pretty straightforward, but screen recording works along similar lines.
I will answer your questions in 3 sections and will wrap it up with a link. :)
1. Start Video Recording
private void startScreenRecord(final Intent intent) {
    if (DEBUG) Log.v(TAG, "startScreenRecord:sMuxer=" + sMuxer);
    synchronized (sSync) {
        if (sMuxer == null) {
            final int resultCode = intent.getIntExtra(EXTRA_RESULT_CODE, 0);
            // get MediaProjection
            final MediaProjection projection = mMediaProjectionManager.getMediaProjection(resultCode, intent);
            if (projection != null) {
                final DisplayMetrics metrics = getResources().getDisplayMetrics();
                final int density = metrics.densityDpi;
                if (DEBUG) Log.v(TAG, "startRecording:");
                try {
                    sMuxer = new MediaMuxerWrapper(".mp4"); // if you record audio only, ".m4a" is also OK.
                    if (true) {
                        // for screen capturing
                        new MediaScreenEncoder(sMuxer, mMediaEncoderListener,
                                projection, metrics.widthPixels, metrics.heightPixels, density);
                    }
                    if (true) {
                        // for audio capturing
                        new MediaAudioEncoder(sMuxer, mMediaEncoderListener);
                    }
                    sMuxer.prepare();
                    sMuxer.startRecording();
                } catch (final IOException e) {
                    Log.e(TAG, "startScreenRecord:", e);
                }
            }
        }
    }
}
2. Stop Video Recording
private void stopScreenRecord() {
    if (DEBUG) Log.v(TAG, "stopScreenRecord:sMuxer=" + sMuxer);
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.stopRecording();
            sMuxer = null;
            // you should not wait here
        }
    }
}
2.5. Pause and Resume Video Recording
private void pauseScreenRecord() {
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.pauseRecording();
        }
    }
}

private void resumeScreenRecord() {
    synchronized (sSync) {
        if (sMuxer != null) {
            sMuxer.resumeRecording();
        }
    }
}
Hope the code helps. Here is the original link to the code that I referred to and from which this implementation (video recording) is derived.
3. Take screenshot Instead of Video
I think by default it's easy to capture the image in bitmap format. You can still go ahead with the MediaProjectionDemo example to capture a screenshot.
[EDIT]: Code excerpt for screenshot
a. To create virtual display depending on device width / height
mImageReader = ImageReader.newInstance(mWidth, mHeight, PixelFormat.RGBA_8888, 2);
mVirtualDisplay = sMediaProjection.createVirtualDisplay(SCREENCAP_NAME, mWidth, mHeight, mDensity, VIRTUAL_DISPLAY_FLAGS, mImageReader.getSurface(), null, mHandler);
mImageReader.setOnImageAvailableListener(new ImageAvailableListener(), mHandler);
b. Then start the Screen Capture based on an intent or action-
startActivityForResult(mProjectionManager.createScreenCaptureIntent(), REQUEST_CODE);
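The MediaProjection itself (the sMediaProjection used above and below) is then obtained from the result of that intent, along these lines (a sketch, assuming the same mProjectionManager and REQUEST_CODE as above):
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE && resultCode == Activity.RESULT_OK) {
        // Turn the user's consent into a MediaProjection for createVirtualDisplay().
        sMediaProjection = mProjectionManager.getMediaProjection(resultCode, data);
    }
}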
Stop Media projection-
sMediaProjection.stop();
c. Then convert to image-
//Process the media capture
image = mImageReader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
ByteBuffer buffer = planes[0].getBuffer();
int pixelStride = planes[0].getPixelStride();
int rowStride = planes[0].getRowStride();
int rowPadding = rowStride - pixelStride * mWidth;
//Create bitmap
bitmap = Bitmap.createBitmap(mWidth + rowPadding / pixelStride, mHeight, Bitmap.Config.ARGB_8888);
bitmap.copyPixelsFromBuffer(buffer);
//Write Bitmap to file in some path on the phone
fos = new FileOutputStream(STORE_DIRECTORY + "/myscreen_" + IMAGES_PRODUCED + ".png");
bitmap.compress(CompressFormat.PNG, 100, fos);
fos.close();
There are several implementations (full code) of Media Projection API available.
Some other links that can help you in your development-
Video Recording with MediaProjectionManager - website
android-ScreenCapture - github as per android developer's observations :)
screenrecorder - github
Capture and Record Android Screen using MediaProjection APIs - website
Hope it helps :) Happy coding and screen recording!
PS: Can you please tell me the Microsoft app you are talking about? I have not used it. Would like to try it :)