# Screen share - JavaScript

In this guide, we'll demonstrate how to share your screen using [twilio-video.js](/docs/video/javascript-getting-started). Chrome 72+, Firefox 66+, and Safari 12.2+ support the [getDisplayMedia](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia) API, which can be used to capture the screen directly from the web app. For earlier versions of Chrome, you'll need to create an [extension](https://developer.chrome.com/docs/extensions); the web application will communicate with this extension to capture the screen.
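Because support varies across browsers, your app may want to check at runtime whether the direct capture path is available before offering screen share. Here's a minimal sketch; the helper name is ours, and the `mediaDevices` parameter is injected only to keep the function easy to test (in the browser you would pass `navigator.mediaDevices`):

```js
// Hypothetical helper: returns true if this browser supports capturing the
// screen directly via getDisplayMedia (Chrome 72+, Firefox 66+, Safari 12.2+).
// Pass navigator.mediaDevices in the browser.
function supportsGetDisplayMedia(mediaDevices) {
  return Boolean(mediaDevices && typeof mediaDevices.getDisplayMedia === 'function');
}
```

When this returns `false`, you can fall back to one of the approaches covered below: the legacy `getUserMedia` constraint on older Firefox, or a Chrome extension on older Chrome.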

## Chrome (72+), Firefox (66+), Safari (12.2+): Use getDisplayMedia

To share your screen in a Room, use `getDisplayMedia()` to get the screen's MediaStreamTrack and create a [LocalVideoTrack](https://sdk.twilio.com/js/video/releases/2.34.0/docs/LocalVideoTrack.html):

```js
const { connect, LocalVideoTrack } = require('twilio-video');

const stream = await navigator.mediaDevices.getDisplayMedia({ video: { frameRate: 15 } });
const screenTrack = new LocalVideoTrack(stream.getTracks()[0], { name: 'myscreenshare' });
```

Then, you can either publish the LocalVideoTrack while joining a Room:

```js
const room = await connect(token, {
  name: 'presentation',
  tracks: [screenTrack]
});
```

or, publish the LocalVideoTrack after joining a Room:

```js
const room = await connect(token, {
  name: 'presentation'
});

room.localParticipant.publishTrack(screenTrack);
```
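When the user ends the capture through the browser's own "Stop sharing" control, the underlying MediaStreamTrack ends and the LocalVideoTrack emits a "stopped" event. As a minimal sketch (assuming a connected `room` and the `screenTrack` from above; the helper name is ours), you can listen for that event and unpublish the track:

```js
// Hypothetical helper: unpublish a screen-share track from the Room as soon
// as it stops (e.g. the user clicked the browser's "Stop sharing" button).
function unpublishOnStop(room, screenTrack) {
  screenTrack.once('stopped', () => {
    room.localParticipant.unpublishTrack(screenTrack);
  });
}
```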

## Firefox (65-): Use getUserMedia

To share your screen in the Room, use `getUserMedia()` to get the screen's MediaStreamTrack and create a [LocalVideoTrack](https://sdk.twilio.com/js/video/releases/2.34.0/docs/LocalVideoTrack.html):

```js
const { connect, LocalVideoTrack } = require('twilio-video');

const stream = await navigator.mediaDevices.getUserMedia({
  video: { mediaSource: 'window' }
});

const screenTrack = new LocalVideoTrack(stream.getTracks()[0]);
```

Then, you can either publish the LocalVideoTrack while joining a Room:

```js
const room = await connect(token, {
  name: 'presentation',
  tracks: [screenTrack]
});
```

or, publish the LocalVideoTrack after joining a Room:

```js
const room = await connect(token, {
  name: 'presentation'
});

room.localParticipant.publishTrack(screenTrack);
```

## Screen Share Not Supported on Mobile Web Browsers

Currently, we don't support screen sharing on mobile browsers because they don't support [getDisplayMedia](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia). However, screen sharing is available natively through the [iOS SDK](/docs/video/ios-v4-screen-share) and [Android SDK](/docs/video/android-screen-share).

## Chrome (71-): Build a Screen Share Extension

Our web app and extension will communicate using [message passing](https://developer.chrome.com/extensions/messaging). Specifically, our web app will be responsible for sending requests to our extension using Chrome's [`sendMessage`](https://developer.chrome.com/extensions/runtime#method-sendMessage) API, and our extension will be responsible for responding to requests raised through Chrome's [`onMessageExternal`](https://developer.chrome.com/extensions/runtime#event-onMessageExternal) event. By convention, every message passed between our web app and extension will be a JSON object containing a `type` property, and we will use this `type` property to distinguish different types of messages.

### Web App Requests

Our web app will send requests to our extension.

#### "getUserScreen" Requests

Since we want to enable screen share, the most important message our web app can send to our extension is a request to capture the user's screen. We want to distinguish these requests from other types of messages, so we will set its `type` equal to "getUserScreen". (We could choose any string for the message `type`, but "getUserScreen" bears a nice resemblance to the browser's [`getUserMedia`](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia) API.) Also, Chrome allows us to specify the [DesktopCaptureSourceType](https://developer.chrome.com/extensions/desktopCapture#type-DesktopCaptureSourceType)s we would like to prompt the user for, so we should include another property, `sources`, equal to an Array of DesktopCaptureSourceTypes. For example, the following "getUserScreen" request will prompt access to the user's screen, window, or tab:

```json
{
  "type": "getUserScreen",
  "sources": ["screen", "window", "tab"]
}
```

Our web app should expect a [success](#success-responses) or [error](#error-responses) message in response.

### Extension Responses

Our extension will respond to our web app's requests.

#### Success Responses

Any time we need to communicate a successful result from our extension, we'll send a message with `type` equal to "success", and possibly some additional data. For example, if our web app's ["getUserScreen" request](#getuserscreen-requests) succeeds, we should include the resulting `streamId` that Chrome provides us. Assuming Chrome returns us a `streamId` of "123", we should respond with

```json
{
  "type": "success",
  "streamId": "123"
}
```

#### Error Responses

Any time we need to communicate an error from our extension, we'll send a message with `type` equal to "error" and an error `message`. For example, if our web app's ["getUserScreen" request](#getuserscreen-requests) fails, we should respond with

```json
{
  "type": "error",
  "message": "Failed to get stream ID"
}
```

## Project Structure

In this guide, we propose the following project structure, with two top-level folders for our web app and extension.

```bash
.
├── web-app
│   ├── index.html
│   └── web-app.js
└── extension
    ├── extension.js
    └── manifest.json
```

***Note:** If you are adapting this guide to an existing project, you may tweak the structure to your liking.*

### Web App

#### index.html

Since our web app will be loaded in a browser, we need some HTML entry-point to our application. This HTML file should load [web-app.js](#web-appjs) and twilio-video.js.

#### web-app.js

Our web app's logic for creating twilio-video.js Clients, connecting to Rooms, and [requesting the user's screen](#requesting-the-screen) will live in this file.

### Extension

#### extension.js

Our extension will run extension.js in a background page. This file will be responsible for [handling requests](#handling-requests). For more information, refer to Chrome's documentation on [background pages](https://developer.chrome.com/extensions/background_pages).

#### manifest.json

Every extension requires a manifest.json file. This file grants our extension access to Chrome's Tab and DesktopCapture APIs and controls which web apps can send messages to our extension. For more information on manifest.json, refer to Chrome's documentation on the [manifest file format](https://developer.chrome.com/extensions/manifest); otherwise, feel free to tweak the example provided here. Note that we've included "*://localhost/*" in our manifest.json's "externally\_connectable" section. This is useful during development, but you may not want to publish your extension with this value. Consider removing it once you're done developing your extension.

```json
{
  "manifest_version": 2,
  "name": "your-plugin-name",
  "version": "0.10",
  "background": {
    "scripts": ["extension.js"]
  },
  "externally_connectable": {
    "matches": ["*://localhost/*", "*://*.example.com/*"]
  },
  "permissions": [
    "desktopCapture",
    "tabs"
  ]
}
```

## Requesting the Screen

We define a helper function in our web app, `getUserScreen`, that will send a ["getUserScreen" request](#getuserscreen-requests) to our extension using Chrome's [`sendMessage`](https://developer.chrome.com/extensions/runtime#method-sendMessage) API. If our request succeeds, we can expect a ["success" response](#success-responses) containing a `streamId`. Our response callback will pass that `streamId` to [`getUserMedia`](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia), and, if all goes well, our function will return a Promise that resolves to a MediaStream representing the user's screen.

```js
/**
 * Get a MediaStream containing a MediaStreamTrack that represents the user's
 * screen.
 *
 * This function sends a "getUserScreen" request to our Chrome Extension which,
 * if successful, responds with the sourceId of one of the specified sources. We
 * then use the sourceId to call getUserMedia.
 *
 * @param {Array<DesktopCaptureSourceType>} sources
 * @param {string} extensionId
 * @returns {Promise<MediaStream>} stream
 */
function getUserScreen(sources, extensionId) {
  const request = {
    type: 'getUserScreen',
    sources: sources
  };
  return new Promise((resolve, reject) => {
    chrome.runtime.sendMessage(extensionId, request, response => {
      switch (response && response.type) {
        case 'success':
          resolve(response.streamId);
          break;

        case 'error':
          reject(new Error(response.message));
          break;

        default:
          reject(new Error('Unknown response'));
          break;
      }
    });
  }).then(streamId => {
    return navigator.mediaDevices.getUserMedia({
      video: {
        mandatory: {
          chromeMediaSource: 'desktop',
          chromeMediaSourceId: streamId
        }
      }
    });
  });
}
```

## Connecting to a Room with Screen Sharing

Assume for the moment that we know our extension's ID and that we want to request the user's screen, window, or tab. We have all the information we need to call `getUserScreen`. When the Promise returned by `getUserScreen` resolves, we need to use the resulting MediaStream to construct the LocalVideoTrack object we intend to use in our Room. Once we've constructed our LocalVideoTrack representing the user's screen, we have two options for publishing it to the Room:

1. We can provide it in our call to `connect`, or
2. We can publish it after connecting to the Room using `publishTrack`.

Finally, we'll also want to add a listener for the "stopped" event. If the user stops sharing their screen, the "stopped" event will fire, and we may want to remove the LocalVideoTrack from the Room. We can do this by calling `unpublishTrack`.

```js
const { connect, LocalVideoTrack } = require('twilio-video');

// Option 1. Provide the screenLocalTrack when connecting.
async function option1() {
  const stream = await getUserScreen(['window', 'screen', 'tab'], 'your-extension-id');
  const screenLocalTrack = new LocalVideoTrack(stream.getVideoTracks()[0]);

  const room = await connect('my-token', {
    name: 'my-room-name',
    tracks: [screenLocalTrack]
  });

  screenLocalTrack.once('stopped', () => {
    room.localParticipant.unpublishTrack(screenLocalTrack);
  });

  return room;
}

// Option 2. First connect, and then publish screenLocalTrack.
async function option2() {
  const room = await connect('my-token', {
    name: 'my-room-name',
    tracks: []
  });

  const stream = await getUserScreen(['window', 'screen', 'tab'], 'your-extension-id');
  const screenLocalTrack = new LocalVideoTrack(stream.getVideoTracks()[0]);

  screenLocalTrack.once('stopped', () => {
    room.localParticipant.unpublishTrack(screenLocalTrack);
  });

  await room.localParticipant.publishTrack(screenLocalTrack);
  return room;
}
```

## Handling Requests

Our extension will listen to Chrome's [`onMessageExternal`](https://developer.chrome.com/extensions/runtime#event-onMessageExternal) event, which will be fired whenever our web app sends a message to the extension. In the event listener, we switch on the message `type` in order to determine how to handle the request. In this example, we only care about ["getUserScreen" requests](#getuserscreen-requests), but we also include a `default` case for handling unrecognized requests.

```js
chrome.runtime.onMessageExternal.addListener((message, sender, sendResponse) => {
  switch (message && message.type) {
    // Our web app sent us a "getUserScreen" request.
    case 'getUserScreen':
      handleGetUserScreenRequest(message.sources, sender.tab, sendResponse);
      break;

    // Our web app sent us a request we don't recognize.
    default:
      handleUnrecognizedRequest(sendResponse);
      break;
  }

  // Return true to indicate that we will call sendResponse asynchronously.
  return true;
});
```

### "getUserScreen" Requests

We define a helper function in our extension, `handleGetUserScreenRequest`, for responding to ["getUserScreen" requests](#getuserscreen-requests). The function invokes Chrome's [`chooseDesktopMedia`](https://developer.chrome.com/extensions/desktopCapture#method-chooseDesktopMedia) API with `sources` and, if the request succeeds, sends a [success response](#success-responses) containing a `streamId`; otherwise, it sends an [error response](#error-responses).

```js
/**
 * Respond to a "getUserScreen" request.
 * @param {Array<DesktopCaptureSourceType>} sources
 * @param {Tab} tab
 * @param {function} sendResponse
 * @returns {void}
 */
function handleGetUserScreenRequest(sources, tab, sendResponse) {
  chrome.desktopCapture.chooseDesktopMedia(sources, tab, streamId => {
    // The user canceled our request.
    if (!streamId) {
      sendResponse({
        type: 'error',
        message: 'Failed to get stream ID'
      });
      return;
    }

    // The user accepted our request.
    sendResponse({
      type: 'success',
      streamId: streamId
    });
  });
}
```

### Unrecognized Requests

For completeness, we'll also handle unrecognized requests. Any time we receive a message with a `type` we don't understand (or lacking a `type` altogether), our extension's `handleUnrecognizedRequest` function will send the following [error response](#error-responses):

```json
{
  "type": "error",
  "message": "Unrecognized request"
}
```

**handleUnrecognizedRequest Implementation**

```js
/**
 * Respond to an unrecognized request.
 * @param {function} sendResponse
 * @returns {void}
 */
function handleUnrecognizedRequest(sendResponse) {
  sendResponse({
    type: 'error',
    message: 'Unrecognized request'
  });
}
```
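Because the extension answers every message, even unrecognized ones, the web app can use any message as a presence check: if the response callback receives no response at all, the extension isn't installed (or isn't reachable from this origin). A hypothetical sketch, not part of the guide's required code; `sendMessage` is injected for testability, and in the browser you would pass `chrome.runtime.sendMessage`:

```js
// Hypothetical helper: detect whether our extension is installed. Any reply,
// even the "Unrecognized request" error above, proves the extension is there;
// no reply at all means it's missing or unreachable. `sendMessage` is injected
// for testability; in the browser, pass chrome.runtime.sendMessage.
function isExtensionInstalled(extensionId, sendMessage) {
  return new Promise(resolve => {
    sendMessage(extensionId, { type: 'ping' }, response => {
      resolve(Boolean(response));
    });
  });
}
```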

## Publishing the Extension

Finally, once we've built and tested our web app and extension, we will want to publish our extension in the Chrome Web Store so that users of our web app can enjoy our new screen share functionality. Take a look at Chrome's [documentation](https://developer.chrome.com/webstore/publish) for more information.
