# Video Source APIs - iOS

In this guide, we will show you how to use the VideoSource APIs to share video in a Room. These APIs allow you to choose the built-in camera(s), or any other source of content that is available to your application (or extension).

* [Overview](#overview)
* [Using the CameraSource API](#using-the-camerasource-api)
* [Writing a VideoSource](#writing-a-videosource)

## Overview

The VideoSource APIs describe producers and consumers of video content. A `VideoSource` produces content for a `LocalVideoTrack`. Sources have the following properties.

* VideoSources produce VideoFrames, and deliver them to VideoSinks.
* VideoSources receive format requests, and deliver requests to VideoSinks.
* The recommended maximum frame size is 1920x1080.
* The recommended maximum frame rate is 30 frames per second.
* The recommended pixel format is NV12.

A `VideoSink` consumes content from a `VideoSource`. Sinks have the following properties.

* VideoSinks handle format requests from VideoSources.
* VideoSinks consume frames from VideoSources.
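These recommendations can be expressed as a `VideoFormat`. The sketch below assumes the 5.x SDK's `VideoFormat` and `PixelFormat` types, where `formatYUV420BiPlanarFullRange` is the NV12 layout.

```swift
import CoreMedia
import TwilioVideo

// A format that follows the recommendations above: 720p at 30 fps, NV12.
// PixelFormat.formatYUV420BiPlanarFullRange is the NV12 layout in the 5.x SDK.
let recommendedFormat = VideoFormat()
recommendedFormat.dimensions = CMVideoDimensions(width: 1280, height: 720)
recommendedFormat.frameRate = 30
recommendedFormat.pixelFormat = PixelFormat.formatYUV420BiPlanarFullRange
```

You can pass a format like this to `requestOutputFormat(_:)` on a source to constrain its output.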

In the next section, we will show you how to use the CameraSource API.

## Using the CameraSource API

A [CameraSource](https://twilio.github.io/twilio-video-ios/docs/latest_5.x/Classes/TVICameraSource.html) is a
[VideoSource](https://twilio.github.io/twilio-video-ios/docs/latest_5.x/Protocols/TVIVideoSource.html) that produces content from the built-in cameras. This is probably the first kind of video that you want to share, so it is a good place to begin.

### Create a CameraSource and a LocalVideoTrack

First, we want to create a `CameraSource`, and use that source to create a `LocalVideoTrack`.

```swift
guard let cameraSource = CameraSource() else {
    // Unable to initialize a camera source
    return
}
var videoTrack = LocalVideoTrack(source: cameraSource)
```

### Capture from a Device

Now that we've set up our Track and Source, it's time to start producing frames from one of the built-in cameras. Let's use a `CameraSource` utility method to help us discover a front-facing `AVCaptureDevice`.

```swift
guard let frontCamera = CameraSource.captureDevice(position: .front) else {
    // The device does not have a front camera.
    return
}

// Start capturing with the device that we discovered.
cameraSource.startCapture(device: frontCamera)
```

In this example, `CameraSource` automatically determines the best format to capture in. Typically, 640x480 at 30 frames per second is used as the default.
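If you want to detect capture failures, `startCapture` also has a variant that takes a completion handler. The handler parameters below (the device, the selected format, and an optional error) follow the 5.x SDK, but treat this as a sketch.

```swift
// Start capturing, and inspect the result once the camera starts (or fails).
cameraSource.startCapture(device: frontCamera) { (device, format, error) in
    if let error = error {
        print("Failed to start capture: \(error)")
    } else {
        print("Capturing at \(format.dimensions.width)x\(format.dimensions.height)")
    }
}
```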

### Connect to a Room with a LocalVideoTrack

Next, we want to connect to a Room with the `LocalVideoTrack` we created earlier.

```swift
let connectOptions = ConnectOptions(token: accessToken) { (builder) in
    builder.roomName = "my-room"
    if let localVideoTrack = self.localVideoTrack {
        builder.videoTracks = [localVideoTrack]
    }
}
self.room = TwilioVideoSDK.connect(options: connectOptions, delegate: self)
```

### Select a new Device

While you can select a single device at start time, `CameraSource` also supports switching devices while it is running. For example, you could switch from a front-facing device to a rear-facing one.

```swift
guard let rearCamera = CameraSource.captureDevice(position: .back) else {
    // The device does not have a rear camera.
    return
}

cameraSource.selectCaptureDevice(rearCamera)
```

### Unpublishing Video and Stopping Capture

At some point after connecting to a Room, you might decide that you want to stop sharing video from the camera. Start by unpublishing the Track.

```swift
// Unpublish the Track. We will no longer be sharing video in the Room.
if let participant = self.room?.localParticipant,
    let videoTrack = self.localVideoTrack {
    participant.unpublishVideoTrack(videoTrack)
}
```

Finally, we will stop the source and release our references to the objects.

```swift
// Stop capturing from the device.
self.camera?.stopCapture(completion: { (error) in
    if let theError = error {
        print("Error stopping capture: \(theError)")
    }

    self.camera = nil
    self.localVideoTrack = nil
})
```

### Selecting a Device Format

An `AVCaptureDevice` can produce video in many possible formats. `CameraSource` offers utility methods to discover formats that are suitable for video streaming. Consider executing the following code on your iOS device:

```swift
// Assume that we discovered "frontDevice" earlier.

let formats = CameraSource.supportedFormats(captureDevice: frontDevice)
print(formats)
```

When this code is run on an iPhone X with iOS 12.4, the following formats are returned.

| Dimensions  | Frame Rate | Pixel Format |
| ----------- | ---------- | ------------ |
| 192 x 144   | 30         | 420f         |
| 352 x 288   | 30         | 420f         |
| 480 x 360   | 30         | 420f         |
| 640 x 480   | 30         | 420f         |
| 960 x 540   | 30         | 420f         |
| 1280 x 720  | 30         | 420f         |
| 1920 x 1080 | 30         | 420f         |
| 1920 x 1440 | 30         | 420f         |
| 3088 x 2320 | 30         | 420f         |

Once you've determined which format you would like to use, you can provide it when starting capture.

```swift
// Formats are ordered by increasing dimensions. Start with the smallest size.
cameraSource.startCapture(device: frontDevice,
                          format: formats.firstObject as! VideoFormat,
                          completion: nil)
```

In some applications, it may be important to change formats at runtime with as little disruption to the camera feed as possible.

```swift
// Select another format for the front facing camera.
cameraSource.selectCaptureDevice(frontDevice,
                                 format: formats.lastObject as! VideoFormat,
                                 completion: nil)
```

### Making a Format Request

Device formats afford quite a lot of flexibility, but there are some cases that `AVCaptureDevice` does not support out of the box. For example, what if you wanted to:

1. Produce square video.
2. Produce video that fills a portrait iPhone X / XR / XS screen.

These are both cases where you want to publish video in a different aspect ratio or size than `AVCaptureDevice` can produce. That is okay, because format requests are here to help with this problem.

```swift
let frontDevice = CameraSource.captureDevice(position: .front)!
let formats = CameraSource.supportedFormats(captureDevice: frontDevice)

// We match 640x480 directly, since it is known to be supported by all devices.
var preferredFormat: VideoFormat?
for format in formats {
    let theFormat = format as! VideoFormat
    if theFormat.dimensions.width == 640,
        theFormat.dimensions.height == 480 {
        preferredFormat = theFormat
    }
}

guard let captureFormat = preferredFormat else {
    // The preferred format could not be found.
    return
}

// Request cropping to 480x480.
let croppingRequest = VideoFormat()
let dimension = captureFormat.dimensions.height
croppingRequest.dimensions = CMVideoDimensions(width: dimension,
                                               height: dimension)

self.camera?.requestOutputFormat(croppingRequest)
self.camera?.startCapture(device: frontDevice,
                          format: captureFormat,
                          completion: nil)
```

The following diagram shows the effect of a format request on frames produced by `CameraSource`.

![Flowchart illustrating video source requirements for cropping with frame adaptation from 640x480 to 480x480.](https://docs-resources.prod.twilio.com/6e4bf87bfd9288941b37e9b2e93f0325b4fe10bef65324879716e87479d95d1b.png)

Take a look at the [iOS QuickStart Example](https://github.com/twilio/video-quickstart-ios/tree/master/VideoQuickStart) to learn more about using `CameraSource`.

## Tracking Orientation Changes

The `CameraSource` provides flexibility in how it tracks video orientation for capture and preview. By default, the `CameraSource` monitors `-[UIApplication statusBarOrientation]` for orientation changes. With the addition of the `UIWindowScene` APIs in iOS 13, `CameraSource` now has a property, `CameraSourceOptions.orientationTracker`, which allows you to specify how the `CameraSource` should track orientation changes.

The `orientationTracker` property accepts an object that implements the `CameraSourceOrientationTracker` protocol. A default implementation, `UserInterfaceTracker`, is provided with the SDK. `UserInterfaceTracker` monitors for changes in `UIInterfaceOrientation` at the application or scene level. For example, if you wish to track orientation changes based on a scene, you would provide the scene to track when creating the `CameraSourceOptions`.

```swift
// Track the orientation of the first connected scene.
// Note: UIApplication.keyWindow is deprecated as of iOS 13.
let options = CameraSourceOptions { (builder) in
    if let scene = UIApplication.shared.connectedScenes.first as? UIWindowScene {
        builder.orientationTracker = UserInterfaceTracker(scene: scene)
    }
}
let camera = CameraSource(options: options, delegate: self)
```

You will also need to forward `UIWindowScene` events from your `UIWindowSceneDelegate` to keep `UserInterfaceTracker` up to date as the scene changes.

```swift
// Forward UIWindowScene events
func windowScene(_ windowScene: UIWindowScene,
                 didUpdate previousCoordinateSpace: UICoordinateSpace,
                 interfaceOrientation previousInterfaceOrientation: UIInterfaceOrientation,
                 traitCollection previousTraitCollection: UITraitCollection) {
    UserInterfaceTracker.sceneInterfaceOrientationDidChange(windowScene)
}
```

You can also manually control how orientation is tracked. For example, you might decide to use `UIDevice` instead of `UIScene` to determine the orientation of the camera. To do this, you would create your own implementation of `CameraSourceOrientationTracker` which invokes the `- (void)trackerOrientationDidChange:(AVCaptureVideoOrientation)orientation` callback method when the device's orientation changes.
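As a sketch, a `UIDevice`-based tracker might look like the following. The protocol members shown (a `delegate` to notify and an `orientation` to report) are assumptions based on the 5.x SDK headers; check `CameraSourceOrientationTracker` for the exact shape.

```swift
import AVFoundation
import TwilioVideo
import UIKit

// A sketch of a UIDevice-based orientation tracker.
class DeviceOrientationTracker: NSObject, CameraSourceOrientationTracker {
    weak var delegate: CameraSourceOrientationDelegate?

    // Map the device orientation to a capture orientation. The landscape
    // cases are intentionally swapped: the capture orientation is the
    // opposite of the device's rotation.
    var orientation: AVCaptureVideoOrientation {
        switch UIDevice.current.orientation {
        case .landscapeLeft:
            return .landscapeRight
        case .landscapeRight:
            return .landscapeLeft
        case .portraitUpsideDown:
            return .portraitUpsideDown
        default:
            return .portrait
        }
    }

    override init() {
        super.init()
        UIDevice.current.beginGeneratingDeviceOrientationNotifications()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(orientationDidChange),
            name: UIDevice.orientationDidChangeNotification,
            object: nil)
    }

    @objc private func orientationDidChange(_ notification: Notification) {
        delegate?.trackerOrientationDidChange(orientation)
    }
}
```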

## Writing a VideoSource

VideoSources are real-time producers of content. Importantly, to optimize for low latency, *delivery of individual frames is not guaranteed*. The video pipeline continuously monitors network and device conditions and may respond by:

* Reducing the number of bits allocated to the encoder.
* Downscaling the video to a smaller size.
* Dropping video frames at input.
* Cropping (minimal, to ensure pixel alignment).
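As a starting point, here is a minimal skeleton of a custom source. The `sink`, `isScreencast`, and `requestOutputFormat(_:)` members follow the 5.x `VideoSource` protocol; `deliverFrame(_:timestamp:)` is a hypothetical helper that your capture mechanism would call whenever it produces a new pixel buffer.

```swift
import CoreMedia
import TwilioVideo

class ExampleVideoSource: NSObject, VideoSource {
    // The pipeline sets this sink when the source is used to create a track.
    weak var sink: VideoSink?

    // Return true for screen content so the pipeline favors sharpness over
    // smooth motion when adapting quality.
    var isScreencast: Bool {
        return false
    }

    func requestOutputFormat(_ outputFormat: VideoFormat) {
        // Forward the request so the pipeline can crop or scale our frames.
        sink?.onVideoFormatRequest(outputFormat)
    }

    // Hypothetical helper: call this from your capture callback.
    func deliverFrame(_ buffer: CVPixelBuffer, timestamp: CMTime) {
        // The initializer is failable; it rejects unsupported pixel formats.
        guard let frame = VideoFrame(timestamp: timestamp,
                                     buffer: buffer,
                                     orientation: .up) else {
            return
        }
        sink?.onVideoFrame(frame)
    }
}
```

Once created, a custom source is used just like `CameraSource`: pass it to `LocalVideoTrack(source:)` and publish the resulting Track.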

### Sample Code

If you would like to implement your own VideoSource, or learn about advanced usage of `CameraSource`, then an excellent place to begin is our sample code.

* [ARKitExample](https://github.com/twilio/video-quickstart-ios/tree/master/ARKitExample)
* [ReplayKitExample](https://github.com/twilio/video-quickstart-ios/tree/master/ReplayKitExample)
* [ScreenCapturerExample](https://github.com/twilio/video-quickstart-ios/tree/master/ScreenCapturerExample)
* [VideoApp](https://github.com/twilio/twilio-video-app-ios)
