Rendering Features
The rendering pipelines for the Ncam AR Suite have a single texture input (the film) and two outputs: the CG and Composite images. When the Ncam Workflow is run, a global list of inputs and outputs is populated according to the configuration. When a new Ncam camera that uses an Ncam AR Suite rendering pipeline is created, its inputs and outputs are populated from this global list.
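As a purely illustrative model, the pipeline's I/O can be pictured as one film texture in and two render targets out. These are not the plugin's actual types, just a conceptual sketch of the data flow:

```cpp
// Illustrative model of the pipeline's I/O shape only; the Ncam AR Suite's
// real types and property names are not shown in this document.
#include "Engine/Texture.h"
#include "Engine/TextureRenderTarget2D.h"

struct FIllustrativePipelineIO
{
    // Single input: the live film (video) frame.
    UTexture* FilmInput = nullptr;

    // Two outputs: the rendered CG layer and the final AR composite.
    UTextureRenderTarget2D* CGOutput = nullptr;
    UTextureRenderTarget2D* CompositeOutput = nullptr;
};
```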
This list can be inspected in the NcamARLite plugin options, which are found within the Project Settings:
A user wouldn't generally need to change these options, as they are configured directly by the Workflow Wizard.
NcamARSuite provides two production-tested cameras designed for different AR scenarios. Each camera has a different Rendering Pipeline Component that renders and produces the augmented reality composite:
- NcVirtualOverlayCameraBP: Virtual Overlay Pipeline (CG over Video)
- NcVirtualStudioCameraBP: Virtual Studio Pipeline (Video over CG)
To use one of these cameras, locate it in Unreal's Content Browser by navigating to the plugin folder's Content/UECamera/Blueprints, then drag it directly into the scene.
Each camera is derived from the parent NcTrackedCameraActor, which has an additional component attached called the CameraDrawFrustumComponent. This component is driven by the camera tracking data to visualize the camera's view frustum.
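As a hedged sketch, assuming the CameraDrawFrustumComponent behaves like (or derives from) Unreal's stock UDrawFrustumComponent, the tracked field of view could drive the visualized frustum as follows. The function and its FOV parameter are illustrative, not part of the Ncam API:

```cpp
// Hedged sketch: assumes the CameraDrawFrustumComponent behaves like (or
// derives from) Unreal's stock UDrawFrustumComponent. The function and its
// FOV parameter are illustrative, not part of the Ncam API.
#include "GameFramework/Actor.h"
#include "Components/DrawFrustumComponent.h"

void UpdateFrustumFromTracking(AActor* TrackedCamera, float TrackedHorizontalFOV)
{
    if (UDrawFrustumComponent* Frustum =
            TrackedCamera->FindComponentByClass<UDrawFrustumComponent>())
    {
        // FrustumAngle is the visualized horizontal field of view in degrees.
        Frustum->FrustumAngle = TrackedHorizontalFOV;
        Frustum->MarkRenderStateDirty(); // refresh the debug visualization
    }
}
```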
The tables below illustrate the visual effects which are available on the different rendering pipelines that are provided with the Ncam cameras.
| Effect | Virtual Overlay | Virtual Studio | Notes |
|---|---|---|---|
| Depth of Field | Yes | Yes | |
| Lens Distortion | Yes | Yes | |
| Heat Haze Distortion | Limited | Yes | Heat haze is applied to other virtual objects but will not affect the video |
| Transparency | Yes | Yes | |
| Refraction | Limited | Yes | Virtual objects will be correctly refracted but the video will remain unaffected |
| Feature | Virtual Overlay | Virtual Studio | Notes |
|---|---|---|---|
| Dynamic light sources | Yes | Yes | |
| Stationary light sources | Yes | Yes | |
| Static (baked) light sources | Limited | Yes | Static lights only affect virtual objects; they will not affect the video |
| Grey shadows | Yes | Yes | |
| Coloured shadows | Yes | Yes | |
| Reflections of the video in virtual surfaces | Yes | Yes | |
Rendering features such as Screen Space Reflections, Depth of Field, and Shadows all work coherently with both the CG and the film.
To access and configure the various features of the compositing pipeline, first locate the actor it is attached to in the scene; this will be one of the Ncam camera blueprints.
The pipeline can then be found in the actor's list of components. The images below illustrate this, as well as some of the parameters that are used to configure the various effects:
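From code, the same component can be located with a standard Unreal component query. The pipeline's actual C++ class is not named in this document, so the class is passed in here rather than invented:

```cpp
// Sketch of locating the pipeline component from code. The pipeline's actual
// C++ class is not named in this document, so the class is passed in rather
// than assumed.
#include "GameFramework/Actor.h"

UActorComponent* FindPipelineComponent(AActor* NcamCamera, UClass* PipelineClass)
{
    // Returns the first component of the given class attached to the Ncam
    // camera blueprint actor, or nullptr if none is present.
    return NcamCamera ? NcamCamera->FindComponentByClass(PipelineClass) : nullptr;
}
```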
When using the Virtual Studio camera Blueprint from the AR Suite content folder it is possible to add garbage matte objects to the scene. These are typically used to extend a CG background beyond the extent of the physical green screen area of the set.
Garbage matting is configured in the following way (a hedged code sketch of the same steps follows the list):
- Add a "NcVirtualStudioCameraBP" camera actor to the scene
- From the NcamARSuite Content > UECamera > Meshes > GarbageMatteObjects folder, drag the garbage matte mesh into your level
- Add as many meshes as required, and scale and position them accordingly. Ensure that the meshes' Mobility is set to "Movable"
- Select the NcVirtualStudioCameraBP actor and navigate to the pipeline component
- Expand the "Film" section and add the Garbage matte meshes in the level to the Garbage matte objects Array, and enable the "Enable Masking" checkbox
It is possible to invert the effect of the garbage matte. In this case everything outside of the garbage matte will be rendered as full CG, and the garbage matte itself will be rendered as a holdout showing the camera image. This is done by changing the "Masking Material" in the Film section to the "ApplyInverseGarbageMatte" material.
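A minimal sketch of that material swap, assuming a MaskingMaterial property and an asset path inside the plugin's content folder (both are guesses; only the ApplyInverseGarbageMatte name comes from the text above):

```cpp
// Assumes a MaskingMaterial property and a plugin content path; only the
// ApplyInverseGarbageMatte material name comes from the documentation.
#include "Materials/MaterialInterface.h"

UMaterialInterface* LoadInverseMatteMaterial()
{
    // Hypothetical asset path inside the NcamARSuite plugin content folder.
    UMaterialInterface* Inverse = LoadObject<UMaterialInterface>(
        nullptr, TEXT("/NcamARSuite/Materials/ApplyInverseGarbageMatte"));
    // Illustratively: Pipeline->MaskingMaterial = Inverse;
    return Inverse;
}
```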
The Virtual Overlay compositing pipelines have a feature that allows shadows cast from virtual objects to be superimposed onto the video. This is achieved by placing a movable static mesh object in the scene, positioned and orientated to approximate the surface onto which the shadow will be cast.
AR Suite Only
For example, a holdout object could be placed on the ground plane to allow shadows to be cast from CG objects onto the floor. These objects are referred to as "Holdout Objects" and will not be visible in the final Composite Output.
Useful holdout static mesh objects can be found under "UECamera/Meshes/HoldOuts" within the NcamARSuite content folder. To use a holdout, first place it in the scene, position and orientate it to match the approximate location of the physical object it represents (such as the floor), and add it to the Holdout Actors list of the chosen Ncam Compositing Pipeline.
When using hold-outs it is recommended that they fully encompass the scene so that there are no areas of transparency, as such areas are rendered black. This often means that at least two hold-out objects are required: one to receive the shadows and another to block out the background. To make scene set-up easier, NcamARSuite provides an NcSkyHoldOut object that can be placed to encapsulate the scene.
By default, the Material of the NcSkyHoldOut is set to NcShadowHoldoutUnlit, which means that it will not receive any shadows. Similarly, any other hold-out object that is used for blocking, and therefore shouldn't receive shadows, should also use a material with an Unlit Shading Model, e.g. NcShadowHoldoutUnlit, as shown below.
The hold-outs that are to receive shadows for composition over the film do not need any particular Material or Shading Model configuration. It is just important that all hold-outs are added to the "Holdout Actors" list of the chosen Ncam Compositing Pipeline, as illustrated in the image above.
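The sketch below mirrors this setup for a blocking hold-out. HoldoutActors is an assumed property name mirroring the "Holdout Actors" list; the NcShadowHoldoutUnlit material is named above, but the asset path shown is a guess:

```cpp
// Sketch of configuring a blocking hold-out. HoldoutActors is an assumed
// property name; the asset path for NcShadowHoldoutUnlit is illustrative.
#include "Engine/StaticMeshActor.h"
#include "Components/StaticMeshComponent.h"
#include "Materials/MaterialInterface.h"

void ConfigureBlockingHoldout(AStaticMeshActor* Holdout)
{
    // Blocking hold-outs shouldn't receive shadows, so assign an Unlit
    // material such as NcShadowHoldoutUnlit.
    if (UMaterialInterface* Unlit = LoadObject<UMaterialInterface>(
            nullptr, TEXT("/NcamARSuite/Materials/NcShadowHoldoutUnlit")))
    {
        Holdout->GetStaticMeshComponent()->SetMaterial(0, Unlit);
    }

    // Illustratively, register with the pipeline so it is held out of the
    // final composite: Pipeline->HoldoutActors.Add(Holdout);
}
```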
The following images illustrate this workflow with a practical example that demonstrates the casting of shadows from multiple characters onto a hold-out with slightly curved edges.
The image below shows how the hold-outs are added to the scene.
Firstly, an NcBevelledCubeHoldout is placed to receive the shadows, and a second hold-out, the NcSkySphere, is used to mask out all remaining (black) background outside of the volume of the cube hold-out.
The NcSkySphere hold-out can be scaled as large as required. Its purpose is to cover the entirety of the scene, including the volume of the camera trajectory.
Both hold-out objects ensure that the shadows are cast and composited correctly onto the background image. The NcBevelledCubeHoldOut acts as the shadow receiver, and the NcSkyHoldOut ensures that if the virtual camera moves outside of the volume, the empty void isn't rendered in the composite as black. This behavior is further illustrated in the images below.
The NcamARSuite plugin introduces three new options to increase performance when using any of our Virtual Overlay compositing pipelines. All of these options improve the performance of compositing shadows onto the film image, and they can often be crucial when trying to reach a target frame rate, especially with complex scenes. A hedged sketch of how these options might look from code follows the list below.
- Shadow Quality Percentage
  - Captures the shadows at a lower resolution than the full frame
  - A shadow quality of 50% is recommended and is set by default
  - Reducing the shadow quality significantly increases performance compared to 100% (full target resolution)
  - Even at 25%, the visual difference is usually minimal compared to the performance gain
- Actor to Render Only in Shadow Pass
  - Actors in this list are rendered only in the shadow pass
  - Used in combination with the option below, simple proxy objects can cast shadows in place of their more complex counterparts, which can further improve performance
- Actor to Ignore in Shadow Pass
  - Actors in this list are excluded from the shadow pass and are rendered only in the main CG pass
  - This option is useful to prevent an object from casting a shadow onto a holdout while still allowing it to shadow other CG elements
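The sketch below summarizes the shape of these options; every name here mirrors a Details-panel label above and is an assumption, not the plugin's actual type:

```cpp
// Illustrative shape of the three shadow-performance options; not the
// plugin's real type, and the names are assumptions.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

struct FIllustrativeShadowOptions
{
    float ShadowQualityPercentage = 0.5f;           // 50% of target resolution (default)
    TArray<AActor*> ActorsToRenderOnlyInShadowPass; // e.g. simple shadow proxies
    TArray<AActor*> ActorsToIgnoreInShadowPass;     // e.g. their complex counterparts
};
```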
Ncam AR Suite supports three colour spaces for SDI input and output:
- sRGB
- Rec709
- LogC
As the Unreal Engine uses sRGB as its native colour space, choosing sRGB as both the input and output colour space provides colour consistency through the entire compositing pipeline. This means that the colour across all viewports within the Unreal Editor, as well as on the output SDI monitor, will be the same.
Using other colour spaces such as Rec709 may produce higher-quality, less noisy results on the output SDI video, but at the cost of introducing subtle differences from the colour of CG objects displayed in the Unreal viewport. The input and output colour spaces can be configured independently if required:
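For background, the standard transfer functions below (not plugin code) show why the encodings differ and where the subtle colour shifts come from:

```cpp
// Standard transfer functions (IEC 61966-2-1 sRGB and ITU-R BT.709), shown
// for background only; this is not plugin code.
#include <cmath>

// Linear scene light -> sRGB-encoded value.
float LinearToSRGB(float C)
{
    return (C <= 0.0031308f) ? 12.92f * C
                             : 1.055f * std::pow(C, 1.0f / 2.4f) - 0.055f;
}

// Linear scene light -> Rec709-encoded value (BT.709 OETF).
float LinearToRec709(float L)
{
    return (L < 0.018f) ? 4.5f * L
                        : 1.099f * std::pow(L, 0.45f) - 0.099f;
}
```

For example, mid-grey linear 0.18 encodes to roughly 0.46 under the sRGB curve but about 0.41 under Rec709, which is the kind of subtle difference described above when input and output spaces are mixed.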