XR Post-Production Processes: Rendering using Amazon AWS
Rendering is one of the most expensive, time-consuming, and error-prone tasks of any content creation studio. It is a recurring task that multiple departments must complete. At some point in the process, several departments, including principal photography, audio creation, animation, and visual effects, will begin to render out files for various purposes. If the resulting files have subpar quality or need further artistic edits, the process has to be started over again.
High Level Business Process Diagram - Rendering
At a small to midsize production studio, there are typically tight limits on annual hardware and software licensing investment. Project structures can also vary widely, since each project may be predefined by the client or, in some cases, self-defined and set by company protocols. Of course, maximizing time and resources remains a major priority. Due to the nature of the sector, studios have large-scale computing needs and may face challenges as project elements come together or multiple projects enter the pipeline.
To offset these challenges, our theoretical studio could leverage Amazon AWS to develop a cloud-based workflow. For the workflow described in this example, Amazon AWS prices out at roughly $0.01 or less per core-hour for rendering. Beyond AWS compute, other areas where issues may arise, such as bandwidth and storage, will still need to be considered. To save money on licensing, elastic licensing models can be used via AWS Thinkbox for tools such as The Foundry's Nuke and Autodesk products. Amazon S3 and Amazon Elastic File System (EFS) can drive the shared file system, while AWS Direct Connect can be used to transfer files from our theoretical studio's local storage. Finally, we can use NVIDIA GPU-based EC2 instances running Windows and Linux (VNC + VirtualGL) to facilitate an entirely cloud-based artist workflow.
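To make the cost figure above concrete, the following sketch estimates the compute cost of a render job using the roughly $0.01-per-core-hour rate cited in this example. The frame count, per-frame render time, and core count are hypothetical inputs chosen purely for illustration, not figures from an actual project.

```python
# Rough render-cost estimate using the ~$0.01 per core-hour figure cited above.
# All job parameters (frames, minutes per frame, cores) are hypothetical.

CORE_HOUR_RATE = 0.01  # USD per core-hour (approximate rate from the text)

def estimate_render_cost(frames: int, minutes_per_frame: float, cores_per_node: int) -> float:
    """Return the estimated compute cost in USD for a render job."""
    core_hours = frames * (minutes_per_frame / 60.0) * cores_per_node
    return core_hours * CORE_HOUR_RATE

# Example: 2,000 frames at 30 minutes per frame on 64-core render nodes.
cost = estimate_render_cost(frames=2000, minutes_per_frame=30, cores_per_node=64)
print(f"Estimated compute cost: ${cost:,.2f}")
```

An estimate like this helps a studio decide when bursting to the cloud beats buying hardware: the same calculation run against an owned render farm's amortized cost per core-hour gives a direct comparison.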
Amazon AWS Cloud Rendering Flow Chart
Drilling Down: Quality & Game Play Testing
Event Simulation Flow Chart
The proposed business model has many events that can be drilled down into for further analysis. Let's take a look at the new testing phase of the model and the all-new bug testing capability that has been added. Whenever coding, quality, or game-play testing is occurring, our team and outside testers will submit bugs (errors or glitches in the system) as they occur.
To the right is an event simulation flow chart featuring an in-depth look at the reporting process and event flow, from start to finish, for the bug testing phase of our proposed business model. The process begins when a new defect is found by the team. A ticket is opened and a member of the development team is assigned to attempt to correct the issue. The team member creates a formal description of the problem and begins some preliminary research.
After the data is gathered, the developer attempts to reproduce the error. If the error cannot be reproduced, the developer will try to determine why and gather more data. If, after gathering the additional data, the developer still cannot reproduce the error and concludes that it is not a bug, the bug will be rejected.
If the bug can be reproduced, the developer will attempt to find its root cause and begin bug analysis. During bug analysis, the developer will answer three questions. Is the error genuine? If not, the bug will be rejected. Is the bug fixable? If not, the issue will be escalated to a manager and removed from the queue. Finally, if the root cause is a genuine, fixable issue, what can be done to fix it?
Next, the developer will propose one or more solutions to fix the error. When the developer believes the error is corrected, another member of the development team will retest the solution. If the solution fails to fix the problem during the retest, the issue will be sent back to the original developer for further analysis and new solutions.
If the solution passes the retest, it will be reviewed by another member of the development team. If the solution fails at this review stage, the ticket is reopened and assigned to a different member of the development team to restart the process. If the review confirms the fix, the case is closed and the error is considered fixed.
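The triage flow described above can be sketched as a simple decision function that walks one ticket to its terminal state. The state names and boolean inputs below are illustrative assumptions for this sketch, not the API of any actual bug tracker.

```python
# A minimal sketch of the bug-triage flow described above.
# States and transitions are illustrative, not a real tracker's schema.

from enum import Enum, auto

class BugState(Enum):
    REJECTED = auto()    # not reproducible, or not a genuine error
    ESCALATED = auto()   # genuine but unfixable; sent to a manager
    RETEST = auto()      # fix failed retest; back to the original developer
    REOPENED = auto()    # fix failed review; reassigned to a new developer
    CLOSED = auto()      # fix confirmed by retest and review

def triage(reproducible: bool, genuine: bool, fixable: bool,
           retest_passed: bool, review_passed: bool) -> BugState:
    """Return the outcome of one pass through the bug-testing flow."""
    if not reproducible or not genuine:
        return BugState.REJECTED
    if not fixable:
        return BugState.ESCALATED
    if not retest_passed:
        return BugState.RETEST
    if not review_passed:
        return BugState.REOPENED
    return BugState.CLOSED
```

Modeling the flow this way makes the ordering of the checks explicit: reproducibility and genuineness gate everything else, and a fix must survive both the retest and the independent review before the case closes.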