Epic President Talks Unreal Engine 3 Tech In Exclusive Video Interview

March 10, 2011

By Andrew Burnes

At this week’s Game Developers Conference in San Francisco, Epic Games wowed attendees with a demonstration of the latest features they’ve developed for the industry-leading Unreal Engine 3, utilised by hundreds of developers and publishers the world over. The engine is famed for its ease of use and for reducing both the cost and time it takes to develop a game, and licensees receive automatic access to the upgrades Epic implements, so the advanced, graphically intensive features shown by Epic will appear in new PC games over the coming year.

Shown on a PC running three NVIDIA GeForce GTX 580 graphics cards in SLI, the Unreal Engine demonstration highlighted a number of new techniques previously reserved for big-screen CGI movies. As explained by Epic Games President Mike Capps in our exclusive video interview, the Unreal Engine ‘Samaritan’ demonstration showcases effects only possible with DirectX 11 and the latest video cards, though Epic Games’ Mark Rein believes that further optimisation will enable the effects to run smoothly on NVIDIA’s mid-range GeForce GTX 560 Ti graphics card.

The effects and techniques, shown in our screenshots below, are headlined by CGI-quality bokeh depth-of-field blurring, which lets developers fine-tune out-of-focus backgrounds to create visually pleasing scenes that draw the eye to specific details in ways even real-world camera operators cannot achieve. The second major enhancement is real-time anti-aliased reflections, enabling full-quality reflections of light, characters and world geometry without the jagged edges seen in past games, whose far more basic reflections were often pre-baked to keep performance from dropping into single digits.
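For readers curious how a renderer decides how much to blur each pixel, the underlying thin-lens model is easy to sketch. The Python snippet below is a minimal, illustrative calculation of the circle-of-confusion diameter (the size of the blur disc a bokeh filter would draw for a pixel); the function and parameter names are our own, and this is a generic optics formula, not Epic’s actual implementation.

```python
def coc_diameter(depth, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter (same units as focal_len).

    depth:      distance from the camera to the shaded point
    focus_dist: distance at which the lens is perfectly focused
    focal_len:  lens focal length
    aperture:   aperture (entrance pupil) diameter
    """
    # A point exactly at the focus distance projects to a single point
    # (diameter 0); anything nearer or farther spreads into a disc,
    # which a bokeh filter renders as a circular or polygonal shape.
    return abs(aperture * focal_len * (depth - focus_dist)
               / (depth * (focus_dist - focal_len)))

# A subject 2 m away is in focus; the background 10 m away blurs.
in_focus = coc_diameter(2.0, 2.0, 0.05, 0.025)    # 0.0
background = coc_diameter(10.0, 2.0, 0.05, 0.025)  # greater than 0
```

In a real engine the larger this diameter, the bigger the bokeh sprite or blur kernel chosen for that pixel, which is what lets artists fine-tune out-of-focus regions.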

Epic is especially proud of their subsurface scattering, which allows light to pass realistically through translucent surfaces; the example shown was a torch shone through a hand, producing a glow on the opposite side. Further enhancing the believability of computer-generated humans is the addition of anti-aliased, super-sampled masked materials, or in other words, realistic-looking hair. Individual strands are rendered and anti-aliased to create smooth, layered hair, which is then improved through the addition of deferred rendering and shadowed point-light reflections, techniques that we will examine further in a future article.
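One common, inexpensive way to approximate this light-through-skin effect is "wrap lighting", which lets diffuse light bleed past the shadow terminator of thin geometry such as fingers and ears. The Python sketch below illustrates that general idea only; it is not the technique Epic uses in the Samaritan demo, and the function name is our own.

```python
def wrap_diffuse(n_dot_l, wrap):
    """Wrapped Lambertian diffuse term.

    n_dot_l: dot product of surface normal and light direction (-1..1)
    wrap:    0.0 gives standard Lambert shading; higher values let light
             'wrap' around the object, faking light scattered through
             thin translucent material (e.g. a torch behind a hand).
    """
    return max((n_dot_l + wrap) / (1.0 + wrap), 0.0)

# On the side facing away from the light (n_dot_l = -0.2), plain
# Lambert shading is fully dark, while wrap lighting lets some light
# through, producing the kind of glow described above.
dark = wrap_diffuse(-0.2, 0.0)  # 0.0
glow = wrap_diffuse(-0.2, 0.5)  # about 0.2
```

Full subsurface scattering, as demonstrated by Epic, models where light enters and exits the material rather than applying a single shading tweak, but the cheap version above conveys why the back of the hand glows at all.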

For now, gaze in amazement at the screenshots and check out our exclusive video interview with Mike Capps: