Mandatory Graphics Techniques For Next Gen

It is 2014. If you read my editorial last week, you know I talked about moving on from last-gen hardware and focusing entirely on the new consoles and PCs. I say this because I want our industry to keep marching forward rather than being shackled by weaker hardware. I also believe a new console cycle is the perfect time to erase old standards and set new yardsticks by which to measure the sheer technical might of our industry.

That being the case, I have created a (non-exhaustive) list of visual and graphics features that absolutely must be in all games going forward on next-gen consoles. Why not on PC? The simple answer is that most of these techniques are already used in PC games today. Yes, this piece will be tech-heavy, though I will do my best to explain things. Still, if listening to me preach about constant improvements in graphics and technology simply annoys you, feel free to read something else.

Everything listed here can be done today. In fact, everything listed here could be done last year on either the older version of CryEngine (CryEngine 3) or Frostbite 3 – even though modern engines such as The Division’s Snowdrop look incredibly impressive. There is simply no reason for these effects not to be in all titles going forward. Because of that, you will see frequent references to both of these engines and the games built on them. They are simply the top of the class and should be recognized and respected for their pioneering achievements.

Lighting
Given that ray tracing is a long way off from coming to PCs (never mind consoles), it makes sense to discuss how lighting in games can improve and which features should be commonplace henceforth.

First off, deferred lighting absolutely must be everywhere. The power is there in these new machines, and certainly in PCs, so it should be implemented. Just what is deferred lighting? In essence, deferred lighting allows the efficient rendering of a vast number of light sources with per-pixel shading. The scene’s geometry is rendered first into off-screen buffers, and a lighting pass is then applied on a per-pixel basis. This way, the geometry doesn’t have to be re-rendered for every light.
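The two passes can be sketched in a few lines of plain Python; the two-pixel "G-buffer" and the light values here are invented for illustration, not any engine's actual data:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

# Pass 1: "render" the scene into a G-buffer. Each pixel stores the
# surface attributes lighting needs (position, normal, albedo); the
# geometry is rasterized exactly once, no matter how many lights exist.
gbuffer = [
    {"pos": (0.0, 0.0, 0.0), "normal": (0.0, 1.0, 0.0), "albedo": 0.8},
    {"pos": (1.0, 0.0, 0.0), "normal": (0.0, 1.0, 0.0), "albedo": 0.5},
]

lights = [
    {"pos": (0.0, 2.0, 0.0), "intensity": 4.0},
    {"pos": (2.0, 1.0, 0.0), "intensity": 2.0},
]

# Pass 2: per-pixel lighting. Cost scales with pixels times lights, but
# the scene geometry is never touched again.
def shade(pixel, lights):
    total = 0.0
    for light in lights:
        to_light = tuple(l - p for l, p in zip(light["pos"], pixel["pos"]))
        dist2 = dot(to_light, to_light)
        ndotl = max(0.0, dot(pixel["normal"], normalize(to_light)))
        total += pixel["albedo"] * light["intensity"] * ndotl / dist2
    return total

image = [shade(px, lights) for px in gbuffer]
```

Adding a hundred more lights here only grows the inner loop of pass 2; pass 1 is untouched, which is the entire appeal of the technique.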

Of course, ambient occlusion is a must. I’ve explained this technique many times before, so I won’t beat a dead horse here. Dynamic soft shadows should be commonplace as well: shadows that soften and respond to movement in real time. It goes without saying that implementations should be high-resolution, perspective-correct, and volumetric. In other words, in-game shadows should look and behave realistically.
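The intuition behind soft shadows from an area light can be sketched with a toy 2-D setup; all positions and sizes below are made up for illustration:

```python
# A light spanning several sample points at height 10, an occluder ledge
# at height 5, and shaded points on the ground. The shadow factor is the
# fraction of light samples a ground point can actually "see"; points
# that see only some samples fall in the soft penumbra.
def shadow_factor(point_x, light_xs=(3.0, 4.0, 5.0, 6.0, 7.0),
                  occluder=(1.0, 4.0), light_y=10.0, occluder_y=5.0):
    visible = 0
    for lx in light_xs:
        # Where the ray from the ground point to this light sample
        # crosses the occluder's height.
        t = occluder_y / light_y
        cross_x = point_x + (lx - point_x) * t
        if not (occluder[0] <= cross_x <= occluder[1]):
            visible += 1
    return visible / len(light_xs)

umbra = shadow_factor(0.0)      # every sample blocked: hard shadow
penumbra = shadow_factor(3.0)   # partially blocked: soft edge
lit = shadow_factor(10.0)       # nothing blocked: fully lit
```

A point light has only one sample, so this factor is always 0 or 1 and the shadow edge is razor sharp; the gradient between umbra and full light is exactly what "soft" means.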

Irradiance volumes should be implemented as well. This is slightly more complex to explain, especially given that this is only recently becoming more viable due to greater memory pools, but I’ll do my best. Imagine you have a flashlight that emits white light. You then shine this light on a bright green ball. The light bouncing off the ball will now have a slightly greenish hue to it.

Irradiance volumes allow for this green-tinged reflected light. They also let you place as many light sources as you need, even if those sources have overlapping radii (think ceiling lamps). Irradiance volumes will also provide appropriate lighting values and color bleed for objects interacting with those overlapping lights (think differently colored Christmas tree lights interacting with each other).
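A toy 1-D version of the idea, with made-up probe positions, light radii, and colors:

```python
# Probes on a grid each store the summed RGB irradiance of every light
# in range, so overlapping colored lights blend into a single stored
# value; surfaces then interpolate between the nearest probes.
def bake_probes(positions, lights):
    probes = []
    for p in positions:
        rgb = [0.0, 0.0, 0.0]
        for light in lights:
            d = abs(light["pos"] - p)
            if d < light["radius"]:
                falloff = 1.0 - d / light["radius"]  # simple linear falloff
                for i in range(3):
                    rgb[i] += light["color"][i] * falloff
        probes.append(rgb)
    return probes

def sample(probes, spacing, x):
    """Linearly interpolate irradiance between the two nearest probes."""
    i = min(int(x / spacing), len(probes) - 2)
    t = x / spacing - i
    return [a * (1 - t) + b * t for a, b in zip(probes[i], probes[i + 1])]

# Two overlapping lights, one red and one green, like Christmas lights.
lights = [
    {"pos": 0.0, "radius": 3.0, "color": (1.0, 0.0, 0.0)},
    {"pos": 2.0, "radius": 3.0, "color": (0.0, 1.0, 0.0)},
]
probes = bake_probes([0.0, 1.0, 2.0, 3.0], lights)
midpoint = sample(probes, 1.0, 1.0)  # red and green bleed together here
```

Real engines do this with a 3-D grid and far fancier encodings, but the payoff is the same: any point in the volume gets a plausible, already-blended lighting value from a cheap lookup.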

All of this leads nicely to volumetric lighting, that is, light that occupies a physical volume and reacts in a physically accurate way with the surrounding environment and objects. The most obvious example would be the god rays in Crysis 3 or Battlefield 4. In most games, god rays are a post-process effect that doesn’t physically react with the environment. However, the god rays in the aforementioned games interact physically with smoke, for example. It sounds trivial, but it’s not.
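The standard way to get that interaction is to march a ray through the air itself. Here is a toy version, assuming a single shaft of light, a hand-made smoke density function, and unit light intensity:

```python
import math

# At each step we attenuate what remains of the light and accumulate
# what gets scattered toward the camera -- which is why the shaft
# visibly dims and brightens around smoke instead of being painted on
# afterwards.
def smoke_density(x):
    return 0.5 if 2.0 <= x <= 3.0 else 0.05  # a dense pocket of smoke

def god_ray(ray_length=5.0, steps=100, light_intensity=1.0):
    dx = ray_length / steps
    transmittance = 1.0  # fraction of light still unabsorbed
    scattered = 0.0      # light redirected toward the camera
    for i in range(steps):
        x = (i + 0.5) * dx
        transmittance *= math.exp(-smoke_density(x) * dx)
        scattered += light_intensity * smoke_density(x) * transmittance * dx
    return scattered, transmittance

light_seen, remaining = god_ray()
```

Note that the scattered light plus the surviving light roughly add up to what entered the volume: the smoke redirects energy, it doesn't invent it, which is what makes the effect read as physical.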

Real-time reflections are a must. Admittedly, games have been doing this since the introduction of DirectX 11, with modern console games like Killzone Shadow Fall as a recent example. The final result of a reflection depends on many factors, such as physically accurate material shaders, but this is thankfully becoming more and more commonplace.

Even though ray tracing is some ways off, some engines, like CryEngine 3, implement a single-bounce lighting solution. This allows a photon’s bounce off one surface to be calculated as it hits a second surface; in rendering, this is known as indirect lighting. Ideally, you need many more bounces to achieve a realistic result – typically anywhere from 8 to 16 – but the resource expense grows with each bounce while the visual contribution shrinks. It is therefore realistic and practical to implement a single-bounce solution and use the above-mentioned techniques to fill in the gaps.
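The diminishing-returns argument is easy to see with back-of-the-envelope arithmetic for a bounce between two Lambertian patches; all numbers here are illustrative:

```python
# Patch A receives direct light, re-emits a fraction of it (its albedo),
# and a geometric coupling term decides how much of that reaches patch B.
# Repeating the step shows why extra bounces cost more while adding less.
def bounce(incoming, albedo, form_factor):
    """Light a patch passes on to a neighbor after one reflection."""
    return incoming * albedo * form_factor

direct_a = 10.0       # direct light arriving at patch A
albedo_a = 0.6        # A re-emits 60% of what it receives
coupling_ab = 0.2     # hypothetical geometric coupling between A and B

first_bounce = bounce(direct_a, albedo_a, coupling_ab)        # 1.2 units reach B
second_bounce = bounce(first_bounce, albedo_a, coupling_ab)   # only 0.144 more
```

Each bounce multiplies the remaining energy by a factor well below one, so the first bounce carries most of the indirect look while later bounces pay full cost for a sliver of light.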

Particles and Environment

Soft particles are used pretty much everywhere now, so this honestly isn’t a huge concern. What does need to be addressed, however, is particle shading, which allows particles to receive full motion blur as well as shadow and lighting information. This information can come from environment probes and global illumination to create more believable particle effects.

Pairing this with truly volumetric particles will aid in creating more accurate particle systems. An example of this is a column of smoke that takes up a real volume in space. This smoke would also interact with light in a realistic way as it does in Battlefield 4.

Full object and character tessellation is a must. Again, it’s 2014; DirectX 11 is here. Make it so. Parallax occlusion mapping (POM) can be used to great effect to complement tessellation: it is effectively a cheaper alternative that provides nearly the same visual result on suitable surfaces.
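How POM fakes depth without extra geometry can be sketched as a 1-D march through a hand-made depth map (0 means the surface sits at the top; larger values are carved inward):

```python
# POM walks the view ray through the depth map and samples the texture
# where the ray actually intersects the stored surface, instead of
# adding real geometry as tessellation does.
def depth_map(u):
    return 0.3 if u <= 0.5 else 0.0  # left half recessed, right half raised

def pom_hit(u_start, view_slope, max_depth=0.3, steps=100):
    """Return the texture coordinate where the view ray meets the surface."""
    u, d = u_start, 0.0
    for _ in range(steps):
        if d >= depth_map(u):
            return u  # ray has reached the stored surface depth
        u += view_slope * max_depth / steps  # advance sideways...
        d += max_depth / steps               # ...and downward
    return u

# A grazing ray entering above the recessed half samples a texel well
# past its entry point, which reads as real depth from that angle.
hit = pom_hit(u_start=0.2, view_slope=2.0)
```

Because the shift depends on the view angle, the surface appears to have genuine relief as the camera moves, even though the polygon underneath is perfectly flat.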

Subsurface scattering (SSS) should also be implemented on characters (to make their skin look and feel more realistic) and objects (think light passing through leaves and clothing). SSS occurs when light enters an object such as skin, scatters as it interacts with the material inside, and then exits at a different point. A good example is Skyrim with an ENB preset, where characters’ skin looks far more believable.
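A back-of-the-envelope sketch of the transmission side of SSS: light entering a translucent slab is attenuated exponentially with thickness, and the medium absorbs some wavelengths faster than others. The per-channel absorption values below are illustrative, not measured skin data; they just encode that red penetrates skin deepest, which is why backlit ears glow red:

```python
import math

def transmitted(light_rgb, thickness, absorption_rgb):
    # Beer-Lambert style falloff, per color channel.
    return [
        c * math.exp(-a * thickness)
        for c, a in zip(light_rgb, absorption_rgb)
    ]

white_light = (1.0, 1.0, 1.0)
skin_absorption = (0.5, 2.0, 3.0)  # red, green, blue (illustrative)

thin = transmitted(white_light, 0.2, skin_absorption)   # an ear lobe
thick = transmitted(white_light, 2.0, skin_absorption)  # through a palm
```

The thin sample comes out warm and bright while the thick one is dim and almost entirely red, which matches the look SSS is meant to reproduce.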

Water is one area where great advancements have been made, as recently as Assassin’s Creed IV, for which an entirely new sea engine was crafted. Techniques such as custom shaders and tessellation were used, but more can be done. CryEngine does a phenomenal job with the caustics caused by displaced water. In-game water should interact in a physically accurate manner not only with its surroundings, but also with the lighting solution.

Just what are caustics? A caustic is formed by light rays that have been reflected or refracted by a curved surface, or by the projection of those bent rays onto another surface. The best example is the pattern of light cast onto your table by a resting glass of water.
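The refraction step behind that pattern is just Snell's law (n1 sin θ1 = n2 sin θ2), sketched here with the real indices of air (1.0) and water (about 1.33):

```python
import math

# Parallel rays striking a curved surface hit at different incidence
# angles, so they bend by different amounts and converge below the
# surface -- those crossings are the bright caustic lines.
def refract_angle(incident_deg, n1=1.0, n2=1.33):
    """Angle of the ray after entering water, in degrees."""
    s = n1 / n2 * math.sin(math.radians(incident_deg))
    return math.degrees(math.asin(s))

# Steeper incidence bends more, so these rays end up crossing.
angles = [refract_angle(a) for a in (10.0, 30.0, 50.0)]
```

Every refracted angle is smaller than its incident angle (the ray bends toward the normal entering the denser medium), and the uneven compression of angles is what focuses light into bright lines on the table.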

Post Processing

Finally, post processing is the final flourish after the frame has gone through the graphics pipeline. Since most antialiasing solutions we see today are trending towards post-process solutions, I don’t see this changing. My hope is that FXAA is phased out as more effective sub-pixel algorithms are adopted.
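The core move of any post-process antialiaser can be shown in a bare-bones sketch in the spirit of FXAA, operating on a made-up 1-D row of luminance values; real FXAA adds edge-direction detection and sub-pixel handling on top of this idea:

```python
# Where a pixel differs sharply from its neighbors' average, blend it
# toward that average to soften the edge; flat regions pass through
# untouched.
def aa_pass(row, contrast_threshold=0.2):
    out = list(row)
    for i in range(1, len(row) - 1):
        local = (row[i - 1] + row[i + 1]) / 2
        if abs(row[i] - local) > contrast_threshold:
            out[i] = (row[i] + local) / 2  # blend toward the neighborhood
    return out

smoothed = aa_pass([0.0, 0.0, 1.0, 1.0])  # hard edge becomes a ramp
```

This is also why such filters can smear fine detail: the pass only sees final pixel colors and has no idea whether a contrast step is an aliased polygon edge or intentional texture.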

Of course, possibly the best solution would be deferred shading in which both geometry and the alpha channel receive antialiasing, but considering the resource-intensive nature of that approach, I highly doubt we will see it on consoles.

Full motion blur is a must. Frankly, I don’t see motion blur going away, as it is more or less the norm now. Motion blur includes not only camera blur, but object blur as well (think of the blurriness of your hand as you quickly wave it back and forth).
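Object blur can be sketched as temporal supersampling in a made-up 1-D world: an object of unit width slides across the screen during the frame's exposure, and each pixel's brightness is the fraction of the exposure for which the object covered it:

```python
# Fast objects smear across many partially-covered pixels; still objects
# cover their pixels for the whole exposure and stay sharp.
def pixel_coverage(pixel, obj_start, velocity, exposure,
                   width=1.0, samples=100):
    hits = 0
    for i in range(samples):
        t = exposure * (i + 0.5) / samples      # time within the exposure
        pos = obj_start + velocity * t          # object position at time t
        if pos <= pixel < pos + width:
            hits += 1
    return hits / samples

sharp = pixel_coverage(0.0, 0.0, velocity=0.0, exposure=1 / 30)     # fully covered
smeared = pixel_coverage(1.0, 0.0, velocity=60.0, exposure=1 / 30)  # half-bright trail
```

Production renderers approximate this with per-pixel velocity buffers rather than brute-force time samples, but the quantity being estimated is the same.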

Depth of field (DoF) and high dynamic range (HDR) lighting are also musts. DoF does not need to be used extensively; its effects are best saved for subtler circumstances, such as aiming down the sights of a gun. HDR is rather trivial at this point, seeing as it has been implemented in games for years.
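The quantity behind DoF is the thin-lens "circle of confusion": points off the focal plane project to discs rather than points, and the disc diameter tells the renderer how much to blur that depth. The numbers below (a 50 mm lens at f/2.8) are illustrative, not taken from any particular game:

```python
# Standard thin-lens circle-of-confusion diameter, all distances in mm.
def coc_mm(subject_mm, focus_mm, focal_mm=50.0, f_stop=2.8):
    aperture = focal_mm / f_stop  # entrance pupil diameter
    return abs(
        aperture * focal_mm * (subject_mm - focus_mm)
        / (subject_mm * (focus_mm - focal_mm))
    )

in_focus = coc_mm(subject_mm=2000.0, focus_mm=2000.0)   # zero blur on the plane
far_away = coc_mm(subject_mm=10000.0, focus_mm=2000.0)  # large, soft disc
```

When aiming down sights, the game simply sets the focus distance to the weapon and lets everything beyond it pick up a large circle of confusion.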

. . .

In case I didn’t make my point clear, I want to see our industry forge new grounds. This is why the techniques I discussed above are meant to be a yardstick. That is, these techniques should be the absolute minimum suite of techniques to be used in game.

No doubt, more efficient and newer techniques will come along. I sincerely hope that as new techniques are introduced to our industry, they are subsequently implemented. Only through such innovation will our industry charge ahead in the technical space. All of these techniques combined are used to further drive home the immersion and escapism games provide. Let us not lose sight of that.

One comment

  • I haven’t read much of this article yet but I feel it is important for me to say that along with all of these graphical improvements and features, vertical sync should also be mandatory with next gen. At least with the consoles. If console devs won’t add in-game options of v-sync on or off and fps to be 30 or 60 fps, then I think v-sync should be on. Yes it adds a bit of stuttering and lag but with proper implementation and optimization, it can get pretty smooth.
    I understand why there’s so much screen tearing on last gen consoles.
    Although I hate thinking that screen tearing is “next gen” or allowed to be on “next gen consoles”
    To me at least, with not having a g-sync monitor yet or even a 90+Hz monitor yet(as with most console gamers), screen tearing is very disorienting, distracting, and immersion-breaking.
    I will indeed be very sad if Titanfall, the presumed Xbox One “next gen title” (which it isn’t really), ships with screen tearing; that’s not next gen at all, and I won’t even want to try that experience.
    I understand that tearing is considered “ok” in competitive, fast-paced games like FPSes, but I won’t have any of it.
