SEED

Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games

Machine Learning

Graphical errors are often hard to spot by eye during game testing. This paper presents a method that uses Deep Convolutional Neural Networks (DCNNs) to detect common visual glitches in video games. The main application of this work is the partial automation of graphical testing in the final stages of video game development.

Developing a video game involves many steps, from initial concept to final release, and hundreds of developers and artists are often involved in creating a modern title. In this complex process, plenty of bugs can be introduced, many of which negatively affect the rendered images. We refer to these graphical bugs as glitches.

Graphical glitches can be introduced at several stages: when updating the asset database (resulting in missing textures), updating the codebase (resulting in corrupted textures), updating drivers, during cross-platform development, and so on. Since graphics are one of the primary components of any video game, it is important to ensure the absence of glitches or malfunctions that would otherwise degrade the player's experience.
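To illustrate the general idea, here is a minimal sketch of a convolutional classifier that maps a rendered frame to a small set of glitch classes. This is a hypothetical toy model for illustration only, not the architecture or class set used in the paper; the class names (normal, missing texture, corrupted texture) are assumptions based on the glitch types mentioned above.

```python
# Hypothetical sketch, not the paper's actual model: a tiny CNN that
# scores each rendered frame against a few assumed glitch classes.
import torch
import torch.nn as nn

class GlitchClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        # num_classes = 3 assumes: normal / missing texture / corrupted texture
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to one feature vector per frame
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB frames, shape (N, 3, H, W)
        return self.head(self.features(x).flatten(1))

model = GlitchClassifier()
frames = torch.randn(2, 3, 64, 64)  # two dummy 64x64 "rendered frames"
logits = model(frames)              # shape (2, 3): one score per class
```

In practice, a classifier like this would be trained on screenshots with deliberately injected glitches and then run over frames captured during automated play sessions, flagging suspicious images for human review.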

This paper will also be presented at the 16th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, October 19-23, 2020.

Authors: Carlos García Ling, Konrad Tollmar, Linus Gisslén

Download the Paper "Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games"

Download the paper as PDF (5.2 MB).

 

Related News

Beyond White Noise for Real-Time Rendering

SEED
May 14, 2024
SEED’s Alan Wolfe discusses the use of different types of noise for random number generation, focusing on the application of blue noise in rendering images for gaming.

Evaluating Gesture Generation in a Large-Scale Open Challenge

SEED
May 9, 2024
This paper, published in Transactions on Graphics, reports on the second GENEA Challenge, a project to benchmark data-driven automatic co-speech gesture generation.

Filter-Adapted Spatio-Temporal Sampling for Real-Time Rendering

SEED
May 1, 2024
This paper, presented at I3D 2024, discusses how to tailor the frequencies of rendering noise to improve image denoising in real-time rendering.