Antialiasing and Anisotropic Filtering


Anisotropic Filtering

Anisotropic filtering improves the clarity and crispness of textured objects in games. Textures are images containing various types of data, such as color, transparency, reflectivity, and bumps (normals), that are mapped onto an object and processed by the GPU to give it a realistic appearance on your screen. At its native dimensions, however, a typical texture is far too expensive to reuse unconditionally throughout a scene: the distance between a textured surface and the camera determines how much detail is actually observable, so taking many full-resolution texture samples for a surface that covers only a small portion of the 3D scene wastes processing time. To preserve both performance and image quality, mipmaps are used; mipmaps are copies of a master texture pre-computed at progressively lower resolutions, which the graphics engine can call upon when the corresponding surface is a given distance from the camera. With proper filtering, the use of multiple mipmap levels in a scene can have no discernible impact on its appearance while greatly improving performance.
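
To make the idea concrete, here is a small sketch in Python, purely for illustration: the names build_mipmaps and select_mip_level are hypothetical, and a real engine does this work on the GPU. It builds a chain of progressively halved texture levels and picks a level based on an estimated texel-per-pixel density.

```python
# Illustrative sketch of a mipmap chain: each level halves the previous one.
# Names (build_mipmaps, select_mip_level) are hypothetical, not any real API.
import math

def build_mipmaps(texture):
    """texture: square 2D list of texel values with power-of-two dimensions."""
    chain = [texture]
    while len(chain[-1]) > 1:
        prev = chain[-1]
        half = len(prev) // 2
        # Average each 2x2 block of the previous level into one texel.
        level = [[(prev[2*y][2*x] + prev[2*y][2*x+1] +
                   prev[2*y+1][2*x] + prev[2*y+1][2*x+1]) / 4.0
                  for x in range(half)]
                 for y in range(half)]
        chain.append(level)
    return chain

def select_mip_level(texels_per_pixel):
    """Pick the level whose texel density roughly matches one texel per pixel."""
    return max(0, int(round(math.log2(max(texels_per_pixel, 1.0)))))

mips = build_mipmaps([[float(x + y) for x in range(8)] for y in range(8)])
print(len(mips), "levels; distant surface uses level", select_mip_level(6.5))
```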


Because each mipmap level is conventionally a power of two smaller than the original texture, there are distances at which a single texel falls between two mipmap levels, and the filtering method in use must compensate to avoid blurring and other visual artifacts. Bilinear filtering is the default, being the simplest and computationally cheapest form of texture filtering: to calculate a texel's final color, four texel samples are taken from the mipmap level chosen by the graphics engine around the point where the target texel lands on screen, and the displayed color is a blend of those samples. While this accounts for distortions in texture angles, bilinear filtering samples exclusively from the single mipmap level chosen by the graphics engine, so a perspective-distorted texture that spans the point where the engine switches between two mipmap sizes shows a pronounced shift in clarity. Trilinear filtering, the visual successor to bilinear filtering, offers smooth transitions between mipmaps by continuously sampling and interpolating (averaging) texel data from the two closest mipmap levels for the target texel. Both approaches, however, assume that the texture is displayed square-on to the camera, and thus suffer quality loss when a texture is viewed at a steep angle: the on-screen texel footprint covers a depth longer than, and a width narrower than, the area the mipmap samples represent, resulting in blurriness from under- and over-sampling respectively.
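
A rough sketch of both samplers follows, under the simplifying assumption of a single-channel texture stored as nested lists; the bilinear and trilinear helpers are illustrative stand-ins, not the fixed-function hardware path a GPU actually uses.

```python
# Minimal sketch of bilinear and trilinear sampling (hypothetical helper names;
# real GPUs do this in fixed-function texture units, not in Python).

def bilinear(level, u, v):
    """Blend the four texels nearest to continuous coordinates (u, v)."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(level[0]) - 1)
    y1 = min(y0 + 1, len(level) - 1)
    fx, fy = u - x0, v - y0
    top = level[y0][x0] * (1 - fx) + level[y0][x1] * fx
    bottom = level[y1][x0] * (1 - fx) + level[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def trilinear(mips, u, v, lod):
    """Blend bilinear results from the two mip levels bracketing `lod`."""
    lo = min(int(lod), len(mips) - 1)
    hi = min(lo + 1, len(mips) - 1)
    f = lod - lo
    # Texture coordinates halve with each mip level.
    a = bilinear(mips[lo], u / (2 ** lo), v / (2 ** lo))
    b = bilinear(mips[hi], u / (2 ** hi), v / (2 ** hi))
    return a * (1 - f) + b * f

# A toy 8x8 base level and its box-filtered 4x4 mip.
mips = [[[float(x + y) for x in range(8)] for y in range(8)],
        [[2.0 * (x + y) + 1.0 for x in range(4)] for y in range(4)]]
print(trilinear(mips, 3.2, 2.7, 0.4))
```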

Anisotropic filtering exists to provide superior image quality in virtually all such cases at a slight expense in performance. In graphics terms, anisotropy is the quality of being directionally dependent, which applies to any texture that is not displayed perfectly perpendicular to the camera. As previously mentioned, bilinear and trilinear filtering suffer quality loss when the sampled textures are oblique to the camera because both methods take mipmap samples as if the mapped texel were perfectly square in the rendered space, which is rarely true. This quality loss is also related to the fact that mipmaps are isotropic, scaled equally in both dimensions, so when a texel's footprint is trapezoidal the samples under-cover it along one axis and over-cover it along the other. To solve this, anisotropic filtering scales either the height or the width of a mipmap by a ratio derived from the perspective distortion of the texture, capped by the maximum sampling value specified, and then takes the appropriate samples. AF can operate at anisotropy levels between 1 (no scaling) and 16, defining the maximum degree to which a mipmap can be scaled, but AF is commonly offered to the user in powers of two: 2x, 4x, 8x, and 16x. The difference between these settings is the maximum angle at which AF will filter the texture. For example, 4x will filter textures at angles twice as steep as 2x, but will still apply standard 2x filtering to textures within the 2x range to optimize performance. Higher AF settings bring subjective diminishing returns because the steep angles at which they apply become increasingly rare.
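
The sketch below illustrates one plausible way an anisotropy ratio could be derived from how far the texture coordinates stretch across the screen and then clamped to the user's maximum setting; the function name and the derivative inputs are assumptions made for illustration, not NVIDIA's actual driver logic.

```python
# Hypothetical sketch of deriving and clamping an anisotropy ratio.
# The du/dv terms approximate screen-space derivatives of the texture coordinates.
import math

def anisotropy_ratio(dudx, dvdx, dudy, dvdy, max_aniso=16):
    """Length of the texel footprint's major axis over its minor axis,
    clamped to the maximum anisotropy setting."""
    len_x = math.hypot(dudx, dvdx)   # footprint extent along screen x
    len_y = math.hypot(dudy, dvdy)   # footprint extent along screen y
    major = max(len_x, len_y)
    minor = max(min(len_x, len_y), 1e-6)
    return min(major / minor, float(max_aniso))

# A surface viewed nearly head-on is almost isotropic; a steeply angled floor is not.
print(anisotropy_ratio(1.0, 0.0, 0.0, 1.1))   # ~1.1 -> little extra filtering
print(anisotropy_ratio(1.0, 0.0, 0.0, 9.0))   # 9.0  -> many samples along the major axis
print(anisotropy_ratio(1.0, 0.0, 0.0, 40.0), "(clamped to 16x)")
```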

Anisotropic filtering can be controlled through the NVIDIA Control Panel within the 3D Settings section; however, for the best performance and compatibility, NVIDIA recommends that users set this to be controlled by the application.

Anti-aliasing

Anti-aliasing, or "AA" for short, is a rendering technique used to minimize the appearance of aliasing, a type of visual artifact that resembles the steps of a staircase, along any non-perpendicular edges of objects in a 3D scene. Aliasing is a consequence of an integral operation in the modern 3D rendering pipeline: rasterization, the process of translating a scene of effectively infinite detail into a discrete pixel matrix (the monitor). Rasterization tends to create visible breaks in edge continuity because the GPU colors a pixel only if the edge passing through it covers more than half of that pixel, producing a jagged edge where our eyes would expect a smooth, continuous line.
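
The following toy rasterizer (hypothetical names, ASCII output in place of real pixels) shows how a simple covered-or-not decision per pixel turns a gently sloped edge into the familiar staircase.

```python
# Tiny sketch of why rasterization produces "stair steps": each pixel is either
# fully colored or not, based on a single test at its center.

def raster_edge(width, height, slope, intercept):
    """Fill a pixel only if its center lies below the line y = slope*x + intercept."""
    rows = []
    for py in range(height):
        row = ""
        for px in range(width):
            cx, cy = px + 0.5, py + 0.5          # pixel center
            row += "#" if cy < slope * cx + intercept else "."
        rows.append(row)
    return "\n".join(rows)

# A gently sloped edge turns into discrete steps -- the aliasing that AA tries to hide.
print(raster_edge(16, 6, 0.3, 0.5))
```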


As with anisotropic filtering, there are multiple approaches to this technique, each carrying certain advantages and disadvantages; the two most common are supersampling and multisampling. Ultimately both types achieve the same effect, producing an intermediary color for a single pixel by sampling neighboring sub-pixels (virtual pixels that are rendered by the GPU but not displayed on the screen) to generate a more realistic edge for an object; they differ in how that final color is obtained.

Supersampling is a brute-force AA method that calculates a pixel's color by forcing the GPU to oversample the frame, rendering it with the sampling rate times as many pixels (for example, a native resolution of 1680x1050 with 4 samples is rendered at 3360x2100), obtaining color data from samples around each target pixel, and then downsampling (reducing) the frame to its original size. After the frame has been reduced, a negative level-of-detail bias is applied to sharpen sampled textures and counteract the blurring that downsampling and pixel merging produce. Supersampling is a form of full-scene anti-aliasing, meaning that every pixel in the frame is sampled and corrected rather than only those lying on an object's outer boundary; this gives it exceptional image quality, but at an enormous cost to performance, since the GPU must compute so much additional information. The supersampling anti-aliasing modes are not available through the NVIDIA Control Panel.
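
The sketch below shows only the sampling-and-downsampling idea for a 4x (2x2 sub-pixel) case, with a stand-in shade() function in place of a real renderer; the level-of-detail bias step and everything else a real driver does are omitted.

```python
# Sketch of the downsampling step in 4x supersampling: the frame is rendered at
# twice the width and height, then each 2x2 block of sub-pixels is averaged
# into one displayed pixel. shade() is a hypothetical stand-in for the renderer.

def shade(x, y):
    """Pretend renderer: bright above a sloped edge, dark below."""
    return 1.0 if y < 0.4 * x + 1.0 else 0.0

def supersample(width, height, factor=2):
    final = []
    for py in range(height):
        row = []
        for px in range(width):
            # Average all sub-pixel samples covering this displayed pixel.
            samples = [shade(px + (sx + 0.5) / factor, py + (sy + 0.5) / factor)
                       for sy in range(factor) for sx in range(factor)]
            row.append(sum(samples) / len(samples))
        final.append(row)
    return final

for row in supersample(12, 5):
    print(" ".join(f"{v:.2f}" for v in row))   # edge pixels land between 0 and 1
```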

Multisampling still oversamples the frame (renders it at a larger resolution) in the same fashion as supersampling, but to conserve processing time each sub-pixel inherits its color value from its parent pixel and only the depth values are computed individually per sub-pixel (whereas supersampling calculates individual color and depth values for every sub-pixel). Because the GPU knows which pixel will be displayed when the frame is downsampled, the final color is calculated by evaluating the depth values: if the sub-pixel depths are inconsistent, the pixel lies on an edge and is colored by adjusting its opacity according to the number of samples taken and how many of the sub-pixels had differing depth values (which for 4x yields 100%, 75%, 50%, 25%, or 0%). This method demands substantially less of the GPU than supersampling while preserving acceptable frame rates, which is why it is typically available in most 3D games.
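
A minimal sketch of that blending step for 4x multisampling follows, assuming the per-sub-pixel depth comparison has already been reduced to a covered/not-covered flag; msaa_pixel and its inputs are hypothetical names for illustration.

```python
# Sketch of the 4x multisampling blend described above: one shaded color per pixel,
# but a coverage decision per sub-pixel; the coverage fraction (0%, 25%, 50%, 75%,
# or 100%) decides how strongly that color is blended over the background.

def msaa_pixel(polygon_color, background, subpixel_covered):
    """subpixel_covered: list of booleans, one per sub-sample (4 entries for 4x)."""
    coverage = sum(subpixel_covered) / len(subpixel_covered)
    # Blend the single shaded color by its coverage, like adjusting its opacity.
    return tuple(polygon_color[i] * coverage + background[i] * (1 - coverage)
                 for i in range(3))

red, black = (1.0, 0.0, 0.0), (0.0, 0.0, 0.0)
print(msaa_pixel(red, black, [True, True, True, True]))    # interior: fully red
print(msaa_pixel(red, black, [True, True, False, False]))  # edge: 50% blend
print(msaa_pixel(red, black, [True, False, False, False])) # edge: 25% blend
```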

Multisampling can be forced at several sampling rates (2x up to 16x) through the NVIDIA Control Panel within the 3D Settings section; however, for the best performance and compatibility, NVIDIA recommends that users set this to be controlled by the application.
