NVIDIA has officially unveiled its latest texture compression algorithm which, as one might expect, relies on AI and modern neural networks to achieve maximum efficiency. The company says the development is aimed primarily at the ever-growing memory demands placed on video cards, which must store high-resolution textures, along with the many properties and attributes associated with them, in order to render graphics with high fidelity.
The technology is called Neural Texture Compression, or NTC, and NVIDIA claims it delivers four times higher resolution (16 times more texels) than block compression (Block Compression), the standard GPU texture compression approach. Block compression is what is actively used today in a wide variety of projects that need to compress textures and other graphical assets, and NVIDIA's new algorithm, according to the company's official publication, performs considerably better. It is, however, also considerably more complex at a technical level.
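The arithmetic behind the headline numbers is simple: quadrupling resolution along each axis multiplies the texel count by 4 × 4 = 16. A quick back-of-the-envelope check (texture sizes below are illustrative, not NVIDIA's published figures):

```python
# "4x higher resolution" means 4x along each axis, so 16x the texels
# for the same memory budget. Sizes are illustrative examples only.
bc_w, bc_h = 1024, 1024              # texture stored with block compression
ntc_w, ntc_h = 4 * bc_w, 4 * bc_h    # same budget under NTC, per the claim

texel_ratio = (ntc_w * ntc_h) / (bc_w * bc_h)
print(texel_ratio)  # 16.0
```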
The company explained that the technology represents textures in tensor form (three dimensions), without relying on any of the assumptions that block compression usually makes. The developers also note that random, local access is the most important feature of the NTC algorithm: when decompressing textures on the GPU, it is critical that texels can be fetched quickly and with minimal latency, even when the highest compression ratios are used. This is exactly what NVIDIA's engineers managed to achieve.
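To illustrate what random, local access means in practice, here is a minimal sketch of per-texel decoding: each texel is reconstructed independently from a compact latent grid by a tiny network, with no dependence on neighbouring blocks. Every name and dimension below (the latent grid, the two-layer MLP, the channel counts) is a hypothetical stand-in, not NVIDIA's actual format:

```python
import numpy as np

# Hypothetical compressed representation: a low-channel latent grid plus
# small decoder weights. All sizes here are illustrative assumptions.
rng = np.random.default_rng(42)
H, W, C_LATENT, C_OUT = 256, 256, 8, 4

latent_grid = rng.standard_normal((H, W, C_LATENT)).astype(np.float32)
W1 = rng.standard_normal((C_LATENT, 32)).astype(np.float32)
W2 = rng.standard_normal((32, C_OUT)).astype(np.float32)

def decode_texel(x: int, y: int) -> np.ndarray:
    """Decode one texel at (x, y) with no dependence on its neighbours."""
    z = latent_grid[y, x]          # random access into the compressed latents
    h = np.maximum(z @ W1, 0.0)    # tiny MLP: matmul + ReLU
    return h @ W2                  # e.g. RGBA material values for this texel

texel = decode_texel(100, 37)
```

Because any texel can be decoded in isolation, a shader can sample the compressed texture on demand during rendering, which is the low-latency property the article describes.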
“Neural textures can be rendered in real time with up to 16 times more texels than block compression, and 4K content is rendered in just 1.15 ms on the RTX 4090,” NVIDIA's official blog states.
But the main advantage of the new solution over block compression algorithms is that NTC requires no special hardware. The algorithm relies only on matrix multiplications, which modern GPUs can execute natively. This makes it far more practical and flexible, since it imposes fewer restrictions when working with memory.
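The "only matrix multiplications" point can be made concrete: assuming the decoder is a small MLP (the layer sizes below are hypothetical, not NVIDIA's published architecture), the entire decode is a chain of plain GEMM operations that any modern GPU already accelerates, and its cost per texel is just the GEMM flop count:

```python
import numpy as np

# Hypothetical decoder layer shapes (in_features, out_features); the real
# network dimensions are not given in the article.
layers = [(16, 64), (64, 64), (64, 8)]

rng = np.random.default_rng(1)
weights = [rng.standard_normal(shape).astype(np.float32) * 0.1
           for shape in layers]

# Decode a batch of 1024 texel latents: nothing but matmul + ReLU.
x = rng.standard_normal((1024, 16)).astype(np.float32)
for w in weights:
    x = np.maximum(x @ w, 0.0)

# Per-texel cost is the standard GEMM flop count, 2 * m * n per layer.
flops_per_texel = sum(2 * m * n for m, n in layers)
print(flops_per_texel)  # 11264
```

Since the workload is ordinary dense linear algebra, it maps onto any GPU with fast matrix-multiply units rather than requiring a fixed-function decompression block, which is the practicality advantage the article highlights.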