PBR-SR: Mesh PBR Texture Super Resolution from 2D Image Priors

Technical University of Munich   Intel Labs
Preprint

We present PBR-SR for physically based rendering (PBR) texture super-resolution, which outputs high-resolution, high-quality PBR textures from low-resolution PBR input in a zero-shot manner.

Abstract

We present PBR-SR, a novel method for physically based rendering (PBR) texture super-resolution (SR). It outputs high-resolution, high-quality PBR textures from low-resolution (LR) PBR input in a zero-shot manner. PBR-SR leverages an off-the-shelf super-resolution model trained on natural images and iteratively minimizes the deviations between super-resolution priors and differentiable renderings. These enhancements are then back-projected into the PBR map space in a differentiable manner to produce refined, high-resolution textures. To mitigate view inconsistencies and lighting sensitivity, which are common in view-based super-resolution, our method applies 2D prior constraints across multi-view renderings, iteratively refining the shared, upscaled textures. In parallel, we incorporate identity constraints directly in the PBR texture domain to ensure the upscaled textures remain faithful to the LR input. PBR-SR operates without any additional training or data requirements, relying entirely on pretrained image priors. We demonstrate that our approach produces high-fidelity PBR textures for both artist-designed and AI-generated meshes, outperforming both direct application of SR models and prior texture optimization methods. Our results show high-quality outputs in both PBR and rendering evaluations, supporting advanced applications such as relighting.

Video

Method Overview


PBR-SR begins with a mesh and its LR PBR texture, which are used to initialize the target SR texture. Renderings are then generated from suitably placed cameras and passed through an image restoration latent diffusion model to produce SR renderings that serve as pseudo-ground-truth (pseudo-GT) images. A differentiable mesh rasterizer generates corresponding renderings at the same resolution as the SR pseudo-GT images. A robust pixel-wise loss is applied between these renderings and the pseudo-GTs, while a per-view weighting map is jointly optimized to adaptively balance supervision. Additionally, PBR consistency constraints are enforced on the SR textures using the input LR PBR cues. This process iteratively optimizes and refines the SR textures to produce high-quality results.
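The optimization loop above can be illustrated with a deliberately simplified sketch. This is not the authors' implementation: the differentiable rasterizer is stood in by the SR texture itself (flat lighting, identity view), the robust weighted pixel loss is reduced to plain L2, and the PBR identity constraint is modeled as an average-pooling consistency term between the SR texture and the LR input. All function names and parameters (`refine_sr_texture`, `lam_id`, the pooling factor) are illustrative assumptions.

```python
import numpy as np

def avg_downsample(tex, factor):
    # Average-pool a (H, W, C) texture by an integer factor.
    H, W, C = tex.shape
    return tex.reshape(H // factor, factor, W // factor, factor, C).mean(axis=(1, 3))

def upsample_nearest(tex, factor):
    # Nearest-neighbour upsample; also used to back-project LR residuals.
    return tex.repeat(factor, axis=0).repeat(factor, axis=1)

def refine_sr_texture(lr_tex, pseudo_gt, factor=4, iters=200, step=0.5, lam_id=1.0):
    """Toy PBR-SR-style refinement (illustrative, not the paper's method):
    minimize 0.5*||sr - pseudo_gt||^2 + 0.5*lam_id*||down(sr) - lr_tex||^2
    by gradient descent, starting from the upsampled LR texture."""
    sr = upsample_nearest(lr_tex, factor).astype(np.float64)
    for _ in range(iters):
        # Gradient of the (simplified) rendering loss against the pseudo-GT.
        g_render = sr - pseudo_gt
        # Gradient of the identity constraint, back-projected to SR resolution
        # (transpose of average pooling spreads residual / factor^2 per pixel).
        resid = avg_downsample(sr, factor) - lr_tex
        g_id = lam_id * upsample_nearest(resid, factor) / factor**2
        sr -= step * (g_render + g_id)
    return sr
```

When the pseudo-GT is consistent with the LR input (its block averages equal the LR texels), the two gradients vanish simultaneously and the iteration converges to the pseudo-GT while exactly preserving the LR content, mirroring the role of the identity constraint in the full method.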

BibTeX


        @article{chen2025pbrsr,
          title={PBR-SR: Mesh PBR Texture Super Resolution from 2D Image Priors},
          author={Chen, Yujin and Nie, Yinyu and Ummenhofer, Benjamin and Birkl, Reiner and Paulitsch, Michael and Nie{\ss}ner, Matthias},
          journal={arXiv preprint arXiv:2506.02846},
          year={2025}
        }