Perception‑Guided Single‑Image Quality Assessment: Recent Developments and Future Trends
Bio‑inspired image processing draws on how humans and other biological systems see the world to build better image‑processing algorithms. It connects computational neuroscience, cognitive science, and biology with real-world imaging problems. The goal is simple: help computers “see” a bit more like humans. Over the years, this line of thinking has produced many practical and effective algorithms, some closely tied to biological vision research, others more loosely inspired. The flow of ideas runs both ways: biology inspires more robust models, and, in turn, image-processing tools help researchers understand the human visual system in new ways.
In real life, digital images can be distorted at many stages: capture, transmission, compression, storage, and even display. Noise, blur, compression artifacts, and color shifts all degrade an image’s appearance. This raises an important question: how can we automatically predict an image’s quality in a way that aligns with human perception? That question is at the heart of Image Quality Assessment (IQA). Single-image (or no-reference) IQA is especially challenging because the system must judge quality from the distorted image alone, with no clean version available for comparison.

This talk explores perception-guided approaches to single-image quality assessment. It covers the core principles behind visual‑perception-driven IQA, including contrast sensitivity, structural perception, visual masking, attention, and scene understanding, and how these ideas connect to modern computational models. We examine recent developments, ranging from classical vision-based metrics to deep learning and generative approaches, and how these models aim to narrow the gap between human perception and automated evaluation.
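To make the no-reference setting concrete, here is a minimal, hypothetical sketch (plain NumPy; not any specific published metric covered in the talk): a scorer that sees only the distorted image and returns a single scalar, using the variance of the Laplacian as a crude sharpness proxy. Perception-guided models replace this heuristic with features grounded in contrast sensitivity, masking, and natural-scene statistics, but the input/output contract is the same.

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Score image 'quality' from the distorted image alone.

    Uses the variance of a 3x3 Laplacian response as a crude sharpness
    proxy: blur flattens edges, so the response variance drops. This is
    a toy stand-in for perception-guided no-reference metrics, which
    model natural-scene statistics and visual sensitivity instead.
    """
    k = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    h, w = gray.shape
    resp = np.zeros((h - 2, w - 2))
    for i in range(3):          # valid-mode 2-D convolution, written out
        for j in range(3):      # to keep the example dependency-free
            resp += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(resp.var())

# Toy sanity check: a crude 5x5 box blur should lower the score.
rng = np.random.default_rng(0)
img = rng.random((128, 128))
blurred = np.mean([np.roll(np.roll(img, di, axis=0), dj, axis=1)
                   for di in range(-2, 3) for dj in range(-2, 3)], axis=0)
assert laplacian_variance(blurred) < laplacian_variance(img)
```

Note that such a heuristic conflates sharpness with quality, which is exactly why the perceptual principles above matter: a metric aligned with human judgment must account for which distortions viewers actually notice.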