SING: Analyzing Semantic Invariants in Classifiers

CVPR 2026 · Paper · GitHub

Almost every neural network classifier has a null space: directions in feature space that leave the predicted class scores completely unchanged. SING reveals what semantic content lives there by projecting features into a vision-language embedding space and comparing the original representation to its null-space-removed equivalent.

IS (Image Score) measures the angular drift between the two, in degrees; AS (Attribute Score) measures whether that drift moves features toward or away from a concept such as the predicted class. A high IS means the null space contains rich semantic content (colors, textures, backgrounds) that is invisible to the classifier; a positive AS means the null space was suppressing the concept, while a negative AS means it was reinforcing it. As a rule of thumb, IS > 12° or |AS| > 4° indicates substantial semantic leakage.

For this demo, we trained translators for the penultimate layer of 13 different ImageNet-1k classifiers for you to play with. Feel free to pick your own image or use one of the examples, and see how different models discard different semantics, and how strongly null-space removal affects them, in CLIP space.
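The null-space removal and the two scores can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight and feature shapes are invented, the trained CLIP-space translator is omitted (the two features are compared directly), and the sign convention for AS is an assumption chosen so that a positive score means removal moved the feature toward the concept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: a 1000-class linear head over 2048-dim penultimate features.
W = rng.standard_normal((1000, 2048))   # classifier weight matrix
f = rng.standard_normal(2048)           # a penultimate feature vector

# Null space of W: directions v with W @ v = 0, i.e. the right-singular
# vectors whose singular values are (numerically) zero.
U, s, Vt = np.linalg.svd(W, full_matrices=True)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]                  # shape (2048 - rank, 2048)

# Remove the null-space component; the logits W @ f are unchanged.
f_clean = f - null_basis.T @ (null_basis @ f)
assert np.allclose(W @ f, W @ f_clean, atol=1e-6)

def angle_deg(a, b):
    """Angle between two vectors, in degrees."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# IS: angular drift caused by null-space removal. In SING both features
# would first be mapped into CLIP space by the trained translator.
IS = angle_deg(f, f_clean)

# AS: drift toward/away from a concept embedding c (random stand-in here).
# Positive = removal moved the feature toward the concept, i.e. the null
# space was suppressing it (assumed sign convention).
c = rng.standard_normal(2048)
AS = angle_deg(f, c) - angle_deg(f_clean, c)
```

The SVD-based construction makes the core invariant explicit: subtracting any combination of null-space directions cannot change the logits, yet (as IS shows) it can rotate the feature substantially.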

Model
Examples