>>3536193
>Or is this by design?
The diffraction limit is always a property of the specific optical formula.
As you stop down, even by a single stop, diffraction lowers your *potential* maximum resolution, but stopping down also reduces the optical aberrations of imperfect (= all) lenses, so your achieved resolution can still increase.
The point past which the loss of resolution due to diffraction equals or exceeds the gain from reduced optical aberrations is the diffraction limit of the lens.
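To make that crossover concrete, here's a toy model (not any real lens; the aberration constant is invented for illustration): treat aberration blur as falling with the square of the f-number, diffraction blur as the Airy disk diameter 2.44·λ·N (roughly 7.5 µm at f/5.6 and 21.5 µm at f/16 for green light), and total blur as their quadrature sum.

```python
# Toy model of the diffraction limit of a single hypothetical lens.
# Aberration blur shrinks as you stop down; diffraction blur (Airy disk
# diameter, ~2.44 * wavelength * f-number) grows linearly. The lens's
# "diffraction limit" is the aperture where the combined blur is smallest.

WAVELENGTH_UM = 0.55   # green light, in micrometres
ABERRATION_K = 2000.0  # made-up constant; bigger = worse-corrected lens

def total_blur_um(n: float) -> float:
    aberration = ABERRATION_K / n**2         # improves rapidly when stopping down
    diffraction = 2.44 * WAVELENGTH_UM * n   # Airy disk diameter in micrometres
    return (aberration**2 + diffraction**2) ** 0.5

stops = [2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0]
for n in stops:
    print(f"f/{n:<4} -> total blur ~{total_blur_um(n):6.1f} um")

print("sharpest at f/", min(stops, key=total_blur_um))
```

With this made-up constant the lens keeps improving down to f/11; a better-corrected lens (smaller constant) would peak wider open.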
The reason old lenses were diffraction limited at smaller apertures, i.e. kept improving (relative to their wide-open performance) all the way down to f/11 and f/16, is that they were not very well corrected, so *despite* diffraction the end result was better at f/16 than at f/5.6 on the *same* lens.
Modern lenses hit their diffraction limit earlier, but that doesn't mean they're worse at f/16 than a vintage lens at f/16. They're the same or better at f/16, and better than the vintage ones at every wider aperture.
It just means that if a lens is diffraction limited at f/5.6, then f/5.6 is where you get the peak performance of *that* lens. That doesn't negate the fact that the less-than-peak performance of one lens can still be superior to the peak performance of another, as the sketch below shows.
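Same toy model as above, comparing two hypothetical lenses that differ only in how well corrected they are (both constants are made up):

```python
# Quadrature toy model: aberration blur (k / N^2) plus Airy-disk diffraction
# blur (2.44 * 0.55um * N), for a poorly corrected "vintage" lens and a
# well-corrected "modern" one. The constants are invented for illustration.

def total_blur_um(n: float, aberration_k: float) -> float:
    return ((aberration_k / n**2) ** 2 + (2.44 * 0.55 * n) ** 2) ** 0.5

stops = [2.8, 4.0, 5.6, 8.0, 11.0, 16.0]
for name, k in [("vintage", 2000.0), ("modern", 200.0)]:
    blurs = {n: total_blur_um(n, k) for n in stops}
    peak = min(blurs, key=blurs.get)
    print(f"{name}: peak at f/{peak}, blur ~{blurs[peak]:.1f} um there, "
          f"~{blurs[16.0]:.1f} um at f/16")
```

With these numbers the "modern" lens peaks at f/5.6 yet still matches or beats the "vintage" one at f/16 (~21.5 µm vs ~22.9 µm) and wins at every wider stop: hitting the diffraction limit earlier just means the peak sits wider open.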
Long story short: the diffraction limit depends solely on the lens formula, and old lenses looked "better" at smaller apertures because our eyes are good at spotting comparative differences (f/16 looks better than f/5.6) but shit at judging absolute values (the actual resolution of the lens at each aperture).