Oh, I’ll bet this has been asked and answered a billion times, but has anyone ever seen a good explanation of the difference between film grain and digital pixels?
What I mean is that you can take a nice “clean” digital image from a good-sized, low-noise sensor and, with some interpolation, go very large with it, even though you’re starting from a file that’s actually much smaller than a full 35mm scan, which comes in around 78 MB (or somewhere in that area, if it’s 16-bit RGB).
But as I say, I can take a much smaller digital file and easily interpolate up to that size without seeing any noise or artifacts.
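As a rough illustration of that kind of enlargement, here’s a minimal sketch of bicubic upscaling. The “image” is just a hypothetical smooth gradient standing in for a clean digital capture; the point is only that interpolation of smooth, noise-free data scales up without introducing artifacts:

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical stand-in for a small, noise-free digital image:
# a smooth synthetic gradient rather than a real sensor capture.
small = np.linspace(0.0, 1.0, 64).reshape(8, 8)

# Bicubic interpolation (order=3) enlarges it 4x per axis; because the
# data is smooth, the result stays smooth -- no noise appears.
big = zoom(small, 4, order=3)

print(big.shape)  # (32, 32)
```

Real interpolation programs add sharpening and edge handling on top of this, but the core operation is the same.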
I’m not saying it very clearly, but I remember that when I began the switch from film to digital, I had the idea that those grains corresponded to pixels, and they just don’t. Anyone else ever go through the same conceptual enigma?
film grain is somewhat organic and random, whereas pixels sit in a rigid, ordered grid, and normal linear (or cubic, bicubic) algorithms just work better with that sort of “order”.
if you take a film scan with organic grain, the grain interferes with the pixel pattern, and your imaging software doesn’t know exactly what to do with that seemingly random mess.
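you can see this numerically with a toy round-trip test (all hypothetical data): downscale an image and bicubic-upscale it back, then measure how much was lost. a smooth “digital-style” image survives almost perfectly; the same image with random grain-like noise does not, because interpolation has no way to predict randomness:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)

# A smooth synthetic image vs. the same image plus random,
# film-grain-like noise (both are hypothetical stand-ins).
x = np.linspace(0, 1, 128)
smooth = np.tile(np.sin(2 * np.pi * x), (128, 1))
grainy = smooth + rng.normal(0, 0.2, smooth.shape)

def roundtrip_error(img):
    # Downscale 2x, then bicubic-upscale back to full size:
    # the mean error shows how much detail interpolation could recover.
    down = zoom(img, 0.5, order=3)
    up = zoom(down, 2, order=3)
    return np.abs(up - img).mean()

print(roundtrip_error(smooth))  # tiny: ordered data interpolates well
print(roundtrip_error(grainy))  # far larger: random grain defeats it
```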
also, film grain looks different in the highlights, midtones and shadows, and that makes it even harder to compensate for.
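a crude way to model that tone dependence (an assumed curve, not measured from any real stock) is to make the grain amplitude a function of tone, strongest in the midtones:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: grain amplitude peaks in the midtones and falls
# off toward deep shadows and bright highlights. The exact curve varies
# by film stock; this sine shape is purely illustrative.
tone = np.linspace(0.0, 1.0, 1000)          # 0 = shadow, 1 = highlight
grain_sigma = 0.08 * np.sin(np.pi * tone)   # strongest near tone = 0.5
grain = rng.normal(0.0, 1.0, tone.shape) * grain_sigma

# A single global noise-reduction strength can't fit all three zones.
print(grain_sigma[0], grain_sigma[500], grain_sigma[-1])
```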
digital noise, on the other hand, is somewhat ordered and repetitive, which is what makes sensor-specific/hot-pixel-map noise reduction in camera possible… it also explains why noise reduction algorithms (there, math again) work so well with digital but fail miserably with the randomness of film.
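the repetitive part is the key: fixed-pattern noise shows up in the same place every frame, so a dark frame captures it and subtraction removes it. here’s a toy sketch with an invented 16×16 sensor and made-up hot-pixel positions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sensor: a fixed pattern of "hot" pixels that appears
# identically in every exposure, plus a little random read noise.
fixed_pattern = np.zeros((16, 16))
hot = rng.integers(0, 16, size=(5, 2))
fixed_pattern[hot[:, 0], hot[:, 1]] = 0.5

def capture(scene):
    return scene + fixed_pattern + rng.normal(0, 0.01, scene.shape)

scene = np.full((16, 16), 0.3)
dark_frame = capture(np.zeros((16, 16)))   # lens cap on: pattern only
shot = capture(scene)

# Because the pattern repeats frame to frame, subtracting the dark
# frame removes it -- a trick with no film-grain equivalent, since
# grain lands in a different random spot on every frame.
cleaned = shot - dark_frame
print(np.abs(cleaned - scene).max())
```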