I doubt that there is much difference at all in manufacturing cost. AFAIK, the cost is almost entirely a function of sensor size, assuming the same fab line, and BSI reduces the need for finer geometries. I'd guess that yields might be very slightly lower at higher pixel counts, but that's it.
Could be that the larger pixel size allows production on a coarser node line. Sony SS has several lines ranging from 180nm to 65nm. I would guess that the 180nm line is looking for business now, so if a sensor can be fabbed on it, it could get a discount and maybe quicker delivery.
Based on the fact that high ISOs are noisier than with other technologies, the QE is probably low too, suggesting that the microlenses are not guiding light hitting the full sensor surface into the photosites as efficiently as other sensor designs do. True near-100% microlens coverage would allow normal QE even with a low total charge per unit of area; base ISO would simply be higher, with no extra noise at high ISOs.
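To make the per-area argument concrete: with equal QE, photon shot noise over a given patch of sensor is set by the photons collected over that patch, not by how the patch is divided into pixels. Here is a minimal simulation sketch of my own (the photon count, QE and 2x2 split are arbitrary assumptions, not anyone's real sensor figures):

```python
# Sketch only: equal QE, same illuminated area, different pixel subdivision.
import numpy as np

rng = np.random.default_rng(0)
photons_per_area = 4000   # hypothetical photons falling on one "big pixel" of area
qe = 0.6                  # same quantum efficiency assumed for both designs
n_trials = 100_000

# One big pixel covering the whole patch.
big = rng.poisson(photons_per_area * qe, n_trials)

# The same patch split into a 2x2 block of small pixels, then summed,
# as binning or downsizing would do.
small = rng.poisson(photons_per_area * qe / 4, (n_trials, 4)).sum(axis=1)

for name, x in (("one big pixel", big), ("2x2 small pixels summed", small)):
    print(f"{name}: mean={x.mean():.0f}, SNR={x.mean() / x.std():.1f}")
# Both report essentially the same SNR (~49): shot noise per unit of area is
# the same as long as the QE is the same; only the base ISO shifts.
```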
Why not? It's been obvious to me for years now that when I capture more detail I sharpen less, and that leads to less noise. If I apply NR to the same physical sensor area from a higher-density sensor to match the lesser detail I get from a lower-density sensor, I always end up with comparable or better results from the higher-density sensor. Amp noise at ISO 10000+ exposures used to be an issue, but I try to avoid those conditions and would never base a camera purchase on that.
Computational photography methods can be applied to small and big pixels alike (maybe with different algorithms and parameters for 'optimal' results).
Thus the waters would just get muddied even further.
My reply was not to "computational photography," whatever that is. My reply was to your broader question of pixel properties (specifically the size of the photosites) and noise reduction ("AI," or otherwise). Nonetheless, and I don't have time right now to explore this in detail (excuse the pun), I'm reasonably certain that GIGO applies to computational post-processing applications, and the more detail I have to work with the better the result will be (I'll define that here as reflecting the noise and detail I saw).
Your "anecdotal evidence" doesn't even mention if the cameras you compared used the same technology level and the same noise/speed design tradeoffs, i.e. we don't know what else possibly affects them besides pixel size resp. count.
Well, somebody else in this thread posted a link to JimK's blog. I noticed that he used bilinear interpolation for his debayering, although better methods were available (at that time, 10-15 years ago, I was working on raw files and found some variant of LMMSE to perform significantly better). Would that have changed JimK's results? Dunno. But what caught my eye is the one and only comment from that link:
Oh brother, you are beating a dead horse. Many of us have done lots of anecdotal comparisons and come up with the same results. I used to give a damn about these pointless arguments and would bend over backwards comparing the latest cameras as tested at DPR and/or Imaging Resource, and in the end it was a tremendous waste of time because some people just want to argue and can't accept that the available evidence (anecdotal or otherwise) shows that they are wrong. That was what DPR excelled at, and I have no doubt that's why I am not there anymore.
You can blow smoke until you're blue in the face, but the fact is that detail not recorded because of a lack of sensor resolution is lost and can only be imagined by AI or whatever. You can downsize a more detailed file and get great results, but when you upsize a less detailed file the results are invariably worse than the more detailed file even at the same display size (though I have found that when both are being downsized there is a point where it doesn't make a difference to me).
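The downsize-versus-upsize comparison is easy to run for yourself; here is a rough Pillow sketch of it (the file name, target size and the 2x "lower-density" simulation are placeholder assumptions of mine, not anything from the post):

```python
# Sketch: compare downsizing a detailed capture with upsizing a less detailed
# one to the same display size. File name and sizes are placeholders.
from PIL import Image

src = Image.open("detailed_capture.jpg")   # hypothetical high-resolution original
display_size = (3840, 2560)                # hypothetical common display/print size

# Route 1: the detailed file downsized to the display size.
downsized = src.resize(display_size, Image.LANCZOS)

# Route 2: simulate a lower-density capture by discarding detail first,
# then upsizing back to the same display size.
low_res = src.resize((src.width // 2, src.height // 2), Image.LANCZOS)
upsized = low_res.resize(display_size, Image.LANCZOS)

downsized.save("downsized.jpg")
upsized.save("upsized.jpg")
# Viewed side by side, the upsized version cannot show detail that was never
# recorded; it can only interpolate it (or have an AI tool invent it).
```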
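On the debayering point a couple of posts up: for anyone who hasn't seen it spelled out, this is a minimal toy sketch of bilinear demosaicing on an RGGB Bayer mosaic (my own code, not JimK's). LMMSE-style methods replace these fixed averaging kernels with estimates that adapt to local image structure, which is why they tend to do better near edges.

```python
# Toy bilinear demosaic of an RGGB Bayer mosaic.
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw: np.ndarray) -> np.ndarray:
    """raw: 2-D mosaic with an RGGB pattern; returns an H x W x 3 RGB image."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))

    # Masks marking where each colour was actually sampled (RGGB layout).
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Fixed kernels: each missing value is the average of its nearest
    # same-colour neighbours; sampled values pass through unchanged.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0

    out = np.zeros((h, w, 3))
    for ch, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        out[..., ch] = convolve2d(raw * mask, k, mode="same", boundary="symm")
    return out

# Tiny synthetic check: a flat grey scene sampled through the mosaic.
mosaic = np.full((6, 6), 0.5)
print(bilinear_demosaic(mosaic)[2, 2])  # ~[0.5, 0.5, 0.5]
```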
Contrast and saturation are adjustments that can be raised and lowered. To say one camera has inherently more of something like that than another is to ignore a basic step in post-processing. I have found over the years that trying to get any two cameras (even from the same manufacturer) to exactly match is a fool's errand, and in the end it doesn't matter to me because I just strive to get pleasing results from whatever camera I'm using (currently a D500 and a D850).
Well, typically sensors with more pixels use newer technology, which complicates matters. Besides that, I stopped bothering about noise in my photos years ago; AI noise reduction is that good (no matter the pixel size), not to mention stuff like multi-exposure methods, like liveND.
Today, I even get pleasing results straight out of my Pixel 8 Pro. Tiny pixels and lots of computational "tricks" inside. No tinkering in post needed.
Like I wrote previously, I have zero want, or need, for more than 20-24MP. I also have zero interest in any of this AI processing malarkey. None. I hate phones. With a bit of a passion. 24MP 36x24 sensors are, to me, the sweet spot. The files aren't too big, requiring tonnes of computer horsepower and storage, or too small, also requiring a fair bit of horsepower to process in the form of AI noise reduction etc., which itself produces some pretty big files, like 100MB+.
24MP 36x24 files are child's play to work with, & edit, with brilliant results. With basic, cheap software. It's just so easy it's not funny.
Open file. Perhaps bump up the shadows a touch. Maybe pull down the highlights. If the lighting is abysmal, maybe tweak the white balance. Tweak saturation if desired. Resize depending on intended display medium, sharpen to suit. That's it. Done. Beautiful, clean, virtually noise-free, sharp & detailed colourful pictures, with just enough effort required to give them a personal touch, without getting bogged down sitting behind the computer screen. They're simply really good fun.
As I mentioned in another thread, a big reason for this is that still cameras are also video cameras these days, and 35-40MP sensors perform very poorly for video: you need to jump from 24MP to about 45MP to get good video performance. Otherwise, the camera has to either scale by an irregular factor or crop excessively to deliver video, and videographers hate both options.
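A rough back-of-envelope sketch of the scaling issue (my own arithmetic with nominal 3:2 sensor geometry, not any specific camera's actual readout modes):

```python
# Sketch: approximate horizontal readout width of a 3:2 stills sensor and how
# it maps onto UHD 4K (3840 px) and 8K (7680 px) output widths.
from math import sqrt

def sensor_width_px(megapixels: float, aspect=(3, 2)) -> int:
    """Approximate horizontal pixel count for a given megapixel count."""
    ax, ay = aspect
    return round(sqrt(megapixels * 1e6 * ax / ay))

for mp in (24, 33, 40, 45, 61):
    w = sensor_width_px(mp)
    print(f"{mp} MP: ~{w} px wide, 4K scale {w / 3840:.2f}x, 8K scale {w / 7680:.2f}x")
# Whether a given factor is "good" depends on the camera's actual readout
# modes; this only shows the raw width-to-output ratios, e.g. that roughly
# 40-45 MP is where the full sensor width first clears 7680 px.
```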
I've printed 8x10s from 1MP and 62x44 from 12MP taken from a compact pocket camera, and they looked great. In fact, there's a 36x24 taken from my compact hanging framed on the wall at my place of employment. There's also a 36x24 printed from a 1920x1080 frame-grab from a GoPro.
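For anyone curious about the arithmetic behind those prints, a quick PPI sketch (the pixel dimensions are my assumptions from the stated megapixel counts, and the print sizes are taken to be in inches):

```python
# Sketch: pixels-per-inch on the long edge for the prints mentioned above.
# Pixel dimensions are assumed from the megapixel counts, not known exactly.
prints = [
    ("1 MP print at 8x10",      1152, 10),   # assuming roughly 1152x864 pixels
    ("12 MP print at 62x44",    4000, 62),   # assuming roughly 4000x3000 pixels
    ("1920x1080 grab at 36x24", 1920, 36),
]
for label, px_long_edge, inches_long_edge in prints:
    print(f"{label}: ~{px_long_edge / inches_long_edge:.0f} PPI")
# Roughly 115, 65 and 53 PPI: far below the usual 240-300 PPI rule of thumb,
# which is the point; at sensible viewing distances such prints can still look great.
```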