Yes, I wish people would simply talk about the number of pixels in the image or the image size as x pixels by y pixels, instead of somewhat pretentiously talking about the resolution of the image.
And with my two different cameras and - yes - lenses, it should have come as no surprise at all.
The advantage at Nyquist also went to the 3MP, with its 54% fill factor, no microlenses, and no AA filter. The best possible MTF at Nyquist is then a theoretical 80% or so, as opposed to the more common 64% or less.
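The 80% and 64% figures look like the standard box-aperture (sinc) MTF; here is a minimal sketch, assuming a square aperture of width √(fill factor) × pitch, evaluated at the Nyquist frequency 1/(2 × pitch):

```python
import math

def aperture_mtf_at_nyquist(fill_factor):
    """Box-aperture MTF sinc(a*f) with a = sqrt(FF)*pitch, f = 1/(2*pitch)."""
    x = math.pi * math.sqrt(fill_factor) / 2
    return math.sin(x) / x

print(aperture_mtf_at_nyquist(0.54))  # ~0.79, the "80% or so" figure
print(aperture_mtf_at_nyquist(1.00))  # ~0.64, i.e. 2/pi, for a 100% fill factor
```

This is only the geometric aperture term; a real sensor's MTF at Nyquist would be lower once the lens, AA filter (if any), and microlens effects are folded in.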
There is a resolution gain visible to the naked eye. The problem is that the way MTF is measured (slanted edge), it is not an actual measure of resolution. The slanted edge, in a way, does its own pixel shift, so the one performed by the sensor does not matter much. This is cheating, however.
Depends on the definition of resolution. The good thing about pixel shifted images is that they have a lot less aliasing, with close to a 200% fill factor. That means they can take a lot more sharpening.
Actually, that means they recover finer detail. Aliasing converts fine detail into something else and mixes it with the rest. Finer detail means higher resolution.
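That point can be sketched with made-up numbers: a sinusoidal "detail" at 0.7 cycles/pixel is above the single-shot Nyquist of 0.5, so it aliases down to 0.3; a second exposure shifted by half a pixel doubles the sampling rate and recovers the true frequency.

```python
import numpy as np

f_true = 0.7   # cycles per base-grid pixel, above the single-shot Nyquist of 0.5
n = 256        # number of base-grid pixels (arbitrary)

def dominant_freq(samples, spacing):
    """Strongest frequency (cycles per base pixel) in a uniformly sampled signal."""
    spec = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=spacing)
    return freqs[np.argmax(spec)]

# Single shot: one sample per pixel -> 0.7 aliases to 1 - 0.7 = 0.3
x_single = np.arange(n)
alias = dominant_freq(np.sin(2 * np.pi * f_true * x_single), spacing=1.0)

# Pixel shift: interleaving a half-pixel-shifted exposure doubles the
# sampling rate, raising Nyquist to 1.0 -> the detail comes back correctly
x_shifted = np.arange(2 * n) * 0.5
recovered = dominant_freq(np.sin(2 * np.pi * f_true * x_shifted), spacing=0.5)

print(alias, recovered)  # ~0.3 (false detail) vs ~0.7 (real detail)
```

In the single-shot case the energy is still there, just relabelled as a coarser, fake structure, which is exactly the "converts fine detail into something else" point above.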
Well, you had to do it first to explain your claim of no gain in resolution. MTF does not apply to a linear map from a continuous object to a discrete (sampled) one, because sampling is not shift-invariant. It cannot possibly measure resolution in the first place, which was my objection. What the slanted edge measures is the resolution you would get if you had the same pixel size but could shift the sensor by a lot of very small increments. This is what in-sensor pixel shift actually does mechanically, so no surprise that the results are similar. But, again, the slanted edge does not really measure the resolution, whatever you think it is, that your sensor provides with one shot.
What do I think resolution should be? You convert the sampled image to a "continuous" one, and then use MTF if you wish, or just the Nyquist limit, assuming decent contrast is left there. Each of those can be done without an actual interpolation, just as a thought exercise, based on the samples. Whatever you get is limited by Nyquist, and pixel shift has a higher Nyquist, so it wins. If there is aliasing in either case, it effectively reduces the usable resolution below that limit.
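As a worked example of "pixel shift has a higher Nyquist", with an assumed 4.3 µm pitch (a number picked for illustration, not from this thread): the Nyquist limit is 1/(2 × pitch), and a half-pixel shift halves the effective sampling pitch.

```python
pitch_um = 4.3  # assumed pixel pitch in micrometres (hypothetical)

# Nyquist limit in line pairs per millimetre: 1 / (2 * pitch)
nyquist_single = 1000 / (2 * pitch_um)        # one sample per pixel
nyquist_shifted = 1000 / (2 * pitch_um / 2)   # half-pixel shift halves the pitch

print(round(nyquist_single), round(nyquist_shifted))  # ~116 vs ~233 lp/mm
```

Whether the glass and the subject can actually deliver contrast at 233 lp/mm is a separate question; the point is only that the sampling limit itself doubles.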