It’s a record in a computer that contains numbers. (Typically three or four of them.) They represent a point sample.
In a few cases (like pixel art) they might represent little squares. But the point samples from a digital camera’s sensor don’t.
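To make that concrete, here's one plausible shape for such a record (a hypothetical type; the field names and layout are just for illustration):

    from dataclasses import dataclass

    @dataclass
    class Pixel:
        """A point sample: three or four numbers. No width, no height, no shape."""
        r: float
        g: float
        b: float
        a: float = 1.0  # the optional fourth number (alpha)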
You make a good point that the article is drawing an inappropriately confident conclusion. But in another case, computer renders, you do at least have a true point sample. Sometimes there are multiple samples per pixel. But each sample is 0D.
Using just one point sample per "pixel" for computer renders produces a lot of artifacts, hence the multitude of AA methods that average many point samples down to one. In theory you can have a single point sample, but that's normally not what you see when the render reaches a screen.
I'm putting "pixel" in quotes since this whole post is about what a pixel actually is; what I mean in this case is the point sample you have in the data representation sent to the screen.
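A minimal sketch of that averaging, assuming a made-up scene(x, y) function standing in for the renderer (plain grid supersampling, not any particular production AA method):

    import numpy as np

    def scene(x, y):
        # Made-up scene: 1.0 inside a disc, 0.0 outside -- a hard edge that aliases.
        return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 else 0.0

    def render(width, height, n=4):
        # Average an n x n grid of 0D point samples per pixel (basic supersampling).
        img = np.zeros((height, width))
        for py in range(height):
            for px in range(width):
                total = 0.0
                for sy in range(n):
                    for sx in range(n):
                        # Stratified sample positions within the pixel's footprint,
                        # mapped into [0, 1) scene coordinates.
                        x = (px + (sx + 0.5) / n) / width
                        y = (py + (sy + 0.5) / n) / height
                        total += scene(x, y)
                img[py, px] = total / n ** 2  # one stored value per "pixel"
        return img

With n=1 you get the hard-edged, aliased result; with n=4 the edge pixels land between 0 and 1.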
Here's an article about cameras and moiré effects. Apparently digital cameras used to have optical blur filters to avoid moiré. Mathematically, this is applying a low-pass filter to avoid aliasing. The same thing happens in audio: sample at too low a rate and frequencies above the Nyquist limit alias down to lower ones, unless you low-pass filter the signal before sampling.
Apparently moiré doesn't happen as much with higher-resolution sensors, so manufacturers don't include those filters anymore and it's up to the photographer to deal with it. I wonder what camera phones do?
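The audio version is easy to demonstrate with numpy (rates and frequencies here are picked arbitrarily):

    import numpy as np

    fs = 1000                    # sampling rate: 1 kHz, so Nyquist is 500 Hz
    t = np.arange(0, 1, 1 / fs)

    # A 900 Hz tone sampled at 1 kHz produces exactly the same samples as a
    # 100 Hz tone: it "folds" around the Nyquist limit (1000 - 900 = 100 Hz).
    aliased = np.sin(2 * np.pi * 900 * t)
    folded = -np.sin(2 * np.pi * 100 * t)

    print(np.allclose(aliased, folded))  # True -- hence low-pass filtering first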
Maybe it would be more accurate to say that when processing photos, particularly low-resolution ones, you probably don't want to treat the pixels as perfect squares?
But this is something that people writing low-level image-processing algorithms would normally deal with.
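For instance, resampling code treats pixels as point samples on a grid and interpolates between them, rather than looking up the color of a "square". A hypothetical sketch of plain bilinear interpolation (assumes 0 <= x <= width-1 and 0 <= y <= height-1):

    import numpy as np

    def sample_bilinear(img, x, y):
        # Pixels as point samples: values between grid points are interpolated.
        # (Treating pixels as little squares would be nearest-neighbor lookup.)
        h, w = img.shape[:2]
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
        bottom = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
        return top * (1 - fy) + bottom * fy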
This is amusing, but I believe thinking about pixels this way lacks some practical utility that the more conventional mode of thinking provides.
If we're talking about something like a JPEG or a video file, sure, a pixel has no volume: it's just a collection of data indicating hue and intensity for a point of light. How pixels get presented is controlled by factors entirely outside the place where they are defined.
If we're talking about print formats, a "PPI" (pixels per inch) value is provided, which essentially defines the physical dimensions of every pixel in the document. The analysis degrades further if we move the context of "pixel" to the hardware level: here, the pixel has a physical presence with volume.
Both of these exceptions need to be considered when defining your pixels. Things like size and subpixel arrangement on display hardware can have profound impacts on how your pixels are perceived by the viewer. Just take a look at how many font smoothing options Windows has to accommodate the many different types of displays out there.
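The PPI point above is just arithmetic: physical size = pixel count / PPI. A made-up example:

    width_px, height_px, ppi = 3000, 2000, 300  # hypothetical image at 300 PPI
    print(width_px / ppi, "x", height_px / ppi, "inches")  # 10.0 x 6.666... inches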
It’s definitely an abstraction to think about pixels vs. points. But when your concern is, say, normalizing image resolution across devices/file systems/image codecs, it can be a useful abstraction. When the sub-pixel layouts of different digital display technologies come up, though, abstract pixels become a bit muddy. E.g., sub-pixels on many OLED displays are not simple RGB matrices. As with many technical terms, a lot of domain-specific context is needed to understand what concept “pixel” signifies.
Also, the classic A Pixel Is Not A Little Square.
So, basically, the pixel is not a square but rather a "thing-in-itself".
sigh