<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Imagery in the Eighties &#8211; I</title>
	<atom:link href="http://habitablezone.com/2015/01/22/imagery-in-the-eighties-i/feed/" rel="self" type="application/rss+xml" />
	<link>https://habitablezone.com/2015/01/22/imagery-in-the-eighties-i/</link>
	<description></description>
	<lastBuildDate>Tue, 07 Apr 2026 19:18:10 -0700</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.3.1</generator>
	<item>
		<title>By: ER</title>
		<link>https://habitablezone.com/2015/01/22/imagery-in-the-eighties-i/#comment-32232</link>
		<dc:creator>ER</dc:creator>
		<pubDate>Sun, 25 Jan 2015 14:50:05 +0000</pubDate>
		<guid isPermaLink="false">https://www.habitablezone.com/?p=48684#comment-32232</guid>
		<description>...is to alter the data, manipulate it, and put it into a form that takes advantage of the strengths of the human visual system--and avoids its limitations.

For example, the human eye/brain is very good at pattern recognition (such as detecting change or linear features), but not very good at distinguishing differences in brightness (we can only discriminate a few dozen gray levels).

One of the most useful things you can do to an image is to stretch the contrast, so that differences in brightness are exaggerated, extended across the entire dynamic range of your display device. The sensors on the old Landsat were only capable of 64 brightness levels, from 0 to 63.  The images looked flat and uninteresting, not because they had low information content, but because the eye couldn&#039;t see it: adjoining brightness levels were difficult for the eye to tell apart.  Sixty-four levels fit in just 6 bits--less than a full byte--so the pixel data could be packed tightly, which was convenient for collection and transmission, but not for interpretation.

By copying the image into a byte format, and mapping the values so they covered the entire dynamic range of a display device (0-255), the contrast could be stretched: 0-63 was mapped to 0-255.  A new byte image was created where pixels one gray level apart were now about 4 levels apart--easier for the eye to pick up.  The result was an image that was crisper, more interesting, and higher in contrast, and interpretation was easier because patterns were easier to see: differences in brightness, edges, linear features, repetitive structures, shapes, visual textures. It gave the illusion of being sharper. And this was only a linear mapping, i.e. the gray levels were all stretched equally from 64 levels to 256, to &quot;fill in&quot; the full range of the display device or medium so the eye/brain could work with it more easily. By analyzing the statistical distribution of brightness levels, and taking that into account, it was possible to map brightness differentially, so that the contrast enhancement was concentrated in the brightness ranges where most of the original data fell.  It was even possible to use a different stretch on each band, so the resulting combined color image suddenly displayed subtle gradations that gave clues to what was really going on in the real world, invisible in the original image.  The eye is better at pattern recognition than it is at brightness discrimination.

It is important to keep in mind that this does not create data where none exists.  The data was already there; the stretch just made it easier for the eye to pick out.  Over-processing could also introduce &quot;artifacts&quot; into the image--features which did not correspond to the real world but were the result of the processing technique.

Familiarity with image processing only reinforces the conviction that what we know about the world is highly influenced by the means by which we analyze the data of our senses. We don&#039;t observe data, we interpret it.  The world isn&#039;t necessarily what it seems; it is also a result of the limitations and strengths of our processes of perception, reason, even memory.  It&#039;s not because the world is out to deliberately fool us and lie to us.  It&#039;s mostly because we lie to ourselves.

&lt;em&gt;You may recognize that this is a subject I often bring up here, that our reason cannot often be trusted, that our perceptions are often biased, that we are good at some analyses but not at others.  If this is true when analyzing straightforward data carried by the electromagnetic spectrum, imagine how we mislead ourselves in political, economic, religious, philosophic, ideological matters.  Beware of those who are so sure of themselves. They have no choice but to invoke conspiracies to justify their increasingly distorted convictions to themselves. And they feel personally threatened by those who point this out to them.  They consider them over-intellectualized, effete, elitist.
&lt;/em&gt;
People who see the world in black and white are only looking at one-bit data.  We need double-precision words, 64-bit words. The moral and human universe has many gray levels, and a great dynamic range is required to represent it.</description>
		<content:encoded><![CDATA[<p>&#8230;is to alter the data, manipulate it, and put it into a form that takes advantage of the strengths of the human visual system&#8211;and avoids its limitations.</p>
<p>For example, the human eye/brain is very good at pattern recognition (such as detecting change or linear features), but not very good at distinguishing differences in brightness (we can only discriminate a few dozen gray levels).</p>
<p>One of the most useful things you can do to an image is to stretch the contrast, so that differences in brightness are exaggerated, extended across the entire dynamic range of your display device. The sensors on the old Landsat were only capable of 64 brightness levels, from 0 to 63.  The images looked flat and uninteresting, not because they had low information content, but because the eye couldn&#8217;t see it: adjoining brightness levels were difficult for the eye to tell apart.  Sixty-four levels fit in just 6 bits&#8211;less than a full byte&#8211;so the pixel data could be packed tightly, which was convenient for collection and transmission, but not for interpretation.</p>
<p>By copying the image into a byte format, and mapping the values so they covered the entire dynamic range of a display device (0-255), the contrast could be stretched: 0-63 was mapped to 0-255.  A new byte image was created where pixels one gray level apart were now about 4 levels apart&#8211;easier for the eye to pick up.  The result was an image that was crisper, more interesting, and higher in contrast, and interpretation was easier because patterns were easier to see: differences in brightness, edges, linear features, repetitive structures, shapes, visual textures. It gave the illusion of being sharper. And this was only a linear mapping, i.e. the gray levels were all stretched equally from 64 levels to 256, to &#8220;fill in&#8221; the full range of the display device or medium so the eye/brain could work with it more easily. By analyzing the statistical distribution of brightness levels, and taking that into account, it was possible to map brightness differentially, so that the contrast enhancement was concentrated in the brightness ranges where most of the original data fell.  It was even possible to use a different stretch on each band, so the resulting combined color image suddenly displayed subtle gradations that gave clues to what was really going on in the real world, invisible in the original image.  The eye is better at pattern recognition than it is at brightness discrimination.</p>
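<p>A minimal sketch of the two stretches described above, assuming NumPy. The function names and the toy 6-bit array are illustrative, not the original Landsat-era software:</p>

```python
import numpy as np

def linear_stretch(img6, in_max=63, out_max=255):
    # Map 6-bit values (0..in_max) onto the full 8-bit display range (0..out_max),
    # so levels that were 1 apart become about 4 apart.
    return np.round(img6.astype(np.float64) * (out_max / in_max)).astype(np.uint8)

def histogram_stretch(img8, levels=256):
    # Differential stretch (histogram equalization): contrast is concentrated
    # in the brightness ranges where most of the pixels actually fall.
    hist = np.bincount(img8.ravel(), minlength=levels)
    cdf = hist.cumsum() / hist.sum()              # cumulative distribution, 0..1
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img8]                              # remap every pixel through the table

raw = np.array([[0, 1, 2], [31, 62, 63]], dtype=np.uint8)  # toy 6-bit scene
print(linear_stretch(raw))
```

<p>The same lookup-table idea extends to per-band stretches: compute one table per band, then recombine the bands into a color image.</p>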
<p>It is important to keep in mind that this does not create data where none exists.  The data was already there; the stretch just made it easier for the eye to pick out.  Over-processing could also introduce &#8220;artifacts&#8221; into the image&#8211;features which did not correspond to the real world but were the result of the processing technique.</p>
<p>Familiarity with image processing only reinforces the conviction that what we know about the world is highly influenced by the means by which we analyze the data of our senses. We don&#8217;t observe data, we interpret it.  The world isn&#8217;t necessarily what it seems; it is also a result of the limitations and strengths of our processes of perception, reason, even memory.  It&#8217;s not because the world is out to deliberately fool us and lie to us.  It&#8217;s mostly because we lie to ourselves.</p>
<p><em>You may recognize that this is a subject I often bring up here, that our reason cannot often be trusted, that our perceptions are often biased, that we are good at some analyses but not at others.  If this is true when analyzing straightforward data carried by the electromagnetic spectrum, imagine how we mislead ourselves in political, economic, religious, philosophic, ideological matters.  Beware of those who are so sure of themselves. They have no choice but to invoke conspiracies to justify their increasingly distorted convictions to themselves. And they feel personally threatened by those who point this out to them.  They consider them over-intellectualized, effete, elitist.<br />
</em><br />
People who see the world in black and white are only looking at one-bit data.  We need double-precision words, 64-bit words. The moral and human universe has many gray levels, and a great dynamic range is required to represent it.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: ER</title>
		<link>https://habitablezone.com/2015/01/22/imagery-in-the-eighties-i/#comment-32231</link>
		<dc:creator>ER</dc:creator>
		<pubDate>Sun, 25 Jan 2015 06:52:22 +0000</pubDate>
		<guid isPermaLink="false">https://www.habitablezone.com/?p=48684#comment-32231</guid>
		<description>Both additive (CRT and screen) and subtractive (ink and dye) displays require them.  But there are literally millions of subjective colors available to the human eye/brain visual processing system.  Even if you just limit yourself to byte imagery, that still gives you 256 x 256 x 256 possible colors.

When we had more than 3 bands of data available, sometimes we used band combinations and assigned them arbitrarily to the color guns on our CRTs.  For example, if you had 7-band Landsat data, you could assign the ratio or difference of two bands to one color gun, folding more than three bands at a time into an image display. By subtracting the red from the green, or perhaps dividing the blue by one of the infrareds, and then assigning the result to one of the RGB color guns, you could get information from more than three sensors into one image.

Sometimes (but not always) these combinations would reveal geological or vegetation features that did not normally show up.  The human eye is really good at picking out color anomalies in nature--it&#039;s a primate thing.</description>
		<content:encoded><![CDATA[<p>Both additive (CRT and screen) and subtractive (ink and dye) displays require them.  But there are literally millions of subjective colors available to the human eye/brain visual processing system.  Even if you just limit yourself to byte imagery, that still gives you 256 x 256 x 256 possible colors.</p>
<p>When we had more than 3 bands of data available, sometimes we used band combinations and assigned them arbitrarily to the color guns on our CRTs.  For example, if you had 7-band Landsat data, you could assign the ratio or difference of two bands to one color gun, folding more than three bands at a time into an image display. By subtracting the red from the green, or perhaps dividing the blue by one of the infrareds, and then assigning the result to one of the RGB color guns, you could get information from more than three sensors into one image.</p>
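<p>The band-folding idea can be sketched as follows, assuming NumPy. The particular difference and ratio chosen here are illustrative, not a specific recipe from the original work:</p>

```python
import numpy as np

def to_byte(band):
    # Rescale an arbitrary band to 0..255 so it can drive one display gun.
    lo, hi = float(band.min()), float(band.max())
    return np.round((band - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

def ratio_composite(red, green, blue, near_ir):
    # Fold four bands into three display channels: a band difference on one
    # gun, a band ratio on another, and a plain band on the third.
    r_gun = to_byte(green - red)
    g_gun = to_byte(blue / (near_ir + 1e-12))
    b_gun = to_byte(near_ir)
    return np.dstack([r_gun, g_gun, b_gun])       # H x W x 3 false-color image

rng = np.random.default_rng(0)
bands = [rng.random((4, 4)) for _ in range(4)]
print(ratio_composite(*bands).shape)
```

<p>Because each channel is rescaled independently, every gun uses its full 0-255 range, which is what makes subtle anomalies pop out in the combined false-color image.</p>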
<p>Sometimes (but not always) these combinations would reveal geological or vegetation features that did not normally show up.  The human eye is really good at picking out color anomalies in nature&#8211;it&#8217;s a primate thing.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: bowser</title>
		<link>https://habitablezone.com/2015/01/22/imagery-in-the-eighties-i/#comment-32228</link>
		<dc:creator>bowser</dc:creator>
		<pubDate>Sun, 25 Jan 2015 02:56:15 +0000</pubDate>
		<guid isPermaLink="false">https://www.habitablezone.com/?p=48684#comment-32228</guid>
		<description>Some years ago a neighbor of mine, an amateur astronomer, invited me over to meet a couple of guests.  He had invited one of them to make a presentation to his astronomy group on an alternative to RGB and its advantages.  I&#039;m not sure which of them was making the presentation.

However, one of them was a NASA guy who was in charge of some of the resupply missions to ISS if they were American.  If they were Russian he monitored them.

I asked him about the mission which had apparently had a Russian antenna in the way.  He told me that was a Russian affair and he had been sent over there to monitor how they handled it.  Apparently it was getting close to an abort, and his boss in the US wanted to abort.  He said he protested, and reported that his boss trusted him and that the Russians were handling it just as he would have.  At the last minute they got the proper software loaded and the antenna moved out of the way.  He had other tidbits, but I don&#039;t remember them.

Although I don&#039;t have the slightest idea why he preferred another protocol for color imaging it was a fascinating evening.  I was sorry to see it end.

And thanks for your post.  I can begin to see how much information there is in those things and how it can be manipulated.  I&#039;ve often wondered.</description>
		<content:encoded><![CDATA[<p>Some years ago a neighbor of mine, an amateur astronomer, invited me over to meet a couple of guests.  He had invited one of them to make a presentation to his astronomy group on an alternative to RGB and its advantages.  I&#8217;m not sure which of them was making the presentation.</p>
<p>However, one of them was a NASA guy who was in charge of some of the resupply missions to ISS if they were American.  If they were Russian he monitored them.</p>
<p>I asked him about the mission which had apparently had a Russian antenna in the way.  He told me that was a Russian affair and he had been sent over there to monitor how they handled it.  Apparently it was getting close to an abort, and his boss in the US wanted to abort.  He said he protested, and reported that his boss trusted him and that the Russians were handling it just as he would have.  At the last minute they got the proper software loaded and the antenna moved out of the way.  He had other tidbits, but I don&#8217;t remember them.</p>
<p>Although I don&#8217;t have the slightest idea why he preferred another protocol for color imaging it was a fascinating evening.  I was sorry to see it end.</p>
<p>And thanks for your post.  I can begin to see how much information there is in those things and how it can be manipulated.  I&#8217;ve often wondered.</p>
]]></content:encoded>
	</item>
</channel>
</rss>
