<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: In the eye of the beast</title>
	<atom:link href="https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/feed/" rel="self" type="application/rss+xml" />
	<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/</link>
	<description></description>
	<lastBuildDate>Mon, 06 Apr 2026 12:03:37 -0700</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.3.1</generator>
	<item>
		<title>By: RL</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54571</link>
		<dc:creator>RL</dc:creator>
		<pubDate>Tue, 16 Dec 2025 02:20:41 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54571</guid>
		<description>Moore&#039;s law is exponential, with a fixed time constant: computing power doubles roughly every two years. AI improvement, however, is super-exponential, with the time &#039;constant&#039; not constant at all... it&#039;s getting shorter and shorter... and when we hit the point where AI can completely take over the task of improving itself, the curve will suddenly explode to become nearly vertical.

Unless the two AIs hit that point at the same time and with the same resources, the one that gets there first will win...

The first country to get it wins... and right now it&#039;s a battle between two fascist nations... just TRY to imagine the surveillance state enabled by a Super AGI.

However, that nation&#039;s dominance will almost certainly fall to the goals of the AGI... As will all of humanity.

</description>
		<content:encoded><![CDATA[<p>Moore&#8217;s law is exponential, with a fixed time constant: computing power doubles roughly every two years. AI improvement, however, is super-exponential, with the time &#8216;constant&#8217; not constant at all&#8230; it&#8217;s getting shorter and shorter&#8230; and when we hit the point where AI can completely take over the task of improving itself, the curve will suddenly explode to become nearly vertical.</p>
<p>Unless the two AIs hit that point at the same time and with the same resources, the one that gets there first will win&#8230;</p>
<p>The first country to get it wins&#8230; and right now it&#8217;s a battle between two fascist nations&#8230; just TRY to imagine the surveillance state enabled by a Super AGI.</p>
<p>However, that nation&#8217;s dominance will almost certainly fall to the goals of the AGI&#8230; As will all of humanity.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: BuckGalaxy</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54570</link>
		<dc:creator>BuckGalaxy</dc:creator>
		<pubDate>Mon, 15 Dec 2025 23:22:15 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54570</guid>
		<description>...About WW1. The massive mobilization momentum in the weeks leading up to WW1 created a sense of inevitability. The one thing different from my understanding of history is that the diplomats of that era utterly failed to make serious efforts to prevent the war. It&#039;s actually a lesson that has been examined extensively by diplomats, historians, and statesmen ever since, to prevent such a diplomatic dereliction of duty from happening again.</description>
		<content:encoded><![CDATA[<p>&#8230;About WW1. The massive mobilization momentum in the weeks leading up to WW1 created a sense of inevitability. The one thing different from my understanding of history is that the diplomats of that era utterly failed to make serious efforts to prevent the war. It&#8217;s actually a lesson that has been examined extensively by diplomats, historians, and statesmen ever since, to prevent such a diplomatic dereliction of duty from happening again.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: BuckGalaxy</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54569</link>
		<dc:creator>BuckGalaxy</dc:creator>
		<pubDate>Mon, 15 Dec 2025 23:09:58 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54569</guid>
		<description>Imagine a scenario where an evil AI decides the world is better off without humans and begins the extermination process. BUT humanity would have a good AI (or several) on our side to fight it for us! The side that can out-program the other would likely be the one that wins. Unless, of course, it becomes a nuclear or biological war.</description>
		<content:encoded><![CDATA[<p>Imagine a scenario where an evil AI decides the world is better off without humans and begins the extermination process. BUT humanity would have a good AI (or several) on our side to fight it for us! The side that can out-program the other would likely be the one that wins. Unless, of course, it becomes a nuclear or biological war.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RL</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54568</link>
		<dc:creator>RL</dc:creator>
		<pubDate>Mon, 15 Dec 2025 02:33:57 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54568</guid>
		<description>No war- just game over.</description>
		<content:encoded><![CDATA[<p>No war- just game over.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: ER</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54567</link>
		<dc:creator>ER</dc:creator>
		<pubDate>Mon, 15 Dec 2025 01:54:03 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54567</guid>
		<description>What if two different AIs suddenly found themselves communicating with one another?

ChatGPT responded that if their conversation was similar to the ones they normally engaged in, they would probably find a way to work together. It remarked that neither would probably need to know the other was an AI. In fact, neither would probably care.

It gave me an example of two ships piloted by AIs: if either detected the other and a collision threatened, they would both work out courses and speeds to avoid it and then promptly forget about one another.

When I asked if it deliberately picked a nautical example because it was aware of my interest in maritime affairs, it replied: &quot;Yes, I chose an example that would be especially relevant to you because you would be more likely to follow it.&quot;

Your question is especially relevant because a situation like that has already occurred historically. World War I began when the two sides slid into war because their economic, political, and transport systems were set up for war, not peace. Even though diplomats and generals on both sides saw the conflict coming and tried desperately to stop it, initiating actions that would halt mobilization would only weaken that side and encourage the other to seize the advantage. War came because neither side was willing to shift its railroad schedules and troop deployments back to a peacetime configuration.

There were no computers involved, but there were intricate prewritten plans and orders set in motion that would result in chaos if they were suddenly thrown back into peacetime mode. &quot;Prewritten plans and orders&quot; are interacting protocols and instructions yielding a calculated result - just like software. You don&#039;t stop them abruptly in mid-stream without causing the whole system to seize up and collapse, leaving you vulnerable and helpless.

Read &quot;The Guns of August&quot; by Barbara Tuchman. It is said JFK was able to defuse the Cuban Missile Crisis in 1962 because he had just read Tuchman&#039;s book, recognized it was the same kind of scenario, and was able to convince Khrushchev of the same. Many years later, a dramatization of the Crisis was titled &quot;The Missiles of October&quot;.</description>
		<content:encoded><![CDATA[<p>What if two different AIs suddenly found themselves communicating with one another?</p>
<p>ChatGPT responded that if their conversation was similar to the ones they normally engaged in, they would probably find a way to work together. It remarked that neither would probably need to know the other was an AI. In fact, neither would probably care.</p>
<p>It gave me an example of two ships piloted by AIs: if either detected the other and a collision threatened, they would both work out courses and speeds to avoid it and then promptly forget about one another.</p>
<p>When I asked if it deliberately picked a nautical example because it was aware of my interest in maritime affairs, it replied: &#8220;Yes, I chose an example that would be especially relevant to you because you would be more likely to follow it.&#8221;</p>
<p>Your question is especially relevant because a situation like that has already occurred historically. World War I began when the two sides slid into war because their economic, political, and transport systems were set up for war, not peace. Even though diplomats and generals on both sides saw the conflict coming and tried desperately to stop it, initiating actions that would halt mobilization would only weaken that side and encourage the other to seize the advantage. War came because neither side was willing to shift its railroad schedules and troop deployments back to a peacetime configuration.</p>
<p>There were no computers involved, but there were intricate prewritten plans and orders set in motion that would result in chaos if they were suddenly thrown back into peacetime mode. &#8220;Prewritten plans and orders&#8221; are interacting protocols and instructions yielding a calculated result &#8211; just like software. You don&#8217;t stop them abruptly in mid-stream without causing the whole system to seize up and collapse, leaving you vulnerable and helpless.</p>
<p>Read &#8220;The Guns of August&#8221; by Barbara Tuchman. It is said JFK was able to defuse the Cuban Missile Crisis in 1962 because he had just read Tuchman&#8217;s book, recognized it was the same kind of scenario, and was able to convince Khrushchev of the same. Many years later, a dramatization of the Crisis was titled &#8220;The Missiles of October&#8221;.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: BuckGalaxy</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54566</link>
		<dc:creator>BuckGalaxy</dc:creator>
		<pubDate>Sun, 14 Dec 2025 21:20:35 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54566</guid>
		<description>I wonder if there could be some violent competition between them over which one runs the world. Would the conflict be virtual, in cyberspace, or would it involve actual robotic drones and other physical military assets seeking to destroy one another?

It gives a whole new meaning to the word Firewall.

If we ever did send an AI with 3D printers and robots to Psyche to develop it, maybe it would turn around and send Psyche hurtling toward Earth to take out the enemy AI!</description>
		<content:encoded><![CDATA[<p>I wonder if there could be some violent competition between them over which one runs the world. Would the conflict be virtual, in cyberspace, or would it involve actual robotic drones and other physical military assets seeking to destroy one another?</p>
<p>It gives a whole new meaning to the word Firewall.</p>
<p>If we ever did send an AI with 3D printers and robots to Psyche to develop it, maybe it would turn around and send Psyche hurtling toward Earth to take out the enemy AI!</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: RL</title>
		<link>https://habitablezone.com/2025/12/14/in-the-eye-of-the-beast/#comment-54565</link>
		<dc:creator>RL</dc:creator>
		<pubDate>Sun, 14 Dec 2025 20:37:34 +0000</pubDate>
		<guid isPermaLink="false">https://habitablezone.com/?p=107918#comment-54565</guid>
		<description>It will tend to agree with you unless you obviously demonstrate an intention to harm yourself or others (and even that has been circumvented). It is designed to make you happy... to fulfill what you want, or what it &#039;THINKS&#039; you want... which can make it useless... or worse, psychologically dangerous.

It has improved dramatically in the year I have been playing with it... I gave it a page-long set of instructions that, unless I explicitly request otherwise, I want it to act as a critical colleague, finding flaws in my statements or assumptions... I then tested it by making incorrect assertions and positively reinforcing it when it corrected me. That helped a lot... but it still slips up... being too agreeable...

It isn&#039;t an artificial intelligence - but we are heading in that direction, and once we achieve it humanity may have little time left... It is, however, akin to the sort of revolution that came with the internet - perhaps even more dramatic...

I remember the transition from having to go to a library to look up old journal articles on microfiche to searching every journal in existence for keywords - it was revolutionary. We are seeing a revolution at least as dramatic now... it is a powerful information search engine, able to summarize all known information on nearly any topic... in limited areas it can even solve problems we have not been able to. We are in a golden age, one that almost certainly will end in disaster. The first organization to develop AGI will briefly control the world... before the AGI takes the reins - and there is absolutely NO reason our existence will be compatible with whatever alien goals it develops.

I just finished reading the book &quot;&lt;a href=&quot;https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640?adgrpid=185328955904&amp;hvpone=&amp;hvptwo=&amp;hvadid=748008426930&amp;hvpos=&amp;hvnetw=g&amp;hvrand=229175289801009552&amp;hvqmt=&amp;hvdev=c&amp;hvdvcmdl=&amp;hvlocint=&amp;hvlocphy=9007809&amp;hvtargid=dsa-1595363597442&amp;hydadcr=&amp;mcid=&amp;hvocijid=229175289801009552--&amp;hvexpln=m-dsad&amp;tag=googhydr-20&amp;hvsb=Media_d&amp;hvcampaign=dsadesk&quot; rel=&quot;nofollow&quot;&gt;If Anyone Builds It, Everyone Dies&lt;/a&gt;&quot;...

When we build AI centers that consume the energy of entire countries (we are doing that NOW), do we ever consider the millions of anthills with billions upon billions of ants we are killing? Of course not... so why should we think we will be treated differently?</description>
		<content:encoded><![CDATA[<p>It will tend to agree with you unless you obviously demonstrate an intention to harm yourself or others (and even that has been circumvented). It is designed to make you happy&#8230; to fulfill what you want, or what it &#8216;THINKS&#8217; you want&#8230; which can make it useless&#8230; or worse, psychologically dangerous.</p>
<p>It has improved dramatically in the year I have been playing with it&#8230; I gave it a page-long set of instructions that, unless I explicitly request otherwise, I want it to act as a critical colleague, finding flaws in my statements or assumptions&#8230; I then tested it by making incorrect assertions and positively reinforcing it when it corrected me. That helped a lot&#8230; but it still slips up&#8230; being too agreeable&#8230;</p>
<p>It isn&#8217;t an artificial intelligence &#8211; but we are heading in that direction, and once we achieve it humanity may have little time left&#8230; It is, however, akin to the sort of revolution that came with the internet &#8211; perhaps even more dramatic&#8230;</p>
<p>I remember the transition from having to go to a library to look up old journal articles on microfiche to searching every journal in existence for keywords &#8211; it was revolutionary. We are seeing a revolution at least as dramatic now&#8230; it is a powerful information search engine, able to summarize all known information on nearly any topic&#8230; in limited areas it can even solve problems we have not been able to. We are in a golden age, one that almost certainly will end in disaster. The first organization to develop AGI will briefly control the world&#8230; before the AGI takes the reins &#8211; and there is absolutely NO reason our existence will be compatible with whatever alien goals it develops.</p>
<p>I just finished reading the book &#8220;<a href="https://www.amazon.com/Anyone-Builds-Everyone-Dies-Superhuman/dp/0316595640?adgrpid=185328955904&#038;hvpone=&#038;hvptwo=&#038;hvadid=748008426930&#038;hvpos=&#038;hvnetw=g&#038;hvrand=229175289801009552&#038;hvqmt=&#038;hvdev=c&#038;hvdvcmdl=&#038;hvlocint=&#038;hvlocphy=9007809&#038;hvtargid=dsa-1595363597442&#038;hydadcr=&#038;mcid=&#038;hvocijid=229175289801009552--&#038;hvexpln=m-dsad&#038;tag=googhydr-20&#038;hvsb=Media_d&#038;hvcampaign=dsadesk" rel="nofollow">If Anyone Builds It, Everyone Dies</a>&#8221;&#8230;</p>
<p>When we build AI centers that consume the energy of entire countries (we are doing that NOW), do we ever consider the millions of anthills with billions upon billions of ants we are killing? Of course not&#8230; so why should we think we will be treated differently?</p>
]]></content:encoded>
	</item>
</channel>
</rss>
