<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Big Data and Journalism</title>
	<atom:link href="http://habitablezone.com/2016/04/14/big-data-and-journalism/feed/" rel="self" type="application/rss+xml" />
	<link>https://habitablezone.com/2016/04/14/big-data-and-journalism/</link>
	<description></description>
	<lastBuildDate>Mon, 13 Apr 2026 02:46:53 -0700</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.3.1</generator>
	<item>
		<title>By: Robert</title>
		<link>https://habitablezone.com/2016/04/14/big-data-and-journalism/#comment-36214</link>
		<dc:creator>Robert</dc:creator>
		<pubDate>Thu, 14 Apr 2016 20:53:18 +0000</pubDate>
		<guid isPermaLink="false">https://www.habitablezone.com/?p=56913#comment-36214</guid>
		<description>I think the article&#039;s right that, a decade ago, nobody could have comprehended a trove of data that vast. Tools like these are analogous to telescopes for the intellect, able to see across light-years of data (he said hyperbolically).

There&#039;s an observation near the end that these tools ought to be made available to everybody, and I agree with that too. It&#039;s happening: Google and Amazon, to name just two I know of, have opened up their machine learning systems to the public. Google&#039;s is available as open source, and both offer theirs as hosted services in their clouds, which really does deliver the technology to the masses.

The prospect of being able to use Google&#039;s machine learning system afflicted me with an epiphany that led to a startup I work with filing a patent application with my name on it last month. I can&#039;t say anything about the details, but part of it involves recording (with consent) the actions of thousands of users on a web site, and feeding the data into the machine learning system so that over time it starts to &quot;understand&quot; human aesthetic design choices. The result, I hope, will be an assistant that evaluates some material and tentatively performs some work on it before showing it to a human for approval. Sorry, you understand why I have to be so vague. The lawyers tell me I can speak freely in a couple of years.

Where I&#039;m headed, and I do have a destination, is that the old idea of building An AI, a singular entity with intelligence and perhaps consciousness, is really outmoded. It almost seems like superstitious, or at least animistic, thinking.

What I see growing up around us is more like augmentation of humans, along with delegation of routine work to lesser, less creative but still smart machines. In machine learning systems we see machines that don&#039;t have to be inherently smart; they just have to be able to learn from humans. I think that rather than building machines smarter than us, we can continue the evolution of human intelligence by augmenting it with prostheses like big data analysis and teachable assistants, and by interconnecting it with other minds and machines through a common network. We don&#039;t spin off individuals; we grow out a network of intelligence. In a few generations there may not be a sharp differentiation between islands of intelligence in a sea of dumb matter, but more of a soup of varying gradations of intelligence.

There are serious people who worry that Artificial Intelligence, the kind that&#039;s capitalized, will be the death of us for sure. And maybe they&#039;re right, if you conceive of it as creating super smart individuals who will inevitably develop egos out of a survival imperative and end up just as competitive and vicious as any organic.

The soup concept seems safer, because we don&#039;t split off Other intelligences. Kind of an application to AI of the old saying &quot;keep your friends close and your enemies closer.&quot;</description>
		<content:encoded><![CDATA[<p>I think the article&#8217;s right that, a decade ago, nobody could have comprehended a trove of data that vast. Tools like these are analogous to telescopes for the intellect, able to see across light-years of data (he said hyperbolically).</p>
<p>There&#8217;s an observation near the end that these tools ought to be made available to everybody, and I agree with that too. It&#8217;s happening: Google and Amazon, to name just two I know of, have opened up their machine learning systems to the public. Google&#8217;s is available as open source, and both offer theirs as hosted services in their clouds, which really does deliver the technology to the masses.</p>
<p>The prospect of being able to use Google&#8217;s machine learning system afflicted me with an epiphany that led to a startup I work with filing a patent application with my name on it last month. I can&#8217;t say anything about the details, but part of it involves recording (with consent) the actions of thousands of users on a web site, and feeding the data into the machine learning system so that over time it starts to &#8220;understand&#8221; human aesthetic design choices. The result, I hope, will be an assistant that evaluates some material and tentatively performs some work on it before showing it to a human for approval. Sorry, you understand why I have to be so vague. The lawyers tell me I can speak freely in a couple of years.</p>
<p>Where I&#8217;m headed, and I do have a destination, is that the old idea of building An AI, a singular entity with intelligence and perhaps consciousness, is really outmoded. It almost seems like superstitious, or at least animistic, thinking.</p>
<p>What I see growing up around us is more like augmentation of humans, along with delegation of routine work to lesser, less creative but still smart machines. In machine learning systems we see machines that don&#8217;t have to be inherently smart; they just have to be able to learn from humans. I think that rather than building machines smarter than us, we can continue the evolution of human intelligence by augmenting it with prostheses like big data analysis and teachable assistants, and by interconnecting it with other minds and machines through a common network. We don&#8217;t spin off individuals; we grow out a network of intelligence. In a few generations there may not be a sharp differentiation between islands of intelligence in a sea of dumb matter, but more of a soup of varying gradations of intelligence.</p>
<p>There are serious people who worry that Artificial Intelligence, the kind that&#8217;s capitalized, will be the death of us for sure. And maybe they&#8217;re right, if you conceive of it as creating super smart individuals who will inevitably develop egos out of a survival imperative and end up just as competitive and vicious as any organic.</p>
<p>The soup concept seems safer, because we don&#8217;t split off Other intelligences. Kind of an application to AI of the old saying &#8220;keep your friends close and your enemies closer.&#8221;</p>
]]></content:encoded>
	</item>
</channel>
</rss>
