The buzz these days is all about AI, Artificial Intelligence. Is it good, is it bad? What are the unforeseeable consequences, and what are the foreseeable ones? How will it affect society, even the development and fate of the human species? Everybody seems to be getting their tit caught in a wringer about this. Relax: there is nothing profound, philosophical, or mystical about it all. It is just technology, and its human consequences are no different in principle from those of any other technology. Does this mean that we shouldn't be giving some thought to how it is developed or employed? Of course we should. Like all technology, it is essentially amoral, and like any technology, it has the potential to help us and to hurt us. We should proceed with caution, but we should be willing to recognize the risks along with the benefits.
AI is a computer technology: it is meant to gather data, process it according to certain pre-programmed assumptions, and make decisions based on that processing. These decisions can be expressed as a report to be studied by its users to inform their own decisions, or they can be carried out automatically, removing the owners from the loop altogether. This is what all computers do, whether they are AI or the more ordinary algorithmic processors we normally associate with the term "computer". But to a certain extent, this handing over of some of our cognitive and executive functions to one of our artifacts is not new. The Legal System is a means of transferring some of our mental activity to an artificial process; so is a bureaucracy. So is our system of Constitutional government, or our Economy (there is still some debate as to whether that is an artifact or a natural process). Simple feedback mechanisms have regulated many of our mechanical processes for centuries: governors on steam engines, analog computers (both mechanical and electrical, and later electronic), and of course the usual digital, algorithmic, programmable devices. When a submarine fires a torpedo, or when a pilot launches a missile, some sort of primitive thinking machine guides the projectile to its target, either by following a fixed program or by using sensors to evaluate its surroundings and modify its behavior accordingly. We have never been squeamish about exposing our fellow human beings to threats from soulless machines. We can always convince ourselves there is a good reason to do so.
What makes AI unique is that the mechanical brain involved uses processes other than programs (recipes of instructions that respond predictably to changing circumstances). An AI system "thinks", it "remembers", and it modifies its behavior through something like a natural selection process based on its prior mistakes and successes. Its decisions are not predictable; they are not even repeatable. In fact, that is precisely how we want them to behave. We want our AI computers to react to situations we did NOT anticipate. And we are willing to pay the price of an occasional blunder, or even a catastrophe, in order to achieve that autonomy. I don't think there is anything wrong with this; in fact, I see it as a natural progression of technology.
Some of our existing algorithmic systems already approach this level of performance even without employing AI techniques. Programmable computers already translate languages, play Grandmaster-level chess, and operate search engines in a way that seems uncannily human. It makes one wonder whether consciousness can be simulated. Maybe there really is no such thing as 'consciousness'; perhaps it is all an illusion.
AI is certainly a tool which can be profitably used to explore these philosophical issues. These are topics worth expending our resources on; learning how our brains work is a laudable goal, and it has many practical applications. By building AIs we can gain insights into how 'natural' intelligences work. No, I don't believe in souls, spirits, or any other spooky manifestations or Divine interventions. It's all chemistry and electricity, guided by billions of years of Evolution, as far as I'm concerned. But even if I'm dead wrong, we'll never know for sure until we try to build an artificial mind. Or as many of them as we can. We may eventually even learn (as I suspect) that the distinction between 'natural' and 'artificial' is itself 'artificial'. Can 'consciousness' be simulated? Does it matter?
There are plenty of other good reasons to explore AI. There are many natural, stochastic processes that are simply too fractal, too indeterminate, too chaotic to be managed by a procedural algorithm. And even if these activities can eventually be handled by programmable computers, it may be faster and cheaper to let AI take care of some of them. AI can be very useful in handling situations where we simply do not have enough prior knowledge to make the right decisions. It can certainly be useful when decisions have to be made in a hurry, based on insufficient data. Sure, AI may make mistakes, but it may make them less often than a programmable computer would. AI may be the only way to navigate spacecraft on interstellar missions, where light-time delays make real-time human control impractical.
But there are also reasons for caution. Do we really want important human decisions to be carried out by intelligent machines whose primary raison d'être is to maximize corporate profit or ensure the survival of a political party? Sure, there may be advantages to having auto accidents minimized by robot drivers, but the development and implementation of such technologies should not be left to some company whose only genuine interest is reducing the salaries of its drivers. And I hope I'm not offending you by pointing out that no company can be trusted to make the right decision when profits are involved. Neither should any government be left to make such decisions without democratic supervision. People in power simply cannot be trusted to do the right thing. And they can be counted on to use whatever technology is available to get their way, no matter who gets hurt.
No, I have no solution to the question of how we should manage AI technology on a civilization-wide level. Like all technologies, it has the potential for good or evil, and there are always unforeseen consequences. All I'm saying is that we should be skeptical of those trying to promote it, especially if they expect to benefit mightily from it. Like I said, they simply cannot be trusted.
-
Should I move this to Flame?
-
Would you trust yourself to build an ethical AI?
-
Of course not.
-
And, it just occurred to me...
-
I agree with everything you just said, except...
-
I agree with everything you just said, except...
-
Reflexive need
-
So much right and so much wrong
-
A zero-sum game.
-
New Maps of Hell
-
Now I am become death, destroyer of worlds
-
It's not the "creators" I mistrust.
-
What you always seem to miss
-
If I need a lawyer, I'll hire one in a heartbeat.
-
Slimy Lawyers
-
Yeah, some are nice guys.
-
A noticeable omission
-
No, I made an edit while you were typing.
-
Dying Inside