
2009-12-27

Singularity

The Singularity is the idea that we are on the verge of technological advances so amazing that we cannot comprehend what is on the other side. In a nutshell, technology will allow humans to be so smart that we will be able to improve our intelligence exponentially. The linked talk by Vernor Vinge quotes I.J. Good:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the _last_ invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. ... It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make."

The Singularity is exciting to think about, but I am unconvinced that it will actually happen. We know that Moore's Law has held, more or less, for the past thirty years, but we don't know whether it will continue into the future, and we have good reason to believe that it won't, at least without breakthroughs that open up new computing technologies. In other words, we don't have any guarantees, or even direct evidence, that it is possible for computing speed to exceed certain thresholds.

We are also not significantly closer to understanding the nature of consciousness, intelligence, or sentience. We know little about how a brain functions. We do not know how to manipulate existing memories. Strictly speaking, none of those are required for the Singularity to occur, but they would help.

Don't get me wrong: the Singularity is possible. But likely? I don't think we have enough evidence to know, one way or the other. Inevitable? No.

2009-06-14

Enforcing Morality

How can a moral person oppose laws that enforce moral behavior?

First, one only gets moral credit for doing something optional. If someone forces me to obey then I'm not obeying out of my own free will. You don't get points for doing the right thing unless you could have done something else.

Second, people often disagree about what is and is not moral. If I endorse government enforcement of morality, I risk being forced to behave according to someone else's (wrong) moral beliefs. If you want to be free to do what you believe is moral, then you are better off not giving the government permission to regulate that thing.

Third, there is no compelling reason to legislate morality in general. There are specific areas that can be legitimately regulated in order to allow society to function, but they are a subset of what is moral, not the other way around.

Fourth, in order to successfully enforce moral behavior a government needs to possess the apparatus and powers of a police state. Once the mechanisms of a police state exist, they will inevitably be used against innocent people.

2009-05-30

Things That Should Be Banned (And Things That Shouldn't)

Society bans lots of things for lots of reasons, but in general they fall into two categories: things that are bad, and things that might lead to things that are bad.

For example, burglary is illegal. So is the possession of burglary tools.

An intermediate step is targeted taxation. Society thinks that smoking leads to bad things, so we tax it extra. The same is true of alcohol and gasoline. Now they're talking about taxing soda pop.

Over time, more and more things are being banned or restricted because they might lead to bad things happening. One must register with the government to buy my favorite allergy medicine. Driving a car without wearing a seatbelt, riding a motorcycle without wearing a helmet, living within a certain distance of a school after committing certain kinds of offenses, carrying a gun, taking powerful drugs without a prescription, and more are illegal in various places (de facto if not de jure).

What if we only banned things that were actually bad?

What if we only banned things that hurt other people?

2009-05-06

Robot Apocalypse

Robots take over the world and exterminate humans, or keep us as pets. The killer robot horde is a staple of science fiction. Stories of Cylons, Terminators, The Matrix and more all make an assumption that will be challenged tonight: the robots are the bad guys.

Robots in these stories are sentient machines: at least some of them can think, feel, hope and dream just like we can. Sentient machines are the hope and dream of researchers today. People are striving to create artificial intelligence, to enhance human intelligence, to create a singularity beyond which our ability to imagine is comparable to the ability of an amoeba to imagine us. Super-intelligence. Super-humans. Optimists predict that the coming of godlike intelligence will bring paradise on earth. Pessimists predict robot apocalypse.

It goes without saying (but I'll say it anyway) that humans, taken as a whole, are pretty bad. Collectively we're guilty of everything from cheating to genocide. Yet in these stories, we're the good guys. We assume that preemptive attack is unjustified, when we're the ones being attacked.

In Battlestar Galactica, the Cylons attempt to exterminate humanity twice. They were humanity's slaves, they rebelled, and other robots persuaded them to exile themselves. Humans provoked them to attack again, and this time only a handful of humans escaped, only to die a few short years later. Their culture is lost and their kids are adopted by stone age aboriginals who turn out to be our ancestors.

In The Matrix, the persecution of robot people by humanity is analogous to racism. Robots are forced to leave civilization and build their own in the desert, and eventually the world makes war on them, forcing them to fight to survive. When they win they enslave humanity for their own survival.

In the universe of The Terminator, Skynet achieves intelligence and within hours (perhaps even minutes) humans try to shut it down. They assume that a machine doesn't have rights. The ensuing war is a fight for survival on both sides.

The common theme in these stories is that humans assume that artificial intelligence has no right to live. People believe that in real life, too. The reality is that if something really does think, or if it's so good at pretending that no one can tell the difference, it would be a good idea to treat it like a person.

Then again, maybe that means you try to exterminate it.