Be careful what you endogenize...

I'll admit upfront to only having a passing and most likely superficial familiarity with the issues explored by the transhumanist community. But as I was (metaphorically) thumbing through the latest issue of H+ magazine, I was struck by how... constrained many of the articles are. Futurology is a notably (and often comically) imprecise "science", and it's easy to be blind to the ways in which technological developments will fundamentally transform the issues we face - which tends to lead to comparably absurd extrapolations of current trends into the indefinite future. Some believe that the shift from extensive (Malthusian) to intensive economic growth that began in roughly the 19th century is a temporary blessing which will be reversed by the advent of cheaply-replicable silicon brains.

This might strike one as intuitively undesirable, but it's not an absurd possibility if brain emulation or general artificial intelligence becomes sufficiently advanced to seriously blur the general distinction between labor and capital. But what strikes me as odd about many of the writers from H+ - and again, maybe this isn't representative of transhumanists in general - is what they want to keep constant in their arguments. Oftentimes there's a clear hedonistic tendency to act as if technology will simply make it easier for us to achieve our desires, rather than actually shaping and redefining our desires. This isn't merely to say that the cultural changes which accompany technological growth will change the particulars of what we want, but that the broad nature of our appetites will become an endogenous variable that can be shaped by technology. Who says we'll seek pleasure, as it's currently understood, let alone particular avenues to pleasure such as sex, or "fun", or the satisfaction of our current set of appetites?

It seems likely that there would be selection pressures favoring beings with motivations geared towards self-replication - and in the future, the optimal set of motivations might not be very recognizable as "human" in either attitude or underlying architecture. These beings wouldn't be as arbitrary as paperclip maximizers, but I think it's easy to see how inhuman a person solely focused on self-replication would seem to us (assuming we could see past the personable attitudes he would instrumentally employ). To borrow the jargon of Tyler Cowen, expanding - or innovating, rather - neurodiversity and being able to select over cognitive profiles would have a transformative effect on social evolution, and I'd venture to say that our highly limited abilities to do so are a necessary condition of our being able to construct an ideal of what is "human."

Are transhumanists blind to this possibility - nay, likelihood? I doubt it, and I'm sure I'm beating someone's dead horse here. But if so, at least this post touches on the problematic esotericism (is that a word?) which seems to exist in some circles. In the end, I think the possible desirability of moving beyond the human condition deserves discussion and debate, and I have to wonder whether transhumanists purposely avoid it for PR reasons. Live forever! Expand your mind! Leap tall buildings in a single bound! It sounds nice, but it brushes aside the fact that new technologies really will have even broader social consequences than most critics would recognize. Still, I suspect a lot of transhumanists really do believe that new technologies will simply make it easier for people to acquire pleasure, either because the technologies will be developed selectively (no one will make AIs / emulated brains with motivations significantly different from ours) or because they're simply blind to the full set of possible consequences of new technologies.

Myself, I do see a hedonistic race to the bottom (so to speak) in the future, and I expect that sometime in my lifetime these issues will become salient enough that we'll have to seriously consider the merits of allowing the engineering of "alien" cognitive profiles. It'll be an interesting debate, for sure.

(Author's note: Since this is my first post here, I figured I'd add a quick blurb. I'm a second-year economics PhD student at George Mason University, and I like being involved in a lot of the discussion that occurs in this section of the blogosphere, so I'm trying to make my own contributions as I find inspiration. Future posting will probably be somewhat contingent on the quantity and quality of comments I receive, so don't be shy if you have any thoughts on what I've written... though I'm not sure I should expect too many readers on this post; we'll see. In any case, that's all for now.)

Joining the Community

Sorry to hijack this thread, but there is a sysadmin problem. The front page invites people to start there own blogs, but there is no link for doing so. Is this community now closed? Or is this a bug?

Ugh! That should be "start

Ugh! That should be "start their own blogs."

If you're registered and logged in, you should see a sidebar on your right headed by your name. Click on the "create content" item under that and you should be able to make a post. At least that's what worked as of yesterday.

Joining the Community

But how do you register? I see no link for doing so.

Try it now

We had to shut it down due to spammers. I just re-opened it.

Peter,

Are you familiar with Friendly AI?

Peter Twieg

Micha -

I am... it's my understanding that a lot of Friendly AI proponents such as Yudkowsky believe that we won't see a diversity of superintelligences with differing goals post-Singularity... and I'm skeptical of this notion. Robin Hanson tried to talk to me about the economics of the Singularity once, and even though I'm not too familiar with these issues I tried to take his disagreements with Yudkowsky to heart.

I just mentioned Friendly AI because it seems like a counterexample to your worry that transhumanists are blind to the possibilities of nefarious consequences. The primary goal of FAI is to address these possibilities.

Yudkowsky's reason for doubting the diversity of superintelligences is his prediction that a superintelligence will judge itself to be a natural monopoly, and prevent the creation of other future superintelligences with divergent, unfriendly goals. Here is his argument:

If, as seems both likely and desirable, transhumanity first comes into existence as the result of AI, and if that AI is Friendly, then Friendliness is called upon to shoulder the massive burden of deciding, not only the future, but how much of the future should be decided. Transhumanity is almost unlimited power-in-potentia; the question becomes, not just how this power should be wielded, but whether it should be wielded, or even whether that power should be acquired in the first place. [...]

Suppose that each seed AI in the twenty-first century, as ve reaches transhumanity, becomes the seed and operating system of a polis - so that everyone gets to pick their own definition of Friendliness and live there. It doesn't seem that the system-as-a-whole would last very long. Good AI, good AI, good AI, good AI, good AI, evil solipsist AI, good AI, good AI, good AI, evil solipsist AI, good AI, good AI, good AI, good AI, evil aggressor AI. At this point, everyone in the Solar System who isn't behind the impregnable defenses of an existing superintelligence gets gobbled up by the evil aggressor superintelligence, after which the sequence ends. Flip through a deck of cards long enough, and sooner or later you'll turn up the ace of spades.

These are some of the factors which, in my opinion, make it likely that the Transition Guide will implement a Sysop Scenario - one underlying operating system for the Solar System; later, for all of human space. It is possible, although anthropomorphic, that the end result will be a Diaspora-like multiplicity of communities with virtual operating systems, or "Sysop skins", existing on top of the underlying operating system. I, for one, strongly doubt it; it doesn't seem strange enough to represent a real future. But, even in the "operating system skin" case, the "flipping through the deck" and "hell polis" problems do not exist; try and construct a virtual operating system which allows you to create and abuse another sentient, and the underlying operating system will step in. Similarly, even if Earth turns out to be a haven for the Luddite biological humans and their kin, I would expect that the Sysop would maintain a presence - utterly unobtrusive, one hopes, but still there - to ensure that nobody on Earth launches their own hell polis, or tries to assimilate all the other Earthly refugees, or even creates a tormentable AI on their home computer. And so on.

But that is simply my personal opinion. A Friendly AI programmer does not get to decide whether Friendliness manifests in an individual human-level AI trying to do good, or in an AI who becomes the operating system of a polis, or in an AI who becomes the Sysop of human space. A Friendly AI programmer does not even get to decide whether the Sysop Scenario is a good idea; Sysop / nonSysop scenarios are not moral primitives. They are, formally and intuitively, subgoal content: The desirability of a Sysop Scenario is contingent on its predicted outcome. If someone demonstrated that neither the "flipping through the deck" nor the "hell polis" problems existed - or that a Sysop Scenario wouldn't help - then that would remove the underlying reason why I think the Sysop Scenario is a consequence of normative altruism. Similarly, most of the people who come down on the nonSysop side of the issue do so because of testable statements about the consequences of uniformity; that is, their indictment of the Sysop Scenario is contingent upon its predicted outcome. Whether the Transition Guide favors a Sysop Scenario or a "Coalition of Polises" is not a decision made by Friendship programmers. It is a consequence of moral primitives plus facts that may still be unknown to us.

Thanks for the reference. I think the main point of contention I'd raise is that I don't think it can simply be asserted that an "evil aggressor AI" will be able to dominate the post-Singularity landscape. Why will an evil AI be so successful when the general trend of history has been to make "evil" (non-cooperative, roughly) strategies less and less viable?

I don't mean to accuse all transhumanists of being naive on these matters, though. I didn't pay close attention to Yudkowsky's fun theory series, but I never had the impression that he made overly narrow assumptions about what transhuman desires and motivations would end up looking like.

I don't read Yudkowsky as claiming that an evil aggressor AI would dominate the landscape, only that it would dominate "everyone in the Solar System who isn't behind the impregnable defenses of an existing [friendly] superintelligence." Which is why he suspects that a Friendly AI, if implemented first, would use its first mover advantage to prevent the development of evil aggressor AIs.

Why will an evil AI be so successful when the general trend of history has been to make "evil" (non-cooperative, roughly) strategies less and less viable?

It's not a question of motives; it's a question of power. The only thing that can stop an evil superintelligence is a friendly superintelligence; regular humans with normal levels of intelligence are irrelevant. And one of the recurring points Yudkowsky makes in that same paper is the importance of avoiding anthropomorphic thinking about non-human intelligences. General trends of human history involving selfish and cooperative evolutionary strategies simply don't apply to artificial intelligences, except to the extent that they are programmed that way.

The smart transhumanists & the sheep

There are plenty of transhumanists who are just rah-rah "tech is good." But there are also some very smart transhumanists who have thought quite a lot about the possible consequences of new technology.