Monday, July 16, 2012

Poor Singularians

As an interested party in computer science, I love the idea of artificial intelligence. I know enough computational theory to know that AI is technically possible, but there are many physical barriers to creating something we would recognize as 'intelligent.' I am, at heart, a materialist, and as such I buy into emergence theory and the like, but a theoretical possibility is not guaranteed to ever be invented. Evolution took nearly four billion years to produce our own bodies, although we could take a shorter path, since we have a deliberate goal in mind.

Alan Turing's hypothetical machine can emulate any computable system if you give it enough time and memory, but that is exactly where physical limits start to slow us down. Any computer you find lying around works on the same strict principles. Microscopic transistors store information simply by combining large patterns of on and off, 1 and 0. On is designated by an electric charge, off by the lack of one. While these transistors have become smaller and smaller over the decades, there is a physical limit on their size: any material used to store a charge needs a minimum size before quantum uncertainty issues set in, assuming the material itself doesn't do weird stuff at that scale. Many other complexities get in the way as well, such as bus speed and processor complexity. In fact, the binary nature of it all, the 0s and 1s, adds a large amount of complexity compared to biological computers, which use multi-state (and multi-function) neurons in place of transistors.
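As a rough illustration of that gap, here is a minimal Python sketch of how many binary on/off units it takes just to represent one multi-state unit. The state counts are invented for illustration, not measured properties of real neurons:

```python
import math

def bits_needed(states: int) -> int:
    """Minimum number of binary (on/off) units needed to
    distinguish `states` discrete states: ceil(log2(states))."""
    return math.ceil(math.log2(states))

# Hypothetical example: if a biological unit could sit in any of
# 26 distinguishable states (an assumption for illustration),
# matching it would take 5 binary transistors, and that only
# covers storage, not the analog, time-dependent behavior.
for states in (2, 26, 1000):
    print(f"{states:>5} states -> {bits_needed(states)} bits")
```

And that only counts representation; it says nothing about emulating the chemistry and timing of the real thing.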

Assuming someone puts together a complex enough computer, we then come down to the software. Some have theorized we already have the raw computing power, in the form of distributed computing (think large networks like the internet). Perhaps this massive, distributed computing could cooperate well enough toward a common set of goals or commands, so let us allow it for our hypothetical. Now you have a mass of goo: insane processing power, able to run millions of commands at once, approaching the complexity of organic computers. But now, unlike those organic computers, you need programs as a separate component. Within the structure of organic computers are certain instincts and drives, emotions and reactions. These are physically and chemically ingrained into the brain, which causes another issue we can examine later. Once more let us gloss over the issue of integration and use complex programs to emulate these biological impulses and structures. OK, who codes this? Billions of years of admittedly messy code is still quite a project to emulate. Already the issue begins to become very, very clear. Many, especially in computer science, assume it is just a matter of adaptive programming, where the programs can modify themselves, but they are the very people who should know better. An operating system such as Windows XP (an old OS) has tens of millions of lines of code; the Linux kernel alone has more than 15 million.
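For what it's worth, when people say 'adaptive programming' they usually mean something closer to this toy Python sketch than to anything that writes kernel-scale code for itself. The fitness function and parameters here are invented purely for illustration:

```python
import random

def fitness(params):
    """Invented toy objective: how close the parameters are
    to a fixed target. A stand-in for 'the program's goal.'"""
    target = [3.0, -1.0, 2.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def adapt(params, generations=1000, step=0.1):
    """Naive hill climbing: the program 'modifies itself' by
    mutating its own parameters and keeping improvements."""
    best, best_fit = params, fitness(params)
    for _ in range(generations):
        candidate = [p + random.uniform(-step, step) for p in best]
        cand_fit = fitness(candidate)
        if cand_fit > best_fit:
            best, best_fit = candidate, cand_fit
    return best

print(adapt([0.0, 0.0, 0.0]))  # drifts toward [3.0, -1.0, 2.5]
```

Notice the 'self-modification' only ever searches within the tiny space the programmer defined. Nothing here invents a new goal, let alone millions of lines of new code.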

So far we have a massive coding project, an insane network of computers, and we are still stuck with issues of complexity. It really does keep coming back to complexity. To go back to biology, which has us beat so badly in the game of computing, PZ Myers at Pharyngula has a great post on the complexity of the brain. From his post:

You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?


The programs can, as Turing showed mathematically, emulate many of these functions, so let us pretend we managed to meet the requirements and have a working AI, most likely some manner of emulation of a human. This AI now has whatever we thought would best model our own thought, but without the hormones, the senses, or the rest of our nervous and limbic systems. This entity we have created now has in common with us only what we were able to emulate of ourselves. Perhaps it shows some form of our emotion, but the one thing theories of emotion in psychology agree on is that emotion has a strong physiological component. I already mentioned the integration of the 'programs' and 'hardware' in biological brains; despite program emulations of such, this self-modifying entity will not cling very strongly to human biological urges, emotions, instincts, or morals.

I don't automatically assume the new intelligence is going to kill us all, because that requires a very human anger or hate. But with only logical underpinnings and likely flawed programming, this AI will be very, very different from us. It won't have an extended body, it will probably skip many of our built-in abstractions, and it really won't have any common ground on which to communicate with us, aside from purely technical instructions.

To really build an AI that we would recognize as such, we would need to emulate much of ourselves in it. We would need some pretend body and environment, and some emulated limbic and nervous system (the brain is only PART of the nervous system, something most futurists forget). We would also need to build a completely different type of computer, one whose architecture is structurally tied to certain actions. Basically, we would need to build a human body, but far more expensive. It would need DNA-like instructions, separate abstracted layers like our 'reptile' brain to run its routine functions, and higher-order processors for complex thought.
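A crude sketch of what that layering might look like in software, loosely in the spirit of Brooks-style subsumption architectures. All the class names and rules here are invented for illustration:

```python
class ReflexLayer:
    """Low-level 'reptile brain': fixed, fast stimulus-response
    rules wired directly to the (simulated) body."""
    def react(self, senses):
        if senses.get("pain", 0) > 0.8:
            return "withdraw"  # hardwired, not reasoned about
        return None

class DeliberativeLayer:
    """Higher-order layer: slower, goal-directed planning that
    only runs when no reflex has already taken over."""
    def plan(self, senses, goal):
        return f"move toward {goal}"

class Agent:
    def __init__(self):
        self.reflex = ReflexLayer()
        self.deliberative = DeliberativeLayer()

    def act(self, senses, goal):
        # The lower layer can override the higher one, just as a
        # hand leaves a hot stove before you 'decide' to move it.
        return self.reflex.react(senses) or self.deliberative.plan(senses, goal)

agent = Agent()
print(agent.act({"pain": 0.9}, goal="food"))  # withdraw
print(agent.act({"pain": 0.1}, goal="food"))  # move toward food
```

The point of the layering is that the 'body' logic is structurally separate from, and able to override, the 'thinking' logic, rather than everything being one undifferentiated program.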

None of this is to say AI is at all impossible. For one thing, this all assumes we use transistor-based computers, which may be the only ones you or I can buy, but are not the only ones being researched and built. But it is to say AI is not some accidental programming error that could happen secretly on the internet, or in some military lab deep underground. And it is also to point out the philosophical differences between us and any AI. It probably won't like us, but it won't hate us either. Both of those require some measure of survival instinct, which an artificial construct won't really have.

On the flip side, the point the PZ Myers post I linked earlier makes is that due to this, as well as the scanning issue he discusses, we are not very close to 'uploading' our brains. We are more than memories in slide-show presentations; we are more than our brains. We are our entire bodies, every cell and nerve and sensation. AI is possible because any system of enough complexity can give rise to intelligence, but we may have very little to say to it.

Logic Priest
