7th September 2014, 7:57 PM
I really don't think anything like that is nearly that close. Of course there are baby steps, but keep in mind that neurons and semiconductors don't really work the same way. At a fundamental level, every neuron is less like a single transistor and more like an entire processor, each one with its own input/output system. We've got some very broad strokes on which sections of the brain handle which general tasks, but those same broad strokes, applied to a PC, wouldn't let you build an exact duplicate: knowing that this part is the processor and that part is the memory is basically useless if you're trying to actually build the hardware yourself or write an emulator.
Further, the brain is more than just the sum of its electrical activity: the flood of chemicals the brain floats in changes how every neuron bathed in it responds to those electrical signals, so that has to be emulated too. Not knowing exactly how every neuron is wired is a very big gap. With something like, say, a star, you don't need to know the interactions of every individual atom; you can generalize and create math that simplifies everything, because a star's behavior doesn't depend on chains of activity that go down to that scale. Living things and weather systems DO depend on incredibly small variations to a ridiculous degree (such systems are called "chaotic" for this reason), and living things maybe more so, since there are SO many chains upon chains of protein reactions that depend on the smallest parts of single molecules, with later reactions depending on THOSE outcomes. The brain is possibly more complex still.
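That "chaotic" point is easy to demonstrate. Here's a toy sketch (nothing to do with brains specifically, just the textbook logistic map) showing how a difference of one part in a billion in starting conditions gets amplified until the two trajectories have nothing to do with each other:

```python
# Sensitive dependence on initial conditions ("chaos") via the
# logistic map x -> r*x*(1-x) with r = 4.0, its fully chaotic regime.
# Two starting points differing by 0.000000001 end up nowhere near
# each other after a few dozen iterations.

def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic(0.200000000)
b = logistic(0.200000001)
print(abs(a - b))  # the billionth-scale gap has blown up enormously
```

A star, by contrast, is the kind of system where tiny perturbations average out instead of compounding, which is why simplified math works there.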
The sheer daunting scale of the problem of virtualizing a brain is absolutely mind numbing. Reading commentary by biologists and neurologists on the issue makes that clear to me. I'm a bit more hopeful than some of them, in that I think someday that challenge might be overcome, that it is possible, but I am going to take their word for it that it very likely won't be in our lifetimes. I love technology, "do machines" and everything, but technologists have a bad habit of massively underestimating the biological problem ahead of them when they predict things like a singularity.
There's also another matter. Something like a superior intelligence is less a matter of computing power (though that is a factor), and MUCH more about the software. String together a trillion circuits, and that's all you have: a trillion unthinking and useless circuits doing basically nothing. They've got to be programmed with actual systems to stand a chance of becoming a superior intellect. It won't just spontaneously emerge. To program something like that, we've got to understand how our own brains are programmed, and studies of that are still in their infancy. If you had a brain's worth of neurons but they weren't intelligently arranged in any fashion, you'd have nothing. Evolutionary programming, starting with general goals for how you want a set of competing AIs to behave and selecting as time goes on, could help us get there, but we'd still lack an understanding of what the result is doing (though perhaps it'd be easier to reverse engineer after the fact). A bigger problem? Evolution is more or less blind. We'd have no way of knowing whether the AIs we just evolved were "friendly" or not.
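For anyone unfamiliar with the "evolutionary programming" idea mentioned above, here's a minimal sketch of the select-and-mutate loop involved. There's no AI here, just the classic toy OneMax problem (evolve a bit string of all 1s), and every number in it (population size, mutation rate) is an arbitrary choice for illustration:

```python
import random

# Bare-bones evolutionary loop: score a population against a "general
# goal" (the fitness function), keep the fitter half, and refill the
# population with mutated copies of the survivors. Repeat.

random.seed(0)
GENES, POP, GENERATIONS, MUT_RATE = 32, 40, 200, 0.02

def fitness(bits):
    # The goal we select for: as many 1s as possible.
    return sum(bits)

def mutate(bits):
    # Flip each bit independently with small probability.
    return [b ^ (random.random() < MUT_RATE) for b in bits]

population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in survivors]

best = max(population, key=fitness)
print(fitness(best), "of", GENES)
```

Notice the point made above in miniature: the loop reliably produces something that scores well, but nothing in the process explains *how* or guarantees the result has any property you didn't explicitly select for.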
So, the singularity is an interesting idea, but it's not right around the corner, and there are a LOT of things humanity needs to do to get there, many of those tasks very daunting indeed. Don't misunderstand: I think augmenting and altering the human mental condition is probably the one thing that'll save us as a species. I just don't underestimate how challenging the task is.
"On two occasions, I have been asked [by members of Parliament], 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question." ~ Charles Babbage (1791-1871)