Brain chips

There can be few more potent ideas in futurology and science fiction than that of the brain chip – a direct interface between the biological information processing systems of the brain and nervous system and the artificial information processing systems of microprocessors and silicon electronics. It’s an idea that underlies science fiction notions of “jacking in” to cyberspace, or uploading one’s brain, but it also offers hope to the severely disabled that lost functions and senses might be restored. It’s one of the central notions in the idea of human enhancement: perhaps through a brain chip one might increase one’s cognitive power in some way, or have direct access to massive banks of data. Because of the potency of the idea, even the crudest scientific developments tend to be reported in the most breathless terms. Stripping away some of the wishful thinking, what are the real prospects for this kind of technology?

The basic operations of the nervous system are pretty well understood, even if the complexities of higher-level information processing remain obscure, and the problem of consciousness is a truly deep mystery. The basic units of the nervous system are the highly specialised, excitable cells called neurons. Information is carried long distances by the propagation of pulses of voltage along long extensions of the cell called axons, and transferred between neurons at junctions called synapses. Although the pulses carrying information are electrical in character, they are very different from the electrical signals carried in wires or through semiconductor devices. They arise from the fact that the contents of the cell are kept out of equilibrium with their surroundings by pumps which selectively transport charged ions across the cell membrane, resulting in a voltage across the membrane. This voltage is relaxed when voltage-gated channels in the membrane open up. The information-carrying impulse is actually a shock wave of reduced membrane potential, enabled by the transport of ions through the membrane.
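
The resting voltage across the membrane described above can be estimated from the ion concentration gradients the pumps maintain. As a minimal sketch, the Goldman-Hodgkin-Katz equation below uses textbook concentration and permeability values for a resting mammalian neuron – these numbers are illustrative assumptions, not figures from the post:

```python
import math

# Goldman-Hodgkin-Katz equation: the resting membrane potential set by
# ion pumps maintaining concentration gradients across the membrane.
R, T, F = 8.314, 310.0, 96485.0  # gas constant, body temperature (K), Faraday constant

def ghk_potential(perm, inside, outside):
    """Resting potential (volts) from relative permeabilities and concentrations."""
    # cations weighted by outside concentration, the anion (Cl-) by inside
    num = perm["K"] * outside["K"] + perm["Na"] * outside["Na"] + perm["Cl"] * inside["Cl"]
    den = perm["K"] * inside["K"] + perm["Na"] * inside["Na"] + perm["Cl"] * outside["Cl"]
    return (R * T / F) * math.log(num / den)

perm    = {"K": 1.0, "Na": 0.04, "Cl": 0.45}     # relative permeabilities (assumed)
inside  = {"K": 140.0, "Na": 12.0, "Cl": 4.0}    # intracellular concentrations (mM)
outside = {"K": 4.0, "Na": 145.0, "Cl": 116.0}   # extracellular concentrations (mM)

v_rest = ghk_potential(perm, inside, outside)
print(f"resting potential ≈ {v_rest * 1000:.0f} mV")  # roughly -75 mV
```

An action potential is then a transient collapse of this potential as voltage-gated sodium channels open; raising the sodium permeability in the same equation drives the computed voltage towards positive values.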

To find out what is going on inside a neuron, one needs to be able to measure the electrochemical potential across the membrane. Classically, this is done by inserting an electrochemical electrode into the interior of the nerve cell. The original work, carried out by Hodgkin, Huxley and others in the 1950s, used squid neurons, because they are particularly large and easy to handle. So, in principle one could get a readout of the state of a human brain by measuring the potential at a representative series of points in each of its neurons. The problem, of course, is that there are a phenomenal number of neurons to be studied – around 20 billion in a human brain. Current technology has managed to miniaturise electrodes and pack them in quite dense arrays, allowing the simultaneous study of many neurons. A recent paper (Custom-designed high-density conformal planar multielectrode arrays for brain slice electrophysiology, PDF) from Ted Berger’s group at the University of Southern California shows a good example of the state of the art – this has electrodes with 28 µm diameter, separated by 50 µm, in an array of 64 electrodes. These electrodes can both read the state of the neuron, and stimulate it. This kind of electrode array forms the basis of brain interfaces that are close to clinical trials – for example the BrainGate product.
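
To give a flavour of what "reading" a multielectrode array involves, here is a minimal sketch of spike detection by threshold crossing on synthetic voltage traces from a 64-electrode array. The traces, noise level, and threshold rule are all illustrative assumptions, not the USC group's actual analysis pipeline:

```python
import random

def detect_spikes(trace, threshold):
    """Return sample indices where a voltage trace crosses the threshold upward."""
    return [i for i in range(1, len(trace))
            if trace[i - 1] < threshold <= trace[i]]

random.seed(0)
n_electrodes, n_samples = 64, 1000   # 64 electrodes, as in the paper
spike_times = {}
for e in range(n_electrodes):
    # Gaussian noise plus occasional large deflections standing in for spikes
    trace = [random.gauss(0.0, 5.0) for _ in range(n_samples)]
    for t in range(100, n_samples, 250):
        trace[t] += 80.0  # injected "spike" (microvolt scale, assumed)
    spike_times[e] = detect_spikes(trace, threshold=40.0)

total = sum(len(v) for v in spike_times.values())
print(total, "spikes detected across the array")
```

Real systems add spike sorting (attributing waveforms to individual neurons), but the threshold-crossing step above is where most pipelines begin.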

In a rather different class from these direct, but invasive, probes of nervous system activity at the single neuron level, there are some powerful but indirect measures of brain activity, such as functional magnetic resonance imaging or positron emission tomography. These don’t directly measure the electrical activity of neurons, either individually or in groups; instead they rely on the fact that thinking is hard work (literally) and locally raises the rate of metabolism. Functional MRI and PET allow one to localise nervous activity to within a few cubic millimeters, which is hugely revealing in terms of identifying which parts of the brain are involved in which kind of mental activity, but it remains a long way from the goal of unpicking the brain’s activity at the level of neurons.
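
A quick sense of the resolution gap: a few cubic millimetres of cortex contains an enormous number of neurons. The voxel size and the cortical neuron density used below are commonly quoted rough figures, not numbers from the post:

```python
# Back-of-envelope comparison of fMRI voxel resolution versus single neurons.
voxel_mm3 = 3.0            # typical fMRI voxel volume in mm^3 (assumed)
neurons_per_mm3 = 100_000  # rough cortical neuron density (assumed)

neurons_per_voxel = int(voxel_mm3 * neurons_per_mm3)
print(f"~{neurons_per_voxel:,} neurons per voxel")  # ~300,000
```

So even a "high-resolution" functional image averages over hundreds of thousands of neurons at once.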

There is another approach that does probe activity at the single neuron level, but doesn’t require the invasive procedure of inserting an electrode into the nerve itself. These are the neuron-silicon transistors developed in particular by Peter Fromherz at the Max Planck Institute for Biochemistry. These really are nerve chips, in that there is a direct interface between neurons and silicon microelectronics of the sort that can be highly miniaturised and integrated. On the other hand, these methods are currently restricted to operate in two dimensions, and require careful control of the growing medium that seems to rule out, or at least present big problems for, in-vivo use.

The central ingredient of this approach is a field effect transistor which is gated by the excitation of a nerve cell in contact with it (i.e., the current passed between the source and drain contacts of the transistor depends strongly on the voltage state of the membrane in proximity to the insulating gate dielectric layer). This provides a read-out of the state of a neuron; input to the neurons can also be made by capacitors, which can be made on the same chip. The basic idea was established 10 years ago – see for example Two-Way Silicon-Neuron Interface by Electrical Induction. The strength of this approach is that it is entirely compatible with the powerful methods of miniaturisation and integration of CMOS planar electronics. In more recent work, an individual mammalian cell has been probed – “Signal Transmission from Individual Mammalian Nerve Cell to Field-Effect Transistor” (Small, 1 p 206 (2004), subscription required) – and an integrated circuit with 16384 probes, capable of probing a neural network with a resolution of 7.8 µm, has been built – “Electrical imaging of neuronal activity by multi-transistor-array (MTA) recording at 7.8 µm resolution” (abstract, subscription required for full article).
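
The gating mechanism described above can be sketched with a toy model: the neuron's membrane potential contributes a small extra voltage at the transistor's gate, so an action potential shows up as a transient change in source-drain current. The square-law FET model and every parameter value here are illustrative assumptions, not numbers from Fromherz's papers:

```python
def drain_current(v_gate, v_threshold=0.5, k=2e-3):
    """Saturation-region square-law FET model: I_D = k * (V_G - V_T)^2 (amps)."""
    v_over = v_gate - v_threshold
    return k * v_over**2 if v_over > 0 else 0.0

v_bias = 0.9     # fixed gate bias applied by the electronics (V), assumed
coupling = 0.2   # fraction of membrane voltage coupling to the gate, assumed

# membrane potential at rest and at the peak of an action potential (V)
resting, peak = -0.070, 0.030
i_rest  = drain_current(v_bias + coupling * resting)
i_spike = drain_current(v_bias + coupling * peak)

print(f"current change during a spike: {(i_spike - i_rest) * 1e6:.1f} µA")
```

The point of the design choice is that the neuron never has to be punctured: the membrane voltage is sensed capacitively across the gate dielectric, leaving the cell intact.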

Fromherz’s group have demonstrated two types of hybrid silicon/neuron circuits (see, for example, this review “Electrical Interfacing of Nerve Cells and Semiconductor Chips”, abstract, subscription required for full article). One circuit is a prototype for a neural prosthesis – an input from a neuron is read by the silicon electronics, which does some information processing and then outputs a signal to another neuron. Another, inverse, circuit is a prototype of a neural memory on a chip. Here there’s an input from silicon to a neuron, which is connected to another neuron by a synapse. This second neuron makes its output to silicon. This allows one to use the basic mechanism of neural memory – the fact that the strength of the connection at the synapse can be modified by the type of signals it has transmitted in the past – in conjunction with silicon electronics.

This is all very exciting, but Fromherz cautiously writes: “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.” Among the practical problems are the fact that it seems difficult to extend the method into in-vivo applications, it is restricted to two dimensions, and the spatial resolution is still quite large.

Pushing down to smaller sizes is, of course, the province of nanotechnology, and there are a couple of interesting and suggestive recent papers which suggest directions that this might go in the future.

Charles Lieber at Harvard has taken the basic idea of the neuron-gated field effect transistor, and executed it using FETs made from silicon nanowires. A paper published last year in Science – Detection, Stimulation, and Inhibition of Neuronal Signals with High-Density Nanowire Transistor Arrays (abstract, subscription needed for full article) – demonstrated that this method permits the excitation and detection of signals from a single neuron with a resolution of 20 nm. This is enough to follow the progress of a nerve impulse along an axon. This gives a picture of what’s going on inside a living neuron with unprecedented resolution. But it’s still restricted to systems in two dimensions, and it only works when one has cultured the neurons one is studying.

Is there any prospect, then, of mapping out in a non-invasive way the activity of a living brain at the level of single neurons? This still looks a long way off. A paper from the group of Rodolfo Llinas at the NYU School of Medicine makes an ambitious proposal. The paper – Neuro-vascular central nervous recording/stimulating system: Using nanotechnology probes (Journal of Nanoparticle Research (2005) 7: 111–127, subscription only) – points out that if one could detect neural activity using probes within the capillaries that supply oxygen and nutrients to the brain’s neurons, one would be able to reach right into the brain with minimal disturbance. They have demonstrated the principle in-vitro using a 0.6 µm platinum electrode inserted into one of the capillaries supplying the neurons in the spinal cord. Their proposal is to further miniaturise the probe using 200 nm diameter polymer nanowires, and they further suggest making the probe steerable using electrically stimulated shape changes – “We are developing a steerable form of the conducting polymer nanowires. This would allow us to steer the nanowire-probe selectively into desired blood vessels, thus creating the first true steerable nano-endoscope.” Of course, even one steerable nano-endoscope is still a long way from sampling a significant fraction of the 25 km of capillaries that service the brain.
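
To put the closing number in perspective, here is the arithmetic on how many probes full coverage would demand. The 25 km of capillaries comes from the post; the length of vessel that one probe could usefully sample is a pure assumption for illustration:

```python
# Back-of-envelope: 25 km of brain capillaries versus one nano-endoscope.
capillary_length_m = 25_000     # ~25 km of capillaries in the brain (from the post)
coverage_per_probe_m = 0.001    # assume one probe samples ~1 mm of vessel

probes_needed = capillary_length_m / coverage_per_probe_m
print(f"probes for full coverage: {probes_needed:,.0f}")  # 25,000,000
```

Even granting each probe a far more generous reach, the scale of the vasculature puts comprehensive recording many orders of magnitude beyond a single steerable device.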

So, in some senses the brain chip is already with us. But there’s a continuum of complexity and sophistication of such devices, and we’re still a long way from the science fiction vision of brain downloading. In the sense of creating an interface between the brain and the world, that is clearly possible now and has in some form been realised. Hybrid structures which combine the information processing capabilities of silicon electronics and nerve cells cultured outside the body are very close. But a full, two-way integration of the brain and artificial information processing systems remains a long way off.

26 thoughts on “Brain chips”

  1. Very informative post, Richard. I think the most likely near-term application of these technologies to reach into the sci-fi realm will be super vision. Retinal implants are much easier to design and, more importantly, to install. Also, being limited to a two-dimensional plane is not a problem with the retina. Retinal implants are also easily susceptible to a “Moore’s law” like acceleration. They have already given some rudimentary sight to the previously completely blind. I believe that there is a limit now of 2×2 pixels. An increase in resolution matching what has been seen in digital cameras seems inevitable. Super-human resolutions and, perhaps, night-vision may only be a couple of decades away, and will most likely arrive well before a useful brain implant.

    As to mapping out what individual neurons are doing, I’d like to draw your attention to something I found on Brian Wang’s site:

    Researchers have used a modified rabies virus to map *just* the connections between one neuron and its neighbors. Previous efforts to do so have been plagued by viruses’ propensity to multiply beyond control and branch-out indefinitely. This modified virus causes just the axons radiating out from one neuron to turn fluorescent green. This is, of course, still an ex vivo experiment.

  2. Brain-computer interfaces seem like a solution looking for a problem. If you get brain-damaged, why not use some kind of stem cell regeneration to regenerate your brain? You probably don’t even need the stem cells; the appropriate growth factors would probably do the job in most cases.

    Regeneration seems closer on the horizon than anything like brain downloading or uploading (or whatever the singularity people call it) and would probably be cheaper to boot. Besides, I hate the idea of putting non-biological crap into my body when a biological solution is much better.

  3. This post may be a little bit confused and erroneous, but anyway:

    I stuck my nose into this kind of thing a couple of years ago. I think I found some stuff suggesting that the bandwidth, as such, of much of our sensory apparatus is not as wide as you would think.
    Furthermore, I heard on the radio that whether it was with sight or hearing, I cannot recall exactly which, the information actually gets processed before being passed down the nerves to the brain to interact with the various structures there. Hence merely tapping a nerve and sending signals down it in a manner similar to driving a car may be fine for making your subject twitch their little finger, or walk to the left. But it is not much use for the SF dream of brain chips. That would require a completely new level of description of the function of our nervous system, I suspect down to the molecular level.

    I am thinking that the only real interaction that might be possible, quickly and easily in the future, would involve external scanning using magnetic fields, reading alterations in neurons as they occur. Then you need computers capable of rebuilding that into a coherent simulation that corresponds with your subjective experience. Something which is going to be a bit tricky. There are no regions of your brain that are neatly labelled “These neurons detect red”.

  4. Or failing that, there might be ways of growing artificial neural systems parallel to our current ones, but then exactly what you could make them from and how they would work, I really don’t know.

  5. “But there’s a continuum of complexity and sophisitication of such devices, and we’re still a long way from the science fiction vision of brain downloading”

    I doubt solid-state brains are possible. I’m sure there is more to consciousness than engineering artificial signal pathways, else a rightly timed photon could be said to be part of a conscious brain. Too many fake neurons in a mammalian CNS/brain and the subject will exhibit a severe neurological affliction or just die.

  6. Greetings – As is usual with your postings, the crew was sent scurrying to the Data Miner to get more knowledge. Here is a bit of what we found. Thank you again for staying on the crest of the wave.

    Peter Fromherz –

    Two-way silicon-neuron interface by electrical induction –

    Signal Transmission from Individual Mammalian Nerve Cell to Field-Effect Transistor –

    Electrical imaging of neuronal activity by multi-transistor-array (MTA) recording at 7.8µm resolution –

    Electrical Interfacing of Nerve Cells and Semiconductor Chips –

    Detection, Stimulation, and Inhibition of Neuronal Signals with High-Density Nanowire Transistor Arrays –

    Has an excellent movie on BUILDING A NANOCOMPUTER

  7. Brain downloading? It’s called uploading in SF, as far as I know.

    Logically speaking, if you can run a simulation of all the atoms and molecules in someone’s brain (and they are usually operating under certain definable constraints) you can simulate their consciousness. This is the kind of general statement which has allowed SF authors to get away with uploading for well over 10 years now.

    Phillip, I’m not quite sure what you are saying there. What are artificial signal pathways and rightly timed photons?

  8. “Logically speaking, if you can run a simulation of all the atoms and molecules in someone’s brain (and they are usually operating under certain definable constraints) you can simulate their consciousness”

    And “logically speaking”, simulating a physical process is not equivalent to enacting the process. I can simulate a chemical reaction using DFT and supercomputers, but that doesn’t recreate the physical process.

    Where you are perhaps fooling yourself (and impressionable other people) is assuming that your simulation corresponds to equality with the original. A “perfect” copy is assumed, but only “perfect” by computer programming coding standards or something, not “perfect” as in: identical ontological existence. The funny thing is that even a child knows the difference.

  9. My mistake – I wrote brain uploading in the first paragraph, and brain downloading in the last. As I understand it, one refers to uploading when the recipient is of higher computing power than the donor, and downloading when the reverse is true. Probably a Freudian slip representing my views on the likelihood of being able to achieve strong artificial intelligence in the near future.

  10. Whoa Philip. I think we’re on the same side in this, insofar as we both doubt the possibilities of uploading.

    What is this identical ontological existence of which you speak? It sounds more like philosophy than science. Furthermore, do you have any evidence for your previous statement:

    “I’m sure there is more to consciousness than engineering artificial signal pathways”

    How exactly would you differentiate between an uploaded, perfectly running computer replica of someone’s brain, and the real person?

  11. I can’t differentiate between a human being’s mind/body and a circa 22nd century (or before) robot that has human flesh and a mature “human behaviour programming”. If you take things up another level circa 23rd century (or before), I can’t distinguish between the sensory input of a real really hot woman, and sensory wires jammed into the tactile centers of my brain. This is nothing more than an argument for solipsism. I can’t tell if a really good no-limit Hold’Em player is a good bot unless I observe it playing “too good” over a period of hours; in a few years I won’t be able to tell at all if enough world-class players tell the programmers their two cents. I can tell a chess program by the way it destroys me in the endgame, but again, a few years from now I may not be able to tell at all. Doesn’t make a computer program that exhibits learning, sentient.

    Consciousness requires more than signal processing; AFAIK, upload enthusiasts at best use a messy definition of consciousness that resembles the arbitrary definition of metabolism. There is some physical function using CNS-specific physics that is certainly creating our perception of consciousness. If you replace (somehow) human neurons/brain-centers (the consciousness-enabling ones, not optic nerves) with silicon diodes, you might still maintain some or most of the signal processing inside a CNS, but you won’t create whatever is causing consciousness unless you “recreate” the CNS activity responsible for consciousness. I know it isn’t just signal processing; that’s too general a definition.

    Quite frankly, computer programming has no direct relevance in the fields of Philosophy of Mind, and in (if desired) creating brains. It certainly is important in designing intelligent systems (understand Deep Blue was not sentient), but the specific applications have to be tractable. A computer program designed to recursively improve its learning may or may not be tractable in the 21st Century utilizing available semi-conductor infrastructures. The first step would be to estimate the necessary software resources for the application (AGI), and then to estimate a “software progress barometer”. These two basic first steps have yet to be undertaken in any reliable fashion (equating bits with neurons is sci-fi BS) AFAIK, so I’m personally ignoring the field until someone does this.

    Same for MNT. An engineering system encompassing MNT product closure is not a certainty, and even assuming it were we won’t necessarily be able to build it given 21st Century UHV SPM surface sciences.

    “What is this identical ontological existence of which you speak?”

    A property of reality that there are physics (some of which I’m postulating occur in our brains) that can’t be encoded in computer or pocket calculator switches and gates. Brew a cup of coffee with a computer. Go ahead. Do it. This is usually where weasely upload enthusiasts change the subject or respond with a joke. You can simulate the flow water through coffee beans, can simulate the temperature of the ceramic mug, now go ahead and do it. Use a 25th century computer for all I care. Can we stop wasting time on this illusion? It is evil. Assigning intrinsic value to inanimate objects devalues conscious actors, like humans. Someone has brain-washed a whole bunch of otherwise intelligent futurists and that someone probably didn’t understand why an Intel chip can’t brew coffee.
    Sorry for the non-nanotechnology post, but the header is titled “brain chips”.

  12. Futurology or science fiction may be the proper venue for a discussion of brain chips, but it is nanotechnology where the term was coined and the discussion began, so your thoughts certainly do belong here.

  13. Hmm, ok, I’m with you as far as the first paragraph.

    However, I do wonder about learning-capable computer programs. Possibly I have insufficiently discounted the enthusiasts, but it still looks to me like the next few years of semi-conductor technology will allow us to do the kind of programming that you are talking about regarding programs that can teach themselves.
    (mind you, actually writing an appropriate program would be very tricky)

    I am afraid I don’t get the last paragraph though.
    Your coffee analogy seems to me inadequate, given that I cannot brew a cup of coffee using my brain. I can switch on the coffee maker though, just like those small bedside teasmaids have been doing for decades. So it is with the hypothetical uploaded mind – it could just send a signal to the coffee maker to start making some coffee. The point here I think is that a decision has to be made. Whether that is a process of the physics of the brain that is computable or not, I don’t really know. I suppose we’ll spend a few decades trying to work it out.

    I too think your post is relevant though, since nanotechnology is, whether successful or not, linked to the future and the ideas, ethics and decisions we shall have to make in order to get into the future.

  14. Philip, why don’t you – using your special “intrinsic properties” – brain-brew up a cup of coffee. *Imagine* the water flowing through the beans. *Imagine* the mug getting hot. Or better yet, *imagine* yourself drinking it. *Imagine* the caffeine working in your brain. No matter how vividly you imagine these things, you will not cause a cup of coffee to spontaneously materialize beside you. Can I not therefore conclude that your thoughts and feelings, which are created on the same substrate as that fake cup of coffee, are no more real than that cup of coffee?

    I have to ask, are you a hard-core Searlean? That is to say, if a full-fledged Turing-test-passing AI is created, will you still insist that it is a mere zombie? If so, that is the ultimate solipsism. I find that kind of position very strange; it’s like a skeptic of bigfoot insisting that the presence of a hairy giant non-human biped would not constitute proof of the existence of bigfoot. One has to wonder why anyone skeptical about something would, a priori, construct *logical* arguments against any and all possible evidence that could convince them that their position is wrong. That is *exactly* what the Chinese room amounts to. It dismisses behavioral evidence while ignoring that that is all we ever have to go on. They act like consciousness is a special circumstance; it is not. All a heliologist can do is observe the behavior of the sun; he cannot *become* the sun and experience its “sunness” firsthand. Despite this, we believe in a world of matter external to our senses even though it MIGHT be an illusion concocted in our minds. How is “blind” belief in the minds of others any different? After all, I cannot become you and experience myself thinking your thoughts, thus proving your brain capable of sustaining “Philipness” (the conscious state of being Philip Huggan). Despite that, I believe in your consciousness, and I would extend the same courtesy to a non-biological mind.

    Now, the biggest problem with zombie arguments is that they do not take evolution into account. If one insists that there is more than mere signal processing going on in the mind, and that whenever you have a putative mind that is just an amalgamation of signal processing it is just a zombie, then the question arises, how did consciousness evolve? Imagine two possible mutations in human evolution: one creates a brain only capable of complex signal processing, the other can confer “true” consciousness. In what way is the second to be favored by evolution over the first? If one takes zombie arguments seriously, the zombie race would be just as evolutionarily fit as its conscious counterpart.

  15. “(Nanoenthusiast) Can I not therefore conclude, that your thoughts and feelings which are created on the same substrate of that fake cup of coffee, are no more real than that cup of coffee?”

    I don’t think you meant to use two negatives above… my brain can’t brew coffee. Your computer can’t brew coffee. That is the conclusion. Coffee requires a brewing temperature that would injure my brain and wreck your computer. A computer simulation is just a bunch of switches/gates flipping back and forth via electrons. That’s all they do. If the system is outside the bounds of semiconductor switches, say human thought or everything else (matter, energy, fields) in the universe that doesn’t happen to be that computer, that computer program is impotent to affect the rest of reality directly. What, you think if you magically flip the Intel switches back and forth in a certain order, that a unicorn will appear or coffee will be brewed or a human brain will materialize nearby? Plz.

    “how did consciousness evolve?”

    Animals that got an endocrine system jolt when they ******* hot blondes, made more babies. There are many mainstream papers that can shed some (better than nothing) light on this; papers written for brain-biology audiences, not computer programming. This question seems just like questions about a missing link or the origin of the (so said, impossible to evolve) eye.

    “Imagine two possible mutations in human evolution: one creates a brain only capable of complex signal processing, the other can confer “true” consciousness.”

    Impossible. Evolving an advanced (potentially intractably impossible) learning algorithm requires a physical CNS for animals to procreate. Such an environment would have to be a designer environment: clearly not “human evolution”.
    Real physics and engineering have evolved far beyond such thought experiments. Yours is an argument to kill off sentient humans to free up resources for unsentient pet-rocks and calculators. Thankfully few people seriously entertain such notions (and these same people might also dismiss good memes carried along with the evil uploading-meme). There is real research being done at many universities and hospitals to find the seat of human consciousness. I assure you consciousness does not arise out of calculator components.

  16. It is precisely my point that the fakeness of the coffee in a computer and the coffee in your mind is identical. One is no more real than the other. But in the case of thought and emotion, I am supposed to believe that one is more real than the other with no good reason.

    Evolution is directly interested in actions, only the actions of animals matter in survival, whether they have subjective qualia when they **** a hot female is irrelevant. All that matters is they complete the act. In this way there is NO reason to believe qualia have ANY survival value. To insist otherwise is to (re)introduce the supernatural into the evolution of Man, for it would take a supernatural hand to make sure evolution turned out “right.”

    You say that it is impossible for evolution to produce mere signal processing, but you also said that optic nerves *could* be replaced with “silicon diodes.” If evolution saw fit to create one type of cell that can easily be replaced with a computer equivalent, why not all of them? Why would it require a designer environment?

    Uploading originally was just a thought experiment thought up by Hans Moravec. The purpose was to allow people to imagine themselves gradually being replaced neuron-by-neuron with computing elements. If one believes that machines can’t think, then at some point either you, or someone else, will see something wrong. Perhaps, you will lose the ability to see the color red as those parts of your brain are replaced. In that case, the difference between the biological and the synthetic will be clear, and there will be no mass genocide. The problem lies with the zombie arguments. If you believe in the philosopher’s zombie, it doesn’t matter if the replacement parts don’t ring any alarm bells. The original person is simply replaced with a mindless knock-off, one piece at a time. A knock-off that acts just like the original.

    You then have the thorny philosophical issue of what would it feel like to die in this manner. The first suggestion is that you still maintain consciousness until some magic point and then you simply stop thinking. This, of course, raises the question, what is that magic point? Also, why is it that point and not some other point?

    The second idea is that you lose consciousness gradually, but you aren’t conscious of it! So if someone asked you what color piece of paper they were holding up, and the part of your brain dedicated to that task was already replaced, you would say, “Red,” without even having the qualia of experiencing the color red! In other words, you *think* you are having a particular mental state but you aren’t. This is an absurdity. The pro-consciousness-is-a-mysterious-thing people maintain that each of us is an omniscient god when it comes to his or her own mental states. Surely, you can’t be wrong about your own mental states. This either leads us back to the first suggestion, or we take on the third idea.

    The third, which is John Searle’s, is that you lose your consciousness gradually and you are aware of it; but this is different than the scenario where you tell the technician to stop when you go color blind. In this scenario, you lose consciousness, but are powerless to tell anyone. Mysteriously, when people ask you about seeing the color red, you hear a voice coming from your mouth that is not your own saying, “Yes.” Presumably, when they then ask you questions regarding, say, hearing you will be in control of your vocal cords, because the parts of your brain that involves hearing have not yet been replaced. But, when given the first chance to speak, won’t you, instead, talk about the earlier incident in the vision test? Since you are gradually being turned into a philosopher’s zombie, you can’t. So, that “other” voice will continue to speak for you. This creates a situation not different from the first suggestion, at some critical moment you do lose consciousness, in this case, just your conscious control of your body. But why do you not lose your subjective experience at the same time? And why is it that the presence of two minds, one real, one “fake,” in the same brain at the same time can’t be detected by external means?

    The absurdity of all these permutations makes the original idea seem sane by comparison, that your mind simply transitions to a new substrate; either that, or the substrate is found to be functionally deficient by yourself or a technician, and the process is halted and/or reversed. I suppose believing that makes me evil.

    This is probably all academic, if and when mind uploading ever becomes a reality, it will probably not initially look like the scenario outlined by Moravec. Instead, a cruder method of freeze and scan will be available first. In this method, the frozen brain is scanned one micro-layer at a time. The “uploadees” will be cryonics patients. However, large-scale uploading will likely only happen at such time that the gradual method can be perfected.

  17. I’m not reading the rest of your post NanoEnthusiast, but my response to the 1st paragraph is that the “fakeness” of the coffee in my mind uses Central Nervous System processes that trigger consciousness. Of course I’m not actually the coffee cup when I think about coffee.

    When a coffee cup is simulated, it is only a representation, utilizing Intel switches rather than consciousness-creating synapses, EM fields, and presently mostly unknown CNS mind functions. These Intel switches function utilizing different physical processes than brains do. If the simulation doesn’t correspond to the real physical world system it is simulating, it is useless to us.

    I’ve watched you Transhumanists, Extropians and Singularitarians, rebuke and discredit other belief systems: the Catholic Church, Christianity, Islam, Raelianism, Communism, Stalinism, etc…while embracing belief systems that just plain don’t work: Libertarianism, Neoconservatism, Strong AI, uploading for longevity gains, Neocapitalism, Private Health care (this is an attack on the Public Health funding you guys need to live so long…)…

    The only belief I’m attacking as evil is uploading for longevity gains. But take a look in a mirror before criticizing others. I don’t necessarily like the Catholic Church (incredibly wealthy, should be doing even more charitable work than they are), but for instance they are often the only charity distributing basic humanitarian aid in South America.

    Q: What have the uploading enthusiasts ever done?
    A: “Preach” the belief that we should terminate our human minds for software productivity advances, solely because some writers/thinkers don’t understand Philosophy of Mind or “brain physics”. Admittedly two hard disciplines.
    I don’t want H+ to be associated with real world nanotechnology, including all the real world mainstream UHV SPM advances that have occurred and are available for free download (not the actual SPMs themselves, just the PDF files), until H+ cleans up its own false meme space (listed above), starting with the evil one: uploading for personal longevity gains.

  18. Well, it seems that we are at an impasse. Usually, in such matters of science, we would simply agree to disagree until an experiment can be conducted to prove who is right and who is wrong. Indeed, I don’t bother getting into arguments on most topics because, in the end, most issues can be settled by empirical evidence. The problem with the upload skeptics is that they already have a philosophical security blanket in the form of the philosopher’s zombie to combat all forms of future empirical evidence that contradicts their beliefs. If/when someone is uploaded, instead of asking the person if the process worked, they will declare him a zombie and put their fingers in their ears and scream, la la la la la la la I’m not listening! This is a completely unprecedented form of skepticism; it is akin to a UFO skeptic not believing in UFOs after seeing a giant flying saucer floating over New York City.

    I don’t know if uploading will or will not work, but I DO know that it WILL be an empirical matter just like everything else, e.g. MNT, or time travel, or perpetual motion machines, or even bigfoot. Philosophical or logical arguments are irrelevant.

  19. There are many other potential software tests besides the Turing Test (hypothesized in the 1950s, before most modern “brain sciences” scientific instruments had been invented). I’ll contribute my two cents to this arena when someone estimates whether AGI is tractable in this century. Every AGI estimate I’ve seen to date trivializes the functionality of human neurons and ignores other essential brain components. Every AGI estimate I’ve seen assumes deflationary hardware performances where deflationary performance is required, the latter proceeding more slowly than Moore’s Law.

    My two cents for CNS consciousness advances over this century: “Brain sciences” will advance to the point where computer simulations (ironically) will be able to model what deviations from human and animal CNSs still yield “consciousness generating” brain activities (this is the most humane evolution of the field as well). We aren’t sure what these are; we know with 100% certainty consciousness does not come from optic nerves. We know with 100% certainty calculators and pictures and movies and Intel CPU chips aren’t sentient. Likely it will be a basket of chemical and EM field interactions, stemming primarily from specific brain faculties (probably requiring proteins, IMO): the centers of our memories, emotion and endocrine system.

    Why should we waste any time judging whether software is sentient? We know that we and animals like us use physical brain processes that are outside the bounds of what a computer uses. Basically, in the 1950s, Minsky or Moravec or Asimov, or whoever, postulated Strong AI. Without mature brain sciences, assuming all signal processing is consciousness may have been tenable. But your community isolated yourselves from new brain sciences advances over the decades. Now we know with 100% certainty that computers aren’t conscious, and we know about time-irreversible physical processes. But the computer programming community, dealing with arbitrary bits of code that in no way need to correspond with reality, has ignored real physics.

    We’ve wasted this thread about brain prosthetics talking about whether or not a calculator or a computer (or a picture/movie, if I understand your reasoning correctly) is sentient. There are mainstream “brain sciences” journals and blogs and university courses that will deprogram you of the notion computers are sentient by illustrating brain structures and functions we associate with thought. Yes, many of these brain faculties are unknown to some degree, but we understand them enough to know they don’t resemble a computer.
    I’ve been very patient in allowing newbies to be brainwashed by these Transhumanist false gods, not wanting to upset funding sources, but it all seems to be smoke and mirrors marketing.

  20. “Every AGI estimate I’ve seen assumes deflationary hardware performances where deflationary performance is required, the latter proceeding more slowly than Moore’s Law.”

    Sorry, forgot to insert “software” in between “deflationary performance”.
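    To put rough numbers on the comparison this sentence is making, here is a toy sketch of compound doubling; the doubling periods are illustrative assumptions (2 years for hardware, per the usual Moore’s Law rule of thumb, and a hypothetical 6 years for software performance), not measured figures:

```python
# Toy comparison: capability that doubles every `doubling_period` years
# grows as 2 ** (years / doubling_period).
def growth(years, doubling_period):
    """Multiplicative gain after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Assumed doubling periods (illustrative only):
hardware_gain = growth(20, 2)   # ~1024x over two decades
software_gain = growth(20, 6)   # ~10x over the same span

print(f"hardware: {hardware_gain:.0f}x, software: {software_gain:.1f}x")
```

    The point of the sketch is only that a modestly slower doubling period compounds into a gap of two orders of magnitude over twenty years.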

    Basically, the nanotech world uses STMs and AFMs; SPM-like instruments. Drexler’s 1992 Nanosystems doesn’t (AFAIK) utilize these at all. It hypothesizes an industrial technology of pure diamondoids (strong covalently bonded lattices, for some reason also encompassing weaker carbon allotropes). Maybe if our SPM instruments really improve, or our existing instruments gain the ability (somehow) to metabolize all necessary MNT mechanosynthetic reactions and we run the SPMs for a few centuries or millennia (because our present SPMs can’t “handle” defect reactions quickly), it will happen. But that is so far away from the 21st century that the existing nanotech community sees fit to ignore Nanosystems. “Nanosystems with SPMs” would sell millions.

    In a similar fashion, someone postulated a machine mind way back when, before we knew enough about brain sciences to rule out simple info transmission as a “consciousness marker”. The real brain sciences community has made and is making enormous advances. The Strong AI community has gone off on its own tangent, completely unconnected to the dynamically evolving brain sciences community, for quite some time. Gullible young adults don’t know the difference, don’t realize software does nothing but flip switches. This is the part that pisses me off.

  21. So Phillip, what is your position on mind, body and brain, their existence or otherwise?
    At the moment, you are making a particular point with regards to the brain and consciousness, without apparently proffering any evidence to back it up. I freely admit that uploading is iffy, but you seem to be dismissing it on the back of some near certainty, and I am uncertain what that certainty is based upon.

    Also, you seem to be dismissive of just about everything, from capitalism to communism via religion, so what on earth is your personal viewpoint?

  22. To all – I’m sorry that my spam filter seems unaccountably to have taken a dislike to just about everyone. I’m sure it’s not sentient, but that doesn’t mean I understand its rules!

  23. More often than not, positions of the form ‘X will never be possible’ are hard to maintain, especially when the definition of X is compatible with a large parameter space, such that you either have to rule out every single possible manifestation of X, or establish some fundamental aspect of X out of which the impossibility arises. As far as I know, brain sciences have not established that AGI (which is in this case X) contains some fundamental aspect which makes it impossible. I don’t care whether or not a brain is not a computer at any given level (i.e. switches) you care to show me. What I do care about is whether or not the brain (or whatever is “generating consciousness”) is not a computer at ANY level, however basic that may be (i.e., some level of physics which can be abstracted away as a formal system).

    So in summary, I disagree with “AGI is not possible”. Until strong evidence to substantiate such a sweeping negative arises, I’ll consider it possible (in principle), and ultimately something to be resolved (or not) empirically.

  24. “As far as i know, brain sciences have not established that AGI (which is in this case X) contains some fundamental aspect which makes it impossible.”

    I would read mainstream journals (the free preprints, personally) if I were interested in discerning the seat of consciousness (which I’m not really), like the Biophysics Journal referenced in this story:
    This hypothesis suggests mind is solitons of sound on the cusp of some sort of phase-change brain divide. Whether or not this is true, it is a rational hypothesis based on what we *know* of human brains and minds. The *idea* that a physical system so different, and utilizing a far smaller library of physical processes (computers are just switches switched by electrons), can actuate a mind is absurd. Is it necessary to engage children with teddy bears and deprogram them too?! I’d never claim Fundamental Plant Biology research could yield plants that may one day play Mozart either.

    “Also, you seem to be dismissive of just about everything, from capitalism to communism via religion, so what on earth is your personal viewpoint?”

    I never dismissed capitalism. All religions seem openly hostile to human technological progress (because knowledge and engineering are attributes of a god) and thus opposed to human quality-of-living gains, generally. Price is necessary for a consumer, business owner, corporate overlord, or public official; they all need accurate prices, which communism has no rapid mechanism to transmit.

  25. No, I’m afraid you’ve lost us again, leaping from cutting-edge unconfirmed hypotheses to discounting all possibility of computer representation of brains.

    Ah ha, I had assumed neocapitalism was capitalism – how many varieties do you count? I also wonder why you think that religion is openly hostile to human technological progress, and how you relate this to the involvement of many religious people in the scientific discoveries of the 18th and 19th centuries. Not to mention all the nature-watching parsons who spent their copious free time examining the world.

Comments are closed.