Ok, two weeks since my last post… It’s cool though… Finals are over, I’ve started working full time, and (drumroll!) I’ve been commissioned to create a feature-length documentary.
Thanks to trusty ol’ Moore’s Law (and a very special person) I am now the proud owner of a professional High Definition video camera. I’m blogging it and living it! This is where the REAL fun starts. By next year Moore’s Law will allow me to upgrade my editing workstation.
So…what about this documentary? It’s about these two street-racing Laotian brothers who are struggling to beat the competition.
How do I connect this to the Singularity? Well, every time I drive I think of how cars are like transport pods in our cells. Our cells use transport vesicles to ferry proteins across the cell membrane in a process called exocytosis. We use vehicles to transport ourselves from building to building in a process called commuting.
And what else could better show that we are on our way toward a merger between man and machine? Are we not cyborgs when driving our cars? Where do we end and where does the machine begin? Most would say where our nerves end, but I don’t think so. We ‘virtually’ graft our nerves onto the car, sensing when to shift and when to press the gas, and taking in information from the gauges. We essentially become one with the mechanical beast.
And so far it’s crude; we just grab hold of the wheel and look at gauges. Due to the exponential nature of technological development, that won’t last much longer. Research and practical testing are being done right now to tie directly into our nervous systems. Just as the machine age made way for the computer age, the ‘grabbing the steering wheel and physically typing’ age will give way to much more intricate, intelligent, and intimate forms of human interface with machines.
And I was going to relax, you know – take it easy this summer… yeah right! This is me we’re talking about… which reminds me, I’ve been itching to come up with another podcast. Look for it soon.
Computer chips designed to mimic how the brain works could shed light on our cognitive abilities.
By Emily Singer
Unlike most neuroscience labs, Kwabena Boahen’s lab at Stanford University is spotless–no scattered pipettes or jumbled arrays of chemical bottles. Instead, a lone circuit board, housing a very special chip, sits on a bare lab bench. The transistors in a typical computer chip are arranged for maximal processing speed; but this microprocessor features clusters of tiny transistors designed to mimic the electrical properties of neurons. The transistors are arranged to behave like cells in the retina, the cochlea, or even the hippocampus, a spot deep in the brain that sorts and stores information.
Boahen is part of a small but growing community of scientists and engineers using a process they call “neuromorphing” to build complicated electronic circuits meant to model the behavior of neural circuits. Their work takes advantage of anatomical diagrams of different parts of the brain generated through years of painstaking animal studies by neuroscientists around the world. The hope is that hardwired models of the brain will yield insights difficult to glean through existing experimental techniques. “Brains do things in technically and conceptually novel ways which we should be able to explore,” says Rodney Douglas, a professor at the Institute of Neuroinformatics, in Zurich. “They can solve rather effortlessly issues which we cannot yet resolve with the largest and most modern digital machines. One of the ways to explore this is to develop hardware that goes in the same direction.”
Among the most intriguing aspects of the brain is its capacity to form memories–something that has fascinated neuroscientists for decades. That capacity appears to be rooted in the hippocampus, damage to which can lead to amnesia.
Extensive studies of neurons in the hippocampus and other parts of the brain have shed some light on how neural behavior gives rise to memories. Neurons encode information in the form of electrical pulses that can be transmitted to other neurons. When two connected neurons repeatedly fire in close succession, the connection between them is strengthened, so that the firing of the first helps trigger the firing of the second. As this process–known to neuroscientists as Hebbian learning–occurs in multiple neighboring cells, it creates webs of connections between different neurons, encoding and linking information.
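To make the Hebbian idea concrete, here is a minimal sketch in Python (my own illustration, not anything from the article or the chip itself; the time window, learning rate, and weight cap are made-up parameters): if one cell fires shortly before another, the connection from the first to the second gets a little stronger.

```python
import numpy as np

# Hypothetical Hebbian-style update: if cell i fires shortly before cell j,
# the connection weight w[i, j] is strengthened ("fire together, wire together").
# All parameters here (window, rate, w_max) are invented for the sketch.

def hebbian_update(w, spike_times, window=0.02, rate=0.05, w_max=1.0):
    """Strengthen w[i, j] whenever cell i fired within `window` seconds before cell j."""
    n = len(spike_times)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dt = spike_times[j] - spike_times[i]
            if 0 < dt <= window:                      # i fired just before j
                w[i, j] = min(w_max, w[i, j] + rate)  # nudge the connection upward
    return w

# Three cells: cell 0 fires at t = 0.000 s, cell 1 at 0.005 s, cell 2 at 0.100 s.
w = np.zeros((3, 3))
w = hebbian_update(w, [0.000, 0.005, 0.100])
print(w)  # only w[0, 1] is strengthened; the other pairs fall outside the window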
To better understand how this works, Boahen and graduate student John Arthur developed a chip based on a layer of the hippocampus known as CA3. Sandwiched between two other cellular layers, one that receives input from the cortex and one that sends information back out again, CA3 is thought to be where memory actually happens–where information is stored and linked. Pointing to a diagram of the chip’s architecture, Boahen explains that each model cell on the chip is made up of a cluster of transistors designed to mimic the electrical activity of a neuron. The silicon cells are arranged in a 32-by-32 array, and each of them is programmed to connect weakly to 21 neighboring cells. To start with, the connections between the cells are turned off, mimicking “silent synapses.” (A synapse is a junction between neurons; a silent synapse is one where, if a given neural cell fires, it transmits a slight change in electrical activity to its neighbors, but not enough to trigger the propagation of an electrical signal.)
However, Boahen explains, the chip has the ability to change the strength of these connections, imitating what happens with neurons during Hebbian learning. The silicon cells monitor when their neighbors fire. If a cell fires just before its neighbor does, then the programmed connection between the two cells is strengthened. “We want to capture the associative memory function, so we want connections between the cells to turn on or off depending on whether cells are activated together,” Boahen says.
Sitting at his desk with the circuit board and a laptop in front of him, Arthur, who is now a postdoc in Boahen’s lab, demonstrates the chip’s ability to remember. First he sends electrical signals to the chip from the laptop, which also records the output of the chip’s silicon neurons. He repeatedly triggers activity only in neurons that form a U shape on the array; his laptop screen shows flashes of light that reproduce that pattern, representing the activity in the chip. Each neuron fires at a slightly different time, constantly monitoring the firing of its 21 connected neighbors. Gradually, connections between the neurons that make up the U are strengthened: the chip has “learned” the pattern. When Arthur then triggers activity in just the upper left corner of the U, flashes of light on the screen spontaneously re-create the rest of the pattern, as electrical activity spreads among silicon neurons on the chip. The chip has effectively recalled the rest of the U.
The Stanford researchers plan to add circuitry to the chip so that it will also model a layer of the hippocampus known as the dentate, which receives signals from the cortex and sends them to CA3. They hope this model will be able to lay down memories that are even more complex. “We want to be able to give it an A and have it recall the whole alphabet,” says Boahen.
The team is also in the process of developing other neuromorphic chips. Its latest project–and the most ambitious neuromorphic effort anywhere to date–is a model of the cortex, the most recently evolved part of our brain. The intricate structure of the cortex allows us to perform complex computational feats, such as understanding language, recognizing faces, and planning for the future. The model’s first-generation design will consist of a circuit board with 16 chips, each containing a 256-by-256 array of silicon neurons.
By creating chips that are able to mimic the cortex, the hippocampus, and the retina, Boahen hopes to better comprehend the brain and, eventually, to design neural prosthetics, such as an artificial retina. “Kwabena is one of the few people straddling two perspectives: those who want to engineer better chips and those who want to understand the brain,” says Terry Sejnowski, a computational neuroscientist at the Salk Institute in La Jolla, CA. “I think he’s one of those people who is ahead of his time.”
Emily Singer is the biotechnology and life sciences editor of Technology Review.
The first part of this post does not contain an opinion at all, because I’m going to talk about facts. The second part, however, does contain my very own, artificially created opinions… guaranteed to offend someone!
Evolutionists claim that life originated by natural processes, when one organism changed into another solely by chance.
Hello? We evolutionists, or ‘those who don’t fight scientific validity for the sake of the continuing infection of our own religious meme-virus’, don’t claim that life originated ‘when one organism changed into another solely by chance’.
We claim that life originated when the very FIRST organism began solely by chance!
Of course life originated by natural processes (as described by natural science). Every process that was, is, and ever will be, is natural! If something isn’t natural, then it cannot possibly exist. If you exist then you must exist in nature, even God. To say that we did not arise from natural causes means that Skjaerlund is stating that his own God isn’t natural! And anyway… a scientist must put forward, consider, study, and suggest reasons for our origin and the origin of the Universe in order to test them. They must hypothesize. No real scientist can claim that life came from this or that, just that it is here, wasn’t here before, so there must have been an event where ‘life’ began. No one was standing there when it happened (except for Q and Captain Picard, of course).
What Skjaerlund says is that evolution doesn’t conform to his subjective religious-meme view of what the order of the Universe ‘should’ be, and he pays no attention to the scientifically valid evidence that the Universe doesn’t revolve around him or his ideas.
Now, let’s play a little game called semantics.
natural:
1. existing in or formed by nature (opposed to artificial): a natural bridge.
2. based on the state of things in nature; constituted by nature: Growth is a natural process.
3. of or pertaining to nature or the universe: natural beauty.
4. of, pertaining to, or occupied with the study of natural science: conducting natural experiments.
create:
1. to cause to come into being, as something unique that would not naturally evolve or that is not made by ordinary processes.
2. to evolve from one’s own thought or imagination, as a work of art or an invention.
The term ‘natural’ is mainly used to describe something not made by humans. If it’s made by humans, it’s called ‘artificial’.
Life absolutely MUST have come from natural processes because humans did not create the first working cell. So…what the hell is Skjaerlund talking about????
He implies that his God is not natural. THAT, my dear reader with a beautiful, awe-inspiring, logical mind, IS A SCIENTIFICALLY VALID EXPLANATION.
And the ONLY one Skjaerlund makes.
GOD EXISTS SOLELY WITHIN HIS IMAGINATION, A MAN-CREATED SYMBOL, AND WITHIN THE DEFINITION OF THE WORD ‘ARTIFICIAL’, MEANING “to evolve from one’s own thought or imagination, as a work of art or an invention.”
GOD EVOLVED from Skjaerlund’s very own mind! Evolution at work!
___MUAHAHA___U CAN’T ARGUE WITH THEM!____
I am a great evolutionist—->because—>I am a great creationist—->
—>I CREATE the best ideas!
I can’t STAND the creationism meme and it should be damned to hell!
If you are still reading this you are awesome! Thanks for bearing with me and allowing me to vent. If anything doesn’t make sense then by all means click on ‘no comments’ below to leave a comment or ask a question. Peace
The school year is coming to a close for thousands of college students. Are they prepared for the Singularity?
Considering the dizzying pace at which technology is advancing, there could be a course developed for the average student that outlines the current research regarding possible (and probable) future technological advances. The students could be given the chance to discuss the possible relevance of each technology to the future. This would give graduates a scope with which to peer into the future. Do you think this might enable them to make better decisions regarding the future? Why or why not?
Saw Spiderman the other night. Couldn’t believe the effects. Incredible. The Technological Singularity at work! There is just no way they could have done that even 5 years ago. The sheer amount and quality of the effects are just as amazing as Spiderman himself!
Unfortunately, my gripe about the past two Spiderman movies applies to this one too… not enough spidey screen time! At least Spidey 3 was less of a soap opera than #2. I just don’t remember Peter Parker being such an emotional wimp. Sure, he was nerdy and anxious about MJ all the time, but the kid playing Peter Parker seems like he’s on the verge of crying all the time. I could be wrong; it’s been a looooong time since I’ve read a Spiderman comic book. I also remember Spiderman being funny. I used to crack up as a kid when he made fun of enemies, but the movies barely do that at all.
Anyway, if you like these new Spiderman movies this one is the best so far. It’s closer to the comic books than the previous two.
Patricia Cohen reports for The New York Times on May 5, 2007:
A Split Emerges as Conservatives Discuss Darwin
Excerpt:
[The argument within the Republican Party regarding evolution] exposes tensions within the Republicans’ “big tent,”… [this] could be seen Thursday night when the party’s 10 candidates for president were asked during their first debate whether they believed in evolution. Three — Senator Sam Brownback of Kansas; Mike Huckabee, former governor of Arkansas; and Representative Tom Tancredo of Colorado — indicated they did not.
Ok, three PRESIDENTIAL candidates don’t ‘believe’ in evolution?
The Singularity may indeed be necessary, because although humans are evolving, our culture isn’t evolving at an exponential rate. Moments like this show that human culture evolves at a snail’s pace compared to our scientific and technological advances. Not only are there people who will deny the empirical facts underlying a cornerstone of our understanding of life, but these people have an opportunity to obtain the most powerful position of authority on the planet.
If we can’t evolve fast enough, our technology will.
Here are the first five criteria. For the remaining fifteen criteria, visit Chris’ post here>>>
Property 1: “Irregular” patterns of brain activity
Electrical oscillations occurring between 20 and 70 times per second are common in awake humans, but epilepsy, sleep, anesthesia, and some forms of brain damage are accompanied by the dominance of highly regular oscillations slower than 4 Hz. Others have argued that the conscious EEG is characterized by a particular noise signature known as “pink” or “1/f” noise. Although the purpose of these peculiar electrical oscillations is poorly understood, they appear to be a consistent feature of primate consciousness.
Test: electroencephalography (EEG)
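For the curious, here is a rough sketch of how one might check a signal for a “1/f” signature (entirely my own illustration with a synthetic signal and arbitrary sampling rate and fit range; real EEG analysis involves far more than this): estimate the power spectrum and fit its slope on log-log axes, where pink noise shows up as a slope near -1.

```python
import numpy as np
from scipy.signal import welch

# Sketch: test a signal for a "pink" (1/f) power spectrum. The signal below is
# synthetic; with real EEG you would substitute the recorded trace. Sampling
# rate and fit band are arbitrary choices for the illustration.

fs, n = 250.0, 60 * 250            # 60 s of "EEG" sampled at 250 Hz

# Build synthetic pink noise: shape white noise so amplitude falls as 1/sqrt(f),
# giving power proportional to 1/f.
rng = np.random.default_rng(0)
white = rng.standard_normal(n)
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1 / np.sqrt(freqs[1:])
signal = np.fft.irfft(spectrum * scale, n=n)

# Estimate the power spectral density and fit a line in log-log coordinates.
f, psd = welch(signal, fs=fs, nperseg=1024)
band = (f >= 1) & (f <= 70)                        # fit between 1 and 70 Hz
slope, _ = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)
print(f"log-log spectral slope: {slope:.2f}")      # near -1 for 1/f (pink) noise
```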
Property 2: Connectivity “loops” between thalamus and cortex
While an entire hemisphere can be removed without apparent loss of consciousness in human subjects, bilateral damage to specific regions of sensory cortex seems to remove those senses from consciousness (such as in blindsight, where subjects lose the ability to consciously report visual stimulation, but can nonetheless act reflexively or non-consciously in response to such stimulation).
However, damage to the thalamus (and associated brainstem regions) results in a complete loss of consciousness. Seth et al. argue that the thalamus maintains the state of consciousness through its interactions with neocortex, and the neocortex is responsible for providing the contents of consciousness.
CHAMPAIGN, Ill., May 1 (UPI) — A U.S.-Singapore research team has found the aging human brain reflects cultural differences in the way it processes visual information.
The study, along with another published by the same team last year, is the first to demonstrate culture can alter the brain’s perceptive mechanisms.
The new finding is the result of collaboration between University of Illinois psychology professor Denise Park and Michael Chee of the Cognitive Neuroscience Laboratory in Singapore.
The two scientists and their colleagues conducted an array of cognitive tests at their facilities in the U.S. and Singapore, using functional magnetic resonance imaging scanners. Their analysis of 37 young and old East Asians, and 38 young and old Westerners found significant cultural differences in how the older adults’ brains responded to visual stimuli.
“These are the first studies to show culture is sculpting the brain,” said Park, principal investigator of the study. “The effect is seen not so much in structural changes, but at the level of perception.”
The research is detailed in the May issue of the journal Cognitive, Affective & Behavioral Neuroscience. Park also will present the finding during the May 24-27 meeting in Washington of the Association for Psychological Science.
Copyright 2007 by United Press International. All Rights Reserved.
Right now the rain is pouring gently outside. Birds have decided to nest and raise their young right in front of my back door, approximately 5 feet away. The momma bird stares hard at me as I walk out the door with a steak readied for the grill. She is sitting on her eggs, unmoving, caressing them, keeping them warm, and protecting them. Lucky for me they have not hatched or she would have probably pecked me to death.
She is programmed to do what she does to ensure the survival of her offspring. Does she feel love? Is that the emotion that drives her to protect them? What matters to her is that they survive, that the message she brings forth within her DNA is replicated, and has the best chance of replicating again.
What about knowledge? What about the messages encoded in our brains? Are they ‘programmed’ to survive and evolve? Each parent donates DNA and knowledge to a person. But unlike DNA, knowledge can come from anyone. It is superior to DNA in that regard. It is more like a virus. A single thought – the basic unit of cultural knowledge – is called a ‘meme’.
Memes have been called ‘viruses of the mind’ because they are learned through repetition and use their host to replicate into the minds of others.
DNA is not a virus; it’s an encoded body of information that produces a host rather than infecting one. It’s a program that is made to run given the correct environment and circumstances.
Is our collective knowledge a form of software DNA? Is our job to replicate it into the next host (i.e. computers) or is it to become the next host, such as artificial intelligence building itself?
Blogging the Singularity Bloggers:
Chris Williamson: Filmmaker, science enthusiast, and futurist concerned with the accelerating nature of technological growth and where it's headed. He is currently studying for his MFA in Film Production.
Frank Whittemore: An IT professional since 1961, so the accelerating change of technology is not news to him, but the wonder will never cease! Be sure to check out Frank's blog about Life Extension!