Thinking about the hardware of thinking: Can disruptive technologies help us achieve uploading?
November 30, 2010 by Suzanne Gildert
As we begin to run larger and more brain-like emulations, will our current methods of simulating neural networks on general-purpose silicon processors be enough, even in principle? If we wish to run computations faster and more efficiently, we may need to consider whether the design of the hardware we all take for granted is optimal.
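To make the scale of the problem concrete, here is a minimal sketch of what "simulating a neural network on a general-purpose processor" means: a toy rate-based model stepped forward in time, one multiply-add at a time on a CPU. This is an illustrative assumption on my part, not the emulation software discussed in the talk; every loop iteration below is work the hardware must serialize, which is exactly the cost that specialized co-designed hardware aims to avoid.

```python
import math

def simulate_rate_network(weights, inputs, steps=100, dt=0.1, tau=1.0):
    """Toy rate-based neural network, integrated on a general-purpose CPU.

    Each neuron's activity relaxes toward a sigmoid of its weighted input:
        tau * da_i/dt = -a_i + sigmoid(sum_j w[i][j] * a_j + I_i)
    A hypothetical example for illustration only.
    """
    n = len(inputs)
    a = [0.0] * n  # neuron activities
    for _ in range(steps):
        new_a = []
        for i in range(n):
            drive = sum(weights[i][j] * a[j] for j in range(n)) + inputs[i]
            target = 1.0 / (1.0 + math.exp(-drive))  # sigmoid nonlinearity
            new_a.append(a[i] + dt / tau * (target - a[i]))
        a = new_a
    return a

# Two mutually inhibiting neurons: the one with the stronger input wins.
w = [[0.0, -2.0],
     [-2.0, 0.0]]
print(simulate_rate_network(w, [1.5, 0.5]))
```

Even this two-neuron toy makes the scaling issue visible: each time step costs O(n²) sequential operations, whereas a brain-like substrate updates all connections in parallel.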
In a presentation (at Teleplace, produced online by teleXLR8 on November 28, part of the Advancing Substrate Independent Minds (ASIM) series), I discussed the recent return to a focus on co-design — designing specialized software algorithms running on specialized hardware — and how this approach may help us create much more powerful applications in the future.
As an example, I discussed some possible ways of running AI algorithms on novel forms of computer hardware, such as superconducting quantum computing processors. These behave entirely differently from our current silicon chips, and they help emphasize just how important disruptive technologies may be in our attempts to build intelligent machines.
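As a rough illustration of how such hardware is programmed differently: superconducting quantum annealing processors are built to find low-energy states of an Ising energy function over spins, so an AI problem must first be cast in that form. The sketch below (my own toy example, not from the talk) defines such an energy function and finds its ground state by classical enumeration; the annealing hardware's job would be to find that minimum physically, without enumerating states.

```python
from itertools import product

def ising_energy(h, J, s):
    """Energy of spin configuration s under local fields h and couplings J.

    Quantum annealing processors are designed to settle into low-energy
    states of exactly this kind of function; here we enumerate all
    configurations classically for illustration.
    """
    e = sum(h[i] * s[i] for i in range(len(s)))
    e += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return e

# Toy problem: three spins with antiferromagnetic couplings (each pair
# prefers opposite values) -- a frustrated triangle, so no configuration
# can satisfy all three pairs at once.
h = [0.0, 0.0, 0.0]
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}

best = min(product([-1, 1], repeat=3), key=lambda s: ising_energy(h, J, s))
print(best, ising_energy(h, J, best))
```

The point of the example is the change in programming model: instead of writing step-by-step instructions, one encodes the problem's constraints into `h` and `J` and lets the hardware's physics do the search.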
teleXLR8 is a telepresence community for cultural acceleration. It “produces online events, featuring first-class content and speakers, with the best system for e-learning and collaboration in an online 3D environment.”