Ray Kurzweil’s new book The Singularity is Nearer is now available for pre-order. The book will be released June 2024.
Please visit the official book website to learn more + click through to the book’s listings at fine booksellers.
book website :: visit
Enjoy !
Optimism exists on a continuum between confidence and hope. Let me take these in order.
I am confident that the acceleration and expanding purview of information technology will solve within twenty years the problems that now preoccupy us.
Consider energy. We are awash in energy (10,000 times more than required to meet all our needs falls on Earth), but we are not very good at capturing it. That will change with the full nanotechnology-based assembly of macro objects at the nano scale, controlled by massively parallel information processes, which will be feasible within twenty years. Even though our energy needs are projected to triple within that time, we’ll capture that .0003 of the sunlight needed to meet our energy needs with no use of fossil fuels, using extremely inexpensive, highly efficient, lightweight, nano-engineered solar panels, and we’ll store the energy in highly distributed (and therefore safe) nanotechnology-based fuel cells. Solar power now provides 1 part in 1,000 of our needs, but that percentage is doubling every two years, which means multiplying by 1,000 in twenty years.
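Kurzweil’s doubling arithmetic is easy to check (a minimal sketch; the starting share and doubling period are the figures from the text):

```python
# Back-of-envelope check of the doubling claim: solar supplies
# 1 part in 1,000 of demand and doubles every two years.
share = 1 / 1000          # current fraction of energy needs met by solar
doubling_period = 2       # years per doubling
years = 20

doublings = years // doubling_period   # 10 doublings in twenty years
growth = 2 ** doublings                # 1024, i.e. roughly 1,000x
final_share = share * growth           # ~1.02: solar covers all demand

print(doublings, growth, round(final_share, 2))
```

Ten doublings give a factor of 1,024, which is the "multiplying by 1,000" in the essay.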
Almost all the discussions I’ve seen about energy and its consequences (such as global warming) fail to consider the ability of future nanotechnology-based solutions to solve this problem. This development will be motivated not just by concern for the environment but also by the $2 trillion we spend annually on energy. This is already a major area of venture funding.
Consider health. As of just recently, we have the tools to reprogram biology. This is also at an early stage but is progressing through the same exponential growth of information technology, which we see in every aspect of biological progress. The amount of genetic data we have sequenced has doubled every year, and the price per base pair has come down commensurately. The first genome cost a billion dollars. The National Institutes of Health is now starting a project to collect a million genomes at $1,000 apiece. We can turn genes off with RNA interference, add new genes (to adults) with new reliable forms of gene therapy, and turn on and off proteins and enzymes at critical stages of disease progression. We are gaining the means to model, simulate, and reprogram disease and aging processes as information processes. In ten years, these technologies will be 1,000 times more powerful than they are today, and it will be a very different world, in terms of our ability to turn off disease and aging.
Consider prosperity. The 50 percent deflation rate inherent in information technology and its growing purview is causing the decline of poverty. The poverty rate in Asia, according to the World Bank, declined by 50 percent over the past ten years due to information technology and will, at current rates, decline by 90 percent in the next ten years. All areas of the world are affected, including Africa, which is now undergoing a rapid invasion of the Internet. Even sub-Saharan Africa has had an average annual 5 percent economic growth rate in the last few years.
OK, so what am I optimistic (but not necessarily confident) about?
All of these technologies have existential downsides. We are already living with enough thermonuclear weapons to destroy all mammalian life on this planet, weapons that are still on a hair-trigger. Remember these? They’re still there, and they represent an existential threat.
We have a new existential threat, which is the ability of a destructively minded group or individual to reprogram a biological virus to be more deadly, more communicable, or (most daunting of all) more stealthy (that is, having a longer incubation period, so that the early spread is undetected). The good news is that we have the tools to set up a rapid-response system like the one we have for software viruses. It took us five years to sequence HIV, but we can now sequence a virus in a day or two. RNA interference can turn viruses off, since viruses are genes, albeit pathological ones. Sun Microsystems founder Bill Joy and I have proposed setting up a rapid-response system that could detect a new virus, sequence it, design an RNAi (RNA-mediated interference) medication, or a safe antigen-based vaccine, and gear up production in a matter of days. The methods exist, but as yet a working rapid-response system does not. We need to put one in place quickly.
So I’m optimistic that we will make it through without suffering an existential catastrophe. It would be helpful if we gave the two aforementioned existential threats a higher priority.
And, finally, what am I hopeful, but not necessarily optimistic,about?
Who would have thought right after September 11, 2001, that we would go five years without another destructive incident at that or greater scale? That seemed unlikely at the time, but despite all the subsequent turmoil in the world, it has happened. I am hopeful that this respite will continue.
© Ray Kurzweil 2007
the Kurzweil Library
set :: stories on progress
— contents —
~ story
~ about
~ brochure
~ featurette
~ webpages
story |
Plants could soon provide our electricity. In a small way they’re already doing that in research labs and greenhouses at project Plant-e.
Plant-e is a university and commercially sponsored research group at Wageningen Univ. + Research in the Netherlands.
The Plant Microbial Fuel Cell from Plant-e can generate electricity from the natural interaction between plant roots and soil bacteria.
How it happens.
It works by taking advantage of the fact that up to 70 percent of the organic material produced by a plant’s photosynthesis cannot be used by the plant — and is excreted through the roots.
As naturally occurring bacteria around the roots break down this organic residue, electrons are released as a waste product. By placing an electrode close to the bacteria to absorb these electrons, the research team — led by Marjolein Helder PhD — is able to generate electricity.
quote |
name: Marjolein Helder PhD
bio: researcher :: botanist + environmentalist
school: Wageningen Univ. + Research
Solar panels are making more energy per square meter — but we expect to reduce the costs of our system technology in the future. And our system can be used for a variety of applications.
Our tech is making electricity — but also could be used as roof insulation or as a water collector. On a bigger scale it’s possible to produce rice and electricity at the same time, and in that way combine food and energy production.
— Marjolein Helder PhD
Uses for this valuable tech.
Plant Microbial Fuel Cells can be used on many scales. An experimental 15 square meter model can produce enough energy to power a notebook computer.
Currently Plant-e is working on a system for large scale electricity production in existing green areas like wetlands and rice paddy fields.
A first prototype of a green electricity roof has been installed on one building at Wageningen Univ. + Research — and researchers are keeping a close eye on what is growing there. The first field pilots will be started in 2014. The tech was patented in 2007.
After 5 years of lab research, Plant-e is now taking the first steps toward commercializing the technology. In the future, bio-electricity from plants could produce as much as 3.2 watts per square meter of plant growth.
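Taken together with the 15 square meter experimental model mentioned earlier, the projected yield implies notebook-scale power (a rough sketch; the figures are the article’s projections, not measured output):

```python
# Rough sizing of a Plant Microbial Fuel Cell array, using the article's
# projected future yield of 3.2 W per square meter of plant growth.
projected_yield_w_per_m2 = 3.2
area_m2 = 15                      # the experimental model from the article

power_w = projected_yield_w_per_m2 * area_m2   # 48 W, notebook-scale power
print(round(power_w, 1))
```

At today’s lower lab yields the same area produces far less; 48 W is what the projected figure would deliver.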
note: with materials from EuroNews
research |
group: Plant-e
web: home ~ channel
banner: Spark of nature.
Plant-e is making electricity from living plants.
presented by
school: Wageningen Univ. + Research
web: home ~ channel
banner: Exploring nature’s potential to improve quality of life.
watching
featurette |
How plants can create electricity. :: watch
part 1. | Meet Plant-e.
part 2. | A story on Plant-e. — watch
part 3. | Living plants generate electricity. — watch
part 4. | The power of plants. — watch
reading
1. |
school: Wageningen Univ. + Research
story: Dutch Innovation Award for ePlant
read | story
research institutes:
for plant research | home
centre for development innovation | home
event |
school: Yale Univ.
motto: light + truth
event: Yale Innovation Summit
theme: Expanding impact.
season: spring
date: May 29 — May 30
year: 2024
where: New Haven, CT | US
event website | visit
presented by
card :: Yale Univ.
about |
The Yale Innovation Summit highlights entrepreneurial Yale Univ. faculty + students and the investable innovations coming out of Yale labs. The summit offers networking, education, and inspiration to the entire entrepreneurial ecosystem.
letter |
At Yale University we don’t just want to make new things — we want to make things better. We want to offer our education as accessibly as possible, and to as broad a range of students as we can, throughout the world.
— Peter Salovey • PhD
bio: President
school: Yale Univ.
bio: social psychologist
2 tracks |
1. | bio-tech track — commercializing ideas for drug discovery + biological therapeutics
2. | tech track — commercializing ideas from physical sciences + computer sciences: engineering, software, services
showcase |
The Yale Innovation Summit offers a premier opportunity to showcase your breakthrough tech to a large number of interested investors + industry leaders — both through a prominent electronic poster display with interactive screens + through 2 live pitch events.
A brief application and a 5-minute presentation deck are all you need to apply for:
reference
Yale Univ. | home
— channels —
home
innovation + entrepreneurship
watching
— featurette series —
group: Yale Univ.
motto: light + truth
featurette series title: Innovation to Impact
— summary —
The Innovation to Impact featurette series highlights leading Yale Univ. entrepreneurs. They’re turning their ground-breaking research + ideas into businesses that have raised $1+ million — and are poised to have significant impact on people’s lives.
1. |
featurette series title: Innovation to Impact
episode title: Arvinas
— summary —
Meet Craig Crews MD — he’s a professor of molecular, cellular, and developmental biology at Yale Univ. He’s founder of Arvinas. The company is translating innovative protein degradation approaches into novel drugs for the treatment of cancer and other diseases.
— reference —
Arvinas | home
2. |
featurette series title: Innovation to Impact
episode title: NextCure
— summary —
Meet Lieping Chen MD + PhD — he’s a professor in cancer research + professor of immuno-biology, dermatology, and medicine. His specialty is medical oncology. He’s co-director of the cancer immunology program at Yale Cancer Center.
He’s a pioneer in the field of immuno-oncology. His discoveries have led to life-saving drugs for cancer patients. With the support of the Office of Co-Operative Research at Yale Univ. — Chen launched a start-up called NextCure that’s leading to breakthrough treatments.
— reference —
NextCure | home
3. |
featurette series title: Innovation to Impact
episode title: GestVision
— summary —
Meet Wendy Davis — she’s an MBA from the Yale School of Management at Yale Univ. She’s founder of GestVision. She’s developing a simple urine test to detect the pregnancy disorder called pre-eclampsia, which affects 1 in 12 pregnancies in the United States — and puts tens of thousands of pregnant women and their unborn children at risk of death + serious health complications.
— reference —
GestVision | home
— notes —
* featured sketch by Louis Isadore Kahn
CT = Connecticut • United States
MBA = Master of Business Administration
This visionary speech, which Richard Feynman gave on December 29, 1959, at the annual meeting of the American Physical Society at the California Institute of Technology, helped give birth to the now-exploding field of nanotechnology.
I imagine experimental physicists must often look with envy at men like Kamerlingh Onnes, who discovered a field like low temperature, which seems to be bottomless and in which one can go down and down.
Such a man is then a leader and has some temporary monopoly in a scientific adventure. Percy Bridgman, in designing a way to obtain higher pressures, opened up another new field and was able to move into it and to lead us all along. The development of ever higher vacuum was a continuing development of the same kind.
I would like to describe a field, in which little has been done, but in which an enormous amount can be done in principle. This field is not quite the same as the others in that it will not tell us much of fundamental physics (in the sense of, “What are the strange particles?”) but it is more like solid-state physics in the sense that it might tell us much of great interest about the strange phenomena that occur in complex situations. Furthermore, a point that is most important is that it would have an enormous number of technical applications.
What I want to talk about is the problem of manipulating and controlling things on a small scale.
As soon as I mention this, people tell me about miniaturization, and how far it has progressed today. They tell me about electric motors that are the size of the nail on your small finger. And there is a device on the market, they tell me, by which you can write the Lord’s Prayer on the head of a pin. But that’s nothing; that’s the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction.
Why cannot we write the entire 24 volumes of the Encyclopedia Brittanica on the head of a pin?
Let’s see what would be involved. The head of a pin is a sixteenth of an inch across. If you magnify it by 25,000 diameters, the area of the head of the pin is then equal to the area of all the pages of the Encyclopaedia Brittanica. Therefore, all it is necessary to do is to reduce in size all the writing in the Encyclopaedia by 25,000 times. Is that possible? The resolving power of the eye is about 1/120 of an inch–that is roughly the diameter of one of the little dots on the fine half-tone reproductions in the Encyclopaedia. This, when you demagnify it by 25,000 times, is still 80 angstroms in diameter–32 atoms across, in an ordinary metal. In other words, one of those dots still would contain in its area 1,000 atoms. So, each dot can easily be adjusted in size as required by the photoengraving, and there is no question that there is enough room on the head of a pin to put all of the Encyclopaedia Brittanica.
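Feynman’s dot arithmetic can be re-run in a few lines (a sketch; the 2.5-angstrom atomic spacing is an assumed round figure for an ordinary metal):

```python
import math

# Checking the demagnified-dot estimate from the talk.
inch_in_angstroms = 2.54e8        # 1 inch = 2.54 cm = 2.54e8 angstroms
eye_resolution_in = 1 / 120       # resolving power of the eye, in inches
demag = 25_000                    # the demagnification factor from the talk

dot_in = eye_resolution_in / demag            # demagnified dot diameter, inches
dot_angstroms = dot_in * inch_in_angstroms    # ~85 ("still 80 angstroms")

atom_spacing = 2.5                            # angstroms between metal atoms
atoms_across = dot_angstroms / atom_spacing   # ~34 ("32 atoms across")
atoms_in_area = math.pi * (atoms_across / 2) ** 2   # ~900 ("1,000 atoms")

print(round(dot_angstroms), round(atoms_across), round(atoms_in_area))
```

The computed values land within rounding distance of the figures Feynman quotes.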
Furthermore, it can be read if it is so written. Let’s imagine that it is written in raised letters of metal; that is, where the black is in the Encyclopedia, we have raised letters of metal that are actually 1/25,000 of their ordinary size. How would we read it?
If we had something written in such a way, we could read it using techniques in common use today. (They will undoubtedly find a better way when we do actually have it written, but to make my point conservatively I shall just take techniques we know today.) We would press the metal into a plastic material and make a mold of it, then peel the plastic off very carefully, evaporate silica into the plastic to get a very thin film, then shadow it by evaporating gold at an angle against the silica so that all the little letters will appear clearly, dissolve the plastic away from the silica film, and then look through it with an electron microscope!
There is no question that if the thing were reduced by 25,000 times in the form of raised letters on the pin, it would be easy for us to read it today. Furthermore; there is no question that we would find it easy to make copies of the master; we would just need to press the same metal plate again into plastic and we would have another copy.
The next question is: How do we write it? We have no standard technique to do this now. But let me argue that it is not as difficult as it first appears to be. We can reverse the lenses of the electron microscope in order to demagnify as well as magnify. A source of ions, sent through the microscope lenses in reverse, could be focused to a very small spot. We could write with that spot like we write in a TV cathode ray oscilloscope, by going across in lines, and having an adjustment which determines the amount of material which is going to be deposited as we scan in lines.
This method might be very slow because of space charge limitations. There will be more rapid methods. We could first make, perhaps by some photo process, a screen which has holes in it in the form of the letters. Then we would strike an arc behind the holes and draw metallic ions through the holes; then we could again use our system of lenses and make a small image in the form of ions, which would deposit the metal on the pin.
A simpler way might be this (though I am not sure it would work): We take light and, through an optical microscope running backward, we focus it onto a very small photoelectric screen. Then electrons come away from the screen where the light is shining. These electrons are focused down in size by the electron microscope lenses to impinge directly upon the surface of the metal. Will such a beam etch away the metal if it is run long enough? I don’t know. If it doesn’t work for a metal surface, it must be possible to find some surface with which to coat the original pin so that, where the electrons bombard, a change is made which we could recognize later.
There is no intensity problem in these devices–not what you are used to in magnification, where you have to take a few electrons and spread them over a bigger and bigger screen; it is just the opposite. The light which we get from a page is concentrated onto a very small area so it is very intense. The few electrons which come from the photoelectric screen are demagnified down to a very tiny area so that, again, they are very intense. I don’t know why this hasn’t been done yet!
That’s the Encyclopaedia Brittanica on the head of a pin, but let’s consider all the books in the world. The Library of Congress has approximately 9 million volumes; the British Museum Library has 5 million volumes; there are also 5 million volumes in the National Library in France. Undoubtedly there are duplications, so let us say that there are some 24 million volumes of interest in the world.
What would happen if I print all this down at the scale we have been discussing? How much space would it take? It would take, of course, the area of about a million pinheads because, instead of there being just the 24 volumes of the Encyclopaedia, there are 24 million volumes. The million pinheads can be put in a square of a thousand pins on a side, or an area of about 3 square yards. That is to say, the silica replica with the paper-thin backing of plastic, with which we have made the copies, with all this information, is on an area of approximately the size of 35 pages of the Encyclopaedia. That is about half as many pages as there are in this magazine. All of the information which all of mankind has ever recorded in books can be carried around in a pamphlet in your hand–and not written in code, but a simple reproduction of the original pictures, engravings, and everything else on a small scale without loss of resolution.
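The million-pinhead count works out as stated (a sketch; the 1/16-inch pin diameter is the figure from the talk):

```python
# One Encyclopaedia (24 volumes) fits on one pinhead, so 24 million
# volumes need a million pinheads.
volumes_world = 24_000_000
volumes_per_pin = 24
pins = volumes_world // volumes_per_pin       # 1,000,000 pinheads

side_pins = int(pins ** 0.5)                  # 1,000 pins on a side
pin_diameter_in = 1 / 16                      # head of a pin, inches
side_in = side_pins * pin_diameter_in         # 62.5 inches per side

area_sq_in = side_in ** 2                     # ~3,906 square inches
area_sq_yd = area_sq_in / 1296                # ~3 square yards (1296 sq in/sq yd)
print(pins, round(area_sq_yd, 2))
```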
What would our librarian at Caltech say, as she runs all over from one building to another, if I tell her that, ten years from now, all of the information that she is struggling to keep track of–120,000 volumes, stacked from the floor to the ceiling, drawers full of cards, storage rooms full of the older books–can be kept on just one library card! When the University of Brazil, for example, finds that their library is burned, we can send them a copy of every book in our library by striking off a copy from the master plate in a few hours and mailing it in an envelope no bigger or heavier than any other ordinary air mail letter.
Now, the name of this talk is “There is Plenty of Room at the Bottom”–not just “There is Room at the Bottom.” What I have demonstrated is that there is room–that you can decrease the size of things in a practical way. I now want to show that there is plenty of room. I will not now discuss how we are going to do it, but only what is possible in principle–in other words, what is possible according to the laws of physics. I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven’t yet gotten around to it.
Suppose that, instead of trying to reproduce the pictures and all the information directly in its present form, we write only the information content in a code of dots and dashes, or something like that, to represent the various letters. Each letter represents six or seven “bits” of information; that is, you need only about six or seven dots or dashes for each letter. Now, instead of writing everything, as I did before, on the surface of the head of a pin, I am going to use the interior of the material as well.
Let us represent a dot by a small spot of one metal, the next dash, by an adjacent spot of another metal, and so on. Suppose, to be conservative, that a bit of information is going to require a little cube of atoms 5 times 5 times 5–that is 125 atoms. Perhaps we need a hundred and some odd atoms to make sure that the information is not lost through diffusion, or through some other process.
I have estimated how many letters there are in the Encyclopaedia, and I have assumed that each of my 24 million books is as big as an Encyclopaedia volume, and have calculated, then, how many bits of information there are (10^15). For each bit I allow 100 atoms. And it turns out that all of the information that man has carefully accumulated in all the books in the world can be written in this form in a cube of material one two-hundredth of an inch wide–which is the barest piece of dust that can be made out by the human eye. So there is plenty of room at the bottom! Don’t tell me about microfilm!
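The cube estimate checks out under the same assumptions (a sketch; the 2.5-angstrom atomic spacing is an assumed round figure):

```python
# All the world's books as 10^15 bits, at 100 atoms per bit.
bits = 1e15
atoms_per_bit = 100
atoms_needed = bits * atoms_per_bit           # 1e17 atoms in total

atom_spacing = 2.5                            # angstroms (assumed)
side_atoms = atoms_needed ** (1 / 3)          # ~4.6e5 atoms along a cube edge
side_angstroms = side_atoms * atom_spacing
side_inches = side_angstroms / 2.54e8         # ~0.0046 inch, roughly 1/200 inch

print(f"cube edge ~ 1/{round(1 / side_inches)} of an inch")
```

The edge comes out a shade under Feynman’s quoted one two-hundredth of an inch.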
This fact–that enormous amounts of information can be carried in an exceedingly small space–is, of course, well known to the biologists, and resolves the mystery which existed before we understood all this clearly, of how it could be that, in the tiniest cell, all of the information for the organization of a complex creature such as ourselves can be stored. All this information–whether we have brown eyes, or whether we think at all, or that in the embryo the jawbone should first develop with a little hole in the side so that later a nerve can grow through it–all this information is contained in a very tiny fraction of the cell in the form of long-chain DNA molecules in which approximately 50 atoms are used for one bit of information about the cell.
If I have written in a code, with 5 times 5 times 5 atoms to a bit, the question is: How could I read it today? The electron microscope is not quite good enough; with the greatest care and effort, it can only resolve about 10 angstroms. I would like to try and impress upon you, while I am talking about all of these things on a small scale, the importance of improving the electron microscope by a hundred times. It is not impossible; it is not against the laws of diffraction of the electron. The wave length of the electron in such a microscope is only 1/20 of an angstrom. So it should be possible to see the individual atoms. What good would it be to see individual atoms distinctly?
We have friends in other fields–in biology, for instance. We physicists often look at them and say, “You know the reason you fellows are making so little progress?” (Actually I don’t know any field where they are making more rapid progress than they are in biology today.) “You should use more mathematics, like we do.” They could answer us–but they’re polite, so I’ll answer for them: “What you should do in order for us to make more rapid progress is to make the electron microscope 100 times better.”
What are the most central and fundamental problems of biology today? They are questions like: What is the sequence of bases in the DNA? What happens when you have a mutation? How is the base order in the DNA connected to the order of amino acids in the protein? What is the structure of the RNA; is it single-chain or double-chain, and how is it related in its order of bases to the DNA? What is the organization of the microsomes? How are proteins synthesized? Where does the RNA go? How does it sit? Where do the proteins sit? Where do the amino acids go in? In photosynthesis, where is the chlorophyll; how is it arranged; where are the carotenoids involved in this thing? What is the system of the conversion of light into chemical energy?
It is very easy to answer many of these fundamental biological questions; you just look at the thing! You will see the order of bases in the chain; you will see the structure of the microsome. Unfortunately, the present microscope sees at a scale which is just a bit too crude. Make the microscope one hundred times more powerful, and many problems of biology would be made very much easier. I exaggerate, of course, but the biologists would surely be very thankful to you–and they would prefer that to the criticism that they should use more mathematics.
The theory of chemical processes today is based on theoretical physics. In this sense, physics supplies the foundation of chemistry. But chemistry also has analysis. If you have a strange substance and you want to know what it is, you go through a long and complicated process of chemical analysis. You can analyze almost anything today, so I am a little late with my idea. But if the physicists wanted to, they could also dig under the chemists in the problem of chemical analysis. It would be very easy to make an analysis of any complicated chemical substance; all one would have to do would be to look at it and see where the atoms are. The only trouble is that the electron microscope is one hundred times too poor. (Later, I would like to ask the question: Can the physicists do something about the third problem of chemistry–namely, synthesis? Is there a physical way to synthesize any chemical substance?)
The reason the electron microscope is so poor is that the f-value of the lenses is only 1 part to 1,000; you don’t have a big enough numerical aperture. And I know that there are theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so; and therefore the resolving power at the present time is at its theoretical maximum. But in every theorem there are assumptions. Why must the field be symmetrical? I put this out as a challenge: Is there no way to make the electron microscope more powerful?
The biological example of writing information on a small scale has inspired me to think of something that should be possible. Biology is not simply writing information; it is doing something about it. A biological system can be exceedingly small. Many of the cells are very tiny, but they are very active; they manufacture various substances; they walk around; they wiggle; and they do all kinds of marvelous things–all on a very small scale. Also, they store information. Consider the possibility that we too can make a thing very small which does what we want–that we can manufacture an object that maneuvers at that level!
There may even be an economic point to this business of making things very small. Let me remind you of some of the problems of computing machines. In computers we have to store an enormous amount of information. The kind of writing that I was mentioning before, in which I had everything down as a distribution of metal, is permanent. Much more interesting to a computer is a way of writing, erasing, and writing something else. (This is usually because we don’t want to waste the material on which we have just written. Yet if we could write it in a very small space, it wouldn’t make any difference; it could just be thrown away after it was read. It doesn’t cost very much for the material).
I don’t know how to do this on a small scale in a practical way, but I do know that computing machines are very large; they fill rooms. Why can’t we make them very small, make them of little wires, little elements–and by little, I mean little. For instance, the wires should be 10 or 100 atoms in diameter, and the circuits should be a few thousand angstroms across. Everybody who has analyzed the logical theory of computers has come to the conclusion that the possibilities of computers are very interesting–if they could be made to be more complicated by several orders of magnitude. If they had millions of times as many elements, they could make judgments. They would have time to calculate what is the best way to make the calculation that they are about to make. They could select the method of analysis which, from their experience, is better than the one that we would give to them. And in many other ways, they would have new qualitative features.
If I look at your face I immediately recognize that I have seen it before. (Actually, my friends will say I have chosen an unfortunate example here for the subject of this illustration. At least I recognize that it is a man and not an apple.) Yet there is no machine which, with that speed, can take a picture of a face and say even that it is a man; and much less that it is the same man that you showed it before–unless it is exactly the same picture. If the face is changed; if I am closer to the face; if I am further from the face; if the light changes–I recognize it anyway. Now, this little computer I carry in my head is easily able to do that. The computers that we build are not able to do that. The number of elements in this bone box of mine is enormously greater than the number of elements in our “wonderful” computers. But our mechanical computers are too big; the elements in this box are microscopic. I want to make some that are submicroscopic.
If we wanted to make a computer that had all these marvelous extra qualitative abilities, we would have to make it, perhaps, the size of the Pentagon. This has several disadvantages. First, it requires too much material; there may not be enough germanium in the world for all the transistors which would have to be put into this enormous thing. There is also the problem of heat generation and power consumption; TVA would be needed to run the computer. But an even more practical difficulty is that the computer would be limited to a certain speed. Because of its large size, there is finite time required to get the information from one place to another. The information cannot go any faster than the speed of light–so, ultimately, when our computers get faster and faster and more and more elaborate, we will have to make them smaller and smaller.
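The light-speed limit invoked here is easy to quantify (a sketch; the 400-meter Pentagon width and the millimeter chip size are assumed round figures):

```python
# A signal cannot cross the machine faster than light, so physical size
# puts a floor on the time for information to get from one side to the other.
c = 3.0e8                          # speed of light, m/s
pentagon_m = 400                   # rough width of a Pentagon-sized computer
chip_m = 0.001                     # a millimeter-scale computer

delay_big = pentagon_m / c         # ~1.3 microseconds per crossing
delay_small = chip_m / c           # ~3 picoseconds per crossing

print(f"{delay_big * 1e6:.2f} us vs {delay_small * 1e12:.1f} ps")
```

A building-sized machine is limited to roughly megahertz-scale round trips, while a millimeter-scale one could in principle cycle nearly a million times faster, which is Feynman’s point about making computers smaller.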
But there is plenty of room to make them smaller. There is nothing that I can see in the physical laws that says the computer elements cannot be made enormously smaller than they are now. In fact, there may be certain advantages.
How can we make such a device? What kind of manufacturing processes would we use? One possibility we might consider, since we have talked about writing by putting atoms down in a certain arrangement, would be to evaporate the material, then evaporate the insulator next to it. Then, for the next layer, evaporate another position of a wire, another insulator, and so on. So, you simply evaporate until you have a block of stuff which has the elements–coils and condensers, transistors and so on–of exceedingly fine dimensions.
But I would like to discuss, just for amusement, that there are other possibilities. Why can’t we manufacture these small computers somewhat like we manufacture the big ones? Why can’t we drill holes, cut things, solder things, stamp things out, mold different shapes all at an infinitesimal level? What are the limitations as to how small a thing has to be before you can no longer mold it? How many times when you are working on something frustratingly tiny like your wife’s wrist watch, have you said to yourself, “If I could only train an ant to do this!” What I would like to suggest is the possibility of training an ant to train a mite to do this. What are the possibilities of small but movable machines? They may or may not be useful, but they surely would be fun to make.
Consider any machine–for example, an automobile–and ask about the problems of making an infinitesimal machine like it. Suppose, in the particular design of the automobile, we need a certain precision of the parts; we need an accuracy, let’s suppose, of 4/10,000 of an inch. If things are more inaccurate than that in the shape of the cylinder and so on, it isn’t going to work very well. If I make the thing too small, I have to worry about the size of the atoms; I can’t make a circle of “balls” so to speak, if the circle is too small. So, if I make the error, corresponding to 4/10,000 of an inch, correspond to an error of 10 atoms, it turns out that I can reduce the dimensions of an automobile 4,000 times, approximately–so that it is 1 mm. across. Obviously, if you redesign the car so that it would work with a much larger tolerance, which is not at all impossible, then you could make a much smaller device.
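Feynman's arithmetic is easy to check with a short script (the 2.5-angstrom atomic spacing is an assumption on my part; he specifies only a 10-atom error budget):

```python
# Shrink a car until a machining tolerance of 4/10,000 inch
# corresponds to an error of just 10 atoms.
ANGSTROM_PER_INCH = 2.54e8   # 1 inch = 2.54 cm = 2.54e8 angstrom
ATOM_SPACING = 2.5           # rough atomic spacing in angstrom (assumed)

tolerance = 4e-4 * ANGSTROM_PER_INCH        # 4/10,000 inch, in angstrom
scale = tolerance / (10 * ATOM_SPACING)     # linear reduction factor

car_length_mm = 4000.0 / scale              # a ~4 m car after shrinking
print(round(scale))             # 4064, Feynman's "approximately 4,000"
print(round(car_length_mm, 2))  # 0.98, his "1 mm. across"
```

With a coarser atomic spacing the factor shifts somewhat, which is why the lecture hedges with "approximately."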
It is interesting to consider what the problems are in such small machines. Firstly, with parts stressed to the same degree, the forces go as the area you are reducing, so that things like weight and inertia are of relatively no importance. The strength of material, in other words, is very much greater in proportion. The stresses and expansion of the flywheel from centrifugal force, for example, would be the same proportion only if the rotational speed is increased in the same proportion as we decrease the size. On the other hand, the metals that we use have a grain structure, and this would be very annoying at small scale because the material is not homogeneous. Plastics and glass and things of this amorphous nature are very much more homogeneous, and so we would have to make our machines out of such materials.
There are problems associated with the electrical part of the system–with the copper wires and the magnetic parts. The magnetic properties on a very small scale are not the same as on a large scale; there is the “domain” problem involved. A big magnet made of millions of domains can only be made on a small scale with one domain. The electrical equipment won’t simply be scaled down; it has to be redesigned. But I can see no reason why it can’t be redesigned to work again.
Lubrication involves some interesting points. The effective viscosity of oil would be higher and higher in proportion as we went down (and if we increase the speed as much as we can). If we don’t increase the speed so much, and change from oil to kerosene or some other fluid, the problem is not so bad. But actually we may not have to lubricate at all! We have a lot of extra force. Let the bearings run dry; they won’t run hot because the heat escapes away from such a small device very, very rapidly.
This rapid heat loss would prevent the gasoline from exploding, so an internal combustion engine is impossible. Other chemical reactions, liberating energy when cold, can be used. Probably an external supply of electrical power would be most convenient for such small machines.
What would be the utility of such machines? Who knows? Of course, a small automobile would only be useful for the mites to drive around in, and I suppose our Christian interests don’t go that far. However, we did note the possibility of the manufacture of small elements for computers in completely automatic factories, containing lathes and other machine tools at the very small level. The small lathe would not have to be exactly like our big lathe. I leave to your imagination the improvement of the design to take full advantage of the properties of things on a small scale, and in such a way that the fully automatic aspect would be easiest to manage.
A friend of mine (Albert R. Hibbs) suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and “looks” around. (Of course the information has to be fed out.) It finds out which valve is the faulty one and takes a little knife and slices it out. Other small machines might be permanently incorporated in the body to assist some inadequately-functioning organ.
Now comes the interesting question: How do we make such a tiny mechanism? I leave that to you. However, let me suggest one weird possibility. You know, in the atomic energy plants they have materials and machines that they can’t handle directly because they have become radioactive. To unscrew nuts and put on bolts and so on, they have a set of master and slave hands, so that by operating a set of levers here, you control the “hands” there, and can turn them this way and that so you can handle things quite nicely.
Most of these devices are actually made rather simply, in that there is a particular cable, like a marionette string, that goes directly from the controls to the “hands.” But, of course, things also have been made using servo motors, so that the connection between the one thing and the other is electrical rather than mechanical. When you turn the levers, they turn a servo motor, and it changes the electrical currents in the wires, which repositions a motor at the other end.
Now, I want to build much the same device–a master-slave system which operates electrically. But I want the slaves to be made especially carefully by modern large-scale machinists so that they are one-fourth the scale of the “hands” that you ordinarily maneuver. So you have a scheme by which you can do things at one-quarter scale anyway–the little servo motors with little hands play with little nuts and bolts; they drill little holes; they are four times smaller. Aha! So I manufacture a quarter-size lathe; I manufacture quarter-size tools; and I make, at the one-quarter scale, still another set of hands again relatively one-quarter size! This is one-sixteenth size, from my point of view. And after I finish doing this I wire directly from my large-scale system, through transformers perhaps, to the one-sixteenth-size servo motors. Thus I can now manipulate the one-sixteenth size hands.
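The cascade of scales is a simple geometric progression. A sketch, using the 4,000-fold reduction from the automobile estimate earlier as a stopping criterion:

```python
import math

# Each generation of slave "hands" is one-quarter the scale of the one before.
def stage_scale(n):
    """Linear scale of the n-th generation of hands (n = 0 is full size)."""
    return 4 ** -n

scales = [stage_scale(n) for n in range(4)]
print(scales)   # [1, 0.25, 0.0625, 0.015625] -- full, 1/4, 1/16, 1/64

# How many quartering steps to pass the 4,000-fold reduction of the car example?
steps = math.ceil(math.log(4000, 4))
print(steps, 4 ** steps)   # 6 steps reach a 4096-fold reduction
```

As the next paragraph notes, each step need not be a factor of four; larger jumps per stage shorten the cascade.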
Well, you get the principle from there on. It is rather a difficult program, but it is a possibility. You might say that one can go much farther in one step than from one to four. Of course, this has all to be designed very carefully and it is not necessary simply to make it like hands. If you thought of it very carefully, you could probably arrive at a much better system for doing such things.
If you work through a pantograph, even today, you can get much more than a factor of four in even one step. But you can’t work directly through a pantograph which makes a smaller pantograph which then makes a smaller pantograph–because of the looseness of the holes and the irregularities of construction. The end of the pantograph wiggles with a relatively greater irregularity than the irregularity with which you move your hands. In going down this scale, I would find the end of the pantograph on the end of the pantograph on the end of the pantograph shaking so badly that it wasn’t doing anything sensible at all.
At each stage, it is necessary to improve the precision of the apparatus. If, for instance, having made a small lathe with a pantograph, we find its lead screw irregular–more irregular than the large-scale one–we could lap the lead screw against breakable nuts that you can reverse in the usual way back and forth until this lead screw is, at its scale, as accurate as our original lead screws, at our scale.
We can make flats by rubbing unflat surfaces in triplicates together–in three pairs–and the flats then become flatter than the thing you started with. Thus, it is not impossible to improve precision on a small scale by the correct operations. So, when we build this stuff, it is necessary at each step to improve the accuracy of the equipment by working for awhile down there, making accurate lead screws, Johansen blocks, and all the other materials which we use in accurate machine work at the higher level. We have to stop at each level and manufacture all the stuff to go to the next level–a very long and very difficult program. Perhaps you can figure a better way than that to get down to small scale more rapidly.
Yet, after all this, you have just got one little baby lathe four thousand times smaller than usual. But we were thinking of making an enormous computer, which we were going to build by drilling holes on this lathe to make little washers for the computer. How many washers can you manufacture on this one lathe?
When I make my first set of slave “hands” at one-fourth scale, I am going to make ten sets. I make ten sets of “hands,” and I wire them to my original levers so they each do exactly the same thing at the same time in parallel. Now, when I am making my new devices one-quarter again as small, I let each one manufacture ten copies, so that I would have a hundred “hands” at the 1/16th size.
Where am I going to put the million lathes that I am going to have? Why, there is nothing to it; the volume is much less than that of even one full-scale lathe. For instance, if I made a billion little lathes, each 1/4000 of the scale of a regular lathe, there are plenty of materials and space available because in the billion little ones there is less than 2 percent of the materials in one big lathe.
It doesn’t cost anything for materials, you see. So I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on.
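The "less than 2 percent" figure follows from cube-law scaling: assuming material is proportional to volume, a quick check:

```python
# Material (volume) scales as the cube of the linear size.
n_lathes = 1_000_000_000
linear_scale = 1 / 4000

material_fraction = n_lathes * linear_scale ** 3   # relative to one big lathe
print(material_fraction)   # 0.015625 -- about 1.6 percent of one big lathe
```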
As we go down in size, there are a number of interesting problems that arise. All things do not simply scale down in proportion. There is the problem that materials stick together by the molecular (Van der Waals) attractions. It would be like this: After you have made a part and you unscrew the nut from a bolt, it isn’t going to fall down because the gravity isn’t appreciable; it would even be hard to get it off the bolt. It would be like those old movies of a man with his hands full of molasses, trying to get rid of a glass of water. There will be several problems of this nature that we will have to be ready to design for.
But I am not afraid to consider the final question as to whether, ultimately–in the great future–we can arrange the atoms the way we want; the very atoms, all the way down! What would happen if we could arrange the atoms one by one the way we want them (within reason, of course; you can’t put them so that they are chemically unstable, for example).
Up to now, we have been content to dig in the ground to find minerals. We heat them and we do things on a large scale with them, and we hope to get a pure substance with just so much impurity, and so on. But we must always accept some atomic arrangement that nature gives us. We haven’t got anything, say, with a “checkerboard” arrangement, with the impurity atoms exactly arranged 1,000 angstroms apart, or in some other particular pattern.
What could we do with layered structures with just the right layers? What would the properties of materials be if we could really arrange the atoms the way we want them? They would be very interesting to investigate theoretically. I can’t see exactly what would happen, but I can hardly doubt that when we have some control of the arrangement of things on a small scale we will get an enormously greater range of possible properties that substances can have, and of different things that we can do.
Consider, for example, a piece of material in which we make little coils and condensers (or their solid state analogs) 1,000 or 10,000 angstroms in a circuit, one right next to the other, over a large area, with little antennas sticking out at the other end–a whole series of circuits. Is it possible, for example, to emit light from a whole set of antennas, like we emit radio waves from an organized set of antennas to beam the radio programs to Europe? The same thing would be to beam the light out in a definite direction with very high intensity. (Perhaps such a beam is not very useful technically or economically.)
I have thought about some of the problems of building electric circuits on a small scale, and the problem of resistance is serious. If you build a corresponding circuit on a small scale, its natural frequency goes up, since the wave length goes down as the scale; but the skin depth only decreases with the square root of the scale ratio, and so resistive problems are of increasing difficulty. Possibly we can beat resistance through the use of superconductivity if the frequency is not too high, or by other tricks.
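The scaling argument can be made explicit. If every linear dimension shrinks by a factor s, the resonant frequency rises as 1/s, the skin depth shrinks only as sqrt(s), and the AC resistance of the conductors therefore grows as 1/sqrt(s). A sketch of that bookkeeping:

```python
import math

def ac_resistance_ratio(s):
    """Relative AC resistance of a circuit shrunk by linear factor s (0 < s <= 1).

    Wire resistance ~ length / (perimeter * skin_depth)
                    ~ s / (s * sqrt(s)) = s ** -0.5
    """
    return 1 / math.sqrt(s)

for s in (1.0, 1e-2, 1e-4):
    print(s, ac_resistance_ratio(s))
# a 10,000-fold linear reduction multiplies resistive losses about 100-fold
```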
When we get to the very, very small world–say circuits of seven atoms–we have a lot of new things that would happen that represent completely new opportunities for design. Atoms on a small scale behave like nothing on a large scale, for they satisfy the laws of quantum mechanics. So, as we go down and fiddle around with the atoms down there, we are working with different laws, and we can expect to do different things. We can manufacture in different ways. We can use, not just circuits, but some system involving the quantized energy levels, or the interactions of quantized spins, etc.
Another thing we will notice is that, if we go down far enough, all of our devices can be mass produced so that they are absolutely perfect copies of one another. We cannot build two large machines so that the dimensions are exactly the same. But if your machine is only 100 atoms high, you only have to get it correct to one-half of one percent to make sure the other machine is exactly the same size–namely, 100 atoms high!
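The tolerance claim is just this arithmetic: half of one percent of 100 atoms is half an atom, and since matter comes in whole atoms, every copy lands on exactly 100:

```python
height_atoms = 100
tolerance = 0.005             # one-half of one percent

allowed_error = height_atoms * tolerance
print(allowed_error)          # 0.5 -- less than one atom, so copies are identical
```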
At the atomic level, we have new kinds of forces and new kinds of possibilities, new kinds of effects. The problems of manufacture and reproduction of materials will be quite different. I am, as I said, inspired by the biological phenomena in which chemical forces are used in repetitious fashion to produce all kinds of weird effects (one of which is the author).
The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big.
Ultimately, we can do chemical synthesis. A chemist comes to us and says, “Look, I want a molecule that has the atoms arranged thus and so; make me that molecule.” The chemist does a mysterious thing when he wants to make a molecule. He sees that it has got that ring, so he mixes this and that, and he shakes it, and he fiddles around. And, at the end of a difficult process, he usually does succeed in synthesizing what he wants. By the time I get my devices working, so that we can do it by physics, he will have figured out how to synthesize absolutely anything, so that this will really be useless.
But it is interesting that it would be, in principle, possible (I think) for a physicist to synthesize any chemical substance that the chemist writes down. Give the orders and the physicist synthesizes it. How? Put the atoms down where the chemist says, and so you make the substance. The problems of chemistry and biology can be greatly helped if our ability to see what we are doing, and to do things on an atomic level, is ultimately developed–a development which I think cannot be avoided.
Now, you might say, “Who should do this and why should they do it?” Well, I pointed out a few of the economic applications, but I know that the reason that you would do it might be just for fun. But have some fun! Let’s have a competition between laboratories. Let one laboratory make a tiny motor which it sends to another lab which sends it back with a thing that fits inside the shaft of the first motor.
Just for the fun of it, and in order to get kids interested in this field, I would propose that someone who has some contact with the high schools think of making some kind of high school competition. After all, we haven’t even started in this field, and even the kids can write smaller than has ever been written before. They could have competition in high schools. The Los Angeles high school could send a pin to the Venice high school on which it says, “How’s this?” They get the pin back, and in the dot of the “i” it says, “Not so hot.”
Perhaps this doesn’t excite you to do it, and only economics will do so. Then I want to do something; but I can’t do it at the present moment, because I haven’t prepared the ground. It is my intention to offer a prize of $1,000 to the first guy who can take the information on the page of a book and put it on an area 1/25,000 smaller in linear scale in such manner that it can be read by an electron microscope.
And I want to offer another prize–if I can figure out how to phrase it so that I don’t get into a mess of arguments about definitions–of another $1,000 to the first guy who makes an operating electric motor–a rotating electric motor which can be controlled from the outside and, not counting the lead-in wires, is only 1/64 inch cube.
I do not expect that such prizes will have to wait very long for claimants.
— contents —
~ story
~ paper
~ reference
~ reading
~ watching
— story —
Even though virtual reality headsets are popular for gaming, they haven’t yet become the go-to device for watching television, shopping, or using software tools for design and modelling.
One reason is that VR can make users feel sick, with nausea, imbalance, eye strain, and headaches. This happens because VR creates an illusion of 3D viewing — but the user is actually staring at a fixed-distance 2D display. The solution for better 3D visualization exists in a 60-year-old tech that’s being updated for the digital world — holograms.
A new method called tensor holography enables the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.
Holograms deliver an exceptional 3D representation of the world around us — and they’re beautiful. Holograms offer a shifting visual perspective based on the viewer’s position. They allow the human eye to adjust its focal depth — so you can move your focus easily from foreground to background. The visual holographic display appears just like a real 3D object — as if you could touch it.
Researchers have been trying to make computer-generated holograms. But the process has traditionally required a supercomputer to churn through heaps of physics simulations — to create that life-like effect. That’s time-consuming and can yield less-than-photo-realistic results.
Holograms in the blink of an eye.
To deal with this, researchers at the Massachusetts Institute of Technology (MIT) designed a new way to produce holograms — almost instantly. The software they’re using is a deep learning artificial intelligence program — so called because it can teach itself. They said it’s so efficient that it can create a hologram on a laptop — in the blink of an eye.
Liang Shi is a PhD student at MIT and the lead researcher on the project. He said:
People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations. It’s often said that commercially available holographic displays will be around in 10 years — but this statement has been around for decades.
This new approach — tensor holography — will finally bring that elusive 10-year goal within reach. This advance could fuel a spill-over of holography into fields like VR and 3D printing.
The quest for better 3D.
A typical lens-based photograph encodes the brightness of each light wave that touches it. So a photo can faithfully reproduce a scene’s colors, but it ultimately makes a flat image.
But a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth. For example: a typical photograph of the famous oil painting Water Lilies can highlight the art’s color palette. But a hologram can bring the work to life — rendering the raised, unique 3D texture of each brush stroke. Despite being popular, holograms have been a challenge to make + share.
First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam — with half the beam used to illuminate the subject, and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard-copy only, making them difficult to reproduce and share.
Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. “Because each point in the scene has a different depth, you can’t apply the same operations for all of them,” says Shi. “That increases the complexity significantly.” Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image. Plus, existing algorithms don’t model occlusion with photo-realistic precision. So Shi’s team took a different approach: letting the computer teach physics to itself.
They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.
The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photo-realistic training data. Next, the algorithm got to work.
By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised the team themselves.
“We are amazed at how well it performs,” says co-author Wojciech Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multi-camera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.
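For readers curious what "a chain of trainable tensors" means concretely, here is a toy sketch of the idea (not the MIT model: the layer sizes, 3x3 kernels, and random weights are illustrative assumptions), mapping an RGB-D image to a two-channel amplitude-and-phase output through a few convolutions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'same' 2D convolution: x is (C_in, H, W), w is (C_out, C_in, 3, 3)."""
    c_out, h, wd = w.shape[0], x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))   # zero-pad height and width
    out = np.zeros((c_out, h, wd))
    for o in range(c_out):
        for i in range(x.shape[0]):
            for dy in range(3):
                for dx in range(3):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
    return out

# Three-layer toy network: 4 channels (RGB + depth) -> 8 -> 8 -> 2 (amp + phase).
weights = [rng.normal(0, 0.1, (8, 4, 3, 3)),
           rng.normal(0, 0.1, (8, 8, 3, 3)),
           rng.normal(0, 0.1, (2, 8, 3, 3))]

rgbd = rng.random((4, 16, 16))               # toy RGB-D input
x = rgbd
for w in weights[:-1]:
    x = np.maximum(conv2d(x, w), 0)          # ReLU nonlinearity
amp_phase = conv2d(x, weights[-1])           # amplitude + phase maps
print(amp_phase.shape)                       # (2, 16, 16)
```

In the real system these weights are learned from the 4,000 image–hologram pairs; the point of the sketch is only the shape of the computation: a fixed, small stack of tensor operations, which is why it runs in milliseconds once trained.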
A considerable leap in ability.
3D holography in real-time would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.
Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
The research team said: “It’s a considerable leap that could completely change people’s attitudes toward holography. We feel like neural networks were born for this task.”
publication: Nature
paper title: Towards real-time photorealistic 3D holography with deep neural networks
read | paper
— description —
The ability to present 3D scenes with continuous depth sensation has a profound impact on: virtual + augmented reality, human–computer interaction, education, and training.
presented by
Nature | home ~ channel
Springer Nature grp. | home ~ channel
tag line: Opening doors to discovery.
webpages
the Massachusetts Institute of Technology | home ~ channel
name: Liang Shi
web: home
reading
1. |
publication: Insider
tag line: What you want to know.
web: home ~ channel
story title: Here’s what happens to your body when you’ve been in virtual reality for too long
read | story
presented by
group: Axel Springer
tag line: The media + tech company.
banner: We empower free decisions.
web: home ~ channel
— notes —
AI = artificial intelligence
AR = augmented reality
VR = virtual reality
2D = 2-dimensional
3D = 3-dimensional
Q. What is artificial intelligence?
A. It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines.
Q. Isn’t there a solid definition of intelligence that doesn’t depend on relating it to human intelligence?
A. Not yet. The problem is that we cannot yet characterize in general what kinds of computational procedures we want to call intelligent. We understand some of the mechanisms of intelligence and not others.
Q. Is intelligence a single thing so that one can ask a yes or no question “Is this machine intelligent or not?”?
A. No. Intelligence involves mechanisms, and AI research has discovered how to make computers carry out some of them and not others. If doing a task requires only mechanisms that are well understood today, computer programs can give very impressive performances on these tasks. Such programs should be considered “somewhat intelligent”.
Q. Isn’t AI about simulating human intelligence?
A. Sometimes but not always or even usually. On the one hand, we can learn something about how to make machines solve problems by observing other people or just by observing our own methods. On the other hand, most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals. AI researchers are free to use methods that are not observed in people or that involve much more computing than people can do.
Q. What about IQ? Do computer programs have IQs?
A. No. IQ is based on the rates at which intelligence develops in children. It is the ratio of the age at which a child normally makes a certain score to the child’s age. The scale is extended to adults in a suitable way. IQ correlates well with various measures of success or failure in life, but making computers that can score high on IQ tests would be weakly correlated with their usefulness. For example, the ability of a child to repeat back a long sequence of digits correlates well with other intellectual abilities, perhaps because it measures how much information the child can compute with at once. However, “digit span” is trivial for even extremely limited computers.
However, some of the problems on IQ tests are useful challenges for AI.
Q. What about other comparisons between human and computer intelligence?
A. Arthur R. Jensen [Jen98], a leading researcher in human intelligence, suggests “as a heuristic hypothesis” that all normal humans have the same intellectual mechanisms and that differences in intelligence are related to “quantitative biochemical and physiological conditions”. I see them as speed, short term memory, and the ability to form accurate and retrievable long term memories.
Whether or not Jensen is right about human intelligence, the situation in AI today is the reverse.
Computer programs have plenty of speed and memory but their abilities correspond to the intellectual mechanisms that program designers understand well enough to put in programs. Some abilities that children normally don’t develop till they are teenagers may be in, and some abilities possessed by two year olds are still out. The matter is further complicated by the fact that the cognitive sciences still have not succeeded in determining exactly what the human abilities are. Very likely the organization of the intellectual mechanisms for AI can usefully be different from that in people.
Whenever people do better than computers on some task or computers use a lot of computation to do as well as people, this demonstrates that the program designers lack understanding of the intellectual mechanisms required to do the task efficiently.
Q. When did AI research start?
A. After WWII, a number of people independently started to work on intelligent machines. The English mathematician Alan Turing may have been the first. He gave a lecture on it in 1947. He also may have been the first to decide that AI was best researched by programming computers rather than by building machines. By the late 1950s, there were many researchers on AI, and most of them were basing their work on programming computers.
Q. Does AI aim to put the human mind into the computer?
A. Some researchers say they have that objective, but maybe they are using the phrase metaphorically. The human mind has a lot of peculiarities, and I’m not sure anyone is serious about imitating all of them.
Q. What is the Turing test?
A. Alan Turing’s 1950 article Computing Machinery and Intelligence [Tur50] discussed conditions for considering a machine to be intelligent. He argued that if the machine could successfully pretend to be human to a knowledgeable observer then you certainly should consider it intelligent. This test would satisfy most people but not all philosophers. The observer could interact with the machine and a human by teletype (to avoid requiring that the machine imitate the appearance or voice of the person), and the human would try to persuade the observer that it was human and the machine would try to fool the observer.
The Turing test is a one-sided test. A machine that passes the test should certainly be considered intelligent, but a machine could still be considered intelligent without knowing enough about humans to imitate a human.
Daniel Dennett’s book Brainchildren [Den98] has an excellent discussion of the Turing test and the various partial Turing tests that have been implemented, i.e. with restrictions on the observer’s knowledge of AI and the subject matter of questioning. It turns out that some people are easily led into believing that a rather dumb program is intelligent.
Q. Does AI aim at human-level intelligence?
A. Yes. The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans. However, many people involved in particular research areas are much less ambitious.
Q. How far is AI from reaching human-level intelligence? When will it happen?
A. A few people think that human-level intelligence can be achieved by writing large numbers of programs of the kind people are now writing and assembling vast knowledge bases of facts in the languages now used for expressing knowledge.
However, most AI researchers believe that new fundamental ideas are required, and therefore it cannot be predicted when human level intelligence will be achieved.
Q. Are computers the right kind of machine to be made intelligent?
A. Computers can be programmed to simulate any kind of machine.
Many researchers invented non-computer machines, hoping that they would be intelligent in different ways than computer programs could be. However, they usually simulate their invented machines on a computer and come to doubt that the new machine is worth building. Because many billions of dollars have been spent on making computers faster and faster, another kind of machine would have to be very fast to perform better than a program on a computer simulating the machine.
Q. Are computers fast enough to be intelligent?
A. Some people think much faster computers are required as well as new ideas. My own opinion is that the computers of 30 years ago were fast enough if only we knew how to program them. Of course, quite apart from the ambitions of AI researchers, computers will keep getting faster.
Q. What about parallel machines?
A. Machines with many processors are much faster than single processors can be. Parallelism itself presents no advantages, and parallel machines are somewhat awkward to program. When extreme speed is required, it is necessary to face this awkwardness.
Q. What about making a “child machine” that could improve by reading and by learning from experience?
A. This idea has been proposed many times, starting in the 1940s. Eventually, it will be made to work. However, AI programs haven’t yet reached the level of being able to learn much of what a child learns from physical experience. Nor do present programs understand language well enough to learn much by reading.
Q. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
A. I think yes, but we aren’t yet at a level of AI at which this process can begin.
Q. What about chess?
A. Alexander Kronrod, a Russian AI researcher, said “Chess is the Drosophila of AI.” He was making an analogy with geneticists’ use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs.
Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races.
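The substitution of computation for understanding is easiest to see in a game small enough to search completely. The sketch below is a hypothetical toy, not any real chess program: it exhaustively solves a take-away game by the same negamax scheme that chess programs elaborate with pruning and evaluation heuristics.

```python
from functools import lru_cache

# Toy game: a pile of stones; each player removes 1-3 stones,
# and whoever takes the last stone wins.

@lru_cache(maxsize=None)
def best_score(pile):
    """+1 if the player to move can force a win, -1 otherwise."""
    if pile == 0:
        return -1  # the opponent just took the last stone and won
    # Negamax: my best outcome is the worst I can inflict on the opponent.
    return max(-best_score(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    """Choose the removal that leaves the opponent worst off."""
    return max((t for t in (1, 2, 3) if t <= pile),
               key=lambda t: -best_score(pile - t))
```

Exhaustive search settles this game completely (piles that are multiples of four are lost for the player to move); chess differs only in that its tree is far too large to exhaust, hence the reliance on fast, limited-depth search.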
Q. What about Go?
A. The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.
Sooner or later, AI research will overcome this scandalous weakness.
Q. Don’t some people say that AI is a bad idea?
A. The philosopher John Searle says that the idea of a non-biological machine being intelligent is incoherent. The philosopher Hubert Dreyfus says that AI is impossible. The computer scientist Joseph Weizenbaum says the idea is obscene, anti-human and immoral. Various people have said that since artificial intelligence hasn’t reached human level by now, it must be impossible. Still other people are disappointed that companies they invested in went bankrupt.
Q. Aren’t computability theory and computational complexity the keys to AI? [Note to the layman and beginners in computer science: These are quite technical branches of mathematical logic and computer science, and the answer to the question has to be somewhat technical.]
A. No. These theories are relevant but don’t address the fundamental problems of AI.
In the 1930s mathematical logicians, especially Kurt Gödel and Alan Turing, established that there did not exist algorithms that were guaranteed to solve all problems in certain important mathematical domains. Whether a sentence of first order logic is a theorem is one example, and whether a polynomial equation in several variables has integer solutions is another. Humans solve problems in these domains all the time, and this has been offered as an argument (usually with some decorations) that computers are intrinsically incapable of doing what people do. However, people can’t guarantee to solve arbitrary problems in these domains either.
In the 1960s computer scientists, especially Steve Cook and Richard Karp, developed the theory of NP-complete problem domains. Problems in these domains are solvable, but seem to take time exponential in the size of the problem. Which sentences of propositional calculus are satisfiable is a basic example of an NP-complete problem domain. Humans often solve problems in NP-complete domains in times much shorter than is guaranteed by the general algorithms, but can’t solve them quickly in general.
What is important for AI is to have algorithms as capable as people at solving problems. The identification of subdomains for which good algorithms exist is important, but a lot of AI problem solvers are not associated with readily identified subdomains.
The theory of the difficulty of general classes of problems is called computational complexity. So far this theory hasn’t interacted with AI as much as might have been hoped. Success in problem solving by humans and by AI programs seems to rely on properties of problems and problem solving methods that neither the complexity researchers nor the AI community have been able to identify precisely.
Algorithmic complexity theory as developed by Solomonoff, Kolmogorov and Chaitin (independently of one another) is also relevant. It defines the complexity of a symbolic object as the length of the shortest program that will generate it. Proving that a candidate program is the shortest or close to the shortest is an unsolvable problem, but representing objects by short programs that generate them should often be illuminating even when you can’t prove that the program is the shortest.
Q. What are the branches of AI?
A. Here’s a list, but some branches are surely missing, because no-one has identified them yet. Some of these may be regarded as concepts or topics rather than full branches.
Logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. The first article proposing this was [McC59]. [McC89] is a more recent summary. [McC96] lists some of the concepts involved in logical AI. [Sha97] is an important text.
Search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
Pattern recognition
When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.
Representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.
Inference
From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is to be inferred by default, but the conclusion can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonically increasing function of the premises.
Common sense knowledge and reasoning
This is the area in which AI is farthest from the human level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.
Learning from experience
Programs do that. The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. [Mit97] is a comprehensive undergraduate text on machine learning. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.
Planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
Epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.
Ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology begins in the 1990s.
Heuristics
A heuristic is a way of trying to discover something or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful. [My opinion].
Genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest in millions of generations.
Q. What are the applications of AI?
A. Here are some.
Game playing
You can buy machines that can play master level chess for a few hundred dollars. There is some AI in them, but they play well against people mainly through brute force computation, looking at hundreds of thousands of positions. To beat a world champion by brute force and known reliable heuristics requires being able to look at 200 million positions per second.
Speech recognition
In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.
Understanding natural language
Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.
Computer vision
The world is composed of three-dimensional objects, but the inputs to the human eye and computers’ TV cameras are two dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.
Expert systems
A “knowledge engineer” interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.
Heuristic classification
One of the most feasible kinds of expert system, given the present knowledge of AI, is to put some information in one of a fixed set of categories using several sources of information. An example is advising whether to accept a proposed credit card purchase. Information is available about the owner of the credit card, his record of payment, and also about the item he is buying and about the establishment from which he is buying it (e.g., about whether there have been previous credit card frauds at this establishment).
Q. How is AI research done?
A. AI research has both theoretical and experimental sides. The experimental side has both basic and applied aspects.
There are two main lines of research. One is biological, based on the idea that since humans are intelligent, AI should study humans and imitate their psychology or physiology. The other is phenomenal, based on studying and formalizing common sense facts about the world and the problems that the world presents to the achievement of goals. The two approaches interact to some extent, and both should eventually succeed. It is a race, but both racers seem to be walking.
Q. What should I study before or while learning AI?
A. Study mathematics, especially mathematical logic. The more you learn about science in general the better. For the biological approaches to AI, study psychology and the physiology of the nervous system. Learn some programming languages: at least C, Lisp and Prolog. It is also a good idea to learn one basic machine language. Jobs are likely to depend on knowing the languages currently in fashion; in the late 1990s, these include C++ and Java.
Q. What is a good textbook on AI?
A. Artificial Intelligence by Stuart Russell and Peter Norvig (Prentice Hall) is the most commonly used textbook in 1997. The general views expressed there do not exactly correspond to those of this essay. Artificial Intelligence: A New Synthesis by Nils Nilsson (Morgan Kaufmann) may be easier to read.
Q. What organizations and publications are concerned with AI?
A. The American Association for Artificial Intelligence (AAAI), the European Coordinating Committee for Artificial Intelligence (ECCAI) and the Society for Artificial Intelligence and Simulation of Behavior (AISB) are scientific societies concerned with AI research. The Association for Computing Machinery (ACM) has a special interest group on artificial intelligence SIGART.
The International Joint Conference on AI (IJCAI) is the main international conference. The AAAI runs a US National Conference on AI. Electronic Transactions on Artificial Intelligence, Artificial Intelligence, Journal of Artificial Intelligence Research, and IEEE Transactions on Pattern Analysis and Machine Intelligence are four of the main journals publishing AI research papers. I have not yet found everything that should be in this paragraph.
Page of Positive Reviews lists papers that experts have found important.
Funding a Revolution: Government Support for Computing Research, by a committee of the National Research Council, covers support for AI research in Chapter 9.
Den98 Daniel Dennett. Brainchildren: Essays on Designing Minds. MIT Press, 1998.
Jen98 Arthur R. Jensen. Does IQ matter? Commentary, pages 20-21, November 1998. The reference is just to Jensen’s comment, one of many.
McC59 John McCarthy. Programs with Common Sense. In Mechanisation of Thought Processes, Proceedings of the Symposium of the National Physics Laboratory, pages 77-84, London, U.K., 1959. Her Majesty’s Stationery Office. Reprinted in McC90.
McC89 John McCarthy. Artificial Intelligence, Logic and Formalizing Common Sense. In Richmond Thomason, editor, Philosophical Logic and Artificial Intelligence. Kluwer Academic, 1989.
McC96 John McCarthy. Concepts of Logical AI, 1996. Web only for now but may be referenced.
Mit97 Tom Mitchell. Machine Learning. McGraw-Hill, 1997.
Sha97 Murray Shanahan. Solving the Frame Problem, a mathematical investigation of the common sense law of inertia. M.I.T. Press, 1997.
The version that appears on Vernor Vinge’s website can be read here.
Vernor Vinge is a retired San Diego State University math professor, computer scientist, and science fiction author. He is best known for his Hugo Award-winning novels A Fire Upon the Deep, A Deepness in the Sky, and Rainbows End, for the Hugo Award-winning novellas Fast Times at Fairmont High and The Cookie Monster, and for his 1993 essay “The Coming Technological Singularity,” in which he argues that the creation of superhuman artificial intelligence will mark the point at which “the human era will be ended,” such that no current models of reality are sufficient to predict beyond it.
What Is the Singularity?
The acceleration of technological progress has been the central feature of this century. I argue in this paper that we are on the edge of change comparable to the rise of human life on Earth.
The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.
There are several means by which science may achieve this breakthrough (and this is another reason for having confidence that the event will occur):
The first three possibilities depend in large part on improvements in computer hardware. Progress in computer hardware has followed an amazingly steady curve in the last few decades [17]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [20] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I’m not guilty of a relative-time ambiguity, let me be more specific: I’ll be surprised if this event occurs before 2005 or after 2030.)
What are the consequences of this event? When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities, on a still shorter time scale. The best analogy that I see is with the evolutionary past: Animals can adapt to problems and make inventions, but often no faster than natural selection can do its work; the world acts as its own simulator in the case of natural selection. We humans have the ability to internalize the world and conduct “what if’s” in our heads; we can solve many problems thousands of times faster than natural selection. Now, by creating the means to execute those simulations at much higher speeds, we are entering a regime as radically different from our human past as we humans are from the lower animals.
From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control. Developments that before were thought might only happen in “a million years” (if ever) will likely happen in the next century. (In [5], Greg Bear paints a picture of the major changes happening in a matter of hours.)
I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown. In the 1950s there were very few who saw it: Stan Ulam [28] paraphrased John von Neumann as saying:
One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed (see [25]).)
In the 1960s there was recognition of some of the implications of superhuman intelligence. I. J. Good wrote [11]:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. … It is more probable than not that, within the twentieth century, an ultraintelligent machine will be built and that it will be the last invention that man need make.
Good has captured the essence of the runaway, but does not pursue its most disturbing consequences. Any intelligent machine of the sort he describes would not be humankind’s “tool” — any more than humans are the tools of rabbits or robins or chimpanzees.
Through the ’60s and ’70s and ’80s, recognition of the cataclysm spread [29] [1] [31] [5]. Perhaps it was the science-fiction writers who felt the first concrete impact. After all, the “hard” science-fiction writers are the ones who try to write specific stories about all that technology may do for us. More and more, these writers felt an opaque wall across the future. Once, they could put such fantasies millions of years in the future [24]. Now they saw that their most diligent extrapolations resulted in the unknowable … soon. Once, galactic empires might have seemed a Post-Human domain. Now, sadly, even interplanetary ones are.
What about the ’90s and the ’00s and the ’10s, as we slide toward the edge? How will the approach of the Singularity spread across the human world view? For a while yet, the general critics of machine sapience will have good press. After all, till we have hardware as powerful as a human brain it is probably foolish to think we’ll be able to create human equivalent (or greater) intelligence. (There is the far-fetched possibility that we could make a human equivalent out of less powerful hardware, if we were willing to give up speed, if we were willing to settle for an artificial being who was literally slow [30]. But it’s much more likely that devising the software will be a tricky process, involving lots of false starts and experimentation. If so, then the arrival of self-aware machines will not happen till after the development of hardware that is substantially more powerful than humans’ natural equipment.)
But as time passes, we should see more symptoms. The dilemma felt by science fiction writers will be perceived in other creative endeavors. (I have heard thoughtful comic book writers worry about how to have spectacular effects when everything visible can be produced by the technologically commonplace.) We will see automation replacing higher and higher level jobs. We have tools right now (symbolic math programs, cad/cam) that release us from most low-level drudgery. Or put another way: The work that is truly productive is the domain of a steadily smaller and more elite fraction of humanity. In the coming of the Singularity, we are seeing the predictions of true technological unemployment finally come true.
Another symptom of progress toward the Singularity: ideas themselves should spread ever faster, and even the most radical will quickly become commonplace. When I began writing science fiction in the middle ’60s, it seemed very easy to find ideas that took decades to percolate into the cultural consciousness; now the lead time seems more like eighteen months. (Of course, this could just be me losing my imagination as I get old, but I see the effect in others too.) Like the shock in a compressible flow, the Singularity moves closer as we accelerate through the critical speed.
And what of the arrival of the Singularity itself? What can be said of its actual appearance? Since it involves an intellectual runaway, it will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected — perhaps even to the researchers involved. (“But all our previous models were catatonic! We were just tweaking some parameters…”) If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened.
And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Post-Human era. And for all my rampant technological optimism, sometimes I think I’d be more comfortable if I were regarding these transcendental events from one thousand years remove … instead of twenty.
Can the Singularity be Avoided?
Well, maybe it won’t happen at all: Sometimes I try to imagine the symptoms that we should expect to see if the Singularity is not to develop. There are the widely respected arguments of Penrose [19] and Searle [22] against the practicality of machine sapience. In August of 1992, Thinking Machines Corporation held a workshop to investigate the question “How We Will Build a Machine that Thinks” [27]. As you might guess from the workshop’s title, the participants were not especially supportive of the arguments against machine intelligence. In fact, there was general agreement that minds can exist on nonbiological substrates and that algorithms are of central importance to the existence of minds. However, there was much debate about the raw hardware power that is present in organic brains. A minority felt that the largest 1992 computers were within three orders of magnitude of the power of the human brain. The majority of the participants agreed with Moravec’s estimate [17] that we are ten to forty years away from hardware parity. And yet there was another minority who pointed to [7] [21], and conjectured that the computational competence of single neurons may be far higher than generally believed. If so, our present computer hardware might be as much as ten orders of magnitude short of the equipment we carry around in our heads. If this is true (or for that matter, if the Penrose or Searle critique is valid), we might never see a Singularity. Instead, in the early ’00s we would find our hardware performance curves beginning to level off — this because of our inability to automate the design work needed to support further hardware improvements. We’d end up with some very powerful hardware, but without the ability to push it further. Commercial digital signal processing might be awesome, giving an analog appearance even to digital operations, but nothing would ever “wake up” and there would never be the intellectual runaway which is the essence of the Singularity. 
It would likely be seen as a golden age … and it would also be an end of progress. This is very like the future predicted by Gunther Stent. In fact, on page 137 of [25], Stent explicitly cites the development of transhuman intelligence as a sufficient condition to break his projections.
But if the technological Singularity can happen, it will. Even if all the governments of the world were to understand the “threat” and be in deadly fear of it, progress toward the goal would continue. In fiction, there have been stories of laws passed forbidding the construction of “a machine in the likeness of the human mind” [13]. In fact, the competitive advantage — economic, military, even artistic — of every advance in automation is so compelling that passing laws, or having customs, that forbid such things merely assures that someone else will get them first.
Eric Drexler [8] has provided spectacular insights about how far technical improvement may go. He agrees that superhuman intelligences will be available in the near future, and that such entities pose a threat to the human status quo. But Drexler argues that we can confine such transhuman devices so that their results can be examined and used safely. This is I. J. Good’s ultraintelligent machine, with a dose of caution. I argue that confinement is intrinsically impractical. For the case of physical confinement: Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate, say, one million times slower than you, there is little doubt that over a period of years (your time) you could come up with “helpful advice” that would incidentally set you free. (I call this “fast thinking” form of superintelligence “weak superhumanity”. Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time. “Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. It’s hard to say precisely what “strong superhumanity” would be like, but the difference appears to be profound. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight? (Now if the dog mind were cleverly rewired and then run at high speed, we might see something different….) Many speculations about superintelligence seem to be based on the weakly superhuman model. I believe that our best guesses about the post-Singularity world can be obtained by thinking on the nature of strong superhumanity. I will return to this point later in the paper.)
Another approach to confinement is to build rules into the mind of the created superhuman entity (for example, Asimov’s Laws [3]). I think that any rules strict enough to be effective would also produce a device whose ability was clearly inferior to the unfettered versions (and so human competition would favor the development of those more dangerous models). Still, the Asimov dream is a wonderful one: Imagine a willing slave, who has 1000 times your capabilities in every way. Imagine a creature who could satisfy your every safe wish (whatever that means) and still have 99.9% of its time free for other activities. There would be a new universe we never really understood, but filled with benevolent gods (though one of my wishes might be to become one of them).
If the Singularity cannot be prevented or confined, just how bad could the Post-Human era be? Well … pretty bad. The physical extinction of the human race is one possibility. (Or as Eric Drexler put it of nanotechnology: Given all that such technology can do, perhaps governments would simply decide that they no longer need citizens!). Yet physical extinction may not be the scariest possibility. Again, analogies: Think of the different ways we relate to animals. Some of the crude physical abuses are implausible, yet…. In a Post-Human world there would still be plenty of niches where human equivalent automation would be desirable: embedded systems in autonomous devices, self-aware daemons in the lower functioning of larger sentients. (A strongly superhuman intelligence would likely be a Society of Mind [16] with some very competent components.) Some of these human equivalents might be used for nothing more than digital signal processing. They would be more like whales than humans. Others might be very human-like, yet with a one-sidedness, a dedication that would put them in a mental hospital in our era. Though none of these creatures might be flesh-and-blood humans, they might be the closest things in the new environment to what we call human now. (I. J. Good had something to say about this, though at this late date the advice may be moot: Good [12] proposed a “Meta-Golden Rule”, which might be paraphrased as “Treat your inferiors as you would be treated by your superiors.” It’s a wonderful, paradoxical idea (and most of my friends don’t believe it) since the game-theoretic payoff is so hard to articulate. Yet if we were able to follow it, in some sense that might say something about the plausibility of such kindness in this universe.)
I have argued above that we cannot prevent the Singularity, that its coming is an inevitable consequence of the humans’ natural competitiveness and the possibilities inherent in technology. And yet … we are the initiators. Even the largest avalanche is triggered by small things. We have the freedom to establish initial conditions, make things happen in ways that are less inimical than others. Of course (as with starting avalanches), it may not be clear what the right guiding nudge really is:
Other Paths to the Singularity: Intelligence Amplification
When people speak of creating superhumanly intelligent beings, they are usually imagining an AI project. But as I noted at the beginning of this paper, there are other paths to superhumanity. Computer networks and human-computer interfaces seem more mundane than AI, and yet they could lead to the Singularity. I call this contrasting approach Intelligence Amplification (IA). IA is something that is proceeding very naturally, in most cases not even recognized by its developers for what it is. But every time our ability to access information and to communicate it to others is improved, in some sense we have achieved an increase over natural intelligence. Even now, the team of a PhD human and good computer workstation (even an off-net workstation!) could probably max any written intelligence test in existence.
And it’s very likely that IA is a much easier road to the achievement of superhumanity than pure AI. In humans, the hardest development problems have already been solved. Building up from within ourselves ought to be easier than figuring out first what we really are and then building machines that are all of that. And there is at least conjectural precedent for this approach. Cairns-Smith [6] has speculated that biological life may have begun as an adjunct to still more primitive life based on crystalline growth. Lynn Margulis (in [15] and elsewhere) has made strong arguments that mutualism is a great driving force in evolution.
Note that I am not proposing that AI research be ignored or less funded. What goes on with AI will often have applications in IA, and vice versa. I am suggesting that we recognize that in network and interface research there is something as profound (and potentially wild) as Artificial Intelligence. With that insight, we may see projects that are not as directly applicable as conventional interface and network design work, but which serve to advance us toward the Singularity along the IA path.
Here are some possible projects that take on special significance, given the IA point of view:
The above examples illustrate research that can be done within the context of contemporary computer science departments. There are other paradigms. For example, much of the work in Artificial Intelligence and neural nets would benefit from a closer connection with biological life. Instead of simply trying to model and understand biological life with computers, research could be directed toward the creation of composite systems that rely on biological life for guidance or for providing features we don’t understand well enough yet to implement in hardware. A long-time dream of science fiction has been direct brain to computer interfaces [2] [29]. In fact, there is concrete work that can be done (and is being done) in this area:
Originally, I had hoped that this discussion of IA would yield some clearly safer approaches to the Singularity. (After all, IA allows our participation in a kind of transcendence.) Alas, looking back over these IA proposals, about all I am sure of is that they should be considered, that they may give us more options. But as for safety … well, some of the suggestions are a little scary on their face. One of my informal reviewers pointed out that IA for individual humans creates a rather sinister elite. We humans have millions of years of evolutionary baggage that makes us regard competition in a deadly light. Much of that deadliness may not be necessary in today’s world, one where losers take on the winners’ tricks and are co-opted into the winners’ enterprises. A creature that was built de novo might possibly be a much more benign entity than one with a kernel based on fang and talon. And even the egalitarian view of an Internet that wakes up along with all mankind can be viewed as a nightmare [26].
The problem is not simply that the Singularity represents the passing of humankind from center stage, but that it contradicts our most deeply held notions of being. I think a closer look at the notion of strong superhumanity can show why that is.
Strong Superhumanity and the Best We Can Ask for
Suppose we could tailor the Singularity. Suppose we could attain our most extravagant hopes. What then would we ask for? That humans themselves would become their own successors, that whatever injustice occurs would be tempered by our knowledge of our roots. For those who remained unaltered, the goal would be benign treatment (perhaps even giving the stay-behinds the appearance of being masters of godlike slaves). It could be a golden age that also involved progress (overleaping Stent’s barrier). Immortality (or at least a lifetime as long as we can make the universe survive [10] [4]) would be achievable.
But in this brightest and kindest world, the philosophical problems themselves become intimidating. A mind that stays at the same capacity cannot live forever; after a few thousand years it would look more like a repeating tape loop than a person. (The most chilling picture I have seen of this is in [18].) To live indefinitely long, the mind itself must grow … and when it becomes great enough, and looks back … what fellow-feeling can it have with the soul that it was originally? Certainly the later being would be everything the original was, but so much vastly more. And so even for the individual, the Cairns-Smith or Lynn Margulis notion of new life growing incrementally out of the old must still be valid.
This “problem” about immortality comes up in much more direct ways. The notion of ego and self-awareness has been the bedrock of the hardheaded rationalism of the last few centuries. Yet now the notion of self-awareness is under attack from the Artificial Intelligence people (“self-awareness and other delusions”). Intelligence Amplification undercuts our concept of ego from another direction. The post-Singularity world will involve extremely high-bandwidth networking. A central feature of strongly superhuman entities will likely be their ability to communicate at variable bandwidths, including ones far higher than speech or written messages. What happens when pieces of ego can be copied and merged, when the size of a self-awareness can grow or shrink to fit the nature of the problems under consideration? These are essential features of strong superhumanity and the Singularity. Thinking about them, one begins to feel how essentially strange and different the Post-Human era will be, no matter how cleverly and benignly it is brought to be.
From one angle, the vision fits many of our happiest dreams: a time unending, where we can truly know one another and understand the deepest mysteries. From another angle, it’s a lot like the worst-case scenario I imagined earlier in this paper.
Which is the valid viewpoint? In fact, I think the new era is simply too different to fit into the classical frame of good and evil. That frame is based on the idea of isolated, immutable minds connected by tenuous, low-bandwidth links. But the post-Singularity world does fit with the larger tradition of change and cooperation that started long ago (perhaps even before the rise of biological life). I think there are notions of ethics that would apply in such an era. Research into IA and high-bandwidth communications should improve this understanding. I see just the glimmerings of this now [32]. There is Good’s Meta-Golden Rule; perhaps there are rules for distinguishing self from others on the basis of bandwidth of connection. And while mind and self will be vastly more labile than in the past, much of what we value (knowledge, memory, thought) need never be lost. I think Freeman Dyson has it right when he says [9]: “God is what mind becomes when it has passed beyond the scale of our comprehension.”
[I wish to thank John Carroll of San Diego State University and Howard Davidson of Sun Microsystems for discussing the draft version of this paper with me.]
by Ray Kurzweil
April 2023
Regarding the Open Letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I didn’t sign, because I believe we can address the signers’ safety concerns in a more tailored way that doesn’t compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in the creation of guidelines to create Artificial Intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI’s profound advantages to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist.
reference
the Future of Life Institute | home ~ channel
open letter: Pause Giant AI Experiments. | view
date: March 2023
— about —
This letter’s signatories call on all AI labs to immediately pause for at least 6 months the training of artificial intelligence systems more powerful than GPT-4.
— content —
~ story
~ report
~ key points
~ quote
~ featurettes
~ reading
story |
A study by Northwestern Univ. looks at the world-wide growth of data — and the global energy needed to power it. The research team asks the question: Will we run out of energy to power civilization’s data-use — soon or in the future? According to Microsoft, there are billions of devices and data centers processing over 2.5 exa-bytes of data every day.
The study’s analysis paints a detailed picture of global energy-use — as we attempt to keep-up with today’s increasing demand for data. Filled with computing and networking equipment, data centers are central locations that collect, store, and process data. As the world relies more + more on data-intensive tech, the energy-use of data centers is a growing worry.
While data center facilities have so far managed to use energy efficiently enough to keep pace with the rising tide of demand, analysts warn that once these efficiency measures can no longer keep up, an enormous spike in energy need will follow.
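Bottom-up estimates like the one in this study typically scale IT equipment load by a facility overhead factor known as power usage effectiveness (PUE). The sketch below is purely illustrative: the fleet size, average server power, and PUE figures are invented placeholders, not the study’s data.

```python
# Rough bottom-up estimate of annual data center electricity use.
# All input figures below are illustrative placeholders, not study data.

HOURS_PER_YEAR = 8760

def annual_energy_twh(device_count, avg_power_watts, pue):
    """Annual electricity (TWh) for a fleet of devices.

    pue: power usage effectiveness -- total facility power divided by
    IT equipment power (1.0 would mean zero cooling/overhead).
    """
    it_energy_wh = device_count * avg_power_watts * HOURS_PER_YEAR
    return it_energy_wh * pue / 1e12  # Wh -> TWh

# Hypothetical fleet: 10 million servers at 250 W average, PUE of 1.6.
print(f"{annual_energy_twh(10_000_000, 250, 1.6):.1f} TWh/year")
```

The efficiency story in the article maps directly onto the two knobs here: better hardware lowers `avg_power_watts` per unit of compute, and better facility design lowers `pue`.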
report |
group: AAAS
publication: Science
report title: Recalibrating global data center energy-use estimates.
author: Northwestern Univ.
date: February 2020
read | report
about |
This analysis presents a nuanced picture of global energy-use. Data centers represent the information backbone of an increasingly digitized world.
Demand for their services has been rising rapidly, and data-intensive tech — like artificial intelligence, smart + connected energy systems, distributed manufacturing systems, and autonomous vehicles — promise to increase demand further. Data centers are energy-intensive — accounting for 1% of world-wide electricity-use.
presented by
card :: AAAS
card :: Science
— key points —
The world is using more + more electronic data — that equals more energy needed to power it:
What the study discovered:
What’s to come:
The team listed steps to slow future growth in energy-use:
— quote —
While the historical efficiency progress made by data centers is remarkable, our findings don’t mean IT industry and policy-makers can rest on their laurels. We think there’s enough remaining efficiency potential to last several years.
But ever-growing demand for data means that everyone — including policy-makers, data center operators, equipment manufacturers, and data consumers — must increase efforts to avoid a sharp rise in energy-use, later this decade.
To paint a more complete picture, the research team integrated new data, including info on data center equipment stocks, efficiency trends, and market structure, so the final model enables analysis of the energy used:
The reality is we need to better monitor energy-use.
— Eric Masanet PhD
bio: professor at Northwestern Univ.
bio: mechanical engineer
bio: leader: Energy + Resource Systems Analysis Lab
— featurette —
group: Facebook
featurette title: Facebook’s data centers
tag line: Bring the world closer together.
— featurette —
group: Google
featurette title: Google’s data centers
tag line: text
— featurette —
group: Google
featurette title: Making Google’s data centers green
tag line: text
— featurette —
group: ABB
featurette title: ABB’s data centers
tag line: Let’s write the future. Together.
— summary —
It used to be an industrial mine. Now it’s on course to become Europe’s biggest + greenest data center. And Lefdal, on Norway’s coast, is fully powered and protected by ABB technology.
— featurette —
group: Kolos
featurette title: Planning the world’s largest data center
banner: Powering the future.
— about —
Kolos is planning to build the world’s largest data center — powered by 100% renewable energy. Kolos is changing the paradigm in data center infra-structure, moving away from dense, high-cost, fossil fuel-driven areas to an area abundant in clean, renewable energy. Beautifully integrated into the natural landscape + community, the Kolos data center will bring world-class tech industry and jobs.
A fortress of data:
webpages |
AFCOM | home
AFCOM | Data Center Institute
tag line: We are experts.
tag line: Advancing data center + IT infra-structure professionals.
webpages |
Energy Star | home
Energy Star | booklet: Best Practices Guide — for energy-efficient data center design
Energy Star | booklet: Score for Data Centers — for spaces meeting the needs of high-density computing
banner: The simple choice for energy efficiency.
webpages |
school: Northwestern Univ.
web: home ~ channel
motto: Whatsoever things are true.
reading
1. |
group: Data Center Dynamics
publication: analysis
story title: Huge data center efficiency gains stave off energy surge for now.
read | story
— summary —
A six-fold increase in compute led to energy demands rising just 6%.
tag line: The business of data centers.
2. |
group: ABB
story title: ABB solutions power Europe’s greenest data center in Norway.
read | story
— summary —
With ABB technology, Lefdal mine data center plans to become Europe’s biggest — with the smallest environmental footprint.
tag line: Let’s write the future. Together.
3. |
group: GeoTel
publication: blog
story title: Microsoft’s data centers will be using 60% renewable energy by 2020.
read | story
— summary —
Microsoft’s data centers are the beginning of the company’s larger plan to be environmentally friendly.
DASHED
presented by
card :: GeoTel
tag line: Tele-communications location-based intelligence.
tag line: Know the fiber landscape.
4. |
group: Singularity grp.
publication: Singularity Hub
story title: AI is an energy-guzzler.
read | story
— summary —
We need to re-think its design, and soon.
DASHED
presented by
card :: Singularity grp.
tag line: We prepare you to seize exponential opportunities.
5. |
group: Informa
section: InformaTech
publication: Data Center Knowledge
story title: There are now more than 500 hyper-scale data centers in the world.
read | story
— summary —
How long does it take the world to build 100 hyper-scale data centers? About 2 years, according to Synergy Research. The number of these massive facilities — they house our data, serve our entertainment, and power + cool the computing infra-structure for the apps our lives revolve around — passed 500 in 2019.
What are the criteria for a data center to be labelled “hyper-scale”? Analysts use different scale-of-business criteria to assess the size of company operations in cloud, e-commerce, or social networking markets. The facilities are measured in the 10,000s of servers.
DASHED
presented by
card :: Informa
card :: InformaTech
group:
banner: Championing the specialist.
banner: Inspiring the tech community to design, build, and run a better digital world.
IMAGE
— notes —
e-commerce = electronic commerce
AI = artificial intelligence
IT = information tech
AAAS = American Assoc. for the Advancement of Science
AFCOM = Assoc. for Computer Operations Management
ABB = Asea, Brown, Boveri
EPA = Environmental Protection Agency ~ United States
DOE = Department of Energy | United States
* Energy Star is a program of the EPA + DOE ~ United States
the Kurzweil Library
set :: stories on progress
— contents —
~ story
~ graphics
~ report
~ about
~ reading
story |
New research from medical scientists at Johns Hopkins Univ. linked abnormally formed proteins in the human brain with the psychiatric illness called schizophrenia — in a significant number of patients. While they’re not yet sure what the connection is, the study reports that deformed proteins were found in the brains of many patients who were diagnosed with schizophrenia.
This leads researchers to guess that deformed proteins had a role in the disease — either as a cause or an effect. The team says this link is an important clue to gaining knowledge about schizophrenia. It’s a mysterious + incurable illness that’s not well understood.
Valuable research.
The team analyzed post-mortem brain tissue from 42 schizophrenia patients, donated by brain banks across 3 different institutions. They compared these with post-mortem brain samples (from the same collections) of 41 people who had not been diagnosed with schizophrenia. Having samples from different collections enabled the team to test and re-test their results for consistency.
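A patient-versus-control comparison like this is often summarized with a two-sample statistic. As a purely illustrative sketch (the numbers below are invented, and Welch’s t-test is just one common choice, not necessarily the study’s actual method):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    se = math.sqrt(var(a) / len(a) + var(b) / len(b))
    return (mean(a) - mean(b)) / se

# Invented insoluble-protein fractions (percent of total protein):
patients = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9]
controls = [2.9, 3.1, 2.7, 3.3, 3.0, 2.8]
t = welch_t(patients, controls)
print(f"t = {t:.2f}")  # a larger |t| is stronger evidence the groups differ
```

Re-testing across the three brain collections, as the team did, amounts to repeating a comparison like this on each collection and checking that the direction and size of the effect hold up.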
The researchers looked at 2 areas of the brain they believe are disrupted in patients with schizophrenia.
The team looked closely at proteins in those sections of the human brain, dividing them into 2 groups.
image | below
This graphic highlights the 2 brain regions sampled in the abnormal protein study researching schizophrenia. The model shows a whole human brain, seen from the side.
Possible connection with other diseases.
After they’re assembled by the human body, proteins normally fold-up in a way that makes them soluble. Mis-folded proteins are often insoluble, function poorly or not at all, and are linked to many diseases.
The team — thinking about how clumps of insoluble, mis-folded proteins are seen in the brains of patients with Alzheimer disease and other illnesses — wanted to see if they could find evidence of similar pathology in schizophrenia. Unable to access these key brain areas in living patients, they could only look at post-mortem brains.
image | left
An image of human brain tissue under a microscope — showing clumping proteins against the backdrop of normal cells.
Researchers observed these types of chunky + deformed proteins in the schizophrenia study.
credit: Univ. of Pennsylvania
Interesting results.
The results of this research were impressive: 20 of the 42 schizophrenia brains that were analyzed contained significantly more insoluble proteins — compared with the other 22 schizophrenia brains, and the 41 control brains. The researchers then used an analytical technique called mass spectrometry to learn more about the proteins in the insoluble portion. They found unique and abnormal proteins in greater abundance.
In a nutshell, mass spectrometry uses high-tech devices to accurately measure the mass of different molecules in a sample. Even large bio-molecules like proteins are identifiable by mass, which means that biologists can perform interesting experiments using mass spectrometry, adding a new dimension to their research.
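The identification step can be pictured as matching a measured mass against a reference table within the instrument’s tolerance. This is a toy sketch only: the protein names and masses below are invented, not the study’s data or any real database.

```python
# Toy sketch of protein identification by mass.
# Reference masses and names are invented placeholders.
REFERENCE_DALTONS = {
    "protein_A": 14_307.0,
    "protein_B": 52_810.5,
    "protein_C": 9_964.2,
}

def identify(measured_mass, tolerance_ppm=50):
    """Return reference proteins within `tolerance_ppm` of the measured mass."""
    hits = []
    for name, ref_mass in REFERENCE_DALTONS.items():
        ppm_error = abs(measured_mass - ref_mass) / ref_mass * 1e6
        if ppm_error <= tolerance_ppm:
            hits.append(name)
    return hits

print(identify(14_307.3))  # within 50 ppm of protein_A
```

Real proteomics pipelines work on peptide fragments and use far larger databases, but the core idea — mass lookup within a tolerance — is the same.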
Looking forward.
The study’s discovery of deformed proteins is consistent with a theory of schizophrenia that says the illness is related to abnormal brain development.
Abnormal proteins were present in only half of the brain samples from schizophrenia patients. The team said this could be evidence of a sub-type of schizophrenia — which has implications for diagnosis and treatment development.
More research is under way to link insoluble proteins with the cause of schizophrenia, its specific clinical symptoms, and to explore if similar irregularities are present in other psychiatric illnesses.
report |
school: Johns Hopkins Univ.
publication: the American Journal of Psychiatry
report: Increased protein insolubility in brains from a sub-set of patients with schizophrenia
date: May 2019
read | report
IMAGE
about | schizophrenia
Schizophrenia is a psychiatric illness — it’s a chronic, severe lifetime mental disorder that affects how a person thinks, feels, and behaves. People with schizophrenia may seem like they’ve lost touch with reality.
It’s not as common as other mental disorders, but the symptoms are disabling — the disease is devastating to patients and their families. Schizophrenia is most commonly diagnosed between the ages of 16 and 30.
The symptoms.
There are 3 types of schizophrenia symptoms — positive, negative, cognitive.
a. | positive symptoms are psychotic behaviors not seen in healthy people. People with positive symptoms may lose-touch with reality.
b. | negative symptoms are disruptions in normal emotions and behaviors.
c. | cognitive symptoms are changes patients may notice in their memory + thinking.
Known risk factors.
Several factors contribute to the risk of developing schizophrenia.
a. | inherited genetics.
Scientists have long known that schizophrenia sometimes runs in families. However, there are many people who have schizophrenia who don’t have a family member with the disorder and conversely, many people with one or more family members with the disorder who do not develop it themselves. Scientists believe that many different factors may increase the risk of schizophrenia. It’s not currently possible to use genetic data to predict who’ll develop the illness.
b. | different brain structure.
Scientists think an imbalance in the complex activities in the brain might play a role in schizophrenia.
c. | fetal growth dysfunction.
Some experts think problems during a baby’s brain development before birth can lead to schizophrenia. The brain also experiences major changes during puberty that might trigger psychotic symptoms in people who are vulnerable because of their genetics, or who have had brain abnormalities since birth.
d. | environmental factors can involve.
about | protein
Protein is a macro-nutrient that is essential to building + maintaining healthy body tissues in humans and other mammals. It’s commonly found in meat, dairy, beans, and nuts. Proteins are large, complex molecules made-up of amino acids — which are organic compounds made-up of the elements carbon, hydrogen, nitrogen, oxygen, and sometimes sulfur.
Amino acids are the building blocks of proteins, and proteins are the building blocks of our bodies. We take in proteins through the nutritious foods we eat. When we digest proteins, they move throughout the body to be incorporated into our biology. When they assemble into the tissues and fluids we need to live, sometimes accidents happen. Especially inside soft or spongy tissues, proteins can become crushed by fluid pressures caused by illness or injury, or by chemicals they bump into that aren’t meant to be there — such as toxins, alcohol, and industrial substances.
Some scientists believe that proteins can become deformed in body tissues when they encounter infectious pathogens like viruses + bacteria. A deformed protein loses its characteristic healthy shape. When the protein’s shape is broken, it can mechanically cause damage to other tissues.
reading |
govt. office: the National Institutes of Health ~ US
story: About schizophrenia.
read | story
— notes —
govt. = government
NIH = the National Institutes of Health
US = the United States
Hello,
If you’d like us to include an on-topic event, feel free to e-mail us at — readers@KurzweilTech.com
Annual events can be listed permanently + updated.
— library editor
months | autumn
months | winter
months | spring
months | summer
— contents —
~ about
~ film
~ guide | parts 1 to 11
~ companion stories
about | the film
Search On is an original documentary film by Google. It features 11 stories of people around the world using Google tech to solve big problems, answer hard questions, and take action.
And in surprising ways. The film follows people on a quest for better answers — and the magic that happens at the intersection of tech + humanity.
company: Google
film title: Search On
deck: An original documentary.
watch | trailer
presented by
Google | home ~ channel
tag line: We’re organizing the world’s information — making it universally accessible + useful.
Alphabet | home
tag line: We give projects the resources, freedom, and focus — to make ideas happen.
The products by Google featured in this film.
— the film —
guide | parts 1 to 11
An introduction.
11 personal stories are showcased in the documentary film Search On. Each vignette has an illustrated companion story you can read.
visit | the homepage — Search On
part 1. |
title: An eye fit for Liberty.
Watch how a father used YouTube by Google to make his daughter a better prosthetic eye. Discover how Dwayne helped Liberty and other people, as he discovered his purpose in life — and a new vocation. He’s using technology to bring well-being to eye patients.
read | companion story — Eyes
referenced in the film
Ocular Prosthetics | home
tag line: Your specialists for adult + pediatric eyes.
part 2. |
title: Beneath the canopy.
A tribe uses mobile phones + TensorFlow by Google to fight illegal logging in the Amazon. Learn how the Tembé use tech to protect their home. Is it possible to save a rainforest by listening to it? This tribe is pairing old cell phones + machine learning to fight de-forestation.
read | companion story — rainforest
referenced in the film
RainForest Connection | home
tag line: The most impactful way to stop climate change — save rainforests.
part 3. |
title: Riding to remember.
When dementia takes memories away from senior citizens, a stationary bicycle helps bring them back — with a virtual ride down memory lane. Learn how BikeAround utilizes Street View + Maps by Google. Meet the researcher who’s helping patients.
read | companion story — BikeAround
referenced in the film
BikeAround | profile
tag line: The experience bike for body + mind.
part 4. |
title: Daniel and the sea of sound.
A young music lover follows the call of whales, to find his own path. See how music led Daniel to study the ocean with tech. Daniel didn’t know what engineering was when he started community college. Now he’s making breakthroughs, using machine learning to track endangered ocean animals.
read | companion story — soundwaves
referenced in the film
the Monterey Bay Aquarium Research Institute | home ~ channel
tag line: Advancing marine science + engineering to understand our changing ocean.
the Monterey Bay Aquarium | home ~ channel
tag line: Inspiring conservation of the ocean.
banner: Your window to marine life.
part 5. |
title: The agoraphobic traveller.
An agoraphobic artist uses Google Street View to photograph the world. Her anxiety + disability limit her travel, but she’s found another way to see. Read about Jacqui’s travel photography. Meet the artist who’s captured the world in her camera — without leaving home.
read | companion story — agoraphobic traveller
referenced in the film
Stories for Good | home
tag line: We harness the power of story to drive positive change in the world.
part 6. |
title: Living to serve.
A veteran finds a new mission to build a more inclusive world. Learn how Matt engineers assistive tech using Android by Google — with open source tech + a passion for service. Now he’s working to give his autistic son and others a chance to live more independently.
read | companion story — living to serve
referenced in the film
the Human Engineering Research Lab | home ~ channel
tag line: Improving the mobility + function of people with disabilities through engineering.
the Univ. of Pittsburgh | home ~ channel
motto: truth + virtue
part 7. |
title: Between worlds.
A self-taught Native American coder brings her community with her. Learn how Search by Google propelled Robin’s journey. Looking for a way to reconcile childhood on the reservation with her adolescence — she discovered a life of activism, tech, and science. She’s using technology to create opportunity for all.
read | companion story — between worlds
referenced in the film
the American Indian Science + Engineering Society | home ~ channel
tag line: Advancing indigenous people in STEM.
the American Indian College Fund | home ~ channel
tag line: Education is the answer.
Diné College | home ~ channel
tag line: The nation’s college.
banner: Your future starts here.
Unapologetically Indigenous | home
tag line: Entering the digital pow-wow.
part 8. |
title: Positive current.
A teen scientist invents a powered tool to test water for lead contamination using Android by Google. Meet Gitanjali and 5 other bright young women tackling the contaminated water crisis. They’re the women of the water. And they’re raising their voices — and using tech + big data — to lead the charge for clean water. They’re building a brighter future.
read | companion story — clean water
referenced in the film
in3 | home
tag line: We’re providing programs + services for this generation of leaders — and the next.
banner: inclusive + innovation + incubator
the National Society of Black Engineers | home ~ channel
tag line: A legacy of excellence.
— featurette —
part 9. |
title: Pedaling for peace.
Meet the man spreading peace with a bicycle and Translate by Google. Join Dnyan on his mission of friendship. With only a tent, a bike, and Translate — he’s on a 4-year journey to meet + learn from a world of people. Billions of people use Translate to read menus, directions, and street signs in 100+ languages. But at its core, it’s a tool to help people communicate + explore. For Dnyan it’s indispensable — it’s how he connects with people around the globe.
read | companion story — pedaling for peace
referenced in the film
the Ministry of Culture ~ India | home ~ channel
tag line: The preservation and promotion of art + culture in India.
150 Years of Celebrating the Mahatma | home
tag line: The birth anniversary of Mahatma Gandhi.
chapter 10. |
title: The guardians.
Over the course of one year, over 3,500 female motorcyclists used Maps by Google to plan + navigate the world’s largest motorcycle relay. Ride alongside 3 women — from the US, Mexico, and South Africa — as they embark on a journey of connection across borders. Learn about the record-breaking Women Riders World Relay. They’re separated by boundaries and bonded by bikes.
read | companion story — women riders
referenced in the film
Women Riders World Relay | home ~ channel
tag line: unites us + excites us
banner: The world’s largest motorcycle relay.
IMAGE
chapter 11. |
title: Call me blood.
How one woman saves lives with motorbikes, blood banks, and Maps by Google. Learn about LifeBank’s crucial work. See how doctors, dispatch drivers, and blood donors are coming together to save lives across Africa. The LifeBank team uses Google Maps to connect dispatch riders with blood banks and hospitals in need. LifeBank has been able to decrease blood delivery time from 24 hours to less than 45 minutes.
read | companion story — LifeBank
referenced in the film
LifeBank | home ~ channel
tag line: We are digitising the supply chain for health-care in Africa.
banner: We’re safer, faster, and more reliable.
inspiration | by Google
Google created these positive messages to accompany the film’s theme.
— notes —
AI = artificial intelligence
STEM or STEAM = science + tech + engineering + arts + math
US = United States
WRWR = Women Riders World Relay
image | above
In this photo you can see the doughnut-shaped interior of the nuclear fusion reactor at the JET Fusion test lab.
— contents —
~ story
~ featurette
~ quotes
~ watching
~ reading
— story —
Researchers have exceeded the previous record for generating energy from a nuclear fusion reaction. It's a big step toward solving the world's energy crisis. Nuclear fusion is called the holy grail of energy production — because it could lead to a virtually unlimited source of safe + sustainable power.
The nuclear test happened at the world’s most powerful fusion plant — the JET Fusion facility in the UK. The record-breaking nuclear fusion reaction had a temperature of more than 150 million degrees Celsius, 10 times hotter than the heart of the sun. The research team explained that the breakthrough is a landmark for this technology, and a key step toward developing practical nuclear fusion.
The difference between fusion + fission.
Most nuclear reactors use fission — that’s when big unstable atoms like uranium are split in two, releasing lots of energy + radiation. Fission is the current technique used around the globe in nuclear power plants. But fusion is different. It involves forcing atoms of hydrogen together — fusing them to create: helium, lots of energy, and just a tiny bit of short-lived radiation.
Using fusion to create mini-suns inside reactors like this is one of the greatest tech challenges humanity has ever faced. It holds the potential for producing almost unlimited supplies of energy, forever. The essential part of the test fusion reaction only lasted for 5 seconds — and only generated enough power to boil about 60 kettles of water — 11 mega-watts of power. But it's an important proof of the science of fusion. The power output was more than double what was achieved in similar tests in year 1997.
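The kettle figure is simple arithmetic: energy = power × time. A quick sanity check in Python, assuming roughly 1 megajoule to boil a full ~3 liter kettle (the kettle size is my assumption, not a figure from the JET team):

```python
# Sanity check: 11 mega-watts sustained for 5 seconds, expressed in kettles.
power_watts = 11e6        # 11 mega-watts
duration_s = 5            # sustained for 5 seconds
energy_joules = power_watts * duration_s     # E = P * t = 55 MJ

# Assumed: a full ~3 liter kettle heated from 20 C to boiling (100 C).
# Q = m * c * dT, with water's specific heat c = 4186 J/(kg*C)
kettle_joules = 3.0 * 4186 * 80              # ~1.0 MJ per kettle

kettles = energy_joules / kettle_joules
print(round(kettles))     # roughly 55 -- consistent with "about 60 kettles"
```

The result lands near the story's "about 60 kettles," which suggests the quoted figure assumes a large, full kettle.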
How it was achieved.
Temperatures to produce nuclear fusion need to be extremely high — above 100 million degrees Celsius. No materials exist that can withstand direct contact with such heat. So, to achieve fusion in a lab, scientists invented a solution — super-heated gas (called plasma) is held inside a doughnut-shaped magnetic field. In geometry this doughnut shape is called a torus. And objects that have a torus shape are called toroidal. The plasma used in the JET Fusion reactor is hotter than anything in our solar system.
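For reference, a torus's volume follows directly from its two radii: the major radius R (center of the doughnut to the center of the tube) and the minor radius r (the tube itself). The radii below are illustrative values, not JET's actual dimensions — the real vessel is D-shaped rather than a perfect torus:

```python
import math

def torus_volume(R, r):
    """Volume of a torus: V = 2 * pi^2 * R * r^2.
    (Pappus's theorem: the circular cross-section pi*r^2 swept
    along a circle of circumference 2*pi*R.)"""
    return 2 * math.pi**2 * R * r**2

# Illustrative radii only, chosen to land near a vessel of ~80 cubic meters.
print(round(torus_volume(3.0, 1.2), 1))   # ~85 cubic meters
```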
Future lab tests will make the plasma from a mix of two forms — called isotopes — of hydrogen. These hydrogen isotopes are called deuterium and tritium. The JET Fusion scientists successfully engineered a lining for the 80 cubic meter toroidal vessel enclosing the magnetic field — a lining that functions efficiently with these isotopes.
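The energy released when deuterium + tritium fuse into helium (plus a neutron) can be estimated from the mass deficit via E = mc². Using standard atomic masses:

```python
# Energy released per deuterium-tritium fusion: D + T -> He-4 + neutron
# The products weigh slightly less than the reactants; that mass
# difference becomes energy (E = m * c^2), expressed here via the
# energy equivalent of the atomic mass unit.
U_TO_MEV = 931.494        # MeV per atomic mass unit

m_deuterium = 2.014102    # masses in atomic mass units (u)
m_tritium   = 3.016049
m_helium4   = 4.002602
m_neutron   = 1.008665

mass_deficit = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
energy_mev = mass_deficit * U_TO_MEV
print(round(energy_mev, 1))   # ~17.6 MeV released per reaction
```

Per kilogram of fuel, that is millions of times the energy of a chemical reaction — which is why such a short pulse can still release megajoules.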
For its experiments in year 1997 — the team used carbon. But carbon absorbs tritium, which is radioactive. So for the recent tests, they constructed new walls for the vessel. The walls are made with the metals beryllium and tungsten. These are 10 times less absorbent. Then the JET Fusion team tuned their plasma to function effectively in this new environment.
— featurette —
This is an animated 3D fly-through tour of the outside + inside of the actual fusion reactor.
On the horizon.
The current test supports the design of an even bigger fusion reactor being constructed in France. The JET Fusion reactor can't run for longer than a few seconds at a time, because its copper electro-magnets get too hot. New tests will use internally cooled super-conducting magnets. Nuclear fusion reactions in labs famously consume more energy to launch than they can output — this is called an energy deficit. For example, the JET Fusion lab used two 500 mega-watt flywheels just to power the experiment.
But scientists believe this deficit can be overcome. New toroidal vessels are being developed — with a volume that’s 10 times bigger than JET Fusion’s vessel. Ultimately the goal is to build a commercial power plant that can produce more energy than it consumes to operate — so the excess energy can be fed into electric power grids.
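The deficit is usually expressed as the fusion gain factor Q: fusion power out divided by heating power in, where Q = 1 is break-even. A minimal sketch; the input/output numbers below are illustrative assumptions, not official JET figures:

```python
def fusion_gain(power_out_mw, power_in_mw):
    """Fusion gain Q: ratio of fusion power produced to heating power
    injected. Q < 1 is an energy deficit; Q = 1 is break-even; a
    commercial plant needs Q well above 1 to cover conversion losses
    and still feed surplus electricity into the grid."""
    return power_out_mw / power_in_mw

# Illustrative values only (assumed for the example, not measured data):
q = fusion_gain(power_out_mw=11, power_in_mw=33)
print(round(q, 2))   # ~0.33 -- an energy deficit: more power in than out
```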
There’s still much research + development to do. Possible power plants of the future — based on nuclear fusion instead of nuclear fission — would produce no greenhouse gases and only a small amount of short-lived radioactive waste.
About the lab.
The JET Fusion facility is located at the Culham Centre for Fusion Energy — managed by the UK Atomic Energy Authority. International nuclear scientists collaborate at the research center as part of the EURO Fusion consortium — made up of 4,800 experts.
The lab was designed to study nuclear fusion in conditions similar to the specs needed for a real power plant. It's the only lab that can operate with the deuterium-tritium fuel mix planned for commercial fusion energy. This facility achieves major advances in the science + engineering of fusion. Its success led to the construction of the first commercial-scale nuclear fusion machine — and its scientists have proven the design for future fusion power plants.
image | above
In this photo you can see the towers of a typical nuclear fission facility that produces electricity — called a power plant. Today there are approx. 440 of these nuclear power reactors operating globally — providing approx. 10 percent of the world’s electricity.
The hope is that recent nuclear fusion tests will yield practical techniques to harness this energy — paving the way for sustainable fusion power plants of tomorrow. Our current world-wide nuclear energy reactors operate using fission.
These older systems are less efficient, and produce toxic by-products. If fission power plants overheat, they spell disaster for human health + habitat. They also produce an enormous amount of radioactive waste that's getting more + more difficult to dispose of — since the radiation decays so slowly that it continues to pose biological + environmental risks long after it's passed its usefulness in the reactor.
image | below
A photo of high voltage power lines that distribute electricity in wide-ranging grids.
quotes
1. |
name: by Joe Milnes PhD
bio: Head of Operations | JET Fusion
What's really significant about what we demonstrated inside JET is this: we can create a mini-sun, the right kind of mini-sun, hold it there for a sustained period — and get really good performance levels.
That’s a major step forward in our quest to achieve fusion power plants. I do think we’ll see commercial fusion in our lifetime. Why is it taking so long?
It’s really hard and very complex. But it’s worth it — we have to do it for the future.
— Joe Milnes PhD
2. |
name: by Roger Harrabin
bio: Energy + Environment Analyst | BBC
There’s huge uncertainty about when fusion power will be ready for commercialization. One estimate suggests maybe 20 years.
Then fusion would need to scale-up. That would mean a delay of another few decades.
— Roger Harrabin
3. |
name: by Arthur Turrell PhD
bio: Deputy Director for Research + Economics | the Office for National Statistics ~ UK
web: home
This is a stunning result because they managed to demonstrate the greatest amount of energy output from the fusion reactions of any device in history. The test demonstrated stability of the plasma for over 5 seconds.
That doesn’t sound very long. But on a nuclear time-scale it’s a very long time. And it’s possible then to go from 5 seconds, to 5 minutes, to 5 hours — or even longer.
— Arthur Turrell PhD
watching
1. |
blog: Kurzgesagt
featurette title: Fusion power explained
watch | featurette
presented by
Kurzgesagt | home ~ channel
tag line: In a nutshell.
2. |
institution: Fusion for Energy
featurette title: About nuclear fusion
watch | featurette
presented by
Fusion for Energy | home ~ channel
tag line: Bringing the power of the sun to Earth.
3. |
broadcast: CNBC
featurette title: Is nuclear fusion the answer to clean energy?
watch | featurette
presented by
CNBC | home ~ channel ~ line-up
tag line: First in business world-wide.
NBCUniversal | home ~ brands ~ corporate
tag line: One of the world’s leading media + entertainment companies.
webpages
JET Fusion | home ~ channel
tag line: Tackling the main challenges of putting fusion electricity on the grid.
Euro Fusion | home ~ channel
tag line: Realising fusion electricity.
Fusion for Energy | home ~ channel
tag line: Bringing the power of the sun to Earth.
reading
1. |
publication: Reuters
story title: European scientists set nuclear fusion energy record
read | story
presented by
Reuters | home ~ channel
tag line: Breaking international news + views.
Thomson Reuters | home ~ channel
tag line: Let’s shape tomorrow together.
2. |
institution: the American Enterprise Institute
blog: AEIdeas
story title: The coming fusion revolution
deck: My long-read q + a with Arthur Turrell PhD.
read | story
presented by
the American Enterprise Institute | home ~ channel
tag line: A competition of ideas is fundamental to a free society.
AEIdeas | home
tag line: Public policy commentary from the American Enterprise Institute.
— notes —
BBC = the British Broadcasting corp.
NBC = the National Broadcasting co.
JET Fusion = Joint European Torus Fusion
ONS = the Office for National Statistics | UK
UK = United Kingdom
co. = company
corp. = corporation
3D = 3-dimensional
— contents —
~ story
~ quote
~ featurette
~ reading
— story —
A team of computer researchers developed an AI software program with social skills — called S Sharp — that out-performed humans in its ability to co-operate. This was tested through a series of games between humans and the artificial intelligence software. The tests paired people with S Sharp in a variety of social scenarios.
Building AI that co-operates with us at human-level.
One of the games humans played against the software is called the prisoner's dilemma. This classic game shows how two rational people might not co-operate — even when it appears to be in both their best interests to work together. The other challenge was a sophisticated block-sharing game.
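The dilemma can be made concrete with a standard payoff table. Using the conventional illustrative values below, defecting is each player's individually rational move, yet mutual co-operation pays both players more than mutual defection:

```python
# Standard prisoner's dilemma payoffs as (my points, their points).
# C = co-operate, D = defect. Conventional illustrative values.
PAYOFF = {
    ("C", "C"): (3, 3),   # mutual co-operation: both do well
    ("C", "D"): (0, 5),   # I co-operate, they defect: I'm exploited
    ("D", "C"): (5, 0),   # I defect, they co-operate: I exploit them
    ("D", "D"): (1, 1),   # mutual defection: both do poorly
}

def best_response(their_move):
    """Whatever the other player does, defecting scores more for me."""
    return max(["C", "D"], key=lambda my: PAYOFF[(my, their_move)][0])

# Defection dominates for a lone rational player...
print(best_response("C"), best_response("D"))            # D D
# ...yet both players would earn more by co-operating:
print(PAYOFF[("C", "C")][0] > PAYOFF[("D", "D")][0])     # True
```

That gap between the individually rational move and the mutually beneficial one is exactly what a co-operating AI has to learn to bridge.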
In most cases, the S Sharp software out-performed humans in finding compromises that benefit both parties. To see the experiment in action, watch the featurette below. This project was helmed by two well-known computer scientists: Jacob Crandall PhD and Iyad Rahwan PhD.
The researchers tested humans and the AI in 3 types of game interactions:
— quote —
name: by Jacob Crandall PhD
bio: computer scientist
bio: teacher | Brigham Young Univ.
web: profile
Computers can now beat the best human minds in the most intellectually challenging games — like chess. They can also perform tasks that are difficult for adult humans to learn — like driving cars. Yet autonomous machines have difficulty learning to co-operate. That’s something even young children do.
Human co-operation appears easy. But it’s very difficult to emulate because it relies on cultural norms, deeply rooted instincts, and social mechanisms — that express disapproval of non-collaborative behavior.
Such common sense mechanisms aren’t easily built into machines. The same AI software programs that effectively play the board games of chess + checkers, Atari video games, and the card game of poker — often fail to consistently co-operate when it’s necessary.
Other AI software programs often take 100s of rounds of experience to learn to collaborate with each other, if they do at all. Can we build computers that co-operate with humans — the way humans do with each other? Building on decades of research in AI, we built a new software program that learns to collaborate with other machines, simply by trying to maximize its own pay-off.
We did experiments that paired the AI with people in various social scenarios — including a prisoner's dilemma challenge and a sophisticated block-sharing game. The program consistently learned to co-operate with another computer — but at first it didn't co-operate with people. Then again, people didn't co-operate much with each other either.
As we all know, humans can collaborate better if they can communicate their intentions through words + body language. So we gave our AI a way to listen to people, and talk to them.
This lets the AI play in previously unanticipated scenarios. The resulting algorithm achieved our goal. It consistently learns to co-operate with people as well as people do. Our results show that 2 computers make a much better team — better than 2 humans, and better than a human + a computer.
But the program isn't a blind collaborator. The AI can get pretty angry if people don't behave well. The pioneering computer scientist Alan Turing PhD believed machines could potentially demonstrate human-like intelligence. Since then, AI has been regularly portrayed as a threat to humanity or to human jobs.
To protect people, programmers have tried to code AI to follow legal + ethical principles — like the 3 laws of robotics written by Isaac Asimov PhD. Our research shows that a new path is possible.
Machines designed to selfishly maximize their pay-offs can — and should — make an autonomous choice to co-operate with humans across a wide range of situations. Two humans — if they’re honest with each other + loyal — would do as well as 2 machines. About half of the humans lied at some point. So the AI is learning that moral characteristics are better — since it’s programmed not to lie. And it also learns to maintain co-operation once it emerges.
We need to understand the math behind collaborating with people. What attributes does AI need so it can develop social skills? AI must be able to respond to us — and articulate what it's doing. It must interact with other people.
This research could help humans with their relationships. In society, relationships break-down all the time. AI is often better than humans at reaching compromise — so it could teach us how to get-along.
— Jacob Crandall PhD
featurette
institution: the Institute for Advanced Study in Toulouse
featurette title: Unlocking robot-human co-operation
watch | featurette
presented by
the Institute for Advanced Study in Toulouse | home ~ channel
tag line: Knowledge across frontiers.
banner: A unified scientific project studying human behavior.
webpages
1. |
name: Iyad Rahwan PhD
web: home
profile | the Massachusetts Institute of Technology
profile | the Max Planck Institute for Human Development
2. |
name: Jacob Crandall PhD
web: home
profile | Brigham Young Univ.
reading
1. |
publication: Nature
paper title: Co-operating with machines
read | paper
presented by
Nature | home ~ channel
Springer | home ~ channel
— notes —
AI = artificial intelligence
S# = S Sharp software
MIT = the Massachusetts Institute of Technology
BYU = Brigham Young Univ.
IAST = the Institute for Advanced Study in Toulouse | France
univ. = university
— file —
box 1: stories on progress
box 2:
post title: AI software with social skills teaches humans to collaborate
deck: Unlocking human-computer co-operation.
collection: the Kurzweil Library
tab: stories on progress
image | above
This mushy + agile mechanical squid is the future of robotics — a completely soft machine that will flex + grip just like biological muscle.
With their fine motor skills, soft robots can handle complex tasks that require mobility, delicacy, precision, and safety. Synthetic muscle prototypes in the lab today give these robots extreme weight-lifting strength.
credit: Queen Mary Univ. of London
— contents —
~ story
~ quote
~ featurette
— story —
Researchers at Columbia Univ. built a 3D printable, synthetic soft muscle that can mimic nature's biology — lifting 1,000 times its own weight. The artificial muscle is 3 times stronger than natural muscle — and can push, pull, bend, twist, and lift weighty objects. The breakthrough enables a new generation of completely soft robots.
Today's mechanisms that move robotics — called actuators — are bulky pneumatic (gas pressure) or hydraulic (fluid pressure) inflation systems made of elastomer skins — skins that expand when air or liquid is pushed into them. But those require external compressors + pressure-regulating equipment.
The team is led by award-winning roboticist Hod Lipson PhD — from the Creative Machines Lab at Columbia Univ.
image | above
In the experiment above, the synthetic soft muscle expands + deflates — just like a real biological muscle. This prototype is a breakthrough for building soft robots.
credit: Columbia Univ.
books | by Hod Lipson PhD
1. |
book title: Fabricated
deck: The new world of 3D printing.
visit | book
2. |
book title: Driverless
deck: Intelligent cars and the road ahead.
visit | book
image | above
A portrait of Hod Lipson PhD.
Replicating natural motion.
Inspired by living organisms, robotics made of soft materials will be essential in fields where robots must interact physically with people — such as: manufacturing, emergency services, customer assistance, in-home aid, and health care. Unlike rigid robots, soft robots can replicate natural motion — grasping + manipulation — to provide medical help, perform delicate tasks, or pick-up soft objects.
The soft muscle recipe.
Researchers used a silicone rubber matrix with ethanol (alcohol) distributed throughout in micro-bubbles. This design combines elastic properties + extreme volume change abilities — and it’s easy to fabricate, low cost, and made of environmentally safe materials.
The soft, composite material is 3D printed into its final shape. When it’s heated, the ethanol in the material boils and the pressure inside those micro-bubbles grows — forcing the elastic silicone elastomer matrix to also expand.
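The expansion mechanism can be sketched with the ideal gas law: once ethanol in a micro-bubble vaporizes (it boils near 78 °C), further heating raises the vapor's pressure at fixed volume, or its volume at fixed pressure. A minimal sketch with illustrative temperatures, not the paper's measured values:

```python
# Sketch of the actuation principle: ethanol boils at ~78 C, and the
# vapor in each micro-bubble then behaves roughly like an ideal gas
# (PV = nRT), so heating drives expansion of the silicone matrix.
# Temperatures below are illustrative only.

ETHANOL_BOILING_C = 78.4

def vapor_expansion_ratio(t_start_c, t_end_c):
    """At constant pressure, an ideal gas's volume scales with absolute
    temperature: V2 / V1 = T2 / T1 (temperatures in kelvin)."""
    return (t_end_c + 273.15) / (t_start_c + 273.15)

# Heating the vapor from just above boiling to 130 C gives only a modest
# gas-law expansion -- most of the bulk volume change comes from more
# liquid ethanol flashing to vapor as heat is added.
print(round(vapor_expansion_ratio(80, 130), 3))
```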
The researchers next plan to accelerate the muscle’s response time + increase its shelf life. They also plan to use artificial intelligence software that will learn to control the muscle — as a final milestone in replicating natural human motion.
The synthetic muscle was tested in a variety of robotic applications. Because it’s elastic, it was able to expand + contract up to 900%. The new material can withstand strain 15 times larger than biological muscle. The team said these abilities will enable new kinds of soft robots.
quote
name: by Hod Lipson PhD
bio: mechanical engineer
bio: teacher | Columbia Univ.
web: profile ~ home
We’ve had great strides in making robot software — but robot bodies are still primitive. This is a big piece of the puzzle. Just like biology, the new actuator can be shaped + re-shaped 1,000s of ways. We’ve overcome one of the final barriers to making life-like robots.
— Hod Lipson PhD
Science details for building soft actuators.
Inspired by natural muscle, a key challenge in soft robotics is developing:
Existing robotic actuators have limits:
featurette
school: Columbia Univ.
featurette title: Synthetic muscle for soft life-like robotics
watch | featurette
webpages
the Creative Machines Lab | home
tag line: Robots that create + are creative.
presented by
Columbia Univ. | home ~ channel
motto: In Thy light shall we see light.
— notes —
3D = 3-dimensional
AI = artificial intelligence
univ. = university
]]>— event —
event title: TBD Conference
theme: hope/expectation of the future.
web: home
season: spring
date: March 31
year: 2022
where: Global Live Stream
event website | visit
presented by: Paul Armstrong, CEO, HERE/FORTH
for our readers: 33% discount off ticket price: | LINK
— summary —
A global pandemic still raging, the great reset, the great resignation, a changing political landscape, restless new superpowers, and Marvel still pushing out its films: the world has never seen this level of uncertainty before. We need a plan, a way out, and a clear view of where we're aiming.
Now in its fourth year, TBD (Technology. Behaviour. Data.) Espérance (pron: 'EHSP-ERAHNS') is the only one-day conference specifically designed to help you focus, reframe, and plan for what the foreseeable future (and a little bit further) brings your way, with this year's focus on hope and expectation. As the world deals with variants, new political currents, and a changing consumer, a sensible head is what's needed. Resilience ('Mollitiam') was last year's theme; now it's time to plan and grow.
— speakers include —
— the agenda —
The final program for TBD is never revealed until the day of the conference in order to make it most useful for attendees. Speakers and themes, however, are disclosed. Expect big ideas and thought-provoking talks at ‘Espérance’ that will push you, your business and your thinking forward.
— contents —
~ letter
~ about | the book
~ about | the author
~ book excerpts | feat. Ray Kurzweil
~ reading
— letter —
Dear readers,
I’m happy to recommend this new book by best-selling non-fiction author — and my friend — Chip Walter. The book is titled Immortality, Inc. He’s dedicated several years to investigating the global efforts to end human aging + disease. I fully believe in this multi-pronged research that crosses science, technology, policy, ethics, finance, and futurism.
This will be humanity’s next great step forward — the inevitable, necessary evolution of biology. It’s time for society to rise-up against the defeatist concept that disease, aging, and death are part of life. We can’t accept anything less than life-long health + immortality. I believe that today’s steady stream of new knowledge in medicine + physics will join hands with advanced medical tech — to finally win this battle.
This decade will see progress in wide-ranging fields that affect health + life-span:
The next step on the map to immortality: is to live long enough — stay healthy enough — to get to this fast-approaching time in history when we end aging + disease. Chip Walter’s book is an excellent primer — a tour of the people, places, know-how, and ideas that will conquer death. I hope you’ll be intrigued by his well-researched book.
Below, I've included the dictionary definition of a watershed moment. I believe we have to look toward the future with the understanding that 1,000s of tiny steps in progress can quickly add-up to a sudden, tremendous breakthrough — enabling a huge leap for humanity.
I hope you enjoy reading the many excerpts from the book, below.
— Ray Kurzweil
by definition | watershed
wa — ter — shed — noun
— A period in history marking a turning-point in a course of action, in a state of affairs, or in a range of possibility.
— An event marking a unique or important historical change of course, on which important developments depend.
— A tipping-point.
idiom: A watershed moment.
title: Immortality, Inc.
deck: Renegade science, Silicon Valley billions, and the quest to live forever.
author: by Chip Walter
date: 2020
This book is available at fine book-sellers.
Amazon | Barnes + Noble | Books-a-Million | IndieBound
about | the book
The book Immortality, Inc. — by acclaimed science journalist Chip Walter — explores today’s scientific pursuit of immortality with exclusive visits inside labs. Plus in-depth interviews with visionaries who believe we’ll soon crack the aging process + cure death.
Billionaires are betting their fortunes on advances to prove aging is unnecessary + death is a disability that can be cured. Researchers are delving into the mysteries of biology to keep those processes from happening. The author weaves-in fascinating conversations about the future of humanity.
The book features interviews with notable anti-aging champions.
about | the author
Chip Walter is a best-selling non-fiction author, journalist, and documentary film-maker. He’s also a National Geographic fellow and former bureau chief at CNN.
He’s written 5 mainstream non-fiction books on science + nature — available at all fine book-sellers.
name: Chip Walter
bio: non-fiction author + journalist
web: home ~ books ~ channel
image | above
A portrait of Chip Walter.
an EXCERPT
from the book
section: epilogue
1. |
Any problem can be solved.
Ray Kurzweil is an inventor, entrepreneur, futurist, author — and a director of engineering at Google. I was sitting with Ray one afternoon when I asked him how he felt about the passage of time — and the idea of running out of it.
He didn’t like to talk about it, but in year 2008 he made a pilgrimage to the hospital to have his mitral heart valve repaired. It was a life-long, genetic shortcoming — and it needed to be taken care of. A leaky heart valve is never a recipe for immortality.
The procedure didn’t require new valves from pigs, only some suturing. So when I brought up the question of mortality, he just grinned. He said: ‘I worry about that a lot less, now that I know I’m not going to die.’
And why should he think otherwise? The ‘bridges’ were advancing, nano-tech was evolving, and artificially intelligent algorithms were undoing the mysteries of humanity’s demise every day. For Kurzweil it was only a matter of time, and time’s acceleration.
After all these years, he had accomplished the promise he’d made during his childhood. He changed the world — with his inventions and his ideas. Others may have explored the notion of living forever, but no one has driven the message into the mainstream with his fervor.
And no one hammers away harder at the importance of exponential growth than Kurzweil — science driven by the irresistible fusion of human + artificial intelligence. Kurzweil believes that any problem can be solved. Even the problem that — so far — has killed every living thing on Earth.
an EXCERPT
from the book
chapter: no. 26
title: The seed of the singularity.
1. |
The science to solve aging.
If the science necessary to solve aging was going to go anywhere, one last remarkable and ironic piece of the longevity puzzle would have to fall into place. Smart machines would need to arise in defense of the human race. Already machine learning was embedding itself in the medical arts, and digital technology had long ago become science’s handmaiden. Craig Venter’s work with the Human Genome Project had marked a milestone. But now — as the search for immortality deepened — much more digital muscle would be necessary.
Art Levinson himself had put the facts concisely: When it came to flipping the genetic switches needed to evade aging, there was no way any human at a lab bench — no matter how gifted, how insightful, or how hardworking — could possibly locate and comprehend their magical pathways. And without that, curing the Ultimate Problem was simply not going to happen.
Homo sapiens required a tool that was faster, smarter, and more tireless than humans themselves. The kind that Riccardo Sabatini used in the Face Project made a good example. ‘Machine learning’ was one term that Sabatini and other computer scientists used to describe this brand of work. But there was another more common name that nearly everyone had heard of — artificial intelligence (AI).
AI is different from other forms of computer code. It consists of legions of algorithms that are eerily similar to the human mind itself. AI can learn to solve problems without being explicitly told what to do ahead of time. It can — in some ways — think for itself, at high speed. It’s the stuff of countless dystopian futures. Ironic, then, that such capabilities should now emerge as our saviors.
It's doubly ironic that those same tools have been the source of so much of Silicon Valley's wealth. It's almost as if evolution had anointed the Valley — with all its computing power and money — as the chosen instrument for curing death. In the machines and their algorithms, this created a symbiosis: digits, molecules, biology, tech — coming together in a strange and unexpected harmony.
2. |
Intelligent machines can solve problems.
Ray Kurzweil could have told you this was going to happen. More than 50 years ago, when he was 14 years old, he wrote a paper that outlined how a machine might somehow become as intelligent as a human. He hadn’t divined a direct connection between artificial intelligence and longevity just yet, but he always fervently believed that truly intelligent machines could solve nearly any problem.
The essentials of that thinking hadn’t really changed since Kurzweil’s paper. In fact, he used much of it as the basis for his best-selling 2012 book How to Create a Mind. The book argued that human-level intelligence could be created in computers by reverse-engineering the human brain. Figure out how the neo-cortex worked, employ pioneering software and hardware to do the same in a computer — and voilà — a fully human-like but entirely artificial machine.
Just after the book came out, Larry Page suggested that Kurzweil join Google to “bring natural language understanding” to the company — figure out how computers might someday talk and communicate like humans. Initially Kurzweil only planned to ask Page if Google — or Bill Maris at Google Ventures — might like to invest in the business he wanted to create based on the book. Instead, Page said just come into the Google fold.
This way, Kurzweil could work with the canny computer scientists at Google and tap into its bountiful digital resources — not to mention free office space and all the hardware and software cycles a big thinker could ask for. So in December 2012, Kurzweil — for the first time in his life — joined a company that didn’t have his own name on the corporate logo. But that was ok. The dream of creating something as remarkable as a virtual mind — the holy grail of AI — was deep in the man’s DNA. If he had to become an employee to solve the world’s problems, including death, he could live with that.
The team’s first endeavor under Kurzweil’s tutelage as a director of engineering was to create machine-learning algorithms that could understand users’ e-mails — and then provide short, but sensible answers, all on their own. After 5 years of work with a group of 35 scientists, team Kurzweil created its first Google product: Smart Reply.
Launched in 2017, the initial version listened to the e-mail you received, and then Smart Reply provided 3 possible answers — short responses like “Let’s do Monday.” Smart Reply wasn’t going to solve aging — not immediately.
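Smart Reply's real models are neural networks, but the product's shape (rank a set of candidate responses against an incoming message and surface the top 3) can be sketched with a toy scorer. Everything below is a hypothetical illustration, not Google's implementation:

```python
# Toy sketch of the Smart Reply *product shape*: score canned candidate
# replies against an incoming e-mail and surface the top 3.
# The real system uses learned neural models; this word-overlap scorer
# is purely illustrative.
import re

CANDIDATES = [
    "Let's do Monday.",
    "Sounds good to me.",
    "Sorry, I can't make it.",
    "Can we push to next week?",
    "Thanks for the update.",
]

def tokens(text):
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def top_replies(email_text, k=3):
    """Rank candidate replies by word overlap with the e-mail."""
    email_words = tokens(email_text)
    return sorted(CANDIDATES,
                  key=lambda reply: len(email_words & tokens(reply)),
                  reverse=True)[:k]

print(top_replies("Can you meet Monday to review the update?"))
```

Swapping the overlap scorer for a trained language model, while keeping this candidate-ranking shape, is the essential leap the real product made.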
But in Kurzweil’s view, it made a good first example of artificially intelligent software comprehending a human thought and then providing a response that made sense. On the surface, it might appear trivial, but it really wasn’t. And in the end it would lead to life everlasting. How?
Building on Smart Reply, Kurzweil planned to ratchet-up his project to the point where machines could — on the fly and in context — speak as fluently in any language as he could. The new version would be able to pull all the right words, in all the right order, out of thin air — and carry on an entirely sensible, human-like conversation.
Once that was possible, he figured the machines would be pretty much as smart as we are — which also made them the seed of the singularity he felt would arrive in the mid-21st century.
3. |
A more powerful version of the human body + mind.
The seminal concept behind Ray Kurzweil’s work was something he called “intelligent pattern recognizers” — layers and layers of them that reside in the brain. In his view, these modules are what made the Homo sapiens neo-cortex — the most recently evolved sector of the human brain — such a ringing success. Kurzweil estimated the cerebral cortex houses about 300 million of them, each consisting of clusters of neurons.
Placed in context, he held that these modules rapidly bootstrap simple concepts in an increasingly complex human hierarchy that — layer-by-layer — deliver remarkable insights like art, math, and language. The modules manage this by quickly identifying a few low-level cues, then sensibly pulling in more modules — to generate still more bootstrapped knowledge.
For example, a module that sees the visual image of a horizontal bar, and then sees 2 sides of a pyramid, would — in the context of a sentence — immediately recognize it as the letter A. Other related modules would see additional letters related to A, to piece together the word 'apple' — rather than 'pear.'
More modules would attach additional words and then tastes: maybe the smell of pies, memories, a location that the modules then figure out is a kitchen — until the next thing you know, you’re craving a piece of your grandmother’s apple pie. This might trigger other thoughts, feelings, memories, and insights. All of this happens in a blink — powered by the brain’s 100 billion inter-connected neurons.
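The apple example can be caricatured in code: low-level modules map features to letters, and a higher layer maps letter sequences to words, each layer feeding the next. A deliberately toy sketch of the layered-hierarchy idea, not Kurzweil's actual model:

```python
# Toy hierarchy of "pattern recognizers": each layer maps patterns it
# knows into a higher-level symbol, which feeds the next layer up.
# Purely illustrative of the layered-bootstrapping idea.

# Layer 1: visual strokes -> letters
STROKES_TO_LETTER = {
    ("left-slant", "right-slant", "crossbar"): "A",
    ("vertical", "two-bumps"): "B",
}

# Layer 2: letter sequences -> words
LETTERS_TO_WORD = {
    ("A", "P", "P", "L", "E"): "apple",
    ("P", "E", "A", "R"): "pear",
}

def recognize_letter(strokes):
    return STROKES_TO_LETTER.get(tuple(strokes))

def recognize_word(letters):
    return LETTERS_TO_WORD.get(tuple(letters))

letter = recognize_letter(["left-slant", "right-slant", "crossbar"])
print(letter)                                         # A
print(recognize_word([letter, "P", "P", "L", "E"]))   # apple
```

In the brain's version, of course, the layers number in the dozens, the modules in the hundreds of millions, and the mappings are learned rather than hand-coded.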
This might seem a simple example, and a long way from Shakespeare’s: “Tomorrow and tomorrow and tomorrow — creeps in this petty pace from day to day” — a line from his classic play Macbeth. But the point was that this network of inter-woven, highly flexible modules was the wellspring of human intelligence. And Kurzweil’s goal was to develop the artificially intelligent software that would reverse-engineer this unique human trait. If such an advance were possible, it might not be immediately obvious how AI would lead to immortality.
But to Kurzweil, it was all of a piece. With the advent of AI, he foresaw the evolution of a newer, more powerful version of the human body + mind. One that wasn’t strictly biological, but instead employed nano-tech — cell-size nano-bots that could clean arteries, strengthen muscle, and boost organs. While simultaneously allowing the brains of mere mortals to access the vast cerebral spaces of the cloud.
But not in the way we do now, with clunky phones and iPads from Apple — instead with invisible, cell-size machines injected like serums into the cerebral cortex. Essentially becoming enhanced, artificial brain cells — something I found myself calling neuro-bots.
4. |
Physically invincible, augmented people.
Ray Kurzweil predicts that within decades millions of people will be physically invincible, supplied with trillions of neuro-bots capable of linking directly to the ubiquitous cloud. Anyone thus augmented will not require stem cell rejuvenation, or even revamped genetics. They wouldn’t need to ask Google questions — the answer would simply be there, available like every other memory. The average human would not watch a movie. She’d be immersed, imagining it more completely than our own recollections currently do.
One would not hum a song. The music would come into the mind, full-blown, in the highest possible fidelity. In a blink, neuro-bots will even be able to shift your reality from wherever your body is currently located into any other place you might like: Kathmandu in Nepal, ancient Rome, or a beach in the Seychelles. Complete with warm sun and crystal clear water lapping your toes, every sensation as real as real. It wouldn’t be real, but it would feel that way — thanks to a seamless, sensory melding of the neuro-bots re-arranging the chemistry in your brain.
Best of all, this new human hybrid could be digitally backed-up and then downloaded — to create a cloned copy containing all of the information in your mind + body. So that even if your “self” suddenly died, you’d have a perfect back-up available to resume life, as if nothing at all had happened. True immortality that would, once and for all, absolutely obliterate Gompertz’s beta.
Kurzweil considered this a 4th and final bridge. With it, his ultimate view of everlasting life would at last emerge at a time + place that didn’t simply upgrade old-fashioned biology — the kind Calico and Human Longevity were working on — but upgraded it with nano-tech that made you immortal and incredibly intelligent, almost god-like.
Of course, Kurzweil would never use the term god-like. To him, entwining humans and machines so thoroughly that they became indistinguishable was simply the next natural course of human evolution.
One might feel that Kurzweil’s bridge 4 thinking was just a touch outside the views of the average Homo sapiens. Some, however, felt it was a very real threat. Elon Musk and — prior to his death in 2018 — Stephen Hawking had warned that super-intelligent AIs could take over the planet. Partly thanks to the work Musk’s friend Larry Page was supporting. Musk said in July 2017: “I have exposure to very cutting-edge AI. And I think people should be really concerned about it.”
Hawking had written in an open letter with Musk — and a few dozen AI experts — that the emergence of AI would lead to creatures so smart and swift they’d leave us looking like the cerebral equivalent of an amoeba. He said it could become the “worst event in the history of our civilization.”
Remarks like these aggravated the Kurzweilian brain. More and more, he grew peevish with media cries that repeatedly told the world that in no time we’d all be living in a dystopian future — where our machine overlords transformed Siri into something menacing, straight out of George Orwell’s novel 1984. But look how tech had advanced the human race.
Despite the horrors of the last century, the rate of death caused by war over the past 600 years had dropped several hundred fold. Murder rates were rapidly declining. FBI statistics showed that — between 1993 and 2015 — the US murder rate had plummeted 50%. The same was true of property crime. In spite of media reports of our collective demise, Kurzweil believed the world was a better, safer, happier, and smarter place — thanks to the advances that the keepers of science and innovation made possible.
For Kurzweil, the smart thing was to let tech march ahead — Tom Swift style — because that was where we were headed. Yes, one had to be vigilant and control the power of smart machines. He had been saying that for years. But no need to hit the panic button.
Machines wouldn’t match human intelligence for another 10 years, and the singularity itself wouldn’t arrive until year 2045 — a date with destiny that he planned to keep, when he celebrated his 97th birthday. The best approach would be to put safety measures into place along the lines of Isaac Asimov’s 3 laws of robotics.
Like the first stone knives, created over 2 million years ago, all tech could be used for good or ill. But if properly managed, AIs would surely be our saviors, not our terminators — our partners, not our competitors. Just watch: AI was going to save our skins. Kurzweil could see it. Levinson and Venter saw it too, each in their own way. There could be no doubt that smart machines were where the end of The End lay.
— end —
reading
Set of best-selling science books by author Chip Walter.
1. |
title: Last Ape Standing
deck: The 7-million-year story of how we survived.
author: by Chip Walter
date: 2013
explore this book | click
2. |
title: Thumbs, Toes, and Tears
deck: And other traits that make us human.
author: by Chip Walter
date: 2006
explore this book | click
3. |
title: I’m Working on That
deck: A trek from science fiction to science fact.
author: by Chip Walter + William Shatner
date: 2002
explore this book | click
— notes —
AI = artificial intelligence
CNN = Cable News Network
DNA = deoxy-ribo-nucleic acid
FBI = the Federal Bureau of Investigation | US
HLI = Human Longevity, Inc.
— contents —
~ about | the story collection
~ letter
publication: Scientific American
title: 50, 100, and 150 Years Ago
deck: Innovation + Discovery: chronicled in Scientific American
visit | the story collection
about | the collection
An immense repository of 300+ stories going back to year 1845. And spanning a breadth of human history that gives you a front-row seat — to civilization’s progress as-it-happened. This fascinating curated collection of inventions updates monthly.
presented by
Scientific American | home ~ channel
tag line: Expertise, insights, and illumination.
banner: Celebrating 175 years of discovery.
Springer Nature | home ~ channel
tag line: We’re a world-leading research, educational, and professional publisher.
banner: 180 years of progress + 180 years of discovery
— quote —
I hope you enjoy the journey.
— Daniel Schlenoff
name: Daniel Schlenoff
bio: writer | Scientific American
bio: editor of the collection: 50, 100, and 150 Years Ago
image | above
A portrait of Daniel Schlenoff.
— letter —
Hello,
Tracking 1.5 centuries of innovation + discovery, Scientific American magazine’s story collection — 50, 100, and 150 Years Ago — is curated by editor Daniel Schlenoff.
The chronicle presents 300+ stories on tech invention + science knowledge — archived since year 1845. This set is good reading for anyone watching the path of progress.
— library editor
— notes —
* no notes
image | above
Pictured is a human brain from the top.
— contents —
~ story
~ diagram
~ featurette
~ webpages
— story —
Researchers at Lund University in Sweden have developed implantable electrodes that can capture signals from a living human or animal brain over a long period of time — but without causing brain tissue damage. This bio-medical tech will make it possible to monitor — and eventually understand — brain function in both healthy + diseased people.
A clever multi-stage design.
Lead researcher Jens Schouenborg PhD said:
There are 2 big problems that must be solved for scientists to be able to record signals from the brain with good results. First, the electrode must be bio-friendly — that means it can’t cause any significant damage to the brain tissue. Second, the electrode must be flexible inside the brain tissue.
Remember that the brain floats in fluid inside the skull — it moves around whenever we do any activity: breathe, walk, or just turn our head. The electrode and the implantation tech that we made have both of these important properties. So our design is highly specialized.
The researchers named their tailored electrodes “3D electrodes.” They’re extremely soft + flexible, so they can make stable recordings of electrical activity in brain tissue over a long time.
Their electrode is so soft that it bends when it touches a watery surface. To implant it, the team developed a technique that encapsulates the electrodes in a hard — but dissolvable — gelatin material that’s also very gentle on the brain’s delicate tissue.
Looking at the image + video, you can see the tiny strands of electrodes bundled together into a broom-like shape — that’s encased in gelatin. That gelatin is hard enough to implant into the brain’s tissue. Then the gelatin melts-away, leaving the electrode array in-place, intact, without damage.
Team researcher Johan Agorelius said:
This tech retains the electrodes in their original form inside the brain. It can monitor what happens internally without disturbing well-functioning brain tissue.
image | above
A diagram of the super-soft + ultra-flexible electrode implant. Shown installed inside a living brain. The white areas labelled ‘bone’ are the skull.
Other electrodes can’t maintain their shape.
Other flexible electrodes commonly used in medical research can’t maintain their shape when they’re implanted. That’s why other research teams have tried developing solid chips with limited flexibility. But those types of electrodes are much stiffer. These older models rub against + irritate the brain tissue — and the cells around the electrodes die.
Team researcher Johan Agorelius said:
The brain signals then become misleading or completely non-existent. Our new tech lets us implant the most pliable electrodes — as flexible as we want — but still retain the exact shape of the electrode inside the brain.
This tool creates entirely new opportunities to gather data on what happens inside the brain. This kind of knowledge helps us understand + develop treatments for the worst diseases + disabilities like Parkinson’s disease, head trauma, congenital brain deformity, migraine, and the types of brain injuries that lead to motor paralysis.
image | above
The array of electrodes is a bundle of 8 thin gold leads.
guide to the image:
a. | the electrode array before it’s embedded into the gelatin
b. | the same electrode array after it’s embedded into the gelatin + shaped as a needle
c. | the same electrode array inside a section of brain tissue — 3 weeks after implantation
You can see the overall shape of the super-soft + ultra-flexible electrode bundle is well-preserved inside the gelatin — and stays intact even after it’s implanted into a brain.
— featurette —
— webpages —
Lund Univ. | home ~ channel
motto: Prepared for both.
— notes —
3D = 3-dimensional
image | above
Pictured is the BRETT robot used in lab experiments at the Univ. of California. This robot responds to smart AI software that enables the robot to complete physical tasks by trial + error.
It’s called autonomous learning — meaning BRETT can approach a new environment, with objects he’s never seen, and figure-out by himself how to touch, move, assemble, stack, open, close, and operate all kinds of things he detects in the space. Even if he’s never encountered them before. The AI software program gives him the ability to learn without being pre-programmed on each task ahead of time.
— contents —
~ story
~ quotes
~ featurette
~ reading
story |
Univ. of California researchers have developed new computer AI software that enables robots to learn physical skills — called motor tasks — by trial + error. The robot uses a step-by-step process similar to the way humans learn.
The lab made a demo of their technique — called reinforcement learning. In the test: the robot completes a variety of physical tasks — without any pre-programmed details about its surroundings.
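The trial + error idea can be sketched as a minimal score-guided search. This is a deliberately tiny hill-climbing toy, not the deep reinforcement learning the Berkeley lab actually uses — the score function, target, and step sizes are invented for illustration:

```python
import random

def score(position, target=10.0):
    """The learning software supplies a score: closer to the target is better."""
    return -abs(target - position)

def learn_by_trial_and_error(start=0.0, trials=2000, step=0.5, seed=0):
    """Trial + error: try a random tweak, keep it only if the score improves."""
    rng = random.Random(seed)
    position = start
    best = score(position)
    for _ in range(trials):
        candidate = position + rng.uniform(-step, step)
        s = score(candidate)
        if s > best:  # the score improved — keep this adjustment
            position, best = candidate, s
    return position

print(learn_by_trial_and_error())  # ends up close to the target, 10.0
```

Real reinforcement learning replaces the single number being tuned with the thousands of parameters of a neural network, and replaces the hand-written score with a task reward — but the keep-what-improves loop is the same.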
Some of the robot’s successful test tasks:
Robots learn motor tasks autonomously with AI.
The research is part of the People + Robots Initiative — at the Univ. of California’s Center for Information Technology Research in the Interest of Society — CITRIS. The center develops info-tech solutions for world-wide benefit. They advance AI, robotics, and automation.
1. |
quote | by Pieter Abbeel PhD
What we’re showing in this project is a new approach to enable a robot to learn. The key is that when a robot is faced with something new, we won’t have to re-program it. The same AI software enables the robot to learn all the different tasks we gave it.
name: Pieter Abbeel PhD
about: A robotics engineer, teacher, and a lead researcher on the project.
view | full profile
bio: robotics engineer
bio: teacher
field:
at | the Univ. of California
web: profile
2. |
quote | by Trevor Darrell PhD
Most robotic applications happen in controlled environments — where physical objects are in predictable positions in the surroundings. The challenge of putting robots into real-life settings — like homes, offices, or transported to new or unknown facilities — is that those environments are constantly changing. The robot must be able to sense + adapt to its surroundings.
name: Trevor Darrell PhD
about:
bio: robotics engineer
bio: teacher | the Univ. of California
web: profile
image | left
Pictured is the BRETT robot. He’s programmed with AI software to use tools + complete motor tasks. He lives in an experimental lab at the Univ. of California.
You can see the robot’s gripper hand pulling a wood nail out of a wood beam with the back-end of a hammer.
He does this by himself — carefully adjusting the arc, angle, direction, pressure, motion, and force he applies to the hammer. Just like a human would.
He finally accomplishes his task across many trial + error attempts. So he’s also learning the same way humans do.
The research team, located at the university’s famous Berkeley campus, nick-named the robot BRETT for:
B | Berkeley
R | Robot for the
E | Elimination of
T | Tedious
T | Tasks
Better than old approaches.
Previous techniques to help a robot make its way through a 3D environment required:
Instead, the researchers used the computer software technique called deep-learning AI — enabling the robot to make sense of all the data it receives, from all its sensors.
AI deep-learning software programs create layers of pattern recognition — that handle raw sensory data coming from the robot’s 3D environment. From sound, echo, touch, pressure, temperature, motion, position, and camera vision. With AI the robot can track patterns in the ongoing info-stream it’s getting — from its many sensors.
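The “layers of pattern recognition” can be sketched as a tiny feed-forward network in plain Python. The weights + sensor values below are made-up illustrations — a real system learns its weights from data:

```python
def relu(x):
    """Keep positive activations, zero-out the rest."""
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    """One layer of pattern detectors: each row of weights is one detector."""
    return relu([
        sum(w * v for w, v in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ])

# Raw sensor readings — e.g. touch pressure, joint angle, camera brightness.
sensors = [0.8, 0.1, 0.5]

# Layer 1 finds simple patterns; layer 2 combines them into higher-level ones.
hidden = layer(sensors, weights=[[1.0, -0.5, 0.2], [0.3, 0.9, -0.4]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.6, 0.7]], biases=[-0.2])
print(output)  # a single higher-level activation
```

Each layer turns the layer below it into slightly more abstract features — stacked deeply enough, that’s the “deep” in deep-learning.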
3. |
quote | by Sergey Levine PhD
Humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife. And we don’t need to be pre-programmed to do activities. People learn new skills over time — from experience + by watching other humans.
This learning process is so deeply rooted in our biology, that we can’t even communicate to somebody precisely how to do any physical task. We can only give guidance as they learn it on their own.
name: Sergey Levine PhD
about
bio: robotics engineer
bio: teacher | the Univ. of California
web: profile
— featurette —
An AI feedback loop powers the software.
For the experiments, the team used the Personal Robot 2 — called PR2 — product from Clearpath Robotics. And nick-named it the Berkeley Robot for the Elimination of Tedious Tasks — BRETT.
They presented BRETT with a series of motor tasks — examples: placing blocks into matching openings, and stacking Lego blocks. The software program controlling BRETT’s learning supplies a score, based-on how well the robot is doing its task.
This training process lets the robot learn on its own. As BRETT moves its joints + manipulates physical objects — the AI software program calculates good values for the 92,000 parameters it needs to assess.
A jump in self-learning robotics advances.
When BRETT is given the coordinates for the beginning + end of a task, the robot can master most tasks in 10 minutes. When the robot is not given the location for the objects in the scene — and needs to learn vision + control together — the learning process takes 3 hours.
— quote —
The field of robotics will see big improvements — as our ability to process huge amounts of data improves. With more data, you can start learning more complex things. It’s still a long way before robots can learn to clean a house, do the dishes, and sort laundry.
But our promising results show these kinds of AI software programs can enable robots to learn complex tasks by themselves, without being pre-programmed. In 5 — 10 years we’ll see major advances in robot learning capability.
— Pieter Abbeel PhD
name: Pieter Abbeel PhD
bio: robotics engineer
bio: teacher | the Univ. of California
web: profile
watching
1. |
school: the Univ. of California
featurette title: BRETT the robot learns to put things together on his own
watch | featurette
— summary —
Univ. of California at Berkeley researchers developed software that enables robots to learn motor tasks through trial + error. This is similar to the way humans learn — marking a major milestone in the computer software field of artificial intelligence. In their experiments, the robot used AI deep-learning — a software programming technique — to complete tasks without pre-programmed details about its surroundings.
presented by
school: the Univ. of California
web: home ~ channel
motto: Let there be light.
program: People + Robots
web: home ~ background ~ channel
tag line: —
2. |
broadcast: Bloomberg
featurette title: See smart robots learn to play like human children
watch | featurette
— summary —
BRETT is a robot that can think. Researchers at the Univ. of California at Berkeley have programmed BRETT to learn on its own — through trial + error — how to accomplish tasks. Such as screwing a cap on a bottle, putting Lego blocks together, and solving a simple puzzle. It’s all made possible by a type of AI computer program called deep-learning that’s revolutionizing robotics.
presented by
broadcast: Bloomberg
web: home ~ channel
tag line: All the most important market news. All in one place.
webpages
Clearpath Robotics | home ~ channel
tag line: Boldly go where no robot has gone before.
— about —
We build the world’s best robot development platforms. Developing autonomous robots has never been easier. Our goal is to automate the planet’s dullest, dirtiest, and deadliest jobs. We’re the leader in research robotics — blazing the trail for robots in industry.
— notes —
UC = the University of California
univ. = university
AI = artificial intelligence
IT = information technology
info-tech = information technology
CITRIS = Center for Information Technology Research in the Interest of Society
BRETT = Berkeley Robot for the Elimination of Tedious Tasks
note: This event will also be web-streamed.
— event —
event title: Future Forum
theme: Society in Transition ~ effects of the pandemic
season: autumn
date: October 14 — 15
year: 2021
where: Chicago, Illinois ~ US
visit | event website
presented by
group: the German Center for Research + Innovation
tag line: Land of ideas.
web: home
— summary —
Each year, German + American thought leaders from academia and the public + private sectors convene: to exchange perspectives, find collaborators — and determine a collective path toward a progressive future.
This year’s forum is on the topic Society in Transition: effects of the pandemic. We’ll investigate the consequences of the global pandemic — through a wide range of inter-disciplinary + international perspectives.
past themes:
artificial intelligence
building biopolis: cities + climate change
about | the German Center for Research + Innovation
The center promotes innovation + collaboration by:
It was established in year 2010 to strengthen Germany’s reputation as a land of research, science, and innovation. And provide a platform for leaders in science, tech, and humanities to engage in trans-atlantic exchange + collaboration.
— notes —
US = United States
images | above
The Project DR augmented reality (AR) system projects a diagnostic medical image of a spinal injury directly onto the patient’s skin. The 3D overlay effect is visible using virtual reality goggles — called a headset — or hand-held glass panels specialized for AR display.
— contents —
~ story
~ featurettes
~ webpages
~ reading
— story —
New technology — called Project DR — is bringing the power of augmented reality into clinical research + hospitals. The system projects medical images — like CT scans or MRI scans — directly onto a patient’s skin. This gives physicians a 3D visual of the patient from the outside + inside. With both views, a physician can virtually see inside the patient — since the images are mapped to the patient’s body from above.
The project was created by researchers at the Univ. of Alberta, in Canada.
How it functions.
ProjectDR uses a system called motion capture. Special markers are placed on the patient’s body. The space is flooded with infrared light that bounces off the markers. The computer system uses infrared cameras that can see the reflective markers — and keep-up with them as they move.
By tracking the body markers, Project DR knows just where to project the medical scan images onto the patient’s body. It maps the medical scans to the anatomical markers. This creates an effect — like you’re looking inside the human body. And it can follow the patient moving — so the scans can be projected continuously.
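The marker-tracking idea can be sketched in a deliberately simplified 2D version — the real system tracks markers in 3D with calibrated infrared cameras, and also warps the image for skin contours. All coordinates + function names here are illustrative:

```python
# Simplified 2D sketch of ProjectDR-style tracking: the scan image is
# anchored to body markers — when the markers move, the projection follows.

def track_markers(markers, dx, dy):
    """Infrared cameras report new marker positions after the patient moves."""
    return [(x + dx, y + dy) for x, y in markers]

def project_scan(scan_anchor, markers, reference_markers):
    """Re-map the scan's anchor point by following the markers' displacement."""
    (rx, ry), (mx, my) = reference_markers[0], markers[0]
    shift_x, shift_y = mx - rx, my - ry
    return (scan_anchor[0] + shift_x, scan_anchor[1] + shift_y)

reference = [(10, 20), (30, 20)]         # marker positions at calibration time
moved = track_markers(reference, 5, -3)  # patient shifts right + down
anchor = project_scan((15, 25), moved, reference)
print(anchor)  # the projected scan follows the body: (20, 22)
```

The production system solves a richer version of the same mapping — a full 3D transform per frame, re-computed continuously so the projection stays glued to the moving patient.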
To understand a motion capture system — and for a primer on virtual reality, augmented reality, and mixed reality — see the good featurettes below. They show actual footage of virtual experiences + 3D demos — explaining this tech.
The project’s supervising researcher Pierre Boulanger PhD said:
Our Project DR system is an augmented reality (AR) software platform that projects medical diagnostic images onto the skin surface of a patient.
This system corrects the image if there’s distortion from skin contours — and automatically adjusts to fit the size + shape of the patient. Our software uses 2D or 3D diagnostic image formats. And the output can be viewed through see-through or projected displays.
Benefits for the future.
Some applications for this AR system include:
Project DR can also project limited sections of body scans — for example: only lungs, skeleton, or blood vessels — depending on what a physician wants to examine.
1. | quote
name: Pierre Boulanger PhD
bio: project researcher
We demo Project DR in an operating room — in a surgical simulation lab — to test the pros + cons in surgeries. We’re also doing pilot studies to test this system for teaching physical therapy techniques. Next we’ll conduct real-life surgical pilot studies.
2. | quote
name: Ian Watts
bio: research team member
We wanted to create a system that would show clinicians a patient’s internal anatomy — within the context of the body. The difficult part is having the image track properly on the patient’s body — even as they shift + move. I’m improving the system’s automatic calibration and adding depth sensors.
featurettes
1. |
school: the Univ. of Alberta
title: Project DR demo
project: Project DR | visit
watch | featurette
— about —
Project DR is an augmented reality (AR) system that enables medical images — such as CT scans + MRI data — to be displayed directly on a patient’s body, in a way that moves as the patient does.
presented by
school: the Univ.of Alberta
web: home ~ main channel ~ science channel
motto: Whatsoever things are true.
2. |
school : Full Sail Univ.
title: What is motion capture?
watch | featurette
— about —
From films + video games — to the military, sports, and medical fields — motion capture is used to record the movement of objects or people. And create 3D animated models. Teacher Tyrone Jordan demos the basics of motion capture.
presented by
school: Full Sail Univ.
web: home ~ channel
motto: Explore degrees in entertainment, media, arts, and tech.
3. |
company: SimpliLearn
web: home ~ channel
motto: We transform lives by empowering people via digital skills.
title: The rise of tech: VR (virtual reality), AR (augmented reality), and MR (mixed reality).
watch | featurette
These immersive reality systems: VR (virtual reality), AR (augmented reality), and MR (mixed reality) — are amongst the fastest-growing computer technologies today, involving both hardware + software. In a nutshell, immersive tech creates or extends our sensory experience.
It immerses the user in a digital environment — that’s visual, auditory, and even tactile. It can be a pure fantasy world, or a realistic model, or overlays that appear on top of the actual world around us. This tech is gaining momentum and has lots of applications. Watch this video to understand what immersive tech is — and how it’s useful in re-imagining our future.
presented by
company: SimpliLearn
motto: We transform lives by empowering people via digital skills.
web: home ~ channel
— webpages —
name: Pierre Boulanger • PhD
web: home
reading
1. |
from: New Atlas
title: ProjectDR allows doctors to see into patients’ bodies
read | story
Imagine if doctors could see through a patient’s skin. And their perspective of the underlying bones + organs changed accordingly — as the person moved around. That’s what scientists at the Univ. of Alberta in Canada have developed. It’s still in the experimental phase, and is called ProjectDR.
presented by
New Atlas | home ~ channel
tag line: Extraordinary ideas moving the world forward.
2. |
from: Futurism
title: Doctors can now use augmented reality to peek under a patient’s skin
read | story
A new tech takes some of the guesswork out of medicine.
presented by
Futurism | home ~ channel
tag line: Wonder what’s next.
Singularity grp. | home ~ channel
tag line: We prepare you to seize exponential opportunities.
3. |
from: the Richard van Hooijdonk blog
title: ProjectDR uses new augmented reality tech to let doctors see through your skin
read | story
watch | featurette
A pilot program at the Univ. of Alberta brings digital magic to the operating room — so surgeons can see inside their patients before they make the first incision. The researchers have developed a system called ProjectDR, a revolutionary augmented reality tech for medical teaching + patient care.
presented by
the Richard van Hooijdonk blog | home ~ channel ~ blog
tag line: A keynote speaker, trend-watcher, and futurist.
— notes —
AR = augmented reality
VR = virtual reality
CT scan = computed tomography scan
MRI scan = magnetic resonance imaging scan
* physio-therapy is also called physical therapy
— event —
event title: CogX
theme: The festival of AI + emerging tech.
web: home • channel
season: summer
date: June 14 — 16
year: 2021
where: London • UK
event website | visit
presented by
organization: 2030 Vision
tag line: Advancing 4th Industrial Revolution technology for the global goals.
web: home • channel
— summary —
CogX is a festival of AI + transformational tech. We’re addressing the question: How do we get the next 10 years right?
— gather together —
Come join Europe’s leading AI festival to enjoy:
1,000s of the brightest innovators will gather to connect and discuss the impact of AI. We’re taking over an entire London neighborhood — bringing it to life with days of talks, leading-edge tech, round-tables, workshops, debates, private lunches, exhibits, pop-ups, art, music, and public events. Plus the winners of the annual awards will be revealed, recognizing the best products + tech across all key industries and fields.
The over-arching theme of the festival is: how can AI + emerging tech support the Sustainable Development Goals from the United Nations. Ethics is at the heart of the CogX festival. We’ll explore how to address the biases and unintended consequences of AI + emerging tech.
By 2030 AI will have a $13 — $16 trillion impact on the global economy. Join us as we bring together the key players from across global industry, government, and academia — to move the conversation forward.
CogX will encourage debate on how tech can help catalyze positive and enduring change. CogX also offers executive briefings + numerous private events tailored for: senior executives, global politicians, AI thought leaders.
— mission —
— background —
organization: United Nations
list: the 17 Sustainable Development Goals
The Sustainable Development Goals — also called the global goals — are the blueprint drawn-up by the United Nations to achieve a better future for all people. They address: poverty, environmental degradation, social welfare, and quality of life.
So that nobody is left behind, society needs to achieve the goals by 2030.
— webpages —
organization: United Nations
tag line: ~
web: home • channel
project: the Sustainable Development Goals
tag line: ~
web: home • knowledge platform
about | 2030 Vision
Tech partnerships for the global goals. 2030 Vision projects connect knowledge, expertise, tech, and resources to solve global needs. Our projects are collaborations between prestigious cross-sector groups that drive wide-scale innovation for the UN Sustainable Development Goals — using tech to imagine solutions.
Our ambition is to transform the use of technology through collaborative partnerships and innovative projects, to support the delivery of the United Nations Sustainable Development Goals — unlocking the commercial opportunities they offer, by identifying + scaling impactful tech through multi-sector partnerships.
— notes —
Cog X = Cognition X
AI = artificial intelligence
ML = machine learning
IoT = internet of things
VR + AR = virtual reality + augmented reality
UK = country: United Kingdom
UN = United Nations
University of Hong Kong researchers have invented a radical new lightweight material that could replace traditional bulky, heavy motors or actuators in robots, medical devices, prosthetic muscles, exoskeletons, micro-robots, and other types of devices.
The new actuating material — nickel hydroxide-oxyhydroxide — can be instantly triggered and wirelessly powered by low-intensity visible light or electricity at relatively low intensity. It can exert a force of up to 3000 times its own weight — producing stress and speed comparable to mammalian skeletal muscles, according to the researchers.
The material is also responsive to heat and humidity changes, which could allow autonomous machines to harness tiny energy changes in the environment.
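As a rough worked example of the claimed force-to-weight ratio — using only the “3,000 times its own weight” figure from the researchers, with everything else illustrative:

```python
# If the actuator can exert 3,000 times its own weight, then a 1-gram
# film of the material could in principle lift about 3 kilograms.

G = 9.81  # m/s^2, standard gravity

def max_force_newtons(actuator_mass_kg, ratio=3000):
    """Maximum force = ratio x the actuator's own weight (m * g)."""
    return ratio * actuator_mass_kg * G

def liftable_mass_kg(actuator_mass_kg, ratio=3000):
    """The payload whose weight equals the actuator's maximum force."""
    return max_force_newtons(actuator_mass_kg, ratio) / G

print(liftable_mass_kg(0.001))  # a 1 g actuator lifts ~3.0 kg
```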
The major component is nickel, so the material cost is low, and fabrication uses a simple electrodeposition process, allowing for scaling up and manufacture in industry.
Developing actuating materials was identified as the leading grand challenge in “The grand challenges of Science Robotics” to “deeply root robotics research in science while developing novel robotic platforms that will enable new scientific discoveries.”
University of Hong Kong | Future Robots need No Motors
Ref.: Science Robotics. Source: University of Hong Kong.
— contents —
~ book
~ description
~ quote
~ about | the author
book title: the Social Singularity
deck: How de-centralization will allow us to transcend politics and create global prosperity.
author: by Max Borders
date: 2018
This book is available at fine book-sellers.
Amazon | Barnes + Noble | Books-a-Million | IndieBound
— description —
The world is rapidly de-centralizing.
Welcome to the social singularity.
In this book, futurist Max Borders shows that humanity is already building systems that will ‘underthrow’ great centers of power. Exploring the promise of a de-centralized world, he says civilization will:
Borders takes the reader on a tour of: modern pagan festivals, cities of the future, and radically new ways to organize society. He examines trends likely to revolutionize the way we live + work.
As the technological singularity fast approaches, Borders explains, a parallel process of human re-organization will allow us to gain enormous benefits.
The paradox: a billion little acts of subversion by global citizens will help us lead richer, healthier lives. We must master the technological tools taking us into tomorrow: automation, artificial intelligence, bio-tech, agro-tech, factory-tech, clean energy, and the smart-city of the future.
source: publisher
— quote —
De-centralization is not a choice, it’s an inevitability. Thankfully, the process can liberate people from poverty, end acrimonious politics, and help humanity avoid technological + civil collapse.
— Max Borders
about | the author
Author Max Borders is co-founder of the Future Frontiers conference + festival.
He’s also founder of Social Evolution, a non-profit organization dedicated to building mutual aid societies + solving social problems through innovation.
He lectures widely about the future of humanity. He formerly served as Editor + Director of Content for the Foundation for Economic Education.
image | above
A portrait of Max Borders.
webpages
name: Max Borders
bio: non-fiction author + futurist
bio: event producer
web: home ~ blog ~ books
Future Frontiers | home ~ channel
tag line: A gathering of visionaries creating the future of self, culture, and society.
Social Evolution | home
tag line: Innovating around power.
the Foundation for Economic Education | home ~ channel
tag line: Set your path, change the world.
— notes —
AI = artificial intelligence
image | above
The contemporary city of Shanghai in China — home to over 26 million people.
— contents —
~ story
~ featurette
~ reference
~ reading
— story —
Samo Burja is a sociologist, writer, and the founder of Bismarck Analysis — a firm that analyzes institutions, from governments to companies. His research focuses on the causes of societal flourishing and decay. He writes on history, epistemology, and strategy.
Lessons from history.
In this talk, he describes a connected mesh of human interaction — from small to enormous scale — that's the foundation for society's progress. Then he details the complex ways this fabric gets affected, eventually leading to either:
A robust + healthy human ecology — that thrives with each technological + scientific advance.
— or —
An anemic + unhealthy society — that back-slides into dysfunction + mass disorder.
Defying the odds.
Burja outlines these steps: investigate the landscape, evaluate our odds, then try to plot the best course. He explains:
Our civilization is made-up of countless individuals and pieces of material technology — that come together to form institutions and inter-dependent systems of logistics, development, and production. These institutions + systems then store the knowledge required for their own renewal + growth.
We pin the hopes of our common human project on this renewal + growth of the whole civilization. Whether this project is going well is a challenging — but vital — question to answer. History shows us we’re not safe from institutional collapse. Advances in technology mitigate some aspects, but produce their own risks. Agile institutions that make use of both social + technical knowledge not only mitigate such risks — but promise unprecedented human flourishing.
There has never been an immortal society. No matter how technologically advanced our own society is — it’s unlikely to be an exception. For a good future that defies these odds, we must understand the hidden forces shaping society.
We hope you enjoy his presentation. Explore more materials by sociologist Samo Burja on his website, and on his Medium blog — listed below in the reference section.
note: This talk by Samo Burja was held at a conference presented by the Foresight Institute.
image | below
A close-up of a modern wind turbine being assembled.
featurette
featurette title: Civilization: knowledge, institutions, and humanity’s future
presenter: Samo Burja
watch | featurette
— summary —
A talk by sociologist Samo Burja — exploring his research into how technologically advanced civilizations can find stability by understanding the mechanisms of their institutions + knowledge management.
presented by
group: Foresight Institute
tag line: Advancing beneficial technologies.
web: home • channel
reference
— presenter —
name: Samo Burja
tag line: There has never been an immortal human society. I work on figuring out why.
web: home • channel
bio | Samo Burja is a sociologist and researcher. He’s founder of the advisory firm Bismarck Analysis.
— firm —
group: Bismarck Analysis
tag line: We help companies, governments, philanthropists, and investors maintain + advance our civilization.
web: home
related reading
1. |
blog title: Samo Burja
tag line: There has never been an immortal society. Figuring out why. I write on history, epistemology, and strategy.
web: home ~
presented on
platform: Medium
tag line: Explore new perspectives.
web: home
— contents —
~ event
~ summary
— event —
event title: Paradigm Shift Summit
theme: Insight + Predictions
season: autumn
date: November 10 — 13
year: 2021
where: Orlando, FL • United States
event website | visit
presented by
group: Paradigm Press
tag line: text
web: home • channel
— summary —
Please join us for the Paradigm Shift Summit. Paradigm Press provides independent economic commentary, analysis, and education — through print + web publications, videos, seminars, and conference calls.
We provide non-biased market commentary + market news. Paradigm Press is 100% independent. Our editors are not shy about making bold predictions. Their insights have been recognized by:
Bloomberg
the Wall Street Journal
the Economist
the Financial Times
the Washington Post
the San Francisco Chronicle
the Los Angeles Times
the Daily Telegraph
US News + World Report
Fox • cNBC • Reuters
get valuable insights:
Hear predictions from your favorite experts — sharing the paradigm shifts that truly matter for year 2020 and beyond. You can meet + mingle with 300 other VIPs who are discovering secret paradigms in finance.
We’ve teamed-up with the world’s most credible insiders, experts, and operatives who’ve diverged from the mainstream to bring their models straight to the people.
the summit features:
letter |
Hello,
Terry Grossman MD — my friend + health sciences partner — and I created our web shop Transcend to accompany our 2 best-selling personal health books.
our books:
visit | Fantastic Voyage
deck: Live long enough to live forever.
year: 2004
visit | Transcend
deck: 9 steps to living well forever.
year: 2009
We put quality and care into our Transcend products and resources. Our selection of supplements, wellness + lifestyle items, and longevity formulas are hand-picked.
We hope to make Transcend a trusted source for your whole-health journey.
best wishes,
Ray Kurzweil
Terry Grossman MD
webpages
Transcend | home
the Grossman Wellness Center | home
the TRANSCEND program | 9 steps
T — talk to your doctors
Prevention and early detection are essential to good health. Be pro-active about your personal program. Stay informed and talk routinely with your health care provider.
read | full story
R — relaxation
Better coping with life’s pressures + upsets improves health. You can take steps to identify + reduce today’s stresses in your life.
read | full story
A — assessment
The two pillars of longevity health planning are prevention + early detection. Have a routine plan for labs, yearly exams, specialized assessments, and self-care. Staying fit, staying on a health maintenance plan — staying in-the-know.
read | full story
N — nutrition
Eating nutrient-rich food helps with exhaustion, repairs injuries, and heals your body. Good meal planning keeps your energy balanced, your natural immunities high, plus supplies plenty of vitamins.
Avoid processed sugar, and eat well-rounded meals balanced across the food groups. Limit fast carbohydrates to a healthy portion — and enjoy lean proteins + veggies. It's nature's-own recipe for wellness.
read | full story
S — supplementation
There’s daily research into nutrition, deficiency, illness, and supplementation. To get started, read about these trending topics.
read | full story
C — calories
The obesity + eating-disorder epidemics — which seem to accompany long sedentary hours, stressful life experiences, large quantities of processed sugar, too few vegetables + lean meats, and chronic meal skipping — are sweeping the country. About two thirds of the US is overweight. Correcting obesity — and its opposite, under-eating — can clear-up many health disorders, restore your physical mobility, help digestion + re-generation + healing, lift your mood, and improve pregnancy.
A little at a time, you can heal obesity and anorexia. Balanced meals on a routine schedule — and not finding yourself in situations where cheap oils, deep-fried food, and sugar are your only options — are key to your healthy life path. Planning ahead to eat good, quality calories each day, along with a comfortable fitness routine, can work miracles for all types of eating disorders.
read | full story
E — exercise
A key piece of our program: moderate exercise — is a magic wand for your healthy life. Fitness helps circulation + digestion, boosts natural defenses, reduces stress, improves muscle tone + movement.
People who exercise a little each day — famously live longer + healthier. We are as young as we feel — and safe exercise will boost your today + your tomorrow. Experts believe exercise can flush the body plus speed-up healing. Our program integrates 3 exercise types — to build muscle balance, endurance, tone, flexibility, and re-generative health.
read | full story
N — new technologies
We’re at the cusp of amazing knowledge discoveries about nature, wellness, biology, and the true roots of illness. As we move into the new era in which health and medicine become information technologies, we are gaining tools to augment, repair, rebuild and improve the human body.
read | full story
D — de-toxification
You are constantly bombarded with toxins from inside and out, and some exposure is inevitable. But you can limit it, strengthen your body's natural immunity, and clear your body of debris. Get your juices flowing on a routine basis to handle the stresses of modern living. Drinking plenty of clean water, and cutting back on dangerously toxic substance habits, is another key in your healthy life path.
read | full story
reading
no. 1 |
Spare parts.
Scientists world-wide are working to create spare parts for the human body. These technologies provide assistance to people with physical disabilities. Plus we’ll soon be able to fully replace or assist body parts that need healing.
read | full story
no. 2 |
Wellness from within.
We often have days where we feel something’s amiss — but we struggle to label what it is. We explore helpful ways to cope with difficult memories + emotions.
We present some practical steps for your everyday routine: daily time to check-up on how you’re feeling, setting aside space for meditation, keeping a diary of goals + inspirations, sleeping well, supporting yourself with exercise, eating a balanced diet. And learning what works for your mind + body to find balance.
read | full story
About Terry Grossman MD
Hello readers,
I first met Terry Grossman MD at a tech futures event where I was speaking in year 1999. In the two decades since, we’ve formed a lasting bond.
Terry’s a leading thinker and well-known activist for longevity. He’s also a life-long learner.
He lectures globally, writes books on health — and practices traditional + alternative medicine in his clinic. He investigates nutrition supplements + therapies. And tracks breakthroughs in medical knowledge.
Together we created the Transcend shop — and co-wrote two best-selling books. Terry’s available for health consultations at his Grossman Wellness Center. I’ve been a patient of Terry’s since 2000.
I recommend his holistic blend of classic + alternative medicine. Our longevity recipe:
— routine check-ups
— tranquility techniques
— good nutrition
— whole living
— moderate exercise
— quality sleep
— mental health support
— personal lab tests
— select Rx + vaccinations
— today’s body imaging
— the best medical tech
best wishes,
Ray Kurzweil
The bio for Terry Grossman MD.
name: Terry Alan Grossman MD
school: Brandeis Univ. — 1968
school: Univ. of Florida | School of Medicine — 1979
family practice: since 1980
clinic: Grossman Wellness Clinic | home
position: founder + medical director
year: 1995 — present
membership: board certified
organization: American Academy of Anti-Aging Medicine
organization: American College for Advancement in Medicine
medical license: Colorado medical board
license number: DR 0023148
medical license: California medical board
license number: G 85531
— contents —
~ story
~ paper
~ reference
~ reading
~ watching
— story —
Engineering researchers at the University of Toronto, in Canada, used AI software to design a privacy filter for your photos that disables automatic facial recognition systems.
Each time you upload a photo or video to a social media platform, its automated, digital facial recognition systems learn a little more about you. These algorithms ingest data about who you are, your location, and people you know — and they’re constantly improving.
As concerns over privacy + data security on social networks grow, Univ. of Toronto engineering researchers — led by Parham Aarabi PhD and graduate student Avishek Bose — have created a computer software algorithm to dynamically disrupt facial recognition systems.
What is facial recognition?
A facial recognition system is a tech capable of identifying a human face — found in a digital photo, graphic image, or in a video frame — and then matching it against a data-base of stored faces. The most advanced tech can be used to authenticate people through ID verification services — it can pinpoint + measure detailed, distinct facial features in an image. The process of measuring human physical characteristics is called bio-metrics.
Face recognition is commonly used on smart-phones and in robotics. Its accuracy as a bio-metric tech is lower than iris recognition, vein pattern recognition, voiceprint + fingerprint recognition. But it’s widely used because it’s contact-less and non-invasive, especially for video surveillance and automatically indexing images.
— paper —
platform: ResearchGate
paper title: Adversarial attacks on face detectors using neural net based constrained optimization
read | paper
Facial recognition gets better + better.
Parham Aarabi PhD said: “Personal privacy is a real issue as facial recognition becomes better + better. This is one way beneficial anti-facial-recognition systems can combat that ability.”
Their solution leverages a deep learning technique called adversarial training, which pits 2 AI algorithms against each other. In computing, deep learning is a technique that trains many-layered neural networks on complex sets of data to find solutions to problems. Inspired by the biology of human thinking, deep learning helps computers quickly recognize and process images + speech. Like all techniques in the computer software field of AI — deep learning is good at recognizing hard-to-find patterns in big data-sets.
Computer systems called neural networks run AI software that can achieve astounding human-level abilities of pattern recognition. With their deep learning algorithms, they can process in seconds what takes human analysts weeks, months, or years.
A neural network uses a series of deep learning algorithms to recognize underlying relationships in a set of data through a process that mimics human reasoning. The researchers harnessed the power of neural networks to engineer a system that could block automated facial recognition.
images | below
A computer software program is able to identify a human face in a digital image — and then tag that face’s key facial features with points so that they can be measured.
The software measures these primary features — called landmarks or nodal points. For example: the distance between the eyes, the nose width, eye socket depth and distance from forehead to chin. The “total picture” of all this data is stored as your individual faceprint.
Each human face has a unique matrix of facial features, similar to the way a thumbprint is individual and unlikely to have a biological duplicate. So any human face can be used as a good bio-metric print — a unique identification that’s not easily copied. Using the point matrix — the faceprint — faces can be stored in a data-base, and then later matched against a data-set of all faces in a system.
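As a toy illustration of the faceprint idea above — not any production system's actual algorithm, and with invented landmark points and an arbitrary matching threshold — a faceprint can be modeled as a vector of pairwise landmark distances, then matched against a data-base by nearest neighbor:

```python
import math

def faceprint(landmarks):
    """Flatten a set of (x, y) facial landmarks into a feature vector
    of all pairwise distances — a toy stand-in for a real faceprint."""
    pts = list(landmarks)
    return [math.dist(pts[i], pts[j])
            for i in range(len(pts)) for j in range(i + 1, len(pts))]

def match(print_, database, threshold=5.0):
    """Return the name of the closest stored faceprint, or None if no
    stored print is within the (arbitrary) distance threshold."""
    best_name, best_dist = None, threshold
    for name, stored in database.items():
        d = math.dist(print_, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Enroll two made-up faces, then match a slightly shifted probe image.
db = {
    "alice": faceprint([(0, 0), (10, 0), (5, 8), (5, 14)]),
    "bob":   faceprint([(0, 0), (14, 0), (7, 6), (7, 18)]),
}
probe = faceprint([(0, 0), (10.5, 0), (5, 8.2), (5, 13.9)])
print(match(probe, db))  # → alice
```

Because the pairwise distances change only slightly under small shifts of the landmarks, the probe still lands closest to the enrolled print it came from — which is also why subtly perturbing those landmarks (as described below) can defeat the match.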
How it works.
Aarabi and Bose designed a pair of neural networks: one that detects faces, and a second that generates perturbations to disrupt that detection.
The result is an automated software filter that can be applied to photos, to protect a user's privacy. Their algorithm alters very specific pixels in the image, making changes that are almost imperceptible to the human eye.
Bose said: “The disruptive AI can attack what the neural net for the face detection is looking for. For example, if the detection AI is looking for the corner of the eyes — it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.”
Aarabi and Bose tested their software on an industry standard pool of more than 600 faces — called the 300-W face data-set. This data-set includes a wide range of faces.
Their system successfully reduced the number of faces that were originally detectable from 100% down to just 0.5%.
Bose said: “The key is training 2 neural networks against each other — one creates an increasingly robust facial detection system, and the other creates an even stronger tool to disable facial detection.”
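The mechanism above can be sketched numerically. This is a minimal toy — not the researchers' actual networks — using a tiny linear "detector" with made-up weights: nudging each pixel by at most a small amount in the direction that lowers the detector's score flips its decision, while barely changing the image.

```python
def detect_score(pixels, weights, bias):
    """Toy linear detector: a positive score means 'face detected'."""
    return sum(p * w for p, w in zip(pixels, weights)) + bias

def perturb(pixels, weights, eps):
    """Shift each pixel by at most eps in the direction that lowers the
    score (for a linear model, the gradient is just the weight)."""
    return [p - eps * (1 if w > 0 else -1) for p, w in zip(pixels, weights)]

weights, bias = [0.8, -0.5, 1.2, 0.3], -1.7   # made-up detector parameters
image = [0.9, 0.1, 0.85, 0.4]                 # made-up pixel intensities

adv = perturb(image, weights, eps=0.05)        # each pixel moves ≤ 0.05
print(detect_score(image, weights, bias) > 0)  # True — face detected
print(detect_score(adv, weights, bias) > 0)    # False — detection disrupted
```

Real detectors are deep non-linear networks, so the disrupting network must learn the perturbation direction rather than read it off the weights — but the principle is the same: tiny, targeted pixel changes that are large in the detector's eyes.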
— project —
group: Imperial College London
motto: Scientific knowledge + protection.
web: home • channel
project title: 300 Faces in the Wild • data-set
web: home
* aka: 300-W
A powerful tool.
In addition to disabling facial recognition, the new tech also:
Next, the team hopes to make the privacy filter publicly available — as a smart-phone app, mobile tablet app, or website.
Aarabi said: “10 years ago these software algorithms would have to be human-defined. But now neural networks learn by themselves — you don’t need to supply them anything except training data. They can do some really amazing things. It’s a fascinating time in the computer field, there’s enormous potential.”
Some fast-moving facts.
facts source: Interesting Engineering
reference
Univ. of Toronto | home • channel
Parham Aarabi PhD | home
Avishek Joey Bose | home
reading
1. |
publication: University of Toronto Magazine
tag line: Celebrating the university’s research + teaching excellence.
web: home
story title: Engineering AI researchers design privacy filter for your photos that disables facial recognition systems
read | story
— summary —
As concerns about privacy + data security on social networks grow, engineering researchers have created an algorithm to dynamically disrupt facial recognition systems.
presented by
group: Univ. of Toronto
motto: As a tree through the ages.
web: home • channel
2. |
publication: Interesting Engineering
tag line: A cutting-edge, leading community designed for all lovers of engineering, technology, and science.
web: home • channel
story title: Privacy tool cloaks faces to trick facial recognition software
read | story
— summary —
The Fawkes method — designed to out-smart automated computer facial recognition — was developed by researchers at the Sand Lab at the University of Chicago.
presented by
group: Interesting Engineering
tag line: Founded on the core mission of connecting like-minded engineers around the globe.
watching
1. |
publication: Interesting Engineering
tag line: A cutting-edge, leading community designed for all lovers of engineering, technology, and science.
web: home • channel
featurette title: This is how facial recognition works
watch | featurette
— summary —
Regardless of whether you are alone, in a crowd, or in real-time — facial recognition can identify you. Although people have known about facial recognition tech for some time, advances in deep learning + faster processing of big data have helped it develop quickly.
The global facial recognition market is growing each year. It's being used across many industries. Watch to learn how this bio-metric tech works.
presented by
group: Interesting Engineering
tag line: Founded on the core mission of connecting like-minded engineers around the globe.
2. |
publication: Hak5
tag line: Trust your techno-lust.
web: home • channel
featurette title: Defeating facial recognition
watch | featurette
— summary —
How to defeat facial recognition — and avoid surveillance? The crew explains.
presented by
group: Hak5
tag line: Advancing the info-sec industry: award-winning podcasts, leading pentest gear, and an inclusive community.
3. |
publication: Reason
tag line: free minds + free markets
web: home • channel
featurette title: How fashion designers are out-smarting facial recognition
watch | featurette
— summary —
Every day, your movement is tracked. Your purchases are logged, your searches saved. And increasingly, your face is scanned. Facial recognition tech is becoming more widespread daily, and governments are finding new applications in the midst of pandemic. Countries use location tracking to help compliance with quarantines. So can we resist the surveillance society — and should we?
Privacy activists say we should be alarmed by the rise of automated facial recognition surveillance. But some trans-humanists say it's time to embrace the end of privacy as we know it.
presented by
group: Reason Foundation
tag line: Exploring new ways of living in an increasingly individualistic world.
web: home
4. |
publication: Murtaza’s Workshop
tag line: Weekly videos on robotics + AI projects.
web: home • channel
featurette title: tutorial: How to perform facial recognition
read | learning materials
watch | featurette
— summary —
In this video we learn how to perform facial recognition with high accuracy. We first briefly go through the theory and the basic implementation. Then we create an attendance project that uses a webcam to detect faces, and records attendance live in a Microsoft Excel sheet.
presented by
group: Murtaza Hassan
tag line: Learning made easy: robotics + AI
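The full tutorial pipeline needs a webcam and a face-recognition library; as a hedged sketch of just the attendance-recording step — with hypothetical helper names, and CSV output (which Excel opens) standing in for a native Excel sheet — it might look like:

```python
import csv, io
from datetime import datetime

def mark_attendance(name, seen, now=None):
    """Record a recognized name once per session, with a timestamp.
    `seen` maps name -> first-seen time; returns True on first sighting."""
    if name in seen:
        return False
    seen[name] = (now or datetime.now()).strftime("%H:%M:%S")
    return True

def to_csv(seen):
    """Dump the attendance log as CSV text — openable in Excel."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "time"])
    for name, t in sorted(seen.items()):
        writer.writerow([name, t])
    return buf.getvalue()

# Simulate names recognized on successive webcam frames — duplicates
# from repeated sightings are logged only once.
seen = {}
for name in ["alice", "bob", "alice", "carol", "bob"]:
    mark_attendance(name, seen, now=datetime(2021, 11, 10, 9, 0, 0))
print(to_csv(seen))
```

In the real project, the per-frame names would come from a face-recognition model comparing webcam frames against enrolled images, rather than from a hard-coded list.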
— notes —
AI = artificial intelligence
NN = neural network
DL = deep learning
info-sec = information security
pentest = penetration testing device
— contents —
~ book
~ summary
~ author profile
~ listening
~ reference
— book —
book title: Troublemakers
deck: Silicon Valley’s coming of age.
author: by Leslie Berlin PhD
date: 2018
This book is available at fine book-sellers.
Amazon | book
Barnes + Noble | book
Books-a-Million | book
IndieBound | book
— summary —
The story of 7 high-tech pioneers. This is a richly told narrative of the Silicon Valley generation that launched 5 major tech industries in 7 years, laying the foundation for today’s world.
Written by journalist Leslie Berlin PhD, project historian at the Silicon Valley Archives of Stanford Univ.
At a time when the 5 most valuable companies on the planet are high-tech firms — Troublemakers is the story of how we got here. This is the gripping history of 7 pioneers of Silicon Valley in the 1970s + early 1980s. Together, they worked across industries to bring tech from deep inside government offices and university labs — to mainstream life.
Meet the people and their stories.
In her book Troublemakers respected author Leslie Berlin PhD introduces the people + stories behind the birth of the web and the micro-processor — plus these famous companies:
In 7 years — 5 major industries were born: personal computing, video games, bio-technology, modern start-up investment, and enterprise data-base systems. Stanford University began licensing faculty innovations to business, and the Silicon Valley tech community began to have influence in modern US politics.
The mavericks who invented our future.
Together, these troublemakers re-wrote the rules and invented our future. Featured are well-known Silicon Valley trailblazers including these legendary people:
name: Steve Jobs — co-founder of Apple
bio: Visionary leader who began the personal digital device revolution.
name: Regis McKenna — marketing guru
bio: Instrumental in launching the most innovative products of the computer age.
name: Don Valentine — early investor
bio: Called the grand-father of Silicon Valley venture capital.
name: Al Alcorn — engineer at Atari
bio: Pioneer of the first successful video game.
name: Sandra Kurtzig — founder of ASK Group
bio: One of the first female software entrepreneurs.
name: Bob Taylor — inventor at Xerox
bio: An internet pioneer who helped invent the personal computer.
name: Fawn Alvarez — Chief of Staff at ROLM
bio: Progressed from a factory assembler to an executive.
name: Robert Swanson — co-founder of Genentech
bio: A leading bio-tech investor.
name: Larry Ellison — co-founder of Oracle
bio: Software business magnate.
name: Mike Markkula — CEO at Apple
bio: Early investor in personal computing.
name: Niels Reimers — founder of the Office of Tech Licensing • Stanford Univ.
bio: A transformative business thinker.
— author profile —
name: Leslie Berlin PhD
web: home
1. |
bio: project historian
school: Stanford Univ.
motto: The winds of freedom blow.
web: home ~ channel
group: Silicon Valley Archives
web: home
2. |
bio: fellow
school: Stanford Univ.
motto: The winds of freedom blow.
web: home ~ channel
project: the Center for Advanced Studies in the Behavioral Sciences
tag line: A leading incubator of human-centered knowledge, to collectively design a better future.
web: home
3. |
bio: advisor
group: the Smithsonian Institution
tag line: The world’s largest museum + research complex.
web: home ~ channel
group: the National Museum of American History
tag line: Empowering people to create a just + compassionate future by exploring, preserving, and sharing our past.
web: home ~ channel
group: Lemelson Center for the Study of Invention + Innovation
tag line: Exploring invention + innovation through stories, activities, and research.
web: home ~ channel
image | left
The historian, journalist, and book author Leslie Berlin PhD. She’s best-known for her subject matter expertise on the rise of the age of computing.
She follows the evolution of digital tech that’s become key to everyday life.
credit: Leslie Berlin • PhD
listening
1. |
series: History in 5
tag line: A weekly dose of history.
web: home • channel
featurette title: 5 character traits that made Silicon Valley what it is today
hostess: Leslie Berlin • PhD
— summary —
In the space of only 7 years and 35 miles, 5 major industries — personal computing, video games, bio-tech, venture capital, and advanced micro-electronic semi-conductor logic — were born. Stanford University historian Leslie Berlin PhD introduces the people + discusses the pervasive character traits behind the success.
presented by
group: Simon + Schuster
tag line: Find your next great read.
web: home • channel
reference
Apple | home • channel
Oracle | home • channel
Roche | home • channel
Genentech • by Roche | home • channel
Xerox | home • channel
PARC • by Xerox | home • channel
Stanford Univ. | home • channel
Stanford Univ. • Office of Tech Licensing | home • channel
Kleiner Perkins | home • channel
Sequoia Capital | home • channel
Ellison Foundation | home
— notes —
* Silicon Valley is colloquial for the southern San Francisco, CA bay area • United States
ASK Group = founders names — Ari + Sandra Kurtzig
ROLM = founders names — Richeson, Oshman, Loewenstern, Maxfield
PARC = Palo Alto Research Center • by Xerox
image | above
Pictured is an artist’s realistic illustration of the shape of the corona-virus.
The top stories.
Here’s a collection of key stories on the web + in broadcast media — on the topic of artificial intelligence (AI) software being used in several ways to take-on the global corona-virus pandemic.
The ways AI is being applied:
1. |
publication: the New York Times
tag line: All the news that’s fit to print.
web: home • channel
story title: AI versus the corona-virus
read | story
— summary —
A new consortium of top scientists will be able to use some of the world’s most advanced super-computers to look for solutions.
presented by
group: the New York Times Company
tag line: We seek the truth + help people understand the world.
web: home • channel
2. |
publication: Vox
tag line: Vox explains the news.
web: home • channel
blog: re-Code
tag line: Uncovering and explaining how our digital world is changing — and changing us.
story title: Don’t expect AI to solve the corona-virus crisis on its own
read | story
— summary —
How AI is helping find a cure. How optimistic should we be about the impact of artificial intelligence in a pandemic?
presented by
group: Vox Media
tag line: Dedicated to getting the future right.
web: home • channel
3. |
group: by Vox Media
tag line: Dedicated to getting the future right.
publication: Vox
tag line: Vox explains the news.
blog: re-Code
tag line: Uncovering and explaining how our digital world is changing — and changing us.
story title: Scientists are identifying potential treatments for corona-virus via artificial intelligence
read | story
— summary —
Researchers used AI to mine through existing medical information to find drugs that they say might be helpful for tackling the novel corona-virus.
4. |
group: by Vox Media
tag line: Dedicated to getting the future right.
publication: Vox
tag line: Vox explains the news.
blog: re-Code
tag line: Uncovering and explaining how our digital world is changing — and changing us.
story title: How AI is battling the corona-virus outbreak
read | story
— summary —
AI helped spot an early warning about the outbreak, and researchers have used flight traveler data to figure out where the novel corona-virus could pop up next.
5. |
group: by ACS
tag line: Chemistry for life.
publication: Chemical Engineering + News
tag line: The chemistry news that matters most.
story title: 2 groups use artificial intelligence to find compounds that could fight the novel corona-virus
read | story
— summary —
One group identifies an existing drug, the other finds 6 novel molecules — but the consequences of reporting possibly helpful molecules are unclear.
6. |
group: by IEEE
tag line: Advancing technology for humanity.
publication: Spectrum
tag line: Major trends + developments in tech, engineering, and science.
blog: the Human OS
tag line: A bio-medical engineering blog on: wearable sensors, big-data, implanted devices, and personalized medicine.
story title: 5 companies using AI to fight corona-virus
read | story
— summary —
Deep learning models predict old and new drugs that might successfully treat corona-virus.
7. |
group: by CBS + Viacom
tag line: Truly premium content, at true scale.
publication: ZD Net
tag line: Trends, tech, and opportunities that matter to IT professionals + decision makers.
story title: AI and the corona-virus fight: how artificial intelligence is taking-on
read | story
— summary —
From moderating social media to unpicking the very essence of corona-virus, AI is helping tackle the pandemic in all manner of ingenious ways.
8. |
group: by Forbes Media
tag line: World-wide magazines on business, finance, marketing, science, tech, investments, and entrepreneurship.
publication: Forbes
tag line: The defining voice of entrepreneurial capitalism.
blog: Cognitive World
tag line: A think-tank, knowledge hub, and eco-system for AI transformation.
story title: How artificial intelligence can help fight corona-virus
read | story
— summary —
Artificial intelligence and genetic applied science are making it easier, faster, and cheaper to understand — how the virus spreads, how to manage it, and how to contain its devastating effects.
IMAGE
— notes —
AI = artificial intelligence
C+EN = Chemical Engineering + News
ACS = American Chemical Society
CBS = Columbia Broadcasting System
IEEE = Institute of Electrical + Electronics Engineers
ZD = Ziff Davis
— contents —
~ letter | from Ray Kurzweil
~ the book
~ about | the book — by publisher
~ about | the author — by publisher
~ author’s writings
— letter —
Dear readers,
I enjoyed the book the Secret Language of Cells by author Jon Lieff MD — it takes us on an exciting journey into a world where we can visualize elaborate conversations among immune cells, brain cells, gut cells, bacteria, and even viruses.
Lieff gives a wealth of examples for his thesis that this cellular signaling is the basis of life. It’s a must-read for anyone seeking to understand modern biology and advanced medical science. It’s equally important for those of us who wonder — as I do — how this ubiquitous information transfer in the form of cellular conversations might be related to the emergence of intelligence + consciousness.
— Ray Kurzweil
book title: the Secret Language of Cells
deck: What biological conversations tell us — about the brain-body connection, the future of medicine, and life itself.
author: by Jon Lieff MD
date: 2020
presented by
Jon Lieff MD | home ~ channel
tag line: Searching for the mind.
visit | blog
visit | resources
This book is available at fine book-sellers.
Amazon | Barnes + Noble | Books-a-Million | IndieBound
about | the book
by publisher
An introduction.
Your cells are talking about you. Right now, both your inner and outer worlds are abuzz with chatter among living cells of every possible kind — from those in your body and brain to those in the environment around you. From electrical alerts to chemical codes, the greatest secret of modern biology, hiding in plain sight, is that all of life’s activity boils down to one thing: information transfer in the form of cellular conversations.
While cells are commonly considered the building block of living things, it’s communication between cells that brings us to life — controlling our bodies and brains, determining whether we are healthy or sick, and influencing how we think, feel, and behave. This conversation has determined all of biology, evolution, and the emergence of intelligence.
Different cells speaking the same language.
In the Secret Language of Cells, doctor and neuroscientist Jon Lieff MD lets us listen-in on these conversations, and reveals their significance for everything from mental health to cancer. He explains the surprising science of how very different cells — bacteria and brain cells, blood cells and viruses — all speak the same language. This has long been over-looked: because scientific journals use jargon that’s hard to understand across disciplines, much less by the general public.
Lieff presents a fascinating, accessible look into cellular communication science — a ground-breaking and comprehensive exploration of this biological phenomenon. Discover the intriguing lives of cells as they ask questions, get answers, give feedback, gather information, call for each other, and make complex decisions.
Understanding life.
During infections, immune T-cells tell brain cells that we should “feel sick” and lie down. Cancer cells warn their community about immune and microbe attacks. Gut cells talk with microbes to determine which are friends — and which are enemies. And microbes talk with each other — and with much more complicated human cells — in ways that determine which medicines work, and which will fail.
With applications for immunity, chronic pain, weight loss, depression, cancer treatment, and virtually every aspect of health + biology — cellular communication is revolutionizing our understanding not just of disease, but of life itself.
author’s writings
1. |
publication: Scientific American
read | stories by Jon Lieff MD
presented by
Scientific American | home ~ channel
tag line: Expertise, insights, and illumination.
banner: Celebrating 175 years of discovery.
Springer Nature | home ~ channel
tag line: We’re a world-leading research, educational, and professional publisher.
banner: 180 years of progress + 180 years of discovery
2. |
platform: Science 2.0
read | stories by Jon Lieff MD
presented by
Science 2.0 | web ~ channel
tag line: the world’s best scientists + the internet’s smartest readers
about | Jon Lieff MD
by publisher
Jon Lieff MD is a neuro-psychiatrist who earned his medical degree from Harvard Univ. Known as an innovator in several medical fields, he pioneered the creation of integrated treatment units that focus on complex patients with combined medical, psychiatric, and neurological problems.
He built some of the first geriatric medical + psychiatry hospital units, and the largest geriatric treatment network in New England, which he directed for 25 years. He also created specialized treatment programs for brain-injured patients.
Jon Lieff MD is an expert in the field of geriatric psychiatry — and a Distinguished Life Fellow of the American Psychiatric Assoc. While he was president of the American Assoc. for Geriatric Psychiatry (AAGP), he helped found the major journal in that field — the American Journal of Geriatric Psychiatry — and was its consulting editor for 10 years.
He helped found the Geriatric Psychiatry Committee and the High Technology Committee for the Massachusetts Psychiatric Society. He’s a member of several American Psychiatric Assoc. committees — plus chaired their committee on tele-medicine. Lieff has been studying the question: where can the mind be found in nature?
At first, his inquiry related to neuroscience and the interactions of psychiatric, neurological, and medical conditions. Then he expanded to include intelligence in a wide range of animals — and eventually individual cells, microbes, and viruses.
To share his extensive research, Jon Lieff developed his website — Searching for the Mind — where he posts weekly notes on neuroscience, molecular biology, microbiology, immunology, cancer, and related fields. His blog has a solid following in both the scientific + lay communities. His readers include: physicians, scientists, professors, authors, journal editors, and spiritualists.
Lieff has been featured on the television show 20 / 20 — and the magazines Newsweek and People. He wrote 2 of the first books on high-tech in psychiatry for the American Psychiatric Press. He’s published 20+ professional articles — and lectures widely on: neuro-psychiatry, neuroscience, psycho-pharmacology, dementia, depression, and medical tech.
His latest book is the Secret Language of Cells — synthesizing 12 years of analysis of scientific literature in a clear and understandable way — for both general science readers + scientific experts. He blogs about cellular biology, neuroscience, and microbiology — with special emphasis on conversation between cells.
— contents —
~ story
~ by definition
~ by the numbers
~ reading
— story —
Millions of people take approx. 5 or more medications a day — but testing the many side-effects of those pharmaceutical drug combinations has historically been difficult. Researchers at Stanford Univ. have invented a way to predict side-effects using computer modeling based-on artificial intelligence. The team explained that most drug combinations (called poly-pharmacy) have never been systematically studied.
Their AI software system is called Decagon — and they say it can help physicians make better decisions about what drugs to prescribe. It could also help researchers find better combinations of prescription drugs to treat complex diseases.
Limited knowledge.
With so many prescription drugs currently on the pharmaceutical market: “it’s practically impossible to test a new drug in combination with all other drugs — because just for one drug that would be 5,000 new experiments,” said Marinka Zitnik PhD — a researcher on the project.
Zitnik said: “With some new drug combinations we don’t know what will happen.” The researchers explained that poly-pharmacy side-effects happen because of drug -to- drug interactions — the effects of one pharmaceutical drug may change (positively or negatively) if it’s taken with another drug.
The knowledge of medical drug interactions is often limited — because the complex relationships between many pharmaceutical combinations are only rarely observed. Discovering poly-pharmacy side-effects is a serious challenge that’s important for patient health.
Tracking how pharmaceutical drugs affect proteins.
The researchers created a data-base containing descriptions of how 19,000+ proteins found in the human body interact with each other — and how various drugs affect these proteins. Using 4+ million known associations between drugs and their side-effects, the team crafted a method to identify patterns in how side-effects arise — based-on the way pharmaceutical drugs interact with proteins.
With that method, the system could predict the outcome of taking 2 drugs together. To evaluate their method, the group looked to see if its predictions came true. In many cases, they did. For example: there was no existing indication that the combination of a cholesterol drug named Lipitor and a blood-pressure medication called Norvasc could lead to muscle inflammation. But Decagon predicted that it would, and that foresight was proven correct.
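The core idea above can be sketched in a few lines. This is an illustrative toy only — not the actual Decagon model, which is a deep graph convolutional network trained on millions of known associations. The drug names, protein IDs, and the simple overlap score below are all invented for the example.

```python
# Toy sketch: represent each drug by the proteins it affects, and flag a
# drug pair as a candidate for interaction side-effects when their protein
# targets overlap, or are linked in the protein-interaction network.

# Hypothetical data: drug -> set of proteins it affects.
drug_targets = {
    "drug_A": {"P1", "P2", "P3"},
    "drug_B": {"P3", "P4"},
    "drug_C": {"P5"},
}

# Hypothetical protein-protein interactions (undirected pairs).
protein_links = {frozenset({"P2", "P4"}), frozenset({"P1", "P5"})}

def interaction_score(drug_x, drug_y):
    """Count shared targets, plus cross-links between the two target sets."""
    tx, ty = drug_targets[drug_x], drug_targets[drug_y]
    shared = len(tx & ty)
    cross = sum(1 for a in tx - ty for b in ty - tx
                if frozenset({a, b}) in protein_links)
    return shared + cross

# drug_A and drug_B share protein P3, and are linked via P2-P4 -> score 2.
print(interaction_score("drug_A", "drug_B"))  # -> 2
print(interaction_score("drug_B", "drug_C"))  # -> 0
```

A real system replaces this hand-counted score with a learned model over the full graph of 19,000+ proteins and the known drug / side-effect associations.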
Looking ahead.
The team hopes to extend their results to include more multiple-drug interactions. They aim to create a user-friendly tool that gives physicians guidance on whether it’s a good idea to prescribe a particular pharmaceutical drug to a particular patient — and to help researchers develop drug protocols for complex diseases, with fewer side-effects.
by definition | what is poly-pharmacy
pol • y • phar • ma • cy — noun
— the simultaneous prescription of multiple pharmaceutical drugs — to treat a single condition.
— the simultaneous use of multiple pharmaceutical drugs by a single patient — for one or more conditions.
— by the numbers —
According to the CDC:
source: CDC
reading
1. |
group: by Stanford University
motto: The wind of freedom blows.
story title: AI helps Stanford Univ. computer scientists predict the side effects of millions of drug combinations
read | story
— summary —
Millions of people take upwards of 5 medications a day, but testing the side-effects of such combinations is impractical. Now Stanford Univ. computer scientists have figured out how to predict side-effects using artificial intelligence.
2. |
group: by Mark Allen Group
tag line: Connecting specialist audiences with critical information.
tag line: we inform • we educate • we inspire • we engage • we know our markets • we enable your business
publication: the Engineer
tag line: First for technology + innovation.
story title: Decagon AI system predicts side-effects of drug combinations
read | story
— summary —
Stanford Univ. researchers have developed an AI tool called Decagon that can predict the potential side-effects of drug combinations.
— notes —
AI = artificial intelligence
CDC = Centers for Disease Control + Prevention • United States
US = United States
MAG = Mark Allen Group
— book —
book title: the Synthetic Age
deck: Out-designing evolution, resurrecting species, and re-engineering our world.
author: by Christopher Preston • PhD
date: 2018
this book on Good Reads | visit
IMAGE
— summary —
This book imagines a future where humans fundamentally re-shape the natural world — using nano-tech, synthetic biology, de-extinction, and climate engineering.
We’ve all heard there are no longer any places left on Earth untouched by humans. The significance of this goes beyond statistics documenting melting glaciers and shrinking species counts. It signals a new geological epoch. In his book the Synthetic Age, author Christopher Preston argues that what’s most startling about this coming epoch isn’t just how much impact humans have had — but how much deliberate shaping we’ll begin to do.
Dawn of the Synthetic Age.
Emerging tech promises to give humanity the power to control nature’s basic operations. We’re exiting the Holocene and entering the Anthropocene. But most importantly — we’re leaving behind the time when planetary change is the unintended consequence of unbridled industrialism. A world designed by engineers + technicians is the birth of the planet’s first Synthetic Age.
the book explores a range of tech that could re-configure Earth’s bio-sphere:
and looks at climate engineering attempts:
Our purpose-built future.
What does it mean when humans shift from being caretakers of the Earth — to being its shapers? Who should we trust to decide the contours of our synthetic future? These questions are too important to be left to chance.
about | Christopher Preston • PhD
1. |
group: by Carnegie Council
tag line: For ethics in international affairs.
publication: Ethics + International Affairs
tag line: —
read | profile: Christopher Preston • PhD
— summary —
We help close the gap between theory + practice.
2. |
group: by Center for Humans + Nature
tag line: Expanding our natural + civic imagination.
tag line: asking questions • inspiring change
publication: Minding Nature
tag line: A journal exploring conservation values + the practice of ecological democratic citizenship.
read | profile: Christopher Preston • PhD
— summary —
We share ideas that foster a socially + ecologically inter-connected world.
reading
1. |
group: by Aeon Media Group
tag line: A world of ideas.
publication: Aeon
tag line: A magazine of ideas + culture.
story title: Forget the Anthropocene, we’ve entered the synthetic age
author: by Christopher Preston • PhD
read | story
— summary —
There’s nowhere on Earth free from the traces of human activity. These planetary changes have been described by scientists as the end of one geological epoch: the Holocene — and the start of the next: the Anthropocene. In this new “human age” — civilization’s impact on the oceans, land, and atmosphere has become a feature of Earth.
Society is changing how the planet functions. Powerful new tech signals a potential take-over of Earth’s most basic operations by humans. From this time forward: bio-tech + climate engineering will transform the planet into an increasingly synthetic whole.
— notes —
image | above
Researchers have successfully trained an AI software program to detect the presence of Alzheimer’s disease in people — by looking at the way they talk. The tool relies on the fact that patients with Alzheimer’s tend to use language differently than healthy people.
— contents —
~ story
~ by definition
~ pages
~ reading
— story —
Researchers from Stevens Institute of Technology designed an artificial intelligence software tool that can diagnose Alzheimer’s with 95% accuracy, reducing the need for expensive diagnostic scans or in-person testing.
The software program is also able to document + explain its conclusions, so human experts can check the accuracy of its diagnosis.
The tell-tale signs
Some tell-tale language signs the AI software can detect:
The project was developed by K.P. Subbalakshmi PhD. She’s the founding director of the Stevens Institute of Artificial Intelligence — and a professor of electrical + computer engineering. She said:
This is a real breakthrough. We’re opening an exciting new field of research, and making it easier to explain to patients why the AI algorithm came to the conclusion that it did — while diagnosing patients. This is absolutely state-of-the-art. Our AI software is the most accurate diagnostic tool currently available. This increases our ability to trust an AI system with important medical diagnosis.
— K.P. Subbalakshmi PhD
Alzheimer’s disease can affect a person’s use of language. By using AI software that learns over time — called a “convolutional neural network” — Subbalakshmi and her students developed a tool that accurately identifies well-known, tell-tale signs of Alzheimer’s — by detecting subtle language patterns that could easily be overlooked.
Tracking human language
Subbalakshmi and her team trained their algorithm using text produced by both healthy subjects and known Alzheimer’s patients — as they described a drawing of children stealing cookies from a jar. Using tools developed by Google, Subbalakshmi and her team converted each sentence into a unique number sequence — called a vector — representing a specific point in a 512-dimensional space.
With this approach, complex sentences can be assigned a concrete number value. This makes it easier to analyze structural + thematic relationships between sentences. By using those vectors along with hand-crafted features identified by subject matter experts — the AI software system gradually learned to spot similarities + differences between sentences spoken by healthy or unhealthy subjects. It can determine — with remarkable accuracy — the probability that a sample of speech belongs to an Alzheimer’s patient.
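As a rough illustration of the vector approach, here is a toy nearest-centroid classifier. It is not the team’s convolutional neural network: the 3-dimensional vectors below stand in for the 512-dimensional sentence encodings described above, and every value is made up for the example.

```python
# Toy sketch: classify a speech sample by which class centroid its
# sentence vector sits closer to. Real systems use learned models over
# 512-dimensional embeddings; these tiny vectors are invented stand-ins.
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical embeddings of transcribed speech samples.
healthy = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
patient = [[0.1, 0.9, 0.4], [0.2, 0.8, 0.5]]

c_healthy, c_patient = centroid(healthy), centroid(patient)

def classify(vec):
    """Label a new sample by its closer class centroid."""
    return "alzheimers" if dist(vec, c_patient) < dist(vec, c_healthy) else "healthy"

print(classify([0.15, 0.85, 0.45]))  # -> alzheimers
print(classify([0.85, 0.15, 0.05]))  # -> healthy
```

The modular design the team describes corresponds to adding new hand-crafted features alongside the embedding before classification.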
The software can also easily incorporate new Alzheimer’s detection criteria identified by other research teams in the future — so it will become more accurate over time.
The algorithm itself is incredibly powerful, we’re only constrained by the data available to us. We designed our system to be both modular and transparent. If other researchers identify new markers of Alzheimer’s, we can simply plug those into our architecture to generate even better results.
This method can be used to detect other medical conditions. When we get more + better data, we’ll be able to create streamlined, accurate AI software tools to diagnose many other illnesses too.
— K.P. Subbalakshmi PhD
Robust diagnostic ability in the future
The next step is to train the AI software on a much bigger volume of sample text. In the near future, AI software could diagnose Alzheimer’s based on any sample of text — from a personal e-mail, to a social-media post. To accomplish that goal, an algorithm needs to be trained on a large volume of sample texts — of different types — spoken or written by diagnosed Alzheimer’s patients. With a larger sample set containing the tell-tale language markers of Alzheimer’s disease, the software can become more familiar with what to look for.
Subbalakshmi is programming her software to diagnose patients using other languages. Her team is also exploring ways that other neurological medical conditions — like aphasia, stroke, traumatic brain injury, and depression — can affect a patient’s use of language.
by definition | what is explainable artificial intelligence?
the Enterprisers Project | home
the Enterprisers Project | What is explainable AI?
— excerpt —
Explainable AI (artificial intelligence) means humans can understand the path a software system took to make a decision. Let’s break-down this concept in plain English, and explore why it matters.
AI software — that uses computational techniques like machine learning / deep learning — takes inputs and then produces outputs (or makes decisions) with no decipherable explanation or context. The system makes a decision or takes some action, and we don’t necessarily know why or how it decided. The system just does it, based on instructions the original programmer coded into the software program — instructions that are invisible to the user.
That’s called the “black box” model of AI, and it’s mysterious. In some cases, that’s just fine. In other contexts, it’s plenty ominous. For small programs like AI chatbots or sentiment analysis of social feeds, it doesn’t really matter if the AI system operates in a black box. But for software programs with a big human impact — autonomous vehicles, aerial navigation + drones, military applications — being able to understand the AI software’s decision-making process is mission-critical.
Enter “explainable AI” — sometimes known as “interpretable AI” or by the acronym XAI. As the name suggests, it’s AI that can be explained and understood by humans.
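A minimal sketch of the contrast: instead of returning a bare decision, an explainable system also returns the reasoning behind it. The loan-approval function, feature names, and thresholds below are invented purely for illustration.

```python
# Toy illustration of the "explainable" idea: the model returns its
# decision together with the human-readable rule that produced it,
# instead of a bare black-box answer.
def approve_loan(income, debt_ratio):
    """Return (decision, explanation) for a hypothetical loan check."""
    if debt_ratio > 0.5:
        return False, f"rejected: debt ratio {debt_ratio} exceeds 0.5"
    if income < 30_000:
        return False, f"rejected: income {income} below 30,000 threshold"
    return True, "approved: passed debt-ratio and income checks"

decision, why = approve_loan(income=45_000, debt_ratio=0.6)
print(decision, "-", why)  # -> False - rejected: debt ratio 0.6 exceeds 0.5
```

Real XAI work aims to recover this kind of traceable reasoning from far more complex learned models, not just from hand-written rules.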
read | full definition
on the web | pages
Stevens Institute of Technology | home
Stevens Institute of Technology | YouTube
tag line: The innovation university.
on the web | pages
K.P. Subbalakshmi PhD | home
reading
group: by Stevens Institute of Technology
tag line: The innovation university.
story title: AI tool promises faster, more accurate Alzheimer’s diagnosis
read | story
— summary —
Stevens Institute of Technology team uses explainable AI to address trustability of AI systems in the medical field.
group: by Xtelligent Healthcare Media
tag line: Using data to deliver relevant content to our readers.
publication: Health IT Analytics
story title: AI tool diagnoses Alzheimer’s with 95% accuracy
read | story
— summary —
The algorithm can detect subtle differences in the way people with Alzheimer’s disease use language.
— notes —
AI = artificial intelligence
XAI = explainable artificial intelligence
IT = information tech
image | above
The hand-held TytoHome health care exam device.
credit: TytoCare
— in this post —
~ story
~ featurettes
~ benefits
~ by the numbers
~ get the device
~ reading
— story —
TytoCare’s smart-home styled device named TytoHome is an award-winning, hand-held, medical tool kit. It can conduct basic, all-in-one health check-ups from the comfort of your own dwelling or office — and connect with your doctor remotely.
TytoHome is an electronic, portable, at-home exam kit plus smart-phone app — that lets you perform guided medical tests with a health-care provider. You can conduct exams on the heart, lungs, abdomen, ears, throat, skin, and body temperature. You can see and hear your physician using the device. Results appear instantly on your physician’s screen.
Then you can receive a diagnosis, a treatment plan, and prescriptions — anytime + anywhere. TytoHome helps your physician quickly and conveniently diagnose common conditions like: ear infection, head cold, sore throat, fever, cough, sinus + chest congestion, stomach ache, bug bites, and rashes.
To use TytoHome all you need is a smart-phone or tablet — plus a wi-fi internet connection. You can record your exam and send it to your doctor, or conduct a live exam guided by your physician.
And if you need a doctor, TytoCare has a trusted physicians network available 24 hours a day / 365 days a year — for expert help. TytoCare and its licensed medical partners give you the comfort of on-demand medical exams — with the security of fully qualified + board-certified physicians.
You can use TytoHome to:
on the web | pages
TytoCare | home
TytoCare | YouTube channel
TytoCare | TytoHome
TytoCare | featurette — TytoPro
TytoCare | featurette — TytoClinic
tag line: Always know with Tyto.
tag line: Your on-demand medical exam.
— featurette —
group: by TytoCare
featurette title: meet TytoHome • no. 1
year: 2020
— summary —
TytoHome by TytoCare is your on-demand + in-home medical exam.
— featurette —
group: by TytoCare
featurette title: meet TytoHome • no. 2
year: 2020
— summary —
TytoHome by TytoCare is your on-demand + in-home medical exam.
— benefits —
certainty:
TytoHome goes beyond a phone or video chat with a doctor — by providing an on-demand, clinic-quality medical exam, right from your home, so you can know with certainty what to do next.
peace of mind:
Debates about whether to rush to the emergency room are over. With TytoHome, you’ll know immediately if action needs to be taken — or if you can go back to sleep.
convenience:
See a physician from the comfort of home — without waiting for an appointment or travelling to the clinic. Especially during times when going out is risky with inclement weather or community-wide infections.
security:
TytoHome uses a secure platform that only you and the health-care provider can access — and in full compliance with international health data standards and laws. So you can be certain your personal info is safe + secure.
— by the numbers —
* as of year 2019
— get the device —
In the United States: Best Buy is the current + exclusive retail shop offering the TytoHome medical exam kit for approx. $300 — for home customers. The device is FDA cleared + reviewed and clinically tested.
TytoCare’s goal is to upgrade health-care delivery with on-demand medical exams and tele-health visits — and also offers health system services for professionals.
on the web | pages
Best Buy | home
Best Buy | product: TytoHome • by TytoCare
tag line: expert service + unbeatable price
tag line: Let’s talk about what’s possible.
reading:
1. |
publication: Wired
tag line: —
story title: The TytoHome medical kit by TytoCare lets doctors monitor patients remotely
read | story
— summary —
Soon this easy, in-home diagnostic device will be standard home equipment for every family.
group: by Conde Nast
tag line: —
2. |
publication: Time
tag line: —
list title: Best Inventions of 2019
category: health-care
story title: TytoHome • an easy-access doctor
read | list
read | profile
— summary —
100 innovations making the world better, smarter, and even a little more fun.
— profile —
Getting to the doctor’s office isn’t always easy. But the creators of the TytoHome — by TytoCare — hope to eliminate that trip altogether. Its hand-held device measures vitals, examines the heart, lungs, ears, skin, and throat with special adapters, and video-conferences with a doctor to monitor the metrics in real time. The company’s CEO Dedi Gilad said: “It transforms primary care by putting health in the hands of consumers.”
— notes —
FDA = Food + Drug Administration • United States
image | above
Pictured is a bio-medical test called an antibiogram. Bacteria is grown in a glass lab tray called a petri dish. A range of antibiotics is applied to the dish — each one as a small, labeled disk (button).
After some time, researchers can examine the space around each button. They can see which antibiotics destroyed the bacteria — and which did not.
IMAGE
— contents —
~ story facts
~ reading
— story facts —
A powerful antibiotic was discovered using an artificial intelligence software tool.
part 1:
part 2:
part 3:
part 4:
image | above
A petri dish in a bio-medical lab holds growing bacteria that was treated with a test of antibiotics.
image | above
The team tested the antibiotic they discovered — and named “halicin” — on infected lab mice, to see if it could clear difficult bacteria from their bodies. The experiment was successful.
reading
1. |
school: Massachusetts Institute of Technology
motto: mind + hand
web: home • channel
story title: Artificial intelligence yields new antibiotic
read | story
— summary —
A deep-learning model identifies a powerful new drug that can kill many species of antibiotic-resistant bacteria.
2. |
broadcast: BBC
tag line: To inform, educate, and entertain.
web: home • channel
story title: Scientists discover powerful antibiotic using AI
read | story
— summary —
In a world first, scientists have discovered a new type of antibiotic using artificial intelligence.
3. |
publication: the Guardian
tag line: The world’s leading liberal voice.
story title: Powerful antibiotic discovered using machine learning for the first time
read | story
— summary —
Team at MIT says halicin kills some of the world’s most dangerous strains.
presented by
group: Guardian Media Group • the Scott Trust
tag line: Available for everyone, funded by readers.
web: page • page
4. |
platform: the Conversation
tag line: Academic rigor, journalistic flair.
web: home • channel
story title: Deep learning AI discovers surprising new antibiotics
read | story
— summary —
New antibiotics are desperately needed. Yet few new antibiotics have entered the market of late, and even these are just minor variants of old antibiotics. While the prospects look bleak, the recent revolution in artificial intelligence offers new hope.
5. |
publication: Nature
tag line: ~
web: home • channel
story title: Powerful antibiotics discovered using AI
read | story
— summary —
Machine learning spots molecules that work even against “un-treatable” strains of bacteria.
presented by
group: Springer
tag line: International publisher: science, technology, medicine.
web: home • channel
6. |
publication: Financial Times
tag line: Make sense of a disrupted world.
web: home • channel
story title: AI discovers antibiotics to treat drug-resistant diseases
read | story
— summary —
Machine learning uncovers potent new drug able to kill 35 powerful bacteria.
presented by
company: Nikkei
tag line: fair + impartial
web: home
7. |
publication: Stat
tag line: Reporting from the frontiers of health + medicine.
web: home • channel
story title: Aided by machine learning, scientists find a novel antibiotic able to kill super-bugs in mice
read | story
— summary —
Now artificial intelligence is giving scientists a reason to dramatically expand their search into data-bases of molecules that look nothing like existing antibiotics.
presented by
group: Boston Globe Media
tag line: New England’s leading media company.
web: home
8. |
publication: Singularity Hub
tag line: ~
web: home • channel
story title: AI just discovered a new antibiotic to kill the world’s nastiest bacteria
read | story
— summary —
presented by
company: by Singularity Univ.
tag line:
web: home • channel
IMAGE
reference
company: IBM
tag line: Think.
web: home • channel
project: Watson for Health
tag line: Making progress in health, together.
web: home • learn about AI in medicine
— summary —
Researchers are building better software with artificial intelligence for applications in medicine. Gaining insights into diagnostics, health-care processes, treatment options, and patient outcomes — with the support of computers using a technique called machine learning.
publication: AI in HealthCare
tag line: Innovation to transform health-care.
web: home
presented by
company: TriMed Media
tag line:
web: home
IMAGE
— notes —
AI = artificial intelligence
ML = machine learning
MDR = multi-drug-resistant
MIT = Massachusetts Institute of Technology
IBM = International Business Machines
WHO = World Health Organization
— in this post —
~ story
~ chart
~ by the numbers
~ pages
~ reading
— story —
Researchers at Northwestern Univ. believe a smart-phone app they developed called Purple Robot can detect depression in people by tracking:
Using this smart-phone data collection app, the researchers found that the more time a person spends using a mobile phone, the more likely they are to be depressed. The average daily mobile phone use for depressed people was 68 minutes — and for non-depressed people it was 17 minutes.
Another pattern researchers noticed was related to a person’s location. Spending most of their time at home, and most of their time in fewer locations — as measured by GPS tracking on their mobile phone — was linked to depression. Also linked to depression: having a less regular day-to-day schedule, and going to work at different times each day.
The study resulted in a significant outcome: the researchers were able to predict if somebody was depressed — just by looking at this mobile phone data — with 87% accuracy.
The research study.
28 adult participants were recruited from the community to carry a mobile phone with the sensor data acquisition app — called Purple Robot — for 2 weeks. Before the 2 weeks started: to determine if they were depressed, the participants filled out a standard questionnaire that measures the signs + symptoms of depression.
The questionnaire is called PHQ-9 — it’s widely used by health-care providers, and relies on the patient self-reporting their feelings + experiences. The form asks about common symptoms used to diagnose depression — each rated on a scale of 0 -to- 3.
symptoms used to diagnose depression:
according to the PHQ-9 questionnaire:
Researchers analyzed the GPS locations and phone use for the 28 individuals — 20 females + 8 males, average age of 29 years-old — for 2 weeks. The sensor tracked GPS location every 5 minutes.
Then — using the GPS + phone use data collected from the mobile phones — the researchers correlated those results with the participants’ depression test results. They were able to see a pattern emerge, accurately matching the trends in the data with people who self-reported as depressed or not depressed.
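The study’s core signal, heavy phone use combined with time concentrated in few places, can be sketched in a few lines of Python. This toy score is not the researchers’ actual model: the entropy feature mirrors the GPS-cluster idea above, but the weights and thresholds are illustrative assumptions.

```python
import math

def location_entropy(cluster_visits):
    """Shannon entropy (bits) of time spent across GPS location clusters.
    Low entropy means time concentrated in few places, the pattern the
    study linked to depression."""
    total = sum(cluster_visits.values())
    return -sum((n / total) * math.log2(n / total)
                for n in cluster_visits.values() if n > 0)

def depression_risk_score(daily_phone_minutes, cluster_visits):
    """Toy 0..1 risk score (illustrative only): combines heavy phone use
    with low location diversity."""
    usage = min(daily_phone_minutes / 68.0, 1.0)   # 68 min: study's depressed-group average
    diversity = location_entropy(cluster_visits)
    immobility = 1.0 - min(diversity / 3.0, 1.0)   # 3 bits of entropy = roughly 8 evenly visited places
    return 0.5 * usage + 0.5 * immobility
```

A participant with 68 minutes of daily phone use who spends nearly all their time at home scores far higher than one with light use and varied locations, matching the pattern the researchers reported.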
chart | below
art: by Center for Behavioral Intervention Technologies • Northwestern Univ.
— chart —
Signs of depression surfacing.
David Mohr PhD is a clinical psychologist — and Director of the Center for Behavioral Intervention Technologies at Northwestern Univ.
He said: “The significance of this research is we can detect if somebody has depression — and the severity of their symptoms — without asking them any questions. We’re detecting depression with smart-phones — that are providing data passively.
“The research data showed that depressed people don’t usually go to many places — that reflects the loss of motivation seen with depression. So depressed people withdraw. They don’t have the motivation or energy to go out and do things.”
Depression is a mental health disorder that’s common, upsetting, and often recurring. It frequently goes un-detected + un-treated. Today’s mobile phones are everywhere — and have a large complement of sensors that can monitor the behavior patterns of their users. Those patterns might be a key to discovering symptoms of depression.
The mobile phone use data didn’t identify how people were using their phones. But Mohr suspects that people who spent the most time on their smart-phones were surfing the web or playing games — probably not talking with friends.
He said: “When people are on their smart-phones, they’re likely to avoid thinking about painful things, or difficult relationships. It’s an avoidance behavior that’s common with depression.”
Better than questionnaires.
The smart-phone data was better at detecting depression than daily questions the participants answered about how sad they were feeling on a scale of 1 -to- 10. Those answers are often un-reliable.
This research can lead to new ways of monitoring people at risk for depression — and enable health-care providers to intervene more quickly. If the app monitoring the mobile phone detects patterns of activity associated with depression — an alert can be sent to physicians to get people the care they need.
Future studies + conclusions.
Next, the researchers plan to study: if getting people to change the behaviors linked to depression can improve their mood.
Mohr said: “We’ll see if we can reduce symptoms of depression by encouraging people to: maintain a more regular routine, spend more time in a variety of places, and reduce their mobile phone use.”
In conclusion, mobile phone sensors offer clinical opportunities:
— by the numbers —
source: Hope for Depression Research Foundation
— types of depression —
The umbrella of depressive illness:
— signals of depression —
source: Hope for Depression Research Foundation
— questionnaire —
group: by Pfizer
form: Patient Health Questionnaire 9
for: diagnosing depression
read | questionnaire
format: Adobe
about | questionnaire
The form called Patient Health Questionnaire 9 (PHQ-9) — developed by pharmaceutical company Pfizer — is an important medical standard for diagnosing depression.
Patients fill-out the 9 question form — it takes just 5 minutes. They answer questions about symptoms + feelings they experience. The questionnaire gives physicians a good basis for discovering depression in patients who may not know how or why they’re suffering.
The form is a mainstay of health-care providers — but researchers are looking for better ways to detect depression without asking patients to self-describe their symptoms. Because that technique can be un-reliable. If depression goes un-detected, symptoms can increase and people don’t get the help they need to cope.
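For reference, standard PHQ-9 scoring sums the nine item ratings (each from 0, “not at all,” to 3, “nearly every day”) into a 0 -to- 27 total, and conventional cut-points map the total to a severity band. A minimal sketch:

```python
def phq9_severity(item_scores):
    """Score a PHQ-9 form: nine items, each rated 0-3.
    Returns the 0-27 total and its conventional severity band."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    total = sum(item_scores)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band
```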
on the web | pages
Center for Behavioral Intervention Technologies • Northwestern Univ. | home
Center for Behavioral Intervention Technologies • Northwestern Univ. | Purple Robot
tag line: Envisioning a world with effective, usable, and sustainable digital mental health services for all people.
motto: Whatsoever things are true.
— summary —
At the center, we evaluate behavioral intervention tech-enabled services. We explore new techniques for mental health and well-being — that are proven scientifically effective.
We analyze data from smart-phone sensors. An ever-increasing amount + variety of sensors are collecting and transmitting data about our lives. This data is used by companies for advertising — but can also help us with daily tasks. For example: sensor data from our mobile phones is collected to see best driving routes that avoid traffic. And step-counting sensors track walking fitness.
We’re learning how sensor data can improve life for people with mental health issues. Our “personal sensing” research harnesses sensor data collected from mobile devices — to identify behaviors + states associated with mental health. Then patterns are identified to improve digital mental health interventions like apps + bots.
about | Purple Robot
The Purple Robot app — built for the Android mobile operating system owned by Google — provides a real-time sensor data platform for collecting info about the smart-phone user, and their surroundings.
Purple Robot provides:
on the web | pages
Hope for Depression Research Foundation | home
Hope for Depression Research Foundation | organizations
Hope for Depression Research Foundation | help + resources
Hope for Depression Research Foundation | research
watch | featurette
tag line: Funding the best minds, to heal minds.
on the web | reading
group: Northwestern Univ.
story title: Your phone knows if you’re depressed
deck: Time spent on smart-phone + GPS location sensor data detect depression
read | story
motto: Whatsoever things are true.
on the web | reading
publication: Journal of Medical Internet Research
report title: Mobile phone sensor correlates of depressive symptom severity in daily life behavior
deck: An exploratory study
read | report
tag line: Advancing digital health research.
— notes —
GPS = global positioning system
PHQ-9 = patient health questionnaire — 9
CBITs = Center for Behavioral Intervention Technologies • Northwestern Univ.
NU = Northwestern Univ. • United States
— in this post —
~ story
~ by the numbers
~ featurettes
~ pages
~ reading
~ watching
— story —
The National Football League (NFL) is asking inventors, scientists, physicians, and engineers to make the game of football safer. 300 of the brightest minds from around the world gathered at a symposium to talk about changes that could be made — one of the key improvements will be a better helmet design.
The NFL’s initiative — called Head Health Tech — set aside $3 million in grants to support development of new helmet prototypes. The goal of the NFL helmet challenge is preventing traumatic head injury to players through innovation. The science teams are exploring:
Injuries are a mounting concern.
Traumatic brain and spine injuries — such as concussion, whiplash, fractured skull, and broken neck bones — have been an increasing concern for football players from youth to adult. These injuries affect both amateur and professional leagues. Head trauma from impact shock can lead to life-long disability and even death. In the 100 years since football became a signature American contact-sport, few improvements have been made to the safety gear that makes-up a standard football uniform.
The NFL said: “We see opportunities to change the paradigm of how helmets are designed — and use the world’s state-of-the-art from multiple fields. There are new approaches, new materials, new concepts.”
This is the league’s first helmet challenge. The winner of the contest will be awarded $1 million and also $2 million in grant funding to develop the prototype. The first round of proposals is due January 2020 — and final helmet prototypes are due May 2021.
Year 2020 marks the NFL’s 100th game season — and public scrutiny is intensifying, while legal teams and families demand that players be better protected from concussion. A team of neuro-scientists at a start-up called BrainGuard are working quickly to develop a better helmet that can prevent brain injuries.
image | above
The NFL’s helmet challenge is part of their Head Health Tech initiative.
art: by NFL
Designing for shock absorption.
Pressure from the new football season — and continued heart-breaking accounts from NFL players who suffered severe brain injuries — makes the roll-out of a better helmet increasingly urgent.
BrainGuard thinks its design could be a game-changer. Their team of engineers, led by neuro-scientist Robert Knight MD, bashes a prototype football helmet from every angle — hitting it again and again — to see how it absorbs and disperses energy before the blow impacts the brain.
The helmet features an inner and outer shell — connected by highly absorbent padding and a network of rubber struts that stretch, acting as shock absorbers. No matter where the helmet is hit, only the outer shell moves.
Knight explained: “Every time it does that, and it moves, these struts are absorbing some of the force — mitigating the amount of force that goes to the inner shell, which then of course goes directly to your skull and to your brain.”
The design specifically addresses rotational force injuries: the twisting and turning of the brain caused by severe blows to the head. When tested against other helmets, BrainGuard says the amount of rotational force was reduced by 25% -to- 45%.
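The 25% -to- 45% figure is a percent-reduction comparison of peak rotational force against a baseline helmet. The arithmetic, with made-up numbers (these are not BrainGuard’s lab values):

```python
def percent_reduction(baseline_peak, prototype_peak):
    """Percent reduction in peak rotational force, prototype vs. baseline."""
    return 100.0 * (baseline_peak - prototype_peak) / baseline_peak

# Hypothetical peaks in arbitrary units (illustrative only):
print(percent_reduction(100.0, 75.0))  # a 25.0 % reduction
```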
Life-threatening second impact syndrome.
The medical condition called second impact syndrome is a potentially catastrophic injury that occurs when a person — who already has a concussion — sustains a second head injury, before symptoms from the first concussion have healed. Concussion causes brain inflammation. If the patient sustains a second head injury during this vulnerable time, the brain can swell so much there isn’t enough space in the skull — this can destroy brain tissue. And in some cases lead to death.
Improved helmets must be designed to prevent the player from sustaining a concussion at all. Current helmets don’t provide protection from extreme impact, rotational forces, or the cumulative effect of multiple collisions.
image | above
Prototype of a helmet that absorbs shock + rotational forces.
photo: by BrainGuard
A need for new standards.
Football players’ parents and coaches welcome any safety improvements. The helmet is a critical piece of gear, like airbags in motor vehicles. And the same way airbags are now required by law in all automobiles, BrainGuard hopes its 2-shelled helmets will become standard equipment — not just for football players: but also for hockey, baseball, and cycling. Any sport or activity where a better helmet could reduce brain injuries.
The NFL has 1,700 players. But there are 1.2 million youth each year playing Pop Warner football, and high school + college football — who are equally at risk for head injury. 63,000 high school students in the United States suffer a traumatic brain injury per year, leaving many kids with persistent long-term physical + cognitive disability.
The World Health Organization predicts that traumatic brain injury will become a leading cause of death + disability in the world by year 2020.
featurette | watch
group: by Univ. of California + BrainGuard
featurette title: Building a better helmet
featurette | watch
group: by Univ. of Washington + Vicis
featurette title: Building a better helmet
— by the numbers —
statistics: by CDC
source: data
— from traumatic brain injury
— over a period of 1 year
* not including military activities
* for year 2013
featurette | watch
group: by CDC
featurette title: What is a concussion?
on the web | pages
CDC | home
CDC | Heads-Up
CDC | helmet safety
Heads-Up by CDC is an education pack about safe sports and activities for kids + teens.
The materials teach prevention, recognition, and response to head concussion and other serious brain injuries.
tag line: saving lives + protecting people
Hope for BrainGuard’s prototype helmet.
The helmet designed by BrainGuard has layers that move to protect the head.
BrainGuard has been awarded 13 United States patents + 6 foreign patents. Their innovative, multi-layered helmet protects against rotational shear forces that are a major contributor to traumatic brain injury (TBI). These damaging forces also play a role in the brain’s accumulation of toxic, malformed proteins seen in chronic traumatic encephalopathy (CTE) — a brain disease that forms from repetitive head blows.
Encephalopathy is a medical term meaning: brain disease, damage, or malfunction. It has a wide range of symptoms.
CTE is consistently found in people who’ve suffered multiple traumatic brain injuries — from accidents, sports, or military experiences. BrainGuard’s helmet design solution results in marked rotational force reduction in lab testing. The hope for the improved helmet is that it can prevent concussion.
featurette | watch
group: by Associated Press
series: Science Says
featurette title: How repeated head blows affect the brain
tag line: The definitive source for news.
— summary —
The medical term CTE stands for chronic traumatic encephalopathy. Researchers are tackling fresh questions about this long-term degenerative brain disease now that it has been detected in the brains of nearly 200 football players after death. As a new NFL season gets underway, here’s a look at what’s known about CTE.
We examine the evidence behind health + science claims by putting them into context. Our occasional series Science Says helps you dissect the latest research and why it matters.
image | above
left-side of photo — cross-sections of healthy human brain tissue.
right-side of photo — cross-sections of a human brain suffering from CTE.
This image from a medical research project at Boston Univ. clearly shows the pathology associated with CTE — a long-term brain disability that’s caused by traumatic brain injury like concussion.
photo: Boston Univ.
on the web | pages
BrainGuard | home
BrainGuard | press
tag line: Layers that move to protect — the magic is in the motion.
on the web | pages
Vicis | home
Vicis | YouTube channel
tag line: protect the athlete + elevate the game
on the web | pages
NFL | home
NFL | YouTube channel
Next Generation Stats • by NFL | home
Player Health + Safety • by NFL | home
Player Health + Safety • by NFL | helmet challenge
Player Health + Safety • by NFL | head health tech
tag line: play smart + play safe
on the web | pages
Concussion Legacy Foundation | home
Concussion Legacy Foundation | YouTube channel
tag line: We are solving the concussion crisis.
on the web | pages
Headway Foundation | home
Headway Foundation | YouTube channel
tag line: Real concussion progress.
on the web | pages
Boston Univ. | home
Boston Univ. | Chronic Traumatic Encephalopathy Center
tag line: Conducting high-impact, innovative research on chronic traumatic encephalopathy.
on the web | pages
Virginia Polytechnic Institute + State Univ. | home
Virginia Polytechnic Institute + State Univ. | helmet ratings
tag line: Translating research to reduce concussion risk.
on the web | reading
group: Associated Press
story title: NFL at 100
deck: Helmets go high-tech in quest for player safety
read | story
tag line: The definitive source for news.
on the web | reading
publication: Wired
story title: Football’s concussion crisis is awash with pseudo-science
deck: Products that offer a “seat belt” or “bubble wrap” for the brain claim to reduce head trauma
deck: If only the laws of physics worked that way
read | story
on the web | reading
publication: San Francisco Chronicle
story title: What will football helmets look like in future?
deck: Scientists may know
read | story
on the web | reading
publication: Boston
story title: Reebok launches the CheckLight: a head monitoring skull-cap
deck: The CheckLight monitors head trauma in real time
read | story
watch | featurette
publication: Boston
story title: Company makes high-tech football helmets worn by the pros
deck: Xenith is changing the football helmet game
read | story
on the web | reading
group: Univ. of California
story title: New helmet design can deal with sports twists + turns
deck: BrainGuard is building a better helmet
read | story
on the web | reading
broadcast: NBC
story title: CTE study finds evidence of brain disease in 110 out of 111 former NFL players
read | story
— summary —
Research on 202 former football players found evidence of brain disease in nearly all of the athletes: from the NFL and college sports — to high school students. The study is an important update on CTE, a brain disease linked with repeated head blows.
on the web | watching
broadcast: NBC
show title: Today
episode title: Football to Battlefield
deck: Helmet company Vicis expands focus from NFL — to include kids + military
watch | featurette
tag line: Share the moment.
— summary —
The high-tech helmets made by company Vicis are ranked best by the NFL — for reducing the impact of blows to the head. Already a top brand in world-wide professional + amateur sports for adults — Vicis also wants to make headgear to protect kids on the field. Eventually Vicis wants to take its helmet research to the battlefield, getting headgear to military troops in harm’s way.
on the web | watching
publication: Digital Trends
featurette title: Vicis creates the football helmet of the future
watch | featurette
tag line: Upgrade your lifestyle.
on the web | watching
platform: Patreon
group: by Entertain the Elk
featurette title: The evolution of football helmets
watch | featurette
tag line: Videos exploring art + entertainment.
— notes —
Univ. of CA = University of California • United States
Univ. of WA = University of Washington • United States
Virginia Tech = Virginia Polytechnic Institute + State Univ. • United States
ER = emergency room
TBI = traumatic brain injury
CTE = chronic traumatic encephalopathy
AP = Associated Press
NBC = National Broadcasting Company
NFL = National Football League
WHO = World Health Organization
CDC = Centers for Disease Control + Prevention • United States
— in this post —
~ story
~ by definition
~ pages
~ reading
— story —
The National Football League (NFL) is partnering with Amazon’s high-tech company Amazon Web Services (AWS) to get a deeper understanding of the game of football — with the goal to better predict + prevent player head injury like concussion.
The computer software field of artificial intelligence (AI) is transforming every major industry, including sports. Applied to football, AI is improving how player safety is visualized and assessed — shaping tech that can see how injuries happen.
Seeing complex patterns.
Since year 2017, the NFL has officially used Amazon’s data collection + management tools to build a significant data-base of statistical info — for better analysis of player performance and game outcomes.
That ongoing program — called Next Generation Stats — is expanding to tackle the growing concern of player safety. An enormous amount of data is captured by placing radio frequency identification (RFID) tags on players’ shoulder gear and the game ball. RFID uses electro-magnetic fields to automatically identify + track tags attached to objects. These tags electronically store info. Then ultra-wide-band signal receivers track player and ball movement down to the inch.
The NFL signed a partnership with Zebra to install RFID data sensors in players’ shoulder pads, helmets, and across NFL stadiums. These chips detect how fast a player is running — and how hard an impact a player has taken, which is important for medically diagnosing trauma to the head.
Behind every incredible football play are 1000s of data-points that could be missed: such as a player’s acceleration, football field location, and movement patterns. The NFL uses Amazon Web Services to track the scale, speed, and complexity of that data — in real-time.
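Speed and acceleration fall out of those position fixes by finite differences. A minimal sketch (an illustration, not the NFL’s actual pipeline), assuming each sample is a (time, x, y) tuple from the RFID receivers:

```python
def derive_motion(samples):
    """From time-stamped (t, x, y) position fixes, estimate per-interval
    speed and acceleration by finite differences."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        speeds.append((t1, dist / dt))                 # (time, speed)
    accels = [(t1, (v1 - v0) / (t1 - t0))              # (time, acceleration)
              for (t0, v0), (t1, v1) in zip(speeds, speeds[1:])]
    return speeds, accels
```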
So with Amazon’s machine learning + artificial intelligence — software tools that can sift through mountains of confusing or overwhelming data, and see key patterns in it — the NFL can visualize detailed action on the football field. The software uncovers insights, and expands the fan experience with a broad range of stats. The knowledge gained from the data also helps coaches improve game-play.
The NFL is expanding its partnership with Amazon Web Services — using Amazon’s pattern-recognition software and cloud computing products.
featurette | watch
group: NFL + Amazon Web Services
featurette title: A team-up to transform player health + safety
The digital athlete.
The most revolutionary component of the NFL + Amazon partnership is the creation of the Digital Athlete — a computer simulation model that can replicate infinite scenarios within the game environment.
The Digital Athlete simulation applies Amazon’s computer vision tech to the NFL’s data-sets. The data comes from many sources: like player position, play type, equipment choice, playing surface, environmental factors — and player injury info.
The knowledge that comes from seeing patterns in the matrix of this data will improve injury treatment. And eventually help predict + prevent injuries in football — and other sports.
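One way to picture a scenario-replicating model like the Digital Athlete is as a Monte Carlo loop over game factors. Everything below (the factor lists, the risk model, the numbers) is a hypothetical sketch for illustration, not the actual NFL + AWS system:

```python
import random

def simulate_plays(risk_model, factors, trials=10_000, seed=42):
    """Toy scenario simulator: sample random combinations of game factors,
    and tally how often the supplied risk model flags an injury."""
    rng = random.Random(seed)
    injuries = 0
    for _ in range(trials):
        scenario = {name: rng.choice(options) for name, options in factors.items()}
        if rng.random() < risk_model(scenario):
            injuries += 1
    return injuries / trials

# Hypothetical factors + risk model, purely illustrative:
factors = {
    "play_type": ["kick-off", "rush", "pass"],
    "surface": ["grass", "turf"],
}

def risk_model(scenario):
    base = 0.05 if scenario["play_type"] == "kick-off" else 0.02
    return base + (0.01 if scenario["surface"] == "turf" else 0.0)
```

Re-running the loop with a changed rule (say, removing kick-offs) and comparing the two injury rates is the kind of what-if question such a simulation answers.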
The future of football.
The expanded partnership combines the NFL’s extensive set of game data with Amazon’s tech.
the goals:
The engineering road-map.
The NFL also created the “engineering road-map.” It’s a $60 million initiative to engage teams — of bio-mechanical + bio-medical + bio-engineering researchers — that will develop solutions to predict + prevent player injury.
A research example.
To better understand how concussion-causing injuries happen on the football field: the Univ. of Virginia supervises a project to simulate player impacts with modified crash-test mannequins.
The research team compiled a review of reported concussions that happened in NFL games. They shared data with helmet manufacturers, designers, entrepreneurs, and universities to think-tank novel designs for protective gear.
Based on those results, the NFL changed the kick-off rule — this decreased reported concussions by 38 % on kick-off plays, compared to the prior 3-year average.
The next 100 years.
The NFL is trying to make football safer by:
Looking ahead to the league’s next 100 years, the NFL said:
We’re committed to re-imagining the future of football so that everybody wins — players, coaches, clubs, and fans. Our research will learn about the human body and how injuries happen. That knowledge will reach far beyond football. As we look ahead to our next 100 seasons, we’re proud of this endeavor.
— NFL
by definition | What is cloud computing?
The service called cloud computing is the on-demand delivery of information technology (IT) resources over the web with pay-as-you-go pricing.
Instead of buying, owning, and maintaining computer equipment such as physical data centers + servers, a customer can access hardware + software tech services — like computing power, storage, and data-bases — on an as-needed basis from a cloud provider.
some cloud computing providers:
visit | Amazon Web Services • by Amazon
visit | Azure • by Microsoft
visit | Alibaba Cloud • by Alibaba
visit | Google Cloud • by Google
visit | IBM Cloud • by IBM
visit | Oracle Cloud • by Oracle
visit | RackSpace Cloud • by RackSpace
on the web | pages
Zebra | home
Zebra | YouTube channel
Zebra | story: changing the NFL forever with RFID sensors
tag line: Capture your edge.
on the web | pages
Amazon | home
Amazon | YouTube channel
Amazon Web Services • by Amazon | home
Amazon Web Services • by Amazon | Next Generation Stats
Amazon Web Services • by Amazon | YouTube channel
on the web | pages
NFL | home
NFL | YouTube channel
Next Generation Stats • by NFL | home
Player Health + Safety • by NFL | home
Player Health + Safety • by NFL | helmet challenge
Player Health + Safety • by NFL | head health tech
tag line: play smart + play safe
on the web | reading
publication: Engadget • by Verizon
story title: Amazon + the NFL team-up to create a digital athlete simulation
deck: They’ll use it to test different game scenarios — and predict + prevent injuries
read | story
tag line: The original home for tech news + reviews.
on the web | reading
publication: Digital Trends
story title: Amazon + the NFL plan virtual games to understand real injuries
read | story
tag line: Upgrade your lifestyle.
on the web | reading
publication: Vox • by Vox Media
story title: This high-tech football could change the NFL
read | story
— notes —
AI = artificial intelligence
IT = information technology
RFID = radio frequency identification
gen = generation
stats = statistics
AWS = Amazon Web Services
NFL = National Football League
IBM = International Business Machines
image | above
A medical scan of a patient’s brain, labelled with an orange spot by artificial intelligence software — marking where it found a probable aneurysm.
from: Stanford Univ.
— note —
Dear readers,
Important breakthroughs happen ever-faster each year. We collect good writing from magazines + newspapers on lead headlines — so you can connect with essential stories on progress.
posted below:
topic | progress in bio-medicine |
story | Medical artificial intelligence software detects aneurysms. |
— facts —
an artificial intelligence software tool found aneurysms better than physicians:
part 1:
part 2:
part 3:
part 4:
image | above
A medical diagram illustrating an aneurysm in a human brain.
art: by Elizabeth Weissbrod • Weissbrod Studios
reading:
1. |
organization: Stanford Univ.
story title: Stanford Univ. researchers develop artificial intelligence tool to help detect brain aneurysms
read | story
— summary —
Radiologists improved their diagnosis of brain aneurysms with artificial intelligence software made by medical researchers + computer scientists.
2. |
publication: the Next Web
story title: Stanford Univ. new AI can help doctors spot brain aneurysms
read | story
tag line: Sharing, inventing, and advancing tech developments.
3. |
publication: Extreme Tech • by Ziff Davis
story title: Stanford Univ. latest AI helps doctors diagnose brain aneurysms more accurately
read | story
4. |
publication: Geek • by Ziff Davis
story title: New AI tool can help doctors detect brain aneurysms
read | story
5. |
publication: Psychology Today
story title: Stanford Univ. unveils AI tool for finding aneurysms
read | story
— summary —
Deep learning helps clinicians identify brain aneurysms with better accuracy.
6. |
publication: Radiology Business • by TriMed Media
story title: AI helps specialists improve cerebral aneurysm detection rates
read | story
tag line: For leaders navigating value-based care.
7. |
publication: Health Imaging • by TriMed Media
story title: AI detects missed aneurysms in magnetic resonance angiography with increased sensitivity
read | story
tag line: Insights in imaging + informatics.
8. |
publication: Forbes
story title: AI is not ready for the intricacies of radiology
read | story
on the web | pages
IBM | home
IBM | Watson for Health: home
IBM | Watson for Health: learn about AI in medicine
tag line: Making progress in health, together.
on the web | pages
American Stroke Assoc. | home
American Stroke Assoc. | about cerebral aneurysm
on the web | pages
Brain Aneurysm Foundation | home
Brain Aneurysm Foundation | about brain aneurysm
Stop the Pop • by Brain Aneurysm Foundation | home
tag line: Stop the pop.
on the web | pages
AI in HealthCare • by TriMed Media | home
tag line: Innovation to transform health-care.
— notes —
AI = artificial intelligence
ML = machine learning
MRI = magnetic resonance imaging
IBM = International Business Machines
* American Stroke Association is a division of American Heart Association
Uses science fiction to address a series of classic and contemporary philosophical issues, including many raised by recent scientific developments.
Explores questions relating to transhumanism, brain enhancement, time travel, the nature of the self, and the ethics of artificial intelligence.
Features numerous updates to the popular and highly acclaimed first edition, including new chapters addressing the cutting-edge topic of the technological Singularity.
Draws on a broad range of science fiction’s more familiar novels, films, and TV series, including “I, Robot,” “The Hunger Games,” “The Matrix,” “Star Trek,” “Blade Runner,” and “Brave New World.”
Provides a gateway into classic philosophical puzzles and topics informed by the latest technology.
Susan Schneider is an Associate Professor of Philosophy and Cognitive Science at the University of Connecticut and a faculty member in the technology and ethics group at Yale’s Interdisciplinary Center for Bioethics. Her work is on the nature of the self, which she examines from the vantage point of issues in philosophy of mind, artificial intelligence (A.I.), metaphysics, astrobiology, epistemology, and neuroscience. The topics she has written about most recently include the software approach to the mind, A.I. ethics, and the nature of the person. She is also a fellow with the Institute for Ethics and Emerging Technologies and the Center of Theological Inquiry in Princeton. Schneider is also a blogger for The Huffington Post.
—Publisher
— book —
book title: the Square and the Tower
deck: Networks + Power: from the Freemasons to Facebook
author: by Niall Ferguson
year: 2018
this book on Good Reads | visit
— summary —
A re-casting of the turning points in world history, including the one we’re living through, as a collision between old power hierarchies and new social networks.
Most history is hierarchical: it’s about emperors, presidents, prime ministers and field marshals. It’s about states, armies and corporations. It’s about orders from on high. Even history “from below” is often about trade unions and workers’ parties. But what if that’s simply because hierarchical institutions create the archives that historians rely on? What if we are missing the informal, less well documented social networks that are the true sources of power and drivers of change?
The 21st century has been hailed as the Age of Networks. However, in the Square and the Tower, Niall Ferguson argues that networks have always been with us, from the structure of the brain to the food chain, from the family tree to freemasonry. Throughout history, hierarchies housed in high towers have claimed to rule, but often real power has resided in the networks in the town square below. For it is networks that tend to innovate. And it is through networks that revolutionary ideas can contagiously spread. Just because conspiracy theorists like to fantasize about such networks doesn’t mean they are not real.
From the cults of ancient Rome to the dynasties of the Renaissance, from the founding fathers to Facebook, the Square and the Tower tells the story of the rise, fall and rise of networks, and shows how network theory–concepts such as clustering, degrees of separation, weak ties, contagions and phase transitions–can transform our understanding of both the past and the present.
Just as The Ascent of Money put Wall Street into historical perspective, so The Square and the Tower does the same for Silicon Valley. And it offers a bold prediction about which hierarchies will withstand this latest wave of network disruption–and which will be toppled.
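One of the network-theory concepts named above — degrees of separation — is simple enough to sketch in a few lines of code. This is a toy illustration only: the five people and their ties are hypothetical, and the breadth-first search simply counts the shortest chain of connections between two members of a network.

```python
from collections import deque

# Hypothetical toy social network, as an adjacency list.
network = {
    "Ada": ["Ben", "Cleo"],
    "Ben": ["Ada", "Dan"],
    "Cleo": ["Ada", "Dan"],
    "Dan": ["Ben", "Cleo", "Eve"],
    "Eve": ["Dan"],
}

def degrees_of_separation(graph, start, goal):
    """Breadth-first search: length of the shortest chain of ties."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for friend in graph[person]:
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # the two people are not connected at all

print(degrees_of_separation(network, "Ada", "Eve"))  # 3
```

In Ferguson’s terms, a small number of degrees of separation is what lets a revolutionary idea spread contagiously through the town square.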
on the web | pages
Niall Ferguson | home
]]>
event: Intersekt
theme: Fin-tech is going global.
season: autumn
date: September 7 — 8
year: 2022
place: Melbourne | Australia
visit | event website
presented by
group: FinTech Australia
web: home ~ channel
— summary —
Intersekt is the leading annual fin-tech industry conference gathering of: fin-techs, hubs, accelerators, policy-makers, regulators, investors, and advisors. Devoted to examining, discussing, and unlocking the potential of the fin-tech market.
Our program features top international speakers, free workshops, optional satellite events — plus extensive networking opportunities with social functions.
Hosted by FinTech Australia, Intersekt brings together an impressive line-up of global speakers on the future of fin-tech.
Intersekt is a melting pot between the banking sector and fin-tech industry. Intersekt is a platform for collaboration. We aim for attendees to leave the event with a larger network, and renewed notions on the industry’s future.
— the program —
about | FinTech Australia
We were founded by start-ups — and we work with founders, scale-ups, and the fin-tech eco-system. We represent our members + advocate for outcomes facilitating growth.
We support fin-tech innovation, adoption, disruption, and investment. Our success is reflected by our members’ achievements.
— our strategy —
— notes —
FSS = financial services sector
IPO = initial public offering
ML = machine learning
OB = open banking
SME = small + medium enterprise
RC = Royal Commission
EY = Ernst + Young
fin-tech = financial tech
insur-tech = insurance tech
insure-tech = insurance tech
image | above
Cerebras says its computer chip is the largest ever built: as big as a dinner plate — 100 times the size of a typical chip.
photo: by Cerebras
IMAGE
— note —
Dear readers,
Important breakthroughs happen ever-faster each year. We collect good writing from magazines + newspapers on the leading headlines — so you can connect with essential stories on progress.
below is:
topic | progress in computing |
story | A breakthrough processor invented for AI. |
— facts —
the world’s largest chip:
1. |
publication: the New York Times
story title: To power AI, start-up creates a giant computer chip
read | story
2. |
publication: BBC
story title: Cerebras reveals world’s largest computer chip for AI tasks
read | story
3. |
publication: VentureBeat
story title: Cerebras Systems unveils a record 1.2 trillion transistor chip for AI
read | story
4. |
publication: EE Times
story title: Start-up spins whole wafer for AI
deck: Cerebras taps wafer-scale integration for training.
read | story
5. |
publication: Wired
story title: To power AI, this start-up built a really, really big chip
deck: Many computer chips are smaller than your fingernail. Cerebras’ new chip for AI systems is bigger than a standard iPad.
read | story
on the web | pages
Cerebras | home
Cerebras | technology
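For perspective on the headline figure from the stories above — a record 1.2 trillion transistors — a back-of-envelope comparison. Note the GPU figure here is an assumption for illustration (roughly 21.1 billion transistors, commonly cited for the largest contemporary GPU die), not a number from the stories themselves.

```python
# Put the Cerebras headline number in perspective.
cerebras_transistors = 1.2e12   # from the story titles above
typical_big_gpu = 21.1e9        # assumed figure for the largest contemporary GPU die
ratio = cerebras_transistors / typical_big_gpu
print(f"~{ratio:.0f}x the transistor count of the biggest GPU")  # ~57x
```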
IMAGE
— notes —
AI = artificial intelligence
GPU = graphics processing unit
SRAM = static random access memory
EE = electrical engineering
BBC = British Broadcasting Corporation
]]>— story —
Here is a recent table showing the tremendous increase world-wide in fin-tech investment.
Experts explain that this level of accelerated investment represents a major change in the way banking + financial services are: created, managed, and supported.
Traditional, incumbent companies feel the pressing urgency to keep-up with the fin-tech trends. Or they’ll be left behind as both business + retail customers move away from old-fashioned financial products — and into contemporary fin-tech services that are more affordable, inclusive, and portable.
on the web | reading
publication: the Street
story title: What is fin-tech?
— summary —
Fin-tech uses + examples in year 2020. Today: the phrase “fin-tech” refers to innovations in the financial + tech cross-over space — and typically refers to companies or services that use tech to provide financial services to businesses or consumers.
— table —
table topic: global fin-tech investments
from: Accenture
from: CB Insights
$ • global investments — in fin-tech products + services
year | 2010 | $ 1.9 billion
year | 2011 | $ 2.5 billion
year | 2012 | $ 3.2 billion
year | 2013 | $ 4.8 billion
year | 2014 | $ 13.3 billion
year | 2015 | $ 21.2 billion
year | 2016 | $ 23.3 billion
year | 2017 | $ 27.4 billion
* CB Insights = historical name ChubbyBrain
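The growth rate implied by the table above can be checked with a quick sketch. The dollar figures are the table’s own; the compound annual growth rate (CAGR) calculation is our own arithmetic.

```python
# Global fin-tech investment by year, USD billions (from the table above).
investments = {
    2010: 1.9, 2011: 2.5, 2012: 3.2, 2013: 4.8,
    2014: 13.3, 2015: 21.2, 2016: 23.3, 2017: 27.4,
}
years = max(investments) - min(investments)   # 7 elapsed years
cagr = (investments[2017] / investments[2010]) ** (1 / years) - 1
print(f"CAGR 2010 to 2017: {cagr:.1%}")       # roughly 46% per year
```

An annual growth rate of roughly 46%, compounded over 7 years, is what turns $1.9 billion into $27.4 billion.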
]]>image | above
Model of an infant skull showing its separated bone plates.
— contents —
~ story
~ gallery
~ featurette
~ sketches
~ reading
— story —
An introduction.
When babies pass through the mother’s birth canal during childbirth, the tight fit temporarily squashes their tiny heads — elongating their flexible skulls. The pressure dramatically changes the shape of their brain. For a new study, French data scientists + medical researchers have created 3D images that show the extensive cone-shaped distortion. The study was conducted in association with the university hospital center of the Univ. Clermont Auvergne in France.
A baby’s head can naturally change shape under pressure, because the bones in infant skulls haven’t fused together yet. There are soft areas at the top of the baby’s head to cope with being squeezed through the birth canal — so the newborn can be pushed-out with less head trauma.
Then the soft regions begin to harden and fuse over time. This makes room for the brain to grow during infancy, but protects it from impact as the child starts moving around. It can take 9 to 18 months before a baby’s skull is fully formed.
But the precise mechanics of how a baby’s skull + brain change shape during labor are not well understood. To learn more about that process — and how it can lead to injury — the French scientists conducted magnetic resonance imaging (MRI) scans of 7 pregnant women.
The problem of newborn head molding.
Their images revealed significant skull squeezing — known as newborn (or fetal) head molding — in all the infants, and showed that the mechanical pressures on infant heads + brains during childbirth are stronger than we once thought.
In all 7 fetuses, skull bones that did not overlap prior to labor were visibly overlapped once labor began, deforming the newborn’s head and brain. In 5 babies, the skulls returned to their pre-labor shape soon after childbirth — and the deformity was not noticeable when the newborns were examined.
Some valuable insights.
The MRI scans successfully captured views of soft tissues that were not visible with ultra-sound imaging. These visuals gave the physicians important clues for understanding the process.
The researchers said these results could be used someday to help create a ‘virtual labor’ — like a walk-through model showing the probable risks of your baby’s birth, before it happens. That could give mothers and their obstetric doctors early warning, so they can plan how to move forward with the childbirth.
image gallery | below
The 3D digital reconstruction — from MRI scans — of the infants’ brain + skull bones:
HALFLINE
presented by
PLoS One | home
the Public Library of Science | home
BABY no. 1 |
The 3D reconstruction — from MRI scans — of the infant’s skull bones.
BABY no. 1 |
The 3D reconstruction — from MRI scans — of the infant’s brain.
BABY no. 2 |
The 3D reconstruction — from MRI scans — of the infant’s brain.
BABY no. 3 |
The 3D reconstruction — from MRI scans — of the infant’s brain.
Better understanding another head deformity illness.
The 3D models created by this study produced a deeper knowledge of newborn head molding than physicians had before. These results will help doctors better understand a similar childhood condition called deformational plagiocephaly. The name plagiocephaly means “sloping head” — from the Greek words plagio for sloping + cephale for head.
Children with deformational plagiocephaly have a dangerous injury. Their skull + brain become deformed from too much compression against a crib mattress — or another flat surface. This illness is accidentally caused by parents who don’t rotate their infant’s body when the baby is lying down for a long time, without moving.
If a baby doesn’t switch positions enough — the skull gets continual pressure at the same spot on the head. Eventually the baby’s fluids and soft tissues move away from the pressure — flattening-out in one part of the head, and blobbing-out into another. That’s why the illness is nick-named “flat head” syndrome. It’s been linked to developmental disability.
To prevent this illness, doctors teach parents to rotate the baby’s body on a routine schedule, especially while sleeping. In severe cases, physicians customize a padded helmet for the child — to protect its head from pressure, and fix the deformation over time.
— featurette —
An anatomical tour of the infant skull.
IMAGE
images | below
The pressures from the mother’s birth canal during childbirth cause a range of deformed head shapes called newborn (or fetal) head molding. These sketches from a medical book show a variety of common positions the baby can take while it’s exiting the womb — showing the way the head and brain deform in those postures.
book: Williams Obstetrics
year: 2018
publisher: McGraw Hill Education
The future of infant health.
It’s not well understood what the long-term prognosis is for a baby born with extreme newborn head molding. Even if the soft tissues and malleable bone eventually return to their normal shape — are there unseen + lasting consequences for that child’s health? Is there a risk of physical or cognitive disability? The startling images from the 3D model leave parents with a lot of unanswered questions.
Does severe newborn head molding change the baby’s bone + tissues in a way that causes future risks: like micro-fractures, bone brittleness, vascular disorders, bruising problems? Or a diminished ability to heal from accidents or illness that affect the head?
Could babies born with extreme newborn head molding experience a higher rate of traumatic brain injury, later on? For example, from typical head impacts that happen to all kids — from falling down or playing sports? Will that child grow-up to have a lower endurance for brain concussion?
Can the common medical procedure called a Caesarean section (or C section) — where doctors surgically remove the infant through an abdominal incision — reduce fetal head molding that’s caused by squeezing through the birth canal?
A picture worth a thousand words.
In this study, physicians can clearly see the intense pressure on the baby’s head during childbirth. The good news is that researchers are finally getting the graphical evidence they need — so they can sort-out precisely what happens with newborn head molding.
The 3D reconstruction paints a close-up picture of the baby’s experience. Data from the study — showing the shape, size, and timing of newborn head molding — helps the medical team plan how to move forward with each childbirth.
Technology leads the way forward for medicine. Our health-care system is based entirely on what we can observe + understand. Modern tools — such as MRI and computer graphics — give us the ability to witness biology in motion.
image | below
A sketch from a medical book showing a baby with the condition called newborn — or fetal — head molding.
HALFLINE
presented by
book: Williams Obstetrics
date: 2018
publisher: McGraw Hill Education | visit
IMAGE
reading
1. |
publication: Live Science
story title: Head deformity ignites debate among baby experts
deck: Has increased exponentially over 20 years.
read | story
HALFLINE
2. |
publication: Live Science
story title: How much do babies’ skulls get squished during birth?
deck: A whole lot, 3D images reveal
read | story
HALFLINE
3. |
publication: Newsweek
story title: 3D MRI images show how a baby’s head shape changes during labor in incredible detail
read | story
HALFLINE
4. |
publication: PLoS One
story title: 3D MRI of fetal head molding + brain shape changes during 2nd stage of labor
read | story
HALFLINE
presented by
PLoS One | home
— about —
PLoS One is a journal of the Public Library of Science.
PLoS | home
BOTTOM
by definition | fetal ultra-sound
— notes —
MRI = magnetic resonance imaging
3D = 3-dimensional
PLoS = the Public Library of Science
CHU = university hospital center
]]>event title: Viva Technology
group: Publicis grp. + Les Echos
theme: The world’s rendezvous for start-ups + leaders.
season: summer
when: June 11 — 13
year: 2020
where: Paris, France
visit | event website
— summary —
Viva Technology is the world’s rendezvous to celebrate innovation. It’s a gathering of the brightest minds, talents, and products.
The world’s rendezvous for start-ups + leaders.
From top speakers and exhibitions to open innovation and live experiences, Viva Technology is a celebration of today’s innovations — and tomorrow’s possibilities — for everyone who believes in the power of tech to transform society.
Viva Technology is a global tech event. We’re serving-up immersive experiences for visitors, start-ups, and business leaders from around the world.
watch | videos
Tours of the Viva Technology event.
watch: promo
watch: stage design
watch: performance • by Les Vandales
watch: event lounge • by Accor Hotels
watch: mini-tour • A
watch: mini-tour • B
watch: exhibit • by La Poste
watch: exhibit • by Google
watch: exhibit • by Eiver
watch: exhibit • by Engie
watch: demo • by TurnCircles
watch: demo • by Nurun
on the web | pages
Viva Technology | home
Viva Technology | YouTube channel
on the web | pages
Publicis | home
— notes —
AR + VR = augmented reality + virtual reality
AI = artificial intelligence
LVMH = Moet Hennessy • Louis Vuitton
* Les Echos group is under the umbrella of Moet Hennessy • Louis Vuitton
]]>