
One of the most exciting, and indeed somewhat unsettling, new forms of technology is what is called nanotechnology. At its core, it is technology that involves the manipulation of matter on an atomic, molecular, and supramolecular scale, with at least one dimension sized from 1 to 100 nanometers. The potential applications are vast, spanning industry, medicine, aeronautics, engineering, architecture, electronics, biomaterials, and ecological uses such as devouring oil spills or plastic garbage, and it has long been seen as a very promising wave of the future. Indeed, even as you read this there are already countless products on the market that make use of the technology to one extent or another, including such disparate items as fabrics, materials, antibacterials, agricultural chemicals, and even baby food. However, as with any new form of technology there are bound to be concerns about its effects on the environment and society, and in the case of nanotechnology there has been much discussion and debate over just how safe it is, including a variety of somber doomsday scenarios straight out of a science fiction or horror movie. Surely one of the most spectacular and terrifying of these nightmare prospects is what has come to be called simply the “Grey Goo.”
The Grey Goo is a nanotechnology nightmare scenario first conjured up by the engineer, futurist, and molecular nanotechnology pioneer Eric Drexler, who coined the term in his 1986 book Engines of Creation. The basic idea rests on the presumption that, at our rate of advancement, we will at some point reach a level at which we are able to create nanotechnological self-replicating machines, whether intentionally or by accident. These nano-machines would essentially be extremely tiny versions of the self-replicating robots first speculated on by the mathematician John von Neumann, only in this case far too small for the eye to see. They would theoretically be able to build copies of themselves, and would be free of the need for external power sources, as they could break down the materials around them to use for power generation and further replication.

Eric Drexler
At first it sounds like amazing technology, and with such molecule-sized devices we could basically program them to build or do anything we want. You could take them and shape them into any material, like tiny little Lego blocks or termites building a mound, and they would be invaluable. But Drexler’s hypothetical Grey Goo scenario predicts that these machines would have the potential to begin replicating out of control on an epic scale, exponentially and without practical limit, consuming everything around them to fuel even more growth and, in the worst case, overtaking the planet and consuming all life on Earth. Drexler wrote of this:
Imagine such a replicator floating in a bottle of chemicals, making copies of itself…the first replicator assembles a copy in one thousand seconds, the two replicators then build two more in the next thousand seconds, the four build another four, and the eight build another eight. At the end of ten hours, there are not thirty-six new replicators, but over 68 billion. In less than a day, they would weigh a ton; in less than two days, they would outweigh the Earth; in another four hours, they would exceed the mass of the Sun and all the planets combined — if the bottle of chemicals hadn’t run dry long before.
Early assembler-based replicators could beat the most advanced modern organisms. ‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous ‘bacteria’ could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.
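For those curious about the arithmetic in the passage above, the figure is simple exponential doubling: the population doubles once every 1,000-second generation, so ten hours (36,000 seconds) means 36 doublings, or 2^36 ≈ 68.7 billion replicators. Below is a minimal Python sketch of that calculation (purely illustrative, not anything from Drexler’s book); multiplying the count by an assumed mass per replicator is how the later, planet-dwarfing figures follow.

```python
# Rough sketch of the doubling arithmetic in Drexler's thought experiment,
# assuming each replicator assembles one copy of itself every 1000 seconds
# and nothing ever runs out.

GENERATION_SECONDS = 1000  # Drexler's figure: one copy assembled per 1000 s

def replicators_after(seconds: int) -> int:
    """Population after `seconds` of unchecked doubling, starting from one replicator."""
    generations = seconds // GENERATION_SECONDS
    return 2 ** generations  # each generation doubles the population

ten_hours = 10 * 60 * 60  # 36,000 seconds -> 36 doublings
print(replicators_after(ten_hours))  # 68,719,476,736, i.e. "over 68 billion"
```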
In this scenario, what would keep nanobots programmed to, say, devour plastic bottles in landfills or oil from an oil spill, from malfunctioning and beginning to devour everything else? Without any practical way to stop them, the robots would overwhelm the planet and possibly beyond with incredible speed, and adios muchachos. Earth, and perhaps even the other planets in the solar system, would be reduced to teeming masses of hungry nanomachines that would just continue to replicate until every last bit of resources was completely used up. If there is even the slightest possibility that this might happen, warned Drexler, then we should abandon all pursuit of self-replicating nanobot technology altogether, concluding, “We cannot afford certain kinds of accidents with replicating assemblers.” It is a sobering and pretty spooky thought, to say the least, to the point that it immediately freaked governments out and fueled much debate on regulations and the need for oversight of the development of such technology. But how likely is it that this would ever happen outside the realm of hypothetical speculation?

On the more pessimistic end of the scale are those who believe that this is all only a matter of time, an issue of “when,” not “if,” and that the only way to stop it is to put immediate protocols and limitations on such research. After all, self-replicating robots are already being pursued, so the sooner we install some sort of oversight or outright prohibition the better. Of the dour prospect of this technology inevitably going bad or falling into the wrong hands, the computer scientist Bill Joy has warned, “it is far easier to create destructive uses for nanotechnology than constructive ones.” Then again, it has been argued on the other side that any such robots would by necessity have elaborate built-in limitations preventing them from replicating out of control. It has also been pointed out that we may never achieve the ability to create such robots in the first place, since they would require advanced artificial intelligence that may be impossible to implement on such a small scale, and such robots have increasingly been seen as impractical and not really worth pursuing anyway. Their programming would also be limited, and it is thought that even if they did begin to replicate on their own, we would likely recognize the threat and be able to neutralize it before it reached the Grey Goo scenario. The idea from this camp is that there are more pressing concerns to worry about than the science fiction horror show of the Grey Goo. One report from the Center for Responsible Nanotechnology has said of all of this:
Grey goo eventually may become a concern requiring special policy. However, goo would be extremely difficult to design and build, and its replication would be inefficient. Worse and more imminent dangers may come from non-replicating nano-weaponry. Since there are numerous greater risks from molecular manufacturing that may happen almost immediately after the technology is developed, grey goo should not be a primary concern. Focusing on grey goo allows more urgent technology and security issues to remain unexplored.

Even Drexler has distanced himself from the gravity and likelihood of such a scenario, stating in more recent years that the Grey Goo is less and less plausible, especially given the manufacturing methods being pursued now. He has said that the type of robot so feared in the scenario is needlessly complex, and not even practical or necessary for enjoying the benefits of nanotechnology, and that our attention would be better focused on other potential issues inherent to nanotechnology. In an article he co-authored with Chris Phoenix in the August 2004 issue of the Institute of Physics journal Nanotechnology, he stated:
Nanotechnology-based fabrication can be thoroughly non-biological and inherently safe: such systems need have no ability to move about, use natural resources, or undergo incremental mutation. Moreover, self-replication is unnecessary: the development and use of highly productive systems of nanomachinery (nanofactories) need not involve the construction of autonomous self-replicating nanomachines. Accordingly, the construction of anything resembling a dangerous self-replicating nanomachine can and should be prohibited. Although advanced nanotechnologies could (with great difficulty and little incentive) be used to build such devices, other concerns present greater problems. Since weapon systems will be both easier to build and more likely to draw investment, the potential for dangerous systems is best considered in the context of military competition and arms control.
So is nanotechnology safe? Is it the answer to our prayers, a groundbreaking technology that will save the world and make our lives easier? Or will it prove to be our undoing? Since it is all more or less in its infancy, there is no way to really know at this point, and we are in uncharted waters. Will we make it all work, or will the Grey Goo destroy us all? Perhaps only time will tell.