All that’s needed to make a case for radical reform of the idea of self is a thought-experiment with a clear and compelling outcome. But the world is full of people who pay no attention to thought-experiments, however revealing they may be in exposing inconsistencies in everyday ideas, because thought-experiments aren’t ‘real’. These are people from Missouri, as the saying goes, who demand to be shown. And I have nothing to show, yet.
But even people from Missouri can be convinced to take a possibility seriously if there’s enough evidence that, although not here yet, it’s coming fast, probably not very far away, not in front of the house yet but closer than the next county. Like the second Al-Qaeda attack on American soil. Something worth thinking about.
This post examines whether we will one day have the capability of replicating a living human being, and if so, when that might be. It is thus highly speculative, maybe the most speculative topic I deal with in this project. Predictions of future technological developments are notoriously bad. Why do I think I can do better than others? Honestly, I don’t think I can.
My credentials on this score consist in a career working in technology companies. I know how it goes. Inventions run foul of the unforeseen – that’s what makes them inventions. Schedules are usually optimistic (certainly mine were). You start with a clear idea, but the devil is in the details, and details can take unbelievable amounts of time to sort out. And that’s just the technical side, which every tech entrepreneur knows is the easy part. There is rarely enough money, because the money requirement is usually underestimated, and because investors keep tech visionaries on a short leash, and because there has to be a business plan – the brilliant innovation has to be mapped, somehow, onto the sordid reality of commercial demand in order to produce a return on investment. A quick return, too; most investors won’t wait five years for a start-up to climb out of the sandbox. Added to that is a lot of human inertia and bone-head ignorance. So things take time, and often fall flat because they take too much time.
But when a breakthrough comes, it changes the world rapidly. Thomas Watson Sr., chairman of IBM in 1943, is famously (if perhaps apocryphally) said to have predicted that the total world market for computers would be five, or at most six, machines. I remember a science-fiction story from the ’60s about the future of computing: it portrayed a world in which a family would own its own compact computer, no bigger than a hall closet, by 2025. (And in some inconceivably far-off time, after the stars have flamed out, the ‘spirit of man’ would ‘flow into the computer’. Welcome to the internet!)
So what can I say about the future of replication technology that has any credibility? There’s a technical problem and a business problem.
The Business Problem
The business problem is always the same: where’s the payback? Business doesn’t care how mindblowing, beautiful, breakthrough, or disruptive the technology you’ve come up with is; in fact, it prefers that none of those adjectives apply, because they add to the uncertainty.
The first big business driver for replication technology is likely to be automated manufacturing. Here we are talking mainly about replicating inanimate objects. The vision is this: (1) build a perfect prototype widget (2) press a button to scan it, capturing all the important information about its construction (3) press another button to use the information to build a copy from its constituent atoms and molecules by an automated process. Or a million copies. Presto – manufacturing returns to North America!
I have no doubt that automated molecular manufacturing will make venture capitalists drool. But not yet, because the technical capability is not visible over the VCs’ horizon, which is at most five years in the future. Molecular nanotechnology in 2009 is restricted to highly specialized applications. An early breakthrough application may be in manufacturing smaller, faster integrated circuits. Photolithography, the mainstream technology for producing ICs, is rapidly approaching a limitation imposed by the laws of physics. Although it has been surprisingly successful in reproducing features smaller than the wavelength of light, the current wisdom is that photolithography will likely hit a hard limit at a feature size of around 50 nm. Nanolithography, using molecular self-assembly, has been demonstrated to build features as small as 20 nm. This process may soon become cost-competitive and serve as a beachhead for molecular manufacturing, a disruptive technology that could change the world as much as the steam engine or the internet.
Once we can replicate things, it seems only a matter of time before we will be able to replicate living organisms, including the higher forms of life such as human beings. This will open up enough new killer apps to keep investors pumping money into the R&D lifeline. When human beings are the subject, the focus will be different – not on building multiple copies of people (a bad idea all round), but on using replication technology as the most cost-effective way (in some cases, the only way) to provide services. I can think of killer apps in three areas: transportation, health, and life insurance.
I described the transportation application in the Introduction to this project, and I intend to explore it more fully in subsequent posts. The global airlines industry, with somewhere around $500 B in annual sales, could be entirely replaced by replication technology.
Health and cosmetic applications have huge potential. If we can digitize a human being and make a copy indistinguishable from the original, we can also make improvements. Harmful viruses and bacteria could be identified by their genomes, and simply filtered out of the data stream. Cancerous cells could also be marked and omitted from the copy. No doubt the software could be made smart enough to leave out arterial plaque. And to repair cavities in teeth. And take off a few pounds of adipose tissue.
It’s easy to imagine that we’ll be offered weight loss as an extra (at an added cost!) when we book our vacation getaways! In fact, it’s hard to imagine that we will not be. The weight loss market in the US alone was estimated at $55B for 2007. World-wide health care spending is in the low trillions, and climbing rapidly.
Although insurance is not considered sexy, the life insurance application of replication technology would offer a major improvement over any products available today, because it would actually restore lives! When you take out a policy, you will be scanned, and your information stored on a secure server. In the event of death, you will be physically rebuilt. No doubt the invoice for renewed coverage will come with a reminder to make regular back-ups.
Life insurance is another big market in 2009 (over $2 trillion in premiums worldwide); but life insurance sales based on replication technology are potentially much higher, because the product offers improved benefits to a broader market. Income-earners would insure not only their own lives but their children’s; and retirees might find life insurance an antidote to some of the anxieties of advancing age.
In the big picture, the business problem is not a problem at all. The carrots that replication technology will dangle in front of the investing public are as big and juicy as they come, and will provide sufficient motivation, as long as intermediate applications can be rolled out along the way to grease the wheels of commerce.
The Technical Problem
On the technical side, the problem of replication breaks down into two main subproblems: (1) capturing all the information that matters about the object to be replicated – the measurement problem; and (2) using that information to manufacture a replica – the assembly problem. They are both big problems.
Actually there’s a third problem in between, which seems insignificant in comparison to the two big ones. That is the problem of managing the collected information: storing and transmitting the data. The data management problem is well in hand with today’s technologies, and is only worth mentioning because of the very high volumes of data involved. This is one area where we have some well-grounded numbers.
Everyone knows Moore’s Law: the number of transistors included in the most cost-effective design for commercial ICs doubles every two years. (Moore’s revised estimate, in 1975, was two years, not the 1.5 years that most commonly ricochets around the internet.) Moore was on the money, as can be seen in a Wikipedia chart covering 40 years.
In related trends, the cost of hard disk space halves every sixteen months, and according to Butters’ Law of Photonics, which parallels Moore’s Law, the amount of data coming out of an optical fiber doubles every nine months. Thus, the cost of transmitting a bit over an optical network decreases by half every nine months.
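These doubling laws do all the work in the projections that follow, so it is worth making the arithmetic explicit. Here is a minimal Python sketch; the function name is my own, and the baselines are the doubling times quoted above, taken as assumptions:

```python
# Improvement factor delivered by an exponential trend with a fixed doubling time.
def doubling_factor(years, doubling_time_years):
    return 2 ** (years / doubling_time_years)

# Ten years of each trend, using the doubling times quoted above:
print(doubling_factor(10, 2))        # Moore's Law (2 yr):  ~32x more transistors
print(doubling_factor(10, 16 / 12))  # disk cost (16 mo):   ~181x cheaper per TB
print(doubling_factor(10, 9 / 12))   # Butters' Law (9 mo): ~10,000x more fiber throughput
```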
One handle we can get on the problem of replicating a human being is to estimate the size of the data.
There are about 7 x 10^27 atoms in the adult human body. Taking a brute-force approach, we could attempt to record the element and position of every one. Seven bits is enough to identify the element (probably more than enough; only about 40 elements are found in the human body in measurable amounts). Recording spatial position to sufficient precision is more expensive. If we record only atoms, chemical bonds must be reconstructed from proximity, so each atom’s location has to be known well enough to make that reconstruction unambiguous. An atom’s position in each of the three spatial dimensions could be given to this accuracy by a number of approximately 10^9 to 10^10. Thirty-two bits, which can represent over 4 x 10^9 distinct values, might be enough. So, for each atom, we would need at least (3 x 32) + 7 = 103 bits, or 13 bytes. Multiplying by the number of atoms gives us 13 x (7 x 10^27) bytes = roughly 10^29 bytes, or 10^17 terabytes.
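For readers who want to check the arithmetic, here is the same estimate as a few lines of Python; the constants are simply the assumptions stated above:

```python
# Brute-force estimate: bytes needed to record every atom in an adult human body.
ATOMS = 7e27            # atoms in the adult human body
ELEMENT_BITS = 7        # enough to label the ~40 elements present in measurable amounts
POSITION_BITS = 3 * 32  # 32 bits per spatial axis, ~4e9 distinct values each

bits_per_atom = POSITION_BITS + ELEMENT_BITS   # 103 bits
bytes_per_atom = (bits_per_atom + 7) // 8      # round up: 13 bytes
total_bytes = bytes_per_atom * ATOMS           # ~9e28, call it 1e29 bytes
print(total_bytes / 1e12, "terabytes")         # ~1e17 TB
```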
Fifty years from now, will we be able to manage a file of 10^17 terabytes? In 2009, hard disk storage runs around $100 per terabyte. If the cost of storage continues to drop by half every 16 months, we’ll have a 10^11 improvement in the amount of data storage our $100 can buy. The cost of storing 10^17 terabytes in 2059 is therefore projected at $100 x 10^17 / 10^11 = $100,000,000. Too expensive (unless inflation runs rampant!)
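The projection, spelled out in Python under the same assumptions (the $100-per-terabyte 2009 baseline and the 16-month halving time):

```python
# Projected 2059 cost of storing 1e17 TB if disk cost keeps halving every 16 months.
COST_PER_TB_2009 = 100.0           # dollars per terabyte, 2009
improvement = 2 ** (50 * 12 / 16)  # ~1.9e11; the text rounds this to 1e11
cost_2059 = COST_PER_TB_2009 * 1e17 / improvement
print(f"${cost_2059:,.0f}")        # ~$52 million exactly; ~$100 million with the rounded factor
```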
If storage costs fall faster, or we wait longer than fifty years, of course it’s a different story.
Trends in data transmission rates tell a similar story. Current ‘internet backbone’ data transmission rates are represented by OC-192 (Optical Carrier Level 192), at 10 Gbps; 10 Gb = 10^10 bits. The time needed to transmit 10^30 bits at 10^10 bits per second is 10^20 seconds, i.e. about three trillion years.
That’s using today’s technology. If data transmission rates increase exponentially at the rate of Moore’s Law (doubling every two years), then 50 years of technological progress can be expected to reduce the transmission time by a factor of about 33.5 million, which is not even close. One hundred years at that rate yields an improvement factor of only 10^15. However, some sources have more optimistic numbers, projecting an annual doubling of broadband peak data rate. That would allow us to transmit 10^30 bits in just one second, a mere 67 years from now.
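A sketch of both calculations; the rates are the assumptions above, and the annual-doubling figure is the optimistic projection just mentioned, not my own:

```python
import math

# Time to move ~1e30 bits (the brute-force atomic scan) over a 10 Gbit/s backbone.
BITS = 1e30
BACKBONE_BPS = 1e10                              # OC-192

seconds = BITS / BACKBONE_BPS                    # 1e20 s
print(seconds / 3.15e7 / 1e12, "trillion years") # ~3 trillion years

print(2 ** (50 / 2))    # 50 years of 2-year doublings: ~3.4e7, nowhere near enough
print(math.log2(seconds))  # ~66.4 annual doublings -> ~67 years to one second
```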
Replication Need Not Be Exact
But actually, the brute force approach to replication is silly. No one would do it that way.
Replication technology will have to grapple with the question of what makes a copy “good enough” – sufficiently faithful to the original for practical purposes. The answer to that question depends on what is being replicated, and for what purposes.
If a manufactured object is being replicated, then ordinary manufacturing QA standards are appropriate. A piece of furniture must meet criteria of strength, comfort, and appearance. A replicated Stradivarius violin should have the tone and timbre of the original, so that an expert cannot tell the difference between their sounds. A replicated bottle of 1998 Château Pétrus must have the rich colour and extraordinary nose of the original, inspiring oenophiles, in blind tests, to similar descriptions of “green olives and blackberry jam, with hints of vanilla and Indian spices. Some dark chocolate too.” A satisfactory copy of a restaurant dish – Homard Bretonne, say, from the kitchen of Le Bristol in Paris – must have flavour, texture, and plating equal to the original, and of course must be served hot!
Standards will shift when it comes to replicating living creatures. If the creature is beef on the hoof, intended for slaughter, then the replica must be, at a minimum, alive and healthy. If, as seems more likely, it is breeding stock, then its fertility is obviously important, as is the integrity of its genetic material. The buyer of a replicated breeder should insist on verification that the animal’s genome is faithful to the original.
And when the animal is a saddle-horse, a trained guide-dog, or a pet, its behaviour will matter too. I don’t doubt that replication technology will be in demand by pet-owners. Cats and dogs have life-spans tragically out of sync with their human owners. If the capability exists of scanning Bootsie in her playful prime and restoring her years later, after arthritis and feline leukemia have taken their toll and she has been put down, many pet-lovers will jump at the chance, and some will be willing to pay big bucks. But not if the rebuilt Bootsie hisses at them as she would at a stranger. Bootsie must know her home and her people, and display all the remembered behavioural traits, endearing and otherwise. “She’s been on the table licking the butter again!” Such is the touchstone of an authentically recreated personality.
That goes in spades when the replicated subject is human. A paterfamilias must remember his children’s birthdays, his wife’s favourite flowers, and the details of his work. His golf handicap should not be affected. Preservation of memory and personality, of learned skills (both conscious and unconscious), of habits of speech and turns of mind, will be vital to the success of any commercial venture that uses replication technology with human subjects.
An ‘exact’ molecule-for-molecule reproduction is not necessary, or even desirable. Human beings strive for change and self-improvement. If the technology exists to make uncontroversial improvements to my body – to equalize my leg lengths, remove pimples, replenish my thinning hair, eliminate a cancer – why wouldn’t I use it?
If ‘exact’ replication is not required, there are tremendous opportunities to cut down the volumes of data required to reconstruct a human being. A great deal of information at the atomic level could be replaced by information at the cell level. A quick Wikipedia search tells us the number of cells in the human body is about ten trillion, or 10^13 – less than the square root of the number of atoms (7 x 10^27).
One erythrocyte (red blood cell) is much like another. An important difference is the degree to which the hemoglobin molecules they contain are oxygenated. Erythrocytes circulate in the bloodstream, and so the position of individual erythrocytes is unimportant. The distribution of erythrocytes in the circulatory system is important – more oxygenated in the arteries, less so in the veins – but the distribution is fairly standard for healthy human beings. We could almost get away with a detailed model of one erythrocyte, plus an estimate of the total count.
Similar savings in data volume apply to others of the approximately 210 cell types in the human body. (Perhaps this number is low, and more cell differentiation will be found.) If we allow two bytes (65,536 values) to identify the cell type and its important attributes, 12 bytes for its 3D position in space, and maybe 4 more bytes for its orientation, we get 18 bytes per cell; call it 20. That is still a lot of data, and it is still overkill. Except for neurons, the exact position and orientation of individual cells doesn’t matter. What matters is the tissues: the shape, size and condition of higher-level structures like muscles and bones, and their connections to other structures.
If this is approximately right, then the volume of data to be managed is more like 10^16 than 10^30 bits. Using today’s data transmission technology, it could be transferred in about 10^6 seconds – roughly twelve days. Assuming the conservative projection that transmission rates will double every two years, we will be able to transfer that amount of data in one second as early as 2049.
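The cell-level arithmetic, sketched under the same assumptions (20 bytes per cell, 10^13 cells, today’s 10^10 bit/s backbone):

```python
import math

CELLS = 1e13
BYTES_PER_CELL = 20   # 2 (type) + 12 (position) + 4 (orientation) + slack
BACKBONE_BPS = 1e10   # OC-192, as before

total_bits = CELLS * BYTES_PER_CELL * 8     # 1.6e15 bits; round up to ~1e16
print(1e16 / BACKBONE_BPS / 86400, "days")  # ~11.6 days at today's rates

# Year in which 1e16 bits move in one second, at one doubling per two years:
print(2009 + 2 * math.log2(1e16 / BACKBONE_BPS))  # ~2049
```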
Human Replication – Three Special Topics
The nervous system is a special case, because the details matter. Our personalities, memories, learned abilities, emotional sensitivities – most of what makes us ‘who we are’ – is stored in specific connections between neurons. I will come back to neurons, but before doing so I want to touch on two other special topics: DNA and the ‘human flora’.
In 2003, the Human Genome Project completed 13 years of work at a cost of a billion dollars. In 2009, a full human genome can be sequenced in a few weeks for less than $50,000. Recently, costs have dropped by a factor of ten each year. In October 2009, IBM joined the race to produce a personal genome for less than $1000. Any projections of progress in this field are almost certain to be out of date before they are published.
It seems safe to say that long before a living human being can be replicated, recording his or her genome will be a simple task with negligible cost. And the informational content of a genome is small: just 1.6 GB (roughly 6.4 x 10^9 base pairs in a diploid genome, at two bits per base).
The design of a human replicator would have to address the question whether a person’s genome could be adequately represented by the genome of a single (representative) cell, or whether the genome of every cell should be represented separately.
Genetic variations between somatic cells of a single individual are described as mutations or “copying errors”, and are usually considered undesirable. Most such errors have no appreciable effect on the host organism, but some give rise to cancers. The mutation rate in non-germ-line human cells has been measured as approximately one per million cell divisions. So, everyone’s body contains mutant cells. The question is whether the variations are worth preserving.
If science should decide that there is value in preserving all genetic variations within the individual, that will have a significant impact on the design of replication technology. First, and most seriously, it will be necessary to collect the information – to sequence all ten trillion cells. Secondly, we will have to store and transmit all this information. If we were to store each cell’s genome independently, we’d be looking at 1.6 GB x ten trillion = a whopping 16 billion terabytes, or 16 zettabytes (ZB). This is 100 times the number estimated by IDC as the total volume of digital data produced on earth in 2006. In other words, a scary number. (But perhaps not scary for long; the same 2007 IDC paper says that 0.9 ZB of digital data will be produced annually as early as 2010.)
However, the volume of information need not be nearly that large. All that’s needed to represent the genetic variation within a person is his or her ‘representative’ genome, plus a list of each cell’s differences from the representative genome. Given the variation rate of one in a million, data volumes would be reduced accordingly – i.e. to a number on the order of 16 million gigabytes (16 petabytes) instead of 16 trillion. That relatively modest data volume could be achieved without any loss of information.
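A sketch of the savings; the 1.6 GB genome and the one-in-a-million variation rate are the figures assumed above, and a real diff encoding would add some per-difference overhead to record positions, which does not change the order of magnitude:

```python
# Per-cell genomes stored as diffs against one representative genome.
GENOME_BYTES = 1.6e9    # ~1.6 GB per diploid genome
CELLS = 1e13
VARIATION_RATE = 1e-6   # assumed: ~one base in a million differs per cell

naive_bytes = GENOME_BYTES * CELLS                   # 1.6e22 bytes = 16 ZB
delta_bytes = GENOME_BYTES * VARIATION_RATE * CELLS  # 1.6e16 bytes = 16 PB
print(naive_bytes / 1e21, "ZB ->", delta_bytes / 1e15, "PB, losslessly")
```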
But my guess is that it will go the other way – the genome of a single representative cell will be considered enough. A ‘representative’ cell would be identified by sequencing the genetic material of a small sample – perhaps a few hundred – of the ten trillion cells in a person’s body, and discarding outliers that vary from the consensus. This will not be difficult within a very few years. Replication technology will likely be viewed as having the salutary side-effect of ‘cleaning up’ a person’s genome by weeding out copying errors, with a measurable and significant reduction in the subsequent incidence of cancer.
Mutations in germline cells – which are potentially passed on to offspring – are a slightly different concern. On the one hand, most germline mutations, like copying errors in somatic cells, are either neutral or harmful. Parents-to-be do not, as a rule, want mutation; that is why lead aprons are used to cover patients’ genitals when they are X-rayed. On the other hand, mutations are occasionally beneficial, and random mutations are the source of the genetic diversity which makes evolution possible. So an argument can be made for not eliminating mutations altogether. Replication technology could address this concern by preserving DNA sequences exactly in germline cells, if desired, while cleaning up the rest of a person’s DNA. But again, I suspect most people would opt to have the germline cells cleaned up too.
The human flora consists of all the microorganisms in the human body. Many of them reside in the digestive tract, where they play an important role in extracting nutrients from the food we eat. A remarkable fact about the flora is its sheer numbers: according to Wikipedia, there are at least ten times as many bacteria as human cells in the body! Capturing the flora’s information and rebuilding it would put a significant added burden on our replication technology (unless the technology worked at the atomic/molecular level, in which case it would make little difference).
But that would be another waste of resources. Granted, the flora is important (as anyone who has experienced digestive upsets from taking antibiotics can attest), but the specifics of the flora are not important. I am not sentimentally attached to individuals among the 10^14 bacteria in my gut, any more than I am attached to individual erythrocytes. I’d happily accept a ‘standard human flora’ in my reconstituted gut; and if I were so unfortunate as to have a tapeworm before replication, I’d consider my situation much improved.
Neurons are a special case, because the detailed organization of the nervous system matters a great deal. Our memories, intentions, beliefs, likes and dislikes, knowledge, learned abilities, and thousands of habits are physically embodied in brain-states. I certainly wouldn’t want to make personal use of replication technology if it were incapable of capturing this information and faithfully reproducing it.
The Human Connectome Project (HCP) is the first attempt to take a detailed, comprehensive look at the organization of human nervous systems. Launched in July 2009 by the National Institutes of Health Blueprint for Neuroscience Research, the mission of the five-year, $30M project is “to map the wiring diagram of the entire, living human brain.”
State-of-the-art imaging technologies will be used in combination to map axon pathways and other brain connections: high-angular resolution diffusion imaging (HARDI) to locate axon bundles, resting state fMRI (R-fMRI) to find coordinated networks in brains at rest, and electrophysiology and magnetoencephalography (MEG) combined with fMRI (E/M fMRI) to show which pathways are activated when people engage in specific tasks.
All these technologies will be used to test hundreds of healthy adults of both sexes. The brain imaging data will be correlated with demographic data, blood samples, DNA, and clinical assessments of sensory, motor, cognitive, emotional, and social function.
The deliverables specified for the project include (1) improved non-invasive imaging tools to obtain connectivity data from humans in vivo, (2) a “high quality and well characterized, quantitative” set of connectivity data derived from the hundreds of subjects, and (3) rapid and effective dissemination of project results to the wider research community.
If successful, the HCP is expected to produce a discontinuous improvement in our understanding of how the brain works, and in our ability to capture the physical correlates of our mental states. The five-year project will not, I am sure, deliver the detail of human brain connectivity at the level required for a human replicator, but it is an important start down that road. How long will it take to get there? There is no good answer today, but the HCP holds promise that in five years, we will have a much clearer picture. If the HCP’s progress follows a similar exponential curve to that of the Human Genome Project, it will be impressive indeed.
Conclusions
The more I dig into this topic, the more I am aware of its vastness. I have hardly touched on the second big technical problem – the assembly problem, how to manufacture replicas from the recorded information. Molecular manufacturing is in its infancy, only slightly beyond science fiction. But its commercial potential is so attractive that there is already a Texas-based nanotechnology company, Zyvex Corp., which aims to develop a Molecular Assembler that can “build with individual atoms and lead to self replicating machinery and consumer goods”. Not surprisingly, their current customer base is the semiconductor industry.
The field of synthetic biology, which creates special-purpose DNA from scratch, is another interesting approach to molecular manufacturing. DNA can be viewed as nature’s chemical factory. The New Yorker recently reported that a company called Amyris Biotechnologies has developed microbes with synthetic DNA to produce scarce anti-malarial drugs and biofuels. Their methodology is characterized by ‘plug-and-play’ standard parts which can be combined programmatically into larger, more complex units. Synthetic biology is another field in which progress is exponential. The ‘Carlson Curve’ shows the capacity to produce synthetic DNA improving at a rate exceeding Moore’s Law.
Work is also being done on technologies quite unlike the ones I’ve outlined, that would truly work on the atomic and molecular level. In a chapter of his Physics of the Impossible, physicist Michio Kaku describes two technologies which have been implemented in laboratories, either of which could lead to the capability of teleportation. One involves ‘quantum entanglement’ and the other does not. At this point I am going to duck, and leave the job of assessing the potential of these technologies as an exercise for the reader.
Ultimately, the ‘fog of technology’ makes it hard to see the future clearly. The multiplicity of technological avenues, and the need for new inventions, means I cannot deliver a credible timeline for human replication. There is reason to believe that it is at least fifty years out; and my gut feel is more like a hundred. But there is little doubt about the direction we, as a technophilic society, have taken, or that we are making remarkable progress. We have the will to create replication technology, and we are well on our way.