Except for legends and claims of miracles, most histories of transplantation cover only the last 60 years because there were no earlier successes. However, the story of even this era has been documented in such rich detail that a full account would fill several volumes. Thus, this brief summary must be limited to highly selected “landmarks.” Some landmarks had an immediate impact, but the importance of others went unrecognized for decades. Some findings that deserved landmark status were overlooked or forgotten, whereas others of no biological significance had major impact. Placing these events in perspective is challenging. Several of transplantation’s pioneers are still alive, and most of the others are within living memory. Virtually all of them have produced their own accounts. For the most part, they agree on what the “landmarks” are, but their differences in emphasis and perspective make an interesting story.
PRE-HISTORY: THE ERA OF MYTHS AND MIRACLES
The idea of replacing diseased or damaged body parts has been around for millennia. Envisioned were complex transplants such as the “successful” transplantation of an entire leg by the third-century sainted physicians Cosmas and Damian, which is depicted in several famous paintings (Zimmerman 1998). As early as 600 BC, the use of autogenous skin flaps to replace missing noses was conceived, and by the sixteenth century, Gaspare Tagliacozzi (Tagliacozzi 1597) and other pioneering plastic surgeons were successful with such procedures. The obvious extension of these methods was to use detached or “free” grafts of the patient’s own tissue or that of other donors. Yet not until the twentieth century was it even mentioned that such grafts might fail. Even the great eighteenth-century experimentalist John Hunter, who transplanted human teeth and autotransplanted cocks’ spurs into their combs, seemed unaware that homografts would fail (Martin 1970). Only in the last half of the twentieth century has there been a consensus that the outcome of homografts differs from that of autografts.
For a long time, proponents of skin homografts refused to admit that they would not work. Success was even claimed for grafts of whole ears and noses. After centuries of sloppy observation and self-deception, the realization crept in that detached (free) skin grafts were useless. In retrospect, the technical failure of even the early autografts was not surprising because at first full-thickness skin grafts were used. These thick grafts never became established because their underlying layer of fat and other tissue prevented revascularization. The first major technical advance (almost a “landmark”) came only in 1869, when Jacques-Louis Reverdin discovered that small, thin (split-thickness) grafts would heal (Reverdin 1869). His autogenous “pinch grafts” successfully covered burns, ulcers, or open wounds. Others caught on, and soon an extensive experience with both autografts and homografts was accumulated. Surprisingly, neither Reverdin nor other enthusiasts (including well-respected surgeons such as Joseph Lister) noticed that homografts were inferior to autografts (Goldman 1987). Only one report of the time suggested otherwise. In 1871, the British surgeon George Pollock described a set of successful autogenous grafts on a patient’s wound, whereas homografts from both himself and another donor soon “disappeared” (Pollock 1871). This report was ignored, and claims of successful homografts continued for additional decades. In one such case during the Boer War, Winston Churchill donated skin to help heal the open wound of a fellow officer. Years later, Churchill (1944) reported that this graft was still successful.
TRANSPLANTATION RESEARCH OF A LOST ERA
Although consensus on the fate of homografts would not be reached for another 50 years, during the first decades of the twentieth century several well-known investigators established not only the inevitability of homograft failure but most of the other basic principles of transplantation immunology. In 1903, Paul Ehrlich studied transplantation of tumors in mice without bothering to determine the response to transplants of normal tissue (Medawar 1958). The use of tumor homografts confused the issue because they would sometimes overwhelm their recipients before being rejected. In the same year, the Danish biologist Carl Jensen (1903) perceived that the failure of tumor homografts was an immune reaction, but this explanation was discounted by Ehrlich because no antibody (the accepted hallmark of immunity) could be detected.
Georg Schöne may deserve recognition as the first transplantation immunologist. Working in Ehrlich’s laboratory in 1912, he studied grafts of skin rather than tumors. He determined that homografts always failed and that subsequent grafts from the same donor failed more rapidly than the first (Schöne 1912). Three decades later, several surgeons would squabble over the credit for discovering this “second set” response, suggesting that they deserved a share of Medawar’s Nobel Prize, which was based in part on this landmark observation.
By the end of the 1920s, scientists at the Rockefeller Institute firmly established other tenets of transplantation immunology including the central role of the lymphocyte. James B. Murphy’s work was especially prescient (Murphy 1914a; Silverstein 2001). He showed that resistance to tumor homografts was dependent on the lymphoid system. He sought to extend graft survival by getting rid of lymphocytes with irradiation, splenectomy, or benzol, the first chemical immunosuppressive agent, noting that these methods decreased the lymphocytic infiltration he observed in failing homografts (Murphy 1914b). Murphy was convinced that these cells were responsible for homograft destruction. He could not explain how, because of the firmly prevailing notion of the time that lymphocytes were fixed cells that lacked mobility. All of these findings were published in widely read scientific journals, but they were ignored by most and eventually were largely forgotten. David Hamilton in his excellent recent history dubs this the “lost era” of transplantation (Hamilton 2012, pp. 105–125).
PIONEERS OF ORGAN GRAFTING
Half a century before proponents of skin homografts finally conceded their futility, surgeons using the more complex model of kidney transplantation recognized that homograft failure was inevitable. Alexis Carrel is commonly credited with originating both vascular suturing and its use in organ transplantation (Fig. 1). Although the award of the 1912 Nobel Prize for his development of these techniques was well deserved, he was actually not the first in either endeavor. Mathieu Jaboulay, the Chief of Surgery in Lyon, where Carrel trained, and the German surgeon Julius Dörfler introduced the full-thickness blood vessel suturing technique (Dörfler 1895; Jaboulay and Briau 1896). Carrel, who initially advocated partial-thickness suturing, adopted the full-thickness method only a decade later on the advice of Charles Guthrie.
Alexis Carrel, whose pioneering work on blood vessel suturing and organ transplantation was recognized by the 1912 Nobel Prize. (Photograph ca. 1907 from the collection of the American Surgical Association.)
Technically successful kidney transplants were accomplished first not by Carrel but by Emerich Ullmann, who in 1902 performed a dog autotransplant and a dog-to-goat xenograft (Ullmann 1914). In 1906, the first two renal transplants in humans were performed by Jaboulay using a pig donor for one and a goat donor for the other (Jaboulay 1906). Ernst Unger, after first performing more than 100 kidney transplants in animals, performed the third and fourth human transplants in 1909 using monkey donors (Unger 1910). None of these early human kidney xenografts functioned for more than a few days, and all of the patients soon died.
In 1904, Carrel left France after failing in several examinations to qualify for a faculty position there. After a brief sojourn in Montreal, he moved to Chicago, where he partnered with the physiologist Charles Guthrie. They collaborated for barely 12 months, but during this time, they successfully transplanted the kidney, thyroid, ovary, heart, lung, and small bowel, averaging a publication on this work every 14 days (Malinin 1979). Carrel’s success with organ grafts was not dependent on a new method of suturing but on his use of fine needles and suture material, his exceptional technical skill, and his obsession with strict asepsis.
Carrel’s relationship with Guthrie soon cooled because Guthrie objected to Carrel’s seven single-author papers about their joint work and to Carrel’s habit of advancing his fame by reports in the newspapers. After Carrel left Chicago in 1906 for the Rockefeller Institute in New York, Guthrie published in Science a criticism of Carrel and the contention that he, rather than Carrel, deserved most of the credit for their joint accomplishments. He wrote: “It is a singular fact that up until the time Carrel and I engaged in the work together, his experiments did not yield good results, and that our results almost from the beginning of our work together were excellent!” (Guthrie 1909). This was only one of several instances in which the Nobel Prize Committee saw it differently than some of the candidates, and Guthrie was not given a share of the Nobel Prize.
Carrel’s extensive experience with organ transplants in animals left no doubt that, although autografts could be consistently successful, homografts never were. In view of the stubborn ongoing claims of successful skin homografts, this was one of Carrel’s most important findings, in itself a landmark. Carrel did not know why homografts failed, but he began to explore methods to avoid this such as matching of donor and recipient. Under the influence of his colleague James B. Murphy, he irradiated recipients in unpublished and now forgotten experiments, finding that this improved results (Flexner 1914).
World War I interrupted the productive transplantation research at the Rockefeller Institute by Carrel, Murphy, and their colleagues. Carrel, after spending the war in France treating wounded soldiers, returned to the Rockefeller Institute but not to research in transplantation. Instead, he formed an unlikely partnership with the aviator Charles Lindbergh, who approached him to discuss the possibility of a heart operation on a relative. Carrel responded that open heart surgery would require a pump oxygenator. Lindbergh offered to build such a device, and Carrel provided laboratory space for the project (Lindbergh 1978). Lindbergh’s pump was not used for heart surgery but for perfusion of organs and tissues. It allowed preservation of organs for as long as 3 weeks (Anonymous 1931). The publicity-seeking Carrel made sure that the pump was prominently exhibited at the 1939 New York World’s Fair.
Carrel was the originator of tissue culture, another technique that subsequently played an important role in transplantation. He incubated small fragments of embryonic chicken heart in dilute plasma. Carrel claimed that by 1919, this “immortal” tissue had been cultured for 1,939 passages and was still normal and pulsating. But it was subsequently determined that embryonic cells with a normal diploid set of chromosomes cannot be maintained in culture for more than 50 doublings unless they undergo malignant transformation. Carrel’s laboratory technician eventually admitted the fraud, saying that because Dr. Carrel would be upset if the strain was lost, she added new embryo cells when they were needed (Witkowski 1980).
Despite the continued claims of success for skin homografts, the well-documented failure of experimental organ homografts by Carrel discouraged further research in this field, which during the 1920s and 1930s was continued by only a few. Frank Mann at the Mayo Clinic conducted extensive studies of canine renal and heart homografts but failed to extend Carrel’s earlier findings or explore Carrel’s suggestions for preventing rejection (Mann 1932).
LEO LOEB, A FORGOTTEN HERO
In the 1930s, Leo Loeb, a German émigré working at Washington University in St. Louis, was one of only a few transplantation researchers. He determined that the strength and timing of rejection of skin homografts in rats was governed by the extent of genetic disparity between donor and recipient. He also showed that the lymphocyte was involved (Loeb 1945). As Chief of Pathology, he influenced his plastic surgeon colleagues James B. Brown and Earl Padgett in studies determining that identical twins would accept exchanged skin grafts (Brown 1937). Loeb reported his finding that grafts exchanged between (inadequately) inbred mice of the same strain would fail, mistakenly attributing this to some mystical “finer mechanism.” For this error (later retracted after further inbreeding of his mice), he was viciously ridiculed by his famous faculty colleague, the geneticist C.C. Little, and by others including Peter Medawar, whose unfair antipathy for Loeb influenced him to dismiss the importance of the lymphocyte and for years espouse the humoral theory of rejection (Hamilton 2012, p. 160). This dispute forever tarnished the reputation of Leo Loeb, a forgotten hero of transplantation research.
UNMODIFIED ANIMAL AND HUMAN KIDNEY TRANSPLANTS
In 1933, the Soviet surgeon Yu Yu Voronoy performed the first human-to-human kidney transplant. That the kidney was not procured until 6 hours after the donor’s death and that it was transplanted across a major blood group mismatch probably accounted for its prompt failure (Voronoy 1937). Four other human homografts that Voronoy performed between 1933 and 1949 also failed rapidly. Published in Russian, this experience remained unknown in the West until the 1950s.
In the 1940s and early 1950s, experimental dog kidney transplantation was actively conducted by surgeons in Paris and Boston and also by Morton Simonsen in Denmark and William Dempster in London. The “second set” phenomenon was again observed, but no new insights emerged. Simonsen searched in vain for an antibody response, being unaware of Loeb’s work on the lymphocyte (Simonsen 1953). Dempster, who joined Medawar in denouncing Loeb and his cellular theory of immunity, may have been the first to use radiation in organ transplant recipients (Dempster 1953a). He also treated dog homograft recipients with cortisone (Dempster 1953b), which Rupert Billingham had found to prolong survival of skin homografts in rodents (Billingham and Krohn 1951). In dogs, neither treatment had much effect on rejection. Dempster, like others, despaired of further progress in organ transplants and advised against any attempt in humans.
The faint hope that homografts might fare better in humans than animals was spectacularly rekindled in 1950 when Chicago urologist Richard Lawler performed a human kidney transplant (Lawler et al. 1950). In this patient, whose renal function was impaired but not terminal, the homograft was claimed to be successful. It may have produced urine briefly, but when it was removed after several months, it was found to be shrunken. This procedure was of no scientific significance and of no benefit to the patient, who survived for several years with declining renal function of her own kidney. Within the medical profession, this “maverick attempt” was widely criticized, and Lawler’s urology colleagues treated him with disdain. However, because enthusiastic reports in the lay media generated public support, the impact of this transplant was substantially positive. The French surgeon René Küss said that it provided him the excuse to start a program in human kidney transplantation (Küss 1991).
Because no method was available to prevent rejection, it was recognized that the chances of success were remote, but the lack of dialysis or any other treatment for renal failure was used to justify trials of transplantation in otherwise doomed patients. In 1951, two teams working separately in Paris performed nine kidney transplants (Küss and Bourget 1992). Most of the donors were guillotined criminals. The kidneys were placed retroperitoneally in the pelvis, revascularized from the iliac vessels, with the ureter anastomosed to the bladder, a method devised by Küss that is still the standard operation. None of these transplants showed meaningful function, and all of the patients died within days or weeks (Küss et al. 1951). The ninth transplant in this series was the first to use a living relative as the donor, the patient’s mother. Unlike the others, this kidney functioned promptly, but it was rejected after 3 weeks.
In the concurrent Boston program at the Peter Bent Brigham Hospital, David Hume performed nine kidney transplants between 1951 and 1953 (Hume et al. 1955). The donors were patients who had died during surgery or hydrocephalus patients who had a normal kidney removed so that its ureter could be used as a conduit to drain their excess cerebrospinal fluid to the bladder. Except for one orthotopic transplant, the kidneys were placed in the anterior thigh with the ureter brought out to the skin. Some of these recipients were treated with ACTH, cortisone, and testosterone. Only four kidneys showed any function, and in three of these, function was brief. But, remarkably, one transplant functioned for 5.5 months before it was rejected. This helped sustain the hope that in humans the outcome of homografts might be better than predicted from animal transplants, possibly justifying the continued attempts made in as many as six more patients that were never reported. But in the opinion of most physicians of the time, the Paris and Boston experience confirmed the futility of kidney homografts, proving that such human experimentation was unwarranted and unethical. However, two landmark events would soon conspire to brush aside this pessimism and launch the modern era of transplantation.
PRELUDE TO THE MODERN ERA
The prelude to the modern era of transplantation began by chance. During World War II, Peter Medawar, a young Oxford zoologist who had no previous interest in transplantation, was assigned to join plastic surgeon Thomas Gibson in exploring the use of skin homografts for treatment of burned aviators (Gibson and Medawar 1943). Working in the Burn Unit of Glasgow’s Royal Infirmary, they soon reconfirmed that homografts always failed. Medawar attributed to Gibson the credit for their rediscovery of the “second set phenomenon” previously described in animals by several others including Schöne in 1912 and in a human patient by Emile Holman in 1921 (Schöne 1912; Holman 1924). Like the earlier researchers, they realized that this identified rejection as an immunological event. This important observation and its interpretation are often credited to Medawar as an original discovery and a novel insight. They were not, but Medawar’s many subsequent contributions were so important, his experiments so precise, and his speaking and writing so masterful that he rightfully deserved to be considered the central figure in the emerging field of transplantation.
Returning to Oxford after the war, Medawar conducted extensive studies of skin homografts in rabbits, more firmly characterizing the timing, histology, and immunological nature of rejection (Medawar 1944). Partly because Medawar was unaware of James B. Murphy’s work on the lymphoid system 20 years earlier and refused to believe Leo Loeb’s similar findings of the 1930s, he remained convinced for another decade that grafts failed because of humoral rather than cellular immunity (Brent 1997). Frustrated that no antibody could be detected, Medawar then turned his attention away from transplant rejection. Instead, he and his first graduate student, Rupert Billingham, began to study the esoteric phenomenon that, in spotted guinea pigs, the pigmented areas of skin autografts gradually encroach on the surrounding white skin (Billingham and Medawar 1948).
Soon after this, Medawar accepted the Chair of Zoology at the University of Birmingham, recruiting Billingham to join his faculty. Together they continued to study pigment spread.
In 1949, serendipity assumed importance in the story. At a cocktail party Medawar talked with his faculty colleague Hugh Donald, who was studying twin cattle. Donald asked whether identical twins could be distinguished from fraternal twins. Medawar responded that skin grafts exchanged between twins would be accepted only by the identical ones.
When Donald requested that he perform such skin grafting experiments, Medawar was reluctant because he was uncomfortable with the prospect of handling large animals. Therefore, he enlisted the aid of his junior colleague, Billingham, who as the grandson of a dairy farmer was not afraid of cows. Billingham and Medawar had little scientific interest in the outcome of the skin grafts, which they felt was entirely predictable. However, the results were unexpectedly interesting (Billingham 1991) (Fig. 2).
Peter Medawar skin-grafting a cow. The unexpected acceptance of grafts exchanged between chimeric bovine fraternal twins was the key to understanding tolerance. (The photograph is a gift from the private collection of Rupert Billingham, who was the photographer.)
They found that most cows accepted their twin’s graft, a surprising result because they knew that most cattle twins are fraternal and some of the twin pairs they grafted were of different genders (Anderson et al. 1951). Totally puzzled by this finding, they discussed it with Hugh Donald, who suggested that they read a paper published 4 years earlier in Science. When they did so, the significance of their results suddenly became clear.
To place the cattle twin chapter of the story of tolerance in proper context, it is necessary to go back nearly two centuries. In 1779, the English surgeon John Hunter provided the first anatomical description of the freemartin, a term used for the generally sterile female of a pair of cattle twins of unlike sex. Hunter dissected freemartins, finding that they had masculinized sex organs (Palmer 1835). That Hunter was unable to explain this curious phenomenon is ironic because he was an authority on the circulation of the placenta.
The next important link in the story was provided in 1916 when Frank Lillie, an embryologist at the University of Chicago, dissected a pair of unborn cattle twins (Lillie 1916). He found that the chorions of the twins’ placentas were fused, causing a common intrauterine circulation that would allow blood to be exchanged freely between the twins. Like John Hunter, Lillie also found that when cattle twins were of unlike sex, the gonads of the female were usually rudimentary. He reasoned that male hormones circulating through the female embryo inhibited the development of its reproductive organs.
Three decades later, in 1945, Ray Owen at the University of Wisconsin wrote the next chapter. In studying erythrocytes in cattle, Owen found that in this species fraternal twins frequently had a mixture of two red blood cell types (Owen 1945). Recalling Lillie’s finding of placental fusion of bovine twin embryos, Owen concluded that not only hormones but also cellular elements of the blood must be exchanged in utero by twin cattle. He realized that persistence of red blood cell chimerism in adulthood must depend on intrauterine transfer not only of short-lived red blood cells but also of stem cells that would perpetuate them.
Six years after the publication of Owen’s largely forgotten paper, Billingham and Medawar read it with fascination and suddenly understood why cattle accepted their fraternal twin’s graft. They realized that, like the freemartins studied by Hunter, Lillie, and Owen before them, their twins must have exchanged blood in utero and that the donor cell chimerism persisted in adulthood. They reasoned that the stem cells exchanged in utero by these twins would be not only those for red blood cells but also for leukocytes, and that these were probably responsible for skin graft acceptance. They also realized at once with considerable excitement that it might be easy for them to repeat Nature’s bovine twin experiment in other species. In 1951, Billingham and Medawar moved to University College, London. Medawar said, “Thank God we’ve left those cows behind.”
In London, Billingham, Medawar, and graduate student Leslie Brent set out to induce chimerism and homograft acceptance in mice by inoculating intrauterine fetuses with donor strain spleen cells (Fig. 3). In retrospect, they were quite lucky to have achieved any successes. By chance, the inbred strains they chose for the experiments were CBA and A, virtually the only H-2-incompatible combination available to them in which neither graft-versus-host disease nor incompatibility of skin-specific antigens would cause death or rejection of the graft.
Rupert Billingham and Leslie Brent in their laboratory, where they inoculated neonatal mice with spleen cells (upper insert). In adulthood, the mice accepted skin homografts from the donor strain (lower insert). (The photographs are a gift from Rupert Billingham’s private collection.)
In adulthood, survivors of their intrauterine inocula accepted skin grafts, but only from the spleen cell donor strain (Billingham et al. 1953). Like the cattle work, it showed that allograft rejection was not inevitable. Surprisingly, neither the publication of the work in October 1953 nor its presentation at a New York Transplantation meeting the next year had sudden dramatic impact, in part because of Medawar’s statement that it had no clinical implication (Hamilton 2012, pp. 225–226). In retrospect, however, it is clear that no other experiment in the field has approached the importance of their demonstration that induction of chimerism can prevent graft rejection. It contributed to Medawar’s 1960 Nobel Prize, and it reverberates in present-day clinical trials to induce tolerance.
During their experiments to induce tolerance, Billingham and Brent made another quite unexpected observation. Many of their chimeric mice were sickly “runts.” They soon determined that this was because immunocompetent cells in the neonatal inocula migrated to and attacked the lymphoid tissues of their new hosts—that is, graft-versus-host disease (GVHD). Although they briefly reported this in 1957, publication of their conclusive proof was delayed until 1959 by Brent’s extended sabbatical with Ray Owen (Brent 1991). Meanwhile, in 1957, Morton Simonsen had independently discovered and published his evidence for GVHD in chickens that he had injected as embryos with allogeneic lymphoid cells (Simonsen 1957, 1985). The fact that lymphocytes must be mobile in order to cause GVHD was among the findings that belatedly induced Medawar to accept the importance of cellular immunity, of which he became the foremost proponent. Even better evidence of cellular immunity was provided by Avrion Mitchison’s demonstration that immunity to tumor grafts could be transferred by cells but not by antibody (Mitchison 1954). Final proof of lymphocyte mobility came when James Gowans showed that lymphocytes recirculate from blood to lymph and back again (Gowans 1957).
THE FIRST SUCCESSES
In Boston, barely 14 months after the initial report of tolerance in chimeric mice by Billingham, Brent, and Medawar, came the next landmark. On December 23, 1954, Joseph Murray bypassed the barrier of rejection by using the patient’s identical twin as the donor of a human kidney transplant (Fig. 4) (Murray et al. 1955; Merrill et al. 1956). In retrospect, the success of this transplant had no real scientific significance because technical accomplishment of human kidney transplants was nothing new and it had also been known for decades that skin grafts exchanged between identical twins were not rejected (Brown 1937). But, the impact of this first successful human transplant was immediate and profound. Widespread enthusiastic reports were an important stimulus for surgeons to pursue further efforts in transplantation. But because induction of chimerism in human recipients by neonatal treatment was clearly impossible, another approach would be necessary. The next year, one was found by Joan Main and Richmond Prehn, who showed that weakening the immune system of adult mice by radiation allowed them to induce chimerism by inoculating bone marrow cells. Skin grafts were then accepted if they came from the bone marrow donor strain (Main and Prehn 1955). This and the similar success of the method in one dog kidney homograft (Mannick et al. 1959) encouraged transplantation teams in Paris and Boston to pursue this approach for preventing rejection of human kidney homografts.
Joseph Murray and his team performing the first successful kidney transplant in 1954 using as a donor the recipient’s identical twin.
In 1958, Murray’s team used the Main–Prehn strategy in two human kidney recipients that they conditioned with lethal total body irradiation (TBI) and donor bone marrow. Ten other patients were treated with sublethal TBI without bone marrow. Disappointingly, 11 of the 12 irradiated patients died within a month (Murray et al. 1962). The survivor (who was not given bone marrow) maintained adequate function of his fraternal twin brother’s kidney for 20 years. Scientifically, this was a more important accomplishment than the identical twin case because it was the first time the genetic barrier to human kidney transplantation had been breached (Merrill et al. 1960). Five months later in Paris, Jean Hamburger and colleagues (Hamburger et al. 1959) using the same irradiation treatment were successful with another fraternal twin transplant, which functioned until the patient’s death from unrelated causes 26 years later.
In these two dizygotic twin cases, there was speculation that the donor and recipient, like twin cattle, had become chimeric by exchanging tolerogenic blood cells during gestation. However, between 1960 and 1962 in Paris, Jean Hamburger and René Küss showed that this was unnecessary by performing four successful transplants in nontwin recipients conditioned by total body irradiation (Küss 1962). This French experience was the principal (and perhaps the only) justification for continuing human kidney transplantation. Because bone marrow inocula were not used in these patients, it was assumed that chimerism was not necessary for success.
Although Hamburger and Küss both used adrenal cortical steroids as an adjunct to TBI and Küss secondarily administered 6-mercaptopurine (6-MP) to one of his irradiated patients, there was as yet no systematic investigation of drug-based immunosuppression as a substitute for radiation. In the 1950s, oncologists were evaluating drugs including nitrogen mustard and 6-MP for treatment of malignancies. In 1959, interest was drawn to the use of such drugs for transplantation by the report of an experiment by Robert Schwartz and William Dameshek at Tufts University. They found in rabbits that 6-MP reduced the antibody response to bovine albumin (Schwartz and Dameshek 1959). In 1960, they reported that the drug modestly extended the survival of skin homografts (Schwartz and Dameshek 1960). When Roy Calne, a surgical trainee in London, learned of these experiments, he tested the effect of 6-MP on rejection of dog kidney homografts and found that it significantly prolonged their survival. He promptly reported this in Lancet (Calne 1960). Simultaneously, Charles Zukoski, working in Richmond with David Hume, independently made the same observation, but his report in the Surgical Forum did not appear until the following year (Zukoski et al. 1960). Calne also treated three human kidney recipients with 6-MP, but they all died without showing any function of the transplant (Hopewell et al. 1964).
In 1960, with Medawar’s help, Calne obtained a research fellowship with Joseph Murray in Boston. Although Murray advised him to pursue the Brigham’s ongoing experiments with whole body irradiation, Calne began working instead with 6-MP and its derivative azathioprine, which he obtained from Gertrude Elion and George Hitchings (Hitchings and Elion 1954), who were subsequently awarded the Nobel Prize for development of these immunosuppressive agents. Calne’s demonstration that in dogs kidney transplant rejection could sometimes be delayed substantially with these drugs stimulated the Brigham team to begin using them in human kidney recipients.
NATIONAL RESEARCH COUNCIL CONFERENCE
In 1963, one of the most important landmarks in the history of transplantation was revealed during a small conference organized by the National Research Council (NRC). About 25 individuals, including most of the world’s active transplant clinicians and scientists, assembled in Washington to review the status of human kidney transplantation. The results presented by these acknowledged experts were extremely discouraging. Fewer than 10% of their several hundred allograft recipients had survived as long as 3 months (Goodwin and Martin 1963). Of patients treated with total body irradiation, only six had approached or achieved 1-year survival (Starzl 2000). Hope was expressed that immunosuppressive drugs might be more effective. Murray reported his first 10 patients treated with 6-MP and azathioprine instead of irradiation (Murray et al. 1963). One had survived for a year, although at the time of the conference that graft was failing. The others died within 6 months. Thus, at this point, drugs seemed no more effective than radiation. The mood at the conference was so gloomy that some participants questioned whether continued activity in human transplantation could be justified (Küss 1992).
The gloom was dispelled by a single presentation given by Tom Starzl, a virtually unknown newcomer to the field, who had been invited to the conference as an afterthought. He described a new immunosuppressive protocol that had allowed >70% 1-year renal graft survival. He had more surviving patients than the rest of the world’s better-known participants combined. At first, his audience was incredulous. Tapes that recorded the subsequent, sometimes acrimonious, discussions were lost, but eventually Starzl’s unprecedented results had to be believed because he had brought with him charts detailing the daily progress of each patient—including laboratory tests, urine output, and immunosuppressive drug doses (Hamilton 2012, pp. 279–280, 487). Starzl’s innovation, based on his consistent success in reversing homograft rejection in dogs, was to add prednisone to azathioprine. Although rejection usually occurred in patients on azathioprine alone, it was usually reversible with large doses of prednisone. In most patients, drug doses could then be diminished without provoking rejection. This presentation caused a sensation (Küss 1992). The formal report of the conference consolidated the results of all participants; it failed to convey, and seemed almost to obfuscate, the dramatic reaction to Starzl’s presentation. In fact, the impact on those present was extraordinary. They would have been still more impressed had they known that half a century later, some of the patients Starzl described would be off immunosuppression with the same functioning allografts and would be found to be microchimeric with their donors’ lymphoid cells.
The outlook for renal transplantation was completely changed by Starzl’s report. Transplant historian Nick Tilney described it as “letting the genie out of the bottle” (Tilney 2003). Many of the conference attendees promptly visited Starzl in Denver to learn how to adopt his immunosuppressive protocol (Starzl 1990). News of the breakthrough spread quickly with its publication 5 weeks later (Starzl et al. 1963). Before the NRC conference, there had been only three active renal transplant centers in North America (Boston, Denver, and Richmond). As the effectiveness of Starzl’s innovative immunosuppression became known, 50 new transplant programs began within a year in the United States alone. All of them, and others that began subsequently, adopted Starzl’s “cocktail” immunosuppression. In fact, this protocol remained the virtual world standard for almost the next two decades (Brent 1997).
A PERIOD OF CONSOLIDATION
The period 1964–1980 is often classified as one of consolidation or plateau. During this period, except for the development of antilymphocyte serum (in large part responsible for the first, albeit marginal, successes of extrarenal transplants), there were no landmarks. Importantly, however, there was steady progress. Practical innovations and accomplishments took place that were necessary for maturation of kidney transplantation into a useful clinical service. These included availability of dialysis; Medicare funding of end-stage renal disease; antibody screening to avoid hyperacute rejection; recognition of the importance of tissue typing for related donor transplants; acceptance of brain death; ex vivo preservation, allowing donor organs to be transported and shared; and, perhaps most importantly, the accumulation of experience in patient management that led to the avoidance of over-immunosuppression, thereby decreasing resultant infections and deaths.
Hemodialysis for renal failure was pioneered in Holland by Willem Kolff during World War II, but chronic renal failure could be treated only after Belding Scribner in 1960 devised Teflon arteriovenous conduits for long-term vascular access (Kapoian 1997). Before that, each dialysis treatment used up an accessible artery and vein until none remained. In 1966, James Cimino and Michael Brescia introduced subcutaneous anastomoses of an artery and vein at the wrist to arterialize superficial arm veins that could then be accessed by simple needle puncture. But even in the late 1960s, chronic dialysis was available in only a few centers, and in these, expense limited its use to a small number of patients. Proliferation of centers able to offer chronic dialysis and transplantation took place only after Congress in 1972 approved Medicare funding for patients of any age with end-stage renal disease.
Before the mid-1960s, utilization of kidneys from deceased donors was limited by the ischemic damage that set in as soon as the heart stopped beating, this being the time-honored definition of death. During the next few years, there was gradual although controversial acceptance that irreversible loss of brain function was also a form of death, and one that would allow removal of organs from a “heart-beating cadaver.” In 1968, the “Harvard ad hoc Committee on Brain Death” published its recommendation that irreversible loss of brain function be accepted as death (Harvard Medical School 1968). Uneasiness that this might be influenced by the desire of transplanters to recover organs from possibly salvageable patients, along with criticism over the poor outcomes of the first heart transplants being performed during this same period, fueled controversy over the concept. But eventually, brain death was widely accepted, a crucial factor in increasing the numbers of transplants, especially of extrarenal organs.
As early as 1905, Carrel’s colleague Charles Guthrie had advocated cooling to protect donor organs before transplantation (Hamilton 2012, p. 101). That it was not used in early human transplants probably in part accounted for their poor results. Initially, Starzl used total body hypothermia to protect donor organs, but by 1960 he had switched to infusing cold solution into the portal vein to protect donor livers (Marchioro et al. 1963). By 1963, pretransplant infusion of cold solution into the renal artery of donor kidneys had become standard (Collins 1969). When multiple organs were procured from a donor, in situ cooling by infusion of cold solution into the aorta was used. Ex vivo perfusion of isolated kidneys by a pump (reminiscent of Lindbergh’s machine in the 1930s) was shown to extend preservation of kidneys for 2–3 days (Belzer et al. 1967), but the popularity of this method waned after 1987, when Folkert Belzer introduced University of Wisconsin solution, which when simply infused into the blood vessels of donor organs allowed preservation almost as long (Belzer et al. 1992).
Donor Organ Sharing
After donor kidney preservation of up to 6 hours was accomplished in the mid-1960s, sharing of organs between centers became practical. At first, sharing was local and informal. In 1967, Paul Terasaki started the first sharing organization in Los Angeles (Terasaki 1990). The Boston Interhospital Organ Bank followed in 1968. Subsequently, perceptions arose that local control allowed inequity of donor organ allocation. There was concern that U.S. organs were sent abroad or transplanted at U.S. centers into foreign nationals. Recognition of a need to formalize distribution of donor organs at a national level led Congress to pass the National Organ Transplant Act in 1984. The Southeastern Organ Procurement Foundation (SEOPF), founded in 1969 and eventually composed of 12 hospitals in several cities, served as the template for the resultant national entity that now controls organ allocation and placement, monitors performance of transplant centers and organ procurement organizations, collects data, and controls quality—the United Network for Organ Sharing (UNOS) (McDonald 1988). UNOS has been a force for order and good, but to some its effectiveness has seemed compromised by the excessive scale of its tasks and by the number and size of its committees and inevitable bureaucracy. Adjustments in organ allocation in response to changes in the relative importance of histocompatibility and issues of equity have been difficult, prolonged, and politicized.
Although tissue matching was suggested by Alexis Carrel and studied in animals by George Snell and by Peter Gorer, it could not begin to emerge as a reality for human transplants until 1958, when Jean Dausset discovered the first human leukocyte antigen (HLA) (Snell 1948; Dausset 1958). Antibodies against this antigen were identified in transfused patients and multiparous women by Rose Payne (1957) and Jon van Rood (van Rood et al. 1958) soon after. Testing for antibodies (by agglutination techniques) was cumbersome and inconsistent until Paul Terasaki in 1964 developed a microcytotoxicity assay (Terasaki 1964). His test, which mixed recipient serum and donor lymphocytes in tiny wells, quickly became the standard. For several years, Terasaki did the typing for most U.S. transplant centers. Several of his early findings were of lasting importance (Terasaki 1990): (1) a positive cross-match test identifying donor-specific antibodies in the serum of a prospective kidney recipient predicts hyperacute rejection; and (2) matching can reliably identify the optimal donor within a family. It was next assumed by histocompatibility experts that matching would be just as important in selection of unrelated donors. However, in 1970, when Terasaki examined his large database (1216 kidney transplant patients from 52 centers) to correlate typing with the outcome of cadaver renal allografts, he found no correlation. Terasaki’s announcement of this at the Transplantation Congress in The Hague elicited consternation among members of the tissue typing community, who contended that his methods must be faulty (Terasaki 1990). His paper was the only one not accepted for publication in the conference proceedings. NIH made an emergency site visit to Terasaki’s laboratory and abruptly withdrew his grant, closing down most of his research. His funding was subsequently restored when others confirmed that his findings were correct.
Since that time, typing has improved, with identification of many additional histocompatibility antigens, including those of the important Class II locus (D or DR) (Ting and Morris 1978). Histocompatibility matching remains crucial in bone marrow transplantation and important in selection of family donors. But even now, for unrelated donor organ transplants, the benefit of matching remains much smaller unless there is a perfect match.
Additional important findings that took place during the consolidation period are listed because space precludes their full discussion:
- Blood transfusions, rather than decreasing the chance of kidney allograft survival, were found during the 1970s to improve it. This still-not-understood observation by Gerhard Opelz and others led to protocols of deliberate pretransplant transfusions (Opelz et al. 1973). However, this strategy was abandoned when the even greater improvement due to cyclosporine obscured the benefits of transfusions.
- In some pigs and rats, liver homografts survived without immunosuppression and also induced acceptance of other types of graft from the liver donor (Calne 1969; Kamada 1980).
- William Summerlin’s report that pretransplant culture of skin allografts allowed their acceptance without immunosuppression was greeted with excitement but was subsequently proven to be fraudulent. Summerlin was painting the grafts to make them look viable (Medawar 1996).
- The origin of multiple transplantation societies, journals, national and international meetings, and registries of clinical results ensured prompt dissemination of scientific and clinical findings to a degree not seen in other fields.
EXTRARENAL ORGAN TRANSPLANTS
During the period of consolidation, transplantation of nonrenal organs (liver, heart, and pancreas) began. These landmarks are subjects of other articles in this collection. Their initial success (albeit marginal) was promoted by improvement in immunosuppression by anti-lymphocyte serum (ALS), which thus in itself deserves landmark status. Further improvement awaited cyclosporine.
Antilymphocyte serum (ALS) had a prolonged evolution. At the end of the nineteenth century, Elie Metchnikoff proposed the use of antiserum to mitigate cellular immunity (Metchnikoff 1899), a concept reexplored six decades later by Byron Waksman (Waksman et al. 1961). Depletion of lymphocytes with antiserum was a logical extension of Gowans’s finding (Gowans et al. 1963) that thoracic duct drainage delayed skin graft rejection in rats. In 1963, Michael Woodruff reported that ALS was remarkably effective in extending skin allograft survival in rodent models (Woodruff and Anderson 1963). Monaco et al. (1966) and Levey and Medawar (1966) reported similar success. In 1966, Starzl was the first to use ALS clinically. After finding it effective for kidney allografts, he also credited it with allowing his first successful human liver transplants in 1967 (Starzl et al. 1967a,b). Many others, including Anthony Monaco, John Najarian, and A.G. Sheil, treated patients with homemade ALS from rabbits or horses immunized with lymph node cells or cultured lymphoblasts before pharmaceutical companies began to produce it. Variability in effectiveness of different batches of this polyclonal antiserum was troublesome before the advent of monoclonal derivatives of ALS that were directed first at all T lymphocytes and subsequently at their subsets or their interleukin 2 receptors (Cosimi et al. 1981). These agents have become a mainstay of the modern immunosuppressive armamentarium, especially for induction therapy.
Cyclosporine and Tacrolimus
The next landmark was the “wonder drug” cyclosporine. After a slow start, it revolutionized transplantation by substantially improving kidney transplant results and greatly facilitating successful extrarenal transplants. Cyclosporine is a fungal derivative first reported in 1976 by Jean-François Borel to have immunosuppressive qualities (Borel et al. 1976). Following several years of encouraging animal experiments, Calne began in 1979 to use it as a single agent in treatment of human kidney recipients, finding it to be more potent than azathioprine but also toxic in higher doses, leading to infections, lymphomas, and renal failure (Calne et al. 1979). Results of initial trials in Boston and Canada rated it as unimpressive to poor, causing some to believe the drug should be abandoned. Once again, as he had earlier with azathioprine, Starzl developed a protocol that strikingly improved outcomes of kidney transplants, this time by adding prednisone to the new drug (Starzl et al. 1980). In addition, this transformed transplantation of extrarenal organs into a practical clinical service (Starzl et al. 1981).
Cyclosporine soon became the standard baseline immunosuppressant and remained so until 1989, when Starzl showed that rejection of liver and other organ allografts resistant to treatment by cyclosporine, steroids, and antibodies could often be reversed by an even more potent drug, tacrolimus (Starzl et al. 1989). Tacrolimus has now in large part replaced cyclosporine as the usual baseline agent.
Modern immunosuppression with agents such as cyclosporine/tacrolimus and T-cell antibodies now allows excellent short- and midterm survival of allografts, with 1-year graft survival exceeding 90% for kidneys and approaching this figure for other organs. Nevertheless, because of ongoing morbidity from drug toxicity and graft loss from chronic rejection, achievement of drug-free graft acceptance (tolerance) remains the ultimate goal. Plans for inducing tolerance invariably start with review of the 1953 demonstration by Billingham, Brent, and Medawar that chimerism induced in neonatal mice by lymphoid cell inocula allows acceptance of donor-strain skin grafts. Main and Prehn’s subsequent success with infusion of donor cells into irradiated adult mice inspired pioneer transplanters in Boston and Paris to test this method in a small number of human kidney allograft recipients, but with no success. When graft survival was achieved in several irradiated human recipients without donor cell inocula, and in others with immunosuppressive drugs alone, it appeared that donor cell chimerism was irrelevant to long-term allograft success.
In animal models, there has been continued exploration of donor cell inocula (combined with immunosuppression) for inducing chimerism and tolerance of allografts (Monaco et al. 1966; Lance and Medawar 1969; Thomas et al. 1987). But in humans between 1959 and 1990, there were only a few trials of this strategy, the first by Monaco et al. (1976) and the largest by W.H. Barber and A.G. Diethelm, who treated 57 kidney recipients with an induction course of ALG and cryopreserved donor bone marrow given 17 days after transplantation (Barber et al. 1991). Their results were equivocal, although early graft rejection seemed less than in the control group.
In 1992, attention was dramatically refocused on the role of chimerism in human allograft survival. Tom Starzl discovered that donor leukocyte chimerism was present in patients who had maintained successful kidney or liver grafts for up to three decades (Starzl et al. 1993). Sensitive immunochemical and molecular assays were necessary to detect donor cells, which in some patients were not found in blood but only in biopsies of skin, lymph nodes, and other tissues. This extensive search determined that a microchimeric state was present in all 30 patients studied.
Because these recipients had not been given donor cells, the chimeric cells could only have reached them as passengers migrating from the donor organ. Many of the patients appeared to be tolerant, because they were off all immunosuppression. This finding was the basis of Starzl’s belief that chimerism is an important cause (not the consequence) of successful transplantation, culminating in his hypothesis of a two-way paradigm for tolerance—that is, successful engraftment is the result of the responses of coexisting donor and recipient cells each to the other, causing reciprocal clonal exhaustion followed by peripheral clonal deletion (Starzl et al. 1992, 1996). This interpretation has been rebutted by Kathryn Wood, David Sachs, and others, who argue that persistent microchimerism induced by a donor organ is as likely to be the effect as the cause of tolerance (Wood and Sachs 1996). Whatever the resolution of this debate, it is clear that Starzl’s demonstration of microchimerism in his patients has been an important stimulus for reexploration of this approach to allograft tolerance by the trials described by Markmann and Kawai (2012).
The evolution of organ transplantation in the last half century is one of medicine’s great stories. In support of this contention are the five Nobel Prizes given for transplantation (19, if related immunology is included). In addition, this year’s Lasker Award to Tom Starzl and Roy Calne brings to six the number of these prestigious awards for transplantation or related work. The distribution of these prizes has not been without controversy. Largely forgotten rivalries that enrich the story are those between Alexis Carrel and his colleague Charles Guthrie; between the proponents of cellular immunity (Elie Metchnikoff, James B. Murphy, Leo Loeb, and Avrion Mitchison) and the champions of humoral immunity (Paul Ehrlich, Peter Gorer, and others, even including Peter Medawar during his early career); and between transplant pioneers in Boston, Denver, and Paris, even between two separate teams of transplanters. However, it is notable and reassuring that in 1999, 12 of the field’s surviving pioneers met to select and agree on the “historical landmarks” briefly reviewed in this work (Groth et al. 2000) (Fig. 5). The promise of further progress toward the ultimate achievement of tolerance is described in subsequent work.