From “Publish or Perish” to “Patent and Prosper”

After spending more than 55 years doing research and 45 years teaching a graduate course in physical biochemistry with the last 5 years spent attempting to teach ethics in a required course on Responsible Conduct of Research, I wondered what to write in response to Herb Tabor’s kind invitation and urging to submit an article on “Reflections.” Tempting as it is to write about my love affairs with the ultracentrifuge, tobacco mosaic virus, nucleic acids, ribosomes, and a host of proteins including aspartate transcarbamoylase in particular (1), I decided instead to look back at the way science was done at the start of my career and how our research environment and academia, in particular, have changed in the past half-century. In doing so, I offer apologies at the outset. This account cannot be construed as “history.” Instead it offers personal recollections, biases, impressions, and evaluations, frequently without documentation. Indeed, many of the papers and historical records that I would cite are not readily available. Regrettably from my point of view, Stanley Hall, my home on the Berkeley Campus for 50 years, was demolished several years ago to be replaced by a much larger laboratory, and this Reflections is being written in a temporary office. The following pages present my views on various topics such as how careers in science have changed, how federal funding of research in universities was initiated, and how politics interfered with the funding process. In addition, the peer review system is discussed along with the plight of dissatisfied applicants. Finally, I present my impression of the impact of federal funding on universities, the controversy over indirect costs, the burden of government regulations, the enduring struggle over fraud in science, and the major changes in the culture of academia and the commercialization of universities stemming from the Bayh-Dole Act.
On many of these controversial issues I was personally involved both in advocating and opposing policies under consideration by government agencies. As President of the American Society of Biological Chemists (ASBC), now the American Society for Biochemistry and Molecular Biology (ASBMB), and later as President of the Federation of American Societies for Experimental Biology (FASEB) and Chair of the Public Affairs Committee of ASBMB, I traveled extensively to Bethesda and, in concert with other concerned biomedical researchers, helped to formulate positions that we hoped represented the working scientist’s point of view. Moreover, I had the privilege of serving for 6 years as an advisor to the Director of the National Institutes of Health (NIH), Harold Varmus, and as its Ombudsman in the Basic Sciences. This part-time activity involved extensive visits to more than 45 universities and medical schools where I learned a great deal about the conduct of biomedical research and the problems encountered by investigators. To some extent, therefore, this Reflections, although not about my research, is indeed a personal memoir.

(THE JOURNAL OF BIOLOGICAL CHEMISTRY, VOL. 281, NO. 11, pp. 6889–6903, March 17, 2006. © 2006 by The American Society for Biochemistry and Molecular Biology, Inc.)
Although a significant amount of material presented below is critical of government agencies and policies, university administrators, and actions of fellow scientists, I wish to emphasize that the triumphs of biochemistry in the past 60 years are almost impossible to encompass. Although biochemistry formerly was concentrated in medical schools and a few schools of agriculture, now university campuses throughout the country have thriving departments. Moreover, biochemistry is "invading" chemistry and physics departments as well as engineering schools. Leadership in research is now in the hands of innumerable creative young people who have fallen in love with biochemistry and are largely responsible for the startling discoveries of the past half-century. The training of these scientists and the support of their research resulted from the magnificent contributions of those great institutions, NIH and the National Science Foundation (NSF). Conducting scientific research today is dramatically different from what I and others of my age experienced, and the pace of discovery is much more rapid. Superb commercial instruments for numerous techniques are now available. In addition, innumerable reagents that biochemists formerly prepared in their laboratories can now be purchased, thereby freeing investigators to perform imaginative experiments. New journals, numbering in the thousands, are available for publication of the findings. Who could have imagined that the Journal of Biological Chemistry (JBC), which in 1945 had an Editorial Board of 12 members and published 365 articles comprising about 3600 pages, would 60 years later have 650 biochemists on the Editorial Board, leading to the publication of about 5000 articles comprising 43,000 pages? And today's JBC pages are much larger than those in the 1950s. How did this come about? What factors led to such dramatic growth? How did little Biochemistry morph into Molecular Biology and Cell Biology, Big Science, and Big Business?
How did academia change from "Publish or Perish" to "Patent and Prosper?" What may we expect for the next 60 years?

Independence in Research Comes Later in Careers
As a newly minted Instructor in Biochemistry at Berkeley in 1948 just after receiving my Ph.D., I was acutely aware of the unpredictability and hazards of an academic career. My training as an undergraduate in chemical engineering at MIT followed by graduate school in physical chemistry at Princeton provided me with little knowledge of biochemistry, and there I was, by good luck, on the faculty of a biochemistry department. Accordingly, by auditing various courses, I slowly became familiar with metabolic pathways, leading me to describe the auspicious position of Instructor as a "Hypothetical Unstable Intermediate." Today, virtually no one is appointed as an Instructor. On the contrary, most individuals enter academia in science as Assistant Professors or even Associate Professors. Such appointments are made after they have spent several years, and often too many years, as postdoctoral fellows. It is now commonplace for young investigators to have two, or even three, different postdoctoral stints. As a consequence, researchers in the biomedical sciences do not become independent investigators until a significantly older age than was the case 40–60 years ago. Appointments to faculty positions at an older age coupled with the termination of mandatory retirement have resulted in numerous reports bemoaning the aging of the research community.
I can recall joking 30 years ago that consideration of promotion of a biochemistry faculty member to tenure no longer involved reading their papers. The inability to comprehend the contents of the burgeoning literature resulted, according to folklore, in the practice of "weighing" the papers and, at a later time, of "counting" them. Finally, a rebellion occurred over the "quickies," and academic departments began considering content again. However, in the past 10 years, it appears that more consideration is focused on where an article was published rather than on what was published. Fortunately, leading academics responsible for this "need" to publish in Cell, Nature, Science, or the Proceedings of the National Academy of Sciences (PNAS) are now realizing that they have become captives of their own creation; papers in those journals are essential for winning prizes and for promotion of younger faculty members to tenure. Although there is still considerable "hype" by Editors of these prestigious journals in order to attract papers, resentment is now growing in the academic community about this emphasis on where to publish. Whether this change in the culture of academia will have an impact on young investigators and their decisions as to where to send their "hot" papers remains to be seen. In that regard, it is relevant to ask whether the almost limitless proliferation of journals will ever end. Professional societies justify publishing new specialized journals based on the growth of science, and private companies have found science reporting a profitable venture.

Federal Support of Science
From the perspective of today it is difficult for those whose careers began in the 1940s to recall how research was done before federal funds became commonplace. The birth of the NSF in 1950 was the outcome of the remarkable report (2) by Vannevar Bush to the President of the United States on July 5, 1945. That insightful and visionary report, entitled "Science, the Endless Frontier," should be required reading today for Members of the United States House of Representatives and the Senate as they are perilously close to reducing the budget of NIH for next year. The report of only 40 pages, plus the more detailed appendices by various subcommittees, has only 6 parts (or chapters). Individual sections are devoted to research aimed at understanding and eliminating disease, the benefits to public welfare from scientific research, the need for training future generations of scientists, the importance of openness in science and the "freedom of inquiry," and finally the establishment of a government agency to foster and fund scientific research in universities. Basic scientific research and training in this country today stem directly from the Bush report, an outgrowth of his experience as head of the Office of Scientific Research and Development (OSRD) in supervising civilian scientific activities during World War II. The recognition of the invaluable contributions to the war effort made by the scientific community led to extensive deliberation of the potential role of government in the support of science in peacetime.
Not only is the NSF the direct beneficiary as spelled out magnificently in the section termed "The Means to the End" but also, to a substantial extent, that "great invention," NIH, from which most biochemists receive research support, owes its largesse to the spirit of the Bush Report in the part entitled "The War Against Disease." However, it should be noted that 5 years elapsed between the submission of the Bush Report and the establishment of the NSF. During that interval of debate and political bickering over the governance of the proposed agency (3), the medical community, many of whose members were not enthusiastic about biomedical research being centered in the new foundation, devoted efforts to steer more government funds to the Public Health Service and thereby to NIH. That activity led to the astonishing increase in NIH funding for extramural research from less than 1 million dollars in 1946 to about 4 million in 1947 (4). Ever since, NIH has become the sponsor of most biomedical research and training. The role of NSF in supporting basic biological research is meager by comparison. Whereas there was substantial support for doubling the NIH budget recently, a similar effort to double the NSF budget has not been successful. Indeed Congress and the public have been extremely enthusiastic about increasing funding for research aimed at improving the health of the citizenry. Almost invariably over the past 60 years, the annual increases in NIH funding approved by Congress exceeded those proposed by the President. Some have attributed that to the personal interests of members of Congress in their own health. Whatever the motivation, funding of NIH has increased phenomenally, reaching the present staggering level of about $29 billion. But it should not come as a surprise that the "doubling" policy of funding from 1999 to 2003 was bound to end.
The scientific community was hoping for "a soft landing" with increases of 7-9% annually as contrasted to the 15% increase during the doubling period. That soft landing has not materialized, and complaints are now widespread about the inadequacy of annual increases that do not even account for inflation.
As can be readily gleaned from this brief history of federal funding, scientists of my generation did not spend much time writing grant applications as they launched their careers. In my first few years as a faculty member, I received funds kindly provided by Wendell Stanley who, as a Nobel Laureate, had grants from various private foundations. The first graduate student in my laboratory was supported by university funds. Several years after initiating my research in Berkeley, I received my first grant. Actually, it was a contract of about $7,000 from the Office of Naval Research (ONR) for research on tobacco mosaic virus and the ultracentrifuge. Administrators at ONR were enthusiastic about basic science and provided enormous freedom to investigators. After a few years, many of the staff at ONR moved to NSF, and I began receiving funding from NSF. That support continued for many years, culminating years later in an annual budget of about $70,000. In the early days of federal support, I had the narrow view that my laboratory was not doing biomedical research, and I didn't apply to NIH even though larger grants were readily available. My NSF grant was adequate for many years. Graduate students in the early 1950s were supported as teaching assistants or research assistants funded by small grants to faculty members or departmental funds. A typical laboratory would have one or two graduate students, a technician, and perhaps one postdoc, a far cry from the research group of today.

Intrusion of Politics in Federal Support of Research
Although most scientists were ecstatic about the Bush report and the concept of federal funding of research at universities, some expressed reservations; there was considerable apprehension about the possibility of political interference and curtailment of free inquiry. Indeed over the years, there have been periods, such as the 1950s, when NIH grants to individuals were terminated because of their presumed political activities. In 1953, while McCarthyism was rampant, the Federal Bureau of Investigation (FBI) began screening grantees. Based on FBI reports, Oveta Culp Hobby, as Secretary of Health, Education and Welfare (HEW) in the Eisenhower administration, interceded and ordered the cancellation of grants to Linus Pauling and other prominent scientists. Protests were mounted both within NIH and in the extramural community. Some leading administrators at NIH were so appalled about the policy dictated by the Secretary that they suggested to some of the "debarred" investigators that they substitute a colleague's name as the Principal Investigator, in which case the grant would still be funded to the institution. About 4 years elapsed before the policy of screening grantees initiated by Secretary Hobby was essentially abandoned. During that period, William Consolazio, who headed the section on Molecular Biology at NSF and was one of the pioneers at both ONR and NSF, sent me an urgent request for a prompt review of an application from Pauling for funding his research previously supported by NIH. According to my recollection, I sent Consolazio two responses. One was a detailed review of the proposal along with a summary indicating that the proposed research was ground-breaking and would have a great impact on our understanding of protein structure. In addition, my formal evaluation included, "The principal investigator is a giant in the field of science and is eminently and uniquely qualified to perform the research that is outlined."
That was for the record. The second response was much more personal because I knew Consolazio well and served on his panel. It asked rhetorically, "Who am I to evaluate a research grant application from God?" For anyone interested in physical chemistry and its application to biological problems, Linus Pauling was God! Not only did NSF, through Consolazio's intervention, fund Pauling's grant application, but others who were cut off by NIH also received funds from NSF.
There have been other episodes of political interference in the operation of the granting agencies, with the most serious threats occurring during the Nixon administration. Beginning in 1971, for example, the training grant function at NIH was taken away from the Institutes, the number of review committees was reduced drastically, appropriated funds were impounded, and plans to separate the National Cancer Institute from NIH were initiated. Finally, Robert Q. Marston, Director of NIH for 5 years, was summarily fired in 1973 for refusing to cooperate with the Office of Management and Budget (OMB) in cutting basic research programs. At that time the entire peer review process was in jeopardy. Other less serious assaults on the grant systems at NSF and NIH dealt mainly with appointments to high level positions, such as the Director of the NSF or advisory councils at NIH. Some grant applications have also been rejected because a political official thought that government should not support the type of research described in the proposal. One prominent former senator, William Proxmire, who crusaded against waste in government, periodically ridiculed granting agencies over grants whose titles he didn't like by publishing his list of "Golden Fleece Awards." His complaints were often unwarranted. These interferences with peer review arose primarily in areas of social or political controversy such as sexual behavior, AIDS, reproduction science, or climate control. However, by and large, granting agencies have managed to resist external political pressure, with the exception of the response to priorities in the appropriation process during times of budgetary constraints. Apprehensions expressed in the 1940s that funding of research by government would be swayed or dominated by politics have not materialized.
Although actions in the past few years have caused concern about political interference jeopardizing the peer review process and interfering with the freedom of inquiry, the granting agencies have functioned magnificently.

Unhappy and Dissatisfied Applicants-Peer Review
The success of the grant programs does not mean that applicants for grants were not complaining. On the contrary, over the past 40 years there have been many periods when the scientific community has been up in arms about the shortage of funds. Young investigators of today almost certainly will conclude that my description of experience on study sections of NSF and NIH in the 1950s and 1960s is apocryphal.
As members of an NSF panel, we were obligated to read all of the applications being considered at a specific meeting even though some of them were outside our area of expertise. The reviews at home, in preparation for the meeting in Washington, required several weeks of intense study. Applications receiving ratings in the highest two grades (out of five) were generally funded. Many of the applications did not receive sufficiently high scores to warrant funding, and a significant number were actually disapproved. For me, as a young Assistant Professor, it was a humbling experience, reading superb applications and attending meetings with older panel members who were my heroes in science. I can recall being pleased to see that senior members on the panel were thorough in their review of the proposals as well as generous and fair in their criticisms. After about 4 years my term ended, and I was appointed to an NIH Study Section where I experienced a culture shock. When we were about to record our vote on a particularly bad proposal, one of the more experienced panel members sitting next to me asked how I was going to rate it. My response was, "Disapprove; it is terrible; the investigator will not learn anything interesting, and the research is not worth doing." To which he responded, "On NIH panels we hardly ever disapprove grant applications." At that time NIH was growing rapidly with budgets increasing substantially every year, and "disapproval" was devastating to young investigators. NIH staff justified requests for additional funds on the grounds that so many approved applications were unfunded because of insufficient funds. Quickly recognizing the differences in culture and funding between NSF and NIH, I indicated that I would rate the proposal so low (about 3.0) that the research would not be funded. But my colleague at the table quickly informed me that applications with a rating of 3.0 would be funded. Even many of those with a score of 4.0 were funded. 
In that one meeting, I learned how much easier it was to receive support from NIH than from NSF. That is still true today, but now applicants to NIH are experiencing major difficulties, and even first-class proposals receiving ratings of 1.5 or better are not being funded.
Grievances about poor ratings are many, diverse, and contradictory. They include "those young guys on the panel don't even know about my classical work"; "starting scientists don't have a chance because of the prejudiced senior members on the study sections"; or "the reviewers succumb to fashions and you can't get funds for research on prokaryotic enzymes." During my tenure in the 1990s as NIH Ombudsman in the Basic Sciences, I had the dubious pleasure of sitting in at many sessions of different Study Sections. Like those scientists whose applications did not lead to funding, I can cite shortcomings in the system. There is much too much nitpicking by members of the panels, such as "the magnesium ion concentration is too low." Also, the summary statement on the "Pink Sheet" is not consistent with the score. Old-timers still refer to Pink Summary Sheets; they haven't been pink for some years and have been replaced by electronic messages. Reviewers, in trying to be kind in their write-ups, frequently offer complimentary comments, thereby giving applicants an erroneous impression about the real evaluation of the proposal. Scores often are not consistent with the commentary. As a result, applicants submit a slightly revised request that again receives a poor rating. Hence there are too many amended applications. Panels often have too many "ad hoc" members. Some of them, flattered by the opportunity to evaluate the research of others, go to great lengths to demonstrate their erudition by essentially rewriting the proposals. Applications are too long; experimental minutiae are included, thereby becoming the focus of nitpicking. Despite these criticisms, I find the Peer Review System remarkable.
One of the most common and often repeated complaints is that panel members don't support highly original proposals. Innovative proposals, according to the complainants, are dismissed with the remark: "it won't work." Based on my experience as an observer at Study Section meetings as Ombudsman, I have no doubt that panel members prematurely and inappropriately conclude that very original proposals won't work. Despite their favorable comments about the excellent "track record" of the investigators, they give low ratings to such applications. But this criticism of the peer review system is not new. About 50 years ago in a talk on a fictitious enzyme "Money Transferase" at the Gordon Conference on Proteins, I showed a plot of the probability of obtaining a grant versus the originality of the grant request (Fig. 1). Unfortunately, it still has some validity.
Many of the complaints about the peer review system are legitimate, but it is astonishing to hear scientists, whose grant applications were not funded, criticize NIH or NSF when the decisions were rendered largely by panels of outside reviewers. My rejoinder to the critics is that the ratings are attributable to the judgments of the panel members on the review committees and "the enemy is us." Numerous committees have studied the peer review process (5), and their recommendations invariably have led to improvements in the system. The Division of Research Grants (DRG), established in 1946, initiated many reforms over the years in response to suggestions from the extramural community, and an important change occurred when that division was converted in 1997 to the Center for Scientific Review (CSR). A major reclassification of Study Sections is now being implemented. Although individual scientists have expressed concern about which Study Section will review their application, this reorganization was necessary because developments in biomedical science over the past 50 years have resulted in radical changes in the way research is conducted. Whether this new classification will reduce the number of times that important areas of scientific research fall between the cracks remains to be seen. Much will depend on the conduct of reviewers and on attempts by NIH staff to overcome bias in rating approaches to science. It should not be a surprise, for example, that a panel comprising many NMR spectroscopists and crystallographers would give high scores to proposals using these techniques. However, in the process, scientists using other physical-chemical tools found themselves disadvantaged, and the ratings of their proposals were poor. In my judgment, this type of bias has occurred in the area of biophysical chemistry and in other areas of scientific research.
This deficiency is likely to be mitigated as a result of the reorganization.
Doubtless complaints about peer review will continue and probably even increase because the demand for more R01s from the growing scientific community far exceeds the available funds. That the government has delegated to the scientific community the right to design and operate the peer review system is a remarkable achievement, and criticism of the system is invaluable.

Impact of Federal Funding on Universities
During the 1950s and 1960s, the number of graduate students interested in biochemistry and molecular biology increased tremendously, appointments to the faculty grew rapidly, and new departments were established in many institutions. At Berkeley in the 1960s, we had both a biochemistry department and a virology department that was later converted into a Department of Molecular Biology. Both thrived despite occasional competition for space, faculty positions, graduate students, and university funds. Growth was rampant, influenced in large measure by the increased federal funding for biomedical research. My own laboratory group, including students from both Biochemistry and Molecular Biology, had grown to about 5 graduate students, several undergraduates, 2 postdocs, and 2 technicians. Despite the large increase in the number of postdoctoral fellowships and the establishment of NIH Training Grants, there was a need for additional funding to individual faculty members. Accordingly in 1964, I applied to NIH for funds and received my first, and still continuing, NIH grant entitled "Structure-Interactions of Biological Macromolecules," which was significantly larger than the existing NSF grant. Other faculty members experienced similar growth, and not surprisingly, space problems became acute. New buildings became the issue in nearly every research-oriented university. Amalgamation of departments ensued shortly thereafter. Here at Berkeley about 15 years ago there was a major reorganization. The Biochemistry and Molecular Biology group of about 25 faculty members was restructured as 1 of 5 divisions in the mega-Department of Molecular and Cell Biology (MCB) comprising about 90 faculty. This reorganization clearly has been advantageous in recruiting faculty and graduate students, and the resulting clout on campus coupled with the size of MCB has led to occasional calls for even more independence and authority. Why not establish a College of Biology?
Whether this growth has resulted in a loss of community and collegiality is for others to judge. Moreover, it will be of interest to observe the interactions of the diverse groups in the new, greatly expanded Stanley Hall. Will there be extensive collaboration leading to Big Science, or will there be just more scientists in the enlarged building not talking to one another?
University administrators reacted to the growth of federal funds and desires for additional faculty by encouraging, and even requiring, full-time faculty to raise substantial parts of their salaries through grant funds. The practice began when faculty members on 9-month appointments used some of their grant funds to pay summer salaries for 2 or sometimes 3 months. Soon using grant funds to supplement faculty salaries became widespread. That step led to the policy of establishing "soft money" faculty positions. This practice is particularly prevalent in medical schools. In one state institution I visited, a large department had state funds for 30 FTEs (full-time equivalents) with about 300 faculty members occupying those slots. Each faculty member had to raise about 90% of the salary from federal funds. Clearly NIH was subsidizing universities to an extent not anticipated. This increase in the number of faculty positions led, of course, to greater demands for research grants and for additional laboratory space. In turn, lobbying was initiated for government funds to build new buildings on university campuses. This effort was partially successful but for only a limited period. Some agencies participated in matching fund programs with universities, and there have been repeated periods of earmarking, or "pork," when individual members of Congress without debate inserted into appropriation bills funds for their favorite institution. Not surprisingly there were all too few protests from university administrators over this abandonment of the concept of peer review.

Struggles over "Indirect Costs"
Almost from the beginning of government funding of research there has been acrimony among government officials, university administrators, and research scientists over the issue of "indirect costs" or "overhead." Members of Congress maintained that federal appropriations were to support research, not to sustain universities. Administrators of educational institutions claimed that there were substantial indirect costs incurred in supporting federally funded research of faculty, and universities needed "full cost recovery." Researchers complained that too much NIH and NSF money allocated to institutions was not used to support research. Research scientists, faced with stringent budgets and with ratings on their grants below the funding level, argued strenuously that indirect costs, amounting to billions of dollars annually, should be reduced markedly. In their view, the money saved could then be used to support more research.
Administrators at universities, in focusing on the total cost of the research activity, demanded compensation for many general facilities, indirectly related to the research, such as the purchasing department, the libraries, and the additional administrative staff involved in dealing with government offices. Universities also made claims for depreciation and use allowances and for the operation and maintenance of physical facilities, including heat, light, and custodial services, the monitoring of health and safety regulations, and waste disposal. Most research scientists viewed these charges as legitimate although the amounts were questioned. In contrast, many of the other claims were considered not justifiable. These included charges for those administrators who spent a small portion of their time on matters only peripherally related to the research itself, such as university presidents, vice-presidents, deans, and others who work on management issues. The most serious complaint of scientists was that the funds requested by university administrators as "indirect costs" were not being used to "aid and abet" the research itself. Instead the money was being treated and used as income for general university support.
Saving and accumulating indirect costs by institutions for future expansion was another source of difficulty. In California a major brouhaha occurred at the first meeting of the Board of Regents attended by the newly elected Governor, Ronald Reagan. At that meeting the issue of indirect costs arose, and in response to the Governor's inquiry, the President of the University of California explained the concept and indicated that the University had accumulated about 20 million dollars, money being saved for a "rainy day." It required essentially no time for the Governor to proclaim that the rainy day had arrived, that the state budget for the University that year would be reduced by the amount accumulated, and that the income from indirect costs would henceforth be returned to the State. That draconian action lasted for a few years, and gradually control of most of the indirect costs did return to the individual campuses.
How federally funded money for research was used by universities had been of interest to members of Congress for many years. In 1991, that issue became the focus of Congressional hearings (6), especially in light of audits by the General Accounting Office and the Inspector General of the Department of Health and Human Services (HHS). Representative John Dingell, as Chairman of the Subcommittee on Oversight and Investigations, was particularly incensed about some of the overhead costs submitted by major universities like Stanford. His investigations led to the recovery of large sums of money from various private institutions where "indirect cost" rates were generally greater than 60%. During this period of intense acrimony over indirect costs, ASBMB became a major "spokesman" for the scientific community. As Chair of the Public Affairs Committee of ASBMB, I testified before the Subcommittee on Science of the Committee on Science, Space and Technology of the U. S. House of Representatives. It was a particularly pleasant experience because members of Congress present at the hearings were sympathetic to our position and decidedly opposed to the arguments of the university representatives testifying at the same session. We argued that "indirect costs" should be "limited to those expenditures that were clearly related to and provided support for the research being conducted." Funds for maintaining and repairing laboratories or for their depreciation, in our view, were justifiable charges as indirect costs. However, we opposed the proposed timetable of 20 years for the depreciation of a laboratory building. In our view, the lifetime of laboratories was much longer, and a 50-year schedule was much more appropriate. We considered illegitimate the practice of accumulating funds, essentially placing them "in escrow," for expansion through the construction of new buildings.
Similarly, we viewed as indefensible the request of university administrators for "full cost recovery." We considered intolerable the policy whereby universities collected indirect costs of about 60% on top of the salaries of tenured faculty members paid from grants. This pattern of enlarging the faculty by charging salaries to grants, needing more laboratory space, and accumulating indirect costs for the construction of new buildings and the expansion of universities amounts to Ponzi economics. For research scientists, the Herblock cartoon from the Washington Post in 1991 was a picturesque and relevant portrayal of the "indirect cost" or "overhead" issue (Fig. 2).

Burdensome Regulations and Unfunded Mandates
A corollary of the social contract between government and institutions over federal funding of research has been the imposition of regulations stemming from public pressure and the political response. Because the three constituencies, government, universities, and the scientific community, have different obligations, responsibilities, and cultures, friction among them is almost inevitable. Scientists, for the most part, are temperamentally and culturally skeptical about proscriptive policies. They tend to doubt the legitimacy of policies restricting scientific activities, and they are concerned when these policies are transformed into rules and regulations followed by laws. Nonetheless, most scientists acknowledge, respect, and support regulations promulgated with the goal of protecting human subjects, animals, and the environment. Regardless of the inconvenience, cost, and burden, they abide by those regulations widely recognized as furthering the interests of society. They tend to oppose regulations that in their view provide very little benefit, appear ill-advised or poorly crafted, or are costly and impede scientific research. Doubtless there are legitimate differences of opinion as to whether a given regulation is poorly conceived, inappropriately interpreted, or incorrectly enforced. Because of the different orientations of government officials and research scientists, there have been repeated incidents of irritation and controversy over many regulations imposed as a result of political pressures. Invariably, regulations require enforcement procedures that are extremely costly, and universities join the fray by complaining that the government provides no funds for compliance. Hence "unfunded mandates" have become a major battleground.
Until only a few years ago, researchers had few complaints about regulations dealing with the humane care and treatment of laboratory animals. Although some scientists have been cavalier in their laboratory practices, leading to abuse of animals, most have been ardent advocates for the responsible care and use of animals in research. As members of professional societies like FASEB, they have supported policies regarding the care of animals despite the burden of regulations and the not inconsiderable administrative costs. However, complaints from those in society who oppose all use of animals in biomedical research appear repeatedly (7). In turn, government responses aimed at ameliorating this pressure lead to additional regulations. For example, in 2000, the excellent working relationships between those concerned with the proper implementation of the Animal Welfare Act and researchers who use animals in biomedical research were disrupted over action proposed by the Animal and Plant Health Inspection Service (APHIS) of the United States Department of Agriculture (USDA). In considering animal welfare and the problem of pain and distress, APHIS proposed a definition of distress referring to "a state in which an animal cannot escape from, or adapt to, the internal or external stressors or conditions it experiences, resulting in negative effects on its well being." Just reading the proposed definition causes distress. Actions or regulations based on such vague and uninterpretable language are not likely to contribute to animal welfare.
About 5 years ago the USDA, in response to a complaint that laboratory rats allegedly were receiving "inadequate housing, water, food, and veterinary care," proposed amending the regulation that excluded rats, mice, and birds from coverage under the Animal Welfare Act. This act was designed for large animals and was concerned with the protection of family pets. Other policies covering the humane care and use of laboratory animals already existed, and research institutions were required to file assurances committing them to responsible animal care. Standards and regulations established for cats and dogs were not appropriate for mice and rats bred for research purposes. Their imposition, as proposed by the USDA, would have constituted a major impediment to biomedical research (8). Fortunately, as a result of extensive lobbying by the scientific community, this proposed change in the implementation of the Animal Welfare Act has been thwarted, at least temporarily.
Other conflicts have arisen as a result of proposed government action aimed at regulating the research activities of scientists. In response to what he and others considered unreasonable withholding of data from a federally funded investigation at the Harvard University School of Public Health, Senator Shelby introduced an amendment to the FY 1999 Omnibus Spending Bill (Public Law 105-277) designed to correct what was perceived to be a serious transgression. This action required OMB to revise Circular A-110 so that all data produced through federal funding must be made available under the Freedom of Information Act (FOIA). Scientists in general support the concept of the public's access to raw data and the aims of FOIA, but they recognize that unintended consequences frequently result from well-meaning proposals. Harassment of investigators through the use of FOIA is not rare (9). Fortunately, in revising Circular A-110, the OMB recognized the threat to scientific investigations and responded by formulating a reasonable modification to an unreasonable mandate.
Struggles over regulations will almost certainly persist, with researchers maintaining that they impede scientific investigations, with university administrators maintaining that they are too costly and funds are needed for implementation, and with government officials reacting to the numerous and diverse complaints of their constituencies. The vulnerability of the scientific community to regulations is greatest, as it should be, in treatment and research affecting human subjects. Every incident of harm or death, whether inadvertent or through negligence, is bound to bring forth new calls for regulations. This occurred as a result of the tragic death of a young man, Jesse Gelsinger, who was suffering from a genetic disease and being treated at the University of Pennsylvania. As described by Koski (10), the former Director of the Office for Human Research Protection, Gelsinger "entered a trial in 1999 without full knowledge of what was going to be involved. He was not presented with information that might have been invaluable in making a decision in concert with his family about whether or not to participate in the trial. There were gross deficiencies in the conduct of the trial, gross deficiencies in documentation of the data, and gross deficiencies in reporting requirements." Clearly, instances like that, though few in number, are intolerable. They immediately provoke a major outcry for new regulations. However, it should be noted that it is the enforcement of existing regulations and compliance by investigators that are needed rather than the crafting of additional ones.
For the foreseeable future, we can expect a storm of proposals and counterproposals over the issue of human embryonic stem cells, and biomedical scientists will have to deal with complaints from various constituencies in society.

War on "Fraud in Science"
Biomedical research hit the headlines in the 1980s. It started with the congressional hearings on "Fraud in Biomedical Research" (11) conducted by the Subcommittee on Investigations and Oversight of the Committee on Science and Technology under the Chairmanship of Albert Gore, Jr. In those hearings and others that followed, individual scientists testified about their own unethical behavior. One witness described how he falsified results of experiments that he had not conducted. Another researcher, according to the Chairman, "became entangled in a network of fraud and plagiarism, and a possible cover-up." Subsequent congressional hearings were entitled "Scientific Fraud and Misconduct and the Federal Response" and "Fraud in NIH Grant Programs." The press was merciless. One had the impression that fraud was rampant and that a substantial fraction of scientific findings was fabricated or falsified. Books describing notorious cases of fraud soon appeared with titles like "Betrayers of the Truth" (12) and "False Prophets" (13). The March 19, 1989 issue of the Chicago Tribune had a long article entitled "Cheating in the Lab" with the subheading "Under pressure some researchers break the law." Various cases were described, with the author drawing conclusions even though the allegations had not yet been investigated. The cover of Time on August 26, 1991 had a mocking caricature of a miniaturized scientist under a microscope with the blaring headline "Science Under Siege" along with a remark about the "scandal plaguing America's researchers." In the principal article of that issue, the accused scientists were judged "guilty" by the author even though the trials had not yet been conducted.
The word "fraud" was predominant in all the early discussions because many of the cases dealt with falsifying data, making up data, and reporting results of experiments that had not been performed. During that period, Congress directed the Secretary of HHS to "require institutional applicants for NIH funds to review reports of fraud and report to the Secretary any investigation of suspected fraud which appears substantial." To implement this directive, numerous groups both within and outside of government began meeting frequently to discuss policies and potential regulations. Unfortunately, the lawyers took over. To them, the burden of establishing "fraud" in accordance with U. S. law seemed insuperable because of the requirement to prove both intent to deceive and injury or damage to persons relying on the scientific research. As a consequence, the word fraud previously used to describe many of the intolerable breaches of ethics by scientists disappeared from the dialog. In its place was "misconduct in science." Although there was little confusion in the scientific community over the meaning of fraud, it was immediately recognized that the phrase, misconduct in science, would be interpreted differently by different people.
At that time I happened to be President of ASBC and began flying across the country to attend innumerable meetings aimed at defining misconduct in science. In a session at NIH, the Director, James B. Wyngaarden, and I were the only scientists in a room with about 30 lawyers representing different agencies of government and many universities. The various attorneys, especially those representing government agencies, wanted a broad definition including words like "deception" and "misrepresentation." In contrast, scientists argued for a narrow definition. We stressed the uncertainties in research and the need for scientists to be free to use their judgment and intuition in selecting data based on their experience. To lawyers, selection of data constituted misrepresentation or deception.
Scientists maintained that the definition should be restricted to acts such as making up data, changing data, and stealing data or ideas without attribution, i.e. "fabrication," "falsification," and "plagiarism." Despite numerous arguments from the attorneys, the two of us prevailed at that meeting. Unfortunately, our victory was short-lived.
As they say at NIH, the issue was moved downtown, meaning to higher authority at the Public Health Service (PHS) or the Secretary of HHS. All of our objections were ignored. The rule proposed several years later by the PHS, entitled "Responsibilities of PHS Awardee and Applicant Institutions for Dealing with and Reporting Possible Misconduct in Science," included in the definition "fabrication, falsification, plagiarism, deception or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting or reporting research . . . " For more than a decade, that definition formed the basis of investigations and adjudication by different government agencies such as NSF, the Office of Scientific Integrity (OSI), and its successor, the Office of Research Integrity (ORI).
Many scientists found the language about "other practices that seriously deviate" vague and open-ended, inviting overexpansive interpretation. In arguing against that definition in an article (14) entitled "What Is Misconduct in Science?" I noted that "brilliant, creative, pioneering research often deviates from that commonly accepted within the scientific community." The absurdity of the government definition became evident with a ruling of the Office of Inspector General (OIG) of NSF in a case of a researcher "involved in 16 incidents of sexual misfeasance with female graduate and undergraduate students at the research site; on the way to the site; and in his home, car, and office." According to the OIG, "Many of these incidents were classifiable as sexual assaults." OIG concluded with " . . . these incidents were an integral part of this researcher's performance as a researcher and research mentor and represented a serious deviation from accepted practice. Therefore, they amounted to research misconduct under NSF regulations." Whether this preposterous and appalling application of the definition of "misconduct in science" was a significant factor in the ultimate removal of the "seriously deviate" clause is not clear. But after about 15 years of struggle, innumerable committees, and reports, that clause is gone. The accepted government-wide definition of misconduct in research now focuses on "fabrication," "falsification," and "plagiarism," known collectively as FFP. In recalling my repeated trips across the country over this single issue, I refer to FFP as the Frequent Flyer Program.
In the 25 years since fraud in science became front-page news, many instances of misconduct have been unveiled. However, it is important to note that the major instances of FFP described in the congressional hearings and in books (12, 13) were uncovered by the traditional methods of science, i.e. inability to replicate and extend findings, competition among researchers, and complaints by co-workers and whistleblowers. It has been said repeatedly that science is self-correcting and that, therefore, fraudulent or mistaken research is detected in the normal course of extending the work. That is true, but the correction of scientific reports applies to research that is ground-breaking and has a significant impact. Both fraud and errors in such research are generally discovered relatively soon because attempts to replicate or extend the findings are not successful. However, a great deal of research does not attract the attention of other researchers; fraud and mistakes in such research frequently go undetected for long periods of time. Hence, increasing awareness of these potential problems by offices like OSI and ORI has been valuable. Unfortunately, in a few celebrated cases involving investigations and adjudication of alleged "misconduct in research," there were serious abuses committed by officials in those agencies. Investigations were seriously flawed; convictions were not warranted by the evidence; leaks of confidential information occurred; and reputations of scientists were seriously damaged. Substantial periods of time elapsed before the convictions were overturned and the accused scientists were cleared through appropriate appellate machinery. Procedures providing "due process" have now been installed, and it is likely that they will preclude overzealous investigators from causing such incidents in the future.
Officials at ORI repeatedly emphasize their goal "to prevent misconduct in research." In my view, they will not succeed. Misconduct like that described recently by ORI (15) cannot be prevented. Similarly, the fraud in the research on stem cells by the group in South Korea could not have been prevented (16-19). In such cases individuals should be charged, investigations should be conducted, verdicts should be reached, punishment should be meted out, and there should be disclosure, thereby diminishing the chances of repetition by the same individual. In the past, disclosure was rare because of the fear of litigation, but institutions slowly have begun releasing information about the results of investigations. ORI is now providing material to individuals in universities responsible for teaching courses on "Responsible Conduct of Research." Such courses are now required for all graduate students supported by NIH Institutional Training Grants. Postdoctoral recipients of Ruth L. Kirschstein National Research Service Awards (NRSA) are also required to take such courses. Teaching such courses on "ethics" has become a significant new activity for academic departments, and ORI is expending considerable effort to broaden that mandate so as to require ethical training for all people involved in biomedical research. As one involved in such teaching activity for the past 5 years and as one generally opposed to "required courses," I am somewhat skeptical about their value. Many of the 120 or so students enrolled in our department course would not take the course if it were voluntary. Their cynicism is illustrated in the accompanying cartoon, kindly furnished by the artist (Fig. 3) (©2001 Ed Himelblau).
Any discussion of "responsible conduct of research" must, in my view, include all the participants in the research enterprise. There should be a focus on the scientists engaged in research, the universities and academic centers where research is conducted, the professional societies responsible for developing standards, the journals in which scientific findings are published, the industrialists who contribute funds and collaborate with academicians, and the government agencies fostering, sponsoring, and regulating research. Although their obligations vary, all of them have a role in achieving and maintaining an ethical climate of responsible conduct of research. Transgressions of scientists have justifiably been the subject of most courses on scientific integrity, but scant attention has been paid to the roles of others in contributing to the loss of the public's confidence in the ethical behavior of scientists. It is useful to describe some of the notorious cases that have attracted widespread attention in the popular press and have become the focus of government agencies. Describing the actions of Summerlin, Darsee, Soman, Spector, Bruening, Bates, Slutsky, and Poehlman, for example, has educational value. It is useful to show how their "misconduct in research" was discovered, the extent of recidivism, and the sanctions imposed by government. But in discussing government activity in the investigation of FFP, it is important also to describe those instances of over-reaching, unproven allegations, and unjustifiable "leaks" of confidential, derogatory information that have damaged the reputations of scientists; only after appeals providing "due process" were the accused finally judged not guilty of misconduct. The Popovic and Imanishi-Kari cases demonstrate the serious defects in the earlier, informal procedures for investigating allegations and the impediments arising from political interference.
It is indeed unfortunate that quasi-legal procedures need to be invoked, but the consequences stemming from allegations that prove to be unfounded demand this protection for the accused. These formal procedures also protect whistleblowers.
Almost daily over the past few weeks, headlines in the public press and scientific journals have described the apparently fraudulent research on embryonic stem cells in Hwang's laboratory in South Korea. Articles such as "Global Trend: More Science, More Fraud" (16), "Baffling, Infuriating, and Sad" (17), "Verdict: Hwang's Human Stem Cells Were All Fakes" (18), and "Cloning: South Korean Team's Remaining Human Stem Cell Claim Demolished" (19) dominate the news. This sad episode is particularly tragic because the claimed success in the South Korean laboratory was so sensational that many researchers, who hoped for such developments, accepted the results instead of being skeptical. Uncovering the fraud involved a complex investigation covering (a) the donors of eggs, (b) charges raised by whistleblowers, and (c) the role of a senior author in the United States. Doubtless, further scientific research would have unveiled its flaws. It is important to recognize that research on human embryonic stem cells is now a political issue in the United States. Therefore this fraudulent activity has attracted much more attention in the press and in daily discourse than equally reprehensible, unethical behavior in research such as the alleged discovery of new isotopes and organic semiconductors (20-22). These latter fabrications occurred in world-class laboratories in the United States.
Authorship practices occasionally are a source of irritation. There is a tendency among some to try to codify policies without recognizing the uncertainties and ambiguities in assigning credit. This issue is now the focus of attention because of questions raised about Gerald P. Schatten's role as the senior author on a paper from South Korea on human embryonic stem cells (23). Sharing data and citing previous work pose ongoing ethical problems. Plagiarism represents the most frequent abuse of ethical practices in research, and little can be done about it other than education and the imposition of sanctions upon those proven guilty. However, the issue of "conflict of interest" is looming as one of the major ethical problems facing the scientific community. Courses on Responsible Conduct of Research can treat this subject in terms of the research and reporting of findings by investigators, the activities and responsibilities of university administrators, and the roles of the directors of companies and government officials. Editors of scientific journals have been derelict in fostering good publication practices aimed at maintaining the integrity of science. When they are so interested in attracting papers that authors are not required to release all the relevant data, the scientific literature is compromised. For years Science, Nature, and Cell, along with other journals, published papers describing crystal structures of proteins with the editors knowing that the authors were not depositing the coordinates in the appropriate data banks. After considerable protest within the scientific community and through the intervention of NIH, this practice was rectified. Announcing that grants would not be awarded to those who did not deposit coordinates in a data bank constituted an effective remedy. Both authors and editors responded to this use of money to solve an ethical problem.
Despite this contention over making coordinates available at the time of publication, the editor of Science somewhat later authorized the publication of the sequence of the human genome even though the company did not release to the scientific community all the details. Much larger ethical burdens are now confronting editors of medical journals as described below.

Commercialization of Biomedical Research in Universities
Almost 40 years ago Garrett Hardin wrote a remarkable paper entitled "The Tragedy of the Commons" (24), which in the past decade has influenced scholars in diverse fields such as economics, political science, law, sociology, psychology, agricultural science, and environmentalism, as well as biology, Hardin's own field. In that seminal paper, Hardin refers to an even earlier commentary (25) by Lloyd in his Oxford Lectures of 1833. Lloyd, in considering "what happens to pasturelands left open to many herds of cattle," noted that a time would come when "the unmanaged commons would be ruined by overgrazing." Using that model, Hardin pointed out that when a resource is open to all, it becomes available to no one; i.e. the "Tragedy of Freedom in a Commons." This concept readily applies to the quandary that the great discoveries in biomedical research of the 1960s and 1970s did not benefit the public. The exciting results of research in universities funded by NIH and NSF were described in the scientific literature, but they were not exploited by companies because of the lack of exclusive rights to manufacture drugs. "What was available to all was available to no one." Recognition of this dilemma led to the Bayh-Dole Act of 1980, permitting universities to obtain patents on the results of federally funded research. Passage of this act has had almost as profound an effect on the culture of biomedical research in universities as the start of federal funding in the 1940s.
The triumphs of the commercialization of this research in academia are legendary. A multibillion dollar biotech industry was spawned by academic research sponsored by the federal government and turned loose under the auspices of the Bayh-Dole Act. New drugs of enormous benefit to suffering people became available. However, as pointed out in a recent article in Fortune (26), there were "unintended consequences." In accepting this new freedom to patent discoveries made by researchers funded by government, university administrators decided to share any potential income from the patents with the inventors. This has enriched many researchers, leading to their active participation in this cultural change in academia from publishing to patenting. Accompanying the modified attitudes were major institutional changes. Licensing discoveries became a major preoccupation for institutions, requiring the formation of Technology Transfer Offices that have now morphed into new Industry Alliance Offices. Along with considerations of technology transfer and intellectual property came Material Transfer Agreements. Whereas 20 years ago a simple request from a researcher at one institution to a scientist at another university was handled promptly by the furnishing of the strain, clone, or plasmid, now the response not infrequently is different: "Have your Technology Transfer Office contact our office and they will work out an appropriate Material Transfer Agreement." Derek Bok, in his book "Universities in the Marketplace" (27), wrote, "Unfortunately, in their zeal to bring more revenue to their universities, technology transfer officers have occasionally acted, especially in situations involving fundamental early-stage discoveries, in ways that threaten to slow progress rather than promote it."
Many investigators are now finding that Technology Transfer Offices are impeding the exchanges of scientific materials and knowledge that have been the cornerstone of biomedical research in academic institutions.
For the past half-dozen years innumerable articles have deplored the conflicts of interest that have arisen in medical schools involving investigators and their collaborations with industry. Some of the stories that surfaced over financial ties and their influence on biomedical research led to editorials like that in the New England Journal of Medicine (NEJM) on May 18, 2000 with the title "Is Academic Medicine for Sale?" Both NEJM and the Journal of the American Medical Association (JAMA) have repeatedly stressed the need for disclosure about sponsorship and financial interests, but scandals persist. This is not too surprising because the journals themselves have neglected to enforce the disclosure policies that they espouse for the medical schools involved in clinical trials with drug companies. Allegations about abuses by investigators in academic medical centers during the conduct of clinical trials are now front-page news in the popular press. The problems have risen to such proportions that a recent article (28) by a distinguished group of authors begins with "The current influence of market incentives in the United States is posing extraordinary challenges to the principles of medical professionalism. Physicians' commitment to altruism, putting the interests of the patients first, scientific integrity, and an absence of bias in medical decision making now regularly come up against financial conflicts of interest." In formulating a policy proposal for academic medical centers, the authors call for more stringent regulation. One of the authors, Jerome P. Kassirer, a former editor of NEJM, had previously written an article (29) with the amusing title "Financial Indigestion" in which he described turning down a meal hosted by a company at an institution where he was a Visiting Professor.
After the company representatives had left the room, Kassirer asked a resident at that lunch to read a paragraph written by Rothman (30) which stated "Medical schools should adopt formal rules that prohibit all gifts from drug companies to students, whether books, stethoscopes, or meals. Medical training should not include acquiring a sense of entitlement to the largesse of drug companies. Finally teaching hospitals should enforce these same restrictions, proscribing drug-company sponsorships of lunches, conferences, and travel for house staff, and should make it clear that accepting birthday presents, Christmas gifts, or food and drink off the premises from drug-company representatives violates the ethical norms of the profession." Just as medical schools have not solved the problem of conflict of interest, so university campuses are facing similar problems. By fostering the formation of industrial connections, patents, and complex licensing procedures, universities and researchers increasingly are encountering conflicts of interest. As pointed out by Bok (27) in commenting about the tragic death of Jesse Gelsinger in a gene therapy trial at the University of Pennsylvania, "As it happened, the director of the institute directing the research was the founder and major stockholder of the company that funded the research. The university, too, was a stockholder, having been given an equity share by the company. Although the director did not participate personally in the trials, both he and the university stood to gain financially if the therapy being tested proved to be successful." There have been numerous charges in the popular press about conflicts of interest. Doubtless some of them may prove not to be actual conflicts of interest; but the perception is real and the reports clearly undermine the public's view of the integrity of science.

Patent and Prosper
Prior to the 1970s, patenting was alien to most scientists involved in biological research. This attitude changed abruptly as a result of two independent discoveries. The first patent on a living organism was awarded to Chakrabarty at the General Electric Company, who "invented" a bacterium capable of consuming oil slicks. It is of interest that the original application was denied by the United States Patent and Trademark Office (PTO); living organisms had been considered non-patentable. The patent was granted only after a ruling by the Supreme Court to the effect that the particular organism did not exist naturally and was indeed an invention. Somewhat earlier, Stanford University had filed an application to patent the recombinant DNA technique developed by Cohen and Boyer. Much discussion among the inventors, others in the scientific community, officials at NIH, and members of Congress ensued before patents on the gene-splicing technique were granted, and licensing agreements were signed in the 1980s. This particular patent and the licensing agreements have been extremely profitable for Stanford University, the University of California at San Francisco, and the two inventors of the elegant technique. In a very important respect the non-exclusive licensing agreements put in place were contrary to what was anticipated by the Bayh-Dole Act. It had been assumed that exclusivity would be a necessary inducement for the commercial development of the results of academic research, and many of the subsequent agreements did involve exclusive licenses. In the 25 years following the awarding of the gene-splicing patent, technology transfer officers in concert with the heads of universities have stimulated activity on campuses aimed at converting the research of their faculties into financial benefits. A flurry of patents has resulted, but it should be emphasized that many have yielded virtually no income. 
Nonetheless, that activity persists, and the culture in basic science departments involved in biomedical research has changed.
In fiscal year 2004, there were 425 new "start-up" companies fueled by discoveries of professors on university campuses and a record number of patents and licenses. Legal fees incurred in the commercialization of this research in academic institutions amounted to more than $189 million. Barton (31), in an article entitled "Reforming the Patent System," pointed out "The number of intellectual property lawyers in the United States is growing faster than the amount of research." Among his suggested reforms was tightening the standards of "novelty" and "non-obviousness" used in judging patentability. Also Barton, like Eisenberg (32) who has written extensively on the subject of patents and their use in actually barring or impeding research, criticized the patenting of research tools dealing with fundamental research processes. The patenting of expressed sequence tags (ESTs) is one such example. It is difficult to understand how the PTO approved the widespread patenting of ESTs under the "utility" standard when not only was the gene downstream unknown but also the encoded protein and its role were not known. Certainly no drug or diagnostic treatment could be visualized from the EST. Recently there has been a court decision contesting the patentability of ESTs, but appeals are likely. Other research tools like the "oncomouse," the world's first animal patented by Harvard Medical School and licensed exclusively to DuPont (33), have been viewed as impeding downstream product development.
In raising the question of whether patents can deter innovation, Heller and Eisenberg (34) refer to the "Tragedy of the Anticommons" when "a proliferation of intellectual property rights upstream may be stifling life-saving innovations downstream in the course of research and product development." A subsequent article (35) by Rai and Eisenberg entitled "Bayh-Dole Reform and the Progress of Biomedicine" raises the very important question of whether "allowing universities to patent the results of government-sponsored research sometimes works against the public interest." With passage of the Bayh-Dole Act 25 years ago, basic biomedical discoveries are now reaching the public in the form of effective drugs and treatments at a greatly increased pace. However, we will have to determine whether the pendulum has swung too far and whether basic research and the openness of universities will be curtailed in the long run. The President of Amherst College, Anthony W. Marx, in his review of Bok's book (27), indicated his concerns over the purpose of universities (36). In that review entitled "Academia for Sale (Standards Included)," Marx wrote "Universities cannot effectively teach ethics if they are themselves unethical; nor can they hope to teach that there is more to life than making money if they are unconstrained in their search for revenue." Although major issues such as reform of the Bayh-Dole Act and policies over patenting remain unresolved, it seems clear that the culture of "patent and prosper" is now entrenched in academia. It will be interesting to witness whether these relatively new practices jeopardize the openness of universities and how they can be accommodated with the much older, traditional roles in creating and dispensing knowledge.