

A. Incentives are a negotiated offer external to the agent receiving them – this means the federal government can't offer incentives to itself, such as to the military.

Ruth Grant, professor of political science at Duke, 2002, "The Ethics of Incentives: Historical Origins and Contemporary Understandings", Economics and Philosophy, Proquest

Similarly, 'incentive' is sometimes used as if it were synonymous with 'motivation' generally speaking. But there are several important sorts of motivation that are not suggested by the term. When we speak in this way, we implicitly deny the phenomena of habitual behavior, or action motivated by a sense of responsibility or of the reasonableness of a course of action (with reasonableness here understood as something other than individual utility maximization), or the way in which a role model or ideal can serve as motivator. Action which is initiated by the individual or understood as internally motivated is not really comprehended in the concept of motivation as incentive. Incentives are external prompts to which the individual responds.
The use of 'incentives' to speak of market forces is also problematic, though it is easy to see the logic of this development within the language of economics. If one company lowers the price of its product, we might readily say that other companies now have an incentive to lower theirs. But we would not say that the first company offered all other companies an incentive to lower their prices.55 Market forces are not conscious and intentional, and their rationale is intrinsic to the economic process itself. We might just as well say in this situation that the first company's lower price is a good reason for other companies to lower theirs given that they need to remain competitive. The term 'incentive' says nothing that 'reason' cannot say as well in this case. A similar logic applies to speaking of loan conditions as incentives. The International Monetary Fund may make a loan to a nation only on condition that it alter its inflationary policies. If the reason for the condition is intrinsic to the IMF's own financial aims, 'incentive' may be a misnomer. The situation is like that of requiring a certain training as a condition for the practice of medicine; we would be unlikely to refer to this as an 'incentive' to go to medical school for people who wish to become doctors.56 When the IMF is criticized for using financial incentives unethically to control the internal policies of borrowing nations, it is because the critics suspect that its real purposes are political rather than strictly limited to the legitimate concern to secure the financial health of the Fund.
The distinction between market forces and incentives can be illustrated further by considering the difference between wages as compensation and incentives as bonuses in employment. Compensation means 'rendering equal', a 'recompense or equivalent', 'payment for value received or service rendered', or something which 'makes up for a loss' – as in the term 'unemployment compensation'. Compensation equalizes or redresses a balance, and so, to speak of 'fair compensation' is entirely sensible. But to speak of a 'fair incentive' is not. An incentive is a bonus, which is defined as something more than usually expected, that is, something that exceeds normal compensation. It is an amount intentionally added to the amount that would be set by the automatic and unintentional forces of the market. An incentive is also a motive or incitement to action, and so an economic incentive offered to an employee is a bonus designed to motivate the employee to produce beyond the usual expectation. It should be obvious then, that compensation and incentives are by no means identical. The per diem received for jury service, for example, is a clear case of compensation which is not an incentive in any sense.
It is not difficult to see how it might have happened that the boundaries were blurred between the specific conception of incentives and conceptions of the automatic price and wage-setting forces of the market. Both can be subsumed under very general notions of the factors that influence our choices or motivate action, and 'incentives' carries this general meaning as well. Nonetheless, the blurring of that boundary creates a great deal of confusion. Incentives, in fact, are understood better in contradistinction to market forces than as identical to them. It is only by maintaining a clear view of their distinctive character that the ethical and political dimensions of their use are brought to light. Moreover, conceptual clarity and historical understanding go hand in hand in this case. It should no longer be surprising to find that the term 'incentives' is not used by Adam Smith in first describing the operation of the market, but appears instead at a time when the market seemed inadequate in certain respects to the demands presented by changing economic circumstances. Other eighteenth and nineteenth-century ideas, often taken as simple precursors of contemporary analyses of incentives, can now be seen in their distinctive character as well. For example, Hume and Madison offer an analysis of institutional design which differs significantly from 'institutional incentives', though the two are often confused. These thinkers were concerned with preventing abuses of power. They sought to tie interest to duty through institutional mechanisms to thwart destructive, self-serving passions and to secure the public good. Contemporary institutional analyses, by contrast, proceed without the vocabulary of duty or public good and without the exclusively preventive aim. Institutional incentives are viewed as a means of harnessing individual interests in pursuit of positive goals.57 Similarly, early utilitarian discussions, Bentham's in particular, differ markedly from twentieth century discussions of incentives despite what might appear to be a shared interest in problems of social control. Again, Bentham is interested entirely in prevention of abuses or infractions of the rules. The rationale for his panopticon is based on the observation that prevention of infractions depends upon a combination of the severity of punishment and the likelihood of detection.58 If the latter could be increased to one hundred per cent, through constant supervision and inspection, punishment would become virtually unnecessary. This is a logic that has nothing whatever to do with the logic of incentives as a means of motivating positive choices or of encouraging adaptive behavior.
We are now in a position to identify a core understanding or a distinctive meaning of the concept of incentives; what we might call incentives 'strictly speaking'. Incentives are employed in a particular form of negotiation. An offer is made which is an extrinsic benefit or a bonus, neither the natural or automatic consequence of an action nor a deserved reward or compensation. The offer is usually made in the context of an authority relationship - for example, adult/child, employer/employee, government/citizen or government/organization. The offer is a discrete prompt expected to elicit a particular response. Finally and most importantly, the offer is intentionally designed to alter the status quo by motivating a person to choose differently than he or she would in its absence. If the desired action would result naturally or automatically, no incentive would be necessary. An incentive is the added element without which the desired action would not occur. For this reason, it makes sense to speak of 'institutional incentives' when referring to arrangements designed to encourage certain sorts of responses. 'Perverse incentives' is also an expression that implies that incentives are meant to direct people's behavior in particular ways. Central to the core meaning of incentives is that they are an instrument of government in the most general sense. The emergence of the term historically within discourses of social control is illustrative of this point.


B. Vote neg –

1. Limits – anything could increase a motivation for action – improving the economy increases alternative energy use since people have more money to buy it. Exploding the topic to include such indirect causal effects makes neg predictability impossible.

2. Extra-T is bad – Ross Smith disagrees but it makes arguing generics on a huge topic impossible – we have to be prepared with specific impact turns for our PICs or else our net benefits don’t apply.


A. Incentives require quid pro quos – they can be turned down by the company.

William M. Sarno, Director of Business Development at Visions Awards and Awardcraft, 2005 (or newer – date derived from earlier publication), currentissue/article.asp?art=271117&issue=216

HRM. So what are the major differences between recognition, awards, rewards, and incentives and what makes a good mix?
WS. Recognition is how an organization commemorates an accomplishment. Recognition can be non-material, but most of the time there is a material symbol that the recipient and peers see to reinforce the accomplishment. Awards are the tangible things that are given to participants who attain goals. They “should” be accompanied by recognition, but are rarely so. Rewards are the “carrots” that organizations use to influence clients. Incentives are defined as an offering made by a company that creates a short-term opportunity for me to “get” something by “doing” something.

Incentives must be quid pro quo – they are negotiable and can be rolled back.

Sonja Parker is the HR administrator for Integrated Design, Inc. She also continues to provide leadership to the Ann Arbor High Tech Human Resource Association (AAHTHRA), 9/1/1999, “Incentives and Credits Both Build Your Bottom Line”, http://www. articledetail/articleid/15077/ default.asp

Incentives are negotiable and are generally offered at the discretion of state or local economic developers. On the other hand, tax credits are statutory in nature and are available to all companies as long as the prescribed criteria is met.
Many incentive packages include up-front tax credits as well as measures to reduce companies’ initial cash outlay for expansions and relocations.
Although tax credits are not usually negotiable, several states allow credits to be claimed retroactively. Many also have favorable carry-forward provisions.
Common incentives
The most common types of incentives are tax increment financing, property tax abatements and enterprise zones.
Through a tax increment financing subsidy, municipalities provide capital to growing businesses to help with the costs of acquisitions, renovation, development or clearance of a site and other organizational costs.
Tax abatements and credits are provided to companies moving into enterprise zones designated by city and state agencies. Empowerment zones are federally designated as an enhancement or alternative to incentives offered in enterprise zones.
Other types of incentives include: fee waivers, special districts, utility rate reduction, infrastructure funding, job training funds, low-interest loans and rebate agreements.
An example of a tax incentive package is South Carolina’s five-year property tax abatement plan. This incentive can represent a 20 to 50 percent savings on a county’s total property tax rate.
Popular tax credits
The most common types of credits offered by states are: investment tax credits, research and development credits, job tax credits and enterprise zone credits. Investment tax credits are offered by approximately 35 states.
Twenty-three states offer R&D credits. Other credits, such as job tax credits, are based on increases in payroll or employment. Enterprise zone credits promote activity within a designated area.
Other types of tax credits include: contribution credits, child care credits, training credits and environmental credits.
An example of a tax credit initiative is Georgia’s Job Tax Credit program, which offers credits ranging from $500 to $2,500 per job created, depending on the location of the facility.
Clawbacks and recaptures
Companies that take advantage of incentives and credits, however, should realize that they have to uphold their end of the bargain as well.
States and communities can use “clawback” and “recapture” policies to reduce or cancel benefits or require repayment if, for example, contractually-agreed-upon jobs don’t materialize.


B. Vote neg

1. Limits – including arguments outside QPQ explodes the research burden – it makes it impossible to debate the specific warrants of cases – anything from deregulation to tariff repeal becomes topical.

2. Ground – it destroys generic positions like spending and politics links related to incentives – we’re forced to rely on consult and state bad as a crutch.

3. Context – environmental policies require business intervention – prefer our interpretation because it reflects a corporate understanding of incentives.


Text: The United States federal government should cooperate with other states to negotiate and create a treaty banning the development of nanotechnology for military purposes. This treaty should be open to all states. The United States federal government should substantially increase incentives for commercial development of nanotechnology.

Contention One – Competition – net benefits

Contention Two – Solvency – Other countries want a ban on military nanotech.

Jürgen Altmann, (Research fellow at Institute for Experimental Physics III at U Dortmund, Germany), January 13-15, 2005, "Limiting Military Uses of Nanotechnology and Converging Technologies," Conference for Nanotechnology in Science. http://cgi-host.uni-marburg.de/~nano-mr/downloads/s3/altmann_paper_final.pdf

In order to provide a framework for assessing militarily relevant technologies, the concept of preventive arms control has been developed (see e.g. Neuneck/Mölling 2001, Altmann i.p.: Ch. 5). The purpose of preventive arms control is to limit technologies before they are deployed with the armed forces, often starting at the development or testing phases. The concept proceeds from the viewpoint of international security, looking at the international system – and is thus different from an outlook that tries to provide national security through military strength. However, if national security is seen in the wider context, this may well lead to acceptance of preventive limits of the own technological capabilities if those of potential opponents will be reliably limited likewise. Earlier examples include the Anti-Ballistic Missile Treaty (1972-2002) or the Protocol banning Laser Blinding Weapons (1995). In today’s situation, the concern of preventing access by terrorists to certain technologies should provide additional motives for preventive arms control. Of course, terrorists cannot be partners of limitation agreements, but their own capabilities of making use of new technologies are quite limited; much more likely is access to technology that would have been developed and produced by states.
Beside agreed limitation (arms control proper) – earlier in many cases bilateral, then increasingly multilateral –, there are also unilateral steps that can be taken. If the situation is very asymmetric, a technology leader can renounce a certain principal possibility without danger. As long as comprehensive agreements will not be in effect, export controls by the most advanced nations – a form of multilateral unilateralism – will be useful to limit the spread of new military technology to countries that may not be inclined to enter agreements.
However, export controls are discriminatory and create motives for circumvention – universal agreed limitation is clearly preferable, even if a few outsiders may remain. Preventive arms control consists of four steps: (1) prospective scientific-technical analysis of the technology in question; (2) prospective analysis of the military-operational aspects; (3) assessment of both under the criteria of preventive arms control. If the latter leads to the result that action is recommended, (4) possible limits and verification methods need to be devised. The criteria can be sorted in three groups:
I. Arms control, disarmament and international law:
– Prevent dangers to existing or intended arms-control and disarmament treaties,
– Observe existing norms of humanitarian law,
– No utility for weapons of mass destruction.
II. Stability:
– Prevent destabilisation of the military situation,
– Prevent technological arms race,
– Prevent horizontal or vertical proliferation/diffusion of military-relevant technologies, substances or knowledge.
III. Protect humans, environment, and society:
– Prevent dangers to humans,
– Prevent dangers to environment and sustainable development,
– Prevent dangers to the development of societal and political systems,
– Prevent dangers to the societal infrastructure.
The readiness to enter preventive arms control and the extent of limitations will be influenced by the military posture of the respective state. The tasks given to the armed forces may differ widely, the spectrum may range from large-scale armed conflict everywhere on the globe via defence of the own territory, crisis intervention and peace enforcement to defence against terrorist attacks. With its priority on international security and agreed limitations, preventive arms control is inclined towards reducing military threats and offensive potential. It makes sense on all levels of the military-posture spectrum. Without it, the more offensive postures could become more dangerous over time.

Contention Three – Net benefit


Dr. Steven Metz, Research professor (SSI) of national security at the US Army War College and an analyst at the Strategic Studies Institute. Fall, 2K, Parameters, Vol. 30, Issue 3, ebsco

Simple cyborgs like this may be only the beginning of an even more fundamental revolution or, more precisely, the marriage of several ongoing technological revolutions. Lonnie D. Henley, for instance, argues that a melding of developments in molecular biology, nanotechnology, and information technology will stoke a second-generation revolution in military affairs.[23] Nanotechnology is a manufacturing process that builds at the atomic level.[24] It is in very early stages, but holds the real possibility of machines that are extremely small, perhaps even microscopic. Eric Drexler, the most fervent advocate of nanotechnology, predicts that it will unleash a transformation of society as self-replicating nanorobots manufacture any materials permitted by the laws of nature and thus help cure illness, eliminate poverty, and end pollution.[25] As Henley points out, combining nanotechnology with molecular biology and advances in information technology could, conceivably, lead to things like biological warfare weapons that are selective in targets and are triggered only by specific signals or circumstances. It could also lead to radically decentralized sensor nets, perhaps composed of millions of microscopic airborne sensors or, at least, a mesh of very small robots as envisioned by Libicki. And, Henley contends, it might eventually be possible to incorporate living neuron networks into silicone-based computers, thus greatly augmenting their "intelligence." In such a world, the Joint Vision 2010 future, or even that of advanced programs like the Army After Next project, will fade into obsolescence.
Beyond technological obstacles, the potential for effective battlefield robots raises a whole series of strategic, operational, and ethical issues, particularly when or if robots change from being lifters to killers. The idea of a killing system without direct human control is frightening. Because of this, developing the "rules of engagement" for robotic warfare is likely to be extraordinarily contentious. How much autonomy should robots have to engage targets? As a robot discovers a target and makes the "decision" to engage it, what should the role of humans be? Would prior programming be adequate, or would a human have to give the killer robot final approval to shoot? How would the deployment of battlefield robots affect the ability of the US military to operate in coalitions with allies who do not have them (given that a roboticized force is likely to take much lower casualties than a non-roboticized one)? Should the United States attempt to control the proliferation of military robotic technology? Is that even feasible since most of the evolution of robotic technology, like information technology in general, will take place in the private sector? Should a fully roboticized force be the ultimate objective?


Thomas J. Christensen, Associate Professor of Political Science and a member of the Security Studies Program at MIT, 2001, International Security 25.4, 5-40, international_security/v025/25.4christensen.html

Such conclusions should not be cause for excessive optimism, however. Chinese strategists seem to recognize the reality of China's persistent relative [End Page 8] weakness, but they do not therefore throw up their hands in defeat, considering great power conflict unthinkable. No matter how much Beijing might wish it could develop capabilities that could match or defeat American military power, China's strategy for the next twenty to thirty years appears more realistic: to develop the capabilities to dominate most regional actors, to become a regional peer competitor or near peer competitor of the other great powers in the region (including Russia, Japan, and perhaps a future unified Korea), and to develop politically useful capabilities to punish American forces if they were to intervene in a conflict of great interest to China. As leading military officers argue in one recent internally circulated Chinese military education book (which is analyzed in detail below): "Our weaponry has improved greatly in comparison to the past, but in comparison to the militaries of the advanced countries [fada guojia], there will still be a large gap not only now but long into the future. Therefore we not only must accelerate our development of advanced weapons, thus shrinking the gap to the fullest extent possible, but also [we must] use our current weapons to defeat enemies. . . . [We must] explore the art of the inferior defeating the superior under high-tech conditions." 7 In the near term, China seems devoted to developing new coercive options to exert more control over Taiwan's diplomatic policies, and to threaten or carry out punishment of any third parties that might intervene militarily on Taiwan's behalf, including both the United States and Japan. 8 [End Page 9]
If Beijing elites become convinced that relatively limited military capabilities and coercive tactics might allow for the politically effective use of force against Taiwan and, if necessary, American forces, then war between the United States and China becomes a very real possibility. This is true regardless of whether China's military force is generally backward compared with those of the United States and its allies, whether China still would be defeated in a toe-to-toe full-scale war with the United States, or whether the overall balance of power across the Taiwan Strait has changed enough to allow a successful amphibious invasion by the People's Liberation Army (PLA).

Straits Times

Case 1NC Frontline

No nano-heg – it develops too quickly – and miscalculation is inevitable.

Mark Avrum Gubrud. Superconductivity Researcher at U of MD. 1997. "Nanotechnology and International Security." Foresight 5th Conference Paper. mgubrud/nanosec1.html

If economic upheaval and the creation of a new social order poses a challenge to democratic politics even in the most advanced nations, the potential for interstate conflict still presents the greatest danger of the nanotechnic revolution. The two arenas cannot be isolated, since domestic chaos can lead to the rise of intemperate leadership, and global chaos can lead to conflicts with other nations.
Interstate conflicts, confrontations and rivalries have a life of their own. Military confrontation can be dynamically stable or unstable simply with regard to possible military moves: rearmament, mobilization, readiness, forward deployment, preemptive seizure of territory, or full-scale attack. Military threats interact with political processes in cycles that can be escalatory or deescalatory. A country that is completely at the mercy of a stronger power may seek accommodation when threatened, yet the escalation of military threat generally leads to more hostile attitudes when the two sides are more or less equally matched, even when the cost of a war would be unacceptably high to both.
The history of the Cold War provides ample evidence of both sides of this paradox. It also shows that, in spite of intense rivalry, hostility, and covert warfare, nuclear confronters will be deterred from open combat and will eventually seek detente when both are completely at the mercy of a stronger power — nuclear weapons. But finally, the many crises of the Cold War, particularly the 1962 crisis, and the long human history of disastrous wars blundered into by combinations of accident, misunderstanding, miscalculation and hubris, provides ample warning that holocaust is possible. On the assumption of stochasticity, given enough time and circumstances, global holocaust is a likely eventuality, as long as nations confront each other with arms and with threats.
Even in the total absence of political conflict or ill-will, merely that fact that sovereign states maintain separate armed forces under separate command, within reach of each other and able to attack each other, contains the germ of a possible confrontation, arms race, and war. With the advent of molecular manufacturing, nations that possess the technology will be able to greatly increase the size and quality of their arsenals in a short period of time. Unless they are controlled from doing so under some system of international agreement, it is very likely that they will begin to build up more credible armed forces, perhaps slowly and cautiously at first — but others will note the development and respond with similar increases. Soon the nanotechnic powers can be doubling and redoubling the size of the threats they pose to non-nanotechnic neighbors while imposing very low costs on themselves. Given the very large potential for expansion of arsenals by the use of a self-replicating manufacturing base, nanotechnic powers which do not engage in a very dramatic buildup will be artificially restraining themselves. It seems very unlikely that a large (orders of magnitude) gap between potential (at low-cost) and actual military production will be sustained for long.
No doubt the potential for disaster will be well foreseen, but so was the potential for nuclear disaster, and yet a combination of distrust, arrogance, and rapid technological progress made it impossible to slow the nuclear arms race before it reached the level of thousands of missiles minutes from their targets, the geopolitical equivalent of a high-noon standoff, a "balance of terror" which exacted a vast and unaccounted cost in collective neurosis, and which remains in effect to this day, in spite of the much ballyhooed Cold War "victory." The failure of the Security Council "allies" to effect radical nuclear disarmament at a time when no conflicts of interest serious enough to engender a war, hot or cold, exist, is not encouraging with respect to the prospects for avoiding a nanotechnic arms race.
A race to develop early military applications of molecular manufacturing could yield sudden breakthroughs, leading to the abrupt emergence of new and unfamiliar threats, and provoking political and military reactions which further reinforce a cycle of competition and confrontation. A very rapid pace of technological change destabilizes the political-military balance. Revolutionary new types of weaponry, fear of what a competitor may be doing in secret, tense nerves and worst-case analyses, the complexity of technical issues, the unfamiliarity of new circumstances and resistance to the demands they make, may overwhelm the cumbersome processes of diplomacy and arms control, or even of intelligence gathering and assessment, formulation of measured responses and establishment of political consensus behind them. A runaway military technological revolution must at some point escape the grasp of even wise decisionmakers.

Case 1NC Frontline

Military control of nanotech is bad – creates self-replication that causes grey goo.

Dr. Sean Howard, Adjunct Professor of Political Science at the University College of Cape Breton (UCCB), Canada, July-August, 2002, Disarmament Diplomacy, Issue No. 65, 65op1.htm

Processes of self-replication, self-repair and self-assembly are an important goal of mainstream nanotechnological research. Either accidentally or by design, precisely such processes could act to rapidly and drastically alter environments, structures and living beings from within. In extremis, such alteration could develop into a 'doomsday scenario', the nanotechnological equivalent of a nuclear chain-reaction - an uncontrollable, exponential, self-replicating proliferation of 'nanodevices' chewing up the atmosphere, poisoning the oceans, etc. While accidental mass-destruction, even global destruction, is generally regarded as unlikely - equivalent to fears that a nuclear explosion could ignite the atmosphere, a prospect seriously investigated during the Manhattan Project - a deliberately malicious programming of nanosystems, with devastating results, seems hard to rule out. As Ray Kurzweil points out, if the potential for atomic self-replication is a pipedream, so is nanotechnology, but if the potential is real, so is the risk:
"Without self-replication, nanotechnology is neither practical nor economically feasible. And therein lies the rub. What happens if a little software problem (inadvertent or otherwise) fails to halt the self-replication? We may have more nanobots than we want. They could eat up everything in sight. ... I believe that it will be possible to engineer self-replicating nanobots in such a way that an inadvertent, undesired population explosion would be unlikely. ... But the bigger danger is the intentional hostile use of nanotechnology. Once the basic technology is available, it would not be difficult to adapt it as an instrument of war or terrorism. ... Nuclear weapons, for all their destructive potential, are at least relatively local in their effects. The self-replicating nature of nanotechnology makes it a far greater danger."15
Assuming replication will prove feasible, K. Eric Drexler also assumes the worst is possible: "Replicators can be more potent than nuclear weapons: to devastate Earth with bombs would require masses of exotic hardware and rare isotopes, but to destroy life with replicators would require only a single speck made of ordinary elements. Replicators give nuclear war some company as a potential cause of extinction, giving a broader context to extinction as a moral concern."16
There are, of course, multiple levels of concern below that of a final apocalypse. Use and abuse are, unavoidably, the twins born of controlled replication. Nanosystems proliferating in a precisely controlled and pre-programmed manner to destroy cancerous cells, or deliver medicines, or repair contaminated environments, can also be 'set' to destroy, poison and pollute.17 The chain reactions involved in thermonuclear explosions are precise and controlled, as much or more than the dosages in chemotherapy treatment. In the science of atomic engineering, the very technologies deployed to allay concerns of apocalyptic malfunction loom as the likely source of functional mass destruction.
Notwithstanding their vividly expressed concerns, both Kurzweil and Drexler portray the risk of mass- or global-destruction as a containable, preventable problem - provided nanotechnology is pursued as vigorously as possible in order to understand the real risks. In April 2000, however, an article in Wired magazine by Bill Joy, a leading computer scientist and co-founder of Sun Microsystems, painted a far bleaker picture:
"Accustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before. ... What was different in the 20th Century? Certainly, the technologies underlying the weapons of mass destruction - nuclear, biological, and chemical - were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare...raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities. The 21st century technologies...are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. ... Thus we have the possibility not just of weapons of mass destruction but of knowledge-enabled mass destruction (KMD), this destructiveness hugely amplified by the power of self-replication."18

Military development causes pre-emption – REALISM FAILS because nano development is too fast.

Eric Drexler. PhD in Molecular Nanotech, SB and SM from MIT, Founder, Chairman Emeritus, and Chairman of the Board of Advisors of the Foresight Institute, widely credited as the inventor of nanotechnology and known as "Mr. Nanotechnology." 1986. "Engines of Creation: The Coming Era of Nanotechnology." html

If attempts to suppress research in AI and nanotechnology seem futile and dangerous, what of the opposite course - an all-out, unilateral effort? But this too presents problems. We in the democracies probably cannot produce a major strategic breakthrough in perfect secrecy. Too many people would be involved for too many years. Since the Soviet leadership would learn of our efforts, their reaction becomes an obvious concern, and they would surely view a great breakthrough on our part as a great threat. If nanotechnology were being developed as part of a secret military program, their intelligence analysts would fear the development of a subtle but decisive weapon, perhaps based on programmable "germs." Depending on the circumstances, our opponents might choose to attack while they still could. It is important that the democracies keep the lead in these technologies, but we will be safest if we can somehow combine this strength with clearly nonthreatening policies.
Balance of Power
If we follow any of the strategies above we will inevitably stir strong conflict. Attempts to suppress nanotechnology and AI will pit the would-be suppressors against the vital interests of researchers, corporations, military establishments, and medical patients. Attempts to gain unilateral advantage through these technologies will pit the cooperating democracies against the vital interests of our opponents. All strategies will stir conflict, but need all strategies split Western societies or the world so badly?
In search of a middle path, we might seek a balance of power based on a balance of technology. This would seemingly extend a situation that has kept a measure of peace for four decades. But the key word here is "seemingly": the coming breakthroughs will be too abrupt and destabilizing for the old balance to continue. In the past, a country could suffer a technological lag of several years and yet maintain a rough military balance. With swift replicators or advanced AI, though, a lag of a single day could be fatal. A stable balance seems too much to hope for.

China’s winning the nano-race – that will cause extinction no matter what.

Lev Navrozov, (Won Albert Einstein Prize for Outstanding Intellectual Achievements; Journalist for Newsmax), Feb. 27, 2004. Newsmax. "Molecular Nano Weapons: Research in China and Talk in the West." articles/2004/2/27/101732.shtml

What is undeniable is that the Sino-Western molecular nano assembler race has been totally one-sided. The West has been doing nothing to avert its nano-annihilation, while the voice of even Eric Drexler, founder of nanotechnology, was a voice in the wilderness, nay, a voice, drowned out by the apocalypsists, utopians, blind businessmen, and denialists.
While I was writing this column, Putin’s Russia was testing its new global hypersonic missiles, able to penetrate the U.S. “missile shield,” which President Reagan suggested in 1984, and which is still under construction.
The new Russian missiles do not circumvent Mutual Assured Destruction since the United States will still be capable of striking Russia in retaliation. Something else is noteworthy. The new Soviet missiles were a complete surprise to many in the West. Some Western experts had thought them impossible. But here they are.
Similarly, molecular nano weapons may come as a total surprise and contrary to all denials, but with fatal consequences.

The most likely scenario for extinction is a nano-arms race.

John Robert Marlow, (Nanotech Columnist, nominated for the 2004 Foresight Institute Prize in Communication), February, 2004, Interview by Rocky Rawstern, editor of Nanotechnology Now, John-Marlow-Superswarm-interview-Feb04.htm

NN: In your opinion, what is the most likely scenario that leads to the survival of humanity once all the nano-pieces are in place and artificial replicators are no longer techno-fiction?
JRM: That's an interesting question because "the most likely scenario" and the "scenario that leads to the survival of humanity" may well be different things. As I say in the author's afterword to Nano, the most likely scenario is probably a nanoarms race which leads to extinction. Consider: from the moment it was realized that nuclear weapons were possible, their creation became inevitable because no major player could risk entering the future without them. It's a situation that drives all to seek the creation of something which none really want.
Today, global economics is a war all its own, and when you add the incomparable commercial benefits of nanotech to the unassailable military superiority conveyed by nanoweaponry - the only possible result is a global race to the finish line. Which, unfortunately, may really be the finish - for all of us. (See "Marlow's Second Paradox" in the side-bar)
The scenario with the best chance of leading to our survival would be one in which we all behave rationally, peacefully, and cautiously. You can see why I hesitate to call this a likely scenario. Nonetheless, the very nature of this particular beast presents another, intriguing possibility, and one which the lead character in Nano pursues: regardless of the number of players at the nanotable, Mankind's future can be assured if just one player does this right-and that player wins. Again owing to the nature of this technology, that player need not be a nation or even a large organization; it could be an individual.