The North Korean Nuclear Crisis

Gennady Chufrin is deputy director of the Institute of World Economy and International Relations (IMEMO) of the Russian Academy of Sciences in Moscow. Upon graduation from Leningrad State University in 1958, he served for four years on Soviet economic teams in Indonesia and India. He was an assistant on the economic faculty of Leningrad State University from 1961 to 1965, earning a Ph.D. in international economics there in 1965. He was an economist and department head at the Soviet Trade Representation in Indonesia in 1965-69 and Senior Research Fellow at the Institute of Marketing Research in Moscow in 1970-73. From 1973 to 1978 he served as First Secretary and Counselor in the Soviet embassies in India and Pakistan. In 1979 he became department head at the Institute of Oriental Studies of the Soviet/Russian Academy of Sciences in Moscow. After receiving the degree of professor in international economics there in 1981, he served as Deputy Director of the Institute until 1997. In 1994 he was elected associate member of the Russian Academy of Sciences. From 1998 to 2002, he was a project leader at the Stockholm International Peace Research Institute (SIPRI). In 2002 he assumed his current post at IMEMO. Chufrin is the author or co-author of 12 monographs and over 140 articles published in Russia and abroad.

The North Korean nuclear program and six-party talks

Northeast Asia is not only one of the most important but also one of the most tension-filled regions in the world today. Judging from prevailing trends, it may retain this reputation for the foreseeable future. Indeed, its role as one of the principal centers of global economic activity may be enhanced further if China’s economic success story continues and Japan overcomes its current recession. Yet the state of security in Northeast Asia continues to cause deep concern because the relationships between regional states—or at least some of them—are burdened with a host of political, economic and ideological differences as well as rival territorial claims. These inter-state disputes and contested claims undermine regional stability and may trigger serious conflicts.

A particularly high security risk in and around the Korean Peninsula is manifested by the world’s largest concentration of combat-ready troops facing each other across the 38th parallel. For a time it seemed that tensions there had started to de-escalate and that the North Korean regime was opening up to the outside world. At the end of 2002, however, the positive engagement of North Korea in international relations was disrupted after the North, accusing the United States of threatening its security, acknowledged the existence of a uranium enrichment program. In the next few months the Democratic People’s Republic of Korea (D.P.R.K.) further escalated tensions on the Korean Peninsula when it withdrew from the Nuclear Non-Proliferation Treaty (NPT), expelled International Atomic Energy Agency (IAEA) inspectors, moved to restart its plutonium production program frozen under the 1994 U.S.-D.P.R.K. Agreed Framework and then withdrew from the 1992 agreement with the Republic of Korea (ROK) on the denuclearization of the Korean Peninsula.

In the wake of the U.S.-led military campaign against Iraq, Pyongyang further hardened its position on the nuclear issue, having become increasingly convinced that possession of a nuclear deterrent force probably constituted the only credible guarantee of its survival in the face of the perceived U.S. threat. Since the prime goal of the North Korean leadership was to ensure the continuity and security of its regime, the D.P.R.K. declared its intention to develop nuclear weapons unless it received direct and unambiguous security guarantees from the United States. Needless to say, these developments had a strong negative impact on the overall security situation in Northeast Asia and even beyond the region.

In an effort to de-escalate the tensions caused by the North Korean nuclear program and to avert its destabilizing consequences for regional and global security, the United States, China, Russia, Japan and South Korea began their search for effective political methods to resolve the problem. Since August 2003 these states and North Korea have held three rounds of six-party talks in Beijing, a channel for all concerned parties to resolve the nuclear crisis through dialogue and cooperation. The third round of these talks, held in June 2004, produced a measure of cautious optimism among some participants and observers. This optimism was caused, first, by a more productive atmosphere than at the previous two rounds of talks, which allowed participants to concentrate on searching for a comprehensive solution to the problem. Second, the United States put forward a sufficiently constructive proposal, which could have served as a basis for an eventual agreement. Its main points included demands that North Korea dismantle all its nuclear programs in a complete, verifiable and irreversible way; place all its fissile materials under international control; and allow IAEA inspectors to return to the country. In return, the United States would agree to the resumption of heavy fuel oil deliveries to North Korea by Japan and South Korea and offer provisional security guarantees. For its part, North Korea seemed to respond positively to these proposals by offering to freeze its military nuclear program and to start negotiations on dismantling equipment related to this program in return for assistance in energy supplies, the lifting of economic sanctions and security guarantees from the United States.[1. Kommersant. Moscow. June 24, 2004.] One month after the completion of the third round of six-party talks, the U.S. Secretary of State and the North Korean Minister of Foreign Affairs held a meeting in Jakarta which left the impression that the principal parties in conflict were reducing their differences on the nuclear issue.

North Korea goes nuclear: reaction of its partners in six-party talks

The impression of reduced differences proved to be wrong. The United States continued to insist on the cessation of all North Korean nuclear programs, including those for the peaceful use of nuclear energy, a demand that Pyongyang called unacceptable. North Korea was also strongly offended by some public statements made during the U.S. presidential campaign, including those by President Bush himself, about the nature of the North Korean regime. All this resulted in North Korea’s refusal to attend the fourth round of six-party talks scheduled for September 2004. Citing hostile U.S. policy aimed allegedly at changing the existing political regime in North Korea by force, including by use of preventive nuclear strikes, Pyongyang’s UN envoy declared on September 28, 2004 that his country had reprocessed 8,000 spent fuel rods and was using the extracted plutonium to create nuclear weapons for its self-defense.[2. Kommersant. Moscow. September 29, 2004.]

On February 10, 2005, the North Korean government made another statement that strained the situation even more, claiming it actually possessed nuclear weapons. Simultaneously the D.P.R.K. announced suspension of its participation in the six-party talks for an indefinite period. The official statement released by the North Korean Ministry of Foreign Affairs on this occasion read, “We had already taken a resolute action of pulling out of the NPT and have manufactured nukes for self-defense to cope with the Bush administration’s undisguised policy to isolate and stifle the D.P.R.K. Our nuclear weapons will remain nuclear deterrent for self-defense under any circumstances.” [3. http://news.bbc.co.uk/go/pr/fr/-/2/hi/asia-pacific/4252515.stm.]

Thus, the reason given for these actions was again the hostile policy of the United States toward North Korea. There may be some truth in this, but it also seems true that the D.P.R.K.’s nuclear declaration was made deliberately at a time when, in the opinion of the North Korean leadership, the United States was tied down in Iraq, making it more accommodating to North Korean demands for security guarantees and more likely to abandon its intention to change the regime in Pyongyang. The North Korean leaders, however, seriously misjudged the reaction of the international community to Pyongyang’s nuclear declaration. The statements were deeply deplored throughout the world, including by all the other countries that participated in the six-party talks.

The United States interpreted the North Korean statement as a confirmation of its earlier suspicions that Pyongyang had already been in possession of nuclear weapons since the 1990s. Washington also rejected the D.P.R.K.’s proposal to hold direct talks, which was made following North Korea’s announcement that it had become a nuclear power. During her March 2005 visits to Tokyo, Seoul and Beijing, Secretary of State Condoleezza Rice reiterated Washington’s stand that no direct talks with Pyongyang were possible and that denuclearization of the Korean Peninsula was in the national interests of all parties involved in the six-party negotiations. Continuation of the six-party talks would therefore be the best and most effective way to solve the North Korean nuclear problem. Emphasizing the U.S. commitment to a peaceful resolution of the North Korean nuclear crisis, Ms. Rice stated not only that the U.S. had no intention of invading or attacking North Korea, but also that it was prepared to provide security assurances to Pyongyang if the latter was ready to negotiate seriously. She also stated that the six-party talks should be resumed as early as possible, warning Pyongyang that the stalemate over North Korea’s nuclear arms program could not go on forever.[4. Nezavisimaya gazeta. Moscow. March 22, 2005; http://www.state.gov/secretary/rm/2005/43660.htm.]

Among the possible steps that Washington considered was referring the issue to the UN Security Council and asking for the imposition of international sanctions against North Korea.[5. New York Times. New York. February 14, 2005; April 25, 2005.] In the event of further delay by North Korea in restarting the talks, Washington was also reported to be contemplating a set of more assertive steps (a “tool kit”) aimed at tracking down and stopping North Korean export sales of missiles and other weapons technologies as well as at undercutting its drug trafficking and counterfeit deals. Washington was reported to be considering the adoption of punitive measures against North Korea within the framework of the Proliferation Security Initiative (PSI). In other words, in spite of the conciliatory tone maintained by Ms. Rice during her tour of Japan, South Korea and China, the United States did not exclude more assertive steps in dealing with the North Korean nuclear problem.[6. Preparing American as well as international public opinion for such options, Porter Goss, Director of the Central Intelligence Agency, told the U.S. Senate intelligence committee that North Korea “could resume flight testing (of missiles) at any time, including the Taepodong-2 system.” He also said: “We believe North Korea probably has chemical and possibly biological weapons for use.” (Japan Times. Tokyo. March 24, 2005) Such statements may, however, be counter-productive. Painfully resembling similar pronouncements by U.S. high officials before the U.S. invasion of Iraq, they may only increase fears in Pyongyang of an imminent U.S. attack on North Korea and harden its stand on the nuclear issue.]

Japan and South Korea also expressed their sincere regret and deep concern over the situation in the region following Pyongyang’s proclamation of possessing nuclear weapons. They strongly urged North Korea to resume its participation in the six-party talks, which, in their opinion, was in the best interests of all countries involved in them, North Korea above all.

That was where the similarity between Japan’s and South Korea’s attitudes to the current stage of the North Korean nuclear crisis largely ended. Indeed, Japan took a tougher stand on this issue and demanded that North Korea not only return to the six-party talks at an early date without any preconditions, but also commit to the complete dismantling of all its nuclear programs. Japan, like the United States, was also in favor of imposing more stringent economic sanctions on North Korea if it refused to yield to these demands. It was also prepared to join the United States in applying measures against North Korea under the PSI.

Without denying the seriousness of the situation that developed in Northeast Asia after Pyongyang’s nuclear declaration and the new security threats it created, it seems that Japan’s tough policy on North Korea was motivated not only by the need to respond to North Korean actions per se but also by a desire to justify its own plans to revise the country’s constitution, which limits the role of the armed forces and the scope of their use.

In contrast to Japan, the Roh Moo-hyun administration in South Korea clearly favored a more conciliatory approach to North Korea. The South Korean authorities did not support the idea of sanctions. Moreover, they indicated that the ROK did not plan to stop humanitarian aid to the North or to withdraw from joint economic ventures with it.[7. At the same time, however, President Roh Moo-hyun warned North Korea that his country would not give it any major aid until the standoff over its nuclear weapons program ended. http://www.nti.org/d_newswire/issues/2005_4_13.html.] Neither did South Korea support U.S. and Japanese intentions to refer the North Korean case to the UN Security Council. Instead, the South Korean government advocated “persuasion, not pressure” in dealing with North Korea. Its reasoning was that North Korea should not be antagonized any further but should instead be drawn into a constructive dialogue by flexible policies. As a front-line state, South Korea was particularly sensitive to the possibility of a military conflict on the Peninsula, which would have devastating consequences for the country and its population even if nuclear weapons were not used. The South Korean government therefore wanted to induce North Korea to resume negotiations by ‘soft methods.’ In doing so, it enjoyed the support of a sizeable part of the South Korean electorate.[8. Kommersant. Moscow. March 23, 2005.] It is worth noting in this regard what one influential South Korean newspaper, the Korea Times, wrote only a few weeks before the nuclear crisis escalated: “… war must be prevented on the Korean Peninsula at any cost; no other agenda can be seen as more urgent and important than the survival of the 70 million people in North and South Korea . . . our unswerving policy is that the nuclear crisis should be resolved peacefully through dialogue and diplomatic efforts by all parties concerned.” [9. Korea Times. Seoul. January 2, 2005.]

China’s policy on the North Korean nuclear crisis

Unquestionably, both Japan and the R.O.K. have at their disposal economic leverage such as humanitarian assistance, money transfers and joint ventures to try to bring North Korea back to the negotiation table. But their potential influence does not compare with that of China, which remains the D.P.R.K.’s key ally and the main source of its outside aid, including supplies of fuel and rice. According to current estimates, about 70 to 80 percent of all such aid provided to North Korea comes from China. Every year China is reported to send one million tons of crude oil and about 150,000 to 200,000 tons of refined oil products to the D.P.R.K.

China, however, refuses to use economic leverage on North Korea for many reasons of its own. First, Beijing does not want the collapse of North Korea because it is a buffer state that keeps a U.S. military presence away from China’s own borders. Second, the possible collapse of North Korea as a sovereign state might force many thousands of its citizens and their families to try to cross the border into China, thereby creating a large-scale humanitarian problem on Chinese territory. Third, anticipating an imminent standoff with the U.S. in East Asia over a range of problems, including the future of Taiwan, China does not want to strengthen the U.S. position there by undermining its own ally. Besides, China already tried to use economic pressure on North Korea by briefly stopping oil supplies in 2003. This did not produce the intended results, however, since North Korea was fully aware of the limits of such pressure in the context of China’s own interests.

Reluctance to use economic pressure on Pyongyang does not mean, however, that China is supportive of North Korea’s nuclear policy. On the contrary, Chinese leaders, including President Hu Jintao, Prime Minister Wen Jiabao and Foreign Minister Li Zhaoxing, have repeatedly stated that China stood for a denuclearized, peaceful and stable Korean Peninsula and the return of North Korea to the six-party talks.[10. Xinhua News Agency. Beijing. February 18, 2005; New York Times. New York. March 21, 2005.] This meant that China’s strategy for resolving the nuclear crisis was based not on using sanctions against North Korea, but on the continuation of the multilateral negotiation process. Only through dialogue, maintained Chinese leaders, can confrontation be reduced, understanding enhanced and compromise reached. China’s interest in sustaining the dialogue process was obviously motivated by its belief that as long as the talks were on, the United States would not take the matter to the UN Security Council or launch a military strike against North Korea. On the other hand, further stalemate might lead to an outcome China very much wanted to avoid—further escalation of the conflict, eventually resulting in a U.S. military operation against North Korea in spite of Washington’s denials of such a possibility.

For its part, China continued to favor quiet diplomacy in its relations with Pyongyang, which on the one hand helped North Korea to withstand otherwise almost universal political and economic pressure and on the other was directed at persuading Pyongyang to return to the negotiation table and refrain from excessively escalatory steps.

According to Chinese Foreign Minister Li Zhaoxing, “the legitimate concerns of the D.P.R.K. should be addressed.” He added that since “the nuclear issue on the Korean Peninsula is very complicated,” China expected “the relevant parties, particularly those major parties concerned” to “undertake their responsibilities, demonstrate flexibility, sincerity and patience and work for an early reopening of the talks.”[11. Xinhua News Agency. Beijing. March 6, 2005.] Beijing continued to oppose the imposition of punitive sanctions against North Korea and at the same time called on the principal conflicting parties to find mutually acceptable compromises.

In the second half of February 2005 China sent a high-level party official, Wang Jiarui, to Pyongyang, where he met North Korean leader Kim Jong Il and conveyed to him a message from President Hu Jintao. This message explicitly stated that China’s position was in favor of the denuclearization of the Korean Peninsula as well as the continuation of the six-party talks. It also called on the D.P.R.K. to return to the negotiations. The North Korean leader responded to this message by stating that the D.P.R.K. was not opposed to the six-party talks, nor did it intend to withdraw from the process. He promised to return to the negotiation table but only when “conditions are ripe” and the United States showed “trustworthy sincerity.” [12. New York Times. New York. February 22, 2005; RIA Novosti. March 1, 2005.]

This North Korean stand on the negotiation process was reiterated by D.P.R.K. Premier Pak Bong Ju during his visit to Beijing in the middle of March 2005. He also stated that North Korea continued “to adhere to its position on a nuclear-free Peninsula and a final solution of the nuclear issue through dialogue.”[13. Xinhua News Agency. Beijing. March 23, 2005.] This allowed China to interpret the North Korean position as being in favor of resolving the nuclear crisis through dialogue and negotiations.[14. http://www.chinaembassy.ru/rus/fyrth/t189145.htm.] Welcoming this position, China also called on other concerned parties to join efforts for the resumption of the six-party talks, which, China maintained, remained “a most realistic and effective approach to the resolution of the D.P.R.K. nuclear issue.” It was through the continuation of these talks, stated President Hu Jintao at his meeting with North Korean Premier Pak Bong Ju, that it would be possible to keep the Korean Peninsula free from nuclear weapons, to “resolve D.P.R.K.’s rational concerns and to maintain peace and stability on the Peninsula.”[15. Xinhua News Agency. Beijing. March 24, 2005.]

It is worth noting that immediately after North Korea’s nuclear declaration of February 10, 2005, only China continued to maintain high-level direct contacts with the North Korean leadership (apart from a brief meeting between Kim Yong-Nam, head of North Korea’s parliament, and Lee Hae-Chan, prime minister of South Korea, during the Asia-Africa summit in Jakarta in April 2005). But dealing with Pyongyang is not an easy business, even for China, notwithstanding its political, security and economic importance to North Korea. The latest example of that uneasy relationship was China’s deep disappointment when North Korean First Vice Foreign Minister Kang Sok-Ju, during his April 2005 visit to Beijing, failed to give a clear date for his country’s return to disarmament talks. As a result, the scheduled visit of China’s President Hu Jintao to Pyongyang was postponed indefinitely.[16. Joongang Ilbo. Seoul. April 12, 2005; http://www.chinaembassy.ru/rus/fyrth/t191969.htm.]

In spite of these difficulties China remained strongly in favor of a negotiated settlement of the North Korean nuclear crisis. In order to reach a breakthrough it welcomed every move that might help bring about the resumption of the six-party talks, including the reported readiness of the United States to hold direct contacts with North Korea within the framework of the talks. At the same time, China tried to prevent anything that would be detrimental to the talks’ resumption. China played down reports in the Western press accusing North Korea of preparing a nuclear test and stated that there was “no solid evidence” of such activities by Pyongyang.

All this can only mean that China does not want the North Korean crisis to escalate any further. China also clearly does not want the crisis to undermine its own already uneasy relations with Washington. One may safely assume that although China refuses to impose or support sanctions against North Korea, it will not render Pyongyang technical, logistical or any other assistance that might be even remotely interpreted as a contribution to the North Korean military nuclear program. Without such assistance, North Korea will hardly be able to go beyond limited actions in this area on its own, given its disastrous economic situation.

In other words, it looks as though China will continue to try to strike a delicate balance—refusing to let down its North Korean ally while at the same time trying not to jeopardize its relations with the United States unnecessarily.

Russia’s policy on the North Korean nuclear crisis

As a Eurasian country, Russia has strong national interests in Northeast Asia. They are formed not by mere geography, but by a rich variety of political, economic, security, demographic, cultural and other factors. Obviously, at different periods of Soviet/Russian history, some of these factors became more prominent than others, influencing the choice of national objectives and the means of their achievement.

Development of constructive and stable relations with both Korean states on a wide range of issues of common interest, including regional security, is a high priority in Russia’s policy in Northeast Asia. As the security situation in and around the Korean Peninsula remained unstable and tensions began to escalate at the end of 2002, Russia strove to make its own contribution to the resolution of the evolving crisis. It came forward with a “package proposal” aimed at ensuring the non-nuclear status of the Korean Peninsula, strict observance of the NPT regime there, and the satisfaction of North Korea’s legitimate economic (in particular energy) and security concerns. At the same time, Russia expressed in no uncertain terms its negative reaction to the D.P.R.K.’s withdrawal from the NPT, stating that such action could only aggravate an already tense situation on the Korean Peninsula and greatly damage the existing legal instruments for maintaining regional and global security. The worst-case scenario from the Russian perspective would be a complete collapse of the nuclear non-proliferation regime not only on the Korean Peninsula, but also in the wider region of East Asia, provoking Japan and Taiwan to go nuclear. Russia, therefore, continued to urge Pyongyang to listen to the opinion of the international community, its neighbors and partners and to comply with the established norms and requirements of the non-proliferation regime.

Russia is in favor of Korean unification, seeing it as a gradual process passing through a number of successive stages agreed upon in the course of a constructive political dialogue between the two Korean states. Russia holds the strong view that the prospects for Korean unification rest above all with the two Korean states. The inter-Korean dialogue seems to be the best possible method of bringing the long-coveted goal of national reconciliation of the Korean people closer to realization. This is not to deny the role of outside powers in promoting this process. Normalization of Korean affairs, however, and the overall improvement in the security situation in and around the Korean Peninsula will hardly be facilitated by an arbitrary inclusion of the D.P.R.K. in an “evil axis” or by calling it an “outpost of tyranny.” A sudden disappearance of North Korea as a sovereign state as a result of an Iraqi-type military operation, be it now or later, would be completely unacceptable to Russia, since it would fundamentally change the existing balance of forces in Northeast Asia and seriously damage Russia’s national interests in the region.

Needless to say, Russia was shocked by the D.P.R.K.’s statement of February 10, 2005 regarding its possession of nuclear weapons and expressed its deep regret over it. Moscow called on North Korea in no uncertain terms to return to the negotiation table and to avoid a nuclear arms race in the region. Nevertheless, in spite of the nasty turn in the situation in and around the Korean Peninsula created by the D.P.R.K.’s statement, Russia maintained its strong belief in a peaceful resolution of the nuclear crisis. Such a resolution was, in Moscow’s opinion, critically dependent on the continuation of a political dialogue between the concerned parties in a six-nation format. Moscow remained, therefore, consistently in favor of a patient political process and disapproved of any use of force, or threats to use force, in attempts to resolve the nuclear crisis. It did not hide its total disapproval of a possible military action against North Korea, which would unleash hostilities, possibly involving the use of nuclear weapons, in the immediate proximity of Russia’s own territory.

In other words, Russia demonstrated its strong intention to work together with other concerned nations and to contribute to a complete and sustainable resolution of the North Korean nuclear crisis by relying on political and diplomatic methods, as well as to maintain an active stance in defending internationally accepted legal norms and regulations. It also disapproved of sanctions, considering them counterproductive.

Sticking to this approach, the Russian Ministry of Foreign Affairs released a statement on February 10, 2005 saying that “we treat with respect and understanding the D.P.R.K.’s concerns regarding its security. We believe, however, that resolution of this problem must be found through a negotiation process and not through an arms race, especially of nuclear weapons. In this context, we consider the six-party negotiations in Beijing as an optimum mechanism for resolving the nuclear problem on the Korean Peninsula.” [17. Press release by the information department, Russia’s Ministry of Foreign Affairs. February 10, 2005.]

Russia also entered into intensive consultations with its partners in the negotiation process and called on them to join efforts aimed at resumption of the talks and at finding compromises that would take into account legitimate interests and concerns of all parties. When North Korean leader Kim Jong Il indicated that North Korea may resume its participation in the six-party talks, Russia welcomed this statement and reconfirmed its belief in these talks as “the shortest way to denuclearization of the Korean Peninsula and resolution of other problems related to this issue.”[18. Press release by the information department, Russia’s Ministry of Foreign Affairs. February 22, 2005.]

Apart from supporting a political process to resolve the Korean nuclear crisis, Russia indicated that it was prepared to take part in concerted efforts by the international community to help North Korea with its energy problems. After the political and military aspects of the crisis are resolved, Russia may consider participating in a multilateral arrangement aimed at meeting the D.P.R.K.’s energy requirements, either as a potential supplier to North Korea of natural gas from East Siberia or Sakhalin, or as a supplier of civilian nuclear technology under the established rules of the International Atomic Energy Agency.

What’s next?

At this point, the positions of the main conflicting parties in the North Korean nuclear crisis, the United States and the D.P.R.K., seem intransigent and almost impossible to reconcile. Moreover, one cannot completely exclude North Korea opting to retain its nuclear military potential. The latest reports that North Korea had shut down its nuclear reactor at Yongbyon, supposedly to reprocess spent fuel into weapons-grade plutonium,[19. Kommersant. Moscow. April 19, 2005.] only strengthened such a possibility.

In the opinion of this author, however, the “door of opportunity” to resolve the crisis by political means is still open. The North Korean leadership, in spite of its aggressive rhetoric and its demand to turn the six-party talks into a wider disarmament forum, is basically interested not in the possession of nuclear weapons, but rather in trading them off for international (in particular U.S.) security guarantees and massive foreign economic assistance. It is North Korea’s rapidly escalating domestic economic crisis, and not possible U.S. military action, that presents the greatest threat to the North Korean regime. One may recall the case of Libya, which decided to surrender its nuclear military program in exchange for the lifting of international political and economic sanctions. If such a trade-off is achieved in the case of North Korea, the existing political regime in Pyongyang will stay in place for some time. That may not suit those thinking of regime change in North Korea, yet it will be infinitely better than a military (possibly even nuclear) conflict in this densely populated region.

Although at this stage the highest priority for all major international and regional actors is an early resolution of the North Korean nuclear crisis, this does not mean that the situation on the Korean Peninsula will be completely normalized, even after the current nuclear crisis is reduced to an acceptable level. The reason is that the security dynamics of the Korean Peninsula are embedded in the long and painful history of the Korean civil war and great power rivalries. The major international actors, the United States and China in particular, continue to hold highly divergent views on the future of the Korean Peninsula and of Korean unification. This makes the current Korean nuclear crisis only a transitory event, although an important one, in the long-term strategic development of the region. As the six-party talks resume in July 2005, this broader frame of reference must be kept in mind.

Who are the Darfurians? Arab and African Identities, Violence and External Engagement

Alex de Waal is a fellow of the Global Equity Initiative at Harvard University, and programme director at the Social Science Research Council in New York. He is the author of Famine that Kills: Darfur, Sudan (revised edition, Oxford University Press 2005) and, jointly with Julie Flint, of Darfur: A Short History of a Long War (forthcoming, Zed Press, September 2005).

This paper is an attempt to explain the processes of identity formation that have taken place in Darfur over the last four centuries. The basic story is of four overlapping processes of identity formation, each of them primarily associated with a different period in the region’s history. The four are the ‘Sudanic identities’ associated with the Dar Fur sultanate, Islamic identities, the administrative tribalism associated with the 20th century Sudanese state, and the recent polarization of ‘Arab’ and ‘African’ identities, associated with new forms of external intrusion and internal violence. It is a story that emphasizes the much-neglected east-west axis of Sudanese identity, arguably as important as the north-south axis, and remedies the neglect of Darfur as a separate and important locus for state formation in Sudan, paralleling and competing with the Nile Valley. It focuses on the incapacity of both the modern Sudanese state and international actors to comprehend the singularities of Darfur, accusing much Sudanese historiography of ‘Nilocentrism’, namely the application to Darfur of analytical terms derived from the experience of the Nile Valley.

The term ‘Darfurian’ is awkward. Darfur means, strictly speaking, ‘domain of the Fur’. As I shall argue, ‘Fur’ was historically an ethno-political term, but it has nonetheless, at any point in history, referred only to a minority of the region’s population, which includes many ethnicities and tribes.[1. The use of the label ‘tribe’ is controversial. But when we are dealing with the subgroups of the Darfurian Arabs, who are ethnically indistinguishable but politically distinct, the term correlates with popular usage and is useful. Hence, ‘tribe’ is used in the sense of a political or administrative ethnically-based unit. See Abdel Ghaffar M. Ahmed, Anthropology in the Sudan: Reflections by a Sudanese Anthropologist, Utrecht, International Books, 2002.] However, from the middle ages to the early 20th century, there was a continuous history of state formation in the region, and Sean O’Fahey remarks that there is a striking acceptance of Darfur as a single entity over this period.[2. R. S. O’Fahey, State and Society in Dar Fur, London, Hurst, 1980.] Certainly, living in Darfur in the 1980s and traveling to most parts of the region, I found the sense of regional identity palpable. This does not mean there is agreement over the identity or destiny of Darfur. There are, as I shall argue, different and conflicting ‘moral geographies’. But what binds Darfurians together is as great as what divides them.

Identity formation in Darfur has often been associated with violence and external engagement. One of the themes of this paper is that today’s events have many historic precursors. However, they are also unique in the ideologically-polarized nature of the identities currently in formation, and the nature of external intrusion into Darfur. The paper concludes with a brief discussion of the implications of the U.S. determination that genocide is occurring in Darfur. There is a danger that the language of genocide and ideologically polarized identities will contribute to making the conflict more intractable.

While primarily an exercise in academic social history, this paper has a political purpose also. It is my contention that, for almost a century, Darfurians have been unable to make their history on their own terms, and one reason for that is the absence of a coherent debate on the question, ‘Who are the Darfurians?’ By helping to generate such a debate, I hope it will be possible for the many peoples for whom Darfur is a common home to discover their collective identity.

Sudanic Identities

The first of the processes of identity formation is the ‘Sudanic model’ associated with indigenous state formation. In this respect, it is crucial to note that Dar Fur (the term I will use for the independent sultanate, from c. 1600 to 1916, with a break in 1874-98) was a separate centre of state formation from the Nile Valley, one that was at times more powerful than its riverain competitors. Indeed, Dar Fur ruled Kordofan from about 1791 to 1821 and at times had dominion over parts of the Nile Valley, and for much of its life the Mahdist state was dominated by Darfurians. Before the 20th century, only once in recorded history did a state based on the Nile rule Darfur, and then only briefly and incompletely (1874-82). This has been grossly neglected in the ‘Nilocentric’ historiography of Sudan. Rather than the ‘two Sudans’ familiar to scholars and politicians, representing North and South, we should consider ‘three Sudans’ and include Dar Fur as well.

The Keira Sultanate followed on from a Tunjur kingdom, with a very similarly-placed core in northern Jebel Marra (and there are many continuities between the two states, notably in the governance of the northern province), and a Daju state based in the south of the mountain. Under the sultanate, we have an overall model of identity formation with a core Fur-Keira identity, surrounded by an ‘absorbed’ set of identities which can be glossed as Fur-Kunjara, with the Tunjur ethnicity (the historic state-forming predecessor of the Fur-Keira) enjoying similarly privileged status immediately to the north. This is a pattern of ethnic-political absorption familiar to scholars of states including imperial Ethiopia, the Funj, Kanem-Borno and other Sudanic entities. Analysing this allows us to begin to address some of the enduring puzzles of Fur ethnography and linguistics, namely the different political structures of the different Fur clans and the failure to classify the Fur language, which appears to have been creolized as it spread from its core communities. However, the ethnography and history of the Fur remain desperately under-studied and under-documented.

Surrounding this are subjugated groups. In the north are both nomadic Bedouins (important because camel ownership and long-distance trade were crucial to the wealth of the Sultan) and settled groups. Of the latter, the Zaghawa are the most important. In the 18th century, the Zaghawa were closely associated with the state. Zaghawa clans married into the ruling Keira family, and they provided administrators and soldiers to the court. To the south are more independent groups, some of which ‘became Fur’ by becoming absorbed into the Fur polity, and others of which retain a strong impulse for political independence, notably the Baggara Arabs. As in all such states, the king used violence unsparingly to subordinate these peripheral peoples.

To the far south is Dar Fertit, the term ‘Fertit’ signifying the enslaveable peoples of the forest zone. This is where the intrinsically violent nature of the Fur state is apparent. The state reproduced itself through dispatching its armies to the south, obtaining slaves and other plunder, and exporting them northwards to Egypt and the Mediterranean. This nexus of soldiers, slaves and traders is familiar from the historiography of Sudanic states, where ‘wars without end’ were essential to ensure the wealth and power of the rulers.[3. Cf. S. P. Reyna, Wars without End: The Political Economy of a Precolonial African State, Hanover, University Press of New England, 1990; Lidwien Kapteijns, Mahdist Faith and Sudanic Tradition: A History of the Dar Masalit Sultanate 1870-1930, Amsterdam, 1985; Janet J. Ewald, Soldiers, Traders and Slaves: State Formation and Economic Transformation in the Greater Nile Valley, 1700-1885, University of Wisconsin Press, 1990. The term ‘wars without end’ was used by the 19th century traveler Gustav Nachtigal with specific reference to the central Sudanic state of Bagirimi.] O’Fahey describes the slaving party as the state in miniature.[4. R. S. O’Fahey, 1980, ibid] This in turn arose because of the geo-political position of the Sultanate on the periphery of the Mediterranean world, consumer of slaves, ivory and other plunder-related commodities.[5. In the late 18th century, Egypt’s trade with Dar Fur was five times larger than with Sinnar.] During the 18th and 19th century, the Forty Days Road to Asyut was Egypt’s main source of slaves and other sub-Saharan commodities. When Napoleon Bonaparte occupied Egypt, he exchanged letters and gifts with the Sultan of Dar Fur.

All the major groups in Darfur are patrilineal, with identity inherited through the male line. One implication of this is that identity change can occur through the immigration of powerful males, who were in a position to marry into leading families or displace the indigenous men. Historically, the exception may have been some groups classed as Fertit, which were matrilineal. A combination of defensive identity formation under external onslaught and Islamization appears to have made matrilineality no more than a historical fragment. This, however, only reinforces the point that identity change is a struggle to control women’s bodies. With the exception of privileged women at court, women are almost wholly absent from the historical record. But, knowing the sexual violence that has accompanied recent conflicts, we can surmise that rape and abduction were likely to have been mechanisms for identity change on the southern frontier.

Identity formation in the Sultanate changed over the centuries, from a process tightly focused on the Fur identity (from about 1600 to the later 1700s), to a more secular process in which the state lost its ideologically ethnic character and ruled through an administrative hierarchy (up to 1916). It is also important to note the role of claims to Arab genealogy in the legitimation and the institutions of the state. The founding myth of the Sultanate includes Arab descent, traceable to the Prophet Mohammed. This is again familiar from all Sudanic states (Ethiopia having the variant of the Solomonic myth). Arabic was important because it brought a literate tradition, the possibility of co-opting traders and teachers from the Arab world, and above all because of the role of Islam as the state religion.

The state’s indigenous Arab population was meanwhile ‘Arab’ chiefly in the archaic sense, used by Ibn Khaldun and others, of ‘Bedouin’. This is a sense still used widely, and it is interesting that the Libyan government (one of three Bedouin states, the others being Saudi Arabia and Mauritania), has regarded Tuaregs and other Saharan peoples as ‘Arab’.

This model of identity formation can be represented in the ‘moral geography’ of figure 1.

Figure 1
Moral geography of the Dar Fur sultanate as seen from the centre.

One significance of this becomes apparent when we map the categories onto the Turko-Egyptian state in the middle Nile Valley, 1821-74. For this state—which is essentially the direct predecessor of what we have today—the core identity is ‘Arab’, focused on the three tribes Shaigiya, Jaaliyiin and Danagla. (The first and second are particularly dominant in the current regime. The last is ‘Nubian’, illustrating just how conditional the term ‘Arab’ can be.) The other identity pole was originally ‘Sudanese’, the term used for enslaveable black populations from the South in the 19th and early 20th centuries, but which by a curious process of label migration, came by the 1980s to refer to the ruling elite, the three tribes themselves. Meanwhile, the Southerners had adopted the term ‘African’ to assert their identity, contributing to a vibrant debate among Sudanese intellectuals as to Sudan’s relative positions in the Arab and African worlds.[6. For the seminal debates on this issue, see Yusuf Fadl Hasan, Sudan in Africa, Khartoum University Press, 1971.] From the viewpoint of Southern Sudan (and indeed east Africa), ‘African’ and ‘Arab’ are polar opposites. From the viewpoint of Darfur and its ‘Sudanic’ orientation, ‘Arab’ is merely one subset of ‘African’. Darfurians had no difficulty with multiple identities, and indeed would have defined their African kingdom as encompassing indigenous Arabs, both Bedouins and culturally literate Arabs.

The transfer of the term ‘African’ from Southern Sudan to Darfur, and its use, not to encompass the Fertit groups but to embrace the state-forming Fur and Tunjur, and the similarly historically privileged Zaghawa, Masalit, Daju and Borgu, is therefore an interesting and anthropologically naïve category transfer. ‘African’ should have rather different meanings in Darfur.

Dar Fur’s downfall came in the 1870s because it lost out to its competitor, the Turko-Egyptian regime and its client Khartoum traders, in the struggle for the slaving/raiding monopoly in the southern hinterland. The current boundaries of Sudan are largely defined by the limits the Khedive’s agents had reached at the time when their predatory expansion was halted by the Mahdist revolution. Their commerce and raiding inflicted immense violence on the peoples they conquered, subjecting them to famine and, in some cases, complete dissolution and genocide. Historians have managed to reconstruct some of the societies that existed before this onslaught, but others live on in memory only, and others have disappeared without trace.[7. Dennis D. Cordell, ‘The Savanna Belt of North-Central Africa’, in David Birmingham and Phyllis M. Martin (eds.), History of Central Africa, Vol 1., Longman, 1983; Stefano Santandrea, A Tribal History of the Western Bahr el Ghazal, Bologna, Nigrizia, 1964.]

Islamic Identities

The second model is the ‘Islamic model’. This substantially overlaps with the ‘Sudanic model’ and complements it, but also has distinctive differences, which came to a head with the Sudanese Mahdiya (1883-98). Let us begin with the overlaps.

Islam was a state cult in Dar Fur from the 17th century. Most likely, Islam came to Dar Fur from the west, because the region was part of the medieval Kanem-Bornu empire, which was formally Islamic from the 11th century if not earlier. Nilocentric historians have tended to assume that Islam reached Dar Fur from the Nile Valley, but there is much evidence to suggest that this is not the case. For example, the dominant Sufi orders in Darfur are west African in origin (notably the Tijaniya), and the script used was the Andalusian-Saharan script, not the classic Arabic handwriting of the Nile Valley.

The majority of Darfur’s Arab tribes migrated into the sultanate in the middle of the 18th century, from the west.[8. H. A. MacMichael, A History of the Arabs in the Sudan, Cambridge, Cambridge University Press, 1922; Ian Cunnison, Baggara Arabs: Power and the Lineage in a Sudanese Nomad Tribe, Oxford, Clarendon Press, 1966.] They trace their genealogy to the Juheiyna group, and ultimately to the Prophet (in common with all ruling lineages, Arab or non-Arab). During the 18th century, they exhibited a general south and eastward drift. At all times they were cultivators and herders of both camels and cattle, but as they moved east and south, cattle herding came to predominate and they became known collectively as the Baggara. Most of the tribal names they now possess emerged in the 18th, 19th or 20th centuries, in some cases as they merged into new political units. An interesting and important example is the Rizeigat, a vast confederation of clans and sections, that migrated east and south, with three powerful sections (Nawaiba, Mahamid and Mahriya) converging to create the Rizeigat of ed Daien in south-eastern Darfur. But they also left substantial sections to the north and west, historic remnants of this migration. These sections have a troubled and uncertain relationship with their larger southern cousins, alternately claiming kinship and independence. Whereas the southern, Baggara, Rizeigat were awarded a territory by the Fur Sultan (who had not subjugated the area where they chose to live), the northern clans continued a primarily nomadic existence on the desert edge, without a specific place they could call home. When sections did settle (and many did), they were subject to the administrative authority of the Sultan’s provincial governor of the northern region, Dar Takanawi or Dar el Rih. For historic reasons, this was an area in which administration was relatively de-tribalised, so the northern Bedouins were integrated into the Sultanate more as subjects than as quasi-autonomous tribal units.

The same process explains why we have a large Beni Halba Baggara group, with territorial jurisdiction, in southern Darfur and a small Abbala group further to the north, and similarly for the Misiriya, whose main territories lie in south Kordofan but who have remnant sections in northwest Darfur and Chad. Meanwhile the Zayadiya and Ma’aliya are not Juheiyna at all, did not migrate in the same manner, and had different (though not necessarily easier) historic relations with the Sultanate.

The Hausa and Fulani migrations that occurred in the 19th and 20th centuries also have important parallels. They too populated substantial territories in Darfur (and also further east), and included remnant and more purely pastoral sections (such as the Um Bororo) that continued the eastward migration well into the late 20th century. An important component of the eastward drift is the influence of the Haj (many see themselves as ‘permanent pilgrims’, seeking to move towards Mekka), and a Mahdist tradition that emphasizes eastward migration.[9. C. Bawa Yamba, Permanent Pilgrims: The Role of Pilgrimage in the Lives of West African Muslims in Sudan, Washington D.C., Smithsonian Press, 1995.] As we shall see, militant Mahdism is itself an import into Sudan from west Africa, brought with these migrants. There are other significant groups with origins to the west, such as the Borgu and Birgid, both of them sedentary Sudanic peoples. We should not see eastward migration as exclusively a phenomenon of politically-Islamized groups, pastoralists or Arabs.

The Juheiyna groups brought with them their own, distinctive ‘moral geography’, one familiar to pastoral nomadic groups across the central Sudan and Sahelian regions. This sees all land as belonging to Allah, with right of use and settlement belonging to those who happen upon it. It sees Darfur as a chequerboard of different localities, some belonging to farmers and others to herders, with the two groups in a mutually-advantageous exchange relationship. It is also open-ended, especially towards the east. (The extent to which this is co-terminous with the moral geography of a Muslim pilgrim, exemplified by the west African migrants in Sudan, is an interesting question.)

This is represented in figure 2, which was drawn for me in outline by one of the most eminent Abbala Sheikhs, Hilal Musa of the Um Jalul Rizeigat, in 1985.

Figure 2
The ‘moral geography’ of Darfur, from a camel pastoralist viewpoint.

Several legacies of this are evident today. Most of the ‘Arab’ groups involved in militia activities, including land grabbing, are what we might call the Abbala remnants, with weak historic claims to tribally-defined territories and traditions of migration and settlement to the east and south. Meanwhile, the majority of the Baggara Arabs of south Darfur are uninvolved in the current conflict.

Three other elements in the Islamic identity formation process warrant comment. One is Mahdism, which arrived in Darfur from the west, and has clear intellectual and social origins in the Mahdist state founded by Osman Dan Fodio in what is now northern Nigeria. Unlike the Nile Valley, where the Mahdist tradition was weak, in the west African savannas it was strong and well-articulated. Dan Fodio wrote ten treatises on Mahdism plus more than 480 vernacular poems, and insisted that the Mahdi had to bear the name Mohamed Ahmed (which ruled him out).[10. Ahmed Mohammed Kani, The Intellectual Origin of Islamic Jihad in Nigeria, London, Al Hoda, 1988.] The first Mahdist in 19th century Sudan was Abdullahi al Ta’aishi, grandson of a wandering Tijani Sufi scholar from somewhere in west Africa, who met the Dongolawi holy militant Mohamed Ahmed in 1881 and proclaimed him the Mahdi, in turn becoming his Khalifa. The majority of the Mahdist armies derived from the Baggara of Darfur and Kordofan, and for most of its existence the Mahdist state in Omdurman was ruled by the Khalifa and his Ta’aisha kinsmen. In fulfillment of Mahdist prophecy and to support his power base, the Khalifa ordered the mass and forced migration of western peoples to Omdurman. The Mahdiya was, to a significant extent, a Darfurian enterprise. And it involved extreme violence, though of a radically different kind from that on which the Dar Fur sultanate was founded. This was religious, messianic Jihad, including population transfers on a scale not seen before or since.

Such is the stubborn Nilocentrism of Sudanese historiography that the influence of west African and Darfurian forms of Islam on this pivotal episode in Sudanese history are consistently under-estimated. It was the collision between the heterodox Mahdist Jihadism of the west, including the egalitarian ideology of the Tijaniya, and the more orthodox and hierarchical (though also Sufist) Islam of the Nile Valley that created the Mahdiya.

The Mahdist period is remembered even today in the cultural archive as a time of extraordinary turmoil and upheaval. It was a time of war, pillage and mass displacement. In 1984/5, people looked back to the drought of 1913/14 as their historical point of reference. One wonders if the current historic reference point is the famine of 1888/9, known as ‘Sanat Sita’ because it occurred in the year six (1306 in the Islamic calendar), which seems to have surpassed the Darfurians’ otherwise inventive capacity for naming tragedy.

Beyond that historic precedent, I do not want to suggest that there are parallels between the Mahdiya and contemporary or recent political Islam in Sudan, which has had its own manifestations of extreme violence and jihadism. On the contrary, I would argue that it is the failure of Sudan’s recent Islamist project that has contributed to the war in Darfur. This arises from the last important theme of Islamic identity, namely Hassan al Turabi’s alliance-building across the east-west axis of Sudanese identities.

Among the many intellectual and practical innovations in Turabi’s Islamism was an opening to African Muslims as individuals and African Islam as a tradition. The National Islamic Front recognized that Darfur represented a major constituency of devout Muslims that could be mobilized. It made significant openings to Darfur and to the substantial Fellata communities across Sudan.[11. Awad Al-Sid Al-Karsani, ‘Beyond Sufism: The Case of Millennial Islam in the Sudan’, in Louis Brenner (ed.) Muslim Identity and Social Change in Sub-Saharan Africa, Indiana University Press, 1993.] It promised that Islam could be a route to enfranchisement as citizens of an Islamic state. In doing so, Turabi and his followers moved away from the traditional focus of the political Islamists on the more orthodox Islam of the Nile Valley, and its close association with the Arab world. It was, unfortunately, a false promise: the Sudanese state is the inheritor of the exclusivist project of the 19th century Khartoum traders, and sought only to enlist the Darfurians and Fellata as foot soldiers in this enterprise. For the Fellata it had a quick win: it could grant them citizenship, correcting a longstanding anomaly of nationality policy. And it has gained the loyalty of many Fellata leaders as a result. But for Darfurians, the best it offered was relative neutrality in the emergent conflicts between Darfur’s Arabs and non-Arabs, and increasingly, not even that. Darfur was marginal even to the Islamists’ philanthropic projects in the 1990s, which at least provided basic services and food relief to many remote rural communities. Perhaps because the Islamists took the region for granted, and certainly because the ruling groups were focused on the threats from the South, Nuba and Blue Nile, Darfur was neglected in the series of Islamist projects aimed at social transformation.

When the Islamist movement split in 1999, most Darfurian Islamists went into opposition. By an accident of fate, the most powerful Darfurian in the security apparatus was an air force general from the Abbala Rizeigat, and members of those sections were rapidly put in place as leaders of the Popular Defence Force in critical locations, removing men whom the government suspected of having sympathies with the Turabi faction. Thus was created a set of militias popularly known as ‘Janjawiid,’ adopting a term first used to refer to Chadian Abbala militias which used western Darfur as a rear base in the mid-1980s, armed some of their Abbala brethren there, and helped instigate major clashes in 1987-90. The Darfur war is, in a significant way, a fight over the ruins of the Sudanese Islamist movement, by two groups, both of which seem to have abandoned any faith that the Islamist project will deliver anything other than power.

The third element concerns the position of women. In the Tijaniyya sect, whose tradition is far more egalitarian than that of the Nile Valley Sufi orders, women can achieve the status of sheikh or teacher. This reflects both the religious traditions of the Sudanic region and the relatively higher socio-economic status of women in savanna societies, where they could own their own fields and engage in trade in their own right. Darfurian ethnographies repeatedly note the economic independence enjoyed by women, among non-Arab and Arab groups alike. The subsequent spread of Islamic orthodoxy, described further below, contributed to a regression in women’s status.

Administrative Tribalism and ‘Becoming Sudanese’

The British conquest of Dar Fur in 1916, and the incorporation of the then-independent sultanate of Dar Masalit in 1922-3, represented a clear break with the past. Darfur was ruled by an external Leviathan which had no economic interest in the region and no ideological ambition other than staving off trouble. Darfur was annexed when the basic determinants of British policies in Sudan had already been established, and the main decisions (e.g., the adoption of Native Administration after 1920, the expulsion of Egyptian civil servants after 1924, the embrace of neo-Mahdism and the Khatmiya, the adoption of the Famine Regulations in the 1930s, the Sudanisation of the civil service, and the moves towards independence) were all taken with scant reference to Darfur.

The key concern in Darfur in the decade after the conquest was security, and specifically the prevention of Mahdist uprisings. An attack on Nyala in 1921 was among the most serious threats the new rulers faced, and the last significant uprising was in 1927. In riverain Sudan, the British faced a more immediate danger, in the form of revolutionary nationalism under the slogan of unity of the Nile Valley, among the educated elite and detribalized elements, especially Sudanese soldiers. To suppress both, and to ensure the utmost economy in rural administration, the British chose a policy of ‘Native Administration’. This was not ‘Indirect Rule’ as practiced in the Nigerian Emirates or Buganda (except in the case of the Sultanate of Dar Masalit, where the British officer was a Resident). Rather, it was the creation of a new hierarchy of tribal administrators, with the significant innovation of the ‘omda, the administrative chief intermediate between the paramount chief (‘nazir’ for Arab tribes) and the village sheikh. ‘Omda was an Egyptian office specially imported for the purpose.[12. The Turko-Egyptian regime had also used administrative tribalism, and had created the position of ‘sheikh al mashayikh’ as a paramount chieftaincy for the riverain tribes. In the 1860s, this title was changed to ‘nazir’. The sultans of Dar Fur tried similar mechanisms from the late 18th century, awarding copper drums to appointees.]

In a series of ordinances, the British regularized the status of tribal authorities. A particularly important act was to grant judicial powers to chiefs, in addition to their executive powers. This was a means of setting the tribal leaders to police their subjects, to keep an eye on both millenarian preachers and discontented graduates. (It is interesting that the leader of the 1924 nationalist revolt, Ali Abd al Latif, a detribalized Southerner or ‘Sudanese’ in the parlance of the day with no tribal leader to whom he could be made subject, was kept in jail in Sudan long beyond his prison term, and then exiled to Egypt.) Along with this came the ‘Closed Districts Ordinance’, much criticized for shutting off the South and Nuba Mountains from external influences, but used in Darfur to keep an eye on wandering preachers and west African immigrants.

But the most significant corollary of Native Administration was tidying up the confusion of ethnic identities and tribal allegiances that existed across Sudan. This was an administrative necessity more than an ideological cleaning-up.

The colonial archives from the 1920s and ’30s are filled with exchanges of letters on how to organize the ethnic chaos encountered in rural Sudan.[13. This discussion derives chiefly from the author’s notes from research in the Sudan National Archives in 1988. For simplicity, specific files are not referenced.] In Darfur, the most significant question was the administration of the Rizeigat, which included shoring up the authority of the pro-British Nazir, Madibbu, regulating the shared pastures on the Bahr el Arab river, also grazed by the Dinka, and deciding the status of the Abbala Rizeigat (initially subject to Nazir Ibrahim Madibbu, then with their own deputy Nazir, finally with their own full Nazir). Other activities included grouping the two sections of the Beni Hussein together, and providing them with land in north-western Darfur (a very rare instance of a wholesale tribal relocation, albeit one done with the consent of the section that needed to be relocated), administratively uniting the two parts of the Beni Halba, finding means of appointing a chief for the Birgid, grouping the miscellaneous sections living in an area called ‘Dar Erenga’ together to form one tribe, etc. A lot of attention was paid to the Fertit groups living on Darfur’s southern frontier, including a brave but futile attempt to move them into Southern Sudan and create a ‘cordon sanitaire’ between Muslims and non-Muslims. But this was an anomaly: the basic approach was ‘live and let live.’

Native Administration was reformed in the 1940s and 1960s (when chiefs were stripped of most of their judicial powers) and formally abolished in 1971, although many people elected to Rural People’s Councils were former native administrators.

Along with the regularizing of tribal administration came the formalizing of boundaries. The British stuck with the four-fold division of the Dar Fur sultanate into provinces, and demarcated tribal territories for the Baggara in south Darfur (following the Sultan’s practice). Elsewhere, the allocation of tribal dars was somewhat haphazard. The creation of Dar Beni Hussein in the western part of north Darfur was anomalous; the general rule was that when a group did not present a problem, it was left to be. However, the de facto recognition of the legality of a tribal dar in south Darfur began to build a legacy.[14. The real drive for the recognition of tribal territories was elsewhere in Sudan, where ethnic territorialization was less complex, and administration denser.] Beforehand, the term ‘dar’ had been used in many different senses, ranging from a specific location or administrative unit, to the specific territory of an ethnic group, to the whole Sultanate, to an abstract region such as Dar Fertit. But, by constant usage, twinned with a tribally-based administrative system with judicial powers, the term ‘dar’ came primarily to refer to an ethnic territory in which the dominant group had legal jurisdiction. By the 1970s, Sudan’s leading land law scholar could conclude that tribes had become ‘almost the owners of their homelands.’[15. Saeed Mohamed El-Mahdi, Introduction to Land Law of the Sudan, Khartoum, Khartoum University Press, 1979, p. 2. In southern Darfur, there was a strong push by the regional authorities and development projects to recognize tribal dars in the 1980s. See Mechthild Runger, Land Law and Land Use Control in Western Sudan: The Case of Southern Darfur, London, Ithaca, 1987.] During most of the 20th century, this had no significant political repercussions, as it coincided nicely with the customary practice of settlers adopting the legal code of their hosts. There was sufficient free land, and a strong enough tradition of hospitality to settlers, that by the 1970s all ‘dars’ in south Darfur were ethnically mixed, some of them with very substantial settler populations from the drought-stricken north.

Let us not over-emphasize the implications of tribal administration for identity formation. It undoubtedly slowed and even froze such processes. But it was lightly implemented. Many district officers in Darfur reveled in the myriad forms of ethnic identity and chieftainship they found, documenting the intermediate identities of the Kaitinga (part Tunjur/Fur, part Zaghawa), the Jebel Si Arabs, the Dar Fongoro Fur, and numerous others; also allowing Darfurian administrators to keep their wonderful array of traditional titles including Sultan, Mek, Dimangawi, Shartay, Amir, and Nazir. Given that there were no significant economic interests in Darfur, no project for social change or modernization, and no land alienation, we must recognize the limits of imperial social engineering. It had a very light hand, both for good and ill.

Indeed, in the 1960s and ’70s, Darfur became something of a textbook case for identity change. During the preparatory studies for establishing the Jebel Marra Rural Development Project, a number of eminent social anthropologists were employed to study social change in Darfur.[16. Frederik Barth, ‘Economic Spheres in Darfur’, in Raymond Firth (ed.), Themes in Economic Anthropology, London, Tavistock, 1967; Gunnar Haaland, ‘Economic Determinants in Ethnic Processes’, in Frederik Barth (ed.) Ethnic Groups and Boundaries, London, Allen and Unwin, 1969.] Among their writings are a number of studies on how sedentary Fur farmers, on acquiring sufficient cattle, would ‘become Baggara’ in stages, to the extent of teaching their children the Arabic language and adopting many socio-cultural traits of the pastoralists they moved with. This was a remarkable reversal of the previous pattern whereby communities ‘became Fur’ for political reasons; now individuals might ‘become Baggara’ for economic ones. There were also studies of the sedentarization of nomads, underlining how the nomad/farmer distinction is an extremely blurred one. Sadly, there were no comparable studies in northern Darfur.

Most proposals for a settlement of Darfur’s conflict include the revival of Native Administration in some form, both for the resolution of inter-communal conflicts (including settling land disputes) and for local administration.[17. James Morton, The Poverty of Nations, London, British Academic Press, 1994.] Whether or not the important role of chiefs’ courts will be re-established is far less clear. However, the context of the early 21st century is very different from the 1920s. This is clear from a brief examination of the role played by the tribal leaders in the resolution of the 1987-9 conflict and the revived Native Administration Council after 1994.

The first major conflict in Darfur of recent times occurred in 1987-9, and had many elements that prefigured today’s war, not least the fact that the major protagonists were Fur militia and Abbala Arab fighters known as ‘Janjawiid’. Belatedly, a peace conference was called including tribal leaders on both sides, some of whom sought to reestablish their authority over younger militant leaders, and some of whom sought to advance their own positions. Assisted by the fact that the NIF coup occurred while the conference was in session—allowing both sides to make compromises without losing face—an agreement was reached. But it was not implemented; fighting broke out again, and another conference was held in early 1990, which produced similar recommendations, again not properly implemented. The key lesson from this is that Native Administration is not a solution in itself, but rather a component of finding and implementing a solution. Control of armed groups, payment of compensation, and measures to deal with the causes of dispute are all necessary.

A form of Native Administration Council was established in 1994, a measure that coincided with the division of Darfur into three states and renewed conflict in western Darfur. There are two ways in which the NAC is implicated in the conflict. First, the government saw the award of chieftaincies (usually called Emirates) as a means of rewarding its followers and undermining the power of the Umma Party, which retained the allegiance of many of the older generation of sheikhs. Second, the positions were awarded in line with a new, simplified and administratively more powerful view of ethnicity. The very rationale for creating the new entities was to reinforce the power of a central authority (a party as much as, or more than, a state). In a militarized environment, with no services delivered by party or state, the reward for the new chiefs was the power to allocate land and guns within their jurisdiction. It was a recipe for local-level ethnic cleansing, which duly occurred in several places.

During the colonial period—less than four decades for Darfur, scarcely three for Dar Masalit—Darfur was subject to a state in Khartoum which knew little, and cared less, about this faraway region. Little changed with independence. The entire debate over Sudanese independence was conducted in Nilocentric terms: the dual questions were whether Sudan should be united with Egypt, and what should be the status of the South.[18. Cf. Gabriel R. Warburg, Historical Discord in the Nile Valley, Evanston IL, Northwestern University Press, 1993.] The position of Darfur was almost wholly absent from this discourse, and remained a footnote in ongoing debates on Sudanese national identity. For example, perhaps the most eloquent analyst of the dilemmas of Sudanese identity, writing in the format of fiction that allows him to explore more explicitly the unstated realities of Sudanese racism, treats Darfurian identity wholly within the North-South framework.[19. Cf. Francis M. Deng, The Cry of the Owl, New York, Lilian Barber Press, 1989. In this novelistic exploration of Sudanese identities, the main protagonist, who is a Southerner, meets a Fur merchant on a train. The encounter reveals that anti-Southern racist feeling exists among Darfurians, while Darfurians themselves are marginalized, exploited and racially discriminated against by the ruling riverain elites.]

The state that ensued was a clear successor to the Turko-Egyptian colonial state. It was, and remains, a merchant-soldier state, espousing Arabism, using Arabic as a language of instruction in schools and in the media, and promoting Islam as a state ideology. Its political discourse is almost wholly Nilocentric: the key debates leading up to independence concerned whether Sudan would opt for unity with Egypt under the slogan of ‘unity of the Nile Valley’, and subsequent debates on national identity have been framed along the North-South axis of whether Sudan is part of the Arab or African world. There were brave attempts by scholars and activists to assert that Sudan is at once Arab and African, and that the two are fully compatible. These efforts came from all parts of the political spectrum: it is particularly interesting to see the Islamists’ arguments on this score.[20. Muddathir Abd al-Rahim, ‘Arabism, Africanism and Self-Identification in the Sudan’, in Y. F. Hasan, Sudan in Africa, Khartoum University Press, 1971.] Some of the academic historians who engaged in this debate worked on Sudan’s westward links. They, however, were an academic minority, and their writings found no political resonance. Whether polarizing or bridging, the discourse was overwhelmingly North-South. And, within Northern Sudan especially, we see the relentless progress of absorption into the culture of the administrative and merchant elite.

What we see is a process that has been called many names, of which I prefer ‘Sudanization,’ following Paul Doornbos, who produced a series of superb if under-referenced studies of this phenomenon in Dar Masalit in the early 1980s.[21. Paul Doornbos, ‘On Becoming Sudanese’, in T. Barnett and A. Abdelkarim (eds.), Sudan: State, Capital and Transformation, London, Croom Helm, 1988.] ‘Arabization’ is not adequate, because Darfur’s indigenous Bedouin Arabs were also subject to the same process, and because it did not result in people who were culturally ‘Arab’. Rather, individuals came to belong to a category of Sudanese who spoke Arabic, wore the jellabiya or thoub, prayed publicly, used paper money, and abandoned tribal dancing and drinking millet beer. Doubtless, the newly-Sudanized were at a social and financial disadvantage when dealing with the established elites. But they were not stripped of land or identity, and most of them straddled both the ‘Sudanized’ and local identities, and gained from doing so.

One of the most marked aspects of Sudanization is a change in the status of women. The Darfurian Sudanized woman is (ideally) circumcised, secluded at home, economically dependent on her husband, meek in her behaviour, and dressed in the thoub. The spread of female circumcision in Darfur in the 1970s and ’80s, at a time when the Sudanese metropolitan elite was moving to abandon the practice, is perhaps the most striking physical manifestation of this process, and yet another illustration of how identity change is marked on women’s bodies. It is also an illustration of the recency of a ‘traditional’ practice.

What is remarkable about these processes of identity change is not that they occurred, or that they were subject to the arbitrary impositions of a state, but that they were almost entirely non-violent (with the significant caveat of genital mutilation). This is an important contrast with the South and the Nuba Mountains.

Incorporation into a Sudanese polity did bring with it a clear element of racism, based on skin colour and facial characteristics. While neither the Sudanic nor the Islamic processes of identity formation could avoid a racial tinge, it was under Egyptian dominance and the successor Sudanese state that race became the dominant marker. The Egyptian or Mediterranean littoral ‘moral geography’ of Dar Fur can be charted as early as 1800, when the Arab trader Mohamed al Tunisi lived there: he graded the land and its inhabitants according to the colour of skin, the beauty of women, and their sexual mores.[22. Eve Troutt Powell, A Different Shade of Colonialism: Egypt, Great Britain and the Mastery of the Sudan, Berkeley, University of California Press, 2003. Powell shows convincingly how similar attitudes informed Egyptian attitudes towards Sudan into the 20th century.] A broadly similar racist classification became evident in the Egyptian occupation of the Nile Valley in the mid-19th century, and remains essentially unchanged today.

A particularly important difference between Darfur and other parts of Sudan is the significance of land and labour. Under the British and independent governments, very substantial agricultural schemes were established along the Nile and in eastern Sudan, and subsequently in south Kordofan. These involved widespread land alienation and the transformation of a rural peasantry into a wage labour force, much of it seasonally migrant.[23. Ahmad Alawad Sikainga, Slaves into Workers: Emancipation and Labor in Colonial Sudan, Austin, University of Texas, 1996.] In Darfur there was no land alienation to speak of, and seasonal labour migration is almost entirely within the region, to work on locally-owned smallholdings (some of which are large by Darfur standards, but do not match the huge registered schemes of the eastern savannas). The violent depredation and dispossession inflicted by the Sudanese state in the 1980s and ’90s on the Nuba, Blue Nile peoples and adjacent areas of Upper Nile, which created mass internal displacement with the twin economic aims of establishing mechanized farms owned by a Khartoum elite and supplying a disadvantaged labour force to work them, has no parallel in Darfur. To a significant degree, Darfur has served as a labour reserve for Gezira and Gedaref, but because of the distances involved, the migration is long-term and not seasonal.[24. Darfurian migrant labour is remarkably under-researched, in comparison with the Nuba and west Africans. In the modest literature, see Dennis Tully, ‘The Decision to Migrate in Sudan’, Cultural Survival Quarterly, 7.4, 1983, 17-18.] Nor has the Darfurian labour reserve ever been of such strategic economic significance that national economic policies were geared to sustaining it. Male outmigration has left the poorest parts of Darfur with a gender imbalance and a preponderance of female-headed households.[25. See, for example, my discussion of Jebel Si in Famine that Kills: Darfur, Sudan, Oxford University Press, 2004.]

Labour migration has had implications for the way in which the riverain elite regards westerners. In the 1920s, landowners were reported as saying that just as God (or the British) had taken away their slaves, he/they had brought the Fellata. The lowly status of this devout Muslim pilgrim group is closely associated with their low-status labouring occupations, and much the same holds for the Darfurians (of all ethnicities). The term ‘abid’ was often applied to them all, indiscriminately, reflecting both racism and their labouring status.[26. Mark Duffield, Maiurno: Capitalism and Rural Life in Sudan, London, Ithaca, 1981; C. Bawa Yamba, Permanent Pilgrims: The Role of Pilgrimage in the Lives of West African Muslims in Sudan, Washington D.C., Smithsonian Press, 1995.] It is arguable that racist attitudes followed economic stratification, rather than vice versa. In either case, there is a clear association between status and skin colour.

Incorporation into a Sudanese national state also, simultaneously, represented incorporation into a wider regional identity schema, in which the three attributes of skin colour, economic status and Arab identification all served to categorize populations. Mohamed al Tunisi would feel at home in the contemporary moral geography of Sudan, almost two centuries after his travels.

Militarized and Ideological ‘Arab’ and ‘African’ Identities

The complex history of identity formation in Darfur provides rich material for the creation of new ethnic identities. What has happened is that as Darfur has been further incorporated into national Sudanese processes, wider African and Middle Eastern processes, and political globalization, Darfur’s complex identities have been radically and traumatically simplified, creating a polarized ‘Arab versus African’ dichotomy that is historically bogus, but disturbingly powerful. The ideological construction of these polarized identities has gone hand-in-hand with the militarization of Darfur, first through the spread of small arms, then through the organization of militia, and finally through full-scale war. Fear and violence are a particularly potent combination for forging simplified and polarized identities, and such labels are likely to persist as long as the war continues. The U.S. government’s determination that the atrocities in Darfur amount to ‘genocide’, and the popular use of the terms ‘Arab’ and ‘African’ by journalists, aid agencies and diplomats, have further entrenched this polarization, to the degree that community leaders for whom the term ‘African’ would have been alien even a decade ago now readily identify themselves as such when dealing with international interlocutors.

Internally, this polarization began with some of Darfur’s Arabs. Exposed to the Islamist-Arabism of Khartoum, drawing upon the Arab lineage ideology latent in their Juheiyna identities, and often closely involved in Colonel Gaddafi’s ideologically Arabist enterprises in the 1970s and ’80s, these men adopted an Arab supremacist ideology. This seems to have been nurtured by Gaddafi’s dreams of an Arab homeland across the Sahara and Sahel (notwithstanding the Libyan leader’s expansive definition of ‘Arab’ which, true to his own Bedouin roots, includes groups such as the Tuareg), and by competition for posts in Darfur’s regional government in the 1980s. In 1987, a group of Darfurian Arabs wrote a now-famous letter to Prime Minister Sadiq el Mahdi, demanding a better deal for Darfur’s Arabs. They appealed to him as ‘one of their own’. At one level this was simply a legitimate demand for better political representation and better services. But within it lurked an agenda of Arab supremacism. Subsequently, it has become very difficult to separate the ambitious agenda of a Darfurian Arab homeland from wider and more modest goals, and to identify which documents are real and which are not. But there is no doubt that, twinned with similar ambitions among the Chadian Juheiyna Arabs, there was a political and territorial agenda emerging. This helps explain why some of the first and fiercest clashes of 1987 were in the highland Jebel Marra area of Gulu, a territory which would clearly be marked as a ‘Fur’ heartland on any moral geography of the region, including that of Sheikh Hilal (reproduced above), whose son Musa has since become infamous as commander of a major PDF brigade. The attacks on Gulu in 1987 and again in 2002 and 2004 represent a symbolic strike at the heart of Fur identity and legitimacy, as well as a tactical assault on a Fur resistance stronghold.

This newly-politicized Arab identity was also militarized. Three overlapping strands of militarization can be seen. One is the Ansar, the core followers of the Mahdi, who are historically a political, religious and military movement. Between 1970 and 1977, the Ansar leadership was in exile in Libya, planning its return to power, which it attempted, unsuccessfully, in 1976. Many returned to Sudan in 1977 as part of the ‘National Reconciliation’ between Sadiq el Mahdi and Nimeiri, but were not, as they had hoped, absorbed into the national army. Instead, they were settled on farming schemes. Disproportionately drawn from the Baggara tribes, former Ansar fighters were instrumental in the creation of the first Baggara militias in the mid-1980s. A second group of Ansar returned in 1985-6, following the fall of Nimeiri.[27. Alex de Waal, ‘Some Comments on Militias in Contemporary Sudan’, in M. Daly and A. A. Sikainga (eds.) Civil War in the Sudan, London, Taurus, 1994.] While in Libya, the Ansar had been organized, trained and armed alongside Gaddafi’s Islamic Legion, which drew recruits from across the Sahelian countries.[28. Gaddafi’s African policy has not been well documented by journalists and scholars.] This was the second contributor to the militarization of the Bedouin. The Islamic Legion was disbanded after its defeat at Ouadi Doum in 1987, but its legacy remained. The third contributor was the formation of Arab militias in Chad, which used Darfur as a rear base for their persistent but unsuccessful attempts to take state power. The different political, tribal and ideological strands of this story have yet to be teased apart. Clearly there are important differences within these groups, including a competition for the allegiance of the Ansar fighters between the Umma leadership and the NIF. Gaddafi was also quite capable of treating with non-Arab groups such as the Zaghawa when it suited him, and was quick to recognize the government of Idris Deby when it took power in late 1990. Although Deby had been a commander of the forces that defeated the Libyan army and Islamic Legion a few years earlier, Gaddafi’s main quarrel was with Hissene Habre.

While there is a definite strain of Arab supremacism, the significance of ‘Arab’ identity must not be overstated. The groups involved in the current conflict are overwhelmingly Juheiyna Abbala (excluding, for example, the Zayadiya), with relatively few Baggara groups (notably including one part of the Beni Halba, many of whom were armed and mobilized in 1991 to counter the SPLA incursion into Darfur). This means that the largest and most influential of Darfur’s Arabs are not involved, including the Baggara Rizeigat, the Habbaniya, the Maaliya and most of the Taaisha. As the conflict continues to spread and escalate, this may change: there are clear attempts by some in government to bring all Arab groups (especially the Rizeigat) in on its side, and attempts by some on the rebel side to provoke them.

The character of Arab supremacism is manifest in a racist vocabulary and in sexual violence. The term ‘zurug’ has long been used in the casual racism of Arabs in Darfur, despite—or perhaps because of—the absence of any discernible differences in skin colour. Attributions of female beauty or lack thereof are similarly made, again despite or because of the lack of noticeable difference. The term ‘abid’, which has long been used by the riverain elites to refer to all Darfurians, has been adopted by some Arab supremacists to refer to non-Arab Darfurians, despite—or because of—its lack of historical precedent. And widespread rape itself is a means of identity destruction or transformation, particularly salient and invasive for Muslim communities. There is ample documentation that, in the Nuba Mountains counterinsurgency campaigns of the early 1990s, rape was used systematically and deliberately for this purpose.[29. African Rights, Facing Genocide: The Nuba of Sudan, London, African Rights, 1995.]

The creation of ‘Africanism’ is more recent than the ascent of Arab supremacism. It owes much to the SPLA, whose leader, John Garang, began to speak of an ‘African majority’ in Sudan to counter the Islamist government’s claim that Sudan should be an Islamic state because it had a majority Muslim population. Garang reached out to the Nuba and peoples of southern Blue Nile, for whom ‘African’ was an identity with which they could readily identify. For example, the Nuba clandestine political and cultural organization of the 1970s and early ’80s, known as Komolo, asserted the Nuba’s right to their own cultural heritage, which they identified as distinctively ‘African.’ Under the leadership of Yousif Kuwa, Komolo activist and SPLA governor of the Nuba Mountains, the Nuba witnessed a revival of traditional dancing, music and religion.[30. The Nuba’s ‘African’ identity is well-documented. The best treatment is Yusuf Kuwa’s own memoir, ‘Things Would No Longer Be The Same’, in S. Rahhal (ed.) The Right to be Nuba: The Story of a Sudanese People’s Struggle for Survival, Trenton NJ, Red Sea Press, 2002.]

Trapped in a set of identity markers derived from the historical experience of the Nile Valley, a number of educated Darfurian non-Arabs chose ‘African’ as the best ticket to political alliance-building. The veteran Darfurian politician Ahmed Diraige had tried to do this in the 1960s, making alliances with the Nuba and Southerners, but had then switched to trying to bring Darfur’s non-Arabs into the Umma Party, hoping thereby to broaden and secularise that party. Daud Bolad, a Fur and a prominent Islamist student leader, switched from one political extreme to the other and joined the SPLA, leading a poorly-planned and militarily disastrous SPLA expedition into Darfur in 1991. Sharif Harir, a professor of social anthropology and as such inherently distrustful of such labels, was one of the first Darfurian intellectuals to recognize the danger posed by the new Arab Alliance, and has ended up reluctantly donning the ‘African’ label. He is now one of the political leaders of Darfur’s Sudan Liberation Movement.

The influence of the SPLA on the Darfurian opposition should be acknowledged. What was originally a peasant jacquerie was given political ambition with the assistance of the SPLA. Indeed, the Darfur Liberation Front was renamed the SLA under SPLA influence, and it adopted Garang’s philosophy of the ‘New Sudan’, perhaps more seriously than its mentor.

It is a commonplace of ethnographic history that communal violence powerfully helps constitute identity. In times of fear and insecurity, people’s ambit of trust and reciprocity contracts, and identity markers that stress difference between warring groups come to the fore. Where sexual violence is widespread, markers of race and lineage are salient. Much anecdotal evidence indicates that this is happening today, and that the civilian communities most exposed to the conflict are insisting on the ‘African’ label. We can speculate that it serves as a marker of difference from the government and its militia, an expression of hope for solidarity from outside, and—perhaps most significant in the context of forced displacement and threats of further dispossession—a claim to indigeneity and residence rights.

From the point of view of the SLA leadership, including the leadership of the communities most seriously affected by atrocity and forced displacement, the term ‘African’ has served them well. It is scarcely an exaggeration to say that the depiction of ‘Arabs’ killing ‘Africans’ in Darfur conjures up, in the mind of a non-Sudanese (including many people in sub-Saharan Africa), a picture of bands of light-skinned Arabs marauding among villages of peaceable black-skinned people of indeterminate religion. In the current context in which ‘Arabs’ are identified, in the popular western and sub-Saharan African press, with the instigators of terrorism, it readily labels Darfur’s non-Arabs as victims.

From the point of view of the government in Khartoum, the labels are also tactically useful. While insisting that the conflict is tribal and local, it turns the moral loading of the term ‘Arab’ to its advantage, by arguing to fellow members of the Arab League that Darfur represents another attempt by the west (and in particular the U.S.) to demonize the Arab world. In turn this unlocks a regional alliance, in which Darfur stands as a proxy for Iraq and Palestine. Beyond Darfur, the term ‘Arab’ implies global victimhood.

The U.S. determination that Darfur counts as ‘genocide’ plays directly into this polarizing scenario. It is easy for self-identified Arab intellectuals in Khartoum (and elsewhere) to see this finding as (yet another) selective and unfair denigration of Arabs. If, in the confrontation between the Arabs and the Israelis and Americans, Arabs are cast as ‘terrorists’, warranting pre-emptive military action and a range of other restrictions on their rights, then in the context of Africa they are now cast as ‘genocidaires’, similarly placed beyond the moral pale and rendered subject to military intervention and criminal tribunals. Arab editorialists are thus driven both to deny genocide and to accuse the U.S. of double standards, asking why killings in (for example) Congo are not similarly labeled.

In fact, the U.S. State Department was reluctant to conclude that Darfur counted as genocide, and the Secretary of State insisted, almost in the same breath as he announced ‘genocide’, that it would not change U.S. policy. The impetus for the genocide finding did not come from Washington’s neocons, but rather from liberal human rights activists in alliance with the religious right. The origins of this alliance lie in the politics of support for the SPLA (with the Israeli lobby as a discreet marriage broker) and influence trading in Congress, specifically finding an issue (slavery in Southern Sudan) that brought together the Black Caucus, the Israeli lobby, the religious right (for whom Sudan is a crusade) and the human rights groups (who began campaigning on this long before the others). Several of these groups were frustrated that the State Department, under the Republicans, had switched from a policy of regime change in Khartoum to a pursuit of a negotiated peace for Southern Sudan. The war in Darfur was a vindication of their thesis that no business could be done with Khartoum’s evildoers. The atrocities were swift and graphic, and coincided with the tenth anniversary of the preventable genocide in Rwanda, giving remarkable salience to the genocide claim. Congress passed a resolution, and the State Department prevaricated by sending an investigative team, confident that, because there was no evident intent at complete extermination of the target groups, its lawyers would find some appropriately indeterminate language to express severe outrage, short of moral excommunication of Khartoum (with which State was still negotiating) and military intervention. What they had not counted on was that the definition of genocide in the 1948 Convention is much wider than the lay definition and customary international usage, and includes actions that fall well short of a credible attempt at the absolute annihilation of an ethnic or racial group. The State Department’s lawyers, faithful to the much-neglected letter of the law, duly found genocide, and the Secretary of State, doubtless judging that it would be more damaging to ignore his lawyers’ public advice, made the announcement, and then said that this would not affect U.S. policy.

Arrived at more-or-less by accident, the genocide finding has a number of implications. One is that it divides the U.S. from its allies in Europe and Africa. Given that the Sudan peace process is a rare contemporary example of multilateralism (albeit ad hoc) and a rare example of success in U.S. foreign policy (albeit incomplete), it is important that this unity is not fully sundered. At present, it appears that the State Department has succeeded in keeping its policy on track, despite being outflanked by the militants in Washington. (Had the Democrats won in November, we might have faced the ironic situation of a more aggressive U.S. policy.) The damage has been minimized, but some has been done.

Second, the broader interpretation of the Genocide Convention, while legally correct, is one that diplomats have been avoiding for decades, precisely because it creates a vast and indeterminate grey area of atrocity, in which intervention is licensed. A tacit consensus had developed to set the bar higher. Now the U.S. has lowered it, and the Arab critics are correct: if Darfur is genocide, then so are Congo, Burundi, Uganda, Nigeria and a host of others. The neocons do indeed have another weapon in their armoury of unilateral intervention. Arguably, they didn’t need it, already having sufficient reason to intervene on the basis of the September 2002 U.S. National Security Strategy.

Third, for Darfur, the genocide finding is being internalized into the politics of the region. This is occurring in the context of considerable external dependence by Darfur’s political organizations and communities. The political organizations have centered their strategies on external engagement. The Islamists in the Justice and Equality Movement have a strategy for regime change, using the atrocities in Darfur to delegitimize the Khartoum government internationally, thereby bringing it down and bringing themselves to power. The SLA, representing a broad coalition of communities in arms, has yet to develop a full political programme, and is instead largely reacting to events, especially the escalating atrocities since late 2003. It seeks international intervention as a best option, and international monitoring and guarantees as a second best. The communities it represents, many of them either receiving or seeking international assistance, are also orienting their self-representation to the international audience. They have been provided with a simple and powerful language with which to make their case.

The other lenses for analyzing Darfurian identities are too subtle and complex to be of much use for journalists and aid workers. So we are stuck with a polarizing set of ideologically constructed identities, mutually antagonistic. If, as seems likely, these labels become strongly attached, they will hugely complicate the task of reconstructing the social fabric of Darfur or, given the impossibility of returning to the recent past, will obstruct the construction of a new Darfurian identity that stresses the common history of the region and the interdependence of its peoples.

Conclusion

Let me conclude this essay with two main observations.

First, who are the Darfurians? I argue that Darfur has had a remarkably stable continuous identity as a locus for state formation over several centuries, and is a recognizable political unit in a way that is relatively uncommon in Africa. But the incorporation of Darfur into Sudan, almost as an afterthought, has led not only to the economic and political marginalization of Darfurians, but also to the near-total neglect of their unique history and identity. Just as damaging for Darfurians as their socio-political marginalization has been the way in which they have been forced to become Sudanese, on terms that are alien to them. To overcome this, we must move towards acknowledging a politics of three Sudans: North, South and West. It is probably a naive hope, but a recognition of the unique contribution of Darfurians and the inclusive nature of African identity in Darfur could provide a way out of Sudan’s national predicament of undecided identity. Short of this ambition, it is important for Darfurians to identify what they have in common, and undertake the hard intellectual labour of establishing their common identity.

Second, what we see is the gradual but seemingly inexorable simplification, polarization and cementing of identities in a Manichean mould. Within four generations, a set of negotiable identities has become fixed and magnetized. We should not idealize the past: while ethnic assimilation and the administration of the Sultanate may have been relatively benevolent at the centre, at the southern periphery they were extremely and systematically violent. Similarly, while Sufism is generally and correctly regarded as a tolerant and pacific set of traditions, it also gave birth to Mahdism, which inflicted a period of exceptional turmoil and bloodshed on Sudan, including Darfur. Violence has shaped identity formation in the past in Darfur, just as it is doing today. Also, from the days of the Sultanate, external economic and ideological linkages shaped the character of state power and fed its centralizing and predatory nature. Today, the sources and nature of those external influences are different. A ‘global war on terror’ and its correlates influence the political and ideological landscape in which Darfur’s conflict is located, including the very language used to describe the adversaries and what they are doing to one another and the unfortunate civilians who are in the line of fire. The humanitarians and human rights activists, as much as the counter-terrorists and diplomats, are part of this process whereby Darfurian identities are traumatically transformed once again. Hopefully there will be a counter-process, which allows Darfurians to carve out a space in which to reflect on their unique history, identify what they share, and create processes whereby identities are not formed by violence.


Why ‘We’ Lovehate ‘You’

Paul Smith is professor of cultural studies at George Mason University and chair in media studies at the University of Sussex, and author most recently of Millennial Dreams (Verso).

“The reaction to the events of 11 September–terrible as they were–seems excessive to outsiders, and we have to say this to our American friends, although they have become touchy and ready to break off relations with accusations of hard-heartedness.”

‘We’ and ‘you’

Doris Lessing’s rueful but carefully aimed words, published in a post-9/11 issue of Granta magazine where a constellation of writers had been asked to address “What We Think of America,” doubtless have done little to inhibit the progress of American excess in the time since the terrorist attacks. The voices of even the most considerable of foreign intellects were hardly alone in being rendered inaudible by the solipsistic noise that immediately took over the American public sphere after 9/11. All kinds of voices and words, from within America and without, immediately lost standing and forfeited the chance to be heard; they became marginalised or simply silenced, in deference to the media-led straitening of the possible range of things that could be said. And even after the initial shock of 9/11 had receded, it seems that one’s standing to speak depended largely upon the proximity of one’s sentiments to the bellicose sound-bites of the American president, as his administration set sail for retaliatory and pre-emptive violence and promoted a Manichean worldview in which one could only be either uncomplicatedly for or uncomplicatedly against America, even as it conducted illegal, immoral, and opportunistic war.

The peculiar American reaction to 9/11 was always latent in the discursive and cultural habits of this society where, as Lessing pointedly insists, “everything is taken to extremes.” Such extremism is perhaps not often enough considered, she suggests, when ‘we’ try to understand or account for the culture (Lessing, p. 54). I’m not sure that such extremism has gone entirely unnoticed; it is, after all, the motor and at the same time the effect of the sheer quotidian brutality of American social relations. But the sudden shock to the American system delivered by the terrorists certainly facilitated a particular kind of extremism, a particular kind of extreme Americanism.

That extremist Americanism is foundational to this culture. America is, as Jean Baudrillard has said, the only remaining primitive society…a utopia that is in the process of “outstripping its own moral, social and ecological rationale” (1988, p. 7). And this is, moreover, a primitivism awash with its own peculiar fundamentalisms–not quite the fundamentalisms that America attacks elsewhere in a kind of narcissistic rage, but fundamentalisms that are every bit as obstinate. This is, after all, a society where public discourse regularly pays obeisance to ancient texts and their authors, to the playbook of personal and collective therapy, to elemental codes of moral equivalency, and so on. And this is to leave aside the various Christian and populist fundamentalisms that are perhaps less respectable but nonetheless have deep influence on the public sphere. But its perhaps most respectable fundamentalism–always the most important one, and now more than ever in this age of globalisation–is a capitalist fundamentalism, on which the society battens with deep devotion. Thus it is a primitive society in a political-economic sense too: a society completely devoted to the upkeep of its means of consumption and production, deeply dependent upon the class effects of that system, and ideologically dependent upon ancient authorities, which remain tutelary and furnish the ethical life of the culture.

It is to these kinds of fundamentalism that America appealed after 9/11, by way of phrases such as ‘our values,’ ‘who we are,’ ‘the American way of life,’ and so on; or when Mayor Giuliani and others explicitly promoted consumption as a way of showing support for America. None of that was perhaps terribly surprising, however disturbingly crass it might have been, and it was clear how necessary it was to the production of the forthcoming war economy in the USA. But the construction of such extremist platitudes (endlessly mediatised, to be sure) was surprisingly successful in effecting the elision of other kinds of speech in this nation where the idea of freedom of speech is otherwise canonised as a basic reflex ideology.

But (as de Tocqueville was always fond of repeating) this is also a nation where dissidents quickly become pariahs, strangers. The voices, the kinds and forms of speech that were silenced or elided in the aftermath of 9/11 are, of course, the dialectical underbelly to the consolidation of a fundamentalist sense of America, and to the production of an excessive cultural ideology of shared values. They go some way to constituting, for the sake of what I have to say here, a ‘we’–strangers both within the land and beyond it. This is not, of course, a consistent ‘we,’ readily located either beyond or within the borders of the USA, one that could be called upon to love or hate or to lovehate some cohesive ‘you’ that until recently sat safely ensconced inside those same borders. It goes without saying that nobody within or without those boundaries can be called upon individually to conform seamlessly, or closely, or for very long, to a discourse of putative national identity. So in the end there is no living ‘you’ or ‘we’ here, but only a vast range of disparate and multifarious individuals, living in history and in their own histories, imperfectly coincident with the discursive structure of “America.”

And yet imaginary relations are powerful. The ‘you’ with a sense of belonging to, or of owning, that fundamentalist discourse has for the time being asserted or constructed itself qua America; but it is of course unclear who ‘you’ really are. It has never been clear to what extent a ‘you’ could be constructed on the ground by way of ideological and mediatised pressure. It’s certainly unclear how much the mainstream surveys could tell us, conducted as they are through the familiar corporate, university, and media channels. And it would be grossly simplistic to try to ‘read’ the nation’s ideology through its mediatised messages and simply deduce that people believe (in) them.[1. This is the error of otherwise worthy work like Sardar, Z. & M.W. Davies (2002), Why Do People Hate America? (Icon Books).] So the question of “who are ‘you’?” remains opaque in some way. At the same time, there is a discursive space where the everyday people that American subjects are coincide with the ‘you’ that is now being promulgated as fundamental America.

By the same token, there is also some kind of ‘we’ that derives from the fact that the identities and the everyday lives of so many outside the USA are bound up with the USA, with what the USA does and says, and with what it stands for and fights for. The ways in which ‘our’ identities are thus bound up are different for some than for others, obviously, and ‘we’ are all in any case different from one another. I share nothing significant, I think, with the perpetrators of the attacks on the Trade Towers or on the tourists in Bali. Some of us find ourselves actually inside the boundaries of the USA. That’s where I speak from right now, a British subject but one whose adult life has been shaped by being an alien inside America and thus to a large extent shaped by ‘you.’ And there are many in similar positions, some killed in the WTC attacks, others Muslims, others illegals, and so on. And there are, of course, also the internal ‘dissenters’–those who speak and find ways to be heard outside the channels that promote the construction of a ‘you.’ All of ‘us’, then, inside and outside the borders of the US, are not ‘you’–a fact that ‘you’ make clear enough on a daily basis.

Dialectics

The ‘we’ is in fact a construct of the very ‘you’ I have just been talking about. This ‘we’ is generated through the power of the long, blank gaze emanating from the American republic that dispassionately, without empathy and certainly without love, refuses to recognise most of the features of the world laid out at its feet; a gaze that can acknowledge only that part of the world which is compliant and willing to act as a reservoir of narcissistic supply to the colossus.

Appropriately (in light of the events of 9/11, certainly, and probably before that) it is to the World Trade Center that Michel de Certeau pointed when he wanted to describe the ideological imposition that such a gaze exerts over the inhabitants of a space. In his famous essay, “Walking in the City” (1984), he begins his disquisition from the 110th floor of the World Trade Center, meditating on the ichnographic gaze that the tower (then) enabled, looking down over a city that becomes for him a “texturology” of extremes, “a gigantic rhetoric of excess in both expenditure and production” (p. 91). That gaze is for him essentially the exercise of a systematic power, or, in other words, a structure. Its subjects are the masses in the streets, all jerry-building their own relation to that structure as they bustle and move around the spaces of this excessive city.

De Certeau doesn’t say so, but one could suspect that he reads the tower and the view it provides by reference to the mystical eye sitting atop the pyramid on the US dollar bill–another trope in American fundamentalist discourse, the god who oversees ‘your’ beginnings. But at any rate, it’s hard not to be struck in his account by the way the relationship between the ichnographic and systematic gaze and the people below replicates a far more Hegelian dialectic: that of master and slave. De Certeau’s account of power relations never quite manages to rid itself of that Hegelian, or even Marxist, sense that the grids of power here are structural rather than untidily organic in some more Foucauldian way. The gaze he interprets, then, is in that sense the colossal gaze of the master, surveying the slaves. It is the gaze of a ‘you’ for whom the real people, foraging below and finding their peculiar ways of living within the ichnographic grids that are established for them, can be seen only as subjects and judged only according to their conformity. And when the structure feels itself threatened by the agitation and even independence of its subjects below (as, in de Certeau’s analysis, the city structure begins to decay and its hold on the city dwellers weakens), it tries to gather them in again by way of narratives of catastrophe and panic (p. 96). One boon of the 9/11 attacks for the colossus was of course the opportunity to legitimise such narratives.

I cite de Certeau’s dense essay in part because it has been strangely absent from the many efforts of sociological and cultural studies to ‘re-imagine’ New York after 9/11; one might have expected a text as important as this one to have something to teach about the intersections of power and control in a modern city. But I cite it more for the reminder it offers–beginning from the same place, as it were, as the terrorist attacks themselves–of the way that the spatial structure of the city “serves as a totalizing and almost mythical landmark for socioeconomic and political strategies.” Part of the lesson of this conceit is the knowledge that in the end the city is “impossible to administer” because of the “contradictory movements that counterbalance and combine themselves outside the reach of panoptic power” (p. 95). De Certeau’s New York City and its power grid act as a reasonable metaphor for the way in which ‘our’ identities are variously but considerably construed in relation to ‘you.’ ‘Your’ identity is the master’s identity in which ‘we’ dialectically and necessarily find ‘our’ own image, ‘our’ reflection, and ‘our’ identity. The master’s identity is inflected towards the solipsism of self-involvement and entitlement while emanating a haughty indifference to ‘us.’

The situation is familiar, then. In the places, histories, and structures that ‘we’ know about, but of which ‘you’ always contrive to be ignorant, the situation is historically marked by the production of antagonism and ressentiment. What the master cannot see in the slave’s identity and practice is that ressentiment derives not from envy or covetousness but from a sense of injustice, a sense of being ignored, marginalised, disenfranchised, and undifferentiated. That sense of injustice can only be thickened in relation to an America whose extremist view of itself depends upon the very discourse of equality and democracy that the slave necessarily aspires to. Ressentiment is, in that sense, the ever-growing horror that the master cannot live up to the very ideals he preaches to ‘us.’

It is this kind of ressentiment that Baudrillard, in his idiosyncratic (but nonetheless correct) way, installs at the heart of his short and profound analysis of the events of 9/11. Whatever else can be located in the way of motivation for the attacks, he suggests, they represented an uncomplicated form of ressentiment whose “acting-out is never very far away, the impulse to reject any system growing all the stronger as it approaches perfection or omnipotence” (2002, p. 7). Moreover, Baudrillard is equally clear about the problem with the ‘system’ that was being attacked: “It was the system itself which created the objective conditions for this retaliation. By seizing all the cards for itself, it forced the Other to change the rules” (p. 9). In a more prosaic manner, Noam Chomsky notes something similar when he says that the 9/11 attacks marked a form of conflict qualitatively different from what America had seen before, not so much because of the scale of the slaughter, but more simply because America itself was the target: “For the first time the guns have been directed the other way” (2001, pp. 11-12). Even in the craven American media there was a glimmer of understanding about what was happening; the word ‘blowback’ that floated around for a while could be understood as a euphemism for this new stage in a master/slave narrative.

As the climate in America since 9/11 has shown very clearly, such thoughts are considered unhelpful for the construction of a ‘you’ that would support a state of perpetual war, and noxious to the narratives of catastrophe and panic that have been put into play to round up the faithful. The notion, in any case, that ressentiment is not simply reaction, but rather a necessary component of the master’s identity and history, would always be hard to sell to a ‘you’ that narcissistically cleaves to “the impossible desire to be both omnipotent and blameless” (Rajagopal, p. 175). This is a nation, after all, that has been chronically hesitant to face up to ressentiment in its own history, and mostly able to ignore and elide the central antagonisms of class. This is and has been a self-avowed ‘classless’ society, unable therefore to acknowledge its own fundamental structure, its own fundamental(ist) economic process (except as a process whereby some of its subjects fail to emulate the ability of some of the others to take proper advantage of level playing fields and equality of opportunity). For many of ‘us’ it has been hard to comprehend how most Americans manage to remain ignorant about class and ignorant indeed of their own relationship to capital’s circuits of production and consumption. At least it’s hard to understand how such ignorance can survive the empirical realities of America today. The difficulty was by no means eased when it became known that families of 9/11 victims would be paid compensation according to their relatives’ value as labour, and this somehow seemed unexceptionable to ‘you.’ The blindness of the colossal gaze as it looks on America itself is replicated in the gaze outward as it looks on ‘us.’ This is a nation largely unseeing, then, and closed off to the very conditions of its own existence–a nation blindly staring past history itself.

“Events are the real dialectics of history,” Gramsci says, “decisive moments in the painful and bloody development of mankind” (p. 15), and 9/11, the only digitised date in world history, can be considered an event that could even yet be decisive. It would be tempting, of course, to say that once the ‘end of history’ had supposedly abolished all Hegelian dialectics–wherein ‘our’ identities would be bound up with ‘yours’ in an optical chiasmus of history–it was inevitable that history itself should somehow return to haunt such ignorance of historical conditions. Yet, from 9/11 and through the occupation of Iraq, America appears determined to remain ex-historical and seems still unable to recognise itself in the face of the Other–and that has always made, and will again make, magisterial killing all the easier.

Freedom, equality, democracy

If this dialectic of the ‘you’ and the ‘we’ can claim to represent anything about America’s outward constitution, it would necessarily find some dialectical counterpart in the inward constitution of this state. At the core of the fundamental notions of ‘the American way of life’ that ‘you’ rallied around after 9/11 and that allow ‘you’ to kill Iraqis in order to liberate them, there reside the freighted notions of freedom, equality and democracy that, more than a century and a half ago, de Tocqueville deployed as the central motifs of Democracy in America. De Tocqueville’s central project is hardly akin to my project here, but it wouldn’t be far-fetched to say that his work does in fact wage a particular kind of dialectical campaign. That is, Democracy in America plots the interaction of the terms freedom and equality in the context of the new American republic that he thought should be a model for Europe’s emerging democracies. His analysis of how freedom, equality, and democratic institutions interact and, indeed, interfere with one another still remains a touchstone for understanding the peculiar blindnesses that characterise America today. One of its main but largely under-appreciated advantages is that it makes clear that freedom, equality and democracy are by no means equivalent to each other–and one might even say, they are not even preconditions for one another, however much they have become synonyms in ‘your’ vernacular. While de Tocqueville openly admires the way in which America instantiates those concepts, he is endlessly fascinated by exactly the untidiness and uncertainty of their interplay. That interplay entails the brute realities of everyday life in the culture that is marked for him by a unique dialectic of civility and barbarity. In the final analysis de Tocqueville remains deeply ambivalent about the state of that dialectic in America, and thus remains unsure about the nature and future of the civil life of America.

Unsurprisingly, his ambivalence basically devolves into the chronic political problem of the relationship of the individual to the state. One of the effects of freedom and equality, he suggests, is the increasing ambit of state functions and an increasing willingness on the part of subjects to allow that widening of influence. This effect is severe enough to provoke de Tocqueville to rather extreme accounts of it. For example, his explanation of why ordinary citizens seem so fond of building numerous odd monuments to insignificant characters is that this is their response to the feeling that “individuals are very weak; but the state…is very strong” (p. 443). His anxiety about the strength of such feelings is apparent when he discusses the tendency of Americans to elect what he calls “tutelary” government: “They feel the need to be led and the wish to remain free” and they “leave their dependence [on the state] for a moment to indicate their master, and then reenter it” (p.664).

This tendency derives, he says, from “equality of condition” in social life and it can lead to a dangerous concentration of political power–the only kind of despotism that young America had to fear. It would probably not be too scandalous to suggest that de Tocqueville’s fears had to a great degree been realised by the end of the 20th century. And the current climate, where the “tutelary” government threatens freedom in all kinds of ways in the name of a war that it says is not arguable, could only be chilling to de Tocqueville’s sense of the virtues of democracy. The (re)consolidation of this kind of tutelary power is figured for me in the colossal gaze that I’ve talked about, a gaze that construes a ‘you’ by way of narratives of catastrophe and panic while extending the power of its gaze across the globe by whatever means necessary.

But at the centre of this dialectic of freedom and equality, almost as their motor, de Tocqueville installs the idea that American subjects are finally “confined entirely within the solitude of their own heart,” that they are “apt to imagine that their whole destiny is in their own hands,” and that “the practice of Americans leads their minds to fixing the standards of judgement in themselves alone” (p. 240-241). It’s true that for de Tocqueville this kind of inflection is not unmitigatedly bad: it is, after all, a condition of freedom itself. But nonetheless the question remains open for him: whether or not the quotidian and self-absorbed interest of the individual could ever be the operating principle for a successful nation. He is essentially asking whether the contractual and civil benefits of freedom can in the end outweigh the solipsistic and individualistic effects of equality. Or, to put the issue differently, he is asking about the consequences of allowing a certain kind of narcissism to outweigh any sense of the larger historical processes of the commonwealth–a foundational question, if ever there was one, in the history of the nation.[2. A classic, but largely ignored, statement of American history in these terms is William Appleman Williams (1961), The Contours of American History (World Publishing Company).]

Jean Baudrillard’s America, a kind of ‘updating’ of de Tocqueville at the end of the 20th century, is instructive for the way that it assumes that de Tocqueville’s questions are still alive (or at least, it assumes that Americans themselves have changed very little in almost two hundred years [p. 90]). Baudrillard is in agreement with de Tocqueville that the interplay of freedom and equality, and their relation to democratic institutions, is what lies at the heart of America’s uniqueness. He’s equally clear, however, that the 20th century has seen, not the maintenance of freedom (elsewhere he is critical of the way that tutelary power has led to regulation and not freedom [2002]), but the expansion of the cult of equality. What has happened since de Tocqueville is the “irrepressible development of equality, banality, and in-difference” (p. 89). In the dialectic of freedom and equality, such a cult necessarily diminishes the extent of freedom, and this is clearly a current that the present US regime is content to steer. But Baudrillard, like de Tocqueville before him, remains essentially enthralled by the “overall dynamism” in that process, despite its evident downside; it is, he says, “so exciting” (p. 89). And he identifies the drive to equality rather than freedom as the source of the peculiar energy of America. In a sense, he might well be right: certainly it is this “dynamism” that ‘we’ love, even as ‘we’ might resist and resent the master’s gaze upon which it battens.

Love and contradiction

The “dynamism” of American culture has been sold to ‘us’ as much as to ‘you’–perhaps even more determinedly in some ways. Brand America has been successfully advertised all around the world, in ways and places and to an extent that most Americans are probably largely unaware of. While Americans would probably have some consciousness of the reach of the corporate media, or of Hollywood, and necessarily some idea of the reach of other brands such as McDonald’s, most could not have much understanding of how the very idea of America has been sold and bought abroad. For many of ‘us,’ of course, it is the media and Hollywood that have provided the paradigmatic images and imaginaries of this dynamic America. It is in fact remarkable how many of the writers in the issue of Granta in which Doris Lessing appears mention something about the way those images took hold for them, in a process of induction that ‘we’ can be sure most Americans do not experience reciprocally.

The dynamism of that imaginary America is a multi-faceted thing, imbuing the totality of social relations and cultural and political practices. It begins, maybe, with a conveyed sense of the utter modernity of American life and praxis, a modernity aided and abetted by the vast array of technological means of both production and consumption. The unstinting determination of the culture to be mobile, to be constantly in communicative circuits and to be open day and night, along with the relative ease and efficiency of everyday life and the freedom and continuousness of movement, all combine to produce a sense of a culture that is endemically alive and happening. This is ‘our’ sense of an urban America, at least, with its endless array of choices and the promised excitement and eroticism of opportunity. The lure of that kind of urbanity was always inspissated by the ‘melting pot’ image of the USA, and is further emphasised in these days of multiculturalism and multi-ethnicity. Even beyond the urban centres, of which there are so many, this dynamic life can be taken for granted, and the realm of the consumer and the obsessive cheapness of that realm reflect the concomitant sense of a nation fully endowed with resources–material and human–and with a standard of living enjoyed by most people but achieved by very few outside the USA–even these days, and even in the other post-industrial democracies. ‘We’ can also see this vitality of everyday life readily reflected in the institutional structures of the USA: for instance, other ways in which ‘we’ are sold America include the arts, the sciences, sports, or the educational system, and ‘we’ derive from each of those realms the same sense of a nation on the move. As ‘our’ American friends might say, what’s not to like?

Beyond the realms of culture and everyday life, ‘we’ are also sold the idea of America as a progressive and open political system the like of which the world has never seen before. The notions that concern de Tocqueville so much are part of this, of course: freedom, equality, and democratic institutions are the backbone of ‘our’ political imaginary about the USA. In addition, ‘we’ are to understand America as the home of free speech, freedom of the press and media, and all the other crucial rights that are enshrined in the Constitution and the Bill of Rights. Most importantly, ‘we’ understand those rights to be a matter for perpetual discussion, fine-tuning, and elaboration in the context of an open framework of governance, legislation, and enforcement. Even though those processes are immensely complex, ‘we’ assume their openness and their efficacy. Even the American way of doing bureaucracy seems to ‘us’ relatively smooth, efficient and courteous as it does its best to emulate the customer-seeking practices of the service industries. And all this operates less in the service of freedom and more, as I’ve suggested, in the service of “equality of condition”–and ultimately in the service of a meritocratic way of life that even other democratic nations can’t emulate. And on a more abstract level, I was struck recently by the words of the outgoing Irish ambassador to the US, Sean O’Huiginn, who spoke of what he admired in the American character: the “real steel behind the veneer of a casual liberal society…the strength and dignity [and] good heartedness of the people” and the fact that America had “brought real respect to the rule of law.”[3. “Departing Irishman Mulls ‘Glory of America’,” Washington Post 12 July 2002.]

These features, and I’m sure many others, are what go to constitute the incredibly complex woof and weave of ‘our’ imaginaries of the United States. The reality of each and any of them, and necessarily of the totality, is evidently more problematic. The words of another departing visitor are telling: “The religiosity, the prohibitionist instincts, the strange sense of social order you get in a country that has successfully outlawed jaywalking, the gluttony, the workaholism, the bureaucratic inflexibility, the paranoia and the national weakness for ill-informed solipsism have all seemed very foreign.”[4. Matthew Engel, “Travels with a trampoline,” The Guardian 3 June, 2003.] But those imaginaries are nonetheless part of ‘our’ relation to America–sufficiently so that in the 9/11 aftermath the question so often asked by Americans, “Why do they hate us?”, seemed to me to miss the point quite badly. That is, insofar as the ‘they’ to whom the question refers is a construct similar to the ‘we’ I’ve been talking about, ‘we’ don’t hate you, but rather lovehate you.

Nor is it a matter, as so much American public discourse insists, of ‘our’ envying or being jealous of America. Indeed, it is another disturbing symptom of the narcissistic colossus to constantly imagine that everyone else is jealous or envious. Rather, ‘we’ are caught in the very contradictions in which the master is caught. For every one of the features that constitute ‘our’ imaginary of dynamic America, ‘we’ find its underbelly; or ‘we’ find the other side of a dialectic–the attenuation of freedom in the indifferentiation of equality, or the great barbarity at the heart of a prized civility, for instance. Equally, accompanying all of the achievements installed in this great imaginary of America, there is a negative side. For instance, while on the one hand there is the dynamic proliferation of technologies of communication and mobility, there is on the other hand the militarism that gave birth to much of the technology, and an imperious thirst for the oil and energy that drive it. And within the movement of that dialectic–one, it should be said, whose pre-eminence in the functioning of America has been confirmed once more since 9/11–lies the characteristic forgetting and ignorance that subvents the imaginary. That is, such technologies come to be seen only as naturalised products of an ex-historical process, and their rootedness in the processes of capital’s exploitation of labour is more or less simply elided. And to go further, for all the communicative ease and freedom of movement there is the extraordinary ecological damage caused by the travel system. And yet this cost is also largely ignored–by government and people alike–even while the tension between capital accumulation and ecological sustainability comes to seem more and more the central contradiction of American capitalism today.[5. See Ellen Wood (2002), “Contradictions: Only in Capitalism?” in Socialist Register 2002 (Monthly Review Press).]

One could easily go on: the point is that from every part of the dynamic imaginary of America an easy contradiction flows. Despite, for example, the supposed respect for the rule of law, American citizens experience every day what Baudrillard rightly calls “autistic and reactionary violence” (1988, p. 45); and the ideology of the rule of law does not prevent the US from opposing the World Court, regularly breaking treaties, picking and choosing which UN resolutions need to be enforced, or illegally invading and occupying another sovereign nation. The imaginary of America, then, that ‘we’ are sold–and which I’m sure ‘you’ believe–is caught up in these kinds of contradictions–contradictions that both enable it and produce its progressive realities. These contradictions in the end constitute the very conditions of this capitalism that is fundamentalist in its practice and ideologies.

So, ‘our’ love for America, either for its symbols and concepts or for its realities, cannot amount to some sort of corrosive jealousy or envy. It is considerably more complex and overdetermined than that. It is, to be sure, partly a coerced love, as we stand structurally positioned to feed the narcissism of the master. And it is in part a genuine admiration for what I’m calling for shorthand the “dynamism” of America. But it is a love and admiration shot through with ressentiment, and in that sense it is ‘about’ American economic, political, and military power and the blind regard that those things treat ‘us’ to. It is the coincidence of the contradictions within America’s extremist capitalism, the non-seeing gaze of the master, and ‘our’ identification with and ressentiment towards America that I’m trying to get at here. Where those things meet and interfere is the locus of ‘our’ ambivalence towards ‘you,’ to be sure, but also the locus of ‘your’ own confusion and ignorance about ‘us.’ But the ‘yea or nay,’ positivist mode of American culture will not often countenance the representation of these complexities; they just become added to the pile of things that cannot be said, especially in times of catastrophe and panic.

What is not allowed to be said

It’s easy enough to list the kinds of things that could not be said or mentioned after 9/11, or enumerate the sorts of speech that were disallowed, submerged, or simply ignored as the narratives of panic and catastrophe set in to re-order ‘you’ and begin the by now lengthy process of attenuating freedom.

What was not allowed to be said or mentioned: President Bush’s disappearance or absence on the morning of the attacks; contradictions in the incoming news reports about not only the terrorist aeroplanes but also any putative defensive ones (it’s still possible to be called a conspiracy theorist for wondering about the deployment of US warplanes that day, as Gore Vidal discovered when he published such questions in a British newspaper);[6. Gore Vidal, “The Enemy Within,” The Observer 27 October 2002.] the idea that the attacks would never have happened if Bush had not become president; and so on. Questions like those will not, one assumes, be addressed by the governmental inquiry into 9/11, especially many months later when the complexities of 9/11 have been obliterated by the first stages of the perpetual war that Bush promised. In addition, all kinds of assaults were made on people who had dared say something “off-message”: comedians lost their jobs for saying that the terrorists were not cowards, as Bush had said they were, if they were willing to give up their lives; college presidents and reputable academics were charged with being the weak link in America’s response to the attacks; and there were many other, varied incidents of the sort, including physical attacks on Muslims simply for being Muslim. And in the many months after the attacks, lots of questions and issues are still passed over in silence by the media and therefore do not come to figure in the construction of a free dialogue about ‘your’ response to the event.

Many of ‘us’ were simply silenced by the solipsistic “grief” (one might like to have that word reserved for more private and intimate relationships) and the extreme shock of Americans around us. David Harvey talks about how impossible it was to raise a critical voice about the role that bond traders and their ilk in the towers might have had in the creation and perpetuation of global social inequality (p. 59). Noam Chomsky was rounded upon by all and sundry for suggesting, in the way of Malcolm X, that the chickens had come home to roost. The last thing that could be suggested was the idea that, to put it bluntly, these attacks were not unprovoked; anybody who thought there could be a logic to them beyond their simple evilness was subjected to the treatment Lessing describes at the head of this piece. The bafflement that so many of ‘you’ expressed at the idea that someone could do this deed, and further that not all of ‘us’ were necessarily so shocked by it, was more than just the emotional reaction of the moment.

This was an entirely predictable inflection of a familiar American extremism, soon hardening into a defiant–and often reactionary–refusal to consider any response other than the ones ‘you’ were being offered by political and civic leaders and the media. Empirical and material, political and economic realities were left aside, ignored, not even argued against, but simply considered irrelevant and even insulting to the needs of a “grief” that suddenly became national–or rather, that suddenly found a cohesive ‘you.’ And that “grief” turned quickly into a kind of sentimentality or what Wallace Stevens might have called a failure of feeling. But much more, it was a failure, in the end, of historical intelligence. A seamless belief that America can do no wrong and a hallowed and defiant ignorance about history constitute no kind of response to an essentially political event. Even when the worst kinds of tragedy strike, an inability to take any kind of responsibility or feel any kind of guilt is no more than a form of narcissistic extremism in and of itself.[7. A longer version of this article–forthcoming in Ventura, P. (ed.), Circulations: ‘America’ and Globalization and planned to be part of my forthcoming Primitive America (U. Minnesota)–elaborates on the concept of narcissism that I have been deploying here. I distinguish my use from that of Christopher Lasch in The Culture of Narcissism in order to be able to describe a narcissistic (and primitive) structuration of America, rather than imputing narcissistic disorders to individuals or, for that matter, to classes.]

Symbols

On 9/11 there was initially some media talk about how the twin towers might have been chosen for destruction because of their function as symbols of American capitalist power in the age of globalisation. David Harvey suggests that in fact it was only in the non-American media that such an understanding was made available, and that the American media talked instead about the towers simply as symbols of American values, freedom, or the American way of life (p. 57). My memory, though, is that the primary American media, in the first blush of horrified reaction, did indeed talk about the towers as symbols of economic might, and about the Pentagon as a symbol of military power. But like many other things that could not be said, or could no longer be said at that horrible time, these notions were quickly elided. Strangely, the Pentagon attack soon became so un-symbolic as to be almost ignored. The twin towers in New York then became the centre of attention, perhaps because they were easier to parlay into symbols of generalised American values than the dark Pentagon, and because the miserable deaths of all those civilians were easier to identify with than those of the smaller number of military workers in Washington.

This was a remarkable instance of the way an official line can silently, almost magically, gel in the media. But more importantly, it is exemplary of the kind of ideological movement that I’ve been trying to talk about in this essay: a movement of obfuscation, essentially, whereby even the simplest structural and economic realities of America’s condition are displaced from discourse. As Harvey suggests, the attacks could hardly be mistaken for anything but a direct assault on the circulatory heart of financial capital: “Capital, Marx never tired of emphasizing, is a process of circulation….Cut the circulation process for even a day or two, and severe damage is done…What bin Laden’s strike did so brilliantly was [to hit] hard at the symbolic center of the system and expose its vulnerability” (p. 64-5).

The twin towers were a remarkable and egregious architectural entity, perfectly capable of bearing all kinds of allegorical reading. But there surely can be no doubt that they were indeed a crucial “symbolic center” of the processes through which global capitalism exercises itself. Such a reading of their symbolism is more telling than Wallerstein’s metaphorical understanding that “they signalled technological achievement; they signalled a beacon to the world” (2001). And it is perhaps also more telling than (though closer to) Baudrillard’s understanding of them: “Allergy to any definitive order, to any definitive power is–happily–universal, and the two towers of the World Trade Center were perfect embodiments, in their very twinness, of that definitive order” (2002, p.6). It is certainly an understanding that not only trumps, but exposes the very structure of the narcissistic reading of them as symbols of ‘your’ values and ‘your’ freedom.

That narcissism was, however, already there to be read in these twin towers that stared blankly at each other, catching their own reflections in an endless relay. They were, that is, not only the vulnerable and uneasy nerve-centres of the process of capital circulation and accumulation; they were also massive hubristic tributes to the self-reflecting narcissism they served. Perhaps it was something about their arrogant yet blank, unsympathetic yet entitled solipsism that suggested them as targets. The attacks at the very least suggested that someone out there was fully aware of the way that the narcissist’s identity and the identity of those the narcissist overlooks are historically bound together. It’s harder to discern whether those people would have known, too, that the narcissist is not easy to cure, however often targeted; or whether they predicted or could have predicted, and perhaps even desired, the normative retaliatory rage that their assault would provoke.

What ‘we’ know, however, is that ‘we’ cannot forever be the sufficient suppliers of the love that the narcissist finds so necessary. Indeed, ‘we’ know that it is part of the narcissistic disorder to believe that ‘we’ should be able to. So long as the disorder is rampant ‘we’ are, in fact, under an ethical obligation not to be such a supplier. In that sense (and contrary to all the post 9/11 squealing about how ‘we’ should not be anti-American), ‘we’ are obliged to remind the narcissist of the need to develop “the moral realism that makes it possible for [you] to come to terms with existential constraints on [your] power and freedom” (Lasch p. 249).

But Christopher Lasch’s final words in a retrospective look at his famous work, The Culture of Narcissism, are not quite enough. This would be to leave the matter at the ethical level, hoping for some kind of moral conversion–and this is not an auspicious hope where the narcissistic master is concerned. At the current moment when we all–‘we’ and ‘you’–have seen the first retaliation of the colossus and face the prospect of extraordinary violence on a world scale, too much discussion and commentary (both from the right and the left) remains at the moral or ethical levels. This catastrophic event and the perpetual war that has followed it have obviously, in that sense, produced an obfuscation of the political and economic history that surrounds them and of which they are part. Such obfuscation serves only the master and does nothing to satisfy the legitimate ressentiment of a world laid out at the master’s feet. At the very least, in the current conjuncture, ‘we all’ need to understand that the fundamentalisms and extremisms that the master promulgates, and to which ‘you’ are in thrall, are not simply moral or ethical, or even in any sense discretely political; they are just as much economic, and it is that aspect of them that is covered over by the narcissistic symptoms of a nation that speaks through and as ‘you.’

References

Baudrillard, J. (1988), America (Verso).

Baudrillard, J. (2002), The Spirit of Terrorism (Verso).

Chomsky, N. (2001), 9-11 (Seven Stories Press).

De Certeau, M. (1984), The Practice of Everyday Life (U. California).

De Tocqueville, A. (2000), Democracy in America (U. Chicago).

Gramsci, A. (1990), Selections from Political Writings, 1921-1926 (U. Minnesota).

Harvey, D. (2002), “Cracks in the Edifice of the Empire State,” in Sorkin and Zukin, eds., After the World Trade Center (Routledge), 57-68.

Lasch, C. (1991), The Culture of Narcissism (Norton).

Lessing, D. (2002), Untitled article, Granta 77 (spring 2002), 53-4.

Rajagopal, A. (2002), “Living in a State of Emergency,” Television and New Media, 3:2, 173 ff.

Wallerstein, I. (2001), “America and the World: The Twin Towers as Metaphor,” http://www.ssrc.org/sept11/essays/wallerstein.htm

Expert Economic Advice and the Subalternity of Knowledge: Reflections on the Recent Argentine Crisis

Ricardo D. Salvatore is professor of modern history at Universidad Torcuato di Tella in Buenos Aires. He is author of Wandering Paysanos: State Order and Subaltern Experience in Buenos Aires During the Rosas Era (1820-1860) and coeditor of Crime and Punishment in Latin America: Law and Society since Late Colonial Times and Close Encounters of Empire: Writing the Cultural History of U.S.-Latin American Relations, all published by Duke University Press.

Act 1. Taking the Master’s Speech as Proper

President Duhalde and Minister Remes Lenicov went to Washington and Monterrey to speak with the key figures in the US Treasury, the IMF and the World Bank. They carried with them a message: that with an impending hyperinflation it was necessary to check the devaluation of the peso; that, to strengthen the reserves of the central bank and avoid the collapse of financial institutions, IMF funding was needed; that, for the moment, provincial and federal deficits were difficult to control. After some preliminary talks, their arguments crashed against a wall of technical reason. All their arguments were rejected as outmoded or simply erroneous. And they listened to a new and unexpected set of arguments: the Fund and the Treasury would accept no more false promises from Argentina. In the view of their experts, the Argentine representatives presented no credible and “sustainable plan” for macro-economic stability and growth. Instead of promises of financial assistance, they issued a warning: abandoning free-market reforms and altering contracts and commitments would lead Argentina down an isolationist path that would do more damage to its people. Half-persuaded by these strong arguments, President Duhalde and Minister Remes Lenicov returned to Argentina and started to speak with the voice of the IMF and the US Treasury. Hence, they started to spread the new gospel: a free-floating exchange rate, inflation targeting, and macro-economic consistency. Back in Buenos Aires, President Duhalde explained to the TV audience: sometimes a father, in order to keep feeding his family, has to “bend his head” and accept, against his best judgement, the truth of the Other (to take its “bitter medicine”). This subaltern gesture, of course, contradicted his prior statements, because the prescribed policies fundamentally questioned the validity of the belief system that united the main partners of the governing alliance (Peronistas and Radicales).

After a long negotiation that seemed to go nowhere, Argentine functionaries found out that the rhetoric of the knowledgeable empire was harsh. Conservative voices were beginning to argue that neither the IMF nor the US should continue to pour money into a corrupt and unviable economy. Secretary of the Treasury Paul O’Neill spoke of Argentina as a land of waste where the savings of American “plumbers and carpenters” could be easily washed away by wrong policy decisions.[1. Many articles and commentaries picked up on O’Neill’s phrase. See for example The Economist (2001).] The suspicion of the American worker was the basis of the duress of the new US policy towards Unwisely Overindebted Countries (UOCs). In this conservative viewpoint, popular common sense taught government to be wary of international bankers and their advisers. They would willingly waste other people’s money in order to save their own interests.

Act 2. Two Ideologies In Conflict

A long political and ideological conflict brought about the fall of President De la Rua on December 20, 2001. With him fell the so-called modelo económico implemented first by Minister Domingo Cavallo during the Menem administration (free convertibility between the peso and the dollar, free international capital mobility, government without discretionary monetary policy, privatization of government enterprises, opening of the economy to foreign competition). The defeat of De la Rua-Cavallo was read as the end of the hegemony of a way of understanding economic policy and its effect on economic development (the so-called “Washington Consensus” that in Argentina translated into a “convertibility consensus”). The politico-ideological terrain has long been divided between supporters of economic integration and free-market policies and supporters of regulation, protectionism, state-led development, and re-distributionist policies. De la Rua and Cavallo tried to defend the former model until the end, while their successors (Rodriguez Saa for a week, and Duhalde) promised policies that seemed to satisfy the expectations of the latter camp. Thus, the events of December 19-20 anticipated the re-emergence and potential hegemony of what, for simplicity, I shall call “nationalist-populist reason” at the expense of “neo-liberal reason” and the Washington consensus.

Rodríguez Saa’s announcement of the Argentine default, his promise of one million new jobs, and his televised embrace with union leaders gave Argentines the impression that a great REVERSAL was under way. The same could be said of President Duhalde. His devaluation in early January, his promises to distribute massive unemployment compensations, his announcement of a new “productivist alliance” against the interests of speculators, foreign banks and privatized utilities, and the interventionist policies to “compensate” the effects of this devaluation all created the impression that things would turn around. That, at last, the people had defeated the modelo económico and its rationale and that, consequently, it was time for a redistribution of economic gains and for strong government. The new productivist alliance–it seemed–would increase employment, re-industrialize the country, and keep in check the rapacity of foreign companies. If this was so, the winners of the past “neo-liberal age” would have to accept losses for the benefit of the poor, the unemployed, and the economic renaissance of the interior provinces. As it has turned out (so far), this public perception was disappointed by the sudden “conversion” of their politicians to the refurbished Washington Consensus.

Act 3. New Faces on TV

For the moment at least, mainstream economic experts (most of them trained in top US universities) have disappeared from the TV screens. Although they continue to be invited to join discussion panels on news-and-commentary TV programs, many of those associated with the words “liberal,” “neo-liberal” or “ortodoxo” refuse to participate in these programs. Their space has been occupied by heterodox economists–some of them neo-Keynesian, others simply expressing the views of the unions or industries they represent, others speaking for opposition and leftist parties, still others carrying the credential of having participated in the cabinets of governments of the 1980s or before. Unlike the US-trained experts, these other economists had studied in local universities and display a technical expertise and common wisdom that some might find insufficient or old-fashioned. Some of these economists are making their first appearances on TV programs, while others are re-appearing after decades of ostracism and neglect. Although they represent a diversity of perspectives, they are in agreement that the modelo económico of the 1990s is over and that their positions (income redistribution, active or expansionary policies, more regulation, taxes on privatized utilities, price controls, and even the nationalization of foreign-owned banks and oil companies) need to be heard. Their greater popularity speaks of a displacement in public discourse towards positions closer to what I have called “nationalist-populist reason.” The economists that are now appearing on TV screens are putting up for debate whether Argentina should listen to the words of advice of the IMF and the US Treasury; whether it is worth giving up so much (policy autonomy) for so little (fresh loans and tons of compelling advice).

This change in the type of economist now exposed to TV audiences speaks of the crisis of legitimacy of the “Washington Consensus” and its Argentine variant, the “convertibility consensus.” In part the retrenchment of orthodox or neo-liberal economists (which, I repeat, is only temporary) rests on solid grounds: the lack of reception in government circles for their advice, the evident failure of the modelo to generate sustained economic growth, and a bit of fear. Some economists (Roberto Aleman) have been assaulted by small groups of demonstrators, others (Eduardo Escasani) have been subjected to public escraches, and still others have been advised to remain uncritical. And, as we all know, Domingo Cavallo is now in prison, arrested on charges of contraband. This Harvard-trained economist may be unfairly paying for felonies that he did not commit, but in the eyes of many of his countrymen he is guilty of the increased unemployment, poverty, and inequality that resulted from the application of policies that were part and parcel of the Washington Consensus.

Act 4. The Washington Consensus Reconsidered

To understand the meaning of this crisis of legitimacy for the US-trained economist, we must look at the “policy community” in the US and at its changing consensus. Since the Asian crisis (Summer of 1997), the Washington Consensus and its disseminating agencies (the IMF and the World Bank) have faced severe criticism.[2. See Krueger 1998; Stiglitz 2001; Fisher 2001b, among others. Even a supporter of IMF policies such as Stanley Fisher (1997) had to acknowledge that IMF programs had a limited effect upon the domestic side of developing economies (“few countries seeing significant increases in growth or sharp reductions in inflation”), although the Fund’s policies did produce improvements in the external sector and momentary reductions in fiscal deficits.] Leading economists such as S. Fisher, A. Krueger, J. Stiglitz, or R. Barro have begun to openly criticize IMF policies, calling for major reforms of the Fund, if not its abolition. Economists and financial experts have argued that opening commodity and financial markets simultaneously was bound to generate explosive financial crises. That macroeconomic stability was not enough to guarantee the stable and sustained growth of “emerging markets.” That the IMF (with its enhanced credit facilities, high-premium loans, and insistence on fiscal austerity) had pushed countries into debt traps that led to financial crises and severe depressions. Attacks from left and right have led experts to re-examine the role that the IMF must play in the new global economy. Key experts have argued that large IMF loans to unreliable countries only encourage irresponsible private lending. Others have suggested that, in the future, the IMF should play a much more limited role in the world economy, restricted to “crisis prevention” and “crisis management”; that is, to collecting data, giving warnings, and providing policy advice (Barro and Reed 2002).

Albeit not the only one, Joseph Stiglitz has been perhaps the most vociferous in this criticism (North 2000; Stiglitz 2001). Since the 1997-98 Asian crisis, he has been exposing IMF policies as “fire added to the fire” (as the main reason for economic disaster and social distress). Instead of stimulating economies with increased social expenditures and an expansion of credit, the IMF had consistently recommended budget cut-backs, monetary restriction, and further de-regulations. These policies have sunk economies that had a chance of rapid recovery into deep and prolonged recessions. His book Globalization and its Discontents (2002) has circulated widely among critics of globalization and international financial institutions. Translated into Spanish in the same year, it has been enthusiastically received in Argentina by supporters of industrial protection, Keynesian economic policies, and the reconstruction of a “national” and “popular” economy. Radio and TV programs have familiarized the Argentine public with Stiglitz’s criticism of the IMF, emphasizing his authority as a Nobel-prize winner. Readers, of course, take from a text what strengthens their own views. Thus, Stiglitz has been locally portrayed as a detractor of the IMF and as a defender of “active industrial policies”–others have gone further, presenting him as a crusader against free-market ideology–while, in actuality, his positions have been more conservative. (True, he has accused the IMF of “lack of intellectual coherence” and has called for major limitations in the role of the IMF; but his understanding of the world economy stops well short of the camp of the “nationalist-populist” consensus.)

Over time, this criticism has been eroding the basis of the Washington Consensus. Though many US-trained experts still believe that macroeconomic stability and free-market reforms should not be abandoned, many now see that these two conditions are not sufficient. The consensus has shifted towards the terrain of institutions. Now scholars and policy makers agree that, in addition to free markets and macroeconomic stability, emerging economies need good government, reliable financial systems, and exemplary judiciaries. In particular, given the frequency of external shocks, countries need to have good bankruptcy laws and (some recommend) some degree of control over short-term speculative capital. Some ambiguities of the earlier consensus (between alternative exchange regimes) have been resolved: now only flexible exchange regimes are considered acceptable. The experiences of Mexico, Asia and Brazil have given the Fund grounds for arguing that exchange rate devaluations can prove successful. And arguments about the ineffectiveness of IMF loans (“giving fish to the sharks,” as Stiglitz has put it, or, in the conservative variant, a “moral hazard” argument) have gained widespread support in the policy community.

Act 5. US Economists Give Opinion of the Argentine Crisis

Expert economic opinion in the US is divided regarding the current Argentine crisis. There are those who blame Argentine policy-makers and politicians for the economic and financial collapse. Among them are Professor Martin Feldstein at Harvard, Professor Gary Becker at Chicago, and Professor Charles Calomiris at Columbia. On the other side are those who fault the IMF for the Argentine crisis: Professors Paul Krugman, Mark Weisbrot, and Arthur MacEwan, among others. Those who blame the IMF focus either on its unhelpful or plainly wrong economic advice or on its ill-timed financial assistance. The IMF’s sins are limited to not telling Argentina to abandon its fixed exchange system sooner, or to advising orthodox austerity measures in the middle of a depression. Those who place the blame on Argentine policy-makers (and minimize the IMF’s responsibility) point to the transformation of fiscal deficits into mounting public debt and to the inability to complete the structural reforms needed to make the currency board system work. Their interventions tend to distinguish between economic liberalization (not to be blamed) and ill-advised monetary and fiscal policy (guilty as charged). The positions in the debate are two-sided: either the patient did not follow the doctor’s prescription completely; or the doctor, willfully or out of ignorance, provided the patient with the wrong medicine.

In either of these extreme positions, Argentina stands in a subaltern position vis-à-vis expert (economic) knowledge. Its policy-makers are conceived, in both perspectives, as dependent upon the authorized word of the IMF or the “policy analysis” community. Or, put in other terms, Argentina appears always as an “experiment” or an “example” (some use the phrase “textbook case”) of a theory or policy paradigm that is under discussion. Argentina, whether it is the “poster child” of neo-liberal reforms or the “basket case” of IMF folly, stands always as a passive object of knowledge, providing mostly data to feed a vigorous academic and policy debate that goes on elsewhere. Leading scholars and policy-makers in Argentina can argue against the current, but can hardly avoid the protocols of authorization governing “voice” in the US academic and policy community. Yes, at the end of a persistent effort, Minister Cavallo persuaded many of his peers in the United States that convertibility had been a success and could be sustained over time. But in order to do this, he had to publish his views in the most traditional and respected journal of the profession: the American Economic Review (Cavallo and Cottani 1997).[3. In this essay, it is clear that Cavallo was running against the current. He acknowledged that key people in the policy and theory community were skeptical about the currency board. But they were ready to accept price stability and good fiscal figures as “proofs” of success. In fact, as Cavallo conceded, no one in the “policy community” raised the issue of rising unemployment at a time in which Argentina seemed to have weathered bravely the aftermath of the Mexican crisis (1996-97).] Cavallo’s arguments, to the extent that he used the master’s idiom (he spoke of the strength of an internationalized banking system, of the soundness of macro-economic fundamentals, and the resilience of the national economy to global crises), were considered valid, though not completely persuasive.

Few in the US tribunal of expert economic opinion connect Argentina’s failure with economic knowledge or the way this translates into authoritative economic advice. Few of these economic commentators are aware of the impact produced by their own university’s teaching upon the policy-makers of developing economies.[4. Mark Weisbrot is perhaps an exception in this regard. He argues that Argentina had been mismanaged by US-trained economists and subjected, for too long, to ill advice from the IMF. In the end, a failed experiment (Argentine convertibility), disguised as success, could only bring discredit to those economists who supported it (Weisbrot 2001).] After all, their students go on to occupy key positions of responsibility as ministers of finance or presidents of central banks. Their students are the ones who generally sit on the other side of the table when the IMF experts dispense advice–and they are the ones who communicate to the population the “bitter medicine” prescribed by the money doctors. Profound disagreement within the US academy (about the causes of the Argentine crisis) stands in sharp contrast to the single-voice advice dispensed by IMF experts.

After the crisis of November 2001-March 2002, Argentina is back at the center of interest of economic opinion as a “leading case.” Why has it failed? What were the forces that triggered the collapse? Has the confidence in free-market policies been severely damaged by this event? With the images of supermarket riots (“saqueos”), middle-class citizens banging pots and pans (“cacerolazos”) and youths throwing stones at banks in Buenos Aires, US economists return to the laboratory of Economic Science to re-think and re-establish their preconceptions. The essay that Martin Feldstein wrote for Foreign Affairs immediately after these events is symptomatic. Here we encounter not the doubt of the researcher but the certainty of the judge. The case (the financial collapse of Argentina) calls for an attribution of guilt. Without much supporting evidence, Feldstein rushes to indict two potential suspects: the overvalued peso and excessive foreign debt (Feldstein 2002). If this were so, then liberalizing policies are not to blame. Only the Argentine government is to blame, for it promised something that became impossible to deliver: convertibility at a fixed exchange rate. In the end, the “case” (Argentina) helps to reinforce orthodoxy. If the government had been able to lower real wages (twisting labor’s arm) or maintain the golden rule of convertible regimes (to reduce the money supply and raise the interest rate) to discipline the economy into cost-reduction “competitiveness,” Argentina would have been able to maintain its convertibility.

What failed, in Feldstein’s opinion, was the political ability and the vision of the Argentine government to adapt national reality to the conditions of the world market. Argentine reality proved stubborn. Even with 15 percent unemployment rates, real wages did not decline. After the Brazilian devaluation, Argentine wages became unrealistically high. But instead of accepting the painful reality (cutting wages to world-market levels), Argentine politicians opted for the easy road: increasing indebtedness. Feldstein draws three “lessons” from the Argentine experience: 1) that a fixed exchange system combined with a currency board is a bad idea; 2) that substantial foreign borrowing is unsustainable in the long run (it is a high-risk development strategy); and 3) that free-market policies continue to be desirable, in spite of this sad experience. If in the 1980s Argentina stood as a classic example of what could go wrong in the land of “financial restriction,” closed economies, and populist policies, in 2002 the country was again an example–now of the failure of the currency board in countries with too much debt and inflexible labor laws.

On the other side of the argument, there are also prominent public figures and renowned scholars. Best-known among the critics of the IMF is Professor Joseph Stiglitz. But other economists have joined this position, arguing that the “Argentine case” is another proof of the failure of IMF policies. University of Massachusetts Professor Arthur MacEwan is in this camp. In his analysis (MacEwan 2002), he presents Argentina as a victim of misguided policies, policies that insisted on increasing debt to sustain an outmoded system (the currency board). Bad economic advice comes not from good economic science but from interest. The IMF, trying to defend powerful US corporations and global banking firms, continued to funnel loans to an already bankrupt economy. To Lance Taylor, a monetary policy expert and economic historian, the fall of Argentina must be viewed as a consequence of wrong policy choices (Interview 2001). The fixed exchange rate was good for taming inflationary expectations, but disastrous as a development strategy. In the end, the Argentine case demonstrates the foolishness of opening commodity and capital markets at the same time. Economic growth becomes dependent upon the flow of capital. If successful, the inflow of fresh capital creates price inflation and this generates an overvalued exchange rate that conspires against local industry. If unsuccessful, capital outflows create expectations of devaluation.
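Taylor’s mechanism can be made concrete with the textbook real-exchange-rate identity (a minimal sketch of my own, not a formula taken from his interview or from MacEwan’s analysis). With E the nominal exchange rate in pesos per dollar, P* the foreign price level and P the domestic price level,

\[
\mathrm{RER} \;=\; \frac{E \cdot P^{*}}{P}, \qquad E \equiv 1 \ \text{under convertibility.}
\]

With E fixed at one, any capital-inflow-driven rise in P relative to P* lowers the real exchange rate: the peso appreciates in real terms and local industry is priced out of its markets. Once the inflows reverse, the identity leaves only two domestic routes of adjustment–raising E (devaluation) or forcing P down (deflation)–which is precisely the dilemma that Feldstein and Taylor read so differently.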

Curiously, in the end, this liberal view coincides with the orthodox one. There is nothing wrong with economic expertise, only with the institutions that represent powerful interests. Liberal and conservative opinion clash only with regard to the desirability of unregulated markets, but they are one in regard to defending the citadel of knowledge. And, what is more important, in one or the other perspective, Argentina continues to be a “case,” an experimental site of policy and knowledge.

How does economic “doxa” pass for knowledge? How is economic theory able to sustain its orthodox core under a storm of counter-evidence? What makes an opinion emanated from Harvard a dominant view? Why is Argentina always a subject of study and object of advice and not a producer of knowledge? These are questions that this essay cannot attempt to answer. Nevertheless, it is useful to reflect on the unevenness of this situation. Certain locations (the subaltern ones) provide the data for experiments in policy while other locations (the dominant ones) provide the theory to understand the success or failure of these experiments. Here lies a condition of subalternity that cannot be solved by improving the quality of the national administration with foreign-trained economists. For the last word remains on the side of those who produce knowledge.

Act 6. Imperial Economics

Informal empires can treat their areas of influence with more or less duress, with more or less affection. It depends upon international political conditions and upon the vision and convictions of a US president (or his administration). Thus, when Paul O’Neill assumed office as US Secretary of the Treasury (January 2001), it seemed that the empire would get tougher with regard to those countries that followed “bad policies.” Comply with the economic advice the IMF and the policy community give you or else suffer isolation: this seemed to be the image the new administration wanted to project (“tough love”). This, combined with recurrent statements by high-level IMF officials that Argentine negotiators were unable to produce a plan (meaning that their technical experts were not able to draw up a consistent macro-economic program), created the impression that the time of “carnal relations” between the US and Argentina was over. From then on, distrust, derision, and distance would characterize the relations between the two countries. This was, to a certain degree, to be expected. Few could have anticipated, nonetheless, that misunderstandings about economic policy and economic goals would be at the basis of this imperial duress–and, more importantly, that the hegemony of economic discourse would be the source of contention.

Perhaps part of Argentina’s neo-colonial condition lies in the fact that the country has been taken as a site of experiment for economic policy. In the early 1990s, Argentina pioneered neo-liberal reforms in the region, becoming the “poster child” of free markets, privatization, and macro-economic stability. Since the Asian crisis, Argentina has turned into a “basket case” of poor fiscal management, increasing country risk, and bad international loans. Curiously, few have examined the location from which the judgement of “success” and “failure” emanates: US universities, think tanks, and multilateral credit institutions. The same place where tons of economic and financial advice is produced on a daily basis. Local economists engage in the debates proposed by these centers of knowledge and policy, shifting their opinions about policy and doctrinal trends, gradually or suddenly. They are not equal contributors to the world of knowledge and policy: like President Duhalde, they abide by the authorized word of US experts. Otherwise, they get displaced into the territory of the “dysfunctional.”

Is economics an imperial science? We know that the discipline has tried to colonize other social and human sciences with its maximizing principles and its implicit rationality. But economic science could be dubbed “imperial” in a more fundamental sense. New work on international finance and economics deals with the question of global governance. Larry Summers, the current president of Harvard University, is an expert on this subject. He has been arguing that the US is the first non-imperialist empire and that US primacy in the field of economics will assure US leadership in the management of the global economy.

In 1999, Summers celebrated the globalization of US economics. US-trained economists were taking control of key positions in emergent economies’ governments and central banks (Summers 1999). There were Berkeley-trained economists in Indonesia, Chicago alumni in Chile, MIT and Harvard graduates in Mexico and Argentina, etc. These economists were spreading the knowledge of how to manage national economies in a globalized environment and providing the rationale for the transformation under way. They were called to assume a central role in completing the globalization process in the terrain where corporations alone could not progress: the reform of government. It was only in this terrain that the imperatives of greater international integration could be made compatible with the demands of national communities.

US-trained economists were to be the ones facing the challenges of a globalized world: they had to find innovative solutions to the problem of reducing financial volatility and the contagion of financial crises across countries. Since the late 1990s, Summers has been arguing for the building of a new “international financial architecture” for the world economy. His view of a pacified global economy is one in which experts dominate and help the rest of humanity cushion the effects of inevitable “market failures” (Summers 1999). What role does the United States play in this imagined world scenario? To Summers, the US is the “indispensable nation,” the only power that can lead a movement towards international economic integration without causing a major disruption (restructuring) in the nation-state system.[5. Summers is aware of the unevenness implicit in this conception of the world system. Whereas the US government claims absolute sovereignty to control domestic economic policy, other countries should content themselves with a limited sovereignty. Subject to the financial surveillance of multilateral institutions, they cannot entertain the dream of ever issuing world-money. Even if they become “fiscally and monetarily responsible,” the Federal Reserve System will never allow other countries to become new members of its board of directors.] How could this be accomplished? By the persuasive power of economic knowledge. In the end, only the diffusion of economic rationale (“economists want their fellow citizens to understand what they know about the benefits of free trade”) can produce a compromise between promises of democratic governance (widespread public goods) and the recurrent constraints imposed by global financial crises.

The United States has changed the rhetoric of empire. For it is the first “outward-looking,” “non-imperialist” superpower with the energy (its own system of government and its economic expertise) to lead the world to cooperative solutions to its problems (Summers 1998). The new “civilizing mission” is to spread to the four winds the rationale of responsible and transparent government. The new promised land is a financial architecture that resists the pressure of periodic financial crises and a new regulatory system that neither stifles the forces of capitalist enterprise nor destroys the belief in democratic government. The new ideal is a novel compromise between government and market, one that can only be imagined and disseminated by economic experts.

Act 7. Post-Devaluation Blues (Universidad Di Tella)

Financial and economic crises take a heavy toll on the university systems of peripheral countries (“emergent economies”) such as Argentina. An abrupt devaluation (one that triples the price of the dollar) makes it almost impossible to continue study-abroad programs, forces the library to cut back dramatically on its foreign subscriptions and purchases of books in other languages, makes it quite difficult for professors to attend conferences and congresses overseas or to invite foreign colleagues, and leaves demands for new computers or better internet connections in the basket of Utopia. If the change in exchange regimes is accompanied by a dramatic fall in GDP and employment and by rampant inflation (as is the case now in Argentina), the conditions are given for a dispersion of the Faculty, attracted by better employment possibilities elsewhere. In short, international crises strike at the very foundation of developing universities. A small but growing university, such as Universidad Torcuato Di Tella, faces paralysis in terms of human and physical resources, even if it manages to withstand the collapse of the economy.

Curiously, our university has one of the best economics schools in the country. Our professors have been advisers to governments, if not themselves government officials in areas related to economic policy. They have been trained at UCLA, Chicago, Yale, MIT, and other leading economics schools in the US. They themselves fell prey to the trap of the “convertibility consensus” and now form part of the “mainstream economists” who are very cautious about speaking in the context of an economic meltdown and high political volatility. Our departments of Economics and Business were at the center of the university.

Now that old-line economic reasoning is called into question, it may be time to re-think priorities. Perhaps it is time to re-configure the curricula of economics majors to include alternative modes of thinking about “equilibria,” “incentive structures,” or “economic performance.” Maybe it is time for greater exchange among the social sciences and the humanities. Maybe it is time to challenge the master discourse elaborated in US economics schools about what constitutes “sound economic policy” and to re-think the position of authority (and the scarcity of evidence) from which international economists dispense advice to “emerging markets.” Perhaps, as Paul Krugman has recently suggested, if IMF advice were offered on the market, the price of this service would be very low due to insufficient demand.

Conclusion. The Subalternity of Knowledge (and How to Turn it Around)

One of the problems associated with a peripheral location in the world of expertise is not knowing the right answer at the right time. In March 1999 (in an address at the Inter-American Development Bank), Summers suggested that the keys to preventing financial crises in Latin America were transparent accounting, corporate governance, and effective bankruptcy regimes. This was the vocabulary of the new science of global governance. Those who did not pay attention to the relationship between information, judicial institutions, and markets were just out of tune with history. This is what happened to President Duhalde and his Minister Remes in their encounters with the IMF, US Treasury, and other experts. Had they been listening to the word of experts in “global finance” and “crisis management,” they would have been better able to understand the non-cooperative stance and duress of US experts. For (guided by an outdated policy agenda) they were enacting policies that were exactly the opposite of those recommended by experts.

The current crisis is first a crisis of governability and public confidence, but it is also a crisis of legitimacy for the policy-expert. Bad economic advice and bad government policy have contributed to deepening the economic depression and to creating divisions in society. The discredit of past governments that were unable to fulfill their promises of economic improvement and lesser social inequality drags along the discredit of economic advice. The gigantic struggle between supporters and detractors of convertibility has now turned into another gigantic struggle between nationalist-populist and neo-liberal policy solutions. There is a profound disbelief in “expert (economic) reason.” People watch their TV screens in astonishment as representatives of local expertise (home-grown economists) pile up criticism against neo-liberal reforms and orthodox economics. People are beginning to realize that economic predictions are quite frequently in error, that technically sound advice is often politically and socially unviable, and that economists often represent not independent thought but certain narrow corporate interests.

How costly is economic experimentation in peripheral economies? Will a greater dose of economic advice from the center (or a greater number of US-trained economists) spare us from the effects of globalization? Are we teaching our economists to speak with their own voice? Is their knowledge integrated into a broader conception of the world and of the human and social sciences? How should we feed the minds of those who will be managing the global economy from these peripheral outposts? Should they be only producing data for the center and applying policy solutions developed at the center? We need to seriously examine the bi-polarities created by this knowledge structure. Why are arguments about poverty and social inequality unable to penetrate the wall of technical neo-liberal reason? Why is fiscal responsibility anathema to heterodox economists? We need to re-consider the formation and circulation of economic expert advice as a constitutive moment of global governance–and challenge its foundational precepts. We need to examine the broader implications of universal financial and economic recipes, and the denial of locally situated and socially embedded policy solutions.

Epilogue

In May of 2002, President Duhalde appointed a new economic minister, Roberto Lavagna, an expert who had made a career in the European policy community. In contrast to the double talk of Minister Remes Lenicov (who tried to appear tough as a hyper-regulator but accepted the views of the IMF on every policy issue), the new minister distanced himself from the IMF and its policies. He let the Central Bank intervene in the exchange market so as to stabilize the currency (something the IMF experts were against), then started to accumulate foreign exchange reserves, delaying the payment of financial obligations to the IMF and the World Bank (a decision that provoked the anger of experts in these institutions). Soon, having obtained some minor achievements in terms of declining inflation and a halt in the fall of real output, the minister started to pursue a new negotiation strategy with the Fund: “we have to be responsible, not obedient.” In Congress, this translated into a series of delaying tactics that avoided “resolving” the problems the Fund wanted addressed (re-structuring of the banking system, an immediate rise in the prices of public services, the abolition of provincial bonds, and some commitment to re-start negotiations over government bonds), complemented with new bills (postponing bankruptcies and housing evictions) that went against the wishes of the IMF. As the president soon discovered, taking distance from the IMF provided important political gains. So he began to excel in the practice of appearing committed to a successful negotiation with the IMF while, at the same time, boycotting every possibility of success.

To be fair, one has to acknowledge that from the other side of the window, the IMF leadership (like the US Secretary of the Treasury) became increasingly alienated from Argentina, as they saw President Duhalde taking the wrong turn towards a “nationalist-populist” agenda. In fact, they started to consider that he was missing altogether the train that led to “capitalist development” and “good government.” In the end, perhaps, President Duhalde did not understand (and could not understand) the rules of the economy. The initial misreading of the reasons of empire–an inexplicable rejection of the reasons of the local policy-maker–turned into alienation and mutual distrust. Perhaps, reasons Duhalde, the IMF does not want to sign an agreement. Perhaps, reasons the IMF leadership, Duhalde is no longer truly committed to reaching an agreement. Once the child has gone back to its rebellious state, the father will step up the threat of punishment. In this, the imperial father will speak through the voice of local and international experts: if Argentina does not negotiate with the IMF and does not fulfill its international commitments it will “fall out of the world.”

December, 2002

References

Barro, Robert and Jay Reed (2002), “If We Can’t Abolish IMF, Let’s at Least Make the Big Changes,” Business Week, April 10.

Becker, Gary (2002), “Deficit Spending Got Argentina into This Mess,” Business Week, February 11.

Broad, Robin and John Cavanagh (1999), “The Death of the Washington Consensus?” World Policy Journal 16:3 (Fall), 79-88.

Cavallo, Domingo F. and Joaquín Cottani (1997), “Argentina’s Convertibility and the IMF,” American Economic Review 87:2 (May), 17-22.

Feldstein, Martin (2002), “Argentina’s Fall: Lessons from the Latest Financial Crisis,” Foreign Affairs (March-April).

Fischer, Stanley (2001a), “Exchange Rate Regimes: Is the Bipolar View Correct?” Finance & Development 38:2 (June), 18-21.

______. (2001b), “The IMF’s Role in Poverty Reduction,” Finance & Development 38:2 (June), S2-S3.

______. (1997), “Applied Economics in Action: IMF Programs,” The American Economic Review 87:2 (May), 23-27.

Keaney, Michael (2001), “Consensus by Diktat: Washington, London, and the ‘modernization’ of modernization,” Capitalism, Nature, Socialism 12:3 (September), 44-70.

Levinson, Mark (2000), “The Cracking Washington Consensus,” Dissent 47:4 (Fall), 11-14.

MacEwan, Arthur (2002), “Economic Debacle in Argentina: The IMF Strikes Again,” Dollars & Sense (March-April).

Naim, Moises (2000), “Fads and Fashion in Economic Reforms: Washington Consensus or Washington Confusion?” Third World Quarterly 21:3 (June), 505-528.

North, James (2000), “Sound the Alarm,” Barron’s, April 17.

Stiglitz, Joseph E. (2002), Globalization and Its Discontents (New York: W.W. Norton).

________. (2001), “Failure of the Fund: Rethinking the IMF Response,” Harvard International Review 23:2 (Summer), 14-18.

Summers, Lawrence (1999), “Distinguished Lecture on Economics in Government,” Journal of Economic Perspectives, 13:2 (Spring), 3-18.

_______. (1998), “America: The First Nonimperialist Superpower,” New Perspectives Quarterly, April 1st.

Taylor, Lance (2001), “Argentina: Poster Child for the Failure of Liberalized Policies?” Challenge (November-December).

“Too Little, Too Late? The IMF and Argentina” (2001), The Economist, August 25.

“Tough-Love Ya to Death,” (2001) Newsweek, May 28.

“Unraveling the Washington Consensus: An Interview with Joseph Stiglitz,” Multinational Monitor 21:4 (April 2000), 13-17.

Weisbrot, Mark (2001), “Another IMF Crash,” The Nation, December 10.

Weisbrot, Mark and Thomas I. Palley (1999), “How to Say No to the IMF,” The Nation, June 21.

The Role of The United States in the Global System after September 11th

Dani Nabudere is an Executive Director at Afrika Study Center, Mbale, Uganda. Most recently Nabudere has edited Globalisation and the African Post-colonial state (AAAPS, Harare, 2000) and is the author of Africa in the New Millennium: Towards a post-traditional renaissance (forthcoming, Africa World Press).

Introduction

It is clear that power relations in the global system have been severely tested since the events of September 11, 2001, so much so that it has become fashionable these days to argue that the world has irrevocably changed with those events. The terrorist attacks on the World Trade Center in New York and the Pentagon in Washington on September 11th, 2001 were calculated moves to test the standing and the political and economic positions of the world’s sole superpower. They were aimed at delivering a blow that could carry several messages around the world at once. Indeed, it is clear that this fateful event was a manifestation of the contradictions of the modern world system since its foundation some five hundred years ago, and the messages the attacks were calculated to transmit were intended to convey those contradictions to all and sundry.

The first of these messages was the expression of anger by those disaffected social and political forces that felt mistreated, marginalized, and oppressed by U.S. global power relations. The second was to demonstrate to the U.S. that those global power relations were vulnerable and could be attacked at the very heart of the system at any time. Thirdly, the attacks signaled to other disaffected groups opposed to U.S. dominance of the world that it was possible to weaken this power in such a way that their grievances could be addressed through the overthrow of that system. Fourthly, by attacking these two pillars of U.S. economic and military power, al Qaeda wanted to demonstrate that the U.S. was not as powerful as it thought and that its economic and military power could be broken down by well-organized and well-manned attacks.

These messages were open to other interpretations as well. To U.S. neo-conservative forces as well as to some in the right-wing liberal political establishment, these attacks signaled an attempt by fundamentalist political Islam to overthrow Western civilization at its core and, in this respect, the attacks were interpreted as constituting a threat not just to the U.S. as a country but to the whole Christian, western civilization project. This was in fact what President Bush dubbed an “attack on civilization” in his condemnation of the strikes. This interpretation had the effect of influencing the way the world looked at the attack and the U.S. response to it. Even governments that did not necessarily accept this interpretation were forced, with very few exceptions, to side with the U.S. ideologically on the issue. Thus, in addition to the overwhelming humanistic outpouring of sympathy for the victims, it enabled the Bush administration to arm-twist governments and individuals throughout the world to side with its response on the grounds that the attacks were not on the U.S. as such but on “civilization” in general. These governments also faced the accompanying threat: “Either you are with us, or you are against us.”

At the same time, the attacks had other consequences. The generalization of the consequences of the attack also put emergent “anti-globalization” activists on the spot, since any attempt by them to ask that the causes of the attacks be examined and addressed was interpreted as an “unpatriotic” expression of sympathy with “the enemy.” For this reason, the attacks had the effect of dampening, at least for some time, the activities of the global solidarity movement that had been building since its strong showing at the Seattle WTO demonstrations in 1999. This interpretation was also used to crack down on the democratic and civil rights of U.S. citizens and to reinforce authoritarian regimes throughout the world. Thus, the event and the reactions surrounding it were turned from a political discourse into a moral-religious one in which “the enemy” was equated with evil and barbarism, while the victim was equated with virtue and civilization.

Nevertheless, these interpretations have begun to have the opposite effect, in that the widening of the net in “the war against terrorism” with the attack against Iraq has caused many countries to pose questions that were not posed earlier. Questions are being asked as to whether the tragic events of September 11 are not being misinterpreted to advance the narrow political agenda of some cliques within the U.S. political establishment. Something like a return to a political discourse is beginning to emerge, with calls being made to address the real causes that led to the September 11th attacks against the headquarters of the “Free World” and for the United Nations to resume its responsibilities for international peace and security. President Bush’s threats that the United Nations must act according to his will “or become irrelevant” are being taken as the rantings of a president whose unilateralism has gone wild. The war against Iraq has again undermined the hope of a return to a multilateral world.

In many ways, therefore, these events, and particularly the unilateral action of launching the war against Iraq with the support of Britain and the so-called “alliance of the willing,” have confirmed a predictable hegemonic trend in U.S. foreign policy since the end of World War 2. This trend has afflicted all great hegemonic powers in history. Nevertheless, the role of the U.S. in international relations since the end of that war has confirmed the traditional realist and hegemonic stability theories, which have argued that for stable institutions of global public good to prevail, there must be a hegemonic power able to enforce certain rules of behavior in international relations, because the hegemon in that case can afford the short-run costs of achieving the long-run gains, which also happen to be in its national interest. These theories have been challenged by institutional stability theorists, who have argued that the model of institutionalized hegemony, which explains the functioning of multilateral arrangements based on the cooperation of a number of core countries to overcome “market failures,” is preferable to the hegemonic power model [Keohane, 1980].

The U.S. in the Post-War Order

The hegemonic stability theories seem to have been backed by the evidence of the early phase of the post-World War 2 period, in which the U.S. was able to push the former European imperial powers to accept a multilateral economic system that existed alongside the United Nations system, with the U.S. playing the leading role. This Bretton Woods system was predicated on the coincidence of three favorable political conditions. The first was the concentration of both political and economic power in the hands of a small number of (western) states; the second, the existence of a cluster of important (economic and political) interests shared by those states; and the third, the presence of a dominant power “willing and able” to assume a leadership role in the new situation [Spero, 1977: 29].

It is the evolution of the contradictions in this combination that Spero described that has created the predicament in which the U.S. now finds itself. The domination of the U.S. in global economic and strategic institutions such as the North Atlantic Treaty Organization (NATO), the South East Asia Treaty Organization (SEATO), the Central Treaty Organization (CENTO) and the Bretton Woods institutions characterized the Cold War period. These institutions expressed the interests of the western powers (at first hostile to the Japanese emergence on the world economic scene) as the culmination of the western modern system based on liberal-monopolistic capitalism. The institutions also expressed the political and military power that the western countries wielded throughout the world. Western systems of economic, political, and military power in fact protected those economic interests that were threatened by “communism,” and, as time passed, by the emergent nationalism of what came to be called “Third World” or “developing” countries.

Indeed, as Paul Kennedy argued in his book The Rise and Fall of the Great Powers [1988], economic power is always needed to underpin military power, while the latter is also necessary in order to acquire and protect the wealth that superpower status demands. The problem arises when a disproportionate share of a hegemon’s economic resources is increasingly diverted from wealth-creation and allocated to military purposes. The result is the weakening of the economic backbone of the hegemon’s military power in the long run, which often leads to its eventual collapse.

This reality took some time to come through in the case of the U.S. The rise of U.S. transnational corporations in the world economy, for a time, reinforced U.S. economic, political, and strategic power, which many states in the world were obliged to comply with due to the imperatives of the situation. Having suffered from its isolationism of the interwar years, which contributed to the eventual collapse of the economic system and of the peace that had followed World War 1, the U.S. in the period following World War 2 was prepared for an outward push through the Bretton Woods multilateral system and the NATO alliance.

Having settled into the role of a superpower challenged only by the Soviet Union, the U.S. began to pursue a series of policies in the international arena that tended to undermine its own political belief in the independence of states against European colonialism. To some extent, this was prompted by U.S. determination to resist “communism.” But that consideration was only marginal. The real major consideration was the need to defend a western system of values built around Christianity, liberal democracy, and world capitalism. Now these values and interests appear to be threatened by the al Qaeda attack on the U.S.

Many Third World countries originally considered the U.S. to be a “progressive” and friendly power because of its opposition to the European colonial system, especially in the interwar years and the immediate post-war period. U.S. partial support for the right of self-determination for colonial countries, articulated in the Wilsonian “Fourteen Points” speech toward the end of World War 1, symbolized this “progressive” image. But soon the U.S.’s own economic and strategic interests compelled it to structure the post-war multilateral system in such a manner that its hegemonic interests were taken care of globally. It was therefore not surprising that its role as a neo-colonial power emerged in the course of this historical process. This reality was revealed in its dealings with the former colonial powers, as both began to rely on NATO to suppress the struggles for self-determination in the British and Portuguese colonies in Southern Africa and elsewhere.

The same happened in other parts of the Third World–in Asia and Latin America. The existence of the U.S.S.R. as an opposing hegemonic power implied the need to confront it not only on its own home ground, but also in the now politically independent countries of Asia, Africa, and Latin America. From confronting Cuba in the U.S.’s own back yard, the “anti-communist” crusade spread to all regions of the world. The United States came to rely increasingly on right-wing military rulers as “comrades in arms” in the fight “against communism” in Third World countries. It increasingly supported military dictators such as Ferdinand Marcos in the Philippines, General Suharto in Indonesia, General Mobutu in Congo, and General Pinochet in Chile.

These dictators represented the rear-guard of United States policy in Third World countries during the Cold War period. A stage was reached at which the fight against the U.S.S.R. became equivalent to the fight for control of the world’s natural and human resources for the benefit of the “Free World” against the interests of the East led by the U.S.S.R. Oil, strategic materials, and mineral wealth, as well as trade and investment outlets, became vital strategic areas to defend.

The Oil Crisis of the mid-1970s signaled the heightened importance of the United States’ political and strategic position in the Middle East, as we have seen, while the survival of Israel in a sea of Arab nationalism also determined the shape of U.S. foreign policy in that area. Arab nationalism and the Palestinian struggle against Israel appeared to contradict United States global policy, and this set the environment for the September 11th events. Indeed, the U.S. has viewed the Middle East as an “arc of crisis” since the late 1970s.

It will be remembered that in 1979 President Carter signed Presidential Directive 18 to order the creation of the Rapid Deployment Force (RDF), composed of some 250,000 men and women and designed to meet contingencies after the Iranian revolution. The force was supposed to protect U.S. interests in 19 countries stretching all the way from Morocco through the Persian Gulf up to Pakistan, a region the Pentagon regarded as the “cockpit of global crisis in the 1980s.” In fact the real purpose was the protection of the oil fields in the area.

With the fall of the Shah of Iran and the Soviet invasion of Afghanistan in 1979, the RDF was expanded. By 1984, the force had grown to 400,000 men and women on standby for action in the “worst case scenario” of a possible Soviet invasion of Iran. This was based on the calculation that by 1985 the Soviet Union would have become a net importer of oil and would therefore constitute a serious challenger to the U.S. monopoly of Arab oil. This happened during the second oil crisis of 1979, a period marked by instability in Iran that, as far as the U.S. is concerned, has never ended. All these developments are interlinked and therefore provide a necessary background to understanding pre- and post-September 11 developments.

The U.S. in the Post-September 11 World

Old and New Alliances

The above background clearly demonstrates that the U.S. has, throughout the period of its hegemony, used its power to bolster its interests, which in many cases meant standing against the interests of the peoples of the Third World. Its support for reactionary and authoritarian regimes has not abated even in the post-Soviet period. Clearly, the collapse of the Soviet empire very much eased its strategic pressures, but the much vaunted and expected “peace dividend” never materialized. This is because the U.S. has continued to face military challenges to its power, and its major concern now is how it can rein in “rogue” and “terrorist” states, which constitute the “axis of evil.” The enemy image has shifted from the U.S.S.R. to these “rogue” states in the Third World. The events of September 11th must, therefore, in our view, be seen as part of this strategic problem facing the U.S. since its assumption of leadership of western interests against the rest of the world. Having played a role in the collapse of the U.S.S.R., it finds itself faced with an even stronger enemy within the ranks of Third World nationalism, which in its judgment comprises many terrorist and “rogue” states and groups.

In comprehending the issues at stake, it is important to focus on the year 1979 as the watershed in the emergence of this new U.S. dilemma. This watershed was marked by the decline of Soviet power, weakened especially by its defeats in the war in Afghanistan; at the same time, 1979 signaled the beginning of challenges to U.S. power in the Muslim world, starting with the Iranian revolution of that year. It has also to be noted that that year and the following one signaled a shift of western political power to the right–with the rise of Margaret Thatcher in the U.K. and Ronald Reagan in the U.S.

By supporting Muslim forces against the U.S.S.R. in Afghanistan in its effort to “contain” Soviet influence in the Middle East, the U.S. created a temporary convergence of interests with radical Islamist groups in its anti-Soviet confrontation, while at the same time creating the conditions for the emergence of radical political Islamism. For a time, the convergence of interests was beneficial to the U.S., but with the collapse of the Soviet Union a divergence of interests began to emerge. This force eventually grew and assumed political importance, and it turned against U.S. expansionism in the Middle East. In this sense, it can be said that the collapse of the U.S.S.R. also marked the beginning of the problems for the U.S. with the Muslim world in the Middle East, and in the Third World in general. In that scenario, it can be said that the seeds that germinated and forced their way out of the ground on September 11th were sown in the anti-Soviet war in Afghanistan.

Samuel Huntington, in his book The Clash of Civilizations and the Remaking of World Order [1997], located the rise of radical Islamism in the squalor of the marginalized Muslim masses in the Arab world in the mid-seventies. It is also well known that the Iranian Islamic revolution was, to a great extent, fueled by the worsening economic conditions in Iran that led to mass discontent and eventual rebellion. The discontent was clearly linked to western (imperialist) dominance in the region, where foreign oil corporations exploited local oil resources in alliance with the traditional ruling families against the interests of the masses of the people. These contradictions are still at the core of the conflicts in the region, which the U.S. continues to ignore.

One consequence of this development was to put radical and militant Islam at the center of the Muslim states, whose leaders were increasingly challenged to abandon western symbols of power. The enemy was the cultural imperialism of the west led by the U.S. From that broad anti-imperialist strategy, the Islamic radicals were able to win support for their cause from non-Muslim Third World peoples. In working for the defeat of communism in Afghanistan and the world as a whole, the U.S. played on the Muslim and Christian fundamentalist fear of communism as a “godless creed.” The U.S. worked closely with Islamic fundamentalists so long as this served its global hegemonic ambitions in defending its oil bases in the Persian Gulf region. At the same time, it pursued the secular values of democracy, freedom, and justice, a pursuit its allies perceived as hypocritical.

With the collapse of communism in 1989, the U.S. in its triumphalism, symbolized by the new drive for globalization, began to be viewed by the Islamic forces as an equally “godless creed” with its emphasis on empty materialism and consumerism. This was seen as a soulless and nihilistic cultural imperialism, which was being imposed on the Arab and Muslim peoples. It was a challenge to the Islamic belief in a non-secular state system as well as to the values of western-style nationalism. The U.S. could no longer invoke the Cold War in its support, since the former Soviet Union was now also becoming a capitalist and secular system. Its earlier alliance with radical Islam, which had enabled the U.S. to recruit people like Osama Bin Laden to its anti-communist cause, began to wane. Its support among the Taliban could only be maintained by bribery and corruption in pursuit of its materialist creed and ambitions.

Still in Search of Oil

So in order to understand the September 11th events without conjuring up conspiracy theories, it is important to note that the issue of the change of the Taliban government in Afghanistan was uppermost in the minds of certain business and political interests in the U.S. at the material time. In testimony before the Subcommittee on Asia and the Pacific Region of the Committee on International Relations of the House of Representatives on February 12, 1998, John J. Maresca, the UNOCAL vice-president for international relations, argued that there was a need for multiple pipeline routes for Central Asian oil and gas resources, as well as a need for the U.S. to support international and regional efforts aimed at achieving balanced and lasting political settlements to the conflicts in the region, “including Afghanistan.” He also pointed out that there was a need for U.S. “structured assistance” to encourage economic reforms and the development of appropriate investment climates in the region. Therefore, in his view, one major problem, which had yet to be resolved, was how to get the region’s vast energy resources to the markets where they were needed.

At this time, there was a consortium of 11 foreign oil companies, including four American companies (Unocal, Amoco, Exxon and Pennzoil), involved in exploration in the region. This consortium conceived of two possible routes: one line angling north and crossing the north Caucasus to Novorossiysk; the other crossing Georgia to a shipping terminal on the Black Sea, which could be extended west and south across Turkey to the Mediterranean port of Ceyhan. But even if both pipelines were built, they would not have had enough total capacity to transport all the oil expected to flow from the region in the future. Nor would they have had the capability to move it to the right markets.

The second option was to build a pipeline south from Central Asia to the Indian Ocean. One obvious route south would cross Iran, but this was foreclosed for American companies because of U.S. sanctions legislation against Iran. In Maresca’s view, the only other possible route was across Afghanistan, which of course had its own unique challenges: the country had been involved in bitter warfare for almost two decades and was still divided by civil war. He emphasized: “From the outset, we have made it clear that construction of the pipeline we have proposed across Afghanistan could not begin until a recognized government is in place that has the confidence of governments, lenders, and our company” [Emphasis added].

These developments indicate that the whole situation around September 11th can now be seen to have been part of a wider geo-strategic process of U.S. economic and political interests. While not conjuring up conspiracy theories, one can surmise that there was more to the incidents than meets the eye. It is reported that in mid-July 2001 senior U.S. officials told Niaz Naik, a former Pakistani Foreign Secretary, that military action was planned against the Taliban by mid-October 2001. Bush declared war against Afghanistan, though the Taliban had not ordered the attack on the U.S.; the U.S. government alleged that Osama Bin Laden, a Saudi national residing in Afghanistan, had ordered it. The U.S. action against Afghanistan resulted in the ouster of the Taliban regime and a change of government. Was this a calculated move or was it a genuine war against terrorism? Within a few months of the ouster of the Taliban regime, the U.S. government under President Bush quietly announced on January 31, 2002 that it would support the construction of the Trans-Afghanistan pipeline. Then on February 2, 2002 the Irish Times announced that President Musharraf of Pakistan (now popularly known as Busharraf) and the new Afghan leader, Hamid Karzai, had “announced an agreement to build the proposed gas pipeline from Central Asia to Pakistan via Afghanistan.” Although September 11th might have been an event that took place independently of the wishes of the U.S. oil interests in the area, the issues connected with the event were clearly interlinked [Onyango-Obbo, 2002: 8].

Africa in the “New World Order”

The events of September 11th have had a spectacular impact on the African continent. Although the terrorist attacks against the U.S. embassies in Kenya and Tanzania had signaled a new development for these countries in terms of their security, a threat posed by the U.S. presence itself, the issue was nevertheless seen as a distant one. In the new situation, and due to pressures from the U.S. government, the Organization of African Unity (OAU) in October 2001 quickly adopted a Declaration Against Terrorism, which had different connotations from the earlier initiatives by the African States themselves. At the same time, efforts were exerted to propose a Treaty on Terrorism in terms of the new definitions emanating from the U.S. Before September 11th, the OAU had in July 1999 adopted the Convention on the Prevention and Combating of Terrorism, which in article 1 condemned “all forms of terrorism” and appealed to member states to review their national legislation to establish criminal offences against those engaged in such acts. The Convention had gone a step further to define terrorism and to distinguish it from the legitimate use of violent struggle by individuals and groups. The Convention pointed out that political, philosophical, ideological, racial, ethnic, religious or other motives could not be used as justifiable defense for terrorism. Nevertheless, in article 3 (1) it declared:

Notwithstanding the provisions of article 1, the struggle waged by peoples in accordance with the principles of international law for their liberation or self-determination, including armed struggle against colonialism, occupation, aggression and domination by foreign forces shall not be considered as acts of terrorism.

It can be seen here that the African states had made some attempt to be objective about what constituted terrorism. But the events of September 11th seem to have turned the clock backwards. Soon after the attacks on the U.S., the U.S. National Security Adviser, Condoleezza Rice, reminded the African States that:

One of the most important and tangible contributions that Africa can make now is to make clear to the world that this war is one in which we are all united. … We need African nations, particularly those with large Muslim populations, to speak out at every opportunity to make clear … that this is not a war of civilizations. … Africa’s history and geography give it a pivotal role in the war. … Africa is uniquely positioned to contribute, especially diplomatically through your nations’ memberships in African and Arab and international organizations and fora, to the sense that this is not a war of civilizations. This is a war of civilizations against those who would be uncivilized in their approach towards us [Emphasis added].

Following this appeal, the OAU Central Organ in November 2001 issued a Communiqué on terrorism in which the organization “stressed that terrorism is a universal phenomenon that is not associated with any particular religion, culture or race.” It added that terrorism “constitutes a serious violation of human rights, in particular, the rights to physical integrity, life, freedom and security.” The Communiqué also added that terrorism “poses a threat to the stability and security of States; and impedes their socio-economic development.” The Communiqué further stressed that terrorism cannot be justified under any circumstances and consequently, it “should be combated in all its forms and manifestations, including those in which states are involved directly or indirectly, without regard to its origin, causes, and objectives.”

This Communiqué demonstrated sensitivity to the problem of terrorism because of the multiethnic, multireligious, multiracial, and multicultural composition of the continental organization. It specifically excluded the religious connotations that terrorism was having in the U.S. It included, to some extent, state-sponsored terrorism as part of the evils to be combated, “without regard to its origins, causes or objectives.” But in another sense, many states now began to respond to the dictates of the Bush administration in their understanding of the problem in order to curry favor with the U.S. Some African States initiated legislation directed at their internal opposition in terms of the new U.S. definitions of terrorism. Malawi, Zimbabwe, and Uganda were the first ones to do so.

Uganda, in particular, emphasized the fact that it had been fighting terrorism even before the U.S. began to do so consistently. It rushed legislation through parliament aimed at the legitimate opposition as well as at groups fighting the government by way of “armed struggle.” These groups fighting the government “in the bush” were listed and sent to the U.S. and the UNO to be included among terrorist organizations. The Lord’s Resistance Army (LRA) and the Allied Democratic Forces (ADF), fighting in different parts of Uganda, were now listed internationally as terrorist organizations. At the same time, a law against terrorism was also rushed through parliament, which the opposition regarded as being targeted against them. Soon, the government listed its opponents as “terrorists” to be treated as criminals in any part of the world.

These negative developments indicated the real impact on world affairs of the U.S. response to terrorism. The statement by Condoleezza Rice demonstrated the concerns of the U.S. government as to the role Africa could play in the “war.” But it missed the very important point that Africa was largely a Christian and Muslim continent, where these two civilizations met and intermingled with African traditional religions and civilizations. This combination has created a more racially, religiously, and culturally tolerant continent. Indeed, it is said that the American officials in Guinea were extremely impressed by the fact that on the very day of the attack against the U.S., the entire Cabinet of the government of Guinea, an overwhelmingly Muslim country, went in one body to the U.S. Embassy in Conakry to deliver their condolences to the American people. This single incident demonstrated that African Islam was important to the U.S. in moderating Islamic radicals on the continent.

The Pursuit of Oil in Africa

But the U.S.–in its usual way of “divide and rule” to maintain its hegemonic position in the world–has already seized on this positive African approach and tried to pit Africans against the Arabs on the issue of oil, to break solidarity within the Organization of the Petroleum Exporting Countries (OPEC). It is this hegemonic “divide and rule” imperialist strategy that turns “friends” into enemies whenever it pleases the U.S. government. In an earlier phase, this same approach, in pursuit of U.S. oil interests, used Saudi Arabia as a “friend” of the U.S. in order to weaken the Arab peoples’ cause for nationhood, but it is now turning against that country as the friendship no longer suits those interests.

On 25 January 2002, at a breakfast seminar sponsored by the Institute for Advanced Strategic and Political Studies (IASPS) entitled “African Oil: A Priority for U.S. National Security and African Development,” the State Department released information about projected U.S. strategies on oil and the growing importance of African oil to the U.S. economy. The U.S. officials present, among them the Assistant Secretary of State for African Affairs, Walter Kansteiner, added: “It is undeniable that this (oil) has become of national strategic interest to us.”

According to James Dunlop, an assistant to Kansteiner, who also spoke at the meeting, the United States was already getting 15 per cent of its oil imports from the African continent, and the figure was growing. A U.S. Air Force officer, Lt. Colonel Karen Kwiatkowski, a political/military officer assigned to the Office of the Secretary of Defense for African Affairs, confirmed that Africa was important to U.S. national security. She authoritatively added that she spoke as “a U.S. government policymaker in the area of sub-Saharan Africa and national security interests.” She tried to justify the shift in U.S. interests by pointing out that the U.S. relationship to African countries was non-colonial, based on a generally positive history. In this, she did not refer to the past relationship of the slave trade, which had a negative impact on today’s development prospects for Africa. What was important to the U.S. at this juncture was to try to woo African states in the new strategic game of U.S. “security interests,” and Africa’s oil had now become important to those security interests because the availability of Arab oil could no longer be relied upon. According to the U.S. National Intelligence Council’s “Global Trends 2015” report, which came out in December 2001 after the September attack, 25 per cent of U.S. oil imports in 2015 were projected to come from sub-Saharan Africa. The prime energy location sites were in West Africa, Sudan, and Central Africa.

In this respect, Africa was seen as important for the “diversification of our sources of imported oil” away from the “troubled areas of the Middle East and other politically high-risk areas.” In fact this drive to diversify sources of oil was behind the U.S. policy to bring about “regime change” in Iraq. In this context, the vast oil and gas reserves of Africa, Russia and the Asian Caspian regions had become critical for U.S. hegemony. The proven reserves of the African continent were said to be well over 30 billion barrels of oil, and over 40 different types of crude were available. Under current projections, the U.S. expects to import over 770 million barrels of African petroleum by the year 2020. U.S. investments in this direction were expected to increase so that by 2003 they would exceed $10 billion a year. Between two-thirds and three-fourths of U.S. direct investment in Africa would be in the energy sector, and this was expected to contribute to Africa’s economic development.

The U.S. has vigorously begun to pursue this policy in the Sudan and Nigeria. Recent U.S. peace moves in the Sudan are linked to this strategy. At a dinner honoring Reverend Leon Sullivan on the 20th of June 2002, President Bush stated that the U.S. would continue the search for peace in the Sudan, while at the same time seeking to end Sudan’s sponsorship of terrorism. He added:

Since September the 11th there is no question that the government of Sudan has made useful contributions in cracking down on terror. But Sudan can and must do more. And Sudan’s government must understand that ending and stopping its sponsorship of terror outside Sudan is no substitute for efforts to stop war inside Sudan. Sudan’s government cannot continue to block and manipulate U.N. food deliveries, and must not allow slavery to persist.

It was therefore imperative to put an end to the war in the Sudan in order to explore the vast oil resources in all of Sudan. It was estimated that 3-4 billion barrels of oil lay in the southern Sudd area of the country, which was under the control of the SPLM. The new anti-terrorism policy in Sudan, combined with the shift of U.S. strategic considerations away from the Middle East in terms of oil production, required that a peace settlement be worked on as a matter of priority, and this explains the role the U.S. played in bringing about the Machakos Peace Agreement between the government of Sudan and the SPLM in July 2002. Recently, the Sudanese government in the North reported that it had discovered a new oil source in the northern parts of the country. This suggested that the U.S. would in the future play the South against the North in order to assure itself of energy supplies. Hence its efforts to bring about peace in the Sudan were not wholly genuine.

As regards Nigeria, the U.S. government is said to be targeting the Gulf of Guinea as a replacement for the Persian Gulf as the future main source of U.S. oil imports. This region is now dubbed the “Africa Kuwait” in the U.S. strategic lexicon. A White Paper submitted to the U.S. Government by the African Oil Policy Initiative Group (AOPIG) pointed to the growing fear of insecurity in the continued supply of crude oil from the troubled Persian Gulf. According to Dr. Paul Michael Wihbey, a leading member and Fellow of the Institute for Advanced Strategic and Political Studies (IASPS), the U.S. expected to double its oil imports from Nigeria from 900,000 barrels per day to around 1.8 million barrels per day in the next five years. He pointed out that one major lesson of the September 11th terrorist attack was that the U.S. needed to diversify its major sources of oil away from the Persian Gulf. A Lagos newspaper quoted him as saying:

Statistics from the US Department of Energy showed African oil exports to the US will rise to 50 percent of total oil supply by 2015. Nigeria is the energy super power of Africa. The private sector, small and major operators, administration and officials, have come to realize that Nigeria and the Gulf of Guinea are of strategic importance to the US.

The U.S. government had in fact already begun discussions on the new initiative with the Nigerian government. An important factor creating a greater focus on Nigerian oil was that Nigeria had created an atmosphere of stability since the democratically elected government of President Olusegun Obasanjo had come to power. U.S. President George W. Bush visited Nigeria and four other African countries in the first quarter of 2003.

In fact all this made a lot of sense at the very time when the U.S. was distancing itself from Saudi Arabia, its former ally. A briefing to a Pentagon defense panel described Saudi Arabia as a “kernel of evil.” The Washington Post of August 6, 2002 reported that the briefing had described Saudi Arabia as the enemy of the U.S. Laurent Murawiec, in his July 10th 2002 briefing, is said to have stated: “The Saudis are active at every level of the terror chain, from planners to financiers, from ideologists to cheerleaders.” He added that Saudi Arabia supported U.S. enemies and also attacked U.S. allies. He described Saudi Arabia as “the kernel of evil, the prime mover, the dangerous opponent” in the Middle East. The Washington Post added that although the briefing did not reflect official U.S. policy, these views had “growing currency” within the Bush administration.

Yet in trying to play Africa against the Arab world, the U.S. was exploiting certain weaknesses within the African polity created by the European colonial strategy of “divide and rule.” The U.S. reasoned that its oil supplies were better assured in Africa than in the Arab world. One official argued that it would be difficult to find a Saddam Hussein in Africa. The reason was Africa’s political disunity, a consequence of the African political elite having accepted former colonial boundaries as sacrosanct. The U.S. could exploit these divisions even more, especially the “anglophone” and “francophone” divisions, which the U.S. and France could exploit to advance their interests. Moreover, it could also exploit the democracy and good governance cards to topple regimes that put roadblocks in its way.

It is clear that the U.S. had gained wide acceptance of its anti-terrorist policies among the majority of African States. There is also indication that although at the G8 Summit at Kananaskis (Alberta, Canada) the U.S. did not offer much by way of financial backing to the New Partnership for Africa’s Development (NEPAD), the U.S. and other members of the G8 had placed great importance on the NEPAD initiative if only because it gave Nigeria and South Africa a predominant voice in Africa’s affairs. It was believed that these two countries would bring the other African leaders under disciplined control through the Peer Review Mechanism on Good Governance, which the leaders had imposed on themselves as a condition for financial support for NEPAD.

Indeed, one of the very first “projects” under NEPAD was a project to fight terrorism. During his whistle-stop tour of West Africa in April 2002, the British Prime Minister, Tony Blair, acknowledged that the September 11th attacks on the United States had effected a real change in the way everybody looked at the world. In his address to the Ghana Parliament, Blair argued that increased financial support to Africa was part of the process of fighting “terrorism,” because engaging African states could reduce the risk of their becoming “breeding grounds for the kind of people who carried out the U.S. attacks.” He further argued: “If we leave failed states in parts of Africa, the problems sooner or later end up on our door step.” So the African countries are part and parcel of the September 11th alliance against terrorism, but continued African support will depend on how the U.S. plays its game, which is very dicey.

Africa Must Pursue Ubuntu Policy

The U.S. attack on Iraq has altered the situation somewhat. South Africa played a key role in developing an anti-Iraq war position for the African Union and the Non-Aligned Movement, which came out with strong statements against the war. President Mbeki of South Africa is chairperson of both organizations. Almost all the African states took a position against the war. The only exceptions were the so-called “New Breed” of African leaders from Uganda, Ethiopia, Eritrea and Rwanda. All these countries are involved in internal and cross-border conflicts themselves, and so it suits them to try to woo the U.S. in their wars against one another. Moreover, the anti-terrorism rhetoric of President Bush and the U.S. government also seems to help them to fight one another on the grounds that each is combating terrorism promoted by the other. This is not sustainable.

The U.S. also did not play its Iraq war game well with some African states. According to the investigative journalist Seymour Hersh, the C.I.A. Director, George Tenet, told a closed-door session of the Senate Foreign Relations Committee that between 1999 and 2001 Iraq had sought to buy 500 tonnes of uranium oxide from the African state of Niger, which would have been enough to build 100 nuclear bombs. This so-called connivance of Niger with Iraq was later used in the British government's "Iraq Dossier" to prove that Iraq had weapons of mass destruction. The same "fact sheet" was cited by President Bush in his State of the Union Address on the issue of Iraq. It was used to "prove" that since Iraq had tried to "cover up" this purchase, it was also lying about its program for developing weapons of mass destruction. According to Hersh, this story about Iraq's attempted purchase of uranium from Niger was used as "evidence" to convince the U.S. Congress to endorse military action against Iraq.

Less than two weeks before the initial U.S. bombing of Iraq, the head of the International Atomic Energy Agency, Mohamed ElBaradei, decisively discredited the accusations, which Niger had denied all along with no one paying attention. The documents allegedly exchanged between the governments of Niger and Iraq to confirm the deal were shown to have been forged. They consisted of a series of faked letters between Iraqi and Niger officials. One letter, dated July 2000, bore an amateurish forgery of the Niger president's signature. Another was sent over the name of a person identified as Niger's foreign minister, when that person had left the position ten years prior to the date of the letter!

The selection of Niger, a poor African country with little voice internationally, as the fall guy was also intentional. According to Hersh, the forgers assumed that it would be much more credible to implicate a poor African country than any one of the other three leading exporters of uranium oxide: Canada, Australia and Russia. While those countries could easily have disproved the charges, Niger lacked the means of persuading the world that the accusations were false.

It is very impressive that despite Africa's marginalisation and poverty, very few African states have been wooed into the "alliance of the willing." Most impressive was the refusal of Cameroon, Guinea and Angola, at the time the African non-permanent members of the Security Council, to bow to U.S. bullying and bribery in support of the alliance against Iraq. These examples go to show that small states can stand up to great-power pressure and maintain a new human morality based on a democratic world order. What the U.S. wanted to achieve in Iraq with high-tech "smart weapons" was to demonstrate to all that whatever the U.S. says "goes." This kind of political behavior would not be a world order, but an attempt to create world disorder.

Africa should therefore stand firm in support of the United Nations and in solidarity with the Arab world in these testing times, despite the fact that some Arab countries participated in the enslavement of African people and, indeed, continue to do so in Mauritania and Sudan. Africans continue to suffer at the hands of Arab enslavers, who are committing acts of genocide against them in these two countries. It is the duty of Africans to unite, to continue to resist these acts of inhumanity, and to pursue claims for reparations against those Arab countries that participated in this trade and that continue the slave trade even up to the present moment. At the same time Africa must insist that these and similar acts, including acts of terrorism and state terrorism against other peoples, be resolved on the basis of internationally agreed solutions grounded in principles of international law and Ubuntu.

These principles include truth, acceptance of responsibility, compensation and reparation for wrongs against other human beings, justice, and reconciliation. Ubuntu draws deeply from African civilisational values. According to Archbishop Desmond Tutu, who became chairman of the Truth and Reconciliation Commission of South Africa:

Africans have this thing called UBUNTU… the essence of being human. It is part of the gift that Africans will give the world. It embraces hospitality, caring about others, willing to go the extra mile for the sake of others. We believe a person is a person through another person, that my humanity is caught up, bound up and inextricable in yours. When I dehumanize you I inexorably dehumanize myself. The solitary individual is a contradiction in terms and, therefore, you seek to work for the common good because your humanity comes into its own community, in belonging [Mukanda & Mulemfo, 2000: 52-62].

These philosophic values called Ubuntu also draw from other cultures and civilizations. Ubuntu is the only civilized way we can manage problems and handle disputes in the twenty-first century, which should be a century of peace. Africa has therefore acted correctly in refusing to side with the U.S. in its war against Iraq. It is an unfair war. Such a war not only harms the interests of the Arab peoples but also has adverse consequences for international security, which affect African countries as well. Africa should also disassociate itself from the actions of the Bush administration in declaring Iran, Iraq, and North Korea to be the "Axis of Evil." African states should maintain contacts and relations with all these countries. Mzee Mandela gave a lead in responding to what the U.S. regarded as terrorism when it tried to isolate Libya over the Lockerbie aircraft-bombing affair. Mandela broke the blockade against Libya by visiting Tripoli despite U.S. protestations. By so doing he strengthened the African states, which also resolved to end the blockade through the Organization of African Unity (OAU). This African action made it possible for Libya to cooperate more willingly with the international community in resolving the dispute through the courts; Libya is now part of the alliance in the fight against terrorism under UN resolutions.

Furthermore, Mzee Nelson Mandela correctly refrained from endorsing Bush's blanket concept of "terrorism," qualifying it so that it did not apply to genuine cases of peoples' discontent. He argued that the right to self-determination and other peoples' rights should not be confused with terrorism. It is by ignoring these rights, as in the case of Palestine, that acts of violence occur, which some people may prefer to describe as terrorism. He explained that these kinds of violence are the result of frustrations arising out of the non-recognition of peoples' demands for the right to self-determination and peoples' democratic rights. Later he called Bush a "bully" after Bush dismissed Iraq's unconditional acceptance of the return of United Nations weapons inspectors, and he called on the U.S. to respect the United Nations. He also condemned those leaders in the world who kept quiet "when one country wants to bully the whole world."

This is the way forward. We cannot keep quiet in the face of the gimmicks of an outlaw behaving as if he were in the "Wild West" when it comes to the responsibilities of states to maintain peace and security in the world. While Saddam Hussein might himself have behaved like a bully, that is not the way he should be treated. The philosophy of "tooth for tooth, eye for eye" leaves all of us toothless and blind. We need a humane way of handling human affairs and a reasonable system of conflict management, control and resolution, which the Ubuntu philosophy offers. Therefore, the only civilized way of dealing with these issues is through the principles and spirit of Ubuntu in international relations.

The U.S. should emulate this African Ubuntu approach instead of following the path of violent confrontation with the Arab countries and Muslim political groups engaged in violence against it for causes that need to be addressed in a humane way. Violence begets violence, and those who are more powerful should be more guarded in resorting to its use. As the English proverb has it: "those who live in glass houses should not throw stones." This truism holds for the U.S. as well. Instead the U.S. should acknowledge the right of all peoples to self-determination, including the right of the Palestinian people, for whom the Bush administration has had little regard. We cannot afford to have one set of rules for the Palestinians and another set of rules for the Israelis. A completely new approach to the problems of the 21st century is required, and the answer lies in ensuring security for all in all its manifestations.

We agree with Francis Kornegay of the Center for Africa's International Relations, University of Witwatersrand, South Africa, when he suggests that Africa should be declared a zone of peace, which the African Union could monitor. This would be part of a doctrine in international relations based on the philosophy of Ubuntu, in which African states and peoples commit themselves to a continent that unites all the world's people by insulating itself from becoming a battleground in the war against terrorism, as has already happened in Kenya and Tanzania. In this direction, the U.S. has already named a number of countries in the Horn of Africa as part of its strategy of fighting terrorism on the African continent. African states should not collaborate in this scheme and should instead declare that the continent is "terrorism free" and a "zone of peace." But to do this, Africa would have to return to a strong commitment to the Non-Aligned Movement in solidarity with the Arab world as well as other parts of the oppressed world.

Conclusion

In conclusion, it should be pointed out that the attack on the U.S. on September 11th 2001 was directed at U.S. strategic interests, which it has developed since the end of World War II. The analysis here has shown that this policy has been developed against the interests of Third World peoples, whose resources are subjected to U.S. control and exploitation. The U.S. believes that, as leader of the "Free World," it has the responsibility to ensure global peace and security, and that to do this it needs to develop the resources of the entire world on a "free trade" basis. But, as we have seen, this has been achieved through manipulation and the use, and threat of use, of force against its weaker opponents in the Third World. The U.S. claims that its actions are motivated by the interests of the whole world, while at the same time claiming to be defending "civilization," which is a code word for western civilization and western interests.

Therefore, while it calls on the whole world not to permit al Qaeda to turn the present war against terrorism into a war of civilizations, it actually creates conditions that could ultimately turn such a conflict into a generalized conflict between civilizations on a global scale. The only answer therefore lies in insisting that all problems between countries, cultures, and civilizations be resolved through dialogue and negotiations that recognize the interests of all as equally important. We have to use organs of global dialogue such as the United Nations and global summits and conferences, through which agreements can be reached and implemented. It is for this reason that the UN Secretary-General, Kofi Annan, called for a dialogue between civilizations as a task of this century, if indeed the century is to be a peaceful one.

For the U.S. therefore to emphasize that the war against terrorism is not a civilisational one, while at the same time calling on the African states to agree that it is "a war of civilizations against those who would be uncivilized in their approach towards us," is to take Africa for granted and to try to use Africans against other peoples who may have genuine grievances against the U.S. It should be remembered that up to this point the U.S. government still regards Africans and their descendants in the United States as less than human beings and still treats them as uncivilized. Why? Because, alongside the other western powers and some of the Arab world, the U.S. refuses to consider demands for reparations for the exploitation and sufferings of those Africans who were enslaved by them and exploited as sub-humans in the building of the wealth which they now enjoy. Africa must push for dialogue on all these issues. The U.S. cannot have its cake and eat it. It cannot expect Africans to defend its civilization while at the same time refusing to compensate them for acts of inhuman behavior against them.

Global security in the 21st century requires that the security of one country become the security of another; security, in this new understanding, must be taken in its broadened sense to mean human security for all. As the Social Science Research Council has come to recognize, security concerns should no longer be seen in the context of the geopolitics of the Cold War period. The field of security considerations has changed greatly since the early 1980s with the increasing realization that threats to the security of individuals, communities and states around the world originate from a variety of sources other than the military dimension of great-power competition and rivalry, which characterized the Cold War. Such "small events" as localized wars, small-arms proliferation, ethnic conflicts, environmental degradation, international crime, and human rights abuses are all now being recognized as central to the understanding of security at local, national, regional, and global levels.

The U.S., just like all countries of the world, must adjust to this new reality and address all these different security concerns in order to create the conditions for security for all. It has now to be realized and accepted by all of us on this planet that security for "us" must mean security for "them" as well; otherwise there cannot be security for all. That must be the lesson we learn from the events of September 11th 2001. In short, September 11th requires us to embrace and enhance a holistic security consciousness that should inform global security policy based on Ubuntu.

References

Huntington, S [1997]: The Clash of Civilizations and the Remaking of World Order, Touchstone, New York.

Kennedy, P [1988]: The Rise and Fall of the Great Powers: Economic Change and Military Conflict From 1500 to 2000, Unwin Hyman, London.

Keohane, R O [1980]: “The Theory of Hegemonic Stability and Changes in International Economic Regimes, 1967-1977,” in Holsti, O. R, Siverson, R. M, and George, A. L (eds.) [1980]: Change in the International System, Westview Press, Boulder.

Mukanda and Mulemfo, M [2000]: Thabo Mbeki and the African Renaissance: The Emergence of a New African Leadership, Actua Press (Pty), Pretoria, South Africa.

Nabudere, D W [1979]: Essays in the Theory and Practice of Imperialism, Onyx Press, London.

Nabudere, D W [1990]: The Rise and Fall of Money Capital, AIT, London.

Onyango-Obbo, C [2002]: "Is USA that ignorant? So what do its young men in white shirts want here," Ear to the Ground column, The Monitor, Wednesday July 3, 2002.

Rangarajan, L [1984]: “The Politics of International Trade” in Strange, S (ed.) [1984]: Paths to International Political Economy, George Allen & Unwin, London.

Raghavan, C [1990]: Recolonisation: GATT, The Uruguay Round & The Third World, Zed/Third World Network, London.

Rashid, A [2000]: Taliban: Militant Islam, Oil and Fundamentalism in Central Asia, Yale University Press, New Haven.

Spero, J. E [1977, 1985]: The Politics of International Economic Relations, George Allen & Unwin, London.

Anti-Americanism: A Revisit

Rob Kroes, chair and professor of American studies at the University of Amsterdam, is the author of If You’ve Seen One You’ve Seen the Mall: Europeans and American Mass Culture, and Them and Us: Questions of Citizenship in a Globalizing World.

I. What “ism” is anti-Americanism?

What kind of “ism” is anti-Americanism? Like any “ism” it refers to a set of attitudes that help people to structure their world view and to guide their actions. It also implies a measure of exaggeration, a feverish over-concentration on one particular object of attention and action. Yet what is the object in the case of anti-Americanism? The word suggests two different readings. It could refer to anti-American feelings taken to the heights of an “ism,” representing a general rejection of things American. It can also be seen as a set of feelings against (anti) something called Americanism. In the latter case, we need to explore the nature of the Americanism that people oppose. As we shall see the word has historically been used in more than one sense. Yet whatever its precise meaning, Americanism – as an “ism” in its own right – has always been a matter of the concise and exaggerated reading of some characteristic features of an imagined America, as a country and a culture crucially different from places elsewhere in the world. In that sense Americanism can usefully be compared to nationalism.

In much the same way that nationalism implies the construction of the nation, usually one's own, in a typically inspirational vein, causing people to rally around the flag and other such emblems of national unity, Americanism helped an anguished American nation to define itself in the face of the millions of immigrants who aspired to citizenship status. Particularly in the period following World War I it became known as the "one hundred percent Americanism" movement, confronting immigrants with a demanding list of criteria for inclusion. Americanism in that form represented the American equivalent of the more general concept of nationalism. It was carried by those Americans who saw themselves as the guardians of the integrity and purity of the American nation. There is, however, another historical relationship of Americanism to nationalism. This time it is not Americans who are the agents of definition, but others in their respective national settings. Time and time again, other peoples' nationalisms not only cast their own nations in a particular inspirational light; they also used America as a counterpoint, a yardstick that other nations might either hope to emulate or feel bound to reject.

Foreigners as much as Americans themselves, therefore, have produced readings of America, condensed into the ideological contours of an “ism.” Of course, this is likely to happen only in those cases where America has become a presence in other peoples’ lives, as a political force, as an economic power, or through its cultural radiance. The years following World War I were one such watershed. Through America’s intervention in the war and the role it played in ordering the post-war world, through the physical presence of its military forces in Europe, and through the burst of its mass culture onto the European scene, Europeans were forced in their collective self-reflection to try and make sense of America, and to come to terms with its impact on their lives. Many forms of Americanism were then conceived by Europeans, sometimes admiringly, sometimes in a more rejectionist mood, often in a tenuous combination of the two. The following exploration will look at some such moments in European history, high points in the American presence in Europe, and at the complex response of Europeans.

Americanism and anti-Americanism

"Why I reject 'America.'" Such was the provocative title of a piece published in 1928 by Menno ter Braak, a young Dutch author who was to become a leading intellectual light in the Netherlands during the 1930s. The title is not a question but an answer, staking out his position toward an America in quotation marks, a construct of the mind, a composite image based on the perception of current dismal trends which the author then links to America as the country and the culture characteristically – but not uniquely – displaying them. It is not only outsiders, however, who are struck by such trends and reject them. Indeed, as Ter Braak himself admits, anyone sharing his particular sensibility and intellectual detachment he is willing to acknowledge as a European, "even if he happens to live on Main Street." It is an attitude for which he offers the striking parable of a young newspaper vendor whom he saw one day standing on the balcony of one of those pre-World War II Amsterdam streetcars, surrounded by the pandemonium of traffic noise, yet enclosed in a private sphere of silence. Amid the pointless energy and meaningless noise the boy stood immersed in the reading of a musical score, deciphering the secret code which admitted entrance to a world of the mind. This immersion, this loyal devotion to the probing of meaning and sense, to a heritage of signs and significance, are for Ter Braak the ingredients of Europeanism. It constitutes for him the quintessentially European reflex of survival against the onslaught of a world increasingly geared toward the tenets of rationality, utility, mechanization, and instrumentality, yet utterly devoid of meaning and prey to the forces of entropy. The European reaction is one that pays tribute to what is useless and unproductive, defending a quasi-monastic sphere of silence and reflexiveness amidst the whirl of secular motion.

This reflex of survival through self-assertion was of course a current mood in Europe during the interwar years, a Europe in ruins not only materially but spiritually as well. Amid the aimless drift of society’s disorganization and the cacophony of demands accompanying the advent of the masses on to the political agora, Americanism as a concept had come to serve the purpose of focusing the diagnosis of Europe’s plight. The impulse toward reassertion – toward the concentrated retrieval of meaning from the fragmented score of European history – was therefore mainly cultural and conservative, much as it was an act of protest and defiance at the same time. Many are the names of the conservative apologists we tend to associate with this mood. There is Johan Huizinga, the Dutch historian, who upon his return from his only visit to the United States at about the time that Ter Braak wrote his apologia, expressed himself thus: “Among us Europeans who were traveling together in America … there rose up repeatedly this pharisaical feeling: we all have something that you lack; we admire your strength but do not envy you. Your instrument of civilization and progress, your big cities and your perfect organization, only made us nostalgic for what is old and quiet, and sometimes your life seems hardly to be worth living, not to speak of your future” – a statement in which we hear resonating the ominous foreboding that “your future” might well read as “our [European] future.” For indeed, what was only implied here would come out more clearly in Huizinga’s more pessimistic writings of the late 1930s and early ’40s, when America became a mere piece of evidence in Huizinga’s case against contemporary history losing form.

Much as the attitude involved is one of a rejection of “America” and Americanism, what should strike a detached observer is the uncanny resemblance with critical positions that Americans had reached independently. Henry Adams of course is the perfect example, a prefiguration of Ter Braak’s “man on the balcony,” transcending the disparate signs of aimlessness, drift and entropy in a desperate search for a “useless” and highly private world of meaning. But of course his urgent quest, his cultural soul-searching, was much more common in America, was much more of a constant in the American psyche than Europeans may have been willing to admit. Cultural exhortation and self-reflection, under genteel or not-so-genteel auspices, were then as they are now a recurring feature of the American cultural scene. During one such episode, briefly centered around the cultural magazine The Seven Arts, James Oppenheim, its editor, pointed out that “for some time we have seen our own shallowness, our complacency, our commercialism, our thin self-indulgent kindliness, our lack of purpose, our fads and advertising and empty politics.” In this brief period, on the eve of America’s intervention in World War I, there was an acute awareness of America’s barren landscape, especially when measured by European standards. Van Wyck Brooks, one of the leading spokesmen of this group of cultural critics, pointed out that “for two generations the most sensitive minds in Europe – Renan, Ruskin, Nietzsche, to name none more recent – have summed up their mistrust of the future in that one word – Americanism.” He went on to say: “And it is because, altogether externalized ourselves, we have typified the universally externalizing influences of modern industrialism.”

Yet, in spite of these similarities, the European cultural critics may seem to argue a different case and to act on different existential cues: theirs is a highly defensive position in the face of a threat which is exteriorized, perceived as coming from outside, much as in fact it was immanent to the drift of European culture. What we see occurring is in fact the retreat toward cultural bastions in the face of an experience of a loss of power and control; it is the psychological equivalent of the defense of a national currency through protectionism. It is, so to speak, a manipulation of the terms of psychological trade. A clear example is Oswald Spengler’s statement in his Jahre der Entscheidung (Years of Decision): “Life in America is exclusively economic in its structure and lacks depth, the more so because it lacks the element of true historical tragedy, of a fate that for centuries has deepened and informed the soul of European peoples….” Huizinga made much the same point in his 1941 essay on the formlessness of history, typified by America. Yet Spengler’s choice of words is more revealing. In his elevation of such cultural staples as “depth” and “soul,” he typifies the perennial response to an experience of inferiority and backwardness of a society compared to its more potent rivals. Such was the reaction, as Norbert Elias has pointed out in his magisterial study of the process of civilization in European history, on the part of an emerging German bourgeoisie vis-à-vis the pervasive radiance of French civilization. Against French civilisation as a mere skin-deep veneer it elevated German Kultur as more deep-felt, warm and authentic. It was a proclamation of emancipation through a declaration of cultural superiority. Americanism, then, is the twentieth-century equivalent of French eighteenth-century civilisation as perceived by those who rose up in defense against it. It serves as the negative mirror image in the quest for a national identity through cultural self-assertion. Americanism in that sense is therefore a component of the wider structure of anti-Americanism, paradoxical as this may sound.

Americanism, un-Americanism, anti-Americanism

Let us dwell briefly on the conceptual intricacies of such related terms as Americanism, un-Americanism, and anti-Americanism. Apparently, as we have seen, Americanism as a concept can stand for a body of cultural characteristics deemed repugnant. Yet the same word, in a different context, can have a highly positive meaning, denoting the central tenets of the American creed, or of “American scripture,” as Michael Ignatieff would have it. Both, however, duly deserve their status of “isms”: both are emotionally charged code words in the defense of an endangered national identity. In the United States, as “one hundred percent Americanism,” it raised a demanding standard before the hordes of aliens aspiring to full membership in the American community while threatening the excommunication of those it defined as un-American. Americanism in its negative guise fulfilled much the same function in Europe, serving as a counterpoint to true Europeanism. In both senses, either positive or negative, the concept is a gate-keeping device, a rhetorical figure, rallying the initiates in rituals of self-affirmation.

Compared to these varieties of Americanism, relatively clear-cut both historically and sociologically, anti-Americanism appears as a strangely ambiguous hybrid. It never appears to imply – as the word suggests – a rejection across the board of America, of its society, its culture, its power. Although Huizinga and Ter Braak may have inveighed against Americanism, against an America in quotation marks, neither can be considered a spokesman of anti-Americanism in a broad sense. Both were much too subtle minds for that, in constant awareness of contrary evidence and redeeming features, much too open and inquiring about the real America, as a historical entity, to give up the mental reserve of the quotation mark. After all, Ter Braak's closing lines are: "'America' I reject. Now we can turn to the problem of America." And the Huizinga quotation above, already full of ambivalence, continues thus: "And yet in this case it must be we who are the Pharisees, for theirs is the love and the confidence. Things must be different than we think."

Now where does that leave us? Both authors were against an Americanism as they negatively constructed it. Yet that does not meaningfully make their position one of anti-Americanism. There was simply too much intellectual puzzlement and, particularly in Huizinga's case, too much admiration and real affection, too much appreciation of an Americanism that had inspired American history. Anti-Americanism, then, if we choose to retain the term at all, should be seen as a weak and ambivalent complex of anti-feelings. It applies only selectively, never extending to a total rejection of both Americanisms. Thus we can have either of two separate outcomes: an anti-Americanism rejecting cultural trends which are seen as typically American, while allowing admiration for America's energy, innovation, prowess, and optimism; or an anti-Americanism in reverse, rejecting an American creed that for all its missionary zeal is perceived as imperialist and oppressive, while admiring American culture, from its high-brow to its pop varieties. These opposed directions in the critical thrust of anti-Americanism often go hand in hand with opposed positions on the political spectrum. The cultural anti-Americanism of the inter-war years was typically a conservative position, whereas the political anti-Americanism of the Cold War and the war in Vietnam typically occurred on the left wing. Undoubtedly the drastic change in America's position on the world stage since World War II has contributed to this double somersault. Since that war America has appeared in a radically different guise, as much more of a potent force in everyday life in Europe than ever before. This leads us to explore one further nexus among the various concepts.

The late 1940s and '50s may have been a honeymoon in the Atlantic relationship, yet throughout the period there were groups on the left loath to adopt the unfolding Cold-War view of the world; they were the nostalgics of the anti-Nazi war alliance with the Soviet Union, a motley array of fellow travelers, third roaders, Christian pacifists, and others. Their early critical stance toward the United States revealed yet another ambivalent breed of anti-Americanism. In their relative political isolation domestically, they tended to identify with precisely those who in America were being victimized as un-American in the emerging Cold-War hysteria of loyalty programs, House Un-American Activities Committee (HUAC) inquiries, and McCarthyite persecution. In their anti-Americanism they were the ones to rally to the defense of Alger Hiss and the Rosenbergs, Ethel and Julius, and their many American supporters. Affiliating with dissenters in America, their anti-Americanism combined with the alleged un-Americanism of protest in the United States to form a sort of shadow Atlantic partnership. It is a combination that would occur again in the late sixties, when political anti-Americanism in Europe, occasioned by the Vietnam War, found itself in unison with a generation in the United States engaged in anti-war protest and the counter-culture of the time, burning US flags along with their draft cards as so many demonstrations of a domestic anti-Americanism that many among Nixon's "silent majority" at the time may have deemed un-American. As bumper stickers of the time reminded protesters: America, Love It Or Leave It.

The disaffection from America during the Vietnam War may have appeared to stand for a more lasting value shift on both sides of the Atlantic. The alienation and disaffection of this emerging adversary culture proved much more short-lived in America, however, than it did in Europe. The return to a conservative agenda, cultural and political, in America since the end of the Vietnam War never occurred in any comparable form in the countries of Europe. There indeed the disaffection from America has become part of a much more general disaffection from the complexities and contradictions of modern society. The squatters' movement in countries such as Germany, Denmark, or the Netherlands, the ecological (or Green) movement, the pacifist movement (particularly in the 1980s during the Cruise Missile debate), and more recently the anti-globalization movement have all become the safe havens of a dissenting culture, highly apocalyptic in its view of the threat which technological society poses to the survival of mankind. And despite the number and variety of anti-feelings of these adversary groups, America can once again serve each and all of them as a symbolic focus. Thus, in this recent stage, it appears that anti-Americanism can be not only too broad a concept, as pointed out before – a configuration of anti-feelings that never extends to all things American – it can also be too narrow, in that the "America" which one now rejects is really a code word – a symbol – for a much wider rejection of contemporary society and culture. The more diffuse and anomic these feelings are, the more readily they seem to find a cause to blame. Whether or not America is involved in an objectionable event – and given its position in the world it often is – there is always a nearby McDonald's to bear the brunt of anger and protest, and to have its windows smashed. If this is anti-Americanism, it is of a highly inarticulate, if not irrational, kind.

II. Cultural Anti-Americanism: Two French Cases

"Nous sommes tous américains." We are all Americans. Such was the rallying cry of the French newspaper Le Monde's editor-in-chief, Jean-Marie Colombani, published two days after the terrorist attack against symbols of America's power. He went on to say: "We are all New Yorkers, as surely as John Kennedy declared himself, in 1962 in Berlin, to be a Berliner." If that was one historical resonance that Colombani himself called forth for his readers, there is an even older use of this rhetorical call to solidarity that may come to mind. It is Jefferson's call for unity after America's first taste of two-party strife. Leading opposition forces to victory in the presidential election of 1800, he assured Americans that "We are all Republicans, we are all Federalists," urging his audience to rise above the differences that many at the time feared might divide the young nation against itself. There would clearly be no need for such a ringing rhetorical call if there were not at the same time an acute sense of difference and division. The same holds for Colombani's timely expression of solidarity with an ally singled out for vengeful attack solely because it, more than any of its allies, had come to represent the global challenge posed by a shared Western way of life. An attack against America was therefore an attack against common values held dear by all who live by standards of democracy and the type of open society that it implies. But as in Jefferson's case, the rhetorical urgency of the call for solidarity suggests a sense of difference and divisions now to be transcended, or at least temporarily shunted aside.

As we all know, there is a long history that illustrates France's long and abiding affinity with America's daring leap into an age of modernity. France shared America's fascination with the political modernity of republicanism, of democracy and egalitarianism, with the economic modernity of progress in a capitalist vein, and with an existential modernity that saw Man, with a capital M and in the gender-free sense of the word, as the agent of history, the molder of his social life as well as of his own individual identity and destiny. It was after all a Frenchman, Crèvecoeur, who on the eve of American independence pondered the question "What, then, is the American, this new Man?" A long line of French observers have, in lasting fascination, commented on this American venture, seeing it as a trajectory akin to their own hopes and dreams for France. Similarly, French immigrants in the United States, in order to legitimize their claims to ethnic specificity, have always emphasized the historical nexus of French and American political ideals, elevating Lafayette alongside George Washington to equal iconic status.

But as we also know, there is an equally long history of French awareness of American culture taking directions that were seen as a threat to French ways of life and views of culture. Whether it was Tocqueville's more sociological intuition of an egalitarian society breeding cultural homogeneity and conformism, or later French views that sought the explanation in the economic logic of a free and unfettered market, their fear was of an erosion of the French cultural landscape, of French standards of taste and cultural value. As I have argued elsewhere, the French were not alone in harboring such fears, but they have been more consistently adamant in making the case for a defense of their national identity against a threatening process of Americanization. The very word is a French coinage. It was Baudelaire who, on the occasion of the 1855 Exposition Universelle de Paris, spoke of modern man, set on a course of technical materialism, as "tellement américanisé … qu'il a perdu la notion des différences qui caractérisent les phénomènes du monde physique et du monde moral, du naturel et du surnaturel" (so Americanized that he has lost the notion of the differences that characterize the phenomena of the physical world and the moral world, of the natural and the supernatural). The Goncourt brothers' Journal, from the time of the second exposition in 1867, refers to "L'exposition universelle, le dernier coup à ce qui est l'américanisation de la France" (the universal exposition, the final blow to what is the Americanization of France). As these critics saw it, industrial progress ushered in an era where quantity would replace quality and where a mass culture feeding on standardization would erode established taste hierarchies. There are echoes of Tocqueville here, yet the eroding factor is no longer the egalitarian logic of mass democracy but the logic of industrial progress. In both cases, however, whatever the precise link and evaluating angle, America had become the metonym for unfettered modernity, like a Prometheus unbound.

Europeans, French observers included, have always been perplexed by two aspects of the American way with culture – two aspects that to them represented the core of America’s cultural otherness – one its crass commercialism, the other its irreverent attitude of cultural bricolage, recycling the culturally high and low, the vulgar and the sublime, in ways unfamiliar and shocking to European sensibilities. As for the alleged commercialism, what truly strikes Europeans is the blithe symbiosis between two cultural impulses that Europeans take to be incompatible: a democratic impulse and a commercial one. From early on American intellectuals and artists agreed that for American culture to be American it must needs be democratic. It should appeal to the many, not the few. Setting itself up in contradistinction to Europe’s stratified societies and the hierarchies of taste they engendered, America proclaimed democracy for the cultural realm as well. That in itself was enough to make Europeans frown. Could democratic culture ever be anything but vulgar, ever be more than the largest common denominator of the people’s tastes? Undeniably, there were those in Europe who agreed with Americans that cultural production there could not simply follow in the footsteps of Europeans, and who were willing to recognise an American Homer in Walt Whitman, America’s poet of democracy. But even they were aghast at the ease with which the democratic impulse blended into the commercial. What escaped them was that in order to reach a democratic public, the American artist found himself in much the same situation as a merchant going to market. If America was daring in its formation of a mass market for goods that it produced en masse, it was equally daring in its view that cultural production in a democratic vein needed to find its market, its mass audience. In the absence of forms of European cultural sponsorship, it needed to make its audiences, to create its own cultural market, if only with a view to recouping the cost of cultural production. Particularly in the age of mechanical reproduction when the market had to expand along with the growth in cultural supply, American culture became ever more aware of the commercial calculus. And by that same token, it became ever more suspect in the eyes of European critics. Something made for profit, for money, could inherently never be of cultural value. This critical view has a long pedigree and is alive and well today.

The other repertoire of the European critique of American mass culture focuses on its spirit of blithe bricolage, on its anti-canonical approach to questions of high culture versus low culture, and on matters of the organic holism of cultural forms. Again, some Europeans were tempted, if not convinced, by Whitmanesque experiments in recognising and embracing the elevated in the lowly, the vulgar in the sublime, or by his experiments in formlessness. They were willing to see in this America's quest for a democratic, if not demotic, culture. But in the face of America's shameless appropriation of the European cultural heritage, taking it apart and re-assembling it in ways that went against European views of the organic wholeness of their hallowed heritage, Europeans begged to differ. To them, the collage or re-assemblage attitude that produced Hearst Castle, Caesar's Palace, or the architectural jumble of European quotations in some of America's high-rise buildings seemed proof that Americans could only look at European culture in the light of contemporaneity, as if it were one big mail-order catalog. It was all there at the same time, itemized and numbered, for Americans to pick and choose from. It was all reduced to the same level of usable bits and pieces, to be recycled, re-assembled, and quoted at will. Many European critics have seen in this an anti-historical, anti-metaphysical, or anti-organicist bent of the American mind. When Huizinga was first introduced, in the 1920s, to the Dewey Decimal System used to organize library holdings, he was aghast at the reduction of the idea of a library, an organic body of knowledge, to the tyranny of the decimal system, to numbers. Others, like Charles Dickens or Sigmund Freud, more facetiously, saw American culture as reducing cultural value to exchange value, the value of dollars. Where Europeans tend toward an aesthetics that values closure, rules of organic cohesion, Americans tend to explode such views. If they have a canon, it is one that values open-endedness in the re-combination of individual components. They prefer constituent elements over their composition. Whether in television or American football, European ideas of flow and continuity get cut up and jumbled, in individual time slots as on TV, or in individual plays as in football. Examples abound, and will most likely come to your mind "even as I speak" (to use American television lingo).

Now, potentially, the result of this bricolage view of cultural production might be endless variety. Yet what Europeans tended to see was only spurious variety, fake diversity, a lack of authenticity. A long chorus of French voices, from Georges Duhamel and François Mauriac in the interwar years, to Jean-Paul Sartre and more particularly Simone de Beauvoir after World War II, in the 1940s and '50s, kept this litany resounding. At one point Simone de Beauvoir even borrowed from David Riesman, an American cultural critic, to make a point she considered her own. She referred to the American people as "un peuple de moutons" (a nation of sheep), conformist, and "extéro-conditionnés," French for Riesman's "other-directed." At other points she could see nothing but a lack of taste, if not slavishness, in American consumerism.

Such French views are far from dated. They still inform current critiques of contemporary mass culture. Yet, apparently, the repertoire is so widespread and well-known that often no explicit mention of America is needed any more. America has become a subtext. In the following I propose to give two examples, both of them French. One illustrates the dangers of commercialism in the production of culture, the other the baneful effects of America's characteristic modularizing mode of cultural production, its spirit of bricolage.

Commercialism and culture

In our present age of globalization, with communication systems such as the Internet spanning the globe, national borders have become increasingly porous. They no longer serve as cultural barriers that one can raise at will to fend off cultural intrusions from abroad. It is increasingly hard to erect them as a cultural “Imaginot” line (forgive the pun) in defense of a national cultural identity. Yet old instincts die hard. In a typically preemptive move, France modernized its telephone system in the 1980s, introducing a communication network (the Minitel) that allowed people to browse and shop around. It was a network much like the later World Wide Web. The French system was national, however, and stopped at the border. At the time it was a bold step forward, but it put France at a disadvantage later on, when the global communications revolution got under way. The French were slower than most of their European neighbors to connect to the Internet. And that may have been precisely the point.

At every moment in the recent past when the liberalization of trade and flows of communication was being discussed in international meetings, the French raised the issue of cultural protection. They have repeatedly insisted on exempting cultural goods, such as film and television, from the logic of free trade. They do this because, as they see it, France represents cultural “quality” and therefore may help to maintain diversity in the American-dominated international market for ideas. The subtext for such defensive strategies is not so much the fear of opening France’s borders to the world but rather fear of letting American culture wash across the country. Given America’s dominant role in world markets for popular culture, as well as its quasi-imperial place in the communications web of the Internet, globalization to many French people is a Trojan horse. For many of them, globalization means Americanization.

Not too long ago the French minister of culture published a piece in the French daily newspaper Le Monde, again making the French case for a cultural exemption from free-trade rules. A week later one of France's leading intellectual lights, Pierre Bourdieu, joined the fray in a piece published in the same newspaper. It was the text of an address delivered on October 11, 1999, to the International Council of the Museum of Television and Radio in Paris. He chose to address his audience as "representing the true masters of the world," those whose control of global communication networks gives them not political or economic power but what Bourdieu called "symbolic power," that is, power over people's minds and imaginations gained through cultural artifacts – books, films, and television programs – that they produce and disseminate. This power is increasingly globalized through international market control, mergers and consolidations, and a revolution in communications technology. Bourdieu briefly considered the fashionable claim that the newly emerging structures, aided by the digital revolution, will bring endless cultural diversity, catering to the cultural demands of specific niche markets. He rejected this out of hand; what he saw was an increasing homogenization and vulgarization of cultural supply driven by a logic that is purely commercial, not cultural. Aiming at profit maximization, market control, and ever larger audiences, the "true masters of the world" gear their products to the largest common denominator that defines their audience. What the world gets is more soap operas, expensive blockbuster movies organized around special effects, and books whose success is measured by sales, not by intrinsic cultural merit.

It is a Manichaean world that Bourdieu conjured up. True culture, as he saw it, is the work of individual artists who view their audience as posterity, not the throngs at the box office. In the cultural resistance that artists have put up over the centuries against the purely commercial view of their work, they have managed to carve out a social and cultural domain whose organizing logic is at right angles to that of the economic market. As Bourdieu put it: "Reintroducing the sway of the 'commercial' in realms that have been set up, step by step, against it means endangering the highest works of mankind." Quoting Ernst Gombrich, Bourdieu said that when the "ecological prerequisites" for art are destroyed, art and culture will not be long in dying. After voicing a litany of cultural demise in the film industries of a number of European countries, he lamented the fate of a cultural radio station about to be liquidated "in the name of modernity," fallen victim to Nielsen ratings and the profit motive. "In the name of modernity" indeed. Never in his address did Bourdieu rail against America as the site of such dismal modernity, yet the logic of his argument is reminiscent of many earlier French views of American culture, a culture emanating from a country that never shied from merging the cultural and the commercial (or, for that matter, the cultural and the democratic). Culture, as Bourdieu defended it, is typically high culture. Interestingly, though, unlike many earlier French criticisms of an American culture that reached Europe under commercial auspices, Bourdieu's defense was not of national cultures, more specifically the French national identity, threatened by globalization. No, he argued, the choice is between "the kitsch products of commercial globalization" and those of an international world of creative artists in literature, visual arts, and cinematography, a world that knows many constantly shifting centers. Yet blood runs thicker than water. Great artists, and Bourdieu listed several writers and filmmakers, "would not exist the way they do without this literary, artistic, and cinematographic international whose seat is [present tense!] situated in Paris. No doubt because there, for strictly historical reasons, the microcosm of producers, critics, and informed audiences, necessary for its survival, has long since taken shape and has managed to survive." Bourdieu thus managed to have his cake and eat it too, arrogating to Paris a place as the true seat of modernity in high culture. In his construction of a global cultural dichotomy lurks an established French parti pris. More than that, however, his reading of globalization as Americanization by stealth blinded him to the way in which French intellectuals and artists before him had discovered, adapted, and adopted forms of American commercial culture, such as Hollywood movies.

In his description of the social universe that sustains a cultural international in Paris, Bourdieu mentioned the infrastructure of art-film houses, of a cinémathèque, of eager audiences and informed critics, such as those writing for the Cahiers du cinéma. He seemed oblivious to the fact that in the 1950s precisely this potent ambience for cultural reception led to the French discovery of Hollywood movies as true examples of the “cinéma d’auteur,” of true film art showing the hand of individual makers, now acclaimed masters in the pantheon of film history. Their works are held and regularly shown in Bourdieu’s vaunted cinémathèque and his art-film houses. They were made to work, like much other despised commercial culture coming from America, within frameworks of cultural appropriation and appreciation more typically French, or European, than American. They may have been misread in the process as works of individual “auteurs” more than as products of the Hollywood studio system. That they were the products of a cultural infrastructure totally at variance with the one Bourdieu deemed essential may have escaped French fans at the time. It certainly escaped Bourdieu.

The modularizing mind and the World Wide Web

Among other dreams the Internet has inspired those of a return to a world of total intertextuality, of the reconstitution of the full body of human thinking and writing. It would be the return to the “City of Words,” the labyrinthine library that, like a nostalgic recollection, has haunted the human imagination since the age of the mythical library of Babylon. Tony Tanner used the metaphor of the city of words to describe the central quest inspiring the literary imagination of the 20th century. One author who, for Tanner, epitomizes this quest is Jorge Luis Borges. It is the constructional power of the human mind that moves and amazes Borges. His stories are full of the strangest architecture, including the endless variety of lexical architecture to which man throughout history has devoted his time – philosophical theories, theological disputes, encyclopaedias, religious beliefs, critical interpretations, novels, and books of all kinds. While having a deep feeling for the shaping and abstracting powers of man’s mind, Borges has at the same time a profound sense of how nightmarish the resultant structures might become. In one of his stories, the library of Babel is referred to by the narrator as the “universe” and one can take it as a metaphysical parable of all the difficulties of deciphering man’s encounters in existence. On the other hand Babel remains the most famous example of the madness in man’s rage for architecture, and books are only another form of building. In this library every possible combination of letters and words is to be found, with the result that there are fragments of sense separated by “leagues of insensate cacophony, of verbal farragos and incoherencies.” Most books are “mere labyrinths of letters.” Since everything that language can do and express is somewhere in the library, “the clarification of the basic mysteries of humanity … was also expected.” The “necessary vocabularies and grammars” must be discoverable in the lexical totality. Yet the attempt at discovery and detection is maddening; the story is full of the sadness, sickness and madness of the pathetic figures who roam around the library as around a vast prison.

What do Borges's fantasies tell us about the Promethean potential of a restored city of words in cyberspace? During an international colloquium in Paris at the Bibliothèque Nationale de France, held on June 3rd and 4th, 1998, scholars and library presidents discussed the implications of a virtual memory bank on the Internet, connecting the holdings of all great libraries in the world. Some saw it as a dream come true. In his opening remarks Jean-Pierre Angremy referred to the library of Babel as imagined by Borges, while ignoring its nightmarish side: "When it was proclaimed that the library would hold all books, the first reaction was one of extravagant mirth. Everyone felt like mastering an intact and secret treasure." The perspective, as Angremy saw it, was extravagant indeed. All the world's knowledge at your command, like an endless scroll across your computer screen. Others, like Jacques Attali, spiritual father of the idea of digitizing the holdings of the new Bibliothèque Nationale, took a similarly positive view. Whatever the form of the library, real or virtual, it would always be "a reservoir of books." Others weren't so sure. They foresaw a mutation of our traditional relationship toward the written text, where new manipulations and competences would make our reading habits as antiquated as the reading of papyrus scrolls is to us.

Ironically, as others pointed out, texts as they now appear on our screens are like a throwback to the reading of scrolls, and may well affect our sense of the single page. In the printed book every page comes in its own context of pages preceding and following it, suggesting a discursive continuity. On the screen, however, the same page becomes the interchangeable element of a virtual data bank that one enters by means of a keyword that opens many books at the same time. All information is thus placed on the same plane, without the logical hierarchy of an unfolding argument. As Michel Melot, long-time member of the Conseil supérieur des bibliothèques, pointed out, randomness becomes the rule. The coherence of traditional discursive presentation will tend to give way to what is fragmented, incomplete, disparate, if not incoherent. In his view, the patchwork or cut-and-paste approach will become the dominant mode of composition.

These darker views are suggestive of a possible American imprint on the Internet. They are strangely reminiscent of an earlier cultural critique in Europe of the ways in which American culture would affect European civilization. In particular, the contrast drawn between reading traditional books and reading texts downloaded from the Net recalls a contrast between Europe and America that is a staple in the work of many European critics of American culture. Europe, in this view, stands for organic cohesion, for logical and stylistic closure, whereas America tends towards fragmentation and recombination, in a mode of blithe cultural bricolage, exploding every prevailing cultural canon in Europe. Furthermore, we recognise the traditional European fear of American culture as a leveling force, bringing everything down to the surface level of the total interchangeability of cultural items, oblivious to their intrinsic value and to cultural hierarchies of high versus low.

Yet in the views expressed at the Paris symposium, we find no reference to America. Is this because America is a sub-text, a code instantly recognised by French intellectuals? Or is it because the logic of the Internet and of digital intertextuality has a cultural impact in its own right, similar to the impact of American culture, but this time unrelated to any American agency? I would go no further at this point than to suggest a Weberian answer. It seems undeniably the case that there is a Wahlverwandtschaft – an elective affinity – between the logic of the Internet and the American cast of mind, which makes for an easier, less anguished acceptance and use of the new medium among Americans than among a certain breed of Europeans.

Having reviewed these two exhibits of cultural anti-Americanism as a subtext, taking French attitudes as its typical expression, what conclusions can we draw? One is that fears of an American way with culture, due to either its commercial motives or its modularizing instincts, are too narrow, too hidebound. Discussing Bourdieu’s views, I mentioned counter-examples where compatriots of his, in the 1950s, thoroughly re-evaluated the body of cinematography produced in Hollywood. They moved it up the French hierarchy of taste and discovered individual auteurs where the logic of established French views of commercial culture would have precluded the very possibility of their existence. This is a story that keeps repeating itself. Time and time again French artists and intellectuals, after initial neglect and rejection, have discovered redeeming cultural value in American jazz, in American hard-boiled detective novels, in rap music, in Disney World, and in other forms of American mass culture. What they did, and this may have been a typically French, or more generally European, achievement, was to develop critical lexicons, constructing canonic readings of American cultural genres. It is a form of cultural appropriation, making forms of American culture part of a European critical discourse and measuring them against European taste hierarchies. It is a process of subtle and nuanced appropriation that takes us far beyond any facile, across-the-board rejection of American culture due to its commercial agency.

What about the second ground for rejection, America’s blithe leveling of cultural components into interchangeable bits and pieces? As I argued in my review of the second exhibit, America may be more daring when it ventures into this field, yet we can find parallels and affinities with Europe’s cultural traditions. A catalytic disenchantment of the world, as part of a larger secularization of Europe’s Weltanschauung, had been eating away at traditional views of God-ordained order before Americans joined in. Again, facile rejections of what many mistakenly see as Americanization by stealth, when confronted with more radical manifestations of the modularization of the world, miss the point. I suggested the possibility that what the World Wide Web brings us in terms of endless digital dissection and re-assemblage of “texts” may have more to do with the inherent logic of the digital revolution than with any American agency. A more or less open aversion to this development should therefore be seen as anti-modernity rather than as anti-Americanism. It reveals a resentment of the relentless modernization of our world, a resentment that has found a continuing voice of protest throughout the history of Western civilization.

It is a resentment, though, that should make us think twice. Clearly, we are not all Americans. We do not all freely join their Promethean exploration of the frontier of modernity. This is not the same as saying that those who are not “Americans” are therefore the Bin Ladens in our midst. But their resentment is certainly akin to what, in other parts of the world, has turned into blind hatred of everything Western civilization stands for.

III. Anti-Americanism and American power

New York, 9/11, one year later. While I am writing this, the events of a year ago are being remembered in a moving, simple ceremony. The names of all those who lost their lives in the towering inferno of the World Trade Center are being read aloud. Their names appropriately reflect what the words World Trade Center conjure up; they are names of people from all over the world, from Africa, the Middle East, the Far East, the Pacific, Latin America, Europe, and, of course, North America – people of many cultures and many religions. Again the whole world is watching, and I realize suddenly that something remarkable is happening here. The American mass media record an event staged by Americans. They powerfully re-appropriate a place where a year ago international terrorism was in charge. They literally turn the site into a lieu de mémoire. They are, in the words of Lincoln’s Gettysburg Address, read again on this occasion, consecrating the place. They imbue it with the sense and meaning of a typically American scripture. It is the language that, for over two centuries, has defined America’s purpose and mission in the ringing words of freedom and democracy.

I borrow the words “American scripture” from Michael Ignatieff. He used them in a piece he wrote for a special issue of Granta. He is one of twenty-four writers from various parts of the world who contributed to a section entitled “What We Think of America.” Ignatieff describes American scripture as “the treasure house of language, at once sacred and profane, to renew the faith of the only country on earth (…) whose citizenship is an act of faith, the only country whose promises to itself continue to command the faith of people like me, who are not its citizens.” Ignatieff is a Canadian. He describes a faith and an affinity with American hopes and dreams that many non-Americans share. Yet, if the point of Granta‘s editors was to explore the question of “Why others hate us, Americans,” Ignatieff’s view is not of much help. In the outside world after 9/11, as Granta‘s editor, Ian Jack, reminds us, there was a widespread feeling that “Americans had it coming to them,” that it was “good that Americans now know what it’s like to be vulnerable.” For people who share such views American scripture deconstructs into hypocrisy and willful deceit.

There are many signs in the recent past of people’s views of America shifting in the direction of disenchantment and disillusionment. Sure enough, there were fine moments when President Bush rose to the occasion and used the hallowed words of American scripture to make clear to the world and his fellow-Americans what terrorism had truly attacked. The terrorists’ aim had been more than symbols of American power and prowess. It had been the very values of freedom and democracy that America sees as its foundation. These were moments when the president seemed to rise above himself. But it was never long before he showed a face of America that had already worried many long-time friends and allies during Bush’s first year in office.

Even before September 11th, the Bush administration had signaled its retreat from the internationalism that had consistently inspired US foreign policy since World War II. Ever since Woodrow Wilson, American scripture had also implied the vision of a world order that would forever transcend the lawlessness of international relations. Many of the international organizations that now serve to regulate inter-state relations bear a markedly American imprint, and spring from American ideals and initiatives. President Bush Sr., in spite of his avowed aversion to the “vision thing,” nevertheless deemed it essential to speak of a New World Order when, at the end of the Cold War, Saddam Hussein’s invasion of Kuwait seemed to signal a relapse into a state of international lawlessness. Bush Jr. takes a narrower, national-interest view of America’s place in the world. In an unabashed unilateralism he has moved United States foreign policy away from high-minded idealism and the arena of international treaty obligations. He is actively undermining the fledgling International Criminal Court in The Hague, rather than taking a leadership role in making it work. He displays a consistent unwillingness to play by rules internationally agreed to and to abide by decisions reached by international bodies that the United States itself has helped set up. He squarely places the United States above or outside the reach of international law, seeing himself as the sole and final arbiter of America’s national interest.

After September 11th this outlook has only hardened. The overriding view of international relations in terms of the war against terrorism has led the United States to ride roughshod over its own Constitutional protection of civil rights as well as over its international treaty obligations under the Geneva Conventions in the way it handles individuals, US citizens among them, suspected of links to terrorist networks. Seeing anti-terrorism as the one way to define who is with America or against it, President Bush takes forms of state terrorism, whether in Russia against the Chechens or in Israel against the Palestinians, as so many justified anti-terrorist efforts. He gives them his full support and calls Sharon a “man of peace.” If Europeans beg to differ and wish to take a more balanced view of the Israeli-Palestinian conflict, the Bush administration and many op-ed voices blame European anti-Semitism.

This latter area is probably the one where the dramatic, if not tragic, drifting apart of America and Europe comes out most starkly. It testifies to a slow separation of the terms of public debate. Thus, to give an example, in England the chief rabbi, Jonathan Sacks, said that many of the things Israel did to the Palestinians flew in the face of the values of Judaism: “(They) make me feel very uncomfortable as a Jew.” He had always believed, he said, that Israel “must give back all the land (taken in 1967) for the sake of peace.” Peaceniks in Israel, like Amos Oz, hold similar views. And so do many in Europe, Jews and non-Jews alike. Yet it would be hard to hear similar views expressed in the United States. There is a closing of ranks, among American Jews, the religious right, opinion leaders, and Washington political circles, behind the view that everything Israel does to the Palestinians is done in legitimate self-defense against acts of terrorism. Yet, clearly, if America’s overriding foreign-policy concern is the war against terrorism, one element tragically lacking in public statements of its Middle East policy is the attempt to look at itself through the eyes of Arabs, or more particularly Palestinians. A conflation seems to have occurred between Israel’s national interest and that of the United States. Both countries share a definition of the situation that blinkers them to rival views that are more openly discussed in Europe.

Among the pieces in Granta is one by a Palestinian writer, Raja Shehadeh. He reminds the reader that “today there are more Ramallah people in the US than in Ramallah. Before 1967 that was how most Palestinians related to America – via the good things about the country that they heard from their migrant friends and relations. After 1967, America entered our life in a different way.” The author goes on to say that the Israeli occupation policy of expropriating Arab land to build Jewish settlements and roads to connect them, while deploying soldiers to protect settlers, would never have been possible without “American largesse.” But American assistance, Shehadeh continues, did not stop at the funding of ideologically motivated programs. In a personal vignette, more telling than any newspaper reports, Shehadeh writes: “Last July my cousin was at a wedding reception in a hotel on the southern outskirts of Ramallah when an F16 fighter jet dropped a hundred-pound bomb on a nearby building. Everything had been quiet. There had not been any warning of an imminent air attack. … Something happened to my cousin that evening. … He felt he had died and was surprised afterwards to find he was still alive. … He did not hate America. He studied there. … Yet when I asked him what he thought of the country he indicated that he dismissed it as a lackey of Israel, giving it unlimited assistance and never censoring its use of US weaponry against innocent civilians.” The author concludes with these words: “Most Americans may never know why my cousin turned his back on their country. But in America the parts are larger than the whole. It is still possible that the optimism, energy and opposition of Americans in their diversity may yet turn the tide and make America listen.”

The current Bush administration pursues a pre-emptive strategy of taking out opponents before they can harm the US at home or abroad, in much the same way that Israeli fighter jets execute alleged Palestinian terrorists in their cars, homes, and backyards, without bothering about due process or collateral damage. This is not an America that one may hope “to make listen.” Who is not for Bush is against him. Well, so be it. Many Europeans have chosen not to be bullied into sharing the Bush administration’s view of the world. They may not command as many divisions as Bush, but they can surely handle the “division” that Bush has brought to the Atlantic community.

There has been a resurgence of open anti-Americanism in Europe and elsewhere in the world, not least in the Middle East, the region that has brought us Osama Bin Laden and his paranoid hatred of America and, more generally, of the West. If he can still conflate the two, why can’t we? If Raja Shehadeh still holds hopes of an America that one can make listen, why don’t we? Let us face it: we are all Americans, but sometimes it is hard to see the Americans we hold dear in the Americans who hold sway.

This may remind Europeans that anti-Americanism is not the point. We may believe we recognize Americanism in any particular American behavior, be it cultural or political. Yet the range of such behavior is simply too wide – ranging in culture from the sublime to the vulgar, and in politics from high-minded internationalism to narrow nationalism – to warrant any across-the-board rejection.

Endnotes

1 M. ter Braak, “Waarom ik ‘Amerika’ afwijs” (Why I reject America), De Vrije Bladen, V, 3, 1928; repr. in Verzameld Werk (Collected Works), Amsterdam: G.A. van Oorschot, 1950/1, Vol. I, 255-65.

2 J. Huizinga, Amerika levend en denkend – Losse opmerkingen. (America living and thinking – Loose observations) (Haarlem: H.D. Tjeenk Willink & Zoon, 1926), 162.

3 The Seven Arts, (June 1917), 199.

4 The Seven Arts, (March 1917), 535.

5 O. Spengler, Jahre der Entscheidung (München: Beck, 1933), 48.

6 Michael Ignatieff, “What We Think of America,” Granta, The Magazine of New Writing, 77 (Spring 2002), 47-50.

7 I may refer the reader to my survey of such French views of American modernity. See Rob Kroes, Them and Us: Questions of Citizenship in a Globalizing World (University of Illinois Press, 2000), chapter 9.

8 See, e.g., Annick Foucrier, Le rêve californien: Migrants français sur la côte Pacifique (XVIIIe-XXe siècles) (Paris, 1999).

9 See my If You’ve Seen One, You’ve Seen the Mall: Europeans and American Mass Culture (University of Illinois Press, 1996).

10 Quoted in: D. Lacorne, J Rupnik, and M.F. Toinet, eds., L’Amérique dans les têtes (Paris, 1986), 61.

11 Quoted in ibid., 62.

12 Pierre Bourdieu, “Questions aux vrais maîtres du monde,” Le Monde, Sélection Hebdomadaire, October 23, 1999, pp. 1, 7.

13 Tony Tanner, City of Words: American Fiction 1950-1970 (New York, 1971).

14 For the Borges quotations, see Tanner, City of Words, 41.

15 For my summary of the proceedings at the Paris colloquium, I have used a report published in Le Monde, Sélection Hebdomadaire, 2589, June 20th, 1998, p.13.

16 For a fuller analysis of the metaphorical deep structure underlying the European critique of American culture, I may refer the reader to my If You’ve Seen One, You’ve Seen the Mall.

17 I argue this more at length in the concluding chapter of my If You’ve Seen One, You’ve Seen the Mall, entitled “Americanization: What are we talking about?”

18 In an interview in the Guardian on August 27th, 2002.

“Americanization”: An East Asian Perspective

Akio Igarashi is a professor of law and politics at Rikkyo University, Tokyo, Japan. He is editor in chief of The Journal of Pacific Asia and author of a number of books and articles, including Japan and a Transforming Asia (Henyousuru Asia to Nippon [Seori Shobo, 1998]).

The Unconscious Reversal of Americanization

For the past few years, Hollywood war films such as “Saving Private Ryan” (1998), “The Thin Red Line” (1998), “Stalingrad” (2001) [released in the United States as “Enemy at the Gates” -ed.], “Pearl Harbor” (2001) and “Black Hawk Down” (2002) have appeared regularly in Tokyo movie theaters. War movies are one of the key genres in Hollywood, and Japanese moviegoers have had the opportunity to see a large number of them. While watching such movies, it is not uncommon for Japanese viewers to realize suddenly that they have unknowingly taken the side of the United States Army. There are probably some instances when viewers have experienced discomfort upon recognizing Japan as the “enemy.” In such films, American notions of justice and heroism, as well as of freedom and democracy, are deeply embedded, and such ideologies have influenced unsuspecting Japanese audiences.

The Vietnam War, however, which Americans themselves had difficulty justifying, changed the genre of the Hollywood war movie. Representative works include “The Deer Hunter” (1978), “Apocalypse Now” (1979), “Platoon” (1986), and “Full Metal Jacket” (1987). An undercurrent of protest against the brutalities of war and deep skepticism toward American military policies runs through these films. At the time they appeared, Vietnam War movies had a great impact, and their influence still remains.[1. Setogawa Shuta, “‘Burakku hôku daun’ to hariuddo sensô eiga” (‘Blackhawk Down’ and the Hollywood war film), from the “Blackhawk Down” movie program.] Japanese viewers who have seen the Hollywood war films mentioned above observe, as one viewer put it, “Although [Hollywood war films] seem to emphasize America’s viewpoint…these films are basically anti-war movies.” There is no mistaking that, since the Vietnam War, American values and overt ideological messages in Hollywood war movies have subsided, and this has been acknowledged among the Japanese viewing public.

After 9.11 and the war in Afghanistan, Japanese viewers’ perspectives on Hollywood war movies have changed even more, which is especially clear in their responses to “Black Hawk Down.” The film is based on real-life events of 1993, when American troops were sent into Somalia on a U.N. peacekeeping mission. Their assignment to capture two Somali warlords failed when their helicopters were shot down and they were attacked by Somali militias and civilians. Viewers who watched this film gave the following comments: “It felt as though I was on the battleground, that this was what war must be like” (25-year-old female); and “I learned that it was an unfair battle. The problem was not at the level of those fighting the battle, but was a problem at the administrative level. Who knows when Japan will be pulled into a similar situation while taking part in peacekeeping efforts?” (54-year-old male). The misery of a battle in which members of the force lose their friends, one by one, slowly draws the viewer into the perspective of the American soldiers: “It was shocking when the final death count appeared just before the movie credits, stating the toll to be 19 Americans and 1,000 Somalis. Even though so many Somalis had died, while watching the film my sympathies were drawn toward the American soldiers. I think that’s what’s so frightening about films” (male in his 30s).[2. Asahi Shinbun, evening edition, April 19, 2002.]

In his review, Saito Tadao, a veteran film critic who has been writing film reviews since the end of World War II, stepped away from the ordinary review and expressed his thoughts on the state of America’s global strategy:

The director Ridley Scott concentrates on portraying the American soldiers’ feelings, whether of fear or solidarity, and keeps the causes of the civil war, the role of the peacekeeping forces, the pros and cons of America’s actions, and any explanation of the Somalis’ circumstances or feelings to a bare minimum. For the American soldiers placed at the heart of the danger, such information was surely irrelevant. Yet, at the present moment, when the possibility of an American attack against Somalia on suspicion of terrorist activity is being publicized, it is necessary to be aware of such things.

“Blackhawk Down” touchingly depicts the camaraderie between the American soldiers on the one hand, while easily devaluing the lives of the Africans on the other. Viewers are left with a strong impression of poor Africans, but only because the situation is depicted as being utterly miserable. Though it is considered an American war film, for the Japanese people, the military actions of the United Nations and the peacekeeping forces cannot simply be disregarded as someone else’s problem.[3. Saito Tadao, “Blackhawk Down,” Asahi Shinbun.]

This film does not manage to instill an American value system in the viewer, but is an example of a reversal in which a Hollywood film leads to a critical analysis of the American government’s global policy. Thus, viewers are overcoming the message of the classic Hollywood war film. The fact that current Hollywood films themselves are bringing about such reversals is an important factor to keep in mind. Hollywood films have been at the center of the spread of American value systems and their manifestations throughout the world, a cultural Americanization that began in Japan after World War II. These films have now lost that capacity, which can be clearly observed in the responses of Japanese film critics and audiences alike. To those observing the American government’s actions, however, it is unclear how conscious America is of its own actions.

During the Cold War, Americanization actively transformed the liberal world bloc. Those countries affiliated with this bloc were incorporated as part of the global policies of American military and diplomatic efforts, and were simultaneously placed under the influence of a culture infused with American ideologies. Among the various types of cultural Americanization, popular culture captured the hearts of those in this “liberalist” world, and the acceptance of American popular culture helped these countries also embrace the military and diplomatic forms of Americanization. Around the time when the socialist bloc was losing its economic power, however, the countries in the liberalist bloc, particularly those in East Asia, were beginning to escape the one-sided influence of Americanization. Owing to economic growth, a popular culture unique to the region was being formed. This in turn altered Japan’s relations with the U.S. and yielded a relativistic perspective toward the United States’ role in the areas of military policy and diplomacy. This essay will consider the influence and transformation of Americanization on the cultural front, focusing primarily on Japan but looking at East Asia as a whole.

1. The American Dream in Post-World War II Japan

For nearly one hundred years since World War I (the century at times even referred to as “America’s Century”), the United States has wielded incredible influence, not only on diplomatic or military fronts, but also on the cultural front. The spread of culture led by such notions as the Christian tradition, liberal expansionism, and Wilsonian internationalism was contemporaneous with the spread of democracy around the world in the 20th century.[4. Emily S. Rosenberg, Spreading the American Dream: American Economic and Cultural Expansion, 1890-1945, New York: Hill and Wang, 1982.] Particularly during the Cold War, as the eastern and western blocs faced off in fierce ideological warfare, cultural Americanization was deeply imbricated with an American value system. The influence of Americanization had begun its work in Japan in the 1930s. Hollywood films such as “Stagecoach” (1939), directed by John Ford, were being shown in movie theaters, and Filipino bands playing on cruise ships sailing to foreign destinations introduced jazz to Japanese passengers. Japanese musicians traveled to Shanghai, then the jazz mecca of Asia, to learn from American jazz musicians who were touring there. In daily life, homes boasting western or private rooms and American-style modern roofs were called “culture homes (bunka jyutaku),” and there was a tendency to associate the luxury and convenience of American life with “cultural living,” or “progress.”[5. Kiyomizu Sayuri, “Bunka kôryu toshite no nichibei kankei” (Japan-American relations as cultural exchange), in Masuda Hiroshi and Tsuchiya Jitsuo, eds., Nichibei kankei kiwado (keywords of Japan-America relations), Tokyo: Yuhikaku Sôsho, 2001.]

Yet it is no surprise that Americanization’s greatest period of influence on Japan occurred during the American occupation after World War II. Japanese people who only the day before had been crying out “kichiku beiei,” or “American and British devils,” now, faced with the “generosity” of the victor, began to take a more obsequious stance toward their vanquishers.[6. Rinjirou Sodei, Dear General MacArthur: Letters from the Japanese during the American Occupation, Lanham, Maryland, 2001. Released in 1946, the year immediately following defeat, Oka Haruo’s hit song, “Tokyo hana uri musume” (Tokyo flower selling girl), has the following verses: “jazz flows, light and shadows on the platform, ‘would you like a flower,’ ‘a flower for you’/ A real jacket of an American G.I., a sweet breeze that chases away the shadows/ oh Tokyo flower girl.” (lyrics: Sasaki Sho; music: Uehara Gen). In Sakurai Tetsuo, America wa naze kirawarerunoka (why America is hated), Tokyo: Chikuma Shobô, 2002. 123.] The overwhelming authority of the American military occupation enforced the aforementioned ideological policies throughout the country, even while emphasizing demilitarization of the state and democratization. Democracy and pacifism were spread extensively and popularized through these forms of American ideological endorsement.[7. Igarashi Akio, Sengo seinendan undo no shisô: kyôdô shutaisei wo motomete (the concepts and activism of postwar youth organizations: searching for subjectivity), in Rikkyo Hougaku 42 (July 1995).] The new constitution, which took effect in 1947, was built upon the two concepts of democratization and demilitarization, which were quickly adopted by many Japanese.[8. John Dower, Embracing Defeat: Japan in the Wake of World War II, New York: W.W. Norton & Co., 2000.] Confidence in the extent to which this “imposed” democracy actually took hold in Japanese society and culture was undermined by frequent incidents provoked by conservative politicians, which bred general skepticism among the populace toward Japan’s “democracy.” Pacifism, a concept entrenched in the Japanese people’s memory of themselves as victims of war, became unstable as the memory of war began to fade. Moreover, the fact that Japan remains unconscious of its role as aggressor against other Asian countries in the war, and of the sacrifice of Okinawa to the American military as the price paid for postwar peace, makes the ideology of Japanese postwar pacifism quite fragile.[9. Koseki Shoichi, “Heiwa kokka”: Nihon no saikentô (‘peaceful nation’: a reexamination of Japan), Tokyo: Iwanami Shoten, 2002.]

In the immediate postwar period, what a majority of Japanese hoped for was the realization of a rational and affluent society. It was a hope for escape from a past of prewar and wartime control by imperial rule and militarism, and from utter poverty.[10. Takabatake Michitoshi, “Taishu undô no tayôka to henshitsu” (the diversification and transformation of mass movements), Nihon Seijigakkai ed., 55nen taisei no keisei to hakai (the development and destruction of the 1955 position), Tokyo: Iwanami Shoten, 1977.] What was particularly alluring about American culture for such Japanese were the prospects of freedom and material abundance. The spacious rooms and the big white refrigerator in the comic strip, Blondie, helped people to imagine the affluence of the American lifestyle. The flat side of a ham hock peering from the open refrigerator door was a source of wonder for a people who had only ever seen an entire hock of ham in a butcher’s showcase. For Japanese at the time, America’s prosperous culture of consumption, symbolized by chewing gum, chocolate, and women’s fashion, represented “the American Dream.”

With the occupation by the Allied military forces, jazz performances were resurrected in areas near the bases, and with the ban on jazz in NHK radio programs and dancehalls lifted, jazz became accessible to the average Japanese listener. As part of its public relations efforts during the Cold War era, the American government promoted overseas concert tours of black jazz musicians, and in 1952 Louis Armstrong visited Japan. Along with jazz in the 1950s came rock-n-roll, and in the 1960s came Bob Dylan’s folk music. Songs representing “freedom” arrived one after the other; the electric boom, group sounds, and the music of a common global language among youth came pouring in from America.

Hollywood films were the most successful anti-communist propaganda tools and received powerful backing from the American government. These films exceeded the American government’s expectations by depicting the various circumstances of American society. The crowds that filled the movie theaters to capacity feasted on the freedom and affluence of American society in these films. The children who watched “Dumbo,” “Bambi,” and “Mickey Mouse” were captivated by the colorful and expressive Disney animations.

When television programming first aired in 1953 in Japan, it exceeded movies as a means for propagating ideas. Since production techniques and capital were still inadequate, particularly in the early stages of the newly begun television industry, American television shows were often directly imported. Family dramas like “Father Knows Best,” “The Donna Reed Show,” and “I Love Lucy” were aired, and the image of an idealized American middle-class family-life without racism or the shadow of poverty stuck in people’s minds. These shows would later become models for Japanese home dramas. The Western boom also brought “Laramie” and “Rawhide,” implanting into Japanese society the image of Americans who were simple, yet cheerful, who burned with the fire of justice and lived in the vast countryside.[11. Kiyomizu, Ibid.] At a time when Japanese viewers were just beginning an era of rapid economic growth, they envisioned their future lives as bright and affluent as the lives of the characters in the home dramas, and experienced the humanism of American society through the westerns.

In postwar Japanese society there were many, however, who saw Americanization from a much more critical viewpoint. Those involved in left-wing or liberal politics recognized Americanization as the cultural analog of the U.S. geopolitical role in East Asia and other developing areas. They saw the U.S. as an oppressor that suppresses and exterminates those who actually seek freedom, democracy, or humanism, in order to protect its own profits. The student movements of this era were marked by persistent denunciations of “American imperialism.” Although these students, leftists, and progressives were perhaps unable to avoid completely the effects of Americanization on their day-to-day lives, their view of America continues to hold an authority that cannot be ignored.[12. John W. Dower, “Peace and Democracy in Two Systems: External Policy and Internal Conflict,” in Andrew Gordon, ed., Postwar Japan as History, Berkeley: University of California Press, 1993. See also Chalmers Johnson, Blowback: The Costs and Consequences of American Empire, New York: Metropolitan Books, Henry Holt, 2000.]

2. The Development of Japanization, and the Decline of Americanization

Even now in the post-Cold War era, American culture continues to hold tremendous power. Coca-Cola quenches the thirst of people around the world. KFCs and McDonald’s occupy the street corners of major cities, satisfying people’s hunger. Jazz and rock-n-roll are played in clubs and on street corners around the world, and the popularity of the Hollywood movie is alive and well.[13. The American film industry earns 40% of its profits from overseas sales. 75% of the movies or TV shows viewed by people worldwide are made in America. From Alfredo Valladao, trans. Itô Gô, Murashima Yuichirô, and Tsuru Yasuko, Jiyu no teikoku: American shisutemu no seiki (Le XXIe siecle sera americain), Tokyo: NTT Shuppan, 2000.]

However, in the late 1980s another powerful popular cultural force joined American culture in East Asia: Japanization appeared on the scene and captured people’s hearts. In Seoul and Bangkok, “izakaya,” or Japanese-style bars, outnumber KFC and McDonald’s franchises, and Japanese cuisine, such as yakitori and sushi, is all the rage. Pop music made in Japan (“J-Pop”) has swept East Asia, and in Thailand there is even a domestic magazine that specializes in Japanese celebrities and singers. In this part of the world, the Japanese “invention,” karaoke, is an essential part of the entertainment scene. Among the selections are many Japanese songs that have been reproduced with native lyrics and vocalists, and these are often believed to be songs originating in that country. Japanese animated films and television shows are watched by children in this region and beyond, and surpass even Disney in popularity. The NHK (Japan Broadcasting Corporation) TV drama “Oshin” depicts the life of a girl who, born into a poor farming family, finally finds success after years of hardship. The show was a huge hit first among audiences in the developing countries of Asia and then elsewhere, and represented both Oshin’s and Japan’s success story as the realization of the “Japanese Dream.”[14. The International Symposium Organizing Committee, The World’s View of Japan Through “Oshin,” Tokyo: NHK International Inc., 1991.] Japanese TV dramas that depict urban life are widely shown in East Asia. College women in Seoul clutch Japanese fashion magazines as they walk about town, while youths in Thailand fixate on “character goods.”[15. Akio Igarashi, “From Americanization to Japanization in East Asia,” The Journal of Pacific Asia, Vol. 4, 1997, The Committee for Research on Pacific Asia. This volume of the journal is dedicated entirely to the topic of Japanization. Akio Igarashi, ed., Henyosuru ajia to nihon: ajia shakai ni shintosuru nihon no popular culture (a changing Asia and Japan: the infiltration of Japanese pop culture into Asian society), Tokyo: Seori Shobô, 1998, was edited for this special volume and was published in Japanese. In the late 1990s there was growing interest in this topic, and many research texts and articles have been published since. In the U.S. as well as other areas overseas, students taking Japanese studies courses have preferred it to the ever-popular subjects of Japanese economics and accounting.] Tokyo is now the fashion center of Asia.

Several factors underlie the spread of Japanization in East Asia. First, the Japanese pop culture industry had accumulated capital and techniques over a considerable period of time. Second, in East Asia, which achieved rapid economic growth in the 1980s and 1990s, the new middle class living in the major cities created a heightened demand for pop culture. A third factor may be the common culture and consciousness of the people within the region, which helped push the growth and dissemination of Japanization as the popular culture of East Asia.

Since the 1920s, Japanese society has sustained a considerable domestic market for popular culture, and the industry accumulated capital. With that as a foundation, it fell under the influence of Americanization on the one hand, and on the other it developed a popular culture all its own. In the music world, as western music was introduced alongside modernization, unique Japanese melodies and lyrics were developed for popular consumption. The postwar era also saw the absorption of music from America, such as jazz and rock-n-roll, and from all over the world, with the translation of these songs being set to “Japan-made” melodies or lyrics. Such hybrid songs even now dominate the top of the Japanese hit charts over and above worldwide hits, while they also spread throughout the entire East Asian region.

Film production has a long history dating from the prewar era. At one point after the end of World War II, there were nearly 7,000 theaters in Japan, some even in small towns on the furthest outskirts of the country; they entertained over 1.1 billion spectators annually and ushered in the golden age of Japanese cinema. It is well known that this era gave birth to such directors as Kurosawa Akira, who has greatly influenced movie-making in the west. Toei, Japan’s largest film company, began to work in animation early on. While the Disney studio boasted an unmatched share of the animated film market (for animation shown in movie theaters), Toei built a foundation in made-for-TV animation and gradually increased its share of the animation industry. These days, children all over East Asia spend their afternoons glued to their TV sets in order to watch Japanese TV animation.

Toei Film Studios was the training ground for Miyazaki Hayao, whose animated films have captured the hearts of many fans. Sen to chihiro no kamikakushi (released in English as “Spirited Away”) was a runaway hit that broke Titanic’s box office record in Japan. Japanese animation, or “Japanimation,” tends to depict stories that “have roots in real life,” and is a new experience for western viewers.[16. “Japanimation,” Nihon Keizai Shinbun, Nov. 18, 1995.] Moreover, the expression of certain subtle psychological responses resonates with East Asian viewers, who share common cultural characteristics with the Japanese. With increased demand from abroad, the Japanese animation industry reduced overtly Japanese national traits in its images and began constructing “nationlessness,” or non-national texts and images.

In the comic book industry, the power of Japan’s market and capital is unsurpassed. Japanese comics, or manga, in books and magazines together account for up to 600 billion yen in annual sales. This is the largest market of its kind in the world, and its quality and maturity are high. There are no other examples outside of Japan of major publishing companies taking part in comic book publishing, and editors have accumulated a wealth of experience. Only in Japan are there popular cartoonists who earn over 100 million yen a year, and only there do over 100 cartoonists, ranging from those with only a junior high school education to those with master’s degrees, earn more than the presidents of major publishing companies. Growing out of such a “system,” manga command a wide-ranging readership from children to adults. While European comics like the French bande dessinée (graphic novel comic) are considered too highbrow, American comics, by comparison, generally cater to children. Japanese manga with children as protagonists depict a world of honest truths that have universal appeal, and have secured popularity among a diverse range of readers overseas. The allure of the Japanese manga lies in “storytelling that can capture the imagination of adults” and in “a manifold power of expression.”[17. Mainichi Shinbun, April 18, 1996.]

Thus shaped and refined in the domestic market, manga are now coveted by many international readers. For example, the comic book series, Dragonball Z, sold over 50 million copies in Asia and 10 European countries. In Thailand and Hong Kong, manga appearing in “Jump,” the weekly manga magazine with a circulation of 6 million copies, are translated and printed alongside the works of native cartoonists. Korean comic magazines have such a large number of Japanese imports that they must make a special effort to print as many Korean cartoonists as possible.

These examples of Japan-made popular culture entered the heart of the major cities of East Asia in the 1980s and 1990s. Enjoying a materially prosperous life as a result of economic growth, the “new middle class” of these cities carried on lifestyles similar to those of people living in Tokyo, the birthplace of Japanization, making the spread of Japanization that much more rapid. It is also likely that a longing for the lifestyle of those living in Tokyo, a major global city, helped to encourage the process.

Most importantly, unlike Americanization, Japanization carries no ideology like Wilsonian internationalism. Japan, as yet unable to leave behind its historical responsibility for colonization and wartime aggression, has no desire to convey such ideas. Accordingly, Japanization is unequivocally “materialistic” cultural dissemination. Yet, as the earlier experience of Americanization’s influence on Japanese society makes clear, a “faith” in an affluent society, or in this case the desire to capture the “Japanese dream,” is the greatest motivation for the spread and influence of Japanization.

Notwithstanding the “materialism” of cultural dissemination, the influence wielded by the behavior of protagonists depicted in manga or animation and the thought processes behind their actions is undeniable, particularly in the case of children. An example of such influence is found in the use of the word, “HITACHI,” the name of a major Japanese company, which in Thailand has come to mean, “an individual who responds quickly and perceptively to situations.”

Japanese influence in the East Asian region is not restricted to popular culture. Following the Plaza Accord in 1985, the Japanese economy advanced even further into East Asia. This economic advance produced people who were both fascinated by the products of an “advanced” capitalist society and also overwhelmed and awestruck by Japan’s economic and technological power. Japanese products, conceived with a wealth of capital and state-of-the-art technology, are elaborate and fashionable and thus are trusted and valued as “luxury items” in various areas of Asia. Furthermore, those who witnessed the success of companies that had incorporated Japanese technology and established capital partnerships could not help but be drawn to Japan’s economic prowess.

Japan’s biggest department stores and supermarkets have opened branches in most major cities in East Asia, and, utilizing techniques for pristine window displays, they tangibly demonstrate the “cutting edge” of consumer culture. Imported Japanese department stores and supermarkets outshine the traditional local establishments and have brought about a “consumer revolution.” With ever-increasing power and allure, Japanese products are entering local societies through these Japanese department stores. The dissemination and assimilation of Japanese popular culture has not come without some protest or resistance from Japan’s neighbors. Particularly in Korea, Japan’s direct neighbor and a former victim of colonization, the government associated Japanese culture with the Korean experience of oppression and strongly opposed it, even prohibiting the importation of Japanese popular culture.[18. For further discussion see: Arjun Appadurai, Modernity at Large: Cultural Dimensions of Globalization, Minnesota: University of Minnesota Press, 1996. 27-47.] Yet above the apartments of Seoul’s middle class grows a forest of antennas tuned to Japanese satellite broadcasts, and pirated tapes of J-pop circulate throughout the city. These circumstances illustrate the difficulty of intercepting the invasion of popular culture at national boundaries, and they led the Kim Dae-jung administration to ease the regulations in the late 1990s.

Meanwhile, in East Asia, economic growth has brought about the accumulation of capital, and the consumer market has grown with a newly emergent middle class, allowing these societies to produce their own popular culture. Inevitably, Japan’s popular culture is consulted as an archetype, and the experience Japan has gleaned from manga and animation production is copied. In Korea, where not only Japanese films but also western films were restricted in order to protect the domestic film industry, Hollywood film techniques were mastered. Korean production companies have recently released a slew of international hits. There are also films produced through legitimate partnerships with Japan. In addition, films, music, and fashion from Hong Kong, Thailand, and India are widely distributed. Furthermore, Japan’s popular culture industry now considers all regions of East Asia as a market, and while aggressively promoting Japanese popular culture it has also begun to scout out talent in China. In this way, East Asia has formed a borderless world of popular culture with Japanization at its center. The fifteen or so satellite broadcasts that travel through the airwaves of this region attest to this.

American popular culture in this region is alive and well; however, it no longer exerts an absolute influence. With the end of the Cold War, rapid economic growth and the resulting spread of globalization, East Asian society is undergoing great change. Popular culture and consumer culture are not the only means by which people within the region share common experiences and deepen their mutual understanding. There is increased travel within the region for business and tourism, and through the development of mass media networks and a heightened interest in neighboring countries, the amount of information made available through television and newspapers has also grown. Parallel to the heightened interest and interaction within the region, there is a stronger tendency towards a “unified” East Asian region at the level of international relations. Malaysian Prime Minister Mahathir Mohamad, who has taken up these issues with the most fervor, proposed the “Look East” policy in 1981, designating the economic development of Japan and Korea as models for his own country. In 1990, he proposed an East Asian economic community, or EAEG (East Asian Economic Grouping), which was to include only the countries of Asia, excluding the United States and the countries of Oceania. But because of strong U.S. objections at the time, it could not be realized. This community was conceived to counter the formation of the Asia-Pacific Economic Cooperation (APEC), which is made up of 21 countries, including the United States, Australia, New Zealand, Japan, Korea, and China. With the history of Malaysia’s colonial experience always at the forefront of his thoughts, Mahathir maintains a strong anti-Western stance. In 1992, the Southeast Asian countries, which have held together the unity of the Asian region, established the ASEAN Free Trade Area (AFTA), which aimed to lower tariffs among participating countries to 5% or less by 2003. In 1994, ASEAN countries established the ASEAN Regional Forum (ARF), which dealt with the mutual building of confidence and trust among participating countries and provided a forum for preventive diplomacy and the peaceful resolution of regional disputes. ASEAN has had considerable success in setting the terms for a dialogue among 22 participants, including Japan, the U.S., China, Australia, New Zealand, Russia, India, and the EU.

In the wake of the currency crisis of 1997, the harsh intervention of the IMF incited growing distrust of the IMF and of the United States, its most powerful supporter. The Japanese government proposed an Asian Monetary Fund (AMF) to prevent Asian currency crises. The proposal, however, was withdrawn after strong U.S. opposition over its potential obstruction of IMF functions. In October 1998, the Japanese government proposed the “New Miyazawa Initiative,” which would carry out the distribution of funds on a bilateral basis. Under this initiative, Japan subsequently distributed funds to Indonesia, Korea, Malaysia, the Philippines, and Thailand, and was highly applauded within the region. In May 2000, “ASEAN+3” (the ASEAN countries along with Japan, China, and Korea) agreed that member countries would carry out bilateral currency swaps in order to prevent currency crises. This was a reinforcement of the New Miyazawa Initiative, and is also related to the AMF concept. In this way, there is talk of regional unification, with frequent comparisons to the European Union.

In East Asia, a new middle class arose along with general economic growth, and by raising the power of its societal voice it has rapidly advanced democratization since the mid-1980s. In the Philippines, the “People Power Revolution” occurred in February 1986, giving birth to the Aquino administration. Following the “Bloody May” incident in Thailand in 1992, the civilian Chuan administration replaced the military administration. In 1987, Korea’s democratic movement brought the long years of military rule to an end. In 1988, after the death of President Chiang Ching-kuo, Lee Teng-hui’s succession as the new president propelled democratization in Taiwan. In Indonesia, 1998 was the year in which President Suharto resigned, following a huge popular mobilization. The confidence arising from democratization thus signified independence from American influence, and the image of “America, the land of freedom and democracy,” which had been implanted through Americanization, came to represent only one of many perspectives.

3. “9.11” to the Afghan War: Responses and Criticism of the U.S.

The blow to Japanese society, which tuned into late-night programs on September 11, 2001, and witnessed the coverage of the terrorist attacks, was great. Military bases in Okinawa and elsewhere were put on full alert, and tension enveloped the Japanese archipelago. Feelings of apprehension over tensions between North and South Korea crossed the minds of the populace. Needless to say, as the state of the victims and the grief of their families were reported day after day, compassion for the American people deepened.

Listening to President Bush’s “This is war” statement, the Japanese government must have immediately recalled the “defeat” of the Persian Gulf War, a very recent memory of rebuke: despite providing $13 billion in aid to the U.S.-led effort, Japan was excluded from Kuwait’s thank-you letter printed in the Washington Post, its contribution completely disregarded after the end of the war because of Japan’s refusal to comply with repeated demands for the deployment of its Self-Defense Forces, which would have been an outright violation of Japan’s constitution. Prime Minister Jun’ichiro Koizumi swiftly departed for the U.S. to promise Japan’s “cooperation.”[19. Kunimasa Takeshige, Wangan senso to iu tenkaiten (the Persian Gulf War as a turning point), Tokyo: Iwanami Shoten, 1999.]

The attitude of a majority of the Japanese people, however, was far from approving of Prime Minister Koizumi’s actions. Many Japanese, while sympathizing with the victims and feeling anger toward terrorism, felt uneasy with the image of an American society draped in the Stars and Stripes and with the American government’s race toward “war” as a solution. Sakamoto Yoshikazu, a leading postwar progressive scholar of international politics, writes the following:

President Bush’s congressional address includes the following sentence: “Americans are asking, ‘Why do they hate us?’… [The terrorists] hate our freedoms: our freedom of religion, our freedom of speech, our freedom to vote and assemble and disagree with each other.”

Upon hearing this, I was astounded. I wondered how he could believe that such words would be acceptable within the international community. Among terrorists, there may be those who fit such a description. Yet, there are also many people within the developing nations who, to some extent, harbor some sympathy for the terrorists and think, “the actions taken by the terrorists were wrong but their motives and intentions are understandable.” Is not the very reason these people hate America because America crushes and silences those very people who seek to realize the “freedom, human rights, and democracy” of which America speaks? Furthermore, is it not also because the “global standard” on which American civilization is based is perceived as increasing the gap between the world’s rich and poor and eroding that “other culture,” different from America? Japan’s “civilization,” which has been in continual alignment with that of the U.S., is no less guilty.[20. Sakamoto Yoshikazu, “Tero to ‘bunmei’ no seijigaku” (the political science of terrorism and “civilization”) in Terogo: sekai wa dô kawattaka (after the terror: how the world changed), Tokyo: Iwanami Shinsho, 2002.]

Sakamoto’s critique of Bush represents a widely held criticism of the U.S. government and its people: of their arrogance in believing in the universality of their kind of democracy, freedom, and human rights, their erroneous understanding of themselves, and their ignorance of the rest of the world. At the base of such views is a condemnation of the American government’s recent unilateralism, which includes its shelving of the Comprehensive Test Ban Treaty (CTBT) and the Anti-Ballistic Missile (ABM) Treaty, its rejection of the Kyoto Protocol, its disabling of international regulation of small arms, and its objections to the verification of the Biological Weapons Convention. Such attitudes of the U.S. government deviate from the aforementioned concept of Americanization as the spread of the “Christian tradition, liberal expansionism, and Wilsonian internationalism.” Furthermore, each time the Japanese, or East Asian people in general, witness on television and in the newspapers the large numbers of casualties arising from the “collateral damage” of U.S. bombings, casualties which are not widely publicized in the U.S., these opinions only grow stronger.

The former journalist turned critic and writer, Henmi Yô, who among the Japanese media has most aggressively and independently spearheaded the discussion on 9.11 and its aftermath, emphasizes the need to move away from the perspective of “a world seen through the eyes of America”:

The more we attempt to focus our vision, the more we see through the smoke and raining bullets a despairing and inequitable world system. It cannot be as simplistic as a clash between the “madness” of Islamic extremists and the “sanity” of the rest of the world. Behind Osama bin Laden lies, not several thousand armed men, but the hatred of over a hundred million poverty-stricken people toward the United States. And counter to this, there is President Bush, who not only carries the vengeance of the WTC terrorist attacks, but also exhibits the irrepressible arrogance of the privileged.

…It is time for us to reexamine the true identity of the United States. Since its founding, it has repeatedly carried out over 200 foreign military campaigns, including nuclear bombings. Have we yielded ultimate arbitration to a country that has shown almost no official remorse for its militaristic actions? Perhaps we have been for too long “looking at the world through the eyes of America.” This time, however, we must reexamine these terrible war casualties through our own eyes and come up with our own conclusions based on fundamental moral codes. For the U.S. is showing vigorous signs of a new form of imperialism.

The U.S. counterattack was supported by an absolute majority of “nations”; however, it was an act that defied the conscience of an absolute majority of “people.” The problem is not whether one “is with the U.S., or with [the terrorists].” Now is the time for us to stand, not on the side of the state, but on the side of those people who are being bombed.[21. Henmi Yô, Tandoku hatsugen 99nen no handô kara afugan hôfuku sensô made (independent remarks, from the 1999 reaction to the retaliatory war in Afghanistan), Tokyo: Kadokawa Shobô, 2001, pp. 39-41.]

What Henmi emphasizes is a shift from a single perspective to several overlapping perspectives: from “North” to “South,” from the powerful in war to the weak, from the state to the individual. These are ways of moving away from the perspective of the American side and from the image of the world seen “through the eyes of America” fashioned by Americanization.


Colombia’s Conflict and Theories of World Politics

Ann Mason, Political Science, University of the Andes, Bogotá, Colombia[1. I would like to thank Arlene Tickner for her helpful comments on this article.]

Among the multiple critiques of International Relations theory, its limited relevance for understanding the Third World’s place in global affairs has gained increasing attention during the past decade.[2. For an introduction to this line of analysis, see Stephanie Neuman, ed., International Relations Theory and the Third World (New York: St. Martin’s Press, 1998).] First, the end of the Cold War revealed a more complex world stage with a plurality of actors, problems and interests that had little to do with traditional interstate power relations. September 11 drove home like a sledgehammer the point that the world is about far more than the high politics of Western nations. Today, IR theory’s poor ability to describe and explain, much less predict, the behavior of states in the global South is recognized as one of its primary shortcomings. This in part accounts for the tepid reception that this body of theory has received within countries not counted among the great powers. Both academic and policy-making circles in the developing and less developed world are skeptical about a theoretical tradition whose claims to universalism not only ignore them, but also act to reify a global order within which they are destined to draw the short straw.[3. On how Latin American scholarship has incorporated Anglo-American IR thought, see Arlene Tickner, “Hearing Latin American Voices in International Studies”, International Studies Perspectives, 4, 4, 2003 (forthcoming).]

The Andean Region exemplifies this breach between contemporary IR theorizing and the multifarious problems besetting peripheral states and societies. Until very recently, the violence and social conflicts found in nearly every corner of the Andes were not even on IR’s radar screen. The 40-plus-year armed conflict in Colombia, the violent opposition to Hugo Chavez’s populism, massive social protests in Bolivia and Peru, and Ecuador’s persistent political and social instability have all been branded domestic issues, and thus placed outside the purview of systemic IR thinking. Worldwide transformations that have blurred the internal-external dichotomy, however, have prompted some to recognize what has long been common knowledge in the region: local conflicts and problems are completely enmeshed with complex global economic, social and political processes. Colombia’s conflict is a case in point. Global markets for illicit drugs, links between Colombian armed actors and international criminal organizations, regional externalities of Colombian violence, the massive level of migration to the North, the explosion of the global third sector’s presence in Colombia, increasing U.S. military involvement, and growing concerns of the international community about the deteriorating Colombian situation all illustrate the international face of this crisis.

What light might IR theory shed on a conflict that is estimated to result in 3,500 deaths a year, two-thirds of which are civilian, that is responsible for 2.7 million displaced people and another 1 million plus international refugees, whose political economy is such that an average of seven kidnappings occur daily, and that has the country awash in numbing levels of violence and human rights abuses?[4. While the number of conflict-related deaths is high, the overall figures on violence in Colombia are nothing short of alarming. In 1999 there were 22,300 violent deaths, representing a homicide rate of 53.66 per 100,000 individuals, according to Alvaro Camacho, “La política colombiana: los recorridos de una reforma,” Análisis Político, No. 41, 2000, pp. 99-117. Displaced population figures are as of September 2002, according to the NGO Consultoría para los Derechos Humanos y el Desplazamiento (Codhes); see Boletín Codhes, September 4, 2002. The number of international refugees is taken from the website of the U.S. Embassy in Bogotá, http://usembassy.state.gov/bogota/wwwsdh01.shtml, last updated April 4, 2002, and from the U.N. High Commissioner for Refugees’ website, last updated January 24, 2003, http://www.acnur.org/. Colombia has the dubious distinction of having the highest kidnapping rate in the world, with 2,304 cases being reported in 2001 according to Pais Libre, a local NGO dedicated to the problem of kidnapping. See http://www.paislibre.org.co/el_secuestro_colombia.asp#, “Total Secuestros en Colombia 1997-2002”, May 14, 2002.] IR theory is in the business of explaining and predicting violent conflict, as well as the behavior of the world’s member states in relation to conflict and stability. Although critical and second-order theories of international relations have fundamentally different concerns[5. My comments will not engage Marxist approaches, critical theory, or post-positivist constructivism, but will rather focus largely on first-order problem-solving theories of international relations.], substantive theorizing must address what Michael Mann calls IR’s “most important issue of all: the question of war and peace.”[6. Michael Mann, “Authoritarianism and Liberal Militarism: A Contribution from Comparative and Historical Sociology,” in Steve Smith, Ken Booth and Marysia Zalewski, eds., International Theory: Positivism and Beyond (NY: Cambridge University Press, 1996), p. 221.] Indeed, realist and liberal theories within the classical paradigm, which share a similar ontology, assumptions and premises, purport to do just that. Given that Kal Holsti’s latest figures estimate that 97% of the world’s armed conflicts between 1945 and 1995 took place in either the traditional or the new Third World, a viable theoretical framework of world politics must be able to integrate the global periphery.[7. Kalevi Holsti, The State, War, and the State of War (NY: Cambridge University Press, 1996), pp. 210-24.]

In this short essay I will discuss what contemporary IR scholarship may or may not offer in its treatment of the Andean Region, and of the armed conflict in Colombia in particular. My commentary will be limited to three issues familiar to the developing world, as seen through the lens of Colombia’s current crisis: the correlation of state weakness with violence and instability, the post-territorial nature of security threats, and the North-South power disparity. I will conclude with some observations on what this may tell us about the adequacy of the theories themselves.

State Weakness

The sovereign state that lies at the heart of the Westphalian model is the building block of mainstream IR theory. Most theorizing about international politics characterizes the state in terms of power, understood as the capability of achieving national interests related to external security and welfare. Realist and liberal perspectives, and some versions of constructivism, are all concerned with explaining conflictual and cooperative relations among territorially distinct political units, even while their causal, or constitutive, arguments are quite different. Although Kenneth Waltz was taken to task for blithely claiming that states under anarchy were always “like units” with similar functions, preferences and behavioral patterns, much of international relations scholarship persists in a top-down, juridical view of statehood largely abstracted from internal features.[8. Kenneth Waltz, Theory of International Politics (Boston: Addison-Wesley, 1979). Both constructivism and liberal democratic theory are important exceptions.]

But international legal sovereignty may be the most that the advanced industrialized states have in common with states on the global periphery such as Colombia. First of all, Colombia’s priority is internal security, not its power position relative to other states. Threats to the state originate within Colombian territory, not in neighboring countries. In spite of some longstanding border tensions and historical rivalries within the Andean Region, Colombia and its neighbors tend to be more concerned with the strength of domestic social movements and armed actors than they are with the international balance of power. Indeed, even in the absence of a regional balancer, strong democratic institutions, dense economic and political networks, or multilateral governance structures, inter-state wars in the region during the 20th century have been extremely rare. This no-war zone, or negative peace according to Arie Kacowicz, appears to be best explained by a shared normative commitment to maintaining a society of states and to peaceful conflict resolution, contradicting both material and systemic explanations of interstate behavior.[9. Arie Kacowicz, Zones of Peace in the Third World: South America and West Africa in Comparative Perspective (Albany: State University of New York Press, 1998).]

State strength in much of the developing world is not measured in terms of military capability to defend or project itself externally, but rather according to the empirical attributes of statehood: the institutional provision of security, justice and basic services; territorial consolidation and control over population groups; sufficient coercive power to impose order and to repel challenges to state authority; and some level of agreement on national identity and social purpose.[10. On empirical versus juridical definitions of statehood, see Robert Jackson, Quasi-States: Sovereignty, International Relations and the Third World (Cambridge: Cambridge University Press, 1990).] States in the Andean Region all receive low marks for the very features that mainstream IR theory accepts as unproblematic and immaterial. Although Colombia is in no immediate danger of collapse, most indications point to a state that has become progressively weaker: the basic functions required of states are poorly and sporadically performed, central government control is non-existent in many jurisdictions, social cohesion is poor, and the fundamental rules of social order and authority are violently contested.[11. For an excellent overview of Colombian state weakness and the “partially failed” thesis, see Ana María Bejarano and Eduardo Pizarro, “The Coming Anarchy: The Partial Collapse of the State and the Emergence of Aspiring State Makers in Colombia,” paper prepared for the workshop “States-Within-States,” University of Toronto, Toronto, Ontario, October 19-20, 2001.] Most importantly, the Colombian state fails the basic Weberian test of maintaining its monopoly over the legitimate use of force and providing security for its citizens.[12. Max Weber, Economy and Society, in G. Roth and C. Wittich, eds., E. Fischoff et al., trans. (Berkeley: University of California Press, 1978).]

Internal state weakness, ranging from impairment to outright collapse, is the common denominator of post-Cold War global violence and insecurity.[13. For a sampling of the large literature on this topic, see Joel Migdal, Strong Societies and Weak States: State-Society Relations and State Capabilities in the Third World (Princeton: Princeton University Press, 1988); Robert Jackson, Quasi-States: Sovereignty, International Relations and the Third World (Cambridge: Cambridge University Press, 1990); Barry Buzan, People, States and Fear: An Agenda for International Security Studies in the Post-Cold War Era (Boulder, CO: Lynne Rienner, 1991); Brian Job, ed., The Insecurity Dilemma: National Security of Third World States (Boulder, CO: Lynne Rienner, 1992); William Zartman, ed., Collapsed States: The Disintegration and Restoration of Legitimate Authority (Boulder, CO: Lynne Rienner, 1995); Kalevi Holsti, The State, War, and the State of War (Cambridge: Cambridge University Press, 1996); Ali Mazrui, “Blood of experience: the failed state and political collapse in Africa,” World Policy Journal, 9, 1, 1995: 28-34; and Lionel Cliffe and Robin Luckham, “Complex political emergencies and the state: failure and the fate of the state,” Third World Quarterly, 20, 1, 1999: 27-50.] It is also the permissive condition of Colombia’s security emergency. Reduced state capacity underlies the more proximate causes of the violent competition with and among contending subnational groups, namely the FARC, ELN, paramilitaries, and narcotrafficking organizations. Recent efforts by the Alvaro Uribe administration to build up Colombia’s military suggest movement toward state strengthening, although effective consolidation must go far beyond this one component of stateness. It remains to be seen whether in the long run Colombia’s bloody conflict becomes a force for state creation in the Tillian tradition,[14. Charles Tilly, Coercion, Capital, and European States, AD 990-1992 (Cambridge, MA: Blackwell, 1990).] or on the contrary a structure that has ritualized violent discord as a normal part of Colombian social life.

This erosion in capacity and competence has taken its toll on what is perhaps a state’s most valuable asset–legitimacy. The Colombian state’s mediocre performance and problem-solving record degrade central authority, reducing public compliance and policy options, and leading to a further deterioration in internal order as para-institutional forms of security and justice emerge. This dynamic has been exacerbated by new mechanisms of global governance and the proliferation within domestic jurisdictions of global actors that are increasingly perceived as legitimate alternatives to sovereign state authority. What Jessica Mathews describes as a “power shift” away from the state–up, down, and sideways–to suprastate, substate, and nonstate actors as part of the emergent world order may also involve a relocation of authority.[15. Jessica Mathews, “Power Shift,” Foreign Affairs, 76, 1, 1997: 50-66.] This is particularly apparent in the post-colonial and developing world where the state is less equipped to respond to internal challengers, and sovereignty’s norm of exclusivity is more readily transgressed. In Colombia, alternative political communities such as transnational NGOs, church and humanitarian associations, and global organizations, as well as insurgent and paramilitary groups, are increasingly viewed as functional and normative substitutes for the state.

Global Security Dynamics

At the same time that Colombia’s security crisis is in great measure attributable to the empirical weakness of the state, it also highlights another dimension of the emerging global order: the complex interplay between domestic and international security domains. The globalization of security puts into sharp relief the growing discontinuity between fixed, territorial states and the borderless processes that now prevail in world politics. While Realists would point out that current events in North Korea and Iraq are eloquent reminders of the applicability of a traditional national security model in which state-on-state military threats predominate, concerns in Colombia reflect a somewhat different security paradigm.

First of all, insecurity in Colombia is experienced by multiple actors, including the state, the society at large, and particular subnational groups. Security values, in turn, vary according to the referent: national security interests, both military and nonmilitary, exist alongside societal and individual security concerns. Colombian society not only seeks security against attacks, massacres, torture, kidnapping, displacement, and forced conscription, but also in the form of institutional guarantees related to democracy and the rule of law, and access to basic services such as education, employment and health care. Many of the internal risks that Colombia confronts are also enmeshed with regional, hemispheric and global security dynamics that are dominated by state and non-state actors.

While Colombia is typically viewed as being in the eye of the regional storm, the Colombian crisis is itself entangled with transregional and global security processes, including drug trafficking, the arms trade, criminal and terrorist networks, and U.S. security policies.[16. For an elaboration of the transregional security model see Arlene B. Tickner and Ann C. Mason, “Mapping Transregional Security Structures in the Andean Region,” Alternatives, 28, 3, 2003 (forthcoming).] The remarkable growth in the strength of Colombia’s most destabilizing illegal groups during the 1990s, for instance, is directly attributed to their ability to generate revenue from activities related to the global market for illegal drugs.[17. The relationship between rents from illegal drugs and the internal conflict in Colombia is well established in Nazih Richani, Systems of Violence: The Political Economy of War and Peace in Colombia (Albany: State University of New York Press, 2002).] Both the FARC and the paramilitaries capture rents from the cultivation, production and trade of cocaine and heroin, which finances their organizations, keeps them well stocked with arms also traded on regional and worldwide black markets, and sustains a pernicious conflict. These transactions occur within complex transnational criminal associations within and at the edges of the Andean region, which in turn are involved in global financial, crime, and even terrorist networks.[18. See Tickner and Mason (2003) and Bruce Bagley, “Globalization and Organized Crime: The Russian Mafia in Latin America and the Caribbean,” School of International Studies, University of Miami, 2002, unpublished paper.] Seen from this perspective, Colombia’s war is not so internal after all: it actively involves dense transborder networks composed of an array of global actors.[19. Colombia’s conflict increasingly resembles the new war nomenclature. See Mary Kaldor, New and Old Wars: Organised Violence in a Global Era (Cambridge: Polity Press, 1999).] Such a post-sovereign security setting underscores the necessity for mainstream IR theorizing to go beyond its state-centered vision of world politics and to develop conceptual tools better equipped to deal with global realities.

Power and Authority on the Periphery

IR theory’s notion of formal anarchy coexists uneasily with relations of inequality and domination that pervade world order. While most states in the South would tell you that the exclusive authority with which the institution of sovereignty endows them is not quite equal to that of their more powerful northern associates, neorealism and neoliberalism insist that the evident discrepancies among states are mere power differentials within a decentralized international system that lacks a central political authority. Thus hegemony and asymmetrical interdependence as such do not contradict the fundamental IR distinction between anarchy and hierarchy.

Some dominant-subordinate structures, such as the U.S.-Colombian relationship, may indeed be about more than material differences, however. The immense disparity in economic, political and military power has permitted Washington to impose its will in Colombia on a wide range of issues, in a manner resembling a coercive hegemonic project. Nevertheless, Colombian observance of American preferences in its foreign and internal security policy is not exclusively related to overt threats or quid pro quos. The rules of what Alexander Wendt and Daniel Friedheim call “informal empire” are such that inequality can also be characterized as a de facto authority structure.[20. Alexander Wendt and Daniel Friedheim, “Hierarchy under Anarchy: Informal Empire and the East German State,” International Organization, 49, 4, 1995: 689-721. See also Nicolas Onuf and Frank Klink, “Anarchy, Authority, Rule,” International Studies Quarterly, 33, 1989: 149-173, on the paradigm of rule as an alternative to anarchy.] Authority implies that the U.S. exercises a form of social control over Colombia, and that in turn Colombian compliance cannot always be explained by fear of retribution or self-interest, but rather suggests some acceptance, no matter how rudimentary, of the legitimacy of U.S. power.[21. On the different methods of social control, see Ian Hurd, “Legitimacy and Authority in International Politics,” International Organization, 53, 2, 1999: 379-408.] Ongoing practices that become embedded in institutional structures can create shared behavioral expectations and intersubjective understandings reflected in identities and preferences. Colombia’s anti-drug posture, for example, which was in great measure shaped by Washington’s militarized war on drugs and aggressive extradition policy, has over time become internalized.[22. This process is, nevertheless, highly uneven, and can be mediated by multiple factors. For an analysis of how domestic considerations led Colombia to adopt a confrontational position toward U.S. demands on extradition during the Gaviria administration, see Tatiana Matthiesen, El Arte Político de Conciliar: El Tema de las Drogas en las Relaciones entre Colombia y Estados Unidos, 1986-1994 (Bogotá: FESCOL-CEREC-Fedesarrollo, 2000).] Colombia has appropriated the prohibitionist discourse of the United States, and become an active agent in reproducing its own identity and interests vis-à-vis the illegality and danger of drugs.[23. Curiously, even while various states in the U.S. are considering the decriminalization of drug use for medicinal purposes, Colombia’s current proposed political reform includes eliminating the “personal dosis” of illicit substances, which had been legalized by the Constitutional Court in 1994. On the construction of an anti-drug national security identity see David Campbell, Writing Security: United States Foreign Policy and the Politics of Identity (Minneapolis: University of Minnesota Press, 1992).]

It would be an exaggeration, however, to conclude that Colombia’s behavior on its shared agenda with the U.S. is completely consensual: the underlying power configuration is a constant reminder that Washington calls the shots. The U.S. reconstruction of Colombia’s internal conflict into part of its war on global terror, with great uncertainty within the country about its implications for a negotiated settlement, is illustrative. U.S. preponderance can also lead to “increased incentives for unilateralism and bilateral diplomacy,” at times directly against Colombian interests.[24. Robert Keohane, “The Globalization of Informal Violence, Theories of World Politics, and ‘The Liberalism of Fear,'” in Craig Calhoun, Paul Price and Ashley Timmer, eds., Understanding September 11 (New York: The New Press, 2002), p. 85.] Recent arm-twisting to grant American citizens and military personnel in Colombia immunity from prosecution for human rights violations before the International Criminal Court is a case in point. Still, material inequalities can obscure how third-dimensional power also operates in the informal authority relations between the United States and Colombia.

Conclusion

The Colombian situation suggests various themes that theories of world politics would be well advised to take into consideration. IR theory has been largely silent on the issues of state-making and state-breaking that reside at the heart of the Third World security problematic. In neglecting domestic contexts more broadly speaking, this body of theory is inadequate for explaining the relationship between violent “internal” conflicts and global volatility at the start of the 21st century. These theories also have a blind spot when it comes to non-state actors in world politics. In overemphasizing states, realist theories in particular are hard pressed to adequately account for the countless sources of vulnerability of states and societies alike. Security threats, from terrorism to drug trafficking to AIDS, defy theoretical assumptions about great power politics and the state’s pride of place in world order. Similarly, non-territorial global processes such as Colombia’s security dynamics are not well conceptualized by conventional IR levels of analysis that spatially organize international phenomena according to a hierarchy of locations.[25. Barry Buzan, “The Level of Analysis Problem in International Relations Reconsidered,” in Ken Booth and Steve Smith, eds., International Relations Theory Today (University Park: The Pennsylvania State University Press, 1995).] Colombia’s experience with sovereignty also calls into question the logic of anarchy in realist and liberal IR theorizing. Seen from a peripheral point of view, the notion of formal equality is little more than a rhetorical device that camouflages deep and persistent material and social inequalities in the international system. We thus arrive at the conundrum of a “stable” world order–defined in IR terms by the absence of war among the world’s strongest states–wracked by violent conflict and immeasurable human suffering in peripheral regions. Perhaps most importantly, today’s global security landscape should prompt us to rethink theories that by and large bracket the non-Western, developing domains and suppress their narratives.

The heterogeneity of the IR discipline cautions us against jettisoning the entire canon as flawed when it comes to the Third World, however. Constructivists’ incorporation of a social dimension into an analysis of state identities and interests is a promising research agenda for analyzing non-material aspects of North-South relations. Institutionalist theory has also contributed to our understanding of the role of global institutions and norms in conflict resolution and cooperation in the Third World, and may offer insight into seemingly intractable conflicts such as Colombia’s. Paradoxically, certain realist precepts also have utility for analyzing the international politics of developing states. The distribution of global economic, political and military power has an enormous impact on center and peripheral states alike. As we have seen, the inequality in U.S.-Colombian relations poses a serious challenge to multilateralism and mechanisms of global governance. In spite of the ongoing reconfiguration of the state in response to global transformations, the sovereign state has proved to be highly adaptive and resilient. Colombia’s internal weakness, for example, is to be contrasted with the state’s increasingly successful political and diplomatic agenda within the international community, even in the face of increasing global constraints. The complexities of Colombian security dynamics, which vividly illustrate a non-realist security landscape, nevertheless require that public policies prioritize, delineate and specify threats and responses largely in conventional, military terms. And finally, Colombia’s efforts to recuperate state strength, or complete its unfinished state-making process, as the case may be, suggest that state power remains pivotal to internal, and thus global, order.

Rather than dismissing IR theory outright for its shortcomings in explaining the problems of countries such as Colombia, we may be better advised to look toward peripheral regions for what they can contribute to testing, revising, and advancing our theories of international politics. Perhaps the explosion of war-torn societies in the Third World and the implications this has for global order will inspire critical analysis of where the theories fail and what they offer that is germane to the analysis of international relations in the South. Just as there is no single theoretical orthodoxy in IR, neither is the Third World a like unit. With any luck, the diversity of these experiences will lead to new theorizing about world politics.


Institutions in Turbulent Settings

Francisco Gutiérrez[1. Researcher, Instituto de Estudios Políticos y Relaciones Internacionales, Universidad Nacional de Colombia. This paper was sponsored by the London School of Economics – DFID Crisis States Program. I wish to thank James Putzel, Eric Hershberg, Juanita Villaveces, Juan Camilo Cárdenas and Paul Price for their extremely valuable input.]

This paper critiques some applications of the neoinstitutionalist program (NI) to the study of Latin American and Andean polities, and tries to develop some aspects of an alternative framework.

The critique develops on two levels. First, the “turbulent” institutions of several Latin American and Andean countries highlight some of the shortcomings of NI tout court. Since turbulent settings are the norm, rather than the exception, both theoretically and empirically there is a need in each case to explain the specific sense in which institutions can be taken as an independent variable. Second, the variant of NI most frequently applied to Latin America–with its heavy and almost always implicit normative and theoretical assumptions[2. See, for example, the excellent review by Ames (1999).]–is deeply flawed and fails to address the very core of political conflict and change in these countries. I hope that the context will indicate at which level my criticism is developing. I contend that the shortcomings of standard NI models can be overcome with a kind of political analysis that “brings society back in” and incorporates learning in a nontrivial way. This discussion is linked to the “philosophy of history” of NI (path dependency), sensitivity to initial conditions and the transition from laminarity to turbulence.

In the first part of the essay, I flesh out some basic definitions. In the second, I contrast how institutions work in stable (or “laminar”) and turbulent settings. In the third, I stress the importance of nontrivial learning from a Schumpeterian perspective. In the fourth, I show that path dependency is only part of the story. Each section leaves some unanswered and (hopefully) interesting questions.

1. Definitions

I understand NI in a very conventional way: a theory that takes institutions, broadly understood, as a relatively fixed set of incentives that explain differential social outcomes (as in North and Thomas's rendering of “the rise of the Western World”, 1976). How broadly understood is open to debate. A typical gambit of NI as applied to Latin American contexts is to resort to so-called “informal institutions” when the proposed correlation between institutions proper and outcomes fails to show up. Thus, the concept of institutions would contain all the rules of the political game. The problem of fancifully broad definitions of institutions is that they are all-encompassing in nature, identifying institutions with any stable pattern of human interaction. This is an open door for circular argumentation, tautology, and programmatic degeneracy (in Lakatos's [1970] jargon). Specifically, the notion of “informal institutions” is open to two criticisms. First, contrary to institutions proper, the so-called informal ones are identified by an external observer without the conscious acceptance of the protagonists of the interaction. They are an analytic device, operating at a different level of reality than explicit rules. How can one demonstrate empirically that the set of informal rules supposedly being observed actually regulates a given universe of interactions? Second, informal rules have a very vigorous (and unstudied) life, below and above formal institutions. Below: conventions, for example, are potent devices for solving coordination problems, which largely live in a world prior to explicit rule-establishment (Lewis, 1986) and which certainly do not require a two-person interaction.[3. I can develop conventions to protect myself from my own weakness of will; see Elster (1984).] Above: meta-agreements are not necessarily well behaved. In countries with weak polities, it can be the case that the only valid rule is the (informal) motto “the rule is that no rule will be observed”. This kind of (paradoxical) order does not orient actors in their everyday life–and so the typical inference of the NI program (such and such set of institutions generates such and such social outcomes) is out of place. Thus, it is better to stick to something like the more restrictive but sensible definition of Eggertsson: “Let us define institutions as sets of rules governing interpersonal relations, noting that we are talking about formal and organizational practices” (Eggertsson 1990, 70); see also Tsebelis (1990).

What is the explanatory power of institutions? In a world of (semi)rational agents, stories of failure and success have to be accounted for: Why do people (sometimes) take the wrong paths? Because they are sent the wrong signals, which encourage suboptimal behavior. And why do wrong signals occur? Mainly because of unspecified property rights, imperfect information and transaction costs. NI offers microfoundations where “organic culturalism” (à la Putnam) does not.[4. I would note, however, that Northian and Putnamian tales have in common being narratives of an original sin, a pretty strong symptom of their obliteration of power asymmetry motives. This aside means that I do NOT take sensitivity to initial conditions as given, a subject to which I will return.] Suboptimal outcomes can be explained by poor institutional design–which, in turn, is maintained by coalitions that benefit from it. Transaction costs are the prototype of such a mechanism. Market and state failures create social forces that help to maintain those very failures. From positive feedback and the decisive role of institutional design, path dependency follows. This provides for a theory of social change.

Note that the microfoundations provided by NI make sense if and only if institutions constitute a system of incentives that explains the behavior of the agents. On the other hand, NI does not entail hard-nosed rationalism; it should suffice that institutional signaling be the most salient feature of the incentive landscape, the other incentives playing the–perhaps important but logically subordinate–role of noise.

2. The Neoinstitutionalist Framework in Turbulent Settings: Does It Work?

Does NI provide an adequate toolkit for understanding conflict and change in countries with high levels of instability and where “noise” is stronger than institutional signaling? I think not. I will not dispute that in such countries “institutions matter”–an assertion that is trivially true (Harris, Hunter and Lewis 1995)–nor that a broad family of social and political problems can be captured by the rich conceptual framework of NI. I start instead with the defense of the (once again, very conventional) claim that such matters as contemporary political change in Colombia, Ecuador or Venezuela cannot be accounted for by standard NI.[5. I hope to do so by not-too-conventional means.] I believe there is a fundamental difference, from the point of view of the role and status of institutions in social and political life, between those turbulent Andean cases and the core capitalist countries.

a. My main defense of the conventional claim is based on the fact that in a very important sense countries like Colombia, Perú or Venezuela fit too well the theories that constitute the formal backbone of NI. To see this, let us use the powerful dichotomy between “policy politics” (playing the game) and “institutional politics” (debating the rules of the game), common to many variants of social choice theory. Buchanan (1986) argued that in all developed democracies (of his time), policy politics was the dominant practice, because basic institutions were taken for granted. This is perplexing, because if institutions are decisive for outcomes of efficiency and distributional problems, they should be a permanent object of political strife among (even boundedly) rational agents. The Tullock question (“Why so much stability?”) has no easy answer; in this regard, I can do no better than quote Eggertsson in extenso:

If institutions are somewhat chosen as we want to argue [rationally], then we are back to the disequilibrium outcomes of majority-rule voting, and the choice among institutions will not lead to stable or equilibrium institutions. And it does not help to argue that the choice of institutions is prescribed by higher rules, written or unwritten constitutions, because this only pushes the argument one step back, requiring us to predict unstable constitutions. . . . Just as in the case of voting outcomes, empirical observations tell us that the institutional structures in democratic countries are relatively stable, that they tend to be [in] equilibrium. . . . Shepsle argues that political institutions are ex ante agreements about cooperation among politicians. According to this view, institutions can be seen as a capital structure designed to produce a flow of stable policy outcomes, and institutional change is a form of investment. One of the costs of institutional change is the uncertainty about which outcomes the new regime will produce. Uncertainty implies that a given structure may ex ante be associated with a set of structure-induced equilibrium points. Ex post, this uncertainty is gradually reduced as the operational qualities of a new institutional structure become known. Finally, Shepsle argues that this uncertainty about the impact of structural change on equilibrium outcomes is enough reason to stabilize institutions and prevent continuous institutional change. He maintains, thus, that the calculations of agents in decisions involving policy choices are qualitatively different from calculations regarding institutional change (Eggertsson 1990, 71-72).

Stability is a major anomaly, for which auxiliary arguments had to be developed. In contrast, in the Andean area we have fully normal cases. The Andean constitutional wave of the 80s and 90s was one of the two biggest in the contemporary world, together with the post-socialist wave in Central and Eastern Europe. Colombia issued one new constitution and several important reforms related, among other matters, to property rights and the judiciary; Ecuador produced two constitutions (1979 and 1998), and a couple of major reforms in the meantime; Perú also enacted two constitutions; and Venezuela and Bolivia one each. In all these countries, “writing a new charter” has been the main motive of politics for long periods, a phenomenon backed by a strong historical tradition. Furthermore, large-scale institutional change is the main objective of everyday political squabbling: decentralization; electoral, judicial and congressional reform; and so on. The legislative inflation in the Andes is monstrous; not even specialists can follow comfortably the unending stream of change in the rules of the game in practically all of the basic areas of life.[6. One among many possible examples: during the first three years of the Andrés Pastrana (1998-2002) administration in Colombia, five important tax reforms took place. Even the rules that affect property, that apparently cozy haven of stability, are flexible.]

In normal Andean countries, then, institutional politics is dominant (and policy politics is rather poor). But this creates a real problem for NI. First, the basic rules of the game are not stable. Rather, they are the main object of contention and change very fast. Now, “unstable institutions” is rather an oxymoronic expression–whatever sense one can give to it, it affords only weak independent explanatory power to institutions. It also calls into question evolutionary arguments, one of the best alternatives to hyperrationality assumptions (Axelrod 1986; Young 2001), because the slow pressure against maladaptations does not have time to unfold (I will return to this). Thus, we have to ask in what sense we are speaking about a system of incentives that actually affects agents' behavior. If, as is the case in countries like Colombia or Ecuador, agents are conscious of the velocity of institutional change, their expectations are not strongly linked to any clearly specified (present) system of incentives. More subtly, we cannot maintain simultaneously both the theory and the explanations for the anomaly; both are stated on a general plane, and each one will hold for all cases or for none.

The obvious empirical question is: In such a setting, do institutional arrangements act as independent variables? And, then: How do agents respond to this family of noisy and fluid incentives?

b. Although in the above sense turbulent countries are too normal,[7. And indeed they are simply more than stable ones.] in another sense they are odd. Instability, war and violence give rise to self-sustaining patterns of human interaction, which in turn generate explanatory problems for the concept of rational calculus and thus of “systems of incentives.”

In Colombia, for example, war and elections have coexisted for a very long time–more or less twenty years, if we fix the start of the present wave of armed conflict in the 1982-84 period.[8. 1982 was the year in which the main guerrilla force (Fuerzas Armadas Revolucionarias de Colombia-FARC) declared itself the People's Army. Collier and Hoeffler give 1984 as the initial date of war in Colombia (1998).] Violence has become an everyday component of political action, making it a high-risk activity. This changes the menu of options for politicians, as well as their mindset. It also creates a rationality problem.

Due to war, Colombian politicians have the option–often the need–of switching between institutional systems: the jurisdiction of the state or the jurisdiction of the warlords. It is important to stress that warlords are involved in electoral politics in their territories, so the modal politician will participate in different and contradictory institutional worlds. This is another type of institutional politics (with changes in space, not in time). Agents can choose among competing rules of the game, indicating what kind of game they prefer in a given time period, but they pay the price of prohibitively high risk taking.

Colombian politicians of all political parties and families are presently taking very high risks of being kidnapped or killed.[9. An ominous symptom of this fact is that insurance companies withdrew coverage for Colombian mayors.] Oddly, there has been a strong increase both in risk and in political participation (in the sense that there are many more candidates and lists) in Colombia in the last twenty years.[10. Violence affects the political parties differentially. Controlling for size, proportionally more left-wing than traditional party members are killed; but all suffer massive bloodletting.] Why? Whatever the answer, I would argue that at the limit, when one is playing Russian roulette (i.e., when the loss of one’s life is one of the prizes), using the principle of revealed preferences is simply not sound. But in what sense then are we speaking of utility functions and systems of incentives? Risk can be coped with under the incentive system up to a certain point–after that, incommensurability appears.

c. Turbulent countries tend to be weak and vulnerable nation-states. A fundamental part of their decision making and political life is transnational. War, narcotrafficking, decentralization, economic adjustment, to name just some of the dominant motifs in the Colombian case, involve extended webs of national and foreign actors. This would not be a major problem except that transnational governance systems, and their corresponding distributional problems, have generally escaped the gaze of NI analysis–in part, indeed, because they are institutionally mis-specified,[11. For example, Aoki’s (2001) interesting and comprehensive work doesn't treat the subject.] in part because the mechanism of “micromotives and macrobehavior” (Schelling 1978) implies a change of scale: when one moves from the national to the global, the unit of analysis moves from individuals to states and organizations.

On the one hand we have, then, crucial global processes–that is, narcotrafficking, development models, technological change–and, on the other hand, a national institutional framework.[12. This mismatch between national institutional frameworks and patterns of power distribution appeared long before the present wave of globalization; indeed, it is at the heart of the “rise of the Western world.”] The result is that, empirically, one can observe a strong link between the “small” everyday practices of local politicians and the “big” arrays of transnational phenomena, but NI gives no way of capturing it.[13. Certainly, this is a motive explicitly posed by the Crisis States Program.] When one poses simple questions–such as why democracies are unstable in the Andean area or why the Colombian political system has changed in the specific sense it has–this tension becomes particularly uncomfortable. In the new wave of institutional studies on Latin America, novel and interesting aspects of political life were analyzed successfully, but a quaint dichotomy took center stage: adequate problem specification versus strong methods. One can have one of these goods, but not both simultaneously. Geddes (1994), for example, in her neat and intelligent analysis of state reform and the trajectory of bureaucracy-technocracy in Brazil, overlooks the role of international financial institutions. This description of a “purely national dilemma” defies credibility. An even more extreme example concerns the interpretation of political change in Colombia: analysts have focused on the niceties of electoral legislation, or more generally on institutional design, forgetting about such small details as narcotrafficking, war or the changes caused by television in political life (e.g., Nielson and Shugart 1999). Old institutionalism was not free of such difficulties, as denounced by Eckstein (2000), who criticized it for its focus on the small print of electoral design and its failure to give adequate context to the fall of the Weimar Republic.[14. Eckstein, though, seems to take for granted that “big” objects always have “big” causes, a point of view that can't be shared.] This draws our attention to the problem of metrics. Which set of institutions is relevant for the specific problem at hand? Can, for example, the electoral fragmentation of several Andean polities be explained simply by wrong electoral rules, or are other, more distant, institutions relevant?[15. And, once again, a meta-institutional problem arises here: the constant change in the rules of the game–and not any one set of rules valid at a given period–can decisively shape the nature of political conflict. See the excellent Fleischer (1996) for the successive waves of electoral reformism in authoritarian Brazil.]

In brief, society (with its “long networks” of political action) has to be brought back in. This applies, if the previous arguments are correct, especially to normal countries, where the institutional framework is unstable.

Conflicts around basic institutional design, the breakdown and creation of coalitions and national–transnational agendas–this is the very matter of everyday politics in “states in crisis”. Is it at all possible to speak about it, displaying at least part of NI's rigor and using some of its methodological tools?

3. Innovation and Learning

The question merits a positive answer; there are now interesting programmatic reflections (from Hedström and Swedberg [1998] to McAdam, Tarrow and Tilly [1997]) as well as appealing empirical works that address many of these challenges. Since the adaptation of the framework of “the vast majority of analyses produced by political economists” (using Hall’s [1997] expression) to these concerns will necessarily be incremental and piecemeal, I will offer a different, Schumpeterian twist.

Schumpeter’s name in political science is bound to the so-called elitist perspective, a contested though fertile view of electoral competition. But here I use another aspect of Schumpeter's work: his notion of entrepreneurship and innovation. My belief is that studying political innovation allows the researcher to trace the links of the chain that go from transnational processes to small local coalitions and conflicts. It may also allow for a better understanding of the nature of political change and a better fit to empirical data than standard accounts.

The intuition of viewing the politician or social leader as an entrepreneur is already well established and has been particularly successful in the study of social movements (see, for example, McAdam, McCarthy and Zald 1996). However, the typical definition of entrepreneur in the social movement literature is as a resource mobilizer, thus failing to address what Schumpeter considered the distinctive nature of entrepreneurship–innovation.[16. McCarthy (1996) comes close to this idea, but then drifts away.] Perhaps because social movements tend to be short-lived, the study of the repertoires of contention has not led to the analysis of innovation and its long-lasting effects on organisms and systems. More than in risk-taking or mobilizing resources, entrepreneurs (by Schumpeter's definition) are engaged in innovation, defined as “technological change in the production of commodities already in use, the opening up of new markets or of new sources of supply, Taylorization of work, improved handling of material, the setting up of new business organizations . . . in short, any ‘doing things differently’ in the realm of economic life” (Schumpeter 1939, 84).

What causes innovation? Endogenous change and exogenous shocks.[17. It is important to take into account that for Schumpeter, these expressions don't correspond to the national-international dichotomy, but rather to “internal or external to the economic system.” I adopt this usage, replacing “economic” with “political.”] Schumpeter focused on the former, actually considering the latter of little interest to economics. Whatever the merits of his reasoning, both types of forces seem crucial to the study of political systems. In politics, exogenous shocks occur frequently and are of indisputable interest: the ways in which wars, changes in tastes and new technologies give rise to ways of “doing things differently” are not well understood, yet they are crucial for the interpretation of political change.

The import of applying the Schumpeterian view of innovation is that we can study explicitly the interaction between processes of innovation and “exogenous shocks” caused by distant drivers of political change. We can do this without abandoning the basic tools that give NI its analytical force, particularly some notion of rationality and informational economy (signals, incentives and constraints)–that is, microfoundations.[18. On the other hand, endogenous innovation will not be successful unless it is robust relative to external shocks, especially if these are strong and repeated.] In other words, our agents will remain basically the same (for example, politicians who want to win elections) but now they are open to many different incentives and constraints. Exogenous shocks exist, so the system and its environment are moving simultaneously. The social landscape will be the vector that results from aggregated microinteractions, institutions and exogenous forces.

It is important to stress that the idea of innovation also allows for the study of both moments of interaction between institutions and agents: how institutions restrict agents and orient them in specific directions, and how agents, through small changes, transgressions and adaptations, perturb and finally transform institutions.[19. Here the existence of transnational forces is critical: agents can short-circuit institutions by resorting to transnational coalitions.] In this sense, as addressed specifically by Schumpeter, the study of innovation is evolutionary. But if narrow rationalism is replaced by an evolutionary perspective, we have a “syncopated evolution,” because the environment changes very fast, creating juxtaposed layers of adaptive practices. We do not just have a society of limited, myopic agents struggling in (sometimes very) noisy environments. Evolution can be imperfect and allow for the survival of maladaptations to previous incentive systems, because exogenous shocks can accumulate, truncating the evolutionary process. One of the typical results of this kind of evolution is thus “mixed types” and heterogeneous coalitions. Keeping track of the exogenous shocks, their unfolding and their effects on the political system, is consequently an antidote to the extremely uncritical adoption of the modern-backward dichotomy that is so evident in several NI analyses.

For example, elsewhere I have shown (Gutiérrez 2001) that to explain the type of relations between narcotraffickers and politicians in Colombia in the last twenty years I needed two dimensions: a principal-agent model that accounted for the contractual conflicts between criminals; and a new insertion of Colombia in the international system, marked by the 1991 Constitution, that changed the role of the state vis-à-vis illegality. On both dimensions the characteristic of social interaction was strong ambiguity and a large amount of noise. The 1991 Constitution was considered in its time a modernizing landmark, and in fact it provided a wealth of institutional resources to expel organized criminals from political life; but at the same time it gave in to the main demand of narcotraffickers (a ban on extradition for Colombians).

Schumpeter's analysis made learning the driving force of change, though focusing only on imitation. The “new ways of doing things” spread in waves, with early imitators replicating successful practices. But entrants keep imitating even after the marginal benefits of the innovation have reached zero, while other practitioners simply can't catch the beat of the new rhythms. Thus, a typical result of Schumpeterian evolution is that learning entails overreacting. Herd effects and congestion lead innovation-prone rational agents to suboptimal behavior. Agglomeration around successful devices results in catastrophes and organizational destruction. Schumpeter could identify the specific (market) mechanisms behind organizational breakdown and catastrophes.

We are nowhere near being able to do the same, because in politics there is no equivalent of markets, but I suggest this is a fundamental task. There has been a big bang in the political system in the Andes, but we do not know why it took place or what the reasons are for the differential outcomes (an organizational earthquake in Perú, Ecuador and Venezuela; relative stability in Bolivia; change with important organizational invariants in Colombia). Indisputably, these differential outcomes bear some relation to the diverse ways in which the agents adapted to rather similar processes. Several interesting questions spring from this simple statement: Which concrete mechanisms explain the differential reactions? When and why do the differential reactions imply divergence or convergence of outcomes?

Innovation and learning are the core of technical change. Note that, as in economics, in politics the latter expression has two meanings: the introduction of new devices; and the development of new forms of organization, discourses and practices. Both are very important. Television, for example, has had a very serious impact on Latin American political systems, giving an advantage to individuals over organizations[20. Unlike the majority of countries in Latin America, parties in Colombia were institutionalized long before the introduction of TV. In countries where both came more or less together, like Brazil, the impact must have been stronger (Mainwaring 1999).]–a circumstance that can hardly be overcome by changing electoral statutes.[21. As soon as the majority of voters starts to entertain the notion that individuals are better than "machines," stringent statutes that give an edge to parties over ambitious politicians don't have a chance of surviving. Ecuador is a good example of this.] On the other hand, there is a rich menu of purely "soft" technological innovations that trigger long-range political change. The following is a list of basic innovations, with illustrations of the decisions and processes involved:

Finding new financial resources. With the growing importance of organized crime in political life, a major decision is whether or not to accept illegal funds, and, if so, how to do it and how to justify (or deny) it publicly. The diverse ways in which these decisions are made–and rebuked by adversaries–produce a specific technology of public debate over the legal-illegal and formal-informal divide (which, by the way, is crucial in the contemporary world, not only for states in crisis).

Finding new languages, political discourses and symbols. In the Andean area, the second half of the 1980s and the 1990s saw the upsurge of the so-called antipolíticos, who sought to capture the votes of the citizens by staging an involved public performance of denial (“elect us because we are not politicians”). This complicated ritual produced a new technology of political symbolism.

Finding new ideas and new forms of interest aggregation and articulation. New ideas are decisive in political experience. In this regard, Hall has made two extremely important points. First,

[T]he vast majority of analyses produced by political economists take the same general form, which is to say that they identify a fixed set of variables, whether composed of interests, institutions or ideas, given exogenously to the process of political conflict, and then show how these structure the situation so as to produce the relevant outcomes. This kind of analysis can have real value, but what it misses is the extent to which the outcomes may be created via processes of political conflict and not generated entirely by the antecedents of that conflict.[22. That is, the assumption of the exogenous character of institutions can be wrong.] (Hall 1997, 197)

Second, ideas are a basic dimension of the political, because "politics is not only a contest for power. It is also a struggle for the interpretation of interests . . . politics is more open than most political economists see it" (ibid.; see also Hall 1992). Ideas, and ideals, migrate, undergo counterintuitive adaptations and are articulated (to use another of Hall's categories) with others in an evolutionary fashion.

Finding new dimensions of political practice. Should parties resort to violence or not? Will they significantly change their traditional repertoire of political action? Will they publicly change their ideology?

Highlighting innovations allows one to exhibit the remarkable, and often neglected, technical content of political struggle and change. The technical is not only a way of presenting interests; it is a way of building them (Hall 1997). At the same time, it brings out some stark contrasts with standard NI. From the NI perspective, learning is "transparent." Indeed, the pathbreaking analyses of Akerlof (1970), Stiglitz and others (e.g., Kreps [1990]) develop exquisite models that enhance our understanding of informational problems and offer potent tools to spell them out. However, their translation into political terms remains doubtful. For example, principal-agent structures have been applied to the people-government relation, losing sight of the fact that "the people" is not an actor but a space crossed by cleavages and fractures. Moreover, NI political scientists have systematically ignored the extremely simple–and intelligent–observation of Hirschman: politics is also (fundamentally) about speaking, and thus we need a theory of voice and signaling.

Learning through technical innovation suggests a different picture. First, agents create and adapt along many different dimensions, and are frequently struck by endogenous waves of innovation and/or exogenous shocks. This has two types of consequences. On the static side, the concept of dimensions of evaluation precedes that of systems of incentives. The best example I know of how dramatically a model changes when the dimensions of evaluation are methodically and explicitly introduced is Hirschman's (1970) classic critique of the Schumpeterian-Downsian model of elections. On the dynamic side, as stressed above, this evolution is punctuated by frequent large-scale changes in the environment, a factor that limits the power of the basic mechanism of the slow and gradual elimination of unfit agents.

Second, learning advances in waves, so that even a movement toward Pareto-optimal situations can cause catastrophic organizational mortality. There is a clear analogy, then, between herding behind a successful innovator and the classical collective action dilemmas. A polity that learns well can end up in a state of constant disarray. Third, signals are not transparent–they have to be read. This argument can be developed in several stages. To start with, innovators can successfully override institutional considerations. Institutions themselves can be inconsistent: a typical contemporary situation, for example, is answering simultaneously to a national and an international constituency with contradictory interests and concerns. Additionally, agents are exposed to lumpy stimuli–not one signal from one institution, but an institutional score, if I may, which the agent has to learn to interpret and play in front of different audiences. In turbulent settings, where institutions are unstable, short-lived and inconsistent, and coexist with other very strong systems of incentives, the ambiguity and polysemy of institutional signals reach a point where the (sunk) assumption of the communicational transparency of contractual incentives becomes untenable.

4. Whither Path Dependency?

The previous discussion is intimately related to the notions of equilibrium and change. To simplify: while neoclassical models predict convergence independently of historical contingency, NI establishes divergence and irreversibility as core concepts that entail path dependency (single versus multiple equilibria). Taken together, both perspectives raise many questions. The first is the level of resolution of the explanatory mechanisms. How "small" can an event be and still trigger a "big" change? For example, Eckstein (2000) argues that the analysis of the Weimar Republic by old institutionalists was wrong because they highlighted the fine points of electoral legislation, forgetting the huge historical tragedy behind Nazism. The criticism is intuitively appealing, but at the same time one would want an argument that addressed the very real fact that there need not be congruence between the size of the "cause" and the size of the "effect."

The second question relates to the fact that the nature of change can itself change. Suppose there are two types of systems, those oriented toward a single outcome (convergent) and those with several degrees of freedom (path dependent). It is clear that path-dependent systems can at a given moment become convergent–indeed, it can even happen that a convergent system becomes path dependent if, for example, it is subjected to a strong enough exogenous shock. This means that "original sin accounts" such as those discussed at the beginning of this paper fail because they ignore these second-order processes of change. This sounds much less abstract when grounded in the experience of the Andean countries. Was the "third wave" of democratization a genuine wave (toward a convergent system), or rather a turn in a cycle of regime change related to the form of insertion of these countries in the global market? Did these democracies depend for their existence on national assets or on international constraints? And what kind of counterfactual could be built by removing one or another constraint?

The intuition that there is an internal dynamic of the system, different from its observed path, might be important in two senses. First, the basic concepts of equilibrium in economics and game theory (e.g., Nash equilibrium) coincide tautologically with the notion of stability. Once the system arrives at a state, the task of the analyst is to explain why it is actually an equilibrium given the nature of the agents. By definition, there is no out-of-equilibrium outcome (Eggertsson [1990] notes this difficulty). However, with the simple idea of exogenous shocks and perturbations, there is the theoretical possibility of stability without equilibrium–when a system is continually perturbed up to a critical point–and thus of self-organizing structures in far-from-equilibrium situations. Political systems in states in crisis–especially those that show some kind of stable institutional life–would correspond to this description. Second, very different trajectories exposed to similar perturbations may arrive at the same end point. The Andean countries, for example, show similar patterns of political problems despite their very different traditions–the same can be said about Southern and Central European nations. Convergence takes place because of many factors, one of the most neglected being that agents learn about what is happening in neighboring nations. Once again the technical domain–innovation–is a driving force for learning: in the collapse of the Central and East European centrally planned systems, the "round table" technique first tried in Poland was used successfully in very different contexts; in the Andean countries the wave of antipolítica was propagated through the explicit appropriation of the motifs of electorally successful leaders. Space still counts–perhaps it counts more than ever. All this highlights that historicism does not entail path dependency. Systems can be absorbing (a single end state, as in neoclassical models), periodic (cyclical) or neither (path dependent)–and large-scale historical changes can imply "second-order change," a passage from one type of system to another.
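The distinction between equilibrium and stability invoked above can be illustrated with a trivial numerical sketch (the rest point, the strength of the pull and the size of the shocks are arbitrary assumptions of mine, not quantities drawn from the cases): a state variable is pulled toward a rest point but continually perturbed, so it never settles at any equilibrium value while remaining within a bounded region.

    import random

    def perturbed_dynamics(steps=1000, pull=0.1, shock=0.5, seed=1):
        """Toy state variable with a restoring force plus continual exogenous shocks."""
        random.seed(seed)
        x = 0.0
        trajectory = []
        for _ in range(steps):
            x += -pull * x + random.uniform(-shock, shock)  # pull toward 0, then a shock
            trajectory.append(x)
        return trajectory

    traj = perturbed_dynamics()
    print(f"never at rest: min={min(traj):.2f}, max={max(traj):.2f}, final={traj[-1]:.2f}")

The trajectory is stable in the sense of staying within a band, yet at no moment is the system at an equilibrium in the game-theoretic sense of a state no agent (or force) moves away from.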

This takes us back to the "big effects"/"small causes" theme. If we take path dependency seriously, its meaning is very close to the concept of sensitivity to initial conditions. When dynamical systems show sensitivity to initial conditions, very small differences in the starting points of two distinct "particles" can entail enormous differences in their trajectories and outcomes. In historical analysis, however, this approach is riddled with difficulties. How can one determine the "starting point" of the trajectory? Seldom is this question posed explicitly. Will one accept the theological notion that all future generations are determined by the (original) sin(s) of their great-great-grandfathers? Putnam (1994) has shown an iron consistency in this regard, and goes as far back as possible to find the reasons for differential outcomes–a real and explicit concern for "time zero" in the historical trajectory. Though his effort is basically flawed (Putzel 1997; Tarrow 1996), it clearly and honestly reveals that path dependency and sensitivity to initial conditions are kindred notions. However, if original-sin determinism is introduced into the analytical framework, in what sense do institutions (or culture, for that matter) count? Institutions would be only the demiurge that expresses the (perhaps tiny) differences at time zero.
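The formal notion of sensitivity to initial conditions can be made concrete with a standard textbook example, the logistic map in its chaotic regime. This is a generic illustration, not a model of any historical process discussed here, and the parameter value and starting points are arbitrary choices of mine.

    def logistic(x, r=4.0):
        """One step of the logistic map x -> r*x*(1-x); chaotic at r = 4."""
        return r * x * (1.0 - x)

    # Two "particles" whose starting points differ by one part in a million.
    x, y = 0.300000, 0.300001
    for t in range(25):
        x, y = logistic(x), logistic(y)
        if t % 5 == 4:
            print(f"t={t+1:2d}  x={x:.6f}  y={y:.6f}  gap={abs(x - y):.6f}")

Within a couple of dozen steps the initially negligible gap grows to the full range of the system, which is exactly the property that makes "time zero" so consequential, and so hard to pin down, in path-dependent accounts.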

All this has implications for the very foundations of an evolutionary perspective. In "abnormal" (stable) situations, institutions constitute a general framework and outcomes can be seen as the result of the interactions of individuals.[23. Of course groups and organizations enter the analysis, but they in turn are a result of the interaction between individuals.] The "micromotives and macrobehavior" mechanism works well. Thus, stable worlds are transparent in yet another sense: given the rules, outcomes can be studied as if they were the result of aggregated microdecisions. But vulnerable and unstable (turbulent) systems are not transparent, because agents are "shooting at a moving target"–the environment changes faster than the system, and the boundaries between the system and the environment are ill-defined.

5. Conclusions

I very much agree with Rubinstein's (2000) assertion that the abstract analysis of the interaction of rational agents has an independent value in itself. However, as soon as a theory, or threads of it, is used as a tool for empirical statements about the state of the world, the categories of the theory should be carefully evaluated to see whether they capture the basic content of the system under study. What are the conditions for institutions to be the basic "system of incentives" in a social world?

The NI literature applied to Latin America has plainly neglected this question. When systems of incentives are taken for granted, the interests of agents can be fixed deductively. The basic question then becomes cui bono? (Elster 1997): who is the beneficiary of this rule of the game? Modernizers are, and should be, interested in liberalization (Diamond et al. 1997), clientelists in statism, and so on. If what I have been saying about "mixed types" is true, the cui bono question, however attractive, is very nearly the death of good empirical research for countries like those in the Andean region.

A focus on learning and innovation would help to alter the analytical landscape (interests, by the way, are also learned and discovered) and bring society back in. It is also a good antidote to narrow (economic, cultural, original-sin) determinism. Innovation and learning are embedded in sequences of political-socioeconomic-military structures and events. I believe these Hirschmanian sequences give a better understanding than explaining politics through economics, explaining politics through politics, or explaining politics through time-zero events.

Contrary to Schumpeter's framework for economics, in politics exogenous shocks are also of analytic interest. I have used the expression "exogenous shocks" here to mean neither "international" nor "uncalled for," but "part of the environment, not of the system." Political change cannot be understood without taking into account the open-ended nature of political systems (Hall) and the exogenous shocks to which they are exposed, especially in turbulent settings. Exogenous shocks also underscore the contingent and nontransparent nature of institutional signaling, a fundamental aspect of empirical political conflict in the Andean countries.

References

Akerlof, George. 1970. "The market for 'lemons': quality uncertainty and the market mechanism." Quarterly Journal of Economics 84, no. 3: 488-500.

Ames, Barry. 1999. "Approaches to the study of institutions in Latin American politics." Latin American Research Review 34, no. 1: 221-27.

Aoki, Masahiko. 2001. Toward a Comparative Institutional Analysis. Cambridge, Mass.: MIT Press.

Axelrod, Robert. 1986. La evolución de la cooperación. Madrid: Alianza.

Buchanan, James. 1986. Liberty, Market and the State. Political Economy in the 1980s. New York: New York University Press.

Collier, Paul, and Anke Hoeffler. 1998. "On economic causes of civil war." Oxford Economic Papers 50: 563-73.

Diamond, Larry, Marc Plattner, Yun-han Chu, and Hung-mao Tien, eds. 1997. Consolidating the Third Wave Democracies. Baltimore: Johns Hopkins University Press.

Eckstein, Harry. 2000. "Unfinished business." Comparative Political Studies 33, no. 6-7: 505-35.

Eggertsson, Thráinn. 1990. Economic Behavior and Institutions. Cambridge, England; New York: Cambridge University Press.

Elster, Jon. 1997. Egonomics. Barcelona: Gedisa.

______. 1992. El cambio tecnológico. Investigaciones sobre la racionalidad y la transformación social. Barcelona: Gedisa.

______. 1984. Ulises y las sirenas. Estudios sobre racionalidad e irracionalidad. México: Fondo de Cultura Económica.

Fleischer, David. 1996. "Las consecuencias del sistema electoral brasileño: partidos políticos, poder legislativo y gobernabilidad." Cuadernos de Capel-IIDH, no. 39.

Geddes, Barbara. 1994. Politician's Dilemma: Building State Capacity in Latin America. Berkeley: University of California Press.

Gutiérrez, Francisco. 2001. “Organized crime and the political system in Colombia.” www.nd.edu/~kellog

Hall, Peter. 1997. “The role of interests, institutions and ideas in the comparative political economy of the industrialized nations.” Pp. 174-207 in Comparative Politics. Rationality, Culture and Structure, edited by Mark Lichbach and Alan Zuckerman. Cambridge, England; New York: Cambridge University Press.

______. 1992. "The movement from Keynesianism to monetarism: Institutional analysis and British economic policy in the 1970s." Pp. 90-113 in Structuring Politics. Historical Institutionalism in Comparative Analysis, edited by Sven Steinmo, Kathleen Thelen, and Frank Longstreth. Cambridge, England; New York: Cambridge University Press.

Harriss, John, Janet Hunter, and Colin Lewis, eds. 1995. The New Institutional Economics and Third World Development. London; New York: Routledge.

Hedström, Peter, and Richard Swedberg, eds. 1998. Social Mechanisms: An Analytic Approach to Social Theory. Cambridge, England; New York: Cambridge University Press.

Hirschman, Albert. 1970. Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, Mass: Harvard University Press.

Kreps, David. 1990. A Course in Microeconomic Theory. Princeton, NJ: Princeton University Press.

Lakatos, Imre. 1970. “Falsification and the methodology of scientific research programmes.” Pp. 91-195 in Criticism and the Growth of Knowledge, edited by Imre Lakatos and Alan Musgrave. Cambridge, England: Cambridge University Press.

Mainwaring, Scott. 1999. Rethinking Party Systems in the Third Wave of Democratization. The Case of Brazil. Stanford, Calif.: Stanford University Press.

McAdam, Doug; John McCarthy; and Mayer Zald, eds. 1996. Comparative Perspectives on Social Movements. Political Opportunities, Mobilizing Structures and Cultural Framings. Cambridge, England; New York: Cambridge University Press.

McCarthy, John. 1996. “Constraints and opportunities in adopting, adapting and inventing.” Pp. 141-52 in Comparative Perspectives on Social Movements. Political Opportunities, Mobilizing Structures and Cultural Framings, edited by Doug McAdam, John McCarthy, and Mayer Zald. Cambridge, England; New York: Cambridge University Press.

Nielson, Daniel, and Matthew Shugart. 1999. "Constitutional change in Colombia: Policy adjustment through institutional reform." Comparative Political Studies 32, no. 3: 313-42.

North, Douglass, and Robert Thomas. 1976. The Rise of the Western World: A New Economic History. Cambridge: Cambridge University Press.

Putnam, Robert, Robert Leonardi, and Raffaella Nanetti. 1994. Making Democracy Work: Civic Traditions in Modern Italy. Princeton, NJ: Princeton University Press.

Putzel, James. 1997. "Accounting for the dark side of social capital." Journal of International Development 9, no. 7: 939-49.

Rubinstein, Ariel. 2000. Economics and Language: Five Essays. New York: Cambridge University Press.

Schelling, Thomas. 1978. Micromotives and Macrobehavior. New York: Norton.

Schumpeter, Joseph. 1934. The Theory of Economic Development. An Inquiry into Profits, Capital, Credit, Interest and the Business Cycle. Cambridge, Mass.: Harvard University Press.

______. 1939. Business Cycles. A Theoretical, Historical, and Statistical Analysis of the Capitalist Process. New York; London: McGraw-Hill Book Co., Inc.

Tarrow, Sidney. 1996. "Making social science work across space and time: a critical reflection on Robert Putnam's Making Democracy Work." American Political Science Review 90, no. 2.

Tsebelis, George. 1990. Nested Games. Rational Choice in Comparative Politics. Berkeley: University of California Press.

Young, Peyton. 2001. Individual Strategy and Social Structure. An Evolutionary Theory of Institutions. Princeton, NJ: Princeton University Press.
