Epidemic and Infectious Disease Surveillance: Rise of the Security–Military Framework

The COVID-19 pandemic has seen some Asian countries employ sophisticated mass-surveillance technologies—normally used to gather intelligence for domestic security purposes—to contain the spread of infection in their populations. There has also been an intrusion of military and allied national security actors into the traditionally civilian domain of public health, in the form of disease surveillance. These emerging developments in the pandemic response provide the occasion for a limited historical review, from World War II to the present, centred on the intersection between infectious disease surveillance and control, national security, and the military in the Western world.

The ongoing COVID-19 pandemic has been the single most disruptive event since World War II, unprecedented in its impact on almost every major domain, from public health to the economy and the environment. A global phenomenon of this scale is viewed as a harbinger of epochal change in numerous domains, and it has spawned countless speculations and apprehensions about the not-so-distant future, that is, the post-pandemic era. From the early months of the COVID-19 pandemic, many governments deployed digital real-time remote surveillance technologies as part of the spectrum of emergency measures for outbreak control in their respective populations. A concern was that the deployment of these new, sophisticated mass-surveillance tools, often provisionally sanctioned under extraordinary legal powers, would continue indefinitely, even after the pandemic was brought under control (Harari 2020). The technologies, meant for contact tracing and location tracking, included smartphone apps using Bluetooth and the global positioning system (GPS), quick response (QR) codes, facial-recognition software, and others. With the help of data mining, artificial intelligence, and machine learning, government agencies could constantly monitor the movements of persons known or suspected to have contracted the virus. This was done to ensure strict compliance with isolation or quarantine protocols, as well as to identify potential contacts and, in the long run, establish virus transmission chains and identify emerging hotspots.
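
To make the mechanism concrete, the sketch below illustrates the general token-exchange idea behind Bluetooth contact-tracing apps, loosely in the spirit of the decentralised Google/Apple exposure-notification design. It is a heavily simplified illustration under assumed details—the token format, rotation schedule, and publication step are placeholders—and not any real app's protocol.

```python
# Simplified sketch of the token-exchange idea behind Bluetooth contact-tracing
# apps (loosely modelled on decentralised exposure-notification designs; all
# details here are illustrative assumptions, not any real app's protocol).
import secrets

class Phone:
    def __init__(self):
        self.own_tokens = []       # rotating random tokens this phone broadcasts
        self.heard_tokens = set()  # tokens received from nearby phones

    def broadcast(self):
        """Emit a fresh, unlinkable random token (rotated periodically)."""
        token = secrets.token_hex(16)
        self.own_tokens.append(token)
        return token

    def hear(self, token):
        """Record a token picked up over Bluetooth from a nearby phone."""
        self.heard_tokens.add(token)

    def check_exposure(self, published_positive_tokens):
        """Match tokens heard locally against those published by confirmed cases."""
        return not self.heard_tokens.isdisjoint(published_positive_tokens)

# Two phones near each other exchange tokens; a third is never in range.
alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())

# Alice tests positive and uploads her tokens to a public list.
positive_tokens = set(alice.own_tokens)
print(bob.check_exposure(positive_tokens))    # True  -- Bob was nearby
print(carol.check_exposure(positive_tokens))  # False -- Carol was not
```

In such decentralised designs, only random tokens leave the phone; centralised variants, by contrast, upload contact graphs or geolocation data to government servers, which is what raised the surveillance concerns discussed here.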

Aided by favourable legal regimes and highly wired societies, some governments, such as those of South Korea, Taiwan, Singapore, China, and Israel, gained access to the digital footprints of their citizens through smartphone apps and credit/debit card transaction records. In Russia, closed-circuit television (CCTV) cameras with facial-recognition software were being used. The Israeli government granted its domestic security agency—the Shin Bet—unrestricted access to a vast pool of geolocation data of millions of Israeli cellphone users. These confidential data, usually gathered from cellphone providers for counterterrorism purposes, were repurposed for outbreak control efforts (Silverstein 2020). In the West, both the United Kingdom (UK) and the United States (US) health departments signed expensive contracts with Palantir, a secretive big-data analytics firm from Silicon Valley that had earlier collaborated with the Central Intelligence Agency (CIA) and other US national security agencies, and infamously assisted the immigration department under the Donald Trump administration in the separation of migrant families and the arrest of undocumented migrants (Howden et al 2021). US governments, both federal and state, also began a similar dialogue with Clearview AI, a controversial facial-recognition start-up that provided software to law enforcement agencies, like the Federal Bureau of Investigation (FBI) and the Department of Homeland Security (DHS), to track down criminals. Google, Apple, and other big tech firms have lately ventured into creating similar tracking software (Calvo et al 2020).

A few emerging, and also overlapping, trends can be discerned from these new developments. First is the temporary repurposing, for outbreak control, of many surveillance technologies originally designed and employed for intelligence gathering. Second is the growing role of military or security agencies in the realm of disease surveillance.

The traditional role of the military in large and significant infectious disease outbreaks is one of immediate response, as in the case of other major natural disasters, and is mainly limited to helping with logistics, building temporary infrastructure like field hospitals and isolation centres, enforcing curfews and lockdowns, transporting vaccines and medicines, maintaining food and other essential supply chains, and so on. During the COVID-19 pandemic, the excessive use of and reliance on armed forces for immediate outbreak response in some countries, like the Philippines, South Africa, and Sri Lanka, attracted considerable criticism (Metheven 2020). However, in this paper, the traditional military response to epidemics is not the issue of concern. Instead, the focus is on the increasing role and responsibility shared by military, security, or law enforcement agencies in the long-term aspects of infectious disease control, including surveillance and research. It is to be underlined that these aspects belong to the public health domain, traditionally entrusted to appropriate civilian agencies in normal times.

An almost commonsensical explanation is that these new developments are mostly limited to East Asian countries with nominally democratic or semi-authoritarian regimes that have scant regard for civil rights, and hence that such measures are fundamentally incompatible with the existing liberal democratic culture and legal regimes of Western countries (Alon et al 2020). In this paper, we will attempt to dismantle this stereotypical binary of "democratic West–authoritarian East" in accounting for the increasing role of military or security agencies in disease surveillance for outbreak control. Instead, it can serve as a starting point for a historical review of the intersection between infectious disease surveillance, national security, and the military in the US, as a prototype of the Western world, from the post-World War II era to the present. Drawing from a range of secondary empirical sources, like articles in research journals and periodicals, news reports, and key official reports, it can be argued that: first, certain infectious diseases became deeply associated with national security interests in the Western world in the decades following the Cold War; and second, despite being a traditional concern of public health agencies, surveillance and other components of infectious disease control witnessed an increasing involvement of military and allied security actors in recent decades. The military engagement with infectious diseases in Western countries also broadened in scope, from health protection of their own forces to that of the general population, in coordination with public health institutions.

Post-war Shaping of Epidemiological Surveillance

In the initial years following World War II, the establishment of the Centers for Disease Control and Prevention (CDC) and the Epidemic Intelligence Service (EIS) programme played a central role in shaping epidemiologic or public health surveillance of infectious diseases in the US and many other countries in its present professional and institutional form.1 These developments, in their nascent stages, were embedded, to an extent, in national security interests in the context of an emergent Cold War.

In 1942, what would become the CDC was established as a wartime exigency—the Office of Malaria Control in War Areas—with its headquarters in Atlanta, Georgia in the south-eastern US, and with the objective of controlling malaria around military installations in the southern states, where mosquitoes were abundant. Although malaria and typhus were largely controlled by the end of World War II, the agency was retained to carry on control efforts, as well as to prevent the spread of new infectious diseases expected to be brought back by American troops returning from foreign tropical countries. In 1946, the organisation was converted into a permanent federal agency, the forerunner of today's CDC, tasked with assisting the US in the control of all types of infectious disease (Foege 1981).

A prominent epidemiologist at the CDC, Alexander Langmuir, had been trying to establish a formal programme to train interested physicians in field epidemiology, but the proposal initially failed to attract government funding (Hamilton 2006). Beginning in June 1950, the Korean War was the first large-scale armed conflict involving the superpowers since the advent of the Cold War, one in which mutual suspicions and accusations regarding the use of biological weapons ran high. The clandestine biological weapons programme of the US, established in 1943 during World War II, was on its upswing. Langmuir (1980: 472) later remarked:

The CDC was born at the beginning of the atomic age, an age when intense controversy raged among physicians, epidemiologists, and the military over biological warfare. The subject was so shrouded in secrecy that it could not be discussed in an open scientific fashion.

Sensing a new insecurity in the political and military establishment regarding the biological warfare threat from the Soviet Bloc, Langmuir invoked national security to revive his earlier training proposal, now recast as developing a mobile team of competent epidemiologists that would aid the FBI and the military through the early detection of any such covert biological attack. Expectedly, the US Congress accepted the proposal and the EIS was created. Langmuir admitted to having deliberately selected the word "intelligence" to describe the new programme, and the underlying wartime utilitarian tone was explicit. He stated:

if an enemy chooses to use biological warfare against us, he will expect to produce epidemics. Epidemiology is, therefore, basically involved in defense against biological warfare. (Langmuir and Andrews 1952: 235)

Cold War: Military and Infectious Diseases

A few years into the Cold War, the anticipated biological warfare threat from the Soviet Union was largely eclipsed by the threat of
nuclear weapons. The EIS quickly shifted its attention to infectious diseases of public health concern and, in later years, achieved some spectacular successes, such as in polio surveillance. In 1965, the World Health Organization (WHO) formally established its epidemiological surveillance unit for communicable diseases. As domestic protests escalated against the Vietnam War, US President Richard Nixon, in 1969, terminated the offensive biological weapons programme and dismantled the army biological warfare laboratories at Fort Detrick, Maryland. In 1972, he eventually signed the Biological Weapons Convention, a multilateral biological disarmament treaty (Tucker 2002).

During the Cold War, the engagement of the US military with the domain of infectious disease was largely limited to the health protection of military forces. It had long been recognised that infectious diseases have historically accounted for more military fatalities than combat itself and can decisively influence the course of military operations. During World War I, armed forces on both sides suffered overwhelming fatalities from influenza and typhus epidemics. From the dawn of the 20th century, beginning with the historic successes of the army commissions on yellow fever (in Cuba) and anaemia (in Puerto Rico), the US military established and maintained a wide network of overseas laboratories, mostly in tropical and subtropical countries, for research on locally endemic infectious diseases, which were seen as potential threats to the success of its overseas military operations and the health of deployed troops. The major focus was on developing and testing new diagnostic tests, vaccines, antimicrobials, vector control measures, etc. The host countries provided an ideal setting to study these diseases in their natural milieu, and clinical trials were also conducted on the local population. However, surveillance for infectious diseases remained nominal throughout this period. Collaboration with host countries at the scientific and administrative levels also served a variety of diplomatic and strategic interests of the US. During World War II and the ensuing Cold War, as overseas military deployments of the US steadily grew in number, more overseas laboratories and their detachments were established—in Guam (1944), Taiwan (1955), Thailand (1959), Indonesia (1970), Brazil (1973), Kenya (1974), Manila (1979), Peru (1983), and elsewhere (Gambel and Hibbs 1996).

In the closing decades of the 20th century, more overseas labs were shut down than opened, for various reasons, including shortage of funds and changing strategic needs. In the late 1970s, the US Department of Defense (DoD) even considered closing, or handing over to civilian agencies, all of its overseas military research laboratories. However, after the Cold War ended, the relationship of the US military with infectious diseases underwent a profound transition (Russell et al 2011).

New Security Threats and Civil Biodefence

With the nuclear threat receding at the end of the Cold War, non-traditional threats to national security began to receive more attention in the West. From being regarded simply as public health problems, certain infectious disease outbreaks, whether natural or man-made, were increasingly assumed to have ramifications that could unsettle societal, economic, and political stability. In international politics, an issue is said to be securitised when it is "presented as an existential threat requiring emergency measures and justifying actions outside the normal bounds of political procedure" (Buzan et al 1998: 23–24). Traditionally, such existential threats to a state are of a military nature, but from the early 1990s, infectious diseases began rising in the ranks of the national security agendas of developed countries, moving from the realm of low politics to high politics.

Emerging and re-emerging infectious diseases: In the three decades following World War II, newly developed antibiotics drastically reduced deaths from hitherto deadly bacterial diseases, and the initiation of mass vaccination programmes produced spectacular results in the control of smallpox, polio, measles, and other childhood diseases in developed countries. However, the optimism of the late 1970s was soon dimmed by the emergence of new infectious diseases like HIV and the global resurgence of old ones like malaria, tuberculosis, cholera, and dengue, often in more virulent and drug-resistant forms, amid the sluggish development of new antibiotics. From a historic low in 1980, the death rate from infectious diseases in the US began to rise steadily once again. The most alarming of all, HIV/AIDS, identified in 1981–82, was later found to have originated in sub-Saharan Africa. Highly lethal new viral diseases, poorly understood and difficult to treat, erupted in other parts of the world, like the Ebola virus (1976, Zaire) and the Nipah virus (1998, Malaysia). In 1999, the West Nile virus was introduced into the US, resulting in a large and dramatic outbreak in New York. In 1997, the first known human infections with the highly virulent avian influenza or "bird flu" strain (H5N1) were reported in Hong Kong. In 2003, the severe acute respiratory syndrome coronavirus (SARS-CoV) spread to Canada within just one month of its outbreak in China. This succession of unforeseen events sounded alarm bells within the administrative and security establishments of Western countries about the presumed threat of emerging infectious diseases (EIDs) to their national security (Davies 2008; Heymann 2003).

The increased interconnectedness between developed and developing countries in a globalised world—particularly through air transportation between metropolises, resulting from an explosive rise in international travel, trade, and immigration—has added to the former's sense of vulnerability to infectious disease outbreaks. In the 1990s, a number of successive official reports in the US reflected the growing recognition of emerging and re-emerging infectious diseases as a significant threat to the nation (IOM 1992; CDC 1994; CISET 1995). One such influential report, by the Institute of Medicine (IOM 1992), strongly recommended strengthening the US role in global surveillance, highlighted the importance of the already existing overseas military laboratories, suggested continued financial support to them, and warned against their decline. In January 2000, an annual CIA report underlined global infectious diseases as a non-traditional threat to US national security (NIC 2000). Additionally, the United Nations Security Council (UNSC), at the initiative of the US, held an entire session exclusively on a health issue, that is, the impact of HIV/AIDS on peace and security in Africa (Waal 2014). Both initiatives were firsts in the respective histories of these security organisations.

Bioterrorism: With the end of the Cold War, a few third world countries like Iraq, Iran, Libya, Syria, and North Korea—referred to as "rogue" states—and some transnational non-state terrorist entities, like Al-Qaeda, were projected by security experts in Washington as emerging international threats to US national security. Unlike a nuclear superpower like the erstwhile Soviet Union, these new sets of actors were considered more likely to indulge in asymmetrical warfare, employing unconventional weapons of mass destruction (WMDs) like chemical and biological weapons (CBW), which require lower input costs and technology and are far easier to produce and disseminate than conventional ones—hence often described as "the poor man's atomic bomb." The terminological shift in denoting the security threat, from an NBC (nuclear, biological, and chemical) threat in the Cold War era to a CBRN (chemical, biological, radiological, and nuclear) threat in the 1990s, reflected this shift in perception and realignment of priorities. The revelation of the erstwhile Soviet Union's huge biological weapons programme raised the fear that its bioweapons cache might, through former Soviet scientists, end up in the possession of the new rogue states and non-state actors (Wright 2004). In 1995, Iraq admitted to a small biological weapons programme that had been operating since the 1980s. An influential report by the US Office of Technology Assessment (OTA) called for a sweeping expansion of biological defence to protect the entire civilian population, deemed vulnerable to bioterrorist attacks. It reflected a definitive shift in the priority group for biodefence—from the military to civilians. Prior to this, the conventional form of biodefence had been exclusively concerned with protecting troops from infectious diseases on the battlefield (OTA 1992). By the end of the 1990s, the possibility of some impending biological doom seemed increasingly real to some influential sections, reinforced by regular and often dramatic portrayals in newspapers, popular television shows, and fiction:

The image of a cloud of anthrax killing millions, repeatedly promoted to the public by prominent scientists and senior members of the administration … gained the same kind of symbolic strength as the mushroom cloud of a nuclear explosion. (Wright 2006: 100)

The twin threats of EIDs and bioterrorism converged to elevate the status of certain infectious diseases to an issue of national security, which called for a common strategy to combat them. It was determined that strengthening national and global surveillance efforts would serve a "dual purpose" against both the natural and military dimensions of these infectious diseases (Wright 2006; Heymann 2003).

The Expanding Role of the Military

Throughout modern history, the military has been seen as the traditional agent of national security. Any threat to national security typically calls for a key role for the armed forces as part of the response and mitigation efforts. The growing trend in developed countries of regarding certain infectious diseases as potential security threats, in turn, provided legitimacy to the increasing role of traditional security actors like militaries in the context of infectious disease outbreaks (Watterson and Kamradt-Scott 2016; Youde 2008). From the mid-1990s, the role of the military and the scope of biological defence considerably broadened, from the traditional concern of exclusively protecting the health of service persons to now protecting the health of the civilian population from the threat of infectious disease, whether natural or man-made; this was termed a "major policy shift" by Wright (2006: 65). According to Fidler (2011: 119): "The lines demarcating military and civilian realms began to blur, creating the need to integrate civilian and military efforts against pathogenic threats." The years that followed also witnessed unprecedented collaboration between federal public health agencies in the US, namely the CDC and the National Institutes of Health (NIH), and military and domestic security or law enforcement agencies, like the FBI (Waal 2014; Butler et al 2002).

Militarisation of Infectious Disease Surveillance

In 1997, under a directive of the then US President Bill Clinton, the DoD established a global surveillance network for emerging and re-emerging infectious diseases, named the Global Emerging Infections Surveillance and Response System (DoD-GEIS) (White House 1996). The goal was to improve surveillance of, and outbreak response to, emerging pathogens worldwide through overseas military medical research laboratories located in partner countries. Davies (2008: 299) remarked: "Their location in the DoD, as opposed to the United States Agency for International Development (USAID) or Center for Disease Control (CDC) demonstrates how seriously the United States views the response to infectious disease as a key national security strategy." In 1999, the US military also developed a global web-based syndromic surveillance system named ESSENCE (Electronic Surveillance System for the Early Notification of Community-based Epidemics), which continuously evaluates digital health data from diverse non-traditional sources around the world and can detect emerging outbreaks, often within a few hours rather than days. To strengthen domestic surveillance, the FBI and the US Army partnered with the CDC and established the Laboratory Response Network (LRN) in 1999—a multilevel network connecting state or federal laboratories with military ones (Morris et al 2003). The US Department of Health and Human Services (DHHS) secretary publicly declared in the same year that it was "the first time in American history in which the public health system has been integrated directly into the national security system" (Wright 2004).
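
As an illustration of the kind of statistical detection such syndromic systems depend on, the sketch below flags anomalously high daily syndrome counts against a recent sliding-window baseline. The detector shown is similar in spirit to the CDC's publicly documented EARS C2 algorithm and is an assumption for illustration only; it is not ESSENCE's actual method, and the function name, parameters, and data are hypothetical.

```python
# Minimal sketch of a syndromic-surveillance detector, similar in spirit to the
# CDC's EARS C2 algorithm (an illustrative assumption, not ESSENCE's actual
# method). Each day's syndrome count is compared against a recent sliding-window
# baseline, and the day is flagged when it exceeds the baseline mean by three
# standard deviations.
from statistics import mean, stdev

def flag_outbreak_days(daily_counts, baseline=7, guard=2, threshold=3.0):
    """Return indices of days whose count is anomalously high.

    baseline  -- number of past days used to estimate the expected count
    guard     -- most recent days excluded from the baseline (they may
                 already be contaminated by the emerging outbreak)
    threshold -- standard deviations above the baseline mean that trigger an alarm
    """
    alarms = []
    for day in range(baseline + guard, len(daily_counts)):
        window = daily_counts[day - guard - baseline : day - guard]
        mu, sigma = mean(window), max(stdev(window), 0.5)  # floor avoids divide-by-zero
        if (daily_counts[day] - mu) / sigma > threshold:
            alarms.append(day)
    return alarms

# Synthetic example: stable background counts with a spike from day 12 onward.
counts = [11, 9, 12, 10, 11, 10, 12, 9, 11, 10, 12, 11, 25, 31, 40]
print(flag_outbreak_days(counts))  # -> [12, 13, 14]
```

The guard band is the design point worth noting: excluding the most recent days from the baseline prevents the start of an outbreak from inflating its own expected value and masking the alarm.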

Intensifying Domestic Biosurveillance

In 2001, anthrax attacks in the US through spore-laden mail ostensibly validated the long-held apprehensions of the security establishment. The George W Bush administration promptly responded by launching an array of ambitious and expensive civil biodefence initiatives, ranging from developing new vaccines and drugs against common biological agents (Project Bioshield) and creating emergency medical reserves for potential bioterrorist attacks (Strategic National Stockpile), to intensifying domestic biosurveillance, among others.2 In 2003, the newly founded US DHS—launched to prevent future terrorist attacks like those of 11 September 2001—deployed the BioWatch programme along with the FBI. Under this programme, the existing network of environmental sensors in major US cities was modified to also act as biological sensors, to detect at an early stage the mass release of any lethal pathogen into urban air. Wright (2004: 58) remarked: "Bush promises Americans a vast bio-umbrella intended to shield them from deadly bio-aggression in the same way that President Ronald Reagan promised that his Strategic Defense Initiative ('Star Wars') would shield them from nuclear missiles." Cooper (2006) referred to this development as "the biological turn in the war on terror."

War Disease to National Security Threat

In modern history, influenza has been so deeply and devastatingly associated with war that its surveillance merits a separate discussion altogether. During armed conflicts, influenza outbreaks have, within a very short time, incapacitated entire regiments, affected troop readiness, overwhelmed medical facilities, and disrupted military operations and logistics—hence it is aptly called a war disease. The 1918 influenza pandemic (the Spanish flu) inflicted unprecedented death and devastation on armed forces around the world in the final stages of World War I. When another global war began, still-fresh memories of the pandemic prompted the US Army to establish a board for influenza control in 1941. It eventually became the tri-service Armed Forces Epidemiological Board (AFEB) in 1949, with separate commissions for influenza and other infectious diseases of military significance (Canas et al 2000).

With concern about an influenza outbreak looming over war-ravaged Europe, with its displaced populations and destroyed infrastructure, the WHO established the Global Influenza Surveillance Network (GISN)—later renamed the Global Influenza Surveillance and Response System (GISRS)—in 1952. Its main task was to identify and share the commonly circulating virus strains so that effective vaccines could be developed. The relatively low virulence and fatality rates of the next two pandemics—in 1957 and 1968—steadily diminished the overall threat perception of an influenza pandemic in Western countries. After World War II, the turn towards nuclear weapons and the consequent reduction in the role of conventional forces on the ground, along with regular vaccination campaigns in the armed forces, greatly diminished the perceived threat of influenza to military and conflict outcomes. While the WHO's GISN expanded its role in the 1957 and 1968 pandemics, the same period witnessed a steady withdrawal of the military from global influenza surveillance and control initiatives. Many such US military-led programmes were decommissioned; for instance, the AFEB was disbanded in 1972 (Watterson and Kamradt-Scott 2016). In 1976, the US Air Force once again began a laboratory-based influenza surveillance programme, called "Project Gargle," primarily oriented towards protecting the armed forces from influenza outbreaks (Canas et al 2000).

However, the scenario changed rapidly in the 1990s. In 1996, a dramatic and unusually contagious outbreak of human influenza, caused by a novel virus strain first isolated in China, occurred aboard a US Navy ship, severely incapacitating crew members and temporarily grounding the ship, despite more than 95% of the crew having been vaccinated (Earhart et al 2001). The following year, in 1997, Hong Kong witnessed the first known human outbreak of a highly virulent avian influenza or "bird flu" strain (HPAI A[H5N1]), which normally infects wild birds and domestic poultry. The outbreak, though small, was marked by an extremely high mortality rate (>30%), far exceeding that of common human influenza. These two alarming events underscored an urgent need to strengthen global influenza surveillance so as to rapidly identify newly emerging viral strains. In response, the US Air Force initiated the Global, Laboratory-based, Influenza Surveillance Program in 1997, as part of the GEIS, to improve, coordinate, and integrate influenza surveillance efforts. It was an expanded version of the earlier Project Gargle, built on existing networks of overseas laboratories and operating in concert with the existing influenza surveillance programmes of the CDC and the WHO (Canas et al 2000).

In 2003, the Asian "bird flu" strain re-emerged and spread widely through regular outbreaks among poultry and wild birds in Asia, Africa, and even Europe. Though the spillover to humans was small and sporadic, the virus was deemed to have pandemic potential. In the following few years, the securitisation of avian influenza reached its pinnacle. In 2007, the WHO released the World Health Report "A Safer Future," which identified an influenza pandemic as "the most feared security threat" (Elbe 2010; Watterson and Kamradt-Scott 2016). The US military influenza surveillance system underwent significant global expansion between 2005 and 2008 in response to the growing apprehension of an impending pandemic. In this period, the number of countries routinely submitting specimens almost tripled (from 24 to 72), with many from South East Asia—a region noted for its regular contribution to global influenza strain circulation.

WHO and Global Surveillance Network

From 2000 onwards, Western countries increasingly delegated their responsibilities for infectious disease control to the WHO, which emerged as the foremost international authority on global health security and surveillance (Davies 2008). In 2000, the WHO established the Global Outbreak Alert and Response Network (GOARN), a global network of partner institutions and agencies meant to integrate information and coordinate international responses to infectious disease outbreaks anywhere in the world. The GOARN continued to be funded largely by Western countries, like the US, the UK, Canada, countries of the European Union (EU), and Australia. In 1995, the World Health Assembly recommended that major changes be made to the International Health Regulations (IHR), first adopted in 1969, in light of emerging and re-emerging infectious diseases and increased international trade and travel. Particularly following the 2003 SARS outbreak, the IHR was significantly revised in 2005 to conform to the new standards of global surveillance. Now, any "illness or medical condition, irrespective of origin or source, that presents or could present significant harm to humans" must be notified by national governments to the WHO, a marked expansion from the earlier mandatory notification of only three diseases—plague, cholera, and yellow fever (WHO 2016: 1).

The US military surveillance networks, like the DoD-GEIS, function as key members of the GOARN, actively supporting and complementing the WHO in its coordination of global surveillance efforts on infectious diseases. Indeed, from the late 1990s, the WHO has advocated the integration of the existing national military laboratory networks of different countries into its global surveillance network (D'Amelio and Heymann 1998). Similarly, the US military also shares a close operational relationship with the WHO in global influenza surveillance—its overseas laboratories act as crucial, and often the only, sources of information on circulating virus strains for the WHO's GISRS. Its global influenza surveillance programme has, in fact, often been projected as an ideal and effective model for emulation by the WHO (Chretien et al 2006; Kelley 2009).

Civilian Biodefence

Interestingly, several key elements of civilian biodefence initiatives undertaken by successive US governments were found to be conveniently aligned with the financial interests of pharmaceutical giants as well as biotechnology start-ups.

Some of the most vocal advocates of civilian defence against bioterrorism in the 1990s were renowned and influential biological scientists, like Nobel Laureate Joshua Lederberg, Frank Young, and others, who served as scientific advisers to the Clinton administration at different official levels. Interestingly, they also maintained formal ties, as directors, trustees, board members, or scientific advisers, with many biotechnology and pharmaceutical start-ups of that period. Some of these firms, like Elusys Therapeutics, subsequently made huge profits from biodefence-related research contracts with different government agencies to develop drugs and vaccines (Wright 2004; Cooper 2006). In fact, as federal spending on civilian biodefence skyrocketed from 1999 to 2004 in order to fund expensive research programmes like Project Bioshield, it enabled the growth of a new industry of nascent biopharmaceutical start-ups. In 2005, they formed a corporate lobbying group, the Alliance for Biosecurity, to influence biodefence-related federal policies and legislation and to secure generous government funding for biodefence research.

In 2004, as the securitisation of avian influenza was picking up, the WHO explicitly recommended that countries stockpile antivirals in advance, given their limited availability. This triggered panic-buying by many governments around the world, which stockpiled millions of doses of the two anti-influenza drugs (neuraminidase inhibitors)—Oseltamivir (Tamiflu) and Zanamivir (Relenza)—whose clinical efficacies were yet to be established. A developing country like India, where deaths from tuberculosis far exceeded those from influenza, also jumped on the global bandwagon and increased its Tamiflu stock almost tenfold. It was later revealed that three scientists who drafted the WHO guidelines on influenza pandemic preparedness were also long-time paid consultants of two pharmaceutical giants, Roche and GSK, which manufacture these two anti-influenza drugs and, expectedly, made huge profits from the massive stockpiling (Cohen and Carter 2010).

In comparison with the anticipated "bird flu" (H5N1) pandemic, the "swine flu" (H1N1) pandemic of 2009–10 was similar to the usual seasonal influenza, and the number of deaths fell far short of WHO predictions. This resulted in an unnecessary and colossal waste of public expenditure, as stockpiled drugs, bought with already constrained national health budgets, lay unused in warehouses around the world. Similarly, the US Strategic National Stockpile ran out of its reserves during the 2020 COVID-19 pandemic, resulting in an acute shortage of protective masks, ventilators, and other critical medical necessities, because the allocated budget for the previous 10 years had mostly been spent on buying and storing anthrax vaccines from a single biopharmaceutical firm—Emergent Biosolutions—to prepare against future bioterrorist attacks (Hamby and Stolberg 2021).

Discussion

Two distinct emerging trends can be discerned, more conspicuously in the decades following the Cold War: first, the growing securitisation of certain infectious diseases; and second, the growing militarisation of surveillance and other aspects of public health and infectious disease control. Last but not least are the underlying financial interests of the biopharmaceutical industry influencing these developments. Though they run the risk of being conflated, these trends have their own independent and contingent historical trajectories, which may not always bear a direct and immediate relation to one another. However, in the long run, they often become entangled to varying extents. In fact, the 2014–15 Ebola epidemic in West Africa provides a recent, classic example of how the securitisation of a disease and the militarisation of the public health response can go hand in hand. In September 2014, the UNSC held a discussion on the Ebola epidemic in West Africa—only the second it had held on a disease (the first being HIV/AIDS in 2000)—and declared it "a threat to international peace and security." Subsequently, more than 5,000 military personnel from Western countries—the US, the UK, Canada, and others—were deployed to West Africa as part of the international Ebola response (Benton 2017).

The question can be raised of how big a threat infectious diseases actually pose to national security. The securitisation of infectious diseases has attracted a great deal of scholarly and policy attention over the last two decades. Many have argued that securitisation has proved instrumental in elevating particular public health issues to the top of national political and administrative agendas, imparting to them greater importance and attention, and thereby drawing more resources than traditional non-security issues command (Youde 2008).

However, many scholars have failed to establish, from historical precedents, a substantial empirical link between modern epidemics and national security, thus questioning the conceptual premise underpinning the securitisation of infectious diseases. For example, the 1918 influenza pandemic killed millions of people around the world within an extraordinarily short duration, claiming far more lives than the contemporaneous World War I; yet even in such turbulent circumstances, it caused minimal disruption to political stability anywhere. Similarly, the full-blown HIV/AIDS pandemic of the 1980s and 1990s did not precipitate any societal crisis or state collapse, not even in the badly affected "weak states" among poor African countries (Waal 2014).

Another concern has been about the long-term consequences of securitisation for both national and global public health. The securitisation of diseases inadvertently influences and misleads the prioritisation of public health issues. It suppresses real and urgent public health concerns and diverts attention and limited resources to national security agendas. While the much-dreaded avian influenza pandemic never actually arrived, bioterrorist attacks have historically been very rare, and the ensuing casualties very few (Headley 2018). The grossly skewed distribution of resources resulting from an undue fixation on a narrow but improbable set of infectious disease threats, like anthrax and plague (as biological agents) or avian influenza (as an EID), while ignoring more prevalent infectious diseases, can have deleterious effects on overall public health in the long term (Youde 2008). This has been aptly described as "public health in reverse" (Cohen et al 1999). Securitisation can also deepen historic distrust and divisions between first-world and third-world countries, and act as an impediment to international health cooperation in the realm of global public health (Youde 2008). In its efforts to develop a strong global surveillance network—GOARN/GISN—the WHO has been criticised for prioritising the interests of Western countries by protecting them from these outbreaks. In fact, the WHO, a supposedly "neutral" international actor, helped impart a "global character" to the specific concerns and agendas of developed countries and transpose them to developing countries that have very different public health needs and concerns (Davies 2008). For example, in 2006, at the height of the avian influenza pandemic scare, the Indonesian government decided to stop sharing its bird flu virus samples with the WHO after it found out that the samples, already shared unsuspectingly, were being indirectly passed on to biopharmaceutical firms in Western countries, to assist them in developing vaccines meant for use in their own countries, vaccines that were too costly for developing countries (Elbe 2010).
