AMERICA: Birthplace of the Internet
By Hannah Fuchs
It is vital to recall that the Internet, radar, Bluetooth, mobile phones and other foundational technologies were all US state-supported inventions. The US state helped cement global dominance in those emerging fields by funding research, an approach it now extends to AI technologies. The Internet itself was funded by the Advanced Research Projects Agency (ARPA), part of the Department of Defense and nowadays known as DARPA. Created by the Eisenhower administration in 1958, the agency helped spark a new industrial revolution. Most of DARPA’s spending is part of the Pentagon’s infamous black budget: for years the agency would not even confirm programme code names “or confirm estimates of the agency’s bottom line.” Since 2005, DARPA has been more transparent; in 2019, its enacted budget was $3.427 billion.
In 1956, the US computer scientist John McCarthy organised the Dartmouth Conference, where the term ‘Artificial Intelligence’ was first adopted. Since then, researchers have tried to build intelligent machines but have largely failed, partly because the large amounts of data required were unavailable: until the early 1990s, computers lacked the capacity to store and process the data needed.
Over the past few decades, research and development of many AI technologies in the US has moved from state-funded projects to private companies that benefit from the positive feedback loop of having exclusive access to a vast amount of data. The US state does little to regulate the use of that data (and what regulation exists is splintered across hundreds of individual state and federal laws) and is lax in antitrust enforcement, making it easy for these conglomerates to simply buy out their smaller competitors. It is nearly impossible for start-ups to compete with Amazon, Google or Facebook. Sooner or later, they will receive an inquiry about selling their company.
The lack of state intervention (i.e. regulation to protect private citizens) and the power held by US companies using AI demonstrate the ethical risk of a positive feedback loop: the more data a company gathers, the better its products become, attracting still more users and data until a monopoly forms. The company can then drive down prices and eliminate competition. This concentration of power feeds into social inequality. It is therefore important to foster a diverse AI industry in which small start-ups have a chance to compete on the market.
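The compounding nature of this loop can be made concrete with a toy simulation (an illustrative model of our own, not drawn from any study): two platforms start almost level, users generate data in proportion to market share, and product quality is assumed to grow superlinearly with accumulated data. Even a tiny head start then snowballs.

```python
# Toy model of the data feedback loop: more users -> more data ->
# better product -> more users. All numbers are illustrative assumptions.

def simulate(rounds=30, users=1000, head_start=0.02, exponent=1.5):
    """Return final market shares of platforms A and B."""
    share_a = 0.5 + head_start          # A begins with a small edge
    share_b = 1.0 - share_a
    data_a = data_b = 0.0
    for _ in range(rounds):
        # Users generate data in proportion to market share.
        data_a += share_a * users
        data_b += share_b * users
        # Assumed: quality grows superlinearly with accumulated data,
        # so the data-rich platform improves disproportionately fast.
        quality_a = data_a ** exponent
        quality_b = data_b ** exponent
        total = quality_a + quality_b
        share_a, share_b = quality_a / total, quality_b / total
    return share_a, share_b

a, b = simulate()
print(f"After 30 rounds: A holds {a:.0%} of the market, B holds {b:.0%}")
```

With a linear relationship (exponent = 1) the initial shares simply persist; the runaway concentration described above appears only once quality compounds faster than data accumulates, which is the hedged assumption here.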
The rise of the ‘gig economy’ and of ‘gig workers’ has become a real issue in the US. Gig workers are not a strictly defined category, so assessments of their number vary. Generally speaking, gig workers are freelance workers who increasingly work in the platform economy, for example for Uber, Lyft or food delivery companies. Because those companies offer a platform where supply and demand meet, they do not see themselves as conventional employers and shirk their responsibility to provide platform workers with health insurance, a minimum wage and paid leave. Gig workers also often do not receive unemployment insurance from the government, as their status (whether they are employees or freelancers) is ill-defined. They find themselves in a grey zone with very few rights. In response, the state of California passed a law at the end of 2019 that classifies workers in the platform economy as employees, enabling them to receive valuable workers’ benefits. With artificial intelligence, the traditional employer-employee relationship starts to blur: when apps, or algorithms, make the decisions, who is accountable for their impacts?
Policymakers need to understand the economic disruption that comes with AI. Rather than looking away, the most beneficial outcome for society is for national as well as regional governments to recognise new economic opportunities and proactively create jobs that allow employees to work with AI rather than be replaced by it. Innovations often come with economic disruption. The goal for government should not be mass unemployment but mass redeployment.
Governments also need to ensure that people do not end up in grey zones with few rights. Instead, as mentioned earlier, a minimum wage, health insurance, a pension and paid leave should all be guaranteed for so-called ‘gig workers’. The definition of this group needs to be revised regularly given the fast changes happening in this industry.
The UK government needs to ensure that monopolies are properly identified and regulated so that a healthy competitive environment can exist alongside workers’ rights. This can be done, for example, by institutionalising a wide range of easily accessible, low-threshold funding opportunities for start-ups, and by redistributing the resources of large tech companies that already have access to an extraordinarily large pool of data. Those resources can then be used to retrain people, create new job sectors for those whose jobs are at risk, and invest in smaller start-ups.
In an ever more globalised world, there is a great need for international cooperation on taxing tech companies to reduce inequality on a national as well as an international level (e.g. avoiding tax havens).
Currently, the US remains the global leader in AI, especially in chip development. China is well aware of its dependence on US-developed chips and is eager to catch up in that field, spurred on by the US-China trade war and by accusations against Huawei, a Chinese telecommunications company, alleging that it helps the Chinese government steal foreign AI technology and allows Chinese intelligence agencies to use its telecommunication networks to spy on foreign countries. A recent study indicates that the US will fall behind China within five to ten years in innovation, implementation and investment in AI; in the same study, the UK ranks third, behind China. Research on the US AI workforce by the Center for Security and Emerging Technology finds that more US companies are moving their AI research and development (R&D) abroad and that 70% of computer scientists studying in the US were born abroad.
In light of Brexit, the UK government will need to reconsider its immigration policies and create an inclusive environment for both national and international students. It should encourage UK residents as well as international students to enrol in STEM programmes, improving diversity in the industry and ensuring that the sector reflects the makeup of the general population.
As China catches up in AI research and development, the lead the US has is slowly being eroded. Experts and scientists have criticised this development, urging the US government to take action. In response, in 2019, President Donald Trump issued an executive order making AI research and development a national priority. Part of that is a budget proposal intending to raise DARPA’s spending on AI research and development from its current level of $50 million to $249 million. The proposal also includes a budget increase for the National Science Foundation’s AI work from $500 million to $850 million. For many years, the US government has lost highly qualified data scientists and start-ups not only to China but also to private US-based companies such as Google. With the overall $4.8 trillion budget proposal, the US government aims to attract well-trained scientists and drive research in a direction that benefits national security and other public areas such as energy.
The US government struggles to keep on top of US-based tech companies such as Google, Facebook and Amazon. Most innovation has happened in the private sector, and the US government does not seem to understand the consequences of AI well enough to regulate it accordingly. At the moment, it looks as if AI is regulating the government rather than vice versa. Governments need to understand AI and its impacts as fully as possible in order to promote and regulate it effectively at the same time.
While the US government’s R&D has lagged behind the private sector, it is willing to cooperate with that sector, which raises questions about human and civil rights. For example, the technology company Palantir Technologies provides AI database management to Immigration and Customs Enforcement (ICE), part of the Department of Homeland Security (DHS), to record, target and detain undocumented immigrants. This was made possible by cooperation with the FBI, drawing on an unprecedented surveillance infrastructure based on facial recognition technology.
This raises the question of how far governments may use such power against their own citizens. Does the vast majority of undocumented immigrants really represent a threat to national security? Or have undocumented immigrants in fact contributed to their communities while seeking a better life? Eventually, similar AI-based procedures will most likely be used on US residents as well, raising questions about the transparency and legality of their use. There is a risk that people’s lives will be heavily shaped by opaque algorithmic calculations rather than by their own choices.
When such significant decisions about people’s lives are made, they should not be based solely on algorithmic determinations. Instead, decisions should also rest on ethical and moral positions that are transparent enough for third parties, such as civil rights organisations, to ask questions and receive answers. Public institutions should also set out the premises on which their decisions were made, ensuring transparency and giving the public the opportunity to understand and question those decisions.
Eventually, consistent with the model in existing western liberal democracies, the US ought to move towards a model where the ethical boundaries surrounding the applications of AI are representative of the general public. Whether it can achieve that goal with its existing political system is another matter.
EAST ASIA: POWERFUL STATES
Singapore: The Lion City
Even though Singapore might not be at the top of the list when we think about AI, the city-state offers examples of how states can engage productively with the rise of AI. Like the US and China, Singapore understood early on that AI could be a key driver of economic growth. In 2017, the Singaporean government set up a national programme to invest $150 million in AI over the following five years, focused on three key sectors: finance, city management and healthcare.
One year later, in 2018, an advisory council for the government was established, led by former Attorney General Vijaya Kumar Rajah. Its purpose is to advise the government on AI and work together with the ethics boards of businesses. Leaders from Google, Microsoft and Alibaba are part of the advisory council. The potential economic gains from AI can conflict with the need for independent and universal ethical standards.
In November 2019, the government expanded the scope of its focus on AI and defined five key sectors: transport; smart cities; healthcare; education; and safety and security. Part of its national AI strategy is the Model AI Governance Framework, whose second edition was launched in January 2020. The framework aims to democratise AI technologies and their use through four principles:
- Ensuring transparent internal governance structures and regular staff training within organisations
- Determining the level of human involvement in AI-augmented decision-making so that organisations minimise the risk of harm to individuals
- Minimising their biases in data and models
- Communicating openly and accessibly with their stakeholders and allowing feedback
Even though it remains an open question how effectively this framework will be implemented, the principles should be adopted by all public and private organisations to ensure AI is used in a transparent and democratic way. Amid success driven by AI, ethical guidelines can easily be overlooked by private companies and governments alike.
China: from an agrarian state to the United States’ biggest competitor in AI
Well into the latter part of the 20th century, China was still considered an agrarian state. Today, China is the world’s largest producer of digital data, and its lead is widening daily. While China is well aware that competing in AI requires highly educated technical talent, it also knows that adding highly educated data scientists yields diminishing returns beyond a certain threshold. Past that point, data makes all the difference, as it is the fundamental component without which AI could not exist in the first place.
According to Kai-Fu Lee, Founder, Chairman, and CEO of Sinovation Ventures, and the former president of Google China as well as executive at Microsoft, SGI, and Apple, China identified the four components to be successful in AI: entrepreneurs, enormous amounts of data, highly educated AI engineers, and a government that is eager to support and use AI technology.
China has more internet users than the US and Europe combined. It also started earlier in collecting high-quality data, which is more useful for creating AI-driven products. ‘Qualitative’ data here means information collected from the real world: physical purchases, meals, makeovers, transportation, and so on. The greater the amount and the more wide-ranging the data, the better the data-fuelled algorithms and models for future products will be.
The mobile app WeChat is one example of a successful Chinese product built on a data ecosystem. Because WeChat touches almost every part of Chinese life, it constitutes an ecosystem with a powerful network effect: the app is so tightly interwoven with day-to-day activities that it is difficult for a Chinese resident not to be part of the network. And because WeChat’s many users share data across so many areas of life, it can build on that information to further improve its services and its targeted advertising.
Within just five years of launch, WeChat became a data powerhouse and, critics say, a tool for “remote control” of people’s lives: it enables messaging, media, marketing, gaming, payments at restaurants and to your taxi driver, unlocking shared bikes, managing investments, booking GP appointments and having prescriptions delivered to your door.
During the Covid-19 lockdown in China, the State Council introduced an app-based rating system together with two major tech firms, Alibaba and Tencent, to control people’s movements during the outbreak. People log their recent locations and health statuses, and the app links all entries together. Based on that information, users receive coloured badges (green, yellow and red) and a QR code to show when, for example, they enter a building. Tencent and Alibaba appreciate the traffic on their systems but claim to have no access to personal data. This use case illustrates one of the potential drawbacks of AI: opaque decision-making. The Wall Street Journal reported on the case of a man who was assigned a red badge despite following all instructions. Additionally, the system’s core operation is managed by the Chinese government, which means the state now has (in China’s case, virtually uninhibited) access to this powerful tool. For example, the provincial Hangzhou government accused 16 people of lying about their health conditions and immediately gave them red badges.
Because people use WeChat for every aspect of their lives, all their behaviours, patterns and choices are recorded and centralised on the app. Every move, every decision, even every thought typed out and sent to friends, will be stored and used in unknown ways by unknown people. That extreme centralisation of data about people’s lives in one place is unique and creates a positive feedback loop: more services offered in one single place or app leads to more centralised data, which leads to better products, which leads to more users, and so on.
The rise of WeChat, an app with an enormous ecosystem, also spearheaded e-commerce in China. Targeting customers became effective and efficient because all their data was already provided through other applications within WeChat. China’s digital transaction value in 2019 amounted to $1,595,513 million, compared with $152,897 million in the UK. China’s mobile transaction penetration rate is higher than in any other country (35%, against 6.6% in the US): 35% of people using a mobile phone also pay with it, meaning about half a billion people in China make their daily purchases by phone. Their average annual transaction value ($1,662) is lower than that of the US ($2,993) or the UK ($2,464), but given that Chinese incomes are significantly lower than Western ones in absolute terms, these figures are all the more striking. They reflect the growing trend in China to pay for all daily purchases, no matter how small, with a mobile app. This development leaves behind an enormous digital footprint of everyday behaviour that is stored, centralised and thus made available to apps beholden to the Chinese state.
WeChat uses people’s everyday data for extremely targeted advertising. How can WeChat users make critical, informed decisions if they are recommended products and services that seem plausible to them? And who designs the algorithms that offer exactly the product you supposedly have been looking for? The growing shift from contextual to behavioural targeting subtly and continuously shows people certain products, services, news headlines or bargains over and over, influencing the way they perceive the world and make decisions. Global conglomerates can dominate the market by paying enough money and using their large sets of available data as barriers to entry for start-ups. At some point, smaller companies will most likely be unable to compete at all unless regulation cuts off this cycle of data collection and competitive entrenchment.
In 2014/15, China became a real competitor to Silicon Valley through its mass innovation campaign, focussing intensively on the following policy areas:
- The state started directly subsidising technology entrepreneurs.
- Public venture capital funding jumped tremendously and became almost equal to the US in 2018. Kai-Fu Lee points out that in America, people predominantly believe in private rather than public venture capital, as they tend to believe the latter is highly inefficient.
- The establishment of entire cities focussing on AI. While the direction originated from the central government, ambitious mayors implemented the strategy widely. They aimed to establish their towns as centres for AI by investing in local AI companies, offering research grants, opening AI training institutes, providing free company shuttles, and securing school places and special accommodation for people working in the AI industry.
- The number of technology incubators was rapidly increased. “Entrepreneurship zones” were created and government-backed funds were launched to attract more private venture capital. The government also granted tax incentives to people and businesses working in the technology sector and generally made it easier to start a business.
Even though this sounds like a promising and successful strategy, how far can a government go in directing national industrial strategy? The scenario above risks creating a two-class society: those who work in AI and those who do not. While these policies should inform the UK’s AI policy strategy, policymakers will have to ask themselves how far they can incentivise one area without groundlessly disadvantaging people at the same time.
Online to offline revolution
The development outlined above, driven by the rise of WeChat, is part of a broader shift within the AI revolution: the online-to-offline, or O2O, revolution, in which online and offline merge until there is no longer any differentiation between the two. The US introduced the first transformational O2O model, ride-sharing, thanks to Uber and Lyft. China quickly copied that model with Didi Chuxing and adapted it to local conditions. WeChat then accelerated the O2O trend: an increasing number of the activities you do offline are managed online in one single app, offering all the services you need and thereby transforming the data environment.
As noted, WeChat centralises its data gathering on consumption patterns and personal habits. That ecosystem differentiates China from the US, where companies split their services across different platforms rather than centralising them. Facebook is a US example: it splits its services into the Facebook app, Facebook Messenger, WhatsApp and Instagram, and even has a separate app for managing pages and groups. All of these platforms appear independent, yet all are owned by Facebook. Yelp, by contrast, bought Eat24, a food delivery platform, in an attempt to follow the example of Chinese companies. However, it failed to properly fuse all of the logistical services onto one platform as Chinese companies do: restaurants still had to handle deliveries themselves, which gave them little incentive to join Eat24, and the business never succeeded.
China was also able to catch up with the US so quickly because AI researchers around the world are relatively open about sharing their data, algorithms and results with the public. Open platforms (Wikipedia being a well-known example) have become more and more popular, and publicly available knowledge fosters competition on an international level. China put that to good use and proved a serious competitor in the field.
Another central component for AI is chip development (e.g. for facial recognition or self-driving cars). Even though Silicon Valley remains the clear leader in AI chip development, Chinese cities have become AI development hubs due to the following supportive policies:
- Easily accessible subsidies for research
- Venture capital funding and grants for AI companies
- Government contracts promising to buy products and services developed in local AI cities
- AI incubators
- AI training institutes
- Clear schemes to set up and register a company
The measures taken by the Chinese government raise questions about how independently firms can really operate. For example, an official statement laid out that government representatives would be assigned to 100 big tech companies, including Alibaba, to strengthen government relations and information exchange. It is not clear, though, to what extent the Chinese government controls these companies at the management level. This raises issues of democracy and transparency.
While the government does play a crucial role in helping start-ups succeed, the public has a right to know by whom AI companies are funded and supported. Knowing who controls the data collection and who builds the algorithms is essential for ethical AI practice.
One particular categorisation of AI splits it into four types. First, Internet AI uses data to build algorithms that develop recommendations for users, as seen with YouTube videos and Spotify songs. Second, companies use business AI to learn more about their customers and improve their services: for example, banks give out loans, insurance companies sell policies, and supply chains and inventories are optimised based on structured data that reveals certain patterns. Here the US is the clear leader, with companies specialising in helping other businesses improve their services through AI software; China has so far lagged behind. Third, perception AI digitises the physical world and how we perceive and experience it, turning our daily routines, behaviour and conversations into data sets via deep learning algorithms that can then be used in a wide variety of ways; examples are Alexa, Siri and the leading speech recognition company iFlyTek from China. Fourth and finally, autonomous AI is slowly developing, in the form of self-driving cars, autonomous drones and intelligent robots.
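The first of these types, Internet AI, is at heart recommendation from interaction data. A minimal sketch of the underlying idea, user-based collaborative filtering, is below; all user names and viewing data are hypothetical, and real services layer far more sophisticated models on top.

```python
# Minimal sketch of "Internet AI": recommending an unwatched item based
# on what similar users watched. Users and data are purely illustrative.
from math import sqrt

# Each user's interactions with five hypothetical videos (1 = watched).
interactions = {
    "alice": [1, 1, 0, 0, 1],
    "bob":   [1, 0, 0, 1, 1],
    "carol": [0, 1, 1, 0, 0],
}

def cosine(u, v):
    """Cosine similarity between two interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user, data):
    """Return the index of the unwatched item favoured by similar users."""
    scores = [0.0] * len(data[user])
    for other, vec in data.items():
        if other == user:
            continue
        sim = cosine(data[user], vec)
        for i, watched in enumerate(vec):
            # Credit items the other user watched but this user has not.
            if watched and not data[user][i]:
                scores[i] += sim
    return max(range(len(scores)), key=lambda i: scores[i])

print(recommend("alice", interactions))  # -> 3 (bob, most similar to alice, watched item 3)
```

The point of the sketch is the dependence on data volume: the more interaction data the platform holds, the sharper its similarity estimates, and hence its recommendations, become.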
Ethics plays an especially crucial role in perception and autonomous AI. Gathering data in public spaces raises questions such as: how do people give consent? Who can use that data, and for what purposes? How would checks and balances work? Among Western countries, the UK is already a leader in public surveillance through the number of closed-circuit television (CCTV) cameras in public spaces feeding information into facial recognition software. The system has attracted heavy criticism over the years, including a 2018 judgement by the European Court of Human Rights ruling that the way the data was collected was unclear and therefore violated human rights. After Beijing, London is the city with the highest number of CCTV cameras, around 420,000 in 2019.
China is also advanced in autonomous drone production. The world’s leading consumer drone maker, DJI, is based in Shenzhen and holds an estimated 70% global market share. The US has become sceptical about using DJI drones for government purposes due to security risks and the company’s alleged links to the Chinese government.
In general, Kai-Fu Lee argues that the US and China take different approaches to entering markets. The US is a “perfectionist”, working on a product in Silicon Valley until it is nearly flawless before rolling it out around the world as a “one size fits all” product. China, on the other hand, takes a more diversified approach, investing in dispersed small local start-ups around the world, adapting the product’s algorithms with local data and tailoring it to local circumstances.
China has caught up in AI at an extreme pace, but that is not to say that all AI developments in China have been good. For example, the Chinese national police use facial recognition technologies to target Uighurs, a minority group in China. In 2019, The New York Times reported that “Almost two dozen police departments in 16 different provinces and regions across China sought such technology beginning in 2018, according to procurement documents. Law enforcement from the central province of Shaanxi, for example, aimed to acquire a smart camera system last year that ‘should support facial recognition to identify Uighur/non-Uighur attributes.’” While this is an example of discrimination, the Chinese start-up CloudWalk openly advertises that its surveillance system can “identify sensitive groups of people”. As Clare Garvie, an associate at the Center on Privacy and Technology at the Georgetown University Law Center, states: “If you make a technology that can classify people by an ethnicity, someone will use it to repress that ethnicity.”
While China’s catch-up in AI reflects a holistic approach, driving development on multiple levels and with incredible speed, its use of the power it has gained through AI remains questionable. Authoritarian states have an advantage in collecting data, as they face fewer legal constraints. Gregory Allen, a political scientist and Chief of Strategy and Communications at the DoD Joint Artificial Intelligence Center, says that “essentially all major technology firms in China cooperate extensively with China’s military and state security services and are legally required to do so. Article 7 of China’s National Intelligence Law gives the government legal authority to compel such assistance, though the government also has powerful non-coercive tools to incentivize cooperation”.
The UK will have to consider how to interact with China on AI matters:
- The private as well as the public sector need to stay aware of AI developments in China, knowing that companies collaborate closely with the Chinese government. This means scrutinising the import and export of products and services to and from China, but also to and from other states known to have been supported by Chinese AI companies.
- The UK should have clear standards on human rights violations. While it is crucial to maintain diplomatic relationships, the UK should speak up openly about human rights abuses.
- With the right set of policies fostering AI R&D, wealth distribution and support for start-ups and small and medium-sized enterprises (SMEs), the UK should cooperate with other nations and multilateral institutions to establish a level playing field that allows fair competition and protects human rights.
- In pursuing the above, the UK should aim to avoid the kind of trade or proxy war with undemocratic states that the US has been waging with China.
AI IN FOREIGN POLICY
Artificial intelligence is not used for commercial purposes alone: like the mobile phone, Bluetooth, GPS and the Internet, many AI technologies are inventions massively funded by and for the US military.
The uses of AI in foreign policy have been recognised and actively exploited by a few states for decades. Just as companies can become monopolies in AI thanks to an extremely large pool of data, so too can world powers shift, vacuums be created and new tools of foreign policy be established.
Democracy, sovereignty and ethics
China has become a significant investor in American start-ups working on technologies with potential military applications. These start-ups focus, for example, on rocket engines for spacecraft, sensors for autonomous ships, and printers that make flexible screens for fighter-plane cockpits. Many of these Chinese investor companies are state-owned or have connections to Chinese leaders. Not only does that give Chinese investors significant control over a start-up’s decision-making and direction, it also opens the door to intelligence gathering and to obtaining the technologies themselves. Where military applications are concerned, this can become critical.
Foreign direct investment (FDI) should not be underestimated as a foreign policy tool. In terms of the ethical use of AI, the public of one country (here, the US) often does not know, yet has a right to know, that the company receiving their data is owned by a foreign government (here, the Chinese government).
Because data can be used in such a wide variety of ways and the line of when AI becomes harmful is often blurry, it is extremely important to oversee who collects the data and how it is used.
Policymakers and public servants are obliged to work in the public interest. If foreign states decide to interfere through FDI and have the ability to shift foreign policy tools without the oversight of domestic policymakers, then this raises questions on the ethical use of AI, and the undermining of public trust and democratic processes.
The UK’s foremost AI company, DeepMind, has some of the world’s leading AI scientists. In 2014, it was acquired by Google.
The UK government should invest in and protect independent and UK-based AI companies in order to increase their global competitiveness.
It should also remain in control of how the wealth generated through these companies is distributed throughout British society. After all, AI companies generate their wealth through taxpayers’ data.
Large AI companies need to be taxed to ensure a competitive market: first, to ensure users are “paid” enough for the data they supply to AI companies, and second, to counteract the monopolies created through the positive feedback loop introduced earlier and thereby support a wide variety of AI start-ups. This payment would ideally take the form either of market-determined negative prices or of data, to ensure a level playing field (see Conclusions: Policy Proposals).
Hacking foreign democracies poses another threat in foreign policy. Russia interfered in the 2016 US elections, and in 2014 Chinese hackers stole the files of 22 million people from the US government’s Office of Personnel Management. China could now use this well-structured data to build algorithms for a wide variety of purposes, thereby strengthening its cyberwarfare capabilities. In the 2016 US presidential election, bots were able to alter entire national public debates and change people’s opinions (see AI and Disinformation). During the 2020 US presidential election campaign, candidate Joe Biden faced attempts by Chinese hackers to access the personal emails of his campaign staff. Bots can work 24/7 and can process data, as well as develop content, far more efficiently than humans. That an entire democratic system can be undermined so easily by such attacks shows how vulnerable societies are, and how urgently states need to protect the essential principle by which societies live together: democracy.
This new form of AI-enabled attack is called cognitive hacking, “a form of attack that seeks to manipulate people’s perceptions and behavior, takes place on a diverse set of platforms, including social media and new forms of traditional news channels”. Cognitive security, on the other hand, aims to defend against such attacks. Cognitive hacking abuses large numbers of innocent people and their data, enlisting them in the operations of foreign states without their knowledge.
In 2017, Google signed a contract with the US Department of Defense (DoD) for a military project called ‘Project Maven’, which deploys AI to “automatically label images, buildings, and other objects captured by cameras on drones, helping [US] Air Force analysts identify unique targets.” It is an attempt to incorporate AI into battlefield technology. When the contract became public, Google employees protested: some quit their jobs, while others started a petition urging Google to distance itself from warfare technology and cancel the contract. At first, Google tried to play down the significance of the contract, saying it was “only” a $9 million project. However, it was soon revealed that the Project Maven contract was worth around $250 million a year. In June 2018, Google announced it would not renew the contract when it expired in March 2019.
Employees should have the right to opt out easily of projects that go against their moral beliefs, without disadvantage to their careers. This also raises the question of how strictly the lines between private companies and the government should be drawn. Is Google allowed to share its users’ data with the Pentagon without the users’ consent? If so, how many third parties may access that data, and for what purposes? Or should Google not be able to share the data at all?
Google is also an international company with offices across the world, which entails two kinds of risk. First and foremost, Google collects data from countries all over the world. Despite GDPR rules, there is leeway for Google to use data from its users in foreign countries for US military purposes. The second risk is that classified information could fall into the wrong hands outside the US, especially when the employees working on a project never intended to work on military projects and may therefore be more willing to leak information.
To foster cooperation at the multilateral level, in May 2019 the OECD published its AI Principles and launched its AI Observatory, outlining principles and recommendations for governments to develop a level playing field. All 36 OECD countries, together with a few others, have signed the document. Notably, the US signed the principles under President Trump, who has repeatedly expressed animosity towards international cooperation. China and Russia are party only to a consensus agreement stating they will support the efforts more broadly.
Lethal Autonomous Weapons Systems
An emerging and growing part of the foreign and defence policy discussion is lethal autonomous weapons systems (LAWS). In 1988, a US guided-missile cruiser shot down an Iranian passenger jet over the Persian Gulf, killing all 290 people on board. Even though the plane gave every indication of being a civilian airliner, the cruiser’s Aegis system, programmed to target Soviet bombers, misidentified it. Nobody in the Aegis crew challenged the decision, and so the crew passively authorised the firing of the missile. A more recent example of LAWS is the Israeli drone ‘Harpy’. It can loiter high in the air, observing a large radius of ground; when it detects an enemy radar signal, it crashes into the radar’s location, destroying itself and everything around it.
When it comes to the fundamental question of life or death, it is questionable whether we really want to give a machine full authority over that decision. In war, actions are time-sensitive, and some might argue that these machines can take into account more information, at a faster pace, than any human ever could. But the decision to take people’s lives goes beyond purely rational calculation. War has become more complex, and it has become more difficult to differentiate between civilians, enemies and allies. AI-operated weapons are based on data, but what if that data is insufficient? What if things changed just a day or an hour ago and the programme is operating under false premises? Another argument against LAWS concerns responsibility and accountability. Who is responsible if something goes wrong and innocent people die? The scientist who programmed the system? The commander who decided to use the weapon? The government department that bought it? If responsibility diffuses and the risk of being held accountable decreases, decisions to kill may be made with less questioning. Lastly, because LAWS can be deployed with less risk to military personnel, their proliferation might lower the bar for conflict.
Following the moral arguments for banning LAWS, activists, over 110 non-governmental groups, the European Parliament, 26 Nobel Prize winners, more than 4,500 AI scientists and 30 countries have joined a global campaign calling on the UN to ban LAWS. However, the governments that drive the development of LAWS and profit from it have so far voted against a ban, which needs a unanimous vote in order to pass at the UN.
In the end, machines differ from humans in that they have no moral compass. Morality itself is so complex and diverse that no machine can be programmed with one. Guilt, shame, empathy, and a feeling of responsibility and accountability are attributes that arise in a person when they do, see, or decide certain things. It is very unlikely that machines, algorithms, or complex software will ever replace these powerful human emotions. That is why it is so important not to give machines the power to make final decisions, nor to shirk from making decisions and taking actions just because a machine has chosen a course. An open, public discourse about morality in all aspects of life should take place to understand the distinctiveness of humans and their difference from machines.
Bibliography
“AI Policy – Singapore.” Future of Life Institute. Accessed June 2, 2020. https://futureoflife.org/ai-policy-singapore/.
“Artificial Intelligence.” Infocomm Media Development Authority, January 22, 2020. https://www.imda.gov.sg/AI.
Arnett, Eric H. “Welcome to Hyperwar.” The Bulletin 48, no. 7 (September 1992): 14–22.
Buchholz, Katharina. “China’s Mobile Payment Adoption Beats All Others.” Statista, May 7, 2019. https://www.statista.com/chart/17909/pos-mobile-payment-user-penetration-rates/.
Kasapoğlu, Can, and Bariș Kirdemir. “Artificial Intelligence and the Future of Conflict.” Essay. In New Perspectives on Shared Security: NATO’s Next 70 Years. Accessed June 3, 2020. https://carnegieeurope.eu/2019/11/28/artificial-intelligence-and-future-of-conflict-pub-80421.
Clement, J. “Number of Internet Users in Selected Countries 2019.” Statista, January 7, 2020. https://www.statista.com/statistics/262966/number-of-internet-users-in-selected-countries/.
Conger, Kate, and Noam Schreiber. “California Bill Makes App-Based Companies Treat Workers as Employees.” The New York Times, September 11, 2019. https://www.nytimes.com/2019/09/11/technology/california-gig-economy-bill.html.
Cussins Newman, Jessica. “A Global Reference Point for AI Governance.” Essay. In AI Governance in 2019: A Year in Review. Accessed June 3, 2020. https://www.aigovernancereview.com.
Department of Defense. “Defense Advanced Research Projects Agency – Budget.” Accessed June 7, 2020. https://www.darpa.mil/about-us/budget.
“Digital Payments – China: Statista Market Forecast.” Statista. Accessed June 7, 2020. https://www.statista.com/outlook/296/117/digital-payments/china.
Elstrom, Peter. “China’s Venture Capital Boom Shows Signs of Turning Into a Bust.” Bloomberg, July 9, 2019. https://www.bloomberg.com/news/articles/2019-07-09/china-s-venture-capital-boom-shows-signs-of-turning-into-a-bust.
Fang, Lee. “Google Hedges on Promise to End Controversial Involvement in Military Drone Control.” The Intercept, March 1, 2019. https://theintercept.com/2019/03/01/google-project-maven-contract/.
Franke, Ulrike. “Harnessing Artificial Intelligence.” European Council on Foreign Relations, June 25, 2019. https://www.ecfr.eu/publications/summary/harnessing_artificial_intelligence.
Galston, William A. “Why the Government Must Help Shape the Future of AI.” Brookings, October 25, 2019. https://www.brookings.edu/research/why-the-government-must-help-shape-the-future-of-ai/.
Gansky, Ben, Michael Martin, and Ganesh Sitaraman. “Artificial Intelligence Is Too Important to Leave to Google and Facebook Alone.” The New York Times, November 10, 2019. https://www.nytimes.com/2019/11/10/opinion/artificial-intelligence-facebook-google.html.
Harwell, Drew. “FBI, ICE Find State Driver’s License Photos Are a Gold Mine for Facial-Recognition Searches.” The Washington Post, July 7, 2019. https://www.washingtonpost.com/technology/2019/07/07/fbi-ice-find-state-drivers-license-photos-are-gold-mine-facial-recognition-searches/.
Kessel, Jonah M. “Killer Robots Aren’t Regulated. Yet.” The New York Times, December 13, 2019. https://www.nytimes.com/2019/12/13/technology/autonomous-weapons-video.html.
Lin, Chia Jie. “Singapore Sets up AI Ethics Council.” GovInsider, June 7, 2018. https://govinsider.asia/innovation/singapore-sets-ai-ethics-council/.
Lin, Liza. “China Turns to Health-Rating Apps to Control Movements During Coronavirus Outbreak.” The Wall Street Journal, February 18, 2020. https://www.wsj.com/articles/china-turns-to-health-rating-apps-to-control-movements-during-coronavirus-outbreak-11582046508.
Lucas, Louise. “China Government Assigns Officials to Companies Including Alibaba.” Financial Times, September 23, 2019. https://www.ft.com/content/055a1864-ddd3-11e9-b112-9624ec9edc59.
McGee, Patrick. “How the Commercial Drone Market Became Big Business.” Financial Times, November 27, 2019. https://www.ft.com/content/cbd0d81a-0d40-11ea-bb52-34c8d9dc6d84.
Metz, Cade. “White House Earmarks New Money for A.I. and Quantum Computing.” The New York Times, February 10, 2020. https://www.nytimes.com/2020/02/10/technology/white-house-earmarks-new-money-for-ai-and-quantum-computing.html.
Mozur, Paul. “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority.” The New York Times, April 14, 2019. https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html.
Mozur, Paul, and Jane Perlez. “China Bets on Sensitive U.S. Start-Ups, Worrying the Pentagon.” The New York Times, March 22, 2017. https://www.nytimes.com/2017/03/22/technology/china-defense-start-ups.html.
Nuttal, Chris. “London Sets Standard for Surveillance Societies.” Financial Times, August 1, 2019. https://www.ft.com/content/70b35f8a-b47f-11e9-bec9-fdcab53d6959.
Ray, Shaan. “History of AI.” Towards Data Science (blog). Medium, August 11, 2018. https://towardsdatascience.com/history-of-ai-484a86fc16ef.
Rosenbach, Eric, and Katherine Mansted. “How to Win the Battle Over Data.” Foreign Affairs, September 17, 2019. https://www.foreignaffairs.com/articles/2019-09-17/how-win-battle-over-data.
Sanger, David E., and Nicole Perlroth. “Chinese Hackers Target Email Accounts of Biden Campaign Staff, Google Says.” The New York Times, June 4, 2020. https://www.nytimes.com/2020/06/04/us/politics/china-joe-biden-hackers.html.
“Some Aspects of UK Surveillance Regimes Violate Convention.” European Court of Human Rights, September 13, 2018. https://hudoc.echr.coe.int/eng-press#{"sort":["kpdate%20Descending"],"itemid":["003-6187848-8026299"]}.
“The Campaign To Stop Killer Robots.” The Campaign To Stop Killer Robots, 2020. https://www.stopkillerrobots.org/.
“The Global AI Index.” Tortoise. Accessed June 1, 2020. https://www.tortoisemedia.com/intelligence/ai/.
“The War Against Immigrants – Trump’s Tech Tools Powered by Palantir.” Report. Mijente, August 2019. https://mijente.net/wp-content/uploads/2019/08/Mijente-The-War-Against-Immigrants_-Trumps-Tech-Tools-Powered-by-Palantir_.pdf.
Wheeler, Tom. “History’s Message about Regulating AI.” Report. Brookings, October 31, 2019. https://www.brookings.edu/research/historys-message-about-regulating-ai/.
Wu, Tim. “America’s Risky Approach to Artificial Intelligence.” The New York Times, October 7, 2019. https://www.nytimes.com/2019/10/07/opinion/ai-research-funding.html.
Yang, Yuan. “The Chinese Internet Boom in Charts.” Financial Times, August 21, 2018. https://www.ft.com/content/ef80e27c-a500-11e8-8ecf-a7ae1beff35b.
Zwetsloot, Remco, Roxanne Heston, and Zachary Arnold. “Strengthening the U.S. AI Workforce.” Report. Center for Security and Emerging Technology, September 2019. https://cset.georgetown.edu/wp-content/uploads/CSET_U.S._AI_Workforce.pdf.