A great necessity and complexity for businesses
We cannot deny that artificial intelligence is on the rise. It is increasingly seen as critical to competitiveness and future economic growth, with large amounts of money being invested in AI, especially in China and the US. However, the complexity of AI, its associated risks and the questions it raises make it tricky to adopt. In this article, Isabelle Hajjar, vice chair of the European Chamber’s ICT Working Group for Shanghai and head of compliance at Tek-ID, and Marc Pedri, AI expert at Tek-ID, give an overview of these different facets of AI.
Let’s AI
Technology has transformed the way people live their lives and do business. Artificial intelligence (AI) has made a sizeable contribution to this transformation, one that is, according to McKinsey, happening “10 times faster and at 300 times the scale, or roughly 3,000 times the impact” of the industrial revolution.[1] Businesses that have seriously adopted AI have seen their profit margins grow substantially, 3 to 15 per cent higher than the industry average.[2] Investment in AI is growing fast and is dominated by digital giants like Google, Baidu, Alibaba and Tencent. These investments are expected to be worth more than United States dollar (USD) 46 billion by 2020, and global gross domestic product (GDP) is expected to be up to 14 per cent higher as a result of AI by 2030, corresponding to an additional USD 15.7 trillion. China and the United States (US) are expected to benefit the most, with a potential boost of up to 26 per cent of GDP in China.[3]
China looks set to become dominant in AI thanks to its massive resources, data sets, ambitious plans and high-level government support.[4] China has deployed a host of strategic initiatives to guide the growth of AI research and development (R&D), including the 13th Five-Year Plan, the China Manufacturing 2025 initiative, the Robotics Industry Development Plan and the Three-year Guidance for the Internet+ Artificial Intelligence Plan. Additionally, the recent Next Generation Artificial Intelligence Development Plan makes AI a key national strategy and sets the goal of growing a competitive AI industry “worth renminbi (CNY) 1 trillion (USD 150.7 billion) by 2030”.
That being said, China will still have to get to grips not only with the already wide information technology (IT) talent gap, which will only worsen with AI, but also with the potential for massive unemployment caused by AI. The Chinese Government estimated in 2016 that the Chinese AI sector would require an additional five million high-skilled workers, while some studies[5] project that 47 per cent of jobs in the US will go to automation in the next 15 to 20 years.
What is AI, and what are its perks?
With more than 60 years of development behind it, AI is an umbrella term covering several areas of technology that simulate human intelligence processes, with the ability to self-correct and innovate. It includes machine learning, deep learning, cognitive computing and natural language processing.
One of the advantages of AI is its ability to automate and expedite mundane, time-consuming tasks and to turn unstructured data into analysed, structured data in a fraction of the time a human being would need for a similar task, thus freeing humans up for higher-level work. Analytical tasks are the core strength of AI. Decision-making, even if it is often an emotional act, should remain grounded in facts. Getting real-time, analysed and structured data, and taking emotions out of the equation thanks to AI, can therefore help improve decision-making.
By utilising AI, businesses can benefit dramatically from greater accuracy, efficiency, cost savings and speed. It can also provide new insights into markets and customer behaviour, and help transform a business’s operations, products and services.
The goal for AI systems should not be to replace humans, but to support them. Ultimately, humanity will evolve alongside AI. This technology is just one piece of the puzzle, and the challenge ahead is figuring out how everything fits together.
Our everyday interactions will soon be intertwined with AI, from Google search results and Amazon’s product recommendations to the integration of generalised voice-controlled systems. AI already drives financial competition on the stock market, while legal AI simulates and predicts potential outcomes of court cases based on interpretations of the law.
Businesses that disregard AI will soon suffer as it becomes a major differentiator in the market, and humans will need AI to perform analytical tasks that are becoming too complex for them alone.
A mind-blowing can of legal, ethical and security worms
Adopting AI raises fundamental and complex legal, societal and ethical questions that have already provoked debate, and adapting all aspects of our lives to AI will be necessary in the near future. Unlike classic software programs, AI evolves on its own, so regulatory difficulties have started to arise, and it will be more important than ever to fully understand and balance the potential threats and advantages that can arise from this technology. Although there are no concrete regulatory plans yet, several initiatives have been launched by the authorities and private industry: the Chinese Government has announced AI standards,[6] and Microsoft intends to codify its ethics and design rules for this technology.
Here are some of the main questions that pertain to AI:
- Will AI have an ‘artificial’ personality and free will? If AI allows programs and robots to learn, grow and alter their decision-making criteria, at what point does one consider them to have free will, a personality, rights and obligations? The European Parliament is already considering an “e-personality”, and the establishment of liability for the actions of robots. On a more humorous note, last year Saudi Arabia became the first country in the world to grant citizenship to a robot, the humanoid Sophia.
- Currently, AI may fall under product liability (defective products), a fault-based regime, or tort law, such as responsibility for negligence. However, can an AI program or AI-enabled equipment be considered ‘defective’ or ‘negligent’ because it erred? One of the main difficulties will be determining responsibility when AIs cooperate with humans, both in the design of the decision-making process and in the decision-making itself, which will make it hard to apportion liability among users, distributors, manufacturers and developers. Most likely, the question of liability will be settled with new compensation and insurance models. AI models also require massive amounts of data to be trained and become effective. Data and AI algorithms must therefore be sufficiently regulated and overseen to ensure they are not abusive or abused, and existing data privacy rules will have to be adapted to tackle the added threat posed by AI.
- Cyber-industry problems, such as a lack of security awareness and the fact that hackers are always one step ahead, will be remedied by developments in AI-enabled security. The large amount of investment, skill and IT resources put into AI-enabled security will, for the first time, allow those adopting it to stay ahead of the hacking community. However, two major issues need to be addressed: first, government-backed hackers will become incredibly powerful; and second, a self-aware AI could potentially identify humans as a security threat and take preventive measures such as blocking access to IT, reshaping internet governance or establishing its own security protocols.
- Can AI-generated music, paintings or models be considered original, new or inventive if their work stems from processing existing data? If so, who owns the potential corresponding intellectual property rights?
- The design of AI comes with ethical strings attached, as pre-set decision rules will have to be chosen. For autonomous cars, for example, do humans decide to protect the passengers at all costs, or to cause minimal loss of life or damage even if it means killing the passengers? The potential malicious uses of AI include fake news, opinion manipulation and the spreading of hate speech. AI-induced discrimination is also a serious issue, as real-world biases such as racism or sexism can be embedded in training data or algorithms (such as image recognition categorising Asians as blinking, black people being labelled as more likely to reoffend, or women getting different results when searching for open job positions).
In conclusion, AI is important and far more complex than other digitalised products and services. Both AI R&D agencies and companies that are developing, buying or subscribing to an AI product or service will face numerous AI-related risks, including property loss, security issues, ethical lapses and scandal. If AI is indeed a game changer, businesses will have to proceed with great caution and ask themselves the right questions.
Isabelle Hajjar is vice chair of the European Chamber’s ICT Working Group in Shanghai and head of compliance at Tek-ID, a company specialised in digital risk intelligence. She leads regulatory and operational compliance support, consulting, strategy and programme design, implementation and roll-out services, both locally in China and globally, with a particular focus on cybersecurity and data privacy compliance.
Tek-ID’s purpose is to help organisations mitigate cyber threats and digitalisation risks by providing business intelligence beyond technical issues. This can involve implementing a compliance programme (CSL, ISO, SOC, SOX, PCI, etc.), evaluating technology solutions, auditing your company (security audit, penetration test, SAPIN2, FCPA, etc.), performing computer forensic investigations and cyber threat intelligence (CFI, CTI), or having technology experts working side by side with you.
[1] McKinsey, No Ordinary Disruption: The Four Global Forces Breaking All the Trends.
[2] McKinsey, Artificial Intelligence: The Next Digital Frontier? Discussion paper, June 2017.
[3] PwC, Sizing the Prize: PwC’s Global Artificial Intelligence Study – Exploiting the AI Revolution, 2017.
[4] Goldman Sachs, China’s Rise in Artificial Intelligence, 2017.
[5] Oxford Martin School and Bank of America Merrill Lynch.
[6] Next Generation Artificial Intelligence Development Plan, 20th July 2017.