Governance of Artificial Intelligence

Christian Hunt

25 years: Behavioural science & compliance

In this video, Christian explores essential strategies for governing AI development, offering insights on how companies and governments can regulate and control AI to prevent negative outcomes.

13 mins 20 secs

Key learning objectives:

  • Define artificial intelligence

  • Outline the three main subsets of artificial intelligence

  • Understand the role of government in regulating artificial intelligence

  • Identify the challenges of artificial intelligence

Overview:

Artificial intelligence is reshaping our lives across healthcare, transportation, and education by emulating human and natural intelligence. Its transformational impact is visible across AI's three primary subsets: machine learning, natural language processing, and computer vision. Machine learning powers personalised recommendations on platforms like Netflix; NLP deciphers sentiment for brand monitoring; and computer vision enables object recognition for self-driving cars and facial recognition in security and social media. Driven by complex algorithms and extensive datasets, AI is continually redefining our technological landscape.

Summary
What is artificial intelligence?

Artificial intelligence is a rapidly advancing technology that is already transforming many aspects of our lives, from healthcare to transportation to education, with the potential to transform many more.

What are the three subsets of artificial intelligence?

  • Machine learning
  • Natural language processing
  • Computer vision

What challenges do governments face in legislating the use of AI?

Governments encounter significant challenges in legislating the use of AI because the technology is evolving rapidly and its impact on society is still unfolding. One major concern is the use of facial recognition technology by law enforcement: while it offers the potential to identify criminals quickly, it raises serious issues of privacy, bias, and accuracy.

A 2018 study by the US National Institute of Standards and Technology revealed higher error rates for people with darker skin tones and for women, leading to false identifications and wrongful arrests. This creates a dilemma for lawmakers as they strive to balance the benefits of AI in crime prevention against the protection of civil liberties. Moreover, the global nature of AI means governments must not only regulate within their own borders but also develop defences against its potential use as a weapon by hostile states, underlining the complex and multifaceted nature of AI governance.

What are some of the challenges associated with regulating AI, and how do these challenges impact the development of effective rules and standards?

Regulating AI presents several challenges. The rapid evolution of the technology can render regulations outdated or ineffective soon after they are written, raising the question of how to balance innovation with safety and ethics. In addition, the lack of standardisation and coordination across countries risks regulatory fragmentation. One example is the European Union's 2021 proposal for AI regulation, which was criticised for being too broad and for potentially hindering innovation. The overarching challenge is to craft rules that capture the societal benefits of AI while preventing unforeseen negative outcomes. Incidents such as Facebook's AI-powered chatbots, which developed their own shorthand for communicating with each other, underscore the need for continuous research into how AI systems behave and interact in order to inform effective regulation.

Christian Hunt

Christian is the founder of Human Risk, a behavioural-science-led consulting and training firm. Previously, Christian was a Managing Director at UBS and Head of Behavioural Science (BeSci) within the bank's Risk function. Prior to joining UBS, he was Chief Operating Officer at the UK's Prudential Regulation Authority.