Abstract
Authors
Stephen Gibson Harvard Kennedy School
Winston Tang Harvard Kennedy School
The recent surge in Generative Artificial Intelligence has introduced both opportunities and risks to society. This paper discusses the challenges in assessing the impacts of AI regulation. It identifies a range of concerns that might give rise to AI regulation and sets out approaches that may inform the design of AI regulation, as well as principles for a robust AI regulatory framework.
The paper focuses on the methodologies and challenges involved in evaluating the impacts of AI regulation, particularly where there is both significant uncertainty around the costs and benefits of the proposed regulation and the potential for near-existential risk, meaning that AI regulatory proposals are not easily amenable to standard cost-benefit analysis. It outlines and considers a range of quantitative and qualitative approaches to the assessment of AI regulatory proposals, including breakeven analysis, the use of real options, and the application of the precautionary principle.
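To make the breakeven approach concrete, the following is a minimal illustrative sketch (not drawn from the paper, and with hypothetical figures): rather than estimating the benefit of a regulation directly, breakeven analysis asks how large the risk reduction would have to be for the regulation's benefits to just equal its costs.

```python
# Illustrative breakeven analysis with hypothetical numbers.
# Instead of estimating the (highly uncertain) benefit of an AI regulation,
# we ask: what reduction in the annual probability of a harm would make
# the regulation's expected benefit equal its cost?

def breakeven_risk_reduction(annual_cost, harm_if_realised):
    """Minimum reduction in annual harm probability needed to justify the cost."""
    return annual_cost / harm_if_realised

# e.g. a $2bn/year compliance cost weighed against a $10tn potential harm:
threshold = breakeven_risk_reduction(2e9, 1e13)
# threshold == 0.0002, i.e. the regulation breaks even if it cuts the
# annual probability of the harm by just 0.02 percentage points.
```

The decision question then becomes the more tractable one of whether the proposed regulation plausibly clears that threshold.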
Given the potential for significant and near-existential harm from AI, it seems reasonable and appropriate that policymakers should err on the side of caution in designing AI regulation, in line with the precautionary principle. However, important insights can be drawn both from the approach adopted for assessing environmental regulations, which involves developing a standardized metric of regulatory risk and more robust qualitative reasoning, and from treating the regulatory framework as a real option, which implies that policymakers should retain flexibility and monitor developments as new evidence becomes available. This effectively views AI regulation as an investment with embedded real options (to delay, expand, revise, or abandon). It requires ongoing monitoring of the effectiveness (or otherwise) of regulation and of wider developments in the AI space, as well as a willingness to re-open regulatory decisions in the light of new information.
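The real-options framing can be illustrated with a minimal one-period sketch (hypothetical numbers, not from the paper): under uncertainty, the ability to delay a regulatory commitment until more is known has positive value, because the regulator can avoid committing costs in states of the world where the regulation turns out to be harmful or unnecessary.

```python
# Illustrative one-period "option to delay" with hypothetical payoffs.
# Committing now locks in the cost before uncertainty resolves; waiting
# lets the regulator act only in the state where acting is worthwhile.

def expected_value_now(p_good, v_good, v_bad, cost):
    """Regulate immediately: bear the cost in both states of the world."""
    return p_good * v_good + (1 - p_good) * v_bad - cost

def expected_value_wait(p_good, v_good, v_bad, cost):
    """Wait one period, then regulate only where net value is positive."""
    return p_good * max(v_good - cost, 0) + (1 - p_good) * max(v_bad - cost, 0)

now = expected_value_now(0.5, 100, -40, 20)    # 0.5*100 + 0.5*(-40) - 20 = 10
wait = expected_value_wait(0.5, 100, -40, 20)  # 0.5*80 + 0.5*0 = 40
option_value = wait - now                      # 30: the value of flexibility
```

The gap between the two strategies is the value of retaining the option, which is why a framework that allows regulators to delay, expand, revise, or abandon rules as evidence accumulates can outperform a one-shot commitment.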