New Report Proposes Policy Framework to Create Accountability in Use of Algorithms


WASHINGTON—Using algorithms to automate complex processes creates myriad opportunities for the public and private sectors to increase their productivity and effectiveness and thereby generate substantial social and economic benefits. But some uses of algorithms can cause harm, depending on how they are implemented, such as by exacerbating existing biases and inequalities. The answer to this problem is not to overregulate the use of algorithms, according to a new report released today by the Center for Data Innovation, but instead to embrace the principle of “algorithmic accountability,” which provides a model for promoting beneficial outcomes, guarding against harms, and ensuring that laws governing human decisions can also be applied effectively to algorithmic ones.

“Algorithms create a wide variety of social and economic benefits, and as they improve they will help solve newer and bigger challenges,” said Joshua New, policy analyst with the Center and the report’s lead author. “Policymakers should be careful not to jump the gun on regulation. Many of the proposals that have been put forward to regulate algorithms would do little to protect consumers but would be sure to stall innovation.”

Algorithmic accountability is the principle that systems based on algorithms should employ a variety of controls to ensure their human operators can verify they work as intended, and to identify and rectify any harmful outcomes.

To implement algorithmic accountability, the report recommends policymakers take a harms-based approach to protecting individuals, which would entail holding operators accountable through light-touch regulation. The approach would use a sliding scale of enforcement actions against companies that cause harm through their use of algorithms, punishing intentional and harmful actions while imposing little or no penalty on unintentional and harmless actions.

“Many people have proposed drastic new regulations for algorithms, and Europe has already started moving down this path,” said Daniel Castro, the Center’s director and co-author of the report. “The risk today is that policymakers may overregulate algorithms, and in the process, limit innovation with technologies like artificial intelligence. A better approach is for policymakers to continue to use light-touch regulation to ensure proper oversight while enabling development and adoption of new technology.”

To further support the development and adoption of algorithmic accountability, the report recommends that policymakers increase the technical expertise of regulators; invest in methods for achieving algorithmic accountability; and address sector-specific regulatory concerns.

###

The Information Technology and Innovation Foundation (ITIF) is an independent, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized as the world’s leading science and technology think tank, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress.
