What Role Do Governments Play in Regulating the AI Frenzy?

The rapid rise of AI has fueled a global frenzy to incorporate it into almost every facet of our lives, from improving customer service to enhancing healthcare, and even replacing jobs. But despite its promise, the technology also poses real risks to society. Consequently, some are calling for governments to play a greater role in regulating the industry.

The US government’s first major action in the area was an executive order signed by President Joe Biden in October 2023, requiring new safety assessments, research into equity and civil rights issues, and more. It also tasked federal agencies with developing processes to evaluate and deploy AI tools within the federal government.

However, there is still much debate over exactly how to regulate AI. The most important consideration is ensuring that regulation does not hinder the speed of innovation. Overly heavy-handed rules could stifle creativity and derail the technological advancement that many are counting on to lift global living standards, alleviate poverty, and help solve the climate crisis.

To help avoid such a risk, some advocate a more collaborative approach, modeled on the Financial Stability Board, the informal coordinating body created after the 2008 financial crisis. This would involve building multistakeholder processes to prioritize standards for AI and to update them regularly so they keep pace with the field’s rapid change.

The National Telecommunications and Information Administration, or NTIA, is another government agency looking to establish guardrails for the sector. It is asking researchers, private-sector companies, and privacy and digital rights groups for feedback on what those guardrails should look like. The NTIA hopes to create a set of “best practices” identifying the key features AI systems should have: the ability to be transparent and explain their decisions, freedom from bias, safeguards against misleading users or spreading misinformation, and protection of individuals’ privacy.

Nonetheless, such efforts will only go so far. The sheer variety of AI applications will require a wide range of regulation. For example, a generative AI system that produces news articles or music should be subject to stricter rules than an AI that books a hotel reservation. As such, a one-size-fits-all global regulatory regime may be impossible.

Moreover, the technology is becoming increasingly central to geopolitics and international cooperation, from finance, climate change, and development to trade, security, military operations, and more. This has the potential to significantly shift the balance of power among the world’s most powerful nations, which may not want to give up that influence voluntarily.

For these reasons, regulators will need a toolkit of effective methods for addressing AI. Fortunately, those tools are already taking shape: journalists and civil society organizations are conducting algorithmic audits, and regulatory frameworks such as the EU’s AI Act allow authorities to request information from companies about high-risk algorithms. Together, these initiatives will form the foundation on which much of future AI governance is built.
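Algorithmic audits can take many forms, but a simple way to see the idea is to measure whether a model’s decisions differ across groups of people. The sketch below is purely illustrative, not drawn from any actual audit: the function name, the sample data, and the single metric (a demographic parity gap) are all hypothetical assumptions chosen for clarity.

```python
# A minimal, hypothetical sketch of one kind of algorithmic audit:
# checking a model's decisions for demographic parity across groups.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs, where approved is a bool.
    Returns (gap, rates): the largest difference in approval rates
    between any two groups, and the per-group rates themselves."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: (group label, model decision)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

gap, rates = demographic_parity_gap(sample)
print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")  # flag if above a chosen threshold
```

A real audit would go far beyond a single number like this, examining multiple fairness metrics, sample sizes, and the context in which the system is deployed, but even a toy check illustrates the kind of information regulators and journalists are asking companies to disclose.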
