When humans discovered fire about 1.5 million years ago, they probably knew right away that they had a good thing. But they likely figured out the downsides just as quickly: getting too close and being burned, accidentally starting a forest fire, inhaling smoke, or even burning down the village. These were not minor risks, but there was no going back. Fortunately, we managed to harness the power of fire for good.
Fast forward to today: artificial intelligence (AI) could prove just as transformative as fire. As with fire, the risks are enormous; some would say existential. But like it or not, there is no going back or slowing down, given the state of global geopolitics.
In this article, we explore how we can manage AI risks and the different paths we can take. AI is not just another technological innovation; it is a disruptive force that will change the world in ways we can't yet begin to imagine. That is why we must be aware of the risks that come with this technology and manage them appropriately.
Set standards for the use of AI
The first step in managing the risks associated with AI is to set standards for its use. These can be set by governments or industry groups, and they can be mandatory or voluntary. While voluntary standards are nice, the reality is that the most responsible companies tend to follow the rules and guidance while others pay no attention. For the benefit of society as a whole, everyone should follow the guidance. Therefore, we recommend that standards be enforced, even if the initial standard is lower (i.e., easier to meet).
As to whether governments or industry groups should lead the way, the answer is both. The reality is that only governments have the weight to make rules binding and to incentivize or cajole other governments around the world into participating. However, governments are notoriously slow and prone to political undercurrents, which is a real liability in these circumstances. Therefore, I believe industry groups need to be involved and play a leading role in shaping the thinking and building a broader base of support. In the end, we need a public-private partnership to achieve our goals.
Governance of the creation and use of AI
There are two things that need to be governed when it comes to AI: its use and its creation. AI, like any technological innovation, can be put to good or bad use. Intent matters, and the level of governance should match the level of risk (whether a use is inherently good, bad, or somewhere in between). Some types of AI, however, are inherently so dangerous that they must be carefully managed, limited, or restricted.
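To make the idea of risk-proportionate governance concrete, here is a minimal Python sketch in the spirit of tiered regimes such as the EU AI Act's four risk levels (unacceptable, high, limited, minimal). The use-case names and the tier mapping are illustrative assumptions of mine, not legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, logging, human oversight"
    LIMITED = "transparency obligations (e.g., disclose AI use)"
    MINIMAL = "no extra obligations beyond existing law"

# Illustrative mapping only -- real classification under a regime like
# the EU AI Act depends on detailed legal criteria, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring_by_government": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def required_governance(use_case: str) -> str:
    """Return the governance obligations for a use case.

    Unknown use cases default to HIGH: when in doubt, assume more
    oversight is needed rather than less.
    """
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in ["resume_screening", "spam_filtering", "novel_use_case"]:
        print(required_governance(case))
```

Defaulting unknown use cases to the high tier reflects the precautionary stance argued for above: obligations should scale with risk, and uncertainty should be treated as risk.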
The reality is that today we don't know enough to write all the rules and regulations, so what we need is a good starting point plus authoritative bodies that can be trusted to issue new rules as they become necessary. AI risk management and authoritative guidance must be fast and agile; otherwise, they will fall far behind the pace of innovation and become worthless. Existing industry and government bodies move too slowly, so new approaches that can move faster need to be put in place.
National or global AI governance
Governance and rules are only as good as the weakest link, so acceptance by all parties is critical. This will be the most difficult aspect. We should not delay anything while waiting for a global consensus, but at the same time, global working groups and frameworks should be explored.
The good news is that we are not starting from scratch. Various global groups have been actively sharing their views and publishing their work; notable examples include the recently released AI Risk Management Framework from the U.S. National Institute of Standards and Technology (NIST) and the proposed EU AI Act, and there are many others. Most are voluntary, but a growing number carry the force of law. In my opinion, nothing yet covers the full scope comprehensively, but taken together, these efforts would make a commendable starting point for the journey.
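For readers who want a feel for how a framework like NIST's might translate into day-to-day practice, below is a hypothetical Python sketch of an internal AI risk register organized around the AI RMF's four core functions (Govern, Map, Measure, Manage). The data structure, field names, and example values are my own illustration, not part of the framework itself.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One row in a hypothetical AI risk register, with fields loosely
    grouped under the NIST AI RMF core functions."""
    system_name: str
    # GOVERN: who is accountable, and under which internal policy
    owner: str
    policy_reference: str
    # MAP: what the system does and who it affects
    intended_use: str
    affected_parties: list[str] = field(default_factory=list)
    # MEASURE: how the risk is quantified and tracked
    metrics: dict[str, float] = field(default_factory=dict)
    # MANAGE: what is actually done about the risk
    mitigations: list[str] = field(default_factory=list)

register = [
    AIRiskEntry(
        system_name="loan_approval_model",
        owner="credit-risk-team",
        policy_reference="AI-POL-007",  # hypothetical internal policy ID
        intended_use="score consumer loan applications",
        affected_parties=["applicants", "regulators"],
        metrics={"demographic_parity_gap": 0.03},
        mitigations=["human review of declines", "quarterly bias audit"],
    ),
]

for entry in register:
    print(f"{entry.system_name}: owner={entry.owner}, "
          f"mitigations={entry.mitigations}")
```

The point is not the specific fields but the discipline: every AI system gets an accountable owner, a documented purpose, measurable risk indicators, and concrete mitigations.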
Reflections
The journey will certainly be bumpy, but I believe humans will ultimately prevail. In another 1.5 million years, our descendants will look back and think that it was hard, but we finally got it right. So let's move forward with AI, mindful of the risks that come with this technology. We must harness AI for good and be careful not to burn the world down.
Brad Fisher is CEO of Lumenova AI.