Loose Reins on AI Development, Tight on AI Deployment?

The AI horses are out of the barn. Some are off the ranch, and not even horses. Smiling purple winged puppies. Fire-breathing dragons. The horses, puppies, and dragons are proliferating, and it has everyone excited and worried. Any of them can wreck the town.

The present debate about how to reduce the risks of artificial intelligence is fraught, complex, and changing. Recently, Sam Altman of OpenAI spoke to Congress (after a dinner and demo), advocated for a new agency to regulate AI, and discredited the idea that the industry can self-regulate. Some say regulation of any sort will stifle innovation and have other unintended effects (e.g., black-market AI); others urge moratoriums and regulations as soon as possible, on as much as possible, given the misuse and damage already happening.

Regulation of some sort will come. The history of regulation and innovation is PhD-level huge, and I don’t know how big that iceberg is. Technology always leads regulation; it develops faster, and regulators first need to understand what they’re talking about. By the time rules are even proposed, they’re out of date.

What I can see so far is a critical distinction: regulating development vs. regulating deployment. Developers of new technology need room to experiment and fail in order to create new opportunities for advancement in the public interest. “Sandboxing”, closed beta testing, and pilots are a few of the ways to lower the risks to the public while allowing creativity and innovation to flourish.

Before the public can trust a new technology to be deployed widely, it’s even more critical to communicate, minimize, mitigate, and address the risks to the public. That didn’t really happen with AI already in use, making decisions about bank loan rejections, bail costs, surveillance images, and a myriad of other deployments.

Maybe regulation, when it comes, should focus on deployment, and on how widely tech is tested in development before public release. Industries like banking, with a regulatory infrastructure already in place, can be held accountable for deploying bad tech solutions, or for tech deployed badly. Pharmaceutical development innovation seeks to reduce risk and cost while meeting regulations that keep risky drugs off the market.

The line between development and deployment is blurry, as in the major open beta test OpenAI foisted on the public with ChatGPT. Researchers learned quite a bit from earlier versions in more limited testing, but regulators failed to pay attention. In digital products, deployment and development overlap. AI developers, trained to ‘move fast and break sh*t’ (and then attempt to fix it), are unable to keep up with the wild effects of their experimentation. Can they effectively self-regulate? It seems unlikely.

There is no consensus, and there won’t be. Every presentation at a recent AI conference mentioned something about ethics, responsible development, and governance. Everyone sees valid competing interests and overlaps. No one has the answer; I don’t have it figured out either. This is an extremely wicked problem, but we can’t throw our hands up and keep the barn doors open.

How is your team handling ethics, responsible development, and governance in the use of AI for work processes and innovation?
