Generative AI came out of nowhere this year, and it has captured the imagination and the attention of the tech industry. Companies appear to be fully embracing it, perhaps sensing that this could be a truly transformative technology. Yet even as companies fall all over themselves to get in on the ground floor of this potential opportunity, a cloud hangs over the enthusiasm.
That is the great unknown of regulation, which could have a tremendous impact on every company selling and implementing generative AI. President Biden issued an executive order laying out a broad set of guidelines; the U.K. hosted an AI Safety Summit; and the EU is working on its own set of potentially stringent requirements, too.
There’s been a range of reactions to the rise of generative AI, with some — like the letter signed by 1,100 technology industry luminaries last March — calling for a six-month moratorium on AI development. That didn’t happen, of course. If anything, it has accelerated, even as some scream hysterically that AI is an existential threat.
At the other end of the spectrum are folks who think any regulation would stifle innovation without delivering real protection. Their primary argument is that you can't protect people from negative outcomes until you know what those outcomes are. Of course, others would counter that if you wait for the bad results to arrive, it could be too late to do anything about them.
And some people see the existential threat argument as a smoke screen covering up the real problems posed by the current generation of AI. Worse, they argue, overly stringent regulations favor the richest and most established companies, pushing aside startups that might not be able to afford to comply.
There’s something to be said for that, too, especially when the incumbents are sitting at the table helping to draft those same regulations. It raises some interesting questions about how much to regulate and where the right answers lie.
To regulate or let it be
It seems that most folks see some AI regulation as a given, perhaps a necessity, especially those who view the technology in purely dystopian science-fiction terms. But that's not always the case. In Marc Andreessen's rambling pro-tech manifesto, published in October, he envisions a world of unfettered and unregulated technology in which regulatory bodies are the enemy of progress.
“We believe intelligence is the ultimate engine of progress,” he wrote. “Intelligence makes everything better. Smart people and smart societies outperform less smart ones on virtually every metric we can measure. Intelligence is the birthright of humanity; we should expand it as fully and broadly as we possibly can.”
In his view, regulating AI could, in some cases, be akin to murder: “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”
He is not alone in some of his views.