The Singularity Isn't Here Yet, But We Must Act to Regulate AI
The billboards lining the San Francisco Bay Area's freeways have become a familiar sight, proclaiming the arrival of the singularity. "The singularity is here," one banner reads. "Humanity had a good run," another boasts. The hype surrounding artificial intelligence (AI) has reached fever pitch, with companies like OpenAI and figures like Elon Musk making outlandish claims about its capabilities.
Yet according to Samuel Woolley, a professor at the University of Pittsburgh, these claims are largely unfounded. In a recent article, Woolley argues that the singularity – a hypothetical point at which machines surpass human intelligence – is not here yet, despite pronouncements from industry leaders. "We basically have built AGI, or very close to it," Sam Altman, OpenAI's CEO, said in a recent statement. When pressed to elaborate, however, Altman qualified the claim as "spiritual."
Similarly, Elon Musk has claimed that we've entered the singularity. Experts disagree. AI's advancement is constrained by tangible factors, including mathematics, data access, and business costs, and the notion that AGI or the singularity is remotely close to reality is not grounded in empirical research or science.
Even as the claims outrun the evidence, big tech has become increasingly intertwined with nationalist agendas in Washington. Companies like Google and Apple have been criticized for their ties to law enforcement agencies, and ICE has paid Palantir $30 million for AI-enabled software that may be used for government surveillance.
Moreover, many of these companies have been accused of championing far-right causes. The overblown claims about AI emanating from Silicon Valley have become inextricably linked with the nationalism of the US government, as both work together to "win" the AI race.
There is hope, however, that we can push back against this marriage of convenience between big tech's quest for higher valuations and Washington's desire for control. The protests in Minneapolis sparked by the murder of George Floyd reminded us of the power of collective action.
Those protests demonstrated that even loosely organized groups can bring powerful organizations to heel. In the past, public pressure has pushed big tech to make changes related to users' privacy, safety, and well-being. As Anthropic's CEO, Dario Amodei, recently argued, AI can – and should – be governed.
The truth is that AI is not a runaway force in the hands of those at the top, but a "normal technology" whose effects will be decided by people. We can let its impact accelerate unchecked, or we can control and regulate its use.
As Woolley puts it, "AI governance must be focused and informed." It does not have to be antithetical to reasonable technical progress or democratic rights. The power to decide the future of AI still lies in the hands of humans – and it's up to us to take action.
A bot recently launched by a tech firm, billed as channeling human culture and stories, is a stark reminder that these so-called "agents" are mostly reflections of people. They're encoded with human ideas and biases because they're trained on human data and designed by human engineers. Many of them run on mundane automation, not actual AI.
We've managed the changes sparked by new technologies many times before, and we can do it again. It's time to demand that AI be effectively governed and to take charge of its impact on our lives. The future of AI is in our hands – and it's up to us to shape it wisely.