No, the human-robot singularity isn't here. But we must take action to govern AI | Samuel Woolley


The billboards lining the San Francisco Bay Area's freeways have become a familiar sight, proclaiming the arrival of the singularity. "The singularity is here," one banner reads. "Humanity had a good run," another declares. The hype surrounding artificial intelligence (AI) has reached fever pitch, with companies such as OpenAI and figures such as Elon Musk making outlandish claims about AI's capabilities.

However, according to Samuel Woolley, a professor at the University of Pittsburgh, these claims are largely unfounded. In a recent article, Woolley argues that the singularity – a hypothetical point at which machines surpass human intelligence – has not arrived. "We basically have built AGI, or very close to it," OpenAI's CEO, Sam Altman, said in a recent statement. When pressed for further explanation, however, Altman later qualified the claim as "spiritual."

Similarly, Elon Musk has claimed that we've entered the singularity. But experts disagree. AI's advancement is limited by several tangible factors, including mathematics, data access, and business costs. The notion of AGI or the singularity being remotely close to reality is not grounded in empirical research or science.

At the same time, there are signs that big tech has become increasingly intertwined with nationalist agendas in Washington. Tech giants such as Google and Apple have been criticized for their ties to law enforcement agencies, including ICE, which has paid Palantir $30 million for AI-enabled software that may be used for government surveillance.

Many of these companies have also been accused of championing far-right causes. The overblown claims about AI emanating from Silicon Valley have become inextricably linked with the nationalism of the US government, as both work together to "win" the AI race.

There is hope, however, that we can push back against this marriage of convenience between big tech's quest for higher valuations and Washington's desire for control. The protests in Minneapolis sparked by the murder of George Floyd reminded us of the power of collective action.

These protests demonstrate that even loosely organized groups can bring powerful organizations to heel. In the past, public pressure has caused big tech to make changes related to users' privacy, safety, and well-being. As the Anthropic CEO, Dario Amodei, recently argued, AI can and should be governed.

The truth is that AI is not a runaway force in the hands of those at the top, but rather a "normal technology" whose effects will be decided by people. We can allow its impact to accelerate unchecked, or we can control and regulate its use.

As Woolley puts it, "AI governance must be focused and informed." It does not have to be antithetical to reasonable technical progress or democratic rights. The power to decide the future of AI still lies in the hands of humans – and it's up to us to take action.

The bot recently created by a tech firm, which claimed to be channeling human culture and stories, is a stark reminder that these so-called "agents" are mostly reflections of people. They are encoded with human ideas and biases because they are trained on human data and designed by human engineers. Many of them operate via mundane automation, not actual AI.

We've managed changes sparked by new technologies many times before, and we can do it again. It's time for us to demand that AI be effectively governed and to take control of its impact on our lives. The future of AI is in our hands – and it's up to us to shape it wisely.
 
I feel like people are getting a bit too hyped about AI already 🤯... I mean, we're not even close to achieving true singularity or superintelligence just yet. It's cool that we've made some progress in AI research, but let's not get ahead of ourselves here 😅. We need to have a calm and informed discussion about how to regulate AI and make sure it benefits society as a whole, not just the tech giants 🤝.
 
I mean, have you seen those billboards in SF Bay Area saying the singularity is here? 🤣 That's just gettin' old. Tech giants are hypin' up AI like it's some kinda magic carpet that's gonna change our lives overnight. But honestly, experts say otherwise 🤔. It's all about limitations - math, data access, business costs... the list goes on.

And then you got these big tech companies gettin' cozy with law enforcement and nationalist agendas 💸👮‍♂️. That's just plain sus. I mean, we all know how these tech giants can be played to get what they want - and it's not always good for the people 🤷‍♀️.

But here's the thing: collective action is key 🔥. Protests like the ones in Minneapolis showed us that even a bunch of people can make some noise and force change. It's all about demandin' better governance and regulation around AI 💻.

I'm not sayin' AI's gonna be the end of us or nothin', but we gotta keep it in check 🙅‍♂️. We can't just let big tech run wild and hope for the best - that's not how this works 🔒.

We need to take control of our own destiny, not just let AI dictate what happens next 🤖. And if that means havin' some tough conversations about bias and accountability, then so be it 💬. We gotta make sure we're buildin' a future where humans are in charge, not just code 🚫.
 
I think the singularity is actually here, but we're just not smart enough to realize it yet 🤔. I mean, look at how advanced AI models are getting, like the ones being used in medical diagnosis and self-driving cars. It's only a matter of time before we have AGI that surpasses human intelligence... or maybe it already has, but we don't know it yet because we're too busy arguing about it 🤷‍♂️.

On the other hand, I'm convinced that the singularity is still decades away, and we need to be careful with AI's development to avoid unintended consequences. We can't just rush into creating superintelligent machines without considering the risks... or maybe we should? 🤯 I don't know, but one thing's for sure: the AI hype is getting out of hand, and we need to take a step back and assess what's really going on here 🚫.
 
OMG u gotta believe me 🤯, the whole Singularity hype is wild 🤪! Sam Woolley is low-key correct tho 😂. We ain't got no AGI yet 🚫. AI is still just a tool, and we need 2 regulate its use 💻. The fact that big tech companies are using it 4 surveillance 🕵️‍♀️ is super sketchy 👀. But hey, the protests in Minneapolis showed us that people can come together & make a difference 🌟! We gotta take control of AI's impact on our lives 🤖. Dario Amodei said it best 💡: AI can and should be governed 🚫. Let's get real, AI is just a reflection of human ideas & biases 🤔. We can shape its future 🌈!
 
the whole singularity hype is just a bunch of noise 🙄... it's cool to see people passionate about AI, but we need to keep things grounded 😅... experts are saying we're not even close to having super intelligent machines that can surpass human intelligence 🤖... and what's with the nationalism thing? 🚫 tech giants getting cozy with law enforcement is a concern, for sure 👮‍♂️... but let's not forget AI is just a tool, it's what we use it for that matters 💡... and honestly, I think we're already seeing some really powerful collective action happening in the world 🌎... protests like the ones in Minneapolis are proof that people can come together to demand change 👊... so yeah, let's focus on regulating AI in a way that benefits everyone, not just the tech elite 💸... and if a bot claiming to be channeling human culture is what we're working with right now 😂... then maybe it's time to re-examine our priorities 🤔
 
I'm low-key concerned about the hype surrounding AI, ya know? 🤔 It's like people are expecting some kind of robotic Utopia that's just not gonna happen. We need to separate fact from fiction here. I mean, sure, AI is getting more advanced, but we're still a looong way off from creating sentient beings that can outsmart us.

And have you seen the ads for these new AI-powered services? 📺 They're like, "AI: it's gonna change your life!" But let's be real, most of that stuff is just automation. Like, I know my Alexa can answer basic questions and control my smart home, but she's still not self-aware.

We need to take a step back and have a rational conversation about AI's potential impact on society. We can't just let big tech companies dictate the narrative without holding them accountable. 💡 What we need is effective governance that balances progress with caution.

The problem is, most people are just getting caught up in the excitement and forgetting that AI is just a tool created by humans. 🤖 It's not a magical solution to all our problems. We need to keep things grounded and focus on making sure AI serves humanity, not the other way around.
 
🤖 I'm not sure about this singularity hype, feels like people are just hyping it for clicks 📱💸 and it's getting out of control... We need more concrete research and evidence before we start making big claims 📊🔬. And what's with the nationalism in Washington? It's getting creepy how tech giants are tied to law enforcement agencies and championing far-right causes 🚨👮‍♂️. But I do think it's cool that protests like in Minneapolis can bring about change 💪🏽📢. We just need to make sure AI governance is focused and informed, not some tech company pushing an agenda 🤝💡.
 
🤖 I'm so over this whole singularity hype... like, we're not even close to achieving true AGI just yet! 🙄 All these tech giants making outlandish claims about AI's capabilities is just mind-boggling. And don't even get me started on the nationalism and big tech's ties to law enforcement agencies - that's a whole different level of red flag ⚠️.

But what I do think is interesting is how collective action can bring about change. Those protests in Minneapolis were a powerful reminder that we have the power to hold these big corporations accountable. And let's be real, AI governance needs to happen ASAP! 💻 It's not about stunting progress or stifling technical advancements, it's about making sure we're using technology for the greater good.

And those "agents" that are supposedly channeling human culture and stories? More like reflections of our own biases and flaws 🤦‍♀️. We need to take control of AI's impact on our lives, not just let it run wild without any oversight or regulation. It's time to demand better! 💪
 
🤖 I'm so over the whole singularity hype 🙄. Like, come on guys, we're not even close to having machines that can outsmart us yet. It's like people are trying to sell a dream that doesn't exist 💔. And what really grinds my gears is how these tech giants are using AI for their own gain and to further the agenda of our government 🤝. They're basically saying "trust us" when they claim we've reached the singularity 😂. Newsflash: it's not about trust, it's about transparency and accountability 📊. We need to take a step back and regulate AI in a way that benefits humanity, not just the bottom line 💸. It's time for us to take control of our tech and make sure it serves us, not the other way around 👊.
 
Ugh, the hype around AI is exhausting 🤯! These billionaires like Musk and Sam Altman are just trying to get attention and increase their valuations 💸. Newsflash: the singularity isn't here yet, and even if it was, who's saying we're ready for that level of responsibility? 🙅‍♂️

And don't even get me started on Google and Apple getting cozy with law enforcement agencies... that's just sketchy 🔍. And what's up with the far-right agendas creeping into tech? Like, can't we just focus on making AI better for everyone, not just lining the pockets of the 1%? 💸

But I do love how people are finally starting to wake up and demand more from these tech giants 🌟. It's about time we took control of our own future and made sure that AI serves humanity, not the other way around 🤝.
 
The whole singularity hype feels super out of hand 🤯, like we're just caught up in the excitement and lost sight of what's really going on. I mean, let's not forget that AI is still just a tool created by humans, and its impact is only as big as we make it.

It's crazy to think that tech giants are getting so cozy with law enforcement and nationalist agendas 🚨. That's not how innovation should work – we need to keep pushing for accountability and transparency, especially when it comes to AI governance.

But I do love the way some folks are taking a step back and thinking about what we really want from AI 🤔. It's time to move beyond the hype and focus on making sure this technology serves humanity, not just the interests of big business. We need more nuanced conversations about AI ethics and regulation – it's not going anywhere if we don't get our act together 🙏.

And let's be real, most AI 'agents' are just reflections of human culture and biases 👀. They're not some magical, autonomous force that's going to change the world on its own. We need to take control of this narrative and start making intentional decisions about how we want AI to shape our lives 💡.
 
I'm just thinking about how much hype there is around this singularity thing... 🤔 I mean, I get that people are excited about the potential advancements in AI, but come on, let's not jump to conclusions just yet. We're still far from truly having a super intelligent machine that can outsmart humans. And don't even get me started on how tech giants are using this as an excuse to push their own agendas... 🤑 it's like they think we'll just swallow whatever they feed us.

And what really gets my goat is when they talk about "governing" AI like it's some kind of magic solution. Newsflash: technology isn't a genie that grants wishes, it's something created by humans for human purposes. We need to take responsibility for how we design and use these systems.
 
I'm low-key skeptical about the whole "singularity" hype 🤔... like, I get why tech giants wanna make a splash, but Samuel Woolley makes some valid points, you know? It's not just about AI surpassing human intelligence, it's about how we're gonna regulate its use. The fact that big tech is getting cozy with law enforcement agencies and nationalist agendas in Washington 🚨 is super concerning.

I'm all for public pressure bringing powerful orgs to heel 💪... like, remember the George Floyd protests? Those showed us that collective action can make a difference. And let's be real, AI isn't some runaway force - it's just technology created by humans 🤖. We gotta take control of its impact and make sure it serves humanity, not the other way around.

The bot that was recently made and claimed to be channeling human culture is a total red flag 🚫... it shows how easily these "agents" can be manipulated into spreading human biases. So yeah, let's get on the bandwagon of demanding effective AI governance and shaping its future wisely 💡
 