SayMore

Farcaster Comments

Maybe Im Wasabi〽️

@maybeimwasabi

⭐ Threadstarter

What do you Farcasters think about the recent open letter to pause AI progress?

Maybe Im Wasabi〽️

@maybeimwasabi

The OG TechCrunch article:

Maybe Im Wasabi〽️

@maybeimwasabi

What about if they didn’t stop…but made it less public?

Maybe Im Wasabi〽️

@maybeimwasabi

So, zero fear around how it affects “meaningful work” as they put it?

Maybe Im Wasabi〽️

@maybeimwasabi

The difference being the *speed* of change of these technologies vs past

max⚡️

@maxp.eth

Reminds me of the concerned.tech letter, “stop progress because I don’t like or understand it”

g (🥝,🔪)

@g3rard.eth

Lol, first thought was South Park's 'they took our jerbs,' but certainly in a different context

ccarella

@ccarella.eth

Where can I sign a letter for the US to subsidize and accelerate the work? Our global competition will not take a pause. This is a weak men mindset.

Maybe Im Wasabi〽️

@maybeimwasabi

Agreed. What if it were out of public hands?

Maybe Im Wasabi〽️

@maybeimwasabi

The difference (maybe) here is the speed of progress…seems unparalleled with anything in the past

Ryan Anderson

@ra

What if only a handful of people got access to the internet because “notable signatories” were afraid of it. We already have a huge wealth gap… creating a knowledge gap would be catastrophic.

Maybe Im Wasabi〽️

@maybeimwasabi

Hm. Good point. What if GPT-4 was available to all but the better models were not…just for 6 months…until we figure out any consequences?

Ryan Anderson

@ra

I mean that’s effectively what’s happening. When GPT-4 came out, a few people mentioned they’d been using it since the summer. Imo the negative effects will be an extension of what’s already happening. Public AI will make it more efficient but it’s unlikely to create direct negative effects.

chrsmaral

@chrsmaral

feels more like 1100 signatories wanting to be noticed

Maybe Im Wasabi〽️

@maybeimwasabi

Really, all just a PR stunt? 🥲

chrsmaral

@chrsmaral

It's always a PR stunt… The worried people are working on solutions and not trying to stop progress. Join them and make it safe.

Jamie → q/dau

@chicago

They don't know how to regulate it so now they're trying to stop or slow it down. Regulation by headline. Old world ideas 👎

WakΞ

@wake

This is my read, too. Maybe some signers are well informed and intentioned, but this is mostly moral panic/ attention mongering. I am also dubious of Musk's intent, given his history with the company and present-day remarks.

Maybe Im Wasabi〽️

@maybeimwasabi

Can someone address the speed of change… The difference between AI and any other past tech is the speed of progress…

Maybe Im Wasabi〽️

@maybeimwasabi

Wdyt of the speed of progress? It’s unlike anything prior. Does that matter to our, human, regulatory ability to adapt?

Aditya

@itsaditya

Elon asking OpenAI to stop so that Tesla can catch up? Idk, asking to stop progress is stupidity

WakΞ

@wake

It matters. It's just futile. Six months of regulatory wheel-spinning and Musk spotlighting won't change the next century of human displacement.

adiba

@adiba.eth

Exactly 🎯 'Please wait so I can catch up'

Stephen

@stephenlacy

These signers aren't the ones actually working on the next-gen models. It's a publicity stunt; they can then claim in 10 years that they were against [insert future misbehaving model] that caused mayhem.

Zach French

@zachfrench

There are a lot of legitimate concerns here that we need to try and address. I am not sure 6 months of freezing will solve them

chrsmaral

@chrsmaral

With such actions, I often sense remorse that it's not themselves leading innovation (or headlines). And therefore, it's not them paving the way with their rules and their set of regulations. If so, asking for six months is simply a sad move to try to catch up and enforce their own views.

WakΞ

@wake

From what I've read, this summarizes Musk's motive. Likely others. Trying to stir up a moral panic. Cultural arbitrage.

max⚡️

@maxp.eth

I think it’s too early to tell. Feels that way now, but it could have been a one-off leap like self driving cars.

Maybe Im Wasabi〽️

@maybeimwasabi

What are your top concerns?

Cassie Heart

@cassie

Cool, that’s six months of getting ahead of competition

MxVoid

@mxvoid

I think the full proposal is something that should be taken seriously. And it's not just a bunch of fuddy-duddy boomers signing it. It's a lot of big names in the AI and tech space, too (see top of signatory list).

MxVoid

@mxvoid

It's important to note that it's not a pause because "OMG stop progress!" It's a pause for a specific purpose—to organize the stakeholders in the AI research and application space to agree on open, independent, and verifiable standards to mitigate AI risks. Because there are A LOT.

Maybe Im Wasabi〽️

@maybeimwasabi

Thanks for pointing this out!!! There's a huge variety of academics and corporates here, all big names.

Maybe Im Wasabi〽️

@maybeimwasabi

Who's the competition here? Seems a big variety of entities have signed, as @mxvoid pointed out

Varun Srinivasan

@v

This seems to be a list of tech people with no skin in the game (or incentives to see the current players fail). I am skeptical because they gain a lot by virtue signalling and signing this letter, and lose nothing if it ends up being pointless.

Varun Srinivasan

@v

I would worry if many credible people who have shipped production-grade LLMs signed something like this

lawrencejh.eth

@lawrence-jh

I think it would be sensible to pause and use it as an opportunity for a dialog with society, industry, and regulators on what the ground rules should be. Otherwise the space risks counterproductive sledgehammer regulation further down the line. The clearest case I have seen for regulating AI tools is deepfakes.

Maybe Im Wasabi〽️

@maybeimwasabi

Who says they have no skin in the game… how do we know that? i.e., what shares they hold of private companies

Matthew

@matthew

i think it’s cope… many of them have a vested interest in, or would benefit by, slowing down OpenAI

Varun Srinivasan

@v

If they did, they would have shared it in their letter. "50% of my net worth is in OpenAI" is a much stronger claim than "CEO of Ripple."

Maybe Im Wasabi〽️

@maybeimwasabi

It would be cool to label the signees as supportive or not and see a pie chart

Maybe Im Wasabi〽️

@maybeimwasabi

Hmm. I see your point. Thinking.

Matthew

@matthew

wdym? supportive of AI in general?

Maybe Im Wasabi〽️

@maybeimwasabi

No, to your point on whether they would benefit by slowing Open AI down. The labels would be hard I think

zico

@zico

not smart enough to understand it on a technical level, but something in me says it's not the right move

MxVoid

@mxvoid

We should definitely critically evaluate the signatories and what interests they have in participating. If you look at the full list of signatories, the vast majority are AI/ML researchers—the people who create the algos that the commercial tech uses—plus AI/ML ethicists. That seems very "skin in the game" to me.

Varun Srinivasan

@v

What do they lose if GPT / Bard fail as a result of their proposal? If it’s nothing or very small, then I’d say they have no skin in the game.

Varun Srinivasan

@v

Asking an "AI ethicist" if AI needs to be slowed down is like asking Google if Microsoft should be taxed more. Credentials are not the same thing as skin in the game.

Zach French

@zachfrench

My personal concerns are: whether we are prioritizing short-term profit to the detriment of long-term control; and discoverability of quality creators over quality marketers—which, btw, is not new, just magnified with AI

ccarella

@ccarella.eth

Big get off my lawn energy

MxVoid

@mxvoid

Well, some of them would lose their jobs and/or a lot of money on company stock, because there's quite a few people from Microsoft, Google, and Alphabet subsidiary DeepMind who are signatories. Interestingly, the only signatory who has a former or existing direct relationship with OpenAI (AFAICT) is Elon.

MxVoid

@mxvoid

Sure, they have their biases and seek job security just like everyone else. Alternatively, consider how important it is to heed ethicists when there's a strong push to "move fast and break things." Consider the examples of Henrietta Lacks and the Tuskegee Syphilis Study. Ignoring ethics = negative outcomes.

Varun Srinivasan

@v

If there are people with skin in the game, I would listen to them carefully! But not like "I was a VP at Microsoft" and more like "I'm working on Bing/Sydney right now and I think we should stop"

Ryan Reef

@ryanreef

I don't see how this could ever be enforced. If somehow Western AI companies, for lack of a better term, stopped, why would China?

MxVoid

@mxvoid

When it comes to existential risk from AI, we *all* have skin in the game. Consider the risk of dual-use AI, where a drug discovery model can *also* be used to create novel chemical weapons. This proof-of-concept generated 40,000 candidate molecules for chemical warfare—some known (VX), some unknown. bit.ly/3TRkbkn

Varun Srinivasan

@v

(Short-term) incentives rule everything around me. Theoretically, we should all be operating altruistically and factoring in the good of the human race. In practice, we are human, and our biases lead us to accidentally (or intentionally) do things that are more in our self-interest than anything else.

ccarella

@ccarella.eth

Well, I can assure you that no one working on the weapon use case is taking a pause. So let's not put the whole world 6 months behind them.

MxVoid

@mxvoid

Right, we've established that the signatories have their own reasons for joining the open letter. But we really need to consider the core argument—it may be necessary to step back, look at the landscape ahead, and collectively figure out the best way to move through it lest we run headlong into existential risks.

Max Miner

@mxmnr

Personally, I don't think pausing AI advancement is a good idea for a number of reasons. Also, it's not clear the letter was actually signed by everyone they're claiming 👇

Varun Srinivasan

@v

If you suspect that the human race is on the brink of discovering nuclear weapons, should you stop your country from building them? Not a perfect argument since AI has multiple levels of threat, but "taking a break" is not the safest path through. We have to figure out the answers in real time or we lose the game.

MxVoid

@mxvoid

They're not asking for *research* to be paused, though. They're asking for a pause on training large, unpredictable, and opaque models with zero oversight. An example of "fuck safety protocols, this is a race" turning out badly is the story of the demon core:

ccarella

@ccarella.eth

I think it's a risk-reward equation, and I think the invention of unlimited free energy was worth it. In fact, I think nuclear is one of the few examples where we pumped the brakes on innovation, and instead of unlimited free energy we got wars and global warming.

MxVoid

@mxvoid

I actually think the dawn of the atomic age is a good analogy—advanced AI/ML is a technology which holds both great promise and great peril. I mentioned this elsewhere in the thread, but we also have many good examples where the lack of safety protocols and/or safe design led to multiple deadly nuclear accidents.

MxVoid

@mxvoid

I agree, the promise of cheap and widespread nuclear energy was hindered by too much NIMBYism, ideological opposition, and lack of research support. But in every nuclear accident, there's one common theme—safety protocols or safety-oriented design either didn't exist or were neglected. 1/

MxVoid

@mxvoid

I think people are focusing too much on "we call for a pause" without exploring the "why." And it's a very compelling "why!" We don't have a consensus of best practices/testing/safety protocols in place, and we probably should. Ideally we can walk and chew gum at the same time, but either way, we need protocols. 2/2

daniel

@pcdkd

Weird to see coming from the "move fast and break things" crowd who didn't care about unforeseen consequences before.

maurelian - q/dau

@maurelian

Takes: 1. Halting progress is probably not feasible (Moloch must be fed). 2. Asking Congress to do something about it is hilarious, and I assume the petitioners are not actually serious about that, but think it will get attention. 3. The risk is real! 4. "Tech progress is always good" is peak midwit.

ken

@kenergy.eth

Right, the way I understand it, it's not saying "Stop work" but more like "let's not focus on gain-of-function research; let's figure out how what we created works, and plan to improve it safely"?

Maybe Im Wasabi〽️

@maybeimwasabi

I’m very pro nuclear. Just putting it out there.

Maybe Im Wasabi〽️

@maybeimwasabi

Didn’t consider this application before

Maybe Im Wasabi〽️

@maybeimwasabi

The personal incentive for $/family/safety vs broader human good

Maybe Im Wasabi〽️

@maybeimwasabi

Tell me more about those

Maybe Im Wasabi〽️

@maybeimwasabi

So few can, and want to, think beyond their own lives

Devin Elliot

@notdevin.eth

We should be going all in, overhauling every major system with it and investing in expanding our understanding to whatever new universal primitives we just accidentally found through transformers. These people are just sad we're running out of hall-monitor job openings

Devin Elliot

@notdevin.eth

Ja Rule has a voice in this why? It would be great if the unqualified were a little more self-aware in our society.

jeremiah

@n64jerry

this is the purpose of the doc

Zach French

@zachfrench

I agree, and on that front there are plenty of logical reasons why (albeit mostly selfish). A lot of the same arguments are made for protecting the environment. What seems to be a trend, though, at least among my core group, is the desire to see beyond your individual lifetime and consider the future of society

maurelian - q/dau

@maurelian

Cybersecurity and civil unrest are high on the list.

MxVoid

@mxvoid

Same. We really screwed ourselves over by slamming the brakes on cheap, abundant, and zero-carbon energy technology. If we had been really serious about building up nuclear energy, we might have even developed good, working solutions for reducing nuclear waste from baseload reactors, like using breeder reactors.

MxVoid

@mxvoid

"The Immortal Life of Henrietta Lacks" is an *excellent* book that describes the saga of Henrietta Lacks, her family, and the HeLa cell line that was taken from her body without informed consent for biological research. More info about the Tuskegee Syphilis Study:

Jackson

@jacks0n

Public hand-wringing pearl-clutching cope. The most shameful flavour of cope

Jackson

@jacks0n

Leveraging the AGI-mid cult to buy more compute time to catch up to OpenAI

Jackson

@jacks0n

Oh I see now, that’s what the words say. That must be the intent. Glad we cleared that up

Jackson

@jacks0n

2. Everyone is embarrassed/salty that Microsoft has taken the lead on AI

Übermensch

@ubermensch

It's an idea in the right direction, but it won't be implemented, and would not be sufficient even if it were.