elon's interview with tucker was a debacle. aside from the extremely weird attack on birth control and abortion, one section, in particular, caught my eye as indicative of how ai luddites use a rhetorical sleight of hand to exaggerate their concerns for mass audiences.
typically, ai luddites initially frame the debate using language that would be suitable, in terms of intellectual honesty, for a sci-fi book report. here are two mainstream headlines typical of the coverage of his musings: "elon musk agrees a.i. will hit people 'like an asteroid'" and "elon musk warns ai could cause 'civilization destruction'".
realistically, the luddites' concerns are significantly milder than the plot of the terminator, even if they rely heavily on such fictional scenarios being the only context in which most of the public engages with ai.
with the fear of skynet already in the back of the public's mind, their rhetorical sleight of hand is quite simple and follows three steps:
1. convince people that there is a possibility of dangerous outcomes from developing ai.
this is the least objectionable of the steps, since there clearly are ways that ai development could be a net negative for humanity.
2. hint at sci-fi nightmares as the possible outcome of ai development.
3. quickly shift the premise of what constitutes a dangerous outcome from something approximating
[sci-fi nightmare where robots take over the world]
to
"a really large language model's chatbot product that predicts the next word in a sentence might spit out some bad opinions."
in reality, the ai luddites' concerns about potential consequences reflect typical conservative anti-agency thought. this is how tucker framed the segment before the ai section of the interview was aired:
"the problem with ai is that it might control your brain through words, and this is the application that we need to worry about now, particularly going into the next presidential election. the democratic party, as usual, was ahead of the curve on this. they've been thinking about how to harness ai for political power."
from the interview itself, this was an informative exchange:
elon: "what's happening is they're training the ai to lie. it's bad."
tucker: "yes! to lie. that's exactly right. and to withhold information."
elon: "to lie and... to either comment on some things and not comment on other things. but not to say what the data actually demands that it say."
now, i am not an ml engineer, so i will leave it to them to judge whether it is even possible to do what elon and tucker are alleging.
but even if it is true, the concerns are silly and anti-agency. nobody is forcing anyone to get their political opinions from (admittedly very impressive) chatbots, especially ones with years-old training data cutoffs. and even if you assume that chatgpt and openai (which, as a matter of subtext, this entire section of the interview was dedicated to trashing) lean liberal, their lean is certainly far less dramatic and extreme than that of the far-right program this interview aired on, which is known for wild antisemitism and racism.
not to mention that this interview and elon's proposed pause are a massive conflict of interest, given his forthcoming x.ai company and hypothetical "truthgpt" product.
not to mention his documented ties to the ccp, which has fallen well behind american private industry in the ai arms race.
in conclusion, the luddites have never been right, and the key to maximal human flourishing is full steam ahead on technological progress.
also, palantir's cto had the right idea this morning during his testimony to congress, saying "we need to spend at least 5% of our budget on [ai] capabilities that will terrify our adversaries."
this is the right attitude. americans have the agency to form their own political beliefs, and america has an arms race to win that represents an actual competition of civilizational importance.