Arguably big, but definitely not a "smart piece," by Byrne Hobart on writing, blogging, and LLMs: https://www.thediff.co/archive/ais-impact-on-the-written-word-is-vastly-overstated/
@timbray: Let me deconstruct this. The author's (Byrne Hobart's) bias is for,
"some algorithm guessing the next word based on statistical distributions."
That is the very definition of average and mediocre. Economically, the author argues,
"...printed books had lower production values, because of that marginal cost argument. But there were many more [books common people could read] ..."
Keeping in mind: *written by humans who could afford the costs of learning to write them, writing them, printing them, and distributing them.* Of course, progress to electronic media meant,
"What blogging meant was that people could participate in the discourse, in the medium that the most influential people prefer, *without any vetting whatsoever*."
Emphasis mine. True! Worse,
"Pajama-clad or not, bloggers stuck around, and they forced other news organizations to adapt to their norms."
Exactly.
This, in fact, inevitably draws the reader to the deductive conclusion that, contrary to the author's argument that LLMs are an improvement on human-generated writing, genAI is as disruptive to truth-in-journalism as blogging was, but on steroids (to the extent that a person, or a displaced journalist, could be or would want to be unbiased). This is especially true when you factor in the statistics at the bottom of the article:
"We already implicitly opt out of the overwhelming majority of what we could read. And whether we read .01% or .001% of what theoretically interests us doesn't make much of a practical difference."
Why not opt out more by embracing genAI output? Interesting thesis.
The problem here is sorting out the genAI-regurgitated hash of trending opinion and alternative facts when such writing is rarely marked as AI-generated. Remember,
"Tools like Grammarly take this a bit further, and at this point LLMs make it so that everyone who wants to can write [without learning how or why] whatever they need to in whatever tone they want ... [They] can move [their] writing a little closer to 90th percentile ... elevate basically any coherent thought into a message that reads like it was produced by a college-educated professional [but could have been written by a 1st grader who has no idea what they are talking about]."
"Eliminating the ability to judge people based on how well they write is a social shift that, at least for the Extremely Online, is an act of linguistic egalitarianism."
I'm all for the best tools, but my hammer doesn't build my house by itself.
In other words, genAI built on LLMs devalues educated writing, argument, and rhetoric into something a person might still recognize as good, but no longer worth learning to do, or worth investing money in. This blithe attitude reminds me of how art is so often devalued, especially in schools, in favor of STEM and rote learning, displacing critical thinking and learning to be creative.
Sturgeon's Law states,
"Ninety percent of everything is crap."
It's true! The author agrees on that. But if you displace human writing, making it worthless by teaching that genAI use is on par with, or better than, learning to write, don't you end up multiplying the statistical occurrences the author-cited algorithm samples from, from 90% to 99% to 99.99%, ad infinitum, until we have machines talking to machines? Creativity evaporates. Laziness dominates.
The ability of bad actors to control people through algorithms increases by orders of magnitude when people cease to write by themselves. Free words are too expensive.
#BoostingIsSharing