WHY WE MUST END THE FOSSIL FUEL INDUSTRY

ARE WE NOW COMPLETELY SCREWED?

The following declaration in bold and italics is from Peter Kalmus, a climate scientist, writing for Newsweek. It’s exactly what I think is true. Here’s the link. If he or Newsweek wants me to take this down, I will. I make no money from this site so I’m not profiting by this. I’m putting it up because of its critical importance.

https://www.newsweek.com/sadly-its-not-just-another-summer-we-must-end-fossil-fuel-industry-opinion-1832188

There is now no conceivable way we can stay under 1.5°C of mean global heating. We probably still had that chance a few years ago, but it has been squandered out of political cowardice, media distraction, apathy, a steady diet of false hope and false solutions, and above all a continued stream of disinformation and legalized bribes from the fossil fuel industry.

The more fossil fuel we burn, the hotter the planet will get. This is basic, incontrovertible, unassailable physics. It’s a dead certainty. And the people currently in charge are still doing everything they can to expand fossil fuels.

Just this year, for example, President Biden approved the Willow Project in Alaska and forced a construction restart on the Mountain Valley Pipeline in Appalachia. These two “carbon bomb” projects, and many, many others occurring all around the world, ensure a hotter, less habitable, and far more dangerous planet.

As a scientist studying extreme heat, I dread the first time we get a heat wave that kills more than a million people over the course of a few days, something I now feel is inevitable. But — if we continue to burn fossil fuels, it won’t stop there.

If we continue burning more fossil fuels, it will get hotter, until at some point heat waves kill 2 million people, and then 3 million, and then 10 million. And that’s just extreme heat. Wildfires, floods, migration, food system collapse — it’s all driven by increasing global heat, so it will all get worse as well. All at the same time.

I don’t know how to be any clearer: This is why we must get off this path as soon as we can. And because the fossil fuel industry is the cause of the global heating that’s driving all this, the only real way to make a change is to ramp down and then end the fossil fuel industry.

We will not solve things by direct air capture, nuclear fusion, or any other whiz-bang technology. We must accept that these are distractions. We must directly confront this system of deeply inequitable and deadly fossil-fueled capitalism, which has become a planet-sized runaway diesel engine.

We are in a war. It’s a real war, not a figurative one, although it’s not like any other war in human history. People are dying, all over the world, because of decisions made by fossil fuel executives. And I can confidently state that many more people will die from climate impacts in the coming years.

Fossil fuel executives knew their decisions would lead to loss of habitability and death, but they made them anyway, and then colluded to block mitigating action and increase their profits. These “scorched earth” tactics are now leading to the collapse of ocean currents, the death of coral reefs and tropical forests, including the Amazon.

If allowed to continue, they will lead to uninhabitable tropics, mass migration, and more frequent and severe catastrophes all over the world. Meanwhile, governments are bringing harsher charges against climate activists. In some places, they are even being murdered. Against this backdrop, climate civil disobedience is perhaps the least we can do.

Once enough of us start to fight, we will win. The only question is how long it will take to get to that point, and how much we will irreversibly lose before we do.

OPTIMISTIC/PESSIMISTIC

YES THE NEW AI WILL BE HARMFUL

Jim Baldwin

There’s been a lot of buzz lately about AI—artificial intelligence—with little attention paid to explaining what AI is and is not, different kinds of AI, and the actual real-world consequences of deployment of AI, as opposed to sensationalist fantasies of doom.

When people talk about AI these days, they usually mean “generative artificial intelligence,” which includes Large Language Models (LLMs) for text and image generators such as Stable Diffusion. There are other kinds of AI, but I’m restricting myself to generative AI here, because the other kinds are generally used for specific tasks by specialists who are well aware of their limitations, as opposed to the generative AI applications that have been released on the public willy-nilly.

I see the problems with AI as falling into three broad categories:

  • What is wrong with the product
  • What is wrong with the production process
  • What is wrong with disingenuous PR hype and sloppy journalism, which feed on each other. (Perhaps related to this is sloppy scholarship, but that’s just a hunch; I’m not familiar with the academic literature.)

Dishonest PR and Sloppy Journalism

Large Language Model (LLM) technology has been released to the general public with virtually no debate or regulation. This is akin to the headlong rush to adopt organic chemistry in industry after World War II with little regard for public safety or environmental effects. The Cuyahoga River famously caught on fire in 1969. We could be at or very near a “rivers catching on fire” moment with LLMs.

Corporations have an interest in playing up the glamor aspects of LLMs in order to obscure the realities. LLMs are doing real damage right now, and exciting doomsday scenarios serve to distract from these mundane but very real harms. Part of this misdirection is promoting the idea that LLMs “think” or have agency. LLM terminology feeds this bias. When LLMs return false results, the corporations call it “hallucination.” This glamorizes a mechanistic process, obscures the low-wage labor that goes into its production, and deflects responsibility for the results away from the human beings who produced them.

Sloppy journalism and corporate PR reinforce each other. Some journalists play up the personification in pieces highlighting the uncanny simulacrum of personhood. (“OMG an AI said it loved me!” It’s an LLM, not an AI, and it didn’t “say” anything at all. Any meaning attributed to the string of characters was supplied by the reader.) In terms of technological achievement, this is not significant; it’s just a cheap trick, throwing first-person and emotive words into the character stream. Joseph Weizenbaum demonstrated this in the 1960s with his “Eliza” program, in which a simple chatbot emulated a psychotherapist, mostly by repeating the user’s words back to them with slight rephrasing. One conclusion he drew from that experiment was that “extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
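To see just how cheap the trick is, here is a minimal sketch of Eliza-style reflection. This is my own toy reconstruction in Python, not Weizenbaum’s original code, and the patterns are invented for illustration; the point is only that matching a keyword, swapping pronouns, and handing the user’s words back as a question is all it takes.

    import random
    import re

    # Swap first- and second-person words so the echo reads like a reply.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    # (pattern, possible responses) -- the last rule is a catch-all.
    RULES = [
        (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
        (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
        (r"(.*) mother(.*)", ["Tell me more about your mother."]),
        (r"(.*)", ["Please go on.", "Why do you say that?"]),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(user_input):
        for pattern, responses in RULES:
            match = re.match(pattern, user_input.lower())
            if match:
                template = random.choice(responses)
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I feel anxious about my job"))
    # e.g. "Why do you feel anxious about your job?"

There is no understanding anywhere in that loop. The apparent insight is entirely the user’s contribution, which was exactly Weizenbaum’s point.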

We would do well to remember that attributing agency or thinking to LLMs is “powerful delusional thinking,” and that corporations have an economic interest in maintaining this delusion.

Another part of the misdirection relies on the older idea that computers are infallible or authoritative. If it came from a computer, it must be true, and “hallucinations” are an aberration that can be corrected, rather than something inherent in the product. Really, how do we know that? We don’t. We should insist that the people and companies who are producing these LLMs be held responsible for the accuracy of their results. If it’s “infeasible” for them to do that, they shouldn’t be allowed to operate. It’s a public safety issue. They’re already whining that being held accountable for the consequences of their actions is just too much to ask of them:

OpenAI says it could “cease operating” in the EU if it can’t comply with future regulation.

A further misdirection is that corporations and tech bros like to trot out supposed “doomsday” scenarios in which super-intelligent AI overlords with no regard for human welfare take control of society and initiate some imagined disaster.

The problem with this is that it’s these very corporations and tech bros that are attempting to take control of society, and have no regard for human welfare. The harms that they are creating are not big dramatic cataclysms, but a “death by 1,000 cuts” – harming individuals and society in many small ways.

The doomsday hype carries the hidden claim that there is no harm right now, that AI is “functioning normally” (except for the aberrant “hallucinations,” of course), and that the disaster is something that could happen in the future but hasn’t happened yet. But the disaster is happening right now.

Harms From the Product

LLMs do not produce sentences or “content.” They produce strings of characters that are statistically and superficially similar to sentences created by humans, and, as Weizenbaum discovered, can fool people into thinking that they are actually real sentences with real meaning. In every case, the “meaning” is imputed by the observer, just as when clouds look like ice cream castles in the air. As Weizenbaum tried to warn us, this is delusional thinking.
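For intuition, here is a toy sketch of what “statistically similar strings” means. A word-level Markov chain is vastly cruder than an LLM (the corpus below is invented for illustration), but the point carries over: fluent-looking output can be produced purely from statistics over a corpus, with nothing behind it that could mean anything.

    import random
    from collections import defaultdict

    # Tiny "training corpus" (illustrative only).
    corpus = ("the model produces strings of characters that look like sentences "
              "the reader supplies the meaning the model supplies only statistics").split()

    # Record which words follow which in the corpus.
    transitions = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        transitions[current_word].append(next_word)

    def generate(start, length=12):
        word, output = start, [start]
        for _ in range(length):
            followers = transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    print(generate("the"))
    # e.g. "the reader supplies the meaning the model produces strings of characters that"

Whatever sense the reader finds in the output, the reader put it there.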

This is the intention of LLMs. They are designed to trick you. Their stated criterion of success is that you believe them. If you don’t believe them, that’s considered a failure.

An entire industry is based on falsification, and then it whines that governments want to regulate it. Weizenbaum’s Eliza chatbot was named after Eliza Doolittle, from George Bernard Shaw’s play Pygmalion, which involved an elaborate conspiracy to deceive.

LLMs are a force multiplier for big data. The problems with LLMs are largely the same as the problems with big data (with the additional danger of so-called “hallucination”).

In Weapons of Math Destruction, Cathy O’Neil details social harms that come from big data. In a nutshell, big data amplifies reactionary forces in society such as racial discrimination and economic inequality. Unregulated algorithms produce such effects as “digital redlining.”  For example: Data gathered on a user from social media can be used to offer them different interest rates on loans, different treatment in health care, academic admissions, or job applications based on zip code, inferred race, inferred educational level, etc.

The algorithms used to make these determinations are unregulated, can’t be audited, and are invisible to the consumer. There is little or no recourse to discriminatory results, even if you know about them.
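A deliberately simplified, hypothetical sketch shows the mechanism O’Neil describes. No real lender’s model is being quoted here; the zip codes, premiums, and inferred traits are all invented for illustration. The point is that a seemingly neutral feature like zip code can act as a proxy for race and income, and the consumer never sees the rule.

    BASE_RATE = 0.05

    # Hypothetical premiums keyed to zip code, inferred from purchased data.
    # The consumer never sees these numbers.
    ZIP_RISK_PREMIUM = {
        "97201": 0.00,   # affluent zip code
        "97203": 0.03,   # historically redlined zip code
    }

    def quoted_rate(zip_code, inferred_education):
        """Return the interest rate a visitor is shown, based on inferred traits."""
        rate = BASE_RATE + ZIP_RISK_PREMIUM.get(zip_code, 0.02)
        if inferred_education != "college":
            rate += 0.01
        return rate

    # Two applicants with identical finances can be quoted very different rates,
    # and neither can see the rule, audit it, or appeal the result.
    print(f"{quoted_rate('97201', 'college'):.2f}")       # 0.05
    print(f"{quoted_rate('97203', 'high school'):.2f}")   # 0.09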

LLMs are an extension of these toxic algorithms. And remember, they are inherently fraudulent.

Then there is the problem of “model collapse.” When the internet gets flooded with LLM-generated content, and LLMs are training on that content instead of on human-generated content, the new models contain “irreversible defects” and “tails of the original content distribution disappear.” Like a Star Trek episode where a robot space probe forgot its original mission and turned malignant.
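The mechanism is easy to see in a toy numerical sketch. This is not an LLM, just a one-number “model” refitted to its own output each generation; the specific numbers are illustrative.

    import random
    import statistics

    random.seed(0)
    mu, sigma = 0.0, 1.0       # the original, "human-generated" distribution
    SAMPLE_SIZE = 200

    # Each generation is fitted only to samples drawn from the previous
    # generation's model, never to the original data. Sampling error compounds,
    # and the fitted spread drifts and tends to shrink -- the tails of the
    # original distribution are the first thing to go.
    for generation in range(1, 11):
        samples = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        print(f"generation {generation}: mean={mu:+.3f}, stdev={sigma:.3f}")

Swap the Gaussian for a trillion-parameter language model and the feedback loop has the same shape.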

It should be abundantly clear by now that if successive generations of LLM-feedback loops produce garbage, then the first generation is also garbage, just not as bad. It’s like spilling dish soap in your ice cream: if it’s only a little, I guess you can still eat it.

But again, the concept of “collapse,” like the concept of “hallucination,” is a euphemism. It hides the implicit claim that there are “non-collapsed,” “normal,” or “non-malignant” LLM results. Since all LLM output is inherently fraudulent, and since, as we shall see, the rule sets for manual human data correction are unworkable, that claim is questionable.

As if all this weren’t enough, there are two threats to skilled labor (this includes so-called “creatives” as well as others who are just as creative but aren’t usually called that—such as computer programmers, scientists, lawyers, etc.)—the theft of their work product and their replacement in the marketplace by degraded versions of their work.

Numerous artists and writers have sued the AI companies for copyright infringement where they say the LLMs used their work in training. As far as I know, none of these cases has been decided yet.

Then there is the very significant problem of scam artists using LLMs to create fake versions of books by established authors and sell them on Amazon. One author was able to get the fakes taken down only because her area of expertise was the business and legal side of writing. Other authors might not find it so easy.

And “More than 10,000 authors have signed a recent letter from the Authors Guild to tech companies including OpenAI Inc. and Meta Platforms Inc. calling for compensation and consent for the use of their works to train AI tools.”

The second threat, replacement, is at issue in the current Hollywood writers’ strike. LLMs threaten the livelihood of writers, since they would allow producers to churn out reams of mediocre content cheaply.

The Seamy Side of the Production Process

Here’s where we get to see the sausage being made. You may have the idea of AI (actually LLM) as a computer program going out and scouring the internet and building up a knowledge base that it then uses to create content. We’ve debunked the idea that LLMs create meaningful content on the output side, but what about the creation of the knowledge base?

In contrast to the image of the disembodied algorithm digesting information, the reality is that a great deal of the “training” of AIs is performed by armies of low-wage contract labor in the third world. These workers are separated from the companies we associate with AI, such as OpenAI, by multiple layers of contractors, which keeps them out of public view and shields them from questions about wages and working conditions. It also shields them from any public examination of methods, standards, and quality control. These contractors also work for the military. Is that an enemy bunker or a kids’ tree fort?

Here another impostor joins Eliza Doolittle. The Mechanical Turk was a purported chess-playing robot built in 1770 that toured Europe to the amazement of audiences everywhere. It could beat most challengers. It had a panel that could be opened to show the gears and levers that supposedly operated it. The truth was that a human chess player was concealed in the base and operated the levers manually.

(Having no shame, Amazon actually has a web site called Amazon Mechanical Turk, where customers can advertise piecework tasks and prospective workers can sign up to perform them.)

So the supposed artificial intelligence is not that at all. It’s just a fancy front for exploited labor. The AI cannot operate without humans constantly intervening to fix broken data models.

Additionally, there is a real problem even when humans are correcting the data. It basically boils down to the fact that very simple cognitive tasks that people do all the time become insanely complicated when you try to encode them into rules. In a Bloomberg article, Josh Dzieza signed up to do this kind of work, and he demonstrates how it’s basically a lost cause to instruct the Mechanical Turks in how to do their jobs. Not a big deal when I’m just buying socks, maybe, but what about self-driving cars or diagnosing a skin condition?
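As a hypothetical illustration of why (the rules below are invented, not taken from any real annotation guideline), consider encoding a judgment a person makes without thinking, such as “is this item footwear?”:

    # Every rule written down immediately spawns exceptions, and the
    # exceptions spawn exceptions of their own.
    def label_footwear(item_name):
        name = item_name.lower()
        if "sock" in name:
            if "windsock" in name:        # not clothing at all
                return "not_footwear"
            if "sock puppet" in name:     # a toy, not clothing
                return "not_footwear"
            return "footwear"
        if "boot" in name:
            if "car boot" in name or "boot camp" in name:
                return "not_footwear"
            return "footwear"
        # ...and so on for sandals, slippers, cleats, waders, booties...
        return "needs_human_review"

    print(label_footwear("Wool hiking sock"))   # footwear
    print(label_footwear("Airport windsock"))   # not_footwear
    print(label_footwear("Ballet flat"))        # needs_human_review

Multiply that by every category in a catalog, or every object a self-driving car might encounter, and the instructions become unworkable.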

These contracting services are also very expensive, which means that only large corporations can afford them. Combined with the proprietary nature of methods and the data itself, this leads to economic concentration and exacerbates income inequality.

Academic scientists are also concerned that the proprietary nature of both the data-gathering methodology and the resultant data could threaten the future of research. Proprietary methods can’t be peer-reviewed, and in general, academics can’t compete with the budgets of these corporations.
_____________________________________________________________________________________________________
Jim Baldwin is a program designer and cybernetic theorist living in Portland, Oregon.
