Good for you, but I have to warn you that there are some discouraging developments. There is a peculiar segment of society that will want to outlaw you.
Abortion is on the ballot in South Dakota. Insulin prices were a key issue in the June debate between Biden and Trump. The U.S. Surgeon General declared gun violence a public health crisis, and the Florida governor called the declaration a pretext to “violate the Second Amendment.”
In an intense presidential election year, the issue of anti-science harassment is likely to worsen. Universities must act now to mitigate the harm of online harassment.
We can’t just wish away the harsh political divisions shaping anti-science harassment. Columbia University’s Silencing Science Tracker has logged five government efforts to restrict science research so far this year, including the Arizona State Senate passing a bill that would prohibit the use of public funds to address climate change and allow state residents to file lawsuits to enforce the prohibition. The potential chaos and chilling effect of such a bill, even if it does not become law, cannot be overstated. And it is just one piece of a larger landscape of anti-science legislation impacting reproductive health, antiracism efforts, gender-affirming healthcare, climate science and vaccine development.
It’s a bit annoying that the article talks about these dire threats to science-based policy, but doesn’t mention the word “Republican” once. It’s not that the Democrats are immune (anyone remember William Proxmire, Democratic senator from Wisconsin?), but that right now Republicans are pushing an ideological fantasy and they don’t like reality-based people promoting science.
One other bit of information here: science communication does not pay well, so you’ll need to rely on a more solid financial base: a university position, or a regular column in a magazine or newspaper, anything to get you through dry spells. You can try freelancing, but then you’re vulnerable to any attack, and hey, did you know that in America health insurance is tied to your employment? I’ve got the university position, which is nice, but that’s a job, and your employers expect you to work, and it’s definitely not a 40-hour-work-week sort of thing.
It doesn’t matter, though, because those employers are salivating at the possibility of replacing you with AI.
But AI-generated articles are already being written, and their latest appearance in the media signals a worrying development. Last week, it was revealed that staff and contributors to Cosmos claim they weren’t consulted about the rollout of explainer articles billed as having been written by generative artificial intelligence. The articles cover topics like “what is a black hole?” and “what are carbon sinks?” At least one of them contained inaccuracies. The explainers were created by OpenAI’s GPT-4 and then fact-checked against Cosmos’s 15,000-article-strong archive.
Full details of the publication’s use of AI were published by the ABC on August 8. In that article, CSIRO Publishing, an independent arm of CSIRO and the current publisher of Cosmos, stated the AI-generated articles were an “experimental project” to assess the “possible usefulness (and risks)” of using a model like GPT-4 to “assist our science communication professionals to produce draft science explainer articles”. Two former editors said that editorial staff at Cosmos were not told about the proposed custom AI service. It comes just four months after Cosmos made five of its eight staff redundant.
So the publisher slashed its staff, then started exploring the idea of having ChatGPT produce content for their popular science magazine, Cosmos. They got caught and are now back-pedaling. You know they’ll try again. And again. And again. Universities would love to replace their professors with AI, too, but they aren’t even close to that capability yet, and they have a different solution: replace professors with cheap, overworked adjuncts. They won’t have the time to do science outreach to the general public.
Also, all of this is going on as public trust in AI is falling.
Public trust in AI is shifting in a direction contrary to what companies like Google, OpenAI and Microsoft are hoping for, as suggested by a recent survey based on Edelman data.
The study suggests that trust in companies building and selling AI tools dropped to 53%, compared to 61% five years ago.
While the decline wasn’t as severe in less developed countries, in the U.S. it was even more pronounced, falling from 50% to just 35%.
That’s a terrible article, by the way. It’s by an AI promoter who is shocked that not everyone loves AI, and doesn’t understand why. After all,
We are told that AI will cure disease, clean up the damage we’re doing to the environment, help us explore space and create a fairer society.
Has he considered the possibility that AI is doing none of those things, that the jobs are still falling on people’s shoulders?
Maybe he needs an AI to write a science explainer for him.