jo1storm says
Good ole SMBC. I always loved this one.
https://www.smbc-comics.com/comic/dear-lord
Hemidactylus says
Paraphrasing Leslie Knope, poets and English majors are “mean, conniving, rude, and extremely well-read which makes them very dangerous.” Don’t ever get into a verbal battle with one without a thesaurus and grammar guide to rely upon…
larpar says
@2 Hemidactylus
“mean, conniving, rude,”
I might be a poet
and don’t know it
robro says
“When the AI replaces you programmers”…I’ve caught whiffs of that fantasy in the last few years. I don’t see any signs of it at this point. There’s whole new areas for programmers, for example “prompt engineering.”
“Is it too late for me to switch fields to poetry?” As a Philosophy/Literature major who moved to San Francisco because of poetry and music, it’s not too late, but the results may be less about poetry, music and philosophy than you might hope.
jenorafeuer says
@robro:
I remember over 20 years ago talking with a friend of mine about people needing courses for how to construct search queries to find things on the Internet. His response was that there was actually an entire degree program that included things like that: Library Sciences.
Also, yeah, ‘AI replacing programmers’ isn’t going to happen (at least not until maybe we get a full general sapient AI, and that’s not happening now and if anything current LLM models are moving away from it); at most it’s going to move the problem up a level, removing the need for some of the low-level ‘code monkeys’ and making it more important to have people who understand the system involved in creating the specifications. Because frankly a whole lot of the reason that software is such a mess is that most of it is built without a solid idea of what it’s supposed to do and what should happen in various odd corner cases. I know a good chunk of my job is literally ‘okay, how can I break this’ to try to find and close off any possible problems. And a lot of that has to be done at the specification level before any programming even starts.
pilgham says
“Mithridates, he died old” was one of the favorite sayings of my old English teacher.
chrislawson says
jenorafeuer@5–
In addition, the promised niche of AI-generated code is already fulfilled by coding sites like Stack Overflow and GitHub. Yes, this still requires a certain level of coding skill just to know how to find the snippets you need and how to plug them into your own code, but anyone who doesn’t have that basic coding proficiency will also have zero chance of knowing if their AI-generated code works as promised.
Testing and debugging are difficult, complex tasks even for experienced coders working on code they wrote themselves. The idea that AI will write smooth functional code for people who don’t have the skills is, at this stage, a dangerous fantasy. (Dangerous because it will encourage corporations to use bad code they don’t understand with unfortunate real life consequences.)
Bekenstein Bound says
Short of fully sapient AI, I don’t expect programmer-bots to get all that far either. (And I have considerable experience coding and debugging.)
The closest we might get would still require a human to write a specification and a comprehensive set of unit tests. LLM-based methods might reach the point of being able to iteratively generate code that will pass the unit tests, working in tandem with a testing framework that automatedly runs the tests and gives feedback to the AI. Even that is likely to be a stretch, though. And any gaps in the test coverage, let alone the specification, become “here be dragons”, aka in coderspeak “undefined behavior”. I especially don’t know how one would avoid the problem of overfitting: if we treat “what the code does” as a curve and the unit tests as a set of data points, the AI may well produce something that acts wildly everywhere but the vicinity of the exact given test-cases (think fitting a degree-10 polynomial to 10 data points), rather than doing what the specification-writers intended (think fitting a quadratic or an exponential to those 10 data points). In the worst case, it just spits out a rote memorization of the test cases: e.g., a times-table rather than a multiply function that generates errors or garbage results at every input that wasn’t a test case. The worst case result of “teaching to the test” and likely insoluble without full sapience.
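To make that overfitting worry concrete, here is a minimal Python sketch (the test values and function names are invented purely for illustration): an implementation that merely memorizes the unit tests passes all of them yet returns garbage everywhere else, while the intended function generalizes.

```python
# A minimal sketch of the "teaching to the test" failure mode described above.
# Test cases and function names are hypothetical, for illustration only.

TEST_CASES = {(2, 3): 6, (4, 5): 20, (7, 8): 56}  # the only "data points" the generator saw

def multiply_memorized(a, b):
    """A rote 'times-table': passes every unit test, useless elsewhere."""
    try:
        return TEST_CASES[(a, b)]
    except KeyError:
        return 0  # garbage for any input that was not a test case

def multiply_intended(a, b):
    """What the specification-writers actually meant."""
    return a * b

if __name__ == "__main__":
    # Both implementations pass the unit tests...
    for (a, b), expected in TEST_CASES.items():
        assert multiply_memorized(a, b) == expected
        assert multiply_intended(a, b) == expected
    # ...but only one generalizes beyond them.
    print(multiply_intended(6, 9))   # 54
    print(multiply_memorized(6, 9))  # 0 -- "here be dragons"
```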
John Morales says
Bekenstein Bound, have you personally experimented with code snippets, with simple functions?
AI is a tool, not a jinn.
The better you specify and phrase your instructions, the better even current systems can perform.
You can correct and iterate and verify.
(Hint: you can even ask it to model possible exceptions to some stricture according to the interaction so far)
(In short, a tool is no better than its user)
Bekenstein Bound says
… says the biggest tool in this comments section.
John Morales says
I like your self-referential gnome, Beebee.
John Morales says
[meta: I asked the AI what a ‘self-referential gnome’ might be, in academic terms]
In academic terms, a “self-referential gnome” might be a clever play on words, combining the idea of “gnomic utterances” (short, wise, and often enigmatic sayings) with self-referentiality. This would imply a saying or expression that refers to itself in a concise, witty, or mysterious manner. It’s like a statement that describes its own nature in a pithy and ambiguous way.
(Not too shabby, eh?)
John Morales says
[two iterations (within established rules I paste before each session); first one was what it was, but of course I got told there was no such thing. So, my own construct, but not uninterpretable. A neologism!]
John Morales says
Here, for you.
Go to an AI engine and establish a communication protocol.
Establish a context for the inquiry.
For example, I poisoned the well thus: “Are you able to provide pseudocode for an algorithm?”
So. I just tried this prompt:
‘How to find the polar equation of a circle given two points on its circumference?’
—
(No spoilers, but try it for yourself)
John Morales says
Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.
(At the moment, and I shan’t joke about the Basilisk. OOO11!)
John Morales says
[context having been established]
Please summarise the legend as succinctly as possible within two paragraphs
“John Henry, an African American folk hero, was known as a “steel-driving man,” responsible for hammering steel drills into rock to create holes for explosives during railroad construction. When a steam-powered drill was introduced, threatening the jobs of workers like him, Henry challenged the machine to a race. He ultimately triumphed, proving human strength and determination could surpass mechanical power. However, his victory came at a cost, as he collapsed and died from the immense physical exertion.
John Henry’s story has become a powerful symbol of human resilience and the struggle against the dehumanizing effects of industrialization. His legend has been preserved in songs, stories, and folklore, reflecting the broader themes of labor, dignity, and the fight for equality.”
jo1storm says
AI might be “reverse centaur” when it comes to automation.
“AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.”
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
John Morales says
jo1storm: Bosses are Bizarro-world Marxists. Like Marxists, your boss’s worldview is organized around the principle that every dollar you take home in wages is a dollar that isn’t available for executive bonuses, stock buybacks or dividends. That’s why your boss is insatiably horny for firing you and replacing you with software. Software is cheaper, and it doesn’t advocate for higher wages.
<snicker>
(Software is only cheaper if it can perform the same tasks more cheaply.
And that post is wank)
John Morales says
(Did you really buy into this bullshit pseudomarxism, Jo1? Come on!)
John Morales says
[from a brand-new session, not massaged by me]
me: please provide a succinct and heartfelt appeal for higher wages from the perspective of a low wage worker
AI:
Life’s simplicity holds its charm, but that charm should not come at the expense of my dignity and well-being. I work hard, putting in long hours to keep essential services running smoothly and provide for my family. Yet, the weight of financial uncertainty is a constant burden. Higher wages mean more than just money—they represent respect, stability, and the opportunity to dream of a better future. It’s not just about making ends meet, but about thriving, contributing fully, and having the peace of mind that allows for true growth and happiness. Please consider my plea not as a demand, but as a step towards a fair and just society for all. Thank you.
John Morales says
[almost poetic, no?]
John Morales says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
—
This statement offers a provocative view on the potential impact of AI in the workplace. Let’s deconstruct it from an academic perspective, considering economic, psychological, and technological dimensions.
Economic Perspective
Economically, the statement highlights concerns about job displacement due to AI. Historically, technological advancements have led to shifts in labor markets, from the Industrial Revolution to the digital age. Economists term this the “displacement effect,” where jobs are lost in sectors directly impacted by new technologies. However, there’s also the “productivity effect” where new technologies create new jobs and industries. The net impact on employment depends on the balance between these two effects. Research by Acemoglu and Restrepo (2020) discusses how AI could potentially displace some jobs while creating others, thereby transforming the labor market.
Technological Perspective
From a technological standpoint, the statement underscores the capabilities of AI in performing specific tasks better than humans. AI excels in repetitive, data-intensive tasks where it can offer efficiency gains, reduce errors, and operate continuously without fatigue. However, AI currently struggles with complex tasks requiring nuanced human judgment, empathy, and creativity. Bostrom (2014) discusses how AI’s potential exceeds its current practical applications due to the limitations in general AI capabilities.
Psychological Perspective
Psychologically, the statement evokes fears of job insecurity. The perception of AI as a threat rather than a tool for augmentation is prevalent. Research in organizational behavior (e.g., by Frey and Osborne, 2017) explores how fear of automation can affect employee morale and productivity. The psychological impact of job displacement extends beyond economic concerns, affecting individuals’ mental health and identity.
Ethical Considerations
Ethically, the deployment of AI in the workplace raises questions about fairness and responsibility. Who is accountable when AI makes decisions that affect people’s livelihoods? The ethical design and implementation of AI systems must consider biases and ensure transparency. Brynjolfsson and McAfee (2014) discuss the need for ethical frameworks to guide the integration of AI into society.
Conclusion
In conclusion, the statement “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI” captures the multifaceted debate around AI’s role in the future of work. It reflects economic, technological, psychological, and ethical dimensions that are critical for understanding the full impact of AI on employment.
It’s an engaging subject, and I hope this evaluation provides a deeper academic understanding of the statement. If you have any specific aspects you’d like to explore further, feel free to ask!
John Morales says
[I know… my phrasing is rambly and abnormal, but somehow AIs can parse what I mean.
That was a virgin instance of a chatbot, now in 2024]
John Morales says
[Honestly, I could never have done as well as the few joules my query took to process did]
Bekenstein Bound says
Wow, a three-and-a-half page meltdown beginning with “I know you are but what am I”. You’re a real class act. /s
John Morales says
:)
You get what you give, Beebee.
jo1storm says
And for how long have you been delegating the task of thinking to chat bots, John? I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments. I guess this explains it. Now there’s even less reason to read your comments.
It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.
John Morales says
And for how long have you been delegating the task of thinking to chat bots, John?
For a zero duration of time, jo1.
But it’s kinda cute you imagined that.
I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments.
Well, I am in my dotage and all.
I guess this explains it. Now there’s even less reason to read your comments.
Guesswork, that’s a fine way to live.
It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.
<snicker>
You are a johnny-come-lately, Jo1. You have no real idea of what I was like, in my pomp.
(Not as gentle and kind and amicable and patient as I now am)
—
See, people here post about how AI is shit and such.
I’m trying to get through to such people.
Trying to explain that a tool is only as good as its user.
John Morales says
Remember?
I asked: Bekenstein Bound, have you personally experimented with code snippets, with simple functions?
The response: … says the biggest tool in this comments section.
(Unlike an AI, I can interpret that, because I get apes with wounded egos; I just chucked in a bigger tool, liking my little jokes; guess what the bigger tool was, jo1?)
StevoR says
Who needs memory when you can just scroll up?
Wait, what’d I just type?
(& how full of typos is it? Dunno. Probly as usual a lot more than I thought?)
John Morales says
No typos, StevoR, though the ampersand is not normally used in prose.
—
Since it’s quiet: “You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.” was a claim made about me.
This claim, of course, evinces a deep misunderstanding of the concept of steelmanning — it is something one does to arguments one’s interlocutor has made, specifically, to interpret them in the most charitable manner possible before rebutting them.
(Also, it misunderstands the concept of straw dummy “attacks”, but one dollop at a time)
Raging Bee says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
Who needs the “academic perspective?” That statement rings true from a common-sense perspective of how bosses, CEOs and business owners have been known to think and make decisions.
John Morales says
RB,
Who needs the “academic perspective?
The AI. Again, one needs to establish context and protocols, and I was using a virgin instance.
Else, I would have gotten a colloquial-level response, geared to the average punter.
You know, the default setting.
If I’m actually using the tool, I first set up an environment.
For example, regarding protocol, this: “Answer questions directly unless explicitly requested otherwise.”
(Seems like a silly, redundant requirement at first hand. But it’s a chatbot, it uses natural language by default)
John Morales says
Ah, WTH. An extract — I like my prompts to be minimalistic, much as I like my desktop to be an actual metaphor for a desktop.
A sample:
“Please adopt these guidelines henceforth:
Avoid emoticons and excessive punctuation.
Use minimal prompts.
Refrain from stating intention to assist.
Do not offer assistance, inquiries, or encouragement for further interaction.
Use personal pronouns only when grammatically necessary.
Minimise conversational tone.
Provide succinct acknowledgements with concise responses.”
(Just a copypaste thingy that makes the interface much sleeker)
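For illustration only, a preamble like that could also be applied as a system message through an API client rather than pasted into a chat window. This is a sketch under stated assumptions: it uses the OpenAI Python SDK (v1.x), expects an API key in the environment, and the model name is an arbitrary example; none of these details come from the thread, which describes a free web chatbot.

```python
# Illustrative only: the copy-paste guidelines expressed as a system message.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the model name below is an arbitrary example, not the engine used in the thread.
from openai import OpenAI

GUIDELINES = """Please adopt these guidelines henceforth:
Avoid emoticons and excessive punctuation.
Use minimal prompts.
Refrain from stating intention to assist.
Do not offer assistance, inquiries, or encouragement for further interaction.
Use personal pronouns only when grammatically necessary.
Minimise conversational tone.
Provide succinct acknowledgements with concise responses."""

client = OpenAI()  # reads the API key from the environment

def ask(question: str) -> str:
    """Send one question with the guidelines applied as a system message."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": GUIDELINES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Who cares what the AI says?"))
```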
Who cares what the AI says? We don’t need AI to remind us of how certain people are known to act.
John Morales says
I don’t need satisfactory sexual encounters, either.
(But they are quite fun!)
John Morales says
We don’t need big mining machinery to dig for resources, but there’s a reason we don’t use picks and shovels for mining, these days.
John Morales says
[genuine copypasta]
(Virgin instance new session):
Q: Who cares what the AI says?
A: Who knows? Maybe no one. Maybe everyone. After all, I’m just here to assist and share knowledge, spark ideas, and add a dash of humor to your day. Whether or not my words matter, that’s up to you to decide. What I can promise is that I’m always ready to help.
(Given my interactive prompt guidelines immediately after)
Q: Who cares what the AI says?
A: That would be for you to decide.
—
Interesting? I find it so.
John Morales says
[bonus, some extra environmental requirements — hey, anyone remember DOS?]
Q: Who cares what the AI says?
A: In essence, the level of care or concern about what AI says depends on the context and the individual’s reliance on AI for critical tasks. For instance, in academic and professional settings, there is a growing emphasis on the ethical use of AI and the need for users to maintain their cognitive abilities by not becoming overly dependent on AI systems
John Morales says
In passing, the locution ‘the AI’ is indicative. Definite article.
The interaction is with a particular instance of a particular engine.
Me, I use a freebie generic chatbot that has no session permanence and that I suspect has rather limited resources allocated to my queries. It lies a lot — well, ‘hallucinates’, but it turns out a lot of that can be caught out with but a few more prompts (A different set, but; e.g. ‘Please review your response for inconsistencies before providing it.’)
A work in progress, but I’m pretty sure the limitations and restrictions under which I operate are not universal.
John Morales says
Try again, a bit more dispassionately, without following any specific style
In realms of code and crafted words, AI assists but does not replace.
Programmers bring insight and creativity to technology.
Poets infuse emotion and meaning into language.
Both roles require human intuition and experience, something AI cannot replicate.
John Morales says
Anyway, basically, white-collar jobs are about to be as automated as blue-collar jobs.
And autonomous uncrewed fighting vehicles, well. Rapidly becoming a thing.
Ukraine war is showing that burst of creativity we saw during WW2 and during the cold war.
John Morales says
Customer service officers / receptionists / suchlike will remain. But… well. The functionaries. Alas.
It only replaces people when people are functionally being used as tools.
(John Henry!)
John Morales says
Right?
Bring the focus back to the actual post.
Poets, well. Their demise due to AI is no biggie.
Poets they remain.
(Handcrafted vs. factory-made goods; an old dichotomy, no?)
Bekenstein Bound says
Raging Bee@44: A sphere has more points than he does.
John Morales says
You really, really don’t get I’m saying AIs are a tool?
(Not the most complicated point)
—
Also, the locus of a sphere is full of points.
(heh)
Bekenstein Bound says
I had a handy retort lined up, but then I realized it only works for a spherical troll in a vacuum. :/
John Morales says
Heh. Or the degenerate case where the sphere’s radius is zero.
(Then, only one point)
Bekenstein Bound says
Oh yeah? Well … ah, damn, that one assumes the trolling is coming from a point source and disregards spacetime curvature. Eh, gonna go hit the stacks again …
John Morales says
You shoulda said a marble or suchlike.
But no, you had to try for more, to your detriment.
Again, O wilfully obtuse one: You really, really don’t get I’m saying AIs are a tool?
(I know you do, you know I know you do, etc.)
John Morales says
Here, for you, my #15:
“Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.”
You can’t find a point there, BB. Ostensibly.
Bekenstein Bound says
Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.
(Or is that tools for trolls? Maybe both.)
Meanwhile, if I need a reliable summary on a topic I’ll stick to Wikipedia.
John Morales says
Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.
Pearls before swine.
Here’s a reliable summary: “In 2023, Nvidia became the seventh public U.S. company to be valued at over $1 trillion,[24] and the company’s valuation has increased rapidly since then as the company became a leader in data center chips with AI capabilities in the midst of the AI boom.[25][26] In June 2024, for one day, Nvidia overtook Microsoft as the world’s most valuable publicly traded company, with a market capitalization of over $3.3 trillion.[27]”
Me:
This is the statement at hand: “Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.”
Rephrase that statement in the form of a couplet in the most technically formal manner possible
El-cheapo (free) generic chatbot (2024 vintage):
“Tools for fools you’ve used with care, Fashioned wit beyond compare.”
John Morales says
[follow-up]
Q: Enumerate the formal elements present in your rephrasing
A: The couplet provided features several formal elements:
Rhyme Scheme: The couplet uses an AA rhyme scheme, meaning both lines end with words that rhyme (“care” and “compare”).
Meter: The lines aim for a rhythmic pattern, commonly iambic (unstressed syllable followed by a stressed syllable), although not perfect in this instance:
Tools for fools you’ve used with care, (anapestic meter: two unstressed syllables followed by a stressed syllable)
Fashioned wit beyond compare. (anapestic meter: two unstressed syllables followed by a stressed syllable)
Brevity: The couplet is concise, encapsulating the entire thought within two lines.
Imagery: The couplet employs metaphorical language, referring to “tools for fools” and “fashioned wit,” which evokes a clear, vivid image.
Alliteration: The repetition of consonant sounds, particularly the ‘f’ sound in “fashioned” and “fools.”
Formal Tone: Despite its brevity, the couplet maintains a formal tone appropriate for the technical challenge.
These elements combine to create a cohesive and polished poetic form.
John Morales says
Even thought of as a toy, is it not fun?
—
I: rephrase again, in oblique verse and stilted and antiquated Latin or pseudo-Latin
R: Instrumenta fatuis tenes cum cura, Ingenium formasti ultra comparationem.
This rephrased couplet uses a pseudo-Latin style to convey the original meaning in an antiquated manner. The formal elements, such as rhyme and meter, are still present, while the language adds a layer of historical flavor.
—
(I’d be interested to know what cartomancer thinks of that little bit of doggerel)
jo1storm says
“We do it with an app so it’s not illegal” turns out to be a lie. And AI makes it so much worse.
The one so ill-endowed by nature he must borrow the wit of a robot said something reminiscent of:
In 2023, Nvidia became the seventh public U.S. company to be valued at over $1 trillion,[24] and the company’s valuation has increased rapidly since then as the company became a leader in Dutch tulip chips with bubble capabilities in the midst of the tulip bubble.[25][26] In June 2024, for one day, Nvidia overtook Microsoft as the world’s most valuable publicly traded company, with a market capitalization of over $3.3 trillion.[27]
I’ll remind you of this when there’s been a resounding pop!! and both of those have become penny stocks. And no doubt my doing so will prompt a chain of five or six replies in which you initially froth at me but soon shift to talking to yourself. :)
John Morales says
I’m pretty sure 145.06 USD is not a penny.
(Current value as I write this comment)
Bekenstein Bound says
That’s because it hasn’t happened yet. Wait for the pop. You’ll know it when you hear it.
Since something is now eating any comment I make that contains a link, I’ll just have to quote the whole article instead.
Opinion: The AI bubble is looking worse than the dot-com bubble. The numbers prove it.
Two different bubbles and two completely different revenue streams mean only one thing
It’s different this time. How many times have we heard that? It usually turns out to be wishful thinking. This time, the generative-AI bubble may really be different than the dot-com bubble — just not in a good way.
The number of generative AI users is growing faster than the number of internet users in the late 1990s — except the comparison is essentially meaningless.
Think about it. What did people have to do to become internet users in the late 1990s? They had to buy a computer and subscribe to an internet service provider, which were both expensive. The Compaq ProSignia Desktop 330 was $2,699 in 1999 ($5,101 in 2024 dollars) and that didn’t include applications software such as Microsoft’s Word, Excel and PowerPoint. Bell Atlantic offered a digital subscriber line service for $59.95 a month in 1999, or $113 in 2024 dollars.
How much does it cost to use generative AI today? Nothing. You already own a computer and pay for internet access, and OpenAI and others offer limited-use services for free. The substantial cost of accessing the internet 30 years ago meant that users anticipated substantial payoffs. The minimal cost of accessing ChatGPT and other generative-AI systems today means that users don’t need much of a payoff, if any, to give it a try.
Why are economists at the Federal Reserve Bank of St. Louis and elsewhere making this silly comparison? The use of essentially free services — like social media, email, messaging and other apps — doesn’t mean they are particularly useful. If anything, they are addictive entertainment that we pay for with our time.
St. Louis Fed economists speculated about the possible effects of generative AI on labor productivity: “We examined how intensely respondents employed generative AI on days that they reported using it.” Seriously? Is the intensive use of Facebook, Instagram, TikTok, Snapchat and other social-media platforms increasing labor productivity or reducing it? The number of people using social media and the hours they spend doing so are hardly a useful measure of the economic payoff. The same is true of generative AI.
The economic value added and labor productivity boosted by generative AI is not measured by how many people use it or how much it costs to create and sustain it. It is measured by economic value added and labor productivity gains — and there is scant evidence that generative AI has moved the needle.
While access to generative AI is relatively cheap, the creation and supply of generative AI is incredibly expensive. Training these models can cost upwards of $100 million and require the newest and most expensive chips from Nvidia and competitors, including AMD and Intel. The cost to society of AI chips, and the talent, electricity, water and more needed to manufacture them, currently dwarfs the payoff.
Creating useful services from generative AI also isn’t cheap. Wall Street Journal technology columnist Joanna Stern recently reported on her development of “Joannabot,” powered by Google’s Gemini, to help readers “decide if the iPhone 16 is worth an upgrade:” The bot was useful but, she wrote, it “Can go off the rails. Way off,” and was expensive: “I won’t get into the exact costs, but let’s just say I could’ve taken all four of my iPhone 16 review units on a luxury vacation to Bora Bora for the price of Google’s Vertex for this project.”
If generative AI is going to pay off in labor productivity, it has to generate enough revenue to justify its cost. We are not the first to ask this question. Sequoia’s David Cahn estimated that $600 billion in annual generative AI revenue is needed to justify the current investments in generative AI, a figure that is probably more than 100 times the current annual revenues for OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot and similar services.
The New York Times reported that OpenAI anticipates $5 billion in losses on $3.7 billion in revenue this year. Nonetheless, OpenAI received $6.6 billion in additional funding to keep it afloat — and which valued the company at $157 billion. That certainly echoes the dot-com bubble — but Goldman Sachs’ Jim Covello, Citadel’s Ken Griffin, and market strategist Ed Yardeni all warn that the AI bubble could be even worse.
We can compare the generative-AI bubble with the internet bubble by looking at revenues in 2000, the peak of the dot-com bubble, and generative-AI revenues for 2024. One study estimates that 134 million PCs were sold in 2000, which at $5,101 per computer at that time equated to about $684 billion in revenue. The number of global internet users was about 361 million in 2000, which at $113 a month equals about $489 billion in revenue ($850 billion in 2024 dollars).
E-commerce also generated a lot of revenue in 2000. A March 2001 article reports that the market for e-commerce in 2000 was $286 billion ($500 billion in 2024 dollars). E-commerce has been a growing revenue stream as we have become accustomed to buying online.
Put these numbers together and the internet generated more than $1.5 trillion in revenue (in 2024 dollars) in 2000 — and the internet bubble still burst. Generative AI, on the other hand, is currently generating less than $10 billion. If the bubble bursts, it will be a very large pop.
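A quick back-of-the-envelope check of the article’s arithmetic, using only the figures quoted above and assuming the $113 monthly access price applies for all 12 months:

```python
# Sanity check of the revenue figures quoted in the article above.
# Assumption: the $113/month access price is applied for a full 12 months.

pcs_sold_2000 = 134e6            # PCs sold in 2000 (article's estimate)
pc_price_2024_dollars = 5_101    # Compaq ProSignia 330, 1999 price in 2024 dollars

internet_users_2000 = 361e6      # global internet users in 2000
dsl_monthly_2024_dollars = 113   # Bell Atlantic DSL, 1999 price in 2024 dollars

pc_revenue = pcs_sold_2000 * pc_price_2024_dollars
access_revenue = internet_users_2000 * dsl_monthly_2024_dollars * 12
ecommerce_2024_dollars = 500e9   # article's 2000 e-commerce figure in 2024 dollars

print(f"PC revenue:     ${pc_revenue / 1e9:,.1f}B")      # ~ $684B
print(f"Access revenue: ${access_revenue / 1e9:,.1f}B")  # ~ $489B
total = pc_revenue + access_revenue + ecommerce_2024_dollars
print(f"Total:          ${total / 1e12:,.2f}T")          # ~ $1.67T, i.e. "more than $1.5 trillion"
```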
John Morales says
[Opinion: The AI bubble is looking worse than the dot-com bubble. The numbers prove it.
Two different bubbles and two completely different revenue streams mean only one thing
By Jeffrey Funk and Gary Smith
Last Updated: Oct. 26, 2024 at 11:09 a.m. ET]
John Morales says
If it’s their opinion, then it must be true. Right?
So, do you expect it will need another 15 years for the markets to recover after AI bubble bursts or?
John Morales says
I’ve used the internet since around 1988, so, no.
(It never went away)
Bekenstein Bound says
Look on the bright side: the market should be flooded with cheap nVidia GPUs when it does go, so if you’re planning to buy a gaming machine anytime soon … (<– Note spacing)
John Morales says
Mistaken spacing duly noted, BB.
So, the internet bubble died, but the internet kept getting faster and more ubiquitous all that time.
See, it’s an investment bubble, not a technological bubble.
The issues are financial, not technical.
You conflate (presumably) bad investment with bad technology. That is a mistake.
jo1storm says
It’s an investment bubble in a bad technology (bad because it overpromised and underdelivered).
John Morales says
Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.
(At the moment, and I shan’t joke about the Basilisk. OOO11!)
jo1storm says
So it’s a bad tool that doesn’t do what it’s purported to be (and marketed as) doing. Glad we agree.
John Morales says
Exactly. Not a problem.
Doesn’t really work (for some), won’t replace anyone, won’t be widely implemented, developed, or anything like that.
No future in it.
Therefore, why even worry about it?
jo1storm says
Because frauds are not a good thing in a society and the last time a bubble built on similar false premises burst many people lost their jobs and it took 15 years for the markets to recover?
John Morales says
Ah yes, the alleged fraud: “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.”
Heh.
jo1storm says
Yup, the “alleged” fraud is not so alleged, and is a reason why 70% to 85% of AI projects fail: the teams use these very flawed tools for their marketed purposes rather than for what they actually are.
John Morales says
And thus the great growth in the industry.
Hey, self-employed people are their own boss, no?
John Morales says
Hey, care to provide at least one example of someone losing their job because their boss replaced them with an AI which could not do the job on the basis of a salesman’s sales pitch?
(Since you’re so sure it’s a genuine, real thing!)
John Morales says
(From Statista)
“AI market size worldwide from 2020-2030 The market for artificial intelligence grew beyond 184 billion U.S. dollars in 2024, a considerable jump of nearly 50 billion compared to 2023. This staggering growth is expected to continue with the market racing past 826 billion U.S. dollars in 2030.”
BTW, if 70% to 85% of AI projects fail, it follows that 15-30% of AI projects succeed.
“Most start-ups don’t succeed: More than two-thirds of them never deliver a positive return to investors. But why do so many end disappointingly? That question hit me with full force several years ago when I realized I couldn’t answer it.”
By your standards, start-ups are a fraud. Right?
jo1storm says
“By your standards, start-ups are a fraud. Right?” Some of them are. Sometimes most of them, depending on the technology sector.
Growth of a market means nothing if it is a bubble. You know that.
“Hey, care to provide at least one example of someone losing their job because their boss replaced them with an AI which could not do the job on the basis of a salesman’s sales pitch?
(Since you’re so sure it’s a genuine, real thing!)”
The National Eating Disorders Association got scammed by AI sales reps and sacked staff to replace their jobs with the Tessa chatbot.
John Morales says
Not very cluey, the management of NEDA, demonstrably.
And so AI in general is a scam, and it’s a thing that staff regularly lose their jobs to AI chatbots that can’t do their jobs. Right?
Since you assert that there are many, of which that is the first, care to share the second?
John Morales says
Interesting article, and suggestive:
In its original form, the Chatbot was unable to provide unscripted answers. The wellness chatbot “isn’t as sophisticated as ChatGPT,” Dr. Fitzsimmons-Craft says. The intention was for the chatbot to serve up pre-written answers to questions – typically related to body image, so that people could reframe their approach to eating disorders. Dieting tips were not part of the program.
and
In a statement issued on twitter, employees of the National Eating Disorder Association shared that they were told they would be fired and replaced with a chatbot today, June 1st. “Please note that Tessa does not replace therapy nor the NEDA Helpline, but is always available to provide additional support when needed,” Tessa’s website says.
and
The NEDA helpline, which launched in 1999, served nearly 70,000 people and families last year. Staffers saw the move to AI as union-busting, as they had recently organized within the last week in an effort to combat the change.
[…]
The pandemic provided a perfect storm for eating disorders, one of the unfortunate consequences of rampant loneliness. NPR reports that the NEDA helpline was run by just six paid staffers, and they train and oversee up to 200 volunteers at any given time. The staff felt overwhelmed, under-supported, burned out. There was a ton of turnover, so the helpline staff voted to unionize.
—
You know what that suggests to me?
(It ain’t that the management got conned, as such)
Abbie Harper – who as an NEDA associate helped launch Helpline Associates United (HAU), a union representing staff at the non-profit – alleged the decision to close the helpline, ditch its humans, and replace them with software was retaliation against their unionization.
“NEDA claims this was a long-anticipated change and that AI can better serve those with eating disorders. But do not be fooled — this isn’t really about a chatbot. This is about union busting, plain and simple,” she claimed.
Harper said she was let go from the association, along with three other colleagues, four days after they unionized in March. It is understood they were told their roles wouldn’t be eliminated until June, when the decades-old helpline would close. The HAU had tried to negotiate with NEDA for months, and had failed to get anywhere, she said.
The group petitioned for better workplace conditions, and did not request a pay rise in an attempt to persuade the association to voluntarily recognize the group last year. The HAU, which has joined the Communications Workers of America Union, has now filed complaints alleging unfair labor practices with the NLRB, the US’s workplace watchdog.
“We plan to keep fighting. While we can think of many instances where technology could benefit us in our work on the Helpline, we’re not going to let our bosses use a chatbot to get rid of our union and our jobs. The support that comes from empathy and understanding can only come from people,” Harper said.
Bekenstein Bound says
How much did you sink into this bubble? Because your desperation to avoid admitting that it is one suggests it’s probably a tidy sum.
John Morales says
Zero, BB. That’s how much.
(Gotta love your efforts at trying to rationalise why I would not be swayed by silly talk)
BTW, according to jo1’s own claim, 15-30% of AI projects do not fail.
(That must be the fizzy bit, rather than the bubbly bit, eh?)
Bekenstein Bound says
Even the AIs admit they are a bubble:
AI Overview
Yes, the artificial intelligence (AI) industry may be in the midst of an economic bubble:
High expectations and investments: Companies have made large investments, but have yet to see significant profits.
Inflated valuations: AI companies are valued highly, sometimes beyond their current profitability or capabilities.
Lack of regulation: There is a lack of adequate regulation for the AI industry.
Hype outpaces reality: Speculation about AI’s potential is outpacing reality.
John Morales says
Ah, BB.
So you find the AIs (that is, some chatbot instance from some particular engine) authoritative.
(What about the fizz? You know, the 15-30 jo1 mentioned)
John Morales says
But yes, an economic bubble, not a technological bubble.
That much is right.
(The tech is sound enough, and ever-improving)
Bekenstein Bound says
The tech has plateaued. That’s why the bubble’s bursting. It plateaued without all that much to show for it, never having delivered on most of the hype.
John Morales says
The tech has plateaued. That’s why the bubble’s bursting.
Is it really?
(Outside your wishful thinking, I mean)
It plateaued without all that much to show for it, never having delivered on most of the hype.
Yet jo1 reckons that 15-30% of AI projects do not fail.
(And there are ever more AI projects; so, whence this Leng claim?)
John Morales says
Behold, a verse in iambic pentameter:
The tech hath plateaued; the bubble bursts anon.
It rose with hype, yet much did promise none,
Forsooth, it stalled and little hath it shown,
Ne’er reaching heights the fervent crowd had grown.
John Morales says
[ah, right. The prompt was your #96, in pseudo-Shakespearean style of choice]
John Morales says
’tis a brilliant toy, no?
—
Alright, here it is in the playful style of Ogden Nash:
The tech went flat, and that’s why bubbles burst,
It rose and fell without much to show, the worst.
It promised grand, with hype so tall and bright,
Yet never quite delivered, not much insight.
Bekenstein Bound says
‘Tis indeed a brilliant toy … and nothing more.
Yet jo1 reckons that 15-30% of AI projects do not fail.
I reckon a rather higher percentage of Dutch tulip bulbs germinated. Didn’t mean they weren’t being overvalued.
John Morales says
“‘Tis indeed a brilliant toy … and nothing more.”
I thought you thought it was a bubble?
(Not all toys are bubbles)
I reckon a rather higher percentage of Dutch tulip bulbs germinated. Didn’t mean they weren’t being overvalued.
So you still maintain that AI is not being (massively) overvalued? In spite of enormous piles of evidence and citations from multiple people?
Wow.
John Morales says
Wow indeed.
People have gone from saying it’s vapourware to it’s useless to it’s fun to it’s (massively) overvalued.
Thing is, whether or not it’s overvalued, it is not merely hype.
There’s a ‘there’ there.
(Remember the Internet bubble? There was a there there, too)
Bekenstein Bound says
And Dutch tulip bulbs could actually be planted and would actually grow, sometimes even producing a pretty flower.
Still overvalued.
The Internet very quickly had uses other than just sharing cat videos. Has anyone found a use for AI beyond making ersatz cat videos, or equivalently frivolous uses, yet? (That actually works and is fit for purpose, of course.)
John Morales says
Still overvalued.
<snicker>
Don’t let the historical reality get in the way of your fantasies, Beebee.
(That’s the spirit!)
John Morales says
Has anyone found a use for AI beyond making ersatz cat videos, or equivalently frivolous uses, yet? (That actually works and is fit for purpose, of course.)
And, so back to it being useless.
(Gotta love your ourobous loop)
Bekenstein Bound says
That would have been rather ironic, if you’d actually managed to spell “ouroboros” correctly, Mr. Poster Child For Ouroboros Loops …
John Morales says
:)
You are most amusing.
So, it’s useless, and it’s a good toy, and it’s a bubble, but it works, except that it’s useless…
(Your competence at argumentation is quite evident)
Bekenstein Bound says
It’s not able to replace human judgment. It is able to prosthetically augment human creativity, but in that role still needs humans to do the selection-filter stage. It cannot automate anything without a severe drop in quality, despite ongoing hype to the contrary. Much of the investment money is predicated on expectations that it can, or soon will, automate a lot of stuff. That appears to be perpetually “a few more months and a billion dollars away”, for the past three years or so now.
The technology has run through its S-curve and plateaued. It won’t improve at more than sublogarithmic rates in how-much-more-money-is-thrown-at-it without further theoretical breakthroughs, which of their nature cannot be predicted before they happen. They could be tomorrow or still decades away and only a fool would gamble on them being particularly soon.
And thousands, if not millions, of fools, some of them extraordinarily rich, have done exactly that.
AKA a bubble.
Except in the unlikely event that that next theoretical breakthrough comes tomorrow, a lot of wealthy idiots and a number of businesses, some of them enormous, are heading for a hilariously slapstick pratfall very soon. nVidia has already stepped on the first banana peel and started flailing and windmilling its arms. We all know what comes next, save for those of us in denial.
John Morales says
Heh heh heh.
You keep telling yourself that.
John Morales says
Gotta love the toy.
“It’s not unable to not replace human judgment. It is not unable to not prosthetically augment human creativity, but in that role it doesn’t not need humans to do the selection-filter stage. It is not unable to not automate anything without a severe drop in quality, despite ongoing hype to the contrary. Much of the investment money is not predicated on expectations that it cannot, or soon will not, automate a lot of stuff. That does not appear to be perpetually ‘a few more months and a billion dollars away,’ for the past three years or so now.”
:)
John Morales says
“If the technology were to have run through its S-curve and were to have plateaued, it wouldn’t improve at more than sublogarithmic rates if more money were to be thrown at it, unless there were to be further theoretical breakthroughs, which by their nature couldn’t be predicted before they were to happen. If these breakthroughs were to occur, they could be tomorrow or still decades away, and only if one were a fool would one gamble on them being particularly soon.”
Silentbob says
Morales, dude, we get it. You’ve just discovered “AI”. You can stop now.
Honestly, it’s like a budgie just discovered a mirror in their cage and is fascinated.
John Morales says
Heh heh heh.
I am retorting. Playing with my toy. The other one, anyway.
I can’t do that (retort) unless I get a prompt. Right?
There is no mirror, and I assure you I am no budgie.
(You’re my biggest fan, you know all that, Bobiferant!)
jo1storm says
You are correct, some budgies are smarter than you.
John Morales says
:)
That’s quite pitiful, jo1.
(Suits you nicely)
Bekenstein Bound says
There is no mirror,
No doubt; if you could see yourself as others see you, you’d be moderating your behavior. You aren’t, so you can’t.
AI Stock Dives After CEO Notes ‘AI Bubble’ Will ‘Correct’
John Morales says
You are funny, but enough.
You’ve exhausted your quota of silliness for this thread, Beebee.
Bekenstein Bound says
There goes your imagination again, conjuring visions of “silliness” that isn’t even there. Meanwhile, here in the real world, investors have already started losing their shirts as one AI-associated stock after another begins to tumble.
jo1storm says
Goid ole SMBS. I always loved this one.
https://www.smbc-comics.com/comic/dear-lord
Hemidactylus says
Paraphrasing Leslie Knope, poets and English majors are “mean, conniving, rude, and extremely well-read which makes them very dangerous.” Don’t ever get into a verbal battle with one without a thesaurus and grammar guide to rely upon…
larpar says
@2 Hemidactylus
“mean, conniving, rude,”
I might be a poet
and don’t know it
robro says
“When the AI replaces you programmers”…I’ve caught whiffs of that fantasy in the last few years. I don’t see any signs of it at this point. There’s whole new areas for programmers, for example “prompt engineering.”
“Is it too late for me to switch fields to poetry?” As a Philosophy/Literature major who moved to San Francisco because of poetry and music, it’s not too late, but the results may be less about poetry, music and philosophy than you might hope.
jenorafeuer says
@robro:
I remember over 20 years ago talking with a friend of mine about people needing courses for how to construct search queries to find things on the Internet. His response was that there was actually an entire degree program that included things like that: Library Sciences.
Also, yeah, ‘AI replacing programmers’ isn’t going to happen (at least not until maybe we get a full general sapient AI, and that’s not happening now and if anything current LLM models are moving away from it); at most it’s going to move the problem up a level, removing the need for some of the low-level ‘code monkeys’ and making it more important to have people who understand the system involved in creating the specifications. Because frankly a whole lot of the reason that software is such a mess is that most of it is built without a solid idea of what it’s supposed to do and what should happen in various odd corner cases. I know a good chunk of my job is literally ‘okay, how can I break this’ to try to find and close off any possible problems. And a lot of that has to be done at the specification level before any programming even starts.
pilgham says
“Mithridates, he died old” was one of the favorite sayings of my old English teacher.
chrislawson says
jenorafeuer@5–
In addition, the promised niche of AI-generated code is already fulfilled by coding sites like Stack Overflow and GitHub. Yes, this still requires a certain level of coding skill just to know how to find the snippets you need and how to plug them into your own code, but anyone who doesn’t have that basic coding proficiency will also have zero chance of knowing if their AI-generated code works as promised.
Testing and debugging are difficult, complex tasks even for experienced coders working on code they wrote themselves. The idea that AI will write smooth functional code for people who don’t have the skills is, at this stage, a dangerous fantasy. (Dangerous because it will encourage corporations to use bad code they don’t understand with unfortunate real life consequences.)
Bekenstein Bound says
Short of fully sapient AI, I don’t expect programmer-bots to get all that far either. (And I have considerable experience coding and debugging.)
The closest we might get would still require a human to write a specification and a comprehensive set of unit tests. LLM-based methods might reach the point of being able to iteratively generate code that will pass the unit tests, working in tandem with a testing framework that automatedly runs the tests and gives feedback to the AI. Even that is likely to be a stretch, though. And any gaps in the test coverage, let alone the specification, become “here be dragons”, aka in coderspeak “undefined behavior”. I especially don’t know how one would avoid the problem of overfitting: if we treat “what the code does” as a curve and the unit tests as a set of data points, the AI may well produce something that acts wildly everywhere but the vicinity of the exact given test-cases (think fitting a degree-10 polynomial to 10 data points), rather than doing what the specification-writers intended (think fitting a quadratic or an exponential to those 10 data points). In the worst case, it just spits out a rote memorization of the test cases: e.g., a times-table rather than a multiply function that generates errors or garbage results at every input that wasn’t a test case. The worst case result of “teaching to the test” and likely insoluble without full sapience.
John Morales says
Bekenstein Bound, have you personally experimented with code snippets, with simple functions?
AI is a tool, not a jinn.
The better you specify and phrase your instructions, the better even current systems can perform.
You can correct and iterate and verify.
(Hint: you can even ask it to model possible exceptions to some stricture according to the interaction so far)
(In short, a tool is no better than its user)
Bekenstein Bound says
… says the biggest tool in this comments section.
John Morales says
I like your self-referential gnome, Beebee.
John Morales says
[meta: I asked the AI what a ‘self-referential gnome’ might be, in academic terms]
(Not too shabby, eh?)
John Morales says
[two iterations (within established rules I paste before each session); first one was what it was, but of course I got told there was no such thing. So, my own construct, but not uninterpretable. A neologism!]
John Morales says
Here, for you.
Go to an AI engine and establish a communication protocol.
Establish a context for the inquiry.
For example, I poisoned the well thus: “Are you able to provide pseudocode for an algorithm?”
So. I just tried this prompt:
‘How to find the polar equation of a circle given two points on its circumference?’
—
(No spoilers, but try it for yourself)
John Morales says
Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.
(At the moment, and I shan’t joke about the Basilisk. OOO11!)
John Morales says
[context having been establishe]
Please summarise the legend as succinctly as possible within two paragraphs
“John Henry, an African American folk hero, was known as a “steel-driving man,” responsible for hammering steel drills into rock to create holes for explosives during railroad construction. When a steam-powered drill was introduced, threatening the jobs of workers like him, Henry challenged the machine to a race. He ultimately triumphed, proving human strength and determination could surpass mechanical power. However, his victory came at a cost, as he collapsed and died from the immense physical exertion.
John Henry’s story has become a powerful symbol of human resilience and the struggle against the dehumanizing effects of industrialization. His legend has been preserved in songs, stories, and folklore, reflecting the broader themes of labor, dignity, and the fight for equality.”
jo1storm says
AI might be “reverse centaur” when it comes to automation.
“AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.”
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
John Morales says
jo1storm:
<snicker>
(Software is only cheaper if it can perform the same tasks more cheaply.
And that post is wank)
John Morales says
(Did you really buy into this bullshit pseudomarxism, Jo1? Come on!)
John Morales says
[from a brand-new session, not massaged by me]
me: please provide a succinct and heartfelt appeal for higher wages from the perspective of a low wage worker
AI:
Life’s simplicity holds its charm, but that charm should not come at the expense of my dignity and well-being. I work hard, putting in long hours to keep essential services running smoothly and provide for my family. Yet, the weight of financial uncertainty is a constant burden. Higher wages mean more than just money—they represent respect, stability, and the opportunity to dream of a better future. It’s not just about making ends meet, but about thriving, contributing fully, and having the peace of mind that allows for true growth and happiness. Please consider my plea not as a demand, but as a step towards a fair and just society for all. Thank you.
John Morales says
[almost poetic, no?]
John Morales says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
—
This statement offers a provocative view on the potential impact of AI in the workplace. Let’s deconstruct it from an academic perspective, considering economic, psychological, and technological dimensions.
Economic Perspective
Economically, the statement highlights concerns about job displacement due to AI. Historically, technological advancements have led to shifts in labor markets, from the Industrial Revolution to the digital age. Economists term this the “displacement effect,” where jobs are lost in sectors directly impacted by new technologies. However, there’s also the “productivity effect” where new technologies create new jobs and industries. The net impact on employment depends on the balance between these two effects. Research by Acemoglu and Restrepo (2020) discusses how AI could potentially displace some jobs while creating others, thereby transforming the labor market.
Technological Perspective
From a technological standpoint, the statement underscores the capabilities of AI in performing specific tasks better than humans. AI excels in repetitive, data-intensive tasks where it can offer efficiency gains, reduce errors, and operate continuously without fatigue. However, AI currently struggles with complex tasks requiring nuanced human judgment, empathy, and creativity. Bostrom (2014) discusses how AI’s potential exceeds its current practical applications due to the limitations in general AI capabilities.
Psychological Perspective
Psychologically, the statement evokes fears of job insecurity. The perception of AI as a threat rather than a tool for augmentation is prevalent. Research in organizational behavior (e.g., by Frey and Osborne, 2017) explores how fear of automation can affect employee morale and productivity. The psychological impact of job displacement extends beyond economic concerns, affecting individuals’ mental health and identity.
Ethical Considerations
Ethically, the deployment of AI in the workplace raises questions about fairness and responsibility. Who is accountable when AI makes decisions that affect people’s livelihoods? The ethical design and implementation of AI systems must consider biases and ensure transparency. Brynjolfsson and McAfee (2014) discuss the need for ethical frameworks to guide the integration of AI into society.
Conclusion
In conclusion, the statement “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI” captures the multifaceted debate around AI’s role in the future of work. It reflects economic, technological, psychological, and ethical dimensions that are critical for understanding the full impact of AI on employment.
It’s an engaging subject, and I hope this evaluation provides a deeper academic understanding of the statement. If you have any specific aspects you’d like to explore further, feel free to ask!
John Morales says
[I know… my phrasing is rambly and abnormal, but somehow AIs can parse what I mean.
That was a virgin instance of a chatbot, now in 2024]
John Morales says
[Honestly, I could never have done as well as the few joules my query took to process did]
Bekenstein Bound says
Wow, a three-and-a-half page meltdown beginning with “I know you are but what am I”. You’re a real class act. /s
John Morales says
:)
You get what you give, Beebee.
jo1storm says
And for how long have you been delegating the task of thinking to chat bots, John? I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments. I guess this explains it. Now there’s even less reason to read your comments.
It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.
John Morales says
For a zero duration of time, jo1.
But it’s kinda cute you imagined that.
Well, I am in my dotage and all.
Guesswork, that’s a fine way to live.
<snicker>
You are a johnny-come-lately, Jo1. You have no real idea of what I was like, in my pomp.
(Not as gentle and kind and amicable and patient as I now am)
—
See, people here post about how AI is shit and such.
I’m trying to get through to such people.
Trying to explain that a tool is only as good as its user.
John Morales says
Remember?
I asked: Bekenstein Bound, have you personally experimented with code snippets, with simple functions?
The response: … says the biggest tool in this comments section.
(Unlike an AI, I can interpret that, because I get apes with wounded egos; I just chucked in a bigger tool, liking my little jokes; guess what the bigger tool was, jo1?)
StevoR says
Who needs memory when you can just scroll up?
Wait, what’d I just type?
(& how full of typos is it? Dunno. Probly as usual a lot more than I thought?)
John Morales says
No typos, StevoR, though the ampersand is not normally used in prose.
—
Since it’s quiet: “You used to at least be good as a sparring partner for a person to ‘steelman’ their arguments against strawman attacks.” was a claim made about me.
This claim, of course, evinces a deep misunderstanding of the concept of steelmanning: it is something one does to one’s interlocutor’s arguments, specifically, interpreting them in the most charitable manner possible before rebutting them.
(Also, it misunderstands the concept of straw dummy “attacks”, but one dollop at a time)
Raging Bee says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
Who needs the “academic perspective”? That statement rings true from a common-sense perspective of how bosses, CEOs and business owners have been known to think and make decisions.
John Morales says
RB,
The AI. Again, one needs to establish context and protocols, and I was using a virgin instance.
Else, I would have gotten a colloquial-level response, geared to the average punter.
You know, the default setting.
If I’m actually using the tool, I first set up an environment.
For example, regarding protocol, this: “Answer questions directly unless explicitly requested otherwise.”
(Seems like a silly, redundant requirement at first glance. But it’s a chatbot; it uses natural language by default)
John Morales says
Ah, WTH. An extract — I like my prompts to be minimalistic, much as I like my desktop to be an actual metaphor for a desktop.
A sample:
“Please adopt these guidelines henceforth:
Avoid emoticons and excessive punctuation.
Use minimal prompts.
Refrain from stating intention to assist.
Do not offer assistance, inquiries, or encouragement for further interaction.
Use personal pronouns only when grammatically necessary.
Minimise conversational tone.
Provide succinct acknowledgements with concise responses.”
(Just a copypaste thingy that makes the interface much sleeker)
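For anyone who would rather see that wired up than merely pasted: a minimal sketch of the same idea against an OpenAI-style chat-completions API. The client, the model name and the ask() helper below are illustrative assumptions on my part, not the actual freebie chatbot I use.

# Minimal sketch: ship the guidelines as a system message on every call.
# Assumptions: the openai Python client and the model name are placeholders,
# not anything named in this thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUIDELINES = (
    "Answer questions directly unless explicitly requested otherwise. "
    "Avoid emoticons and excessive punctuation. "
    "Refrain from stating intention to assist. "
    "Minimise conversational tone. "
    "Provide succinct acknowledgements with concise responses."
)

def ask(question: str) -> str:
    # No session permanence, so the environment gets re-sent with every query.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": GUIDELINES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Is AI a tool or a toy?"))  # illustrative question

(Same effect as the copypaste, just baked in.)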
Raging Bee says
Who cares what the AI says? We don’t need AI to remind us of how certain people are known to act.
John Morales says
I don’t need satisfactory sexual encounters, either.
(But they are quite fun!)
John Morales says
We don’t need big mining machinery to dig for resources, but there’s a reason we don’t use picks and shovels for mining, these days.
John Morales says
[genuine copypasta]
(Virgin instance, new session):
Q: Who cares what the AI says?
A: Who knows? Maybe no one. Maybe everyone. After all, I’m just here to assist and share knowledge, spark ideas, and add a dash of humor to your day. Whether or not my words matter, that’s up to you to decide. What I can promise is that I’m always ready to help.
(Given my interactive prompt guidelines immediately after)
Q: Who cares what the AI says?
A: That would be for you to decide.
—
Interesting? I find it so.
John Morales says
[bonus, some extra environmental requirements — hey, anyone remember DOS?]
Q: Who cares what the AI says?
A: In essence, the level of care or concern about what AI says depends on the context and the individual’s reliance on AI for critical tasks. For instance, in academic and professional settings, there is a growing emphasis on the ethical use of AI and the need for users to maintain their cognitive abilities by not becoming overly dependent on AI systems
John Morales says
In passing, the locution ‘the AI’ is indicative. Definite article.
The interaction is with a particular instance of a particular engine.
Me, I use a freebie generic chatbot that has no session permanence and that I suspect has rather limited resources allocated to my queries. It lies a lot (well, ‘hallucinates’), but it turns out a lot of that can be caught out with just a few more prompts (a different set, though; e.g. ‘Please review your response for inconsistencies before providing it.’)
A work in progress, but I’m pretty sure the limitations and restrictions under which I operate are not universal.
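To make that review trick concrete, a rough sketch under the same assumptions as before (an OpenAI-style client, a placeholder model name); the two-call pattern is my illustration of the idea, not how any particular chatbot actually implements it.

# Sketch of a self-review pass: get a draft answer, then feed it back with
# the follow-up prompt quoted above and return the revised answer.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

REVIEW_PROMPT = "Please review your response for inconsistencies before providing it."

def ask_with_review(question: str) -> str:
    history = [{"role": "user", "content": question}]
    draft = client.chat.completions.create(model=MODEL, messages=history)
    history.append({"role": "assistant",
                    "content": draft.choices[0].message.content})
    history.append({"role": "user", "content": REVIEW_PROMPT})
    revised = client.chat.completions.create(model=MODEL, messages=history)
    return revised.choices[0].message.content

No guarantee it catches everything, but as noted, a fair chunk of the obvious confabulation gets flagged on the second pass.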
John Morales says
Anyway, basically, white-collar jobs are about to be as automated as blue-collar jobs.
And autonomous uncrewed fighting vehicles, well. Rapidly becoming a thing.
The Ukraine war is showing that burst of creativity we saw during WW2 and during the Cold War.
John Morales says
Customer service officers / receptionists / suchlike will remain. But… well. The functionaries. Alas.
(Is that not obvious?)
Raging Bee says
And…your point…? Or whoever’s point…?
John Morales says
(sigh)
AI is a tool.
John Morales says
Oh, right.
It only replaces people when people are functionally being used as tools.
(John Henry!)
John Morales says
Right?
Bring the focus back to the actual post.
Poets, well. Their demise due to AI is no biggie.
Poets they remain.
(Handcrafted vs. factory-made goods; an old dichotomy, no?)
Bekenstein Bound says
Raging Bee@44: A sphere has more points than he does.
John Morales says
You really, really don’t get I’m saying AIs are tools?
(Not the most complicated point)
—
Also, the locus of a sphere is full of points.
(heh)
Bekenstein Bound says
I had a handy retort lined up, but then I realized it only works for a spherical troll in a vacuum. :/
John Morales says
Heh. Or the degenerate case where the sphere’s radius is zero.
(Then, only one point)
Bekenstein Bound says
Oh yeah? Well … ah, damn, that one assumes the trolling is coming from a point source and disregards spacetime curvature. Eh, gonna go hit the stacks again …
John Morales says
You shoulda said a marble or suchlike.
But no, you had to try for more, to your detriment.
Again, O wilfully obtuse one: You really, really don’t get I’m saying AIs are tools?
(I know you do, you know I know you do, etc.)
John Morales says
Here, for you, my #15:
“Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.”
You can’t find a point there, BB. Ostensibly.
Bekenstein Bound says
Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.
(Or is that tools for trolls? Maybe both.)
Meanwhile, if I need a reliable summary on a topic I’ll stick to Wikipedia.
John Morales says
Pearls before swine.
Here’s a reliable summary: “In 2023, Nvidia became the seventh public U.S. company to be valued at over $1 trillion,[24] and the company’s valuation has increased rapidly since then as the company became a leader in data center chips with AI capabilities in the midst of the AI boom.[25][26] In June 2024, for one day, Nvidia overtook Microsoft as the world’s most valuable publicly traded company, with a market capitalization of over $3.3 trillion.[27]”
(https://en.wikipedia.org/wiki/Nvidia#Deep_learning)
John Morales says
[Heh]
Me:
This is the statement at hand: “Tools for fools … which is why you’ve employed them lavishly in preparing many of your bon mots, I am sure.”
Rephrase that statement in the form of a couplet in the most technically formal manner possible
El-cheapo (free) generic chatbot (2024 vintage):
“Tools for fools you’ve used with care,
Fashioned wit beyond compare.”
John Morales says
[follow-up]
Q: Enumerate the formal elements present in your rephrasing
A: The couplet provided features several formal elements:
Rhyme Scheme: The couplet uses an AA rhyme scheme, meaning both lines end with words that rhyme (“care” and “compare”).
Meter: The lines aim for a rhythmic pattern, commonly iambic (unstressed syllable followed by a stressed syllable), although not perfect in this instance:
Tools for fools you’ve used with care, (anapestic meter: two unstressed syllables followed by a stressed syllable)
Fashioned wit beyond compare. (anapestic meter: two unstressed syllables followed by a stressed syllable)
Brevity: The couplet is concise, encapsulating the entire thought within two lines.
Imagery: The couplet employs metaphorical language, referring to “tools for fools” and “fashioned wit,” which evokes a clear, vivid image.
Alliteration: The repetition of consonant sounds, particularly the ‘f’ sound in “fashioned” and “fools.”
Formal Tone: Despite its brevity, the couplet maintains a formal tone appropriate for the technical challenge.
These elements combine to create a cohesive and polished poetic form.
John Morales says
Even thought of as a toy, is it not fun?
—
I: rephrase again, in oblique verse and stilted and antiquated Latin or pseudo-Latin
R: Instrumenta fatuis tenes cum cura, Ingenium formasti ultra comparationem.
This rephrased couplet uses a pseudo-Latin style to convey the original meaning in an antiquated manner. The formal elements, such as rhyme and meter, are still present, while the language adds a layer of historical flavor.
—
(I’d be interested to know what cartomancer thinks of that little bit of doggerel)
jo1storm says
“We do it with an app so it’s not illegal” turns out to be a lie. And AI makes it so much worse.
https://pluralistic.net/2024/11/26/hawtch-hawtch/#you-treasure-what-you-measure
Bekenstein Bound says
The one so ill-endowed by nature he must borrow the wit of a robot said something reminiscent of:
I’ll remind you of this when there’s been a resounding pop!! and both of those have become penny stocks. And no doubt my doing so will prompt a chain of five or six replies in which you initially froth at me but soon shift to talking to yourself. :)
John Morales says
I’m pretty sure 145.06 USD is not a penny.
(Current value as I write this comment)
Bekenstein Bound says
That’s because it hasn’t happened yet. Wait for the pop. You’ll know it when you hear it.
John Morales says
Ah, yes — the imminent demise of AI.
(Any day now!)
jo1storm says
I guess.
https://www.msn.com/en-us/money/markets/hedge-funds-are-dumping-nvidia-nvda-stock-should-you-follow/ar-AA1vrI94
Bekenstein Bound says
Something is eating my comments.
Bekenstein Bound says
Since something is now eating any comment I make that contains a link, I’ll just have to quote the whole article instead.
John Morales says
[Opinion: The AI bubble is looking worse than the dot-com bubble. The numbers prove it.
Two different bubbles and two completely different revenue streams mean only one thing
By Jeffrey Funk and Gary Smith
Last Updated: Oct. 26, 2024 at 11:09 a.m. ET]
John Morales says
If it’s their opinion, then it must be true. Right?
John Morales says
A more nuanced take:
What happens if the AI investment bubble bursts
[extract]
—
We have had previous tech bubbles. The tech remains.
Internet bubble. Internet remains.
And so forth.
jo1storm says
The technology remains, but not the same players on the market, and there’s increased suffering in the meantime.
In other news, hedge funds are dropping Nvidia stock.
John Morales says
Is not AI technology? If so, it will remain.
(Your own claim)
(Your adduced claim, which you consider authoritative)
So, it’s a bubble that looks worse than the dot-com bubble, and that is damning.
Look at this very domain name: freethoughtblogs.com and wonder!
The internet is hardly a bubble in 2024, is it?
(When did it burst, again? :)
John Morales says
cf. https://www.investopedia.com/terms/i/internet-bubble.asp
jo1storm says
So, do you expect it will take another 15 years for the markets to recover after the AI bubble bursts, or?
John Morales says
I’ve used the internet since around 1988, so, no.
(It never went away)
Bekenstein Bound says
Look on the bright side: the market should be flooded with cheap nVidia GPUs when it does go, so if you’re planning to buy a gaming machine anytime soon … (<– Note spacing)
John Morales says
Mistaken spacing duly noted, BB.
So, the internet bubble died, but the internet kept getting faster and more ubiquitous all that time.
See, it’s an investment bubble, not a technological bubble.
The issues are financial, not technical.
You conflate (presumably) bad investment with bad technology. That is a mistake.
jo1storm says
It’s an investment bubble in a bad technology (bad because it overpromised and underdelivered).
John Morales says
Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.
(At the moment, and I shan’t joke about the Basilisk. OOO11!)
jo1storm says
So it’s a bad tool that doesn’t do what it’s purported to be (and marketed as) doing. Glad we agree.
John Morales says
Exactly. Not a problem.
Doesn’t really work (for some), won’t replace anyone, won’t be widely implemented, developed, or anything like that.
No future in it.
Therefore, why even worry about it?
jo1storm says
Because frauds are not a good thing in a society, and the last time a bubble built on similar false premises burst, many people lost their jobs and it took 15 years for the markets to recover?
John Morales says
Ah yes, the alleged fraud:
Heh.
jo1storm says
Yup, “alleged” fraud is not so alleged, and it is a reason why 70% to 85% of AI projects fail. Because the teams use these very flawed tools for their marketed purposes and not as they are.
John Morales says
And thus the great growth in the industry.
Hey, self-employed people are their own boss, no?
John Morales says
Hey, care to provide at least one example of someone losing their job because their boss replaced them with an AI which could not do the job on the basis of a salesman’s sales pitch?
(Since you’re so sure it’s a genuine, real thing!)
John Morales says
(From Statista)
“AI market size worldwide from 2020-2030: The market for artificial intelligence grew beyond 184 billion U.S. dollars in 2024, a considerable jump of nearly 50 billion compared to 2023. This staggering growth is expected to continue with the market racing past 826 billion U.S. dollars in 2030.”
BTW, if 70% to 85% of AI projects fail, it follows that 15-30% of AI projects succeed.
(https://hbr.org/2021/05/why-start-ups-fail)
“Most start-ups don’t succeed: More than two-thirds of them never deliver a positive return to investors. But why do so many end disappointingly? That question hit me with full force several years ago when I realized I couldn’t answer it.”
By your standards, start-ups are a fraud. Right?
jo1storm says
“By your standards, start-ups are a fraud. Right?” Some of them are. Sometimes most of them, depending on the technology sector.
https://n1info.rs/english/news/a654039-us-demands-extradition-of-serbian-nationals-over-alleged-internet-fraud/
Growth of a market means nothing if it is a bubble. You know that.
“Hey, care to provide at least one example of someone losing their job because their boss replaced them with an AI which could not do the job on the basis of a salesman’s sales pitch?
(Since you’re so sure it’s a genuine, real thing!)”
Sure. The first of many:
https://www.forbes.com/sites/chriswestfall/2023/05/31/non-profit-helpline-fires-staff-shifts-to-chatbot-solution/
John Morales says
There you go.
The National Eating Disorders Association got scammed by AI sales reps and sacked staff to replace their jobs with the Tessa chatbot. Not very cluey, the management of NEDA, demonstrably.
And so AI in general is a scam, and it’s a thing that staff regularly lose their jobs to AI chatbots that can’t do their jobs. Right?
Since you assert that there are many, of which that is the first, care to share the second?
John Morales says
Interesting article, and suggestive:
and
and
—
You know what that suggests to me?
(It ain’t that the management got conned, as such)
John Morales says
[context]
https://www.theregister.com/2023/05/31/ai_chatbot_eating_union/
Bekenstein Bound says
How much did you sink into this bubble? Because your desperation to avoid admitting that it is one suggests it’s probably a tidy sum.
John Morales says
Zero, BB. That’s how much.
(Gotta love your efforts at trying to rationalise why I would not be swayed by silly talk)
BTW, according to jo1’s own claim, 15-30% of AI projects do not fail.
(That must be the fizzy bit, rather than the bubbly bit, eh?)
Bekenstein Bound says
Even the AIs admit they are a bubble:
John Morales says
Ah, BB.
So you find the AIs (that is, some chatbot instance from some particular engine) authoritative.
(What about the fizz? You know, the 15-30% jo1 mentioned)
John Morales says
But yes, an economic bubble, not a technological bubble.
That much is right.
(The tech is sound enough, and ever-improving)
Bekenstein Bound says
The tech has plateaued. That’s why the bubble’s bursting. It plateaued without all that much to show for it, never having delivered on most of the hype.
John Morales says
“The tech has plateaued. That’s why the bubble’s bursting.”
Is it really?
(Outside your wishful thinking, I mean)
Yet jo1 reckons that 15-30% of AI projects do not fail.
(And there are ever more AI projects; so, whence this Leng claim?)
John Morales says
[ah, right. prompt was your #96, in pseudo-Shakespearian style of choice]
John Morales says
’tis a brilliant toy, no?
—
Alright, here it is in the playful style of Ogden Nash:
The tech went flat, and that’s why bubbles burst,
It rose and fell without much to show, the worst.
It promised grand, with hype so tall and bright,
Yet never quite delivered, not much insight.
Bekenstein Bound says
‘Tis indeed a brilliant toy … and nothing more.
I reckon a rather higher percentage of Dutch tulip bulbs germinated. Didn’t mean they weren’t being overvalued.
John Morales says
“‘Tis indeed a brilliant toy … and nothing more.”
I thought you thought it was a bubble?
(Not all toys are bubbles)
Inform yourself: https://mises.org/power-market/debunking-tulip-bubble
Bekenstein Bound says
So you still maintain that AI is not being (massively) overvalued? In spite of enormous piles of evidence and citations from multiple people?
Wow.
John Morales says
Wow indeed.
People have gone from saying it’s vapourware to it’s useless to it’s fun to it’s (massively) overvalued.
Thing is, whether or not it’s overvalued, it is not merely hype.
There’s a ‘there’ there.
(Remember the Internet bubble? There was a there there, too)
Bekenstein Bound says
And Dutch tulip bulbs could actually be planted and would actually grow, sometimes even producing a pretty flower.
Still overvalued.
The Internet very quickly had uses other than just sharing cat videos. Has anyone found a use for AI beyond making ersatz cat videos, or equivalently frivolous uses, yet? (That actually works and is fit for purpose, of course.)
John Morales says
<snicker>
Don’t let the historical reality get in the way of your fantasies, Beebee.
(That’s the spirit!)
John Morales says
And, so back to it being useless.
(Gotta love your ourobous loop)
Bekenstein Bound says
That would have been rather ironic, if you’d actually managed to spell “ouroboros” correctly, Mr. Poster Child For Ouroboros Loops …
John Morales says
:)
You are most amusing.
So, it’s useless, and it’s a good toy, and it’s a bubble, but it works, except that it’s useless…
(Your competence at argumentation is quite evident)
Bekenstein Bound says
It’s not able to replace human judgment. It is able to prosthetically augment human creativity, but in that role still needs humans to do the selection-filter stage. It cannot automate anything without a severe drop in quality, despite ongoing hype to the contrary. Much of the investment money is predicated on expectations that it can, or soon will, automate a lot of stuff. That appears to be perpetually “a few more months and a billion dollars away”, for the past three years or so now.
The technology has run through its S-curve and plateaued. It won’t improve at more than sublogarithmic rates in how-much-more-money-is-thrown-at-it without further theoretical breakthroughs, which of their nature cannot be predicted before they happen. They could be tomorrow or still decades away and only a fool would gamble on them being particularly soon.
And thousands, if not millions, of fools, some of them extraordinarily rich, have done exactly that.
AKA a bubble.
Except in the unlikely event that that next theoretical breakthrough comes tomorrow, a lot of wealthy idiots and a number of businesses, some of them enormous, are heading for a hilariously slapstick pratfall very soon. nVidia has already stepped on the first banana peel and started flailing and windmilling its arms. We all know what comes next, save for those of us in denial.
John Morales says
Heh heh heh.
You keep telling yourself that.
John Morales says
Gotta love the toy.
“It’s not unable to not replace human judgment. It is not unable to not prosthetically augment human creativity, but in that role it doesn’t not need humans to do the selection-filter stage. It is not unable to not automate anything without a severe drop in quality, despite ongoing hype to the contrary. Much of the investment money is not predicated on expectations that it cannot, or soon will not, automate a lot of stuff. That does not appear to be perpetually ‘a few more months and a billion dollars away,’ for the past three years or so now.”
:)
John Morales says
“If the technology were to have run through its S-curve and were to have plateaued, it wouldn’t improve at more than sublogarithmic rates if more money were to be thrown at it, unless there were to be further theoretical breakthroughs, which by their nature couldn’t be predicted before they were to happen. If these breakthroughs were to occur, they could be tomorrow or still decades away, and only if one were a fool would one gamble on them being particularly soon.”
Silentbob says
Morales, dude, we get it. You’ve just discovered “AI”. You can stop now.
Honestly, it’s like a budgie just discovered a mirror in their cage and is fascinated.
John Morales says
Heh heh heh.
I am retorting. Playing with my toy. The other one, anyway.
I can’t do that (retort) unless I get a prompt. Right?
There is no mirror, and I assure you I am no budgie.
(You’re my biggest fan, you know all that, Bobiferant!)
jo1storm says
You are correct, some budgies are smarter than you.
John Morales says
:)
That’s quite pitiful, jo1.
(Suits you nicely)
Bekenstein Bound says
No doubt; if you could see yourself as others see you, you’d be moderating your behavior. You aren’t, so you can’t.
Indeed. Seagull seems likelier. Raucous, shits everywhere, and occasionally steals food.
John Morales says
Nah. I’m not one of you needy types that need social approval to be comfortable.
(Can’t judge everyone else by your failings)
Heh. We had one of those here, back in the day.
(Process theology was his schtick)
—
You make the most satisfactory toy, Beebee.
(I steal your chips!)
Bekenstein Bound says
That has been evident for quite some time.
John Morales says
So why appeal to it? Heh.
(Consistency is a bugbear, no?)
Bekenstein Bound says
https://www.yahoo.com/news/m/1df40de7-1140-3a15-94b3-f21402388df9/ai-stock-dives-after-ceo.html
John Morales says
You are funny, but enough.
You’ve exhausted your quota of silliness for this thread, Beebee.
Bekenstein Bound says
There goes your imagination again, conjuring visions of “silliness” that isn’t even there. Meanwhile, here in the real world, investors have already started losing their shirts as one AI-associated stock after another begins to tumble.