jo1storm says
Good ole SMBC. I always loved this one.
https://www.smbc-comics.com/comic/dear-lord
Hemidactylus says
Paraphrasing Leslie Knope, poets and English majors are “mean, conniving, rude, and extremely well-read which makes them very dangerous.” Don’t ever get into a verbal battle with one without a thesaurus and grammar guide to rely upon…
larpar says
@2 Hemidactylus
“mean, conniving, rude,”
I might be a poet
and don’t know it
robro says
“When the AI replaces you programmers”…I’ve caught whiffs of that fantasy in the last few years. I don’t see any signs of it at this point. There’s whole new areas for programmers, for example “prompt engineering.”
“Is it too late for me to switch fields to poetry?” As a Philosophy/Literature major who moved to San Francisco because of poetry and music, it’s not too late, but the results may be less about poetry, music and philosophy than you might hope.
jenorafeuer says
@robro:
I remember over 20 years ago talking with a friend of mine about people needing courses for how to construct search queries to find things on the Internet. His response was that there was actually an entire degree program that included things like that: Library Sciences.
Also, yeah, ‘AI replacing programmers’ isn’t going to happen (at least not until maybe we get a full general sapient AI, and that’s not happening now and if anything current LLM models are moving away from it); at most it’s going to move the problem up a level, removing the need for some of the low-level ‘code monkeys’ and making it more important to have people who understand the system involved in creating the specifications. Because frankly a whole lot of the reason that software is such a mess is that most of it is built without a solid idea of what it’s supposed to do and what should happen in various odd corner cases. I know a good chunk of my job is literally ‘okay, how can I break this’ to try to find and close off any possible problems. And a lot of that has to be done at the specification level before any programming even starts.
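jenorafeuer’s “okay, how can I break this” point can be sketched with a toy example (hypothetical code, not from any commenter): a function whose happy path passes every obvious check, but whose behaviour in a corner case the spec never mentioned was left undefined.

```python
def clamp(value, lo, hi):
    """Clamp value into [lo, hi] -- naive code written from a vague spec."""
    return max(lo, min(value, hi))

# The happy path works as everyone expects:
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(42, 0, 10) == 10

# "Okay, how can I break this?" -- the spec never said what should happen
# when lo > hi. The naive code silently returns lo instead of raising an
# error, a corner case only found by deliberately trying to break it.
print(clamp(5, 10, 0))  # -> 10, almost certainly not what the caller wanted
```

Pinning down that lo > hi case belongs in the specification, before any programming starts.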
pilgham says
“Mithridates, he died old” was one of the favorite sayings of my old English teacher.
chrislawson says
jenorafeuer@5–
In addition, the promised niche of AI-generated code is already fulfilled by coding sites like Stack Overflow and GitHub. Yes, this still requires a certain level of coding skill just to know how to find the snippets you need and how to plug them into your own code, but anyone who doesn’t have that basic coding proficiency will also have zero chance of knowing if their AI-generated code works as promised.
Testing and debugging are difficult, complex tasks even for experienced coders working on code they wrote themselves. The idea that AI will write smooth functional code for people who don’t have the skills is, at this stage, a dangerous fantasy. (Dangerous because it will encourage corporations to use bad code they don’t understand with unfortunate real life consequences.)
Bekenstein Bound says
Short of fully sapient AI, I don’t expect programmer-bots to get all that far either. (And I have considerable experience coding and debugging.)
The closest we might get would still require a human to write a specification and a comprehensive set of unit tests. LLM-based methods might reach the point of being able to iteratively generate code that will pass the unit tests, working in tandem with a testing framework that automatedly runs the tests and gives feedback to the AI. Even that is likely to be a stretch, though. And any gaps in the test coverage, let alone the specification, become “here be dragons”, aka in coderspeak “undefined behavior”. I especially don’t know how one would avoid the problem of overfitting: if we treat “what the code does” as a curve and the unit tests as a set of data points, the AI may well produce something that acts wildly everywhere but the vicinity of the exact given test-cases (think fitting a degree-10 polynomial to 10 data points), rather than doing what the specification-writers intended (think fitting a quadratic or an exponential to those 10 data points). In the worst case, it just spits out a rote memorization of the test cases: e.g., a times-table rather than a multiply function that generates errors or garbage results at every input that wasn’t a test case. The worst case result of “teaching to the test” and likely insoluble without full sapience.
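Bekenstein Bound’s worst case (a rote times-table that passes every test yet computes nothing) can be made concrete with a toy sketch. This is hypothetical illustrative code, not from the thread:

```python
# The unit tests: products for all pairs of single-digit operands.
TEST_CASES = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_memorized(a, b):
    """'Teaching to the test': a stored times-table, no generalization."""
    return TEST_CASES[(a, b)]  # KeyError on any input outside the tests

def multiply_general(a, b):
    """What the specification-writers actually intended."""
    return a * b

# Both implementations pass the entire test suite...
assert all(multiply_memorized(a, b) == multiply_general(a, b)
           for (a, b) in TEST_CASES)

# ...but only one of them works beyond the test coverage.
print(multiply_general(12, 12))  # -> 144
try:
    multiply_memorized(12, 12)
except KeyError:
    print("memorized version has no answer off the test set")
```

No finite test suite can distinguish the two on its own inputs, which is exactly the overfitting worry: the tests are data points, and “what the code does” between them is unconstrained.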
John Morales says
Bekenstein Bound, have you personally experimented with code snippets, with simple functions?
AI is a tool, not a jinn.
The better you specify and phrase your instructions, the better even current systems can perform.
You can correct and iterate and verify.
(Hint: you can even ask it to model possible exceptions to some stricture according to the interaction so far)
(In short, a tool is no better than its user)
Bekenstein Bound says
… says the biggest tool in this comments section.
John Morales says
I like your self-referential gnome, Beebee.
John Morales says
[meta: I asked the AI what a ‘self-referential gnome’ might be, in academic terms]
In academic terms, a “self-referential gnome” might be a clever play on words, combining the idea of “gnomic utterances” (short, wise, and often enigmatic sayings) with self-referentiality. This would imply a saying or expression that refers to itself in a concise, witty, or mysterious manner. It’s like a statement that describes its own nature in a pithy and ambiguous way.
(Not too shabby, eh?)
John Morales says
[two iterations (within established rules I paste before each session); first one was what it was, but of course I got told there was no such thing. So, my own construct, but not uninterpretable. A neologism!]
John Morales says
Here, for you.
Go to an AI engine and establish a communication protocol.
Establish a context for the inquiry.
For example, I poisoned the well thus: “Are you able to provide pseudocode for an algorithm?”
So. I just tried this prompt:
‘How to find the polar equation of a circle given two points on its circumference?’
—
(No spoilers, but try it for yourself)
John Morales says
Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.
(At the moment, and I shan’t joke about the Basilisk. OOO11!)
John Morales says
[context having been established]
Please summarise the legend as succinctly as possible within two paragraphs
“John Henry, an African American folk hero, was known as a “steel-driving man,” responsible for hammering steel drills into rock to create holes for explosives during railroad construction. When a steam-powered drill was introduced, threatening the jobs of workers like him, Henry challenged the machine to a race. He ultimately triumphed, proving human strength and determination could surpass mechanical power. However, his victory came at a cost, as he collapsed and died from the immense physical exertion.
John Henry’s story has become a powerful symbol of human resilience and the struggle against the dehumanizing effects of industrialization. His legend has been preserved in songs, stories, and folklore, reflecting the broader themes of labor, dignity, and the fight for equality.”
jo1storm says
AI might be “reverse centaur” when it comes to automation.
“AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.”
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
John Morales says
jo1storm: Bosses are Bizarro-world Marxists. Like Marxists, your boss’s worldview is organized around the principle that every dollar you take home in wages is a dollar that isn’t available for executive bonuses, stock buybacks or dividends. That’s why your boss is insatiably horny for firing you and replacing you with software. Software is cheaper, and it doesn’t advocate for higher wages.
<snicker>
(Software is only cheaper if it can perform the same tasks more cheaply.
And that post is wank)
John Morales says
(Did you really buy into this bullshit pseudomarxism, Jo1? Come on!)
John Morales says
[from a brand-new session, not massaged by me]
me: please provide a succinct and heartfelt appeal for higher wages from the perspective of a low wage worker
AI:
Life’s simplicity holds its charm, but that charm should not come at the expense of my dignity and well-being. I work hard, putting in long hours to keep essential services running smoothly and provide for my family. Yet, the weight of financial uncertainty is a constant burden. Higher wages mean more than just money—they represent respect, stability, and the opportunity to dream of a better future. It’s not just about making ends meet, but about thriving, contributing fully, and having the peace of mind that allows for true growth and happiness. Please consider my plea not as a demand, but as a step towards a fair and just society for all. Thank you.
John Morales says
[almost poetic, no?]
John Morales says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
—
This statement offers a provocative view on the potential impact of AI in the workplace. Let’s deconstruct it from an academic perspective, considering economic, psychological, and technological dimensions.
Economic Perspective
Economically, the statement highlights concerns about job displacement due to AI. Historically, technological advancements have led to shifts in labor markets, from the Industrial Revolution to the digital age. Economists term this the “displacement effect,” where jobs are lost in sectors directly impacted by new technologies. However, there’s also the “productivity effect” where new technologies create new jobs and industries. The net impact on employment depends on the balance between these two effects. Research by Acemoglu and Restrepo (2020) discusses how AI could potentially displace some jobs while creating others, thereby transforming the labor market.
Technological Perspective
From a technological standpoint, the statement underscores the capabilities of AI in performing specific tasks better than humans. AI excels in repetitive, data-intensive tasks where it can offer efficiency gains, reduce errors, and operate continuously without fatigue. However, AI currently struggles with complex tasks requiring nuanced human judgment, empathy, and creativity. Bostrom (2014) discusses how AI’s potential exceeds its current practical applications due to the limitations in general AI capabilities.
Psychological Perspective
Psychologically, the statement evokes fears of job insecurity. The perception of AI as a threat rather than a tool for augmentation is prevalent. Research in organizational behavior (e.g., by Frey and Osborne, 2017) explores how fear of automation can affect employee morale and productivity. The psychological impact of job displacement extends beyond economic concerns, affecting individuals’ mental health and identity.
Ethical Considerations
Ethically, the deployment of AI in the workplace raises questions about fairness and responsibility. Who is accountable when AI makes decisions that affect people’s livelihoods? The ethical design and implementation of AI systems must consider biases and ensure transparency. Brynjolfsson and McAfee (2014) discuss the need for ethical frameworks to guide the integration of AI into society.
Conclusion
In conclusion, the statement “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI” captures the multifaceted debate around AI’s role in the future of work. It reflects economic, technological, psychological, and ethical dimensions that are critical for understanding the full impact of AI on employment.
It’s an engaging subject, and I hope this evaluation provides a deeper academic understanding of the statement. If you have any specific aspects you’d like to explore further, feel free to ask!
John Morales says
[I know… my phrasing is rambly and abnormal, but somehow AIs can parse what I mean.
That was a virgin instance of a chatbot, now in 2024]
John Morales says
[Honestly, I could never have done as well as the few joules my query took to process did]
Bekenstein Bound says
Wow, a three-and-a-half page meltdown beginning with “I know you are but what am I”. You’re a real class act. /s
John Morales says
:)
You get what you give, Beebee.
jo1storm says
And for how long have you been delegating the task of thinking to chat bots, John? I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments. I guess this explains it. Now there’s even less reason to read your comments.
It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.
John Morales says
And for how long have you been delegating the task of thinking to chat bots, John?
For a zero duration of time, jo1.
But it’s kinda cute you imagined that.
I have noticed the quality of your comments dropping for a while and your “percentage of rambling incoherence” increasing in your arguments.
Well, I am in my dotage and all.
I guess this explains it. Now there’s even less reason to read your comments.
Guesswork, that’s a fine way to live.
It’s a shame, really. You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.
<snicker>
You are a johnny-come-lately, Jo1. You have no real idea of what I was like, in my pomp.
(Not as gentle and kind and amicable and patient as I now am)
—
See, people here post about how AI is shit and such.
I’m trying to get through to such people.
Trying to explain that a tool is only as good as its user.
John Morales says
Remember?
I asked: Bekenstein Bound, have you personally experimented with code snippets, with simple functions?
The response: … says the biggest tool in this comments section.
(Unlike an AI, I can interpret that, because I get apes with wounded egos; I just chucked in a bigger tool, liking my little jokes; guess what the bigger tool was, jo1?)
StevoR says
Who needs memory when you can just scroll up?
Wait, what’d I just type?
(& how full of typos is it? Dunno. Probly as usual a lot more than I thought?)
John Morales says
No typos, StevoR, though the ampersand is not normally used in prose.
—
Since it’s quiet: “You used to at least be good as a sparring partner for a person to “steelman” their arguments against strawman attacks.” was a claim made about me.
This claim, of course, evinces a deep misunderstanding of the concept of steelmanning — it is something one does to arguments one’s interlocutor has made, specifically, to interpret them in the most charitable manner possible before rebutting them.
(Also, it misunderstands the concept of straw dummy “attacks”, but one dollop at a time)
Raging Bee says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
Who needs the “academic perspective?” That statement rings true from a common-sense perspective of how bosses, CEOs and business owners have been known to think and make decisions.
John Morales says
RB,
Who needs the “academic perspective?”
The AI. Again, one needs to establish context and protocols, and I was using a virgin instance.
Else, I would have gotten a colloquial-level response, geared to the average punter.
You know, the default setting.
If I’m actually using the tool, I first set up an environment.
For example, regarding protocol, this: “Answer questions directly unless explicitly requested otherwise.”
(Seems like a silly, redundant requirement at first hand. But it’s a chatbot, it uses natural language by default)
John Morales says
Ah, WTH. An extract — I like my prompts to be minimalistic, much as I like my desktop to be an actual metaphor for a desktop.
A sample:
“Please adopt these guidelines henceforth:
Avoid emoticons and excessive punctuation.
Use minimal prompts.
Refrain from stating intention to assist.
Do not offer assistance, inquiries, or encouragement for further interaction.
Use personal pronouns only when grammatically necessary.
Minimise conversational tone.
Provide succinct acknowledgements with concise responses.”
(Just a copypaste thingy that makes the interface much sleeker)
Who cares what the AI says? We don’t need AI to remind us of how certain people are known to act.
John Morales says
I don’t need satisfactory sexual encounters, either.
(But they are quite fun!)
John Morales says
We don’t need big mining machinery to dig for resources, but there’s a reason we don’t use picks and shovels for mining, these days.
John Morales says
[genuine copypasta]
(Virgin instance new session):
Q: Who cares what the AI says?
A: Who knows? Maybe no one. Maybe everyone. After all, I’m just here to assist and share knowledge, spark ideas, and add a dash of humor to your day. Whether or not my words matter, that’s up to you to decide. What I can promise is that I’m always ready to help.
(Given my interactive prompt guidelines immediately after)
Q: Who cares what the AI says?
A: That would be for you to decide.
—
Interesting? I find it so.
John Morales says
[bonus, some extra environmental requirements — hey, anyone remember DOS?]
Q: Who cares what the AI says?
A: In essence, the level of care or concern about what AI says depends on the context and the individual’s reliance on AI for critical tasks. For instance, in academic and professional settings, there is a growing emphasis on the ethical use of AI and the need for users to maintain their cognitive abilities by not becoming overly dependent on AI systems
John Morales says
In passing, the locution ‘the AI’ is indicative. Definite article.
The interaction is with a particular instance of a particular engine.
Me, I use a freebie generic chatbot that has no session permanence and that I suspect has rather limited resources allocated to my queries. It lies a lot — well, ‘hallucinates’, but it turns out a lot of that can be caught out with but a few more prompts (A different set, but; e.g. ‘Please review your response for inconsistencies before providing it.’)
A work in progress, but I’m pretty sure the limitations and restrictions under which I operate are not universal.
John Morales says
Try again, a bit more dispassionately, without following any specific style
In realms of code and crafted words, AI assists but does not replace.
Programmers bring insight and creativity to technology.
Poets infuse emotion and meaning into language.
Both roles require human intuition and experience, something AI cannot replicate.
John Morales says
Anyway, basically, white-collar jobs are about to be as automated as blue-collar jobs.
And autonomous uncrewed fighting vehicles, well. Rapidly becoming a thing.
Ukraine war is showing that burst of creativity we saw during WW2 and during the cold war.
John Morales says
Customer service officers / receptionists / suchlike will remain. But… well. The functionaries. Alas.
It only replaces people when people are functionally being used as tools.
(John Henry!)
John Morales says
Right?
Bring the focus back to the actual post.
Poets, well. Their demise due to AI is no biggie.
Poets they remain.
(Handcrafted vs. factory-made goods; an old dichotomy, no?)
Bekenstein Bound says
Raging Bee@44: A sphere has more points than he does.
John Morales says
You really, really don’t get I’m saying AIs are a tool?
(Not the most complicated point)
—
Also, the locus of a sphere is full of points.
(heh)
Bekenstein Bound says
I had a handy retort lined up, but then I realized it only works for a spherical troll in a vacuum. :/
John Morales says
Heh. Or the degenerate case where the sphere’s radius is zero.
(Then, only one point)
Bekenstein Bound says
Oh yeah? Well … ah, damn, that one assumes the trolling is coming from a point source and disregards spacetime curvature. Eh, gonna go hit the stacks again …
John Morales says
You shoulda said a marble or suchlike.
But no, you had to try for more, to your detriment.
Again, O wilfully obtuse one: You really, really don’t get I’m saying AIs are a tool?
(I know you do, you know I know you do, etc.)
John Morales says
Here, for you, my #15:
“Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.”
This claim, of course, evinces a deep misunderstanding of the concept of steelmanning — it is something one does to arguments one’s interlocutor has made, specifically, interpreting them in the most charitable manner possible before rebutting them.
(Also, it misunderstands the concept of straw dummy “attacks”, but one dollop at a time)
Raging Bee says
[please evaluate “AI CAN’T do your job, but an AI salesman CAN convince your boss to fire you and replace you with AI.” from an academic perspective, at not lower than tertiary level of education]
Who needs the “academic perspective?” That statement rings true from a common-sense perspective of how bosses, CEOs and business owners have been known to think and make decisions.
John Morales says
RB,
The AI. Again, one needs to establish context and protocols, and I was using a virgin instance.
Else, I would have gotten a colloquial-level response, geared to the average punter.
You know, the default setting.
If I’m actually using the tool, I first set up an environment.
For example, regarding protocol, this: “Answer questions directly unless explicitly requested otherwise.”
(Seems like a silly, redundant requirement at first glance. But it’s a chatbot; it uses natural language by default)
John Morales says
Ah, WTH. An extract — I like my prompts to be minimalistic, much as I like my desktop to be an actual metaphor for a desktop.
A sample:
“Please adopt these guidelines henceforth:
Avoid emoticons and excessive punctuation.
Use minimal prompts.
Refrain from stating intention to assist.
Do not offer assistance, inquiries, or encouragement for further interaction.
Use personal pronouns only when grammatically necessary.
Minimise conversational tone.
Provide succinct acknowledgements with concise responses.”
(Just a copypaste thingy that makes the interface much sleeker)
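For readers curious how such a “copypaste thingy” maps onto actual tooling: many chat APIs let you supply standing guidelines as a system message that precedes every user turn. The sketch below is illustrative only — the message format shown follows a common OpenAI-style convention, and the function name is my own, not anything from the comment above.

```python
# Illustrative only: many chat APIs accept a "system" message that sets
# standing guidelines before any user turn. The role/content field names
# follow a common OpenAI-style convention; providers differ.
GUIDELINES = """Please adopt these guidelines henceforth:
Avoid emoticons and excessive punctuation.
Use minimal prompts.
Refrain from stating intention to assist.
Do not offer assistance, inquiries, or encouragement for further interaction.
Use personal pronouns only when grammatically necessary.
Minimise conversational tone.
Provide succinct acknowledgements with concise responses."""

def build_messages(question: str) -> list[dict]:
    """Prepend the standing guidelines to a single user query."""
    return [
        {"role": "system", "content": GUIDELINES},
        {"role": "user", "content": question},
    ]
```

With a setup like this, the same question asked against a “virgin instance” and against a guided one can produce the very different answers quoted further down the thread.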
Raging Bee says
Who cares what the AI says? We don’t need AI to remind us of how certain people are known to act.
John Morales says
I don’t need satisfactory sexual encounters, either.
(But they are quite fun!)
John Morales says
We don’t need big mining machinery to dig for resources, but there’s a reason we don’t use picks and shovels for mining, these days.
John Morales says
[genuine copypasta]
(Virgin instance new session):
Q: Who cares what the AI says?
A: Who knows? Maybe no one. Maybe everyone. After all, I’m just here to assist and share knowledge, spark ideas, and add a dash of humor to your day. Whether or not my words matter, that’s up to you to decide. What I can promise is that I’m always ready to help.
(Given my interactive prompt guidelines immediately after)
Q: Who cares what the AI says?
A: That would be for you to decide.
—
Interesting? I find it so.
John Morales says
[bonus, some extra environmental requirements — hey, anyone remember DOS?]
Q: Who cares what the AI says?
A: In essence, the level of care or concern about what AI says depends on the context and the individual’s reliance on AI for critical tasks. For instance, in academic and professional settings, there is a growing emphasis on the ethical use of AI and the need for users to maintain their cognitive abilities by not becoming overly dependent on AI systems
John Morales says
In passing, the locution ‘the AI’ is indicative. Definite article.
The interaction is with a particular instance of a particular engine.
Me, I use a freebie generic chatbot that has no session permanence and that I suspect has rather limited resources allocated to my queries. It lies a lot — well, ‘hallucinates’, but it turns out a lot of that can be caught out with but a few more prompts (A different set, but; e.g. ‘Please review your response for inconsistencies before providing it.’)
A work in progress, but I’m pretty sure the limitations and restrictions under which I operate are not universal.
John Morales says
Anyway, basically, white-collar jobs are about to be as automated as blue-collar jobs.
And autonomous uncrewed fighting vehicles, well. Rapidly becoming a thing.
The Ukraine war is showing the same burst of creativity we saw during WW2 and the Cold War.
John Morales says
Customer service officers / receptionists / suchlike will remain. But… well. The functionaries. Alas.
(Is that not obvious?)
Raging Bee says
And…your point…? Or whoever’s point…?
John Morales says
(sigh)
AI is a tool.
John Morales says
Oh, right.
It only replaces people when people are functionally being used as tools.
(John Henry!)
John Morales says
Right?
Bring the focus back to the actual post.
Poets, well. Their demise due to AI is no biggie.
Poets they remain.
(Handcrafted vs. factory-made goods; an old dichotomy, no?)
Bekenstein Bound says
Raging Bee@44: A sphere has more points than he does.
John Morales says
You really, really don’t get I’m saying AIs are a tool?
(Not the most complicated point)
—
Also, the locus of a sphere is full of points.
(heh)
Bekenstein Bound says
I had a handy retort lined up, but then I realized it only works for a spherical troll in a vacuum. :/
John Morales says
Heh. Or the degenerate case where the sphere’s radius is zero.
(Then, only one point)
Bekenstein Bound says
Oh yeah? Well … ah, damn, that one assumes the trolling is coming from a point source and disregards spacetime curvature. Eh, gonna go hit the stacks again …
John Morales says
You shoulda said a marble or suchlike.
But no, you had to try for more, to your detriment.
Again, O wilfully obtuse one: You really, really don’t get I’m saying AIs are a tool?
(I know you do, you know I know you do, etc.)
John Morales says
Here, for you, my #15:
“Good news is that I reckon that just as photography did not replace painters, just as recordings did not replace live music, just like power tools did not replace the workers who use them, these sort of tools will not replace poets.
Main thing to remember is that they are tools.”
You can’t find a point there, BB. Ostensibly.