Targeted Advertising: Good or evil?

I have had some professional experience in marketing. It’s a job, you know? Targeted advertising is a very common data science application. Specifically, I’ve built models that use credit data to decide who receives snail mail. Was this a positive contribution to society? Eh, probably not.
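
For concreteness, here’s a toy sketch of the kind of model I mean (synthetic data, hypothetical features, nothing like a production system): fit a response model on past campaign outcomes, then mail only the top-scoring people.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for credit-bureau features (all hypothetical).
X = rng.normal(size=(1000, 3))  # e.g. credit score, utilization, inquiries
# Pretend these are the outcomes of a past mail campaign.
responded = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=1000)) > 1.0

model = LogisticRegression().fit(X, responded)

# Mail only the people most likely to respond.
scores = model.predict_proba(X)[:, 1]
mail_list = np.argsort(scores)[-100:]  # top 100 by predicted response rate
```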

In the title I ask, “good or evil?”, but obviously most people think the answer is “evil”. I’m not here to convince you that targeted advertising is good actually. But I have a bunch of questions, ultimately trying to figure out: why do we put up with targeted ads?

For the sake of scope, I’m thinking mainly about targeted ads as they appear on social media platforms. And I’m just thinking of ads that try to sell you a commercial product, as opposed to political ads or public service announcements. These ads may be accused of the following problems:

  1. Using personal data that we’d rather keep private.
  2. Psychic pollution: wasting our time and attention, or making us unsatisfied with what we have.
  3. Misleading people into purchasing low quality or overpriced goods.

[Read more…]

LLM error rates

I worked on LLMs, and now I got opinions. Today, let’s talk about when LLMs make mistakes.

On AI Slop

You’ve already heard of LLM mistakes, because you’ve seen them in the news. For instance, some lawyers submitted bogus legal briefs (no, I mean those other lawyers; no, the other ones). Scholarly articles have been spotted with clear ChatGPT conversation markers. And Google recommended putting glue on pizza. People have started calling this “AI Slop”, though maybe the term refers more to image generation than to text? This blog post is focused exclusively on text generation, and mostly on non-creative uses.

[Read more…]

Environmental impact of LLMs

A reader asked: what is the environmental impact of large language models (LLMs)? So I read some articles on the subject, and made comparisons to other technologies, such as video games and video streaming. My conclusion is that the environmental footprint is large enough that we shouldn’t ignore it, but I think people are overreacting.

Pricing

I’m not an expert in assessing environmental impact, but I’ve had some experience assessing compute costs for LLMs. Pricing might be a good proxy for carbon footprint, because it reflects not just energy costs but also the costs of building and operating a data center. My guess is that across many kinds of computational tasks, the carbon footprint per dollar spent is roughly similar. And in my experience, LLMs are far from the most significant computational cost at a typical tech company.
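
To make that proxy logic concrete, here’s a toy calculation. Every number below is a made-up placeholder, not a real figure; the point is only that if footprint scales with spend, a workload’s share of the carbon footprint equals its share of the compute bill.

```python
# Toy sketch of the "price as a carbon proxy" idea. All numbers are
# hypothetical placeholders, purely to illustrate the arithmetic.
KG_CO2_PER_DOLLAR = 0.5  # assumed roughly constant across compute workloads

monthly_compute_spend = {  # hypothetical tech-company compute bill (USD)
    "web serving": 400_000,
    "data warehouse / analytics": 250_000,
    "LLM inference": 30_000,
}

total = sum(monthly_compute_spend.values())
for workload, dollars in monthly_compute_spend.items():
    footprint = dollars * KG_CO2_PER_DOLLAR
    print(f"{workload}: ~{footprint:,.0f} kg CO2/month "
          f"({dollars / total:.0%} of the compute footprint)")
```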

[Read more…]

I worked on LLMs

Generative AI sure is the talk of the town these days. Controversial, is it not? It’s with some trepidation that I disclose that I’ve spent the last six months becoming an expert in large language models (LLMs). It began earlier this year, when I moseyed through the foundational LLM paper.

I’d like to start talking about this more, because I’ve been frustrated with the public conversation. Anti-AI folks and AI enthusiasts alike have weird, impossible expectations of LLMs, while being ignorant of the capabilities LLMs actually have. I’d like to provide a reality check, so that readers can be more informed as they argue about it.

[Read more…]

Let’s Read: Transformer Models, Part 3

This is the final part of my series reading “Attention is all you need”, the foundational paper that invented the Transformer model, used in large language models (LLMs). In the first part, we covered some background, and in the second part we reviewed the architecture of the Transformer model. In this part, we’ll discuss the authors’ arguments in favor of Transformer models.

Why Transformer models?

The authors argue in favor of Transformers in section 4 by comparing them to the main pre-existing options: recurrent neural networks (RNNs) and convolutional neural networks (CNNs).
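
To preview the gist of that comparison, here’s a toy numpy sketch (mine, not the paper’s): an RNN must process tokens one after another, while self-attention relates every pair of tokens with matrix operations that can run in parallel.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4                        # sequence length, model dimension
x = rng.normal(size=(n, d))        # token representations
W = rng.normal(size=(d, d)) * 0.1

# Recurrent: a sequential loop. Step t cannot start until step t-1 finishes,
# and information from token 0 reaches the last token only after n steps.
h = np.zeros(d)
for t in range(n):
    h = np.tanh(x[t] + W @ h)

# Self-attention: every token attends to every other token at once. It is all
# matrix multiplies, so it parallelizes well on GPUs, and any two tokens are
# connected by a path of length 1.
scores = x @ x.T / np.sqrt(d)
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)
out = weights @ x                  # each row is a weighted mix of all tokens
```

The paper summarizes this in its Table 1: a self-attention layer needs O(1) sequential operations and connects any two positions by a path of length O(1), where a recurrent layer needs O(n) of both.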

[Read more…]

Let’s Read: Transformer Models, Part 2

This article is a continuation of my series reading “Attention is all you need”, the foundational paper that invented the Transformer model, which is used in large language models (LLMs).

In the first part, I covered general background. This part will discuss Transformer model architecture, basically section 3 of the paper. I aim to make this understandable to non-technical audiences, but this is easily the most difficult section. Feel free to ask for clarifications, and see the TL;DRs for the essential facts.

The encoder and decoder architecture

The first figure of the paper shows the architecture of their Transformer model:

[Figure 1 from “Attention is all you need”: diagram of the Transformer architecture]
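
For a taste of the core operation those encoder and decoder blocks repeat, here’s a minimal numpy sketch of scaled dot-product attention, the paper’s equation 1: Attention(Q, K, V) = softmax(QKᵀ/√d_k)V.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each query attends to each key
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # weighted average of the values

# Toy self-attention: 4 tokens, 8-dimensional representations, Q = K = V.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # (4, 8): each token is now a context-aware mix of all tokens
```

The full model stacks this operation, with learned projections and multiple “heads”, inside the encoder and decoder blocks shown in the figure.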

[Read more…]

Let’s Read: Transformer Models, Part 1

Large Language Models (LLMs) are a hot topic today, but few people know even the basics of how they work. I work in data science, and even I didn’t really know how they work. In this series, I’d like to go through the foundational paper that defined the Transformer model on which LLMs are based.

“Attention is all you need” by Ashish Vaswani et al. from the Proceedings of the 31st International Conference on Neural Information Processing Systems, December 2017. https://dl.acm.org/doi/10.5555/3295222.3295349 (publicly accessible)

This series aims to be understandable to a non-technical audience, but will discuss at least some of the technical details. If the technical parts are too difficult, please ask for clarification in the comments. You’re also welcome to just read the TL;DR parts, which should contain the essential points.

[Read more…]