Are you into cubes? I made a reference sheet / infographic of the cube’s symmetry group, and every single subgroup. There are 98 subgroups, so… I hope you like lots of cubes.
Bloodless Board Games, Covert Colonialisms | Strange Matters – Kyle Flannery analyzes colonialism in a variety of euro-style board games, from Settlers of Catan to Spirit Island and Root. Really great article.
We are huge fans of Spirit Island, which is not just a thoughtful response to the colonialism endemic to its genre, but also a top-tier strategy game purely on its merits. That said, the game should hardly be the final word on anticolonialism in board games, and it has its share of thematic issues. My husband remarked that because the game centers on the narrative of an indigenous group fighting back against colonists and winning, it needs to give them tools they did not have in real life. Here, that tool is aid from magical spirits (a.k.a. the players); the natives themselves are left with relatively little agency. (I do think the article exaggerates how much the spirits actually kill the natives, though. By design, you never want to kill them.)
Was I Rejected from Jury Duty for being too smart? | Rebecca Watson – Rebecca Watson looks into the common contention that critical thinkers often get booted from jury selection. It seems that lawyers do sometimes strike jurors for being “smart”, but it’s a case-by-case strategy rather than a systematic practice. For what it’s worth, I’ve served on a jury before, and was surprised by the high level of education among the other jurors.
Striped Box, designed by me
Once a year, I run a little origami class for kids on behalf of someone I know. As a self-imposed constraint, I always teach modular origami. It’s hard to find simple modular origami models that kids can complete in a reasonable amount of time!
I’d long wanted to make a modular origami box, and a big one at that, so it could hold other origami inside. So I bought some colored A4 paper and looked around for a simple box design. None of the existing designs were quite to my liking, so I made my own. There’s no lid for this box, because we’re keeping it simple. I have folding diagrams if you’d like to try it yourself.
This is the final part of my series reading “Attention is all you need”, the foundational paper that invented the Transformer model, used in large language models (LLMs). In the first part, we covered some background, and in the second part we reviewed the architecture of the Transformer model. In this part, we’ll discuss the authors’ arguments in favor of Transformer models.
Why Transformer models?
The authors argue in favor of Transformers in section 4 by comparing them to the existing alternatives, namely recurrent neural networks (RNNs) and convolutional neural networks (CNNs). The comparison comes down to per-layer computation, parallelizability, and how easily distant positions in a sequence can interact.
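To make the parallelism point concrete, here is a toy NumPy sketch (my own illustration, with made-up dimensions and random weights, not code from the paper). The recurrent layer has to walk through the sequence one position at a time, while self-attention covers all pairs of positions in a few matrix multiplies:

```python
import numpy as np

n, d = 6, 4                          # toy sequence length and dimension
rng = np.random.default_rng(0)
X = rng.normal(size=(n, d))          # one embedded input sequence

# Recurrent layer: each hidden state depends on the previous one, so the
# n steps below cannot run in parallel (O(n) sequential operations).
W_h, W_x = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = np.zeros(d)
for t in range(n):                   # step t must finish before step t+1
    h = np.tanh(h @ W_h + X[t] @ W_x)

# Self-attention layer: every position attends to every other position in
# a single batch of matrix multiplies (O(1) sequential operations).
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d)        # all n * n pairwise interactions at once
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
out = weights @ V                    # (n, d), computed with no time loop
```

This is the heart of the argument: on modern hardware, the attention version keeps every position busy at once, at the cost of doing work that grows with the square of the sequence length.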
This article is a continuation of my series reading “Attention is all you need”, the foundational paper that invented the Transformer model, which is used in large language models (LLMs).
In the first part, I covered general background. This part will discuss Transformer model architecture, basically section 3 of the paper. I aim to make this understandable to non-technical audiences, but this is easily the most difficult section. Feel free to ask for clarifications, and see the TL;DRs for the essential facts.
The encoder and decoder architecture
The first figure of the paper shows the architecture of their Transformer model:
Figure 1 from “Attention is all you need”
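To make the figure more concrete, here is a schematic of the encoder and decoder stacks in plain NumPy. This is my own simplification, not the authors’ code: multi-head attention, masking, positional encodings, dropout, and learned weights are all left out, but the “Add & Norm” residual pattern and the stacking structure match the figure:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy model dimension (the paper uses d_model = 512)

def layer_norm(x, eps=1e-6):
    # "Add & Norm" uses layer normalization over the feature dimension.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def attention(q_in, kv_in):
    # Single-head attention: queries from q_in, keys/values from kv_in.
    # Fresh random weights stand in for learned parameters here.
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    Q, K, V = q_in @ Wq, kv_in @ Wk, kv_in @ Wv
    s = Q @ K.T / np.sqrt(d)
    w = np.exp(s - s.max(-1, keepdims=True))
    return (w / w.sum(-1, keepdims=True)) @ V    # softmax(s) @ V

def ffn(x):
    # Position-wise feed-forward network, applied to each position
    # independently, with a ReLU in between the two layers.
    W1, W2 = rng.normal(size=(d, 2 * d)), rng.normal(size=(2 * d, d))
    return np.maximum(0, x @ W1) @ W2

def encoder_layer(x):
    x = layer_norm(x + attention(x, x))          # self-attention + Add & Norm
    return layer_norm(x + ffn(x))                # feed-forward + Add & Norm

def decoder_layer(y, memory):
    y = layer_norm(y + attention(y, y))          # (masked) self-attention
    y = layer_norm(y + attention(y, memory))     # cross-attention to encoder
    return layer_norm(y + ffn(y))

src = rng.normal(size=(5, d))   # 5 source positions, already embedded
tgt = rng.normal(size=(3, d))   # 3 target positions, already embedded
memory = src
for _ in range(6):              # the paper stacks N = 6 identical layers
    memory = encoder_layer(memory)
out = tgt
for _ in range(6):
    out = decoder_layer(out, memory)
print(out.shape)                # (3, 4): one vector per target position
```

Note that the decoder differs from the encoder only in its extra cross-attention sub-layer, which lets each target position consult the encoder’s output.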
Large Language Models (LLMs) are a hot topic today, but few people know even the basics of how they work. I work in data science, but I also didn’t really know how they work. In this series, I’d like to go through the foundational paper that defined the Transformer model on which LLMs are based.
“Attention is all you need” by Ashish Vaswani et al. from the Proceedings of the 31st International Conference on Neural Information Processing Systems, December 2017. https://dl.acm.org/doi/10.5555/3295222.3295349 (publicly accessible)
This series aims to be understandable to a non-technical audience, but will discuss at least some of the technical details. If the technical parts are too difficult, please ask for clarification in the comments. You’re also welcome to just read the TL;DR parts, which should contain the essential points.
Five- and Six-pointed Stars. Designer unknown.
Back in 2019, we had a small wedding celebration. We didn’t actually hold a wedding reception, and that’s a story I’ve already told. As decorations for the celebration, I made a dozen giant paper cranes (actually Tsuru Roses) from wrapping paper; you can see a photo of those at the bottom of that story. I also made 50 origami stars from foil paper and holographic paper, seen above.
We’ve officially reached our 5th anniversary! I am not inclined to be sentimental, but I am grateful for how incredibly fortunate we are.