The earth was a complicated place for intelligent species a quarter million years ago

When I first heard about Homo naledi, the first question in my head was “How old is it?” It was a hominin, it looked fairly primitive with a small brain the size of a gorilla’s, yet it was found in a mass “grave”, where part of the mystery was how so many dead hominins ended up in this difficult-to-reach, hidden cave system in South Africa. The authors didn’t report a date. Speculation ran from 3 million years old to 300 thousand years old, both dates seeming extreme and unlikely.

Now we have a date: between 236,000 and 335,000 years old. Astonishing.

That’s really young. Furthermore, they’ve found another chamber in the cave network with even more bones.

All indications are that this was a thriving population of little, primitive people.

The bones, remarkably, show few signs of disease or stress from poor development, suggesting that Homo naledi may have been the dominant species in the area at the time. “They are the healthiest dead things you’ll ever see,” said Berger.

Homo naledi stood about 150cm tall fully grown and weighed about 45kg. But it is extraordinary for its mixture of ancient and modern features. It has a small brain and curved fingers that are well-adapted for climbing, but the wrists, hands, legs and feet are more like those found on Neanderthals or modern humans. If the dating is accurate, Homo naledi may have emerged in Africa about two million years ago but held on to some of its more ancient features even as modern humans evolved.

It’s still a mystery how all these bones ended up in the caves. These don’t seem to be ceremonial burials; it’s more like they were chucking their dead down some hole to drop them in a deep cave.

Why wasn’t this machine in my life 35 years ago?

Let me tell you about this miserable year I had in grad school. Judith Eisen and I had figured out that there was this repeating pattern of spinal motoneurons in zebrafish — this was special because it meant that we had a new set of identified neurons, cells that we could name and recognize and come back to in fish after fish, and that had specific locations and targets. I had flippantly suggested that we name them Primary Zebrafish Motoneurons (PZM cells, get it?), but a colleague, Walt Metcalfe, talked me down from that bit of vanity — it is so 19th century to name a cell after yourself, even indirectly — and I came up with the rather more mundane names of CaP, MiP, and RoP, for caudal, middle, and rostral primary motoneurons, for their location within each segment. So yay, interesting result, and it fit well within the overarching project I was working on for my thesis, which was on the development of connectivity in the spinal cord.

Specifically, I was looking at how another famously named neuron, the Mauthner cell, grew an axon down the length of the spinal cord and hooked up to the motor neurons there. Mauthner is a command neuron; when it fires, it sends a signal to one side of the spinal cord, triggering the motoneurons on that side to make all the muscles contract — the fish bends vigorously and quickly to one side as part of an escape response. Finding out that our one named cell, Mauthner, was making synapses on another set of named cells, our primary motoneurons, was an opportunity to look at connectivity in an even more detailed way.

But then my committee asked a really annoying question: how do you know Mauthner is making synapses on CaP? Have you looked? Thus began my miserable year. I said no, but how hard can it be? I’ll just make a few ultrathin sections, look at them in the transmission electron microscope, snap a few pictures, and presto, mission accomplished. Except, of course, I hadn’t done EM work before. Our EM tech, Eric Shabtach, made it look easy.

So I started learning how to fix and section zebrafish embryos for EM. It turns out that was non-trivial. I was working with nasty chemicals, cocktails of paraformaldehyde, glutaraldehyde, and acetaldehyde, which all had to be just right or you’d end up with tissue blown up full of holes. I had to postfix with osmium tetroxide, with all the fun warnings about how just the fumes can fix your corneas. And then I had to master using an ultramicrotome and making glass knives, and cutting those fish just right. There were times I’d get the fixation perfect and then find I’d screwed up on the sectioning, and produced a lot of crap as the knife chattered across the section, or there was a bit of a nick in the blade that gouged furrows across every one. And then the way we got these extremely thin slices into the scope was to scoop them up on these delicate copper grids, and of course every time you were closing in on the synapse you wanted, that section would have the most interesting part fall right on an opaque copper grid wire. Or you’d find that that was the section you lost.

It takes a lot of skill and practice to do electron microscopy well, and it also takes a little luck, at least in the old days, to find the one thing you were looking for. I failed. I struggled for about a year, going in every day and prepping samples and spending hours slicing away at tiny dead embryos embedded in epoxy, before finally giving up and deciding I needed to do stuff that was more immediately successful, because I needed to do this graduation thing.

I still kind of cringe remembering that long fruitless year, but now I can ease my conscience by just telling myself the technology wasn’t yet ready. Here’s a cool new paper, Whole-Brain Serial-Section Electron Microscopy In Larval Zebrafish. They’ve automated the process. Just look at this goddamn machine, it’s beautiful:

Serial sectioning and ultrathin section library assembly for a 5.5 dpf larval zebrafish. a, Serial sections of resin-embedded samples were picked up with an automated tape-collecting ultramicrotome modified for compatibility with larger reels containing enough tape to accommodate tens of thousands of sections. b–c, Direct-to-tape sectioning resulted in consistent section spacing and orientation. Just as a section left the diamond knife (blue), it was caught by the tape. d, After serial sectioning, the tape was divided onto silicon wafers that functioned as a stage in a scanning electron microscope and formed an ultrathin section library. For a series containing all of a 5.5 dpf larval zebrafish brain, ~68 m of tape was divided onto 80 wafers (with ~227 sections per wafer). e, Wafer images were used as a coarse guide for targeting electron microscopic imaging. Fiducial markers (copper circles) further provided a reference for a per-wafer coordinate system, enabling storage of the position associated with each section and, thus, multiple rounds of re-imaging at varying resolutions as needed. f, Low-resolution overview micrographs (758.8×758.8×60 nm³/vx) were acquired for each section to ascertain sectioning reliability and determine the extents of the ultrathin section library. Scale boxes: a, 5×5×5 cm³; b, 1×1×1 cm³; c, 1×1×1 mm³. Scale bars: e, 1 cm; f, 250 µm.

Then they scanned all those tidily organized thin sections into the computer for reconstruction. I am impressed.

We next selected sub-regions within this imaging volume to capture areas of interest at higher resolutions using multi-scale imaging. We first performed nearly isotropic EM imaging by setting lateral resolution to match section thickness over the anterior-most 16,000 sections. All cells are labelled in ssEM, so this volume offers a dense picture of the fine anatomy across the anterior quarter of the larval zebrafish including brain, sensory organs (e.g., eyes, ears, and olfactory pits), and other tissues. Furthermore, this resolution of 56.4×56.4×60 nm³/vx is ~500× greater than that afforded by diffraction-limited light microscopy. The imaged volume spanned 2.28×10⁸ µm³, consisted of 1.12×10¹² voxels, and occupied 2.4 terabytes (TB). In this data, one can reliably identify cell nuclei and track large-calibre myelinated axons. To further resolve densely packed neuronal structures, a third round of imaging at 18.8×18.8×60 nm³/vx was performed to generate a high-resolution atlas specifically of the brain. The resulting image volume spanned 12,546 sections, contained a volume of 5.49×10⁷ µm³, consisted of 2.36×10¹² voxels, and occupied 4.9 TB. Additional acquisition at higher magnifications was used to further inspect regions of interest, to resolve finer axons and dendrites, and to identify synaptic connections between neurons.
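Those numbers hang together, by the way. Here's a back-of-envelope sketch checking the quoted voxel counts and data sizes; the 2-bytes-per-voxel figure is my inference from the stated totals (consistent with 16-bit grayscale), not something stated in the excerpt:

```python
# Sanity-check the ssEM dataset sizes quoted above.
# Assumption: ~2 bytes (16-bit grayscale) per voxel -- inferred, not quoted.

def voxels(volume_um3, vx_nm):
    """Voxel count for a volume in µm³ given per-voxel dimensions in nm."""
    x, y, z = vx_nm
    voxel_um3 = (x * 1e-3) * (y * 1e-3) * (z * 1e-3)
    return volume_um3 / voxel_um3

# Near-isotropic pass: 56.4 x 56.4 x 60 nm per voxel over 2.28e8 µm³
n1 = voxels(2.28e8, (56.4, 56.4, 60))
print(f"{n1:.2e} voxels")          # ~1.2e12, vs 1.12e12 reported
print(f"{n1 * 2 / 1e12:.1f} TB")   # ~2.4 TB at 2 bytes/voxel

# High-resolution brain atlas: 18.8 x 18.8 x 60 nm per voxel over 5.49e7 µm³
n2 = voxels(5.49e7, (18.8, 18.8, 60))
print(f"{n2:.2e} voxels")          # ~2.6e12, vs 2.36e12 reported
```

The raw volume-over-voxel arithmetic lands within about 10% of the reported counts, which is what you'd expect once trimming and section losses are accounted for.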

Thirty-five years ago we were storing most of our image data on VHS tape, and our computers all used floppies with about 100K capacity. I wonder how many floppies we would have needed to store all that? Oh, I did get my very first hard drive about the time I graduated, which held five million bytes. I was very proud.
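The arithmetic on that question is brutal. A quick sketch, assuming "100K" means 100,000 bytes and adding up the two imaging rounds quoted above (2.4 TB + 4.9 TB):

```python
# How many 100K floppies for the ~7.3 TB of EM data quoted above?
# Assumptions: decimal units (100K = 100,000 bytes; 1 TB = 1e12 bytes).
total_bytes = (2.4 + 4.9) * 1e12   # the two imaging rounds combined

floppies = total_bytes / 100_000
print(f"{floppies:,.0f} floppies")                 # 73,000,000

drives = total_bytes / 5e6          # five-million-byte hard drives
print(f"{drives:,.0f} of that first hard drive")   # 1,460,000
```

Seventy-three million floppies, or about a million and a half of that prized first hard drive.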

I was wondering if they actually had the EM section demonstrating the Mauthner-to-CaP synapse. Probably. It’s now such a minor point, already demonstrated elsewhere with multiple techniques, that it isn’t even mentioned. It’s in their data set, though, I’m sure. They’ve reconstructed the entire axon arbor of CaP in serial EM sections.

The position of a caudal primary (CaP) motor neuron in the spinal cord and its innervation of myotome 6 projected onto a reslice through ~2200 serial sections.

2200 sections! I spent a year on that project and probably got half that number. I don’t know whether to cry or steal the data, invent a time machine, and go back and hand myself a photo at the start of that year.

Oh, great — that’s how they acquire a taste for human flesh, you know

One of those forensic research facilities in Texas — the kind where you leave human bodies out to rot to figure out the progression of decay — has discovered deer nibbling on the corpses. Every November I’ve got students skipping class for the hunting season, and now I’m going to just have to excuse them, and encourage them to get out there and slaughter the man-eating monsters…before they get us.

Man-eating deer. Or is it deer eating man?

Guardians of the Galaxy vol. 2 and the taxonomy of aliens

I watched Guardians of the Galaxy, vol. 2 this weekend. It was a fun bit of fluff. I’m also a fan of movies that portray god-like aliens as inherently inimical to humans and evil by nature, and that therefore our purpose, if we have any at all, is to kill gods. And then there are lots of space battles with funky ’70s music and funny one-liners. Groot is adorable, but my favorite character had to be Drax.

But, I have to say, I was also distracted by the horrible science. I know, I know, this is a fantasy story based on a comic book, but I am compelled to judge.

First up, the video game-style space battles. They’re fun to watch, but come on — World War I dogfights and weapons with such high energies that you can use them to carve your way to the center of a planet? And when your ship gets hit by them it might chip the paint but otherwise just bounce off? Also, those streams of little ships in formation getting zapped by the good guys, I recognized those — I played Galaxian in my misspent youth.

Secondly, everything in this galaxy seems so cramped and close-up. “Radio” your coordinates to the galaxy at large, and in minutes hordes of space ships show up to hunt you down. Activate your doomsday device on your remote world and all of the evil death weapons start blossoming simultaneously on worlds separated by a hundred thousand light years. It’s a cartoon, but I miss the idea of the vastness of the universe.

My biggest gripe, though, is with the lazy biology. All these alien races from the far-flung corners of the galaxy, and mostly what they are is humans with different colors of body paint. This is less like a congeries of aliens and more like Burning Man costumery, only with less nudity.

But really, the movie was good mindless fun and I’ll see it again. I confess, though, in slow moments I was thinking about a SF taxonomy of alien universes. And I sort of assembled a preliminary draft in my head, which I’ve now set down in bits on the interweb. Basically I looked at these aliens and thought about how long ago these creatures would have hypothetically shared a common ancestor with Earth humans as a measure of how far out of the box the creators were thinking. The answer is usually not far at all — most aliens are Weird Americans In Space.

Here’s my classification scheme. Please do argue with it.

I. Every alien is human. They might have latex bumps on their forehead or fluorescent purple skin, but any Earth-type person is capable of breeding with them. Also, they tend to conveniently speak English.

These stories cannot comprehend the idea of a different species, and typically portray every distant alien world as having diverged from American culture roughly 100 years before.

II. Every alien is humanoid. No, you can’t mate with them, and probably don’t want to. They don’t speak English, at least, but they do have a vocal apparatus that produces sounds of the same type and range as ours, with concepts that are easily translatable.

These aliens are basically members of our genus, possibly family, and divergence occurred sometime in the Cenozoic, typically within a million years.

III. Every alien is a vertebrate. They have a head, paired eyes, jaws, a small number of limbs. They may be based on Earthly reptiles, for instance, but are often strangely distorted into a bipedal form; faces tend to be flattened and made expressive to human eyes.

Divergence is at the level of class/order, representing maybe 100 million years of evolution.

IVa. Every alien is a member of a terrestrial phylum. One type might be insectoid, another squid-like, another reptilian. Every form fits into a familiar type, although again usually the main characters will be humanoid.

Divergence at the level of the phylum implies maybe 500 million years of independent evolution.

For an interesting take on this category, Russell Powell points out that we seem to constrain ourselves to fixed sets of morphological modules that are only coupled by evolutionary contingencies, so we shouldn’t expect to see Type IVa aliens.

IVb. Every alien is a chimera with characteristics of multiple phyla. Put insectile compound eyes on the face of a humanoid; tentacles on your 4-legged vertebrate iguanoid.

The components might be separated by 500 million years of evolution, but the combination implies some kind of anastomosing lineage with fusion of wildly different species. This doesn’t happen.

V. Every alien clearly has a completely unique evolutionary history and is not in any way related to any Earthly form. There may be some convergence in general form — they may have legs, for instance, for locomotion — but they are completely different in detail — different pattern of joints, for instance, and they don’t necessarily terminate in a radial array of digits.

These represent billions of years of independent evolution from a different starting point.

Aliens like this don’t exist in movies, because they’d be visually disturbing. You know how some people freak out at the sight of spiders? It would be like that for the entire audience, who’d be struggling to interpret what the creature is doing and trying to fit it into a threat/non-threat category. You occasionally find them in science fiction novels, where the author doesn’t have to show you every distressing detail in every scene.

How about some examples?

The Star Trek universe is Type I across the board, unrelentingly vanilla. They even have a totally bullshit rationalization, that all those species are related. Also, the idea that two species could have radically different internal anatomy and physiology (green blood and two hearts in one, red blood and one heart in another) yet still look superficially similar and be able to interbreed is painfully stupid.

Speaking of painfully stupid, James Cameron’s Avatar managed to have a Type I main species (they were just big, blue, long-limbed people) with a visually well-developed background fauna with unique biological characteristics that would never in a billion years have produced the Na’vi.

Star Wars is primarily Type I; almost all the main characters are indistinguishable from Homo sapiens, but there are a few exceptions. Chewbacca is Type II; a few of the background characters, like Admiral Ackbar or Jabba the Hutt, are Type III.

Babylon 5 is an interesting case. Once again, it’s primarily Type I — this is simply a necessity to allow human audiences to identify with the cast. So you have Earth humans plus Centauri, Minbari, and Narn that are basically Type I humans with varying degrees of latex appliances. But then you also have the Shadows, who are Type IVa insect-like aliens, and the Vorlons are the very rare Type V, conveniently hidden away in strange-looking environment suits so you don’t have to see them…and the creators don’t have to portray a truly alien species.

The heptapod aliens in Arrival are space-faring octopuses, putting them squarely in the Type IVa category.

For the horror fans, the Alien xenomorph is Type III. It’s not that alien, sorry. It really relies on its similarity to familiar predatory morphologies to provide the scares. I just wish Cameron would stop fucking the story up with his totally bogus bad evolutionary biology.

The Predator from those movies is Type II. Those are some impressively elaborate mouthparts glued on, but it really is just a standard humanoid with some strange facial prosthetics.

As for the Guardians of the Galaxy series, it’s once again a biologically boring Type I universe where the primary species delineator is, distressingly, skin color. The colors tend to be Day-Glo hues of blues and greens and purples and oranges and gold, and fortunately no one seems to be judging people by the color of their skin, but it’s otherwise completely retro, with aliens that are only a shade different from what we got in Star Trek.

And that’s OK. These movies are for the entertainment of Earth humans, not thought-exercises in alien evolution for the delectation of freakish biologists. Don’t let my obsessions ruin what is definitely a fun movie for you.

One godless thumb up for god-murder, one primate thumb up for humor and action, one chitin-sheathed mucus-oozing appendage down for unimaginative biology, one electromagnetic flux capacitor down for bad physics, one protruding ciliated sensory apparatus emitting fluctuating phase fields radially for zgrarrl!(ptang). Obey the digits that correspond best to your cognitive and perceptual biases.

History will judge evolutionary psychology as the phrenology of our era

I’ve criticized evolutionary psychology more than a few times, and usually my arguments rest on their appallingly bad understanding of the “evolutionary” part of their moniker — proponents all seem to be rank adaptationists with a cartoon understanding of evolution. But what about the “psychology” part? I’ve mentioned at least one dissection of EP by a psychologist in the past, but here’s another one, a paper by the same author, Brad Peters, that explains that evolutionary psychology is poor neurobiology and bad psychology.

The paper points out that EP uses evidence inappropriately, ignores the range of alternative explanations to set up false dichotomies (“if you don’t accept evolutionary psychology, you must also deny evolution!”), plays rhetorical games to dodge questions about its assumptions, and basically is pulling an ideologically distorted version of neuroscience out of its institutional ass.

Evolutionary psychology defines the human mind as comprising innate and domain-specific information-processing mechanisms that were designed to solve specific evolutionary problems of our Pleistocene past. This model of the mind is the underlying blueprint used to engage in the kind of research that characterizes the field: speculating about how these innate mechanisms worked and what kinds of evolutionary problems they solved. But while evolutionary psychologists do engage in research to confirm or disconfirm their hypotheses, the results of even the most rigorous studies have been open to alternative, scientifically valid means of interpretation. What constitutes “evidence” would seem to vary in accordance with the theoretical assumptions of those viewing it. Arguments about, or appeals to, “the evidence” may thus involve little more than theoretical bible-thumping or pleading for others to view the “facts” from their preferred theoretical perspective. When theoretical paradigms are unable to agree on what it is that they are looking at, it reminds us that the data are anything but objective, and gives good reason to question the theoretical blueprints being used. This paper argues that evolutionary psychology’s assumptive definitions regarding the mind are often inconsistent with neurobiological evidence and may neglect very real biological constraints that could place limits on the kinds of hypotheses that can be safely posited. If there are problematic assumptions within evolutionary psychology’s definition of the mind, then we also have reason to question their special treatment of culture and learning, since both are thought to be influenced by modular assumptions unique to the paradigm.
It is finally suggested that the mind can be adequately understood and its activities properly explained without hypothetical appeal to countless genetically pre-specified psychological programs, and in a way that remains consistent with both our neurobiology and neo-Darwinian evolution. While some of these critiques have been previously stated by others, the present paper adds to the discussion by providing a succinct summary of the most devastating arguments while offering new insights and examples that further highlight the key problems that face this field. Importantly, the critiques presented here are argued to be capable of standing their ground, regardless of whether evolutionary psychology claims the mind to be massively or moderately modular in composition. This paper thus serves as a continuation of the debate between evolutionary psychology and its critics. It will be shown how recent attempts to characterize critiques as “misunderstandings” seem to evade or ignore the main problems, while apparent “clarifications” continue to rely on some of the same theoretical assumptions that are being attacked by critics.

Another valid criticism is how evolutionary psychologists seem to be unaware of how the brain actually develops and works. Anybody who has actually studied neurodevelopment will know that plasticity is a hallmark. While genes pattern the overall structure, it’s experience that fine-tunes all the connections.

The current consensus within the neurobiological sciences seems to support a view where much of the brain is thought to be highly plastic and in which an abundance of neural growth, pruning, and differentiation of networks is directly influenced by environmental experience. This is especially the case for secondary, tertiary, and associational areas, which make up the majority of the brain’s neocortex and are primarily involved in the kinds of complex, higher-order, psychological processes that appear to be of greatest interest to experimental psychologists. These particular areas seemingly lack characteristics indicative of innate modularity, though, with experience and use, they may build upon the functional complexity of adjacent primary cortices that perhaps have such characteristics.

I also like that he addresses a common metaphor in EP — floating free of good evidence, much of the field relies on glib metaphors — that we can just treat the brain like it is a computer. It may compute, but it’s not very analogous to what’s going on in your desktop machine or phone. We aren’t made of circuits hard-printed by machines in Seoul; there is a general substrate of capabilities built upon by the experiences of the user. Further, we’re not entirely autonomous but rely in the most fundamental ways on growth and development, sculpted by culture.

We can see the problem from a different perspective using evolutionary psychology’s favored computer analogy. While it is true that humans have some engrained and preprogrammed biological circuits, all evidence would suggest that, unlike modern computers, our environmental experiences can cause these mental circuits to become edited, hi-jacked, intensified or lessened, inhibited, and so on. How else might we explain a person acquiring a phobia of hats, a fetish for shoes, or having an apparent indifference to what might be an evolutionarily relevant danger (e.g., cliff jumping)? If we accept this is true, we must also accept that it becomes difficult to say what might have been there at birth, or instead shaped by common environmental experiences that we all share. Modern computers cannot be re-programmed without a human; they do not function like the human mind. We are the ones who effectively tell computers what the binary ones and zeros of their programming language will represent. We give symbolic meaning to the code, which allows us to even say that computers process information. Now let us turn to the human mind. Evolutionary psychologists want to say that meaning and information are objectively pre-programmed by our inherited biology. However, it would appear that we extract much of our information, and the meaning it contains, from a sociocultural cloud of symbolic representations that belong to a shared human subjectivity, or something Raymond Tallis refers to as the community of minds. Our subjective mental states are thus socioculturally structured and shaped through our reliance on an agreed-upon language and agreed-upon sets of subjective human meanings. The brain is only one part of the picture: it facilitates the mechanistic activities of the mind, but it does not solely cause them. Human meanings, which belong to the collective community of minds, will thus often transcend the underlying mechanisms that represent them.

Wait. If the “evolution” part is crap, and the “psychology” part is bullshit, what’s left in evolutionary psychology to respect?


Peters, BM (2013) Evolutionary psychology: Neglecting neurobiology in defining the mind. Theory & Psychology 23(3) 305–322.

Friday Cephalopod: All we’re missing is the spinach

I was reading this account of an encounter between three cuttlefish — a consort male escorting a female, who is challenged by an intruder — and the story was weirdly familiar.

The intruder’s pupil dilation and arm extension began the first of three brief bouts over the course of about four minutes, each with escalating levels of aggression. The consort male met the initial insult with his own arm extension and — as only color-changing animals like cuttlefish can do — a darkening of his face. Then both males flashed brightly contrasting zebra-like bands on their skin, heightening the war of displays further.

Bout number one would go to the intruder as the consort became alarmed, darkened his whole body, squirted a cloud of ink in the intruder’s face and jetted away.

For more than a minute, the intruder male tried to guard and cozy up to the female, but the consort male returned to try to reclaim his position with a newly darkened face and zebra banding. He inked and jetted around the pair to find an angle to intervene, but the intruder fended him off with more aggressive gestures including swiping at him with that fourth arm. Bout number two again went to the intruder.

Then the intruder crossed a line.

He grabbed the female and tried to position her body to engage in head-to-head mating, but she didn’t exhibit much interest, Allen said.

The intruder’s act brought the consort male charging back into the fray with the greatest aggression yet. He grabbed the intruder and twisted him around in a barrel roll three times, the most aggressive gesture in the cuttlefish arsenal. He also bit the other male. The female, meanwhile, swam out of the fracas.

The intruder fled, chased off by the victorious consort male. Study co-author Roger Hanlon, Brown University professor of ecology and evolutionary biology and senior scientist at the Marine Biological Laboratory in Woods Hole, Mass., moments later observed and filmed the consort swimming with the female. Allen was affiliated with the Brown-MBL Joint Program in Biological and Environmental Sciences while Akkaynak was studying in a joint Massachusetts Institute of Technology-Woods Hole Oceanographic Institution graduate program.

“Male 1 wins the whole thing because we saw him with the female later, and that’s really what matters,” Allen said. “It’s who ends up with her in the end.”

OMG, I thought, that is the plot of every Popeye cartoon ever. Popeye is strolling along with his goyl, Olive Oyl, when Brutus comes along and snatches her away, battering Popeye a few times in the process. Then Popeye makes a spinach-fueled comeback and beats up Brutus.

Read it again with that trope in mind. It’s uncanny.

Finally! A perspective on AI I can agree with!

This Kevin Kelly dude has written a summary that I find fully compatible with the biology. Read the whole thing — it’s long, but it starts with a short summary that is easily digested.

Here are the orthodox, and flawed, premises of a lot of AI speculation.

  1. Artificial intelligence is already getting smarter than us, at an exponential rate.
  2. We’ll make AIs into a general purpose intelligence, like our own.
  3. We can make human intelligence in silicon.
  4. Intelligence can be expanded without limit.
  5. Once we have exploding superintelligence it can solve most of our problems.

That’s an accurate summary of the typical tech dudebro. Read a Ray Kurzweil book; check out the YouTube chatter about AI; look at where venture capital money is going; read some SF or watch a movie about AI. These really are the default assumptions that allow people to think AI is a terrible threat that is simultaneously going to lead to the Singularity and SkyNet. I think (hope) that most real AI researchers aren’t sunk into this nonsense, and are probably more aware of the genuine concerns and limitations of the field, just as most biologists roll their eyes at the magic molecular biology we see portrayed on TV.

And here are Kelly’s summary rebuttals:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

My own comments:

  1. The whole concept of IQ is a crime against humanity. It may have once been an interesting, tentative hypothesis (although even in the beginning it was a tool to demean people who weren’t exactly like English/American psychometricians), but it has long outlived its utility and now is only a blunt instrument to hammer people into a simple linear mold. It’s also even more popular with racists nowadays.

  2. The funny thing about this point is that the same people who think IQ is the bee’s knees also think that a huge inventory of attitudes and abilities and potential is hard-coded into us. Their idea of humanity is inflexible and the opposite of general purpose.

  3. Yeah, why? Why would we want a computer that can fall in love, get angry, crave chocolate donuts, have hobbies? We’d have to intentionally shape the computer mind to have similar predilections to the minds of apes with sloppy chemistry. This might be an interesting but entirely non-trivial exercise for computer scientists, but how are you going to get it to pay for itself?

  4. One species on earth has human-like intelligence, and it took 4 billion years (or 500 million, if you’d rather start the clock at the emergence of complex multicellular life) of evolution to get here. Even in our lineage the increase hasn’t been linear, but in short, infrequent steps. Either intelligence beyond a certain point confers no particular advantage, or increasing intelligence is more difficult and has a lot of tradeoffs.

  5. Ah, the ideal of the Vulcan Spock. A lot of people — including a painfully large fraction of the atheist population — have this idea that the best role model is someone emotionless and robot-like, with a calculator-like intelligence. If only we could all weigh all the variables, we’d all come up with the same answer, because values and emotions are never part of the equation.

It’s a longish article at 5,000 words, but compared to that 40,000-word abomination on AI from WaitButWhy, it’s a reasonable read — and most importantly, unlike that one, it’s actually right.

Done but for all the grading

Scattered throughout this semester, I’ve been discussing my EcoDevo course, Biol 4182, Ecological Development. It’s done now, so I’m just going to make note of a few things that I’d do differently next time around.

  • Fix the squishiness. I envisioned this as more like a graduate level course — a 15 week conversation on ecological development, with a textbook that kept us centered. Assessment was largely subjective, based on students demonstrating their understanding in discussion. I had an oral exam, for instance, where we just talked one on one. I think that went well, but in the end, I’ve only got a few specific metrics to use to assign a grade, and much of it will be built around how well they engaged with the material.

    I don’t mind that, but students are a bit bewildered by the absence of hard grades throughout the term. I’ll have to incorporate more detailed assignments next time around, something where they go home with a number that they can work on improving, artificial as all that is.

  • Personally, I greatly enjoyed the student presentations, and I want to do more to have students bring their interests to the course. I might include a student poster session next time — a different medium, and if in a public place, bringing in new perspectives.

    The oral exam was also valuable in getting to know where their interests were. I think I’d schedule it earlier in the term, when I do it again.

  • No way will I ever offer this course at 8am again. The material required interaction and attentiveness, and some days it was tough to wake everyone up. These were really smart students, too, so the fault isn’t in them, but in the timing.

    Maybe I’d do it at 8am if the college provided a big pot of coffee with donuts every day for the students in compensation. Hah, right.

  • One of the biggest boosts to student participation came from making it mandatory that they ask at least one question a day. I added that requirement late in the course, and it worked surprisingly well — I could tell they were paying attention, trying to find something to pursue further. They also asked good questions, so it wasn’t just pro forma noise. I’ll do that from day one in the future.

    It would be nice if that provided one of those non-squishy metrics I need to add, but it worked too well — they all met that minimal requirement easily. Guess I’ll just have to give them all As.

  • I was bad. I got summoned to Washington DC for important grant-related meetings twice during the semester, which rather gutted two weeks out of 15. That was unavoidable, but while I managed to cover the material in my syllabus, my hope that we could go a bit further and get into the evolution and development side of the textbook was thwarted. But then I never get as deeply into the subjects of any of my courses as I’d like.

    Next time, if I have planned absences, I’ll try to bring in colleagues from ecology or environmental science to cover for me, and keep the momentum going. I was really reluctant to do that this term because…8 goddamn am. I wasn’t going to ask that of anyone.

What I really got out of the course was getting to go in twice a week, even at an ungodly hour, and getting to think about more than just basic, familiar stuff. The core courses I teach in cell biology and genetics are fine, but fairly routine — I know those subjects inside and out, and the challenge is in improving the pedagogy, not in getting exposed to new science. +1, would do again.

Also, one of the best things about small upper-level classes like this is that I can get to know the students a little better, and they reaffirm my faith in humanity because they actually are smart and thoughtful and likeable (I can say that now, and I’m not sucking up, because they’ve already done the course evaluation and turned it in to the office). Maybe I should just give everyone an A+, with gold stars and smiley face stickers.

The horrible two-headed rat

I’m not impressed with this recent exercise in microsurgical technique to allow researchers to transplant the head of one rat to another rat’s body. In all honesty, I don’t see the point.

I’m going to put the discussion of this paper below the fold because it seems more an exercise in animal cruelty than anything else; I’ve included one figure illustrating the surgery, but it will be at thumbnail size and you’ll have to click on it to see it in all its gory vulgarity.

[Read more…]