Cheating Is Expensive for Everyone

Apropos of my post from last week on AI and Universities, here’s Yascha Mounk on the topic, noting the terrible incentives involved. If (1) AI is treated as cheating, which it should be in some coursework, and (2) AI’s ease-of-use dramatically increases cheating, as it has, then the normal mechanisms for dealing with cheating get completely swamped.

Others are well-aware of the problem but don’t really know what to do about it. When you suspect that an assignment was completed by AI, it’s very hard to prove that without a confrontation with a student that is certain to be deeply awkward, and may even inspire a formal complaint. And if somehow you do manage to prove that a student has cheated, a long and frustrating bureaucratic process awaits—at the end of which, college administrators may impose an extremely lenient punishment or instruct professors to turn a blind eye.

The entire article is worth reading; cheating is just one of a few topics in it.

Colleges Are Surrendering to AI - Yascha Mounk

Unique, Quality Branding Is Hard

Something I’ve never really posted or blogged about before is the tech space, but if you looked at my RSS feeds you’d see more tech news than anything else. It’s a long-running hobby.

In the consumer tech space, it’s really hard to establish a brand that doesn’t feel like just a version of Apple’s look and feel. Some companies over the years just made copying Apple their entire strategy.

This is why I loved the announcement video for the new Steam hardware lineup that was posted today. I’ll probably never own any of these products, but dang is the brand identity in this video incredibly good. Unique, fresh, and fun. The video is worth watching just for that.

Cynicism Isn’t Intelligence

Most of us valorize people who don’t like people. But it turns out cynicism is not a sign of wisdom, and more often it’s the opposite. In studies of over 200,000 individuals across thirty nations, cynics scored less well on tasks that measure cognitive ability, problem-solving, and mathematical skill. Cynics aren’t socially sharp, either, performing worse than non-cynics at identifying liars…
In other words, cynicism looks smart, but isn’t.

I’m a fan of Jamil Zaki. His book The War for Kindness is an excellent read about empathy and compassion, and I’ve recommended it to a lot of students. I’ve been meaning to get to his newest book on hope, but got distracted. This article I saved bubbled back up again, reminding me I need to open that up.

Instead of Being Cynical, Try Becoming Skeptical

The difference between “accomplished” and “good”

We use “good” in English to mean too many things. Case in point: James Watson, who just passed away. Good can mean “good at” something, like in the case of Watson being good at science. But “good” also means being a morally good person, which was not a widely held opinion of Watson.

Being smart is sadly a handy excuse for being selfish, dishonest, cruel, and dismissive of others, as Watson seemed to be. This article from Ars Technica isn’t the only one to remember him thus:

Their discovery heavily relied on the work of chemist and crystallographer Rosalind Franklin at King’s College in London, whose X-ray images of DNA provided critical clues to the molecule’s twisted-ladderlike architecture. One image in particular from Franklin’s lab, Photo 51, made Watson and Crick’s discovery possible. But, she was not fully credited for her contribution. The image was given to Watson and Crick without Franklin’s knowledge or consent by Maurice Wilkins, a biophysicist and colleague of Franklin.

Imagine being remembered both for DNA’s discovery and for being an intolerant, intolerable person.

In 1955, Watson joined the faculty at Harvard University, where he was unpopular. Legendary biologist E.O. Wilson, also at Harvard, famously called Watson “the most unpleasant human being I had ever met,” in his memoir, adding that “Watson radiated contempt in all directions.”

James Watson, who helped unravel DNA’s double-helix, has died - Ars Technica

AI is Coming for Universities, and Grades Are Why

Plenty of industries are getting stalked by AI right now. I don't think any knowledge-work jobs are immune. But if AI is the lioness, universities are the already-sickly members of the herd lamely trying to outrun her.

What this really means is that the people in universities—students, faculty, admins—are the ones getting eaten alive. It's a rough time if you liked how things were before.

But all hope isn't lost. This article by Simas Kucinskas does a nice job of pointing out the safer ground for higher ed. I have some thoughts to add, particularly about grading.

Students

As Simas notes, AI chat can be an excellent teacher. I keep telling Katie that I feel like I've learned more in the last six months than ever before, and it's been entirely because I've used ChatGPT/Claude/Perplexity to help me understand things.

But you have to want to learn. Education, sadly, is a product where people try to get the least they can for their money. Simas captures it perfectly:

I assigned two problem sets and asked students to solve them at home, then present solutions at the whiteboard. Students provided perfect solutions but often couldn't explain why they did what they did. One student openly said "ChatGPT gave this answer, but I don't know why."
A single prompt would have resolved that! But many students don't bother. "One prompt away" is often one prompt too far.

As much as we professors wish our students loved the learning for its own sake, we still give grades and students are entirely reasonable in wanting good ones. I try never to think a student is grade-grubbing because, well, I went to law school and once emailed a professor to ask if his squiggle on my final exam was a point or not.

Besides, learning is hard work and students have a lot of it to do in a given semester. I get why AI is such an alluring solution when all things are considered.

But boy, is cheating corrosive to the soul. And you might remember that cheating was endemic before AI. The incentives haven't changed, just the costs.

Professors

But here's the real reason AI is pouncing on higher ed: professors hate grading. It's nearly a universal sentiment among us. We will automate, delegate, and simplify grading as much as humanly possible. As one of my friends and colleagues likes to put it, "I teach for free and they pay me to grade."

(The only consistent exception is writing professors. They freaking love grading. It energizes them. Throw them a comma splice, watch their eyes light up.)

A world with AI exposes this weakness. To grade meaningfully when a student can generate an entire paper with just command-c and command-v means we faculty have to grade harder. That means oral exams, testing centers instead of Canvas, and essay prompts that aren't just regurgitation recipes.

To be more fair to my colleagues (and myself), we also have competing interests along with our students—research, committees, and so on. Plus, we have close to zero incentive to grade meaningfully. Doing it badly will show up in student ratings, but doing it well won't show up nearly as much. Students might eventually appreciate the professor who shredded their homework, but not usually when they're doing an end-of-semester evaluation.

Simas' Barbell

Outrunning the lioness means getting in better shape, which makes the barbell metaphor apt. We do need a better way to work out.

One end of the barbell: courses that are deliberately non-AI. Work through proofs by hand. Read academic papers. Write essays without AI. It's hard, but you build mental strength.
The other end of the barbell: embrace AI fully for applied projects. Attend vibecoding hackathons. Build apps with Cursor. Use Veo to create videos. Master these tools effectively.

Not so much the Veo thing for me, but otherwise I deeply agree with this.

Dang, that left end of the barbell is heavy though. The right end, on the other hand, is really fun. The IS professors down the hall from me use a similar metaphor, pointing out to their students that we have forklifts and yet we still lift weights.

So here's what I wanted to say: students, I see you. Try to work out, not tune out. And faculty, the growling behind you is getting louder.

---

Link: University education as we know it is over

Gold Bluffs Beach

Katie and I celebrated our 25th anniversary and my 50th birthday last month by spending a week in the coastal redwoods in California. Gold Bluffs Beach is one of our favorite places. These pictures were in the evening after a spectacular hike. I'll post more photos from that trip later.

Working great until it doesn't

Since May, I've been using Cursor to build some projects that I'll certainly be sharing more of here. It's been pretty invigorating. I've learned more in the last six months than I can remember learning at any time. And that includes grad school.

Cursor has a variety of features to help you build software, but ultimately the AI is always constrained by something called a context window (basically the maximum number of tokens it can process at once). When you hit that limit, the software automatically summarizes the conversation to that point so you can get back to a smaller batch of data and keep working.

Summaries naturally lose detail so the quality of the coding agent can go down. But the context windows are usually big enough that you don't need to summarize until you're pretty far into a conversation.
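For anyone curious, the summarize-on-limit behavior can be sketched in a few lines. This is a rough illustration, not Cursor's actual implementation; the function names, the token limit, and the "keep the last ten turns" rule are all my own assumptions.

```python
# Hypothetical sketch of the summarize-when-full pattern described above.
# All names and thresholds here are illustrative, not Cursor's real API.

MAX_CONTEXT_TOKENS = 200_000  # an assumed limit for illustration

def count_tokens(messages):
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # In the real tool, the model itself writes this summary,
    # which is exactly where detail gets lost.
    return [{"role": "system",
             "content": f"Summary of {len(messages)} earlier messages."}]

def maybe_compact(messages):
    # Below the limit, nothing changes; above it, older turns are
    # collapsed into one summary and only recent turns stay verbatim.
    if count_tokens(messages) > MAX_CONTEXT_TOKENS:
        return summarize(messages[:-10]) + messages[-10:]
    return messages
```

The bug I hit behaves as if `count_tokens` suddenly returned a huge number on a nearly empty conversation, triggering compaction immediately.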

Until today. This is easily the weirdest and most unsettling bug I've encountered in Cursor. All of a sudden, the context window is filling up instantly, a summary is generated, and the AI starts working on something I never asked for it to do (a flask app in Python).

Is this a data contamination thing? A prompt injection in Cursor's harness? Luckily I was working on something trivial, so nothing important has been lost or damaged. Also, I'm apparently not the only one to have this happen.

I'm going to work on other things and definitely leave my other projects closed for now.

Let's give this a try

Maybe turning 50 has made me nostalgic, but I do miss the era of personal blogs. I never felt confident enough to do one of my own. But turning 50 has also helped me feel like I don't really have anything left to prove. Of course, I should have started this years ago anyway.

It was just this morning that I had the realization that I've spent most of my career as a tinkerer of sorts, just not with physical things. I've tinkered with writing books, being a lawyer, building cool things with students, designing new classes, a variety of startups, and other things, many of which never went anywhere. I'm fortunate that I've had the opportunities as a professor to make a living doing that.

Having spent my career tinkering explains, I think, why I never really felt like I could hold myself out as an expert. Nothing I've done has ever gotten huge, at least nothing I can take much credit for. But reflecting on the aggregate of it all gives me more confidence that I might share something that's useful to someone.

So I expect this to be something like a tinkerer's journal. Lots of dead-end ideas, lots of curiosities, lots of things that have little to do with anything else, and hopefully some interesting and useful things, too.