AI is a magnifier, which is wonderful and terrible

“Money doesn’t make you into a different person; it just makes you more of who you already are.”

Not a semester goes by without me sharing this nugget of wisdom with my business ethics students. I don’t feel like I can take credit for the idea, and I can’t remember where I first encountered it. But like most true things, it sticks in the brain once you hear it.

One of the fortunate/unfortunate things in life is that the reach of our character is constrained by our circumstances. Because none of us is all-powerful, what we want is held in check by what’s possible. To the degree we want to do good in the world, it’s unfortunate that we don’t have more resources to do good with. And insofar as we want bad things—anything that makes ourselves and others worse off—it’s a blessing that our wants go wanting.

Thus, money has the power to amplify our character. Impatient? Money gives you power to get things faster. Prideful? Money buys a lot of praise. You get the idea. There’s really no attribute that money can’t make more of. Generous? Here’s hoping you end up with more money.

In this sense, AI is like money

People have very reasonable concerns about what AI is going to do to all of us, concerns collectively known as the “alignment” problem. What happens if AI isn’t aligned with proper human values?

AI is an amplifier and this, in my opinion, is the more immediate and urgent alignment danger. I worry less about the “it’s going to raise a robot army and kill us all” kind of alignment fear. I worry much more about the “it’s going to make us all kill each other” problem.

I say all of this despite being excited every day about what AI makes possible. With a technical background but no coding expertise, I’ve been on a tear with AI this year. Since May, I’ve built projects I’d contemplated for years but lacked the resources or skill to make happen. I’ll have more to share about those projects in posts to come, but here are some quick highlights:

  • I’ve built a from-scratch website that collects stories of helping experiences, including a chatbot that gives advice drawn from those real stories. (Launching early next year!)
  • I’ve made my own personal AI assistant that tracks my todos, smartly searches the database of 2,000+ articles I’ve saved up over the years, and even helps me exercise more regularly. It uses data stored privately on a Mac Mini in my office. (A rough sketch of the search piece follows this list.)
  • With colleagues, I’ve built a benchmark that measures how much different LLMs will help a user rationalize unethical decision-making. This idea went from concept to the first set of results in just two days. We’re validating the benchmark now and hope to have a paper out soon.
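
For the curious, here’s roughly the shape of that article search. What follows is a minimal sketch under assumptions, not my actual code: the articles.json layout, the file path, and the TF-IDF approach are all stand-ins for illustration (the real data lives on the Mac Mini).

    # Minimal sketch of local article search: TF-IDF plus cosine similarity.
    # Hypothetical layout: articles.json is a list of records like
    # {"title": ..., "url": ..., "text": ...}
    import json

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def search_articles(query, path="articles.json", top_k=5):
        with open(path) as f:
            articles = json.load(f)
        corpus = [a["title"] + " " + a["text"] for a in articles]
        vectorizer = TfidfVectorizer(stop_words="english")
        doc_vectors = vectorizer.fit_transform(corpus)  # one row per article
        query_vector = vectorizer.transform([query])
        scores = cosine_similarity(query_vector, doc_vectors)[0]
        ranked = sorted(zip(scores, articles), key=lambda pair: pair[0], reverse=True)
        return [(a["title"], a["url"]) for _, a in ranked[:top_k]]

    # Example: surface saved articles relevant to a question.
    for title, url in search_articles("sycophancy in AI chatbots"):
        print(title, url)

Nothing fancy, and an embeddings model would do better at matching meaning, but even this much turns a pile of saved links into something you can actually ask questions of.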

The most striking thing for me is how quickly an idea becomes reality now that I’ve gotten adept with AI tools. In fact, the funny problem I’ve had is that building has become so easy that a new idea can pull me away when I should be buckling down and finishing the projects I’ve already started. Building is just really fun, and there’s a buzz from getting to the 80% version of an idea in mere hours. In other words, I’ve wasted time when I could have been finishing important things.

Exciting and Scary

All this is why I think the most reasonable reaction to AI is to be both excited and scared. The ability to do more isn’t the same as wise judgment or good character. What we bring to AI matters at least as much as what AI can actually do.

If you’re a mediocre artist, for example, AI is not going to make you a good artist. This is a tough thing to come to terms with, which is why Tilly Norwood somehow exists. Bad taste is why OpenAI’s Slop-Tok app, Sora, probably won’t be long for this world as people download it for the novelty, then find nothing worth staying for.

Good taste requires patience and discernment. It means exploring and learning and consuming beautiful things deliberately, the kind of things that are far too irreducible for Instagram Reels. Good taste takes work.

In education, the depressing reality is that too many students will use ChatGPT to give them the answer but they won’t use it to teach them the answer, despite how miraculous it is to have the smartest tutor in the history of the world at their disposal. Here again, the problem is in the wanting because the chance to actually learn is just one more prompt away.

But those who have figured it out are using AI to learn faster than they ever have before. Since May, I think I’ve told my wife at least a dozen times that I can’t remember having learned as much in as short a time span. Of course, I don’t have the pressure of homework in required classes I don’t want to take, so I can see where students are coming from.

Among the many flaws of LLMs, sycophancy is probably the most pernicious. AI will do ridiculous things like praising us for being genius babies (long video, but worth it) and horrific things like encouraging suicide. It’s clear that modern AI products struggle to strike a balance between likability and honesty, and so they accelerate every idea, no matter how terrible it is.

In contrast, AI can work like jet fuel for good ideas. The technology is accelerating science dramatically. Berkeley researchers are using it to iterate on and discover new materials. A group in Australia used AI to identify mechanisms behind early-onset Parkinson’s, and is on track to develop a drug to treat it.

But it takes discipline to use AI to refine your ideas. You have to invite its criticisms and take time to actually evaluate them. You have to be willing to resist it when it glazes you, rather than being drawn into the flattery. AI as an idea magnifier depends on our character as much as it depends on the technology.

What we bring matters

To wrap up: with the help of AI, I remembered an article I had read on the magnifying effect of money, quoted below. It all holds for AI as well. As I said at the start, it’s wonderful and terrible.

If you view the world through the lens of scarcity and survival, money will only amplify that feeling of inadequacy. But if freedom is what defines you, then money will feel abundant, no matter how much you have. If power and influence is what you want, then money will drive the nature of your relationships in that direction…After all, if you don’t give money its purpose, it will end up defining yours.

Lawrence Yeo, “Money is the megaphone of identity” at moretothat.com

After historic declines, global poverty may increase after 2030

The global reductions in poverty over the last 50 years have been unprecedented, bordering on miraculous. But the rapid and easy gains in wellbeing might be behind us.

Based on current trends, progress against extreme poverty will come to a halt. As we’ll see, the number of people in extreme poverty is projected to decline, from 831 million people in 2025 to 793 million people in 2030. After 2030, the number of extremely poor people is expected to increase.

Of course, no one expected poverty to drop the way that it did in our lifetimes, so perhaps unexpected growth is still in our future. But it will take doing things that we aren’t doing now.

The end of progress against extreme poverty? - Our World in Data

Patience is a sacred pause

In times of injustice, anger, or outrage, patience can both inform and fortify us. Booker states, “Practicing patience doesn’t mean that you push your anger aside, that you don’t acknowledge it…Bringing patience in to support your anger can feel like a sacred pause, a deep listening as your body restores its dignity, giving you the opportunity in between thought and action to decide how you want to respond.” 

I’d never thought of patience this way before, as a sacred pause that creates opportunity between thought and action. It helps me want to be more patient if I think of patience as a source of agency.

Patience Opens the Heart | Lion’s Roar

Cheating Is Expensive for Everyone

Apropos of my post from last week on AI and Universities, here’s Yascha Mounk on the topic, noting the terrible incentives involved. If (1) AI use is treated as cheating, which it should be in some coursework, and (2) AI’s ease of use dramatically increases cheating, as it has, then the normal mechanisms for dealing with cheating get completely swamped.

Others are well-aware of the problem but don’t really know what to do about it. When you suspect that an assignment was completed by AI, it’s very hard to prove that without a confrontation with a student that is certain to be deeply awkward, and may even inspire a formal complaint. And if somehow you do manage to prove that a student has cheated, a long and frustrating bureaucratic process awaits—at the end of which, college administrators may impose an extremely lenient punishment or instruct professors to turn a blind eye.

The entire article is worth reading; cheating is just one of a few topics in it.

Colleges Are Surrendering to AI - Yascha Mounk

Unique, Quality Branding Is Hard

I’ve never really posted or blogged about the tech space before, but if you looked at my RSS feeds you’d see more tech news than anything else. It’s a long-running hobby.

In the consumer tech space, it’s really hard to establish a brand that doesn’t feel like just a version of Apple’s look and feel. Some companies over the years just made copying Apple their entire strategy.

This is why I loved the announcement video for the new Steam hardware lineup that was posted today. I’ll probably never own any of these products, but dang is the brand identity in this video incredibly good. Unique, fresh, and fun. The video is worth watching just for that.

Cynicism Isn’t Intelligence

Most of us valorize people who don’t like people. But it turns out cynicism is not a sign of wisdom, and more often it’s the opposite. In studies of over 200,000 individuals across thirty nations, cynics scored less well on tasks that measure cognitive ability, problem-solving, and mathematical skill. Cynics aren’t socially sharp, either, performing worse than non-cynics at identifying liars…
In other words, cynicism looks smart, but isn’t.

I’m a fan of Jamil Zaki. His book The War for Kindness is an excellent read about empathy and compassion; I’ve recommended it to a lot of students. I’ve been meaning to get to his newest book on hope, but got distracted. This article I saved bubbled back up again, reminding me to pick it up.

Instead of Being Cynical, Try Becoming Skeptical

The difference between “accomplished” and “good”

We use “good” in English to mean too many things. Case in point: James Watson, who just passed away. “Good” can mean being good at something, as in Watson being good at science. But “good” can also mean being a morally good person, which was not a widely held opinion of Watson.

Being smart is sadly a handy excuse for being selfish, dishonest, cruel, and dismissive of others, as Watson seemed to be. This article from Ars Technica isn’t the only one to remember him thus:

Their discovery heavily relied on the work of chemist and crystallographer Rosalind Franklin at King’s College in London, whose X-ray images of DNA provided critical clues to the molecule’s twisted-ladderlike architecture. One image in particular from Franklin’s lab, Photo 51, made Watson and Crick’s discovery possible. But, she was not fully credited for her contribution. The image was given to Watson and Crick without Franklin’s knowledge or consent by Maurice Wilkins, a biophysicist and colleague of Franklin.

Imagine being remembered both for DNA’s discovery and for being an intolerant, intolerable person.

In 1955, Watson joined the faculty at Harvard University, where he was unpopular. Legendary biologist E.O. Wilson, also at Harvard, famously called Watson “the most unpleasant human being I had ever met,” in his memoir, adding that “Watson radiated contempt in all directions.”

James Watson, who helped unravel DNA’s double-helix, has died - Ars Technica

AI is Coming for Universities, and Grades Are Why

Plenty of industries are getting stalked by AI right now. I don't think any knowledge-work jobs are immune. But if AI is the lioness, universities are the already-sickly members of the herd lamely trying to outrun her.

What this really means is that the people in universities—students, faculty, admins—are the ones getting eaten alive. It's a rough time if you liked how things were before.

But all hope isn't lost. This article by Simas Kucinskas does a nice job of pointing out the safer ground for higher ed. I have some thoughts to add, particularly about grading.

Students

As Simas notes, AI chat can be an excellent teacher. I keep telling Katie that I feel like I've learned more in the last six months than ever before, and it's been entirely because I've used ChatGPT/Claude/Perplexity to help me understand things.

But you have to want to learn. Education, sadly, is a product where people try to get the least they can for their money. Simas captures it perfectly:

I assigned two problem sets and asked students to solve them at home, then present solutions at the whiteboard. Students provided perfect solutions but often couldn't explain why they did what they did. One student openly said "ChatGPT gave this answer, but I don't know why."
A single prompt would have resolved that! But many students don't bother. "One prompt away" is often one prompt too far.

As much as we professors wish our students loved the learning for its own sake, we still give grades and students are entirely reasonable in wanting good ones. I try never to think a student is grade-grubbing because, well, I went to law school and once emailed a professor to ask if his squiggle on my final exam was a point or not.

Besides, learning is hard work and students have a lot of it to do in a given semester. I get why AI is such an alluring solution when all things are considered.

But boy, is cheating corrosive to the soul. And you might remember that cheating was endemic before AI. The incentives haven't changed, just the costs.

Professors

But here's the real reason AI is pouncing on higher ed: professors hate grading. It's nearly a universal sentiment among us. We will automate, delegate, and simplify grading as much as humanly possible. As one of my friends and colleagues likes to put it, "I teach for free and they pay me to grade."

(The only consistent exception is writing professors. They freaking love grading. It energizes them. Throw them a comma splice, watch their eyes light up.)

A world with AI exposes this weakness. To grade meaningfully when a student can generate an entire paper with just command-c and command-v means we faculty have to grade harder. That means oral exams, testing centers instead of Canvas, and essay prompts that aren't just regurgitation recipes.

To be fair to my colleagues (and myself), we, like our students, have competing demands—research, committees, and so on. Plus, we have close to zero incentive to grade meaningfully. Doing it badly will show up in student ratings, but doing it well won't show up nearly as much. Students might eventually appreciate the professor who shredded their homework, but not usually when they're doing an end-of-semester evaluation.

Simas' Barbell

Outrunning the lioness means getting in better shape, which makes the barbell metaphor apt. We do need a better way to work out.

  • One end of the barbell: courses that are deliberately non-AI. Work through proofs by hand. Read academic papers. Write essays without AI. It's hard, but you build mental strength.
  • The other end of the barbell: embrace AI fully for applied projects. Attend vibecoding hackathons. Build apps with Cursor. Use Veo to create videos. Master these tools effectively.

Not so much the Veo thing for me, but otherwise I deeply agree with this.

Dang, that left end of the barbell is heavy though. The right end, on the other hand, is really fun. The IS professors down the hall from me use a similar metaphor, pointing out to their students that we have forklifts and yet we still lift weights.

So here's what I wanted to say: students, I see you. Try to work out, not tune out. And faculty, the growling behind you is getting louder.

---

Link: University education as we know it is over

Gold Bluffs Beach

Katie and I celebrated our 25th anniversary and my 50th birthday last month by spending a week in the coastal redwoods in California. Gold Bluffs Beach is one of our favorite places. These pictures were taken in the evening after a spectacular hike. I'll post more photos from that trip later.

Working great until it doesn't

Since May, I've been using Cursor to build some projects that I'll certainly be sharing more about here. It's been pretty invigorating. I've learned more in the last six months than I can remember learning at any time. And that includes grad school.

Cursor has a variety of features to help you build software, but ultimately the AI is always constrained by something called a context window (basically the maximum number of tokens it can process at once). When you hit that limit, the software automatically summarizes the conversation to that point so you can get back to a smaller batch of data and keep working.
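
To make that concrete, here's the general pattern as a minimal sketch, assuming a 200k-token window, a crude characters-per-token estimate, and a hypothetical summarize() helper. It's not Cursor's actual implementation; every number and name here is an illustrative assumption.

    # Illustrative sketch of context-window management, not Cursor's real code.
    MAX_TOKENS = 200_000  # assumed window size
    SUMMARIZE_AT = 0.9    # summarize when the window is ~90% full

    def estimate_tokens(messages):
        # Rough heuristic: about 4 characters per token for English text.
        return sum(len(m["content"]) for m in messages) // 4

    def maybe_summarize(messages, summarize):
        # Once the window is nearly full, compress older turns into a single
        # summary message and keep only the most recent turns verbatim.
        if estimate_tokens(messages) < MAX_TOKENS * SUMMARIZE_AT:
            return messages
        summary = summarize(messages[:-5])  # condense all but the last 5 turns
        return [{"role": "system", "content": summary}] + messages[-5:]

The important part is that, after this step, the model only ever sees what survives the compression.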

Summaries naturally lose detail, so the quality of the coding agent can go down. But the context windows are usually big enough that you don't need to summarize until you're pretty far into a conversation.

Until today. This is easily the weirdest and most unsettling bug I've encountered in Cursor. All of a sudden, the context window fills up instantly, a summary is generated, and the AI starts working on something I never asked it to do (a Flask app in Python).

Is this a data-contamination thing? A prompt injection in Cursor's harness? Luckily I was working on something trivial, so nothing important has been lost or damaged. Also, I'm apparently not the only one this has happened to.

I'm going to work on other things and definitely leave my other projects closed for now.