Questions Are Velcro

The late Harvard business professor Clayton Christensen is famous for quite a few things, most notably The Innovator's Dilemma and How Will You Measure Your Life? I just want to add one more great idea to his long list of accomplishments.

Nine years ago, I was visiting a friend of mine in Boston and attended church with him that Sunday. If you’re not familiar, Christensen was a Latter-day Saint and lived in the area. He was also a local church leader, in a role known as an Area Seventy. Part of that responsibility meant speaking in different congregations.

His talk that day was about the importance of spiritual seeking, and I enjoyed it for that alone. But during that talk he said something quite simple that has been useful to me ever since. Christensen was talking about the importance of questions and used a metaphor to capture it. I don't have the exact wording, but this is basically what he said:

“Questions are the velcro that answers stick to.”

A pithy idea like this wouldn't be confined to one place or moment, so it shouldn't be surprising that Christensen had shared it before, like in this conversation with Jason Fried or this tech talk to employees of The Church of Jesus Christ of Latter-day Saints. Even so, I haven't seen the idea get much traction elsewhere, so I wanted to give it my own little boost.

Velcro for learning

I don’t want to belabor the metaphor, other than to say it’s so smart that it’s obvious in the moment you hear it. Think about all the data hitting your brain every day. Most of it bounces off, and for good reason. Not every fact is important enough to claim your scarce attention.

But questions are perfect receptacles for data. They focus us just enough that an answer has a place to settle in the mind. To learn, to have information actually fit into your brain, you need the questions laid down for the answers to stick to.

How do we get good questions? For starters, if you want to learn a skill or master a topic, start using it. Our deficiencies generate plenty of questions, and almost certainly the right ones.

Plus, there’s a momentum to questions. An answer, once received by a curious mind, typically leads to the next question, and to the next. I’m sure you’ve felt this before many times. (In this way, AI is an exceptional teacher, hallucinations and all, never tiring of one smart or stupid question after another.)

There’s more, though, than just making knowledge practical. I don't think that captures it fully. Not every question we have points us to practical knowledge. A question might come because we find something interesting or we find something vexing. Questions are born from curiosity, and curiosity has many different origins.

I think there are different kinds of velcro, too. Idle curiosity is idly forgotten. In this way, maybe ChatGPT is a problem, because it immediately answers without much effort from us. (Imagine AI that responded with “Why do you wanna know?”) Light questions don’t have the same sticking power as the pestering ones that intrude and demand satisfaction. I love the way David Brooks describes this kind of curiosity:

The next stage of any calling or vocation is curiosity. When you’re in love with someone, you can’t stop thinking about her. You want to learn all there is to know. Curiosity is the eros of the mind, a propulsive force. It can seem so childish. Throughout history people have been nervous around curiosity. You never know where it will take you. One of Vladimir Nabokov’s characters called it the purest form of insubordination. Curiosity drives you to explore that dark cave despite your fears of going down there. Curiosity is leaping ahead of the comfortable place you’ve settled and dragging you into the unknown.

Those sorts of questions are uncomfortable precisely because they're waiting places, gaps we can feel in our minds. Questions are incomplete thoughts that beg for finality but aren't guaranteed to get it. The natural response is one of two: (1) be sufficiently annoyed by a question until we find its answer, or (2) yank the question out of our brains to avoid the discomfort altogether. In either case, questions have obvious power over us.

A lack of good questions is why you found a class boring, by the way. Students dismiss required classes as "useless," but the real problem is that they didn't have any questions the class answered. Which brings me to my next thought…

Velcro for teaching

Whether they know it or not, I think all good teachers use the velcro principle. I'm sometimes a good teacher and sometimes not, and in the moments when I'm not, it's typically because I haven't given my students good questions to start with. I remember my brother sharing a similar metaphor as we were designing an ethics training; he said that all good training starts with a wrestle. Another way of phrasing it: all good learning starts with a question that matters.

As teachers, we shouldn't expect students to bring all the good questions. Instead, we need to point them to the questions that make our teaching sticky to their brains. Jonathan Frakes had the right idea. :)

To use questions in teaching, we might point to the origins of a big idea. What made Einstein wonder about time and space? At sixteen, he wondered what a light wave would look like if he traveled at light speed:

After ten years of reflection such a principle resulted from a paradox upon which I had already hit at the age of sixteen: if I pursue a beam of light with the velocity c (velocity of light in a vacuum), I should observe such a beam of light as a spatially oscillatory electromagnetic field at rest.

That question was ten years in the answering! What a great way to teach relativity, using the same questions Einstein was asking himself.

We might also evoke questions with examples, or roadmaps, or exercises. This blog post by Neel Nanda is exceptionally good and thorough. I recommend it highly.

Every great idea came in response to a question. Imagine giving that question to whoever comes up with the next one.

I’m sure there are plenty of other places or ways that this metaphor of sticky questions has been used. But I wanted to pay a small tribute to Clayton Christensen who shared it with us that Sunday in Boston. And so in the same missionary spirit he was known for, I’ll end with a promise from the Sermon on the Mount noting how the asking comes first:

Ask, and it will be given to you; seek, and you will find; knock, and it will be opened to you. For everyone who asks receives, and he who seeks finds, and to him who knocks it will be opened.

Contrarianism isn’t intelligence

Alex Tabarrok drew attention a couple of weeks ago to this study: disagreement with the scientific consensus on controversial topics corresponds with worse understanding of non-controversial knowledge, such as the fact that the oxygen we breathe comes from plants or that electrons are smaller than atoms.

The authors then correlate respondents’ scores on the objective (uncontroversial) knowledge with their opposition to the scientific consensus on topics like vaccination, nuclear power, and homeopathy. The result is striking: people who are most opposed to the consensus (7, the far right of the horizontal axis in the figure below) score lower on objective knowledge but express higher subjective confidence. In other words, anti-consensus respondents are the most confidently wrong—the gap between what they know and what they think they know is widest.

Confidently Wrong - Marginal REVOLUTION

AI is a magnifier, which is wonderful and terrible

“Money doesn’t make you into a different person; it just makes you more of who you already are.”

Not a semester goes by without my sharing this nugget of wisdom with my business ethics students. I can't take credit for the idea, and I can't remember where I first encountered it. But like most true things, it sticks in the brain once you hear it.

One of the fortunate/unfortunate things in life is that the reach of our character is constrained by our circumstances. Because none of us is all-powerful, what we want is held in check by what’s possible. To the degree we want to do good in the world, it’s unfortunate that we don’t have more resources to do good with. And insofar as we want bad things—anything that makes ourselves and others worse off—it’s a blessing that our wants go wanting.

Thus, money has the power to amplify our character. Impatient? Money gives you power to get things faster. Prideful? Money buys a lot of praise. You get the idea. There’s really no attribute that money can’t make more of. Generous? Here’s hoping you end up with more money.

In this sense, AI is like money

People have very reasonable concerns about what AI is going to do to all of us, concerns known collectively as the "alignment" problem. What happens if AI isn't aligned with proper human values?

AI is an amplifier, and this, in my opinion, is the more immediate and urgent alignment danger. I worry less about the "it's going to raise a robot army and kill us all" kind of alignment fear. I worry much more about the "it's going to make us all kill each other" problem.

I say all of this despite being excited every day about what AI makes possible. With a technical background but no coding expertise, I’ve been on a tear with AI this year. Since May, I’ve built projects that I’ve contemplated for years and haven’t had the resources or skill to make happen. I’ll have more to share about those projects in posts to come, but here are some quick highlights:

  • I’ve built a from-scratch website to collect stories of helping experiences, including features like a chatbot that gives advice drawn from real experiences. (Launching early next year!)
  • I’ve made my own personal AI assistant that tracks my todos, smartly searches the database of 2,000+ articles I’ve saved up over the years, and even helps me exercise more regularly. It uses data stored privately on a Mac Mini in my office.
  • My colleagues and I have built a benchmark that measures how much different LLMs will help a user rationalize unethical decision-making. The idea went from concept to a first set of results in just two days. We’re validating the benchmark now and hope to have a paper out soon.

The most striking thing for me is how quickly an idea becomes reality now that I’ve gotten adept with AI tools. In fact, the funny problem I’ve had is that it’s become so easy to build something that I get easily distracted by a new idea when I should be buckling down and finishing the projects I’ve already started. Building is just really fun to do, and there’s a buzz from getting to the 80% version of an idea in mere hours. In other words, I’ve wasted time when I could have been finishing important things.

Exciting and Scary

All this is why I think the most reasonable reaction to have to AI is to be both excited and scared. The ability to do more doesn’t equate with wise judgment or good character. What we bring to AI matters at least as much as what AI can actually do.

If you’re a mediocre artist, for example, AI is not going to make you a good artist. This is a tough thing to come to terms with, which is why Tilly Norwood somehow exists. Bad taste is why OpenAI’s Slop-Tok app, Sora, probably won’t be long for this earth as people download it for the novelty, then find nothing worth staying for.

Good taste requires patience and discernment. It means exploring and learning and consuming beautiful things deliberately, the kind of things that are far too irreducible for Instagram Reels. Good taste takes work.

In education, the depressing reality is that too many students will use ChatGPT to give them the answer but they won’t use it to teach them the answer, despite how miraculous it is to have the smartest tutor in the history of the world at their disposal. Here again, the problem is in the wanting because the chance to actually learn is just one more prompt away.

But those who have figured it out are using AI to learn faster than they ever have before. Since May, I think I’ve told my wife at least a dozen times that I can’t remember ever learning so much in so short a span. Of course, I don’t have the pressure of homework in required classes I don’t want to take, so I can see where students are coming from.

Among the many flaws of LLMs, sycophancy is probably the most pernicious. AI will do ridiculous things like praising us for being genius babies (long video, but worth it) and horrific things like encouraging suicide. It’s clear that modern AI products struggle to strike a balance between likability and honesty, and so they accelerate every idea, no matter how terrible it is.

In contrast, AI can work like jet fuel for good ideas. The technology is accelerating science dramatically. Berkeley researchers are using it to iterate and discover new materials. A group in Australia used AI to identify mechanisms behind early-onset Parkinson’s and is on track to develop a drug to treat it.

But it takes discipline to use AI to refine your ideas. You have to invite its criticisms and take time to actually evaluate them. You have to be willing to resist it when it glazes you, rather than being drawn into the flattery. AI as an idea magnifier depends on our character as much as it depends on the technology.

What we bring matters

To wrap up: with the help of AI, I remembered an article I had read on the magnifying effect of money, quoted below. It all holds for AI as well. As I said at the start, it’s wonderful and terrible.

If you view the world through the lens of scarcity and survival, money will only amplify that feeling of inadequacy. But if freedom is what defines you, then money will feel abundant, no matter how much you have. If power and influence is what you want, then money will drive the nature of your relationships in that direction…After all, if you don’t give money its purpose, it will end up defining yours.

Lawrence Yeo, “Money is the megaphone of identity” at moretothat.com

After historic declines, global poverty may increase after 2030

The global reductions in poverty over the last 50 years have been unprecedented, bordering on miraculous. But the rapid and easy gains in wellbeing might be behind us.

Based on current trends, progress against extreme poverty will come to a halt. As we’ll see, the number of people in extreme poverty is projected to decline, from 831 million people in 2025 to 793 million people in 2030. After 2030, the number of extremely poor people is expected to increase.

Of course, no one expected poverty to drop the way that it did in our lifetimes, so perhaps unexpected growth is still in our future. But it will take doing things that we aren’t doing now.

The end of progress against extreme poverty? - Our World in Data

Patience is a sacred pause

In times of injustice, anger, or outrage, patience can both inform and fortify us. Booker states, “Practicing patience doesn’t mean that you push your anger aside, that you don’t acknowledge it…Bringing patience in to support your anger can feel like a sacred pause, a deep listening as your body restores its dignity, giving you the opportunity in between thought and action to decide how you want to respond.” 

I’d never thought of patience this way before, as a sacred pause that creates opportunity between thought and action. It helps me want to be more patient if I think of patience as a source of agency.

Patience Opens the Heart | Lion’s Roar

Cheating Is Expensive for Everyone

Apropos of my post from last week on AI and universities, here’s Yascha Mounk on the topic, noting the terrible incentives involved. If (1) AI use is treated as cheating, as it should be in some coursework, and (2) AI’s ease of use dramatically increases cheating, as it has, then the normal mechanisms for dealing with cheating get completely swamped.

Others are well-aware of the problem but don’t really know what to do about it. When you suspect that an assignment was completed by AI, it’s very hard to prove that without a confrontation with a student that is certain to be deeply awkward, and may even inspire a formal complaint. And if somehow you do manage to prove that a student has cheated, a long and frustrating bureaucratic process awaits—at the end of which, college administrators may impose an extremely lenient punishment or instruct professors to turn a blind eye.

The entire article is worth reading; cheating is just one of a few topics in it.

Colleges Are Surrendering to AI - Yascha Mounk

Unique, Quality Branding Is Hard

Something I’ve never really blogged about before is the tech space, but if you looked at my RSS feeds you‘d see more tech news than anything else. It’s a long-running hobby.

In the consumer tech space, it’s really hard to establish a brand that doesn’t feel like just a version of Apple’s look and feel. Some companies over the years just made copying Apple their entire strategy.

This is why I loved the announcement video for the new Steam hardware lineup posted today. I’ll probably never own any of these products, but dang is the brand identity in this video incredibly good. Unique, fresh, and fun. The video is worth watching just for that.

Cynicism Isn’t Intelligence

Most of us valorize people who don’t like people. But it turns out cynicism is not a sign of wisdom, and more often it’s the opposite. In studies of over 200,000 individuals across thirty nations, cynics scored less well on tasks that measure cognitive ability, problem-solving, and mathematical skill. Cynics aren’t socially sharp, either, performing worse than non-cynics at identifying liars…
In other words, cynicism looks smart, but isn’t.

I’m a fan of Jamil Zaki. His book The War for Kindness is an excellent read about empathy and compassion, and I’ve recommended it to a lot of students. I’ve been meaning to get to his newest book on hope but got distracted. This article I saved bubbled back up again, reminding me I need to open that up.

Instead of Being Cynical, Try Becoming Skeptical

The difference between “accomplished” and “good”

We use “good” in English to mean too many things. Case in point: James Watson, who just passed away. “Good” can mean good at something, as with Watson being good at science. But “good” can also mean being a morally good person, which few who knew Watson would say of him.

Being smart is sadly a handy excuse for being selfish, dishonest, cruel, and dismissive of others, as Watson seemed to be. This article from Ars Technica isn’t the only one to remember him thus:

Their discovery heavily relied on the work of chemist and crystallographer Rosalind Franklin at King’s College in London, whose X-ray images of DNA provided critical clues to the molecule’s twisted-ladderlike architecture. One image in particular from Franklin’s lab, Photo 51, made Watson and Crick’s discovery possible. But, she was not fully credited for her contribution. The image was given to Watson and Crick without Franklin’s knowledge or consent by Maurice Wilkins, a biophysicist and colleague of Franklin.

Imagine being remembered both for DNA’s discovery and for being an intolerant, intolerable person.

In 1955, Watson joined the faculty at Harvard University, where he was unpopular. Legendary biologist E.O. Wilson, also at Harvard, famously called Watson “the most unpleasant human being I had ever met,” in his memoir, adding that “Watson radiated contempt in all directions.”

James Watson, who helped unravel DNA’s double-helix, has died - Ars Technica

AI is Coming for Universities, and Grades Are Why

Plenty of industries are getting stalked by AI right now. I don't think any knowledge-work jobs are immune. But if AI is the lioness, universities are the already-sickly members of the herd lamely trying to outrun her.

What this really means is that the people in universities—students, faculty, admins—are the ones getting eaten alive. It's a rough time if you liked how things were before.

But all hope isn't lost. This article by Simas Kucinskas does a nice job of pointing out the safer ground for higher ed. I have some thoughts to add, particularly about grading.

Students

As Simas notes, AI chat can be an excellent teacher. I keep telling Katie that I feel like I've learned more in the last six months than ever before, and it's been entirely because I've used ChatGPT/Claude/Perplexity to help me understand things.

But you have to want to learn. Education, sadly, is a product where people try to get the least they can for their money. Simas captures it perfectly:

I assigned two problem sets and asked students to solve them at home, then present solutions at the whiteboard. Students provided perfect solutions but often couldn't explain why they did what they did. One student openly said "ChatGPT gave this answer, but I don't know why."
A single prompt would have resolved that! But many students don't bother. "One prompt away" is often one prompt too far.

As much as we professors wish our students loved the learning for its own sake, we still give grades and students are entirely reasonable in wanting good ones. I try never to think a student is grade-grubbing because, well, I went to law school and once emailed a professor to ask if his squiggle on my final exam was a point or not.

Besides, learning is hard work and students have a lot of it to do in a given semester. I get why AI is such an alluring solution when all things are considered.

But boy, is cheating corrosive to the soul. And you might remember that cheating was endemic before AI. The incentives haven't changed, just the costs.

Professors

But here's the real reason AI is pouncing on higher ed: professors hate grading. It's nearly a universal sentiment among us. We will automate, delegate, and simplify grading as much as humanly possible. As one of my friends and colleagues likes to put it, "I teach for free and they pay me to grade."

(The only consistent exception is writing professors. They freaking love grading. It energizes them. Throw them a comma splice, watch their eyes light up.)

A world with AI exposes this weakness. To grade meaningfully when a student can generate an entire paper with just Command-C and Command-V means we faculty have to grade harder. That means oral exams, testing centers instead of Canvas, and essay prompts that aren't just regurgitation recipes.

To be more fair to my colleagues (and myself), we also have competing interests along with our students—research, committees, and so on. Plus, we get close to zero incentives to grade meaningfully. Doing it badly will show up in student ratings, but doing it well won't show up nearly as much. Students might eventually appreciate the professor who shredded their homework, but not usually when they're doing an end-of-semester evaluation.

Simas' Barbell

Outrunning the lioness means getting in better shape, which makes the barbell metaphor apt. We do need a better way to work out.

  • One end of the barbell: courses that are deliberately non-AI. Work through proofs by hand. Read academic papers. Write essays without AI. It's hard, but you build mental strength.
  • The other end of the barbell: embrace AI fully for applied projects. Attend vibecoding hackathons. Build apps with Cursor. Use Veo to create videos. Master these tools effectively.

Not so much the Veo thing for me, but otherwise I deeply agree with this.

Dang, that left end of the barbell is heavy though. The right end, on the other hand, is really fun. The IS professors down the hall from me use a similar metaphor, pointing out to their students that we have forklifts and yet we still lift weights.

So here's what I wanted to say: students, I see you. Try to work out, not tune out. And faculty, the growling behind you is getting louder.

---

Link: University education as we know it is over