A Rare-Blood Donor Saved Millions of Lives

Australia's most prolific blood and plasma donor, James Harrison, has died at age 88. Known as the "Man with the Golden Arm," Harrison is credited with saving the lives of 2.4 million babies over the course of more than half a century.

Harrison died in February of last year. Of course, many, many people played a critical role in all the good that he did (nurses, doctors, researchers, phlebotomists), but Harrison also did his part and showed up, time after time.

Is there a better illustration of what it takes to make such an impact? Whatever we do, we have to keep showing up.

(I also posted this over at my other site, How to Help. If you don't know it, check it out.)

Blood donor James Harrison, who saved 2 million babies, has died | NPR

Some Provo Street Photography

Since my focus has been learning landscape photography, I've never really done any street shooting. Thanks to a kind invitation, I had a chance to head downtown and try my hand at it. (Thanks Daren, Jason, and Justin!)

A bit of a grey day, and I need to get more comfortable taking photos of people. I mean, these look like a landscape photographer got lost downtown. 😂 But here are my favorite shots from today.

My Ten Favorite Photos of the Year

This is the year I made landscape photography an official hobby rather than just a thing I enjoyed doing with my iPhone. All of these photos were shot on a Fujifilm X-T5. These aren't in any particular order. It was hard enough just choosing ten!

1. Bryce Canyon

This is from spring break with the family last April. It was sunny and warm the day before, then an overnight snow blanketed the park. This was the first photo I took where I looked at it later and thought, "Holy cow. I took that?!?" I've since learned that I do that a lot, with my favorites being more accidental than deliberate. 😂

2. Flaming Gorge at Sunrise

On a campout with the young men in our congregation. I was up early in my tent and realized that I'd rather be out with the sunrise than lying in my sleeping bag failing to get more sleep. This photo is, I think, the best composition I made, even if I didn't know it at the time of shooting.

3. Flaming Gorge at Sunset

Same camping trip, but at sunset. There's something about the colors in this one (orange, blue, and deep green) that I absolutely love.

4. Flaming Gorge Overlook

My son and I were driving home from this trip and decided to take a different route than the one we came on. There was an overlook sign, so we pulled over. (How many overlooks have I driven past in my life?) I love the shapes and angles in this one.

5. Proposal Rock, Oregon

The couple in the distance here is my son and his now-wife, our first daughter-in-law. They were engaged at the time of this picture, but it wasn't posed. I just happened to look up at the right time. This was a family trip in August and we had just one day of rain that week. Rather than spend it inside all day, we braved this little excursion. Bad weather makes some of the best photos.

6. Sea cave in Oregon

I love these colors so much. As I'm starting out with this hobby, I instinctively look for vistas. But I'm learning to see things closer to me.

7. The view from Timp

Katie and I hiked Mt. Timpanogos this fall. (Well, most of it. I had to turn around because of a strained calf.) This isn't Timp itself, but the view across the valley from Timp's north side. I don't know how I got the sky to come out this color, but I love it so much, especially with the fall colors on the mountain.

8. James Irvine Trail, CA

We celebrated our 25th wedding anniversary and my 50th birthday in October, so Katie and I took a trip to the coastal redwoods in California. It was an absolutely magical week. This was a long hike, about 13 miles round trip. I'm glad I had more experience with my camera by the time of this hike, so I could better capture the contrasting light and dark of a redwoods trail.

9. King of Gold Bluffs

Coming back from this same hike. Elk roam this part of California, and I'd been hoping to see some, but there weren't any all day. And then on the drive out from Gold Bluffs Beach, we ended up driving through an entire herd of them. The patriarch was just ten feet from the car, so we paused to get his picture. That stare!

10. Sunrise over Capitol Reef

Another trip from the summer, while Katie was in charge of Girls' Camp for our congregation. I came down to help cook dinners. One morning I woke up early, couldn't sleep, and went into the park. The funny thing about this picture is that it's a pretty big crop of a much larger composition. Someday I'll have a lens long enough to punch in on details like this without much cropping.


Looking back at this year just has me even more excited for the year to come! I don't know where I'll be going, but I look forward to seeing beautiful places.

Adversarial vs. Cooperative Teaching

Whatever your opinion of AI, I found this idea of teaching being either adversarial or cooperative to be really interesting. I definitely find myself using both perspectives depending on the situation (and the student). I'd rather be cooperative the vast majority of the time.

Your prediction about the effect of AI on education depends on whether you see teaching as an adversarial process or as a cooperative process. In an adversarial process, the student is resistant to learning, and the teacher needs to work against that. In a cooperative process, the student is curious and self-motivated, and the teacher is working with that.

AI has Educators Polarized - by Arnold Kling - In My Tribe

Low-Ambition Companies Will Suffer from AI

There are two ideas in the AI Zeitgeist that you come across almost daily. The first one is this:

"If you want to be a competitive worker, you need to know how to work with AI. Because if you don’t, you’ll be outpaced by the workers who do."

The second idea, sometimes even part of the same take, goes like this:

"Companies that adopt AI are going to do layoffs because AI agents can do the work of humans that are slower and more expensive."

It baffles me that these two ideas can somehow coexist when they are very obviously at odds with each other. At the very least, they misrepresent how AI agents work and the role that humans play in managing them.

The Overstated Autonomy of AI

My mom lives in Southern California, where she can take a Waymo to get around. She absolutely loves it. She's relieved to not have to talk to an Uber driver, she likes the pace and consistency in how a Waymo drives, and she loves the convenience of doing it all from her phone.

But she still has to tell the Waymo where she wants to go. It doesn't decide for her. Nor does it schedule the trips for her. Even if in the near future it started to recognize her habits, noting how she wants to go to the store at 8am on Wednesdays, it would still be deriving its purpose from my mom's intentions. And this is all for a pretty narrowly defined task: go from point A to point B. AI today doesn't self-generate intention.

A manager who decides to replace employees with AI agents might think, "I'll just give these AI agents my intentions and manage the agents instead of people." Even assuming a fleet of agents can actually do extensive work autonomously today (they can't), there's still a huge constraint: the manager's intentions.

Intentions need detail to lead to good decisions. They need elaboration. You can't just tell an agent, "I want to make a lot of money," and expect it to fill in the blanks. There are too many blanks. If such a thing were possible, a manager could just tell their employees the same thing. "Go make me money." That's hardly management at all, if you think about it.

Anthropic, the makers of Claude, illustrated all of this perfectly in a video they released just this morning. Meet Claudius, the AI agent who runs a vending machine business.

In this video, Anthropic is transparent about some of the pitfalls they encountered trying to get an autonomous AI agent to run a simple, profitable business. Claudius was easily manipulated by customers, confused about what was real and what wasn't, and lost a lot of money. In the end, it only worked when they gave Claudius a boss (Seymour Cash, another AI agent). Of course, Seymour had his own bosses: the humans designing the experiment.

This is all part of work being done by Andon Labs, which designed a benchmark from this experiment called VendingBench (recently updated to version 2). The purpose of the benchmark is to test how well agents can sustain a set of complex tasks over a long time horizon. Even brand-new frontier models, while capable of making a profit, can still end their runs prematurely.

The reality today is that AI agents are not truly autonomous. In my opinion, they won't be for a long time to come. (There are good arguments they should never exist.) To succeed, they need to know how to choose what to work on, especially for anything longer than just a few hours at a time. For now, they don't have a way to meaningfully make that choice absent human direction.

(Perhaps in the near- or far-distant future AI agents will choose entirely on their own what problems to solve or what products to produce. In the dystopian versions of this, we have no reason to think that they'll want to produce anything that's actually helpful to human beings.)

Elaborate, detailed intention is what matters to a successful AI agent; otherwise it's a Waymo with no destination. This is why prompt engineering is a thing. And if you want a team of agents, you need to elaborate intentions for each of them, repeatedly. No one human manager can do this at scale, just as they can't effectively manage a team beyond a certain number of employees. The manager is the constraint.
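To make that concrete, here is a minimal sketch of what "elaborating an intention" can look like for a coding agent. Every specific in it (the task, the constraints, the acceptance criteria) is hypothetical, invented purely for illustration; the point is the difference in detail between the two prompts, not the wording.

    # A vague intention forces the agent to invent every missing detail.
    vague_intention = "Improve the site's performance."

    # An elaborated intention encodes the manager's judgment: goal, scope,
    # constraints, and a testable definition of done.
    # (All specifics below are made up for illustration.)
    elaborated_intention = """
    Goal: cut page-load time on the product pages.
    Scope: front-end only; do not touch the checkout service.
    Constraints: no new third-party dependencies; keep the current CDN.
    Definition of done: all existing integration tests pass and the
    JavaScript bundle does not grow.
    Report back: list every file you changed and why.
    """

Every line of the second prompt is a judgment call the agent cannot make for itself, which is exactly why intention, not raw agent capacity, is the scarce resource.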

Ambition

Recognizing the constraint, I can't think of any reason for a manager to replace employees with agents, except if the manager is low-ambition, thinking, "My old team could do X, and now I can have agents that do X." Why in the world, if you can use faster and cheaper AI, would you stop at X?

Instead, keeping employees and training them is the only reasonable thing to do, because it expands the constrained resource of intention. Employees who share the team's vision can elaborate that intention, making it granular and detailed enough for agents to go do the work.

Software development is where the biggest employment impacts are happening now. And companies are already starting to see the mistake of replacing developers with agents. It turns out junior developers are worth more with AI, not less. I've done a lot of coding with AI agents since May. When the agent screws up or produces something buggy, the likeliest cause is my own failure to give the coding agent a clear enough set of intentions. I've had to learn extensively about how different technologies work so I can get the agent to write better code. Laying off developers, instead of giving them AI agents to direct, is low-ambition and shortsighted.

There are definitely workers today performing commodity tasks, things like data entry that AI can do easily and quickly. But those workers are squandered if they're just laid off. What goes out the door with them is the ability to manage AI agents with intention. Organizations will need more of that, not less.

Even in the short run, I'm confident that the market will reward high-ambition companies that are hiring and training people to direct AI agents. Those companies will produce far more, far faster. And they will leave the low-ambition, fire-all-the-humans companies in the dust.

2025's biggest impacts, for better or for worse

This is absolutely fascinating, and I've already spent too much time on this page when I should be finishing grades. All the biggest scientific or technological changes of 2025, ranked.

How did the world change this year? Which results are speculative? Which are biggest, if true? We collected and scored 202 results. Filter by field, our best guess of the probability that they generalise, or their impact if they do.

Frontier of the Year 2025 | Renaissance Philanthropy

Journals are publishing fake citations, too

Apropos of my earlier post about the Springer textbook with fake citations, academic journals are seeing a rash of the same thing.

What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals.

Incidentally, the Heiss in this quote is my friend Prof. Andrew Heiss, one of the smartest people I know.

AI Chatbots Are Poisoning Research Archives With Fake Citations | Rolling Stone

30-year-old embryo born this year

A baby boy born over the weekend holds the new record for the "oldest baby." Thaddeus Daniel Pierce, who arrived on July 26, developed from an embryo that had been in storage for 30 and a half years.

This happened over the summer, but I hadn't seen the news about it. My faith doesn't have precisely defined beliefs on when life begins (at conception, at first heartbeat, brain waves, quickening, etc.), but it makes for a fascinating question as to how a religious person might make sense of this outcome. If life begins at conception, was Thaddeus alive while frozen for 30 years?

Interestingly, it was a Christian embryo adoption agency that helped arrange the pregnancy.

A record-breaking baby has been born from an embryo that's over 30 years old | MIT Technology Review

Machine learning textbook from major publisher has hallucinated sources

I can't adequately stress how bad it is for a publisher as big as Springer to screw up like this. A new textbook of theirs had extensive hallucinated sources.

Based on a tip from a reader, we checked 18 of the 46 citations in the book. Two-thirds of them either did not exist or had substantial errors. And three researchers cited in the book confirmed the works they supposedly authored were fake or the citation contained substantial errors.

And then there's this:

The 257-page book includes a section on ChatGPT that states: "the technology raises important ethical questions about the use and misuse of AI-generated text."

It costs $169, in case you're wondering.

Springer Nature book on machine learning is full of made-up citations | RetractionWatch

Questions Are Velcro

The late Harvard business professor Clayton Christensen is famous for quite a few things, most notably The Innovator's Dilemma and How Will You Measure Your Life? I just want to add one more great idea to his long list of accomplishments.

Nine years ago, I was visiting a friend of mine in Boston and attended church with him that Sunday. If you're not familiar, Christensen was a Latter-day Saint and lived in the area. He was also a local church leader, in a role known as an Area Seventy. Part of that responsibility meant speaking in different congregations.

His talk that day was about the importance of spiritual seeking, and I enjoyed it just for that alone. But it was during that talk that he said something quite simple that has been useful to me ever since. Christensen was talking about the power of questions and used the following metaphor to describe it. I don't have the exact wording, but this is basically what he said:

"Questions are the velcro that answers stick to."

A pithy idea like this wouldn't be just for one place or moment, so it shouldn't be surprising that Christensen shared it before, like in this conversation with Jason Fried or this tech talk to employees of The Church of Jesus Christ of Latter-day Saints. Despite the other places he's credited for it, I haven't seen the idea get much traction elsewhere, so I wanted to give it my own little boost.

Velcro for learning

I don't want to belabor the metaphor, other than to say it's so smart that it's obvious in the moment you hear it. Think about all the data hitting your brain every day. Most of it bounces off, and for good reason. Not every fact is important enough to claim your scarce attention.

But questions are perfect receptacles for data. They focus us enough that an answer has just the place it needs to settle in our mind. To learn, to have information actually fit into your brain, you need the questions laid down first so the answers can stick.

How do we get good questions? For starters, if you want to learn a skill or master a topic, start using it. Our deficiencies generate plenty of questions, and almost certainly the right ones.

Plus, there's a momentum to questions. An answer, once received by a curious mind, typically leads to the next question, and to the next. I'm sure you've felt this many times. (In this way, AI is an exceptional teacher, hallucinations and all, never tiring of one smart or stupid question after another.)

There's more, though, than just making knowledge practical. I don't think that captures it fully. Not every question we have points us to practical knowledge. A question might come because we find something interesting or we find something vexing. Questions are born from curiosity, and curiosity has many different origins.

I think there are different kinds of velcro, too. Idle curiosity is idly forgotten. In this way, maybe ChatGPT is a problem, because it immediately answers without much effort from us. (Imagine AI that responded with "Why do you wanna know?") Light questions don't have the same sticking power as the pestering ones that intrude and demand satisfaction. I love the way David Brooks describes this kind of curiosity:

The next stage of any calling or vocation is curiosity. When you're in love with someone, you can't stop thinking about her. You want to learn all there is to know. Curiosity is the eros of the mind, a propulsive force. It can seem so childish. Throughout history people have been nervous around curiosity. You never know where it will take you. One of Vladimir Nabokov's characters called it the purest form of insubordination. Curiosity drives you to explore that dark cave despite your fears of going down there. Curiosity is leaping ahead of the comfortable place you've settled and dragging you into the unknown.

Those sorts of questions are uncomfortable precisely because they're waiting places, gaps we can feel in our minds. Questions are incomplete thoughts that beg for finality but aren't guaranteed to get it. The natural response is one of two things: (1) to stay annoyed by a question until we find its answer, or (2) to yank it out of our brains to avoid the discomfort altogether. In either case, questions have obvious power over us.

A lack of good questions is why you found a class boring, by the way. Students complain about required, boring classes as "useless," but the real problem is that the students didn't have any questions that the class answered. Which brings me to my next thought…

Velcro for teaching

Whether they know it or not, I think all good teachers use the velcro principle. I'm sometimes a good teacher and sometimes not, and in those moments when I'm not, it's typically because I'm not giving my students good questions to start with. I remember my brother sharing a similar metaphor once as we were designing an ethics training; he said that all good training starts with a wrestle. Another way of phrasing it is that all good learning starts with a question that matters.

As teachers, we shouldn't expect students to bring all the good questions. Instead, we need to point them to the questions that make our teaching sticky to their brains. Jonathan Frakes had the right idea. :)

To use questions in teaching, we might point to the origins of a big idea. What made Einstein wonder about time and space? At sixteen, he wondered what a light wave would look like if he traveled at light speed:

After ten years of reflection such a principle resulted from a paradox upon which I had already hit at the age of sixteen: if I pursue a beam of light with the velocity c (velocity of light in a vacuum), I should observe such a beam of light as a spatially oscillatory electromagnetic field at rest.

That question was ten years in the answering! What a great way to teach relativity, using the same questions Einstein was asking himself.

We might also evoke questions with examples, or roadmaps, or exercises. This blog post by Neel Nanda is exceptionally good and thorough. I recommend it highly.

Every great idea came in response to a question. Imagine giving that question to whoever comes up with the next one.

—

I'm sure there are plenty of other places or ways that this metaphor of sticky questions has been used. But I wanted to pay a small tribute to Clayton Christensen, who shared it with us that Sunday in Boston. And so, in the same missionary spirit he was known for, I'll end with a promise from the Sermon on the Mount, noting how the asking comes first:

Ask, and it will be given to you; seek, and you will find; knock, and it will be opened to you. For everyone who asks receives, and he who seeks finds, and to him who knocks it will be opened.

Contrarianism isn't intelligence

Alex Tabarrok drew attention a couple of weeks ago to this study: disagreement with the scientific consensus on controversial topics corresponds with worse understanding of non-controversial knowledge, like the fact that the oxygen we breathe comes from plants or that electrons are smaller than atoms.

The authors then correlate respondents' scores on the objective (uncontroversial) knowledge with their opposition to the scientific consensus on topics like vaccination, nuclear power, and homeopathy. The result is striking: people who are most opposed to the consensus (7, the far right of the horizontal axis in the figure below) score lower on objective knowledge but express higher subjective confidence. In other words, anti-consensus respondents are the most confidently wrong—the gap between what they know and what they think they know is widest.

Confidently Wrong - Marginal REVOLUTION

AI is a magnifier, which is wonderful and terrible

"Money doesn't make you into a different person; it just makes you more of who you already are."

Not a semester goes by without my sharing this nugget of wisdom with my business ethics students. I don't feel like I can take credit for the idea, and I can't remember where I first encountered it. But like most true things, it sticks in the brain once you hear it.

One of the fortunate/unfortunate things in life is that the reach of our character is constrained by our circumstances. Because none of us is all-powerful, what we want is held in check by what's possible. To the degree we want to do good in the world, it's unfortunate that we don't have more resources to do good with. And insofar as we want bad things (anything that makes ourselves and others worse off), it's a blessing that our wants go wanting.

Thus, money has the power to amplify our character. Impatient? Money gives you power to get things faster. Prideful? Money buys a lot of praise. You get the idea. There's really no attribute that money can't make more of. Generous? Here's hoping you end up with more money.

In this sense, AI is like money

There are very reasonable concerns that people have about what AI is going to do to all of us, known as the "alignment" problem. What happens if AI isn't aligned with proper human values?

AI is an amplifier, and this, in my opinion, is the more immediate and urgent alignment danger. I worry less about the "it's going to raise a robot army and kill us all" kind of alignment fear. I worry much more about the "it's going to make us all kill each other" problem.

I say all of this despite being excited every day about what AI makes possible. With a technical background but no coding expertise, I've been on a tear with AI this year. Since May, I've built projects that I'd contemplated for years but didn't have the resources or skill to make happen. I'll have more to share about those projects in posts to come, but here are some quick highlights:

  • I've built a from-scratch website to collect stories of helping experiences, including features like a chatbot that gives advice drawn from real experiences. (Launching early next year!)
  • I've made my own personal AI assistant that tracks my todos, smartly searches the 2,000+ articles I've saved up over the years, and even helps me exercise more regularly. It uses data stored privately on a Mac Mini in my office.
  • With colleagues, I've built a benchmark that measures how much different LLMs will help a user rationalize unethical decision-making. The idea went from concept to the first set of results in just two days. We're validating the benchmark now and hope to have a paper out soon.

The most striking thing for me is how quickly an idea becomes reality now that I've gotten adept with AI tools. In fact, the funny problem I've had is that building has become so easy that I'm quickly distracted by a new idea when I should be buckling down and finishing the projects I've already started. Building is just really fun, and there's a buzz from getting to the 80% version of an idea in mere hours. In other words, I've wasted time when I could have been finishing important things.

Exciting and Scary

All this is why I think the most reasonable reaction to AI is to be both excited and scared. The ability to do more doesn't equate with wise judgment or good character. What we bring to AI matters at least as much as what AI can actually do.

If you're a mediocre artist, for example, AI is not going to make you a good artist. This is a tough thing to come to terms with, which is why Tilly Norwood somehow exists. Bad taste is why OpenAI's Slop-Tok app, Sora, probably won't be long for this earth as people download it for the novelty, then find nothing worth staying for.

Good taste requires patience and discernment. It means exploring and learning and consuming beautiful things deliberately, the kind of things that are far too irreducible for Instagram Reels. Good taste takes work.

In education, the depressing reality is that too many students will use ChatGPT to give them the answer, but they won't use it to teach them the answer, despite how miraculous it is to have the smartest tutor in the history of the world at their disposal. Here again, the problem is in the wanting, because the chance to actually learn is just one more prompt away.

But those who have figured it out are using AI to learn faster than they ever have before. Since May, I think I've told my wife at least a dozen times that I can't remember having learned as much in as short a time span. Of course, I don't have the pressure of homework in required classes that I don't want to take, so I can see where students are coming from.

Among the many flaws of LLMs, sycophancy is probably the most pernicious. AI will do ridiculous things like praising us for being genius babies (long video, but worth it) and horrific things like encouraging suicide. It's clear that modern AI products struggle to strike a balance between likability and honesty, and so they accelerate every idea, no matter how terrible it is.

In contrast, AI can work like jet fuel for good ideas. The technology is accelerating science dramatically. Berkeley researchers are using it to iterate and discover new materials. A group in Australia used AI to identify mechanisms for early-onset Parkinson's and is on track for a drug to treat it.

But it takes discipline to use AI to refine your ideas. You have to invite its criticisms and take time to actually evaluate them. You have to be willing to resist it when it glazes you, rather than being drawn into the flattery. AI as an idea magnifier depends on our character as much as it depends on the technology.

What we bring matters

To wrap up: with the help of AI, I remembered an article I had read on the magnifying effect of money, quoted below. It all holds for AI as well. As I said at the start, it's wonderful and terrible.

If you view the world through the lens of scarcity and survival, money will only amplify that feeling of inadequacy. But if freedom is what defines you, then money will feel abundant, no matter how much you have. If power and influence is what you want, then money will drive the nature of your relationships in that direction… After all, if you don't give money its purpose, it will end up defining yours.

Lawrence Yeo, "Money is the megaphone of identity" at moretothat.com