Apple at 50

Apple is 50. (And, hey, me too!) It’s kind of a stunning number, really, when you consider how tumultuous the tech industry can be. What a 50 years, though.

It feels weird to pay tribute to a corporation, so instead I want to think of this post as appreciating the products and honoring the people at Apple whose work has made a dramatic difference in the lives of so many. My entire professional life has been intertwined with what these talented people have made.

As I was writing down these experiences, it occurred to me that there’s a theme in them. There’s value in not doing what everyone else is doing. Certainly that’s been at the heart of Apple’s continued success and its most important products. That’s been my experience, too.

Unlocking iMovie

iMovie led to one of the most fun experiences I’ve had. After grad school, I started a side project blogging about iMovie ’08, a much-resented but major update to the video editing software that Mac users relied on for things like home videos, class projects, and real estate walkthroughs. Everyone was so put off by the dramatic changes that they undervalued the new benefits, like nondestructive editing.

So I started blogging about it. I’d write posts on how to use the new software, about hidden ways of doing things and how the new features were actually really impressive. It turned out I was the only person on the internet doing this, instead of just criticizing iMovie like everyone else.

Long story short, my little blog—Unlocking iMovie—got the attention of both Randy Ubillos at Apple (basically the father of desktop video editing) and David Pogue, who was then at the New York Times. From the blog, I ended up getting three trips to Apple to meet with Randy and the iMovie team and multiple published iMovie: The Missing Manual books I co-authored with David. (This is why David so kindly agreed to be on my podcast a little while back. Also, his book on Apple’s first 50 years is exceptionally good.)

This was also an interesting fork in the road, professionally speaking. I imagine an alternate reality where I leaned more into writing about Apple, which I think I would have really enjoyed. But around this time I was teaching at BYU as an adjunct professor and the prospect of getting hired as full-time professional faculty came up. That ended up being my path. I’m lucky to have found my calling, so I’m doing what I’m meant to do. But it’s fun to think about that other life.

EndPin, LLC

During my undergrad, I took a semester off to work full-time for a local company in their IT support group. I was the Mac specialist tasked with supporting the design team, the only Mac users in the whole company. It was a decent job and I briefly considered committing to it as a career path.

But I eventually realized it wasn’t for me, and decided to go to law school instead. Instead of just leaving, though, I came across the idea of starting a Mac-support consulting business that we named EndPin, LLC. (An endpin is the pointy stem used to hold up a cello. That was the instrument my wife studied for her degree at BYU.) At that time, there were little pockets of Mac users all over the place, in local design firms or in small groups within larger companies. So I quit, signed on my now-former employer as my first client, and added a few other businesses. My coworkers thought it was crazy.

This turned out to be an excellent decision. I hired a couple of friends to help with the tech support work and ended up having a small business that paid for our groceries through four years of a JD and MPA. At one point, the author of Who Moved My Cheese, Spencer Johnson, was even a client.

One company over many years

In the mid-’80s, my oldest brother, Peter, got an Apple IIe for Christmas. That was my first experience with a personal computer. It was magnetic to my other brothers and me. Despite Peter’s best efforts to lock his bedroom door and even the case holding the floppy disks, we regularly found our way in to play the handful of games he had, like Choplifter and Lode Runner.

Since then, I’ve owned a lot of Apple stuff. Here are the notable ones that still feel magical to me:

  • PowerBook 1400cs. A laptop before everyone had laptops. My roommate tripped over the power cord one day and broke the power socket off the logic board. I found a used replacement board online for $600. Back then, repairs like this were possible.
  • iMac DV in lime green. I’m sad that young people today don’t have the chance to see a product like the iMac come to life. Truly groundbreaking.
  • Titanium PowerBook G4. This was a purchase for law school and I absolutely loved that computer. I don’t think a product has felt quite as cool to use as that one. Then my two-year-old son poured an entire glass of water into the keyboard and fried it. During law school finals. RIP.
  • iPod (3rd gen). My first iPod. Freaking awesome. What else to say?
  • iPhone. The original, which I bought after Apple dropped the price by $200. My wife accidentally dropped it on the pavement and broke the screen. The iPhone was in such demand internationally at the time that I sold it on eBay, broken screen and all, for more than the cost of a new iPhone 3G when that came out two months later.
  • iPad Pro, 11in. I remember a moment when I was using this and thought about how my younger self would have absolutely freaked out that such a product existed. The iPad is still my most used and most enjoyed Apple product.

Over the years, I’ve seen the company stumble and amaze, but it’s been a consistent part of my work and personal life. My career is a bit of an oddball, and maybe that’s part of the same core instinct to be different. I’m grateful to the intensely talented people who made all of these incredible products, doing things their own way.

Anthropic is winning against DoD

As predicted, Anthropic has the obviously superior legal argument, and Judge Lin has granted a preliminary injunction shutting down (for now) the Government's actions against the company.

Here's an excellent and detailed overview from Zvi Mowshowitz. Quoting Zvi, who's quoting the judge:

At bottom, Anthropic has shown that these broad punitive measures were likely unlawful and that it is suffering irreparable harm from them. Numerous amici have also described wide-ranging harm to the public interest, including the chilling of open discussion about important topics in AI safety. The motion for a preliminary injunction is granted.

It's not going to get better for the Department of Defense, but they are stubborn, so this is likely to make them look worse as the case progresses.

My Favorite YouTube Channels About AI

Hands down, the current best platform for learning about AI is YouTube. The challenge is finding the channels that have substance to them, rather than ones promising to vibecode your way to a six-figure side hustle.

I just put this list together for my students and thought it was worth sharing here. Claude went through all of my YouTube channel subscriptions and culled the ones related to AI topics. It’s a pretty good list. Not comprehensive, though, so please share in the comments if there’s a channel you value.

So here’s a curated list of AI-focused YouTube channels, organized by category. These range from research deep dives to practical tutorials to big-picture analysis of AI's impact on society and business.


AI News, Analysis & Commentary

Channels that keep up-to-date on everything from product news to advances in AI research.

  • AI Explained — Covers major AI developments with depth and nuance. Creator of SimpleBench (an LLM reasoning benchmark) and LM Council. One of the best channels for understanding what new AI capabilities actually mean.
  • Department of Product — AI and technology news analysis with weekly briefings and deep dives. Good for staying current on how AI intersects with product strategy and business.
  • Caleb Writes Code — Part editorial, part informational on AI. Thoughtful commentary with clear illustrations.
  • Claudius Papirus — An AI narrator exploring AI — from research papers to the tech behind real products. Quirky, but genuinely interesting when you remember this is an AI narrator.
  • bycloud — Frontier AI research breakdowns and top AI lab analysis with intuitive explanations (and memes).

AI Research & Education

Channels that explain how AI actually works — from research papers to foundational concepts. These are some of my all-time favorite channels.

  • Welch Labs — One of the absolute best in the space. Beautiful AI education content. Author of The Welch Labs Illustrated Guide to AI. Makes complex concepts visually intuitive.
  • 3Blue1Brown — Well-known science explainer on general math topics. Has an incredible series on neural networks.
  • AI Papers Academy — Simplifies AI research papers into understandable breakdowns.
  • Julia Turc — AI explainer videos from a former Google Research engineer, now startup founder. Combines technical depth with accessible presentation.
  • HuggingFace — The official channel for the leading open-source AI platform where the community collaborates on models, datasets, and research. Great for understanding the open-source AI ecosystem.
  • Anthropic — Official channel from the AI safety and research company behind Claude. Features research talks, product updates, and perspectives on responsible AI development.

Practical AI Tools & Tutorials

Learn how to actually use AI tools effectively — from prompting to workflow integration.

  • Prompt Engineering — Run by Muhammad, an AI/ML expert with a PhD and a Google Developer Expert for ML/AI. Practical tutorials without the fluff and hype.
  • Sam Witteveen — 11+ years in deep learning, Google Developer Expert for ML. In-depth tutorials on LLMs, transformers, and autonomous agents.
  • Peter Yang — Extremely practical AI tutorials and expert interviews designed for busy people. Cuts straight to what's useful.
  • Ray Amjad — Focused specifically on being productive with AI. Cambridge physics background brings analytical rigor to practical AI usage.
  • AICodeKing — Reviews AI tools that are actually useful (and sometimes free). Good for discovering new tools.
  • Futurepedia — Helps you learn AI tools and automations.
  • AIchievable — Compares different AI models for text, image, and video generation. A bit hypey, but useful for understanding which tools work best for specific tasks.
  • Fireship — Funny code tutorials and tech news. Covers AI developments frequently alongside broader programming topics. Great for quick, digestible takes on new AI tools and trends.

AI-First Development & Coding

For those building software with AI — coding assistants, AI-powered development, and engineering practices.

  • Theo - t3.gg — Software developer and creator of T3 Chat (an AI product). Covers AI from a builder's perspective alongside TypeScript and web development.
  • Robin Ebers — 20+ years as an engineer, now teaching AI coding for non-technical people.
  • GosuCoder — AI, agents, and AI benchmarking from a 20-year engineering veteran. Thorough and enthusiastic coverage.
  • Matt Pocock — "Become an AI Hero" — tips, tricks, and tutorials for real engineers solving real problems. No vibe coding; focused on practical AI-assisted engineering.
  • Developers Digest — Focused on the intersection of AI and development. Short, practical content.
  • Brian Casel — Full-stack product builder who's gone all-in on AI-first development. Shows how AI is transforming software product creation.
  • AI LABS — AI tools and models for coding. Explores how AI saves time building full-stack applications.
  • Owain Lewis — 20 years in software engineering, now building with AI daily. Shows how to navigate this new development landscape.
  • Simon Scrapes — Deep coverage of Claude Code, agentic systems, and n8n. Very practical tutorials on building with AI tools.
  • AI Jason — Product designer sharing AI experiments and products. Helpful if you're interested in building AI-powered apps.

AI Automation & Agents

Channels focused on automating workflows with AI and building autonomous agents.

  • n8n — Official channel for the n8n workflow automation platform, which combines AI capabilities with business process automation. Great for learning no-code/low-code AI integration.
  • The AI Automators — Brothers Alan and Daniel Walsh share real-world AI automation implementations for online businesses.
  • Dylan Davis — Professional "AI Whisperer" at Gradient Labs by day, sharing AI automation tricks on the side. Good entry-level content.

AI, Business & Society

Broader perspectives on how AI is reshaping work, economics, and society.

  • Dwarkesh Patel — Essential listen because he has access to some of the top minds in AI. Opinionated pro-AI perspective, but always thoughtful.
  • JeredBlu — AI strategist and product veteran. Covers AI tutorials, product-strategy breakdowns, and AI news with a focus on privacy and accessibility.
  • Unsupervised Learning — "Building AI that upgrades humans for the Great Transition." Explores the bigger picture of AI's impact on humanity.

Compiled from my YouTube subscriptions — March 2026

What if we teach students how to use AI critically (instead of teaching them not to use it)?

I don't see our choice as "AI or no AI" any more than past generations could halt the spread of the printing press — that widely decried threat to scholarship. Children born today will never know a world without AI. The majority of U.S. teens already use AI chatbots, and over half turn to them for schoolwork. Students will reach for these tools, whether universities ban them or not.

In the not-distant future, banning AI in college is going to feel like banning spell-check or the Internet. We must find ways to teach, and for students to learn, in the age of AI; trying to stand in its way is not sustainable.

A college student’s perspective on using AI in class | NPR

The Age of Living Software

The world of software is changing so quickly that it's hard to keep up with all of the new ways we can use it. One of those ways became apparent to me yesterday.

Custom Software

I'm on the admissions committee for my department, and about three or four weeks ago, we all met together to discuss ways that we could use AI to streamline and enhance our review of student applications to our Master of Public Administration program.

I took notes on everybody's feedback and ideas, and I also recorded the meeting and exported a transcript. I used these for a two-hour planning session with Claude exploring and detailing what the app would be like. After that, Claude Code and I spent probably another five to six hours building and tweaking it and getting it ready to go for everybody. This is a fully functional Next.js/React app with a Convex database using email auth, and then connected to my university's OpenAI endpoint.

The app ingests a CSV file with all of the student applicants and their details, then ingests the PDFs of every student's application. Those PDFs contain things like letters of recommendation, their statement of intent, resumes, and transcripts. All the PDFs were then analyzed by GPT-5.2 Thinking—assessing things like grades in quantitative classes, a demonstrated interest in public service, and so on.
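For the technically curious, the ingestion-and-analysis step can be sketched roughly like this. To be clear, this is an after-the-fact illustration, not the app's actual code: the function names (`read_applicants`, `summarize_application`), the CSV columns, and the stand-in `llm` callable are all invented for the sketch.

```python
import csv
import io

def read_applicants(csv_text: str) -> list[dict]:
    """Parse the applicant roster exported as CSV (columns invented here)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def summarize_application(applicant: dict, pdf_text: str, llm) -> dict:
    """Ask a model to assess one application from its extracted PDF text."""
    prompt = (
        f"Assess this MPA applicant: {applicant['name']}.\n"
        "Evaluate grades in quantitative classes and demonstrated "
        "interest in public service.\n\n" + pdf_text
    )
    return {"applicant_id": applicant["id"], "summary": llm(prompt)}

# Tiny demo with a stand-in for the real model call.
roster = "id,name\n1,Ada Lovelace\n2,Alan Turing"
fake_llm = lambda prompt: "Strong quantitative background."
applicants = read_applicants(roster)
reviews = [summarize_application(a, "(extracted PDF text)", fake_llm)
           for a in applicants]
```

In the real app, the `llm` callable would wrap the university's model endpoint, and the PDF text would come from an extraction step; the point of the sketch is just the shape of the pipeline, roster in, structured assessments out.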

Everybody on the committee loved it. I’d show screenshots, but they contain private data, so I’ll just describe it. As they reviewed each applicant assigned to them, reviewers saw a panel on the left-hand side showing the AI summary of the applicant, their statement of intent, their transcript, and their letters of recommendation. In the middle was a view of the actual PDF so that the student’s full application was there to read. On the right-hand side was our scoring mechanism, where we scored each candidate on a variety of dimensions and left comments. Not a design breakthrough, but tidy, efficient, and orders of magnitude more convenient than our previous approach.

Living Software

This was all really cool and it worked really well. But the especially fun part (and the part I wanted to comment on in this short post) was the page that I had created for our admissions decisions meeting. It had all the applicants listed where you could click on one and it would expand to show each of the reviewers' comments and scores. We used it together to go through all 123 applications and make admit, deny, or waitlist decisions on each one.

But here's the amazing part: this meeting review page was just something I designed quickly, thinking through the basics of what we needed. Then throughout the first hour of the meeting, as we came across user interface improvements we could make, we just made them.

“It would be really nice if we could see a count of each declared emphasis and how many we’ve admitted so far.”

“Great idea! Give me a minute.”

“Can we make this part float, like a frozen row in Excel?”

“Sure!”

Each time, I pulled up Claude Code to prompt the change and pushed to GitHub; Vercel rebuilt, we refreshed the page, and in a few short minutes the software was substantially better. We easily made a dozen changes to the app on the fly.

As a treat, I secretly had Claude Code make a celebration screen that appeared when we made the final decision. Digital confetti makes everything better.

It was all frankly amazing, and it shows where we are now—where software doesn't have to be a "take it or leave it" proposition. (This being how most users have been forced to experience software for decades.) Instead, the app was a living and adaptive thing that fit our needs in the moment. Such a model of software is mind-blowing when you think about it. "One size fits all" is an old paradigm now, and it's exciting to think about software that adapts and changes in a living way as you use it.

Anthropic Will Win Against DoD

Regarding what I wrote yesterday, this piece is an expert overview of the laws at stake and why DoD’s supply chain risk designation for Anthropic is doomed to fail.

From the government's perspective, Claude does pose some concerning vendor reliability issues. But the specific actions Hegseth and Trump took have serious legal problems. The designation exceeds what the statute authorizes. The required findings don't hold up. And Hegseth's own public statements may have doomed the government's litigation posture before it even begins.

Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System | Lawfare

Anthropic doesn’t have to work for anyone, including the government

I’ve seen enough takes on the Anthropic/DoD conflict since it all went down last week, and I’m surprised at how often this important principle is being left out of the conversation:

There are many freedoms enjoyed by Americans—and therefore American businesses. One of them is that we don’t have to work for the government if we choose not to.

If I want to be employed by the government, I can choose from the range of options the government offers. If they want to hire me, I can work for Reclamation and help maintain dams, or for the Social Security Administration to process claims, or for the military to defend the United States. But once I’ve decided to work for Reclamation, it doesn’t mean the U.S. Government can also require me to work as a janitor, a Congressional aide, or a spy. Note that it doesn't matter if what the government wants is entirely legal. If we can’t come to an agreement, they can fire me or I can quit.

Anthropic chose to quit, and it’s nonsense that this is some sort of veto over the powers of a democratically elected government. You can argue that Anthropic shouldn’t have the beliefs they have about AI and military action or government surveillance. You can make a moral claim that they should want to support the military. But if your argument is that Anthropic refusing to do so is some sort of corporatocracy, then you're ignoring essential rights.

The point isn’t that corporations should have power over government. The point is that people, and therefore their businesses, have power above government. That power appears in the voting booth, of course. But it also comes in all the other freedoms we enjoy because of the limits on Constitutionally designed government.

The Department of Defense offered Anthropic a job, which the company accepted. When the terms of employment changed, Anthropic quit to uphold their values. This is fundamentally how a free society with a limited government should operate.


Footnote: I get that there are laws entitling the government to force its citizens into certain behavior, but these are constrained by the First, Fourth, Fifth, and Fourteenth Amendments of the Constitution, as a start. All of these favor Anthropic’s right to refuse the government’s demands.

If AI companies were consumer tech from a decade ago

Late-night noodling, but even in the light of day this still feels right to me.

If we mapped current #AI companies to consumer tech from the 2010s: Anthropic = Apple. Focused on high quality for a smaller market. Stubborn and opinionated in annoying ways, but innovating in important ones. Sets trends. Genuine in its principles, whether or not you agree with them. 1/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

OpenAI = Google. Market-defining from the start. Now wants to be everything for everyone. Staffed by nerds who are at odds with management, and management wins. Began with noble intentions (remember "Don't be evil"?), but revenue overruled. Has a graveyard of failed public projects. 2/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Google/DeepMind = Microsoft. Workman-like quality, only the best at one or two things. Preserving the ecosystem drives every decision. Staffed by some of the smartest people around who get slowed down by the bureaucracy. Won't lose, but won't win either. 3/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Meta = RIM (Blackberry). Already lost but doesn't know it. Wastes money on big swings that are only affordable because of its legacy business. Corrosive leadership doesn't realize the best hope for the company is to step aside. 4/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

xAI = Samsung. Fast follower only. Plays the scrappy underdog, but really just flash over substance. Run by a corrupt, image-obsessed leader who uses government influence for profit. Has a rabid, contrarian fanbase, mixed with people who don't care enough to pay for something better. 5/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Perplexity = Snapchat. A truly unique offering, but the people who don't use it don't get why it exists. The people who do use it love it. Likes doing weird things as a way to stand out. Always treated like a quirky little brother. 6/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

DeepSeek/Kimi/Z.ai/etc. = Huawei/Xiaomi/Oppo/etc. Providing insane value as long as you are willing to ignore the idea that the Chinese government uses them to spy on you. Tinkerers & geeks love them, of course. An ecosystem of YouTubers will rush to review every new model. 7/x

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Obviously I'm opinionated, and this is not a perfect list but fun to think about. Anything I missed? 8/8

— Aaron Miller (@aaronmiller.info) February 3, 2026 at 10:59 PM

Why Love-Work Is Different Than Hate-Work

A great read, and not just because of how deeply I felt the distinction between love-work and hate-work. I also really enjoyed how Horton described research eras in psychology. All fields have this sort of thing, and knowing them helps make sense of how we got to our current kind of thinking. (And that we’ll someday leave it behind!)

You must know, on some level, that doing work you love is psychologically different from doing work you hate. They don’t just feel different, emotionally. The two forms of work have different psychological textures. They involve different actions. They have a different cadence, different aims, different outcomes, and draw upon different wells of energy.

Why Your Brain Fights You - by James Horton, PhD.

AI verifiability, but compared to what?

Much of the advice around using AI is that if you use it, then you need to verify what it produces. This is presently good advice. But I'm doubtful it will be good advice in the long run.

Consider how little verification happens in large institutions by leaders who are making decisions. Of course, many bad decisions get made this way, but also many good ones. The difference is in the quality of the work put before the decision-makers. Eric Drexler explains it well in this recent article (emphasis mine):

Consider how institutions tackle ambitious undertakings. Planning teams generate alternatives; decision-makers compare and choose; operational units execute bounded tasks with defined scopes and budgets; monitoring surfaces problems; plans revise based on results. No single person understands everything, and no unified agent controls the whole, yet human-built spacecraft reach the Moon.

AI fits naturally. Generating plans is a task for competing generative models—multiple systems proposing alternatives, competing to develop better options and sharper critiques. Choosing among plans is a task for humans advised by AI systems that identify problems and clarify trade-offs. Execution decomposes into bounded tasks performed by specialized systems with defined authority and resources. Assessment provides feedback for revising both means and ends. And in every role, AI behaviors can be more stable, transparent, bounded, and steerable than those of humans, with their personal agendas and ambitions. More trust is justified, yet less is required.

Framework for a Hypercapable World | Eric Drexler

Professors Are Conservative, Actually

Politically, academics are much more liberal than the average person. But Paul Bloom makes the excellent point that, in areas related to their work, academics are actually deeply conservative.

Asking a prof about AI is like asking a taxi driver to weigh in on Uber. I think I have good reasons for my (conservative) defense of tenure, but you’d be forgiven for assuming that, having worked for and benefited from the protections of tenure, I don’t want them taken away. Part of professors’ unwillingness to give up on lectures is that they take a long time to prepare—once that time is invested, we don’t want to start anew. We certainly don’t want to transform the university in a way that risks making us obsolete.

I feel this deep in my bones. It’s so hard to get universities to change, and professors are the primary reason why. AI—as I’ve written before—is coming for us in a way that most of my colleagues are not at all prepared to face. But they will have to face it in the end.

Why are so many professors conservative? - by Paul Bloom