AI usage

Here’s how I use AI and LLMs in my work
Last updated: July 19, 2025

Everything on this website, in my research, and in my courses is written by me, personally—not by an AI.

I enjoy creating things. I like being the human behind all this stuff.


I think LLMs are helpful tools for some things, awful and useless tools for other things, and always ethically fraught.

Here’s my current thinking about using LLMs:

The /ai manifesto

Check out “The ‘/ai’ manifesto” to learn why I made this page! See the /ai public database for more.

Text

I am deeply opposed to LLMs for writing. LLMs are designed to create statistically-likely chains of text. The text they produce might look like something a person would write, but it is philosophically meaningless. It is bullshit,1 or “speech or text produced without concern for its truth.”2

1 I’m a super straight-laced Mormon and, like, never ever swear or curse, but in this case, the word has a formal philosophical meaning, so it doesn’t count :)

2 Michael Townsen Hicks et al., “ChatGPT Is Bullshit,” Ethics and Information Technology 26, no. 2 (2024): 2, https://doi.org/10.1007/s10676-024-09775-5; Harry G. Frankfurt, On Bullshit (Princeton University Press, 2005).

I believe that the process of writing is actually the process of thinking. Text that is meaningless doesn’t reflect thought. As the increasingly-common-in-our-LLM-times adage goes, “why would I bother to read something someone couldn’t be bothered to write?” Reading LLM-generated text is boring and gross—I want to read what humans think!

Tools that are custom-designed for specific writing tasks—like Apple’s Writing Tools or Grammarly—can sometimes be useful, but I don’t generally like using them.

Images and video

Ew no. I personally find the style of fully LLM-generated images and video to be aesthetically repulsive.

I find that tools custom-designed for specific artsy tasks, like Photoshop’s object selection tool, are useful, and I do use them—but those tools existed before the creation of ChatGPT and Adobe’s pivot to shoving LLM stuff everywhere.

Code

This is one area where I actually find LLMs useful. LLMs are designed to create statistically-likely chains of text, and unlike human languages, computer languages are fairly rigid in what syntax is allowed—when syntax is wrong, things break. Because so much of LLMs’ training data contains code, they do a fairly good job of producing plausible code.

This is especially the case with languages and packages that have good coverage online, are widely used, and have good documentation. In these cases, I tend to use LLMs for two general code tasks:

  1. LLMs as fancy documentation: I imagine code-focused LLMs as an amalgamation of StackOverflow + GitHub search + reading the documentation—GPT algorithms can typically figure out the most average, standard-looking way to approach specific coding issues. Like “here’s code for a bar chart with Observable Plot; how would I format the y-axis to have commas like 1,000 instead of 1000?” All the pieces for that answer are somewhere in the giant haystack of training data—statistically, the LLM can build something plausible. (See the first sketch after this list.)
  2. LLMs to translate between languages: I’ll often use LLMs to translate from things I know well into things I don’t. Like “here’s some R and ggplot code to run a regression and make a scatterplot; how would I do this with Python using Seaborn Objects?” That answer is also somewhere in the giant haystack of training data—statistically, the LLM can get there. (See the second sketch after this list.)
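To make that first task concrete, here’s the kind of answer I’d expect back, translated to my home turf of R and ggplot instead of Observable Plot (the comma-formatting question is the same either way). This is a minimal sketch with made-up data; the key piece is scales::label_comma() from the {scales} package:

```r
library(ggplot2)

# Made-up data, just for illustration
sales <- data.frame(
  region  = c("North", "South", "East", "West"),
  revenue = c(125000, 98000, 143000, 87000)
)

ggplot(sales, aes(x = region, y = revenue)) +
  geom_col() +
  # Format the y-axis labels with commas: 125,000 instead of 125000
  scale_y_continuous(labels = scales::label_comma())
```

Whatever an LLM hands back for a question like this, I treat it as plausible-but-unverified until I check it against the actual documentation.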
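And for the second task, here’s the kind of R snippet I’d paste in as the “thing I know well” half of the prompt. I’m keeping this sketch in R on purpose; the whole point of the exercise is that the LLM produces the Python/Seaborn version, which I’d then test and rewrite myself:

```r
library(ggplot2)

# The part I already know well: a regression and a scatterplot in R,
# using the built-in mtcars data
model <- lm(mpg ~ wt, data = mtcars)
summary(model)

ggplot(mtcars, aes(x = wt, y = mpg)) +
  geom_point() +
  geom_smooth(method = "lm")

# The ask: "How would I do this with Python using Seaborn Objects?"
```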

To avoid falling into over-reliance on LLM-assisted code help, I add as much friction into my workflow as possible. I only use GitHub Copilot and Claude in the browser, not through the chat sidebar in Positron or Visual Studio Code. I treat the code they generate like random answers from StackOverflow or blog posts and generally rewrite it completely. I disable inline LLM-based autocomplete in text editors. For routine tasks like generating {roxygen2} documentation scaffolding for functions, I use the {chores} package, which requires a bunch of pointing and clicking to use.
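For reference, here’s what that {roxygen2} scaffolding looks like. The function is a made-up example, but the #' comment structure above it is the kind of boilerplate {chores} helps fill in:

```r
#' Convert Fahrenheit to Celsius
#'
#' @param temp_f A numeric vector of temperatures in degrees Fahrenheit.
#'
#' @return A numeric vector of temperatures in degrees Celsius.
#' @export
#'
#' @examples
#' f_to_c(c(32, 212))
f_to_c <- function(temp_f) {
  (temp_f - 32) * 5 / 9
}
```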

Research

I don’t use LLMs for research. I hate that they’re incorporated into search engine results.

I’ve tried using things like Perplexity and Elicit, since they’re designed to give non-invented citations and links, but I’ve found that they’re not as helpful as good old-fashioned Google + Google Scholar + library search engines.

Learning

Oof. I hate this. I hate how LLMs are being jammed into education everywhere.

I’ve found that LLMs are really detrimental to learning when you don’t already know what you’re doing. They spit out statistically-likely average text agglomerated from training data—without enough background knowledge to say “wait, that’s not quite right” to their answers, there’s no way to know if those answers are good.

Learning is hard. Learning requires struggle and repetition and more struggle and more repetition. As the writer Ted Chiang says, “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”3

3 Ted Chiang, “Why A.I. Isn’t Going to Make Art,” The Weekend Essay, The New Yorker, August 31, 2024, https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art.

I have a whole spiel for my students about using LLMs for learning. I don’t know how many of them take it to heart. I feel like I’m pushing against a runaway freight train with this AI-in-education stuff. It’s awful.

Ethics

Oh man.

Read The AI Con.4 Read Empire of AI.5 They’re excellent and do a fantastic job covering the moral, environmental, and legal issues surrounding these things.

4 Emily M. Bender and Alex Hanna, The AI Con: How to fight big tech’s hype and create the future we want (HarperCollins, 2025).

5 Karen Hao, Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI (Penguin Press, 2025).

Data centers are poisoning people and causing droughts. People contracted to do reinforcement learning with human feedback (RLHF) are paid nearly nothing to sift through the worst humanity can throw at them. Intellectual property rights have been smashed.

I still haven’t fully settled on how to think about using them ethically. They are useful tools, and they can be trained in ways that respect community norms and consent. Empire of AI concludes with a fascinating story of Indigenous communities in New Zealand working to create a custom LLM to help reconstruct and preserve some Māori dialects that had been recorded in oral interviews. That’s super cool and exciting.

Ollama lets you run LLMs locally on your own computer (including open-weight models like Meta’s Llama family), and it’s possible to use those local models for working with code, creating reproducible automated text classification, or anything else (see the sketch below). But other than playing around with it a few times, I don’t use local Ollama stuff.
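Here’s a minimal sketch of what that looks like, assuming Ollama is installed, running on its default local port, and already has a model pulled (llama3.2 here is just a placeholder; any pulled model works). It uses the {httr2} package to hit Ollama’s local HTTP API:

```r
library(httr2)

# Ask a locally-running Ollama model to classify a snippet of text.
# Assumes Ollama is serving on its default port (11434) and the
# llama3.2 model has already been pulled with `ollama pull llama3.2`.
resp <- request("http://localhost:11434/api/generate") |>
  req_body_json(list(
    model  = "llama3.2",
    prompt = "Classify the sentiment of this sentence as positive or negative: 'I love teaching with R.'",
    stream = FALSE
  )) |>
  req_perform() |>
  resp_body_json()

resp$response
```

Since the model lives on your own machine, runs are at least potentially reproducible in a way that calls to a constantly-updated hosted model are not.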

Hypocritical confession time: I use Claude and GitHub Copilot and friends and feel bad doing it.6

6 In The Good Place, Chidi eschews dairy on ethical grounds and drinks almond milk instead, but then realizes that growing almonds also has negative environmental impacts, and blames his almond consumption for his ending up in the Bad Place. In S3E10 (“The Book of Dougs”), this point is brought home: there is no ethical consumption in capitalism.

References

Bender, Emily M., and Alex Hanna. The AI Con: How to fight big tech’s hype and create the future we want. HarperCollins, 2025.
Chiang, Ted. “Why A.I. Isn’t Going to Make Art.” The Weekend Essay. The New Yorker, August 31, 2024. https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art.
Frankfurt, Harry G. On Bullshit. Princeton University Press, 2005.
Hao, Karen. Empire of AI: Dreams and nightmares in Sam Altman’s OpenAI. Penguin Press, 2025.
Hicks, Michael Townsen, James Humphries, and Joe Slater. “ChatGPT Is Bullshit.” Ethics and Information Technology 26, no. 2 (2024): 38. https://doi.org/10.1007/s10676-024-09775-5.