How I Do—and Mostly Do Not—Use “AI” Tools
In Brief: There’s No AI Content Here
Everything on this site is written by me, without any use of “AI” technologies.
I have indeed been organically typing em-dashes since at least this 2014 post, which features no fewer than 3 of them! I’m pretty sure I learned the keyboard input from Butterick’s Practical Typography.
If I do include AI-generated content on this site, I will be fully transparent about which content was AI-generated, why I chose to include it, and how I made it (i.e., which tool and the specific prompt that produced whatever I post here). I’ll also update this page, similar to how the TV show Parker Lewis Can’t Lose changed its name for the final season.
How & Why I Make This Site
I’m a big fan of journaling as a mode of observation and self-reflection. Most of my blog posts are outgrowths of the notes I keep (now often in programs like Obsidian or Zettlr and less frequently in pocket notebooks).
This site has also been an intentional way of learning more about writing and designing for the web, and of honing my skills with related technologies (most frequently HTML, CSS, Jekyll, Liquid, and the command line). In other words, nothing here is “vibecoded”. I’ve adopted, adapted, or created (and frequently broken) code that I can debug myself, learning along the way how it deterministically produces its output.
In particular, this site has been very useful for making digital accessibility a constant practice. That awareness has come in very handy in library work, in instructional design, and in generally being online. It’s also been a nice playground for learning light coding, such as how to procedurally generate elements using Liquid.
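For instance, a snippet along these lines (a minimal sketch, assuming a standard Jekyll setup; the markup and filters are illustrative rather than this site’s actual code) loops over a site’s posts and generates a linked, dated list item for each:

```liquid
{% comment %}
  Minimal sketch of procedural generation in Liquid:
  iterate over a Jekyll site's posts and emit one list
  item per post. Illustrative only, not necessarily the
  markup this site actually uses.
{% endcomment %}
<ul>
  {% for post in site.posts limit: 5 %}
    <li>
      <a href="{{ post.url }}">{{ post.title }}</a>
      <time datetime="{{ post.date | date_to_xmlschema }}">
        {{ post.date | date: "%B %-d, %Y" }}
      </time>
    </li>
  {% endfor %}
</ul>
```

Because Liquid templates are evaluated at build time, the same input deterministically produces the same HTML, which is what makes this kind of code straightforward to debug.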
Considering “AI” Use Symptomatically
I’m far less interested in moralizing about the use of “AI” tools than in considering their use symptomatically. What desire lines or unmet needs can we perceive by critically observing the ways people make use of these tools?
For instance, does the adoption of these tools reflect the widespread degradation of search engines?
Alternatively, does their use reveal places where a “small tools, sociably made” approach might intervene? What extant tools are easier to control and cheaper to adopt? How can these tools be found and adopted more readily?
A personal example: although I’m an “AI” usage minimalist, I use machine learning to generate first drafts of audio transcripts, which I then thoroughly read and correct.
Having transcribed many hours of recordings as part of ethnographic studies (The Next Generation of Academics & Designing the Academic Library Catalog), I know all too well how much time this kind of machine learning can save. As someone who always turns on captions, I also know how distracting and confusing captioning errors can be, so I make sure not to trust even the best tools entirely.
Critical Digital Cultural Literacy
Between my Cultural Studies background/commitments, my digital humanities curiosity, and my focus on information literacy, I am keen to understand how the tools recently branded as “AI” work and what impacts they have.
I’m also convinced it is necessary to situate these tools and their multiple uses within larger cultural contexts. One approach is outlined in this resource suggesting learning objectives for Critical Digital Cultural Literacy.
I’m sympathetic to people who practice GenAI refusal, particularly with regard to consent, environmental, and privacy concerns. Scholars like Safiya U. Noble and Ruha Benjamin have shown how algorithmic, predictive, or generative technologies routinely cause harms that fall, predictably and all too familiarly, on people already oppressed by societal or technological structures. Scholars like Jonathan Sterne frequently show that technologies are not neutral, and often embed cultural biases and other assumptions. Siva Vaidhyanathan and other scholars whose work might be considered Critical Information Studies have examined the ways that information technologies and cultural practices interact, and have advocated for semiotic democracy: the ability of people not only to actively use meaning-making tools but also to participate in their regulation.
Better Living With Technology
My approach to “AI” technologies, as with other tools, is to situate them within what we know about the world, how people learn and/or make meaning, and how we can build a future that improves outcomes for everyone.