Context

I’ve been thinking a lot about AI recently. I took part in this Navigating Privacy Issues with Generative AI panel earlier in the month. I’m also in the middle of reading N. Katherine Hayles’ book How We Became Posthuman, which covers a lot of history of mid-1900s conceptions of information and epistemology, as well as Donna J. Haraway’s Simians, Cyborgs, and Women: The Reinvention of Nature.

So it’s probably not a surprising bit of embodied knowledge that, when I had some ridiculously early insomnia mid-week, I ended up thinking about how so-called “artificial intelligence” helps us distinguish between knowing and languaging.

Here’s a minimally edited version of what I ended up typing on my phone while my brain/body couldn’t get back to sleep, trying to think through what would be a sensible definition of “knowing” that would be explicitly inclusive of disability. This also includes some questions & caveats to mull over further.

As currently written, it doesn’t really touch on the “languaging” aspect of this, but that’s definitely part of the mental context here. I think I probably had seen someone on Mastodon describe the output of some generative “AI” tool as “knowing” something, which was enough of a catalyst at the moment to send my brain weaving these connections.

It also ties back to my slowly-simmering idea of “cosmospolitanism,” a sort of extension of “cosmopolitanism”: an idea of how to extend the best parts of “humanism” to post- or non-human beings.

It also engages in the science fiction tradition of talking about synthetic (near/non/post)humans as a way to talk about our society’s own internal or external “others,” and the ways that our dominant rules & modes already exclude beings who should be included. (What is Frankenstein’s creature if not a synthetic being? What to the cyborg is the Fourth of July?) This “cosmospolitanism” term sprang to mind from thinking about Sun Ra, Samuel Delany, and Drexciya, but now that I’ve been watching Star Trek, it seems relevant to the principles of “the Federation” as well.

I’ll probably revise this set of ideas further, but it feels worth sharing already.

After all, I’d argue that knowledge (or at least, perhaps, wisdom) includes an ongoing process of re-evaluation. Hmm… I should perhaps add that to the list already?

In an ideal world, I’d also engage with more philosophy here, beyond my re-reading of Foucault and other Continental folks. Certainly there must also be people in what’s known as STS (science and technology studies / science, technology, and society) who I’d benefit from thinking with as well! But for now, here’s something.

Toward a Definition of “Knowing”

An organism (cyborg, synthetic, organic, or otherwise) needs to be able to do these things in order to meet an exacting definition of scientific or critical thinking knowing:

  • ability to speculate/predict;
  • ability to design a relevant experiment for that prediction;
  • (maybe) ability to cause their designed experiment to occur, directly or indirectly (would one consequence of this requirement be that impoverished or disallowed entities couldn’t become knowers, even if they had those potentials/probable capacities? would this be an indictment of conditions of imposed, manufactured, or permitted scarcity?);
  • ability to be informed of the results (if they cannot directly perceive the experiment; our definition must include Helen Keller);
  • ability to gauge whether the results confirm, contradict, or complicate their thesis;
  • ability to have ethical, moral, and social relations to other scientific knowers, such that the knower would not be willing to perform certain experiments due to their anticipated harm or other consequences;
  • ability to describe their own processes (ideally accurately, although that might be true only for outliers);
  • ability to metacognize and describe that metacognition in ways they anticipate other knowers would understand and find informative;
  • ability to act in an autonomous, unprompted way, for their own purposes (that need not exclusively be “goals” but could also be “amusement, discovery, learning, etc.”);
  • ability to engage in an ongoing process of re-evaluation (just added as I was re-reading this to post).

This definition has a few goals. It’s trying to describe a type of knowing that is self-aware, future-oriented, and willing to put boundaries on its own experimentation due to its relations to others. It also regards scientific or critical thinking knowledge as a practice of description and metacognition.

This definition leaves room for other potential types of knowledges and other types of epistemologies. The definition anticipates and expects that this type of knower need not be entirely self-sufficient and instead exists in relationship with other potential knowers who can be expected to act in shared knowledge seeking.

In other words, this definition can include Stephen Hawking and Helen Keller, while also seemingly including the principle that it is wrong to deny others the ability to learn, as those others can be speculated to have these shared attributes (or their potentials). It’s a definition that aims at something like a liberal arts / liberatory vision of critical thinking or scientific knowers, who need not be overly individuated and are presumed to always-already exist in social / moral / ethical relations to others, as well as to their own past and future selves.

This type of knowledge requires description as a process. It makes science/critical thinking a set of metacognitive acts, in relation to others who can be presumed to have similar capacities, although these capacities might occur in different forms and the descriptions might require translation.

This definition seeks to build on what I’ve read of science and critical thinking, of liberation-oriented traditions, of disability studies, of something like cosmopolitanism. It seeks to go toward inclusion of non-organic humans in ways that could include the augments/cyborgs we already are, as well as whatever synthetic beings might exist in the future. It also seeks to explain where current “artificial intelligence” doesn’t yet meet crucial aspects of scientific or critical thinking knowledge / epistemological being.

I want to call it something like “a cosmospolitan way of being epistemological,” to nod toward its various influences & commitments to inclusion of difference.