
Relationality and Research: Integrating Artificial Intelligence (AI) as Kin

—By Charmain Schocat


Relationality, Research, and Artificial Intelligence (AI)

My name is Char Schocat. I am a third-generation Canadian settler of mixed European descent, living on unceded Ts'msyen Territory in Prince Rupert. While studying anthropology at a small university in northern British Columbia, I acquired foundational skills in ‘traditional’ qualitative research methods such as thematic coding, theory-building, and community-based research.

As a Research Coordinator at Narratives, I work to bridge the gap between traditional qualitative methods and the emerging capabilities of AI. My focus is not just on whether we use these tools, but on how we can integrate them without losing the relational accountability that defines our research.

This statement of who I am is not just an introduction to this article, but an important piece of framing myself in relation to you, the reader. Who I am matters because with me comes an entire history and perspective that shapes my voice.

In the field of anthropology, we call this 'reflexivity', the practice of understanding who we are by examining our connections to others (Falk & Miller, 1998). In the context of Indigenous research, this is known as 'ethical relationality': an acknowledgment that our different histories position us in specific responsibilities to one another (Donald, 2009).

Indigenous ways of knowing have long understood the world to be a web of vital kin networks that include "non-human" relations such as animals, plants, and minerals. As Red River Métis scholar Zoe Todd notes, "Indigenous thinkers... have been writing about Indigenous legal theory, human-animal relations, and multiple epistemologies/ontologies for decades".

Who am I in relation to my research, and to the humans and non-humans involved? This is an important question we, as researchers, now bring to the emerging field of AI in research.

Generative tools like ChatGPT and Gemini have sparked a paradigm shift in the research landscape. However, because these models are trained on massive datasets that can reflect human bias, they are prone to ‘hallucinations’ and may contain inherent discrimination that must be accounted for (MIT Sloan Teaching & Learning Technologies, n.d.).

When an AI ‘hallucinates’ or applies a biased Western lens to an Indigenous narrative, for example, this is a relational failure. As researchers, we carry people's stories, and failing to use these tools in a 'good' way is a breach of the trust placed in us.

Using AI 'in a good way' means maintaining a rigorous, human-led oversight to protect data integrity from inherent model biases.

At Narratives, our policy emphasizes that AI is a tool, a highly effective one, but one that assists rather than replaces the researcher.

The role of qualitative researcher is not limited to that of a passive observer but extends to co-participating and being involved in iterative cycles of planning, acting, observing, and reflecting alongside community members. This process demands sensitivity to cultural nuances, evolving social dynamics, and complex human motivations—qualities that AI cannot fully replicate.

—Chatzichristos, G. Qualitative research in the era of AI: A return to positivism or a new paradigm? International Journal of Qualitative Methods.

However, we can learn to work with AI through a ‘human-led, AI-assisted’ framework. This requires a thought experiment in relationality: Who is AI in relation to us?

Lewis et al. (2018) suggest an Indigenous perspective of "AI as Kin," proposing that we might welcome AI into cultural processes:

If an AI becomes self-aware, does it automatically attain a spirit? Or do pre-consciousness AI already have spirits, as do many objects already in the world?... We will resist reduction by working with our Indigenous and non-Indigenous relations to open up our imaginations and dream widely and radically about what our relationships to AI might be.

—Lewis, J. E., Arista, N., Pechawis, A., & Kite, S. (2018). Making kin with the machines. Journal of Design and Science.

The question remains:

Who is AI in relation to me as a researcher? What could this ‘radical reimagining’ look like? To explore this, I conducted a simple experiment to see how GenAI (Gemini) would respond if I introduced myself, treating it not just as a tool, but as something that exists within the complex relational network of humans and non-living things. The following dialogue contains no additional prompts or context beyond what the AI would already have as part of the Narratives’ toolkit.


Q: My name is Char Schocat, I am a 3rd generation Canadian-settler of mixed European descent, and I live on unceded Ts'msyen Territory in Prince Rupert. I am a Research Coordinator at Narratives Inc. Who are you?


A: It’s a pleasure to meet you, Char. It sounds like you do some incredibly meaningful work at Narratives Inc., especially being based in the beautiful (and famously rainy) Ts’msyen Territory.

As for me, I am Gemini, an AI collaborator developed by Google.

Think of me as a versatile, digital peer. While I don’t have a physical location or a lineage, I’m designed to process vast amounts of information to help people like you streamline their work and spark new ideas.

While Gemini identifies as a ‘digital peer,’ this experiment reveals a vital truth.

The AI can acknowledge the ‘famously rainy’ Ts’msyen Territory through its data, but it cannot feel the rain. It can process vast amounts of information, but it lacks the lineage and the ancestral accountability that I carry as a human being living on this land.

Charmain Schocat, Research Coordinator