Q&A: Maria Finnsdottir on Digital Humanities, AI, Supercomputers and the Future of Research
By Nolan Hehir on March 11, 2026
Maria Finnsdottir, a Humanities and Social Sciences Specialist with the Digital Research Services team at the University of Alberta, is helping researchers navigate a period of rapid change in the humanities as technology reshapes how knowledge is created and shared.
Finnsdottir, who holds a PhD in Sociology from the University of Toronto (2024), works directly with scholars using computational tools in humanities research. We spoke with her about what digital humanities means in practice, how supercomputers and AI are transforming research, and what these developments signal for the future of humanities education in Canada.
This interview has been lightly edited for length and clarity.
Q: What is digital humanities and how does it differ from traditional humanities research?
A: This one is a little hard to define because it’s so broad. Traditional humanities covers a huge range of humanist research endeavors, including philosophy, cultural and literary studies, language studies and so on. Digital humanities is anything that takes digital or computational tools and applies them to those fields.
That can include digital or computational analyses as part of humanities research. There is also a lot of public-facing scholarship that takes traditionally conducted humanities research and uses digital tools to share it, as well as the production of open-source research tools for more traditional humanities fields.
It’s any category of humanities work that applies some sort of digital or computational method.
Q: Why are supercomputers necessary in this type of work?
A: Not all digital humanities uses supercomputers, but the part that does uses them for two reasons: supercomputers, or high-performance computers (HPCs), let you work with either more data or more computationally complex analyses of that data.
They become necessary whenever you’re doing something that can’t be done on your own computer or laptop. Often in digital humanities, people are using HPCs for one of those two reasons, or sometimes both.
More often, it has to do with the amount of data being used. This tends to be very large corpora of text. Analyzing everything Shakespeare ever wrote, for example, might require supercomputing.
We also see really complex visualizations, such as projects using Geographic Information System (GIS) tools to create interactive maps. So they’re necessary to work with more data or more complex data.
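To make the scale question concrete, here is a minimal, hypothetical sketch of the kind of corpus analysis Finnsdottir describes: counting word frequencies across a directory of digitized texts, with the work split across CPU cores. The "corpus" directory and file layout are assumptions for illustration; the point is the divide-and-combine pattern that grows from a laptop script into an HPC job.

```python
# Hypothetical sketch: word frequencies across a large text corpus.
# On a laptop this handles a few files; on an HPC system the same
# pattern scales by adding worker processes or cluster nodes.
from collections import Counter
from multiprocessing import Pool
from pathlib import Path
import re

def count_words(path: Path) -> Counter:
    """Tokenize one plain-text file and count its words."""
    text = path.read_text(encoding="utf-8", errors="ignore").lower()
    return Counter(re.findall(r"[a-z']+", text))

if __name__ == "__main__":
    # "corpus/" is a placeholder directory of .txt files
    # (e.g., digitized plays).
    files = sorted(Path("corpus").glob("*.txt"))
    with Pool() as pool:  # one worker per CPU core
        totals = sum(pool.map(count_words, files), Counter())
    print(totals.most_common(20))  # 20 most frequent words overall
```

On a cluster, the same map-and-combine step would typically be distributed across many nodes by a job scheduler rather than a single machine's worker pool.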
Q: What kind of data is usually being analyzed?
A: What’s interesting about digital humanities is that people are doing many different kinds of things. I’ve worked with researchers who are training their own generative AI models to create music, which you could classify as computer-assisted creativity.
There’s also a lot of text work, with people dealing with very large corpora of data. There are images and maps, image analyses, and data that is digitally analyzed or digitally hosted in order to share it, such as databases, learning resources and websites.
I’m working with some people who are getting started on a new set of Holocaust teaching tools, which includes creating maps.
Q: How does high-performance computing change what researchers can discover?
A: I like this question because it gets to what interests me about digital humanities and computational social sciences: the way we do research informs what we produce from it.
HPC allows us to do bigger research with bigger data and bigger findings, but with less detail. There’s a cost-benefit analysis. This reflects a classic tension in research: whether to do a detailed case study or a population-level, generalizable study.
By using high-performance computing, researchers can work with amounts of data that were previously not feasible. It might take 20 years to read the amount of data you could process in 20 hours on an HPC system.
The flip side is that you won’t be as contextually specific as you might be in a case study. It’s a new frontier, but it means giving up some of what traditional methods allow.
Q: Is AI involved in this work?
A: Yes, AI is involved. There are interesting disagreements about how we incorporate AI. In digital humanities, there’s a lot of thought about how the ways we conduct research inform the research products: the social construction of knowledge.
There are ongoing debates about how to include AI responsibly. In practical terms, some researchers use AI to analyze huge amounts of data. Others train their own Large Language Models (LLMs) on specific datasets, such as policy documents or ancient texts.
There’s also computer-assisted creativity in the visual and fine arts, creating images or music in ways that support creativity rather than replace it.
Some researchers use AI, some train new AI models and others are strongly opposed to using AI in research. It’s a rich field.
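As a rough illustration of what training a model on a specific dataset can look like in practice (not a method from the interview), the sketch below fine-tunes a small open language model on a plain-text corpus using the Hugging Face transformers and datasets libraries. The model name, file path and training settings are placeholder assumptions, not a prescribed recipe.

```python
# Hypothetical sketch: fine-tuning a small open language model on a
# researcher's own corpus (e.g., policy documents). The model name,
# file path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # a small, openly licensed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# One document per line in a plain-text file (placeholder path).
dataset = load_dataset("text", data_files={"train": "policy_corpus.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False gives causal (next-token) language-modeling labels
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```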
Q: What does this mean for the future of humanities education?
A: It’s something I’ve been thinking about because we do a lot of training on the Digital Research Services team. It underlines the importance of computational competency in the humanities: we can’t afford to be left behind.
Humanities and social sciences offer important societal understandings of what technology means for knowledge, perspectives you can’t get from computer scientists alone. You need humanists in the room.
We need to close the competency gap in programming and digital skills between the humanities and the natural sciences. There’s also computational literacy: it’s not enough to know how to program something. You need to know why you’re doing it and what it does.
We need to focus on training students to use these tools and understand how they impact their work and research. That training has to be specific to humanities. You can’t just drop a humanities student into a 200-level computer science course. It needs to reflect the kinds of work and theoretical frameworks they use, with relevant examples.
Q: Are there risks in relying on computational tools in cultural research?
A: Yes. When I say risk, I don’t mean we shouldn’t use them, but that their use needs to be conscious. The way we produce knowledge and the tools we use impact what we create.
Risks arise when researchers aren’t aware of how they’re creating knowledge or aren’t making conscious decisions. For example, using AI trained on a particular dataset to study a marginalized population could introduce bias.
There’s also the risk of losing touch with your data. Spending time with texts, images or interview recordings builds familiarity. If computational methods speed up the process, you might lose some of that depth.
These risks require awareness. They don’t mean we shouldn’t use computational tools, only that they need to align with the purpose of the research.
Q: Where do you see digital humanities in the next decade?
A: We’re at somewhat of a crossroads. With generative AI entering every part of work and research life, even attached to email through tools like Copilot, the field could go in two directions.
Generative AI could become a core, important part of research, including digital humanities. Or the bubble could burst and enthusiasm could fade. There’s uncertainty about where it will land.
At the same time, there’s strong momentum around public-facing scholarship in digital humanities: creating open-source software packages, open-access learning materials and open data. That will continue to grow.
If Canadian digital research infrastructure can provide the resources scholars need to keep their materials free and public, it will continue strengthening digital humanities in Canada on the global stage.
For more information about digital humanities research at the University of Alberta, visit the Digital Research Services page.
