AI to Unlock Archives: Revolutionising Access to our Digital Past
Our digital past is at risk of disappearing, or of becoming inaccessible and unusable. Artificial Intelligence is no longer a choice for archivists, librarians, historians and other users of digital archives: it is a necessity for discovering, accessing and using digitised and born-digital records.
AI can be used to search vast amounts of data and automatically identify and remove redundant, outdated and trivial (“ROT”) materials, leaving only materials worth preserving. Within this mass of important documents, AI can then flag sensitive and confidential information, making it possible for archivists to set closure periods for these problematic records, and to open up non-sensitive records for research.
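To make the sensitivity-review step concrete, here is a minimal sketch of one way it could be implemented, using a simple supervised text classifier. The documents and labels are invented for illustration; a real workflow would train on records already reviewed by archivists and would use the model to prioritise material for human review, not to close records automatically.

```python
# Minimal sketch of AI-assisted sensitivity review: a classifier scores
# records by how likely they are to need a closure period, so archivists
# can review the riskiest material first. All example data is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical archivist-labelled examples: 1 = sensitive, 0 = non-sensitive.
documents = [
    "Medical report on the patient's diagnosis and treatment",
    "Minutes of the open council meeting on park maintenance",
    "Personnel file including disciplinary proceedings",
    "Press release announcing the new library opening hours",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(documents, labels)

# Rank unreviewed records by estimated probability of sensitivity.
new_records = ["Internal memo naming individuals under investigation"]
for text, prob in zip(new_records, classifier.predict_proba(new_records)[:, 1]):
    print(f"sensitivity score {prob:.2f}: {text}")
```

The same pattern extends to ROT detection: a classifier trained on examples of redundant or trivial material can rank records for de-selection, with a human making the final preservation decision.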
In addition, end users can draw on AI as a research method to analyse huge masses of documents far more efficiently than through manual search. The rise of ChatGPT and other Generative AI tools offers new opportunities for archive users to ask questions in natural language and receive personalised responses. AI agents will go even further, performing tasks autonomously on behalf of the user. If the promise of Artificial General Intelligence (AGI) is realised in the near future, AI could reach human-level intelligence and fully revolutionise access to our digital past.
It is of course true that AI comes with enormous risks – including the risk of rewriting our digital past or even deleting vast amounts of records. But instead of obsessing about doomsday scenarios, archival institutions and users should fully embrace AI as a collaborative agent to unlock hidden histories, transform access to memory, and shape the digital future. We need to think clearly about how archival institutions could use technology to unleash their collections and connect with unprecedented numbers of users. What does the future of archives and accessibility look like, at a time of rapid technological advances and fears of rogue AI? How can this powerful technology transform the archival sector for public good? And what role can researchers and other users play in this AI revolution?

Lise Jaillant
Lise Jaillant is Professor of Digital Cultural Heritage at Loughborough University, UK. Lise has a background in publishing history and digital humanities. She is an expert on born-digital archives and the issues of preservation and access to these archives. Since 2020, she has been Principal Investigator (PI) for several externally funded projects on Archives and Artificial Intelligence. These international projects aim to make digitised and born-digital archives more accessible to researchers, and to use innovative research methods such as AI to analyse archival data.
Lise enjoys working across sectors and disciplines. As a digital humanist, she has extensive experience of collaborating with computer scientists, archivists, librarians, and government professionals to unlock digital archival data with innovative technologies.
More information can be found on her website: www.lisejaillant.com
Tracing meaning across time: computational and human-centred approaches to semantic change
Semantic change, or the evolution of word meanings over time, provides crucial insights into historical, cultural, and linguistic processes, making it a central topic in the Digital Humanities. Language reflects shifts in societal values, norms, and technological advancements, and understanding how word meanings evolve helps us track these transformations. To fully analyse semantic change, it is essential to consider both the broader historical context and the specific linguistic environments in which meanings evolve. Advanced computational methods allow us to analyse vast datasets and uncover patterns that were previously inaccessible. However, few natural language processing algorithms account for the dynamic nature of language, particularly semantics, which is critical for humanistic inquiry. While AI systems are being developed to better understand historical context and language dynamics, human annotation and interpretation remain necessary to capture the nuances of language and its cultural context.
In this talk, I will explore how computational and human-centred approaches can be combined to examine semantic change and its connections to cultural and technological developments. I will show examples of how semantic change can be analysed across temporal, cultural, and textual dimensions. For instance, studying nineteenth-century British newspapers reveals how industrialisation and mechanisation influenced the meanings of specific words, providing insight into the relationship between language and technological progress during this period.
The integration of computational models with human annotation enables a more fine-grained analysis of language dynamics. This combined approach not only allows us to examine macro-phenomena such as shifts in word usage but also makes it possible to study semantic change at the level of individual word senses, revealing patterns that might otherwise be overlooked.
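As an illustration of the computational side of this combined approach, the sketch below uses one widely applied technique for surfacing candidate cases of semantic change: training separate word embeddings on two time slices of a corpus and comparing a target word’s nearest neighbours in each. This is not necessarily the method used in the talk, and the corpora here are invented fragments; a real study would use large digitised collections, such as nineteenth-century newspapers, with human annotators validating the words the model flags.

```python
# Minimal sketch: detect candidate semantic change by training one
# word-embedding model per time slice and comparing nearest neighbours.
# The two corpora below are invented toy fragments for illustration only.
from gensim.models import Word2Vec

early_corpus = [  # hypothetical "early" slice, e.g. railway-era usage
    ["the", "car", "of", "the", "train", "carried", "coal"],
    ["each", "car", "was", "drawn", "along", "the", "rails"],
]
late_corpus = [  # hypothetical "late" slice, e.g. motoring-era usage
    ["she", "parked", "the", "car", "outside", "the", "house"],
    ["the", "car", "engine", "would", "not", "start"],
]

# Train one embedding space per slice (tiny settings to keep the toy runnable).
early = Word2Vec(early_corpus, vector_size=50, min_count=1, seed=1)
late = Word2Vec(late_corpus, vector_size=50, min_count=1, seed=1)

# A word whose neighbours differ sharply between slices is a candidate for
# semantic change, to be confirmed by annotators reading the actual contexts.
print("early:", early.wv.most_similar("car", topn=3))
print("late: ", late.wv.most_similar("car", topn=3))
```

Comparing neighbour lists, as here, avoids the need to align the two embedding spaces; studies that compare vectors directly usually first align the spaces, for example with an orthogonal Procrustes transformation, before measuring how far each word has moved.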

Barbara McGillivray
Barbara McGillivray is Lecturer in Digital Humanities and Cultural Computation in the Department of Digital Humanities at King’s College London, where she leads the Computational Humanities research group. She is Editor-in-Chief of the Journal of Open Humanities Data and convenor of the MA programme in Digital Humanities at King’s, as well as convenor of the Turing special interest group “Humanities and data science”. Her research focusses on computational methods for the study of language change in both historical languages and contemporary data. As a Turing research fellow at the University of Cambridge and at The Alan Turing Institute, she was also Co-Investigator of the Living with Machines project. Previously she worked as a language technologist in the Dictionaries division of Oxford University Press and as a data scientist in the Open Research Group of Springer Nature. Her most recent book is “Applying Language Technology in Humanities Research: Design, Application, and the Underlying Logic” (Palgrave Macmillan, 2020).