Block 8

Be Prepared for Change!

In this block, we outline a simple framework for managing change around AI that you can adapt in your own library.

[Illustration: two overlapping green arrows and a question-mark-like curve, suggesting uncertainty or rethinking direction. © Sandra Kastl]

If there is one word that is often mentioned in relation to AI, it is “change”. This new technology has changed the way we work, think, find, and process information. But this is also an opportunity for librarians to underline the importance of sources and of ways to verify information. While we have already discussed multiple roles that you, as a library professional, can take when it comes to AI, we want to add “change manager” to the growing list of important titles that you might find yourself taking on.
 

REFRAMING CHANGE MANAGEMENT: PEOPLE, NOT TECHNOLOGY

Although you might feel tangible change in the library space with the arrival of new technologies, services and approaches, “change management” in libraries should be understood as a focus on people, trust, and purpose. In this sense, AI can be seen as the next transformation, and we understand that it might feel scary mainly because it is new. In this block, we will try to ease some of these fears and talk about approaches you can take when it comes to AI and change management. Here we, the LIBRA.I. team, will also try to summarise our own vision for future libraries in relation to the development of AI literacy in our field of work.

Librarians are well placed to respond to AI and meet users’ needs, yet many have fears about it. These fears often relate to professional identity, ethics, quality, accountability, and responsibility: librarians carry responsibility and must consider the consequences of their choices, and their professional identity can feel in question. Users expect them to know and guide, while staff expect management to provide direction.

A practical way to address fearful narratives is to provide facts and real-world examples. AI should be framed as an assistant rather than a replacement. Experience from our “bootcamps” (BLOCK #4) shows that when librarians understand what AI is, how it works, and what it can and cannot do, fear decreases. They begin to think constructively about how they can employ it in their everyday work.
 

GOVERNANCE APPROACHES FOR LIBRARIANS AND USERS

Good governance for librarians should mean a careful approach, tackling changes step by step and one at a time. It should be reflective and protective of staff, supporting capacity-building and ethical awareness. Good governance for users should be open, practical, and confidence-building. AI literacy in libraries should therefore be built on a two-layer system: one layer for librarians that focuses on capacity and ethics, and one layer for users that focuses on confidence and judgment.
 

TWO AUDIENCES, TWO PSYCHOLOGICAL POSITIONS

There are two approaches to change management: one aimed at librarians and another aimed at users. These groups are in very different psychological positions. Librarians carry institutional responsibility and are concerned with long-term consequences, ethics, and standards. Users, by contrast, have a wide spectrum of feelings about AI. They tend to look for quick help and easy solutions rather than theory. They may overtrust or mistrust AI, but they do not carry that same institutional responsibility.
 
LIBRARIANS
  • They carry responsibility and have to think about the consequences
  • Professional identity in question
  • Worry about ethics, quality, accountability
  • Users expect them to know and guide
  • Staff expect management to know and guide

USERS/READERS
  • They have a spectrum of feelings around AI
  • Want quick help, easy solutions, not theory
  • They can overtrust or mistrust AI
  • They do not carry the responsibility
 

LAYER 1: CHANGE MANAGEMENT FOR LIBRARIANS
(Capacity + Ethics + Responsibility)

1. Start with psychological safety: fear decreases when exploration feels safe

  • Acknowledge uncertainty and fear openly
  • Separate experimentation from performance evaluation
  • Create internal “AI playground” sessions without judgment

2. Clarify the purpose before technology: purpose reduces resistance. Before introducing any tool, define:

  • What problem are we trying to solve?
  • How does this improve service quality?
  • Where does human judgment remain central?


3. Build competence through micro-experiments: understanding reduces fearful narratives. Begin with low-risk use cases:

  • Drafting texts
  • Summarising documents
  • Translating materials
  • Generating workshop ideas


Reflect after each use:

  • What worked well?
  • What requires human review?
  • Where are the risks?


4. Ground ethics in practice: ethics must be operational, not theoretical. Avoid abstract debates. Instead, ask:

  • Who is accountable for this output?
  • Would we publish this without review?
  • What biases might appear?
  • How could this affect vulnerable users?


5. Move in phases

Exploration → Reflection → Pilot → Governance

Governance should protect staff and clarify responsibilities, not prematurely restrict experimentation.

LAYER 2: CHANGE MANAGEMENT FOR USERS

(Confidence + Judgment)

Users are in a different psychological position:

  • They seek quick solutions
  • They may overtrust or mistrust AI
  • They do not carry institutional responsibility


Focus on:

  • Practical demonstrations
  • Clear explanations of strengths and limits
  • Teaching verification skills
  • Encouraging critical thinking


User-facing governance should be:

  • Open
  • Practical
  • Confidence-building

REMEMBER: CHANGE DOES NOT HAPPEN OVERNIGHT, BUT SMALL, SAFE EXPERIMENTS CAN BUILD CONFIDENCE ACROSS THE WHOLE LIBRARY.
