Generative AI has landed in higher education like a shockwave. Tools that can draft text, generate examples, and help with research are now widely accessible—and they’re forcing universities to rethink teaching, assessment, and academic integrity. But one big question remains: why do some faculty adopt these tools quickly while others hold back or resist entirely?
A French study by Fatiha Tali Otmani argues that the answer is often less about technology itself and more about a psychological factor: digital self-efficacy—the confidence educators have in their ability to use digital tools effectively, solve problems when things go wrong, and still meet their teaching goals.
The key idea: confidence drives adoption
Drawing on Bandura’s social cognitive theory, the paper explains self-efficacy as a belief system: “I can do this, even when it gets messy.” In practice, that belief shapes whether faculty experiment with new tools, persist through setbacks, and turn guidelines into real routines.
The research also uses Flichy’s “usage framework” concept: technologies don’t come with a single “correct” use. Instead, uses are socially built—through institutions, policies, peers, and the everyday habits of real users. In other words, universities don’t just “deploy AI.” They help create the culture and rules of use around it.
Study setup: 265 faculty members, 3 user types
The study surveyed 265 higher education faculty members and mapped how they relate to generative AI. Three broad profiles emerged:
- Engaged users (the adopters): they actively use generative AI for practical gains—saving time, building resources, improving workflows.
- Reflective reserved (the cautious non-users): they’re not necessarily anti-AI. Many simply lack time, training, or a clear entry point. Their hesitation looks like “I’ll get to it when I can.”
- Critical resisters (the principled skeptics): they push back for deeper reasons, such as trust, ethics, environmental impact, transparency, or concerns about truth and academic standards.
A practical measurement: digital self-efficacy has 3 dimensions
The author validates a digital self-efficacy scale with three dimensions that matter for educators:
- Technical mastery & problem-solving (handling tools, fixing issues)
- Staying aligned with teaching goals (not letting tech derail pedagogy)
- Digital assessment confidence (using tech in evaluation contexts)
This matters because assessment is where risk and anxiety spike—and the study finds that differences between AI users and non-users are especially strong here.
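To make the scale concrete, here is a minimal sketch of how subscale scores for the three dimensions might be computed. This is an illustration only, not the paper’s actual instrument or scoring procedure: the item groupings, the 1–5 Likert range, and the simple averaging are all assumptions.

```python
# Hypothetical example: averaging Likert-style item responses (assumed 1-5)
# into the three digital self-efficacy subscales described above.
# The items, groupings, and scoring rule are illustrative assumptions,
# not the validated scale from the study.
from statistics import mean

responses = {
    "technical_mastery": [4, 5, 4],      # handling tools, fixing issues
    "pedagogical_alignment": [3, 4, 4],  # keeping tech aligned with teaching goals
    "assessment_confidence": [2, 3, 2],  # using tech in evaluation contexts
}

# One score per dimension, plus an overall average across dimensions.
subscale_scores = {dim: mean(items) for dim, items in responses.items()}
overall = mean(subscale_scores.values())

for dim, score in subscale_scores.items():
    print(f"{dim}: {score:.2f}")
print(f"overall: {overall:.2f}")
```

Keeping the three dimensions as separate scores (rather than collapsing them immediately) matters here, because a faculty member can score high on technical mastery while remaining low on assessment confidence, which is exactly the mixed profile the study highlights.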
What the results say: self-efficacy predicts AI use
Across the board, faculty who use generative AI report higher self-efficacy. And the more confident faculty are, the more diverse their AI uses become (from course prep to research and teaching workflows).
Just as important: the study shows that not all resistance is about low competence. The “critical resisters” often have mixed strengths—confident in some digital areas but still rejecting AI for ethical or epistemic reasons (e.g., “I can’t trust it; it doesn’t cite sources; its truth status is unclear”). That’s a crucial nuance: universities can’t treat every non-user as a “skills deficit” problem.
Four ways faculty actually use generative AI
The paper proposes four “sociotechnical configurations” (think: common patterns of real-world use). They’re useful as a practical framework:
- Preparation mode: AI helps draft lessons, create examples, generate activities, and prepare assessments—mostly behind the scenes.
- Instrumental mode: AI supports research and productivity tasks (drafting, summarizing, brainstorming, coding support), while faculty keep intellectual control.
- Integrated teaching mode: AI becomes part of the learning design. For example, students generate a draft with AI, then critique it—building critical thinking.
- Critical-reflective mode: AI is used to reveal limitations such as bias, reliability gaps, and hidden assumptions. This mode can appeal even to skeptics.
The big takeaway: one-size AI policy won’t work
Because self-efficacy and values differ, the author argues universities should build differentiated support, not generic training sessions or blanket rules.
What this looks like in practice:
- Targeted training (basic confidence-building for low self-efficacy; advanced pedagogy workshops for high self-efficacy)
- Safe experimentation spaces (“sandboxes” where faculty can try AI without high stakes)
- Peer modeling (supporting confident early adopters to mentor others)
- Respecting principled critique (bringing critical resisters into policy design so governance isn’t just “pro-AI cheerleading”)
The paper also ties this to Europe’s evolving regulatory context (like the EU AI Act), emphasizing that education needs governance, transparency, and risk-aware practices—especially where AI touches assessment.
Bottom line
Generative AI adoption in universities isn’t just about access or tool quality. It’s deeply shaped by digital self-efficacy and the social framework of usage inside institutions. If universities want responsible AI integration, they should stop treating faculty as a single audience—and start designing support that matches how different educators actually think, work, and decide.
source: https://arxiv.org/pdf/2602.17673