9. Leadership Training in the Age of AI

Part 9 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Leadership training and development in the age of artificial intelligence by Sposato (2024).

In the fast-evolving AI landscape, is the biggest challenge redesigning business processes, or redesigning the leader? This article highlights the critical imperative for leaders to adapt and commit to continuous learning, emphasizing the necessity of updating leadership training in the AI era. The core finding suggests that as AI automates routine, data-driven tasks, the next generation of managers must shift their focus to higher-level responsibilities: fostering innovative thinking and creative problem-solving, and concentrating on strategic employee development. This redirection requires directors and leaders to understand the technical roles of AI and intentionally fill the human gaps left by automation, transitioning effectively to a data-centric decision-making approach.

This evolution makes sustained critical thinking synonymous with future leadership viability, specifically demanding the cultivation of a culture of continuous learning. Leaders must use critical judgment to invest in technologies that genuinely streamline operations rather than simply replace tasks, freeing themselves to focus on strategic planning and innovation. The “so what” for critical thinking is that it ensures leaders assume the role of guardians of powerful machines, continuously questioning and adapting their skill set, and resisting the urge to rely on AI without contextual validation.

The author, M. Sposato, implicitly suggests that whether leaders are born or made, structured development is needed to address the challenges of staff shortages and technological change. What elements of traditional leadership training must be unlearned immediately to cultivate the critical, adaptive mindset required by the AI era? Share your opinion.

Reference: Sposato, M. (2024). Leadership training and development in the age of artificial intelligence. Development and Learning in Organizations: An International Journal, 38(4), 4–7.

8. AI-Driven Servant Leadership and Job Satisfaction

Part 8 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article The good shepherd: Linking artificial intelligence (AI)-driven servant leadership (SEL) and job demands-resources (JD-R) theory in tourism and hospitality by Han et al. (2026).

Can an AI ever truly be a “servant leader,” or is the concept limited to the human capacity for genuine empathy and community building? This research examines AI servant leadership in the hospitality and tourism sector, finding that AI’s cognitive dimensions—specifically conceptual skills and empowering—are significantly more effective than its emotional dimensions at boosting employee job satisfaction, chiefly by reducing job demands and clarifying role ambiguities. AI servant leaders excel at data analysis, task execution, and providing precise guidelines, mimicking human thinking through data-analysis protocols. However, the AI leader shows profound limitations in its capacity to convey authentic emotional responses and empathy, failing on crucial dimensions such as “emotional healing” and “creating value for the community”.

The analytical strength but emotional weakness of AI leaders necessitates that human critical thinking focuses intensely on the socio-emotional and community gaps within the workforce. Critical thinking is required to determine the optimal boundary conditions for AI leaders and ensure that technology enhances, rather than diminishes, human connection and communal values. The “so what” is that human leaders must apply critical judgment to synthesize AI’s cognitive efficiency with the necessary emotional support and communal values, especially since AI leaders may not grasp the concept of shared values in a community.

The authors, H. Han, S. H. Kim, T. A. Hailu, A. Al-Ansi, S. M. R. Loureiro, and J. Kim, suggest that while AI’s cognitive functions are supported by their study, its emotional responses to employees are limited, reinforcing the need for human leaders. How must human leaders strategically apply their emotional intelligence to complement AI’s conceptual efficiency, especially in service-driven environments like hospitality?

Reference: Han, H., Kim, S. H., Hailu, T. A., Al-Ansi, A., Loureiro, S. M. R., & Kim, J. (2026). The good shepherd: Linking artificial intelligence (AI)-driven servant leadership (SEL) and job demands-resources (JD-R) theory in tourism and hospitality. International Journal of Hospitality Management, 133, 104470. https://doi.org/10.1016/j.ijhm.2026.104470

7. Influence of Leadership on Human–AI Collaboration

Part 7 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Influence of Leadership on Human–Artificial Intelligence Collaboration by Zárate-Torres et al. (2025).

In the newly emerging hybrid workforce, what defines the essential boundary between the cold logic of an algorithm and indispensable human judgment? This research proposes a conceptual model where leadership acts as an ethical and strategic mediator in the Human Intelligence (HI)–Artificial Intelligence (AI) relationship, defining a crucial hybrid space of cooperation. The core finding establishes that while AI provides algorithmic efficiency based on data processing, HI remains necessary for interpretation, experience, and contextual judgment. Leadership modulates this relationship, shifting from mere supervision toward an essential role in co-creation. The model posits that effective leadership must integrate ethical governance mechanisms and counterbalance algorithmic efficiency with cognitive adaptability.

The introduction of this HI-AI hybrid space fundamentally reinforces and redefines human critical thinking as the ultimate strategic and ethical function. Critical thinking is embodied in the leader’s role of translating automated decisions into comprehensible language for teams, ensuring algorithmic transparency, and contextualizing decisions ethically. Human critical thought remains essential because it is the only mechanism capable of putting automated decisions “in real context through human judgment and reasoning”, thereby guaranteeing organizational resilience beyond technical capability.

The authors, R. Zárate-Torres, C. F. Rey-Sarmiento, J. C. Acosta-Prado, N. A. Gómez-Cruz, D. Y. Rodríguez Castro, and J. Camargo, suggest that leadership acts as the axis that brings together human and technological systems, creating highly flexible, efficient, and ethically overseen interaction. As your organization integrates AI, how are you explicitly training leaders to be effective translators of algorithmic logic into human-centric direction? Share your strategies.

Reference: Zárate-Torres, R., Rey-Sarmiento, C. F., Acosta-Prado, J. C., Gómez-Cruz, N. A., Rodríguez Castro, D. Y., & Camargo, J. (2025). Influence of Leadership on Human–Artificial Intelligence Collaboration. Behavioral Sciences, 15(7), 873. https://doi.org/10.3390/bs15070873

6. Impact of AI on Corporate Leadership

Part 6 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Impact of Artificial Intelligence on Corporate Leadership by Nguyen and Shaik (2024).

In the pursuit of AI-driven efficiency, are corporate leaders inadvertently sacrificing core human values like privacy and fairness? This research explores the profound dual impact of Artificial Intelligence (AI) on corporate leadership, detailing both transformative advantages and critical associated risks. Key findings show that AI significantly enhances positive leadership outcomes in four domains: communication (e.g., seamless collaboration via Slack), personalized feedback systems, optimized tracking mechanisms, and data-driven decision-making. However, adoption also introduces serious negative impacts, specifically algorithmic bias (citing Amazon’s biased recruiting tool) and substantial data privacy concerns. The paper proposes leveraging local large language models (LLMs) and techniques such as federated learning to mitigate these privacy issues.

Successfully navigating the dual nature of AI necessitates advanced critical thinking centered on ethical oversight and risk management. Leaders must exercise critical judgment not only to maximize AI benefits but, crucially, to mitigate potential risks stemming from AI errors and biases. The “so what” for critical thinking is the imperative to establish and adhere to stringent ethical guidelines and accountability to protect the organization and its employees from unintended consequences. This continuous critical verification reinforces that technological prowess must be subordinate to human trust and ethical decision-making.

The authors, Daniel Schilling Weiss Nguyen and Mudassir Mohiddin Shaik, suggest that responsible AI adoption requires a delicate equilibrium between leveraging AI’s transformative potential and mitigating the associated risks. How do you structure your internal AI governance framework to proactively catch algorithmic biases before they impact human capital decisions? Let’s share best practices.

Reference: Nguyen, D. S. W., & Shaik, M. M. (2024). Impact of Artificial Intelligence on Corporate Leadership. Journal of Computer and Communications, 12(4), 40–48. https://doi.org/10.4236/jcc.2024.124004

5. How Will AI Evolve Organizational Leadership?

Part 5 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article How Will Artificial Intelligence (AI) Evolve Organizational Leadership? Understanding the Perspectives of Technopreneurs by Zaidi et al. (2025).

Will the next generation of leadership be defined by human intellect, or by the sophistication of the algorithms they manage? Expert interviews confirm that AI mandates fundamental, real-time shifts in leadership philosophies, transforming leaders into “tech-savvy leaders” committed to continuous learning. The primary finding is that AI enables the automation of routine tasks (like organizing and scheduling), allowing human leaders to pivot toward higher-level responsibilities, such as creative thinking, employee development, and bridging the human-technology gap. Crucially, the study reinforces the irreplaceable value of human judgment, noting that AI lacks intuition, a moral compass, and a ‘soul’, meaning it cannot solve complex business problems alone.

This transformation intensely reinforces the need for advanced human critical thinking in both decision-making and ethical oversight. Leaders must use critical thought to effectively become the guardians of powerful machines. This requires them to critically understand the algorithms they use, including their limitations and capabilities, especially within the company’s decision chain. The key critical function is determining “where people can be taken out of the loop, and where they can be involved,” ensuring automation doesn’t lead to an organizational philosophy that neglects human well-being.

The authors, S. Y. A. Zaidi, M. F. Aslam, F. Mahmood, B. Ahmad, and S. Bint Raza, suggest that AI coaching is further enhancing tomorrow’s AI-congruent business leaders, fundamentally altering how leaders make decisions and transforming future team dynamics. How do we standardize the measure of a leader’s “tech savviness” to ensure they maintain critical, ethical oversight over the AI systems they deploy? Let’s discuss.

Reference: Zaidi, S. Y. A., Aslam, M. F., Mahmood, F., Ahmad, B., & Bint Raza, S. (2025). How will artificial intelligence (AI) evolve organizational leadership? Understanding the perspectives of technopreneurs. Global Business and Organizational Excellence, 44(1), 66–83. https://doi.org/10.1002/joe.22275

3. Impact of AI on Leadership Styles in High-Stakes Environments

Part 3 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Exploring the Impact of AI on Leadership Styles: A Comparative Study of Human-Driven vs. AI-Assisted Decision-Making in High-Stakes Environments by Hwang (2024).

When AI handles risk management in high-stakes sectors, does the human leader become obsolete, or is their role simply elevated? This research finds that AI integration in high-stakes environments (such as finance, healthcare, and aviation) fundamentally shifts leadership roles away from centralized control toward decentralization, with a crucial focus on interpretation, oversight, and ethical accountability. AI significantly augments decision-making accuracy and speed by analyzing vast datasets and offering predictive insights, which allows executives to concentrate more on organizational and strategic decisions. However, this augmented efficiency simultaneously introduces critical challenges regarding team trust in AI decisions and the complexities of ensuring algorithmic transparency and managing emotional/ethical nuances.

The profound shift toward decentralized decision-making clearly defines the indispensable requirement for human critical thinking: the leader must become the interpreter of algorithmic outcomes. This critical function bridges the gap between AI’s analytical strength and its inherent lack of contextual sensitivity required for real-world application. Without this interpretive layer, leaders risk losing the necessary ethical grounding, confirming that critical thinking is essential for maintaining human oversight in environments where mere technical accuracy might overlook broader social consequences.

The author, Jinyoung Hwang, suggests that organizational success requires adopting collaborative leadership approaches that blend AI capabilities with essential human judgment. If AI provides a highly optimized, data-driven recommendation, how do you critically ensure that the execution aligns perfectly with human values and existing team dynamics? Share your perspective.

Reference: Hwang, J. (2024). Exploring the Impact of AI on Leadership Styles: A Comparative Study of Human-Driven vs. AI-Assisted Decision-Making in High-Stakes Environments. International Journal of Science and Research Archive, 13(1), 3436–3446. https://doi.org/10.30574/ijsra.2024.13.1.2030

4. Generative AI Use in the Workplace

Part 4 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Generative artificial intelligence use in the workplace: implications for management practice by Hernández-Tamurejo et al. (2025).

If trust is the strongest predictor of GenAI acceptance, what happens when managers blindly trust outputs generated from biased data? This mixed-method study finds that trust is the strongest predictor of intention to use Generative AI (GenAI). Crucially, this trust is conditioned primarily by the perception that organizational data management routines are reliable and objective. Interestingly, employee and manager perceptions of information transparency or privacy risk were found not to directly influence trust or usage intention. This raises a significant risk: users may blindly trust GenAI outcomes because of perceived efficiency benefits, leading to the use of questionable content in management decision-making and creating integrity issues in service delivery.

This reliance on perceived data integrity creates a paradox that demands rigorous human critical thinking focused on managing inputs and evaluating outputs. The research emphasizes that trustworthy data governance, not abstract explainability, is the foundation of sustainable GenAI adoption. Critical thinking must, therefore, be deployed to scrutinize both the data sources used by the AI and the factual accuracy of the outputs provided, rather than passively accepting results based on the promise of efficiency. This critical function serves as the essential check against the allure of speed, ensuring managers avoid the irresponsible use of GenAI content in high-stakes decisions.

The authors, Á. Hernández-Tamurejo, R. Bužinskienė, B. Barbosa, A. Miceikienė, and J. R. Saura, suggest that monitoring, calibrated disclosure, and adaptive privacy protocols are concrete managerial levers to strengthen GenAI acceptance. In your experience, is the greatest challenge in AI adoption enforcing transparency, or instilling the critical capacity in staff to question seemingly objective algorithmic output? Share your thoughts.

Reference: Hernández-Tamurejo, Á., Bužinskienė, R., Barbosa, B., Miceikienė, A., & Saura, J. R. (2025). Generative artificial intelligence use in the workplace: implications for management practice. Review of Managerial Science. https://doi.org/10.1007/s11846-025-00949-z

2. AI-Powered Leadership: A Systematic Review

Part 2 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article AI-powered leadership: a systematic literature review by Aziz et al. (2025).

When AI delivers a ‘data-driven’ decision, who is responsible for the social and ethical fallout if it goes wrong? Artificial Intelligence (AI) has emerged as a critical factor in reshaping organisational dynamics, particularly in the realm of leadership. This systematic literature review investigated the evolving relationship between AI and leadership, focusing on definitions, prevalent themes, and challenges. The findings confirmed a complex range of key challenges in AI-powered leadership, including ethical dilemmas, difficulties in human-AI interactions, implementation hurdles, and long-term risks associated with deep AI integration. The study synthesises findings across diverse disciplines such as management and ethics, aiming to advance the understanding of this complex relationship and facilitate scholarly investigations into the AI-powered leadership domain. Although AI offers tools to enhance efficiency and cognitive abilities, a clear, universally accepted definition of AI-powered leadership remains elusive.

The inherent fragmentation in defining AI leadership and the established link to ethical dilemmas underscore the absolute necessity of robust human critical thinking and moral judgment. The true value of critical thought here is its role as an essential safeguard against algorithmic overreach. Leaders must critically clarify how the benefits of AI are achieved while upholding ethical standards and human-centric values. This involves navigating the inherent risk that reliance on data-driven decision-making may fail to adequately factor in crucial ethical and social issues.

The authors suggest that clarifying the challenges presented by the integration of AI into leadership contexts empowers scholars and practitioners to understand the evolving AI landscape and its impact on effective leadership. What steps are organizations taking today to explicitly build human moral judgment into AI-powered decision architecture? Share your insights.

Reference: Aziz, M. F., Rajesh, J. I., Jahan, F., McMurray, A., Ahmed, N., Narendran, R., & Harrison, C. (2025). AI-powered leadership: A systematic literature review. Journal of Managerial Psychology, 40(5), 604–630. https://doi.org/10.1108/JMP-05-2024-0389

1. Enhancing Top Managers’ Leadership with AI Insights

Part 1 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Enhancing top managers’ leadership with artificial intelligence insights from a systematic literature review by Bevilacqua et al. (2025).

In the AI era, are executive leaders truly adapting, or are they just layering technology over outdated strategic mindsets? Drawing on Upper Echelons Theory (UET), this systematic literature review confirms that AI radically restructures the managerial processes of organizations, making top managers’ leadership a determining factor in AI innovation effectiveness. The study identifies three key research clusters, with a core finding focused on the AI-driven skills top managers now require: data-driven decision-making, agility, and emotional and social intelligence. Successful integration requires leaders to cultivate environments that foster collaboration and knowledge sharing to maximize AI value. The integration necessitates a profound evolution of leadership dynamics, demanding that leaders balance technical capabilities with the ability to handle organizational and sociocultural factors.

The critical finding confirms that data provision alone is insufficient; sophisticated critical thinking is required to translate AI output into legitimate strategic action. The ability to analyze data critically and accurately and extract relevant insights remains crucial for top managers. This critical lens is essential not only for internal process efficiency but, more importantly, for navigating external pressures; top managers must use critical thought to align AI adoption with sociocultural context, ensuring regulatory compliance and ethical use. This critical layer ensures that technology adoption, which is shaped by factors such as social perceptions and regulations, contributes responsibly to competitiveness.

The authors, S. Bevilacqua, J. Masárová, F. A. Perotti, and A. Ferraris, suggest that the study contributes to UET by integrating AI as a crucial variable that radically transforms leadership and decision-making at the executive level. As AI tools multiply, how can we measure and accelerate the critical skill of translating algorithmic insights into human-centric strategic results? Let’s discuss.

Reference: Bevilacqua, S., Masárová, J., Perotti, F. A., & Ferraris, A. (2025). Enhancing top managers’ leadership with artificial intelligence insights from a systematic literature review. Review of Managerial Science, 19, 2899–2935. https://doi.org/10.1007/s11846-025-00836-7

Absolute Bottler or Train Wreck? If You Can’t Say Why, You’re Doomed to Repeat It

TL;DR: If you can’t explain why you succeeded or failed this year, you can’t repeat or avoid it. The fix: (1) Capture what happened immediately after important events—decisions, feelings, assumptions. (2) Reflect critically on the gaps between what you expected and what occurred, acknowledging your own role and the emotions that make this uncomfortable. (3) Build your theory—write down explicit hypotheses about what works in which conditions, recognising these are personal to you. (4) Test deliberately—treat your next decisions as experiments to validate or refine your theories. Repeat continuously, not just in December. Each cycle builds on every cycle that came before.


Had an exceptional year? A disastrous one? If you can’t articulate what drove either outcome, you’re operating in unconscious competence. Or incompetence. You got results, but you have no reliable mechanism to replicate success or avoid failure.

The solution lies in transforming unconscious experience into conscious capability through continuous, systematic reflection. Kolb’s learning cycle offers a practical framework for executives to extract actionable intelligence from their experiences in real-time, not in December when the insights have already decayed.

But here’s what matters: this isn’t about scheduling reflection sessions. It’s about recognising that learning didn’t start this year and won’t end next year. You’re already in the middle of a spiral that’s been turning your entire career.

A note on limitations: Kolb’s cycle has valid critiques. Egan (2008) notes it’s often misapplied as episodic snapshots rather than lifelong learning. Stirling (2013) highlights that individual learning styles aren’t as fixed as sometimes portrayed. The framework works best when you recognise these constraints—when you use it not as a rigid prescription but as a lens for understanding how learning actually accumulates across time, context, and experience.


1. Start With What Actually Happened: Concrete Experience (CE)

Your year wasn’t abstract. It consisted of specific decisions, interactions, wins, and failures. The cycle begins by capturing these concrete experiences as they occur, not retrospectively reconstructing them months later when memory has been sanitised by hindsight bias.

You don’t approach any situation as a blank slate. Every decision you made this year was filtered through pre-existing mental models, assumptions, and biases accumulated from prior experience. These aren’t just background noise. They’re the lens through which you perceived every situation, and they shaped every outcome before you even recognised there was a choice to make.

The question isn’t just “what happened?” It’s “what lens was I viewing it through, and how did that shape what I even saw as possible?”

Example: A CEO I worked with had just closed a difficult acquisition. Her immediate capture included: “Board meeting, 3 hours, felt defensive when questioned about integration timeline. Assumed they doubted my competence. Pushed back hard on their concerns about cultural fit. Got approval but left feeling hollow.” She wrote this within an hour of the meeting ending, while the emotional texture was still present. Three months later, she wouldn’t have remembered the defensiveness or the assumption about competence—only that she “got approval.”

After significant events—quarterly reviews, major deals, difficult conversations, strategic pivots—document the raw experience immediately. What happened? What did you feel? What were you trying to achieve? What assumptions were you carrying into the room? Write it down while it’s still fresh. But recognise this: you’re not starting from zero. You’re building on every cycle that came before.


2. Make Sense of It: Reflective Observation (RO)

This is where executives typically fail. Not through lack of intelligence, but through lack of structured time and psychological safety to genuinely reflect. Reflective observation demands that you step back and examine the experience from multiple perspectives, actively searching for gaps between expectation and reality.

Effective reflection is inherently critical and emotionally challenging. You must confront the discrepancy between what you intended and what actually occurred. This requires acknowledging your own complicity in suboptimal outcomes. Vince (1998) demonstrates through empirical research on management learning that executives are often “defended against experience”—subconsciously resistant to learning because genuine reflection provokes anxiety, threatens established identity, and exposes vulnerability. His work shows that managers mobilise strong emotions, particularly anxieties about their competence, that simultaneously promote and prevent learning.

Reflection isn’t just personal. It operates within organisational power structures. Admitting what you don’t know or acknowledging mistakes carries perceived risk, particularly in cultures that conflate competence with certainty. Your anxiety isn’t irrational. It’s a signal that real learning requires letting go of secure, tested ways of thinking.

Example continued: That CEO spent her next weekly reflection session examining the board meeting from multiple angles. She asked: Why did I assume doubt about competence rather than legitimate concern about risk? Where else have I interpreted questions as challenges to authority? She realised this pattern traced back to her first VP role, where a board member had publicly questioned her judgment. She’d been carrying that defensive posture for twelve years, applying it indiscriminately to contexts where it wasn’t warranted. The board wasn’t questioning her competence—they were doing their fiduciary duty. Her defensiveness had prevented her from hearing genuine concerns about cultural integration that would prove prescient six months later.

Build regular reflection intervals into your operating rhythm. Weekly 30-minute blocks where you ask: What worked? What didn’t? Where did my assumptions prove wrong? What am I avoiding examining? What patterns am I noticing across multiple experiences, not just this week but over months or years?

This must occur while the experience is still emotionally and cognitively accessible, not when it has become a sanitised anecdote. But don’t treat these sessions as isolated episodes. Each one should connect to what came before. What did you think last month? How has that changed? What’s building over time?

How past experiences shape present decisions: Each loop doesn’t just build incrementally—it compounds. That CEO’s twelve-year-old defensive pattern had shaped dozens of board interactions, vendor negotiations, and leadership team meetings. Once she identified it in reflection, she could trace its influence backward through her career and recognise how it had both protected her (in genuinely hostile early environments) and limited her (in collaborative contexts where it created unnecessary conflict). Past cycles create the interpretive lens for present experience. When you reflect, you’re not just examining what happened last week—you’re examining the accumulated sediment of every cycle that prepared you to perceive and act in that particular way.


3. Build Your Theory of Why: Abstract Conceptualisation (AC)

Observation without conceptualisation is merely anecdote collection. Abstract conceptualisation transforms specific observations into generalisable principles—your personal theory of what drives outcomes in your particular context.

This is where unconscious competence becomes conscious. You’re no longer relying on instinct you can’t articulate. You’re constructing explicit frameworks that explain causal relationships. Why did that negotiation succeed? What specific conditions enabled your team’s breakthrough? Which leadership behaviors correlated with engagement versus disengagement?

But here’s what you must understand: these frameworks are yours. They’re built from your experience, filtered through your perceptions, shaped by your history. They won’t work the same way for someone else. They might not even work the same way for you in a different context. This isn’t a weakness. It’s the nature of knowledge. As Stirling (2013) notes in her analysis of Kolb’s epistemological foundations, knowledge is constructed through the transformation of experience—meaning it’s inherently subjective and specific to each learner’s refined interpretations.

Context matters profoundly. The frameworks you develop must be situation-specific. A strategy that worked in a growth market may fail in contraction. Leadership approaches effective with experienced teams may backfire with new hires. The goal is not universal laws but conditional hypotheses: “In situations characterised by X, approach Y tends to produce outcome Z.”

Example of framework construction: That CEO developed what she called her “Question Interpretation Framework.” She wrote:

“When I receive challenging questions from stakeholders (board, investors, senior team):

  • First response is often defensive (legacy pattern from early VP role)
  • Defensiveness correlates with worse outcomes: I miss information, damage relationships, create adversarial dynamics
  • Better approach: pause, assume good intent, ask clarifying question before responding
  • Conditions where this works: high-trust environments, genuine questions (vs. disguised criticism)
  • Conditions where defensiveness is appropriate: documented bad faith, repeated pattern of undermining
  • Test: In next 5 board interactions, consciously pause and assume good intent. Track: my emotional state, information gained, relationship quality, decision outcomes”

This framework is specific to her, built from her history, applicable in particular conditions. It acknowledges both the pattern she wants to change and the contexts where her defensive instinct might still serve her.

After reflection, articulate your working theories explicitly. Write them down. “I’m noticing that when I involve the team early in decision framing—rather than presenting pre-formed solutions—both the quality of solutions and implementation velocity improve. Hypothesis: early involvement increases psychological ownership and surfaces information I wouldn’t have accessed otherwise.”

But also write down the conditions. When does this work? When doesn’t it? What prior experiences taught you to value this approach in the first place? Your theory isn’t emerging from nowhere. It’s part of a longer developmental arc.

The tension here matters. You’re trying to create general principles from specific experiences. You’re using reflective interpretation to understand something you initially grasped through immediate feeling. These are opposing modes of knowing, and they should feel like they’re pulling against each other. As Egan (2008) explains in his reconceptualisation of experiential learning, concrete experience is grasped via apprehension (immediate, tangible), while abstract conceptualisation is grasped via comprehension (reflective interpretation). That tension is where learning lives.


4. Test and Refine: Active Experimentation (AE)

Theories without application remain academic. Active experimentation means deliberately testing your newly formed hypotheses in real-world contexts. You treat your leadership practice as an ongoing series of experiments.

The cycle becomes continuous rather than episodic. You’re not reflecting once annually. You’re operating in a perpetual spiral: experience generates observation, observation produces theory, theory informs experimentation, experimentation creates new experience. Each loop builds on the last. The spiral constantly advances toward greater sophistication and conscious mastery.

Real-world problems are ill-structured and ambiguous. Your theories will be incomplete and sometimes wrong. The commitment is to systematic testing and refinement, not perfection. Each experiment generates data that validates, refutes, or nuances your working hypotheses.

Example of systematic testing: That CEO ran her experiment across five board meetings over two quarters. She documented each instance:

Board Meeting 1 (Budget Review):

  • Question about marketing spend seemed challenging
  • Paused, asked “What specifically concerns you about the allocation?”
  • Board member explained concern about CAC trends in Q3
  • Legitimate concern I’d missed in my analysis
  • Outcome: Better decision, relationship intact
  • Emotional state: Initially anxious, then relieved

Board Meeting 2 (Strategic Planning):

  • Question about international expansion timing
  • Paused, assumed good intent, asked clarifying question
  • Discovered board member had relevant experience from previous company
  • Gained valuable knowledge I wouldn’t have accessed if I’d defended the timeline
  • Outcome: Modified approach, better result

Board Meeting 5 (Crisis Response):

  • Aggressive question about supplier failure
  • Paused, but recognised pattern: this board member consistently undermines in crisis
  • Provided direct response, didn’t seek clarification
  • Framework condition met: documented bad faith
  • Outcome: Appropriate boundary, maintained authority

She tracked: emotional state, information gained, relationship quality, decision outcomes. Over those two quarters, she validated her framework in 80% of cases and identified one condition (crisis + bad faith pattern) where her old defensive response was actually appropriate. Her theory evolved. She refined it. The framework became more nuanced and more useful.

When implementing decisions based on your theories, explicitly frame them as experiments. “Based on my hypothesis about early involvement, I’m going to structure this quarter’s planning process differently and track engagement metrics, decision quality, and implementation speed against last quarter’s baseline.” Document results. Refine theory. Repeat.

But recognise what you’re doing. You’re not just testing a theory. You’re creating a new concrete experience that will feed the next cycle. And that next cycle will be informed by every cycle that came before it. Your learning isn’t contained in quarterly blocks. It’s accumulating across your entire career.


Moving From Accidental to Intentional Performance

The fundamental problem isn’t that executives are incompetent. Many are unconsciously competent, achieving results through pattern recognition and accumulated wisdom they cannot articulate, examine, or reliably replicate. When contexts shift or new challenges emerge, unconscious competence becomes a liability.

The learning cycle transforms this dynamic by making the implicit explicit. But it’s not a technique you apply to discrete events. It’s a way of understanding how you’ve been learning all along, whether you realised it or not.

Learning is a holistic process of adaptation that requires the integrated functioning of your total organism—feeling, thinking, perceiving, and behaving. You can’t just think your way to mastery. You can’t just act your way there either. You need the full cycle, and you need to recognise the tensions within it. Immediate experience versus reflective interpretation. Specific observation versus general theory. These tensions don’t resolve cleanly. They generate growth.

What holistic looks like in practice: That CEO didn’t just change her cognitive framework about board questions. The learning integrated across dimensions. Emotionally, she became less reactive to perceived challenges. Perceptually, she developed capacity to recognise contextual differences—same question, different intent depending on who asked and when. Behaviorally, she built new habits—the pause, the clarifying question, the documentation. Cognitively, she constructed theories she could articulate and test. The change wasn’t in one dimension. It was systemic. That’s what makes it sustainable.

The cycle acknowledges that reflection is not a comfortable, rational exercise but an emotionally demanding, politically complex practice that requires deliberate cultivation. As Vince’s empirical work (1998, 2002) demonstrates, organisational learning is fundamentally political, involving power relations that shape what can be learned, discussed, and changed. It recognises that learning is not an event but a continuous process. December is categorically too late to extract meaningful insight from January’s decisions.

But more than that: you didn’t start learning in January. You brought decades of prior cycles into every decision you made this year. The question is whether you’re conscious of those cycles or whether you’re letting them run on autopilot.

If you crushed it this year, can you explain precisely why? In sufficient detail to repeat it? Can you trace the decision back through the cycles that prepared you to make it? If you struggled, what specific causal factors drove the outcome? What would you change? And what prior learning led you into that situation in the first place?

If you can’t answer these questions with clarity and confidence, you’re gambling with next year’s performance.

Build continuous reflection into your leadership practice. Not as isolated episodes, but as a spiral that connects each experience to what came before and what comes next. Transform each experience into extractable wisdom before the insights fade into generalised memory. Recognise that your frameworks are personal, contingent, and context-dependent—and that this makes them valuable, not less so.

This is how unconscious competence becomes conscious mastery. How accidental success becomes intentional, repeatable performance. How you stop being lucky and start being capable.


References

Egan, T. (2008). A comparison of the Kolb and Illeris learning cycle models. Proceedings of the Adult Education Research Conference.

Stirling, D. (2013). David Kolb’s Experiential Learning: A critical evaluation. Journal of Perspectives in Applied Academic Practice, 1(1).

Vince, R. (1998). Behind and beyond Kolb’s learning cycle. Journal of Management Education, 22(3), 304–319.

Vince, R. (2002). Organizing reflection. Management Learning, 33(1), 63–78.
