17. Multilevel Review of AI in Organizations

Part 17 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice by Bankins et al. (2024).

Is the biggest challenge of AI implementation human resistance, or our failure to frame AI’s role correctly within the organizational psychological contract? The future of work depends not just on technology adoption, but on how human leaders manage the ensuing social and psychological shifts.

This multilevel literature review focuses specifically on the micro-level implications of AI for organizational behavior (OB), inductively classifying findings across individual, group, and organizational contexts. The study finds that successful Human–AI collaboration is fundamentally driven by employee attitudes, which are highly contingent on how employees perceive AI’s capabilities relative to their own. Crucially, collaboration is significantly facilitated when employees feel both confident in and supported by the AI system. Conversely, when AI is perceived as a control mechanism (e.g., in the gig economy), workers may resort to “anticipatory compliance” or identity work for psychological relief. This highlights that the successful integration of AI requires intentional strategies to enhance job autonomy and innovative behaviors, rather than merely increasing control or substitution.

The tension highlighted between AI’s capacity for objective management (e.g., in HR processes) and human perceptions of fairness demands acute critical thinking in the psychological design of work. The critical takeaway is that leadership must use strategic judgment to mitigate the risk of “algorithmic reductionism,” where fair outcomes are achieved at the expense of procedural justice perceptions. When AI decisions are perceived as unfair or lacking transparency, employee trust erodes. Therefore, critical thinking must be applied to determine how to positively frame AI’s contributions, while consciously structuring managerial practices (hiring, promotion) to ensure that the use of AI promotes genuine organizational fairness.

The authors, Sarah Bankins et al., suggest that future research must investigate how AI can be deployed to promote fairness in managerial practices such as hiring, promotion, and compensation decisions. How do you ensure your AI implementation strategy focuses on enhancing employee confidence rather than triggering anxiety over substitution?

Reference: Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159–182. https://doi.org/10.1002/job.2735

7. Influence of Leadership on Human–AI Collaboration

Part 7 of 17 of a research-based series exploring AI’s impact on leadership. This post summarises the article Influence of Leadership on Human–Artificial Intelligence Collaboration by Zárate-Torres et al. (2025).

In the newly emerging hybrid workforce, what defines the essential boundary between the cold logic of an algorithm and indispensable human judgment? This research proposes a conceptual model in which leadership acts as an ethical and strategic mediator in the Human Intelligence (HI)–Artificial Intelligence (AI) relationship, defining a crucial hybrid space of cooperation. The core finding establishes that while AI provides algorithmic efficiency based on data processing, HI remains necessary for interpretation, experience, and contextual judgment. Leadership modulates this relationship, shifting from mere supervision toward an essential role in co-creation. The model posits that effective leadership must integrate ethical governance mechanisms and establish mechanisms that balance algorithmic efficiency with cognitive adaptability.

The introduction of this HI–AI hybrid space fundamentally reinforces and redefines human critical thinking as the ultimate strategic and ethical function. Critical thinking is embodied in the leader’s role of translating automated decisions into comprehensible language for teams, ensuring algorithmic transparency, and contextualizing decisions ethically. The essential need for human critical thought derives from the fact that it is the only mechanism capable of putting automated decisions “in real context through human judgment and reasoning”, thereby guaranteeing organizational resilience beyond technical capability.

The authors, R. Zárate-Torres, C. F. Rey-Sarmiento, J. C. Acosta-Prado, N. A. Gómez-Cruz, D. Y. Rodríguez Castro, and J. Camargo, suggest that leadership acts as the axis that brings together human and technological systems, creating highly flexible, efficient, and ethically overseen interaction. As your organization integrates AI, how are you explicitly training leaders to be effective translators of algorithmic logic into human-centric direction? Share your strategies.

Reference: Zárate-Torres, R., Rey-Sarmiento, C. F., Acosta-Prado, J. C., Gómez-Cruz, N. A., Rodríguez Castro, D. Y., & Camargo, J. (2025). Influence of Leadership on Human–Artificial Intelligence Collaboration. Behavioral Sciences, 15(7), 873. https://doi.org/10.3390/bs15070873