
Trust is the linchpin of human-robot teaming in modern warfare.

Photographs by Erika Norton: military personnel in the woods with a robotic dog; a military robot in the woods.

Soldiers move into formation, and their attention shifts to a four-legged robot engineered for rugged terrain and high-risk environments. It falls in with them, navigating obstacles as it advances toward the objective: a dilapidated two-story warehouse.

The team halts at a tree line a few hundred meters out. On cue, the robot detaches and slips through a breach in the fence, hugging the ground. Its thermal imaging and 3D mapping activate as it navigates inside the building’s debris-strewn halls and climbs stairs. Microphones catch muffled voices. Heat signatures confirm three armed people on the ground floor near a stack of wooden crates and two patrolling a catwalk above.

In a corner, behind a concealed panel, the robot flags a weapons cache containing small arms, ammunition, and likely shoulder-fired munitions. Nearly a half mile away, the command tablet in the team leader’s hand lights up with the live data stream. Wireframe overlays display interior layouts, enemy positions, and points of interest.

The robot has done its job: The team knows what awaits them. With targets marked and entry points identified, the lead operator gives a nod.

As emerging technologies reshape combat, soldiers adapt to teaming with robots, unmanned vehicles and decision-support tools driven by artificial intelligence. The military has long used robots for reconnaissance and explosives disposal, but modern systems increasingly operate autonomously. These machines can learn and adapt to their environments, recognizing patterns and making decisions faster than humans can.

It remains unclear how much soldiers will trust and use robotic teammates. Drones and ground robots are increasingly deployed for reconnaissance, logistics and threat detection, promising extended capabilities and reduced risks to personnel. But success depends not only on technical performance but also on human trust and engagement. Building and maintaining trust is crucial; without it, even advanced robots may be sidelined. This article examines how trust and leadership shape effective human-robot teams, drawing insights from military operations.

What Is Trust? Identifying Some Performance Consequences of Overtrust and Undertrust in Automated Systems

Trust is one of the most common variables measured in the human-robot interaction literature (O’Neill, McNeese, Barron, & Schelble, 2022). Lee and See (2004) define trust as the attitude that an agent (here, a robot) will help achieve an individual’s goals in situations characterized by uncertainty and vulnerability. Further, like Mayer et al. (1995), we hold that there are three established bases of trust:

  • Capability: Can the robot reliably perform its intended tasks?
  • Integrity: Is the robot’s behavior predictable and aligned with mission goals?
  • Benevolence: Does it act to support human safety and success?

In human-autonomy teams, trust is essential for effective coordination, morale, and team performance. When trust develops over time, it enhances collaboration and enables human and machine agents to operate cohesively. However, when trust is misaligned, either too high or too low, it degrades performance.
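One way to make the idea of miscalibrated trust concrete is to compare an operator’s trust with the system’s demonstrated reliability. The sketch below is purely illustrative; the 0-1 scales, the tolerance value and the labels are assumptions for illustration, not figures drawn from the studies cited in this article.

```python
# Illustrative sketch: flagging miscalibrated trust.
# The 0-1 scales and the 0.2 tolerance are arbitrary assumptions, not values
# drawn from the studies cited in this article.

def classify_calibration(operator_trust: float, system_reliability: float,
                         tolerance: float = 0.2) -> str:
    """Compare an operator's trust (0-1) with the system's demonstrated reliability (0-1)."""
    gap = operator_trust - system_reliability
    if gap > tolerance:
        return "overtrust"    # trust outpaces capability: risk of complacency
    if gap < -tolerance:
        return "undertrust"   # trust lags capability: risk of disuse
    return "calibrated"

print(classify_calibration(operator_trust=0.9, system_reliability=0.6))  # overtrust
print(classify_calibration(operator_trust=0.3, system_reliability=0.8))  # undertrust
```

The two extremes this flags, overtrust and undertrust, are the failure modes discussed next.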

“Overtrust” often leads to complacency, where operators disengage from active monitoring. Parasuraman, Molloy and Singh (1993) examined automation-induced complacency, showing that operators who relied too heavily on automation were less vigilant and slower to detect failures. This is especially dangerous in high-consequence environments, where rare but critical system errors require timely human intervention.

A related issue is the out-of-the-loop unfamiliarity performance problem (Wickens, 1992). When operators merely monitor automated systems, they may lose situational awareness and become poorly equipped to take control when needed, particularly during abrupt or unanticipated events.

The other extreme is disuse, or underreliance, where users ignore automation or turn it off even when it would enhance performance. According to Parasuraman and Riley (1997), disuse arises from a lack of trust in a system’s capability or transparency. When operators reject automation, they may take on tasks they are less suited for, increase their cognitive load, or compromise mission success.

The Dynamics of Trust Development in Human-Robot Teams

Historically, trust in automation has been described as a matter of calibration, aligning trust with system capability. While this may apply to static automated systems, it falls short of capturing the dynamic behavior of modern autonomous agents. In these systems, trust must be understood as a continually evolving process shaped by real-time performance, transparency and shared decision-making. Understanding how trust forms, breaks down, and recovers over time is critical to building resilient, effective human-autonomous teams.
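To illustrate the difference between one-time calibration and a continually evolving estimate, the sketch below treats trust as a running value nudged by each interaction. The update rule, its weights and the assumption that failures erode trust faster than successes rebuild it (softened when the failure is explained) are illustrative choices, not a validated model from the works cited here.

```python
# Illustrative sketch: trust as a running estimate updated after each interaction.
# Learning rates and the success/failure asymmetry are assumptions for illustration.

def update_trust(trust: float, success: bool, explained: bool = False) -> float:
    gain, loss = 0.05, 0.20                # assumed learning rates
    if success:
        trust += gain * (1.0 - trust)      # slow accumulation with good performance
    else:
        penalty = loss * (0.5 if explained else 1.0)   # transparency softens the hit
        trust -= penalty * trust
    return max(0.0, min(1.0, trust))

trust = 0.50
outcomes = [True, True, True, False, True, True]       # one failure mid-sequence
for success in outcomes:
    trust = update_trust(trust, success, explained=not success)
    print(round(trust, 3))
```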

As individuals encounter new information or experiences, their trust shifts, sometimes consciously, but often not; and those shifts bear directly on coordination, morale and performance.

While human-to-human trust often stems from empathy, communication and reciprocity, trust in autonomous systems depends more on perceived reliability, predictability, and transparency. Malle and Ullman (2021) argue that trust in less autonomous systems centers on reliability and capability; however, as robots become more socially expressive, trust may increasingly involve moral dimensions such as benevolence, integrity and authenticity.

Trust in machines can also be shaped by deliberate design choices, with trust cues embedded within robotic systems themselves. Moreover, humans often anthropomorphize machines, projecting human-like traits onto them, especially when systems are socially expressive or embodied. This can lead users to form emotional attachments or unrealistic expectations, even while holding machines to stricter performance standards than human teammates.

Indeed, Phillips, Ososky, Grove and Jentsch (2011) pointed out that as robots move from tools to teammates, appropriate mental models of robot capabilities and behaviors are critical in maintaining trust.

System failures reduce human-autonomy team performance (Hillesheim & Rusnock, 2016), lower trust (Hafizoğlu & Sen, 2018a, 2018b) and increase workload (Chen et al., 2011). Failures create uncertainty, making trust repair essential. Without it, operators may reduce reliance, increase their own workload, or assume tasks they are not well equipped to perform. Several researchers have examined trust violations and repair strategies such as apologies and explanations (e.g., Baker et al., 2018; de Visser et al., 2018). More recently, Esterwood and Robert (2022) highlight two persistent issues: inconsistent findings across studies and a lack of strong theoretical frameworks to account for these varied outcomes.

To address this, Pak and Rovira (2024) apply the Elaboration Likelihood Model (Petty & Cacioppo, 1986), framing trust repair as persuasion: following a trust violation, the autonomous system is attempting to change an individual’s trust toward it. According to their model, trust repair depends on whether it occurs through the central or peripheral route, which is determined by the information presented in the repair strategy and the way the recipient interprets and engages with that information. They argue that outcomes hinge on cognitive engagement, workload, attentional capacity, motivation and the timing of the repair strategy.
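Read this way, the choice of repair strategy can be sketched as a simple branching decision: when the operator has the capacity and motivation to scrutinize the message, substantive explanation (the central route) matters most; when workload is high or attention is scarce, simpler cues such as a prompt acknowledgment (the peripheral route) carry more weight. The sketch below is one illustrative reading of that framing; its thresholds and message wording are assumptions.

```python
# Illustrative sketch of the central vs. peripheral distinction applied to trust repair.
# Thresholds and message wording are assumptions for illustration only.

def choose_repair(workload: float, motivation: float) -> str:
    """workload and motivation on an assumed 0-1 scale."""
    can_elaborate = workload < 0.5 and motivation > 0.5
    if can_elaborate:
        # Central route: detailed, argument-based repair.
        return ("Detailed explanation: what failed, why it failed, "
                "and what the system will do differently next time.")
    # Peripheral route: quick, low-effort cues.
    return "Brief acknowledgment and apology, with details deferred until after the mission."

print(choose_repair(workload=0.8, motivation=0.4))   # peripheral route
print(choose_repair(workload=0.2, motivation=0.9))   # central route
```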

Transparent operation, timely feedback and adaptive responses to errors increase the likelihood of regaining user trust. Robots that communicate the nature and resolution of failures are more likely to be trusted. How trust repair and other trust affordances are made part of the design of autonomous tools remains a critical area for research and development.

Military Applications: Where Trust Is Tested

Trust must be earned through repeated, reliable and mission-relevant performance. Hoff and Bashir (2015) identify dispositional trust (a user’s baseline tendency to trust), situational trust (based on the immediate context) and learned trust (built through experience). In military operations, all three intersect, especially when robots are introduced into mission-critical scenarios. If operators have no prior exposure to the system, or the system’s behavior is unclear or unpredictable, situational trust breaks down quickly, regardless of its technical potential.
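As a rough illustration of how these three layers might intersect, the sketch below combines them into a single working estimate that shifts weight toward learned trust as exposure accumulates. The weighting scheme and scales are assumptions for illustration; Hoff and Bashir describe the layers conceptually, not as a formula.

```python
# Illustrative sketch: combining dispositional, situational and learned trust.
# The weighting scheme is an assumption, not part of Hoff and Bashir's model.

def overall_trust(dispositional: float, situational: float, learned: float,
                  exposures: int) -> float:
    """All inputs on an assumed 0-1 scale; 'exposures' counts prior interactions."""
    # With little experience, baseline disposition and context dominate;
    # with repeated exposure, learned trust carries more weight.
    w_learned = min(0.7, 0.1 * exposures)
    w_rest = 1.0 - w_learned
    return w_rest * (0.5 * dispositional + 0.5 * situational) + w_learned * learned

print(round(overall_trust(0.6, 0.4, 0.9, exposures=0), 2))   # no prior exposure
print(round(overall_trust(0.6, 0.4, 0.9, exposures=10), 2))  # experienced operator
```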

Trust is especially fragile in military applications. Initial exposure matters: When soldiers are not trained with robotic systems, they may hesitate to use them under pressure, leading to underutilization or rejection. Even high-performing systems can suffer from trust erosion due to delays, erratic behavior or confusing interfaces. These issues signal unreliability and disrupt the fluid coordination needed in time-sensitive operations.

Robots must do more than function; they must communicate effectively. Clear, intuitive feedback and the ability to signal both success and failure in understandable ways are essential. Real-time interaction and user-centered interfaces directly influence whether a system is seen as a helpful partner or a burden.
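What clear, intuitive feedback might look like in practice can be sketched as a structured status report that states what the system is doing, why, how confident it is, and, when something fails, what went wrong in plain language. The field names and wording below are assumptions for illustration, not a fielded interface specification.

```python
# Illustrative sketch: a structured status report a robot might surface to its operator.
# Field names and wording are assumptions, not a real interface specification.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StatusReport:
    task: str                     # what the system is doing
    rationale: str                # why it is doing it
    confidence: float             # assumed 0-1 self-assessment
    error: Optional[str] = None   # plain-language failure description, if any

    def render(self) -> str:
        line = f"{self.task} ({self.rationale}); confidence {self.confidence:.0%}"
        return f"{line}; FAULT: {self.error}" if self.error else line

print(StatusReport("Scanning ground floor", "clearing route to objective", 0.85).render())
print(StatusReport("Holding position", "stairwell blocked by debris", 0.40,
                   error="sensor returns degraded; awaiting operator guidance").render())
```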

The “tool vs. teammate” distinction is often put to the test in field conditions: Soldiers may initially treat robots as tools, but perceptions shift when a system proves it can reliably execute tasks, respond to commands and support mission goals. Conversely, a system may be discarded when it looks more sophisticated than its performance bears out. In military environments, where performance under uncertainty is the norm, trust is a central requirement for adoption and sustained use.

A Leader’s Role in Shaping Human-Robot Teaming Integration and Trust

Traditionally, military leaders have played an important, but somewhat indirect, role in technological acquisition and deployment. While high-level decisions are often made by defense agencies, acquisition organizations and government bodies, military leaders are typically involved in setting operational requirements, providing feedback and shaping doctrine that guides technology use.

Today, their involvement is becoming more direct and critical, as leaders set the culture for who and what is trusted and valued. Trust is not only technical, it is cultural. Leaders create the conditions under which soldiers are willing to rely on nonhuman teammates. Lack of leadership engagement often leads to robots being seen as distractions or burdens.

It is essential to cultivate leaders who possess the vision, adaptability and technical expertise to integrate emerging technologies, thereby maximizing lethality and operational advantage on the modern battlefield. Today’s leaders must actively shape how human-robot teams function under stress and uncertainty. That means supporting the development of robotic platforms that communicate clearly and respond intuitively, enabling soldiers to understand not only what a system is doing, but why.

It means planning for failure, not just success: Trust repair must be part of design and doctrine. Soldiers will reengage with autonomous systems only if they can understand what went wrong and recover from it.

Leaders must create environments that allow trust to form: low-risk training, repeated exposure and consistent messaging that robots are part of the team, not just tools to be discarded when inconvenient. They enable soldiers to be creative and adaptable in integrating emerging technologies. The “tool vs. teammate” distinction isn’t settled in code; it is shaped by experience and command emphasis. In short, effective leaders are the decisive factor in closing the gap between technological potential and practical, trustworthy integration.

Conclusion

Leaders must recognize that successful human-robot integration is not just a matter of fielding advanced systems but of building the conditions in which trust can form, break and be repaired. Leadership sets expectations, shapes training environments, and reinforces the view of robots as mission-critical teammates.

Trust is central to this integration, and it demands consistent reliability, intuitive design and command support.

Human roles will evolve, not disappear. Robots are not replacing people; they are redefining roles and reshaping responsibilities. Leaders must ensure teams are trained to work with robots, not merely operate or supervise them. This requires deliberate attention to team dynamics, adaptability and shared mental models between humans and machines.

As technology evolves, success depends not only on the sophistication of intelligent systems, but on how effectively people and machines collaborate under pressure. The future of military advantage lies not in autonomy alone, but in leadership that ensures these systems are understood, tested and trusted, turning technical potential into operational performance.


In the Lead magazine is a collaboration between the Buccino Leadership Institute and the Stillman School of Business’s Department of Management. This edition reaffirms Seton Hall’s commitment to fostering innovative, ethical and impactful leadership. Stay ahead of the curve — explore the Fall 2025 issue of In the Lead.
