Editorial

There Is No Universal AI Persona

By Malvika Jethmalani
The biggest mistake leaders make with AI is treating it like a universal assistant. Different phases of collaboration require different AI roles.

The executive teams I speak with are beginning to experiment with bringing generative AI (GenAI) into strategic work. They use it to capture notes, pull data, draft alternative scenarios and tighten messaging. They tend to treat it as a neutral assistant, something that simply helps the team move faster. I believe that assumption deserves closer scrutiny. Every contribution in a group setting carries a function, and AI is no exception, especially now that we have entered the “AI as a teammate” era.

Collaboration is not a single activity. Brainstorming, evaluation, conflict resolution and alignment are all distinct social processes. Each requires different contributions from participants. Treating AI as a general-purpose assistant ignores this distinction. The research suggests that effective human-AI collaboration depends on assigning the right role at the right moment because there is no universal AI persona that works across contexts.

A Framework From 1948

Long before GenAI, social psychologists Kenneth Benne and Paul Sheats offered a taxonomy of group behavior in their 1948 article, “Functional Roles of Group Members.” They identified three broad categories:

  • Task roles that advance the work (e.g., initiating ideas, providing information, evaluating options)
  • Maintenance roles that sustain relationships and cohesion (e.g., encourager, harmonizer, gatekeeper)
  • Individualistic roles that serve personal needs at the group’s expense (e.g., blocker, aggressor, dominator)

Their primary insight was that group performance hinges on whether the necessary functions are covered. A group with abundant ideas but no mechanism to evaluate them will struggle. A group with sharp critics but no harmonizers will fragment. A group that lacks dissent will converge prematurely.

When AI enters a collaborative setting, it occupies one or more of these roles. AI systems can do many things well, but the more important consideration is whether they are doing the right thing at the right time.

AI as Co-Ideator

Teams often require divergence in early-stage strategy work. They need idea expansion, reframing and exploration beyond existing assumptions.

Research on AI-augmented brainwriting demonstrates that LLMs can enhance idea diversity and support refinement during creative collaboration. When integrated into structured ideation processes, AI helps participants expand on nascent concepts and explore novel combinations.

In Benne and Sheats’ terms, AI is stimulating cognitive expansion by performing task roles such as initiator and contributor. This can be valuable because executive teams frequently fall prey to incremental thinking. An AI co-ideator can surface adjacent markets, unconventional partnerships and alternative business models that might not emerge organically.

However, divergence is only one phase of strategic work. A persona optimized for expansion can become counterproductive when the group shifts toward evaluation of ideas and decision-making.

AI as Devil’s Advocate

As conversations progress toward a decision, the risk profile changes. The greatest dangers in this phase are overconfidence, confirmation bias and premature alignment.

Chiang et al. examined the impact of introducing an LLM as a devil’s advocate in group decision-making. In the study, a second LLM was tasked with questioning the primary AI system’s recommendation. When teams were exposed to that structured pushback, they demonstrated better judgment about when to rely on the AI and when to scrutinize it. Notably, the most effective interventions targeted the AI’s recommendation rather than the majority view directly.
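The pattern the study describes, a second model that pushes back on the first model’s recommendation rather than on the people in the room, can be sketched roughly as follows. The `query_llm` helper and both prompts are illustrative assumptions for the sketch, not the study’s actual implementation or any particular vendor’s API.

```python
def query_llm(system: str, user: str) -> str:
    """Placeholder for a call to whichever chat-completion API the team uses."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def recommend_and_challenge(decision_context: str) -> dict:
    # Step 1: a "recommender" model proposes a course of action.
    recommendation = query_llm(
        system="You are a strategy analyst. Recommend one course of action with a brief rationale.",
        user=decision_context,
    )
    # Step 2: a separate "devil's advocate" pass targets that recommendation,
    # not the people in the room, which keeps the friction impersonal.
    critique = query_llm(
        system=(
            "You are a devil's advocate. Identify the weakest assumptions, "
            "missing evidence and likely failure modes in the recommendation "
            "below. Do not propose new options."
        ),
        user=f"Context:\n{decision_context}\n\nRecommendation:\n{recommendation}",
    )
    return {"recommendation": recommendation, "critique": critique}
```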

The distinction is relevant for executive contexts because leaders are sensitive to status dynamics — direct human dissent can be socially costly in such settings. An AI that surfaces counterarguments, stress tests assumptions and highlights edge cases can introduce structured friction without escalating interpersonal tension.

AI-mediated dissent may amplify minority viewpoints, according to research by Lee et al. When disagreement comes from a system rather than from a specific individual, the perceived risk of speaking up falls. In this role, AI functions as evaluator and critic, challenging reasoning rather than generating additional options.

A default assistant persona, optimized for helpfulness and summarization, rarely performs this function; it tends to reinforce emerging consensus. In high-stakes decisions, that reinforcement can accelerate groupthink.

AI as Mediator

Entrenchment of positions is the central problem in polarized discussions. Tessler et al. studied AI-mediated deliberation through what they termed the “Habermas Machine.” The system synthesized diverse viewpoints into balanced group statements through iterative refinement. Participants preferred AI-generated summaries over those produced by human mediators and reported reduced polarization following deliberation.

Here, AI performs maintenance roles by harmonizing perspectives, integrating competing arguments and articulating trade-offs in neutral language. This function can be critical in executive settings when strategy debates harden into camps as leaders passionately defend departmental priorities or risk tolerances. An AI mediator can reframe the conversation by clarifying shared objectives and summarizing points of disagreement without attributing them to individuals.

This is a fundamentally different contribution from that of a co-ideator or a critic. Alignment requires synthesis. A persona optimized for creative expansion would likely amplify polarization in this setting. A persona optimized for critique could intensify defensiveness.

Facilitation and Its Limits

Assigning AI the correct role does not guarantee superior outcomes. In a large, randomized experiment, researchers tested LLM-facilitated decision-making in a setting where critical information was distributed unevenly across group members. Although AI increased the amount of information shared, it did not significantly improve the accuracy of the final decision. This finding highlights that even though AI can improve process quality, surface more information and structure the conversation, deeply rooted cognitive biases endure.

For executive teams, this means that AI role design must be integrated into broader governance systems through clear decision rights, explicit criteria and accountability structures. AI can fill functional gaps, but it cannot replace disciplined leadership.

The Error in the Assistant Model

The widespread adoption of a helpful assistant model stems from convenience. It allows organizations to deploy AI without rethinking collaboration. The assistant summarizes, drafts, retrieves and suggests. It appears additive and neutral, but in practice, neutrality is illusory. Each AI configuration strengthens certain contributions and weakens others:

  • A summarizing assistant amplifies dominant narratives.
  • A generative assistant expands possibilities but may prolong indecision.
  • A critical assistant sharpens analysis but may destabilize cohesion.
  • A mediating assistant fosters alignment but may bypass healthy dissent.

The mistake lies in assuming one persona can serve all phases of collaboration. Benne and Sheats make clear that groups require different functional roles at different times, and the same principle applies to AI.

Designing AI Roles Intentionally

To leverage AI effectively for collaboration, follow these steps:

  • Disaggregate collaboration into phases. Identify when the group is engaged in divergence, evaluation, conflict resolution, or convergence.
  • Diagnose which functional roles are underrepresented. Is the risk insufficient creativity, overconfidence, polarization, or silence?
  • Assign AI a role aligned to that diagnosis. Configure it explicitly as co-ideator, critic, or mediator, and clarify its boundaries and scope (see the sketch after this list).
  • Revisit role assignment as the conversation evolves. Collaboration is dynamic, so AI personas should not remain static.
  • Embed AI role design within governance structures. Decision authority must remain clear. AI can challenge, expand, or synthesize, but it cannot assume accountability.
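One way to make that role assignment concrete is to bind each collaboration phase to its own system prompt rather than relying on a single default assistant. The sketch below is a minimal illustration assuming a chat-style API; the phase names and prompt wording are placeholders, not prescriptions drawn from the research above.

```python
# Phase-dependent persona configuration (illustrative wording only).
PERSONAS = {
    "divergence": (
        "You are a co-ideator. Expand on nascent ideas, surface adjacent "
        "markets and unconventional combinations. Do not evaluate or rank."
    ),
    "evaluation": (
        "You are a devil's advocate. Challenge the group's emerging "
        "recommendation: surface counterarguments, stress-test assumptions "
        "and highlight edge cases. Do not generate new options."
    ),
    "conflict_resolution": (
        "You are a mediator. Synthesize competing positions into a neutral "
        "summary, clarify shared objectives and state trade-offs without "
        "attributing views to individuals."
    ),
}


def build_messages(phase: str, transcript: str) -> list[dict]:
    """Assemble a chat payload with the persona matched to the current phase."""
    if phase not in PERSONAS:
        raise ValueError(f"No persona defined for phase: {phase}")
    return [
        {"role": "system", "content": PERSONAS[phase]},
        {"role": "user", "content": transcript},
    ]


# Example: the group has converged too quickly, so the critic persona is invoked.
payload = build_messages("evaluation", "Draft recommendation: enter market X in Q3.")
```

Keeping the persona definitions in one place makes it easier to revisit the assignment as the conversation moves between phases, and to keep the prompts under the same governance review as the decision process itself.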

Organizations that treat AI as a general assistant may capture incremental efficiency gains, but organizations that treat AI as a functional teammate will redesign collective cognition.

The myth of the helpful assistant persists because it simplifies adoption. The research indicates, however, that leveraging AI for collaboration requires more deliberate design. Collaboration is a system of roles, and AI alters that system the moment it begins to contribute. There is no universal persona that serves every purpose. There are only functions, context-dependent and time-sensitive.

Editor's Note: What other questions arise when contemplating human-AI collaboration?


About the Author
Malvika Jethmalani

Malvika Jethmalani is the Founder of Atvis Group, a human capital advisory firm driven by the core belief that to win in the marketplace, businesses must first win in the workplace. She is a seasoned executive and certified executive coach skilled in driving people and culture transformation, repositioning businesses for profitable growth, leading M&A activity, and developing strategies to attract and retain top talent in high-growth, PE-backed organizations.

Main image: Dan Dennis | Unsplash