In Brief
- From Personal Productivity Bursts to Enterprise-Wide Adoption: Personal AI experiments have been delivering productivity wins, but scaling across an organization takes a whole new mindset and approach.
- Data Quality Is a Non-Negotiable: With an estimated 80% of unstructured data relegated to the junk drawer, setting AI loose on your enterprise data without first cleaning it up will end with AI delivering junk responses.
- Change Management Is Hard. With AI It's Even Harder: All tech rollouts have to address the human psychology around the change, but AI is in a class of its own in terms of the fears and uncertainty it prompts in people. Addressing the human factor is the only way your pilot will succeed.
Study after study has shown the productivity increases generated by generative AI. However, those gains are all too often confined to the individual level.
On today's episode of Three Dots, we brought together a panel to discuss what prevents AI pilots from scaling, identify the main obstacles and debate whether ROI on AI pilots is worth pursuing.
Rebecca Hinds, head of the Work Innovation Lab at Asana; Craig Durr, founder of Collab Collective; and Alan Pelz-Sharpe, founder of Deep Analysis, join me to discuss where we are with AI and what comes next. Tune in for more.
Table of Contents
- The Difference Between AI for Personal Productivity and AI at Scale
- Misaligned Expectations of AI ROI
- Are We Asking the Wrong Questions With AI Pilots?
- The Human Element: Cognitive Load and AI Adoption
- AI Utopia Won't Pay Big Tech Vendors' Bills
- AI Governance Questions
- Where to Start When Considering a Large-Scale AI Pilot
- The Effect of Artificial Intelligence on Human Psychology
The Difference Between AI for Personal Productivity and AI at Scale
Siobhan Fagan: Hi everyone and welcome to today's episode of Three Dots. My name is Siobhan Fagan, I'm editor in chief of Reworked and am happy to be here today.
You may notice a few more faces on the screen with me today; we're doing something a little bit different. We're having a panel discussion, and the topic today is why AI isn't scaling in the organization.
With me today, I have Craig Durr, the founder of Collab Collective. We also have Rebecca Hinds. She is the head of the Work Innovation Lab at Asana. And then last but not least, Alan Pelz-Sharpe, founder of Deep Analysis.
Let's jump in. With AI, we've been seeing these little bursts of productivity for individuals on a single project. But when we look at it scaling across an organization, it's not really taking off. Different studies back this up; IDC and Lenovo came out with one that suggests 88% of AI pilots are failing.
Rebecca, your own organization has done research which found two-thirds of organizations fail to scale AI. So why don't you jump in and start from that point? Talk a little bit about what you found about why AI projects are not scaling.
Rebecca Hinds: That's exactly right. We see a lot of organizations have experimented with AI and deployed AI, but when it comes to actually deploying it across multiple different departments and functional groups at scale, that's where we see a breakdown. And I think there are numerous different reasons for that.
A big one that we see consistently in the research is there's a disconnect between the people at the top of the organization and the individual contributors who are often tasked with implementing these AI solutions, as well as adopting it. And when AI is implemented without that bottom-up buy-in, it typically backfires because either employees don't know how to use the tool or they're not invested in the solution.
Our research consistently points to this need for both top-down direction. You absolutely need policies, principles, guidelines, but you also need to understand how do we encourage bottom-up adoption? And so we see that some of the most influential people in the organization who are responsible for effectively scaling AI are not the people you might expect. They tend not to be incredibly technical. They tend to be domain experts. They tend to be people who already span different functional groups. We sometimes call them bridgers.
And so it really does require a concerted, both top-down and bottom-up approach to effectively scale. And often you have CEOs or mandates from the board where there is a pressure to implement this technology very quickly and at scale. We often see that that doesn't work because it doesn't account for this very important change that needs to happen.
Siobhan: That makes sense. And I should probably just clarify before I go any further that we're talking about mainly large language model implementations. There's tons of technology out there that already has AI embedded. We're not really looking at that necessarily.
So we're already saying that clearly the people part is a big hurdle, but I want to talk a little bit about what's happening on the technology side. Alan, if you could jump in there. I know we're not saying that the technology itself is the problem; it's more the state of organizations' back ends that potentially might be getting in the way. Can you dig into that?
Alan Pelz-Sharpe: Yes, it's two things. I agree with everything Rebecca said, and I want to add to what she said before I get into the tech side. The historical reality is that 75% of IT projects fail, and they've been failing for the same reasons for the last 40 years. So I don't really know why we expected something different with AI. So there we go.
But when it comes to the tech side, there's probably more than two, but to simplify things, there are two big elements. One, and yes, it's a cliche, but garbage in, garbage out, right? And frankly, most people's data, particularly unstructured data, because we are talking here about generative AI, is an absolute mess. Again, that's just how it is, right?
If you do a data audit of unstructured data, you almost certainly will find 20% or less is actually of any value. So generally with AI, whether you go the RAG approach or whatever, that's a big job you've got on your hands. The tech itself, that's slightly different.
So yes, the AI works. It does what it's supposed to do, but it also has to do it in your environment. It has to work with your stack, with your business applications, your data and your processes. And those are typically not well documented, not properly secured, whatever it is. But again, it comes back to what Rebecca was saying, that chronic disconnect between the people at the top and the people at the bottom.
Again, that's always been the case. I'll be controversial here and say that at the C-level there's typically an assumption that people just aren't working hard enough, as opposed to there actually being some real challenges.
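As a rough illustration of the data audit Alan describes, here is a minimal Python sketch. The folder path, the staleness threshold and the idea of flagging exact duplicates are all illustrative assumptions, not a prescribed method; a real audit would also weigh ownership, sensitivity and relevance before any content is fed into a RAG pipeline.

```python
import hashlib
import os
import time

STALE_AFTER_DAYS = 3 * 365  # illustrative threshold for "stale" content

def audit_folder(root: str) -> dict:
    """Very rough audit: count exact duplicates and stale files in a folder tree."""
    seen_hashes: set[str] = set()
    stats = {"total": 0, "duplicates": 0, "stale": 0}
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stats["total"] += 1
            # Hash the file contents to detect byte-for-byte duplicates.
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            if digest in seen_hashes:
                stats["duplicates"] += 1
            seen_hashes.add(digest)
            # Flag files that have not been touched in years.
            age_days = (now - os.path.getmtime(path)) / 86400
            if age_days > STALE_AFTER_DAYS:
                stats["stale"] += 1
    return stats

# Example (hypothetical path): print(audit_folder("/path/to/shared-drive"))
```

Even a crude pass like this tends to surface the "20% or less of any value" pattern Alan mentions, which is why the cleanup job usually dwarfs the model work itself.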
Misaligned Expectations of AI ROI
Siobhan: Rebecca, you talked about the C-suite and these expectations to just roll out AI. Is part of it maybe unrealistic expectations about what it can do? Because if you use one of these tools and ask it to do something, it can do something very surprising. And so you think it's going to do the same sort of thing at scale within the organization. Do you see that as potentially being part of the issue?
Rebecca: I think it's several different factors. What I see overwhelmingly is people trying to find use cases for AI, for generative AI, now for agentic AI. They're not truly anchoring in the business use cases, and I think that's where a lot of the problem is coming from. A head of AI at a large retail company I talked to a few months ago implemented a completely new vetting process for any AI solution that was going to be implemented in the organization: it has to pass the same business-case bar as any other technology. AI doesn't get a free pass just because it's this shiny new technology.
I think there's a pressure to adopt AI regardless of whether the use case matters and whether it'll matter in a month or a year in the future. And I think that's one of the biggest challenges we're seeing.
Siobhan: Craig, I want to bring you in here.
Craig: I want to build off of what both Alan and Rebecca said. Talking to some customers, one of the things I've seen is the challenge of a CEO mandate about an investment requiring an ROI. It gets handed over to the CTO or the CFO, and they put those parameters together. Then they roll it out and they're not getting what they expect as an ROI. And so there's kind of a start-stop momentum that takes place.
Now that's not most use cases, but I think the challenge is how you're going to define ROI for an AI implementation. A lot of the talk has been around productivity gains. A lot has been around softer measures like employee well-being, happiness, things along those lines.
Removing friction from a process is difficult to measure. You can put a stopwatch to it, but it's the balance of being effective versus being efficient. Are you able to create processes that aren't just efficient, that aren't just shorter in time, but that actually solve your business problems more readily?
One of the easy wins is when AI solves the boring use cases; those seem to be the low-hanging fruit. Call hold time, things like that. So you see certain sectors adopting it more readily: call center environments, contact centers. Customer experience has some low-hanging fruit around customer dissatisfaction, because CSAT is readily measured. So if I can make that call time shorter, if I can make the path to resolution self-service for that customer, it's measurable. It starts answering that ROI challenge and then moves up the value chain to the CEO conversation.
But I do contend there's a point in time where we might have to start thinking about this the way we thought about internet access. In the early days, executives probably thought they should limit who had access to the internet based upon cost and the return on it. But it wound up being a tool that's part of the fabric of how people get work done. So ROI is an important element, I don't want to discredit anyone who's building off of that, and I'm sure they're finding it, maybe in those call center use cases. But there is going to be a point where that mindset shifts, where it's more about how we do work as opposed to a technology that requires an ROI.
Are We Asking the Wrong Questions With AI Pilots?
Siobhan: I'm listening to some of your descriptions, and in the literature around AI it's always about increasing productivity. But it seems like so much of it is about making the way we currently work marginally better. Is that the wrong question to be asking when thinking about these pilots? Should people actually be trying to figure out how to improve work entirely, and not just the existing processes?
Alan, I saw you nod your head first, so sorry, you're on the spot.
Alan: Absolutely. Again, you could say this about all IT, but with AI specifically, all too often people are buying technologies and running down the path. They don't even know what the problem is. They know they've got a problem, but they don't actually know what it is, right?
I won't bore you with them today, but there are lots of anecdotes from the past. I can think of one where they committed $25 million to the project and the tech was not the problem. The tech was 100% not the problem, and it didn't matter which consultant came in and explained to them that it was not their problem. They'd already committed to it.
That happens a lot, right? So to pick up on Craig's point, I think this is the key thing. It's a very basic thing, but very few people follow it. I'm sure I'm guilty, too. I've got a problem, but is the problem actually a symptom of something else? You've got to get down to that. And to pick up on Rebecca's point, which is a really good one and a very important one: you have to talk to the people who are actually going to do this work. You have to involve them at the beginning.
Otherwise, I guarantee this is not going to go well, because they're the only ones who know how this works. It's definitely not documented. It's definitely not in a flow chart somewhere. And if there is a flow chart, don't trust it. Talk to the people who do the job, and a lot of the time they're the last people anyone has spoken to.
Craig: I'm looking at some data right now, because when Rebecca mentioned that, I pulled up something from a company called Gallagher. The question posed to end users was: which of the following are in place in your organization? It aligns exactly with what Rebecca's research was saying.
Sixty-six percent said they couldn't identify a responsible person in charge of AI. Sixty-five percent didn't think they had the tools in place. An even larger percentage, 78%, didn't feel they were properly trained on how to use AI. And more importantly, there was a huge group, about 70%, that just didn't know when and where they were allowed to use it.
Because is the AI going out to ChatGPT, and am I letting my company data out as part of that? Is it staying within Copilot, within Microsoft? Am I licensed for that or not? Because it's an expensive license, and maybe my IT organization is rolling it out. So at that end user level, I'm right there with what Alan and Rebecca said, and it's just another data source that helps align to that.
Rebecca: Yes, it's staggering. What we see is that the companies that have been successful at scaling AI think beyond ROI. Very few organizations right now are actually measuring whether users are using the technology. They're significantly more likely to measure ROI than any sort of user satisfaction.
We know from decades of technology adoption that technology rarely fails because of the technology. It fails because humans resist it and don't use it. And the fact that so few organizations are actually asking employees whether this is the right solution, or measuring whether they're using it, is very surprising, but consistent, as Alan mentioned, with what we've seen in the past. The companies that have been successful, though, are recognizing that those end users need to be at the center of the strategy.
The Human Element: Cognitive Load and AI Adoption
Siobhan: Rebecca, can I stay with you for a moment? Alan and Craig, jump in after if you want. When we're looking at some of these tools, they're just these big sandboxes with absolutely no structure around them whatsoever. So is part of it that when you're putting them in front of employees who are already doing their daily jobs, they don't necessarily know where to start or what to focus on? Can you speak to that?
Rebecca: Absolutely. We often in our research measure something called digital exhaustion, right? How exhausted do employees feel as a result of the technologies they need to use every day at work? And that continues to increase. We saw a massive increase with the rise of GenAI in the workplace.
Workers are overwhelmed. And if they need to go to a separate tool or a separate sandbox to learn and play and experiment with AI, most won't. We like to think employees are curious and excited to use the technology, but the reality is so many are just too overwhelmed to pivot away from their day-to-day work. And so we consistently see the highest adoption when AI is embedded into the flow of work, it's embedded into workflows that already exist.
We know that 53% of workers' time is spent on busy work and the coordination tax associated with work. The real opportunity in the short-term is to use AI to take some of that tax and overwhelm off of employees' plates. And the reality is most aren't going to want to swivel to a different platform to experiment with AI.
Craig: Rebecca, I just came from Google Cloud Next, and one of my key takeaways relates to what you were talking about: what they were doing within the Workspace environment was actually trying to alleviate the cognitive load. Think about your workflow. A lot of people right now are in a Word doc or about to write an email, and then they have to go to a third-party tool, Gemini or ChatGPT, and they're cutting and pasting and moving information back and forth. It weighs on your mind.
Now, some of these tools, these productivity suites, Workspace from Google, Microsoft is doing it, and Zoom is actually doing a great job too in the UC space, are integrating it within the workflow. So at least they're removing some of that cognitive load and the context switching that takes place. And I think that is a huge development.
We're all kind of technology-leaning. We'll probably carve out time to go and experiment. But if I've got to get my work done, I'm not going to experiment with three or four tools. I'm going to find something that works, and I'm going to try to stick with it and keep going. The better these AI tools are integrated into my workflow, without me having to contextually switch, the more I can imagine it leading to better adoption and then to gains in productivity as well.
AI Utopia Won't Pay Big Tech Vendors' Bills
Siobhan: I want to stay there for a second, because when I think about how some of the current tools you mentioned are integrated, I find what they're doing is trying to push you: do you want to write this email? Do you want to write this email? And it's like, no, I know how to write an email. Or, can I write this article for you? And it's like, no, I have words.
Craig: Yeah, AI Clippy.
Let me paint a picture, and everyone can call BS on me, I invite you to, but let's paint a utopia picture right now. What if I could tell you a path by which AI could actually help and improve workplace culture? Would that be a logical path forward? Bear with me.
This is what I think takes place: If I am working with an AI tool, where I am right now, it's probably giving me bad suggestions. No, I don't want to rewrite this email. But over time, it starts improving. First I have to trust a meeting summary. We're going to get out of this meeting, and I'm still going to go back and read it, which is kind of double the work, to make sure I trust it. But eventually, as an individual, I start trusting that AI tool. And hopefully, contextually, it becomes more aware of when and where I need its help based upon my patterns, my learning, whatever. That's the inferencing that's taking place along the way.
If I start trusting AI more, I could probably be more productive, less cognitive load. And that might help the four of us as a team. I'm able to provide information to you guys quickly. You start trusting me. We share AI tools. We might even have these agentic AI elements that are shared. Microsoft has facilitators in a meeting. Other people are creating other shared agentic AI. And as a team, we start having an improved level of trust and an improved level of interaction and removing friction.
If it goes from the individual to the team, can that team then also parlay that into a larger company idea? And ultimately, can you get to this utopia, where workplace culture might just improve because we were able, in the process, to remove some friction and get things done in a more intuitive way, where we trust that this AI isn't just a tool but evolves into being a teammate in the process? I trust what it gave me, everything from that meeting summary to the notes that, Siobhan, you rolled up to me, to the outcome of a project, to where our team scores well in whatever company competition, I don't know.
I see a path forward.
Call me optimistic, but I do think that if AI starts approaching us at the right time, in the right place, in a knowing way, it could actually evolve into what you're asking about. Is it here now? No. Is there a path forward? I want to be optimistic and think there is.
Siobhan: Alan, do you want to take that one?
Alan: You just knew to come to me, didn't you?
I'm not actually disagreeing with you, but there are a few things we have to put into context here.
AI has been eye-wateringly expensive to build, and it's going to continue to be eye-wateringly expensive. So those kinds of use cases are not going to pay the big guys' bills, they're just not. Nobody's going to start shelling out thousands of dollars extra a month to pay for these things. So I think there's something there.
I think we've got to also take into account that they do want their money back. I don't think they're going to get it back, by the way, but they want it back. And so the big focus really, and I think it's probably a good area of focus, is the part that's getting ignored by everybody at the moment. It's the back office stuff, right?
So it's things like, for example, nobody ever believes this, but it's true: 52% of business processes still involve paper documents. OK, so there's a place to start. And that's big money. It's not sexy, it's not exciting, but that's very serious.
But back to the ROI thing, I think there are two challenges here. I'll be honest: I'm not really a big believer in ROI anyway, because it's essentially two columns, one of actual hard costs I can measure and another of things that might happen, possibly. So it's a tough one. We typically advise people, and I'll just repeat myself, to look for real problems. If you have an incredible error rate, if you're having lots and lots of escalations, those can be measured. So I get where you're coming from, but I think the issue is that the AI companies have a very different agenda.
AI Governance Questions
Siobhan: Craig, I actually want to follow up on something you raised, because you were discussing this in the framework of utopia, but this is very much the utopia we're currently being sold: agentic AI and these magical tools that can go into all of our different data silos and perform these really complex and often interwoven tasks and processes. This does go back to the garbage in, garbage out question. But it's also a question of: is that ideal? Do we really want to do that when there are so many data silos, where certain information is needed to perform these tasks but we don't necessarily want to open them up?
Craig: So it's a data governance concern we're talking about. I want to get into my CRM database, but I don't want all of it exposed to me, or to my AI agent.
Siobhan: I have an AI agent, and it's working at a team level, pulling in information from different processes. But the members of my team all have different access levels, different information they should be seeing. Do we know if the agent will respect all of those access levels? Do we know that the data it pulls in is actually the most recent or current data, since we're also not great at metadata on the back end and making sure the most recent information is labeled? Questions like that. Clearly, your utopia is what would be ideal, where it could help at that level. But how far off are we? What do we have to clean up in order to get there?
Craig: You're right, there's a lot of pre-work, which is also a lot of the cost Alan was alluding to. Data cleansing is a term you hear a lot right now, in terms of actually cleaning data. Governance and security is another element too. You see large companies like Cisco investing in this and making it part of their messaging. But what size company do I have to be to take advantage of that Cisco value proposition around AI security and data governance?
These are legitimate concerns. If I'm a small company with 20 to 40 employees and I'm using something like HubSpot as a backend CRM, I would hope that the same level of access, based on who I am and what I'm allowed to see, applies to those AI agents as well. I'd have to trust that that's going to take place. That's IT management. That's clean data, good governance and security going forward. But it's confusing, because now I'm trusting something that isn't conscious to have access to this and share it with someone else on behalf of whatever workflow is taking place. There could be breakdowns in that. I wouldn't discredit that.
Alan: I think Siobhan and I were on the same page there. What we're alluding to, and people are speaking very openly about it now, so this isn't an Alan conspiracy theory, is that in a few years' time it is going to be billions of agents and a lot fewer jobs. I mean, they're not building agents to help us. At the moment they are, because those agents are not even close to being smart enough to do the job, but that's not the end goal.
So again, it's not about conspiracy theories, it's not about politics. It's just that the assistant phase is the phase we're in because the AI can't do it without us. It needs us to validate. It needs us to do that work. But the goal, whether we'll ever get there, and I'm not sure it's a good goal, it could be a dystopian goal, is not to need us.
Rebecca: I agree with everything that's being said, and I think there are definitely concerns, and organizations that haven't been thoughtful about it. But I do think we've come a reasonably decent way in terms of permission-aware AI; we think about it a lot at Asana. Companies like Glean do too; they think very carefully about it, because if you're going to deploy AI at an enterprise level, that needs to be a non-negotiable.
I also think the partnership between Databricks and Anthropic is going to be an exciting move in that direction. That kind of capability is going to be a non-negotiable in terms of moving forward with AI at the enterprise level.
I think the flip side is also interesting. I do think our workplace cultures in general tend to be more isolated and siloed than is conducive to AI. And I have seen a shift at several large organizations, particularly at remote-first organizations, which tend to be much more transparent by default. That opens up significant opportunity for AI. If you have the permissions down, you can imagine a world where, if meeting transcripts across your organization have been shared openly, you can glean so much more information from them that can be surfaced to the right individuals, regardless of department.
And so I think it's this really delicate, important balance: yes, we absolutely need permission-aware AI that respects the permissions of the individual SaaS and other applications, but I do think we need to move toward a culture where we're sharing more transparently by default, when it comes to meeting transcripts, but also decision-making history within the organization and all of those institutional knowledge pieces that so often remain siloed.
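To make the permission-aware retrieval idea concrete, here is a minimal Python sketch. The `Document` fields, the `user_can_access` helper and the keyword-match retrieval are hypothetical stand-ins for a real identity system and vector search; the point is simply that an agent filters candidate content against the caller's permissions, and prefers current data, before anything reaches the model.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_groups: set[str]   # groups permitted to read this document
    last_updated: str          # ISO date, used to prefer current data

def user_can_access(user_groups: set[str], doc: Document) -> bool:
    # A document is visible only if the user shares at least one group with it.
    return bool(user_groups & doc.allowed_groups)

def permission_aware_retrieve(query: str, corpus: list[Document],
                              user_groups: set[str]) -> list[Document]:
    # 1. Enforce permissions BEFORE retrieval, so restricted text never
    #    reaches the ranking step or the language model.
    visible = [d for d in corpus if user_can_access(user_groups, d)]
    # 2. Naive keyword relevance stands in for a real vector search.
    relevant = [d for d in visible if query.lower() in d.content.lower()]
    # 3. Prefer the most recently updated documents.
    return sorted(relevant, key=lambda d: d.last_updated, reverse=True)
```

Filtering before retrieval, rather than after generation, is the design choice that keeps out-of-scope content from ever influencing the answer, which is the concern Siobhan raises about agents respecting team members' differing access levels.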
Where to Start When Considering a Large-Scale AI Pilot
Siobhan: I should have clarified when I was talking about the permissions, because I actually do expect all of these AI tools to acknowledge the permissions that are set. It's, again, the people problem of whether the permissions were actually updated and kept up to date, and so on.
So where should people be focusing in order to reach that next level, in order to scale? We've talked about some different areas. Some of it is the data. Some of it is the change management. Which part of this do you think that organizations should first focus on if they want a successful AI pilot to scale? And Rebecca, you're nodding, so I'm going to grab you first.
Rebecca: It's a hard problem and I think it needs to be multifaceted. I think the data-driven approach is so important, right? Understanding which AI applications are being piloted across your organization, how they're being adopted, how users are responding to them. You can learn a ton from that.
And in particular, a lot of our recent work has focused on what we're calling those AI influencers, the people who are really gravitating towards specific AI use cases, because then you can harness them really effectively to leverage this bottom-up change.
I think overall, if I had to pick one recommendation, it would be, as you're thinking about this change management approach and piloting AI, make sure you're being holistic in terms of measuring what's working and measuring how it's being adopted and rolled out across your organization.
Alan: I don't disagree with that. Just to give a variation, though, I think the first thing you start with is identifying a process that isn't working very well or has problems. The next step, which is tied in, is identifying the specific tasks within that process that you can actually do something to fix, because the likelihood of you changing an entire process with AI is really low. You've got to get quite specific, right?
So once you do that, once you've identified that sort of troublesome task or set of tasks, then you do have to do the thing on the data, right? OK. Well, do I actually have access to the data? Where is the data?
Those are the steps you should go through every time. The AI can come into the equation anytime, and it should come in later, not at the beginning, because if you lead with the AI, your project is not going to work, it's really not.
So basically it's five very simple questions you ask yourself:
- Do I understand the process, or have I identified the process?
- Have I identified the task or subtask?
- Do I have access to the data? Is the data any good?
- Who's going to be involved in this? We've touched on this a few times, and it definitely needs to include the people who actually do the work today.
- And, which we've already touched on, how do I measure success?
If you can answer those five questions, you're in a good place to really build. Most people can't. So that's the point. It's simple stuff, if you think about it.
Craig: Good stuff. I could just let these guys drop the mic and we're done. But I'll add one thing on top of this. I think the role of IT administrators who are tasked with AI is evolving a little bit. It's no longer just about provisioning and access; there's an element of user experience that they're now becoming responsible for. And I don't mean UX in terms of an application interface.
What I'm talking about is, if they're dealing with adoption of AI technology and trying to implement it, thinking it through from the end user's point of view. We talked about cognitive load, we talked about removing friction. The AI you're asking me to use in my work: is it accessible? Is it easy? Is it intuitive? Do I have to switch between applications a lot to use it?
This is where people are taking AI and building workflows, eventually hoping to get to agentic AI and what have you. So I would just add that an element of success they need to think about is the end users' experience of working through those workflows.
Siobhan: I'm listening to all of this very good advice, but I'm thinking back to what Alan said, which is these are the IT problems that we've been dealing with for the last 40 some odd years. Could we just take AI out of all of this and just write this down for any future IT rollouts?
Alan: It makes my job easy, because I've been doing this for a long time. I can just tell the same story and it's new again every time around. So I'm fine with it, really.
The Effect of Artificial Intelligence on Human Psychology
Siobhan: Excellent! I know there's so much more we could have touched on, but is there one final point that any of you would like to throw out there before we wrap up?
Rebecca: I continue to be fascinated, and I think we've alluded to it in this conversation, by the psychology of all this. I do think this is similar to many other change efforts, but it is different in the sense of, at least within our lifetime, the fear and uncertainty associated with the technology.
We continue to see in our research high levels of fear, especially at that individual contributor level, even fear of other people perceiving them to be lazy when they use the technology. I first became fascinated by this side of AI when I saw organizations banning or discouraging the use of the phrase artificial intelligence. Especially at the top level of the organization, it's overlooked just how psychological this change is.
As we think about scaling, putting more of a focus on those individual contributors and just how they're responding to the technology is really important.
Alan: I'm 100% with you, Rebecca. Many, many years ago, I trained as an analytical psychotherapist, so the psychology of it, the change management side, is absolutely fascinating. But there's also, and I'm going off on a tangent here very quickly, a philosophical side to it as well. And these are the most interesting things. I mean, I guess if you build AI platforms, the tech's the interesting bit really, isn't it?
We've just talked about agents and rolling out hundreds of millions, or billions, of AIs. We have no idea what impact that would actually have on society, and I think it's going to get more and more interesting as these things actually start to get deployed.
Siobhan: We're going to have to have a whole separate session, maybe an entire workshop, on the psychology and philosophy. Craig, I want to give you a chance to jump in.
Craig: I think touching on that human psyche element is a great idea, leaning into your theme of Reworked. We are reworking a lot of how we're getting things accomplished. So again, removing the friction is a key element to think about here. And if the humans aren't part of this, you know, invested in this, that's an important friction point.
Siobhan: Absolutely. Well, I want to thank you all, and I hope that we can return again in a few years to see where we are.
Thank you so much for joining us today. If you enjoyed today's show, please share it with a friend. Word of mouth marketing is the best marketing that anyone can ask for. See you next month!