AI hype is loud. But the reality is more complicated — and far more consequential.
Glossy product launches and bold promises are giving way to tougher questions: What does responsible implementation look like? How do we bridge the gap between innovation and regulation? And what will it take to personalize customer experiences without crossing the line into creepiness?
At this year’s Adobe Summit, industry leaders pulled back the curtain on what AI implementation actually demands — and why organizations need to get serious about doing it right.
The 3 Pillars of Successful AI Implementation
Ravi Pal, Global CTO at Ogilvy One, said he believes there are three pillars that can guide companies toward successful AI adoption, rather than simply chasing trends.
Pillar #1: Future-Proofing
“You can’t just introduce a new technology entity into your organization and not know what happens to your existing roles and how those roles now perform whatever activities in the new process,” said Pal. What companies need to think about is how those roles and processes must evolve.
This is more than change management, he explained. It goes further, encompassing the future of your organization: its people, roles, talent, processes and more.
Pillar #2: Redefining Your Data Strategy
Next, you need to revisit your data strategy, which involves several pieces: infrastructure, engineering, governance and new variables, like synthetic data.
Questions you need to ask, said Pal, include:
- Where do you need to create and use synthetic data versus first-party data or data bought from third-party sources?
- How do you make the data available for the AI to make the right decisions, especially when data silos exist?
You also need to rethink your data security, but framed as enabling this technology rather than simply restricting it.
“If you don’t do it right, then you can’t achieve the ROI,” said Pal. “You’ll have a lot of bumps on the road.”
Pillar #3: Designing the Relationship With Your Customer
Then there is designing the brand-customer relationship, or what Adobe refers to as “orchestrating the experience,” said Pal.
AI, ultimately, should enhance customer engagement, creating a brand that customers can call their own. And to do that, you’ll need one-to-one personalization.
Taking Coca-Cola as an example, Pal said, “What does personalization look like for someone who’s going to buy a beverage as a consumer?” Or, in the case of B2B, what does personalization look like for a specific buying group?
These are the three dimensions everybody needs, he noted, “and then all three have to work together.”
Related Article: AI Upskilling in 2025: From Potential to Measurable Impact
Progress on the AI Transparency Front
More people — both consumers and organizations — are voicing the need for AI transparency. To meet this need, Adobe is pushing a content authenticity framework that embeds cryptographically sealed metadata in images, videos, audio files and documents.
Jace Johnson, VP of government affairs and public policy at Adobe, explained that this metadata records the entire history of the content — from capture to editing to dissemination — offering a transparent and secure way to track how and by whom content was created or altered.
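To make the provenance idea concrete, here is a minimal, illustrative sketch of how a tamper-evident content manifest might work. This is a toy: the real Content Credentials standard (C2PA) embeds certificate-signed manifests inside the file itself, whereas this sketch uses a simple HMAC and a standalone JSON payload purely for demonstration.

```python
import hashlib
import hmac
import json

# Illustrative only: real provenance systems use certificate-based signatures,
# not a shared secret like this.
SIGNING_KEY = b"demo-secret-key"

def make_manifest(content: bytes, history: list[dict]) -> dict:
    """Bundle a content hash with its edit history and seal it with a signature."""
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. capture, edit and export steps
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the hash and signature; any tampering breaks verification."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"raw image bytes"
m = make_manifest(photo, [{"action": "captured", "tool": "camera"},
                          {"action": "cropped", "tool": "editor"}])
assert verify_manifest(photo, m)            # untouched content verifies
assert not verify_manifest(b"edited", m)    # altered content fails
```

The key property this sketch demonstrates is the one Johnson describes: the edit history travels with the content, and any undisclosed alteration invalidates the seal.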
This system is part of Adobe’s Content Authenticity Initiative, which has gained global traction. “4,500 organizations have joined with us,” noted Johnson. “We started about five years ago. That’s the best transparency we can think of, and that’s why we started it so long ago.”
The goal is to provide users with enough context to trust what they see or hear, while allowing for flexible levels of disclosure depending on the use case. For example, a war reporter may want to suppress their identity, while a commercial artist may want full attribution.
“It’s like a dial…” said Johnson. “If there are scenarios where it’s not as important that you have that dial turned all the way up, you can turn it back down.”
“Governments wanted to go heavy down the watermarking with audio, video, anything…” he added. “We’re like, this is a better solution. You don’t need a visible watermark or an audible watermark if you put provenance into every digital file that you’ve created.”
Adobe has actively promoted this approach as a global standard, suggesting it could reduce the need for government censorship by enabling more informed consumption of digital media.
AI Brings Us Closer to Personalization — But Hurdles Persist
A big topic at this year’s Adobe Summit was true one-to-one personalization: a person lands on your site (or opens your brand’s app) and sees product descriptions, images and more, tailored specifically to them. But that’s left me (and many others) asking: is that even achievable, at least in the near future?
While AI has made one-to-one more feasible from a technological standpoint, Andrew Frank, distinguished VP analyst at Gartner, pointed out that human and organizational factors remain significant barriers.
"I guess we've heard the story of one-to-one marketing for decades now. I mean literally, since the 90s," he said. Yet execution still lags due to data limitations, organizational readiness and — perhaps most importantly — consumer trust.
The core question isn’t just whether brands can do personalization. It’s whether they should in every instance.
“I think there's also a question of whether consumers are truly ready for one-to-one relationships with all of the myriad brands," Frank noted, adding that excessive personalization can feel intrusive, especially when it’s not clearly tied to solving a real user problem.
"I think that brands sometimes have an overinflated impression of their importance in your life, and when they start to become overly intimate with the details they telegraph that they know about you… that can still be an uncomfortable experience for people."
For instance, many consumers have had the eerie experience of devices that seem to “listen” to conversations.
"I think a lot of people have had the experience where they just talk about something and they don't search it, and then they start getting ads for that thing,” said Frank. “Yes, that can be uncomfortable and not helpful for either the consumer or the brand."
The takeaway: as we get closer to the reality of true hyper-personalization, brands must approach with caution, empathy and a clear value proposition, or risk alienating the very people they’re trying to reach.
The Race to Regulate AI, Nation by Nation
At the AI Action Summit in Paris this past February, Johnson said he noticed a shift in the mindsets of global government officials. Previously, concerns focused on safety and ethical risk. Now, many countries are driven by the fear of economic and technological irrelevance.
“This concern was all about we’re going to miss the boat, like we’re just going to miss AI this round, and that could be devastating to our economy,” Johnson said. “That was something that hadn’t come through in the AI public policy debate.”
This urgency to avoid being left behind is prompting each nation to pursue its own approach to AI regulation, rather than follow a one-size-fits-all framework from global powers like the US or EU.
“They don’t want a map… or a master plan or somebody to spell it out for them,” said Johnson. “They feel like they’ll be better off if they blaze the trail that works for them.”
Hitting a ‘Trough of Disillusionment’
The initial shock and awe around generative AI — especially after ChatGPT’s release — created sky-high expectations. Many organizations and individuals imagined a near-future where these tools would automate away tedious work, revolutionize productivity and remove the complexity of content and campaign creation.
But as the technology has matured, reality has set in. Implementation is proving to be far more difficult, and each AI solution introduces a new set of challenges — both technical and ethical.
"We're definitely going to be in this agent chase for a few years,” predicted Frank. “I think… we're sliding into the trough of disillusionment with a lot of these technologies, which were so overhyped… and maybe overhyped is the wrong word. They were so disorienting, right?"
He pointed to the initial launch of ChatGPT as a moment that radically shifted people’s expectations for AI.
"When ChatGPT came on the scene, it was kind of unbelievable the things it could do, and I think it really sparked people's imagination… if we had a system that could just do this, and we could tell it to do anything, and it would just basically take all of the pain out of our operations."
But now, businesses are grappling with the reality: it’s not that easy. “And not only is it not that easy,” said Frank, “but as it solves some problems, it creates a whole new class of problems that we never had to think about before."
He listed a few examples:
- Copyright issues with content
- Potential to introduce biases into organizations
- Making people more susceptible to misinformation
- Technological capabilities expanding faster than our ability to manage them
"I think we're looking at a whole new set of problems and a whole new set of challenges for business and society, just as we were with the internet… when people realized that technology had gotten far ahead of our social maturity and our ability to handle it."
Related Article: AI Risks Grow as Companies Prioritize Speed Over Safety
From Hype to Strategy: What Comes Next
If the last two years were about sprinting into the generative AI future, this year is about slowing down just enough to figure out what we’ve signed up for.
The challenges ahead — compliance, transparency, creativity, personalization, regulation — aren’t small. But they’re necessary to address if we want to move past novelty and into lasting transformation.
The companies that will succeed in this next chapter won’t be the ones chasing every shiny new tool. They’ll be the ones investing in strategy, structure and guardrails — and building for the long term.