How Algorithmic Trust Models Can Help Ensure Data Privacy
The publication of the Gartner Hype Cycle for Emerging Technologies is always an interesting moment in the tech calendar.
The Stamford, Mass.-based research firm says that, of all the Hype Cycles it publishes, the emerging technologies research is unique in that it distills insights from more than 1,700 technologies into a succinct set of 30 emerging technologies and trends. It also focuses specifically on technologies that show promise in delivering competitive advantage over the next five to 10 years.
Out of those 30 technologies and a wider analysis of the market, Gartner identified five emerging trends that will dominate the technology landscape in the coming years. Brian Burke, research vice president at Gartner, explained in a statement why the organization considers them important: “Emerging technologies are disruptive by nature, but the competitive advantage they provide is not yet well known or proven in the market. Most will take more than five years, and some more than 10 years, to reach the Plateau of Productivity.”
“But some technologies on the Hype Cycle will mature in the near term and technology innovation leaders must understand the opportunities for these technologies, particularly those with transformational or high impact,” he added.
The Role of Algorithmic Trust
Among those technologies is algorithmic trust. Gartner explained that trust models based on responsible authorities are being replaced by algorithmic trust models that ensure the privacy and security of data, the source of assets and the identity of individuals and things.
Algorithmic trust helps ensure that organizations will not be exposed to the risk and costs of losing the trust of their customers, employees and partners. Emerging technologies tied to algorithmic trust include secure access service edge (SASE), differential privacy, authenticated provenance, bring your own identity, responsible AI and explainable AI.
It is not surprising this has made an appearance in this year’s Hype Cycle, especially in light of the downfall of the EU-US Privacy Shield and the concerns expressed by the public in both Europe and the US about the security of personal data used by organizations. In the coming years, these trust models will become increasingly important as new regulatory regimes are put in place globally.
Yaniv Masjedi, chief marketing officer at Scottsdale, Ariz.-based Nextiva, said that algorithmic trust refers to how people perceive algorithms as more trustworthy handlers of their data than operations run by humans. People know that algorithms follow a predetermined set of code and rules, and that it is difficult to divert them from their assigned tasks.
AI's strict adherence to its set limitations makes it a more trustworthy option in the eyes of consumers. With less human intervention, data privacy concerns lessen, since algorithms cannot perform tasks outside their defined scope.
"Current AI technology allows for continuous improvement," he said. "AI can generate better and more secure privacy and security walls that are harder to penetrate as it continues to gather data and troubleshoot problems."
Related Article: The Role of AI in Ensuring Data Privacy
The Advantages of Algorithmic Trust
The AI community got a wake-up call when it was recently revealed that an algorithm-driven score, used to replace in-person university admission exams, had lowered the grades of 40% of British students. The backlash was swift, but it was also a reminder that enterprises need to accelerate the privacy-preserving elements of AI workflows and models while there is still time, said Eliano Marques, executive vice president of data and AI at Salt Lake City, Utah-based Protegrity.
“We also have to look back and audit solutions already in production to ensure we are compliant, fair, and ethical, and that AI models are free of bias. Only then can we engender complete trust in the promise of AI,” he said.
Many organizations rely on personal data to enhance the creation and delivery of products and services. Ethics and fairness have to be considered in any AI model that uses such sensitive data, but some organizations are unaware their models contain biases related to gender, age, salary, residence and other defining pieces of personal data.
“If organizations are found to have inserted bias into their models, will individuals give them a second chance?” Marques said. “Will people keep providing their data on the promise that someday there will be a fairer and more trusted AI algorithm? Probably not. Governments and businesses will get only one chance at this.”
As there is still no black-and-white solution to this problem, it helps to think of it as a trade-off between fairness and accuracy. Companies cannot entirely strip sensitive attributes out of AI models that are searching for meaningful measurements such as cost savings or revenue gains, but they do need to strike a balance that accounts for fairness.
To do that, they can adopt privacy-preserving techniques, such as differential privacy and k-anonymity, which protect the privacy of individual data while also reducing bias in machine learning algorithms.
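To make the first of those techniques concrete, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy. The dataset, query and epsilon value are illustrative assumptions, not taken from any vendor mentioned in this article.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of a numeric query result.

    Adds Laplace noise with scale sensitivity/epsilon, the standard
    construction for epsilon-differential privacy.
    """
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative query: how many people in a (toy) dataset are over 40?
ages = np.array([23, 45, 31, 52, 38, 61, 29, 47])
true_count = int(np.sum(ages > 40))

# A counting query changes by at most 1 when one person is added or
# removed, so its sensitivity is 1. Smaller epsilon means more noise
# and therefore stronger privacy.
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.2f}")
```

The trade-off is visible here: lowering epsilon strengthens the privacy guarantee but makes the released number noisier, a balance analogous to the fairness-versus-accuracy one described above.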
AI models that are demonstrably fair and free of bias will let organizations continue to find beneficial answers from machine learning, with the blessing and trust of customers.
Related Article: Should Your Organization Be Auditing Their Algorithms?
Building Trust Models
Algorithmic trust gives end users and business leaders the peace of mind of being able to track how and where AI is being used inside an organization, and it offers insight into the why behind AI-enabled decisions, said Josh Elliot, head of operations at Bethesda, Md.-based Modzy. This manifests in two ways:
1. Governing AI running in production.
2. Taking steps to build trust at the model level.
At the governance layer, organizations must be able to understand who is using which models, and they must build in audit functionality to look back at performance and usage over time. This also helps them keep a better handle on data privacy, because they will know who is accessing what data and can stop any unauthorized access in real time.
Role-based access control is one piece of this, but it should also include audit functionality, API key management and anything else related to governance. “By automating the governance of AI-enabled systems and tracking usage and performance, you're simply applying principles and best practices similar to any other IT system,” he said.
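As a rough illustration of the governance layer Elliot describes, the sketch below combines role-based access control with an audit trail. The roles, permissions and model names are hypothetical, chosen only to show the pattern.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("model_audit")

# Hypothetical role-to-permission mapping for a model-serving API.
ROLE_PERMISSIONS = {
    "data_scientist": {"predict", "inspect_model"},
    "analyst": {"predict"},
    "auditor": {"read_audit_trail"},
}

def authorize(user: str, role: str, action: str, model: str) -> bool:
    """Check role-based access and record every attempt for later audit."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s user=%s role=%s action=%s model=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, model, allowed,
    )
    return allowed

# An analyst may request predictions but not inspect model internals,
# and both attempts land in the audit trail for later review.
authorize("alice", "analyst", "predict", "churn-model-v2")        # True
authorize("alice", "analyst", "inspect_model", "churn-model-v2")  # False
```

Because every call is logged with a timestamp, the same record that enforces access in real time also supports the look-back audits described above.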
As an example of how this can be done, Modzy recently built an algorithmic trust model by:
- Documenting model training and performance information, including model architecture, on both the training and validation datasets.
- Providing insight into model behavior, which comes from building in ‘explainability’ and securing models. ‘Explainability’ is not a panacea, and not all models should be explained (a minimal sketch of one such technique follows this list).
- Building in proactive measures to detect model poisoning and prevent model stealing. Not only does this help keep AI integrity intact, it also addresses data privacy issues.
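As a rough sketch of the ‘explainability’ element in the second bullet, permutation feature importance is one widely used technique: shuffle a feature and see how much the model's score drops. The scikit-learn toy model below is an assumption for illustration, not Modzy's actual tooling.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular business dataset.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time on the validation set and measure the
# drop in accuracy: a large drop means the model leans heavily on that
# feature, exactly the kind of behavioral insight worth documenting.
result = permutation_importance(model, X_val, y_val, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Reporting importances like these alongside the training and validation metrics in the first bullet gives auditors something concrete to check when they look back at a deployed model.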
Related Article: Enterprise Data Strategies in the Aftermath of the US Privacy Shield Defeat
AI and the COVID-19 Impact
Darshan Desai, a management professor at the New York City-based Berkeley College, pointed out there is no clear consensus on what AI is, and the concept is quite dynamic, just as the notion of intelligence is. However, with AI of all kinds gaining traction in the enterprise, investment is likely to rise despite the negative economic impact of COVID-19 on businesses.
In fact, the pandemic has played a role in accelerating the pace of AI adoption. Organizations are realizing the complexities involved in working with humans and are exploring alternative ways for algorithms and humans to coexist and collaborate.
At the same time, dealing with hiring freezes and fewer people doing the same amount of work has put a focus on making processes more efficient. Automation is inevitable, easy to justify and has attracted executive attention. This may have led to a significant rise in executive ownership of AI, which is affecting AI budgets positively.
“AI already makes a huge impact on our daily routines when we use Alexa, Uber, Amazon, voice assistants on smartphones, or auto-complete while drafting emails and messages,” Desai said. “In addition, organizations are surely shifting from piloting to operationalizing AI technologies, and that can drive a significant increase in streaming data and analytics infrastructure.”
However, if we look at the Gartner AI Hype Cycle, many of the most popular AI technologies are currently at the Peak of Inflated Expectations or sliding into the Trough of Disillusionment. At this point, it is especially important to be objective and outcome-focused about AI initiatives and not just follow the hype.
About the Author
David is a full-time journalist based in Paris who splits his time between Ireland, the UK and France. An advocate of ‘green’ living and conservation, he is particularly interested in information management and how enterprise content management, analytics, big data and cloud computing affect it.