
AI Is Not Coming for Jobs Just Yet

March 03, 2022 Information Management
By Mike Prokopeak

The idea that artificial intelligence will eventually displace human workers has been around for a long time. In response, companies have repeatedly stressed that their purpose in adopting advanced technologies is not to take over anyone's job, but to improve productivity, efficiency and the like. The thinking goes that code can take over tasks people find repetitive or impractical, freeing humans to act more as decision-makers and guides.

But things may be changing. AI company DeepMind recently announced it has taught some of its machines to write code. The kicker? The output is on par with that of an average human programmer. 

So, where does that leave the humans?

The Role of AI in Coding

According to a statement by the company, AlphaCode, the system DeepMind created to write computer programs at a competitive level, achieved an estimated rank within the top 54 percent of participants in programming competitions. It did so by solving recent problems that require a combination of critical thinking, logic, algorithms, coding and natural language understanding.

The company claims it validated AlphaCode's performance using competitions hosted on Codeforces, a popular platform that attracts tens of thousands of participants from around the world looking to test their coding skills. In its statement, the company also said it would release its dataset of competitive programming problems and solutions on GitHub.

There's nothing controversial about what DeepMind claims to have done. It also should be no surprise since there's always been speculation about the possibility of AI programming AI. However, until now much of the discussion has focused on the technical aspects of accomplishing that — and whether it can be done — rather than the human element behind it.

DeepMind's release raises a new question about the role of humans in programming or developing AI. Or rather it presents the possibility that AI can, indeed, build itself.


AI Coders Still Have a Ways to Go

It is early days yet, said Cameron Fen, head of research at Boston-based AI Capital Management. It's still very difficult to get AI algorithms to plan and think creatively, and the algorithm contests cited by DeepMind mostly involve solving highly stylized prompts. When asked to deploy these approaches in the real world, AI will stumble.

What's more, it was humans who picked the best algorithm from the top 10 candidates AlphaCode produced. This higher level of thinking and planning will still be in the domain of humans for the foreseeable future, Fen said.

Instead, what's likely to happen is that people will decide what to code, and machines will provide a range of candidate solutions for humans to select from. But even that workflow is still some time away.
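That workflow, in which a model proposes many candidate programs and only the ones that survive automated checks reach a human, can be sketched in a few lines. This is an illustration, not DeepMind's actual pipeline: `generate_candidates` is a hypothetical stand-in for a code-generating model, and the toy problem and example tests are invented.

```python
# Sketch of a generate-then-filter selection loop.
# `generate_candidates` is a hypothetical stand-in for a code-generating model.

def generate_candidates(problem: str, n: int):
    # A real system would sample n programs from a language model.
    # Here we return toy candidates for a "double the input" problem.
    return [
        "def solve(x): return x + x",  # correct
        "def solve(x): return x * x",  # wrong
        "def solve(x): return 2 * x",  # correct
    ] * (n // 3 + 1)

def passes_examples(src: str, examples) -> bool:
    """Run a candidate against the problem's example tests."""
    namespace = {}
    try:
        exec(src, namespace)
        return all(namespace["solve"](inp) == out for inp, out in examples)
    except Exception:
        return False

def shortlist(problem: str, examples, n_samples=30, keep=10):
    """Sample many candidates; keep up to `keep` distinct survivors."""
    survivors = []
    for src in generate_candidates(problem, n_samples):
        if passes_examples(src, examples) and src not in survivors:
            survivors.append(src)
        if len(survivors) == keep:
            break
    return survivors

candidates = shortlist("double the input", examples=[(2, 4), (5, 10)])
print(len(candidates))  # only the two distinct correct programs survive
```

A human would then choose among the shortlisted survivors, which is the selection step Fen describes as remaining in the human domain.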

“At this point, computers are just much better than humans at copying and pasting code from Stack Overflow and GitHub and stitching them together to form coherent boilerplate code,” Fen said. “Ilya Sutskever of OpenAI is on record saying GitHub Copilot is made to help people automate boilerplate code, although clearly this algorithm will continue to move up the value chain of coding tasks.”


Software Roles Are Likely to Change

As AI matures, coding tools powered by technology such as AlphaCode's will change the nature of software engineering roles. But maybe not just yet.

It will take quite some time for these tools to replace human developers, said Sameer Maskey, CEO and founder of New York City-based Fusemachines and adjunct assistant professor at Columbia University. Human programmers create solutions based on their intuition and experience garnered over the years, he said.

"They also interface with clients/product owners and understand business processes that need to be incorporated in the code in nuanced ways," Maskey said. "AI-powered tools are only emulating abstract thinking and critical problem-solving."

Despite extensive research and progress on AI coding systems, full dependence on these tools remains a concern due to ongoing bugs, biases and other complexities rooted in training data. The burden of building systems that are safe, fail less often and remain accountable still rests with humans. For that reason, humans remain integral to fine-tuning these tools, putting necessary checks in place and enhancing AI's problem-solving capabilities.

“In the same vein, human oversight is required both before and after coding for planning, analysis and design, which require more human interaction," Maskey said. "Programmers still need to verify the code and, more often, will have to solve problems or figure out novel approaches to solve them. At this point, it's still unsure if the AI-pair programming tools such as GitHub Copilot are actually intelligent, or are reciting what they have learned from the trained data."


4 Must-Solve Challenges Before AI Can Take Over

AI is playing an increasingly significant role in everyday life, from mobile devices and computers to household appliances and cars. Businesses and consumers have grown accustomed to it, but few may realize how much work goes into building it. While AI engines have become increasingly self-sufficient, human oversight is still necessary to fine-tune these models.

AI-augmented speech technology is a good example, said Ian Beaver, chief scientist at New York City-based Verint. He pointed to four areas in speech-based AI systems and the training of large language models that illustrate the challenges:

1. Understanding ethical and social impacts

Within the AI community, considerable attention has been given to the ethical and social impacts of AI in public spaces. AI experts are focusing on how AI affects people's lives in ways that are unintentionally biased or socially harmful, and on how to detect and mitigate those impacts before they become a problem. The key will be determining an appropriate way to screen the data feeding these models at scale.

2. Mitigating undesirable responses

There's a great deal of focus on mitigating undesirable responses when training large language models. For example, pre-trained language models use data and live interactions from the internet that are not well filtered (e.g., Twitter, Reddit). Sometimes, data gets in that companies would not want their intelligent virtual agents to repeat.

“The challenge becomes, how do you detect it before you train it, and how do you ensure responses coming out are acceptable?" Beaver said. "This is a critical issue in the customer service realm with exceedingly high standards for delivering an exceptional customer experience."
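The kind of pre-training screening Beaver describes can be illustrated with a deliberately simple sketch. Production pipelines use learned toxicity classifiers rather than pattern lists; the blocklist and sample utterances below are invented for the example.

```python
import re

# Illustrative sketch: screening raw dialogue data before it reaches training.
# The blocklist and records are made up; real systems use learned classifiers.

BLOCKLIST = [r"\bdamn\b", r"\bidiot\b"]  # hypothetical undesirable patterns

def is_acceptable(utterance: str) -> bool:
    """Reject any utterance matching a blocklisted pattern."""
    return not any(re.search(p, utterance, re.IGNORECASE) for p in BLOCKLIST)

raw_corpus = [
    "How can I reset my password?",
    "You idiot, that doesn't work.",
    "Thanks, that solved my problem!",
]

training_corpus = [u for u in raw_corpus if is_acceptable(u)]
print(len(training_corpus))  # 2 of the 3 utterances survive screening
```

The same check can run again on generated responses, which addresses the second half of Beaver's question: ensuring what comes out is acceptable, not just what goes in.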

3. Leveraging existing knowledge

Incorporating common human knowledge into AI to avoid the need to train machines from scratch each time is critical to the success and efficiency of algorithms. Collecting and annotating data to train large AI models is a costly and time-consuming task that drives the need to re-use and re-purpose what has already been learned.
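The re-use described above is, in essence, transfer learning: keep what a pretrained model has already learned fixed, and train only a small task-specific layer on top. A toy sketch, with an invented two-feature "pretrained" extractor and a perceptron head; all data is made up for illustration.

```python
# Toy transfer-learning sketch: a frozen "pretrained" feature extractor
# plus a small task-specific head trained from scratch on new labels.

def pretrained_features(x):
    # Stand-in for a large pretrained model; frozen, never retrained.
    return [x, x * x]

# Tiny labeled dataset for the new task: is x between 2 and 4?
data = [(1.0, 0), (2.5, 1), (3.0, 1), (5.0, 0)]

# Train only the lightweight head (two weights and a bias) with the
# perceptron rule; it terminates because the features make the data separable.
w, b = [0.0, 0.0], 0.0
changed = True
while changed:
    changed = False
    for x, y in data:
        f = pretrained_features(x)
        pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
        if pred != y:
            step = 1 if y == 1 else -1
            w = [w[i] + step * f[i] for i in range(2)]
            b += step
            changed = True

def predict(x):
    f = pretrained_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

print([predict(x) for x, _ in data])  # matches the labels: [0, 1, 1, 0]
```

Only three head parameters are trained here; the expensive feature extractor is reused as-is, which is the cost saving the paragraph above describes.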

4. Training multi-lingual models

Training speech and language systems to be multi- and cross-lingual without starting from scratch is also critical to the success of AI in speech technology. Jointly trained multi-language models make translation easier to automate because the system learns, for example, English, French and German at the same time. Many rules are shared across languages and can be picked up by training on several languages jointly.


How Fast Will AI Take Over?

We're on the cusp of a new era in human civilization where robots may start doing much of our work, said Josh Bachynski, a technologist and AI developer.

While this will have a dramatic impact on work and radically change society from the models set up in the last industrial revolution, that impact will unfold slowly and give workers a chance to adapt, he said. In other words, people are not going to lose their jobs overnight. Market factors will ultimately determine the pace.

“The more the currency value of the country is high, and the higher cost of living increases, the faster they will and must develop cheaper labor solutions," Bachynski said. "They will need AI/robotic solutions to replace or supplement their labor force."

Bachynski said it will only become cost effective when the total net cost of the AI solution is less over time than the cost of having humans do the work, including health care, recruitment and other liabilities. That's not to mention the technical challenges that still remain.

At this point, DeepMind's AlphaCode still requires a well-defined coding problem statement to produce meaningful solutions, said Zac Yung-Chun Liu, chief data scientist at Austin-based Hypergiant. Most machine learning (ML) problems today do not come with a clear problem statement to begin with.

Despite that limitation, AlphaCode and similar solutions still provide a solid foundation for solving current coding pain points.

“As AlphaCode matures, domain experts will use this program to build their own ML models, bridging the gap between data scientists, programmers and industry professionals,” he said.

