Boon or Bane: Wading into AI Waters and Defining Risk in IDAM Circles

By Bryon E. Bass, CLMS, CEO, DMEC

Underneath the hype surrounding artificial intelligence (AI) is a tool that, when used appropriately, is valuable to employers. The problem arises when companies rush in and dabble with this powerful technology without creating appropriate “guardrails” for its use. As Jackson Lewis legal experts explained during the 2024 DMEC Compliance Conference, no one should be letting the robots run things!

Some companies have raced in with AI while others hold back. The sweet spot lies somewhere in the middle. Strategic use of technology is essential for success in an industry that continues to evolve at breakneck speed. When used judiciously in integrated disability absence management (IDAM), AI can enhance an employee’s experience during emotionally difficult times.

For example, automating administrative tasks allows time-strapped IDAM professionals to be present for and focus on moments that matter to employees when they seek help. The key is human oversight. As Jackson Lewis speakers said, if employers would never let a new associate make disability or absence decisions without supervision, then why would they let AI?

The answer is that they should not. AI should aid the decision-making process, not replace it. Organizations must be cautious about how they use AI and keep privacy concerns at the forefront as they explore opportunities. While the IDAM realm has historically been slow to adopt technology, data shows increased adoption to help manage hundreds of leave laws.1
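To make that point concrete, here is a minimal sketch in Python of what “AI aids, a human decides” can look like in practice. Every name here is hypothetical, and the model call is a stub standing in for any vendor or in-house tool; this is an illustration of the pattern, not any real product’s API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LeaveRequest:
    employee_id: str
    reason: str

@dataclass
class Recommendation:
    decision: str      # e.g., "approve", "deny", "request_more_info"
    rationale: str

def ai_recommend(request: LeaveRequest) -> Recommendation:
    # Stand-in for any vendor or in-house model; not a real API.
    return Recommendation(decision="approve",
                          rationale="pattern matches prior approved claims")

def decide(request: LeaveRequest,
           specialist_signs_off: Callable[[Recommendation], bool]) -> str:
    rec = ai_recommend(request)
    # The AI only drafts. A human specialist reviews every recommendation
    # before it takes effect, just as a new associate's work would be
    # supervised before a disability or absence decision goes out.
    if specialist_signs_off(rec):
        return rec.decision
    return "escalated_for_human_review"

# Example: a reviewer who rejects any recommendation lacking a rationale.
print(decide(LeaveRequest("E123", "own_serious_health_condition"),
             lambda rec: bool(rec.rationale)))
```

The design choice worth noting is that the sign-off is structural, not optional: there is no code path by which the model’s output becomes a decision without a human in the loop.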

Context is Key

Industry data also shows increasing interest in AI, which includes machine learning, robotic process automation (RPA), rules-based engines, and natural language processing.

To level set, AI was defined in 1955 as “the science and engineering of making intelligent machines,” though today, “we emphasize machines that can learn, at least somewhat like human beings do.”2

It is worth repeating that this definition was created 69 years ago, and worth flagging the reference to “machines that can learn, at least somewhat like human beings.” That statement hints at the promise as well as the potential pitfalls of AI at a time of unprecedented interest in and adoption of the technology. In fact, the number of companies adopting AI more than doubled from 2017 to 2022, according to the 2023 AI Index Report.3 Additional data points warrant attention:

  • More than half of human resources (HR) leaders are researching the impact of generative AI.4
  • 76% of the HR leaders surveyed believe that adopting AI tools is essential to maintain a competitive advantage.5

Despite increasing anxiety surrounding AI, reports6 indicate that leaders who use AI tools have greater job satisfaction and higher productivity levels. That will not surprise anyone in HR circles who knows the volume of repetitive administrative tasks involved. But employers must also ask what risks automation introduces and how they will mitigate them.

Generative AI is the newer star within the AI realm, outshining machine-learning models that rely on data for predictions.7 Some date this shift to the release of ChatGPT, which dominated headlines and stoked fears about robots taking over the world. Data experts explain the difference this way: machine learning analyzes millions of samples to identify tumors more quickly or to estimate the chance that borrowers will default on loans, while generative AI uses unsupervised learning algorithms to create new digital content, such as images, audio, and code, based on its training data. That unsupervised component of generative AI can create vulnerabilities.

Oversight

As industry interest in AI increases, the Equal Employment Opportunity Commission (EEOC) is paying close attention to ensure AI and automation tools do not introduce or increase discriminatory practices. The EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative8 in 2021 to ensure that software, including AI, machine learning, and other emerging technologies used in hiring and other employment processes and decisions, complies with the federal civil rights laws the agency enforces. AI is referenced repeatedly in the EEOC’s Strategic Enforcement Plan Fiscal Years 2024-2028,9 and the agency has partnered with the Consumer Financial Protection Bureau, the Department of Justice, and the Federal Trade Commission10 to expand its reach in fighting discrimination and bias in automated systems. At the 2024 DMEC Compliance Conference, EEOC Vice Chair Jocelyn Samuels highlighted a critical oversight issue: many employers are unaware that they could be held accountable for the actions, or inaction, of third-party AI systems used in their administrative and claims management processes. This responsibility extends to ensuring these systems do not inadvertently violate the requirements of the Americans with Disabilities Act (ADA).

According to EEOC guidance on AI,11 if an employer or its third-party software, algorithms, or AI technologies lead to a failure to adequately address an employee’s request for reasonable accommodations or the exclusion of applicants with disabilities — who could perform the job with appropriate accommodations — the employer may face liability. This underscores the necessity for employers to rigorously assess and monitor the AI tools and third-party services they employ to ensure compliance with ADA requirements and prevent discrimination.

Level Set

A significant portion of the anxiety surrounding AI comes from misunderstandings about the technology and fears, whether grounded in reality or not, of machines becoming autonomous. To mitigate these concerns, clarity and specificity about AI’s role and capabilities are essential. Illustrating how AI has been effectively used in IDAM can be enlightening. This includes its application in processing claims; determining optimal timelines for disability leave; using predictive analytics to identify cases that need clinical follow-up; and managing communications for return-to-work initiatives through automated, text-based updates. By showcasing these practical applications, we can demystify AI and highlight its value as a tool that supports rather than replaces human decision-making and interaction.
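As one illustration of the predictive-analytics use case just mentioned, the sketch below shows the general shape of a rule that flags open claims for clinical follow-up. Every field name, risk category, and threshold here is invented for the example; it is not drawn from any real IDAM product, and a real system would learn its weights from historical outcomes.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    days_open: int
    extensions_requested: int
    diagnosis_category: str  # hypothetical coarse grouping

HIGH_RISK_CATEGORIES = {"musculoskeletal", "behavioral_health"}  # assumed

def needs_clinical_follow_up(claim: Claim) -> bool:
    # Accumulate simple risk signals; the threshold is arbitrary here.
    score = 0
    if claim.days_open > 60:
        score += 1
    if claim.extensions_requested >= 2:
        score += 1
    if claim.diagnosis_category in HIGH_RISK_CATEGORIES:
        score += 1
    return score >= 2

# A production system would route this flag to a clinician for review
# rather than act on it directly.
print(needs_clinical_follow_up(Claim(75, 2, "musculoskeletal")))  # True
```

Even in this toy form, the output is a prompt for a human clinician, not a decision, which is precisely the supporting role described above.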

For years, employers have effectively integrated AI into various operations such as digitizing documents, enhancing network security, and more recently, deploying chatbots. However, as the application of AI widens, the importance of human oversight and transparency becomes paramount to ensure the accuracy of AI systems and their ethical use, especially in preventing discriminatory outcomes. This highlights a fundamental truth: Despite advanced capabilities, AI systems are prone to errors.

In a hands-on evaluation, I experimented with an application intended to calculate paid family leave benefits. Tasked with estimating benefits for an employee with a weekly income of $3,500 who needed leave from Jan. 1 through June 1 due to a disability, the application mistakenly supplied information pertaining to paid family leave rather than disability benefits. It also incorrectly assessed the qualifying leave reasons that distinguish disability benefits from paid family leave. When I pointed out these inaccuracies, the development team addressed and rectified them. This experience shows how AI tools can become more accurate and reliable through user feedback.
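For readers curious what that class of error looks like in code, here is a toy Python version of the routing step the app effectively skipped. All reason codes, the 60% replacement rate, and the weekly cap are invented placeholders; actual statutory formulas vary by jurisdiction and benefit type.

```python
# Hypothetical reason codes for illustration only.
DISABILITY_REASONS = {"own_serious_health_condition"}
PFL_REASONS = {"bonding", "family_care", "military_exigency"}

def classify_benefit(reason: str) -> str:
    # The app's error was effectively skipping this step and defaulting
    # to paid family leave regardless of the stated reason.
    if reason in DISABILITY_REASONS:
        return "disability_benefits"
    if reason in PFL_REASONS:
        return "paid_family_leave"
    raise ValueError(f"unrecognized leave reason: {reason}")

def estimate_weekly_benefit(weekly_wage: float,
                            replacement_rate: float = 0.6,
                            weekly_cap: float = 1_620.0) -> float:
    # The rate and cap are placeholders, not any state's actual formula.
    return min(weekly_wage * replacement_rate, weekly_cap)

# For the scenario in the text: an employee earning $3,500/week out on
# their own disability should be routed to disability benefits first.
print(classify_benefit("own_serious_health_condition"))  # disability_benefits
print(estimate_weekly_benefit(3_500.0))                  # 1620.0 (capped)
```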

However, this scenario also raises a significant concern: what if the individual using the app lacked the extensive IDAM experience needed to recognize these errors? And what if the misinformation influenced decisions affecting your employees? There is a vital need to balance leveraging AI’s efficiencies with maintaining rigorous oversight to prevent potential pitfalls.

Get Ready

Understanding the potential applications of AI within your organization is crucial. It’s equally important to be vigilant about the plethora of freely available online AI tools. These tools, while beneficial, can sometimes spread misinformation among employees and lead to confusion and mistrust.

To navigate the challenges posed by an AI landscape that has been evolving for nearly seven decades and is now changing rapidly, organizations must pioneer innovative communication strategies. This entails clarifying AI-related terminology to eliminate confusion and alleviate concerns, thereby fostering an environment of trust and transparency. Sharing practical examples and case studies can illustrate AI’s positive impact on operational efficiency, such as streamlining requests for accommodations or enhancing decision-making processes.

Furthermore, establishing clear guidelines and deploying guardrails for using AI can ensure its alignment with organizational goals and ethical standards. An increasing number of organizations are formulating AI usage policies to delineate acceptable practices. Is your organization among them?

Integrating technology to augment human efforts is not a novel concept. However, as the technology’s capabilities expand, determining its role alongside human input becomes crucial. DMEC is committed to ensuring that this technological advancement serves as a boon rather than a bane to IDAM professionals. Success in this “brave new world,” as described by Samuels during the recent DMEC Compliance Conference, hinges on the leadership of experts who are well versed in the nuances of technology and human-centric practices.

Let’s embrace this journey together to ensure our foray into the world of AI enhances, rather than complicates, the IDAM landscape.

References

  1. 2023 DMEC Employer Leave Management Survey. To be released April 2024.
  2. Stanford University Human-Centered Artificial Intelligence. Artificial Intelligence Definitions. Retrieved from https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf
  3. Stanford University Human-Centered Artificial Intelligence. The AI Index Report. Measuring Trends in Artificial Intelligence. Retrieved from https://aiindex.stanford.edu/report/
  4. When AI, Ethics, and Data Privacy Collide, What Comes Next? Retrieved from https://www.adp.com/spark/articles/2024/02/when-ai-ethics-and-data-privacy-collide-what-comes-next.aspx
  5. AI in HR: The Ultimate Guide to Implementing AI in Your HR Organization. Retrieved from https://www.gartner.com/en/human-resources/topics/artificial-intelligence-in-hr
  6. 10 Workplace Tasks Being Simplified by AI. March 1, 2024. Retrieved from https://www.benefitnews.com/list/ai-is-being-used-to-complete-these-workplace-tasks-the-most
  7. MIT News. Explained: Generative AI. Nov. 9, 2023. Retrieved from https://news.mit.edu/2023/explained-generative-ai-1109
  8. U.S. Equal Employment Opportunity Commission. Artificial Intelligence and Algorithmic Fairness Initiative. Retrieved from https://www.eeoc.gov/ai
  9. U.S. Equal Employment Opportunity Commission. Strategic Enforcement Plan Fiscal Years 2024-2028. Retrieved from https://www.eeoc.gov/strategic-enforcement-plan-fiscal-years-2024-2028
  10. U.S. Equal Employment Opportunity Commission. Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems. Retrieved from https://www.eeoc.gov/joint-statement-enforcement-efforts-against-discrimination-and-bias-automated-systems
  11. U.S. Equal Employment Opportunity Commission. The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees. Retrieved from https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence