Federal AI Regulations
Artificial intelligence compliance concerns have been increasing as adoption of the technology accelerates. Although these concerns are not new, many employers and professionals fail to see their impact on the workplace in the form of discriminatory practices.
The number of employers using AI is skyrocketing: Nearly 1 in 4 organizations reported using automation or AI to support HR-related activities, including recruitment and hiring, according to a 2022 survey by the Society for Human Resource Management (SHRM).
AI is being used in the workplace to manage the full employee life cycle, from sourcing and recruitment to performance management and employee development. Recruitment and hiring are by far the most popular areas where AI is used for employment-related purposes. However, AI can be utilized in almost any human resource discipline, and its use raises compliance risks in areas such as those listed below.
- Privacy Breaches
- Transparency
- Accountability
- Hiring & Selection
- Discrimination
- Violations of the Americans with Disabilities Act (ADA)
- Confidentiality & Data Privacy
Tools like resume scanners, chatbots, video interviewing software, and testing software are often used during the recruiting or hiring process. While you might not think of these as artificial intelligence, since they have been around for a while, many of them rely on AI techniques under the hood. These tools save time and make the job of the recruiter or hiring manager easier.
Discrimination Compliance Guidance
The U.S. Equal Employment Opportunity Commission (EEOC), the U.S. Department of Justice (DOJ), the Consumer Financial Protection Bureau (CFPB), and the Federal Trade Commission (FTC) released a statement and held a press conference on April 25, 2023, highlighting their commitment to enforcing existing civil rights and consumer protection laws as they apply to AI in the workplace.
83% of employers and up to 99% of Fortune 500 companies use some type of automated tool in their hiring processes, according to the EEOC. The agency has issued guidance to help employers manage AI without violating discrimination protections.
There are currently no federal laws specific to the use of AI in employment decisions; however, nondiscrimination laws such as Title VII of the Civil Rights Act, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA) apply.
AI can be biased (or in the case of machine learning applications, become biased), creating concerns of illegal discrimination depending on how the technology and data are used. Understanding how machine learning works and how data is used by AI tools is necessary to identify and correct any outcome that negatively impacts certain groups of people.
The U.S. Equal Employment Opportunity Commission (EEOC), which enforces Title VII of the Civil Rights Act of 1964 and protects candidates, employees, and former employees against discriminatory workplace practices, launched the Artificial Intelligence and Algorithmic Fairness Initiative in 2021. The initiative is intended to guide applicants, employees, employers, and technology vendors in ensuring that AI technologies are used fairly and consistently with federal equal employment opportunity laws. As part of the initiative, the EEOC issues technical assistance on algorithmic fairness and the use of AI in employment decisions.
Instead of avoiding AI altogether, employers can take measures to prevent bias and illegal discrimination. Understanding the algorithms that are used and how individuals are screened in or out is important when implementing AI tools. Regular review of this information and of the subsequent results is necessary to ensure that the tool isn't learning bias or illegal selection criteria over time.
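One common starting point for such a review is comparing selection rates across demographic groups, using the long-standing "four-fifths" rule of thumb that the EEOC's technical assistance discusses (while noting it is not a definitive legal test). Below is a minimal sketch of what a periodic audit along those lines might look like; the data format, group labels, and 0.8 threshold are illustrative assumptions, not legal advice.

```python
# Minimal sketch of a selection-rate (adverse impact) audit based on the
# "four-fifths" rule of thumb. All data, group labels, and the 0.8
# threshold are illustrative assumptions, not a definitive legal test.

def four_fifths_check(outcomes, threshold=0.8):
    """outcomes: dict mapping group -> (selected, total_applicants).

    Flags any group whose selection rate is less than `threshold` times
    the highest group's selection rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    highest = max(rates.values())
    return {
        g: {
            "selection_rate": round(r, 3),
            "impact_ratio": round(r / highest, 3),
            "flagged": r / highest < threshold,
        }
        for g, r in rates.items()
    }

# Hypothetical quarterly results from an AI resume-screening tool.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
for group, result in four_fifths_check(outcomes).items():
    print(group, result)
# group_b's impact ratio is 0.625 (< 0.8), so it would be flagged for review.
```

A flagged ratio does not by itself establish unlawful discrimination, but it is exactly the kind of result that should trigger the closer review described above.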
Employer Liability for AI Systems and Vendors
The EEOC puts the burden of compliance squarely on employers. “[I]f an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor,” the agency states in its technical assistance guidance.
This burden is complicated by two practical realities:
- First, front-line HR managers and procurement staff who routinely source AI hiring tools often do not understand the risks.
- Second, AI vendors typically will not disclose their testing methods, and they often demand that companies provide contractual indemnification and bear all risk for any alleged adverse impact of the tools.
Employers can’t rely on a vendor’s assurances that its AI tool complies with Title VII of the Civil Rights Act of 1964. If the tool results in an adverse discriminatory impact, the employer may be held liable, the U.S. Equal Employment Opportunity Commission (EEOC) clarified in new technical assistance on May 18. The guidance explained the application of Title VII of the Civil Rights Act of 1964 to automated systems that incorporate artificial intelligence in a range of HR-related uses.
State AI Regulations
States are also addressing their exposure to AI, and several already have laws in place governing the use of artificial intelligence in the workplace. This particularly affects employers operating in multiple states, especially those with remote employees.
Significant court cases are pending, and litigation is expected to create many challenges for employers. Employers need to take these cases seriously, along with federal and state regulations.
A current example is Workday Inc., a maker of AI applicant-screening software, which is facing a class action lawsuit alleging that its products promote hiring discrimination. The lawsuit, filed in February 2023, alleges that Workday engaged in illegal age, disability, and race discrimination by selling its customers applicant-screening tools that use biased AI algorithms.
Outcomes from AI tools, including employment decisions, can be skewed by datasets with unrepresentative or imbalanced data, historical bias, or other types of errors, the joint statement noted.
AI poses some of the greatest modern-day threats when it comes to discrimination. Fortunately, an arsenal of bedrock civil rights laws provides the means to hold bad actors accountable. Those laws include the Civil Rights Act of 1964, the Americans with Disabilities Act (ADA), the Fair Credit Reporting Act, and the Equal Credit Opportunity Act.
The use of AI can also trigger compliance issues with other employment laws, such as the Fair Credit Reporting Act (FCRA) when using a third party to conduct a background check, or global requirements for employers with a multinational workplace.
Some state laws require an employer to disclose to individuals when AI is used in employment decisions, and the EEOC encourages this practice. The EEOC guidance indicates that employers should provide job applicants and employees who will undergo an assessment by an AI tool with as much information about the tool as possible.
It is recommended that employers not only be transparent but also obtain consent from individuals before using AI technology for employment decisions.
Employers Must Have AI Policies
Lawyers recommend that HR professionals obtain from employees written or otherwise verified confirmation of receipt of the updated generative AI policy.
It is important to have a generative AI policy because, without one, employees may presume they are free to use generative AI for whatever purposes they see fit and with whatever company information they can access. This creates significant risks to the quality of work product as well as to the confidentiality of company and personal information.
Monitoring quality and accuracy: Having a policy prompts organizations and employees to regularly evaluate and validate AI outputs. This mitigates misinformation, poor decisions, and subpar outputs.
Promoting accountability and responsibility: A clear AI policy defines a path of accountability and responsibility to effectively and quickly handle situations where AI makes mistakes or causes harm, which helps prevent internal disputes or legal challenges.
Fostering employee trust: By transparently communicating the purpose and limitations of AI tools, the policy reassures employees, mitigating fears of replacement or excessive monitoring, and upholding organizational morale.
Conclusion
Effective regulation and oversight of AI use in the workplace through a generative AI policy can help mitigate these risks:
- Federal and state regulations governing AI in the workplace continue to grow
- Employers must stay up to date on all regulations that impact their workplace
- AI use in the workplace can expose employers to the risk of unintentional discriminatory practices
- Employers are responsible for discriminatory practices, whether intentional or unintentional
- Have employees acknowledge AI policies and procedures in writing
- AI vendors and their tools must be evaluated for compliance gaps
- Training should be mandatory and should explain which uses of AI are allowed and which must be avoided
- Policies must be developed so that acceptable AI use is clear and violations are avoided
Best Practices When Using AI
In the decision-making process, employers should not rely exclusively on AI; some of the best (and worst) hiring decisions have been made without the use of this technology.
Before implementing AI in any aspect of HR, carefully document the process, including the factors used in creating the algorithms.
For hiring and screening processes, implement a review process, such as full and false inclusion/exclusion tests of those selected and not selected (a sketch of such a test appears at the end of this section).
Create a policy so employees understand what they are, and are not, allowed to do.
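To illustrate the inclusion/exclusion test mentioned above, the sketch below compares an AI tool's screening decisions against an independent human review of a sample of candidates and reports false inclusions (unqualified candidates screened in) and false exclusions (qualified candidates screened out) by group. The record format, group labels, and sample data are assumptions for illustration only.

```python
# Minimal sketch of a false inclusion/exclusion review for an AI screening
# tool. "Qualified" labels would come from an independent human review of a
# sample of candidates; field names and data are illustrative assumptions.

from collections import defaultdict

# Each record: (group, screened_in_by_ai, qualified_per_human_review)
sample = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, True), ("group_b", False, True),
    ("group_b", True, True), ("group_b", False, False),
]

counts = defaultdict(lambda: {"false_in": 0, "false_out": 0, "total": 0})
for group, screened_in, qualified in sample:
    c = counts[group]
    c["total"] += 1
    if screened_in and not qualified:
        c["false_in"] += 1    # unqualified candidate screened in
    if not screened_in and qualified:
        c["false_out"] += 1   # qualified candidate screened out

for group, c in counts.items():
    print(group,
          f"false inclusion: {c['false_in']}/{c['total']}",
          f"false exclusion: {c['false_out']}/{c['total']}")
```

Uneven false exclusion rates across groups are a signal that the tool's screening criteria should be re-examined before the tool remains in use.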