Tuesday, May 14, 2024

ARTIFICIAL INTELLIGENCE AGAINST DISCRIMINATION AT WORK

The Science of Where meets Keith Sonderling, a Commissioner on the U.S. Equal Employment Opportunity Commission.

  1. What is the mission of the U.S. Equal Employment Opportunity Commission? 

The United States Equal Employment Opportunity Commission (EEOC) is responsible for enforcing federal laws that make it illegal to discriminate in the workplace.  It is our mission to prevent and remedy unlawful employment discrimination and advance equal opportunity for all in the workplace.

The EEOC administers and enforces Title VII of the Civil Rights Act of 1964, the Pregnancy Discrimination Act, the Equal Pay Act, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and the Genetic Information Nondiscrimination Act.  These laws protect a job applicant or an employee from discrimination based upon race, color, religion, sex, pregnancy, national origin, age, disability, or genetic information.  They apply to all types of work situations, including hiring, firing, promotions, harassment, training, wages, and benefits.

  2. Your article “Is artificial intelligence ready for the great rehiring?”, published on the World Economic Forum website, is very interesting. What is your thesis? 

One of my highest priorities as an EEOC Commissioner is ensuring that AI is used in a manner that mitigates, rather than exacerbates, discrimination in the workplace.  To that end, my article for the World Economic Forum highlights some of the civil rights implications of using AI to hire workers in the United States.  I have published similar articles about the use of AI in employment decision-making in the Chicago Tribune and HR Dive, and plan to publish more.  My goal in publishing these articles is to educate the public about the ways U.S. federal antidiscrimination laws apply to a technology that is developing rapidly and being adopted widely.

AI is being used in every type of employment decision, from recruiting and hiring to promotions and firing.  I believe that, when carefully designed and properly used, AI has the potential to mitigate the risk of unlawful employment discrimination.  For example, an AI-enabled resumé-screening program can be designed to disregard variables that have no bearing on job performance, such as an applicant’s sex, national origin, or race.  However, AI can also discriminate on a scale and magnitude far greater than any individual person if it is poorly designed and carelessly used.  Since an algorithm’s predictions are only as sound as its training data, skewed data leads to skewed results.  For example, an algorithm that relies solely on the characteristics of a company’s current workforce to model the attributes of the ideal job applicant may unintentionally replicate the status quo.  If the current workforce is made up primarily of employees of one race, one gender, or one age group, the algorithm may automatically screen out applicants who do not share those same characteristics.

The article I wrote for the World Economic Forum highlights one example of how that might happen.  In the early months of the pandemic, women, African-Americans, and Latinos were unemployed at higher rates than white men in the United States.  So, if an employer were to rely solely on the attributes of its pandemic workforce in developing a data model for hiring, the employer might inadvertently replicate gender and racial disparities well into the post-pandemic era.  This is problematic for many reasons, not least of which is that an employer need not intentionally discriminate in order to engage in a prohibited hiring practice under U.S. law.

The EEOC has the authority to sue employers on behalf of victims of discrimination.  However, I believe that enforcement and education must go hand-in-hand.  It is preferable to prevent discrimination from occurring in the first place than to have to remedy its consequences.  It has been my experience that most employers want to do the right thing; they just need the tools to comply.  I have found this to be uniquely true in the employment technology space, where I have encountered a community of employee advocates, engineers, scientists, entrepreneurs, ethicists, lawyers, and employers determined to get this right.  They are seeking guidance on how to use AI in a manner that respects the rights of employees. I welcome this interest and hope to work with the regulated community to provide the legal clarity they need to ensure that AI makes the workplace more fair, inclusive and diverse.

  3. There is a great debate on the impact of innovation on the global labor market. What’s your general opinion? 

Much of the debate on the impact of innovation on the global labor market focuses on the future – and for good reason. The World Economic Forum predicts that in the next four years, 85 million jobs may be displaced by AI.  And by 2025, AI will be so pervasive that machines and humans will be working the same number of hours.

However, from the standpoint of workplace antidiscrimination law, the future is now.  AI has been used to make decisions at every stage of the job lifecycle for years.  It recruits and hires, it evaluates and promotes, it identifies candidates for reskilling and upskilling, and it even terminates employees.  It writes job descriptions, screens resumés, and conducts job interviews.  It identifies employees’ current skills and potential skills, tracks productivity, and assesses workers.  And, if an employee falls short of expectations, an AI algorithm may send him a message notifying him that he has been fired.  Because AI is so deeply embedded in the decisions that shape the livelihoods and careers of individual human beings, it is essential that the technology be deployed in a manner consistent with civil rights laws.

The United States has strong antidiscrimination laws, privacy laws, and labor laws. We also have robust institutions to enforce those laws.  But in countries where such laws are weak – or, for that matter, non-existent – AI has the potential to violate human rights on a colossal scale, exacerbating the divide between the global North and the global South.  This problem has attracted the attention of international organizations.  As but one government official in one regulatory corner of the United States, I am watching these trends closely.

  4. Artificial intelligence can help reduce inequalities in the different labor markets. What use do you suggest so that AI does not lead to further inequalities? 

Deciding to entrust algorithms with people’s livelihoods is a complex and important matter.  Employers should not be afraid to ask for help from experts such as the EEOC, employee advocacy groups, trade groups, think-tanks, scholars, industrial psychologists, ethicists, human resource professionals, in-house counsel, or law firms.

In 2020, President Trump issued an Executive Order enumerating a set of principles to guide the use of AI in government.  They include accuracy, reliability, security, responsibility, traceability, transparency, and accountability.  American employers would do well to adopt the same principles in their use of AI.  Before adopting an AI-enabled technology to build and manage their workforce, employers should fully vet both the algorithms and the vendors.  They should press vendors for details about how their AI programs are designed, how they test for bias, and how training data is gathered and secured.  Even after deploying AI in the workplace, employers should continue testing their algorithms for bias.  They must not lose sight of the fact that AI is self-reinforcing and requires close monitoring.

  5. Our Magazine deals, in particular, with the Science of Where. The ability to organize and manage data is increasingly essential, in many sectors, in order to identify the best operational solutions to govern the processes of reality. In essence, it is a question of strategically linking the data we produce and the complexities of the contexts to find the most suitable policies to manage the post-pandemic phase. Employment comes first. What do you think? 

For the Equal Employment Opportunity Commission, employment necessarily comes first because we are responsible for enforcing the laws that prevent and remedy employment discrimination and promote equal opportunity for all in the workplace.  To the extent that COVID exacerbated existing inequalities in the workplace, we must prevent those inequalities from persisting in a post-pandemic world.

AI is likely to play an outsized role in the post-COVID return to work.  According to a recent study, 70% of talent professionals reported that they believe virtual recruiting will become the norm long after the pandemic is over.  Employers, especially those who need to hire rapidly and in large numbers, are turning to AI-driven technologies such as resumé-screening programs, automated interviews, and mobile hiring apps to rebuild their workforces.  To the millions of employees who were displaced by the COVID-19 pandemic, these technologies can mean a fast track back into the workplace.  And to the businesses whose doors were shuttered by the pandemic, they are an efficient path back to profitability.  All this underscores the importance and timeliness of my initiative to ensure that AI is developed and deployed in ways that comply with U.S. antidiscrimination law.

  6. Finally, a geopolitical question. What are the main differences in the relationship between innovation and employment between the US and Europe?

With respect to innovation and employment, I think the United States and Europe have more in common than might initially meet the eye.  We are both trying to find ways to regulate rapidly developing technologies in a manner that neither stifles innovation nor infringes upon the rights of the individual.  We are also dealing with a multi-tiered effort to regulate.  In the United States, national efforts to develop a harmonized regulatory framework for AI and machine learning are in their early stages.  At the same time, state and municipal legislatures have begun to regulate these technologies, giving rise to a potential patchwork of laws across the United States.  Similarly, the European Union is considering a bold transnational regulation that would govern artificial intelligence and machine learning while some member states are also considering domestic legislation on the subject.

I believe that the United States has much to learn from the European Union’s Artificial Intelligence Act.  The proposal’s risk-based approach to the regulation of AI seeks to build trust in the technology by protecting fundamental rights, ensuring public safety, and fostering innovation.  To that end, it creates a four-level taxonomy of risk, from unacceptable to minimal.  Two things are worth noting about this taxonomy.  First, AI systems used for employment purposes are characterized as “high risk,” alongside systems used for biometric identification, critical infrastructure, and the dispatch of emergency services (among others).  These high-risk systems are subject to robust reporting, disclosure, validation, and accuracy requirements.  Second, the Algorithmic Accountability Act of 2019, which was introduced in Congress but not enacted into law, similarly proposed a risk-based regulatory regime, although in terms far less specific than those in the EU’s AI regulation.  Additionally, it framed the regulation of AI primarily in terms of consumer protection, as opposed to the whole-of-economy approach adopted by the EU’s proposal.  So, while more modest in scope and less specific in substance, the 2019 proposal reflects a fundamental agreement between the EU and the United States on the risks of unregulated AI.

 

 
