What the White House's 'AI Bill of Rights' Blueprint Could Mean for HR Tech
Over the past decade, the use of artificial intelligence in recruiting, hiring, and workplace monitoring has become a reality for many employers. Now the country's top officials are taking a closer look at these technologies.
On Oct. 4, the White House Office of Science and Technology Policy released the "Blueprint for an AI Bill of Rights," a 73-page document that provides guidance on how to build protections against bias and discrimination into automated technologies. The document calls for marginalized communities to have input into the development process and for designers to work to make the benefits of these technologies accessible to everyone.
The blueprint focuses on five areas of protection for U.S. citizens in connection with artificial intelligence: safe and effective systems; protection from algorithmic discrimination; data privacy; notice and explanation when an automated system is used; and access to human alternatives and fallback options when needed. It follows two guidance documents issued in May by the U.S. Equal Employment Opportunity Commission and the U.S. Department of Justice that focused specifically on the use of algorithmic decision-making tools in hiring and other employment practices.
The blueprint lists employment among a number of "sensitive domains" deserving enhanced privacy and data protections. It states that sensitive employment data should be used only for functions strictly necessary to that domain, and that consent for any non-necessary functions should be optional.
The blueprint also states that continuous surveillance and monitoring systems "should not be used in physical or digital workplaces," regardless of a person's employment status. Monitoring is especially sensitive in union contexts; the blueprint notes that federal law requires employers, and any consultants they retain, to report the costs of surveilling employees in the context of a labor dispute, a requirement that can serve as a transparency mechanism to protect worker organizing.
Adoption on the rise
Estimates of the adoption of AI and workplace automation vary depending on the size and type of organization studied, though research suggests a sizable share of employers have embraced the technology.
For example, a February survey by the Society for Human Resource Management found that nearly a quarter of employers use such tools, including 42% of employers with more than 5,000 employees. Of the respondents who use AI or automation, 79% told SHRM they use the technology for recruiting and hiring.
Similarly, a 2020 Mercer survey found that 79% of employers used, or planned to use that year, algorithms that draw on publicly available data. But AI has applications beyond recruiting and hiring: Mercer found that a majority of respondents used the technology to manage employee self-service processes, track performance, and onboard staff, among other uses.
What could the blueprint mean for employers?
Employers should note that the blueprint is not legally binding, does not represent official U.S. government policy and is not necessarily indicative of future policy, said Niloy Ray, a shareholder at management-side firm Littler Mendelson. While the document's principles may be instructive for those building AI and automation systems, the blueprint is not law, he added.
"It certainly helps promote scholarship and thought leadership in this space," Ray said. But it does not reach the level of law and regulation.
Employers could benefit from a single federal standard for AI technology, Ray said, particularly given that this is an active legal area in many jurisdictions. A New York City law restricting the use of artificial intelligence in hiring takes effect next year.
Then there is the global regulatory landscape, which can create additional complications, Ray said. Given that complexity, employers may want to see further discussion of a federal standard, he added, and the Biden administration's blueprint could be one way to start that conversation.
"We don't have to jump through 55 hoops," Roy said of the federal level. "Until we have a pair of hoops to jump through."
Incorporating the blueprint's standards on data protection and other areas could benefit employers, said Julia Stoyanovich, founder and director of the Center for Responsible AI at New York University, because AI and automation platforms used in recruiting often review publicly available data that applicants may not know is being used for selection purposes.
In an audit published in August and co-authored by Stoyanovich, a team of NYU researchers examined two personality tests used by Humantic AI and Crystal, two vendors of automated hiring tools. The analysis concluded that the platforms show substantial instability on key facets of measurement and cannot be considered valid instruments for personality assessment.
Even before artificial intelligence entered the equation, the idea that a candidate's personality profile could predict job performance was controversial, Stoyanovich said. Laws like the one in New York City help create more transparency about how automated hiring platforms work and give HR teams a better understanding of whether the tools serve their intended purpose.
"It's very good news for employers that we're starting to regulate this area," Stoanovich said. "We know there are a lot of devices out there that don't work and aren't useful to anyone except the companies that make money selling them."