Saanjh Balpande • 2024-06-05
Businesses put a great deal of time and money into searching for the ideal applicant for a job opening, because the right hire can mean the difference between success and failure. A report from LinkedIn indicates that both time-to-hire and cost-to-hire metrics are trending upward for middle-to-top-level openings. As a result, important positions remain vacant longer than usual even as hiring grows more expensive. This has prompted businesses like Amazon to search for creative ways to use artificial intelligence (AI) to cut down on such costs and times.
According to reports, in 2014 Amazon assembled a team to develop a resume review tool that employed machine learning (ML) and natural language processing (NLP) to identify the most suitable candidates for a given job description. Once in place, this software would gradually learn the important characteristics in the resumes of hired candidates, then search for those same markers in resumes submitted for screening. Based on how closely a candidate resembled previous successful candidates, the tool would then rate them on a 5-star scale, similar to the one used to rate products on Amazon.
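The reported scheme, comparing an incoming resume against the resumes of past hires and scaling the similarity to a star rating, can be sketched with a simple bag-of-words cosine similarity. Everything here (the tokenization, the scaling rule, the function names) is an illustrative assumption, not Amazon's actual method:

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def star_rating(resume: str, hired_resumes: list[str]) -> int:
    """Rate a resume 1-5 stars by its best similarity to any past hire."""
    best = max(cosine_similarity(resume, h) for h in hired_resumes)
    return max(1, round(best * 5))  # scale similarity (0..1) to a 1..5 star rating
```

A real system would use far richer features than raw token counts, which is exactly why the failure mode described next could hide in the learned representation.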
By the end of 2014, the experimental tool was in use across the company, with some recruiters relying heavily on it for its time-saving benefits. In 2015, the company became aware that ratings for technical jobs such as software developer and architect were not gender-neutral, which prompted it to assign engineers to investigate the root cause. Following extensive research, the engineers determined that the data used to train the AI system was biased: it consisted primarily of resumes from male employees, reflecting the then-current male dominance in the company and the tech sector. This unintentionally biased training data led the algorithms to learn associations that downgraded resumes containing words like “women’s”, as in “women’s chess club captain”. It was also reported that the engineers identified cases where the system downgraded graduates of two all-women’s colleges.
These findings prompted Amazon to modify its algorithms to be neutral toward those particular terms, but it was also determined that an AI system of this kind might eventually find other ways of sorting candidates that proved similarly biased.
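A minimal way to probe for the kind of term-level bias described above is a counterfactual check: score a resume, remove or replace a single term, and compare. The scorer below is a toy stand-in (its name and penalty value are invented for illustration, not taken from any real system):

```python
def audit_term_sensitivity(score_fn, resume: str, term: str, replacement: str = "") -> float:
    """Score change caused by removing/replacing one term.
    A positive delta means the term was dragging the score down."""
    edited = " ".join(resume.replace(term, replacement).split())  # tidy whitespace
    return score_fn(edited) - score_fn(resume)

# Toy scorer that, like the reported system, penalizes a gendered token.
def toy_score(text: str) -> float:
    return 4.0 - (1.5 if "women's" in text else 0.0)

delta = audit_term_sensitivity(toy_score, "women's chess club captain", "women's")
# delta is positive: removing the penalized term raised the score
```

Patching individual terms this way treats symptoms; as the paragraph above notes, the model can relearn the same bias through correlated features.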
Traditional Hiring
There is no set procedure for how traditional hiring should be carried out. Typically, it begins with the business determining that there is a vacancy, followed by an analysis that yields a job description. The opening is then advertised through internal channels, such as the company job portal, external channels, such as LinkedIn, Monster, or headhunters, or both. Once sourced, CVs are pooled and reviewed by HR staff and subject-matter specialists, and interviews are then conducted with the shortlisted candidates to select the final hires. The time and expense involved are the primary drawbacks of this tried-and-true, human-touch method of hiring.
AI in Traditional Hiring
While the idea of artificial intelligence has been around for a while and has found applications in a variety of scientific domains, it has only been further developed and applied in a wide range of organizational settings during the past ten years. Though Tecuci lists knowledge acquisition, natural language, and robotics as the three primary areas where AI can be implemented, the possibilities are seemingly endless.
Natural language processing (NLP) makes it possible to extract knowledge and information from plain text by scanning it. By automating resume scanning and gathering pertinent data, this kind of knowledge extraction can be used to rank candidates according to how well they fit a particular job description. Building such AI systems requires training data, which allows the underlying algorithm to learn how different resume traits correlate with job profiles and to determine which applicants are most qualified for a position. According to reports, businesses like Amazon have developed comparable systems to help with hiring, as discussed earlier.
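The extraction-and-ranking idea can be illustrated with a deliberately naive sketch: pull keywords out of free text, then rank candidates by how many job-description terms their resume covers. The stopword list, tokenizer, and function names are all simplifying assumptions; production systems use real NLP pipelines rather than token overlap:

```python
STOPWORDS = {"a", "an", "the", "and", "or", "of", "in", "with", "for", "to"}

def extract_terms(text: str) -> set[str]:
    """Naive keyword extraction: lowercase tokens minus stopwords and punctuation."""
    return {t.strip(".,") for t in text.lower().split()} - STOPWORDS

def rank_candidates(job_description: str, resumes: dict[str, str]) -> list[tuple[str, int]]:
    """Rank candidates by how many job-description terms their resume covers."""
    jd = extract_terms(job_description)
    scores = {name: len(jd & extract_terms(cv)) for name, cv in resumes.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Even this toy version shows where bias enters: whatever terms the "good" resumes happen to contain become the markers the ranking rewards.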
Challenges In Adopting AI
Adopting AI presents a variety of challenges, broadly categorized as technological, ethical, and privacy-related. The adoption of these technologies in contemporary hiring workflows is hampered by ethical and privacy concerns, although technological obstacles appear to be surmountable given the industry's rapid pace of innovation.
Most AI systems must be trained before use. To do this, they need access to a training data set, which in the hiring context may include the personal information of both successful and unsuccessful applicants; this allows the system to learn which characteristics distinguish the two groups. This raises questions of data consent and personal data privacy.
Ethical Challenges In Adopting AI
The application of AI raises some difficult moral questions and conundrums, and these difficulties prevent the technology from being used in hiring on a large scale. Among them are important questions such as how AI guarantees equity, how the system handles conflicting ideas of fairness, how diversity will be preserved in an organization, whether the system has sufficient contextual integrity, and whether relying too heavily on AI technologies is risky.
Justice itself has distinct interpretations: researchers have identified roughly 21 definitions of fairness in computer science. In many cases, fairness entails equal opportunity. It could also mean the absence of prejudice against people based on their gender or race, or treating everyone equally in all domains, such as legal and interpersonal. In short, even though it is ambiguous, this moral principle remains very important and is what people expect from an AI system, yet it is also one of the most common ethical problems with intelligent software.
It is difficult to optimize AI systems for fairness because fairness encompasses a wide range of concepts, some of which are antagonistic to one another. For example, a business may want to offer equal opportunities to all people without discrimination, so it cannot take into account a concept of fairness that makes up for societal injustices or historical or inherent disadvantages. Putting in place an AI that handles both would be difficult and might require making compromises frequently.
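The tension between definitions can be made concrete with two metrics computed on the same hiring decisions: demographic parity compares raw selection rates across groups, while equal opportunity compares selection rates among qualified candidates only. The tiny data set below is invented for illustration; it satisfies the first criterion while violating the second:

```python
from collections import defaultdict

def selection_rates(selected, groups):
    """Demographic-parity view: fraction selected in each group."""
    totals, picks = defaultdict(int), defaultdict(int)
    for s, g in zip(selected, groups):
        totals[g] += 1
        picks[g] += int(s)
    return {g: picks[g] / totals[g] for g in totals}

def qualified_selection_rates(selected, qualified, groups):
    """Equal-opportunity view: fraction selected among qualified candidates only."""
    totals, picks = defaultdict(int), defaultdict(int)
    for s, q, g in zip(selected, qualified, groups):
        if q:
            totals[g] += 1
            picks[g] += int(s)
    return {g: picks[g] / totals[g] for g in totals}

selected  = [True, False, True, False]
qualified = [True, True,  True, False]
groups    = ["A",  "A",   "B",  "B"]
parity    = selection_rates(selected, groups)                        # {"A": 0.5, "B": 0.5}
equal_opp = qualified_selection_rates(selected, qualified, groups)   # {"A": 0.5, "B": 1.0}
```

Here both groups are selected at the same overall rate, yet qualified candidates in group A are selected half as often as in group B, so optimizing one criterion does not imply the other.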
The hiring process is inherently selective: some candidates receive offers and others do not, depending on characteristics deemed indicative of a "good" candidate. Over time, these preferred attributes shape diversity within the organization toward a level it is comfortable with. Should the AI project's functional requirements include these parameters in order to achieve optimal performance, or does doing so undermine equity?
Contextual integrity refers to the use of personal data in ways consistent with the individual's expectation of privacy at the time of disclosure. When a candidate provides personal information for a job application and it is then fed into an AI system, there is an implicit breach of that trust. One could counter that since the company received these data for review, it owns them and is free to do with them as it pleases; this viewpoint, however, raises a number of additional ethical concerns about data ownership.
An organization may start to rely more on AI than on human judgment as AI systems become faster and more accurate at simulating human decisions. Will the company eventually be unable to hire people without the assistance of an AI in such a situation?
The aim of integrating AI technologies into conventional hiring processes is to relieve humans of the various tedious tasks involved in hiring. Though this approach is relatively new, it is expanding quickly. Before implementing such a system, any organization should carefully consider the implications for data privacy, ethics, labor law, technology, and feasibility, as well as whether such a system is necessary at all. After adoption, the AI system should be examined regularly to ensure it is operating within appropriate bounds. Using AI to hire more quickly and with less human labor can undeniably save time and money, but done improperly or unethically, it can also damage a company's reputation and financial standing.
References
https://www.reuters.com/article/idUSKCN1MK0AG/
https://www.equitygroupuk.org/blogs/ai-amazon-recruiting
https://becominghuman.ai/amazons-sexist-ai-recruiting-tool-how-did-it-go-so-wrong-e3d14816d98e
https://www.youtube.com/watch?v=QvRZuHQBTps&t=40s
Copyright © 2021 Govest, Inc. All rights reserved.