Introduction
Over the years, the use of artificial intelligence (“AI”) in recruitment has brought about real changes in how business entities, particularly start-ups and entrepreneurs, source and evaluate talent. Automated Employment Decision Tools (“AEDTs”) promise a hiring process that is faster, cheaper, and driven by data-heavy decision-making.[i] For a young start-up aiming to scale quickly, AI can help streamline the initial candidate shortlisting process. However, alongside these benefits come significant legal, ethical, and reputational risks. The very same AI systems may reaffirm gender, racial, age, or educational stereotypes, the very barriers these systems were designed to break.
The Amazon case (2018), in which an in-house AI recruiting tool was scrapped after it penalised resumes containing the word “women’s,” remains a classic example. A more recent instance is the 2023 enforcement action by the Equal Employment Opportunity Commission (“EEOC”) against several companies whose AI screening tools disproportionately excluded neurodivergent or disabled candidates, in violation of the Americans with Disabilities Act, 1990.[ii] These events illustrate that AI systems are not inherently neutral; rather, they reflect the assumptions and limitations embedded in the data and design used to train them.
In response to global regulatory developments such as New York City’s Local Law 144[iii] and the EU AI Act[iv], businesses can no longer afford a ‘deploy now, fix later’ attitude. Start-ups must embed legal compliance, ethics, transparency, and human oversight into the product from day one. Entrepreneurs, many of whom operate with limited legal infrastructure, must be especially cautious: over-reliance on opaque black-box algorithms without human checkpoints can lead to discriminatory hiring practices, lawsuits, investor scepticism, and brand damage.
This Article explores how entrepreneurs can navigate the ethical and legal terrain of AI-driven hiring by embedding fairness, accountability and human intervention into their systems before it’s too late.
Amazon Recruitment AI: A Case in Point (2014–2018)
The Amazon recruitment tool, although never fully deployed, remains one of the most cited examples of algorithmic bias in the hiring domain. Between 2014 and 2018, the company is reported to have built an AI tool that scanned resumes, using past applications as its training data. Because that historical data spanned years of applications, most of which came from men, the algorithm learned to penalise resumes containing phrases such as “women’s chess club captain” or references to women’s colleges. The company tried to scrub the resumes of gender indicators, but the bias resurfaced in more insidious forms. For example, the system would downgrade applicants whose educational background or extracurricular activities were statistically correlated with women. Amazon abandoned the project in 2018. Hence, while the system was never deployed in live hiring decisions, the episode revealed how even big tech companies can build biased systems if the work is not approached with ethical foresight and data responsibility.[v]
Takeaway for start-ups?
AI tools are not inherently objective; without a diverse, representative training dataset and human oversight, they may amplify systemic discrimination. For smaller companies, the reputational damage, legal exposure, or loss of investor confidence that such a failure can cause may simply be unaffordable.
I. Recent Developments and Cases
a. Enforcement Actions by EEOC
In May 2023, the U.S. Equal Employment Opportunity Commission brought multiple enforcement actions against employers utilising AI tools that unintentionally screened out neurodivergent candidates or those with disabilities. The Commission alleged violations of the ADA and argued that such tools, through employing rigid keyword-based or timed assessment batteries, created structural barriers for candidates with cognitive or physical impairments.[vi]
Among the examples cited was an online coding platform that used AI to evaluate technical skills via automated live assessments, which failed to accommodate individuals with autism or ADHD who needed extra time or flexible formats. This exemplifies how bias goes beyond race or gender: tools must adapt to a wide range of human conditions.
b. HireVue and Facial Analysis (2020–2021)
Lawmakers and digital rights groups raised considerable objections to HireVue’s use of facial analysis to infer candidates’ emotions and traits. Critics, including the Electronic Privacy Information Center (EPIC), argued that the technique has no scientific validation and can discriminate against candidates based on race, facial structure, neurodiversity, or even the quality of the camera used. Under public pressure, HireVue dropped its facial analysis features in 2021.
The UK Information Commissioner’s Office (ICO) has stated that automated profiling should never be used to arrive at decisions without meaningful human involvement, particularly when the decision has significant consequences, as in job selection.
This episode holds particular significance for start-ups developing HR tech products. It highlights the risks of bringing technologies to market without peer-reviewed validation or established ethical safeguards. As regulatory and societal expectations shift toward greater algorithmic accountability, the message to tech entrepreneurs is clear: innovation must operate within the bounds of privacy and anti-discrimination laws.
Business and Legal Implications for Start-ups
AI-driven recruitment offers start-ups the manifold virtues of efficiency and scalability. Nonetheless, it brings complicated risks, particularly where tools are poorly deployed in the absence of a multidisciplinary compliance approach.[vii]
a. Risk of Over-Reliance upon AI
Young entrepreneurs, especially those in the early stages of growth, might feel tempted to hand the entire hiring pipeline over to AI, but that would be a mistake for two key reasons. First, many AI tools, particularly those based on neural networks or deep learning, operate as black-box models. This means employers often cannot understand why specific candidates are ranked higher than others, leading to a lack of explainability in the decision-making process. Second, relying blindly on AI undermines legal principles of fairness and transparency. High-risk decisions like hiring demand human oversight under laws such as the EU AI Act and Article 22 of the GDPR, both of which stress the need for meaningful human involvement.[viii] Similarly, New York City’s Local Law 144 requires that candidates be informed before AEDTs are used and be given explanations for the decisions made. To ensure fairness and legal compliance, start-ups should build a human-in-the-loop (HITL) process, where recruiters regularly audit AI recommendations and have the power to override them when necessary, as sketched below.
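What a HITL checkpoint can look like in practice is easiest to see in code. The sketch below is purely illustrative: the scoring threshold, the `ai_screen` and `human_review` functions, and the candidate data are all hypothetical, and a production system would persist its audit trail rather than print it. The point is structural: the model only recommends, a named recruiter records the final decision, and every override is logged for later audit.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate_id: str
    ai_score: float               # raw model output in [0, 1]
    ai_decision: str              # "advance" or "reject", a suggestion only
    final_decision: str | None = None
    reviewer: str | None = None
    rationale: str | None = None
    reviewed_at: datetime | None = None

def ai_screen(candidate_id: str, score: float, threshold: float = 0.6) -> Recommendation:
    """Turn a model score into a recommendation, never a final decision."""
    suggestion = "advance" if score >= threshold else "reject"
    return Recommendation(candidate_id, score, suggestion)

def human_review(rec: Recommendation, reviewer: str,
                 decision: str, rationale: str) -> Recommendation:
    """A recruiter confirms or overrides the AI suggestion; overrides are logged."""
    rec.final_decision = decision
    rec.reviewer = reviewer
    rec.rationale = rationale
    rec.reviewed_at = datetime.now(timezone.utc)
    if decision != rec.ai_decision:
        # In production this would go to a persistent, auditable log.
        print(f"[audit] {rec.candidate_id}: AI suggested {rec.ai_decision!r}, "
              f"{reviewer} decided {decision!r} ({rationale})")
    return rec

# Example: a recruiter overrides a borderline AI rejection.
rec = ai_screen("cand-042", score=0.55)
rec = human_review(rec, reviewer="j.doe", decision="advance",
                   rationale="Strong portfolio; model undervalues a career break.")
```

The design choice worth noting is that the AI function never writes a final decision at all; only the human review step can, which makes the “meaningful human involvement” demanded by Article 22 GDPR a structural property of the pipeline rather than a policy promise.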
b. Legal Compliance Requirements
Start-ups today must navigate an increasingly complex web of AI regulations, especially when using AI tools for recruitment. For instance, New York City’s Local Law 144 (2021) mandates annual independent bias audits and requires that candidates receive advance disclosures before AEDTs are used; non-compliance can lead to civil penalties and reputational damage. In the European Union, the AI Act categorises recruitment AI as “high-risk,” which means companies must conduct impact assessments, maintain clear documentation explaining the system’s logic, and ensure that human oversight remains a core part of the process. Additionally, the GDPR, applicable in both the EU and the UK, restricts purely automated decision-making and demands data minimisation, fairness, and explainability in AI systems. Despite variations in scope and enforcement, the underlying intent of these laws is aligned: to protect individuals and promote accountability in AI use. Other jurisdictions, including California and Canada, are actively exploring their own approaches, signalling that a global movement toward standardising AI governance is well underway.
c. Ethical Design and Organisational Governance
Ethical development in AI shouldn’t be an afterthought. Start-ups need to build ethical principles into their systems from the very beginning. This includes rigorous bias testing, where algorithms are evaluated using measures like the disparate impact ratio, equalized odds, or counterfactual fairness to ensure they don’t discriminate against specific groups (a simple example follows below). It also means adopting Explainable AI (XAI) practices: systems must be able to provide clear, understandable reasons for ranking or rejecting candidates, especially when employment and livelihoods are at stake. Inclusive design is equally essential. AI tools should be built to accommodate candidates with disabilities or different communication styles, ensuring, for example, that those who use screen readers or have speech delays aren’t unfairly penalised. To foster broader accountability and include diverse perspectives, start-ups should consider setting up AI ethics committees comprising members from legal, technical, and HR backgrounds. This proactive approach not only builds trust but also prepares companies for the evolving landscape of AI governance.
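Of the metrics just mentioned, the disparate impact ratio is the simplest to compute, and it is close in spirit to the impact ratios reported in Local Law 144 bias audits. The sketch below uses invented outcome counts and the informal “four-fifths” (80%) threshold purely for illustration; a real audit would require statistically robust data and an independent auditor.

```python
# Illustrative computation of the disparate impact (adverse impact) ratio.
# The outcome counts below are fabricated for demonstration only.

outcomes = {
    # group: (candidates screened, candidates advanced by the tool)
    "group_a": (200, 90),
    "group_b": (180, 54),
}

selection_rates = {group: advanced / screened
                   for group, (screened, advanced) in outcomes.items()}

# The highest-selected group serves as the reference baseline.
reference = max(selection_rates.values())

for group, rate in selection_rates.items():
    ratio = rate / reference
    flag = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {ratio:.2f} -> {flag}")
```

Run on these invented numbers, group_b’s selection rate (30%) is only 0.67 of group_a’s (45%), falling below the 0.8 threshold and flagging the tool for closer scrutiny; that is exactly the kind of pattern an independent bias audit is meant to surface.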
d. Business Reputation and Investor Trust
Responsible AI practices can be a strong selling point for early-stage ventures. Investors, especially those backing ESG-aligned funds, are paying increasing attention to data ethics and algorithmic transparency. A recruitment tool that is unexplainable or discriminatory carries legal risk and can drive away responsible capital.[ix]
Furthermore, public trust in AI systems is on shaky ground. Companies seen as unethical or secretive risk long-term reputational damage, as the backlash against Amazon and HireVue showed. For start-ups competing for talent and customers, transparency and fairness cannot be optional; they are strategic imperatives.
II. The Role of Human Oversight and Fairness Models
III. Actionable Guidelines for Start-ups
The use of AI in recruitment offers obvious advantages: speed, scalability, reduced human error, and automation of repetitive tasks. For start-ups especially, where HR infrastructure is often minimal, the appeal is strong. But with this efficiency comes a deeper responsibility. The question is no longer whether to use AI in hiring, but how to do so ethically and responsibly from day one. To begin with, AI systems must be designed with ethics at the core. This means building fairness, accessibility, and transparency into the architecture of the tool itself, not adding them as afterthoughts. Start-ups must avoid training models on historical data that might replicate existing biases or systemic exclusions, particularly those rooted in gender, race, disability, or socioeconomic status. Diverse and representative datasets are not just good practice; they are critical to avoiding discrimination.
Regular audits for bias are non-negotiable. These must be carried out by independent experts who can impartially analyse outcomes and flag discriminatory patterns. Ongoing testing and recalibration are also necessary, as even well-designed systems can drift over time (a simple drift check is sketched below). Alongside this, the principle of human-in-the-loop must be upheld at every stage: fully automated decision-making should be prohibited in hiring contexts, and recruiters must review, question, and override AI decisions when needed. Transparency is also a foundational principle. Candidates should always be informed when AI is being used in their hiring process. Not only does this build trust, but it also aligns with emerging global legal norms that demand disclosure and the right to explanation. Explanations must be understandable and meaningful, not just technical jargon or vague reasoning.
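Drift monitoring between formal audits need not be elaborate. The sketch below, with fabricated baseline rates and an assumed five-percentage-point tolerance, simply compares each reporting window’s per-group selection rates against the last independent audit and raises a warning when they move; the actual thresholds and group definitions would have to come from the start-up’s own audit policy.

```python
# Sketch of a periodic bias-drift check: compare each reporting window's
# per-group selection rates against the last independent audit baseline.
# All figures below are fabricated for illustration.

BASELINE = {"group_a": 0.44, "group_b": 0.41}   # rates from the last audit
DRIFT_TOLERANCE = 0.05                           # assumed threshold; set by policy

def check_drift(window_rates: dict[str, float]) -> list[str]:
    """Return warnings for groups whose selection rate drifted past tolerance."""
    warnings = []
    for group, baseline_rate in BASELINE.items():
        current = window_rates.get(group)
        if current is None:
            warnings.append(f"{group}: no data this window")
        elif abs(current - baseline_rate) > DRIFT_TOLERANCE:
            warnings.append(f"{group}: rate moved {baseline_rate:.2f} -> {current:.2f}")
    return warnings

# Example: group_b's selection rate has slipped since the last audit,
# which should trigger a recalibration or an early re-audit.
for warning in check_drift({"group_a": 0.45, "group_b": 0.31}):
    print("[drift]", warning)
```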
To ensure fairness and accountability, AI development teams must be multidisciplinary, including legal experts, engineers, recruiters, and, importantly, people directly affected by the system: candidates, HR professionals, and even advocacy groups. This collaborative approach helps identify and address bias early on, from the data collection stage all the way to algorithmic deployment. Equity considerations must be baked into each phase, not just at the beginning or end. It is also important to recognise that AI-based recruitment systems will change traditional hiring workflows, demanding new forms of governance. Transparent disclosures, third-party audits, and oversight committees must become standard practice to catch and correct blind spots that developers alone might miss.
Conclusion
As AI becomes central to modern hiring, responsible innovation must lead the way. When implemented with foresight, legally sound, ethically conscious, and subject to human oversight, AI can help shape workplaces that are not just efficient but inclusive and just. This balance will define the future of next-generation companies. For entrepreneurs, it won’t just be a matter of operational success; it will be part of the legacy they leave behind.
References:
[i] European Commission, ‘Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)’ COM (2021) 206 final.
[ii] Americans with Disabilities Act of 1990, 42 USC § 12101 et seq (1990).
[iii] Local Law 144 of 2021, City of New York Administrative Code, amendments to Title 20, Chapter 5, Subchapter 25 (effective 1 January 2023), known as the “Automated Employment Decision Tools Law”.
[iv] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L 1689/1
[v] Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women’ Reuters (10 October 2018).
[vi] U.S. Equal Employment Opportunity Commission, ‘Use of Artificial Intelligence and Algorithms’ (EEOC, 18 May 2023).
[vii] Natasha Lomas, ‘HireVue Drops Facial Analysis from Hiring Software’ TechCrunch (13 January 2021).
[viii] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) [2016] OJ L119/1, art 22(1), (3).
[ix] Artificial Intelligence Act (n iv).
[x] Pauline T Kim, ‘Data-Driven Discrimination at Work’ (2017) 58 William & Mary L Rev 857.
[xi] General Data Protection Regulation (n viii) art 22.