
AI Candidate Screening and Human Rights Compliance in Ontario

  • Writer: Tony Wong
  • Jul 23
  • 11 min read

Updated: Aug 4


AI in recruiting refers to the use of technologies like machine learning and natural language processing to automate and enhance various stages of talent acquisition. These systems are designed to handle repetitive, high-volume tasks, freeing human resources (HR) professionals to focus on more strategic initiatives, such as building relationships with top-tier candidates. This shift promises to dramatically increase efficiency, reduce hiring times, and provide data-driven insights to support better decision-making. As algorithms become the new gatekeepers of opportunity, a critical tension emerges between the pursuit of efficiency and the legal imperative to ensure fairness, equity, and compliance with human rights legislation.


In Ontario, any tool or process used in recruitment, whether human-driven or algorithmic, is subject to the stringent requirements of the Ontario Human Rights Code (the Code). Employers who deploy AI screening tools remain fully liable for ensuring their hiring processes comply with these foundational legal obligations.


The Right to Equal Treatment in Employment



The cornerstone of employment law in Ontario is Section 5(1) of the Code, which states: "Every person has a right to equal treatment with respect to employment without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability." This right extends to every aspect of the employment relationship, including the recruitment and selection process. Critically, human rights jurisprudence focuses on the effect or outcome of an employer's practice, not on the employer's intent. An employer who uses an AI tool with no intention to discriminate can still be found liable if the tool's operation results in a discriminatory outcome.


Forms of Discrimination: Direct, Adverse Effect, and Systemic



Understanding the different legal forms of discrimination is essential to grasping the full scope of risk associated with AI screening.


  • Direct Discrimination: This involves an action that is discriminatory on its face, such as an explicit policy not to hire people over a certain age. While less likely with a well-designed AI, a poorly configured tool that, for example, automatically rejects all applicants with a gap in employment over one year could be seen as directly discriminating on the basis of family status or disability.


  • Adverse Effect (Indirect) Discrimination: This is the most significant and insidious risk posed by AI in hiring. It occurs when a seemingly neutral rule, policy, or practice has a disproportionately negative impact on a group protected by the Code. An AI screening algorithm is a perfect example of such a "practice." For instance, a video interview tool that scores candidates on their "enthusiasm" and "clarity of speech" may appear neutral. However, if it consistently gives lower scores to candidates with speech impediments (a disability), candidates with accents (ethnic origin), or candidates from cultures where overt expressiveness is less common, it is creating an adverse effect and is discriminatory, regardless of the employer's intent.   


  • Systemic Discrimination: This refers to the "patterns of behaviour, policies or practices that are part of the social or administrative structures of an organization, and which create or perpetuate a position of relative disadvantage for groups identified under the Code." AI systems trained on biased historical data are powerful engines for entrenching and automating systemic discrimination. If an organization has historically hired fewer women into leadership roles, an AI trained on that data will learn that male-coded attributes are predictors of success for those roles and will continue to favour male candidates, perpetuating the existing systemic barrier.
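To make this mechanism concrete, the following is a minimal, hypothetical sketch (synthetic data only; no real vendor's system is depicted) of how a model trained on biased historical hiring decisions can learn to penalize a facially neutral proxy for a protected ground, even when the protected attribute itself is never given to the model:

```python
# Hypothetical sketch: a model trained on biased historical hiring data
# learns to penalize a proxy for gender. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" record in which past decisions favoured men.
is_female = rng.random(n) < 0.5
skill = rng.normal(0.0, 1.0, n)                    # genuinely job-related
hired = (skill - 1.0 * is_female + rng.normal(0.0, 0.5, n)) > 0

# A facially neutral resume feature (e.g., a keyword like "women's chess
# club") that correlates with gender acts as a proxy for it.
proxy = np.where(is_female, rng.random(n) < 0.8, rng.random(n) < 0.1)

# Train WITHOUT the protected attribute -- only "neutral" features.
X = np.column_stack([skill, proxy.astype(float)])
model = LogisticRegression().fit(X, hired)

print("weight on job-related skill:", round(model.coef_[0][0], 2))
print("weight on gender proxy:     ", round(model.coef_[0][1], 2))
```

The model never sees gender as an input; it simply rediscovers it through a correlated feature and assigns that feature a negative weight. This is precisely how adverse effect and systemic discrimination can arise from "neutral" algorithms.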


Prohibited Grounds of Discrimination



The Code enumerates specific personal characteristics, or "prohibited grounds," upon which discrimination is illegal. An employer cannot, directly or indirectly, make hiring decisions based on these attributes. The following list summarizes the grounds most relevant to employment and highlights their specific connection to the risks posed by AI screening tools.

  • Race, Ancestry, Colour, Place of Origin, Ethnic Origin
    Definition and scope: Relates to a person's racial or ethnic background, lineage, skin colour, or country of origin. This can include associated characteristics like accent, manner of speech, or name.
    AI screening risk: AI may learn to associate certain names or linguistic patterns from resumes or video interviews with negative outcomes, leading to discrimination. Postal codes can act as proxies for racialized communities.

  • Citizenship
    Definition and scope: Refers to membership in a state or nation. Employers should only be concerned with a candidate's legal status to work in Canada, not their citizenship.
    AI screening risk: AI might penalize candidates without "Canadian work experience," a practice now restricted under Bill 149, which can indirectly discriminate based on citizenship or place of origin.

  • Creed
    Definition and scope: A person's religion or system of faith, including beliefs and observances.
    AI screening risk: AI could penalize candidates whose resumes list volunteer work with religious organizations or who require accommodations for religious holidays, which might be inferred from availability patterns.

  • Age
    Definition and scope: Protects individuals aged 18 and over from discrimination in employment. Mandatory retirement is prohibited.
    AI screening risk: AI trained on data from a younger workforce may undervalue the experience profiles of older workers or penalize them for using different terminology for skills. It may also screen out younger workers for perceived lack of experience.

  • Sex (including pregnancy, gender identity, gender expression)
    Definition and scope: Protects against discrimination based on being male, female, or pregnant, or on one's internal and/or external sense of gender.
    AI screening risk: AI trained on historical data from male-dominated industries can learn to penalize resumes with female-coded language or affiliations (e.g., Amazon's scrapped recruiting tool).

  • Sexual Orientation
    Definition and scope: Protects individuals who are gay, lesbian, bisexual, heterosexual, etc.
    AI screening risk: AI could make adverse inferences based on affiliations listed on a resume (e.g., volunteering for an LGBTQ+ organization), leading to discrimination.

  • Marital Status & Family Status
    Definition and scope: Marital status refers to being married, single, divorced, etc. Family status refers to the parent-child relationship.
    AI screening risk: AI is highly likely to penalize gaps in employment history, which can disproportionately affect individuals (most often women) who have taken parental leave, thus discriminating on the basis of family status.

  • Disability
    Definition and scope: A broad definition including physical, mental, developmental, or learning disabilities, whether past, present, or perceived.
    AI screening risk: AI tools create numerous barriers: video analysis may penalize atypical speech or expressions; timed tests may be inaccessible; resume parsers may penalize employment gaps taken for medical leave.

  • Record of Offences
    Definition and scope: Protects individuals with a record of a provincial offence or a pardoned federal offence in the context of employment.
    AI screening risk: AI tools that conduct automated background checks or scan public data could improperly screen out candidates based on offences for which they have a right to protection.

Emerging Case Law and Vendor Liability



While Canadian case law directly addressing AI in hiring is still developing, litigation in the United States is beginning to define the legal battleground. A landmark case is Mobley v. Workday, Inc., No. 23-cv-00770-RFL, 2024 U.S. Dist. LEXIS 126336 (N.D. Cal. July 12, 2024), a nationwide collective action lawsuit alleging that Workday's popular AI-powered screening tools systematically discriminate against applicants based on age (over 40), race, and disability. The plaintiff claims he was rejected from over 100 jobs for which he was qualified after applying through Workday's platform.


The significance of this case is twofold. First, it highlights the potential for massive class-action liability involving potentially millions of job seekers. Second, it directly targets the technology vendor, arguing that Workday is not merely a passive software provider but an "agent" of the employer and thus shares liability for the discriminatory outcomes. This represents a crucial shift, suggesting that both the companies that use AI tools and the companies that build them can be held responsible. For employers, this underscores the importance of demanding contractual indemnities and proof of bias testing from their vendors.


Canadian Federal Regulatory Framework



At the federal level, the government introduced Bill C-27, which included the proposed Artificial Intelligence and Data Act (AIDA). AIDA aimed to establish a national regulatory framework for "high-impact" AI systems, which would likely include those used in employment. The act would have required organizations to identify, assess, and mitigate risks of harm and biased output. However, the bill died on the Order Paper when Parliament was prorogued in early 2025, and its future remains uncertain. Despite this setback, federally regulated employers must still comply with the existing prohibitions against workplace harassment and discrimination under the Canada Labour Code and the Canadian Human Rights Act.


Ontario's Emerging Regulatory Framework



In March 2024, Ontario became the first jurisdiction in Canada to pass legislation specifically addressing the use of AI in hiring. The Working for Workers Four Act, 2024, also known as Bill 149, amends the Employment Standards Act (ESA) to introduce new transparency requirements.


The core provision, set to take effect on January 1, 2026, requires employers with 25 or more employees to include a statement in any publicly advertised job posting if they use AI "to screen, assess or select applicants". The implementing regulation defines AI broadly as a "machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions".


While a landmark step, the legislation's practical impact may be limited. The regulations do not specify precisely which tools or processes trigger the disclosure requirement, nor do they dictate the content or format of the disclosure statement. This ambiguity may lead many employers to adopt a cautious approach of including a generic, boilerplate disclosure statement in all job postings to ensure compliance. Such a practice, while satisfying the minimal requirement of the ESA, could dilute the intended goal of providing meaningful transparency to job seekers.


The Human Rights AI Impact Assessment (HRIA)



To help organizations operationalize these principles, the Ontario Human Rights Commission (OHRC) and the Law Commission of Ontario jointly developed the Human Rights AI Impact Assessment (HRIA) tool, released in late 2024. The HRIA is a practical, step-by-step guide designed to help organizations proactively identify, assess, and mitigate the human rights risks of AI systems throughout their lifecycle.


The HRIA is structured in two main parts:


  1. Part A: Impact and Discrimination: This section guides users through a series of questions to determine if an AI system is being used in a high-risk context (e.g., employment), whether it produces differential outcomes based on protected grounds, and whether it properly considers the duty to accommodate.


  2. Part B: Response and Mitigation: This section focuses on developing internal procedures for human rights review, ensuring data quality and explainability, conducting meaningful consultations with affected communities, and implementing robust testing and auditing protocols.


It is crucial for employers to understand that the HRIA is a voluntary guidance document. Its use is not legally mandated, and its completion does not grant immunity from liability. However, the act of undertaking and documenting a thorough HRIA serves as powerful evidence of due diligence. It demonstrates that an organization has turned its mind to its human rights obligations and has taken reasonable, proactive steps to prevent discrimination—a key element of any defense before the Human Rights Tribunal.


Proactive Compliance for Ontario Employers



In an environment of legal uncertainty and evolving technology, a reactive, wait-and-see approach to AI governance constitutes a significant business risk. Employers in Ontario must move beyond minimum compliance with disclosure requirements and adopt a proactive, multi-layered framework for the responsible deployment of AI in hiring. In the event of a human rights challenge, the most effective legal defense will not be an attempt to prove that an AI tool is perfectly unbiased—a technical impossibility. Rather, it will be the ability to demonstrate that the organization has implemented a robust, documented, and good-faith process to mitigate the risks of discrimination.


Establishing AI Governance



Effective risk management begins with a strong internal governance structure.


  • Establish a Cross-Functional AI Governance Team: Organizations should form a dedicated, cross-functional team comprising representatives from Human Resources, Legal, Compliance, Privacy, and Information Technology to develop and enforce a comprehensive AI in Employment policy, conduct due diligence on all potential and existing AI tools, and serve as the central point of oversight.   


  • Rigorous Vendor Due Diligence: Employers are legally responsible for the tools they deploy, even if they are developed by a third party. The governance team must conduct thorough due diligence before procuring any AI hiring tool.


Auditing and Continuous Monitoring



  • Conduct Regular Bias Audits: The most critical proactive step is to commission regular, independent bias audits of all AI screening tools. These audits should statistically measure whether the tool produces an adverse impact on candidates based on protected grounds under the Code; a minimal sketch of such a calculation follows this list. While not yet legally mandated in Ontario, conducting such audits is the clearest way to demonstrate due diligence and identify potential discrimination before it becomes a systemic problem.


  • Implement Continuous Monitoring: AI models are not static; they can "drift" or change their behaviour as they process new applicant data. Organizations must establish processes for ongoing monitoring of hiring outcomes to detect any emerging disparities. If the diversity of shortlists or hires begins to skew unexpectedly, it should trigger an immediate re-evaluation and re-testing of the AI tool.
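The sketch below (with hypothetical field names and illustrative records) shows the kind of adverse-impact calculation a bias audit might run. The 80% threshold is the U.S. EEOC "four-fifths rule" heuristic, not an Ontario legal standard; it appears here only as an example of a starting statistic an auditor might use:

```python
# Hypothetical sketch of an adverse-impact ("four-fifths rule") check.
# Field names and records are illustrative; real data would come from
# the applicant tracking system.
from collections import Counter

def selection_rates(records, group_key):
    """Selection rate (candidates advanced / total applicants) per group."""
    totals, advanced = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        advanced[r[group_key]] += r["advanced"]
    return {g: advanced[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate relative to the most-favoured group."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

records = [
    {"group": "A", "advanced": True},  {"group": "A", "advanced": True},
    {"group": "A", "advanced": False}, {"group": "B", "advanced": True},
    {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
]

rates = selection_rates(records, "group")
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: rate {rates[group]:.2f}, ratio {ratio:.2f} [{flag}]")
```

Re-running the same calculation over rolling windows of recent applications is one straightforward way to implement the continuous monitoring described above: an impact ratio that drifts below the chosen threshold should trigger re-evaluation and re-testing of the tool.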


Ensuring Meaningful Human Oversight



  • Maintain Human Discretion: At no point should an AI be permitted to make a final, autonomous decision to reject a candidate. A qualified human recruiter must review the outputs of the AI—including both the candidates it recommends and those it flags for rejection—and retain the ultimate authority to make the decision.


  • Provide Avenues for Recourse: Applicants must have a clear and easily accessible channel to request a human review of an automated decision or to request an alternative assessment process as an accommodation. This process should be clearly communicated to all applicants at the outset.


Cultivating Transparency and Explainability



  • Provide Meaningful Disclosure: Employers should go beyond the minimal disclosure requirement of Bill 149. Job postings and application portals should include a plain-language notice that explains how AI is used, what general qualifications and characteristics the tool assesses, and how to request an accommodation or alternative process. This builds trust with candidates and strengthens the employer's position by demonstrating a commitment to transparency.


  • Prioritize Explainability: When selecting AI tools, employers should prioritize systems that offer a degree of "explainability." If a vendor cannot provide a coherent, defensible, and job-related reason for why its AI ranked one candidate higher than another, that tool introduces an unacceptable level of legal risk. An inability to explain a decision is a significant liability in the face of a human rights complaint; a minimal sketch of one form of feature-level explanation follows.
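The sketch below (synthetic data and hypothetical feature names) shows one simple form of explainability an employer might ask a vendor to support: ranking which features actually drive a screening model's outputs, here via scikit-learn's permutation importance:

```python
# Hypothetical sketch: ranking which features drive a screening model's
# predictions using permutation importance. Data and names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["years_experience", "certification_score", "skills_test"]

# Synthetic stand-in for historical screening data.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2.0 * X[:, 2] + rng.normal(0.0, 0.5, 500)) > 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Rank features by how much shuffling each one degrades model accuracy.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: importance {score:.3f}")
```

If the top-ranked features cannot be tied to bona fide job requirements, the tool's rankings will be difficult to defend in response to a human rights complaint.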



Conclusion



AI-powered candidate screening tools, including resume parsers and video analysis software, are increasingly used in Ontario's hiring processes. While intended to improve efficiency, these systems pose a significant risk of discrimination under the Ontario Human Rights Code. Discrimination can arise from biased algorithms trained on historical data that reflects past societal biases, or through "proxy discrimination," where seemingly neutral data points correlate with protected grounds like race, gender, or disability.


Apart from Bill 149's forthcoming disclosure requirement, Ontario currently has no comprehensive legislation governing AI in hiring. However, the existing Human Rights Code fully applies, making employers liable for any discriminatory outcomes produced by their AI tools. The Ontario Human Rights Commission has also issued guidance, urging employers to conduct human rights impact assessments before implementation.


To mitigate legal risks, employers should ensure transparency, conduct regular bias audits, maintain meaningful human oversight, and provide accommodations for applicants. As jurisdictions like New York City and the European Union enact specific AI regulations, pressure is mounting for Ontario to develop a clearer legal framework to protect job applicants in the age of AI.


As an employer, you may want to consult with an experienced employment law firm, such as HTW Law, to learn about the dos and don'ts of employment law and to ensure that all angles are covered.


As an employee, you may want to consult with an experienced employment law firm, such as HTW Law, to learn about your employment law rights in cases of workplace harassment, discrimination, wrongful dismissal, constructive dismissal, or misclassification. By doing so, you can ensure that you receive fair compensation for actionable discrimination and that your employment rights are fully protected.



With the right legal support, employers and employees can navigate the challenges of unfair practices and work towards a more equitable and respectful work environment. 



You don't have to fight the battle alone. Speaking with an employment lawyer who is familiar with the laws and intricacies of AI hiring practices, workplace harassment, and discrimination will go a long way. If you are in doubt, reach out for help as soon as possible.



Click here to contact HTW Law - Employment Lawyer for assistance and legal consultation.

