AI is changing the way companies hire. With top use cases for AI in recruitment now including candidate sourcing, screening, assessment, and interviews, the role of AI in HR recruitment is no longer just about automating routine tasks: it directly shapes who gets considered at all.
The good news is that Singapore companies remain conscious of both the opportunities and the pitfalls of using AI in their hiring processes. According to Hays, almost 60% of employers in Singapore recognise that AI-powered resume screening can be biased, and that this bias must be addressed before such tools are fully adopted.
Regardless, the tough question remains: how do HR and compliance teams harness the speed and efficiency of AI without compromising on fairness and transparency?
PDPC and AI governance in your hiring process
In recruitment, AI often influences high-stakes decisions, whether it’s scoring a candidate or filtering resumes. Aligning your hiring tools with AI governance frameworks helps strengthen your data protection practices, reduce regulatory risk, and build trust with candidates and stakeholders alike.
Singapore’s Personal Data Protection Commission (PDPC) serves as the country’s main regulatory authority on matters related to personal data protection. It also represents the Singapore Government on the international stage on issues related to the protection, use, and governance of personal data.
The PDPC’s Model AI Governance Framework (MGF) lays out clear, practical steps for organisations to manage the ethical and governance challenges that come with deploying AI in their business processes. The MGF pushes organisations to make their AI systems understandable, justifiable, and auditable—which helps build trust, supports regulatory compliance, and reduces reputational risk.
While the framework is voluntary, adopting it signals that your organisation is being responsible with AI adoption, and is taking proactive measures to meet the standards set out by the regulatory body.
Core pillars of the PDPC Model AI Governance Framework
The guiding principles of the MGF are designed to tackle the core challenges that come with using AI in real-world decision-making. They focus on two areas:
- Decisions made by or with the help of AI should be Explainable, Transparent, and Fair.
- AI systems should be Human-Centric.
These principles address common risks that often come with AI models, such as the “black box” problem, unintended bias and discrimination, and a lack of accountability for AI-driven outcomes. They are further expanded into four areas to guide companies’ implementation strategies:
Internal Governance Structures and Measures
Responsible AI starts from within. Organisations should define clear roles, responsibilities, and training to build accountability and set the tone for ethical use. This ensures that everyone, from leadership to frontline teams, understands their part in responsible AI use.
Human Involvement in AI-Augmented Decision-Making
Determining the right level of human oversight is a balancing act: some decisions may be fully automated, while others require human review or intervention to minimise risk. Calibrate oversight according to the sensitivity and impact of each use case to maintain efficiency without compromising on accountability.
Operations Management
The quality and reliability of your AI systems hinge on good operational management. This includes maintaining high data quality, ensuring model robustness, and regularly reviewing and fine-tuning AI systems. As circumstances change over time, so do AI systems; operational discipline helps prevent drift, bias, and unintended consequences as situations and technology continue to evolve.
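To make “regular review” concrete, one common operational check is to compare the distribution of model scores logged at launch against a later review window. The sketch below uses the Population Stability Index, a standard drift measure; the bin count, the 0.2 alert threshold, and the sample scores are illustrative assumptions rather than values prescribed by the MGF.

```python
# A minimal sketch of a periodic drift check, assuming you log the model's
# candidate scores at deployment (baseline) and in each later review window.
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of scores in [0, 1]."""
    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            counts[min(int(s * bins), bins - 1)] += 1
        # A small floor avoids log(0) and division by zero in empty bins.
        return [max(c / len(scores), 1e-6) for c in counts]
    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # scores logged at launch
current  = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]  # scores this review cycle

value = psi(baseline, current)
print(f"PSI = {value:.2f}", "-> investigate drift" if value > 0.2 else "-> stable")
```

A scheduled check like this turns “review the model regularly” from a policy statement into a repeatable, auditable routine.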
Stakeholder Interaction and Communication
This is key to transparency, both internal and external. Communicating AI policies clearly to stakeholders, whether job applicants or your HR recruitment team, builds trust. It also allows individuals to understand how decisions that affect them are made. Use plain, accessible language to demystify AI and reinforce a culture of openness.
A step-by-step guide to operationalise the PDPC AI framework in recruitment
Within the high-risk recruitment domain, adopting the MGF requires translating the core principles and implementation areas into specific actions. The following steps serve as a roadmap to de-risk the use of AI in hiring, ensuring fairness, transparency, and accountability at every stage.
Step 1: Conduct a Pre-Implementation Risk Assessment
Before adopting any AI tool, start by identifying potential harms in your specific hiring context. For example, will certain demographic groups or education backgrounds be disadvantaged? Assess how critical each AI-supported decision is (e.g., resume screening is typically high-criticality) to determine the right level of safeguards and oversight.
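One lightweight way to document this assessment is a simple risk register that maps each AI use case to its criticality, potential harms, and required safeguards. The sketch below is purely illustrative; the use cases, tiers, and safeguards are assumptions you would replace with your own.

```python
# A minimal sketch of a pre-implementation risk register.
# All entries are hypothetical examples, not a prescribed taxonomy.
RISK_REGISTER = {
    "resume_screening": {
        "criticality": "high",   # filters who is considered at all
        "potential_harms": ["demographic bias", "education-background bias"],
        "required_safeguards": ["human review of rejections", "bias audit"],
    },
    "interview_scheduling": {
        "criticality": "low",    # administrative and easily reversed
        "potential_harms": ["scheduling errors"],
        "required_safeguards": ["candidate can request manual rebooking"],
    },
}

for use_case, entry in RISK_REGISTER.items():
    print(f"{use_case}: {entry['criticality']} criticality; "
          f"safeguards: {', '.join(entry['required_safeguards'])}")
```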
Step 2: Vendor Due Diligence
When putting out a request for proposal, require that AI vendors provide comprehensive documentation on their model’s training data, fairness metrics, security measures, and data handling processes. Ensure that their model has been tested for bias in a way that reflects Singapore’s local context, rather than relying on benchmarks from overseas or other markets.
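When evaluating a vendor’s fairness claims, it also helps to run your own spot-check on pilot data rather than take reported metrics at face value. The sketch below computes a simple adverse impact ratio (the “four-fifths rule”) across demographic groups; the sample data, group labels, and 0.8 threshold are illustrative assumptions, not PDPC requirements.

```python
# A minimal sketch of a bias spot-check on a vendor's screening output.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, shortlisted_bool) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical pilot results: (demographic group, shortlisted?)
sample = (
    [("A", True)] * 40 + [("A", False)] * 60
    + [("B", True)] * 25 + [("B", False)] * 75
)

for group, ratio in adverse_impact_ratios(sample).items():
    flag = "OK" if ratio >= 0.8 else "REVIEW: possible adverse impact"
    print(f"group {group}: ratio {ratio:.2f} -> {flag}")
```

A ratio well below 0.8 for any group is a signal to pause the rollout and ask the vendor harder questions, not proof of discrimination on its own.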
Step 3: Develop Internal Governance and Training
Appoint an AI Governance Lead for your organisation to oversee AI risks; this role often sits within the DPO or compliance function. Develop clear internal policies on acceptable data use and provide mandatory training for HR and talent acquisition teams. Audit this training to ensure everyone understands, and is able to identify and address, algorithmic bias.
Step 4: Implement Human-in-the-Loop Oversight
Identify key decision points where human intervention must occur. This is especially crucial for material hiring decisions such as shortlisting or rejection. Human reviewers should have both the authority and the capacity to override AI recommendations where necessary. At the end of the day, accountability remains with people, not algorithms.
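Oversight rules like these can be encoded directly into the screening pipeline so that escalation is enforced by design rather than by convention. The sketch below assumes a hypothetical AI screener that returns a recommendation plus a confidence score; the decision types, the 0.9 threshold, and the routing labels are illustrative.

```python
# A minimal sketch of a human-in-the-loop gate for AI hiring recommendations.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9
MATERIAL_DECISIONS = {"reject", "shortlist"}  # always require human sign-off

@dataclass
class AiRecommendation:
    candidate_id: str
    decision: str      # e.g. "reject", "shortlist", "hold"
    confidence: float  # 0.0 - 1.0

def route(rec: AiRecommendation) -> str:
    """Decide whether a recommendation may proceed or needs human review."""
    if rec.decision in MATERIAL_DECISIONS:
        return "human_review"   # material outcomes are never fully automated
    if rec.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # low confidence escalates to a person
    return "auto_proceed"       # low-stakes, high-confidence cases only

# Usage: the reviewer, not the model, records the final decision.
for rec in [
    AiRecommendation("c-001", "reject", 0.97),
    AiRecommendation("c-002", "hold", 0.62),
    AiRecommendation("c-003", "hold", 0.95),
]:
    print(rec.candidate_id, "->", route(rec))
```

Note the design choice: material decisions route to a human regardless of confidence, which keeps the override authority, and the accountability, with people.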
Compliance and consequence go hand-in-hand
AI offers unprecedented potential for HR recruitment. But if left unchecked, it can inadvertently cause more harm than good, whether that’s disadvantaging certain candidates, or failing to protect sensitive data such as video interviews and assessment results.
These missteps can lead to breaches of the PDPC’s Protection and Fairness obligations, triggering legal and financial consequences including fines of up to S$1 million or 10% of annual local turnover. Beyond penalties, reputational damage and loss of candidate trust can have lasting impacts on operations and growth.
But effective compliance can mitigate the risk of negative outcomes. Operationalising the PDPC’s Model AI Governance Framework ensures your organisation embeds fairness, transparency, and human oversight throughout AI hiring processes.
And in today’s talent landscape, responsible AI adoption isn’t just ethical—it’s a strategic safeguard against the risks and consequences of non-compliance.
Schedule a consultation with our AI governance experts to assess your recruitment framework for PDPC compliance.