Thien-Lam Nguyen
I want you to picture something. You have spent the better part of two years gathering documents, paying fees, refreshing a portal, and waiting. Your life is genuinely on hold in the way that only people who have been through an immigration process understand, the kind of suspended animation where you cannot commit to a lease, cannot accept a job offer, cannot make plans, because everything depends on a decision that has not come. Then someone tells you that Canada has deployed algorithms that are now sorting through applications with a speed no human team could match.
You feel, for the first time in months, something close to hope.
That feeling is exactly what I want to examine here. Not to say it is wrong to feel it, but to ask what it is actually based on, and whether the people generating it, through press releases, ministerial statements, and breathless coverage of government tech adoption, have been honest about what this technology can and cannot do. There is a seductive logic to technological solutionism in immigration.
Canada's immigration backlog is not a small administrative inconvenience. As of late 2025, nearly one million applications sit outside IRCC's normal processing standards, part of a total inventory exceeding two million files. Behind these numbers are real people who applied to reunite with spouses, to study, to work, to seek refuge. The scale of the problem made some kind of technological response almost inevitable, and in fairness, IRCC has built a lot.
The Advanced Data Analytics system for temporary resident visa applications has been running since 2018, sorting files by predicted complexity, routing low-risk cases for fast-track processing, and in certain narrow categories, issuing automatic approvals without any officer ever opening the file. IRCC claims the system can speed up processing by up to 87 percent, and a November 2024 IRCC update noted that over 80 percent of visitor visa applications now use automation. These are real improvements. Dismissing them because they are politically convenient for the government to cite would be its own kind of intellectual dishonesty.
But here is where I start to get uncomfortable, because the story the numbers tell overall is far less tidy than the story being told about them.
While specific processing streams were improving, Canada's total immigration backlog climbed again in mid-2025. The efficiency gains in one category kept being absorbed by surging demand elsewhere and by political decisions to cut immigration targets without any equivalent reduction in the applications already sitting in the queue.
To understand why, it helps to know what automated triage actually does. At its core, it is a filtering mechanism, a way of sorting incoming applications by complexity before a human officer ever looks at them. The system is designed to skim the low-complexity cases off the top so that officers can focus their time and judgment on what genuinely requires it. A straightforward study permit from a country with a low overstay rate, a clean financial history, an offer from an accredited institution, that kind of file moves quickly because the variables line up in predictable ways and the algorithm can process it with minimal friction.
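The logic of that kind of triage can be made concrete with a small sketch. This is purely illustrative: the field names, thresholds, and rules here are my own invention, since IRCC has not published how its Advanced Data Analytics model actually classifies files. The point is the shape of the mechanism, a filter that fast-tracks a file only when every low-complexity signal lines up, and routes everything else to a human officer.

```python
# Hypothetical illustration of rule-based triage.
# All fields and thresholds are invented; IRCC's actual model is not public.
from dataclasses import dataclass

@dataclass
class Application:
    country_overstay_rate: float   # historical overstay rate for country of origin, 0.0-1.0
    clean_financials: bool         # financial history raises no flags
    accredited_institution: bool   # offer is from an accredited school

def triage(app: Application) -> str:
    """Fast-track only when every low-complexity signal lines up;
    anything with friction stays in the human queue."""
    if (app.country_overstay_rate < 0.05
            and app.clean_financials
            and app.accredited_institution):
        return "fast_track"
    return "officer_review"

queue = [
    Application(0.02, True, True),    # predictable file: moves quickly
    Application(0.20, True, True),    # higher-risk origin: held for an officer
    Application(0.02, False, True),   # financial complexity: held for an officer
]
routed = [triage(a) for a in queue]
```

Notice what this sketch makes obvious: the filter can only ever remove the cases that were already simple. Any file with one complicating variable falls through to the same human queue as before, which is exactly why clearing the easy pile faster does not shrink the hard pile.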
What that means in practice, though, is that the backlog which remains after automation is not the backlog that automation was ever going to touch. It is made up of exactly the cases that resist algorithmic shortcuts, the ones where a person's life genuinely cannot be reduced to a set of variables. Automation just made the easier pile move faster, and in a system absorbing the volume Canada has been absorbing, faster on one end does not mean smaller on the other.
I keep coming back to Chinook. If you have not heard of it, that is partly the point. Chinook was deployed in 2018, used to process hundreds of thousands of immigration applications, and the Canadian public found out it existed through an Access to Information request. Someone had to file a request to get the information.
IRCC has since described Chinook as essentially a presentation interface, a tool that pulls case information into a more officer-friendly format rather than an algorithm making decisions. That may well be accurate. But the fact that the department's default posture was to deploy first and disclose never is a problem that exists entirely independently of what the tool actually does. It tells you something about how IRCC thinks about its obligation to the people whose lives run through its systems.
And that instinct toward opacity matters enormously when you start looking at how the other tools work. Immigration lawyer Mario Bellissimo has documented cases in which refusal reasons carry the exact same timestamp as the processing decision itself. If the reason for refusing someone's application was generated at the identical moment the decision was made, it raises a serious question about whether any substantive human review happened in between. IRCC's formal position is that no AI system can refuse an application, that all denials are human decisions. That may be technically true. But a human decision ratified at the speed of a click is doing a lot of work with the word "decision."
You cannot audit for bias you cannot see. And the people most likely to end up on the wrong end of a biased triage classification are rarely the ones with the resources to force transparency through the courts. The numbers that emerged from the Chinook era are difficult to wave away. In 2022, the Université de l'Ontario français reported that nearly 75 percent of international students who had already been accepted to the school were refused study permits, and that roughly 30 percent of applicants never received any response on their application status at all. Students from French-speaking African countries like Cameroon, Senegal, and Ivory Coast fared even worse, facing refusal rates approaching 80 percent between 2019 and 2022, which — if you are one of the people it happened to — is not a statistic so much as a door being closed in your face without explanation. A pattern this consistent, falling on the same communities with the same regularity, deserves something more rigorous than mild reassurance from the institution being asked to account for itself.
When I started thinking about this piece, I kept returning to what it actually feels like to be inside this process when someone tells you that technology is going to fix it. The hope is real. The relief is real. And I think that emotional reality is part of what makes the accountability gap so serious, because the story being told publicly is not exactly false. Parts of it are genuinely true. Artificial intelligence is modernizing Canadian immigration in certain respects. It has brought a kind of efficiency to a system that desperately needed something. But the story is being told in a way that quietly papers over the distributional reality underneath it, which is that the efficiency gains have overwhelmingly landed on the cases that were already the most straightforward. The people in the most precarious situations, the ones who needed the system to work most urgently, are still waiting. The technology made the system faster in the places where speed was easiest to achieve, and the headline statistics have been allowed to carry implications they cannot actually support.
The Standing Committee on Citizenship and Immigration has called for expanded algorithmic impact assessments and independent monitoring. Bellissimo has called for binding legislation, specialized officer training, and external audits with enforcement powers. These recommendations exist in committee reports and briefs, and they have not been implemented in any serious legislative form. In their absence, the governance framework around artificial intelligence in immigration consists largely of internal departmental commitments and a legal baseline built incrementally through the Federal Court. In 2023, the court held in Haghshenas v. Canada that while AI had been used in the processing of an application, the officer remained the decision-maker, and that the use of the tool was therefore irrelevant to judicial review. The following year, in Luk v. Canada, the court went a step further and held that AI-assisted processing does not inherently breach procedural fairness at all. Together, those two decisions define the current legal ceiling on accountability, and it is not a particularly high one.
The backlog is not a technology problem with a technology solution, but a resourcing problem dressed up in language that makes it sound like the hard part has already been handled. For the person who has been refreshing that portal for two years, still waiting, the algorithm was never going to save them. But the story that it might has made it easier for the people who could actually fix things to look away.
A rights-based framework, instead, would start with mandatory algorithmic impact assessments before any automated tool is deployed, ones that require IRCC to demonstrate with disaggregated data that a system does not produce discriminatory outcomes across national origin, race, or language group before it goes live rather than years later when the damage is already done. It would include a statutory right to a human decision-maker. It would require that refusal letters contain enough reasoning to be meaningfully challenged, because a decision that cannot be explained cannot be appealed, and a system where appeals are practically impossible is not procedurally fair in any sense worth claiming.
Canada could be a leader in rights-based innovation, but right now, the use of automated decision-making systems in immigration looks more like an experiment on vulnerable populations than a fair or just reform to a system that desperately needs it.