Author ORCID Identifier

Trevor C.W. Farrow: 0000-0001-5236-076X

Sean Rehaag: 0000-0002-4432-9217

Document Type




Publication Date



Sean Rehaag Rights-Enhancing Tech: Using AI to Open the Black Box of Human Refugee Adjudication

Jake Okechukwu Effoduh How Artificial Intelligence is Bastardizing Paradigms of Human Rights in the Third World

James Sheptycki AI and the police intelligence division-of-labour; a Canadian perspective

Alexandra Scott Autonomous weapons systems and International Humanitarian Law

Anthony Sangiuliano Approaches to Prohibiting Algorithmic Discrimination under the Canadian Human Rights Act

Aneurin Thomas Regulating Police Facial Recognition Technology: Issues and Options

Artificial Intelligence (AI) is dramatically reshaping how people live, work, and interact, as well as how societies function and how legal systems adapt to these changes. The integration of machine learning technologies into decision-making processes carries profound implications for sentencing, taxation, workplace dynamics, surveillance and policing, privacy, and financial markets. The rising automation of human activities prompts significant legal inquiries spanning constitutional, contractual, and tort issues.

Large Language Models (LLMs) such as ChatGPT are AI technologies with a range of legal, ethical, and societal implications. Trained on massive volumes of text data, these models can generate text resembling human language, enabling tasks such as answering questions, writing essays, and even crafting poetry. They implicate freedom of expression, the right to information, and the democratic process at large. They have the potential to generate misleading, harmful, or hateful content, regardless of their programmers’ and owners’ intentions, and could become tools for propaganda or disinformation campaigns. They also raise intellectual property questions, particularly when their output draws on pre-existing intellectual or artistic works, and could lead to mass job automation.