Teaching

Teaching Statement

My foremost objectives as an instructor are (i) to enable students to see the relevance and applicability of philosophy to their everyday lives, (ii) to foster students' confidence in their ability to succeed at philosophy, and (iii) to help improve their abilities to read, write, speak, and think clearly and critically about philosophical ideas. To achieve these objectives, I use a goal-based approach that focuses on maximising student motivation via environment, expectancy, and value.

A full teaching statement can be found here.

A summary of my recent teaching evaluations can be found here.

Sample syllabi for the courses I have taught are available below.

Teaching Experience (as Instructor of Record)

Dalhousie University

  • Winter 2023 • Social/Ethical/Professional Issues in Computer Science
  • Winter 2023 • Case Studies in Computing and Society
  • Autumn 2022 • Contemporary Philosophical Issues (Topic: 'Philosophy on the Spectrum')
  • Autumn 2022 • Logic: Understanding of Scientific Reasoning
  • Winter 2022 • Social/Ethical/Professional Issues in Computer Science
  • Winter 2022 • Case Studies in Computing and Society
  • Autumn 2021 • Topics in Ethics I (Topic: 'Evolutionary Ethics')

    Description. In a 1973 article, the evolutionary biologist Theodosius Dobzhansky proclaimed that ‘nothing in biology makes sense except in the light of evolution’. One way to understand this provocative statement is that biology (in terms of the observed diversity of life and its distribution on the earth’s surface) only makes sense in the light of evolution because it only makes sense if life on earth has a shared history. If we take this sentiment seriously, then in order to (truly) understand (nearly) any facet of human life, we ought to understand it in the context of the evolutionary history of humanity (as a species). Rather than assuming that ethics is the result of divine revelation or the application of our rational faculties, evolutionary ethics examines the possibility that morality is a social phenomenon, born out of the biological and cultural evolution of intelligent, social creatures.

    In this course, we will examine proto-morality and pro-social behaviour as they appear in non-human animals (especially primates). We will also examine possible mechanisms of evolution that may apply to the evolution of moral behaviour in humans, including natural selection, sexual selection, kin selection, and group selection, as well as social adaptations that may aid, foster, or give rise to moral behaviour, including altruism, signalling, cooperation, and conventions. On the philosophical side, we will discuss how a biological point of view allows and disallows certain normative concepts; we will also discuss criticisms of such a biological approach and contemporary challenges for evolutionary ethics.

    Paraphrasing a question posed by the linguist and cognitive scientist Massimo Piattelli-Palmarini, this course will centre on two converse questions:

    1. What is ethics, that it may have evolved? And,
    2. What is evolution, that it may apply to ethics?

    Syllabus available here.

University of Toronto

  • Summer 2021 • Seminar in Philosophy of Science (Topic: 'AI & the Value-Alignment Problem')

    Description. Artificial intelligence research is progressing quickly, and with it the capacities of AI systems. As these systems become more sophisticated and more deeply embedded in society, it will become increasingly essential to ensure that we can maintain control of them and that the decisions and actions they take are aligned with the values of humanity writ large. In the field of machine ethics, these are known as the control problem and the value-alignment problem, respectively.

    In the first part of this course, we will examine the concepts of control and value alignment to see how they are connected and what practical, scientific, ethical, and philosophical questions arise when trying to solve these problems. We will focus on both the normative and technical components of value-aligned artificial intelligence: in short, how to achieve moral agency in an artificial system. The normative component of the value-alignment problem asks which values or principles (if any) we ought to encode in an artificial system, whereas the technical component asks how we can encode those values. In the final part of the course, we will examine the social, ethical, and philosophical consequences that might arise (indeed, have arisen) from misaligned AI systems.

    Syllabus available here.

Teaching Experience (as Teaching Assistant)

University of California, Irvine

  • Spring 2018 • Inductive Logic
  • Winter 2018 • Introduction to Linguistics
  • Autumn 2017 • Acquisition of Language
  • Spring 2017 • Inductive Logic
  • Winter 2017 • Probability and Statistics for Economics

Simon Fraser University

  • Autumn 2018 • Critical Thinking
  • Autumn 2017 • Introduction to Ethics
  • Winter 2017 • Critical Thinking
  • Winter 2016 • Critical Thinking

Guest Lectures

University of California, Irvine

  • "Language and Cognition", Acquisition of Language (Linguistics/Psychology), 1 Dec. 2017.