Talks and Presentations

Popular Media

NeuroDiving: A Philosophy Podcast about Neurodivergence

NeuroDiving is a narrative-driven podcast, created by Amelia Hicks and Joanna Lawson, about philosophical questions raised by the experiences of neurodivergent people. Season 1, "Autism Mind-Myths," focuses on the myth that autism is a "theory of mind deficit," and offers reflections on autistic experiences of empathy and moral motivation. More information available here: https://neurodiving.fm.

Academic Presentations

What Do Philosophers Talk About When They Talk About Autism?

(w/ A. Amero and B. Sidloski)

Description forthcoming.

Artificial Intelligence and the Value Alignment Problem

Description forthcoming.

Panel Discussion: Fast vs. Slow AI Adoption

(w/ C. Blouin, C. Heggie, and T. Das)

Symposium website available here.

The Autistic Metrestick in Paris: Autism and the Value-Free Ideal

The main goal of this paper is to explore the value-ladenness of scientific inquiry by examining some key moments in the history of autism research. Autism research has, historically, encoded the values of non-autistics. However, rejecting the value-free ideal of science opens the door to examining competing sets of (non-epistemic) values that might influence and direct scientific research. Using formal diagnosis versus self-identification as a case study, I argue for the prioritisation of the values of autistics and autistic communities in autism research.

Autism and the Pseudoscience of Mind

Research on autism spectrum disorder (ASD) has aimed to elucidate the psychological or cognitive mechanisms underpinning autism's behavioural manifestations. Such cognitive explanations are supposed to further our aetiological understanding of ASD by positing an 'intervening variable' between biology and behaviour. Numerous hypotheses have been put forward in the past half-century, including the popular claim that autistics lack a theory of mind. Theory-of-mind-deficit explanations of autism have been of particular interest to philosophers because of the normative and theoretical entailments of an individual who is 'unable' to attribute mental states to others: the existence of such an individual would have consequences for epistemology, theories of mind, theories of meaning, and normative theory, among others. However, the claim that autistics lack a theory of mind is false. The purpose of this paper is to show how this claim is false. I begin by reviewing research suggesting that theory-of-mind deficits cannot serve as an adequate explanatory model for autism. I then rehearse the empirical failures of experiments intended to measure theory-of-mind abilities. Finally, I argue that the experimental 'evidence' for the theory-of-mind-deficit explanation of autism amounts to pseudoscience, by exploring two questions: Do tests of theory of mind measure theory of mind? And what test could disprove the claim that autistics lack a theory of mind? I conclude by examining this argument's consequences for philosophers who uncritically invoke autism (qua theory-of-mind deficit) as a thought experiment.

Moral Dilemmas for Moral Machines

Autonomous systems are being developed and deployed in situations that may require some degree of ethical decision-making ability. As a result, research in machine ethics has proliferated in recent years. Part of this work has involved using moral dilemmas as validation mechanisms for algorithms that make decisions in ethically loaded situations. Using trolley-style problems in the context of autonomous vehicles as a case study, I argue (1) that this is a misapplication of philosophical thought experiments because (2) it fails to appreciate the purpose of moral dilemmas, and (3) it has potentially catastrophic consequences. However, (4) there are appropriate uses of moral dilemmas in machine ethics, and the novel situations that arise in a machine-learning context can shed light on philosophical work in ethics.

[Slides available here.]

On Being in (Autistic) Community

Description forthcoming.

Information and Meaning in the Evolution of Compositional Signals

This paper provides a formal treatment of the argument that syntax alone cannot give rise to compositionality in a signalling game context. This conclusion follows from the standard information-theoretic machinery used in the signalling game literature to describe the informational content of signals.
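
To make the relevant machinery concrete, here is a minimal sketch (my own illustration of the standard Skyrms-style measure, not anything specific to this paper) of how a signal's informational content is computed from the prior over states and the sender's strategy.

```python
# Minimal sketch of the standard informational-content measure from the
# signalling-game literature (after Skyrms): the information a signal
# carries about a state is how much it moves that state's probability,
# measured as log2(P(state | signal) / P(state)).

import numpy as np

def information_content(p_state, p_signal_given_state):
    """Return log2(P(state|signal)/P(state)) for each (state, signal) pair.

    p_state: prior over states, shape (n_states,)
    p_signal_given_state: sender strategy, shape (n_states, n_signals)
    """
    joint = p_state[:, None] * p_signal_given_state   # P(state, signal)
    p_signal = joint.sum(axis=0)                      # P(signal)
    posterior = joint / p_signal                      # P(state | signal)
    return np.log2(posterior / p_state[:, None])

# Two equiprobable states and a nearly separating sender: each signal
# raises the probability of "its" state (positive information) and
# lowers the other's (negative information).
prior = np.array([0.5, 0.5])
sender = np.array([[0.9, 0.1],
                   [0.1, 0.9]])
print(information_content(prior, sender).round(2))
```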

The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial

The value-alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values' (i.e., objective functions) of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication (natural language) is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems or, more loftily, to design robustly beneficial or ethical artificial agents.

The Tragedy of the AI Commons

Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines), and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma: a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
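
For illustration, the following is a minimal sketch of the kind of stochastic dynamics at issue; the particular game (a collective-risk dilemma) and every parameter value are my assumptions rather than the paper's model.

```python
# Minimal sketch of stochastic evolutionary dynamics in a social dilemma:
# in groups of size n, cooperators pay a cost c; if fewer than m group
# members cooperate, everyone loses the benefit b with probability r (the
# perceived risk of failure). Strategies spread by pairwise imitation
# under the Fermi rule. All parameters here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def group_payoff(is_coop, k, n, m, b, c, r):
    """Payoff in a group with k cooperators out of n members."""
    expected_benefit = b if k >= m else (1 - r) * b
    return expected_benefit - (c if is_coop else 0.0)

def simulate(Z=50, n=6, m=3, b=1.0, c=0.1, r=0.9, beta=5.0, steps=20000):
    coop = rng.random(Z) < 0.5                  # random initial strategies
    for _ in range(steps):
        i, j = rng.choice(Z, size=2, replace=False)
        payoff = []
        for player in (i, j):
            # Sample one group containing the focal player.
            others = rng.choice(np.delete(np.arange(Z), player),
                                size=n - 1, replace=False)
            k = int(coop[others].sum()) + int(coop[player])
            payoff.append(group_payoff(coop[player], k, n, m, b, c, r))
        # i imitates j with a probability increasing in the payoff gap.
        if rng.random() < 1 / (1 + np.exp(-beta * (payoff[1] - payoff[0]))):
            coop[i] = coop[j]
    return coop.mean()

print("final fraction of cooperators:", simulate())
```

Note that the levers exposed here (group size n, threshold m, cost c, perceived risk r) correspond to the kinds of variables the abstract suggests can be intervened upon.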

A recording of this talk is available here.

Reflexivity, Functional Reference, and Modularity: Alternative Targets for Language Origins

Researchers in language origins typically try to explain how compositional communication might evolve to bridge the gap between animal communication and natural language. However, as an explanatory target, compositionality has been shown to be problematic for a gradualist approach to the evolution of language. In this paper, I suggest that reflexivity provides an apt and plausible alternative target which does not succumb to the problems that compositionality faces. I further explain how proto-reflexivity, which depends upon functional reference, gives rise to complex communication systems via modular composition.

The paper associated with this talk has been accepted for publication in Philosophy of Science. A pre-print can be found on the PhilSci archive, here.

If Gradualism Is the Correct Approach to Language Origins, Then Compositionality Is Not a Plausible Explanatory Target

In this paper, I suggest that if the gradualist approach to language origins is correct, then compositionality is the wrong explanatory target for filling the explanatory gap between simple communication systems (as found in nature) and linguistic communication systems (i.e., natural languages).

Learning from Learning Machines: Optimisation, Rules, and Social Norms

(w/ Y. Bengio)

There is an analogy between machine learning systems and economic entities: both are adaptive, and their behaviour is specified in a more or less explicit way. The area of AI most analogous to the behaviour of economic entities is that of morally good decision-making, but it is an open question how, precisely, moral behaviour can be achieved in an AI system. This paper explores the analogy between these two complex systems, and we suggest that a clearer understanding of it may help us move forward in both the socio-economic domain and the AI domain: known results in economics may inform feasible solutions in AI safety, and known results in AI may likewise inform economic policy. If this claim is correct, then the recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.

Emerging Communication Under Conflict of Interest

(w/ M. Noukhovitch, A. Lazaridou, A. Courville)

Current literature in machine learning holds that agents do not learn to use an emergent communication channel when their interests are not aligned. We introduce a new sender-receiver game to study emergent communication across this spectrum of partially competitive scenarios and put special care into evaluation. We find that communication can indeed emerge in partially competitive scenarios, and we identify three factors that improve it. First, communication under partial conflict of interest is proportional to cooperation, and it naturally occurs in situations that are more cooperative than competitive. Second, stability and performance are improved by using LOLA (Foerster et al., 2018), especially in more competitive scenarios. Third, discrete protocols lend themselves better to learning cooperative communication than continuous ones.
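
As a toy illustration of the game structure (my own construction; the paper's environments and training procedure differ), consider a sender-receiver game whose degree of conflict is set by a single bias parameter.

```python
# Minimal sketch of a sender-receiver game with partial conflict of
# interest: the receiver wants its act to match the state, while the
# sender wants the act shifted by a fixed bias. Bias 0 is fully
# cooperative; larger biases make the game more competitive.

import numpy as np

rng = np.random.default_rng(1)

class BiasedSignallingGame:
    def __init__(self, n=10, bias=2):
        self.n, self.bias = n, bias

    def circular_distance(self, a, b):
        d = abs(a - b) % self.n
        return min(d, self.n - d)

    def play(self, sender_policy, receiver_policy):
        state = int(rng.integers(self.n))
        message = sender_policy(state)
        act = receiver_policy(message)
        r_receiver = -self.circular_distance(act, state)
        r_sender = -self.circular_distance(act, (state + self.bias) % self.n)
        return r_sender, r_receiver

# With a truthful sender and a literal receiver, the receiver is perfectly
# served while the sender always pays the bias: interests partially conflict.
game = BiasedSignallingGame(n=10, bias=2)
rewards = [game.play(lambda s: s, lambda m: m) for _ in range(1000)]
print(np.mean(rewards, axis=0))   # approx. [-2, 0]
```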

Epistemology and the Structure of Language

(w/ J. A. Barrett)

We are concerned here with how structural properties of language may come to reflect features of the world in which it evolves. As a concrete example, we will consider how a simple term language might evolve to support the principle of indifference over state descriptions in that language. The point is not that one is justified in applying the principle of indifference to state descriptions in natural language. Instead, it is that one should expect a language that has evolved in the context of facilitating successful action to reflect probabilistic features of the world in which it evolved.
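
One way to picture the idea, in a deliberately simplified toy case of my own (Carnapian state descriptions are richer than the single terms used here), is as follows.

```latex
% An illustrative toy case (mine, not the paper's model): nature has four
% equiprobable states s_1, ..., s_4, but the evolved term language has only
% two terms t_1, t_2. If the terms evolve to partition the states into
% equal-probability cells, say t_1 = {s_1, s_2} and t_2 = {s_3, s_4}, then
% indifference over the language's state descriptions is truth-tracking:
\[
  P(t_i) \;=\; \sum_{s \in t_i} P(s) \;=\; \frac{1}{2}, \qquad i = 1, 2,
\]
% so a uniform assignment over descriptions in the evolved language
% reflects the probabilistic structure of the world in which it evolved.
```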

The paper associated with this talk has been accepted for publication in Erkenntnis. A pre-print can be found on the PhilSci archive, here.

Biology and Compositionality: Considerations for Emergent-Communication Protocols

Significant advances have been made in artificial systems by using biological systems as a guide. However, there is often little interaction between computational models for emergent communication and biological models of the emergence of language. Many researchers in language origins and emergent communication take compositionality as their primary target for explaining how simple communication systems can become more like natural language. However, there is reason to think that compositionality is the wrong target on the biological side, and so too the wrong target on the machine-learning side. The purpose of this paper is to explore this claim. It has theoretical implications for language-origins research more generally, but the focus here is on the implications for research on emergent communication in computer science and machine learning, specifically regarding the types of programmes that might be expected to work and those that will not. I further suggest an alternative approach for future research which focuses on reflexivity, rather than compositionality, as a target for explaining how simple communication systems may become more like natural language. I end by providing some references to the language-origins literature that may be of use to researchers in machine learning.

Using Logic to Evolve More Logic: Composing Logical Operators via Self-Assembly

In recent work on self-assembly, Barrett and Skyrms (2017) show how a binary logical operator can evolve more quickly in a signalling game when the agents utilize pre-evolved dispositions (as opposed to learning a new disposition from scratch) via template transfer. Their argument is not intended to show how such logical dispositions might evolve in the first place. Further, template transfer does not show how to evolve, e.g., a ternary-input logical operator from a binary-input logical operator. This paper extends their analysis. I begin by analysing simple unary logical operations, rather than binary ones. I then show how binary logical operations can evolve out of unary logical operations via modular composition, a process whereby one game evolves to accept the play of another game as input. Thus, the new models presented here are able to account for phenomena which cannot be accommodated by the models presented in Barrett and Skyrms (2017).
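
To make the setup concrete, here is a minimal sketch (my reconstruction, not Barrett and Skyrms' models) of simple Roth-Erev reinforcement learning of a unary operation, negation, in a signalling game.

```python
# Minimal sketch of evolving a unary logical operator (NOT) in a signalling
# game via simple Roth-Erev reinforcement: nature picks a truth value, the
# sender signals, and play succeeds when the receiver outputs the negation.

import numpy as np

rng = np.random.default_rng(2)

def learn_unary_not(steps=5000):
    sender = np.ones((2, 2))    # urn weights: truth value -> signal
    receiver = np.ones((2, 2))  # urn weights: signal -> output truth value
    for _ in range(steps):
        state = int(rng.integers(2))
        sig = rng.choice(2, p=sender[state] / sender[state].sum())
        out = rng.choice(2, p=receiver[sig] / receiver[sig].sum())
        if out == 1 - state:            # success iff output == NOT(state)
            sender[state, sig] += 1     # reinforce the dispositions used
            receiver[sig, out] += 1
    return sender, receiver

sender, receiver = learn_unary_not()
print((sender / sender.sum(axis=1, keepdims=True)).round(2))
print((receiver / receiver.sum(axis=1, keepdims=True)).round(2))
```

Modular composition then amounts to wiring this game's output into the state of a further game, so that, for example, two unary games can feed a binary-input operator game.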

The paper associated with this talk has been accepted for publication in British Journal for the Philosophy of Science. A pre-print can be found on the PhilSci archive, here.

The Correction Game

How might pre-evolved communicative dispositions affect how individuals learn to communicate in a novel context? I present a model of learning in the signalling-game framework, under simple reinforcement learning, in which the reward for coordination varies as a function of the agents' actions. The model takes advantage of a type of modular compositional communicative bootstrapping by which the sender and receiver use pre-evolved communicative dispositions, a "yes/no" command, to evolve new dispositions.
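
The round structure can be sketched as follows; the details, in particular the discounted reward for a corrected second attempt, are my assumptions rather than the model's exact specification.

```python
# Minimal sketch of one round of a correction game: the agents play an
# atomic signalling game, but a pre-evolved "yes/no" command lets the
# sender reject the receiver's act, triggering a second attempt at a
# discounted reward. The discounting scheme is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(3)

def correction_round(sender, receiver, state, discount=0.5):
    """Play one round; returns the reward, which the caller can use to
    reinforce the urn weights that were actually drawn on."""
    draw = lambda w: int(rng.choice(len(w), p=w / w.sum()))
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:
        return 1.0                        # sender says "yes": full reward
    # Sender says "no": the receiver retries among the remaining acts.
    weights = receiver[signal].copy()
    weights[act] = 0.0
    retry = draw(weights)
    return discount if retry == state else 0.0

sender, receiver = np.ones((3, 3)), np.ones((3, 3))
print(correction_round(sender, receiver, state=0))
```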

Less Is More: Degrees of Compositionality for Complex Signals

Several formal models of signalling conventions have been proposed to explain how and under what circumstances compositional signalling might evolve. I suggest that these models fail to give a plausible account of the evolution of compositionality because (1) they apparently take linguistic compositionality as their target phenomenon, and (2) they are insensitive to role asymmetries inherent to the signalling game. I further suggest that, rather than asking how signals might come to be compositional, we must clarify what it would mean for signals to be compositional to begin with.

Reference by Proxy and Truth-in-a-Model

I examine Simchen's (2017) challenge to interpretationist metasemantics by extending his theoretical problem of singular reference in light of actual communicative exchanges. I show that when the problem is couched in these terms, the ability to refer depends inherently upon coordination—the onus of which is on the receiver. Thus, I show how the interpretationist stance, in this case, can reasonably be understood to encompass the productivist stance.

On the Role of Power in the Evolution of Inequitable Norms

(w/ C. O'Connor)

We use tools from evolutionary game theory to examine how power might influence the cultural evolution of inequitable norms between discernible groups (such as gender or racial groups) in a population of otherwise identical individuals. Similar extant models always assume that power is homogeneous across a social group. As such, these models fail to capture situations where individuals who are not themselves disempowered nonetheless end up disadvantaged in bargaining scenarios by dint of their social group membership. Thus, we assume that there is heterogeneity in the groups in that some individuals are more powerful than others.

Our model shows that even when most individuals in two discernible sub-groups are relevantly identical, powerful individuals can affect the social outcomes for their entire group; this results in power by association for their in-group and a bargaining disadvantage for their out-group. In addition, we observe scenarios in which individuals who are themselves more powerful nonetheless get less in bargaining because a convention has emerged that disadvantages their social group.
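
A minimal sketch of the payoff structure (my own toy rendering; the model in the talk is richer) shows why heterogeneous power matters: if power raises an individual's disagreement payoff, aggressive demands are cheaper for the powerful, and conventions can come to track them.

```python
# Minimal sketch of a mini Nash demand game with heterogeneous power:
# compatible demands are honoured; on miscoordination each side falls back
# on a disagreement payoff, which is higher for more powerful individuals.
# The numbers are illustrative assumptions.

def demand_payoff(my_demand, their_demand, pie=10, my_power=0.0):
    if my_demand + their_demand <= pie:
        return my_demand
    return my_power          # disagreement point scales with power

# An unpowerful player demanding 6 against 5 risks getting 0, but a
# powerful player risks only falling back to 2; conventions that track the
# powerful few can thereby disadvantage an entire out-group.
print(demand_payoff(6, 5), demand_payoff(6, 5, my_power=2.0))   # 0.0 2.0
```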

On the Role of Information in the Evolution of Signalling

I present new analytic and numerical analyses of signalling games that give rise to informational bottlenecks, that is, signalling games with more state/act pairs than available signals for communicating information about the world. I show that agents learning to coordinate tend to favour maximal information transfer, even though nothing in an initial analysis of the stability properties of the underlying signalling game suggests that this should be the case. To explain this, I note that the underlying structure of the model favours maximal information transfer owing to the simple combinatorial properties of the ways in which the agents might partition nature into kinds.
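
The bottleneck, and the sense of maximal information transfer, can be illustrated as follows (a sketch under my assumptions, with a uniform prior over states).

```python
# Minimal sketch of an informational bottleneck: four equiprobable states
# but only two signals, so at most 1 bit of information about the state
# can pass through the channel. An even 2+2 pooling of states achieves
# the maximum; an uneven 3+1 pooling transfers less.

import numpy as np

def mutual_information(p_signal_given_state):
    n = p_signal_given_state.shape[0]
    p_sm = p_signal_given_state / n                  # joint, uniform prior
    p_s, p_m = p_sm.sum(axis=1), p_sm.sum(axis=0)
    mask = p_sm > 0
    ratio = p_sm / np.outer(p_s, p_m)
    return float((p_sm[mask] * np.log2(ratio[mask])).sum())

even   = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])   # 2+2 split
uneven = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.]])   # 3+1 split
print(mutual_information(even), mutual_information(uneven))   # 1.0, ~0.81
```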

The paper associated with this talk has been accepted for publication in Journal of Experimental & Theoretical Artificial Intelligence. A pre-print can be found on the PhilSci archive, here.

On Salience and Signalling in Sender-Receiver Games

I introduce an extension of the Lewis-Skyrms signalling game, analysed from a dynamical perspective via simple reinforcement learning. In Lewis' (1969) conception of a signalling game, salience is offered as an explanation for how individuals may come to agree upon a linguistic convention. Skyrms (2010) offers a dynamic explanation of how signalling conventions might arise presupposing no salience whatsoever. The extension of the atomic signalling game examined here—which I refer to as a salience game—introduces a variable parameter into the atomic signalling game which allows for degrees of salience, thus filling in the continuum between Skyrms' and Lewis' models. The model does not presuppose any salience at the outset, but illustrates a process by which accidentally evolved salience is amplified, to the benefit of the players. It is shown that increasing degrees of salience allow populations to avoid sub-optimal pooling equilibria and to coordinate upon conventions more quickly.
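
One way to render the idea (my parametrisation, which may differ from the talk's) is as a single parameter interpolating the agents' initial dispositions between Skyrms' unbiased starting point and Lewis' fully salient one.

```python
# Minimal sketch of a salience parameter for the atomic signalling game:
# salience 0 gives Skyrms' uniform initial urn weights; salience 1 starts
# the agents at Lewis' focal state -> signal mapping; values in between
# give partial salience that reinforcement learning can then amplify.

import numpy as np

def initial_weights(n, salience):
    uniform = np.ones((n, n))          # no signal is privileged
    focal = n * np.eye(n)              # the "obvious" one-to-one mapping
    return (1 - salience) * uniform + salience * focal

print(initial_weights(3, 0.0))   # Skyrms: fully unbiased
print(initial_weights(3, 0.5))   # partially salient starting point
print(initial_weights(3, 1.0))   # Lewis: convention fixed by salience
```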

The paper associated with this talk has been accepted for publication in Synthese. A pre-print can be found on the PhilSci archive, here.

Signalling Games & Their Models

I apply the theoretical criteria laid out by D'Arms et al. (1998) to various aspects of evolutionary models of signalling (Skyrms 2010). The question that D'Arms et al. seek to answer can be formulated as follows: Are the models that we use to explain the phenomena in question conceptually adequate? The conceptual-adequacy question relates the formal aspects of the model to those aspects of the natural world that the model is supposed to capture. Moreover, this paper extends the analysis of D'Arms et al. by asking the following additional question: Are the models that we use sufficient to explain the phenomena in question? The sufficiency question asks what formal resources are minimally required for the model to get the right results most of the time.

The paper associated with this talk has been accepted for publication in Journal for General Philosophy of Science. A pre-print can be found on the PhilSci archive, here.

Fractionally Quantified Predicate Logic

The notion of fractional quantification, that is, quantified statements of the form 'at least half of A are B', arises in the context of Aristotle's syllogistic. Several different models have been proposed to formalize syllogistic logic, perhaps the most complete of which is one in which the syllogistic is extended to include denumerably many quantifiers (Johnson 1994). However, the notion of fractional quantification seems to arise only in the context of Aristotle's syllogistic. The question raised here is simply, 'Why?'

In order to answer this question, the purpose of this paper is to adequately formulate the motivation for asking whether it is possible to construct a (non-Aristotelian) formal system supplemented with fractional quantification. This is assessed in two parts: first, antecedent problems with classical quantification that would make such a model desirable; second, foreseeable problems that may arise as a consequence of such a construction, to the extent that it is possible.
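
For concreteness, the target notion can be stated in generalised-quantifier style (a standard rendering, not the paper's own formalism); quantifiers of this cardinality-comparing kind are known not to be definable in classical first-order logic, which is part of what makes the construction non-trivial.

```latex
% A standard generalised-quantifier rendering (not the paper's own
% formalism) of the fractional quantifier 'at least half of A are B',
% for finite domains:
\[
  (Q_{1/2}\,x)\,\bigl(Ax,\;Bx\bigr)
  \quad\Longleftrightarrow\quad
  \lvert A \cap B \rvert \;\geq\; \tfrac{1}{2}\,\lvert A \rvert .
\]
% No first-order formula defines this quantifier: it compares the
% cardinalities of two sets, which classical quantification cannot do.
```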