Research

My research centres on philosophical questions surrounding the dynamics of complex social systems, often using formal tools from evolutionary game theory. This work includes interrelated projects in the areas of (1) ethically aligned artificial intelligence, (2) social dynamics, norms, and conventions, and, more recently, (3) the philosophy of autism. Some current research projects are described below.

1. Philosophy and Ethics of AI and Emerging Technologies.

My main research programme concerns the philosophy and ethics of AI. My primary focus has been on value-alignment problems in the context of AI systems. As these systems become more sophisticated and more deeply embedded in society, it will become increasingly essential to ensure that we can maintain control of them and that the decisions and actions they take are aligned with our values. My research emphasises that ensuring value alignment for AI systems requires more than just translating our best normative theories into a programming language.

Artificial Intelligence and the Value-Alignment Problem

The value-alignment problem for AI asks how we can ensure that the ‘values’ (objective functions) of artificial systems align with humanity’s values. One component of this problem is technical (how do we encode values or principles in AI systems?), and one component is normative (what values or principles are the ‘correct’ ones to encode in AI systems?).

One of my main research projects is an interdisciplinary treatment of value-alignment problems for AI systems. Although an intuitive understanding of value alignment is superficially useful, it should be clear that it raises more questions than it answers:

  • Are the ‘values of humanity’ individual values? Aggregate values?
  • Whose values are they? How should they be determined?
  • How can a conception of value alignment deal with the variation of values across culture or time?
My approach to value alignment is differentiated from the extant literature in machine learning insofar as, in my analysis, value-alignment problems arise from the dynamics of multi-agent interactions.

Thus, rather than focusing on which values are the right ones and how to implement them, I discuss the socio-dynamic contexts that give rise to value-alignment problems in the first place. This treatment provides a conceptual basis for understanding what value-alignment problems are (and how they are generated) in addition to shedding light on similar features between value-alignment problems for AI systems and other dynamic interactions (e.g., between human agents).

One key consequence of this approach is that, outside of the intentional misuse of AI systems by bad-faith actors, every problem arising in machine ethics—bias and fairness; transparency, explainability, and opacity; control problems, etc.—can be cashed out in terms of value alignment.

Understanding value-alignment problems in terms of their structural features underscores that solutions will not be found merely in the manipulation or reconfiguration of training data, nor in alternative algorithms.

Language and Value Alignment

In recent and ongoing work, I argue that linguistic communication is necessary for robust value alignment. Therefore, understanding the fundamental principles involved in the (biological or cultural) evolution of effective communication may lead to innovative communication methods for AI systems. The importance of linguistic communication for coordinating social values and norms in complex social systems is often taken for granted in contemporary research on value alignment for AI. However, there are important evolutionary connections between social norms and linguistic communication.

One goal of this research is to demonstrate the importance of understanding the biological origins of language as a path to achieving moral behaviour in AI systems.

Normativity and Artificial Intelligence

Along with a multi-disciplinary research team, I have been reviewing the literature on artificial moral agency. We seek to provide a systematic account of the most promising approaches to developing artificial moral agents, addressing normative pluralism, conflicts, and codification. A central research question concerns the key issues and most promising implementation approaches for high- and low-level agents. Additionally, we explore the main benchmarking approaches for assessing the functionality, accuracy, or moral competence of artificial moral agents. My recent single- and co-authored research in this area has sought to critically examine the very idea of benchmarking ethics for AI systems. I explore why current approaches in computer science fail in light of the category mistakes involved in using moral dilemmas from philosophy as benchmarks for whether an AI system acts morally, in addition to discussing metaethical problems for benchmarking ethics for AI.

2. Social Dynamics, Norms, and Conventions.

A second main focus of my research concerns questions surrounding social dynamics and the cultural and biological evolution of social phenomena. Primarily, this work has centred on language origins, but I have also studied the dynamics of other non-linguistic social phenomena.

Evolutionary Origins of Linguistic Communication

My work on the evolutionary origins of language stems from my dissertation. This work aims to address the following questions:

  • What are the salient differences between the simple signalling systems that are ubiquitous in nature and the linguistic communication systems that are unique to humans?
  • Which of these salient features of natural language provides an empirically plausible target for explaining how linguistic communication systems may have evolved out of simpler systems of communication?
Many researchers think that if we could explain how some distinctive feature(s) of language evolved, we would have taken great strides in bridging the evolutionary gap between simple communication and natural language. The most common feature of natural language appealed to as a gap-bridging explanatory target is compositionality (and related features like hierarchy and recursion). I argue that the emphasis on compositional syntax in language-origins research is misguided. I examine the inherent asymmetry between the benefits of compositional syntax for senders and receivers in a signalling-game context. I further discuss the binary nature of compositionality, which precludes a gradualist explanation for how it evolved. Using comparative methods from evolutionary biology, I show that there is no empirical evidence for any relevant proto-compositional precursors in nature. This body of research suggests that it is a mistake to assume that since compositional syntax provides a crucial difference between language and simple communication, research on language origins must, therefore, centre on the evolution of compositional syntax itself.

Instead, I propose that reflexivity—the ability to use language to talk about language—provides a plausible alternative explanatory target for language-origins research. Communication is a unique evolved mechanism to the extent that it can overtly influence the evolution of future communication. Once individuals learn to communicate, they may use those abilities to influence future communicative behaviour, leading to a positive feedback loop. I have demonstrated, via formal models, that reflexivity gives rise to rich compositional structures from which genuinely compositional syntax can emerge; but it emerges as a byproduct rather than an explicit target of evolutionary pressures. I further argue that reflexivity does not succumb to the problems that compositionality faces: role asymmetries are accounted for by the underlying mechanisms that give rise to reflexive communication systems; there exists empirical evidence of plausible precursors to reflexivity in nature; the precursors of reflexivity are genuinely graded. My research in this area provides initial evidence that reflexivity is a fruitful direction for explanations of language origins.

Social Dynamics, Epistemology, and Justice

My work in social dynamics has also examined general (non-linguistic) social phenomena. This research stems primarily from work done under a grant from the National Science Foundation (USA) on Social Dynamics and Diversity in Epistemic Communities (PI: Cailin O’Connor, UC Irvine). We use formal models and simulation results to precisify arguments about various social phenomena. For example, my research on discrimination shows how small degrees of power give rise to radically inequitable distributions of resources between perceptibly distinct (but otherwise effectively identical) populations of individuals. This result has consequences for the viability of accounts of distributive justice. My research in social epistemology shows how false beliefs—concerning, for example, scientific information or ‘fake news’ items—spread and persist in a network of individuals, even when an explicit retraction is issued. My future work in this area will provide novel models for analysing the social dynamics of epistemic accuracy, diversity, and inequity. This research programme is unified by the aim of understanding the role of social structure and interaction in epistemology and justice. With my co-author, Aydin Mohseni, I have also used stochastic models from game theory to analyse the cooperative challenge for AI ethics guidelines.

3. The Philosophy of Autism and Autistic Philosophy.

Since Fall 2022, beginning with a graduate seminar that I designed to teach at Dalhousie University (Philosophy on the Spectrum: The Philosophy of Autism and Autistic Philosophy), I have been exploring philosophical research on autism spectrum disorders. Stuart Murray suggests that the very idea of an autistic person is a philosophical one. However, Kenneth Richman points out that ‘the philosophy of autism’ is not (yet) a subfield of philosophy insofar as philosophical work on autism has fallen primarily under ethics, philosophy of mind, philosophy of psychology, or philosophy of medicine. Although the philosophy of autism is a fruitful research area within and without these narrower domains, my main research interest concerns (what I refer to as) autistic philosophy.

What Is Autistic Philosophy?

Autistic philosophy starts from the realisation that much philosophical theorising has proceeded from a ‘neurotypical’ perspective, thus theoretically (and often in practice) grossly misunderstanding autism and perpetuating harmful stereotypes about autistics. The general approach of autistic philosophy is to take certain phenomena for granted in light of the existence and experience of autistic persons, and then to treat these phenomena as fundamentally challenging existing philosophical theories from an autistic perspective. Thus, taking autistic perspectives as a starting point for one’s inquiry, it is possible to critically reflect on philosophical preconceptions, thereby challenging misconceptions about these notions based on neurodiversity within populations. As a philosopher of science, I am particularly interested in how autistic perspectives challenge dominant (neurotypical) scientific paradigms concerning epistemology, ontology, aetiology, and the very process of categorising, diagnosing, and pathologising autistic characteristics.

Books

Artificial Intelligence and the Value Alignment Problem: A Philosophical Introduction

Under contract with Broadview Press. Draft forthcoming.

Peer-Reviewed Articles

Power by Association (with Cailin O'Connor)

Ergo, an Open Access Journal of Philosophy (2022)

We use tools from evolutionary game theory to examine how power might influence the cultural evolution of inequitable norms between discernible groups (such as gender or racial groups) in a population of otherwise identical individuals. Similar extant models always assume that power is homogeneous across a social group. As such, these models fail to capture situations where individuals who are not themselves disempowered nonetheless end up disadvantaged in bargaining scenarios by dint of their social group membership. Thus, we assume heterogeneity within groups: some individuals are more powerful than others.

Our model shows that even when most individuals in two discernible sub-groups are relevantly identical, powerful individuals can affect the social outcomes for their entire group; this results in power by association for their in-group and a bargaining disadvantage for their out-group. In addition, we observe scenarios like those described where individuals who are more powerful will get less in a bargaining scenario because a convention has emerged disadvantaging their social group.
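
To give a sense of the modelling framework, here is a minimal sketch of a mini Nash demand game between two groups with heterogeneous power. It is illustrative only: the function run, the demand set, and the way power is modelled (as a non-zero disagreement payoff for a few 'powerful' agents in group A) are assumptions of the sketch, not the paper's actual model or parameterisation.

    import random

    DEMANDS = (4, 5, 6)  # possible splits of a resource of size 10

    def run(n_per_group=20, n_powerful=5, power=3.0, rounds=50_000, seed=2):
        """Illustrative inter-group bargaining with heterogeneous power.
        Agents reinforce demands made against the other group; 'powerful'
        agents in group A receive a disagreement payoff of `power` (instead
        of 0) when demands are incompatible. All values are hypothetical."""
        rng = random.Random(seed)
        # weights[group][agent][k]: urn weight for demanding DEMANDS[k]
        weights = {g: [[1.0] * len(DEMANDS) for _ in range(n_per_group)]
                   for g in "AB"}
        powerful = set(range(n_powerful))  # first few A-agents are powerful

        for _ in range(rounds):
            i, j = rng.randrange(n_per_group), rng.randrange(n_per_group)
            di = rng.choices(range(len(DEMANDS)), weights=weights["A"][i])[0]
            dj = rng.choices(range(len(DEMANDS)), weights=weights["B"][j])[0]
            if DEMANDS[di] + DEMANDS[dj] <= 10:        # compatible demands
                weights["A"][i][di] += DEMANDS[di]     # reinforce by payoff
                weights["B"][j][dj] += DEMANDS[dj]
            elif i in powerful:                        # disagreement point
                weights["A"][i][di] += power

        for g in "AB":
            # average over agents of each agent's most-reinforced demand
            avg = sum(DEMANDS[max(range(len(DEMANDS)), key=w.__getitem__)]
                      for w in weights[g]) / n_per_group
            print(f"group {g}: mean modal demand = {avg:.2f}")

    run()

Because group-B agents in this sketch condition their demands only on the group tag, the caution they learn against powerful A-agents extends to all of group A: a toy version of the power-by-association effect described above.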

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite published version, if available.)

Recommended citation:
LaCroix, Travis and Cailin O'Connor. 2022. "Power by Association." Ergo. 8(29): 163-189.

Est-ce que vous Compute? Code-Switching, Cultural Identity, and AI (with Arianna Falbo)

Feminist Philosophical Quarterly (2022)

Cultural code-switching concerns how we adjust our overall behaviours, manners of speaking, and appearance in response to a perceived change in our social environment. We defend the need to investigate cultural code-switching capacities in artificial intelligence systems. We explore a series of ethical and epistemic issues that arise when bringing cultural code-switching to bear on artificial intelligence. Building upon Dotson's (2014) analysis of testimonial smothering, we discuss how emerging technologies in AI can give rise to epistemic oppression, and specifically, a form of self-silencing that we call 'cultural smothering'. By leaving the socio-dynamic features of cultural code-switching unaddressed, AI systems risk negatively impacting already-marginalised social groups by widening opportunity gaps and further entrenching social inequalities.

[Official version available here.]
[arXiv preprint available here.] (Please cite published version, if available.)

Recommended citation:
Falbo, Arianna and Travis LaCroix. 2022. "Est-ce que vous compute? Code-switching, cultural identity and AI." Feminist Philosophical Quarterly. 8(3-4): 9:1-9:24.

The Tragedy of the AI Commons (with Aydin Mohseni)

Synthese (2022)

Policy and guideline proposals for ethical artificial-intelligence research have proliferated in recent years. These are supposed to guide the socially responsible development of AI for the common good. However, there typically exist incentives for non-cooperation (i.e., non-adherence to such policies and guidelines), and these proposals often lack effective mechanisms to enforce their own normative claims. The situation just described constitutes a social dilemma—namely, a situation where no one has an individual incentive to cooperate, though mutual cooperation would lead to the best outcome for all involved. In this paper, we use stochastic evolutionary game dynamics to model this social dilemma in the context of the ethical development of artificial intelligence. This formalism allows us to isolate variables that may be intervened upon, thus providing actionable suggestions for increased cooperation amongst numerous stakeholders in AI. Our results show how stochastic effects can help make cooperation viable in such a scenario. They suggest that coordination for a common good should be attempted in smaller groups in which the cost of cooperation is low and the perceived risk of failure is high. This provides insight into the conditions under which we should expect such ethics proposals to be successful with regard to their scope, scale, and content.
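
The flavour of the formalism can be conveyed with a toy simulation. The sketch below assumes a collective-risk dilemma with pairwise-comparison (Fermi) updating in a finite population; the function simulate_commons and all parameter values (group, cost, risk, and so on) are illustrative assumptions rather than the paper's model.

    import math
    import random

    def simulate_commons(N=50, group=5, thresh=3, cost=0.1, risk=0.9,
                         beta=5.0, rounds=10_000, seed=1):
        """Schematic collective-risk dilemma with pairwise-comparison
        (Fermi) updating in a finite population. All parameters are
        hypothetical: `risk` is the perceived probability of losing the
        endowment when fewer than `thresh` group members cooperate."""
        rng = random.Random(seed)
        coop = [rng.random() < 0.5 for _ in range(N)]  # initial strategies

        def payoff(i):
            # average payoff of agent i over ten randomly sampled groups
            total = 0.0
            for _ in range(10):
                others = rng.sample([j for j in range(N) if j != i], group - 1)
                k = sum(coop[j] for j in others) + coop[i]
                kept = 1.0 if (k >= thresh or rng.random() > risk) else 0.0
                total += kept - (cost if coop[i] else 0.0)
            return total / 10

        for _ in range(rounds):
            a, b = rng.sample(range(N), 2)
            # a imitates b with a probability given by the Fermi function
            p = 1.0 / (1.0 + math.exp(-beta * (payoff(b) - payoff(a))))
            if rng.random() < p:
                coop[a] = coop[b]
        return sum(coop) / N

    print(f"final cooperator share = {simulate_commons():.2f}")

Sweeping group, cost, and risk in a sketch like this gives a feel for the qualitative result reported above: cooperation fares best in small groups where the cost of cooperating is low and the perceived risk of collective failure is high.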

[Official version available here.]
[arXiv preprint available here.] (Please cite official version, if available.)
[Recorded talk available here.]
[Poster available here.]

Recommended citation:
LaCroix, Travis and Aydin Mohseni. 2022. "The Tragedy of the AI Commons." Synthese 200: 289:1-289:33. https://doi.org/10.1007/s11229-022-03763-2

Moral Dilemmas for Moral Machines

AI and Ethics (2022)

Autonomous systems are being developed and deployed in situations that may require some degree of ethical decision-making ability. As a result, research in machine ethics has proliferated in recent years. This work has included using moral dilemmas as validation mechanisms for implementing decision-making algorithms in ethically-loaded situations. Using trolley-style problems in the context of autonomous vehicles as a case study, I argue (1) that this is a misapplication of philosophical thought experiments because (2) it fails to appreciate the purpose of moral dilemmas, and (3) this has potentially catastrophic consequences; however, (4) there are uses of moral dilemmas in machine ethics that are appropriate and the novel situations that arise in a machine-learning context can shed some light on philosophical work in ethics.

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite published version, if available.)
[arXiv preprint available here.] (Please cite published version, if available.)

Summaries of this research have been published on the Blog of the American Philosophical Association and the AI Ethics Brief, distributed by the Montreal AI Ethics Institute.

Recommended citation:
LaCroix, Travis. 2022. "Moral dilemmas for moral machines." AI and Ethics. 2: 737-746. https://doi.org/10.1007/s43681-022-00134-y

Using Logic to Evolve More Logic: Composing Logical Operators via Self-Assembly

British Journal for the Philosophy of Science (2022)

I consider how complex logical operations might self-assemble in a signalling-game context via composition of simpler underlying dispositions. On the one hand, agents may take advantage of pre-evolved dispositions; on the other hand, they may co-evolve dispositions as they simultaneously learn to combine them to display more complex behaviour. In either case, the evolution of complex logical operations can be more efficient than evolving such capacities from scratch. Showing how complex phenomena like these might evolve provides an additional path to the possibility of evolving more or less rich notions of compositionality. This helps provide another facet of the evolutionary story of how sufficiently rich, human-level cognitive or linguistic capacities may arise from simpler precursors.

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite official version.)

Recommended citation:
LaCroix, Travis. 2019. "Using Logic to Evolve More Logic: Composing Logical Operators via Self-Assembly." British Journal for the Philosophy of Science (2022) 73(2): 407-437.

Epistemology and the Structure of Language (with Jeffrey A. Barrett)

Erkenntnis (2022)

We are concerned here with how structural properties of language may evolve to reflect features of the world in which it evolves. As a concrete example, we will consider how a simple term language might evolve to support the principle of indifference over state descriptions in that language. The point is not that one is justified in applying the principle of indifference to state descriptions in natural language. Rather, it is that one should expect a language that has evolved in the context of facilitating successful action to reflect probabilistic features of the world in which it evolved.

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite official version.)

Recommended citation:
Barrett, Jeffrey A. and Travis LaCroix. 2020. "Epistemology and the Structure of Language." Erkenntnis (2022) 87: 953-967. https://doi.org/10.1007/s10670-020-00225-4

Reflexivity, Functional Reference, and Modularity: Alternative Targets for Language Origins

Philosophy of Science (2021)

Researchers in language origins typically try to explain how compositional communication might evolve to bridge the gap between animal communication and natural language. However, as an explanatory target, compositionality has been shown to be problematic for a gradualist approach to the evolution of language. In this paper, I suggest that reflexivity provides an apt and plausible alternative target which does not succumb to the problems that compositionality faces. I further explain how proto-reflexivity, which depends upon functional reference, gives rise to complex communication systems via modular composition.

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite official version, if available.)

Recommended citation:
LaCroix, Travis. 2020. "Reflexivity, Functional Reference, and Modularity: Alternative Targets for Language Origins." Philosophy of Science (2021) 88(5): 1234-1245.

The Dynamics of Retraction in Epistemic Networks (with Cailin O'Connor and Anders Geil)

Philosophy of Science (2021)

Sometimes retracted scientific information is used and propagated long after it is understood to be misleading. Likewise, sometimes retracted news items spread and persist, even after it has been publicly established that they are false. In this paper, we use agent-based models of epistemic networks to explore the dynamics of retraction. In particular, we focus on why false beliefs might persist, even in the face of retraction.
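
A toy version of the kind of dynamics at issue is sketched below. It is a schematic two-contagion model on a random network; the function retraction_dynamics and its parameters are assumptions of the sketch, and the paper's agent-based models are richer than this.

    import random

    def retraction_dynamics(n=200, k=6, p_share=0.3, t_retract=10,
                            steps=60, seed=3):
        """Schematic sketch: a false claim spreads over a random network;
        at `t_retract` a retraction is seeded and spreads the same way,
        converting believers. All parameters are hypothetical."""
        rng = random.Random(seed)
        nbrs = [set() for _ in range(n)]   # each node gets k random links
        for i in range(n):
            for j in rng.sample([x for x in range(n) if x != i], k):
                nbrs[i].add(j)
                nbrs[j].add(i)

        state = ["ignorant"] * n
        state[0] = "believer"              # patient zero for the claim
        for t in range(steps):
            if t == t_retract:
                state[1] = "retracted"     # seed the retraction
            updates = []
            for i in range(n):
                for j in nbrs[i]:
                    if rng.random() >= p_share:
                        continue
                    if state[i] == "believer" and state[j] == "ignorant":
                        updates.append((j, "believer"))    # claim spreads
                    elif state[i] == "retracted" and state[j] == "believer":
                        updates.append((j, "retracted"))   # retraction spreads
            for j, s in updates:
                state[j] = s
        return sum(s == "believer" for s in state) / n

    print(f"believers remaining = {retraction_dynamics():.2%}")

In sketches like this, belief can outrun the retraction: the claim keeps converting ignorant agents, some of whom may never encounter a retracted neighbour, which is one crude way of seeing how false beliefs might persist after a public retraction.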

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite published version, if available.)

Recommended citation:
LaCroix, Travis, Anders Geil, and Cailin O'Connor. 2020. "The Dynamics of Retraction in Epistemic Networks." Philosophy of Science (2021) 88(3): 415-438. https://doi.org/10.1086/712817

Communicative Bottlenecks Lead to Maximal Information Transfer

Journal of Experimental and Theoretical Artificial Intelligence (2020)

This paper presents a new analytic and numerical analysis of signalling games that give rise to informational bottlenecks—that is to say, signalling games with more state/act pairs than available signals to communicate information about the world. I show via simulation that agents learning to coordinate tend to favour partitions of nature which provide maximal information transfer. This is true in spite of the fact that nothing in an initial analysis of the stability properties of the underlying signalling game suggests that this should be the case. As a first pass at explaining this, I note that the underlying structure of our model favours maximal information transfer in regard to the simple combinatorial properties of the ways in which the agents might partition nature into kinds. However, I suggest that this does not perfectly capture the empirical results; thus, several open questions remain.
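
The measure of information transfer at issue can be made concrete. The sketch below assumes uniformly distributed states and a deterministic sender, and computes the mutual information between states and signals for an even and an uneven partition of four states into two signals; the function mutual_information is illustrative, not code from the paper.

    from collections import defaultdict
    from math import log2

    def mutual_information(partition, n_states):
        """Mutual information (in bits) between uniformly distributed
        states and signals, where `partition` maps each state to the
        signal the sender deterministically sends in that state."""
        p_state = 1.0 / n_states
        cells = defaultdict(list)
        for state, sig in partition.items():
            cells[sig].append(state)
        mi = 0.0
        for members in cells.values():
            for state in members:
                # joint P(state, signal) = P(state), since the state
                # determines the signal; P(state|signal) = 1/|cell|
                mi += p_state * log2((1.0 / len(members)) / p_state)
        return mi

    # 4 states, 2 signals: even vs. uneven partitions of nature
    even   = {0: 0, 1: 0, 2: 1, 3: 1}   # two states per signal
    uneven = {0: 0, 1: 0, 2: 0, 3: 1}   # three states vs. one
    print(f"even partition:   {mutual_information(even, 4):.3f} bits")
    print(f"uneven partition: {mutual_information(uneven, 4):.3f} bits")

The even partition transfers a full bit of information, the uneven one roughly 0.81 bits; the simulation finding reported above is that learning dynamics tend to favour partitions of the first, information-maximising kind.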

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite official version.)

Recommended citation:
LaCroix, Travis. 2020. "Communicative Bottlenecks Lead to Maximal Information Transfer." Journal of Experimental and Theoretical Artificial Intelligence (2020) 32(6): 997-1014.

Evolutionary Explanations of Simple Communication: Signalling Games & Their Models

Journal for General Philosophy of Science / Zeitschrift für allgemeine Wissenschaftstheorie (2020)

This paper applies the theoretical criteria laid out by D'Arms et al. (1998) to various aspects of evolutionary models of signalling. The question that D'Arms et al. seek to answer can be formulated as follows: Are the models that we use to explain the phenomena in question conceptually adequate? The conceptual adequacy question relates the formal aspects of the model to those aspects of the natural world that the model is supposed to capture. Moreover, this paper extends the analysis of D'Arms et al. by asking the following additional question: Are the models that we use sufficient to explain the phenomena in question? The sufficiency question asks what formal resources are minimally required in order for the model to get the right results most of the time.

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite official version.)

Recommended citation:
LaCroix, Travis. 2019. "Evolutionary Explanations of Simple Communication: Signalling Games & Their Models." Journal for General Philosophy of Science / Zeitschrift für allgemeine Wissenschaftstheorie (2020) 51(1): 19-43.

On Salience and Signalling in Sender-Receiver Games: Partial-Pooling, Learning, and Focal Points

Synthese (2020)

I introduce an extension of the Lewis-Skyrms signaling game, analysed from a dynamical perspective via simple reinforcement learning. In Lewis' (Convention, Blackwell, Oxford, 1969) conception of a signaling game, salience is offered as an explanation for how individuals may come to agree upon a linguistic convention. Skyrms (Signals: evolution, learning & information, Oxford University Press, Oxford, 2010a) offers a dynamic explanation of how signaling conventions might arise presupposing no salience whatsoever. The extension of the atomic signaling game examined here—which I refer to as a salience game—introduces a variable parameter into the atomic signaling game which allows for degrees of salience, thus filling in the continuum between Skyrms' and Lewis' models. The model does not presuppose any salience at the outset, but illustrates a process by which accidentally evolved salience is amplified, to the benefit of the players. It is shown that increasing degrees of salience allow populations to avoid sub-optimal pooling equilibria and to coordinate upon conventions more quickly.
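
For readers unfamiliar with the framework, the sketch below implements simple (Roth-Erev) reinforcement learning in the atomic two-state signalling game, with a salience parameter that biases the agents' initial propensities toward one state-signal-act pairing; setting salience to zero recovers the standard, salience-free game. The function and parameter names are illustrative assumptions, not the paper's code.

    import random

    def play_salience_game(n_rounds=10_000, salience=0.0, seed=0):
        """Simple reinforcement learning in a 2-state, 2-signal, 2-act
        Lewis signalling game. `salience` (a hypothetical parameter)
        biases initial urn weights toward one convention; salience=0.0
        gives the standard atomic game with no salience at all."""
        rng = random.Random(seed)
        # sender[state][signal] and receiver[signal][act] urn weights
        sender = [[1.0 + (salience if sig == st else 0.0) for sig in range(2)]
                  for st in range(2)]
        receiver = [[1.0 + (salience if act == sig else 0.0) for act in range(2)]
                    for sig in range(2)]
        successes = 0
        for _ in range(n_rounds):
            state = rng.randrange(2)                  # nature picks a state
            signal = rng.choices([0, 1], weights=sender[state])[0]
            act = rng.choices([0, 1], weights=receiver[signal])[0]
            if act == state:                          # coordination succeeds
                sender[state][signal] += 1.0          # reinforce both urns
                receiver[signal][act] += 1.0
                successes += 1
        return successes / n_rounds

    for s in (0.0, 0.5, 2.0):
        print(f"salience={s}: success rate = {play_salience_game(salience=s):.3f}")

On the account above, increasing the salience parameter should both speed convergence and make sub-optimal pooling equilibria rarer; the sketch shows only the mechanics, not the paper's parameterisation or results.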

[Official version available here.]
[PhilSci-Archive preprint available here.] (Please cite official version.)

Recommended citation:
LaCroix, Travis. 2018. "On Salience and Signaling in Sender-Receiver Games: Partial Pooling, Learning, and Focal Points." Synthese (2020) 197(4): 1725-1747.

Book Chapters

Ethics and Deep Learning (with Simon J. D. Prince)

Chapter 21 in Simon J. D. Prince (2023) Understanding Deep Learning

An overview of some ethical issues arising from deep learning approaches to artificial intelligence research.

[Complete book draft available here.]

Recommended citation:
LaCroix, Travis, and Simon J. D. Prince. 2023. "Ethics and Deep Learning". Chapter 21 in Simon J. D. Prince, Understanding Deep Learning. Cambridge, MA: The MIT Press. 420-435.

Conference Proceedings

Emergent Communication under Competition (with Michael Noukhovitch, Angeliki Lazaridou, and Aaron Courville)

Autonomous Agents and Multiagent Systems (AAMAS 2021)

Current literature in machine learning has only negative results for learning to communicate between competitive agents using vanilla RL. We introduce a modified sender-receiver game to study the spectrum of partially-competitive scenarios and show communication can indeed emerge in this setting. We empirically demonstrate three key takeaways for future research. First, we show that communication is proportional to cooperation, and it can occur for partially competitive scenarios using standard learning algorithms. Second, we highlight the difference between communication and manipulation and extend previous metrics of communication to the competitive case. Third, we investigate the negotiation game where previous work failed to learn communication between independent agents. We show that, in this setting, both agents must benefit from communication for it to emerge. Finally, with a slight modification to the game, we successfully learn to communicate between competitive agents. We hope this work overturns misconceptions and inspires more research in competitive emergent communication.

[Official version available here.]
[arXiv preprint available here.] (Please cite published version, if available.)
[Recorded talk (Noukhovitch) available here.]

Recommended citation:
Noukhovitch, Michael, Travis LaCroix, Angeliki Lazaridou, and Aaron Courville. 2021. Emergent Communication under Competition. In U. Endriss, A. Nowé, F. Dignum, and A. Lomuscio (eds.), Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), London, UK, 3-7 May 2021, International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) 974-982.

Biology and Compositionality: Empirical Considerations for Emergent-Communication Protocols

NeurIPS 2019 workshop Emergent Communication: Towards Natural Language (2019)

Significant advances have been made in artificial systems by using biological systems as a guide. However, there is often little interaction between computational models for emergent communication and biological models of the emergence of language. Many researchers in language origins and emergent communication take compositionality as their primary target for explaining how simple communication systems can become more like natural language. However, there is reason to think that compositionality is the wrong target on the biological side, and so too the wrong target on the machine-learning side. As such, the purpose of this paper is to explore this claim. This has theoretical implications for language origins research more generally, but the focus here will be the implications for research on emergent communication in computer science and machine learning—specifically regarding the types of programmes that might be expected to work and those which will not. I further suggest an alternative approach for future research which focuses on reflexivity, rather than compositionality, as a target for explaining how simple communication systems may become more like natural language. I end by providing some reference to the language origins literature that may be of some use to researchers in machine learning.

[arXiv preprint available here.]
[Poster available here.]

Recommended citation:
LaCroix, Travis. 2019. "Biology and Compositionality: Empirical Considerations for Emergent-Communication Protocols." arXiv preprint: 1911.11668. https://arxiv.org/abs/1911.11668.

Book Reviews

Review of Ronald J. Planer and Kim Sterelny’s From Signal to Symbol

Philosophy of Science (2023)

Ronald J. Planer and Kim Sterelny, From Signal to Symbol: The Evolution of Language. Cambridge: The MIT Press (2021), 296 pp., $35.00 (hardcover).

[Official version available here.]

Recommended citation:
LaCroix, Travis. 2023. "Review of Ronald J. Planer and Kim Sterelny's From Signal to Symbol." Philosophy of Science (Forthcoming): 1-4. https://doi.org/10.1017/psa.2023.75

Dissertation

Complex Signals: Reflexivity, Hierarchical Structure, and Modular Composition

My dissertation argues that what drives the emergence of complex communication systems is a process of modular composition, whereby independent communicative dispositions combine to create more complex dispositions. This challenges the dominant view on the evolution of language, which attempts to resolve the explanatory gap between communication and language by demonstrating how complex syntax evolved. My research shows that these accounts fail to maintain sensitivity to empirical data: genuinely compositional syntax is extremely rare or non-existent in nature. In contrast, my research prioritises the reflexivity of natural language—the ability to use language to talk about language—as an alternative explanatory target.

The first part of my dissertation provides the philosophical foundation of this novel account using the theoretical framework of Lewis-Skyrms signalling games and drawing upon relevant work in evolutionary biology, linguistics, cognitive systems, and machine learning. Chapter 1 introduces the signalling game and contextualises it with respect to problems in the realm of traditional philosophy of language. Chapter 2 examines empirical data from biology and linguistics and argues that complex syntax is not the most apt explanatory target for how language might have evolved out of simple communication. Chapter 3 then argues that the reflexivity of language is a more fruitful property to consider, showing how reflexivity aids the evolution of complex communication via a process of modular composition. This connects parallel research in the evolution of language, cognitive systems, and machine learning paradigms. Once such complexity is exhibited, at a small scale, it may lead to a 'feedback loop' between communication and cognition that gives rise to the complexity we see in natural language.

The second part of my dissertation provides a set of models, along with analytic and simulation results, that show precisely how (and under what circumstances) this process of modular composition is supposed to work.

A more detailed summary of this work can be read here.

[Official version available here.]

Recommended citation:
LaCroix, Travis. 2020. Complex Signals: Reflexivity, Hierarchical Structure, and Modular Composition. UC Irvine. ProQuest ID: LaCroix_uci_0030D_16213. Merritt ID: ark:/13030/m5ps345j. Retrieved from https://escholarship.org/uc/item/5328x080

Under Review

Accounting for Polysemy and Role Asymmetry in the Evolution of Compositional Signals

Several formal models of signalling conventions have been proposed to explain how and under what circumstances compositional signalling might evolve. I suggest that these models fail to give a plausible account of the evolution of compositionality because (1) they apparently take linguistic compositionality as their target phenomenon, and (2) they are insensitive to role asymmetries inherent to the signalling game. I further suggest that, rather than asking how signals might come to be compositional, we must clarify what it would mean for signals to be compositional to begin with.

[Unpublished draft available here.] (Please cite official version, if available.)

Recommended citation:
LaCroix, Travis. 2019. "Accounting for Polysemy and Role-Asymmetry in the Evolution of Compositional Signals." Unpublished Manuscript. May 2019, PDF File.


Autism and the Pseudoscience of Mind

This paper critically examines the theory-of-mind-deficit explanation of autism—a cognitive explanation of autistic behaviour that has significantly influenced empirical research and philosophical discourse surrounding autism. However, the claim that autistics lack a theory of mind is false. Part of the purpose of this paper is to describe how this is so. First, a theory-of-mind deficit is inadequate as an explanatory model. Second, prior research has demonstrated the empirical failures of experiments intended to measure theory-of-mind abilities. These facts together suggest that the science of theory of mind in the context of autism is bad science. I argue that it is pseudoscience. This view has important consequences for philosophers who uncritically invoke autism (qua theory-of-mind deficit) as a thought experiment.

[PhilSci-Archive preprint available here.] (Please cite published version, if available.)

Recommended citation:
LaCroix, Travis. 2023. "Autism and the Pseudoscience of Mind." Unpublished Manuscript. December 2023.


The Correction Game or, How Pre-Evolved Communicative Dispositions Might Affect Communicative Dispositions

How might pre-evolved communicative dispositions affect how individuals learn to communicate in a novel context? I present a model of learning that varies the reward for coordination in the signalling game framework under simple reinforcement learning as a function of the agents' actions. The model takes advantage of a type of modular compositional communicative bootstrapping by which the sender and receiver use pre-evolved communicative dispositions—a "yes/no" command—to evolve new dispositions.

[Unpublished draft available here.] (Please cite official version, if available.)

Recommended citation:
LaCroix, Travis. 2019. "The Correction Game or, How Pre-Evolved Communicative Dispositions Might Affect Communicative Dispositions." Unpublished Manuscript. April 2019, PDF File.


Information and Meaning in the Evolution of Compositional Signals

This paper provides a formal treatment of the argument that syntax alone cannot give rise to compositionality in a signalling game context. This conclusion follows from the standard information-theoretic machinery used in the signalling game literature to describe the informational content of signals.
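
The standard machinery referred to here includes, for example, Skyrms-style informational content, on which the information a signal carries about a state is measured by how the signal changes that state's probability. A minimal sketch (the function name and example values are illustrative):

    from math import log2

    def informational_content(p_states, p_states_given_signal):
        """Skyrms-style informational content of a signal: for state i,
        log2( P(state_i | signal) / P(state_i) ). A minimal sketch of
        the standard quantity from the signalling-game literature."""
        return [log2(post / prior) if post > 0 else float("-inf")
                for prior, post in zip(p_states, p_states_given_signal)]

    # Two equiprobable states; a signal sent only in state 0 carries
    # 1 bit in favour of state 0 and rules out state 1 entirely.
    print(informational_content([0.5, 0.5], [1.0, 0.0]))  # [1.0, -inf]

On this machinery, a signal's content is fixed by the probabilistic relations between states and signals rather than by the signal's syntactic form, which is the feature the argument exploits.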

[PhilSci-Archive preprint available here.] (Please cite published version, if available.)

Recommended citation:
LaCroix, Travis. 2022. "Information and Meaning in the Evolution of Compositional Signals." Unpublished Manuscript. April 2022.


Learning From Learning Machines: Optimisation, Rules, and Social Norms (with Yoshua Bengio)

There is an analogy between machine learning systems and economic entities in that they are both adaptive, and their behaviour is specified in a more or less explicit way. It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making, but it is an open question as to how precisely moral behaviour can be achieved in an AI system. This paper explores the analogy between these two complex systems, and we suggest that a clearer understanding of this apparent analogy may help us move forward in both the socio-economic domain and the AI domain: known results in economics may help inform feasible solutions in AI safety, and known results in AI may likewise inform economic policy. If this claim is correct, then the recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.

[arXiv preprint available here.] (Please cite published version, if available.)

Recommended citation:
LaCroix, Travis and Yoshua Bengio. 2019. "Learning from Learning Machines: Optimisation, Rules, and Social Norms." arxiv.org/abs/2001.00006.


The Linguistic Blind Spot of Value-Aligned Agency, Natural and Artificial

The value-alignment problem for artificial intelligence (AI) asks how we can ensure that the 'values'—i.e., objective functions—of artificial systems are aligned with the values of humanity. In this paper, I argue that linguistic communication is a necessary condition for robust value alignment. I discuss the consequences that the truth of this claim would have for research programmes that attempt to ensure value alignment for AI systems—or, more loftily, attempt to design robustly beneficial or ethical artificial agents.

[arXiv preprint available here.] (Please cite published version, if available.)

Recommended citation:
LaCroix, Travis. 2022. "The linguistic blind spot of value-aligned agency, natural and artificial." arXiv Preprint: 2207.00868. 1-49. https://arxiv.org/abs/2207.00868.


Metaethical Perspectives on 'Benchmarking' AI Ethics (with Alexandra Sasha Luccioni)

Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research and have been developed for a variety of tasks ranging from question answering to facial recognition. An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system. In this paper, drawing upon research in moral philosophy and metaethics, we argue that it is impossible to develop such a benchmark. As such, alternative mechanisms are necessary for evaluating whether an AI system is 'ethical'. This is especially pressing in light of the prevalence of applied, industrial AI research. We argue that it makes more sense to talk about 'values' (and 'value alignment') rather than 'ethics' when considering the possible actions of present and future AI systems. We further highlight that, because values are unambiguously relative, focusing on values forces us to consider explicitly what the values are and whose values they are. Shifting the emphasis from ethics to values therefore gives rise to several new ways of understanding how researchers might advance research programmes for robustly safe or beneficial AI. We conclude by highlighting a number of possible ways forward for the field as a whole, and we advocate for different approaches towards more value-aligned AI research.

[arXiv preprint available here.] (Please cite official version, if available.)

Recommended citation:
LaCroix, Travis and Alexandra Sasha Luccioni. 2022. "Metaethical Perspectives on ‘Benchmarking’ AI Ethics." arXiv Preprint: 2204.05151. 1-39. https://arxiv.org/abs/2204.05151.


What Russell Can Denote: Aboutness and Denotation Between Principles and 'On Denoting'

How ought we to analyse propositions that are about nonexistent entities? Russell (1903) details the concept of denoting in Principles of Mathematics, and this theory appears to answer the question posed. However, in the paper 'On Denoting' (Russell 1905), we see that his theory of denoting has changed greatly. Hylton (1990) argues that the move from the former theory to the latter was unnecessary. The purpose of this paper is to show that, contra Hylton, the move to the theory found in 'On Denoting' was indeed necessary.

I argue that Hylton is correct to the extent that an answer to our first question relies on a different question concerning the ontology of nonexistent entities. However, what this fails to take into account is a more interesting question regarding the truth values of propositions containing such puzzling entities. This question relies on Russell's notion of aboutness and, in this sense, is more sensitive to his theory as a complete picture of denotation. If we take the aboutness relation seriously, then we see that the move from the former theory to the latter was necessary after all.

[Unpublished draft available here.] (Please cite official version, if available.)

Recommended citation:
LaCroix, Travis. 2019. "What Russell Can Denote: Aboutness and Denotation Between Principles and 'On Denoting'." Unpublished Manuscript. May 2019, PDF File.

Selected Working Papers

Reference by Proxy and Truth-in-a-Model

Simchen (2017) brings to light the notion of 'scrambled truth' to show how productivist metasemantics is able to deal with problems of singular reference in a way that an interpretationist metasemantics (such as Lewisian reference magnetism) cannot. This serves to show that productivism is a live alternative, and indeed a rival to interpretationist metasemantics, and so cannot be subsumed by interpretationist theories.

I examine Simchen's challenge to interpretationist metasemantics by extending his theoretical problem in light of actual communicative exchanges. I show that when the problem is couched in these terms, the ability to refer depends inherently upon coordination—the onus of which is on the receiver. Thus, I show how the interpretationist stance, in this case, can reasonably be understood to encompass the productivist stance.


Saltationist versus Gradualist Approaches to Language Origins: A Critical Discussion

In spite of their vast differences, theories of language origins can be, more or less, partitioned into two exhaustive and mutually exclusive camps: saltationist and gradualist. Saltationism—from the Latin saltus, meaning 'leap'—is the view that (the human-level capacity for) language sprang into existence suddenly and recently, and that there is a complete discontinuity between the linguistic capacities of humans and the communication systems of non-human animals; gradualism—from the Latin gradus, meaning 'step'—is the view that language evolved slowly over long periods of time. However, rather than arguing for the plausibility of a gradualist versus a saltationist scenario, most researchers appear to fall into one or the other camp based purely upon external or pre-theoretic commitments regarding what they believe evolved [emerged] and how.

The purpose of this paper is to critically survey the respective commitments and entailments of saltationist and gradualist theories of language origins in order to make an explicit argument that the saltationist view is theoretically untenable. Under scrutiny, holding one or the other theoretical stance toward language origins will require or entail certain commitments, which vary in plausibility. It appears that many researchers either ignore these facts or are willing to bite the bullet with respect to them. However, arguments for why one should be so willing are often few and far between.