
Academic

Publications

International Journal of Technology Policy and Law (2017)

Copyright is at the centre of both popular and academic debate. That emotions are running high is hardly surprising – copyright influences who contributes what to culture, how culture is used, and even the kind of persons we are and come to be. Consequentialist, Lockean, and personality interest accounts are generally advanced in the literature to morally justify copyright law. I argue that these approaches fail to ground extensive authorial rights in intellectual creations and that only a small subset of the rights accorded by copyright law is justified. The pared-down version of copyright that I defend consists of the right to attribution, the right to have one’s non-endorsement of modifications or uses of one’s work explicitly noted, and the right to a share of the profit resulting from the commercial uses of one’s work. I also cursorily explore whether contribution to another person’s work gives rise to moral interests.

Work-in-progress

Thinking of oneself as someone: the structure of minimal self-representation

When investigating the nature of self-representations, one standard question concerns the types of property that need to figure in their content. Here, authors have claimed that self-representations need to be about spatial, temporal, bodily, or mental properties. However, we can also ask a second question: how does a self-representation need to represent these properties? It is this latter question that I address. I argue that a distinction between egocentric and non-egocentric forms of representation – known primarily from the literature on spatial cognition – also applies to representations of other kinds of property. I use examples drawn from animal cognition and developmental psychology to show how creatures non-egocentrically represent their temporal, bodily, and cognitive properties. These representations are, I submit, minimal self-representations: they involve representing one’s properties so that an explicit differentiation is made between the system’s and other objects’ properties (or between the system’s actual and merely possible properties). The upshot is a view on which different creatures may self-represent (in this minimal sense) more or fewer kinds of property. More substantive conceptions of self-representation (for instance, as exemplified by neurotypical adult human beings) then require integrated minimal self-representations of the right kinds of property.

Dispositions and objects’ changing properties

Analyses of dispositions share the following formal structure: O has disposition D if O fulfils modal conditions C. This simple structure hides a difficult question: what is the relation between O in the analysandum and O in the analysans? Clearly, they must be numerically identical – and just as clearly, this is insufficient. Whether a thirty-year-old is disposed to wake up early is unaffected by their night-owl teenage years. What, then, is an appropriate additional constraint? This paper argues that no suitable constraint has so far been advanced and that finding one presents important difficulties. We might think that the objects need to share all intrinsic properties – but that renders dispositions largely useless in explanation and prediction. As most objects change over time, we cannot, for instance, use our knowledge of someone being an early riser to infer that they will get up early tomorrow. We might, in contrast, think that the objects need to share only some of their intrinsic properties. This approach is more promising but requires explaining which properties need to be shared. I develop a constraint according to which the objects need to share the causal basis (inspired by Lewis’s reformed conditional analysis), but ultimately find it wanting. Finally, I argue that the puzzle of the relation between O in the analysandum and O in the analysans can help motivate some re-evaluation of how dispositions are affected by objects’ dynamic natures.

Joint attention and infant self-other (in)differentiation

Famously, joint attention is characterised by openness: when two agents jointly attend to an object, they are both immediately and fully aware that they are engaged in joint attention. It has been argued that the nature of openness is such that accounts based on the representation of individual mental states must fail. I agree with this diagnosis but am unhappy with existing alternatives that refer to primitive shared states. In this paper, I argue that openness is rooted in young infants’ failure to differentiate between themselves and others in various important ways. These indifferentiations entail that they tacitly assume that openness obtains. The development of the capacity for joint attention is, then, the development of the ability not to assume that openness obtains when doing so isn’t warranted. I trace the development of various cognitive strategies with which the infant detects an increasing number of non-open situations and use this account to solve various puzzles that have long been associated with openness.

Should we discourage AI extension? (with Hadeel Naeem)

We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put into perspective – many unreliable technologies are unlikely to be used transparently precisely because they are unreliable. Second, an agent who transparently employs a resource may also reflect (opaquely) on its reliability. Finally, agents can rely on a process transparently and be yanked out of their transparent use when it turns unreliable. When an agent is responsive to the reliability of their process in this way, they have epistemically integrated it, and the beliefs they form with it are formed responsibly. This prevents the agent from automatically incorporating problematic beliefs. Responsible (and transparent) use of AI resources – and consequently responsible AI extension – is hence possible. We end the paper with several design and policy recommendations that encourage epistemic integration of AI-involving belief-forming processes.

Phenomenal transparency and the boundary of cognition (with Hadeel Naeem)

Phenomenal transparency was once widely believed to be necessary for cognitive extension. Recently, this claim has come under attack, with a new consensus coalescing around the idea that transparency is necessary neither for internal nor for extended cognitive processes. We take these recent critiques as an opportunity to refine the concept of transparency relevant for cognitive extension. In particular, we highlight that transparency concerns an agent’s employment of a resource – and that such employment is compatible with an agent consciously apprehending (or attending to) a resource. This means it is possible for an object to be both transparent and opaque to an agent, even at a single moment in time. Once we understand transparency in this way, the detractors’ claims lose their bite, and existing arguments for transparency’s necessity for cognitive extension once again apply with full force.

Non-academic

Videos