Dissertations

  • Marina DiMarco (2023)

    Washington University, St. Louis, tenure track

    Philosophy of Science

    Dissertation: Explaining and Intervening in Biosocial Science

    Biosocial scientists claim to improve our understanding of health disparities by integrating social and biological causes of human health and behavior. While many philosophers, sociologists, and historians of science embrace the liberatory promise of biosocial science for the design of clinical interventions and public health policy, others are skeptical. As feminist science scholars Dorothy Roberts, Victoria Pitts-Taylor, and Sarah Richardson point out, the “new biosocial science” often reproduces biologically deterministic explanations of health and behavior that mark marginalized individuals as hard-wired or programmed for pathology. As a result, the subjects of explanation in new biosocial science are often targeted for individualistic interventions, and social determinants of health mysteriously disappear into the background. This project forensically analyzes the disappearance of social causes from biosocial explanations. To begin, I characterize and parse the heterogeneity of biosocial science to focus on a specific genre of these explanations: those which ask how social causes “get under the skin” to become embodied in molecular terms. In the rest of the dissertation, I interrogate the values in, and of, these questions and competing answers to them. My approach draws from feminist science studies, feminist philosophy of science, and work on science and values to embrace pragmatic, social, and political dimensions of explanatory success. This is only fitting for a science that is itself marked by, and conscious of, its own political implications, past and present.

  • Dasha Pruss (2023)

    George Mason University, tenure track

    Philosophy of Science

    Dissertation: Carceral Machines: Algorithmic Risk Assessment and the Reshaping of Crime and Punishment

    Recidivism risk assessment instruments are used in high-stakes pre-trial, sentencing, or parole decisions in nearly every U.S. state. These algorithmic decision-making systems, which estimate a defendant's risk of rearrest or reconviction based on past data, are often presented as an 'evidence-based' strategy for criminal legal reform. In this dissertation, I critically examine how automated decision-making systems like these shape, and are shaped by, social values. I begin with an analysis of algorithmic bias and the limits of technical audits of algorithmic decision-making systems; the subsequent chapters invite readers to consider how social values can be expressed and reinforced by risk assessment instruments in ways that go beyond algorithmic bias. I present novel analyses of the impacts of the Sentence Risk Assessment Instrument in Pennsylvania and cybernetic models of crime in the 1960s Soviet Union. Drawing on methods from history and philosophy of science, sociology, and legal theory, I show not only how societal values about punishment and control shape (and are shaped by) the use of these algorithms – a phenomenon I term domain distortion – but also how the instruments interact with their users – judges – and existing institutional norms around measuring and sentencing crime. My empirical and theoretical findings illustrate the kinds of insidious algorithmic harms that rarely make headlines, and serve as a tonic for the exaggerated and speculative discourse around AI systems in the criminal legal system and beyond.

  • Dana Matthiessen (2023)

    University of Minnesota, 2-yr Postdoc

    Philosophy of Science

    Dissertation: Empirical and Pragmatic Grounds of Scientific Representation

    The central thesis of this dissertation is that the ability to reason and learn about the natural world using models can be explained in terms of the practices that warrant researchers to integrate models with accounts of their data-gathering procedures and act on their behalf. I argue that a model only functions as a representation with respect to a target phenomenon when this phenomenon is a plausible member of its domain of application and when the model can be used to characterize this target from data. I argue that this requires, first, that the model can be compared to data and second, that the model be integrated with an account of the process by which this data was produced from the target phenomenon. I provide an account of the representational accuracy of models based on their integration with a theory of technique and subsequent comparison with data patterns. On the same basis, I provide an account of the pragmatic representational content of models in terms of the set of practical inferences they license as a supplement to the empirical programs within the model’s domain of application. Historically, one often sees a back-and-forth negotiation where a model-based target characterization and a data-gathering practice are iteratively tuned to one another. Models are routinely informed by empirical results in the process of their construction and adjusted in response to them. Conversely, models add depth to target characterizations and fill out theories of technique in ways that alter data-gathering procedures. From this perspective, we can understand how a model’s representational content might gradually accrue to it and allow for finer distinctions in data outcomes. I present an extended case that tracks the development of X-ray crystallography and its use for the characterization of the molecular structure of proteins. Ultimately, what is presented here is intended as a robustly pragmatist account of scientific representation. That is, one that does not only tie model use to purposes, but also to the realm of human action.

  • Jennifer Whyte (2023)

    Duke University, 2-yr Postdoc

    Philosophy of Science

    Dissertation: A New Function for Thought Experiments in Science

    In this dissertation I propose and defend a new account of thought experiments in science and show that it solves an otherwise outstanding problem in the epistemology of models in science. In the first chapter, I argue that a handful of reasonable premises about the epistemic status of science and its models leads to a challenge: shifts in scientific concepts lead to shifts in scientific models that lead to potential non-empirical incompatibilities between them. The solution I propose is to construe the role of thought experiments in science as non-empirical operational tests of models in a hypothetical context of use – as model engineering, rather than a source of evidence. In the second chapter, I fully elaborate this account, demonstrate its features, and compare it to three of the most prominent alternative accounts of thought experiments within the literature. The final two chapters of this dissertation are case studies that use the model-engineering account of thought experiments to interpret thought experiments drawn from the history of physics. In the third chapter, I present the lottery thought experiment from Ludwig Boltzmann’s 1877 paper ‘On the Relationship Between the Second Fundamental Theorem of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium’ and show that my account not only explains the case well, but also explains the absence of this thought experiment from the many subsequent presentations of Boltzmann’s achievement in this paper. In the fourth chapter I present the Rota Aristotelica, a pseudo-Aristotelian mechanical paradox, and through it discuss the intersection of three topics: thought experiments, paradoxes, and historical variability. I show that my account of thought experiments allows that many paradoxes can be interpreted as thought experiments, and that this way of interpreting them can solve outstanding questions about what it means to be the solution of a paradox. My aim in this dissertation is to present a complete picture of an account of thought experiments in science, the way that account fits into contemporary discussions of the epistemology of models in science, and how the account can be used to bring light to historical case studies.

  • Tom Wysocki (2023)

    University of Heidelberg, 2-yr Postdoc

    Philosophy of Science

    Dissertation: Underdeterministic Causation

    Metaphysicians and philosophers of science have recently been analyzing two species of causation: deterministic causes, which guarantee their effects (Hitchcock 2001, Halpern 2016, Weslake 2015, Woodward 2003), and probabilistic causes, which raise the probability of their effects (Fenton-Glynn 2017, Twardy & Korb 2011). Yet, consider: about to jump off the tower, Daedalus realizes he only may escape, but also that if he doesn’t try, he’ll stay imprisoned forever. He jumps and flees, and his jump is a cause of his escape. It’s not a deterministic cause, however, because a successful escape wasn’t guaranteed. It’s not a probabilistic cause either because there needn’t be a fact of the matter how probable his escape was given the jump (maybe the events involved are too unique to be assigned a probability distribution). Rather, his jump is what I call an underdeterministic cause, which elevates the modal status of the effect: the cause made possible what was otherwise impossible. But for the jump, Daedalus wouldn’t have fled, even though the jump didn’t necessitate his escape. No one to date has offered a theory of underdeterministic causes, nor even identified them as a separate causal species. Yet, such causes are frequently studied by the humanistic, natural, and social sciences. If we want to understand what causal claims mean—not only in these disciplines, but in general—we need a theory of underdeterministic causation. My dissertation develops such a theory. Specifically, I build a framework for analyzing underdeterministic causal phenomena (ch. 1). Then, I use it to put forward a semantics of counterfactuals and an algebra of events (ch. 2), a theory of type underdeterministic causation (ch. 3), token causation (ch. 4), an account of the dynamic evolution of context (ch. 5), a superior alternative to the epistemic thesis (ch. 6), and an underdeterministic causal decision theory (ch. 7).

  • Nedah Nemati (2022)

    Columbia University, 3-yr postdoc

    Philosophy of Science

    Dissertation: Lived Experience in the Behavioral Neuroscience of Sleep: Conceptual, Methodological, and Ethical Implications

    Neuroscience is widely thought to shed light on core questions about what it means to be human. The neuroscience literature is also animated by an urgency to render our behaviors knowable through the discipline’s tools and procedures. For example, by studying insect sleep, scientists seek to understand – and in some ways succeed in characterizing – a human process long deemed inaccessible and the opposite of consciousness. Meanwhile, key questions – What is sleep? Where is sleep? Why do humans do it? How can sleep be improved? – resist compact answers and demand novel philosophical insight to link neuroscientific facts to our behavioral experiences. This dissertation applies historical and philosophical approaches to the neuroscientific study of sleep to argue that explaining behavioral experiences relies on lived experience. Examining the study of insect sleep, the first half of the dissertation explores the necessity of these lived experiences in neurobiological studies today, as well as how they have taken shape in the past. The second half of the dissertation then investigates what is lost – philosophically, scientifically, and socially – when the role of lived experience is neglected in empirical investigations.

  • Katie Morrow (2022)

    University of Bielefeld, 2-yr postdoc

    Philosophy of Science

    Dissertation: Explaining and Intervening in Biosocial Science

  • Kathleen Creel (2021)

    Northeastern University (tt)

    Philosophy of Science

    Dissertation: Opening the Black Box: Explanation and Transparency in Machine Learning

    Machine learning algorithms remain highly predictively accurate and powerful yet opaque. They predict and classify without offering human-cognizable reasons for their evaluations. When confronted with the opacity of machine learning in science, what is our epistemic situation and what ought we to do to resolve it? In order to answer this question, I first outline a framework for increasing transparency in complex computational systems such as climate simulations and machine learning on big scientific data. I identify three different ways to attain knowledge about these opaque systems and argue that each fulfills a different explanatory purpose. Second, I argue that analogy with the renormalization group helps us choose the better of two philosophically suggestive explanatory strategies that rely on different diagnoses of the success of deep learning. The coarse-graining strategy suggests that highlighting the parts of the input which most contributed to the output will be misleading without two things: an explanation for why the irrelevant parts are themselves irrelevant, and an explanation for the stability of the output under minor perturbations of the input. Armed with a framework for understanding transparency and an analysis of explanatory strategies appropriate for deep learning, I turn to an application of these frameworks to automated science. Automated science is the use of machine learning to automate hypothesis generation, experimental design, performance of experiment, and evaluation of results. If automated science is to find patterns on its own, then it must be able to solve the Molyneux problem for science, namely to recognize identity across modalities or data streams of different types without the aid of causation or correlation.

  • Mahi Hardalupas (2021)

    Rotman Institute of Philosophy at Western University, 2-yr postdoc

    Philosophy of Science

    Dissertation: How neural is a neural net? Bio-inspired computational models and their impact on the multiple realization debate

    My dissertation introduces a new account of multiple realization called ‘engineered multiple realization’ and applies it to cases of artificial intelligence research in computational neuroscience. Multiple realization has had an illustrious philosophical history, broadly used to describe when a higher-level (psychological) kind can be realized by several different lower-level (physical) kinds. In philosophy of mind, multiple realization is typically seen as arbitrating a debate between metaphysical accounts of the mind, namely functionalism and identity theory. Philosophers of science look to how multiple realization is connected to scientific practice, but many have questioned what it is useful for outside of philosophy of mind. I address this gap by drawing on cases from machine learning and computational neuroscience to show there is a useful form of multiple realization based on engineering practice. My account differs from previous discussions of multiple realization in three ways. First, it reintroduces the link between engineering and multiple realization, which has been mostly neglected in current debates. Second, it is explicitly perspectival, where what counts as multiple realization depends on your perspective. Third, it locates the utility of engineered multiple realization in its ability to support constraint-based reasoning in science. This account provides an answer to concerns about the utility of multiple realization in philosophy of science and explains one way biologically-inspired deep neural networks could provide understanding of the brain. The first half of this dissertation proposes my account of Engineered Multiple Realization and applies it to scientific cases. The second half considers further implications of my account for interpreting Deep Neural Networks as models of the brain, and for mechanistic explanation in computational neuroscience.

  • Jacob Neal (2021)

    Rotman Institute of Philosophy at Western University, 2-yr postdoc

    Philosophy of Science

    Dissertation: Protein Structure, Dynamics, and Function: A Philosophical Account of Representation and Explanation in Structural Biology

    Most philosophical work in molecular biology has historically centered on DNA, genetics, and questions of reduction. My dissertation breaks from this tradition to make proteins the object of philosophical and historical analysis. The recent history of structural biology and protein science offers untapped potential for history and philosophy of science. My ultimate goal for this dissertation therefore is to identify and analyze some of the key historical and philosophical puzzles that arise in these fields. I focus primarily on the shift from the static to the dynamic view of proteins in the late twentieth century. The static view treated proteins as stable, rigid structures, whereas the dynamic view considers proteins to be dynamic molecules in constant motion. In the first half of the dissertation, I develop a historical account of the origins of the static view of proteins. I show how this view led molecular biologists to adopt mechanistic explanation as their preferred strategy for explaining protein function. I then develop an account of the emergence of the dynamic view of proteins, arguing that thermodynamic theory and the theoretical commitments of scientists played an important and often overlooked role in driving this change. In the second half of the dissertation, I analyze the epistemological relationship between the static and dynamic concepts of the protein and argue that conceptual replacement is occurring. I then develop an account of ensemble explanation, a new type of explanation introduced to highlight the role of dynamics in protein function. I show that these explanations fail to fit existing philosophical accounts of explanation, ultimately concluding that my account is required to capture their epistemic structure.

  • William Penn (2021)

    University of Wisconsin, Milwaukee, Lecturer

    Philosophy of Science

    Dissertation: What's Really Going On: Process Realism in Science

    I argue for a novel form of scientific realism, called “pure process realism,” that rejects orthodox ontologies of static objects and structures. The continuity between an experimenter and experimental systems requires that the processes of intervention and observation be of the same ontic type as the observed and inferred features of experimental systems, on pain of ontological incoherence. Therefore, only processes can be inferred to exist within experiments from the epistemology of experiments alone. Additionally, every argument for the existence of a static object or structure within an experiment either fails or fails to rule out that the argument actually supports inferences to a more fundamental process. Firstly, this is because such arguments are either fallacious or inconclusive. Secondly, the history of scientific research, in chemistry and physics in particular, reveals that for each static object or structure posited in the history of science, research eventually redescribes it as a system of processes. For example, the histories of the candle flame, the molecule, and the nucleus provide explicit evidence for this conclusion, and these examples generalize. By induction, all static objects and structures we could posit are no more than systems of processes. Taken together, these arguments show that pure process realism is superior in scope, strength, and epistemic modesty to orthodox forms of realism in the epistemology, ontology, and history of science.

  • Shahin Kaveh (2021)

    University of Pittsburgh, Visiting Scholar

    Philosophy of Science

    Dissertation: A Prescriptivist Account of Physical Theories

    A question of central importance to any philosopher of science is: what is the essential content of a scientific theory? What does a given theory really tell us about the world? Philosophers of science have disagreed on many aspects of the answer to this question, for instance whether the essential content of theories concerns entities, properties, or structures, whether it should be cashed out in terms of sentences or models, and whether one should be a realist or an anti-realist about this content; but philosophers have near-universally agreed on one claim: that theories provide a description of the natural system to which they are applied. Call this the descriptive-ontological view. I argue against the descriptive-ontological view in physics and propose an alternative: the prescriptive-dynamical view. According to the latter, the essential content of a physical theory is to provide prescriptions for interfacing with the natural system. More precisely, physical theories consist of a fixed part and an open-ended part, such that the fixed part is a prescription for constructing the open-ended part from local data, gathered through interaction with the system. The answer to the question of essential content directly determines or at least influences one's response to many other crucial questions such as theoretical equivalence (Chapter 2), theory-world relations (Chapter 3), and realism-antirealism (Chapter 4), which I will subsequently explore. Moreover, as I will argue (Chapter 5), the prescriptive-dynamical account also sheds fresh light on the history of quantum mechanics. In particular, the prescriptive-dynamical account allows us to understand the history of Bohr and Heisenberg’s work in the 1920s as a painstaking realization that instead of telling us what there is, physical theories must tell us what to do.

  • Zina Ward (2020)

    Florida State University (tt)

    Philosophy of Science

    Dissertation: Individual Differences in Cognitive Science: Conceptual and Methodological Issues

    A primary aim of cognitive science is the investigation of psychological and neuroscientific generalizations that hold across subjects. Individual differences between people’s minds and brains are pervasive, however, even among subjects considered neurotypical. In this dissertation, I argue that both scientific practice and our philosophical understanding of science must be updated to reflect the presence of such individual differences. The first half of the dissertation proposes and applies a philosophical account of what it takes to explain variation, while the second half identifies several methods in psychology and neuroscience that demand reform in light of existing individual differences.

  • Evan Pence (2020)

    Philosophy of Science

    Dissertation: Four Paradigms in Comparative Psychology

    This dissertation examines the development of comparative psychology and the evidence, arguments, and epistemological challenges that have characterized its approach to the question of animal rationality. I distinguish between four modes of research that come to prominence at different points in its history (the natural historical, strict behavioral, cognitive, and neurophysiological), analyzing each through a critical episode in its development and the set of claims associated with the approach. The first study concerns the field’s Darwinian origins and its early commitment to the fundamental similarity of human and animal minds. I argue from a close reading of Darwin’s notebooks that the critical break for the nascent field came not from an antecedent endorsement of evolutionary theory, as commonly supposed, but from a set of political and philosophical commitments inherited from the Enlightenment. Next, I show how this approach proved vulnerable to attack from younger and more positivistic psychologists in the twentieth century. I analyze why the Darwinians were accused of employing less than scientific methods, explaining how this fact helped precipitate a shift toward more conservative standards of evidence and strictly lab-based research. From there, I consider how the behavioral tools of this era have left modern ‘cognitive’ research with nagging underdetermination issues. I argue that strictly behavioral methods cannot tell us what the nature of animal thought is but that other methods may. Finally, I consider the state of the rationality debate at present. Drawing on the most recent evidence from systems neuroscience, I argue that animals as distant as rats have the capacity to engage in basic forms of reasoning ventured by Darwin and suspected, but never quite shown, in the cognitive era.

  • Morgan Thompson (2020)

    University of Bielefeld, 4-year PD

    Philosophy of Science

    Dissertation: Robustness in the Life Sciences: Issues in Modeling and Explanation

    My dissertation introduces two new accounts of how robustness can be used to identify epistemically trustworthy claims. Through an analysis of research practices in the life sciences, I focus on two main senses of robustness: robust reasoning in knowledge-generating inferences and explanatory strategies for phenomena that are themselves robust. First, I provide a new account of robustness analysis (called ‘scope robustness analysis’), in which researchers use empirical knowledge to constrain their search for possible models of the system. Scope robustness analysis is useful for scientific discovery and pursuit whereas current accounts of robustness analysis are useful for confirmation. Second, I provide a new account of how researchers use different methods to produce the same result (a research strategy called ‘triangulation’). My account makes two contributions: I criticize a prominent account of the diversity criterion for methods because it analyzes an inferential strategy (i.e., eliminative inference) distinct from the inferential strategy underlying triangulation (i.e., common cause inductive inferences). My account also better explains how triangulation can fail in practice by assessing points of epistemic risk, which I demonstrate by applying it to implicit attitude research. Finally, I contribute to a debate about another sense of robustness: phenomena that occur regardless of changes in their component parts and activities. I argue that some robust phenomena in network neuroscience are not best explained mechanistically by citing their constituent parts (e.g. individual neurons) and their activities, but rather by appealing to features of the connectivity among brain areas.

  • Siska de Baerdemaeker (2020)

    Stockholm University, 2-year PD

    Philosophy of Science

    Dissertation: Cosmology: The Impossible Integration

    My dissertation introduces a new account of how empirical methods and lines of evidence can come to bear on cosmological model-building. Through a careful study of the recent history of cosmology and dark matter research, I explicate a new type of justification for experiments, a 'method-driven logic'. This structure of justification underlies terrestrial experiments researching dark matter and dark energy, but it is more generally prevalent in cases of an underdescribed target. Using a method-driven logic comes with a cost, however. Specifically, interpreting the empirical results of experiments justified through a method-driven logic is non-trivial: negative results warrant secure constraints on the space of possibilities for the target, whereas significant positive results remain ambiguous. While this ambiguity can be resolved through the amalgamation of multiple lines of evidence, this solution is sometimes faced with conflicts between those lines of evidence. I propose that, under specific circumstances, restricting the relevant empirical evidence can be warranted. Finally, I discuss the use of cosmological evidence as a constraint in other subfields of physics. This brings me full circle on the integration of disciplines in cosmology: an integration driven by experimental practice.

  • Trey Boone (2019, Dec)

    Duke University, Visiting Fellow

    Dissertation: Functional Robustness: A New Account of Multiple Realization and its Epistemic Consequences

    In this dissertation, I provide a novel account of multiple realization. My account reframes the concept in terms of causal theories of explanation, in contrast to the original framing in terms of the deductive-nomological theory of explanation. I show that the phenomenon of functional robustness exemplifies multiple realization in this new framework. I then explore the epistemic consequences of functional robustness by examining a number of cases of robustness in neural systems. I argue that systems that exhibit robustness will tend to violate causal faithfulness, thus posing challenges to causal hypothesis testing and causal discovery. I then consider the proposal that robustness undermines modularity—i.e. the ability of causal relationships within a system to be independently disrupted. I argue that it does not and instead propose that robustness is often due to feedback control driving systems toward particular outcomes. As a result, robustness will attend failures of acyclicity, not failures of modularity. I conclude by contrasting these epistemic consequences of functional robustness with those traditionally associated with multiple realization.

  • Haixin Dang (2019)

    Leeds University, 4-year PD

    Philosophy of Science

    Dissertation: Epistemology of Scientific Collaborations

    This dissertation primarily concerns how scientific collaborations function, how scientists know together, and how we ought to think about collective justification and collective responsibility in light of scientific practice. When a group of 5,000 physicists announces that “The mass of the Higgs boson is 126 GeV,” who is responsible for this discovery? Who should be held accountable if the claim turns out to be false or otherwise faulty? My account of collective responsibility seeks to assign responsibility to individual agents, while recognizing that it is the relationships in which individuals stand to each other and to the group which make them the appropriate targets for judgments of responsibility. However, in order to have a decomposition of collective responsibility, we first need to clarify the notion of epistemic responsibility. Epistemic responsibility exists as a vague concept at the intersection between epistemology and ethics. I clarify this concept and show how it can and should work in practice. I argue that epistemic responsibility should be distributed among members of a group when epistemic labor is distributed. My account of epistemic responsibility extends recent work in metaethics on moral responsibility. I decompose the concept into three distinct senses: attributability, answerability, and accountability. An epistemic agent can be responsible in one, two, or all three senses of responsibility. My account recognizes that agents in a collaboration may not all be responsible in the same way or to the same degree. Agents are epistemically responsible depending on their degree of answerability and in virtue of their epistemic position within the group. An important implication of my analysis of collective responsibility is that collective justification does not depend on members always coming to consensus on the justifiers of a group’s conclusions. Existing accounts of collective justification take consensus as the ideal, such that disagreement or heterogeneity among individuals is taken as a negative feature which should be eliminated. I argue that not all disagreement is bad. If the disagreement is itself justified, then disagreement is actually of epistemic value and not a negative feature.

  • David Colaco (2019)

    Mississippi State University, PD

    Philosophy of Science

    Dissertation: An Investigation of Scientific Phenomena

    To determine how things work, researchers must first determine what things occur. Such an idea seems simple, but it highlights a fundamental aspect of science: endeavors to theorize, explain, model, or control often result from first determining and adequately characterizing the targets of these practices. This dissertation is an investigation of how researchers determine one important kind of target: scientific phenomena. In doing so, I analyze how characterizations of these phenomena are formulated, defended, revised, and rejected in light of empirical research. I focus on three questions. First, what do characterizations of scientific phenomena represent? To answer this, I investigate what it means to characterize a phenomenon, as opposed to describing the results of individual studies. Second, how do researchers develop these characterizations? This question relates to the logic of discovery: I examine how researchers use existing theories and methods to explore systems, search for phenomena, and develop representations of them. Third, how do researchers evaluate these characterizations? This question relates to the logic of justification: I investigate how empirical findings serve as defeasible evidence for the characterizations of phenomena, and how, in light of that evidence, we should accept, suspend judgment about, or reject them.

  • Jeff Sykora (2019)

    Pursuing Medical Training

    Philosophy of Science

    Dissertation: Fluid Mechanics, Models, and Realism: Philosophy at the Boundaries of Fluid Systems

    Philosophy of science has long drawn conclusions about the relationships between laws, models, and theories from studies of physics. However, many canonical accounts of the epistemic roles of laws and the nature of theories derived their scientific content from either schematized or exotic physical theories. Neither Theory-T frameworks nor investigations of the interpretation of quantum mechanics and relativity reflect the majority of physical theories in use. More recently, philosophers of physics have begun developing accounts based in versions of classical mechanics that are both homelier than the exotic physical theories and more mathematically rigorous than the Theory-T frameworks of the earlier canon. Some, including Morrison (1999, 2015), Rueger (2005), and Wilson (2017), have turned to the study of fluid flows as a way to unpack the complex relationships among laws, models, theories, and their implications for scientific realism. One important result of this work is a resurgence of interest in the relationship between the differential equations that express mechanical laws and the boundary conditions that constrain the solutions to those equations. However, many of these accounts miss a crucial set of distinctions between the roles of mathematical boundary conditions modeling physical systems, and the roles of physical conditions at the boundary of the modeled system. In light of this systematic oversight, in this dissertation I show that there is a difference between boundary conditions and conditions at the boundary. I use that distinction to investigate the roles of boundary conditions in the models of fluid mechanics. I argue that boundary conditions are in some cases more lawlike than previously supposed, and that they can play unique roles in scientific explanations. Further, I show that boundaries are inherently mesoscale features of physical systems, which provide explanations that cannot be inferred from microscale dynamics alone. Finally, I argue that an examination of the domain of application of boundary conditions supports a form of realism.

  • Nora Boyd (2018)

    Siena College (tt)

    Philosophy of Science

    Dissertation: Scientific Progress at the Boundaries of Experience

    My dissertation introduces a new empiricist philosophy of science built on a novel characterization of empirical evidence and an analysis of empirical adequacy appropriate to it. I analyze historical and contemporary cases, primarily though not exclusively from the space sciences, attending carefully to the intricate practices involved in data collection and processing. I argue that the epistemic utility of empirical results as constraints on theorizing depends on the conditions of their provenance and that therefore information about those conditions ought to be included in our conception of empirical evidence. I articulate the conditions requisite for adjudicating the empirical adequacy of a theory with respect to some evidence and argue that much more background information is required for this adjudication than has been widely appreciated. Although my account is strictly anti-realist, this project is a defense of a sense of epistemic progress in science. Empirical evidence, as I have defined it, genuinely accumulates over the history of human inquiry. We learn that whatever theoretical framework we propose for understanding what the world is like will have to be consistent with this growing evidential corpus.

  • Aaron Novick (2018)

    Purdue University (tt)

    Philosophy of Science

    Dissertation: The Prodigal Genetics Returns: Integrating Gene Regulatory Network Theory Into Evolutionary Theory

    The aim of this dissertation is to show how gene regulatory network (GRN) theory can be integrated into evolutionary theory. GRN theory, which lies at the core of evolutionary-developmental biology (evo-devo), concerns the role of gene regulation in driving developmental processes, covering both how these networks function and how they evolve. Evolutionary and developmental biology, however, have long had an uneasy relationship. Developmental biology played little role in the establishment of a genetic theory of evolution during the modern synthesis of the early to mid 20th century. As a result, the body of evolutionary theory that descends from the synthesis period largely lacks obvious loci for integrating the information provided by GRN theory. Indeed, the relationship between the two has commonly been perceived, by both scientists and philosophers, as one of conflict. By combining historical and philosophical analysis, I consider four sources of tension between evo-devo and synthesis-derived evolutionary theorizing in order to show how those tensions can be resolved. I present a picture of the conceptual foundations of evo-devo that reveals the potential for integrating it with existing evolutionary theorizing. In chapter one, I argue that a major historical source of tension between evolutionary and developmental biology was the debates, in the first half of the 20th century, about the possibility of explaining development in terms of gene action. I show that the successes of GRN theory put these worries to bed. In chapter two, I argue that, rather than conceive of evo-devo as typological, we should see it as resting on Cuvieran functionalism. I argue that Cuvieran functionalism complements the Darwinian functionalism of the modern synthesis. In chapter three, I present a picture of the fine structure of the concept ‘homology’. This picture shows how accounts of homology that have traditionally been taken to conflict are in fact compatible and complementary. In chapter four, I analyze the nature of structure/function disputes in terms of types of answers to contrastive why-questions. On the basis of this analysis, I show how the structure of evolutionary theory requires both structuralist and functionalist approaches.

  • Marina Baldissera Pacchetti (2018)

    University of Leeds (research fellow)

    Philosophy of Science, Philosophy of Climate Science, Environmental Philosophy

    Dissertation: Spatiotemporal Scales in Scientific Modeling: Identifying Target Systems

    Current debates about epistemic issues in modeling presuppose that the model in question uncontroversially represents a particular target system. A standard line of argument is that we can gain knowledge of a target system simply by noting what aspects of the target are veridically represented in the model. But this misses epistemically important aspects of modeling. I examine how scientists identify certain phenomena as target systems in their models. Building on the distinction between data and phenomena introduced by Bogen and Woodward, I analyze how scientists identify target systems from data and from basic theoretical principles. I show that there are two crucial empirical assumptions that are involved in identifying phenomena. These assumptions concern the conditions under which phenomena can be indexed to a particular length or time scale and the conditions under which one can treat phenomena occurring at different length or time scales as distinct. The role of these assumptions in modeling provides the basis for a new argument that shows how, in many cases, idealizations and abstractions in models are essential for providing knowledge about the world insofar as they isolate relevant components of a phenomenon from irrelevant ones. My analysis of the identification of phenomena also shows that structural uncertainty arises in models when the scale of a phenomenon of interest is not properly identified. This clarification promises to improve the communication of the limitations of current climate models to policy makers.

  • Michael Miller (2017)

    University of Toronto (tt)

    Philosophy of Physics, Philosophy of Science

    Dissertation: The structure and interpretation of quantum field theory

    Quantum field theory accurately describes the world on the finest scales to which we have empirical access. There has been significant disagreement, however, about which mathematical structures ought to be taken as constitutive of the theory, and thus over which structures should serve as the basis for its interpretation. Perturbative methods allow for successful empirical prediction but require mathematical manipulations that are at odds with the canonical approach to interpreting physical theories that has been passed down from the logical positivists. Axiomatic characterizations of the theory, on the other hand, have not been shown to admit empirically interesting models. This dissertation shows how to understand the empirical success of quantum field theory by reconsidering widely held commitments about how physical meaning accrues to mathematical structure.

  • Joseph B. McCaffrey (2016)

    Washington University in St. Louis (Postdoctoral Research Fellow) 

    Philosophy of Cognitive Science, General Philosophy of Science

    Dissertation: Mental function and cerebral cartography: Functional localization in fMRI research

    My dissertation examines the relationship between human brain mapping and cognitive theorizing in neuroimaging (fMRI) research. Many researchers advocate using fMRI to test psychological hypotheses; others argue that brain scans cannot support or disconfirm cognitive theories. I argue that fMRI can inform psychology given assumptions about how brain structure relates to function. My diagnosis is that human brain mapping is radically changing due to new techniques (e.g., “resting state” fMRI) and theoretical approaches (e.g., network mapping). These shifts undermine the assumptions that traditionally make fMRI results speak to cognitive theories (e.g., “each region performs a unique function”). I conclude that fMRI research should focus its efforts on developing new bridging assumptions, rather than testing cognitive theories.

  • Lauren Ross (2016)

    UC Irvine (deferred for post-doc at the University of Calgary)

    Philosophy of Biology, Philosophy of Neuroscience, General Philosophy of Science

    Dissertation: Explanation in Contexts of Causal Complexity

    My dissertation examines common types of causal complexity in the biological sciences, the challenges they pose for explanation, and how scientists overcome these challenges. I provide a novel distinction between two types of causal complexity and I analyze explanatory patterns that arise in these contexts. My analysis reveals how explanation in the biological sciences is more diverse than mainstream accounts suggest, which view most or all explanations in this domain as mechanistic. I examine explanations that appeal to causal pathways, dynamical models, and monocausal factors and I show how these explanations are guided by considerations that have been overlooked in the extant literature. My project explores connections between these explanatory patterns and other topics of interest in philosophy and general philosophy of science, including: reduction, multiple realizability, causal selection, and the role of pragmatics in explanation.

  • Elizabeth O'Neill (2015)

    Eindhoven University of Technology (Assistant Professor)

    Epistemology; Metaethics; Philosophy of Cognitive Science; Philosophy of Biology

    Dissertation: The Epistemological Implications of the Causes of Moral Beliefs

    This dissertation investigates what the causes of moral beliefs indicate about the epistemic status of those beliefs. I argue that information about the causes of moral beliefs can tell us whether those beliefs track the truth, and that truth tracking is the primary epistemic property that should concern us in the moral domain. I formulate three novel debunking arguments that employ information about the causes of moral beliefs to support conclusions about truth tracking while minimizing normative assumptions. These arguments lead to the conclusion that harm-related moral beliefs that hinge on sympathy, moral beliefs influenced by disgust, certain political beliefs, and beliefs about punishment that are subject to the influence of extraneous emotions do not track moral truth. For each of these types of moral beliefs, information about the proximal causes of the moral belief supports epistemic conclusions. I compare the value of information about proximal and distal causes for assessing epistemic status: I argue that proximal causes are a superior source of information, but under certain conditions, we should take information about distal causes into account. In the case of beliefs about the fair distribution of resources, information about their proximal causes does not shed light on whether they track truth, but information about their distal, evolutionary origins tells us that such beliefs do not track the truth. Thus, using empirical information about the causes of moral beliefs, I offer selective debunking arguments for five types of moral beliefs.

  • Greg Gandenberger (2015)

    University of Bristol (Postdoctoral Fellow)

    Philosophy of Science, Epistemology, Philosophy of Physics

    Dissertation: Moving Beyond 'Theory T': The Case of Quantum Field Theory

    A standard approach towards interpreting physical theories proceeds by first identifying the theory with a set of mathematical objects, where such objects are defined according to mathematicians' standards of rigor. In making this identification, philosophers rule out the relevance of many inferential methods that physicists use, as these often do not meet mathematicians' standards of rigor. Philosophers thus sanitize physical theories of all mathematically messy or ambiguous parts before interpreting them.

    My dissertation argues against this sanitized approach towards interpreting theories using the example of quantum field theory (QFT). When we look at the details of QFT, we find that the mathematical objects it requires differ according to the specific systems the theory is being applied to in ways that advocates of the sanitized approach do not anticipate. Furthermore, the mathematical objects required for successful application are still being developed in some applicational contexts, so it would be unwise to determine in advance which objects constitute the theory. During this ongoing developmental process, physicists interpret the mathematics using strategies that violate the standards of pure mathematics. In contrast to the sanitized approach, these strategies are more sensitive to the ways in which the mathematics required for the relevant contexts is still under development. I argue that these strategies are not merely instrumental. They suggest alternative approaches to interpretation that philosophers should take into account.

  • Julia Bursten (2015)

    University of Kentucky (Assistant Professor)

    Philosophy of Science, Philosophy of Chemistry, Philosophy of Physics

    Dissertation: Surfaces, Scales, and Synthesis: Scientific Reasoning at the Nanoscale

    Philosophers interested in scientific methodology have focused largely on physics, biology, and cognitive science. They have paid considerably less attention to sciences such as chemistry and nanoscience, where not only are the subjects distinct, but the very aims differ: chemistry and nanoscience center around synthesis. Methods associated with synthesis do not fit well with the aims of description, explanation, and prediction that dominate philosophers' paradigm sciences. In order to synthesize a substance or material, scientists need different kinds of information than they need to predict, explain, or describe. Consequently, they need different kinds of models and theories.

    Specifically, chemists need additional models of how reactions will proceed. In practice, this means chemists must model surface structure and behavior, because reactions occur on the surfaces of materials. Physics, and by extension much of philosophy of science, ignores the structure and behavior of surfaces, modeling surfaces only as “boundary conditions” with virtually no influence on material behavior. Such boundary conditions are not seen as part of the physical laws that govern material behavior, so little consideration has been given to their roles in improving scientists' understanding of materials and aiding synthesis. But especially for theories that are used in synthesis, such neglect can lead to catastrophic modeling failures. In fact, as one moves down toward the nanoscale, the very concept of a material surface changes, with the consequence that nanomaterials behave differently than macroscopic materials made up of the same elements. They conduct electricity differently, they appear differently colored, and they can play different roles in chemical reactions. This dissertation develops new philosophical tools to deal with these changes and give an account of theory and model use in the synthetic sciences. Particularly, it addresses the question of how models of materials at the nanoscale fit together with models of those very same materials at scales many orders of magnitude larger. To answer this and related questions, strict attention needs to be paid to the ways boundaries, surfaces, concepts, models, and even laws change as scales change.

  • Aleta Quinn (2015)

    Caltech (Postdoctoral Instructor in Philosophy of Science)

    History and Philosophy of Biology, Values and Science

    Dissertation: Biological Systematics and Evolutionary Theory

    In this dissertation I examine the role of evolutionary theory in systematics (the science of discovering and classifying biodiversity). Following Darwin's revolution, systematists have aimed to reconstruct the past. Understanding what it means that systematists reconstruct the past requires clarifying the history of systematics and of some important episodes in philosophy of science. My dissertation analyzes a common but inadequate view about what systematics qua historical science is up to by tracing the inadequate view to its origins in J.S. Mill. I show that critiques advanced by Mill's contemporary, William Whewell, identify problems that recurred in twentieth century philosophical work on the historical sciences. I develop an alternative and more complete account of systematics as relying on inference to the best explanation. My account answers two challenges that have been pressed against philosophical attempts to analyze scientific reasoning as inference to the best explanation.

    First, I analyze the inadequate view: that scientists use causal theories to hypothesize what past chains of events must have been, and then form historical hypotheses which identify segments of a network of past events and causal transactions between events. This model assumes that scientists can identify events in the world by reference to neatly delineated properties, and that discovering causal laws is simply a matter of testing what regularities hold between events so delineated. Twentieth century philosophers of science tacitly adopted this assumption in otherwise distinct models of explanation. As Whewell had pointed out in his critique of Mill, the problem with this assumption is that the delineation of events via properties is itself the hard part of science. Whewell's philosophy of science captures the key point that different scientific theories identify different types of properties and events. Distinct scientific theories may not agree on how to individuate either properties or events. The case of systematics illustrates this dramatically.

    Drawing on Whewell's philosophy of science, and my work as a member of a team of systematists revising the genus Bassaricyon, I show how historical scientists avoid the problems of the inadequate view. Whewell's analysis of consilience in the historical sciences and in biological classification provides a better foothold for understanding systematics. Whewell's consilience describes the fit between a single hypothesis and evidence drawn from distinct scientific theories that are organized under wholly different conceptual structures. This fit does not require agreement about causal ontology in the way required by the inadequate view that I have critiqued.

    My analysis clarifies the significance of two revolutions in systematics. Whereas pre-Darwinian systematists used consilience as an evidentiary criterion without explicit justification, after Darwin's revolution consilience can be understood as a form of inference to the best explanation. I show that the adoption of Hennig's phylogenetic systematics, a twentieth century revolution in systematics, formalized methodological principles at the core of Whewell's philosophy of the historical sciences. Drawing on the philosophical and historical resources developed in the dissertation, I conclude by showing how two challenges that are frequently pressed against inference to the best explanation are met in the context of phylogenetic systematics.

  • Kathryn Tabb (2015)

    Columbia University (Assistant Professor)

    Early Modern Philosophy, Philosophy of Psychiatry, Biomedical Ethics

    Dissertation: Mad Errors: Associated Ideas, Enthusiasm, and Personal Identity in Locke

    Associationism — in its most basic formulation, the view that all cognition begins with the compounding of simple sensations into chains of ideas — is frequently held to have been introduced by John Locke in 1700, expanded on by David Hartley and David Hume, and to have come into its own in the nineteenth century with psychologists like James Mill and Alexander Bain. The aim of this dissertation is to argue that Locke is not an associationist, and that he has been cast on the wrong side of a fundamental divide over the role of the understanding in the connection of ideas. I show that Locke coins the term “association of ideas” not to launch a new architectonic for psychology based on acquired habit, but to diagnose what he sees as the biggest obstacle to right understanding: madness. Hume's positive embrace of association has often been read back onto Locke, resulting in the easy conflation of the two thinkers under the banner of empiricism. In championing the powers of active perception over the automaticity of association, however, Locke's psychology stands apart from later empiricist philosophies of mind.

    Along with challenging Locke's traditional characterization as an associationist, my project explores the ramifications of Locke's concept of association for his broader commitments. Locke believes that natural philosophy is possible due to the ability of men and women to perceive the truth or falsity of propositions, or, failing this, to make probabilistic judgments about their truth-value. The capacities that allow for these mental acts, reason and judgment (respectively), are gifts from God that allow us to flourish in our environment, despite our mediocre mental endowments. I argue that associated ideas show that these capacities sometimes fail us, compromising Locke's intellectualist picture. False knowledge is possible in Locke's system, insofar as associated ideas generate propositions that are perceived to be true but which are in fact false. I call such propositions “mad errors,” and describe their profound ramifications for Locke's ethics of belief and his theory of personal identity.

  • Elay Shech (2015)

    Auburn University (Assistant Professor)

    Philosophy of Physics, Philosophy of Science, Ethics

    Dissertation: Assume a Spherical Cow: Studies on Representation and Idealizations

    My dissertation concerns the philosophical underpinnings of representation and idealization in science. I begin by looking at the philosophical debate revolving around phase transitions and use it as a foil to bring out what I take to be most interesting about phase transitions, namely, the manner by which they illustrate the problem of essential idealizations.

    I go on to solve the problem in several steps. First, I conduct an interdisciplinary comparative study of different types of representations (e.g., mental, linguistic, pictorial) and consequently promote a content-based account of scientific representation intended to accommodate the practice of idealization and misrepresentation. I then critically assess the literature on idealizations in science in order to identify how appeals to idealizations may be justified, and implement such techniques in two case studies that merit special attention: the Aharonov-Bohm effect and the quantum Hall effects. I proceed to offer a characterization of essential idealizations meant to alleviate the woes associated with said problem, and argue that particular types of idealizations, dubbed pathological idealizations, ought to be dispensed with. My motto is that idealizations are essential to explanation and representation, as well as to methodology and pedagogy, but they essentially misrepresent. Implications for the debate on platonism about mathematical objects are outlined.

  • Karen Zwier (2014)

    Drake University (Adjunct Professor)

    Philosophy of Science, History and Philosophy of Physical Science, Science and Religion

    Dissertation: Interventionist Causation in Physical Science

    The current consensus view of causation in physics, as commonly held by scientists and philosophers, has several serious problems. It fails to provide an epistemology for the causal knowledge that it claims physics to possess; it is inapplicable in a prominent area of physics (classical thermodynamics); and it is difficult to reconcile with our everyday use of causal concepts and claims.

    In this dissertation, I use historical examples and philosophical arguments to show that the interventionist account of causation constitutes a promising alternative for a “physically respectable” account of causation. The interventionist account explicates important parts of the experimental practice of physics and important aspects of the ways in which physical theory is used and applied. Moreover, the interventionist account succeeds where the consensus view of causation in physics fails.

    I argue that the interventionist account provides an epistemology of causal knowledge in physics that is rooted in experiment. On the interventionist view, there is a close link between experiment and the testing of causal claims. I give several examples of experiments from the early history of thermodynamics that scientists used in interventionist-type arguments. I also argue that interventionist claims made in the context of a physical theory can be epistemically justified by reference to the experimental interventions and observations that serve as evidence for the theory.

    I then show that the interventionist account of causation is well-suited to the patterns of reasoning that are intrinsic to thermodynamic theory. I argue that interventionist reasoning constitutes the structural foundation of thermodynamic theory, and that thermodynamic theory can provide clear answers to meaningful questions about whether or not a certain variable is a cause of another in a given context.

    Finally, I argue that the interventionist account offers the prospect of a unification of “physically respectable” causation and our everyday notion of causation. I conclude the dissertation by sketching an anti-foundationalist unification of causation, according to which causal reasoning occurs in the same manner in physics as it does in other branches of life and scientific research.

  • Eric Hatleback (2014)

    University of Pittsburgh (Research Associate Professor)

    Philosophy of Cosmology, Philosophy of Science

    Dissertation: Chimera of the Cosmos

    Multiverse cosmology exhibits unique epistemic problems because it posits the existence of universes inaccessible from our own. Since empirical investigation is not possible, philosophical investigation takes a prominent role. The inaccessibility of the other universes causes argumentation for the multiverse hypothesis to be wholly dependent upon typicality assumptions that relate our observed universe to the unobserved universes. The necessary reliance on typicality assumptions results in the Multiverse Circularity Problem: the multiverse hypothesis is justified only through invoking typicality assumptions, but typicality assumptions are justified only through invoking the multiverse hypothesis. The unavoidability of the circularity is established through argumentation for each of the two conjuncts that comprise it.

    Historical investigation proves the first conjunct of the Multiverse Circularity Problem. Detailed study of the now-neglected tradition of multiverse thought shows that philosophers and scientists have postulated the multiverse hypothesis with regularity, under different names, since antiquity. The corpus of argumentation for the existence of the multiverse breaks cleanly into three distinct argument schemas: implication from physics, induction, and explanation. Each of the three argument schemas is shown to be fully reliant upon unsupported typicality assumptions. This demonstrates that the multiverse hypothesis is justified only through invoking typicality assumptions.

    Philosophical assessment of cosmological induction establishes the second conjunct of the Multiverse Circularity Problem. Independent justification for typicality assumptions is not forthcoming. The obvious candidate, enumerative induction, fails: Hume's attack against inference through time is extended to inference through space. This move undercuts external justification for typicality assumptions, such as the Cosmological Principle, which cosmologists implement to justify induction. Removing the legitimacy of enumerative induction shows that typicality assumptions are justified only through invoking the multiverse hypothesis, thereby establishing the Multiverse Circularity Problem.

  • Yoichi Ishida (2014)

    Ohio University (Assistant Professor)

    Philosophy of Science, Philosophy of Biology

    Dissertation: Models in Scientific Practice

    This dissertation presents an account of the practice of modeling in science in which scientists' perceptual and bodily interactions with external representations take center stage. I argue that modeling is primarily a practice of constructing, manipulating, and analyzing external representations in service of cognitive and epistemic aims of research, and show that this account better captures important aspects of the practice of modeling than accounts currently popular in philosophy of science.

    Philosophical accounts of the practice of modeling classify models according to the categories of abstract and concrete entities developed in metaphysics. I argue that this type of account obscures the practice of modeling. In particular, using the analysis of the Lotka-Volterra model as an example, I argue that understanding mathematical models as abstract entities---non-spatiotemporally located, imperceptible entities---obscures the fact that the analysis of the Lotka-Volterra model relies primarily on visual perception of external representations, especially hand- or computer-generated graphs. Instead, I suggest that we apply the concepts of internal and external representations, developed in cognitive science, to models, including mathematical models.

    I then present two case studies that illustrate different aspects of modeling, understood as a practice of constructing, manipulating, and analyzing external representations. First, using Sewall Wright's long-term research on isolation by distance, I articulate the relationship between the uses of a model, the particular aims of research, and the criteria of success relevant to a given use of the model. I argue that uses of the same model can shift over the course of scientists' research in response to shifts in aim and that criteria of success for one use of a model can be different from those for another use of the same model. Second, I argue that in successful scientific research, a scientist uses a model according to the methodological principles of realism and instrumentalism despite the tension that they create among the scientist's uses of the model over time. This thesis is supported by a detailed analysis of successful scientific research done by Seymour Benzer in the 1950s and 60s.

  • Keith Bemer (2014)

    Winchester Thurston School (science teacher)

    Classics, Philosophy, and Ancient Science
    Ancient Philosophy, History and Philosophy of Science, Early Modern Philosophy

    Dissertation: A Philosophical Examination of Aristotle's Historia Animalium

    In this dissertation I address two related questions pertaining to Aristotle's philosophy of science and his biology and zoology. They are: (1) what are the goals of Aristotle's Historia Animalium (HA) and how does the treatise achieve these goals? And, more generally, (2) what is the role of a historia in Aristotle's philosophy of science?

    Together these questions touch upon a long-recognized problem in the interpretation of Aristotle's philosophical and scientific works: the relationship between Aristotle's philosophy of science and his actual scientific practice. I pursue this broad question by focusing my attention on Aristotle's historia of animals and the related discussions of scientific investigation and demonstration, primarily in the Analytics. I argue that the term historia was used by Aristotle with a range of meanings that center around the notions of investigation and inquiry (or the reports thereof), and, in some instances, emphasize the early stages of inquiry, dedicated to establishing and organizing facts prior to causal explanation. I proceed by considering the theoretical background of a historia provided by the Analytics and Parts of Animals, before turning to a detailed analysis of select passages from the HA itself. I argue that the Analytics provides the framework for a method of correlating facts regarding a field of study that acts as a guide to further causal research, but that establishing the actual causal relations that hold within a field depends upon additional considerations that are largely domain-specific. I turn to the HA in order to illustrate this method of correlation, noting examples where the correlation of features appears to prefigure causal explanations. I conclude by considering the relationship between Aristotle's notions of historia and experience (empeiria), and argue that a historia provides the sort of comprehensive, factual knowledge of a domain of study that Aristotle often notes is necessary for coming to recognize causal relations, and thus coming to have scientific knowledge (epistêmê).

  • Marcus Adams (2014)

    University at Albany, SUNY (Assistant Professor)

    Early Modern Philosophy, History & Philosophy of Science

    Dissertation: Mechanical Epistemology and Mixed Mathematics: Descartes's Problems and Hobbes's Unity

    My dissertation answers the following question: How is Hobbes's politics related to his physics and metaphysics? I argue that Hobbes does in fact provide a unified systematic philosophy, and I contrast this unity with problems in Descartes's epistemology and optics.

    To make this argument, I carve a middle way between the two extremes in the literature by situating Hobbes within mechanical philosophy and 17th century mathematics. I use three concepts to clarify Hobbes's project: mechanical explanation, maker's knowledge, and mixed mathematical science. First, I show that for Hobbes a mechanical explanation involves tracing the motions of bodies at various levels of complexity, from simple points in geometry to human bodies in the state of nature and to commonwealth bodies. This view provides Hobbes with resources for a naturalized epistemology, which I show is the point at issue in Hobbes's Objections to Descartes's Meditations. Second, Hobbes says that we have "maker's knowledge" in geometry and politics. I show that "maker's knowledge" is Hobbes's empiricist answer to (1) how we have causal knowledge in politics and mathematics by constructing and (2) how mathematics is applicable to the world. Finally, I show that the mixed mathematical sciences, e.g., optics, were Hobbes's inspiration for a unified philosophical system. I argue that the physics in De corpore, the optics in De homine, and the politics in Leviathan are treated by Hobbes as mixed mathematical sciences, which provides a new way to see Hobbes as a consistent and non-reductive naturalist. Viewed in this light, the Leviathan turns out to have more methodological similarities to optics than to geometry.

  • Thomas Pashby (2014)

    University of Southern California (Postdoc)

    Philosophy of Physics, Philosophy of Science

    Dissertation: Time and the Foundations of Quantum Mechanics

    This dissertation aims at understanding, and challenging, the common view that "time is a parameter in quantum theory and not an observable." I argue that — like position in space — location in time of an event is an observable quantity.

    The celebrated argument of Wolfgang Pauli against the inclusion of time as an observable of the theory ('Pauli's Theorem') has been seen as a demonstration that time may only enter quantum mechanics as a classical parameter. Against this orthodoxy I argue that there are good reasons to expect certain kinds of 'time observables' to find a representation within quantum theory, including clock operators (which provide the means to measure the passage of time) and event time operators, which provide predictions for the time at which a particular event occurs, such as the appearance of a dot on a luminescent screen. I contend that these time operators deserve full status as observables of the theory, and on reflection provide a uniquely compelling reason to expand the set of observables allowed by the standard formalism of quantum mechanics. In addition, I provide a novel association of event time operators with conditional probabilities, and propose a temporally extended form of quantum theory to better accommodate the time of an event as an observable quantity. This leads to a proposal to interpret quantum theory within an event ontology, inspired by Bertrand Russell's Analysis of Matter. On this basis I mount a defense of Russell's relational theory of time against a recent attack.

  • Thomas V. Cunningham (2013)

    Medical Bioethics Director, Kaiser Permanente West Los Angeles

    Philosophy of Biology and Medicine, Applied Ethics, Philosophy of Science

    Dissertation: Socializing Medical Practice: A Normative Model of Medical Decision-Making

    This dissertation is about the way people should and do make medical choices. It defends the claim that medical decisions should be made by groups of persons acting together, not by individuals acting alone.

    I begin by arguing that prominent models of medical decision-making are problematic, because they fail to be both descriptively and normatively adequate, which I argue any account of choice in medicine should be. The remainder of the work articulates a model that meets these two criteria. First, I justify an account of the uniquely medical context my model is designed to apply to by distinguishing two basic aims of medicine: (i) to fully understand patients in personal and scientific terms; and, (ii) to intervene upon patients' health states in ways that are consistent with this understanding. Then, I take two chapters to develop a descriptive account of medical decision-making. In them, I introduce a close study of the case of hereditary breast and ovarian cancer decision-making, which I argue shows choices are made by groups of interacting persons over extended spatiotemporal and social dimensions. So, I appeal to the theory of distributed cognition to describe this collection of persons processing information together when making choices. Having defended a descriptive account of medical choice, I then take two more chapters to propose a normative account, based on a modified version of Rawlsian reflective equilibrium that I call medical reflective equilibrium. On my account, medical choices should be made by searching for, selecting, and integrating the right kind and amount of information, which requires considering sufficient information to meet the basic aims of medicine. Given that the basic aims are defined in terms of an epistemic distinction between subjective and objective knowledge, I argue that performing the medical reflective equilibrium procedure adequately requires multiple participants in decision-making. Consequently, I conclude that medical choices are and should be social.

  • Balázs Gyenis (2013)

    Hungarian Academy of Sciences (Research Fellow), London School of Economics (Research Fellow)

    Philosophy of Physics, Philosophy of Science, Probabilistic Causality

    Dissertation: Well posedness and physical possibility

    There is a sentiment shared among physicists that well posedness is a necessary condition for physical possibility. The arguments usually offered for well posedness have an epistemic flavor and thus they fall short of establishing the metaphysical claim that lack of well posedness implies physical impossibility. My dissertation analyzes the relationship of well posedness to prediction and confirmation, as well as the notion of physical possibility, and devises three novel and independent argumentative strategies that may succeed where the usual epistemic arguments fail.

  • Peter Distelzweig (2013)

    University of St. Thomas, Minnesota (Assistant Professor)

    Early Modern Philosophy, Ancient Philosophy, History and Philosophy of Science

    Dissertation: Descartes' Teleomechanics in Medical Context: Approaches to Integrating Mechanism and Teleology in Hieronymus Fabricius ab Aquapendente, William Harvey and René Descartes

    In this dissertation, I examine the relation between mechanism and teleology in Descartes's physiology, placing his views in the wider medical context.

    There, as I show, we find a very different, Galeno-Aristotelian approach to integrating mechanics and teleology in the work of anatomists Hieronymus Fabricius ab Aquapendente and his more famous student, William Harvey. I provide an interpretation of teleology and mechanism in Descartes by exploring the historical and conceptual relationship between his approach and that exhibited by these anatomists. First, I show that Fabricius and Harvey develop creative, teleological, and non-reductive approaches to mechanizing the animal precisely by developing Aristotelian and Galenic resources. They propose that mathematical mechanics, understood as an Aristotelian subordinate science, should be employed to articulate the way the functions of the locomotive organs explain (as final causes) certain features of their anatomy, rendering them hypothetically necessary. They articulate these explanations using the Galenic concepts of actio and usus. Employing the resources developed in my analysis of Fabricius and Harvey, I then provide a new interpretation of the relation of mechanism and teleology in Descartes and of its significance. Although he explicitly rejects final causes in natural philosophy, Descartes still appeals in physiology to apparently teleological concepts like function and usage. By focusing on the medical context of these concepts, I show that Descartes intends to and primarily does employ these terms in mechanical explanations meant to replace the metaphysically more extravagant but still material-efficient (not final causal) explanations present in the medical tradition. I argue, further, that Descartes at times does in fact employ final causal explanations like those in Fabricius's and Harvey's work and that he is hard-pressed to ground these explanations while still rejecting both divine purposes and non-mechanical principles in natural philosophy.

  • Catherine Stinson (2013)

    Western University (Postdoc)

    History & Philosophy of Neuroscience & Psychology

    Dissertation: Cognitive Mechanisms and Computational Models: Explanation in Cognitive Neuroscience

    Cognitive Neuroscience seeks to integrate cognitive psychology and neuroscience. I critique existing analyses of this integration project, and offer my own account of how it ought to be understood given the practices of researchers in these fields.

    A recent proposal suggests that integration between cognitive psychology and neuroscience can be achieved `seamlessly' via mechanistic explanation. Cognitive models are elliptical mechanism sketches, according to this proposal. This proposal glosses over several difficulties concerning the practice of cognitive psychology and the nature of cognitive models, however. Although psychology's information-processing models superficially resemble mechanism sketches, they in fact systematically include and exclude different kinds of information. I distinguish two kinds of information-processing model, neither of which specifies the entities and activities characteristic of mechanistic models, even sketchily. Furthermore, theory development in psychology does not involve the filling in of these missing details, but rather refinement of the sorts of models they start out as. I contrast the development of psychology's attention filter models with the development of neurobiology's models of sodium channel filtering. I argue that extending the account of mechanisms to include what I define as generic mechanisms provides a more promising route towards integration. Generic mechanisms are the in-the-world counterparts to abstract types. They thus have causal-explanatory powers which are shared by all the tokens that instantiate that type. This not only provides a way for generalizations to factor into mechanistic explanations, which allows for the `-looking' explanations needed for integrating cognitive models, but also solves some internal problems in the mechanism literature concerning schemas and explanatory relevance. I illustrate how generic mechanisms are discovered and used with examples from computational cognitive neuroscience. I argue that connectionist models can be understood as approximations to generic brain mechanisms, which resolves a longstanding philosophical puzzle as to their role. Furthermore, I argue that understanding scientific models in general in terms of generic mechanisms allows for a unified account of the types of inferences made in modeling and in experiment.

  • Benjamin Goldberg (2012)

    University of South Florida (Permanent Instructor)

    Early Modern Philosophy, History of Science and Medicine

    Dissertation: William Harvey, Soul Searcher: Teleology and Philosophical Anatomy

    The goal of this dissertation is to understand the ways in which teleology structures the natural philosophy of William Harvey (1578-1657), the physician and philosopher who discovered the circulation of the blood, announced in his De motu cordis (1628).

    In particular, I hope to incorporate new archival research, as well as the study of a number of texts that have not yet received due attention, including the Prelectiones anatomie universalis (1616-1627) and the De generatione animalium (1651). The study is divided into three parts. The first two parts focus on the role of two sorts of teleology in defining Harvey's subject matter. I first discuss the teleology of being, which characterizes the functioning and material organization of the parts of the body, and which we would call today 'physiology and anatomy'. I then turn to examine the teleology of becoming, which characterizes the process of the generation of those parts, what we would call today 'embryological development'. Thus Harvey's subject matter must be understood as the study of, and search for, final causes. The third section shifts to examining Harvey's methods in light of this conception of the subject matter. I start by articulating how, in general, Harvey conceives of anatomy not as a body of pre-existing knowledge, but rather as an active ability, combining skills of hand, eye, and mind. I then turn to look in detail at Harvey's particular methods, such as vivisection and broad comparisons across animals. I argue that his methodology should be seen as an innovative reinterpretation and extension of the philosophies of Aristotle and Galen, mediated by certain Renaissance trends in medicine and natural philosophy. I focus specifically on how experience and experiment, observing and cutting, are used by Harvey to determine the final causes so central to his conception of his subject matter.

  • Bryan Roberts (2012)

    London School of Economics (Lecturer)

    History and philosophy of physics

    Dissertation: Time, Symmetry and Structure: Studies in the Foundations of Quantum Theory

    This dissertation is about the meaning and distinction between the past and the future according to our fundamental physical laws.

    I begin with an account of what it means for quantum theory to make such a distinction. I then show that if Galilei invariant quantum theory does distinguish a preferred direction in time, then this has consequences for the ontology of the theory. In particular, it requires matter to admit internal degrees of freedom. I proceed to show that this is not a purely quantum phenomenon, but can be expressed in classical mechanics as well. I then illustrate three routes for generating quantum systems that distinguish a preferred temporal direction in this way.

  • Jonathan Livengood (2011)

    University of Illinois, Urbana Champaign (Assistant Professor)

    Philosophy of Science, Metaphysics, Philosophy of Statistics

    Dissertation: On Causal Inferences in the Humanities and Social Sciences: Actual Causation

    The last forty years have seen an explosion of research directed at causation and causal inference. Statisticians developed techniques for drawing inferences about the likely effects of proposed interventions: techniques that have been applied most noticeably in social and life sciences. Computer scientists, economists, and methodologists merged graph theory and structural equation modeling in order to develop a mathematical formalism that underwrites automated search for causal structure from data. Analytic metaphysicians and philosophers of science produced an array of theories about the nature of causation and its relationship to scientific theory and practice.

    Causal reasoning problems come in three varieties: effects-of-causes problems, causes-of-effects problems, and structure-learning or search problems. Causes-of-effects problems are the least well-understood of the three, in part because of confusion about exactly what problem is supposed to be solved. I claim that the problem everyone is implicitly trying to solve is the problem of identifying the actual cause(s) of a given effect, which I will call simply the problem of actual causation. My dissertation is a contribution to the search for a satisfying solution to the problem of actual causation.

    Towards a satisfying solution to the problem of actual causation, I clarify the nature of the problem. I argue that the only serious treatment of the problem of actual causation in the statistical literature fails because it confuses actual causation with simple difference-making. Current treatments of the problem of actual causation by philosophers and computer scientists are better but also ultimately unsatisfying. After pointing out that the best current theories fail to capture intuitions about some simple voting cases, I step back and ask a methodological question: how is the correct theory of actual causation to be discovered? I argue that intuition-fitting, whether by experimentation or by armchair, is misguided, and I recommend an alternative, pragmatic approach. I show by experiments that ordinary causal judgments are closely connected to broadly moral judgments, and I argue that actual causal inferences presuppose normative, not merely descriptive, information. I suggest that the way forward in solving the problem of actual causation is to focus on norms of proper functioning.

  • Jonah Schupbach (2011)

    University of Utah (Assistant Professor)

    Philosophy of Science, Epistemology (including Formal Epistemology), Logic

    Dissertation: Studies in the Logic of Explanatory Power

    Human reasoning often involves explanation. In everyday affairs, people reason to hypotheses based on the explanatory power these hypotheses afford; I might, for example, surmise that my toddler has been playing in my office because I judge that this hypothesis delivers a good explanation of the disarranged state of the books on my shelves. But such explanatory reasoning also has relevance far beyond the commonplace. Indeed, explanatory reasoning plays an important role in such varied fields as the sciences, philosophy, theology, medicine, forensics, and law.

    This dissertation provides an extended study into the logic of explanatory reasoning via two general questions. First, I approach the question of what exactly we have in mind when we make judgments pertaining to the explanatory power that a hypothesis has over some evidence. This question is important to this study because these are the sorts of judgments that we constantly rely on when we use explanations to reason about the world. Ultimately, I introduce and defend an explication of the concept of explanatory power in the form of a probabilistic measure. This formal explication allows us to articulate precisely some of the various ways in which we might reason explanatorily.

    The second question this dissertation examines is whether explanatory reasoning constitutes an epistemically respectable means of gaining knowledge. I defend the following ideas: Probability theory can be used to describe the logic of explanatory reasoning, the normative standard to which such reasoning attains. Explanatory judgments, on the other hand, constitute heuristics that allow us to approximate reasoning in accordance with this logical standard while staying within our human bounds. The best-known model of explanatory reasoning, Inference to the Best Explanation, describes a cogent, nondeductive inference form. And reasoning by Inference to the Best Explanation approximates reasoning directly via probability theory in the real world. Finally, I respond to some possible objections to my work, and then to some more general, classic criticisms of Inference to the Best Explanation. In the end, this dissertation puts forward a clearer articulation and novel defense of explanatory reasoning.

  • Justin Sytsma (2010)

    Victoria University of Wellington (Senior Lecturer in Philosophy)
    jmsytsma@gmail.com

    Philosophy of Mind, Philosophy of Cognitive Science

    Dissertation: Phenomenal consciousness as scientific phenomenon? A Critical Investigation of the New Science of Consciousness

    Phenomenal consciousness poses something of a puzzle for philosophy of science. The puzzle arises from two facts: it is common for philosophers (and some scientists) to take its existence to be phenomenologically obvious, and yet modern science arguably has little (if anything) to tell us about it, despite over 20 years of work targeting phenomenal consciousness in what I call the new science of consciousness. What is it about this supposedly evident phenomenon that has kept it beyond the reach of our scientific understanding? I argue that phenomenal consciousness has resisted scientific explanation because there is no such phenomenon: What is in fact phenomenologically obvious has not resisted scientific explanation, exposing phenomenal consciousness as an unneeded and unwarranted theoretical construct that is not supported by the scientific evidence. I show this through an investigation of the new science. I detail how researchers in the new science understand “phenomenal consciousness,” tie this understanding to the recent philosophical debates, and critically assess the reasons given for believing that such a scientific phenomenon exists.

  • Holly Andersen (2009)

    Simon Fraser University (Associate Professor)
    holly_andersen@sfu.ca

    Philosophy of Science

    Dissertation: The Causal Structure of Conscious Agency

    I examine the way implicit causal assumptions about features of agency and action affect the philosophical conclusions we reach from neuroscientific results, and I provide a positive account of how to incorporate scientific experiments on various features of agency into philosophical frameworks of volition, using tools from interventionist causal analysis and research on human automatism. I also provide new, general arguments for the autonomy of any higher-level causes, including but not limited to features of conscious agency.

  • Peter Gildenhuys (2009)

    Lafayette College (Assistant Professor)
    gildenhp@lafayette.edu

    Philosophy of Biology, Philosophy of Science, Biomedical Ethics, Virtue Ethics, Causal Reasoning, Philosophy of Language

    Dissertation: A Causal Interpretation of Selection Theory

    My dissertation is an inferentialist account of classical population genetics. I present the theory as a definite body of interconnected inferential rules for generating mathematical models of population dynamics. To state those rules, I use the notion of causation as a primitive. First, I put forward a rule stating the circumstances of application of the theory, one that uses causal language to pick out the types of entities over which the theory may be deployed. Next, I offer a rule for grouping such entities into populations based on their competitive causal relationships. Then I offer a general algorithm for generating classical population genetics models suitable for such populations on the basis of information about what causal influences operate within them.

    Dynamical models in population genetics are designed to demystify natural phenomena, chiefly to show how adaptation, altruism, and genetic polymorphism can be explained in terms of natural rather than supernatural processes. In order for the theory to serve this purpose, it must be possible to understand, in a principled fashion, when and how to deploy the theory. By presenting the theory as a system of ordered inferential rules that takes causal information as its critical input and yields dynamical models as its outputs, I show explicitly how classical population genetics functions as a non-circular theoretical apparatus for generating explanations.

    Though focused on the foundations of population genetics, my dissertation has implications for a number of important philosophical disputes. Foremost, a causal interpretation of classical population genetics, if successful, would settle the hotly debated issue of whether that theory is causally interpretable. But further, the algorithm I developed for generating population genetics equations also shows how such notions as group selection, fitness, drift, and even natural selection itself do not serve as critical inputs in the construction of classical population genetics models. The meanings of these terms are contested in part because they need not be given a definite role to play in generating classical population genetics models.

    While many writers have aimed to generalize the theory of natural selection, I show how the theory is applicable to systems that are picked out and distinguished because of very general features of their causal structure, rather than their more narrow biological ones, and hence I show how the theory is deployable over more than just organisms bearing genetic variations. Even the various different sorts of groupings one finds in population genetics can be distinguished using causal language. Indeed, my explicit understanding of these last two aspects of selection theory has prepared me to demonstrate that inheritance is not a requirement for selection, as well as to show that population genetics models featuring diploid organisms are not instances of models of "group selection" that feature sub-groups of genes. Other corollaries of my work are an account of the causes that produce drift and a novel stance on why selection theory is a stochastic theory.

    Finally, because selection theory must function as a principled theory, it must be possible to say explicitly under what conditions equations can be used to make inferences about system dynamics when the theory is applied to natural populations. Accordingly, I show how applications of classical population genetics equations do not hold ceteris paribus; rather they can be coupled with a different proviso, one with a definite meaning that makes explicit what conditions must hold for the equations to function as tools of inference. The alternative to the vacuous proviso ceteris paribus that I offer for population genetics equations should be generalizable, with suitable modification, for use in the special sciences more generally. Lastly, my dissertation has implications for the relationship between models and laws, at least in the domain of classical population genetics, wherein laws are typically of narrow applicability and models are generalizations of them, the opposite of the pattern seen in physical theory.

  • Julie Zahle (2009)

    University of Copenhagen (Assistant Professor)
    juliezahle@gmail.com

    Philosophy of Science, Philosophy of Psychology & Cognitive Science, Philosophy of Mind, Epistemology/Metaphysics

    Dissertation: Practices, Perception, and Normative States

    Theories of practice are widespread within the humanities and the social sciences. They reflect the view that the study of, and theorizing about, social practices hold the key to a proper understanding of social life or aspects thereof. An important subset of theories of practice is ability theories of practice. These theories focus on the manner in which individuals draw on their abilities, skills, know-how, or practical knowledge when participating in social practices.

    In this dissertation, I concentrate on ability theories of practice as advanced within the social sciences and the philosophy of the social sciences. Ability theorists within these two fields stress individuals' ability to act appropriately in situations of social interaction. But how, more precisely, is this ability to be understood? The thesis I develop and defend provides a partial answer to this important question: In situations of social interaction, individuals' ability to act appropriately sometimes depends on their exercise of the ability directly to perceive normative states specified as the appropriateness of actions.

    In the first part of the dissertation, I introduce and motivate this thesis. I provide an overview of ability theories of practice and, against that background, I present my thesis. Though the thesis has gone largely unexplored, influential ability theorists have toyed with it, or their theories invite an extension in this direction. For this reason, I argue, the thesis constitutes a natural way in which to develop their approach further.

    In the second part of the dissertation, I develop and defend my thesis. First, I present a plausible way in which to make ontological sense of the claim that normative states are sometimes directly perceptible. Next, I offer an account of perception and argue that, by its lights, individuals sometimes have the ability directly to perceive normative states. Finally, I briefly show that individuals' ability to act appropriately sometimes depends on their exercise of this ability directly to perceive normative states. From both a practical and a theoretical perspective, the development and defense of this thesis constitutes a valuable elaboration of the basic approach associated with ability theories of practice.

  • Zvi Biener (2007)

    University of Cincinnati (Assistant Professor)

    Metaphysics and Epistemology in the Early-Modern Period, History of Philosophy

    Dissertation: The Unity and Structure of Knowledge: Subalternation, Demonstration, and the Geometrical Manner in Scholastic-Aristotelianism and Descartes

    The project of constructing a complete system of knowledge—a system capable of integrating all that is and could possibly be known—was common to many early-modern philosophers and was championed with particular alacrity by René Descartes. The inspiration for this project often came from mathematics in general and from geometry in particular: Just as propositions were ordered in a geometrical demonstration, the argument went, so should propositions be ordered in an overall system of knowledge. Science, it was thought, had to proceed more geometrico. In this dissertation, I offer a new interpretation of 'science more geometrico' based on an extended analysis of the explanatory and argumentative forms used in certain branches of geometry. These branches were optics, astronomy, and mechanics; the so-called subalternate, subordinate, or mixed-mathematical sciences. In Part I, I investigate the nature of the mixed-mathematical sciences according to Aristotle and early-modern scholastic-Aristotelians. In Part II, the heart of the work, I analyze the metaphysics and physics of Descartes' Principles of Philosophy (1644, 1647) in light of the findings of Part I and an example from Galileo. I conclude by arguing that we must broaden our understanding of the early-modern conception of 'science more geometrico' to include exemplars taken from the mixed-mathematical sciences. These render the concept more flexible than previously thought.

  • Brian Hepburn (2007)

    Wichita State University (Assistant Professor)
    brh15@interchange.ubc.ca

    History and Philosophy of Science, Philosophy of Physics, History of Science

    Dissertation: Equilibrium and Explanation in 18th Century Mechanics

    The received view of the Scientific Revolution is that it was completed with the publication of Isaac Newton's (1642-1727) Philosophiae Naturalis Principia Mathematica in 1687. The century that followed was relegated to working out the mathematical details of Newton's program and expressing it in analytic form. I show that the mechanics of Leonhard Euler (1707-1783) and Joseph-Louis Lagrange (1736-1813) did not begin with Newton's Three Laws. They provided their own beginning principles and interpretations of the relation between mathematical description and nature. Functional relations among the quantified properties of bodies were interpreted as basic mechanical connections between those bodies. Equilibrium played an important role in explaining the behavior of physical systems understood mechanically. Some behavior was revealed to be an equilibrium condition; other behavior was understood as a variation from equilibrium. Implications for scientific explanation are then drawn from these historical considerations, specifically an alternative to reducing explanation to unification. Trying to cast mechanical explanations (of the kind considered here) as Kitcher-style argument schemata fails to distinguish legitimate from spurious explanations. Consideration of the mechanical analogies lying behind the schemata is required.

  • Jackie Sullivan (2007)

    University of Western Ontario (Assistant Professor)
    jsulli29@uwo.ca

    Philosophy of Science, Philosophy of Neuroscience, Philosophy of Mind

    Dissertation: Reliability and Validity of Experiment in the Neurobiology of Learning and Memory

    The concept of reliability has been defined traditionally by philosophers of science as a feature that an experiment has when it can be used to arrive at true descriptive or explanatory claims about phenomena. In contrast, philosophers of science typically take the concept of validity to correspond roughly to that of generalizability, which is defined as a feature that a descriptive or explanatory claim has when it is based on laboratory data but is applicable to phenomena beyond those effects under study in the laboratory. Philosophical accounts of experiment typically treat of the reliability of scientific experiment and the validity of descriptive or explanatory claims independently. On my account of experiment, however, these two issues are intimately linked. I show by appeal to case studies from the contemporary neurobiology of learning and memory that measures taken to guarantee the reliability of experiment often result in a decrease in the validity of those scientific claims that are made on the basis of such experiments and, furthermore, that strategies employed to increase validity often decrease reliability. Yet, since reliability and validity are both desirable goals of scientific experiments, and, on my account, competing aims, a tension ensues. I focus on two types of neurobiological experiments as case studies to illustrate this tension: (1) organism-level learning experiments and (2) synaptic-level plasticity experiments. I argue that the express commitment to the reliability of experimental processes in neurobiology has resulted in the invalidity of mechanistic claims about learning and plasticity made on the basis of data obtained from such experiments. The positive component of the dissertation consists in specific proposals that I offer as guidelines for resolving this tension in the context of experimental design.

  • Jim Tabery (2007)

    University of Utah (Assistant Professor)
    tabery@philosophy.utah.edu

    Philosophy of Science, Philosophy of Biology, Bioethics, History of Biology

    Dissertation: Causation in the Nature-Nurture Debate: The Case of Genotype-Environment Interaction

    In the dissertation I attempt to resolve an aspect of the perennial nature-nurture debate. Despite the widely endorsed “interactionist credo”, the nature-nurture debate remains a quagmire of epistemological and methodological disputes over causation, explanation, and the concepts employed therein. Consider a typical nature-nurture question: Why do some individuals develop a complex trait such as depression, while others do not? This question incorporates an etiological query about the causal mechanisms responsible for the individual development of depression; it also incorporates an etiological query about the causes of variation responsible for individual differences in the occurrence of depression. Scientists in the developmental research tradition of biology investigate the causal mechanisms responsible for the individual development of traits; scientists in the biometric research tradition of biology investigate the causes of variation responsible for individual differences in traits. So what is the relationship between causal mechanisms and causes of variation, between individual development and individual differences, and between the developmental and biometric traditions?

    I answer this question by looking at disputes over genotype-environment interaction (or G×E). G×E refers to cases where different genotypes phenotypically respond differently to the same array of environments. Scientists in the developmental tradition argue that G×E is a developmental phenomenon fundamentally important for investigating individual development and its relation to variation. Scientists in the biometric tradition argue that G×E is simply a population-level, statistical measure that can generally be ignored or eliminated. In this way, an isolationist pluralism has emerged between the research traditions. In contrast to this isolationist solution, I offer an integrative model. The developmental and biometric research traditions are united in their joint effort to elucidate what I call difference mechanisms. Difference mechanisms are regular causal mechanisms made up of difference-making variables that take different values in the natural world. On this model, individual differences are the effect of difference-makers in development that take different values in the natural world. And the difference-making variables in the regular causal mechanisms responsible for individual development simultaneously are the causes of variation when the difference-making variables naturally take different values.

    I then use this general integrative framework to resolve the disputes over G×E. I first show that there have been two competing concepts of G×E throughout the history of the nature-nurture debate: what I call the biometric concept (or G×EB) and what I call the developmental concept (or G×ED). On the integrative model, however, these concepts can also be related: G×E results from differences in unique, developmental combinations of genotype and environment when both variables are difference-makers in development that naturally take different values and the difference that each variable makes is itself dependent upon the difference made by the other variable; and this interdependence may be measured with population-level, statistical methodologies.

  • Ingo Brigandt (2006)

    University of Alberta (Associate Professor)
    brigandt@ualberta.ca

    Philosophy of Biology, Philosophy of Mind, Philosophy of Language

    Dissertation: A Theory of Conceptual Advance: Explaining Conceptual Change in Evolutionary, Molecular, and Evolutionary Developmental Biology

    The theory of concepts advanced in the dissertation aims at accounting for a) how a concept makes successful practice possible, and b) how a scientific concept can be subject to rational change in the course of history. Traditional accounts in the philosophy of science have usually studied concepts in terms only of their reference; their concern is to establish a stability of reference in order to address the incommensurability problem. My discussion, in contrast, suggests that each scientific concept consists of three components of content: 1) reference, 2) inferential role, and 3) the epistemic goal pursued with a concept's use. I argue that in the course of history a concept can change in any of these three components, and that change in one component—including change of reference—can be accounted for as being rational relative to other components, in particular a concept's epistemic goal.

    This semantic framework is applied to two cases from the history of biology: the homology concept as used in 19th and 20th century biology, and the gene concept as used in different parts of the 20th century. The homology case study argues that the advent of Darwinian evolutionary theory, despite introducing a new definition of homology, did not bring about a new homology concept (distinct from the pre-Darwinian concept) in the 19th century. Nowadays, however, distinct homology concepts are used in systematics/evolutionary biology, in evolutionary developmental biology, and in molecular biology. The emergence of these different homology concepts is explained as occurring in a rational fashion. The gene case study argues that conceptual progress occurred with the transition from the classical to the molecular gene concept, despite a change in reference. In the last two decades, change occurred internal to the molecular gene concept, so that nowadays this concept's usage and reference varies from context to context. I argue that this situation emerged rationally and that the current variation in usage and reference is conducive to biological practice.

  • Francesca DiPoppa (2006)

    Texas Tech University (Associate Professor)
    francesca.di-poppa@ttu.edu

    History of Early Modern Philosophy

    Dissertation: "God acts through the laws of his nature alone": From the Nihil ex Nihilo axiom to causation as expression in Spinoza's metaphysics

    One of the most important concepts in Spinoza's metaphysics is that of causation. Much of the expansive scholarship on Spinoza, however, either takes causation for granted, or ascribes to Spinoza a model of causation that, for one reason or another, fails to account for specific instances of causation, such as the concept of cause of itself (causa sui). This work will offer a new interpretation of Spinoza's concept of causation. Starting from the "nothing comes from nothing" axiom and its consequences, the containment principle and the similarity principle (basically, the idea that what is in the effect must have been contained in the cause, and that the cause and the effect must have something in common), I will argue that Spinoza adopts what I call the expression-containment model of causation, a model that describes all causal interactions at the vertical and horizontal level (including causa sui, or self-cause). The model adopts the core notion of Neoplatonic emanationism, i.e. the idea that the effect is a necessary outpouring of the cause; however, Spinoza famously rejects transcendence and the possibility of created substances. God, the First Cause, causes immanently: everything that is caused is caused in God, as a mode of God. Starting from a discussion of the problems that Spinoza found in Cartesian philosophy, and of the Scholastic and Jewish positions on horizontal and vertical causation, my dissertation will follow the development of Spinoza's model of causation from his earliest work to his more mature Ethics. My work will also examine the relationship between Spinoza's elaboration of monism, the development of his model of causation, and his novel concept of essence (which for Spinoza coincides with a thing's causal power).

  • Abel Franco (2006)

    California State University, Northridge (Associate Professor)
    abel.franco@csun.edu

    History of Early Modern Philosophy

    Dissertation: Descartes' theory of passions

    Descartes not only had a theory of passions, but one that deserves a place among contemporary debates on emotions. The structure of this dissertation attempts to make explicit the unity of that theory. The study of the passions by the physician (who not only studies matter and motion but also human nature) [Chapter 2] appears to be the “foundations” (as he tells Chanut) of morals [Chapters 1 and 4] insofar as their main function [Chapter 3] is to dispose us to act in ways which directly affect our natural happiness. In other words, Descartes is in the Passions of the Soul (1649) climbing the very tree of philosophy he presented two years earlier in the Preface to the French Edition of the Principles of Philosophy: the trunk (in this case a section of it: our nature) leads us to the highest of the three branches (morals) when we study human passions.

    Human passions constitute the only function of the mind-body union that can guide us in the pursuit of our (natural) happiness. They do this (1) by informing the soul about the current state of perfection both of the body and, most importantly, of the mind-body union; (2) by discriminating what is relevant in the world regarding our perfection; and (3) by proposing (to the will) possible ways of action (i.e. by disposing us to act). The virtuous (the generous) are those who have achieved “contentment” not by impeding the arousal of their passions but by living them according to reason, that is, by following freely the dispositions to act (brought about by them) which can increase our perfection—i.e. the disposition to join true goods and to avoid true evils.

    Regarding current debates on emotions [Chapter 5], Descartes' perceptual model not only provides a satisfactory answer to the major challenges faced today both by feeling theories (intentionality) and judgment theories (feelings and the passivity of emotions) but it can also help advance those debates by, on one hand, bringing into them new or neglected ideas, and, on the other, providing a solid overall framework to think about passions.

  • Doreen Fraser (2006)

    University of Waterloo (Associate Professor)
    dlfraser@uwaterloo.ca

    Philosophy of Physics, Philosophy of Science, History of Science

    Dissertation: Haag's theorem and the interpretation of quantum field theories with interactions

    Quantum field theory (QFT) is the physical framework that integrates quantum mechanics and the special theory of relativity; it is the basis of many of our best physical theories. QFTs for interacting systems have yielded extraordinarily accurate predictions. Yet, in spite of unquestionable empirical success, the treatment of interactions in QFT raises serious issues for the foundations and interpretation of the theory. This dissertation takes Haag's theorem as a starting point for investigating these issues. It begins with a detailed exposition and analysis of different versions of Haag's theorem. The theorem is cast as a reductio ad absurdum of canonical QFT prior to renormalization. It is possible to adopt different strategies in response to this reductio: (1) renormalizing the canonical framework; (2) introducing a volume (i.e., long-distance) cutoff into the canonical framework; or (3) abandoning another assumption common to the canonical framework and Haag's theorem, which is the approach adopted by axiomatic and constructive field theorists. Haag's theorem does not entail that it is impossible to formulate a mathematically well-defined Hilbert space model for an interacting system on infinite, continuous space. Furthermore, Haag's theorem does not undermine the predictions of renormalized canonical QFT; canonical QFT with cutoffs and existing mathematically rigorous models for interactions are empirically equivalent to renormalized canonical QFT. The final two chapters explore the consequences of Haag's theorem for the interpretation of QFT with interactions. I argue that no mathematically rigorous model of QFT on infinite, continuous space admits an interpretation in terms of quanta (i.e., quantum particles). Furthermore, I contend that extant mathematically rigorous models for physically unrealistic interactions serve as a better guide to the ontology of QFT than either of the other two formulations of QFT. Consequently, according to QFT, quanta do not belong in our ontology of fundamental entities.

  • Greg Frost-Arnold (2006)

    Hobart and William Smith Colleges (Assistant Professor)
    gfrost-arnold@hws.edu

    History of Analytic Philosophy, Philosophical Logic, Philosophy of Science

    Dissertation: Carnap, Tarski, and Quine's Year Together: Logic, Science and Mathematics

    During the academic year 1940-1941, several giants of analytic philosophy congregated at Harvard: Russell, Tarski, Carnap, Quine, Hempel, and Goodman were all in residence. This group held both regular public meetings and private conversations. Carnap took detailed dictation notes that give us an extensive record of the discussions at Harvard that year. Surprisingly, the most prominent question in these discussions is: if the number of physical items in the universe is finite (or possibly finite), what form should the logic and mathematics in science take? This question is closely connected to an abiding philosophical problem, one that is of central philosophical importance to the logical empiricists: what is the relationship between the logico-mathematical realm and the natural, material realm? This problem continues to be central to analytic philosophy of logic, mathematics, and science. My dissertation focuses on three issues connected with this problem that dominate the Harvard discussions: nominalism, the unity of science, and analyticity. I both reconstruct the lines of argument represented in the Harvard discussions and relate them to contemporary treatments of these issues.

  • Francis Longworth (2006)

    Institut d'Histoire et de Philosophie des Sciences et des Techniques (Research Fellow)
    francis.longworth@univ-paris1.fr

    Philosophy of Science, Metaphysics

    Dissertation: Causation, Counterfactual Dependence and Pluralism

    The principal concern of this dissertation is whether or not a conceptual analysis of our ordinary concept of causation can be provided. In chapters two and three I show that two of the most promising univocal accounts (the counterfactual theories of Hitchcock and Yablo) are subject to numerous counterexamples. In chapter four, I show that Hall's pluralistic theory of causation, according to which there are two concepts of causation, also faces a number of counterexamples. In chapter five, I sketch an alternative, broadly pluralistic theory of token causation, according to which causation is a cluster concept with a prototypical structure. This theory is able to evade the counterexamples that beset other theories and, in addition, offers an explanation of interesting features of the concept, such as the existence of borderline cases and the fact that some instances of causation seem to be better examples of the concept than others.

  • David Miller (2006)

    Iowa State University (Assistant Professor)
    david.m.miller@emory.edu

    History of Early Modern Philosophy, History of Science

    Dissertation: Representations of Space in Seventeenth Century Physics

    The changing understanding of the universe that characterized the birth of modern science included a fundamental shift in the prevailing representation of space—the presupposed conceptual structure that allows one to intelligibly describe the spatial properties of physical phenomena. At the beginning of the seventeenth century, the prevailing representation of space was spherical. Natural philosophers first assumed a spatial center, then specified meanings with reference to that center. Directions, for example, were described in relation to the center, and locations were specified by distance from the center. Through a series of attempts to solve problems first raised by the work of Copernicus, this Aristotelian, spherical framework was replaced by a rectilinear representation of space. By the end of the seventeenth century, descriptions were understood by reference to linear orientations, as parallel or oblique to a presupposed line, and locations were identified without reference to a privileged central point. This move to rectilinear representations of space enabled Gilbert, Kepler, Galileo, Descartes, and Newton to describe and explain the behavior of the physical world in the novel ways for which these men are justly famous, including their theories of gravitational attraction and inertia. In other words, the shift towards a rectilinear representation of space was essential to the fundamental reconception of the universe that gave rise to both modern physical theory and, at the same time, the linear way of experiencing the world essential to modern science.

  • Christian Wüthrich (2006)

    University of California, San Diego (Associate Professor)
    wuthrich@ucsd.edu

    Philosophy of Physics, Philosophy of Science, Metaphysics

    Dissertation: Approaching the Planck Scale from a Generally Relativistic Point of View: A Philosophical Appraisal of Loop Quantum Gravity

    My dissertation studies the foundations of loop quantum gravity, a candidate for a quantum theory of gravity based on classical general relativity. After an evaluation of the motivations for seeking a quantum theory of gravity, I embark upon an investigation of how loop quantum gravity codifies general relativity's main innovation, the so-called background independence, in a formalism suitable for quantization. This codification pulls asunder what has been joined together in general relativity: space and time. It is thus a central issue whether or not general relativity's four-dimensional structure can be retrieved in the alternative formalism. I argue that the rightful four-dimensional spacetime structure can only be partially retrieved at the classical level, while its retrieval at the quantum level is an open question. Next, I scrutinize pronouncements claiming that the "big-bang" singularity of classical cosmological models vanishes in quantum cosmology based on loop quantum gravity and conclude that these claims must be severely qualified. Finally, a scheme is developed of how the re-emergence of the smooth spacetime from the underlying discrete quantum structure could be understood.

  • Erik Angner (2005)

    George Mason University (Associate Professor)

    History and Philosophy of Social Science, Social and Political Philosophy

    Dissertation: Subjective Measures of Well-Being: A philosophical examination

    Over the last couple of decades, as part of the rise of positive psychology, psychologists have given increasing amounts of attention to so-called subjective measures of well-being. These measures, which are supposed to represent the well-being of individuals and groups, are often presented as alternatives to more traditional economic ones for purposes of the articulation, implementation and evaluation of public policy. Unlike economic measures, which are typically based on data about income, market transactions and the like, subjective measures are based on answers to questions like: "Taking things all together, how would you say things are these days: would you say you're very happy, pretty happy, or not too happy these days?" The aim of this dissertation is to explore issues in the philosophical foundations of subjective measures of well-being, with special emphasis on the manner in which the philosophical foundations of subjective measures differ from those of traditional economic measures. Moreover, the goal is to examine some arguments for and against these measures, and, in particular, arguments that purport to demonstrate the superiority of economic measures for purposes of public policy. My main thesis is that subjective measures of well-being cannot be shown to be inferior to economic measures quite as easily as some have suggested, but that they nevertheless are associated with serious problems, and that questions about the relative advantage of subjective and economic measures for purposes of public policy will depend on some fundamentally philosophical judgments, e.g. about the nature of well-being and the legitimate goals for public policy.

  • Megan Delehanty (2005)

    University of Calgary (Associate Professor)
    mdelehan@ucalgary.ca

    Dissertation: Empiricism and the Epistemic Status of Imaging Technologies

    The starting point for this project was the question of how to understand the epistemic status of mathematized imaging technologies such as positron emission tomography (PET) and confocal microscopy. These sorts of instruments play an increasingly important role in virtually all areas of biology and medicine. Some of these technologies have been widely celebrated as having revolutionized various fields of study while others have been the target of substantial criticism. Thus, it is essential that we be able to assess these sorts of technologies as methods of producing evidence. They differ from one another in many respects, but one feature they all have in common is the use of multiple layers of statistical and mathematical processing that are essential to data production. This feature alone means that they do not fit neatly into any standard empiricist account of evidence. Yet this failure to be accommodated by philosophical accounts of good evidence does not indicate a general inadequacy on their part since, by many measures, they very often produce very high quality evidence. In order to understand how they can do so, we must look more closely at old philosophical questions concerning the role of experience and observation in acquiring knowledge about the external world. Doing so leads us to a new, grounded version of empiricism. After distinguishing between a weaker and a stronger, anthropocentric version of empiricism, I argue that most contemporary accounts of observation are what I call benchmark strategies that, implicitly or explicitly, rely on the stronger version according to which human sense experience holds a place of unique privilege. They attempt to extend the bounds of observation, and the epistemic privilege accorded to it, by establishing some type of relevant similarity to the benchmark of human perception. These accounts fail because they are unable to establish an epistemically motivated account of what relevant similarity consists of. The last best chance for any benchmark approach, and, indeed, for anthropocentric empiricism, is to supplement a benchmark strategy with a grounding strategy. Toward this end, I examine the Grounded Benchmark Criterion which defines relevant similarity to human perception in terms of the reliability-making features of human perception. This account, too, must fail due to our inability to specify these features given the current state of understanding of the human visual system. However, this failure reveals that it is reliability alone that is epistemically relevant, not any other sort of similarity to human perception. Current accounts of reliability suffer from a number of difficulties, so I develop a novel account of reliability that is based on the concept of granularity. My account of reliability in terms of a granularity match both provides the means to refine the weaker version of empiricism and allows us to establish when and why imaging technologies are reliable. Finally, I use this account of granularity in examining the importance of the fact that the output of imaging technologies usually is images.

  • Alan Love (2005)

    University of Minnesota (Associate Professor)
    aclove@umn.edu

    Philosophy of Biology, Philosophy of Science, Biology

    Dissertation: Explaining Evolutionary Innovation and Novelty: A Historical and Philosophical Study of Biological Concepts

    Explaining evolutionary novelties (such as feathers or neural crest cells) is a central item on the research agenda of evolutionary developmental biology (Evo-devo). Proponents of Evo-devo have claimed that the origin of innovation and novelty constitutes a distinct research problem, ignored by evolutionary theory during the latter half of the 20th century, and that Evo-devo as a synthesis of biological disciplines is in a unique position to address this problem. In order to answer historical and philosophical questions attending these claims, two philosophical tools were developed. The first, conceptual clusters, captures the joint deployment of concepts in the offering of scientific explanations and allows for a novel definition of conceptual change. The second, problem agendas, captures the multifaceted nature of explanatory domains in biological science and their diachronic stability. The value of problem agendas as an analytical unit is illustrated through the examples of avian feather and flight origination. Historical research shows that explanations of innovation and novelty were not ignored. They were situated in disciplines such as comparative embryology, morphology, and paleontology (exemplified in the research of N.J. Berrill, D.D. Davis, and W.K. Gregory), which were overlooked because of a historiography emphasizing the relations between genetics and experimental embryology. This identified the origin of Evo-devo tools (developmental genetics) but missed the source of its problem agenda. The structure of developmental genetic explanations of innovations and novelties is compared and contrasted with those of other disciplinary approaches, past and present. Applying the tool of conceptual clusters to these explanations reveals a unique form of conceptual change over the past five decades: a change in the causal and evidential concepts appealed to in explanations. Specification of the criteria of explanatory adequacy for the problem agenda of innovation and novelty indicates that Evo-devo qua disciplinary synthesis requires more attention to the construction of integrated explanations from its constituent disciplines besides developmental genetics. A model for explanations integrating multiple disciplinary contributions is provided. The phylogenetic approach to philosophy of science utilized in this study is relevant to philosophical studies of other sciences and meets numerous criteria of adequacy for analyses of conceptual change.

  • Andrea Scarantino (2005)

    Georgia State University (Associate Professor)
    phlams@langate.gsu.edu

    Dissertation: Explicating Emotions

    In the course of their long intellectual history, emotions have been identified with items as diverse as perceptions of bodily changes (feeling tradition), judgments (cognitivist tradition), behavioral predispositions (behaviorist tradition), biologically based solutions to fundamental life tasks (evolutionary tradition), and culturally specific social artifacts (social constructionist tradition). The first objective of my work is to put some order in the mare magnum of theories of emotions. I taxonomize them into families and explore the historical origin and current credentials of the arguments and intuitions supporting them. I then evaluate the methodology of past and present emotion theory, defending a bleak conclusion: a great many emotion theorists ask "What is an emotion?" without a clear understanding of what counts as getting the answer right. I argue that there are two ways of getting the answer right. One is to capture the conditions of application of the folk term "emotion" in ordinary language (Folk Emotion Project), and the other is to formulate a fruitful explication of it (Explicating Emotion Project). Once we get clear on the desiderata of these two projects, we realize that several long-running debates in emotion theory are motivated by methodological confusions. The constructive part of my work is devoted to formulating a new explication of emotion suitable for the theoretical purposes of scientific psychology. At the heart of the Urgency Management System (UMS) theory of emotions I propose is the idea that an "umotion" is a special type of superordinate system which instantiates and manages an urgent action tendency by coordinating the operation of a cluster of cognitive, perceptual and motoric subsystems. Crucially, such a superordinate system has a proper function by virtue of which it acquires a special kind of intentionality I call pragmatic. I argue that "umotion" is sufficiently similar in use to "emotion" to count as explicating it, that it has precise rules of application, and that it accommodates a number of central and widely shared intuitions about the emotions. My hope is that future emotion research will demonstrate the heuristic fruitfulness of the "umotion" concept for the sciences of mind.

  • Armond Duwell (2004)

    University of Montana, Missoula (Associate Professor)
    armond.duwell@mso.umt.edu

    Philosophy of Physics, Information Theory

    Dissertation: Foundations of Quantum Information Theory and Quantum Computation Theory

    Physicists and philosophers have expressed great hope that quantum information theory will revolutionize our understanding of quantum theory. The first part of my dissertation is devoted to clarifying and criticizing various notions of quantum information, particularly those attributable to Jozsa and also Deutsch and Hayden. My work suggests that no new concept of information is needed and that Shannon information theory works perfectly well for quantum mechanical systems.

    The second part of my dissertation is devoted to explaining why quantum computers are faster than conventional computers for some computational tasks. The current best explanation of quantum computational speedup is that quantum computers can compute many values of a function in a single computational step, whereas conventional computers cannot. Further, it has been suggested that the Many Worlds Interpretation of quantum theory is the only interpretation that can underwrite such a claim. In my dissertation I clarify the explanandum and articulate possible explananda for the explanatory task at hand. I offer an explanation and I argue that no appeal needs to be made to any particular interpretation of quantum theory to explain quantum computational speedup.
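
    The standard textbook illustration of the "many values of a function in a single computational step" claim discussed above is Deutsch's algorithm, which decides whether a one-bit function is constant or balanced using a single oracle query. The following Python sketch is purely illustrative and is not drawn from the dissertation; it simulates the two-qubit circuit with dense NumPy matrices.

```python
# Illustrative simulation of Deutsch's algorithm (a sketch, not from the
# dissertation). One oracle query suffices to learn a global property of f
# (constant vs. balanced) that classically requires two evaluations.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def deutsch(f):
    """Return 'constant' or 'balanced' for f: {0,1} -> {0,1} with one oracle call."""
    # Oracle U_f |x, y> = |x, y XOR f(x)>, with basis index 2*x + y.
    U_f = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = np.zeros(4)
    state[1] = 1.0                  # start in |0>|1>
    state = np.kron(H, H) @ state   # put the query register in superposition
    state = U_f @ state             # the single oracle query
    state = np.kron(H, I) @ state   # interfere the computational branches
    prob_one = state[2] ** 2 + state[3] ** 2   # probability the first qubit reads 1
    return "balanced" if prob_one > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant function  -> 'constant'
print(deutsch(lambda x: x))      # identity           -> 'balanced'
print(deutsch(lambda x: 1 - x))  # negation           -> 'balanced'
```

    Whether the interference step in circuits like this one is best explained by appeal to "many worlds", or without appeal to any particular interpretation, is the explanatory question at issue above.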

  • Uljana Feest (2003)

    University of Hanover (Professor)

    Cognitive and Behavioral Sciences

    Dissertation: Operationism, Experimentation, and Concept Formation

    I provide a historical and philosophical analysis of the doctrine of operationism, which emerged in American psychology in the 1930s. While operationism is frequently characterized as a semantic thesis (which demands that concepts be defined by means of measurement operations), I argue that it is better understood as a methodological strategy for the experimental investigation of psychological phenomena. I present three historical case studies of the work of early proponents of operationism and show that all of them were impressed by behaviorist critiques of traditional mentalism and introspectivism, while still wanting to investigate some of the phenomena of traditional psychology (consciousness, purpose, motivation). I show that when these psychologists used “operational definitions”, they posited the existence of particular psychological phenomena and treated certain experimental data – by stipulation – as indicative of those phenomena. However, they viewed these stipulative empirical definitions as neither a priori true nor unrevisable. While such stipulative definitions have the function of getting empirical research about a phenomenon “off the ground”, they clearly don't provide sufficient evidence for the existence of the phenomenon. In the philosophical part of my dissertation, I raise the epistemological question of what it would take to provide such evidence, relating this question to recent debates in the philosophy of experimentation. I argue that evidence for the existence of a given phenomenon is produced as part of testing descriptive hypotheses about the phenomenon. Given how many background assumptions have to be made in order to test a hypothesis about a phenomenon, I raise the question of whether claims about the existence of psychological phenomena are underdetermined by data. I argue that they are not. Lastly, I present an analysis of the scientific notion of an experimental artifact, and introduce the notion of an “artifactual belief”, i.e. an experimentally well confirmed belief that later turns out to be false, when one or more of the background assumptions (relative to which the belief was confirmed) turn out to be false.

  • Gualtiero Piccinini (2003)

    University of Missouri - St. Louis (Associate Professor)
    piccininig@umsl.edu

    Philosophy of Mind

    Dissertation: Computations and Computers in the Sciences of Mind and Brain

    Computationalism says that brains are computing mechanisms, that is, mechanisms that perform computations. At present, there is no consensus on how to formulate computationalism precisely or adjudicate the dispute between computationalism and its foes, or between different versions of computationalism. An important reason for the current impasse is the lack of a satisfactory philosophical account of computing mechanisms. The main goal of this dissertation is to offer such an account. I also believe that the history of computationalism sheds light on the current debate. By tracing different versions of computationalism to their common historical origin, we can see how the current divisions originated and understand their motivation. Reconstructing debates over computationalism in the context of their own intellectual history can contribute to philosophical progress on the relation between brains and computing mechanisms and help determine how brains and computing mechanisms are alike, and how they differ. Accordingly, my dissertation is divided into a historical part, which traces the early history of computationalism up to 1946, and a philosophical part, which offers an account of computing mechanisms.

    The two main ideas developed in this dissertation are that (1) computational states are to be individuated by functional properties rather than semantic properties, and (2) the relevant functional properties are specified by an appropriate functional analysis. The resulting account of computing mechanisms, which I call the functional account of computing mechanisms, can be used to individuate computing mechanisms and the functions they compute. I use the functional account of computing mechanisms to taxonomize computing mechanisms based on their different computing power, and I use this taxonomy of computing mechanisms to taxonomize different versions of computationalism based on the functional properties that they ascribe to brains. By doing so, I begin to tease out empirically testable statements about the functional organization of the brain that different versions of computationalism are committed to. I submit that when computationalism is reformulated in the more explicit and precise way I propose, the disputes about computationalism can be adjudicated on the grounds of empirical evidence from neuroscience.

  • Wendy Parker (2003)

    University of Durham (Reader)

    Modeling and Simulation, Science and Public Policy, Environmental Philosophy

    Dissertation: Computer Modeling in Climate Science: Experiment, Explanation, Pluralism

    Computer simulation modeling is an important part of contemporary scientific practice but has not yet received much attention from philosophers. The present project helps to fill this lacuna in the philosophical literature by addressing three questions that arise in the context of computer simulation of Earth's climate. (1) Computer simulation experimentation commonly is viewed as a suspect methodology, in contrast to the trusted mainstay of material experimentation. Are the results of computer simulation experiments somehow deeply problematic in ways that the results of material experiments are not? I argue against categorical skepticism toward the results of computer simulation experiments by revealing important parallels in the epistemologies of material and computer simulation experimentation. (2) It has often been remarked that simple computer simulation models—but not complex ones—contribute substantially to our understanding of the atmosphere and climate system. Is this view of the relative contribution of simple and complex models tenable? I show that both simple and complex climate models can promote scientific understanding and argue that the apparent contribution of simple models depends upon whether a causal or deductive account of scientific understanding is adopted. (3) When two incompatible scientific theories are under consideration, they typically are viewed as competitors, and we seek evidence that refutes at least one of the theories. In the study of climate change, however, logically incompatible computer simulation models are accepted as complementary resources for investigating future climate. How can we make sense of this use of incompatible models? I show that a collection of incompatible climate models persists in part because of difficulties faced in evaluating and comparing climate models. I then discuss the rationale for using these incompatible models together and argue that this climate model pluralism has both competitive and integrative components.
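
    As a purely illustrative example of the "simple" end of the modeling spectrum discussed above (a sketch, not taken from the dissertation), a zero-dimensional energy-balance model represents global mean temperature with a handful of parameters; the parameter values below are common textbook figures and are assumptions of this sketch.

```python
# A zero-dimensional energy-balance climate model (illustrative sketch only).
# Global mean temperature T relaxes toward the value at which absorbed solar
# radiation balances outgoing longwave radiation.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0            # solar constant, W m^-2
ALBEDO = 0.30          # planetary albedo (assumed)
EMISSIVITY = 0.61      # effective emissivity, a crude stand-in for the greenhouse effect (assumed)
HEAT_CAPACITY = 4.0e8  # effective heat capacity of the surface layer, J m^-2 K^-1 (assumed)

def step(T, dt=86400.0):
    """Advance temperature T (kelvin) by one time step dt (seconds)."""
    absorbed = S0 * (1 - ALBEDO) / 4.0    # absorbed shortwave per unit area
    emitted = EMISSIVITY * SIGMA * T**4   # outgoing longwave
    return T + dt * (absorbed - emitted) / HEAT_CAPACITY

T = 255.0                        # start from a cold initial state, in kelvin
for _ in range(50 * 365):        # integrate daily steps for about fifty years
    T = step(T)
print(f"Equilibrium temperature: {T:.1f} K")   # roughly 288 K with these parameters
```

    A model this small obviously omits the dynamics that complex general circulation models resolve; the philosophical question pressed above is what kind of understanding, causal or deductive, such a simple model can nevertheless supply.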

  • Chris Smeenk (2002)

    University of Western Ontario (Associate Professor)
    csmeenk2@uwo.ca

    Philosophy of Physics, Early Modern Philosophy

    Dissertation: Approaching the Absolute Zero of Time: Theory Development in Early Universe Cosmology

    This dissertation gives an original account of the historical development of modern cosmology along with a philosophical assessment of related methodological and foundational issues. After briefly reviewing the groundbreaking work by Einstein and others, I turn to the development of early universe cosmology following the discovery of the microwave background radiation in 1965. This discovery encouraged consolidation and refinement of the big bang model, but cosmologists also noted that cosmological models could accommodate observations only at the cost of several "unnatural" assumptions regarding the initial state. I describe various attempts to eliminate initial conditions in the late 60s and early 70s, leading up to the idea that came to dominate the field: inflationary cosmology. I discuss the pre-history of inflationary cosmology and the early development of the idea, including the account of structure formation and the introduction of the "inflaton" field. The second part of my thesis focuses on methodological issues in cosmology, opening with a discussion of three principles and their role in cosmology: the cosmological principle, indifference principle, and anthropic principle. I assess appeals to explanatory adequacy as grounds for theory choice in cosmology, and close with a discussion of confirmation theory and the issue of novelty in relation to cosmological theories.

  • Daniel Steel (2002)

    Michigan State University (Associate Professor)
    steel@msu.edu

    Causality and Confirmation; Biological and Social Sciences

    Dissertation: Mechanisms and Interfering Factors: Dealing with Heterogeneity in the Biological and Social Sciences

    The biological and social sciences both deal with populations that are heterogeneous with regard to important causes of interest, in the sense that the same cause often exerts very different effects upon distinct members of the population. For instance, welfare-to-work programs are likely to have different effects on the economic prospects of trainees depending on such variables as education, prior work experience, and so forth. Moreover, it is rarely the case in biology or social science that all such complicating variables are known and can be measured. In such circumstances, generalizations about the effect of a factor in a given population average over these differences, and hence take on a probabilistic character. Consequently, a causal generalization that holds with respect to a heterogeneous population as a whole may not hold for a given sub-population, a fact which raises a variety of difficulties for explanation and prediction. The overarching theme of the dissertation is that knowing how a cause produces its effect is the key to knowing when a particular causal relationship holds and when it does not. More specifically, the proposal is the following. Suppose that X is the cause of Y in the population P. Then there is a mechanism, or mechanisms, present among at least some of the members of P through which X influences Y. So if we know the mechanism and the kinds of things that can interfere with it, then we are in a much better position to say when the causal generalization will hold and when it will not. This intuitive idea has been endorsed by several philosophers; however, what has been lacking is a systematic exploration of the proposal and its consequences. That is what I aim to provide. The approach to the heterogeneity problem is developed in the context of an example drawn from biomedical science, namely, research into the causal mechanism by which HIV attacks the human immune system. Moreover, I argue that my approach to the problem of heterogeneity sheds new light on some familiar philosophical issues that are relevant to the biological and social sciences, namely, ceteris paribus laws and methodological holism versus methodological individualism.

  • Chris Martin (2001)

    Left the field

    Philosophy of Physics, Gauge Theories

    Dissertation: Gauging Gauge: Remarks on the Conceptual Foundations of Gauge Symmetry

    Of all the concepts of modern physics, there are few that have the sort of powerful, sometimes mysterious, and often awe-inspiring rhetoric surrounding them as has the concept of local gauge symmetry. The common understanding today is that all fundamental interactions in nature are described by so-called gauge theories. These theories, far from being just any sort of physical theory, are taken to result from the strict dictates of principles of local gauge symmetry—gauge symmetry principles. The success—experimental, theoretical and otherwise—of theories based on local symmetry principles has given rise to the received view of local symmetry principles as deeply fundamental, as literally “dictating” or “necessitating” the very shape of fundamental physics. The current work seeks to make some headway towards elucidating this view by considering the general issue of the physical content of local symmetry principles in their historical and theoretical contexts. There are two parts to the dissertation: a historical part and a more “philosophical” part. In the first, historical part, I provide a brief genealogy of gauge theories, looking at some of the seminal works in the birth and development of gauge theories. My chief claim here is about what one does not find. Despite the modern rhetoric, the history of gauge field theories does not evidence loaded arguments from (a priori) local symmetry principles or even the need for ascriptions of any deep physical significance to these principles. The history evidences that the ascendancy of gauge field theories rests quite squarely on the heuristic value of local gauge symmetry principles. In the philosophical component of the dissertation I turn to an analysis of the gauge argument, the canonical means of cashing out the physical content of gauge symmetry principles. I warn against a (common) literal reading of the argument. As I discuss, the argument must be afforded a fairly heuristic (even if historically-based) reading. Claims to the effect that the argument reflects the “logic of nature” must, for many reasons that I discuss, be taken with a grain of salt. Finally, I highlight how the “received view” of gauge symmetry—which takes it that gauge symmetry transformations are merely non-physical, formal changes of description—gives rise to a tension between the “profundity of gauge symmetry” and “the redundancy of gauge symmetry”. I consider various ways one might address this tension. I conclude that one is hard pressed to do any better than a “minimalist view” which takes it that the physical import of gauge symmetry lies in its historically based heuristic utility. While there are less minimalist views of the physical content to be ascribed to gauge symmetry principles, it is clear that neither the history nor the physics obliges us to make such ascriptions.

  • Andrew Backe (2000)

    City University of Hong Kong (Visiting Assistant Professor)

    Philosophy of Mind, American Pragmatism

    Dissertation: The Divided Psychology of John Dewey

    This dissertation examines the extent to which John Dewey's psychology was a form of behaviorism, and, in doing so, considers how metaphysical commitments influenced psychological theories at the turn of the century. In his 1916 Essays in Experimental Logic, Dewey described his psychology as a science not of states of consciousness, but of behavior. Specifically, Dewey argued that conscious states can be assimilated to modes of behavior that help the individual adapt to a situation of conflict. Hence, the role of psychology, Dewey argued, is to provide a natural history of the conditions under which a particular behavioral mode emerges. Based on an analysis of a number of Dewey's major works written during the period of 1884 to 1916, I claim that there is an underlying metaphysical intuition in Dewey's views that prevents a behavioristic interpretation of his psychology. This intuition, I argue, stems from Dewey's absolute idealist philosophy of the mid 1880s. The intuition raises the concern that, if psychologists permit a transition from one psychological state to another to be described in terms of a causal succession of discrete events, then there is no way that the transition can be held together in a relational complex. As applied to psychology by Dewey, the intuition rejected treating any psychological phenomenon as constituted of separate existences, regardless of whether the phenomenon is defined in terms of conscious or behavioral events. Instead, the intuition presupposed that psychological events are unified in a special kind of relation in which events merge and are, in a mystical sense, identical. I maintain that Dewey's intuition regarding psychological causation served as the basis for his concept of coordination, which Dewey set out in his criticism of the reflex arc concept in the context of the Baldwin-Titchener reaction-time controversy. According to my account, Dewey's coordination concept was at odds with the behaviorists' unit of analysis, which explicitly divided any psychological phenomenon into separate existences of stimulus and response. I consider the broader implications of Dewey's metaphysical intuition through a discussion of different types of causal explanation that emerged in psychology in the early 20th century.

  • Benoit Desjardins (1999)

    Hospital of the University of Pennsylvania (Assistant Professor of Radiology)

    Causality, Statistical Algorithms

    Dissertation: On the Theoretical Limits to Reliable Causal Inference

    One of the most central problems in scientific research is the search for explanations of some aspect of nature for which empirical data is available. One seeks to identify the causal processes explaining the data, in the form of a model of the aspect of nature under study. Although traditional statistical approaches are excellent for finding statistical dependencies in a body of empirical data, they prove inadequate at finding the causal structure in the data. New graphical algorithmic approaches have been proposed to automatically discover the causal structure in the data. Based on strong connections between graph theoretic properties and statistical aspects of causal influences, fundamental assumptions about the data can be used to infer a graphical structure, which is used to construct models describing the exact causal relations in the data. If the data contain correlated errors, latent variables must be introduced to explain the causal structure in the data. There is usually a large set of equivalent causal models with latent variables, representing competing alternatives, which entail similar statistical dependency relations. The central problem in this dissertation is the study of the theoretical limits to reliable causal inference. Given a body of statistical distribution information on a finite set of variables, we seek to characterize the set of all causal models satisfying this distribution. Current approaches only characterize the set of models which satisfy limited properties of this distribution, notably its relations of probabilistic conditional independence. Such models are semi-Markov equivalent. Some of these models might however not satisfy other properties of the distribution, which cannot be expressed as simple conditional independence relations on marginal distributions. We seek to go beyond semi-Markov equivalence. To do so, we first formally characterize the variation in graphical structure within a semi-Markov equivalence class of models. We then determine possible consequences of this variation as either experimentally testable features of models, or as testable features of marginal distributions.
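
    A purely illustrative sketch (not code from the dissertation) of the constraint-based style of causal inference described above: conditional independence tests, here partial correlations on simulated Gaussian data, constrain which causal structures are compatible with a distribution. The chain and the fork imply the same independencies and so cannot be told apart from observational data alone, while the collider can; this indistinguishability is a toy instance of the equivalence classes at issue above. The variable names, sample size, and noise model are assumptions of the sketch.

```python
# Conditional independence patterns for three small causal structures
# (illustrative sketch only). Partial correlation serves as the test statistic.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def partial_corr_xz_given_y(X, Y, Z):
    """Correlation of the residuals of X and Z after regressing each on Y."""
    rx = X - np.polyval(np.polyfit(Y, X, 1), Y)
    rz = Z - np.polyval(np.polyfit(Y, Z, 1), Y)
    return np.corrcoef(rx, rz)[0, 1]

def report(name, X, Y, Z):
    marginal = np.corrcoef(X, Z)[0, 1]
    partial = partial_corr_xz_given_y(X, Y, Z)
    print(f"{name}: corr(X,Z) = {marginal:+.3f}, corr(X,Z | Y) = {partial:+.3f}")

# Chain X -> Y -> Z: X and Z are dependent, but independent given Y.
X = rng.normal(size=N); Y = X + rng.normal(size=N); Z = Y + rng.normal(size=N)
report("chain   ", X, Y, Z)

# Fork X <- Y -> Z: the same independence pattern as the chain (Markov equivalent).
Y = rng.normal(size=N); X = Y + rng.normal(size=N); Z = Y + rng.normal(size=N)
report("fork    ", X, Y, Z)

# Collider X -> Y <- Z: X and Z are independent, but dependent given Y.
X = rng.normal(size=N); Z = rng.normal(size=N); Y = X + Z + rng.normal(size=N)
report("collider", X, Y, Z)
```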

  • Elizabeth Paris (1999)

    Deceased

    History of Particle Physics

    Dissertation: Ringing in the New Physics: The Politics and Technology of Electron Colliders in the United States, 1956-1972

    The “November Revolution” of 1974 and the experiments that followed consolidated the place of the Standard Model in modern particle physics. Much of the evidence on which these conclusions depended was generated by a new type of tool: colliding beam storage rings, which had been considered physically unfeasible twenty years earlier. In 1956 a young experimentalist named Gerry O'Neill dedicated himself to demonstrating that such an apparatus could do useful physics. The storage ring movement encountered numerous obstacles before generating one of the standard machines for high energy research. In fact, it wasn't until 1970 that the U.S. finally broke ground on its first electron-positron collider. Drawing extensively on archival sources and supplementing them with the personal accounts of many of the individuals who took part, Ringing in the New Physics examines this instance of post-World War II techno-science and the new social, political and scientific tensions that characterize it. The motivations are twofold: first, that the chronicle of storage rings may take its place beside mathematical group theory, computer simulations, magnetic spark chambers, and the like as an important contributor to a view of matter and energy which has been the dominant model for the last twenty-five years. In addition, the account provides a case study for the integration of the personal, professional, institutional, and material worlds when examining an episode in the history or sociology of twentieth century science. The story behind the technological development of storage rings holds fascinating insights into the relationship between theory and experiment, collaboration and competition in the physics community, the way scientists obtain funding and their responsibilities to it, and the very nature of what constitutes successful science in the post-World War II era.

  • Tom Seppalainen (1999)

    Portland State University (Associate Professor)
    seppalt@pdx.edu

    Visual Perception and Cognition, Metaphysics

    Dissertation: The Problematic Nature of Experiments in Color Science

    The so-called opponent process theory of color vision has played a prominent role in recent philosophical debates on color. Several philosophers have argued that this theory can be used to reduce color experiences to properties of neural cells. I will refute this argument by displaying some of the problematic features of the experimental inference present in color science. Along the way I will explicate some of the methodological strategies employed by vision scientists to accomplish integration across the mind-body boundary. At worst, the integration follows the looks-like methodology where effects resemble their causes. The modern textbook model for human color vision consists of three hypothetical color channels, red-green, blue-yellow, and white-black. These are assumed to be directly responsible for their respective color sensations. The hue channels are opponent in that light stimulation can cause only one of the respective hue sensations. The channels are also seen as consisting of opponent neural cells. The cells and the channels are claimed to have similar response properties. In my work, I reconstruct some of the critical experiments underwriting the textbook model. The centerpiece is an analysis of Hurvich and Jameson's color cancellation experiment. I demonstrate that the experiment cannot rule out the contradictory alternative hypothesis for opponent channels without making question-begging assumptions. In order to accomplish this, I clarify the theorizing of Hurvich and Jameson's predecessor, Ewald Hering, as well as the classic trichromatic theory. I demonstrate that currently no converging evidence from neurophysiology exists for the opponent process theory. I show that the results from De Valois' studies of single cells are theory-laden. The classification into cell types assumes the textbook model. Since the textbook model is an artifact of experimental pseudo-convergence both claims for a reductive and a causal explanation of color experiences are premature.

  • Jonathan Bain (1998)

    Polytechnic Institute of NYU (Associate Professor)
    jbain@duke.poly.edu

    Philosophy of Spacetime, Scientific Realism, Philosophy of Quantum Field Theory

    Dissertation: Representations of Spacetime: Formalism and Ontological Commitment

    This dissertation consists of two parts. The first is on the relation between formalism and ontological commitment in the context of theories of spacetime, and the second is on scientific realism. The first part begins with a look at how the substantivalist/relationist debate over the ontological status of spacetime has been influenced by a particular mathematical formalism, that of tensor analysis on differential manifolds (TADM). This formalism has motivated the substantivalist position known as manifold substantivalism. Chapter 1 focuses on the hole argument which maintains that manifold substantivalism is incompatible with determinism. I claim that the realist motivations underlying manifold substantivalism can be upheld, and the hole argument avoided, by adopting structural realism with respect to spacetime. In this context, this is the claim that it is the structure that spacetime points enter into that warrants belief and not the points themselves. In Chapter 2, an elimination principle is defined by means of which a distinction can be made between surplus structure and essential structure with respect to formulations of a theory in two distinct mathematical formalisms and some prior ontological commitments. This principle is then used to demonstrate that manifold points may be considered surplus structure in the formulation of field theories. This suggests that, if we are disposed to read field theories literally, then, at most, it should be the essential structure common to all alternative formulations of such theories that should be taken literally. I also investigate how the adoption of alternative formalisms informs other issues in the philosophy of spacetime. Chapter 3 offers a realist position which takes a semantic moral from the preceding investigation and an epistemic moral from work done on reliability. The semantic moral advises us to read only the essential structure of our theories literally. The epistemic moral shows us that such structure is robust under theory change, given an adequate reliabilist notion of epistemic warrant. I call the realist position that subscribes to these morals structural realism and attempt to demonstrate that it is immune to the semantic and epistemic versions of the underdetermination argument posed by the anti-realist.

  • Carl Craver (1998)

    Washington University in St. Louis (Associate Professor)
    ccraver@artsci.wustl.edu

    Visual Perception and Cognition, Metaphysics

    Dissertation: Neural Mechanisms: On the Structure, Function, and Development of Theories in Neurobiology

    Reference to mechanisms is virtually ubiquitous in science and its philosophy. Yet, the concept of a mechanism remains largely unanalyzed; so too for its possible applications in thinking about scientific explanation, experimental practice, and theory structure. This dissertation investigates these issues in the context of contemporary neurobiology. The theories of neurobiology are hierarchically organized descriptions of mechanisms that explain functions. Mechanisms are the coordinated activities of entities by virtue of which that function is performed. Since the activities composing mechanisms are often susceptible to mechanical redescription themselves, theories in neurobiology have a characteristic hierarchical structure. The activities of entities at one level are the sub-activities of those at a higher level. This hierarchy reveals a fundamental symmetry of functional and mechanical descriptions. Functions are privileged activities of entities; they are privileged because they constitute a stage in some higher-level (+1) mechanism. The privileged activities of entities, in turn, are explained by detailing the stages of activity in the lower-level (-1) mechanism. Functional and mechanical descriptions are different tools for situating activities, properties, and entities into a hierarchy of activities. They are not competing kinds of description. Experimental techniques for testing such descriptions reflect this symmetry. Philosophical discussions of inter-level explanatory relationships have traditionally been framed by reference to inter-theoretic reduction models. The representational strictures of first order predicate calculus and the epistemological strictures of logical empiricism combine in this reduction model to focus attention upon issues of identity and derivability; these are entirely peripheral to the explanatory aims of mechanical (-1) explanation. Mechanical explanation is causal. Derivational models of explanation do not adequately reflect the importance of activities in rendering phenomena intelligible. Activities are kinds of change. 'Bonding,' 'diffusing,' 'transcribing,' 'opening,' and 'attracting' all describe different kinds of transformation. Salmon's modified process theory (1998) is helpful in understanding the role of entities and properties in causal interactions; but it ultimately makes no room for kinds of change in the explanatory cupboard. We make change intelligible by identifying and characterizing its different kinds and relating these to activities that are taken to be fundamental for a science at a time.

  • Heather Douglas (1998)

    University of Waterloo (Associate Professor)

    Philosophy of Science, Environmental Philosophy, Science and Public Policy

    Dissertation: The Use of Science in Policy-Making: A Study of Values in Dioxin Science

    The risk regulation process has been traditionally conceived as having two components: a consultation of the experts concerning the magnitude of risk (risk assessment) and a negotiated decision on whether and how to reduce that risk (risk management). The first component is generally thought to be free of the contentious value judgments that often characterize the second component. In examining the recent controversy over dioxin regulation, I argue that the first component is not value-free. I review three areas of science important to dioxin regulation: epidemiological studies, laboratory animal studies, and biochemical studies. I show how problems of interpretation arise for each area of science that prevent a clear-cut answer to the question: what dose of dioxins is safe for humans? Because of significant uncertainties in how to interpret these studies, there is significant risk that one will err in the interpretation. In order to judge what risk of error to accept, one needs to consider and weigh the consequences of one's judgments, whether epistemic or non-epistemic. Weighing non-epistemic consequences requires the use of non-epistemic values. Thus, non-epistemic values, or the kind that are important in risk management, have an important and legitimate role to play in the judgments required to perform and interpret the dioxin studies. The risk assessment component of the risk regulation process (or any similar consultation of the scientific experts) cannot be claimed to be value-free and the process must be altered to accommodate a value-laden science.

  • Mark Holowchak (1998)

    Rider University (Adjunct Assistant Professor)

    Ancient Philosophy, Philosophy of Sport

    Dissertation: The Problem of Differentiation and the Science of Dreams in Graeco-Roman Antiquity

    Dreams played a vital role in Graeco-Roman antiquity at all levels of society. Interpreters of prophetic dreams thrived at marketplaces and at religious festivals. Physicians used dreams to facilitate diagnosis. Philosophers talked of dreams revealing one's moral character and emotional dispositions. Many who studied dreams developed rich and elaborate accounts of the various sorts of dreams and their formation. All of this bespeaks a science of dreams in antiquity. Did these ancients, by a thorough examination of the content of dreams and their attendant circumstances, develop criteria for distinguishing the kinds or functions of dreams and, if so, were these criteria empirically reliable? I attempt to answer these questions chiefly through an evaluation of ancient Graeco-Roman 'oneirology' (the science of dreams) in the works of eight different Graeco-Roman oneirologists, especially philosophers and natural scientists, from Homer to Synesius. First, I argue that Homer's famous reference to two gates of dreams led subsequent thinkers to believe in prophetic and nonprophetic dreams. Additionally, the two gates engendered a practical approach to dreams that had a lasting impact on Graeco-Roman antiquity, especially through interpreters of prophetic dreams. Yet, as interpreters of dreams prospered, critics challenged the validity of their art. Ultimately, I argue that the interpreters' responses to their critics were unavailing. Moreover, the emergence of the belief in an agentive soul around the fifth century B.C. paved the way for psychophysiological accounts of dreams. Philosophers and physicians thereafter began to explore nonprophetic meanings of dreams--like moral, psychological, or somatic meanings. Some philosophers rejected the notion of prophecy through dreams altogether, while many essayed to ground prophetic dreams by giving them psychophysiological explanations like other dreams. In general, those oneirologists who tried to give all dreams a psychophysiological explanation bypassed the problem of differentiating dreams by positing, strictly speaking, only one kind of dream—though committing themselves to a plurality of functions for them. In summary, I argue that the ancient Graeco-Roman oneirology—as a thorough admixture of the practical, Homeric approach to dreams and the psychogenetic approach—was an inseparable blend of literary fancy and respectable science.

  • David Sandborg (1998)

    Left the field

    Philosophy of Mathematics, Explanation

    Dissertation: Explanation in Mathematical Practice

    Philosophers have paid little attention to mathematical explanations (Mark Steiner and Philip Kitcher are notable exceptions). I present a variety of examples of mathematical explanation and examine two cases in detail. I argue that mathematical explanations have important implications for the philosophy of mathematics and of science. The first case study compares many proofs of Pick's theorem, a simple geometrical result. Though a simple proof suffices to establish the result, some of the proofs explain the result better than others. The second case study comes from George Polya's Mathematics and Plausible Reasoning. He gives a proof that, while entirely satisfactory in establishing its conclusion, is insufficiently explanatory. To provide a better explanation, he supplements the proof with additional exposition. These case studies illustrate at least two distinct explanatory virtues, and suggest there may be more. First, an explanatory improvement occurs when a sense of 'arbitrariness' is reduced in the proofs. Proofs more explanatory in this way place greater restrictions on the steps that can be used to reach the conclusion. Second, explanatoriness is judged by directness of representation. More explanatory proofs allow one to ascribe geometric meaning to the terms of Pick's formula as they arise. I trace the lack of attention to mathematical explanations to an implicit assumption, justificationism, that only justificational aspects of mathematical reasoning are epistemically important. I propose an anti-justificationist epistemic position, the epistemic virtues view, which holds that justificational virtues, while important, are not the only ones of philosophical interest in mathematics. Indeed, explanatory benefits are rarely justificational. I show how the epistemic virtues view and the recognition of mathematical explanation can shed new light on philosophical debates. Mathematical explanations have consequences for philosophy of science as well. I show that mathematical explanations provide serious challenges to any theory, such as Bas van Fraassen's, that considers explanations to be fundamentally answers to why-questions. I urge a closer interaction between philosophy of mathematics and philosophy of science; both will be needed for a fuller understanding of mathematical explanation.

  • Marta Spranzi-Zuber (1998)

    Université de Versailles St-Quentin-en-Yvelines

    Ancient and Early Modern Philosophy

    Dissertation: The tradition of Aristotle's Topics and Galileo's Dialogue Concerning the Two Chief World Systems: Dialectic, dialogue, and the demonstration of the Earth's motion

    In this work I show that Galileo Galilei provided a "dialectical demonstration" of the Earth's motion in the Dialogue concerning the two chief world systems, in the sense outlined in Aristotle's Topics. In order to understand what this demonstration consists of, I reconstructed the tradition of dialectic from Aristotle to the Renaissance, analyzing its developments with Cicero, Boethius, the Middle Ages up to the 16th century. As far as Renaissance developments are concerned, I singled out three domains where the tradition of Aristotle's Topics was particularly important: "pure" Aristotelianism, the creation of a new dialectic modelled on rhetoric, and finally the theories of the dialogue form. In each case I focused on a particular work which is not only interesting in its own right, but also represents well one of these developments: Agostino Nifo's commentary to Aristotle's Topics, Rudolph Agricola's De inventione dialectica, and Carlo Sigonio's De dialogo liber, respectively. As far as Galileo is concerned, I focused on the first Day of the Dialogue where Galileo proves that the Earth is a planet, as an example of dialectical strategy embodied in a literary dialogue. Galileo's dialectical demonstration of the Earth's motion can be identified neither with rhetorical persuasion nor with scientific (empirical) demonstration. Rather, it is a strategy of inquiry and proof which is crucially dependent on an exchange between two disputants through a question and answer format. A dialectical demonstration does not create consensus on a given thesis, nor does it demonstrate it conclusively, but yields corroborated and justified knowledge, albeit provisional and contextual, namely open to revision, and dependent upon the reasoned assent of a qualified opponent.

  • Andrea Woody (1998)

    University of Washington (Associate Professor)
    awoody@uw.edu

    Philosophy of Science, History of Science, and Feminist Perspectives within Philosophy

    Dissertation: Early twentieth century theories of chemical bonding: Explanation, representation, and theory development

    This dissertation examines how we may meaningfully attribute explanatoriness to theoretical structures and in turn, how such attributions can, and should, influence theory assessment generally. In this context, I argue against 'inference to the best explanation' accounts of explanatory power as well as the deflationary 'answers to why questions' proposal of van Fraassen. Though my analysis emphasizes the role of unification in explanation, I demonstrate ways in which Kitcher's particular account is insufficient. The suggested alternative takes explanatory power to be a measure of theory intelligibility; thus, its value resides in making theories easy to probe, communicate, and ultimately modify. An underlying goal of the discussion is to demonstrate, even for a small set of examples, that not all components of rational assessment distill down, in one way or another, to evaluations of a theory's empirical adequacy. Instead, the merits of explanatory structures are argued to be forward-looking, meaning that they hold the potential to contribute significantly to theory development either by providing directives for theoretical modification, perhaps indirectly by guiding empirical investigation, or by facilitating various means of inferential error control. The dissertation's central case study concerns the development of twentieth century quantum mechanical theories of the chemical bond, provocative territory because of the diversity of models and representations developed for incorporating a computationally challenging, and potentially intractable, fundamental theory into pre-existing chemical theory and practice. Explicit mathematical techniques as well as various graphical, schematic, and diagrammatic models are examined in some detail. Ultimately these theoretical structures serve as the landscape for exploring, in a preliminary fashion, the influence of representational format on inferential capacities generally. Although the connection between representation and explanation is seldom emphasized, this dissertation offers evidence of the high cost of such neglect.

  • Rachel Ankeny (1997)

    The University of Adelaide (Associate Dean (Research) and Deputy Executive Dean, Faculty of Arts)
    rachel.ankeny@adelaide.edu.au

    History and Philosophy of Biological and Biomedical Sciences; Bioethics

    Dissertation: The conqueror worm: An historical and philosophical examination of the use of the nematode Caenorhabditis elegans as a model organism

    This study focuses on the concept of a "model organism" in the biomedical sciences through an historical and philosophical exploration of research with the nematode Caenorhabditis elegans. I examine the conceptualization of a model organism in the case of the choice and early use of C. elegans in the 1960s, showing that a rich context existed within which the organism was selected as the focus for a fledgling research program in molecular biology. I argue that the choice of C. elegans was obvious rather than highly inventive within this context, and that the success of the "worm project" depends not only on organismal choice but also on the conceptual and institutional frameworks within which the project was pursued.

    I also provide a selective review of the C. elegans group research in the late 1960s through the early 1980s as support for several theses. Although development and behavior were the general areas of interest for the research project, the original goals and proposed methodology were extremely vague. As the project evolved, which investigations proved to be tractable using the worm depended not only on which methodologies were fruitful but also on the interests and skills of early workers. I also argue that much of the power of C. elegans as a model organism can be traced historically to the investment of resources in establishing a complete description of the organism which was relatively unprecedented, and which methodologically represents a return to a more naturalistic biological tradition.
    In light of the historical study, I provide a philosophical analysis of various components that have contributed to the conceptualization of a model organism in the case of C. elegans. I synthesize several components of traditional views on modeling in the philosophy of science and expand the concept of a descriptive model, which allows C. elegans to be viewed as a prototype of the metazoa, through an exploration of the three kinds of modeling that occur with C. elegans: modeling of structures, of processes, and of information. I argue that C. elegans as a model organism has been not only heuristically valuable, but also essential to this research project. I conclude by suggesting that more investigation of descriptive models such as those generated in the worm project must be done to capture important aspects of the biomedical sciences that may otherwise be neglected if explanatory models are the sole focus in the philosophy of science.

  • Jonathan Simon (1997)

    Université Claude Bernard Lyon 1

    History of Chemistry

    Dissertation: The alchemy of identity: Pharmacy and the chemical revolution, 1777-1809

    This dissertation reassesses the chemical revolution that occurred in eighteenth-century France from the pharmacists' perspective. I use French pharmacy to place the event in historical context, understanding this revolution as constituted by more than simply a change in theory. The consolidation of a new scientific community of chemists, professing an importantly changed science of chemistry, is elucidated by examining the changing relationship between the communities of pharmacists and chemists across the eighteenth century. This entails an understanding of the chemical revolution that takes into account social and institutional transformations as well as theoretical change, and hence incorporates the reforms brought about during and after the French Revolution. First, I examine the social rise of philosophical chemistry as a scientific pursuit increasingly independent of its practical applications, including pharmacy, and then relate this to the theoretical change brought about by Lavoisier and his oxygenic system of chemistry. Then, I consider the institutional reforms that placed Lavoisier's chemistry in French higher education. During the seventeenth century, chemistry was intimately entwined with pharmacy, and chemical manipulations were primarily intended to enhance the medicinal properties of a substance. An independent philosophical chemistry gained ground during the eighteenth century, and this development culminated in the work of Lavoisier, who cast pharmacy out of his chemistry altogether. Fourcroy, one of Lavoisier's disciples, brought the new chemistry to the pharmacists in both his textbooks and his legislation. Under Napoleon, Fourcroy instituted a new system of education for pharmacists that placed a premium on formal scientific education. Fourcroy's successors, Vauquelin and Bouillon-Lagrange, taught the new chemistry to the elite pharmacists in the School of Pharmacy in Paris. These pharmacists also developed new analytical techniques that combined the aims of the new chemistry with traditional pharmaceutical extractive practices. Thus was created the scientific pharmacist (Pelletier and Caventou, for example), who, although a respected member of the community of pharmacists, helped to define the new chemistry precisely by not being a true chemist.

  • Aristidis Arageorgis (1996)

    National Technical University of Athens (Assistant Professor)

    Philosophy of Quantum Field Theory

    Dissertation: Fields, Particles, and Curvature: Foundation and Philosophical Aspects of Quantum Field Theory in Curved Spacetime

    The physical, mathematical, and philosophical foundations of the quantum theory of free Bose fields in fixed general relativistic spacetimes are examined. It is argued that the theory is logically and mathematically consistent whereas semiclassical prescriptions for incorporating the back-reaction of the quantum field on the geometry lead to inconsistencies. Still, the relations and heuristic value of the semiclassical approach to canonical and covariant schemes of quantum gravity-plus-matter are assessed. Both conventional and rigorous formulations of the theory and of its principal predictions, cosmological particle creation and horizon radiation, are expounded and compared. Special attention is devoted to spacetime properties needed for the existence or uniqueness of the relevant theoretical elements (algebra of observables, Hilbert space representation(s), renormalization of the stress tensor). The emergence of unitarily inequivalent representations in a single dynamical context is used as motivation for the introduction of the abstract $C^*$-algebraic axiomatic formalism. The operationalist and conventionalist claims of the original abstract algebraic program are criticized in favor of its tempered outgrowth, local quantum physics. The interpretation of the theory as a wave mechanics of classical field configurations, deriving from the Schrödinger representations of the abstract algebra, is discussed and is found superior, at least on the level of analogy, to particle or harmonic oscillator interpretations. Further, it is argued that the various detector results and the Fulling nonuniqueness problem do not undermine the particle concept in the ways commonly claimed. In particular, arguments are offered against the attribution of particle status to the Rindler quanta, against the physical realizability of the Rindler vacuum, and against the more general notion of observer-dependence as to the definition of 'particle' or 'vacuum'. However, the question of the ontological status of particles is raised in terms of the consistency of quantum field theory with non-reductive realism about particles, the latter being conceived as entities exhibiting attributes of discreteness and localizability. Two arguments against non-reductive realism about particles, one from axiomatic algebraic local quantum theory in Minkowski spacetime and one from quantum field theory in curved spacetime, are developed.
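
    For orientation, one standard result behind the "horizon radiation" and the dispute over Rindler quanta mentioned above (a textbook formula, not a result established in the dissertation): a uniformly accelerated observer with proper acceleration $a$ finds the Minkowski vacuum, restricted to the Rindler wedge, to be a thermal state at the Unruh temperature

    $$T_{\mathrm{U}} = \frac{\hbar\, a}{2\pi\, c\, k_{\mathrm{B}}}.$$

    Whether this thermal description licenses attributing particle status to Rindler quanta is precisely the interpretive question the dissertation answers in the negative.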

  • Keith Parsons (1996)

    University of Houston, Clear Lake (Professor)

    Paleontology, Realism-Constructivism

    Dissertation: Wrongheaded science? Rationality, constructivism, and dinosaurs

    Constructivism is the claim that the "facts" of science are "constructs" created by scientific communities in accordance with the linguistic and social practices of that community. In other words, constructivists argue that scientific truth is nothing more than what scientific communities agree upon. Further, they hold that such agreement is reached through a process of negotiation in which "nonscientific" factors, e.g., appeals to vested social interests, intimidation, etc., play a more important role than traditionally "rational" or "scientific" considerations. This dissertation examines and evaluates the arguments of three major constructivists: Bruno Latour, Steve Woolgar, and Harry Collins. The first three chapters are extended case studies of episodes in the history of dinosaur paleontology. The first episodes examined are two controversies that arose over the early reconstructions of sauropods. The more important dispute involved the decision by the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania, to mount a head on its Apatosaurus specimen which, after 45 years, the museum came to regard as the wrong head. The second case study involves the controversy over Robert Bakker's dinosaur endothermy hypothesis. Finally, I examine David Raup's role in the debate over the Cretaceous/Tertiary extinctions. In particular, I evaluate certain Kuhnian themes about theory choice by examining Raup's 'conversion' to a new hypothesis. In the last three chapters I critically examine constructivist claims in the light of the case studies. The thesis of Latour and Woolgar's Laboratory Life is clarified; I argue that each author has a somewhat different interpretation of that thesis. Both interpretations are criticized. The constructivist arguments of Harry Collins' Changing Order are also examined and rejected. I conclude that a constructivist view of science is not preferable to a more traditionally rationalist account. A concluding meditation reflects on the role of the history of science in motivating constructivist positions.

  • Ofer Gal (1996)

    University of Sydney (Associate Professor)

    Early Modern History and Philosophy of Science

    Dissertation: Producing knowledge: Robert Hooke

    This work is an argument for the notion of knowledge production. It is an attempt at an epistemological and historiographic position which treats all facets and modes of knowledge as products of human practices, a position developed and demonstrated through a reconstruction of two defining episodes in the scientific career of Robert Hooke (1635-1703): the composition of his Programme for explaining planetary orbits as inertial motion bent by centripetal force, and his development of the spring law in relation to his invention of the spring watch. The revival of interest in the history of experimental and technological knowledge has accorded Hooke much more attention than before. However, dependent on the conception of knowledge as a representation of reality, this scholarship is bound to the categories of influence and competition, and concentrates mainly on Hooke's numerous passionate exchanges with Isaac Newton and Christiaan Huygens. I favourably explore the neo-pragmatist criticism of representation epistemology in the writings of Richard Rorty and Ian Hacking. This criticism exposes the conventional portrayal of Hooke as 'a mechanic of genius, rather than a scientist' (Hall) as a reification of the social hierarchy between Hooke's Royal Society employers and his artisan-experimenter employees. However, Rorty and Hacking's efforts to do away with the image of the human knower as an enclosed realm of 'ideas' have not been completed. Undertaking this unfinished philosophical task, my main strategy is to erase the false gap between knowledge which is clearly produced (practical, technological, and experimental 'know-how') and knowledge which we still think of as representation (theoretical 'knowing that'). I present Hooke, Newton, and Huygens as craftsmen who, employing various resources, labor to manufacture material and theoretical artifacts. Eschewing the category of independent facts awaiting discovery, I attempt to compare practices and techniques rather than to adjudicate priority claims, replacing ideas which 'develop', 'inspire', and 'influence' with tools and skills which are borrowed, appropriated, and modified for new uses. This approach enables me to trace Hooke's creation of his Programme from his microscopy, and to reconstruct his use of springs to structure a theory of matter. With his unique combination of technical and speculative talents, Hooke comes to personify the relations between the theoretical-linguistic and the experimental-technological in their full complexity.

  • David Rudge (1996)

    Western Michigan University (Associate Professor)
    david.rudge@wmich.edu

    The Role of History and Philosophy of Science for the Teaching and Learning of Science

    Dissertation: A philosophical analysis of the role of selection experiments in evolutionary biology

    My dissertation philosophically analyzes experiments in evolutionary biology, an area of science where experimental approaches have tended to supplement, rather than supersede, more traditional approaches such as field observations. I conduct the analysis on the basis of three case studies of famous episodes in the history of selection experiments: H. B. D. Kettlewell's investigations of industrial melanism in the peppered moth, Biston betularia; two of Th. Dobzhansky's studies of adaptive radiation in the fruit fly, Drosophila pseudoobscura; and M. Wade's studies of group selection in the flour beetle, Tribolium castaneum. The case studies analyze the arguments and evidence these investigators used to identify the respective roles of experiments and other forms of inquiry in their investigations. I discuss three philosophical issues. First, the analysis considers whether these selection experiments fit models of experimentation developed in the context of micro- and high-energy physics by Allan Franklin (1986, 1990) and Peter Galison (1987). My analysis documents that the methods used in the case studies can be accommodated on both Franklin's and Galison's views. I conclude that the case studies do not support claims regarding the relative autonomy of biology. Second, the analysis documents a number of important roles for life history data acquired by strictly observational means in the process of experimentation, from identification of research problems and development of experimental designs to interpretation of results. Divorced from this context, experiments in biology make no sense. Thus, in principle, experimental approaches cannot replace more traditional methods. Third, the analysis examines a superficial tension between the use of experiments, which I characterize by the presence of artificial intervention, and the stated goal of most investigations in evolutionary biology, that of understanding how systems behave in the absence of intervention. Experiments involve trade-offs between the control one has over the circumstances of the study and how informative the study is with regard to questions of interest to biologists regarding specific, actual systems in nature. Experimental simulations of natural phenomena in other historical sciences (e.g., meteorology) involve similar trade-offs, but there are reasons for believing this tension is more prominent in biology.

  • Madeline Muntersbjorn (1996)

    University of Toledo (Associate Professor)
    mmuster@uoft02.utoledo.edu

    History and Philosophy of Mathematics, Calculus in the Seventeenth Century

    Dissertation: Algebraic Reasoning and Representation in Seventeenth Century Mathematics: Fermat and the Treatise on Quadrature c. 1657

    Contemporary philosophers of mathematics commonly assume that mathematical reasoning is representation neutral, or that changes from one notational system to another do not reflect corresponding changes in mathematical reasoning. Historians of mathematics commonly hypothesize that the incorporation of algebraic representations into geometrical pursuits contributed to the problem-solving generality of seventeenth-century mathematical techniques and to the invention of the infinitesimal calculus. In order to critically evaluate the relative merits of these positions, the dissertation analyzes representational techniques employed by Pierre de Fermat (1601-1665) in the development of seventeenth-century quadrature methods. The detailed case study of Fermat's Treatise on Quadrature c. 1657 illustrates the manner in which his representational strategy contributes to the generality of his quadrature methods. The dissertation concludes that, although seventeenth-century mathematicians' use of algebraic representations cannot simpliciter explain the generality of mathematical techniques developed during that time, Fermat's use of a variety of representational means—figures, discursive text, equations, and so on—can explain the generality of his methods. Thus, the dissertation lays the foundation for a larger argument against the common philosophical assumption of representation neutrality and for the thesis that developing a good representational strategy is a philosophically significant feature of mathematical reasoning.
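
    For readers unfamiliar with the mathematics at stake, the kind of result Fermat's Treatise on Quadrature secures can be stated in modern notation (the notation, added here only for orientation, is anachronistic): by partitioning the axis in a geometric progression, Fermat obtains the quadrature of the "higher parabolas" $y = x^{p/q}$, equivalent in modern terms to

    $$\int_0^a x^{n}\, dx = \frac{a^{\,n+1}}{n+1} \qquad (n = p/q > 0),$$

    and extends the same technique to the infinite branches of the higher hyperbolas, a uniformity of method that the case study connects to his coordinated use of figures, discursive text, and equations.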

  • Michel Janssen (1995)

    University of Minnesota (Associate Professor)
    janss011@tc.umn.edu

    Philosophy of Physics, History of Relativity Theory

    Dissertation: A comparison between Lorentz's ether theory and special relativity in the light of the experiments of Trouton and Noble

    In Part One of this dissertation, I analyze various accounts of two ether-drift experiments, the Trouton-Noble experiment and an earlier experiment by Trouton. Both aimed at detecting ether drift with the help of a condenser in a torsion balance. I argue that the difficulties ether theorists Lorentz and Larmor had in accounting for the negative results of these experiments stem from the fact that they did not (properly) take into account that, if we charge a moving condenser, we not only change its energy, but also its momentum and its mass. I establish two additional results. (1) The Trouton experiment can be seen as a physical realization of a thought experiment used by Einstein to argue for the inertia of energy. (2) Closely following Rohrlich, I develop an alternative to Laue's canonical relativistic account of the Trouton-Noble experiment to show that the turning couple Trouton and Noble were looking for is a purely kinematical effect in special relativity. I call this effect the Laue effect.
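
    To make the physical point concrete, here is a standard statement of the inertia of energy in modern terms (added for orientation, not a quotation from the dissertation): if an amount of energy $\Delta E$ is stored in a body moving with velocity $v$, then, to first order in $v/c$, the body's mass and momentum change by

    $$\Delta m = \frac{\Delta E}{c^{2}}, \qquad \Delta p = \frac{\Delta E}{c^{2}}\, v.$$

    It is this momentum bookkeeping for the charged, moving condenser that, on the account above, Lorentz and Larmor failed to carry through consistently.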

    In Part Two, I use these results to illustrate some general claims about the post-1905 version of Lorentz's ether theory. I use (1) to illustrate that Lorentz needs to assume more than the contraction of rods and the retardation of clocks to make his ether theory empirically equivalent to special relativity. I use (2) to illustrate that what makes the addition of such assumptions unsatisfactory is not that it would make the theory ad hoc, in the sense that it would compromise its testability, but that it makes Lorentz invariance a symmetry of the dynamics in a classical Newtonian space-time, whereas, in fact, it is a symmetry of the relativistic Minkowski space-time. To provide the necessary context for my claims, I give a detailed account of the conceptual development of Lorentz's theory from 1895 to 1916. In particular, I analyze the relation between the so-called theorem of corresponding states and what I call the generalized contraction hypothesis. I show that the various versions of Lorentz's theory have been widely misunderstood in the literature.