By Luis Favela
Greetings! Thanks to Marcus Arvan for providing a much-needed and supportive environment for early-career philosophers such as myself. As noted in Marcus’ introduction, I recently began a position at the University of Central Florida and earned my Ph.D. in the philosophy and life sciences program at the University of Cincinnati (UC). While earning my Ph.D. at UC, I concurrently earned a master’s degree in experimental psychology. UC’s department of psychology is unique in that a number of its faculty and labs—especially the Center for Cognition, Action, and Perception—are guided by theories and methods that can be said to fall under the heading of “Complexity Science.”
So, what is “complexity science”? Complexity science (or complex systems theory, complex dynamical systems, etc.) is one of those things that does not have a clear and broadly agreed-upon definition. Moreover, it manifests in various forms across disciplines such as biology, economics, and physics. My interest in complexity science is generally limited to how it relates to the cognitive, neural, and psychological sciences. I’ve found it useful to think of complexity science as a cluster of theoretical commitments and methods. In general, complexity science is committed to the idea that the phenomena under investigation are dynamic, nonlinear, and involve many interacting parts. “Dynamic” refers to the idea that systems exhibit properties and behaviors over time; that is, they are best understood not in terms of slices of time but as evolving over time. “Nonlinear” refers to the idea that systems are non-additive: their outputs are not directly proportional to their inputs. Thus, the interaction of parts within a complex system is best characterized as happening over time and often in a nonlinear manner.
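For readers who like to see these ideas made concrete, non-additivity can be illustrated with a tiny sketch. This uses the logistic map, a stock textbook example I am supplying for illustration (it is not discussed above): the output of a summed input is not the sum of the outputs, and trajectories starting almost identically can diverge over time.

```python
# A minimal sketch of nonlinearity (non-additivity), using the logistic map
# x_{t+1} = r * x_t * (1 - x_t) as a standard toy example.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the map from initial condition x0, returning every state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

# Non-additivity: the response to a summed input is not the sum of responses.
print(logistic(0.1) + logistic(0.2))  # 1.0
print(logistic(0.1 + 0.2))            # 0.84

# Sensitivity: two trajectories starting 1e-6 apart eventually diverge.
diffs = [abs(a - b)
         for a, b in zip(trajectory(0.3, 50), trajectory(0.3 + 1e-6, 50))]
print(max(diffs))
```

The point of the sketch is just that knowing a system’s response to each input separately does not tell you its response to their combination, which is what “outputs are not directly proportional to inputs” amounts to in practice.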
Thus far it doesn’t seem that complexity science is much different from “non-complexity science.” A couple of theoretical commitments make it more distinguishable: emergence and self-organization. “Emergence” is a notoriously controversial concept in philosophy. In relation to complexity science, “emergence” refers to the idea that systems have collective behaviors that are difficult to predict from knowledge of the constituents. In this sense, emergence is more epistemic than ontological: it refers to properties of a system that the investigator may have difficulty predicting, and is less committed to the idea that those collective behaviors are “more than the sum of the parts.” Self-organization, however, can be understood as having more ontological import. A system exhibits self-organization when its collective behavior results from the interactions of the parts and not from a central controller. Termite nests are a classic example: a complicated behavior, building a nest, results from local interactions among the termites, with no central controller guiding the construction. Complexity science thus includes, but is not limited to, the following cluster of concepts: dynamics, emergence, nonlinearity, and self-organization.
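The “no central controller” idea can also be sketched in a few lines of code. The example below uses an elementary cellular automaton (Rule 30), a standard toy system I am supplying for illustration rather than one drawn from the post: every cell updates using only its own state and its two neighbors’, yet a complicated global pattern forms with nothing coordinating the whole.

```python
# A minimal sketch of global pattern formation from purely local rules,
# using elementary cellular automaton Rule 30. No cell "knows about" the
# overall pattern; each consults only its immediate neighbors.

def step(cells):
    """Apply Rule 30 to every cell: new = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def run(width=31, steps=10):
    """Start from a single 'on' cell and iterate, returning all rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = step(row)
        rows.append(row)
    return rows

for row in run(width=31, steps=10):
    print("".join("#" if c else "." for c in row))
```

Running it prints an intricate, irregular triangle of cells. The analogy to termites is loose, of course, but the structural point carries over: the pattern belongs to the collective, not to any part.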
In order to investigate and explain phenomena that exhibit these features, complexity science utilizes a number of methods. Since it deals with phenomena that often involve many parts interacting nonlinearly, two of the most utilized groups of methods are nonlinear data analyses and computational modeling. Some common forms of nonlinear analysis are fractal analyses (e.g., detrended fluctuation analysis) and recurrence and cross-recurrence quantification analyses. These methods, especially fractal analysis, contrast with linear methods such as the ANOVAs and t-tests that many (most? [all?!]) practitioners in the cognitive, neural, and psychological sciences learn in grad school and continue to utilize throughout their careers. One problem with these more typical methods is that they assume “normal” (or “standard”) distributions of data: the expectation, underwritten by the central limit theorem, that with “enough” samples the data will distribute along a bell curve. One problem with this assumption is that not all natural phenomena exhibit properties that produce normally distributed data. Often in the cognitive, neural, and psychological sciences, outliers are trimmed and fluctuations in data are treated as “noise” that merely obscures the phenomenon of interest. A complex systems approach, utilizing methods such as fractal analysis, can be used to make the case that some phenomena do not produce normally distributed data. In fact, what many consider to be “noise” could actually be an important feature of the system. Sometimes the “noise is the message”—which, by the way, after a web search I learned is the name of a very interesting song by a German DJ.
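To give a flavor of what these methods actually do, here is an illustrative (and deliberately simplified) sketch of detrended fluctuation analysis, the fractal method mentioned above. The function names and window choices are mine, not a reference implementation: the method integrates a signal, removes local linear trends at several window sizes, and reads off a scaling exponent alpha from how the residual fluctuations grow with window size. Uncorrelated white noise yields alpha near 0.5, while a random walk yields alpha near 1.5, so alpha distinguishes kinds of “noise” that look alike to a t-test.

```python
# A minimal, illustrative sketch of detrended fluctuation analysis (DFA).
import numpy as np

def dfa_alpha(signal, window_sizes):
    """Estimate the DFA scaling exponent of a 1-D signal."""
    # Integrate the mean-centered signal (the "profile").
    profile = np.cumsum(signal - np.mean(signal))
    fluctuations = []
    for w in window_sizes:
        n_windows = len(profile) // w
        f2 = []
        for k in range(n_windows):
            seg = profile[k * w:(k + 1) * w]
            t = np.arange(w)
            # Remove the local linear trend in each window...
            coeffs = np.polyfit(t, seg, 1)
            resid = seg - np.polyval(coeffs, t)
            # ...and record the residual variance.
            f2.append(np.mean(resid ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # Alpha is the slope of log F(w) against log w.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
windows = [16, 32, 64, 128, 256]
white = rng.standard_normal(8192)            # uncorrelated noise
walk = np.cumsum(rng.standard_normal(8192))  # random walk
print(dfa_alpha(white, windows))  # close to 0.5
print(dfa_alpha(walk, windows))   # close to 1.5
```

The payoff for the argument above is that the structure of the fluctuations, not just their average, carries information about the system that produced them.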
Hopefully, this brief (and far from adequate) introduction to complexity science at least gives you an idea of its theoretical commitments, the concepts it uses, and the methods it employs. For more on complexity science, see Chemero & Silberstein, the Encyclopedia of Complexity and Systems Science, Hooker et al., Mitchell, Richardson, Dale, & Marsh, and Riley & Holden. With this background in place, my next posts will show how I’ve put this framework to work in my research.