Sunday, September 29, 2013

How to probe public attitudes?

We are almost always interested in knowing how the public thinks and feels about various issues -- global warming, race relations, the fairness of rising income inequalities, and the acceptability of same-sex marriage, for example. The public is composed of millions of individuals, and the population can be segmented in a variety of relevant ways -- gender, age, race, region, political affiliation, and many other cleavages. So we might want to know how teenagers think about alcohol, and how these attitudes have changed over time, or how attitudes towards the equality of women have evolved since 1960.

What research tools are available to us to investigate public opinion? And how reliable are these tools?

The most immediate answer is survey research. We can formulate a set of survey questions, select a sample of respondents, and tabulate the distribution of responses. And if we repeat the survey on several occasions, we can make some inferences about changes in attitudes over time. There are innumerable examples of such studies, from the GSS to the Eurobarometer to the Pew Research Center's frequent polling. And we seem to learn some important things from them about the distribution of attitudes across time and space: Spaniards are less concerned about fair trade produce than Swedes, young people have become more accepting of same-sex marriage, and people over 65 are more sympathetic to Tea Party values than people in their thirties.
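To make the tabulation step concrete, here is a minimal sketch of how survey responses might be cross-tabulated by segment and survey wave. The figures, age groupings, and pandas-based approach are illustrative assumptions, not data drawn from the GSS, Eurobarometer, or Pew files.

```python
# A minimal, illustrative tabulation of survey responses by segment and wave.
# The numbers below are invented for the sketch; a real analysis would use the
# actual microdata files of a survey program.
import pandas as pd

responses = pd.DataFrame({
    "year":      [2000, 2000, 2000, 2010, 2010, 2010],
    "age_group": ["18-29", "30-64", "65+", "18-29", "30-64", "65+"],
    "agree":     [0.42, 0.35, 0.21, 0.63, 0.48, 0.30],   # share agreeing with the item
    "n":         [450, 900, 300, 430, 880, 350],         # respondents per cell
})

# Overall (weighted) agreement in each survey wave.
weighted = responses.assign(weighted_agree=responses["agree"] * responses["n"])
by_wave = weighted.groupby("year")["weighted_agree"].sum() / weighted.groupby("year")["n"].sum()

# Change over time within each age group.
trend = responses.pivot(index="age_group", columns="year", values="agree")
trend["shift"] = trend[2010] - trend[2000]

print(by_wave)
print(trend)
```

The point is simply that the raw material of survey research is a cross-tabulation of this kind, which then supports inferences about differences across segments and change across waves.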

A second approach is more qualitative. Researchers sometimes use open-ended interviews and focus groups to learn more directly how various groups and individuals think about certain topics. We may learn more from such studies than from a mass survey -- for example, the interviewer may gain a better understanding of the reasoning individuals use to reach their beliefs. A survey question may ask whether a consumer is willing to pay 10% more for fair trade bananas, whereas a series of interviews may reveal that the "no" respondents break into one group who lack the discretionary money and another group who oppose fair trade pricing on ideological grounds.

There are more indirect methods for studying public opinion as well. We might examine the comments submitted to newspapers on topics of interest and try to quantify the "temperature" of those comments over time -- more intolerant, more angry, more reflective. Likewise, we might attempt to quantify the streams of social media -- Twitter, Facebook, Tumblr -- with an eye towards gauging the attitudes and emotions of various segments of the public. Consider the vitriol that exploded on Twitter following the selection of Miss New York as Miss America a few weeks ago.
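Here is one way the "temperature" idea might be operationalized, as a rough sketch only. The word lists, comments, and scoring rule are invented for illustration; serious work would use a validated sentiment classifier and a much larger corpus.

```python
# A rough sketch of tracking the emotional "temperature" of a comment stream
# over time. Lexicons and comments are invented; real studies would rely on a
# validated sentiment model rather than toy word lists.
from datetime import date
from collections import defaultdict

ANGRY_WORDS = {"disgrace", "idiot", "hate", "outrage"}
REFLECTIVE_WORDS = {"consider", "perhaps", "understand", "wonder"}

comments = [
    (date(2013, 9, 1),  "I hate this policy, what a disgrace"),
    (date(2013, 9, 2),  "Perhaps we should consider the other side"),
    (date(2013, 9, 16), "Outrage! Only an idiot would support this"),
]

def temperature(text):
    """Positive scores read as angrier, negative as more reflective."""
    words = set(text.lower().replace("!", " ").replace(",", " ").split())
    return len(words & ANGRY_WORDS) - len(words & REFLECTIVE_WORDS)

monthly = defaultdict(list)
for day, text in comments:
    monthly[(day.year, day.month)].append(temperature(text))

for month, scores in sorted(monthly.items()):
    print(month, sum(scores) / len(scores))
```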

Attitudes towards race in America are especially interesting to me. How have Americans changed in the ways they think about race? Have Americans become less racially intolerant since the era of the civil rights movement? Survey data seem to give a qualified "yes" to this question. Answers to survey questions that explicitly probe the individual's level of racial antagonism suggest that, on average, antagonism has declined. But scholars in race studies such as Tyrone Forman and Eduardo Bonilla-Silva argue that the survey results are misleading (link). Using extended interviews and a rigorous method of interpreting the transcripts, they find a widening divergence between the responses American college students give to surveys probing their racial attitudes and the values and beliefs revealed through extensive open-ended interviews.

In consonance with this new structure, various analysts have pointed out that a new racial ideology has emerged that, in contrast to the Jim Crow racism or the ideology of the color line (Johnson, 1943, 1946; Myrdal, 1944), avoids direct racial discourse but effectively safeguards racial privilege (Bobo et al., 1997; Bonilla-Silva and Lewis, 1999; Essed, 1996; Jackman, 1994; Kovel, 1984). That ideology also shapes the very nature and style of contemporary racial discussions. In fact, in the post civil rights era, overt discussions of racial issues have become so taboo that it has become extremely difficult to assess racial attitudes and behavior using conventional research strategies (Myers, 1993; Van Dijk, 1984, 1987, 1997). Although we agree with those who suggest that there has been a normative change in terms of what is appropriate racial discourse and even racial etiquette (Schuman et al., 1988), we disagree with their interpretation of its meaning. Whereas they suggest that there is a 'mixture of progress and resistance, certainty and ambivalence, striking movement and mere surface change' (p. 212), we believe (1) that there has been a rearticulation of the dominant racial themes (less overt expression of racial resentment about issues anchored in the Jim Crow era such as strict racial segregation in schools, neighborhoods, and social life in general, and more resentment on new issues such as affirmative action, government intervention, and welfare) and (2) that a new way of talking about racial issues in public venues – a new racetalk – has emerged. Nonetheless, the new racial ideology continues to help in the reproduction of White supremacy. (52)

They argue that a new "racetalk" has emerged that makes explicitly racist utterances socially unacceptable; but that the underlying attitudes have not changed as much. And this implies that survey research is likely to misrepresent the degree of change in attitudes that has occurred.

Here is a good example of the kind of analysis Forman and Bonilla-Silva provide for transcripts from the open-ended interviews:

The final case is Eric, a student at a large midwestern university, an example of the students who openly expressed serious reservations about interracial marriages (category 6). It is significant to point out that even the three students who stated that they would not enter into these relationships, claimed that there was nothing wrong with interracial relationships per se. Below is the exchange between Eric and our interviewer on this matter.

Eric: Uh . . . (sighs) I would say that I agree with that, I guess. I mean . . . I would say that I really don't have much of a problem with it but when you, ya know, If I were to ask if I had a daughter or something like that, or even one of my sisters, um . . . were to going to get married to a minority or a Black, I . . . I would probably . . . it would probably bother me a little bit just because of what you said . . . Like the children and how it would . . . might do to our family as it is. Um . . . so I mean, just being honest, I guess that's the way I feel about that.

Int.: What would, specifically, if you can, is it . . . would it be the children? And, if it's the children, what would be the problem with, um . . . uh . . . adjustment, or

Eric: For the children, yeah, I think it would just be . . . I guess, through my experience when I was younger and growing up and just . . . ya know, those kids were different. Ya know, they were, as a kid, I guess you don't think much about why kids are different or anything, you just kind of see that they are different and treat them differently. Ya know, because you're not smart enough to think about it, I guess. And the, the other thing is . . . I don't know how it might cause problems within our family if it happened within our family, ya know, just . . . from people's different opinions on something like that. I just don't think it would be a healthy thing for my family. I really can't talk about other people.

Int.: But would you feel comfortable with it pretty much?

Eric: Yeah. Yeah, that's the way I think, especially, um . . . ya know, grandparents of things like that. Um, right or wrong, I think that's what would happen. (Interview # 248: 10)

Eric used the apparent admission semantic move ("I would say that I agree with that") in his reply but could not camouflage very well his true feelings ("If I were to ask if I had a daughter or something like that, or even one of my sisters, um . . . were [sic] to going to get married to a minority or a Black, I . . . I would probably . . . it would probably bother me a little bit"). Interestingly, Eric claimed in the interview that he had been romantically interested in an Asian-Indian woman his first year in college. However, that interest "never turned out to be a real big [deal]" (Interview # 248: 9). Despite Eric's fleeting attraction to a person of color, his life was racially segregated: no minority friends and no meaningful interaction with any Black person.

This is an important argument within race studies. But it also serves as an important caution about uncritical reliance on survey research as an indicator of public attitudes and thinking.

Wednesday, September 25, 2013

What is reduction?

[Screenshot: animation of a three-body gravitational system]

The topics of methodological individualism and microfoundationalism unavoidably cross with the idea of reductionism -- the notion that higher level entities and structures need somehow to be "reduced" to facts or properties having to do with lower level structures. In the social sciences, this amounts to something along these lines: the properties and dynamics of social entities need to be explained by the properties and interactions of the individuals who constitute them. Social facts need to reduce to a set of individual-level facts and laws. Similar positions arise in psychology ("psychological properties and dynamics need to reduce to facts about the activities and properties of the central nervous system") and biology ("complex biological systems like genes and cells need to reduce to the biochemistry of the interacting systems of molecules that make them up").

Reductionism has a bad flavor within much of philosophy, but it is worth dwelling on the concept a bit more fully.

Why would the strategy of reduction be appealing within a scientific research tradition? Here is one reason: there is evident explanatory gain that results from showing how the complex properties and functionings of a higher-level entity are the result of the properties and interactions of its lower level constituents. This kind of demonstration serves to explain the upper level system's properties in terms of the entities that make it up. This is the rationale for Peter Hedstrom's metaphor of "dissecting the social" (Dissecting the Social: On the Principles of Analytical Sociology); in his words,

To dissect, as the term is used here, is to decompose a complex totality into its constituent entities and activities and then to bring into focus what is believed to be its most essential elements. (kl 76)

Aggregate or macro-level patterns usually say surprisingly little about why we observe particular aggregate patterns, and our explanations must therefore focus on the micro-level processes that brought them about. (kl 141)

The explanatory strategy illustrated by Thomas Schelling in Micromotives and Macrobehavior proceeds in a similar fashion. Schelling wants to show how a complex social phenomenon (say, residential segregation) can be the result of a set of preferences and beliefs of the independent individuals who make up the relevant population. And this is also the approach that is taken by researchers who develop agent-based models (link).
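The logic of a Schelling-style model is easy to convey in a few dozen lines of code. The sketch below is a bare-bones variant (random grid, mild tolerance threshold, unhappy agents relocating to random vacancies); the grid size, tolerance, and number of sweeps are arbitrary illustrative choices, not Schelling's own parameters.

```python
# A bare-bones Schelling-style segregation model: agents with only a mild
# preference for similar neighbors generate strongly clustered patterns.
# Grid size, tolerance, and number of sweeps are illustrative choices.
import random

SIZE, TOLERANCE, SWEEPS = 20, 0.3, 60   # grid side; required share of similar neighbors; rounds

def make_grid():
    cells = ["A"] * 160 + ["B"] * 160 + [None] * 80   # two groups plus vacancies (20x20 = 400)
    random.shuffle(cells)
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def similar_share(grid, r, c):
    """Fraction of occupied neighboring cells (wrapping at edges) that match cell (r, c)."""
    me = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    return sum(n == me for n in occupied) / len(occupied) if occupied else 1.0

grid = make_grid()
for _ in range(SWEEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and similar_share(grid, r, c) < TOLERANCE]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:                       # each unhappy agent moves to a random vacancy
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))

occupied = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is not None]
avg = sum(similar_share(grid, r, c) for r, c in occupied) / len(occupied)
print("average share of similar neighbors after sorting:", round(avg, 2))
```

Even with agents who are content when only 30% of their neighbors resemble them, the average similar-neighbor share typically climbs well above that threshold -- a micro-to-macro result of exactly the kind Schelling and the agent-based modeling tradition emphasize.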

Why is the appeal to reduction sometimes frustrating to other scientists and philosophers? Because it often seems to be a way of changing the subject away from our original scientific interest. We started out, let's say, with an interest in motion perception, looking at the perceiver as an information-processing system, and the reductionist keeps insisting that we turn our attention to the organization of a set of nerve cells. But we weren't interested in nerve cells; we were interested in the computational systems associated with motion perception.

Another reason to be frustrated with "methodological reductionism" is the conviction that mid-level entities have stable properties of their own. So it isn't necessary to reduce those properties to their underlying constituents; rather, we can investigate those properties in their own terms, and then make use of this knowledge to explain other things at that level.

Finally, it is often the case that it is simply impossible to reconstruct with any useful precision the micro-level processes that give rise to a given higher-level structure. The mathematical properties of complex systems come in here: even relatively simple physical systems, governed by deterministic mechanical laws, exhibit behavior that cannot be calculated on the basis of information about the starting conditions of the system. A solar system with a massive star at the center and a handful of relatively low-mass planets produces a regular set of elliptical orbits. But a three-body gravitational system creates computational challenges that make it impossible to predict the future state of the system; even small errors of measurement or intruding forces can significantly shift the evolution of the system. (Here is an interesting animation of a three-body gravitational system; the image at the top is a screenshot.)
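The point about sensitivity can be illustrated with a crude numerical experiment: integrate a planar three-body system twice, with starting conditions differing by one part in a million, and compare the endpoints. The masses, initial positions, step size, and softening term in the sketch below are arbitrary illustrative choices, not a published configuration.

```python
# A crude numerical experiment in sensitivity to initial conditions for a planar
# three-body system (equal masses, G = 1, a small softening term to keep the
# integration well behaved). All parameters are illustrative assumptions.
import math

def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + 0.01) ** 1.5     # softened inverse-square attraction
            acc[i][0] += dx / r3
            acc[i][1] += dy / r3
    return acc

def simulate(perturbation, steps=20000, dt=0.001):
    pos = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5 + perturbation]]
    vel = [[0.0, -0.5], [0.0, 0.5], [0.5, 0.0]]
    acc = accelerations(pos)
    for _ in range(steps):                              # velocity-Verlet integration
        for i in range(3):
            pos[i][0] += vel[i][0] * dt + 0.5 * acc[i][0] * dt * dt
            pos[i][1] += vel[i][1] * dt + 0.5 * acc[i][1] * dt * dt
        new_acc = accelerations(pos)
        for i in range(3):
            vel[i][0] += 0.5 * (acc[i][0] + new_acc[i][0]) * dt
            vel[i][1] += 0.5 * (acc[i][1] + new_acc[i][1]) * dt
        acc = new_acc
    return pos

baseline, nudged = simulate(0.0), simulate(1e-6)        # identical except for a tiny nudge
print("final separation of body 3 across the two runs:",
      math.dist(baseline[2], nudged[2]))
```

Run long enough, the two trajectories drift apart by macroscopic amounts, which is the practical sense in which the future state of such a system cannot be calculated from approximately known starting conditions.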

We might capture part of this set of ideas by noting that we can distinguish broadly between vertical and lateral explanatory strategies. Reduction is a vertical strategy. The discovery of the causal powers of a mid-level entity and use of those properties to explain the behavior of other mid-level entities and processes is a lateral or horizontal strategy. It remains within a given level of structure rather than moving up and down over two or more levels.

William Wimsatt is a philosopher of biology whose writings about reduction have illuminated the topic significantly. His article "Reductionism and its heuristics: Making methodological reductionism honest" is particularly useful (link). Wimsatt distinguishes among three varieties of reductionism in the philosophy of science: inter-level reductive explanations, same-level reductive theory succession, and eliminative reduction (448). He finds that eliminative reduction is a non-starter; virtually no scientists see value in attempting to eliminate references to the higher-level domain in favor of a lower-level domain. Inter-level reduction is essentially what was described above. And theory-succession reduction is a mapping from one theory to the next of the ontologies that they depend upon. Here is his description of "successional reduction":

Successional reductions commonly relate theories or models of entities which are either at the same compositional level or they relate theories that aren't level-specific.... They are relationships between theoretical structures where one theory or model is transformed into another ... to localize similarities and differences between them. (449)

I suppose an example of this kind of reduction is the mapping of the quantum theory of the atom onto the classical theory of the atom.

Here is Wimsatt's description of inter-level reductive explanation:

Inter-level reductions explain phenomena (entities, relations, causal regularities) at one level via operations of often qualitatively different mechanisms at lower levels. (450)

Here is an example he offers of the "reduction" of Mendel's factors in biology:

Mendel's factors are successively localized through mechanistic accounts (1) on chromosomes by the Boveri–Sutton hypothesis (Darden, 1991), (2) relative to other genes in the chromosomes by linkage mapping (Wimsatt, 1992), (3) to bands in the physical chromosomes by deletion mapping (Carlson, 1967), and finally (4) to specific sites in chromosomal DNA thru various methods using PCR (polymerase chain reaction) to amplify the number of copies of targeted segments of DNA to identify and localize them (Waters, 1994).

What I find useful about Wimsatt's approach is the fact that he succeeds in de-dramatizing this issue. He puts aside the comprehensive and general claims that have sometimes been made on behalf of "methodological reductionism" in the past, and considers specific instances in biology where scientists have found it very useful to investigate the vertical relations that exist between higher-level and lower-level structures. This takes reductionism out of the domain of a general philosophical principle and into that of a particular research heuristic.

Friday, September 20, 2013

Large predictions in history


To what extent is it possible to predict the course of large-scale history -- the rise and fall of empires, the occurrence of revolution, the crises of capitalism, or the ultimate failure of twentieth-century Communism? One possible basis for predictions is the availability of theories of underlying processes. To arrive at a supportable prediction about a state of affairs, we might possess a theory of the dynamics of the situation, the mechanisms and processes that interact to bring about subsequent states, and we might be able to model the future effects of those mechanisms and processes. A biologist's projection of the spread of a disease through an isolated population of birds is an example. Or, second, predictions might derive from the discovery of robust trends of change in a given system, along with an argument about how these trends will aggregate in the future. For example, we might observe that the population density is rising in water-poor southern Utah, and we might predict that there will be severe water shortages in the region in a few decades. However, neither approach is promising when it comes to large historical change.
One issue needs to be addressed early on: the issue of determinate versus probabilistic predictions. A determinate prediction is one for which we have some basis for thinking that the outcome is necessary or inevitable: if you put the Volvo in the five-million-pound laboratory press, it will be crushed. This isn't a philosophically demanding concept of inevitability; it is simply a reflection of the fact that the Volvo has a known physical structure; it has an approximately known crushing value; and this value is orders of magnitude lower than five million pounds. So it is a practical impossibility that the Volvo will survive uncrushed. A probabilistic prediction, on the other hand, identifies a range of possible outcomes and assigns approximate probabilities to each. Sticking with the test press example -- we might subject a steel bridge cable rated at 90,000 pounds to a load of 120,000 pounds. We might predict a 40% probability that the cable fails and a 60% probability that it holds, with the probability of failure rising as the load increases. There is a range of values where the probabilities of both outcomes are meaningfully high, and extreme values where one outcome or the other is practically impossible.
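A toy model makes the distinction concrete. The logistic form and its parameters below are assumptions chosen purely for illustration (tuned so that a 120,000-pound load fails roughly 40% of the time, echoing the example above); they are not engineering data about any real cable.

```python
# A toy probabilistic prediction: failure probability of a cable rated at
# 90,000 lbs as a function of applied load. The logistic curve and its
# parameters are illustrative assumptions, not engineering data.
import math

CENTER, SPREAD = 130_000, 25_000   # assumed parameters; ~40% failure at 120,000 lbs

def failure_probability(load_lbs):
    """Toy logistic model of the probability that the cable fails under a given load."""
    return 1.0 / (1.0 + math.exp(-(load_lbs - CENTER) / SPREAD))

for load in (60_000, 90_000, 120_000, 200_000):
    print(f"{load:>7,} lbs -> P(failure) ~ {failure_probability(load):.2f}")
```

The output shows what the paragraph describes: near-certain outcomes at the extremes, and genuinely probabilistic outcomes in the middle range of loads.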
In general, I believe that large-scale predictions about the course of history are highly questionable. There are several important reasons for this.
One reason for the failure of large-scale predictions about social systems is the complexity of causal influences and interactions within the domain of social causation. We may be confident that X causes Z when it occurs in isolated circumstances. But it may be that when U, V, and W are present, the effect of X is unpredictable, because of the complex interactions and causal dynamics of these other influences. This is one of the central findings of complexity studies -- the unpredictability of the interactions of multiple causal powers whose effects are non-linear.
Another difficulty -- or perhaps a different aspect of the same difficulty -- is the typical fact of path dependency of social processes. Outcomes are importantly influenced by the particulars of the initial conditions, so simply having a good idea of the forces and influences the system will experience over time does not tell us where it will wind up.
Third, social processes are sensitive to occurrences that are singular and idiosyncratic and not themselves governed by systemic properties. If the winter of 1812 had not been exceptionally cold, Napoleon's march on Moscow might have succeeded, and the future political course of Europe might have been substantially different. But variations in the weather are not themselves systemically explicable -- or at least not within the parameters of the social sciences.
Fourth, social events and outcomes are influenced by the actions of purposive actors. So it is possible for a social group to undertake actions that avert the outcomes that are otherwise predicted. Take climate change and rising ocean levels as an example. We may be able to predict a substantial rise in ocean levels in the next fifty years, rendering existing coastal cities largely uninhabitable. But what should we predict as a consequence of this fact? Societies may pursue different strategies for evading the bad consequences of these climate changes -- retreat, massive water control projects, efforts at atmospheric engineering to reverse warming. And the social consequences of each of these strategies are widely different. So the acknowledged fact of global warming and rising ocean levels does not allow clear predictions about social development.
For these and other reasons, it is difficult to have any substantial confidence in predictions of the large course of change that a society, cluster of institutions, or population will experience. And this is a reason in turn to be skeptical about the spate of recent books about the planet's future. One such example is Martin Jacques' provocative book about China's future dominance of the globe, When China Rules the World: The End of the Western World and the Birth of a New Global Order: Second Edition. The Economist paraphrases his central claims this way (link):
He begins by citing the latest study by Goldman Sachs, which projects that China's economy will be bigger than America's by 2027, and nearly twice as large by 2050 (though individual Chinese will still be poorer than Americans). Economic power being the foundation of the political, military and cultural kind, Mr Jacques describes a world under a Pax Sinica. The renminbi will displace the dollar as the world's reserve currency; Shanghai will overshadow New York and London as the centre of finance; European countries will become quaint relics of a glorious past, rather like Athens and Rome today; global citizens will use Mandarin as much as, if not more than, English; the thoughts of Confucius will become as familiar as those of Plato; and so on.
This is certainly one possible future. But it is only one of many scenarios through which China's future may evolve, and it overlooks the many contingencies and strategies that may lead to very different outcomes.
(I go into more detail on this question in "Explaining Large-Scale Historical Change"; link.)

Thursday, September 19, 2013

Response to Little by Tuukka Kaidesoja

[Tuukka Kaidesoja accepted my invitation to write a response to my discussion (link) of his recent article in Philosophy of the Social Sciences, "Overcoming the Biases of Microfoundations: Social Mechanisms and Collective Agents". Currently Kaidesoja works as a post-doctoral researcher at the Finnish Academy Centre of Excellence in the Philosophy of the Social Sciences, Department of Political and Economic Studies, University of Helsinki, Finland. He is the author of Naturalizing Critical Realist Social Ontology. Thanks, Tuukka!]

Daniel Little defends “the theoretical possibility of attributing causal powers to meso-level social entities and structures.” I agree that meso-level social entities like groups and organizations have causal powers that are not ontologically reducible to the causal powers of their components or the aggregates of the latter. In addition, Little argues for “the idea of an actor-centered sociology, according to which the substance of social phenomena is entirely made up of the actions, interactions, and states of mind of socially constituted individual actors.” Though I like the idea of actor-centered sociology, I have problems with the view that “the substance of social phenomena is entirely made up of the actions, interactions, and states of mind of socially constituted individual actors.”

The latter view can also be stated in terms of the ontological microfoundations of social facts insofar as these microfoundations are thought to consist of socially constituted individuals, their actions and interactions. Thus, in Little’s view, the ontologically microfoundational level in social research is always the individual-level even though it is not required that “our explanations proceed through the microfoundational level.” This is because there are good sociological explanations that refer to causal relations at the meso-level and do not specify the microfoundations of these relations. In addition, Little argues that sociological theories cannot be reduced to theories about individuals. This view presupposes a concept of theory-reduction that is used in philosophy of mind by Jerry Fodor and others. I will come back to this later.

Now, I believe that the ontologically microfoundational role of the individual-level can be questioned from two directions. Firstly, it can be argued that, in addition to human individuals, artifacts and technologies built and used by people belong to the microfoundations of the causal powers of many social entities. On this view, then, organizations are not just "structured groups of individuals" but structured groups of human individuals and the artifacts (e.g. strategy papers, organizational charts, written codes of conduct, archives, computers, software programs, databases, mobile phones and so on) that are used by individuals in their social interactions. One reason for including artifacts (with causal powers and affordances of their own) as proper parts of some social entities (e.g. organizations) is that human members of these entities need them in order to coordinate their interactions as well as to make collective decisions and, perhaps more controversially, to create and maintain collective (or transactive) memories.

Secondly, I believe that there are interesting sub-individual cognitive capacities and processes that are potentially important in understanding some social phenomena. For example, the phenomenon of contextual priming in social cognition (i.e. a cognitive process in which the presence of certain events and people automatically activates our internal knowledge of and affects towards them that are relevant in responding to the situation) as well as unconscious imitation of the behavior of strangers may well be important factors in explaining some social phenomena. I think that it would be misleading to say that cognitive processes of this kind belong to the individual-level, given that they take place at the subconscious level of cognitive processing.

I think that both of these points call into question the view that the individual-level should be considered the ontologically microfoundational level in the context of social research. My intention is not, however, to deny the importance of human individuals and their actions and interactions to any plausible social ontology.

Finally, I want to indicate that, in addition to the concept of theory-reduction used in the context of philosophy of mind, there is a different concept of "mechanistic reductive explanation" developed by Mario Bunge, William Wimsatt and others. When combined with causal powers theory, this concept is interesting since it enables one to argue not only that social entities have (weakly) emergent causal powers that are ontologically irreducible to the causal powers of their parts (and their aggregates), but also that these causal powers, their emergence and endurance, may well be mechanistically explainable in terms of the causal powers, relations and interactions of the components of social entities (e.g. human individuals and their artifacts). It should be emphasized that this view does not entail that social scientific theories that refer to social entities with emergent causal powers should be conceptually reducible to (or deductively derivable from) the theories that refer to the components of these entities. Rather, it is compatible with the view that theories about human beings, artifacts and social entities are continually developed at different levels of organization; conceptually adjusted to each other; and sometimes connected via mechanistic reductive explanations. This kind of perspective on ontological emergence and mechanistic reductive explanations allows, too, that the outcomes of macro-level social events and processes can be legitimately explained by referring to the interactions of the meso-level social entities (with emergent causal powers).

Thanks for the great blog!

Sunday, September 15, 2013

The global city -- Saskia Sassen

London financial district

Saskia Sassen is the leading urban theorist of the global world. (Here are several prior posts that intersect with her work.) Her The Global City: New York, London, Tokyo (1991) has shaped the concepts and methods that other theorists have used to analyze the role of cities and their networks in the contemporary world. The core ideas in her theory of the global city are presented in a 2005 article, "The Global City: Introducing a Concept" (link). This article is a convenient place to gain an understanding of her basic approach to the subject.

Key to Sassen's concept of the global city is an emphasis on the flow of information and capital. Cities are major nodes in the interconnected systems of information and money, and the wealth that they capture is intimately related to the specialized businesses that facilitate those flows -- financial institutions, consulting firms, accounting firms, law firms, and media organizations. Sassen points out that these flows are no longer tightly bound to national boundaries and systems of regulation; so the dynamics of the global city are dramatically different than those of the great cities of the nineteenth century.

Sassen emphasizes the importance of creating new conceptual resources for making sense of urban systems and their global networks -- a new conceptual architecture, as she calls it (28). She argues for seven fundamental hypotheses about the modern global city:

  1. The geographic dispersal of economic activities that marks globalization, along with the simultaneous integration of such geographically dispersed activities, is a key factor feeding the growth and importance of central corporate functions.
  2. These central functions become so complex that increasingly the headquarters of large global firms outsource them: they buy a share of their central functions from highly specialized service firms.
  3. Those specialized service firms engaged in the most complex and globalized markets are subject to agglomeration economies.
  4. The more headquarters outsource their most complex, unstandardized functions, particularly those subject to uncertain and changing markets, the freer they are to opt for any location.
  5. These specialized service firms need to provide a global service which has meant a global network of affiliates ... and a strengthening of cross border city-to-city transactions and networks.
  6. The economic fortunes of these cities become increasingly disconnected from their broader hinterlands or even their national economies.
  7. One result of the dynamics described in hypothesis six, is the growing informalization of a range of economic activities which find their effective demand in these cities, yet have profit rates that do not allow them to compete for various resources with the high-profit making firms at the top of the system. (28-30)

Three key tendencies seem to follow from these structural facts about global cities.  One is a concentration of wealth in the hands of owners, partners, and professionals associated with the high-end firms in this system. Second is a growing disconnection between the city and its region. And third is the growth of a large marginalized population that has a very hard time earning a living in the marketplace defined by these high-end activities. Rather than constituting an economic engine that gradually elevates the income and welfare of the whole population, the modern global city funnels global surpluses into the hands of a global elite dispersed over a few dozen global cities.

These tendencies seem to line up well with several observable features of modern urban life throughout much of the world: a widening separation in quality of life between a relatively small elite and a much larger marginalized population; a growth of high-security gated communities and shopping areas; and dramatically different graphs of median income for different socioeconomic groups. New York, London, and Hong Kong/Shanghai represent a huge concentration of financial and business networks, and the concentration of wealth that these produce is manifest:

Inside countries, the leading financial centers today concentrate a greater share of national financial activity than even ten years ago, and internationally, cities in the global North concentrate well over half of the global capital market. (33)

This mode of global business creates a tight network of supporting specialist firms that are likewise positioned to capture a significant level of wealth and income:

By central functions I do not only mean top level headquarters; I am referring to all the top level financial, legal, accounting, managerial, executive, planning functions necessary to run a corporate organization operating in more than one country. (34)

These features of the global city economic system imply a widening set of inequalities between elite professionals and specialists and the larger urban population of service and industrial workers. They also imply a widening set of inequalities between North and South. Sassen believes that communications and Internet technologies have the effect of accelerating these widening inequalities:

Besides their impact on the spatial correlates of centrality, the new communication technologies can also be expected to have an impact on inequality between cities and inside cities. (37)

Sassen's conceptual architecture maintains a place for location and space: global cities are not disembodied, and the functioning of their global firms depends on a network of activities and lesser firms within the spatial scope of the city and its environs. So Sassen believes there is space for political contest between parties over the division of the global surplus.

If we consider that global cities concentrate both the leading sectors of global capital and a growing share of disadvantaged populations (immigrants, many of the disadvantaged women, people of color generally, and, in the megacities of developing countries, masses of shanty dwellers) then we can see that cities have become a strategic terrain for a whole series of conflicts and contradictions. (39)

But this strategic contest seems badly tilted against the disadvantaged populations she mentions. So the outcomes of these contests over power and wealth are likely to lead, it would seem, to even deeper marginalization, along the lines of what Loic Wacquant describes in Urban Outcasts: A Comparative Sociology of Advanced Marginality (link).

This is a hugely important subject for everyone who wants to understand the dynamics and future directions of the globe's mega-cities and their interconnections. What seems pressingly important for urbanists and economists alike is to envision economic mechanisms that would do a better job of sharing the fruits of economic progress with the whole of society, not just the elite and professional end of the socioeconomic spectrum.

 

Thursday, September 12, 2013

Culture change within an organization


It is often said that culture change within an organization or workplace is difficult -- perhaps the most difficult part of trying to reform an organization. What do we mean by this? And why is this so difficult?

The daily workings of an organization depend on the activities and behavior of the people who make it up (and those with whom it interacts). People have habits, expectations, ways of perceiving social situations, and behavioral dispositions in a range of stylized circumstances. Their habitual modes of behavior may conform better or worse to the official rules and expectations of behavior in the performance of their roles. If there is a practice of stretching the lunch hour, for example, absenteeism at 2:00 may be a significant drag on efficient work processes. As Crozier and Friedberg note in Actors and Systems: The Politics of Collective Action, individuals within organizations are not robots programmed by the rules of the organization; they are willful and strategic actors who interact with each other and with the rules of the organization in complex and sometimes non-conformist ways (link). Or in other words, the culture of the workforce -- the habits, practices, and ways of thinking of the participants -- can significantly interfere with the intended workings of the organization.

Moreover, these habits and expectations are often mutually reinforcing. The fact that A, B, and C have certain ways of conducting their work often reinforces the similar behaviors of D. For example, the Baltimore police detectives in The Wire have fairly specific habits and expectations when it comes to encounters with the corner boys in the drug trade -- a lot of use of force, a harsh tone of voice, a ready display of disrespect. These habits of behavior are infectious; new recruits model the behavior of their elders, and soon they are just as violent and disrespectful as the previous generation. So a police commander who wants to reform the style of policing in his district is faced with a difficult problem: changing policing means changing behavior of individual police on the street, but the tools available to the commander to bring about these changes are very limited. So the habits persist in spite of orders, regulations, briefings, and seminars.

Within a Fordist understanding of organizations, these conflicts between habits of behavior and the official expectations of the organization can be resolved through supervision: non-conformist behavior can be identified and penalized. Violent detectives can be punished or dismissed; line workers who break the rules can be fined; call center workers can be disciplined when they deviate from their scripts. But there are at least two problems with this approach. One is the cost of close supervision. It isn't realistic to imagine having enough supervisors to detect a high percentage of bad behavior by workers. And the second is the nature of much of the work within modern organizations, which depends on creating a space of autonomy and independence for the worker. An architect, surgeon, or professor doesn't do his or her best work within a regime of time clocks, keystroke loggers, and penalties.

So the problem of culture change within a modern organization comes down to something like this. Organizations involving the productive activities of well-educated specialists need to rely on a high level of self-motivation and self-direction on the part of their workers. Therefore modern organizations need to encourage high-level contributions to the organization's goals through means other than close supervision and a code of penalties and rewards. This means finding ways of aligning the personal values of the worker with the goals and processes of the organization. The organization needs to create an environment of development and work in which the individual worker wants to achieve the key goals of the organization -- rather than disregarding those goals to pursue his/her own agenda in the workplace. In turn this means persuading the worker of some basic realities about the organization: that it is fair towards all workers, that the goals of the organization are worth achieving, and that the managers of the organization are talented and capable.

All is well if these assumptions about the organization are widely shared by workers and managers. If, on the other hand, there is a high level of cynicism and disaffection among workers or a high level of self-serving behavior among managers, it is likely that the performance of both workers and managers will deviate from the organization's expectations of how they will behave. A culture of shirking, self-serving, and "easy riding" will undermine the effectiveness of the organization. The problem of culture change is the problem of changing those assumptions and habits on the part of workers and managers.

There seem to be a few fairly obvious ways of trying to improve the culture of a given organization. One is to insist on a high degree of transparency in the organization so that workers and managers can come to have confidence in the basic fairness of the workplace. A second is to find ways of communicating the value of the work being done by the organization in a way that is clear and motivating for workers and managers. A third is to be effective at expressing the respect and appreciation that the organization has for the workers. And a final means is to recruit well when filling open positions, mindful of the intangible characteristics of behavior that the organization needs to proliferate. Will this candidate adapt willingly to expectations about cooperation and respect within the workplace? Will that candidate be able to embrace the values and purposes of the organization? Does that candidate have prior experience that permits us to judge that he/she will contribute strongly and willingly to the tasks of the organization?

Making PCR: A Story of Biotechnology is Paul Rabinow's ethnography of Cetus, the biotechnology company where the genetic research tool PCR was invented. It provides a good example of how close study of professional workplaces can shed light on the outcomes of innovation and effectiveness that we want to achieve.

Tuesday, September 10, 2013

Meso causes and microfoundations

In earlier posts I've paid attention to the need for microfoundations and the legitimacy of meso-level causation. And I noted that there seems to be a prima facie tension between the two views in the philosophy of social science. I believe the two are compatible if we understand the microfoundations thesis as a claim about social ontology and not about explanation, and if we interpret it in a weak rather than a strong way. Others have also found this tension to be of interest. The September issue of Philosophy of the Social Sciences provides a very interesting set of articles on these issues.

Particularly interesting is a contribution by Tuukka Kaidesoja, "Overcoming the Biases of Microfoundations: Social Mechanisms and Collective Agents" (link). Here are the four claims advanced in the article:

  1. The mechanism approach to social explanation does not presuppose a commitment to the individual-level microfoundationalism.
  2. The microfoundationalist requirement that explanatory social mechanisms should always consist of interacting individuals has given rise to problematic methodological biases in social research.
  3. It is possible to specify a number of plausible candidates for social macro-mechanisms where interacting collective agents (e.g. formal organizations) form the core actors.
  4. The distributed cognition perspective combined with organization studies could provide us with explanatory understanding of the emergent cognitive capacities of collective agents. (abstract)

I agree with many of Kaidesoja's criticisms of what he calls individual-level microfoundationalism (IMF). I also agree with his preference for the weak "rationalist" conception of emergence (along the lines of Mario Bunge) rather than the strong conception associated with Niklas Luhmann (link). However, I want to continue to maintain that there is a different version of microfoundationalism that is not vulnerable to the criticisms he offers -- what I call the "weak" version of microfoundations. (This is explicated in several earlier posts; link.) On this approach, claims about higher-level entities need to be plausibly compatible with there being microfoundations at the individual level (an ontological principle), but I deny that we always need to provide those microfoundations when offering a social explanation (an explanatory principle). And in fact, Kaidesoja seems to adopt a very similar position:

By contrast, in many explanatory studies on large-scale macro-phenomena, it is sufficient that we have a general understanding how the collective agents of this kind function (e.g., how collective-decisions are typically made in the organizations that are the components of the relevant macro-mechanism) and empirically grounded reasons to believe that the macro-phenomenon of interest was causally generated by the interactions of this kind of collective agents with emergent powers.... Of course, it is always possible to zoom in to a particular collective agent and study the underlying mechanisms of its emergent causal powers, but this type of research requires the uses of different methods and data from the explanatory studies on large-scale macro- phenomena. (316)

So it is the in-principle availability of lower-level analyses that is important, not the actual provision of those accounts. Or in other words, K is offering a set of arguments designed to establish the explanatory sufficiency of at least some meso- and macro-level causal accounts (horizontal) rather than requiring that explanations should be vertical (rising from lower levels to higher levels). This is what I want to refer to as "relative explanatory autonomy of the meso-level."

Kaidesoja's position is a realist one; he couches his analysis of causation in terms of the idea of causal powers. Here is Kaidesoja's description of the idea of causal powers:

In general terms, causal powers of complex entities include their dispositions, abilities, tendencies, liabilities, capacities, and capabilities to generate specific type of effects in suitable conditions. Each particular entity (or powerful particular) possesses its powers by virtue of its nature, which in turn can typically be explicated in terms of the intrinsic relational structure of the entity. (302)

This position provides an answer to one of the questions recently posed here: are causal powers and causal mechanisms compatible? I think they are, and Kaidesoja appears to as well.

One important nuance concerns the kinds of higher-level social structures that Kaidesoja offers as examples. They all involve collective actors, thus assimilating social causal power to intentional action. But the category of macro social factors that possess causal powers is broader than this. There are credible examples of social powers that do not depend on any kind of intentionality. Most of the examples Charles Perrow offers of organizations with causal powers, for instance, depend on features of the organization's operation, not its functioning as a quasi-intentional agent.

Also interesting in the article is Kaidesoja's gloss on the idea of distributed cognition. I'm not receptive to the idea of collective social actors in a strongly intentionalist sense (link), but K makes use of the idea of distributed cognition in a sense that seems unobjectionable to people who think that social entities ultimately depend on individual actors. K's interpretation doesn't imply commitment to collective thoughts or intentions. Here is a clear statement of the idea:

An important implication of the above perspectives is that they enable one to ascribe emergent cognitive capacities to social groups and to study the underlying mechanisms of these capacities empirically (e.g., Hutchins 1995; Theiner and O’Connor 2010). This nevertheless requires that we reconsider our received concept of cognition that ties all cognitive capacities to individual organisms (e.g., human beings), since groups obviously lack system-level consciousness or brains as distinct from those of their individual members.  (317)
Now, drawing on organization studies (e.g., Scott and Davis 2003), I suggest that formal organizations (in short, organizations) can be understood as social groups that are designed to accomplish some (more or less clearly specified) goal or goals, and whose activities are planned, administrated, and managed by their members (or some subgroup of their members such as managers). Examples of organizations include schools, business firms, universities, hospitals, political parties, and governments. (318)
This is a conception of "cognition" that doesn't imply anything like "collective minds" or group intentions, and seems unobjectionable from an ontological point of view.

This is a very nice piece of work in the philosophy of social science, and it suggests that it will be worthwhile to spend time reading Kaidesoja's recent book, Naturalizing Critical Realist Social Ontology (Ontological Explorations), as well.

 

Thursday, September 5, 2013

Social mechanisms and meso-level causes

(This post summarizes a paper I presented at the British Society for the Philosophy of Science Annual Meeting in 2012.)

Here and elsewhere I want to defend the theoretical possibility of attributing causal powers to meso-level social entities and structures. In this I follow a number of philosophers and sociologists, including many critical realists (e.g. Roy Bhaskar, A Realist Theory of Science, and Margaret Archer, Realist Social Theory: The Morphogenetic Approach) and also the recent thinking of Dave Elder-Vass (The Causal Power of Social Structures). But I also defend the idea of an actor-centered sociology, according to which the substance of social phenomena is entirely made up of the actions, interactions, and states of mind of socially constituted individual actors. Making out both positions, and demonstrating their consistency, is the work of this paper. I refer to this position as the "relative explanatory autonomy" of the meso-level. The topic is of renewed interest because of the current influence and progress of analytical sociology (Peter Hedström, Dissecting the Social: On the Principles of Analytical Sociology; Hedström and Bearman, The Oxford Handbook of Analytical Sociology; Pierre Demeulenaere, Analytical Sociology and Social Mechanisms), which offers an emphatic "no" to the question of meso-level causal powers, whereas critical realists are equally firm in defending an affirmative answer.

My defense of meso-level causation is based on four ideas. First, the practice of sociologists justifies this claim, since sociologists do in fact make use of meso-meso claims.  They often do not attempt to provide vertical explanations from circumstances of the actor to meso- and macro-level outcomes; instead, they often provide horizontal explanations that explain one set of meso and macro outcomes on the basis of the causal powers of another set of meso and macro conditions or structures.  Second, sociology is a “special science” analogous to cognitive science, dependent on a set of causally linked entities at a lower level. Arguments offered for the relative explanatory autonomy of the higher-level theories are applicable to sociology as well. The basis for rejecting reductionism is well established here. Third, meso entities (organizations, institutions, normative systems) often have stable characteristics with regular behavioral consequences. This is illustrated with the example of organizations. Fourth, those entities must have microfoundations; we must be confident that there are individual behaviors at lower levels that support these macro characteristics.  But it is legitimate to draw out the macro-level effects of the macro-circumstance under investigation, without tracing out the way that effect works in detail on the swarms of actors encompassed by the case.   The requirement of microfoundations is not a requirement on explanation; it does not require that our explanations proceed through the microfoundational level. It is an ontological principle but not a methodological principle.  Rather, it is a condition that must be satisfied on prima facie grounds, prior to offering the explanation. (I refer to this as the "weak" requirement of microfoundations; link.)  In short, we are not obliged to trace out the struts of Coleman’s boat in order to provide a satisfactory macro- or meso-level explanation or mechanism.

Does the extended social world have causal powers?  Here are some reasons for thinking that it does.

First, working sociologists offer explanations that postulate meso-meso causal connections on a regular basis. They identify what they take to be causal properties of social structures and institutions, and then draw out causal chains involving those causal properties. And often they are able to answer the follow-on question: how does that causal power work, in approximate terms, at the micro level? But answering that question is not an essential part of their argument. They do not in fact attempt to work through the agent-based simulation that would validate their general view about how the processes work at the lower level.

This explanatory framework seems entirely reasonable in the social sciences.  It does not seem necessary to disaggregate every claim like “organizational deficiencies at the Bhopal chemical plant caused the devastating chemical spill” onto specific individual-level activities. We understand pretty well, in a generic way, what the microfoundations of organizations are, and it isn’t necessary to provide a detailed account in order to have a satisfactory explanation.  In other words, we can make careful statements about macro-macro and macro-meso causal relations without proceeding according to the logic of Coleman’s boat—up and down the struts. So one argument for the relative autonomy of meso-level causal claims is precisely the fact that good sociologists do in fact make credible use of such claims.

Second, there is a more general reason within the philosophy of the social sciences for being receptive to the idea of meso-meso social causes. This derives from the arguments against reductionism in a range of the special sciences. The idea of relative explanatory autonomy has been invoked by cognitive scientists against the reductionist claims of neuroscientists. Of course cognitive mechanisms must be grounded in neurophysiological processes. But this doesn't entail that cognitive theories need to be reduced to neurophysiological statements. Jerry Fodor introduced highly influential arguments against reductionism in "Special Sciences (or: The Disunity of Science as a Working Hypothesis)" (link).
Once we have reason to accept something like the idea of relative explanatory autonomy in the social sciences, we also have a strong basis for rejecting the exclusive validity of one particular approach to social explanation, the reductionist approach associated with methodological individualism, analytical sociology, and Coleman’s boat. Rather, social scientists can legitimately use explanations that call upon meso-level causal linkages without needing to reduce these to derivations from facts about individuals. And this implies the legitimacy of a fairly broad conception of methodological pluralism in the social sciences, constrained always by the requirement of weak microfoundations.

Third, we have good research-based reasons to maintain that meso-level social structures have causal powers. Consider organizations as paradigm examples of meso-level social structures. An organization is a social entity which possesses a degree of stability in functioning that can be studied empirically and theoretically. An organization consists of a structured group of individuals, often hierarchically organized, pursuing a relatively clearly defined set of tasks. In the abstract, it is a set of rules and procedures that regulate and motivate the behavior of the individuals who function within the organization. There are also informal practices within an organization, not codified, that have significant effects on the functioning of the organization (for example, the coffee room as a medium of informal communication, or the norm of covering for a co-worker's absence). Some of those individuals have responsibilities of oversight, which is a primary way in which the abstract rules of the organization are transformed into concrete patterns of activity by other individuals. Another behavioral characteristic of an organization is the set of incentives and rewards that it creates for participants in the organization. Often the incentives that exist were planned and designed to have specific effects on the behavior of participants; by offering rewards for behaviors X, Y, Z, the organization is expected to produce a lot of X, Y, and Z. Sometimes, though, the incentives are unintended, created perhaps by the intersection of two rules of operation that produces a perverse incentive leading to W.

Now we are in a position to address the central question here: do organizations have causal powers? It seems to me that the answer is yes, in fairly specific ways. The most obvious causal properties of an organization are bound up in the function of the organization. An organization is invented and developed in order to bring about certain social effects: operate and maintain a complex technology, reduce pollution or crime, distribute goods throughout a population, provide services to individuals, seize and hold territory, disseminate information. But the specifics of an organization also give rise to unintended consequences; these too contribute to the causal powers of the organization. All these effects occur as a result of the coordinated activities of people within the organization. When organizations work correctly they bring about one set of effects; when they break down they bring about another set of effects. Here we can think about organizations in analogy with technology components like amplifiers, thermostats, stabilizers, or surge protectors.

Finally, I too believe that there is a burden of proof that must be met in asserting a causal power or disposition for a social entity -- something like “the entity demonstrates an empirical regularity in behaving in such and such a way” or “we have good theoretical reasons for believing that X social arrangements will have Y effects.” And some macro concepts (e.g. State, Islam, market economy) are likely cast at too high a level to admit of such regularities. That is why I favor “meso” social entities as the bearers of social powers. As new institutionalists demonstrate all the time, one property regime elicits very different collective behavior from its highly similar cousin, and this provides the relevant kind of causal stability. Good examples include Robert Ellickson’s new-institutionalist treatment of Shasta County and liability norms and Charles Perrow’s treatment of the operating characteristics of technology organizations. In each case the microfoundations are easy to provide. What is more challenging is to show how these social causal properties interact in particular cases to create the outcomes we want to explain.

So how does the micro-macro link look when we attempt to provide the idea of meso explanations with microfoundations?  The various versions of methodological individualism—microeconomics, analytical sociology, Elster’s theories of explanation, and the model of Coleman’s boat—presume that explanation needs to invoke the story of the micro-level events as part of the explanation. The perspective offered here requires something quite different.  It asks that we be confident that micro-level events exist and work to compose the meso level; but it does not require that the causal argument incorporate a reconstruction of the pathway through the individual level in order to provide a satisfactory explanation.  This account suggests an alternative diagram to Coleman’s boat.

[Figure: the new model -- an alternative diagram to Coleman's boat]
The diagram includes each of the causal linkages represented in Coleman’s boat. But it also calls out the meso-meso causal connection that Coleman prohibits in his analysis. And it replaces the idea that causation proceeds through the individual level with the idea that each meso-level factor has a set of actor-level microfoundations. But this is an ontological fact, not a prescription on explanation.
(Here is the full paper as presented at the British Society for the Philosophy of Science Annual Meeting in July 2012 at the University of Stirling.)

Sunday, September 1, 2013

Ian Hacking on chance as worldview


Ian Hacking was one of the more innovative and adventurous philosophers to take up the philosophy of science as their field of inquiry. The Taming of Chance (1990) is a genuinely fascinating treatment of the emergence of the idea of populations of events rather than discrete individuals. Together with The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference (1975; 2nd ed. 2006), the two books represent a very original contribution to an important aspect of modern ways of thinking: the ways in which the human sciences and the public came to think differently about the nature of social and biological reality.

Hacking's contributions to the history of statistical and probabilistic thinking are particularly valuable for the light they shed on a crucial moment during which a fundamental change in the largest-gauge intellectual framework took place -- the shift away from deterministic causation to the idea that phenomena present themselves with a distribution of characteristics.
Determinism was eroded during the nineteenth century and a space was cleared for autonomous laws of chance. The idea of human nature was displaced by a model of normal people with laws of dispersion. These two transformations were parallel and fed into each other. Chance made the world seem less capricious: it was legitimated because it brought order out of chaos. The greater the level of indeterminism in our conception of the world and of people, the higher the expected level of control. (TC, vii) 
Hacking compares his approach to the emergence of probabilistic thinking to that of Foucault in The Archaeology of Knowledge: as a genealogy of a new intellectual framework. (A mark of Hacking's originality as an analytic philosopher is exactly his readiness to find sources of inspiration in Foucault.) His effort is to capture and document the series of shifts in conceptual system and language through which scientists, philosophers, and ordinary people talked about such things as suicide, criminality, and disease.
This book is a piece of philosophical analysis. Philosophical analysis is the investigation of concepts. Concepts are words in their sites. Their sites are sentences and institutions. I regret that I have said too little about institutions, and too much about sentences and how they are arranged. (7)
But the crucial point that Hacking is making is that the concepts and assertions that he studies are drawn from a scientific culture that was in a period of flux; so this kind of philosophical, conceptual, and linguistic analysis can document the shifting of a framework of ideas and ways of thinking.

One important aspect of Hacking's arguments in both books is the point that the emergence of probability and statistics was not solely a development within a field of mathematics; it was a new intellectual creation that involved parsing the social and natural worlds in ways that were very different from medieval and early modern frameworks. It involved a shift from thinking of events in the world as being causally determined to thinking of them as emerging from a set of probabilistic laws or regularities. It is at bottom a question of metaphysics, not mathematics (4).

Hacking also establishes a point about social constructivism that is fundamental to other parts of his work as well (including The Social Construction of What?): the idea that the act of conceptualizing and measuring is often also the act of constituting a particular slice of human reality. The definitions of mental disorder included in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition (DSM-5) do not merely describe mental realities; they serve to constitute delineations of populations of people with disorders. Prior to these stipulations, we can make the case that the disorder did not exist. (Human beings were troubled in various ways; but they were not "paranoid schizophrenics" until a set of symptoms and behaviors was parsed in this way by the profession of psychiatry.) Hacking's account of the emergence of official statistics in the eighteenth and nineteenth centuries emphasizes that this kind of social construction was equally in play in the decision to classify certain deaths as "suicide".

Hacking refers to the object of his study as the emergence of a new "style of reasoning," explicitly paired with the ideas of paradigm, research programme, and themata (6). Here he tips his hat to A. C. Crombie. What he seems to mean by this is a congeries of assumptions about nature and society, a system of conceptualization, modes of observation and measurement, and rough expectations about outcomes. (These ideas are sketched in the first chapter of The Taming of Chance.)

One part of the emergence of the new way of thinking derived from the consequences of the counting and measurement that came to be a part of government activity by the eighteenth century. Population size, trade data, mortality rates, disease rates ... all of these topics came in for extensive state scrutiny and investigation. And these sources of quantitative data permitted the formulation of new questions: were city dwellers more or less prone to suicide? Did Protestants or Catholics have higher birth rates? Hacking notes that Leibniz played an important role in trying to make sense of this new statistical knowledge about society:
Leibniz had a lively interest in statistical questions of all sorts, and pursued an active correspondence on issues of disease, death and population. (18)
And Leibniz believed that these questions were crucial to the health of the modern state; in fact, he offered a "white paper" to this effect to Prince Frederick of Prussia in 1700.

It was found that at least some statistical information about human processes -- birth, death, disease -- might reveal statistical laws; so the individual events might reflect chance, but they were subsumed under stable and enduring statistical laws like life tables. Hacking refers to a kind of "statistical fatalism" (126), which maintained that the apparent contingency of individual events conformed to an underlying causal necessity in the aggregate.
 By 1830 innumerable regularities about crime and suicide seemed visible to the naked eye. There were 'invariable' laws about their relative frequency by month, by method, by sex, by region, by nation. No one would have imagined such statistical stabilities had it not been for an avalanche of printed and public tables. (73)
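The underlying statistical point is easy to reproduce. Here is a small, purely illustrative simulation, not taken from Hacking; the population size, the event probability, and the function name yearly_rate are invented for the example. It shows how individual outcomes governed entirely by chance can nonetheless yield aggregate rates that look like "invariable laws" when tabulated year after year.

```python
# Illustrative sketch: each individual outcome is a matter of chance, yet the
# aggregate rate across a large population is strikingly stable from year to
# year -- the kind of regularity the printed tables made visible.
import random

def yearly_rate(population=1_000_000, p_event=0.0002, rng=None):
    """Each person independently suffers the event with probability p_event;
    return the aggregate rate per 100,000, like one line of a statistical table."""
    rng = rng or random.Random()
    events = sum(1 for _ in range(population) if rng.random() < p_event)
    return 100_000 * events / population

rng = random.Random(1830)
for year in range(1830, 1835):
    print(year, round(yearly_rate(rng=rng), 1))

# Individual outcomes are unpredictable, but the printed rates cluster tightly
# around 20 per 100,000 year after year -- stability produced by large numbers,
# not by any hidden fate attached to particular individuals.
```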
This is a remarkable work of scholarship -- Hacking has pulled together a very detailed collation of the efforts at statistical measurement of national populations and the scientific reflections that these elicited. It ranges across histories of public data collection, epidemiology, theories of suicide, and mathematical representations of the new concepts of population statistics. The scholarship by itself is daunting. But even more important is Hacking's ability to place these developments into an intellectual and philosophical context. The book is a tour de force.

(Here is an earlier post on Hacking's appreciation of Thomas Kuhn; link.)