Context in Parallax

[Cross-posted to the UWM Digital Cultures Collaboratory.]

[Image 1: A generic corn flakes box.]

This past Wednesday, the Lunch Zone explored Parallax, a stark and yet somehow intimate puzzle game. The resolute black and white imagery, the abstract and icon-marked shapes, and the sense of containment that came from being apparently enclosed in a rational sphere (complete with indexed x-y-z axes) combined to support a real sense of being set apart. I was reminded of the so-called “generic” goods that were sold for a brief time in U.S. grocery stores during my childhood (Image 1).

Anthropologists have probably never encountered an attempt at context-free design that they couldn’t skewer by exposing its referentiality, and so naturally we spent some time doing so. But even though it may be easy, it’s a useful exercise precisely because the digital spaces within which we act can so swiftly come to stand as taken-for-granted, as natural. And this holds true whether that space is saturated with a contrived social context, as in The Witcher, or is seemingly denuded of cultural specificity.

The Witcher and the Watcher

[Cross-posted to the UWM Digital Cultures Collaboratory.]

Continuing our summer of The Witcher III, last Wednesday we kept moving through the game, taking on the “talking pigs” quest. In the course of doing so, we were presented with an array of visuals, some perhaps unwelcome, some overbearing, some bizarre (and some all three). The camera gave us: an angle dominated by a pig’s butt, rendered large on the screen; a lingering zoom on a naked, breasted torso in the bath; and, most strikingly, a “pan shot” of a wizard’s private realm, in which a pair of rabbits “just happened” to be furiously fornicating as the camera scanned down the hillside. Ah, cutscenes.

While The Witcher III, from these examples, seems nothing but consistent with the point of view we noted in our earlier conversations – that is, it seems as intent as ever on proceeding in ways that seek to confirm its audience’s expectations – our conversation turned to cutscenes as an element, and to what we can learn from TWIII’s handling of them as against other uses we’ve encountered.

Down to the Nitty Gritty: Tiresome Dickishness in The Witcher III

[Cross-posted to the UWM Digital Cultures Collaboratory.]

Nathan, Kelly, and I were joined by Kelly’s sister Caitlin for July 3rd’s episode of the Lunchzone, where we continued our playthrough of The Witcher III (find Stuart’s write-up of our June 19th session here). We’ll be sticking with The Witcher through the summer, so pop in to our Twitch channel for the Lunchzone’s next episode on July 17th (12 pm US Central time).

Never having played this series, yet being a dedicated player of the Elder Scrolls series, I immediately began noticing the many similarities to Skyrim – graphically, aesthetically, and mechanically. What followed in our conversation was a set of thoughts that revolved around the appeal to the “gritty” in these games: the seriousness, the gray palette, the gore, and – above all, in Kelly’s memorable phrase – the “tiresome dickishness” exhibited in practically every dialogue with NPCs.

Coding Dispositions at GDC

[Note: This post originally appeared on Terra Nova.]

March 23, 2006

Not too long ago on the TN backchannel, a few of us got to talking about the tendency for game designers and developers to fall into the trap of looking down upon their users, devaluing their knowledge, opinions, and skills. Well, in the wake of my first (and certainly very limited) GDC experience, I’m surprised to find myself revisiting this criticism of game developer myopia. I’m coming to believe that precisely what this site might be seen as testament to — the ability of researchers, writers, designers, and developers to talk together productively — is the very thing that may not have much potential beyond cozy corners such as this…

What really stuck in my craw during the Social Science and Games panels (very well organized by T. L. Taylor) on Monday was the following: the presentations by academics were interesting, wide-ranging, and mutually conversant (with the important exception of my own, no doubt), yet two leading voices on games research and design, Eric Zimmerman and Raph Koster, took the opportunity to publicly ask how these ideas could possibly be relevant for them.

So, nothing new so far, right? We’re used to explaining this gap. Of course it was the academics’ fault for speaking in esoteric jargon. Or, of course, it was the developers’ fault for being mired in the practicalities of their profession, specifically in the need, above all, to make money. But I don’t think either of these tendencies (certainly in evidence on both sides) is sufficient to explain the disconnect.

Instead, I am coming to believe that game designers and developers, on the whole (some of the august exceptions being right here on TN), are simply not able to see beyond their own way of thinking about MMOGs. I am not chalking it up simply to arrogance (although there is some of that too, especially from some bright lights who clearly have enough going on upstairs to know better). I’m actually suggesting that they are (largely) incapable of thinking outside the box (to use a well-worn phrase). This should not be seen, however, as some devastating slam on them — all people, in all places (though I would suggest particularly those enculturated into heavily technical professions) have trouble looking at things from another point of view, and this group is really not so different. But it was still a bit surprising, especially given, in Eric’s and Raph’s cases, their stated interest in academic research.

Here is what I wrote on the backchannel a month ago when the topic was the related issue of developers and their attitude toward the content contributions of their player-base:

But the designer arrogance goes deeper than that, I’d say. This kind of elitist characterization [of users as lacking in skill] itself rests on a rather narrow conception of what “content” is. This is an attitude (deeper than that, it’s a disposition) which I’d suggest is rooted in developer practice generally, and computer games developer practice specifically. It is a view which recognizes that which is scripted, modeled, or otherwise generated according to the practice of software development as seemingly both the (only) site of creativity and (therefore) the ultimate locus of value. Other kinds of (creative) human activity vanish from its radar screen.

This is an argument that forms part of a chapter I’ve written for a volume I’m co-editing with Sandra Braman (Command Lines) that is currently under review; there the specific example is Second Life and the challenges that the varieties of user content therein pose to the multiple ideas about content held by the different teams within Linden Lab. But GDC led me to see this claim as applicable more broadly as well.

The best way to put the assertion (and this is all it is at this point; and again, please keep in mind that there are a number of familiar exceptions) is that the practice of game software development generates a way of seeing and defining problems (as essentially precise, logical, and algorithmic), and of creating solutions (through linear, text-defined code), that makes other ways of accounting for what happens in VWs seem at worst nonsensical and at best irrelevant or quixotic.

Here is just one quick example of this kind of disposition in action: Billmonk, which Constance posted about here. The site promises to help you keep track of your obligations throughout your social network precisely (using any of a number of imaginable currencies). It is double-entry bookkeeping for your friendships, and it thereby prompts you to conceive of these obligations in exact terms. This is a perfect example of a code-based solution to a code-defined problem: people’s moral obligations are essentially precise and monetary, and they therefore need a precise tool to manage them. (And this approach is not just applied externally; within software companies one frequently sees similar efforts to address organizational issues with precise and enumerated systems that can be, above all, measured.) Heather Kelly, one of the developers on a panel on Monday, asked a great question about game development that she hoped researchers could help answer: Why does money trump everything? The answer lies in the remarkably good ‘fit’ between the market and code, and in the existence of a lot of well-trained people who can find ways to exploit it.
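For the sake of concreteness, here is a minimal sketch of what that code-defined problem looks like once it is written down. The Python class and method names below are my own invention for illustration, not Billmonk’s actual design; the point is simply how naturally moral obligation reduces to exact, per-currency balances.

```python
# Hypothetical sketch: double-entry bookkeeping for friendships.
# Class and method names are illustrative only, not Billmonk's actual API.
from collections import defaultdict
from decimal import Decimal

class FriendshipLedger:
    """Tracks who owes whom, in exact amounts, per 'currency' (dollars, beers, favors)."""

    def __init__(self):
        # (debtor, creditor, currency) -> amount owed
        self.balances = defaultdict(Decimal)

    def record(self, debtor, creditor, amount, currency="USD"):
        """Record a new obligation, netting it against any debt in the other direction."""
        amount = Decimal(str(amount))
        reverse = self.balances[(creditor, debtor, currency)]
        if reverse >= amount:
            self.balances[(creditor, debtor, currency)] = reverse - amount
        else:
            self.balances[(creditor, debtor, currency)] = Decimal(0)
            self.balances[(debtor, creditor, currency)] += amount - reverse

    def owed(self, debtor, creditor, currency="USD"):
        """How much, exactly, does debtor owe creditor in this currency?"""
        return self.balances[(debtor, creditor, currency)]

# Usage: lunch, cab fare, "beers": any imaginable currency, all made exact.
ledger = FriendshipLedger()
ledger.record("alice", "bob", 12.50)         # Alice owes Bob for lunch
ledger.record("bob", "alice", 4.00)          # Bob covered the tip next time
print(ledger.owed("alice", "bob"))           # prints: 8.5
print(ledger.owed("bob", "alice", "beers"))  # prints: 0
```

Note how the design choice does the conceptual work: once the ledger exists, whatever cannot be entered as an exact amount in some currency simply does not register as an obligation at all.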

I submit for your comments the idea that the reason many developers have a hard time finding anything of value not only from researchers, but often from their own players, is that they are, in effect, seeing a different world, all the time. An optimistic disposition — a faith, even — in technology and code-based problem solving runs deep in the technology and software development community (see, for example, Gary Lee Downey’s ethnography of CAD/CAM engineering, The Machine in Me), and it hampers developers’ ability to recognize the range of content and community creation (very broadly defined) by users as well as the fruits of the well-established but different methodologies and concepts of researchers.

I, Realmstalker, Being of Sound Mind

[Note: This post originally appeared on Terra Nova.]

January 10, 2006

Ted and I had a conversation last month in WoW that has stuck with me. We are all familiar by now with the explosion of exchange in virtual worlds, whether of the moral (gifts, reciprocity) or market varieties. But what about inheritance? In these worlds, which appear able to persist beyond not only the duration of our interest in them but also, perhaps, our mortality, will there come a time when we want to find the right home for the valuable, but also particularly meaningful, objects we have? But the possibilities don’t stop there. Consider bequeathing an avatar…

Rather than the power-leveled, commodified items, characters, and currency for sale, let us consider for a moment what bequeathing an avatar might mean. We may find that to do so helps us think through some of the thorniest issues about both avatars and the nature of things of value in synthetic worlds.

The apparent nature of the avatar as both a representative, or even form, of the self, on one hand, and as a separable object, on the other, has provided a particularly intriguing duality for researchers. In Synthetic Worlds (2005, p. 110), for example, Ted considers the extent to which an avatar may not only represent a user in a world (to others and to the user herself) but can also come to bear a history, with specific experiences, objects, and credentials attached, irrevocably, to it. Thus, there are attributes which accumulate within the avatar itself, as an artifact, that cannot be transferred out of it.
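To make the duality a bit more concrete, here is a minimal sketch (my own illustration; no particular game’s data model is implied) of an avatar as simultaneously a transferable object and a bearer of attributes that cannot be detached from it:

```python
# Hypothetical sketch: the avatar as both separable object and accumulated personage.
# Field and method names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Avatar:
    name: str
    owner: str                                     # the account currently holding it
    inventory: list = field(default_factory=list)  # separable: items can move in and out
    history: list = field(default_factory=list)    # bound: deeds and credentials stay with the artifact

    def accomplish(self, deed: str) -> None:
        """Experiences attach to the avatar itself, irrevocably."""
        self.history.append(deed)

    def bequeath(self, new_owner: str) -> None:
        """Transfer the avatar whole: its history travels with it, not with the old owner."""
        self.owner = new_owner

realmstalker = Avatar("Realmstalker", owner="original_user")
realmstalker.accomplish("earned a reputation as a reliable guild healer")
realmstalker.bequeath("designated_heir")
# The heir inherits not a blank object but an accumulated social person,
# which is what makes the bequest both meaningful and thorny.
```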

I thought about this further in connection with a broader project of mine to understand the various forms of capital that are created in virtual worlds: market, social, and cultural (you can find the full essay here). In the course of that, a connection to the ethnographic literature sprang to mind. The avatar bears a striking similarity to formalized, inheritable ritual roles, such as those passed down in the Tsimshian potlatch of the Pacific Northwest. The Pacific Northwest potlatch is famous, of course, as an example of the destruction of material wealth by a lineage (“house”) in order to achieve status. But the most recent work on the Tsimshian (Roth, 2002) emphasizes their potlatch not as an occasion for the spectacular destruction of excess wealth, but rather as an event about the inheritance of “names”: the ritual offices held by houses, and specifically by individuals within them, which carry an extensive set of obligations and powers. They are represented materially by robes, blankets, and headdresses, and they are passed down only under the proper conditions, involving extensive material outlay; this effort is itself risk-filled, subject to all the contingencies of any large-scale and involved social drama.

These names have the same dual characteristics as avatars: at times they are objectified (such as in the objects above, or when listed as part of the lineage’s property), while at other times they are directly and inextricably associated with the unique capacities and idiosyncrasies of the persons holding them at any given time. As Roth writes (2002, pp. 132-133), “The dual nature of names as objects of wealth and as personages…corresponds to the dual nature of a structure of names as both a store of wealth and a social structure of individuals.”

While avatars are currently strongly associated with individuals and therefore do not (yet) index a social structure within synthetic worlds in anything but an embryonic way (through their association with guilds, most obviously), the dual principle which Roth notes applies nonetheless: avatars have characteristics of objects or property and characteristics of personas. Certain avatars, as powerful figures in guilds and the like, have acquired similar obligations and relations. Might we yet see the day when these avatars are transferred, with great ceremony, to other users as a matter of course within virtual worlds, as their original owners pass them along due to death or other life changes?

The French anthropologist Maurice Godelier has called inheritance and exchange the “twin foundations of society,” the two primary means through which culture is transmitted through space and time. If we take seriously the notion that synthetic worlds are persistent, and that the things of value made within them are not limited to commodities, then the first “willed” avatar transfer can’t be far away.

Actually, given how things normally work around here, I would ask: has this perhaps already happened?

References

Castronova, Edward. (2005). Synthetic Worlds: The Business and Culture of Online Games. Chicago: The University of Chicago Press.

Roth, Christopher F. (2002). Goods, Names, and Selves: Rethinking the Tsimshian Potlatch. American Ethnologist, 29(1), 123-150.

Delusions of Granter

[Note: This post originally appeared on Terra Nova.]

November 15, 2005

One thing that I’ve been thinking about a lot lately is my set of experiences over the past two years reviewing for large grant agencies. While my experience is still a bit limited, it has been intense, and I have developed some impressions (and that is really all they are, given anyone’s limited point of view relative to these funding agencies) about the current trends in social science research on technology in general (and virtual worlds in particular)…

I invite anyone with experience on either side of this equation (applying, reviewing) to share their thoughts as well. Unfortunately, the incentives in this part of academia push everyone to hoard what little information they have about what works (and doesn’t), but I’m actively trying to push against that here in order to ponder whether there are any important shifts or trends that we can identify.

This post is also inspired by one thing that has struck me in particular: the unabashed and common presence of qualitative and exploratory research methods as components in research proposals. In many cases in my experience, and in quite large grants, ethnographic and interview-based methods have a prominent place (I understand that this is somewhat field-specific; education has apparently been pushing the other way, toward quantitative). Cultural anthropologists have a tendency to see themselves as the ‘black sheep’ of the social sciences because of the historical marginalization of both their method and their subjects (for many decades the far-flung places of the world), and this still has some validity, but it is such a dominant self-picture that sometimes we miss how our methods and perspective are in fact increasingly welcomed in some quarters. (Marketing and governmental intelligence are both growth industries for anthropologists right now–a development which makes my head nearly rotate off its axis.)

My general impression of the qualitative methods, particularly ethnographic research, in these proposals (again, primarily those focusing on technology) is that their quality is mixed–sometimes there is careful thinking and established qualifications behind the research design of that component, but just as often there is clearly not. But I take this as actually good news for qualitative social science, in a way, because it suggests that this method is valued enough in funding circles to be a component that applicants ‘reach for’ even if they do not have the expertise to carry it out. Of course, it is the reviewers who then evaluate the quality of this component, along with the others in the proposal, so my presence as a researcher with a qualitative social science background also itself testifies to these agencies’ commitment to qualitative approaches.

Beyond this point, however, I’ve also noticed that there are three contrasts that variously define types of research that are sometimes conflated more generally, but which in my experience are seen quite clearly as distinct within the funding arena, and which we need to think a bit about in virtual worlds research. I’ll keep this short, however, because these are very treacherous waters, and I really just want to begin a conversation. (Boilerplate caveat: the oppositional way that I’ve presented them here should be taken with a grain of salt, because these are not mutually exclusive, at least theoretically.)

-quantitative vs qualitative approaches. In addition to my impression noted above, there is a broader trend, which the funding trend (if it exists) may be following, toward a reincorporation of qualitative research across the social sciences (most obviously in sociology). For me, the key question in virtual world research is: How do we incorporate both of these forms, and evaluate claims across them?

-macro- vs micro-level studies. This is not quite the same as quant/qual, although they are often treated as such. For virtual world research, the question is: What counts as macro-level? Given that these worlds still occupy the attention of a relatively small group of people, are we necessarily engaging in small-scale research, at least currently? How broad is the impact of our conclusions?

-experimental vs exploratory research. This is to me a central, but largely unspoken, contrast that has a particular importance for virtual world research. Exploratory research, which gets far less attention as a scientific methodology than experimental research, is typical of the activity of many geologists, astronomers, botanists, and archaeologists. Rather than hypothesis-testing, it is based on the gathering of information about a given phenomenon, particularly one that is large, complex, and about which we know too little to generate useful (that is, other than self-confirmatory) hypotheses. Its contribution is both empirical (lots of data–‘thick description’ in Geertz’s phrase) and analytical, proposing possible explanations suggested by the data. (In this respect it can plausibly be seen as an expansion of the ‘observation’ step of the ‘scientific method’.)

I’m particularly interested in this last contrast, because I get the impression that it mirrors differing views among virtual world researchers. Some see virtual worlds as having promise for their work primarily as sites for experimental research, and they look for how experimental research could be carried out within and through VWs designed for that purpose. A possible extension of this claim is that until this research is done, all the observation and analysis of what’s currently going on is not going to generate knowledge that is comparable in terms of impact. Others see the current landscape of virtual worlds as already a site for such a wide variety of (potentially transformative) human activity that exploratory research is our best hope for generating knowledge about them. The possible extension of this view is to say that experimental contexts will never generate insights that apply in the ‘real world’, where there are real stakes.

I have some reservations about laying out these contrasts as I see them, because I don’t want to polarize the discussion, but (to return to the topic at hand) I was surprised by the degree to which qualitative, micro-, and exploratory research were a significant part of the proposals I’ve seen, and we researchers need to take account of this when we think about how we can increase support for, and awareness of, virtual worlds research.

Meaning, Games, and Bureaucracy

[Note: This post originally appeared on Terra Nova.]

November 7, 2006

A while back in a comment posted in this thread, Ren posed an excellent question that I’ve been pondering for some time. Wondering about the implications of my model of games as process for the question of meaning, he asked:

Do we then just have that the meaning-generative property of games is just a fact of process [i.e., no different from other social processes] and the types of meanings [in games] are consequences of the contrived contingency?

Curse you, Ren! I haven’t slept since August!

Puzzling through this in the wee hours of the night, I began with how I responded to Ren originally: on Weber and bureaucracy. This has led to the beginnings of a paper that I hope to have up on SSRN soon, but I wanted to talk about it now because I gave its ideas their first airing on a recent panel worth mentioning. Tom Boellstorff (SL: Tom Bukowski) and I co-organized a panel on virtual worlds and anthropology at the annual meetings of the American Anthropological Association, where we were joined by Heather Horst and Mizuko Ito (co-authored paper, Ito presenting), Genevieve Bell, and Douglas Thomas, with the distinguished linguistic anthropologist Michael Silverstein as our discussant. The panel was filled with great ideas, on everything from virtual methodism in England to the Neodaq, and I hope to have news soon of those presentations culminating in papers.

As for me, I gave a version of my current and still-rough answer to Ren’s question. I proposed that virtual worlds and their emergent effects demonstrate an aspect of the human condition that has largely been obscured under modernity – that of the human engagement with the unpredictable or contingent. Max Weber and his definitive account of bureaucracy and the state formed the backdrop for a century-long inquiry into the vanishing sources of meaning under the advent of rationalization; for Weber, charismatic leadership provided the only answer to the iron cage of rationality. But a consideration of bureaucracy, games, and virtual worlds alongside one another fills in this bleak picture. If bureaucratic projects are driven, at root, by an ethic of necessity (in their procedures and logic of consistency), games, and the virtual worlds based on them, are driven by its antithesis: contingency. As socially legitimate spaces for cultivating the unexpected, games provide grounds for the generation of meaning that is not ultimately charismatic. Virtual worlds like Second Life have largely retained this open-ended quality, and they rely on game architecture to create a domain that, while not utterly unbounded in possibility, has wide opportunities for success, failure, and unintended consequences, and it is this that makes possible the meaningful and emergent effects we witness today.

So the answer to Ren’s question is that, in my view, the engaging mix of constraint and contingency that well-designed games (and the worlds based on them) have makes them more productive of meaning than those parts of our lives that are increasingly governed by regulatory projects which aim to eliminate the uncalled-for. (One might further say that those parts of our lives that are too contingent, too unbounded in possibility, also create a challenge of meaning.) Of course bureaucracy in practice is also a site for contingency (and regularity). Bureaucratic projects certainly do not perfectly realize the modern aim of consistency, but they always aspire to do so. Games, by contrast, are socially legitimate domains where unpredictable events are supposed to happen, and that is why they are valuable lenses through which to see key points of discursive and practical contestations over meaning and resources played out. Games, then, do not create “unbounded” contingency; they are not places where anything at all can happen. But they provide room for a contrived mix of constraint and contingency. By mixing the regularity and the sources of contingency just so, they create their potential for the meaningfully unexpected, as well as for unexpected meanings.

Claude Shannon in the mid-twentieth century presented the surprising finding from mathematical information theory that the messages which contain the most information are those that are roughly half expected (redundant) information and half unexpected (noise) information. Katherine Hayles of UCLA expanded on this point during a visit to my seminar on ethnography and technology at UWM. Imagine, she said, a language in which it was impossible to say anything new; it would be meaningless. The lesson is that contingency is inextricable from meaning. New circumstances, new experiences, and new collisions between different systems of meaning are at the heart of meaningful human life. This is why we should be very interested in virtual worlds and the approach to cultivating the contingent which underwrites them. By leveraging the techniques of game design, Linden Lab and others have almost accidentally fallen into creating products which are supposed to do things they do not expect, and in this way they have made a choice that turns out to be strikingly anti-bureaucratic in its ethical stance. For Weber, it was only the individual virtuoso – a master of performance in a singular context – who could provide new meaning in an era of the iron cage. Virtual worlds show us another possibility: that meaning can be cultivated through techniques derived from game-making.
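For readers who want a numerical sketch of that half-and-half claim, here is my own gloss in standard binary-entropy terms (not a quotation from Shannon or Hayles): information per symbol peaks when the expected and the unexpected are equally likely, and vanishes when a message is entirely predictable.

```python
# Illustration of the information-theoretic point above (my own gloss):
# for a two-outcome source, information per symbol H(p) = -p*log2(p) - (1-p)*log2(1-p)
# is maximized at p = 0.5 and falls to zero when one outcome is certain.
import math

def binary_entropy(p: float) -> float:
    """Bits of information per symbol when one outcome has probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a fully predictable message carries no new information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"p = {p:.2f}   H = {binary_entropy(p):.3f} bits")
# The output peaks at 1.000 bit when p = 0.5: the greatest capacity for meaning
# sits at the mix of redundancy and surprise, not at either extreme.
```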

Anti Anti-Anecdotalism

[Note: This post originally appeared on Terra Nova.]

December 30, 2006

The recent flurry of attention to SL and its numbers (here, here, here, and, most recently, here) leads me to think that folks might be interested in having a chance to chew through some methodological stuff, along the lines of the “Methodologies and Metrics” panel on which Nic, Dmitri, and I served at the State of Play/Terra Nova Symposium earlier this month. Below the fold are some tweaked ideas from emails I circulated among the panelists in preparation for the panel. While I’m not discussing virtual worlds and the methodologies we’d use to understand them specifically, I hope this will be helpful background for such a discussion.

It is hard to get away from a common conception, both within and outside academia, that numbers are the one, true path to understanding. This is part of a set of cultural expectations that are reproduced precisely because they are so rarely challenged. Most commonly, one hears that claims with numbers are “grounded” or otherwise true in a way that other kinds of claims (such as those based on the kind of research that Tim talked about here) are not. Claims based primarily on those other kinds of research, particularly on interviews and participant observation, often get branded as “anecdotes”, with the suggestion that they hold no real value as reliable claims. Here I would like to push against this association and help clarify our understanding of what qualitative social science research methods (ethnographic ones in particular) bring to the table. In short, they are not “anecdotes”, and they can form the basis of reliable claims, even without numbers, although as Dmitri and I never tire of saying, having both is better than having just one.

No social scientist, of course, would want to “generalize from anecdotes,” but the problem is that often we do not really understand what that means. Or perhaps it is more accurate to say that across the academy many scholars (not to mention the public at large and policy makers) do not know enough about methodology (this is true of both qualitative and quantitative methods, and more broadly of exploratory versus experimental research), and therefore these charges are in essence a political move meant to marginalize the other side’s research, one that can succeed because of that lack of broad grounding. From my conversations with everyone involved with TN I have never felt that we (as a group of authors) were particularly prone to make these errors, but there is no question that this kind of charge finds its way into the discussions on TN, as in the recent threads.

The goal of all social science is “generalization” in a sense, but the legacy of positivist thinking about society (that it is governed by discoverable and universal laws) has left us in the habit of thinking that the only generalization that counts is universal. It is always interesting to me how some work (especially that done by the more publicly-legitimized fields, such as economics) can proclaim itself to be about the universal despite the fact that only a moment’s thinking reveals the application of the ideas to be narrow (to industrialized, capitalist contexts, etc). The strange thing is that this doesn’t end up being a problem for those already-legitimate fields; instead, it is largely ignored — this is what being well situated on the landscape of policy and academic relations of power gets you (to be Foucauldian for a moment).

But of course generalization, in the more limited sense of seeking a bridgehead of understanding across times and spaces, has long been the hallmark of history (the first social science, in a way). The strange thing is how difficult it seems to be for those who would like to criticize methods such as participant observation and interviewing to see the projects those methods support in a similar light to history and its efforts. There is nothing inherently problematic with such claims; they are just as able to inform policy as universal ones, and have the benefit of incorporating more nuance.

So then what is an anecdote? It is a description of an event isolated from its broader context, so it is no wonder that all of us would like to shy away from the suggestion that we are drawing our conclusions in isolation from that broader context. But ethnography (meaning principally participant observation, along with interviewing, surveying, and other methods), to speak of the relevant methodology most familiar to me, quite distinctly does not treat these events in isolation. Brief descriptions are often presented in the course of ethnographic writing in order to illustrate a point concretely, but the point made is only as sound as the degree to which we trust the author’s command of the broad array of processes ongoing in the context at hand. How is this credibility established? Through a complex of many, many, many techniques: writing, thick description, peer review (always including experts in that period or place), solid reasoning itself, a track record of previous research, and so on. This form of generating reliable claims is not somehow “less” viable than other ones, and its strengths and weaknesses are of similar scope (though they differ in their particulars).

So one of the tropes that one finds in the recent spate of posts about SL and its numbers is the suggestion that only when numbers that we trust are present do we feel that the claims authors make are “grounded”. This is not true. As anyone with much experience with statistics knows, the numbers say nothing without the ability to interpret them provided by other kinds of interpretive research. In fact, given the above, if any research has a claim to being “grounded” it is the first-hand research of participant observation.

Even when this kind of contribution from qualitative research methods is acknowledged, however, there is still a tendency to see the claims of work based on them as always and severely limited to a “niche”, at least until numbers come along. But a social history or ethnography of a place and time is not this narrow. They are able to make general claims at the level of locale, region, or even nation, and (when done well) they often do. The idea behind ethnographies is that the ethnographic research method, at root, inculcates in the researcher a degree of cultural competence such that he or she can act capably (and sensibly) as a member of that culture. Supported by observation, archival research, surveys, or interviews (usually some combination), as well as (possibly) prior work, this learned disposition informs an account of the shared disposition of the actors on the ground, and is laid out in the published work (as best one can in writing) as representative of a worldview from a particular time and place. Thus, my claims about gambling in Greece were made beyond the level of the city where I did my research, and I argued for the existence of a cultural disposition that characterizes Greek attitudes toward contingency at something like the national level (without holding too much to hard boundaries).

Of course, these claims are further bolstered by the broadening of one’s research methods, whether through surveys, demographic data, archival research, media studies, or any other means that support the big picture. Relatedly, there is nothing about quantitative methods that dictates that they must “stay big.” They can be productively focused and narrow as well.

This is not to say that there isn’t a limit to the level of generalization for qualitative research, one that is exceeded by quantitative methods. So, for example, while an ethnography could make reliable claims about Greek culture, I don’t think it could about American culture. The reason for this connects to what culture is — a set of shared expectations, based on shared experience and continually re-made through shared practices — and to the fact that American culture is far too fragmented and varied for an ethnography to make such claims. But while this is true, the important point is that qualitative methods’ levels of claims are not as particularist as they are sometimes made out to be.

I become, I confess, a bit sad whenever I encounter this kind of marginalization in action (for me, it most often happens on interdisciplinary fellowship review panels and the like), because at root it bespeaks a lack of trust across the academy. There is little doubt that there have been excesses across the gamut of methodologies and theories that the social sciences use (reductions to representation, or materiality, or power all come to mind), and perhaps this accounts for the parochialism and suspicion, but let’s hope that we don’t fall prey to what are, in my view, more often not battles over the nature of sound inquiry but gambits meant to direct or redirect institutional resources.

Discipline & Pwnage

[Note: This post originally appeared on Terra Nova.]

February 1, 2007

So I’ve been having my usual beginning-of-the-semester chats with my graduate students about their projects and progress. I enjoy these, and I think they do too (they almost never complain about the thumbscrews, or — more of a shock — having to read Habermas). One of them, Krista-Lee Malone, is a master’s student and long-time gamer who is completing an excellent thesis about hardcore raiding guilds. During our chat she said something about how these raiding guilds went about preparing her to participate in their activities, and it prompted me to follow up on some ideas from here. It’s about Foucault, bodies, institutions, and whether the relationship between developers and guilds is changing in important ways.

Krista-Lee plays a priest (one with more purples than I’ll ever see for my druid, I’m sure), and what she said was (paraphrasing), “I can healbot Molten Core in my sleep, but if I’m thrown into a new situation, I can’t heal at all.” While that’s probably an overstatement, it suggests something about the nature of raiding guild discipline — at least, pre-TBC. It turns out, and this is not unusual, that the guild power-leveled her toon and then taught her to follow a very specific and detailed script for the instances they were running, starting with UBRS and then through Naxx.

Michel Foucault famously argued that the power of modern institutions is driven, at root, by the ability to discipline people, or, more directly, to discipline their bodies — to mold those bodies and order their actions in ways that allow groups to achieve institutional objectives effectively. To do this, they draw on practical techniques developed first in places like early Christian monasteries and the Roman legions. Bodies are organized, regimented, taught to sit, to stand, to kneel, to match their singular shapes to the demands of regularity — no pinky out of place, the leg held just so. The effect of this “bio-power”, as he most convincingly shows in Discipline & Punish: The Birth of the Prison, is not only effective institutional control over otherwise unruly subjects, but in fact a re-shaping of their selves. They come to see this discipline as constitutive of who they are, as shaping their very desires. The classic (and idealized — practice is messier) example is the panopticon, where prisoners are architecturally situated in view of an invisible and authoritative observer. The guard watches from a darkened room while the prisoners are laid out in a brightly-lit Cartesian grid. It comes to matter little if the guard is there at all, as the prisoners internalize the surveillance.

I’m not saying that Krista-Lee was a prisoner of her guild. Um, exactly. Foucault argues (in later works) that this disciplining of bodies is something taking place all around us, particularly as we learn to act within highly-regulated contexts, like schools, the military, hospitals, and airports. And he asserts that, like the prisoners, we come to accept and even celebrate the kind of self the institutions have made of us.

All of this is to get us thinking about the extent to which hardcore raiding guilds should be seen in a similar light. The essence of disciplined bodies is that they are malleable; they can be shaped to perform in lock-step (literally) under a command hierarchy. The tension, of course, is that this strategic control always involves a tradeoff with the tactical: the ability of a group to respond on the fly to emergent situations. For Krista-Lee, this effect was directly discernible — while she enjoys soloing and quest-grouping, she felt lost in new instances, when there wasn’t an explicit script to follow.

As I’ve pointed out, for WoW this had — before the expansion — created a mutually constructive relationship between the 5(10)-person instancing and the large-scale raiding. While small-scale grouping not only allows for but depends upon tactical rethinking on the fly, large-scale groups narrow and leverage the set of available class skills (maybe hunters begin to leave pets behind, druids get pushed into healing, only one hemo rogue is called for) into more strictly-defined roles. The small-scale was, perhaps like boot camp in the military, an intense and necessary part of inculcating a set of competencies (what is a pull, sheeping, aggro), but one that ultimately is left behind, smaller in comparison to the institutional ambitions which these competent bodies now serve to realize. Rationalized systems of resource distribution, like DKP, along with political structures and communications tools, also play a role for these institutions, harnessing individual desire into organizational discipline in order to get the 40 people needed together all at one time, ready to down Onyxia or tackle a world boss.
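As an aside, the bookkeeping logic of such a system is simple enough to sketch in a few lines. The rules below are a hypothetical composite (guilds’ actual DKP variants differ widely); the point is only how attendance gets converted into a rationalized, measurable claim on loot:

```python
# Hypothetical sketch of a DKP-style system: attendance earns points, loot spends them.
# Rules and names are illustrative composites, not any particular guild's policy.
class DKPTable:
    def __init__(self):
        self.points = {}  # member -> accumulated DKP

    def award(self, members, amount):
        """Everyone present for a boss kill or raid night earns points."""
        for m in members:
            self.points[m] = self.points.get(m, 0) + amount

    def award_loot(self, item, bidders, cost):
        """Give the item to the eligible bidder with the most DKP and charge them."""
        eligible = [m for m in bidders if self.points.get(m, 0) >= cost]
        if not eligible:
            return None  # nobody can afford it; the item is rolled off or disenchanted
        winner = max(eligible, key=lambda m: self.points[m])
        self.points[winner] -= cost
        return winner

# Usage: a 40-person raid downs a boss, then loot goes to whoever has "earned" it.
table = DKPTable()
table.award(["healer", "tank", "rogue"], 10)   # boss down: everyone present earns 10
table.award(["healer", "tank"], 10)            # next raid night
print(table.award_loot("epic staff", ["healer", "rogue"], 15))  # prints: healer
```

The disciplinary point is in the design: individual desire for loot only pays off through sustained, measurable participation in the institution’s schedule.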

The reason I think this is particularly interesting for us to think about now is the cases of both WoW and Second Life and some of the recent changes these VWs have undergone. The downsizing of endgame instances in WoW, the availability of soloable loot roughly on a par with Tier 1+ in Outland, and (to my unsystematic eye) the prevalence of small group quests there with excellent rewards all suggest that Blizzard is moving away from supporting the emerging institutions (guilds) of its creation, ones which had dominated server culture for pretty much the whole game. This is an interesting contrast with past TN conversations, like the one here.

By contrast, the revamped estate tools in SL (which I’m sure many folks out there know more intimately than I) increase the amount of governance by island owners not only over a piece of property but also over a group of people, and in fact these tools have thereby become deeply intertwined. To my eye, this enables the generation of institutional players on the SL landscape that LL has never had to deal with before. I’m not thinking first of the existing external institutions with a “presence” in SL, but rather of those entities that until recently we could somewhat reliably continue to think of as individuals, but which are now better understood as institutions. While the relationship of LL to some of its major content creators has been undoubtedly cozy, one can’t help but wonder how long that will last — institutions are competitive. The interesting thing about Second Life is the extent to which Linden Lab has had a “free ride” for a long time, effectively being the only large institutional player in the arena. Social convention was emergent from the users, and was (is) something with which to contend — a lot of time at Linden is devoted to this “community management”. But architecture, the market, and “law” (other modes of governance, as I see it) were all firmly in Linden’s hands. That’s changing now, and the question is whether Second Life will fly apart at the seams once these other institutionalized interests find their footing.

All this is really just to wonder whether we’re entering an era where the relationships between virtual world makers and the people involved in them are changing. It is probably wise for us to get in the habit of thinking just as readily about developer/(in-world) institution relationships as we do about developer/individual player relationships. I actually think the old habit will be hard to break — the idea of the game maker/game player relationship as primarily institution-to-individual is just one instance of the ingrained tendency for those in industrialized societies to think about social institutions primarily as they relate to individuals.

WoW and SL both demonstrate, at a very broad level, different solutions to the emergence of institutions within their creations, an emergence that was, I believe, inevitable once resources began accumulating within these persistent and contingent domains. Foucault, like Weber, thought that people banding together to accomplish something was fine, but was wary of what happens next. Once any nascent institution begins looking for something else to accomplish, its primary raison d’etre has already changed. At that point, it’s more interested in its own reproduction than in its original aims or purview. Once that happens, look out.

[Addendum: Ever-alert Julian Dibbell points to ShaunConnery’s Rapwing Lair. Surely the script in Krista-Lee’s guild never sounded so good.]