Tuesday, July 25, 2017

Four-square brightness


Computational Thinking and the Digital Critic: Part 1, Four Good Books

This is about computational thinking. But computational thinking is not one thing. It is many, some as yet undefined. What can it become for students of the humanities?

How, you might ask, are we to engage a computational understanding of literary process, if computation isn’t well-defined?

With care, I say, with care. We have to make it up.

* * * * *

As Stephen Ramsay pointed out in a post, “DH and CS” (where DH = digital humanities and CS = computer science), computer scientists are mostly interested in abstract matters of computability and data structures while programmers are mostly concerned with the techniques of programming certain kinds of capabilities in this or that language. Those are different, though related, undertakings.

Further, the practical craft has two somewhat different aspects. One faces toward the end user and is concerned with capturing that user’s world in the overall design of the program. This design process is, in effect, applied cognitive anthropology. The other aspect faces toward the computer itself and is concerned with implementing that design through the means available in the appropriate programming language. This is writing, but in a very specialized dialect. But it’s all computational thinking in some meaningful sense.

Though I have written a computer program or three, that was long ago. I have, however, spent a fair amount of time working with programmers. At one period in my life I documented software; at a different time I participated in product design.

But I also spent several years in graduate school studying the computational semantics of natural language with the late David Hays. That’s an abstract and theoretical enterprise. Though he is one of the founders of computational linguistics, Hays did no programming until relatively late in his career, after he’d left academia. He was interested in how the mind works and computation was one of his conceptual strategies. I studied with Hays because I wanted to figure out how poetry worked. All the members of his research group were interested in the human mind in one way or another; some of them were also programmers of appreciable skill.

Computational Thinking and the Digital Critic: Part 2, An Ant Walks on the Beach and a Pilot is Alone

Simon’s ant is a well-known thought experiment from Chapter 3, “The Psychology of Thinking: Embedding Artifice in Nature,” in Herbert A. Simon, The Sciences of the Artificial, 1981. It’s a parable about computation, about how computational requirements depend on the problem to be solved. Stated that way, it is an obvious truism. But Simon’s thought experiment invites you to consider this truism where the “problem to be solved” is an environment external to the computer – it is thus reminiscent of Braitenberg’s primitive vehicles (which I discussed in Part 1).

Think of it like this: the nervous system requires environmental support if it is to maintain its physical stability and operational coherence. Note that Simon was not at all interested in the physical requirements of the nervous system. Rather, he was interested in suggesting that we can get complex behavior from relatively simple devices, and simplicity translates into design requirements for a nervous system.

Simon asks us to imagine an ant moving about on a beach:
We watch an ant make his laborious way across a wind- and wave-molded beach. He moves ahead, angles to the right to ease his climb up a steep dunelet, detours around a pebble, stops for a moment to exchange information with a compatriot. Thus he makes his weaving, halting way back to his home. So as not to anthropomorphize about his purposes, I sketch the path on a piece of paper. It is a sequence of irregular, angular segments--not quite a random walk, for it has an underlying sense of direction, of aiming toward a goal.
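Simon's moral, that the complexity lives in the beach rather than in the ant, is easy to see in code. Below is a toy sketch of my own (the grid, the obstacle coordinates, and the single greedy rule are all illustrative assumptions, nothing from Simon himself): an ant that always steps to the reachable neighbor nearest its goal walks a straight diagonal on an empty beach, but traces a weaving detour once pebbles block the way.

```python
def ant_path(obstacles, goal, start=(0, 0), max_steps=200):
    """Walk a grid with one fixed rule: step to the reachable
    neighbor nearest the goal. All complexity comes from `obstacles`."""
    x, y = start
    path = [(x, y)]
    for _ in range(max_steps):
        if (x, y) == goal:
            break
        neighbors = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0) and (x + dx, y + dy) not in obstacles]
        if not neighbors:
            break  # boxed in
        # Greedy local rule: minimize squared distance to the goal.
        x, y = min(neighbors, key=lambda p: (p[0] - goal[0])**2 + (p[1] - goal[1])**2)
        path.append((x, y))
    return path

# Same ant, same rule; only the beach differs.
flat = ant_path(set(), goal=(5, 5))        # straight diagonal
pebbly = ant_path({(1, 1)}, goal=(5, 5))   # weaves around the pebble
```

The second path is longer and more irregular than the first, yet nothing about the ant changed; only the environment did, which is exactly Simon's point.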

Monday, July 24, 2017

More synch: Firewalking (performers and spectators), Romantic partners (& empathy for pain)

Pavel Goldstein, Irit Weissman-Fogel, Simone G. Shamay-Tsoory. The role of touch in regulating inter-partner physiological coupling during empathy for pain. Scientific Reports, 2017; 7 (1) DOI: 10.1038/s41598-017-03627-7
Abstract: The human ability to synchronize with other individuals is critical for the development of social behavior. Recent research has shown that physiological inter-personal synchronization may underlie behavioral synchrony. Nevertheless, the factors that modulate physiological coupling are still largely unknown. Here we suggest that social touch and empathy for pain may enhance interpersonal physiological coupling. Twenty-two romantic couples were assigned the roles of target (pain receiver) and observer (pain observer) under pain/no-pain and touch/no-touch conditions, and their ECG and respiration rates were recorded. The results indicate that the partner touch increased interpersonal respiration coupling under both pain and no-pain conditions and increased heart rate coupling under pain conditions. In addition, physiological coupling was diminished by pain in the absence of the partner’s touch. Critically, we found that high partner’s empathy and high levels of analgesia enhanced coupling during the partner’s touch. Collectively, the evidence indicates that social touch increases interpersonal physiological coupling during pain. Furthermore, the effects of touch on cardio-respiratory inter-partner coupling may contribute to the analgesic effects of touch via the autonomic nervous system.

Ivana Konvalinka, Dimitris Xygalatas, Joseph Bulbulia, Uffe Schjødt, Else-Marie Jegindø, Sebastian Wallot, Guy Van Orden, and Andreas Roepstorff. Synchronized arousal between performers and related spectators in a fire-walking ritual. PNAS, May 17, 2011, vol. 108, no. 20, 8514-8519. DOI: 10.1073/pnas.1016955108
Abstract: Collective rituals are present in all known societies, but their function is a matter of long-standing debates. Field observations suggest that they may enhance social cohesion and that their effects are not limited to those actively performing but affect the audience as well. Here we show physiological effects of synchronized arousal in a Spanish fire-walking ritual, between active participants and related spectators, but not participants and other members of the audience. We assessed arousal by heart rate dynamics and applied nonlinear mathematical analysis to heart rate data obtained from 38 participants. We compared synchronized arousal between fire-walkers and spectators. For this comparison, we used recurrence quantification analysis on individual data and cross-recurrence quantification analysis on pairs of participants' data. These methods identified fine-grained commonalities of arousal during the 30-min ritual between fire-walkers and related spectators but not unrelated spectators. This indicates that the mediating mechanism may be informational, because participants and related observers had very different bodily behavior. This study demonstrates that a collective ritual may evoke synchronized arousal over time between active participants and bystanders. It links field observations to a physiological basis and offers a unique approach for the quantification of social effects on human physiology during real-world interactions.
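Both papers measure synchrony with recurrence-based methods. As a rough illustration of the core idea behind cross-recurrence quantification analysis (this is my own toy sketch with invented numbers, not the delay-embedded, carefully parameterized analysis the authors ran), a cross-recurrence matrix marks every pair of time points at which two signals come within a tolerance eps of each other, and the overall recurrence rate serves as a coarse index of coupling:

```python
def cross_recurrence(x, y, eps):
    """Matrix with a 1 wherever sample x[i] and sample y[j]
    fall within eps of each other, else 0."""
    return [[1 if abs(xi - yj) <= eps else 0 for yj in y] for xi in x]

def recurrence_rate(matrix):
    """Fraction of (i, j) pairs that recur: a crude synchrony index."""
    return sum(map(sum, matrix)) / (len(matrix) * len(matrix[0]))

# Invented 'heart rate' series: one partner tracking the other vs. not.
x = [60, 62, 65, 70, 74, 76, 75, 72]
coupled = [61, 61, 64, 69, 75, 77, 74, 71]
uncoupled = [80, 55, 90, 50, 85, 58, 88, 52]

rr_coupled = recurrence_rate(cross_recurrence(x, coupled, eps=2))
rr_uncoupled = recurrence_rate(cross_recurrence(x, uncoupled, eps=2))
```

The coupled pair recurs far more often than the uncoupled one. The published analyses also quantify the diagonal-line structure of these matrices, which is what licenses claims about shared dynamics rather than mere coincidence.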

How's this for enlightenment?


Sunday, July 23, 2017

Remembrance of Christmas past


Computational Psychiatry?

Psychiatry, the study and prevention of mental disorders, is currently undergoing a quiet revolution. For decades, even centuries, this discipline has been based largely on subjective observation. Large-scale studies have been hampered by the difficulty of objectively assessing human behavior and comparing it with a well-established norm. Just as tricky, there are few well-founded models of neural circuitry or brain biochemistry, and it is difficult to link this science with real-world behavior.

That has begun to change thanks to the emerging discipline of computational psychiatry, which uses powerful data analysis, machine learning, and artificial intelligence to tease apart the underlying factors behind extreme and unusual behaviors.

Computational psychiatry has suddenly made it possible to mine data from long-standing observations and link it to mathematical theories of cognition. It’s also become possible to develop computer-based experiments that carefully control environments so that specific behaviors can be studied in detail.
The article then goes on to discuss research reported in:

Sarah K. Fineberg (MD, PhD), Dylan Stahl (BA), Philip Corlett (PhD), Computational Psychiatry in Borderline Personality Disorder, Current Behavioral Neuroscience Reports, March 2017, Vol. 4, Issue 1, pp. 31-40. arXiv:1707.03354v1 [q-bio.NC]
Purpose of review: We review the literature on the use and potential use of computational psychiatry methods in Borderline Personality Disorder.

Recent findings: Computational approaches have been used in psychiatry to increase our understanding of the molecular, circuit, and behavioral basis of mental illness. This is of particular interest in BPD, where the collection of ecologically valid data, especially in interpersonal settings, is becoming more common and more often subject to quantification. Methods that test learning and memory in social contexts, collect data from real-world settings, and relate behavior to molecular and circuit networks are yielding data of particular interest.

Summary: Research in BPD should focus on collaborative efforts to design and interpret experiments with direct relevance to core BPD symptoms and potential for translation to the clinic.

Tuesday, July 18, 2017

Language boundaries & surface tension

In his new study, Burridge presents a deliberately minimal model of language change, which focuses on explaining dialect distribution solely in terms of topographical features and speaker interaction. The model assumes the existence of multiple linguistic variants for multiple linguistic variables, which effectively define different dialects. In determining whether a given speaker adopts a specific variant, the model does not consider “social value” factors. Instead, it assumes that speakers interact predominantly with people living in their local environment (defined by some radius around their home), and that they will conform to the speech patterns of the majority of people in that geographic vicinity. Such local linguistic alignment favors the emergence of distinct dialect areas, with dialect boundaries tending to shorten in length in a way that mimics how surface tension minimizes the surface area of a water droplet (see Fig. 1). In a region with uniform population density, this language-based surface tension will cause the boundary between two dialects to form straight lines. Densely populated areas, however, interfere with boundary straightening by repelling boundaries and effectively creating new dialect areas around themselves. Furthermore, topography can have an imprint on dialect spatial distributions. In systems with irregular perimeters, Burridge shows that boundary lines tend to migrate to places where they emerge perpendicular from the edge of the system, such as indentations in coastlines.
Original research HERE (PDF).
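Burridge's model is a continuum one, but the local-conformity mechanism described above can be caricatured in a few lines. In this toy of my own (the grid size, the interaction radius, and the tie-breaking rule are all arbitrary assumptions, not the published model), every speaker adopts the variant used by the majority of neighbors within a fixed radius, and a jagged boundary between two dialect regions promptly shortens and straightens, the discrete analogue of surface tension:

```python
def step(grid, radius=1):
    """One round of local conformity: each speaker adopts the
    majority variant within `radius` cells (ties go to variant 0)."""
    h, w = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(h):
        for j in range(w):
            votes = count = 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        votes += grid[ni][nj]
                        count += 1
            new[i][j] = 1 if 2 * votes > count else 0
    return new

def boundary_length(grid):
    """Count adjacent cell pairs holding different variants."""
    h, w = len(grid), len(grid[0])
    horiz = sum(grid[i][j] != grid[i][j + 1] for i in range(h) for j in range(w - 1))
    vert = sum(grid[i][j] != grid[i + 1][j] for i in range(h - 1) for j in range(w))
    return horiz + vert

# A jagged boundary between variant 0 (west) and variant 1 (east).
before = [[1 if j >= split else 0 for j in range(6)]
          for split in (3, 2, 4, 3, 2, 4)]
after = step(before)  # boundary straightens to a vertical line
```

For this grid, one round drops the boundary length from 13 to 6 and leaves a perfectly straight north-south line, the shortest boundary compatible with the two regions.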

Monday, July 17, 2017

Organic sphere of latticework


Where is the never ending (medieval) text? [#DH]

I checked in at Academia.edu today and found another article by medievalist Stephen Nichols. I've not finished it, but wanted to blog a passage or two anyhow.
Stephen G. Nichols, Dynamic Reading of Medieval Manuscripts, Florilegium, vol. 32 (2015): 19-57. DOI: 10.3138/flor.32.002. Download at Academia.edu: https://www.academia.edu/33907842/Nichols_Dynamic_Reading_flor_32_002
Here's the abstract:
Abstract: Digital manuscript and text representation provides such a wealth of information that it is now possible to see the incessant versioning of works like the Roman de la Rose. Using Rose manuscripts of the Bibliothèque municipale de Lyon MS 763 and BM de Dijon MS 525 as examples and drawing on Aristotelian concepts such as energeia, dynamis, and entelecheia, the copiously illustrated article demonstrates how pluripotent circulation allows for “dynamic reading” of such manuscript texts, which takes into consideration the interplay between image, text, and the context of other texts transmitted in the same manuscript.
What caught my attention was his statement about the unexpected impact of digital technology. It made it possible, for the first time, to examine a number of different codices of the same title and to compare them. And THAT led to a sea-change in understanding of what a text is. The normative concept of the Urtext as the author's original version is in trouble. What happens to the so-called critical edition? Thus (p. 22):
that the critical edition represents a construct based on selected evidence is neither exceptional nor particularly shocking. More problematic is the fact that expediency decrees that manuscript mass be accorded short shrift. Not all manuscripts are equal in this scenario. Indeed, the purpose of manuscript selection—the choice by the editor of a small number of manuscripts deemed reliable — lay precisely in minimizing the number of manuscripts. The more versions an editor could eliminate as defective or uninteresting, the greater the probability that one had located the few copies closest to an original or early version of a work. The select copies could then be closely scrutinized for variant readings. And ‘variant’ meant precisely that: readings of lines or passages differing from what the editor determined to be the normative text. It was in reaction to such a restrictive treatment of manuscript variation that New Philology emerged. Initially, we argued that manuscript copies bore witness to a dialectical process of transmission where individual versions might have the same historical authority as that represented by the critical edition.
And so (pp. 24-25):
Perhaps the most startling question posed by the specular confrontation of manuscripts concerns the status of textuality itself. With unerring perspicuity, Jacqueline Cerquiglini-Toulet pinpoints the issue by asking the simple, but trenchant question: “what, exactly, is ‘a text’ in the Middle Ages, and how do we locate it in a manuscript culture where each codex is unique? [. . .] More radically still,” she continues, “we might legitimately ask just where we’re supposed to find the text in the manuscript. How does it come to instantiate itself materially as object? And how is its literary identity realized?”

If such questions seem disorienting, it is because they underline how much print editions of medieval works have shaped our expectations. We have grown accustomed to finding the ‘text’ of a medieval work before our eyes whenever we open an edition. In the critical edition, the text is a given; that is why the work is called ‘textual scholarship.’ The editor works hard to establish a text on the basis of painstaking study of the manuscripts that he or she determines to be authoritative. The point, of course, is to circumscribe or close the text off from continuing to generate additions or variants. As we know, that is a modern practice grounded in concepts of scientific text editing.

But as Jacqueline Cerquiglini-Toulet observes, the very concept of a definitive text, a text incapable of generating new versions, is an illusion propagated by its own methodology. Authentic medieval texts, she observes, are never closed, nor, I would add, would their mode of transmission allow them to remain static. And, as a corollary, she observes: “Where are the boundaries?” How do we “identify the borders of a text”? She means that the manuscript folio has a very different ecology from the page of a printed edition. Textual space on a folio is not exclusive, but shared with other systems of representation, or — why not? — other kinds of ‘texts.’ These include rubrics, miniature paintings, decorated or historiated initials, bas-de-page images, marginal glosses, decorative programmes, and so on. In other words, the medieval manuscript page is not simply complex but, above all, an inter-artistic space navigated by visual cues.
We are far from the world of "distant reading" a large corpus of texts and thereby beginning to see patterns in literary history that had been but dimly envisioned before. But the change is equally profound. For example (26-27):
To understand the astonishing virtuosity and variety we find in manuscript versions of the ‘same’ work — such as the Roman de la Rose, for example, for which we have some 250 extant manuscripts produced between the end of the thirteenth and the beginning of the sixteenth century — we need to identify immanent factors responsible for generating multiple versions of a given work throughout the period. Here again, digital manuscript study offers reasons to move beyond conventional explanations.

Whereas increased manuscript production might intuitively be explained by such external causes as rising literacy among the merchant and artisan classes and the growth in the number of booksellers, the great variation we see in manuscripts, even those contemporaneous with one another, suggests the possibility of inherent forces of variation at work. Put another way, whereas the increase in literacy and leisure certainly contributed to the growing market for manuscripts to which Parisian booksellers responded, the efficient cause generating multiple manuscripts of a given work lay in the nature of the manuscript matrix itself.

It is not by chance that versions of a given work vary. Literary prestige derived in part from a work’s ability to renew itself from generation to generation by a dynamic process of differential repetition.
And so it goes. And we bring in Aristotle (p. 30): "But whereas we might think of striving for perfection as linear and directed, Aristotle sees it as continuous and open-ended." Is Nichols going to be arguing, then, that the production of version after version is a "striving for perfection" that extends through a population of scribes and readers? I suppose that's what I'll find out as I continue reading.

Thus, p. 32: "In other words, manuscripts are, by their very nature as eidos, ergon, and energeia, predisposed towards actualizing the works they convey not as invariant but as versions in an ever-evolving process of representation. Against those who would see manuscript copies as regressions from an authoritative original to ever fainter avatars of that primal moment, we must recall Aristotle’s notion of form as atemporal actuality."

* * * * *

Here's an earlier post about Nichols: Mutable stability in the transmission of medieval texts. And here's a post about the three texts of Hamlet that's relevant: Journey into Shakespeare, a tedious adventure – Will the real Hamlet stand up?

Early history of digital creativity (James Ryan)

James Ryan has been digging up all sorts of interesting things, not just computer storytelling. Here's some recent stuff he's dug up.

Sunday, July 16, 2017

Luxury real estate & Trump: International networks of power crossing public and private boundaries

Bloggingheads.tv – Published on Jul 14, 2017
00:26 Alex’s book Dictators Without Borders
04:29 Oligarchs and autocrats and kleptocrats, oh my!
10:52 Luxury real estate’s illicit money problem
22:11 The globalization of money laundering
30:12 Trump and networks of power
45:28 How Trump is blurring lines between business and politics
56:07 The slippery slope to kleptocracy

Daniel Nexon (The Duck of Minerva, Georgetown University) and Alexander Cooley (Columbia Harriman Institute, Barnard College, Dictators Without Borders)

Recorded on July 14, 2017

A most interesting discussion about how luxury real estate is a vehicle for money laundering & Trump's network extends into this world. "The lines between business and politics are not how we think about them."

Friday, July 14, 2017

"Lawfare" comes of age [@lawfareblog]

I first became aware of Lawfare through a wonderful March 3 post by Benjamin Wittes and Quinta Jurecic, What Happens When We Don’t Believe the President’s Oath? It seems that a lot of people discovered Lawfare at about the same time, and its readership has blossomed.
Obviously it is the Presidency of Donald Trump that made Lawfare's commentary so salient. Trump's bull-in-a-china-shop style begged for informed legal analysis, and Lawfare was there to provide it.

Congratulations Ben Wittes, Robert Chesney, Jack Goldsmith and the rest of the team!

Friday Fotos: Five Views of a Painted Reptile on the Rocks

May 7, 2011
July 1, 2011
August 7, 2011
April 24, 2016

July 9, 2017

Once more, a history of American Lit Crit, this time with politics

Writing in the LA Review of Books, Bruce Robbins reviews Joseph North, Literary Criticism: A Concise Political History (Harvard 2017). An interesting review of what sounds like an interesting book. Robbins reads the recent politics of lit crit as conservative rather than radical, which is how such criticism styles itself; and, once more, we get universals.
The broad strokes of his narrative are familiar enough, at least to literature professors. As everyone knows, the radicals of 1968, when they turned their attention to the university, insisted that academic attention be paid to race, gender, sexuality, colonialism, and other measures of historically inflicted injury. In literary criticism, these were contexts that had been missing from the everyday practice of interpretation. Moving into the ’70s and ’80s, it became obvious to much or most of the discipline that to read a work of past literature without asking what sort of society the work emerged from was as reprehensible, in its way, as ignoring those who were currently suffering injustice all around you. This is how close reading, little by little, went out of fashion — a momentous shift that, like so much else that later came to be associated with the ’60s, I was somehow living through but not really registering.

Most of the academics who advocated for historicism thought of themselves as radicalizing an apolitical or even crypto-conservative discipline. In North’s view, though, this gets the story backward. The politicization of the discipline that seemed to follow the eclipse of close reading was actually its depoliticization. In the period that began in the late 1970s “and continues through to the present,” North writes, “the project of ‘criticism’ was rejected as necessarily elitist, dehistoricizing, depoliticizing, and so forth; the idea of the ‘aesthetic’ was rejected as necessarily Kantian, idealist, and universalizing.” Yet
it was in fact quite wrong to reject the project of criticism as if its motivating concept, the aesthetic, could only ever be thought through in idealist terms. What was being elided here was the fact that modern disciplinary criticism had been founded on an aesthetics of just the opposite kind. In our own period, this historical amnesia has allowed a programmatic retreat from the critical project of intervening in the culture, back toward the project of analyzing the culture, without any mandate for intervention.
The newer style of interpretation recognized context, oppression, and injustice, yes, but it also masked a movement away from “criticism” and toward what North calls “scholarship.” Criticism, as he sees it, aspires to intervene in social life. Scholarship, as he sees it, is knowledge-production that has no such aspiration. Scholarship gets off on interpreting the world but can’t be bothered to do anything non-scholarly to change it. Since close reading, as North sees it, was a way of changing the world, if only reader by reader, what looked like a lurch to the left was actually a subtle move to the right.

For North, the production of analytic knowledge about the past, whatever its political motives, amounts to complacent non-interference. It’s a way of comfortably inhabiting a present that we ought to see, ethically speaking, as unfit for human habitation, hence requiring us to get up from our desks to do something about it.
OK, so all the political critics have been hoisted on their own petards, as it were. A call for revolution uttered from the comfort of one’s study is no call at all. Let’s just leave that alone.

Thursday, July 13, 2017

Walter Murch on being immersed in a film project and then pulling yourself out

Walter Murch is perhaps best known for his work on Apocalypse Now, where he did the sound design (for which he won an Oscar) and much of the editing. This is a passage from an interview about his craft and his career that he did with Emily Buder in 2015:
To be an editor, you have to be the kind of person who can be in a room for 16 hours at a time. You are working alone a lot of the time, but there are also times when you’re working with a director in the room. You have to be able to accommodate that. For feature-length pictures, it’s like running a marathon. You have to pace yourself over a year. When I’m considering a film, that’s in the back of my mind. You have to really like the project. Also, you are frequently away from home. You go where the director is. I was working in Argentina for a year, a number of years ago. Before that, I was in Romania, and before that I was in London, and then after that about 2 years ago I was in New York for a year. If you’re married, you have to find ways of coping with that and that’s a whole chapter unto itself.

At the end of the film, it can be very disorienting when the work is suddenly finished. This is not exclusive to film editing; I’m sure it’s true of many other areas of human activity. Soldiers have this problem, actors who are acting in a play when the play is suddenly over, it’s like you’ve been cut loose: “Now what?!” This was never explained to me at film school. So when it first happened, I felt something was wrong with me. It’s the equivalent of a kind of seasickness; if you’ve never been on a ship before and somebody warns you about it, it’s okay. You’ll still feel just as sick, but you won’t feel like killing yourself. This is not that intense, but it is that kind of disorientation. And it passes, but it takes anywhere from two to six weeks to go away. During that time I would be very reluctant to try to decide what to do next. It’s like a love affair where you don’t want to bounce from one relationship to another; that’s dangerous. So, you should just let that project fade away and get back to normal, and then you can decide what to do next. We frequently don’t have the luxury of that, but that’s a goal.
That seems like a kind of mourning. When you work that long and with that intensity, you become attached to the film. When it's over, you've got to unattach yourself. That requires something very like mourning.

Wednesday, July 12, 2017

Under the Arches, July 9, 2017


MAGA: A conspiracy of oligarchs vs. the rest of us?

Just a quick take: We know that prior to becoming President, Donald Trump was doing business in Russia. We now know that the Trump campaign – Don Jr., Kushner, & Manafort – had a conversation with well-connected Russians about dirt on Hillary Clinton. We don’t yet know whether or not anything illegal has been done – expert opinion seems divided. But at the very least, it’s unseemly. Is this how to make America great again, collaborating with a nation that, not so long ago, was America’s fiercest rival?

But is this about nations, or just about the oligarchs and plutocrats that run them? We know that any self-respecting Russian oligarch is going to have an apartment in London or New York, perhaps Singapore or Dubai. The Chinese too? And folks in Jersey City, across the Hudson from Manhattan, have been getting exercised at son-in-law Jared’s sister dangling EB-5 visas before potential Chinese investors in their projects.

It’s looking like “Make America Great Again” is just the brand name under which a loose transnational gaggle of oligarchs manipulates politics in the USofA.

Meanwhile, I keep reading these articles about the waning of the nation-state as a vehicle for governance. The most recent of these talk about how states and cities in America are going around the federal government on climate change. That is to say, on this issue, they’ve decided to conduct their own foreign policy, and foreign policy, we know, has traditionally been the prerogative of the nation-state. That’s why nation-states exist, to conduct foreign affairs.

What’s it all mean?

Monday, July 10, 2017

Red shoes, a white roller, and a green one


Ted Underwood on Intellectual Genealogies: Distant Reading is Social-Science, Not Digital Humanities [#DH]

Ted Underwood, “A Genealogy of Distant Reading”, DHQ Vol. 11, No. 2, 2017:
Abstract: It has recently become common to describe all empirical approaches to literature as subfields of digital humanities. This essay argues that distant reading has a largely distinct genealogy stretching back many decades before the advent of the internet – a genealogy that is not for the most part centrally concerned with computers. It would be better to understand this field as a conversation between literary studies and social science, initiated by scholars like Raymond Williams and Janice Radway, and moving slowly toward an explicitly experimental method. Candor about the social-scientific dimension of distant reading is needed now, in order to refocus a research agenda that can drift into diffuse exploration of digital tools. Clarity on this topic might also reduce miscommunication between distant readers and digital humanists.
Rather than attempt to summarize it myself, I’ll present a long series of tweets by Alan Liu, quoted without the Twitter format. Along the way I will present brief comments of my own, thus inserting my own concerns into the argument.

Liu begins:
Here are my top 13 quotes--a kind of thirteen ways of looking at distant reading, cited by paragraph number.

(As it were: "Among twenty snowy mountains of texts, The only moving thing Was the eye of the distant reader"):

¶5: "The questions posed by distant readers were originally framed by scholars (like Raymond Williams and Janice Radway) who worked on the boundary between literary history and social science."

¶10: "these projects … pose broad historical questions about literature, and answer them by studying samples of social or textual evidence. I want to highlight the underlying project of experimenting on samples, and the premise that samples … have to be constructed"

¶21 "The crucial underlying similarity between [Radway & Moretti's] works, which has made both of them durably productive models for other scholars, is simply the decision to organize critical inquiry as an experiment."

¶22 "Distant reading is a historical science, and it will need to draw on something like Carol Cleland’s definition of scientific method, which embraces not only future-oriented interventions, but any systematic test that seeks 'to protect … from misleading confirmations.'"

¶22 "Literary historians who use numbers will have to somehow combine rigor with simplicity, and prune back a thicket of fiddly details that would be fatal to our reason for caring about the subject."

¶24 "I try not to join any debate about the representativeness of different samples until I have seen some evidence that the debate makes a difference to the historical question under discussion…. [S]amples are provisional, purpose-built things. They are not canons. It makes no sense to argue about their representativeness in the abstract, before a question is defined."

¶27 "Instead of interpreting distant reading as a normative argument about the discipline, it would be better to judge it simply by asking whether the blind spot it identified is turning out to contain anything interesting."

¶28 "Consensus about new evidence emerges very slowly: inventing an air-pump doesn’t immediately convince readers that vacuums exist…. But at this point, there is no doubt in my mind that literary scholarship turned out to have a blind spot. Many important patterns in literary history are still poorly understood, because they weren’t easily grasped at the scale of individual reading."

The insane confusion about culture that's behind too much contemporary thinking

The logic is on vivid display in a TV ad for Ancestry.com featuring a woman named Kim who pays her money, gets her DNA scan, and is thrilled to discover that she’s 23-percent Native American. Now, she says, while standing in front of some culturally appropriate pottery, "I want to know more about my Native American heritage." If the choice of Southwest-style cultural artifacts seems a little arbitrary, that’s because, as the Ancestry.com website warns you, the technology isn’t yet advanced enough to tell you whether you’re part Navajo or part Sioux. But, of course, that arbitrariness is less puzzling than the deployment of any artifacts at all. The point of Kim’s surprise is that she has no Native American cultural connection whatsoever; the point of those pots is that they become culturally appropriate only when they’re revealed to be genetically appropriate.

As befits an ad, Kim’s story is a happy one. But it could have gone differently. The genetic transmission of an appreciation for Navajo pottery could just as easily have turned out to be a genetically traumatic relation to the catastrophe of the Long Walk. What if Sam Durant had gotten himself an Ancestry.com saliva test and discovered that he, too, was part Native American? The bad news: Thirty-eight of his ancestors had been unjustly hanged; the good news: their hanging was part of his story after all.
Later, writing about sociologist Alice Goffman, who'd done fieldwork in a black neighborhood in Philadelphia:
Even when the experiences really are shared — when something actually did happen to us — we don’t think that autobiographical accounts of people’s own experiences are necessarily more true than other people’s accounts of those same experiences, or that only we have a right to tell our stories. No one thinks that either Goffman or the men she wrote about are the final authorities on their lives. My version of my life is just my version; no one is under any obligation to agree with it, much less refrain from offering his or her own.

So even our own stories don’t belong to us — no stories belong to anyone. Rather, we’re all in the position of historians, trying to figure out what actually happened. Interestingly, even if the logic of their position would seem to require it, the defenders of a racialized past haven’t been all that interested in confining historians to what are supposed to be their own stories. Maybe that’s because history (at least if it isn’t cultural) makes it harder to draw the needed lines. You obviously can’t understand the political economy of Jim Crow without understanding the actions of both white and black people. And you can’t understand the actions of those white and black people without reading the work of historians like (the white) Judith Stein and (the black) Adolph Reed.
And so:
The students at elite American universities come overwhelmingly from the upper class. The job of the faculty is to help them rise within (or at least not fall out of) that class. And one of the particular responsibilities of the humanities and social-science faculty is to help make sure that the students who take our courses come out not just richer than everyone else but also more virtuous. (It’s like adding insult to injury, but the opposite.)

Identity crimes — both the phantasmatic ones, like cultural theft, and the real ones, like racism and sexism — are perfect for this purpose, since, unlike the downward redistribution of wealth, opposing them leaves the class structure intact. [...]

The problem is not that rich people can’t feel poor people’s pain; you don’t have to be the victim of inequality to want to eliminate inequality. And the problem is not that the story of the poor doesn’t belong to the rich; the relevant question about our stories is not whether they reveal someone’s privilege but whether they’re true. The problem is that the whole idea of cultural identity is incoherent, and that the dramas of appropriation it makes possible provide an increasingly economically stratified society with a model of social justice that addresses everything except that economic stratification.
Now THAT's an interesting argument.

Sunday, July 9, 2017

Green Villain, Ecotrek into the Arches





Terrorists, Collateral Damage, and Trolley Problems

Moral philosophers in the analytic tradition like to run thought experiments of a kind known as the trolley problem:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

1. Do nothing, and the trolley kills the five people on the main track.

2. Pull the lever, diverting the trolley onto the side track where it will kill one person
A recent movie, Eye in the Sky, posed a problem with a similar form. Instead of five people tied to a track we have five terrorists having a meeting inside a private house in Nairobi. Instead of a person tied to a sidetrack we have an innocent young girl selling bread on the street outside that same house. An explosion that kills the terrorists will likely kill the girl as well. Do we do it?

That’s the basic situation. In fact, things are more complicated. Three of the terrorists hold high-level leadership roles in the organization. The other two are suicide bombers who have just donned explosive vests. Presumably when the meeting is over they are going to public places, where they will kill themselves along with tens if not hundreds of others. So we can’t wait for the girl to leave. But, of course, we don’t really know about the timing of things.

As for “we,” we are several levels of military and civilian leadership in Britain and America plus the remote pilot who flies the drone and who actually executes the order to bomb the house. The drama lies in the back-and-forth decision-making and buck-passing running in counterpoint with events on the ground.

The house is bombed and the girl dies, but only by seconds. If she’d been a bit quicker, if the bomb had been released half a minute later, she’d have lived while the terrorists would still have been killed.


It's a good film.

Collective Computation

Collective computation is about how adaptive systems solve problems. All systems are about extracting energy and doing work, and physical systems in particular are about that. When you move to adaptive systems, you’ve got the additional influence of information processing, which we think allows a system to extract energy more efficiently even though it has to expend a little extra energy to do the information processing. Components of adaptive systems look out at the world, and they try to discover the regularities. It’s a noisy process.

Unlike in computer science where you have a program you have written, which has to produce a desired output, in adaptive systems this is a process that is being refined over evolutionary or learning time. The system produces an output, and it might be a good output for the environment or it might not. And then over time it hopefully gets better and better.
For example, the human brain:
The human brain contains roughly 86 billion neurons, making our brains the ultimate collectives. Every decision we make can be thought of as the outcome of a neural collective computation. In the case of our study, which was led by my colleague Bryan Daniels, the data we analyzed were collected during an experiment by Bill Newsome’s group at Stanford from macaques who had to decide whether a group of dots moving across a screen was traveling left or right. Data on neural firing patterns were recorded while the monkey was performing this task. We found that as the monkey initially processes the data, a few single neurons have strong opinions about what the decision should be. But this is not enough: If we want to anticipate what the monkey will decide, we have to poll many neurons to get a good prediction of the monkey’s decision. Then, as the decision point approaches, this pattern shifts. The neurons start to agree, and eventually each one on its own is maximally predictive.

We have this principle of collective computation that seems to involve these two phases. The neurons go out and semi-independently collect information about the noisy input, and that’s like neural crowdsourcing. Then they come together and come to some consensus about what the decision should be. And this principle of information accumulation and consensus applies to some monkey societies also.
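Flack's two-phase principle (semi-independent accumulation of noisy evidence, followed by a population-wide poll) can be made concrete with a toy simulation. What follows is a hypothetical sketch; the neuron count, noise level, and majority-vote rule are illustrative choices of mine, not the model used in the Daniels study:

```python
import random

def collective_decision(true_dir=1, n_neurons=50, steps=100, noise=3.0, seed=0):
    """Phase 1: each 'neuron' independently accumulates noisy evidence
    about the true direction (+1 or -1). Phase 2: the population is
    polled and the majority sign is taken as the collective decision."""
    rng = random.Random(seed)
    evidence = [0.0] * n_neurons
    for _ in range(steps):
        for i in range(n_neurons):
            evidence[i] += true_dir + rng.gauss(0.0, noise)
    votes = [1 if e > 0 else -1 for e in evidence]
    decision = 1 if sum(votes) > 0 else -1
    agreement = votes.count(decision) / n_neurons
    return decision, agreement

decision, agreement = collective_decision()
```

Run with only a few accumulation steps and single neurons are unreliable while the poll already predicts fairly well; run with many steps and nearly every neuron agrees on its own, which mirrors the shift from crowdsourcing to consensus described above.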

Saturday, July 8, 2017

Visual Thinking: Math, Astronomy, Biology

Beware of attack lobster


Images and Objectivity

Ryan Cordell has an interesting post, Objectivity and Distant Reading, in which he comments on Objectivity (2010) by Lorraine Daston and Peter Galison:
Objectivity attempts to trace the emergence of scientific objectivity as a concept, ideal, and moral framework for researchers during the nineteenth century. In particular, the book focuses on shifting ideas about scientific images during the period. In the eighteenth and early nineteenth centuries, Daston and Galison argue, the scientific ideal was “truth-to-nature,” in which particular examples are primarily useful for the ways in which they reflect and help construct an ideal type: not this leaf, specifically, but this type of leaf. Under this regime scientific illustrations did not attempt to reconstruct individual, imperfect specimens, but instead to generalize from specimens and portray a perfect type.

Objectivity shows how, as the nineteenth century progressed and new image technologies such as photography shifted the possibilities for scientific imagery, truth-to-nature fell out of favor, while objectivity rose to prominence.
And that's what interests me, the focus on images, and the rise of photography:
In debates about the virtues of illustration versus photography, for instance, illustration was touted as superior to the relative primitivism of photography—technologies such as drawing and engraving simply allowed finer detail than blurry nineteenth century photography could. Nevertheless photography increasingly dominated scientific images because it was seen as less susceptible to manipulation, less dependent on the imagination of the artist (or, indeed, of the scientist).
Images, of course, are clearly distinct from the prose in which they are (often) set. Images are a form of objectification, though it takes more than objectification to yield objectivity.

Cordell then goes on to discuss computational criticism (aka distant reading), where "computation is invoked as a solution to problems of will that are quite familiar from decades of humanistic scholarship." Computational critics
might argue that methods such as distant reading or macroanalysis seek to bypass the human will that constructed such canons through a kind of mechanical objectivity. While human beings choose what to focus on for all kinds of reasons, many of them suspect, the computer will look for patterns unencumbered by any of those reasons. The machine is less susceptible to the social, political, or identity manipulations of canon formation.
Interesting stuff. I've got two comments:

1) Consider one of my touchstone passages by Sydney Lamb, a linguist of Chomsky’s generation but of a very different intellectual temperament. Lamb cut his intellectual teeth on computer models of language processes and was concerned about the neural plausibility of such models. In his major systematic statement, Pathways of the Brain: The Neurocognitive Basis of Language (John Benjamins, 1999), he remarked on the importance of visual notation (p. 274): “... it is precisely because we are talking about ordinary language that we need to adopt a notation as different from ordinary language as possible, to keep us from getting lost in confusion between the object of description and the means of description.” That is, we need the visual notation in order to objectify language mechanisms.

Note that I think of objectification (in the sense immediately above) as a prerequisite for objectivity, but it is by no means a guarantee of it. Objectivity requires empirical evidence. A computer model will give us objectification, but no more.

2) Tyler Cowen has an interesting and wide-ranging interview with Jill Lepore in which she notes that Frederick Douglass was the most widely photographed man of 19th century America: "In the 1860s, he writes all these essays about photography in which he argues that photography is the most democratic art. And he means portrait photography. And that no white man will ever make a true likeness of a black man because he’s been represented in caricature — the kind of runaway slave ad with the guy, the little figure, silhouette of the black figure carrying a sack."

Friday, July 7, 2017

Friday Fotos: Graffiti Fragments






Logan Hicks on the state of the world, art in particular

Bill McKibben on the new nation-states

But the Paris decision may also reshape the world for the better, or at least the very different. Consider: A few days after Trump’s Rose Garden reveal, California Governor Jerry Brown was in China, conducting what looked a lot like an official state visit. He posed with pandas, attended banquets—and sat down for a one-on-one meeting with President Xi Jinping, which produced a series of agreements on climate cooperation between China and California. (Trump’s secretary of energy, Rick Perry, was in Beijing the same week: no pandas, no sit-down with Xi.) It was almost as if California were another country. Call it a nation-state—a nation-state that has talked about launching its own satellites to monitor melting polar ice. A nation-state that has joined New York and a dozen others in a climate alliance to announce they will meet the targets set in the Paris accord on their own. A nation-state that already holds joint auctions with Quebec in its carbon cap-and-trade program. A nation-state that is convening hundreds of other “subnational actors” from around the world next year to pledge to keep the rise in global temperature below 2 degrees Celsius.
It’s ironic that global warming might be the wedge issue for the rise of “subnationalism.” After all, if you ever wanted an argument for world government, climate change provides it. But the United Nations has been trying to stop global warming since the days when we called it the greenhouse effect. And national governments, hijacked by the fossil fuel industry, have intervened again and again to obstruct any progress: The Kyoto treaty more or less collapsed, as did the Copenhagen talks. Paris “succeeded,” but only if you squint: The world’s nations vowed to keep the planet’s temperature increase to under 2 degrees Celsius, but their promises actually add up to a world that will grow 3.5 degrees hotter. The real hope was that the accord would spur private investment in renewable energy: And as the price of solar panels plummeted, in fact, China and India started to exceed their pledges.

Even that modest progress alarmed what energy expert Michael Klare calls the Big Three carbon powers: the United States, Saudi Arabia, and Russia. (Trump’s foreign policy looks more coherent, by the way, when viewed through this prism.) The United States has now pulled out of Paris, and an aide to Vladimir Putin has said the withdrawal makes it “perfectly evident” the pact is now “unworkable.”

So what’s a state like California to do? It can’t ignore climate change, which threatens its very existence. [...]

If you want to know who is serious about forging a new path on global warming, ignore all the airy proclamations about meeting the Paris targets—and instead pay attention to the cities and states making the very real and measurable pledge to go 100 percent renewable. California’s senate just passed such a commitment by a 2–1 margin. More dramatically, the day after Trump said he had been elected to serve “Pittsburgh, not Paris,” Mayor Bill Peduto announced that Pittsburgh will run entirely on clean energy by 2035. “If you are a mayor and not preparing for the impacts of climate change,” Peduto said, “you aren’t doing your job.” All told, 27 cities in 17 states have pledged to go 100 percent renewable—a move that puts them at direct odds with federal policy. Call them “climate sanctuaries.” San Francisco, Boulder, and Burlington won’t surprise you—but Atlanta and Salt Lake City and San Diego have done the same.


Thursday, July 6, 2017

Organic lattice


Two Problems for the Human Sciences, and Two Metaphors

For as long as I can remember such things – back to my undergraduate years in the 1960s – humanists have been defending themselves and their work against all comers: politicians, scientists of all kinds, and disgruntled letter writers. And always the defense comes down to this: we provide a holistic and integrated view of what it is to be human in a world that is, well, just what IS the world like anyhow?

It’s a mug’s game and I refuse to play it. I was trained in the human sciences: hermeneutics AND cognitive science, history AND social science, and I’ve played jazz and rhythm and blues in seedy nightclubs, at ritzy weddings, and outdoors before thousands. It’s all good. It’s all come into play as I’ve investigated the human mind through music and literature.

In this essay I look at literature. First I consider literary form as displayed in ring form texts. Then I review the historical problem posed by Shakespeare and the rise of the European novel. My general point is that we need all our conceptual resources to deal with these problems. But let’s begin with an analogy: how do we understand, say, a cathedral?

The Cathedral Problem

Cathedrals are made of stone blocks, mortar, pieces of stained glass, lead strips, metal fittings, wooden beams and boards, and so forth. You can go through a cathedral and count and label every block and locate them on a (3D) map. You can do the same for the doors and cabinets, the plumbing, heating fixtures, and wiring, and so forth. You will now, in some sense, have described the cathedral. But you won't have captured its design. That’s difficult, and those who focus on it often use vague language, not because they like vagueness, but because, at the moment, that’s all that’s available.

And so it goes with literature and newer psychologies: cognitive science, evolutionary psychology, and neuroscience. My humanist colleagues keep hearing that they should get on board with the cognitive revolution and the decade of the brain. But it all sounds like trying to explain a cathedral by counting the building blocks, measuring the pitch of the roof, and analyzing the refractive properties of pieces of colored glass.

The advice may be well meant, but it isn’t terribly useful. It takes our attention away from the problem – how the whole shebang works – and asks us to settle for a pile of things we already know. Almost.

Ring Forms in Literature

I first learned of ring form in an article published in PMLA – the oldest literary journal published in the United States – back in 1976: “Measure and Symmetry in Literature” by R. G. Peterson. The idea is a simple one, that some texts, or parts of texts, are symmetrically arranged about a center point: 

A, B … X … B’, A’

He produced many examples, from the Iliad through Shakespeare’s Hamlet to the “Author’s Prologue” of Dylan Thomas’s Collected Poems. But my interests, like those of most literary critics, were elsewhere and so I merely noted the article and went on about my business.
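Formally, the scheme A, B … X … B’, A’ is just palindromic symmetry over a sequence of section labels, something a few lines of code can check. This is a sketch with made-up labels, not an encoding of any actual text:

```python
def is_ring(sections):
    """A ring composition reads the same forward and backward:
    the sections mirror each other around a central pivot."""
    return list(sections) == list(reversed(sections))

# A seven-part ring with the turn at X (hypothetical labels):
assert is_ring(["A", "B", "C", "X", "C", "B", "A"])
# A sequence with no central symmetry is not a ring:
assert not is_ring(["A", "B", "X", "C", "A"])
```

Of course, the symmetry test is the trivial part; the critic's real work lies in deciding which passages count as "the same" section in the first place, before any labels can be assigned.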

I was reminded of this work some years ago when I entered into correspondence with the late Mary Douglas, a British anthropologist who rose to academic stardom – such as it was back in ancient times – after the 1966 publication of Purity and Danger: An Analysis of Concepts of Pollution and Taboo. She spent the last decade of her career immersed in the arcana of classical and Biblical studies, publishing monographs on the Book of Leviticus and the Book of Numbers and, in 2007, Thinking in Circles: An Essay on Ring Composition, based on a series of lectures she had delivered at Yale. Among other things, she argues that such forms aren’t special to the ancient world, that they continue in modern times – she offers Sterne’s Tristram Shandy as an example.

She opens her 10th chapter by referring to Roman Jakobson, one of the pioneering linguists of the 20th century, who believed, on the basis of extensive study, that such patterns reflect “a faculty inherent in the relation among language, grammar, and brain.” But why are such patterns so very difficult to recognize if they are so natural to us?

Wednesday, July 5, 2017

The sovereign state of New York?

Will the citizens of New York decide to amend the state's constitution to give the state greater independence from the federal government? From the NYTimes:
Every 20 years, New Yorkers have the chance to vote whether they want to hold a constitutional convention to amend, tweak or otherwise improve the founding document of the state.

For the past half-century, voters have demurred. This year, however, academics, good-government groups and others believe the outcome of the ballot question in November may be different. And — perhaps no surprise — it has something to do with the current occupant of the White House.

“Trump’s election emphasizes how valuable it is for states to chart their own course,” said Peter J. Galie, author of “Ordered Liberty: A Constitutional History of New York” and a professor of political science at Canisius College in Buffalo. “We can put a right to clean air and water in our Constitution. If we want to add more labor protections, we can do it. That’s the beauty of federalism.”
What about New York City separating from the rest of the state?
John Bergener Jr., a retiree who lives outside Albany, would like to see the separation of New York City from the rest of the state. As chairman of Divide NYS Caucus, a political committee, he believes a constitutional convention is the best mechanism to achieve that.

Upstate areas, he said, have suffered economically from excessive business regulations and unfunded mandates. His vision — and he claims thousands of supporters — calls for two or three autonomous regions, each with its own regional governor and legislature. (The upstate region, north of the lower Hudson Valley, would be called New Amsterdam.) A statewide governor would be titular, with the same “powers as the queen of England.”

Friday, June 30, 2017

Blue America is going around the federal government on climate change

David Roberts, in Vox: "States and cities are forming a kind of parallel national government around climate change."
In recoiling from Trump, states, cities, and institutions are entering into closer cooperation. A coalition is forming, a Blue America, and at least on climate change, it is going beyond mere resistance to a more proactive role, negotiating with the international community on its own behalf, like a separate nation.

It is, in foreign policy terms, a remarkable development — and while it seems to offer some near-term hope on climate change, it carries troubling implications for the ongoing stability of the country.

Blue America, the international negotiator, awakens

Since Trump gave the world the finger over Paris, more than 1,400 companies and institutions, 200 cities, and a dozen states have committed to meet the carbon targets the US originally pledged there.

There’s been so much activity that it can be difficult to track all the new initiatives and groups. There’s the US Climate Alliance, representing 12 states and about a third of the US population. There’s We Are Still In, representing nine states, hundreds of cities, and thousands of businesses and institutions of higher learning. There’s Climate Mayors, with 338 US mayors representing 65 million constituents. And probably more I’m missing.

Just this week, at the US Conference of Mayors in Miami Beach, Florida, US mayors of 1,481 cities signed a unanimous resolution calling on Trump to rejoin the Paris agreement, implement the Clean Power Plan, and help build electric vehicle infrastructure.

All of this action was more or less symbolic until earlier this month, when yet another coalition, as yet unnamed — consisting of three governors, 30 mayors, and more than 80 university presidents, led by ex-NYC Mayor Michael Bloomberg — began negotiating with the UNFCCC to have their contributions officially counted alongside other nations in the Paris agreement.
Moreover, "California Gov. Jerry Brown, who represents the world’s seventh-largest economy, has been in China promising ongoing cooperation and support — acting, more or less, as a head of state." Are we moving toward a parallel government?
It is a parallel government with sharply limited powers, of course. Even under optimistic scenarios, it likely can’t achieve the carbon reductions the US would have achieved acting nationally. (The Sierra Club has a good analysis of the potential.)

And its legal status is murky. As Harvard’s Robert Stavins told Vox’s Alexia Fernández Campbell, “the Constitution of the United States prohibits subnational entities from carrying out meaningful international agreements.”

But that’s only if you take a somewhat legalistic view of “international agreements,” as agreements committing the US as a nation to particular actions. It’s worth remembering that the Paris agreement didn’t commit the US to anything either, not in a legal sense. The agreement contains no mechanism to punish countries that don’t meet their targets. It’s voluntary.
How will this unfold?
What will happen when Red and Blue America start thinking of themselves as separate countries, and acting that way, consolidating their power and negotiating independently? What happens when they really start fighting?

It’s a little dystopian, as it carries the whiff of a second Civil War. But as we’ve learned about dystopias this past decade, they need not happen all at once, dramatically. They can happen in creeping increments, each of which allows for enough of a pause that it comes to seem normal.

Right now it’s all fun and games. It’s only Canada. It’s only a voluntary climate treaty. Blue and Red America are not, as yet, wielding conflicting legal authorities, getting involved in internal economic or trade disputes, or seeking explicitly to fight or punish one another.

But how long will that last? How long before open (or at least more open) hostilities?

Friday Fotos: Across the River

20150508-_IGP3526 EQ SAT





Wednesday, June 28, 2017

The unintelligibility/opacity of AI systems

In the old days of classical symbolic AI, program logic was "hand-coded" and based on expert knowledge of the application domain. It was thus possible, at least in principle, to figure out why the program did what it did in any particular case. That's not true of contemporary AI systems that use so-called "deep learning," which more or less program themselves. Writing about a self-driving car, Will Knight observes:
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
Of course, this has legal implications:
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

Is this what mind looks like from the inside?

Forest crop


Humans as pattern-seekers

Last week I’d posted a video in which Jeremy Lent sketches out a transformation in which humankind manages to escape climate catastrophe. He’s recently published The Patterning Instinct: A Cultural History of Humanity’s Search for Meaning. While I’m leery of the term “instinct” in this context, I certainly believe that we are pattern seeking creatures, and that we seek meaning (unity of being?).

What I’m wondering is whether we can derive this pattern seeking from the fact that each individual neuron is, of course, a living agent, seeking to increase its inputs (nutrients) through its actions (its outputs) – see my old post on The Busy Bee Brain. Of course that’s true of every kind of brain, not just human brains. What is it that sets the human brain free to seek patterns of every kind everywhere? Conversely, what is it that keeps the brains of butterflies, octopi, iguanas, rabbits, parrots, and so forth from such untethered pattern seeking?

I think it’s the (special) nature of human society, our ability to walk about in one another’s minds through language and the arts and sciences, that does it. Alas, I don’t know how to turn that into an explicit argument. How is it that seeking and finding patterns energizes individual neurons, when the seeking and finding of patterns requires the coordinated efforts of millions and billions of neurons distributed across many brains? How can we formulate that in a coherent way?

Multifractals (fractals within fractals) in literary texts

From Science Daily:
James Joyce, Julio Cortazar, Marcel Proust, Henryk Sienkiewicz and Umberto Eco. Regardless of the language they were working in, some of the world's greatest writers appear to be, in some respects, constructing fractals. Statistical analysis carried out at the Institute of Nuclear Physics of the Polish Academy of Sciences, however, revealed something even more intriguing. The composition of works from within a particular genre was characterized by the exceptional dynamics of a cascading (avalanche) narrative structure. This type of narrative turns out to be multifractal. That is, fractals of fractals are created.

[...]  Physicists from the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow, Poland, performed a detailed statistical analysis of more than one hundred famous works of world literature, written in several languages and representing various literary genres. The books, tested for revealing correlations in variations of sentence length, proved to be governed by the dynamics of a cascade. This means that the construction of these books is in fact a fractal. [...]

Multifractals are more highly advanced mathematical structures: fractals of fractals. They arise from fractals 'interwoven' with each other in an appropriate manner and in appropriate proportions. Multifractals are not simply the sum of fractals and cannot be divided to return back to their original components, because the way they weave is fractal in nature. The result is that in order to see a structure similar to the original, different portions of a multifractal need to expand at different rates. A multifractal is therefore non-linear in nature.

"Analyses on multiple scales, carried out using fractals, allow us to neatly grasp information on correlations among data at various levels of complexity of tested systems. As a result, they point to the hierarchical organization of phenomena and structures found in nature. So we can expect natural language, which represents a major evolutionary leap of the natural world, to show such correlations as well. Their existence in literary works, however, had not yet been convincingly documented. Meanwhile, it turned out that when you look at these works from the proper perspective, these correlations appear to be not only common, but in some works they take on a particularly sophisticated mathematical complexity," says Prof. Stanislaw Drozdz (IFJ PAN, Cracow University of Technology).
Stream of consciousness turned out to be particularly complex:
However, more than a dozen works revealed a very clear multifractal structure, and almost all of these proved to be representative of one genre, that of stream of consciousness. The only exception was the Bible, specifically the Old Testament, which has so far never been associated with this literary genre.

"The absolute record in terms of multifractality turned out to be Finnegans Wake by James Joyce. The results of our analysis of this text are virtually indistinguishable from ideal, purely mathematical multifractals," says Prof. Drozdz. [...] "It is not entirely clear whether stream of consciousness writing actually reveals the deeper qualities of our consciousness, or rather the imagination of the writers."
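To get a concrete feel for what the IFJ PAN team measured, here is a minimal sketch in Python of the first steps of such an analysis: extract the sequence of sentence lengths from a text, then estimate a single scaling exponent for that sequence with a simple one-order detrended fluctuation analysis. This is only an illustration of the basic idea, not the team’s actual method (they used a full multifractal formalism on more than a hundred books); the sentence splitter is a crude regex and the function names are mine.

```python
import re
import numpy as np

def sentence_lengths(text):
    """Split text into sentences (crude regex) and count words in each."""
    sentences = re.split(r'[.!?]+', text)
    return [len(s.split()) for s in sentences if s.strip()]

def dfa_fluctuation(series, scale):
    """RMS deviation of the cumulative profile from a linear fit,
    averaged over non-overlapping windows of the given size."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())
    n_windows = len(profile) // scale
    rms = []
    for w in range(n_windows):
        seg = profile[w * scale:(w + 1) * scale]
        t = np.arange(scale)
        coeffs = np.polyfit(t, seg, 1)  # linear detrend within the window
        rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
    return np.mean(rms)

def scaling_exponent(series, scales=(4, 8, 16, 32)):
    """Slope of log F(s) vs. log s: about 0.5 for uncorrelated sentence
    lengths, above 0.5 when long-range correlations are present."""
    log_s = np.log(scales)
    log_f = np.log([dfa_fluctuation(series, s) for s in scales])
    return np.polyfit(log_s, log_f, 1)[0]

lengths = sentence_lengths("One two. Three four five! Six?")
```

A monofractal series has a single such exponent; the multifractal structure the physicists report means that different moments of the fluctuations scale with different exponents, which requires repeating this kind of calculation across a whole spectrum of weightings.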

The original research:

Stanisław Drożdż, Paweł Oświȩcimka, Andrzej Kulig, Jarosław Kwapień, Katarzyna Bazarnik, Iwona Grabska-Gradzińska, Jan Rybicki, Marek Stanuszek. Quantifying origin and character of long-range correlations in narrative texts. Information Sciences, 2016; 331: 32. DOI: 10.1016/j.ins.2015.10.023