Saturday, April 30, 2016

Obama at the Comedy Cellar

We know that Jerry Seinfeld is an admirer of Obama's comedy skills; that's why he had him on his cars-and-comedy coffee klatch. Now a former White House speechwriter, David Litt, has written a New York Times opinion piece about the President's comedy skills.
Part of what makes any presidential joke funny is the fact that the president is telling a joke. But this president has a talent for comedy — an impressive sense of timing and audience. His administration combined that talent with an understanding of a changing media landscape and the emergence of viral videos. Jokes became a real tool to move his agenda forward.
But they didn't really tap Obama's comedy skills until after the somewhat rocky roll-out of Obamacare:
By March 2014, the health care exchanges were finally working, but most young people didn’t seem to know that. Not enough of them were signing up. One solution, at least in part, was for President Obama to plug the site on the comedian Zach Galifianakis’s online talk show “Between Two Ferns.” The commander in chief sat between two ferns and listened as the comedian asked him, “What’s it like to be the last black president?” before they got around to talking health care.

The day the “Ferns” video appeared online it was viewed by 11 million people, and traffic to HealthCare.gov spiked 40 percent. Of course that video isn’t the only reason the administration can now report that 20 million people are enrolled in insurance through the Affordable Care Act. But it certainly helped get the word out.
But what struck me about the article came up front. Litt was writing about Luther, Obama's "anger translator":
Each time Mr. Key, as the anger translator, began a new manic tirade, the president burst out laughing. Already dressed in his tuxedo for the evening, he glanced toward us, his staff, huddled in a corner of the room.

“I’ve got to hold it together,” Mr. Obama said. He said it again backstage a few hours later, this time using a comedy term for laughing in the middle of a scene. “I have to make sure I don’t break.”
Think about that for a minute. The comedian's job is to get the audience to laugh. But the comedian cannot, absolutely cannot, laugh at his own jokes. How does that work? That's a tricky bit of psychology.

* * * * *

The President and his anger translator:

Friday, April 29, 2016

Friday Fotos: childhood's end

[Photos: baby mickey.jpg, lion king.jpg, smile.jpg, IMGP7874rd.jpg, IMGP1313rd.jpg]

Wikipedia reinvents corporate bureaucracy in its internal structure

Wikipedia is one of the (potentially) great social experiments of our time. A large self-organizing community built from the bottom up. Ah! freedom! And what got built? According to an exhaustive study that examined the site, which records every action taken in its construction and maintenance, what got built is a standard corporate hierarchy. Writing in Gizmodo, Jennifer Ouellette reports:
One of their most striking findings is that, even on Wikipedia, the so-called “Iron Law of Oligarchy”—a.k.a. rule by an elite few—holds sway. German sociologist Robert Michels coined the phrase in 1911, while studying Italian political parties, and it led him to conclude that democracy was doomed. “He ended up working for Mussolini,” said DeDeo, who naturally learned about Michels via Wikipedia.

“You start with a decentralized democratic system, but over time you get the emergence of a leadership class with privileged access to information and social networks,” DeDeo explained. “Their interests begin to diverge from the rest of the group. They no longer have the same needs and goals. So not only do they come to gain the most power within the system, but they may use it in ways that conflict with the needs of everybody else.”

He and Heaberlin found that the same is true of Wikipedia. The core norms governing the community were created by roughly 100 users—but the community now numbers about 30,000.
There goes the neighborhood! But then, how could it have been otherwise? Top-down hierarchies are all that most of us know, right?

H/t 3QD.

* * * * *

Here's the study:

Future Internet 2016, 8(2), 14; doi:10.3390/fi8020014
The Evolution of Wikipedia’s Norm Network
Bradi Heaberlin and Simon DeDeo

Abstract: Social norms have traditionally been difficult to quantify. In any particular society, their sheer number and complex interdependencies often limit a system-level analysis. One exception is that of the network of norms that sustain the online Wikipedia community. We study the fifteen-year evolution of this network using the interconnected set of pages that establish, describe, and interpret the community’s norms. Despite Wikipedia’s reputation for ad hoc governance, we find that its normative evolution is highly conservative. The earliest users create norms that both dominate the network and persist over time. These core norms govern both content and interpersonal interactions using abstract principles such as neutrality, verifiability, and assume good faith. As the network grows, norm neighborhoods decouple topologically from each other, while increasing in semantic coherence. Taken together, these results suggest that the evolution of Wikipedia’s norm network is akin to bureaucratic systems that predate the information age.
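The paper's headline result — that a handful of early norms come to dominate the network — is essentially a claim about centrality in a citation graph of policy pages. Here is a minimal toy sketch of that kind of measurement, not the authors' actual pipeline; the page names, years, and links below are invented for illustration (the real study parses Wikipedia's actual norm pages and their hyperlinks):

```python
# Toy sketch: in a network of norm pages, the earliest pages tend to
# dominate centrality. All pages, years, and links here are invented.

norm_pages = {
    # page: (year created, pages it links to)
    "Neutral point of view": (2001, ["Verifiability", "Assume good faith"]),
    "Verifiability":         (2001, ["Neutral point of view"]),
    "Assume good faith":     (2004, ["Neutral point of view", "Civility"]),
    "Civility":              (2004, ["Assume good faith", "Neutral point of view"]),
    "Notability (music)":    (2006, ["Verifiability", "Neutral point of view"]),
    "Manual of Style/Dates": (2008, ["Neutral point of view"]),
}

# In-degree: how many other norm pages cite this one.
in_degree = {page: 0 for page in norm_pages}
for year, links in norm_pages.values():
    for target in links:
        if target in in_degree:
            in_degree[target] += 1

# Rank pages by in-degree; the early (2001) "core" norms come out on top.
ranked = sorted(norm_pages, key=lambda p: -in_degree[p])
for page in ranked:
    print(f"{norm_pages[page][0]}  in-degree={in_degree[page]}  {page}")
```

Ranking by raw in-degree here stands in for the richer network measures in the paper; the point is only that the 2001-era pages, which everything else links back to, end up on top.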

And a related study:

Intellectual interchanges in the history of the massive online open-editing encyclopedia, Wikipedia

Jinhyuk Yun (윤진혁), Sang Hoon Lee (이상훈), and Hawoong Jeong (정하웅)
Phys. Rev. E 93, 012307 – Published 22 January 2016

Abstract: Wikipedia is a free Internet encyclopedia with an enormous amount of content. This encyclopedia is written by volunteers with various backgrounds in a collective fashion; anyone can access and edit most of the articles. This open-editing nature may give us prejudice that Wikipedia is an unstable and unreliable source; yet many studies suggest that Wikipedia is even more accurate and self-consistent than traditional encyclopedias. Scholars have attempted to understand such extraordinary credibility, but usually used the number of edits as the unit of time, without consideration of real time. In this work, we probe the formation of such collective intelligence through a systematic analysis using the entire history of 34,534,110 English Wikipedia articles, between 2001 and 2014. From this massive data set, we observe the universality of both timewise and lengthwise editing scales, which suggests that it is essential to consider the real-time dynamics. By considering real time, we find the existence of distinct growth patterns that are unobserved by utilizing the number of edits as the unit of time. To account for these results, we present a mechanistic model that adopts the article editing dynamics based on both editor-editor and editor-article interactions. The model successfully generates the key properties of real Wikipedia articles such as distinct types of articles for the editing patterns characterized by the interrelationship between the numbers of edits and editors, and the article size. In addition, the model indicates that infrequently referred articles tend to grow faster than frequently referred ones, and articles attracting a high motivation to edit counterintuitively reduce the number of participants. We suggest that this decay of participants eventually brings inequality among the editors, which will become more severe with time.

Wednesday, April 27, 2016

Mapping Semantic Space to the Cortical Surface


A Continuous Semantic Space Describes
the Representation of Thousands of Object and Action Categories across the Human Brain

Alexander G. Huth, Shinji Nishimoto, An T. Vu, and Jack L. Gallant
Helen Wills Neuroscience Institute, Program in Bioengineering, and Department of Psychology
University of California, Berkeley, Berkeley, CA 94720, USA
Correspondence: gallant@berkeley.edu

http://dx.doi.org/10.1016/j.neuron.2012.10.014


SUMMARY
Humans can see and name thousands of distinct object and action categories, so it is unlikely that each category is represented in a distinct brain area. A more efficient scheme would be to represent categories as locations in a continuous semantic space mapped smoothly across the cortical surface. To search for such a space, we used fMRI to measure human brain activity evoked by natural movies. We then used voxelwise models to examine the cortical representation of 1,705 object and action categories. The first few dimensions of the underlying semantic space were recovered from the fit models by principal components analysis. Projection of the recovered semantic space onto cortical flat maps shows that semantic selectivity is organized into smooth gradients that cover much of visual and nonvisual cortex. Furthermore, both the recovered semantic space and the cortical organization of the space are shared across different individuals.
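The recovery step the summary describes — principal components analysis over fitted voxelwise model weights — can be sketched on synthetic data. Everything below (matrix sizes, the 4-dimensional latent space, the noise level) is invented for illustration; the actual study fits models to whole-brain fMRI responses for 1,705 categories.

```python
import numpy as np

# Toy sketch: take voxelwise weights of shape (n_voxels, n_categories)
# (here random stand-ins for fitted model weights) and recover the
# leading dimensions of a shared "semantic space" via PCA.

rng = np.random.default_rng(0)
n_voxels, n_categories = 200, 30

# Pretend the true weights live in a 4-dimensional semantic space.
latent = rng.normal(size=(n_voxels, 4))
axes = rng.normal(size=(4, n_categories))
weights = latent @ axes + 0.01 * rng.normal(size=(n_voxels, n_categories))

# PCA via SVD of the mean-centered weight matrix.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (S**2) / (S**2).sum()

# The first few components capture nearly all the structure,
# mirroring the paper's low-dimensional semantic space.
print("variance explained by first 4 PCs:", explained[:4].sum())
```

In the paper the analogous components are then projected back onto the cortical surface (each voxel's loading on each component), which is what produces the smooth semantic gradients the summary mentions.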

* * * * * 

Some remarks by the lead author, Alexander Huth:
Back in 2012 I wrote a paper about the cortical representation of visual semantic categories. I showed that pretty much all of the higher visual cortex is semantically selective, and argued that this representation is better understood as gradients of selectivity across the cortex than as distinct areas. I also made a video that explains the paper, and there's a nice FAQ on our lab website. I also made a nifty online viewer for that dataset.

Tuesday, April 26, 2016

From Telling to Showing, by the Numbers

I've been thinking about some remarks Moretti made about the digital humanities in  a recent interview. Among other things he suggested that the results of computational criticism have so far been disappointing. But he also held up Lit Lab Pamphlet #4 as an example of "an intelligence that takes the form of writing a script, but in the writing of the script there is also the beginning of a concept, very often not expressed as a concept, but that you can see that it was there from the results that the coding produces." Here's what I wrote about that pamphlet back in October of 2012.
I’ve just looked at a pamphlet from Stanford’s Literary Lab: Ryan Heuser and Long Le-Khac, A Quantitative Literary History Of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (68 page PDF), May 2012. I’ve not read it in detail, but only blitzed my way through, looking for the good parts. Well, not even all of those. I was just looking to get a sense of what’s going on.

Which I did. And I like it. THIS is the sort of work I want to see from ‘digital humanities.’ Not the only sort, but it’s one of the things we can do with ‘big data’ and pretty much only do with big data. If traditional humanists can’t see value in this kind of work, well, then forget about them.

First I’ll give you the abstract, then I’ll quote a bunch and make some comments.

Authors’ abstract
The nineteenth century in Britain saw tumultuous changes that reshaped the fabric of society and altered the course of modernization. It also saw the rise of the novel to the height of its cultural power as the most important literary form of the period. This paper reports on a long-term experiment in tracing such macroscopic changes in the novel during this crucial period. Specifically, we present findings on two interrelated transformations in novelistic language that reveal a systemic concretization in language and fundamental change in the social spaces of the novel. We show how these shifts have consequences for setting, characterization, and narration as well as implications for the responsiveness of the novel to the dramatic changes in British society.

This paper has a second strand as well. This project was simultaneously an experiment in developing quantitative and computational methods for tracing changes in literary language. We wanted to see how far quantifiable features such as word usage could be pushed toward the investigation of literary history. Could we leverage quantitative methods in ways that respect the nuance and complexity we value in the humanities? To this end, we present a second set of results, the techniques and methodological lessons gained in the course of designing and running this project.

Sunday, April 24, 2016

Mother Father Nature Universe in Red with Blindfold, Dog, and Bird

20160424-_IGP6575

Digital Humanities, the Public and "Saving the Humanities"

Melissa Dinsman interviews Laura Mandell in the LARB:
Another concern that has come up deals with public intellectualism, which many scholars and journalists alike have described as being in decline (for example, Nicholas Kristof’s New York Times essay last year). What role, if any, do you think digital work plays? Could the digital humanities (or the digital in the humanities) be a much-needed bridge between the academy and the public, or is this perhaps expecting too much of a discipline?

I have a story to tell about this. I was at the digital humanities conference at Stanford one year and there was a luncheon at which Alan Liu spoke. His talk was a plea to have the digital humanities help save the humanities by broadcasting humanities work — in other words, making it public. It was a deeply moving talk. But to her credit, Julia Flanders stood up and said something along the lines of, “We don’t want to save the humanities as they are traditionally constituted.” And she is right. There are institutional problems with the humanities that need to be confronted and those same humanities have participated in criticizing the digital humanities. Digital humanists would be shooting themselves in the foot in trying to help the very humanities discipline that discredits us. In many ways Liu wasn’t addressing the correct audience, because he was speaking to those who critique DH and asking that they take that critical drive that is designed to make the world a better place and put it into forging a link with the public — making work publicly available. Habermas has said that the project of Enlightenment is unfinished until we take specialist discourses and bring them back to the public. This has traditionally been seen as a lesser thing to do in the humanities. For Habermas, it is seen as the finishing of an intellectual trajectory. This is a trajectory that we have not yet completed and it is something, I think, the digital humanities can offer.
I like that:  “We don’t want to save the humanities as they are traditionally constituted.”

Saturday, April 23, 2016

Physics as a mode of thought – critical points in biology (and mind?)

Philip Ball, in Nautilus: Why Physics Is Not a Discipline. Rather, it's a mode of thinking that knows no disciplinary bounds.
The habit of physicists to praise peers for their ability to see to the “physics of the problem” might sound odd. What else would a physicist do but think about the “physics of the problem?” But therein lies a misunderstanding. What is being articulated here is an ability to look beyond mathematical descriptions or details of this or that interaction, and to work out the underlying concepts involved—often very general ones that can be expressed concisely in non-mathematical, perhaps even colloquial, language. Physics in this sense is not a fixed set of procedures, nor does it alight on a particular class of subject matter. It is a way of thinking about the world: a scheme that organizes cause and effect.

This kind of thinking can come from any scientist, whatever his or her academic label. It’s what Jacob and Monod displayed when they saw that feedback processes were the key to genetic regulation, and so forged a link with cybernetics and control theory. It’s what the developmental biologist Hans Meinhardt did in the 1970s when he and his colleague Alfred Gierer unlocked the physics of Turing structures. These are spontaneous patterns that arise in a mathematical model of diffusing chemicals, devised by mathematician Alan Turing in 1952 to account for the generation of form and order in embryos. Meinhardt and Gierer identified the physics underlying Turing’s maths: the interaction between a self-generating “activator” chemical and an ingredient that inhibits its behavior.

Once we move past the departmental definition of physics, the walls around other disciplines become more porous, to positive effect. Mayr’s argument that biological agents are motivated by goals in ways that inanimate objects are not was closely tied to a crude interpretation of biological information springing from the view that everything begins with DNA. As Mayr puts it, “there is not a single phenomenon or a single process in the living world which is not controlled by a genetic program contained in the genome.”

This “DNA chauvinism,” as it is sometimes now dubbed, leads to the very reductionism and determinism that Mayr wrongly ascribes to physics, and which the physics of biology is undermining. For even if we recognize (as we must) that DNA and genes really are central to the detailed particulars of how life evolves and survives, there’s a need for a broader picture in which information for maintaining life doesn’t just come from a DNA data bank. One of the key issues here is causation: In what directions does information flow? It’s now becoming possible to quantify these questions of causation—and that reveals the deficiencies of a universal bottom-up picture.
Biological systems seem to operate close to critical points, where the system changes from one phase to another:
By operating close to a critical point, Bialek and Mora said, a system undergoes big fluctuations that give it access to a wide range of different configurations of its components. As a result, Mora says, “being critical may confer the necessary flexibility to deal with complex and unpredictable environments.” What’s more, a near-critical state is extremely responsive to disturbances in the environment, which can send rippling effects throughout the whole system. That can help a biological system to adapt very rapidly to change: A flock of birds or a school of fish can respond very quickly to the approach of a predator, say.

Criticality can also provide an information-gathering mechanism. Physicist Amos Maritan at the University of Padova in Italy and coworkers have shown that a critical state in a collection of “cognitive agents”—they could be individual organisms, or neurons, for example—allows the system to “sense” what is going on around it: to encode a kind of ‘internal map’ of its environment and circumstances, rather like a river network encoding a map of the surrounding topography. “Being poised at criticality provides the system with optimal flexibility and evolutionary advantage to cope with and adapt to a highly variable and complex environment,” says Maritan. There’s mounting evidence that brains, gene networks, and flocks of animals really are organized this way. Criticality may be everywhere.
I have written variously about behavioral modes. Those modes may be considered different phases of mind.

Thursday, April 21, 2016

The Art of the Deal: The Narrow Virtue of Donald Trump

Scott Alexander has some interesting observations about Donald Trump that he makes by discussing Trump's book, The Art of the Deal. Here's his conclusion about what Trump does as a developer:
As best I can tell, the developer’s job is coordination. This often means blatant lies. The usual process goes like this: the bank would be happy to lend you the money as long as you have guaranteed renters. The renters would be happy to sign up as long as you show them a design. The architect would be happy to design the building as long as you tell them what the government’s allowing. The government would be happy to give you your permit as long as you have a construction company lined up. And the construction company would be happy to sign on with you as long as you have the money from the bank in your pocket. Or some kind of complicated multi-step catch-22 like that. The solution – or at least Trump’s solution – is to tell everybody that all the other players have agreed and the deal is completely done except for their signature. The trick is to lie to the right people in the right order, so that by the time somebody checks to see whether they’ve been conned, you actually do have the signatures you told them that you had. The whole thing sounds very stressful.
And now we get to Trump the politician:
Maybe I’m imagining things, but I feel like this explains a lot about his presidential campaign. People ask him something like “How would you fix Medicare?”, and he gives some vapid answer like “There are tremendous problems with Medicare, but I’m going to hire the best people. I know all of the best doctors and health care executives, and we’re going to cut some amazing deals and have the best Medicare in the world.” And yeah, he did say in his business tips that you should change the frame to avoid being negative to reporters. But this isn’t a negative or a gotcha question. At some point you’d expect Trump to do his homework and get some kind of Medicare plan or other. Instead he just goes off on the same few tangents. This thing about hiring the best people, for example, seems almost like an obsession in the book. But it works for him. [...]

These strategies have always worked for him before, and floating off into some intellectual ideal-system-design effort has never worked for him before. So when he says that he’s going to solve Medicare by hiring great managers and knowing all the right people, I don’t think this is some vapid way of avoiding the question. I think it’s the honest output of a mind that works very differently from mine. I’ve been designing ideal systems of government for the heck of it ever since I was old enough to realize what a government was. Trump is at serious risk of actually taking over a government, and such design still doesn’t appeal to him. The best he can do is say that other people are bad at governing, but he’s going to be good at governing, on account of his deal-making skill. I think he honestly believes this. It makes perfect sense in real estate, where some people are good businesspeople, others are bad businesspeople, and the goal is to game the system rather than change it. But in politics, it’s easy to interpret as authoritarianism – “Forget about policy issues, I’m just going to steamroll through this whole thing by being personally strong and talented.”
And so:
The world is taken as a given. It contains deals. Some people make the deals well, and they are winners. Other people make the deals poorly, and they are losers. Trump does not need more than this.

Polylingualism

Victor Mair has a fascinating post on this topic over at Language Log, with many interesting comments. He begins:
I'm sitting in the San Francisco International Airport waiting for my flight to Taipei. The guy next to me is happily chattering away on his cell phone to someone (or some people) at the other end of the "line". What is curious is that one moment he is speaking in Taiwanese, the next moment in Japanese, then English, and then Mandarin.

I don't know whether it is proper to call this "code switching", because he is speaking each of these languages in whole sentences or even blocks of sentences.

He does not speak the languages with equal fluency, but they all sound natural and do not require great effort on his part to produce. The man's first language seems to be Taiwanese, then comes Japanese (with a Chinese accent), English (with a multilingual accent), and Taiwan-style Mandarin.
What's going on? That is, who's on the other end of the conversation? Mair has a speculation of his own and others offer suggestions.

In the comments, for example, Gene Andersen:
As one who is incapable of being very good even at English, let alone anything else, it is always an experience to watch somebody like Lothar von Falkenhausen switch from perfectly polished English to French to German to Japanese to Chinese at a meeting without missing a beat. But the amazing linguists are the people from southern India – they grow up having to know English, Hindi and their usual language, and they generally wind up knowing all the Dravidian languages (which are close), and with that background they can learn anything. We had a Telugu-speaking temp for a while – she started chattering away in Bengali with a colleague from Bengal – I asked her if that was her fifth language or what, and she said "My tenth."
Michael C. Dunn:
I once knew a maitre d' in Cairo in the days when the Russians were still around. He was Armenian and grew up knowing Armenian, Russian, Arabic, and maybe Turkish, and had acquired excellent English and at least conversational French and German. That may not have been all. Also a couple I know: the husband grew up speaking English but knew Brazilian Portuguese, learned Persian in the Peace Corps and Arabic studying abroad, then married a Puerto Rican who was studying Italian literature. I've been at gatherings where there was extensive code-switching.
Miles Archer:
I worked with a guy from the Netherlands at an American company. He spoke perfect American English with only the faintest trace of an accent. He lived in Barcelona and thus I assume he spoke Spanish, though I never heard it myself. His wife is German and they spoke German in their home.

I once met up with him at Schipol Airport to do some business nearby. He had to struggle for a minute to reset his brain to speak his native language!

It's really sad that so many of us Americans are monolingual.

Wednesday, April 20, 2016

Sunflower close-up

IMGP8588rd

Deadwood and Moral Injury

I worked my way through Deadwood on DVDs however many years ago and thought it was terrific. I’ve just been through it again, all three seasons, as streamed on Amazon Prime. It holds up. The writing, the acting, the story, and of course the language – not quite the same as the writing, if you catch my drift, but obviously closely related – it all holds up.

If I were to do serious work on it I think I’d center my thinking on Al Swearengen, saloon keeper, pimp, crime boss, and, in an interesting way, pillar of the community. He’s the one who called meetings of town elders, with canned peaches for refreshment, to get organized for becoming legally incorporated into the United States in some manner. Sure, he’s self-interested. But not, I believe, entirely so – something worth exploring (even from an evolutionary psychology point of view). And it’s clear that, as things move along, others count on him to do the dirty work that they won’t do themselves (I’m thinking of a specific incident involving Seth Bullock, but can’t recall the particulars).

One thing I’d look into is moral injury, which I’ve mentioned in some other posts here. There are a few scenes – two, three, four, something like that – where Swearengen is down on his hands and knees scrubbing blood stains from the floor. In at least one of these scenes, perhaps two, he let the blood with his own hand (slit someone’s throat). Otherwise, he ordered the killing even if he didn’t do it himself. His concern was not merely cosmetic. He paid a price for the killing he did, either directly or through others. Other reference scenes: talking about Dan’s reaction to killing Hearst’s enforcer; late in season three when he’s alone at the bar at night, singing sadly, wistfully.

In this respect – capacity for moral injury – compare Swearengen with Cy Tolliver and George Hearst.

* * * * *

Here’s a bunch of old columns from The Valve that reference Deadwood. I wrote some of them, but not all. Start reading them from the bottom. Those are the columns that got me to watch the show:

Tuesday, April 19, 2016

Single Shots: Seinfeld’s Ongoing Anatomy of Comedy

One of the many clips I saw on YouTube in my ongoing investigation of stand-up was an interview with George Carlin where he said that, early in his career, an older comedian (whose name I forget) advised him to write everything down and to organize. And that’s what Carlin did, writing ideas and bits on three by five cards and organizing them.

Seinfeld writes everything on yellow pads. I wonder what kind of filing system he has? For surely he does have some kind of filing system, no?

Consider his personal website: Jerry Seinfeld: Personal Archives. Under the heading “What Is this?” he tells us:
When I started doing TV, I saved every appearance on every show I did.

I thought it might be fun to go through all of it and pick out three bits each day that still amuse me for some reason or another.
Yeah, sure. But fun? Really? Fun?

How about tedious. But just how did he save copies of those appearances? Back when he first started showing up on TV, personal computers didn’t have the capacity to store and organize video clips. He must have had boxes of videotapes. Neatly organized by date? Perhaps he made some annotation of the contents on each tape, or perhaps he numbered the tapes and kept track of the contents on three by five cards.

Who knows? Maybe he just threw them in boxes for years and then, once he started raking in the ducats, he hired staff to digitize and organize that stuff. Still, I’d think he’d want to keep close tabs on it.

I don’t think for a minute he coded up the website himself. But how closely does he monitor it? Every day we get to see three, and only three, clips of stand-up comedy. The clips are half a minute to a minute-and-a-half long, perhaps two. Somehow all those boxes of videotapes got broken into short segments. That’s a job and a half in itself. And those segments have to be labeled and classified. Who does that?

And who chooses which three are displayed on any given day? Is the selection random or is some thought given to it? If the latter, what’s the thinking? Today there was a clip from Carson 1981, one from Carson 1990, and one HBO 1998. The Carson81 was about a fat man (I think this was from his first appearance on Carson). The Carson90 was about newspapers. And the HBO98 was about race horses. Three topics, three different time periods.

There is SOME system. Maybe it’s tight, maybe it’s loose. Can’t tell. What’s Jerry’s role in it? Imagine he makes the choice. What does that imply about his management style? Maybe he’s completely hands-off. But what does that mean, completely hands off? He never ever even looks at the site? He lets someone else choose, but checks on the site every week or so?

Keep in mind that this site isn’t the only thing he’s got to do. He’s got a wife and three kids. He does stand-up three weekends or so a month. He’s got all those sports cars he’s got to drive. And he’s got this show, Comedians in Cars Getting Coffee.

He’s a very busy man. I’m thinking he’s a very organized guy. Got to be.

The main deal with his current show, CCGC, is the 10- to 20-minute shows, each with a different comedian and car. At some point in the run, however, he started cutting those shows up into bits and pieces and assembling bits from several different shows into two-minute segments on a single theme: Single Shots: A smaller, more concentrated cup of comedy.

How does THAT happen? I’ve watched all the shows and all the Single Shots and I’m seeing some bits in those Single Shots that weren’t in the shows. And that means that someone is somehow keeping track of more than just what shows up in the individual programs. Is ALL the raw footage for each show – three GoPros and (at least) two DSLRs for three to four hours, and a drone here and there – cut into snippets, labeled, and classified for later use? I don’t know, don’t think so, but... There’s a system there. It probably evolved over time. Perhaps it’s evolving still.

* * * * *

When I started thinking about this I figured I’d list all the Single Shots and then analyze the selection of topics. And maybe I will do that some day, analyze the selection of topics. But for now, I’m just going to list the shots along with a short, and sometimes cryptic, notation about what’s in an individual shot.

To date there are 72 of them. I’ve listed them from the most recent at #1 to the oldest at #72. Note, for example, that #72 (the oldest one) is about donuts while #7, quite recent, is about donut holes; that’s obviously what’s behind a recent bit. #5 is shots of and from drone-mounted cameras; as such, we’re deep behind the scenes (and perhaps running a tad low on ideas?).

Think about it. Someone did. Here they are:

Strange Wires

neuronal tensions.jpg

Sunday, April 17, 2016

Colorado Marijuana Tours: Evidence of Cultural Change in the USofA?

Alan Feuer reports on a 3-day pot tour of Denver and environs (NY Times):
I found the options dizzying: In the two years since the state first permitted the sale of weed to recreational users, an intricate economy has rapidly sprung up. Dope-smoking ski buffs can ride to the slopes in weed-friendly charter S.U.V.s, and arriving potheads can schedule pickups from the airport through dedicated livery services like THC Limo. There are stoner painting classes, stoner mountain treks and stoner chefs who will cook you a four-course marijuana dinner. Visitors can avail themselves of mobile apps like Leafly and Weedmaps to track down nearby vendors or book their bud-and-breakfasts through websites like TravelTHC.
You can learn to cook with marijuana oil:
I noticed a similar phenomenon at the Stir Cooking School in the Highlands area, a very Martha Stewart-looking outfit — exposed brick walls, wide-wale wooden floors — that had recently embarked on a sideline teaching tourists to cook with marijuana oil. Our class that morning was led by a graduate of the Johnson & Wales culinary school, Travis French, who instructed us in the preparation of weed chicken tacos, weed guacamole and weed-infused jicama slaw. The students were another motley crew — in an upmarket, foodie sort of way: a husband and wife who owned a weed dispensary in California, a pot-loving lesbian couple from Fort Lauderdale and some married academics on a secret holiday from their small Catholic college in the Midwest.
However:
I mean, I got it: It was cool getting high without fear of being hassled by the cops. But was that really something around which you could plan a whole vacation? I understand that people go on wine trips, but generally speaking, they’re not popping bottles of shiraz the minute they leave the baggage claim. When I thought about it later, it occurred to me that what I might have been reacting to was the hard sell that Denver’s ganja-preneurial class was putting on these poor, weed-repressed out-of-towners, the way in which their stifled desire for pot was being commodified.
And then there's the high tech marijuana lab:
When we reached the lab, my tour mates stumbled off the bus and stood for a moment in the parking lot gazing at the 40,000-square-foot structure as though it were the Vatican. “Oh yeah, dude,” the cattle rancher murmured with a slow-motion nod as we stepped inside. There, we met Meg Sanders, the chief executive of Mindful, the company that runs the lab. Ms. Sanders, knowing her audience, told us that the site housed 8,000 individual plants of 50 different strains. This elicited an awe-struck silence from the potheads, into which she added, waving us on, “All right, let’s head back to Disneyland.”

The technical aspects of the lab were pretty interesting: cryogenic freezers, low-temp ovens, lots of fluorescent lights — like something you might find at a pharmaceutical plant or in crime scene photos. Ms. Sanders informed us that every seedling in the building had been tagged at birth with an RFID chip so that the state could monitor its progress from cultivation to retail sale. She was pretty interesting herself: a former financial compliance officer who, like many others, saw an opportunity in pot. “I had a passion for the plant,” she said as we made our way past a giant indoor copse of marijuana, “and” — this seemed especially important — “there was no glass ceiling. ...

There, on the shelves before the spellbound heads, was Mindful’s entire product line: transdermal pot patches, marijuana taffy, pot bacon brittle, all-natural vegan pot capsules, Incredible Affogato pot candy bars, CannaPunch cannabis drinks, a Bubba Kush strain of root beer, Wake and Shake canna coffee, Lip Buzz lip balm, Apothecanna pain creams, and, of course, a wide variety of hashes, extracts and smokeables.
In the larger scheme of things, however:
“For most travelers, marijuana is a ho-hum issue,” said Cathy Ritter, the director of the Colorado Tourism Office. “It’s a very small segment of our travel population.” When I spoke with her by phone, Ms. Ritter acknowledged that she hadn’t used state money to promote pot tourism because most of the funds would, by definition, be spent outside of Colorado and, as she explained, “It’s pretty clear that that’s a federal offense.”

Recently, the Colorado Cannabis Chamber of Commerce pushed a bill in the state that would allow producers and sellers to open tasting rooms, as wineries and breweries have, and yet the real work of turning Denver into a pot Napa Valley may in the end rest with people on the ground like Mr. Schaefer or like Pepe Breton, whose greenhouse lab we visited after brunch. Mr. Breton’s story was, by then, familiar: He was a former stockbroker who had gone in search of profit as a marijuana farmer.

But it seemed to me that he had a different — and slightly darker — take on the future of the industry. “The big boys are coming,” Mr. Breton said as we walked through his lab. “And when that happens, I won’t be able to compete anymore. I just hope I can sell at the right time and get a good price.”

Saturday, April 16, 2016

Speaking your soul in a foreign language

Rebecca Tan, "Accent Adaptation" (On sincerity, spontaneity, and the distance between Singlish and English), The Pennsylvania Gazette, 2/18/2016:
Every international student will surely find this idea of performance familiar. The most difficult thing about speaking in a foreign country isn’t adopting a new currency of speech, but using it as though it’s your own—not just memorizing your lines, but taking center stage and looking your audience in the eye. It is one thing to pronounce can’t so that it rhymes with ant instead of aunt, but a whole other order to do that without feeling like a fraud.

Two years ago, one of my friends left Singapore to attend an international school in Shanghai. She returned with a vaguely American inflection, a kind of slow, methodical drawl that sounded especially conspicuous against the efficient gushing that constitutes Singaporean speech. I remember her telling me how frustrating it was when people asked if she could “turn it off,” like it was a faucet—if she could just erase those two years of her life as though they had left no imprint on her whatsoever. “When you’re alone in a foreign country,” she confided, “all you will want to do is feel like you belong.”

When you’re grappling with things as heavy as loneliness and disconnection—when you have to simultaneously worry about your parents, mid-terms, laundry, and the cost of your education—changing your accent really just feels like survival. [...]

Lately I’ve been wondering if I’ve taken this whole language situation a tad too personally. Till now, I have kept my Singaporean inflection close at hand, for fear that attempts at Americanisms will be wrong—or, worse, permanent. Yet I am beginning to feel myself grow tired of this stage fright, tired of this senseless preoccupation with the packaging of ideas rather than the ideas themselves. Away from all these theatrics, the simple facts are that I am 9,500 miles away from home, and will be for four more years. I came here looking for change, and the words forming in my mouth to accommodate that change are not jokes, lies, or betrayals. They are real, not strange, and they are mine.

Wednesday, April 13, 2016

This is your brain on LSD

Researchers from Imperial College London, working with the Beckley Foundation, have for the first time visualized the effects of LSD on the human brain.

In a series of experiments, scientists have gained a glimpse into how the psychedelic compound affects brain activity. The team administered LSD (Lysergic acid diethylamide) to 20 healthy volunteers in a specialist research centre and used various leading-edge and complementary brain scanning techniques to visualize how LSD alters the way the brain works.

The findings, published in Proceedings of the National Academy of Sciences (PNAS), reveal what happens in the brain when people experience the complex visual hallucinations that are often associated with the LSD state. They also shed light on the brain changes that underlie the profound altered state of consciousness the drug can produce.

A major finding of the research is the discovery of what happens in the brain when people experience complex dreamlike hallucinations under LSD. Under normal conditions, information from our eyes is processed in a part of the brain at the back of the head called the visual cortex. However, when the volunteers took LSD, many additional brain areas -- not just the visual cortex -- contributed to visual processing.

Dr Robin Carhart-Harris, from the Department of Medicine at Imperial, who led the research, explained: "We observed brain changes under LSD that suggested our volunteers were 'seeing with their eyes shut' -- albeit they were seeing things from their imagination rather than from the outside world. We saw that many more areas of the brain than normal were contributing to visual processing under LSD -- even though the volunteers' eyes were closed. Furthermore, the size of this effect correlated with volunteers' ratings of complex, dreamlike visions."

The study also revealed what happens in the brain when people report a fundamental change in the quality of their consciousness under LSD.

Dr Carhart-Harris explained: "Normally our brain consists of independent networks that perform separate specialised functions, such as vision, movement and hearing -- as well as more complex things like attention. However, under LSD the separateness of these networks breaks down and instead you see a more integrated or unified brain.
The original research article is available online HERE. Wouldn't you know, our good old friend the default mode network (DMN) makes an appearance. From the first paragraph of the discussion section:
The present findings offer a comprehensive new perspective on the changes in brain activity characterizing the LSD state, enabling us to make confident new inferences about its functional neuroanatomy. Principal findings include increased visual cortex CBF, RSFC, and decreased alpha power, predicting the magnitude of visual hallucinations; and decreased DMN integrity, PH-RSC RSFC, and delta and alpha power (e.g., in the PCC), correlating with profound changes in consciousness, typified by ego-dissolution. More broadly, the results reinforce the view that resting state ASL, BOLD FC, and MEG measures can be used to inform on the neural correlates of the psychedelic state (9, 16). Importantly, strong relationships were found between the different imaging measures, particularly between changes in BOLD RSFC (e.g., network “disintegration” and “desegregation”) and decreases in oscillatory power, enabling us to make firmer inferences about their functional meaning.

Shaky-cam, a reminder

IMGP8122

Tuesday, April 12, 2016

The Evolution of Language and Thought

2016 Mar 8. [Epub ahead of print]

The evolution of language and thought.

Abstract

Language primarily evolved as a vocal medium that transmits the attributes of human culture and the necessities of daily communication. Human language has a long, complex evolutionary history. Language also serves as an instrument of thought, since it has become evident that in the course of this process neural circuits that initially evolved to regulate motor control, motor responses to external events, and ultimately talking were recycled to serve tasks such as working memory, cognitive flexibility, and linguistic tasks such as comprehending distinctions in meaning conveyed by syntax. This precludes the human brain possessing an organ devoted exclusively to language, such as the Faculty of Language proposed by Chomsky (1972, 2012), which, like Fodor's (1983) modular model, is in essence a restatement of archaic phrenological theories (Spurzheim, 1815). The subcortical basal ganglia can be traced back to early anurans. Although our knowledge of the neural circuits of the human brain is at a very early stage and incomplete, the findings of independent studies over the past 40 years, discussed here, have identified circuits linking the basal ganglia with various areas of prefrontal cortex, posterior cortical regions, and other subcortical structures. These circuits are active in linguistic tasks such as lexical access and comprehending distinctions in meaning conferred by syntax, as well as in the range of higher cognitive tasks involving executive control, and play a critical role in conferring cognitive flexibility. The cingulate cortex, which appeared in the Therapsids, transitional mammal-like reptiles who lived in the age of the dinosaurs, most likely enhanced mother-infant interaction, contributing to success in the Darwinian (1859) "Struggle for Existence": the survival of progeny. It continues to fill that role in present-day mammals, as well as being involved in controlling laryngeal phonation during speech and directing attention (Newman & MacLean, 1983; Cummings, 1993).
The cerebellum and hippocampus, archaic structures, also play a role in cognition. Natural selection acting on genetic and epigenetic events in the last 500,000 years enhanced human cognitive and linguistic capabilities. It is clear that human language did not suddenly come into being 70,000 to 100,000 years ago, as Noam Chomsky (Bolhuis et al., 2014) and others claim. The archeological record and analyses of fossil and genetic evidence show that Neanderthals, who diverged from the human line at least 500,000 years ago, possessed some form of language. Nor did the human population suddenly acquire the capability to relate two seemingly unrelated concepts by means of associative learning 100,000 years ago, re-coined "Merge" by Chomsky and his adherents. Merge is supposedly the key to syntax, but associative learning, one of the cognitive processes by which children learn languages and the myriad complexities of their cultures, is a capability present in dogs and virtually all animals.
PMID: 26963222 [PubMed - as supplied by publisher]

* * * * *

PDF of the full text is available HERE.

Brain size: African elephants and us

Why are humans so smart? Is it the sheer number of neurons we have, or the global architecture of those neurons? The brain of the African elephant is three times as heavy as ours, but what's the neuron count? Suzana Herculano-Houzel in Nautilus:
Lo and behold, the African elephant brain had more neurons than the human brain. And not just a few more: a full three times the number of neurons, 257 billion to our 86 billion neurons. But—and this was a huge, immense “but”—a whopping 98 percent of those neurons were located in the cerebellum, at the back of the brain. In every other mammal we had examined so far, the cerebellum concentrated most of the brain neurons, but never much more than 80 percent of them. The exceptional distribution of neurons within the elephant brain left a relatively meager 5.6 billion neurons in the whole cerebral cortex itself. Despite the size of the African elephant cerebral cortex, the 5.6 billion neurons in it paled in comparison to the average 16 billion neurons concentrated in the much smaller human cerebral cortex.

So here was our answer. No, the human brain does not have more neurons than the much larger elephant brain—but the human cerebral cortex has nearly three times as many neurons as the over twice as large cerebral cortex of the elephant. Unless we were ready to concede that the elephant, with three times more neurons in its cerebellum (and, therefore, in its brain), must be more cognitively capable than we humans, we could rule out the hypothesis that total number of neurons in the cerebellum was in any way limiting or sufficient to determine the cognitive capabilities of a brain.

Only the cerebral cortex remained, then. Nature had done the experiment that we needed, dissociating numbers of neurons in the cerebral cortex from the number of neurons in the cerebellum. The superior cognitive capabilities of the human brain over the elephant brain can simply—and only—be attributed to the remarkably large number of neurons in its cerebral cortex.
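Herculano-Houzel's arithmetic is easy to check. Here is a quick sketch using the rounded figures from the passage above (the numbers are hers; the variable names are mine):

```python
# Neuron counts reported by Herculano-Houzel, rounded as in the passage.
elephant_total = 257e9        # total neurons in the African elephant brain
cerebellum_share = 0.98       # fraction of elephant neurons in the cerebellum
elephant_cortex = 5.6e9       # neurons in the elephant cerebral cortex
human_total = 86e9            # total neurons in the human brain
human_cortex = 16e9           # average neurons in the human cerebral cortex

elephant_cerebellum = elephant_total * cerebellum_share

# The elephant brain holds about three times our neurons overall...
whole_brain_ratio = elephant_total / human_total      # ~3.0

# ...but the human cortex holds nearly three times the elephant's cortical neurons.
cortex_ratio = human_cortex / elephant_cortex         # ~2.9
```

Both ratios land near three, which is why the comparison cuts so cleanly in opposite directions depending on whether you count the whole brain or just the cortex.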
And cooking has something to do with it:
As it turns out, there is a simple explanation for how the human brain, and it alone, can be at the same time similar to others in its evolutionary constraints, and yet so different to the point of endowing us with the ability to ponder our own material and metaphysical origins. First, we are primates, and this bestows upon humans the advantage of a large number of neurons packed into a small cerebral cortex. And second, thanks to a technological innovation introduced by our ancestors, we escaped the energetic constraint that limits all other animals to the smaller number of cortical neurons that can be afforded by a raw diet in the wild.

So what do we have that no other animal has? A remarkable number of neurons in the cerebral cortex, the largest around, attainable by no other species, I say. And what do we do that absolutely no other animal does, and which I believe allowed us to amass that remarkable number of neurons in the first place? We cook our food. The rest—all the technological innovations made possible by that outstanding number of neurons in our cerebral cortex, and the ensuing cultural transmission of those innovations that has kept the spiral that turns capacities into abilities moving upward—is history.
H/t 3QD.

Saturday, April 9, 2016

The Heart of AlphaGo

From Michael Nielsen, Quanta Magazine, "Is AlphaGo Really Such a Big Deal?":
To begin, AlphaGo took 150,000 games played by good human players and used an artificial neural network to find patterns in those games. In particular, it learned to predict with high probability what move a human player would take in any given position. AlphaGo’s designers then improved the neural network by repeatedly playing it against earlier versions of itself, adjusting the network so it gradually improved its chance of winning.

How does this neural network — known as the policy network — learn to predict good moves?

Broadly speaking, a neural network is a very complicated mathematical model, with millions of parameters that can be adjusted to change the model’s behavior. When I say the network “learned,” what I mean is that the computer kept making tiny adjustments to the parameters in the model, trying to find a way to make corresponding tiny improvements in its play. In the first stage of learning, the network tried to increase the probability of making the same move as the human players. In the second stage, it tried to increase the probability of winning a game in self-play. This sounds like a crazy strategy — repeatedly making tiny tweaks to some enormously complicated function — but if you do this for long enough, with enough computing power, the network gets pretty good. And here’s the strange thing: It gets good for reasons no one really understands, since the improvements are a consequence of billions of tiny adjustments made automatically.

After these two training stages, the policy network could play a decent game of Go, at the same level as a human amateur. But it was still a long way from professional quality. In a sense, it was a way of playing Go without searching through future lines of play and estimating the value of the resulting board positions. To improve beyond the amateur level, AlphaGo needed a way of estimating the value of those positions.

To get over this hurdle, the developers’ core idea was for AlphaGo to play the policy network against itself, to get an estimate of how likely a given board position was to be a winning one. That probability of a win provided a rough valuation of the position. (In practice, AlphaGo used a slightly more complex variation of this idea.) Then, AlphaGo combined this approach to valuation with a search through many possible lines of play, biasing its search toward lines of play the policy network thought were likely. It then picked the move that forced the highest effective board valuation.

We can see from this that AlphaGo didn’t start out with a valuation system based on lots of detailed knowledge of Go, the way Deep Blue did for chess. Instead, by analyzing thousands of prior games and engaging in a lot of self-play, AlphaGo created a policy network through billions of tiny adjustments, each intended to make just a tiny incremental improvement. That, in turn, helped AlphaGo build a valuation system that captures something very similar to a good Go player’s intuition about the value of different board positions.
H/t 3QD.
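Nielsen's two-stage recipe (first nudge the parameters toward the moves humans made, then nudge them toward moves that win in self-play) can be caricatured in a few lines. This is strictly a toy sketch of my own: a three-move "game" and a policy with one weight per move stand in for Go and for AlphaGo's actual deep network, which it resembles only in spirit:

```python
import math
import random

random.seed(0)

def softmax(ws):
    exps = [math.exp(w) for w in ws]
    total = sum(exps)
    return [e / total for e in exps]

# A toy "policy network": one adjustable weight per move, three moves total.
weights = [0.0, 0.0, 0.0]
LR = 0.1

# Stage 1 (imitation): repeatedly nudge the weights so the policy assigns
# higher probability to the move a human expert chose (here, always move 1).
for _ in range(200):
    probs = softmax(weights)
    human_move = 1
    for m in range(3):
        target = 1.0 if m == human_move else 0.0
        weights[m] += LR * (target - probs[m])

def play(move):
    # Toy "game": move 2 wins 80% of the time, the other moves only 20%.
    p_win = 0.8 if move == 2 else 0.2
    return 1.0 if random.random() < p_win else -1.0

# Stage 2 (self-play): sample a move, observe a win or loss, and nudge the
# sampled move's weight up after a win and down after a loss (REINFORCE-style).
for _ in range(3000):
    probs = softmax(weights)
    move = random.choices(range(3), weights=probs)[0]
    reward = play(move)
    for m in range(3):
        grad = (1.0 if m == move else 0.0) - probs[m]
        weights[m] += LR * reward * grad

final = softmax(weights)
```

After both stages the policy should have drifted from the imitated move to the move that actually wins, which is the essence of the second training stage Nielsen describes: billions of tiny adjustments, each rewarding whatever happened to work.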

Evolutionary Synthesis and Integrated Anthropology

The Extended Evolutionary Synthesis, Ethnography, and the Human Niche: Toward an Integrated Anthropology

Agustin Fuentes
Agustin Fuentes is Professor in the Department of Anthropology at the University of Notre Dame (Notre Dame, Indiana, 46556, U.S.A. []).
Abstract
Seeing bodies and evolutionary histories as quantifiable features that can be measured separately from the human cultural experience is an erroneous approach. Seeing cultural perceptions and the human experience as disentangled from biological form and function and evolutionary history is equally misguided. An integrative anthropology moves past dichotomous perspectives and seeks to entangle the “inside” and “outside,” methodologically and theoretically, to move beyond isolationist trends in understanding the human. In this paper I illustrate the underlying rationale for some anthropological lack of engagement with neo-Darwinian approaches and review contemporary evolutionary theory discussing how, in combination with a dynamic approach to human culture, it can facilitate integration in anthropology. Finally, I offer an overview of the human niche concept and propose a heuristic framework as a set of shared assumptions about human systems to help frame a sincerely anthropological and emphatically evolutionary approach to the human experience.

Friday, April 8, 2016

Toward a Fan-Based Research Collaboratory

Cross posted at The Valve
Despite some reservations about fan scholarship -- e.g. I've seen pointless edit wars at Wikipedia & pros are adept at pointless quarrels as well -- I'm seriously thinking about an initiative to see if fans are interested in doing at least some of the descriptive work I call for in the piece on cultural evolution I recently did for the National Humanities Center (cf. this "quasi-festo" for naturalist criticism, and this piece on "Kubla Khan"). I see little prospect that academy-based scholars will undertake such work in the near term. The sort of descriptive work I have in mind is not obviously subordinate to an inquiry into the "meaning" of a text. That pretty much means that the work is not publishable on its own; there's no obvious way to earn professional credit for doing it.

But fans may well be interested in doing such work, though on the texts that interest them. And those texts are only rarely going to be canonical high culture texts. And that's just fine with me. I've done such work on manga and cartoons and would have no problem with doing it on episodes of, e.g. Buffy the Vampire Slayer or Star Trek (any generation).

I've recently been doing quite a bit of work on Sita Sings the Blues, an animated film by Nina Paley, which I discuss in the Humanities Center post. As some of you may know, the film is done in four different visual styles. So I've made a table with a column for each style and then gone through the film from beginning to end and briefly annotated each segment in the proper column. You can find that table online in a Google docs file here. One of those segments, the Agni Pariksha, is done in a fifth style. I've gone through that segment and annotated each "shot" or sequence within it. You can find that here. In principle each of the some 60+ segments in the film could be described at the level of detail I've used in the Agni Pariksha segment.

In fact, one could easily describe a film frame-by-frame. Would that be worthwhile? In some cases, yes, and in some cases no. It depends. There's really no way of knowing until the work's been done in at least some cases and we can take a look at it.

It's clear to me that such descriptive work is a necessary precondition to a deeper knowledge of texts, whether written, filmed, or videotaped. All the cognitive psych and evolutionary psych and neuro-psych in the world is not going to accomplish what can only be accomplished through description. If the pros aren't going to do the work, then it's up to the fans. If the fans get into it, then in a decade or two the pros will have no choice but to follow or simply to drop off the edge of the earth.

See also this post on Tvtropes.org.

Friday Fotos: This and That

IMGP0042rd.jpg

IMGP5518

IMGP5613rd

20151119-P1110520

20151119-P1110501

Thursday, April 7, 2016

Two Puzzles Concerning the Self

A couple years ago I posted about how Plato confused the operations of his nervous system – its capacity to extract ‘canonical’ forms of objects from the flux of everyday appearances – with a postulated realm of Ideal Forms. Today I want to consider two other cases of how we get in our own way. The first case is not about us at all, but about two apes. The second involves one of those ingenious experiments of Jean Piaget.

Though the examples are somewhat different, I offer them to make a single point, that The Self is a construct, not a philosophical absolute or essence or ground. It is social (first case) and can, in fact, be mistaken (second case), and is thus contingent.

Chimps ‘R Us

Let us begin with one of those chimpanzees who were raised among humans, possibly the first of them. As a youngster Vicki was given the task of sorting photographs into two piles, “human” and “animal.” She placed her own photograph in the human pile while her chimpanzee father’s picture went into the animal pile (Eugene Linden, Apes, Men, and Language, 1974, p. 50). Was she expressing aggression against her father? Possibly, but not likely. Her father was a chimpanzee and so she placed his picture in the pile for animals, where it belonged. He looked like other animals, more or less. But why did she think her picture belonged in the pile with humans? After all, she didn’t look like humans, at least not as humans judge these things.

Seagull on a rail

20160222-P1120067

Tuesday, April 5, 2016

Geoffrey Hinton on Deep Learning, Go, and the future of AI

Geoffrey Hinton pioneered so-called "deep learning", the technique that allowed a computer to beat Lee Sedol, a South Korean grandmaster, at Go. Adrian Lee interviews Hinton in Maclean's:
Q: So what now? Are there other, even more complicated games that the AI world wants to conquer next?  
A: From what we think of as board games and things like that, I don’t think there is—I think this is really the pinnacle. There are of course other games, these fantasy games, where you interact with characters who say things to you. AI still can’t deal with those because they still can’t deal with natural language well enough, but it’s getting much better. And the way translation’s currently done will change because Google now has what promises to be a much better way to do machine translation. That’s part of understanding natural language properly, and that’ll influence lots of things—it’ll influence fantasy games and things like that, but it will also allow you to search much better, because you’ll have a better sense of what documents mean. It’s already influencing things—in Gmail you have Smart Reply, that figures out from an email what might be a quick reply, and it gives you alternatives when it thinks they’re appropriate. They’ve done a pretty good job. You might expect it to be a big table, of ‘If the email looks like this, this is a good reply, and if the email looks like that, then this might be a good reply.’ It actually synthesizes the reply from the email. The neural net goes through the words in the email, and gets some internal state in its neurons, and then uses that internal state to generate a reply. It’s been trained on a lot of data, where it was told what the kinds of replies are, but it’s actually generating a reply, and it’s much closer to how people do language.
Q: Beyond games, then—what might come next for AI? 
A: It depends who you talk to. My belief is that we’re not going to get human-level abilities until we have systems that have the same number of parameters in them as the brain. So in the brain, you have connections between the neurons called synapses, and they can change. All your knowledge is stored in those synapses. You have about 1,000-trillion synapses—10 to the 15, it’s a very big number. So that’s quite unlike the neural networks we have right now. They’re far, far smaller, the biggest ones we have right now have about a billion synapses. That’s about a million times smaller than the brain.
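The parameter gap Hinton cites is simple arithmetic (his figures, my check):

```python
brain_synapses = 1e15      # "about 1,000-trillion synapses—10 to the 15"
largest_net_params = 1e9   # "about a billion synapses" in 2016's biggest nets

# The networks of 2016 were "about a million times smaller than the brain."
gap = brain_synapses / largest_net_params
```

So on his own premise (human-level abilities need brain-scale parameter counts), the hardware had roughly six orders of magnitude to go.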

Q: Do you dare predict a timeline for that? 
A: More than five years. I refuse to say anything beyond five years because I don’t think we can see much beyond five years. And you look at these past predictions like there’s only a market in the world for five computers [as allegedly said by IBM founder Thomas Watson] and you realize it’s not a good idea to predict too far into the future.
The importance of computing power:
Q: How important is the power of computing to continued work in the deep learning field? 
A: In deep learning, the algorithms we use now are versions of the algorithms we were developing in the 1980s, the 1990s. People were very optimistic about them, but it turns out they didn’t work too well. Now we know the reason they didn’t work too well is that we didn’t have powerful enough computers, we didn’t have enough data sets to train them. If we want to approach the level of the human brain, we need much more computation, we need better hardware. We are much closer than we were 20 years ago, but we’re still a long way away. We’ll see something with proper common-sense reasoning.

Q: Can the growth in computing continue, to allow applications of deep learning to keep expanding? 
A: For the last 20 years, we’ve had exponential growth, and for the last 20 years, people have said it can’t continue. It just continues. But there are other considerations we haven’t thought of before. If you look at AlphaGo, I’m not sure of the fine details of the amount of power it was using, but I wouldn’t be surprised if it was using hundreds of kilowatts of power to do the computation. Lee Sedol was probably using about 30 watts, that’s about what the brain takes, it’s comparable to a light bulb. So hardware will be crucial to making much bigger neural networks, and it’s my guess we’ll need much bigger neural networks to get high-quality common sense.
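His energy comparison works out the same way. Taking 200 kW as a stand-in for "hundreds of kilowatts" (my assumption; he gives no exact figure):

```python
alphago_watts = 200_000   # assumption: 200 kW for "hundreds of kilowatts"
human_watts = 30          # Hinton's figure for Lee Sedol's brain

# The machine burned several thousand times the energy of the human it beat.
ratio = alphago_watts / human_watts   # roughly 6,700x
```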

Q: In the ’80s, scientists in the AI field dismissed deep learning and neural networks. What changed? 
A: Mainly the fact that it worked. At the time, it didn’t solve big practical AI problems, it didn’t replace the existing technology. But in 2009, in Toronto, we developed a neural network for speech recognition that was slightly better than the existing technology, and that was important, because the existing technology had 30 years of a lot of people making it work very well, and a couple grad students in my lab developed something better in a few months. It became obvious to the smart people at that point that this technology was going to wipe out the existing one.
H/t Tyler Cowen.

We Live in a Culture of Fear

Barry Glassner. The Culture of Fear: Why Americans Are Afraid of the Wrong Things. Basic Books, 1999.

From the introduction, p. xxvi:
Mary Douglas, the eminent anthropologist who devoted much of her career to studying how people interpret risk, pointed out that every society has an almost infinite quantity of potential dangers from which to choose. Dangers get selected for special emphasis, Douglas showed, either because they offend the basic moral principles of the society or because they enable criticism of disliked groups and institutions.
p. xxviii:
The short answer to why Americans harbor so many misbegotten fears is that immense power and money await those who tap into our moral insecurities and supply us with symbolic substitutes.
Could it be that the craziness of American politics for the past two decades reflects residual awareness of and anxiety about DEEP TROUBLE AHEAD that is being deflected onto other things: crime, minorities, death panels, immigrants, welfare moms, and so forth?

Friday, April 1, 2016

"Not like those guys"

From a Paris Review interview with Sarah Thomason, an expert on what happens when different languages collide:
It’s not as if people come into contact and one crowd says, Boy, your language is a lot more efficient than ours! It depends on who’s got the power. The world I live in, the world you live in, Western Europe, the United States, highly industrialized countries, the paradigm we’re used to is colonialism—and then the indigenous languages are threatened. A lot of them have disappeared and the ones that haven’t are at great risk, so that seems like the norm.

But imagine a society—and again, these are mostly hunter-gatherer societies, but there are still a lot of those around—where the people practice exogamy, meaning you have to find a marriage partner outside your own group. Often the criterion is whether they speak the same language as you. If you have a society like that, you’re in contact with at least one other group and typically several relatively small groups—and it’s greatly to your advantage to maintain different languages, right? You don’t want to change your whole culture, you value your culture, exogamy seems like the way the world ought to be, and you certainly want to get married and you have this view that you shouldn’t marry your sister—then you preserve the languages.

That’s one reason languages get preserved. You find another phenomenon—it’s particularly common in and around Papua New Guinea, where there are about a thousand languages. That means that they’re close together, they’re small groups. Some of them are related to one another, so they’re pretty similar, and in that part of the world it’s probably not accidental that there are so many languages in such a relatively small area. It’s fairly common for groups to deliberately change their languages so they’re not so much like the guys next door. And the most spectacular examples are where you’ve got dialects of the same language and oh, we don’t want to be too much like those guys. It’s an identity-preserving thing, it’s a distancing phenomenon.
For example:
I told you about the distancing changes in New Guinea. There’s an island called Bougainville—which is famous if you’ve read a lot about World War II—but it’s a big island and it has a language called Buin. Buin has several dialects, and one of them is Uisai. There are about fifteen thousand Buin speakers in all, and maybe fifteen hundred Uisai speakers. And Buin has, including all its dialects, a very elaborate gender system, sort of like what you find in French or Russian or German but more elaborate because each noun is either masculine or feminine, and then the verb will agree in gender with the noun, and the adjective will agree in gender with the noun, and so on. So in a sentence you’ve got a lot of markers indicating the gender—it’s part of the syntax as well as the lexicon. But in Uisai, all the genders are reversed. Every noun that’s feminine in Uisai is masculine in all the other dialects of Buin.

Now, this just isn’t conceivable as any kind of ordinary, natural, gradual linguistic change. I mean they have to have sat down and said, We’re too much like those guys, we’ve got to do something. How about this? A lot of linguists, maybe most linguists, would say this isn’t even a possible linguistic change. My belief, which has gotten more radical the older I get—which is nice, you don’t want to get intellectually fossilized—is that anything you can become aware of in your language, you can change if you’ve got a powerful enough motive. And of course, it’s not going to affect anybody’s language but yours, unless everybody else changes, too.