Friday, November 24, 2017

Friday Fotos: Cleopatra’s Shoes, or, the F Me Pump

I recently explained how I found a woman’s shoe on the street and decided to use it as a prop for photographs. That has blossomed into a photography project I’m (tentatively) calling “Cleopatra’s Shoes, or the F Me Pump”. Why Cleopatra? Here’s how Shakespeare introduces her in Antony and Cleopatra:


I suppose that Harvey Weinstein, Leon Wieseltier, Donald Trump, and others of their ilk think that a woman wearing such a shoe is “asking for it”. That betrays their insecurity, contempt for women, and utter lack of imagination.

Cleopatra may well have been flaunting it, but that’s very different from asking for it. If she is flaunting it, then she may be signaling that you are welcome to ask for it, provided you do so with desire, imagination, politesse, and respect. She’s also playing. And you know how the cliché goes, don’t you? Fun is fundamental.

And, you know what, I’ll bet Antony flaunted his pumps too. Here’s how Shakespeare sends him off the stage:


Between them, Antony and Cleopatra ruled half the Mediterranean world. And they delighted in their F Me Pumps. It behooves us to do the same.

Here are some relevant videos pointed out to me by a few of my Facebook friends; friends, incidentally, who are also real-life friends.

Here are the photos I’ve collected so far. Click on the angle brackets to scroll through the photos and click on the photo itself to be whisked away to my Flickr album for the project, which currently has 76 photos, with more on the way.

The Eff Me Pump / Cleopatra's Shoe

They’re just raw material for the project, not the final product. What’s the final product? Don’t know. We’re not there yet.

Finally, the Shakespeare passages by themselves:
Age cannot wither her, nor custom stale
Her infinite variety: other women cloy
The appetites they feed: but she makes hungry
Where most she satisfies; for vilest things
Become themselves in her: that the holy priests
Bless her when she is riggish.
Antony and Cleopatra, Act 2, Scene 2

His legs bestrid the ocean: his rear'd arm
Crested the world: his voice was propertied
As all the tuned spheres, and that to friends;
But when he meant to quail and shake the orb,
He was as rattling thunder. For his bounty,
There was no winter in't; an autumn 'twas
That grew the more by reaping: his delights
Were dolphin-like; they show'd his back above
The element they lived in: in his livery
Walk'd crowns and crownets; realms and islands were
As plates dropp'd from his pocket.
Antony and Cleopatra, Act 5, Scene 2

Sometimes the thing to do is declare the problem solved – and then, and only then, solve it: Is the REAL Singularity at hand? [#HEX01]

First you declare the problem solved and then figure out whether or not you’re right. The order is important, for the declaration is necessary to set the stage for solving the problem. You can’t (usually don’t) do it the other way around.

What’s remarkable is that it seems to work. It’s worked for me several times, though only two specific occasions come to mind. One is quite recent, when I declared the “Kubla Khan” problem solved (yeah, I know, I know, there’s still work to be done proving it out). The other is years ago when I was working on my dissertation and I declared, yes, I’ve figured out Sonnet 129 – or as much of it as I needed to. But I’m sure it’s happened several times in between, though I can’t come up with specific occasions.

But why does it work?

Solving the large complex unknown

The problems are relatively large and complex and I have no model to guide me to a solution. I don’t know what I’m looking for.

Let’s step outside and imagine we’ve got transcendental knowledge of these sorts of problems. We see the problem to be solved, and we in fact know how to solve it. We also see the investigator working on it and we know what he knows. There comes a time when he has all the pieces to hand. He can solve the problem at any time simply by putting the pieces together the right way. That is, he’s got all the components, but lacks a plan for their proper assembly.

One can wonder whether or not such a concrete metaphor is very useful in understanding such an abstract matter. I’m aware of the problem. There is a crucial distinction between components and a plan for their assembly. But is that a real distinction? Let’s go ahead as though it is.

What does he do? It depends. If he thinks more components are needed – though he’s not likely to be thinking in terms of components and assembly plans – he’ll go on looking for more components and miss the opportunity to assemble the ones he has in the proper way. If however he decides, for whatever reason, that he’s got all that he needs, then it becomes possible to intuit the assembly plan, though it may take a bit of fiddling. That is, the plan itself is not a big deal. It’s knowing when you’ve reached the state where all you need is a scheme for assembling the parts you’ve got. Once you’ve reached that point, the components will “tell” you how they go together.

What happens, in effect, is that you figure out how to see a duck, rather than a rabbit:


And thinking about ducks allows you to move ahead.

We’re living the Singularity

Well, I’m beginning to think we’ve got all the components for the next step in an understanding of, simulation of, and imitation of mind. I’ve been blogging around and about this for some time, but I’ll give particular notice to Wednesday’s post, Explain yourself, Siri, or Alex or Watson or any other AI that does interesting/amazing things and we don't know how it does it. I smell that the game is afoot. Beyond this I offer the concluding paragraphs from the paper I prepared for HEX01, Abstract Patterns in Stories: From the intellectual legacy of David G. Hays, which takes a historical look at relevant technical issues:
As a child my imagination was shaped by Walt Disney, among others. Disney, as you know, was an optimist who believed in technology and in progress. He had one TV program about the wonders of atomic power, where, alas, things haven’t quite worked out the way Uncle Walt hoped. But he also evangelized for space travel. That captured my imagination and is no doubt, in part, why I became a fan of NASA. I also watched The Jetsons, a half-hour cartoon show set in a future where everyone was flying around with personal jetpacks. And then there’s Stanley Kubrick’s 2001: A Space Odyssey, which came out in 1968 and depicted manned flight to near-earth orbit as routine. In the reality of 2017 that’s not the case, nor do we have a computer with the powers of Kubrick’s HAL. On the other hand, we have the Internet and social media; neither Disney, nor the creators of The Jetsons, nor Stanley Kubrick anticipated that.

The point is that I grew up anticipating a future filled with wondrous technology. By mid-1950s standards, yes, we do have wondrous technology. Just not the wondrous technology that was imagined back then. One bit of wondrous future technology has been looming large for several decades, the super-intelligent computer. I suppose we can think of HAL as one instance of that. There are certainly others, such as the computer in the Star Trek franchise, not to mention Commander Data. For the last three decades Ray Kurzweil has been promising such a marvel under the rubric of “The Singularity”. He’s not alone in that belief. 

Color me skeptical.

But here’s how John von Neumann used the term: “The accelerating progress of technology and changes in the mode of human life, give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”. Are we not there? Major historical movements are not caused by point events. They are the cumulative effect of interacting streams of intellectual, cultural, social, political, and natural processes. Think of global warming, of international politics, but also of technology, space exploration – Voyager 1 has left the solar system! – and the many ways we can tell stories that didn’t exist 150 years ago. Have we not reached a point of no return?

The future is now. Oh, I’m sure there are computing marvels still to come. Sooner or later we’re going to figure out how to couple Old School symbolic computing with the current suite of machine learning and neural net technologies and trip the light fantastic in ways we cannot imagine. That day will arrive more quickly if we concentrate on the marvels we have at hand rather than trying to second guess the future. We are living in the singularity.

Thursday, November 23, 2017

Happy Thanksgiving!




Complexity and technological evolution: What everybody knows?

Vaesen, K. & Houkes, W. Biol Philos (2017).
Publisher Name: Springer Netherlands
Print ISSN: 0169-3867
Online ISSN: 1572-8404
Abstract: The consensus among cultural evolutionists seems to be that human cultural evolution is cumulative, which is commonly understood in the specific sense that cultural traits, especially technological traits, increase in complexity over generations. Here we argue that there is insufficient credible evidence in favor of or against this technological complexity thesis. For one thing, the few datasets that are available hardly constitute a representative sample. For another, they substantiate very specific, and usually different versions of the complexity thesis or, even worse, do not point to complexity increases. We highlight the problems our findings raise for current work in cultural-evolutionary theory, and present various suggestions for future research.
I've included the final discussion below the fold.

* * * * *

Wednesday, November 22, 2017

Out the window through a screen the sun shines indirectly




Explain yourself, Siri, or Alex or Watson or any other AI that does interesting/amazing things and we don't know how it does it

From the NYTimes:
It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.

This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad and fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.
And so we have a new research field, explainable A.I., or X.A.I.
Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.
One expert, David Gunning, asserts:
“The real secret is finding a way to put labels on the concepts inside a deep neural net,” he says. If the concepts inside can be labeled, then they can be used for reasoning — just like those expert systems were supposed to do in A.I.’s first wave.
And so:
To create a neural net that can reveal its inner workings, the researchers in Gunning’s portfolio are pursuing a number of different paths. Some of these are technically ingenious — for example, designing new kinds of deep neural networks made up of smaller, more easily understood modules, which can fit together like Legos to accomplish complex tasks.
Makes sense. That's what the brain does, isn't it? Except that the network in even a small patch of neural tissue is huge in comparison to deep learning nets.

Perhaps language will help:
Five years ago, Darrell and some colleagues had a novel idea for letting an A.I. teach itself how to describe the contents of a picture. First, they created two deep neural networks: one dedicated to image recognition and another to translating languages. Then they lashed these two together and fed them thousands of images that had captions attached to them. As the first network learned to recognize the objects in a picture, the second simply watched what was happening in the first, then learned to associate certain words with the activity it saw. Working together, the two networks could identify the features of each picture, then label them. Soon after, Darrell was presenting some different work to a group of computer scientists when someone in the audience raised a hand, complaining that the techniques he was describing would never be explainable. Darrell, without a second thought, said, Sure — but you could make it explainable by once again lashing two deep neural networks together, one to do the task and one to describe it.

Darrell’s previous work had piggybacked on pictures that were already captioned. What he was now proposing was creating a new data set and using it in a novel way. Let’s say you had thousands of videos of baseball highlights. An image-recognition network could be trained to spot the players, the ball and everything happening on the field, but it wouldn’t have the words to label what they were. But you might then create a new data set, in which volunteers had written sentences describing the contents of every video. Once combined, the two networks should then be able to answer queries like “Show me all the double plays involving the Boston Red Sox” — and could potentially show you what cues, like the logos on uniforms, it used to figure out who the Boston Red Sox are.
Sounds promising.
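The lash-up Darrell describes can be caricatured in a few lines of code. This is a minimal toy sketch, not the actual system: NumPy stand-ins for the two networks, with random weights in place of trained ones and an invented four-word vocabulary, just to show the division of labor – one module turns an image into features, the other turns features into words.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny vocabulary for the "language" side.
VOCAB = ["ball", "player", "field", "bat"]

def image_net(image):
    """Stand-in for an image-recognition net: project pixels to a feature vector."""
    W = rng.standard_normal((image.size, 8)) * 0.1  # untrained weights
    return np.tanh(image.ravel() @ W)

def language_net(features):
    """Stand-in for a captioning net: score each vocabulary word from the features."""
    W = rng.standard_normal((features.size, len(VOCAB))) * 0.1
    scores = features @ W
    e = np.exp(scores - scores.max())  # softmax over the vocabulary
    return e / e.sum()

# "Lash" the two together: the second network reads the first's activity.
image = rng.random((4, 4))                    # a tiny fake image
probs = language_net(image_net(image))        # word probabilities
caption_word = VOCAB[int(np.argmax(probs))]   # most likely label
```

With trained weights instead of random ones, the same pipeline shape is what lets the second network put words on what the first one is doing – which is the whole point of the explainability move.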

I wonder if these people could make sense of some obscure notes I wrote up a decade ago:

Abstract: These notes explore the use of Sydney Lamb’s relational network notion for linguistics to represent the logical structure of complex collection of attractor landscapes (as in Walter Freeman’s account of neuro-dynamics). Given a sufficiently large system, such as a vertebrate nervous system, one might want to think of the attractor net as itself being a dynamical system, one at a higher order than that of the dynamical systems realized at the neuronal level. A mind is a fluid attractor net of fractional dimensionality over a neural net whose behavior displays complex dynamics in a state space of unbounded dimensionality. The attractor-net moves from one discrete state (frame) to another while the underlying neural net moves continuously through its state space.

Abstract: These diagrams explore the use of Sydney Lamb’s relational network notion for linguistics to represent the logical structure of complex collection of attractor landscapes (as in Walter Freeman’s account of neuro-dynamics). Given a sufficiently large system, such as a vertebrate nervous system, one might want to think of the attractor net as itself being a dynamical system, one at a higher order than that of the dynamical systems realized at the neuronal level. Constructions include: variety ('is-a' inheritance), simple movements, counting and place notation, orientation in time and space, language, learning.

Introduction: This is a series of diagrams based on the informal ideas presented in Attractor Nets, Series I: Notes Toward a New Theory of Mind, Logic and Dynamics in Relational Networks, which explains the notational conventions and discusses the constructions. These diagrams should be used in conjunction with that document, which contains and discusses many of them. In particular, the diagrams in the first three sections are without annotation, but they are explained in the Attractor Nets paper. The rest of the diagrams are annotated, but depend on ideas developed in that paper.
The discussions of Variety and Fragments of Language compare the current notation, based on the work of Sydney Lamb, with a more conventional notation. In Lamb’s notation, nodes are logical operators (and, or), while in the more conventional notation nodes are concepts. The Lamb-based notation is more complex, but also fuller.
And, we might as well toss these notes in as well:
From Associative Nets to the Fluid Mind. Working Paper. October 2013, 16 pp.

Abstract: We can think of the mind as a network that’s fluid on several scales of viscosity. Some things change very slowly, on a scale of months to years. Other things change rapidly, in milliseconds or seconds. And other processes are in between. The microscale dynamic properties of the mind at any time are context dependent. Under some conditions it will function as a highly structured cognitive network; the details of the network will of course depend on the exact conditions, both internal (including chemical) and external (what’s the “load” on the mind?). Under other conditions the mind will function more like a loose associative net. These notes explore these notions in a very informal way.

Tuesday, November 21, 2017

Intersection of the Worlds Realized in Two Media

I've been digging out old MacPaint images over in Twitter, so I thought I'd bump this to the top of the queue.
This is a slightly off-angle photograph of a painting I did in the summer of 1981:


When I started it I had a simple formal problem in mind, to do a painting that used a full range of colors. I’d been doing paintings that leaned toward blues and reds and paintings that leaned toward blues and greens, but none that had all three in prominent use. That was the problem I started with in that painting. I approached it right off the bat by painting that rainbow arc of color patches across the top and right side. I then filled in the rest with appropriate imagery. Note the three worlds separated by the squid's tentacles: the yellow sky with the bluish sun, the forest with blue sky and stream, and the underwater scene with the strange ET-like face.

A couple years later I got a Macintosh and decided to realize that same image in the very limited medium of MacPaint, which gave me only white dots and black dots, no grays, much less color. Here's the final image:

3W7 framed

Big head


Been down so long a change is gonna’ come [#HEX01]

Been Down So Long It Looks Like Up to Me, by Richard Fariña, 1966: “coming on like the Hallelujah Chorus done by 200 kazoo players with perfect pitch... hilarious, chilling, sexy, profound, maniacal, beautiful and outrageous all at the same time”–Thomas Pynchon.
I’m not going to try summarizing the novel. Just think about the title. What could that possibly mean? Well, I’m thinking it’s the story of my life.

Isn’t it the story of everyone’s life, up to a point – a point you may not yet have arrived at, who knows?

I’ve lived under a cloud most of my adult life. The clouds, I think, are breaking up. The sun is shining through. & it’s not that the light hurts, because it doesn’t, but that it’s disorienting. I don’t know the world from this POV.

* * * * *

Are you familiar with the sociological concept of a reference group? It’s a group which you use as a standard for judging your behavior and accomplishments. When I got my PhD in English Literature, those people, academic literary critics, became my reference group. I started out strong, with good articles in good publications, and then that came to an end. I knew, of course, that my work was very different from standard literary criticism, and that caused problems, for me, not for them.

But I worked on literature, and they work on literature, no one else, so what choice have I had? They’re my reference group.

But maybe not. As I said yesterday, at last, someone’s interested in the technical work I did 40 years ago. And they’re not literary critics. They’re gamers.

Maybe I can stop worrying about academic literary criticism. They’re certainly not interested in my technical world, even those interested in cognitive criticism and computational criticism (two very different groups, BTW) have little use for it. And I can’t see making much headway with the descriptive folks, either. They seem more interested in theorizing description than in actually doing it. So we really don’t have anything to talk about. They’re not going to tumble to my ring-composition work. It’s actual description rather than theoretical throat-clearing in preparation for description at some later date.

So, let’s just bracket academic literary criticism for awhile. That profession is no longer a reference group for me. Let’s see if I can get somewhere with the gamers.

* * * * *

That’s one thing. And, in a way, it’s secondary. The big thing is that I think I’m finally going to be able to deliver on a task I set myself four and a half decades ago: to come to terms with, to understand, in some sense, the mechanisms underlying Coleridge’s “Kubla Khan”.

I talked about “Kubla Khan” in the presentation I delivered at HEX01 (First Workshop on the History of Expressive Systems), only a week ago today (early in the morning). At the end James Ryan, one of the organizers, asked me whether I would get back to “Kubla Khan”. I forget exactly what I said, but it was something like “maybe/I hope to/someday/yes”. That was the short answer. The long answer isn’t really that long, but it was too long to give in that context.

The long answer is that I long ago made “Kubla Khan” my touchstone, my personal reference point, my North Star. I judge my intellectual progress by what it tells me about “Kubla Khan”. So I’ve thought about the poem – and its relation to “This Lime-Tree Bower My Prison” – off and on for most of my adult life. I did my MA thesis on it in 1972, published an updated version of that in 1985 (“Articulate Vision: A Structuralist Reading of ‘Kubla Khan’”), and a considerably more sophisticated account in 2003 (“Kubla Khan” and the Embodied Mind). I count that last as considerable progress, but still, there’s a way to go, with no sense of just how far or even in what direction. In 2013 I put up a working paper, STC, Poetic Form, and a Glimpse of the Mind, in which I did a comparison between “Kubla Khan” and “This Lime-Tree Bower My Prison” that was considerably more detailed and sophisticated than the one I’d published way back in MLN in 1981, “Metaphoric and Metonymic Invariance: Two Examples from Coleridge” (obviously, my first formal publication on “Kubla Khan”). So, I’ve been through the poem five times in my career, including my unpublished master’s thesis. And, yes, to answer Ryan’s question, I hope to get to it again. But just when, I don’t know.

Well, a day or two later I got back to it. And I’ve declared the problem to be solved. Of course, there’s something of a gap between the declaration and the actual solution. I know that. And it’s not so much the solution that I’m after, but a sure sense of the terms in which a solution is likely to be found. That’s where I’m at.

Monday, November 20, 2017

At last, after 40 years, someone is listening [#DH]

That's from the First Workshop on the History of Expressive Systems. The image you see on the screen originated in my computer in Hoboken, NJ, and was being viewed, via Skype, in Funchal, Madeira, Portugal.

These people aren't literary critics. They're into gaming. That is to say, they are interested in stories, in creating interactive stories, and they think in computational terms. And that's how I've been thinking about literary texts for over 40 years. I can talk to them about Shakespeare, Coleridge, and Conrad in computational terms, but also Francis Ford Coppola, Walt Disney, King Kong, Gojira, and others. They need to know what I know, and vice versa.

The diagram in that image is from my 1976 article, Cognitive Networks and Literary Semantics, MLN 91: 1976, 952-982. Given the importance of MLN as a journal, and the fact that that particular issue was a special one commemorating 100 years of publication, I figured it would mark the beginning of a spectacular academic career. WRONG! Oh, the intellectual work's been good, at times even thrilling, but the literary academy wanted to go to Kansas (though that is not, perhaps, how they thought of it) and I wanted to go to the moon.

Have I found some fellow astronauts?

Stay tuned.

Special FX: The moon didn't fall in Alabama, either, but Jumper and Kong made a splash in JC

infant-stars & hot box spin 5.jpg

tumble rumble.jpg

down down down.jpg



Sunday, November 19, 2017

Either Tokyo was a lot smaller or I've shrunk since my glory days



As you can see from my most recent post, I have acquired a rather exotic women's shoe.  I was walking to the library when I spotted it on the sidewalk. Apparently discarded, a single shoe, left foot, size 7, "Kiss & Tell" – How's THAT for branding? There's a label on the sole at the instep that says, "All Man-Made Material Made in China". What does that mean? I understand "Made in China", but "All Man-Made Material" is ambiguous. Does it mean that all the materials are man-made (and they're made in China), so that the suede uppers are actually some artificial suede substance? Or does it mean that the man-made materials were made in China (the sole and heel are plastic) but the rest might well be natural? If so, was it also assembled in China?

Anyhow, as soon as I saw the shoe one of those little light-bulbs went off above my head:


So I grabbed it and put it in my backpack and then continued on to the library to return my film, Miyazaki's Castle in the Sky, and pick up my book, King Kong: The History of a Movie Icon from Fay Wray to Peter Jackson. There's a connection, you see, between King Kong and that shoe. King Kong died on the Empire State Building, right? Why not pose the shoe with the Empire State Building. Like this perhaps:


Notice that, from this angle, that size 7 woman's shoe is larger than that phallic whatsiewhoseit across the river.

Then I realized that these aren't the only photos of shoes I've got. For example, I found these hanging outside the improvised shack of some homeless person:

red shoes.jpg

And then we have the stash of women's shoes that my friend Wayquay is selling at The Ruins JC. Mostly women's shoes, but not all of them. I suppose we could say these baby booties (made by Wayquay herself) aren't shoes, strictly speaking, but they serve the same function, no?


And I've got other shoe shots as well, like these:


This, of course, is a minor sport.

Anyhow, I figured that, with these latest shots of the green shoe – I've got more that I haven't uploaded, and I plan to take more photos as well (perhaps in Narnia) – I should create a tag here at New Savanna (shoes) to capture those shots and write up a brief post acknowledging the importance of shoes.


Isn't that green just gorgeous! That shoe's the greatest prop ever!

Two views of Manhattan



BONUS below the fold –

"It was beauty killed the beast."

Ben and the Boys: Some Casual Remarks on Violence, Authority, and Sex in Bonanza

Bumping this to the top of the queue as it is relevant to current news about sexual harassment and rape in America.
A couple of weeks ago I watched my way through the first season of Bonanza, a TV western that I had watched in my youth. As some of you may know, it was one of the most popular shows on television at the time and ran for 14 seasons from 1959 to 1973. It was set in Nevada in the 1860s and centered on the Cartwright family, father Ben and his three adult male sons, proprietors of the Ponderosa, a large cattle ranch bordering on lake Tahoe and near Virginia City, a mining town.

This post consists of two casual notes about the show. The first concerns sexual violence against women and the second is about the structure of (political) power.

Rape in the Old West

After I’d watched about a dozen episodes a thought struck me: there’s a lot of sexual violence against women in this show. That thought stayed with me to the end of the season, 34 episodes. I wasn’t taking notes or keeping count but I’d say that half the episodes depicted sexual violence.

What do I mean? As I said, I wasn’t keeping notes, but typically a man would embrace a woman and try to kiss her. She would resist but he wouldn’t stop. At this point either the camera would cut away, leaving us to imagine what happened next, or a Good Guy, such as one of the Cartwrights, would come along and rescue the woman. The violence wasn’t nearly as graphic as we’d see on Deadwood, set in a similar place a decade later, but then Deadwood wasn’t made for a family audience watching on primetime network television back in the day when network television was much more important than it is now. Bonanza WAS made for a family audience.

Was this typical of primetime television back in the 1960s? I don’t know, but I suspect it was more common than I remember. Was this typical of westerns? I don’t know.

I know that it is not typical of The West Wing, a more recent and very different television show that I’m now watching. As you may know The West Wing is a political drama set in the west wing of the White House, which contains offices for the President, Vice President, and high-level staffers. While sexual violence comes up as a topic every now and then, the show doesn’t have a lot of scenes where a man forces himself on a woman. In fact I cannot think of one such scene off hand, and I’ve been through the whole run of the show.

Why then is it so common in Bonanza? Westerns are typically set in a world where the rule of law is tenuous. Westerns are about violence: cattle rustling, disputes over land, conflicts with Indians (aka Native Americans), bank robbery and other forms of theft, and, in the case of Bonanza, rape. What about other TV Westerns? I don’t know off hand; though I watched many TV Westerns when I was young, I don’t recall many of them.

Whatever the more general case, the first season of Bonanza was concerned about sexual violence against women. In some cases the women were saloon girls, prostitutes I (now) assume, though that was certainly not explicit (unlike Deadwood). In other cases the women were married or simply single; in one episode two Indian women were raped at a trading post.

Were the other 13 seasons like this? I don’t know and I don’t intend to watch them. It would be interesting to know, though.

Saturday, November 18, 2017

OOO: Baby Jesus and the Sausage Roll

LONDON — A British bakery chain has apologized after creating a Nativity scene in which the baby Jesus, surrounded by three wise men, was replaced by a sausage roll.