Religion & Politics: Fit For Polite Company

God in the Machine: The Role of Religion in Net Neutrality Debates
Tue, 24 Feb 2015

(Ben Stansall/AFP/Getty)

The public movement to protect a free and open Internet is approaching a critical moment this week: on February 26, the Federal Communications Commission will vote on whether to pass strong rules against corporate control of the Internet. For years, companies that manage America’s access to the Internet—corporate giants like Verizon, Comcast, and Time Warner—have sparred with activists and the FCC for control of cyberspace. Advocates on both sides have debated net neutrality, the notion that all information, data, and content online should be treated the same and be equally accessible to all.

At stake is whether wealthier content providers (think: Netflix) should be able to pay for faster service while smaller, less wealthy start-ups and personal websites are left behind in Internet gridlock. President Obama supports FCC regulation of net neutrality, and polling shows that the majority of Americans across the political spectrum oppose Internet service providers (ISPs) charging some websites more for faster service. Last summer, nearly 4 million people submitted comments to the FCC, most of them urging the agency to pursue stronger net neutrality protections.

FCC Chairman Tom Wheeler’s plan, which the five FCC commissioners will vote on this week, proposes to reclassify the Internet as a public utility, like water or electricity, under Title II of the Communications Act of 1934, treating the Internet as a basic service available to all citizens. This legal maneuver would enable the FCC to prohibit ISPs from censoring or prioritizing content, or from charging additional fees to some websites and video-streaming services.

The topic can be highly technical, and policy wonks, online startups, and public interest legal groups have understandably led much of the net neutrality movement over the past decade. But religious activists and organizers like us have also played a role in these evolving debates. Though largely Christian, the religious coalition has made strange bedfellows of progressive faith-based activists and conservative religious organizations, from the Christian Coalition of America and the United Church of Christ to the United States Conference of Catholic Bishops. All have spoken out for net neutrality, even if they do not necessarily share arguments or coalition actions.

As high-speed Internet has grown from a novelty to an essential tool, communities of faith and conscience have come to rely on it for their media, organizing, and outreach. Religious messaging has framed net neutrality as a free speech issue for religious voices on both ends of the political spectrum; more recently, activists have also made the case that open access to the Internet is a civil rights issue for underserved populations and religious communities. Last April in The Daily Beast, the former head of Obama’s White House faith office, Joshua DuBois, envisioned a future without net neutrality protections: a small-time innovator, a young man from Detroit with tech skills and big ideas, cannot compete with big companies who can pay for faster and better Internet service. Last year, the Jewish online magazine ZEEK published an article advocating for net neutrality and noting that “small online magazines like ZEEK need #NetNeutrality.” Posts like these have cut through technical jargon to show that religious voices have a very real stake in net neutrality debates.

In the final months of 2014, a small group of lawyers, religious activists, and organizers launched the Faithful Internet campaign in order to collect video and written testimonials from religious leaders and community members about why a free and open Internet makes a difference in their spiritual lives. (Full disclosure: We are both Faithful Internet volunteers.) The project is managed by the Stanford Center for Internet and Society and the United Church of Christ’s Office of Communications, Inc. (OC, Inc.)—a key operator in creating a movement around net neutrality. The campaign has included diverse voices ranging from North Carolina’s Moral Mondays leader, the Rev. William Barber; to Helen Osman, the secretary for communications for the U.S. Conference of Catholic Bishops (USCCB); to Valarie Kaur, a prominent Sikh activist and a central organizer for Faithful Internet.

According to OC, Inc. policy advisor Cheryl Leanza, the United Church of Christ (UCC), a liberal denomination, was working on “what was then called ‘open access’ principles for high speed Internet” as early as 2002, when the church’s media ministry filed comments on the subject with the FCC. Their goal was to serve as “a technically sophisticated, highly visible player in the narrow field of FCC regulation” and to be “a small but vocal proponent of prophetic Christian social justice.” In 2005, the UCC, along with the Christian Coalition of America and Gun Owners of America, joined a broad group of organizations from across the religious and political spectrum as part of the Save the Internet coalition, which lobbied for legislative net neutrality protections.

Some conservative Christian organizations, such as the Christian Coalition, worried that a lack of net neutrality could silence minority conservative voices: “What if a cable company with a pro-choice Board of Directors decides that it doesn’t like a pro-life organization using its high speed network to encourage pro-life activities?” they asked on their website.

In 2010, when the FCC first proposed net neutrality rules, debates began to gain traction in the broader faith and politics advocacy community—with an eye toward advancing open Internet regulations for everyone, not just particular religious communities. That year, the National Council of Churches, representing more liberal Christian communities, released a comprehensive resolution in favor of net neutrality. They argued that access to the Internet was important for religious people and that “network neutrality principles will allow the full diversity of voices to flourish.” Likewise, in a 2011 letter to Congress, the USCCB stated that American Catholic bishops supported net neutrality legislation and regulation to “ensure equal access to the Internet for all.” They added, “True net neutrality is necessary for people to flourish in a democratic society.”

Today, some conservative Christian voices are still speaking out in favor of net neutrality, although fewer than in the past. Last year, National Religious Broadcasters denounced religious censorship, and its president said that Internet providers should not be allowed to block content of which they disapprove. But the NRB does not endorse net neutrality or ambitious FCC regulatory measures, preferring a light-touch approach that would limit the government’s involvement in the market.

In September of 2014, a coalition letter representing diverse religious groups, including the USCCB, the National Council of Churches, and the Islamic Society of North America, called for the FCC to promote net neutrality. Their message was threefold: “Strong net neutrality protections are critical to the faith community to function and connect with members, essential to protect and enhance the ability of vulnerable communities to use advanced technology, and necessary for any organization that seeks to organize, advocate for justice or bear witness in the crowded and over-commercialized media environment.”

One of the central concerns expressed in the coalition’s letter was whether religious voices may have to give way to well-funded entertainment and corporate speech—a well-founded fear. ISPs have fought regulations, arguing that they require flexible pricing strategies to continue to invest in costly Internet infrastructure and maintenance. In 2012 alone, the National Cable & Telecommunications Association, a lobbying entity of the country’s major cable and Internet providers, along with Verizon, Comcast, and AT&T, spent more than $19 million to oppose measures to preserve net neutrality.

Understanding the uphill battle to challenge such well-heeled lobbying efforts, we and other organizers at Faithful Internet have turned from national-level coalition letters to grassroots organizing and advocacy. In our collected testimonials, individuals can offer their pleas for a free and open Internet. In one statement, David Gladston, a chaplain and counselor from a South Carolina Baptist church, says that an open Internet allows him to connect and “help my patients and clients face the end of their lives.” In another, Sara Fitzgerald, a clerk at a Virginia UCC congregation, writes that the Internet allows her church to reach out locally so “people find the kind of progressive Christian community we aspire to be.” From New York, Aaron Stauffer, who directs Religions for Peace USA, says an open Internet is crucial to his campaign to fight Islamophobia. He writes, “Sharing the stories of Christians and Muslims and Jews and Baha’i and Hindu and people of all faith, in text and video, would simply fail through any other avenue.”

Net neutrality has been not merely a political debate but also an opportunity for religious organizations and activists to express their political identities with spiritual impetus. Today, the ideals of open access as an organizing tool and a civil rights issue are the driving forces for faith-based activists. On one hand, religious voices want to protect the faithful from real or perceived censorship, and they want to reach their own members online. On the other hand, they want justice for disadvantaged communities who may have limited or mobile-only access to the Internet. In particular, communities of color rely disproportionately on mobile devices to access the Internet, and in acknowledging that, many faith communities are hitting upon touchstones of their theological-political identities: service to the underserved and vulnerable.

With their theological tone and their push for Internet access for all communities, faith-based net-neutrality activists aspire to be descendants of the Civil Rights Movement, serving as religious-political connectors for grassroots engagement. Indeed, civil rights icon Representative John Lewis of Georgia recently took to Facebook to support net neutrality. He noted, “If we had the technology, if we had the Internet during the movement, we could have done more, much more, to bring people together from all around the country, to organize and work together to build the beloved community.”

These religious advocates for net neutrality are striving to protect the entire beloved community— online and off.

Emily Baxter and Aseem Mehta are non-residential fellows at the Stanford Center for Internet and Society.

The Troubling Push to Deregulate Homeschooling
Tue, 17 Feb 2015

(Getty/Boston Globe)

When my mother pulled me out of kindergarten to homeschool, it wasn’t a religious choice. We were an average Christian family, casually attending a non-denominational church. I was a shy child, overwhelmed by the boisterous atmosphere of my school and quickly targeted by bullies. Homeschooling was supposed to be a temporary measure, a chance for me to build up confidence for a return to school. For the first few years, I thrived on a flexible curriculum that built upon my natural love of reading and writing. Homeschooling gave my family ample free time to take field trips and participate in science and craft fairs. A local homeschooling group provided everyday social support as well as extracurricular activities.

But over time, the group became polarized, driving out non-Christian families and focusing more on “character building” than academics. Religious homeschoolers recruited my mother to a strict form of patriarchal Christianity, convincing her that homeschooling was a godly necessity and that the right to homeschool was under immediate threat by the government. As I grew older and academic subjects grew more difficult, it became less acceptable for my mother to seek help outside the homeschool group or consider putting me back into school. Regular evaluation and testing in Pennsylvania kept us on track during elementary and middle school, but my education fell apart when we moved to New Jersey, a state with no homeschooling laws. I spent an entire academic year with my geometry book propped open to the same page, and my only exposure to the theory of evolution was a caricature crafted by my creationist textbooks to make secular science sound absurd.

By the time I graduated high school as a homeschooler, my family and I were deeply involved in a fundamentalist church, where girls were forbidden to attend college. What had begun as an experiment to improve my education had inadvertently derailed my academic life.


WHILE HOMESCHOOL ADVOCATES commonly assert that America’s early leaders were homeschooled, it is impossible to speak of homeschooling as a self-conscious movement before the 1960s. From its inception, the movement included both religious homeschoolers who sought to remove secular influences from their children’s lives, and secular homeschoolers whose motivations were based on beliefs about child development. After the Supreme Court ended state-sponsored prayer in schools in the early 1960s, Calvinist theologian R.J. Rushdoony began to urge parents to consider homeschooling as a means of protecting their children from the secular school environment. Raymond Moore, a Seventh-day Adventist who had worked in higher education, began to promote homeschooling for a combination of religious and developmental reasons. Moore argued that homeschooling cultivated children’s natural curiosity and allowed them to learn at an individual pace, an argument that appealed to parents across religious lines. Education theorist John Holt echoed the developmental case for homeschooling in his magazine Growing Without Schooling, founded in 1977. Holt believed that the concept of school was inherently flawed: it created an artificial environment that isolated children from natural learning experiences. By the early 1980s, a growing body of families subscribed to both religious and secular arguments for homeschooling, some using them interchangeably. Moore’s Home Grown Kids, published in 1981, rapidly became a classic among homeschooling families and remains widely read today.

The religious side of the homeschooling movement grew stronger and more rigid in its opposition to other forms of education over the next decade. When evangelical leader James Dobson invited Moore to speak on his radio show, Focus on the Family, in 1979, he helped widen the audience for religious homeschooling to mainstream evangelical families. The endorsement was well-timed. Families who had withdrawn from public schools but remained unsatisfied with the climate of Christian schools increasingly turned to homeschooling as a more effective means of directing their children’s education. Religious homeschoolers developed an ideology based on Deuteronomy 6:7, which urges parents to teach their children the commandments of God from morning to night, to prove that homeschooling was ideal. Homeschooling not only removed children from perceived negative influences in schools, but it also kept Christian parents (usually mothers) in constant contact with their children. In 1983, homeschooling parent Michael Farris founded the Home School Legal Defense Association (HSLDA) to advocate for the parental right to homeschool without state supervision. By the 1990s, some evangelical and fundamentalist parents opposed even Christian schools, as they believed parents who did not homeschool were shirking a moral duty.

Today, the number of homeschooled children in the United States is likely close to 2 million. In 2011, the National Center for Education Statistics reported 1.77 million homeschooled children in the U.S., up from 850,000 in 1999. But there is no federal standard for how these students should be taught. Homeschooling statutes, which govern program assessments and subject areas, vary widely across the United States. And in recent years, there has been a push for more deregulation. Eleven states do not require notification of a parent’s intent to homeschool, and 25 states require no academic assessment of homeschooled students. States that do oversee homeschooling—such as New York, Massachusetts, and Vermont—use a combination of testing and portfolio evaluation to measure progress. In Pennsylvania, where I completed grades 1-8, homeschooling families had to file notices of intent to homeschool, cover a range of required subjects, take biennial standardized tests, and submit a portfolio to a licensed evaluator and the local school superintendent every year. Most of these requirements remain in place for the 2015-16 academic year, but the state legislature eliminated the superintendent review component last October, as a result of lobbying by HSLDA and a subset of homeschool parents who favor a hands-off approach. There are dangers in forgoing this last round of oversight, as it severs the connection between homeschoolers and their district, leaving the evaluation process subject to corruption. It also prevents the district from identifying educationally neglected children and referring them for assistance.

Homeschooling parents often support reasonable oversight like the subject and assessment laws in Pennsylvania. Former homeschooling mother LaDonna Sasscer writes that accountability laws in her state made her a more effective educator to her children. “I appreciated the professionalism involved in keeping meticulous records,” she writes on a homeschooling advocacy website. “It kept me on my toes.” My own mother saw our evaluator as a partner in my education. Subject requirements and standardized tests served as benchmarks: strong test results validated my mother’s choice to homeschool, proving that we could exceed public school standards. But the longer we stayed in our homeschool group, the more we learned to fear state supervision. We perceived the superintendent as a threatening figure intent on discrediting homeschoolers, despite the fact that no one we knew had ever been investigated for educational neglect. We absorbed these fears from HSLDA bulletins, which warned us periodically that the right to homeschool was under threat. We believed them, despite the growing acceptance we saw in our own community.

The deregulation lobby is troubling, because the absence of oversight gives abusive parents the opportunity to use homeschooling as a cover for deliberate isolation and educational neglect. The case of Hana Williams, an adopted and homeschooled child who died in 2011 of abuse and neglect at the age of 13, illustrates how proper oversight could prevent tragedy. Between 2009 and 2011, Hana did not see a doctor. At the time of her death, she weighed less than 80 pounds. Her body was marked from beatings with plumbing supply line, a punishment drawn from the book To Train Up a Child by Michael and Debi Pearl. That these warning signs went undetected for two years underscores how complete the isolation of homeschooled children can be if their parents have less than honorable intentions. Proper oversight, including annual evaluations and regular medical exams, would have brought Hana into contact with doctors or educators required by law to report suspected abuse. It is likely that regular contact with mandatory reporters would have lessened Hana’s abuse and saved her life.

Homeschooling can be a powerful educational tool, but in the absence of oversight, it can also leave children vulnerable to educational neglect, abuse, and anti-education ideologies. Recently, some homeschooling alumni have been advocating for keeping needed regulations in place. The nonprofit Coalition for Responsible Home Education (CRHE), which was founded in 2013 by homeschool alumni Rachel Coleman and Heather Doney, brings together a small team of alumni, researchers, and educators to promote reasonable oversight for homeschooling at the state level. (Full disclosure: I am involved in the organization as a volunteer.) Homeschooling’s Invisible Children (HIC), a volunteer-led organization operating under CRHE, tells the stories of children whose abusive parents used homeschooling as a cover to avoid detection. The vision of CRHE, broadly shared by its affiliates, is “for homeschooling to be a child-centered educational option, used only to lovingly prepare young people for an open future.” Homeschool alumni who support CRHE differ in their religious and political convictions, but agree that oversight is necessary to ensure that homeschooled children’s best interests are respected.

Homeschool graduates now in their 20s and 30s constitute a first-generation cohort for the homeschooling movement. Among alumni there appears to be a generally positive or moderate attitude toward homeschooling, though more research is needed to create a complete picture of homeschooling experiences across the U.S. An informal survey conducted in 2014 by the homeschool alumni group Homeschoolers Anonymous found that a majority of alumni agreed that homeschooling had prepared them for the future, and 47 percent preferred homeschooling over other educational choices for their own children. But the survey also raised red flags: 30 percent of respondents reported that they had experienced emotional abuse, and 16.2 percent reported physical abuse. Seventeen percent reported educational neglect. Survey respondents also reported being less likely to have access to, and more likely to skip, courses in higher-level mathematics and the natural sciences compared to other subjects. Academic research supports the existence of a “math gap” in home education: in a 2013 study, Robert Kunzman and Milton Gaither observed that homeschooling “tends to improve students’ verbal and weaken their math capacities.” As homeschooling alumni age, they bring insights into the effects of the homeschooling movement on children’s academic and personal wellbeing, and are able to identify areas where children might fall through the cracks. Those of us who have gone on to advocate for better oversight are interested in plugging such gaps, ensuring that the next generations of homeschooled children have access to strong academic preparation and adequate protection from abuse.

Opponents of oversight believe parents’ rights supersede any relationship a child has with society. They argue that no one cares about children as much as their parents, and therefore parents are the only ones qualified to evaluate their own educational choices. HSLDA’s director Michael Farris also heads a 501(c)(4) lobbying group that would like to add a Parental Rights Amendment to the U.S. Constitution. Their arguments favor total parental sovereignty, operating on the assumption that parents always act in their children’s best interests. The cases of Hana Williams and, more recently, transgender teen Leelah Alcorn demonstrate that this is not always true. Alcorn’s parents, who allegedly used homeschooling to isolate her from her friends and prevent her from expressing her gender identity, believed they acted out of love. When Leelah committed suicide on December 28, 2014, she left a note citing her parents’ controlling behavior as the reason she could not go on. Such stories are the exception among homeschooling families, but all children nonetheless deserve safety, support, and a solid educational foundation. The parental rights lobby is ideological rather than pragmatic: its Old Testament basis and vision for a conservative Christian America led by homeschool graduates would dismantle protections for the vulnerable few in order to relieve all homeschooling parents of accountability.


MY EXPERIENCE MIRRORED the shift among religious homeschoolers from a vision of homeschooling as a simple, positive educational choice toward an ideology in which homeschooling is mandatory, regardless of academic progress. Initially, homeschooling was an unusual but pragmatic choice that my mother believed would encourage my creativity. But by fifth grade, my family had entered a homeschooling world that was no longer about customized curriculum or the pursuit of natural curiosity. We were now part of a countercultural movement. I was fed a steady stream of stereotypes about public school kids: they were foul-mouthed, disobedient, and slovenly; they abused drugs, joined gangs, and had sex too young. Public schools, I was taught, were indoctrination camps where the government bred docile consumers. I was afraid to set foot on a public school campus, much less make friends with any public schooled kids in my neighborhood. Homeschooling was no longer an educational choice; it was an act of cultural warfare.

Lax homeschooling laws provide cover for fundamentalist parents who neglect girls’ education in service of patriarchal ideals. Religious homeschooling has been fertile ground for a range of ideologies aimed at keeping young women out of the workforce. These include the Quiverfull movement, which rejects the use of birth control, and the stay-at-home daughter movement. In 2005, a group of young women from my church, all homeschool graduates, banded together to request permission to attend Bob Jones University. Their fathers denied the request; adult women had no business moving away from home before marriage. While young men regularly earned degrees, their sisters were told that education was a distraction from their calling as homemakers. Parents in my church sometimes spoke of educating their daughters through high school as “rendering unto Caesar,” a duty performed so as not to break the law. These parents used their freedom to homeschool in order to limit their daughters’ education, locking them into permanent economic dependence on fathers and husbands.

How did I escape? I was lucky. The religious messages of my church were counterbalanced by my working-class parents’ determination that I would have a better education than they did. In my final year of high school, my mother sent me to community college to earn credit toward my homeschool diploma. Community college was my earthly salvation. A literature professor opened the doors of the world to me, encouraging me to see myself as intelligent and capable for the first time in years. This intervention helped me persevere through remedial classes, catch up, and ultimately acquire a master’s degree. I had to leave my church—indeed, my whole world—behind to do this. Had I not fallen off the grid in high school, I might have entered college prepared to choose any major. As it was, my inadequate preparation in math and science pushed me into the humanities. I could not have become an engineer or a scientist without undertaking years of remedial training in subjects we’d simply skipped, like chemistry and calculus. I succeeded in college, but my options were curtailed from the start. Young adults need adequate preparation in basic subjects in order to have a full range of choices about the careers and lives they want to pursue. I was lucky, but American children deserve better than luck; they deserve a level playing field. Homeschooled children cannot have that without reasonable oversight.

Caitlin G. Townsend is a writer in Ann Arbor. She holds a master’s degree from the University of Cambridge.

Jewish Life in the City: An Interview with Deborah Dash Moore
Tue, 10 Feb 2015

(Jacob A. Riis/Museum of the City of New York/Getty Images)

Deborah Dash Moore has played a pioneering role in the study of Jews and in establishing the field of American Jewish history. Only with difficulty could one find a scholar of American Jewish history who has not been touched, often personally, by Moore in her roles as writer, teacher, conference organizer, mentor, and editor. She is recognized as an architect of the field, particularly at its intersections with urban history and gender history. Last fall, her Urban Origins of American Judaism was published, and I corresponded with her over email about her work and this latest book.

Moore is the Director of the Frankel Center for Judaic Studies and the Frederick G.L. Huetwell Professor of History at the University of Michigan. Previously, she taught at Vassar College, where she helped to found its program in Jewish Studies and served as head of Religious Studies. She has authored or edited ten books, including At Home In America: Second Generation New York Jews; To the Golden Cities: Pursuing the American Dream in Miami and Los Angeles; and, with Paula Hyman, Jewish Women in America: An Historical Encyclopedia, for which they received a National Jewish Book Award. Moore’s 2004 book, GI Jews: How World War II Changed a Generation, was a Washington Post Best Book of the Year. This interview has been edited slightly for length and clarity.

R&P: “Jews and cities” has been a longstanding and rich scholarly preoccupation for you. How did your upbringing in New York shape this interest?

DDM: New York was the quintessential city for me and I learned its rhythms from a relatively early age. In 2013 I published an essay, “Sidewalk Histories,” that spoke to some of the specific things I gained from growing up in the city, from walking its sidewalks to riding its public transit to viewing its streets from eye level. I largely took for granted the city’s socioeconomic, religious, ethnic, and racial diversity, just as I accepted as normal living on the 11th floor of a 20-story apartment building and needing to walk blocks to find a park with trees. The city intrigued me. I loved to explore it. I was fortunate that as I was choosing a dissertation topic, urban history was a burgeoning field, and that allowed me to bring my local, insider’s knowledge into dialogue with what I uncovered as a historian.

R&P: Photography has been another important aspect of your scholarship and this book. As you point out in Urban Origins of American Judaism, it’s hard to think about the history of the Jews of New York without thinking of all of the historic photographic images associated with that topic, thanks to Jacob Riis, Lewis Hine, Arnold Eagle, Cornell Capa, and others. How has photography influenced your teaching and thinking about American Jewish history?

DDM: In 2001 Howard Rock and I published Cityscapes: A History of New York in Images. We had started the project in the 1990s as an effort to recast a classic visual account of New York City by John Kouwenhoven, The Columbia Historical Portrait of New York, that we thought needed to be updated. We divided the book and I took the years from 1870 to 2000. Initially I looked at various visual representations of the city, but increasingly the photographic record attracted my attention and I decided just to use photographs. I began with what was familiar (the Byron Brothers, Riis, Hine) and then moved on to less familiar photographers whose work I discovered at the Museum of the City of New York and the New York Public Library. When I started to request permission to publish the photographs (I ended up looking at around 10,000 images in order to narrow it down to 4,000, and then finally to 1,000), many of the photographers asked me where I had seen their work. Then they invited me to come to their studios to see a much larger collection. So I met these amazing photographers and immersed myself in their vision of New York.

Around that time David Lobenstine, a Vassar student who was working as my research assistant, drafted a paper on the photographic legacy of the Lower East Side, which we ended up publishing as “Photographing the Lower East Side.” This served as my introduction to thinking visually about urban history. Having met so many photographers who also turned out to be Jewish then prompted me to consider why so many Jews picked up cameras to record images of the city. I was not alone in these observations. Max Kozloff curated an exhibit at The Jewish Museum on “New York: Capital of Photography.” In the catalog he raised the question of a Jewish sensibility in New York photography.

After these initial forays into thinking about Jewish photographers and their pictures, I began to integrate photobooks (such as Richard Nagler’s My Love Affair with Miami Beach) into my American Jewish history courses. These books usually wed text and image to reflect upon urban culture. Photographs bring an immediacy to history; they also can be read in many different ways. The more I have taught history with photographs, the more attention I have paid to how people, in different time periods, have interpreted a photograph. The photographs in Urban Origins of American Judaism include classic images but also less well-known pictures. And they invite readers to imagine a dialogue, to initiate a conversation. They are far more mutable than other historical documents. And, I should note, students are very comfortable offering their own interpretations. They are visually literate and good at looking at photographs and drawing insights from them.

R&P: As you’ve moved from New York to Ann Arbor, how has your interest in cities evolved? 

DDM: This semester I have just finished co-teaching with Marian Krzyzowski a course on “Detroit: Race, Religion and Ethnicity in the 20th Century.” I find Detroit so very different from New York in its industry, politics, ethnic groups, religious organizations, and economy. Even its racism seems different, despite common elements. So I’m constantly making comparisons in my head and discussing them with Marian. I’ve also become far more interested in suburbs since so many of my students come from suburbs.

I should add that I have wonderful colleagues in the History department who run a Metropolitan History seminar and several of my graduate students participate in this. So moving to the Midwest has broadened my interests in cities.

R&P: I thought of your book recently when I heard an interview with the late director Mike Nichols in which he described his first memory of 1930s New York. Seeing the Yiddish signs on city streets, he asked his father, “Is that allowed?” “It is here,” his father told him. In Urban Origins of American Judaism, you discuss that public face of Judaism in American cities. Did experiences like Nichols’s shape your interest in urban Judaism?

DDM: Most definitely, yes. I recall discussing with Paula Hyman, z”l [of blessed memory], when we were both graduate students, how liberating it was for her to be living in New York City compared with Boston, especially around Christmastime. I had just assumed a Jewish presence visible on the city streets, in stores especially, and her comments made me realize how distinctive, and how empowering, it was. All of the department stores regularly featured specials for Jewish holidays like Passover and, of course, Hanukkah appeared alongside Christmas. This commercial visibility, combined with attention to lived religion, contributed significantly to my interest in studying the dimensions of urban Judaism.

R&P: Religious leaders have sometimes feared and denounced New York’s influence on religion (I’m thinking of Billy Graham’s 1957 Gotham crusade), but American Jewish leaders, as your book portrays them, have historically welcomed the positive effects that bustling urban life could have on Judaism. How do you explain this?

DDM: Jon Butler has been working on a book on religion in Gotham, arguing that cities—commonly thought of as sites of sin and moral decay—actually are places that nourish religious invention. I think that American Jewish leaders appreciated the opportunities cities presented and were willing to take the risks that accompanied those opportunities. Even some insular Hasidic groups have held on to a stake in the city. Urban spaces offer niches in neighborhoods and even on blocks that allow religious groups like Jews to fashion their own distinctive milieus. Multiplicity outweighs uniformity. Jews recognize this.

R&P: In your book GI Jews: How World War II Changed a Generation, you described how Jews like your father, who had grown up in pre-WWII New York, experienced Jewishness as “a way of being and thinking.” (Jewishness was the food you ate, your politics, the company you chose, etc.) But then World War II transformed this urban paradigm of Jewishness.

DDM: Yes, I think that there was something of a paradigm shift in urban Jewish identity. As Jews became identified with a Judaism that was considered one of the three fighting faiths of democracy, they began to adumbrate religious forms of Jewish life that followed the other two American faiths: Protestantism and Catholicism. Rather than understanding Jewishness as a way of being and perceiving the world, they came to think of it as set aside for specific occasions, such as life cycle events or days on the calendar. Jews who moved to the suburbs especially privatized many aspects of Jewishness. Lila Corwin Berman argues in a forthcoming book on Detroit Jews that they transformed their urban perspectives into a political ideology of “remote urbanism” that allowed them to remain connected to the city even while living in the suburbs. Still, this was a far cry from the sensibility many urban Jews held, one that imagined many of their neighbors as “Jew-ish,” even though they knew they weren’t Jews. (One thinks of Colin Powell, for example, who spoke a decent Yiddish, part of his experience growing up in the Bronx when it was a very Jewish borough.)

R&P: In addition to the misery of crowded urban life, Urban Origins of American Judaism discusses the fun that could be had through urban living. I wonder if you think there’s a connection between some 20th century rabbis’ desire to create an American Judaism in which Jews could live in two civilizations (American and Jewish) and the allure of their city’s cultural offerings. Your book points out Rabbi Mordecai Kaplan’s appreciation of art and aesthetics, and it shows that Rabbi Louis Finkelstein was shaped by his awareness that other Jews cared about the leisure-time pursuits the city had to offer.

DDM: Like baseball. There’s a great account of Kaplan and Finkelstein walking around the reservoir in Central Park and talking about a Sabbath sermon—and the importance of baseball as part of it. (The World Series occasionally coincided with the High Holidays and kids regularly snuck out of services to follow the score on the radio.) But you’re referencing not just sports but elite forms of culture such as opera and ballet and symphony concerts and musical theater. When Rubin Ticker, the cantor at the Brooklyn Jewish Center, decided to leave the cantorate for a career in opera under the name Richard Tucker, his congregants cheered (and bought tickets to hear him sing at the Metropolitan Opera House). The city enticed Jews with its many forms of culture, and rabbis especially recognized how Judaism needed to compete creatively with the latest cultural turns. They saw it as a stimulus because they, too, enjoyed the city’s leisure pleasures.

R&P: Food is naturally a part of this story of urban Judaism. Your book discusses “appetizing stores.” It’s such a curious name! In America, food can seem like a party that all of America is having, all of the time (hot dogs sold at baseball games, cheeseburgers advertised on television). But it’s a party to which Jews don’t always feel invited. One way to respond was by overcompensating. Jewish food wouldn’t just be labeled kosher or Jewish food; it was oh-so-appetizing. What is the cultural work of Jewish food in cities?

DDM: “Appetizing store” is an odd name for dairy stores, but I don’t think it reflected any overcompensation. Mostly it developed from the requisites of kashrut to keep meat and dairy products separate, combined with a capitalist-fueled diversity of retail stores. One needs to remember that by 1930 there were more Jews living in New York City than in most Western European nations, and this concentration in a relatively small space spurred all kinds of commercial establishments that specialized in their products. Cities also were sites of a lot of food production: bread baking, meatpacking, fish canning, even wine and beer distilling. The demands of kashrut spurred Jews to enter many parts of food production as well as marketing.

R&P: Since the immediate post-WWII era, there’s been a flip in how suburban and urban Judaism are perceived. Urban Judaism has become synonymous with young, innovative Jewish life, and—no disrespect to the suburban Judaism of my youth—the bloom is somewhat off the rose of suburban Judaism, compared with its postwar days. And yet, it is often the children of suburbia who are now leaders in innovative urban Jewish life. How do we understand this dynamic?

DDM: Really good question and one that I suspect you will be answering in the future through your scholarship. I think you’re right that there’s a pendulum and that the excitement of the suburbs, the lure of a new private home and yard, and the appeal of the nuclear family apart from nosy relatives, is dimming in favor of recycled old apartments, the fast pace of city streets, and, of course, an enduring desire to get away from parents. There’s definitely a suburban-urban counterpoint here. Perhaps there is also inspiration.

R&P: You’ve played a leading role in the establishment of American Jewish history as a recognized field in American religious history, Jewish Studies, and in American history. What have you observed about how the field of American Jewish history has changed over the past three decades?

DDM: American Jewish history has really blossomed since the mid-1980s into a recognized field in dialogue with American religious history, Jewish Studies and American history. Gone are the days when a budding scholar would be told that American Jewish history was mere journalism and not fit for serious scholarship. The field has burgeoned and is now often on the cutting edge of scholarship, serving to stimulate research questions that advance other fields as well. Having run a workshop two years ago with Beth Wenger for graduate students writing dissertations that dealt with politics and American Jewish history, I was impressed with the exciting new work being done. However, there are still occasional barriers to be overcome, both internal and external. The former refers to a kind of parochialism that measures the Jewishness of American Jewish history by its content rather than looking at how thinking with Jews about particular issues can illuminate large questions that involve others as well. The latter references other scholars who can’t see historical figures as Jews in their scholarship because they really don’t quite know what to do with them.

My Husband’s Not Gay: Homosexuality and the LDS Church Wed, 04 Feb 2015 16:08:02 +0000  

(AP Photo/Rick Bowmer)


On Sunday, January 11, TLC debuted My Husband’s Not Gay, a show about a small but increasingly uncloseted community living out its own complex form of sexuality. My Husband’s Not Gay profiles Mormon men living in Utah who openly acknowledge that they live with same-sex attraction (SSA), but who are married to women. The show follows three married couples in mixed-orientation relationships and one single man as they negotiate their sexual desires and religious convictions. The men are open about their attractions to other men, but they pursue relationships, including sexual relationships, with women, who support them and know fully of their attractions.

These men do not identify as homosexual. The particular terminology they and their wives use—“SSA, not gay” as one wife, Tanya, put it—comes directly from their church, the Church of Jesus Christ of Latter-day Saints (LDS). In recent years, the LDS Church has struggled to be more sensitive and open around the issue of homosexuality, both outside the church and within the community, which is still dealing with the negative attention it received for its support of California’s Proposition 8, which prohibited same-sex marriages. Just last week, Mormon leaders announced they would support anti-discrimination protections for LGBT people—as long as such laws also protect religious liberty.

Inside the faith, the LDS Church has attempted to carve out a middle ground for its members who are attracted to the same sex. A statement on the church’s website, which launched in December 2012, outlines the official LDS policy on homosexuality:

The experience of same-sex attraction is a complex reality for many people. The attraction itself is not a sin, but acting on it is. Even though individuals do not choose to have such attractions, they do choose how to respond to them. With love and understanding, the Church reaches out to all God’s children, including our gay and lesbian brothers and sisters.

TLC’s My Husband’s Not Gay captures “not gay” Mormon men as they attempt to live out their church’s theology despite their attraction to men. These men are also members of an independent organization called North Star, whose mission is to help LGBT Mormons live within the boundaries of the faith. As opposed to the increasingly besieged “ex-gay” approach, the “not gay” perspective is something of an evolution in religious sexual identity. It does not hold that opposite-sex marriage is a “cure” for same-sex attraction, as the church once did. Instead, it offers heterosexual marriage as an option that may be possible for some men and women, without the expectation or requirement that one change one’s desires.

Much of the media coverage of My Husband’s Not Gay has labeled the show as dangerous for LGBT people. The show’s critics claim that it is a tacit endorsement of “reparative” therapy, and they deride its representation of mixed-orientation marriages as viable alternatives to either living the “homosexual lifestyle,” as it is often described in Mormonism, or total celibacy. It is important to mention that many of the strongest opponents of such mixed-orientation families are former members of these couples—the “not gay” husbands and their wives—who have tried and failed to live in heterosexual relationships, often with traumatic outcomes for themselves and, perhaps most importantly, for their children.

The heated debate about how the Mormon men and women featured on the show reconcile their desires with their chosen relationships pathologizes them as deluded and repressed, victims of an intolerant religious culture. They have chosen church over sex and sexual identity. And that’s the wrong choice, according to some who celebrate sexual self-realization over religious affiliation.

Mormons have long been accustomed to the role of the sexual deviant. In the nineteenth century, the Mormon polygamist was at the center of the national debate about the limits of religious freedom when it came to “barbaric” sexual practices. The LDS Church spent much of the twentieth century retreating from its polygamist past by cultivating the image of a religion that promoted the quintessential American family, staking out moderate-to-conservative positions on gender roles, divorce, women working outside the home, and same-sex relationships. My Husband’s Not Gay demonstrates that in the American imagination, some Mormons have replaced the ghosts of their polygamist past with a new sexual taboo—the mixed-orientation marriage.

The show’s couples reveal some of the tensions around homosexuality within Mormonism. For instance, they hesitate to use the term “gay.” That ambivalence highlights a language trend in the LDS Church, which only recently began to deploy the terms “gay” and “lesbian” in its literature. For many years, the church not only insisted on the unnaturalness of homosexuality, but it also used circumlocutions to avoid language that suggested homosexual identity was in any way fixed and immutable. The church promoted same-sex, or same-gender, attraction as a psychological condition, one that perhaps had a cure, rather than as a sexual identity.

In recent years, more married and unmarried gay Mormon men and women have come out, following broader American shifts in accepting same-sex desires, and that has sparked some change. In 2007, Brigham Young University amended its Honor Code to say that “sexual orientation is not an Honor Code issue.” For the first time, students could openly call themselves gay without fearing expulsion from the church’s flagship university. The launch of the church’s website was meant both for those who wanted to identify as gay and Mormon and for straight members seeking to demonstrate greater compassion for gay Mormons.

At the same time the church shifted its rhetoric to call for more tolerance, it also reaffirmed that heterosexual marriage remains the only legitimate space for sexual relationships—for both gay and straight Latter-day Saints. Contemporary Mormon theologies emphasize the sacredness of heterosexual marriages and teach that husbands and wives should have children and raise them responsibly.

Many people do not understand why someone would choose religion over sexual satisfaction, but for many gay Mormons the choice is an existential one. In the Mormon cosmos, as presently understood, there is simply no room for same-sex relationships. For Mormons, the afterlife consists of heterosexual pairs of divinized men and women. Often church leaders have counseled Mormons who experience same-sex attraction that their unwelcome feelings will disappear in the afterlife. The rejection of homosexual relationships is not just a matter of biblical literalism or conservative politics, but a view that the very structure of heaven can only accommodate opposite-sex marriages. For mixed-orientation couples, this understanding may make for a compelling trade-off: in exchange for diminished sexual satisfaction in this life, conformity with heterosexual norms of marriage promises eternal happiness in the life to come—and an eternity lasts longer than one mortal lifetime.

The marriages of “not gay” Mormons are less about individualist notions of personal sexual satisfaction and more about commitment, love, and a duty to raise children. In this sense, what is striking is how people in these marriages see them as similar to any other marriage that would exhibit imperfections. These couples attest that they have sexually fulfilling relationships. Some boast that their relationships are more satisfying than many straight couples they know. More importantly, they see marital love as both greater than and non-reducible to sexual attraction. As such, they cultivate an idea of marriage as both a personal and social good, as well as a locus of struggle and personal development.

Perhaps unwittingly, the Mormons who participate in these mixed-orientation relationships increasingly appeal to ideas of sexuality that are similar to postmodern theories of sexual fluidity, as well as classical liberal notions of sexual agency. While critics of My Husband’s Not Gay may see these couples as deluded, some of those critics are also operating on a strict homosexual/heterosexual binary. Mixed-orientation couples acknowledge that while they may not choose their orientation or desires, they can choose with whom to have a relationship. As such, they emphasize their agency, choice, and sexual honesty in response to accusations that they are constrained by their religion.

As the show’s title hints, the underlying question is what it means to be “gay” in 2015. This question strikes deeply at the identity politics of gay and straight categories. Many liberal thinkers have been caught off guard by the ways in which these politically and religiously conservative Mormons in Utah—these “not gay” men and their wives—increasingly appropriate the language of queer and postmodern gender theory to justify their conventional heterosexual marriages. Refusing the label “gay,” for many, is not about denying their attractions or desires, but about refusing the various presuppositions attached to that term, just as bisexual, trans, and queer folk frustrate the categories of a stable homosexual identity.

What then are we to make of My Husband’s Not Gay? Perhaps the challenge of gay and queer politics is to affirm self-determination and to acknowledge the complex ways people negotiate their religious and sexual lives, while also creating space for dialogue and criticism of the symbolic work these relationships perform. These relationships implicitly and explicitly delegitimize the relationship choices of gay men and women as inferior to opposite-sex relationships. The queer politics of these relationships must navigate some sensitive terrain.

Taylor G. Petrey is Lucinda Hinsdale Stone Assistant Professor of Religion and Director of the Women, Gender, and Sexuality Program at Kalamazoo College in Kalamazoo, Michigan. He is the author of “Toward a Post-Heterosexual Mormon Theology.”

Holt v. Hobbs: Does a Muslim Prisoner’s Case Foreshadow the End of Affirmative Action? Wed, 28 Jan 2015 17:21:39 +0000

(AP Photo/J. Scott Applewhite) Attorney Douglas Laycock, center, speaks with reporters after his argument before the Supreme Court in Holt v. Hobbs. At left is Hannah Smith, a senior counsel with the Becket Fund for Religious Liberty.

Last Tuesday, the Supreme Court ruled unanimously in favor of an Arkansas inmate whom the state prison system had barred from growing a half-inch beard for religious reasons. The case, however, is not merely about inmates and prisons. It confirms that we are in an era of robust judicial protection for religious freedom, and it informs the Supreme Court’s jurisprudence in other contentious areas of individual rights.

The case, Holt v. Hobbs, was set in motion by Gregory Holt, who also goes by Abdul Maalik Muhammad. Holt, an inmate housed by the Arkansas Department of Correction, sought to grow a beard in accordance with his Muslim faith. The Department prohibits inmates from growing beards, although inmates with a dermatological condition may grow a beard no longer than a fourth of an inch. Holt proposed a compromise: he would grow a half-inch beard. The Department did not budge. Accordingly, Holt proceeded to federal court.

Inmates shed many of the rights they otherwise would enjoy in civilian life. Holt’s religious rights ordinarily would be among those that he would cede to prison authorities. But Holt filed his lawsuit under a federal statute, the Religious Land Use and Institutionalized Persons Act (RLUIPA), which Congress enacted in 2000 to accord special protection to inmates’ religious freedom.

Traditionally, the Free Exercise Clause of the First Amendment was read to protect religious freedom only to the extent that the challenged law itself carved out a religious exception. In 1963, however, the Supreme Court interpreted the Free Exercise Clause to require a religious exemption to any generally applicable law that imposed a substantial burden on the individual’s religious exercise, unless the government could prove that the law was necessary to further a compelling governmental purpose. A “substantial burden” generally occurs when the law either compels an individual to do that which violates the individual’s religious beliefs, or prohibits an individual from doing that which is mandated by the individual’s religious beliefs.

In a 1990 case, the Court effectively reverted to the pre-1963 understanding of the Free Exercise Clause. In response, Congress passed the Religious Freedom Restoration Act of 1993 (RFRA), which expanded religious protections to the levels established by the Court in 1963. Under RFRA, a law that substantially burdens an individual’s sincere religious beliefs must give way, unless the government can demonstrate that its action furthers a compelling purpose in a way that is the least restrictive of religious freedom. With the ball back in its court, the Supreme Court determined that Congress did not have the constitutional power to enact RFRA, thereby striking it down insofar as it applied to the states and leaving it binding only on the federal government.

The back-and-forth continued until Congress passed RLUIPA. RLUIPA contains the same standards as RFRA, but it applies only to land use and prison contexts and rests on a different source of constitutional authority. Under RLUIPA, Holt had to prove that the Department of Correction substantially burdened his sincerely held religious beliefs. Holt asserted, and the Department did not dispute, that Holt’s interest in growing a beard was based on a sincerely held religious belief. Further, it was undisputed that the grooming policy substantially burdened Holt’s religious beliefs, as the policy placed him in a bind: grow his beard and face discipline for violating the Department’s policy, or shave completely and violate his sincere religious beliefs.

With Holt having met this threshold, RLUIPA required the Department to establish that its policy furthers a compelling purpose in a way that is least restrictive of religious freedom. The Department defended its grooming policy on two principal grounds: first, that an inmate would be able to conceal contraband in a half-inch beard; and second, that an inmate would be able to frustrate or evade quick detection in the event of a prison emergency or prisoner escape.

The Supreme Court agreed that both of these purposes were compelling. But the Court ruled that the policy was not the least restrictive means of advancing these purposes. First, the Court doubted that contraband could be concealed in a half-inch beard. It was “almost preposterous,” a U.S. Magistrate Judge said, that contraband could be hidden in Holt’s beard. Rather than impose a ban on such beards, the Court noted that the Department could search prisoners’ beards or require prisoners to run a comb through their beards. Contraband, such as a “revolver,” Justice Samuel Alito quipped, would fall out from such combing. Second, the Court noted that the Department could facilitate the quick and reliable identification of prisoners by having two photographs of each prisoner on hand: one clean-shaven, and one bearded. These twin photographs could then be referenced in the event of an incident.

Further, the Court stated that the Department’s security and identification arguments were tough to square with the fact that the Department permits prisoners to grow a fourth-inch beard for medical reasons and permits prisoners to grow hair on their head beyond the half-inch limit. The Department’s arguments also were undermined by the fact that a vast majority of state prison systems, and the federal Bureau of Prisons, allow inmates to grow beards, either for any reason or for religious reasons, despite having the same or similar concerns about safety and identification. The Department fell short in its effort to explain that it has unique circumstances necessitating special rules. Indeed, the Department did not give any examples of situations in which beards hindered the Department’s safety interests. The closest the Department came was its mention of incidents in which a prisoner killed a guard with a “shank” and in which Holt placed a knife against the neck of another inmate. But these two situations say nothing about the relationship between security and identification, on one hand, and beards on the other. All told, the Court had little trouble ruling that the Department’s refusal to allow Holt to grow a religious beard constituted a violation of RLUIPA.

The decision has wide-ranging implications. The current Supreme Court has made clear that it intends to give full effect to Congress’s intent to afford broad protections to incarcerated individuals’ religious freedom. The extent to which RLUIPA meaningfully shielded prisoners’ religious freedom was unclear. Indeed, the two lower federal courts sided with the Department, and federal courts had ruled for prison systems in similar RLUIPA cases. These courts did so primarily because courts routinely have deferred to the expertise of prison officials. In Holt, the Supreme Court clarified that deference must be predicated upon specific information related to the desired religious practice, not speculative statements or generalized concerns about prison safety and security. In the absence of those details, deference is not owed and any judicial deference still given would be tantamount to judicial abdication.

In terms of balancing government interests and religious freedom, Holt further suggests that the Court’s pendulum has swung towards the protective end of the religious freedom spectrum. Eric Rassbach of the Becket Fund, the public interest law firm that was part of Holt’s legal team, notes that the case “heralds a new period of rigorous enforcement of federal civil rights statutes concerning religious practices.” This recognition of religious freedom extends to both majoritarian and non-majoritarian faiths. The Court repeatedly has vindicated the rights of non-Christians. But context matters. The Supreme Court’s polarizing opinion in Burwell v. Hobby Lobby—a RFRA ruling for Christian owners of closely held corporations—fueled the impression that the Court gave special solicitude to religious rights claims brought by Christians. In her Hobby Lobby dissent, Justice Ruth Bader Ginsburg asked whether the Court was truly inclined to recognize the religious rights of religious minorities. Holt therefore represented, as The New York Times’ Linda Greenhouse wrote, an opportunity for the Court to “allay suspicions that they are only interested in the free-exercise rights of Christians.” The Court seized this opportunity, confirming that it confers religious protection upon Christians and non-Christians alike.

A plausible claim can also be made that Holt foreshadows the end of affirmative action in the United States. The connection between religious rights and affirmative action may not be obvious, but race-based affirmative action is subject to a demanding standard—whether the admissions policies are “narrowly tailored” to further a “compelling” governmental objective—that is similar to the standard in RLUIPA. Accordingly, the Court’s response to the grooming policies at issue in Holt may inform its potential reaction to affirmative action.

As in Holt, the Supreme Court has determined that the reason why colleges and universities adopt affirmative action—to achieve the educational benefits of a diverse student body—is compelling. Accordingly, as in Holt, the permissibility of affirmative action boils down to the courts’ assessment of how colleges and universities attempt to achieve that objective. Holt sends a strong signal that the Court will closely scrutinize the government’s selected approach and the government’s claims as to the insufficiency of alternatives that don’t implicate protected rights. An ongoing issue in the affirmative action context, however, is that courts have not been given meaningful information by which to evaluate whether race-neutral alternatives may yield a sufficiently diverse student body, in which case the schools’ current use of race would be gratuitous. If the Court reviews the means used by colleges and universities with the same vigor it did in Holt, affirmative action policies could be in danger.

Holt is important in its own right because it eliminates outlier grooming policies to the benefit of prisoners nationwide. Beyond this, Holt helps to restore the expansive bounds of religious freedom in this country—and it hints at future Court shifts on religion and race.

Dawinder Sidhu is a law professor at the University of New Mexico, where he teaches and writes in the areas of constitutional law and criminal law, and is a former Supreme Court Fellow. Follow him on Twitter: @profsidhu

The Politics of Poverty and Race Tue, 20 Jan 2015 16:35:29 +0000


This month marks 51 years since Lyndon Johnson’s “War on Poverty” speech. On January 8, 1964, during his State of the Union address, he urged the joint session of Congress to join him in a battle that the “richest Nation on earth can afford to win.” Over the past year, current elected officials have been reflecting on the legacy of Johnson’s war on poverty as a way of assessing contemporary anti-poverty policies and programs. Some Republicans have taken the opportunity to expound on the failures of the current liberal-progressive agenda in light of a poor economy and the nearly 50 million people still living below the poverty line. With Representative Paul Ryan as their spokesperson, they have collectively resurrected an assessment Ronald Reagan made in 1987 that the government “waged a war on poverty, and poverty won.” Ryan has followed the standard conservative party logic: big government spending on “counterproductive” federal programs induces the poverty trap—a mechanism that impedes upward mobility from poverty to the middle class. For Ryan, who is the current chairman of the House Budget Committee and was the GOP’s 2012 vice presidential candidate, current federal policies discourage the free-market values of work and independence that are essential to the American way of happiness and success.

Last July, Ryan released his latest proposal, “Expanding Opportunity in America,” a program designed to reduce poverty and increase social mobility. In a speech at the American Enterprise Institute that unveiled the proposal, Ryan said, “We need to cut down the bureaucratic red tape. A lot of families are trying to get ahead, but Washington is just simply getting in the way.” He also embarked on a poverty tour around the country, visiting local leaders and learning how they combat poverty in their communities. Last August, he released a book titled The Way Forward: Renewing the American Idea, which advocates for growing the economy and civil society—not government programs. Ryan and other Republicans have once again strategically transported the discussion of poverty to the domain of individual cultural behaviors. Inspired by local religious and charity organizations’ successes at eliminating gangs from school grounds and assisting men with drug addictions, the party has adopted the cultural deficit hypothesis—blaming poverty on broken families and dependency rather than on social inequities. Against the American “nuclear family,” the “broken family” is a coded term for the single, female-headed household crowded with illegitimate children, which Ryan identifies as the primary cause of intergenerational poverty.

Ryan has pushed for drastic cuts to safety net programs such as food stamps and housing vouchers. He stresses that welfare programs must be centered on a work-first mentality. Underneath this emphasis is the ideology of dependence and a presumption that poor people lack the motivation to work. In March of 2014, Ryan told conservative radio host Bill Bennett that there is a “tailspin of culture, in our inner cities in particular, of men not working and just generations of men not thinking about working or learning the value and the culture of work.” It is not surprising that on the air Ryan mentioned neoconservative social scientist Charles Murray, the controversial co-author of The Bell Curve, to justify his claim about the tailspin of culture. These arguments date back to the Progressive Era and to debates around “Negro loaferism,” which depicted blacks as unwilling or unable to work; that trope intellectually grounded the notion of the “undeserving poor” and vagrancy laws. Ryan later apologized in a statement sent to reporters and concluded that his comments were “inarticulate.”

Ryan’s rumination on the broken family and dependency is a footnote to the larger discourse of the urban black underclass that resurfaced during the Reagan administration. The problems with dependency and the female-headed household were popularized in the 1976 Republican primary, when then-California Governor Ronald Reagan told audiences the story of a lascivious, lazy, and criminally minded, Cadillac-driving “Welfare Queen,” who abused the system in deindustrialized Chicago. Although Reagan lost a close race to Gerald Ford, his attack on the war on poverty provided the intellectual foundations for a theoretical shift in the poverty debates from wage distribution and federal policies to cultural and behavioral patterns. The “Welfare Queen” speech showed that urban blacks were easy targets (or villains!) to support Cold War-era, free-market values and to rail against the welfare state and civil rights legislation. Although Reagan’s welfare queen was an isolated case and an exaggerated tale, it did not matter. That symbol prompted pundits to legitimate claims of urban black cultural deficiencies—single-parent households, teenage pregnancy, laziness, drug addiction, and high-school dropouts—through pseudo-scientific and arbitrary statistics, graphs, and I.Q. testing. During the Reagan Revolution, the black underclass was transformed into a purely cultural category designed to delineate a set of urban behaviors that were deemed pathological or deficient, according to historian Alice O’Connor in her volume, Poverty Knowledge. In 1986, liberal journalist Nicholas Lemann wondered in The Atlantic how the bifurcation of the middle and lower class in Black America continued, even “during a period of relative prosperity and of national commitment to black progress.” His answer?
“In the ghettos … it appears that the distinctive culture is now the greatest barrier to progress by the black underclass, rather than either unemployment or welfare.” In essence, Lemann was saying that economic policies paled in the face of an overpowering culture that confined the black underclass to a life of destruction. Although Lemann’s analysis of the black underclass sought to transcend both Republican and Democratic solutions, his attention to the negative power of culture fueled neoconservatives’ cultural deficit hypothesis. In the 2012 Republican primaries, the rhetoric around government handouts and the criticism of Barack Obama as the “food-stamp president” underscored how deeply American politics remains wedded to the discourse around the “Welfare Queen” and the black underclass of the late 1970s and 80s. Democrats have not been immune either. The Clinton administration’s welfare reforms of the 1990s dismantled many programs and made cash benefits to poor families temporary and contingent on finding employment. The cultural deficiency hypothesis—the idea that deficits in black culture keep black Americans in poverty—continues to frame the discussion of poverty in American politics.

Often overlooked in the cultural deficit hypothesis is the role that religion has played in stereotyping the urban, black lower class. In the late 1930s and 40s, liberal-minded social scientists began to study black laborers at the height of the New Deal era. They were invested in chronicling the psychological ramifications of segregation in urban race relations. In doing so, social scientists, such as E. Franklin Frazier, John Dollard, and Allison Davis, helped to legitimate the cultural assumptions about impoverished, black laborers—from their parenting skills and sexual relationships to leisure activities and group affiliations. In her book Your Spirits Walk Beside Us, Barbara Savage writes that these social scientists “often advanced old arguments without questioning them or listening to the ideas of their human subjects.” Savage adds that within lower-class, black religious circles, these scholars “occupied overlapping roles as trespassers, as intermediaries, as experts, and, ultimately, as creators of narratives they told about respectability and deviance in black communities.”

In investigating religion, ethnographers chronicled the beliefs, songs, worship styles, testimonies, visions, and prayers in the growing Pentecostal, Holiness, Spiritual, and “independent churches” in the urban South and North. Studies concentrated on how these emerging “sects and cults” were fundamentally changing the religious and cultural landscape in modern America. Ethnographers were interested in exploring the appeal of these churches and denominations to black lower classes, especially to black women who were domestic servants. They interpreted the theologies, rituals, and expressions as the behavioral repository for black, lower-class expressions and their reactions to the social inequalities that engulfed them.

In September of 1939, social anthropologist Guy Johnson attended a late-night religious ceremony at one of Father Divine’s Peace Missions in Harlem, New York, as part of his work with the Myrdal-Carnegie Corporation race-relations research team. Johnson devoted his attention mainly to single, black, middle-aged women and their “hand-clapping, foot-patting, and swaying bodies,” sometimes in a shuffle dance, during the spirit-led and “lustily” sung songs. Johnson was shocked at the women’s emotional and physical stamina, which reached its peak in “hysterical shrieking, fainting spells, shouts” once Father Divine appeared well after midnight. After visiting the same church, economist Gunnar Myrdal concluded that a “person acquainted with the problems and techniques of abnormal psychology” was particularly well suited to assess the impact of black religion among the black lower class.

American sociologists and anthropologists certainly paid close attention to the shouting, singing, dancing, and rhythmic oral expressions in Negro revivals (i.e., prayers, sermons, call-and-response) to capture what they deemed to be inherited racial characteristics. But Johnson and other racial liberals did not interpret these religious experiences in independent churches as natural behaviors. Instead, they interpreted the crying, fainting spells, hand-clapping, and swaying bodies of black, middle-aged women as signs of the “lack of training and discriminatory labor policies, low economic status, and poor housing” in urban environments, as Edward Palmer wrote in 1945 in the Quarterly Review of Higher Education among Negroes. Johnson and his colleague Palmer saw a relationship between middle-aged women’s religious experiences and their restriction to domestic occupations. They argued that the popularity of these independent religious movements stemmed from the freedom they offered low-income workers, who could release their daily frustrations and rage organically and emotionally. Palmer asserted that expressive “religion always appears when people are thwarted and when they can do little to remove the limitations which encircle them.”

These liberal-minded social scientists’ reflections compared the religious behaviors of the black lower class with dominant cultural norms and values. For Gunnar Myrdal, the “hysterical” behaviors exhibited in the independent churches represented “distorted development” and the spiritual and cultural lag of low-income blacks—i.e., the cultural deficit hypothesis. These studies increasingly used black religious rituals and practices as data to highlight the marginalization of black, low-income communities from modern life and to make the case that the federal government should legislate policies to assimilate blacks into the larger, mainstream society.

The cultural deficit hypothesis emerged out of liberal and conservative commentaries on the black underclass. It continues to dominate our discussion of poverty in America today. It lurks beneath Ryan and other Republicans’ declaration that Lyndon Johnson’s unconditional war on poverty has been soundly defeated. Many unjust systems, past and current, keep Americans in poverty, from housing discrimination and unequal access to quality education to mass incarceration and racial profiling. The national discussion of poverty is still framed around cultural behaviors while ignoring the deep structural inequities that have fostered high rates of poverty in the contemporary economy. In the end, ascribing certain behaviors or pathologies that actually cut across race, class, and region solely to a particular people underscores a blind faith in the myth of a free-market society.

Jamil Drake is a Ph.D. candidate in American Religious Cultures in the Graduate Division of Religion at Emory University. He is currently completing his dissertation, “To Know the Soul of the People: The Field Study of the ‘Folk Negro’ and the Making of Popular Religion in Modern America, 1924-1945.”

The Fate of American Religious Freedom: An Interview with Legal Scholar Steven D. Smith Tue, 13 Jan 2015 17:34:18 +0000 (AP Photo/Matt Rourke)


In the wake of important judicial rulings on culture war issues such as same-sex marriage and contraception, “religious freedom” has emerged as one of the most hotly contested terms in American political discourse. As public opinion on these issues has liberalized, many conservatives have embraced religious freedom as a safe vantage from which to legislate. In response, many progressives cite the secularity of the United States Constitution to argue against overtly religious policy.

Steven D. Smith is the Warren Distinguished Professor of Law at the University of San Diego. His most recent book, The Rise and Decline of American Religious Freedom, advocates a return to what he calls “the American settlement”—an arrangement under which the constitution is read to be neither religious nor secular, but rather open to the best argument of either persuasion. Eric C. Miller spoke with Smith about his project.

R&P: In recent years, scholars of law and history have published a lot of interesting books about religious freedom in America. Your book is on the “rise and decline” of religious freedom, and I’ve read others on “the myth,” “the tragedy,” and “the impossibility” of religious freedom. Why is there currently so much interest in this subject, and why is it cast in such dramatic terms?

SS: I think there are two main reasons (which may ultimately come down to the same reason) for the interest, and for the woeful tone. One reason is that religion is at the core of the culture wars, which seem to be intensifying. A stark manifestation of this fact was a finding in the opinion of Judge Vaughn Walker, the federal district judge who invalidated California’s Proposition 8. The judge found that something like 85 percent of voters who attended church regularly voted in favor of the measure—in favor of traditional marriage, basically—while close to 85 percent of people who never attend church voted against it. Given divergences like this, people on the “progressive” side of the culture wars often come to view religion as the enemy. And they may come to see religious freedom as empowering that enemy.

Which leads to the second reason for the interest, and the apocalyptic tone: the traditional commitment to religious freedom seems more embattled today, and more vulnerable, than at any time in the modern period. Just a few years ago it was liberals (like Justice Brennan) who were the champions of religious freedom; today they are often opponents or skeptics (as the recent furor over the Hobby Lobby decision reflected). And the dominant opinion among legal scholars who work in this area seems to be that special constitutional protection for religious freedom is a product of contingent features of the founding period but is not something that could be justified today.

My book tries to offer some background for and insight into these developments. I suggest that the traditional “American settlement” with respect to religious pluralism centered on a principle of open contestation under which both providentialist and more secularist interpretations of the Republic had an assured place in the public square. This settlement was theoretically inelegant and sometimes messy in practice, but it allowed for peaceful engagement—and for an expansion of religious pluralism.

Beginning with the school prayer decisions in the early 1960s, however, the Supreme Court in effect repudiated this settlement, elevated a secularist interpretation to the status of constitutional orthodoxy, and demoted the providentialist view to the position of constitutional heresy.

One consequence of this repudiation was a sort of revival of the old “wars of religion” (in a less violent form, thankfully). The older battle lines had been between Catholics and Protestants; the newer division is between secularists and providentialists. A second consequence has been that the classic justifications for religious freedom, as articulated by Locke, Jefferson, Madison and others, were rendered inadmissible, because they were all theological in character. As a result, the commitment to religious freedom comes to be less defensible.

R&P: I really enjoyed your book, in part because it challenged some of my progressive assumptions about the American settlement. But it seems to me that much of the concern—in the culture war realm, anyway—focuses on exception rather than rule. Religious people remain perfectly free to practice their faith in countless ways without any governmental interference. But in a few cases—like Prop 8 and Hobby Lobby—religious citizens have claimed the right to impose their beliefs on people who don’t share them. Isn’t it fair to draw a line here?

SS: I have to say, Eric, that the all-too-familiar objection to “imposing beliefs [or values] on others” is in my view a rhetorically potent but question-begging and wholly unhelpful way of addressing these kinds of conflicts. That is because the description equally applies to both sides of the controversies.

You mention the Hobby Lobby controversy. Hobby Lobby’s owners, the Green family, evidently believe that abortion is a sin, and that it would be a violation of their Christian commitments for them to facilitate that sin by providing insurance that covers some prescriptions they regard as abortifacients. If the Greens are excused from providing such coverage, you can say if you like that they are “imposing their beliefs” on their employees. (Although I confess that this description seems to me a bit strained, and tendentious: no employees are required to believe anything, or to forgo abortion or contraception.) Conversely, if the government forces the Greens to provide such coverage, this is clearly a case of the government imposing some set of (to them) alien values or requirements. “Imposition” occurs either way.

As it happens, in this particular instance the burden of the imposition on the Greens seems considerably more severe than the burden of an exemption on the employees. If an exemption is given, the burden on a Hobby Lobby employee who wants or needs contraceptives is that she will have to obtain them in some other way, or else try to find another employer. That is a burden, to be sure. Still, contraceptives are readily available, and there are lots of employers in America. If an exemption is denied, conversely, the burden on the Greens (if they remain faithful to their convictions) is, basically, that they will probably have to shut down their business.

Of course, you may not share the Greens’ beliefs—not many people today do—and so you may not sympathize with them. But, seriously, which burden seems more onerous?

In Rise and Decline I suggest that our contemporary approach to religious pluralism might accurately be characterized as one of denial (or self-deception). We intone, over and over again, that government must be “neutral” toward all religions. And then we desperately try to ignore or obfuscate the fact that in cases of genuine conflict, there simply is no meaningfully neutral position.

In this vein, a pervasive strategy is to criticize your opponent’s position for departing from neutrality (as it will, inevitably) while distracting attention (other people’s and your own) away from the fact that your own position is equally a departure from neutrality. There are various techniques for accomplishing this. But the language of “imposing values on others” is one very common (and often rhetorically effective) way of practicing this sort of deception or self-deception.

R&P: I’m not quite ready to concede the point, but I think I can concede this example and still argue that, in the vast majority of cases, the government does not interfere in religious practice. High profile claims of interference always seem to coincide with the interests of conservative politics, which makes folks like me a little cynical. But here’s a question: if we endorse an environment of open contestation, rather than enforced secularism, how should controversies like these be decided? Since Prop 8 lost and Hobby Lobby won, are we sort of there already?

SS: It would be a mistake, I think, to suppose that important free exercise claims always arise on the conservative side. Protecting the right of a Muslim prisoner to wear a beard—which is the issue before the Supreme Court this term—isn’t exactly a conservative cause. But you’re right: the most visible free exercise cases in recent years—such as Hobby Lobby—have involved claims by traditionalists or religious conservatives.

This fact might help explain why liberals have largely shifted their attitudes toward religious freedom. You mention a cynical attitude; a cynical suggestion from the other direction might say that from John Stuart Mill through Justice Brennan, liberals were the great champions of religious freedom as long as the leading beneficiaries—in England, dissenters from the established church; in this country, draft resisters, Native American peyote users, the Amish—were themselves on the liberal side, or at least were people with whom liberals could readily sympathize.

But as the beneficiaries have come to be more on the traditionalist side, liberals now perceive religious freedom as an impediment to their agenda. That diagnosis is too cynical and simple—or at least I hope it is—but it may contain some truth.

As to how current controversies over the contraception mandate or objections to same-sex marriage would come out under “the American settlement,” there’s no way to say for sure. The whole point of the principle of open contestation was to assure the contending parties, whether secularist or providentialist, a place at the political table, so to speak, so that they could argue out the issues on the merits.

The argument was open because there was no presumption, as there is now, that religious or “providentialist” reasons for political decisions are illegitimate—and no expectation that the Supreme Court would step in and settle controversies by fiat. Which side would “win” depended, consequently, on who could mobilize the most support and make the most persuasive case—persuasive not just in mundane political or pragmatic terms, but in terms of appeals to our (lower case) “constitution,” or to the values, principles, and traditions that constitute us as a people. Secularist positions would prevail on some issues in some times and places, more religious positions for other issues, times, and places.

R&P: I think I am representative of a lot of progressives in that I consider myself an advocate of religious freedom, but I object to (what I see as) its opportunistic deployment. In my view, many of those who appeal to religious freedom these days really only care about conservative Christian freedom, or otherwise embrace freedom as a cloaking device for oppression and moral condemnation. But I also don’t want to fall into the trap you’ve identified—of embracing or opposing freedom based on my own partisan interest. You suggest that open contestation would not only improve our legal structure, but the quality of our public discourse on controversial subjects. How?

SS: Two points in response. First, I suppose it’s natural for any of us to care about a legal right—religious freedom, freedom of speech, right to counsel, whatever—when it’s working to protect us or people we sympathize with. And, conversely, to be more suspicious when the right is helping people we disagree with. It’s also possible, as you say, to use rights opportunistically. If religious freedom can get you out of going to Vietnam, for instance, there’s an incentive to try to exploit the right.

Back in high school, a good friend—who was a thoughtful, earnest pacifist but not a religious person—asked if I could help him prepare a religious justification for avoiding military service. The incentives were strong.

By and large, though, I think it’s more charitable but also more realistic to treat claims of religious freedom as sincere, whether they arise on the right or the left. Would a Muslim prisoner litigate a claim over wearing a beard if he didn’t have a sincere religious conviction? Maybe, but … And purely as a business proposition, Hobby Lobby only hurts itself by closing on Sundays, for instance, or by forfeiting the services of qualified workers who don’t like the business’s Christian policies. Why would the Green family adopt such profit-reducing policies if they didn’t have a sincere commitment?

Second, you ask how open contestation would affect public discourse. With apologies, I’m inclined to refer to another book I did several years ago, called The Disenchantment of Secular Discourse. The basic thesis is that legally or culturally imposed secularist constraints inhibit us from presenting, defending, and examining our deepest normative commitments; we’re forced instead to “smuggle” in those commitments under the heading of generic values like liberty or equality. The result is a public discourse that is impoverished, inefficacious, sometimes disingenuous.

Or worse. Often, when our real normative commitments can’t be presented, the best or only remaining rhetorical strategy is to dismiss those we disagree with on the assumption that they are acting from bad faith, bigotry, or hatred. That strategy, and that kind of dismissiveness, are pervasive these days (as your question itself may suggest).

An egregious example, in my view, is the Supreme Court’s majority opinion in United States v. Windsor, which invalidated a portion of the federal Defense of Marriage Act. Justice Kennedy said the law was invalid because it was enacted from “a bare desire to harm a politically unpopular group,” or from a “purpose to demean” or “to injure.” All of the familiar (and fiercely contested) reasons given for DOMA and equivalent state laws are thereby implicitly declared to be not merely unpersuasive, but pretextual: the millions of Americans who purport to believe those reasons are essentially lying, or deceiving themselves, to conceal what is in reality pure irrational malevolence.

But how could Anthony Kennedy possibly know this to be so? Does appointment to the Supreme Court confer an ability to look into the hearts and minds of millions of people he has never met? And can you think of any accusation better calculated to promote resentment and cultural division? This is judicial discourse at its most degraded, I believe, truly unworthy of Supreme Court justices, but it’s linked to the limitations created by secularist constraints.

Whether at this point easing those constraints would lead to improvements in the discourse is hard to predict: bad discursive habits may be hard to break. But I would say it’s worth a try.

Eric C. Miller is assistant professor in the Department of Communication Studies at Bloomsburg University of Pennsylvania.

Pope Francis Causes Division Among Cubans in Miami Wed, 07 Jan 2015 16:29:10 +0000 Pope Francis and President Obama

(Getty/Saul Loeb)

Hours after the news broke in December that the United States and Cuba were reinstating diplomatic relations, I arrived at a Catholic Church in one of Miami’s largest parishes. The church’s priest is a charismatic man of Cuban descent well known throughout Miami. He was in meetings all morning and thus had only heard rumors that something had happened. “Padre,” I asked him in Spanish, “did you hear the news?”

In the flurry of conversation that happened in the hallway—a discussion that only grew bigger as the cleaning ladies, IT guys, seminarians, and front office staff joined in—the details surfaced. Months of secret meetings between government officials had culminated in the announcement on Wednesday, December 17, 2014. Both the United States and Cuba would ease restrictions on travel and financial transactions between the two countries; prisoners would go free; and President Barack Obama said he would push to end the 54-year-old trade embargo.

“And best of all,” said one of the staff, “is that the pope helped make it all happen.”

Indeed, reports detailed that Pope Francis had urged accord between the two nations, writing letters to President Raúl Castro and President Obama and holding a diplomatic meeting at the Vatican.

The parish priest, who asked not to be identified, was torn. He was happy that the pope had been influential, but he was also deeply concerned. “Uff,” he said with a tired drop of his arms. “Now who is going to put up with all the Cubans saying that it was the pope’s fault?”

“No es facil,” a woman who works in the church office later said about the pope’s intervention. “It’s not easy.” The phrase is commonplace in Miami, a catch-all used just as easily in jest as in earnest. Stuck in traffic? No es facil. Loved one dying in the hospital? No es facil. Lost your job? No es facil.

But standing there in the parking lot outside of the parish, her words weighed heavily. Her husband had been one of the many Cuban counter-revolutionaries who worked for the CIA in Miami to topple the Castro dictatorship in the early 1960s. They both had left everything behind on the island in order to make it in the United States. And now the Catholic Church had been integral in the renewal of ties to their homeland, a homeland with both positive and negative memories.

“I know what this means,” she told me, emphasizing that her past allows her to fully understand the announcement. “This is big news, news that will have profound effects.”

And about Pope Francis? “He works for peace,” she said with a shrug before she got in her car, “pero no es facil.”


TO UNDERSTAND MODERN-DAY Miami, one must understand the history of Cuba—especially post-1959 Cuba. And part of this history is deeply entwined with the actions of the Catholic Church.

It was the Cuban Revolution of 1959 that began the first large-scale immigration of Cubans to Miami. As Alejandro Portes and Alex Stepick detail in their seminal book, City on the Edge, many families, mostly from the educated business class, cut their losses in Cuba and took what they could to Miami. “The first two years of the Cuban Revolution,” they write, “saw the gradual return to Miami … of the very groups who had known the city as a playground: first, the privileged for whom Miami was a day trip, and then those who could afford to come every summer.” These Cubans managed to carve a significant economic, political, and social niche in Miami. The isolation they received from the city’s white population both defined the community and strengthened its solidarity; they were able to form, in the words of Portes and Stepick, a “moral community” that helped them survive.

Beyond these economic elites, however, part of this first wave of refugees included several thousand unaccompanied minors who arrived in Miami without any contacts or family members to receive them. The Catholic Welfare Bureau stepped in to help with this emergency through Operation Peter Pan: a massive relocation project that sent these minors to foster and group homes nationwide. Headed by a 30-year-old Irish priest, the Rev. Bryan O. Walsh (later dubbed the “Father of the Exodus”), this program sent more than 14,000 children to homes between December 1960 and October 1962.

Two decades later, just as this first wave of Cubans had carved a foothold in Miami, Fidel Castro opened the Mariel harbor and permitted thousands of Cubans to leave the island. Beginning in April 1980, droves of refugees left Cuba. By the time the Mariel harbor was closed in September, approximately 125,000 new refugees had arrived on the shores of Miami.

The Archdiocese of Miami stepped in and offered a tremendous amount of assistance to the refugees in the form of food, shelter, clothing, and other necessities. But one project in particular, La Ermita de la Caridad (the Shrine to Our Lady of Charity), was integral in bringing the Cuban community together. The shrine’s construction began with a provisional chapel in 1967, and it centers on Our Lady of Charity, a Marian image who also serves as the patroness of Cuba.

As Thomas A. Tweed details in his book, Our Lady of the Exile, the shrine helped the Cuban exile community to identify as such: a displaced people, exiles undergoing struggle together. “For many Cuban exiles,” Tweed said in a phone interview, “La Caridad is the unifying symbol of religion and nation.” The site was where Cubans went to hear about loved ones still on the island, where newly arrived refugees could go for information and social services, and where news could be disseminated. Over the years, the site would remain important for both the religious and political lives of Cubans in Miami. “Lots of Cubans would say that they disagree about everything,” Tweed said, “but not about La Caridad.”

La Caridad was the first visit I made on the day the news broke. When I arrived, however, all I found was an empty parking lot and a few devotees. As I returned over the next few days, I continually found the same: no meetings, no announcements. The priest did not mention the news outright during daily mass but rather asked for all to pray for God to guide our politicians and the pope. A staff member blocked me from reaching the priests for comment, and few devotees felt comfortable speaking with me.

One of the shrine’s longtime volunteers was not surprised when I told him that I was having a hard time getting people to talk. “Even if they agree [with the news],” he said, “they won’t admit it to you. This is too sensitive a topic, too divisive an issue. Many of the people feel betrayed: why would the Vatican do this?”

Ambivalence was found elsewhere in Miami’s Catholic circles. The Rev. Arturo Kannee, of San Juan Bosco Church in Little Havana, said he also avoided discussing the news in his homily because “it’s a very, very sensitive issue.” Although his congregation is now predominantly Central American, the church is still called the Cathedral of Exiled Cubans because of all the Cubans who once attended services there. “I say to pray for Cuba, but that is not a topic that you can touch,” he said. Still, he added, “Thank God the pope got involved.”


MANY OF THOSE I approached in Miami recommended that I go to the one place where I was sure to get opinions: Versailles Cuban Restaurant and Café. Established in 1971, Versailles touts itself as “the world’s most famous Cuban restaurant” and is the epicenter for political conversation among Cubans in Miami. This is where politicians come to round up their Cuban constituents and where local Cubans have loud conversations about all topics.

Sure enough, the place was a madhouse on the day of the announcement and the days that followed: news vans and cameras littered the parking lot, people chanted slogans in front of the café window, and a man in a homemade oversized Obama mask walked around the area for people to take pictures of themselves knocking Obama out.

Efraín Rivas, a 53-year-old maintenance man and former political prisoner in Cuba, said, “This is treason against us. The pope is supposed to be about honesty, not about secrecy. How could he have participated in secret talks for 18 months like this?” He is a devout man, he told me, a Catholic man until the day he dies. But, he blurted, “I am now a Catholic without a pope.”

Carlos Alcover, 66, is a former Peter Pan refugee who has lived in the United States for more than 50 years. Although angered by the decision, he was careful to not criticize the pope. “We should give him thanks because he has been serving as a bridge between two sides,” he said, adding, “I hope he continues serving.”

Listening nearby was Barbara Pernaris, 48, who waited until Alcover ended before giving me her opinion. “The pope is supposed to be about peace and unification of the world,” she said, “but he’s caused a divide in the Cuban population.” She told me that she believed Pope Francis was completely in the wrong for getting involved. He is not a diplomat, she said, and it is important to maintain a separation between church and state.

The pope’s role complicates the Cuban response in Miami. The same Church that served the exiles so faithfully by saving thousands of Cuban refugee children and building a place for exiles in Miami has now become involved in the renewal of ties to Cuba. Whereas the Church once brought the Cuban exiles together, it seems to now be tearing them apart.

By the end of December, however, in the midst of the holiday season, the air appeared calmer in Miami. On December 30, I joined a friend at Versailles restaurant. I stood at the café window sipping my cortadito and asked the barista about the chaos that was still engulfing the café just days before. “This place?” she said. “This was una locura [a crazy scene]. No way, mijo, thank God that that passed.” In between steaming the milk for the next Cuban espresso and teasing another one of the baristas, she repeated what so many Cuban Americans had. “No es facil,” she said. It’s not easy.

Alfredo Garcia is a graduate student in sociology at Princeton University.

Marilynne Robinson in Montgomery Mon, 22 Dec 2014 16:22:49 +0000

(Ulf Andersen/Getty)

Marilynne Robinson’s new novel Lila has been greeted with rapture—not just by critics but also by a host of readers who rely on Robinson for novels that change the way they experience life in the world. During the last days of the countdown to Lila’s release, breathless fans took to the Internet to testify to the power of her prose. One commenter on the website The Toast wrote that Gilead “hooked me like a gasping fish”; another said that as she read it “I kept feeling like I’d been hit in the stomach by something huge and wonderful, and I’d have to stagger off and deal with my pathetic scrabbling soul until I was able to face reading more. It was like staring at the rising sun.” Anticipating Lila, a third reader vowed, “I will read this book slowly and intently and then reread it seventy times seven.”

I have been one of these ardent, gasping, staggering fans. Two years ago when I had the opportunity to teach a senior seminar at Yale on anything I wanted, I chose to teach one on James Baldwin, Toni Morrison, and Marilynne Robinson. My students and I read all of Robinson’s novels and spent a reverent afternoon with her papers in the Beinecke Rare Book & Manuscript Library. We reached into boxes and pulled out translucent, grease-spotted letters written while Robinson was cooking dinner, and spiral-bound notebooks filled with the transcendent sentences that would become her first novel Housekeeping, her neat cursive words about loss and resurrection inscribed next to crude, crayoned cars drawn by her small son. We held in our hands tangible evidence of the miraculous intimacy between the quotidian and the sublime.

It is this sacramental significance that makes Robinson’s writing feel so transformative and true. She evokes the hope of heaven in the everyday, and the promise of baptismal blessing in ordinary water. In this way, reading her books can be a religious experience. As one reader writes, “Whenever I’m reading a Marilynne Robinson book, I mostly believe in God and I have like sense memories of what real religion feels like to my body.” For some readers her books have even been a way back into formal religious faith. After reading Gilead and Home, my friend Francisco, who was raised Catholic and evangelical and had drifted away from both, sought and found a new spiritual home in his local Congregationalist church.

Even when she doesn’t bring people back to church, Robinson’s books can restore a kind of religious revelation that had seemed lost. In an essay on Buzzfeed called “Why I Read Marilynne Robinson,” Anne Helen Petersen writes about how Robinson’s novels allow her to set aside the “shame and alienation” of some of her evangelical experiences and remind her instead of “the religion I remember with fondness, both for its intellectual rigor and the righteousness of its teachings, which seem, at least in hindsight, the closest translations of the transgressive, progressive teachings of Jesus.” Petersen writes that this selfless and contemplative form of Christianity is “absent of the suffocating, contradictory ideologies that characterize much of its popularized iteration today.” For these reasons and others, Marilynne Robinson is an important figure for those of us who care about the role of religion in our national life. For many, she is a rare writer who can be trusted to represent Christianity to a culture that often sees faith as anti-intellectual or reactionary or easy to dismiss. As Mark O’Connell muses on The New Yorker’s website: “Hers is the sort of Christianity, I suppose, that Christ could probably get behind.”

Robinson has not only been hailed as the best person to define Christianity for our age—she’s been held up as a critically needed political voice. President Obama has named her as an important influence on his thought. And the former Archbishop of Canterbury, Rowan Williams, who calls Lila “unmistakably a Christian story,” believes Robinson’s fiction has profound public importance beyond the boundaries of Christendom: “Its moral acuity and insistence on what it means to allow the voiceless to speak give it a political and ethical weight well beyond any confessional limits.” For Williams and many others, Robinson’s writing both represents Christianity and transcends it, narrating a political and ethical vision that can serve as a kind of public conscience. To borrow a phrase from The New Yorker, there is now a “First Church of Marilynne Robinson,” and its adherents are everywhere: in pulpits and libraries and online and at the National Book Awards and in the White House. In her own writing and speaking, Robinson embraces this public role for herself, consciously re-interpreting traditional American Calvinism as a moral model for modern times.


MAKING CALVINIST THEOLOGY MEANINGFUL to modern Americans is a tough challenge, but insofar as it can be done, Robinson does it. In her Iowa trilogy (Gilead, Home, and Lila), she takes a classic, white, educated Calvinist vision of grace, a kind of loving and restrained Midwestern serenity, and opens it up. She shows how this deeply thought-out faith interacts with the disorienting extremes of slavery, racism, alcoholism, prison, poverty, illiteracy, and prostitution—extremes that are made manifest in the small town of Gilead through the experiences of damaged, outcast characters. Robinson’s great theological achievement is to show us the predictable limits yet surprising expansiveness of this fatalistic faith, which she demonstrates in plots that trace the ways white, male ministers and their families rise to the occasion of grace, or don’t, and in sentences that express a remarkable aesthetic vision that finds beauty and radiance in almost everything.

Gilead is narrated by the aging minister John Ames, and Home contains the same events told from the perspective of his best friend’s daughter Glory Boughton. In Lila, a prequel, Robinson returns to an outsider perspective reminiscent of her long-ago first book Housekeeping to show the encounter with grace from the perspective of a woman on the margins, Lila Dahl. Though Lila eventually marries the middle-class Ames, she grows up as a migrant farmworker, raised by a beloved foster mother whom she loses to jail. Armed with wariness and a knife, Lila makes her desolate way through the fields and brothels of Missouri and Iowa, finally arriving in the sanctuary of Gilead. For a while Lila lives in a ruined cabin in the woods outside of town, haunting the church and parsonage and graveyard, craving baptism for reasons she can’t understand, and teaching herself to write by copying Bible verses in a tablet. Eventually she and Ames begin an unlikely marriage that brings them unprecedented consolation, but also leaves Lila with unresolved desires to return to the wild world outside Gilead, to unbaptize herself and claim kinship with the lost people who live beyond the reach of religion.

In Lila’s story, Robinson extends the reach of grace farther than she ever has before—stretching it across boundaries of literacy and class, and testing it with extremes of evil and loss, and yet it survives, lovely and glowing. It’s an extraordinary thing to read and very moving. In a recent interview in The New York Times, Robinson tells a story about Oseola McCarty, an African American laundress of Lila’s generation who gained fame when, after a long and frugal life, she donated her surprisingly large life savings to the University of Southern Mississippi: “McCarty took down this Bible and First Corinthians fell out of it, it had been so read. And you think, Here is this woman that, by many standards, might have been considered marginally literate, that by another standard would have been considered to be a major expert on the meaning of First Corinthians!” Robinson delights in religious narratives like Lila’s and Oseola’s: testimonies of fervent textual engagement that unsettle common assumptions about theological expertise and the relative worth of persons.

But despite this democratic expansiveness, there are some limits of Robinson’s religious vision that she doesn’t test or stretch—aspects of our world that simply don’t exist in the world of her novels. I don’t just mean limits of subject matter. Call them limits of community. Like Robinson herself, every one of her characters is an introvert, a loner, a person filled with the passion of loneliness (to borrow a phrase from Robinson herself). It’s impossible to imagine her writing about anyone who wasn’t. It’s not surprising that in a 2012 essay Robinson defines community in fairly disembodied terms, as an imaginative act that is almost indistinguishable from the practice of reading or writing fiction: “I would say, for the moment, that community, at least community larger than the immediate family, consists very largely of imaginative love for people we do not know or whom we know very slightly. This thesis may be influenced by the fact that I have spent literal years of my life lovingly absorbed in the thoughts and perceptions of … people who do not exist.” In her fiction, grace is communal only in the sense that it sometimes stretches to connect two people for a little while: a sister trying her best to understand an elusive long-lost brother, or a mother clasping her child close while he’s still small enough to be held. And even these moments of connection are savored in relation to the knowledge of their precariousness and the aching anticipation of their loss.

The novels’ power lies in their unsparing depictions of the isolated soul communing with itself or nature or God, thrown into relief by moments of mercy when the excluded prodigal or prostitute is welcomed home. But this gracious welcome doesn’t extend to everyone. The novels quietly perpetuate another kind of exclusion: the marginalization of embodied, literal community as a reliable source of solace and ethical vision. Though Ames has been a minister his whole life, he unsurprisingly admits that he prefers the church when it’s empty: “After a while I did begin to wonder if I liked the church better with no people in it.” (And of course he appreciates the empty church even more because he knows it’s about to be torn down.) Glory’s definition of church is likewise unpopulated except for the minister:

For her, church was an airy white room with tall windows looking out on God’s good world, with God’s good sunlight pouring in through those windows and falling across the pulpit where her father stood, straight and strong, parsing the broken heart of humankind and praising the loving heart of Christ. That was church.

In the hundreds of pages of these novels about ministers and their families, congregants and townspeople are barely mentioned. We know they are there because unseen people sometimes silently drop off pies and casseroles at the parsonage, tactfully refraining from ringing the bell.

I believe Robinson’s deeply spiritual vision of loneliness, of ecstatic and resigned and despairing and meaningful disconnection, is part of what makes readers respond to her so rapturously in the Internet age. Her novels are a kind of digital Sabbath. As our inboxes overflow and our alerts and notifications multiply, her characters wait in vain for letters that don’t come, and lose track of people they once knew, and fail to make it to the phone in time to hear the faraway voice of the one they love. Through it all, they ache and yearn for a word, a sign, an echo or trace of what they have lost, or what they know they are about to lose. Her books have to be historical novels; it is not an accident they are set between sixty and a hundred years ago. But despite or because of their temporal remove, they are apparently exactly what many of us want to read now. Her characters breathe an unclouded atmosphere that speaks to our discontents as denizens of a world swirling with ambient data.

As a result, her religious vision excludes almost all of us. She can’t represent those of us who are tweeting and commenting and blogging and chatting about her books’ beauty, or comprehend those of us who find ourselves immersed in thick webs of connection and collectivity and populated chaos. Though Robinson clearly cares deeply about what might be called “social problems,” her stories of individual reckoning and resignation have little to say about lives lived in the midst of congregations or in the shadow of corporations. Whether we resist constant compulsory connection or revel in it or both, we are living outside her novels’ theological and political categories.


DO THESE LIMITS MATTER? It seems almost ungrateful to point them out. Robinson already stirs our souls with her stories of solitude and hard-won hope; does she really have to write beautifully about community and politics as well?

Joan Acocella says no. In her review of Lila in The New Yorker, she admits that “Robinson’s use of politics is … to some extent, a weakness of the Gilead novels.” But Acocella argues that the political limits of Robinson’s religious vision don’t matter because Robinson’s mystical insight is so strong: “Robinson writes about religion two ways. One is meliorist, reformist. The other is rapturous, visionary. Many people have been good at the first kind; few at the second kind, at least today. The second kind is Robinson’s forte. She knows this, and works it.”

I agree with Acocella that Robinson works it, and furthermore that her work gives us painful insights into the spiritually corrosive effects of poverty that “meliorist, reformist” writing rarely does. There is a dire need for lamentation in liberal Protestantism, and I am immeasurably grateful to Robinson for supplying it. But I also believe that Robinson’s political limitations matter a great deal, because she has been cast as a public religious voice and conscience by so many, and has taken on this role for herself both inside and outside her novels. And since she has been heralded as the best contemporary expression of public Christianity, it matters what she is leaving out or getting wrong.

As it happens, one of the things she gets wrong is the politics of race. In saying this I don’t mean what my friend Jess Row argues in his Boston Review essay “White Flights”: that Robinson, like many other post-1960 white writers, assumes “a systematically, if not intentionally, denuded, sanitized landscape, at least when it comes to matters of race,” or that in her novels “whiteness is once again normative, invisible, unquestioned, and unthreatened.” Row uses persuasive examples from Housekeeping to bookend his essay, but his critique is inapplicable to Gilead and Home. Their racial problem is quite different.

The race problem in the Iowa trilogy is not that Robinson ignores non-white people and their violent eviction from white landscapes and white religion. Gilead and Home are Robinson’s attempt to reckon with that horrible history. She mourns the ethical declension that turned the multi-racial abolitionist outposts of the 1850s into the white sundown towns of the 1950s. She repeatedly shows us the traces of racial terror on the Iowa farmland and the hushed-up events that led to this “denuded, sanitized landscape”—the burning embers of black churches and the black flights through and from Gilead, from slavery days to Jim Crow. Race is likewise at the center of the novels’ plots and their family dramas: Ames’s grandfather was a John-Brown-style radical abolitionist who attended black churches because the preaching was better, but Ames’s pacifist father disavowed that militant legacy, creating a bitter rift. Meanwhile Jack Boughton, the prodigal son of Ames’s best friend, is secretly and illegally married to a black woman and they have a son, which is why he believes he can never be fully received back into his white family.

Furthermore, the problem is not that Robinson fails to call whites to account for their racial complacency. The character of Jack Boughton allows her to indict the kind of white Christian obliviousness that is effectively white Christian racism. When Jack shows Ames a picture of his black wife and child to try to gauge how his own father might respond to having an interracial family, Ames realizes that even after a lifetime of friendship he has no idea how his best friend would react: “Now, the fact is, I don’t know how old Boughton would take all this. It surprised me to realize that. I think it is an issue we never discussed in all our years of discussing everything. It just didn’t come up.” When Ames observes that interracial marriage is legal in Iowa, Jack indulges in a bitter aside: “Yes, Iowa, the shining star of radicalism.” Except for Ames, Jack keeps his secret to himself, but he talks to his sister about W.E.B. DuBois and pushes his minister father to take responsibility for racial injustice, telling him about the murder of Emmett Till, and quoting an article that argues that “the seriousness of American Christianity was called into question by our treatment of the Negro.” His father inadequately responds that if black people are good Christians, “then we can’t have done so badly by them, can we?” Jack deferentially disagrees. Through Jack, Robinson endorses a racial standard as a valid one for assessing the seriousness of white American Christianity, and she shows us how her white characters fail to live up to it.

But even as Jack demonstrates the limits of his family’s racial vision, he inadvertently shows the limits of Robinson’s as well. When I was re-reading Home recently I stumbled on a curious and troubling anachronism in the novel’s account of the Civil Rights Movement. In a dramatic passage, a TV broadcast of a brutal police crackdown on black protesters in Montgomery prompts a fraught racial conversation between Jack and his father and sister. The problem is that the events Robinson describes bear no resemblance to what actually happened in Montgomery in 1956. What really happened was a yearlong bus boycott that was sparked by Rosa Parks, supported by a coalition of churches and community organizations, and sustained by tens of thousands of ordinary people: “the nameless cooks and maids who walked endless miles for a year to bring about the breach in the walls of segregation,” in the words of Montgomery activist Mary Fair Burks. Instead, Robinson erroneously represents “Montgomery” as a violent showdown between cops, dogs, and black children, much like what happened in Kelly Ingram Park in Birmingham seven years later.

This strange substitution begins when Jack is standing on the sidewalk watching a TV in the window of the hardware store, transfixed by “the silently fulminating authorities and the Negro crowds.” He tells his sister it is “Montgomery,” and though this makes chronological sense since the novel is set in 1956, it is unclear how the image on the screen corresponds with a bus boycott. Later Jack watches the news with his father and sister at home:

On the screen white police with riot sticks were pushing and dragging black demonstrators. There were dogs.

His father said, “There’s no reason to let that sort of trouble upset you. In six months nobody will remember one thing about it.”

Jack said, “Some people will probably remember it.” …

Police were pushing the black crowd back with dogs, turning fire hoses on them. Jack said, “Jesus Christ!”

His father shifted in his chair. “That kind of language has never been acceptable in this house.”

Jack said, “I—” as if he were about to say more. But he stopped himself. “Sorry.”

On the screen an official was declaring his intention to enforce the letter of the law. Jack said something under his breath, then glanced at his father.

Later Jack tries to explain his agitation to his sister Glory: “I shouldn’t have said what I did. But things keep getting worse—” She thinks he means his father’s health, but he clarifies: “No. No, I mean the dogs. The fire hoses. Fire hoses. There were kids—” Glory reassures him, “None of that will be a problem for you if you stay here.” He replies, “Oh Glory, it’s a problem. Believe me. It’s a problem.”

So: In a scene in which remembering “Montgomery” is equated with racial awareness, and forgetting it is equated with racial obliviousness, Robinson “forgets” Montgomery, or at least remembers it as something very different. This is not just a slip-up about a name; it is a series of counterfactual descriptions. In 1963, when Birmingham cops attacked young people with dogs and water cannons, the images were considered so shocking and unprecedented that they appeared on the front page of newspapers around the country, and a couple years later in 1965 ABC interrupted a broadcast of Judgment at Nuremberg to show footage of white police in riot gear using billy clubs to beat black protesters on Bloody Sunday in Selma. But neither the police attacks nor the media events happened in 1956. As Jack would say: “Believe me. It’s a problem.” But what does it mean?

One answer, a simple and troubling enough answer, is that Robinson simply made a mistake—one that reflects the limits of her racial attention. Robinson mixes up Montgomery and Birmingham because her precision when it comes to figurative language or classic theology doesn’t extend to major events in American racial history. For decades she has immersed herself in rigorous reading of Calvin and Shakespeare and the Puritans and the Latin Vulgate, but she hasn’t read enough about the Civil Rights Movement to get it right; Calvin is clear but black people are a blur. And insofar as she is using undifferentiated black people on TV as a way to throw her white characters’ moral development into relief, it might not much matter to her what happened in Montgomery. It’s also possible that she decided that conflating the facts would work better to characterize her white characters, so she silently changed them. Either way, she could be seen as illustrating Toni Morrison’s critique in Playing in the Dark of “the way black people ignite critical moments of discovery or change or emphasis in literature not written by them.” Morrison sees white writers’ ubiquitous instrumental invocation of blackness as a “sometimes sinister, frequently lazy, almost always predictable employment of racially informed and determined chains.” (Robinson’s potentially sinister imprecision is further blurred in Acocella’s New Yorker review: Acocella inaccurately refers to the Montgomery bus boycott as “the Montgomery riots” and calls the black people on TV “rioters.”)

I believe Morrison’s theory about white writers and blackness applies to Gilead and Home, but I suspect Robinson’s propensity for “playing in the dark” is not the whole explanation of why she gets this history so wrong. I believe her failure to represent the real Montgomery is evidence of something else as well, something much closer to the core of her tragic, individualistic theology. I think it speaks to the perilous political tendencies of her particular version of Calvinism.

Unlike versions of Christianity which see suffering as something to be resisted or triumphed over, Calvinism tends to view both suffering and grace as arbitrary, mysterious, and predestined. The forces of fate are inscrutable and immense; the capacity of human agency is comparatively small. Perhaps because of her acute awareness of the cosmic imbalance of power between the human and the divine, Robinson represents religious faith less as a spur to action and more as a beautiful individual reckoning with inevitable loss and anguish. Above all, her writing honors an individual’s submission to the deepest sorrow in order to plumb all the meaning it will yield.

Over and over again, Robinson’s characters find a kind of peace in accepting their arduous lot: Ames spends decades praying in an empty house without seeking the comfort of a human touch; Glory gives up her dreams of a husband and home of her own with a sighed “Ah, well”; Jack painfully accepts exile from both his white and black families without ever telling his sister or father his racial secret, or opening the door to the possibility of embodied beloved community. We watch him as he walks away into an emptied world, Christ-like in his weary submission to his fate: “a man of sorrows and acquainted with grief, and as one from whom men hide their face. Ah, Jack.”

Robinson teaches us that these resignations, these “Ah, [fill in the blank]” moments, are their own redemptive reward. Over and over again, in a paradoxical pattern that Amy Hungerford calls Robinson’s “logic of absence,” the novels state that lack is its own fulfillment; loss its own restoration; sorrow its own solace. As Robinson writes in Housekeeping, “need can blossom into all the compensation it requires,” or, as Lila says, “fear and comfort could be the same thing.” In surrendering themselves to the passion of loneliness, in nourishing themselves with a spiritual imagination that turns the stones of sorrow into bread, Robinson’s characters find grace in the midst of death and dearth. In the world’s fallenness, they envision a paradise regained.

When you consider Robinson’s deep indifference to embodied communities and profound interest in the aesthetics and theology of resignation, it makes sense that a successful boycott could never be represented in her fiction. Robinson ignores black community organizing in Montgomery for some of the same reasons she ignores the white congregation in Gilead: she is not interested in representing embodied collective life. But beyond that, her displacement of the Montgomery bus boycott with images of brutality and suffering seems almost predestined by her theology. She is replacing a story of black people successfully coming together to transform their society with images of black people enduring pain inflicted by the powers that be. The protesters in her Montgomery do not walk together with tired feet and rested souls for 381 days. Instead they are passive objects of violence, pushed and dragged by police. (Robinson’s fictionalization of the Civil Rights Movement is entirely reduced to these brief images of black suffering: her novels do not include speeches, sermons, sit-ins, strategies, meetings, music, marches, legal battles, freedom rides, or voter registration drives.) Though Robinson mentions Rosa Parks in her essays, her novels dwell on the private, pious perspectives of white people who resemble Oseola McCarty. She is not interested in telling the stories of people who fight their fate, alone or together.

Still, Robinson is unparalleled at finding meaning and beauty in suffering and deprivation. This is why her novels are so heart-wrenchingly gorgeous. It is also why they are troubling when they are used to define religion or politics for our time, or when they are claimed as a public conscience for the oppressed and voiceless. There are dangers both in what she leaves out of her fiction and in what she puts into it. And the beauty and peril of Robinson’s vision can be seen with stunning clarity in the last pages of Home.

A few days after Jack has left Gilead, probably forever, his wife and son, Della and Robert, show up at his family home looking for him. Glory, who knows that Jack has a wife but does not know she is black, doesn’t recognize who they are at first. When Della asks after Jack and finds he is gone, she prepares to go away in silent sadness without explaining who she is (ah, Della). But Glory, yearning for an impossible momentary connection, stops her: “You’re Della, aren’t you. You’re Jack’s wife.” They talk together about Jack in a reserved, tentative, heartrending way. Glory chats with her nephew about baseball, and he reverently touches a tree in his father’s yard, “just to touch it.” Tears are quietly shed and wiped away. And then Della and Robert leave without ever walking in the front door. As Della explains, they have to leave before sundown: “We have the boy with us. His father wouldn’t want us to be taking any chances.”

Overcome in their absence, Glory sits on the porch steps and reflects on her meeting with her black family. She is overwhelmed by a sense of the cruelty of the situation and her own inability to make it different: “Dear Lord in heaven, she could never change anything.” In a moment of empathetic imagination, she sees Gilead through Della’s eyes, grieving that Della “felt she had to come into Gilead as if it were a foreign and a hostile country.” Her own sense of her home is transformed, made alien. And then, in the last paragraphs of the novel, Glory consoles herself for her own sadness and for Jack’s and Robert’s and Della’s, as members of a family torn apart by racist anti-miscegenation laws and Jim Crow. In a rapturous vision of imagined connection, Glory pictures her nephew’s brief return, decades into the future: “Maybe this Robert will come back someday. … And he will be very kind to me. … He will talk to me a little while, too shy to tell me why he has come, and then he will thank me and leave, walking backward a few steps, thinking, … This was my father’s house. And I will think, He is young. He cannot know that my whole life has come down to this moment.”

This is the power and inadequacy of Robinson’s racial vision. An empathetic encounter with a black person can totally transform a white person’s view of their own place in the world; and a dream of interracial connection (however partial and temporary) is enough to give meaning to a white person’s entire life, and incidentally to wrap up the worn and ragged threads of the novel. It’s a lovely liberal reverie, and its limits make it even more poignant: even in her wildest dreams, Glory can’t imagine Robert being welcomed into his white father’s childhood home. But Glory does nothing to make even this modest fantasy of a family reunion come true. The dream of Robert’s return is so consoling to her, so meaningful, that for Glory’s emotional purposes, and for the purposes of the novel, it doesn’t much matter whether it actually happens. The mere longing is enough: It feels more satisfying than any real attempt at interracial community or racial justice could ever be. Actual black people need never displace the shy, grateful, undemanding black man of Glory’s dreams.

This kind of consolation can be captivating, if you identify with Glory and not with Robert or Della, and if you don’t think too much about the implications. And of course, characters and novels don’t have to be moral models. We can love Glory and Home without following in their steps. But as I write in the wake of mass protests against racial injustice in Ferguson and New York and around the world, I can’t accept unfulfilled cravings, empathetic fantasies, and suffering beautifully borne as the best possible public Christianity for our age.


I WILL FOREVER READ all the fiction Robinson writes. We who love her books read them because they give us what we miss, a specter of a stripped simplicity we’ve lost or never had, imbued with a fullness of meaning that we can hardly bear. I’ve barely quoted Robinson in this essay because I suspect that the sheer beauty of her words would overwhelm any criticism I could possibly make. Writing about Montgomery and what it means has been like trying to pry her books out of my own hands. But I know that when I close Robinson’s novels and step out of the baptismal pool of her pages, I re-enter a world I could never find in Gilead: a world full of struggling and striving people of every religion and race, classrooms full of clamorous voices, bright threads of friendship woven across the Internet, and wild desires for change and justice and beloved community that overcome all my half-hearted attempts at relentless resolute Calvinist resignation.

Novels can be partial and still be perfect, but religion needs to be practical. These are beautiful novels, complete in themselves, but insofar as they are held up as a political and ethical example they are far from enough. We need to read Marilynne Robinson, but we need to read Morrison too, and so many others. And we need to imagine a more capacious and yet unwritten vision of grace for our moment. We need a grace large enough to extend to those who prefer churches with people in them; a religious sensibility that is finely attuned enough to care when and where people are staging boycotts or facing down cops and dogs for freedom, and new prophetic voices that will inspire us to join them.

I read Lila in a day, marveling in the quiet words, sometimes stopping to wait for my tears to subside so I could see the page. Some sentences I read aloud to myself so I could hear them spoken, just as Reverend Ames read aloud during his long decades of solitude. I copied bright phrases into a commonplace book like Lila copying the prophecies of Ezekiel in her ruined cabin. In the end, I was grateful to have ached and starved and wept with Lila, and I was ready to let her go.


Briallen Hopper is a Lecturer in English at Yale and the Faculty Fellow at the University Church in Yale.  

Ebola and U.S. Hospital Chaplains: A (Deliberately) Untold Story Tue, 16 Dec 2014 16:05:48 +0000

(Owen Humphreys/PA Wire/AP Images)

In August, Dr. Kent Brantly and Nancy Writebol, medical missionaries who were serving in Liberia, arrived at Atlanta’s Emory University Hospital. The facility was the first to treat Ebola patients on American soil, and in the early days there was no shortage of public criticism. “People responded viscerally on social media, fearing that we risked spreading Ebola to the United States,” wrote Emory’s head of nursing in The Washington Post. Later that month, a poll by the Harvard School of Public Health found that four in ten adults still feared a massive U.S. Ebola outbreak. Even as the staff performed their duties with confidence, the Rev. Robin Brown-Haithco, the hospital’s director of spiritual health, could sense anxiety among some of the personnel. So she wrote a two-page letter reminding employees what was easy to forget: why their work matters.

In her memo, distributed to the entire clinical staff, Brown-Haithco invited healthcare professionals to compare their own vocation with that of the missionaries, who followed their callings to help those in need in Liberia. “When Emory heard that same call a little over a week ago, we also knew there was only one way to respond,” she wrote. “We knew it was our ethical and moral responsibility to open our doors to receive the missionary aid workers and to provide the care we provide for all who come through our doors. We responded, not because it would bring notoriety or fame, but because it is our calling as a health care institution.”

In the days that followed, many healthcare workers talked with Brown-Haithco about their vocations. These conversations often mirrored the tone she had set in her memo, neither ignoring the risks of treating Ebola patients nor succumbing to panic. A calling doesn’t exclude fear, she explained, but fear “does not prevent us from moving with compassion toward someone in need.”

Nor does fear make for a dull news cycle. When the Ebola outbreak began, the American public heard from doctors, nurses, public health experts, and WHO officials. Once healthcare workers were diagnosed in Dallas, we heard about PPE procedures, CDC guidelines, and airport screenings. We heard about hospital employees in New York who faced discrimination for working near an infected patient, and about the exotic dancers who started a GoFundMe account to support their voluntary quarantine. Most recently, we heard about the $27,000 the city of Dallas spent taking care of Bentley, the beloved dog of Dallas nurse and recovered Ebola patient, Nina Pham.

But during the initial frenzy of U.S. Ebola coverage, we didn’t hear much about hospital chaplains, the members of hospital teams tasked with providing spiritual and emotional support to patients, their families, and medical staff. According to university estimates, there were 42,410 stories mentioning Emory and Ebola published between July 31 and September 22; Brown-Haithco and her chaplain colleagues were interviewed four times, including a segment with Matt Lauer that never aired.

And really, the public isn’t supposed to hear from chaplains: Chaplains are trained to keep a low profile, remaining calm in health crises, not interfering with the lifesaving work of medical personnel. Professional chaplaincy standards emphasize sensitivity, respect for boundaries, and self-awareness, managing and minimizing one’s own emotions and religious preferences to better respond to the needs of others. A chaplain’s work isn’t flashy: listening, praying, and simply being present to those who suffer.

Not to mention: the dual confines of HIPAA and clergy confidentiality limit the information chaplains are allowed to share—hardly ideal interviewees for eager reporters.

Yet, silence isn’t absence. In the five American hospitals that have treated Ebola patients, chaplains have been a key part of the healthcare team, quietly alleviating anxiety amid national paranoia, tackling loneliness amid clinical isolation, and protecting patient privacy amid intense public scrutiny. And although these chaplains have taken their responsibility to the U.S.’s 10 Ebola patients seriously, they are also mindful of the larger health crisis at hand—a global epidemic that has infected more than 18,000 people and claimed the lives of more than 7,000.


IN LATE OCTOBER, I talked with the Rev. Paul Steinke, a Lutheran pastor and chaplain at Bellevue Hospital in Manhattan. It had been a week since his hospital admitted its first Ebola patient, Dr. Craig Spencer, a Doctors Without Borders volunteer who had been treating Ebola patients in Guinea. “It’s kind of nuts,” Steinke said. “Thirteen thousand people in three African countries have Ebola; we only have one patient. There are still video trucks outside.”

As he saw it, there was nothing newsworthy about a hospital treating an infectious patient. “We’re a hospital. This is what we do. We take care of sick people,” Steinke said. He added: “And we do a damn good job of it.”

The Rev. Joyce Miller, also a Lutheran pastor, works as a chaplain at Nebraska Medical Center, which has cared for three Ebola patients to date—medical missionary Dr. Rick Sacra and NBC cameraman Ashoka Mukpo, who both recovered, and Dr. Martin Salia, a surgeon serving in Sierra Leone, who died in November. She concurred with Steinke’s assessment that much of the fear surrounding Ebola patients is unwarranted: “I’ve been in chaplaincy long enough to know that I have gone through outbreaks of HIV/AIDS, influenza, RSV, and all kinds of stuff that has scared people,” she said. “It’s scary stuff, but the biggest danger is our fear and the best way to deal with that is education. So, yes, this is another health crisis, but it’s what we do.”

And I heard this same unflappable, business-as-usual approach when I asked chaplains how best to minister to Ebola patients. “I don’t know that I see my role with an Ebola patient any differently than I do with a patient who is here for a stem cell transplant,” said John M. Pollack, a Catholic deacon and chief of the spiritual care department at the National Institutes of Health Clinical Center in Maryland, where nurse Nina Pham was treated and later released. “I think largely the greatest spiritual issues we encounter here are loneliness and despair. And those are universal questions that come with a rupture in health,” Pollack explained. “This is a different disease than we were used to seeing, but the spiritual issues are very much the same.”

Paul Steinke agreed. He said the best way to care for any patient is the “old-fashioned, chaplain-talking-to-patient” approach. The only real trick was doing that within isolation guidelines. Chaplains, like patients’ families, could not interact with Ebola patients face-to-face due to the intense training required to meet CDC requirements. The chaplains instead turned to technology. Due to HIPAA, none of the chaplains could confirm whether they had contact with Ebola patients, but Pollack said that “in the event that we had a patient in isolation where it would be unsafe for a chaplain to work with a patient, then we would use FaceTime or Skype.” Other chaplains indicated that if an Ebola patient wanted to speak with a chaplain they would use the telephone.

But chaplains weren’t the only ones who wanted to minister to the patients. As attitudes about treating Ebola patients shifted from national anxiety to approval, the chaplains were faced with a new problem: how to handle the well-meaning community groups who wished to show their support for Ebola patients—often in ways the hospital could not accommodate.

Steinke said someone mailed him a box of stones inscribed with words like “hope” and “faith” and requested that the stones be delivered to the Ebola patient in isolation. But quarantine procedures made the sender’s request impossible. And besides, people don’t want inspirational rocks, Steinke said. “People in the hospital want a connection with a human being that can talk to them.”

At Emory, Brown-Haithco reported there were church groups, especially among Atlanta’s Liberian Christians, that wanted to host prayer vigils in the hospital’s small, interfaith chapel. She ultimately had to turn all the religious groups away. “We wanted to protect our campus and protect our other patients and our other families and their privacy,” Brown-Haithco told me. The hospital chapel was intended primarily for patients and staff, not the city. She encouraged groups to pray for Ebola patients around the world—at their own churches.

During the Ebola ordeal, Emory’s hospital administration invited chaplains to join their leadership team meetings—something the chaplains described as “unprecedented.” The Rev. George Grant, who oversees spiritual health throughout the Emory network, said the chaplains’ inclusion points to a growing acceptance of integrative healthcare, a model that considers patients’ mental and emotional wellbeing alongside physical needs. His chaplaincy staff encouraged the hospital administration to be sensitive to the medical personnel’s emotional needs and to the Ebola patients’ faith traditions. “There’s something about persons of other disciplines gathering together and having those disciplines cooperate, collaborate toward this whole person health perspective,” Grant said. “That took us into another kind of level of care that Emory heretofore has not been about.”

Dr. Arthur Kleinman, a physician and anthropologist at Harvard, said that the growing inclusion of chaplains—religious professionals—in mainstream healthcare isn’t so unusual. He cited the many programs dedicated to spirituality and health at elite universities. “We’ve become more fluid in moving back and forth between values and professions, between technical practices and moral practices,” he said. “And I think it’s not surprising then that rather than separate the sacred and the secular, we’re more comfortable seeing them connected.”


IN ATLANTA, SHORTLY AFTER Brantly was declared Ebola-free, a local news station produced a three-minute segment focusing on the role of divine intercession in his recovery. “Instead of getting down on himself, going into a depression, he looked to a higher power: his faith,” says the reporter, as the camera slowly pans to a church steeple on the Atlanta skyline. “People here in Georgia, the U.S., and around the world prayed with him.” The segment, entitled “Power of Prayer,” featured a snippet from Brantly’s press conference, in which he said, “God saved my life—a direct answer to thousands and thousands of prayers.”

Brown-Haithco, who was also interviewed for the segment, was frustrated with the shallow portrait of prayer the segment seemed to offer. If prayer is powerful when a patient recovers, what do we say when a patient prays but still gets sicker? As professionals caring for the critically ill, hospital chaplains are all-too-aware that prayer doesn’t guarantee medical miracles. Prayer is “not just the traditional form of prayer where we have our hands together and we’re on bended knee, praying to a deity,” Brown-Haithco said. “For us, prayer is about accompaniment. It’s about journeying with people in critical and dark times.”

Ultimately, this kind of prayer is the heart of a chaplain’s work: they don’t try to heal people—they leave that to the medical staff. Instead, chaplains simply listen to people who are suffering and give them a place to talk about what they’re experiencing. “I think once that pain is able to be expressed to someone who is able to listen, I often think the pain dissipates,” said John Pollack. “I wouldn’t say that it goes away completely, but I would say that it’s a sharing of the burden.”

Chaplains know that the world does not share Ebola’s burden evenly. “We have been barely touched by Ebola in this country,” said Miller at Nebraska Medical Center. “My pain is that this is very much a crisis in Africa and we don’t see one quarter of the coverage of what’s happening there, except maybe some fear-mongering stuff that we should seal our borders and that will fix the problem—and it won’t.”

Pollack pointed to the high level of medical care that has boosted Ebola survival rates in the U.S. and Europe. “There is a troubling sense of inequity that it’s not also the same case for the people who are suffering with this in West Africa,” he said. He praised the “compassionate response of caregivers,” like the Doctors Without Borders volunteers who traveled to West Africa and the N.I.H.’s own staff who volunteered to serve in the isolation unit. “That’s a tremendously courageous thing to do and it really does come from a place of compassion, which in my view really emanates from God.”

Brown-Haithco agreed. When I asked her where she has seen God, she responded: “Right smack in the middle.” She cited the doctors and nurses at Emory who volunteered to treat Ebola patients even though there was no known cure. “They walked voluntarily into that situation with their own fear,” she said. “But they went anyway.”


Betsy Shirley writes about religion, faith, and social justice. She studies American religious history at Yale Divinity School. Follow her @BetsyShirley.
