Monday, March 19, 2018

A Black History Gem for Today's Struggle: Kwame Ture (Stokely Carmichael) On What We Want



What We Want

Stokely Carmichael
One of the tragedies of the struggle against racism is that up to now there has been no national organization which could speak to the growing militancy of young black people in the urban ghetto. There has been only a civil rights movement, whose tone of voice was adapted to an audience of liberal whites. It served as a sort of buffer zone between them and angry young blacks.

None of its so-called leaders could go into a rioting community and be listened to. In a sense, I blame ourselves—together with the mass media—for what has happened in Watts, Harlem, Chicago, Cleveland, Omaha. Each time the people in those cities saw Martin Luther King get slapped, they became angry; when they saw four little black girls bombed to death, they were angrier; and when nothing happened, they were steaming.

We had nothing to offer that they could see, except to go out and be beaten again. We helped to build their frustration.

For too many years, black Americans marched and had their heads broken and got shot. They were saying to the country, “Look, you guys are supposed to be nice guys and we are only going to do what we are supposed to do—why do you beat us up, why don’t you give us what we ask, why don’t you straighten yourselves out?” After years of this, we are at almost the same point—because we demonstrated from a position of weakness. We cannot be expected any longer to march and have our heads broken in order to say to whites: come on, you’re nice guys. For you are not nice guys. We have found you out.
 
An organization which claims to speak for the needs of a community—as does the Student Nonviolent Coordinating Committee—must speak in the tone of that community, not as somebody else’s buffer zone. This is the significance of black power as a slogan. For once, black people are going to use the words they want to use—not just the words whites want to hear. And they will do this no matter how often the press tries to stop the use of the slogan by equating it with racism or separatism.

An organization which claims to be working for the needs of a community—as SNCC does—must work to provide that community with a position of strength from which to make its voice heard. This is the significance of black power beyond the slogan.

BLACK POWER can be clearly defined for those who do not attach the fears of white America to their questions about it. We should begin with the basic fact that black Americans have two problems: they are poor and they are black. All other problems arise from this two-sided reality: lack of education, the so-called apathy of black men. Any program to end racism must address itself to that double reality.

Almost from its beginning, SNCC sought to address itself to both conditions with a program aimed at winning political power for impoverished Southern blacks. We had to begin with politics because black Americans are a propertyless people in a country where property is valued above all. We had to work for power, because this country does not function by morality, love, and nonviolence, but by power. Thus we determined to win political power, with the idea of moving on from there into activity that would have economic effects. With power, the masses could make or participate in making the decisions which govern their destinies, and thus create basic change in their day-to-day lives.

But if political power seemed to be the key to self-determination, it was also obvious that the key had been thrown down a deep well many years earlier. Disenfranchisement, maintained by racist terror, made it impossible to talk about organizing for political power in 1960. The right to vote had to be won, and SNCC workers devoted their energies to this from 1961 to 1965. They set up voter registration drives in the Deep South. They created pressure for the vote by holding mock elections in Mississippi in 1963 and by helping to establish the Mississippi Freedom Democratic Party (MFDP) in 1964. That struggle was eased, though not won, with the passage of the 1965 Voting Rights Act. SNCC workers could then address themselves to the question: “Who can we vote for, to have our needs met—how do we make our vote meaningful?”

SNCC had already gone to Atlantic City for recognition of the Mississippi Freedom Democratic Party by the Democratic convention and been rejected; it had gone with the MFDP to Washington for recognition by Congress and been rejected. In Arkansas, SNCC helped thirty Negroes to run for School Board elections; all but one were defeated, and there was evidence of fraud and intimidation sufficient to cause their defeat. In Atlanta, Julian Bond ran for the state legislature and was elected—twice—and unseated—twice. In several states, black farmers ran in elections for agricultural committees which make crucial decisions concerning land use, loans, etc. Although they won places on a number of committees, they never gained the majorities needed to control them.

ALL OF THE EFFORTS were attempts to win black power. Then, in Alabama, the opportunity came to see how blacks could be organized on an independent party basis. An unusual Alabama law provides that any group of citizens can nominate candidates for county office and, if they win 20 per cent of the vote, may be recognized as a county political party. The same then applies on a state level. SNCC went to organize in several counties such as Lowndes, where black people—who form 80 per cent of the population and have an average annual income of $943—felt they could accomplish nothing within the framework of the Alabama Democratic Party because of its racism and because the qualifying fee for this year’s elections was raised from $50 to $500 in order to prevent most Negroes from becoming candidates. On May 3, five new county “freedom organizations” convened and nominated candidates for the offices of sheriff, tax assessor, members of the school boards. 

These men and women are up for election in November—if they live until then. 

Their ballot symbol is the black panther: a bold, beautiful animal, representing the strength and dignity of black demands today. A man needs a black panther on his side when he and his family must endure—as hundreds of Alabamians have endured—loss of job, eviction, starvation, and sometimes death, for political activity. He may also need a gun and SNCC reaffirms the right of black men everywhere to defend themselves when threatened or attacked. 

As for initiating the use of violence, we hope that such programs as ours will make that unnecessary; but it is not for us to tell black communities whether they can or cannot use any particular form of action to resolve their problems. Responsibility for the use of violence by black men, whether in self defense or initiated by them, lies with the white community.

Stokely Carmichael (aka Kwame Ture) in 1966

This is the specific historical experience from which SNCC’s call for “black power” emerged on the Mississippi march last July. But the concept of “black power” is not a recent or isolated phenomenon: It has grown out of the ferment of agitation and activity by different people and organizations in many black communities over the years. Our last year of work in Alabama added a new concrete possibility. In Lowndes county, for example, black power will mean that if a Negro is elected sheriff, he can end police brutality. If a black man is elected tax assessor, he can collect and channel funds for the building of better roads and schools serving black people—thus advancing the move from political power into the economic arena. In such areas as Lowndes, where black men have a majority, they will attempt to use it to exercise control.

This is what they seek: control.

Where Negroes lack a majority, black power means proper representation and sharing of control. It means the creation of power bases from which black people can work to change statewide or nationwide patterns of oppression through pressure from strength—instead of weakness.

Politically, black power means what it has always meant to SNCC: the coming-together of black people to elect representatives and to force those representatives to speak to their needs. It does not mean merely putting black faces into office. A man or woman who is black and from the slums cannot be automatically expected to speak to the needs of black people. Most of the black politicians we see around the country today are not what SNCC means by black power. The power must be that of a community, and emanate from there.

SNCC today is working in both North and South on programs of voter registration and independent political organizing. In some places, such as Alabama, Los Angeles, New York, Philadelphia, and New Jersey, independent organizing under the black panther symbol is in progress. The creation of a national “black panther party” must come about; it will take time to build, and it is much too early to predict its success. We have no infallible master plan and we make no claim to exclusive knowledge of how to end racism; different groups will work in their own different ways. SNCC cannot spell out the full logistics of self-determination but it can address itself to the problem by helping black communities define their needs, realize their strength, and go into action along a variety of lines which they must choose for themselves.

Without knowing all the answers, it can address itself to the basic problem of poverty; to the fact that in Lowndes County, 86 white families own 90 per cent of the land. What are black people in that county going to do for jobs, where are they going to get money? There must be reallocation of land, of money.

ULTIMATELY, the economic foundations of this country must be shaken if black people are to control their lives. The colonies of the United States—and this includes the black ghettoes within its borders, north and south—must be liberated. For a century, this nation has been like an octopus of exploitation, its tentacles stretching from Mississippi and Harlem to South America, the Middle East, southern Africa, and Vietnam; the form of exploitation varies from area to area but the essential result has been the same—a powerful few have been maintained and enriched at the expense of the poor and voiceless colored masses. This pattern must be broken. As its grip loosens here and there around the world, the hopes of black Americans become more realistic. For racism to die, a totally different America must be born.

This is what the white society does not wish to face; this is why that society prefers to talk about integration. But integration speaks not at all to the problem of poverty, only to the problem of blackness. Integration today means the man who “makes it,” leaving his black brothers behind in the ghetto as fast as his new sports car will take him. It has no relevance to the Harlem wino or to the cotton-picker making three dollars a day. As a lady I know in Alabama once said, “the food that Ralph Bunche eats doesn’t fill my stomach.”

Integration, moreover, speaks to the problem of blackness in a despicable way. As a goal, it has been based on complete acceptance of the fact that in order to have a decent house or education, blacks must move into a white neighborhood or send their children to a white school. This reinforces, among both black and white, the idea that “white” is automatically better and “black” is by definition inferior. This is why integration is a subterfuge for the maintenance of white supremacy. It allows the nation to focus on a handful of Southern children who get into white schools, at great price, and to ignore the 94 per cent who are left behind in unimproved all-black schools.

Such situations will not change until black people have power—to control their own school boards, in this case.

Then Negroes become equal in a way that means something, and integration ceases to be a one-way street. Then integration doesn’t mean draining skills and energies from the ghetto into white neighborhoods; then it can mean white people moving from Beverly Hills into Watts, white people joining the Lowndes County Freedom Organization. Then integration becomes relevant.

Last April, before the furor over black power, Christopher Jencks wrote in a New Republic article on white Mississippi’s manipulation of the anti-poverty program:
The war on poverty has been predicated on the notion that there is such a thing as a community which can be defined geographically and mobilized for a collective effort to help the poor. This theory has no relationship to reality in the Deep South. In every Mississippi county there are two communities. Despite all the pious platitudes of the moderates on both sides, these two communities habitually see their interests in terms of conflict rather than cooperation. Only when the Negro community can muster enough political, economic and professional strength to compete on somewhat equal terms, will Negroes believe in the possibility of true cooperation and whites accept its necessity. En route to integration, the Negro community needs to develop greater independence—a chance to run its own affairs and not cave in whenever “the man” barks…Or so it seems to me, and to most of the knowledgeable people with whom I talked in Mississippi. To OEO, this judgment may sound like black nationalism…
MR. JENCKS, a white reporter, perceived the reason why America’s anti-poverty program has been a sick farce in both North and South. In the South, it is clearly racism which prevents the poor from running their own programs; in the North, it more often seems to be politicking and bureaucracy. But the results are not so different: In the North, non-whites make up 42 per cent of all families in metropolitan “poverty areas” and only 6 per cent of families in areas classified as not poor. SNCC has been working with local residents in Arkansas, Alabama, and Mississippi to achieve control by the poor of the program and its funds; it has also been working with groups in the North, and the struggle is no less difficult. Behind it all is a federal government which cares far more about winning the war on the Vietnamese than the war on poverty; which has put the poverty program in the hands of self-serving politicians and bureaucrats rather than the poor themselves; which is unwilling to curb the misuse of white power but quick to condemn black power.

To most whites, black power seems to mean that the Mau Mau are coming to the suburbs at night. The Mau Mau are coming, and whites must stop them. Articles appear about plots to “get Whitey,” creating an atmosphere in which “law and order must be maintained.” Once again, responsibility is shifted from the oppressor to the oppressed. Other whites chide, “Don’t forget—you’re only 10 per cent of the population; if you get too smart, we’ll wipe you out.” If they are liberals, they complain, “what about me?—don’t you want my help any more?”

These are people supposedly concerned about black Americans, but today they think first of themselves, of their feelings of rejection. Or they admonish, “you can’t get anywhere without coalitions,” when there is in fact no group at present with whom to form a coalition in which blacks will not be absorbed and betrayed. Or they accuse us of “polarizing the races” by our calls for black unity, when the true responsibility for polarization lies with whites who will not accept their responsibility as the majority power for making the democratic process work.

White America will not face the problem of color, the reality of it. The well-intended say: “We’re all human, everybody is really decent, we must forget color.” But color cannot be “forgotten” until its weight is recognized and dealt with. White America will not acknowledge that the ways in which this country sees itself are contradicted by being black—and always have been. Whereas most of the people who settled this country came here for freedom or for economic opportunity, blacks were brought here to be slaves.

When the Lowndes County Freedom Organization chose the black panther as its symbol, it was christened by the press “the Black Panther Party”—but the Alabama Democratic Party, whose symbol is a rooster, has never been called the White Cock Party. No one ever talked about “white power” because power in this country is white. All this adds up to more than merely identifying a group phenomenon by some catchy name or adjective. The furor over that black panther reveals the problems that white America has with color and sex; the furor over “black power” reveals how deep racism runs and the great fear which is attached to it.

WHITES WILL NOT SEE that I, for example, as a person oppressed because of my blackness, have common cause with other blacks who are oppressed because of blackness. This is not to say that there are no white people who see things as I do, but that it is black people I must speak to first. It must be the oppressed to whom SNCC addresses itself primarily, not to friends from the oppressing group.

From birth, black people are told a set of lies about themselves. We are told that we are lazy—yet I drive through the Delta area of Mississippi and watch black people picking cotton in the hot sun for fourteen hours. We are told, “If you work hard, you’ll succeed”—but if that were true, black people would own this country. We are oppressed because we are black—not because we are ignorant, not because we are lazy, not because we’re stupid (and got good rhythm), but because we’re black.

I remember that when I was a boy, I used to go to see Tarzan movies on Saturday. White Tarzan used to beat up the black natives. I would sit there yelling, “Kill the beasts, kill the savages, kill ’em!” I was saying: Kill me. It was as if a Jewish boy watched Nazis taking Jews off to concentration camps and cheered them on. Today, I want the chief to beat hell out of Tarzan and send him back to Europe. But it takes time to become free of the lies and their shaming effect on black minds. It takes time to reject the most important lie: that black people inherently can’t do the same things white people can do, unless white people help them.

The need for psychological equality is the reason why SNCC today believes that blacks must organize in the black community. Only black people can convey the revolutionary idea that black people are able to do things themselves. Only they can help create in the community an aroused and continuing black consciousness that will provide the basis for political strength. In the past, white allies have furthered white supremacy without the whites involved realizing it—or wanting it, I think. Black people must do things for themselves; they must get poverty money they will control and spend themselves; they must conduct tutorial programs themselves so that black children can identify with black people. This is one reason Africa has such importance: The reality of black men ruling their own nations gives blacks elsewhere a sense of possibility, of power, which they do not now have.

This does not mean we don’t welcome help, or friends. But we want the right to decide whether anyone is, in fact, our friend. In the past, black Americans have been almost the only people whom everybody and his momma could jump up and call their friends. We have been tokens, symbols, objects—as I was in high school to many young whites, who liked having “a Negro friend.” We want to decide who is our friend, and we will not accept someone who comes to us and says: “If you do X, Y, and Z, then I’ll help you.” We will not be told whom we should choose as allies. We will not be isolated from any group or nation except by our own choice. We cannot have the oppressors telling the oppressed how to rid themselves of the oppressor.

I HAVE SAID that most liberal whites react to “black power” with the question, What about me?, rather than saying: Tell me what you want me to do and I’ll see if I can do it. There are answers to the right question. One of the most disturbing things about almost all white supporters of the movement has been that they are afraid to go into their own communities—which is where the racism exists—and work to get rid of it. They want to run from Berkeley to tell us what to do in Mississippi; let them look instead at Berkeley. They admonish blacks to be nonviolent; let them preach nonviolence in the white community. They come to teach me Negro history; let them go to the suburbs and open up freedom schools for whites. Let them work to stop America’s racist foreign policy; let them press this government to cease supporting the economy of South Africa.

There is a vital job to be done among poor whites. We hope to see, eventually, a coalition between poor blacks and poor whites. That is the only coalition which seems acceptable to us, and we see such a coalition as the major internal instrument of change in American society. SNCC has tried several times to organize poor whites; we are trying again now, with an initial training program in Tennessee. It is purely academic today to talk about bringing poor blacks and whites together, but the job of creating a poor-white power bloc must be attempted. The main responsibility for it falls upon whites. Black and white can work together in the white community where possible; it is not possible, however, to go into a poor Southern town and talk about integration. Poor whites everywhere are becoming more hostile—not less—partly because they see the nation’s attention focussed on black poverty and nobody coming to them. Too many young middle-class Americans, like some sort of Pepsi generation, have wanted to come alive through the black community; they’ve wanted to be where the action is—and the action has been in the black community.

Black people do not want to “take over” this country. They don’t want to “get whitey”; they just want to get him off their backs, as the saying goes. It was for example the exploitation by Jewish landlords and merchants which first created black resentment toward Jews—not Judaism. The white man is irrelevant to blacks, except as an oppressive force. Blacks want to be in his place, yes, but not in order to terrorize and lynch and starve him. They want to be in his place because that is where a decent life can be had.

But our vision is not merely of a society in which all black men have enough to buy the good things of life. When we urge that black money go into black pockets, we mean the communal pocket. We want to see money go back into the community and used to benefit it. We want to see the cooperative concept applied in business and banking. We want to see black ghetto residents demand that an exploiting store keeper sell them, at minimal cost, a building or a shop that they will own and improve cooperatively; they can back their demand with a rent strike, or a boycott, and a community so unified behind them that no one else will move into the building or buy at the store.

The society we seek to build among black people, then, is not a capitalist one. It is a society in which the spirit of community and humanistic love prevail.

The word love is suspect; black expectations of what it might produce have been betrayed too often. But those were expectations of a response from the white community, which failed us.

The love we seek to encourage is within the black community, the only American community where men call each other “brother” when they meet. We can build a community of love only where we have the ability and power to do so: among blacks.

AS FOR WHITE AMERICA, perhaps it can stop crying out against “black supremacy,” “black nationalism,” “racism in reverse,” and begin facing reality. The reality is that this nation, from top to bottom, is racist; that racism is not primarily a problem of “human relations” but of an exploitation maintained—either actively or through silence—by the society as a whole. Camus and Sartre have asked, can a man condemn himself? Can whites, particularly liberal whites, condemn themselves? Can they stop blaming us, and blame their own system? Are they capable of the shame which might become a revolutionary emotion?

We have found that they usually cannot condemn themselves, and so we have done it. But the rebuilding of this society, if at all possible, is basically the responsibility of whites—not blacks.

We won’t fight to save the present society, in Vietnam or anywhere else. We are just going to work, in the way we see fit, and on goals we define, not for civil rights but for all our human rights.
----------------
Black Power December 1, 1966

Monday, March 05, 2018

The Unwelcome Revival of ‘Race Science’


Its defenders claim to be standing up for uncomfortable truths, but race science is still as bogus as ever.

The claim that there is a link between race and intelligence is the main tenet of what is known as “race science” or, in many cases, “scientific racism”. Race scientists claim there are evolutionary bases for disparities in social outcomes – such as life expectancy, educational attainment, wealth, and incarceration rates – between racial groups. In particular, many of them argue that black people fare worse than white people because they tend to be less naturally intelligent.

Although race science has been repeatedly debunked by scholarly research, in recent years it has made a comeback. Many of the keenest promoters of race science today are stars of the “alt-right”, who like to use pseudoscience to lend intellectual justification to ethno-nationalist politics. If you believe that poor people are poor because they are inherently less intelligent, then it is easy to leap to the conclusion that liberal remedies, such as affirmative action or foreign aid, are doomed to fail.

There are scores of recent examples of rightwingers banging the drum for race science. In July 2016, for example, Steve Bannon, who was then Breitbart boss and would go on to be Donald Trump’s chief strategist, wrote an article in which he suggested that some black people who had been shot by the police might have deserved it. “There are, after all, in this world, some people who are naturally aggressive and violent,” Bannon wrote, evoking one of scientific racism’s ugliest contentions: that black people are more genetically predisposed to violence than others.

One of the people behind the revival of race science was, not long ago, a mainstream figure. In 2014, Nicholas Wade, a former New York Times science correspondent, wrote what must rank as the most toxic book on race science to appear in the last 20 years. In A Troublesome Inheritance, he repeated three race-science shibboleths: that the notion of “race” corresponds to profound biological differences among groups of humans; that human brains evolved differently from race to race; and that this is supported by different racial averages in IQ scores.

Wade’s book prompted 139 of the world’s leading population geneticists and evolutionary theorists to sign a letter in the New York Times accusing Wade of misappropriating research from their field, and several academics offered more detailed critiques. The University of Chicago geneticist Jerry Coyne described it as “simply bad science”. Yet some on the right have, perhaps unsurprisingly, latched on to Wade’s ideas, rebranding him as a paragon of intellectual honesty who had been silenced not by experts, but by political correctness.

“That attack on my book was purely political,” Wade told Stefan Molyneux, one of the most popular promoters of the alt-right’s new scientific racism. They were speaking a month after Trump’s election on Molyneux’s YouTube show, whose episodes have been viewed tens of millions of times. Wade continued: “It had no scientific basis whatever and it showed the more ridiculous side of this herd belief.”

Another of Molyneux’s recent guests was the political scientist Charles Murray, who co-authored The Bell Curve. The book argued that poor people, and particularly poor black people, were inherently less intelligent than white or Asian people. When it was first published in 1994, it became a New York Times bestseller, but over the next few years it was picked to pieces by academic critics.

As a frequent target for protest on college campuses, Murray has become a figurehead for conservatives who want to portray progressives as unthinking hypocrites who have abandoned the principles of open discourse that underwrite a liberal society. And this logic has prompted some mainstream cultural figures to embrace Murray as an icon of scientific debate, or at least as an emblem of their own openness to the possibility that the truth can, at times, be uncomfortable. Last April, Murray appeared on the podcast of the popular nonfiction author Sam Harris. Murray used the platform to claim his liberal academic critics “lied without any apparent shadow of guilt because, I guess, in their own minds, they thought they were doing the Lord’s work.” (The podcast episode was entitled “Forbidden knowledge”.)


Students in Vermont turn their backs to Charles Murray during a lecture in March last year. 

In the past, race science has shaped not only political discourse, but also public policy. The year after The Bell Curve was published, in the lead-up to a Republican congress slashing benefits for poorer Americans, Murray gave expert testimony before a Senate committee on welfare reform; more recently, congressman Paul Ryan, who helped push the Republicans’ latest tax cuts for the wealthy, has claimed Murray as an expert on poverty.

Now, as race science leaches back into mainstream discourse, it has also been mainlined into the upper echelons of the US government through figures such as Bannon. The UK has not been spared this revival: the London Student newspaper recently exposed a semi-clandestine conference on intelligence and genetics held for the last three years at UCL without the university’s knowledge. One of the participants was the 88-year-old Ulster-based evolutionary psychologist Richard Lynn, who has described himself as a “scientific racist”.

One of the reasons scientific racism hasn’t gone away is that the public hears more about the racism than it does about the science. This has left an opening for people such as Murray and Wade, in conjunction with their media boosters, to hold themselves up as humble defenders of rational enquiry. With so much focus on their apparent bias, we’ve done too little to discuss the science. Which raises the question: why, exactly, are the race scientists wrong?

Race, like intelligence, is a notoriously slippery concept. Individuals often share more genes with members of other races than with members of their own race. Indeed, many academics have argued that race is a social construct – which is not to deny that there are groups of people (“population groups”, in the scientific nomenclature) that share a high amount of genetic inheritance. Race science therefore starts out on treacherous scientific footing.

The supposed science of race is at least as old as slavery and colonialism, and it was considered conventional wisdom in many western countries until 1945. Though it was rejected by a new generation of scholars and humanists after the Holocaust, it began to bubble up again in the 1970s, and has returned to mainstream discourse every so often since then.

In 1977, during my final year in state high school in apartheid South Africa, a sociology lecturer from the local university addressed us and then took questions. He was asked whether black people were as intelligent as white people. No, he said: IQ tests show that white people are more intelligent. He was referring to a paper published in 1969 by Arthur Jensen, an American psychologist who claimed that IQ was 80% a product of our genes rather than our environments, and that the differences between black and white IQs were largely rooted in genetics.

In apartheid South Africa, the idea that each race had its own character, personality traits and intellectual potential was part of the justification for the system of white rule. The subject of race and IQ was similarly politicised in the US, where Jensen’s paper was used to oppose welfare schemes, such as the Head Start programme, which were designed to lift children out of poverty.

But the paper met with an immediate and overwhelmingly negative reaction – “an international firestorm,” the New York Times called it 43 years later, in Jensen’s obituary – especially on American university campuses, where academics issued dozens of rebuttals, and students burned him in effigy.

The recent revival of ideas about race and IQ began with a seemingly benign scientific observation. In 2005, Steven Pinker, one of the world’s most prominent evolutionary psychologists, began promoting the view that Ashkenazi Jews are innately particularly intelligent – first in a lecture to a Jewish studies institute, then in a lengthy article in the liberal American magazine The New Republic the following year. This claim has long been the smiling face of race science; if it is true that Jews are naturally more intelligent, then it’s only logical to say that others are naturally less so.

The background to Pinker’s essay was a 2005 paper entitled “Natural history of Ashkenazi intelligence”, written by a trio of anthropologists at the University of Utah. The anthropologists argued that high IQ scores among Ashkenazi Jews indicated that they had evolved to be smarter than anyone else (including other groups of Jews).

This evolutionary development supposedly took root between 800 and 1650 AD, when Ashkenazis, who primarily lived in Europe, were pushed by antisemitism into money-lending, which was stigmatised among Christians. This rapid evolution was possible, the paper argued, in part because the practice of not marrying outside the Jewish community meant a “very low inward gene flow”. This was also a factor behind the disproportionate prevalence in Ashkenazi Jews of genetic diseases such as Tay-Sachs and Gaucher’s, which the researchers claimed were a byproduct of natural selection for higher intelligence; those carrying the gene variants, or alleles, for these diseases were said to be smarter than the rest.

Pinker followed this logic in his New Republic article, and elsewhere described the Ashkenazi paper as “thorough and well-argued”. He went on to castigate those who doubted the scientific value of talking about genetic differences between races, and claimed that “personality traits are measurable, heritable within a group and slightly different, on average, between groups”.

In subsequent years, Nicholas Wade, Charles Murray, Richard Lynn, the increasingly popular Canadian psychologist Jordan Peterson and others have all piled in on the Jewish intelligence thesis, using it as ballast for their views that different population groups inherit different mental capacities. Another member of this chorus is the journalist Andrew Sullivan, who was one of the loudest cheerleaders for The Bell Curve in 1994, featuring it prominently in The New Republic, which he edited at the time. He returned to the fray in 2011, using his popular blog, The Dish, to promote the view that population groups had different innate potentials when it came to intelligence.

Sullivan noted that the differences between Ashkenazi and Sephardic Jews were “striking in the data”. It was a prime example of the rhetoric of race science, whose proponents love to claim that they are honouring the data, not political commitments. The far right has even rebranded race science with an alternative name that sounds like it was taken straight from the pages of a university course catalogue: “human biodiversity”.

A common theme in the rhetoric of race science is that its opponents are guilty of wishful thinking about the nature of human equality. “The IQ literature reveals that which no one would want to be the case,” Peterson told Molyneux on his YouTube show recently. Even the prominent social scientist Jonathan Haidt has criticised liberals as “IQ deniers”, who reject the truth of inherited IQ difference between groups because of a misguided commitment to the idea that social outcomes depend entirely on nurture, and are therefore mutable.

Defenders of race science claim they are simply describing the facts as they are – and the truth isn’t always comfortable. “We remain the same species, just as a poodle and a beagle are of the same species,” Sullivan wrote in 2013. “But poodles, in general, are smarter than beagles, and beagles have a much better sense of smell.”

The race “science” that has re-emerged into public discourse today – whether in the form of outright racism against black people, or supposedly friendlier claims of Ashkenazis’ superior intelligence – usually involves at least one of three claims, each of which has no grounding in scientific fact.

The first claim is that when white Europeans’ Cro-Magnon ancestors arrived on the continent 45,000 years ago, they faced more trying conditions than in Africa. Greater environmental challenges led to the evolution of higher intelligence. Faced with the icy climate of the north, Richard Lynn wrote in 2006, “less intelligent individuals and tribes would have died out, leaving as survivors the more intelligent”.

Set aside for a moment the fact that agriculture, towns and alphabets first emerged in Mesopotamia, a region not known for its cold spells. There is ample scientific evidence of modern intelligence in prehistoric sub-Saharan Africa. In the past 15 years, cave finds along the South African Indian Ocean coastline have shown that, between 70,000 and 100,000 years ago, biologically modern humans were carefully blending paint by mixing ochre with bone-marrow fat and charcoal, fashioning beads for self-adornment, and making fish hooks, arrows and other sophisticated tools, sometimes by heating them to 315C (600F). Those studying the evidence, such as the South African archaeologist Christopher Henshilwood, argue that these were intelligent, creative people – just like us. As he put it: “We’re pushing back the date of symbolic thinking in modern humans – far, far back.”
 A 77,000-year-old piece of red ochre with a deliberately engraved design discovered at Blombos Cave, South Africa.  
 
A second plank of the race science case goes like this: human bodies continued to evolve, at least until recently – with different groups developing different skin colours, predispositions to certain diseases, and things such as lactose tolerance. So why wouldn’t human brains continue evolving, too?

The problem here is that race scientists are not comparing like with like. Most of these physical changes involve single gene mutations, which can spread throughout a population in a relatively short span of evolutionary time. By contrast, intelligence – even the rather specific version measured by IQ – involves a network of potentially thousands of genes, which probably takes at least 100 millennia to evolve appreciably.

Given that so many genes, operating in different parts of the brain, contribute in some way to intelligence, it is hardly surprising that there is scant evidence of cognitive advance, at least over the last 100,000 years. The American palaeoanthropologist Ian Tattersall, widely acknowledged as one of the world’s leading experts on Cro-Magnons, has said that long before humans left Africa for Asia and Europe, they had already reached the end of the evolutionary line in terms of brain power. “We don’t have the right conditions for any meaningful biological evolution of the species,” he told an interviewer in 2000.

In fact, when it comes to potential differences in intelligence between groups, one of the remarkable dimensions of the human genome is how little genetic variation there is. DNA research conducted in 1987 suggested a common, African ancestor for all humans alive today: “mitochondrial Eve”, who lived around 200,000 years ago. Because of this relatively recent (in evolutionary terms) common ancestry, human beings share a remarkably high proportion of their genes compared to other mammals. The single subspecies of chimpanzee that lives in central Africa, for example, has significantly more genetic variation than does the entire human race.

No one has successfully isolated any genes “for” intelligence at all, and claims in this direction have turned to dust when subjected to peer review. As the Edinburgh University cognitive ageing specialist Prof Ian Deary put it, “It is difficult to name even one gene that is reliably associated with normal intelligence in young, healthy adults.” Intelligence doesn’t come neatly packaged and labelled on any single strand of DNA.

Ultimately, race science depends on a third claim: that different IQ averages between population groups have a genetic basis. If this case falls, the whole edifice – from Ashkenazi exceptionalism to the supposed inevitability of black poverty – collapses with it.

A Brief History of IQ 

Before we can properly assess these claims, it is worth looking at the history of IQ testing. The public perception of IQ tests is that they provide a measure of unchanging intelligence, but when we look deeper, a very different picture emerges. Alfred Binet, the modest Frenchman who invented IQ testing in 1904, knew that intelligence was too complex to be expressed in a single number. “Intellectual qualities … cannot be measured as linear surfaces are measured,” he insisted, adding that giving IQ too much significance “may give place to illusions.”

But Binet’s tests were embraced by Americans who assumed IQ was innate, and used it to inform immigration, segregationist and eugenic policies. Early IQ tests were packed with culturally loaded questions. (“The number of a Kaffir’s legs is: 2, 4, 6, 8?” was one of the questions in IQ tests given to US soldiers during the first world war.) Over time, the tests became less skewed and began to prove useful in measuring some forms of mental aptitude. But this tells us nothing about whether scores are mainly the product of genes or of environment. Further information is needed.

One way to test this hypothesis would be to see if you can increase IQ by learning. If so, this would show that education levels, which are purely environmental, affect the scores. It is now well-known that if you practise IQ tests your score will rise, but other forms of study can also help. In 2008, Swiss researchers recruited 70 students and had half of them practise a memory-based computer game. All 35 of these students saw their IQs increase, and those who practised daily, for the full 19 weeks of the trial, showed the most improvement.

Another way to establish the extent to which IQ is determined by nature rather than nurture would be to find identical twins separated at birth and subsequently raised in very different circumstances. But such cases are unusual, and some of the most influential research – such as the work of the 20th-century English psychologist Cyril Burt, who claimed to have shown that IQ was innate – has been dubious. (After Burt’s death, it was revealed that he had falsified much of his data.)

A genuine twin study was launched by the Minneapolis-based psychologist Thomas Bouchard in 1979, and although he was generously backed by the overtly racist Pioneer Fund, his results make interesting reading. He studied identical twins, who have the same genes, but who were separated close to birth. This allowed him to consider the different contributions that environment and biology played in their development. His idea was that if the twins emerged with the same traits despite being raised in different environments, the main explanation would be genetic.

The problem was that most of his identical twins were adopted into the same kinds of middle-class families. So it was hardly surprising that they ended up with similar IQs. In the relatively few cases where twins were adopted into families of different social classes and education levels, there ended up being huge disparities in IQ – in one case a 20-point gap; in another, 29 points, or the difference between “dullness” and “superior intelligence” in the parlance of some IQ classifications. In other words, where the environments differed substantially, nurture seems to have been a far more powerful influence than nature on IQ.

But what happens when you move from individuals to whole populations? Could nature still have a role in influencing IQ averages? Perhaps the most significant IQ researcher of the last half century is the New Zealander Jim Flynn. IQ tests are calibrated so that the average IQ of all test subjects at any particular time is 100. In the 1990s, Flynn discovered that each generation of IQ tests had to be more challenging if this average was to be maintained. Projecting back 100 years, he found that average IQ scores measured by current standards would be about 70.

Yet people have not changed genetically since then. Instead, Flynn noted, they have become more exposed to abstract logic, which is the sliver of intelligence that IQ tests measure. Some populations are more exposed to abstraction than others, which is why their average IQ scores differ. Flynn found that the different averages between populations were therefore entirely environmental.

This finding has been reinforced by the changes in average IQ scores observed in some populations. The most rapid has been among Kenyan children – a rise of 26.3 points in the 14 years between 1984 and 1998, according to one study. The reason has nothing to do with genes. Instead, researchers found that, in the course of half a generation, nutrition, health and parental literacy had improved.

So, what about the Ashkenazis? Since the 2005 University of Utah paper was published, DNA research by other scientists has shown that Ashkenazi Jews are far less genetically isolated than the paper argued. On the claims that Ashkenazi diseases were caused by rapid natural selection, further research has shown that they were caused by a random mutation. And there is no evidence that those carrying the gene variants for these diseases are any more or less intelligent than the rest of the community.

But it was on IQ that the paper’s case really floundered. Tests conducted in the first two decades of the 20th century routinely showed Ashkenazi Jewish Americans scoring below average. For example, the IQ tests conducted on American soldiers during the first world war found Nordics scoring well above Jews. Carl Brigham, the Princeton professor who analysed the exam data, wrote: “Our figures … would rather tend to disprove the popular belief that the Jew is highly intelligent”. And yet, by the second world war, Jewish IQ scores were above average.

A similar pattern could be seen from studies of two generations of Mizrahi Jewish children in Israel: the older generation had a mean IQ of 92.8, the younger of 101.3. And it wasn’t just a Jewish thing. Chinese Americans recorded average IQ scores of 97 in 1948, and 108.6 in 1990. And the gap between African Americans and white Americans narrowed by 5.5 points between 1972 and 2002.

No one could reasonably claim that there had been genetic changes in the Jewish, Chinese American or African American populations in a generation or two. After reading the University of Utah paper, Harry Ostrer, who headed New York University’s human genetics programme, took the opposite view to Steven Pinker: “It’s bad science – not because it’s provocative, but because it’s bad genetics and bad epidemiology.”


Ten years ago, our grasp of the actual science was firm enough for Craig Venter, the American biologist who led the private effort to decode the human genome, to respond to claims of a link between race and intelligence by declaring: “There is no basis in scientific fact or in the human genetic code for the notion that skin colour will be predictive of intelligence.”

Yet race science maintains its hold on the imagination of the right, and today’s rightwing activists have learned some important lessons from past controversies. Using YouTube in particular, they attack the left-liberal media and academic establishment for its unwillingness to engage with the “facts”, and then employ race science as a political battering ram to push forward their small-state, anti-welfare, anti-foreign-aid agenda.

These political goals have become ever more explicit. When interviewing Nicholas Wade, Stefan Molyneux argued that different social outcomes were the result of different innate IQs among the races – as he put it, high-IQ Ashkenazi Jews and low-IQ black people. Wade agreed, saying that the “role played by prejudice” in shaping black people’s social outcomes “is small and diminishing”, before condemning “wasted foreign aid” for African countries.

Similarly, when Sam Harris, in his podcast interview with Charles Murray, pointed out the troubling fact that The Bell Curve was beloved by white supremacists and asked what the purpose of exploring race-based differences in intelligence was, Murray didn’t miss a beat. Its use, Murray said, came in countering policies, such as affirmative action in education and employment, based on the premise that “everybody is equal above the neck … whether it’s men or women or whether it’s ethnicities”.

Race science isn’t going away any time soon. Its claims can only be countered by the slow, deliberate work of science and education. And they need to be – not only because of their potentially horrible human consequences, but because they are factually wrong. The problem is not, as the right would have it, that these ideas are under threat of censorship or stigmatisation because they are politically inconvenient. Race science is bad science. Or rather, it is not science at all.

Friday, November 17, 2017

The Racist/Sexist/Sordid History Behind NYC's Statue of J. Marion Sims, the so-called Father of Gynecology

Monumental Error

Will New York City finally tear down a statue?

harpers.org 

In 1899, the art critic Layton Crippen complained in the New York Times that private donors and committees had been permitted to run amok, erecting all across the city a large number of “painfully ugly monuments.” The very worst statues had been dumped in Central Park. “The sculptures go as far toward spoiling the Park as it is possible to spoil it,” he wrote. Even worse, he lamented, no organization had “power of removal” to correct the damage that was being done.

Crippen criticized more than two dozen statues for their aesthetic failures, mocking Beethoven’s frown and the epicene figure of Bertel Thorvaldsen. Yet he took pains to single out the bronze monument to J. Marion Sims, the so-called Father of Gynecology, for its foolish “combination toga-overcoat.” Would visitors really be so hurt, Crippen asked, if the Sims statue, then situated in Manhattan’s Bryant Park, was removed?

A little more than a century later — after it had been refurbished and moved to Central Park — the Sims statue has once again prompted angry calls for its removal. This time, the complaint is not that it is ugly. Rather, East Harlem residents learned that their neighborhood housed a monument to a doctor whose renown stems almost exclusively from a series of experimental surgeries that he had performed, without the use of anesthesia, on a number of young slave women between 1845 and 1849.

Sims was attempting to discover a cure for vesicovaginal fistula (VVF), a common affliction that is caused by prolonged obstructed labor. The timing, nature, and purpose of his experiments make for an impossibly tangled knot of ethical dilemmas. Most prominently, they raise the issue of medical consent. Did Sims obtain consent from his subjects, as he later claimed — and if he did, could a slave truly provide it? What woman would agree to be operated on, without anesthesia, upwards of thirty times? On the other hand, given the horrific nature of VVF, wouldn’t most women endure additional horrors in pursuit of a cure? And without a willing patient, would delicate surgery on a wound barely visible to the eye even be possible? What of the fact that if Sims managed to cure the women, they would be promptly returned to the plantations, where little awaited them but backbreaking work, use as breeders of additional slaves, and state-sanctioned rape?

All these questions came to the surface a couple of months ago, when activists long opposed to the Sims statue linked it to the Confederate war memorials being torn down in cities across America. They staged a protest in front of the statue in August, and an image from the event — four women of color in blood-soaked gowns, representing Sims’s experimental subjects — went viral. Newspaper accounts across the country soon followed. Would the monument to Sims be the very first in New York City to go to the chopping block?
That, too, is a more complicated question than it seems. What Crippen noted in 1899 is still true today. Even minor alterations to works of public art in New York City are subject to an arcane system of approval, and there is no formal mechanism in place for citizens to challenge the decisions of earlier times. The governing assumption is that if a memorial has realized permanent form, it represents a consensus that should be preserved. Not a single statue in the history of New York City has ever been permanently removed as a result of official action.1

1 Two partial exceptions to this rule are Richard Serra’s Tilted Arc, which was removed in 1989, and Frederick MacMonnies’s Civic Virtue Triumphant over Unrighteousness, which was relocated to Green-Wood Cemetery in 2012. In both cases, however, city officials insisted that the decision was practical: Tilted Arc was removed because it was said to block foot traffic, and Civic Virtue for restoration purposes.


 Illustration of Dr. J. Marion Sims with Anarcha by Robert Thom. Courtesy of Southern Illinois University School of Medicine, Pearson Museum. 

In 1845, Marion Sims was a thirty-two-year-old doctor with ten years of experience in the South’s Black Belt. He served Alabama’s free black population; he contracted to care for the slaves of local plantation owners; and his office and home in downtown Montgomery included a small backyard facility he called the Negro Hospital. Tending to the medical needs of current and former slaves was an economic necessity in an area where two thirds of the population was black. Indeed, Sims was a slaveholder himself: he had accepted an enslaved couple as a wedding present from his in-laws, and he came to own as many as seventeen slaves before he moved to New York City in 1853. Letters to his wife (“Negroes and children always expect liberal presents on Christmas”) betray a rank paternalism typical of antebellum Southerners.

Medicine had been a default vocation rather than a calling. Sims’s mother steered him toward the cloth, his father toward the law, and the latter complained, when his son settled on medicine, that there was no “honor” or “science” in it. Sims attended medical schools in South Carolina and Philadelphia, and soon settled on surgical innovation as the best path to a lucrative practice and a permanent legacy. At the time, this involved learning new procedures from medical journals, and Sims made a name for himself by treating clubfoot and crossed eyes.

More grandiosely, he announced that he had devised a better method for dislodging foreign objects from the ear, and that he had discovered the cure for infant lockjaw. He would later apologize for the first claim, acknowledging that others had preceded him in syringing the ear. But he went to his grave insisting that his cure for lockjaw was his “first great discovery in medicine.” He couldn’t have been more wrong. Zealous in his belief that most maladies were by nature mechanical, Sims had attempted to cure a number of suffering slave babies by prying up their skull plates with an awl. Shortly after Sims died, in 1883, scientists identified lockjaw as a bacterial infection, also known as tetanus.

By Sims’s account — as related in The Story of My Life (1885), published posthumously and excerpted in this magazine — his next great discovery came just two months after the first. In the summer of 1845, he was asked to treat three young female slaves with holes inside their vaginas. A few days after delivery, fistula sufferers experience a sloughing away of dead tissue, most often leaving an opening between the vaginal canal and the bladder. Once afflicted, women are cursed with a perpetual leak of urine from their vaginas, frequently resulting in severe ulceration of the vulva and upper thighs.

These were the first cases of vesico-vaginal fistula (VVF) that Sims had encountered. That is not surprising, given his later confession that he had initially “hated investigating the organs of the female pelvis.” A little research revealed that doctors throughout history had been stymied by the affliction. The basic problem, surgically speaking, was that you had little room to see the wound you were attempting to close, let alone to stitch sutures in the secreting tissue. Sims concluded that all three of the women were untreatable, but the last, having traveled from Macon County, was permitted to spend the night in his Negro Hospital, the idea being that she would leave by train the following afternoon.

There the story might have ended — except that the next morning, Sims was called to attend to an emergency. A white seamstress had dislocated her uterus in a fall from her horse. Sims grudgingly made his way to her home and placed her facedown with her buttocks awkwardly elevated in what doctors called the knee-chest position. The idea was to vigorously push her uterus back into place. Sims was first surprised when the woman’s entire womb seemed to vanish, leaving his fingers flailing about in an apparent void — yet somehow this worked: her pain was immediately relieved. He was surprised again when the woman, lowering herself onto her side, produced a blast of air from her vagina.

The seamstress was mortified, but Sims rejoiced. The accident explained what had happened — and offered great promise besides. The position of her body and the action of his fingers against her perineum and the rear of the vaginal wall caused an inrush of air that inflated her vagina. Sims immediately thought of the young woman still waiting for a train in his backyard clinic. Might not the ballooning action of the vagina enable a doctor to clearly observe a fistula, and thereby cure a condition that had baffled the world’s leading medical minds for centuries?

Sims rushed home, stopping on the way to purchase a large pewter spoon that he believed would function more efficiently than his fingers. Two medical students assisted him with the woman — her name was either Lucy or Betsey, depending on how you read Sims’s account — and as soon as they put her in the knee-chest position and pulled open her buttocks, her vagina began to dilate with a puffing sound. Sims sat down behind her, bent the spoon, and turned it around to insert it handle first. He elevated her perineum and looked inside. He could see the fistula as plainly as a hole in a sheet of paper. Years later, Sims described the moment as if he had summited a mountain or landed on the surface of the moon.

“I saw everything,” he wrote, “as no man had ever seen before.”

This was the first of many epiphanies in a life that would come to be characterized, by Sims himself and by others after him, as having proceeded along the lines of a fantastical romance. For the next four years, the fairy tale goes, Sims labored to cure those first three slaves, along with a number of other fistula sufferers whom he sought out in neighboring communities. Progress was incremental, levying a tax on the young physician’s soul and wallet (he paid the cost of room and board for his enslaved subjects). Finally, in 1849, he managed to successfully close a fistula — and soon thereafter, he grandly claimed, he cured all the slaves in his care. At least some portion of the fame he coveted now came his way: the tool and the position he used to cure fistulas have been known ever since as the Sims speculum and the Sims position.

What followed was a period of collapse, probably from dysentery. Assuming he was gravely ill, and concerned that he “might die without the world’s reaping the benefits of my labors,” Sims published “On the Treatment of Vesico-Vaginal Fistula” in The American Journal of the Medical Sciences in 1852. The paper was an immediate success. Sims claimed that his surgery was easier to perform and produced more consistent results than had any previous techniques. Citing health reasons (Alabama colleagues thought him more ambitious than ill), he moved to New York City the next year, and soon proposed establishing Woman’s Hospital. This would be one of the first institutions in the world devoted to those conditions “of the female pelvis” that he had once deplored.

A pattern emerged. As Sims saw it, he would be presented with a series of women suffering from mysterious maladies — and, devising his own cures or improving on the cures of others, he would conquer each illness in turn. In addition to being crowned the Father of Gynecology, Sims attached his name to dozens of tools and procedures. His fame became international when he spent the Civil War years abroad, spreading the gospel of his work and tending to the medical needs of empresses and countesses. For the rest of his life, he remained a continent-hopping cosmopolite, attending conferences and practicing medicine in New York City, London, Paris, Geneva, and Vienna.

The effort to erect a monument to Sims began less than a month after his death in 1883. A Baltimore physician wrote a letter to the Medical Record, the day’s leading organ for surgeons and doctors, to suggest that a statue be commissioned and erected in Central Park.

The editor agreed. The magazine announced that it would raise the necessary funds from doctors — and from the many women who owed their health and happiness to Sims’s “amelioration of their numerous and distressing ailments.” Prominent surgeons offered pledges and praise, and suggested that a Sims Memorial Fund Committee, made up “partly of gentleman and partly of ladies,” be formed to take charge of the effort.

It was perhaps inevitable that Sims would wind up in bronze. The rhetorical mold had first been cast in 1857, by a woman named Caroline Thompson, who gave a speech to the New York state legislature after being treated by Sims. Boasting a fatality rate near zero, Woman’s Hospital was attempting to expand and become a state institution, and Thompson told legislators that a vote in favor would “build for [them] a monument in the hearts of women more durable than granite.”

The fund drive for the Central Park monument began in 1884. The Medical Record published the name of each donor and the amount of each donation, most often $1, as they came in from across the country. When sufficient funds were raised, the committee hired Ferdinand von Miller II, a German sculptor who lived in an Italian castle. He eagerly set to work, and the Sims memorial arrived in the United States in April 1892. At once the committee approached the Department of Public Parks about the statue, kicking off a cursory period of municipal assessment. Consistent with the practice at the time, no public comment was invited.

A Central Park placement was initially denied. Instead, the statue was unveiled in Bryant Park in October 1894. A “goodly number of ladies” attended the ceremony, it was reported, but in the end not a single woman served on the Sims Memorial Fund Committee, and only a tiny portion of the monument’s donations had come from the surgeon’s former patients — a tip-off, perhaps, that the hearts of women were less receptive to Sims’s legacy than they were supposed to be.

Criticism of Sims began early and never quite went away. His assistant in Alabama, Nathan Bozeman — who would himself become a gynecologist of international renown — alleged that Sims’s fistula cure had been successful only half the time. Others noted that every aspect of the cure, including both the Sims speculum and the Sims position, had been anticipated by other practitioners.

No matter. In the wake of Sims’s death and for many decades afterward, the voices questioning his legacy were drowned out by a chorus of hagiographers, whose fact-free defense of their idol amounts to a study in mass delusion. In addition to the New York monument, there were statues in South Carolina and Alabama, a Sims-branded medical school and foundation (defunct and extant, respectively), and comically laudatory profiles (“Savior of Women”) in dozens of publications. He was included on short lists of civilizational greats alongside George Washington, and likened to the divine figures in Homer and Virgil. He was dubbed the Architect of the Vagina. The apotheosis peaked in 1950 with a radio-theater adaptation of the only book-length biography of Sims, with the Oscar-winning actor Ray Milland playing the title role in Sir Galahad in Manhattan.

In recent decades, however, this began to change. A series of scholarly books — all of them brilliant but problematic — steadily chiseled away at the Sims edifice. In the late 1960s, a young scholar named G. J. Barker-Benfield produced a dissertation on how the “physiological minority” of Wasp males had come to dominate nineteenth-century America, later published as The Horrors of the Half-Known Life (1976). Smart and copious, the book included several chapters on Sims, viewing him with refreshing skepticism. “Woman’s Hospital,” Barker-Benfield wrote, “was founded very largely as a demonstration ground for Sims’s surgical skill. He needed food and fame.” Yet Barker-Benfield flubbed numerous details of the story, conflating, for example, the displaced uterus of the seamstress with the damaged vagina of the first enslaved patient. And only the profoundly Freudian predilection of so much midcentury American scholarship can explain the author’s claim that Sims harbored a “hatred for women’s sexual organs” — one that he overcame by “his use of the knife.”

Twenty years later, in From Midwives to Medicine, Deborah Kuhn McGregor recounted the history of Woman’s Hospital as an emblem of the male establishment’s hostile takeover of obstetrics, a jurisdiction traditionally overseen by women. This exhaustive volume is often on the mark: “Although J. Marion Sims is pivotal in the history of gynecology, he did not create it by himself.” But McGregor, too, commits casual errors: she mistakenly describes the VVF wound as a “tear” (a peeve of clinical specialists), and creates confusion with equivocal language and even imprecise grammar. Worse, a story that is fraught with horror and drama is reduced to stale summary by the truth-destroying academic conviction that to be dull is to be serious.

Both Barker-Benfield and McGregor failed to penetrate the membrane that separates the world of academic squabbles from that of the people who walk past the Sims statue every day. They did inspire a new generation of scholarship, but a tendency to fight fire with fire resulted in an inferno of questionable claims. Sims was soon described by one detractor as “Father Butcher,” a sadistic proto-Mengele. Even before the debate’s most indignant voices chimed in, Sims’s biography had become a kind of post-truth zone. His defenders engaged in flagrant invention, creating a saintly caricature that outstripped even Sims’s own efforts to inflate his reputation; his detractors introduced inaccuracies and exaggerations that morphed into outright falsehoods as they ricocheted from source to source.

Forty years after its dedication, the Sims statue, along with a statue of Washington Irving, was removed from Bryant Park. The year was 1932, and the nation was about to observe the bicentennial of George Washington’s birth. To commemorate the occasion, Sears, Roebuck and Company erected in the park a temporary replica of Federal Hall, from which Washington delivered his first inaugural. The statues, which were in the way of this patriotic simulacrum, were dragged away.

Robert Moses was named the commissioner of parks a short time later. He disliked statues in general, and almost immediately proposed a dramatic overhaul of Bryant Park that did not include the reinstallation of the Sims and Irving monuments. This was fortuitous, as the statues had been misplaced — five tons of granite and metal had somehow gone missing. The good luck turned into a headache, however, when the Art Commission (which was later renamed the Public Design Commission, and today has final say over all public-art decisions in New York City) rejected his proposal. The statues had to come back.

Reports differ on what came next. Some say the statues turned up by accident in a Parks Department storage yard. Moses told the New York Times a different story: a protracted effort led searchers to a storage area beneath the Williamsburg Bridge, where they found the monuments wrapped in tarpaulins. Moses reiterated his belief that the “city could get along very well” without them. Still, to keep Sims from mucking up his plans, he consented to a request from the New York Academy of Medicine that the monument be installed across from its Fifth Avenue location, in a niche on the outer wall of Central Park.

Again, the public was afforded no opportunity to comment. The statue was rededicated on October 20, 1934. The speakers echoed those who had first lobbied for a Sims monument, hailing his supposed innovations without ever really addressing what such a memorial was for. In 1884, another celebrated surgeon, Samuel Gross, had argued in his letter of support for a Sims statue that monuments are not intended for the dead. Rather, they should act as a stimulus for the living to “imitate the example” of the figure memorialized. But what sort of inspiration would the Sims statue provide? After all, the man in the strange bronze overcoat was, as the Medical Record noted, distinguished mostly for his readiness to employ “the one needful thing, the knife.”

Sims would have yet another memorial before the roof fell in. In the late 1950s, the pharmaceutical giant Parke-Davis commissioned the artist Robert Thom to produce a series of forty-five oil paintings illustrating the history of medicine. One painting depicted Sims’s fistula experiments: clutching his trademark speculum, the doctor stands in his ramshackle clinic before two acolytes and the three worried slave women who would serve as his initial subjects.

Parke-Davis was sold in 1970 to another pharmaceutical giant, Warner-Lambert, which appears to have had no qualms about the painting: the company granted permission for the image to be used on the cover of McGregor’s From Midwives to Medicine. In 2000, however, Warner-Lambert was purchased by Pfizer — and Pfizer did have qualms. Harriet Washington’s Medical Apartheid, the next scholarly book to take aim at Sims, begins with an account of her attempt to secure the rights to the image. She, too, hoped to use Thom’s painting on the jacket of her book. Pfizer asked to review the manuscript before making a decision, and she refused to comply. Later, she submitted a request to use a smaller version of the image in the book’s interior and never got an answer.2
2 In 2007, Pfizer donated all forty-five paintings to the University of Michigan. The painting of Sims is currently in storage. A less prominent painting of Sims was commissioned by the University of Alabama in 1982. It was removed from public view in 2005 after a visiting lecturer from Harvard complained about it.
 
Medical Apartheid is a vast and sweeping work, which ranges from gynecology to eugenics, radiation, and bioterrorism. It is notable for having won the 2007 National Book Critics Circle award in general non-fiction, among several other honors. Only a small portion of Medical Apartheid is devoted to Sims, yet a number of errors crop up there: for example, the author describes the bronze statue of Sims as a “marble colossus,” misstates the original location of Woman’s Hospital, claims that only one of Sims’s slave subjects was ever cured, and wrongly suggests that Sims once etherized wives to enable intercourse.

Nevertheless, Medical Apartheid finally penetrated the scholar-public divide, and efforts got under way to have the statue removed. They began with a woman, fired up by Washington, handing out flyers in East Harlem. Viola Plummer, now chief of staff to New York State Assemblyman Charles Barron, had been working with several colleagues on health care disparities, and who knows how they first came to focus on the Sims statue? It was back during the Bush Administration, Plummer recalled, when there was torture and waterboarding going on, and maybe the details of Sims’s experiments, as recounted in Medical Apartheid, resonated with all that. Or maybe it was because a statue was a tangible thing, so perhaps you could actually do something about it.
 A bronze likeness of Dr. James Marion Sims stands at the entrance of Central Park at Fifth Avenue and 103rd Street in Manhattan. Two women protesters painted their clothes as part of their demonstration in August 2017.

Plummer’s pamphlets caught the eye of a group called East Harlem Preservation, which put her petition online. Eventually, it attracted enough media attention that the New York City Parks Department sent someone to explain to the members of Community Board 11, also involved by that point, that the city had a policy of not removing art for content. Removing a statue, any statue, would amount to expunging history.

Though more on a lark than a mission, the department had been thinking about its statuary for a while. In 1996, Commissioner Henry Stern — a colorful character who bestowed code names on Parks staffers, his own being Starquest — launched an effort to erect signs to contextualize each of the statues, busts, and monuments under Parks supervision, of which there were more than 800. A statue should be more than a grave site, Stern’s thinking went. It should tell a story.

One of the people carrying out this mission was the new art and antiquities director, Jonathan Kuhn (code name: Archive), who continues in the same position today. In 1996, the Sims statue was for Kuhn little more than a punch line — he proudly told the New York Times that the city’s statues included a “fifteenth-century martyr, a sled dog, and two gynecologists.” The signage effort coincided with the digital revolution, so only a few summaries were ever installed in Central Park as physical signs. The Sims summary was one of the many that appeared only online.

The original version of this summary, which has since been finessed and corrected, was notable for vagueness and factual errors. First, it repeated the common but inaccurate claim that Sims innovated the use of silver wire as an antibacterial suture material. The text also asserted that the statue had been funded by donations from “thousands of Sims’s medical peers and many of his own patients,” and as late as 2016, the Parks website specified 12,000 individual donors. The actual numbers are much more modest: 789 male doctors, forty-one women, and twenty-eight medical societies. In any case, nobody at the department paid much attention to the Sims summary. It was one headache among many, and why quibble with a memorial to a man whose “groundbreaking surgical methods,” as the original summary read, “earned him worldwide notoriety”?

In 2007, at roughly the same time that Viola Plummer was handing out flyers in East Harlem, Mary Bassett, then the deputy commissioner of the New York City Department of Health and Mental Hygiene, also read Medical Apartheid. Bassett was uniquely positioned to appreciate what is undeniably the most scruples-testing aspect of the Sims legacy. A physician herself, she had spent nearly two decades in Zimbabwe, where the epidemiological nightmare of VVF rages on today. Largely eradicated in the West because of the prevalence of caesarean section, the condition still blankets the African continent, with estimates of as many as 100,000 new sufferers annually. Clinics dedicated to the disorder have proliferated in recent years; its victims often wind up divorced, ostracized, depressed, and suicidal. These clinics all descend from a single source: the Addis Ababa Fistula Hospital, in Ethiopia, which was founded in 1974 by the Hamlins, an Australian couple, both gynecologists, who planned their facility by carefully studying Sims’s The Story of My Life.3
3 A brick from the original Woman’s Hospital was transported to Ethiopia and used in the construction of the Hamlin fistula clinic.
 
The advent of African fistula clinics aside, Bassett believed that Sims’s surgical subjects must have perceived his initial experiments as a form of torture. Rather than handing out flyers, Bassett invited Harriet Washington to give a talk at a health department gathering. It was Washington’s lecture on Sims and the broader history of medical experimentation that got staffers brainstorming about what could be done about the statue. They came up with the idea of a contextualizing plaque to be added to the statue itself, which would tell the story of Sims’s initial procedures.

Kuhn dismissed the idea of a plaque. Instead, he suggested, they should propose additions to the existing online summary. That’s basically what happened. In 2008, the department added nine lines to the text — which, true to form, introduced more historical errors. For one thing, the revised summary claimed that Sims had been on hand to tend to President Garfield’s gunshot wound: false. More meaningfully, the new text noted that during the period of Sims’s fistula experiments, he had “declined or could not use anesthesia.”

This skirts one of the most contentious aspects of the Sims debate. During the mid-1840s, when he experimented on the enslaved women, ether had just been introduced as a surgical anesthetic; it would not be widely accepted as safe until 1849. As for chloroform, it would make its debut in 1847 and become widely known for killing patients in the hands of inexperienced physicians. Sims’s detractors have argued that he reserved anesthesia for his white patients. This isn’t true, and for his part, Sims claimed that the pain of fistula surgery did not merit the risk of anesthesia in any patient.4
4 Even after anesthesia came into common use, Sims varied from his stance only in VVF cases where the damage extended to the urethra or the neck of the uterus. It is critical to note, however, that Sims did sometimes display a shockingly callous disregard for the suffering experienced by his slave subjects. To further complicate matters, Sims’s detractors have also accused him of believing that African women had a special genetic endowment that made them resistant to pain. In fact, it was his biographer, Seale Harris, who made this claim a hundred years later in Woman’s Surgeon: The Life Story of J. Marion Sims (1950).

Beyond the error-speckled lines added to the online text, nothing happened. Adrian Benepe, who succeeded Henry Stern, was more concerned with health initiatives, such as the ban on smoking in public parks. For that matter, Benepe later recalled, it wasn’t like there had ever been a grand public chorus rising up to complain about the Sims statue. And when you’re the commissioner, that’s what you do: you deal with things that take up a lot of media and public attention. The Sims controversy? It wasn’t even in the same ballpark as what PETA did to Mayor Bill de Blasio over the Central Park horses in 2014.

Since the 1990s, one of the most prominent figures in the Sims controversy has been L. Lewis Wall. Wall’s résumé makes you feel like you’ve wasted your life. He holds two doctorates, is a professor of medicine, social anthropology, and bioethics, and founded the Worldwide Fistula Fund, which has launched clinical programs to combat the scourge in Niger, Ethiopia, and Uganda. Wall has performed hundreds of fistula surgeries in Africa, and has seen firsthand the struggles of aid efforts — including local corruption and political exploitation. Just as onerous, in his view, was “fistula tourism”: non-African doctors making blitzkrieg trips to Africa to rack up “good cases.” Wall responded with two articles, “A Bill of Rights for Patients with Obstetric Fistula” and “A Code of Ethics for the Fistula Surgeon.”

The latter manifesto stands in stark contrast to Sims’s lifelong hostility toward medical ethics. He always hated rules, and a petulant inability to follow even those he had agreed to has been viewed by his champions as an element of his puckish persona. Yet Sims did sometimes pay for his rule-flouting tendencies. In 1870 — thirteen years before assisting with the Sims Memorial Fund Committee — the New York Academy of Medicine put him on trial for ethics violations.

Sims had written publicly about the condition of the theater star Charlotte Cushman, whom he had once seen in private practice. In doing so, he violated his patient’s confidence and ignored an ethical prohibition against doctors seeking publicity — hardly a first for Sims, who had a ringmaster’s flair for self-promotion and had once socialized with P. T. Barnum.5 Sims was found guilty. He was given a formal reprimand, which would subsequently be characterized by his detractors as a draconian penalty and by his supporters as a slap on the wrist.
5 There is no evidence yet to suggest that pomposity and narcissism are hereditary conditions. Let’s recall, however, that our current president’s tasteless retreat at Mar-a-Lago was designed by the grandson of J. Marion Sims.
 
Judging from this, one might suspect that Wall would have pitched his tent in the camp of Sims’s critics. Instead, as the debate turned rabid, Wall kicked back against Sims’s detractors. No, he argued, Sims did not deliberately addict his experimental subjects to opium. As to anesthesia, Wall calmly noted that the exterior of the human genitals is indeed sensitive, but that the inner lining of the vagina is not nearly as innervated as one might expect.

Wall is not above reproach. For example, he decided on the basis of the little information available that Sims’s experiments were “performed explicitly for therapeutic purposes.” This conclusion overlooks the social and economic realities of the South, and the less than altruistic reasons that a plantation owner might send a woman suffering from a fistula in search of a cure: the sexual exploitation of slaves, and the financial benefits to be reaped from breeding additional human chattel. In any event, in the zero-sum game of journalism, Wall found himself positioned as Sims’s highest-profile defender, even though he had been the first to suggest that there should be a monument to Anarcha, Betsey, and Lucy.

It is worth noting that while Sims is remembered primarily for his VVF surgeries, these account for only a small fraction of his lengthy practice. Indeed, after he moved to New York City, he left the bulk of fistula procedures to Thomas Addis Emmet, who became his assistant in 1856 and further perfected the process, curing many patients whom his superior regarded as lost causes.

Over the next two decades, Sims would dabble in a range of horrific procedures, including clitoridectomy (performed at least once, in 1862) and so-called female castration. Indeed, Sims later became a fervent champion of “normal ovariotomy,” in which one or both healthy ovaries were removed as specious cures for dysmenorrhea, diarrhea, and epilepsy. He performed the operation a dozen times himself, killing several women and mutilating others.

Earlier in his career, however, Sims turned his attention to procreation. He hoped to make advances that would ensure the perpetuation of honorable families and powerful dynasties. His investigations into sterility would result in his prescribing intercourse at particular times of the day, and then swabbing his patients’ vaginas (to count sperm under a microscope) at such increasingly rapid postcoital intervals that critics wondered exactly what kind of bargain had been struck between husband and physician.

Sims signed on to a simple anatomical tenet of the day: if the neck of a woman’s uterus did not offer a clear pathway, then the egress of menstrual matter from the womb, and the ingress of sperm into it, could be impaired. In his view, this could lead to sterility and painful menses. His solution (and he was not the first to suggest it) was to surgically open the passage with one of a variety of multibladed dilating tools, some of which were activated with a spring mechanism once inserted into the patient’s womb: the blades popped open and made multiple incisions as the device was drawn out again.

In 1878, he published a kind of summa, “On the Surgical Treatment of Stenosis of the Cervix Uteri,” reflecting at length on a procedure that Sims estimated he had performed as many as a thousand times. Like his early publications, this one seemed designed to ensure that nobody could snatch away credit that was properly his. In this case, Sims wished to cement his claim to a particular incision made to the cervical canal. “The antero-posterior incision belongs to Sims,” he declared, “and not to Emmet, or any one else.”

The paper was presented to the American Gynecological Society that same year, and while Sims was not present, other doctors spoke up to praise or critique his claims. The most interesting response came from Fordyce Barker, President Grant’s personal physician, who had championed Sims from the moment of his arrival in New York City, launching the young doctor’s career (and canonization) with a public description of his “brilliant” fistula operation.

Twenty-five years later, Barker rose to offer a less enchanted view. He began by noting that it was unclear whether a womb with a narrow neck was even pathological. In recent years, many unnecessary operations had been performed, often with injurious results. Worse, the procedure had been adopted by untrained physicians or downright charlatans. In any event, how could it be that Sims had performed these operations five times as often as many other capable surgeons?

His skills were undeniable, Barker concluded, but it was for precisely this reason that his arguments should be scrutinized, for it had been the tendency of the profession to accept the dicta of such men unquestioned.

Four years later, Barker accepted the chairmanship of the Sims Memorial Fund Committee. He died before the statue was dedicated.

In March 2014, the Sims debate reignited with another New York Times article, which described the limbo into which the controversy had fallen after 2011. Now the Parks Department and Community Board 11, which had been fighting the Sims case for seven years, agreed to meet and settle things once and for all.

The city, still resistant to removing the statue, sought out experts to make its case. They enlisted Robert Baker, a professor of philosophy at Union College and the author of Before Bioethics (2013). Baker acknowledged Sims to be precisely the kind of doctor that had necessitated the bioethics revolution: bioethics holds that science-minded physicians shouldn’t be trusted to monitor their own ethical behavior. Yet in Before Bioethics, Baker takes Sims at more than his word. For example, Baker claims that Sims freed his slaves before he moved to New York City in 1853. This is patently untrue: he leased his slaves before he left Alabama, and during his difficult first year in the city, they likely formed an important part of his income. Baker even argues that The Story of My Life should be forgiven for its use of the word “nigger” because Sims only uses it when quoting other people. Actually, that’s not true — but even if it were, who cares?

It was Baker who provided the department with a three-page “deposition” on the controversy.

This document reads like a disheveled Wikipedia entry. Baker’s claim about Sims’s own slaves is there, along with an inaccurate assertion that Sims repeatedly sought consent for surgery from his enslaved patients. The document also notes that Sims offered credit to his slave subjects and that they came to serve as his assistants. These assertions are true, yet all they do is add another twist to the complicated knot of consent. Slaves cannot provide consent for surgery — they do not have true agency. Similarly, should a slave be applauded for performing labor that she is in any event compelled to perform? Regardless, Baker concluded that additional information about the three slaves on or near the Sims monument would be an appropriate way to “follow Sims’s example [and honor] the courage of these African American women.”

Parks also contacted the art historian (and former vice president of the New York City Art Commission) Michele Bogart, whose position couldn’t have been clearer: she was vehemently opposed to the removal of the Sims statue. Bogart didn’t know a lot about Sims. In her view, however, the details didn’t matter: you simply didn’t remove art for content. Bogart didn’t buy the claims that modern sensibilities had been injured. Get over it, she thought. It boiled down to expertise. What Bogart believed — and she was undeniably an expert — was that the Sims statue had stood in New York City for more than 120 years, and that even false history was of historic interest if it managed to persevere.

The meeting was held in June 2014. Baker’s deposition was read aloud to members of the Parks subcommittee, and Bogart briefly addressed the importance of using city monuments as educational tools. A deputy commissioner apologized for the years it had taken to produce a response, then reiterated that the statue would not be removed. However, the department was ready to consider a freestanding sign, and the committee voted unanimously that Parks, in a timely manner, should return when a complete plan had been formed. In other words, it was back to bureaucratic limbo, where the argument over the Sims statue — which had long since become a symbol of how the fraudulent past becomes official history — had resided for nearly a decade.

In May 1857, Sims was approached in private practice by a forty-five-year-old woman possessed of irritability of the bladder and uterine displacement. She was a curious case, married at twenty but still a virgin. Sims attempted an examination, only to find that the slightest touch to her vagina caused her to shriek, spasm, and cry. A second examination, under the influence of ether, revealed minor uterine retroversion — but her vagina was perfectly normal. Medical books threw no light on the matter. The only rational treatment, Sims concluded, would be to cut into the muscles and the nerves of the vulval opening. Alas, the woman’s “position in society” made her an unsuitable candidate for such an experimental procedure.

Fifteen months later, Sims was sent a similar case from Detroit, a young virgin with the same dread of having her vagina touched. This time, he decided, the risk was justified: her husband had threatened divorce. Cutting into the hymen offered the young woman no relief, but incisions into the mucous membrane and the sphincter muscle were slightly more effective. By that point, her mother concluded that Sims was experimenting on her daughter — which, of course, he was — and yanked her from his care.

A few weeks later, another case fell into his hands, followed by two more. By now, Sims had a name for the condition: vaginismus. He had also devised a cure, aimed primarily at permitting coitus between husband and wife: amputate the hymen in full, then make several deep, two-inch-long incisions into the vaginal tissue and the perineum. As with his cervical stenosis surgery, this would be followed by the insertion of glass or metal dilating plugs as the wounds healed. Several years later, in Clinical Notes on Uterine Surgery (1866) — sometimes characterized as modern gynecology’s inaugural text — he claimed to have encountered thirty-nine instances of vaginismus and achieved a perfect cure in every case.

Sims’s claims were challenged even before he finished making them. English doctors rejected the notion that the condition had never before been described, and London’s Medical Times and Gazette noted that British surgeons would no sooner resort to excision for a mild case of vaginismus than they would cut off a patient’s eyelid because he had a twitch. French doctors agreed. They had been researching the condition since at least 1834. They regarded the “Sims operation” as too bloody and dangerous, and one French doctor dismissed it as too mechanical, “too American.”

American doctors eventually rejected the procedure as well, using it for only the most severe cases. They also came to dispute Sims’s claim to thirty-nine perfect cures. Years later, one Woman’s Hospital surgeon insisted that he was aware of only a single cure, and vividly recalled two patients who had been left in far worse shape after the procedure. Another doctor remembered cases in which failed Sims operations — performed by surgeons other than Sims — were followed by so many futile attempts at treatments that the women’s vaginas looked as though they had been splashed with nitric acid. A year before the Sims statue was erected, A.J.C. Skene — the other gynecologist in New York City’s statuary pantheon — claimed that he had never seen a case of vaginismus for which the Sims operation “would have been of any value.”
J. Marion Sims with his medals of "Honor."

The debate over the Sims monument has tended to focus on his VVF experiments — but that’s only the beginning of the story. After Sims exploited a vulnerable population to achieve a minor victory that he successfully parlayed into international fame, he claimed credit for a series of bogus breakthroughs and performed thousands of surgeries, often at the behest of distressed husbands, which left many women mutilated or dead. This does not make Sims a Gilded Age Mengele. Mengele killed his Jewish subjects by degrees, extracting data along the way, while Sims was always attempting to ameliorate something. Good intentions, however, don’t erase the enormous pain and injury that he inflicted, nor the sense of violation — one felt by women today every time they pass the statue on the sidewalk.

The anti-Sims movement has never had the fervor of a student uprising. And for more than a decade, it lacked even the figurehead of a vigilante arrested for defacing the statue in a fit of righteous inspiration. That shouldn’t matter. Not all scholars of public art agree that statues should remain in place forever. Experts of a different kidney, such as Erika Doss, a professor of American studies at the University of Notre Dame, are perfectly comfortable with monuments being “defaced, despoiled, removed, resisted, dismantled, destroyed and/or forgotten” when they represent “beliefs no longer considered viable.” These acts of symbolic vandalism embody Emerson’s insistence that good men must not obey laws too well.

Like history itself, activism seems to move very slowly at times, then abruptly accelerates. In June 2016, the long-awaited language for what had evolved into a freestanding-sign-plus-plaque solution was presented to Community Board 11. The expectation was that the board would provide yet another rubber stamp for yet another round of evasive action. Instead, a subcommittee balked — and after another presentation, two weeks later, the full board voted to remove the statue. Then the Confederate flag came down over the South Carolina statehouse, and Confederate statues vanished in New Orleans, Baltimore, Orlando, St. Louis — and in the wake of Charlottesville came a growing sense that the nation could no longer tolerate commemorations of its most shameful moments. And finally, on August 19, protesters congregated around the Sims statue and demanded that the city remove it.

In the media storm that followed, Mayor de Blasio instituted a ninety-day period of reevaluation for the city’s sprawling statuary. After years of telling activists that there was no way to remove statues, the city invented one. Still, it wasn’t enough for one protester, who at last seized the initiative, spray-painted “racist” across the statue’s back, and gave it red, villainous eyes.

Surely this Emersonian good man — if it was a man — had been prodded into action by the activists, one of whom condemned “imperialist slaveholders, murderers, and torturers like J. Marion Sims.” But truth be told, that’s not quite right, either. For all his crimes, Sims was not a torturer or a murderer. Which means that his detractors are on the right side of history, but for the wrong, or incomplete, reasons. And maybe that doesn’t matter. For ten years, the Parks Department and the city itself resisted removing the statue not because they cared about Sims but because they feared a precedent that would bring a cascade of other statues down as well.

That’s exactly what should happen, in New York and elsewhere. In an age defined by changing values and an evolving notion of what constitutes a fact, the Sims statue stands as a monument to truth’s susceptibility to lies and political indifference. Removing it represents an awareness that history is fluid, but bronze is not.