SAN FRANCISCO — In a heated moment during last week's hearings on Russian social media ads, Rep. Terri Sewell questioned whether the dearth of African Americans in Facebook's workforce contributed to the company's failure to catch Russian operatives using fake accounts to stoke racial tensions ahead of last year's presidential election.

Displayed behind Sewell, a Democrat from Alabama, was one of the Russian-backed ads sharing a famous black-and-white photograph of the Black Panthers from 1968. The message: “Black Panthers were dismantled by U.S. government because they were black men and women standing up for justice and equality.”

The Facebook ad, which also pointed out that the Ku Klux Klan was not disbanded, was intended to exploit racial divisions and get African-American users to follow a fake Russian account called Blacktivist. It was shared on Facebook at least 29,000 times.

"Who are your vetters and are they a diverse group of people?" Sewell asked Facebook's general counsel Colin Stretch.

Stretch didn't answer directly. The majority of ads, placed through Facebook's largely self-service, automated system, don't get reviewed by the teams of reviewers whom Facebook relies on to moderate content. Even if they do, odds are the reviewers are not African American. 

Despite repeated pledges to close the racial gap in its U.S. workforce, African Americans make up a tiny fraction — 3% — of Facebook's employees. In all, Facebook employs 259 black people out of 11,241, according to the company's most recent government filing.

With moderators spread around the globe, it's probable that few of them are African American and even fewer are familiar with how messages are racially coded in the U.S., says University of Southern California professor Safiya Umoja Noble, author of the upcoming book Algorithms of Oppression: How Search Engines Reinforce Racism.

Facebook declined to comment on the demographics of its content reviewers. During the hearing on Capitol Hill, Stretch said Facebook is committed "to building a workforce that is as diverse as the community we serve."

Thousands of reviewers who speak more than 40 languages, some of them experts in child safety, hate speech and counter-terrorism, moderate content on Facebook, the company says. Hounded by lawmakers for failing to catch and stop the Russian propaganda machine, Facebook says it's planning to double the number of employees and contractors who handle safety and security issues to 20,000 by the end of 2018. And, going forward, it's planning to manually review all campaigns that involve politics, religion, ethnicity and social issues. 

Critics say its homogeneous workforce left it vulnerable to Russian infiltration that preyed on racial divides in American society — just the most recent blind spot for Silicon Valley's top tech firms that are staffed mostly by white and Asian men.

"The lack of diversity at Facebook clearly shows that Facebook is lacking the empathy and understanding to prevent abuse and exploitation throughout its platform that impacts, not only the tech industry, but the world," said Wayne Sutton, co-founder and chief technology officer with Change Catalyst, which advocates for diversity in the tech industry.

One of the racially divisive ads bought by Russian operatives that was displayed behind Rep. Terri Sewell, a member of the Congressional Black Caucus, during a hearing on Capitol Hill.

Sewell's face-off with Facebook during Wednesday's House Intelligence Committee hearing on Capitol Hill follows weeks of criticism from the Congressional Black Caucus over Facebook's handling of fake Russian pages and ads. 

The content the Russians disseminated, rigged to look like the work of American activists, spread incendiary messages during and after the presidential campaign on a range of hot-button social issues from gay rights to gun rights with the aim of provoking maximum outrage.

And it worked. Facebook metadata shared by the House Intelligence Committee shows that many of the ads had click-through rates that were unusually high. 

Until September, Facebook repeatedly denied that the Russians had exploited its platform. Last week, the company admitted that 146 million Americans, or nearly half of the U.S. population, may have been reached on Facebook and Instagram by the misinformation campaign.

Freddie Gray protests

Well before those disclosures, the black community had raised suspicions about those posing as activists on Facebook and Twitter, though plenty were duped by the ads.

When the Russian-backed "Blacktivist" Facebook page and Twitter account called for a march in Baltimore in April 2016 after the police custody death of Freddie Gray, Rev. Heber Brown III, pastor of a Baltimore church, confronted Blacktivist, asking if those behind the account organizing the police brutality march were local. The account responded that it was not based in Baltimore, but was "looking for friendship, because we are fighting for the same reasons."

Brown retorted that Blacktivist should "come learn and listen before you lead." When he later learned that the account was fake and based in Russia, Brown said he was stunned that he had disrupted a "Russian op."

Black Matters US, another fake Facebook page operated by the Internet Research Agency in St. Petersburg — the operation behind the covert campaign — promoted a protest in New York City the Saturday after the presidential election with a Facebook ad. More than 16,700 people signed up to attend on the event page and 33,000 said they were interested in the event. Tens of thousands showed up to the protest.

Inciting racial animosity is a common technique to deepen divisions to amass and maintain power, says Allison Jones, director of communications of Code2040, a San Francisco organization that works to increase the diversity of the tech industry's workforce. Now, with the enormous reach and influence of social media in today's society, it's more effective than ever, making it even more crucial that Facebook and other technology companies employ diverse workforces, she says. 


"These technologies are increasingly the filter through which we interpret the world around us. There is huge potential for them to increase empathy, understanding and connectivity, but there are also great risks that they will only amplify biases already held, deepening inequity as well as polarizing people across the country," Jones says. 

That has added urgency to the Congressional Black Caucus push to get Silicon Valley to hire more people of color.

Last month, caucus members met with Facebook's chief operating officer Sheryl Sandberg in Washington and then with executives at Facebook's Menlo Park, Calif., headquarters. They are speaking up about the lack of diversity at all levels of the company, from the board of directors to the engineers who build the company's products. Sandberg has committed to recruiting an African American to the all-white board. Even fewer African Americans — just 1% — hold technical roles at Facebook.

"Having a diverse workforce in all aspects of tech companies, especially as relates to reviewing content and ensuring user safety, is critical to the success of the tech industry," Rep. G.K. Butterfield, a North Carolina Democrat and former chairman of the caucus, told USA TODAY in an email.

Facebook's General Counsel Colin Stretch, left, speaks during a Senate Intelligence Committee hearing on Russian election activity and technology on Wednesday.

Facebook's critics don't understand why a company that knows what shoes and boots you like and how you're feeling at any given moment can't detect a massive fraud operation by an adversarial foreign power. But they also warn that Facebook may be placing too much faith in yet-to-be-proven artificial intelligence to monitor the vast volumes of content streaming through its platforms.

And, too often, these technologies harbor the biases of the people who build them and the biases they learn from the Internet, says USC's Noble.

When artificial intelligence expert Rob Speer built a restaurant review algorithm, it rated Mexican dining spots lower because it had learned to associate "Mexican" with negative words. The reason, says Speer, chief science officer of Luminoso, is that the system learned the word "Mexican" from the Web, where it frequently appears alongside "illegal," as in "illegal immigrants."

A Stanford University study found that Internet-trained artificial intelligence associated stereotypically white names with positive words such as love and laughter and stereotypically black names with negative words such as failure and cancer.
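Neither Luminoso nor the Stanford researchers have their exact pipelines reproduced here, but the mechanism Speer describes — a system absorbing bias simply from which words appear together in its training text — can be sketched with a toy example. The corpus, word lists and `association` function below are invented for illustration only:

```python
# Toy sketch of word-association bias: a word inherits sentiment
# from the words it co-occurs with in the training text, not from
# anything about the thing it names. Hypothetical corpus and lexicons.
from collections import Counter
from itertools import combinations

# A tiny made-up corpus standing in for text scraped from the Web.
corpus = [
    "illegal immigrants crossed the mexican border",
    "mexican food is illegal to sell without a permit",
    "the italian restaurant serves wonderful pasta",
    "wonderful italian wine and pasta",
]

negative_words = {"illegal"}
positive_words = {"wonderful"}

# Count how often each pair of words shares a sentence.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

def association(word, lexicon):
    """How often `word` co-occurs with any word in `lexicon`."""
    return sum(cooccur[tuple(sorted((word, w)))] for w in lexicon)

# "mexican" picks up a negative association purely from the corpus:
print(association("mexican", negative_words))   # higher than "italian"
print(association("italian", negative_words))
print(association("italian", positive_words))
```

Real systems use word embeddings rather than raw co-occurrence counts, but the skew works the same way: if the training text pairs a group's name with negative words, the model scores that name negatively.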

"Technology platforms and AI are increasingly influential in our lives, from determining the news and ads we see to the rates we receive on purchases and loans," Jones of 2040 says. "If these companies are serious about building platforms that create connectivity — as opposed to fueling racial tensions, the behavior of trolls and others who seek to do harm — they need to invest in building diverse, inclusive teams."
