Tech at Work: Amazon caravan protest, Genderify’s algorithmic bias and using ‘BIPOC’

Welcome back to Tech at Work, where we look at labor, diversity and inclusion. Given the amount of activity in this space, we’re going to ramp this up from every other week to weekly.

This week, we’re looking at the latest action from a group of Amazon warehouse workers in the San Francisco Bay Area, how to avoid a massive algorithmic bias failure like Genderify’s, and the growing use of the term BIPOC, which stands for Black, Indigenous and people of color, and how to use it properly.


Stay woke


Amazon warehouse workers stage sunrise action

Amazon delivery drivers in the San Francisco Bay Area are kicking off the month by protesting the e-commerce giant’s labor practices related to the COVID-19 pandemic. As part of a caravan, workers plan to head to Amazon’s San Leandro warehouse this morning to pressure the company to shut down the facility for a thorough cleaning.

“They are having COVID cases reported and they’re not being truthful about how many, and they’re not being reported right away,” Amazon worker Adrienne Williams told TechCrunch. “We’re seeing this pattern of Amazon finding out and then not telling people for two weeks so they don’t have to pay anyone.”

In a statement to TechCrunch, Amazon said:

Nothing is more important than health and well-being of our employees, and we are doing everything we can to keep them as safe as possible. We’ve invested over $800 million in the first half of this year implementing 150 significant process changes on COVID-19 safety measures by purchasing items like masks, hand sanitizer, thermal cameras, thermometers, sanitizing wipes, gloves, additional handwashing stations, and adding disinfectant spraying in buildings, procuring COVID testing supplies, and additional janitorial teams.

In addition to shutting down the warehouse for sanitizing, workers are asking for better communication.

“The drivers have no idea if there are ever any cases because we don’t have access to the internal warehouse A to Z communications they have,” Williams, who works at the Richmond warehouse, said. “So we never get the alerts if there are COVID cases. We’re not on that internal communication but we go in those warehouses twice a day to get our shifts and packages.”

Because drivers are generally employed by delivery service partners, Amazon says it does not have direct communication with them. However, Amazon says it immediately notifies the delivery service partner, which then communicates with the drivers.

By staging the action so early, organizers hope to prevent workers from being able to load delivery vehicles, Williams said.

“If the vans are left in the warehouse, Jeff Bezos takes the financial hit,” she said. “Halting deliveries and keeping them in the warehouse means Amazon gets hit with the bill.”

Lesson for startups: Treat all of your workers with dignity and respect.

Biased AI perpetuates racial injustice

The murder of George Floyd was shocking, but we know that his death was not unique. Too many Black lives have been stolen from their families and communities as a result of historical racism. The threads of racial injustice woven through our country are deep and numerous, and they have come to a head following the recent murders of George Floyd, Ahmaud Arbery and Breonna Taylor.

Just as important as the process underway to admit to and understand the origin of racial discrimination will be our collective determination to forge a more equitable and inclusive path forward. As we commit to address this intolerable and untenable reality, our discussions must include the role of artificial intelligence (AI). While racism has permeated our history, AI now plays a role in creating, exacerbating and hiding these disparities behind the facade of a seemingly neutral, scientific machine. In reality, AI is a mirror that reflects and magnifies the bias in our society.

I had the privilege of working with Deputy Attorney General Sally Yates to introduce implicit bias training to federal law enforcement at the Department of Justice, which I found to be as educational for those working on the curriculum as it was for those participating. Implicit bias is a fact of humanity that both facilitates (e.g., knowing it’s safe to cross the street) and impedes (e.g., false initial impressions based on race or gender) our activities. This phenomenon is now playing out at scale with AI.

As we have learned, law enforcement activities such as predictive policing have too often targeted communities of color, resulting in a disproportionate number of arrests of persons of color. These arrests are then logged into the system and become data points, which are aggregated into larger data sets and, in recent years, have been used to create AI systems. This process creates a feedback loop: predictive policing algorithms send law enforcement to patrol, and thus observe crime in, only the neighborhoods the data already points to, which skews the data and thus future recommendations. Likewise, arrests made during the current protests will result in data points in future data sets that will be used to build AI systems.
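
To make the feedback loop concrete, here is a minimal, hypothetical simulation; the neighborhoods, counts and rates are invented for illustration and do not describe any real predictive-policing system.

    import random

    # A minimal, hypothetical simulation of the feedback loop described above;
    # the numbers are invented and this is not any real predictive-policing system.
    TRUE_CRIME_RATE = 0.05          # identical in both neighborhoods
    recorded = {"A": 50, "B": 100}  # biased historical arrest counts seed the data

    random.seed(0)
    for week in range(52):
        # "Predictive" allocation: 20 patrol units follow past recorded incidents.
        total = sum(recorded.values())
        patrols = {n: round(20 * recorded[n] / total) for n in recorded}

        # Crime is only *observed* where officers patrol, so the neighborhood with
        # more patrols generates more data points, regardless of the true rates.
        for n, units in patrols.items():
            observed = sum(random.random() < TRUE_CRIME_RATE for _ in range(units * 10))
            recorded[n] += observed

    share_b = recorded["B"] / sum(recorded.values())
    print(f"Share of recorded incidents in neighborhood B after one year: {share_b:.0%}")
    # The true crime rates never differed, yet the initial disparity is locked in:
    # B keeps drawing the most patrols and producing the most data for the next model.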

This feedback loop of bias within AI plays out throughout the criminal justice system and our society at large, such as determining how long to sentence a defendant, whether to approve an application for a home loan or whether to schedule an interview with a job candidate. In short, many AI programs are built on and propagate bias in decisions that will determine an individual’s and their family’s financial security and opportunities, or lack thereof, often without the user even knowing their role in perpetuating bias.

This dangerous and unjust loop did not create all of the racial disparities under protest, but it reinforced and normalized them under the protected cover of a black box.

This is all happening against the backdrop of a historic pandemic, which is disproportionately impacting persons of color. Not only have communities of color been most at risk to contract COVID-19, they have been most likely to lose jobs and economic security at a time when unemployment rates have skyrocketed. Biased AI is further compounding the discrimination in this realm as well.

This issue has solutions: diversity of ideas and experience in the creation of AI. Yet despite years of promises to increase diversity, particularly in gender and race, from an industry that has solved other seemingly intractable problems (from putting computers in our pockets and connecting with machines beyond Earth to directing our movements over GPS), recently released reports show that at Google and Microsoft the share of technical employees who are Black or Latinx has risen by less than a percentage point since 2014. The share of Black technical workers at Apple is unchanged at 6%, a figure the company at least reports, unlike Amazon, which does not disclose tech workforce demographics.

In the meantime, ethics should be part of computer science education and of employment in the tech space. AI teams should be trained on anti-discrimination laws and implicit bias, with emphasis on the negative impacts on protected classes and the real human cost of getting this wrong. Companies need to do better at incorporating diverse perspectives into the creation of their AI, and they need the government to be a partner, establishing clear expectations and guardrails.

There have been bills to ensure oversight and accountability for biased data, and the FTC recently issued thoughtful guidance holding companies responsible for understanding the data underlying their AI, as well as its implications, and for providing consumers with transparent and explainable outcomes. And in light of the crucial role that federal support is playing and our accelerated use of AI, one of the most important solutions is to require recipients of federal relief funding who employ AI technologies for critical uses to provide assurance that those systems comply with existing law. Such an effort was started recently by several members of Congress to safeguard protected persons and classes, and it should be enacted.

We all must do our part to end the cycles of bias and discrimination. We owe it to those whose lives have been taken or altered due to racism to look within ourselves, our communities and our organizations to ensure change. As we increasingly rely on AI, we must be vigilant to ensure these programs are helping to solve problems of racial injustice, rather than perpetuate and magnify them.

Silicon Valley can fight systemic racism by supporting Black-owned businesses

As the United States sees its second week of large-scale protests against police brutality, it’s painfully clear that the country’s racial divide requires significant short- and long-term action. But most of these calls for change gloss over the role Silicon Valley can and should play in mending the racial divide.

Right now, activists are rightfully urging the public to take two crucial steps: vote out state and local government leaders and support Black-owned businesses. Both steps are necessary, but the importance of the latter has been largely overshadowed. Leaders can enact policy change, but much of the structural racial disparity in the U.S. is economic. Black workers are vastly overrepresented in low-paying agricultural, domestic and service jobs.

They’re also far more likely to be unemployed (in normal economic circumstances, and especially during the pandemic). A Stanford University study found that only 1% of Black-owned businesses receive loans in their first year, roughly one-seventh the rate for white-owned businesses.

Put simply, enacting new laws and overturning old ones won’t suddenly reverse decades of biased investment decisions. That’s why all over social media, there are grassroots pushes to shop Black. Apps like WeBuyBlack and eatOkra collate businesses and restaurants into one centralized database, while organizations like Bank Black encourage investment in Black-owned funds or Black-owned businesses.

But what happens when the hashtags stop trending, the protests stop attracting crowds, and the Twitter feeds return to celebrity gossip and reality show reactions? Many organizers worry that, after the media cycle of the George Floyd protests expires, widespread interest in fixing systemic racism will go away too. Apps may be helpful in propping up Black businesses, but they rely on customers fundamentally changing their purchasing and consumption habits. Perhaps the perfect storm of COVID-19 and Mr. Floyd’s death will result in a wide-scale transformation of consumer behavior. But that’s not a given, and even if it were, it wouldn’t be enough.

To systematically fix underinvestment in Black businesses, we need big tech to step up. Now.

In particular, while there’s been a lot of recent talk about “algorithmic bias” (preventing algorithms on sites like Facebook or Google from implicitly discriminating on the basis of race), there hasn’t been enough talk about proactively demanding “algorithmic equality.” What if, for instance, tech companies didn’t just focus on erasing the entrenched bias in their systems, but actually reprogrammed algos to elevate Black businesses, Black investors and Black voices?

This shift could involve deliberately increasing the proportion of Black-created products or restaurants that make it onto the landing pages of sites like Amazon and Grubhub. Less dramatically, it could tweak SEO language to better accommodate racial and regional differences among users. The algorithmic structures behind updates like Panda could be repurposed to systematically encourage the consumption of Black-created content, allowing Black voices and Black businesses to get proportional purchase in the American consumer diet.

There’s also no compelling reason to believe that these changes would harm user experience. A recent Brookings study found that minority-owned businesses are rated just as highly on Yelp as white-owned businesses. However, these minority-owned businesses grow more slowly and gain less traction than their white-owned counterparts, resulting in an annual loss of $3.9 billion across all Black-owned businesses. To help resolve this glaring (and needless) inequality, Yelp could modify its algorithms to amplify high-performing Black-owned businesses. This could significantly increase the annual income of quality Black entrepreneurs, while also increasing the likelihood of broader investment in Black-owned small businesses.
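
As a rough illustration of the kind of adjustment described above, here is a minimal, hypothetical re-ranking sketch; the listing fields, the ownership flag and the boost factor are invented for illustration and do not reflect any platform’s actual ranking code.

    from dataclasses import dataclass

    @dataclass
    class Listing:
        name: str
        rating: float       # e.g., average review score
        black_owned: bool   # hypothetical ownership flag

    def rerank(listings: list[Listing], boost: float = 1.15) -> list[Listing]:
        """Order listings by rating, with a modest multiplier for Black-owned
        businesses; the boost factor is illustrative, not a recommendation."""
        return sorted(
            listings,
            key=lambda item: item.rating * (boost if item.black_owned else 1.0),
            reverse=True,
        )

    # Example: two equally rated restaurants; the boost surfaces the
    # under-promoted Black-owned business first instead of burying it.
    results = rerank([
        Listing("Incumbent Diner", 4.5, black_owned=False),
        Listing("New Soul Kitchen", 4.5, black_owned=True),
    ])
    print([item.name for item in results])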

At the very least, giving Black businesses a short-term algorithmic advantage in take-out and delivery services could help stem the massive economic damage caused by the coronavirus and could help save the 40% of minority-owned businesses that have shut down because of the pandemic.

Nothing can undo the losses of George Floyd, Breonna Taylor, Ahmaud Arbery or the countless other Black Americans who unjustly died as a result of this country’s broken system. What we can do is demand accountability and action, both from our political leaders and from the Silicon Valley CEOs who structure e-commerce.

With thoughtful, data-based modifications, online platforms can give Black entrepreneurs, creators and voices the opportunity to compete — an equality that has been denied for far too long.

AI Transparency, Fairness Get Boost with Naming of Prof. Judea Pearl of UCLA

Efforts to boost AI transparency and fairness got a boost with the naming of Judea Pearl of UCLA as the World Leader of 2020 by the AI World Society. (GETTY IMAGES)

By AI Trends Staff

Efforts to further AI transparency and fairness got a boost recently with the naming of Prof. …

Vatican, DoD Weigh in on Ethical AI Principles in Same Week


St. Peter’s Basilica in Vatican City, where the Pope last week issued ethical principles to guide AI developers. (GETTY IMAGES)

By AI Trends Staff

The Vatican and the Department of Defense both took stances on AI ethics last week.

The Department of Defense on Monday held a press conference to announce its principles of AI ethics to guide development of new systems. The Vatican on Friday received support from IBM and Microsoft for its guidance for developers of AI rooted in Catholic social teaching.

The Rome Call for AI Ethics was drafted by the Pontifical Academy for Life, an advisory body to Pope Francis. It outlines six principles to define the ethical use of AI, to ensure that AI is developed and used to serve and protect people and the environment. Microsoft and IBM announced support for the charter, reported WSJPro on Feb. 28.

IBM Executive VP John Kelly and Microsoft President Brad Smith were scheduled to travel to the Vatican to sign the document. IBM and Microsoft provided feedback to the creators of the document as it was being developed.

There is precedent for the Vatican to put out position papers and calls for guidelines, on the environment and the planet, for example. “But this is the first one that I’m aware of where they really put out a definite document or set of guidelines around a technology,” stated Kelly. “And also the first I’m aware of where they invited a big tech company—and follow-on tech companies—to sign on.”

Signing the document shows a sincerity and seriousness of purpose, said Smith of Microsoft. “Our signature affirms our commitment to develop and deploy artificial intelligence with a clear focus on ethical issues,” he stated.

The Rome Call for AI Ethics identifies six principles:

  • Transparency: AI systems must be explainable.
  • Inclusion: the needs of all people are considered, so that everyone can benefit from the technology.
  • Responsibility: those who design and deploy AI do so with caution.
  • Impartiality: systems are developed without bias and safeguard fairness and human dignity.
  • Reliability: AI systems are dependable.
  • Security and privacy: systems are safeguarded and respect the privacy of users.

None of the principles suggested by the Vatican are new ideas, suggested an account in Vox with the headline, “The Pope’s Plan to Battle Evil AI.” They echo some of the nonbinding AI guidelines issued by the European Union last year and the Trump Administration in January.

Technology company leaders have been frequenting the Vatican in recent years. In addition to the Pontifical Academy for Life, the pope has hosted the Pontifical Academy of Social Sciences and the Pontifical Academy of Sciences, to address questions raised by robotics and AI. Attendees have included DeepMind CEO Demis Hassabis, Facebook computer scientist Yann LeCun, and LinkedIn founder Reid Hoffman.

The Vatican’s vision for AI so far mirrors what the tech giants are saying, suggested Vox, namely: “regulate our new technology, but don’t ban it outright.”

DoD Adopts Five Principles of Ethical Use of AI

Meanwhile in Washington, DC, at a press conference on Feb. 24, the US Department of Defense officially adopted five principles for the ethical use of AI, with a focus on ensuring the military can retain full control and understanding over how machines make decisions, according to an account in fedscoop.

“We believe the nation that successfully implements AI principles will lead in AI for many years,” stated Lt. Gen. Jack Shanahan, the director of the Joint AI Center.

Lt. Gen. Jack Shanahan, director, Joint AI Center, DoD

The final DoD principles map closely to recommendations submitted by the Defense Innovation Board to Secretary of Defense Mark Esper in October.

The five DoD principles for the ethical use of AI are:

  • Responsible: exercising appropriate levels of judgment.
  • Equitable: taking steps to minimize unintended bias.
  • Traceable: capabilities are developed and deployed so they are transparent and can be audited.
  • Reliable: safety, security and effectiveness are subject to testing.
  • Governable: AI capabilities are designed to fulfill their intended functions and avoid unintended consequences, with the ability to deactivate deployed systems that demonstrate unintended behavior.

The DoD’s Joint AI Center will take the lead in deploying the ethical AI principles across the agency. “Ethics remain at the forefront of everything the department does with AI technology, and our teams will use these principles to guide the testing, fielding and scaling of AI-enabled capabilities across the DoD,” stated CIO Dana Deasy, according to an account in MeriTalk.

Lt. Gen. Shanahan was quoted as saying that DoD will “design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences,” in an account in Space News. The general said the intelligence community is likely to embrace similar guidelines, and that discussions among agencies and international allies have been going on for months.

Read the source articles in WSJPro, Vox, fedscoop, MeriTalk and Space News.

Need to Build Trustworthy AI Systems Gains Importance as AI Progresses


As AI systems take on more responsibility, the strengths and weaknesses of current AI systems need to be recognized to help build a foundation of trust. (GETTY IMAGES)

By John P. Desmond, Editor, AI Trends

The push is on to build trusted AI systems with an eye toward instilling confidence that results will be fair, accuracy will be sufficient, and safety will be preserved.

Gary Marcus, the successful entrepreneur who sold his startup Geometric Intelligence to Uber in 2016, issued a wake-up call to the AI industry as co-author with Ernest Davis of “Rebooting AI” (Pantheon, 2019), an analysis of the strengths and weaknesses of current AI, where the field is going, and what we should be doing.

Marcus spoke about building trusted AI in a recent interview with The Economist. Here are some highlights:

“Trustworthy AI has to start with good engineering practices, mandated by laws and industry standards, both of which are currently largely absent. Too much of AI thus far has consisted of short-term solutions, code that gets a system to work immediately, without a critical layer of engineering guarantees that are often taken for granted in other fields. The kinds of stress tests that are standard in the development of an automobile (such as crash tests and climate challenges), for example, are rarely seen in AI. AI could learn a lot from how other engineers do business.”

AI developers “can’t even devise procedures for making guarantees that given systems work within a certain tolerance, the way an auto part or airplane manufacturer would be required to do.”

“The assumption in AI has generally been that if it works often enough to be useful, then that’s good enough, but that casual attitude is not appropriate when the stakes are high.”
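
Marcus’s stress-test analogy can be made concrete in a few lines. The sketch below is a hedged illustration, not anything Marcus or any vendor prescribes: the Gaussian noise model, the thresholds and the scikit-learn-style predict() interface are all assumptions.

    import numpy as np

    def stress_test(model, X, y, noise_scale=0.1, trials=20,
                    min_accuracy=0.90, max_drop=0.05, seed=0):
        """Crude analogue of an engineering stress test: verify that accuracy stays
        within a stated tolerance when inputs are perturbed with Gaussian noise.
        `model` is any object with a scikit-learn-style predict(X) method; the
        thresholds and noise model here are placeholders, not industry standards."""
        rng = np.random.default_rng(seed)
        baseline = float(np.mean(model.predict(X) == y))
        worst = baseline
        for _ in range(trials):
            noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
            worst = min(worst, float(np.mean(model.predict(noisy) == y)))
        passed = worst >= min_accuracy and (baseline - worst) <= max_drop
        return passed, baseline, worst

    # Usage (hypothetical): passed, base, worst = stress_test(clf, X_test, y_test)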

IBM Team Identifies Four Pillars of Trusted AI

Support for building trust in AI systems was furthered in a recent paper by an IBM team suggesting four pillars of trusted AI, as described in a recent account in Towards Data Science from Jesus Rodriguez, chief scientist and managing partner at Invector Labs.

“The non-deterministic nature of artificial intelligence (AI) systems breaks the pattern of traditional software applications and introduces new dimensions to enable trust in AI agents,” Rodriguez states. Trust in software development has been built through procedures around testing, auditing, documentation, and many other aspects of the discipline of software engineering. AI agents, by contrast, execute behavior based on knowledge that evolves over time, which makes them harder to understand.

Rodriguez suggests the Four Pillars from IBM are a viable idea for establishing the foundation of trust in AI systems. The foundations are:

  • Fairness: AI systems should use training data and models that are free of bias, to avoid unfair treatment of certain groups.
  • Robustness: AI systems should be safe and secure, and not vulnerable to tampering or to compromise of the data they are trained on.
  • Explainability: AI systems should provide decisions or suggestions that can be understood by their users and developers.
  • Lineage: AI systems should include details of their development, deployment, and maintenance so they can be audited throughout their lifecycle.

To help identify whether an AI system is built consistently with the four pillars of trusted AI, IBM proposes a Supplier’s Declaration of Conformity (SDoC, or factsheet for short). It should answer basic questions, including this selection (a rough sketch of such a factsheet follows the list):

  • Does the dataset used to train the service have a data sheet or data statement?
  • Were the dataset and model checked for biases? If “yes,” describe the bias policies that were checked, the bias-checking methods, and the results.
  • Was any bias mitigation performed on the dataset? If “yes,” describe the mitigation method.
  • Are algorithm outputs explainable/interpretable? If yes, explain how explainability is achieved (e.g., a directly explainable algorithm, local explainability, explanations via examples).
  • Describe the testing methodology.
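
To make the factsheet idea more tangible, here is a minimal, hypothetical sketch of what a machine-readable SDoC answering those questions might look like; the field names and example answers are invented for illustration and are not IBM’s actual factsheet schema.

    # Hypothetical, minimal representation of a Supplier's Declaration of
    # Conformity (SDoC) for an AI service; the field names are invented for
    # illustration and are not IBM's actual factsheet schema.
    sdoc = {
        "service": "loan-approval-scoring",      # placeholder service name
        "training_data": {
            "datasheet_available": True,         # data sheet / data statement exists
            "bias_checked": True,
            "bias_policies_checked": ["disparate impact by race", "disparate impact by gender"],
            "bias_checking_methods": ["four-fifths rule", "equalized-odds gap"],
            "bias_mitigation": "reweighing of underrepresented groups",
        },
        "explainability": {
            "outputs_explainable": True,
            "method": "local explanations via per-decision feature attributions",
        },
        "testing_methodology": "held-out evaluation plus perturbation stress tests",
    }

    # A consumer of the factsheet can gate deployment on the declared answers.
    assert sdoc["training_data"]["bias_checked"], "bias check missing from SDoC"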

Human Observers Need to Understand the AI System

Trust in an AI system is built by repeated correct performance, which makes the system highly reliable, and by ensuring that human observers can understand the system. Resilient, intelligent robotic systems, for example in the military, are built to adapt and evolve to yield increasingly improved performance, and that very adaptability makes them challenging to understand. Human observers need to be able to understand how the system is improving through experience, suggests Nathan Michael, CTO of Shield AI, writing recently in National Defense. Shield AI develops AI for national security and defense applications.

“One of the greatest challenges with artificial intelligence is that there is an overwhelming impression that magic underlies the system. But it is not magic, it’s mathematics,” Michael stated.

What is being accomplished by AI systems is exciting, but it is also simply theory, fundamentals and engineering. As the development of AI progresses, we will see, more and more, the role of trust in this technology, he added.

Read the source articles in The Economist, Towards Data Science and in National Defense.

Source: AI Trends

People-Centered Design For Deep Learning


The design of deep learning systems needs to incorporate transparency, explainability and reversibility to ensure positive results for business.

In an MIT Sloan Management Review article published last week, David A. Bray and Ray Wang outline the challenges ahead for incorporating people-centered design principles for deep learning.

Deep learning, like other types of AI, trains itself, raising questions about accuracy and fairness in the findings. As companies adopt these technologies, “leadership must ensure that artificial neural networks are accurate and precise because poorly tuned networks can affect business decisions and potentially hurt customers, products, and services,” Bray and Wang write.

They advocate for “a people-centered approach to deep learning ethics,” which benefits not just a few individuals, but entire communities. The approach is built on transparency, explainability, and reversibility, they write, which should be the foundation for any AI implementation.

In order to achieve that, Bray and Wang suggest three methods “to reduce the risk of introducing poorly tuned AI systems and inaccurate or biased decision-making in pilots and implementations.” Companies should create data advocates, establish a mindful monitoring system, and clearly define expectations.
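
As one hedged reading of what a “mindful monitoring system” could involve (the metrics, thresholds and function below are illustrative assumptions, not the authors’ prescription), a deployed model could be checked periodically for overall accuracy drift and for gaps between groups, with any alert routed to the data advocates for review.

    def monitor(predictions, labels, groups, min_accuracy=0.90, max_gap=0.05):
        """Flag a deployed model for human review when overall accuracy drops
        or the accuracy gap between groups widens; all thresholds are illustrative."""
        def accuracy(pairs):
            pairs = list(pairs)
            return sum(p == l for p, l in pairs) / len(pairs)

        overall = accuracy(zip(predictions, labels))
        by_group = {
            g: accuracy((p, l) for p, l, gg in zip(predictions, labels, groups) if gg == g)
            for g in set(groups)
        }
        gap = max(by_group.values()) - min(by_group.values())

        alerts = []
        if overall < min_accuracy:
            alerts.append(f"overall accuracy {overall:.2f} below {min_accuracy}")
        if gap > max_gap:
            alerts.append(f"accuracy gap between groups {gap:.2f} exceeds {max_gap}")
        return alerts  # a non-empty list means the data advocates should review

    # Example with a hypothetical protected attribute:
    print(monitor([1, 0, 1, 1], [1, 0, 0, 1], ["a", "a", "b", "b"]))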

Read their full explanation here.

Source: AI Trends