The two-party system has been an entrenched feature of the U.S. electoral system since its inception. Is change even possible?
In the second episode of this special two-part discussion, business leader Katherine Gehl and Harvard Business School professor Michael Porter discuss innovative reforms, like ranked-choice voting, that promote competition and accountability.
Later in the show, we talk with attorney and author Jeff Clements, of American Promise, about his efforts to reform campaign finance laws.
HBR Presents is a network of podcasts curated by HBR editors, bringing you the best business ideas from the leading minds in management. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official policy or position of Harvard Business Review or its affiliates.
The United States government began measures today to end its special status with Hong Kong, one month after Secretary of State Michael Pompeo told Congress that Hong Kong should no longer be considered autonomous from China. The new actions include suspending export license exceptions for sensitive U.S. technology and ending the export of defense equipment to Hong Kong. Both the Commerce and State Departments also said further restrictions are being evaluated.
The U.S. government’s announcements were made a few hours before news broke that China had passed a new national security law that will give it greater control over Hong Kong. It is expected to take effect on July 1, according to the South China Morning Post.
The term “special status” refers to arrangements that recognized the difference between Hong Kong and mainland China under the “one country, two systems” policy put into place when the United Kingdom handed control of Hong Kong back to Beijing in 1997. These included different export controls, immigration policies and lower tariffs. But that preferential treatment was put into jeopardy after China proposed the new national security law, which many Hong Kong residents fear will end the region’s judicial independence from Beijing.
The U.S Commerce Department and State Department issued separate statements today detailing the new restrictions on Hong Kong. Secretary of Commerce Wilbur Ross said the Commerce Department will suspend export license exceptions for sensitive U.S. technology, and that “further actions to eliminate differential treatment are also being evaluated.”
The State Department said that it will end exports of U.S. defense equipment and also “take steps toward imposing the same restrictions on U.S. defense and dual-use technologies to Hong Kong as it does for China.”
In a statement to Reuters, Kurt Tong, a former U.S. consul general in Hong Kong, said that the U.S. government’s decisions today would not impact a large amount of trade between the U.S. and Hong Kong because the territory is not a major manufacturing center and its economy is mostly services.
According to figures from the Office of the United States Trade Representative, Hong Kong accounted for 2.2% of overall U.S. exports in 2018, totaling $37.3 billion, with the top export categories being electrical machinery, precious metal and stones, art and antiques, and beef. But the new restrictions could make it more difficult for U.S. semiconductor and other technology companies to do business with Hong Kong clients.
Both the State and Commerce departments said that the restrictions were put into place for national security reasons. “We can no longer distinguish between the export of controlled items to Hong Kong or to mainland China,” Pompeo wrote. “We cannot risk these items falling into the hands of the People’s Liberation Army, whose primary purpose is to uphold the dictatorship of the CCP by any means necessary.”
In his statement, Ross said, “With the Chinese Communist Party’s imposition of new security measures on Hong Kong, the risk that sensitive U.S. technology will be diverted to the People’s Liberation Army or Ministry of State Security has increased, all while undermining the territory’s autonomy.”
Blackbaud offers enterprise tools ostensibly in a campaign to support social good, but the company also provides services to far-right organizations the Heritage Foundation and the Center for Security Policy, TechCrunch has discovered.
Blackbaud describes itself as “the world’s leading cloud software company powering social good,” and collects revenues in the hundreds of millions of dollars from that business. Nothing about that mission is partisan, and good can of course be accomplished by groups all across the current American political spectrum.
But while conservative causes are by no means excluded from the category, the far-right stances of Heritage and especially those of CSP are difficult to square with even the broadest interpretation of social good.
The decades-old Heritage Foundation has been behind lobbying efforts against climate change action, equal rights for LGBTQ Americans, and immigration modernization efforts. It has worked on behalf of the oil and tobacco industries and opposed health care reform, and recommended the likes of Betsy DeVos and Scott Pruitt to the administration.
According to GLAAD, Heritage “has made it a focused mission to stop all laws protecting on the basis of sexual orientation and gender identity.” This alone makes it a poor match for a company that just weeks ago said in celebration of Pride that “we want to underscore that LGBTQ+ rights are human rights.”
Yet according to documents reviewed by TechCrunch, Blackbaud collects about $180,000 in annual revenue from the Heritage Foundation, and has worked with them for about 15 years.
The Center for Security Policy is a more extreme case. Designated as a hate group by the Southern Poverty Law Center, CSP has pursued a hardline anti-Muslim stance for years. It publishes reports saying jihadists are infiltrating the U.S. government, and was commissioned to perform polling to show support for Trump’s Muslim travel ban. A CSP executive was hired by John Bolton to serve in the administration, and later left to rejoin the anti-Muslim organization as its head.
One recent study warns of a Muslim plot “even more sinister” than the widespread sexual abuse revealed in the #MeToo era: “Sharia-supremacists are insinuating themselves into script-writing, Hollywood ‘consulting,’ film production, and even financial scholarships designed to facilitate young Muslims’ penetration of the entertainment industry.”
The documents show a smaller contract with CSP, amounting to about $11,000 in annual revenue.
Blackbaud records show interactions with both organizations within the last month or so; these are current contracts. Neither Heritage nor CSP responded to requests for comment, and Blackbaud would not confirm they are customers as a matter of policy.
“Blackbaud provides cloud software, services, data intelligence and expertise to a wide spectrum of registered 501c3 organizations and companies that are lawfully conducting business. Those organizations are diverse in their missions and belief systems, but we remain committed to building the best software to support all who are truly doing good in achieving their missions,” the company said in a statement. It then referred me to a recent blog post entitled “EQUALITY.”
While doing business with a couple of bad actors doesn’t negate Blackbaud’s work with other organizations actually working for the social good, the discrepancy bears highlighting given the company’s virtue-first brand. If the concept of social good they are working with is mutable enough that it includes hate groups, other organizations might think twice about trusting that message.
At times like the present, companies are being called on to not just say they are dedicated to things like human rights, anti-racism and other causes, but to demonstrate it and respond to criticism. According to Blackbaud:
“Racism and acts of hate strip people of basic human rights and defy the very principles of what ‘good’ stands for. We condemn racism and discrimination and seek solutions to end the suffering in our communities and world.
Equality must be more than a motto. It must be a promise to each other and the world.”
By espousing equality with one hand while making millions from those who oppose it with the other, Blackbaud invites fair questions about whether that promise is being kept.
The Trump administration’s decision to extend its ban on issuing work visas to the end of this year “would be a blow to very early-stage tech companies trying to get off the ground,” Silicon Valley immigration lawyer Sophie Alcorn told TechCrunch this week.
In 2019, the federal government issued more than 188,000 H-1B visas — thousands of workers who live in the San Francisco Bay Area and other startup hubs hold H-1B and H-2B visas or J and L visas, which are explicitly prohibited under the president’s ban. Normally, the government would process tens of thousands of visa applications and renewals in October at the start of its fiscal year, but the executive order all but guarantees new visas won’t be granted until 2021.
Four TechCrunch staffers analyzed the president’s move in an attempt to see what it portends for the tech industry, the U.S. economy and our national image:
Danny Crichton: Trump’s ban is a “self-inflicted” blow to our precarious economy
America’s economic supremacy is increasingly precarious.
Outsourcing and offshoring led to a generational loss of manufacturing skills, management incompetence killed off many of the country’s leading businesses and the nation now competes directly with China and other countries in critical emerging industries like 5G, artificial intelligence and the other alphabet soup of technological acronyms.
We have one thing going for us that no other country can rival: our ability to attract top talent. No other country hosts more immigrants, nor does any other country capture the imagination of a greater portion of the world’s top minds. America — whether Silicon Valley, Wall Street, Hollywood, Harvard Square or anywhere in between — is where smart people congregate.
Or at least, it was.
The coronavirus was the first major blow, partially self-inflicted. Remote work pushed employers toward keeping workers where they are (both domestically and overseas) rather than centralizing them in a handful of corporate HQs. Meanwhile, students — the first step for many talented workers to enter the United States — are taking a pause, fearing renewed outbreaks of COVID-19 in America while much of the rest of the developed world reopens with few cases.
The second blow was entirely self-inflicted. Earlier this week, President Donald Trump announced that his administration would halt processing critical worker visas like the H-1B due to the current state of the American economy.
It’s time to put on our thinking caps so we can discuss an esoteric but important policy change and how it is going to impact the VC world.
The 2008 financial crisis devastated the global economy. One of the reforms that came from the detritus of that situation was a policy known as the Volcker Rule.
The rule, proposed by former Fed chairman Paul Volcker and passed into law with the Dodd-Frank reform bill, was designed to limit the ways that banks could invest their balance sheets to avoid the kind of cataclysmic systemic risks that the world witnessed during the crisis. Many banks faced a liquidity crunch after investing in mortgage-backed securities (MBSs), collateralized debt obligations (CDOs), and other even more arcane speculative financial instruments (like POGs, or Piles Of Garbage) in seeking profits.
One of the unintended consequences of the Rule is that it limited banks from investing in certain “covered funds,” which was written broadly enough that it, well, covered VC firms as well as hedge funds and other private equity vehicles. Reforms to that policy (and to the Rule in general) have been proposed for a decade with little traction until recently.
Now, a number of reforms are underway to the Volcker Rule, which has been a domestic regulatory priority for the Trump administration since Inauguration Day.
A framework for ensuring fairness in digital marketplaces and tackling abusive behavior online is brewing in Europe, fed by a smorgasbord of issues and ideas, from online safety and the spread of disinformation, to platform accountability, data portability and the fair functioning of digital markets.
European Commission lawmakers are even turning their eye to labor rights, spurred by regional concern over unfair conditions for platform workers.
On the content side, the core question is how to balance individual freedom of expression online against threats to public discourse, safety and democracy from illegal or junk content that can be deployed cheaply, anonymously and at massive scale to pollute genuine public debate.
The age-old conviction that the cure for bad speech is more speech can stumble in the face of such scale. While illegal or harmful content can be a money spinner, outrage-driven engagement is an economic incentive that often gets overlooked or edited out of this policy debate.
Certainly the platform giants — whose business models depend on background data-mining of internet users in order to program their content-sorting and behavioral ad-targeting (activity that, notably, remains under regulatory scrutiny in relation to EU data protection law) — prefer to frame what’s at stake as a matter of free speech, rather than bad business models.
But with EU lawmakers opening a wide-ranging consultation about the future of digital regulation, there’s a chance for broader perspectives on platform power to shape the next decades online, and much more besides.
In search of cutting-edge standards
For the past two decades, the EU’s legal framework for regulating digital services has been the e-commerce Directive — a cornerstone law that harmonizes basic principles and bakes in liabilities exemptions, greasing the groove of cross-border e-commerce.
Microsoft is joining IBM and Amazon in taking a position against the use of facial recognition technology by law enforcement — at least, until more regulation is in place.
During a remote interview at a Washington Post Live event this morning, the company’s president Brad Smith said Microsoft has already been taking a “principled stand” on the proper use of this technology.
“As a result of the principles that we’ve put place, we do not sell facial recognition technology to police departments in the United States today,” Smith said. “But I do think this is a moment in time that really calls on us to listen more, to learn more and most importantly to do more. Given that, we’ve decided that we will not sell facial recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
Smith went on to say that Microsoft will be putting other “review factors” into place that will govern the use of this technology in “other scenarios.”
Microsoft president @BradSmi says the company does not sell facial recognition software to police depts. in the U.S. today and will not sell the tools to police until there is a national law in place “grounded in human rights.” #postlive pic.twitter.com/lwxBLjrtZL
These comments come after the death of George Floyd has resulted in nationwide and global protests, leading to broader conversations around racial justice and law enforcement.
Microsoft’s position is similar to Amazon’s in its suggestion that the company will revisit the issue when stronger regulation is in place. (Although it’s not referenced explicitly by either company, Congressional Democrats’ proposed Justice in Policing Act would limit how police departments can use the technology.) It does not go as far as IBM, which said it will stop selling facial recognition tech entirely.
Matt Cagle, technology and civil liberties attorney with the ACLU of Northern California, responded to the news with a statement that reads, in part:
When even the makers of face recognition refuse to sell this surveillance technology because it is so dangerous, lawmakers can no longer deny the threats to our rights and liberties. Congress and legislatures nationwide must swiftly stop law enforcement use of face recognition, and companies like Microsoft should work with the civil rights community — not against it — to make that happen. This includes Microsoft halting its current efforts to advance legislation that would legitimize and expand the police use of facial recognition in multiple states nationwide …
We welcome these companies finally taking action — as little and as late as it may be. We also urge these companies to work to forever shut the door on America’s sordid chapter of over-policing of Black and Brown communities, including the surveillance technologies that disproportionately harm them.
Amnesty International, meanwhile, is calling for an outright ban on the use of facial recognition technology by police for the purposes of mass surveillance.
Sophie Alcorn is the founder of Alcorn Immigration Law in Silicon Valley and 2019 Global Law Experts Awards’ “Law Firm of the Year in California for Entrepreneur Immigration Services.” She connects people with the businesses and opportunities that expand their lives.
Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.
“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”
I work in people ops at a biotech startup. We received an application from a very promising candidate from Mexico for a job opening we’ve had listed for quite some time. Our company has never sponsored anyone for a visa. Which type of visa should we pursue, how much will it cost, how long will it take, and what should we keep in mind while working through the process?
—Puzzled in Petaluma
Thank you for your question! I’m excited to hear that your startup is looking to sponsor an international professional for the first time!
Professionals who are citizens of either Mexico or Canada may be eligible for a TN (Treaty National) visa. A TN visa holder’s spouse and dependent children are eligible for a TD (Treaty Dependent) visa.
The European Commission is asking for views on how online platforms should be regulated in future, launching a public consultation today on the forthcoming Digital Services Act (DSA).
This pan-EU legislative proposal, due before the end of the year, is slated to rework the regional rulebook for digital services, including tackling controversial issues such as liability for user-generated content and online disinformation.
Modernising and updating rules related to ecommerce and online marketplaces to foster competition by ensuring a level playing field in digital markets is another stated aim.
Whether the DSA will prove as divisive as the EU’s copyright reform remains to be seen — but the stakes are high indeed.
In parallel today, the Commission is soliciting views on possible updates to pan-EU competition regulation, asking whether a new tool is needed to beef up enforcement powers in the digital era.
Rebooting Europe’s digital regulation
The DSA consultation, which runs until September 8, covers issues including safety online, freedom of expression, fairness and a level-playing field in the digital economy, per the Commission, which says it’s seeking input from people, businesses, online platforms, academics, civil society and “all interested parties” to shape the planned governance framework for digital services.
But the Commission wants businesses of all stripes and sizes to chip into the consultation. After all, the most dominant platforms have the most to lose from any change of pan-EU rules.
And perhaps especially from changes that result in defining a specific set of “responsibilities” for the largest platforms.
Commenting in a statement, Commission EVP, Margrethe Vestager, said: “The Internet presents citizens and businesses with great opportunities, which they balance against risks that come with working and interacting online. At this time, we are asking for the views of interested citizens and stakeholders on how to make a modern regulatory framework for digital services and online platforms in the EU. Many of these questions impact the day-to-day lives of citizens and we are committing to build a safe and innovative digital future with purpose for them.”
“Online platforms have taken a central role in our life, our economy and our democracy. With such a role comes greater responsibility, but this can happen only against the backdrop of a modern rulebook for digital services,” said internal market commissioner Thierry Breton in another statement. “We will listen to all views and reflect together to find the right balance between a safe Internet for all, protecting freedom of expression and ensuring space to innovate in the EU single market.”
The DSA package will contain a number of strands, with one set of rules focused on updating the EU’s existing eCommerce Directive — which dates back two decades at this point.
“Building on these principles, we aim to establish clearer and modern rules concerning the role and obligations of online intermediaries, including non-EU ones active in the EU, as well as a more effective governance system to ensure that such rules are correctly enforced across the EU single Market while guaranteeing the respect of fundamental rights,” the Commission said today.
A second component is aimed at ensuring fairness in European digital markets which have become dominated by a few large online platforms that act as gatekeepers.
EU institutions have already adopted one legislative measure aimed at platform marketplace fairness — due to come into force next month. But the Commission believes more is needed and is now exploring building on that foundation with additional rules to foster competition — potentially around (non-personal) data sharing.
“We will explore rules to address these market imbalances, to ensure that consumers have the widest choice and that the EU single market for digital services remains competitive and open to innovation. This could be through additional general rules for all platforms of a certain scale, such as rules on self-preferencing, and/or through tailored regulatory obligations for specific gatekeepers, such as non-personal data access obligations, specific requirements regarding personal data portability, or interoperability requirements,” it said today.
The consultation is also asking for views on other “emergent” issues related to online platforms — including working conditions for platform workers who are providing a service via these marketplaces.
Gig economy platforms continue to face legal challenges in Europe over their classification of platform workers as “self-employed,” a status that reduces the benefits they are entitled to as a result of their labor.
On competition policy, the Commission has today published an inception impact assessment and opened up another public consultation — inviting comments on whether EU regulators need a new competition tool to allow them to address structural competition problems in a timely and effective manner.
The pace of competition enforcement vs the speed of Internet-enabled disruption has led to criticism that current remedies applied to problematic digital business practices come far too late to be effective.
Commenting on this in another supporting statement, Vestager, who also heads up EU competition policy, said: “The world is changing fast and it is important that the competition rules are fit for that change. Our rules have an inbuilt flexibility which allows us to deal with a broad range of anti-competitive conduct across markets. We see, however, that there are certain structural risks for competition, such as tipping markets, which are not addressed by the current rules. We are seeking the views of stakeholders to explore the need for a possible new competition tool that would allow addressing such structural competition problems, in a timely and effective manner ensuring fair and competitive markets across the economy.”
The Commission says it has concluded that ensuring the “contestability” and “fair functioning” of markets is likely to require a “holistic and comprehensive approach” — emphasizing that this should involve continued vigorous enforcement of existing EU rules (including the use of so-called ‘interim measures’, where appropriate; an old tool Vestager has recently dusted off and unboxed).
But — additionally — it’s considering supplementing existing antitrust rules with ex-ante regulation of digital platforms (“including additional requirements for those that play a gatekeeper role”); and the aforementioned possible new competition tool for dealing with structural competition problems that have proven tricky to tackle with current measures (such as preventing markets from tipping).
“The new competition tool should enable the Commission to address gaps in the current competition rules and to intervene against structural competition problems across markets in a timely and effective manner,” it writes.
“After establishing a structural competition problem through a rigorous market investigation during which rights of defence are fully respected, the new tool should allow the Commission to impose behavioural and where appropriate, structural remedies. However, there would be no finding of an infringement, nor would any fines be imposed on the market participants.”
Stakeholders have until June 30 to submit views on the Commission’s inception impact assessment, while the public consultation on the potential new competition tool is taking submissions until September 8.
Subject to the outcome of the impact assessment the Commission adds that a legislative proposal is scheduled for Q4.
Interestingly, for Commission watchers, the consultation on possible ex-ante regulation of digital platforms has not been folded into the competition consultation, even though it clearly forms part of Vestager’s thinking on ensuring functionally competitive markets and is included in the competition reform discussion. Instead it has been slotted into the DSA consultation, which is being led by Breton.
The two commissioners not only have very different personal styles but appear opposed on policy substance, with Vestager being comfortable voicing support for regulating digital technologies while Breton continues to express reluctance to do so, preferring to court industry engagement — and couching regulation as a last, unwelcome resort.
After weeks of intense debate, the French government has released its contact-tracing app StopCovid. While it was supposed to be released on both iOS and Android 6 hours ago, it is only available on the Play Store right now.
It’s unclear what the bottleneck is on iOS, as the government sent me a beta version of the iOS app last week ahead of the final release today. I’ve reached out to the French government but haven’t heard back.
Update: The app has been released on the App Store a few hours after the Android release.
France isn’t relying on Apple and Google’s contact-tracing API. A group of researchers and private companies have worked on a separate, centralized architecture. The server assigns you a permanent ID (a pseudonym) and sends a list of ephemeral IDs derived from that permanent ID to your phone.
Like most contact-tracing apps, StopCovid relies on Bluetooth Low Energy to build a comprehensive list of other app users you’ve interacted with for more than a few minutes. If you’re using the app, it collects the ephemeral IDs of other app users around you.
This is where the implementation diverges from other apps. If you’re using StopCovid and you’re diagnosed COVID-19-positive, your doctor, hospital or testing facility will hand you a QR code or a string of letters and numbers. You can choose to open the app and enter that code to share the list of ephemeral IDs of people you’ve interacted with over the past two weeks.
The server back end then flags all those ephemeral IDs as belonging to people who have potentially been exposed to the coronavirus. Also on the server, each user is associated with a risk score. If it goes above a certain threshold, the user receives a notification. The app then recommends that you get tested and follow official instructions.
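The centralized design described above can be sketched in a few lines of Python. To be clear, this is an illustration of the general approach, not StopCovid’s actual code: the real ROBERT protocol uses its own key schedule and encryption, and its risk model weighs contact duration, signal strength and recency. The HMAC-based derivation, the 8-byte ID length and the 15-minute cutoff below are all assumptions chosen for the sketch.

```python
import hashlib
import hmac


def ephemeral_ids(server_key: bytes, permanent_id: bytes, epochs: int) -> list:
    """Derive one short-lived broadcast ID per epoch from a permanent pseudonym.

    Only the server knows the key, so observers who collect the broadcast IDs
    over Bluetooth cannot link them back to one user. HMAC-SHA256 truncated to
    8 bytes stands in here for the protocol's real derivation (an assumption).
    """
    return [
        hmac.new(server_key, permanent_id + epoch.to_bytes(4, "big"),
                 hashlib.sha256).digest()[:8]
        for epoch in range(epochs)
    ]


def exposure_risk(contact_minutes: list, threshold: int = 15) -> bool:
    """Toy server-side scoring: flag a user if any single contact with a
    diagnosed person lasted at least `threshold` minutes.

    The production risk score combines several factors; the flat 15-minute
    cutoff is a placeholder, not the official parameter.
    """
    return any(minutes >= threshold for minutes in contact_minutes)
```

The key privacy property of this architecture sits in the first function: the mapping from permanent ID to ephemeral IDs happens server-side, which is exactly why French academics raised concerns, since the central server can re-identify users in a way Apple and Google’s decentralized API cannot.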
From the very beginning, France’s contact-tracing protocol ROBERT has raised some privacy concerns. Hundreds of French academics signed a letter asking for safeguards. In particular, they said it was unclear whether a contact-tracing app would even be useful when it comes to fighting the coronavirus outbreak.
France’s data protection watchdog CNIL stated that the app was complying with the regulatory framework in France. But CNIL also added a long list of recommendations to protect the privacy of the users of the app.
Last week, the parliament voted in favor of the release of the contact-tracing app. But it was a complicated debate, with a lot of misconceptions and some interesting objections.
Now, let’s see how people react to the release of the app and whether they actually download it. The government said there won’t be any negative consequences if you don’t use StopCovid, nor any privileges if you do.
There’s one thing for sure: not releasing the iOS app at the same time as the Android app introduces some confusion. The most downloaded app on the iOS App Store in France is currently STOP COVID19 CAT, a health information app from the government of Catalonia.