Marc Benioff Sets His Sights on Microsoft

SAN FRANCISCO — Five years ago, Marc Benioff negotiated to sell Salesforce, the software company he co-founded in 1999 and has run ever since, to Microsoft. If the deal had gone through, he would have been richly rewarded — but, in the end, just another employee of the tech colossus.

With Tuesday’s news that Salesforce was buying Slack for $27.7 billion, Mr. Benioff did something much more difficult. He is now set to directly compete against Microsoft, one of the world’s most valuable companies, in its own favored territory.

Microsoft has been slugging it out with Slack in the pandemic-fueled rush to …

Salesforce to Acquire Slack for $27.7 Billion

Salesforce, which provides marketing and sales software, among other products, has been highly acquisitive as it looks to grow. Under Mr. Benioff, Salesforce has bought at least 60 companies, including 27 in the last five years, according to S&P Capital IQ.

Salesforce stock has climbed nearly 40 percent this year, valuing the company at $220 billion. On Tuesday, it said its revenue rose 20 percent to $5.24 billion in the three months ending with October.

In February, Salesforce paid $1.3 billion for Vlocity, a mobile software provider. Last year, it bought Tableau, a data analytics provider, for $15.3 billion; in 2018, it bought MuleSoft, a data integration company, …

Bitcoin Climbs to Record High

Bitcoin is back. Again.

Nearly three years after it went on a hair-bending rise and hit a peak of $19,783, the price of a single Bitcoin rose above that for the first time on Monday, according to the data and news provider CoinDesk. The cryptocurrency has soared since March, after sinking below $4,000 at the outset of the coronavirus pandemic.

Bitcoin’s latest climb is different from its last spike in 2017, which was driven largely by investors in Asia who had just learned about cryptocurrencies. Back then, the digital token soon lost momentum as people questioned what it could do other than …

Pushed by Pandemic, Amazon Goes on a Hiring Spree Without Equal

SEATTLE — Amazon has embarked on an extraordinary hiring binge this year, vacuuming up an average of 1,400 new workers a day and solidifying its power as online shopping becomes more entrenched in the coronavirus pandemic.

The hiring has taken place at Amazon’s headquarters in Seattle, at its hundreds of warehouses in rural communities and suburbs, and in countries such as India and Italy. Amazon added 427,300 employees between January and October, pushing its work force to more than 1.2 million people globally, up more than 50 percent from a year ago. Its number of workers now approaches the entire population of Dallas.

The …

‘Tokenized’: Inside Black Workers’ Struggles at Coinbase

SAN FRANCISCO — One by one, they left. Some quit. Others were fired. All were Black.

The 15 people worked at Coinbase, the most valuable U.S. cryptocurrency start-up, where they represented roughly three-quarters of the Black employees at the 600-person company. Before leaving in late 2018 and early 2019, at least 11 of them informed the human resources department or their managers about what they said was racist or discriminatory treatment, five people with knowledge of the situation said.

One of the employees was Alysa Butler, 25, who worked in recruiting. During her time at Coinbase, she said, she told her manager several times about …

Roiled by Election, Facebook Struggles to Balance Civility and Growth

SAN FRANCISCO — In the tense days after the presidential election, a team of Facebook employees presented the chief executive, Mark Zuckerberg, with an alarming finding: Election-related misinformation was going viral on the site.

President Trump was already casting the election as rigged, and stories from right-wing media outlets with false and misleading claims about discarded ballots, miscounted votes and skewed tallies were among the most popular news stories on the platform.

In response, the employees proposed an emergency change to the site’s news feed algorithm, which helps determine what more than two billion people see every day. It involved …

How Misinformation ‘Superspreaders’ Seed False Election Theories

On the morning of Nov. 5, Eric Trump, one of the president’s sons, asked his Facebook followers to report cases of voter fraud using the hashtag Stop the Steal. His post was shared over 5,000 times.

By late afternoon, the conservative media personalities Diamond and Silk had shared the hashtag along with a video claiming voter fraud in Pennsylvania. Their post was shared over 3,800 times.

That night, the conservative activist Brandon Straka asked people to protest in Michigan under the banner #StoptheSteal. His post was shared more than 3,700 times.

Over the next week, the phrase “Stop the Steal” was used to promote dozens of rallies that spread false voter fraud claims about the U.S. presidential election.

New research from Avaaz, a global human rights group, the Elections Integrity Partnership and The New York Times shows how a small group of people — mostly right-wing personalities with outsized influence on social media — helped spread the false voter-fraud narrative that led to those rallies.

The members of that group, like the guests of a large wedding held during the pandemic, were “superspreaders” of misinformation around voter fraud, seeding falsehoods including claims that dead people voted, that voting machines had technical glitches and that mail-in ballots were not counted correctly.

“Because of how Facebook’s algorithm functions, these superspreaders are capable of priming a discourse,” said Fadi Quran, a director at Avaaz. “There is often this assumption that misinformation or rumors just catch on. These superspreaders show that there is an intentional effort to redefine the public narrative.”

Across Facebook, there were roughly 3.5 million interactions — including likes, comments and shares — on public posts referencing “Stop the Steal” during the week of Nov. 3, according to the research. Of those, the profiles of Eric Trump, Diamond and Silk and Mr. Straka accounted for a disproportionate share — roughly 6 percent, or 200,000, of those interactions.

While the group’s impact was notable, it did not come close to the spread of misinformation promoted by President Trump since then. Of the 20 most-engaged Facebook posts over the last week containing the word “election,” all were from Mr. Trump, according to Crowdtangle, a Facebook-owned analytics tool. All of those claims were found to be false or misleading by independent fact checkers.

The baseless election fraud claims have been used by the president and his supporters to challenge the vote in a number of states. Reports of malfunctioning voting machines, intentionally miscounted mail-in votes and other irregularities were investigated by election officials and journalists, who found no evidence of widespread voter fraud.

The voter fraud claims have continued to gather steam in recent weeks, thanks in large part to prominent accounts. A look at a four-week period starting in mid-October shows that President Trump and the top 25 superspreaders of voter fraud misinformation accounted for 28.6 percent of the interactions people had with that content, according to an analysis by Avaaz.

“What we see these people doing is kind of like setting a fire down with fuel, it is designed to quickly create a blaze,” Mr. Quran said. “These actors have built enough power they ensure this misinformation reaches millions of Americans.”

In order to find the superspreaders, Avaaz compiled a list of 95,546 Facebook posts that included narratives about voter fraud. Those posts were liked, shared or commented on nearly 60 million times by people on Facebook.

Avaaz found that just 33 of the 95,546 posts were responsible for over 13 million of those interactions. Those 33 posts had created a narrative that would go on to shape what millions of people thought about the legitimacy of the U.S. elections.
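
The mechanics behind that finding are simple: rank posts by their total interactions and measure what share of all interactions the top handful account for. The sketch below illustrates the idea with made-up numbers and field names; it is not Avaaz's data or code. Applied to the figures above, 33 posts with over 13 million of nearly 60 million interactions means more than a fifth of the engagement came from a tiny fraction of posts.

```python
# Illustrative sketch only: rank posts by engagement and measure how much of
# the total a small number of "superspreader" posts account for.
# The sample data below is invented; it is not Avaaz's dataset.

posts = [
    {"id": "post_001", "interactions": 950_000},
    {"id": "post_002", "interactions": 640_000},
    {"id": "post_003", "interactions": 410_000},
    {"id": "post_004", "interactions": 12_000},
    {"id": "post_005", "interactions": 3_500},
]

total = sum(p["interactions"] for p in posts)

# Take the most-engaged posts and compute their share of all interactions.
top = sorted(posts, key=lambda p: p["interactions"], reverse=True)[:3]
top_total = sum(p["interactions"] for p in top)

print(f"Top {len(top)} posts account for {top_total / total:.1%} of all interactions")
```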

A spokesman for Facebook said the company had added labels to posts that misrepresented the election process and was directing people to a voting information center.

“We’re taking every opportunity to connect people to reliable information about the election and how votes are being counted,” said Kevin McAlister, a Facebook spokesman. The company has not commented on why accounts that repeatedly share misinformation, such as Mr. Straka’s and Diamond and Silk’s, have not been penalized. Facebook has previously said that President Trump, along with other elected officials, is granted a special status and is not fact-checked.

Many of the superspreader accounts had millions of interactions on their Facebook posts over the last month, and have enjoyed continued growth. The accounts were active on Twitter as well as Facebook, and increasingly spread the same misinformation on new social media sites like Parler, MeWe and Gab.

Dan Bongino, a right-wing commentator with a following of nearly four million people on Facebook, had over 7.7 million interactions on Facebook the week of Nov. 3. Mark Levin, a right-wing radio host, had nearly four million interactions, and Diamond and Silk had 2.5 million. A review of their pages by The Times shows that a majority of their posts have focused on the recent elections, and voter fraud narratives around them.

None of the superspreaders identified in this article responded to requests for comment.

One of the most prominent false claims promoted by the superspreaders was that Dominion voting software deleted votes for Mr. Trump, or somehow changed vote tallies in several swing states. Election officials have found no evidence that the machines malfunctioned, but posts about the machines have been widely shared by Mr. Trump and his supporters.

Over the last week, just seven posts from the top 25 superspreaders of the Dominion voter fraud claim accounted for 13 percent of the total interactions on Facebook about the claim.

Many of those same accounts were also top superspreaders of the Dominion claim, and other voter fraud theories, on Twitter. The accounts of President Trump, his son Eric, Mr. Straka and Mr. Levin were all among the top 20 accounts that spread misinformation about voter fraud on Twitter, according to Ian Kennedy, a researcher at the University of Washington who works with the Elections Integrity Partnership.

Mr. Trump had by far the largest influence on Twitter. A single tweet by the president accusing Dominion voting systems of deleting 2.7 million votes in his favor was shared over 185,000 times, and liked over 600,000 times.

Like the other false claims about voter fraud, Mr. Trump’s tweet carried a label from Twitter noting that the information he was sharing was not accurate.

Twitter, like Facebook, has said that those labels help prevent false claims from being shared and direct people toward more authoritative sources of information.

Earlier this week, BuzzFeed News reported that Facebook employees questioned whether the labels were effective. Within the company, employees have sought out their own data on how well national newspapers performed during the elections, according to one Facebook employee.

On the #StoptheSteal hashtag, they found that both The New York Times and The Washington Post were among the top 25 pages by interactions — mainly from readers sharing articles and using the hashtag in those posts.

Combined, the two publications had approximately 44,000 interactions on Facebook under that hashtag. By comparison, Mr. Straka, the conservative activist who shared the call to action on voter fraud, got three times that number of interactions sharing material under the same hashtag on his own Facebook account.

Jacob Silver contributed reporting.

Intel and Nvidia Chips Power a Chinese Surveillance System

URUMQI, China — At the end of a desolate road rimmed by prisons, deep within a complex bristling with cameras, American technology is powering one of the most invasive parts of China’s surveillance state.

The computers inside the complex, known as the Urumqi Cloud Computing Center, are among the world’s most powerful. They can watch more surveillance footage in a day than one person could in a year. They look for faces and patterns of human behavior. They track cars. They monitor phones.

The Chinese government uses these computers to watch untold numbers of people in Xinjiang, a western region of China where Beijing has unleashed a campaign of surveillance and suppression in the name of combating terrorism.

Chips made by Intel and Nvidia, the American semiconductor companies, have powered the complex since it opened in 2016. By 2019, at a time when reports said that Beijing was using advanced technology to imprison and track Xinjiang’s mostly Muslim minorities, new U.S.-made chips helped the complex join the list of the world’s fastest supercomputers. Both Intel and Nvidia say they were unaware of what they called misuse of their technology.

Powerful American technology and its potential misuse cut to the heart of the decisions the Biden administration must face as it tackles the country’s increasingly bitter relationship with China. The Trump administration last year banned the sale of advanced semiconductors and other technology to Chinese companies implicated in national security or human rights issues. A crucial early question for Mr. Biden will be whether to firm up, loosen or rethink those restrictions.

Some figures in the technology industry argue that the ban went too far, cutting off valuable sales of American products with plenty of harmless uses and spurring China to create its own advanced semiconductors. Indeed, China is spending billions of dollars to develop high-end chips.

By contrast, critics of the use of American technology in repressive systems say that buyers exploit workarounds and that the industry and officials should track sales and usage more closely.

Companies often point out that they have little say over where their products end up. The chips in the Urumqi complex, for example, were sold by Intel and Nvidia to Sugon, the Chinese company backing the center. Sugon is an important supplier to Chinese military and security forces, but it also makes computers for ordinary companies.

That argument is not good enough anymore, said Jason Matheny, the founding director of Georgetown University’s Center for Security and Emerging Technology and a former U.S. intelligence official.

“Government and industry need to be more thoughtful now that technologies are advancing to a point where you could be doing real-time surveillance using a single supercomputer on millions of people potentially,” he said.

There is no evidence that the sales of the Nvidia and Intel chips, which predate the Trump order, broke any laws. Intel said it no longer sells semiconductors for supercomputers to Sugon. Still, both companies continue to sell chips to the Chinese firm.

The Urumqi complex’s existence and use of U.S. chips are no secret, and there was no shortage of clues that Beijing was using it for surveillance in Xinjiang. Since 2015, when the complex began development, state media and Sugon have boasted of its ties to the police.

In five-year-old marketing materials distributed in China, Nvidia promoted the Urumqi complex’s capabilities and boasted that the “high capacity video surveillance application” there had won customer satisfaction.

Nvidia said that the materials referred to older versions of its products and that video surveillance then was a normal part of the discussion around “smart cities,” an effort in China to use technology to solve urban issues like pollution, traffic and crime. A spokesman for Nvidia said the company had no reason to believe its products would be used “for any improper purpose.”

The spokesman added that Sugon “hasn’t been a significant Nvidia customer” since last year’s ban. He also said that Nvidia had not provided technical assistance for Sugon since then.

A spokesman for Intel, which still sells Sugon lower-end chips, said it would restrict or stop business with any customer that it found had used its products to violate human rights.

Publicity over Intel’s China business appears to have had an impact within the company. One business unit last year drafted ethics guidelines for its technology’s A.I. applications, according to three people familiar with the matter who asked not to be named because Intel had not made the guidelines public.

Sugon said in a statement that the complex was originally aimed at tracking license plates and managing other smart city tasks, but its systems proved ineffective and were switched to other uses. But as recently as September, official Chinese government media described the complex as a center for processing video and images for managing cities.

Advances in technology have given the authorities around the world substantial power to watch and sort people. In China, leaders have pushed technology to an even greater extreme. Artificial intelligence and genetic testing are used to screen people to see whether they are Uighurs, one of Xinjiang’s minority groups. Chinese companies and the authorities claim their systems can detect religious extremism or opposition to the Communist Party.

The Urumqi Cloud Computing Center — also sometimes called the Xinjiang Supercomputing Center — broke onto the list of the world’s fastest computers in 2018, ranking No. 221. In November 2019, new chips helped push its computer to No. 135.

Two data centers run by Chinese security forces sit next door, a way to potentially cut down on lag time, according to experts. Also nearby are six prisons and re-education centers.

When a New York Times reporter tried to visit the center in 2019, he was followed by plainclothes police officers. A guard turned him away.

The official Chinese media and Sugon’s previous statements depict the complex as a surveillance center, among other uses. In August 2017, local officials said that the center would support a Chinese police surveillance project called Sharp Eyes and that it could search 100 million photos in a second. By 2018, according to company disclosures, its computers could connect to 10,000 video feeds and analyze 1,000 simultaneously, using artificial intelligence.

“With the help of cloud computing, big data, deep learning and other technologies, the intelligent video analysis engine can integrate police data and applications from video footage, Wi-Fi hot spots, checkpoint information, and facial recognition analysis to support the operations of different departments” within the Chinese police, Sugon said in a 2018 article posted to an official social media account.

On the occasion of a visit by local Communist Party leaders to the complex that year, it wrote on its website that the computers had “upgraded the thinking from after-the-fact tracking to before-the-fact predictive policing.”

In Xinjiang, predictive policing often serves as shorthand for pre-emptive arrests aimed at behavior deemed disloyal or threatening to the party. That could include a show of Muslim piety, links to family living overseas, or owning two phones or no phone at all, according to Uighur testimony and official Chinese policy documents.

Technology helps sort vast amounts of data that humans cannot process, said Jack Poulson, a former Google engineer and founder of the advocacy group Tech Inquiry.

“When you have something approaching a surveillance state, your primary limitation is on your ability to identify events of interest within your feeds,” he said. “The way you scale up your surveillance is through machine learning and large scale A.I.”

The Urumqi complex went into development before reports of abuses in Xinjiang were widespread. By 2019, governments around the world were protesting China’s conduct in Xinjiang. That year, the Sugon computer appeared on the international supercomputing rankings, using Intel Xeon Gold 5118 processors and Nvidia Tesla V100 advanced artificial intelligence chips.

It is not clear how or whether Sugon will obtain chips powerful enough to keep the Urumqi complex on that list. But lesser technology typically used to run harmless tasks can also be used for surveillance and suppression. Customers can also use resellers in other countries or chips made by American companies overseas.

Last year, the police in two Xinjiang counties, Yanqi and Qitai, purchased surveillance systems that ran on lower-level Intel chips, according to government procurement documents. The Kizilsu Kyrgyz Autonomous Prefecture public security bureau in April purchased a computing platform that used servers running less-powerful Intel chips, according to the documents, though the agency had been placed on a Trump administration blacklist last year for its involvement in surveillance.

China’s dependence on American chips has, for now, helped the world push back, said Maya Wang, a China researcher with Human Rights Watch.

“I’m afraid in a few years’ time, Chinese companies and government will find their own way to develop chips and these capabilities,” Ms. Wang said. “Then there will be no way to get a handle on trying to stop these abuses.”

Paul Mozur reported from Urumqi, China, and Don Clark from San Francisco.

Disappearing Tweets? Twitter Now Has a Feature for That

SAN FRANCISCO — First Snapchat did it. Then Instagram and Facebook jumped in. Now Twitter is joining in, too.

On Tuesday, Twitter said it would introduce a feature called Fleets, allowing users to post ephemeral photos or text that will automatically disappear after 24 hours. Fleets, a name that refers to the “fleeting” nature of a thought or expression, will roll out to all iPhone and Android users globally over the coming days, the company said.

Twitter said its main “global town square” service, which people such as President Trump use to broadcast their thoughts to followers, remained its marquee product. But the company said it recognized that many users simply lurked on the platform and rarely posted. Fleets, it said, could make it easier for people to communicate without worrying about wider scrutiny of their posts.

“We’ve learned that some people feel more comfortable joining conversations on Twitter with this ephemeral format, so what they’re saying lives just for a moment in time,” said Joshua Harris, a Twitter director of design. “We can create a space with less pressure that allows people to express themselves in a way that feels a bit more safe.”

Twitter’s move is part of a larger shift by social media companies toward more private and temporary modes of sharing. As public sharing on social media has spread toxic content and misinformation, many people have looked to minimize their digital footprints and communicate in more intimate groups.

Ephemeral sharing was pioneered by Snapchat. Evan Spiegel, chief executive of Snap, Snapchat’s parent company, has said he noticed that young people wished to keep their photos and communications private and temporary when he founded his company in 2011. Snapchat’s Stories feature, which broadcasts someone’s posts to his or her followers before disappearing 24 hours later, has become hugely popular.

Snapchat’s competitors have taken note. Facebook and its family of apps, including the photo-sharing site Instagram and the WhatsApp messaging app, have replicated the feature in recent years. Others, such as LinkedIn and Pinterest, have also followed suit.

Twitter has been late to the trend, partly because of the public nature of its platform. It is also more difficult to create advertising for ephemeral posts than for more permanent content, another hurdle to building the product.

In tests of Fleets in a handful of countries outside the United States recently, users gravitated to the new feature, Twitter executives said. “Those new to Twitter found Fleets to be an easier way to share what’s on their mind,” Mr. Harris said.

Smaller tech companies have also experimented with alternative forms of communication over the past few years. Discord, a popular communications start-up among gamers and others, has popularized group chat rooms that use video, voice or text chat. Clubhouse, another start-up, has pushed an all-audio, all-ephemeral social chat room approach.

Twitter said it was also experimenting with audio forms of communication among users. One of those products, called Spaces, looks and acts similar to Clubhouse, with small groups of people able to speak privately with no permanent recording of the conversation. Spaces is still in its early stages, the company said.

In shifting toward more private communications, Twitter will have to balance monitoring and limiting abusive content against the privacy of its users. Social media services tend to find it difficult to root out toxic posts and falsehoods when people are engaged in more intimate conversations and are not posting publicly.

Twitter said it was offering more tools for users to report harmful or abusive content, among other solutions.

“There is a ton we’re doing behind the scenes, expanding our rules and trying to prevent abuse and harassment before it happens,” said Christine Su, a Twitter product manager. More than half of all tweets that break the rules are removed before they are reported to the company, she said.

The Hot New Covid Tech Is Wearable and Constantly Tracks You

In Rochester, Mich., Oakland University is preparing to hand out wearable devices to students that log skin temperature once a minute — or more than 1,400 times per day — in the hopes of pinpointing early signs of the coronavirus.

In Plano, Texas, employees at the headquarters of Rent-A-Center recently started wearing proximity detectors that log their close contacts with one another and can be used to alert them to possible virus exposure.

And in Knoxville, students on the University of Tennessee football team tuck proximity trackers under their shoulder pads during games — allowing the team’s medical director to trace which players may have spent more than 15 minutes near a teammate or an opposing player.

The powerful new surveillance systems, wearable devices that continuously monitor users, are the latest high-tech gadgets to emerge in the battle to hinder the coronavirus. Some sports leagues, factories and nursing homes have already deployed them. Resorts are rushing to adopt them. A few schools are preparing to try them. And the conference industry is eyeing them as a potential tool to help reopen convention centers.

“Everyone is in the early stages of this,” said Laura Becker, a research manager focusing on employee experience at the International Data Corporation, a market research firm. “If it works, the market could be huge because everyone wants to get back to some sense of normalcy.”

Companies and industry analysts say the wearable trackers fill an important gap in pandemic safety. Many employers and colleges have adopted virus screening tools like symptom-checking apps and temperature-scanning cameras. But they are not designed to catch the estimated 40 percent of people with Covid-19 infections who may never develop symptoms like fevers.

Some offices have also adopted smartphone virus-tracing apps that detect users’ proximity. But the new wearable trackers serve a different audience: workplaces like factories where workers cannot bring their phones, or sports teams whose athletes spend time close together.

This spring, when coronavirus infections began to spike, many professional football and basketball teams in the United States were already using sports performance monitoring technology from Kinexon, a company in Munich whose wearable sensors track data like an athlete’s speed and distance. The company quickly adapted its devices for the pandemic, introducing SafeZone, a system that logs close contacts between players or coaches and emits a warning light if they get within six feet. The National Football League began requiring players, coaches and staff to wear the trackers in September.

The data has helped trace the contacts of about 140 N.F.L. players and personnel who have tested positive since September, including an outbreak among the Tennessee Titans, said Dr. Thom Mayer, the medical director of the N.F.L. Players Association. The system is particularly helpful in ruling out people who spent less than 15 minutes near infected colleagues, he added.

College football teams in the Southeastern Conference also use Kinexon trackers. Dr. Chris Klenck, the head team physician at the University of Tennessee, said the proximity data helped teams understand when the athletes spent more than 15 minutes close together. They discovered it was rarely on the field during games, but often on the sideline.

“We’re able to tabulate that data, and from that information we can help identify people who are close contacts to someone who’s positive,” Dr. Klenck said.

Civil rights and privacy experts warn that the spread of such wearable continuous-monitoring devices could lead to new forms of surveillance that outlast the pandemic — ushering into the real world the same kind of extensive tracking that companies like Facebook and Google have instituted online. They also caution that some wearable sensors could enable employers, colleges or law enforcement agencies to reconstruct people’s locations or social networks, chilling their ability to meet and speak freely. And they say these data-mining risks could disproportionately affect certain workers or students, like undocumented immigrants or political activists.

“It’s chilling that these invasive and unproven devices could become a condition for keeping our jobs, attending school or taking part in public life,” said Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, a nonprofit in Manhattan. “Even worse, there’s nothing to stop police or ICE from requiring schools and employers to hand over this data.”

Executives at Kinexon and other companies that market the wearable trackers said in recent interviews that they had thought deeply about the novel data-mining risks and had taken steps to mitigate them.

Devices from Microshare, a workplace analytics company that makes proximity detection sensors, use Bluetooth technology to detect and log people wearing the trackers who come into close contact with one another for more than 10 or 15 minutes. But the system does not continuously monitor users’ locations, said Ron Rock, the chief executive of Microshare. And it uses ID codes, not employees’ real names, to log close contacts.

Mr. Rock added that the system was designed for human resources managers or security officials at client companies to use to identify and alert employees who spent time near an infected person, not to map workers’ social connections.
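
A minimal sketch of how such a pseudonymous contact log might work is below. The badge IDs, the 10-minute cutoff and the function names are illustrative assumptions for this article, not Microshare's actual system or code.

```python
# Hypothetical sketch of pseudonymous proximity logging; the IDs, threshold
# and field names are assumptions made for illustration, not Microshare's code.
from dataclasses import dataclass
from typing import List

MIN_CONTACT_MINUTES = 10  # assumed cutoff, based on the "10 or 15 minutes" described above

@dataclass
class Contact:
    badge_a: str      # anonymized ID code, not an employee name
    badge_b: str
    minutes: float    # total time the two badges were in Bluetooth range

def flag_close_contacts(contacts: List[Contact]) -> List[Contact]:
    """Keep only contacts long enough to count as a close contact."""
    return [c for c in contacts if c.minutes >= MIN_CONTACT_MINUTES]

log = [
    Contact("ID-4821", "ID-0173", 22.5),  # sustained contact: flagged
    Contact("ID-4821", "ID-9902", 3.0),   # brief pass-by: ignored
]

for c in flag_close_contacts(log):
    print(f"{c.badge_a} and {c.badge_b} were within range for {c.minutes} minutes")
```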

GlaxoSmithKline, the pharmaceutical giant, recently began working with Microshare to develop a virus-tracing system for its sites that make over-the-counter drugs. Budaja Lim, head of digital supply chain technology for Asia Pacific at the company’s consumer health care division, said he wanted to ensure maximum privacy for workers who would wear the proximity detection sensors.

As a result, he said, the system silos the data it collects. It logs close contacts between workers using ID numbers, he said. And it separately records the ID numbers of workers who spent time in certain locations — like a packaging station in a warehouse — enabling the company to hyper-clean specific areas where an infected person may have spent time.

GlaxoSmithKline recently tested the system at a site in Malaysia and is rolling it out to other consumer health plants in Africa, Asia and Europe. The tracking data has also allowed the company to see where workers seem to be spending an unusual amount of time close together, like a security desk, and modify procedures to improve social distancing, Mr. Lim said.

“It was really designed to be a reactive type of solution” to trace workers with possible virus exposure, he said. “But it has actually become a really powerful tool to proactively manage and protect our employee safety.”

Oakland University, a public research university near Detroit, is at the forefront of schools and companies preparing to make the leap to the BioButton, a novel coin-size sensor attached to the skin 24/7 that uses algorithms to try to detect possible signs of Covid-19.

Whether such continuous surveillance of students, a young and largely healthy population, is beneficial is not yet known. Researchers are only in the early phases of studying whether wearable technology could help flag signs of the disease.

David A. Stone, vice president for research at Oakland University, said school officials had carefully vetted the BioButton and concluded it was a low-risk device that, added to measures like social distancing and mask wearing, might help hinder the spread of the virus. The technology will alert campus health services to students with possible virus symptoms, he said, but the school will not receive specific data like their temperature readings.

“In an ideal world, we would love to be able to wait until this is an F.D.A.-approved diagnostic,” Dr. Stone said. But, he added, “nothing about this pandemic has been in an ideal world.”

Dr. James Mault, chief executive of BioIntelliSense, the start-up behind the BioButton, said students with privacy concerns could ask to have their personal details stripped from the company’s records. He added that BioIntelliSense was preparing to conduct a large-scale study examining its system’s effectiveness for Covid-19.

Oakland had initially planned to require athletes and dorm residents to wear the BioButton. But the university reversed course this summer after nearly 2,500 students and staff members signed a petition objecting to the policy. The tracker will now be optional for students.

“A lot of colleges are doing masks and social distancing,” said Tyler Dixon, a senior at the school who started the petition, “but this seemed like one step too far.”