
This is how police request customer data from Amazon

Anyone can access portions of the web portal that law enforcement uses to request customer data from Amazon, even though the portal is supposed to require a verified email address and password.

Amazon’s law enforcement request portal allows police and federal agents to submit formal requests for customer data along with a legal order, like a subpoena, a search warrant, or a court order. The portal is publicly accessible from the internet, but law enforcement must register an account with the site in order to allow Amazon to “authenticate” the requesting officer’s credentials before they can make requests.

Only time-sensitive emergency requests can be submitted without an account, but even these require the requester to “declare and acknowledge” that they are an authorized law enforcement officer before they can submit a request.

The portal does not display customer data or allow access to existing law enforcement requests. But parts of the website still load without needing to log in, including its dashboard and the “standard” request form used by law enforcement to request customer data.

The portal provides a rare glimpse into how Amazon handles law enforcement requests.

This form allows law enforcement to request customer data using a wide variety of data points, including Amazon order numbers, serial numbers of Amazon Echo and Fire devices, credit card details and bank account numbers, gift cards, delivery and shipping numbers, and even the Social Security numbers of delivery drivers.

It also allows law enforcement to obtain records related to Amazon Web Services accounts by submitting domain names or IP addresses related to the request.

Assuming this was a bug, we sent Amazon several emails prior to publication but did not hear back.

Amazon is not the only tech company with a portal for law enforcement requests. Many of the bigger tech companies with millions or even billions of users around the world, like Google and Twitter, have built portals to allow law enforcement to request customer and user data.

Motherboard reported a similar issue earlier this month that allowed anyone with an email address to access law enforcement portals set up by Facebook and WhatsApp.


Senate’s encryption backdoor bill is ‘dangerous for Americans,’ says Rep. Lofgren

A Senate bill that would compel tech companies to build backdoors to allow law enforcement access to encrypted devices and data would be “very dangerous” for Americans, said a leading House Democrat.

Law enforcement frequently spars with tech companies over their use of strong encryption, which protects user data from hackers and theft but which the government says makes it harder to catch criminals accused of serious crimes. Tech companies like Apple and Google have in recent years doubled down on their security efforts by securing data with encryption that even they cannot unlock.

Senate Republicans in June introduced their latest “lawful access” bill, renewing previous efforts to force tech companies to allow law enforcement access to a user’s data when presented with a court order.

“It’s dangerous for Americans, because it will be hacked, it will be utilized, and there’s no way to make it secure,” Rep. Zoe Lofgren, whose congressional seat covers much of Silicon Valley, told TechCrunch at Disrupt 2020. “If we eliminate encryption, we’re just opening ourselves up to massive hacking and disruption,” she said.

Lofgren’s comments echo those of critics and security experts, who have long criticized efforts to undermine encryption, arguing that there is no way to build a backdoor for law enforcement that could not also be exploited by hackers.

Several previous efforts by lawmakers to weaken and undermine encryption have failed. Currently, law enforcement has to use existing tools and techniques to find weaknesses in phones and computers. The FBI claimed for years that it had thousands of devices that it couldn’t get into, but admitted in 2018 that it repeatedly overstated the number of encrypted devices it had and the number of investigations that were negatively impacted as a result.

Lofgren has served in Congress since 1995, a tenure that spans the first so-called “Crypto Wars,” when the security community fought federal efforts to limit access to strong encryption. In 2016, Lofgren was part of an encryption working group on the House Judiciary Committee. The group’s final report, bipartisan but not binding, found that any measure to undermine encryption “works against the national interest.”

Still, it’s a talking point that the government continues to push, even as recently as this year when U.S. Attorney General William Barr said that Americans should accept the security risks that encryption backdoors pose.

“You cannot eliminate encryption safely,” Lofgren told TechCrunch. “And if you do, you will create chaos in the country and for Americans, not to mention others around the world,” she said. “It’s just an unsafe thing to do, and we can’t permit it.”


Apple opens up — slightly — on Hong Kong’s national security law

After Beijing unilaterally imposed a new national security law on Hong Kong on July 1, many saw the move as an effort by Beijing to crack down on dissent and protests in the semi-autonomous region.

Soon after, a number of tech giants — including Microsoft, Twitter and Google — said they would stop processing requests for user data from Hong Kong authorities, fearing that the requested data could end up in the hands of Beijing.

But Apple was noticeably absent from the list. Instead, Apple said it was “assessing” the new law.

When reached by TechCrunch, Apple did not say how many requests for user data it had received from Hong Kong authorities since the new national security law went into effect. But the company reiterated that it doesn’t receive requests for user content directly from Hong Kong. Instead, it relies on a long-established so-called mutual legal assistance treaty, allowing U.S. authorities to first review requests from foreign governments.

Apple said it stores iCloud data for Hong Kong users in the United States, so any request by Hong Kong authorities for user content has to be approved first by the Justice Department, and a warrant has to be issued by a U.S. federal judge before the data can be handed over to Hong Kong.

The company said that it received a limited number of non-content requests from Hong Kong related to fraud or stolen devices, and that the number of requests it received from Hong Kong authorities since the introduction of the national security law will be included in an upcoming transparency report.

According to Apple’s latest transparency report, Hong Kong authorities made 604 requests for device information, 310 requests for financial data, and 10 requests for user account data during 2019.

The report also said that Apple received 5,295 requests from U.S. authorities during the second half of last year for data related to 80,235 devices, a seven-fold increase from the previous six months.

Apple also received 4,095 requests from U.S. authorities for user data stored in iCloud on 31,780 accounts, twice the number of accounts affected during the previous six months.

Most of the requests related to ongoing return and repair fraud investigations, Apple said.

The report also said Apple received 2,522 requests from U.S. authorities to preserve data on 6,741 user accounts, which keeps the data on hold while law enforcement obtains the proper legal process to access it.

Apple also said it received between 0 and 499 national security requests for non-content data, affecting between 15,500 and 15,999 users or accounts, an increase of 40% over the previous report.

Tech companies are only allowed to report the number of national security requests in ranges, per rules set out by the Justice Department.
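In practice, the permitted ranges work like simple bucketing: a provider discloses only which band a count falls into, never the exact figure. Here is a minimal sketch of that bucketing, assuming the 500-wide bands reflected in Apple’s report (0-499, 15,500-15,999, and so on); the function name and band width are illustrative, not part of any official reporting rule.

```python
# Illustrative only: map an exact request count to a 500-wide reporting
# band like those reflected in Apple's report (e.g. "0-499", "15500-15999").
def reporting_band(count: int, width: int = 500) -> str:
    if count < 0:
        raise ValueError("count must be non-negative")
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

print(reporting_band(3))      # 0-499
print(reporting_band(15750))  # 15500-15999
```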

The company also published two FBI national security letters, or NSLs, from 2019, which it petitioned to make public. These letters are subpoenas issued by the FBI with no judicial oversight, often accompanied by a gag order preventing the recipient from disclosing their existence. Since the introduction of the USA Freedom Act in 2015, the FBI has been required to periodically review those gag orders and lift them when they are no longer deemed necessary.

Apple also said it received 54 requests from governments to remove 258 apps from its app store. China filed the vast majority of requests.


Decrypted: Uber’s former security chief charged, FBI’s ‘vishing’ warning

A lot happened in cybersecurity over the past week.

The University of Utah paid almost half a million dollars to stop hackers from leaking sensitive student data after a ransomware attack. Two major ATM makers patched flaws that could’ve allowed for fraudulent cash withdrawals from vulnerable ATMs. Grant Schneider, the U.S. federal chief information security officer, is leaving his post after more than three decades in government. And, a new peer-to-peer botnet is spreading like wildfire and infecting millions of machines around the world.

In this week’s column, we look at how Uber’s handling of its 2016 data breach put the company’s former chief security officer in hot water with federal prosecutors. And, what is “vishing” and why should companies take note?


THE BIG PICTURE

Uber’s former security chief charged with data breach cover-up

Joe Sullivan, Uber’s former security chief, was indicted this week by federal prosecutors for allegedly trying to cover up a data breach in 2016 that saw 57 million rider and driver records stolen.

Sullivan paid the two hackers, who were also charged over the breach, $100,000 as a “bug bounty” in exchange for them signing a nondisclosure agreement. It wasn’t until a year after the breach that former Uber chief executive Travis Kalanick was forced out and replaced with Dara Khosrowshahi, who fired Sullivan after learning of the cyberattack. Sullivan now serves as Cloudflare’s chief security officer.

The payout itself isn’t the issue, as some had claimed. Prosecutors in San Francisco took issue with how Sullivan allegedly tried to bury the breach, which later resulted in a massive $148 million settlement with the Federal Trade Commission.


Decrypted: How a teenager hacked Twitter, Garmin’s ransomware aftermath

A 17-year-old Florida teenager is accused of perpetrating one of the year’s biggest and most high-profile hacks: Twitter.

A 30-count indictment filed by state prosecutors in Tampa said Graham Ivan Clark used a phone spearphishing attack to pivot through multiple layers of Twitter’s security and bypass its two-factor authentication to gain access to an internal “admin” tool that let him take over any account. With two accomplices named in a separate federal indictment, Clark — who went by the online handle “Kirk” — allegedly used the tool to hijack the accounts of dozens of celebrities and public figures, including Bill Gates, Elon Musk and former president Barack Obama, to post a cryptocurrency scam that netted over $100,000 in bitcoin in just a few hours.

It was, by all accounts, a sophisticated attack that required both technical skill and an ability to trick and deceive to pull off. Some security professionals were impressed, comparing the attack to one with the finesse and professionalism of a well-resourced nation-state attacker.

But a profile in The New York Times describes Clark as an “adept scammer with an explosive temper.”

In the teenager’s defense, the attack could have been much worse. Instead of pushing a scam that promised to “double your money,” Clark and his compatriots could have wreaked havoc. In 2013, hackers hijacked the Associated Press’ Twitter account and tweeted a false report of explosions at the White House, sending the markets plummeting — only to quickly recover after the all-clear was given.

But with control of some of the world’s most popular Twitter accounts, Clark was for a few hours in July one of the most powerful people in the world. If found guilty, the teenager could spend his better years behind bars.

Here’s more from the past week.


THE BIG PICTURE

Garmin hobbles back after ransomware attack, but questions remain


Gedmatch confirms data breach after users’ DNA profile data made available to police

Gedmatch, the DNA analysis site that police used to catch the so-called Golden State Killer, was pulled briefly offline on Sunday while its parent company investigated how its users’ DNA profile data apparently became available to law enforcement searches.

The company confirmed Wednesday that the permissions change was caused by a breach.

The site, which lets users upload their DNA profile data to trace their family tree and ancestors, rose to overnight fame in 2018 after law enforcement used it to match DNA from a serial murder suspect against the million-plus DNA profiles in its database without first telling the company.

Gedmatch issued a privacy warning to its users and put in new controls allowing users to opt in to having their DNA included in police searches.

But users reported Sunday that those settings had changed without their permission, and that their DNA profiles were made available to law enforcement searches.

In a statement on Wednesday, the company told users by email that it was hit by two security breaches on July 19 and July 20.

“We became aware of the situation a short time later and immediately took the site down. As a result of the breach, all user permissions were reset, making all profiles visible to all users,” the email read. “This was the case for approximately 3 hours. During this time, users who did not opt-in for law enforcement matching were also available for law enforcement matching, and conversely, all law enforcement profiles were made visible to Gedmatch users.”

The statement said the second breach caused users’ settings to reset, allowing law enforcement to search the profile data of users who had previously opted out.
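One way to picture the failure mode described above (a hypothetical sketch, not Gedmatch’s actual code): if law enforcement visibility is stored as a per-user flag with a permissive default, a blanket reset to defaults silently discards every opt-out.

```python
# Hypothetical sketch: a permissive default plus a blanket permissions reset
# makes previously opted-out users searchable again.
from dataclasses import dataclass

@dataclass
class Profile:
    user: str
    opted_in_to_le_matching: bool = True  # the permissive default is the hazard

profiles = {
    "alice": Profile("alice", opted_in_to_le_matching=False),  # opted out
    "bob": Profile("bob"),                                      # default (opted in)
}

def reset_all_permissions():
    """Simulates the breach: every profile is restored to its default state."""
    for name in profiles:
        profiles[name] = Profile(name)  # alice's opt-out is silently lost

reset_all_permissions()
searchable = [p.user for p in profiles.values() if p.opted_in_to_le_matching]
print(searchable)  # ['alice', 'bob']
```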

At the time of writing, Gedmatch’s website was offline.

DNA profiling and analysis companies are increasingly popular with users trying to understand their cultural and ethnic backgrounds by discovering new and ancestral family members. But law enforcement agencies are increasingly pushing for access to genetic databases to try to solve crimes using DNA left at crime scenes.

A spokesperson for the company on Wednesday said the company had reported the incident to the authorities. The company told TechCrunch that it had not received or responded to any law enforcement requests during the two-day incident.

Gedmatch does not publish how frequently law enforcement seeks access to its data. Its rivals, like 23andMe and Ancestry.com, have already published these so-called transparency reports. Earlier this year Ancestry.com revealed that it rejected an out-of-state police warrant, an indication that police are still turning to DNA profiling and analysis sites for information.

“The acknowledgement of an issue is a start, but if a ‘resolution’ means simply correcting the error, there are many questions that remain,” Elizabeth Joh, a professor of law at University of California, Davis School of Law, told TechCrunch.

“For instance, does Gedmatch know whether any law enforcement agencies accessed these improperly tagged users? Will they disclose any further details of the breach? And of course, this isn’t simply Gedmatch’s problem: a privacy breach in a genetic genealogy database underscores the woefully inadequate regulatory safeguards for the most sensitive of information, in a novel arena for civil liberties,” she said. “It’s a mess.”

Updated on July 22 with confirmation of the security breach. First published on July 19 at 5:38pm ET.


Gedmatch investigating after users’ DNA profile data made available to police

Gedmatch, the DNA analysis site that police used to catch the so-called Golden State Killer, was pulled briefly offline on Sunday while its parent company investigated how its users’ DNA profile data apparently became available to law enforcement searches.

The site, which lets users upload their DNA profile data to trace their family tree and ancestors, rose to overnight fame in 2018 after law enforcement used it to match DNA from a serial murder suspect against the million-plus DNA profiles in its database without first telling the company.

Gedmatch issued a privacy warning to its users and put in new controls allowing users to opt in to having their DNA included in police searches.

But users reported Sunday that those settings had changed without their permission, and that their DNA profiles were made available to law enforcement searches.

Users called it a “privacy breach.” But when reached, the company’s owner declined to say if the issue was caused by an error or a security breach, citing an ongoing investigation.

“We are aware of the issue regarding Gedmatch, where user permissions were not set correctly,” said Brett Williams, chief executive of Verogen, which acquired Gedmatch in 2019. “We have resolved that issue; however, as a precaution, we have taken the site down while we are investigating the actual cause of the error. Once we understand the cause, we will be issuing a more formal statement,” he said.

DNA profiling and analysis companies are increasingly popular with users trying to understand their cultural and ethnic backgrounds by discovering new and ancestral family members. But law enforcement agencies are increasingly pushing for access to genetic databases to try to solve crimes using DNA left at crime scenes.

Williams would not say, when asked, if Verogen or Gedmatch have received any law enforcement requests for user data in the past day, or if either company has responded.

Gedmatch does not publish how frequently law enforcement seeks access to its data. Its rivals, like 23andMe and Ancestry.com, have already published these so-called transparency reports. Earlier this year Ancestry.com revealed that it rejected an out-of-state police warrant, an indication that police are still turning to DNA profiling and analysis sites for information.

“The acknowledgement of an issue is a start, but if a ‘resolution’ means simply correcting the error, there are many questions that remain,” Elizabeth Joh, a professor of law at University of California, Davis School of Law, told TechCrunch.

“For instance, does Gedmatch know whether any law enforcement agencies accessed these improperly tagged users? Will they disclose any further details of the breach? And of course, this isn’t simply Gedmatch’s problem: a privacy breach in a genetic genealogy database underscores the woefully inadequate regulatory safeguards for the most sensitive of information, in a novel arena for civil liberties,” she said. “It’s a mess.”


Send tips securely over Signal and WhatsApp to +1 646-755-8849.


Decrypted: As tech giants rally against Hong Kong security law, Apple holds out

It’s not often Silicon Valley gets behind a single cause. Supporting net neutrality was one, reforming government surveillance another. Last week, Big Tech took up its latest: halting any cooperation with Hong Kong police.

Facebook, Google, Microsoft, Twitter, and even China-headquartered TikTok said last week they would no longer respond to demands for user data from Hong Kong law enforcement — read: Chinese authorities — citing the national security law unilaterally imposed by Beijing. Critics say the law, ratified on June 30, effectively kills China’s “one country, two systems” policy, which allowed Hong Kong to maintain its freedoms and some autonomy after the British handed control of the city back to Beijing in 1997.

Noticeably absent from the list of tech giants pulling cooperation was Apple, which said it was still “assessing the new law.” What’s left to assess remains unclear, given that the new powers explicitly allow warrantless searches of data, the interception and restriction of internet data, and the censorship of information online, all things that Apple has historically opposed, if not in so many words.

Facebook, Google and Twitter can live without China. They already do — both Facebook and Twitter are banned on the mainland, and Google pulled out after it accused Beijing of cyberattacks. But Apple cannot. China is at the heart of its iPhone and Mac manufacturing pipeline, and accounts for over 16% of its revenue — some $9 billion last quarter alone. Pulling out of China would be catastrophic for Apple’s finances and market position.

The move by Silicon Valley to cut off Hong Kong authorities from their vast pools of data may be largely symbolic, given that any overseas data demands are first screened by the Justice Department in a laborious and frequently lengthy legal process. But by holding out, Apple is also sending its own message: its ardent commitment to human rights — privacy and free speech — stops at the border of Hong Kong.

Here’s what else is in this week’s Decrypted.


THE BIG PICTURE

Police used Twitter-backed Dataminr to snoop on protests


CBP says it’s ‘unrealistic’ for Americans to avoid its license plate surveillance

U.S. Customs and Border Protection has admitted that there is no practical way for Americans to avoid having their movements tracked by its license plate readers, according to its latest privacy assessment.

CBP published its new assessment — three years after its first — to notify the public that it plans to tap into a commercial database, which aggregates license plate data from both private and public sources, as part of its border enforcement efforts.

The U.S. has a massive network of license plate readers, typically mounted at the roadside, that collect and record the plates of passing vehicles. The readers can capture thousands of plates each minute, and the reads are stored in vast databases, giving police and federal agencies the ability to track millions of vehicles across the country.
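To illustrate the mechanics (a minimal, hypothetical sketch, not based on CBP’s or any vendor’s actual system), a plate-read database is essentially a log of plate, location and timestamp entries that can be queried to reconstruct a vehicle’s movements; the names and records below are invented for the example.

```python
# Hypothetical sketch of how a license plate read database enables tracking.
# Each "read" is a plate, a camera location and a timestamp; querying by plate
# reconstructs a vehicle's movements over time.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str
    location: str
    seen_at: datetime

reads = [
    PlateRead("ABC1234", "I-10 eastbound, El Paso", datetime(2020, 8, 1, 9, 15)),
    PlateRead("XYZ9876", "US-54 northbound", datetime(2020, 8, 1, 9, 16)),
    PlateRead("ABC1234", "Loop 375, El Paso", datetime(2020, 8, 1, 17, 42)),
]

def sightings(plate: str):
    """Return every recorded sighting of a plate, oldest first."""
    return sorted((r for r in reads if r.plate == plate), key=lambda r: r.seen_at)

for r in sightings("ABC1234"):
    print(r.seen_at, r.location)
```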

The agency updated its privacy assessment in part because Americans “may not be aware” that the agency can collect their license plate data.

“CBP cannot provide timely notice of license plate reads obtained from various sources outside of its control,” the privacy assessment said. “Many areas of both public and private property have signage that alerts individuals that the area is under surveillance; however, this signage does not consistently include a description of how and with whom such data may be shared.”

But buried in the document, the agency admitted: “The only way to opt out of such surveillance is to avoid the impacted area, which may pose significant hardships and be generally unrealistic.”

CBP struck a similar tone in 2017 during a trial that scanned the faces of American travelers as they departed the U.S., a move that drew ire from civil liberties advocates at the time. CBP told travelers who wanted to opt out of the face scanning that they had to “refrain from traveling.”

The document added that the privacy risk to Americans is “enhanced” because the agency “may access [license plate data] captured anywhere in the United States,” including outside of the 100-mile border zone within which the CBP typically operates.

CBP said it will reduce the risk by accessing license plate data only when there is “circumstantial or supporting evidence” to further an investigation, and by limiting agents to data captured within the five years prior to the date of the search.

A spokesperson for CBP did not respond to a request for comment on the latest assessment.

CBP doesn’t have the best track record with license plate data. Last year, CBP confirmed that a subcontractor, Perceptics, improperly copied license plate data on “fewer than 100,000” people over a period of a month-and-a-half at a U.S. port of entry on the southern border. The agency later suspended its contract with Perceptics.


Societal upheaval during the COVID-19 pandemic underscores need for new AI data regulations

As a long-time proponent of AI regulation that is designed to protect public health and safety while also promoting innovation, I believe Congress must not delay in enacting, on a bipartisan basis, Section 102(b) of The Artificial Intelligence Data Protection Act — my proposed legislation and now a House of Representatives Discussion Draft Bill. Guardrails in the form of Section 102(b)’s ethical AI legislation are necessary to maintain the dignity of the individual.

What does Section 102(b) of the AI Data Protection Act provide, and why is there an urgent need for the federal government to enact it now?

To answer these questions, it is first necessary to understand how artificial intelligence (AI) is being used during this historic moment when our democratic society is confronting two simultaneous existential threats. Only then can the risks that AI poses to our individual dignity be recognized, and Section 102(b) be understood as one of the most important remedies to protect the liberties that Americans hold dear and that serve as the bedrock of our society.

America is now experiencing mass protests demanding an end to racism and police brutality, and watching as civil unrest unfolds in the midst of trying to quell the deadly COVID-19 pandemic. Whether we are aware of or approve of it, in both contexts — and in every other facet of our lives — AI technologies are being deployed by government and private actors to make critical decisions about us. In many instances, AI is being utilized to assist society and to get us as quickly as practical to the next normal.

But so far, policymakers have largely overlooked a critical AI-driven public health and safety concern. When it comes to AI, most of the focus has been on the issues of fairness, bias and transparency in the data sets used to train algorithms. There is no question that algorithms have yielded bias; one need only look to employee recruiting and loan underwriting for examples of the unfair exclusion of women and racial minorities.

We’ve also seen AI generate unintended, and sometimes unexplainable, outcomes from the data. Consider the recent example of an algorithm that was supposed to assist judges with fair sentencing of nonviolent criminals. For reasons that have yet to be explained, the algorithm assigned higher risk scores to defendants younger than 23, resulting in sentences 12% longer than those of their older peers who had been incarcerated more frequently, while neither reducing incarceration nor recidivism.

But the current twin crises expose another more vexing problem that has been largely overlooked — how should society address the scenario where the AI algorithm got it right but from an ethical standpoint, society is uncomfortable with the results? Since AI’s essential purpose is to produce accurate predictive data from which humans can make decisions, the time has arrived for lawmakers to resolve not what is possible with respect to AI, but what should be prohibited.

Governments and private corporations have a never-ending appetite for our personal data. Right now, AI algorithms are being utilized around the world, including in the United States, to accurately collect and analyze all kinds of data about all of us. We have facial recognition to surveil protestors in a crowd or to determine whether the general public is observing proper social distancing. There is cell phone data for contact tracing, as well as public social media posts to model the spread of coronavirus to specific zip codes and to predict location, size and potential violence associated with demonstrations. And let’s not forget drone data that is being used to analyze mask usage and fevers, or personal health data used to predict which patients hospitalized with COVID have the greatest chance of deteriorating.

Only through the use of AI can this quantity of personal data be compiled and analyzed on such a massive scale.

Giving algorithms this kind of access to our cell phone data, social behavior, health records, travel patterns and social media content — and many other personal data sets — to build individualized profiles in the name of keeping the peace and curtailing a devastating pandemic can, and will, result in various governmental actors and corporations creating frighteningly accurate predictive profiles of our most private attributes, political leanings, social circles and behaviors.

Left unregulated, society risks these AI-generated analytics being used by law enforcement, employers, landlords, doctors, insurers — and every other private, commercial and governmental enterprise that can collect or purchase them — to make predictive decisions, be they accurate or not, that impact our lives and strike a blow to the most fundamental notions of a liberal democracy. AI continues to assume an ever-expanding role in the employment context to decide who should be interviewed, hired, promoted and fired. In the criminal justice context, it is used to determine who to incarcerate and what sentence to impose. In other scenarios, AI restricts people to their homes, limits certain treatments at the hospital, denies loans and penalizes those who disobey social distancing regulations.

Too often, those who eschew any type of AI regulation seek to dismiss these concerns as hypothetical and alarmist. But just a few weeks ago, Robert Williams, a Black man and Michigan resident, was wrongfully arrested because of a false face recognition match. According to news reports and an ACLU press release, Detroit police handcuffed Mr. Williams on his front lawn in front of his wife and two terrified girls, ages two and five. The police took him to a detention center about 40 minutes away, where he was locked up overnight. After an officer acknowledged during an interrogation the next afternoon that “the computer must have gotten it wrong,” Mr. Williams was finally released — nearly 30 hours after his arrest.

While widely believed to be the first confirmed case of AI’s incorrect facial recognition leading to the arrest of an innocent citizen, it seems clear this won’t be the last. Here, AI served as the primary basis for a critical decision that impacted the individual citizen — being arrested by law enforcement. But we must not only focus on the fact that the AI failed by identifying the wrong person, denying him his freedom. We must identify and proscribe those instances where AI should not be used as the basis for specified critical decisions — even when it gets it “right.”

As a democratic society, we should be no more comfortable with being arrested for a crime we contemplated but did not commit, or being denied medical treatment for a disease that will undoubtedly end in death over time, than we are with Mr. Williams’ mistaken arrest. We must establish an AI “no-fly zone” to preserve our individual freedoms. We must not allow certain key decisions to be left solely to the predictive output of artificially intelligent algorithms.

To be clear, this means that even in situations where every expert agrees that the data in and out is completely unbiased, transparent and accurate, there must be a statutory prohibition on utilizing it for any type of predictive or substantive decision-making. This is admittedly counter-intuitive in a world where we crave mathematical certainty, but necessary.

Section 102(b) of the Artificial Intelligence Data Protection Act properly and rationally accomplishes this in the context of both scenarios — where AI generates correct and/or incorrect outcomes. It does this in two key ways.

First, Section 102(b) specifically identifies those decisions which can never be made in whole or in part by AI. For example, it enumerates specific misuses of AI that would prohibit covered entities’ sole reliance on artificial intelligence to make certain decisions. These include recruitment, hiring and discipline of individuals, the denial or limitation of medical treatment, or medical insurance issuers making decisions regarding coverage of a medical treatment. In light of what society has recently witnessed, the prohibited areas should likely be expanded to further minimize the risk that AI will be used as a tool for racial discrimination and harassment of protected minorities.

Second, for certain other specific decisions based on AI analytics that are not outright prohibited, Section 102(b) defines those instances where a human must be involved in the decision-making process.

By enacting Section 102(b) without delay, legislators can maintain the dignity of the individual by not allowing the most critical decisions that impact the individual to be left solely to the predictive output of artificially intelligent algorithms.

Mr. Newman is the chair of Baker McKenzie’s North America Trade Secrets Practice. The views and opinions expressed here are his own.
