Google has added its line of Nest smart home devices to its Advanced Protection Program, a security offering that adds stronger account protections for high-risk users like politicians and journalists.
The program, launched in 2017, gives anyone who signs up access to a range of additional account security features, such as limits on third-party access to account data, anti-malware protections, and support for physical security keys to help thwart some of the most advanced cyberattacks.
Google said that adding Nest to the program was a “top request” from users.
Smart home devices are increasingly a target for hackers, largely because many internet-connected devices lack basic security protections and are easy to hack, prompting efforts by states and governments to push device makers to improve their security. A successful hack can allow hackers to snoop on smart home cameras, or ensnare the device into a massive collection of compromised devices (a botnet) that can be used to knock websites offline with large amounts of junk traffic.
Although Nest devices are more secure than most, their users are not immune to hackers.
Earlier this year, Google began requiring Nest users to enable two-factor authentication after a spate of reported automated attacks targeting Nest cameras. Google said its own systems had not been breached, but warned that hackers were using passwords stolen in other breaches to target Nest users.
While two-factor authentication virtually eliminates these kinds of so-called credential stuffing attacks, Google said its new security improvements will add “yet another layer of protection” to users’ Nest devices.
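Credential stuffing fails against a time-based second factor because the attacker holds only the password, not the per-account shared secret from which the one-time code is derived. A minimal RFC 6238 TOTP sketch illustrates the idea (this is a generic standard-library sketch, not Google's or Nest's actual implementation):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time step.
    The code rotates every `step` seconds and cannot be computed
    without the shared secret, so a password stolen from another
    breach is not enough to log in."""
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Run against the RFC 6238 test vector (the ASCII secret "12345678901234567890", base32-encoded), the function reproduces the published reference codes.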
Twitter and Reddit have filed an amicus brief in support of a lawsuit challenging a U.S. government rule change compelling visa applicants to disclose their social media handles.
The lawsuit, brought by the Knight First Amendment Institute at Columbia University, the Brennan Center for Justice and law firm Simpson Thacher &amp; Bartlett, seeks to undo both the State Department’s requirement that visa applicants disclose their social media handles before obtaining a U.S. visa, and the related rules over the retention and dissemination of those records.
Last year, the State Department began asking visa applicants for their current and former social media usernames, a move that affects millions of non-citizens applying to travel to the United States each year. The rule change was part of the Trump administration’s effort to expand its “enhanced” screening protocols. At the time, it was reported that the information would be used if the State Department determines that “such information is required to confirm identity or conduct more rigorous national security vetting.”
In a filing supporting the lawsuit, both Twitter and Reddit said the social media policies “unquestionably chill a vast quantity of speech” and that the rules violate the First Amendment rights “to speak anonymously and associate privately.”
Twitter and Reddit, which collectively have more than 560 million users, said their users — many of whom don’t use their real names on their platforms — are forced to “surrender their anonymity in order to travel to the United States,” which “violates the First Amendment rights to speak anonymously and associate privately.”
“Twitter and Reddit vigorously guard the right to speak anonymously for people on their platforms, and anonymous individuals correspondingly communicate on these platforms with the expectation that their identities will not be revealed without a specific showing of compelling need,” the brief said.
“That expectation allows the free exchange of ideas to flourish on these platforms.”
Jessica Herrera-Flanigan, Twitter’s policy chief for the Americas, said the social media rule “infringes both of those rights and we are proud to lend our support on these critical legal issues.” Reddit’s general counsel Ben Lee called the rule an “intrusive overreach” by the government.
It’s not known how many visa applicants, if any, have been denied a visa because of their social media content. But since the social media rule went into effect, cases have emerged of approved visa holders being denied entry to the U.S. over other people’s social media postings. Ismail Ajjawi, then a 17-year-old freshman at Harvard University, was turned away at Boston Logan International Airport after U.S. border officials searched his phone and took issue with social media postings made by Ajjawi’s friends, not by Ajjawi himself.
Abed Ayoub, legal and policy director at the American-Arab Anti-Discrimination Committee, told TechCrunch at the time that Ajjawi’s case was not isolated. A week later, TechCrunch learned of another man who was denied entry to the U.S. because of a WhatsApp message sent by a distant acquaintance.
A spokesperson for the State Department declined to comment on matters under litigation.
Thailand’s largest cell network AIS has pulled a database offline that was spilling billions of real-time internet records on millions of Thai internet users.
Security researcher Justin Paine said in a blog post that he found the database, containing DNS queries and Netflow data, on the internet without a password. With access to this database, Paine said that anyone could “quickly paint a picture” about what an internet user (or their household) does in real-time.
Paine alerted AIS to the open database on May 13. But after not hearing back for a week, Paine reported the apparent security lapse to Thailand’s national computer emergency response team, known as ThaiCERT, which contacted AIS about the open database.
The database was inaccessible a short time later.
It’s not known who owns the database. Paine told TechCrunch that the kind of records found in the database could only come from someone able to monitor internet traffic as it flows across the network. But there is no easy way to tell whether the database belongs to the internet provider or one of its subsidiaries, or to a large enterprise customer on AIS’ network. AIS spokespeople did not respond to our emails requesting comment.
DNS queries are a normal side effect of using the internet. Every time you visit a website, your device converts the web address into an IP address through a DNS query, which tells the browser where the web page lives on the internet. Although DNS queries don’t carry private messages, emails, or sensitive data like passwords, they can reveal which websites you access and which apps you use.
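To make concrete why DNS queries are so revealing, here is a standard-library sketch of the plaintext wire format a classic DNS query uses: the hostname is embedded verbatim in the packet, which is exactly what lets a network-level observer log which sites a user looks up.

```python
import struct

def build_dns_query(hostname, query_id=0x1234):
    """Build a minimal RFC 1035 DNS query for an A record.
    Classic DNS travels unencrypted over UDP port 53, so the
    hostname below is readable by anyone on the network path."""
    # Header: id, flags (recursion desired), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed and the name ends with a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode() for label in hostname.split("."))
    qname += b"\x00"
    question = qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

packet = build_dns_query("example.com")
assert b"\x07example\x03com\x00" in packet  # the name sits in the packet in the clear
```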
But that could be a major problem for high-risk individuals, like journalists and activists, whose internet records could be used to identify their sources.
Thailand’s internet surveillance laws grant authorities sweeping access to internet user data. Thailand also has some of the strictest censorship laws in Asia, forbidding any kind of criticism against the Thai royal family, national security, and certain political issues. In 2017, the Thai military junta, which took power in a 2014 coup, narrowly backed down from banning Facebook across the country after the social network giant refused to censor certain users’ posts.
DNS query data can also be used to gain insights into a person’s internet activity.
Using the data, Paine showed how anyone with access to the database could learn a number of things about a single internet-connected household, such as the kinds of devices it owned, which antivirus software it ran, which browsers it used, and which social media apps and websites it frequented. Still, many people in a household or office share one internet connection, making it far more difficult to trace internet activity back to a particular person.
Advertisers also find DNS data valuable for serving targeted ads.
Since a 2017 law allowed U.S. internet providers to sell internet records — like DNS queries and browsing histories — of their users, browser makers have pushed back by rolling out privacy-enhancing technologies that make it harder for internet and network providers to snoop.
One such technology, DNS over HTTPS — or DoH — encrypts DNS requests, making it far more difficult for internet or network providers to know which websites a customer is visiting or which apps they use.
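A sketch of what DoH looks like on the wire, per RFC 8484: the same binary DNS message is base64url-encoded and carried inside an ordinary HTTPS GET, so an on-path observer sees only a TLS connection to the resolver. (The Cloudflare endpoint below is just one public example of a DoH resolver.)

```python
import base64
import struct

def doh_query_url(hostname, resolver="https://cloudflare-dns.com/dns-query"):
    """Build an RFC 8484 DNS-over-HTTPS GET URL. The DNS message is the
    classic RFC 1035 binary format, but it rides inside HTTPS, so the
    hostname is no longer visible to network observers."""
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)  # id 0, recursion desired
    qname = b"".join(bytes([len(label)]) + label.encode() for label in hostname.split(".")) + b"\x00"
    message = header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    blob = base64.urlsafe_b64encode(message).rstrip(b"=").decode()  # unpadded base64url
    return resolver + "?dns=" + blob

url = doh_query_url("example.com")
assert "example" not in url  # the hostname no longer appears in cleartext
```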
A renowned iPhone hacking team has released a new “jailbreak” tool that unlocks every iPhone, even the most recent models running the latest iOS 13.5.
For as long as Apple has kept up its “walled garden” approach to iPhones by only allowing apps and customizations that it approves, hackers have tried to break free from what they call the “jail,” hence the name “jailbreak.” Hackers do this by finding a previously undisclosed vulnerability in iOS that breaks through some of the many restrictions that Apple puts in place to prevent access to the underlying software. Apple says it does this for security. But jailbreakers say breaking through those restrictions allows them to customize their iPhones more than they would otherwise, in a way that most Android users are already accustomed to.
Details of the vulnerability that the hackers used to build the jailbreak aren’t known, but it’s not expected to last forever. Just as jailbreakers work to find a way in, Apple works fast to patch the flaws and close the jailbreak.
Security experts typically advise iPhone users against jailbreaking, because breaking out of the “walled garden” vastly increases the surface area for new vulnerabilities to exist and to be found.
The jailbreak comes at a time when the shine is wearing off Apple’s typically strong security image. Last week, Zerodium, a broker for exploits, said it would no longer buy certain iPhone vulnerabilities because there were too many of them. Motherboard reported this week that hackers got their hands on a pre-release version of the upcoming iOS 14 release several months ago.
The debate over encryption continues to drag on without end.
In recent months, the discourse has largely swung away from encrypted smartphones to focus instead on end-to-end encrypted messaging. But a recent press conference by the heads of the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) showed that the debate over device encryption isn’t dead; it was merely resting. And it just won’t go away.
At the presser, Attorney General William Barr and FBI Director Chris Wray announced that after months of work, FBI technicians had succeeded in unlocking the two iPhones used by the Saudi military officer who carried out a terrorist shooting at the Pensacola Naval Air Station in Florida in December 2019. The shooter died in the attack, which was quickly claimed by Al Qaeda in the Arabian Peninsula.
Early this year — a solid month after the shooting — Barr had asked Apple to help unlock the phones (one of which was damaged by a bullet), which were older iPhone 5 and 7 models. Apple provided “gigabytes of information” to investigators, including “iCloud backups, account information and transactional data for multiple accounts,” but drew the line at assisting with the devices. The situation threatened to revive the 2016 “Apple versus FBI” showdown over another locked iPhone following the San Bernardino terror attack.
After the government went to federal court to try to dragoon Apple into doing investigators’ job for them, the dispute ended anticlimactically when the government got into the phone itself after purchasing an exploit from an outside vendor the government refused to identify. The Pensacola case culminated much the same way, except that the FBI apparently used an in-house solution instead of a third party’s exploit.
You’d think the FBI’s success at a tricky task (remember, one of the phones had been shot) would be good news for the Bureau. Yet an unmistakable note of bitterness tinged the laudatory remarks at the press conference for the technicians who made it happen. Despite the Bureau’s impressive achievement, and despite the gobs of data Apple had provided, Barr and Wray devoted much of their remarks to maligning Apple, with Wray going so far as to say the government “received effectively no help” from the company.
This diversion tactic worked: in news stories covering the press conference, headline after headline after headline highlighted the FBI’s slam against Apple instead of focusing on what the press conference was nominally about: the fact that federal law enforcement agencies can get into locked iPhones without Apple’s assistance.
That should be the headline news, because it’s important. That inconvenient truth undercuts the agencies’ longstanding claim that they’re helpless in the face of Apple’s encryption and thus the company should be legally forced to weaken its device encryption for law enforcement access. No wonder Wray and Barr are so mad that their employees keep being good at their jobs.
By reviving the old blame-Apple routine, the two officials managed to evade a number of questions that their press conference left unanswered. What exactly are the FBI’s capabilities when it comes to accessing locked, encrypted smartphones? Wray claimed the technique developed by FBI technicians is “of pretty limited application” beyond the Pensacola iPhones. How limited? What other phone-cracking techniques does the FBI have, and which handset models and which mobile OS versions do those techniques reliably work on? In what kinds of cases, for what kinds of crimes, are these tools being used?
We also don’t know what’s changed internally at the Bureau since that damning 2018 Inspector General postmortem on the San Bernardino affair. Whatever happened with the FBI’s plans, announced in the IG report, to lower the barrier within the agency to using national security tools and techniques in criminal cases? Did that change come to pass, and did it play a role in the Pensacola success? Is the FBI cracking into criminal suspects’ phones using classified techniques from the national security context that might not pass muster in a court proceeding (were their use to be acknowledged at all)?
Finally, who else besides the FBI will be the beneficiary of the technique that worked on the Pensacola phones? Does the FBI share the vendor tools it purchases, or its own home-rolled ones, with other agencies (federal, state, tribal or local)? Which tools, which agencies and for what kinds of cases? Even if it doesn’t share the techniques directly, will it use them to unlock phones for other agencies, as it did for a state prosecutor soon after purchasing the exploit for the San Bernardino iPhone?
We have little idea of the answers to any of these questions, because the FBI’s capabilities are a closely held secret. What advances and breakthroughs it has achieved, and which vendors it has paid, we (who provide the taxpayer dollars to fund this work) aren’t allowed to know. And the agency refuses to answer questions about encryption’s impact on its investigations even from members of Congress, who can be privy to confidential information denied to the general public.
The only public information coming out of the FBI’s phone-hacking black box is nothingburgers like the recent press conference. At an event all about the FBI’s phone-hacking capabilities, Director Wray and AG Barr cunningly managed to deflect the press’s attention onto Apple, dodging any difficult questions, such as what the FBI’s abilities mean for Americans’ privacy, civil liberties and data security, or even basic questions like how much the Pensacola phone-cracking operation cost.
As the recent PR spectacle demonstrated, a press conference isn’t oversight. And instead of exerting its oversight power, mandating more transparency, or requiring an accounting and cost/benefit analysis of the FBI’s phone-hacking expenditures — instead of demanding a straight and conclusive answer to the eternal question of whether, in light of the agency’s continually-evolving capabilities, there’s really any need to force smartphone makers to weaken their device encryption — Congress is instead coming up with dangerous legislation such as the EARN IT Act, which risks undermining encryption right when a population forced by COVID-19 to do everything online from home can least afford it.
The worst-case scenario would be that, between in-house and third-party tools, pretty much any law enforcement agency can now reliably crack into everybody’s phones, and yet nevertheless this turns out to be the year they finally get their legislative victory over encryption anyway. I can’t wait to see what else 2020 has in store.
Jinyan Zang is a researcher at the Data Privacy Lab and a Ph.D. candidate in Government at Harvard University.
Latanya Sweeney Contributor
Latanya Sweeney is a professor of government and technology in residence at Harvard University’s Department of Government, editor-in-chief of Technology Science and the founding director of the Technology Science Initiative and the Data Privacy Lab at the Institute for Quantitative Social Science at Harvard.
Max Weiss Contributor
Max Weiss is a senior at Harvard University and the student who implemented the Deepfake Text experiment.
As federal agencies take increasingly stringent actions to try to limit the spread of the novel coronavirus pandemic within the U.S., how can individual Americans and U.S. companies affected by these rules weigh in with their opinions and experiences? Because many of the new rules, such as travel restrictions and increased surveillance, require expansions of federal power beyond normal circumstances, our laws require the federal government to post these rules publicly and allow the public to contribute their comments to the proposed rules online. But are federal public comment websites — a vital institution for American democracy — secure in this time of crisis? Or are they vulnerable to bot attack?
In December 2019, we published a new study to see firsthand just how vulnerable the public comment process is to an automated attack. Using publicly available artificial intelligence (AI) methods, we successfully generated 1,001 comments of deepfake text, computer-generated text that closely mimics human speech, and submitted them to the Centers for Medicare & Medicaid Services’ (CMS) website for a proposed federal rule that would institute mandatory work reporting requirements for citizens on Medicaid in Idaho.
The comments we produced using deepfake text constituted over 55% of the 1,810 total comments submitted during the federal public comment period. In a follow-up study, we asked people to identify whether comments were from a bot or a human. Respondents were only correct half of the time — the same probability as random guessing.
Image Credits: Zang/Weiss/Sweeney
The example above is deepfake text generated by the bot that all survey respondents thought was from a human.
We ultimately informed CMS of our deepfake comments and withdrew them from the public record. But a malicious attacker would likely not do the same.
Previous large-scale fake comment attacks on federal websites have occurred, such as the 2017 attack on the FCC website regarding the proposed rule to end net neutrality regulations.
During the net neutrality comment period, firms hired by industry group Broadband for America used bots to create comments expressing support for the repeal of net neutrality. They then submitted millions of comments, sometimes even using the stolen identities of deceased voters and the names of fictional characters, to distort the appearance of public opinion.
A retroactive text analysis of the comments found that 96-97% of the more than 22 million comments on the FCC’s proposal to repeal net neutrality were likely coordinated bot campaigns. These campaigns used relatively unsophisticated and conspicuous search-and-replace methods — easily detectable even on this mass scale. But even after investigations revealed the comments were fraudulent and made using simple search-and-replace-like computer techniques, the FCC still accepted them as part of the public comment process.
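Search-and-replace campaigns leave a telltale footprint: thousands of comments that are nearly identical except for a few swapped words. A hypothetical toy detector based on a simple similarity ratio shows why they were so easy to spot (this illustrates the general idea only, not the method used in the actual FCC analysis):

```python
from difflib import SequenceMatcher

def flag_templated(comments, threshold=0.85):
    """Return the indices of comments that are near-duplicates of one
    another, the footprint of a search-and-replace bot campaign.
    (Toy O(n^2) pairwise comparison, for illustration only.)"""
    flagged = set()
    for i in range(len(comments)):
        for j in range(i + 1, len(comments)):
            if SequenceMatcher(None, comments[i], comments[j]).ratio() >= threshold:
                flagged.update({i, j})
    return flagged

comments = [
    "I strongly oppose net neutrality rules.",
    "I strongly reject net neutrality rules.",  # same template, one word swapped
    "Please keep the open internet protections in place.",
]
```

On this sample, `flag_templated(comments)` flags the first two comments as a templated pair and leaves the organic third comment alone.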
Even these relatively unsophisticated campaigns were able to affect a federal policy outcome. However, our demonstration of the threat from bots submitting deepfake text shows that future attacks can be far more sophisticated and much harder to detect.
The laws and politics of public comments
Let’s be clear: The ability to communicate our needs and have them considered is the cornerstone of the democratic model. As enshrined in the Constitution and defended fiercely by civil liberties organizations, each American is guaranteed a role in participating in government through voting, through self-expression and through dissent.
In fact, the only public comment website we could test for vulnerability to deepfake text submissions was the CMS site, because in June 2019 the U.S. Supreme Court ruled in a 7-1 decision that CMS could not skip the public comment requirements of the Administrative Procedure Act when reviewing proposals from state governments to add work reporting requirements to Medicaid eligibility rules within their states.
Political science research shows that public comments can substantially shape a federal agency’s final rule. For example, in 2018, Harvard University researchers found that banks that commented on Dodd-Frank-related rules issued by the Federal Reserve obtained $7 billion in excess returns compared to non-participants. When the researchers examined the comments submitted on the “Volcker Rule” and the debit card interchange rule, they found that comments from different banks significantly influenced the “sausage-making process” from the initial proposed rule to the final rule.
Beyond companies commenting directly under their official corporate names, we’ve also seen how an industry group, Broadband for America, submitted millions of fake comments in 2017 in support of the FCC’s rule to end net neutrality, creating the false perception of broad political support for the rule among the American public.
Technology solutions to deepfake text on public comments
While our study highlights the threat deepfake text poses to public comment websites, this doesn’t mean we should end this long-standing institution of American democracy. Rather, we need to identify how technology can be used to build systems that accept public comments from real humans while rejecting deepfake text from bots.
There are two stages in the public comment process — (1) comment submission and (2) comment acceptance — where technology can be used as potential solutions.
In the first stage, comment submission, technology can be used to prevent bots from submitting deepfake comments in the first place, raising the cost for an attacker, who would need to recruit large numbers of humans instead. One technological solution many are already familiar with is the CAPTCHA box at the bottom of internet forms, which asks us to identify a word, either visually or audibly, before we can click submit. CAPTCHAs add an extra step that makes the submission process much more difficult for a bot. While these tools can be improved for accessibility for disabled individuals, they would be a step in the right direction.
However, CAPTCHAs would not prevent an attacker willing to pay for low-cost labor abroad to solve any CAPTCHA tests in order to submit deepfake comments. One way to get around that may be to require strict identification to be provided along with every submission, but that would remove the possibility for anonymous comments that are currently accepted by agencies such as CMS and the Food and Drug Administration (FDA). Anonymous comments serve as a method of privacy protection for individuals who may be significantly affected by a proposed rule on a sensitive topic such as healthcare without needing to disclose their identity. Thus, the technological challenge would be to build a system that can separate the user authentication step from the comment submission step so only authenticated individuals can submit a comment anonymously.
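One well-studied way to separate authentication from submission, offered here purely as an illustration rather than anything the agencies have proposed, is a blind signature: during authentication the agency signs a blinded one-time token, and the commenter later attaches the unblinded token to an anonymous comment, which anyone can verify without linking it back to the login session. A textbook RSA sketch with tiny, wholly insecure toy parameters:

```python
import hashlib
from math import gcd

# Toy RSA parameters (the 10,000th and 100,000th primes). Real systems
# would use 2048-bit moduli and properly padded, hashed messages.
p, q, e = 104729, 1299709, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # agency's private signing exponent

def digest(msg):
    """Hash a message down to an integer modulo n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

token = digest(b"one-comment-credential")
r = 123456789  # commenter's secret blinding factor
assert gcd(r, n) == 1

blinded = (token * pow(r, e, n)) % n          # commenter blinds the token...
blind_sig = pow(blinded, d, n)                # ...agency signs without seeing it...
signature = (blind_sig * pow(r, -1, n)) % n   # ...commenter unblinds the signature
assert pow(signature, e, n) == token          # anyone can verify, with no link to the session
```

The agency never sees `token` during signing, yet the unblinded `signature` verifies against it, so a valid token proves "an authenticated person" without revealing which one.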
Finally, in the second stage of comment acceptance, better technology can be used to distinguish between deepfake text and human submissions. While our study found that our sample of over 100 people surveyed were not able to identify the deepfake text examples, more sophisticated spam detection algorithms in the future may be more successful. As machine learning methods advance over time, we may see an arms race between deepfake text generation and deepfake text identification algorithms.
In order to develop more robust future technological solutions, we will need to build a collaborative effort between the government, researchers and our innovators in the private sector. That’s why we at Harvard University have joined the Public Interest Technology University Network along with 20 other education institutions, New America, the Ford Foundation and the Hewlett Foundation. Collectively, we are dedicated to helping inspire a new generation of civic-minded technologists and policy leaders. Through curriculum, research and experiential learning programs, we hope to build the field of public interest technology and a future where technology is made and regulated with the public in mind from the beginning.
Federal public comment websites offer the only way for the American public and organizations to express their concerns to the federal agency before the final rules are determined. We must adopt better technological defenses to ensure that deepfake text doesn’t further threaten American democracy during a time of crisis.
A data breach at the U.S. Marshals Service exposed the personal information of current and former prisoners, TechCrunch has learned.
A letter sent to those affected, and obtained by TechCrunch, said the Justice Department notified the U.S. Marshals on December 30, 2019 of a data breach affecting a public-facing server storing personal information on current and former prisoners in its custody. The letter said the breach may have included their address, date of birth and Social Security number, which can be used for identity fraud.
But the notice didn’t say how many current and former prisoners are affected by the breach.
As the law enforcement arm of the federal courts, U.S. Marshals are tasked with capturing fugitives and serving federal arrest warrants. Last year, U.S. Marshals arrested more than 90,000 fugitives and served over 105,000 warrants.
A spokesperson for the Justice Department did not respond to a request for comment by email or phone.
It’s the latest federal government security lapse in recent weeks.
The Defense Information Systems Agency, a Department of Defense division charged with providing technology and communications support to the U.S. government — including the president and other senior officials — said a data breach between May and July 2019 resulted in the theft of employees’ personal information.
Last month, the Small Business Administration admitted that 8,000 applicants, who applied for an emergency loan after facing financial difficulties because of the coronavirus pandemic, had their data exposed.
Since the start of the outbreak, governments and companies have scrambled to develop apps and websites that can help users identify COVID-19 symptoms.
India’s largest cell network Jio, a subsidiary of Reliance, launched its coronavirus self-test symptom checker in late March, just before the Indian government imposed a strict nationwide lockdown to prevent the further spread of the coronavirus. The symptom checker allows anyone to check their symptoms from their phone or Jio’s website to see if they may have become infected with COVID-19.
But a security lapse exposed one of the symptom checker’s core databases to the internet without a password, TechCrunch has found.
Jio’s coronavirus symptom checker. One of its databases exposed users’ responses. (Image: TechCrunch)
Security researcher Anurag Sen found the database on May 1, just after it was first exposed, and informed TechCrunch to notify the company. Jio quickly pulled the system offline after TechCrunch made contact. It’s not known if anyone else accessed the database.
“We have taken immediate action,” said Jio spokesperson Tushar Pania. “The logging server was for monitoring performance of our website, intended for the limited purpose of people doing a self-check to see if they have any COVID-19 symptoms.”
The database contains millions of logs and records from April 17 through the time the database was pulled offline. Although the server contained a running log of website errors and other system messages, it also ingested vast amounts of user-generated self-test data. Each self-test was logged in the database and included a record of who took the test — such as “self” or a relative — along with their age and gender.
The data also included each person’s user agent, a small snippet of information about the user’s browser version and operating system that is often used to render websites properly but can also be used to track a user’s online activity.
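To illustrate how much a user agent reveals, here is a toy parser; the string below is a made-up example, and production systems rely on maintained parsers such as the ua-parser project rather than ad hoc regexes like these.

```python
import re

def parse_user_agent(ua):
    """Pull a browser name/version and platform out of a user-agent
    string. Toy logic only: real user agents come in thousands of variants."""
    browser = re.search(r"(Chrome|Firefox|Safari)/([\d.]+)", ua)
    platform = re.search(r"\(([^);]+)", ua)  # first token inside the first parentheses
    return {
        "browser": browser.group(1) if browser else None,
        "version": browser.group(2) if browser else None,
        "os": platform.group(1).strip() if platform else None,
    }

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36")
```

Parsing this example yields the browser family, its exact version, and the operating system, which is enough to fingerprint a device when combined with other records.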
The database also contains individual records of those who signed up to create a profile, allowing users to update their symptoms over time. These records contained the answers to each question asked by the symptom checker, including what symptoms they are experiencing, who they have been in contact with, and what health conditions they may have.
Some of the records also contained the user’s precise location, but only if the user allowed the symptom checker access to their browser or phone’s location data.
We’ve posted a redacted portion of one of the records below.
A redacted portion of the exposed database. (Image: TechCrunch)
In one sample of the data we obtained, we found precise geolocations for thousands of users across India. TechCrunch was able to identify people’s homes using the latitude and longitude records found in the database.
Most of the location data is clustered around major cities, like Mumbai and Pune. TechCrunch also found users in the United Kingdom and North America.
The exposure could not have come at a more critical time for the Indian telecoms giant. Last week, Facebook invested $5.7 billion for a near-10% stake in Jio Platforms, valuing the Reliance subsidiary at about $66 billion.
Jio did not answer our follow-up questions, and the company did not say if it will inform those who used the symptom tracker of the security lapse.
Tesla CEO Elon Musk tweeted Friday that the company’s stock price was “too high” in his opinion, immediately sending shares into a free-fall and in possible violation of an agreement reached with the U.S. Securities and Exchange Commission last year.
Tesla shares fell nearly 12% in the half hour following his stock price tweets — just one of many sent out in rapid fire that covered everything from demands to “give people back their freedom” and lines from the U.S. National Anthem to quotes from poet Dylan Thomas and a claim that he will sell all of his possessions.
The SEC declined to comment on whether this was a violation of a settlement agreement. Tesla did not respond to a request for comment.
The meltdown on Twitter occurred as SpaceX — Musk’s other company — participated in a live press conference on one of its most important missions ever.
Musk’s tweet comes almost exactly a year after he reached a settlement agreement with the U.S. Securities and Exchange Commission that gave the CEO freedom to use Twitter — within certain limitations — without fear of being held in contempt for violating an earlier court order.
Under that agreement, Musk can tweet as he wishes except when it’s about certain events or financial milestones. In those cases, Musk must seek pre-approval from a securities lawyer, according to the agreement filed in April 2019 with Manhattan’s federal court.
Musk is supposed to seek pre-approval if his tweets include events regarding the company’s securities, including his acquisition or disposition of shares, nonpublic legal or regulatory findings or decisions.
He’s also supposed to get pre-approval on any tweets about the company’s financial condition or guidance, potential or proposed mergers, acquisitions or joint ventures, sales or delivery numbers, new or proposed business lines or any event requiring the filing of a Form 8-K, such as a change in control or a change in the company’s directors.
Security researchers are sounding the alarm over a newly discovered Android malware that targets banking apps and cryptocurrency wallets.
The malware, which researchers at security firm Cybereason recently discovered and named EventBot, masquerades as a legitimate Android app — such as Adobe Flash or Microsoft Word for Android — and abuses Android’s built-in accessibility features to obtain deep access to the device’s operating system.
Once installed — either by an unsuspecting user or by a malicious person with access to a victim’s phone — the EventBot-infected fake app quietly siphons off passwords for more than 200 banking and cryptocurrency apps — including PayPal, Coinbase, CapitalOne and HSBC — and intercepts two-factor authentication text message codes.
With a victim’s password and two-factor code, the hackers can break into bank accounts, apps and wallets, and steal a victim’s funds.
“The developer behind Eventbot has invested a lot of time and resources into creating the code, and the level of sophistication and capabilities is really high,” Assaf Dahan, head of threat research at Cybereason, told TechCrunch.
The malware quietly records every tap and key press, and can read notifications from other installed apps, giving the hackers a window into what’s happening on a victim’s device.
Over time, the malware siphons off banking and cryptocurrency app passwords back to the hackers’ server.
The researchers said that EventBot remains a work in progress. Over a period of several weeks since its discovery in March, the researchers saw the malware iteratively update every few days to include new malicious features. At one point the malware’s creators improved the encryption scheme it uses to communicate with the hackers’ server, and added a new feature that can grab a user’s device lock code, likely to allow the malware to grant itself higher privileges over sensitive areas of the victim’s device, such as payments and system settings.
But while the researchers are stumped as to who is behind the campaign, their research suggests the malware is brand new.
“Thus far, we haven’t observed clear cases of copy-paste or code reuse from other malware and it seems to have been written from scratch,” said Dahan.
Android malware is not new, but it’s on the rise. Hackers and malware operators have increasingly targeted mobile users because many device owners have their banking apps, social media, and other sensitive services on their device. Google has improved Android security in recent years by screening apps in its app store and proactively blocking third-party apps to cut down on malware — with mixed results. Many malicious apps have evaded Google’s detection.
Cybereason said it has not yet seen EventBot on Android’s app store or in active use in malware campaigns, limiting the exposure to potential victims — for now.
But the researchers said users should avoid untrusted apps from third-party sites and stores, many of which don’t screen their apps for malware.