Trump’s election denialism saw him retaliate in a way that doesn’t just put the remainder of his presidency in jeopardy; it puts the next administration in harm’s way too.
In a stunning display of retaliation, Trump fired CISA director Chris Krebs last week after the agency declared that there was “no evidence that any voting system deleted or lost votes, changed votes or was in any way compromised,” a direct contradiction of the conspiracy-fueled claims of a president who has repeatedly insisted, without evidence, that the election was hijacked by the Democrats.
Finally. It only took almost three weeks, but the Biden-Harris transition has officially begun.
On Monday, the General Services Administration gave the green light for the Biden-Harris team to transition from political campaign to government administration, allowing the team to receive government resources like office space, but also classified briefings and secure computers. And, with it, comes a shiny new .gov domain.
Transitioning is an obscure part of the law that’s rarely discussed, in large part because outgoing and incoming administrations generally cooperate to maintain continuity of government through a peaceful transition of power. The process is formally triggered by the General Services Administration, the lesser-known federal agency tasked with the basic functioning of government, and allows the incoming administration to receive funds, tools, and resources to prepare for entering government.
But this time around, the agency’s head Emily Murphy had been reluctant to trigger the formal transition period after the Trump campaign filed a number of lawsuits challenging the election.
Murphy finally approved the transition on Monday after Michigan certified its election results.
Up until now, the Biden-Harris team used buildbackbetter.com to host its transition website. Now it’s hosted at buildbackbetter.gov, a departure from the ptt.gov domain used by the incoming Obama-Biden administration in 2008.
The Wall Street Journal reported last week that until now the Biden-Harris team was using a Google Workspace for email and collaboration, secured with hardware security keys that staff need to use to log into their accounts. That setup might suffice for an enterprise, but it had security experts worried that the lack of government cybersecurity support could make the camp more vulnerable to attacks.
As for the domain, which you might not think much about, the shift to a .gov domain marks a significant step forward in the camp’s cybersecurity efforts. Government domains, hosted under .gov, are hardened against domain hijacking and spoofing. In simple terms, they’re far more resilient than regular web hosting services.
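Spoofing, in practice, often relies on lookalike hostnames rather than compromising the real domain. As a rough illustration (not any official validation method), a check that a link really points at a .gov host has to look at the parsed hostname, not the raw URL string:

```python
from urllib.parse import urlsplit

def is_gov_host(url: str) -> bool:
    """Return True only if the URL's actual hostname is under .gov.

    Substring checks are not enough: a phishing page can put '.gov'
    in its path or in a subdomain of a domain it controls.
    """
    host = urlsplit(url).hostname or ""
    host = host.lower().rstrip(".")
    return host == "gov" or host.endswith(".gov")

# The real transition site passes; lookalikes do not.
print(is_gov_host("https://buildbackbetter.gov/"))               # True
print(is_gov_host("https://buildbackbetter.gov.evil.example/"))  # False
print(is_gov_host("https://evil.example/buildbackbetter.gov"))   # False
```

The key design point is parsing first: `urlsplit` isolates the hostname, so tricks in the path or in a nested subdomain don't fool the check.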
Twitter is the latest social media site to let users experiment with posting disappearing content. Fleets, as Twitter calls them, let its mobile users post short stories, such as photos or videos with overlaid text, that are set to vanish after 24 hours.
But a bug meant that fleets weren’t deleting properly and could still be accessed long after 24 hours had expired. Details of the bug were posted in a series of tweets on Saturday, less than a week after the feature launched.
The bug effectively allowed anyone to access and download a user’s fleets without triggering a notification that the user’s fleet had been read and by whom. The implication is that this bug could be abused to archive a user’s fleets after they expire.
The technique used an app designed to interact with Twitter’s back-end systems via its developer API. The server returned a list of fleets, each with its own direct URL, which when opened in a browser would load the fleet as an image or a video. But even after the 24 hours had elapsed, the server would still return links to fleets that had already disappeared from view in the Twitter app.
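Twitter's internal endpoints and response formats aren't public, but the core flaw is easy to model: media URLs outliving the 24-hour window. A minimal sketch of the expiry check a client (or an auditor probing for the bug) would apply, using hypothetical record fields:

```python
from datetime import datetime, timedelta, timezone

FLEET_TTL = timedelta(hours=24)

def expired_fleet_urls(fleets, now=None):
    """Given fleet records (hypothetical dicts with 'posted_at' and
    'media_url' keys), return media URLs past their 24-hour lifetime.

    The bug described above was that requesting these URLs directly
    still returned the media, even after the window had closed."""
    now = now or datetime.now(timezone.utc)
    return [f["media_url"] for f in fleets
            if now - f["posted_at"] > FLEET_TTL]

posted = datetime(2020, 11, 20, 12, 0, tzinfo=timezone.utc)
fleets = [
    {"posted_at": posted, "media_url": "https://example.com/fleet/1.jpg"},
    {"posted_at": posted + timedelta(hours=23),
     "media_url": "https://example.com/fleet/2.jpg"},
]
now = posted + timedelta(hours=25)
print(expired_fleet_urls(fleets, now))  # ['https://example.com/fleet/1.jpg']
```

A correct server would enforce this check on every media request; the bug was that enforcement only happened in the app's timeline view, not at the URL level.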
When reached, a Twitter spokesperson said a fix was on the way. “We’re aware of a bug accessible through a technical workaround where some Fleets media URLs may be accessible after 24 hours. We are working on a fix that should be rolled out shortly.”
Twitter acknowledged that while the fix means fleets should now expire properly, it won’t delete fleets from its servers for up to 30 days, and it may hold onto fleets for longer if they violate its rules. We confirmed that we could still load fleets from their direct URLs even after they expired.
Chris Krebs, one of the most senior cybersecurity officials in the U.S. government, has been fired.
Krebs served as the director of the Cybersecurity and Infrastructure Security Agency (CISA) since its founding in November 2018 until he was removed from his position on Tuesday. It’s not immediately clear who is currently heading the agency. A spokesperson for CISA did not immediately comment.
President Trump fired Krebs in a tweet late on Tuesday, citing a statement published by CISA last week, which found there was “no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.” Trump, who has repeatedly made claims of voter fraud without providing evidence, alleged that CISA’s statement was “highly inaccurate.”
Shortly after, Twitter labeled Trump’s tweet for making a “disputed” claim about election fraud.
The recent statement by Chris Krebs on the security of the 2020 Election was highly inaccurate, in that there were massive improprieties and fraud – including dead people voting, Poll Watchers not allowed into polling locations, “glitches” in the voting machines which changed…
Krebs was appointed by President Trump to head the newly created cybersecurity agency in November 2018, just days after the conclusion of the midterm elections. He previously served as an undersecretary for CISA’s predecessor, the National Protection and Programs Directorate, and also held cybersecurity policy roles at Microsoft.
During his time in government, Krebs became one of the most vocal voices on election security, taking the lead during the 2018 and 2020 elections. Both largely escaped the disruptive cyberattacks and misinformation that plagued the 2016 presidential election, thanks in part to his agency’s preparation efforts.
He was “one of the few people in this administration respected by everyone on both sides of the aisle,” said Sen. Mark Warner, a member of the Senate Intelligence Committee, in a tweet.
Krebs is the latest official to leave CISA in the past year. Brian Harrell, who oversaw infrastructure protection at the agency, resigned in August after less than a year on the job, and Jeanette Manfra left for a role at Google at the end of last year. Cyberscoop reported Thursday that Bryan Ware, CISA’s assistant director for cybersecurity, resigned for a position in the private sector.
WildWorks, the gaming company that makes the popular kids game Animal Jam, has confirmed a data breach.
Animal Jam is one of the most popular games for kids, ranking in the top five games in the 9-11 age category in Apple’s App Store in the U.S., according to data provided by App Annie. But while no data breach is ever good news, WildWorks has been more forthcoming about the incident than most companies would be, making it easier for parents to protect both their information and their kids’ data.
Here’s what we know.
WildWorks said in a detailed statement that a hacker stole 46 million Animal Jam records in early October but that it only learned of the breach in November.
The company said someone broke into one of its systems that the company uses for employees to communicate with each other, and accessed a secret key that allowed the hacker to break into the company’s user database. The bad news is that the stolen data is known to be circulating on at least one cybercrime forum, WildWorks said, meaning that malicious hackers may use (or be using) the stolen information.
The stolen data spans the past 10 years, the company said, so former users may still be affected.
Much of the stolen data wasn’t highly sensitive, but the company warned that 32 million of those stolen records had the player’s username, 23.9 million records had the player’s gender, 14.8 million records contained the player’s birth year, and 5.7 million records had the player’s full date of birth.
But, the company did say that the hacker also took 7 million parent email addresses used to manage their kids’ accounts. It also said that 12,653 parent accounts had a parent’s full name and billing address, and 16,131 parent accounts had a parent’s name but no billing address.
Besides the billing address, the company said no other billing data — such as financial information — was stolen.
WildWorks also said that the hacker stole players’ passwords, prompting the company to reset every player’s password. (If you can’t log in, that’s probably why; check your email for a link to reset your password.) WildWorks didn’t say how it scrambled the passwords, which leaves open the possibility that they could be unscrambled and used to break into other accounts that share the same password as Animal Jam. That’s why it’s so important to use a unique password for each site or service, and a password manager to store your passwords safely.
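Since WildWorks didn't say how the passwords were scrambled, here's a hedged sketch of what robust password storage looks like: a random per-user salt plus a deliberately slow key-derivation function (PBKDF2 here, straight from Python's standard library). If passwords are stored this way, a stolen database is far harder to reverse.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # slow on purpose: each brute-force guess pays this cost

def hash_password(password, salt=None):
    """Salted, slow hash: identical passwords yield different digests."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("not-my-password", salt, digest))               # False
```

The salt is why two users with the same password don't get the same stored digest, which defeats precomputed "rainbow table" lookups; the iteration count is what makes bulk guessing expensive.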
The company said it was sharing information about the breach with the FBI and other law enforcement agencies.
So what can parents do?
Thankfully the data associated with kids’ accounts is limited. But parents: if you have used your Animal Jam password on any other website, change those passwords to strong, unique ones so that nobody can break into those other accounts.
Keep an eye out for scams related to the breach. Malicious hackers like to jump on recent news and events to try to trick victims into turning over more information or money in response to a breach.
Following the landmark CJEU ‘Schrems II’ ruling in July, which invalidated the four-year-old EU-US Privacy Shield, European data protection regulators have today published 38 pages of guidance for businesses stuck trying to navigate the uncertainty around how to (legally) transfer personal data out of the European Union.
The European Data Protection Board’s (EDPB) recommendations focus on measures data controllers might be able to put in place to supplement the use of another transfer mechanism, so-called Standard Contractual Clauses (SCCs), to ensure they comply with the bloc’s General Data Protection Regulation (GDPR).
The Recommendations on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data are now available here: https://t.co/agY2BHZVku For a quick overview of the different steps data exporters need to take, check out the infographic.
Unlike Privacy Shield, SCCs were not struck down by the court but their use remains clouded with legal uncertainty. The court made it clear SCCs can only be relied upon for international transfers if the safety of EU citizens’ data can be guaranteed. It also said EU regulators have a duty to intervene when they suspect data is flowing to a location where it will not be safe — meaning options for data transfers out of the EU have both reduced in number and increased in complexity.
One company that’s said it’s waiting for the EDPB guidance is Facebook. It’s already faced a preliminary order to stop transferring EU users’ data to the US. It petitioned the Irish courts to obtain a stay as it seeks a judicial review of its data protection regulator’s process. It has also brought out its lobbying big guns — former UK deputy PM and ex-MEP Nick Clegg — to try to pressure EU lawmakers over the issue.
Most likely the tech giant is hoping for a ‘Privacy Shield 2.0‘ to be cobbled together and slapped into place to paper over the gap between EU fundamental rights and US surveillance law.
But the Commission has warned there won’t be a quick fix this time.
Changes to US surveillance law are seen as necessary, which means there is zero chance of anything happening before the Biden administration takes the reins next year. So the legal uncertainty around EU-US transfers is set to stretch well into next year at a minimum. (Politico suggests a new data deal isn’t likely in the first half of 2021.)
In the meantime, legal challenges to ongoing EU-US transfers are stacking up, even as EU regulators know they have a legal duty to intervene when data is at risk.
“Standard contractual clauses and other transfer tools mentioned under Article 46 GDPR do not operate in a vacuum,” the EDPB warns in an executive summary. “The Court states that controllers or processors, acting as exporters, are responsible for verifying, on a case-by-case basis and, where appropriate, in collaboration with the importer in the third country, if the law or practice of the third country impinges on the effectiveness of the appropriate safeguards contained in the Article 46 GDPR transfer tools.
“In those cases, the Court still leaves open the possibility for exporters to implement supplementary measures that fill these gaps in the protection and bring it up to the level required by EU law. The Court does not specify which measures these could be. However, the Court underlines that exporters will need to identify them on a case-by-case basis. This is in line with the principle of accountability of Article 5.2 GDPR, which requires controllers to be responsible for, and be able to demonstrate compliance with the GDPR principles relating to processing of personal data.”
The EDPB’s recommendations set out a series of steps for data exporters to take as they go through the complex task of determining whether their particular transfer can play nice with EU data protection law.
Six steps but no one-size-fits-all fix
The basic overview of the process it’s advising is:

1. Map all intended international transfers.
2. Verify the transfer tools you want to use.
3. Assess whether there’s anything in the law or practice of the destination third country which “may impinge on the effectiveness of the appropriate safeguards of the transfer tools you are relying on, in the context of your specific transfer”, as it puts it.
4. Identify and adopt supplementary measures to bring the level of protection up to ‘essential equivalence’ with EU law.
5. Take any formal procedural steps required to adopt those supplementary measures.
6. Periodically re-evaluate the level of data protection and monitor any relevant developments.
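The steps above can be sketched as a simple decision flow. This is purely illustrative modeling, not EDPB terminology: the field names below are assumptions invented for the example.

```python
def assess_transfer(transfer):
    """Walk one intended transfer through the EDPB's six-step process.

    `transfer` is a hypothetical dict describing a single data flow;
    step 1 (mapping) happens outside this function and produces the
    list of transfers to assess."""
    # Step 2: is a valid Article 46 transfer tool in place (e.g. SCCs)?
    if not transfer.get("transfer_tool"):
        return "no valid transfer tool: do not transfer"
    # Step 3: does third-country law undermine the tool's safeguards?
    if not transfer.get("third_country_law_impinges"):
        return "transfer may proceed (re-evaluate periodically)"  # step 6
    # Step 4: can supplementary measures restore essential equivalence?
    if transfer.get("effective_supplementary_measures"):
        # Step 5: formally adopt the measures; step 6: keep monitoring.
        return "adopt supplementary measures, then transfer (monitor)"
    return "no effective measures: avoid, suspend or terminate"

print(assess_transfer({
    "transfer_tool": "SCCs",
    "third_country_law_impinges": True,
    "effective_supplementary_measures": False,
}))  # no effective measures: avoid, suspend or terminate
```

Note that the flow has a dead end: when no supplementary measure works, the only compliant outcome is not transferring at all, which is exactly the warning the EDPB issues.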
In short, this is going to involve both a lot of work — and ongoing work. tl;dr: Your duty to watch over the safety of European users’ data is never done.
Moreover, the EDPB makes it clear that there may well not be any supplementary measures capable of making a particular transfer legal.
“You may ultimately find that no supplementary measure can ensure an essentially equivalent level of protection for your specific transfer,” it warns. “In those cases where no supplementary measure is suitable, you must avoid, suspend or terminate the transfer to avoid compromising the level of protection of the personal data. You should also conduct this assessment of supplementary measures with due diligence and document it.”
In instances where supplementary measures could suffice, the EDPB says they may have “a contractual, technical or organisational nature”, or indeed a combination of some or all of those.
“Combining diverse measures in a way that they support and build on each other may enhance the level of protection and may therefore contribute to reaching EU standards,” it suggests.
However it also goes on to state fairly plainly that technical measures are likely to be the most robust tool against the threat posed by foreign surveillance. But that in turn means there are necessarily limits on the business models that can tap in — anyone wanting to decrypt and process data for themselves in the US, for instance, (hi Facebook!) isn’t going to find much comfort here.
The guidance goes on to include some sample scenarios where it suggests supplementary measures might suffice to render an international transfer legal.
Examples include: data storage in a third country where there’s no access to decrypted data at the destination and the keys are held by the data exporter (or by a trusted entity in the EEA or in a third country considered to have an adequate level of protection for data); the transfer of pseudonymised data, so individuals can no longer be identified (which means ensuring the data cannot be re-identified); or end-to-end encrypted data transiting third countries via encrypted transfer (again, the data must not be decryptable in a jurisdiction that lacks adequate protection; the EDPB also specifies that the existence of any ‘backdoors’ in hardware or software must have been ruled out, although it’s not clear how that could be done).
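The common thread in those scenarios is key separation: the importer stores ciphertext but never holds the keys. A toy sketch of that architecture follows. The XOR "cipher" is for illustration only and is not secure practice; a real deployment would use a vetted authenticated cipher (e.g. AES-GCM via a maintained crypto library).

```python
import secrets

def toy_xor(data: bytes, key: bytes) -> bytes:
    # XOR with a random same-length key (one-time-pad style).
    # Demonstrates WHO holds the key, not HOW to encrypt in production.
    return bytes(d ^ k for d, k in zip(data, key))

# Exporter (EU side): generates the key and never ships it abroad.
plaintext = b"EU personal data"
key = secrets.token_bytes(len(plaintext))
ciphertext = toy_xor(plaintext, key)

# Importer (third country): stores ciphertext only. Even if compelled
# to disclose everything it holds, the plaintext is not recoverable
# without the exporter-held key.
stored_abroad = ciphertext

# Only the exporter can reverse the transformation.
print(toy_xor(stored_abroad, key))  # b'EU personal data'
```

This is why the EDPB's viable scenarios all keep keys with the exporter (or a trusted EEA entity): whatever a foreign authority obtains from the importer, it obtains only ciphertext.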
Another section of the document discusses scenarios in which no effective supplementary measures could be found — such as transfers to cloud service providers (or similar) which require access to the data in the clear and where “the power granted to public authorities of the recipient country to access the transferred data goes beyond what is necessary and proportionate in a democratic society”.
Again, this is a bit of the document that looks very bad for Facebook.
“The EDPB is, considering the current state of the art, incapable of envisioning an effective technical measure to prevent that access from infringing on data subject rights,” it writes on that, adding that it “does not rule out that further technological development may offer measures that achieve the intended business purposes, without requiring access in the clear”.
“In the given scenarios, where unencrypted personal data is technically necessary for the provision of the service by the processor, transport encryption and data-at-rest encryption even taken together, do not constitute a supplementary measure that ensures an essentially equivalent level of protection if the data importer is in possession of the cryptographic keys,” the EDPB further notes.
It also makes it clear that supplementary contractual clauses aren’t any kind of get-out on this front — so, no, Facebook can’t stick a clause in its SCCs that defuses FISA 702 — with the EDPB writing: “Contractual measures will not be able to rule out the application of the legislation of a third country which does not meet the EDPB European Essential Guarantees standard in those cases in which the legislation obliges importers to comply with the orders to disclose data they receive from public authorities.”
The EDPB does discuss examples of potential clauses data exporters could use to supplement SCCs, depending on the specifics of their data flow situation — alongside specifying “conditions for effectiveness” (or ineffectiveness in many cases, really). And, again, there’s cold comfort here for those wanting to process personal data in the US (or another third country) while it remains at risk from state surveillance.
“The exporter could add annexes to the contract with information that the importer would provide, based on its best efforts, on the access to data by public authorities, including in the field of intelligence provided the legislation complies with the EDPB European Essential Guarantees, in the destination country. This might help the data exporter to meet its obligation to document its assessment of the level of protection in the third country,” the EDPB suggests in one example from a section of the guidance discussing transparency obligations.
However the point of such a clause would be for the data exporter to put up-front conditions on an importer to make it easier for them to avoid getting into a risky contract in the first place — or help them with suspending/terminating a contract if a risk is determined — rather than providing any kind of legal sticking plaster for mass surveillance. Aka: “This obligation can however neither justify the importer’s disclosure of personal data nor give rise to the expectation that there will be no further access requests,” as the EDPB warns.
Another example discussed in the document is the viability of adding clauses to try to get the importer to certify there’s no backdoors in their systems which could put the data at risk.
However the EDPB warns this may just be useless, writing: “The existence of legislation or government policies preventing importers from disclosing this information may render this clause ineffective.” So the example may simply have been included to try to kneecap dodgy legal advice that suggests contract clauses are a panacea for US surveillance overreach.
European data protection regulators have inched toward an enforcement decision for a Twitter breach that the company publicly disclosed in 2019, after a majority of EU data supervisors agreed to back a draft settlement submitted earlier by Ireland’s Data Protection Commission (DPC).
Twitter disclosed the bug in its ‘Protect your tweets’ feature at the start of last year — saying at the time that some Android users who’d applied its setting to make their tweets non-public may have had their data exposed to the public Internet since as far back as 2014.
A new data protection regime, meanwhile, came into force in the European Union in May 2018 — meaning the 2014-2019 breach falls under the EU’s General Data Protection Regulation (GDPR).
Ireland’s DPC is the lead supervisory authority in the Twitter case, but the cross-border nature of its business means all EU data protection agencies have an interest and the ability to make “relevant and reasoned” objections to the draft. Objections to the DPC’s draft decision were duly raised over the summer — triggering a dispute resolution process for cross-border cases set out in the GDPR.
The European Data Protection Board (EDPB), a body which helps coordinate pan-EU regulatory activity, said today it has adopted its first Article 65 decision — referring to the mechanism for settling disagreement between the EU’s patchwork of data supervisors. This means that at least a two-thirds majority of the EU DPAs have backed the settlement.
“On 9 November 2020, the EDPB adopted its binding decision and will shortly notify it formally to the Irish SA,” it wrote in a statement.
Ireland’s deputy commissioner, Graham Doyle, confirmed the EDPB has informed it of an Article 65 decision — but declined to comment further at this stage.
Ireland’s DPC now has up to a month to issue a final decision.
“The Irish SA [supervisory authority] shall adopt its final decision on the basis of the EDPB decision, which will be addressed to the controller, without undue delay and at the latest one month after the EDPB has notified its decision,” the EDPB statement adds.
Details of any penalties Twitter may face — such as a fine — have not yet been confirmed. But the end of the process is now in sight.
GDPR places a legal obligation on data controllers to adequately protect personal data. Financial penalties for violations of the framework can scale up to 4% of a company’s annual global turnover. (Although, in the case of big tech, the largest GDPR fine to date remains a $57M fine slapped on Google by France’s CNIL.)
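To put the 4% ceiling in context, here is a minimal arithmetic sketch (the turnover figure is invented for illustration):

```python
def max_gdpr_fine(annual_global_turnover: float) -> float:
    """Upper bound under the turnover-based limb of GDPR penalties:
    4% of a company's annual global turnover."""
    return 0.04 * annual_global_turnover

# Hypothetical company with $3.5B in annual global turnover:
print(f"${max_gdpr_fine(3_500_000_000):,.0f}")  # $140,000,000
```

For the largest tech companies, whose turnover runs to tens of billions, the theoretical ceiling is therefore in the billions, which is why the $57M Google fine is routinely described as modest relative to what the framework allows.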
Unlike that Google case — which CNIL pursued ahead of Google moving its EU legal base to Ireland — the Twitter case is cross-border and will be the first such big tech GDPR case to be concluded once a final decision is out.
The EU’s flagship data protection regulation continues to face criticism over how long it’s taking for cases and complaints to be investigated and decisions issued — especially those related to big tech.
Last year the Irish regulator said its first cross-border GDPR decisions would be coming “early” in 2020. In the event its first one will arrive before the end of 2020 — but that’s a pace that’s unlikely to silence critics who argue EU regulators are not equipped for the complex, resource-intensive task of overseeing how big tech handles people’s data.
The Twitter breach case is also likely to be considerably less complex than some of the complaint-based GDPR investigations ongoing into big tech platforms — which include probes around the legal bases for Facebook to process user data and how Google’s ad exchange is using Internet users’ data. Yet the EDPB still allowed a full extra month for the Article 65 process (instead of the default one month) because of what it described as “the complexity of the subject matter”. That hardly bodes well for more contentious cases.
Still, going through dispute resolution over cross-border cases may lead to greater consistency and help DPAs pick up enforcement pace over time.
The UK’s ICO looks like a bit of a cautionary tale in this regard — having recently taken the clippers to massive preliminary fines it announced in a couple of (non-big tech GDPR) data breach cases, meaning enforcement ended up being both later and less stinging than it had first appeared.
Despite critics’ claims that GDPR enforcement continues to be lacking in places where it should be hard-hitting, the question of how to effectively regulate big tech is one that EU lawmakers aren’t backing away from.
On the contrary, the Commission is set to lay out a legislative proposal next month to apply ex ante rules to dominant Internet platforms as part of a planned Digital Markets Act. Under the plans, so-called ‘gatekeepers’ will be subject to a list of ‘dos and don’ts’ that’s slated to include controls on how they can share data. It could also see a push to create a pan-EU regulator to oversee major platforms.
Such an approach could help to reduce the oversight burden facing a handful of EU DPAs with an outsized number of big tech giants on their books, such as the Irish DPC. But, again, there’s likely to be a long wait ahead before any new EU platform rules are in a position to be effectively enforced.
Two days ago, about $1 billion worth of bitcoin that had sat dormant since the seizure of the Silk Road marketplace in 2013, one of the biggest underground drug websites on the dark web, suddenly changed hands.
Who took it? Mystery over. It was the U.S. government.
In a statement Thursday, the Justice Department confirmed it had seized the 70,000 bitcoins, generated in revenue from drug sales on the Silk Road marketplace, from a hacker known as “Individual X,” who had moved the cryptocurrency from Silk Road into a wallet the hacker controlled.
The filing said that the identity of Individual X “is known to the government.”
At the time of the seizure on Tuesday, the bitcoin was worth more than $1 billion.
“Silk Road was the most notorious online criminal marketplace of its day. The successful prosecution of Silk Road’s founder in 2015 left open a billion-dollar question. Where did the money go? Today’s forfeiture complaint answers this open question at least in part,” said U.S. Attorney David Anderson in remarks.
“$1 billion of these criminal proceeds are now in the United States’ possession,” he said.
Silk Road was for a time the “most sophisticated and extensive criminal marketplace on the Internet,” per the Justice Department statement. In 2013, its founder and administrator Ross Ulbricht was arrested and the site seized. Ulbricht was convicted in 2015 and sentenced to two life terms and an additional 40 years, for his role in the operation. Prosecutors said the site had close to 13,000 listings for drugs and other illegal services, and generated millions of bitcoin.
The Justice Department said Thursday that the seized bitcoin would be subject to forfeiture proceedings.
A scourge of robocalls urging Americans to “stay safe and stay home” has gotten the attention of the FBI and the New York attorney general over concerns of voter suppression.
The brief message, which doesn’t specifically mention Election Day, has prompted New York Attorney General Letitia James to launch an investigation into the matter. James announced Tuesday that her office is actively investigating allegations that voters are receiving the robocalls.
“Voting is a cornerstone of our democracy,” James said in a statement Tuesday. “Attempts to hinder voters from exercising their right to cast their ballots are disheartening, disturbing and wrong.”
James added that such calls are illegal and will not be tolerated.
The FBI told TechCrunch that the agency is aware of reports of robocalls. The agency wouldn’t say if it is investigating the robocalls; however, a senior official at the Department of Homeland Security told reporters Tuesday that the FBI was investigating calls that seek to discourage people from voting, according to the AP.
“As a reminder, the FBI encourages the American public to verify any election and voting information they may receive through their local election officials,” the FBI said in a statement sent to TechCrunch.
The announcement from James follows subpoenas issued earlier this week by the New York AG office to investigate the source of these robocalls allegedly spreading disinformation. New York voters who receive concerning disinformation, or face issues at the polls, can contact her office’s Election Protection Hotline at 1-800-771-7755.
“Every voter must be able to exercise their fundamental right to vote without being harassed, coerced, or intimidated. Our nation has a legacy of free and fair elections, and this election will be no different,” James added. “Voters should rest assured that voting is safe and secure, and they should exercise their fundamental right to vote in confidence. We, along with state leaders across the nation, are working hard to protect your right to vote, and anyone who tries to hinder that right will be held accountable to the fullest extent of the law.”
Last month, the U.S. Department of Justice announced that an interagency working group convened by Attorney General William P. Barr released a report to Congress on efforts to stop illegal robocalls. The report described efforts to combat illegal robocalls by the DOJ, including two civil actions filed in January 2020 against U.S.-based Voice over Internet Protocol (VoIP) companies, as well as by the Federal Trade Commission and the Federal Communications Commission. Despite those efforts, and even evidence of some declines in robocalls for a time, the presidential election and the COVID-19 pandemic have fueled a spike in calls.
A shared user account used by WeWork employees to access printer settings and print jobs had an incredibly simple password — so simple that a customer guessed it.
Jake Elsley, who works at a WeWork in London, said he found the user account after a WeWork employee at his location mistakenly left the account logged in.
WeWork customers like Elsley normally have an assigned seven-digit username and a four-digit passcode used for printing documents at WeWork locations. But the username for the account used by WeWork employees was just four digits: “9999”. Elsley told TechCrunch that he guessed the password because it was the same as the username. (“9999” is ranked as one of the most common passwords in use today, making it highly insecure.)
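A four-digit numeric credential is trivially guessable even when it doesn't simply match the username. A quick back-of-the-envelope sketch makes the point; the guesses-per-second rate below is an assumption, standing in for slow online guessing against a portal with no lockout:

```python
# Search space of a 4-digit numeric PIN vs. a modest random password.
pin_space = 10 ** 4        # 0000-9999: 10,000 possibilities
password_space = 62 ** 8   # 8 characters drawn from [a-zA-Z0-9]

guesses_per_second = 100   # assumed rate for unthrottled online guessing

print(pin_space / guesses_per_second,
      "seconds to exhaust the entire PIN space")  # 100.0 seconds
print(f"{password_space // pin_space:,} times larger search space "
      "for even a short random password")
```

In other words, exhausting every possible 4-digit code takes under two minutes at a very modest guessing rate, which is why such codes only make sense when paired with strict rate limiting and lockouts.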
The “9999” account is used by and shared among WeWork community managers, who oversee day-to-day operations at each location, to print documents for visitors who don’t have accounts to print on their own. The account cannot be used to access print jobs sent to other customer accounts.
Elsley said that the “9999” account could not see the contents of documents beyond file names, but that logging in to the WeWork printing web portal could allow him to release other people’s pending print jobs sent to the “9999” account to any other WeWork printer on the network.
The printing web portal can only be accessed on WeWork’s Wi-Fi networks, said Elsley, but that includes the free guest Wi-Fi network which doesn’t have a password, and WeWork’s main Wi-Fi network, which still uses a password that has been widely circulated on the internet.
Elsley reached out to TechCrunch to ask us to alert the company to the insecure password.
“WeWork is committed to protecting the privacy and security of our members and employees,” said WeWork spokesperson Colin Hart. “We immediately initiated an investigation into this potential issue and took steps to address any concerns. We are also nearing the end of a multi-month process of upgrading all of our printing capabilities to a best in class security and experience solution. We expect this process to be completed in the coming weeks.”
WeWork confirmed that it had since changed the password on the “9999” user account.