UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals for legislation that would place a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies to uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements planned for terrorist and child sexual exploitation content. Companies will not, however, be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that online harms legislation remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self-harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user-generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

Source: TechCrunch

UK public sector failing to be open about its use of AI, review finds

A report into the use of artificial intelligence by the UK’s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens’ lives.

Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer funded healthcare — with health minister Matt Hancock setting out a tech-fuelled vision of “preventative, predictive and personalised care” in 2018, calling for a root and branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of “healthtech” apps and services.

He has also personally championed a chatbot startup, Babylon Health, that’s using AI for healthcare triage — and which is now selling a service into the NHS.

Policing is another area where AI is being accelerated into UK public service delivery, with a number of police forces trialing facial recognition technology — and London’s Met Police switching over to a live deployment of the AI technology just last month.

However the rush by cash-strapped public services to tap AI ‘efficiencies’ risks glossing over a range of ethical concerns about the design and implementation of such automated systems: fears about embedding bias and discrimination into service delivery and scaling harmful outcomes; questions of consent around access to the data-sets used to build AI models; and doubts about how much human agency is retained over automated outcomes, to name just a few. All of these concerns demand transparency into AI systems if there is to be accountability over automated outcomes.

The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.

Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation, after it ruled that an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.

The court objected to a lack of transparency about how the system functions, as well as the associated lack of controllability — ordering an immediate halt to its use.

The UK’s Committee on Standards in Public Life, which advises on ethical standards across the public sector, has today sounded a similar warning — publishing a series of recommendations for public sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.

“Under the principle of openness, a current lack of information about government use of AI risks undermining transparency,” it writes in an executive summary.

“Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice.”

“This review found that the government is failing on openness,” it goes on, asserting that: “Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”

In 2018 the UN’s special rapporteur on extreme poverty and human rights raised concerns about the UK’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale — warning then that the impact of a digital welfare state on vulnerable people would be “immense”, and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.

Per the committee’s assessment it is “too early to judge if public sector bodies are successfully upholding accountability”.

The committee also suggests that “fears over ‘black box’ AI… may be overstated”, instead dubbing “explainable AI” a “realistic goal for the public sector”.

On objectivity, they write that data bias is “an issue of serious concern, and further work is needed on measuring and mitigating the impact of bias”.

The use of AI in the UK public sector remains limited at this stage, according to the committee’s review, with healthcare and policing currently having the most developed AI programmes — where the tech is being used to identify eye disease and predict reoffending rates, for example.

“Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage,” the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are “examining how AI can increase efficiency in service delivery”.

It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care — noting the example of Hampshire County Council trialling the use of Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers. The report also points to a Guardian article which reported that one-third of UK councils use algorithmic systems to make welfare decisions.

But the committee suggests there are still “significant” obstacles to what they describe as “widespread and successful” adoption of AI systems by the UK public sector.

“Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation,” it writes. “It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects.”

The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.

“While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users,” it suggests.

Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. “All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery,” the committee writes.

Another recommendation is for clarity over which ethical principles and guidance apply to public sector use of AI — with the committee noting that there are currently three sets of principles that could apply to the public sector, a situation that is generating confusion.

“The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use,” it recommends.

It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies’ use of AI complies with the UK Equality Act 2010.

The committee is not recommending a new regulator should be created to oversee AI — but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.

It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI — supporting the government’s intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalization.)

Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that “ensure that private companies developing AI solutions for the public sector appropriately address public standards”.

“This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements,” it suggests.

Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of “driving blind, with no control over who is in the AI driving seat”.

“This serious report sadly confirms what we know to be the case — that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector,” she said. “The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control.

“Last year, I argued in parliament that Government should not accept further AI algorithms in decision making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It’s time for action.”

Source: TechCrunch