On 14 January 2020, Lord McNally placed a bill in front of Parliament that would “assign certain functions of OFCOM in relation to online harms regulation”. This summary appears to be a loose description of what’s contained, with the text seemingly requiring OFCOM to write a report each year with recommendations for the introduction of an Online Harms Reduction Regulator. It is not clear why recommendations are required every year, nor why the lead has now moved from DCMS to OFCOM (I can only assume that it is because OFCOM is much closer to the cut-and-thrust of regulation).
So what might it mean for Platform and Service Providers?
In the short term – probably not a lot. However, there are a couple of key points that providers may want to keep abreast of: 1) We can see that progress is being made – and is more likely to increase than decrease. 2) The Online Harms (as initially laid out in the OHWP) have been narrowed down to focus on certain ones in particular, which means that a Platform Provider is probably well advised to ensure that these are being tackled actively. The Harms laid out in the paper are:
(a) terrorism (or it could be in reference to this definition);
(b) dangers to persons aged under 18 and vulnerable adults;
(c) racial hatred, religious hatred, hatred on the grounds of sex or hatred on the grounds of sexual orientation;
(d) discrimination against a person or persons because of a protected characteristic;
(e) fraud or financial crime;
(f) intellectual property crime;
(g) threats which impede or prejudice the integrity and probity of the electoral process; and
(h) any other harms that OFCOM deems appropriate.
Unfortunately, as you have probably recognised, these descriptions of Online Harms do not correlate well with those laid out in the OHWP – so OFCOM will probably struggle initially with the loose definition of these Harms before it can make any meaningful report.
If you’re in the business of providing online services or platforms that feature User Generated Content (UGC) or chat, and you don’t know what Safety by Design (SbD) is, then you’re definitely going to want to head over there and take a look. Essentially it’s the ‘Best Practice Guide’ for developing online services that counter the Online Harms that we so often talk about.
SbD Principle 1: Service provider responsibilities. The burden of safety should never fall solely upon the end user. Service providers can take preventative steps to ensure that their service is less likely to facilitate, inflame or encourage illegal and inappropriate behaviours. To help ensure that known and anticipated harms have been evaluated in the design and provision of an online service, a service should take the following steps:
1. Nominate individuals, or teams—and make them accountable—for user-safety policy creation, evaluation, implementation and operations.
2. Develop community standards, terms of service and moderation procedures that are fairly and consistently implemented.
3. Put in place infrastructure that supports internal and external triaging, clear escalation paths and reporting on all user-safety concerns, alongside readily accessible mechanisms for users to flag and report concerns and violations at the point that they occur.
4. Ensure there are clear internal protocols for engaging with law enforcement, support services and illegal content hotlines.
5. Put processes in place to detect, surface, flag and remove illegal and harmful conduct, contact and content with the aim of preventing harms before they occur.
6. Prepare documented risk management and impact assessments to assess and remediate any potential safety harms that could be enabled or facilitated by the product or service.
7. Implement social contracts at the point of registration. These outline the duties and responsibilities of the service, user and third parties for the safety of all users.
8. Ensure that security-by-design, privacy-by-design and user-safety considerations are balanced when securing the ongoing confidentiality, integrity and availability of personal data and information.
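Step 5 above—detect, surface, flag and remove harmful content before harm occurs—can be sketched as a simple moderation pipeline. The wordlists, scores and thresholds below are illustrative assumptions, not part of the SbD principles; a production service would use trained classifiers or a third-party moderation API in place of the toy keyword matcher.

```python
# Minimal sketch of a detect -> flag -> remove pipeline for user content.
# Category wordlists, risk scores and thresholds are illustrative only.

REMOVE_THRESHOLD = 0.9  # assumed score at which content is taken down
FLAG_THRESHOLD = 0.5    # assumed score at which a human moderator reviews

def classify(text):
    """Toy classifier returning a risk score per harm category.
    A real service would call trained models or a moderation API."""
    keywords = {"harassment": ["idiot", "loser"],
                "fraud": ["wire me", "gift card"]}
    scores = {}
    for category, words in keywords.items():
        hits = sum(1 for w in words if w in text.lower())
        scores[category] = min(1.0, hits * 0.5)
    return scores

def moderate(text):
    """Decide whether content is published, flagged for review, or removed."""
    worst = max(classify(text).values())
    if worst >= REMOVE_THRESHOLD:
        return "remove"
    if worst >= FLAG_THRESHOLD:
        return "flag"  # escalate down the triage path from step 3
    return "publish"
```

The three-way outcome matters: the "flag" route is what feeds the internal triage, escalation and reporting infrastructure described in step 3.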
SbD Principle 2: User empowerment and autonomy. The dignity of users is of central importance, with users’ best interests a primary consideration. The following steps will go some way to ensure that users have the best chance at safe online interactions, through features, functionality and an inclusive design approach that secures user empowerment and autonomy as part of the in-service experience. Services should aim to:
1. Provide technical measures and tools that adequately allow users to manage their own safety, and that are set to the most secure privacy and safety levels by default.
2. Establish clear protocols and consequences for service violations that serve as meaningful deterrents and reflect the values and expectations of the user base.
3. Leverage the use of technical features to mitigate against risks and harms, which can be flagged to users at point of relevance, and which prompt and optimise safer interactions.
4. Provide built-in support functions and feedback loops for users that inform users on the status of their reports, what outcomes have been taken and offer an opportunity for appeal.
5. Evaluate all design and function features to ensure that risk factors for all users—particularly for those with distinct characteristics and capabilities—have been mitigated before products or features are released to the public.
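Step 1 above—the most secure privacy and safety levels by default—is easy to express in code: every new account starts from the strictest configuration, and any relaxation must be an explicit user choice. The setting names and values below are hypothetical, chosen purely to illustrate the pattern.

```python
from dataclasses import dataclass

@dataclass
class SafetySettings:
    # Hypothetical setting names; each default is the most restrictive option.
    profile_visibility: str = "private"     # rather than "public"
    direct_messages: str = "contacts_only"  # rather than "anyone"
    content_filter: str = "strict"          # rather than "off"
    location_sharing: bool = False

def new_account_settings() -> SafetySettings:
    """Every new user starts from the safest configuration; any
    relaxation must be an explicit, informed choice by the user."""
    return SafetySettings()
```

Centralising the defaults in one place like this also makes them auditable: a single test can verify that no release accidentally weakens them.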
SbD Principle 3: Transparency and accountability. Transparency and accountability are hallmarks of a robust approach to safety. They not only provide assurances that services are operating according to their published safety objectives, but also assist in educating and empowering users about steps they can take to address safety concerns. To enhance users’ trust, awareness and understanding of the role, and importance, of user safety:
1. Embed user safety considerations, training and practices into the roles, functions and working practices of all individuals who work with, for, or on behalf of the product or service.
2. Ensure that user-safety policies, terms and conditions, community standards and processes about user safety are visible, easy-to-find, regularly updated and easy to understand. Users should be periodically reminded of these policies and proactively notified of changes or updates through targeted in-service communications.
3. Carry out open engagement with a wide user-base, including experts and key stakeholders, on the development, interpretation and application of safety standards and their effectiveness or appropriateness.
4. Publish an annual assessment of reported abuses on the service, alongside the open publication of meaningful analysis of metrics such as abuse data and reports, the effectiveness of moderation efforts and the extent to which community standards and terms of service are being satisfied through enforcement metrics.
5. Commit to consistently innovate and invest in safety-enhancing technologies on an ongoing basis and collaborate and share with others safety-enhancing tools, best practices, processes and technologies.
Your app/platform/website/service is a force for good, right? I’ll assume so (if it’s not, you’re on the wrong side of us, and your time is up!) because generally developers, product managers, entrepreneurs and customer services teams are out there to add value and delight their customers. So you may not have planned to spend your scarce development effort not just on protecting your platform from cyber attack, but on protecting your users from other users and threat actors. It’s an unfortunate fact of the modern internet that bad actors are out there, and the chances are they will use your platform to attack other people.
So, your first question might be: what is it I need to safeguard my users against?
Great question, and there was a time when that mostly came down to removing profanity (bad language) and (perhaps) ensuring there was no abuse or harassment going on. Then came the widespread problem of Online Child Sexual Exploitation (OCSE) and (if you allow file exchange) the transfer and distribution of Child Sexual Abuse Material (CSAM).
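The oldest of those safeguards, wordlist-based profanity filtering, is simple enough to sketch. The wordlist and masking policy below are placeholders; a real filter would use a maintained list and also handle obfuscations (e.g. “l33t”-speak), plurals and words embedded inside other words.

```python
import re

# Placeholder wordlist; a production filter would use a maintained,
# regularly updated list and handle obfuscation (e.g. "d4mn").
PROFANITY = {"damn", "crap"}

def mask_profanity(text: str) -> str:
    """Replace each listed word with asterisks, preserving its length."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, PROFANITY)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: "*" * len(m.group(0)), text)
```

The `\b` word boundaries prevent false positives on innocent words that merely contain a listed string (the classic “Scunthorpe problem”), which is exactly why naive substring matching is not enough.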
But identifying online harms and coming up with mitigations is not your day job, so we thought we would make your life a lot easier. Step 1 was to create a place where you could see the full list of identified harms, each with (some sort of!) definition.
This is only the start of the resources we plan to provide to make your life easier in mounting a great technology response.
We’re happy to work with you to provide mitigations for all of these. Some will be technology, some will be process and others may simply be a tweak in your policy, but we believe that knowing what you need to guard against is the first step in providing a response.
We’re passionate about making the internet a safer place for people, but we’re not in the business of Social Media, Chat or Content. But what if you are?
We have pulled together what we think is the most complete and current catalogue of Content Moderation companies on the internet. So if you’re a developer looking for a service to make your site or app a safer place, then head over to the Content Moderation app store and let us know what you think.