
On Humane Trust & Safety and Support Teams

Recently I sat down with a leader at a tech company building a community platform to offer input on running Support and Trust & Safety teams. Later, I sent a follow-up email and realized its contents might be useful to other folks in tech wrestling with the ethical quandary of managing teams tasked with cleaning up the internet. My advice is posted here with their permission:

When we spoke I did a lot of rambling, because I have an excess of war stories rolling around in my head, but I realize that may have left you swimming. Below, I’ve distilled my thoughts into a handful of things I think are worth acting on if you want to create a Support and/or Trust & Safety team designed to bypass many of the unhealthy dynamics that are usually baked into this work:
1) Pay well for work that doesn’t scale.
It sounded like you may be pushing to have Support paid on the same scale as engineering. That’s really solid. Tech companies typically hate paying for labor that doesn’t produce scalable output; mushy human stuff is a necessary evil, and they pay as little as they can manage for it. That’s really short-sighted, though. Software with users is going to take work to maintain no matter what, and if you recognize the operational costs up front, you’ll help your entire company avoid surprises and burnout later.
2) Prepare to increase the size of your Trust & Safety and Support teams in proportion to your userbase.
This follows from the same dynamics as #1, and is typically really hard to swallow. I’m not suggesting that Trust & Safety and Support will never be able to automate away parts of their work, or that folks on those teams shouldn’t be encouraged to do so as part of their jobs. But you should expect the load and complexity of issues to scale in proportion to the number of people using your software, especially as you add more social features. Having an ever-growing number of smart, capable people on hand, with enough extra bandwidth to think strategically and devise solutions for the stuff they’re seeing on the front lines, will ensure you’re generally not blindsided by your platform unleashing messes on the world.
3) Create a separate time-off scheme for Trust & Safety and Support.
I recommend giving everyone doing this work three paid months off. This work takes a psychological toll, and people’s productivity is going to be affected. Usually that results in staff feeling terrible about themselves and papering it over, which is the first step on the road to burnout. But if you acknowledge this outcome and build affordances for it, and treat it as a normal thing that individuals aren’t failures for and that the company has their backs on, you’ll increase people’s resilience by ensuring they don’t waste more bandwidth on shame.
4) Have Support and Trust & Safety drive platform-level changes.
Another enormous driver of burnout in this work is facing upsetting scenarios you have no opportunity to resolve; that creates a lot of second-order trauma. Conversely, if Trust & Safety folks find themselves fielding the fallout of a huge platform abuse vector and devise a novel way to solve it, actually seeing that solution implemented is an incredible morale boost. It guards against learned helplessness and helps your people stay in the game with you over the long haul.