Maybe I should start a series on the enshittified community web. This AI-driven community world seems to be going just great. 🫠
I previously wrote about the enshittification of the community web and how we're allowing it to happen. In this case, Digg got hit hard with bots, resorted to layoffs, and has 'temporarily closed its doors'.
I have a hunch that the temporary shutdown will turn into a permanent one; time will tell. I checked them out when they launched, and a couple of times after that, and I struggled to see any real value in them. It felt like another low-value waste of space. Maybe people have time for that in the one life they have to live. Not me!
They allowed this to happen by not taking moderation seriously and buying into the over-glorification of AI as the path to a better community. Little did they realise that the AI bots would be taking them down rather than helping them. 🤷🏻♀️

Of course, it's hard to know for sure the full story, but here's a snippet from The Verge:
When Digg announced its relaunch, Rose told The Verge that AI could “remove the janitorial work of moderators and community managers.” Now, the new Digg’s CEO Justin Mezzell writes in a note pinned to the homepage that, “We knew bots were part of the landscape, but we didn’t appreciate the scale, sophistication, or speed at which they’d find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough.”
I feel for Digg, of course; spammers were always more likely to pounce on them, given Digg's history and well-known past. But let's be real: assuming AI can handle all the moderation up front, perfectly, is quite frankly crazy thinking.
Other comments in The Verge's community discussion pointed to a wider bot problem, not just for Digg, but for Reddit and likely other spaces too:
Can't say I'm surprised. The bot problem on Reddit is probably even worse, and sometimes very obvious. I'm not sure Reddit actually cares to solve that problem, though.
And...
There was no content or community. I tried it out because I'm frustrated with Reddit for a lot of reasons but there was just not enough people/content there to have me return after a couple of days.
To me, this signifies a real and ongoing shift. I'm not quite sure where it's heading, but it is certainly influencing my decisions over at the MoTaverse. People are noticing too; I detect a shift in their willingness to continue spending time in these spaces.
The state of the web is that trust feels at an all-time low, and I can only assume it will get lower. But everything in community, for me, is about building on trust. I don't want to be part of anything that doesn't have real connection or meaning. I refuse!
We all have the capacity to refuse!
Trust has to be the foundation. If we allow bots in, the trust is gone.
Of course, many people appreciate Reddit, but for me, the largely anonymous aspect has always put me off. It encourages a certain type of content that is always going to lack context and depth. They've been trying to maintain the human aspect; I'm not sure how it's going, but I honestly can't see how they can prevent bots, especially as they get smarter. It's a hard nope for me. I've got one life, and I'm not going to spend it there.
All of a sudden, running free communities carries the real risk of not knowing whether the people signing up are human beings or bots. And then we have to do the enshittified work of constantly cleaning things up. The moderation work does not disappear because of AI; it just shifts elsewhere.
We're made to believe that this has to be the way. That's also a hard nope from me.
Not only will people increasingly not want to spend their time checking whether they are speaking to humans and verifying that what they are reading is true, but the people leading communities should be doing good work, not fighting bot spam constantly.
Bots could become more sophisticated in purpose too. As community people, we're used to spammers posting content from profiles that usually promote something. They have traditionally been easier to spot.
But what if they lurk behind the scenes, doing other things we are not aware of? What if agentic profiles are built up slowly over time, so that they feel and look human?
What if agents pose as other people?
What if agents are instructed by a real person? (I can see this as negative, but also as a potentially positive, easier way to engage.)
How can people be sure they are speaking to real people? (It matters, you know, despite what others may have you think.)
There are many risks, both from user and business perspectives, and they are going to be harder to spot and deal with. I could quite easily see agentic tools being built around this.
It's wild and depressing. The web is becoming less trustworthy. Every interaction we have will be followed by the question of whether we can trust it. And again, I can only emphasise that this makes me want to build something that is truly more human-led.
But with it comes opportunity. And that's where my mind is right now. There is so much in flux, and with that comes the option to rethink what we build.
What's clear is that when we design for community, we now need to protect ourselves from AI slop and the bots. It has to be baked into our strategy, and we can't expect things that worked in the past to work today.
Are you even building community if you aren't protecting yourself from AI enshittification?