- cross-posted to:
- fediverse@zerobytes.monster
An in-depth report reveals an ugly truth about isolated, unmoderated parts of the Fediverse. It’s a solvable problem, with challenges.
Wait until the people who wrote the report learn about the 4chan random board
This isn’t a problem with the fediverse. It’s a problem with the people who are okay with this stuff hosting their own servers. Real CP is a quick Torch (or even Google, because this stuff is on the clearnet too) search away, and was even before the fediverse.
Anyway, just join an instance that blocks this
I agree that the problem isn’t with the Fediverse itself, any more than it is with email, usenet, encrypted messengers, etc.
The thing is, it’s a problem that affects the network. While “block and move on” is a reasonable strategy for getting that crap out of your own instance’s feeds, the real meat and potatoes of the issue have to do with legal and legislative repercussions. If an admin comes across this stuff, they have a legal obligation to report it, in most jurisdictions. In fact, the EARN IT and STOP CSAM acts that politicians are trying to push through Congress are likely to make companies overreact to any potential penalty that could come from accidental cross-pollination of CSAM between servers.
Unfortunately, things get a whole lot messier when an instance discovers cached CSAM after the fact. A Mastodon instance was recently taken down without the admin being given any turnaround time to look into it; the hosting company was simply ordered to comply with a CSAM takedown request that basically said “This server has child porn on it.”
Also, regardless of whether you report it or block it and pretend you never saw anything, that doesn’t change the fact that it’s still happening. At the very least, having tooling to make the reporting easier would probably be a big boon to knocking those servers off the network.
I wonder what kind of computing resources that Microsoft service needs. Isn’t it essentially just a set of hashes? My point being that centralization does not have to be an issue.
It’s a bit of an unknown, since the service is a proprietary black box. With that being said, my guess:
- A database with perceptual hash data for volumes and volumes of CSAM
- Means to generate new hashes from media
- Infrastructure for adding and auditing more of it
- A REST API for hash comparisons and reporting
- Integration for pushing reports to NCMEC and law enforcement
None of those things are impossible or out of reach…but collecting a new database of hashes is challenging. Where do you get it from? How is it stored? Do you allow the public to access the hash data directly, or do you keep it secret like all the other solutions do?
I’m imagining a solution where servers aggregate all of this data up to a dispatch platform like the one described above, possibly run by a non-profit or NGO, which then dispatches the data to NCMEC directly.
The other thing to keep in mind is that solutions like PhotoDNA are HUGE. I’m talking hundreds of thousands of pieces of reported media per year. It’s something that would require a lot of uptime, and the ability to handle a significantly high volume of requests on a daily basis.
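For a sense of what the hash-comparison piece of such a service might look like internally, here’s a hypothetical Python sketch. It assumes hashes are 64-bit perceptual hashes and that a small Hamming distance means “visually the same”; the hash values, the `check_hash` name, and the distance threshold are all invented for illustration, since the real service is a black box.

```python
# Hypothetical matching core: compare a candidate perceptual hash against a
# known database by Hamming distance. Placeholder values throughout.

KNOWN_HASHES = {0x0123456789ABCDEF, 0xFEDCBA9876543210}  # fake example hashes


def hamming(a: int, b: int) -> int:
    """Number of bit positions where two 64-bit hashes differ."""
    return bin(a ^ b).count("1")


def check_hash(candidate: int, threshold: int = 6):
    """Return the closest known hash within `threshold` bits, else None.

    A linear scan works for a toy set; at PhotoDNA scale you would need a
    proper index to handle the daily request volume mentioned above.
    """
    best = min(KNOWN_HASHES, key=lambda h: hamming(candidate, h))
    return best if hamming(candidate, best) <= threshold else None
```

In a real deployment a positive match would presumably trigger the reporting pipeline rather than tell the caller which database entry matched, so the database itself stays secret.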
Thanks for the thought you put into your answer.
I’ve been thinking: CSAM is just one of the many problems communities face. E.g., YouTube is unable to moderate transphobia properly, which has significant consequences as well.
Let’s say we had an ideal federated copy of the existing system. It would still not detect many other types of antisocial behavior. All I’m saying is that the existing approach by M$ feels a bit like it’s based on moral tunnel vision, trying to solve complex human social issues with some kind of silver bullet. It lacks nuance. Whereas in fact this is a community management issue.
Honestly I feel it’s really a matter of having manageable communities with strong moderation. And the ability to report anonymously, in case one becomes involved in something bad and wants out.
Thoughts?
IMO the hardest part is the legal side, and in fact I’m not very clear on how MS skirted that issue, other than through lax US enforcement on corporations. To have a DB like this, one must store material that is ordinarily illegal to store. Because of the use of imperfect, so-called perceptual hashes, and in case of algorithm updates, I don’t think one can get away with simply storing the hash of the file. Some kind of computer vision/AI-ish solution might work out, but I wouldn’t want to be the person compiling that training set…
Perhaps the manual reporting tool is enough? Then that content can be forwarded to the central ms service. I wonder if that API can report back to say whether it is positive.
Can you elaborate on the hash problem?
Personally I was thinking of generating a federated set based on user reporting. Perhaps enhanced by checking with the central service as mentioned above. This db can then be synced with trusted instances.
> Perhaps the manual reporting tool is enough? Then that content can be forwarded to the central ms service. I wonder if that API can report back to say whether it is positive.
The problem with a lot of this tooling is you need some sort of accreditation to use it, because it somewhat relies on security through obscurity. As far as I know you can’t just hit MS’s servers and ask “is this CSAM?” If something like that were possible it might work.
> Can you elaborate on the hash problem?
Sure. When you have an image, you can do lots of things to it that alter it in some way: change the compression, change the format, crop it, apply a filter… All of this changes the file, and so it changes the hash. A perceptual hash system uses computer vision techniques to try to generate the same hash for pictures that are substantially the same. But this tech is imperfect and will probably change over time. So if the way the hash gets calculated changes, keeping the hashes wouldn’t be enough; you’d have to keep the original file in order to recalculate, which means storing CSAM, which is ordinarily not allowed, and for good reason.
For a hint at how bad these hashes can get: some are reversible, vulnerable to pre-image attacks, and so on.
Some of this is probably inevitable in this type of system. You don’t want to make it easy for someone to hit the servers with a large number of hashes and then use IPFS or the BitTorrent DHT to retrieve the positives (you’d be helping people get CSAM). The problem is hard.
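To illustrate the “substantially the same” idea, here’s a toy difference-hash (dHash) in plain Python. It operates on a bare 2-D grid of grayscale values rather than a real decoded image, and it is nothing like the proprietary transforms PhotoDNA actually uses; it only shows why a uniform brightness change leaves the hash untouched while a structural change flips many bits.

```python
# Toy dHash: sample the image down to a small grid, then set one bit per
# "is the left pixel darker than its right neighbor?" comparison.

def dhash(pixels, hash_size=8):
    """pixels: 2-D list of grayscale values. Returns a 64-bit int hash."""
    h, w = len(pixels), len(pixels[0])
    # Naive nearest-pixel downsampling to a (hash_size+1) x hash_size grid.
    grid = [
        [pixels[row * h // hash_size][col * w // (hash_size + 1)]
         for col in range(hash_size + 1)]
        for row in range(hash_size)
    ]
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left < right else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

Brightening every pixel by the same amount preserves every left-vs-right comparison, so the hash is byte-identical; mirroring the image reverses all of them, so the hashes land maximally far apart. Re-encoding or mild cropping typically flips only a few bits, which is why matching uses a distance threshold instead of equality.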
> Personally I was thinking of generating a federated set based on user reporting. Perhaps enhanced by checking with the central service as mentioned above. This db can then be synced with trusted instances.
Something like that could work, maybe obscuring some of the hash content (random parts of it) so that it doesn’t become a way to actually find the stuff.
Whatever decisions are made have to be well thought through so as not to make the problem worse.
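A rough sketch of the bit-obscuring idea, assuming 64-bit hashes. The function names, the number of hidden bits, and the threshold are all invented; the point is just that an instance could publish a hash with some randomly chosen bits zeroed out, plus a mask saying which bits to ignore, so matching still works on the remaining bits but the published value can’t simply be used to go looking for the original media.

```python
import random

HASH_BITS = 64


def mask_hash(h, n_hidden=8, seed=None):
    """Zero out n_hidden randomly chosen bit positions of hash h.

    Returns (masked_hash, mask), where set bits in `mask` mark the hidden
    positions a matcher should ignore."""
    rng = random.Random(seed)
    mask = 0
    for pos in rng.sample(range(HASH_BITS), n_hidden):
        mask |= 1 << pos
    return h & ~mask, mask


def matches(candidate, masked, mask, threshold=6):
    """Compare only the published (unmasked) bits of the two hashes."""
    diff = (candidate ^ masked) & ~mask
    return bin(diff).count("1") <= threshold
```

Eight hidden bits out of 64 is arbitrary: hide too many and false positives rise, hide too few and the published hash is effectively the full one.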
Perhaps this technical approach is the wrong way entirely. In a scale-free network it might seem like a good approach because of the seemingly infinite number of edges the hub nodes service (YT, Twitter). The numbers are so large that you naturally reach for a technical solution.
However, a network can be laid out in a way that is more conducive to meaningful moderation. By “meaningful” I mean that there are people involved rather than algos. This requires having small-world communities with influential core members or moderators.
This allows for wider, more inclusive, and more nuanced moderation. For example, I assume that YT detects and removes CSAM, yet it still hosts CSAM-adjacent content because that content is technically legal, even though human moderators would filter it out. Likewise, issues such as transphobia are not legal problems and thus are not properly moderated. On the flip side, stuff gets removed that has nothing wrong with it. When different communities create their own meaning through values, and principles based on those values, we will have more diversity, and that allows for social progress in the long run.
This might be the case for the federated structure of Lemmy.
Of course this ignores communities that break off and do their own thing and polarize into a more extreme form. I feel that is a different problem that requires a different solution.
Excuse me for being all over the place with this post, but I have to run :)
Well, in a way that’s what we’re doing now, and by and large it works, but obviously there’s some leakage. It’s impossible to bring that down to zero, but it makes sense to work on improving it.
The other side of the coin is that the price of this moderation model is subjecting a lot more people to a lot more horrible shit, and I unfortunately don’t know any way around that.
Maybe it would be good to know more about this leakage. Is it coming from isolated communities? I’ve personally not encountered any CSAM so far. The only thing I’ve seen was a transphobe, and they were banned quickly.
And about subjecting moderators to bad stuff: is that true? Why would anyone keep posting CSAM in a place where it constantly gets removed and their accounts get banned?
ATM it seems to me like these are isolated instances?
deleted by creator