The upcoming build v1.1.0b (2023.08.29) adds a new option in Settings under a section called Filtering.

The option is called NSFW Extended.

Enabling it runs an offline CoreML model over all thumbnails and only presents media classified as SFW. The model is based on the dataset used in https://github.com/bhky/opennsfw2
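
For context, here is a minimal sketch of what on-device screening like this can look like using Vision and Core ML. The model class name (`OpenNSFW`), the `"NSFW"`/`"SFW"` label names, and the threshold are assumptions for illustration, not the app's actual implementation:

```swift
import Vision
import CoreML
import UIKit

// Minimal sketch of on-device NSFW screening with Vision + Core ML.
// "OpenNSFW" is a placeholder name for the compiled Core ML model class.
func isSafeForWork(_ thumbnail: UIImage, threshold: Float = 0.5) throws -> Bool {
    guard let cgImage = thumbnail.cgImage else { return false }

    // Wrap the compiled model so Vision can drive preprocessing and inference.
    let coreMLModel = try OpenNSFW(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    var nsfwConfidence: Float = 0
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Assumes a classifier that emits "NSFW" / "SFW" labels.
        if let results = request.results as? [VNClassificationObservation],
           let nsfw = results.first(where: { $0.identifier == "NSFW" }) {
            nsfwConfidence = nsfw.confidence
        }
    }
    request.imageCropAndScaleOption = .centerCrop

    // Everything runs locally; no image data leaves the device.
    try VNImageRequestHandler(cgImage: cgImage).perform([request])
    return nsfwConfidence < threshold
}
```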

I will always look for offline solutions first; the goal is that data never leaves the device when incorporating features like this.

I will stay vigilant for better and more up-to-date solutions. Since this is a fairly new integration, I will also spend time optimizing it, but inference time is negligible (nearly instant), so the experience does not seem to be affected.

I have made a new open-source package called ModerationKit, where I will incorporate solutions of any kind to prevent harmful media from entering a user’s feed.
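
As an illustration only (the actual ModerationKit API may look different), a pluggable moderation layer along these lines would let additional filters be added later without touching the feed code:

```swift
import UIKit

// Illustrative only: a pluggable moderation interface in the spirit of
// ModerationKit. The real package's API may differ.
protocol MediaModerator {
    /// Returns true when the media should be hidden from the feed.
    func shouldFilter(thumbnail: UIImage) throws -> Bool
}

// Multiple moderators can be chained; media is hidden if any of them flags it.
struct CompositeModerator: MediaModerator {
    let moderators: [MediaModerator]

    func shouldFilter(thumbnail: UIImage) throws -> Bool {
        for moderator in moderators {
            if try moderator.shouldFilter(thumbnail: thumbnail) {
                return true
            }
        }
        return false
    }
}
```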

Two notes to be aware of:

  1. There is an issue with config restoration across app launches that will be resolved later this week. Until then, you may need to re-enable this filter on each launch.

  2. Computer Vision is never a 100% guarantee.