I say we poison the well. We create a subreddit called r/AIPoison. An automoderator will tell any user who requests it a randomly selected subreddit in which to post coherent, plausible nonsense. Since there is no public record of which subreddits are being poisoned, the junk can't be easily filtered out of training data.
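For what it's worth, here's roughly what the bot side could look like: a minimal sketch using PRAW (Reddit's Python API wrapper). The bot account, trigger phrase, and target subreddit names are all made up, and the private target list would live only on the bot operator's machine.

```python
# Minimal sketch, assuming PRAW and a registered Reddit script app.
# All names below (bot account, trigger, targets) are hypothetical.
import random
import praw

# Private list of target subreddits -- never posted anywhere public,
# which is what keeps the poisoned subreddits off the public record.
TARGETS = ["hypothetical_sub_1", "hypothetical_sub_2", "hypothetical_sub_3"]
TRIGGER = "!poison"

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="AIPoisonBot",
    password="YOUR_PASSWORD",
    user_agent="r/AIPoison assignment bot (sketch)",
)

# Watch new comments in r/AIPoison; anyone posting the trigger phrase
# gets a randomly selected target subreddit by private message, so the
# assignment itself never appears in a public thread.
for comment in reddit.subreddit("AIPoison").stream.comments(skip_existing=True):
    if TRIGGER in comment.body.lower() and comment.author is not None:
        target = random.choice(TARGETS)
        comment.author.message(
            subject="Your assignment",
            message=f"Post your coherent, plausible nonsense in r/{target}.",
        )
```

The private-message reply is the important design choice: even a per-user comment reply would leak which subreddits are in the target pool.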