Description:
I propose adding a Toxic Comment Classification feature that classifies comments as "toxic" or "not toxic".
Solution:
Preprocess the text (lowercase, remove stopwords and punctuation).
Use TF-IDF for feature extraction.
Train a simple classifier (e.g., logistic regression).
Evaluate using accuracy and F1-score.
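The steps above could be sketched roughly like this (a minimal illustration using scikit-learn; the toy comments and labels are made up for demonstration, and a real implementation would train on a labeled corpus):

```python
import string

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Tiny made-up dataset, purely illustrative (1 = toxic, 0 = not toxic).
comments = [
    "you are an idiot", "I hate you so much", "go away loser",
    "great point, thanks for sharing", "this was really helpful",
    "nice work on the project", "what a stupid take", "I love this idea",
]
labels = [1, 1, 1, 0, 0, 0, 1, 0]

def preprocess(text: str) -> str:
    """Lowercase and strip punctuation; English stopwords are
    dropped by the vectorizer below."""
    return text.lower().translate(str.maketrans("", "", string.punctuation))

cleaned = [preprocess(c) for c in comments]

# TF-IDF feature extraction feeding a simple linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])

X_train, X_test, y_train, y_test = train_test_split(
    cleaned, labels, test_size=0.25, random_state=42, stratify=labels)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("f1:", f1_score(y_test, pred))
```

On a dataset this small the scores are meaningless; the point is the shape of the pipeline: preprocess, vectorize with TF-IDF, fit a classifier, then report accuracy and F1.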
Alternatives:
Pretrained models (e.g., BERT): These are more complex and require more compute; starting with a simpler approach is easier for beginners.
Manual moderation: Time-consuming and not scalable compared to an automated model.
Kindly assign me this issue.