This repository contains code that is commonly needed when working on Twitter data projects. You can use any module on its own, or start from your raw data (a dataset containing only tweet IDs) and process it sequentially, following the directories' numbering.
- If you only have the tweet IDs, follow the order given by the directory numbering. You can start from:
    - Collecting the tweets from their tweet IDs.
    - Grouping the collected tweets by the number of hashtags (optional, depending on the project).
    - Filtering the tweets' data from the tweet dump into an organized format (CSV).
    - Preprocessing the tweet text (e.g. removing emojis, user mentions, and URLs) before actually starting to work on your project.
    - Extracting the noun chunks from the processed tweet text (optional, depending on the project).
    - Extracting the named entities from the processed tweet text (optional, depending on the project).
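As an illustration of the preprocessing step (this is a minimal regex-based sketch, not the repository's actual implementation), removing URLs, user mentions, and emojis might look like:

```python
import re

def preprocess_tweet(text):
    """Strip URLs, user mentions, and emojis from raw tweet text."""
    text = re.sub(r"https?://\S+", "", text)  # remove URLs
    text = re.sub(r"@\w+", "", text)          # remove user mentions
    # Remove common emoji ranges (illustrative, not exhaustive)
    text = re.sub(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]", "", text)
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace
```

For example, `preprocess_tweet("Check this @user https://t.co/abc 🎉")` returns `"Check this"`.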
- All the code was written and tested on Python 3.5.2.
- The modules required to run the code are listed in the `requirements.txt` file.
- The modules can be installed with the command `pip install -r requirements.txt`. Make sure to use `sudo` in case you are installing the modules into the system Python (i.e. when not using `virtualenv`).
- Additional requirements for the `tweet_preprocessing` task:
    - The `tweet_preprocessing_part2` task requires the `wordseg` module, which can only be installed from source.
    - To install `wordseg` from source, go to the wordseg [homepage](https://github.com/jchook/wordseg) and clone or download the project.
    - Then `cd` into the cloned directory and run the command `python setup.py install`. This will install the `wordseg` module.
- Additional requirements for the `spacy` module (`extract_noun_phrases` task):
    - The spaCy module requires the `en` language model to be downloaded before use, so run `python -m spacy download en` before using it.
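Once the model is downloaded, noun-chunk and named-entity extraction might look like this sketch (the `en` shortcut matches the download command above; newer spaCy releases name the model `en_core_web_sm`):

```python
import spacy

# Load the English model downloaded via `python -m spacy download en`;
# on newer spaCy versions use spacy.load("en_core_web_sm") instead.
nlp = spacy.load("en")

def chunks_and_entities(text):
    """Return the noun chunks and named entities spaCy finds in `text`."""
    doc = nlp(text)
    noun_chunks = [chunk.text for chunk in doc.noun_chunks]
    entities = [(ent.text, ent.label_) for ent in doc.ents]
    return noun_chunks, entities
```

This is only a sketch; the repository's `extract_noun_phrases` task may structure its output differently.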
- Additional information about the tweet content could be extracted from the web using the tweet's noun phrases and named entities. That code will be added later.
- Please read the modules' corresponding README files before running the code.
- Please feel free to play around with the code and send a PR in case you want to add another Twitter-related module or improve the existing ones.