A powerful and reliable Python tool to automate the export of all your saved Instapaper bookmarks into various formats, giving you full ownership of your data.
- Scrapes all bookmarks from your Instapaper account.
- Supports scraping from specific folders.
- Exports data to CSV, JSON, or a SQLite database.
- Securely stores your session for future runs.
- Modern, modular, and tested architecture.
- Python 3.9+
This package is available on PyPI and can be installed with pip:
```bash
pip install instapaper-scraper
```

Run the tool from the command line, specifying your desired output format:
```bash
# Scrape and export to the default CSV format
instapaper-scraper

# Scrape and export to JSON
instapaper-scraper --format json

# Scrape and export to a SQLite database with a custom name
instapaper-scraper --format sqlite --output my_articles.db
```

The script authenticates using one of the following methods, in order of priority:
- Command-line Arguments: Provide your username and password directly when running the script:

  ```bash
  instapaper-scraper --username your_username --password your_password
  ```

- Session Files (`.session_key`, `.instapaper_session`): The script attempts to load these files in the following order:

  a. The path specified by the `--session-file` or `--key-file` arguments.
  b. Files in the current working directory (e.g., `./.session_key`).
  c. Files in the user's configuration directory (`~/.config/instapaper-scraper/`).

  After the first successful login, the script creates an encrypted `.instapaper_session` file and a `.session_key` file to reuse your session securely.

- Interactive Prompt: If no other method is available, the script will prompt you for your username and password.

Note on Security: Your session file (`.instapaper_session`) and the encryption key (`.session_key`) are stored with secure permissions (read/write for the owner only) to protect your credentials.
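The behavior described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual authentication code: the `save_session` helper and the use of `cryptography.fernet.Fernet` are assumptions that merely mirror the documented key file, session file, and owner-only permissions.

```python
import os
from pathlib import Path

from cryptography.fernet import Fernet  # assumed: symmetric encryption behind .session_key


def save_session(session_data: bytes, directory: Path) -> None:
    """Illustrative sketch: encrypt session data and store it with owner-only permissions."""
    key_file = directory / ".session_key"
    session_file = directory / ".instapaper_session"

    # Reuse the existing key if present, otherwise generate a new one.
    if key_file.exists():
        key = key_file.read_bytes()
    else:
        key = Fernet.generate_key()
        key_file.write_bytes(key)

    # Encrypt the serialized session (e.g. cookies) before writing it to disk.
    session_file.write_bytes(Fernet(key).encrypt(session_data))

    # Restrict both files to read/write for the owner only (0o600).
    for path in (key_file, session_file):
        os.chmod(path, 0o600)
```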
You can define and quickly access your Instapaper folders and set default output fields using a `config.toml` file. The scraper looks for this file in the following locations (in order of precedence):
- The path specified by the `--config-path` argument.
- `config.toml` in the current working directory.
- `~/.config/instapaper-scraper/config.toml`
Here is an example of `config.toml`:

```toml
# Default output filename for non-folder mode
output_filename = "home-articles.csv"

# Optional fields to include in the output.
# These can be overridden by command-line flags.
[fields]
read_url = false
article_preview = false

[[folders]]
key = "ml"
id = "1234567"
slug = "machine-learning"
output_filename = "ml-articles.json"

[[folders]]
key = "python"
id = "7654321"
slug = "python-programming"
output_filename = "python-articles.db"
```

- `output_filename` (top-level): The default output filename to use when not in folder mode.
- `[fields]`: A section to control which optional data fields are included in the output.
  - `read_url`: Set to `true` to include the Instapaper read URL for each article.
  - `article_preview`: Set to `true` to include the article's text preview.
- `[[folders]]`: Each `[[folders]]` block defines a specific folder.
  - `key`: A short alias for the folder.
  - `id`: The folder ID from the Instapaper URL.
  - `slug`: The human-readable part of the folder URL.
  - `output_filename` (folder-specific): A preset output filename for scraped articles from this specific folder.
When a `config.toml` file is present and no `--folder` argument is provided, the scraper will prompt you to select a folder. You can also specify a folder directly using the `--folder` argument with its key, ID, or slug. Use `--folder=none` to explicitly disable folder mode and scrape all articles.
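As a rough illustration of the precedence and lookup rules above, a config loader might look something like this. The function names and structure are hypothetical, not the scraper's actual internals, and `tomllib` requires Python 3.11+ (older interpreters could use the `tomli` backport):

```python
import tomllib
from pathlib import Path


def find_config(explicit_path: str | None = None) -> Path | None:
    """Return the first config.toml found, following the documented precedence."""
    candidates = []
    if explicit_path:  # 1. --config-path argument
        candidates.append(Path(explicit_path))
    candidates.append(Path.cwd() / "config.toml")  # 2. current working directory
    candidates.append(Path.home() / ".config/instapaper-scraper/config.toml")  # 3. user config dir
    return next((p for p in candidates if p.is_file()), None)


def resolve_folder(config: dict, value: str) -> dict | None:
    """Match a --folder value against each [[folders]] entry by key, id, or slug."""
    for folder in config.get("folders", []):
        if value in (folder.get("key"), folder.get("id"), folder.get("slug")):
            return folder
    return None


config_path = find_config()
if config_path:
    with config_path.open("rb") as fh:
        config = tomllib.load(fh)
    print(resolve_folder(config, "ml"))  # -> the machine-learning folder from the example above
```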
| Argument | Description |
|---|---|
| `--config-path <path>` | Path to the configuration file. Searches `~/.config/instapaper-scraper/config.toml` and `config.toml` in the current directory by default. |
| `--folder <value>` | Specify a folder by key, ID, or slug from your `config.toml`. Requires a configuration file to be loaded. Use `none` to explicitly disable folder mode. If a configuration file is not found or fails to load and this option is used (and not set to `none`), the program will exit. |
| `--format <format>` | Output format (`csv`, `json`, `sqlite`). Default: `csv`. |
| `--output <filename>` | Specify a custom output filename. The file extension is automatically corrected to match the selected format. |
| `--username <user>` | Your Instapaper account username. |
| `--password <pass>` | Your Instapaper account password. |
| `--[no-]read-url` | Includes the Instapaper read URL. (The old flag `--add-instapaper-url` is deprecated but still supported.) Can be set in `config.toml`; the command-line flag overrides the config. |
| `--[no-]article-preview` | Includes the article preview text. (The old flag `--add-article-preview` is deprecated but still supported.) Can be set in `config.toml`; the command-line flag overrides the config. |
You can control the output format using the `--format` argument. The supported formats are:

- `csv` (default): Exports data to `output/bookmarks.csv`.
- `json`: Exports data to `output/bookmarks.json`.
- `sqlite`: Exports data to an `articles` table in `output/bookmarks.db`.
If the `--format` flag is omitted, the script defaults to `csv`.

When using `--output <filename>`, the file extension is automatically corrected to match the chosen format. For example, `instapaper-scraper --format json --output my_articles.txt` will create `my_articles.json`.
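The extension correction can be thought of as a simple suffix swap; a minimal sketch (not the tool's actual code, and `corrected_output_path` is a hypothetical helper):

```python
from pathlib import Path

# Map each supported format to its canonical file extension.
EXTENSIONS = {"csv": ".csv", "json": ".json", "sqlite": ".db"}


def corrected_output_path(filename: str, fmt: str) -> Path:
    """Force the filename's extension to match the chosen output format."""
    return Path(filename).with_suffix(EXTENSIONS[fmt])


print(corrected_output_path("my_articles.txt", "json"))  # my_articles.json
```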
The output data includes a unique `id` for each article. You can use this ID to construct a URL to the article's reader view: `https://www.instapaper.com/read/<article_id>`.

For convenience, you can use the `--read-url` flag to have the script include a full, clickable URL in the output:

```bash
instapaper-scraper --read-url
```

This adds an `instapaper_url` field to each article in the JSON output and an `instapaper_url` column in the CSV and SQLite outputs. The original `id` field is preserved.
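If you prefer to build the reader-view URL yourself from the exported `id`, it is a one-line formatting step (the helper name is illustrative):

```python
def instapaper_read_url(article_id: str) -> str:
    """Build the reader-view URL from an exported article id."""
    return f"https://www.instapaper.com/read/{article_id}"


print(instapaper_read_url("999901234"))  # https://www.instapaper.com/read/999901234
```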
The tool is designed with a modular architecture for reliability and maintainability.
- Authentication: The `InstapaperAuthenticator` handles secure login and session management.
- Scraping: The `InstapaperClient` iterates through all pages of your bookmarks, fetching the metadata for each article with robust error handling and retries. Shared constants, such as the Instapaper base URL, are managed in `src/instapaper_scraper/constants.py`.
- Data Collection: All fetched articles are aggregated into a single list.
- Export: Finally, the collected data is written to a file in your chosen format (`.csv`, `.json`, or `.db`).
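Conceptually, the flow looks like the sketch below. The class names come from the project, but the module paths, method names, and call signatures here are assumptions for illustration only:

```python
# Hypothetical sketch of the pipeline; imports and method names are assumptions.
from instapaper_scraper.auth import InstapaperAuthenticator
from instapaper_scraper.client import InstapaperClient
from instapaper_scraper.exporters import export_csv

authenticator = InstapaperAuthenticator(username="me", password="secret")
session = authenticator.login()               # 1. Authentication (or session reuse)

client = InstapaperClient(session)
articles = []
for page in client.iter_pages():              # 2. Scraping, page by page with retries
    articles.extend(page)                     # 3. Data collection into a single list

export_csv(articles, "output/bookmarks.csv")  # 4. Export in the chosen format
```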
"id","instapaper_url","title","url","article_preview"
"999901234","https://www.instapaper.com/read/999901234","Article 1","https://www.example.com/page-1/","This is a preview of article 1."
"999002345","https://www.instapaper.com/read/999002345","Article 2","https://www.example.com/page-2/","This is a preview of article 2."[
{
"id": "999901234",
"title": "Article 1",
"url": "https://www.example.com/page-1/",
"instapaper_url": "https://www.instapaper.com/read/999901234",
"article_preview": "This is a preview of article 1."
},
{
"id": "999002345",
"title": "Article 2",
"url": "https://www.example.com/page-2/",
"instapaper_url": "https://www.instapaper.com/read/999002345",
"article_preview": "This is a preview of article 2."
}
]
```

A SQLite database file is created with an `articles` table. The table includes `id`, `title`, and `url` columns. If the `--read-url` flag (or the deprecated `--add-instapaper-url` flag) is used, an `instapaper_url` column is also included. This feature is fully backward-compatible and automatically adapts to the installed SQLite version, using an efficient generated column on modern versions (3.31.0+) and a fallback for older versions.
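A rough sketch of what the version-dependent table creation might look like; the exact SQL is an assumption, only the generated-column-versus-fallback behavior is documented above:

```python
import sqlite3

conn = sqlite3.connect("output/bookmarks.db")

# SQLite gained generated columns in 3.31.0.
if sqlite3.sqlite_version_info >= (3, 31, 0):
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS articles (
            id TEXT PRIMARY KEY,
            title TEXT,
            url TEXT,
            instapaper_url TEXT GENERATED ALWAYS AS
                ('https://www.instapaper.com/read/' || id) VIRTUAL
        )
        """
    )
else:
    # Fallback for older SQLite: store the pre-computed URL in a plain column.
    conn.execute(
        """
        CREATE TABLE IF NOT EXISTS articles (
            id TEXT PRIMARY KEY,
            title TEXT,
            url TEXT,
            instapaper_url TEXT
        )
        """
    )
```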
- 🐛 Bug Reports: For any bugs or unexpected behavior, please open an issue on GitHub.
- 💬 Questions & General Discussion: For questions, feature requests, or general discussion, please use our GitHub Discussions.
Instapaper Scraper is a free and open-source project that requires significant time and effort to maintain and improve. If you find this tool useful, please consider supporting its development. Your contribution helps ensure the project stays healthy, active, and continuously updated.
- Sponsor on GitHub: The best way to support the project with recurring monthly donations. Tiers with special rewards like priority support are available!
- Buy Me a Coffee: Perfect for a one-time thank you.
Contributions are welcome! Whether it's a bug fix, a new feature, or documentation improvements, please feel free to open a pull request.
Please read the Contribution Guidelines before you start.
This project uses pytest for testing, ruff for code formatting and linting, and mypy for static type checking. A Makefile is provided to simplify common development tasks.
The most common commands are:
- `make install`: Installs development dependencies.
- `make format`: Formats the entire codebase.
- `make check`: Runs the linter, type checker, and test suite.
- `make test`: Runs the test suite.
- `make build`: Builds the distributable packages.

Run `make help` to see all available commands.
To install the development dependencies:
```bash
pip install -e .[dev]
```

To set up the pre-commit hooks:

```bash
pre-commit install
```

To run the scraper directly without installing the package:

```bash
python -m src.instapaper_scraper.cli
```

To run the tests, execute the following command from the project root (or use `make test`):

```bash
pytest
```

To check test coverage (or use `make test-cov`):

```bash
pytest --cov=src/instapaper_scraper --cov-report=term-missing
```

You can use the Makefile for convenience (e.g., `make format`, `make lint`).

To format the code with ruff:

```bash
ruff format .
```

To check for linting errors with ruff:

```bash
ruff check .
```

To run static type checking with mypy:

```bash
mypy src
```

To run license checks:

```bash
licensecheck --zero
```

This script requires valid Instapaper credentials. Use it responsibly and in accordance with Instapaper's Terms of Service.
This project is licensed under the terms of the GNU General Public License v3.0. See the LICENSE file for the full license text.
Made with contrib.rocks.