This program parses recipes from common websites and displays them using plain-old HTML.
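At a high level, each parser fetches a recipe page and pulls the title, ingredients, and instructions out of its HTML. As an illustrative sketch only (none of these names come from this repo's code), many recipe sites embed a schema.org Recipe object as JSON-LD, which can be extracted with nothing but the standard library:

```python
# Illustrative sketch, NOT this project's actual code: extract a
# schema.org Recipe object embedded as JSON-LD in a page's HTML.
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True
            self._buf = []

    def handle_data(self, data):
        if self._in_jsonld:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "script" and self._in_jsonld:
            self._in_jsonld = False
            self.blocks.append("".join(self._buf))

def extract_recipe(html):
    """Return the first JSON-LD object whose @type is Recipe, or None."""
    extractor = JSONLDExtractor()
    extractor.feed(html)
    for block in extractor.blocks:
        try:
            data = json.loads(block)
        except ValueError:
            continue
        if isinstance(data, dict) and data.get("@type") == "Recipe":
            return data
    return None

sample = """<html><head><script type="application/ld+json">
{"@type": "Recipe", "name": "Pancakes",
 "recipeIngredient": ["1 cup flour", "1 egg"]}
</script></head><body></body></html>"""

recipe = extract_recipe(sample)
print(recipe["name"])  # Pancakes
```

In practice not every site exposes clean JSON-LD, which is why this project keeps a per-site parser in the parsers/ directory.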

You can use it here: https://www.plainoldrecipe.com/

Screenshots

Home Page

View the recipe in your browser

If you print the recipe, it displays with minimal formatting

Deploy

Run deploy.sh

Acknowledgements

Contributing

  1. If you want to add a new scraper, please feel free to make a PR. Your diff should change exactly two files: parsers/__init__.py, plus a new file containing your parser class in the parsers/ directory. Here is an example of what your commit might look like.

  2. If you want to fix a bug in an existing scraper, please feel free to do so, and include an example URL which you aim to fix. Your PR should modify exactly one file, which is the corresponding module in the parsers/ directory.

  3. If you want to make any other modification or refactor: please create an issue and ask prior to making your PR. Of course, you are welcome to fork, modify, and distribute this code with your changes in accordance with the LICENSE.

  4. I don't guarantee that I will keep this repo up to date, or that I will respond in any sort of timely fashion! Your best bet for any change is to keep PRs small and focused on the minimum changeset to add your scraper :)
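For item 1 above, a new scraper module might look roughly like the following hypothetical sketch. The real base class and method names are defined in parsers/__init__.py and will differ, so treat ExampleKitchen, title(), and ingredients() as placeholders, not this project's actual interface:

```python
# Hypothetical sketch of a new scraper module, e.g. parsers/examplekitchen.py.
# Every name here is an assumption; the real interface lives in
# parsers/__init__.py.
import re

class ExampleKitchen:
    """Parses recipe pages from the (fictional) examplekitchen.com."""
    DOMAIN = "examplekitchen.com"

    def __init__(self, html):
        self.html = html

    def title(self):
        # Assume the recipe title is the page's first <h1>.
        match = re.search(r"<h1[^>]*>(.*?)</h1>", self.html, re.S)
        return match.group(1).strip() if match else None

    def ingredients(self):
        # Assume one ingredient per <li> inside a class="ingredients" list.
        section = re.search(
            r'<ul class="ingredients">(.*?)</ul>', self.html, re.S)
        if not section:
            return []
        return re.findall(r"<li>(.*?)</li>", section.group(1), re.S)

page = """<h1>Simple Toast</h1>
<ul class="ingredients"><li>2 slices bread</li><li>butter</li></ul>"""
scraper = ExampleKitchen(page)
print(scraper.title())        # Simple Toast
print(scraper.ingredients())  # ['2 slices bread', 'butter']
```

Keeping each site's quirks inside one such module is what makes the two-file PR rule above work.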

Testing PRs Locally

git fetch origin pull/ID/head:BRANCHNAME

where ID is the pull request number and BRANCHNAME is the name of a new local branch to hold it; then check it out with git checkout BRANCHNAME.