:mod:`robotparser` --- Parser for robots.txt
=============================================

.. module:: robotparser
   :synopsis: Loads a robots.txt file and answers questions about
              fetchability of other URLs.
.. sectionauthor:: Skip Montanaro <skip@pobox.com>


.. index::
   single: WWW
   single: World Wide Web
   single: URL
   single: robots.txt

.. note::
   The :mod:`robotparser` module has been renamed :mod:`urllib.robotparser` in
   Python 3.0.  The :term:`2to3` tool will automatically adapt imports when
   converting your sources to 3.0.

This module provides a single class, :class:`RobotFileParser`, which answers
questions about whether or not a particular user agent can fetch a URL on the
Web site that published the :file:`robots.txt` file.  For more details on the
structure of :file:`robots.txt` files, see http://www.robotstxt.org/orig.html.


.. class:: RobotFileParser()

   This class provides a set of methods to read, parse and answer questions
   about a single :file:`robots.txt` file.


   .. method:: set_url(url)

      Sets the URL referring to a :file:`robots.txt` file.


   .. method:: read()

      Reads the :file:`robots.txt` URL and feeds it to the parser.


   .. method:: parse(lines)

      Parses the *lines* argument, a list of lines from a
      :file:`robots.txt` file.

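      Rules obtained by other means can also be handed to the parser
      directly; for example (the rules and URL below are made up for
      illustration)::

         >>> import robotparser
         >>> rp = robotparser.RobotFileParser()
         >>> rp.parse(["User-agent: *", "Disallow: /private/"])
         >>> rp.can_fetch("*", "http://example.com/private/page.html")
         False

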
   .. method:: can_fetch(useragent, url)

      Returns ``True`` if the *useragent* is allowed to fetch the *url*
      according to the rules contained in the parsed :file:`robots.txt`
      file.

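      For instance, with rules that name a specific crawler (the agent name
      and URLs here are hypothetical)::

         >>> import robotparser
         >>> rp = robotparser.RobotFileParser()
         >>> rp.parse(["User-agent: FigTree", "Disallow: /tmp"])
         >>> rp.can_fetch("FigTree", "http://example.com/tmp/index.html")
         False
         >>> rp.can_fetch("AnotherBot", "http://example.com/tmp/index.html")
         True

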
   .. method:: mtime()

      Returns the time the ``robots.txt`` file was last fetched.  This is
      useful for long-running web spiders that need to check for new
      ``robots.txt`` files periodically.

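      A sketch of that pattern, assuming a daily re-check (the URL is
      illustrative; :meth:`modified` is described below)::

         import robotparser
         import time

         rp = robotparser.RobotFileParser()
         rp.set_url("http://www.example.com/robots.txt")
         rp.read()
         rp.modified()       # record when the rules were fetched

         # ... later, in the crawler's main loop ...
         if time.time() - rp.mtime() > 24 * 60 * 60:
             rp.read()       # the rules may have changed; fetch again
             rp.modified()   # and record the new fetch time

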
   .. method:: modified()

      Sets the time the ``robots.txt`` file was last fetched to the current
      time.

The following example demonstrates basic use of the :class:`RobotFileParser`
class. ::

   >>> import robotparser
   >>> rp = robotparser.RobotFileParser()
   >>> rp.set_url("http://www.musi-cal.com/robots.txt")
   >>> rp.read()
   >>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")