Robots.txt to exclude "claim" pages from crawling
I'd like to reduce my crawl errors by preventing the pages associated with "claiming listings" from being crawled.
A typical "claim" url looks like this:
http://www.mysite.com/listings/claim/6795/
In my robots.txt file, I have the usual:
---------
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Sitemap: http://www.mysite.com/sitemap.xml.gz
----------
What I'd like to add is a line that says
Disallow: /listings/claim/
... to prevent those "claim" URLs from being crawled.
Is that correct? Or do I need something more explicit, like:
Disallow: /listings/claim/*
... to be sure it catches EVERY claim URL (there are several hundred of them that I want to disallow)?
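Assuming the plain version is enough, the full file would end up looking something like this (just my existing file plus the new line):
---------
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /listings/claim/
Sitemap: http://www.mysite.com/sitemap.xml.gz
----------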
My concern is simple: I'm nervous about how I include the /listings/ portion of the path, because if I get it wrong, the valid (live and published) listings won't be crawled, like this one:
http://www.mysite.com/listings/JoesCrabShack
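For example, I'm assuming that something too broad, like
Disallow: /listings/
... would block those live listings as well, which is exactly what I want to avoid.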
Much obliged for your input!