Robots.txt to exclude crawling "claim" pages
I'd like to reduce my crawl errors by preventing the pages associated with "claiming listings" from being crawled.
A typical "claim" URL looks like this:
In my robots.txt file, I have the usual:
What I'd like to add is a line that says
... to prevent those "claim" URLs from being crawled.
Is that correct? Or, do I need something further like:
... to be sure it gets EVERY claim page (there are several hundred of them that I want to disallow).
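To illustrate what I'm weighing (these paths are made up, standing in for my real ones), assuming the claim pages sit one level below each listing, the two candidate rules would look something like:

```
User-agent: *
# Option 1: only matches URLs whose path starts with /listings/claim/
Disallow: /listings/claim/

# Option 2: the * wildcard (honored by Google and Bing) matches any
# listing slug, so this would block e.g. /listings/some-business/claim
Disallow: /listings/*/claim
```

My understanding is that the wildcard version is needed if the claim URLs each contain a listing-specific slug, but I'd like confirmation.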
My concern is simple: I'm nervous about how I reference the /listings/ portion of the path, because if I code it incorrectly, the valid (live and published) listings won't be crawled, like this one:
Much obliged for your input!