Generally speaking, the robots.txt file has always been an exclusionary document. Now it's also an inclusionary one: in one fell swoop, Google, Yahoo, MSN and Ask.com changed what had been the standard for over a decade. As per the recent post on Google's Official Blog, the search engines have just asked everyone to re-think the use of the robots.txt file.

What is the difference between exclusionary and inclusionary?

In short, exclusionary means that everything listed in the document is *not* to be spidered. Traditionally, that is what a robots.txt file is: a text file that tells the search engines not to go into certain areas of a website.

Well, lo and behold, now it's also an inclusionary document, meaning that we put things in the robots.txt that we WANT to be spidered (at least as far as the sitemaps go). The document is now inclusionary for the sitemaps and exclusionary for the URLs otherwise listed.
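To see both roles side by side, here is a minimal sketch of what such a file might look like (the domain and paths are made up for illustration):

```text
# Exclusionary part: keep spiders out of these areas
User-agent: *
Disallow: /admin/
Disallow: /cgi-bin/

# Inclusionary part: point the engines at what we WANT crawled
Sitemap: http://www.example.com/sitemap.xml
```

Note that the Sitemap line takes a full URL and sits outside any User-agent block, so it applies to all of the participating engines.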
The search engines should have made this a separate file instead of plugging it into the same one.