robots.txt is the name of a text file that tells search engines which URLs or directories on a site should not be crawled. This file contains rules that block individual URLs or entire directories. | support.google.com
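For illustration, a minimal robots.txt might look like the following (the directory and file names are hypothetical); each Disallow rule blocks the URL or directory it names for the listed user agent:

    # Block all crawlers from one directory and one specific URL
    User-agent: *
    Disallow: /private/
    Disallow: /drafts/note.html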
A Search Console property whose definition includes the protocol (http or https) and can include a path string, exactly as you entered them when you created the property. You can see the property URL in the property selector dropdown. | support.google.com
A Search Console property defined without the protocol (the http:// or https:// prefix) and without any path string (/some/path/). It can include subdomains. Examples: example.com, m.example.com, … | support.google.com
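As a rough illustration of how the two property types scope URLs (all URLs below are hypothetical):

    URL-prefix property: https://example.com/store/
      covers     https://example.com/store/shirts
      excludes   http://example.com/store/       (different protocol)
      excludes   https://m.example.com/store/    (different subdomain)

    Domain property: example.com
      covers     every protocol, subdomain, and path under example.com,
                 including all of the URLs above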
Learn specific details about the different robots.txt file rules and how Google interprets the robots.txt specification. | Google for Developers
A property is Search Console's term for a discrete thing that you can examine or manage in Search Console. A website property represents a website: that is, all pages that share the common domain or URL prefix that defines the property. | support.google.com
This document specifies and extends the "Robots Exclusion Protocol" method originally defined by Martijn Koster in 1994 for service owners to control how content served by their services may be accessed, if at all, by automatic clients known as crawlers. Specifically, it adds definition language for the protocol, instructions for handling errors, and instructions for caching. | www.rfc-editor.org
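As a practical sketch of a client honoring this protocol, Python's standard-library urllib.robotparser fetches and evaluates a robots.txt file; the site URL and user-agent name below are hypothetical:

    import urllib.robotparser

    # Fetch and parse the site's robots.txt (hypothetical URL).
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # True if the named user agent is allowed to crawl the given URL.
    print(rp.can_fetch("MyCrawler", "https://example.com/private/page.html"))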