• gayhitler420@lemm.ee · 46 points · 9 months ago

    robots.txt isn’t a basic social contract; it’s a file intended to save web crawlers precious resources.

    • umbraroze@kbin.social · 32 points · 9 months ago

      Yup. The robots.txt file isn’t only meant to block robots from accessing the site; it’s also meant to keep bots away from resources that aren’t interesting to human readers, even indirectly.

      For example, MediaWiki installations are pretty clever about this: by default, /w/ is blocked and /wiki/ is encouraged, because nobody wants technical pages and wiki histories in search results; they only want the current versions of the pages.
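
      To make the split concrete, here’s a minimal sketch in Python using the standard library’s urllib.robotparser, with Disallow/Allow rules approximating the MediaWiki layout described above (the rule list and the example.org URLs are illustrative, not taken from any particular wiki):

      ```python
      # Minimal sketch: how a compliant crawler evaluates MediaWiki-style robots.txt rules.
      # The rule list below is an illustrative approximation, not any real wiki's file.
      from urllib.robotparser import RobotFileParser

      rules = [
          "User-agent: *",
          "Disallow: /w/",   # index.php, edit forms, page histories, etc.
          "Allow: /wiki/",   # the current, human-readable article pages
      ]

      rp = RobotFileParser()
      rp.modified()   # mark the rules as loaded so can_fetch() will evaluate them
      rp.parse(rules)

      # A page-history URL under /w/ is off limits to a compliant crawler...
      print(rp.can_fetch("*", "https://example.org/w/index.php?title=Foo&action=history"))  # False
      # ...while the current version of the page under /wiki/ is fair game.
      print(rp.can_fetch("*", "https://example.org/wiki/Foo"))  # True
      ```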

      Fun tidbit: in the late 1990s, there was a real epidemic of spammers scraping web pages for email addresses. Some people developed wpoison.cgi, a script whose sole purpose was to generate garbage web pages full of bogus email addresses. Real search engines ignored those pages, thanks to robots.txt. Guess what the spam bots did?
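
      The trick is simple: serve an endless supply of worthless pages that only a rule-ignoring bot would ever reach. A rough sketch of that general idea in Python (this is not the actual wpoison.cgi; the function names and the reserved example.com domains are made up for illustration):

      ```python
      # Sketch of the general "spider trap" idea: a throwaway HTML page stuffed with
      # bogus email addresses plus a link leading to yet more of the same. Placed behind
      # a robots.txt Disallow rule, compliant crawlers never see it; harvesters that
      # ignore robots.txt fill their lists with junk.
      import random
      import string

      def fake_email() -> str:
          user = "".join(random.choices(string.ascii_lowercase, k=8))
          host = "".join(random.choices(string.ascii_lowercase, k=6))
          return f"{user}@{host}.example.com"   # reserved example domain, never deliverable

      def poison_page(n_addresses: int = 50) -> str:
          items = []
          for _ in range(n_addresses):
              addr = fake_email()
              items.append(f"<li><a href='mailto:{addr}'>{addr}</a></li>")
          # The trailing link points back into the trap so link-following scrapers keep digging.
          return "<html><body><ul>" + "\n".join(items) + "</ul><a href='trap.html'>more</a></body></html>"

      if __name__ == "__main__":
          print(poison_page(5))
      ```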

      Do the AI bros really want to go there? Are they asking for model collapse?

      • gayhitler420@lemm.ee · 14 points · 9 months ago

        Of course they want model collapse. Literally no American tech company has been about reliably and sustainably supplying a good or service, or about stewarding some public good.

        They’re doing the VC -> juice the stock -> gut the resources cycle. Nobody cares about the model.

      • Gamma@beehaw.org · 12 points · 9 months ago

        Considering Reddit has decided to start selling user content for training, yeah, I guess they want their models to collapse. There’s so much bot-generated content nowadays.