A crawler (also called a collector, web crawler, or web spider) automatically scans the web for pages that match specific criteria and analyzes them. The crawler acts autonomously and repeats its assigned task on its own. Search engines like Google use these robots (searchbots) to maintain their index. New web pages are listed quickly because the existing index is constantly being updated.
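The behavior described above (fetch a page, extract its links, queue them, and repeat while skipping already-visited URLs) can be sketched with only the Python standard library. This is a minimal illustration, not a production crawler: the `example.com` URLs in the usage note are placeholders, and `fetch` is a caller-supplied callback (in practice it might wrap `urllib.request.urlopen`).

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

def crawl(start_url, fetch, max_pages=10):
    """Breadth-first crawl: fetch a page, queue newly discovered links,
    skip URLs already seen -- the automatic repetition described above.

    `fetch` is a hypothetical callback taking a URL and returning HTML,
    so the crawl logic stays separate from the network layer."""
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        html = fetch(url)  # e.g. urllib.request.urlopen(url).read().decode()
        visited.append(url)
        for link in extract_links(html, url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited
```

A real searchbot would add the pieces this sketch omits: respecting `robots.txt`, rate limiting, and feeding the fetched content into an indexer instead of discarding it.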

Last-modified: 2020-03-19 (Thu) 00:12:42