Search engines basically perform these steps: crawling, indexing, processing, calculating relevancy, and retrieving results.
First, a piece of software called a spider (also known as a crawler) visits the site, reads its content, follows the linked pages, and saves the results. The crawler repeats this on a regular basis, but not every day, and it does not return on any fixed schedule. However, if you update your site regularly, the crawler is more likely to come back sooner.
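The crawling step described above can be sketched as a breadth-first traversal. This is a minimal illustration, not a real crawler: instead of fetching over the network, it walks a hypothetical in-memory "site" mapping each URL to its text and outgoing links.

```python
from collections import deque

# Hypothetical in-memory "site": URL -> (page text, outgoing links).
SITE = {
    "/": ("Home page about search engines", ["/crawling", "/indexing"]),
    "/crawling": ("Spiders crawl linked pages", ["/"]),
    "/indexing": ("Pages are stored in an index", ["/ranking"]),
    "/ranking": ("Relevant pages rank higher", []),
}

def crawl(start):
    """Visit a page, save its text, and follow its links (breadth-first)."""
    seen, results = set(), {}
    queue = deque([start])
    while queue:
        url = queue.popleft()
        if url in seen or url not in SITE:
            continue
        seen.add(url)
        text, links = SITE[url]
        results[url] = text   # save the page's content
        queue.extend(links)   # follow the linked pages
    return results

pages = crawl("/")
print(sorted(pages))
```

Note how `/ranking` is reached even though nothing links to it from the start page directly: the crawler discovers it by following links from `/indexing`, which is exactly how spiders find new pages in practice.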
In the second step, the spider indexes every page according to its content. The index works like a catalog, a very large one. A cached copy of each page is stored in the index and updated the next time the crawler visits the site.
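One common way to organize such a catalog is an inverted index, which maps each word to the set of pages that contain it. A minimal sketch, using a hypothetical `pages` dictionary standing in for crawled results:

```python
# Hypothetical crawl results: URL -> page text.
pages = {
    "/crawling": "spiders crawl linked pages",
    "/indexing": "pages are stored in an index",
}

# Build an inverted index: word -> set of URLs containing that word.
index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

print(index["pages"])  # both pages contain the word "pages"
```

With this structure, finding every page that mentions a keyword is a single dictionary lookup rather than a scan of every cached page.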
In the third step, when a user types relevant keywords into the search box, the search engine retrieves matching pages from the index. The pages are ranked according to their relevancy to the keywords, and the most relevant (highest-ranked) pages are displayed.
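The retrieval and ranking step can be illustrated with a crude relevancy score: count how many of the query's keywords each page matches, then sort by that count. Real engines use far more sophisticated signals; this sketch, built on a hypothetical inverted index, only shows the principle.

```python
# Hypothetical inverted index: word -> set of URLs containing it.
index = {
    "spiders": {"/crawling"},
    "pages": {"/crawling", "/indexing"},
    "index": {"/indexing"},
}

def search(query):
    """Score pages by number of matched keywords; most relevant first."""
    scores = {}
    for word in query.lower().split():
        for url in index.get(word, set()):
            scores[url] = scores.get(url, 0) + 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("pages index"))  # "/indexing" matches both keywords, so it ranks first
```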