Owning and operating a website is not an easy job. You have to monitor your website's performance at all times to keep it running well and to make sure potential customers can reach your site quickly and reliably.
Many search engines around the world use bots and spiders to crawl your site. Backlinks from different websites are nice to have, but it is not easy to be selective about who crawls you. Fortunately, on Windows servers you can use the web.config file, and on Linux Apache servers the .htaccess file, to control who can access your website.
Today, we will discuss how to block traffic that you may not want.
Step 1. Block Traffic by IP Addresses
You can block unwanted traffic by IP address using either web.config or .htaccess, depending on your server type. First, work out where your target customers are and where your server is located. A rule of thumb for managing a global website is to redirect visitor traffic based on geographical location, but that is not easy to set up when you are just starting out. Blocking individual IP addresses is a simpler alternative. Keep in mind, however, that blocking traffic by IP address can be expensive: checking every request against a long block list consumes CPU and memory on the server. Alternatively, you can block traffic by User-Agent.
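As a sketch of what this looks like, here is an IP block on a Windows server via web.config (using the IIS IP and Domain Restrictions feature, which must be installed on the server). The addresses shown are documentation placeholders, not real offenders:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- allowUnlisted="true" means everyone is allowed except the entries below -->
      <ipSecurity allowUnlisted="true">
        <!-- Block a single IP address (placeholder example) -->
        <add ipAddress="192.0.2.10" allowed="false" />
        <!-- Block an entire subnet (placeholder example) -->
        <add ipAddress="198.51.100.0" subnetMask="255.255.255.0" allowed="false" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```

The equivalent on a Linux Apache server (2.4 or later) goes in .htaccess, again with placeholder addresses:

```apache
# Allow everyone, then subtract the unwanted addresses
<RequireAll>
  Require all granted
  Require not ip 192.0.2.10
  Require not ip 198.51.100.0/24
</RequireAll>
```

Blocked visitors receive a 403 Forbidden response in both cases.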
Step 2. Block Traffic by User-Agent
What is a User-Agent? Every normal visitor's browser identifies itself on each request with a User-Agent string that reveals which browser and version they are using, and a well-behaved spider or bot crawling your site will identify itself the same way.
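This means you can match on that string and reject requests from crawlers you do not want. A minimal sketch for .htaccess, assuming mod_rewrite is enabled and using "BadBot" as a hypothetical bot name:

```apache
RewriteEngine On
# Match the hypothetical crawler name case-insensitively ([NC])
RewriteCond %{HTTP_USER_AGENT} BadBot [NC]
# Return 403 Forbidden ([F]) and stop processing ([L])
RewriteRule .* - [F,L]
```

On a Windows server, a comparable rule can be written with the IIS URL Rewrite module in web.config (again, "BadBot" is a placeholder):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="BlockBadBot" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <!-- Match the hypothetical crawler name in the User-Agent header -->
            <add input="{HTTP_USER_AGENT}" pattern="BadBot" />
          </conditions>
          <action type="CustomResponse" statusCode="403"
                  statusReason="Forbidden" statusDescription="Bot blocked" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

String matching on a header is much cheaper than maintaining a large IP block list, though be aware that a malicious bot can fake its User-Agent, so this only stops crawlers that identify themselves honestly.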