What do you do with a benchmark server? You use it to serve minimal content to other servers around the world and see how responsive your Web server is. If your Web server is located in New York City and you fetch the page from Los Angeles, what is contributing to the lag times between fetch and render?
At a minimum your fetch machine’s request must be handed off to at least one router. But to get to that router you may have to go through a modem or multiplexer. Home users go through modems. Business users who have T-1 or faster lines go through multiplexers.
The Lifetime Journey of An Internet Packet
From your physical location every IP request {packet} must travel to a nearby node managed by your Internet Service Provider. That node could be choked with traffic from several hundred, perhaps even several thousand people around you. This is the so-called “Last Mile” that connects every Internet user to a major trunk line.
Once your request gets past that last mile node it zooms across your ISP’s network to their nearest data center {this could entail several hops}. From there your request goes to an Internet Exchange Point (IEP or IXP). There used to be only a very small number of IXPs in North America but now there are many.
MAE West, MAE Central, and MAE East are the oldest and most well-known North American IXPs. I believe the MAE exchanges have all been shut down for a few years.
Major ISPs “peer” with each other in these exchanges. I guess you could loosely describe them as huge super multiplexers. Multiplexers are paired with demultiplexers: a multiplexer combines the signals from many lines onto one line, and a demultiplexer splits a combined signal back out across multiple lines. Multiplexers can be large or small.
In any event, our fetch request will eventually end up in one or more IXPs, where our Internet Service Provider passes the request on to other providers. They figure out which one can get the request as close to the Web server as possible. It’s not uncommon for a fetch request to pass through several IXPs.
Eventually the request leaves the last IXP and is delivered to the major service provider that handles the trunk leading to wherever the server is located. This is usually a data center, and because data centers require huge I/O capacity their Internet connections are not usually grouped with small customer “last mile” connections.
Pinging In the Blink of An Eye
So the question everyone needs to ask is, “How long does it take the fetch request to reach the Web server?”
In the old days you could use a ping request to test connectivity. You could also use a traceroute (or tracert depending on your O/S) to look at how long it takes a basic request to reach a server. The problem with pinging and tracing routes, however, is that you’re not sending a full-featured request to the Web server. When a browser asks for a Web page it’s sending a more sophisticated request and expecting a much more robust reply. Also, the hardware responds to the ping but the Web Server Application responds to the GET request (the basic page fetch).
When your browser requests anything from a Web server a whole lot of handshaking goes on, where the browser and server pass information back and forth, first establishing a TCP connection, then determining whether encryption is involved, whether someone needs to authenticate, and so on. The server sends meta information (an “envelope” of headers) with each response, and the Web document itself also carries meta information (which varies depending on server configuration, and sometimes on other applications).
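To put rough numbers on each stage of that handshaking, you can time the phases of a single fetch yourself. The sketch below (Python, standard library only) times DNS resolution, the TCP connect, the TLS handshake, and the wait for the first response byte; the hostname is a placeholder, so swap in your own benchmark host.

```python
import socket
import ssl
import time

# Placeholder hostname -- substitute your own benchmark subdomain.
HOST = "benchmark.example.com"
PORT = 443

timings = {}

t0 = time.perf_counter()
ip = socket.gethostbyname(HOST)                            # DNS resolution
timings["dns"] = time.perf_counter() - t0

t0 = time.perf_counter()
sock = socket.create_connection((ip, PORT), timeout=10)    # TCP handshake
timings["tcp connect"] = time.perf_counter() - t0

t0 = time.perf_counter()
ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname=HOST)          # TLS handshake
timings["tls handshake"] = time.perf_counter() - t0

t0 = time.perf_counter()
tls.sendall(f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode("ascii"))
tls.recv(1)                          # block until the first response byte arrives
timings["first byte"] = time.perf_counter() - t0
tls.close()

for phase, seconds in timings.items():
    print(f"{phase:>13}: {seconds * 1000:7.1f} ms")
```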
How long does all this stuff take? This is why you may want to invest a little time in creating a Benchmark Server. You don’t need to do anything other than set up a subdomain (or a dedicated folder on your root host) and then create a special document.
The benchmark document should contain as little data as possible. Don’t embed any Javascript, don’t use any CSS, and don’t embed any pictures. You just want to pass a simple, bare-bones HTML document.
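For reference, here is a sketch of what such a bare-bones document might look like, written out by a tiny script (the filename and title are placeholders):

```python
# Writes a bare-bones benchmark document: no CSS, no Javascript, no images.
MINIMAL_PAGE = """<!DOCTYPE html>
<html>
<head><title>Benchmark</title></head>
<body><p>Benchmark page.</p></body>
</html>
"""

with open("benchmark.html", "w", encoding="ascii") as f:
    f.write(MINIMAL_PAGE)
```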
The Root Benchmark Document Establishes A Base Line
Base lines are tricky things when you create a benchmark. Normally people just assume they only need “one good capture” to establish a benchmark. In reality you should capture multiple samples (data points) to find an average response time. It would not hurt to compute a standard deviation. The SD establishes your tolerances. If a future benchmark test falls outside those tolerances then you know there is a problem somewhere along the fetch path.
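As a rough sketch of that arithmetic, assuming a handful of hypothetical response-time samples, the baseline and tolerance band could be computed like this:

```python
import statistics

# Hypothetical response-time samples in milliseconds, captured over several runs.
samples_ms = [212.0, 198.5, 227.3, 205.1, 219.8, 201.4, 233.0, 208.9]

mean = statistics.mean(samples_ms)
sd = statistics.stdev(samples_ms)

# Treat mean +/- 2 standard deviations as the tolerance band (a common, if arbitrary, choice).
lower, upper = mean - 2 * sd, mean + 2 * sd
print(f"baseline: {mean:.1f} ms, SD {sd:.1f} ms, tolerance {lower:.1f}-{upper:.1f} ms")

new_sample_ms = 312.0  # a later benchmark run
if not (lower <= new_sample_ms <= upper):
    print("outside tolerance -- something along the fetch path has changed")
```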
It works best to establish multiple benchmarks from around the world, depending on where your target audience is. Many non-American companies still prefer to host their sites in the United States. However, the popularity of Content Delivery Networks and major cloud hosting networks probably means most major Websites around the world are mirrored closer to where they need to be retrieved.
If you do business on two continents it’s probably a good idea to establish benchmarks from two or three access points per continent. Physical distance between access point and Web server data center provides a rough approximation of how much complexity exists between the access point and the server.
Use Several Benchmark Documents to Test Element Response Times
I considered adding several documents to the Reflective Dynamics benchmark server but I am afraid someone would be tempted to unleash a crawler against it {if its location is ever leaked}. You SEOs and your crawlers. That’s the wrong way to measure a Website, but I will lecture no more about crawlers in this article.
You can add degrees of complexity by creating several documents on your benchmark server. For example, add a call to the stylesheet in the HEAD section of document two. Add a call to the analytics script in the HEAD section of document three. Combine these calls in document four.
If that sounds tedious, it is. Test what you need to test. You don’t have to keep everything on the benchmark server all the time. Only your developers really need to use the server. But an SEO who wants to audit site speed should ask for a benchmark server. Your client or employer may not provide one but it would be good to ask for it.
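If you do go down that road, a small script can generate the variants for you. The stylesheet and analytics paths below are placeholders; adjust them to whatever your site actually uses:

```python
# Generates benchmark documents of increasing complexity, as described above.
CSS_LINK = '<link rel="stylesheet" href="/styles.css">'
ANALYTICS_SCRIPT = '<script src="/analytics.js"></script>'

VARIANTS = {
    "benchmark-1.html": [],                            # bare document
    "benchmark-2.html": [CSS_LINK],                    # + stylesheet
    "benchmark-3.html": [ANALYTICS_SCRIPT],            # + analytics script
    "benchmark-4.html": [CSS_LINK, ANALYTICS_SCRIPT],  # both combined
}

for filename, head_extras in VARIANTS.items():
    head = "<title>Benchmark</title>" + "".join(head_extras)
    page = f"<!DOCTYPE html>\n<html>\n<head>{head}</head>\n<body><p>Benchmark page.</p></body>\n</html>\n"
    with open(filename, "w", encoding="ascii") as f:
        f.write(page)
```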
I would only request a benchmark server if I were seeing page response times above 10 seconds. In the past I would have only used benchmarks for sites with 30+ second response times.
What Affects Your Page Speed?
Without being exhaustive, everything in this list can affect perceived page speed on any given fetch (any combination of these factors may be in play at once):
- Available processing power of the requesting device
- Available memory on the requesting device (does it swap to storage?)
- Which browser is being used (Chrome is no longer the fastest; cf. http://tinyurl.com/hmylnhz)
- Presence of malware on the requesting device
- Device to router connection quality
- State of the local router
- Last mile connection quality
- Local ISP network status
- User’s DNS resolver
- All major peering handoffs (IXP transfers)
- Data center local ISP network status
- Data center network status
- Available server connections
- Available server processing power
- Available server memory (is it swapping to storage?)
So far we haven’t even begun loading a Web document. You could have 1 million of these pathways every month, each constructed dynamically as needed. You could have 10 million of them, or 100 million. Every visitor to your Website represents a uniquely constructed data pathway that contains most if not all of the performance vulnerabilities in the list above. These are just machine-to-machine performance points.
You can have a host of problems on the fetching machine but we’ll ignore that list of possibilities for the sake of convenience.
At the Web server level the following things could be affecting response time:
- Number of active connections
- Number of active processes
- Number of database operations (“number” is not the right word)
- Quality of server configuration
- Quality of database configuration (including “garbage” and “clutter”)
- Number of policies and other meta configuration items (per page)
- Number of resources per page
- Size of each resource
- Location of each foreign resource (on different servers)
- Security protocols
What The Page Developer Normally Controls
A typical Website is publishing meta information (stuff like security policies, X-whatever headers, and other “HEAD” junk outside the page), the source document itself, and usually several supporting documents like CSS files, Javascript files, images, etc.
The page developer is responsible for determining what is used and how it is served. This is where most people concentrate their efforts on improving page speed. It’s just natural to think that the majority of speed issues are created in the design of the Website. Sometimes this assumption is correct. You have to develop a good sense for intuitively diagnosing speed issues. There is no checklist that will do this for you.
A browser has to issue a request for every external resource required by the document. So, for example, you may see requests like:
- HTML document
- CSS file 1
- CSS file 2
- Analytics script
- Font script 1
- Font script 2
- Main (header) image
- Advertising widget (source code + image)
And we all know there could be dozens or even hundreds of these requests per page. The browser has to request each resource separately and that means each request and whatever response is sent back has to traverse all that felgercarb I described above.
Every host that contributes at least one fetchable element to your page requires the user’s browser to perform an additional DNS lookup. The DNS information may be cached so you will benefit if your visitors are retrieving resources (for other sites they visit) from the same external hosts you reference, such as Google Fonts, Google Analytics, various social media buttons, remote video hosting platforms, etc.
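To see how much those extra lookups cost from a given machine, you can time the resolution of each external host yourself. The hosts below are only examples, and the results depend heavily on whether the local resolver already has each name cached:

```python
import socket
import time

# Example external hosts a page might reference -- fonts, analytics, social buttons, etc.
hosts = [
    "fonts.googleapis.com",
    "www.google-analytics.com",
    "platform.twitter.com",
]

for host in hosts:
    start = time.perf_counter()
    try:
        socket.gethostbyname(host)                     # DNS lookup only, no fetch
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{host:30} {elapsed_ms:6.1f} ms")
    except socket.gaierror as err:
        print(f"{host:30} lookup failed: {err}")
```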
Now It Should Be More Clear Why A Benchmark Server Helps
Given just how many resources a typical Web document includes in 2017, there is no reason to assume that all slow sites are slow because of the complexity of their design. These sites MIGHT be serving 20-50 images of 1-2 megabytes each per page, but the problem could be a faulty router somewhere. It could also be that the user is connecting to a faulty DNS resolver.
A benchmark server will tell you if the connection is slow or if the page is too complex.
When I work with clients who have slow systems I usually request that they create an almost empty document. That barebones document acts as a poor man’s benchmark server for me. It’s easier to ask for that than to explain what a benchmark server is and why I would want to work with one. If it takes a long time to fetch a document that makes no external requests, the problem is not really with the Website design.
If you compare HTTP to HTTPS requests and see a significant difference in delay, that could indicate that a lot of handshaking is going on. The server may not be properly configured, or it may lack support for the TLS protocol versions some clients need. That’s just an illustrative example. You’re more likely to find a problem somewhere else.
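One crude way to run that comparison is to time the same minimal fetch over port 80 and port 443 and look at the gap, which is roughly the handshake overhead (assuming the plain-HTTP request isn’t simply answered with a redirect). Again, the hostname is a placeholder:

```python
import socket
import ssl
import time

# Placeholder hostname -- substitute your own benchmark host.
HOST = "benchmark.example.com"
REQUEST = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n".encode("ascii")

def time_to_first_byte(use_tls: bool) -> float:
    """Seconds from opening the connection to receiving the first response byte."""
    start = time.perf_counter()
    sock = socket.create_connection((HOST, 443 if use_tls else 80), timeout=10)
    if use_tls:
        ctx = ssl.create_default_context()
        sock = ctx.wrap_socket(sock, server_hostname=HOST)
    sock.sendall(REQUEST)
    sock.recv(1)
    sock.close()
    return time.perf_counter() - start

plain = time_to_first_byte(use_tls=False)
secure = time_to_first_byte(use_tls=True)
print(f"HTTP : {plain * 1000:6.1f} ms")
print(f"HTTPS: {secure * 1000:6.1f} ms (handshake overhead ~{(secure - plain) * 1000:.1f} ms)")
```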
Conclusion
If you’re diagnosing site speed problems without a proper benchmark document or server, you are more-or-less shooting in the dark. You may not have the time and resources to benchmark everything from 20 locations around the world, but you should be able to test the server’s ability to deliver “clean” Web documents before you conclude that an expensive solution is required.