1. First Things First: The Very Big Picture

Last Updated: 2018-01-29

Biggest Myth - WordPress is slow.

WordPress is blazing fast. Hosting can be slow. Code installed on WordPress can grind it to a halt.

This has nothing to do with WordPress and everything to do with site owner choices.

WordPress client sites I deliver run at ~1,000,000+ requests/minute.

This type of speed is for anonymous (non-logged-in) users.

Similar speed can be achieved for membership sites (logged-in users), though this requires custom tooling based on the specific MS Plugin (Membership System) and LMS Plugin (Learning Management System) being run.

This type of speed is essential for sites which must remain stable under high traffic situations, like…​

  1. Launch Traffic - which must tolerate massive traffic spikes. I’ve seen 100K+ pageviews/minute on some of my client sites. This level of traffic will take most sites down in a few seconds.

  2. High Continuous Daily Traffic - which must provide 100% error free content serving.

  3. E-Commerce Traffic Spikes - during specials or year end holidays, like Black Friday.

  4. Click Arbitrage Traffic - where sites run massive paid traffic against trending news stories to grab and hold Google News traction for a 48-hour news cycle.

If your site must maintain speed and stability (100% error free content serving), this checklist may provide your tech team some clues about how to handle high traffic situations, fairly hands free, most of the time.

I’ve included many factors which might seem unrelated to performance. Hacked sites, or crashed sites which can’t be restored from a working backup, run at zero speed. I consider zero speed to be slow, so best to deal with these issues right up front.

2. First Things First: Contacting Me

Before you contact me, keep this in mind…​

  • I’m happy to help you with real projects…​ with budgets…​

  • I work on a retainer basis. Determine your budget before you email me.

  • Our process will be to first create a Hot Spare of your site, then analyze what requires fixing.

Contacting me…​

  • david@davidfavor.com is the best way to contact me.

  • Use the subject line WordPress Speed Up Inquiry so I prioritize your email.

  • Include a copy of your monthly hosting bill, attached to your inquiry.

3. First Things First: Top Site Speed Determiner

The Server Savant you keep on your daily payroll is the primary determiner of your site’s speed, stability, security, and SEO.

Here’s how you can easily find this person.

  1. Have your would-be Server Savant migrate your system to their runtime environment.

  2. Have them provide you with root ssh access.

  3. Run your own h2load test against your migrated site.

This will tell you the truth about your would-be Server Savant’s skill set.

4. First Things First: Running Fast WordPress Sites

Maintaining WordPress speed resembles car maintenance. You watch your warning lights while you’re driving, and if one comes on, you stop…​ have your car towed…​ fix the problem. Then there’s planned maintenance: what you do to keep your warning lights off most of the time.

Running Fast WordPress Sites occurs in two stages.

  1. Hosting and Site Setup.

  2. Maintaining WordPress speed moment to moment, all day, every day.

Maintaining WordPress site speed is a process, not an event.

Fake Managed Hosting like HostGator, GoDaddy and WPEngine will work for hobby (low) traffic projects.

Real Managed Hosting is required for money (high) traffic projects.

You can tell the difference between the two, because the cost of Real Managed Hosting will feel like a gut punch.

Hire someone to handle this checklist, unless you’re a hard core Geek.

If you have a truckload of money on the line, best to keep a Linux/WordPress Savant on your daily payroll to handle all these issues, as they’re massively complex to get right and keep right…​ all day…​ every day.

5. First Things First: Avoiding Incompetent Experts

The number of incorrect and outdated "How to speed up WordPress" guides currently online is impressive. I’ve read 100s, and based on Google search result counts, 1000s more exist.

Most simply repackage/spout myths like…​

Using CDNs like CloudFlare is good.

Using the W3TC caching plugin is good.

Using NGINX is good.

All this is wrong.

How do you know what’s right and wrong?

Simple testing shows this every time. It’s testing any marginally competent tech person can do, if they just take the time. This tells you these WordPress speed up guide writers are posers.

6. First Things First: Poser Test Questions

You can ask a few simple questions to instantly identify posers.

  1. Is WordPress slow? Answer - No. WordPress is blazing fast, unless you install a Granny-on-a-walker-slow theme or plugins. If they say WordPress is slow, you have a poser.

  2. Should I use CloudFlare? Answer - No. You can’t fix site speed by glossing over problems. Once the real problems are fixed, you’ll require no cruft to gloss over them.

  3. Should I use NGINX? Answer - No. Same reason as using CloudFlare.

  4. What’s the best caching plugin? Wrong Answer - W3TC. Posers will always say W3TC, because of all the other posers writing articles saying W3TC is useful. Savants will say, only testing will determine the best caching plugin for your site. By the way, if you actually test site speed, you’ll be surprised how W3TC almost always slows down site speed.

  5. How fast can WordPress serve content? Right Answer - I’ll migrate your site and give you root ssh, so you can test site speed yourself.

My current WordPress site speed is 1,000,000+ requests/minute for non-membership sites, on cheap, commodity hardware.

Running high speed Membership sites, with 1000s of simultaneously logged in users requires custom tooling geared to each specific membership plugin or LMS plugin being used.

7. First Things First: Escaping Your Current Hosting Nightmare

This is a tough one.

Sometimes it can take days to migrate a site out of a bad hosting situation. Sometimes only a few minutes.

My suggestion is just pay your new hosting company to migrate all your sites. In the long run, you’ll likely pay far less and stay out of padded rooms.

And before you start this process, keep this in mind. There’s a big difference between migrating a single WordPress install and anything else…​ where anything else means many nested WordPress installs or custom database systems that have been cobbled on top of your site.

Site migration can be extremely difficult to get right.

Since 1994, I’ve likely migrated 1000s of sites, and you’d be surprised how many times a migration fails the first time because of nested installs, or custom database addons, or site backups which are just broken, so some sort of manual migration process is required.

I’m to a point now with new clients where I always do migrations myself. This way, I know for a fact the migration is 100% correct.

Likely best if you always have a pro do your migrations between hosting situations.

8. First Things First: Hot Spare Site Migration

One easy way to migrate money sites is using Hot Spare Migration.

A Hot Spare is exactly what it sounds like.

Exact backups or clones of your running production/money sites under different host names.

Something like https://spare1.yoursite.com, where there may be many enumerated spares, like 1…​ 2…​ 3…​ etc…​

Running Hot Spares allows easy testing and optimization of production sites, with no chance of destroying your cash flow due to some technical problem created during this process.

Running Hot Spares also means that if your production site crashes, transforming a Hot Spare into a production site requires only a simple rename of site links (to remove the hostname prefix, like spare1), so recovery occurs in minutes, rather than the painful process of restoring a database.
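One common way to do that rename, assuming WP-CLI is installed on the spare (the host names below are the example names from above), is a single search-replace across the database:

```shell
# Promote spare1 to production by rewriting site links in the database.
# WP-CLI's search-replace handles PHP-serialized data safely, unlike sed.
wp search-replace 'https://spare1.yoursite.com' 'https://yoursite.com' \
    --all-tables --dry-run   # drop --dry-run once the preview looks right
```

After the rewrite, repoint DNS (or the production IP) at the spare and it becomes the live site.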

9. Stellar Hosting: Selection

Here’s how I do my client intake. I suggest you find someone who provides a similar intake process.

First I have someone send me a copy of their hosting bill, so I can determine if I’m willing to take them on as a client.

Someone looking for $5/year hosting will suffer a massive coronary when I give them a quote. People suffering in the evil clutches of companies like WPEngine, RackSpace, or WordPress VIP hosting may be a good match.

Then I migrate their sites to one of my servers. I do this at my normal bill rate. Period. No exceptions.

Then clients are provided ssh access, so they or anyone they like can test their site speed. This is essential to knowing what you’re getting for your hosting dollars.

I provide 100% transparency and truth.

If you have big money on the line, I suggest you look for a similar type company. Someone able to prove their speed claims.

10. Stellar Hosting: Resource Overages

I don’t charge for resource overages, because they’re stupid.

Sites either run optimized or not. For unoptimized sites, my hosting fees can be astronomical. Some clients don’t care. They’re making money and they’d prefer nothing be touched. That’s fine.

Most clients prefer to optimize their sites, so sites can scale to handle any amount of traffic.

When I have a client whose site starts draining machine resources, we have a conversation. They either pay me or someone else to fix their site, or pay more for hosting.

To me, this is the correct way to handle overages. No extra $1000s/month hosting fees. No account suspensions. No upsells to stupid/expensive tech like hardware load balancers or captive CDNs or any other nonsense.

11. Stellar Hosting: Software Updates

Recently I scanned back through several years of notes related to hacked sites I’ve fixed/cleansed. Here’s my best guess about how those sites were hacked.

10%ish - WordPress based Hacks - Because sites were running themes or plugins with backdoors or outdated/hackable WordPress core or theme or plugins.

90%ish - Hosting based Hacks - Because hosting companies were running outdated/hackable versions of Linux or PHP.

One of the most important Hosting company considerations to prioritize is how they handle software updates.

My approach is to run latest Ubuntu at a host level and latest Ubuntu in all LXD containers.

Also, updates refers to all related LAMP Stack components. I normally run updates every few days, across all LXD containers.

Keep in mind, managing this update process is complex and time consuming.

The reason most hosting companies never do major OS updates, or even incremental LAMP updates (security and performance patches), is they have no staff able to do these updates and recover from any problems which may arise.

Because this takes time, expect to pay much more for hosting which provides constant updates to always run latest stable code.

12. Backups: Creating Backups and Performance Degradation

I’m constantly amazed at how many sites can’t make a working backup. Either the backup procedure or plugin fails, or the backup files produced can’t be restored.

Your Hosting company should provide you a runtime environment which allows you to run any backup plugin you like. Avoid hosting companies which place prohibitions on backup plugins you can install.

Also test that your backups actually work. The only way to truly test if backups are working is to restore a backup.

Taking backups can destroy site performance.

Many sites completely stop serving content during backups.

The reason for this is most hosting companies incorrectly configure the entire hosting environment, starting with how filesystems are configured. Usually old or incorrect filesystem types are set up. Also, boot-time mount options are usually left at default values, which simply won’t work for high traffic sites.

During backups, massive CPU and Memory and Disk I/O resources must be used. Then the resulting backup file must be transferred offsite, which requires massive Network I/O resources.

13. Backups: Near Zero Resource Backups

The correct way to do backups, so near zero resources are used, follows this sequence.

  1. Set up a Hot Spare site (or sites).

  2. Clone files to the Hot Spare using rsync (files) and mysqldump (databases).

  3. On Hot Spare site(s) create backup files.

The benefit of this backup approach comes from typical site use, where only a few files change, and infrequently.

CPU and Memory usage during compression of the backup file moves from the production site to the Hot Spare, so the production site feels no effect.

Disk I/O for reading files is limited to only changed files.

Network I/O disappears, because large (sometimes multi-Gig) file transfers never occur.

14. Backups: Normal/Slow Restores

In general, most CMS systems have slow content changes, maybe a few changes each week. For these systems, taking a nightly backup and restoring from this backup in case of site or machine failure is sufficient.

For write intensive sites, like CRMs or accounting or any other site processing many transactions which write data every few minutes or seconds, other steps must be taken.

If an old backup must be restored, recent writes may be lost, and the restore itself may result in a site outage.

Best to work through all these details with your Hosting Company to ensure you understand all issues, including amount of downtime required to restore your site to working order, if a restore must be done.

This is one of the first considerations I cover with clients, if they have any custom code or rapidly changing content.

15. Backups: Instant Restores

Running Hot Spare sites means backup restoration becomes simple. Just repoint the production site IP to one of the Hot Spare sites.

Normally, taking a nightly backup and reloading this backup into a Hot Spare site is sufficient.

For sites with rapidly changing data, Database Replication may be required.

Database Replication is extremely complex to get right. Also expensive to maintain. Best to avoid this unless necessary.

16. Testing: General Testing

http://WebPageTest.org provides a simple report card system for scoring Websites.

Target having all "A" scores and an "X" for CDN, because using external CDNs can cost you a truckload of money when the CDN glitches, slows down, or dies.

Never make your money reliant on a CDN or any other external site.

17. Testing: Pre Launch Load Testing

There are several testing tools for you to use.

http://WebPageTest.org provides the definitive tester for site speed.

GTMetrix and Pingdom and Google Pagespeed provide different views of similar data.

These testers provide information about single visits.

Pre Launch Testing must be done on site using h2load.

To me, a site is Launch Traffic Ready when the site can produce a local throughput speed of 1,000,000+ requests/minute, with zero errors and zero crashes.

Google Search Console, the Google Mobile Friendly Tester, and the W3C Validator provide other essential keys to site performance.

These are less about speed and more about assisting with SEO and content rendering across all devices.

Covering these topics is best handled by having conversations with your hosting company about your long term site goals.

For example, using the W3C Validator and outline mode will tell you instantly if your site has a prayer of generating a coveted/lucrative Featured Snippet Search Result.

I’ll just say this. Site owners have told me off the record they see 40-50%+ CTR (Click Through Rate) when they have a Featured Snippet listed versus just having a Page #1 Search Result.

Featured Snippets are the New SEO.

Passing all three of these tools with flying colors puts you on the road to Featured Snippets Riches.

19. LAMP: Linux and Apache and MariaDB/MySQL and PHP

LAMP normally refers to Linux and Apache and MySQL and PHP.

For our conversation, LAMP extends to include Linux and Apache/HTTP2/SSL + MariaDB (fast/working MySQL) and PHP/FPM and WordPress Core/Theme/Plugins.

20. LAMP: Zero Hack Security

Security takes highest priority.

If your site gets hacked, Google and Netcraft will publish your hacked site status to the world. Your site’s SEO will circle the drain. Ad Networks will charge more for Ad Units. Worst case, your hosting company will disable your machine till you do a full reinstall, obliterating all your sites.

So…​ Best to never get hacked, and also to have a regular restore process, where you install a backup regularly, to ensure your backups are working. If you haven’t restored your site from a backup, then you have no idea if your backup process works.

Recently I went through years of notes, and it looks like the reasons for site hacks I’ve fixed break down to roughly 90% due to broken hosting, with the remaining 10% due to running old WordPress code.

Broken hosting means sites running on old LAMP code. Many times this is code which is years out of date, with many known hacks.

21. Hosting: Hosting Verses Provisioning

There’s a fair bit of difference between Hosting and Provisioning.

Hosting Companies add layers of cruft which insulate you from underlying systems. CPanel is a good example.

Provisioning Companies (what I use) only deliver bare metal hardware. These machines have zero code installed. No OS. No LAMP. Nada. Just the way machines should be delivered.

If you’re running serious traffic, using Provisioning is essential, as this is the only way to make every single choice about customization you’ll require to run at high speed, under high traffic load.

22. Hosting: High Speed

High Speed relates to how fast the HTML component of a site serves, and to what happens once this first HTML component arrives at a visitor’s browser.

Then your Apache server configuration comes into play, which should include HTTP2 and SSL and correctly set up expiration headers for each asset (file).

Use https://WebPageTest.org to test your site speed. You’re targeting all "A" scores with an "X" for CDN score.

Ensure your LAMP Stack is tooled to serve your first HTML component in <1sec, preferably <500ms.
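A rough sketch of the Apache side, assuming mod_http2, mod_ssl, and mod_expires are enabled (the lifetimes below are placeholders to tune per asset strategy, not recommendations from this guide):

```apache
# In the relevant VirtualHost / server config:
Protocols h2 http/1.1        # serve HTTP2, fall back to HTTP/1.1

ExpiresActive On             # mod_expires: per-asset expiration headers
ExpiresByType text/css       "access plus 1 week"
ExpiresByType image/png      "access plus 1 month"
ExpiresDefault               "access plus 1 hour"
```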

23. Hosting: High Speed With High Traffic

For a site to maintain high speed during high traffic is a monumental task. There’s no guaranteed way to achieve this. Just recently I’ve opened bugs against the latest PHP Opcache and WP Super Cache, which means sites which depend on these two technologies to maintain speed under load will fail horribly.

My approach is to run one simple command against a site, before I deliver it to a client.

h2load -ph2c -t16 -c16 -m16 -n1000000 https://blackfridayhosting.com/

This command hammers a site with 1,000,000 requests at high concurrency. Most sites will either die, take forever to finish, or return mostly errors rather than 100% success.

Sites I deliver run roughly 1,000,000 requests/minute for WordPress sites and 3,000,000 requests/minute for static sites. To most people these numbers seem high. They’re normal for LAMP Stacks which are well tuned and well maintained, on a daily basis.
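To sanity-check claims like these yourself, convert h2load’s req/s figure into requests/minute. A tiny sketch (the sample summary line below is illustrative, not a real benchmark result):

```shell
# h2load prints a summary line like this when a run finishes:
line='finished in 59.80s, 16722.41 req/s, 113.21MB/s'

# Field 4 is requests/second; multiply by 60 for requests/minute:
rpm=$(printf '%s\n' "$line" | awk '{ printf "%d", $4 * 60 }')
echo "$rpm requests/minute"
```

On a real run, pipe h2load’s output through the same awk instead of the hard-coded line.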

When considering new hosting, pay your hosting company whatever they normally charge for migration and then have them provide you with ssh access to your site with a copy of h2load installed. Then run your own site speed test. Only believe what testing tells you.

24. Hosting: Managed

Earlier I talked a bit about the difference between Fake and Real Managed Hosting, which is the primary determiner of site speed.

Managed Hosting is not what HostGator and GoDaddy and WPEngine do. They all take the same approach: they tell you what you can and can’t install.

Managed Hosting should provide you information about how your site will run under traffic load, and may also require increased resource cost or retooling of your site, whichever you prefer. That’s the real difference.

Real Managed Hosting will let you run whatever code you require, and your hosting fees will reflect your design choices. Most of my clients opt to fix site problems, rather than pay higher hosting fees. A few just say, I’m making a truckload of money, just charge me higher hosting fees and let it ride. This choice should always be up to you.

25. Hosting: Problem: NGINX Gateway Errors

More about this later and the simple fix is, never use NGINX for any production site. WordPress sites I’m tooling for clients now run in the 1,000,000+ requests/minute range, so requiring NGINX for speed is a serious myth.

The fix for all NGINX errors - Deinstall NGINX.

26. Hosting: Problem: Anything With Cloud or VPS In The Name

Ah…​ The Cloud…​ Reminds me of when Heroin was the panacea (cure all) of every ailment.

The terms Cloud and VPS can be used interchangeably, as they use the same implementation tech.

Let’s go over how this works. Once you understand the tech involved, you’ll understand why this tech is so slow, compared with bare metal servers.

A Cloud is created by slicing up Big Iron using VMs.

Big Iron refers to large machines - many CPUS and mucho memory and large disk arrays.

VM (Virtual Machine) code runs to slice up Big Iron (bare metal servers) into logical or virtual servers. The idea is to provide high security and resource management on a per VM basis.

VM technology allows you to run any OS on any Iron, so Linux, BSD, Solaris, Windows, etc.

The problem with VM tech is all OS and application code must be run through a conversion layer, where the actual machine instructions of the OS and application are interpreted, then run by some sort of OS emulator.

This means servers (Iron) are running VMs (software) which interprets other OS and application software.

The net result is a massive decrease in speed.

I’ve been working with software and hardware since writing my first FORTRAN code in 1978/1979-ish. I’ve yet to find an application which requires any type of Cloud/VM setup to work and scale.

The inside joke with developers is…​ if you run in the Cloud, normal site speed drops by, say, 50% over bare metal servers…​

Well, good news for developers and Cloud providers, because you as the site owner will have to pay for complex solutions and more Cloud resources, just to get back to bare metal speed.

Anyone who pitches you with a Cloud/VPS solution for Easier Scalability, is either incompetent or is looking for a big payday, at your expense.

Better to look for a Server Savant who can design you a simple, bare metal system, for peanuts compared to Cloud tech.

27. Hosting: Problem: Highly Variable Site Speed

There can be many reasons for this, but the primary reason is the technology being used for actual hosting.

Which is…​ drum roll please…​ VMs as we just covered in the Cloud/VPS tech section.

With many VMs running, any single VM, or group of VMs, which draws huge resources out of the bare metal pool drags down performance of all VMs/sites running on that machine.

Fix: Move to bare metal hosting or hosting which only uses LXD containers.

28. Hosting: Problem: Resource Overages

Once you understand what I call the HostGator model, you’ll understand resource overage problems.

Hosting companies use a simple model.

  1. Use cheap hardware.

  2. Cram as many sites as possible on every machine.

  3. Generally most sites will have no traffic.

  4. Anytime a site has traffic, this will trigger a site resource overage.

  5. Once an overage triggers, suspend the client’s account (usually many sites).

  6. Start upselling client on whatever we can dupe them into buying. None of which will make any real difference, so just keep upselling them, till they pay or leave.

The approach I use is very different.

  1. For new sites with substantial traffic, place them on a massive machine with many CPUs and plenty of memory.

  2. Track resource usage.

  3. If problems surface, visit with client about how these problems can either be resolved or if client has deep pockets and prefers, just move site to bigger machine. Sometimes even leasing a captive machine for a single site.

  4. The point here is sites receiving traffic tend to be generating cashflow, so never, ever, ever suspend any site for resource overages. Just adjust hosting to handle the traffic.

29. Linux: Distro Selection

What Distro (Distribution) you choose makes a huge difference in the amount of time required to maintain your system, especially keeping your sites updated with all security patches and performance enhancements.

The following few items describe why Ubuntu provides far smoother operations than RedHat/Fedora/CentOS.

Moving from one Distro to another can be far more complex than first appearances. I’ve done this many times for clients.

My suggestion is you use the Ubuntu Distro and save yourself a massive amount of time and expense.

30. Linux: Ubuntu: Avoid RedHat/Fedora/CentOS

Even the most recent versions of RedHat and derivatives only provide Kernel 3.10.x versions, which are very old. Kernel 3.18.x provides a near complete rework of the network stack for better network speed and stability. Kernel 4.13.x reorganizes how SSL/TLS Kernel and user space code interacts, providing a substantial speed increase.

Only use RedHat/Fedora/CentOS if you don’t care about speed.

Ubuntu always runs latest stable Kernel version.

31. Linux: Ubuntu: Kernel Version

If you choose Ubuntu, then you’ll always be running a very recent Kernel. If you choose some other OS, then you’ll likely be faced with building a custom Kernel or installing Kernels from a non-standard repository.

I strongly recommend you simply run with latest Ubuntu and avoid the entire Kernel issue.

For example, CentOS 7.4-1708 shows to be using Kernel 3.10.0-693, which KernelNewbies shows was released on Sun, 30 Jun 2013.

So this means using CentOS latest (as of today - Dec 11 2017), you’ll be starting with a Kernel that’s over four years outdated.

Okay for hobby sites. Unacceptable for real sites.

Running old Kernels is for Thrillseekers who love their sites running slow and being repeatedly hacked.

32. Linux: Ubuntu: LXD

Read on for more about LXD, which resolves a massive number of problems during normal daily operations, as I’ll cover shortly.

Ubuntu is the native development Distro for LXD. This means all machine level LXD admin and container level runtime functions are debugged best on Ubuntu.

This also means that when an LXD problem arises and you’re using Ubuntu (at machine level and inside containers), time to resolve your problem will tend to be quicker than if you’re running some other Distro.

33. Linux: Ubuntu: Package Manager

Package management determines whether your daily operations run smooth or rough (time consuming, frustrating, costly).

Debian/Ubuntu/Derivatives all use the APT Package Manager.

The primary difference between APT and other systems, like yum/rpm used on RedHat/Fedora/CentOS, relates to dependency management.

With APT, when you install some complex package with 100s of dependencies (other packages to install), APT installs all these dependencies for you.

With yum/rpm, you’re on your own. If you install a complex package, you’ll receive an error about which dependencies are missing, then you’ll have to install each manually. Many times dependencies will chain, causing hours of wasted time: installing a package, getting a dependency error, installing that dependency until one finally installs, then keeping a list of the previous failures and installing them all in reverse order, one by one.

Working with non-APT systems can become mind numbing, time sucking, drudgery each time you’d just like to install a minor package update.

34. Linux: Ubuntu: Package Repositories

As of today, latest stable releases of LAMP Stack Software are…​

Apache-2.4.29 and PHP-7.2 and MariaDB-10.2.11 and OpenSSL-1.1.0g

I’m running all these versions for most of my clients - well, PHP-7.1 for some sites still, as I’m vetting PHP-7.2 to verify all’s well for all sites.

To install all the above versions, all I had to do was set up the correct PPAs (Package Repositories), then issue a simple APT install command sequence, and all the latest packages installed auto-magically.
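On Ubuntu, that sequence looks roughly like the following (run as root; the ondrej PPAs are widely used third-party repositories for current Apache and PHP builds, named here as an example rather than the author’s documented setup):

```shell
add-apt-repository -y ppa:ondrej/apache2   # current Apache 2.4.x builds
add-apt-repository -y ppa:ondrej/php       # current PHP builds
apt update
apt install -y apache2 php7.2-fpm mariadb-server
```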

Many hosting companies are running versions of Kernels or Apache or PHP or OpenSSL which are unsupportable and hackable, or just buggy.

Installing latest stable software on Ubuntu usually requires only a simple APT install command.

Accomplishing this on RedHat/Fedora/CentOS might require you to build Kernel/Apache/PHP/OpenSSL software from scratch and try installing it on your machine without crashing your machine or sites.

Or it might require you to install from some rogue repository, where you have no clue about the quality or security (backdoors may be injected) of the code you’re installing.

Using Ubuntu simplifies all this down to simple install or update commands, any mere mortal can issue.

Building complex software from source and getting it integrated into an already-built system, which was built with different software versions…​ many times requires God-like or Savant-level abilities.

35. Linux: Ubuntu: Major Distro Updates

Major Distro Updates normally occur once or twice each year.

The original reason I switched from Fedora to Ubuntu years ago, was because RedHat/Fedora/CentOS major system upgrades work roughly 50% of the time. This means 50% of all major updates leave the machine either unbootable or major systems like Apache would fail to start after an upgrade.

Apache failures are annoying and usually fairly easy to fix.

Boot failures require massive time and intelligence to sort out and fix. Meanwhile, if a machine won’t boot, all sites running on the machine are down, till the machine can be recovered…​ if it can be recovered…​

If it can’t be recovered, then a fresh install on the machine must be done and all sites restored from backups and reconfigured in Apache and likely have new SSL certs generated or old SSL certs regenerated.

For machines running 100s of real, cashflowing sites, hours to days of downtime is unacceptable.

I’ve been using Ubuntu since probably around 2003 and I’ve only had one major upgrade fail since then. I did have to reboot the machine in rescue mode and was able to fix the problem in a few minutes, so downtime for all sites was roughly 30 minutes.

Thinking about this, that’s roughly 30 minutes downtime over 14+ years, due to OS upgrades. Pretty good.

36. Linux: Containers: LXD, LXC, Docker

Containers allow slicing up a physical machine into many other bootable machines, which all run at bare metal (hardware) speed. Contrast this with heavy VM systems like Cloud Solutions, VirtualBox, VMWare, etc. which can drop site speed by 90%+ just by running with these.

LXD has replaced LXC now, fixing many LXC annoyances and adding a few of its own. The primary difference between LXD and Docker is LXD provides an entire bootable entity, just like a physical machine, where data persists between container reboots/restarts. Docker wraps applications and has no concept of persistent storage, like databases, so doesn’t really apply well to LAMP Stack applications.

LXD is crucial for many operations, like testing major upgrades to Ubuntu or Apache or PHP or MariaDB. Just clone a container, run the upgrade, and see if the site survives. If not, problems can be fixed in the new site clone, so production sites can continue to run, unaffected by upgrade testing.

LXD also provides a way to completely partition off dev or staging sites from production sites, rapidly switch between the two, and revert to the original if a switch to a new site has problems.
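The clone-then-upgrade test above is only a handful of LXD commands (the container names are examples):

```shell
lxc copy prod-site upgrade-test        # clone the production container
lxc start upgrade-test
lxc exec upgrade-test -- apt update
lxc exec upgrade-test -- apt -y full-upgrade   # try the upgrade in the clone
# If the site survives, repeat on prod; if not, just delete the clone:
# lxc delete --force upgrade-test
```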

37. Linux: Tuning: Supercharge Local DNS

First, remove the horribly broken systemd-resolved facility completely. Disable it. Remove all related packages. Set your package system to block reinstalling these packages in the future.

Most Linux Distros now use systemd for service management, which means systemd-resolved is running.

systemd-resolved provides a seriously flawed approach to simplifying local DNS lookups.

I’m reminded of sad tales of woe, which begin with…​ "It was a simple plan…​"

The systemd-resolved code is slow and fails to correctly cache lookups, so each DNS lookup tends to go through an entire network lookup, rather than be returned from the local cache.

Worse, systemd-resolved seems to get wedged sometimes, where it does lookups…​ which are un-cached…​ then returns incorrect results, then stops attempting to do cached lookups ever again…​ for anything…​

The fix I use in my hosting environment is to deinstall all systemd-resolved related code and replace it with dnsmasq, which is the workhorse of DNS caching servers and always works.
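On an Ubuntu style system, that replacement can be sketched roughly as follows. Service and package names are assumptions and vary by distro and release; on releases where systemd-resolved ships inside the main systemd package, disabling and masking it is the practical equivalent of removal.

```shell
systemctl disable --now systemd-resolved     # stop the broken resolver
systemctl mask systemd-resolved              # block it from being restarted
rm -f /etc/resolv.conf                       # often a symlink to the stub resolver
echo "nameserver 127.0.0.1" > /etc/resolv.conf
apt-get install -y dnsmasq                   # install the caching resolver
systemctl enable --now dnsmasq
```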

If your WordPress admin pages are slow (minutes to respond), likely systemd-resolved is the reason why. WordPress admin pages should render instantly…​ always…​

38. Linux: Tuning: Filesystem

Choosing the correct filesystem and correct mount options is essential for ensuring MariaDB runs fast.

I stick with the ext4 filesystem as it’s generally the fastest for a broad spectrum of workloads and is by far the best maintained.

If you use ZFS, your life will be consumed by ZFS management, and ZFS, in my experience, slows to a crawl for no reason and then speeds up. Avoid ZFS for money projects.

BTRFS (Butter FS or Better FS) seems more stable than ZFS. In some special workload situations (like managing 1,000,000s of email messages), ext4 still outperforms BTRFS.

Until Ubuntu changes their default filesystem from ext4 to something else…​ I’m no thrillseeker, so I’ll stick with ext4 for now.

39. Linux: Tuning: Filesystem Mount Options

With ext4 there are a few mount options which provide best MariaDB speed.

I use…​

/  ext4  errors=remount-ro,noatime,dioread_nolock 0 1

The noatime option (which also implies nodiratime) disables access time updates on all files and directories. Running with access time enabled can bring any machine to its knees under heavy traffic load, because each file access hammers the disk with write i/o to update access time on each file and each parent directory, all the way back to "/" (top of directory hierarchy).

The dioread_nolock option fixes a long-standing filesystem bug which caused disk i/o to hang when using O_DIRECT based i/o, which provides the highest MariaDB speed.
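To verify the options the root filesystem is actually mounted with, and to apply fstab edits without a reboot, something like this works on most Linux systems:

```shell
findmnt -no FSTYPE,OPTIONS /    # show filesystem type and active mount options
mount -o remount /              # re-apply options from /etc/fstab, e.g. noatime
```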

40. Linux: Tuning: Run /tmp in memory

Here’s how I do this. Running at 2G may seem like a large number. I use 2G, because some OS software package updates use /tmp and if one of these updates fills /tmp during normal operations, you’ll start getting database errors and content will stop serving.

/tmp  tmpfs rw,noatime,mode=1777,size=2G 0 0

41. CPanel: Problem: Speed

CPanel uses the ITK MPM which, last time I tested, dropped site speed by 90%+.

Other panels are better about this. ISPConfig is good.

Webmin is best avoided, as some versions go into an infinite loop, spinning CPUs at 100% and Webmin support was never able to explain or fix this, when I hit this problem.

42. CPanel: Problem: RedHat Only Support

CPanel only runs on RedHat derivatives, which means you’re stuck with the many headaches of RedHat, as previously described.

Problems can arise with LXD here. If CPanel must be run, install Ubuntu at the host/machine level, then install RedHat and CPanel in as few containers as possible.

43. CPanel: Problem: Updating CPanel System

CPanel is notoriously hard to update.

Most updates break all site functionality, requiring many hours of manual intervention by someone who understands CPanel’s internal code very well.

If you’re running 100s of placeholder or hobby sites, maybe having them offline for days or weeks is okay.

This situation is unacceptable for real sites, generating cashflows.

44. CPanel: Problem: Security

Because CPanel is so hard to update, most hosting companies don’t force updates. If they did, then they’d have to bill clients to fix what CPanel updates break or absorb the cost of fixing these problems.

Most hosting companies prefer to just run your credit card, doing as little as possible and ignoring what this might mean for clients, like sites getting hacked and Google deindexing your hacked sites.

This means that CPanel code tends to be old, and as soon as some Black Hat Hacker figures out how to hack an old version of CPanel, this data is shared in Dark Net forums and everywhere this old version of CPanel is running will be hacked in no time flat.

45. CPanel: Problem: Action Reproducibility

CPanel is GUI based. This means actions performed at the GUI level can’t be reliably repeated.

Trying to write up a multi-step checklist some physical person must go through to setup a site or perform some type of ongoing maintenance is highly error prone.

Okay for hobby sites. Unacceptable for real sites.

The best way to handle install and maintenance admin tasks is to use scripts which produce repeatable results. Then document which scripts are run and when, with date and time.

I personally keep text files for each of my hosting clients, where most actions I perform are recorded, by cut and paste of my exact commands into these files.

This way I remember exactly when and what I did, for each client and client site.

46. CPanel: Problem: Config File Obliteration

Since CPanel is a GUI based system, actual system config files are generated by injecting GUI form values into templates, then compiling templates into config files.

This also means any changes outside the GUI, may be obliterated at any time.

For example, manual system tuning done to fix a performance or security issue, can be destroyed by seemingly unrelated GUI actions, which may trigger complex regeneration of many config files.

47. Apache: 2.4 Versus 2.2

Apache-2.2 reached EOL (End-of-Life) on 2017-07-11. This means no further updates, including security patches, will be released. This means the next bug which allows Apache to be crashed by some odd request, or hacked, will never be fixed.

If a hosting company tries to provide you with Apache-2.2, best turn and run, as this will cause you much future grief.

48. Apache: MPM

Apache provides many MPMs. Think of these as plugins which change how Apache handles content requests. By far, mpm_event is the best choice. This MPM multiplexes many requests over one connection, for fastest content serving speed, with least resource (CPU and Memory) usage.

49. Apache: HTTP2

Apache’s HTTP2 support leverages mpm_event to implement the full HTTP2 protocol in native Apache. This change fixes many of the significant performance problems in HTTP1.1 and also mod_spdy.

If a hosting company tries to provide you with anything other than HTTP2, best turn and run, as this will cause you much future grief.
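On Debian/Ubuntu style Apache installs, switching to mpm_event with HTTP2 can be sketched as below. Module names are assumptions for a PHP-FPM based setup and vary by PHP version; mod_php must be disabled first, since it can’t run under a threaded MPM.

```shell
a2dismod php7.2 mpm_prefork          # PHP module name varies by version installed
a2enmod mpm_event proxy_fcgi http2
# Enable HTTP2 for all sites (can also go in each vhost)
echo 'Protocols h2 http/1.1' > /etc/apache2/conf-available/http2-protocols.conf
a2enconf http2-protocols
apachectl configtest && systemctl restart apache2
```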

50. Apache: SSL

https://LetsEncrypt.org has provided free and strong SSL certs for years. To me there’s no reason to ever deploy a non-SSL site today. In fact, across all my hosting companies, I only host SSL wrapped sites.
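A minimal certbot sketch for an Apache site, assuming Debian/Ubuntu package names and a placeholder domain:

```shell
apt-get install -y certbot python3-certbot-apache   # package names vary by release
certbot --apache -d example.com -d www.example.com  # issue cert and configure Apache
certbot renew --dry-run                             # confirm auto-renewal works
```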

51. Apache: Expires Headers

Very few hosting companies enable Expires Headers and when they do it’s usually wrong.

Use https://WebPageTest.org to verify Expires Headers are enabled for your site and also that they’re working correctly.

100% of your onsite links should use Expires Headers.

For offsite/external assets, you have no control over these, so if there’s a problem contact the site you’re linking to and ask them to fix their Expires Headers.
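A minimal mod_expires sketch for Apache; the lifetimes are examples only (tune them for how often your assets change), and example.com plus the asset path are placeholders:

```shell
a2enmod expires
cat > /etc/apache2/conf-available/expires.conf <<'EOF'
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/jpeg             "access plus 1 month"
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType text/css               "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
EOF
a2enconf expires
systemctl reload apache2
# Verify a static asset now carries the headers
curl -sI https://example.com/style.css | grep -iE 'expires|cache-control'
```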

52. Apache: Gzip Compression

This should be on by default in your hosting environment.

If this is disabled, find a new hosting company.

Be very careful enabling Gzip compression in any plugins, caching or otherwise.

Many plugins incorrectly sense if compression is already enabled. This results in doing double Gzipping, which increases server load and breaks caching in various ways and then at the browser end, there’s no hope of correctly displaying double or triple Gzipped content.

Gzip compression should be on at the server level and off at the WordPress plugin level, else trouble will ensue.
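A quick way to check for sane (single) Gzip compression from the command line; example.com is a placeholder:

```shell
# Header should show "Content-Encoding: gzip" exactly once
curl -sI -H 'Accept-Encoding: gzip' https://example.com/ | grep -i content-encoding
# Body should decompress cleanly in one pass; garbage here suggests double Gzipping
curl -s -H 'Accept-Encoding: gzip' https://example.com/ | gunzip | head -3
```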

53. Apache: KeepAlives

This should be on by default in your hosting environment.

If this is disabled, find a new hosting company.

54. PHP: 7.X Versus 5.X

PHP 7.X dramatically increases PHP speed, as it’s a near total rewrite of the PHP interpreter. PHP-5.6 is nearing EOL (End-of-Life), so will shortly go the way of Apache-2.2 and should not be used for any new projects. PHP-5.5 and below are even worse, as all these versions have known hacks, which currently are the most common cause of WordPress and other PHP sites being hacked.

If a hosting company tries to provide you with any PHP below 7.1, best turn and run, as this will cause you much future grief. Shortly, this will become 7.2, as this new version is due out any day now.

55. PHP: FPM

If you’re using mpm_event and HTTP2, then you must use FPM, as FPM supports multi-threaded PHP and HTTP2 is highly multi-threaded. FPM allows PHP to be split out of Apache and run under the FPM manager. Running FPM and correct FPM custom logging is the only way to determine if a site can sustain high traffic loads.

After setting up custom FPM logging, running h2load tests (see above), should run with near zero PHP involvement. This means Apache is returning cached content out of memory (Kernel managed file buffers), rather than running PHP for every request. Nothing will kill a high traffic site faster, than running PHP on every request.
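A sketch of FPM custom logging with per-request CPU, using the stock php-fpm access log format specifiers; the pool file path varies by PHP version:

```ini
; Append to the FPM pool config, e.g. /etc/php/7.2/fpm/pool.d/www.conf
access.log = /var/log/php-fpm-access.log
; %R remote IP, %r request URI, %s status, %f script file,
; %{mili}d duration in ms, %C CPU used by the request (percent)
access.format = "%R %t \"%m %r\" %s %f %{mili}d ms %C%%"
```

After restarting the FPM service, an h2load run against a correctly cached site should add almost nothing to this log; every line logged during a load test is a request that leaked through to PHP.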

56. PHP: Opcache

Opcache allows PHP to read files off the disk and compile them once and then for future page requests, use this precompiled pseudo code. Having Opcache tuned and working well is essential for high traffic sites, especially if caching is inefficient and PHP is involved in page requests.

Run Opcache Control Panel or some other tool to verify Opcache has plenty of key and storage memory and a high cache hit ratio.

57. PHP: Opcache: Current Speed Killing Bug

Currently there’s an ugly Opcache Speed Killing Bug which has been outstanding since 2017-11-06 and has had no movement yet.

The bug affects all PHP-7.1 and PHP-7.2 versions and likely affects PHP-7.0 and PHP-5.6 also.

High traffic WordPress sites using broken caching plugins (W3TC and WP Super Cache and many others) will hit this bug at the worst possible time: when the site is under heaviest load.

The only indication of site trouble is when the site simply stops responding.

This problem can only be seen by correctly setting up PHP FPM, then correctly configuring logging to emit CPU time for each file, then doing a load test on https://yoursite.com/index.php which will emit messages about an index.php storm, along with escalating CPU usage. This pattern will continue till all CPUs (or threads, for hyperthreaded machines) are overrun and the machine has no CPU cycles available for real traffic/visits.

Best fix is to ensure you use a mod_rewrite based caching plugin and test that your plugin is working correctly. Many mod_rewrite based plugins don’t work correctly and fail under high traffic loads.

58. MariaDB: Instead of MySQL

MySQL’s history is dark as a Marvel comic book hero’s origin story.

If you’re really interested, you can read 100s of articles about how first Sun and then Oracle attempted to destroy MySQL, so they could upsell clients to expensive and proprietary systems.

Michael (Monty) Widenius and an inner circle of original MySQL developers finally had enough of this nonsense and, much like the American people voted for Trump rather than the criminal cartel alternative, Monty and crew opted to do what’s right and return MySQL to its original approach of providing stellar code for free…​ forever…​

Hence now we have MySQL’s replacement, MariaDB.

Think of MariaDB as MySQL++, a MySQL version which actually works and is much faster. I say actually works, because one of the Sun/Oracle strategies to keep MySQL hobbled and broken was to defer (forever) rolling in a decade’s worth of bug fixes. So Oracle resembles the establishment Republican criminals: "We’ll repeal Obamacare" and just never get around to it. Oracle was the same way: "We’ll roll in those fixes" and just deferred this process forever.

Simple throughput testing, normally produces a 30%-50%+ speed increase, just by removing MySQL packages and installing MariaDB.

Since MariaDB really is MySQL, it’s a drop-in replacement. No database unload/reload required. Just remove MySQL software packages, then install MariaDB software packages.

In fact, recent Debian and Ubuntu releases have moved to installing MariaDB packages when MySQL packages are requested.
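On Debian/Ubuntu the swap can be sketched as below. Even for a drop-in replacement, take a dump first; package names are the usual ones but vary by release:

```shell
mysqldump --all-databases > /root/all-databases.sql   # safety net before any swap
apt-get remove -y mysql-server
apt-get install -y mariadb-server
systemctl status mariadb                              # confirm the new server is up
```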

59. MariaDB: Storage Engines

Think of Storage Engines as plugins which implement i/o primitives, defining how data and indexes are created and managed.

Many WordPress sites run slow, because they’re using the old MyISAM storage engine, rather than InnoDB. Since MariaDB has updated the FTS (Full Text Search) code to work with InnoDB, there’s no longer any reason to use MyISAM.

For read intensive sites, like WordPress sites, use the InnoDB storage engine.

For write intensive sites, use the MyRocks storage engine.
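To find tables still on MyISAM and convert them, a sketch like this works; the database and table names are placeholders:

```shell
# List every non-system table still using MyISAM
mysql -e "SELECT table_schema, table_name FROM information_schema.tables
          WHERE engine='MyISAM'
          AND table_schema NOT IN ('information_schema','mysql','performance_schema');"
# Convert one table at a time (repeat per table listed above)
mysql wordpress -e "ALTER TABLE wp_options ENGINE=InnoDB;"
```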

60. MariaDB: Realtime Replication

Setting up and managing replication can add many hours/month to site maintenance.

Normally there’s no real business reason for running replication.

Before you believe any developer who tries selling you on replication, run your site design by several old dogs, before diving down the replication abyss.

Old Dog: Someone who’s personally coded using punch cards, punch tape, loader toggle switches, or magnetic tape, and can understand most assembly code at a glance.

61. MariaDB: I/O Thrashing

Database I/O tends to be the primary performance killer for all sites. This especially relates to database writes, so SQL - INSERT and UPDATE and DELETE operations.

Many plugins can take a site out in no time because of their database access patterns, which is to say, way too many writes for no good reason.

Worst offenders include…​ most redirection plugins which in any way change a link, all security plugins, many site statistics plugins, and any plugin which duplicates normal Apache logging, especially 404 logging, as 404 pages are never cached. This means attackers can easily take down a WordPress site running any type of security plugin, by issuing a simple 404 attack.

Real Managed Hosting will provide you an avenue to have a very smart coder check any theme or plugin, before you install it, to determine potential problems.

62. MariaDB: Tuning

Database tuning is fairly simple. Only use MariaDB. Only use the InnoDB storage engine, replacing MyISAM. Use the MyRocks storage engine for write intensive applications. Use mysqltuner periodically, correcting all emitted diagnostics.

63. WordPress: CMS Verses Static Content

Choosing when to use a CMS or static content can be tricky. My guideline is simple. Anytime I have to manage sessions (logged in users), membership sites for example, then I use WordPress. For a site like this, I use static content, as a CMS tends to get in the way of writing massive amounts of content over short periods of time.

64. WordPress: Theme

WordPress themes tend to be a constant source of problems. I’ve tested…​ geez…​ probably close to 400 https://themeforest.net themes. Only two of these themes have been 100% clean of all backdoors, so this means the rest all contain backdoors.

Before you ever start working with a theme, especially a full site redesign, have someone smart vet both the theme and the development process which will be used to implement the theme. Hint: Hacking of theme code is unacceptable. My rule: any developer who suggests writing a custom theme should be instantly fired. They should only develop Child Themes.

My strong suggestion is start with GeneratePress, then make changes using a Child Theme. Also, using Elementor Page Builder allows extremely complex sites to be set up quickly.

65. WordPress: Plugins

Plugins are best vetted same as themes. Also, avoid encrypted plugins, which require IonCube or some other real time decrypter. This type of code has to be reencrypted for each major release of PHP. I’ve had many clients have to retool sites completely after some company went out of business, so code was no longer available for new versions of PHP.

66. WordPress: Content Caching Plugin

The only way to maintain high site speed under high traffic load is to have a mod_rewrite based caching plugin that actually works.

Just because a caching plugin uses mod_rewrite, doesn’t mean they use mod_rewrite correctly. The only way to verify correct function, is to run PHP FPM and custom logging and run h2load testing after each caching plugin update, to ensure caching is still working.

My current preference is WP Fastest Cache as this caching plugin correctly serves mod_rewrite cached content, with no leakage to PHP.

67. WordPress: Database Query Caching Plugin

Normal CMS systems don’t require this.

If you have a membership system with 1000s of simultaneously logged in users or a write intensive application, then likely you’ll require both Query Caching and extensive MariaDB database tuning.

Best to have a conversation with your hosting company, if you meet one of these criteria.

68. WordPress: Redirection Plugins

Redirections can either be done at the Apache level (fast) using .htaccess syntax or at the WordPress level (slow) where the entire LAMP stack is involved.

Recently one of my client sites, tuned to nearly 1,000,000 requests/minute, dropped to 3000 requests/minute. This may seem like it’s still fast, but this client sometimes runs 50K-100K+ visits/minute, so this speed drop meant any real traffic would cause the site (really the entire machine) to grind to a halt.

The problem was a redirection added from the top level domain to some post slug.

When you run massive traffic, small changes (like adding a redirect), can destroy site performance.

Consider a site running a poorly coded redirection plugin with 100 redirects.

Let’s say this plugin reads every redirect from the database and writes to the database to log what it does.

That means each site page access requires 100 database reads (one database table row/record per redirect) and one database write (which can be the equivalent resource usage of 100s or 1000s of database reads) and then finds no match and serves the requested page.

You can see how a site with many redirects will fail rapidly under load.
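An Apache-level redirect, by contrast, costs no database access at all. A sketch via .htaccess, with placeholder slugs:

```shell
# One line in .htaccess, handled by Apache's mod_alias
# before PHP or MariaDB ever load
Redirect 301 /old-slug/ /new-slug/
```

To verify Apache itself answers it, `curl -sI https://example.com/old-slug/` should show a Location header pointing at the new slug.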

69. WordPress: Implicit Redirects (Post/Page Renaming)

When you rename a post slug or page slug, WordPress auto-magically generates an internal, PHP level, redirect. If the post/page you rename receives a huge amount of paid or organic traffic, a simple rename can destroy your site performance.

Be cautious of any minor changes.

If you’re running a money generating site, be sure you run even simple changes by your Server Savant, to ensure site speed and stability remains at peak levels.

70. WordPress: Minification

I suggest you work with unminified copies of all your code, especially .css and .js files, as this keeps debugging simple and clean.

WP Fastest Cache allows you to easily toggle on/off minification of HTML and CSS and Javascript, so when you have to debug something, just flip off minification and clear your cache.

71. WordPress: Image Compression

Image compression is still highly useful today, especially when serving traffic to mobile devices and with the advent of HTTP2, there’s much less call for complex technologies like Lazy Loading + Sprites. If you compress your images and run HTTP2, you can skip all the old school tech bandages which attempted to fix HTTP1.1 protocol issues.

72. WordPress: Image Dimensions

Most images are served with dimensions added, by various layers of WordPress code. Sometimes a theme. Sometimes plugins.

Best to check to make sure.

Missing image dimensions can cause major browser rendering slow downs.

73. WordPress: Load Analytics Asynchronously

This one is tricky, as every analytics system is different.

My suggestion: use GTM (Google Tag Manager) and let GTM manage all your analytics tracking and pixel retargeting code.

74. WordPress: Load Ad Networks Code Asynchronously

When I see this item listed in a speed up guide…​ with no caveats/exceptions listed…​ I can tell immediately, the writer has never run any traffic personally.

If you blindly follow this suggestion, you’ll likely get your account banned instantly with many Ad Networks, because most Ad Networks have stringent requirements for flowing their Ad Units inline (synchronously) at page render time. Many specifically disallow asynchronous serving of their Ad Units.

This said, check with your Ad Networks and ask for an exception. Likely you’ll have to be running substantial traffic before you can ask for an exception, but I have many clients who’ve successfully negotiated exceptions, so this is possible.

75. Speed Myth: Apache is Slow

Ah the old NGINX marketing nonsense about Apache being slow.

Here’s a random static content site I picked off one of my machines which shows Apache speed serving straight text.

lxd: net11-david-favor # h2speed --count=100000 https://LaunchSpeedHosting.com/
h2load -ph2c -t16 -c16 -m16 -n1600000 https://LaunchSpeedHosting.com/
finished in 42.80s, 37379.63 req/s, 255.35MB/s
requests: 1600000 total, 1600000 started, 1600000 done, 1600000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 1600000 2xx, 0 3xx, 0 4xx, 0 5xx
Requests per second: 37,379.63
Requests per minute: 2,242,777.8
Requests per hour  : 134,566,668

So 2,000,000+ requests/minute for Apache. I’d say this is fast enough for any site.

76. Speed Myth: WordPress is Slow

WordPress is blazing fast, out of the box.

If you add slow and badly written code (theme and plugins), then you can slow WordPress to a crawl.

This has nothing to do with WordPress and everything to do with your code choices.

Most of my clients ask me to vet themes and plugins for both speed and security, before ever installing them.

Likely best if you find someone to vet all your code for speed and security the same way I vet code for my clients.

Here’s a random WordPress site I picked off one of my machines which shows Apache speed serving a full WordPress page.

Site name changed at client’s request.

lxd: net11-dev # h2speed --count=10000 https://yoursite.com/
h2load -ph2c -t16 -c16 -m16 -n160000 https://yoursite.com/
finished in 13.40s, 11936.79 req/s, 647.26MB/s
requests: 160000 total, 160000 started, 160000 done, 160000 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 160000 2xx, 0 3xx, 0 4xx, 0 5xx
Requests per second: 11,936.79
Requests per minute: 716,207.4
Requests per hour  : 42,972,444

This site is a bit slower than I like and it was tooled by someone else. For correctly tooled WordPress sites, speed should be 1,000,000+ requests/minute. This site still clocks in at a respectable 700,000+ requests/minute.

77. Speed Myth: SSL is Slow

This is as silly as saying WordPress is slow.

If you run HTTP2 and a well tuned SSL configuration, your sites will run as fast or faster than non-SSL sites.

If you run HTTP1.1 or HTTP2 and a poorly tuned SSL configuration, you can slow your site to a crawl.

This is where Stellar Hosting comes to your rescue again.

Stellar Hosting should setup SSL to be blazing fast.

78. Speed Myth: Hardware Load Balancers

Load balancers are devices which sit between visitors and your site. Their premise is to somehow allow your site to handle more traffic.

If you think through this, it makes no sense, at least the way most load balancers are setup.

Normal setup is to place 2 or more load balancers in front of your site, much like CloudFlare, then round robin requests to your site.

Problem is very few people understand how to do this correctly, meaning setting up caching correctly on each load balancer. If caching is incorrect, the only thing load balancers do is…​ well…​ nothing…​ If traffic arrives on multiple load balancers, which flow through to your site (with broken caching), then the exact same level of traffic reaches your site.

If caching is working, then likely it’s misconfigured and you can end up changing content on your site and having no visitors see your content changes for hours.

The true way to load balance is not to use load balancers. The true way to correctly approach this is to run multiple site instances using MariaDB Master/Master Replication which is complex and time consuming to setup and maintain.

If your traffic is lower than 1,000,000 page views/minute, just hire someone to setup a well tuned LAMP Stack. This will be far quicker and cheaper to setup and maintain.

79. Speed Myth: Hardware DOS/DDOS Mitigators

DOS/DDOS (Denial of Service/Distributed Denial of Service) mitigators are devices designed to mitigate or block this type of attack.

10 years ago these might have been useful.

Today simply using Apache mpm_event and PHP FPM and Fail2Ban correctly can sense and block DOS/DDOS attacks simply and cheaply.

For this approach to work also requires a well tuned LAMP Stack, so LAMP can handle attack traffic for time required for Fail2Ban to sense and block this attack traffic.
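A minimal Fail2Ban sketch enabling the stock Apache jails; these jail names ship with Fail2Ban, but thresholds and which jails make sense depend on your logs:

```shell
cat > /etc/fail2ban/jail.d/apache.local <<'EOF'
[apache-auth]
enabled = true
[apache-badbots]
enabled = true
EOF
systemctl restart fail2ban
fail2ban-client status    # list active jails and current bans
```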

80. Speed Myth: CloudFlare and Other CDNs

Sigh…​ This is a horrible myth.

The premise is CDNs provide faster Edge Serving, which means since visitors are closer to one physical server, they see content faster.

This was absolutely true in the early 1990s.

This is still true if most of your visitors live in Thailand (which isn’t connected to the global Internet Backbone) or in Africa, where most connections are 3G mobile.

If your visitors are anywhere else, the only speed increase you’ll see is a few microseconds or maybe milliseconds (a millisecond is 1/1000th of a second).

Your visitors will never notice the speed difference, because it’s barely measurable.

The downside of CDNs is immense. When you use any external/offsite tech you make your site speed and stability dependent on the speed and stability of the aggregate (combination) of all external/offsite technologies you use.

Let’s say you use 5 external services. If 1 out of 5 is slow or down, your site is slow or down.

CDNs, like CloudFlare are even worse, because they stand between your entire content serving process and your visitors.

Most of my clients quickly learn CloudFlare is awful, when they end up losing a truckload of money due to some CloudFlare glitch. This only happens once.

My suggestion is you avoid having this happen even once.

Work with a Stellar Hosting Company and you’ll require no ugly tech like this.

81. Speed Myth: NGINX and Varnish and Squid

Years ago, before mpm_event and HTTP2 support, there might have been some usefulness to running one of these proxies.

These days Apache is so fast, it’s just not worth dealing with another layer of complexity and cruft and debugging all the problems they cause.

WordPress sites I currently deliver to clients run at a local speed of a minimum of 1,000,000+ requests/minute.

Placing any proxy in front of this type of high speed site, will only slow down throughput, as anytime you go through another layer of code, the only possible outcome is less throughput.

Apache alone handles all my clients’ traffic requirements, so running Apache without NGINX provides a much easier runtime environment to manage.

82. Speed Myth: Database Replication

Database Replication only provides meaningful speed increase when you have a write intensive application, like realtime data logging or a CRM or some other event capture type system, where events occur very rapidly.

WordPress is a read intensive application. Even if you’re publishing a post every minute, this activity produces minuscule write burden on MariaDB.

The first step of coding a write intensive application is to work out how the application scales and prunes old data. This is the point at which you determine if you require Database Replication.

This situation rarely occurs with WordPress sites.

Best to avoid Database replication unless your application requires this specifically, to keep up with rapid event processing.

83. Speed Myth: W3TC Caching Plugin

If you actually test this plugin’s speed you’ll be surprised that in most cases W3TC slows down sites.

Well, at least well tuned sites. W3TC and CloudFlare and NGINX go in the same gnarly, black bucket of sludge with me. For poorly tooled sites, they may gloss over problems for a while, but better to correctly tool your site in the beginning.

I once invested 4 hours of time testing various W3TC tunings and could only get around 25% of the speed of WP Super Cache.

And with WP Super Cache, there’s only a single option to toggle for fairly fast speed and only a handful of other options for blazing fast speed.

84. Speed Myth: WP Super Cache Plugin

WP Super Cache used to be the definitive WordPress caching plugin, but currently this code is badly broken.

When mod_rewrite caching is configured, page requests are correctly served from the cache and then incorrectly leaked through to PHP.

The net result of this is 100% of page requests involve PHP, causing high traffic sites to crash within minutes.

Until the WP Super Cache mod_rewrite bug is fixed, use WP Fastest Cache, which correctly processes mod_rewrite requests from cache with no leakage down to PHP.

85. Speed Myth: Block Image Hotlinking

Setting this up and getting it right for all situations is rarely as easy as people espouse. More likely you’ll end up with images missing and end up with incredibly hard to debug problems.

Unless you’re running an asset download site - images, audio, videos, fonts - it’s unlikely your site is important enough for anyone to hotlink.

Even if they do, who cares. Likely this will be minuscule traffic, while blocking hotlinking costs hours of headaches.

86. Speed Myth: Use Image Sprites

This only provides speed increase for old HTTP1.1 sites.

Sprites are complex to implement and manage, as Sprite regeneration must occur each time you add a new image.

Better to setup HTTP2, which nullifies any large gain from Sprites.

87. Speed Myth: Image Lazy Loading

This was useful for image heavy sites, before HTTP2 became available.

Image Lazy Loading is near impossible to get right on all devices, especially mobile devices. The technology is complex and will have you pulling your hair out in no time.

Impossible to tell if it’s working correctly on all visitor devices and even if someone does tell you it’s broken, debugging is another nightmare.

What Lazy Loading proposes is deferring image loading till a person scrolls to the image.

Better to use HTTP2 where all assets, including all images, all stream over one connection. Images which appear first on a page will have a higher priority and tend to load first.

If you use HTTP2 and use https://WebPageTest.org for mobile testing, you’ll end up with a far more optimized page, which renders images correctly for all devices.

Better to setup HTTP2, which nullifies any large gain from Image Lazy Loading.

88. Speed Myth: Remove Unnecessary Plugins and Add-ons

Plugins and Addons only come into play the first time a page is requested. At this point your caching system should only return a cached HTML file for future requests…​ till cache expiration…​ then this process recurs.

I say do remove all unnecessary code, as it makes sites more complex to manage and tends to bloat database backups.

And for a well tooled site, Plugins and Add-ons only affect site speed when a page is cached for the first time.

Many of my clients’ sites have several hundred plugins with no speed effect, because caching works correctly.

If caching or LAMP Stack tuning is broken (poorly implemented), a site with very few plugins will still go down under high traffic loads.

A site with good caching and LAMP Tuning can run 100s of plugins and still run blazing fast under high traffic loads.

89. Speed Myth: Limit or Remove Social Sharing Buttons

This myth has the same answer as the number of plugins myth.

Social sharing buttons, if they produce any load at all (many don’t), only come into play when a page is first cached.

So Social Sharing Buttons have no real effect on WordPress Speed.

Well…​ no speed effect at the server level.

At the browser level is another matter. The best way to implement Social Sharing Buttons is to ensure they don’t block your content rendering. In other words, associate your Social Sharing Buttons with a <div> tag somewhere on your pages, which is populated with Social Sharing Buttons after the Document Ready event fires.

If Social Sharing Buttons block rendering your document on browsers, then they can have a disastrous effect on conversions.

And all this has nothing to do with your WordPress site speed.

This only has to do with Browser render speed.
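One way to implement the deferred pattern above is to keep an empty placeholder <div> in your markup and fill it only after the DOM is ready. A minimal sketch - the share endpoints are the standard public ones, but the element id "share-buttons" and the markup are hypothetical, not tied to any particular plugin:

```javascript
// Build the share-button markup as a string (kept pure so it can be tested).
function buildShareButtons(url) {
  const services = [
    { name: 'Twitter',  href: 'https://twitter.com/intent/tweet?url=' + encodeURIComponent(url) },
    { name: 'Facebook', href: 'https://www.facebook.com/sharer/sharer.php?u=' + encodeURIComponent(url) },
  ];
  return services
    .map((s) => '<a class="share-btn" href="' + s.href + '">' + s.name + '</a>')
    .join(' ');
}

// In a browser, fill the placeholder only after the DOM is ready, so the
// buttons never block the first render. "share-buttons" is a hypothetical
// placeholder <div> id in your page template.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', function () {
    const slot = document.getElementById('share-buttons');
    if (slot) slot.innerHTML = buildShareButtons(window.location.href);
  });
}
```

Because the buttons arrive after first paint, a slow social network can never delay your content.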

90. Speed Myth: Regularly Optimize Your Database

Again, since database access only occurs on the first page reference and then again when your cache expires, the effort required to get this right and not completely corrupt your database is monumental, for no measurable speed increase.

Yes - optimizing your database will keep your backups small.

No - optimizing your database will not affect your site speed, if your site’s caching is working correctly.

91. Security Myth: Security Plugins

No WordPress security plugin is useful. They all slow sites to a crawl and worse, they only report site hacks after your site is hacked.

Better to use Stellar Hosting, as the best way to fix a hacked site is to never be hacked in the first place.

92. Security Myth: Changing Your Database Prefix

All the articles suggesting you should change your database prefix are written by idiots. Too strong? Okay, people who are intellectually challenged around site security.

If a hacker penetrates to a point where they can access your database tables by prefix and table name, then it’s all over.

Any hacker penetrating to this level already has access to your table data, even if your prefix is jumpin-jack-flash.

Changing your database prefix won’t slow down your site, but it will cause many plugins and other command line tools to break, so there’s just no good reason to do this.

93. Security Myth: Changing Login Slugs To Block Brute Force Attacks

All the articles suggesting you should change your login slugs to hide wp-admin and wp-login.php are written by idiots. Oops, too strong again…​ Okay, they’re intellectually challenged around site security.

Changing your slug will cause WordPress to fail in many odd ways. If you’re really interested in the problems this causes, you can search for admin-ajax.php problems related to login slug changes.

The correct way to handle this is to just let the brute force attacks happen and run fail2ban. I run a fail2ban recipe which tracks logins, so 3 failed login attempts in 1 hour block the attacking IP for 1 hour.

Brute Force Attacks typically fire 100s or 1000s of simultaneous login attacks, so only a handful make it to your site and fail2ban blocks the rest.

Using fail2ban should always be your first line of security defense, which renders the idea of changing Login Slugs just a bunch of nonsense.
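As one illustration of such a recipe, a fail2ban jail can encode the 3-failures-in-1-hour rule directly. The filter name and log path below are hypothetical - how failed WordPress logins reach a log file depends on your setup (for example, a small must-use plugin writing to syslog, or web server access logging):

```ini
; /etc/fail2ban/jail.local (illustrative fragment)
[wordpress-login]
enabled  = true
port     = http,https
filter   = wordpress-login        ; hypothetical filter matching failed WP logins
logpath  = /var/log/wp-login.log  ; hypothetical log location
maxretry = 3        ; 3 failed attempts...
findtime = 3600     ; ...within 1 hour (seconds)...
bantime  = 3600     ; ...bans the attacking IP for 1 hour
```

Because the ban happens at the firewall level, banned IPs never reach PHP at all, which is why this approach costs near zero server resources.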

94. Security Myth: Wrapping Login Slugs With Another Login

This myth is propagated by idiots too.

Wrapping wp-admin and wp-login.php with a second login layer does absolutely nothing for security, can subtly break your site, and will certainly break many server level tools for checking site health and performance.

Using fail2ban should always be your first line of security defense, which renders the idea of Login Slug double passwording just a bunch of nonsense.

95. Safe Guard: Managing Custom Code

If you must use Premium Themes or Plugins for a long running project, best you implement part of the https://WordPress.org toolset which reviews code.

Use the PHP Compatibility Tester to ensure the code you’re running actually works with the version of PHP you’re running.

Just because your site serves content, doesn’t mean all’s well.

Many edge conditions can occur, which cause sites to fail. Many of these edge conditions show up under high traffic loads.

For client sites I host with custom code, I set up the PHP Compatibility Tester for the code developers to run on a regular basis.

Before any PHP upgrade is installed, we first verify the site will survive the PHP upgrade.

This essential step is required for custom code and isn’t required for https://WordPress.org hosted code, as these checks already run during Core, Theme, and Plugin reviews.
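If you prefer running this check from the command line instead of a plugin, the PHPCompatibility ruleset for PHP_CodeSniffer performs the same kind of scan. A sketch, assuming phpcs and PHPCompatibility are already installed, with a placeholder plugin path:

```shell
# Scan a plugin directory for code incompatible with PHP 7.2.
phpcs -p wp-content/plugins/my-custom-plugin/ \
  --standard=PHPCompatibility \
  --runtime-set testVersion 7.2
```

Run this before every planned PHP upgrade, with testVersion set to the target PHP version.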

96. Avoid: Premium Themes: Hackable By Design

There’s just no way to sugar coat this.

Many Theme resellers have a business model of selling themes with known backdoors, then hacking your sites and stealing your users’ data.

I refer to this type of attack as Data Siphoning, where hackers siphon off your user data on a regular basis.

Before you ever activate a theme, test it with the plugins Theme Check and TAC - Theme Authenticity Checker.

If your theme passes with zero errors/warnings, you have a good theme.

If you have any errors/warnings, your best option is to just use a clean theme instead. Your other option is to hire someone to vet your theme, to determine if the reported errors suggest a Siphon Backdoor.
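If you use WP-CLI, both checkers can be installed in seconds. The plugin slugs shown are the usual WordPress.org repository slugs - verify them in the plugin directory before running:

```shell
# Install and activate the theme review plugins from WordPress.org.
wp plugin install theme-check --activate
wp plugin install tac --activate
```

Then run each checker against the candidate theme from the WordPress admin before ever activating the theme.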

97. Avoid: Premium Themes: Hackable Over Time

Consider the vetting/review process of a https://WordPress.org hosted theme.

This process is grueling. If you take a look at Paid/Premium themes hosted at https://WordPress.org you’ll notice a common business model.

Consider the GeneratePress Theme which follows this model…​

  1. Themes are free and full featured.

  2. The theme developers turn out stellar code, which means their code passes the https://WordPress.org review process.

  3. Themes are 100% free of backdoors, including Siphon Backdoors.

  4. A Paid/Premium version usually adds fast support and sometimes a few additional features.

Your cheapest and best option is to use a https://WordPress.org hosted theme like GeneratePress along with a plugin like the Elementor Page Builder.

Now consider how PHP upgrades keep arriving, from 5.6 to 7.0/7.1, now 7.2, and shortly 8.0.

https://WordPress.org hosted themes have to go through a PHP version vetting process too. In other words, these themes must work with all versions of PHP WordPress supports.

Custom themes with no stringent reviews like this accumulate many subtle errors, which increase their instability over time, until finally the company goes out of business or gives up and deprecates/retires the theme.

You’ll always end up paying way more than you imagine for a Paid/Premium theme, for any long running project sites.

98. Avoid: Premium Plugins: Hackable Over Time

The code quality for Paid/Premium plugins tends to be much lower than themes.

Plugins rarely have built in backdoors, the way themes do.

All the same hackable considerations apply to plugins as themes and usually plugins tend to suffer far worse from PHP upgrades than themes.

With themes, PHP upgrades tend to cause hard failures, where sites present a white page. With plugins, failures can be more subtle, so database corruption may occur over time, in ways which aren’t readily visible, unlike theme crashes.

99. Avoid: Custom Themes

I’d say this is by far the most costly mistake I’ve seen people make. When you have a custom theme developed, who’s going to maintain your custom theme code? Each time WordPress APIs and PHP APIs change, so will your theme.

Instead of custom themes, use GeneratePress and Elementor to build your site. There’s no feature a custom theme provides, that can’t be duplicated in GeneratePress and Elementor.

100. Avoid: Offsite Callouts

Anytime you call offsite to do anything, your site speed and stability become tied to the speed and stability of every offsite call you make. Services like CloudFlare, Click Funnels, Lead Pages, and Optimize Press have continual problems.

Use offsite services to develop proof of concept sites and before you hit your launch button, best convert all offsite callouts to onsite assets.

101. Avoid: WordPress Redirection Plugins

Most of these are horribly written. They read every redirect out of database tables, rather than using the WordPress Transient API.

Best to just use .htaccess file based redirections.

Or use Safe Redirect Manager if you must use a plugin, which is the only redirection plugin currently using the WordPress Transient API.
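A file-based redirect needs no PHP or database access at all, since Apache answers before WordPress ever loads. A minimal .htaccess sketch using Apache’s mod_alias, with placeholder URLs:

```apache
# Permanent one-to-one redirect (placeholder URLs).
Redirect 301 /old-page/ https://example.com/new-page/

# Pattern-based redirect for a whole section (placeholder paths).
RedirectMatch 301 ^/old-blog/(.*)$ https://example.com/blog/$1
```

Keep these rules above the WordPress mod_rewrite block so they match first.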

102. Avoid: WordPress Security Plugins

Do not ever, ever, ever use any WordPress security plugin.

They are all scams…​ well maybe that’s bit harsh…​

Better said, coupling WordPress with fail2ban and other OS level security produces the same, and usually better, results than any security plugin. Also, OS based security runs at near zero server load, while WordPress security plugins are some of the biggest resource hogs.

103. Avoid: Considering Post Revisions Count

This isn’t a factor, if your WordPress caching plugin is working correctly.

If your WordPress caching is broken, post revision count is the least of your worries.

This said, setting a post revision count can keep your workflows sane. Keep whatever number of revisions makes sense for your workflow. Unlimited revisions is just crazy. Likely 1-3 is less than useful. Maybe 10 is perfect.

Set your post revision count to smooth your workflow, not as a performance consideration.
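WordPress exposes this as a wp-config.php constant, so capping revisions takes one line (10 shown, per the suggestion above):

```php
<?php
// In wp-config.php, above the "That's all, stop editing!" line.
// Keep at most 10 revisions per post; true means unlimited, false disables revisions.
define( 'WP_POST_REVISIONS', 10 );
```

This only limits revisions going forward; it doesn’t delete revisions already stored in the database.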

104. Avoid: Considering Pingbacks and Trackbacks

If your site caching is working correctly and your database is tuned correctly, processing this traffic requires near zero resources.

Carefully read Dealing with Trackbacks and Pingbacks, then determine what’s best for your site.

In other words, your choice should relate to monetization of your site, rather than site performance issues.

105. Avoid: CSS and Javascript: File Concatenation

This can cause a world of hurt and create situations which are impossible to debug.

Better to run HTTP2, which renders concatenations inconsequential.

106. Avoid: CSS and Javascript: File Relocation

If you’d like to create a site that’s impossible to debug, where you also have no clue how your site looks on mobile devices, then use a random plugin which suggests you relocate CSS or Javascript.

This will most certainly break many Ad Unit placements (from Ad Networks). It will also violate many Ad Network TOS (Terms of Service) agreements, which will cause them to kick you out of their networks forever.

This will also break many Pixel Systems and Tag Manager Systems.

Better to run HTTP2, which renders relocations inconsequential.

107. Avoid: Considering admin-ajax.php Slowness

Many speed up guides suggest using a heartbeat adjustment plugin to fix these problems.

How admin-ajax works depends on many factors, and changing the heartbeat randomly may violate TOS (Terms of Service) agreements for various external sites.

More likely, changing the heartbeat period to some crazy long value will just break themes and plugins which require a certain heartbeat frequency to work correctly.

Better to run correct WordPress caching and optimally tuned PHP FPM and PHP Opcache, so you can ignore admin-ajax.php requests and your site will run fine, even when hammered with admin-ajax.php theme and plugin requests.
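What "optimally tuned" means varies by server, but the knobs in question live in php.ini and the PHP FPM pool file. An illustrative fragment - the values shown are common starting points for a modest server, not universal recommendations:

```ini
; php.ini - enable and size the opcache (starting-point values).
opcache.enable=1
opcache.memory_consumption=192
opcache.max_accelerated_files=10000
opcache.validate_timestamps=1

; FPM pool (www.conf) - size workers to fit available RAM (illustrative).
pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 10
```

Size pm.max_children from your average PHP process memory footprint, so a worst-case burst of admin-ajax.php requests can’t push the server into swap.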

108. Avoid: Installing Server Side PageSpeed

Ah the mod_pagespeed Apache module.

Just don’t do it. If anyone tells you to do this, it’s likely best to instantly fire them and move on.

Harsh. I know. Unfortunately, the mod_pagespeed module is a beast. It’s the equivalent of W3TC. Both W3TC and mod_pagespeed promise much…​ if you’re willing to spend hours of your life tweaking settings.

Then even after hours of tweaking, you’ll get a fraction of a speed increase. Last time I tested mod_pagespeed…​

Hours of tweaking settings only slowed down my well tuned LAMP Stack and correctly implemented mod_rewrite WordPress caching plugin sites.

- David Favor, Server Savant and Head Bottlewasher

© David Favor 1994 - 2018