With caching enabled, the application generates the page once and stores the result for a period of time called the TTL (time to live). Until the TTL expires, clients receive the stored copy of the page instead of regenerating it on every request. Let's go over how to set up caching in Nginx!

To enable caching in Nginx, you first need to define where cached pages are stored and how much shared memory to reserve for cache keys and metadata (the keys_zone parameter below creates a 32 MB zone named all; the optional max_size parameter caps the total size of all cached pages on disk). This is done in the configuration file /etc/nginx/nginx.conf using a directive in the http section:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=all:32m;  
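The keys_zone parameter only sizes the in-memory zone for keys and metadata; the amount of data kept on disk is controlled separately. A sketch with two optional parameters (the values here are illustrative, not part of the original setup):

```nginx
# Optional: cap the on-disk cache at 1 GB and evict entries
# that have not been accessed for 24 hours (illustrative values).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=all:32m
                 max_size=1g inactive=24h;
```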

Don't forget to create the directory for caching, which we specified above:

# mkdir /var/cache/nginx

Next, we will change the website settings by creating another server section. Move the main server to another port (for example, 81); on the standard port 80 we will now have a caching host, which either returns data from the cache or forwards requests to the main host. This might look something like this:

server {  
        listen 80;

        location / {
                proxy_pass http://127.0.0.1:81/;
                proxy_cache all;
                proxy_cache_valid any 1h; # Cache for 1 hour 
        }
}
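To check whether a response actually came from the cache, you can expose nginx's built-in $upstream_cache_status variable (MISS, HIT, EXPIRED, and so on) as a response header. The header name here is just a convention:

```nginx
        location / {
                proxy_pass http://127.0.0.1:81/;
                proxy_cache all;
                proxy_cache_valid any 1h;
                # Report cache status (MISS/HIT/EXPIRED/...) to the client:
                add_header X-Cache-Status $upstream_cache_status;
        }
```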

For the main server:

server {  
        listen 81;

        location / {
                try_files $uri $uri/ /index.php?$query_string;
        }

        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt { access_log off; log_not_found off; }

# access_log off;
        access_log /var/log/nginx/letsclearitup.access.log;
        error_log /var/log/nginx/letsclearitup.error.log error;

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/run/php/letsclearitup.sock;
                fastcgi_hide_header X-Powered-By;
        }

        location /status {
                fastcgi_pass unix:/run/php/letsclearitup.sock;
                include fastcgi.conf;
                allow 127.0.0.1;
                deny all;
        }
}

If the request carries any cookies (for example, a session cookie for a logged-in user), caching can be disabled:

server {  
        listen 80;

        location / {
                if ($http_cookie ~* ".+" ) {
                        set $do_not_cache 1;
                }
                proxy_no_cache $do_not_cache;     # do not store the response in the cache
                proxy_cache_bypass $do_not_cache; # do not answer from the cache
                proxy_pass http://127.0.0.1:81/;
                proxy_cache all;
                proxy_cache_valid any 1h; # cache for 1 hour
        }
}

It also makes sense to cache error responses, to reduce the load caused by repeated requests for broken or nonexistent parts of the site:

server {  
        listen 80;

        location / {
                if ($http_cookie ~* ".+" ) {
                        set $do_not_cache 1;
                }
                proxy_no_cache $do_not_cache;     # do not store the response in the cache
                proxy_cache_bypass $do_not_cache; # do not answer from the cache
                proxy_pass http://127.0.0.1:81/;
                proxy_cache all;
                proxy_cache_valid 404 502 503 1m;
                proxy_cache_valid any 1h; # cache for 1 hour
        }
}
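A related option worth knowing: proxy_cache_use_stale lets the caching host serve a stale copy when the backend is down or slow, instead of returning an error to the client. A sketch; tune the conditions to your needs:

```nginx
        # Serve a stale cached copy if the backend errors out, times out,
        # or the entry is already being refreshed by another request.
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
```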

Nginx can also cache responses from FastCGI directly. To use this feature, add the following to the http section of /etc/nginx/nginx.conf:

fastcgi_cache_path /var/cache/fpm levels=1:2 keys_zone=fcgi:100m;  
fastcgi_cache_key "$scheme$request_method$host$request_uri";  

Create the directory:

# mkdir /var/cache/fpm
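For the curious: on disk, nginx names each cache file after the MD5 hash of the cache key, and levels=1:2 turns the last hex character and the two before it into subdirectories. A small illustration (the key below is hypothetical, just matching the fastcgi_cache_key format above):

```shell
# Hypothetical cache key in the "$scheme$request_method$host$request_uri" format:
key='httpGETexample.com/index.php'
# nginx hashes the key with MD5 to get the cache file name:
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
# With levels=1:2, the last hex character and the two preceding ones
# become nested subdirectories under the cache path:
last=$(printf '%s' "$hash" | cut -c32)
mid=$(printf '%s' "$hash" | cut -c30-31)
echo "/var/cache/fpm/$last/$mid/$hash"
```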

Then in the settings of the site (in the server section for the main host) add these lines:

server {  
        listen 81;
...
        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/run/php/letsclearitup.sock;
                fastcgi_hide_header X-Powered-By;
                fastcgi_cache fcgi;
                fastcgi_cache_valid 200 60m; # cache responses with code 200 for 1 hour 
        }
...
}
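As with the proxy cache, you will usually want to skip the FastCGI cache for requests that carry cookies (logged-in users). One hedged sketch, using the raw Cookie header as the skip condition, inside the same location ~ \.php$ block:

```nginx
                # Any non-empty Cookie header disables both storing and
                # serving from the fastcgi cache for this request:
                fastcgi_no_cache $http_cookie;
                fastcgi_cache_bypass $http_cookie;
```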
Updated Feb. 25, 2019