Category Archive: Tutorial

Ajax + jQuery: An Introduction

Since Twitch released their ‘Twitch Extensions’ SDK, I’ve been having a lot of fun playing around with different things I can integrate into Twitch’s panels. I’ve built a WordPress Panel and I’m working on Tumblr and Instagram Panels (which require a lot more API work).  But the crux of everything I’ve been working on relies on getting data from source to destination in a way that makes sense to everyone. At one point, RSS feeds were the de facto standard. Personally, I think there’s a better solution – one that uses a purer form of data in transmission: Ajax.

Ajax (Asynchronous JavaScript and XML) uses JavaScript to pull a data-based content feed and display it somewhere in a content space.  I prefer using jQuery’s $.ajax function with a JSON feed, but there are numerous combinations of Ajax techniques and feed formats you can use.  Today, we’re going to write a very simple “feed fetching” application that will allow us to not only fetch the feed, but to re-fetch it on demand.

See a working JSFiddle here:

HTML Markup

First off, we’re going to generate some basic HTML. We’re not going to re-invent the wheel here. We need two items: a container for the content we’re going to pull, and a mechanism for fetching more. A div and button will suffice:

    <div id="feed"></div>
    <button class="button">Random Chuck Norris Joke</button>

Notice how the #feed doesn’t have any additional fields; it’s just an empty container. We’re going to call some basic markup to fill that as we bring the feed in.

The jQuery / JavaScript

First off, we need to ensure that we’ve brought in the jQuery library:

   <script type="text/javascript" src=""></script>
   <script type="text/javascript">
       // Our fetchFeed() code will go here.
   </script>

We’re also going to include a script tag. Normally, you could include this in a script file to clear up the markup, but for a small example it’s fine to leave it inline so we can show our work.

Now, the fun part: including the $.ajax command that jQuery uses to fetch our feed.

function fetchFeed() {
  $.ajax({
    url: '',
    dataType: 'json',
    type: 'GET',
    success: function(data) {
      var output = '<p>' + data.value.joke + '</p>';
      $('#feed').html(output);
    },
    error: function(jqXHR, exception) {
      var output;
      if (jqXHR.status === 0) {
        output = '<p>Unable to access the feed. Please check your URL and try again!</p>';
      } else if (jqXHR.status == 404) {
        output = '<p>The requested page was not found. [404]</p>';
      } else if (jqXHR.status == 500) {
        output = '<p>Internal Server Error [500].</p>';
      } else if (exception === 'parsererror') {
        output = '<p>Requested JSON parse failed.</p>';
      } else if (exception === 'timeout') {
        output = '<p>Time out error.</p>';
      } else if (exception === 'abort') {
        output = '<p>Ajax request aborted.</p>';
      } else {
        output = '<p>Uncaught Error.\n' + jqXHR.responseText + '</p>';
      }
      $('#feed').html(output);
    }
  });
}

$('button.button').click(function() {
  fetchFeed();
});

fetchFeed();


There’s a lot going on here. Let’s break it down:

  • Everything is wrapped in the fetchFeed() function. This wraps all of the scripting we need to run into a function that we can call 1) automatically on page load and 2) on an action we will define later.
  • Inside it, we call $.ajax: a helper function included in jQuery that makes it easier to write Ajax calls.
  • The url option is our feed URL – in this case, a random “Chuck Norris” joke. Because why not.
  • The dataType option tells our application what KIND of data to pull. In this case, it’s JSON. If it were another format, we could define that here and jQuery would format said data appropriately.
  • The type option is how we’re approaching the data. We’re “GET”ting this data, which means we’re fetching data from another source. If we had the appropriate permissions we could also “POST” or “PUT” data onto an external server – good if we’re using a third-party access tool for an application. Usually we’d need some sort of API key to do that. Since the Chuck Norris joke API is public, we don’t need one just to read the posts.
  • The success callback defines what we do on a successful response. In this case, we’re setting an ‘output’ variable and defining it as a bit of HTML. Inside of the HTML we’re pulling part of our “fetched feed” (data.value.joke). This varies based on the feed we’re working with, but it will always come back as data. Finally, we’re setting the HTML of the #feed container to our generated output.
  • The error callback holds our “error handlers”. Based on what response we get back on a failed fetch, we can display an error message. In this case, I’m displaying it in the feed container itself, but I could also echo it with console.log if I were debugging.
  • Next, we set up a click event on the button we added. When that button is clicked, the fetchFeed() function is called immediately.
  • Finally, once all of the other functions are defined, we fire fetchFeed() once on the initial page load.
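As an aside, the error-handling branch is plain JavaScript, so it can be factored into a small helper and unit-tested on its own. A quick sketch (the errorMessage name and the exact messages are just illustrative):

```javascript
// Maps a failed request's jqXHR.status / textStatus pair to a display message.
// This mirrors the if/else chain in the error callback above.
function errorMessage(status, exception) {
  if (status === 0)   return '<p>Unable to access the feed. Please check your URL and try again!</p>';
  if (status === 404) return '<p>The requested page was not found. [404]</p>';
  if (status === 500) return '<p>Internal Server Error [500].</p>';
  if (exception === 'parsererror') return '<p>Requested JSON parse failed.</p>';
  if (exception === 'timeout')     return '<p>Time out error.</p>';
  if (exception === 'abort')       return '<p>Ajax request aborted.</p>';
  return '<p>Uncaught Error.</p>';
}
```

The error callback then shrinks to a one-liner: $('#feed').html(errorMessage(jqXHR.status, exception));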

Looking back at the JSFiddle you can see the script in action. On the initial page load, it’ll pull one random joke from the API. Pushing the button will fetch a new joke and replace the HTML in the container.

AJAX provides a fantastic, easy way to manage data without having to refresh a page. You can seamlessly reload data as it’s added, or provide an easy way to add content to a page without disturbing other elements on the page.

Continue Reading...

Adding a Favicon / Site Icon to your WordPress Site

Chances are, you’ve bookmarked a site to re-visit it later. And, if you have, you’ve seen the small icon that shows up next to the site in your bookmarks.

See the first three? No icon.  The final one, however, does have an icon – called a ‘favicon’ – associated with it.  WordPress, for a long time, didn’t have a native way to include favicons – we had to use plugins like All-in-One Favicon to get the icon to show up.  Now, the WordPress customizer has a native way to bring that icon into WordPress – the “Site Icon” option.

Adding a Favicon

To get to it, open up your WordPress Dashboard and click on “Customize”.

The customizer will open – allowing you to set various options in your website at a glance.  The top option, Site Identity, is where we’ll find the settings for the Site Icon.

The site icon has to be at least 512px wide and tall – preferably a square image.  The icons can have transparency, and it’s best if you have a .png image to use.

Not only do Site Icons get used as the favicon, but if a user adds your website to their home screen (via Android or iOS), that icon will also appear there as well.
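For the curious, once a Site Icon is set, WordPress prints the icon tags into your page’s head via wp_head(). The output looks roughly like this (the file paths are illustrative – WordPress generates resized copies of your uploaded image):

```html
<link rel="icon" href="/wp-content/uploads/site-icon-32x32.png" sizes="32x32" />
<link rel="icon" href="/wp-content/uploads/site-icon-192x192.png" sizes="192x192" />
<link rel="apple-touch-icon" href="/wp-content/uploads/site-icon-180x180.png" />
```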

Continue Reading...

Advanced Custom Fields: Building a Client Friendly “Page Builder”, Part 1

There are very few subjects debated so hotly in the WordPress world as the ones regarding “Page Builders”. For the unfamiliar, a page builder allows the end user to set up content without needing knowledge of code.  While – to the end user – the allure of being able to have full control over design and content is nice, most developers shy away from them due to the resources needed to give that sort of power. Page bloat – loading of unnecessary scripts that drives up load time – is usually a problem with such plugins. Even caching plugins can only do so much to curb the bloat.

Over the years, I’ve tried to find a nice, happy medium – something that was “developmentally sound”, but that would allow at least some customization of the content beyond just being able to enter text.

A few years ago, I got a primer from Tammy Hart about Advanced Custom Fields – specifically, the “Flexible Content” fields.  Flexible Content fields allow a user to select from predefined options, fill in blanks, and the code renders based on what content is selected.  Given the combination of options available in ACF, there is a way to build something that resembles a page builder – the ability to have different types of content – without the “page bloat”.

Note: I only recommend paid plugins when the cost is outweighed by the benefits. This plugin has been in my toolkit for several years, so I absolutely recommend it for any developer wanting to extend their capabilities.  You can purchase it here. The free version will not work, as you need the Pro version for the “Flexible Content” add-on.

Page Builder Overview

This is the first in a series of posts. The first post will be the initial setup of the “module loop” and a sample module.  The second post will contain sample modules used in many modern page builders. The final post will be a few “tricks” I’ve learned – global (sitewide) modules, defining module locations with hooks, and other similar aspects.

The Initial Module Loop

Once you’ve purchased and installed the plugin, a new option (“Custom Fields”) will show up on the left side.  Clicking on it reveals a page that shows any field groups you’ve defined.

Building a Page Builder with Advanced Custom Fields

Add a new field group, give the group a name, and admire all of the options you have to set. Luckily, we can avoid most of them in favor of setting the essentials.

Building a Page Builder with Advanced Custom Fields

Yes, there are a lot of options – ranging from input boxes to date pickers.  For this tutorial, we’re going to focus on the “Flexible Content” option.  Selecting it gives you the basics – the title, the slug, and the ability to define your modules.

Building a Page Builder with Advanced Custom Fields

I’ve called mine “Modules” since we’re thinking about this in a “page builder” mindset. We want something where people can define a row, add in some content, and call it a day.  For our first module, we’ll do something very simple: a WYSIWYG editor that runs the full container width.

Keep in mind that your code may look similar, but ultimately you need to implement this code in your own way, with your own framework. I’m just providing the basics.

Building a Page Builder with Advanced Custom Fields

The big things to note are that our modules have a “module” slug, and our content block has a “content” slug. We’ll need these as we write our code.

Publish the field group, and let’s swap over to our codebase.

The Page Builder Code

I’m going to define these modules as a “function”. I do this for a few reasons: 1) It keeps our actual content templates free of bloat, and 2) it allows me to re-use the function multiple times – something I do for clients that need modules in different places.  An example: one client I wrote for has these modules either above, below, or replacing their content on a page.

function insert_modules() { ?>
<section id="modules">
    <?php if ( have_rows('modules') ):
        while ( have_rows('modules') ) : the_row();
            if ( get_row_layout() == 'full_width_content' ): ?>
                <section class="module full_width_content_module">
                    <div class="container">
                        <?php the_sub_field('content'); ?>
                    </div>
                </section>
            <?php endif;
        endwhile;
    else :
        echo "<div class='container'>No Modules Defined!</div>";
    endif; ?>
</section>
<?php }

So, let’s look through this:

The first line defines the use of a function. This lets us re-use it later, vs just being able to use it once.

Then, the actual HTML markup for the function. I’m setting up a section (named modules) so I can keep things in line.

Then things get interesting: we’re using an if-then statement to check for any defined “modules”.  If there aren’t any, it falls back to an error message.

Inside of our “loop”, we then query for what “type” of module we’re displaying, and use that code.  We’ve defined one module – the full width content. If there’s a match on the module type, it runs the code inside (and pulls the relevant custom field data).
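With the function in place, dropping the modules into a page is just a matter of calling it from a theme template. A minimal sketch (the page.php file name and the surrounding markup are illustrative – insert_modules() is the function we defined above, living in something like functions.php):

```php
<?php
// page.php – an example template that renders the module loop.
get_header(); ?>

<main id="content">
    <?php insert_modules(); ?>
</main>

<?php get_footer(); ?>
```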

In our next post, we’re going to set up a few different types of modules (and add them to our “module loop”). We’ll explore what some of the more popular content types are on a site, and see if there’s a way to duplicate those without all of the bloat a “page builder” has.

Drop your questions in the comments below, or if you have any module ideas for the next post, leave those as well!

Continue Reading...

Adding A “Let’s Encrypt” SSL Certificate to an Amazon AWS Instance

A few days ago we showed you how to add an SSL certificate – one that you can purchase – to your newly created Amazon AWS Instances. As I was researching SSL certificates, I came across an interesting initiative: Let’s Encrypt.  It’s an organization dedicated to serving up free SSL certificates so you can encrypt your site data.

According to their website, they have six key points to their reasoning:

  • Free: Anyone who owns a domain name can use Let’s Encrypt to obtain a trusted certificate at zero cost.
  • Automatic: Software running on a web server can interact with Let’s Encrypt to painlessly obtain a certificate, securely configure it for use, and automatically take care of renewal.
  • Secure: Let’s Encrypt will serve as a platform for advancing TLS security best practices, both on the CA side and by helping site operators properly secure their servers.
  • Transparent: All certificates issued or revoked will be publicly recorded and available for anyone to inspect.
  • Open: The automatic issuance and renewal protocol will be published as an open standard that others can adopt.
  • Cooperative: Much like the underlying Internet protocols themselves, Let’s Encrypt is a joint effort to benefit the community.

For anyone turned off by the cost of an SSL certificate from a trusted authority, this becomes a much better option.

Note: There are a couple of other tutorials on setting up the “letsencrypt” package, but I ran into a few snags post-setup that I want to address, specifically on the WordPress front.

Step 0: Prerequisites

We’re setting this up on Ubuntu, so we’ll need:

  • A user with sudo privileges. If you’ve been following the tutorials, you have this.
  • Your domain registrar login information.
  • Your domain pointed to the Amazon AWS Elastic IP with an A record. Letsencrypt validates the domain ownership via the A record, so make sure that the IP address is set up properly in your domain registrar.
  • You’ll need to stop the server for a few minutes to allow letsencrypt to run a web server on port 80. Take any necessary precautions.
  • There is a second, ‘webroot’ option that we can tap into for automatic renewals, so once the initial setup process is done we won’t have to shut down the server again.

Step 1: Installing the Let’s Encrypt Installation Scripts

Right now, it’s not available via the package managers – I have a suspicion that will change soon. But we can definitely clone it via Git.

Update your packages…

$ sudo apt-get update

…and install both git and bc.

$ sudo apt-get -y install git bc

BC, by the way, is an “arbitrary precision calculator language”. Neat!

Clone the letsencrypt package into /opt:

$ sudo git clone /opt/letsencrypt

Now that it’s installed and we have our dependencies, we can start the installation package.

Step 2: Using letsencrypt and Obtaining a Certificate

Stop your Nginx server…

$ sudo service nginx stop

…and check to see if port 80 is open and in use.

$ netstat -na | grep ':80.*LISTEN'

If you don’t see any output… congrats! You’re ready to go!

$ cd /opt/letsencrypt
$ ./letsencrypt-auto certonly --standalone

You’ll see a few things initialize, and you’ll be asked for a few bits of information. If this is your first time setting up, you’ll be asked for your email address. You’ll use this to receive notices and any recovery options, so make sure it’s valid and hit <OK>.


Then you’ll see a terms of service screen. Read the terms and click <Agree>.


Now, enter the domain names you want to secure. If you have multiple subdomains (www vs non-www) enter them both. I usually enter the root top level domain (non-www) first, but that’s my preference:


Let the command finish running. You should see a large wall of text with a few important pieces of information:

  • Location of the saved certificate chain and keys
  • Expiration date of the certificate

You’ll notice that the certificate expires quickly – 90 days. We’ll automate the renewal process later in the tutorial.

You’ll also notice, if you list the location above…

$ sudo ls /etc/letsencrypt/live/

That there are four files:

  • cert.pem – your domain’s certificate
  • chain.pem – the Let’s Encrypt chain certificate
  • fullchain.pem – a concatenated (combined) file of cert.pem and chain.pem
  • privkey.pem – your certificate’s private key

You’ll need fullchain.pem and privkey.pem, so mentally note where those are:


Without those, we can’t set up the next part: the Nginx configuration.

Step 3: Setting up Nginx

This is where the tutorial diverges from other similar tutorials. Other tutorials have you redirect the non-SSL traffic into the SSL traffic, but we actually need to keep the non-SSL and SSL traffic open with the same configuration, and use plugins to force SSL on the WordPress site. Otherwise, plugins such as Jetpack will cease to function correctly. 

We’re going to change the server configuration slightly from when we set it up initially. We’re still going to use all of the same variables from before, but we’re adding our elements to define the SSL certificates:

# Define the microcache path.
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=microcache:100m inactive=60m;

server {
  listen [::]:80;
  listen 80;
  listen 443 ssl;

  root /sites/;
  index index.php index.html index.htm;

  ssl_certificate /etc/letsencrypt/live/;
  ssl_certificate_key /etc/letsencrypt/live/;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  ssl_prefer_server_ciphers on;

  # Include the basic h5bp config set
  include h5bp/basic.conf;

  location / {
    index index.php;
    try_files $uri $uri/ /index.php?$args;
  }

  location ~ \.php$ {
    fastcgi_cache microcache;
    fastcgi_cache_key $scheme$host$request_method$request_uri;
    fastcgi_cache_valid 200 304 10m;
    fastcgi_cache_use_stale updating;
    fastcgi_max_temp_file_size 1M;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;

    # Local variables to track whether to serve a microcached page or not.
    set $no_cache_set 0;
    set $no_cache_get 0;

    # If a request comes in with an X-Nginx-Cache-Purge: 1 header, do not grab from cache.
    # Note that we will still store to cache –
    # we use this to proactively update items in the cache!
    if ( $http_x_nginx_cache_purge ) {
      set $no_cache_get 1;
    }

    # If the user has a logged-in cookie, circumvent the microcache.
    if ( $http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) {
      set $no_cache_set 1;
      set $no_cache_get 1;
    }

    # fastcgi_no_cache means "Do not store this proxy response in the cache"
    fastcgi_no_cache $no_cache_set;
    # fastcgi_cache_bypass means "Do not look in the cache for this request"
    fastcgi_cache_bypass $no_cache_get;
  }
}

The SSL-related lines are new.  Walking through them, we end up doing the following:

  • Adding a ‘listen’ reference to allow traffic to flow through port 443, the secure SSL port.
  • Setting the paths for the certificate and certificate key – fullchain.pem and privkey.pem – that we generated earlier.
  • Setting an additional layer of security by specifying what ciphers and protocols we want to allow. The list above is a very modern list and will disallow outdated or insecure protocols from attempting to use the secure site.

Save and exit, and restart your webserver:

$ sudo service nginx start

Technically, we’re finished! You can visit the site over both http:// and https:// and see the secure site in its full glory!

Step 4: Post-Setup for WordPress

You may notice that the lock isn’t green (it may be greyed out). We still need to set WordPress to specifically use the HTTPS traffic over the non-HTTPS traffic. We can do that with two steps:

  1. Change the site URL and home URL to the https:// version of your domain. You can do this either in the WordPress Settings > General screen, or by editing your wp-config.php file.
  2. Using the WordPress Force SSL plugin to redirect all other traffic (images, links, etc.) from non-HTTPS to HTTPS.
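For the wp-config.php route, WordPress reads the WP_HOME and WP_SITEURL constants – a sketch, where example.com is a placeholder for your own domain:

```php
<?php
// In wp-config.php, above the "stop editing" line.
// Replace example.com with your own domain.
define( 'WP_HOME', 'https://example.com' );
define( 'WP_SITEURL', 'https://example.com' );
```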

Note: Other tutorials, as I mentioned above, require you to redirect the non-SSL traffic in, but I found that this caused problems with plugins that require port 80 to be free and clear – like Jetpack. I wasn’t getting any site stats for a few days and had to dive in to figure out why. Turns out it was being rejected at the redirection point. Opening port 80, and forcing the redirection at the WordPress level, did the trick.

Step 5: Auto-Renewals of SSL Certificates

The Let’s Encrypt certificates expire after 90 days. Let’s set up a server-side cron job (a script that runs automatically) to make sure we don’t have to worry about it again.

Instead of the standalone script we ran earlier, we’re going to set up a webroot script to run automagically in the background. Let’s Encrypt will use a special folder in your web root (/.well-known) to place its files, so let’s make sure that the server can access anything in those folders:

$ sudo nano /etc/nginx/sites-enabled/

In the “server” block, add the following:

location ~ /.well-known {
     allow all;
}
Save and exit. This will open up that location’s files for letsencrypt, and drop in a hidden file that the scripts will use for validation. This way, we can keep our web server running – and not have to stop it to renew the certificates.

You may want to figure out what your web root folder is. If you only have one site, it’ll be at /sites/ (if you’ve been following my tutorials – otherwise, it’ll be different for each user).

Run this script, and make sure the variables are filled in with your information:

$ cd /opt/letsencrypt
$ ./letsencrypt-auto certonly -a webroot --agree-tos --renew-by-default --webroot-path=/sites/ -d -d

This goes through the prompts we filled in earlier, renews the certificate, and passes it through to the server (in the same locations you set up earlier).

Reload Nginx…

$ sudo service nginx reload

…to have it use the newly created certificates.

Now we can create a cron job to automate this process.

Instead of having this long string entered in each time, let’s set up a configuration file we can refer to if we have problems or questions. A sample file is already provided, so we can start there to save time:

$ sudo cp /opt/letsencrypt/examples/cli.ini /usr/local/etc/le-renew-webroot.ini

Edit the file with your editor of choice:

$ sudo nano /usr/local/etc/le-renew-webroot.ini

There’s a lot of commented-out (and useless) stuff in this file. So, let’s cut it down and fill in only the essentials: email address, domains, and the webroot authentication stuff.  You should have something similar to this:

# Use a 4096 bit RSA key instead of 2048
rsa-key-size = 4096
# Registered Email
email =
# Domain(s) to Secure
domains =,
# Webroot Authentication and Path
authenticator = webroot
webroot-path = /sites/

There’s all the information we used earlier, in a nice and compact format. Let’s test it to see if the certificate renews this way:

$ cd /opt/letsencrypt
$ ./letsencrypt-auto certonly -a webroot --renew-by-default --config /usr/local/etc/le-renew-webroot.ini

Notice how we set the --config flag to read the file we just created.  If all goes well, reload Nginx again – just in case!

Finally we can set up this script with a cron job.

There’s a Gist that has a nice packaged script we can run. Do your due diligence and look it over before you install it.

$ sudo curl -L -o /usr/local/sbin/le-renew-webroot
$ sudo chmod +x /usr/local/sbin/le-renew-webroot

This script can be executed anywhere. If you want, you can use it now to see how many days are left until your certificate needs to be renewed:

$ sudo le-renew-webroot
Checking expiration date for
The certificate is up to date, no need for renewal (89 days left).

Then, the cron job itself:

$ sudo crontab -e

Head to the bottom of the file and insert this all onto one line:

30 2 * * 1 /usr/local/sbin/le-renew-webroot >> /var/log/le-renewal.log

The job runs at 2:30 AM every Monday, and a log file will be created at /var/log/le-renewal.log – so if anything goes awry you’ll have a reference as to how to fix it.

And that’s it!  You’re now running amazing free encryption from Let’s Encrypt, letting it renew automatically, and letting WordPress’ finicky internal features (like Jetpack) still authenticate.

Continue Reading...

Adding SSL To Your Amazon AWS Instance

The last few blog posts have been an in-depth tutorial on how to set up an Amazon AWS Instance.  And, while the basics were more than covered, I had a few readers asking me about a super important topic: SSL and encryption.

What exactly is SSL Encryption?

The short answer: it’s that fancy green lock on your favorite sites that keeps information secure.

The long answer: it’s a signed certificate that establishes a secure, cryptographically keyed connection between the web server and the client (the person viewing the site). While a certificate can be “self signed”, most people choose to allow an authorized third party (such as Comodo, RapidSSL, or GeoTrust) to verify the encryption.

Why bother with SSL?

These days, you can’t throw a keyboard without hitting someone who’s had their identity stolen. Data transferring between a user and a server isn’t safe by default – it can be sniffed out by dastardly do-baddies who can steal the information being transmitted. Usernames, passwords, credit card numbers; anything’s fair game to a hacker.

An SSL Certificate encrypts the data – encodes it, if you will – as it’s being sent along the information superhighway. The data that comes in would be gibberish to any hacker trying to access it – provided they can even get access at this point. However, upon hitting the destination, the encryption is decoded and served to the user.

I personally won’t shop on a website if there’s no SSL certificate – even on eCommerce sites that use PayPal or some other off-site payment gateway, since there’s still certain information being transmitted that I just don’t want prying eyes seeing.

Installing the Certificate on an Amazon AWS Instance

If you’ve been following along, we have – over the last few blog posts – successfully set up, installed, and managed an Amazon AWS Instance. Now that it’s up, we can encrypt the data we’re sending out.

Step 0: Sign Up for an SSL Certificate

The process is different for each SSL provider, but I’m going to use NameCheap because it’s my personal go-to for these sorts of things.

Head over to NameCheap (affiliate link) and hit the SSL Certificates section. We just need something simple for our site, but if you need more robust encryption those options are also there.


We’re going to go with “Domain Validation” since we’re just setting up one domain.

You’ll see lots of choices on this page; some more expensive than others. Let’s stick with the basics. The $9.00/year “PositiveSSL” is perfect for encrypting data on non-eCommerce sites, or on sites with low sales volume. The “QuickSSL Premium” is good for larger eCommerce sites, but it’s also more expensive at ~$57/year.


Check out, confirm, and pay. You’ll get your typical login credentials, but you won’t need them just yet.

Step 1: Generate a CSR (Certificate Signing Request)

We’re going to take care of the stuff server-side first. Log in through SSH to your Amazon AWS Instance. This sequence:

$ cd ~
$ openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr

…visits your home directory and begins the process for generating two files needed for the certificate: a private key and the Certificate Signing Request.

You’ll want to have some information on hand for this:

  • Common Name: Domain name of the server in question – ‘’
  • Country : Two letter official abbreviation – ‘US’
  • State (or province): Two letter official abbreviation – ‘TN’
  • Locality (or city): City name
  • Organization: Your business name if applicable. Write NA if it’s not.
  • Organizational Unit (Department): Your department. Write NA if you don’t have one or it isn’t applicable to your situation.
  • E-mail address: I’d strongly suggest using the same address as your Namecheap account. You *will* receive emails independent of the Namecheap account at this address, so make sure it’s at least a valid email address.

It’s strongly recommended that you fill in all the fields.  You’ll have two files created when you’re finished, and you’ll need both of them eventually.
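As an aside, if you’d rather skip the interactive prompts, openssl can accept the same answers up front via the -subj flag. A sketch with placeholder values (swap in your own domain and details):

```shell
# Generate the private key and CSR in one non-interactive shot.
# Every value in -subj below is a placeholder – substitute your own.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/C=US/ST=TN/L=Nashville/O=NA/OU=NA/CN=example.com/emailAddress=admin@example.com" 2>/dev/null
```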

Open the CSR with your text editor:

$ sudo nano server.csr

and copy everything you see. It’ll look something like this:


Make sure you grab all of it – including the BEGIN and END lines.

Step 2: Signing the Certificate

(Thanks to Namecheap’s Support Portal for these next screenshots!)

Log in to Namecheap, and visit your Dashboard:


Look on the left sidebar and find Product List > SSL Certificates:


Next to the SSL Certificate you just purchased will be a big “Activate” button. Click that and paste in that CSR request you just made.

The Web Server is Nginx, so make sure the “Apache, Nginx, cPanel or other” box is selected, and that the information looks correct. Hit submit.

Choose Email Validation. It should give you one of the email addresses you’ve entered in as an option. Otherwise, select an address you can receive mail to and hit Submit.

Verify the Administrative contact information and fill out the fields. Make sure they match the CSR you generated.


Hit Next, and then Confirm on the next screen once you’re satisfied with all of the options.

You should get an email with a link to download a .zip file of the certificate files.  Log in with SFTP and send them all to your home directory. We’ll move them later, but for now it’ll be good to have them all in one place.

Step 3: Installation

You should see a few different files when you upload the contents of the zip file:

  • domain_name.crt – your domain’s certificate
  • domain_name.ca-bundle – the CA bundle

Those are both necessary, but we can combine them into a single file in order to keep things easy:

$ cat domain_name.crt domain_name.ca-bundle >> domain_name_chain.crt

Then, we can move them into position on the server…

$ sudo mv ./domain_name_chain.crt ./server.key /etc/ssl

This moves both the domain certificate and the private key we set up earlier into a permanent home.
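Before wiring these into Nginx, it’s worth a quick sanity check that the certificate and private key actually belong together: the RSA modulus of each, hashed, should be identical. A sketch using a throwaway self-signed pair (on your server, point the two openssl commands at /etc/ssl/domain_name_chain.crt and /etc/ssl/server.key instead):

```shell
# Create a demo key/certificate pair to illustrate the check.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout demo.key -out demo.crt 2>/dev/null

# Both hashes are identical when the cert and key are a matched pair.
cert_mod=$(openssl x509 -noout -modulus -in demo.crt | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in demo.key | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "certificate and key match"
```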

We set up Nginx last time to only accept non-SSL traffic, which defaults to port 80. What we need to do now is accept SSL traffic on port 443 as well, and pass it through like normal alongside the existing port 80 traffic.

Note: my original post had two server blocks – one SSL and one non-SSL – and had the non-SSL traffic redirecting to the SSL traffic. Turns out this causes all sorts of problem with Jetpack, and a few other plugins that require port 80 to not be a straight up redirect. So, we’re going to do it a different way instead. We’re going to set up the server block to accept traffic on both ports (443 and 80) and then set WordPress up to handle the SSL so Jetpack doesn’t get confused.

# Define the microcache path.
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=microcache:100m inactive=60m;

server {
  listen [::]:80;
  listen 80;
  listen 443 ssl;

  # Include defaults for allowed SSL/TLS protocols and handshake caches.
  include h5bp/directive-only/ssl.conf;

  # config to enable HSTS (HTTP Strict Transport Security)
  # to avoid ssl stripping
  add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";

  ssl_certificate_key /etc/ssl/server.key;
  ssl_certificate /etc/ssl/domain_name_chain.crt;

  # Path for static files
  root /sites/;

  # Specify a charset
  charset utf-8;

  # Include the basic h5bp config set
  include h5bp/basic.conf;

  location / {
    index index.php;
    try_files $uri $uri/ /index.php?$args;
  }

  location ~ \.php$ {
    fastcgi_cache  microcache;
    fastcgi_cache_key $scheme$host$request_method$request_uri;
    fastcgi_cache_valid 200 304 10m;
    fastcgi_cache_use_stale updating;
    fastcgi_max_temp_file_size 1M;
    fastcgi_index  index.php;
    fastcgi_param  SCRIPT_FILENAME   $document_root$fastcgi_script_name;
    include        fastcgi_params;

    # Local variables to track whether to serve a microcached page or not.
    set $no_cache_set 0;
    set $no_cache_get 0;

    # If a request comes in with an X-Nginx-Cache-Purge: 1 header, do not grab from cache.
    # Note that we will still store to cache –
    # we use this to proactively update items in the cache!
    if ( $http_x_nginx_cache_purge ) {
      set $no_cache_get 1;
    }

    # If the user has a logged-in cookie, circumvent the microcache.
    if ( $http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) {
      set $no_cache_set 1;
      set $no_cache_get 1;
    }

    # fastcgi_no_cache means "Do not store this proxy response in the cache"
    fastcgi_no_cache $no_cache_set;
    # fastcgi_cache_bypass means "Do not look in the cache for this request"
    fastcgi_cache_bypass $no_cache_get;
  }
}

If you used the tutorial, this should look familiar; it’s (almost) the same code we used to set up our initial virtual hosts. The only difference is that this server block now listens on both port 80 and port 443 with SSL enabled. The other code bits are the same!

Finally, restart nginx:

$ sudo service nginx restart

And there you have it – the SSL certificate signed, sealed, delivered, and installed lovingly on your Amazon AWS server. Now, any traffic that flows through HTTPS will be encrypted. You can install shopping carts and provide user accounts knowing that your users’ data is safe.


I had to modify this tutorial a bit because Jetpack was throwing an error message over how the port 80 traffic was redirecting.  Since we’re now including both ports in our server block, we need to do some things in WordPress to get it to force SSL traffic for normal visitors:

  1. Change the site URL and home URL to the https:// version of your domain. You can do this either in the WordPress Settings > General screen, or by editing your wp-config.php file and adding these variables:
  2. Use a WordPress force-SSL plugin to redirect all other traffic (images, links, etc.) from non-HTTPS to HTTPS.
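For the wp-config.php route, the constants look roughly like this – a minimal sketch, where example.com is a placeholder for your own domain (WP_HOME, WP_SITEURL, and FORCE_SSL_ADMIN are standard WordPress constants):

```php
// Placeholder domain – swap in your own, with https://.
define( 'WP_HOME',    'https://example.com' );
define( 'WP_SITEURL', 'https://example.com' );

// Optionally force SSL on the dashboard and login pages as well.
define( 'FORCE_SSL_ADMIN', true );
```

Add these above the “That’s all, stop editing!” line in wp-config.php.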

This should take care of any issues with Jetpack, and still force SSL for normal traffic and users.

A side note: there’s an initiative called “Let’s Encrypt” that allows users to set up free certificates. It’s still fairly new, and I’m still researching it, but I ultimately want to do another guide that will show how to install those certificates. Keep in mind, though, that you can still pay for a trusted certificate from a vendor like Comodo, and that some people really do care about having that name and/or seal on the sites they visit. That said, don’t run off and ditch your other certificates just yet!

If you run into any snags, let me know in the comments below.  Also, let me know if you’ve successfully installed the certificate! I love it when a good plan comes together!

Continue Reading...

Setting Up An Amazon Web Services (AWS) Instance for WordPress: Part 2

This post is part two of a series on setting up an Amazon Web Service Instance for a WordPress site.  Part 1 covers the initial setup, security, and terminology of Amazon’s AWS.

Edited: Oct 12, 2016 to include newest versions of software.

If you’ve read the link above, you should have a newly created Amazon AWS account (running on Free Tier settings!) with an Ubuntu instance running and operational.  As it stands now, the instance is more like a brand new computer – there’s not much on it, and it really can’t do anything until we put some software on it.  This post covers the actual setup of Nginx, PHP(-FPM), MySQL, and a few other things we need to get the server up and serving websites.

Step 0: Update your System Packages

This should be a regular occurrence for you, as WordPress rarely conflicts with the Linux system packages.

$ sudo apt-get update && sudo apt-get upgrade

This command (minus the $ symbol, of course) will scan the package repositories for updates and will update them, prompting before the actual update runs. I like to reboot my system right after this, and you can do it from the command line:

$ sudo reboot

You’ll get kicked off, as you should since you just did a soft-reset, so ssh back in using the terminal:

$ ssh ubuntu@PUBLIC_HOSTNAME -i ~/.ssh/key-name.pem 

After this, your computer’s packages will be completely up to date. Now we can start installing our own.

Note: Linux uses a package management system to install software. Any dependencies – packages that are needed by other packages – should be installed with the commands we run. If you see a long list of packages being installed, don’t panic – that’s Linux making sure things are working correctly behind the scenes!

Step 1: Install Nginx

Nginx is, plain and simple, a web server. It routes incoming and outgoing connections to give people access to files, images, processed code, etc.  I like Nginx over Apache because the architecture is built to serve sites lean and fast. To install, run this from the command line:

$ sudo apt-get install nginx

We’re going to install the HTML5 Boilerplate’s Nginx Server Configs, which will take care of the heavy lifting of “tuning” our website out of the gate. These tweaks help with performance and security, so don’t skip out!

But, to do that, we need to install Git. Git is a version control system, and the files we need are stored on Github.

$ sudo apt-get install git
$ cd /etc
$ sudo mv nginx nginx-previous
$ sudo git clone https://github.com/h5bp/server-configs-nginx.git nginx

This installs Git, changes to the /etc directory (where most system configuration lives), moves the initial config into a new folder, and brings down the boilerplate files into a default configuration folder.

Next, we want to tell Nginx what “user” it’ll be running under. Personally, I like “www-data”, since Ubuntu already has it built in:

$ cd /etc/nginx
$ sudo nano nginx.conf

Look around for a line that looks similar to this:

# Run as a less privileged user for security reasons.
user nginx nginx;

And change it to:

# Run as a less privileged user for security reasons.
user www-data www-data;
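If you’d rather skip the interactive nano edit, a sed one-liner makes the same change. Here it’s shown on a throwaway copy; on the server you would target /etc/nginx/nginx.conf with sudo:

```shell
# Non-interactive alternative to the nano edit, on a throwaway copy.
conf=$(mktemp)
printf 'user nginx nginx;\n' > "$conf"
sed -i 's/^user nginx nginx;/user www-data www-data;/' "$conf"
cat "$conf"
```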

We’re going to go ahead and set up our website’s “virtual host” directory. This 1) ensures our files all live in one specific place and 2) allows us later on to install more WordPress sites and run them on one server – neat!

$ sudo mkdir -p /sites/

Don’t put the www. in front of this – we’re just going for the root fully qualified domain name.

While we’re at it, let’s grab WordPress and go ahead and get it set up:

$ cd /sites/
$ sudo wget https://wordpress.org/latest.tar.gz
$ sudo tar zxf latest.tar.gz
$ cd wordpress
$ sudo cp -rpf * ../
$ cd ..
$ sudo rm -rf wordpress/ latest.tar.gz

To walk through what we’re doing:

  • Changing into the directory we just created
  • Using WGET to “fetch” the latest version of WordPress
  • Using UnTar (similar to UnZip) to extract the files in the archive
  • Changing into the WordPress directory
  • Copying all of the files up one level (out of the wordpress directory)
  • Moving up to the root directory
  • Removing the wordpress folder (now empty) and the archive (not needed anymore).
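The steps above can be rehearsed safely with a dummy archive – a stand-in for latest.tar.gz, so no real WordPress download is involved:

```shell
# Build a dummy "wordpress" tarball in a scratch directory.
site=$(mktemp -d) && cd "$site"
mkdir wordpress && touch wordpress/index.php wordpress/wp-load.php
tar -czf latest.tar.gz wordpress && rm -rf wordpress

# Replay the tutorial steps:
tar zxf latest.tar.gz            # extract the archive
cd wordpress                     # step into the extracted folder
cp -rpf * ../                    # copy everything up one level
cd ..                            # back to the site root
rm -rf wordpress/ latest.tar.gz  # clean up
ls                               # index.php and wp-load.php now sit in the root
```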

We’ll come back here later once we’ve set up a few other things.

But first, we need to put this new directory under the ownership of the web server user (www-data, which we configured above):

$ sudo chown -R www-data:www-data /sites/

CHOWN (Change Ownership) does this easily enough, and we can hit every directory / file at once.

Back in the Nginx folder, we need to copy a few other configuration files from the old nginx configuration to the new one we just imported:

$ sudo cp /etc/nginx-previous/fastcgi_params /etc/nginx

This step is essential, as without it Nginx wouldn’t know how to run the PHP we’re installing later.

Each Nginx site has its own file that determines what happens when a user visits the site. We can set it up easily enough:

$ sudo nano /etc/nginx/sites-available/

This should bring up a blank text editor.

Note: Now would be the time that we introduce SSL certificates if we were using them. If you need them, great – I’ll do a follow up post that shows you how to install them easily.  For now, we’ll do a non-SSL site for ease of use.

Use this configuration to start things up:

# Define the microcache path.
fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=microcache:100m inactive=60m;

server {
 listen 80;
 listen [::]:80;

 root /sites/;
 index index.php index.html index.htm;

 # Include the basic h5bp config set
 include h5bp/basic.conf;

 location / {
     index index.php;
     try_files $uri $uri/ /index.php?$args;
 }

 location ~ \.php$ {
     fastcgi_cache microcache;
     fastcgi_cache_key $scheme$host$request_method$request_uri;
     fastcgi_cache_valid 200 304 10m;
     fastcgi_cache_use_stale updating;
     fastcgi_max_temp_file_size 1M;
     fastcgi_index index.php;
     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
     include fastcgi_params;

     # Local variables to track whether to serve a microcached page or not.
     set $no_cache_set 0;
     set $no_cache_get 0;

     # If a request comes in with an X-Nginx-Cache-Purge: 1 header, do not serve from cache.
     # But note that we will still store to cache.
     # We use this to proactively update items in the cache!
     if ( $http_x_nginx_cache_purge ) {
         set $no_cache_get 1;
     }

     # If the user has a logged-in cookie, circumvent the microcache.
     if ( $http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" ) {
         set $no_cache_set 1;
         set $no_cache_get 1;
     }

     # fastcgi_no_cache means "Do not store this proxy response in the cache"
     fastcgi_no_cache $no_cache_set;
     # fastcgi_cache_bypass means "Do not look in the cache for this request"
     fastcgi_cache_bypass $no_cache_get;
 }
}

Suffice it to say there’s a LOT going on here, but this is the configuration for the site. It tells the server what to do when people visit, where to route connections, and so on.

Next, we have to “enable” the site by symlinking – creating a link relationship – to the “sites-enabled” folder in nginx:

$ sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/

This activates the site, but the server is still not running. We want to set it to run upon server startup:
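If symlinks are new to you, the sites-available / sites-enabled relationship can be rehearsed on throwaway directories (the names here are stand-ins for the real nginx paths):

```shell
# Rehearse the symlink relationship on scratch directories.
available=$(mktemp -d); enabled=$(mktemp -d)
echo 'server {}' > "$available/mysite"
ln -s "$available/mysite" "$enabled/mysite"
cat "$enabled/mysite"   # reads the sites-available copy through the link
```

Editing the file in sites-available changes what the link in sites-enabled serves – there is only ever one real copy.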

$ sudo update-rc.d nginx defaults

Step 2: Setting an IP Address

Our Amazon AWS server technically doesn’t have a public IP address yet, but it’s simple enough to create one.

Log into the Amazon AWS Console and click on the EC2 Link. In the Instance Console, look on the left side for Elastic IP addresses. These IP addresses are technically static, but you can remap them instantly to other instances should the need arise.

Amazon AWS

Hit the Allocate New Address button, and confirm. A blue bar with the IP address will pop up. Click the Instance field, and your running instance will appear. Select it, watch the values populate, and hit Associate (and confirm). Congrats! Your server now has a public-facing IP address, which means we can point our domain at it once we’re done!

Amazon AWS

Nginx should be ready to go now. Start things up:

$ sudo service nginx start

Step 3: Installing PHP 7

If Nginx is the nervous system, then PHP is the brains behind it. PHP serves up any non-markup elements – variables, functions, and computations – that come into play on the web… at least on our server, anyway.

$ sudo apt-get install php7.0-fpm

This will bring in all of the packages and dependencies you need. We need to change the default user, so enter in:

$ sudo nano /etc/php/7.0/fpm/pool.d/www.conf

Look for the lines that look like this:

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group will be used.
user = nginx
group = nginx

And replace them with this:

; Unix user/group of processes
; Note: The user is mandatory. If the group is not set, the default user's group will be used.
user = www-data
group = www-data

Once that’s done, we can start the PHP service:

$ sudo service php7.0-fpm start

…and make sure it starts every time we reboot!

$ sudo update-rc.d php7.0-fpm defaults

Step 4: Install MySQL

The engine, the brains, and now the memory center. MySQL is a “relational database” which is a fancy way of saying things are associated with other things. Installation will look familiar at this point:

$ sudo apt-get install mysql-server php7.0-mysql

It’ll ask you to fill in a root password – do so and don’t forget it! We can start this service immediately, as we want to run a few startup commands…

$ sudo service mysql start

…and (you guessed it) start it upon server boot!

$ sudo update-rc.d mysql defaults

Once that’s done, we can start the initial security lockdown for MySQL. I highly recommend this, as it removes a few unsafe features:

$ sudo /usr/bin/mysql_secure_installation

Enter your root password and use the defaults. Once you’re done, there’ll be no anonymous users, no remote logins, no “test” data, and the “privilege tables” will be refreshed to take the other settings into account.

Next, let’s log into MySQL and set up our initial WordPress database. We’re nearly there – this is the home stretch!

$ mysql -u root -p

Enter your root password. Next, we’re going to create a database and user, associate them with each other, and reload the privilege table.

> CREATE DATABASE sitename;
> CREATE USER 'username'@'localhost' IDENTIFIED BY 'secure_password';
> GRANT ALL ON sitename.* TO 'username'@'localhost';
> FLUSH PRIVILEGES;
> exit;

If these look familiar to you – as in you’ve seen WordPress’s install screen before – then you know you have most of what you need to install WordPress.

But, before we do that, we need to associate that IP address with your domain name.

If you don’t have a domain, head over to NameCheap (affiliate link) and pick one up. They value privacy over anything else, and even include WhoIs Guard (their private registration) free for a year!

Once you’ve done that (or logged into your own domain registrar), you’ll want to change the A record (the ‘@’record, and possibly the ‘www’ record) to the IP address you created earlier.  Every domain registrar is different – so you may have to poke around on their forums if you’re unsure.

Set the TTL (Time to Live) to between 5 and 10 minutes – no sense waiting all night, right?

Once that’s done, go pour yourself a celebratory drink, because we’re at the finish line.

Step 5: Set up WordPress

Visit your domain. If you’ve followed all the instructions correctly and didn’t receive any error messages, you should see the WordPress installation screen.  Take your username, sitename, and secure_password variables you defined above and enter them in. For the host, type in ‘localhost’.

And that’s it! WordPress is now running on your very own slice of the web, powered by Amazon’s amazing AWS Elastic Instance.

You’ve set up your own server, added PHP7, MySQL, and Nginx, installed WordPress from the command line, and are now ready to reap the benefits of a server that is truly your own.

If you have any questions, I’ll do my absolute best to answer them below.  If I’ve missed a step, or you’ve got some handy-dandy tips for first-time sysadmins, leave those below as well!

Continue Reading...

Setting Up An Amazon Web Services (AWS) Instance for WordPress: Part 1

For a while now, I’ve been diving more and more into the realm of the “server administrator” and setting up servers for clients. I started with Ramnode, but eventually moved to Digital Ocean because of its ease-of-use for a (then) newbie like myself.

Over the past few days, however, I’ve been migrating a client away from a big-box WordPress host over to something they have more finite control over. The process was nearly seamless, but it took a bit of trial and error to get the server tuned exactly the way I wanted it.

Over the next few days, I want to share that process with you. And the best part? Everything I’m doing is eligible for Amazon’s “Free Tier” program on AWS – meaning that it won’t cost you a cent for certain features.

How does AWS Work?

Amazon’s AWS is more like a “server shop”. We’re going to be focusing most of our time in the Elastic Compute Cloud (or EC2). There are dozens of AMIs (Amazon Machine Images), which are snapshots of servers in various states. Some are fresh out of the box, some have programs ready to go, and others are backups of live sites.  You can create your own, but there are plenty of ready-to-go server setups to choose from.  An AMI is then dropped onto an Instance – a (usually) virtualized computer segment. These instances have various tiers of computing power – ranging from a single core with 1GB of RAM to monster boxes that could outperform even your home desktop computer. (A side note: some of them even have virtualized graphics cards that could technically be used to stream a video game, but that’s a different post altogether!)

Once everything is set up and running you can jump into the instance the same way you’d get into your own server – at first via SSH, but then later through FTP if you choose to install an FTP server.

We’re going to be using one of the free tier eligible Linux servers and a free-tier plan with one processing core and one gig of ram. This should be more than fine to start out with. Plus, if you really like it, you can upgrade the server later with minimal downtime and start paying for it!

Step 0: Create the Account

Head on over to Amazon’s AWS and sign up for an account.

Once you log in, you’re treated to an overwhelming amount of options:

Amazon AWS Console

The first column has EC2 at the very top left. The second column has Identity & Access Management in the 3rd group. Those are the only two links we need to worry about for now.

Step 1: Creating your Identity

Before we set up the server, we need to ensure you’re able to do fun things on it, like log in.  So, we’re going to set up a user, a group, give that group administrator privileges, and then add you to it. This’ll let you use the Amazon Command Line Interface to generate access keys for your server.

Click on the Identity & Access Management link. You shouldn’t have anything there with a new account, so click on Users and then Create New Users.


Enter your name and then hit Create. Make sure you download the credentials it gives you.

I’ll say it again: MAKE SURE YOU DOWNLOAD THE CREDENTIALS IT GIVES YOU. Without those, you won’t be able to (eventually) log into the server.

Back at the IAM Management Home, click on Groups. You shouldn’t have any, so let’s create one. Name it something authoritative like “administrators” or “BigBosses”.

On the next screen, you’ll attach a policy – a set of rules for that group. We want them to have AdministratorAccess, which is the first policy you’ll see. Check the box and go to the next step.


Review on the next page and submit. Back to the Users page you go. Check the box next to your username and hit “User Actions” > “Add to Groups” and add that user to the group you’ve created.  Congratulations, you’ve made yourself an administrator in the Amazon Console. Now it’s time to set up the security stuff for the server (some of it, anyway).

Step 2: Authentication and Connection to the Server

Passwords are so 2012. We use keys to connect to this server, and that work you just did will allow you to generate the keys you need.

The AWS CLI allows you to control Amazon from your home terminal. We’re going to use a few basic functions of it, but if you hate GUIs, it’s worth learning in depth to make your life better. And remember the one rule of thumb: if you can do it from the command line, you can automate it – and who doesn’t want to automate this eventually, right?

Install the AWS CLI from Github and run

$ aws configure

from the terminal (as usual, don’t put in the $ – that’s just the prompt). Remember those credentials you were saving? Grab them and enter them when prompted.  For your home AWS Region, use this table to find the right code:

  • us-east-1 – US East (N. Virginia)
  • us-west-2 – US West (Oregon)
  • us-west-1 – US West (N. California)
  • eu-west-1 – EU (Ireland)
  • eu-central-1 – EU (Frankfurt)
  • ap-southeast-1 – Asia Pacific (Singapore)
  • ap-northeast-1 – Asia Pacific (Tokyo)
  • ap-southeast-2 – Asia Pacific (Sydney)
  • ap-northeast-2 – Asia Pacific (Seoul)
  • sa-east-1 – South America (São Paulo)

So now we can generate the key pair needed to access the server.  If you don’t have an .ssh directory on your machine, create one:

$ mkdir ~/.ssh

And then create your key pair (you can name it whatever you like – just make sure the key-name parts below match):

$ aws ec2 create-key-pair --query 'KeyMaterial' --key-name key-name --output text > ~/.ssh/key-name.pem

Amazon will do its awesome technomagic and will spit out a file with your authentication key. Finally, restrict access to all but you:

$ chmod 400 ~/.ssh/key-name.pem
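What does 400 mean? You can see for yourself on a throwaway stand-in for the .pem file (stat -c is the GNU/Linux form; macOS stat uses different flags):

```shell
# Inspect what chmod 400 actually sets, on a scratch file.
key=$(mktemp)
chmod 400 "$key"
stat -c '%a' "$key"   # prints 400: owner may read; group and others get nothing
```

SSH refuses keys that other users can read, which is why this step isn’t optional.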

Remember what you called that file, by the way – you’ll need it in just a few minutes.

Step 3: Set up the Instance

Now for the fun part! Log back into the AWS console and visit the EC2 Page. Visit the “Instances” page – it should be empty – and click “Launch Instance”.

Step 1 has us choosing an AMI to use. I’m a huge fan of Ubuntu as a server, so let’s choose that.

Check the “Free Tier Only” box, and look for Ubuntu Server 14.04 LTS (HVM), SSD Volume Type. The AMI ID is ami-d05e75b8. Select it.


Step 2 has us putting that AMI onto an Instance. The “t2.micro” tier is free, but if you want some more horsepower and are willing to pay for it, by all means choose a different tier.


Important note: Amazon bills differently than other hosts. The formula is (simplified): Computing + Storage + Data Transfer. This means you pay for the instance (and sometimes the AMI) per hour, the storage per month, and the data transfer per gigabyte in AND out.

As a baseline, an “always on” t2.micro with 1 processor, 1GB RAM, 30GB of storage, and 75GB bandwidth out would run ~$9.50 / month.

Don’t skip anything: go to the next section and accept the defaults. In the “storage” section, add on a 30GB “General Purpose SSD” – that’s the max the free trial will allow you. I would have it persist on termination, just in case you ever upgrade the server.

Move past the next section to the “Configure Security Group” section. Here’s where we’ll control the flow of traffic in and out of the instance. Create a new security group and name it something useful like “WordPressSecurity”.

To start with, we want to add SSH, HTTP, and HTTPS traffic rules. The HTTP/S rules can be open to anywhere (0.0.0.0/0). The SSH rule… well, if you’re going to be accessing the server from only one computer, then use that computer’s IP – it’ll absolutely be much more secure that way. For me, I travel a bit and need the flexibility to log in from other places. The key already provides an enormous amount of security compared to a password, so it’s OK in this case to leave it open. If you want, however, you can add multiple IPs – one rule per line.


Now is the moment of truth. Review the instance and hit the launch button! You’ll be prompted to choose a key pair once you launch – make sure you choose the one you generated back in the beginning – that’ll associate that file on your computer with being able to access the instance.

You’ll be taken back to the AWS Instances page. Once the light turns green, the instance will be up and running!

Logging in is simple. Type in:

$ ssh ubuntu@PUBLIC_HOSTNAME -i ~/.ssh/key-name.pem 

This will log you into the instance, and you can start updating packages, installing your server (Nginx, I hope!), and getting things ready to go.

What to Expect in Part 2

Part two will have us updating packages, setting up the server, WordPress, MySQL, and a few other goodies. For now, though, we have a server that is running and ready for all of the goodness we want to use it for.

This server is more than capable of running a WordPress site with little to no trouble – it’ll be faster than most shared hosting, and you don’t have to worry about what other sites are being hosted on it – meaning SEO / SERPS won’t go down due to having “bad neighbors”.

Let me know if you have questions and I’ll try to answer them the best I can, but in the meantime:

  • Let me know if you try the setup, and if there were any sticky points.
  • Any tips/tricks you can send over to help the process be streamlined
  • If you got the server up and going, and can’t wait for Part 2 when we actually can run a website on it!
Continue Reading...

Creating a Site Specific Plugin in WordPress

Quick: how do you add a custom function to a theme? If you said ‘add it to your functions.php file’, then you would be correct. But is that really the best way to go about things?

The Problem

Let’s say you’ve customized your site with some pretty fancy functions you’ve added in through the hooks and actions in WordPress. You’re in the process of creating a new theme (or selecting a new one for your site), and once it’s done you upload it and activate it.  All of those functions in your functions.php file? They’re still on your old theme, unable to be useful. You could copy and paste them all into your new theme’s functions file, but I think there’s a better way: a “site specific” plugin.

What is a Site Specific Plugin?

A site specific plugin is a plugin written by you (the site owner) and is used to house any custom functionality you’ve added.  Unlike a functions.php file, any code in a plugin will be carried over from one theme to the next – this means you can switch themes at your heart’s desire and any customizations will come along, regardless of which theme you choose.

Here’s how to create a site-specific plugin. Feel free to follow along with the video, or to use the code below.

Step 1

Create a new folder in your WordPress plugins folder. You can name it anything you want, but I chose to call it ‘site-plugin’. Do this either in your local development environment or through your favorite FTP client.

Step 2

Use the code below as a starting point.

<?php
/*
Plugin Name: Plugin Name
Plugin URI:
Description: Site Plugin to hold functions used on Your Site.
Version:     1
Author:      Mitch Canter
Author URI:
Text Domain: mitch-demo
*/

/* Initialize and run setup options on the after_setup_theme hook */
add_action( 'after_setup_theme', 'sitedemo_setup' );

function sitedemo_setup() {

	/* Sets up Automatic RSS Links in Header */
	add_theme_support( 'automatic-feed-links' );

	/* Adds Aside and Gallery Post Formats */
	add_theme_support( 'post-formats', array( 'aside', 'gallery' ) );
}

This allows your site-plugin to be activated through the plugin menu in WordPress. The ‘add_action’ call specifies that ‘sitedemo_setup’ should run during the ‘after_setup_theme’ hook – which fires after your theme files have all been loaded. ‘sitedemo_setup’ is a custom function written to house all of the code you wish to run.

Step 3

Save that file. You can save it as any filename you want, as long as it’s in the plugin folder you just created and ends with a .php extension.

Step 4

Head into your WordPress install and click on the plugins screen. You should see the plugin you just created in the list. Activate it here.

And that’s it. Your site is now running code from the site specific plugin, and you can carry it over from theme to theme.


Continue Reading...

Redirection: An Easy Way to Handle 301 Redirects

Since I’ve switched to my new design, I’ve also done quite a bit of cleanup work on the content side of things.  I pruned a lot of old articles that weren’t bringing in search traffic (and weren’t related to the site anymore), cleaned up a lot of the categories, and set my permalink structure to something that was a little easier to digest.

The problem, I noticed, came in when I changed the permalinks.  By doing so, any outside links coming in were immediately broken – meaning no search traffic was getting through.  This is, as you can imagine, a problem.

Luckily, I had already taken these things into account, and immediately installed one of my “must use” plugins – Redirection.

What does Redirection Do?

Basically, it serves as a switchboard between an old URL and a new URL.  You set up the relationship in the options panel and – when someone visits the old URL – they are automatically kicked over to the new URL with a 301 Redirect. Even better, when Google’s spiders crawl through and see that redirection, they’ll update their search results accordingly.

Real World Example of a 301 Redirect

Visit UnderstandWP in your browser.  This was a site I had going for a while in 2014, but I have since moved all of its blog posts and other content here.  UnderstandWP has Redirection running with a command to move *all* traffic here using a 301 redirect.  Since the permalink structures were the same, it was a clean move.

But sometimes that’s not always the case.

What if you change your permalinks and have 1000 articles?  Surely you don’t want to have to add in 1000 manual redirects.  Well, luckily, you have two options:

  1. Use Excel to make a chart of all of your old URLs and new URLs (this can be done easily with a few cell formulas), then upload the CSV into Redirection.
  2. Use a “regular expression”.

Regular Expressions (Regex)

A regular expression is, at its most basic, a pattern-matching formula.  It uses a combination of symbols, letters, and numbers to represent larger pieces of content.  The best thing to do is to take an example and break it down:

  • Source URL: /(\d*)/(\d*)/(\d*)/([A-Za-z0-9-]*)
  • Target URL: /$4

The source URL has four parts: in this case, a year, month, date, and post name. The \d* is short for “digits” – it matches any run of the numbers 0-9.  The final piece is the post name.  That long variable string, ([A-Za-z0-9-]*), matches any combination of letters, numbers, and hyphens – the whole post name gets stored in a temporary value.

The Target URL then takes only the final piece of that puzzle – the post name – and redirects all incoming traffic, stripping out the dates and only using the post name.
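You can simulate the rewrite with sed. (This is illustrative only: Redirection uses PCRE, where \d matches a digit; POSIX sed spells that [0-9] instead.)

```shell
# Old dated permalink in, bare post-name permalink out.
echo "/2016/10/12/hello-world" |
  sed -E 's|^/[0-9]+/[0-9]+/[0-9]+/([A-Za-z0-9-]+)$|/\1|'
```

The captured group (the post name) survives; the three date segments are stripped, which is exactly what the Target URL’s $4 does in Redirection.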

In Redirection, it would look like this:

301 Redirection

See that checked box on the right side? That’s important – that’s what tells the plugin to interpret the redirection as an expression.

Without going into the technical detail, because we could spend two or three posts talking about just Regex, know that:

  • Use (\d*) for numbers
  • Use ([A-Za-z0-9-]*) for mixed values (letters and numbers)
  • Count the number of variables you use, and use $1, $2, and so on to call those in the Target URL ($4 calls the 4th variable, for example).

Have you ever changed your permalinks? Did you have any other ways to make sure traffic was going to the right place, even after the switch?  Leave us a note in the comments!

Continue Reading...

Podcasting with PowerPress

We are right in the middle of the “second wave” of podcasting.  If you’re a younger person (low to mid 20s) you may not realize just how popular podcasting was back in the 00’s.  It’s back, and bigger than ever, and most brands would benefit from having some sort of podcasting element on their site or blog.  And the easiest way to do that is with PowerPress – the de facto podcasting plugin for WordPress.

Why Podcasting?

Podcasting is, in a nutshell, a “passive medium”.  Unlike YouTube videos or blog posts, which must be actively watched, a podcast can be consumed while doing other things.  A lot of people I know consume podcasts during their exercise time, their daily commute, or while sitting at their desk working.

What is PowerPress?

PowerPress is an easy way to integrate an audio feed – with the proper setup – into your WordPress site.  It allows you to set up your show, add album art, and have a special feed simply for your podcasting episodes.

The video above is a 10 minute introduction to PowerPress – it’ll show you how to install the plugin, set up the basic options, and how to add a podcast episode.

Continue Reading...

Contact/Hire Mitch

Want to book Mitch for a speaking event at your business or church?

Or does your business or project need some amazing design / development muscle?

Use the form below!