Aurora Core: a clean, modern Sass and ES Next-ready project boilerplate

Now that Aurora Theme has a solid foundation built upon Underscores, it’s time to make it a React-ready project. To speed up this process, I am going to integrate Aurora Core, a project boilerplate, into Aurora Theme. Today I’ll discuss a few key features of Aurora Core, the problem it’s meant to solve, and a little about how I’ve built it. In future posts, I’ll talk about how it’ll be integrated into Aurora Theme, and the key differences between Aurora Core and Aurora Theme.

The problem

After building several dozen projects for Bov Academy, I realized most projects had a common set of requirements / tooling. Unless a project requires 200 lines of CSS or less, I always reach for Sass, and I needed a way to compile my styles.

I always use JavaScript in all my Bov Academy projects, and typically, I’d have more than one file to deal with. Keeping performance and ease in mind, I needed a way to concatenate and minify my JS, too.

Additionally, I have a preferred structure for each of my projects. HTML files tend to stay at the top level, while Sass, JavaScript, images, and other assets live in an assets/ directory. Why recreate the structural wheel for each project, right?

My first attempt at rolling my Sass and JavaScript requirements together was within my Bov Web Components project. I set up my desired project structure and added a pretty sweet Gruntfile for all sorts of tasks. And it worked pretty decently. Except not all projects require accordions or tabs or [insert other component here]. And ain’t nobody got time for ripping all that junk out at the start of a project (because we all know if you wait until the end, that ish is shipping 😬). After ripping out the same components for the fourth or fifth time, I decided it was time to consider building a proper boilerplate.

What makes a good project boilerplate?

The structural and Grunt task running features of Bov Web Components were infinitely useful. Other features, mainly the components themselves, were less so. I wanted to use what worked as a starting point, and expand upon that. So I created a little list of requirements for my boilerplate:

  • Sass support
    • functions
    • mixins
    • a base config file
  • JavaScript / ES Next
  • Task running
    • Sass compilation & minification
    • JavaScript concatenation & minification
    • Image processing (compression)
    • Icon concatenation & minification
  • Linting
    • Stylelint
    • ES Lint
  • Configuration
    • Editor Config
    • Prettier
    • Babel
  • Documentation
    • Sassdoc
    • JS Doc
    • KSS

Once the initial requirements were defined, I started really digging into research. Mina Markham’s Sassy Starter provided a lot of inspiration, as did fellow Bov student Gabriele Romero’s ES6 Professional Starter Kit. Both are clean slates for a new project, and both are documentation-forward. Sassy Starter has incredibly well documented Sass functions and mixins using Sassdoc, while Gabriele’s ES6 starter kit has JavaScript documentation in mind with JS Doc integration.

I cannot express enough how important this type of documentation is. Have you ever used MDN or the jQuery API docs? Both are a joy to use because of their rich documentation. Having that kind of documentation available for both Sass and JavaScript was really important to me since Aurora Core is an open source project. I want anyone who comes across the boilerplate to feel confident that they can download it and get started with no problems.

To round out my research, I looked to the more popular feature-rich front end frameworks like Bootstrap and Foundation. I wanted to get a sense of what was included beyond a bunch of predefined components (i.e. were there any useful Sass functions or mixins I could replicate). I noted anything that looked interesting in Evernote, and found a few other resources along the way.

With a clear list of requirements, and a bunch of helpful research notes, I set to work on the boilerplate.

Laying the foundation

Bov Web Components was a great starting point for this project. In fact, I actually grabbed all the configuration files (Prettier, ESLint, Stylelint, etc.) and the Gruntfile from that project to start Aurora. Around the time I started Aurora Core, I read about Readme Driven Development (RDD). RDD claims to be a happy medium between a waterfall process, where projects are outlined in too much painstaking detail, and agile development, where projects aren’t defined in enough detail. While I didn’t write my readme from the very start, I did set to work on it on the first day. Having a project charter has, indeed, been hugely beneficial to keeping me on task.

After the initial project setup, I focused on creating my foundational Sass files. I made detailed lists of must-have functions, mixins, and variables, trawled through old projects for more ideas, and added ideas based on my research. Even though I adore Sass, this part was painstaking. Remember when I mentioned documentation being important? Well, it’s also a PITA to write. It’s especially painful for mixins. Feast your eyes on this as an example:

/// Generate button variations quickly and easily.
///
/// @param {Color}  $background                                    - The desired background color.
/// @param {String} $border            [0]                         - The desired border (use shorthand e.g. 1px solid aqua).
/// @param {Color}  $background-hover  [lighten($background, 10%)] - The hover background color.
/// @param {String} $border-hover      [$border]                   - The hover border (use shorthand).
/// @param {Color}  $background-active [$background-hover]         - The active background.
/// @param {String} $border-active     [$border-hover]             - The active border.
///
/// @example scss - Sample Usage
///  @include button($color__primary, $border__button, $color__accent);
///
/// @todo Determine whether color should be passed, or add way to automatically determine color to use (light or dark).
@mixin button($background, $border: 0, $background-hover: lighten($background, 10%), $border-hover: $border, $background-active: $background-hover, $border-active: $border-hover) {
  background-color: $background;
  border: $border;
  color: $text-default-dark;

  &:focus,
  &:hover {
    background-color: $background-hover;
    border: $border-hover;
    color: $text-default-dark;
  }

  &:disabled {
    background-color: tone($background, 20%);
  }

  &:active {
    background-color: $background-active;
    border: $border-active;
    color: $text-default-dark;
  }
}

There is nearly as much documentation for this mixin as there is code, if not more! But it sure is helpful to have documentation I can refer back to later.

A screenshot of Sassdoc documentation for a button mixin within Aurora Core.

Introducing Webpack

Grunt is great. It’s familiar, but it is also slow AF. I heard all the kids these days are using Webpack, and as soon as I had an opportunity to learn it, I wanted to integrate it into Aurora Core.

One of the most common perceptions of Webpack is that it’s difficult to learn. And it kind of is. But while it’s not as easy or familiar as Grunt or Gulp, it’s not super duper difficult either. Once you wrap your head around loaders and plugins, you’re golden. 👌

But where Webpack really shines is in its ability to play super nicely not only with ES6+ JavaScript, but also with front end frameworks including Vue and React. While Aurora Core’s charter is to be ES6+ ready, I also wanted to make it really easy to integrate JavaScript frameworks.

So not too long after learning Webpack, I set to work researching loaders, plugins, and doing a bunch of trial and error to see what worked, and what didn’t. I’ll readily admit I have started it over once or twice already. But I’m feeling much better about where it is currently. I still need to add a few things specifically for production builds, but once that’s done, it’ll be good to go.
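
To give a rough sense of the shape of things, here’s a minimal sketch of a Webpack config that transpiles ES6+ with Babel and runs Sass through a loader chain. To be clear, this is not Aurora Core’s actual config; the entry point, output path, and loader choices below are illustrative assumptions (and you’d need those loaders installed):

// webpack.config.js: a minimal sketch, not Aurora Core's actual configuration.
const path = require('path');

module.exports = {
  entry: './assets/js/index.js', // illustrative entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
  },
  module: {
    rules: [
      // Transpile ES6+ JavaScript with Babel.
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
      // Compile Sass, resolve CSS imports, and inject the result with style tags.
      { test: /\.scss$/, use: ['style-loader', 'css-loader', 'sass-loader'] },
    ],
  },
};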

This is so much already, and I haven’t even touched on how Aurora Core will be integrated into Aurora Theme. 😅  In the next post, I’ll talk about adding Aurora Core to Aurora Theme. I’ll talk about preparing the theme for React, and the differences between how Webpack works within Core vs. the Theme. After that, I’ll talk about how I’m using Aurora Core’s Sass in the Theme. There are some rad benefits of Sass + Webpack + React that I cannot wait to share. And last in this micro series on Aurora Core, I’ll review how I am using code formatting and linting tools, and the differences in them between Core and the Theme.

Setting up project environments

I’m gonna be straight with ya. The main reason I wanted to write this post was because I successfully utilized Let’s Encrypt to add an SSL certificate to aurorathe.me. And SSHing into a remote server, and not breaking it (in fact, making at least some part of it more secure) feels like black magic. So today, let’s make this a broader discussion and talk about the project environments I set up to work on Aurora Theme.

Working Locally

Talking about local development environments sometimes feels like a contentious subject. In the WordPress space, each person seems to have rather strong opinions about why their local dev environment is best. I’m not here to tell you what to use. Hell, I don’t care. What’s important is that you have a local environment that works for you. But if your machine sounds like it’s ready for takeoff, maybe it’s time to consider an alternate option. 😉

Choosing a WordPress environment

I’ve been wavering between Local by Flywheel and VVV for the past year. Local is great because it has a simple interface, and spinning up a new site is super easy. I also really like how it comes preconfigured with MailHog (so you can check emails sent from your local domain), and accessing a site’s database is as easy as clicking a button. VVV, on the other hand, is a command line utility, and offers endless configuration options. Need to test something on an older version of WordPress? No problem. You can adjust that over the command line. Prefer interacting with things over the command line? VVV has you covered.

I swap between using my 2015 MacBook (the little 12-inch retina number) and my 2016 MacBook Pro (the 15-inch version). I mention this because it really doesn’t matter what I use on my MacBook. It has no fans so it never sounds like I’m on an airport runway. My MBP, however, does have fans, and Local + VS Code seems super problematic on that machine for one reason or another. I’m super tempted to try VVV on my MBP, and use Local on the other (which is actually the only option on that machine).

Writing code

I feel pretty strongly about VS Code. I’ve been using it for just over a year, and made the switch because Brackets just couldn’t handle an entire wp-content folder for an enterprise-level site. I shopped around for about a week trying Sublime, Atom, and PHPStorm, but VS Code stood out because it was the most Brackets-like (hello integrated version control! 👋), but also had some of the neat features I quickly got hooked on in Sublime (Cmd + P quick file browsing for the win!). Setting up all my editor tools (linting, snippets, etc.) was super quick and easy, and turned out to be the least painful of all the options. It doesn’t hurt that it already includes most of the “necessities.”

Focusing mostly on JavaScript development over the past few months has made me love VS Code even more. It has incredibly robust debugging features, and is packed with all kinds of goodies that make modern JS development a breeze. For example, recently I’ve been doing a lot more work with ES6 modules, and I love how VS Code can guess which module I’m importing as I type. HELL YEAH!

Keeping code under [version] control

Git is essential. I use it on basically every project, and Aurora Theme will be no different. I’ll be keeping two repos for this project:

  1. for the theme itself
  2. for the entire wp-content directory, which includes both the plugins and themes directories, to manage deployments

I probably could avoid managing the entire wp-content directory, but if I need to add a plugin during development, which will also need to be deployed, I don’t want to waste time on managing these things separately. Since Git lends itself nicely to a deployment discussion, let’s move along to remote environments.

Deployments & production

Aurora Theme is a journey. A terrifying journey into my development and thought process. aurorathe.me is the production site, but this is a personal project, and I want to share my journey while developing the theme. So, I am going to push everything straight to production. Clearly, if I get to a point in a few months where Aurora is stable, and I want to only focus on minor changes, I’ll revisit this approach.

The production environment

I was introduced to Webfaction during my MIGHTYminnow days, and I’ve been pretty hooked on it ever since. Webfaction bills itself as “hosting for developers,” offering a lot of storage, bandwidth, and memory for $10 a month. What I ❤ about Webfaction is how easy it is to spin up a new site. Instead of wasting time manually installing something like WordPress, you add a new application to your domain, and Webfaction does the heavy lifting. And there is no shortage of configuration-free options available, so if you’ve always wanted to dabble in Django, Node, Rails, etc. it’s easy to spin up a new installation.

The one thing I did differently with aurorathe.me was set up SSL using Let’s Encrypt, which is a free SSL certificate authority. Because aurorathe.me is my production site, I wanted to make sure I had SSL up and running from the start.

Let’s Encrypt + Webfaction

At the beginning of this post, I mentioned how SSH-ing into a remote server and not breaking everything feels like sorcery. Well, that applies here. Following the chosen answer from this post on the Webfaction forums, plus a helping hand from this post on adding Socat to Webfaction, got me sorted out in about 15 minutes. And now that I have both acme.sh and Socat set up, adding SSL certificates to new domains is as easy as:

Le_HTTPPort=77777 acme.sh --issue -d mydomain.com -d www.mydomain.com --standalone

More Magic with WP-CLI 🧙‍♀️

The super sweet thing about working with the REST API is that I only need my data in one place. Because I’m in the super-early development phase, and I don’t have any specific requirements for data, I’m using the WP-CLI + Jay Wood’s WP-CLI Random Posts package to generate data for my site (complete side note here, but Jay is amazing. If you ever have an opportunity to work with him, you are incredibly lucky. He’s so happy to share his knowledge with kindness and patience, and that, my friends, is truly amazing 💫). So a little more SSH sorcery with wp jw-random generate 25 --featured-image --img-type=business, and I’ve got myself 25 posts with featured images to work with both locally and remotely. 💥

Deployments

So, how does code get from my machine to production? Well, it’s a magical combination of Git + Github + DeployHQ = ✨.

The beauty of DeployHQ is that it can be set up to automatically deploy to the remote server every time there is a push to the master branch of my project (in this case, the repo with wp-content). There is no mucking about with an FTP client, and I don’t have to remember anything. Of course, it’s totally possible to manually deploy the project, which is what I’d generally recommend for a production site. But again, for this particular project, I’ll opt for automatic deployments to keep my environments up-to-date, and let you, friend, follow along.

Project Management

I’d be remiss if I didn’t mention how I’m keeping track of project deliverables, and making sure that I keep this project moving forward. I used to use Basecamp for managing my freelance and personal projects. But paying $30+ a month for just personal projects was a bit steep. More recently, I moved over to Trello, and it’s been pretty perfect. (Want to follow along? Check out the Aurora Theme Trello board.)

Creating lists for each “project phase” is really helpful for me to plan out entire projects, and see the bigger picture. And within each list, I have a card for each “bigger” task that needs to be completed. Some cards have checklists within them, others just have general notes to remind me what I was thinking. I really like that cards can then be shuffled around to “In Progress” or “Complete” lists, and it’s easy to see the progress that’s been made. Or maybe you just want to use labels to denote what’s in progress and what’s complete instead. That’s the beauty of Trello: there are endless possibilities for organizing things.

Whew, that’s a lot, right? 😅 Being organized up-front is super important to the success of any project, but I almost think it’s more important when working on a personal project. It’s so easy to get distracted (anyone else suffer from shiny object syndrome?), and having a concrete plan, with concrete steps is important to achieving your project’s goals. This is probably the most organized I’ve been for a personal project, but I also feel like I’ve really set myself up for success.

RegEx revisited

In the past week or so, I delved deeply into the terrifying world of regular expressions (regex). It turns out that regex isn’t that scary, and if you play with it enough, it is actually kinda…fun? My first foray into regex was when I was working through How to Learn JavaScript Properly. It was a great primer, but what I learned recently at Bov Academy has taken my understanding of regex to an entirely new, terrifying level.

In the course of learning regex, I had two projects to complete:

  1. A simple JSON validator
  2. A link harvesting utility

I thought it might be fun to share a little of what I learned about regex through these projects.

Creating a simple JSON validator using regex

The first project I completed was a simple JSON validator. The gist is that the validator allows a user to either upload a JSON file, or paste their JSON data into a textarea. When the user clicks submit, the validator will use a regular expression to check whether the string is valid JSON. There are some caveats, however. This validator isn’t meant to check all JSON data, so mine will not correctly validate JSON arrays, but only JSON objects.

Because the regex portion of this assignment is what I really care about for this post, that is the only part I’ll cover. However, if you’re interested in the rest of the code, you can check out the Github repo.

Defining valid JSON

The first step in working with any regex project is determining what qualifies as a valid match. In the case of JSON, here are the match parameters:

  1. Starts with an opening curly brace
  2. Keys must be wrapped in double quotes
  3. Keys must be followed by a colon
  4. Values may consist of these types:
    1. Strings (wrapped in double quotes, but not containing double quotes)
    2. Numbers
    3. true / false
    4. undefined
    5. null
    6. Objects
    7. Arrays
  5. When more than one key-value pair exists, the key-value pair must end with a comma
  6. The final key-value pair must not have a comma at the end
  7. The JSON object ends with a closing curly brace

With the valid JSON structure in mind, I created a simple, yet valid object to test against.

{
	"name": "Paul Stephen Forde",
	"age": 1,
	"cute": true,
	"occupation": undefined,
	"friends": [
		"Minnie",
		"Whitney",
		"K2"
	 ],
	 "markings": {
	 	"type": "tabby",
		"color": "orange"
	 }
}

Then I headed to the must-bookmark RegExr site to start playing with my JSON regex.

Capturing the opening curly brace

The first step is pretty easy–let’s make sure there is a match for the opening curly brace:

var pattern = /^{/;

By tacking ^ onto the front of this regex, we’re making sure that our JSON object begins with that character. So whenever something must begin with a character, put a little hat on it (i.e. ^) and party on. 🎉

Identifying the keys to our JSON

Next, we’re going to check for some whitespace (anything like a space or a new line), and our key, which as mentioned above, must be wrapped in double quotes. Keys must also be followed by a colon, so we’ll check for that too.

var pattern = /^{\s*("\w+"):/;

Let’s break this \s*("\w+"): section down a bit.

\s* is what we use to check for our optional whitespace. \s will search through our string and check for any whitespace character such as a space, tab, or new line. Adding * checks whether the preceding character set (i.e. \s) exists zero or more times. That basically means that we’ll allow as much whitespace as the user wants, including no whitespace.

Parentheses in regex allow us to group a character / search set together. In the case of ("\w+"), I want to look for the JSON key, which will have an opening ", any combination of one or more word characters (\w, which is A through Z, 0 through 9, and the underscore), and a closing ". The + holds a magic power similar to *, but instead of zero or more, + requires at least one match of the previous character set. After the key group, we simply add a : to ensure that is present before checking the value.
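
If you want to poke at this outside RegExr, a quick check in the browser console (with made-up sample strings) looks something like this:

var pattern = /^{\s*("\w+"):/;

console.log(pattern.test('{ "name": "Paul Stephen Forde" }')); // true
console.log(pattern.test("{ name: 'Paul' }"));                 // false, because the key isn't wrapped in double quotes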

Checking JSON values

Checking for valid values is where this regex starts looking gnarly because there are so many different types of values to check. At this stage, it’s best to break each possible value down into its own parts.

Starting with string values

We’ve already checked for strings in our key, so we can use that as a rough guide for checking valid string values. It does require modification, however, because a valid string could include a sentence with whitespace and punctuation. This updates our regex to look something like this:

var pattern = /^{\s*("\w+"):\s*(("[^"][\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+"))/;

Isn’t she a beaut? 💁‍♀️ Let’s go ahead and break down the (("[^"][\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+")) part.

I’m using two sets of parentheses for the capturing group because I’ll want to be able to check any of our valid values (i.e. more than just the string) and within that I want to check for a string group.

This string group starts with the double quote, then we hit this guy [^"], which basically says, “hey double quotes, buzz off you’re not invited to this string party.” So any time you see a party hat within a set of square brackets, that there is a goddamn party pooper. 🙅‍♀️ But in all seriousness, [^] is a negation set, meaning that anything that comes after the party hat but before the closing square bracket will be excluded from the set.

[\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+ is our heavily modified key check. And by heavily modified, I mean I added all this garbage: \s!@#$%^&*()\-+={}[\];:',.<>?/. Adding that set of characters allows our string to contain whitespace, and a boatload of characters. One might be tempted to try [\S\s], which allows for any characters, but that ends up allowing almost anything, and won’t return invalid should a double quote sneak its way into the mix. (Fun fact: I actually realized this bug in the process of writing this post.) By specifying exactly what we’re allowing, we can ensure that the regex doesn’t ignore our no double quotes within the string rule.

Numbers, truthy values, objects and arrays, oh my!

Checking for strings as values is actually the most complicated part of the value regex because we need to specify what’s allowable within the string. Checking for the remaining valid values is actually relatively easy.

var pattern = /^{\s*(("\w+"):\s*(("[^"][\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+")|\d+|true|false|null|undefined|{[\S\s]*}|\[[\S\s]*]))*/;

That looks gnarly, but let’s break it down starting with |\d+. Whenever we encounter \d in regex, it means that we’re looking for any digit between 0 and 9. And you’ll remember when we add a +, it means we’re looking for one or more of a character group. So in this case, we’re looking for one or more digits. The pipe (|) that precedes the \d is very similar to the || that we might see in JavaScript or PHP. Basically it says we want to match our string OR digits.

When we add |true|false|null|undefined, we know that we’re looking for a string OR a set of digits OR true OR false OR null OR undefined. We defined these values explicitly because they do not require double quotes like strings do.

Things get a little loosey goosey with the array and object checks, and this is why this is a simple validator. We use {[\S\s]*} to allow an object with any values within the object, and do something similar with our array check: \[[\S\s]*]. This means that we could get a valid result even though there may be invalid data within an object or array value. As I mentioned before, [\S\s] allows any characters, which includes A through Z, 0 through 9, punctuation, whitespace, etc. The * simply checks for zero or more occurrences of a character. And because square brackets are used to define character groups in regex, I had to escape the first bracket in the array check with a backslash: \[.

Capturing multiple key-value pairs

We’ve now defined parameters for keys and all allowable values. But if you plug this into RegExr using the sample valid object above, you’ll notice it only highlights the first line.

JSON validator after first key-value regex check

What we also notice is that the regex stops at the comma. It’s almost as simple as tacking on a comma at the end, but not quite. We need to add an opening parenthesis just after the opening curly brace for the entire regex, and we’ll add another closing parenthesis before the closing /. The comma will go between the final two closing parentheses. We’ll also add our handy zero or more * to allow for multiple key-value pairs. Now our regex is looking like this:

var pattern = /^{(\s*(("\w+"):\s*(("[^"][\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+")|\d+|true|false|null|undefined|{[\S\s]*}|\[[\S\s]*])),)*/;

And now we have all but the last key-value pair and closing curly brace highlighted.

JSON regex matching all but last key-value pair

Capturing the final key-value pair

Capturing that last key-value pair isn’t too complicated. We can simply copy the key-value portion of the regex and paste that after the final * in our existing regex. But this time, we need to remove the comma and that final *.

var pattern = /^{(\s*(("\w+"):\s*(("[^"][\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+")|\d+|true|false|null|undefined|{[\S\s]*}|\[[\S\s]*])),)*\s*(("\w+"):\s*(("[^"][\w\s!@#$%^&*()\-+={}[\];:',.<>?/]+")|\d+|true|false|null|undefined|{[\S\s]*}|\[[\S\s]*])\s*)}$/;

I have also added the check for our closing curly brace, }$. The $ indicates that the end of the string must be the curly brace. You may also notice that I added a \s* before the closing parenthesis to allow for any amount of whitespace before we hit the end of the JSON object.

Checking invalid scenarios

The assignment provided an example of invalid data to help us on our way. It’s important to check for both valid and invalid scenarios to ensure that things are working correctly.

{
  "country": "United States",
  "capital": "Washington, DC
  "states":{{}}
}

This scenario is great, but isn’t robust enough. We see that the closing double quote is missing for the capital value, but the comma is also missing. So to test this thoroughly, I’d make sure that we get no matches in the current case, when the ending double quote is present but the comma isn’t, and when the ending quote is missing, but the comma isn’t. I’d also dream up other scenarios to ensure that the validator passes simple tests. Trying a JSON object with just a single key-value pair might be a good test, for example.
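
Here’s a rough sketch of how I might sanity-check the final pattern against a couple of one-line samples (the strings below are illustrations, not the assignment’s actual test data; pattern is the final regex above):

var validJSON   = '{ "name": "Whitney", "age": 11 }';
var invalidJSON = '{ "capital": "Washington, DC "states": {{}} }'; // broken string value, missing comma

console.log(pattern.test(validJSON));   // true
console.log(pattern.test(invalidJSON)); // false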

Is this JSON validator perfect? Absolutely not, but it does a pretty decent job of at least making sure that all JSON object keys are wrapped in double quotes, and that most of our values are 100 percent correct (with the exception of what’s within an object or array, of course).

Building a Link Harvester

The second regex-heavy assignment was to build a link harvester. The link harvester allows a user to upload an HTML file, or paste the contents of an HTML file into a text area. Upon submitting the data, the harvester outputs all the external links and email addresses along with their corresponding text.

Again, I’m going to focus on the regex for this post, but if you’re interested in a deeper dive, you can find the repo on Github.

Defining valid link scenarios

As with the JSON validator, the first thing to do was identify what qualified as a valid match. In this case, I was looking for external links and email addresses. I wanted to exclude links within a site, so I’d keep that in the back of my mind when testing various scenarios.

Harvest all the links

I took a few stabs at different ways of identifying links before arriving at my final regex. I knew that a link needed to be wrapped in an a tag, and I knew that external links could start with http or https. Email addresses on the other hand could start with mailto. But what I decided after a few different attempts is that the first pass for harvesting links shouldn’t care what type of link existed, it should just focus on capturing anything wrapped in an a tag. So this is my initial regex:

var pattern = /(<a[\s\w="?:/.@#-]*>)([\w\s.,;:-])+<\/a>/gi

Let’s split this up into the three main parts, starting with (<a[\s\w="?:/.@#-]*>). We’ve got this whole thing wrapped in parentheses, so we know this is a capturing group. Within the capturing group, we want to find the opening portion of our a tag <a. We’re going to follow that with optional whitespace, word characters, and a few additional characters we may find in class names, urls, etc. [\s\w="?:/.@#-]* gives us the most flexibility in harvesting the opening a tag without worrying about whether a link contains just an href, a class, an id, or any other combination of attributes. As we did several times in the JSON validator, we add a * to allow for zero or more instances of that character set. Then we’ll look for our closing angle bracket.

The next capturing group, ([\w\s.,;:-])+, will capture our link’s text. The link text may contain whitespace, word characters, and a few additional punctuation characters (I didn’t go overboard here, so it’s entirely possible it may not harvest a link it should).

The final part is simply the closing </a> for our link tag. I’m tempted to make an anchor joke here, but I think I’ll just keep going…⚓️

Weeding out the non-matches

I used the match method to store the links as an array in my links variable. The next job is to loop through all those links and identify which ones are external links and which are email addresses.
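
For context, here’s a sketch of the setup the loop below assumes. The variable names, and htmlString in particular, are illustrative; in the real project the HTML comes from the upload or textarea:

// First pass: grab every <a>...</a> snippet, then prepare the object results get pushed into.
var pattern   = /(<a[\s\w="?:/.@#-]*>)([\w\s.,;:-])+<\/a>/gi;
var links     = htmlString.match(pattern) || [];
var harvested = { links: [], emailAddresses: [] };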

// Pull out the external links & email addresses.
links.forEach(function (link) {
	var address = link.match(/((https*:\/\/)[\w=?:/.@-]+)|(mailto:[\w@.-]+)/gi),
	    text    = link.match(/(?!>)([\w\s.,;:-]+)(?=<\/a>)/gi);
			
	// If the second-layer regex turns up nothing, bail.
	if (!address) {
		return;
	}
			
	// Push links into our harvested object's links array.
	if (address[0].match(/((https*:\/\/)[\w=?:/.@-]+)/gi)) {
				
		var obj = {
			url: address[0],
			text: text[0]
		};

		harvested.links.push(obj);
	}

	// Push email addresses into our harvested object's email array.
	if (address[0].match(/(mailto:[\w@.-]+)/gi)) {

		harvested.emailAddresses.push(address[0]);
	}
});

You’ll notice that address is doing a further check on the current iteration of the link array, var address = link.match(/((https*:\/\/)[\w=?:/.@-]+)|(mailto:[\w@.-]+)/gi). We’re checking whether the link begins with http. By adding s*, we add an optional check for an s, which will also pass our https check. Then we make sure that is followed by a colon and two forward slashes (which have been escaped). The url itself can contain any combination of word characters, and a select few punctuation characters. Unfortunately, I don’t think this necessarily captures all link scenarios, but I was able to capture fairly common scenarios in my testing.

Since we want to grab email addresses, we have a | and a second capturing group to check for mailto:. I figured there are fewer allowable characters in email addresses, so I limited it to word characters, the @, ., and the dash. You’ll note that there is a gi at the end of every regex in this link harvester example. The g flag is what allows us to capture more than one match, and the i flag ignores character case, so we can match both upper and lowercase characters.
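
As a quick illustration of that second-layer check (with made-up links):

var linkPattern = /((https*:\/\/)[\w=?:/.@-]+)|(mailto:[\w@.-]+)/gi;

console.log('<a href="https://example.com">Example</a>'.match(linkPattern));  // ["https://example.com"]
console.log('<a href="mailto:hi@example.com">Say hi</a>'.match(linkPattern)); // ["mailto:hi@example.com"]
console.log('<a href="/about">About</a>'.match(linkPattern));                 // null, internal links are skipped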

The magic of positive and negative lookaheads

The trickiest bit here wasn’t identifying potential links for harvesting, but trying to figure out how to capture just the text between an opening and closing a tag. Sure, I could have used some substring() voodoo, but why not put regex to the test here too?

You’ll notice the text variable’s regex has some weird extra characters in it, namely (?!>) and (?=<\/a>). We’ve already discussed that parentheses are used for grouping, and the same is true in both these instances. Adding ?!, known as a negative lookahead, to the first group, however, tells our regex to look for the closing angle bracket of our opening a tag, but doesn’t actually include it in our match. We need the second part, our positive lookahead ?=, to tell our regex to keep looking until it finds the closing a tag, but again, without actually including the </a> itself in our results. Basically, the combination of a negative and positive lookahead is creating a boundary for our text regex, and returns only the link text value. Pretty neat, huh? RegExr illustrates this really well:

Regex lookahead example
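
And if you’d rather poke at it in the console than on RegExr, here’s the same idea with an illustrative string:

var linkText = '<a href="https://example.com" class="external">Visit Example</a>'.match(/(?!>)([\w\s.,;:-]+)(?=<\/a>)/gi);

console.log(linkText); // ["Visit Example"], with the surrounding markup excluded from the match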

The rest of the forEach pushes our set of external websites and email addresses to an object, which is rendered to the screen by a separate function.

There is no denying that regex is still tricky, but I’ve been finding it a fun challenge to figure out these types of problems. Clearly, regex is not necessarily easy for everyone to understand, and many consider it an anti-pattern for that reason. So, if there is an easier way to accomplish something without regex, that would be the preferred route. But if you absolutely need to use a regex, hopefully this post has cleared up some confusion or offered you some ideas to solving your regex quandary. 😃

Getting started with task-running & Gulp

Front-end development typically consists of many stages, including scaffolding (setting up the project environment), development of the website or application, testing, optimization, and deployment. Setting up project environments and the rest of the project workflow from scratch project after project can become cumbersome. Task runners were introduced to solve this problem. Task runners help developers manage and automate all the tasks associated with the various stages of development. In this article, I will focus on the popular task runner Gulp, and show you how to set up Gulp with a basic task for compiling Sass.

Getting started with Gulp

First things first–with any task runner, you will need to make sure you have Node installed on your computer. You can either visit the Node website and download the appropriate package and install from there, or use Homebrew (my preferred method) to install Node via the command line if you are using a Mac. To install Node via Homebrew, simply type brew install node in your terminal prompt.

Once you have Node installed, you’ll need to use NPM, the package manager that comes bundled with Node, to install Gulp. Several Node packages can be installed globally on your computer, and / or locally to your project. In this case, we’ll want to start by installing Gulp globally. Simply type the following into your terminal prompt:

npm install -g gulp-cli

In this command, the -g flag is what tells npm to install this as a global package.

Now that we have the Gulp CLI installed globally, we’re ready to get started configuring our project. If you already have a project for which you’d like to set up Sass, simply navigate to that project via your terminal. Something like this:

cd path/to/your/project

Otherwise, create a new directory and cd into the directory.

Setting up a Gulp project

Now we’re ready to start setting up our project to use Gulp. The first step in this process is to create a package.json file, which is a manifest that keeps track of all the packages we’re using for development of a specific project. This is particularly important when we’re working on a project across a team because it allows other team members to simply clone a Git repository and install the packages with a single command, rather than going through this entire setup process.

The first step is to ensure you’re in the correct directory for your project. From there, create your package.json file by typing the following in the terminal prompt:

npm init

This command will prompt you for information about the project. Follow the prompts and fill out the information as best as you can. If something doesn’t seem relevant at this point, simply hit Enter to accept the default. Once you have completed the steps of npm init, you will see a new package.json file at the root of your project.

Now we’re ready to start adding Node packages!

Adding packages

Earlier I mentioned that Node allows us to install packages globally on our computer, or locally to our project. At this stage, we’re going to install all the packages we need for this specific project locally.

The first package we’ll install is Gulp itself. We can do this by typing the following into the terminal window:

npm install --save-dev gulp

This command downloads Gulp into a node_modules directory in the root of your project. The --save-dev flag tells npm to save this package under the devDependencies object in our package.json file. This means that the next developer to come along can simply run npm install to grab the same dependency. If you were to use --save for the flag instead, the package would be saved to the dependencies object, which is meant for packages an application needs at runtime; devDependencies are dependencies that are not required in the production environment.

This may be a good point to stop and add node_modules to your .gitignore file. node_modules tends to be a hefty folder with not just our packages, but their dependencies, and other developers involved in the project can just install these dependencies themselves by running npm install after they have cloned the project.

While we’re in the installation stage, let’s also install the Sass package we’ll need:

npm install --save-dev gulp-sass

Configuring Gulp tasks

Alright, now that we have Gulp and our local packages installed, let’s go ahead and start configuring our Gulp tasks. The first thing you’ll need to do is create a Gulpfile.js in the root of your project.

Open the Gulpfile and add the following:

var gulp = require('gulp');

gulp.task('default', function() {
    // place code for your default task here
    console.log('Gulp is working!');
});

The var gulp = require('gulp'); line is telling Gulp to pull in the Gulp package from the node_modules folder.

The next few lines starting with gulp.task are configuring our first task, the default task, which simply prints Gulp is working! to the console. Go ahead and type gulp into the console to ensure everything is working, and that you see the Gulp is working! message.

Now that we’ve established the basic pattern for pulling in our packages and adding a task, let’s set up our Sass task.

Compiling Sass with Gulp

The first thing we need to do when adding a new package for usage with Gulp is load it into our Gulpfile:

var sass = require('gulp-sass');

Now that we have loaded sass into our Gulpfile, let’s go ahead and set up the Sass task using gulp.task(), which takes two parameters: the name of the task, and a function which tells Gulp what to do to complete this task:

gulp.task('sass', function() {
    return gulp.src('./sass/*.scss')
    .pipe(sass.sync().on('error', sass.logError))
    .pipe(gulp.dest('./'));
});

gulp.src() tells Gulp which files to process during this task using a simple glob pattern–that is, look for anything within the sass directory that has a file extension of .scss.

That information is then piped (using .pipe(), which is similar to piping via the command line) to the sass.sync() function, which is set up to log any errors that may occur during the processing of that task.

Finally, that output, if it didn’t error, is piped to our destination folder, in this case the project root, where we should find our compiled CSS file.

The “guts” of a Gulp task is essentially a function that returns our output.

Now, if we have some Sass files in the sass directory, we can run gulp sass from the command line, and we should see our compiled file in the root of the project.

Automating with Gulp

As you can imagine, running gulp sass every time you need to compile your stylesheets becomes cumbersome quickly. Let’s go ahead and add a task which will automatically compile our Sass every time we save a file, called watch:

gulp.task('watch', function() {
    gulp.watch('./sass/*.scss', ['sass']);
});

This task simply takes our initial glob pattern from the sass task and tells the watch method to watch for changes within files that match this pattern (you can also pass an array of several patterns). The second parameter tells the watch method which task(s) to run when a change is detected.

So now, when we want to work on our Sass, we can simply type gulp watch into the terminal, and anytime a file is saved, Gulp will automatically run our gulp sass task.
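
One optional finishing touch: point the default task at both of these so that typing plain gulp compiles your Sass and starts the watcher. This would replace the placeholder default task from earlier, and uses the same Gulp 3.x syntax as the rest of this post:

// Running `gulp` alone now kicks off both the sass and watch tasks.
gulp.task('default', ['sass', 'watch']);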

Wrapping up

This tutorial is just the tip of the Gulp iceberg. There are so many other helpful Gulp packages that can be added and configured to automate the development process. You can process stylesheets with PostCSS, concatenate and minify JavaScript, control versioning, and even deploy your sites or applications with Gulp. I encourage anyone looking to learn more to check out the Gulp website, read the Gulp docs, or search for Gulp plugins to further explore the possibilities.

In the meantime, if you’re looking for jump start, check out my Getting Started with Task Running & Gulp Gist for a package.json and Gulpfile.js that will get you started with Sass compilation, CSS & JS minification, and more. 😀

Using cat rivalry to understand the bind method

A few weeks ago, I used the rivalry between my girl cats, Whitney and Minnie, to solidify my understanding of JavaScript’s .bind method. I thought I’d share it because let’s be honest, the MDN explanation of .bind() is confusing:

The bind() method creates a new function that, when called, has its this keyword set to the provided value, with a given sequence of arguments preceding any provided when the new function is called.

What .bind() does–a plain English explanation

The JavaScript bind method allows developers to rebind or “rescope” the this keyword. When working with objects, and functions within objects, this refers to the object with which we are working. For example, in the code below, this within the getKittyInfo method refers to the kitten object:

var nemesis = 'dog';
var kitten = {
    'name': 'Whitney',
    'age': 11,
    'color': 'orange',
    'markings': 'torbie',
    'nemesis': 'Minnie',
    getKittyInfo: function() {
        console.log( this.name + ' is an ' + this.age + ' year-old ' + this.color + ' ' + this.markings );
    }
}
kitten.getKittyInfo();

Output

"Whitney is an 11 year-old orange torbie"

Where things get confusing

However, whenever a function is inside another function, this is bound to the global object.

So, when we add archNemesis() within getKittyInfo(), archNemesis()’s this is bound to the global object (where our global nemesis variable lives) instead of our kitten object:

var nemesis = 'dog';
var kitten = {
    'name': 'Whitney',
    'age': 11,
    'color': 'orange',
    'markings': 'torbie',
    'nemesis': 'Her younger sister, Minnie',
    getKittyInfo: function() {
        console.log( this.name + ' is an ' + this.age + ' year-old ' + this.color + ' ' + this.markings );
            function archNemesis() {
                console.log( 'Arch nemesis: ' + this.nemesis );
            }
        archNemesis();
    }
}
kitten.getKittyInfo();

Output

"Whitney is an 11 year-old orange torbie"
"Arch nemesis: dog"

When archNemesis() is called, nemesis logs as dog because it is accessing the global object (i.e. our nemesis variable created right before our kitten object). In order to get archNemesis() to use the nemesis defined within our kitten object, we’ll need to bind the archNemesis function like so:

var boundNemesis = archNemesis.bind( this );

This tells our function that we’d prefer to use our kitten object’s this instead of the global this.

In the final example below, you can see how this was done within the kitten object’s archNemesis function:

var nemesis = 'dog';
var kitten = {
    'name': 'Whitney',
    'age': 11,
    'color': 'orange',
    'markings': 'torbie',
    'nemesis': 'Her younger sister, Minnie',
    getKittyInfo: function() {
        console.log( this.name + ' is an ' + this.age + ' year-old ' + this.color + ' ' + this.markings );
            function archNemesis() {
                console.log( 'Arch nemesis: ' + this.nemesis );
            }
        var boundNemesis = archNemesis.bind( this );
        boundNemesis();
    }
}
kitten.getKittyInfo();

Output

"Whitney is an 11 year-old orange torbie"
"Arch nemesis: Her younger sister, Minnie"

Hopefully this example has also helped you wrap your head around .bind(). If in your travels / experiments you come up with examples of your own, please feel free to share them in the comments!

Announcing WP Style Tiles beta

A Style Tile assembled with WP Style Tiles

I’m thrilled to announce that the WP Style Tiles plugin beta is ready! The plugin has come a very long way since I first mentioned it on the blog, and I think it could be a really useful tool for designers who work primarily with WordPress.

What are style tiles?

Style Tiles are a visual aid that helps designers establish a visual design language for a website. Falling somewhere between moodboards and full-scale comps (mock-ups), Style Tiles allow designers to try out a variety of interface elements in a quick, iterative fashion. Each Style Tile might offer a different set of colors, fonts, patterns and textures, and other user interface elements. The designer then presents the Style Tiles to the client. The client can give targeted feedback about which elements she would like to see in her new website. The Style Tiles website sums up this idea quite nicely:

Style Tiles are similar to the paint chips and fabric swatches an interior designer gets approval on before designing a room. 
An interior designer doesn’t design three different rooms for a client at the first kick-off meeting, so why do Web designers design three different webpage mockups?

The very nature of Style Tiles allows the client to pick and choose their favorite elements from each tile presented. The designer can iterate based on that feedback, and quickly create a new tile that represents the desired visual design style for the client’s website.

Why not use the PSD template?

Samantha Warren's PSD Style Tiles template
An export of Samantha Warren’s PSD Style Tiles template

Samantha Warren, the brains behind the idea of Style Tiles, created a very handy Photoshop template for creating Style Tiles. While I think the template is a great starting point, there are several things I didn’t like about it.

PSD files do not scale well

One of the things that immediately turned me off of working with the Style Tile PSD template was the fact that the text became fuzzy whenever I zoomed in on it. Even at 100 percent zoom, the text looked fuzzy, which I thought detracted slightly from the entire Style Tile experience.

Sometimes you don’t want to deal with Photoshop layers

Trying to find the right PSD layer to edit can be like a treasure hunt. Despite the Style Tile template being very well organized, it was still confusing trying to figure out which layer altered a specific bit of text, or which layer controlled a specific color.

PSD files offer no interactivity

The Style Tiles template displays two elements which are normally interactive in a website:

  1. Links
  2. Buttons

Clearly, rolling over a PSD, jpg, or other image format doesn’t offer any sort of interactivity. If you’re anything like me, you like to see a button or link change color whenever it’s hovered.

How does it work?

Adding a WP Style Tile button

WP Style Tiles works by adding a handful of components via shortcodes. This allows you to add elements to your tiles in any combination you’d like. If you’re not comfortable working with raw shortcodes, no worries! I’ve added integration with Shortcake UI, which makes adding shortcodes a piece of cake (pun definitely intended). 😉

WP Style Tiles Style Tile CPT

After installing and activating WP Style Tiles, you will see a new Style Tiles custom post type (CPT) on the WordPress dashboard menu. The Style Tiles CPT allows you to keep your style tiles organized, and easy to find without cluttering your pages or other posts.

You can add a new Style Tile post like any other post, and you’ll see that the post edit screen looks just like the blog post edit screen.

If you’re using Shortcake, you can easily add a new element to your tile by clicking Add Media > Insert Post Element and selecting the element you’d like to add.

WP Style Tiles gives you the flexibility to add as many or as few elements to your tiles as you’d like. And any element that has interaction, like buttons and links, will work!

Once you’ve created a few tiles for your client, you can share the link, and ask your client to leave feedback directly on the tile. And because this is all within WordPress, you can limit who has access to the style tile posts by adding a password to the post. 🙌

Where can I get WP Style Tiles?

If you want to try integrating style tiles in to your workflow, you can grab a copy of the plugin from Github. If you want to see an example tile, visit the Test Tile.

Remember, this plugin is in beta, and doesn’t currently have fully fleshed out features. In a future iteration, I’d like to figure out how to better integrate fonts, and add front-end styles to the tiles.

Reporting bugs & requesting enhancements

If you download and test the plugin, I’d love your feedback! You can report issues and request enhancements on the Github issues page.

[carrie_forde_button button_text="Download WP Style Tiles" href="https://github.com/carrieforde/WordPress-Style-Tiles/archive/master.zip" title="Download WP Style Tiles from Github" alignment="center" icon="download"]

Portfolio Updates

Over the last several weeks, I’ve been working on incremental updates to my site. And while the primary focus so far has been on the site’s content, I’ve also been making a few updates to the site’s design. One of the main areas of focus has been my portfolio. There were two main issues:

  1. the work displayed in my portfolio didn’t represent the type of work I wanted to do;
  2. the old portfolio layout abused the post meta to achieve the layout.

Choosing an area of focus

To solve the first problem, I removed all the portfolio projects that weren’t related to web design, WordPress theme development, or WordPress in general. What I realized after removing nearly 2/3 of my work was that the remaining projects were ones I was happiest with anyway; I had the most fun working on them, and they do a better job of showing off my skills.

Carrie Forde Portfolio archive, before
Before: it looked like I was all over the place with my area of focus, including print & web design
Carrie Forde Portfolio archive, after
After: there is a clearer focus on web projects

Now that the portfolio is focused specifically on web design and WordPress projects, I’m hoping that it will make it easier to understand what I do, what I’m good at doing, and how I might be able to help.

Use post meta wisely

The second issue came up as I was working on WP Style Tiles. Initially, I was using CMB2, which stores data in post meta, to generate a style tile. The problem there was that the list of options to work through was several scrolls long, and felt unnecessary when I could just use shortcodes (even though they kinda suck too). After rewriting the content for my portfolio projects, I realized the same thing was basically true about my portfolio. Post meta is really only needed for the real meta: the services, designer, and live site link.

Carrie Forde single portfolio, before
Before: all of this content was generated with post meta.
Carrie Forde single portfolio, after
After: only the services, designer, and launch button are post meta. Try not to mind the screenshot tool mangling the end of this page. 😬

Everything about the portfolio updates feels right now. And I had a lot of fun trying out new ideas along the way (I dropped Advanced Custom Fields for this in favor of CMB2, for example).

WP Style Tiles progress

I’ve been chipping away at WP Style Tiles all day today. I’m at a point where I’m between trips away, and I’ve been focusing mostly on brushing up on WordPress skills, and the plugin has been a good test.

Here’s what I’ve crossed off the list:

Remove CPT UI dependency
I think there is some sort of glitch with the way CPT UI exports CPT code. It only had two entries in the $labels array, which I’m 99% certain was the issue with my fields not working correctly. This may have only happened because I didn’t fill in all of the fields like I have in the past. So maybe that would fix it…something to look into later.

Remove ACF dependency; move to CMB2
I’m so happy this is done, and am now seeing the pros and cons of both Advanced Custom Fields and CMB2. ACF has that handy AF interface for setting up fields, but CMB2 has better, more intuitive ways of grouping fields together. There was quite a learning curve updating the output template, but it’s there, and I have more familiarity with CMB2, which is nice. 🙂

Here’s what’s in progress

General code clean up
Oh man, yesterday, I embarrassingly realized I was abusing & misusing esc_attr() when I should have been using esc_html(). Anyway, I’ve fixed that within the plugin (as well as a few other things where I was doing that), and started working on transferring my styles from inline styles to <style> tags injected into the head of the post. It’s somewhat debatable whether it looks nicer, but it will definitely be easier for a user to write custom styles to overwrite things (#deathto!importanttags).

There’s still code to be cleaned up, but I will keep chipping away at things as I go.

New to-do items

Design and coding are iterative processes, and I often find as I’m working through improving something, there are even more things I notice that could be improved in the next pass.

Add isset or ! empty checks on all fields / groups
There are fields or groups in the template that won’t always be completed (like subheadings). In those instances, it seems like a good idea to not print the wrapping code for those groups.

Figure out front-end metaboxes in CMB2
I spent about an hour in a bit of a daze trying to figure out how to get the custom CSS metabox to load on the front end with no success. I think taking a step back will help me gain some perspective so I can figure it out tomorrow.

Other notes

One side benefit of working on WP Style Tiles is working with a few different themes to test things out. Yesterday, I started using it in my theme to make sure it was compatible with Alcatraz (which is working great so far!), and since I was looking at my theme so much, I went in and fixed a few style issues that have been bothering me (namely, pre tags and tables). I want to do a more thorough review before I deploy, but they’re definitely looking better. 🙂

WP Style Tiles POC

I think I’m near the end of the proof of concept phase for WP Style Tiles. At this stage, the plugin isn’t pretty, but it is functional, which is the whole point of a proof of concept. 🙂

Here’s a run down of what needs to be addressed as we move out of the POC phase:

Remove CPT UI dependency

If you look at the code for WP Style Tiles, you can see I have the code for the Style Tile CPT already included, but it’s commented out because I was having trouble with one of my Vagrant sites, and it was working weirdly. This should hopefully be a quick, easy thing to fix.

Remove dependency on Advanced Custom Fields Pro; move to CMB2

A few things I love about ACF include the interface for creating the metaboxes, the ability to create repeating elements, and the ability to create flexible content elements. However, the latter two are only included in the Pro version, which cannot be included in an open source plugin (and currently, I’m using it as an outside dependency). In the next phase, I’d like to move toward utilizing CMB2 for the metaboxes because it is open source, and can be included in a freely distributed plugin.

Add remaining UI elements

Two user interface elements that I think would be super handy for a Style Tile are buttons (truly interactive ones that change colors, etc. on hover) and icons.

Find a better way to deal with fonts

Again, I have another dependency on a plugin (ACF Google Font selector), which provides a neat drop-down for Google Fonts, but doesn’t seem to enqueue on the front end correctly, and seems a bit overkill. And, what if I want to use a Typekit font (which, incidentally, is what I’m doing here)? I think having a radio to select whether you want to use a Google Font and showing the drop-down conditionally is a better idea. Or maybe just eliminate Google Fonts altogether and just have the designer enter the font they want to use, and write custom CSS. I dunno, but this point definitely needs more consideration.

General code clean up

My plugin is adding a lot of inline styling, and it’s looking rather ugly. Example:

<p class="wpst-type-paragraph" style="font-family: <?php echo esc_attr( $font['font'] ); ?>; font-size: <?php echo esc_attr( $size ); ?>px;"><a href="#" class="wpst-type-link" style="color: <?php echo esc_attr( $a_color ) ?>; text-decoration: <?php echo esc_attr( $text_decoration ); ?>; border: <?php echo esc_attr( $text_decoration ); ?>; box-shadow: <?php echo esc_attr( $text_decoration ); ?>;" onmouseover="this.style.color='<?php echo esc_js( $a_hover_color ); ?>'" onmouseout="this.style.color='<?php echo esc_js( $a_color ); ?>'">Click here for more information.</a></p>

Can you even tell that’s supposed to be a link? I think I have a pretty decent solution, which is to try hooking into wp_head and outputting the styles in <style> tags.

Improve front-end experience of the Custom CSS

I added what I think is really neat functionality to add custom CSS to each style tile on a post-by-post basis. Visibility is currently limited to admins (so if you look at a sample tile now, you won’t see it), so no need to worry about some random dude adding malicious things to your tile.

The best part about the Custom CSS is that there is an event listener on the input box which updates the selectors that are being modified in real time. That way, you can see what your changes will look like before you save them to the post.
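
The mechanics behind that live preview are pretty simple. Here’s a rough sketch of the idea (the element IDs are made up for illustration, and aren’t the plugin’s actual markup):

// Mirror whatever is typed into the Custom CSS box into a <style> tag so the tile updates live.
var cssInput     = document.querySelector( '#wpst-custom-css-input' );   // illustrative ID
var stylePreview = document.querySelector( '#wpst-custom-css-preview' ); // an empty <style> element, also illustrative

cssInput.addEventListener( 'input', function () {
	stylePreview.textContent = cssInput.value;
} );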

The main thing I want to improve here is to add a fixed toggle in the approximate current location of the Custom CSS input box, plus a toggle to close the box, so it behaves more like a fixed modal.

Add comment support

The entire point of WP Style Tiles is to enable collaboration between designers, developers, and clients, which to me means making sure comments are enabled on these posts by default so you as a client can tell me what you do or do not like about a style tile I’ve assembled, or I can dive deeper into what you’re thinking with your comments.

Improve CSS generally

I’m trying really hard to keep WP Style Tiles lean on CSS, but I do think there are layout considerations I need to make. In the original Style Tiles PSD template by Samantha Toy Warren, the type is grouped together on one side of the tile, while the visual elements are grouped together on the other. This allows the viewer to get a better sense of what’s going on at a glance, whereas now, a user needs to scroll through the Style Tile post and figure out how the items relate.

Give proper credits

Naturally, this project wouldn’t have come about if it wasn’t for the brilliant work that Samantha Toy Warren has done on the original Style Tiles project. She’s definitely getting a huge shoutout in my plugin’s readme.

Alright, time to get back to work!