RebootJeff.com

Try. Learn. Repeat.

Welcome to RebootJeff.com

I’m a JavaScripter writing about web software development. I write about other interests too.

Tennis, cars, videography, music, composition of the universe, the aesthetics of everything, and Batman tend to be on my mind as well.

Web Dev Machine Setup 2016

Posted in dev tools, software development, web development

My Web Dev Setup

It’s been nearly two years since I wrote about my preferences for Sublime Text 2. In those two years, I’ve accrued more tools and installed/configured a better web dev setup for myself. My current setup is documented in a repo, which I’ll keep up to date so that any time I need to set up a new machine, I have a quick guide to reference.

There are a few aspects of my setup worth explaining…

Atom vs Sublime Text

I’ve switched to Atom. There’s no doubt that Sublime Text is way faster. I still use it to read and edit extremely large files (e.g., large JSON). But Sublime Text always felt a bit clunky in its GUI. Most notably, the package manager for Sublime Text wasn’t very nice to use.

On Macs, Atom has a lot of great features for updating the software and packages. It has one-click installation of shell commands (e.g., atom [filename] to open a file in Atom via terminal).

On Linux, these features are missing, but I still use Atom on my Ubuntu machine because its GUI feels more modern than Sublime Text’s GUI. Also, Atom’s built-in Markdown and Git features are pretty sweet.

That said, Microsoft’s Visual Studio Code looks enticing. The battle of free code editors is really heating up! Visual Studio Code appears to be more powerful than Atom – and I’ve got to give it a try eventually – but Atom’s community is at least 2x larger at the moment. The ecosystem of Atom packages is outstanding.

Some of my favorite Atom packages:

  • pigments to highlight colors (great for CSS/SCSS/LESS code that deals with colors)
  • file-icons to show icons specific to different file types
  • autocomplete-emojis because emojis can spice up any comment/documentation! 🌟

Atom screenshot

Browsers

A few words about Chrome

I still use Google Chrome as my main browser for development, and now I use a few Chrome extensions a whole lot: Advanced REST Client for testing REST APIs and JSONView for browsing JSON data.

For development, it can be helpful to disable the “prefetch resources” advanced setting. If it’s enabled, the network panel of your Chrome DevTools might jump the gun, confusing you in the process.

Firefox is getting better

That said, I’m enjoying Mozilla Firefox Developer Edition. It’s got some fancy dev tools that I haven’t gotten a chance to use much, but the browser itself feels pretty speedy. Unlike normal Firefox, the dev edition has separate processes for each tab.

Mini-Rant: Modern Browser Wars

Also, I like supporting Mozilla by using their browser. It’s good to have competition among browsers, and Mozilla has no conflict of interest with the web. They’re likely to promote an open web as much as possible, whereas Google – and especially Apple – have a conflict of interest between supporting the web and supporting Android/iOS apps. Some say Safari is neglected because Apple would rather dedicate resources to iOS and the App Store.

There’s also a nice conspiracy theory suggesting that Apple would rather web apps not rise in popularity because that would detract from the App Store’s prominence. You could argue the same goes for Google and the Play store, but Google’s done some amazing work on “Progressive Web Apps” to make web tech (push notifications, offline support, etc) as powerful as native mobile apps.

Mac Tips & Tricks

My documentation repo has more tips & tricks, but I’ll lay out a few here. They all happen to relate to making development on a Mac even better (I guess I’ve been using my Mac way more than my Ubuntu machine lately).

Window Management

The latest Mac OS X has a built-in window layout feature, but it sucks. I continue to use SizeUp. It’s free and more powerful/flexible.

Installing software on Macs

I try to install as much as possible via Homebrew. It makes updating installed software a bit easier, and it can help you avoid common pitfalls (e.g., installing Node.js via Homebrew avoids permissions issues with npm install -g that you’d normally have to fix yourself).

Horrible “Smart Quotes”

Do yourself a favor and go into the keyboard settings to disable Smart Quotes. Otherwise, they could eventually find their way into your code and screw you up in the most insidious way (it could take a while for you to realize you’ve accidentally typed or pasted some Smart Quotes into your code).
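To see why they’re so insidious, here’s a quick vanilla JS sketch (the helper name `hasSmartQuotes` is just something I made up for illustration):

```javascript
// Curly "smart" quotes look like ASCII quotes but are different characters,
// so they break string literals and comparisons in subtle ways.
var asciiQuote = "'";      // U+0027 APOSTROPHE
var smartQuote = '\u2019'; // U+2019 RIGHT SINGLE QUOTATION MARK

var looksTheSame = asciiQuote === smartQuote; // false!

// A quick sanity check to run over suspicious pasted code:
function hasSmartQuotes(text) {
  return /[\u2018\u2019\u201C\u201D]/.test(text);
}
```

If `hasSmartQuotes` returns true, hunt down the offending characters before they bite you.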

[Example] Refactoring to Functional JS - Combine Keyed Lists

Posted in JavaScript, engineering, example code, functional programming, programming, technical

pipe

Why the image of a water pipe with flowing water? It will all make sense soon, my dear reader.

The Premise

Given a bunch of arrays kept within a JavaScript hash table (plain object), we want to extract the arrays and combine them. In other words, we’re given a collection of arrays of elements and we want a single array of elements.

This example was inspired by some code I found in the codebase where I work. The use case was different, but the overall idea (extracting elements from within arrays that are within an object) is the same. To make things slightly more complex, the arrays of the input object could possibly contain null elements because the elements were being provided by a service that could sometimes return null.

Example Input/Output

Example Input/Output Data
// example input
var usersBySocialNetwork = {
  twitter: [
    { name: '@RebootJeff' },
    { name: '@doitwithalambda' },
    null
  ],
  facebook: [
    null,
    { name: 'Kevin' },
    { name: 'Bianca' },
  ]
};

// expected output
var users = [
  { name: '@RebootJeff' },
  { name: '@doitwithalambda' },
  { name: 'Kevin' },
  { name: 'Bianca' }
];

The output has the nulls removed. We can pretty much assume we only want to see user objects in the output array; no other kinds of elements.

The Original Solution

The following code snippet is a slightly modified version of someone else’s work. I’ve changed the variable names and comments, but the core logic/algorithm is the same.

Original Solution
var _ = require('lodash');

function combineKeyedArrays(keyedArrays){
  var flattened = [];

  // produce a flat Array from an Object with values that are arrays
  _.each(keyedArrays, function(array){
    flattened = flattened.concat(array);
  });

  // only return the truthy elements of flat Array
  return _.filter(flattened, function(element) {
    return Boolean(element);
  });
}
  • The combination of _.each and Array.prototype.concat creates one big array from all the arrays within the input object called keyedArrays.
  • The combo of filter and Boolean rids the big array of falsey values to ensure no null elements end up in the output.

Let’s Refactor!

Refactor 1 - Using Lodash’s Chain

Refactored Version 1
var _ = require('lodash');

function combineKeyedArrays(keyedArrays){
  return _.chain(keyedArrays)
    .reduce(concatArray, [])
    .filter(Boolean)
    .value();
}

function concatArray(arr, val) {
  return arr.concat(val);
}

Sadly, we need to create our own concatArray because Lodash doesn’t have such a utility method (I swear it used to exist in an earlier version …maybe).

Thankfully, we can actually use Lodash’s reduce on objects (not just arrays). I see the replacement of each with reduce as a win because the end result is more expressive. each is vague whereas reduce makes it more clear that we intend to go from a collection of things (in this case, a collection of arrays) to just a single thing (in this case, just a single array).

Refactor 2 - Using Lodash’s Flow

Refactored Version 2
var _ = require('lodash');

var combineKeyedArrays = _.flow(
  _.values,
  _.flatten,
  _.compact
);

Now we use function composition via flow, which uses left-to-right direction. Standard function composition via compose would read from right-to-left, but I prefer LTR for a more familiar aesthetic. My friends who are more advanced in functional programming assure me I’ll get used to the RTL direction if I give it a shot, but for now, I protest (i.e., I’m lazy).

With flow, we can read combineKeyedArrays as a series of 3 steps. First, we extract values from an object via values, then we flatten the resulting array via flatten, then we reject any falsey elements from the array via compact.

Notes:

  • values obviates the need for the combo of reduce + concat
  • flatten is shallow by default
  • compact obviates the need for the combo of filter + Boolean
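For intuition, here’s a tiny vanilla JS sketch of the same pipeline – not Lodash’s actual implementation, just stand-ins I wrote to show the mechanics:

```javascript
// Left-to-right composition, roughly what _.flow provides.
function flow() {
  var fns = Array.prototype.slice.call(arguments);
  return function(input) {
    return fns.reduce(function(acc, fn) { return fn(acc); }, input);
  };
}

// Vanilla stand-ins for _.values, a shallow _.flatten, and _.compact:
function values(obj) {
  return Object.keys(obj).map(function(key) { return obj[key]; });
}
function flatten(arrays) {
  return [].concat.apply([], arrays);
}
function compact(array) {
  return array.filter(Boolean);
}

var combineKeyedArrays = flow(values, flatten, compact);
```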

OMG WHERE DID THE INPUT/PARAMETER GO?!

–You (probably)

We can stop referring to the input as keyedArrays. Our function combineKeyedArrays has now been written in a pointfree (aka point-free aka tacit) style. In other words, we no longer need to name - and refer back to - any parameter variable.

Think of it like the verbs “hit” and “type” in the English language. The word “hit” is a bit vague, so you probably should include more context or references for clarity. Are you hitting a person in a fight? Are you hitting some books to study? Are you hitting the bed to sleep? Are you hitting a keyboard button to type?

The word “type” is more specific. You already can infer you’ll be dealing with a keyboard. You don’t need to mention the keyboard at all when you use the verb “type” instead of the verb “hit”.

“I’m typing UNIX commands” is more concise and direct than “I’m hitting buttons on the keyboard to issue UNIX commands”. Both are valid, but the former is easier to understand even though it’s less comprehensive.

Refactor 3 - Using Ramda

Now, let’s translate from Lodash to Ramda, a utility library that is much more aligned with the functional programming style. I’ve covered how to get started with Ramda in an earlier blog post that one friend labeled as an “excellent summary”. I must be pretty awesome :D.

Refactored Version 3
var R = require('ramda');

var combineKeyedArrays = R.pipe(
  R.values,
  R.unnest,
  R.filter(Boolean)
);

Notes:

  • pipe is Ramda’s _.flow. I appreciate the name “pipe” over “flow” because “pipe” reminds me of Bash’s | operator.
  • unnest is Ramda’s shallow array-flattening method.
  • Ramda lacks a compact :(
  • R.filter(Boolean) leverages currying / partial application to yield the same effect as _.compact.
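If the currying bit feels magical, here’s a hand-rolled sketch (not Ramda’s implementation) of how a curried filter turns R.filter(Boolean) into a compact-like function:

```javascript
// A 2-argument filter that supports partial application by hand.
function filter(predicate, list) {
  if (arguments.length < 2) {
    // Only the predicate was supplied: return a function awaiting the list.
    return function(lst) { return lst.filter(predicate); };
  }
  return list.filter(predicate);
}

// Partial application: we pass Boolean now and the array later.
var compact = filter(Boolean);
```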

Let’s Review

We’ve gained so much:

  • Expressiveness! Remember that each is vague; the refactored versions using flow and pipe are far more direct and straightforward (assuming you’re familiar with the library methods). Also, the combo of _.chain and _.value adds unnecessary boilerplate cruft compared to the simplicity of flow or pipe.
  • Brevity! Shorter code isn’t always better code, but if expressiveness and legibility remain high as code length decreases, that’s generally what scientists refer to as a victory.
  • Robustness! We’re using well-tested library methods. There are fewer possible typos after refactoring to simpler code.
  • Fun! Wasn’t that so much fun?! Hell yeah it was!! *level-up*

By the way, possible documentation for our refactored, pointfree combineKeyedArrays involves a type signature as a comment, but admittedly, I’m still learning how to write proper, FP-style type signatures. Also, keep in mind that a function’s name should help tell others what it does, and the fact that ours is composed of 3 easy-to-read methods helps as well.

Refactored Version 3 with comment for documentation
var R = require('ramda');

// Object<Array> --> Array (more old-fashioned)
// ...or...maybe...
// {k: [v]} --> [v] (similar to Ramda docs)
var combineKeyedArrays = R.pipe(
  R.values,
  R.unnest,
  R.filter(Boolean)
);

Why don’t we say something more specific such as // {k: [user]} --> [user]? Because combineKeyedArrays clearly works with any type of element inside the arrays. It could even be considered a utility function and added to an internal library of helpers. Whoooaaaaaa…

And because I appreciate you as a cool person, here’s a Gist that has all the code in one spot for your future reference.

Why Fast Code Matters Even When Phones Have Octa-core CPUs

Posted in JavaScript, engineering, performance, programming, technical, web development

Have you seen the new Nexus 6P smartphone? It packs a “system on a chip” that features two CPUs, each with four cores. What a crazy, powerful world we live in! Surely modern smartphones can run your JavaScript code without breaking a sweat, right?

Snapdragon 810 promo material

Writing Performant Code is Hard

It’s true that really low-level performance optimizations often don’t feel like they’re worth learning or worrying about. You’ve got to deal with complicated business logic and juggle user data and state! You don’t have time to record CPU profiles for every new function you write!

On top of that, computers keep getting more powerful, right?

But What Does the Future Hold?

Here’s the insight*: if you’re targeting laptops/desktops, then you can probably feel safe about imperfect code in many respects. However, the trend of computers getting more powerful isn’t what it seems.

Devices speed up after slow starts

Look at the trend from a bigger-picture perspective: modern tech has gone from powerful desktops to less powerful laptops (and netbooks and Chromebooks!) to even less powerful smartphones/tablets to much less powerful wearables and IoT devices. Consider that smartphone apps might not be so popular today if web apps had been more performant earlier in the history of iOS, Android, and web views.

*Disclaimer: I can’t take credit for the insight. I read it somewhere on the Internet, and I don’t remember where. Probably Quora though.

I’ll also add my own thought to chew on: Memory optimizations might still be important as folks browsing the web tend to leave a ton of tabs open and don’t close/reopen their browsers as often as they used to.

What’s a well-intentioned developer to do?

To be fair, browsers have come a long way. And it can be tough to care about tiny performance optimizations when browsers might end up handling them for you. For example, string concatenation used to be a no-no in JavaScript. The recommended best practice was to use Array.prototype.join instead of string concatenation.

This “best practice” is now very outdated.
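For the curious, here’s what the two techniques look like side by side (names and data made up for illustration); on modern engines the plain concatenation is typically fine, and clearer:

```javascript
// Old advice: collect pieces in an array and join once at the end.
function buildGreetingJoin(names) {
  var parts = [];
  for (var i = 0; i < names.length; i++) {
    parts.push('Hello, ' + names[i] + '!');
  }
  return parts.join(' ');
}

// Plain string concatenation, once considered a no-no, now optimized well.
function buildGreetingConcat(names) {
  var result = '';
  for (var i = 0; i < names.length; i++) {
    result += (i === 0 ? '' : ' ') + 'Hello, ' + names[i] + '!';
  }
  return result;
}
```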

Like most decisions in reality, there will be trade-offs and ROI concerns. Like most decisions, the right answer is probably somewhere in between two extremes. Like most of my commentary on this blog, I’m dispensing info with JavaScript in mind, but some takeaways are language-agnostic.

My decision-making process for how to spend my time on performance involves a few key points:

  • Always stay curious about current best practices.
  • Don’t learn a “best practice” and expect it to remain “best” forever. If someone teaches you a performance optimization tactic, check the Internet to make sure it’s still relevant.
  • Focus on higher-level performance optimizations (e.g., learning performant animation techniques, shaming nested loops/traversals, plugging memory leaks, refactoring to recursion for Tail Call Optimization) rather than lower-level concerns (e.g., while loop vs for loop, i++ vs ++i, etc).
  • Learn how code is actually digested by your target platform (e.g., for browser-based apps, learn the Critical Rendering Path, learn the JavaScript Event Loop –and Web Workers as a bonus).
  • Readability matters. If other devs can’t understand your code because of obscure micro-optimizations, then you’re probably hurting the team. Consider sacrificing the optimizations to prioritize collaboration.
  • Keep dreaming for the day when platforms will optimize your code for you! Just kidding. It’s kinda sorta already happening (learn about JIT compilers).

I’ve noticed many of my “key points” really just boil down to “do your best, buddy!” Freaking brilliant.

P.S.

Because JavaScript is single-threaded, the multi-core loveliness of modern CPUs doesn’t directly help your web app unless you use web workers.

That said, there will be some benefit regardless of web workers just because devices such as smartphones usually have to juggle more than just your web app (e.g., background apps, managing sensors, etc). The extra cores should help prevent the phone from stressing out from the juggling, so that tangentially helps your web app’s performance.

Refactoring Towards Functional Programming in JavaScript

Posted in JavaScript, engineering, example code, functional programming, programming, technical

Ramda.js logo

This is not a “What is FP?” guide that uses JavaScript. If that’s what you’re looking for, you’ll love Brian Lonsdorf’s free GitHub-based guide. For this blog post, I will assume you already know currying and composition. I won’t assume you know functors, monads, and the other funky whatchamacallits that I’m still trying to learn for myself.

There are a lot of blogs and presentations that answer “What is Functional Programming?” and “Why bother with Functional Programming?”. There aren’t a lot of resources answering “How do I start using Functional Programming in REAL life?”. Most intro-to-FP resources leave you feeling like you’re supposed to just drop everything and start coding from scratch in Haskell or an FP-focused language that transpiles into JavaScript (e.g., Elm and ClojureScript).

My team at work has recently been exploring FP in JavaScript by using a library called Ramda. It offers some common FP utilities to help you code in the FP style or slowly convert parts of your codebase to the FP style.

Most of the team is unfamiliar with FP, so rather than diving into massive re-writes to convert large chunks of code from Object-Oriented Programming to FP, we’ve been starting small. Along the way, we’ve learned some solid steps for introducing FP into an existing codebase at a comfy pace. The gist of it is: don’t dive into the world of endofunctors, monoids, and catamorphisms. Instead, focus on treating functions differently by cutting down on anonymous functions, subdividing functions into tiny functions, and using the simplest FP concepts such as currying and composition.

Code smells

These are some signs that code is very imperative and not very FP-like:

  • Anonymous callbacks - It’s harder to re-use functions that don’t have names, it’s harder to write pointfree code with anonymous callbacks in particular, and function expressions will be more commonplace when you start using more FP (due to frequent use of curry and composition).
  • Suboptimal parameter order - Function signatures should have parameters arranged in an order that fits currying. This means putting config-like parameters first and main data parameters last (which is pretty much the exact opposite order that we’re all used to).
  • Loops - In JS, loops are usually for-loops that iterate over collections. There are specialized methods such as map, reduce, and filter that can perform the most common looping operations in a style that is more functional and declarative.
  • Localized mutation - This is a bit harder to explain, but local mutation (usually limited to the scope of a single function and a few nested anonymous callbacks) generally seems innocent enough until you realize it makes it more difficult to split up your functions into tiny functions, which is a major part of refactoring towards FP.
  • Side-effects from functions - One of the major principles in FP is that functions should be pure. When functions affect data outside their own local scope, it is usually due to IO actions or an OOP construct such as a method operating on the properties of its context object.
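To make the first two smells concrete, here’s a small before/after sketch (the data and names are made up for illustration):

```javascript
var users = [
  { name: 'Ada', isAdmin: true },
  { name: 'Bob', isAdmin: false }
];

// Smell: an anonymous callback that can't be re-used or tested on its own.
var admins = users.filter(function(user) { return user.isAdmin; });

// Better: a named predicate that can be re-used, tested, and composed.
function isAdmin(user) { return user.isAdmin; }
var adminsNamed = users.filter(isAdmin);

// Parameter order fix: config-like parameter (key) first, main data (list)
// last, so a curried version of this function would be pleasant to use.
function pluck(key, list) {
  return list.map(function(item) { return item[key]; });
}
```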

Refactoring steps

Easy Difficulty

  • Use named functions - This will make it easier to write pointfree code and to compose functions.
  • Use predicates - Functions that encapsulate conditional statements can be composed with other functions for the FP/declarative equivalent of imperative control flow.
  • Refactor loops via each, map, filter, reduce, etc - Using these FP iteration functions encourages you to also write small helper functions and predicates. They will guide you towards more FP.

Medium Difficulty

  • Focus on simple FP utilities - R.curry, R.compose, R.composeP, R.prop, R.is, R.has, R.anyPass/R.allPass are all worth checking out. Set a goal to use these as much as possible. It’s a great (and reasonable!) goal to get started with the FP style without getting too overwhelmed.
    • Using curry and compose gets you to the heart of FP’s flexibility. Your code will look significantly different once you start currying and composing functions.
    • Replace dot notation for accessing properties that will be used as input to a function (use R.prop or R.has as needed).
  • Simplify all functions - Break down larger functions into smaller functions; break down helper functions into more and more generalized helper functions.
    • Minimize the number of arguments
    • Write pure functions as often as possible
  • Segregate mutation/state - If mutation/state is absolutely necessary, then try to separate the mutation into a traditional function and the rest into something that can be more FP-like. For example, if a function called foo changes some parent scope variables in addition to performing some calculation, then change foo so it calls two helper functions: the parent scope manipulation is done by one helper function while the calculation is done by another helper function.
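Here’s a sketch of that foo split, with hypothetical names and a deliberately simple “calculation”:

```javascript
var total = 0; // parent scope state we unfortunately must update

// The calculation lives in a pure, easily testable helper...
function sumPrices(prices) {
  return prices.reduce(function(acc, price) { return acc + price; }, 0);
}

// ...while the mutation is quarantined in its own tiny function.
function recordTotal(sum) {
  total = sum;
  return sum;
}

// foo now just wires the two together.
function foo(prices) {
  return recordTotal(sumPrices(prices));
}
```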

Getting Comfortable

What can you expect as you start writing FP code?

  • Function names should be very expressive and more verbose.
    • …which leads to code that looks more semantic.
  • Higher-level functions should be composed of smaller, lower-level functions.
    • Making functions from functions will look/feel like a tree of nested functions.
    • Lower-level functions should be only a handful of lines (and 1-line functions become commonplace). Higher-level functions might also be really short because they just rely on calling multiple functions without much additional logic.
  • Remember: Function compositions are normally read from right to left.
  • Debugging may be tricky at first, but you should be able to easily test lower-level functions, which means higher-level functions should be less fragile.
    • For debugging with console.log, you may have to add it to compositions. E.g., var processData = R.compose(calculateStuff, logFilteredData, filterData); You can find a more detailed example of this logging tactic later in this blog post.
  • Naming functions becomes even more important; names no longer always start with verbs because they are often treated as data (nouns) rather than actions/procedures (verbs).
    • However, due to FP’s relative obscurity, naming conventions are not as widespread, which could lead to codebases with poorly named functions (significantly more helper functions means more opportunities to get function names messed up). Make sure your team is on the same page for nomenclature.

Examples

Keep in mind that I’m using Ramda.js for these examples.

Ex: Filtering an array

Example - Filtering for odd numbers and multiples of 6
var originalArray = [1, 2, 3, 4];

// Bad - using a for-loop to mutate a new array
var filteredArray = [];
for(var i = 0; i < originalArray.length; i++) {
  var number = originalArray[i];
  if(number % 2 || number % 6 === 0) {
    filteredArray.push(number);
  }
}

// Better - using the native Array filter method with a typical anonymous function
var filteredArray = originalArray.filter(function(number) {
  return number % 2 || number % 6 === 0;
});

// Most Functional - using predicates with a filter method
var isOdd = function(number) {
  return number % 2;
};
var isDivisibleBySix = function(number) {
  return number % 6 === 0;
};
var isValid = R.anyPass([isOdd, isDivisibleBySix]);
var filteredArray = R.filter(isValid, originalArray);

The “most functional” technique may seem unappealing because it requires so many lines of code, but it’s vital to remember that predicates serve as re-usable, easily testable utilities. Also, R.anyPass([isOdd, isDivisibleBySix]) is more expressive than number % 2 || number % 6 === 0. In the latter case, readers must remember how % works and how the result is a number that gets coerced into a boolean value for truthiness/falsiness.

Ex: Debugging via console.log

Example - Adding a logger for debugging
// Let's try to debug the following function
var processData = R.compose(calculateStuff, sortByDate, filterByStatus);

// First, we need an FP-friendly logger that works with composition.
// Note: the logger must be curried so that log('filtered data') returns
// a one-argument function that the composition can call.
var log = R.curry(function(note, input) {
  console.log(note + ' --- ' + input);
  return input; // this return is vital
});

// Second, we insert the logger into the composition to check if filtering worked
var processData = R.compose(calculateStuff, sortByDate, log('filtered data'), filterByStatus);

// Then we run processData with some data, check the log output, and adjust
// the placement of the log within the composition until we find where
// things go wrong.

Once again, it may seem a tad painful. You’re being forced to create a special logger function. But much like in the previous example, keep in mind that you’re being forced to create specialized functions that will probably be useful enough to be part of your project’s internal library of utilities and helpers.

Ex: Promises

Let’s pretend we need to grab data about an animal. First, we query our database of animals. Second, we use our query results to get more info from a 3rd-party animal API. Third, we use some part of that info to search for relevant photos from the Flickr API.

Example - Writing promise chains
// Bad - using typical anonymous function boilerplate
function getAnimalData() {
  return getAnimalInfoFromDatabase().then(function(response1) {
    return getRelevantInfoFrom3rdPartyAPI(response1);
  }).then(function(response2) {
    return getRelevantPhotoFromFlickrAPI(response2);
  }).then(function(response3) {
    return response3;
    // Note: This last part of the promise chain is actually unnecessary, but
    // newbies tend to include it.
  });
}

// Better - using pointfree style
function getAnimalData() {
  return getAnimalInfoFromDatabase()
    .then(getRelevantInfoFrom3rdPartyAPI)
    .then(getRelevantPhotoFromFlickrAPI);
}

// Most Radtastic - using Ramda's promise composer
var getAnimalData = R.composeP(
  getRelevantPhotoFromFlickrAPI,
  getRelevantInfoFrom3rdPartyAPI,
  getAnimalInfoFromDatabase
);
// Notice how the order of composition goes from right to left.

Pair Programming Impressions and Tips

Posted in Communication for Engineers, communication skills, programming, technical

two rubber ducks pair programming

This is the best photo I’ve ever taken.

Some programming friends think I’m crazy, but I most definitely <3 pair programming. I dig the human interaction. I appreciate enduring the horrors of debugging with a comrade. I love the anti-ego culture.

On top of all that, pairing reduces the risk of burnout for a couple of reasons. Firstly, the average level of focus stays high throughout the day so you don’t have to work as many hours. Secondly, any stress, tedium, and brain workouts are shared by two folks instead of one. Therefore, individuals are less likely to get overwhelmed or feel alone in handling responsibilities or overcoming blockers.

Admittedly, there are times when I want to get in the flow by myself without the need to constantly talk to another person, but usually, I embrace 1-on-1 talks. Why? (1) Discussion activates more of my brain. (2) I’m a big fan of communication skills. (3) Considering another individual’s perspective gives me more to think about, and I love thinking about thinking.

For nearly 2 years, I’ve been pair programming. During this time, I’ve picked up on a few tips and pet-peeves. Read on for some musings on software development in dynamic duos.

Communication Tips

These tips are good for all communication, not just pair programming. But bad communication skills become an unavoidable problem when you pair up, so consider improving how you talk and listen to become a better paired engineer.

Tone: be inquisitive, not accusatory

Another way to put it: be curious about your own assumptions, conclusions, and judgements. Unless you are 100% certain, give your partner the benefit of the doubt.

  • DO: What if X? Will that affect idea Y?
  • DON’T: Your idea (Y) won’t work because of X.
  • DO: What are the obstacles? Let’s see if we can tackle them together.
  • DON’T: I imagine what we need to do should be easy. Why don’t you think so?

Precision: be specific; use idiomatic terms; avoid vague pronouns

The example below is more pertinent to a senior teaching a junior, but even proficient engineers get out of sync when generic words like “that one” are used instead of precise words like “the [insert object name] at [insert context or line number].”

  • DO: The promise returned by the request at line 31 will resolve with a response body containing the JSON we need to parse and possibly flatten.
  • DON’T: That method call will give us the data we need to check out.

Keyboard & Mouse Tips

Maybe it’s just me, but I find it painful to watch someone use only arrow keys to move a cursor or use slow mouse movements to scroll to the top or bottom of a file. I admit that I could be a tad unfair in the typing department (I rock triple-digit WPM so …booya).

  • Please learn general typing shortcuts such as moving the cursor to beginning/end of word/line/file. Use these cursor movement shortcuts in conjunction with shift/delete to select/remove code quickly.
  • Learn IDE shortcuts such as multi-selection/cursors, vertical/block selection, switching tabs, and deleting current line.
  • Use the mouse to point at parts of the screen, not your finger. You don’t want to block parts of the screen with your hand/arm, and you don’t want to reach over to your partner’s monitor if you’ve got a setup with dual-mirrored-monitors.

Recap of LambdaConf 2015 - Where Brains Explode

Posted in conferences, functional programming, technical

On May 21st, I traveled to Boulder for LambdaConf 2015, a functional programming (FP) conference. Overall, it wasn’t as beginner-friendly as I had hoped, but it gave me plenty of food for thought as I explore FP. Today, I’m going to summarize some key points and takeaways from the talks that really stuck with me. I didn’t understand everything I heard, but I still learned a thing or two …I think.

Recaps

Keynote: “Ipecac for the Ouroboros” by Paul Phillips

Apparently, the Ouroboros (serpent eating its own tail) in this talk represents all programmers (and perhaps even all computer users) who are eating their own tails by accepting the current model for filesystems (files, folders, directories, etc). Phillips suggests that he has the cure (the ipecac).

What if computers used virtual filesystems? Imagine if filesystems were more like databases, and retrieving files would be like using expressions that define queries and which files to retrieve. You would not need to rely on knowing filepaths and file names. You wouldn’t end up with multiple copies of the same files/folders in various directories. You would rely on useful metadata to get your latest photos. You could even ask for “all the photos with person X” (via facial recognition).

“Selfish Purity: How Functional Programming Makes Every-Day Jobs Easier” by Daniel Spiewak

Spiewak claims that FP gurus suck at articulating to non-FP programmers about why FP rocks. According to him, FP’s strengths boil down to reasonability, testability, and concurrency.

As a beginner in the world of FP, I partially agree. For me, it’s easy to grasp why FP rocks in theory, but sometimes it’s hard to understand in practice because it can be so difficult to start writing FP code.

Instead of going on about abstract algebra, Spiewak says FP evangelists should emphasize reasonability, testability, and concurrency. For intro classes, FP teachers should spend less time on the mechanics of manipulating data and more time teaching data-centric programming instead of behavior-centric programming.

Reasonability

FP emphasizes data over behavior, and it’s easier to reason about data than behavior (especially when behavior can vary due to side-effects, implicit inputs, and the like). In other words, impurity makes behavior difficult to reason about.
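A quick TypeScript illustration (my own example, not from the talk) of why purity helps reasoning: the pure function can be understood from its inputs alone, while the impure one depends on hidden state.

```typescript
// Pure: output depends only on the inputs, so you can reason
// about (and substitute) this function in isolation.
const add = (a: number, b: number): number => a + b;

// Impure: hidden state is an implicit input, so the same call
// can return different results over time.
let total = 0;
const addToTotal = (a: number): number => {
  total += a; // side-effect: mutates external state
  return total;
};

const alwaysFive = add(2, 3);   // 5, every time
const first = addToTotal(2);    // 2 this time...
const second = addToTotal(2);   // ...but 4 the next time
```

The same arguments, two different answers: that’s the kind of behavior your brain has to track when code is impure.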

Testability

Side-effects lead to difficult testing, which leads to devs hating testing, which leads to devs writing fewer/poorer tests, which leads to lower software quality. This chain plagues the software industry.

Using FP algebra leads to simpler logic and better testability. Testing becomes easy when you write two interpreters: a “real” one that performs side-effects and a “fake” one that merely inspects them. Free monads make it easy to write these two interpreters.
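Here’s a drastically simplified sketch of the two-interpreter idea in TypeScript (hand-rolled effect descriptions, entirely my own illustration; real free monads are far more general than this):

```typescript
// A program is a list of effect *descriptions* (plain data);
// interpreters give those descriptions meaning.
type Effect =
  | { kind: "print"; message: string }
  | { kind: "save"; key: string; value: string };

const program: Effect[] = [
  { kind: "save", key: "user", value: "jeff" },
  { kind: "print", message: "saved user" },
];

// "Real" interpreter: actually performs the side-effects.
const store = new Map<string, string>();
function runReal(effects: Effect[]): void {
  for (const e of effects) {
    if (e.kind === "print") console.log(e.message);
    else store.set(e.key, e.value);
  }
}

// "Fake" interpreter for tests: performs no I/O, just records
// what *would* happen so assertions stay trivial.
function runFake(effects: Effect[]): string[] {
  return effects.map((e) =>
    e.kind === "print" ? `print(${e.message})` : `save(${e.key}=${e.value})`
  );
}
```

Because the program itself is just data, tests run `runFake` and assert on the recorded descriptions; no mocking of I/O required.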

Concurrency

  • Sequential ~ flatMap or for-comprehensions ~ monads
  • Parallel ~ zip ~ applicatives

Note: In case it isn’t obvious already, I had trouble comprehending this part of Spiewak’s talk. My understanding is that the purity of FP lends itself to easily distributing the computation of expressions and composing the results.
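Here’s my rough attempt to translate the analogy into everyday TypeScript (arrays standing in for Scala’s monads/applicatives, so take it with a grain of salt):

```typescript
// Sequential ~ flatMap: each step depends on the previous
// result, so the steps must run in order (monadic).
const userIds = [1, 2];
const ordersByUser = (id: number) => [`order-${id}a`, `order-${id}b`];
const allOrders = userIds.flatMap(ordersByUser);
// ["order-1a", "order-1b", "order-2a", "order-2b"]

// Parallel ~ zip: the two sides are independent, so nothing
// forces an ordering between them (applicative).
const names = ["Ann", "Bo"];
const ages = [30, 40];
const zipped = names.map((n, i) => [n, ages[i]] as const);
// [["Ann", 30], ["Bo", 40]]
```

The key difference: `flatMap` needs the previous value before it can continue, while `zip` could compute both sides at the same time.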

“Why I Like FP” by Adelbert Chang

Imperative programming requires you to maintain state in your head. At the very least, you have to remember values stored in all sorts of variables, maybe different state for each iteration of a loop, etc. With imperative programming, your brain uses a lot of energy on maintaining state (and types in an untyped language) when it should be focused on just solving the problem. This is really annoying if you love the expressions from physics/math where you simply derive solutions to problems.

Math has referential transparency (algebra is just lots of substitution), which is straightforward. FP brings referential transparency (and therefore, more algebraic concepts) into programming.
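For example, in TypeScript terms (my own illustration): a pure expression can be replaced by its value anywhere, just like substitution in algebra.

```typescript
// square is pure, so square(3) is always interchangeable with 9.
const square = (x: number): number => x * x;

const a = square(3) + square(3); // 9 + 9
const b = 2 * square(3);         // factor it out, like algebra
const c = 2 * 9;                 // substitute the value itself
// a, b, and c are all the same; no hidden state can change that
```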

“How I Learned Haskell in 5 Years” by Chris Allen

Or: Thoughts on teaching Haskell (and just “teaching” in general).

Allen is a professional Haskeller, but he also does a lot of teaching. He spent a lot of time introducing co-workers to the language, and he eventually created a free guide on Haskell. It was great to hear his perspective on education in the realm of coding.

  • It took Allen 5 years to learn Haskell because he went through unproductive cycles: complete a tutorial, try a practical project, get frustrated, stop …repeat.
  • When he first started teaching, he sucked at it. He emphasizes that his first audiences were test subjects, and novice teachers should be grateful to their first audiences.
  • His Haskell book, Haskell Programming from first principles, is for self-learners and doesn’t assume a recent Computer Science education. It doesn’t even assume programming experience because going from something like JavaScript to Haskell will feel like starting from scratch anyway (yikes!).
  • Handwaving over explanations is problematic. Allen warns that teachers should avoid giving definitions before explanations. That tactic runs counter to how humans learn via intuition and informal observations that eventually coalesce into formal explanations.
  • I found a few particularly interesting blog posts by Allen:

“Programming and Math” by Harold Carr

Boom: http://www4.di.uminho.pt/~jno/ps/pdbc_part.pdf

Further Impressions

FP in the real world

Functional programming used to be considered rather academic and impractical, but nowadays, there are a lot of languages and corresponding communities that make FP friendlier and more useful. Consequently, there are plenty of people using FP for “real” software.

FP languages

Judging from the conference, Scala, Clojure, and Haskell are the most popular functional programming languages. Haskell seems to have the fewest programmers, but the most momentum/interest. Not only is Haskell favored by the purists, but its static type system is lauded as being the near-panacea that programmers don’t realize they need until they learn it.

However, I saw enough Clojure code to walk away impressed by it. It seems far less intimidating than Haskell, but perhaps that’s largely due to my background in JavaScript (JS and Clojure are both dynamically typed). The funny thing is that there seemed to be a theme where Clojure developers end up finding enlightenment in Haskell thanks to its strict ways. Perhaps learning Clojure is the perfect stepping stone for learning Haskell? If you’re intrigued, check out: