The command line is often a source of massive intimidation for newcomers to the field, and many people understandably never get past the rudimentary commands. As I mentioned in my productivity post, you can easily get by without knowing the command line too well, but you will miss out on a lot of opportunities to streamline your workflow.

To get some technicalities out of the way, when I discuss “command line,” I mean Bash, the default shell that you typically see when you open the Terminal on Unix systems like Mac OS X and Linux. The standard command line cmd.exe that ships with Windows won’t be covered at all.

I also qualified this post with “for web developers” because it’s not necessary for developers to have a comprehensive mastery of Bash. Systems administrators should know way more about command line than developers should. My goal is to give an overview of the few tools you can get the most mileage out of. In service of this goal, a lot of what I write may be a bit simplified.

I understand how situational the command line is, so I’ll try and preface each section with why I think the concept is important for developers in particular.

I try and assume no knowledge for the sake of those who only ever use GUIs and text editors, so feel free to skip around to what looks useful to you. I’ll cover some basics in this post and follow up with more advanced concepts in the future, so hopefully this series will have something for everyone in it.

For starters, when you open your terminal, you’ll see your prompt, and it probably looks something like this (the exact format varies from system to system):

user@hostname:~$


In this post and in many command line tutorials, your prompt is simplified to $, so you’ll see commands that look like:

$ npm install request

This just means everything after the $ is to be typed into your prompt.

Table of contents:

  • Finding help
  • Working with your filesystem
  • Streams, pipes, and redirects (oh my)

Finding help

At its core, Bash comprises a bunch of programs and built-in commands, though the distinction between the two isn’t super critical for this series. The programs were created according to the Unix philosophy, which is that each program does a single thing and does it well. Documentation is built into Bash through the man command (short for manual), so if you’re ever confused about a command, you can type man [command]. For instance, if you type into your terminal:

man find

you’ll see a screen that looks like this:

Manual page for find command

You can even get the man page for man itself by typing man man.

Programs typically tend to follow the same format: the program name, then any options, then arguments. The man page above shows the basic purpose of the program, followed by its exact syntax, then possible flags and their descriptions. These pieces will become clearer as I discuss various individual commands with their accompanying options and arguments.

While you can search a man page by pressing “/” and typing in a regex-based query, it can be hard to know the exact wording to search for, and the above screenshot is just a tiny slice of the man page for find. The number of options is almost always overwhelming and you will never even touch the majority of them, so consider man pages a reference rather than a way to learn.

Working with your filesystem

Why does this matter? Your OS almost certainly provides a GUI way of dealing with files too, but if you’re already working in the shell, it’s much faster to just do things there. Your shell is also more convenient for things like deleting all files of a single extension or creating nested directories.


ls

Typing ls in any directory by itself lists the contents of that directory. Calling it with an argument like

$ ls *.txt

lists all the files ending with .txt in your directory. The “*” symbol causes Bash to replace “*.txt” with a list of files that end with .txt, so if you had file1.txt and file2.txt in your directory, the above command is equivalent to calling

$ ls file1.txt file2.txt

Useful options:

  • -l (that's a lowercase L) — changes the output to long format. The long format looks like this: Screenshot of ls -l This gives a little extra information, as you can see permissions and owner information (which I'll discuss later), the size of the files, and the last modified time. I like having this extra information, and the format is a lot more regular to me than the default multi-column format, so I basically always use this option.
  • -A — shows "hidden" files that wouldn't show up in a GUI file browser. This includes files that begin with a "." (a.k.a. dotfiles). I can't see any reason a developer wouldn't want these to always show up, so I always use this option as well.
  • -h — only useful when in long format (using -l), but displays the file sizes (column #5) in human-readable format, e.g. 240484 => 235K.
  • -R — recursively lists all the directories deeper than the current directory.
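These flags can be combined behind a single dash. The combination I reach for by default is:

```shell
# Long format + hidden files + human-readable sizes
ls -lAh
```

Many people go on to alias this (e.g. alias ll='ls -lAh') so it’s always a couple of keystrokes away.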

pwd, cd

pwd displays the current working directory of your shell. The default working directory when you open your shell is your home directory (typically abbreviated as “~”). Your prompt by default will usually contain your working directory.

cd (short for change directory) is how you change the current working directory of your shell. Most people will be familiar with this concept, but if you type:

$ cd Downloads

you’ll be put into the Downloads directory. Like I said above, your prompt generally contains your working directory, so you’ll see your prompt update. It’s not only convenient to be in the right directory when you’re working, but certain paths in the programs you run depend on your current working directory, such as require() paths in Node.js.

cd - will go to the last directory you were in, which can be really useful if you are working with deeply nested paths. Typing cd by itself will go back to your home directory. Lastly, “.” and “..” by themselves have special meanings as directories. “.” refers to the current directory, so if you’ve seen “./foo” in Node or elsewhere, it means a file/directory named “foo” that is sitting in the current directory. “..” refers to the directory above the one you’re in, so cd .. will allow you to go up the directory tree.
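Putting pwd and cd together, a quick session might look like this (the directory names are just hypothetical examples):

```shell
cd Downloads       # move into ~/Downloads
pwd                # prints something like /home/you/Downloads
cd ..              # back up to your home directory
cd projects/app    # descend two levels in one command
cd -               # jump back to the previous directory (~)
```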


rm

rm removes files (and directories, when used with -r). It’s important to note for newcomers that this is not the same as putting something in the Trash or Recycling Bin. When you rm something, it is extremely difficult to recover it. Bash doesn’t forgive, so take a lot of care when you’re deleting files! As with ls, rm is often used with the “*” expansion to delete multiple files at once, e.g.

$ rm Downloads/*.rb

deletes all the files ending with .rb in your Downloads directory. Interesting to note: rm doesn’t tell you if it successfully deletes something. It only shows an error like rm: foo: No such file or directory when you attempt to delete a non-existent file.

Useful options:

  • -r — as mentioned above, deletes recursively. When used on a directory, it will delete the directory and all the files and directories within it. Obviously you will want to use this cautiously as its potential for destruction is massive.
  • -f — short for force, which doesn’t prompt when deleting readonly files. I don’t think I’ve ever marked a file as readonly but sometimes I’ll use tools that create readonly files. If you’re trying to delete a large amount of files and you’re sure you know what you’re doing, this flag can save you a lot of time.
  • -i — short for interactive, will prompt you for every single file that you’re deleting. Should be used when you’re only kinda aware of what you’re deleting.
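For example, to wipe out an entire directory tree in one shot (old-build here is a hypothetical directory name), you’d combine the first two flags:

```shell
# Recursively delete old-build and everything inside it, with no prompts.
# Double-check the path before you hit enter!
rm -rf old-build
```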


mkdir

mkdir makes directories; just type mkdir [directory-name]. To make nested directories, use mkdir -p (I don’t know why this isn’t on by default), e.g.:

$ mkdir -p foo/bar/baz
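To see the difference, try the plain version first in a fresh directory (the exact error wording varies between systems):

```shell
mkdir foo/bar/baz      # fails: foo/bar doesn't exist yet
mkdir -p foo/bar/baz   # creates all three levels at once
mkdir -p foo/bar/baz   # already exists, but -p doesn't complain
```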

Streams, pipes, and redirects (oh my)

Why does this matter? Because of the Unix philosophy, Bash’s programs tend to be small and won’t do what you want them to do when used by themselves. Bash is most powerful when its programs are used together, and streams, pipes, and redirects are the way to combine them. When you’re using command line, it’s most likely that you have some input that you want to turn into output that can be read by another program (or by you).

For instance, I once had to take the HTML of a webpage (several thousand lines) and extract all the .png and .jpg filenames from it as a simple list. That’s not too difficult to do with a scripting language of course, but this can be done with a single command in Bash. This example is really specific, but so are a lot of the situations we face as developers. Having a working knowledge of the command line will help you deal with each situation and construct the right command that pulls together different programs and solves your problem.
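To make that concrete, here’s a rough sketch of what such a one-liner could look like, using grep (a search command not covered in this post) on a hypothetical saved page.html; the pattern is simplified and would need tweaking for real-world HTML:

```shell
# Print every .png/.jpg filename found in page.html, one per line, de-duplicated
grep -oE '[A-Za-z0-9._/-]+\.(png|jpg)' page.html | sort -u
```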

To help understand how different programs work together, we’ll go over a few more of the basic ones.


cat

cat (short for concatenate) outputs the contents of a file or multiple files to your terminal:

$ cat file1.txt file2.txt

would output both files to the terminal, one after another. Go ahead and try it on a few files in your terminal.

Most people reasonably open their files in their text editor instead, but I tend to use this when I’m in my terminal already and need to look at small files. For larger files you can use:


less

less allows you to read a long piece of content in the terminal much more easily.

$ less huge-file.txt

opens a program that will allow you to scroll around the file using arrow keys and search through it by typing “/” followed by a regex-based search query. Type “q” to stop looking at the file.

Combining commands

If for some strange reason you want to carefully examine multiple large files at once, you could now type:

$ cat large-file1.txt large-file2.txt | less

The “|” character is known as a pipe, and it allows you to feed the output of one program directly into the input of another program. The contents of the two files are sent from cat as a stream, which, put simply, is a sequence of characters sent a little bit at a time. less receives this input as a stream and displays it to you.

There are only three streams that you need to concern yourself with:

  • Standard in (stdin) serves as the input to programs. It can come from a variety of sources, such as files on your hard drive, keystrokes from your keyboard, and output from running other programs.
  • Standard out (stdout) is the normal output from a program.
  • Standard error (stderr) includes errors, as the name implies, but may also include a variety of system messages intended for the user to see that generally shouldn’t be piped to other programs.

Notice that when you just use cat without a pipe, it displays the output right in your terminal window. This is because stdout by default is your terminal window, and the same is true for stderr. When you have a pipe at the end of your command, stdout becomes whatever the next program is, in this case less, but stderr will continue by default to be displayed in the terminal window.

The output for stderr and stdout look identical in your terminal window, so this can be a source of some confusion to beginners. We didn’t have any output from stderr for this command, but if you were playing around with rm and cd, the error messages when you tried to run those programs on non-existent files/directories were sent to stderr. I’ll make a note from time to time when this subject is particularly relevant.

stdin is a bit trickier to understand because, as mentioned above, input for programs can come from different places. Observe how using less directly with a file name as an argument worked, and sending the cat output to less also worked. In the former case, less opened the file itself, and in the latter case, whatever came from the left of the pipe arrived as less’s stdin. This concept of being able to accept either a file or another program’s output applies to a lot of Bash programs.
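One command that hasn’t come up yet, wc (word count; its -l flag counts lines), makes a nice demonstration of this dual behavior:

```shell
# Given a filename argument, wc opens the file itself:
wc -l notes.txt
# Given no filename, wc counts whatever arrives on stdin:
cat notes.txt | wc -l
```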

As a side note, cat also has a different means of accepting input: if you type just cat without a filename, your keyboard will become stdin, and each line you type will just be echoed back to you. I can’t see this particular functionality being useful to you, but it does illustrate further that stdin is totally dependent on what program you’re using.

Controlling your streams

Say you’ve put together a great command whose output you now want to preserve as a file. You need to redirect the output of the last command to a file. This can be done simply by appending the redirect operator “>” to your command with a filename after it, e.g.:

$ cat file1.txt file2.txt > file3.txt

file3.txt would then be a combination of file1.txt and file2.txt. This can be a nifty way to create simple new files, since you can use echo, which just takes text and sends it to stdout. For example, if you wanted a simple .gitignore file that has “node_modules/” in it, instead of popping open your text editor, you could just run:

$ echo node_modules/ > .gitignore

Important note: “>” does what is known as clobbering, which is to say it wipes the file that you are redirecting to, with no way to recover the original contents. As with rm, exercise great caution.

If you want to append to a file instead of blowing it away, use “>>” instead of “>”.
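Continuing the .gitignore example, “>>” lets you tack on a second entry without losing the first:

```shell
echo node_modules/ > .gitignore   # create the file (clobbering any old one)
echo '*.log' >> .gitignore        # append a second line
cat .gitignore                    # prints node_modules/ then *.log
```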

Note that these operators by themselves only redirect stdout. It’s really rare to need to redirect stderr, but I’ll touch briefly on this in the next post.


Ideally the concepts in this post will enable you to begin to read and understand long and terrifying commands that you might see other people write for the command line. We’ll get into some more practical concepts in future posts. As always, feel free to leave feedback or comments below or find me on Twitter if you have any questions.