
# Combinations and Permutations

When approaching combinatorics, two terms often cause confusion: combinations and permutations.
Let's try to clarify them.

Informally, the term “combination” is used in natural language (i.e., English) when we want to refer to a set of “objects” whose order doesn’t matter, whereas we use the word “permutation” when we want to explicitly emphasize the order in which things appear.
For instance, we may say the following:

• "My ice cream is a combination of banana, strawberry, and lemon".
Here, we don't care about the order of the ice cream flavours! We simply state that our ice cream is made of "banana, strawberry, and lemon". In fact, we could also say that it is made of "lemon, banana, and strawberry" or "strawberry, lemon, and banana", and so on and so forth.
• "My security pin code is 456".
Now, aside from the obvious security issues of saying something like the above, here you do care about the order in which the numbers appear. The system won't authenticate you if you enter 654 or 546!

In mathematical language, where statements must be rigorous and precise, unlike natural-language sentences, we denote by combination any set of objects where order doesn't matter, while a permutation is any ordered arrangement of objects, or, equivalently, a particular (i.e., ordered) combination.

Let’s now explore those two concepts separately, first starting from permutations.

## Permutations
Roughly speaking, there are two types of permutations: with or without repetition allowed.

### Permutations with Repetitions
These are the easiest to compute.
Suppose we have a set $S$ of $n$ items to choose from, i.e., $S=\{s_1,\ldots, s_n\}$. In addition, suppose we’d like to choose $k\leq n$ elements from $S$.
We denote by $S^{(0)}$ the very first element picked from the entire set $S$ at the first stage (i.e., stage $0$).
More generally, at each later stage $i,~1\leq i \leq k-1$, we pick an item $S^{(i)}$ from the set $(S \setminus \{S^{(i-1)}\}) \cup \{S^{(i-1)}\} = S$.
That is, each time an element is picked it is immediately put back into the original set $S$ before the next pick takes place. In this way, at each stage we can always choose from the whole set of $n$ elements: we have exactly $n$ options at stage $0$, $n$ at stage $1$, …, and $n$ at the last stage $k-1$:

$\underbrace{n \times n \times \ldots \times n}_{k} = n^k.$

A typical example of this kind of permutation arises when you ask how many numbers you can represent using $k$ digits, each in the range $[0..n-1]$. For instance, if you want to figure out how many permutations exist for a $3$-digit decimal lock (i.e., $k = 3$ and $n = 10$) you'll get: $10^3 = 1,000$.
Similarly, you can count how many numbers you are able to represent with $8$ binary digits (i.e., $k = 8$, and $n = 2$), which is $2^8 = 256$.
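The $n^k$ count can be sketched in a few lines of Python and cross-checked by brute-force enumeration (the function name `count_perms_with_rep` is just an illustrative choice):

```python
from itertools import product

def count_perms_with_rep(n: int, k: int) -> int:
    """Number of length-k sequences over an n-element alphabet: n**k."""
    return n ** k

# 3-digit decimal lock: k = 3 digits, each from 0..9 (n = 10)
assert count_perms_with_rep(10, 3) == 1_000
# 8 binary digits (n = 2, k = 8)
assert count_perms_with_rep(2, 8) == 256

# Cross-check by enumerating every 3-digit sequence explicitly:
assert len(list(product(range(10), repeat=3))) == 1_000
```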

### Permutations without Repetitions
Differently from permutations with repetitions allowed, here we have to reduce the number of available choices at each stage.
We use the same notation as above, with $S$ being a set of $n$ items. Again, we want to select $k\leq n$ items, but this time the element picked at each stage is removed from the set and not put back. Therefore, if we denote by $S^{(0)}$ the very first element picked from the entire set $S$ at the first stage (i.e., stage $0$), the item $S^{(i)}$ picked at the generic stage $i$ ($1\leq i \leq k-1$) is chosen from the set $S \setminus \{S^{(0)}, \ldots, S^{(i-1)}\}$.
For instance, what order could $n$ objects be in?
Well, intuitively we have $n$ options for the first choice, $n-1$ for the second, $n-2$ for the third, etc.
In the end, the total number of all possible orderings for the set of objects is (i.e., $k=n$):

$\underbrace{n \times (n-1) \times (n-2) \times \ldots \times 2\times 1}_{k=n} = n!,$ where the "$!$" symbol denotes the factorial function.

It turns out that if you want to select only $k$ items, then we have to compute the following:

$\underbrace{n \times (n-1) \times \ldots \times (n-k+1)}_{k}.$

Now, is there a way to use the factorial to express the equation above? Well, there is, and it can be shown as follows. By convention, $0! = 1$. Though it may appear counterintuitive that multiplying no numbers together gives $1$, this convention simplifies a lot of equations.
To derive the factorial form of the equation above, notice that the equation is exactly $n!$ where the last $n-k$ terms have been canceled out, that is:

$\underbrace{n \times (n-1) \times \ldots \times (n-k+1)}_{k} = \frac{\overbrace{n \times (n-1) \times \ldots \times (n-k+1) \times (n-k) \times \ldots \times 1}^{n}}{\underbrace{(n-k) \times (n-k-1) \times \ldots \times 1}_{n-k}} =$

$= \frac{n!}{(n-k)!}.$

Instead of writing the whole formula, you may find several notations to succinctly refer to permutations without repetitions, such as the following:

$P(n,k) = {}^{n}P_{k} = {}_{n}P_{k} = \frac{n!}{(n-k)!}.$

Now, suppose you want to know in how many ways a group of $16$ pool balls can be arranged (i.e., $n=16$).
According to our definition of permutation, the whole set of pool balls can be arranged in $n! = 16! = 20,922,789,888,000$ ways, which is about $21$ trillion: a huge number!
If, instead, we are only interested in the number of ways a group of $3$ pool balls can be arranged out of all the $16$ pool balls available, we are actually referring to $P(n,k)$, where $k = 3$ and still $n = 16$. It turns out that:

$P(16,3) = \frac{16!}{(16-3)!} = \frac{16!}{13!} = \frac{16\times 15\times 14 \times 13 \times \cdots}{13\times 12\times \cdots} = 16\times 15\times 14 = 3,360.$
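Both results can be verified in Python with the standard library; `perms_without_rep` below is an illustrative helper, and `math.perm` (available since Python 3.8) computes the same quantity directly:

```python
import math

def perms_without_rep(n: int, k: int) -> int:
    """P(n, k) = n! / (n - k)!, using exact integer arithmetic."""
    return math.factorial(n) // math.factorial(n - k)

# Arranging all 16 pool balls (k = n = 16):
assert perms_without_rep(16, 16) == 20_922_789_888_000
# Arranging 3 balls out of 16:
assert perms_without_rep(16, 3) == 3_360
# The standard library offers the same computation:
assert perms_without_rep(16, 3) == math.perm(16, 3)
```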

## Combinations
Differently from permutations, with combinations the order of the elements no longer matters.
Still, we can classify combinations the same way we classified permutations, that is, with or without repetition allowed.
Let's start with combinations without repetitions, which are by far the easier to understand.

### Combinations without Repetitions
This is exactly how lotteries work. The numbers are drawn one at a time, and if you are lucky enough to have all the numbers that have been drawn (no matter in what order), you win!
The easiest way to explain combinations without repetition is the following: first, assume that the order matters and count permutations, and then adjust the count so that the order does not matter.
Going back to the pool ball example, let's say we just want to know which $3$ pool balls were chosen, regardless of their order, assuming each pool ball is identified by an integer in the range $[1..16]$.
We already found that there exist $3,360$ possible ways of arranging $3$ pool balls out of $16$. However, many of those permutations can be considered as the same now that we don’t care about the order. For instance, assume that balls $3$, $6$, $8$ were chosen. Then, we have the following correspondence:

| Permutations (order does matter) | Combinations (order does not matter) |
|---|---|
| $\langle 3,6,8\rangle$ | $\{3,6,8\}$ |
| $\langle 3,8,6\rangle$ | $\{3,6,8\}$ |
| $\langle 6,3,8\rangle$ | $\{3,6,8\}$ |
| $\langle 6,8,3\rangle$ | $\{3,6,8\}$ |
| $\langle 8,3,6\rangle$ | $\{3,6,8\}$ |
| $\langle 8,6,3\rangle$ | $\{3,6,8\}$ |

As it turns out, every permutation on the left (which is actually a sequence) corresponds to the same unique combination, namely the set $\{3,6,8\}$. Thus, there are $6$ times as many permutations as combinations.
Indeed, there is an easy way to work out how many ways the items “$3,6,8$” can be placed in order, and we have already seen it. The answer is simply: $3! = 3\times 2\times 1 = 6$.
So, we can adjust our permutation count, dividing it by the number of ways the items can be ordered (because, again, we are not interested in their order anymore!), as follows:

$\frac{P(n,k)}{k!} = \frac{n!}{k!(n-k)!} = \binom{n}{k}.$

The formula above is often referred to as binomial coefficient or, more informally, as “$n$ choose $k$“, and it can be denoted by the following expressions:

$C(n,k) = {}^{n}C_{k} = {}_{n}C_{k} = \frac{n!}{k!(n-k)!} = \binom{n}{k} .$

Therefore, to answer our question about the possible set of $3$ pool balls out of the total $16$ pool balls, we simply need to write down the formula above, where $n = 16$ and $k = 3$:

$C(16,3) = \binom{16}{3} = \frac{16!}{3!\times (16-3)!} = \frac{16!}{3!\times 13!} = 560.$

Interestingly enough, the formula of the binomial coefficient is symmetrical:

$\binom{n}{k} = \frac{n!}{k!\times (n-k)!} = \frac{n!}{(n-k)!\times (n-(n-k))!} = \binom{n}{n-k}.$

In other words, choosing $3$ out of $16$ pool balls or $13$ out of $16$ leads exactly to the same number of combinations.
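The computation of $C(16,3)$ and the symmetry property can be checked directly with Python's standard library (`comb_from_factorials` is an illustrative name for the textbook formula; `math.comb` is the built-in equivalent):

```python
import math

def comb_from_factorials(n: int, k: int) -> int:
    """C(n, k) = n! / (k! * (n - k)!), using exact integer arithmetic."""
    return math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

# Choosing 3 pool balls out of 16:
assert comb_from_factorials(16, 3) == 560
# Symmetry: choosing 3 out of 16 equals choosing 13 out of 16.
assert comb_from_factorials(16, 3) == comb_from_factorials(16, 13)
# The standard library agrees:
assert comb_from_factorials(16, 3) == math.comb(16, 3)
```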

Another way to figure out the number of combinations of $k$ elements out of $n$ is to use Pascal's triangle: simply locate the cell of the triangle at row $n$ and position $k$ (remember that both rows and positions are numbered starting from $0$).
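As a sketch of this lookup, row $n$ of Pascal's triangle can be generated iteratively, since each entry is the previous one multiplied by $\frac{n-k}{k+1}$ (`pascal_row` is an illustrative helper name):

```python
def pascal_row(n):
    """Row n of Pascal's triangle (rows and positions numbered from 0)."""
    row = [1]
    for k in range(n):
        # Each entry follows from the previous: C(n,k+1) = C(n,k)*(n-k)/(k+1)
        row.append(row[-1] * (n - k) // (k + 1))
    return row

# Row 4 is the familiar 1, 4, 6, 4, 1:
assert pascal_row(4) == [1, 4, 6, 4, 1]
# Cell at row 16, position 3 matches C(16, 3) = 560:
assert pascal_row(16)[3] == 560
```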

### Combinations with Repetitions
This is the hardest type of combinations you might be asked to compute. Let’s try to explain it by using an example.

Suppose there are 5 flavours of ice cream (banana, chocolate, lemon, strawberry, and vanilla), but we can have just 3 scoops. How many variations will there be?
Let’s use the letters $\{B,C,L,S,V\}$ to represent the set of five flavours. Some possible ice cream selections could be the following:

• $\{C,C,C\}$: three scoops of chocolate;
• $\{B,L,V\}$: one scoop of banana, one of lemon, and one of vanilla;
• $\{B,V,V\}$: one scoop of banana and two of vanilla.

First of all, note that the usage of set notation here is not totally precise, since some items appear multiple times within the same set, as in $\{C,C,C\}$ and $\{B,V,V\}$. In fact, multisets would be the more accurate formalism, but they would also lead to a heavier, hard-to-read notation.

So, just to be clear, there are $n=5$ items we can choose from and we want to select $k=3$ of those: order does not matter and we can select the same item multiple times.
Imagine the ice cream flavours to be located in five boxes one next to the other in this order: $B,C,L,S,V$.
In addition, suppose you codify the action of picking a flavour with a box (i.e., the symbol $\Box$), whereas you use a triangle (i.e., the symbol $\triangleright$) to denote the action of skipping to the next flavour.
With this codification in mind, we can represent the above combinations as follows:

• $\{C,C,C\}$ = $~\triangleright~\Box~\Box~\Box~\triangleright~\triangleright~\triangleright$
• $\{B,L,V\}$ = $~\Box~\triangleright~\triangleright~\Box~\triangleright~\triangleright~\Box$
• $\{B,V,V\}$ = $~\Box~\triangleright~\triangleright~\triangleright~\triangleright~\Box~\Box$

Therefore, instead of worrying about different flavours, we have now a simpler question: “How many different ways can we arrange boxes and triangles?”
Notice that there are always $3$ boxes (i.e., $3$ scoops of ice cream) and $4$ triangles (i.e., we need to move $4$ times to go from the first to the fifth flavour).
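The encoding above can be sketched programmatically; here `#` and `>` are ASCII stand-ins for $\Box$ and $\triangleright$, and `encode` is a hypothetical helper name:

```python
def encode(selection, flavours="BCLSV"):
    """Encode a multiset of flavours as picks ('#') and skips ('>')."""
    out = []
    i = 0  # index of the flavour box we are currently standing at
    for f in sorted(selection):
        while flavours[i] != f:  # skip ahead to the box holding flavour f
            out.append(">")
            i += 1
        out.append("#")          # pick one scoop of the current flavour
    out.extend(">" * (len(flavours) - 1 - i))  # skip past the remaining boxes
    return "".join(out)

# The three examples from the text, each a string of 3 picks and 4 skips:
assert encode("CCC") == ">###>>>"
assert encode("BLV") == "#>>#>>#"
assert encode("BVV") == "#>>>>##"
```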

Generally speaking, there are $(n+k-1)$ total positions, and we want to choose $k$ of them to have boxes ($\Box$).
This is like saying “We have $(n+k-1)$ pool balls and we want to choose $k$ of them”. In other words, it is now like the pool balls question, but with slightly changed numbers, that is we want to compute $C(n+k-1,k)$ and we know how to handle this:

$C(n+k-1,k) = \binom{n+k-1}{k} = \frac{(n+k-1)!}{k!\times (n+k-1-k)!} = \frac{(n+k-1)!}{k!\times (n-1)!}.$

Interestingly, we could have looked at the triangles instead of the boxes, and we would have then been saying “We have $(n+k-1)$ positions and want to choose $(n-1)$ of them to have triangles”, and the answer would be the same due to the symmetry of the binomial coefficient:

$\binom{n+k-1}{k} = \binom{n+k-1}{n-1} = \frac{(n+k-1)!}{k!\times (n-1)!}.$

Finally, coming back to our ice cream example, where $n=5$ and $k=3$, the answer to our question is the following:

$\binom{5+3-1}{3} = \frac{(5+3-1)!}{3!\times (5-1)!} = \frac{7!}{3!\times 4!} = 35.$
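This result can be verified with the standard library, both via the formula and by brute-force enumeration of all flavour multisets (`comb_with_rep` is an illustrative name):

```python
import math
from itertools import combinations_with_replacement

def comb_with_rep(n: int, k: int) -> int:
    """Combinations with repetition: C(n + k - 1, k)."""
    return math.comb(n + k - 1, k)

# 5 flavours, 3 scoops:
assert comb_with_rep(5, 3) == 35
# Brute-force check: enumerate every multiset of 3 scoops over 5 flavours.
assert len(list(combinations_with_replacement("BCLSV", 3))) == 35
```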

The last result about combinations with repetitions is particularly useful to enumerate all the possible re-samples of size $n$ that can be generated from an original sample, also of size $n$, when performing bootstrapping. At the core of bootstrapping is the generation of many (i.e., thousands or tens of thousands of) bootstrap random samples with replacement, each one having the same size $n$ as the original sample.
More formally, let us assume we have a sample $S$ of $n$ instances from an unknown population, i.e., $S=\{x_1, x_2, \ldots, x_n\}$. Bootstrapping requires generating $r$ random samples (with replacement) $S^{(1)}, S^{(2)}, \ldots, S^{(r)}$, also called re-samples, from $S$, so that $|S^{(i)}| = n,~\forall i\in \{1,\ldots,r\}$.
Finding the total number $b$ of such possible random samples, for a given $n$, reduces to finding the total number of combinations with repetitions, each of size $n$, that can be obtained from a collection of $n$ objects.
We already know from above that the number of combinations with repetitions, each of size $k$, out of a collection of $n$ elements is:

$\binom{n+k-1}{k} = \frac{(n+k-1)!}{k!\times (n+k-1-k)!} = \frac{(n+k-1)!}{k!\times (n-1)!}.$

If we substitute $k = n$ into the formula above, we can compute $b$ as follows:

$b = \binom{n+n-1}{n} = \binom{2n-1}{n} = \frac{(2n-1)!}{n!\times (2n-1-n)!} = \frac{(2n-1)!}{n!\times (n-1)!}.$
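A quick sketch, checking the closed form against brute-force enumeration for a small sample size (`n_resamples` is an illustrative name):

```python
import math
from itertools import combinations_with_replacement

def n_resamples(n: int) -> int:
    """Number of distinct re-samples of size n drawn with replacement
    from an original sample of n items: b = C(2n - 1, n)."""
    return math.comb(2 * n - 1, n)

# For n = 3, the formula gives C(5, 3) = 10 ...
assert n_resamples(3) == 10
# ... which matches the number of distinct multisets of size 3 over 3 items.
assert len(set(combinations_with_replacement(range(3), 3))) == 10

# b grows very quickly with n:
assert n_resamples(10) == 92_378
```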

The bootstrapping process can thus be seen as a repetition of $r$ independent trials, each of which leads to a "success" for exactly one of $b$ categories (i.e., re-samples), with each re-sample having a given fixed success probability (i.e., the probability of being picked) $p_1,p_2,\ldots, p_b$. Due to the random sampling procedure, we can fairly assume the re-samples to be uniformly distributed, each with probability $p_i = \frac{1}{b}$, where $b = \binom{2n-1}{n}$.

More formally, the bootstrapping process can be thought of as a multinomial trials process, namely a sequence of $r$ independent and identically distributed random variables collectively represented by the vector ${\boldsymbol X}=(X_1, X_2, \ldots, X_r)$, where each $X_i$ can take on $b$ possible values. The multinomial trials process is therefore a simple generalization of the Bernoulli trials process (which corresponds to $b=2$). We denote the set of outcomes by $\{1,2,\ldots,b\}$, and the common probability mass function of the trial variables by $p_i = P(X_j = i),~\forall i \in \{1,2,\ldots,b\}$.
As already stated above, $p_i >0$ for each $i$ and $\sum_{i=1}^b p_i = 1$.
In the special case of Bernoulli trial processes (i.e. when $b=2$), each random variable $X_i$ is actually a (discrete) binary random variable. This is distributed according to a Bernoulli distribution parametrized by $p = p_1$, which is the probability of observing the outcome $1$ out of the set of two possible outcomes $\{1,2\}$. This can be written as $X_i\sim \text{Bernoulli}(p)$. Note also that the probability $p_2$ can be directly inferred from $p=p_1$ as it always holds that $p_2 = (1-p_1) = (1-p)$.
More generally, in the case of Multinomial trial process (i.e. when $b>2$), each random variable $X_i$ is a (discrete) $b$-ary random variable, which is distributed according to a Categorical distribution parametrized by the following vector of parameters ${\boldsymbol p} = (p_1,p_2, \ldots, p_{b})$. This can be written as $X_i\sim \text{Categorical}({\boldsymbol p})$. Note however that we don’t need to specify the full set of $b$ parameters as the last one $p_b$ can be easily derived once the other $b-1$ are known since $p_b = 1-\sum_{i=1}^{b-1}p_i$.

Usually, we are interested in counting the number of times each outcome $i\in \{1,2,\ldots,b\}$ occurs in the $r$ trials. This number is represented by the following random variable:
$Y_i = |\{j \in \{1,\ldots,r\} : X_j=i\}|,~ i \in\{1,\ldots,b\}.$
The resulting vector of random variables ${\boldsymbol Y}=(Y_1, Y_2, \ldots, Y_b)$ is known to be distributed according to a Multinomial distribution, with parameters $r$ and ${\boldsymbol p} = (p_1,p_2, \ldots, p_{b})$. This is typically written as ${\boldsymbol Y}\sim \text{Multinomial}(r,{\boldsymbol p})$.
In the special case of Bernoulli trials process we just have two random variables as two are the possible outcomes of each trial; therefore ${\boldsymbol Y}=(Y_1, Y_2)$ is said to be distributed according to a Binomial distribution, whose parameters are $r$ and $p=p_1$. This is usually written as ${\boldsymbol Y}\sim \text{Binomial}(r,p)$.

At this point, we can compute the following joint probability distribution:
$P(Y_1=j_1, Y_2=j_2, \ldots, Y_b=j_b) = \binom{r}{j_1,j_2,\ldots,j_b}p_1^{j_1}p_2^{j_2}\cdots p_b^{j_b},$
where $j_1+j_2+\cdots+j_b = r$ and $\binom{r}{j_1,j_2,\ldots,j_b} = \frac{r!}{j_1!\,j_2!\cdots j_b!}$ is the multinomial coefficient.
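A minimal pure-Python sketch of this joint probability (`multinomial_pmf` is an illustrative name), checked against the binomial special case $b = 2$:

```python
import math

def multinomial_pmf(counts, probs):
    """P(Y_1=j_1, ..., Y_b=j_b) for r = sum(counts) multinomial trials."""
    r = sum(counts)
    coef = math.factorial(r)
    for j in counts:
        coef //= math.factorial(j)   # multinomial coefficient r!/(j_1!...j_b!)
    p = 1.0
    for j, pi in zip(counts, probs):
        p *= pi ** j
    return coef * p

# Sanity check: for b = 2 this reduces to the binomial pmf.
# P(Y1=2, Y2=1) over r = 3 fair trials is C(3,2) * 0.5^3 = 0.375.
assert abs(multinomial_pmf([2, 1], [0.5, 0.5]) - 0.375) < 1e-12
```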

In the specific context of bootstrapping, we might ask what is the probability of having any one of the $b$ re-samples selected more than, say, once during the $r$ bootstrap draws. This amounts to computing the probability that there exists (at least) one $Y_i$ such that $Y_i > 1$.

Therefore, as $n$ gets larger, and if we assume each re-sample $S^{(i)}$ to be equiprobable, the probability of selecting the same re-sample more than once rapidly decreases to $0$.
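Under this (strong) uniformity assumption, the collision probability is a birthday-problem-style computation: the chance that all $r$ re-samples are distinct is $\prod_{i=0}^{r-1}\frac{b-i}{b}$, so the chance of at least one repeat is its complement. A sketch (`prob_repeat` is an illustrative name):

```python
import math

def prob_repeat(n: int, r: int) -> float:
    """Probability that at least two of r re-samples coincide, assuming
    all b = C(2n - 1, n) re-samples are equally likely (birthday problem)."""
    b = math.comb(2 * n - 1, n)
    p_distinct = 1.0
    for i in range(r):
        if i >= b:
            return 1.0  # more draws than distinct re-samples: repeat certain
        p_distinct *= (b - i) / b
    return 1.0 - p_distinct

# With r = 1000 bootstrap draws, the collision probability drops
# rapidly toward 0 as the sample size n grows:
assert prob_repeat(5, 1000) > prob_repeat(20, 1000) > prob_repeat(40, 1000)
assert prob_repeat(40, 1000) < 1e-6
```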