In this section I will discuss some of the fundamental concepts and results of **Computability** and **Complexity Theory**.

Besides being fascinating from a historical perspective, these subjects are essential to understanding what *computing* actually means.

In a nutshell, computability theory answers the question: “What does it mean for a function (on natural numbers) to be *computable*?” And, moreover: “Is there any function that cannot be computed?”

Intuitively, a “computable function” is a function that can be calculated using a mechanical calculation device, given unlimited amounts of time and storage space. As we will see, this is exactly the idea of an *algorithm*: the mechanical application of a sequence of mathematical steps.

On the other hand, (computational) complexity theory focuses on computable functions (i.e., problems that can be solved by an algorithm) and tries to classify them according to their difficulty. Indeed, some computable functions (i.e., solvable problems) seem significantly simpler than others. For instance, consider the very simple computable function `mult(a,b)`, which returns the product of two natural numbers `a` and `b`. A very simple algorithm to compute this function comes directly from the definition of multiplication: sum `a` to itself `b` times.
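The repeated-addition algorithm just described can be sketched as follows (a minimal Python sketch; the name `mult` comes from the text above):

```python
def mult(a: int, b: int) -> int:
    """Multiply two natural numbers by repeated addition,
    following the definition of multiplication."""
    result = 0
    for _ in range(b):  # sum `a` to itself `b` times
        result += a
    return result
```

For example, `mult(3, 4)` performs `3 + 3 + 3 + 3` and returns `12`.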

Then, consider the computable function `subset_sum(S,n)`, which instead returns `1` if there exists a subset `s in S` whose elements sum up to `n`, and `0` otherwise. Just have a look at this second function and you will realize that it appears way more difficult to compute than the multiplication above.
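A straightforward (brute-force) way to compute this function is to simply try every subset of `S`. This is only a sketch, assuming `S` is given as a list of natural numbers:

```python
from itertools import combinations

def subset_sum(S: list[int], n: int) -> int:
    """Return 1 if some subset of S sums to n, and 0 otherwise.
    Brute force: check all 2^len(S) subsets."""
    for size in range(len(S) + 1):
        for subset in combinations(S, size):
            if sum(subset) == n:
                return 1
    return 0
```

Note that, unlike the multiplication algorithm, this approach may examine up to `2^|S|` subsets, which already hints at why this problem *feels* harder.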

… Is that really true?
