Understanding time complexity on simple examples - Part 1
Starting today, we are launching a series of articles on time complexity. The goal is to learn how to decide, when two or more algorithms solve the same problem, which one is preferable. We will keep things simple, diving into finer details only when necessary.
We value computers for how quickly they get things done, and we expect them to be responsive and fast. Yet behind every task a computer performs lies a sequence of instructions executed in a specific order, and the same task can often be expressed as different sequences of instructions, some far more efficient than others. The question is: how do we establish that one method is better than another? The short sketch below gives a first taste.
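To make the question concrete, here is a minimal sketch (our own illustration, not taken from the course this series is based on) of two ways to compute the sum 1 + 2 + ... + n: a loop that performs n additions, and the closed-form formula n(n+1)/2 that performs a constant amount of work. Both produce the same result, yet their running times behave very differently as n grows; quantifying that difference is exactly what time complexity is about.

```python
def sum_loop(n):
    # Adds the integers 1..n one by one: the work grows linearly with n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Uses the closed form n*(n+1)/2: the work stays constant regardless of n.
    return n * (n + 1) // 2

# Both functions agree, but for large n the second is dramatically faster.
print(sum_loop(1_000_000) == sum_formula(1_000_000))  # True
```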
The upcoming parts of the series are listed below.
- Understanding time complexity on simple examples - Part 2
- Understanding time complexity on simple examples - Part 3
- Understanding time complexity on simple examples - Part 4
Three authoritative textbooks on this topic merit consultation; they are widely recognized as standard references and are commonly used in algorithms courses.
- Introduction to Algorithms (Cormen, Leiserson, Rivest, Stein)
- Algorithms (Sedgewick, Wayne)
- The Art of Computer Programming (Knuth)
This series is largely inspired by a computer science course taught at Lycée Descartes in Tours, France.
Without further ado and as usual, let's begin with a few prerequisites needed to properly understand the underlying concepts.