Yeyy finally! I got time! I am so happy that I am finally going to read the analytic theory! However, I have added some CS stuff. Not sure if it's accurate — I have taken up CS in school, so hopefully it is!
Big $O$ notation:
We say $f(x)=O(g(x))$ if there exist constants $c>0$ and $x_0$ such that $|f(x)|\le c\,|g(x)|$ for all $x\ge x_0$.
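The constants $c$ and $x_0$ are not unique — any pair that makes the inequality hold is a valid witness. As a small sketch (the function and the constants below are my own choices, purely for illustration), here is a numerical sanity check of the definition:

```python
# Sanity-check that f(x) = 3x^2 + 5x is O(x^2), using the hand-picked
# witnesses c = 4 and x0 = 5: for x >= 5 we have 5x <= x^2, so
# 3x^2 + 5x <= 4x^2.

def f(x):
    return 3 * x**2 + 5 * x

def g(x):
    return x**2

c, x0 = 4, 5

# Verify |f(x)| <= c * g(x) over a range of x >= x0.
assert all(abs(f(x)) <= c * g(x) for x in range(x0, 10_000))
```

Of course, a finite check is not a proof — but it is a good way to catch a wrong guess for $c$ or $x_0$.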
Arithmetic of Big $O$ notation:
- $O(f(x))+O(g(x))= O(f(x)+g(x))$. Moreover, if $f(x)=O(g(x))$, then $O(f(x)+g(x))=O(g(x))$.
- $O(f(x))\times O(g(x))= O(f(x)\cdot g(x))$
- $O(f(x)) - O(g(x))= O(f(x)+g(x))$ (Yes, it is "+")
- $\int O(f(x))\,dx = O\!\left(\int f(x)\,dx\right)$
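As a small worked example of these rules (my own, for illustration): expanding a product and then absorbing lower-order terms,

```latex
\bigl(x^2 + O(x)\bigr)\bigl(x + O(1)\bigr)
  = x^3 + O(x^2) + O(x^2) + O(x)
  = x^3 + O(x^2),
```

where the first step uses the product rule $O(f)\cdot O(g)=O(fg)$ term by term, and the last step uses the sum/absorption rule, since $x = O(x^2)$ for $x\ge 1$.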
Introduction
In CS, it is important to measure the efficiency of an algorithm before applying it to a large data set. "Performance" depends on internal and external factors. Internal factors (the ones we care more about) describe the algorithm's own efficiency: the time required to run and the space required to run.
External factors include things like the size of the input, the quality of the compiler, etc.
The motivation to study time complexity is to compare algorithms and use the one that is the most efficient in a particular situation.
The big $O$ notation (wrt CS)
The big $O$ notation is used to describe an algorithm's growth rate: how its running time scales as the input size grows. Big-$O$ specifies an upper bound on an algorithm's performance, i.e. if we say an algorithm takes $O(n^2)$ time, this means that for input size $n$ it takes at most $c\,n^2$ steps, for some constant $c$.
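To make the "at most $c\,n^2$ steps" claim concrete, here is a sketch (my own example, not from any text) that counts the exact number of inner iterations of a doubly nested loop — the classic $O(n^2)$ shape:

```python
# Count iterations of a loop over all pairs i < j, and compare with n^2.

def count_pair_steps(n):
    """Return the number of inner-loop iterations over all pairs i < j."""
    steps = 0
    for i in range(n):
        for j in range(i + 1, n):
            steps += 1  # one "unit of work" per pair
    return steps

for n in (10, 100, 1000):
    steps = count_pair_steps(n)
    assert steps == n * (n - 1) // 2  # exactly n(n-1)/2 steps...
    assert steps <= n**2              # ...so at most c*n^2 with c = 1
```

Note that the exact count $n(n-1)/2$ is smaller than $n^2$, which is fine: big-$O$ only promises an upper bound up to a constant factor.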
The big-$O$ notation is very useful for comparing the performance of two or more algorithms. For example, an algorithm with time complexity $O(n^2)$ performs better, for all sufficiently large $n$, than an algorithm with time complexity $O(2^n)$.
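The "sufficiently large $n$" caveat matters: for tiny inputs the exponential algorithm can even win. A quick illustrative table (again my own, just printing the raw step counts):

```python
# Compare the growth of n^2 vs 2^n: the quadratic count is larger for
# very small n, but the exponential one explodes almost immediately.
for n in (2, 3, 5, 10, 20, 30):
    print(f"n={n:2d}  n^2={n**2:>6d}  2^n={2**n:>12d}")
```

For example, at $n=3$ we have $n^2 = 9 > 2^n = 8$, but already at $n=30$, $2^{30}$ is over a million times larger than $30^2$.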
For example, the following program computes $1^2+2^2+\dots+n^2$ with a single loop, so it runs in $O(n)$ time:

```python
# Read n, then sum the squares 1^2 + 2^2 + ... + n^2.
n = int(input("Enter the value of n: "))
i = 1
total = 0
while i <= n:
    total += i**2
    i += 1
print("sum=", total)
```