Amdahl's law concerns the speedup of a parallel computer. When a program runs on a parallel computer, its computation may be serial, parallel, or both: in general, some part of the program has to run serially. This part is termed the sequential fraction of the computation. Let the sequential fraction of the program be F; the parallel fraction of the program is then 1 − F. Based on this, Amdahl's law states that the speedup of a parallel computer is given by the relation

S(p) = \frac{1}{F + \frac{1 - F}{p}}
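As a quick numeric illustration of the relation (a minimal sketch; the function name and the sample values F = 0.1, p = 10 and 100 are chosen only for illustration):

```python
def amdahl_speedup(F, p):
    """Speedup predicted by Amdahl's law for a sequential fraction F
    running on p processors: S(p) = 1 / (F + (1 - F) / p)."""
    return 1.0 / (F + (1.0 - F) / p)

# With 10% of the program strictly serial (F = 0.1):
print(amdahl_speedup(0.1, 10))    # speedup on 10 processors
print(amdahl_speedup(0.1, 100))   # speedup on 100 processors
# However many processors are added, the speedup stays below 1/F = 10.
```

Note how the serial fraction, not the processor count, quickly becomes the limiting factor.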
where 'p' is the number of processors. To prove the above relation, consider the execution of a task on a computer with a single processor and on one with p processors. We already know that the speedup is given by the relation
S(p) = \frac{T(s)}{T(p)} \quad \ldots (1)
Let the execution time required to complete the given task on a computer with a single processor be T. When the same task is executed on a parallel computer with p processors, the time required depends on the sequential computing time and the parallel computing time, that is,
T(p) = Sequential computing time + Parallel computing time

T(p) = F \cdot T + \frac{(1 - F) \cdot T}{p}
where 'p' is the number of processors. Therefore, by equation (1),
S(p) = \frac{T}{F \cdot T + \frac{(1 - F) \cdot T}{p}} = \frac{1}{F + \frac{1 - F}{p}} \quad \ldots (2)
From equation (2), we can say that the speedup S(p) is always bounded by this expression, that is,

S(p) \le \frac{1}{F + \frac{1 - F}{p}}
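The derivation above can be checked numerically (a sketch with illustrative values; T, F, and the processor counts are arbitrary choices): computing T(p) from its sequential and parallel parts and dividing it into T reproduces the closed form of equation (2), and the result never exceeds the bound 1/F.

```python
def parallel_time(T, F, p):
    # T(p) = F*T (sequential part) + (1 - F)*T / p (parallel part)
    return F * T + (1.0 - F) * T / p

def speedup(T, F, p):
    # Equation (1): S(p) = T(s) / T(p), with T(s) = T
    return T / parallel_time(T, F, p)

T, F = 100.0, 0.25
for p in (1, 2, 8, 64):
    s = speedup(T, F, p)
    closed_form = 1.0 / (F + (1.0 - F) / p)   # equation (2)
    assert abs(s - closed_form) < 1e-12        # derivation agrees with (2)
    assert s <= 1.0 / F                        # bound: speedup never exceeds 1/F
    print(p, s)
```

Note that T cancels out of the ratio, which is why equation (2) depends only on F and p.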