Complexities: Sequential and Parallel Computational Complexity, Anomalies in Parallel Algorithms.

Sequential Computational Complexity:

The sequential computational complexity of an algorithm refers to the amount of time and space resources required to solve a problem on a single processor. The complexity is usually measured in terms of the input size, denoted by n. Some common measures of sequential computational complexity include:

- Time Complexity: The time complexity of an algorithm is the number of operations required to solve a problem, as a function of the input size.
- Space Complexity: The space complexity of an algorithm is the amount of memory required to solve a problem, as a function of the input size.
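As a rough sketch of what these measures capture, the toy functions below count the basic operations performed by an O(n) summation and an O(n^2) pairwise loop; the function names and the explicit operation counters are illustrative assumptions, not a standard API.

```python
# Sketch: counting basic operations to illustrate sequential time complexity.
# Real analyses count operations asymptotically; the counters here just make
# the growth rates concrete.

def linear_sum(values):
    """O(n) time, O(1) extra space: one addition per element."""
    total = 0
    ops = 0
    for v in values:
        total += v
        ops += 1
    return total, ops

def pairwise_products(values):
    """O(n^2) time: one multiplication per ordered pair of elements."""
    ops = 0
    for a in values:
        for b in values:
            _ = a * b
            ops += 1
    return ops

n = 100
_, linear_ops = linear_sum(list(range(n)))
quad_ops = pairwise_products(list(range(n)))
print(linear_ops)  # n operations
print(quad_ops)    # n^2 operations
```

Doubling n doubles `linear_ops` but quadruples `quad_ops`, which is exactly the distinction the asymptotic notation records.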

Parallel Computational Complexity:

The parallel computational complexity of an algorithm refers to the amount of time and space resources required to solve a problem on multiple processors. The complexity is usually measured in terms of the input size, denoted by n, and the number of processors, denoted by p. Some common measures of parallel computational complexity include:

- Parallel Time Complexity: The parallel time complexity of an algorithm is the time required to solve a problem on p processors, as a function of n and p.
- Parallel Space Complexity: The parallel space complexity of an algorithm is the amount of memory required to solve a problem on p processors, as a function of n and p.
- Work: The work of an algorithm is the total number of operations performed across all p processors; it can exceed the sequential operation count when processors perform redundant computation.
- Efficiency: The efficiency of an algorithm is the speedup (the ratio of the sequential time to the parallel time) divided by the number of processors p; it measures how well the processors are utilized, with 1 being ideal.
- Scalability: The scalability of an algorithm is the ability to solve larger problems with increasing numbers of processors, while maintaining a reasonable efficiency.
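A minimal sketch of how these measures relate, using the definitions above. The timing values are assumed example numbers, not real measurements, and the function name is hypothetical.

```python
# Sketch: deriving speedup, efficiency, and work from example timings.

def parallel_metrics(t_seq, t_par, p):
    """Return (speedup, efficiency, work) for p processors.

    speedup    = T_s / T_p
    efficiency = speedup / p    (fraction of ideal linear speedup)
    work       = p * T_p        (processor-time product: total processor
                                 time consumed, idle time included)
    """
    speedup = t_seq / t_par
    efficiency = speedup / p
    work = p * t_par
    return speedup, efficiency, work

# Example: a job taking 100 s sequentially runs in 30 s on 4 processors.
s, e, w = parallel_metrics(100.0, 30.0, 4)
print(s, e, w)
```

Here the speedup is about 3.33 on 4 processors, so the efficiency is about 0.83: the processors are busy with useful work roughly 83% of the time.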

Anomalies in Parallel Algorithms:

Parallel algorithms can exhibit some anomalies that do not occur in sequential algorithms. Some common anomalies include:

- Load Imbalance: Load imbalance occurs when some processors have more work to do than others, which can lead to idle processors and decreased performance.
- Communication Overhead: Communication overhead occurs when the cost of exchanging data between processors becomes a significant factor in the overall performance of the algorithm.
- Contention: Contention occurs when multiple processors try to access the same resource, such as a shared memory location, at the same time, which can lead to performance degradation.
- Granularity: Granularity refers to the size of the work units in an algorithm. Fine-grained algorithms (many small tasks) can suffer high communication overhead, while coarse-grained algorithms (few large tasks) are more prone to load imbalance.
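The first of these anomalies, load imbalance, can be quantified with a small sketch: given a hypothetical assignment of work units to processors, the parallel time is set by the most-loaded processor, and everything the others spend waiting is idle time.

```python
# Sketch: quantifying load imbalance. The work assignments below are
# hypothetical; the imbalance factor (max load / mean load) shows how
# much the slowest processor dominates the parallel running time.

def load_imbalance(work_per_proc):
    """Return (parallel_time, imbalance_factor, total_idle_time)."""
    t_par = max(work_per_proc)                    # makespan: slowest processor
    mean = sum(work_per_proc) / len(work_per_proc)
    idle = sum(t_par - w for w in work_per_proc)  # wasted processor time
    return t_par, t_par / mean, idle

# 40 work units over 4 processors: balanced vs. imbalanced distribution.
balanced = load_imbalance([10, 10, 10, 10])
skewed = load_imbalance([25, 5, 5, 5])
print(balanced)  # no idle time, imbalance factor 1.0
print(skewed)    # one processor does most of the work
```

Both distributions contain the same total work, yet the skewed one takes 2.5 times as long: three processors sit idle while the overloaded one finishes, which is exactly the performance loss described above.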

Overall, understanding the complexities and anomalies of parallel algorithms is crucial for designing and implementing efficient and scalable parallel computing systems.