Thursday, March 24, 2011

Computation with Parallel Processing

Parallel computing is a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently ("in parallel"). There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing, but interest in it has grown lately due to the physical constraints preventing frequency scaling. As power consumption (and consequently heat generation) by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multicore processors.
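As a small illustration of data parallelism, the sketch below applies the same operation to disjoint items of a dataset across a pool of workers. This is only a minimal example using Python's standard `concurrent.futures` module (the function name `square` and the worker count are chosen here for illustration); in CPython, true CPU parallelism for compute-bound work would normally require processes rather than threads because of the global interpreter lock.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    """The single operation applied, in parallel, to every data item."""
    return x * x

data = list(range(8))

# Data parallelism: each worker handles different items of `data`;
# pool.map gathers the results back in the original order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```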

Parallel computers can be roughly classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors to accelerate specific tasks.

Parallel computer programs are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically among the greatest obstacles to good parallel program performance.
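The classic race condition is an unprotected read-modify-write on shared state. A minimal sketch, using Python's standard `threading` module (the helper names `increment` and `run` are invented here for illustration): incrementing a shared counter is not atomic, so two threads can interleave between the read and the write and lose updates, while holding a lock around the critical section makes the result deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n, use_lock):
    """Add 1 to the shared counter n times."""
    global counter
    for _ in range(n):
        if use_lock:
            with lock:       # the lock serializes the read-modify-write
                counter += 1
        else:
            counter += 1     # racy: another thread may interleave here

def run(n_threads, n_iters, use_lock):
    """Run n_threads concurrent incrementers and return the final count."""
    global counter
    counter = 0
    threads = [threading.Thread(target=increment, args=(n_iters, use_lock))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock, the result is always exact:
print(run(4, 25_000, use_lock=True))  # 100000
```

Without the lock the final count may fall short of 100000, though whether the loss actually shows up in a given run depends on how the interpreter schedules the threads.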

The maximum possible speed-up of a program as a result of parallelization is given by Amdahl's law.
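Amdahl's law states that if a fraction p of a program can be parallelized across N processors, the overall speedup is S = 1 / ((1 - p) + p/N), so the serial fraction (1 - p) caps the benefit no matter how many processors are added. A quick numeric sketch (the function name `amdahl_speedup` is ours, not a standard API):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup with parallel fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# A program that is 95% parallelizable:
print(round(amdahl_speedup(0.95, 10), 2))     # 6.9  on 10 processors
print(round(amdahl_speedup(0.95, 10**9), 2))  # 20.0 with effectively
                                              # unlimited processors
```

Even with a near-infinite number of processors, the 5% serial portion limits the speedup to 1/0.05 = 20x.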

