Parallel Computing
Parallel computing is the science and art of programming computers that can perform more than one operation at once, concurrently, during the same cycle, typically because they have more than one processor. Some parallel computers are ordinary workstations with several processors; others are giant single machines with many processors (these are generally referred to as supercomputers); still others are networks of individual computers. A network of computers configured to cooperate on computing problems is also called a cluster. Parallel computers can run some types of programs far faster than traditional single-processor computers, which follow the von Neumann architecture.
Programs that work on a single-processor computer do not automatically work on a parallel computer. Programmers must explicitly specify how to divide the computing work among all available processors or nodes. Information about writing programs specifically for parallel computers is in Parallel_Computing/Programming. Many people have written libraries to help programmers write programs for parallel computers.
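As a minimal sketch of what "dividing up the work" can look like, the following uses Python's standard-library multiprocessing module to spread independent computations across several worker processes. The function and inputs are illustrative examples, not drawn from any particular library mentioned above.

```python
from multiprocessing import Pool

def square(n):
    # Each call is independent, so the work can be divided
    # among worker processes with no communication between them.
    return n * n

if __name__ == "__main__":
    # The Pool splits the input range across 4 worker processes
    # and gathers the results back in order.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Libraries such as this one handle the distribution of work and collection of results; on larger clusters, message-passing libraries play a similar coordinating role.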
Parallel computing is closely related to distributed computing. Both involve breaking a problem into many pieces and assigning each piece to a computer, but the nodes of a distributed computer normally do not communicate with each other while performing their computations, because they may be great distances apart. Sites related to distributed computing are in Computers/Computer_Science/Distributed_Computing.
Information about supercomputers is located in Computers/Supercomputing.