
Intel Threading Building Blocks

In this second part of a two-part article on Intel Threading Building Blocks, you will learn how Threading Building Blocks compares with raw threads, MPI, and OpenMP. This article is excerpted from chapter one of the book Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism, written by James Reinders (O'Reilly; ISBN: 9780596514808). Copyright 2007 O'Reilly Media, Inc. All rights reserved. Used with permission from the publisher. Available from booksellers or direct from O'Reilly Media.

Author Info:
By: O'Reilly Media
May 12, 2011



Comparison with Raw Threads and MPI

Programming with a raw thread interface, such as POSIX threads (pthreads) or Windows threads, has long been an option for programmers of shared-memory parallelism. Portable wrappers exist, such as Boost Threads, which offers a raw threads interface that works across platforms. Supercomputer users, with their thousands of processors, do not generally have the luxury of shared memory, so they use message passing, most often through the popular Message Passing Interface (MPI) standard.

Raw threads and MPI expose the control of parallelism at its lowest level. They represent the assembly languages of parallelism. As such, they offer maximum flexibility, but at a high cost in terms of programmer effort, debugging time, and maintenance costs.

In order to program parallel machines, such as multi-core processors, we need the ability to express our parallelism without having to manage every detail. Issues such as optimal management of a thread pool, and proper distribution of tasks with load balancing and cache affinity in mind, should not be the focus of a programmer when working on expressing the parallelism in a program.

When using raw threads, programmers find basic coordination and data sharing to be difficult and tedious to write correctly and efficiently. Code often becomes very dependent on the particular threading facilities of an operating system. Raw thread-level programming is too low-level to be intuitive, and it seldom results in code designed for scalable performance. Nested parallelism expressed with raw threads creates a lot of complexities, which I will not go into here, other than to say that these complexities are handled for you with Threading Building Blocks.

Another advantage of tasks versus logical threads is that tasks are much lighter weight. On Linux systems, starting and terminating a task is about 18 times faster than starting and terminating a thread. On Windows systems, the ratio is more than 100-fold.

With threads and with MPI, you wind up mapping tasks onto processor cores explicitly. Using Threading Building Blocks to express parallelism with tasks allows developers to express more concurrency and finer-grained concurrency than would be possible with threads, leading to increased scalability.



