Native Parallel Programming for Visual C++: State Management

At TechEd 2009, Microsoft announced the release of Beta 1 of Visual Studio 2010. This release contains a number of new features in the Parallel Patterns Library (PPL) over the October 2008 CTP. In addition to some changes to task groups (covered in depth previously), Visual Studio 2010 Beta 1 includes new state management functionality that simplifies multi-task development.

Task Group Cancellation

One of the new Beta 1 PPL features is the ability to cancel a running task group. Joining the run and wait methods on the task_group type is a new cancel method, along with a corresponding is_canceling method that you can use to check whether a cancellation is in progress. The task_group_status enumeration also gains a new value called canceled that lets you check whether cancellation occurred. The following code demonstrates these new features:

//declare a task group and run two tasks
task_group tg;
tg.run([]{printf("consoleWrite0\n");});
tg.run([]{printf("consoleWrite1\n");});
//cancel the task group
tg.cancel();
//check whether tasks are being cancelled
bool taskGroupIsCanceling = tg.is_canceling();
//check on status of task group
task_group_status status = tg.wait();
if (status == completed){
   printf("Tasks were completed successfully\n");
}
else if (status == canceled){
   printf("Tasks were canceled during task execution\n");
}
else{
   printf("An exception occurred during task execution\n");
}

Combinable

One of the most effective patterns to achieve the maximum benefit from executing programming tasks in parallel is for each parallel branch to work on a local copy or subset of the data being processed, and then combine the results when processing completes. This pattern minimizes resource contention, and eliminates the potential for deadlock and data inconsistency bugs that can occur when parallel threads attempt to update the same memory location.

There is nothing overly complex about using this pattern, but it can be tedious to code it manually for each use. To simplify pattern usage, Beta 1 of Visual C++ 2010 adds the combinable templated type. The template parameter passed to combinable is the type of the object that each task will operate on. The type must have both a default constructor and a copy constructor. Each task accesses its own copy of the combinable managed resource using the local method. After all tasks are complete, you can combine the results into a single result set using either the combine or combine_each method.

The following code adds elements to a vector using three separate tasks, and then combines the results into a single vector using both combination methods.

//declare a combinable vector of integers
combinable<vector<int>> v; 
//add an element to the vector using three separate tasks
parallel_invoke(
   [&]{ v.local().push_back(1); },                 
   [&]{ v.local().push_back(2); },                 
   [&]{ v.local().push_back(3); }
); 
//merge the task-local copies using combine_each
vector<int> result1;
v.combine_each(
   [&](vector<int>& local)
   {
      result1.insert(result1.end(), 
         local.begin(), local.end());
   }
);
//merge the task-local copies using combine
vector<int> result2 = v.combine(
   [](vector<int> left, vector<int> right)->vector<int>{    
      left.insert(left.end(), right.begin(), right.end());
      return left;
   });

Note the use of the explicit return type declaration in the lambda expression in the last code statement. The two statements inside the lambda expression prevent the compiler from correctly inferring the return type, so manual declaration is required.

It’s possible to use combinable for types that do not have a default constructor (or in situations where the use of the default constructor is not appropriate) by using the combinable constructor, which takes a generator function that creates objects of the type of the template parameter. The first few lines of the preceding code sample are rewritten below using the overloaded generator constructor. In this case, the generator function returns a vector that already contains an element.

//declare a combinable vector of integers
combinable<vector<int>> v([]{return vector<int>(1, 0);});
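
As an illustrative continuation (the parallel_invoke and combine calls below are simply the earlier sample reused against this seeded declaration), each task-local copy returned by local is now created by the generator, so the merged result contains the pushed values plus one 0 seed for every local copy the runtime created:

parallel_invoke(
   [&]{ v.local().push_back(1); },
   [&]{ v.local().push_back(2); },
   [&]{ v.local().push_back(3); }
);
//the merged vector holds 1, 2 and 3 plus a 0 for each
//thread-local copy that the generator initialized
vector<int> result = v.combine(
   [](vector<int> left, vector<int> right)->vector<int>{
      left.insert(left.end(), right.begin(), right.end());
      return left;
   });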

Synchronization


The use of combinable depends on one task not needing the results of processing on other tasks. In cases where multiple tasks need to share an object, you must fall back on more traditional synchronization strategies. The PPL ships with three synchronization primitives: critical_section, reader_writer_lock and event. A critical_section provides mutual exclusion, blocking any task that does not hold the lock from accessing the protected data. For data that will have many simultaneous readers and fewer writers, the more optimized reader_writer_lock is available, which allows multiple readers to acquire the lock and access the data concurrently. The final primitive is event, which is used to signal between tasks and threads.


The synchronization primitives are defined in concrt.h, the base header file for the PPL (it is included by ppl.h). Most of the types defined in concrt.h are targeted more toward library authors than application developers, but anyone interested in deeper-level concurrency development is free to investigate and use appropriate features from the exposed types.


The API of the critical_section type is extremely simple: a blocking lock method acquires the lock, a non-blocking try_lock attempts to acquire the lock if it is available, and unlock releases a locked critical_section.
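
The following sketch shows how these three methods fit together; the shared counter and the two task bodies are purely illustrative and not part of the PPL samples:

critical_section cs;
int sharedTotal = 0;
parallel_invoke(
   [&]{
      //blocking acquire; other tasks cannot enter until unlock
      cs.lock();
      sharedTotal += 1;
      cs.unlock();
   },
   [&]{
      //non-blocking attempt; fall back to a blocking lock if the
      //critical_section is currently held by another task
      if (!cs.try_lock()){
         cs.lock();
      }
      sharedTotal += 2;
      cs.unlock();
   }
);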


The reader_writer_lock is only marginally more complex. It adds two methods, lock_read and try_lock_read, that support acquiring a reader lock. The unlock method works the same way as it does for critical_section, releasing the appropriate lock based on the type of lock being held.
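
The sketch below illustrates the pattern with one writer and two readers sharing a vector; the data and task bodies are again purely illustrative:

reader_writer_lock rwLock;
vector<int> sharedData(1, 0);
parallel_invoke(
   //writer: exclusive lock while modifying the data
   [&]{
      rwLock.lock();
      sharedData.push_back(42);
      rwLock.unlock();
   },
   //readers: lock_read allows both to access the data concurrently
   [&]{
      rwLock.lock_read();
      printf("first element: %d\n", sharedData[0]);
      rwLock.unlock();
   },
   [&]{
      rwLock.lock_read();
      printf("element count: %u\n", (unsigned)sharedData.size());
      rwLock.unlock();
   }
);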


The final synchronization primitive is event, which represents a manual-reset event (that is, the event stays set until it is manually reset by external code). Code can wait for a single event to be set by calling the instance wait method, which also supports an optional timeout value; when no timeout is specified, the wait time is infinite. You can wait on multiple events by using the static wait_for_multiple method, which accepts a C-style array of event pointers. Depending on the value of its boolean parameter, wait_for_multiple returns when either any single event or all of the events passed into the method call have been set. The code below waits for both events to be set:

//create two events on the heap and store pointers to them
event* events[2];
events[0] = new event();
events[1] = new event();
//set each event from a separate task
parallel_invoke(
   [&]{ events[0]->set(); },
   [&]{ events[1]->set(); }
);
//block until both events have been set
bool waitForAllEvents = true;
event::wait_for_multiple(events, 2, waitForAllEvents);
//clean up
delete events[0];
delete events[1];

Dealing with state management when executing tasks concurrently is a notoriously difficult undertaking. The PPL provides support for a pattern of state management in which each thread operates on a local version of the shared object, with the results combined at the completion of processing. For scenarios where segregated state management is not appropriate, the PPL provides traditional synchronization primitives in the form of critical sections, reader-writer locks, and events.


About the Author


Nick Wienholt is an independent Windows and .NET consultant based in Sydney. He is the author of Maximizing .NET Performance and co-author of A Programmer's Introduction to C# 2.0, from Apress, and specializes in system-level software architecture and development, with a particular focus on performance, security, interoperability, and debugging.


Nick is a keen and active participant in the .NET community. He is the co-founder of the Sydney Deep .NET User group and writes technical articles for the Australian Developer Journal, ZDNet, Pinnacle Publishing, CodeGuru, MSDN Magazine (Australia and New Zealand Edition) and the Microsoft Developer Network. In recognition of his work in the .NET area, Nick was awarded the Microsoft Most Valued Professional Award from 2002 through 2007.
