Parallel’s Static For, ForEach, and Invoke Methods

There are some common programming scenarios that can potentially benefit from the improved performance possible with tasks. To simplify programming, the static System.Threading.Tasks.Parallel class encapsulates these common scenarios while using Task objects internally. For example, instead of processing all the items in a collection like this.

 

// One thread performs all this work sequentially
for (Int32 i = 0; i < 1000; i++)
   DoWork(i);

 

you can instead get multiple thread pool threads to assist in performing this work by using the Parallel class’s For method.

 

// The thread pool’s threads process the work in parallel
Parallel.For(0, 1000, i => DoWork(i));

 

Similarly, if you have a collection, instead of doing this:

 

// One thread performs all this work sequentially
foreach (var item in collection)
   DoWork(item);

 

you can do this.

 

// The thread pool's threads process the work in parallel
Parallel.ForEach(collection, item => DoWork(item));

 

If you can use either For or ForEach in your code, then it is recommended that you use For because it executes faster.

 

And finally, if you have several methods that you need to execute, you could execute them all sequentially, like this:

 

// One thread executes all the methods sequentially
Method1();
Method2();
Method3();


or you could execute them in parallel, like this.

 

// The thread pool’s threads execute the methods in parallel
Parallel.Invoke(
   () => Method1(),
   () => Method2(),
   () => Method3());

 

All of Parallel’s methods have the calling thread participate in the processing of the work, which is good in terms of resource usage because we wouldn’t want the calling thread to just suspend itself while waiting for thread pool threads to do all the work. However, if the calling thread finishes its work before the thread pool threads complete their part of the work, then the calling thread will suspend itself until all the work is done, which is also good because this gives you the same semantics as you’d have when using a for or foreach loop: the thread doesn’t continue running until all the work is done. Also note that if any operation throws an unhandled exception, the Parallel method you called will ultimately throw an AggregateException.
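As a sketch of handling that AggregateException, the following catches and inspects the inner exceptions; the DoWork failure condition below is contrived purely for illustration.

```csharp
using System;
using System.Threading.Tasks;

public static class Program {
    // Contrived work item: one index throws to simulate a failure
    private static void DoWork(Int32 i) {
        if (i == 7) throw new InvalidOperationException($"Item {i} failed");
    }

    public static void Main() {
        try {
            Parallel.For(0, 1000, i => DoWork(i));
        }
        catch (AggregateException ae) {
            // Each work item's unhandled exception is collected here
            foreach (Exception e in ae.InnerExceptions)
                Console.WriteLine(e.Message);
        }
    }
}
```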

Of course, you should not go through all your source code replacing for loops with calls to Parallel.For and foreach loops with calls to Parallel.ForEach. When calling the Parallel method, there is an assumption that it is OK for the work items to be performed concurrently. Therefore, do not use the Parallel methods if the work must be processed in sequential order. Also, avoid work items that modify any kind of shared data because the data could get corrupted if it is manipulated by multiple threads simultaneously. Normally, you would fix this by adding thread synchronization locks around the data access, but if you do this, then one thread at a time can access the data and you would lose the benefit of processing multiple items in parallel.



In addition, there is overhead associated with the Parallel methods; delegate objects have to be allocated, and these delegates are invoked once for each work item. If you have lots of work items that can be processed by multiple threads, then you might gain a performance increase. Also, if you have lots of work to do for each item, then the performance hit of calling through the delegate is negligible. You will actually hurt your performance if you use the Parallel methods for just a few work items or for work items that are processed very quickly.

I should mention that Parallel’s For, ForEach, and Invoke methods all have overloads that accept a ParallelOptions object, which looks like this.

 

public class ParallelOptions {
   public ParallelOptions();

   // Allows cancellation of the operation
   public CancellationToken CancellationToken { get; set; } // Default=CancellationToken.None

   // Allows you to specify the maximum number of work items
   // that can be operated on concurrently
   public Int32 MaxDegreeOfParallelism { get; set; }        // Default=-1 (# of available CPUs)

   // Allows you to specify which TaskScheduler to use
   public TaskScheduler TaskScheduler { get; set; }         // Default=TaskScheduler.Default
}
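A sketch of how these options are typically used together: the degree-of-parallelism limit of 2 and the cancellation trigger at item 10 are arbitrary choices for illustration.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program {
    public static void Main() {
        var cts = new CancellationTokenSource();
        var options = new ParallelOptions {
            CancellationToken = cts.Token,
            MaxDegreeOfParallelism = 2   // At most 2 work items processed concurrently
        };

        try {
            Parallel.For(0, 1000, options, i => {
                if (i == 10) cts.Cancel();   // Some condition requests cancellation
                // Process item i here...
            });
        }
        catch (OperationCanceledException) {
            // Parallel.For observes the token between iterations and throws this
            Console.WriteLine("The loop was canceled");
        }
    }
}
```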


In addition, there are overloads of the For and ForEach methods that let you pass three delegates:

■ The task local initialization delegate (localInit) is invoked once for each task participating in the work. This delegate is invoked before the task is asked to process a work item.

■ The body delegate (body) is invoked once for each item being processed by the various threads participating in the work.

■ The task local finally delegate (localFinally) is invoked once for each task participating in the work. This delegate is invoked after the task has processed all the work items that will be dispatched to it. It is even invoked if the body delegate code experiences an unhandled exception.

Here is some sample code that demonstrates the use of the three delegates by adding up the bytes for all files contained within a directory.

 

private static Int64 DirectoryBytes(String path, String searchPattern, SearchOption searchOption) {
   var files = Directory.EnumerateFiles(path, searchPattern, searchOption);
   Int64 masterTotal = 0;

   ParallelLoopResult result = Parallel.ForEach<String, Int64>(
      files,

      () => { // localInit: Invoked once per task at start
         // Initialize that this task has seen 0 bytes
         return 0;   // Set taskLocalTotal initial value to 0
      },

      (file, loopState, index, taskLocalTotal) => { // body: Invoked once per work item
         // Get this file's size and add it to this task's running total
         Int64 fileLength = 0;
         FileStream fs = null;
         try {
            fs = File.OpenRead(file);
            fileLength = fs.Length;
         }
         catch (IOException) { /* Ignore any files we can't access */ }
         finally { if (fs != null) fs.Dispose(); }
         return taskLocalTotal + fileLength;
      },

      taskLocalTotal => { // localFinally: Invoked once per task at end
         // Atomically add this task's total to the "master" total
         Interlocked.Add(ref masterTotal, taskLocalTotal);
      });

   return masterTotal;
}


Each task maintains its own running total (in the taskLocalTotal variable) for the files that it is given. As each task completes its work, the master total is updated in a thread-safe way by calling the Interlocked.Add method (discussed in Chapter 29, “Primitive Thread Synchronization Constructs”). Because each task has its own running total, no thread synchronization is required during the processing of the items. Because thread synchronization would hurt performance, not requiring thread synchronization is good. It’s only after each task returns that masterTotal has to be updated in a thread-safe way, so the performance hit of calling Interlocked.Add occurs only once per task instead of once per work item.

You’ll notice that the body delegate is passed a ParallelLoopState object, which looks like this.

 

public class ParallelLoopState {
   public void Stop();
   public Boolean IsStopped { get; }

   public void Break();
   public Int64? LowestBreakIteration { get; }

   public Boolean IsExceptional { get; }
   public Boolean ShouldExitCurrentIteration { get; }
}

 

Each task participating in the work gets its own ParallelLoopState object, and it can use this object to interact with the other tasks participating in the work. The Stop method tells the loop to stop processing any more work, and future querying of the IsStopped property will return true. The Break method tells the loop to stop processing any items that are beyond the current item. For example, if ForEach is told to process 100 items and Break is called while processing the fifth item, then the loop will make sure that the first five items are processed before ForEach returns. Note, however, that additional items may have been processed. The LowestBreakIteration property returns the lowest item number whose processing called the Break method. The LowestBreakIteration property returns null if Break was never called.
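A sketch of using Stop for an early-exit search: once any task finds a match, the other tasks notice via IsStopped and skip their remaining items. The match condition (i == 123456) is hypothetical.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program {
    public static void Main() {
        Int64 found = -1;
        Parallel.For(0, 1_000_000, (i, loopState) => {
            if (loopState.IsStopped) return;        // Another task already found a match
            if (i == 123_456) {                     // Hypothetical match condition
                Interlocked.Exchange(ref found, i); // Record the match thread-safely
                loopState.Stop();                   // No further items need processing
            }
        });
        Console.WriteLine($"Found: {Interlocked.Read(ref found)}");
    }
}
```

The iteration that calls Stop always runs to completion itself, so the match is guaranteed to be recorded; Stop only prevents iterations that have not yet started.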

The IsExceptional property returns true if the processing of any item resulted in an unhandled exception. If the processing of an item takes a long time, your code can query the ShouldExitCurrentIteration property to see if it should exit prematurely. This property returns true if Stop was called, Break was called, the CancellationTokenSource (referred to by the ParallelOptions’s CancellationToken property) is canceled, or if the processing of an item resulted in an unhandled exception.

Parallel’s For and ForEach methods both return a ParallelLoopResult instance, which looks like this.

 

public struct ParallelLoopResult {
   // Returns false if the operation was ended prematurely
   public Boolean IsCompleted { get; }
   public Int64? LowestBreakIteration { get; }
}


You can examine the properties to determine the result of the loop. If IsCompleted returns true, then the loop ran to completion and all the items were processed. If IsCompleted is false and LowestBreakIteration is null, then some thread participating in the work called the Stop method. If IsCompleted is false and LowestBreakIteration is not null, then some thread participating in the work called the Break method and the Int64 value returned from LowestBreakIteration indicates the index of the lowest item guaranteed to have been processed. If an exception is thrown, then you should catch an AggregateException in order to recover gracefully.
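Putting Break and ParallelLoopResult together, a minimal sketch; breaking at item 5 is an arbitrary choice for illustration.

```csharp
using System;
using System.Threading.Tasks;

public static class Program {
    public static void Main() {
        // Break while processing item 5: items 0 through 5 are guaranteed to run
        ParallelLoopResult result = Parallel.For(0, 100, (i, loopState) => {
            if (i == 5) loopState.Break();
        });

        if (result.IsCompleted) {
            Console.WriteLine("All items were processed");
        } else if (result.LowestBreakIteration != null) {
            Console.WriteLine($"Break was called; iterations up to {result.LowestBreakIteration} completed");
        } else {
            Console.WriteLine("Stop was called");
        }
    }
}
```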

 

 

