The CountdownEvent Class

The next construct is System.Threading.CountdownEvent. Internally, this construct uses a ManualResetEventSlim object. This construct blocks a thread until its internal counter reaches 0. In a way, this construct's behavior is the opposite of that of a Semaphore (which blocks threads while its count is 0). Here is what this class looks like (some method overloads are not shown).

 

public class CountdownEvent : IDisposable {
   public CountdownEvent(Int32 initialCount);
   public void Dispose();

   public void Reset(Int32 count);                // Set CurrentCount to count
   public void AddCount(Int32 signalCount);       // Increments CurrentCount by signalCount
   public Boolean TryAddCount(Int32 signalCount); // Increments CurrentCount by signalCount
   public Boolean Signal(Int32 signalCount);      // Decrements CurrentCount by signalCount
   public Boolean Wait(Int32 millisecondsTimeout, CancellationToken cancellationToken);

   public Int32 CurrentCount { get; }
   public Boolean IsSet { get; }                  // true if CurrentCount is 0
   public WaitHandle WaitHandle { get; }
}

 

After a CountdownEvent's CurrentCount reaches 0, it cannot be changed. The AddCount method throws InvalidOperationException when CurrentCount is 0, whereas the TryAddCount method simply returns false if CurrentCount is 0.
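The behavior described above can be sketched with a minimal console program. The scenario (three thread pool work items that each signal once) is hypothetical; the CountdownEvent calls themselves are the real API.

```csharp
using System;
using System.Threading;

public static class CountdownEventDemo {
    public static void Main() {
        using (CountdownEvent cde = new CountdownEvent(3)) {
            for (Int32 n = 0; n < 3; n++) {
                ThreadPool.QueueUserWorkItem(_ => {
                    // ... perform one unit of work here ...
                    cde.Signal();                  // Decrements CurrentCount by 1
                });
            }
            cde.Wait();                            // Blocks until CurrentCount reaches 0

            Console.WriteLine(cde.IsSet);          // True: count is 0
            Console.WriteLine(cde.TryAddCount(1)); // False: count already reached 0
        }
    }
}
```

Note the last line: once the count has reached 0, TryAddCount returns false rather than throwing, which makes it the safer choice when work items might race with the waiter.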


The Barrier Class

The System.Threading.Barrier construct is designed to solve a very rare problem, so it is unlikely that you will have a use for it. Barrier is used to control a set of threads that are working together in parallel so that they can step through phases of the algorithm together. Perhaps an example is in order: when the CLR is using the server version of its garbage collector (GC), the GC algorithm creates one thread per core. These threads walk up different application threads' stacks, concurrently marking objects in the heap. As each thread completes its portion of the work, it must stop, waiting for the other threads to complete their portion of the work. After all threads have marked the objects, the threads can compact different portions of the heap concurrently. As each thread finishes compacting its portion of the heap, it must block, waiting for the other threads. After all the threads have finished compacting their portions of the heap, all the threads walk up the application's threads' stacks, fixing up roots to refer to the new locations of the compacted objects. Only after all the threads have completed this work is the garbage collection considered complete, and the application's threads can be resumed.

This scenario is easily solved using the Barrier class, which looks like this (some method overloads are not shown).

 

public class Barrier : IDisposable {
   public Barrier(Int32 participantCount, Action<Barrier> postPhaseAction);
   public void Dispose();

   public Int64 AddParticipants(Int32 participantCount);   // Adds participants
   public void RemoveParticipants(Int32 participantCount); // Subtracts participants
   public Boolean SignalAndWait(Int32 millisecondsTimeout, CancellationToken cancellationToken);

   public Int64 CurrentPhaseNumber { get; }    // Indicates phase in progress (starts at 0)
   public Int32 ParticipantCount { get; }      // Number of participants
   public Int32 ParticipantsRemaining { get; } // # of threads needing to call SignalAndWait
}

 

When you construct a Barrier, you tell it how many threads are participating in the work, and you can also pass an Action<Barrier> delegate referring to code that will be invoked whenever all participants complete a phase of the work. You can dynamically add and remove participating threads from the Barrier by calling the AddParticipants and RemoveParticipants methods but, in practice, this is rarely done. As each thread completes its phase of the work, it should call SignalAndWait, which tells the Barrier that the thread is done, and the Barrier blocks the thread (using a ManualResetEventSlim). After all participants call SignalAndWait, the Barrier invokes the delegate (using the last thread that called SignalAndWait) and then unblocks all the waiting threads so they can begin the next phase.
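The phase-stepping behavior just described can be sketched as follows. The two-phase work loop and participant count are illustrative; the Barrier constructor, post-phase action, and SignalAndWait calls are the real API.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class BarrierDemo {
    public static void Main() {
        const Int32 participants = 3;

        // The post-phase action runs once per phase, on the last thread to
        // call SignalAndWait, before the other threads are unblocked
        using (Barrier barrier = new Barrier(participants,
               b => Console.WriteLine("Phase {0} complete", b.CurrentPhaseNumber))) {

            Task[] tasks = new Task[participants];
            for (Int32 n = 0; n < participants; n++) {
                tasks[n] = Task.Run(() => {
                    // ... phase 0 work (e.g., marking) ...
                    barrier.SignalAndWait();  // blocks until all participants arrive
                    // ... phase 1 work (e.g., compacting) ...
                    barrier.SignalAndWait();
                });
            }
            Task.WaitAll(tasks);
        }
    }
}
```

No thread enters phase 1 until every participant has finished phase 0, which is exactly the stop-and-wait behavior the server GC example requires.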

 

Thread Synchronization Construct Summary

My recommendation always is to avoid writing code that blocks any threads. When performing asynchronous compute or I/O operations, hand the data off from thread to thread in such a way that you avoid the chance that multiple threads could access the data simultaneously. If you are unable to fully accomplish this, then try to use the Volatile and Interlocked methods because they are fast and they also never block a thread. Unfortunately, these methods manipulate only simple types, but you can perform rich operations on these types as described in the "The Interlocked Anything Pattern" section in Chapter 29.
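As a minimal sketch of the non-blocking approach, the Interlocked methods mutate a simple Int32 atomically without ever blocking a caller; the particular sequence of operations here is just for illustration.

```csharp
using System;
using System.Threading;

public static class InterlockedDemo {
    public static void Main() {
        Int32 counter = 0;

        Interlocked.Increment(ref counter);    // counter: 0 -> 1, atomically
        Interlocked.Add(ref counter, 5);       // counter: 1 -> 6, atomically

        // Atomically set counter to 100 only if it currently equals 6;
        // returns the value that was in counter before the call
        Int32 original = Interlocked.CompareExchange(ref counter, 100, 6);

        Console.WriteLine("{0} -> {1}", original, counter); // prints "6 -> 100"
    }
}
```

CompareExchange is the primitive behind the "Interlocked Anything Pattern" mentioned above: a loop that reads a value, computes a new value, and retries until the compare-and-swap succeeds.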

There are two main reasons why you would consider blocking threads:

 

The programming model is simplified. By blocking a thread, you are sacrificing some resources and performance so that you can write your application code sequentially without using callback methods. But C#'s async methods feature gives you a simplified programming model without blocking threads.

A thread has a dedicated purpose. Some threads must be used for specific tasks. The best example is an application's primary thread. If an application's primary thread doesn't block, then it will eventually return and the whole process will terminate. Another example is an application's GUI thread or threads. Windows requires that a window or control always be manipulated by the thread that created it, so we sometimes write code that blocks a GUI thread until some other operation is done, and then the GUI thread updates any windows and controls as needed. Of course, blocking the GUI thread hangs the application and provides a bad end-user experience.

To avoid blocking threads, don't mentally assign a label to your threads. For example, don't create a spell-checking thread, a grammar-checking thread, a thread that handles this particular client request, and so on. The moment you assign a label to a thread, you have also said to yourself that the thread can't do anything else. But threads are too expensive a resource to have them dedicated to a particular purpose. Instead, you should use the thread pool to rent threads for short periods of time. So, a thread pool thread starts out spell checking, then it changes to grammar checking, and then it changes again to perform work on behalf of a client request, and so on.

If, in spite of this discussion, you decide to block threads, then use the kernel object constructs if you want to synchronize threads that are running in different AppDomains or processes. To atomically manipulate state via a set of operations, use the Monitor class with a private field.5 Alternatively, you could use a reader-writer lock instead of Monitor. Reader-writer locks are generally slower than Monitor, but they allow multiple reader threads to execute concurrently, which improves overall performance and minimizes the chance of blocking threads.

In addition, avoid using recursive locks (especially recursive reader-writer locks) because they hurt performance. However, Monitor is recursive and its performance is very good.6 Also, avoid releasing a lock in a finally block because entering and leaving exception-handling blocks incurs a performance hit, and if an exception is thrown while mutating state, then the state is corrupted, and other threads that manipulate it will experience unpredictable behavior and security bugs.
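A minimal sketch of the Monitor-with-a-private-field advice might look like this. The Counter class and its members are illustrative names, not a real library type. Following the text's recommendation, Monitor.Exit is called directly rather than from a finally block; be aware that most production code (and the C# lock statement) does use try/finally for safety, which the text argues against on performance and state-corruption grounds.

```csharp
using System;
using System.Threading;

public sealed class Counter {
    // A private field means no outside code can acquire the same lock,
    // which is why the text recommends it over locking on 'this'
    private readonly Object m_lock = new Object();
    private Int32 m_count;

    public void Increment() {
        Monitor.Enter(m_lock);
        m_count++;              // state mutated while the lock is held
        Monitor.Exit(m_lock);
    }

    public Int32 Count {
        get {
            Monitor.Enter(m_lock);
            Int32 value = m_count;
            Monitor.Exit(m_lock);
            return value;
        }
    }
}
```

Because m_lock is private, a thread can only deadlock or contend on this lock by going through Counter's own methods, keeping the locking policy in one place.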

 

 

 
 

5 You could use a SpinLock instead of Monitor because SpinLocks are slightly faster. But a SpinLock is potentially dangerous because it can waste CPU time and, in my opinion, it is not sufficiently faster than Monitor to justify its use.

6 This is partially because Monitor is actually implemented in native code, not managed code.


Of course, if you do write code that holds a lock, your code should not hold the lock for a long time, because this increases the likelihood of threads blocking. In the "Asynchronous Synchronization" section later in this chapter, I will show a technique that uses collection classes as a way to avoid holding a lock for a long time.

Finally, for compute-bound work, you can use tasks (discussed in Chapter 27) to avoid a lot of the thread synchronization constructs. In particular, I love that each task can have one or more continue-with tasks associated with it that execute via some thread pool thread when some operation completes. This is much better than having a thread block waiting for some operation to complete. For I/O-bound work, call the various XxxAsync methods that cause your code to continue running after the I/O operation completes; this is similar to a task's continue-with task.
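The continue-with idea can be sketched in a few lines; the 6 * 7 computation is a stand-in for real compute-bound work.

```csharp
using System;
using System.Threading.Tasks;

public static class ContinueWithDemo {
    public static void Main() {
        // No thread blocks waiting for the computation; the continuation
        // runs on a thread pool thread after the antecedent task completes
        Task<Int32> antecedent = Task.Run(() => 6 * 7);

        Task continuation = antecedent.ContinueWith(
            t => Console.WriteLine("Result: {0}", t.Result));

        continuation.Wait(); // only so this console demo doesn't exit early
    }
}
```

Inside the continuation, reading t.Result does not block, because the antecedent is guaranteed to have completed by the time the continuation runs.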

 

 

