Asynchronous Synchronization

I’m not terribly fond of any of the thread synchronization constructs that use kernel-mode primitives, because all of these primitives exist to block a thread from running, and threads are simply too expensive to create only to have them not run. Here is an example that hopefully clarifies the problem.

Imagine a website into which clients make requests. When a client request arrives, a thread pool thread starts processing it. Let’s say that this client wants to modify some data in the server in a thread-safe way, so it acquires a reader-writer lock for writing, and let’s pretend that this lock is held for a long time. While the lock is held, another client request comes in, so the thread pool creates a new thread for that request, and then the thread blocks trying to acquire the reader-writer lock for reading. In fact, as more and more client requests come in, the thread pool creates more and more threads, and all these threads are just blocking themselves on the lock. The server is spending all its time creating threads so that they can stop running! This server does not scale well at all.

Then, to make matters worse, when the writer thread releases the lock, all the reader threads unblock simultaneously and get to run, but now there may be lots of threads trying to run on relatively few CPUs, so Windows is context switching between the threads constantly. The result is that the workload is not being processed as quickly as it could be because of all the overhead associated with the context switches.

If you look over all the constructs shown in this chapter, you’ll see that many of the problems these constructs are trying to solve can be accomplished much better by using the Task class discussed in Chapter 27. Take the Barrier class, for example: you could spawn several Task objects to work on a phase and then, when all these tasks complete, you could continue with one or more other Task objects. Compared to many of the constructs shown in this chapter, tasks have many advantages:


■ Tasks use much less memory than threads and they take much less time to create and destroy.

 

■ The thread pool automatically scales the tasks across available CPUs.

 

■ As each task completes a phase, the thread running that task goes back to the thread pool, where it can do other work, if any is available for it.

■ The thread pool has a process-global view of tasks and, as such, it can better schedule these tasks, reducing the number of threads in the process and also reducing context switching.
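
For instance, the Barrier-style phase just described can be sketched with tasks alone (the type and method names here are my illustration, not code from the chapter): spawn several Task objects for one phase and let a continuation run when they all complete, so no thread ever blocks at the phase boundary.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class PhasedWork {
    // Phase 1 runs as several tasks; Task.WhenAll schedules the phase-2
    // continuation only after every phase-1 task completes. No thread
    // blocks at the phase boundary the way it would with a Barrier.
    public static Task<Int32> SumOfSquaresAsync(Int32[] data) {
        Task<Int32>[] phase1 = data
            .Select(item => Task.Run(() => item * item))   // Phase 1: square each item
            .ToArray();

        return Task.WhenAll(phase1)
            .ContinueWith(t => t.Result.Sum());            // Phase 2: combine the results
    }
}
```

Calling SumOfSquaresAsync(new Int32[] { 1, 2, 3 }) eventually yields 14, and the thread pool threads that ran phase 1 are free to process other work while the continuation waits to be scheduled.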

Locks are popular but, when held for a long time, they introduce significant scalability issues. What would really be useful is if we had asynchronous synchronization constructs where your code indicates that it wants a lock. If the thread can’t have it, it can just return and perform some other work, rather than blocking indefinitely. Then, when the lock becomes available, your code somehow gets resumed, so it can access the resource that the lock protects. I came up with this idea after trying to solve a big scalability problem for a customer, and I then sold the patent rights to Microsoft. In 2009, the Patent Office issued the patent (Patent #7,603,502).




The SemaphoreSlim class implements this idea via its WaitAsync method. Here is the signature for the most complicated overload of this method.

 

public Task<Boolean> WaitAsync(Int32 millisecondsTimeout, CancellationToken cancellationToken);

 

With this, you can synchronize access to a resource asynchronously (without blocking any thread).

 

private static async Task AccessResourceViaAsyncSynchronization(SemaphoreSlim asyncLock) {

// TODO: Execute whatever code you want here...

 

await asyncLock.WaitAsync(); // Request exclusive access to a resource via its lock

// When we get here, we know that no other thread is accessing the resource

// TODO: Access the resource (exclusively)...

 

// When done accessing the resource, relinquish the lock so other code can access the resource
asyncLock.Release();

 

// TODO: Execute whatever code you want here...

}

 

The SemaphoreSlim’s WaitAsync method is very useful but, of course, it gives you semaphore semantics. Usually, you’ll create the SemaphoreSlim with a maximum count of 1, which gives you mutually exclusive access to the resource that the SemaphoreSlim protects. So, this is similar to the behavior you get when using Monitor, except that SemaphoreSlim does not offer thread ownership and recursion semantics (which is good).
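
As a sketch of that pattern (the field and method names here are mine, not from the chapter), a SemaphoreSlim constructed with a count of 1 acts as an asynchronous mutual-exclusion lock; I’ve added a try/finally so the lock is released even if the guarded code throws:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class AsyncMutexDemo {
    // A count of 1 (and a max of 1) makes this semaphore an async mutex
    private static readonly SemaphoreSlim s_lock = new SemaphoreSlim(1, 1);
    private static Int32 s_resource = 0;

    public static async Task IncrementResourceAsync() {
        await s_lock.WaitAsync();   // Wait asynchronously; no thread blocks here
        try {
            s_resource++;           // Exclusive access to the shared resource
        } finally {
            s_lock.Release();       // Always release, even if the body throws
        }
    }

    public static Int32 Resource { get { return s_resource; } }
}
```

Issuing many concurrent calls to IncrementResourceAsync still produces the correct count, because only one caller at a time gets past WaitAsync.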

But, what about reader-writer semantics? Well, the .NET Framework has a class called ConcurrentExclusiveSchedulerPair, which looks like this.

 

public class ConcurrentExclusiveSchedulerPair {
   public ConcurrentExclusiveSchedulerPair();

   public TaskScheduler ExclusiveScheduler  { get; }
   public TaskScheduler ConcurrentScheduler { get; }

   // Other methods not shown...
}

 

An instance of this class comes with two TaskScheduler objects that work together to provide reader/writer semantics when scheduling tasks. Any tasks scheduled by using ExclusiveScheduler will execute one at a time, as long as no tasks are running that were scheduled using the ConcurrentScheduler. And, any tasks scheduled using the ConcurrentScheduler can all run simultaneously, as long as no tasks are running that were scheduled by using the ExclusiveScheduler. Here is some code that demonstrates the use of this class.

 

private static void ConcurrentExclusiveSchedulerDemo() {
   var cesp = new ConcurrentExclusiveSchedulerPair();
   var tfExclusive  = new TaskFactory(cesp.ExclusiveScheduler);
   var tfConcurrent = new TaskFactory(cesp.ConcurrentScheduler);

   for (Int32 operation = 0; operation < 5; operation++) {
      var exclusive = operation < 2; // For demo, I make 2 exclusive & 3 concurrent

      (exclusive ? tfExclusive : tfConcurrent).StartNew(() => {
         Console.WriteLine("{0} access", exclusive ? "exclusive" : "concurrent");
         // TODO: Do exclusive write or concurrent read computation here...
      });
   }
}
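
To make the serialization guarantee concrete, here is a small sketch of mine (not code from the chapter) in which tasks scheduled on the ExclusiveScheduler safely increment a counter that has no synchronization of its own, because those tasks are guaranteed to execute one at a time:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class ExclusiveCounterDemo {
    // All tasks go to the ExclusiveScheduler, so they run one at a time and
    // the unsynchronized increment below is safe.
    public static Int32 CountExclusively(Int32 numTasks) {
        var cesp = new ConcurrentExclusiveSchedulerPair();
        var tfExclusive = new TaskFactory(cesp.ExclusiveScheduler);

        Int32 count = 0; // Not thread-safe on its own
        Task[] tasks = Enumerable.Range(0, numTasks)
            .Select(_ => tfExclusive.StartNew(() => count++))
            .ToArray();

        Task.WaitAll(tasks); // Blocking here is just for the demo
        return count;
    }
}
```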

 

Unfortunately, the .NET Framework doesn’t come with an asynchronous lock offering reader-writer semantics. However, I have built such a class, which I call AsyncOneManyLock. You use it the same way that you’d use a SemaphoreSlim. Here is an example.

 

private static async Task AccessResourceViaAsyncSynchronization(AsyncOneManyLock asyncLock) {

// TODO: Execute whatever code you want here...

 

// Pass OneManyMode.Exclusive or OneManyMode.Shared for the desired concurrent access
await asyncLock.WaitAsync(OneManyMode.Shared); // Request shared access

// When we get here, no threads are writing to the resource; other threads may be reading

// TODO: Read from the resource...

 

// When done accessing the resource, relinquish the lock so other code can access the resource
asyncLock.Release();

 

// TODO: Execute whatever code you want here...

}

 

The following is the code for my AsyncOneManyLock.

public enum OneManyMode { Exclusive, Shared }

public sealed class AsyncOneManyLock {
   #region Lock code
   private SpinLock m_lock = new SpinLock(true); // Don't use readonly with a SpinLock
   private void Lock()   { Boolean taken = false; m_lock.Enter(ref taken); }
   private void Unlock() { m_lock.Exit(); }
   #endregion

   #region Lock state and helper methods
   private Int32 m_state = 0;
   private Boolean IsFree            { get { return m_state == 0; } }
   private Boolean IsOwnedByWriter   { get { return m_state == -1; } }
   private Boolean IsOwnedByReaders  { get { return m_state > 0; } }
   private Int32 AddReaders(Int32 count) { return m_state += count; }
   private Int32 SubtractReader()        { return --m_state; }
   private void MakeWriter() { m_state = -1; }
   private void MakeFree()   { m_state = 0; }
   #endregion

   // For the no-contention case to improve performance and reduce memory consumption
   private readonly Task m_noContentionAccessGranter;

   // Each waiting writer wakes up via its own TaskCompletionSource queued here
   private readonly Queue<TaskCompletionSource<Object>> m_qWaitingWriters =
      new Queue<TaskCompletionSource<Object>>();

   // All waiting readers wake up by signaling a single TaskCompletionSource
   private TaskCompletionSource<Object> m_waitingReadersSignal =
      new TaskCompletionSource<Object>();
   private Int32 m_numWaitingReaders = 0;

   public AsyncOneManyLock() {
      m_noContentionAccessGranter = Task.FromResult<Object>(null);
   }

   public Task WaitAsync(OneManyMode mode) {
      Task accessGranter = m_noContentionAccessGranter; // Assume no contention

      Lock();
      switch (mode) {
         case OneManyMode.Exclusive:
            if (IsFree) {
               MakeWriter(); // No contention
            } else {
               // Contention: Queue new writer task & return it so writer waits
               var tcs = new TaskCompletionSource<Object>();
               m_qWaitingWriters.Enqueue(tcs);
               accessGranter = tcs.Task;
            }
            break;

         case OneManyMode.Shared:
            if (IsFree || (IsOwnedByReaders && m_qWaitingWriters.Count == 0)) {
               AddReaders(1); // No contention
            } else {
               // Contention: Increment waiting readers & return reader task so reader waits
               m_numWaitingReaders++;
               accessGranter = m_waitingReadersSignal.Task.ContinueWith(t => t.Result);
            }
            break;
      }
      Unlock();

      return accessGranter;
   }

   public void Release() {
      TaskCompletionSource<Object> accessGranter = null; // Assume no code is released

      Lock();
      if (IsOwnedByWriter) MakeFree();  // The writer left
      else SubtractReader();            // A reader left

      if (IsFree) {
         // If free, wake 1 waiting writer or all waiting readers
         if (m_qWaitingWriters.Count > 0) {
            MakeWriter();
            accessGranter = m_qWaitingWriters.Dequeue();
         } else if (m_numWaitingReaders > 0) {
            AddReaders(m_numWaitingReaders);
            m_numWaitingReaders = 0;
            accessGranter = m_waitingReadersSignal;

            // Create a new TCS for future readers that need to wait
            m_waitingReadersSignal = new TaskCompletionSource<Object>();
         }
      }
      Unlock();

      // Wake the writer/reader outside the lock to reduce the
      // chance of contention, improving performance
      if (accessGranter != null) accessGranter.SetResult(null);
   }
}

 

As I said, this code never blocks a thread, because it doesn’t use any kernel constructs internally. Now, it does use a SpinLock, which internally uses user-mode constructs. But, if you recall from the discussion about spin locks in Chapter 29, a SpinLock can only be used when held over sections of code that are guaranteed to execute in a short and finite amount of time. If you examine my WaitAsync method, you’ll notice that all I do while holding the lock is some integer calculations and comparisons and perhaps construct a TaskCompletionSource and add it to a queue. This can’t take very long at all, so the lock is guaranteed to be held for a very short period of time.

Similarly, if you examine my Release method, you’ll notice that all I do is some integer calculations, a comparison, and perhaps dequeue a TaskCompletionSource or possibly construct a TaskCompletionSource. Again, this can’t take very long either. The end result is that I feel very comfortable using a SpinLock to guard access to the Queue. Therefore, threads never block when using this lock, which allows me to build responsive and scalable software.
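
The Lock/Unlock pattern used above can be sketched in isolation (this standalone class is my illustration, not code from the chapter): the SpinLock is held only across a tiny, finite operation, and Exit is called from a finally block so the lock is released even if the guarded code throws.

```csharp
using System;
using System.Threading;

public sealed class SpinLockedCounter {
    // Don't mark a SpinLock field readonly: Enter/Exit mutate the struct,
    // and calls on a readonly field would operate on a defensive copy.
    private SpinLock m_lock = new SpinLock(true); // true = enable owner tracking
    private Int32 m_count = 0;

    public void Increment() {
        Boolean taken = false;
        try {
            m_lock.Enter(ref taken);
            m_count++;  // Short, finite work: safe to guard with a spin lock
        } finally {
            if (taken) m_lock.Exit();
        }
    }

    public Int32 Count { get { return m_count; } }
}
```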

 

 

