Reasons to Use Threads

There are really two reasons to use threads:

 

Responsiveness (typically for client-side GUI applications). Windows gives each process its own thread so that one application entering an infinite loop doesn’t prevent the user from working with other applications. Similarly, within your client-side GUI application, you could spawn some work off onto a thread so that your GUI thread remains responsive to user input events. In this case, you are possibly creating more threads than there are available cores on the machine, so you are wasting system resources and hurting performance. However, the user gains a responsive user interface and therefore has a better overall experience with your application.

Performance (for client- and server-side applications). Because Windows can schedule one thread per CPU, and because the CPUs can execute these threads concurrently, your application can improve its performance by having multiple operations executing at the same time in parallel. Of course, you get the improved performance only if your application is running on a machine with multiple CPUs in it. Today, machines with multiple CPUs are quite common, so designing your application to use multiple cores makes sense and is the focus of Chapter 27 and Chapter 28, “I/O-Bound Asynchronous Operations.” A short sketch of both patterns appears below.
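To make these two reasons concrete, here is a minimal C# sketch of my own (it is not code from this chapter; the SumRange helper and the work sizes are invented purely for illustration). A dedicated thread keeps the calling thread free to respond to the user, and one thread per core splits the same computation so the pieces run in parallel.

using System;
using System.Threading;

public static class Program {
   public static void Main() {
      // Responsiveness: run a long computation on a dedicated thread so the
      // calling (for example, GUI) thread stays free to process user input.
      Thread worker = new Thread(() => {
         Int64 result = SumRange(0, 1000000000);
         Console.WriteLine("Background result: " + result);
      });
      worker.Start();
      Console.WriteLine("The main thread is still responsive while the worker runs.");

      // Performance: split the same work across one thread per core so the
      // pieces execute concurrently on a machine with multiple CPUs.
      Int32 cores = Environment.ProcessorCount;
      Thread[] threads = new Thread[cores];
      Int64[] partials = new Int64[cores];
      Int32 chunk = 1000000000 / cores;
      for (Int32 i = 0; i < cores; i++) {
         Int32 n = i;  // capture the loop variable for the lambda
         Int32 start = n * chunk;
         Int32 end = (n == cores - 1) ? 1000000000 : start + chunk;
         threads[n] = new Thread(() => partials[n] = SumRange(start, end));
         threads[n].Start();
      }
      foreach (Thread t in threads) t.Join();
      worker.Join();

      Int64 total = 0;
      foreach (Int64 p in partials) total += p;
      Console.WriteLine("Parallel result: " + total);
   }

   // A purely compute-bound helper standing in for real work.
   private static Int64 SumRange(Int32 from, Int32 toExclusive) {
      Int64 sum = 0;
      for (Int32 i = from; i < toExclusive; i++) sum += i;
      return sum;
   }
}

On a machine with a single CPU, the parallel version buys nothing because the threads merely take turns; the performance benefit appears only when multiple CPUs are available, which is exactly the caveat above.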

Now, I’d like to share with you a theory of mine. Every computer has an incredibly powerful resource inside it: the CPU itself. If someone spends money on a computer, then that computer should be working all the time. In other words, I believe that all the CPUs in a computer should be running at 100 percent utilization all the time. I will qualify this statement with two caveats. First, you may not want the CPUs running at 100 percent utilization if the computer is on battery power, because that may drain the battery too quickly. Second, some data centers would prefer to have 10 machines running at 50 percent CPU utilization rather than 5 machines running at 100 percent CPU utilization, because running CPUs at full power tends to generate heat, which requires cooling systems, and powering an HVAC cooling system can be more expensive than powering more computers running at reduced capacity. Data centers do find it increasingly expensive to maintain multiple machines, because each machine has to have periodic hardware and software upgrades and monitoring, but this cost has to be weighed against the expense of running a cooling system.

Now, if you agree with my theory, then the next step is to figure out what the CPUs should be doing. Before I give you my ideas here, let me say something else first. In the past, developers and end users always felt that the computer was not powerful enough. Therefore, we developers would never just execute code unless the end users gave us permission to do so and indicated, via UI elements such as menu items, buttons, and check boxes, that it was OK for the application to consume CPU resources.




But now, times have changed. Computers ship with phenomenal amounts of computing power. Earlier in this chapter, I showed you how Task Manager was reporting that my CPU was busy just 5 percent of the time. If my computer contained a quad-core CPU instead of the dual-core CPU it now has, then Task Manager would more often report just 2 percent. When an 80-core processor comes out, the machine will look like it’s doing nothing almost all the time. To computer purchasers, it looks like they’re spending more money for more CPUs while the computer does less work!

This is the reason why the hardware manufacturers are having a hard time selling multi-core computers to users: the software isn’t taking advantage of the hardware and users get no benefit from buying machines with additional CPUs. What I’m saying is that we now have an abundance of computing power available and more is on the way, so developers can aggressively consume it.

That’s right: in the past, we would never dream of having our applications perform some computation unless we knew the end user wanted the result of that computation. But now that we have extra computing power, we can dream like this.

Here’s an example: when you stop typing in Visual Studio’s editor, Visual Studio automatically spawns the compiler and compiles your code. This makes developers incredibly productive because they can see warnings and errors in their source code as they type and can fix things immediately. In fact, what developers think of today as the Edit-Build-Debug cycle will become just the Edit-Debug cycle, because building (compiling) code will just happen all the time. You, as an end user, won’t notice this, because there is plenty of CPU power available and the other things you’re doing will barely be affected by the frequent running of the compiler. I would even expect that in some future version of Visual Studio, the Build menu item will disappear completely, because building will simply become automatic. Not only does the application’s UI get simpler, but the application also offers “answers” to the end user, making the user more productive.

When we remove UI components like menu items, computers get simpler for end users. There are fewer options for them and fewer concepts for them to read and understand. It is the multi-core revolution that allows us to remove these UI elements, thereby making software so much simpler for end users that my grandmother might someday feel comfortable using a computer. For developers, removing UI elements usually results in less testing, and offering fewer options to the end user simplifies the code base. And if you currently localize the text in your UI elements and your documentation (like Microsoft does), then removing the UI elements means that you write less documentation and you don’t have to localize this documentation anymore. All of this can save your organization a lot of time and money.

Here are some more examples of aggressive CPU consumption: spell checking and grammar checking of documents, recalculation of spreadsheets, indexing files on your disk for fast searching, and defragmenting your hard disk to improve I/O performance.
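To make this idea concrete, here is a purely hypothetical sketch (the type, its members, and the trivial “spell check” are my own inventions, not an API from this book) of a low-priority background thread that keeps re-checking the user’s document while the machine would otherwise sit idle.

using System;
using System.Threading;

public static class BackgroundSpellChecker {
   // Hypothetical example: speculatively re-check the document so that
   // results are already available when the user wants them.
   public static Thread Start(Func<String> getDocumentText, Action<Int32> reportErrorCount) {
      Thread worker = new Thread(() => {
         while (true) {
            String text = getDocumentText();
            reportErrorCount(CountMisspelledWords(text));
            Thread.Sleep(1000);  // re-check roughly once per second
         }
      });
      worker.IsBackground = true;               // don't keep the process alive on exit
      worker.Priority = ThreadPriority.Lowest;  // yield the CPU to foreground work
      worker.Start();
      return worker;
   }

   // Trivial placeholder: real spell checking would consult a dictionary.
   private static Int32 CountMisspelledWords(String text) {
      Int32 count = 0;
      foreach (String word in text.Split(' '))
         if (word.Length > 20) count++;
      return count;
   }
}

Marking the thread as a background thread and giving it the lowest priority keeps this speculative work from getting in the way of whatever the user is actively doing.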


I want to live in a world where the UI is reduced and simplified, I have more screen real estate to visualize the data that I’m actually working on, and applications offer me information that helps me get my work done quickly and efficiently instead of me telling the application to go get information for me. I think the hardware has been there for software developers to use for the past few years. It’s time for the software to start using the hardware creatively.

 

 

