A Fistful of WaitHandles - Part Two

by Scosby Wednesday, February 22, 2012

Introduction

This is the second and final post in the series. The first post talks about the scenario for and behavior of the Job class. This post talks about the technical implementation of the Job class and how to extend it to suit your needs.

I’ve included the scenario from the first post, just in case you haven’t read it yet.

Scenario

If you need to run an operation after a specific amount of time, then it is likely you are familiar with one of the many timers available in the .NET Framework. This is a good approach and is well documented. Furthermore, this approach will continue to be a valuable tool for many developers to use in many different applications.

If you are interested in running the timer’s job on demand, in addition to its interval, then you will need to do a bit more work. Of course, this is still reasonable to do with a Timer but it does provide an opportunity to consider another approach. You will learn about scheduling jobs to the ThreadPool in a way that resembles the familiar timers in the .NET Framework.

Code Samples

The following code sample represents the Job class. As a reminder, this class is designed to run after a certain amount of time passes, additionally, it can be run on demand. The Job class encapsulates the code to do those behaviors.

 Job Class

    using System;
    using System.Threading;

    public class Job : IDisposable
    {
        private AutoResetEvent runWaitHandle = new AutoResetEvent(false);
        private RegisteredWaitHandle registeredWaitHandle;

        public Job()
        {
            this.Interval = -1; //-1 (Timeout.Infinite) means no scheduled execution by default
        }

        public int Interval { get; set; }

        public void Start()
        {
            //The second callback parameter is true when the interval elapsed (scheduled run)
            //and false when the wait handle was signaled (on-demand run).
            WaitOrTimerCallback callback =
                (userState, timedOut) =>
                {
                    if (timedOut)
                    {
                        Console.WriteLine("Operation ran on schedule.");
                    }
                    else
                    {
                        Console.WriteLine("Operation ran on demand.");
                    }
                };

            registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(
                runWaitHandle, //A WaitHandle to be used by the thread pool
                callback, //The operation to execute
                null, //User state passed to the operation (not used here)
                this.Interval, //How often to execute the operation
                false); //false: keep executing; true would execute the operation only once
        }

        public void Stop()
        {
            registeredWaitHandle.Unregister(null);
        }

        public void Run()
        {
            runWaitHandle.Set(); //Signal the wait handle to run the operation on demand
        }

        public void Dispose()
        {
            if (registeredWaitHandle != null)
            {
                registeredWaitHandle.Unregister(null);
            }

            if (runWaitHandle != null)
            {
                runWaitHandle.Dispose();
            }
        }
    }

As discussed in the first post, there are three steps for scheduling jobs to the ThreadPool. Let’s look at those steps now and see how they are implemented in the Job class.

1. You need to start, or register, the job.

    a. The Job class exposes a Start method, which queues the operation to the ThreadPool.

    b. If the Interval property is -1, the operation will not run on a schedule.

2. You provide a special object that helps the ThreadPool know when to run your job.

    a. The ThreadPool.RegisterWaitForSingleObject method uses a WaitHandle to control when the operation executes.

    b. When using an AutoResetEvent, the ThreadPool will not only run the operation on a schedule but can also be told to run it on demand.

    c. Since the AutoResetEvent is a member field that implements IDisposable, our Job class needs to implement the same interface and clean up the AutoResetEvent.

3. In order to stop your job, you need to keep a reference to the object returned when you registered the job.

    a. The ThreadPool.RegisterWaitForSingleObject method returns a RegisteredWaitHandle object after you start the job.

    b. The RegisteredWaitHandle can be used to stop the job.

    c. Stopping the operation is as easy as calling the RegisteredWaitHandle.Unregister method. This is done when disposing the class too.

As you can see, the Job class is a neat way to wrap up all the behavior described by the scenario. Additionally, it provides a foundation you can build upon for other uses. I will finish the post by describing a few ways you could extend the Job class.

Extending The Job Class

The following ideas could be incorporated into the Job class. These ideas have varying degrees of complexity. Hopefully, you can use these ideas as inspiration to improve the Job class and customize it to your needs.

• Add reentrancy protection to the callback operation.

• Make the callback operation a protected virtual method to enable subclassing for specific jobs (a rough sketch of this idea follows the list).

• Support stopping and then restarting the job.

  • Add a property to the Job class to check whether it is registered or not.

• Add a public property allowing run-once configuration when registering with the ThreadPool.
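As a rough illustration of the subclassing idea, here is a minimal, untested sketch that pulls the callback logic out of Start and into a protected virtual method. The OnRun method name and the ReportJob subclass are hypothetical and not part of the original design:

    using System;
    using System.Threading;

    public class Job : IDisposable
    {
        private AutoResetEvent runWaitHandle = new AutoResetEvent(false);
        private RegisteredWaitHandle registeredWaitHandle;

        public Job()
        {
            this.Interval = -1;
        }

        public int Interval { get; set; }

        public void Start()
        {
            //Delegate to the overridable method instead of hard-coding the work in a lambda.
            registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(
                runWaitHandle,
                (userState, timedOut) => this.OnRun(timedOut),
                null,
                this.Interval,
                false);
        }

        //Subclasses override this method to provide job-specific work.
        protected virtual void OnRun(bool ranOnSchedule)
        {
            Console.WriteLine(ranOnSchedule ? "Operation ran on schedule." : "Operation ran on demand.");
        }

        public void Stop()
        {
            registeredWaitHandle.Unregister(null);
        }

        public void Run()
        {
            runWaitHandle.Set();
        }

        public void Dispose()
        {
            if (registeredWaitHandle != null)
            {
                registeredWaitHandle.Unregister(null);
            }

            if (runWaitHandle != null)
            {
                runWaitHandle.Dispose();
            }
        }
    }

    //A hypothetical subclass that performs a specific kind of work.
    public class ReportJob : Job
    {
        protected override void OnRun(bool ranOnSchedule)
        {
            Console.WriteLine("Generating report ({0}).", ranOnSchedule ? "scheduled" : "on demand");
        }
    }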

Summary

This series covered a complex scenario: running a job periodically and sometimes on demand. You have seen one way to do this by using the ThreadPool.RegisterWaitForSingleObject method. Furthermore, you have seen the benefits of abstracting the problem away from the code.


A Fistful of WaitHandles - Part One

by Scosby Tuesday, February 21, 2012

Introduction

This is the first post in the series. It describes the scenario that motivates the new approach and the behavior of the Job class in that approach. The second post covers the technical implementation of the Job class and how to extend it to suit your needs.

Scenario

If you need to run an operation after a specific amount of time, then it is likely you are familiar with one of the many timers available in the .NET Framework. This is a good approach and is well documented. Furthermore, this approach will continue to be a valuable tool for many developers to use in many different applications.

If you are interested in running the timer’s job on demand, in addition to its interval, then you will need to do a bit more work. Of course, this is still reasonable to do with a Timer but it does provide an opportunity to consider another approach. You will learn about scheduling jobs to the ThreadPool in a way that resembles the familiar timers in the .NET Framework.

Scheduling Jobs to The ThreadPool

The ThreadPool.RegisterWaitForSingleObject method allows you to set up an operation to be run on a schedule or even on demand. This method can seem complex to use at first glance but it can be broken into three parts.

First, you need to start, or register, the job. Next, you provide a special object that helps the ThreadPool know when to run your job. Finally, in order to stop your job, you need to keep a reference to the object returned after you registered the job.

Rather than explain these parts in more detail, I will show you some code and explain it in the context of the previously described scenario. Again, this post is focused on how a job should behave, in order to better understand the problem. The second post will cover all the technical details of the Job class.

Code Samples

The following code samples represent a console application. However, the techniques used can be applied easily in other types of applications, such as a Windows Service or a WPF application. If you can think of any additional uses, feel free to comment on the post. Additionally, if you’re up for a challenge, try modifying the program to read job configuration data from the file system at launch (one possible starting point is sketched after the code samples below).

Program class

        static void Main(string[] args)
        {
            Console.WriteLine("Start executing timer operations:" + Environment.NewLine);

            RunJobWithInterval();

            RunJobWithNoInterval();

            Console.WriteLine("Done executing timer operations." + Environment.NewLine);

            Console.Write("Press any key to exit...");

            Console.ReadKey(true);
        }

        private static void RunJobWithInterval()
        {
            Console.WriteLine("Running Job 1---------------------------");

            using (Job job = new Job())
            {
                job.Interval = 500;

                job.Start();

                Thread.Sleep(1000);

                job.Run(); //Possible to run on demand even with an interval

                Thread.Sleep(4000);

                job.Stop();
            }
        }

        private static void RunJobWithNoInterval()
        {
            Console.WriteLine("Running Job 2---------------------------");

            using (Job job = new Job())
            {
                job.Start();

                job.Run();

                Thread.Sleep(200);

                job.Stop();
            }
        }

 

This code sample starts with the Main method of the Program class. It calls two other methods, each of which runs a job either with or without a schedule. It is possible for both jobs to run at the same time; in fact, the jobs are completely independent. This code sample keeps them separated to reduce confusion.

The RunJobWithInterval method first creates a new job, sets the interval, and then starts the job. The application then blocks the thread while the job is running. This gives the job time to write to the console and would not be done in a production application. Next, the method runs the job on demand. Even though the job has a schedule, it is possible to run it as needed; you can compare and contrast the job’s output for on-demand and scheduled operations. The method then blocks the thread again to demonstrate how the job will continue to run on its schedule. Finally, the job is stopped.

The RunJobWithNoInterval method is similar but differs in two ways. First, instead of running the job on a schedule, this job is only run on demand. Second, it only blocks the thread once, to allow the job enough time to run before it is stopped.
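If you want to attempt the configuration challenge mentioned earlier, here is a rough, hedged sketch of one possible starting point. The file format (one interval in milliseconds per line) and the JobLoader class are assumptions made up for this example:

using System.Collections.Generic;
using System.IO;

public static class JobLoader
{
    //Reads one interval (in milliseconds) per line and starts a job for each line.
    public static List<Job> LoadAndStartJobs(string path)
    {
        var jobs = new List<Job>();

        foreach (string line in File.ReadAllLines(path))
        {
            var job = new Job();

            int interval;
            if (int.TryParse(line, out interval))
            {
                job.Interval = interval;
            }
            //If the line cannot be parsed, the job keeps its default Interval of -1 (on demand only).

            job.Start();
            jobs.Add(job);
        }

        return jobs;
    }
}

The caller would be responsible for stopping and disposing these jobs when the application exits.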

Thinking in The Problem’s Domain

I have not explained the Job class first because I feel the semantics of the Job class should not be overlooked. It is important to think of the job in an abstract manner; in fact, this approach is very useful for thinking through other problems. It would have been easy to show the functionality without the Job class, but in that case you would not get to see the benefits of encapsulation. In other words, it should now be clear how a job is supposed to behave. After all, if you don’t know how something is supposed to behave, it becomes much more difficult to write effective code.


Performing long running operations in Windows Forms on another thread

by Scosby Monday, August 16, 2010

This post will introduce developers to the BackgroundWorker class. Often, people ask how to perform an asynchronous operation and update their User Interface (UI) with some kind of progress information. The purpose of this post is to provide an overview of using the BackgroundWorker class to accomplish this common task. You can download the full class file at the end of this post.

While you could use a timer to perform an asynchronous operation, I feel the BackgroundWorker class is the better class to use for most people’s needs in a Windows Forms application. Let’s write a simple application that meets the following requirements:

  • Processes a long running operation on another thread asynchronously
  • Passes an argument to the operation to provide additional information
  • Restricts the user to running only one operation at a time
  • Updates the UI on the operation’s progress
  • Allows the user to cancel the operation

Create a new Windows Forms Application and design a form to look like the following:
Windows Forms Application Example

We have created a simple form with a textbox at the very top which will display the progress of our operation. The user can run or cancel the operation by clicking the appropriate buttons. Finally, the Options group box contains some additional information we can pass to our operation: whether to throw an exception, a user defined argument, and how many “records” we will process during the operation. Be sure to drag a BackgroundWorker control onto the form from the Components section of your toolbox.

Let’s look at the code you will need to write in order to meet the requirements of our simple application. First, we already know the BackgroundWorker can process a long running operation, so we can rely on that class to meet the first requirement. Let's construct our BackgroundWorker in the form's Load event. Add code similar to the following; note that you could also perform these actions in the designer:

private void BGWorker_Load(object sender, EventArgs e)
{
    this.backgroundWorker1 = new BackgroundWorker();

    //Set properties
    this.backgroundWorker1.WorkerSupportsCancellation = true;
    this.backgroundWorker1.WorkerReportsProgress = true;

    //Register event handlers
    this.backgroundWorker1.RunWorkerCompleted += new RunWorkerCompletedEventHandler(backgroundWorker1_RunWorkerCompleted);
    this.backgroundWorker1.ProgressChanged += new ProgressChangedEventHandler(backgroundWorker1_ProgressChanged);
    this.backgroundWorker1.DoWork += new DoWorkEventHandler(backgroundWorker1_DoWork);
}

 

Next, we need to pass an argument to the async operation. The RunWorkerAsync method has an overload with a parameter of type object. Thus, we can encapsulate all of our “options” into a new class and pass this into our operation. Create a nested, private class named Options in your form’s code-behind to represent our group box on the form:

private class Options
{
    public bool ThrowException { get; set; }
    public string Arguments { get; set; }
    public int RecordCount { get; set; }
}

 

Next, we need to begin processing the operation.  Add a click event handler to your “Run” button similar to the following code: 

private void button1_Click(object sender, EventArgs e)
{
    if (!this.backgroundWorker1.IsBusy)
    {
        //Encapsulate our state information into a new class and pass this as an argument to the BackgroundWorker
        Options options = new Options();
        options.Arguments = this.textBoxArgs.Text;
        options.ThrowException = this.checkBox1.Checked;
        options.RecordCount = int.Parse(this.textBoxRecordCnt.Text);

        this.backgroundWorker1.RunWorkerAsync(options);
    }
}

In the button's click event handler, we are meeting the requirement to only allow one operation to run at a time. This is accomplished by checking the IsBusy property to make sure the BackgroundWorker is not busy. If it were busy, then another operation would already be running; you could display this information to the user in a message box by adding an else block. Otherwise, we create an instance of Options and set the properties appropriately. Note that in the code above, the RecordCount value is parsed without checking for a valid int first. This could cause an error, and you should use better validation in production code; a sketch of both ideas follows.
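Here is a hedged sketch of what that else branch and stricter validation might look like. The control names match the snippet above, but the message text is made up for illustration:

private void button1_Click(object sender, EventArgs e)
{
    if (this.backgroundWorker1.IsBusy)
    {
        //Another operation is already running; tell the user instead of silently ignoring the click.
        MessageBox.Show("An operation is already in progress. Please wait for it to finish.");
        return;
    }

    int recordCount;
    if (!int.TryParse(this.textBoxRecordCnt.Text, out recordCount))
    {
        //Reject invalid input rather than letting int.Parse throw.
        MessageBox.Show("Please enter a whole number for the record count.");
        return;
    }

    Options options = new Options();
    options.Arguments = this.textBoxArgs.Text;
    options.ThrowException = this.checkBox1.Checked;
    options.RecordCount = recordCount;

    this.backgroundWorker1.RunWorkerAsync(options);
}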

 

Next, let's look at the DoWork event handler that is responsible for processing our operation on another thread. The main goal here is to retrieve our Options class, determine the configuration, and process the "records", reporting our progress back to the UI. Add code similar to the following:

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    //Cast the argument to our Options class
    Options options = (Options)e.Argument;

    int total = options.RecordCount;

    for (int i = 0; i < total; i++)
    {
        //Check if the user cancelled the operation at the beginning of every iteration.
        if (this.backgroundWorker1.CancellationPending)
        {
            e.Cancel = true; //User wants us to quit processing and return.
            break; //Stop processing any remaining records.
        }
        else
        {
            Thread.Sleep(100); //simulate processing

            int currentRecord = i + 1; //offset zero based loop for progress reporting

            string message = "Records processed: " + currentRecord.ToString();

            //Get progress percentage
            //Note: Force decimal division else the result is rounded to the nearest integer before we can convert it to a percentage.
            decimal progress = (currentRecord / (decimal)total) * 100;

            //Raise the event to report progress, the UI thread will handle this in backgroundWorker1_ProgressChanged
            this.backgroundWorker1.ReportProgress((int)progress, message);

            if (options.ThrowException)
            {
                //This exception will be suppressed at runtime and exposed in the RunWorkerCompletedEventArgs.Error property.
                throw new InvalidOperationException("You checked the box to throw an error.");
            }
        }
    }

    e.Result = "I was processed on another thread. Your arguments: " + options.Arguments;
}

The most important piece of the DoWork event handler is retrieving our Options class from e.Argument. This allows us to determine what our operation should be doing. After determining our record count, we ensure the user has not clicked the "cancel" button. If the user cancelled, we must set e.Cancel to true so we know the user explicitly cancelled. Otherwise, we begin processing the operation, simulating a call to a long running operation with Thread.Sleep. The other important discussion point is our requirement of informing the UI after we have processed each record. This is accomplished by the call to backgroundWorker1.ReportProgress, which lets us report back a percentage complete and an object representing state. In our case, we just send back a string, but you could easily use the technique discussed above and pass in a custom class similar to our Options class; this would allow you to handle more complex scenarios than our example demonstrates (a sketch appears after the ProgressChanged discussion below).

 

Next, let's handle the ProgressChanged event. The UI responds to this event, which is raised during the async operation; this allows you to update your UI without having to invoke a method from a non-UI thread. Add code similar to the following:

private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    this.progressBar1.Value = e.ProgressPercentage;

    this.textBoxProgress.Text = e.UserState.ToString();
}

The ProgressChanged event handler simply takes our progress percentage, updates a progress bar on the form, and displays the message we sent back in the UserState parameter of the ReportProgress method. This event will be raised each time you call ReportProgress. This is the real beauty of the BackgroundWorker: this eventing pattern makes it very simple for you to have a robust async operation that you can handle in a flexible way.
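To illustrate the custom-state idea mentioned earlier, here is a rough sketch that reports a small hypothetical ProgressInfo class instead of a plain string. The class name and its properties are invented for this example:

//A hypothetical class for richer progress reporting.
private class ProgressInfo
{
    public int CurrentRecord { get; set; }
    public int TotalRecords { get; set; }
    public string Message { get; set; }
}

//In the DoWork loop, report an object instead of a string:
//this.backgroundWorker1.ReportProgress((int)progress,
//    new ProgressInfo { CurrentRecord = currentRecord, TotalRecords = total, Message = message });

//The ProgressChanged handler would then cast the user state back to ProgressInfo.
private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    ProgressInfo info = (ProgressInfo)e.UserState;

    this.progressBar1.Value = e.ProgressPercentage;
    this.textBoxProgress.Text = string.Format("{0} of {1}: {2}",
        info.CurrentRecord, info.TotalRecords, info.Message);
}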

 

Next, we need to handle the RunWorkerCompleted event. This event always occurs, and you need to handle it for three reasons:

  1. Determine if the operation threw an exception
  2. Determine if the user cancelled the operation
  3. Determine if the operation completed successfully

How you decide to handle each of these scenarios is equally important as handling the RunWorkerCompleted event itself. I will leave it up to you to determine what is appropriate, but you should start with the following code:

private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Error != null)
    {
        var newEx = new InvalidOperationException("An error occurred during processing.", e.Error);

        MessageBox.Show(newEx.ToString());
    }
    else if (e.Cancelled)
    {
        MessageBox.Show("User cancelled operation!");
    }
    else
    {
        this.textBoxProgress.Text = e.Result.ToString();
    }
}

 

Our final requirement is to allow the user to cancel the operation. As we covered above, cancelling will still raise the RunWorkerCompleted event. By adding a button, the user can simply click it to cancel the operation. Only one thing happens in this handler: we inform the background worker that a cancellation is pending. Our DoWork event handler is already checking for CancellationPending to be true, and this call is what sets that property to true. Add code similar to the following to handle your cancel button's click event:

private void button2_Click(object sender, EventArgs e)
{
    this.backgroundWorker1.CancelAsync();
}

 

This post has given you an overview of how to use the BackgroundWorker class. Review the MSDN documentation for additional information on the BackgroundWorker class.

 

You can download the entire class file for this example here: BGWorker Class.zip (1.46 kb)


Thread synchronization with the System.Timers.Timer class (part one)

by Scosby Sunday, January 4, 2009

Part one of this series will focus on what you should do if you run across a System.Timers.Timer component, especially if you are performing maintenance on the code base.

If you need your program to perform some task repeatedly, you’ve probably used one of the three Timers in the .NET framework to do this. I will focus on the System.Timers.Timer class, which is considered a “server” timer.  The other two types are the System.Threading.Timer class and the System.Windows.Forms.Timer class. I will briefly explain the three timers and their usefulness, but you should read about them on MSDN if you want further details.

• The System.Threading.Timer class is a lightweight multithreaded timer class. It does not raise events but offers the basic functionality of the other two timer classes.

• The System.Windows.Forms.Timer class is designed for Windows Forms applications that have a UI to display. It is single-threaded, has a limited accuracy of 55 milliseconds, and requires a UI message pump.

• The System.Timers.Timer class is a server-based timer designed for use with worker threads in a multi-threaded environment.

Which timer should I use? The answer to that question is the ubiquitous: it depends. Seriously, it truly depends on what your expectations are and the type of program you are building. I think we can all agree that single-threaded applications are easier to program, but the time of parallel computing is fast approaching and, as developers, we need to learn how to synchronize threads. I will expand on this topic in part two of this series, so let’s table this question in the meantime.

For part 1 of this series, let’s assume you run across a Windows Service in your code base. It is supposed to perform some task at a given interval. To do this it is using the System.Timers.Timer class, and in the Elapsed event handler the Timer is stopped to try and prevent additional events from being raised. Let’s look at what is wrong with this implementation and how to fix it if we can’t change the implementation (which part 2 of this series will explore).

There are two issues regarding this approach. First, if you have a timer in your application, it is likely that your response to the timer’s interval will sometimes take longer than it takes for the interval to elapse again. In other words, your callback/event can take longer to execute than it takes for the timer to fire the same callback/event again, which means you will have overlapping events. Second, I have seen code try to stop the System.Timers.Timer class to prevent the first situation, but this technique has an unintended consequence. The Stop() method toggles the Enabled property. By reflecting the class and looking at the Enabled property, the problem with this technique becomes obvious. Here is the snippet:

if (!value)
{
    if (this.timer != null)
    {
        this.cookie = null;
        this.timer.Dispose();
        this.timer = null;
    }
    this.enabled = value;
}

Notice how the Enabled property disposes the timer (based on ‘value’, which represents the Boolean you’ve set the property to), which could let the timer continue to raise events after you thought you stopped it (even though the code “nulls” the internal timer object). What that means is out of the scope of this post, but it has to do with how the Garbage Collector finalizes objects for collection. See Jeffrey Richter’s book, CLR via C#, chapter 20, for a detailed discussion of how this works.

This is why MSDN recommends you design your event handler to be “reentrant” using the Interlocked class’s CompareExchange() method instead of simply stopping the timer and crossing your fingers. Assuming you only wish for one Elapsed event to be handled at any given time, this technique is appropriate. Here is a code snippet simplifying the MSDN example of reentrancy avoidance:

private void m_Timer_Elapsed(object sender, ElapsedEventArgs e)
{
    if (System.Threading.Interlocked.CompareExchange(ref m_syncPoint, 1, 0) == 0)
    {
        //Safe to perform the event - no other thread is running the event
        //... implement processing ...

        //Be sure to release control of the syncPoint from this thread when done
        m_syncPoint = 0;
    }
    else
    {
        //Another thread is already running the event
    }
}

In this example, we have a private member field, ‘m_syncPoint’, of type Int32. We store a 1 in this field while an event is being processed and a 0 when no event is being processed. The Interlocked.CompareExchange method handles the thread synchronization to ensure only one thread can change the value when it attempts to process the event handler. Be sure to release control of the syncPoint when your processing finishes.

In summary, if you are doing maintenance work on a code base and run across a System.Timers.Timer component, I would advise you to add some tracing and verify you are getting the expected behavior. Simply stopping the timer does not guarantee you a thread-safe implementation, according to MSDN and my experience with Windows Services! The addition of the Interlocked.CompareExchange method and a private member field, which is really one line of code outside of the if/else statement, allows you to easily guarantee that only a single instance of the event handler is processed at a time. A minimal sketch of how these pieces fit together follows below.

Thread synchronization is an important design consideration when using any timer class other than System.Windows.Forms.Timer, since you are working with a multi-threaded implementation whether you like it or not. Part two will expand on this first post to show you how it’s possible to get a better implementation by using the System.Threading.Timer class, with fewer headaches from the Interlocked class.
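For reference, here is a minimal, self-contained sketch of how the field, the timer setup, and the guarded handler might fit together in one class. The Worker class name, the one-second interval, and the try/finally around the processing are illustrative choices, not part of the original snippet:

using System.Threading;
using System.Timers;

public class Worker
{
    private System.Timers.Timer m_Timer;
    private int m_syncPoint = 0; //0 = idle, 1 = an Elapsed event is currently being processed

    public void Start()
    {
        m_Timer = new System.Timers.Timer(1000); //arbitrary 1 second interval
        m_Timer.Elapsed += m_Timer_Elapsed;
        m_Timer.Start();
    }

    private void m_Timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        if (Interlocked.CompareExchange(ref m_syncPoint, 1, 0) == 0)
        {
            try
            {
                //... implement processing ...
            }
            finally
            {
                //Release the sync point even if processing throws.
                m_syncPoint = 0;
            }
        }
        else
        {
            //Another thread is already running the event; skip this tick.
        }
    }
}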
