A Fistful of WaitHandles - Part Two

by Scosby Wednesday, February 22, 2012

Introduction

This is the second and final post in the series. The first post described the scenario for the Job class and its behavior. This post covers the technical implementation of the Job class and how to extend it to suit your needs.

I’ve included the scenario from the first post, just in case you haven’t read it yet.

Scenario

If you need to run an operation after a specific amount of time, then it is likely you are familiar with one of the many timers available in the .NET Framework. This is a good approach and is well documented. Furthermore, this approach will continue to be a valuable tool for many developers to use in many different applications.

If you are interested in running the timer’s job on demand, in addition to its interval, then you will need to do a bit more work. Of course, this is still reasonable to do with a Timer but it does provide an opportunity to consider another approach. You will learn about scheduling jobs to the ThreadPool in a way that resembles the familiar timers in the .NET Framework.

Code Samples

The following code sample represents the Job class. As a reminder, this class is designed to run an operation after a certain amount of time passes; additionally, it can be run on demand. The Job class encapsulates the code for both behaviors.

Job Class

    using System;
    using System.Threading;

    public class Job : IDisposable
    {
        private AutoResetEvent runWaitHandle = new AutoResetEvent(false);
        private RegisteredWaitHandle registeredWaitHandle;

        public Job()
        {
            this.Interval = Timeout.Infinite; //-1: no schedule by default
        }

        public int Interval { get; set; }

        public void Start()
        {
            WaitOrTimerCallback callback =
                (userState, timedOut) =>
                {
                    if (timedOut)
                    {
                        //The wait handle was not signaled before the interval elapsed
                        Console.WriteLine("Operation ran on schedule.");
                    }
                    else
                    {
                        //The wait handle was signaled by Run()
                        Console.WriteLine("Operation ran on demand.");
                    }
                };

            registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(
                runWaitHandle, //A WaitHandle to be used by the thread pool
                callback, //The operation to execute
                null, //User state passed to the operation (not used here)
                this.Interval, //How often to execute the operation
                false); //false: keep running; true: run only once
        }

        public void Stop()
        {
            if (registeredWaitHandle != null)
            {
                registeredWaitHandle.Unregister(null);
            }
        }

        public void Run()
        {
            runWaitHandle.Set();
        }

        public void Dispose()
        {
            this.Stop();

            if (runWaitHandle != null)
            {
                runWaitHandle.Dispose();
            }
        }
    }

In the first post, we discussed the three steps for scheduling jobs to the ThreadPool. Let’s look at those steps now and see how they are implemented in the Job class.

1.      You need to start, or register, the job.

a.      The Job class exposes a Start method, which queues the operation to the ThreadPool.

b.      If the Interval property is -1 (Timeout.Infinite), the operation will not run on a schedule.

2.      You provide a special object that helps the ThreadPool know when to run your job.

a.      The ThreadPool.RegisterWaitForSingleObject method uses a WaitHandle to control when the operation executes.

b.      With an AutoResetEvent, the ThreadPool will not only run the operation on a schedule, it can also be told to run the operation on demand by signaling the event.

c.       Since the AutoResetEvent is a member field that implements IDisposable, our Job class implements the same interface and cleans up the AutoResetEvent.

3.      In order to stop your job, you need to keep a reference to the object returned when you registered the job.

a.      The ThreadPool.RegisterWaitForSingleObject method returns a RegisteredWaitHandle object after you start the job.

b.      The RegisteredWaitHandle can be used to stop the job.

c.       Stopping the operation is as easy as calling the RegisteredWaitHandle.Unregister method. This is done when disposing the class too.
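Outside the Job class, the same three steps can be seen in a bare call to ThreadPool.RegisterWaitForSingleObject. The sketch below is illustrative only; the timings and messages are my own, not part of the Job class:

```csharp
using System;
using System.Threading;

class Demo
{
    static void Main()
    {
        using (AutoResetEvent signal = new AutoResetEvent(false))
        {
            //Steps 1 and 2: register the wait handle with a callback and an interval.
            RegisteredWaitHandle registration = ThreadPool.RegisterWaitForSingleObject(
                signal,
                (state, timedOut) => Console.WriteLine(timedOut ? "schedule" : "demand"),
                null,
                500,     //run every 500 ms
                false);  //keep running until unregistered

            signal.Set();        //signal the event to run on demand
            Thread.Sleep(1200);  //let the scheduled runs fire as well

            //Step 3: use the returned RegisteredWaitHandle to stop the job.
            registration.Unregister(null);
        }
    }
}
```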

As you can see, the Job class is a neat way to wrap up all the behavior described by the scenario. Additionally, it provides a foundation you can build upon for other uses. I will finish the post by describing a few ways you could extend the Job class.

Extending The Job Class

The following ideas could be incorporated into the Job class. These ideas have varying degrees of complexity. Hopefully, you can use these ideas as inspiration to improve the Job class and customize it to your needs.

·         Handle reentrancy in the callback operation.

·         Make the callback operation a protected virtual method to enable subclassing specific jobs.

·         Support stopping and then restarting the job.

o   Add a property to the Job class to check whether it is registered or not.

·         Add a public property allowing run-once configuration when registering to the ThreadPool.
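A couple of those ideas can be sketched together. This is one possible shape, not the original class: the OnRun method and IsRegistered property are names I have invented, and clearing the RegisteredWaitHandle field in Stop is what makes a later restart possible.

```csharp
using System;
using System.Threading;

public class Job : IDisposable
{
    private readonly AutoResetEvent runWaitHandle = new AutoResetEvent(false);
    private RegisteredWaitHandle registeredWaitHandle;

    public Job()
    {
        this.Interval = Timeout.Infinite;
    }

    public int Interval { get; set; }

    //Lets callers check whether the job is currently registered.
    public bool IsRegistered
    {
        get { return this.registeredWaitHandle != null; }
    }

    public void Start()
    {
        if (this.IsRegistered)
        {
            return; //already started
        }

        this.registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(
            this.runWaitHandle,
            (state, timedOut) => this.OnRun(timedOut),
            null,
            this.Interval,
            false);
    }

    public void Stop()
    {
        if (this.IsRegistered)
        {
            this.registeredWaitHandle.Unregister(null);
            this.registeredWaitHandle = null; //allows a later restart
        }
    }

    public void Run()
    {
        this.runWaitHandle.Set();
    }

    //Subclasses override this method to supply the actual work.
    protected virtual void OnRun(bool ranOnSchedule)
    {
        Console.WriteLine(ranOnSchedule ? "Operation ran on schedule." : "Operation ran on demand.");
    }

    public void Dispose()
    {
        this.Stop();
        this.runWaitHandle.Dispose();
    }
}
```

A specific job would then subclass Job and override OnRun, rather than editing the callback inline.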

Summary

This series covered a complex scenario: running a job periodically and sometimes on demand. You have seen one way to do this by using the ThreadPool.RegisterWaitForSingleObject method. Furthermore, you have seen the benefits of abstracting the problem away from the code.

Tags: ,

IT | Programming

A Fistful of WaitHandles - Part One

by Scosby Tuesday, February 21, 2012

Introduction

This is the first post in the series. It introduces the scenario and describes how the Job class behaves in the new approach. The second post talks about the technical implementation of the Job class and how to extend it to suit your needs.

Scenario

If you need to run an operation after a specific amount of time, then it is likely you are familiar with one of the many timers available in the .NET Framework. This is a good approach and is well documented. Furthermore, this approach will continue to be a valuable tool for many developers to use in many different applications.

If you are interested in running the timer’s job on demand, in addition to its interval, then you will need to do a bit more work. Of course, this is still reasonable to do with a Timer but it does provide an opportunity to consider another approach. You will learn about scheduling jobs to the ThreadPool in a way that resembles the familiar timers in the .NET Framework.

Scheduling Jobs to The ThreadPool

The ThreadPool.RegisterWaitForSingleObject method allows you to set up an operation to be run on a schedule or even on demand. This method can seem complex to use at first glance but it can be broken into three parts.

First, you need to start, or register, the job. Next, you provide a special object that helps the ThreadPool know when to run your job. Finally, in order to stop your job, you need to keep a reference to the object returned after you registered the job.

Rather than explain these parts in more detail, I will show you some code and explain it in the context of the previously described scenario. Again, this post is focused on how a job should behave, in order to better understand the problem. The second post will cover all the technical details of the Job class.

Code Samples

The following code samples represent a console application. However, the techniques used can be applied easily in other types of applications, such as a Windows Service or a WPF application. If you can think of any additional uses feel free to comment on the post. Additionally, if you’re up for a challenge, try modifying the program to read job configuration data from the file system at launch.

Program class

        static void Main(string[] args)
        {
            Console.WriteLine("Start executing timer operations:" + Environment.NewLine);

            RunJobWithInterval();

            RunJobWithNoInterval();

            Console.WriteLine("Done executing timer operations." + Environment.NewLine);

            Console.Write("Press any key to exit...");

            Console.ReadKey(true);
        }

        private static void RunJobWithInterval()
        {
            Console.WriteLine("Running Job 1---------------------------");

            using (Job job = new Job())
            {
                job.Interval = 500;

                job.Start();

                Thread.Sleep(1000);

                job.Run(); //Possible to run on demand even with an interval

                Thread.Sleep(4000);

                job.Stop();
            }
        }

        private static void RunJobWithNoInterval()
        {
            Console.WriteLine("Running Job 2---------------------------");

            using (Job job = new Job())
            {
                job.Start();

                job.Run();

                Thread.Sleep(200);

                job.Stop();
            }
        }

 

This code sample starts with the Main method of the Program class, which calls two other methods; each runs a job either with or without a schedule. It is possible for both jobs to run at the same time; in fact, the jobs are completely independent. This code sample keeps them separated to reduce confusion.

The RunJobWithInterval method first creates a new job, sets the interval, and then starts the job. Next, the application blocks the thread while the job is running. This gives the job time to write to the console and would not be done in a production application. The method then runs the job on demand. Even though the job has a schedule, it is possible to run the job as needed; you can compare and contrast the job’s output for on-demand and scheduled operations. Next, the method blocks the thread again to demonstrate how the job continues to run on its schedule. Finally, the job is stopped.

The RunJobWithNoInterval method is similar but differs in two ways. First, instead of running on a schedule, this job is only run on demand. Second, it only blocks the thread once, to allow the job enough time to run before it is stopped.

Thinking in The Problem’s Domain

I have not explained the Job class implementation first, because I feel the semantics of the Job class should not be overlooked. It is important to think of the job in an abstract manner; in fact, this approach is very useful for thinking through other problems. It would have been easy to show the functionality without the Job class, but then you would not see the benefits of encapsulation. In other words, it should now be clear how a job is supposed to behave. After all, if you don’t know how something is supposed to behave, it becomes much more difficult to write effective code.


Cancellable Events

by Scosby Sunday, June 05, 2011

Introduction

This post will introduce the user to cancellable events: a special kind of event that can be cancelled by one or more of its event handlers. It will also explore the problems with the default behavior of executing multiple CancelEventHandler methods, and show how the user can override that behavior by controlling the execution of the registered event handlers.

Scenario

Assume a class exists which launches a rocket. This class should encapsulate the required functionality to abort the launch, ignite the rocket, and report lift off. The class defines two events: AbortLaunch and Launched. There is one public method named Launch that raises the AbortLaunch event; if the launch is not aborted, it then ignites the rocket and raises the Launched event.

A Simple Example

The following code sample demonstrates how easy it is to create a cancellable event and evaluate the result. This post will utilize a console application, but the techniques can also be applied to other types of applications.

Code Sample – A Simple Rocket Class

    using System;
    using System.ComponentModel;

    public class Rocket
    {
        //Events...
        public event System.ComponentModel.CancelEventHandler AbortLaunch;
        public event System.EventHandler Launched;

        //Event Methods...
        protected virtual bool OnAbortLaunch()
        {
            if (this.AbortLaunch != null)
            {
                System.ComponentModel.CancelEventArgs args = new CancelEventArgs();

                this.AbortLaunch(this, args);

                return args.Cancel;
            }

            return false;
        }

        //Launch Methods...
        public void Launch()
        {
            if (!this.OnAbortLaunch())
            {
                //3..2..1..ignition!

                if (this.Launched != null)
                {
                    this.Launched(this, EventArgs.Empty);
                }
            }
        }
    }

Code Sample – A single event handler

Calling the SimpleRocketLaunch method demonstrates how the Rocket class works by default. Try changing the someCondition variable in the rocket_AbortLaunch event handler to see how the behavior is different.

        private static void SimpleRocketLaunch()
        {
            Rocket rocket = new Rocket();

            rocket.AbortLaunch += new CancelEventHandler(rocket_AbortLaunch);

            rocket.Launched += new EventHandler(rocket_Launched);

            rocket.Launch();

            Console.WriteLine("Press any key to exit...");

            Console.ReadKey(true);
        }

        private static void rocket_Launched(object sender, EventArgs e)
        {
            Console.WriteLine("Rocket launched!");
        }

        private static void rocket_AbortLaunch(object sender, CancelEventArgs e)
        {
            bool someCondition = false;

            e.Cancel = someCondition;
        }

 

System.ComponentModel.CancelEventHandler

This delegate enables the user to create an event which can be cancelled. An instance of CancelEventArgs is passed to all the event handlers, any of which can set CancelEventArgs.Cancel to true if the event should be cancelled.

Delegate Definition:

public delegate void CancelEventHandler(object sender, CancelEventArgs e)

Combining Event Handlers

The user can add more than one event handler to an event. It is important to remember that an event handler is always a delegate, while a delegate may or may not be an event handler! The user can combine multiple event handlers to provide a robust means of checking whether the event should be cancelled; this is sometimes known as “chaining delegates” in the community. The CancelEventHandler type derives from MulticastDelegate, and there is a defined behavior for calling all of a MulticastDelegate’s event handlers (collectively named the invocation list), as explained in the MSDN documentation:

When a multicast delegate is invoked, the delegates in the invocation list are called synchronously in the order in which they appear. If an error occurs during execution of the list then an exception is thrown.

Code Sample – Multiple Event Handlers

This code sample demonstrates the problem with raising the CancelEventHandler in the current implementation of the Rocket class. In this sample, multiple event handlers have been added, via lambda expressions, to the AbortLaunch event. The user should notice how the launch should be aborted by the second handler. However, after running the code, the user will be surprised to see the rocket was launched!

        private static void ComplexRocketLaunch()
        {
            Rocket rocket = new Rocket();

            rocket.AbortLaunch += new CancelEventHandler(rocket_AbortLaunch);

            rocket.AbortLaunch +=
                (o, e) =>
                {
                    //CANCEL THE LAUNCH!!!
                    bool anotherCondition = true;

                    e.Cancel = anotherCondition;
                };

            rocket.AbortLaunch +=
                (o, e) =>
                {
                    bool yetAnotherCondition = false;

                    e.Cancel = yetAnotherCondition;
                };

            rocket.Launched += new EventHandler(rocket_Launched);

            rocket.Launch();

            Console.WriteLine("Press any key to exit...");

            Console.ReadKey(true);
        }

Problem

The event passes the same CancelEventArgs instance to all delegates in the invocation list. While this works well for events that have only one registered event handler, it creates a problem once the user combines, or chains, multiple event handlers. If the user invokes the CancelEventHandler as shown above in the “Multiple Event Handlers” code sample, only the last handler to set the CancelEventArgs.Cancel property is relevant. In other words, only the last event handler determines whether the event is cancelled.
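The last-handler-wins behavior can be demonstrated in isolation, without the Rocket class. This sketch chains two handlers directly on a CancelEventHandler delegate; the handler bodies are illustrative:

```csharp
using System;
using System.ComponentModel;

class Demo
{
    static void Main()
    {
        CancelEventHandler handlers = null;
        handlers += (sender, e) => { e.Cancel = true; };  //wants to cancel
        handlers += (sender, e) => { e.Cancel = false; }; //runs last, so it wins

        CancelEventArgs args = new CancelEventArgs();
        handlers(null, args); //invokes every handler with the same args instance

        Console.WriteLine(args.Cancel); //the first handler's cancellation was overwritten
    }
}
```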

Solution

The user must invoke this event differently than other events. The user should retrieve a collection of the registered event handlers by calling the GetInvocationList method on the CancelEventHandler. This allows the user to check whether the event was cancelled after each event handler runs.

Code Sample – A Robust Rocket Class


    using System;
    using System.ComponentModel;

    public class Rocket
    {
        //Events...
        public event System.ComponentModel.CancelEventHandler AbortLaunch;
        public event System.EventHandler Launched;

        //Event Methods...
        protected virtual bool OnAbortLaunch()
        {
            if (this.AbortLaunch != null)
            {
                System.ComponentModel.CancelEventArgs args = new CancelEventArgs();

                foreach (System.ComponentModel.CancelEventHandler handler in this.AbortLaunch.GetInvocationList())
                {
                    if (args.Cancel)
                    {
                        break; //A previous handler cancelled the event; skip the rest.
                    }

                    handler(this, args);
                }

                return args.Cancel;
            }

            return false;
        }

        //Launch Methods...
        public void Launch()
        {
            if (!this.OnAbortLaunch())
            {
                //3..2..1..ignition!

                if (this.Launched != null)
                {
                    this.Launched(this, EventArgs.Empty);
                }
            }
        }
    }

The user will notice how this Rocket class raises the AbortLaunch event differently in its implementation of the OnAbortLaunch method. By iterating over the invocation list, the user can inspect the result of the previous handler and determine whether the next handler should be called at all.

The result of this subtle change is a more natural behavior from the Rocket class, one most users would expect. After making these changes to the Rocket class, call the ComplexRocketLaunch method again and notice how the rocket is not launched!

Summary

This post introduced the user to the CancelEventHandler by presenting a rocket class capable of aborting a launch based on a registered event handler. The user was shown the problem with the default behavior of raising the CancelEventHandler event, and then shown how to fix the problem by explicitly invoking the registered event handlers by calling the GetInvocationList method.

For More Information

·         CancelEventHandler Delegate

·         MulticastDelegate

·         MulticastDelegate.GetInvocationList Method


Xml Serializing Internal Classes

by Scosby Monday, May 16, 2011

Introduction

The XmlSerializer does not work with internal classes; it only works with public classes. Fortunately, WCF in .NET Framework 3.5 introduced a helpful class called the DataContractSerializer, which can be used with any class, and the process is similar to using the XmlSerializer. This article will demonstrate a simple scenario for the user and explain one approach (out of many!) for serializing a .NET class, defined as internal, into XML.

What is Serialization

Serialization is a complex topic and many users find it confusing. In simple terms, serialization can be defined as: “the process of converting an object into a form that can be easily stored or transmitted”.

This process is a two-way street. When the user turns an object into XML, this is called serialization. When the user turns XML back into an object, this is called deserialization. The difference between the terms is simply the direction in which the process operates.

Serialization is a great tool for the user to have in her toolbox. Often, an application needs to save data so it can be loaded the next time the application starts up; one such example is the options or settings configured by an end user. More advanced examples cover the transmission of serialized objects, including web services and remoting.

Serializers

This post will discuss two different classes that can serialize objects into XML. Of course, this can be accomplished in a variety of ways in the .NET Framework, but exploring them is a great exercise for the inquisitive user wishing to learn more. The topic of discussion is how to serialize internal classes, which cannot be accomplished with the XmlSerializer, so that class will only be discussed briefly.

XmlSerializer

The XmlSerializer class is in the System.Xml.Serialization namespace of the System.Xml.dll assembly. Given a public class, it is capable of converting an object's public properties and fields; it will not work with a class that is not publicly accessible. This provides a simple, easy-to-use serializer that is great for many classes. The user should prefer this class whenever she can, because it is very simple and straightforward.
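For contrast with the internal-class scenario below, here is a minimal round trip with the XmlSerializer on a public class. The Settings type is an invented example; note it must be public and have a parameterless constructor:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

public class Settings //must be public for XmlSerializer
{
    public string Theme { get; set; }
    public int FontSize { get; set; }
}

class Demo
{
    static void Main()
    {
        XmlSerializer serializer = new XmlSerializer(typeof(Settings));

        //Serialize the object's public members to an XML string.
        StringWriter writer = new StringWriter();
        serializer.Serialize(writer, new Settings { Theme = "Dark", FontSize = 12 });

        //Deserialize the XML string back into a new instance.
        Settings copy = (Settings)serializer.Deserialize(new StringReader(writer.ToString()));
        Console.WriteLine(copy.Theme + " " + copy.FontSize); //Dark 12
    }
}
```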

DataContractSerializer

The DataContractSerializer class is in the System.Runtime.Serialization namespace of the System.Runtime.Serialization.dll assembly. It lets the user declare a contract that governs the serialization of an object. A contract, in this case, means the user gets to choose what is serialized, based on attributes the DataContractSerializer reads when serializing an object. This approach sounds more complicated, but it is simpler than it first appears! Of course, the trade-off is that the user must declare a contract via attributes. This becomes tedious for large classes with many members to serialize, but the pattern is easy.

Scenario

Assume the following scenario for a customer service application: a company class exists that needs to be serialized, but it should not be publicly accessible outside the assembly. Thus, the class is defined as internal. The company class also defines a list of company employees, another class that is internal. The application outputs a list of the company’s employees. This output should be in an XML file format so it can interoperate with various other Line of Business (LOB) and 3rd party applications.

 

Code Sample – Company class

    using System.Collections.Generic;
    using System.Runtime.Serialization;

    [DataContract(Namespace = "")]
    internal class Company
    {
        [DataMember]
        public string CompanyName { get; set; }

        [DataMember]
        public List<Employee> CompanyEmployees { get; set; }
    }

 

 

Code Sample – Employee class

    using System;
    using System.Runtime.Serialization;

    [DataContract(Namespace = "")]
    internal class Employee
    {
        [DataMember]
        public string Name { get; set; }

        public string Address { get; set; } //Not serialized: no DataMember attribute

        [DataMember]
        public DateTime StartDate { get; set; }

        [DataMember]
        internal int Salary { get; set; }
    }

The first thing the user should note is the DataContract and DataMember attributes. These attributes allow the DataContractSerializer to determine which class members belong in the output. The user should ensure the class is marked with the DataContract attribute and all members to be serialized are marked with the DataMember attribute. 

The astute reader will have noticed the Employee.Address property is not decorated with the DataMember attribute, so the serialization process will ignore it. In this case it serves as a demonstration, but it certainly could be marked as a DataMember if the user wished.

 

 

Code Sample – Serializing the Company Class


       
        public static string Serialize(Company value)
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }

            StringBuilder xmlData = new StringBuilder();

            using (XmlWriter xw = XmlWriter.Create(xmlData, new XmlWriterSettings { Indent = true, OmitXmlDeclaration = true }))
            {
                DataContractSerializer dcs = new DataContractSerializer(typeof(Company));

                dcs.WriteObject(xw, value);
            }

            return xmlData.ToString();
        }

 

The important thing to notice is how the DataContractSerializer writes to a StringBuilder that is returned to the caller as a string. This string could be saved to an XML file; in this case, the user will simply keep the string in memory in order to construct another instance of the Company class. However, it is a good exercise to save this string to a file and then load the file later to deserialize its contents!
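As a sketch of that exercise, the following program round-trips a DataContract type through a file. The Note type and file name are invented for illustration; here the serializer writes to the file directly instead of going through a string:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

[DataContract(Namespace = "")]
public class Note
{
    [DataMember]
    public string Text { get; set; }
}

class Demo
{
    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "note.xml");
        DataContractSerializer dcs = new DataContractSerializer(typeof(Note));

        //Save: serialize the object to the file.
        using (XmlWriter xw = XmlWriter.Create(path))
        {
            dcs.WriteObject(xw, new Note { Text = "hello" });
        }

        //Load: deserialize the file back into an object later.
        using (XmlReader xr = XmlReader.Create(path))
        {
            Note copy = (Note)dcs.ReadObject(xr);
            Console.WriteLine(copy.Text); //hello
        }
    }
}
```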

After serializing the company class, the XML output will look as follows:

    <Company xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
      <CompanyEmployees>
        <Employee>
          <Name>John Doe</Name>
          <Salary>10</Salary>
          <StartDate>2009-12-15T00:00:00</StartDate>
        </Employee>
        <Employee>
          <Name>Steve Curran</Name>
          <Salary>20</Salary>
          <StartDate>1975-05-22T00:00:00</StartDate>
        </Employee>
      </CompanyEmployees>
      <CompanyName>Contoso</CompanyName>
    </Company>

 

Code Sample – Deserializing the Company Class


       
        public static Company Deserialize(string xmlData)
        {
            if (string.IsNullOrWhiteSpace(xmlData))
            {
                throw new ArgumentNullException("xmlData");
            }

            XmlReaderSettings settings = new XmlReaderSettings()
            {
                CloseInput = true,
            };

            using (XmlReader xr = XmlReader.Create(new StringReader(xmlData), settings))
            {
                DataContractSerializer dcs = new DataContractSerializer(typeof(Company));

                return (Company)dcs.ReadObject(xr);
            }
        }

 

Here, the important thing to notice is how the DataContractSerializer uses an XmlReader to parse the string and deserialize a new instance of the Company class, which is returned to the caller. Aside from reading a string, this method is similar to the one described earlier. In simple terms, the expected result of deserialization is an object in the same state as when it was serialized.

 

In some advanced scenarios, administrators or developers could change the XML values on the file system (if the application saves them there, of course!). This enables customizations and scripts that can ease the deployment and configuration burden for users of an application. Be aware that this kind of behavior opens the door to malicious attacks, and it is a primary example of why the user should be cautious about what kind of information is serialized and (arguably more importantly) deserialized.

 

 

Code Sample – Creating Contoso


       
        public static void Run()
        {
            Company contoso = new Company();

            contoso.CompanyName = "Contoso";

            Employee johnDoe = new Employee();
            johnDoe.Address = "12345 street";
            johnDoe.Name = "John Doe";
            johnDoe.Salary = 10;
            johnDoe.StartDate = new DateTime(2009, 12, 15);

            Employee steveCurran = new Employee();
            steveCurran.Address = "56789 blvd";
            steveCurran.Name = "Steve Curran";
            steveCurran.Salary = 20;
            steveCurran.StartDate = new DateTime(1975, 5, 22);

            contoso.CompanyEmployees = new List<Employee>() { johnDoe, steveCurran };

            string xml = Serialize(contoso);

            Company newContoso = Deserialize(xml);

            //Compare the two Contoso instances to verify serialization
        }

This method utilizes the code samples the user has learned in this post. A Company instance is created and populated with some mock data. This instance is then serialized and stored in a string variable named “xml”. Finally, a new Company instance is deserialized so the user can compare the two instances and see how they differ: in particular, the new instance’s Address values will be null, since that property was not serialized.

Concerns

While the DataContractSerializer sounds like a great tool, and it certainly is useful, the user must be careful to understand when and why it would be used to serialize a class. If the user has a public class, then the best way to serialize that class is with the XmlSerializer. The DataContractSerializer has many additional benefits, but for the scope of this article it is important to note that here it is used to get around the XmlSerializer’s limitation with internal classes. The user can find many other uses of the DataContractSerializer in the context of WCF, but this is a good, albeit creative, solution for applications that must XML serialize an internal class.

More Information

Serialization – MSDN topic

Serialization Samples - MSDN samples

ASMX Web Services & XML Serialization - MSDN forums

.NET Remoting & Runtime Serialization – MSDN forums


SharePoint 2010 Content Organizer - Client Object Model

by Scosby Wednesday, March 16, 2011

Introduction

SharePoint 2010 introduced the Content Organizer to provide a more formal routing infrastructure for site content. Based on a list of configured rules, content is routed to a library based on conditions such as column values or content type. Read more about the Content Organizer on MSDN. This post assumes the user has installed the SharePoint 2010 Client Object Model redistributable.


Using the Content Organizer

One particular feature of the Content Organizer is to redirect users to the Drop Off library. This setting is configured in the Content Organizer Settings link located under Site Administration on the Site Settings page. (See figure 1)

Figure 1 – Content Organizer Redirect Setting

Enabling this setting will send all content for a library that has a configured rule to the drop off library. In other words, if users upload a document to the library it will be sent to the drop off library instead. This only applies to libraries that are listed as “target libraries” in a Content Organizer Rule. (See figure 2).

Figure 2 – Content Organizer Rule configured to a target library

Once a new item has been routed to the drop off library, either a manager or a timer job will move the item to its final location (the target library defined by the rule) once the rule’s conditions have been met. A daily timer job, conspicuously called “Content Organizer Processing” (see figure 3), is created for each web application that has a content organizer enabled site. This job evaluates items in the drop off library against each rule’s conditions; an item must satisfy all of a rule’s conditions in order to be moved to the target library. Unless a manager submits the content first, matching content will not be routed to the target library until the job has run. To move items immediately, it is possible to run the job on demand, or to have a manager submit the item in the drop off library.

Figure 3 - Content Organizer Timer Job

Problem

When using the SharePoint Client Object Model, there is no built-in API for discovering content organizer rules. Additionally, uploading with the client object model ignores the redirect users setting and bypasses the content organizer entirely, so any rules defined for a target library are not applied. By design, when redirecting users to the drop off library, the content organizer restricts a library's available content types for new documents or uploads to those named in content organizer rules that specify the library as a target. If no content organizer rules exist, the library behaves as it does without the content organizer.

Solution

Despite the lack of built-in support for the content organizer in the client object model, it is possible to discover the rules and honor the redirect setting. The solution is to mimic the content organizer's redirect behavior by releasing content to the drop off library instead of uploading it to the target library. This post demonstrates how to retrieve the list of enabled content organizer rules using a CAML query. If a rule exists for the release content type and the redirect users setting is enabled, files should go to the drop off library instead of the target library.

Designing the Solution

The solution has three high-level steps:

  1. Check whether the Web redirects users to the drop off library.
  2. Query the content organizer rules list for enabled rules.
  3. Evaluate the rules against the upload's content type and send the content to the appropriate library.

Checking the Content Organizer

Using the client object model, the user can determine whether a Microsoft.SharePoint.Client.Web (Web) is using the content organizer. Check the client_routerenforcerouting field in the Web's property bag for a Boolean value indicating whether the Web redirects users to the drop off library.

The user should not simply check for this property on the Web and assume all libraries have rules, however. SharePoint requires the user to define content organizer rules to control the routing of content. Thus, the user must retrieve the content organizer rules and evaluate them against the intended content type for an upload. If a rule matches the content type, then the content should be sent to the drop off library instead of the target library, again, only if client_routerenforcerouting is true.

If a Web does not enforce the content organizer redirect, it is safe to upload directly to any library, so retrieving rules for such a Web is unnecessary. The method below assumes the user has already loaded the Web.AllProperties property on the context, for example: context.Load(context.Web, x => x.AllProperties). Otherwise, it throws an exception.

Code Sample – checking for content organizer rules


        private static bool RequiresContentOrganizer(Web web)
        {
            if (!web.IsObjectPropertyInstantiated("AllProperties"))
            {
                throw new InvalidOperationException("Web.AllProperties is not initialized!");
            }

            string fieldName = "client_routerenforcerouting";

            Dictionary<string, object> fieldValues = web.AllProperties.FieldValues;

            if (fieldValues.ContainsKey(fieldName))
            {
                object value = fieldValues[fieldName];

                if (value != null)
                {
                    bool result = false;

                    if (bool.TryParse((string)value, out result))
                    {
                        return result;
                    }
                    else
                    {
                        throw new InvalidOperationException("Unexpected field value in Web properties for content organizer redirect setting!");
                    }
                }
            }

            return false;
        }
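For context, here is a minimal calling sketch. The site URL is a hypothetical placeholder, and, as noted above, Web.AllProperties must be loaded before calling the method:

```csharp
using System;
using Microsoft.SharePoint.Client;

// Sketch only: replace the URL with your own site.
using (ClientContext context = new ClientContext("http://server/sites/records"))
{
    // Load the property bag before calling RequiresContentOrganizer.
    context.Load(context.Web, x => x.AllProperties);
    context.ExecuteQuery();

    bool redirect = RequiresContentOrganizer(context.Web);
    Console.WriteLine("Redirect to drop off library: " + redirect);
}
```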

Query the Content Organizer Rules

The content organizer rules are created in a site list called “Routing Rules”.  This special list has default views which group the rules. The user can choose to display the rules grouped by Content Type or Target Library. Either grouping will provide a collapsible section so the user can easily navigate complex sets of routing rules.

In order for the client object model to release to the drop off library, instead of the target library, it is necessary to construct a CAML query against the routing rules list. The query will return the view fields defined in the special “Rule” content type and allow the user to inspect the routing rule from the client object model.

Querying the rules list can be split into two parts:

·         Find the list.

·         Query the list.

When using the SharePoint client object model, it is possible for the user to easily construct a simple “All Items” CAML query. The user can specify a row limit and a collection of fields to include in the search results. This approach reduces the amount of XML manipulation the user must perform to build the CAML query.

Code Sample – Part One: Find the List

The routing rules list URL should be consistent across language packs; you can check the RoutingRuleList_ListFolder key in the dlccore resx file located in the Resources directory of the SharePoint 14 hive. This means that regardless of your site's language, you can rely on SharePoint naming the routing rules list URL "…/RoutingRules".

        string webUrl = context.Web.ServerRelativeUrl;

        //Handle root sites & sub-sites differently
        string routingRulesUrl = webUrl.Equals("/", StringComparison.Ordinal) ? "/RoutingRules" : webUrl + "/RoutingRules";

        List routingRules = null;

        ListCollection lists = context.Web.Lists;

        IQueryable<List> queryObjects = lists.Include(list => list.RootFolder).Where(list => list.RootFolder.ServerRelativeUrl == routingRulesUrl);

        IEnumerable<List> filteredLists = context.LoadQuery(queryObjects);

        context.ExecuteQuery();

        routingRules = filteredLists.FirstOrDefault();

        if (routingRules == null)
        {
            throw new InvalidOperationException("Could not locate the routing rules list!");
        }

Code Sample – Part Two: Query the List

The query should filter the list results to include only active rules (based on the RoutingEnabled field). Notice how the CAML query is restricted to a row limit of 100 and defines a custom set of view fields via the static CamlQuery.CreateAllItemsQuery method. This method generates a basic CAML query, which is then parsed with LINQ to XML and modified to include the Query element.

        private CamlQuery GetContentOrganizerRulesCaml()
        {
            string[] viewFields = new string[]
            {
                "RoutingConditions",
                "RoutingContentTypeInternal",
                "RoutingPriority",
                "RoutingRuleName",
                "RoutingTargetFolder",
                "RoutingTargetLibrary",
                "RoutingTargetPath"
            };

            //view...
            CamlQuery caml = CamlQuery.CreateAllItemsQuery(100, viewFields);

            XElement view = XElement.Parse(caml.ViewXml);

            //query...
            XElement routingEnabled = new XElement("Eq",
                new XElement("FieldRef", new XAttribute("Name", "RoutingEnabled")),
                new XElement("Value", new XAttribute("Type", "YesNo"), "1"));

            XElement query = new XElement("Query", new XElement("Where", routingEnabled));

            //Add query element to view element
            view.FirstNode.AddBeforeSelf(query);

            caml.ViewXml = view.ToString();

            return caml;
        }
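For reference, after the Query element is inserted, the resulting ViewXml should look roughly like the following. This is a sketch of the expected shape; the exact attributes and element order may vary slightly by client library version:

```xml
<View Scope="RecursiveAll">
  <Query>
    <Where>
      <Eq>
        <FieldRef Name="RoutingEnabled" />
        <Value Type="YesNo">1</Value>
      </Eq>
    </Where>
  </Query>
  <ViewFields>
    <FieldRef Name="RoutingConditions" />
    <FieldRef Name="RoutingContentTypeInternal" />
    <FieldRef Name="RoutingPriority" />
    <FieldRef Name="RoutingRuleName" />
    <FieldRef Name="RoutingTargetFolder" />
    <FieldRef Name="RoutingTargetLibrary" />
    <FieldRef Name="RoutingTargetPath" />
  </ViewFields>
  <RowLimit>100</RowLimit>
</View>
```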

Evaluate the Rules and Conditions

After retrieving the content organizer rules, the user must check whether any rule matches the release content type ID. If there is a match, the content must be released to the drop off library instead of the content type's library. If no rules match the upload's content type, then the content can be sent directly to the content type's library.

Code Sample – Evaluate Rules and Send Content

This method demonstrates how the user can parse the search results from the rules list. The "RoutingContentTypeInternal" field value must be split to extract the rule's content type name and content type ID. If a rule matches, the user can determine where to send the content.

        private static void EvaluateRules(ListItemCollection items)
        {
            string yourContentTypeId = "0x01010B"; //replace with your upload content type ID.

            ListItem rule = null;

            foreach (ListItem item in items)
            {
                string contentType = null;
                string contentTypeId = null;

                if (item.FieldValues.ContainsKey("RoutingContentTypeInternal"))
                {
                    object value = item.FieldValues["RoutingContentTypeInternal"] ?? string.Empty;

                    string[] values = value.ToString().Split(new char[] { '|' }, StringSplitOptions.None);

                    if (values.Length == 2)
                    {
                        contentTypeId = values[0];
                        contentType = values[1];
                    }
                }

                if (yourContentTypeId == contentTypeId)
                {
                    rule = item;
                    break;
                }
            }

            if (rule != null)
            {
                //send to drop off library...
            }
            else
            {
                //send to content type library...
            }
        }
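Putting the three steps together, a hedged sketch of the overall flow might look like this. It assumes the `routingRules` list located earlier and the `GetContentOrganizerRulesCaml` and `EvaluateRules` methods shown above are in scope; `List.GetItems` executes the CAML query on the server:

```csharp
// Sketch only: ties together the methods from the preceding samples.
if (RequiresContentOrganizer(context.Web))
{
    CamlQuery caml = GetContentOrganizerRulesCaml();

    // Run the CAML query against the routing rules list.
    ListItemCollection items = routingRules.GetItems(caml);
    context.Load(items);
    context.ExecuteQuery();

    // Decide whether to send the file to the drop off library.
    EvaluateRules(items);
}
else
{
    // No redirect in force: upload directly to the target library.
}
```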

Summary

This post explained the difficulty of using the client object model to release content to a Web that has the content organizer enabled (redirecting users to the drop off library) and to determine which libraries are affected by content organizer rules. The code snippets above come from the attached sample class file, which the user can download to better understand the implementation.

Scosby Content Organizer.zip (1.73 kb)


IT | Programming

Shawn Cosby Awarded Microsoft Community Contributor for 2011

by Scosby Wednesday, February 16, 2011

Today I got a surprising email from Microsoft. I was honored to learn that my contributions to the Microsoft online technical communities at MSDN have been recognized with the Microsoft Community Contributor Award for 2011. You can view my MSDN profile here.

 


Technology | Programming

Custom Search Results in KnowledgeLake Imaging for SharePoint - Part 3

by Scosby Friday, December 10, 2010

Introduction

This post uses Imaging for SharePoint version 4.1 and requires the SDK. Contact KnowledgeLake to learn more about Imaging for SharePoint or the SDK. Contact KnowledgeLake Support to obtain the latest SDK if your company already has a license.

This post will demonstrate how to create a Silverlight Search Results control in a SharePoint Solution. This post will use the DataGrid control available in Microsoft’s Silverlight Control Toolkit. When doing any development work, one should always test in a staging/testing environment before going live to a production server. Class files are available for download at the end of the post.

The User Control

The project will use the Silverlight Toolkit's DataGrid to display search results to the end user. Add a new user control to the KLSearchExtensions project and name it CustomQueryResults. This name must be used exactly as shown, otherwise KnowledgeLake Imaging for SharePoint will not recognize the custom control.

XAML

<UserControl xmlns:sdk="http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk"
             x:Class="KLSearchExtensions.CustomQueryResults"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:viewModel="clr-namespace:KLSearchExtensions.ViewModel"
             xmlns:controlsToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit"
             mc:Ignorable="d"
             d:DesignHeight="480"
             d:DesignWidth="640">
    <UserControl.DataContext>
        <viewModel:CustomQueryResultsViewModel x:Name="Model"
                                               PropertyChanged="Model_PropertyChanged" />
    </UserControl.DataContext>
    <Grid x:Name="LayoutRoot">
        <controlsToolkit:BusyIndicator IsBusy="{Binding IsBusy}">
            <sdk:DataGrid x:Name="dataGrid"
                          AutoGenerateColumns="False"
                          HorizontalAlignment="Stretch"
                          VerticalAlignment="Stretch"
                          Margin="4" />
        </controlsToolkit:BusyIndicator>
    </Grid>
</UserControl>

 

DataContext

The User Control's data context is an instance of our ViewModel. In the XAML shown above, the XML namespace viewModel points to your ViewModel's project namespace (in this case, KLSearchExtensions.ViewModel).

LayoutRoot

The User Control's root element is a Grid. Add the Silverlight Toolkit's BusyIndicator to the Grid. The BusyIndicator control hosts the actual content of our control and allows us to display an informational message during loading or searching. Add the Silverlight Toolkit's DataGrid to the BusyIndicator control. Be sure to set the AutoGenerateColumns property to false; the search result columns will be built dynamically before binding to the grid.

Code Behind

A Strongly Typed ViewModel

The user will benefit from creating a convenience property to get an instance of the ViewModel from the User Control. Create a new private property named TypedModel. This property will encapsulate a strongly typed instance of the ViewModel for our User Control. In other words, the property returns the User Control’s DataContext property as an instance of our ViewModel. The code should look like the following:

 

Strongly Typed ViewModel Code Sample

        private CustomQueryResultsViewModel TypedModel
        {
            get
            {
                return this.DataContext as CustomQueryResultsViewModel;
            }
        }

IQueryResults Interface

Implement the IQueryResults interface from the namespace KnowledgeLake.Silverlight.Search.Contracts. Implementing the interface defines the Error event. This event is raised by KnowledgeLake Imaging for SharePoint when an error occurs while loading the Search Result extension. The user should not raise this event but can handle it. Note that since this is a Silverlight application, the user will have to log to IsolatedStorage on the client machine or call a web service to record the error; handling this error is beyond the scope of this post.

 

The IQueryResults interface members OpenExecuteSearch (method) and Query (property) will be implemented as wrappers around the User Control’s ViewModel members. The ExecuteQueryStringSearch method is beyond the scope of this post and will not be implemented. The interface members’ implementations should look similar to the following:

 

IQueryResults Interface Code Sample

        public event EventHandler<KnowledgeLake.Silverlight.EventArgs<Exception>> Error;

        public void ExecuteQueryStringSearch()
        {
            throw new NotImplementedException();
        }

        public void OpenExecuteSearch(string keywords)
        {
            this.TypedModel.OpenExecuteSearch(keywords);
        }

        public string Query
        {
            get { return this.TypedModel.Query; }
            set { this.TypedModel.Query = value; }
        }

ViewModel PropertyChanged Event

The PropertyChanged event is wired up in the User Control's XAML to an event handler called Model_PropertyChanged. This handler must respond to changes in the Query, SearchResultSource, and Message properties. The method should look like the following:

ViewModel Property Changed Code Sample

        private void Model_PropertyChanged(object sender, PropertyChangedEventArgs e)
        {
            var model = sender as CustomQueryResultsViewModel;

            switch (e.PropertyName.ToUpperInvariant())
            {
                case "QUERY":
                    model.ExecuteQuery();
                    break;

                case "SEARCHRESULTSOURCE":
                    this.RenderResults(model.SearchResultSource);
                    break;

                case "MESSAGE":
                    this.DisplayMessage(model.Message);
                    break;
            }
        }

Displaying Search Results

Create a method called RenderResults which takes a parameter of type SearchResultSource. This method renders the search results into the DataGrid. If the user has added or removed columns in the query since it was last executed, the UpdateColumns property will be true; in this scenario, it is necessary to clear the grid of all columns and rebuild them.

 

Each item in the SearchResultSource.ViewColumns collection is represented as a property on each item in the SearchResultSource.Results collection. The SearchResultSource.Results property is dynamically built so that, when bound to the DataGrid.ItemsSource property, it supplies the corresponding ViewColumn's value. In other words, these are the rows of the grid with the column values included.

 

Displaying Search Results Code Sample

        private void RenderResults(SearchResultSource result)
        {
            if (result.ViewColumns != null)
            {
                if (result.UpdateColumns) //user changed result columns on the query builder
                {
                    this.dataGrid.Columns.Clear();

                    foreach (ViewColumn column in result.ViewColumns)
                    {
                        var item = new DataGridTextColumn();
                        item.IsReadOnly = true;
                        item.Header = column.FriendlyName;
                        item.Binding = new Binding(column.InternalName);

                        this.dataGrid.Columns.Add(item);
                    }
                }

                this.dataGrid.ItemsSource = result.Results;
            }
        }

 

If the end user added or removed columns in the query, the method clears the grid's columns and rebuilds them from the SearchResultSource.ViewColumns collection. While iterating the ViewColumns collection, be sure to create a new Binding for each column using the ViewColumn.InternalName property. This binding enables the grid to display the appropriate column value for each item in the SearchResultSource.Results collection. This is powerful functionality provided by KnowledgeLake Imaging for SharePoint: essentially, the developer does not need to worry about which columns were included in the query or how to bind them to the search results.


Download Files: CustomQueryResults.zip

Blog Posts In This Series


Finished Custom Control


IT | Programming

Custom Search Results in KnowledgeLake Imaging for SharePoint - Part 2

by Scosby Friday, December 10, 2010

Introduction

This post uses Imaging for SharePoint version 4.1 and requires the SDK. Contact KnowledgeLake to learn more about Imaging for SharePoint or the SDK. Contact KnowledgeLake Support to obtain the latest SDK if your company already has a license.

This post will demonstrate how to create a Silverlight Search Results control in a SharePoint Solution. This post will use the DataGrid control available in Microsoft’s Silverlight Control Toolkit. When doing any development work, one should always test in a staging/testing environment before going live to a production server. Class files are available for download at the end of the post.

The ViewModel

Create a new folder in the KLSearchExtensions project named ViewModel. Next, add a new class named CustomQueryResultsViewModel to the folder. This class will be responsible for managing the service layer which retrieves search results. Additionally, the User Control will bind to the ViewModel for display of results and status information.

Service Layer

The ViewModel's constructor should create an instance of the SearchService class (located in the KnowledgeLake.Silverlight.Imaging.Search.Services namespace) and store it in a private field. The constructor should also wire up the SearchService.FacetSearchCompleted event to a handler that will process the results of our query. In order to use the SearchService class, the Silverlight application needs specific bindings in its ServiceReferences.ClientConfig file. The SearchService class dynamically constructs the client proxy with its current URL, so the client endpoint addresses can be omitted. A downloadable ClientConfig file is included in this post; it should look like the following:

 

<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="FacetQuerySearchSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
        <binding name="LoggerSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
        <binding name="FileInformationSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
        <binding name="WorkflowServiceSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
        <binding name="TaxonomywebserviceSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
        <binding name="RecordsRepositorySoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
        <binding name="IndexWebServiceSoap" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647">
          <security mode="None" />
        </binding>
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint binding="basicHttpBinding" bindingConfiguration="FacetQuerySearchSoap" contract="SearchClient.FacetQuerySearchSoap" name="FacetQuerySearchSoap" />
      <endpoint binding="basicHttpBinding" bindingConfiguration="LoggerSoap" contract="LoggingClient.LoggerSoap" name="LoggerSoap" />
      <endpoint binding="basicHttpBinding" bindingConfiguration="FileInformationSoap" contract="ViewClient.FileInformationSoap" name="FileInformationSoap" />
      <endpoint binding="basicHttpBinding" bindingConfiguration="WorkflowServiceSoap" contract="WorkflowClient.WorkflowServiceSoap" name="WorkflowServiceSoap" />
      <endpoint binding="basicHttpBinding" bindingConfiguration="TaxonomywebserviceSoap" contract="TaxonomyClient.TaxonomywebserviceSoap" name="TaxonomywebserviceSoap" />
      <endpoint binding="basicHttpBinding" bindingConfiguration="RecordsRepositorySoap" contract="OfficialFileClient.RecordsRepositorySoap" name="RecordsRepositorySoap" />
      <endpoint binding="basicHttpBinding" bindingConfiguration="IndexWebServiceSoap" contract="IndexClient.IndexWebServiceSoap" name="IndexWebServiceSoap" />
    </client>
    <extensions />
  </system.serviceModel>
</configuration>

 

ViewModel Constructor Code Sample

        public CustomQueryResultsViewModel()
        {
            this.searchService = new SearchService();
            this.searchService.FacetSearchCompleted += new EventHandler<FacetSearchEventArgs>(searchService_FacetSearchCompleted);
        }

        private void searchService_FacetSearchCompleted(object sender, FacetSearchEventArgs e)
        {
            this.IsBusy = false;

            if (e.Results != null && e.Error == null)
            {
                SearchResultSource results = SearchProcessor.ProcessNewSearchResult(e.Results);

                results.UpdateColumns = this.UpdateResultColumns(results);

                this.SearchResultSource = results;
            }
            else
            {
                this.Message = "Service error getting search!";
            }
        }

 

In the FacetSearchCompleted handler notice two things:

  1. There are a few properties used in the method. The IsBusy and Message properties notify the UI to display certain information to the user, such as a busy indicator or an error message. The SearchResultSource property holds the processed results of our query for later use.
  2. We pass the SearchResults (e.Results) to the SearchProcessor class and the UpdateResultColumns method on the ViewModel. 

The SearchProcessor class greatly simplifies the task of binding dynamic results to a data grid. Its ProcessNewSearchResult method is publicly accessible as of KnowledgeLake Imaging for SharePoint 4.0.0.1 (hotfix 1) or higher. The method's return value is of type SearchResultSource, which contains an IEnumerable collection of the search results in its Results property.

Detecting New Results Columns

When the search service returns results, the grid needs to know whether the end user added or removed a column so it can rebuild its columns. Thus, the ViewModel needs a method to determine whether the result columns have changed since the previous query was executed. Create a method named UpdateResultColumns that returns a Boolean and takes a parameter of type SearchResultSource. The method should look similar to the following:

 

Detecting New Results Columns Code Sample

        private bool UpdateResultColumns(SearchResultSource results)
        {
            bool updateColumns = true;

            if (this.SearchResultSource != null)
            {
                //We need to determine if the user added/removed columns if SearchResultSource is not null.
                if (this.SearchResultSource.ViewColumns.Count < results.ViewColumns.Count)
                {
                    var uniqueColumns = results.ViewColumns.Except(this.SearchResultSource.ViewColumns, new ViewColumnComparer());

                    updateColumns = uniqueColumns != null && uniqueColumns.Count() > 0;
                }
                else
                {
                    var uniqueColumns = this.SearchResultSource.ViewColumns.Except(results.ViewColumns, new ViewColumnComparer());

                    updateColumns = uniqueColumns != null && uniqueColumns.Count() > 0;
                }
            }

            return updateColumns;
        }

 

The UpdateResultColumns method checks whether the user added or removed columns in the query. If the SearchResultSource parameter has more columns than the model's property, the user added columns to the query. In either case, if any columns do not match, the method returns true. The result is used in the FacetSearchCompleted handler to flag the model's SearchResultSource instance to update the columns; this value is assigned to the SearchResultSource.UpdateColumns property, informing the UI how to render the results.

 

Query Layer

The ViewModel should expose two public methods, to be called by the IQueryResults interface from the User Control, named ExecuteQuery and OpenExecuteSearch. The methods should look similar to the following:

Query Layer Code Sample

        public void ExecuteQuery()
        {
            if (!string.IsNullOrWhiteSpace(this.Query))
            {
                this.IsBusy = true;

                this.searchService.FacetSearch(this.Query);
            }
        }

        public void OpenExecuteSearch(string keywords)
        {
            if (!string.IsNullOrWhiteSpace(keywords))
            {
                string query = QueryManager.GetSearchQuery(keywords, "And", false);

                this.Query = query;
            }
        }

 

The OpenExecuteSearch method is called by the KnowledgeLake Search Center's implementation of the OpenSearch protocol. Visit OpenSearch.org for more information. Review the KnowledgeLake Imaging for SharePoint documentation to learn more about how the Search Center leverages the OpenSearch protocol.

The QueryManager class is responsible for constructing a query in the format used by KnowledgeLake Search to perform a SharePoint search query. The user should never manipulate this string directly; the QueryManager class encapsulates all the necessary logic to construct the query. The User Control is notified, via a BindingExpression, of the updated Query property value and calls the ExecuteQuery method to begin retrieving search results.

Part three of this series will look at the User Control for the Custom Search Results.

Download Files: CustomQueryResults.zip

Blog Posts In This Series


IT | Programming

Custom Search Results in KnowledgeLake Imaging for SharePoint - Part 1

by Scosby Friday, December 10, 2010

Introduction

This post uses Imaging for SharePoint version 4.1 and requires the SDK. Contact KnowledgeLake to learn more about Imaging for SharePoint or the SDK. Contact KnowledgeLake Support to obtain the latest SDK if your company already has a license.

This post will demonstrate how to create a Silverlight Search Results control in a SharePoint Solution. This post will use the DataGrid control available in Microsoft’s Silverlight Control Toolkit. When doing any development work, one should always test in a staging/testing environment before going live to a production server. Class files are available for download at the end of the post.

Getting Started

Extending KnowledgeLake Imaging for SharePoint

KnowledgeLake Imaging for SharePoint allows the Search Results control in the search center and the web part to be extended with a custom Silverlight 4 control. This is a powerful feature because it gives users the ability to customize the display of KnowledgeLake search results in a variety of ways. However, any custom Search Results control will replace the default KnowledgeLake control in both the search center and the web part. There is no way to add functionality to the default KnowledgeLake Search Results control. In fact, a custom control will not inherit any functionality from the default control; all the functionality of a custom control must be implemented by the developer from the ground up. See figure 1 below for a picture of the default control in the search center; it looks similar in the web part.

Figure 1 - Default Search Results Control


Setting Up the SharePoint Solution Project

Create a new Silverlight 4 application and name it KLSearchExtensions. The Silverlight application does not need to be hosted in a new web site. Uncheck the "Host the Silverlight application in a new Web site" box in the New Silverlight Application dialog; it should now appear similar to Figure 2 below.

Figure 2 - Create a New Silverlight Application
Figure 2 - Create a New Silverlight Application

In my previous post on Creating SharePoint 2010 Solutions for Silverlight Applications I described how to easily deploy Silverlight applications to SharePoint 2010. This approach allows the developer to hit F5 to build, deploy, and debug the Silverlight extension. Pretty slick stuff! Create a new SharePoint Solution and name it ExtensionSolution, as shown in Figure 3 below.

Figure 3 - Create a New SharePoint Solution


The extension must be placed in the %SharePointRoot%\Template\Layouts\KLClientBin directory, so set the Deployment Location's Path property accordingly. This is shown in Figure 4 below. Read more about project output references in my previous post mentioned above.

Figure 4 - Set Deployment Location Path


Creating the Extension Project

The extension will be designed to use the Model-View-ViewModel (MVVM) pattern. Add to the project the following references from the KnowledgeLake Imaging for SharePoint SDK and Silverlight Control Toolkit:

  • KnowledgeLake.Silverlight.dll
  • KnowledgeLake.Silverlight.Imaging.Search.dll
  • KnowledgeLake.Silverlight.Search.Contracts.dll
  • System.Windows.Controls.Input.Toolkit.dll
  • System.Windows.Controls.Toolkit.dll

KnowledgeLake Imaging for SharePoint requires a user control that is specifically named and implements a specific interface. Add a new User Control to the KLSearchExtensions project and name it CustomQueryResults. If the control is not named exactly this way, the extension won’t be recognized.

Part two of this series will look at the ViewModel for the Custom Search Results.

Download Files: CustomQueryResults.zip (3.46 kb)




Performing long running operations in Windows Forms on another thread

by Scosby Monday, August 16, 2010

This post will introduce developers to the BackgroundWorker class. Often, people ask how to perform an asynchronous operation and update their User Interface (UI) with some kind of progress information. The purpose of this post is to provide an overview of using the BackgroundWorker class to accomplish this common task. You can download the full class file at the end of this post.

While you could use a timer to perform an asynchronous operation, I feel the BackgroundWorker class is the better choice for most people’s needs in a Windows Forms application. Let’s write a simple application that meets the following requirements:

  • Processes a long running operation on another thread asynchronously
  • Passes an argument to the operation to provide additional information
  • Restricts the user to running only one operation at a time
  • Updates the UI on the operation’s progress
  • Allows the user to cancel the operation

Create a new Windows Forms Application and design a form to look like the following:
Windows Forms Application Example

We have created a simple form with a textbox at the very top which will display the progress of our operation. The user can run or cancel the operation by clicking the appropriate buttons. Finally, the Options group box contains some additional information we can pass to our operation: whether to throw an exception, a user defined argument, and how many “records” we will process during the operation. Be sure to drag a BackgroundWorker control onto the form from the Components section of your toolbox.

Let’s look at the code you will need to write in order to meet the requirements of our simple application. First, we already know the BackgroundWorker can process a long running operation, so we can rely on that class to meet the first requirement. Let's construct our BackgroundWorker in the form's Load event. Add code similar to the following, note that you could also perform these actions in the designer:

private void BGWorker_Load(object sender, EventArgs e)
{
    this.backgroundWorker1 = new BackgroundWorker();

    //Set properties
    this.backgroundWorker1.WorkerSupportsCancellation = true;
    this.backgroundWorker1.WorkerReportsProgress = true;

    //Register event handlers
    this.backgroundWorker1.RunWorkerCompleted += new RunWorkerCompletedEventHandler(backgroundWorker1_RunWorkerCompleted);
    this.backgroundWorker1.ProgressChanged += new ProgressChangedEventHandler(backgroundWorker1_ProgressChanged);
    this.backgroundWorker1.DoWork += new DoWorkEventHandler(backgroundWorker1_DoWork);
}

Next, we need to pass an argument to the async operation. The RunWorkerAsync method has an overload with a parameter of type object, so we can encapsulate all of our “options” into a new class and pass it into the operation. Create a nested, private class named Options in your form’s code-behind to represent the group box on the form:

private class Options
{
    public bool ThrowException { get; set; }
    public string Arguments { get; set; }
    public int RecordCount { get; set; }
}

Next, we need to begin processing the operation.  Add a click event handler to your “Run” button similar to the following code: 

private void button1_Click(object sender, EventArgs e)
{
    if (!this.backgroundWorker1.IsBusy)
    {
        //Encapsulate our state information into a new class and pass this as an argument to the BackgroundWorker
        Options options = new Options();
        options.Arguments = this.textBoxArgs.Text;
        options.ThrowException = this.checkBox1.Checked;
        options.RecordCount = int.Parse(this.textBoxRecordCnt.Text);

        this.backgroundWorker1.RunWorkerAsync(options);
    }
}

In the button's click event handler, we meet the requirement to only allow one operation to run at a time by checking the BackgroundWorker's IsBusy property. If it were busy, then another operation is already running; you could display this information to the user in a message box by adding an else block. Otherwise, we create an instance of Options and set its properties appropriately. Note that in the code above, the RecordCount text is parsed without checking for a valid int first. This could cause an error, and you should use better validation in production code.
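As a quick sketch of that validation point (the input string and message here are just illustrative, not part of the form), int.TryParse avoids the exception int.Parse would throw on bad input:

```csharp
using System;

class ParseDemo
{
    static void Main()
    {
        // int.TryParse returns false instead of throwing on bad input,
        // so the form could warn the user rather than crash.
        string input = "not a number";
        int recordCount;

        if (int.TryParse(input, out recordCount))
        {
            Console.WriteLine("Record count: " + recordCount);
        }
        else
        {
            Console.WriteLine("Please enter a valid whole number.");
        }
    }
}
```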

 

Next, let's look at the DoWork event handler that is responsible for processing our operation on another thread. The main goal here is to retrieve our Options class, determine the configuration, and process the "records" reporting our progress back to the UI. Add code similar to the following:

private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
{
    //Cast the argument to our Options class
    Options options = (Options)e.Argument;

    int total = options.RecordCount;

    for (int i = 0; i < total; i++)
    {
        //Check if the user cancelled the operation at the beginning of every iteration.
        if (this.backgroundWorker1.CancellationPending)
        {
            e.Cancel = true; //User wants us to quit processing.
            return; //Stop now instead of spinning through the remaining iterations.
        }

        Thread.Sleep(100); //simulate processing

        int currentRecord = i + 1; //offset zero based loop for progress reporting

        string message = "Records processed: " + currentRecord.ToString();

        // Get progress percentage
        // Note: Force decimal division, else the result is truncated to an integer before we can convert it to a percentage.
        decimal progress = (currentRecord / (decimal)total) * 100;

        //Raise the event to report progress, the UI thread will handle this in backgroundWorker1_ProgressChanged
        this.backgroundWorker1.ReportProgress((int)progress, message);

        if (options.ThrowException)
        {
            //This exception will be suppressed at runtime and exposed in the RunWorkerCompletedEventArgs.Error property.
            throw new InvalidOperationException("You checked the box to throw an error.");
        }
    }

    e.Result = "I was processed on another thread. Your arguments: " + options.Arguments;
}

The most important piece of the DoWork event handler is retrieving our Options class from e.Argument, which tells us what the operation should be doing. After determining our record count, we ensure the user has not clicked the "cancel" button. If the user cancelled, we must set e.Cancel to true so the completed handler knows the user explicitly cancelled. Otherwise, we begin processing the operation, simulating a call to a long running operation with Thread.Sleep. The other important point is our requirement of informing the UI after we have processed each record. This is accomplished by the call to backgroundWorker1.ReportProgress, which lets us report back a percentage complete and an object representing state. In our case we just send back a string, but you could easily pass a custom class, using the same technique we used for Options, to handle more complex scenarios than this example demonstrates.
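The decimal-division comment in the loop is worth seeing in isolation. A small, self-contained sketch (the record numbers are arbitrary) of why the cast matters:

```csharp
using System;

class ProgressMath
{
    static void Main()
    {
        int currentRecord = 3, total = 8;

        // Integer division truncates: 3 / 8 is 0, so every report would read 0%.
        int wrong = (currentRecord / total) * 100;

        // Casting one operand to decimal forces decimal division: 37.5,
        // which the (int) cast then truncates to 37 for ReportProgress.
        decimal progress = (currentRecord / (decimal)total) * 100;

        Console.WriteLine(wrong);         // 0
        Console.WriteLine((int)progress); // 37
    }
}
```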

 

Next, let's handle the ProgressChanged event. The UI responds to this event as it is raised during the async operation, which allows you to update your UI without having to invoke a method from a non-UI thread. Add code similar to the following:

private void backgroundWorker1_ProgressChanged(object sender, ProgressChangedEventArgs e)
{
    this.progressBar1.Value = e.ProgressPercentage;

    this.textBoxProgress.Text = e.UserState.ToString();
}

The ProgressChanged event handler simply takes our progress percentage, updates the progress bar on the form, and displays the message we sent back in the UserState parameter of the ReportProgress method. This event will be raised each time you call ReportProgress. This is the real beauty of the BackgroundWorker: this eventing pattern makes it very simple for you to have a robust async operation that you can handle in a flexible way.

 

Next, we need to handle the RunWorkerCompleted event. This event always occurs and you need to handle it for 3 reasons:

  1. Determine if the operation threw an exception
  2. Determine if the user cancelled the operation
  3. Determine if the operation completed successfully

How you decide to handle each of these scenarios is equally important as handling the RunWorkerCompleted event itself. I will leave it up to you to determine what is appropriate, but you should start with the following code:

private void backgroundWorker1_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
{
    if (e.Error != null)
    {
        var newEx = new InvalidOperationException("An error occurred during processing.", e.Error);

        MessageBox.Show(newEx.ToString());
    }
    else if (e.Cancelled)
    {
        MessageBox.Show("User cancelled operation!");
    }
    else
    {
        this.textBoxProgress.Text = e.Result.ToString();
    }
}

 

Our final requirement is to allow the user to cancel the operation. As covered above, cancelling will still raise the RunWorkerCompleted event. Only one thing happens in the cancel button's click handler: we inform the BackgroundWorker that a cancellation is pending. Our DoWork event handler is already checking for CancellationPending to be true, and calling CancelAsync is what sets that property. Add code similar to the following to handle your cancel button's click event:

private void button2_Click(object sender, EventArgs e)
{
    this.backgroundWorker1.CancelAsync();
}

 

This post has given you an overview of how to use the BackgroundWorker class. Review the MSDN documentation for additional information on the BackgroundWorker class.
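As a closing sketch, the same event pattern can be exercised outside Windows Forms. Note this console version is my own illustration, not part of the form example above: without a UI SynchronizationContext, ProgressChanged and RunWorkerCompleted are raised on ThreadPool threads, so a wait handle is used to keep Main alive until the worker finishes.

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class ConsoleWorkerDemo
{
    static void Main()
    {
        using (var done = new ManualResetEvent(false))
        {
            var worker = new BackgroundWorker { WorkerReportsProgress = true };

            worker.DoWork += (s, e) =>
            {
                for (int i = 1; i <= 4; i++)
                {
                    Thread.Sleep(50); //simulate processing one "record"
                    ((BackgroundWorker)s).ReportProgress(i * 25);
                }
                e.Result = "finished";
            };

            worker.ProgressChanged += (s, e) =>
                Console.WriteLine("Progress: " + e.ProgressPercentage + "%");

            worker.RunWorkerCompleted += (s, e) =>
            {
                Console.WriteLine("Result: " + e.Result);
                done.Set(); //signal Main that the worker is done
            };

            worker.RunWorkerAsync();
            done.WaitOne(); //block Main until RunWorkerCompleted fires
        }
    }
}
```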

 

You can download the entire class file for this example here: BGWorker Class.zip (1.46 kb)

