Serializing A DataSet With DateTime Columns

by Scosby Friday, March 01, 2013

The Database

I was recently working on a legacy web service that returns a DataSet. Sometimes, these DataSets include DateTime columns with values that come from a database (Figure 1). On the client, it was confusing to see time zone offsets that were not in the database values. The clients could be in different time zones than the web server, so this behavior was easily overlooked during development.

Figure 1 – A Date Value Record

The DataSet

I wondered where the offsets were being added (Figure 2). Clearly, they weren't part of the database. I discovered they are created by the DataSet based on the time zone of the computer where it is serialized. This happens automatically when the web service returns a DataSet to the client. It also happens if you call the DataSet.GetXml method. In either case, 2003-04-06T00:00:00 was becoming 2003-04-06T00:00:00-05:00, creating the confusion (and representing a different instant in time!).

Figure 2 - DataSet XML With An Offset

I needed to find a way to control how the DataSet was serializing its DateTime columns. After visualizing one of the DataSets (Figure 3), I noticed a property on the DataColumn called DateTimeMode. All of the columns had this property, even the columns that were not DateTime columns. I noticed they were all set to UnspecifiedLocal and decided to investigate.

Figure 3 – Visualizing Columns In A DataSet

The Server-Side Fix

Usually, it is not possible to change the DateTimeMode of a column once a DataSet has rows. However, it is possible to change the mode between UnspecifiedLocal and Unspecified because that only affects serialization and not the actual value. This is exactly what I needed and so I wrote the following code on the server side:

System.Data.DataSet data = new DataSet();

// ...

// Iterate the columns and set DataColumn.DateTimeMode
// to DataSetDateTime.Unspecified to remove the offset
// when serialized.
foreach (DataTable table in data.Tables)
{
    foreach (DataColumn item in table.Columns)
    {
        // Switching from UnspecifiedLocal to Unspecified
        // is allowed even after the DataSet has rows.
        if (item.DataType == typeof(DateTime) &&
            item.DateTimeMode == DataSetDateTime.UnspecifiedLocal)
        {
            item.DateTimeMode = DataSetDateTime.Unspecified;
        }
    }
}

This code is pretty straightforward. Changing the column’s mode to Unspecified results in a DateTime value with no offset. This value now matches the database record exactly. This leaves the problem of formatting the value for display entirely up to the client-side application.

Figure 4 - DataSet XML With No Offset
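To see the end-to-end behavior in isolation, here is a minimal, self-contained sketch; the DataSet, table, and column names are made up for illustration and are not from the original web service:

```csharp
using System;
using System.Data;

// Build a tiny DataSet with a single DateTime column.
DataSet data = new DataSet("Data");
DataTable table = data.Tables.Add("Orders");
table.Columns.Add("OrderDate", typeof(DateTime));
table.Rows.Add(new DateTime(2003, 4, 6));

// The default mode is UnspecifiedLocal, so GetXml appends the local
// offset, e.g. <OrderDate>2003-04-06T00:00:00-05:00</OrderDate>.
Console.WriteLine(data.GetXml());

// Switching to Unspecified removes the offset:
// <OrderDate>2003-04-06T00:00:00</OrderDate>
table.Columns["OrderDate"].DateTimeMode = DataSetDateTime.Unspecified;
Console.WriteLine(data.GetXml());
```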

Summary

This post discussed how to control the formatting of DateTime columns when a DataSet is serialized. I showed an example of the default behavior which includes the time zone information offset. Finally, I showed a code snippet that changes the behavior to exclude the offset.


A Fistful of WaitHandles - Part Two

by Scosby Wednesday, February 22, 2012

Introduction

This is the second and final post in the series. The first post talks about the scenario for the Job class and its behavior. This post talks about the technical implementation of the Job class and how to extend it to suit your needs.

I’ve included the scenario from the first post, just in case you haven’t read it yet.

Scenario

If you need to run an operation after a specific amount of time, then it is likely you are familiar with one of the many timers available in the .NET Framework. This is a good approach and is well documented. Furthermore, this approach will continue to be a valuable tool for many developers to use in many different applications.

If you are interested in running the timer’s job on demand, in addition to its interval, then you will need to do a bit more work. Of course, this is still reasonable to do with a Timer but it does provide an opportunity to consider another approach. You will learn about scheduling jobs to the ThreadPool in a way that resembles the familiar timers in the .NET Framework.

Code Samples

The following code sample represents the Job class. As a reminder, this class is designed to run an operation after a certain amount of time passes; additionally, it can be run on demand. The Job class encapsulates the code for both behaviors.

Job Class

    using System;
    using System.Threading;

    public class Job : IDisposable
    {
        private AutoResetEvent runWaitHandle = new AutoResetEvent(false);
        private RegisteredWaitHandle registeredWaitHandle;

        public Job()
        {
            this.Interval = -1; // -1 (Timeout.Infinite) disables the schedule
        }

        public int Interval { get; set; }

        public void Start()
        {
            WaitOrTimerCallback callback =
                (userState, timedOut) =>
                {
                    // timedOut is true when the interval elapsed and
                    // false when the wait handle was signaled on demand.
                    if (timedOut)
                    {
                        Console.WriteLine("Operation ran on schedule.");
                    }
                    else
                    {
                        Console.WriteLine("Operation ran on demand.");
                    }
                };

            registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(
                runWaitHandle, // A WaitHandle to be used by the thread pool
                callback, // The operation to execute
                null, // User state passed to the operation (not used here)
                this.Interval, // How often to execute the operation
                false); // executeOnlyOnce: false, so the job keeps running
        }

        public void Stop()
        {
            registeredWaitHandle.Unregister(null);
        }

        public void Run()
        {
            runWaitHandle.Set();
        }

        public void Dispose()
        {
            if (registeredWaitHandle != null)
            {
                registeredWaitHandle.Unregister(null);
            }

            if (runWaitHandle != null)
            {
                runWaitHandle.Dispose();
            }
        }
    }

If you remember the first post, we discussed the three steps for scheduling jobs to the ThreadPool. Let’s look at those steps now and see how they are implemented in the Job class.

1.      You need to start, or register, the job.

a.      The Job class exposes a Start method, which queues the operation to the ThreadPool.

b.      If the Interval property is -1 the operation will not run on a schedule.

2.      You provide a special object that helps the ThreadPool know when to run your job.

a.      The ThreadPool.RegisterWaitForSingleObject method uses a WaitHandle to control the operation execution.

b.      When using an AutoResetEvent, not only will the ThreadPool run the operation on a schedule but it is also possible to tell the ThreadPool to run your operation on demand.

c.       Since the AutoResetEvent is a member field that implements IDisposable, our Job class needs to implement the same interface and clean up the AutoResetEvent.

3.      In order to stop your job, you need to keep a reference to the object returned after you registered the job.

a.      The ThreadPool.RegisterWaitForSingleObject method returns a RegisteredWaitHandle object after you start the job.

b.      The RegisteredWaitHandle can be used to stop the job.

c.       Stopping the operation is as easy as calling the RegisteredWaitHandle.Unregister method. This is done when disposing the class too.

As you can see, the Job class is a neat way to wrap up all the behavior described by the scenario. Additionally, it provides a foundation you can build upon for other uses. I will finish the post by describing a few ways you could extend the Job class.

Extending The Job Class

The following ideas could be incorporated into the Job class. These ideas have varying degrees of complexity. Hopefully, you can use these ideas as inspiration to improve the Job class and customize it to your needs.

·         Add reentrancy to the callback operation.

·         Make the callback operation a protected virtual method to enable subclassing specific jobs.

·         Support stopping and then restarting the job.

o   Add a property to the Job class to check if it is registered or not.

·         Add a public property allowing run once configuration when registering to the ThreadPool.
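As one possible interpretation of the restart idea, the following sketch tracks registration with an IsRegistered property; the RestartableJob name and the property are my additions, not part of the original Job class:

```csharp
using System;
using System.Threading;

public class RestartableJob : IDisposable
{
    private AutoResetEvent runWaitHandle = new AutoResetEvent(false);
    private RegisteredWaitHandle registeredWaitHandle;

    public int Interval { get; set; } = -1; // -1 means no schedule

    public bool IsRegistered { get; private set; }

    public void Start()
    {
        if (this.IsRegistered)
        {
            return; // Already running; call Stop() before restarting.
        }

        registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(
            runWaitHandle,
            (state, timedOut) => Console.WriteLine(
                timedOut ? "Operation ran on schedule." : "Operation ran on demand."),
            null,
            this.Interval,
            false);

        this.IsRegistered = true;
    }

    public void Stop()
    {
        if (this.IsRegistered)
        {
            registeredWaitHandle.Unregister(null);
            this.IsRegistered = false;
        }
    }

    public void Run()
    {
        runWaitHandle.Set();
    }

    public void Dispose()
    {
        this.Stop();
        runWaitHandle.Dispose();
    }
}
```

With this change, calling Start after Stop registers the job again, while calling Start twice in a row is simply ignored.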

Summary

This series covered a complex scenario: running a job periodically and sometimes on demand. You have seen one way to do this by using the ThreadPool.RegisterWaitForSingleObject method. Furthermore, you have seen the benefits of abstracting the problem away from the code.


A Fistful of WaitHandles - Part One

by Scosby Tuesday, February 21, 2012

Introduction

This is the first post in the series. It describes the scenario that motivates the new approach and the behavior of the Job class in that approach. The second post talks about the technical implementation of the Job class and how to extend it to suit your needs.

Scenario

If you need to run an operation after a specific amount of time, then it is likely you are familiar with one of the many timers available in the .NET Framework. This is a good approach and is well documented. Furthermore, this approach will continue to be a valuable tool for many developers to use in many different applications.

If you are interested in running the timer’s job on demand, in addition to its interval, then you will need to do a bit more work. Of course, this is still reasonable to do with a Timer but it does provide an opportunity to consider another approach. You will learn about scheduling jobs to the ThreadPool in a way that resembles the familiar timers in the .NET Framework.

Scheduling Jobs to The ThreadPool

The ThreadPool.RegisterWaitForSingleObject method allows you to set up an operation to be run on a schedule or even on demand. This method can seem complex to use at first glance but it can be broken into three parts.

First, you need to start, or register, the job. Next, you provide a special object that helps the ThreadPool know when to run your job. Finally, in order to stop your job, you need to keep a reference to the object returned after you registered the job.

Rather than explain these parts in more detail, I will show you some code and explain it in the context of the previously described scenario. Again, this post is focused on how a job should behave, in order to better understand the problem. The second post will cover all the technical details of the Job class.

Code Samples

The following code samples represent a console application. However, the techniques used can be applied easily in other types of applications, such as a Windows Service or a WPF application. If you can think of any additional uses feel free to comment on the post. Additionally, if you’re up for a challenge, try modifying the program to read job configuration data from the file system at launch.

Program class

        static void Main(string[] args)
        {
            Console.WriteLine("Start executing timer operations:" + Environment.NewLine);

            RunJobWithInterval();

            RunJobWithNoInterval();

            Console.WriteLine("Done executing timer operations." + Environment.NewLine);

            Console.Write("Press any key to exit...");

            Console.ReadKey(true);
        }

        private static void RunJobWithInterval()
        {
            Console.WriteLine("Running Job 1---------------------------");

            using (Job job = new Job())
            {
                job.Interval = 500;

                job.Start();

                Thread.Sleep(1000);

                job.Run(); // Possible to run on demand even with an interval

                Thread.Sleep(4000);

                job.Stop();
            }
        }

        private static void RunJobWithNoInterval()
        {
            Console.WriteLine("Running Job 2---------------------------");

            using (Job job = new Job())
            {
                job.Start();

                job.Run();

                Thread.Sleep(200);

                job.Stop();
            }
        }

This code sample starts with the Main method of the Program class, which calls two other methods; each runs a job either with or without a schedule. It is possible for both jobs to run at the same time. In fact, the jobs are completely independent. This code sample keeps them separated to reduce confusion.

The RunJobWithInterval method first creates a new job, sets the interval, and then starts the job. The application then blocks the thread while the job is running. This gives the job time to write to the console and would not be done in a production application. Next, the method runs the job on demand. Even though the job has a schedule, it is possible to run the job as needed. You can compare and contrast the job’s output for on-demand and scheduled operations. Next, the method blocks the thread again to demonstrate how the job will continue to run on its schedule. Finally, the job is stopped.

The RunJobWithNoInterval method is similar but differs in two ways. First, instead of running the job on a schedule, this job is only run on demand. Second, it only blocks the thread once, to allow the job enough time to run before it is stopped.
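For the configuration challenge mentioned earlier, a helper like the following could be a starting point; the one-interval-per-line file format and the JobConfig name are assumptions of mine, not part of the original program:

```csharp
using System;
using System.IO;

// Reads job intervals (milliseconds) from a text file, one per line.
internal static class JobConfig
{
    public static int[] ReadIntervals(string path)
    {
        if (!File.Exists(path))
        {
            return new int[0]; // No configuration means no scheduled jobs.
        }

        string[] lines = File.ReadAllLines(path);
        int[] intervals = new int[lines.Length];

        for (int i = 0; i < lines.Length; i++)
        {
            // Fall back to -1 (no schedule) when a line cannot be parsed.
            intervals[i] = int.TryParse(lines[i], out int value) ? value : -1;
        }

        return intervals;
    }
}
```

At launch, Main could call JobConfig.ReadIntervals and create one Job per interval.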

Thinking in The Problem’s Domain

I have not explained the Job class first, because I feel the semantics of the Job class should not be overlooked. It is important to think of the job in an abstract manner. In fact, this approach is very useful for thinking through other problems. It would have been easy to show the functionality without the Job class. However, in that case you would not get to see the benefits of encapsulation with the Job class. In other words, it should now be clear how a job is supposed to behave. After all, if you don’t know how something is supposed to behave, it becomes much more difficult to write effective code.


Detecting SharePoint Forms Services

by Scosby Friday, September 30, 2011

Introduction

The reader will learn how to use the Forms Services Feature Detection Protocol. The solution demonstrated posts an HTTP request to a SharePoint detector page without calling a SharePoint web service. This approach allows even a user with SharePoint Reader permissions to use the Forms Services Feature Detection Protocol. Future posts about Forms Server will utilize the solution presented in this post.

InfoPath In SharePoint

InfoPath Forms Services, included with Microsoft SharePoint Server 2010, provides a Web browser experience for filling out InfoPath forms. InfoPath 2010 integrates with SharePoint’s Business Connectivity Services (BCS) enabling users to connect their organization's forms to important business data that is stored in external line-of-business systems such as SAP, Oracle or even Twitter! Read the InfoPath Forms Services Overview for more information.

Forms Server Detector

Using the Forms Services Feature Detection Protocol, the reader can easily detect whether Forms Server exists and is enabled on a specific site. By posting a query string to the FormServerDetector.aspx page, an HTTP 204 No Content response is returned if the request was successful but Forms Server features are not enabled, and an HTTP 200 OK response if Forms Server features are enabled. If Forms Server is enabled, the response body contains the detection result, which should always be true according to the protocol.

Solution

The solution first constructs the detector URI from the following parts: the site URL, the FormServerDetector.aspx page and the protocol query parameter. Using the detector URI, the solution posts an HTTP request and inspects the HTTP response to detect the Forms Server status for the site. The protocol specifies the client request headers should include an Accept header with */* as the value. If Forms Server is enabled, the response body will contain the following text:

<server IsFormServerEnabled = 'true' />

Code Sample


        public static bool IsFormsServicesEnabled()
        {
            UriBuilder builder = new UriBuilder("http://scvm1");
            builder.Path = "/_layouts/FormServerDetector.aspx";
            builder.Query = "IsFormServerEnabled=check";

            string servicePath = builder.Uri.GetComponents(UriComponents.AbsoluteUri, UriFormat.SafeUnescaped);

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(servicePath);
            request.Accept = "*/*";
            request.UseDefaultCredentials = true;

            HttpWebResponse response = null;

            try
            {
                response = (HttpWebResponse)request.GetResponse();
            }
            catch (WebException ex)
            {
                // ex.Response is null when the request never reached the server.
                response = (HttpWebResponse)ex.Response;

                if (response == null)
                {
                    return false;
                }
            }

            if (response.StatusCode == HttpStatusCode.OK)
            {
                using (System.IO.StreamReader sr = new StreamReader(response.GetResponseStream()))
                {
                    System.Xml.Linq.XElement doc = XElement.Parse(sr.ReadToEnd());

                    XAttribute fsAttribute = doc.Attribute("IsFormServerEnabled");

                    return bool.Parse(fsAttribute.Value);
                }
            }

            return false;
        }

The reader should take note of the protocol query parameter assigned to the builder.Query property. By using a value of “check” for the detection protocol query parameter, the request will be processed by the protocol server. This solution provides broad support for remote clients to detect Forms Server capabilities, but without resorting to checking for SharePoint Feature definitions.

More Information

·         InfoPath Forms Services Overview

·         Forms Services Feature Detection Protocol Specification [MS-FSFDP]

·         Request for Forms Server Detection


Remotely Opening Office Files From SharePoint 2010

by Scosby Monday, August 22, 2011

Introduction

Developing a client-side application for SharePoint capable of opening files in their native Office application does not require very much code for a robust solution. This post will explore how to use the well-documented SharePoint.OpenDocuments ActiveX control from managed code instead of from JavaScript in a Web browser. This approach will provide the end users with a familiar SharePoint experience in the application.

Problem

There are at least three obvious and important questions to ask regarding this topic:

·         Why not start a new process using the file’s URL?

·         Why not download the file to a temporary location?

·         Why not write a monolithic switch statement, using a handful of Office interop assemblies?

Start Process

The first question asks what is wrong with starting a new process for the file’s URL. This approach is indeed the simplest. However, undesired behavior can occur when attempting to open files from within folders of a document library.

Download File

The second question asks what is wrong with downloading the file. This approach seems more reasonable. However, there is no out-of-the-box integration with SharePoint given this approach. Additionally, this creates a burden on the application to clean up the downloaded file after it is no longer needed.

Interop Assemblies

The third question asks why not write a switch statement to determine which Office interop assembly should be used to open the file? This approach requires much more code. Additionally, it would be an edge case to need such a high level of complexity for opening Office files.

Solution

Fortunately, the three problems described in this post are easily solved using the existing Office tools provided by Microsoft. In particular, SharePoint developers are likely familiar with the SharePoint.OpenDocuments ActiveX control, used from JavaScript in a Web browser, to perform the task at hand.

The SharePoint.OpenDocuments control is defined in the OWSSUPP.dll file, which is installed in the %ProgramFiles%\Microsoft Office\Office14 directory on the client computer during Microsoft Office Setup. The assembly itself is not well documented, but it is the one behind the SharePoint.OpenDocuments control.

Given this information, the reader can navigate through the OWSSUPP assembly with the Visual Studio Object Browser and discover the matching methods needed for the managed code solution. The next section will explain the SharePoint.OpenDocuments control, so the reader can gain a foundation to build upon in the code sample.

SharePoint.OpenDocuments Control

According to the documentation, the control is defined as “An ActiveX control that enables users of Microsoft SharePoint Foundation 2010 to create documents based on a specified template, or to edit documents using their associated applications.”

In other words, this control enables a developer to open a specific Office file from SharePoint in its native Office application on the client. When used in managed code, the control will display the familiar Open Document dialog for Office applications (see Figure 1).

Figure 1 – The Open Document dialog is shown when opening a file from SharePoint

This post will focus on the OpenDocuments.ViewDocument method. As stated in the documentation, this method “opens the document for reading instead of editing, so that the document is not locked on the server.” This behavior is best for a majority of scenarios. By using this method, the developer allows the end user to decide when to edit or lock the document on the SharePoint server.

Code Sample

This code sample uses a hardcoded URL to open a file. However, real world applications will be working with URLs from many different sources, such as search results or views. The developer needs the following version of Office to complete this code sample:

·         Microsoft Office 2010 (2007 should work but not 2003)

o    32-Bit

The code sample will use a console application to open a hardcoded URL for an Office file by using the COWSNewDocument interface and the IOWSNewDocument2 interface.

1.       Create a new console application in Visual Studio.

2.       Right click the project’s References and select Add Reference.

3.       The developer will not find the OWSSUPP components in the COM tab. Thus, select the Browse tab and navigate to %ProgramFiles%\Microsoft Office\Office14\OWSSUPP.DLL to add the reference to the project.

4.       Double click the newly added OWSSUPP assembly in the console application’s References list to open the Object Browser.

5.       Using the Object Browser, navigate into the Interop.OWSSUPP assembly. Expand the OWSSUPP namespace and explore its members. Be sure to look at the COWSNewDocument interface and the IOWSNewDocument2 interface, since these are the building blocks for the code sample.

6.       Modify the Main method in the Program class to look like the following code snippet; be sure the fileUri variable represents a file that is accessible.

        static void Main(string[] args)
        {
            Uri fileUri = new Uri("http://scvm1/Shared Documents/Document 1.docx");

            string fileUrl = fileUri.GetComponents(UriComponents.AbsoluteUri, UriFormat.UriEscaped);

            OWSSUPP.IOWSNewDocument2 control = new OWSSUPP.COWSNewDocument() as OWSSUPP.IOWSNewDocument2;

            control.ViewDocument(fileUrl);
        }

7.  Start debugging and watch the Open Document dialog pop up.

The code sample is very short. Note how the COWSNewDocument coclass is instantiated and cast to the IOWSNewDocument2 interface. This interface is the earliest implementation of the ViewDocument method. Thus, the reader should use the IOWSNewDocument2 interface, unless additional functionality in the newer versions is needed.

For additional functions in the OWSSUPP interfaces, such as EditDocument or CreateNewDocument, it is wise to compare the documentation for the SharePoint.OpenDocuments control with the OWSSUPP members in the Object Browser. Since Microsoft doesn’t document OWSSUPP on MSDN, it is extremely beneficial to refer back to the SharePoint documentation. Most of the time, it is OK to omit the varProgId parameter, which tells the control to default to the application currently installed for that file type.

Conclusion

This post showed how easy it is to translate the SharePoint.OpenDocuments ActiveX control into a managed code solution for opening SharePoint Office files in their native application. The reader has learned how to compare the existing documentation for the SharePoint.OpenDocuments control against the OWSSUPP members in the Object Browser. The end users will appreciate a familiar experience in the reader’s application, as the Open Document dialog is the same experience provided by SharePoint from the browser.


Cancellable Events

by Scosby Sunday, June 05, 2011

Introduction

This post will introduce the user to cancellable events. This special kind of event can be cancelled based on one or more event handlers. This post will also explore the problems with the default behavior of executing multiple CancelEventHandler methods, and how the user can override the default behavior by controlling the execution of the registered event handlers.

Scenario

Assume a class exists which launches a rocket. This class should encapsulate the required functionality to abort the launch, ignite the rocket, and report lift off. The class defines two events: AbortLaunch and Launched. There is one public method named Launch that raises the AbortLaunch event; if the launch is not aborted, it then ignites the rocket and raises the Launched event.

A Simple Example

The following code sample demonstrates how easy it is to create a cancellable event and evaluate the result. This post will utilize a console application, but the techniques can also be applied to other types of applications.

Code Sample – A Simple Rocket Class

    using System;
    using System.ComponentModel;

    public class Rocket
    {
        //Events...
        public event System.ComponentModel.CancelEventHandler AbortLaunch;
        public event System.EventHandler Launched;

        //Event Methods...
        protected virtual bool OnAbortLaunch()
        {
            if (this.AbortLaunch != null)
            {
                System.ComponentModel.CancelEventArgs args = new CancelEventArgs();

                this.AbortLaunch(this, args);

                return args.Cancel;
            }

            return false;
        }

        //Launch Methods...
        public void Launch()
        {
            if (!this.OnAbortLaunch())
            {
                //3..2..1..ignition!

                if (this.Launched != null)
                {
                    this.Launched(this, EventArgs.Empty);
                }
            }
        }
    }

Code Sample – A single event handler

Calling the SimpleRocketLaunch method demonstrates how the Rocket class works by default. Try changing the someCondition variable in the rocket_AbortLaunch event handler to true to see how the behavior changes.

        private static void SimpleRocketLaunch()
        {
            Rocket rocket = new Rocket();

            rocket.AbortLaunch += new CancelEventHandler(rocket_AbortLaunch);

            rocket.Launched += new EventHandler(rocket_Launched);

            rocket.Launch();

            Console.WriteLine("Press any key to exit...");

            Console.ReadKey(true);
        }

        private static void rocket_Launched(object sender, EventArgs e)
        {
            Console.WriteLine("Rocket launched!");
        }

        private static void rocket_AbortLaunch(object sender, CancelEventArgs e)
        {
            bool someCondition = false;

            e.Cancel = someCondition;
        }

System.ComponentModel.CancelEventHandler

This class enables the user to create an event which can be cancelled. An instance of CancelEventArgs is passed to all the event handlers, which can set CancelEventArgs.Cancel to true if the event should be cancelled.

Delegate Definition:

public delegate void CancelEventHandler(object sender, CancelEventArgs e)

Combining Event Handlers

The user can add more than one event handler to an event. It is important to remember an event handler is always a delegate, while a delegate may or may not be an event handler! The user can combine multiple event handlers to provide a robust means for checking if the event should be cancelled. This is sometimes known as “chaining delegates” in the community. The CancelEventHandler type derives from MulticastDelegate, and there is a recommended behavior for calling all of a MulticastDelegate’s event handlers, collectively named the invocation list, as explained in the MSDN documentation:

When a multicast delegate is invoked, the delegates in the invocation list are called synchronously in the order in which they appear. If an error occurs during execution of the list then an exception is thrown.

Code Sample – Multiple Event Handlers

This code sample demonstrates the problem with raising the CancelEventHandler in the current implementation of the Rocket class. In this sample, multiple event handlers have been added, via lambda expressions, to the AbortLaunch event. The user should notice how the launch should be aborted in the 2nd handler. However, after running the code, the user should be surprised to see the rocket was launched!

        private static void ComplexRocketLaunch()
        {
            Rocket rocket = new Rocket();

            rocket.AbortLaunch += new CancelEventHandler(rocket_AbortLaunch);

            rocket.AbortLaunch +=
                (o, e) =>
                {
                    //CANCEL THE LAUNCH!!!
                    bool anotherCondition = true;

                    e.Cancel = anotherCondition;
                };

            rocket.AbortLaunch +=
                (o, e) =>
                {
                    bool yetAnotherCondition = false;

                    e.Cancel = yetAnotherCondition;
                };

            rocket.Launched += new EventHandler(rocket_Launched);

            rocket.Launch();

            Console.WriteLine("Press any key to exit...");

            Console.ReadKey(true);
        }

Problem

The event passes the same CancelEventArgs instance to all delegates in the invocation list. While this solution works well for events that have only one registered event handler, it creates a problem once the user combines, or chains, multiple event handlers. If the user invokes the CancelEventHandler as shown above in the “Multiple Event Handlers” code sample, a problem can occur. Specifically, the last handler to set the CancelEventArgs.Cancel property is the only relevant handler. In other words, only the last event handler will determine if the event should be cancelled.

Solution

The user must invoke the event differently than other events. The user should retrieve a collection of the registered event handlers by calling the GetInvocationList method on the CancelEventHandler. This will allow the user to determine if the event was cancelled after each event handler.

Code Sample – A Robust Rocket Class


    public class Rocket
    {
        //Events...
        public event System.ComponentModel.CancelEventHandler AbortLaunch;
        public event System.EventHandler Launched;

        //Event Methods...
        protected virtual bool OnAbortLaunch()
        {
            if (this.AbortLaunch != null)
            {
                System.ComponentModel.CancelEventArgs args = new CancelEventArgs();

                foreach (System.ComponentModel.CancelEventHandler handler in this.AbortLaunch.GetInvocationList())
                {
                    if (args.Cancel)
                    {
                        break;
                    }

                    handler(this, args);
                }

                return args.Cancel;
            }

            return false;
        }

        //Launch Methods...
        public void Launch()
        {
            if (!this.OnAbortLaunch())
            {
                //3..2..1..ignition!

                if (this.Launched != null)
                {
                    this.Launched(this, EventArgs.Empty);
                }
            }
        }
    }

The user will notice how this Rocket class raises the AbortLaunch event differently in its implementation of the OnAbortLaunch method. By iterating over each handler from the invocation list, the user can inspect the result of the previous handler and determine if the next handler should even be called.

The result of this subtle change is a more natural behavior from the Rocket class that most users would expect.  After making these changes to the Rocket class, call the ComplexRocketLaunch method again and notice how the rocket is not launched!

Summary

This post introduced the user to the CancelEventHandler by presenting a rocket class capable of aborting a launch based on a registered event handler. The user was shown the problem with the default behavior of raising the CancelEventHandler event, and then shown how to fix the problem by explicitly invoking the registered event handlers by calling the GetInvocationList method.

For More Information

·         CancelEventHandler Delegate

·         MulticastDelegate

·         MulticastDelegate.GetInvocationList Method

Tags:

IT | Programming

Xml Serializing Internal Classes

by Scosby Monday, May 16, 2011

Introduction

The XmlSerializer does not work with internal classes; it only works with public classes. The DataContractSerializer, a helpful class introduced with WCF in .NET Framework 3.0, does not have this limitation and can serialize internal classes. The process is similar to using the XmlSerializer. This article will demonstrate a simple scenario for the user, and explain one approach (out of many!) for serializing a .NET class, defined as internal, into XML.

What is Serialization

Serialization is a complex topic and many users find it confusing. In simple terms, serialization can be defined as: “the process of converting an object into a form that can be easily stored or transmitted”.

This process is a two-way street. When the user is turning an object into XML, this is called serialization. When the user is turning XML into an object, this is called deserialization. The two terms simply describe the direction in which the process operates.

Serialization is a great tool for the user to have in her toolbox. Often times, an application will need to save data so it can be loaded the next time the application starts, such as the options or settings configured by an end user. More advanced examples cover the transmission of serialized objects, including web services and remoting.

Serializers

This post will discuss two different classes that can serialize objects into XML. Of course, this can be accomplished in a variety of ways in the .NET Framework, but that is a great exercise for the inquisitive user wishing to learn more. The topic at hand is how to serialize internal classes, which cannot be accomplished with the XmlSerializer, so that class will only be discussed briefly.

XmlSerializer

The XmlSerializer class is in the System.Xml.Serialization namespace of the System.Xml.dll assembly. It is capable of converting a public class's public properties and fields into XML. It will not work with a class that is not publicly accessible. This makes it a simple and easy to use serializer that is great for many classes. The user should always use this class when she can, because it is very simple and straightforward.
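For a public class, the usage is straightforward. The following sketch round-trips a hypothetical public Settings class (the class name and members are examples, not from the original post):

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

//XmlSerializer requires a public class with a parameterless constructor;
//only public properties and fields are serialized.
public class Settings
{
    public string Theme { get; set; }

    public int FontSize { get; set; }
}

class Program
{
    static void Main()
    {
        Settings settings = new Settings { Theme = "Dark", FontSize = 12 };

        XmlSerializer serializer = new XmlSerializer(typeof(Settings));

        StringWriter writer = new StringWriter();

        serializer.Serialize(writer, settings);

        string xml = writer.ToString();

        //Deserialize the XML back into a new instance.
        Settings copy = (Settings)serializer.Deserialize(new StringReader(xml));

        Console.WriteLine(copy.Theme); //Dark
    }
}
```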

DataContractSerializer

The DataContractSerializer class is in the System.Runtime.Serialization namespace of the System.Runtime.Serialization.dll assembly. It lets the user declare a contract that governs the serialization of an object. A contract, in this case, means the user gets to choose what is serialized, based on attributes read by the DataContractSerializer when serializing an object. This approach sounds more complicated, but it is simpler than it first appears! Of course, the trade-off is that the user must declare the contract via attributes. This becomes tedious for large classes with many members to serialize, but the pattern is easy.

Scenario

Assume the following scenario for a customer service application: a company class exists that needs to be serialized, but it should not be publicly accessible outside its assembly. Thus, the class is defined as internal. The company class also defines a list of company employees, another internal class. The application outputs a list of the company’s employees. This output should be in an XML file format so it can interoperate with various other Line of Business (LOB) and 3rd party applications.

 

Code Sample – Company class

    [DataContract(Namespace = "")]

    internal class Company

    {

        [DataMember]

        public string CompanyName { get; set; }

 

        [DataMember]

        public List<Employee> CompanyEmployees { get; set; }

    }

 

 

Code Sample – Employee class

    [DataContract(Namespace = "")]

    internal class Employee

    {

        [DataMember]

        public string Name { get; set; }

 

        public string Address { get; set; }

 

        [DataMember]

        public DateTime StartDate { get; set; }

 

        [DataMember]

        internal int Salary { get; set; }

    }

The first thing the user should note is the DataContract and DataMember attributes. These attributes allow the DataContractSerializer to determine which class members belong in the output. The user should ensure the class is marked with the DataContract attribute and all members to be serialized are marked with the DataMember attribute. 

The astute reader will have noticed the Employee.Address property is not decorated with the DataMember attribute. The serialization process will therefore ignore the Employee.Address property. In this case it serves as a demonstration, but it certainly could be marked as a DataMember if the user wished.

 

 

Code Sample – Serializing the Company Class


       
        public static string Serialize(Company value)

        {

            if (value == null)

            {

                throw new ArgumentNullException("value");

            }

 

            StringBuilder xmlData = new StringBuilder();

 

            using (XmlWriter xw = XmlWriter.Create(xmlData, new XmlWriterSettings { Indent = true, OmitXmlDeclaration = true }))

            {

                DataContractSerializer dcs = new DataContractSerializer(typeof(Company));

 

                dcs.WriteObject(xw, value);

            }

 

            return xmlData.ToString();

        }

 

The important thing to notice is how the DataContractSerializer writes to a StringBuilder that is returned to the caller. This string is typically saved to an XML file. In this case, the user will simply keep the string in memory in order to construct another instance of the Company class. However, it is a good exercise to save this string to a file and then load the file later to deserialize its contents!
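As a sketch of that exercise, the XML string could be written to and read from disk as follows. The path parameter is an arbitrary placeholder, and Serialize and Deserialize are the methods shown in this post:

```csharp
        //Requires a using directive for System.IO.
        public static void Save(Company company, string path)
        {
            //Persist the serialized XML to disk.
            File.WriteAllText(path, Serialize(company));
        }

        public static Company Load(string path)
        {
            //Read the XML back and reconstruct the Company instance.
            return Deserialize(File.ReadAllText(path));
        }
```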

After serializing the company class, the XML output will look as follows:

  <Company xmlns:i="http://www.w3.org/2001/XMLSchema-instance">

    <CompanyEmployees>

      <Employee>

      <Name>John Doe</Name>

      <Salary>10</Salary>

      <StartDate>2009-12-15T00:00:00</StartDate>

    </Employee>

      <Employee>

      <Name>Steve Curran</Name>

      <Salary>20</Salary>

      <StartDate>1975-05-22T00:00:00</StartDate>

    </Employee>

  </CompanyEmployees>

  <CompanyName>Contoso</CompanyName>

</Company>

 

Code Sample – Deserializing the Company Class


       
        public static Company Deserialize(string xmlData)

        {

            if (string.IsNullOrWhiteSpace(xmlData))

            {

                throw new ArgumentNullException("xmlData");

            }

 

            XmlReaderSettings settings = new XmlReaderSettings()

            {

                CloseInput = true,

            };

 

            using (XmlReader xr = XmlReader.Create(new StringReader(xmlData), settings))

            {

                DataContractSerializer dcs = new DataContractSerializer(typeof(Company));

 

                return (Company)dcs.ReadObject(xr);

            }

        }

 

Here, the important thing to notice is how the DataContractSerializer uses an XmlReader to parse the string and deserialize a new instance of the Company class, which is returned to the caller. Aside from reading a string instead of writing one, this method mirrors the Serialize method described earlier. In simple terms, the expected result of deserialization is an object in the same state as when it was serialized.

 

In some advanced scenarios, administrators or developers could change the XML values on the file system (if the application saves them to a file, of course!). This enables customizations and scripts that can ease the deployment and configuration burden for users of an application. Be aware that this kind of behavior opens the door to malicious attacks and is a primary example of why the user should be cautious about what kind of information is serialized and (arguably more importantly) deserialized.

 

 

Code Sample – Creating Contoso


       
        public static void Run()

        {

            Company contoso = new Company();

 

            contoso.CompanyName = "Contoso";

 

            Employee johnDoe = new Employee();

            johnDoe.Address = "12345 street";

            johnDoe.Name = "John Doe";

            johnDoe.Salary = 10;

            johnDoe.StartDate = new DateTime(2009, 12, 15);

 

            Employee steveCurran = new Employee();

            steveCurran.Address = "56789 blvd";

            steveCurran.Name = "Steve Curran";

            steveCurran.Salary = 20;

            steveCurran.StartDate = new DateTime(1975, 5, 22);

 

            contoso.CompanyEmployees = new List<Employee>() { johnDoe, steveCurran };

 

            string xml = Serialize(contoso);

 

            Company newContoso = Deserialize(xml);

 

            //Compare the two Contoso instances to verify serialization

        }

This method utilizes the code samples the user has learned in this post. A Company instance is created and populated with some mock data. This instance is then serialized and stored in a string variable named “xml”. Finally, a new Company instance is deserialized so the user can compare the two instances and see how they differ.
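One simple way to perform that comparison is to re-serialize the new instance and compare the XML strings. This sketch assumes the Serialize and Deserialize methods from this post:

```csharp
            string xml = Serialize(contoso);

            Company newContoso = Deserialize(xml);

            //Re-serializing the copy yields identical XML when every
            //[DataMember] survived the round trip.
            bool identical = string.Equals(xml, Serialize(newContoso), StringComparison.Ordinal);

            //Note: Address is not a DataMember, so newContoso's employees
            //have a null Address even though the XML strings match.
```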

Concerns

While the DataContractSerializer sounds like a great tool, and it certainly is useful, the user must be careful to understand when and why it would be used to serialize a class. If the user has a public class, then the best way to serialize that class is with the XmlSerializer. The DataContractSerializer has many additional benefits, but for the scope of this article it is important to note that it’s used to get around the XmlSerializer’s limitations with an internal class. The user can find many other uses of the DataContractSerializer in the context of WCF, but this is a good, albeit creative, solution for applications that must XML serialize an internal class.

More Information

Serialization – MSDN topic

Serialization Samples - MSDN samples

ASMX Web Services & XML Serialization - MSDN forums

.NET Remoting & Runtime Serialization – MSDN forums

Tags:

IT | Programming

SharePoint 2010 Content Organizer - Client Object Model

by Scosby Wednesday, March 16, 2011

Introduction

SharePoint 2010 introduced the Content Organizer to provide a more formal routing infrastructure for site content. Based on a list of configured rules, content is routed to a library based on conditions such as column values or content type. Read more about the Content Organizer on MSDN. This post assumes the user has installed the SharePoint 2010 Client Object Model redistributable.

Contents

Using the Content Organizer

One particular feature of the Content Organizer is to redirect users to the Drop Off library. This setting is configured in the Content Organizer Settings link located under Site Administration on the Site Settings page. (See figure 1)

Figure 1 – Content Organizer Redirect Setting

Enabling this setting will send all content for a library that has a configured rule to the drop off library. In other words, if users upload a document to the library it will be sent to the drop off library instead. This only applies to libraries that are listed as “target libraries” in a Content Organizer Rule. (See figure 2).

Figure 2 – Content Organizer Rule configured to a target library

Once a new item has been routed to the drop off library, either a manager or a timer job will move the item to its final location once the rule’s conditions have been met. The final location is the target library defined by the rule. A daily timer job, aptly named “Content Organizer Processing” (see figure 3), is created for each web application that has a content organizer enabled site. This job evaluates items in the drop off library against the rules’ conditions; an item must satisfy all of a rule’s conditions in order to be moved to the target library. To move items immediately, run the job on demand or have a manager submit the item in the drop off library.

Figure 3 - Content Organizer Timer Job

Problem

When using the SharePoint Client Object model, there is no implementation provided for determining content organizer rules. Additionally, uploading with the client object model bypasses the content organizer entirely: the redirect users setting and any rules defined for a target library are ignored. By design, when redirecting users to the drop off library, the content organizer restricts a library’s available content types for new documents or uploads to those defined in content organizer rules specifying that library as a target. If no content organizer rules exist, the library behaves as it does without the content organizer.

Solution

Despite the lack of built in support with the client object model for the content organizer, it is possible to discover the rules and honor the redirect setting. The solution is to mimic the content organizer redirect behavior by releasing to the drop off library instead of uploading to the target library. This post will demonstrate how to retrieve the list of enabled content organizer rules using a CAML query. If there is a rule for the release content type and the redirect users setting is enabled, then files should go to the drop off library instead of the target library.

Designing the Solution

The solution has three high-level steps to perform.

Checking the Content Organizer

Using the client object model, it is possible for the user to determine if a Microsoft.SharePoint.Client.Web (Web) is using the content organizer. The field client_routerenforcerouting should be checked for a bool indicating if the Web is redirecting users to the drop off library.

The user should not simply check for this property on the Web and assume all libraries have rules, however. SharePoint requires the user to define content organizer rules to control the routing of content. Thus, the user must retrieve the content organizer rules and evaluate them against the intended content type for an upload. If a rule matches the content type, then the content should be sent to the drop off library instead of the target library, again, only if client_routerenforcerouting is true.

If a Web does not enforce the content organizer redirect, it is safe to upload directly to any library. Thus, retrieving rules for a Web without the content organizer redirect would not be necessary. This method assumes the user has loaded the Web.AllProperties property on the Web context as follows, else an exception is thrown: context.Load(context.Web, x => x.AllProperties)

Code Sample – checking for content organizer rules


        private static bool RequiresContentOrganizer(Web web)

        {

            if (!web.IsObjectPropertyInstantiated("AllProperties"))

            {

                throw new InvalidOperationException("Web.AllProperties is not initialized!");

            }

 

            string fieldName = "client_routerenforcerouting";

 

            Dictionary<string, object> fieldValues = web.AllProperties.FieldValues;

 

            if (fieldValues.ContainsKey(fieldName))

            {

                object value = fieldValues[fieldName];

 

                if (value != null)

                {

                    bool result = false;

 

                    if (bool.TryParse((string)value, out result))

                    {

                        return result;

                    }

                    else

                    {

                        throw new InvalidOperationException("Unexpected field value in Web properties for content organizer redirect setting!");

                    }

                }

            }

 

            return false;

        }
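A minimal usage sketch for this method, showing the required load of Web.AllProperties before the call (siteUrl is a placeholder for your site's URL):

```csharp
        using (ClientContext context = new ClientContext(siteUrl))
        {
            //Load the property bag before calling RequiresContentOrganizer,
            //otherwise the method throws InvalidOperationException.
            context.Load(context.Web, x => x.AllProperties);

            context.ExecuteQuery();

            bool redirectToDropOff = RequiresContentOrganizer(context.Web);
        }
```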

Query the Content Organizer Rules

The content organizer rules are created in a site list called “Routing Rules”.  This special list has default views which group the rules. The user can choose to display the rules grouped by Content Type or Target Library. Either grouping will provide a collapsible section so the user can easily navigate complex sets of routing rules.

In order for the client object model to release to the drop off library, instead of the target library, it is necessary to construct a CAML query against the routing rules list. The query will return the view fields defined in the special “Rule” content type and allow the user to inspect the routing rule from the client object model.

Querying the rules list can be split into two parts:

·         Find the list.

·         Query the list.

When using the SharePoint client object model, it is possible for the user to easily construct a simple “All Items” CAML query. The user can specify a row limit and a collection of fields to include in the search results. This approach reduces the amount of XML manipulation the user must perform to build the CAML query.

Code Sample – Part One: Find the List

The routing rules list URL should be consistent across language packs; you can check the RoutingRuleList_ListFolder key in the dlccore resx file located in the Resources directory of the SharePoint 14 hive. This means that regardless of your site’s language, you can rely on SharePoint naming the routing rules list URL “…/RoutingRules”.

        string webUrl = context.Web.ServerRelativeUrl;

 

        //Handle root sites & sub-sites differently

        string routingRulesUrl = webUrl.Equals("/", StringComparison.Ordinal) ? "/RoutingRules" : webUrl + "/RoutingRules";

 

        List routingRules = null;

 

        ListCollection lists = context.Web.Lists;

 

        IQueryable<List> queryObjects = lists.Include(list => list.RootFolder).Where(list => list.RootFolder.ServerRelativeUrl == routingRulesUrl);

 

        IEnumerable<List> filteredLists = context.LoadQuery(queryObjects);

 

        context.ExecuteQuery();

 

        routingRules = filteredLists.FirstOrDefault();

 

        if (routingRules == null)

        {

            throw new InvalidOperationException("Could not locate the routing rules list!");

        }

Code Sample – Part Two: Query the List

The query should filter the list results to include only the active content organizer rules (based on the RoutingEnabled field). The user should notice how the CAML query is limited to 100 rows and defines a custom set of view fields via the static CamlQuery.CreateAllItemsQuery method. This method generates a basic CAML query, which is then parsed with LINQ to XML and modified to include the query element.

        private CamlQuery GetContentOrganizerRulesCaml()

        {

            string[] viewFields = new string[]

               {

                   "RoutingConditions",

                   "RoutingContentTypeInternal",

                   "RoutingPriority",

                   "RoutingRuleName",

                   "RoutingTargetFolder",

                   "RoutingTargetLibrary",

                   "RoutingTargetPath"

               };

 

            //view...

            CamlQuery caml = CamlQuery.CreateAllItemsQuery(100, viewFields);

 

            XElement view = XElement.Parse(caml.ViewXml);

 

            //query...

            XElement routingEnabled = new XElement("Eq",

                new XElement("FieldRef", new XAttribute("Name", "RoutingEnabled")),

                new XElement("Value", new XAttribute("Type", "YesNo"), "1"));

 

            XElement query = new XElement("Query", new XElement("Where", routingEnabled));

 

            //Add query element to view element

            view.FirstNode.AddBeforeSelf(query);

 

            caml.ViewXml = view.ToString();

 

            return caml;

        }

Evaluate the Rules and Conditions

After the user gets the content organizer rules, it is important to match the release content type ID against the rules. Should there be a match, the user must release to the drop off library instead of the content type’s library. If no rules match the upload’s content type, then the content can be sent to the content type’s library.

Code Sample – Evaluate Rules and Send Content

This method demonstrates how the user can parse the search results from the Rules List. The “RoutingContentTypeInternal” field needs to be split, in order to determine the rule’s content type and content type ID. If a rule matches, then the user can determine where to correctly send the content.

        private static void EvaluateRules(ListItemCollection items)

        {

            string yourContentTypeId = "0x01010B"; //replace with your upload content type ID.

 

            ListItem rule = null;

 

            foreach (ListItem item in items)

            {

                string contentType = null;

 

                string contentTypeId = null;

 

                if (item.FieldValues.ContainsKey("RoutingContentTypeInternal"))

                {

                    object value = item.FieldValues["RoutingContentTypeInternal"] ?? string.Empty;

 

                    string[] values = value.ToString().Split(new char[] { '|' }, StringSplitOptions.None);

 

                    if (values.Length == 2)

                    {

                        contentType = values[1];

 

                        contentTypeId = values[0];

                    }

                }

 

                if (yourContentTypeId == contentTypeId)

                {

                    rule = item;

 

                    break;

                }

            }

 

            if (rule != null)

            {

                //send to drop off library...

            }

            else

            {

                //send to content type library...

            }

        }
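The three pieces can be combined roughly as follows. This sketch assumes the context and the routingRules list from the earlier samples, and that GetContentOrganizerRulesCaml is accessible from the calling code:

```csharp
        if (RequiresContentOrganizer(context.Web))
        {
            //The Web redirects to the drop off library, so check the rules.
            CamlQuery caml = GetContentOrganizerRulesCaml();

            ListItemCollection items = routingRules.GetItems(caml);

            context.Load(items);

            context.ExecuteQuery();

            EvaluateRules(items);
        }
        else
        {
            //No redirect is enforced; upload directly to the target library...
        }
```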

Summary

This post explained the difficulty of using the client object model to release content to a Web with the content organizer enabled (redirecting users to drop off library) and determining which libraries have been impacted by the content organizer rules. This post explained several code snippets from the attached sample class file, so the user can better understand how to implement and use the object model.

Scosby Content Organizer.zip (1.73 kb)

Tags:

IT | Programming

Shawn Cosby Awarded Microsoft Community Contributor for 2011

by Scosby Wednesday, February 16, 2011

Today I got a surprising email from Microsoft. I was honored to find out my contributions to the Microsoft online technical communities at MSDN have been recognized with the Microsoft Community Contributor Award for 2011. You can view my MSDN Profile here.

 

Tags:

Technology | Programming

Custom Search Results in KnowledgeLake Imaging for SharePoint - Part 3

by Scosby Friday, December 10, 2010

Introduction

This post uses Imaging for SharePoint version 4.1 and requires the SDK. Contact KnowledgeLake to learn more about Imaging for SharePoint or the SDK. Contact KnowledgeLake Support to obtain the latest SDK if your company already has a license.

This post will demonstrate how to create a Silverlight Search Results control in a SharePoint Solution. This post will use the DataGrid control available in Microsoft’s Silverlight Control Toolkit. When doing any development work, one should always test in a staging/testing environment before going live to a production server. Class files are available for download at the end of the post.

The User Control

The project will use the Silverlight Toolkit’s DataGrid for displaying our Search Results to the end user. Add a new user control to the KLSearchExtensions project and name it CustomQueryResults. This name must be used exactly as shown; otherwise KnowledgeLake Imaging for SharePoint will not recognize the user’s custom control.

XAML

<UserControl xmlns:sdk="http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk"

             x:Class="KLSearchExtensions.CustomQueryResults"

             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"

             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"

             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"

             xmlns:viewModel="clr-namespace:KLSearchExtensions.ViewModel"

             xmlns:controlsToolkit="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Toolkit"

             mc:Ignorable="d"

             d:DesignHeight="480"

             d:DesignWidth="640">

    <UserControl.DataContext>

        <viewModel:CustomQueryResultsViewModel x:Name="Model"

                                               PropertyChanged="Model_PropertyChanged" />

    </UserControl.DataContext>

    <Grid x:Name="LayoutRoot">

        <controlsToolkit:BusyIndicator IsBusy="{Binding IsBusy}">

            <sdk:DataGrid x:Name="dataGrid"

                          AutoGenerateColumns="False"

                          HorizontalAlignment="Stretch"

                          VerticalAlignment="Stretch"

                          Margin="4" />

        </controlsToolkit:BusyIndicator>

    </Grid>

</UserControl>

 

DataContext

The User Control’s data context is an instance of our ViewModel. In XAML, the User Control should look similar to the following, where the XML namespace viewModel points to your ViewModel’s project namespace (in this case it is KLSearchExtensions.ViewModel):

LayoutRoot

The User Control’s root element is a Grid. Add the Silverlight Toolkit’s BusyIndicator to the Grid. The BusyIndicator control hosts the actual content of our control and allows us to display an informational message during loading or searching. Add the Silverlight Toolkit’s DataGrid to the BusyIndicator control. Be sure to set the AutoGenerateColumns property to false; the Search Result columns will be built dynamically and bound to the grid.

Code Behind

A Strongly Typed ViewModel

The user will benefit from creating a convenience property to get an instance of the ViewModel from the User Control. Create a new private property named TypedModel. This property will encapsulate a strongly typed instance of the ViewModel for our User Control. In other words, the property returns the User Control’s DataContext property as an instance of our ViewModel. The code should look like the following:

 

Strongly Typed ViewModel Code Sample

        private CustomQueryResultsViewModel TypedModel

        {

            get

            {

                return this.DataContext as CustomQueryResultsViewModel;

            }

        }

IQueryResults Interface

Implement the IQueryResults interface from the namespace KnowledgeLake.Silverlight.Search.Contracts. After implementing the interface, the Error event will be defined. This event is raised by KnowledgeLake Imaging for SharePoint when an error occurs during the loading of the Search Result extension. The user should not raise this event, but the user can handle it. It is important to note that, since this is a Silverlight application, the user will have to log to IsolatedStorage on the client’s machine or call a web service to record the error. However, handling this error is beyond the scope of this post.

 

The IQueryResults interface members OpenExecuteSearch (method) and Query (property) will be implemented as wrappers around the User Control’s ViewModel members. The ExecuteQueryStringSearch method is beyond the scope of this post and will not be implemented. The interface members’ implementations should look similar to the following:

 

IQueryResults Interface Code Sample

        public event EventHandler<KnowledgeLake.Silverlight.EventArgs<Exception>> Error;

 

        public void ExecuteQueryStringSearch()

        {

            throw new NotImplementedException();

        }

 

        public void OpenExecuteSearch(string keywords)

        {

            this.TypedModel.OpenExecuteSearch(keywords);

        }

 

        public string Query

        {

            get { return this.TypedModel.Query; }

            set { this.TypedModel.Query = value; }

        }

ViewModel PropertyChanged Event

The PropertyChanged event is wired up in the User Control’s XAML to an event handler called Model_PropertyChanged. This handler must respond to the Query, SearchResultSource, and Message properties. The method should look like the following:

ViewModel Property Changed Code Sample

        private void Model_PropertyChanged(object sender, PropertyChangedEventArgs e)

        {

            var model = sender as CustomQueryResultsViewModel;

 

            switch (e.PropertyName.ToUpperInvariant())

            {

                case "QUERY":

                    model.ExecuteQuery();

                    break;

 

                case "SEARCHRESULTSOURCE":

                    this.RenderResults(model.SearchResultSource);

                    break;

 

                case "MESSAGE":

                    this.DisplayMessage(model.Message);

                    break;

            }

        }

Displaying Search Results

Create a method called RenderResults that takes a parameter of type SearchResultSource. This method renders the search results into the DataGrid. If the user has added columns to or removed columns from the query since it was last executed, the UpdateColumns property will be true. In this scenario, it is necessary to clear the grid of all columns and rebuild the grid.

 

Each item in the SearchResultSource.Results collection exposes a property for every entry in SearchResultSource.ViewColumns. The Results collection is built dynamically to contain the corresponding ViewColumn’s value when bound to the DataGrid.ItemsSource property. In other words, these are the rows of the grid with the column values included.

 

Displaying Search Results Code Sample

        private void RenderResults(SearchResultSource result)

        {

            if (result.ViewColumns != null)

            {

                if (result.UpdateColumns) //user changed result columns on the query builder

                {

                    this.dataGrid.Columns.Clear();

 

                    foreach (ViewColumn column in result.ViewColumns)

                    {

                        var item = new DataGridTextColumn();

                        item.IsReadOnly = true;

                        item.Header = column.FriendlyName;

                        item.Binding = new Binding(column.InternalName);

 

                        this.dataGrid.Columns.Add(item);

                    }

                }

 

                this.dataGrid.ItemsSource = result.Results;

            }

        }

 

If the end user added or removed columns in the query, the method clears the grid’s columns and rebuilds them from the SearchResultSource.ViewColumns collection. While iterating the ViewColumns collection, be sure to create a new Binding for each column using the ViewColumn.InternalName property. This binding enables the grid to display the appropriate column value for each item in the SearchResultSource.Results collection. This is powerful functionality provided by KnowledgeLake Imaging for SharePoint. Essentially, the developer does not need to worry about which columns the end user chose to include in the query or how to bind those columns to the Search Results.

 

 

 

 

Download Files: CustomQueryResults.zip

Blog Posts In This Series

Finished Custom Control

Tags:

IT | Programming