
Writing a WS-Federation based STS using WIF

Even though SAML and WS-* have started to be looked upon as the old guard of security protocols with the popularity of OAuth 2, they are not without their merits. For one, they are inherently more secure than OAuth (in fact, you need to rely on a separate underlying secure transport for OAuth to be considered secure- and if you are someone who believes SSL is broken, then OAuth is practically insecure). It just so happens that their demerits are very visible to someone trying to implement or integrate them. OAuth, by contrast, is much simpler and easier to implement- and for most purposes, secure enough. If you have settled on WS-Federation as your protocol of choice, Windows Identity Foundation (WIF) is most likely going to be your de-facto choice. While powerful, WIF as a library is not what one would call “easy to use”. If it’s cumbersome when you use it as a relying party, the complexity is ten-fold if you try to build a security token service (STS) based on it.

I decided to take on the challenge and embarked on building a central login system (i.e. passive STS) that I could use for all of my applications. I mostly don’t like to maintain my own user database, so the central login system would then provide a hub for the user to login using any of the various identity providers out there such as Google or Facebook. The main advantage would be that the user would need to login once and be able to use all my applications. The initial plan was to make it protocol agnostic – i.e. something that will look at the incoming request, figure out what kind of request it is, and then delegate to an appropriate protocol handler. This way, the application would be able to support WS-Federation, SAML 2, OAuth 2, what-have-you, as needed. However, I saw what a great job IdentityServer has done in terms of providing a library you can easily build an OAuth based STS with – so that made me not want to pursue the OAuth path at all for this instance. My plan is to someday just build the next version of this as an OAuth STS using IdentityServer3.

With that said, if you look at the source code (available here), you will see that there is a protocol-agnostic pattern with a WS-Federation implementation. The application uses Windows Identity Foundation (WIF) as its core component. It acts as a passive STS (Security Token Service) while dividing the role of IP (Identity Provider) between the target application (or “Relying Party”) and one or more third-party providers such as Google or Facebook. The third-party providers are used for authentication, but storing whatever user information each application needs is the responsibility of that application (thus my statement that the identity provider role is divided between the Relying Party and the third-party Identity Providers). The entry point of the WS-Federation communication logic is in the WsFedLoginRequestProcessor class, specifically the ProcessSignIn method.

Each Relying Party is registered with the application through configuration and needs to have three settings populated: the realm URI (a unique identifier for the party – an example being urn:ApplicationName), the reply-to URL (the URL that this application will redirect to once the user is authenticated – usually the main entry point URL for the Relying Party application) and the “login service URL”. The Relying Party needs to implement a WCF service with a predefined contract (defined in ILoginService – available in the Commons Library). The service is responsible for providing the STS with information about the application as well as about any given user, and for exposing a way to create new users. The Relying Party application then needs to be configured for WIF, with the STS designated as the token issuer. There are methods available in the Commons Library that facilitate this. Communication between the Relying Party and the STS is encrypted and signed using a shared X.509 certificate.
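The post doesn’t reproduce the contract here, but as a rough sketch of what such an ILoginService might look like (the method names, signatures, and the UserInfo type below are hypothetical illustrations, not the actual Commons Library contract):

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical sketch of the ILoginService contract described above.
// The real contract lives in the Commons Library; all names here are illustrative.
[ServiceContract]
public interface ILoginService
{
    // Application-specific information that the STS shows on its login page.
    [OperationContract]
    string GetApplicationInfo();

    // Looks up a user by the identifier the third-party IP handed back;
    // returns null if no matching user exists in the Relying Party's store.
    [OperationContract]
    UserInfo GetUser(string providerUserId);

    // Creates a new user record in the Relying Party application.
    [OperationContract]
    UserInfo CreateUser(string providerUserId, string displayName);
}

[DataContract]
public class UserInfo
{
    [DataMember] public string Id { get; set; }
    [DataMember] public string DisplayName { get; set; }
}
```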

When you navigate to a protected endpoint in the Relying Party, and are not authenticated, you are redirected to the login page hosted by the STS. This redirection request is generated by WIF and follows standard WS-Federation protocol. The STS then uses the realm information passed in the request to look up information about the party. It gets more information from the Relying Party using the ILoginService WCF endpoint. It uses this to display application-specific information in the login page. From the login page, you can use Google (using its OAuth API) or Facebook (using its JavaScript API) to authenticate yourself. The STS then communicates with the Relying Party using the ILoginService WCF endpoint to see if a user matching the credentials just provided by Google or Facebook exists. If so, it uses this information to instruct WIF to write an encrypted session token cookie, and redirects back to the Relying Party reply-to URL – where it is now authenticated thanks to the encrypted session token cookie.

If a user is not found, the STS prompts you to enter a display name for the user that is going to be newly created. Once you provide the information, the ILoginService WCF endpoint is again used to create a new user record in the Relying Party application, and the rest of the process is the same as above. When you logout from the Relying Party application, WIF sends a logout WS-Federation message to the STS, which takes care of processing the logout operation.


Git- Rewriter of History

Undoubtedly one of the biggest advantages that Git provides is the ability to use rebasing to maintain a clean commit history. I find that I am using it a lot these days – primarily in three modes:

  • As part of pull (i.e. git pull --rebase)
  • Interactive rebase to: 1) keep my own history clean when I am off working on a branch by myself, and 2) clean up a feature branch’s commit history before merging it into the mainstream
  • Rebase my branch against a more mainstream branch before I merge onto it (i.e. git rebase mainstream-branch)

With interactive rebase, usually what I do is this: I will have one initial commit that describes in general the feature I am working on. It will then be followed by a whole bunch of commits that are advancements of or adjustments to that – quick and dirty ones with “WIP” (i.e. work in progress) as the message. If, in the middle of this, I switch to some other significant area, then I will add another commit with a more verbose message, and then again it’s “WIP”, “WIP”, and so on. I will add anything I need to qualify the “WIP” with if necessary (e.g. if the “WIP” is for a different context than the last few WIPs, or if the WIP does indeed add some more information to the initial commit). In any case, after some time, I will end up with a history that looks a bit like this (in chronological order):

hash0 Last "proper" commit.
hash1 Started implementing feature 1. Blaah blaah.
hash2 WIP
hash3 WIP
hash4 WIP
hash5 Started implementing feature 2. Blaah blaah.
hash6 WIP
hash7 WIP
hash8 WIP (feature 1)
hash9 WIP (feature 1)
hash10 WIP (feature 2)

At this point, I will realize that things are getting a bit unwieldy. So I do an interactive rebase, i.e. git rebase -i hash0, which gives me this:

p hash1 Started implementing feature 1. Blaah blaah.
p hash2 WIP
p hash3 WIP
p hash4 WIP
p hash5 Started implementing feature 2. Blaah blaah.
p hash6 WIP
p hash7 WIP
p hash8 WIP (feature 1)
p hash9 WIP (feature 1)
p hash10 WIP (feature 2)

The first thing I will do is reorder the commits so that they are not interleaving back and forth between what they logically represent (i.e. features 1 and 2 in this case). This, of course, assumes that there is no overlap in terms of code units touched by features 1 and 2.

p hash1 Started implementing feature 1. Blaah blaah.
p hash2 WIP
p hash3 WIP
p hash4 WIP
p hash8 WIP (feature 1)
p hash9 WIP (feature 1)
p hash5 Started implementing feature 2. Blaah blaah.
p hash6 WIP
p hash7 WIP
p hash10 WIP (feature 2)

Next, I mark the main commits as “r” for reword if I need to improve the commit message, or as “e” for edit if I also need to, for some reason, change the commit date (I will usually do this using git commit --amend --date=now so that the history looks nice and chronological). The “WIP” commits I mark as “f” for fixup – which is a version of squash that skips the step that lets you combine the commit messages, since “WIP” does not have anything worth combining in terms of the commit message.

e hash1 Started implementing feature 1. Blaah blaah.
f hash2 WIP
f hash3 WIP
f hash4 WIP
f hash8 WIP (feature 1)
f hash9 WIP (feature 1)
e hash5 Started implementing feature 2. Blaah blaah.
f hash6 WIP
f hash7 WIP
f hash10 WIP (feature 2)

When all is said and done and the rebase is complete, I have a nice clean history:

hash0 Last "proper" commit.
hash11 Implemented feature 1.
hash12 Implemented feature 2.

I love Git.

Beware of this WCF Serialization Pitfall

Ideally, one should avoid data contracts with complex graphs- especially with repeated references and definitely ones with circular references. Those can make your payload explode on serialization. With repeated references, you may run into an integrity issue on deserialization. With circular references, the serialization will enter a recursive loop and you will probably run into a stack overflow.

Seeing that in certain situations, this becomes unavoidable, WCF has a way that you can tell it to preserve object references during serialization. You do this by setting IsReference to true on the DataContract attribute that you use to decorate the composite type that is your data contract.

So, for example:

[DataContract(IsReference = true)]
public class MyContract
{
    [DataMember]
    public string Member1 { get; set; }
}

This solves the problem. However, since WCF achieves this by augmenting the WSDL, beware when you are exposing your service to third parties (especially ones that are not using WCF, or perhaps not .NET at all) and interoperability is a concern. Without IsReference set to true, the WSDL snippet for the above would look something like:

<xs:complexType name="MyContract">
  <xs:sequence>
    <xs:element minOccurs="0" name="Member1" nillable="true" type="xs:string"/>
  </xs:sequence>
</xs:complexType>

With IsReference set to true, this is what it looks like:

<xs:complexType name="MyContract">
  <xs:sequence>
    <xs:element minOccurs="0" name="Member1" nillable="true" type="xs:string"/>
  </xs:sequence>
  <xs:attribute ref="ser:Id"/>
  <xs:attribute ref="ser:Ref"/>
</xs:complexType>

See those two lines that got added (i.e. “Id” and “Ref”)? That could very well cause some other party’s WSDL parser/proxy generator to choke. You have been warned.
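As a footnote, here is a quick round-trip sketch showing what IsReference buys you on the serialization side (the Node type below is made up for illustration; only the IsReference flag is the point):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

// Illustrative contract with a circular parent/child reference.
[DataContract(IsReference = true)]
public class Node
{
    [DataMember] public string Name { get; set; }
    [DataMember] public Node Parent { get; set; }
    [DataMember] public Node Child { get; set; }
}

public static class IsReferenceDemo
{
    public static void Main()
    {
        // Build a circular graph: parent <-> child.
        var parent = new Node { Name = "parent" };
        parent.Child = new Node { Name = "child", Parent = parent };

        var serializer = new DataContractSerializer(typeof(Node));
        using (var stream = new MemoryStream())
        {
            // Without IsReference = true, this call would fail on the cycle.
            serializer.WriteObject(stream, parent);
            stream.Position = 0;
            var copy = (Node) serializer.ReadObject(stream);

            // Object identity survives the round trip.
            Console.WriteLine(ReferenceEquals(copy, copy.Child.Parent)); // True
        }
    }
}
```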

Writing your own LINQ provider, part 4

This is the last in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics
  3. A simple, pointless solution
  4. A tiny ORM of our own (this post)

A tiny ORM of our own

In the previous post, we took a look at a simple, albeit pointless example of a LINQ provider. We wrap the series up this time by looking at something a little less pointless – a LINQ-based ORM, albeit a very rudimentary one. As with the previous one, it helps to take a look at the source code first.

This is a very simple example implementation and has its limitations. It only works with SQL Server. It only supports reads. It only supports these methods:

  • Any
  • Count
  • First
  • FirstOrDefault
  • Select
  • Single
  • SingleOrDefault
  • Where
  • OrderBy
  • OrderByDescending
  • ThenBy
  • ThenByDescending

There are a few more limitations, but again the point is not to redo NHibernate or Entity Framework. There is also a simple fluent mapping interface that you can use like so:

    .Member(x => x.Id, "id")
    .Member(x => x.Name, "thing_name")
    .Member(x => x.Date, "thing_date");

Once you’ve got your mappings in place, it is up to you to create the DB connection. With that done, you can create an IQueryable<T> out of a SqlConnection instance and then do LINQ on top of it.

using (var conn = new SqlConnection("..."))
{
    conn.Profile(Console.WriteLine); // Write generated query to console.

    var query = conn.Query<MyThing>();

    var things = query
        .Where(x => x.Id < 1000)
        .OrderBy(x => x.Name)
        .Select(x => new {x.Id, x.Name, x.Date})
        .ToArray();
}

Or, using the “other” syntax:

var things = (from item in query
                where item.Id < 1000
                orderby item.Name
                select new { item.Id, item.Name, item.Date }).ToArray();

If you recall Step 2 from provider basics, there were two options. The last solution used Option 1, i.e. there is one queryable that just builds up the expression and the enumerator does the parsing. For this one, we’re using Option 2, where we have a separate implementation of IQueryable<T> for each type of query operation to support.

When you first obtain a queryable, you get an instance of TableQueryable (which is descended from SqlQueryable) corresponding to the table that the type is mapped to. Each call on top of it then wraps the queryable in another descendant of SqlQueryable (e.g. WhereQueryable for Where operations, and so on). This logic is in SqlQueryProvider. Similarly, for executable methods, the appropriate type of ExecutableBase is created and called. Beyond this, the actual query creation logic is implemented in the individual queryables and executables defined within the Constructs namespace.

The queryables and executables work with classes within the QueryModel namespace that represent parts of a SQL query. Based on what operation they stand for, they convert an Expression into a Query, which can then be converted into a SQL string. Each queryable implements a Parse method that does this, and as part of doing this, it parses the queryable it wraps and then works on top of the Query object returned by the inner Parse, and so on until the top-most queryable gives you the final query. The innermost or leaf queryable is always TableQueryable, which simply adds the name of the source table to the query model.
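As a toy illustration of that inner-to-outer Parse chain (these classes are heavily condensed stand-ins for the real ones in the Constructs and QueryModel namespaces; the SQL building here is deliberately naive, and the real code translates an Expression rather than taking a raw criterion string):

```csharp
using System;
using System.Collections.Generic;

// Condensed stand-in for the QueryModel classes.
public class Query
{
    public string Table;
    public List<string> Criteria = new List<string>();

    public override string ToString() =>
        "SELECT * FROM " + Table +
        (Criteria.Count > 0 ? " WHERE " + string.Join(" AND ", Criteria) : "");
}

// Each queryable parses its inner queryable, then adds its own contribution.
public abstract class SqlQueryable
{
    public abstract Query Parse();
}

// Leaf: just names the source table.
public class TableQueryable : SqlQueryable
{
    private readonly string _table;
    public TableQueryable(string table) { _table = table; }
    public override Query Parse() => new Query { Table = _table };
}

// Wrapper: contributes a criterion on top of the inner result.
public class WhereQueryable : SqlQueryable
{
    private readonly SqlQueryable _inner;
    private readonly string _criterion;
    public WhereQueryable(SqlQueryable inner, string criterion)
    {
        _inner = inner;
        _criterion = criterion;
    }
    public override Query Parse()
    {
        var query = _inner.Parse();   // parse the wrapped queryable first
        query.Criteria.Add(_criterion);
        return query;
    }
}
```

Parsing new WhereQueryable(new TableQueryable("thing"), "id < 1000") then yields SELECT * FROM thing WHERE id < 1000.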

LINQ is undoubtedly awesome, but knowing how it works gives you new appreciation for just how powerful it can be. Man, I love LINQ.

Writing your own LINQ provider, part 3

This is the third in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics
  3. A simple, pointless solution (this post)
  4. A tiny ORM of our own

A simple, pointless solution

In the previous post, we took a look at what happens when you call LINQ methods on IQueryable<T>, and how you can use that to build your own provider. We take that a step further this time by building an actual provider – albeit a somewhat pointless one, in that it adds LINQ support to something that doesn’t really need it. The point, though, is to keep it simple and try to understand how the process works.

The best way to understand is to take a look at the source code first.

Now, a quick summary of what this is.

We have an interface, INextProvider. It has one method, GetNext, that is supposed to get the next item in a sequence. An example implementation that uses a simple array as the underlying store is also included. Once you have an instance of INextProvider<T>, say, called nextProvider, you can then extract an IQueryable<T> out of it with this call:

var query = nextProvider.AsQueryable();

You can then use standard LINQ on top of it. Now, I know what you’re thinking: this INextProvider seems uncomfortably similar to IEnumerator – why would we need a query provider for this? We don’t, hence the “pointless” part, but again – the idea is to examine how building a provider works.
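For reference, a minimal sketch of what the interface and its array-backed implementation might look like (condensed; the exact signatures in the source may differ – the out-parameter shape here is an assumption):

```csharp
// Sketch of the INextProvider idea described above.
public interface INextProvider<T>
{
    // Returns false when the sequence is exhausted.
    bool GetNext(out T item);
}

// Example implementation backed by a simple array.
public class ArrayNextProvider<T> : INextProvider<T>
{
    private readonly T[] _items;
    private int _index;

    public ArrayNextProvider(params T[] items) { _items = items; }

    public bool GetNext(out T item)
    {
        if (_index >= _items.Length)
        {
            item = default(T);
            return false;
        }
        item = _items[_index++];
        return true;
    }
}
```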

The entry point is NextProviderQueryable, which implements IQueryable<T>, uses NextProviderQueryProvider as its Provider and returns a NextProviderEnumerator from its GetEnumerator() call. This means that whenever one of the LINQ methods is called on an instance of NextProviderQueryable, one of the following happens:

  • If the method is something that creates another queryable out of the existing one (e.g. Where, Select, SelectMany, Cast, etc.), NextProviderQueryProvider.CreateQuery() is called. That, in turn, creates a new instance of NextProviderQueryable, but with the Expression set to what has been passed in. Thus, every call to CreateQuery ends up creating a new queryable with the Expression property representing the complete call.
  • If the method is something that enumerates a queryable (e.g. ToList, ToArray, etc. or a foreach loop), the GetEnumerator() method is called and enumeration starts. This means that NextProviderEnumerator takes over. This object is initialized with the current value of Expression as of the time of enumeration, thus it has complete information to parse it, figure out what needs to be done, and then do it using the INextProvider that it is assigned. The class ExpressionParser is used to convert the expression into a series of “nodes” that act on each item in the underlying INextProvider and do the appropriate thing based on what it is (e.g. if it’s a WhereNode, it will have a predicate that it will run on each item).
  • If the method is something that returns a scalar (e.g. Any, All, First, etc.), NextProviderQueryProvider.Execute is called. In our case, we simply pass control to NextProviderEnumerator to enumerate as mentioned in the previous point, and then perform the appropriate action. We do this by getting an IEnumerable<T> that uses NextProviderEnumerator as its enumerator (that is the NextProviderEnumerable class), and then calling the appropriate IEnumerable version of the IQueryable method that has been called. All of this is handled by the ExpressionExecutor class.

As of now, only the following methods are supported: All, Any, Cast, Count, Distinct, ElementAt, ElementAtOrDefault, First, FirstOrDefault, Last, LastOrDefault, LongCount, OfType, Select, SelectMany, Single, SingleOrDefault, Skip, Take and Where. If you try to use any other methods, you will get an exception. Even within these methods, if you try to use a variant that is not supported, you will get an exception.

Next time, we’ll try our hands at a more real world implementation, i.e. a tiny, tiny ORM.

Writing your own LINQ provider, part 2

This is the second in a short series of posts on writing your own LINQ provider. A quick outline of the series:

  1. A primer
  2. Provider basics (this post)
  3. A simple, pointless solution
  4. A tiny ORM of our own

Provider Basics

In the previous post, we took a look at the two flavors of LINQ methods, i.e. the methods and classes around IEnumerable<T> and the methods and classes around IQueryable<T>. In this post, we expand upon what happens when you call LINQ methods on IQueryable<T>, and how you can use that to build your own provider.

Once you have an instance of IQueryable<T>, you can do one of three things with it:

  1. Enumerate it, using one of the following methods:
    1. Call a method such as ToList, ToArray or ToDictionary on it.
    2. Use it in a foreach loop.
    3. Call GetEnumerator() and then use the enumerator you get in the usual way.
  2. Call a LINQ method that returns a scalar result (this also results in the queryable getting enumerated) such as Any, First, All, Single, etc.
  3. Call a LINQ method (such as Where, Select, OrderBy, etc.) that returns another IQueryable with some rules added that you can again do one of these very three things with.

For the first situation, IQueryable behaves just like any IEnumerable in that the GetEnumerator() method is called – so this is where you implement what you want to happen when the final enumeration happens. Usually, you do this by writing your own implementation of IEnumerator<T> for this purpose that you return from the GetEnumerator() method.

For the remaining two situations, the Provider and Expression properties of IQueryable<T> come into play. When you implement IQueryable<T>, you need to implement the getter for the Provider property to return an implementation of IQueryProvider<T> that does what you want.

In both cases, here is what the LINQ methods do:

  1. Create a lambda expression that represents the LINQ method call.
  2. Get a reference to the Provider for the target IQueryable<T>.
  3. For the first case, call Execute on the IQueryProvider<T> from step 2. For the second case, call CreateQuery on the IQueryProvider from step 2.
  4. The only thing that is different across different LINQ methods is the type parameters that are passed in, e.g. Any<T> will call Execute<bool> while First will call Execute<T>. Similarly, Where<T> will call CreateQuery<T> while Select<TSource, TResult> will call CreateQuery<TResult>.

To drive the point home, here is the simplified source code for Where<T>:

public static IQueryable<T> Where<T>(
    this IQueryable<T> source,
    Expression<Func<T, bool>> predicate)
{
    var currentMethodOpen = (MethodInfo) MethodBase.GetCurrentMethod();
    var currentMethod = currentMethodOpen.MakeGenericMethod(new[] {typeof (T)});
    var callArguments = new[] { source.Expression, Expression.Quote(predicate) };
    var callExpression = Expression.Call(null, currentMethod, callArguments);

    return source.Provider.CreateQuery<T>(callExpression);
}

And here is the simplified source code for Any<T>:

public static bool Any<T>(this IQueryable<T> source, Expression<Func<T, bool>> predicate)
{
    var currentMethodOpen = (MethodInfo) MethodBase.GetCurrentMethod();
    var currentMethod = currentMethodOpen.MakeGenericMethod(new[] {typeof (T)});

    return source.Provider.Execute<bool>(Expression.Call(
        null, currentMethod, new[] {source.Expression, Expression.Quote(predicate)}));
}

Note how neither method body does anything specific to what a “Where” or “Any” operation should do. It just wraps that information in an expression and calls the appropriate method on the Provider. It is up to the provider to understand the expression (which is passed in as a parameter to both CreateQuery and Execute) and perform the correct operation. This is why when you build a LINQ provider, it is up to you to write the translation logic for each LINQ operation as it relates to your data source, or write a fallback that says “this operation is not supported.”
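You can observe this expression-capturing behavior without writing a provider at all, using the built-in EnumerableQuery wrapper that AsQueryable returns – the Where call below executes nothing; it only grows the expression tree:

```csharp
using System;
using System.Linq;

public static class ExpressionCaptureDemo
{
    public static void Main()
    {
        IQueryable<int> source = new[] { 1, 2, 3 }.AsQueryable();

        // No filtering happens here; the provider just records the call.
        IQueryable<int> query = source.Where(x => x > 1);

        // The whole call chain is sitting in the Expression property,
        // e.g. something like: System.Int32[].Where(x => (x > 1))
        Console.WriteLine(query.Expression);

        // Enumeration is what finally executes it.
        Console.WriteLine(query.Count()); // 2
    }
}
```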

Creating a new LINQ provider, then, can be boiled down to the following steps:

Step 1

Create a class that implements IQueryable<T> (say, YourQueryable<T>).

  1. There should be a way to construct this class and pass in some sort of an interface to the underlying data source to use (e.g. in NHibernate, session.Query<T> on the ISession object does this).
  2. The call to GetEnumerator() should return your implementation of IEnumerator<T> (say, YourEnumerator<T>). It should be initialized with the value of YourQueryable.Expression at the time of the call.
  3. The getter for the Provider property should return an instance of IQueryProvider<T> (say, YourQueryProvider<T>). The provider should have access to the underlying data source interface.

Step 2: Option 1

The logic to parse the final expression can go in YourEnumerator<T>. In this case, YourQueryProvider.CreateQuery simply returns a new instance of YourQueryable<T> but with the Expression set to what is passed in to CreateQuery. The very first instance of YourQueryable<T> would then set the Expression to Expression.Constant(this). This way, when the time comes to enumerate and you get to YourEnumerator<T>, you have an expression that represents the complete call chain. That is where you then put the hard part of parsing it so that the first call to MoveNext does the right thing against the underlying data source.

Step 2: Option 2

Another option is to have a dumb YourEnumerator<T> and instead have a separate implementation of IQueryable<T> for each type of query operation to support (e.g. WhereQueryable, SelectQueryable, etc.) In this case, the parsing logic is spread out across these classes, and YourQueryProvider.CreateQuery needs to examine the expression then and there and return the correct type of IQueryable<T> with all the necessary information wrapped within. In any case, though, the expression as a whole must be parsed before enumeration happens.

Step 3

YourQueryProvider.Execute then needs to have logic that parses the expression passed in, figures out what needs to be done and returns the result. This may involve enumerating the underlying IQueryable<T>. Going back to an ORM that is based on SQL Server, say, you would need to know to generate a WHERE EXISTS clause if you spot an Any in the expression.

Now, granted, all of this sounds pretty convoluted and can be hard to get a grip on without an example. So, we will do just that in the next post. We will start with a simple but pointless solution that does LINQ just for the sake of LINQ. Then, we’ll try to build a rudimentary ORM of our own.

Writing your own LINQ provider, part 1

This is the first in a short series of posts on writing your own LINQ provider. While LINQ is the best thing that ever happened to .NET, and using it is so much fun and makes life so much easier, writing your own LINQ provider is “complicated” to say the least (for context: the LINQ interfaces to NHibernate, RavenDB or Lucene are all providers).

A quick outline of the series:

  1. A primer (this post)
  2. Provider basics
  3. A simple, pointless solution
  4. A tiny ORM of our own

A Primer

If you’ve used LINQ, you know there are two distinct syntaxes:

The “query” style:

from item in items
where item.Id == 2
select item.Name

And the “method chaining” style:

items.Where(item => item.Id == 2).Select(item => item.Name);

Except for the style, they’re pretty much the same in that the former is really syntactic sugar that compiles down to the latter. Now, the latter, as we know, is a series of extension methods that become available when you import the namespace System.Linq. Within this, though, there are two flavors of LINQ that are very different in terms of their internals:

  • IEnumerable<T> and everything that supports it
  • IQueryable<T> and everything that supports it

This means that when you call the same LINQ methods on an IEnumerable<T> versus an IQueryable<T>, very different things happen.

IEnumerable<T> and everything that supports it

These are simpler to use in that all the work has already been done as part of the .NET Framework. You simply use it. If you want to extend this to a data source of your own, you simply build an enumerator for it (e.g. if you wanted to slap LINQ on top of flat files, you could build a FileEnumerable that uses a FileEnumerator that, in turn, deals with a FileStream).

The extension methods are defined in System.Linq.Enumerable and the way they work is: each method, when called, wraps the IEnumerable it’s called on within a new implementation of IEnumerable that has knowledge of what operation is to be performed. These implementations are all private within the Enumerable class (e.g. Where on an array yields a WhereArrayIterator). When the final enumeration happens, the pipeline executes and gives you the desired result. The scalar-returning methods such as Any and First in this case are simple calls to the enumerator or foreach on top of the underlying enumerable.

All methods in this category deal with Func delegates when it comes to predicates or mapping functions that are passed in.
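A quick way to observe this wrapping is to look at what Where actually returns – not a result, but an iterator object that hasn’t run yet:

```csharp
using System;
using System.Linq;

public static class EnumerableWrapDemo
{
    public static void Main()
    {
        var data = new[] { 1, 2, 3, 4 };

        // Nothing is filtered yet; Where returns a wrapping iterator.
        var query = data.Where(x => x % 2 == 0);
        Console.WriteLine(query.GetType().Name); // an internal iterator type, not int[]

        // The pipeline executes only when enumerated.
        Console.WriteLine(string.Join(",", query)); // 2,4
    }
}
```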

IQueryable<T> and everything that supports it

This is the focus of this series. You’ll notice that all methods in this category are defined within another class, System.Linq.Queryable, and deal not with Func delegates but with Expression<Func<>> expression trees when it comes to predicates or mapping functions that are passed in. You use this when you are working with a data source that has its own way of extracting data that either does not yield well to the IEnumerable way of doing things, or whose own way of extracting data is just better suited than simply enumerating away using IEnumerable. An example is relational databases, where rather than enumerating through each row in a table and applying predicates or mappings to it, you’re better off running SQL.
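The difference is easy to see side by side – the same lambda text produces very different objects:

```csharp
using System;
using System.Linq.Expressions;

public static class FuncVsExpressionDemo
{
    public static void Main()
    {
        // Compiled, opaque, executable code:
        Func<int, bool> compiled = x => x > 1;
        Console.WriteLine(compiled(5)); // True

        // A data structure describing the lambda, which a provider can inspect:
        Expression<Func<int, bool>> tree = x => x > 1;
        Console.WriteLine(tree.Body.NodeType); // GreaterThan
        Console.WriteLine(tree.Compile()(5));  // True
    }
}
```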

The core idea here is to boil the method calls down into a lambda expression tree, then when the time comes to enumerate, parse that expression tree into something the underlying data source understands (using the relational database example, the expression tree needs to be parsed into SQL- that is what ORMs with LINQ providers such as NHibernate or Entity Framework do).

More on this to follow in the remaining posts in this series.