Wednesday, June 21, 2006

Ian Launches WpfMsdn

Running with the msdnman ball, Ian has launched WpfMsdn, a GUI browser for the content coming out of the MTPS Content Service.



[Screenshot: WPF MSDN Reader]


Cool. :)

Monday, June 19, 2006

Mixing Forms and Windows Authentication in ASP.NET 2.0

Almost two years ago, I blogged about how to mix Forms and Windows authentication in an ASP.NET application. It was something I figured out for a client, and they've been using the basic idea ever since. It worked pretty well up until a month or two ago. Then it broke pretty hard. What changed? We upgraded to ASP.NET 2.0.

 

Well, I finally got a chance to sit down and chase the problem, and I figured out what looks like a solution. We have yet to integrate my idea into the product, but I wrote a quick prototype, and it looks like it's going to work, so I thought I would blog it here.

 

The kernel of the problem is that the ASP.NET 2.0 Forms Authentication Module appears to stomp on the "Response.StatusCode = 401" that you need to set in order to make mixed authentication work. I'm not sure exactly where this happens: I got lost in Reflector when I tried to chase it down. But you can easily observe the effect - your login page will redirect back onto itself, with the ReturnUrl query string parameter double-escaped. That is, the ReturnUrl in your address bar ends up pointing back to the login page itself, rather than to Default.aspx the way it should. Again, I haven't chased down exactly where this behavior is coming from, but it's easy enough to observe.
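
To give a concrete (if made-up) illustration, the address bar ends up looking roughly like this - the host and application names here are placeholders, not the real ones:

http://server/app/Login.aspx?ReturnUrl=%2fapp%2fLogin.aspx%3fReturnUrl%3d%252fapp%252fDefault.aspx

instead of the expected

http://server/app/Login.aspx?ReturnUrl=%2fapp%2fDefault.aspx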

 

My solution was simply to attach a handler to the Application's EndRequest event by putting the following in Global.asax:

 

protected void Application_EndRequest(object sender, EventArgs e)
{
    // The login page flags the request via Context.Items; by the time
    // EndRequest fires, Forms Authentication has finished with the response,
    // so the 401 we set here sticks.
    if (Context.Items["Send401"] != null)
    {
        Response.StatusCode = 401;
        Response.StatusDescription = "Unauthorized";
    }
}

 

Then, in order to trigger this code, all I have to do is put a

 

Context.Items["Send401"] = true;

 

somewhere in my login page when I determine that I need redirection to pick up on the user's Windows credentials. Because Application_EndRequest runs much later in the page lifecycle than the code in my login page, Forms Authentication can't get in the way and screw up my status change.
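
For completeness, here's a minimal sketch of the login-page side of this; ShouldUseWindowsAuth is a made-up placeholder for whatever check your application uses, not something from the original code:

protected void Page_Load(object sender, EventArgs e)
{
    // Placeholder for your own test - an intranet address range, a missing
    // cookie, whatever decides that the browser should be challenged for
    // Windows credentials instead of being shown the login form.
    if (ShouldUseWindowsAuth())
    {
        // Flag the request; the Application_EndRequest handler in Global.asax
        // converts this into a 401 after Forms Authentication has finished.
        Context.Items["Send401"] = true;
    }
}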

 

I thought I'd try to be really clever and attach my handler in the page itself by doing something like this:

 

Context.ApplicationInstance.EndRequest += LoginPage_EndRequest;

 

but it doesn't work. I'm not sure why (the page lifecycle is not my strong suit these days), and I'm out of time to chase it down.  If you happen to know what I'm doing wrong, drop a comment. In the meantime, the Global.asax thing works well enough for us.

Friday, June 16, 2006

msdnman Moves To CodePlex

Now that msdnman is public, I need to find a home for it that isn't two zipfiles in a directory somewhere. I said I was going to move it to SourceForge, but Brad Wilson suggested that I host it at CodePlex instead. So I did: msdnman lives here now.

 

CodePlex is part Microsoft's answer to SourceForge, part spiritual successor to GotDotNet. As you know, I was a pretty vocal detractor of GotDotNet, but that's exactly why I wanted to try out CodePlex - I wanted to see if Microsoft had gotten it right this time.

 

Well, I'd say they're on the right track. Having worked with CodePlex for all of two days now, I'm reasonably impressed. To start with, it has a real source control system - TFS. Aspects of the command-line support are slightly annoying (the need to constantly reauthenticate, specifically), but it's a totally modern tool, with support for lots of interesting and useful features (e.g. shelving). Other touches on the site - wiki editing for the project's home page, export of work items to Excel, linking checkins to work items, etc. - are really nice, and a step above what SourceForge provides.

 

My only major complaint is that there's no email support on the site right now. I believe email to be the lifeblood of open source projects. It's certainly how I do everything. Right now CodePlex uses RSS, which I think will serve well enough until email is available (it's on their list).

 

All in all, CodePlex is a pretty good effort for a v1 release. I expect it will improve rapidly, too, given that they have people like Brad and Jim Newkirk (of NUnit fame) on the team.

Monday, June 12, 2006

Announcing msdnman

When Kim, Tim, and I were working on MSDN2 - or more accurately, the Microsoft/TechNet Publishing System (MTPS), the system behind MSDN2 - way back when, we always had in mind that it was important to create a system where the content could be leveraged in ways other than to power the MSDN2 website. And we did - every time you hit F1 in Visual Studio 2005, you're accessing the MTPS system. Now that the MTPS Content Service is publicly available, we expect other applications of the data to start popping up.

 

To get the ball rolling, and because we always thought it would be cool, I've written the first one. I call it msdnman, and users of *nix systems should find it fairly familiar: it's written as an analog of the man command. The idea is to provide command-line access to the MTPS content. I'm a big command-line guy myself, so I really like the concept.

 

For now, I've posted the binaries and source here. I'm working on getting it shoved over to SourceForge, but there are a few administrative hurdles to clear first, so it might take me a few weeks.

 

Because I wrote this in my spare time over a couple of days, it's still a bit rough. But it's pretty serviceable for those times when you don't want to fire up a browser, and I think it makes a reasonable demonstration of how to use the web service, too.

 

It's pretty straightforward to use. Simply grab the binaries and run something like

 

msdnman System.Xml.XmlReader

 

in a console window, and you should see something like this:

 

[msdnman screenshot]

 

There are a bunch of options that you can specify, too. You can get a list of them by running

 

msdnman -?

 

but briefly, you'll generally either do this:

 

msdnman IDENTIFIER

 

or this

 

msdnman -k KEYWORD

 

where IDENTIFIER identifies some content item in the MTPS system. See here for more information, but usually this will be the name of the namespace, class, or method you're looking up, like System.Xml.XmlReader or System.Security.Cryptography.

 

The "-k" option is a nifty little add-on that I did (at the request of John Mollman - this is your blog, right?), which does a keyword search against the MSDN content. So you can do

 

msdnman -k web.config

 

to get back a list of links having something to do with web.config.

 

I implemented the search part by wiring up to the same web service that Visual Studio 2005 uses, as the MTPS web service does not currently support search.

 

So download and enjoy, and let me know if you have any ideas for how to improve it.

Announcing the MTPS Content Service

Over the last few months, you've seen me drop vague hints about the work I'm doing at MSDN. Well, today we launched it at a TechEd chalk talk (DEVTLC03), so I can finally talk about it: the Microsoft/TechNet Publishing System (MTPS) Content Services.

 

In brief, the MTPS Content Services are a set of web services for exposing the content in MTPS. MTPS is the application I helped write a few years back that stores and processes all the content at MSDN2. With the web service, you now have programmatic access to all that data via SOAP. So if you want to embed access to the documentation for System.Xml.XmlTextReader into your application, go for it. If you want to know what the child nodes of System.DateTime.ToString() are in the table of contents, you can go and find that, too. I expect to see some fairly interesting uses of the service pop up in the near future. There's such a huge amount of good information in MTPS that I imagine lots of people will want to leverage it.

 

The web service is reasonably well-documented here (of course, I wrote the web service and the documentation, so maybe I'm not the best person to judge the quality of the docs), but let me give a brief explanation of how it works.

 

The web service consists of two operations: GetContent and GetNavigationPaths. GetContent - as you might imagine - allows you to retrieve content (XHTML, GIFs, etc.) from MTPS. GetNavigationPaths lets you get the table of contents (TOC) data for the items in the system. I imagine most people will use GetContent far more often than GetNavigationPaths.

 

The system is organized around the concept of a content item. A content item is a collection of documents identified collectively by a content key. A document has a type, a format, and some content. The document most people will probably be interested in is the one of type primary, format Mtps.Xhtml, but there are other documents associated with a content item, too (images, for example, are stored as documents within the content item). See the docs for more detail.

 

A content key consists of three parts: a content identifier, a locale, and a version. The locale is something like en-us (US English) or de-de (German as they speak it in Germany). The version is something like SQL.90 (SQL Server 2005).
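
Putting those two paragraphs together, here's a rough mental-model sketch in C#. These are illustrative types I made up to mirror the description, not the actual classes you'd get from the service's WSDL:

// Illustrative types only - the real ones come from the MTPS Content Service WSDL.
class ContentKey
{
    public string ContentIdentifier; // a short ID, alias, GUID, URL, or asset ID (see the list below)
    public string Locale;            // e.g. "en-us"
    public string Version;           // e.g. "MSDN.10" or "SQL.90"
}

class Document
{
    public string Type;   // e.g. "primary"
    public string Format; // e.g. "Mtps.Xhtml"
    public string Body;   // the content itself - XHTML, image data, etc.
}

class ContentItem
{
    public ContentKey Key;       // identifies the item as a whole
    public Document[] Documents; // the documents that make up the item
}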

 

The content identifier is a bit more complicated. It can be one of five things:

 


  1. A short ID. This is an eight-character identifier like "ms123401".

  2. A content alias. This is a "friendly name" for the content item, like "System.Xml.XmlTextReader".

  3. A content GUID. Topics can also be identified by a GUID.

  4. A content URL. To allow for easy integration with the HTML front end of MTPS, URLs like http://msdn2.microsoft.com/en-us/library/b8a5e1s5(VS.80).aspx can also be used to identify a content item.

  5. An asset ID. This is how topics are identified internally by the system, and they occasionally appear in the output. They always begin with "AssetId:".

 

With the exception of asset IDs, these are all the same pieces that you can already use in the URLs for MSDN2, so the concepts should be familiar if you've spent any time looking at that stuff.

 

There are two slightly funky (but highly intentional) things about what GetContent returns that you'll need to keep in mind. The first is that, by default, the bodies of the documents that make up a content item are not returned. Unless you list a document in the requestedDocuments section of the request message, you'll just get the types and formats of the available documents. This is because documents can be quite large, and it would be a waste to transmit all of them every time.
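
To make the requestedDocuments part concrete, here's roughly what a GetContent call looks like in C#. I'm assuming a generated proxy here, and the type and member names (ContentService, getContentRequest, requestedDocument, and so on) are my guesses at what the WSDL produces - check the docs and the generated code for the real names:

// Sketch only: proxy and member names are assumptions, not verified against
// the generated code. Add a web reference to the service to get the real types.
ContentService proxy = new ContentService();

getContentRequest request = new getContentRequest();
request.contentIdentifier = "System.Xml.XmlTextReader"; // an alias; a short ID, GUID, or URL works too
request.locale = "en-us";
request.version = "MSDN.10";

// Without this, you only get back the list of available document types and
// formats - no bodies. Ask explicitly for the primary XHTML document.
requestedDocument primaryXhtml = new requestedDocument();
primaryXhtml.type = documentTypes.primary;
primaryXhtml.selector = "Mtps.Xhtml";
request.requestedDocuments = new requestedDocument[] { primaryXhtml };

getContentResponse response = proxy.GetContent(request);
// response carries the requested document bodies plus the list of other
// available locales and versions for this item (again, exact property names
// will differ - see the generated proxy).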

 

The other thing to be aware of is the idea of available versions and locales. If you send in a request for content item ms123401, locale en-us, version MSDN.10, you'll get back that content item, but you'll also receive a list that will tell you that the content item is also available for locale/version fr-fr/MSDN.10 and locale/version en-us/MSDN.20. This list is particularly valuable when the content key you request does not correspond to a known content item - in that case it represents the best guess by the MTPS system for reasonable alternatives.

 

GetNavigationPaths has a few twists as well. First, there's the name. We seriously considered calling it GetToc, but it's not exactly TOC data, since it's used for other things, like that little trail of links (sometimes called "breadcrumbs" or the "eyebrow") at the top of MSDN2 pages. What it really returns is all the ways to navigate between two content items. Hence, GetNavigationPaths.

 

GetNavigationPaths accepts two content keys. In this case, the identifier in the keys must be a short ID. (If you need to, you can resolve a short ID from an alias, a GUID, a URL, or an asset ID via a call to GetContent first.) The first key identifies the root, which is the content item you'd like to start at, and the second key identifies the target, which is the content item you'd like to wind up on.

 

What you get back is a list of navigation paths between the root and the target. There might be more than one path, because a content item can appear in more than one place in the TOC. A navigation path is a list of navigation nodes, where each navigation node is made up of a title, a navigation node key, and a content node key. There's also some information about something called phantoms, but I'll defer that to the docs.

 

The title is fairly self-explanatory, but the distinction between a navigation node key and a content node key is somewhat less intuitive…I had to have it explained to me more than a few times when I was writing the system. Basically, it arises out of the fact that every node in the TOC is itself a separate content item in the system, whose content consists of a reference to the content item that TOC node represents and a list of child nodes. So the navigation node key is a content key (identifier plus version plus locale) that represents the TOC node itself, and the content node key identifies the content item the TOC node corresponds to. You can tell the difference between the two because the content item identified by the navigation node key will always have a primary document of format "Mtps.Toc".

 

Another way to look at it is that the navigation node key tells you where you are in the left hand tree of MSDN2, and the content node key tells you what goes in the right hand content pane.
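
To pull the GetNavigationPaths pieces together, here's a sketch along the same lines as the GetContent example above. Once more, the proxy member names are guesses based on the description here, and ms123401 and ms987654 are stand-in short IDs, not real ones:

// Sketch only: member names are assumptions. Both keys must use short IDs -
// resolve an alias, GUID, URL, or asset ID with a GetContent call first if needed.
ContentService proxy = new ContentService();

navigationKey root = new navigationKey();
root.contentId = "ms123401";   // stand-in short ID for the node to start at
root.locale = "en-us";
root.version = "MSDN.10";

navigationKey target = new navigationKey();
target.contentId = "ms987654"; // stand-in short ID for the topic to wind up on
target.locale = "en-us";
target.version = "MSDN.10";

navigationPath[] paths = proxy.GetNavigationPaths(root, target);

// A topic can appear in more than one place in the TOC, so there can be
// multiple paths; each path is a list of nodes from root to target.
foreach (navigationPath path in paths)
{
    foreach (navigationNode node in path.navigationNodes)
    {
        // navigationNodeKey identifies the TOC node itself (its primary
        // document has format "Mtps.Toc"); contentNodeKey identifies the
        // content item that goes in the right-hand pane.
        Console.WriteLine(node.title);
    }
}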

 

Like I said, I don't expect as many people to use GetNavigationPaths as to use GetContent, so I wouldn't lose too much sleep over the details. Of course, if you do wind up using it (or any part of the system), I'd love to hear about it, or about how we could make it or the documentation better.

 

This was a very interesting system to write for a variety of reasons, but I think I'll save the "how" for another post. I've also got what I think is a pretty cool application of the service that (at least some) people are really going to like. More on that later, too.

 

We consider the system to be roughly in beta, as we already know several things we need to improve or change. That said, we feel good enough about it to turn the world loose on it. If you come up with any cool ideas about how to use the service, or ideas about how we could improve it, drop a comment here.