Saturday, December 23, 2006

Verbal Shotgun

Update: fixed incorrect link to the MCM.

 

We're in Minnesota for the Christmas holiday, visiting my parents. On Thursday morning, we took Ellen to the Minnesota Children's Museum with my mother. The place was absolutely great - I enjoyed it, Ellen enjoyed it, Alice enjoyed it, and my mom enjoyed it. Highly recommended.

 

The last thing we did before we went across the street to Fuji Ya (probably our favorite sushi restaurant - also highly recommended) was to stop in at the toddler area of the museum, cutely entitled Habbitot. They've got a series of platforms, slides, and stairs that are just perfect for the kids to explore.

 

Ellen was playing on one set of stairs, and being the bossy little girl she is, decided that no one else should be able to play on them. She just yelled "no!" at the first little person who went by. That got my antenna up - but I didn't do anything about it. Then a little boy - probably a year her junior - crawled by, and Ellen raised her hand to hit him. She's got a temper on her, but she's also pretty obedient, so I did the Dad thing and barked, "Don't you even think about it!" quickly enough that she thought better of it and put her hand down.

 

Unfortunately, I think I put a little too much oomph into the command, because I looked over and the little boy she'd been about to hit was looking at me like I was Godzilla, tears already brimming in his little eyes.

 

Oops.

 

I apologized to his parents (who were very understanding), and fled. :)

Thursday, November 30, 2006

FlexWiki 2.0 Alpha 1 Released

I just posted FlexWiki 2.0 alpha 1 on the SourceForge website. Frankly, I'm rather excited. Although it's only at alpha stage, I've been working on this project for well over a year. After extensive rearchitecture, I'm finally at the point where I can start working on the thing that sent me down this path in the first place: a reasonable security model for FlexWiki.

If you have a chance to try out the new bits, please do so. Keep in mind that they are alpha: things generally work, but you're bound to run into issues now and then. Things that specifically don't work right now are caching (although you could argue it never worked that great in the first place) and SQL Server support. Those will get implemented in the coming months.

Enjoy!

FlexWiki 1.8.0.1739 Released

First, apologies for the god-awful formatting of my posts just now. I just reinstalled, and I haven't got anything decent to post with - the .Text web page that I'm using right now sucks. Anyway.

I just posted FlexWiki 1.8.0.1739 to the SourceForge website (http://sf.net/projects/flexwiki). This is primarily a bugfix release, and given that our last official release was in late 2005, there are a fair number of bugs that have been fixed. There are also some minor feature enhancements - you can check out the change list on the project website.

There are three reasons that it's been so long since the last official release: 1) We've been focused on FlexWiki 2.0. More about that soon. 2) We do continuous integration at http://builds.flexwiki.com, so anyone that wanted updated bits could have gotten them from there any time. 3) We haven't had a release manager, so posting bits on SourceForge has been low priority. Hopefully #3 will change soon - we're working on it.

As always, the best place to get support with FlexWiki remains the flexwiki-users mailing list. You can subscribe via the FlexWiki project website.

Missing IIS Administration Tools in Vista

I've just recently installed Vista for the first time - I never even looked at any of the Betas, CTPs, or RCs. I guess I'm getting old (turned 35 yesterday, in fact), since I no longer leap on beta technology like a pack of starving hyenas on a wounded wildebeest.

So far it's going pretty well. I'm not sure I much like the new UI features - I find Glass to be a bit too, well, glassy. If I want glare I'll point my binoculars at the sun (good gods I *am* a cranky old man). But I'm going to leave it on and see if I can get used to it.

One hitch I hit was stupid, but I'm guessing that I won't be the only one to hit it. I installed IIS7, and then spent the next half hour trying to find the management application. I located the (awful) command-line tool, but couldn't find the inetmgr.exe GUI. Then I went back into the Windows components installer and realized my mistake...there's a separate checkbox for the management tools that I hadn't checked. Oops.

Anyway, you've been forewarned. Unless the reason you wound up here was that you had the same problem and were searching the Internet for the solution. :)

Wednesday, November 22, 2006

Baby Sign Language

I'm a big fan of Scott Hanselman. Aside from his many useful technical posts, I enjoy reading his posts about his son, Zenzo. That's due at least in part to the fact that Zenzo is right around a year younger than my daughter Ellen, so hearing about Zenzo learning to crawl, walk, or whatever brings me back to when Ellen was passing the same milestones.

 

It was with particular interest, however, that I read Scott's latest Zenzo post, entitled "Baby Sign Language". I was going to leave a comment on his blog, but frankly I just don't like comments. It's a subject for another post, but essentially I find that I just never go back to the site after I leave a comment - I've tried stuff like CoComment, but nothing has really been natural and easy. So I figured I'd just post here instead - the trackback is generated automatically, which is better than nothing.

 

But to the subject at hand: Baby Sign Language. We did this with Ellen. I'm a HUGE fan of it. But I knew I would be - I have a brother who is thirteen years younger than me, so I can clearly remember him as a baby. He has Down Syndrome, so he was unable to speak at all until he was nearly three. It was just too difficult a skill. My parents got the brilliant idea of teaching him ASL, and we were all signing simple signs with him for probably a year before he could communicate verbally. It was great - really helped him express himself.

 

Given my experience, I was pretty determined to teach Ellen sign language as well. Alice (my wife) was totally open to the idea (indeed, she probably would have done it regardless of whether I was interested), so we started with Ellen when she was something like six months old. For us, the primary sign was for "milk", which we'd show her every time she got fed.

 

Just as Scott describes, it was initially like signing to a wall. She didn't seem to care, and she certainly didn't sign back. But I knew from my brother that it was just a matter of time, and sure enough, at about eight months, Ellen was able to mime the sign back to us. It's pretty amazing to get any communication whatsoever (other than smiling and crying) from an eight-month-old.

 

Of course, she stopped signing shortly thereafter for a good six weeks, but once she started again, there was no stopping her. She probably knew about ten signs ("milk", "eat", "more", etc.) before she could speak her first word ("ball"), and she probably knew over two dozen when her verbal vocabulary passed her signing one. It was our primary way of communicating with her for months and months, and was a great help to both her and to us in knowing what she wanted.

 

Right around the time Ellen was really catching on to the signs, we had a friend staying with us for an extended time. He'd been hit by a car, so he had a lot of time to sit around and watch Ellen learn. His comment? "I'm totally doing this with my kids when I have them." Which was my exact sentiment from watching my brother.

 

It's funny for me to hear resistance to the idea. The one that really puzzles me is the "it'll slow down their speech" one. Not only is this contrary to clinical evidence (IIRC - we did the research but I no longer have the citation), but my personal experience has been the opposite. Ellen, like Zenzo, is bilingual in verbal languages (Chinese and English), and despite that seems to have verbal capabilities comparable to her contemporaries. Bilingual children generally take a bit longer to reach the same level of speech in a single language than children only learning one language (again, I don't have a citation any more).

 

But setting that aside, I always point out to people that all children use sign language. Every kid reaches their hands up when they want to be picked up, and every kid waves to indicate "goodbye". There are probably other examples - gestures are so baked into communication that it's hard to even think of them. Teaching ASL is just an extension of this.

 

I can't recommend teaching ASL to your baby strongly enough. It's totally worth it. And don't worry if you don't know ASL - just make up some sign and use it consistently. We used to do that all the time for new stuff we didn't know the sign for. Of course, it's not hard to find signs on the Internet, but when you're in the heat of the moment, anything will do as long as you pick something, tell your spouse, and stick with it.

 

I'll close here by answering one more of the questions Scott posed: Does Ellen still sign? Yes she does, but not to communicate. She communicates exclusively (and nearly endlessly :) ) verbally, but there are about five signs she still makes even when speaking. For example, she still signs "sorry" even as she says it - in English or in Chinese.

Wednesday, November 15, 2006

Slides and Code for CapArea .NET UG Talk

I spoke last night at the Capital Area .NET User Group, and while I'm probably not the most objective evaluator, I thought it went pretty well. Which is cool, since I was presenting one of the modules from my upcoming Pluralsight course.

 

I've uploaded the slides and demos here. Any questions, you know how to reach me.

Monday, November 13, 2006

Speaking at CapArea .NET UG

Tomorrow night (Tuesday, November 14th) I'm speaking at the Capital Area .NET Users' Group meeting. My subject will be "Effective URL Design Using ASP.NET". You can read the summary on the website (http://caparea.com).

I'm particularly looking forward to giving this presentation, because the material is part of my upcoming Applied .NET Architecture and Design course (http://pluralsight.com/courses/AppliedDotNetArch.aspx), which I'm in the final stages of writing. Yes, I've been working on it for a long time, but it really is nearly done.

Anyway, hope you'll come on down to the event - there's usually pizza. Say hi if you do.

Tuesday, November 7, 2006

Using a .NET 1.x Control from a MC++ 2.0 Windows Forms Application

OK, this one took me forever to find, even though the solution was simple, so I'm blogging it.


Apparently, there's still at least one person out there using RichTextBox2/RichTextEditor. And he wanted to use it from managed C++, of all places. And it wasn't working, of course. So he emailed me, and I was happy to help. Of course, it took some research, since VC++ isn't exactly my first choice for writing Windows Forms applications. (Like, I didn't even know VC++ has a forms designer now!)


The problem occurs any time you try to add a control to a form when the control was compiled against .NET 1.x. When you do that, you'll see errors like this:


error C3699: '^' : cannot use this indirection on type 'System::Windows::Forms::Cursor'
warning C4945: 'DescriptionAttribute' : cannot import symbol from
    'c:\windows\assembly\gac\system\1.0.5000.0__b77a5c561934e089\system.dll':
    as 'System::ComponentModel::DescriptionAttribute' has already been imported
    from another assembly 'System'
    c:\windows\assembly\gac\system\1.0.5000.0__b77a5c561934e089\system.dll

This is due to the fact that when you add the control to the form, VC++ helpfully adds references for the control's dependent assemblies to the form. Unfortunately, those dependencies include things like the 1.x version of System.dll, and you've already got references in your project to the 2.0 version of System.dll. Hence the conflict.


The solution is to go into Project->Properties->Common Properties->References, where you must remove references to the duplicate 1.x libraries. Easy. And hopefully easier to find now.

Saturday, October 28, 2006

Happy Birthday Ellen!

Hard to believe this was two years ago. Or that now she can do this when you ask her to make an angry face:

 

 

 
:)

 

Happy birthday sweetie!

Tuesday, October 24, 2006

The Impact of Atlas on Web Services

<prognostication-mode accuracy="questionable"> 

 

I was reading this post today about the Microsoft AJAX Library, and something struck me: the [ScriptService] attribute makes this a new web service toolkit. "Duh", you're saying. But the interesting thing about this one is that it's a toolkit that .NET web service authors probably can't ignore. Up until now, I've been getting the message from most of my clients that the only web service client they really care about is .NET. That sort of begs the question of "Then why use web services at all?" but that's perhaps a topic for another time.

 

At any rate, given how software from Microsoft is usually received by developers in the Windows world, I suspect that people will be forced to consider the AJAX Library when designing their .NET web services. And granted it's written by the same company, but I'll be completely stunned if the two stacks are completely isomorphic. Hell, it's hard enough to get the .NET stack to talk to itself across XML serialization sometimes.

 

I think this constraint is going to be fundamentally positive. It's generally true that the second client is much harder to write than the third, as the number of generalizations you have to introduce decreases. So if people start testing against two different clients (even if they are both from Microsoft) then it'll be that much easier when it comes time to integrate with Java, or Ruby, or whatever the hell comes next. Which is (at least one of) the point(s) of doing Web Services.

 

Should be interesting. At the very least, it looks like I should devote some time to playing with [ScriptService] to find out what the limitations are.

 

</prognostication-mode>

Monday, October 23, 2006

ScrewTurn Wiki

Ian pointed me to this posting by James Avery, who seems to really like ScrewTurn wiki, an open-source, .NET-based wiki engine just like FlexWiki. But James' statement

 

I have been using FlexWiki for sometime and I have to say that Screwturn really blows it away. It has built in security, one-click backup, its extremely customizable, and much more.

 

sort of caught my eye, as you might imagine. :)

 

Contrary to what seems to be the unfortunately frequent open-source practice, I'm not interested in trashtalking ScrewTurn wiki. In my ideal world, ScrewTurn would do everything that FlexWiki does, plus more, and there'd be a syntax converter. Then I could switch to ScrewTurn wiki and spend my limited free time making it (or something else) better. Since that doesn't seem to be the case (it looks like they both have features the other lacks), I guess I'm going to stick with FlexWiki for now. :)

 

At any rate, if you're in the market for a .NET-based wiki, I highly recommend you check out ScrewTurn. I haven't installed it yet, but I've skimmed the docs, and it sure looks like they've got a solid project. It looks like for now the choice between FlexWiki and ScrewTurn depends on your particular requirements and preferences, so it's good to know about the options.

 

And to any ScrewTurn devs that read this - if there's anything FlexWiki can do to help, or if you have suggestions about what we can do better (there's lots), or if you can think of interesting ways to work together, just let us know.

Wednesday, September 20, 2006

Announcing HoboCopy

For a while now, I've been unhappy with the state of my backup strategy. I have a simple script that runs ntbackup every night on a rotating, 14-day schedule: one full backup followed by six incremental backups, then repeat with a different target. In this way, I can always recover to any day in (at least) the last week.

 

So why am I unhappy? Two reasons:

 


  1. ntbackup isn't exactly robust - I've seen all sorts of failures, and restoring files is sort of flaky.

  2. ntbackup locks its information up in a proprietary format. I just want my files, dammit, not some .bks file.

 

Being a cheap bastard with decent programming skills, I started looking at what it would take to wire up my own solution. I thought about using robocopy, the king of copy tools. But it has one major drawback: it can't copy any files that are locked by another program. Some of my friends solve this problem by shutting down all their programs every night, but I knew that there was no way I (or my wife) was going to remember to do that, and of course the one time I'd forget was the day my hard drive would tank. Plus, what about programs like SQL Server that I don't want to ever shut down?

 

Obviously, it's possible to copy files that are in use. After all, ntbackup does it. So I started poking around, and came across VSS. The good VSS (the Volume Shadow Copy Service), not the unbelievably crappy one (Visual SourceSafe). The Volume Shadow Copy Service is a very cool piece of Windows XP/2003 that lets you "snapshot" a hard drive, creating what's essentially a point-in-time image of what's on the disk. You can then copy files from that image at your leisure. Better, it's done in an efficient manner, so that it doesn't actually copy anything unless someone does a write, and even then it only copies at a block level. Which is good, because otherwise you'd need 50GB free to snapshot 50GB of data.

 

But it's even better than that. VSS includes an API that programs like SQL Server 2005 can use to find out when a snapshot is about to occur. When so notified, VSS-aware programs can flush their state to disk, so you get a consistent backup. Of course, not every program is aware of VSS. For those, you get what the docs call a "crash consistent" snapshot. Translation: whatever the hell was on the disk. In my book, that's still better than not backing up at all. After all, my computer seems to do just fine after a BSOD, which is no different.

 

Armed with the Platform SDK docs, I set out to achieve my goal of a VSS-based backup utility. I would have liked to have used robocopy to do the copy, but I couldn't get it to copy using source paths like \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\data\backup\SmtpSend\Properties\Resources.resx, which is how you get to the files in the snapshot.

 

Next, I tried to write a managed tool that would do the copy. That wouldn't have been too bad - recursive file copy is pretty simple to implement. Unfortunately, the VSS API is completely broken for CLR interop - the main object you need to access implements one COM interface,  IVssBackupComponents. But if you try to query that object for that interface, it returns E_NOINTERFACE. Which is wrong, wrong, wrong. And also means that there's no way to use the object from straight managed code.

 

So I decided to write the tool in unmanaged C++. Maybe I could have done it in managed C++, or a combination of C++ and C#, but in the end I decided to make this a project that would help me get a little rust off my C++ skills. Plus, the documentation suggests that paths starting with \\?\GLOBALROOT aren't safe to use from managed code, although it worked fine when I tested it. The end result is HoboCopy. You pass it a source directory and a destination directory, and it makes a recursive copy using VSS. I've run it against my entire hard drive, and it's able to copy everything except files I didn't have permission to copy (e.g. some stuff under Documents and Settings), and for those situations I've got the /skipdenied switch. Some day I'll add a switch that enables the SE_BACKUP privilege to make that behavior even better. Unless one of you wants to submit a patch. :)
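
 

If you want to give it a try, usage is about as simple as the description above suggests - something like this (the paths here are just examples, of course):

 

hobocopy /skipdenied c:\data d:\backup\data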

 

Speaking of patches. As of now, the project is hosted at SourceForge under the MIT license, so have at it! (Make sure you download the right one - there are separate binaries for XP and W2K3.) Just go easy on me if you look at the source - it's been years since I wrote much C++. :)

Tuesday, September 12, 2006

FolderShare + Bazaar = Tasty

As a consultant, I face the occasional problem of having to work on source code that lives behind my clients' firewalls. Often, getting access to their VPN is difficult, like when the engagement is very short and the VPN red tape copious.

 

I've tried emailing zips, setting up a WebDAV share, and various other strategies, all of which had one shortcoming or other. The best solution is, of course, a source control system of some sort, but even that I've had problems with (e.g. the combination of the client's firewall and my ISP blocks the port(s) we need).

 

For a couple of weeks now, I've been trying a new strategy. Using FolderShare, we create a shared folder that we both have write access to. Then we turn it into a Bazaar repository. On each end, we pull from and push to this folder/repository, making it look a lot like a source control server. And so far, it works great. It even runs over HTTPS, which should make any suspicious sysadmins slightly happier.
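
 

If you want to replicate the setup, the day-to-day commands look roughly like this. I'm writing these from memory, so check the Bazaar docs; P:\Shared\project stands in for wherever FolderShare puts the shared folder:

 

bzr init "P:\Shared\project"          (once: create the branch in the shared folder)

bzr branch "P:\Shared\project" work   (make a local branch to hack on)

bzr commit -m "Fix the frobnitz"      (commit locally in the work branch)

bzr push "P:\Shared\project"          (publish; the other side runs bzr pull)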

 

I imagine one could do the same thing with CVS or Subversion [1], although I haven't tried it. Bazaar seemed like a reasonable choice for two reasons: a) it's more "folder-y" than most SCC systems, so it seemed like a natural fit, and b) Sam Ruby thinks it's cool, so I wanted to try it out. :)

 

[1] You might even be able to make it work with SourceSafe. But then you'd have to use SourceSafe, and why would you use something that's both worse and more expensive?

Friday, September 8, 2006

Inside MSDN - Consuming MSDN Web Services

Well, it's early September, and you know what that means: time for the October MSDN issue. I've been checking the website frequently to see when the new issue would be released, because I've got an article in it! You can read it here. It's the latest installment of "Inside MSDN", a column that talks about how the suite of applications and services that make up MSDN itself is built. My article covers how to use the MTPS Content Service, which I announced here on CraigBlog a few months ago.

 

This is my second MSDN article, but my first solo one. I wrote the first one with Tim on an obscure feature of COM+. ("What's COM+?" you say? Yeah, I feel the same way.) 

 

I hope I can be excused for thinking it's sort of cool to have published in MSDN Magazine. :)

Thursday, August 24, 2006

Console Text Selection Via the Keyboard

I was hanging out with some Pluralsight guys last weekend. Dan Sullivan was talking about SQL Service Broker, so I was already paying close attention, but then he did something that made me sit straight up in my seat…he selected text in a console window using only the keyboard!

 

I've said it many times before: I'm a keyboard kind of guy. So it has always bugged me when I have had to take my hands off the keyboard to select text in a console window. Well, now I don't have to any more. Maybe I'm the last guy to figure this out, but apparently when you go into mark mode in a console window, you can use the arrow keys to move the selection point, and shift-arrows to extend the selection. Way cool!

 

Just to summarize, here's how I'd select some text in a console window, using only the keyboard:

 

Alt-Space, e, k  [Enter mark mode]

Down, down, down, right, right, right [Move the cursor]

Shift-right, shift-down, shift-right [Extend the selection]

Enter [Copy the selection to the clipboard]

 

Yay! Thanks Dan!

Friday, July 28, 2006

Producing Open Source Software - A Review

I just now finished reading "Producing Open Source Software: How to Run a Successful Free Software Project". It was, in a word, excellent. Really, really good. And it's available in its entirety for free online.

 

I definitely learned a lot reading this book, despite having had a couple of years' experience helping to run FlexWiki now. I was happy to see that we already follow many of his recommendations, but I also know I'll be changing a few things we do based on what I read.

 

Even setting aside the fantastic advice about how to set up the technical infrastructure, I found Mr. Fogel's take on the culture of free/open source to be truly insightful. It's no wonder that Subversion rocks so hard with guys like this behind it.

 

If you haven't already, read this book soon.

Thursday, July 27, 2006

Getting CLR Security Right - Seeing Double

This one bit me the other day. Fortunately Dominick was able to straighten me out rather quickly.

 

What I was trying to do was to update a build script to run off of a shared drive (don't ask). Of course, when running .NET code like NAnt and the other 90 tools we use in the build, you need to elevate privileges or your tools are unlikely to work with the decreased permissions they get by default when running from a network drive. That much I knew. Where I got burned was that some of our tools are compiled for .NET 1.1, and some for 2.0. Well, policy for 1.1 is completely separate from policy for 2.0. So I had to change it in both places.
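
 

For the record, the fix boils down to running caspol once per runtime against the same share. The commands look something like the following - I'm reconstructing the syntax from memory, so double-check it against caspol -?, and \\server\build stands in for the real share:

 

%windir%\Microsoft.NET\Framework\v1.1.4322\caspol.exe -m -ag 1.2 -url "file://\\server\build\*" FullTrust -n BuildShare

%windir%\Microsoft.NET\Framework\v2.0.50727\caspol.exe -m -ag 1.2 -url "file://\\server\build\*" FullTrust -n BuildShare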

 

Luckily, I didn't waste too much time on this. It helps to be willing to bother your friends. :)

Tuesday, July 25, 2006

It's (sort of) Alive!

Just a few minutes ago, I fired up my web browser and pointed it at the FlexWiki code I've been hacking on. To my complete amazement, the home page actually rendered! It was riddled with errors, and it crashed when I tried to edit a page, but the blasted thing actually showed me HTML that wasn't the ASP.NET error page…I practically peed myself.

 

As you may have read here before, I've been working on refactoring FlexWiki for quite a while. I started tracking my hours in October of 2005, but I really started this effort a few months before that. I generally try to work on it every day, but the realities of work and family mean it's much less often than that. Still, I see I've put in over 100 hours on it. If nothing else, that makes me feel pretty good about having "paid" for all the open source software I use. :)

 

There's a metric boatload of things to be done before FlexWiki 2.0 is ready to go, of course. I think my next few months will look something like this:

 



  1. Fix the 10,000 bugs I introduced.

  2. Develop a performance test suite to characterize the behavior of the application.

  3. Run the test suite.

  4. Analyze the results.

  5. Make improvements, including the caching provider that was part of the original reason for doing this massive refactorization.

  6. Goto 3 a lot of times.

  7. Write the security module that got me started on this hellroad.

  8. Goto 1 a lot of times.

  9. Release!

  10. Goto 1 a lot of times.

  11. Take a break from FlexWiki. Maybe to work on FlexWikiPad. :)

 

Still, this really feels like turning a corner. Yay me.

Wednesday, June 21, 2006

Ian Launches WpfMsdn

Running with the msdnman ball, Ian has launched WpfMsdn, a GUI browser for the content coming out of the MTPS Content Service.



WPF MSDN Reader


Cool. :)

Monday, June 19, 2006

Mixing Forms and Windows Authentication in ASP.NET 2.0

Almost two years ago, I blogged about how to mix Forms and Windows authentication in an ASP.NET application. It was something I figured out for a client, and they've been using the basic idea ever since. It worked pretty well up until a month or two ago. Then it broke pretty hard. What changed? We upgraded to ASP.NET 2.0.

 

Well, I finally got a chance to sit down and chase the problem, and I figured out what looks like a solution. We have yet to integrate my idea into the product, but I wrote a quick prototype, and it looks like it's going to work, so I thought I would blog it here.

 

The kernel of the problem is that the ASP.NET 2.0 Forms Authentication Module appears to stomp on the "Response.StatusCode = 401" that you need to set in order to make mixed authentication work. I'm not sure exactly where this happens: I got lost in Reflector when I tried to chase it down. But you can easily observe the effect - your login page will redirect back onto itself, with the ReturnUrl query string parameter double-escaped. That is, you'll see something like this in your address bar:

 

http://server/app/Login.aspx?ReturnUrl=%2fLogin.aspx%3fReturnUrl%3d%252fDefault.aspx

 

instead of this

 

http://server/app/Login.aspx?ReturnUrl=%2fDefault.aspx

 

Notice how the ReturnUrl is actually pointing back to the login page itself, rather than to Default.aspx the way it should be. Again, I haven't chased down exactly where this behavior is coming from, but it's easy enough to observe.

 

My solution was simply to attach a handler to the Application's EndRequest event by putting the following in Global.asax:

 

protected void Application_EndRequest(object sender, EventArgs e)
{
    if (Context.Items["Send401"] != null)
    {
        Response.StatusCode = 401;
        Response.StatusDescription = "Unauthorized";
    }
}

 

Then, in order to trigger this code, all I have to do is put a

 

Context.Items["Send401"] = true;

 

somewhere in my login page when I determine that I need redirection to pick up on the user's Windows credentials. Because Application_EndRequest runs much later in the page lifecycle than the code in my login page, Forms Authentication can't get in the way and screw up my status change.

 

I thought I'd try to be really clever and attach my handler in the page itself by doing something like this:

 

Context.ApplicationInstance.EndRequest += LoginPage_EndRequest;

 

but it doesn't work. I'm not sure why (the page lifecycle is not my strong suit these days), and I'm out of time to chase it down.  If you happen to know what I'm doing wrong, drop a comment. In the meantime, the Global.asax thing works well enough for us.

Friday, June 16, 2006

msdnman Moves To CodePlex

Now that msdnman is public, I need to find a home for it that isn't two zipfiles in a directory somewhere. I said I was going to move it to SourceForge, but Brad Wilson suggested that I host it at CodePlex instead. So I did: msdnman lives here now.

 

CodePlex is part Microsoft's answer to SourceForge, part spiritual successor to GotDotNet. As you know, I was a pretty vocal detractor of GotDotNet, but that's exactly why I wanted to try out CodePlex - I wanted to see if Microsoft had gotten it right this time.

 

Well, I'd say they're on the right track. Having worked with CodePlex for all of two days now, I'm reasonably impressed. To start with, it has a real source control system - TFS. Aspects of the command-line support are slightly annoying (the need to constantly reauthenticate, specifically), but it's a totally modern tool, with support for lots of interesting and useful features (e.g. shelving). Other touches of the site - wiki editing for the project's home page, export of work items to Excel, linking checkins to work items, etc. - are really nice, and a step above what SourceForge provides.

 

My only major complaint is that there's no email support on the site right now. I believe email to be the lifeblood of open source projects. It's certainly how I do everything. Right now CodePlex uses RSS, which I think will serve well enough until email is available (it's on their list).

 

All in all, CodePlex is a pretty good effort for a v1 release. I expect it will improve rapidly, too, given that they have people like Brad and Jim Newkirk (of NUnit fame) on the team.

Monday, June 12, 2006

Announcing msdnman

When Kim, Tim, and I were working on MSDN2 - or more accurately, the Microsoft/Technet Publishing System (MTPS), the system behind MSDN2 - way back when, we always had in mind that it was important to create a system where the content could be leveraged in ways other than to power the MSDN2 website. And we did - every time you hit F1 in Visual Studio 2005, you're accessing the MTPS system. Now that the MTPS Content Service is publicly available, we expect other applications of the data to start popping up.

 

To get the ball rolling, and because we always thought it would be cool, I've written the first one. I call it msdnman, and users of *nix systems should find it fairly familiar: it's written as an analog of the man command. The idea is to provide command-line access to the MTPS content. I'm a big command-line guy myself, so I really like the concept.

 

For now, I've posted the binaries and source here. I'm working on getting it shoved over to SourceForge, but there are a few administrative hurdles to clear first, so it might take me a few weeks.

 

Because I wrote this in my spare time over a couple of days, it's still a bit rough. But it's still pretty serviceable for those times where you don't want to fire up a browser, and I think it makes a reasonable demonstration of how to use the web service, too.

 

It's pretty straightforward to use. Simply grab the binaries and run something like

 

msdnman System.Xml.XmlReader

 

in a console window, and you should see something like this:

 

msdnman screenshot

 

There are a bunch of options that you can specify, too. You can get a list of them by running

 

msdnman -?

 

but briefly, you'll generally either do this:

 

msdnman IDENTIFIER

 

or this

 

msdnman -k KEYWORD

 

where IDENTIFIER identifies some content item in the MTPS system. See here for more information, but usually this will be the name of the namespace, class, method you're looking up, like System.Xml.XmlReader or System.Security.Cryptography.

 

The "-k" option is a nifty little add-on that I did (at the request of John Mollman - this is your blog, right?), which does a keyword search against the MSDN content. So you can do

 

msdnman -k web.config

 

to get back a list of links having something to do with web.config.

 

I implemented the search part by wiring up to the same web service that Visual Studio 2005 uses, as the MTPS web service does not currently support search.

 

So download and enjoy, and let me know if you have any ideas for how to improve it.

Announcing the MTPS Content Service

Over the last few months, you've seen me drop vague hints about the work I'm doing at MSDN. Well, today we launched it at a TechEd chalk talk (DEVTLC03), so I can finally talk about it: the Microsoft/TechNet Publishing System (MTPS) Content Services.

 

In brief, the MTPS Content Services are a set of web services for exposing the content in MTPS. MTPS is the application I helped write a few years back that stores and processes all the content at MSDN2. With the web service, you now have programmatic access to all that data via SOAP. So if you want to embed access to the documentation for System.Xml.XmlTextReader into your application, go for it. If you want to know what the child nodes of System.DateTime.ToString() are in the table of contents, you can go and find that, too. I expect to see some fairly interesting uses of the service pop up in the near future. There's such a huge amount of good information in MTPS that I imagine lots of people will want to leverage it.

 

The web service is reasonably well-documented here (of course, I wrote the web service and the documentation, so maybe I'm not the best person to judge the quality of the docs), but let me give a brief explanation of how it works.

 

The web service consists of two operations: GetContent and GetNavigationPaths. GetContent - as you might imagine - allows you to retrieve content (XHTML, GIFs, etc.) from MTPS. GetNavigationPaths lets you get the table of contents (TOC) data for the items in the system. I imagine most people will use GetContent far more often than GetNavigationPaths.

 

The system is organized around the concept of a content item. A content item is a collection of documents identified collectively by a content key. A document has a type, a format, and some content. The document most people will probably be interested in is the one of type primary, format Mtps.Xhtml, but there are other documents associated with a content item as well (images, for example). See the docs for more detail.

 

A content key consists of three parts: a content identifier, a locale, and a version. The locale is something like en-us (US English) or de-de (German as they speak it in Germany). The version is something like SQL.90 (SQL Server 2005).

 

The content identifier is a bit more complicated. It can be one of five things:

 


  1. A short ID. This is an eight-character identifier like "ms123401".

  2. A content alias. This is a "friendly name" for the content item, like "System.Xml.XmlTextReader".

  3. A content GUID. Topics can also be identified by a GUID.

  4. A content URL. To allow for easy integration with the HTML front end of MTPS, URLs like http://msdn2.microsoft.com/en-us/library/b8a5e1s5(VS.80).aspx can also be used to identify a content item.

  5. An asset ID. This is how topics are identified internally by the system, and they occasionally appear in the output. They always begin with "AssetId:".

 

With the exception of asset IDs, these are all the same pieces that you can already use in the URLs for MSDN2, so the concepts should be familiar if you've spent any time looking at that stuff.

 

There are two slightly funky (but highly intentional) things about what GetContent returns that you'll need to keep in mind. The first is that, by default, the bodies of the documents that make up a content item are not returned. Unless you list a document in the requestedDocuments section of the request message, you'll just get the types and formats of the available documents. This is because documents can be quite large, and it would be a waste to transmit all of them every time.
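
 

To make that concrete, here's roughly what a GetContent call looks like in C#, assuming a proxy generated from the service WSDL. I'm reconstructing the generated type and member names from the concepts above, so treat this as a sketch rather than paste-ready code:

 

// Sketch only - the proxy names here approximate what the WSDL generates.
ContentService proxy = new ContentService();

getContentRequest request = new getContentRequest();
request.contentIdentifier = "System.Xml.XmlTextReader"; // alias, short ID, GUID, or URL
request.locale = "en-us";
request.version = "MSDN.20";

// Ask for the XHTML body explicitly - by default you only get back the
// types and formats of the available documents, not their contents.
requestedDocument doc = new requestedDocument();
doc.type = documentTypes.primary;
doc.selector = "Mtps.Xhtml";
request.requestedDocuments = new requestedDocument[] { doc };

getContentResponse response = proxy.GetContent(request);
Console.WriteLine(response.contentId);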

 

The other thing to be aware of is the idea of available versions and locales. If you send in a request for content item ms123401, locale en-us, version MSDN.10, you'll get back that content item, but you'll also receive a list that will tell you that the content item is also available for locale/version fr-fr/MSDN.10 and locale/version en-us/MSDN.20. This list is particularly valuable when the content key you request does not correspond to a known content item - in that case it represents the best guess by the MTPS system for reasonable alternatives.

 

GetNavigationPaths has a few twists as well. First, there's the name. We seriously considered calling it GetToc, but it's not exactly TOC data, since it's used for other things, like that little trail of links (sometimes called "breadcrumbs" or the "eyebrow") at the top of MSDN2 pages. What it really returns is all the ways to navigate between two content items. Hence, GetNavigationPaths.

 

GetNavigationPaths accepts two content keys. In this case, the identifier in the keys must be a short ID. (If you need to, you can resolve a short ID from an alias, a GUID, a URL or an asset ID via a call to GetContent first.) The first key identifies the root, which is the content item you'd like to start at, and the second key identifies the target, which is the content item you'd like to wind up on.

 

What you get back is a list of navigation paths between the root and the target. There might be more than one path, because a content item can appear in more than one place in the TOC. A navigation path is a list of navigation nodes, where each navigation node is made up of a title, a navigation node key, and a content node key. There's also some information about something called phantoms, but I'll defer that to the docs.
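
 

GetNavigationPaths is similar in shape. Reusing the proxy from the sketch above (and again guessing at the generated names - rootKey and targetKey here are content keys you've built as just described):

 

// Sketch only - rootKey and targetKey are content keys (short ID plus
// locale plus version); the type and member names are my best guesses.
getNavigationPathsRequest navRequest = new getNavigationPathsRequest();
navRequest.root = rootKey;
navRequest.target = targetKey;

getNavigationPathsResponse navResponse = proxy.GetNavigationPaths(navRequest);
foreach (navigationPath path in navResponse.navigationPaths)
{
    // Print each path as an MSDN2-style "eyebrow" breadcrumb trail.
    foreach (navigationNode node in path.navigationNodes)
    {
        Console.Write(node.title + " > ");
    }
    Console.WriteLine();
}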

 

The title is fairly self-explanatory, but the distinction between a navigation node key and a content node key is somewhat less intuitive…I had to have it explained to me more than a few times when I was writing the system. Basically, it arises out of the fact that every node in the TOC is itself a separate content item in the system, whose content consists of a reference to the content item that TOC node represents and a list of child nodes. So the navigation node key is a content key (identifier plus version plus locale) that represents the TOC node itself, and the content node key identifies the content item the TOC node corresponds to. You can tell the difference between the two because the content item identified by the navigation node key will always have a primary document of format "Mtps.Toc".

 

Another way to look at it is that the navigation node key tells you where you are in the left hand tree of MSDN2, and the content node key tells you what goes in the right hand content pane.

 

Like I said, I don't expect as many people to use GetNavigationPaths as to use GetContent, so I wouldn't lose too much sleep over the details. Of course, if you do wind up using it (or any part of the system), I'd love to hear about it, or about how we could make it or the documentation better.

 

This was a very interesting system to write for a variety of reasons, but I think I'll save the "how" for another post. I've also got what I think is a pretty cool application of the service that (at least some) people are really going to like. More on that later, too.

 

We consider the system to be roughly in beta, as we already know several things we need to improve or change. That said, we feel good enough about it to turn the world loose on it. If you come up with any cool ideas about how to use the service, or ideas about how we could improve it, drop a comment here.

Tuesday, May 23, 2006

Electronic Elections: Doomed

Electronic elections in the US are doomed: this pretty much settles it.

 

Our government (and I don't mean "this government" - I think it's non-partisan) appears to be completely, hopelessly addicted to technology while at the same time being pathetically inept at making choices about it.

Monday, May 8, 2006

Trailing Commas

I was reviewing some code the other day and I noticed something like this:

 

public enum Foo {
  One,
  Two,
  Three,
}

 

Note the "extra" comma after the last element. I thought to myself, "There's no way this can compile." But in fact it does, and when I stopped to think about it, I realized how nice this C# feature is. It means I can more easily reorder these elements, more easily generate source code, and in general is just one less thing to worry about. And it turns out this works for static array declarations, too, like

 

string[] foo = { "a", "b", "c", };

 

Little touches like this make life in C# pleasant.

 

I'm sure I’m not the first to notice it, but I'll bet I'm not the last, either. ;)

Friday, May 5, 2006

Thursday, April 27, 2006

Irony, At Least, Is Highly Available

It's a great irony that I'm spending today working with a client trying to design a system that must be highly available, and simultaneously struggling with my email server becoming unavailable. For added contrast, the problem appears to be with the company that hosts my backup mail servers - a service I pay for in the name of high availability. :p

 

At any rate, I hope this will get sorted out soon, because at the moment our email is bouncing immediately rather than simply queuing. If you're trying to reach me (or my wife) and you got a bounce, at least now you know what's happening. You can always use my gmail address instead.

Tuesday, April 25, 2006

It's All Free, Y'All

Phil Haack (one of my favorite bloggers, I'm not ashamed to admit) makes an excellent point. To that end, I hereby state the following:

 

Any code that appears on this blog, or in the sections of the Pluralsight wiki, for which I own the copyright, is hereby licensed under the MIT License, unless otherwise stated.

 

I'll try to make a practice of being more explicit in the future when I post code. In the meantime, you're covered.

Wednesday, April 19, 2006

The Perils of WSDL First (Again)

Tim and Aaron have both pitched in on this Perils of WSDL First thread. I just want to point out that I largely agree with what they are saying. Which is good, because disagreeing with either of them on matters pertaining to web services would be enough to make someone with even my huge ego question themselves! ;)

 

They advocate use of an abstraction layer between your domain types and the XmlSerializable types you use as your web service parameters. This is a fantastic idea, and it's the way I write web services myself, having been bitten more than once by direct exposure of domain classes. Generally speaking, this will go a long way towards preventing the sort of problems I warn about in my original post. In many cases, it will even be sufficient, especially when coupled with Tim's idea of having the serializable types be in a separate assembly owned by someone who understands the implications of code changes on the wire protocol. And the DCS stuff that Aaron talks about looks like an excellent further step in the right direction.

 

So if I still appear skittish, it's because I know that most people don't program this way. Most people follow the path of least resistance, and will hesitate to write the boring, repetitive mapping code involved in the most straightforward implementation of such an abstraction layer. Yes, I realize more sophisticated implementations are possible, but I'm talking here about people and organizations that don't necessarily recognize the long-term benefits of such a setup. As a consultant, I've seen this more than once.

 

The other thing that makes me a bit nervous is that even with an abstraction layer, you still have to be very careful not to muck with it in ways that would change the contract - a fact that both Tim and Aaron are clearly aware of. This is obvious to people who've been doing web services for a while, but for a lot of developers the angle brackets are still invisible. Basically, I'm trying to point out that web service development needs to be "both first". Fragile code sucks; the more things you have to remember, the more likely you are to make a mistake.

 

Whether or not any of this means your organization needs to go whole-hog and implement IXmlSerializable really depends on the dynamics of your development process. Probably you don't need to go that far, especially given XmlElementAttribute.Order. At the end of the day, just remember that when you let the infrastructure generate your XML automatically, seemingly innocuous changes in your code can create breaking changes in your wire protocol. Remember it, and use appropriate techniques to address the risk.

Tuesday, April 18, 2006

Overkill? I think not

It's good to see Tim blogging again. Hopefully he'll keep it up this time (nudge, nudge, Tim). At any rate, I'll blame his lack of practice for his post "Craig urges overkill, XmlSerializer sky not falling". Either that, or he had a high temperature/blood alcohol level. :)

 

Tim writes:

 

Craig got caught in a very particular set of circumstances. First, he started with WSDL. Then he hand-wrote his serializable types. Then he followed his preferred set of rules for ordering members of those types alphabetically.

 

The implication here is that unless you're doing all those things, you won't have the issues I describe in my post. This is simply not true. The problem I describe is an issue for anyone who does not explicitly control the order of serialization of their web service-visible types. Period. While I wouldn't exactly say the sky is falling, this is definitely a Big Deal.

 

I think the reason you don't see this occurring as a problem more often in the wild is that people tend to write .NET clients for their .NET web services, and XmlSerializer doesn't care about order. Or, more generally, schema validity. But if reach is important to your web service, you should care.

 

I actually talked with Tim about this on the phone, and it came out during the conversation that the problem is even worse than I first thought. I had detected the issue with return types, but the fact of the matter is that if you reorder your type members for either input or output parameters, you change the generated schema in the WSDL. And that's a breaking change (from a schema standpoint).

 

Which really was my whole point: you need to be EXCEEDINGLY careful with your types unless you implement IXmlSerializable or use XmlElementAttribute.Order. Reordering type members is just too easy for a developer to do without thinking about it, and isn't the sort of thing that's easy to catch as critical even if your team reviews all source changes.

 

As Tim points out, implementing the read side of IXmlSerializable is a royal pain most of the time (you'll note I only said "consider" using it), but XmlElementAttribute.Order is pretty easy. Of course, it's only available in .NET 2.0. :p

Friday, April 14, 2006

FlexWiki 2.0 Progress - The Big Commit

One of the things I've been working on - a little at a time, usually about half an hour a day - is a fairly extensive rewrite of FlexWiki. We're calling it FlexWiki 2.0, and not coincidentally, it involves an upgrade to .NET/ASP.NET 2.0.

 

I've been working on it for what feels like a loooong time, and for various reasons (some good, some less so) I hadn't checked in until today. Well, today I checked in the code - woohoo! So now we have two branches in CVS, which will make life ever so slightly more complicated, but which will allow me to sleep better at night. I do nightly backups of all my data, but that's not really the same as having it under version control at SourceForge.

 

This partial rewrite was driven by the desire to give FlexWiki a better authorization model. I still haven't even started on that part of it - first I had to untangle the existing caching code from the storage engine, and separate out a bunch of special processing that deals with something called "backing topics". Don't ask.

 

Anyway, what I've got now is a much cleaner, more extensible design than what was there before. Of course, I still have to do performance testing once I finish coding the new security and caching features, and that will almost certainly drive some complexity back into the design. But I remain hopeful that the design will enable future work to be done without the sort of elbow-length-gloved-colonoscopy type of overhaul I did this time around.

 

In October, I started tracking the time I've put into developing this new version of FlexWiki. So far I'm at 75 hours, which is either a lot or not much depending on how you look at it. There's a lot left to do - the source in its current state doesn't even compile - but the main refactoring I was doing has unit tests that cover 94% of the class, so I've got a solid start. My to-do list activities include:

 


  • Getting the build reconfigured to deal with two branches.

  • Getting a build for the 2.0 code working.

  • A few more minor refactorings.

  • Re-implementing support for built-in topics.

  • Implementing security support.

  • Re-implementing caching.

  • Establishing some sort of performance testing regimen.

  • Documentation, cleanup, and generally making sure it works after everything else I've done.

  • Taking some sort of serious break from working on FlexWiki. :)

 

Anyway, today was a major milestone for me, so I thought I'd share. If anyone has any comments on the code or the design, I'd love to hear them.

Thursday, April 6, 2006

The Perils of WSDL First

Update: Added the bit about XmlElementAttribute.Order. Thanks Tim!

 

You might recall that a while back, I posted my views about the whole "contract first" or "code first" debate. I'm not here to stir that debate again, but to relate a story from the trenches that reinforced my opinion.

 

Recently, I've been doing some work for a client that involved writing a web service under ASP.NET. For a variety of good reasons, we couldn't go the easy route of letting the system generate our WSDL for us. Instead, we wrote it by hand, and used this technique to make it show up in the right place. It was definitely harder than just writing a bunch of types, but I was pretty pleased with the result in the end.

 

Then I made the mistake of taking a shower while thinking about the code. I should know better, having had too many good ideas while standing naked under hot water. (I will now pause to give you a moment to poke out your mind's eye.) What I realized made me want to yell "Stop the presses!" and indeed will likely delay the release of said web service.

 

The issue, in a nutshell, is this: when writing custom WSDL, it's very, very easy to produce schema-invalid responses, unless you implement IXmlSerializable on all your return types. The problem is one of ordering. Allow me to explain.

 

One of the nice things about XmlSerializer is that it doesn’t really care about order. Let's say you have a class that looks like this:

 

public class Person {
  public string Name;
  public int Age;
}

 

It doesn't really matter whether you deserialize this from XML that looks like this:

 

<Person>
  <Name>Craig</Name>
  <Age>34</Age>
</Person>

 

or this

 

<Person>
  <Age>34</Age>
  <Name>Craig</Name>
</Person>

 

In the end, XmlSerializer will happily work with either flavor. And 99.99% of the time, this is fine, because you don't care. The problem arises because the schema that corresponds to this looks something like this:

 

<xs:element name="Person">

   <xs:complexType>

    <xs:sequence>

      <xs:element name="Name" type="xs:string" />

      <xs:element name="Age" type="xs:int" />

    </xs:sequence>

   </xs:complexType>

</xs:element>

 

Note that this uses a sequence, which means, "These elements must appear in this order." No big deal if the Person type is an input to our WebMethod, because we probably don't care in the code which one came first. But if Person is the return type of a WebMethod, then we've got trouble. Because if I ever saw that Person class, I'd rewrite it like this:

 

public class Person {
  public int Age;
  public string Name;
}

 

because I generally organize my files with members arranged alphabetically within visibility and member type groups. (Why? It makes it a snap to navigate the file in outline mode in VS.) But if I make that change and then serialize this type, it's going to serialize with the <Age> element first, and that's a schema violation.

 

If all your clients use .NET, or other tools that don't care about order, great. But I think producing schema-invalid messages is just plain wrong, and likely to lead to hard-to-diagnose issues down the road as the tools evolve and your client base expands. Not to mention that if one of your clients tries to schema-validate, they're going to get an error.

 

In short, writing web services that are fragile with respect to the order you define your members in your source code is a recipe for disaster. It's way too easy for someone to come along and make this sort of change. Of course, I think it's generally a bad idea to expose your domain entities directly via web services anyway, but even if you have the sort of serialization layer that avoids this problem, you still might get bit by this.

 

And for our system, the problem was even worse. We'd written the schema ourselves rather than letting the system generate it, so either we had to change the schema to match the serialization patterns of our types, or we had to change the serialization to match the schema.

 

Between these two choices, I believe the only realistic one is the latter. For one thing, you may not control the relevant parts of the schema…we didn't in the system I was working on. For another, there's no guarantee that Reflection (which the XmlSerializer relies on) will always return type members in the same order. It does now, and it would break a ton of things horribly if they ever did change it, but all the same the order is not documented nor guaranteed to remain stable version-over-version. But the real reason is the one I already mentioned: you just simply shouldn't have to rely on the order of properties and fields in a source file to produce a correct system. Too fragile.

 

Given that the only realistic choice is to change the serialization, how do we go about it? Well, if you're developing for .NET 1.1 you have to implement IXmlSerializable, the interface that gives you full control of the serialization process. And that's usually a pretty reasonable choice: hand-generating XML via XmlWriter is pretty straightforward, if a little tedious. You can punt the implementation of ReadXml (which is usually much harder) if the type is only used as a return value from your WebMethods (as opposed to being an input), since ReadXml will never be called in this case.
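
 

To give you a feel for the write side, here's a minimal sketch for the Person type above, punting ReadXml as described (and returning null from GetSchema, since in this scenario the schema lives in the hand-written WSDL):

 

public class Person : IXmlSerializable {
  public int Age;
  public string Name;

  public XmlSchema GetSchema() {
    // The schema comes from the hand-written WSDL, not from the type.
    return null;
  }

  public void WriteXml(XmlWriter writer) {
    // The element order is now fixed here, so reordering the fields
    // above can no longer break the wire format.
    writer.WriteElementString("Name", Name);
    writer.WriteElementString("Age", XmlConvert.ToString(Age));
  }

  public void ReadXml(XmlReader reader) {
    // Punted: only needed if the type is ever used as an input parameter.
    throw new NotImplementedException();
  }
}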

 

If, on the other hand, you're working with .NET 2.0, you can make use of the Order property on XmlElementAttribute, like so:

 

public class Person {
  [XmlElement(Order = 2)] public int Age;
  [XmlElement(Order = 1)] public string Name;
}

 

If I had to boil all this down to a set of rules, they would be these:

  1. Don't serialize your domain objects directly - use a separate set of serialization types. This is just a general best practice. (There's a quick sketch of what I mean after the list.)

  2. Consider implementing IXmlSerializable/XmlElementAttribute.Order on all WebMethod return types.

  3. You really, really should implement IXmlSerializable/XmlElementAttribute.Order on WebMethod return types when you write the WSDL yourself.
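
To illustrate the first rule, here's the sort of thing I mean - a sketch with hypothetical names, not a prescription:

using System.Xml.Serialization;

// The domain type stays free to change shape; only the DTO goes on the wire.
public class PersonDto {
  [XmlElement(Order = 1)] public string Name;
  [XmlElement(Order = 2)] public int Age;

  public static PersonDto From(Person domainPerson) {
    PersonDto dto = new PersonDto();
    dto.Name = domainPerson.Name;
    dto.Age = domainPerson.Age;
    return dto;
  }
}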

 

All this just goes to show that you have to code "both first" - it is perilous to ignore either the XML or the tools. Take control!

Saturday, April 1, 2006

The Great Thing About the Definition of "Architecture"

The great thing about the definition of "architecture" is that everyone has one. Including me.

 

My post drew responses from quite a few people - some private, some as comments on the post itself. Everyone seemed to have an opinion about what an architect does or what architecture means, although I don't think anyone actually contradicted my main (and, I guess, poorly expressed) point: that the term "architecture" is very poorly understood.

 

What was interesting was that everyone seemed to agree that (a) one of an architect's responsibilities is design, and that (b) the key to being a good architect is having a solid connection with both the technologies involved and the customer. (In fact, Martin Fowler points out how this is exactly the role an architect should take in building construction. And often doesn't.)

 

Michael Platt breaks it down into seven (!) overlapping roles. I'm not denying that the project-management aspects of Michael's breakdown are important functions, but that part seems to fall well outside the realm of software, since it would be true of any project of sufficient size, regardless of type. Also, I should point out that I never meant to imply that I thought Michael's definition was wrong or that it somehow missed the point; just that it didn't increase my understanding of anything. Or, to put it another way, I'm still too stupid about exactly what "architecture" means to gain much from the fine points he put on it. :)

 

I think the main truths (such as they are) are these:

 


  1. Nontrivial software needs a design.

  2. There's a need for meta-design in larger organizations to constrain software designs to have commonality where this makes sense.

 

My only point was that I think "architecture" has come to mean both things. I think this makes it rather useless as a term, because the two are so different. But with these two things finally separate in my head, lots of other stuff makes more sense to me now.

Wednesday, March 29, 2006

The Definition of "Architecture"

I came across this post by Michael Platt the other day. It's about the definition of the term "architecture", which is something I've been thinking about a lot recently (due to the fact that I'm writing a class for Pluralsight with the word "architecture" in the title).

 

Here's the thing: Years ago, I used to have the title "system architect", but I don't think I could have told you what "architecture" means. In fact, I don't think I could have told you clearly what it meant if you asked me last month. The closest I could have gotten was probably "what architects do". And with all respect to Mr. Platt, his definition:

 

Architecture is the use of abstractions and models to simplify and communicate complex structures and processes to improve understanding and forecasting.

 

doesn't really do it for me either. Is it really an architect's job exclusively to forecast and communicate? That doesn't sound like a complete or accurate description of what I do when I act as the architect on a project.

 

It was conversations with my friends Tim and Jeff that really cleared it up for me. They pointed a couple of things out to me. First of all, only software architects talk about "an architecture". Architects that build buildings talk about a design. This really resonates with me - I've always felt uncomfortable with the question "What's your architecture?" - I don't think that's a meaningful thing to ask. Whereas the question "What's your design?" seems far more straightforward and utilitarian.

 

So that goes straight to the question, "What should a software architect do?" It now seems fairly obvious to me that the answer is "Design software". Hopefully, these designs are based on the users' requirements - sussing out what those requirements are and turning them into a design is what distinguishes a good architect from a bad one. As is having an intimate knowledge of the technologies involved - good software architects should be reasonably good coders the same way good physical architects should know the properties of drywall and steel! Of course, neither can work in a vacuum, and both should rely on practitioners to verify and enhance the design against real-world constraints.

 

I think there's also another legitimate activity for someone that calls themselves an architect to perform. It involves the distinction between "architecture" and "Enterprise Architecture". The latter term has been so abused as to have nearly lost its meaning, but nevertheless I think it serves as well as any term we have right now as a description of an important activity; namely, that of setting up a framework within which architects can create maximally useful designs. Allow me to elaborate.

 

In "real world" construction (i.e. the construction of physical buildings), there are both architects and city planners. City planners do things like set up building codes, lay out key pieces of infrastructure, and check for conformance to rules. I believe that "Enterprise architects" should play a similar role in organizations where the resulting consistency is beneficial. We probably need a better name for this role, but Enterprise Architect serves well enough for now.

 

It seems strange that it has taken me something like ten years to understand this distinction well enough to be able to put it into words. But I feel better for having done so - as I said, I've always felt uncomfortable when people start slinging around the a-word; I guess I've just encountered too many people sporting the title "architect" who were useless, Ivory Tower, overengineering types. But with the analogy to real-world architecture clearer in my head, I can finally easily distinguish between productive and unproductive architectural activities.

 

I'm sure I'm not the first to arrive here, but based on the massive fog around terms like "architecture", "architect", and "enterprise architecture", I can't imagine I'll be the last, either.

Tuesday, February 28, 2006

SQL Server Service Broker Newbie Problems - Part III

Here's a really handy command for when you get Service Broker into a bad state, or just have so much junk in transmission_queue that you want to start over:

 

ALTER DATABASE foo SET NEW_BROKER

 

This will end any open conversations with an error and create a new broker in the database. Note, however, that it does not drop any queues or services that you may have defined, although it will empty your queues.
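
One wrinkle, offered tentatively since I'm still figuring this out: the command wants exclusive access to the database, so you may need a termination clause. Something like this:

alter database foo set new_broker with rollback immediate;

-- the old conversations are gone...
select * from sys.transmission_queue;   -- should come back empty

-- ...but queues and services are still defined (just emptied)
select name from sys.service_queues;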

 

I could be missing some important side effect of this command, of course: I'm still trying to figure this stuff out.

Thursday, February 23, 2006

SQL Server Service Broker Newbie Problems - Part II

So after figuring out my last problem, I was still unable to do something as simple as send a test message from one simple service to another. Here's how I was setting things up:

 


create queue mysenderqueue with status = ON;
create queue myreceiverqueue with status = ON;
create service mysenderservice on queue mysenderqueue;
create service myreceiverservice on queue myreceiverqueue;

declare @dialog_handle uniqueidentifier
begin dialog conversation @dialog_handle from service mysenderservice to service 'myreceiverservice';
send on conversation @dialog_handle ('This is a message');

 

Those of you that know Service Broker will already have spotted the problem. It took me a little longer. :P

 

The symptoms here were what really confused me. After running the above SQL, if I did a

 


select * from mysenderqueue;
select * from myreceiverqueue;

 

I'd see that there were no messages in myreceiverqueue, but mysenderqueue had one. WTF? Why didn't my message go to the right queue?

 

With the help of the inestimable Bob Beauchemin, I was able to figure out that what I should have been doing was this instead:

 


select cast(message_body as XML) from mysenderqueue;

 

If I had, I would have seen that the message in mysenderqueue actually said

 

<Error xmlns="http://schemas.microsoft.com/SQL/ServiceBroker/Error">
  <Code>-8425</Code>
  <Description>The service contract 'DEFAULT' is not found.</Description>
</Error>

 

So really, it wasn't that my message was going to the wrong queue, it was that Service Broker was bouncing my message and queuing an error back to the originating service.

 

Now, I was still confused, because this seems to imply that the DEFAULT contract isn't defined, when I can clearly see that it is via SQL Server Management Studio. A bit more digging revealed the following line in the docs for CREATE SERVICE:

 

"If no contracts are specified, the service may only initiate conversations."

 

While surprising (why name a contract DEFAULT if it's not the default?), this was certainly consistent with what I was seeing. Once I changed my service definition for the receiver service to be

 


create service myreceiverservice on queue myreceiverqueue ( [DEFAULT] );

 

then everything started working. Cool! Now on to the next newbie problem!
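
As a postscript: pulling the message off the receiving queue and closing out the conversation looks something like this - a minimal sketch, using the names above:

declare @handle uniqueidentifier;
declare @body varchar(max);

-- Pull one message off the receiving queue
receive top (1)
  @handle = conversation_handle,
  @body = cast(message_body as varchar(max))
from myreceiverqueue;

select @body;   -- 'This is a message'

-- End the receiving side's half of the conversation
end conversation @handle;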

SQL Server Service Broker Newbie Problems - Part I

One of the things I've been doing lately is checking out some of the new features in SQL Server 2005. And in particular, I've been looking into Service Broker. From the little research I've done so far, it seems quite exciting: a real queuing system that's integrated right into the database. One of the problems I've always had with MSMQ is that it tends to smear out your application state between a database and the opaque message store. After all, a queue is basically just a database table with some special "triggers" attached to it that move the row to another table.

 

At any rate, after doing a bunch of reading, I sat down to actually use the thing. I was hoping that it would be slam-dunk simple, but I hit a few roadblocks. I'm not ready yet to say whether that means SB is difficult to use, but I thought I'd document my problems (and the solutions) here in case anyone else runs into them.

 

The first problem I ran into was that the database needs a master encryption key. That is, in the database where my queues were set up, I needed to run this statement:

 


create master key encryption by password = 'my lame-ass password'

 

Before I did that, I wasn't seeing anything at all in either my source or my destination queues when sending a message. I was able to track the problem down with the help of Service Broker guru Dan Sullivan. He told me to run

 


select * from sys.transmission_queue

 

and sure enough, there was an error there that read

 

The session keys for this conversation could not be created or accessed. The database master key is required for this operation.

 

which made it pretty obvious what the problem was.
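
An alternative I haven't fully explored: if you don't want a master key at all, dialogs within a single database can apparently opt out of encryption entirely, which sidesteps the master key requirement (service names borrowed from my Part II post):

declare @dialog_handle uniqueidentifier
begin dialog conversation @dialog_handle
  from service mysenderservice
  to service 'myreceiverservice'
  with encryption = off;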

 

Of course, that wasn't my last problem, but I'll save those for other posts.

Wednesday, February 22, 2006

Security Training Modules

Yo, what Keith said. Videos are about 10 minutes - short enough to watch at work while the pointy-haired boss is draining his monitor.

Wednesday, February 8, 2006

Rob Blogs

I see my friend Rob Engberg has a blog. Not only is Rob my oldest friend (we met at age seven), but we "came up" professionally together, working at our first real jobs doing the same thing.

 

It looks like Rob's blog is going to be a mix of technical stuff, adventure racing stuff, and a chronicle of his upcoming move to New Zealand. Given that Rob is one of the guys I go to when I hit a weird problem that I can't Google my way out of, you can definitely count me subscribed. 

Monday, February 6, 2006

VMWare Server Free

This blog has been nearly silent for quite some time now. Why? Frankly, because I'm fairly burnt out at the moment. Like many of you, I've got a lot going on both at home and at work, and it's been wearing on me a bit. Unfortunately, it doesn't look like it's going to let up any time soon.

 

But enough complaining. Here's something interesting: it looks like VMWare Server has gone freeware. Nice. For years I've been hearing my friends say VMWare is superior to Virtual PC. Now I guess I'll get a chance to see for myself.

Monday, January 16, 2006

WebDev.WebServer.exe PathInfo Limitation

I've been interested lately in hosting the ASP.NET runtime. I'll have more to say about it later, but I've got a bunch of experiments I need to run first. One of them I just ran now, and the results were rather disappointing.

 

The easiest way to host the ASP.NET runtime is to use the Visual Studio 2005 web host executable, webdev.webserver.exe. This is the EXE that launches when you hit F5 on a web project in VS2005. You can also run it by hand by doing something like this:

 

webdev.webserver /port:8080 /path:C:\temp\mywebapp /vpath:/mywebapp

 

For the most part, it works well, and I've used it successfully on one production project already. One thing it doesn't do, though, is deal correctly with the "extra" bits after the end of extended URLs. For example, in FlexWiki, we have URLs like this:

 

http://server/default.aspx/FlexWiki/HomePage.html
 

The reason the URLs look like this is a long and complicated story. But the deal here is that the page that gets executed is default.aspx, and anything that comes after that indicates the particular wiki topic that should be rendered. You can retrieve this extra part via ASP.NET's Request.PathInfo. Unfortunately, webdev.webserver.exe does not deal correctly with paths of this form. It appears to want there to be a file at "C:\whatever\base\directory\default.aspx\FlexWiki\HomePage.html" that actually exists on disk. D'oh!
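
To be concrete about what breaks, here's a sketch of roughly how a page picks up that extra path segment - Request.PathInfo is the real ASP.NET property, but the handler body here is hypothetical:

using System;
using System.Web.UI;

public partial class DefaultPage : Page {
  protected void Page_Load(object sender, EventArgs e) {
    // Under IIS, everything after "default.aspx" shows up here,
    // e.g. "/FlexWiki/HomePage.html". Under webdev.webserver.exe,
    // the request 404s before this code ever runs.
    string topic = Request.PathInfo;
    Response.Write(Server.HtmlEncode(topic));
  }
}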

 

Fortunately, I think I can fix this. But first I need to run some more experiments. :)