Wednesday, March 31, 2004

Where Has Craig Been?

Most of these places, DevelopMentor sent me:


 


[Personalized map of the USA showing states visited]

Big Bummer

Ian Griffiths asks a great question in the comments on my post about System.Xml 2.0. Here it is:


That's excellent! But I have one question: why can't I do this?:

Book book = reader.ReadAsObject<Book>();

What's with all this casting, in a world with generics? (Of course what you really want is for the template parameter to be inferred, but sadly you cannot infer template arguments from function return types.)


A quick experiment verified my assumption: generics are not CLS-compliant, as evidenced by this code:


using System;

[assembly: CLSCompliant(true)]

namespace GenericsCLSCompliant
{
  public class App
  {
    static void Main(string[] args)
    {
      new Class1<App>().DoSomething(new App());
    }
  }

  public class Class1<T> where T : new()
  {
    public void DoSomething(T t)
    {
    }
  }
}


and the ensuing compiler error:


error CS3024: 'GenericsCLSCompliant.Class1<T>': type parameters are not CLS-compliant


What this means is that the important bits of the CLR libraries cannot leverage generics. This is a crying shame.
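
For what it's worth, you can quiet the compiler by explicitly opting the generic type out of CLS compliance. That doesn't change the underlying limitation at all - a quick sketch:


using System;

[assembly: CLSCompliant(true)]

namespace GenericsCLSCompliant
{
  // Explicitly marked non-compliant, so the CLS checker leaves it alone -
  // but CLS-only consumers can't count on being able to use it.
  [CLSCompliant(false)]
  public class Class1<T> where T : new()
  {
    public void DoSomething(T t)
    {
    }
  }
}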

Tuesday, March 30, 2004

System.Xml 2.0 = Coolness

I was just reading this (via Aaron), and saw this piece of code:






if (reader.ReadToDescendant("book"))
{
   do
   {
      Book book = (Book)(reader.ReadAsObject(typeof(Book)));
      ProcessBook(book); // Do some processing on the book object
   } while (reader.ReadToNextSibling("book"));
}






OK, the ReadAsObject bit is just so cool I can barely wait to start using it.


In case it's not obvious from the code, they've married XmlSerializer and XmlReader so you can stream through a document, yanking out objects as you go. They've also added the even cooler ability to write objects into an XmlWriter stream. To resurrect a saying from my youth: mint!
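
You can get at least the reading half of that today by positioning an XmlReader yourself and handing it to XmlSerializer.Deserialize. Here's a rough sketch - the Book shape, element names, and file name are all made up:


using System;
using System.Xml;
using System.Xml.Serialization;

[XmlRoot("book")]
public class Book
{
   public string Title;
   public string Author;
}

public class StreamBooks
{
   public static void Main()
   {
      XmlSerializer serializer = new XmlSerializer(typeof(Book));
      XmlTextReader reader = new XmlTextReader("books.xml");
      try
      {
         reader.MoveToContent();
         while (!reader.EOF)
         {
            if (reader.NodeType == XmlNodeType.Element && reader.Name == "book")
            {
               // Deserialize consumes the whole <book> element and leaves the
               // reader positioned just past it.
               Book book = (Book)serializer.Deserialize(reader);
               Console.WriteLine(book.Title);
            }
            else
            {
               reader.Read();
            }
         }
      }
      finally
      {
         reader.Close();
      }
   }
}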

Sunday, March 28, 2004

DC Area Nerd Gathering

Tony Nassar proposes that we do a little get-together for those of us living in the DC area on Wednesday night, March 31st, starting at 6:30 at On The Border in the Tyson's Corner area. I think it's a fine idea! After all, we can't let the Denver and Portland guys have all the fun. Leave comments here if you plan to attend.

Saturday, March 27, 2004

Dot's Response

As is pleasingly typical when I have an issue with Microsoft, I got a response to my complaints about GotDotNet. And it was a very good one. Thanks Andy! I'm sure people will be pleased to hear you're aware of the issues and are working on the problems.


Of course, I'm still not coming back just yet. :) The fact that the site should be getting faster soon is great news, but like I said before, tools and stability are the main issues. I'd happily pitch in and write some decent tools myself, but I have lots of other projects that I need to work on first.

Wednesday, March 24, 2004

Dear GotDotNet

Chris Sells suggested that I send my specific complaints about GotDotNet along to the people that work on the site, and kindly pointed me to the right person at Microsoft. This is something that I should have done before my last post, of course, but better late than never. I sent them this email, which I'm posting here:


 


I'm not sure if you saw my recent blog post about leaving GotDotNet for SourceForge, but after reading it, Chris Sells suggested that I get in touch with you about the specific complaints I have. It's a good idea, and only fair, so here goes.


 


I should be clear that I don't really have a problem with any of the other features - but I've been doing a lot of software development on GDN lately, and my frustration level reached the breaking point. To wit, I have three main complaints with the workspaces feature of GotDotNet:


 


1) Reliability. I find that GDN frequently loses changes that I have checked in. I have to constantly diff a fresh download against what's on my hard drive to make sure I don't lose stuff.


2) Tooling. The tools are really not good. I have tried all five options for interacting with the source control system - web interface, web control, VS.NET integration, wksalone and wkssync - and they are all painful. None of them even comes close to the elegance and ease of use I can achieve using CVS and TortoiseCVS. On top of that, I find the enforcement of the VSS lock-on-checkout model very limiting, although I know that's a matter of personal taste. And I won't even start on the fact that there is no integration whatsoever with popular build tools such as NAnt or CruiseControl.NET - something I find to be crucial to doing distributed team development.


3) Performance. This is actually the least concern, especially if there are decent tools available. But GDN is slow, and that's annoying given the tools that are available.


 


In short, what I want GDN to be is a reliable source control provider with good tools. If, above and beyond that, it provided things like bug tracking, team websites, discussion boards, mailing lists, wikis, and the other bells and whistles that a distributed development project can use (and I know some of those things are available now), that would be icing on the cake. But stability and capability first.


 


I've posted this message on my weblog, to let people know that I was lame enough not to have written this email *before* posting my "Dear Dot" message. I would be more than happy to post any followup you provide. Just let me know.


 


Please do let me know if there's anything I can do to help. I've left GDN for the projects I work on, but that doesn't mean I don't want it to succeed. I'm all for having lots of good options.


 


Thanks!


 


-Craig Andera

Dear Dot

Dear Dot,


I want you to know I'm leaving you. I'd love to use that old line, “It's not you, it's me,” but unfortunately, it's you. I just can't count on you. I can't even remember how many times I've asked you to check in, only to be disappointed. And you're a dawdler - I got tired of asking you a question, only to have to sit around and wait for the answer. To be even more blunt, you're not sophisticated enough - I have fairly refined tastes, and you simply don't seem interested in all the finer things that I need to lead a rich existence.


Since you'll probably hear about it, I might as well tell you that I've met someone else. I don't want to mention any names, so I'll just refer to this other as “SF”. Oh sure, SF isn't perfect, but we share a lot of the same interests, and we just have a lot easier time getting along. And yes, it's true, I've been visiting SF on the side for quite some time now, so this isn't just some fling. SF, frankly, is just a lot more mature than you are, and I need that.


I hope you're able to get over your problems. I'd like to see you succeed in life, if for no other reason than that a lot of my friends still hang out with you. All the best,


-Craig


(With apologies to Don Box.)

Tuesday, March 23, 2004

DotNetDevs

Brad Wilson has launched a new site - DotNetDevs. He's posted a few articles there, and plans to post more. Looks like it's going to be a pretty decent collection of material.

Friday, March 19, 2004

System.IO.Compression

I'm lucky enough to be getting paid to write stuff against Whidbey right now. Although the tooling leaves something to be desired, the APIs are pretty solid, and it's been interesting using the new stuff. Today, I came across a namespace that I think falls squarely into the “I could never give up the CLR libraries, and now they're just getting better” camp:


System.IO.Compression


It has classes like GZipStream in it. They wrap a regular stream and compress or decompress whatever you pass through it. Nice, eh?
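
For example, here's a minimal sketch of compressing a file by wrapping a FileStream - the file names are made up, and I haven't benchmarked any of this:


using System.IO;
using System.IO.Compression;

class CompressDemo
{
   static void Main()
   {
      // Everything written to the GZipStream comes out gzip-compressed
      // in the underlying destination stream.
      using (FileStream source = File.OpenRead("data.txt"))
      using (FileStream destination = File.Create("data.gz"))
      using (GZipStream gzip = new GZipStream(destination, CompressionMode.Compress))
      {
         byte[] buffer = new byte[4096];
         int read;
         while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
         {
            gzip.Write(buffer, 0, read);
         }
      }
   }
}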

Thursday, March 18, 2004

Ivan Comes Through

I reported yesterday that the autogenerated web service proxies are pretty thoroughly broken when it comes to optimizing network roundtrips. That's still true, but the wonderful Ivan Towlson reminded me of a feature that is currently saving my bacon. He pointed out the UnsafeAuthenticatedConnectionSharing property on the proxy, which causes it to reuse the connection, saving a whole lotta roundtrips. It's “unsafe” because you need to be careful not to reuse the connection for another user against the same web service, as you might wind up providing the wrong credentials. In my case, that's not really a big deal.
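
Using it is basically a one-liner on the proxy. A minimal sketch, assuming a generated proxy class I'll call FlexWikiService (the name is made up):


FlexWikiService proxy = new FlexWikiService();
proxy.Credentials = System.Net.CredentialCache.DefaultCredentials;

// Reuse the authenticated connection across calls. It's "unsafe" because the
// cached connection must only ever carry one set of credentials - fine here,
// since the tool runs as a single user.
proxy.UnsafeAuthenticatedConnectionSharing = true;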


I love weblogging - it's sort of like a reverse Google. You put the query out there (after doing your homework, of course), and the results come to you. In return, you try to provide useful information once in a while.


Thanks Ivan!

Wednesday, March 17, 2004

Horrible, Horrible WebService Proxies

[Update: Ivan helped me find a partial workaround. Details here.]


I knew that my implementation of FwSync (my tool for uploading/downloading files from a FlexWiki instance for offline editing) was a little slow sometimes. I figured it was because the web service interface is a little chatty - it requires a roundtrip for every file I'm submitting or retrieving. Boy, did I not know the half of it.


I whipped out a packet sniffer and took a look at what was going across the wire. Imagine my surprise when I discovered that every request to the webserver was turning into three request-response network roundtrips. Ouch! A little more digging, and I found out that I was far from the first one to have this problem - it turns out to be a consequence of the fact that I have Integrated authentication turned on on the webserver, so I have to make one roundtrip to find out that authentication is required, another to get the challenge from the webserver, and a third to actually make an authenticated request. Because the autogenerated web service proxies (the ones you get when you say “Add Web Reference”) don't hold on to connections, this process is repeated for every single method call.


At first, I didn't think this was so bad. That's because I knew there was a PreAuthenticate property on the proxy that supposedly told the proxy to send the credentials with the first request, rather than waiting for the “access denied” message to come back. I'd still have a second roundtrip for the challenge-response, but I figured I could optimize that away using the CredentialCache somehow.


Wrong! Oh so freaking wrong!


As it turns out, the autogenerated proxies are busted in a whole bunch of ways. Jon Flanders set me straight on that count - there is no way to get the proxies to preauthenticate when using Integrated authentication. To even get it to work with Basic authentication, you have to follow the instructions here...but forget about Integrated. Lame!
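
For reference, the usual Basic-only workaround amounts to overriding GetWebRequest on the generated proxy and attaching explicit credentials so they go out with the first request. A rough sketch - the proxy class name and credentials are made up, and again, this doesn't help with Integrated:


using System;
using System.Net;

// "MyService" stands in for the VS.NET-generated proxy class.
public class PreAuthProxy : MyService
{
   protected override WebRequest GetWebRequest(Uri uri)
   {
      WebRequest request = base.GetWebRequest(uri);

      // Register the credentials explicitly as Basic so the Authorization
      // header is sent up front instead of waiting for a 401 challenge.
      CredentialCache cache = new CredentialCache();
      cache.Add(uri, "Basic", new NetworkCredential("user", "password"));
      request.Credentials = cache;
      request.PreAuthenticate = true;

      return request;
   }
}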


If someone knows of an easy way to get this to work (without having to muck with the server, although I might do that too), I'd love to hear about it.


 

Monday, March 15, 2004

What a Difference Two Words Makes

I was just reading some piece of senseless anti-Microsoft ranting, and I realized that there are two words people could use to raise my estimation of them enormously. Let me illustrate.


If you say, “Microsoft wants to lock us in to their proprietary protocols so they can rule the world,” the thought going through my head is, “OK, you drank the other team's Kool-Aid. I'm going to disregard what you say from here on in.” Even though I think the sentiment is probably correct at some level.


But if you say, “Companies like Microsoft want to lock us into their proprietary protocols so they can rule the world,” the thought that goes through my head is, “Yep.”

Friday, March 12, 2004

Dealing with Exit in an MDI App

So, I had this problem. I'm writing an MDI application (FlexWikiPad, if you're wondering), and I finally got around to implementing the feature that asks you to save your documents when you close the application. I figured it would be easy. And it was, ultimately, but it took a fair amount of experimentation and documentation-diving. So I figured I'd post the answer here, to benefit future Googlers.


The issue I'm having comes out of the order in which messages are received. When you close an MDI child window, it gets a Closing event, whose handler receives a CancelEventArgs argument. If you don't set the Cancel property to true, you'll later receive a Closed event as well. This is handy, because during the Closing event handler, you can pop up the classic “Would you like to save? Yes, No, or Cancel” dialog box, and cancel the close of the window if they choose the Cancel button. Then you can do any cleanup in the Closed event handler, which will only fire if the close is not canceled.
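
The per-child piece looks roughly like this sketch - the dirty flag and save logic are made up:


using System;
using System.ComponentModel;
using System.Windows.Forms;

public class DocumentForm : Form
{
   private bool isDirty = true;   // hypothetical "unsaved changes" flag

   protected override void OnClosing(CancelEventArgs e)
   {
      base.OnClosing(e);

      if (!isDirty)
      {
         return;
      }

      DialogResult result = MessageBox.Show(
         "Would you like to save your changes?",
         "Save",
         MessageBoxButtons.YesNoCancel);

      if (result == DialogResult.Yes)
      {
         // Save();   // hypothetical save routine
      }
      else if (result == DialogResult.Cancel)
      {
         e.Cancel = true;   // keep this window (and the app close) from proceeding
      }
   }

   protected override void OnClosed(EventArgs e)
   {
      base.OnClosed(e);
      // Cleanup that should only run when the close actually happened.
   }
}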


This has worked great for me for dealing with situations where a user explicitly closes each individual child window. The problem arises when they close the parent application. When they do this, I get Closing events on all of the open child forms before the Closing event of the main form fires; only after that do the child forms (and finally the parent) get their Closed events. So if I have two children open, the sequence goes something like this:



  1. child1 Closing

  2. child2 Closing

  3. parent Closing

  4. child1 Closed

  5. child2 Closed

  6. parent Closed

Which pretty much sucks for me, because it means I don't know what to do when closing the app. If the parent's Closing event would only fire first, then I could set a flag somewhere that says, “Hey, we're in the middle of closing.” But because I don't know whether the first child1 Closing event happened because a user explicitly closed the child, or because someone did a File->Exit, I'm stuck.


“OK,” you're thinking, “why not just choose not to prompt in the child Closed event?” Well, the issue here is that it's too late. The form has already closed. I want them to be able to pick Cancel and stop the application from shutting down if they realize halfway through that they've made a mistake. And I can't really just record what they chose in the first child Closing, because at the time that event fires, I don't know if it's related to the application shutting down.


Fortunately, the answer appears in the documentation for the Form.Closing event. It turns out that if a child cancels its own close, the CancelEventArgs.Cancel that gets passed in to the parent's Closing event will be set to true. If you keep it set to true, the application close will cancel, and you'll be all set. If you decide you want to force a shutdown, just flip it back to false.
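
On the parent side, that works out to something like this sketch (again, the details are made up):


using System.ComponentModel;
using System.Windows.Forms;

public class ParentForm : Form
{
   protected override void OnClosing(CancelEventArgs e)
   {
      base.OnClosing(e);

      // If any MDI child canceled its own close (say, the user picked Cancel
      // in the save prompt), e.Cancel shows up here already set to true.
      if (e.Cancel)
      {
         // Leave it true to abort the shutdown, or set it back to false
         // to force the application to close anyway.
      }
   }
}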


Oh, and if your File->Exit handler uses Application.Exit instead of this.Close(), none of these events will fire, and you'll be screwed. Don't do that. :)

Game Programming Tutorial

Via Rory, I came across this interesting set of tutorials. They walk you through the process of designing and building a 2D game using Managed DirectX. The focus is on the construction of the game itself, rather than on the DirectX APIs, so I think it makes a great complement to the stuff I've been writing.

Wednesday, March 10, 2004

Extreme Game Programming

David Weller suggested that since I'm into DirectX and Test-Driven Development (TDD), I should post about how (in my experience) the two mix. I've gotten far enough now on my game to have run into a few issues, so it seemed like a good time to share.


The fundamental problem with TDD is that certain things are hard to test. Tests are easy to write when you're working with code that's designed to be called by you in some particular order - some sort of library. However, tests are much harder to write for certain mixtures of code that's written by you and called by you, code not written by you but called by you, code not written by you that calls you, and so on.


The classic example of this is a Windows Form. A typical scenario here is that someone clicks the Open File menu item, which your code reacts to by popping up a dialog box asking for a file name, and then goes off and opens that file. It's a (fairly) complex interaction between code that you wrote and code that you didn't, calling back and forth between each other. Plus, we throw in interaction with the user. Similarly, in a game, we have users providing input via a keyboard or a joystick, to which our code reacts by rendering pixels onto a screen.


Obviously, to get automatability and repeatability, you don't want to have to require users to give input during a test. Particularly in a game, where a difference in user input as small as a microsecond could change the code path through the game. So a common approach is to separate out the pieces that you control from the pieces you don't by abstracting them behind an interface. When testing the application, you hand it an implementation of the interface that pretends to pop up a dialog box (or whatever), but actually always returns the same value. When running the application “for real”, you hand it an implementation of the interface that actually pops the dialog box (or whatever). These fake test-time objects are often called “mock objects”, and you can read more about them here.
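
For the file-open case, a minimal sketch of that split might look like this (the interface and class names are made up):


using System.Windows.Forms;

public interface IFilePrompt
{
   string AskForFileName();
}

// The "for real" implementation: actually shows the dialog.
public class DialogFilePrompt : IFilePrompt
{
   public string AskForFileName()
   {
      OpenFileDialog dialog = new OpenFileDialog();
      if (dialog.ShowDialog() == DialogResult.OK)
      {
         return dialog.FileName;
      }
      return null;
   }
}

// The test-time implementation: no user required, always the same answer.
public class FakeFilePrompt : IFilePrompt
{
   public string AskForFileName()
   {
      return @"c:\tests\sample.txt";
   }
}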


The problem with using mock objects is that - although you try to get close to the same behavior - you (obviously) never quite make it. You're testing most of the code you'll be running in production with, but not all of it. I wanted to avoid this problem with my game, so I figured, “Well, I'll just run the tests against the real Direct3D objects.”


At first, everything was great. I was able to write tests that created windows, drove input (from a script, not from the keyboard) and asserted correct behavior. It meant that the unit tests would pop up windows while they were running, but it was sort of funny to watch the game play at like ten times normal speed. But then I tried to integrate the tests into the build.


I've set up Draco.NET to do what's called Continuous Integration (CI). You can read about it here, but the basic idea is to build every time code is checked in. I've found it to be enormously useful for letting me know immediately when I've checked in non-working code, or code that breaks something someone else is working on.


Here's the problem: Draco.NET runs as a service. In a non-interactive window station. Which means that Direct3D refuses to initialize, just like it would if you somehow tried to fire up a game while a screensaver was running. So my unit tests were all failing, and as a result, the project refused to build.


I had two choices. My preference was to figure out a way to get CI working with Direct3D. But after goofing around with it for a while, I realized that either I was going to have to set up the CI service to run as SYSTEM (to allow interaction with the desktop) or I'd need to convert the CI system to run in a logged-in session. Direct3D just needs to run in an interactive window station.


Since I use the build machine for other purposes, neither of those was attractive. I was going to wind up dealing with having to stay logged in all the time (and disabling the screensaver, too), or having random windows pop up and obscure my work from time to time. No thanks.


So here I am, back at the Mock Object stage. It looks like I'll need to create an IGraphicsDevice interface that will wrap up either a RealDevice or a MockDevice object. RealDevice will call all the Direct3D objects. MockDevice will just simulate those calls, allowing me to test my game logic. It's going to involve a performance hit, but I'm not sure how much of one - my game is 2D, fairly simple, and written in C#, so it probably doesn't matter.
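
The shape of it will probably be something like this sketch - IGraphicsDevice, RealDevice, and MockDevice are the names I mentioned above, but the members shown are just illustrative:


using System.Collections;
using System.Drawing;

public interface IGraphicsDevice
{
   void Clear(Color color);
   void DrawSprite(string textureName, float x, float y);
}

// Run-time implementation: forwards everything to the real Direct3D device.
public class RealDevice : IGraphicsDevice
{
   public void Clear(Color color)
   {
      // the actual Direct3D Clear call goes here
   }

   public void DrawSprite(string textureName, float x, float y)
   {
      // the actual Direct3D sprite drawing goes here
   }
}

// Test-time implementation: just records what was asked of it, so game logic
// can be verified without an interactive window station.
public class MockDevice : IGraphicsDevice
{
   public int ClearCount;
   public ArrayList DrawnSprites = new ArrayList();

   public void Clear(Color color)
   {
      ClearCount++;
   }

   public void DrawSprite(string textureName, float x, float y)
   {
      DrawnSprites.Add(textureName);
   }
}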


Oh well - at least I can reuse the objects elsewhere when I'm done.

Thursday, March 4, 2004

Scott Colestock Has a Blog

Scott Colestock has a blog. Scott is an old friend from when I used to live in Minneapolis. He was one of the best software guys I knew up there, and it looks like his blog is going to be worth keeping an eye on. Subscribed.

Getting Started With TDD

Someone asked me the other day about how to get started with Test Driven Development (TDD). Since I find TDD to be such an enormous benefit to my own personal software development experience, I thought I'd share how I got going in hopes that it will help someone else.


It turns out to be pretty simple from a technology standpoint. In fact, you don't really need any tools above and beyond what you already have, as TDD is a process and not a technology. That said, I've found that NUnit is a pretty handy thing to have.


You can read in all sorts of places about the general process of TDD, so I'll just summarize: you don't write any code without writing a test for it first. Yes, that means the test won't compile when you first write it. That's okay: think of it as a design document for what you want your code to do. Once you have the test calling the API the way you want it to, you can get the code written to the point where the code (and the test) compiles, and from there, to the point where the test succeeds.
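
To make that concrete, here's a minimal sketch of step one with NUnit - a test for a Stack class that doesn't exist yet (the class and its members are made up, which is exactly why this won't compile at first):


using NUnit.Framework;

[TestFixture]
public class StackTests
{
   [Test]
   public void PushThenPopReturnsSameItem()
   {
      Stack stack = new Stack();   // this type doesn't exist yet - the compiler errors are the to-do list
      stack.Push("hello");
      Assert.AreEqual("hello", stack.Pop());
   }
}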


Here's the thing about TDD: it's all about discipline. There's nothing stopping you from falling off the wagon and writing some code without writing a test first. None of the tools will stop you: NUnit won't care, the compiler won't care, no one except you will care - well, you and the rest of your team, if they've bought into TDD. Even running code-coverage tests during your build won't ensure that you wrote the tests before you wrote the code.


As a result, it's up to you to ensure that you follow the steps: write the test, then write the code. The thing is, after a while, it gets a lot easier because you start to see the benefits. You still get tempted, “Eh, I'm not changing much,” so it's still about discipline, but discipline is easier in the face of the tangible rewards. Here are some of the benefits I've experienced:



  • Higher code quality. Obviously, testing helps ensure what I'm writing works the way I want it to. But one of the things I didn't see coming when I started doing TDD was that by writing code that uses the code I'm working on first, I tend to get the API more right on the first try. Because a test is also a use-case.

  • Easier to remember what I was doing. When I leave a project and then come back to it later, I find TDD makes it easier to remember what I was working on, because it's almost always in one of the following well-defined states:


      • Not compiling. I was either in the middle of writing a unit test or had finished a test for code I hadn't written yet. In either case, the compiler tells me what I need to do.

      • Compiling but test is failing. I haven't finished writing the method for the failing unit test.

      • Compiling with working unit tests. I had finished the feature I was working on last, and am ready to move on to the next.

  • Easier to make changes with confidence. This is a huge one. When I change something, I already have lots of tests that ensure that what I changed doesn't break the code.

  • Integrates into the build. If you use NUnit, it's easy to integrate into the nightly/hourly/whatever build, so when you forget to check in some configuration file, the unit tests will fail and you'll catch the problem right away.

  • Often results in better design. You've probably experienced the phenomenon that when you have two pieces of code that call your code, the demands of the second caller cause you to refactor in a way that improves your design. Well, the test is your first caller, and the working program you're actually writing is the second. So you start with two callers, and you have to think about the API design that much sooner...usually with good effect.

In short, the best way to get started is just to get started. You'll probably convince yourself that it's a good idea the first time you find a bug you wouldn't have otherwise, or come to a design realization that would have taken a lot longer in the absence of a good test suite.