Saturday, November 29, 2003
Wednesday, November 26, 2003
This week is the Thanksgiving holiday in the US. Next week I'll be in Zurich, Switzerland, speaking at DevDays. As I have no idea if I'll even be able to get online in Europe, the blogging will be a bit light for a while, and I might be a bit slow on the email uptake.
The good news (unless you think not hearing from me for a bit is the good news) is that with my DevDays slides finally done, I can spend my weekends on something else. I plan to use some of that time to write a bit more in the ol' Direct3D Tutorial Series, which I have sadly neglected. That plus a port to dasBlog should keep me busy.
Monday, November 24, 2003
Peter Provost has a great introduction
to test-driven development here.
Although I haven’t yet read any of the Extreme Programming canon, I have
been doing a lot more programming over the last year than was previously the
case, and I find myself gravitating naturally towards some of the precepts of XP.
I’d love to give Test-Driven Development a full-blown try, but when you’re
a consultant, you have to go with what the client wants. Maybe I can find a way to
work it into one of my side projects.
I've always been a big fan of shortcut keys. Today I happened
to mistakenly hit a few keys, and discovered a few more I never knew about (this is
on Windows XP).
WinKey + LeftArrow = Minimize Current Window
WinKey + UpArrow = Maximize Current Window
WinKey + DownArrow = Restore Current Window
Now if I could just find the shortcut key for FixBug.
Or better yet, CureSleep.
Update: I'm a big fat idiot.
So it seems these aren't commands that are built into the shell, but rather just some
of the default mappings added by WinKey.
WinKey totally rocks - I use it so much, I even forget that it's there. Like I did
for this post.
Friday, November 21, 2003
I ran into something that took me almost all of yesterday to figure out. I was trying
to set up a series of virtual directories in IIS. The basic idea was that I had a
set of ASPX pages and a set of ASP.NET Web Services, and that they should be organized
under a single URL, like http://localhost/MyApplication.
I couldn’t put everything into the same virtual directory, because we already
had a disk structure defined that located the ASMX and ASPX pages in different places.
So I figured I’d map /MyApplication to the ASPX directory, and just create a
child virtual directory like /MyApplication/WebServices that mapped to the ASMX directory.
(You did know that virtual directories can contain other virtual directories, didn’t
you? And that they can point to any physical directory, even one that isn’t
a child of the parent vdir’s physical directory?)
The problem I ran into immediately with this approach was that some of the settings
in my ASPX web.config were incompatible with my web services. This is a problem because
many web.config settings are inherited by child virtual directories. My solution to
this was to create two children of /MyApplication - /MyApplication/UI and /MyApplication/WebServices.
Both children had to be proper virtual directories, and not just regular directories
underneath a virtual directory. This is because the two physical directories in question
don’t share a common parent physical directory, and some of the things in my web.config
file only work when the web.config file lives in a virtual directory root.
It was a bit annoying that I was going to have to type http://localhost/MyApplication/UI
instead of http://localhost/MyApplication to get to the ASPX pages, but it’s
a minor annoyance, and if it really bugs me I can fix it later with a server-side
remapping or a simple client-side redirect. So I mapped the parent virtual directory
off to a random empty directory on my hard drive, mapped the two child vdirs off to
the appropriate place, and went on coding.
Everything was going great until I tried to install the stuff on someone else’s
machine. I’d written a setup script, and although it seemed to run to completion,
the app didn’t work when I was done. It was failing in all sorts of weird ways,
doing things it just shouldn’t. Most troubling was when it would error out because
it couldn’t find the current username…because even though I had ASP.NET
Forms Authentication set up, it was never redirecting to the login page!
Ultimately, I found the answer through trial and error. It turns out that during setup,
the check-out from source control wasn’t creating the empty directory that the
parent virtual directory was mapped to. And for whatever reason, if you
have nested virtual directories, if the parent directory does not exist, ASP.NET
completely ignores the web.config files in the child virtual directories. Leading
to the highly confusing behavior I mentioned. The solution was simple: just make sure
the parent vdir points to a valid physical directory, and everything returns to normal.
Wednesday, November 19, 2003
Monday, November 17, 2003
How cool is that? Evil supervillains of the world, eat my Google dust! :)
It’s this previous post, by the way. Kudos to Rob Engberg for both inspiring the original post and pointing out the tasty Google goodness.
I think this is
a great, short read. It takes a crack at defining a taxonomy of the role of "architect"
and does a passable job of explaining why the term is so nebulous to begin with. The
dig at the AOP at the end didn't hurt my opinion of the piece, either. ;)
The last company I worked at called pretty much everyone
an architect. That always seemed weird. Now I have a better idea why.
Saturday, November 15, 2003
I’ve just created a new GotDotNet workspace for ftpsync,
the tool I’ve been writing this weekend when I should have been doing work.
But you know how it goes – you sit down to update your webpage, you get tired
of trying to figure out which files you’ve changed since the last upload, you
look around for a free tool that does incremental FTP upload, and you don’t
find anything free and good. So you write one yourself…we’ve all been there.
It turned out reasonably well, given that I only worked on it for a few hours, so
I figure I’d throw it up on a workspace and see what happens. It needs better
error handling, and I’m not super-happy about the FTP library I’m using,
which saved me a ton of time but has a somewhat clunky interface and somewhat
weird error handling. Of course, it works and it’s free, so I’m
not complaining too much.
Basic usage of the tool goes something like this:
ftpsync -s my.ftp.server.com -u myusername
-p mypassword -ld C:\local\directory -rd /initial/remote/directory
This would cause the program to upload via FTP anything that lives in C:\local\directory
and is either not
present on the FTP server, or
present on the FTP server but older than what’s in the local directory.
You can add the -r switch to recurse directories, and a delete switch to remove anything
on the remote server that isn’t present locally. I also threw in support
for the ignore files (-if) and ignore directories (-id) switches. Oh, and a -debug
switch that spews tons of extra info.
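The heart of the tool is the incremental-upload decision described above: push anything that’s missing on the server or older there than it is locally. Here’s a hypothetical Python sketch of that rule (an illustration of the idea only, not ftpsync’s actual code):

```python
# Sketch of the incremental-upload decision described above.
# Inputs are maps of relative path -> last-modified time (seconds).
# This illustrates the rule, not the real ftpsync implementation.

def files_to_upload(local, remote):
    """Return local paths that should be uploaded: those not present
    remotely, plus those whose remote copy is older than the local one."""
    uploads = []
    for path, local_mtime in local.items():
        remote_mtime = remote.get(path)
        if remote_mtime is None or remote_mtime < local_mtime:
            uploads.append(path)
    return sorted(uploads)

def files_to_delete(local, remote):
    """With the delete option enabled, remote files with no local
    counterpart get removed."""
    return sorted(path for path in remote if path not in local)

if __name__ == "__main__":
    local = {"index.html": 200, "img/logo.png": 100, "new.css": 50}
    remote = {"index.html": 150, "img/logo.png": 100, "stale.txt": 10}
    print(files_to_upload(local, remote))  # index.html is newer; new.css is missing remotely
    print(files_to_delete(local, remote))  # stale.txt has no local counterpart
```

The real tool obviously also has to fetch the remote listing over FTP and issue the uploads, but the comparison step is where all the interesting decisions live.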
Anyway, I’ve already had success integrating this into an NAnt build that I run
to update this website, so I’m getting my money’s worth. It seems stable
enough, but if there are feature requests or bug reports, leave a note on the workspace…or
better yet, join and upgrade/fix it yourself ;) .
I’ve got a few other languages in the pipeline. These articles get a fair few hits a day, so if you’re looking for more blog traffic and would like to do a translation, I’m more than happy to include a link to your website from the articles.
I’ve been on a bit of a quest lately, looking for a better command line experience. I tried bash for a while (under Cygwin), and that was pretty cool. It took me back to my Unix days, and is clearly a first-rate product. But I kept running into differences between Windows’ idea of paths (e.g. C:\data\writing) and Cygwin’s idea of a path (e.g. /c/data/writing). Personally, I like forward slashes better, but since I’m stuck with Windows pathnames for a lot of applications, I was sort of screwed.
During the course of thinking about the problem, I actually realized that I really have two requirements that I had been confusing:
1) A nice interactive shell for doing things like executing programs and listing directories.
2) A scripting language for automating repetitive tasks.
Shells like bash and tcsh address both features, but really, they’re two separate things, and a large part of the value comes from all the other little command-line tools that are part of something like Cygwin, rather than from the shell itself. Those are available separately, so I figured if I could find two different products to satisfy both needs, then I’d be set. I’m playing with Python to see what I think of it for requirement #2. C# is an option there, too.
To figure out what to do about my first requirement, I asked around a bit, and some hard-core developers over on the Off-Topic Mailing List pointed me to 4NT as a good replacement for the Windows command shell. I downloaded the trial and had to agree that it is indeed pretty darn cool. But I think I’m going to stick with cmd.exe. The big reason I’m even contemplating this is this registry key:
According to the documentation, if you list a file under this key, it will be run at the start of every cmd.exe session. Well, this is awesome. It means that I can set it to something like C:\home\cmdrc.cmd and then populate the file with something like this:
@echo Running cmdrc.cmd
@SET PATH=c:\bin\nant-0.8.3.50105;%PATH%
@call "%VS71COMNTOOLS%vsvars32.bat"
Now, whenever I fire up cmd.exe, it sets my path appropriately, and executes the vsvars32.bat file, which does all sorts of environmental goodness to enable programs like the C# compiler from the command line. Best of all, when I add a new program that I want to be able to run from the command line, I just have to throw a new line into my cmdrc.cmd file, and restart any command shells I have open. This is waaaay better than having to muck around with the environment variables dialog box and having to kill explorer.exe. Plus, when I reinstall my system, it’s just a matter of copying this file over to preserve all my hard-fought configuration.
Thursday, November 13, 2003
I'm running smack into two big limitations of WSDL.EXE,
the tool used to generate client-side proxies for web services in .NET. Specifically,
here are the problems:
1) WSDL.EXE relies on the same code as XSD.EXE to map
the XML types into programmatic types. Unfortunately, it generates types with public
fields rather than properties. This prohibits data binding. I'd like to change this.
2) If you run WSDL.EXE against two different WSDL documents with exactly the same
XSD type in them, it generates two programmatic types. That is <foo/> turns
into NamespaceA.Foo and NamespaceB.Foo. This is a problem if you want to read a Foo
from web service A and pass it to web service B.
Neither of these problems is insurmountable. The problem
is that the solutions aren't elegant. Better than either would be for someone to tell me,
"Hey, you just need to download SuperWsdl.exe; it does everything you need." Unfortunately,
I'm not sure SuperWsdl.exe exists. I'd prefer not to write it myself.
Anyone got any advice? Leave a comment.
So, is it just me, or is Copy and Paste acting up in IE
lately? I'm seeing this on multiple machines not even running the same OS, but half
the time when I copy something from a web page via Ctrl-C, it doesn't take. I have
to go back and copy it again.
It's enough to make a guy switch to another browser.
Wednesday, November 12, 2003
Tuesday, November 11, 2003
I’m still reading Beyond
Fear a bit at a time…it’s still excellent. As a species we just don’t get
security, something that has become even more clear in the US in the last two years.
My favorite quote so far:
I have a friend who has, on almost every one of the many flights he has taken since
9/11, presented his homemade “Martian League” photo ID at airport security
checkpoints – an ID that explicitly states that he is a “diplomatic official
of an alien government.” In the few instances when someone notices that he is
not showing an official ID, they simply ask for a valid driver’s license and
allow him to board without a second glance. When he noticed that the gate agents were
scrutinizing expiration dates on IDs, he simply made another version of his Martian
League card that included one.
Aside from being amusing, what’s interesting is that – in the context
of the book – the conclusion we draw isn’t, “Airport screeners are
stupid,” but rather, “Authentication is a hard problem, and is misapplied
and inappropriately used at airports.”
Monday, November 10, 2003
In V1, ASP.NET hit
a home run by focusing like a laser beam on the developer experience. Everyone
put so much effort into building apps, questioning why each step was necessary, and
refining the process. It's great to see that they continue to follow that same
discipline. In the drill-down sessions, over and over again I saw that focus
resulting in a near perfect experience for developers. There
are some other teams, like Avalon, that seem to have a similar religion and are obtaining
similar results. (Though Avalon desperately
needs some tools support. Notepad is
fine for authoring XAML in demos, but I wouldn't want to build a real application
that way.) All of this really reinforces what I've been learning over the last year, as the amount of "real
code" I've been writing has skyrocketed from my previous mostly-research-and-sample-code
lifestyle. API design is tough, but when you get it right, it makes a world of difference. Don talks
about this on MSDN
TV, where he speaks at some length about the value and danger of abstractions.
It's one of the main reasons I have hope for Indigo. And it's one of the main reasons
that I worry about WinFS: if the API is anything like the ADSI model (or the OLEDB model),
forget it: no one will ever use it. Those that do will hate it. But I've looked
at neither in depth, so this jury is still out.
Most of my development work until recently has been either
without source control (gasp!) or using Visual SourceSafe. Well, for the last year
I've been doing a lot of development, and have unsurprisingly come to really like
source control. At home, I use CVS to do things like track changes to my website,
so I can roll back if I screw something up. At work, I use it for the usual purpose:
coordinating work on the same codebase with other developers.
One of my clients is currently evaluating a new source
code control system - Borland's StarTeam. It looks pretty nice, and there's a reasonable
chance they'll implement it. Since I'm doing prototype work for them, I'm using it
exclusively right now. Having experience with CVS and one or two other non-Visual
SourceSafe products, it wasn't too hard to get used to the model. But it's going to
be a stretch for some of their developers who haven't used a merge model source control
system before, the same way it hurt my brain a bit when I moved off of SourceSafe.
One of the things that I've found helpful is to picture
the various states a file can be in as a grid. The grid tracks the state of the file
in the repository versus the state of the file in the working directory. The file
can be in one of three states in each of these two places: unchanged, changed, or
not present. "Changed" and "unchanged" are relative to the file in the other location,
so "changed" on the repository axis means that the file in the repository has
changes relative to the working directory (i.e. someone else has checked in a new
revision since you checked it out).
Because one of the things that differs between source
control systems is the terminology, the really valuable part of the grid (for me)
is the intersections, where I record the meaning of each of the possible combinations.
Here's what the grid for StarTeam looks like:
                         Working Directory
                         Unchanged     Changed       Not Present
Repository Unchanged   | Current       Modified      Missing
           Changed     | Out of date   Merge
           Not Present | Not in view   Not in view   N/A
It should be fairly straightforward to substitute the
terms from your source control system for the StarTeam ones. Note that this chart
is a simplification of what's really going on - it doesn't help you figure out branching,
for example - but I found it useful as a starting point, and hopefully this will help
someone avoid one or two of the mistakes I made while getting used to the new systems.
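If it helps to see the grid as code, here’s a hypothetical Python sketch that maps a (repository state, working directory state) pair to the StarTeam term from the table above; the state names are my own labels, not StarTeam’s:

```python
# Sketch of the state grid above: given the state of a file in the
# repository and in the working directory (each "unchanged", "changed",
# or "absent", relative to the copy in the other location), return the
# StarTeam term. The state names here are illustrative labels.

STATUS_GRID = {
    ("unchanged", "unchanged"): "Current",
    ("unchanged", "changed"):   "Modified",
    ("unchanged", "absent"):    "Missing",
    ("changed",   "unchanged"): "Out of date",
    ("changed",   "changed"):   "Merge",
    ("absent",    "unchanged"): "Not in view",
    ("absent",    "changed"):   "Not in view",
}

def starteam_status(repository, working):
    """Look up the StarTeam name for a (repository, working) state pair."""
    return STATUS_GRID.get((repository, working), "N/A")

if __name__ == "__main__":
    # Someone else checked in a new revision since you checked out:
    print(starteam_status("changed", "unchanged"))  # Out of date
```

Substituting another system’s vocabulary is just a matter of editing the table, which is really the point of the grid: the states are universal, only the names change.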
Friday, November 7, 2003
David Weller has a short
example of using the Direct3DX Sprite class from C#. The example only works if
you have the Summer
Update of the DirectX SDK, and even then I have had a few problems. Regardless,
I’m sure David will fix them soon, and the code alone is very handy to demonstrate
technique. You’ll find it particularly useful if you plan to write a 2D game
using Direct3D. Which I will,
when I get some free time.
First, I need to finish writing my slides for Swiss DevDays.
Thursday, November 6, 2003
I posted yesterday about this article, which talks about the performance of various operations in the CLR. I said it was a good article, and I still think it is. But Ian Griffiths wrote me to take issue with the fact that that's all I said - he felt that the article in and of itself does not actually tell you anything directly useful...and I agree.
Ian and I have both done more than our share of optimization, and we've both arrived at the same set of rules:
1. Don't do anything intentionally stupid when first writing the code, but
2. Don't spend a lot of time trying to write really fast code up front. Instead,
3. Measure, then optimize the slowest thing.
4. Repeat until performance is good enough.
These rules will hardly be surprising to anyone who's successfully done performance improvement work. But they surprise the hell out of a lot of people nonetheless. "I thought writing fast applications was all about knowing which sorting algorithm to use and which data structure to pick?" Not really.
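The measure-then-optimize loop is easy to demonstrate. Here’s a hypothetical sketch in Python (the same idea applies in any language): run the profiler first, and only then decide what to fix. The functions are made-up stand-ins for a real app.

```python
# Sketch of rule 3: measure, then optimize the slowest thing.
# cProfile is Python's built-in profiler; the app here is invented.
import cProfile
import io
import pstats

def cheap_step():
    return sum(range(100))

def expensive_step():
    # The surprise bottleneck - you rarely guess this correctly up front.
    return sum(i * i for i in range(200_000))

def app():
    for _ in range(10):
        cheap_step()
    expensive_step()

profiler = cProfile.Profile()
profiler.enable()
app()
profiler.disable()

# Print the functions sorted by cumulative time. Optimize the top entry,
# re-measure, and stop once performance is good enough (rule 4).
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The profiler output, not your intuition, tells you where the time went - which is exactly the lesson of the RSACryptoServiceProvider story below.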
I used to work at a mortgage company where I sat next to Jon, a good friend of mine. People would often come to me with a C++ program and tell me that it was too slow. The first question I would ask them is, "Have you talked to Jon yet?" When they said, "No," I'd tell them to go away. See, Jon used to work at Oracle, developing bits of the database, so he knew SQL up and down. He would routinely make queries that originally ran in half an hour, run in 30 seconds. As a pretty good C++ programmer, I could expect to decrease execution time by about 10%, or maybe 25% if I was really lucky. A sixty-fold performance improvement was out of my league...but Jon did it all the time.
As another example, while profiling a web service I've been writing, I found that the following line of code was the slowest thing in the app:
RSACryptoServiceProvider rsa = new RSACryptoServiceProvider(cp);
And I mean it was by far the slowest thing in the app - making this call less frequently had a significant impact on throughput. I don't remember what the timing on the call was exactly, but I never would have guessed that a constructor call could take as long as this did.
All this goes to explain why I claim that the article is useful, but not directly useful: because when it comes time to optimize, you have to measure the slowest thing and fix that. Anything else is a waste of time. And you'll probably be surprised by what the slowest thing is. And it's likely not going to be slow due to any of the things from the article - at the point where you're worrying about the performance of fields versus properties, you've almost certainly already optimized a whole bunch of other stuff that's going to be the dominant factor. But at the same time, there may come a time when knowing that the C# as operator is twice as slow as the cast operator in some situations might save you a few hours.
The life of a performance optimizer is a tough one: you have to know everything about how the app works, down to the silicon, since you never know what the bottleneck is going to be. (This is an outcome of the Law of Leaky Abstractions.) But since this is waaaay too much to keep in your head for even a trivial app these days, we need to just make our best guess when first writing the app, then measure to zoom in on the places where bad things are happening.
Oh, and don't forget step #4: stop optimizing when performance is good enough - you're just wasting money after that. Of course, that assumes that you can get a definition of "good enough" from the customer, but maybe that's a post for another time. ;) In the meantime,
Wednesday, November 5, 2003
I started reading Bruce Schneier's Beyond
Fear last night. It looks like it's going to be just great. He's an
excellent writer, and the material is so relevant. More importantly, it says a lot
of totally true stuff that is completely counter to conventional "wisdom".
So far, the biggest theme is "Develop a threat model",
or, if you like, "Know thy enemy." So often, people post questions on the mailing
lists I frequent that go something like, "Should I use encryption?" And the answer
is always, "Who are you trying to protect against?"
Unfortunately, answering the latter question is harder,
probably because it is often out of the programmer's control. If more managers, executives,
and end users would read Schneier's book (as well as the excellent Secrets
and Lies), talking and making intelligent decisions about security would
become much easier. Order a copy for your boss today. ;)
Tuesday, November 4, 2003
By way of Tim
Bray, I read Ole Eichhorn’s rant
about Longhorn. While I find it interesting and in places insightful, I think
he was a bit wide of the mark. The basic mistake he made is an extremely common
one: the assumption that all software is the same. Usually, this is coupled with “and
it’s the same as the software I write.”
I’ve fallen victim to this particular trap myself. One of the major foci in
my career has been the development of large-scale systems. As such, you’ll often
find me saying things like, “Extra database roundtrips are bad.” Or, “Avoid
building objects that map directly to tables and handle their own persistence.”
Both of these are good pieces of advice…if you’re building a system that
needs to scale. But of course, if you’re writing a system that’s going
to be used by ten people ever, you should ignore this and do what’s easy. And
lots of people are indeed writing these sorts of smaller systems, which is a totally
legitimate thing to do.
The keys to being a good architect are 1) knowing what the rules are, and 2) knowing
why they are the rules, so you know when to ignore them.
So, does Ole have a good point about performance? Yes…if it’s going to
impact you. But in point of fact not everyone is writing the next version of Excel.
There are plenty of applications for which developer productivity is more important
than an extra 10% (or whatever) in response time. And of course the jury’s still
out on what the performance penalty will be. And of course it’ll be different
for every application anyway. But in any event I’d argue that there are at least
ten times as many departmental one-off applications being written as there are commercial
shrink-wrap ones. Whether or not these are “important apps” (Ole’s
term) sort of depends on where you sit.
But Ole’s bit does raise the question, “Is Microsoft right to continue to
emphasize developer productivity?” I think it’s obvious that they are
– the CLR’s big win is clearly this, and it looks like the Longhorn technologies
continue largely in this vein. But you have to ask, “Is this good for the users,
who outnumber the developers by some large factor?”
I don’t know the answer to that. What I do know is that it reminds me of another
familiar market: commercial broadcasting, particularly TV. I often wonder at how bad
broadcast TV is. Sure, there are some shows I like, but a lot of it sucks. There are
two conclusions I can draw from this: 1) that most people have taste that differs
from mine, or 2) the viewer’s preferences are not the controlling factor. While
the former is probably true, it’s the latter factor that’s interesting
here, because from a commercial standpoint, it’s demonstrably true. Advertisers,
after all, pay for the programs, not viewers. So the broadcasters are directly beholden
to advertisers, but only indirectly beholden to viewers. And note that for
the networks where this is not true, like HBO and public television, we get a noticeably
different spectrum of programming.
In Microsoft’s case, we actually have the opposite correlation. The revenue
comes from products like the OS and like Office, and not from Visual Studio.NET and
the CLR. Sure, there’s a linkage – developers create the software that
users consume, making the platform more attractive to said users – but
again it’s indirect rather than direct. I’m not sure what
this means…but I find it interesting.
Monday, November 3, 2003
If anyone is interested in translating any of the DirectX
(or other) writing on this website into different languages, please let me know. The
Direct3D tutorial in particular gets a fair amount of traffic; I've noticed referrers
from the Babelfish translation service and have had requests for the content in other
languages before. This leads me to believe there might be a reasonable demand for
tutorial material in other languages.
I'd be more than happy to let you post the content on
your website if you like, as long as you let me post it here too, and would agree
to link back in your copy.
Email me directly or post a comment if you're interested.