Monday, November 15, 2010

Column Store Indexes in Sql Server Denali

It’s funny because a colleague and I were having a discussion in the kitchen the other day about the whole ‘no SQL’ movement, and my point to him was that many of the advantages pertained to having a columnar storage model, and (especially in the light of Vertipaq) I didn’t think it would be long before this kind of storage model migrated to mainstream RDBMSs like SQL Server.

And then this, in Denali (Sql Server v-next):

“The columnstore index in SQL Server employs Microsoft’s patented Vertipaq™ technology, which it shares with SQL Server Analysis Services and PowerPivot. SQL Server columnstore indexes don’t have to fit in main memory, but they can effectively use as much memory as is available on the server. Portions of columns are moved in and out of memory on demand.”

MVPs have been able to download CTP1 for a fortnight, apparently, which means Mitch has been holding out on me. Damn his poker face.

Thursday, November 04, 2010

Debugging Talk Tonight

Tonight’s talk at the Perth .Net User Group should be pretty good – because it’s me talking! Barring uber-embarrassing stuff-ups, I will be talking about and demonstrating debugging techniques using WinDbg and PowerDbg, and hopefully shedding some light on an area that’s generally under-utilized by many .Net developers.

Join us at Enex 100, Level 3 Seminar room at 5.30pm. More details in the link above.

Thursday, October 21, 2010

Western Power Killed My Pong Clock

No, really. After today’s brown-out my irreplaceable original Buro Vormkrijgers Pong Clock appears to be fried.

Really not happy at all.

Sunday, October 17, 2010

Critical Concepts, Often Confused

These aren’t synonyms, but they’re often treated as such. I don’t think I’ve worked on a project that hasn’t mixed up at least one of these pairs. Sometimes it takes a heap of suffering before you realise what you’ve done…

Estimates vs. Commitments

The estimate is how long you say it’ll take. The commitment is when you say it’ll be done by. These are not the same thing.

Quite apart from catering for resource levelling, adding a sickness / holiday buffer, and allowing for pre-sales / training / all the other stuff, you probably shouldn’t be shooting for a point estimate anyway. Ideally you make a range-based estimate, and aim your commitment at a fairly high confidence interval within that (bearing in mind even 95% means you miss your dates one time in twenty). Mistaking these two concepts can, alone, be the root cause of all your delivery problems. See Software Estimation (McConnell).

Domain Invariants vs. Validation

If you put all your validation in your domain model you probably just made them all domain invariants. Congratulations. Now try and implement ‘god mode’, privileged system operations, or special-case this one screen where the logic has to be different…

Validation is often highly contextual. What’s valid in the context of one transaction (one screen) may not be in another, so sometimes you’ll have to accept the reality that some validation belongs to the operation, not to the domain. Eagerly promote all validation to domain invariants at your peril.

(This is one of the things that scares me about frameworks like Naked Objects)
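To make the distinction concrete, here’s a minimal C# sketch (the types and the rule itself are purely illustrative): the null-customer check is an invariant the model always enforces, whereas the ‘positive total’ rule only applies to the submit operation, so it lives outside the entity.

    using System;
    using System.Collections.Generic;

    public class Customer { }

    public class Order
    {
        // Domain invariant: an Order can never exist without a Customer,
        // in any context, so the model itself enforces it
        public Order(Customer customer)
        {
            if (customer == null) throw new ArgumentNullException("customer");
            Customer = customer;
        }

        public Customer Customer { get; private set; }
        public decimal Total { get; set; }
    }

    // Contextual validation: only the 'submit order' operation cares about this.
    // An admin / 'god mode' operation can legitimately save a zero-total order.
    public class SubmitOrderValidator
    {
        public IEnumerable<string> Validate(Order order)
        {
            if (order.Total <= 0)
                yield return "Order total must be positive before submitting";
        }
    }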

Business Owner vs Single-Point-Of-Contact

Critical to have a single business owner, yes? So we can just have one person to ask all our questions to? Wrong.

The business owner is the owner of the project, and the arbiter of the decisions. But that doesn’t let you off the hook from talking to all the other stakeholders in the project. They may, and often will, have very different opinions. If you can’t keep them all happy, the owner decides, but if you don’t even ask them, you’re relying on your owner to be the single source of all domain knowledge. That’s a fairly dangerous road to be walking down, even before your owner flips out due to project overload and goes postal in a feature workshop. Canvass more than one opinion.

Friday, October 01, 2010

Windows Mobile 7 vs. the World

Here’s the scenario: you face an uphill battle to regain some kind of presence in a market where you’ve failed in the past, and must now overcome the huge incumbent advantage of another player. Do you:

  • Come up with an innovative strategy to outflank the incumbent, find a niche or play to your own unique strengths?
  • Copy exactly what they’ve done. It worked for them, right?

Well, er… it seems to me a lot like Microsoft did the latter. With Windows Mobile 7 they’ve done a great job with the UI, the developer experience looks pretty good, using the cloud as a back-end is starting to make sense, etc… but on features alone it’s kinda hard to see why anyone would favour one of these over an iPhone – they’ve picked exactly the same model:

    Feature                                  | iPhone   | Windows Mobile 7             | Android
    Side-loading of apps (not via app store) | No       | No                           | Potentially, if carrier wants to
    Corporate (restricted distribution) apps | No       | No                           | As above
    Flash in browser                         | No       | No (nor Silverlight)         | 3rd party support available (for OEMs, mind)
    Background apps / multitasking           | No       | No                           | Yes?
    Native code                              | No       | No                           | Yes
    Video calls                              | iPhone 4 | Optional, depends on H/W [4] | No
    Tethering                                | No       | No                           | No (w/o rooting)

Why no Flash / Silverlight in the browser? Various Microsofties and MVPs have tried to tell me it’s a technical limitation, that Silverlight (phone) and Silverlight (browser) are non-overlapping functionality sets. Whilst that’s true, it’s also B.S.: this is – as in Apple’s case – about control. Rich browser apps are a side-loading vector: if you can run a fully-functional GUI app in the browser, the monopoly of the app store goes away.

Microsoft’s gamble of course is that the consumer market is less about tabular feature comparisons, and more about marketing, branding and emotion. And to a certain extent they’d be right, but that’s why Apple went out and bought the Liquid Metal process. So it’s an uphill battle there too.

Most importantly, unlike Apple, Microsoft don’t make phones. So it’s crazy to attempt (as they are) to follow the ‘own the customer experience’ model of Apple, when they don’t actually own it at all. They can specify the hardware to an extent (and have done), but they’re not a vertical: the manufacturer has a stake here too.

Of course Microsoft’s previous model sucked. They provided a platform, left the experience up to the end-vendor, and what we ended up with was the same tired old Today screen for years and years (with the recent exception of HTC). So no-one wants to go back there. But that’s exactly the Android model, and it seems to be working pretty well for them.

With Android users get a different vendor-specific experience on different phones, and with a partner model that’s a good thing. A Sony should be different from a Samsung or whatever: you buy a Sony for the Sony brand, not the freaking OS. And provided the search bar and maps go back to Google, that seems to suit everyone involved just fine. Backs mutually scratched: it’s the partner model, working how it always should have.

So Microsoft’s approach seems neither fish nor fowl. They plan to compete with Apple on Apple’s terms, whilst Google takes their own partner model and shows them how it’s done. They desperately needed to change something, but I think it was the software, not the business model.

 

(Oh, and the really funny thing: Windows Mobile 6.5 isn’t going away – it continues to be Microsoft’s ‘Platform for Corporate Users’ – basically because of the current side-loading limitation. Microsoft have said they’ll consider this later, but…)

[2] http://social.msdn.microsoft.com/Forums/en-US/windowsphone7series/thread/2892a6f0-ab26-48d6-b63c-e38f62eda3b3

[4] http://pocketnow.com/tech-news/windows-mobile-7-device-specs-bigger-screens-multi-touch-and-more-memory

Thursday, September 30, 2010

Rethrowing Exceptions Without Losing Original Stack Trace

Everyone knows you should never ‘throw err’:

    try
    {
        // Do something bad
    }
    catch(Exception err)
    {
        // Some error handling, then…
        throw err;
    }

 

…because you overwrite the original stack trace, and end up with no idea what happened where. If you want to re-throw, you just ‘throw’ within the catch block, and the original exception is re-thrown unmodified (or you wrap-and-throw).
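For the avoidance of doubt, those two ‘safe’ options look like this (Log is just a placeholder for whatever handling you’re doing):

    try
    {
        // Do something bad
    }
    catch (Exception err)
    {
        Log(err);   // some error handling, then...
        throw;      // re-throws the same exception, stack trace intact

        // ...or wrap-and-throw, keeping the original as the InnerException:
        // throw new InvalidOperationException("while doing something bad", err);
    }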

But that’s within the catch block. What do you do if you need to re-throw an exception outside the catch, one you stored earlier? This is exactly what you have to do if you’re implementing an asynchronous (APM / IAsyncResult) call, or marshalling exceptions across app domain / remoting boundaries.

The runtime manages this just fine by ‘freezing’ the exception stack trace. When rethrown, the new stack trace is just appended to the old one – that’s what all that ‘Exception rethrown at [0]’ stuff in the stack trace is. But the method it uses to do this (Exception.PrepForRemoting) is internal. So unfortunately, in order to use it, you have to call it by reflection:

    public static void PrepForRemoting(this Exception err)
    {
        typeof(Exception).InvokeMember(
            "PrepForRemoting",
            BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.InvokeMethod,
            (Binder)null, err, new object[0]);
    }

    /// <summary>
    /// Rethrow an exception without losing the original stack trace
    /// </summary>
    [DebuggerStepThrough]
    public static void Rethrow(this Exception err)
    {
        err.PrepForRemoting();
        throw err;
    }

Evil, I hear you cry? Well suck it up, because that’s exactly what Rx does in System.CoreEx:

image

(Tasks in .Net 4 side-step this problem by always wrapping exceptions in a new AggregateException prior to throwing – this also allows a Task to accumulate multiple exceptions throughout its lifecycle, depending on the continuations applied)
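For what it’s worth, here’s a usage sketch (a hypothetical class, just to show the shape, and it assumes the Rethrow() extension above is in scope): a poor-man’s APM wrapper that stores the worker’s exception and surfaces it to the caller with the stack trace intact.

    using System;
    using System.Threading;

    public class AsyncOperation
    {
        private Exception _error;
        private readonly ManualResetEvent _done = new ManualResetEvent(false);

        public void Begin()
        {
            ThreadPool.QueueUserWorkItem(ignored =>
            {
                try { DoWork(); }
                catch (Exception err) { _error = err; }   // stored, not thrown
                finally { _done.Set(); }
            });
        }

        public void End()
        {
            _done.WaitOne();
            if (_error != null)
                _error.Rethrow();   // extension method from above; adds 'rethrown at' to the trace
        }

        private void DoWork() { /* something that might throw */ }
    }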

Sunday, September 19, 2010

Reacting to Rx

I’ve finally got round to spending a bit of time looking at Rx over the weekend, and my head is still spinning as to just how fantastically relevant this is to some of the stuff I’m working on right now. If you have no idea what Rx is, check out these brief Channel 9 videos:

The first will get you interested, the second will make the penny drop[1].

So anyway, I have a class called a MessagePump<T>. Its job is to abstract away a lot of low-level socket guff (fragmentation, parsing etc…) and just deliver messages as they are read off a socket. It basically just sits in a big async loop of BeginRead / EndRead operations, constantly passing itself as the callback (ie never ‘owning’ a thread).

That’s all it does, so to deliver messages into the rest of the system it exposes a MessageReceived event. And sometimes a message might not parse properly, probably because someone got out of sync or whatever, so there’s an ExceptionReceived event. Oh, and if you get a zero-byte read from BeginRead that means the socket at the other end closed, so there’s a Disconnected event:

  • MessageReceived(object, EventArgs<T>)
  • ExceptionReceived(object, EventArgs<Exception>)
  • Disconnected(object, EventArgs)

Now compare this to Rx’s IObserver<T> interface:

  • OnNext(T)
  • OnError(Exception)
  • OnCompleted()

It’s like completely the same. I guess there are only so many ways to skin a cat, but I wasn’t expecting it to be quite so aligned. Hopefully I can read this as saying my design is basically sound.

But whatever, what it really means is that dropping in Rx is going to be a bit of a doddle. In fact, because the IObserver&lt;T&gt; and IObservable&lt;T&gt; interfaces (alone) are part of the .Net 4 framework, even without Rx I can implement the pattern (just without the Rx fruit), which makes leveraging Rx later on (e.g. to filter with Linq) an option for the consumer.
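As a sketch, the observable surface of the pump might look something like this (single subscriber, no thread-safety, purely illustrative; the real thing would manage a subscriber list properly):

    using System;

    public class MessagePump<T> : IObservable<T>
    {
        private IObserver<T> _observer;

        public IDisposable Subscribe(IObserver<T> observer)
        {
            _observer = observer;
            return new Unsubscriber { Owner = this };
        }

        // these replace MessageReceived / ExceptionReceived / Disconnected,
        // and are called from within the BeginRead / EndRead loop
        protected void PublishMessage(T message)
        {
            if (_observer != null) _observer.OnNext(message);
        }

        protected void PublishError(Exception err)
        {
            if (_observer != null) _observer.OnError(err);
        }

        protected void PublishDisconnect()
        {
            if (_observer != null) _observer.OnCompleted();
        }

        private class Unsubscriber : IDisposable
        {
            public MessagePump<T> Owner;
            public void Dispose() { Owner._observer = null; }
        }
    }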

And because the IObserver&lt;T&gt; / IObservable&lt;T&gt; pattern is much more amenable to composition than a raw .Net event (which is, really, the whole point of Rx), we can use containers like MEF to attach the subscribers at runtime, with (what seems to be) relative ease.

Both temporal and binary decoupling. Cool.

 

[1] For example: did you ever write something like an auto-complete popup? You want to wait a while after each keystroke in case the user hasn’t finished typing yet (about 500ms, I think). I ended up writing a general-purpose event-buffer class that only propagated the event after a specified inactivity period (this also worked great for file change notifications). In Rx this is trivial: just use the ‘Throttle’ Linq operator over the event sequence. See the hands-on lab.
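For the curious, the Rx version is roughly this (hedged: ‘searchBox’ and the suggestion callback are placeholders, and the namespace shown is the later System.Reactive.Linq layout; the 2010 bits organised things a little differently):

    using System;
    using System.Reactive.Linq;
    using System.Windows.Forms;

    static class AutoComplete
    {
        public static IDisposable Attach(TextBox searchBox, Action<string> showSuggestions)
        {
            return Observable.FromEventPattern<EventArgs>(searchBox, "TextChanged")
                .Select(ignored => searchBox.Text)
                .Throttle(TimeSpan.FromMilliseconds(500))    // wait for typing to settle
                .DistinctUntilChanged()
                .Subscribe(text => showSuggestions(text));   // note: not on the UI thread here
        }
    }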

Saturday, September 18, 2010

Problems With Stuff

image

Being charitable you might point out that as a technology becomes increasingly pervasive it inevitably ends up in the hands of less technically savvy users, but I like to think of it as ‘all our stuff is still a bit crap’.

Wednesday, September 08, 2010

.Net 4 not supported on Windows 2008 Server Core

There is an explanation from the .net SKU owner as to why (which I don’t entirely follow), but the bottom line is that what the download page says is right – it’s just not available. So no Distributed Cache either.

Poo.

(It does support a subset of the net 3.5 functionality, largely orientated towards ASP.Net support – there’s a basic explanation of which bits here)

Visual Studio 2010 build spew in DebugView

If you’re a fan of DebugView (like me) you’d have been driven spare by the reams of spurious debug output that VS 2010 generates when doing a build: some 15,000 lines (in my case) of repeated cruft that drowns your output:

*** HR originated: -2147024774
*** Source File: d:\iso_whid\x86fre\base\isolation\com\copyout.cpp, line 1302


*** HR propagated: -2147024774
*** Source File: d:\iso_whid\x86fre\base\isolation\com\enumidentityattribute.cpp, line 144



This is a known issue on the forums, and there is a Connect Issue associated with it, so please vote for it. Hopefully it’s not too late to get this fixed in SP1.



(I’m optimistic – the bug was raised by Rusty Miller, an (erstwhile?) tester on the VS team)

Tuesday, September 07, 2010

TechEdAu 2010

It was only the week before last, but already I feel the clarity slipping away like a dream in the morning. Ahem. It was quite an interesting year because, apart from Windows Mobile 7, most of the stuff that was being talked about actually exists at RTM today, which was a nice change from learning about stuff you might get to use in six months’ time.

Memes this year:

  • Devices are ‘windows’ to the cloud [1]
  • Virtualisation, virtualization, virtualization
  • All I want for Christmas is Windows Mobile 7

Anyway, here’s what I went to:

Day 1:

Day 2:

Day 3:

And here’s all the sessions I will be catching up on Online (as and when the videos come up):

…and a couple from TechEd North America that looked fairly promising:

Phew.

 

[1] If you think this cloud stuff is finally becoming the William Gibson / Iain M Banks model of pervasive cyberspace, you’d be right.

Saturday, September 04, 2010

Which WPF Framework?

So it’s way past time that I actually started getting used to a WPF framework, rather than keep re-inventing the wheel. But where to start? I thought it was just between Prism and Caliburn, but then I found WAF, and then researching that I found a whole bunch of others.

I suspect I’ll start with WAF because it describes itself as lightweight. Prism comes from the P&P team, who are normally anything but, and Caliburn supports paradigms other than MVVM, which just seems a bit pointless.

Tuesday, August 10, 2010

PowerDbg is search result #7 for ‘WinDbg’

Ok, this is only on MSDN search, but still that seems pretty damn high:

image

Mind you, we’re #38 on Bing, and #14 on Google so we’re not completely inconspicuous.

Time to pull our fingers out and finish off v6 I think.

Thread Safety in MSDN

Just what exactly is the point of even having a ‘thread safety’ comment in the MSDN doco if it’s just blatant boiler-plate drivel?

Take, for example, System.Text.ASCIIEncoding. Generally speaking there’s only one of these in play at any one time, because the Encoding.ASCII static property is a singleton (as they all are):



    public static Encoding ASCII
    {
        [TargetedPatchingOptOut(...)]
        get
        {
            if (asciiEncoding == null)
            {
                asciiEncoding = new ASCIIEncoding();
            }
            return asciiEncoding;
        }
    }



So you’d better damn well hope it’s thread safe, otherwise all those concurrent write operations you’re doing, they’re screwed, right? But what does MSDN have to say on the subject:




“Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.”




Oh. Really helpful. Thanks a bunch.



Looking at the usage patterns through the Framework Class Libraries, it’s pretty clear they are thread-safe. Encoding.GetEncoding(int) hands out references to the singletons, which are similarly used with gay abandon in System.IO.Ports.SerialPort, System.IO.File.ReadAllLines, various StreamReader overloads etc… (though BinaryReader chooses to new up its UTF8Encoding, heaven knows why). And the sky would have fallen by now if these usages weren’t at least largely correct.
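By ‘usage patterns’ I mean code like the below (a contrived sketch), which implicitly assumes the shared instance can take concurrent calls without falling over:

    using System;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;

    class EncodingInParallel
    {
        static void Main()
        {
            var messages = Enumerable.Range(0, 1000).Select(i => "message " + i);

            // many threads, one shared ASCIIEncoding instance
            Parallel.ForEach(messages, msg =>
            {
                byte[] bytes = Encoding.ASCII.GetBytes(msg);
                Console.WriteLine(bytes.Length);
            });
        }
    }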



But poking about in Reflector is clearly not a substitute for accurate documentation, and the ‘parallel processing revolution’ everyone keeps going on about is clearly not going to work if we just keep trotting out the ‘instances members are not guaranteed to be thread safe’ line.



System.Text.Encodings: believed to be thread-safe.

Tuesday, July 27, 2010

Log4Net Active Property Values via Lambdas

Some years ago I wrote a couple of posts on some nasty problems that you could encounter if using log4net contexts in an environment where you didn’t control the thread lifecycle, say ASP.Net. Judging by the amount of coverage it got at the time (and still) I wasn’t the only person caught out by this.

Anyway I was doing something similar recently, not in ASP.Net, but in a Windows Service application with lots of threads. It’s the same kind of problem: there’s some thread-specific context that always exists, which we want to make available to log4net, but putting it in ThreadLocalContext doesn’t really work very well because we’d have to set them up in all our thread-entry methods, which would be everywhere where a callback gets entered – very messy in our (highly asynchronous) application.

Instead I wanted to put something in log4net’s GlobalContext that resolved to the thread’s context value. And actually, now we’ve got lambdas and all that nice stuff, I was able to come up with a significantly neater implementation for a general-purpose contextual logging property, which basically answers the original ASP.Net problem too:

 

    /// <summary>
    /// Implements a class that can be used as a global log4net property
    /// to resolve an action to a string at event-fixing-time
    /// </summary>
    /// <remarks>With a suitable lambda expression, you can put this
    /// into your log4net.GlobalContext to resolve at logging time to a variety
    /// of stuff you might want to use in your logging statements.
    /// <example>Using threadId (not thread Name) as a property:<code>
    /// log4net.GlobalContext.Properties["threadId"] =
    ///     new Log4NetContextProperty(() => Thread.CurrentThread.ManagedThreadId.ToString());
    /// </code></example>
    /// </remarks>
    public class Log4NetContextProperty : IFixingRequired
    {
        private readonly Func<string> _getValue;

        public Log4NetContextProperty(Func<string> getValue)
        {
            _getValue = getValue;
        }

        public override string ToString()
        {
            return _getValue();
        }

        public object GetFixedObject()
        {
            return ToString();
        }
    }

In this case I wanted ‘threadId’ as a logging property (log4net exposes the thread name, which is normally fine, but the R# test runner creates whoppingly long thread names that basically hide the actual logging message, and I really just wanted the IDs – hence the example above). But you can see how you can basically use this to expose any context data to log4net if you wanted to.

Wednesday, July 21, 2010

64 Bit Explained

Look, it’s really not that hard.

Programs are still in the same place, in %ProgramFiles%, unless you need the 32 bit version, which is in %ProgramFiles(x86)%, except on a 32 bit machine, where it’s still %ProgramFiles%.

All those DLLs are still in %SystemRoot%\System32, just now they’re 64 bit. The 32 bit ones, they’re in %SystemRoot%\SysWOW64. You’re with me so far, right? Oh, and the 16 bit ones are still in %SystemRoot%\System – moving them would just be weird.

Registry settings are in HKLM\Software, unless you mean the settings for the 32 bit programs, in which case they’re in HKLM\Software\Wow6432Node.

So the rule is easy: stick to the 64 bit versions of apps, and you’ll be fine. Apps without a 64 bit version are pretty obscure anyway, Office and Visual Studio for example[1]. Oh, and stick to the 32 bit version of Internet Explorer (which is the default) if you want any of your add-ins to work. The ‘default’ shortcut for everything else is the 64 bit version. Having two shortcuts to everything can be a bit confusing, so sometimes (cmd.exe) there’s only the one (64 bit) and you’ll have to find the other yourself (back in SysWOW64, of course). And don’t forget to ‘Set-ExecutionPolicy RemoteSigned’ in both your 64 bit and 32 bit PowerShell environments.

Always install 64 bit versions of drivers and stuff, unless there isn’t one (MSDORA, JET), or you need both the 32 bit and 64 bit versions (eg to use SMO / SqlCmd from a 32 bit process like MSBuild). Just don’t do this if the 64 bit installer already installs the 32 bit version for you (like Sql Native Client).

Anything with a ‘32’ is for 64 bit. Anything with a ‘64’ is for 32 bit. Except %ProgramW6432% which is the 64 bit ProgramFiles folder in all cases (well, except on a 32 bit machine). Oh and the .net framework didn’t actually move either, but now it has a Framework64 sibling.
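If you do need to be explicit about which registry view you’re hitting, .Net 4 at least makes it possible with RegistryView. A quick sketch (ProgramFilesDir is just a convenient value to read):

    using System;
    using Microsoft.Win32;

    class RegistryViews
    {
        static void Main()
        {
            // 32 bit view: on a 64 bit machine this lands in HKLM\Software\Wow6432Node\...
            using (var hklm32 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry32))
            using (var key = hklm32.OpenSubKey(@"SOFTWARE\Microsoft\Windows\CurrentVersion"))
            {
                Console.WriteLine("32 bit: " + key.GetValue("ProgramFilesDir"));
            }

            // 64 bit view (on a 32 bit machine you just get the one view anyway)
            using (var hklm64 = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, RegistryView.Registry64))
            using (var key = hklm64.OpenSubKey(@"SOFTWARE\Microsoft\Windows\CurrentVersion"))
            {
                Console.WriteLine("64 bit: " + key.GetValue("ProgramFilesDir"));
            }
        }
    }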

I really don’t understand how people get so worked up over it all.

 

[1] Ok, so there is a 64 bit version of Office 2010, but given the installer pretty much tells you not to install it, it doesn’t count.

Monday, July 19, 2010

P/Invoke Interop Assistant

P/Invoke is like a poke in the eye. Sure the P/Invoke wiki made life a lot more palatable, but it’s at best incomplete, at worst inaccurate, and invariably you’ll find yourself hand-crafting signatures based on Win32 API doco and bringing a production server to its knees because of a stack imbalance.

In my idler moments I’ve often thought that surely parsing the source-of-truth Win32 header files and spitting out P/Invoke signatures couldn’t be that hard. Fortunately for everyone, the Microsoft Interop Team thought so too[1], and released the P/Invoke Interop Assistant to Codeplex. Actually that was about 2 years ago, but I only just noticed, so it’s still exciting for me.

As I understand it this has been made easier because Microsoft have been standardizing their header files and adding some additional metadata [2], which makes it possible to generate accurate signatures (and, presumably, to generate MSDN doco).
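For reference, the kind of thing it generates is just a plain old P/Invoke declaration like this (hand-written here rather than actual tool output, so treat it as a sketch):

    using System;
    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // BOOL QueryPerformanceCounter(LARGE_INTEGER *lpPerformanceCount)
        [DllImport("kernel32.dll", SetLastError = true)]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool QueryPerformanceCounter(out long lpPerformanceCount);
    }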

Sadly of course, none of this does anything to make any of the underlying API’s any easier to use…

 

[1] Actually if you look on Wikipedia, turns out there’s a fair few around.
[2] In retrospect you wonder why managed code took so long to take off as a concept, given how enormously fragile the previous paradigm actually was. SAL’s a great idea, but only highlights how fundamental the problem is.

Friday, June 11, 2010

Converting to Int

You wouldn’t have thought that such a basic operation as turning a double into an integer would be so poorly understood, but it is. There are three basic approaches in .Net:

  • Explicit casting, i.e. (int)x
  • Format, using String.Format, or x.ToString(formatString)
  • Convert.ToInt32

What’s critical to realise is that all of these do different things:

    var testCases = new[] {0.4, 0.5, 0.51, 1.4, 1.5, 1.51};
    Console.WriteLine("Input  Cast   {0:0}  Convert.ToInt32");
    foreach (var testCase in testCases)
    {
        Console.WriteLine("{0,5} {1,5} {2,5:0} {3,5}", testCase, (int)testCase, testCase, Convert.ToInt32(testCase));
    }

    Input   Cast  {0:0}  Convert.ToInt32
      0.4      0      0                0
      0.5      0      1                0
     0.51      0      1                1
      1.4      1      1                1
      1.5      1      2                2
     1.51      1      2                2


As my basic test above shows, just casting simply truncates (the equivalent of Math.Truncate, not Math.Floor) – it loses the fraction. This surprises some people.



But look again at the results for 0.5 and 1.5. Using a format string rounds up[1], to 1 and 2, whereas Convert.ToInt32 performs bankers’ rounding[2] (round-to-even), giving 0 and 2. This surprises a lot of people, and you’d be forgiven for missing it in the doco (here vs. here):



Even more interesting is that PowerShell is different: the [int] cast in PowerShell behaves like Convert.ToInt32, not like the truncating C# cast:



> $testCases = 0.4,0.5,0.51,1.4,1.5,1.51
> $testCases | % { "{0,5} {1,5} {2,5:0} {3,5}" -f $_,[int]$_,$_,[Convert]::ToInt32($_) }

    Input   Cast  {0:0}  Convert.ToInt32
      0.4      0      0                0
      0.5      0      1                0
     0.51      1      1                1
      1.4      1      1                1
      1.5      2      2                2
     1.51      2      2                2


This is a great gotcha, since normally I’d use PowerShell to test this kind of behaviour, and I’d have seen the wrong thing (note to self: use LinqPad more)
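The way out of the guesswork is to say what you mean: Math.Round takes a MidpointRounding argument, so the rounding behaviour is explicit rather than a side-effect of whichever conversion you happened to pick. A sketch:

    double x = 0.5;

    var truncated    = (int)x;                                              // 0 (fraction discarded)
    var bankers      = (int)Math.Round(x, MidpointRounding.ToEven);         // 0 (same as Convert.ToInt32)
    var awayFromZero = (int)Math.Round(x, MidpointRounding.AwayFromZero);   // 1 (same as the "{0:0}" formatting above)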



 



[1] More precisely it rounds away from zero, since negative numbers round to the larger negative number.



[2] According to Wikipedia bankers rounding is a bit of a misnomer for ‘round to even’, and even the MSDN doco on Math.Round seems to have stopped using the term.

Thursday, June 03, 2010

Splatting Hell

Recently both at work and at home I was faced with the same problem: a PowerShell ‘control’ script that needed to pass parameters down to an arbitrary series of child scripts (i.e. enumerating over scripts in a directory, and executing them in turn).

I needed a way of binding the parameters passed to the child scripts to what was passed to the parent script, and I thought that splatting would be a great fit here. Splatting, if you aren’t aware of it, is a way of binding a hashtable or array to a command’s parameters:

# ie replace this:
dir -Path:C:\temp -Filter:*

# with this:
$dirArgs = @{Filter="*"; Path="C:\temp"}
dir @dirArgs

Note the @ sign on the last line. That’s the splatting operator (yes, it’s also the hashtable operator as @{}, and the array operator as @(). It’s a busy symbol). It binds $dirArgs to the parameters, rather than attempting to pass $dirArgs as the first positional argument.

So I thought I could just use this to pass any-and-all arguments passed to my ‘master’ script, and get them bound to the child scripts. By name, mind, not by position. That would be bad, because each of the child scripts has different parameters. I want PowerShell to do the heavy lifting of binding the appropriate parameters to the child scripts.

Gotcha #1

I first attempted to splat $args, but I’d forgotten that $args only holds the ‘left over’ arguments, after all the bound parameters have been taken out. The bound ones go into $PSBoundParameters…

Gotcha #2

…but only the ones that actually match parameters in the current script/function. Even if you pass an argument to a script in ‘named parameter’ style, like this:

SomeScript.ps1 –someName:someValue

…if there’s no parameter ‘someName’ on that script, this goes into $args as two different items, one being ‘-someName:’ and the next being ‘someValue’. This was surprising. Worse, once the arguments are split up in $args they get splatted positionally, even if they would otherwise match parameters on what’s being called. This seems like a design mistake to me (update: there is a Connect issue for this).

Basically what this meant was that, unless I started parsing $args myself, all the parameters on all the child scripts had to be declared on the parent (or at least all the ones I wanted to splat).

Gotcha #3

Oh, and $PSBoundParameters only contains the named parameters assigned by the caller. Those left unset, i.e. using default values, aren’t in there. So if you want those defaults to propagate, you’ll have to add them back in yourself:

function SomeFunction(
    $someValue = 'my default'
){
    # put the default back into $PSBoundParameters so it gets splatted too
    $PSBoundParameters['someValue'] = $someValue

    # ... rest of the function
}

Very tiresome.

Gotcha #4

$PSBoundParameters gets reset after you dotsource another script, so you need to capture a reference to it before that :-(

Gotcha #5

Just when you thought you were finished, if you’re using [CmdLetBinding] then you’ll probably get an error when splatting, because you’re trying to splat more arguments than the script you’re calling actually has parameters.

To avoid the error you’ll have to revert from an ‘advanced’ function to a ‘vanilla’ one, but since [CmdLetBinding] is implied by any of the [Parameter] attributes, you’ll have to remove those too :-( So it’s back to $myParam = $(throw ‘MyParam is required’) style validation, unfortunately.

(Also, if you are using CmdLetBinding, remember to remove any [switch]$verbose parameters (or any others that match the ‘common’ cmdlet parameters), or you’ll get another error about duplicate properties when splatting, since your script now has a –Verbose switch automatically. The duplication only becomes an issue when you splat)

What Did We Learn?

Either: Don’t try this at home.

Or: Capture PSBoundParameters, put the defaults back in, splat it to child scripts not using CmdLetBinding or being ‘advanced functions’

Type your parameters, and put your guard throws back, just in case you end up splatting positionally

Have a lie down

Viewing MDX Data with WPF (redux)

Spent most of the day today grappling with binding a WPF datagrid to a DataSet loaded from a parameterized MDX query.

The first gotcha was that SSAS expects its parameterized queries to be passed using the ICommandWithParameters interface, however the OleDb provider for .Net doesn’t support named parameters (except for sprocs). This is a ‘fixed’ Connect issue – fixed as in ‘still broken in .Net 4 but marked as fixed because we can’t be bothered’.

Ahem.

So rather than use ADO.Net parameters, I’m now using string replacement on my source query text. Just great:

    // So have to do manual parameterization :-(
    query = query
        .Replace("@date", dateKey)
        .Replace("@time", timeKey)
        ;

Then of course the WPF data grid wouldn’t show the data (despite the DataSet visualizer working just fine). It bound and showed columns just fine using AutoGenerateColumns:

    dataGrid1.ItemsSource = dataSet.Tables[0].DefaultView;

 

image

…but all the rows showed blank!

Eventually I noticed a spew of debug output, listing the binding failures:

System.Windows.Data Error: 17 : Cannot get 'Item[]' value (type 'Object') from '' (type 'DataRowView'). BindingExpression:Path=[Blah1].[Blah2].[Blah3].[MEMBER_CAPTION]; DataItem='DataRowView' (HashCode=66744534); target element is 'TextBlock' (Name=''); target property is 'Text' (type 'String') TargetInvocationException:'System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: Blah1 is neither a DataColumn nor a DataRelation for table TheTableName.

at System.Data.DataRowView.get_Item(String property)

--- End of inner exception stack trace ---

This all seemed awfully familiar, and fortunately I happened across a helpful blog article (which I wrote!) explaining the problem. This time it is AutoGenerateColumns that’s generated the wrong binding path, causing WPF to try and find ‘deep’ members (attempting to walk multiple indexers) rather than just bind to a column with that name.

The fix is something like this:

    // This works
    var table = dataSet.Tables[0];
    dataGrid1.Columns.Clear();
    dataGrid1.AutoGenerateColumns = false;
    foreach (DataColumn dataColumn in dataSet.Tables[0].Columns)
    {
        dataGrid1.Columns.Add(new DataGridTextColumn
        {
            Header = dataColumn.ColumnName,
            Binding = new Binding("[" + dataColumn.ColumnName + "]")
        });
    }
    dataGrid1.ItemsSource = table.DefaultView;

Grr.

Tuesday, June 01, 2010

One-line TODO Extractor in PowerShell

I previously wrote a PowerShell TODO extractor, that blasts through an entire source hierarchy looking for TODOs, and reports them to the console, complete with a few lines of context either side so you can tell what you’re looking at. It was like, 20 lines of code.

Well blow me if v2 just doesn’t do it out of the box:

PS > dir . -filter:*.cs -recurse | select-string "\sTODO\s" -context:4 -CaseSensitive

Monday, May 31, 2010

What’s New In PowerShell 2

At work, where I do most of my PowerShell, we’ve only just shifted off XP, so until recently I’d not really looked much into the differences between PowerShell 1 and 2. The ISE is pretty good (it’s a debugger!), support for web services is a few years too late (but very welcome) and I can see Remote PowerShell being pretty useful.

So I’d not really been keeping up. If anything I was deliberately ignoring it, to avoid the temptation to write something that would require upgrading the server. But eventually, I cracked[1].

Oh My God.

Put aside for the moment the absolute avalanche[2] of new cmdlets (write-verbose, out-gridview, select-xml[3], measure-object etc…), and put aside for the moment support for background jobs, the wonderful -split and -join operators, and even put aside how tab-completion now works for .net static methods...

Tab completion now works for script functions and their parameters. You can type in a function on one line, and be happily tab-completing it on the next. You can even add comment-based or XML help, though probably not at the console.

Once again, PowerShell rocks

 

[1] Blame PowerDbg

[2] Some guy[4] is writing a blog series on every new cmdlet!

[3] Select-Xml: Here’s one I used today at work to get all the references from all the C# project files within a folder hierarchy. Sure you could do it all before with XmlDocument, but check this out:

PS > dir . -filter:*.csproj -Recurse | `
Select-Xml -XPath:'//*[local-name() = "Reference"]' | `
Select-Object -ExpandProperty Node

Include
-------
System
System.Core
System.Xml.Linq
System.Data.DataSetExtensions
System.Data
System.Xml

[4] He’s called Jonathan Medd, but the ‘some guy’ thing has a certain ring to it…

[5] Oh, and proper try{}catch{}finally{} error handling. I missed that

Friday, May 28, 2010

Problems Running Tests When Preserving Output Directory Structure with Team Build

Previously I’ve posted about how to override the Team Build default output directory scheme and produce something a bit more sane.

Unfortunately if you do implement this it can break the built-in test run task, and most of the recipes related to it. You’ll get the following error in your build logs:

MSBUILD : warning MSB6003: The specified task executable "MSTest.exe" could not be run. The directory name is invalid

If you run the build with /verbosity:detailed to see the actual args passed to MSTest.exe, and then run MSTest yourself interactively, you’ll see the real underlying error:

Directory "(my build path)\Binaries\Debug" not found.
For switch syntax, type "MSTest /help"

The problem here is that (as detailed in the TestToolsTask doco) the Team Foundation Build targets set up MSTest.exe with SearchPathRoot="$(OutDir)", i.e. $(BinariesRoot)\$(Configuration). But if you overrode CustomizableOutDir and never actually copied the binaries out to the output folder, that directory never gets created.

Fix 1:

If you’re not really using CustomizableOutDir, remove it. Reverting to the default Team Build directory structure is the simplest way of getting the tests to be located and executed and everything to ‘play nice’.

Fix 2:

Make sure that if your TFBuild.proj says CustomizableOutDir you do actually have the corresponding custom tasks in the individual projects to copy the binaries (see my previous post), otherwise you end up with no output whatsoever, and the test task will fail.

Fix 3:

If you want CustomizableOutDir but want to be robust to the possibility that your project builds may not populate the output directory structures properly, you can hack your build to run the tests out of the source \bin\debug folders.

My first pass was just to add the following to my BeforeTestConfiguration target (the one I’d added from the Running Unit Tests without a Test List recipe):

    <!--because this is what the TestTask gets its SearchPath set to, it must exist-->
    <MakeDir Directories="$(OutDir)"/>

But that wasn’t good enough on its own, because now the error was:

CoreTestConfiguration:
File "..\..\(blah)\bin\Debug\Assembly.UnitTests.dll" not found

The relative paths to the test assemblies were correct relative to the $(SolutionDir), but not relative to the $(OutDir). So, for want of a better answer, I just overwrite OutDir for the duration of the test task:

   <!-- defined elsewhere -->
   <TestsToRun Include="$(SolutionRoot)\%2a%2a\bin\$(Configuration)\%2a.UnitTests.dll" />

 

  <Target Name="BeforeTestConfiguration">

    <!-- normal bits as per the recipe-->

    <Message Text="Using tests from @(TestsToRun)" Condition=" '$(IsDesktopBuild)'=='true' " />

 

    <CreateItem Include="@(TestsToRun)">

      <Output TaskParameter="Include" ItemName="LocalTestContainer"/>

      <Output TaskParameter="Include" ItemName="TestContainer"/>

    </CreateItem>

 

    <Message Text="LocalTestContainer: @(LocalTestContainer)" Condition=" '$(IsDesktopBuild)'=='true' " />

 

    <!--Fix to allow use of CustomizableOutDir -->

    <MakeDir Directories="$(OutDir)"/>

    <PropertyGroup>

      <OldOutDir>$(OutDir)</OldOutDir>

      <OutDir>$(SolutionDir)</OutDir>

    </PropertyGroup>

  </Target>

 

  <Target Name="AfterTestConfiguration">

    <PropertyGroup>

     <OutDir>$(OldOutDir)</OutDir>

    </PropertyGroup>

  </Target>

Whether this is a good idea or not I’m not sure, but it does seem to work. Note that I put it back the way it was afterwards (using AfterTestConfiguration).

Moral

I think the story here is that using CustomizableOutDir is a complete pain in the arse, which ends up requiring considerable customisation of the rest of the build workflow. I don’t mind a prescriptive process per-se, but I do have a real issue with the ‘flat’ output directory structure that Team Build kicks out. But attempting to change it just seems to cause a heap more trouble than it’s worth.

Actually - as Martin Fowler said years ago - using XML as a build language is a really dumb idea in retrospect. Everyone says TeamCity’s pretty cool: might be time to take a look at that…

 

PS: If you’re trying to get your head around what happens where in Team Build (aren’t we all) there’s a great Team Build Target Map over at the Accentient blog

PS: I notice on Aaron Hallberg’s blog there’s a much simpler approach if you just want to separate per-solution output directory structures, which may not suffer the same problems.

Thursday, May 06, 2010

WinDbg Pain Points

Previously I talked about PowerDbg, what an awesome idea it was, but how it lacked some things. Well I spoke to the author, Roberto[1], who asked me to put my code where my mouth was, and now I am working with him on the next version.

So… if there’s anything particularly painful that you do in WinDBG now is the time to shout. You can comment on this blog if you like, but better would be to raise a ‘proposed feature’ on the Codeplex site itself.

A good example would be just how hard it is to work with a .Net Dictionary in WinDBG (except PowerDbg already handles that, and even better in the new version). Anything where you want a slightly ‘higher level’ view of the raw SOS data.

 

[1] Yes, that Roberto.

Tuesday, May 04, 2010

PowerShell 2 Breaking Change When Shelling Out

Whilst PowerShell 2 is by-and-large backwards compatible, I’ve discovered at least one breaking change that appears to be undocumented: the behaviour of argument parsing when calling another executable seems to have changed.

Previous behaviour:

clip_image002

PowerShell has effectively parsed the argument as if it were calling a PowerShell script: splitting it into two parts along the colon, and passing the second part ‘intact’ because it was quote wrapped.

New behaviour in v2:

image

PowerShell has treated the arguments as completely opaque and passed them to the exe using ‘normal’ command line parsing semantics (split on spaces etc…). It has not split the argument along the colon (which was the breaking change for us). In the second case, because the argument didn’t start with a quote (it starts with ‘-test’) the argument is broken in half at the space.

I think this is a good change, in that PowerShell shouldn’t make assumptions about how the exe you are calling likes its parameters (I got badly burnt that way trying to call an SSIS package). But it’s certainly one to watch out for.

 

PS: Not sure at all about this behaviour, which is the same in both v1 and v2:

image

Surely the fact you pass the argument as a string variable indicates you want it as one argument. Surely.

Monday, May 03, 2010

Working Directory Independence for PowerShell Scripts

pushd (split-path $myInvocation.MyCommand.Path);

Not quite as simple or memorable as the batch file version sadly…

Thursday, April 22, 2010

Accessing CodePlex using Windows Live ID via Team Explorer

…doesn’t work for me. I eventually remembered what my ‘native’ CodePlex password was, and that worked just fine.

Of course, this turns out to be a RTFM:

Q: Why do I still need a CodePlex account?
A: We still require a CodePlex account to successfully authenticate with the source control servers.

…but it wasn’t like it was plastered all over the account linking process page, or (unfortunately) mentioned on that ‘how to set up TFS client’ popup they have.

Tuesday, April 13, 2010

3 Reasons to Avoid Automatic Properties

That’s a dramatic overstatement of course, because automatic properties are great in many cases (though are public fields really so wrong?). But now that VB.Net has joined the party too [1], it’s worth remembering that they are not all good news:

1/ They Can’t be Made ReadOnly

Sure you can make them have a private setter, but that’s not the same as a readonly field, which is a great check against whole classes of screw-ups. If a field shouldn’t change during an instance lifetime, make it readonly, and save yourself some pain.

2/ No Field Initializers (in C#)

The nice thing about initializing fields in the field initializers is you can’t forget to do so in one of the constructor overloads, and (in conjunction with readonly fields) you can ensure it can never be null. Since this is all on one line it’s easy to inspect visually, without having to chase down code paths / constructor chains by eye.

(You can vote for this, for all the good it will do [2])

3/ Poor Debugging Experience

Properties are methods, right, even auto-generated ones, and need to be executed for the debugger to ‘see’ the value. But that’s not always possible. If the managed thread is suspended (either via a threading / async wait, or by entering unmanaged code) then the debugger can’t execute the property at all, and you’ll just see errors like the below:

Cannot evaluate expression because the current thread is in a sleep, wait or join

Here you can only determine the value of ‘AutoProperty’ through inference and guesswork, whereas ‘ManualProperty’ can always be determined from the backing field. This can be a real pain in the arse, so it’s worth avoiding automatic properties for code around thread synchronisation regions.

As an aside, remember that there are backing fields – it’s just that you didn’t create them, the compiler did, and it used its own naming convention (to avoid collisions), which is a bit odd. So if you write any ‘looping over fields’ diagnostic code you will see some strange names, which might take some getting used to. You’ll also see them in WinDbg and CDB when you’re looking at crash dumps and the like.
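A quick sketch of what that looks like (the generated name in the comment is what the current C# compiler produces; treat the exact format as an implementation detail):

    using System;
    using System.Reflection;

    class Sample
    {
        public int AutoProperty { get; set; }
    }

    class Program
    {
        static void Main()
        {
            foreach (var field in typeof(Sample).GetFields(BindingFlags.Instance | BindingFlags.NonPublic))
            {
                Console.WriteLine(field.Name);   // prints "<AutoProperty>k__BackingField"
            }
        }
    }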

 

[1] ...but I bet the VB community spat chips over the curly brackets in Collection Initializers
[2] And yet whilst VB.Net 4 has this, they don’t have mixed accessibility for auto properties yet. Go figure.

Thursday, April 08, 2010

Automating WinDBG with PowerShell

I’ve been doing a bit of WinDBG work recently after a long hiatus, and I’ve been blown away by some of the things I’ve missed.

One of them was PowerDBG: a Powershell (2) module for working with WinDBG in Powershell. How awesome is that? No really, how freaking awesome.

But I couldn’t help but feel the execution was lacking something. It wasn’t, for want of a better word, very Powershelly. For example, this is what you’d do in PowerDBG to look at an object:

PS C:\> connect-windbg "tcp:port=10456,server=mr-its-64-vds"
PS C:\> Send-PowerDbgCommand ".loadby sos mscorwks"
PS C:\> Send-PowerDbgCommand "!do 0000000001af7680"
# Glance at the original WinDBG output make sure it looks ok
PS C:\> $global:g_commandOutput
0:000> Name: MyNamespace.Services.MyService
MethodTable: 000007ff002b1fd8
EEClass: 000007ff002af238
Size: 72(0x48) bytes
(c:\Program Files (x86)\SomeFolder\SomeDll.dll)
Fields:
MT Field Offset Type VT Attr
Value Name
0000000000000000 4000148 8 0 instance 00000000024
09858 _laneGroups
0000000000000000 4000149 10 0 instance 00000000024
04490 _lanes
0000000000000000 400014a 18 0 instance 00000000026
c7730 _routes
0000000000000000 400014b 20 0 instance 00000000024
d4f78 _roadSections
0000000000000000 400014c 28 0 instance 00000000026
cc668 _devices
000007ff007222e0 400014d 30 ...gDatabaseProvider 0 instance 0000000001a
f76c8 _provider
0000000000000000 400014e 38 0 instance 00000000023
16b30 MappingsUpdated

# Call the dump-object parser to stick it in a CSV file
PS C:\> Parse-PowerDbgDSO

# look in the CSV file
PS C:\> type .\POWERDBG-PARSED.LOG
key,value

Name:,MyNamespace.Services.MyService#$#@
:,000007ff002b1fd8#$#@
:,000007ff002af238#$#@
72(0x48),bytes#$#@
4000148,8 0 instance 0000000002409858 _laneGroups#$#@
4000149,10 0 instance 0000000002404490 _lanes#$#@
400014a,18 0 instance 00000000026c7730 _routes#$#@
400014b,20 0 instance 00000000024d4f78 _roadSections#$#@
400014c,28 0 instance 00000000026cc668 _devices#$#@
400014d,30 ...gDatabaseProvider 0 instance 0000000001af76c8 _provider#$#@
400014e,38 0 instance 0000000002316b30 MappingsUpdated#$#
@
PS C:\>


That’s a bit ugh. Commands share state via the global ‘g_commandOutput’ rather than the pipeline, and the end-goal of most operations seems to be to spit out a CSV file, POWERDBG-PARSED.LOG.



I think we can do better.



I want objects, preferably ones that look like my original objects. I want to be able to send them down the pipeline, filter on them, sort them and maybe pipe some back to the debugger to pick up more details. And I want cmdlets for common WinDBG /SOS operations like !dumpobject rather than pass every command as a string. In short, I want a real PowerShell experience.



More like this:



PS C:\> $o = dump-object 0000000001af7680
PS C:\> $o

__Name : Mrwa
__MethodTable : 000007ff002b1fd8
__EEClass : 000007ff002af238
__Size : 72
_laneGroups : 0000000002409858
_lanes : 0000000002404490
_routes : 00000000026c7730
_roadSections : 00000000024d4f78
_devices : 00000000026cc668
_provider : 0000000001af76c8
MappingsUpdated : 0000000002316b30
__Fields : {System.Object, System.Object, System.Object, System.Object..
.}


Note how I've mapped the field value/addresses onto a synthetic PowerShell object that uses the same names for the properties as the original fields (which were underscore prefixed, as you can see in the original WinDBG output above). I can then work with the object in the debugger in a more natural way:



PS C:\> $o._lanes | dump-object


__0 : 000
__MethodTable : 000007ff0072b8c8
__EEClass : 000007feede6ba30
__Size : 88
buckets : 00000000024044e8
entries : 00000000024050e8
count : 688
version : 688
freeList : -1
freeCount : 0
comparer : 00000000013da180
keys : 0000000000000000
values : 0000000000000000
_syncRoot : 0000000000000000
m_siInfo : 0000000000000000
__Fields : {System.Object, System.Object, System.Object, System.Object...}


Note also that I've kept the metadata originally available about the object by mapping those WinDBG output lines to double underscore-prefixed properties on the object. And I've not lost all that extra metadata about the fields either: whilst the properties above 'shortcut' to the field value/address, you can look in the __Fields collection to find the details if you need them (it's just much harder to pipeline stuff this way):



PS C:\> $o.__Fields


MT : 0000000000000000
Field : 4000148
Offset : 8
Type :
VT : 0
Attr : instance
Value : 0000000002409858
Name : _laneGroups

MT : 0000000000000000
Field : 4000149
Offset : 10
Type :
VT : 0
Attr : instance
Value : 0000000002404490
Name : _lanes

# ... etc...


Normally looking in arrays and dictionaries via WinDBG is a massive pain in the arse (find the backing array for the dictionary, find the key-value pair for each item, find the object that the value points to). PowerDBG has a script to automate this, and again I've tried to implement a more 'pipeliney' one:



PS C:\> $items = dump-dictionary $o._lanes
PS C:\> $items[0..2]

key value
--- -----
00000000024098f8 00000000024098d0
0000000002409a10 00000000024099e8
0000000002409a68 0000000002409a40


You can easily pipe this to dump-object to inspect the objects themselves. In my case I wanted to know if any of the objects in the dictionary had a given flag set, which ended up looking something like this:



PS C:\> $items | 
% { Dump-Object $_.value } |
? { $_.MyFlag -eq 1 } |
% { $_.MyId } |
Dump-Object |
% { $_.__String }


That's a mouthful, but basically what I'm doing is a !do for all the values in that dictionary, and for all those that have MyFlag set true I send the MyId field down the pipeline. That's a string, so I do a dump-object on it, and then send the actual string value to the output.



With a large dictionary this stuff can take quite some time (and seriously chew memory in WinDBG) but then you wouldn’t be worrying about any of this if the dictionary only had two items – you’d do it by hand.



At the moment all this is unashamedly sitting atop PowerDBG’s basic ‘channel’ to WinDBG, but that should probably go too. PowerDBG grabs lines from the console out and concatenates them into a string, but I actually want line-by-line output from the debugger, because I want to leverage PowerShell’s streaming pipeline (i.e. emit objects as they are ready, rather than when the command finishes). Maybe another day.



You can get the script for this from my SkyDrive. It’s definitely a first pass, but.

Monday, March 29, 2010

No One Size Fits All

One of the things that gets me particularly hot and bothered under the collar is when people who should know better stand up and claim something as objective truth (I’m going to limit myself to software engineering here, but you can probably infer the rest), when it’s clearly a matter of opinion and circumstance.

Many pundits proselytize agile this way.

For example, people say things like “you should be aiming for 90% test coverage”, and round the room people nod sagely and take notes in their little pads, whilst I’m screaming inside my head and fighting the urge to tackle the speaker to the floor and finish him off there and then.

No. There is No One Size Fits All.

It’s kinda the software equivalent of the cover shot, the airbrushed reality held up for us all to feel inadequate against. You’re not doing TDD, therefore you are stupid. You’re not using IOC so your project will fail. And yes, your bum does look big in that form-bound-to-tableAdapter.

Give me a break.

Don’t get me wrong: I like unit tests as much as the next man. That is, unless the next man is a rabid evangelical fanatic, feverishly copulating over a copy of Extreme Programming Explained. Tests have a vital role in controlling quality, costs and regressions. But their value lies in helping you achieve your goals: they have no intrinsic worth in and of themselves. And they are just one tool in the toolbox, whose relative value on a project is entirely contextual, based on the team, the requirements, the business landscape and the technologies.

So the answer, as always is ‘it depends’. And this should always be your talisman for detecting shysters everywhere. If someone deviates from this pattern:

Q: (insert important question here)
A: It depends

…then you know they are either lying, or don’t know. If the question is worth asking, this should be the answer.

If you’re actually giving the answer you probably want to give a bit more than just a literal ‘it depends’ answer, otherwise you still look like you don’t know. You want to couch your answer in terms of various options, and the parameters within which each option becomes viable. But the answer is always ultimately a question for the asker, because there is no truth and all things are relative and beauty is in the eye of the beholder and so on.

So for example the level of automated unit testing on your team should consider things like whether any of your team have written any tests before; the opportunity cost (quality vs. time-to-market); the relative ratios of manual testing vs. developer costs; and especially the amenability of your tech stack to automated testing.

It’s a common - but facile - argument to suggest hard-to-test is somehow the fault of your design, when you may have to work with products like BizTalk, SharePoint, Analysis Services, Reporting Services, Integration Services and – hey – we might even have some legacy code in here too. Do these somehow not count? Because in my experience this is where many (if not most) of the problems actually lie.

Similarly, many pundits have taken the ‘people over practices’ mantra to mean ‘hire only the top n%’ (where n is < 10), whereas on your team you need to consider the local market, your costing structure and your growth model. Clearly, not everyone can hire above the average, so how do you scale?

And sorry Dr Neil, but bugs are a fact of life. Nothing else in this world is perfect, why should software be any different? Everything has limits, some designed, some unforeseen, but always there is a choice: fix it, or do something else. And that’s a business cost/benefit decision, not a religious tenet: is it worth the cost of fixing? If you are sending people to the moon, or running nuclear power stations[1], you look at things very differently than if you’re running a two-week online clickthro campaign for Cialis[2]. Get over it. Bugs are risks, and there is more than one way of managing risk. Remember product recall cost appraisals? Fight Club? Oh well.

Ultimately there is only what works for you, on your project, for your client. Everything else is at best constructive criticism, at worst (more common) a fatal distraction.

There is No One Size Fits All

See also: Atwood and Spolski’s Podcast 38

 

[1] Though of course in either of those cases you wouldn’t be violating the EULA by using the CLR, or – I suspect – reading this blog anyway.
[2] You’re kidding right? Look it up

Friday, March 26, 2010

Break Back Into Locked Out Sql Instance

This is how to get ‘back into’ a SQL instance when the local administrators group have been ‘locked out’ by not being SYSADMIN on the sql instance (and the SA password has been lost / other admin accounts are unknown / inaccessible)

On more than one occasion people who should know better have flat-out told me that this can’t be done, so just while I have the link handy:

…if SQL Server 2005 is started in single-user mode, any user who has membership in the BUILTIN\Administrators group can connect to SQL Server 2005 as a SQL Server administrator. The user can connect regardless of whether the BUILTIN\Administrators group has been granted a server login that is provisioned in the SYSADMIN fixed server role. This behavior is by design. This behavior is intended to be used for data recovery scenarios.

http://support.microsoft.com/default.aspx?scid=kb;en-us;932881&sd=rss&spid=2855

This is also true for Sql 2008. See Starting SQL Server In Single-User Mode

Tuesday, March 23, 2010

Twitter

Microblogging?! Isn’t blogging bad enough?

“It’s a cacophony of people shouting their thoughts into the abyss without listening to what anyone else is saying”

This could have been me in the pub on any of a number of times someone was unfortunate enough to ask my opinion, but it’s not, it’s Joel Spolsky, and that makes it right, or at least marginally more authoritative.

Sadly, as the post above details, Joel is ‘retiring’ from the type of long opinionated tirades we’ve grown to love, and moving into more ‘objective’ territory (I suggest he bypass Atwood altogether, and get it on with McConnell directly). But from where will we get our invective? Whither the curmudgeon of the internet, the grumpy old man of programming? I think, with one huge exception, I’ve argued Joel’s side in most software engineering debates I ever had.

How will I know what to think now?
