Learning to delegate is one of those things that you don’t really need to do much at the beginning of your career. And since it’s rarely, if ever, taught in school, it’s something you largely have to develop on your own if you aren’t a natural delegater. (Incidentally, that would make a great name for a tasty alligator dish.)

If you’re good at what you do, delegating that work is HARD. How can I give this task to someone else when I know I can do it better, more quickly, or more to my exacting standards? It often feels like it would take as long to explain the task and support the person as it would to just do it myself.

Or would it?

If you’re anything like me, you probably do one of two things that sabotage your ability to delegate: you underestimate how long it will take to do something yourself, or you underestimate how capable your coworkers are.

I have been focusing heavily lately on delegation. In the project I’m currently involved in, I have done a large chunk of the ‘heavy lifting’ tasks. With deadlines and pressures, this is not practical. Instead, lately, I have been tending to ‘prime’ these tasks: clarifying and preparing things that need to be done so that someone else can take on some of the burden. Then I can evaluate the work of others, assist with remediation, and spend more time helping to plan, estimate, and prime other tasks. Not only does this help relieve some of the burden on me, but giving stretch tasks to others and then providing feedback/evaluation of their work helps them improve their skills. Wins all around.


I spent much of the past few days fixing a bug in some database access code that I wrote last year. Up until now, that code had caused very few problems, and I was quite proud of it. (I still am, in spite of this bug.)

In this case, I was losing database records that linked parent entities to children, but only when an entity with a list of child entities was created, saved, loaded, modified, and then saved again. Lazy loading kept the child entities from being loaded, as it was supposed to, but on save I was erasing all linking records and then, because the children weren’t loaded, skipping the step that writes them back.

Once I identified the problem, I wrote a failing test case, fixed the original code, and confirmed the test passed. But here’s the rub: two new tests began to fail! After questioning whether I really had the correct solution to the original bug (and whether I was hallucinating), I began to investigate these failures. It was then that I discovered what had been happening: other code of mine was *depending* on that bug. I had to fix that code too.
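The shape of that bug, and of the fix, can be sketched with a toy in-memory version. All the names here are invented for illustration; the real code was actual database access, not this sketch:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Stand-in for the link table: (parentId, childId) rows.
struct LinkStore {
    std::vector<std::pair<int, int>> links;
};

struct Parent {
    int id = 0;
    bool childrenLoaded = false;   // lazy: children not fetched yet
    std::vector<int> childIds;

    void eraseLinks(LinkStore& db) const {
        auto& v = db.links;
        v.erase(std::remove_if(v.begin(), v.end(),
                    [this](const std::pair<int, int>& l) { return l.first == id; }),
                v.end());
    }

    void writeLinks(LinkStore& db) const {
        for (int c : childIds) db.links.push_back({id, c});
    }

    // Buggy save: always erases the link rows, but only rewrites them
    // when the children happen to be loaded -- so a lazily loaded
    // parent silently loses its links on save.
    void saveBuggy(LinkStore& db) const {
        eraseLinks(db);
        if (childrenLoaded) writeLinks(db);
    }

    // Fixed save: don't touch the link rows unless the children are
    // actually in memory to rewrite them.
    void saveFixed(LinkStore& db) const {
        if (childrenLoaded) {
            eraseLinks(db);
            writeLinks(db);
        }
    }
};
```

The interesting part is that both versions pass a naive “create, save, reload” test; only the create–save–load–modify–save sequence with unloaded children exposes the difference.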



At my workplace, zedIT, our development team has been holding a quarterly half-day seminar session for the past few years. It’s an opportunity for us to get together and talk about what we are doing on our respective projects, or what we have learned outside work in our own career/professional development. In a distributed work environment where the developers are often allocated to separate projects, it’s also a rare opportunity to network and socialize. I never* miss it.

Today I ran the session for the first time. (I took over the reins a few months ago.) In addition, I gave a talk about an algorithm that I created in a project I am working on. The talk went fairly well, although I still need to polish my public speaking skills. Maybe if I had finished the talk more than 12 hours before I gave it, I might have had time to practice!

The other talks covered topics such as Microsoft Dynamics AX, Communications in Change Management, Browser Local Storage and NoSQL. It’s great to be able to take work time to learn about things we would otherwise have to explore on our own, which is often hard to do with the busy lives we lead these days.

I expect I’ll run the community for at least a couple of years, depending on when and whether there is someone else interested in taking it over.

* And now that I am in charge of it, it will never be scheduled when I’m on vacation!


Last night I had a dream that I was performing a code review for someone.

Keep in mind that browsers like Chrome, Firefox and others use an integer to represent this, but Internet Explorer and a browser whose logo looks like yellow and black lego blocks use a value that is specific to Windows. This used to be ok until that bastard James Franco messed it all up. Now we have to deal with the fallout. At any rate, just keep it in the back of your mind. I drunk.

I’m not sure if the dream was weirder, or if the fact that I was dreaming about code reviews is weirder. (Ok, I do know which is weirder!)

Nowhere that I have worked has ever had any kind of formal code review process. Usually the directive from management is something like ‘do code reviews’. As you can imagine, the quality of the reviews that are then performed varies wildly. The most basic code review often finds only the very obvious problems, or results in comments on things of minimal importance, like style. And if a developer makes obvious mistakes repeatedly, your problem isn’t solvable with code reviews.

I have also run into very in-depth reviews. One reviewer in particular that I worked with took great pains to analyze what was written and give a detailed, useful review. It was as if he felt he had as much of an investment in the code working correctly as I did, even though there was no mandate from management to do that. (He rightly rejected my fix because it was a bandage solution instead of addressing the underlying problem.) This reviewer tended to analyze the work on his own first, with no input from the original developer unless necessary. This tests how easily the code is understood, and removes any temptation for the reviewer to be gentler than necessary to stroke egos. Once the analysis is complete, the reviewer works together with the developer to explain it and point out where improvements are needed.

A very basic level of review should exist in any project. This will help prevent the most egregious problems from making it into the code base. Ideally, both the developer and the reviewer should have roughly the same stake in quality and correctness; this does not happen easily without organizational intervention. The primary goal for an organization in doing code reviews is to improve code quality, but reviews can also dramatically and quickly improve developers’ skills, which in turn, of course, helps the organization. My advice: establish a code review policy, enforce it at the project or organization level, and help developers understand that it’s about improving quality, not criticizing their work. When even the best developers embrace it, they improve their skills.


For years I’ve been in the camp that optimization of software should come towards the end of a project. Build the software, profile it, then fix any outlandish problems. And that has largely worked fine. I tend to write the code for projects in which maintainability and simplicity outweigh performance. Part of my reasoning, too, has been because it’s often hard to tell where the performance problems lie until you take a measurement.

And that’s largely still true. But I’ve gradually come to push more in the direction of writing efficient code first. There are a lot of ways to do this, and once they become second nature, the effect on your ability to deliver should be minimal, while your code will be, even if just by a small amount, more efficient than it otherwise would be.

Some of these methods include taking iteration-independent calculations outside of loops, choosing the smallest possible data type, filtering lists as early as possible (in the database if you can), reducing the amount of data delivered by web services, designing web services to be more coarse-grained instead of fine-grained, caching database access of things that will be accessed frequently, and perhaps even using fewer mutable objects. I’m sure you can think of many more. Some of these things a smart compiler will probably do for you, but they are good habits to be in nonetheless, particularly if you happen to not be using a smart compiler, or don’t know exactly how much optimization your compiler does.
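The first habit in that list, hoisting iteration-independent calculations out of loops, looks like this. The function names and the tax-rate example are invented for illustration:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Recomputes the same factor on every pass through the loop.
double totalWithTaxSlow(const std::vector<double>& prices, double ratePct) {
    double total = 0.0;
    for (double p : prices)
        total += p * (1.0 + ratePct / 100.0);  // iteration-independent work
    return total;
}

// Hoists the iteration-independent calculation out of the loop.
double totalWithTaxFast(const std::vector<double>& prices, double ratePct) {
    const double factor = 1.0 + ratePct / 100.0;  // computed once
    double total = 0.0;
    for (double p : prices)
        total += p * factor;
    return total;
}
```

An optimizing compiler will often do this particular transformation for you, but as noted above, the habit costs nothing and pays off when the compiler can’t prove the expression is invariant (for example, when it involves a function call).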

All of this said, you can’t escape the profiling if you are experiencing performance problems. If nothing else, it will teach you about how to help avoid similar problems in the future.


(Previously posted on my old blog)

Recently, for a project at work I needed a 64-bit Windows version of Perl Compatible Regular Expressions. I didn’t want to use a binary built by a third party, so I chose to build it from the latest sources myself, mainly for reproducibility. Also, I don’t know if a 64-bit Windows binary of PCRE even exists.

Unfortunately, PCRE doesn’t have much support for Windows builds, much less a 64-bit build. Building 64-bit binaries with Visual C++ requires VS 2005 or later; in my case, I am using VS 2005. I created a makefile for this version by creating a project in Visual Studio 6, exporting a makefile, and then modifying the compiler and linker flags in it to get what I wanted. VS 2005 cannot generate makefiles, so it’s good to have a copy of VS 6 kicking around for that purpose.

Here is my makefile, in case it’s of any use to you. Note that I have not built the tests – just the DLL and import library. You will probably want to copy it out and name it ‘Makefile’.

# Microsoft Developer Studio Generated NMAKE File, Based on pcre.dsp
!IF "$(CFG)" == ""
CFG=pcre - x64 Debug
!MESSAGE No configuration specified. Defaulting to pcre - x64 Debug.
!ENDIF

!IF "$(CFG)" != "pcre - x64 Release" && "$(CFG)" != "pcre - x64 Debug"
!MESSAGE Invalid configuration "$(CFG)" specified.
!MESSAGE You can specify a configuration when running NMAKE
!MESSAGE by defining the macro CFG on the command line. For example:
!MESSAGE NMAKE /f "pcre.mak" CFG="pcre - x64 Debug"
!MESSAGE Possible choices for configuration are:
!MESSAGE "pcre - x64 Release" (based on "Win64 Dynamic-Link Library")
!MESSAGE "pcre - x64 Debug" (based on "Win64 Dynamic-Link Library")
!ERROR An invalid configuration is specified.
!ENDIF

!IF "$(OS)" == "Windows_NT"
NULL=
!ELSE
NULL=nul
!ENDIF

!IF "$(CFG)" == "pcre - x64 Release"

OUTDIR=.\Release
INTDIR=.\Release
# Begin Custom Macros
OutDir=.\Release
# End Custom Macros

ALL : "$(OUTDIR)\pcre.dll"

CLEAN :
    -@erase "$(INTDIR)\pcre_chartables.obj"
    -@erase "$(INTDIR)\pcre_compile.obj"
    -@erase "$(INTDIR)\pcre_config.obj"
    -@erase "$(INTDIR)\pcre_dfa_exec.obj"
    -@erase "$(INTDIR)\pcre_exec.obj"
    -@erase "$(INTDIR)\pcre_fullinfo.obj"
    -@erase "$(INTDIR)\pcre_get.obj"
    -@erase "$(INTDIR)\pcre_globals.obj"
    -@erase "$(INTDIR)\pcre_info.obj"
    -@erase "$(INTDIR)\pcre_maketables.obj"
    -@erase "$(INTDIR)\pcre_newline.obj"
    -@erase "$(INTDIR)\pcre_ord2utf8.obj"
    -@erase "$(INTDIR)\pcre_refcount.obj"
    -@erase "$(INTDIR)\pcre_study.obj"
    -@erase "$(INTDIR)\pcre_tables.obj"
    -@erase "$(INTDIR)\pcre_try_flipped.obj"
    -@erase "$(INTDIR)\pcre_ucd.obj"
    -@erase "$(INTDIR)\pcre_valid_utf8.obj"
    -@erase "$(INTDIR)\pcre_version.obj"
    -@erase "$(INTDIR)\pcre_xclass.obj"
    -@erase "$(OUTDIR)\it_pcre.exp"
    -@erase "$(OUTDIR)\it_pcre.lib"
    -@erase "$(OUTDIR)\it_pcre1_vc80x64.dll"
    -@erase "$(OUTDIR)\it_pcre1_vc80x64.ilk"
    -@erase "$(OUTDIR)\it_pcre1_vc80x64.pdb"

"$(OUTDIR)" :
    if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"

CPP=cl.exe
CPP_PROJ=/nologo /c /I "." /D "PCRE_EXPORTS" /D "HAVE_CONFIG_H" /Fo"$(INTDIR)\" /Fd"$(INTDIR)\"

.c{$(INTDIR)}.obj::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cpp{$(INTDIR)}.obj::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cxx{$(INTDIR)}.obj::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.c{$(INTDIR)}.sbr::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cpp{$(INTDIR)}.sbr::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cxx{$(INTDIR)}.sbr::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

MTL=midl.exe
MTL_PROJ=/nologo /mktyplib203 /win32
RSC=rc.exe
BSC32=bscmake.exe
BSC32_FLAGS=/nologo /o"$(OUTDIR)\pcre.bsc"
BSC32_SBRS=
LINK32=link.exe
LINK32_FLAGS=/nologo /dll /incremental:no /pdb:"$(OUTDIR)\pcre.pdb" /debug /machine:x64 /out:"$(OUTDIR)\pcre.dll" /implib:"$(OUTDIR)\pcre.lib"
LINK32_OBJS= \
    "$(INTDIR)\pcre_chartables.obj" \
    "$(INTDIR)\pcre_compile.obj" \
    "$(INTDIR)\pcre_config.obj" \
    "$(INTDIR)\pcre_dfa_exec.obj" \
    "$(INTDIR)\pcre_exec.obj" \
    "$(INTDIR)\pcre_fullinfo.obj" \
    "$(INTDIR)\pcre_get.obj" \
    "$(INTDIR)\pcre_globals.obj" \
    "$(INTDIR)\pcre_info.obj" \
    "$(INTDIR)\pcre_maketables.obj" \
    "$(INTDIR)\pcre_newline.obj" \
    "$(INTDIR)\pcre_ord2utf8.obj" \
    "$(INTDIR)\pcre_refcount.obj" \
    "$(INTDIR)\pcre_study.obj" \
    "$(INTDIR)\pcre_tables.obj" \
    "$(INTDIR)\pcre_try_flipped.obj" \
    "$(INTDIR)\pcre_ucd.obj" \
    "$(INTDIR)\pcre_valid_utf8.obj" \
    "$(INTDIR)\pcre_version.obj" \
    "$(INTDIR)\pcre_xclass.obj"

"$(OUTDIR)\pcre.dll" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
    $(LINK32) @<<
    $(LINK32_FLAGS) $(LINK32_OBJS)
<<

!ELSEIF "$(CFG)" == "pcre - x64 Debug"

OUTDIR=.\Debug
INTDIR=.\Debug
# Begin Custom Macros
OutDir=.\Debug
# End Custom Macros

ALL : "$(OUTDIR)\pcre.dll"

CLEAN :
    -@erase "$(INTDIR)\pcre_chartables.obj"
    -@erase "$(INTDIR)\pcre_compile.obj"
    -@erase "$(INTDIR)\pcre_config.obj"
    -@erase "$(INTDIR)\pcre_dfa_exec.obj"
    -@erase "$(INTDIR)\pcre_exec.obj"
    -@erase "$(INTDIR)\pcre_fullinfo.obj"
    -@erase "$(INTDIR)\pcre_get.obj"
    -@erase "$(INTDIR)\pcre_globals.obj"
    -@erase "$(INTDIR)\pcre_info.obj"
    -@erase "$(INTDIR)\pcre_maketables.obj"
    -@erase "$(INTDIR)\pcre_newline.obj"
    -@erase "$(INTDIR)\pcre_ord2utf8.obj"
    -@erase "$(INTDIR)\pcre_refcount.obj"
    -@erase "$(INTDIR)\pcre_study.obj"
    -@erase "$(INTDIR)\pcre_tables.obj"
    -@erase "$(INTDIR)\pcre_try_flipped.obj"
    -@erase "$(INTDIR)\pcre_ucd.obj"
    -@erase "$(INTDIR)\pcre_valid_utf8.obj"
    -@erase "$(INTDIR)\pcre_version.obj"
    -@erase "$(INTDIR)\pcre_xclass.obj"
    -@erase "$(OUTDIR)\it_pcre.exp"
    -@erase "$(OUTDIR)\it_pcre.lib"
    -@erase "$(OUTDIR)\it_pcre1_vc80x64.dll"
    -@erase "$(OUTDIR)\it_pcre1_vc80x64.ilk"
    -@erase "$(OUTDIR)\it_pcre1_vc80x64.pdb"

"$(OUTDIR)" :
    if not exist "$(OUTDIR)/$(NULL)" mkdir "$(OUTDIR)"

CPP=cl.exe
CPP_PROJ=/nologo /c /I "." /D "PCRE_EXPORTS" /D "HAVE_CONFIG_H" /Fo"$(INTDIR)\" /Fd"$(INTDIR)\" /RTC1

.c{$(INTDIR)}.obj::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cpp{$(INTDIR)}.obj::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cxx{$(INTDIR)}.obj::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.c{$(INTDIR)}.sbr::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cpp{$(INTDIR)}.sbr::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

.cxx{$(INTDIR)}.sbr::
    $(CPP) @<<
    $(CPP_PROJ) $<
<<

MTL=midl.exe
MTL_PROJ=/nologo /mktyplib203 /win32
RSC=rc.exe
BSC32=bscmake.exe
BSC32_FLAGS=/nologo /o"$(OUTDIR)\pcre.bsc"
BSC32_SBRS=
LINK32=link.exe
LINK32_FLAGS=/nologo /dll /incremental:yes /pdb:"$(OUTDIR)\pcre.pdb" /debug /machine:x64 /out:"$(OUTDIR)\pcre.dll" /implib:"$(OUTDIR)\pcre.lib"
LINK32_OBJS= \
    "$(INTDIR)\pcre_chartables.obj" \
    "$(INTDIR)\pcre_compile.obj" \
    "$(INTDIR)\pcre_config.obj" \
    "$(INTDIR)\pcre_dfa_exec.obj" \
    "$(INTDIR)\pcre_exec.obj" \
    "$(INTDIR)\pcre_fullinfo.obj" \
    "$(INTDIR)\pcre_get.obj" \
    "$(INTDIR)\pcre_globals.obj" \
    "$(INTDIR)\pcre_info.obj" \
    "$(INTDIR)\pcre_maketables.obj" \
    "$(INTDIR)\pcre_newline.obj" \
    "$(INTDIR)\pcre_ord2utf8.obj" \
    "$(INTDIR)\pcre_refcount.obj" \
    "$(INTDIR)\pcre_study.obj" \
    "$(INTDIR)\pcre_tables.obj" \
    "$(INTDIR)\pcre_try_flipped.obj" \
    "$(INTDIR)\pcre_ucd.obj" \
    "$(INTDIR)\pcre_valid_utf8.obj" \
    "$(INTDIR)\pcre_version.obj" \
    "$(INTDIR)\pcre_xclass.obj"

"$(OUTDIR)\pcre.dll" : "$(OUTDIR)" $(DEF_FILE) $(LINK32_OBJS)
    $(LINK32) @<<
    $(LINK32_FLAGS) $(LINK32_OBJS)
<<

!ENDIF

!IF "$(NO_EXTERNAL_DEPS)" != "1"
!IF EXISTS("pcre.dep")
!INCLUDE "pcre.dep"
!ELSE
!MESSAGE Warning: cannot find "pcre.dep"
!ENDIF
!ENDIF

!IF "$(CFG)" == "pcre - x64 Release" || "$(CFG)" == "pcre - x64 Debug"

SOURCE=.\pcre_chartables.c

"$(INTDIR)\pcre_chartables.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_compile.c

"$(INTDIR)\pcre_compile.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_config.c

"$(INTDIR)\pcre_config.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_dfa_exec.c

"$(INTDIR)\pcre_dfa_exec.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_exec.c

"$(INTDIR)\pcre_exec.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_fullinfo.c

"$(INTDIR)\pcre_fullinfo.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_get.c

"$(INTDIR)\pcre_get.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_globals.c

"$(INTDIR)\pcre_globals.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_info.c

"$(INTDIR)\pcre_info.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_maketables.c

"$(INTDIR)\pcre_maketables.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_newline.c

"$(INTDIR)\pcre_newline.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_ord2utf8.c

"$(INTDIR)\pcre_ord2utf8.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_refcount.c

"$(INTDIR)\pcre_refcount.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_study.c

"$(INTDIR)\pcre_study.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_tables.c

"$(INTDIR)\pcre_tables.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_try_flipped.c

"$(INTDIR)\pcre_try_flipped.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_ucd.c

"$(INTDIR)\pcre_ucd.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_valid_utf8.c

"$(INTDIR)\pcre_valid_utf8.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_version.c

"$(INTDIR)\pcre_version.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

SOURCE=.\pcre_xclass.c

"$(INTDIR)\pcre_xclass.obj" : $(SOURCE) "$(INTDIR)"
    $(CPP) $(CPP_PROJ) $(SOURCE)

!ENDIF


You may have heard the software development principle ‘don’t repeat yourself’, or DRY. I stick to it as well as I can, but I have been trying to think of a list of exceptions. When is repeating yourself ok?

And by ok, I still don’t mean good, but at least less bad.

Logic vs Structure

Structure, the boilerplate that allows you to implement logic, is a less important place to avoid duplication than the logic itself. Duplication there should still be avoided, but it’s not as egregious a foul. By structure I typically mean things like getters and setters, the setup for making SOAP calls, and generated code. Like structure, there are different kinds of logic. Of these, business logic is the one in which duplication should be most strongly avoided. This is often a difficult problem, since the same logic is often required in both clients and servers, in different libraries, and even in different languages. But business logic often changes frequently, and that makes reducing duplication a worthwhile effort.
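As a sketch of what avoiding business-logic duplication looks like in practice, here is a made-up discount rule kept in a single shared function so that two callers can’t drift apart (all names and the rule itself are invented for illustration):

```cpp
#include <cassert>
#include <cmath>

// Single home for the business rule: 10% off orders of 10 or more.
// If the rule changes, it changes in exactly one place.
double discountedTotal(double unitPrice, int quantity) {
    double total = unitPrice * quantity;
    return quantity >= 10 ? total * 0.90 : total;
}

// Both paths delegate to the shared rule instead of re-implementing it.
double quoteForClientUi(double unitPrice, int qty) { return discountedTotal(unitPrice, qty); }
double totalForInvoice(double unitPrice, int qty)  { return discountedTotal(unitPrice, qty); }
```

The hard cases mentioned above are when the two callers can’t share a function at all, such as a JavaScript client and a server in another language; then the choices are code generation, a shared service, or accepting the duplication with eyes open.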

Production vs Other

I hold production code to the highest standard. This is the code that gets used by customers on an ongoing basis, where quality is most important. Other types of code such as utilities, data conversion code, and tests are less important. Yes, even though tests are often delivered with an application, their use only applies to development of new features and not regular use, so it is often not worthwhile to spend large amounts of time reducing duplication there. Focus on reducing duplication in production code and generating more test coverage, rather than improving the quality of test code.


You can’t always avoid duplication, even in the above situations. One example is when you do not have the ability to modify source code, and in order to extend its capabilities you first have to copy it. Or some existing code is part of a production system that cannot be modified until a replacement system has proven its value. But these are rarities, and should be approached on a case-by-case basis.

Thanks to my brother Angus for his input on this article.


(Following is a slightly modified version of an article I posted on a previous iteration of this blog.)

At a former job, I spent a fair amount of time porting software. Porting is difficult, and in the product I worked on, which was fairly mature, most of the issues I ran into were configuration or environmental, and were rarely problems with the code.

One unit test revealed an interesting bug when I was porting C++ code from a 32-bit to a 64-bit compiler.

long value = 0xFFFFFFFF;

The intention was an integer with all bits turned on, which, for a signed value, is equivalent to -1. Under the 32-bit compiler, where long was 32 bits wide, that’s exactly what it produced. But when this code is compiled with a 64-bit compiler that widens long to 64 bits, reality no longer matches intent: the hex literal is unsigned, so the value becomes 0x00000000FFFFFFFF, which is not equivalent to -1 in the 64-bit world.

The fix:

long value = -1;

The moral: Try not to be too smart for your own good!
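The same trap can be shown with fixed-width types, which makes it reproducible on any platform rather than depending on how wide a particular compiler makes long:

```cpp
#include <cassert>
#include <cstdint>

// The literal 0xFFFFFFFF has an unsigned 32-bit type, so assigning it
// to a 64-bit integer zero-extends it: you get 4294967295, not -1.
int64_t fromLiteral() { return 0xFFFFFFFF; }

// -1 sign-extends to whatever width the target type has: all bits set.
int64_t fromMinusOne() { return -1; }
```

Writing -1 says what you mean once and stays correct at every width; writing out the bits bakes an assumption about the type’s size into the code.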


Scary, isn’t it? How is it that this new block of code you just wrote performed perfectly the very first time you ran it? It’s happened to me a few times in my career with non-trivial blocks of code.

Ok, maybe you are just *that good*. Maybe you thought through your problem and improved your odds of solving it from the get-go. Or maybe your code is bypassing all the hard stuff and because it’s not crashing it just looks like it’s working.

What next? How confident are you in it? You wrote tests of course, and they all pass too. Here are some things to do if you’re not ready to move onto the next problem:

1. Have a colleague look at it. Make sure you’re not missing something obvious. Are you making any assumptions that you should not?
2. Step-debug the code anyway. Inspect variables as it runs.
3. Examine your tests. Are they too superficial or too general? Is there overlap between what the tests accept as the ‘solution space’ and what is actually failure?
4. Comment your code more. Describe the intent so the next maintainer will have some insight into what you were trying to accomplish.
5. Consider writing or performing a more exhaustive test. Try your best to break it.
6. Consider load or stress tests. (Load and stress tests are different!)
7. Look for code that could cause exceptions or infinite loops. Build escape hatches for ‘do’ and ‘while’ loops if you need to.
8. Remove dead code, dead comments, commented-out code, unused code, or anything else that looks sloppy.

If you’ve done all of these and everything still works, then congratulations! Next problem!
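An escape hatch for a loop (point 7) can be as simple as a hard cap on iterations, so a logic error degrades into a detectable failure instead of a hang. This is a hypothetical sketch; the convergence routine is invented for illustration:

```cpp
#include <cassert>
#include <limits>

// Repeatedly halves x until it drops to 1.0 or below. The iteration cap
// is the escape hatch: if the loop can't make progress (e.g. x is
// infinite), we report failure instead of spinning forever.
bool converge(double& x, int maxIterations = 1000) {
    int i = 0;
    while (x > 1.0) {
        if (++i > maxIterations)
            return false;  // escape hatch triggered
        x /= 2.0;
    }
    return true;
}
```

The cap should be generous enough that it never fires on legitimate input; its only job is to turn “the server stopped responding” into “this call returned false.”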


I got my hands on a Microsoft Lumia 550 this week. The phone was released in December of last year, and it’s my first Windows phone of any kind. I bought it in part to replace my annoying Alcatel OneTouch Idol X and partly because I wanted to try something different. It wasn’t overly expensive so I thought if it didn’t work out, I wouldn’t have lost too much.

Fortunately, the first day has been great. Only one problem has come up: I cannot charge the phone with the wall charger (it’s European spec, though they included a converter in the box).

On to the good stuff. The phone looks good, feels good, it’s responsive, the operating system doesn’t look or feel old, and everything that I used to do on the Android phone has an equivalent. Everything that I have tried so far, that is. The removable battery and SD-card slot are great things to have too.