Wednesday, December 26, 2012

Reclaiming my Brain: My Upcoming Social Media Hiatus

"Who am I in the midst of all this thought traffic?" -Rumi

I've decided to take a social media hiatus for the month of January (at least). There are about a billion reasons for this, but I wanted to catalog them here in case I forget or I want to reverse course.

Why I'm Doing This

I've been losing focus on things that matter. I want to create. I want to think about bigger ideas. I want to do some long-form reading for a change. I realized something was wrong recently when I couldn't focus on a long-form blog post I was reading because I had the urge to check Twitter -- while I was walking from one office to another during work. It's time for a change. I need to become a citizen of the real world again.

The Social Media Outlets I'm Currently Using

So, what will I be giving up?
The list is extensive when I lay it all out.
  • Facebook
  • Twitter
  • Google+
  • Google Reader
  • LinkedIn
  • Quora
  • FellowUp
  • Timehop
  • Foursquare
  • IFTTT (disabling my social recipes)
  • Path
  • Sonar
  • Undrip
  • Bonus: computer games
I'll allow myself e-mail, but only as a tool to reach out when necessary or to reconnect with friends. While using it, I'll be unsubscribing from everything possible so I can stop the deluge of e-mail I get. I'll also allow myself to write, and may actually start putting some personal thoughts out there somewhere (this blog has always been largely technical).

What I Hope to Achieve
  • Better Sleep. I was constantly browsing social media and Google Reader before sleep, which would be fine, except I had to read all of it. I'm hoping this will let me turn off the screen before bed and get some solid sleep.
  • Better Focus. I want to do some deep thinking on projects & problems. I want to free up space for my mind to fill in with ideas, etc.
  • Better Memory. Because my brain is constantly jumping around, and transient information is constantly bombarding it, my memory has become terrible. I want to do some exercises and see if I can't improve it.
  • Better Friendship. My friendship skills have seriously and inexcusably fallen off. Social media doesn't equate with being social. I need to get out in the world again and say hello instead of typing it.
  • Better Performance at Work. I get a lot done at work, but I'm excited to see what can happen when I'm not pulling away during every spare minute of downtime to catch up on the social media world.
  • More Deep Learning. Even over this holiday, I've found that I'm able to focus on tutorials and technical reading much better, because there's not the promise of some tweet to keep me occupied every other minute.
  • Read an actual book. Seriously, I can't remember the last time I finished an actual paper book.
  • Stepping back from the Political scene. I'm way involved, and I'm letting others' opinions pass for my own when I agree with them. That's not good enough. I want to see if I can dig deep and attempt to solve some problems in a comprehensive way, and I want to put something out there that others can challenge and build upon or critique. Hiding behind the opinions of others doesn't cut it.
  • Switching from "Consume" to "Produce" mode. I have ideas I want to build, but the state of mind for building and consuming are distinctly different, I've found, and it's not easy to switch from one to the other (at least not for me).
  • Separating Signal from Noise. I hope that stepping back will help remind me what's important and what I actually miss about social media, which will help me learn to more wisely consume it in the future.
How I'm Preparing & Cutting Myself Off

My brother accepted the challenge to barricade me from social media, so I'm giving him the keys to the kingdom. I'll be doing the following:
  • Taking inventory of my social media accounts, the e-mail alerts I receive, and the apps I have linked to Facebook/Twitter. This is my detox list.
  • Disabling the connections to all third-party apps that could ping me in those accounts.
  • Stopping any e-mail alerts from these sites.
  • Removing these apps from my phone.
  • Having my brother change the passwords for each of those accounts in front of me.
  • Utilizing a blocker -- in every web browser on every computer -- to completely block all of the aforementioned web sites.
Possible Issues

  • Self-sabotage. I'm crafty, and while I think I can stay away, I'm sure it will be tempting to try to open something. If you see me on social media somehow in January, yell at me. Like, a lot.
  • Fear of Missing Out (FOMO). Apparently this is a real thing, and I'm sure I'll feel some of it, but I'm hoping it won't amount to much.

Monday, December 17, 2012

How to: Fix Event 1209 for ADWS when using vCenter 5.1 on Server 2008 R2 [Field Notes]


I'm Running vCenter 5.1 and Windows Server 2008 R2.

I notice a number of events in the event log with a source of "ADWS", event ID of 1209. The event text reads:

Active Directory Web Services encountered an error while reading the settings for the specified Active Directory Lightweight Directory Services instance.  Active Directory Web Services will retry this operation periodically.  In the mean time, this instance will be ignored.

 Instance name: ADAM_VMwareVCMSDS


Apparently, VMware doesn't create the registry value with the proper type for one entry, which causes ADWS to throw an error.

To fix this:

  • Back up your registry.
  • Open regedit.exe
  • Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\ADAM_VMwareVCMSDS\Parameters
  • Note that the "Port SSL" value is a REG_SZ rather than a REG_DWORD, and that it's empty
  • Delete the "Port SSL" value.
  • Create a new value named "Port SSL" of type REG_DWORD
  • Double-click the new value and set its data to 636.
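If you'd rather script the change than click through regedit, the same fix can be sketched as a .reg file (this is my own sketch, not VMware's; double-check that the instance name matches yours before importing, and note that the first section deletes the bad value before the second recreates it as a REG_DWORD of 636, which is 0x27C in hex):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\ADAM_VMwareVCMSDS\Parameters]
"Port SSL"=-

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\ADAM_VMwareVCMSDS\Parameters]
"Port SSL"=dword:0000027c
```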
At this point, the errors should stop, and your event log will be free of the 1,440 events a day (one per minute) this was generating.


Friday, December 14, 2012

Bad UX Experience: MD MVA (2 for 1 deal!)

I know this comes as a shock to nobody, but MD's don't-call-it-the-DMV "Motor Vehicle Administration" isn't the easiest to get around.

However, this one was a little puzzling.

Bad UX Example

While attempting to fill out a change of address form on the web site, I found that the new address gets only one line (no "street 2" field). And to make matters worse, the address is limited to only 30 characters.

So, if you have a long street name and/or live in an apartment, this field will be a joy for you to fill out.

Even better: since I had to get creative to make everything fit, I shortened the "BLVD" in my new street address to "BVD" (the only way I could still fit the apartment number in). The site then tried to get me to accept the USPS standardized version, which either cut off the third digit of my apartment number or cut out the "BLVD" portion entirely.

Eventually, I had to force it to use the original, so now it will read "BVD", and I'm sure it's only a matter of time before I'll have to get that corrected.

Additional Bad UX Example

While writing this blog post, I attempted to link to the change of address form, so I right-clicked on the link titled "change of address form" and chose to copy the URL. When I pasted it just now, I got:


Really? We're still doing postbacks on buttons this way? I guess the best way to avoid link rot is to make sure your link is never a link in the first place.

What Can We as Developers Learn From This?

There are already great posts on falsehoods programmers believe about names, time, networks, geography, and build systems -- it seems like one on addresses is long overdue. In the spirit of those posts, I'll begin one below based on my experience.

In the context of addresses, all of the following are wrong:

  • The street address will be less than 30 characters
  • The location will have a house number
  • The city will be less than 30 characters
  • The USPS address standardization will always be correct
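To make the first falsehood concrete, here's a quick Python sketch (the 30-character limit mirrors the MVA form; the address itself is made up for illustration):

```python
MAX_STREET_LEN = 30  # the form's single-line limit

def fits(street_line: str) -> bool:
    """Return True if the address survives a naive single-line length check."""
    return len(street_line) <= MAX_STREET_LEN

# A perfectly ordinary address that the form rejects...
address = "1234 Constitution Boulevard Apt 123"   # 35 characters
print(fits(address))                              # False

# ...forcing the user into a lossy abbreviation the USPS won't recognize.
shortened = address.replace("Boulevard", "Bvd")   # 29 characters
print(fits(shortened))                            # True, but now the address is wrong
```

Any validation that hard-codes a short limit like this pushes users into corrupting their own data just to get past the form.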

Monday, December 10, 2012

How to: Quickly update an MVC4 project with Bootstrap LESS and FontAwesome [Field Notes]


I'd like to update my MVC4 project to use the following:
  • Bootstrap LESS source (Twitter.Bootstrap.Less nuget package)
  • FontAwesome instead of Bootstrap's icon set

However, this can be a pain for the following reasons:
  • Dotless and Bootstrap's LESS used to not play nicely together
  • The "@import" directives sometimes gave dotless an error and had to be worked around.
  • FontAwesome's font file MIME types are not all recognized by IIS out of the box


Thanks to the excellent Twitter.Bootstrap.Less.MVC4 package by Christopher Deutsch, this process is a lot easier.

Install Bootstrap LESS and Dotless
  • Create a new ASP.NET MVC4 Web Project.
  • Open the package manager console.
  • Ensure that your MVC4 Project is set as the current project in the package manager
  • Install Chris's package by using the following command: "install-package Twitter.Bootstrap.Less.MVC4"
  • The project will automatically install dotless and Twitter.Bootstrap.Less.

Reconciling BundleConfig and Bootstrap.BundleConfig

  • If you have an existing BundleConfig.cs that you've made changes to, merge those changes into Bootstrap.BundleConfig. 
  • If you haven't customized it, you can just delete BundleConfig.cs as all the defaults are in Bootstrap.BundleConfig.cs.

Install FontAwesome
  • From the package manager, run "install-package FontAwesome"
  • In the Bootstrap.BundleConfig.cs file, add the font-awesome.css file to the StyleBundle so that the line reads as follows:
            var css = new StyleBundle("~/Content/css").Include("~/Content/site.less", "~/Content/font-awesome.css");
  • Open your twitter.bootstrap file and comment out the line importing sprites.less. FontAwesome and Bootstrap's sprites naturally conflict as FontAwesome is designed to replace them.

Update IIS Settings to allow FontAwesome's Static Content
  • Add the following to web.config inside the <system.webServer> section (the mimeMap entries must live inside <staticContent>; the .eot type matches the mapping I use elsewhere):

       <staticContent>
         <remove fileExtension=".svg" />
         <remove fileExtension=".eot" />
         <remove fileExtension=".woff" />
         <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
         <mimeMap fileExtension=".eot" mimeType="application/octet-stream" />
         <mimeMap fileExtension=".woff" mimeType="application/x-woff" />
       </staticContent>

Ensure That Content is Processed on the Server
  • In Visual Studio, select all files in the /less, /font, and /content directories, and in the Properties window ensure that the Build Action is "Content" and the Copy to Output Directory option is "Copy Always". This ensures that FontAwesome, Bootstrap, etc. will show up in custom builds and when you package for IIS.


How To: Stop "Access Denied" errors in MVC Intranet Applications VS 2012/IIS Express [Field Notes]

  • Using Visual Studio 2012 and IIS Express
  • Building an MVC4 Intranet project
  • Authentication doesn't appear to work; I always get an "Access Denied" screen on every page.


This happens because IIS Express isn't configured by default for Windows Authentication.

  • Run your project.
  • While it is running / showing you the error, find the IIS Express Icon in your system tray
  • Right-click the icon and select "Show all Web Applications".
  • Click on your web application.
  • Look at the "Config" property to find where your applicationhost.config file is stored.
  • Stop your web site and open that applicationhost.config file for editing.
  • Find the <windowsAuthentication> element where enabled is set to "false" and change it to "true". Save the file.
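For reference, once edited, the relevant part of applicationhost.config should look something like this (a sketch; your file will have additional attributes and provider elements around it):

```xml
<security>
  <authentication>
    <!-- was enabled="false" by default in IIS Express -->
    <windowsAuthentication enabled="true" />
  </authentication>
</security>
```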


Thursday, November 29, 2012

Latest First-Hand WMATA Fail: SMARTripoff cards!

OK, technically second-hand; this story comes to us courtesy of the lovely Caroline.

How's This for an Epic WMATA fail? 

I'll bullet it out for easy digestion:

  • The buses require you to pay either cash or a Smartrip card.
  • A discount is given when you use this card and a bigger one when you use the train/bus in combination. The expectation is that people don't have to use cash. WMATA is incentivizing customers to use the card.
  • Some cards, it turns out, are new and apparently also incompatible with some existing card reader terminals.
  • WMATA either released these cards into the wild knowing this, or didn't bother to test.
  • WMATA has to update the terminals on buses individually to fix them.
  • WMATA has no idea which buses have the update yet and somehow has no way to track them.
  • WMATA says that in this situation, drivers are supposed to let people on for free. HOWEVER...
  • ...WMATA has not informed its drivers formally of this situation and does not know which ones know about it and which ones don't.
  • This is a known issue to WMATA, and is known to affect all 20-digit cards beginning in 0167.
Let Me Count the Fails:
  1. Technology Fail: Who doesn't test card readers or firmware w/cards in the wild before they're released? How could this problem ever exist? 
  2. Asset Management Fail: I know that WMATA uses enterprise-class asset management software (IBM Maximo) because I was at a seminar with them. Given this, how is it possible that it is unknown which buses have the card readers with issues and which ones don't?
  3. Business Fail: WMATA bills SMARTrip as the easy solution and goes to great lengths to build confidence about how it works. And yet, despite putting $20 on her card this morning, Caroline was almost unable to board the bus she needs to get to work. WMATA also asked Caroline to call in to report the issue on each bus it happens on so they could track it. How is that a customer's responsibility?
  4. Communication Fail: WMATA didn't get out ahead of this one. They haven't even informed their drivers, let alone their customers, that this might be an issue. How is there not a sign on every Metro bus and a direct number to call when this sort of thing happens, given that this is a known issue?
  5. Customer Service Fail: While pleasant enough on the phone, the WMATA representative could not offer Caroline a solution. They asked her to call in and submit a support ticket with her bus number every time; however, they could not tell her what to do if a bus driver didn't let her on when her card didn't work despite her being a paying customer.
The moral of the story? If you take a bus in DC, never trust WMATA's own service offerings, or you may find yourself without a ride.

Did you have an experience similar to this? Sound off in the comments!

Tuesday, November 27, 2012

How To: Stop SQL Server Reporting Services from using Port 80 on your Server [Field Notes]


SSRS (SQL Server Reporting Services) uses port 80 by default on any server it's installed on.

This is crazy annoying, because you may want to have web servers or other application servers that also use the default http port 80.

Running "netstat -ano" from the command line at this point usually shows you that port 80 is in use by PID 4 (the system process).

Fortunately, this isn't too hard to fix:

  • Log on to the server that hosts SSRS.
  • Go to Start --> Programs --> SQL Server 2008 R2 --> Configuration Tools --> Reporting Services Configuration Manager
  • Connect to the server in question (usually your local server)
  • Go to the "Web Service URL" section 
  • Change the TCP port to an open port other than port 80 (81 happened to work on my server) and hit "Apply"
  • Go to the "Report Manager URL" section
  • Click "Advanced"
  • Click the entry with a TCP port of 80 and then click the "Edit" button.
  • Change the "TCP Port" entry to the same thing you changed it to in the "Web Service URL" section previously and Click OK.
  • Click OK again.
At this point, running "netstat -ano" should not show an entry for port 80.

Quick Tip: Use rsync to recursively remove .svn folders from a directory [Field Notes]


I have a directory structure that contains Subversion metadata folders (folders named ".svn").

I would like to remove those folders but "svn export" won't work.


Rsync to the rescue. Let's say the folder containing .svn folder structures is named "problemfolder". Do the following:

  • In the same directory as the problem folder, create a "clean" folder to eventually hold the contents of problemfolder.
    • "mkdir problemfolder_clean"
  • Run rsync, excluding .svn folders and their contents, to copy the problem directory to the clean directory.
    • rsync -avr --exclude='.svn*' /path/to/problemfolder/ /path/to/problemfolder_clean
At this point, the contents of "problemfolder" (minus the .svn folders) will be in the clean folder you created.
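If rsync isn't handy (on a Windows box, say), Python's standard library can do the same exclusion-copy. This is an equivalent sketch, not part of the original rsync approach:

```python
import shutil

def export_without_svn(src: str, dst: str) -> None:
    """Copy src to dst, skipping ".svn" directories (and their
    contents) at any depth -- the same effect as the rsync command."""
    shutil.copytree(src, dst, ignore=shutil.ignore_patterns(".svn"))
```

Calling `export_without_svn("problemfolder", "problemfolder_clean")` mirrors the rsync line above.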

Thursday, October 18, 2012

How To: Run Several Programs Sequentially in PowerShell [Field Notes]

I want to run several installation programs in order, and don't want them to step on each other.

I want to avoid errors such as "another setup program is already running" which result in the second install not completing.

For files without arguments, run:
$var1 = Start-Process -FilePath "[path]" -Wait -PassThru

For files with arguments, run:

$var1 = Start-Process -FilePath "[path]" -ArgumentList "[Args]" -Wait -PassThru

In these examples, [path] is the full path to the file you want to run (e.g. C:\SomeFolder\MyProgram.exe), and [Args] is whatever you'd normally put after the path to the exe on the command line. The -Wait switch makes each command block until that program exits, which is what keeps the installers from stepping on each other; -PassThru returns the process object in case you want to inspect it afterward.

NOTE: Quoting the file path will ensure the command does not break if the path includes a space.


How To: Change a Drive Letter With PowerShell [Field Notes]


I need to change the letter of a mounted drive via PowerShell.


Start PowerShell as an admin and run the following two lines, where X is the current drive letter and Y is the drive letter you'd like it to be:

$drive = Get-WmiObject -Class win32_volume -Filter "DriveLetter = 'X:'"
Set-WmiInstance -input $drive -Arguments @{DriveLetter="Y:"}


Tuesday, September 18, 2012

Maximo Tip: PM Work Order Cancellation [Field Notes]


In Maximo, I want to cancel a PM work order because the work wasn't done for that particular scheduled PM.

However, when I attempt to cancel the PM, the following may happen:

  • I don't see the "CAN" status anywhere in the list of statuses.
  • I receive error BMXAA4507E: "warning-wo{0} is in a non-cancel condition"

This happens because of PM work order sequencing within Maximo. The rule is: you cannot cancel a PM work order without first cancelling all the PM work orders generated after it. The reasoning is that, based on how Maximo generates next PMs, a break in the sequence would cause that generation to fail. (I happen to see that as a design flaw, but hey, that's just me.)

What I didn't realize was that if you've completed a newer PM work order, it cannot be cancelled, and therefore none of the previous PM work orders generated for that PM can be cancelled.

For example: if I generate work order #s 1, 2, 3, 4, and 5 (5 being the latest) on one PM, and I forget to do #s 1 and 2, I can cancel 5 and 4. However, if 3 was already completed, cancelling 5 and 4 does not mean that I can now cancel #s 1 and 2.
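The sequencing rule is easier to see in code. Here's a little Python model of my understanding of it (the rule as I've described it, not Maximo's actual implementation): a work order can be cancelled only if every later-generated work order on the same PM is already cancelled, and a completed work order can never be cancelled.

```python
def can_cancel(statuses: dict[int, str], wo: int) -> bool:
    """statuses maps work order number (1 = oldest) to
    'open', 'completed', or 'cancelled'."""
    if statuses[wo] == "completed":
        return False  # completed work orders can never be cancelled
    # every later work order on this PM must already be cancelled
    return all(
        status == "cancelled"
        for num, status in statuses.items()
        if num > wo
    )

# The example from above: WOs 1-5, with 3 already completed.
wos = {1: "open", 2: "open", 3: "completed", 4: "open", 5: "open"}
print(can_cancel(wos, 5))  # True: nothing newer exists
wos[5] = "cancelled"
print(can_cancel(wos, 4))  # True: 5 is already cancelled
wos[4] = "cancelled"
print(can_cancel(wos, 3))  # False: 3 was completed
print(can_cancel(wos, 2))  # False: 3 blocks everything older
```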

The recommendation in this situation, as far as I'm aware, is that the work orders that were missed (#s 1 and 2 in my example) should be closed with a message indicating that the work was never actually performed.

Gross, but it's the only solution I've seen so far.

Have you found any better tricks for cancelling PMs or avoiding this problem? I'd love to hear about it in the comments.

Monday, September 10, 2012

How to: Ensure IIS and ASP.NET MVC Play Nice with Web Fonts [Field Notes]


I'm using ASP.NET MVC via IIS and would like to make use of Web Fonts.


There are two steps:

  • Make sure your Web Font files are going to be outputted by your build process.
  • Make sure IIS can serve those web font files.
Step 1: Make Sure Your Web Font Files are Going to be Outputted by Your Build Process
  • In Visual Studio, select all your web font files (EOT, SVG, TTF, and WOFF files)
  • Right-click on them and select properties or look at the properties window.
  • Set the Build Action Property to "Content"
  • Set the Copy to Output Directory property to "Copy Always".
Step 2: Make Sure IIS Can Serve Your Web Font Files

In IIS, make sure you have the following file extensions and the corresponding MIME types:
  • .eot --> application/octet-stream
  • .woff --> application/x-woff
  • .svg --> image/svg+xml

and bingo! Build/deploy your package and it should be right as rain.


How to: Ensure LESS works with ASP.NET MVC and Continuous Integration [Field Notes]

  • I use LESS instead of CSS in my ASP.NET MVC app.
  • I would like it to actually work.

There are two steps (that I know of) to fixing this problem. The first is to make sure that your build package will actually output the LESS files in the first place; the second is to ensure that IIS can serve them.

Step 1: Ensuring that the Build Package outputs LESS files

  • In your solution, highlight all your LESS files
  • Right-click and select properties or look in the properties window.
  • Set the Build Action property to "Content". This will ensure that the raw content is outputted from those files during build instead of another build action.
  • Set the Copy to Output Directory property to "Copy Always". This ensures that your LESS files will actually make it into the package folder that MSBuild outputs.
Step 2: Ensuring that IIS can serve LESS files

Add a MIME type in IIS mapping the ".less" extension to "text/css" -- the next post below walks through exactly how.
When using Twitter Bootstrap LESS with IIS, don't forget to create the MIME Type [Field Notes]


I am using the Twitter Bootstrap LESS source with LessJS in an ASP.NET MVC3 Project that deploys to IIS.

  • When I run my local source, the web site displays fine.
  • When I run my build process, it completes fine.
  • When I open the site on my development or production boxes, the CSS doesn't display.

After making the problem much more complicated than it had to be, I realized that my LESS file wasn't being found by IIS -- not because it wasn't there or wasn't being deployed, but because I had never configured IIS to serve LESS files. Rookie move.

To ensure IIS can serve LESS files, take the following steps:

  • Open the IIS 7 Manager application
  • Navigate to your server.
  • From the different options displayed, select MIME Types.
  • On the right-hand side, from the Actions menu, select Add...
  • In the File name extension field, enter ".less"
  • In the MIME type field, enter "text/css"
  • Click OK to add the MIME type.
Bingo! Now your server won't choke when trying to serve a .less file.
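If you'd rather keep the mapping with the app instead of configuring it at the server level, the same MIME type can also be added in the site's web.config (a sketch, mirroring the FontAwesome mappings from the earlier post):

```xml
<system.webServer>
  <staticContent>
    <mimeMap fileExtension=".less" mimeType="text/css" />
  </staticContent>
</system.webServer>
```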

Wednesday, July 18, 2012

Crystal Reports: Avoiding Array Size Limits Using Concatenation [Field Notes]


I have a Crystal Report that needs to pass a list of items to a sub-report, so that the sub-report can find additional items without duplicating items from the master report.

Unfortunately, often times I'm dealing with a large amount of data. Crystal Reports has the following (incredibly frustrating) limitations:

  • Arrays can only hold 1,000 items
  • Strings can only hold 255 characters

We're going to use an array of concatenated strings to do this.

Step 1: Formula to Create/Reset the array
Create a formula in Crystal. I recommend using the naming format "array_[ArrayName]_[ArrayAction]". In our case, this would be "array_ArrayName_CreateOrReset".

//Reference the shared array and the temporary string
Shared StringVar Array array_ArrayName;
Shared StringVar itemsList;
//"Re-Dim" the array (clear it) and reset the string
ReDim array_ArrayName[1];
itemsList := "";
//return true, since formulas cannot return arrays
true;

Step 2: Formula to Increment / Add to the Array
Create a formula called array_ArrayName_Increment. This formula concatenates values onto a string and, once the string nears the size limit, adds it to the array as one chunk.

In this formula, {YourValue} is the item that you're looping through adding to the list.

NOTE: For some reason, I couldn't get Crystal to just end the if statement and execute the last line regardless, so I had to repeat it in an "else" statement. That's gross; let me know if you know how to get around it.

//access shared array
Shared StringVar Array array_ArrayName;
Shared StringVar itemsList;
//If the string is too big, add it to the array and reset the temporary string before concatenating
if (length(itemsList) > 235) then
(
    //re-dim array to increase size without losing values
    Redim Preserve array_ArrayName[Ubound(array_ArrayName) + 1];
    //add the current text of the itemsList string to the array as one big chunk
    array_ArrayName[Ubound(array_ArrayName)] := itemsList;
    //clear the temporary string
    itemsList := "";
    //add your value to the string
    itemsList := itemsList + ", " + {YourValue};
)
else
(
    //no addition to the array necessary; just add your value to the string
    itemsList := itemsList + ", " + {YourValue};
);
Step 3: Formula to Display the Array
Here, we output all the array values (our "list of big comma-separated lists"). 

This formula results in some extra commas and spaces, but I don't care about that because later we'll just be looking for values within this.

The trick here is to remember that the last few items in the temporary string won't be added to the array, so we need to include them specifically.

//reference shared array and temp item
Shared StringVar Array array_ArrayName;
Shared StringVar itemsList; 
//join all elements in the array together, comma-separated, plus the temporary items
Join(array_ArrayName, " ") + itemsList;

Step 4: Positioning the Formulas
  • Insert the CreateOrReset formula in the group heading (or an additional, suppressed group heading)
  • Insert the increment formula in the details section (suppress if necessary)
  • To test, insert the display formula in the group footer. In reality, we won't be "displaying" it in the classic sense, but rather passing it to the sub-report for further analysis.

Step 5: Linking the Array to the sub-report
  • Create the sub-report to display your data (outside the scope of this topic)
  • Right-click on the sub-report and choose "Change Subreport Links"
  • Move the array display formula into the "Field(s) to link to" box by clicking the right (">") arrow.
  • Click OK.

Step 6: Searching the Array items in the Sub-Report
  • Open the sub-report.
  • Create a formula called "AlreadyInParentReport"
  • The formula should look similar to the following:
//if you find the value, it's already in the parent report; otherwise it's not
if (InStr({YourDisplayArrayParameterName}, {YourValue}) > 0) then true
else false

Step 7: Excluding Duplicate items from the Sub-Report
In your sub-report's record selection formula, use the following line in addition to your other constraints:

{@AlreadyInParentReport} = false

...And We're Done!
That's it. Now you should be able to do anything with those sub-report values (display them, count them, sum them, etc. etc.) and return that data to the parent report.
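If it helps to see the workaround outside of Crystal syntax, here's the same idea modeled in Python: accumulate values into a string, flush it into the array whenever it nears the 255-character cap, and do membership tests against the joined result. (Python has none of these limits; this just models the formulas above, with made-up item names.)

```python
CHUNK_LIMIT = 235  # flush well before Crystal's 255-character string cap

def build_list(values):
    """Model of the CreateOrReset + Increment formulas."""
    chunks, current = [], ""
    for value in values:
        if len(current) > CHUNK_LIMIT:
            chunks.append(current)  # flush the full chunk into the array
            current = ""
        current += ", " + str(value)
    return chunks, current

def display(chunks, current):
    """Model of the display formula: join the chunks plus the leftover string."""
    return " ".join(chunks) + current

def already_in_parent(display_value, value):
    """Model of the AlreadyInParentReport formula (InStr check)."""
    return str(value) in display_value

items = [f"ITEM{n:04d}" for n in range(200)]
chunks, leftover = build_list(items)
joined = display(chunks, leftover)
print(already_in_parent(joined, "ITEM0042"))   # True
print(already_in_parent(joined, "ITEM9999"))   # False
```

Note that, like the InStr approach in Crystal, this is a plain substring test, so it assumes your values are uniform enough that one can't be a prefix of another.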

Wednesday, June 27, 2012

How To: Remove Table Formatting in Excel 2010 [Field Notes]

I added a table in Excel 2010 and now I need to get rid of the table aspect and make them normal cells again.

Just changing the formatting to "Normal" doesn't remove the programmatic features of table formatting (I can still sort my data as a table, etc., which I don't want to be able to do).

This is called "converting to a range." To convert a table back to a range:

  • Highlight the table in your spreadsheet
  • The Design tab will appear on the ribbon menu. Click it to enter the design section.
  • In the design section, click the "Convert to Range" button.
  • Excel will ask you if you're sure. Click Yes.
  • The range will be converted to normal cells.

Hope this helps!

Monday, June 18, 2012

Crystal Reports: Display Month Name and Year of Last Month [Field Notes]

A report I'm running gets the data for the last Month. I'd like to nicely display the name of the month and year.

Step 1: Formula to Return the Date 1 Month Ago

Use the DateAdd function in the formula to get the date minus one month:

DateAdd("m", -1, CurrentDate)

This says to use "month" intervals, subtract one, and use the current date as the starting point.

Step 2: Display the Month Name and Year of the Date Formula

MonthName(Month({@DateMinusOneMonth})) + " " + ToText((Year({@DateMinusOneMonth})), 0, "")
This concatenates the month name and the year into one string. The arguments to ToText specify zero decimal places and an empty thousands separator, since ToText would otherwise format the year as a plain number (complete with a comma, like "2,012").
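The same calculation in Python terms, just to illustrate what the Crystal functions are doing (this isn't Crystal syntax; the day-clamping matches my understanding of how DateAdd handles short months):

```python
import calendar
from datetime import date

def month_minus_one(today: date) -> date:
    """Equivalent of DateAdd("m", -1, CurrentDate): step back one month,
    clamping the day to the target month's length."""
    year = today.year if today.month > 1 else today.year - 1
    month = today.month - 1 if today.month > 1 else 12
    day = min(today.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def last_month_label(today: date) -> str:
    """Equivalent of MonthName(Month(...)) + " " + ToText(Year(...), 0, "")."""
    prev = month_minus_one(today)
    return f"{calendar.month_name[prev.month]} {prev.year}"

print(last_month_label(date(2012, 6, 18)))  # May 2012
print(last_month_label(date(2012, 1, 15)))  # December 2011
```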


Sunday, June 17, 2012

A Lesson From My Father

When Father's Day rolls around, for me it's often accompanied by the fact that I'm not sure how to properly honor my Dad. I'm lucky enough to still have him around, but he's never been one for gifts of gratitude for his fatherly-ness, and despite having been the bedrock foundation for most things I'm proud of in myself, he remains pretty low-key about the whole thing. But I think that some public recognition is in order.

I have a number of stories about my Dad -- these things that sort of pass into legend in the eyes of offspring, that while they don't grow bigger (as in size-of-the-fish-I-caught bigger), their importance becomes more apparent as one grows in life and begins to encounter adulthood in its myriad of forms. If you'll permit me, I'd like to share one such story.

When I was younger (I'll say 9 or 10 with the caveat that I'm really bad at memory-based timeframes), one morning I idly looked out the kitchen window of our front door, as I did a lot of mornings. Except, this morning wouldn't be like other mornings.

Because there was an old lady wandering around our carport.

This, as you can imagine, piqued my interest. I went to get my dad to let him know about our strange guest, and he went to investigate the situation. At that point, the woman was walking into our carport shed and murmuring in a thick Eastern European accent, "Apartments, there are no apartments here."

After attempting to talk to the woman, my dad came back in and told us to hang out for a bit. As it turns out, the lady didn't speak much English at all and seemed lost, possibly in the early stages of dementia (she was very old). He drove off with the old lady in our car.

My Dad was gone for a long time, what seemed like (and actually may have been) hours, which I remember being a little nervous about. When he finally got back home, he explained what had taken him so long. The woman didn't know where she lived, and so my Dad had attempted to talk to her about it while driving her all over town to various apartment complexes. Eventually, they got lucky and she recognized the place where she lived, and he was able to come back home.

As a kid, this was confusing to me. I wondered why he hadn't just dropped her off at a police station, or given up, or taken her to a public landmark or something.

Dad's answer to this was as simple as it was profound to me: "Because when you can help someone, you should."


Fast-forward a dozen or so years, and I realize that this is a basic tenet of existence that I've always tried to live up to, and it's done a lot for me in life (and hopefully, something for others as well). Whenever Dad tells me how proud he is of me for helping someone or doing something good (he has an awesome amount of pride in my brother and me), I tend to think about times like this, when he lived his expectations of us as men and showed us the path to walk by walking it. There is no stronger lesson than that.

But the question remained: what could I get my Dad for his day? How does one begin to honor everything that's wrapped up in the word "Dad"? After reflecting on this story, I had an idea.

For Father's Day, I've made a donation in honor of my Dad to Charity:Water. Charity:Water is one of the best, most effective, and most transparent charities I've ever seen, with a goal of drilling wells to bring water to towns that have none, improving the lives and health of communities around the world. Water is one of the most basic human needs.

People are thirsty, and they need our help. So I'm doing what Dad has taught me to do.

My donation will help 5 people obtain clean drinking water. If you'd like to participate in honor of my Dad, your Dad, or an overall great cause, you can join in here:

Dad, thank you for this lesson and all the others so far. I'm proud to be your son. Happy Father's Day!


Friday, June 15, 2012

How To: Set a Default Date Parameter In Crystal Reports [Field Notes]


I have a report that I'd like to specify a date range for at run time. However, I'd also like it to have a default value of "today", so that when I run it on a schedule I'm just passing in a different parameter rather than changing the actual record selection process.

Crystal Reports doesn't have a built-in mechanism to do this. It appears to be a pretty sought-after feature in the Crystal community, but I haven't seen any solutions that would allow one to, say, use a variable like "currentdate" as the default for a date field.

I found a helpful blog post on Cogniza which I modified a little bit to fit my situation.

Step 1: Create Parameters
We're going to create two parameters. The first I'm going to call "NamedDateRange" and the second I'm going to call "CustomDateRange".

NamedDateRange Parameter
This should be an optional string parameter. Add list items like "Today", "Yesterday", "This Month", etc.

CustomDateRange Parameter
This should be an optional date or date range parameter for custom values to be entered.

Step 2: Create the DateRangeSelection_FromParameters Formula
We need a formula to hold the results of our parameters (this will make it nice and clean in the RecordSelection formula, which I prefer.)

Basically, my logic here is the following:
  • If the NamedDateRange has a value, we'll use it.
  • Otherwise, we'll use the CustomDateRange
  • If neither has a value, we'll default to today's date.
  • Also if NamedDateRange has a wacky value, we'll default to today's date.
Thus, the formula is:

if not hasvalue({?NamedDateRange}) then
    (if not hasvalue({?CustomDateRange}) then currentdate
    else {?CustomDateRange})
else if {?NamedDateRange} = "Today" then currentdate
else if {?NamedDateRange} = "Yesterday" then (currentdate - 1)
else if not hasvalue({?CustomDateRange}) then currentdate
else {?CustomDateRange}

Step 3: Create the Record Selection Text

Due to our use of the formula earlier, the record selection text is as simple as:

{DateField} in ( {@DateRangeSelection_FromParameters} )

Where DateField is the name of whatever DateField you're comparing.

Step 4: Set the Default Value When Running a Report
I'm tackling this using Crystal Reports Server 2008, so this step will apply mostly to that specific setup.

  • Upload the report to Crystal Reports Server
  • Set the database configuration options, etc.
  • Right-click on the report and select "Properties"
  • Under the "Default Settings" section, click "Parameters".
  • Click Edit for "NamedDateRange" and enter "Today".
Now "Today" will appear as the default selection, so if the user just hits OK, they will see that it generates for today. 

NOTE: Because of the work we did with our formula earlier, even if both fields are left blank, the report will generate for today. A nice touch.

Step 5: Set the Default Value When Scheduling a Report

  • These steps are essentially the same as step 4. Schedule the instance as you normally would do, but in the "Parameters" section, select "Today".

And with that, we have a Crystal Report that will default a date range to Today while allowing other custom date ranges as well!


Sunday, June 03, 2012

An Open Letter to Eidos

[Cross-posted on the Eidos Facebook page. I really hope they get themselves together over there.]

Steps to attempt to pay for / download / install Eidos' Hitman 2: Silent Assassin:

  1. Navigate to
  2. Click "Store"
  3. Search for "hitman". Note the game pops up (and that the price is in pounds, not dollars).
  4. Attempt to buy. Receive a message saying you're out of the service area for this download. Note that the browser is now pointing to a different domain than the one you entered.
  5. Go back to realize that for some reason it thinks you're British and click on the "USA" link to switch the locale.
  6. Note that the "Store" icon is greyed out completely, with no explanation whatsoever. Clicking on it does nothing.
  7. Frustrated, realize that it must be because you don't have a login. That's cool, you're happy to create one.
  8. Click "Register". You are taken to a simple-enough registration page. 
  9. Enter your name (no spaces) as your username, enter your correct e-mail, enter a password of letters & numbers (9+ characters), date of birth, be sure to select the right options.
  10. Receive an error: "Sorry, the Eidos Connect account could not be created because of the following error(s): Profile is invalid"
  11. Think maybe it's because you entered something wrong. Enter the information again. 
  12. Receive the same error.
  13. Enter the information again including all optional fields.
  14. Receive the same error again, with still no indication of anything you could have done wrong.
  15. Realize that you're using Google Chrome and maybe it doesn't support that browser.
  16. Retry steps 8-14 in Internet Explorer. Receive the same error.
  17. Retry steps 8-14 in Firefox. Receive the same error.
  18. Give up, having wasted 30 minutes of your life just to attempt to give Eidos your money.

Just wanted to map that out for you in case software companies wonder why someone could ever consider pirating a game. PLEASE HELP ME GIVE YOU MY MONEY.

Unless maybe, the Eidos web site is a game itself?! If you're able to figure out the cypher across the ruins of post-apocalyptic webpage design, and complete the mighty download against the wishes of the enraged server gods, you too can give your money to Eidos! Definitely a new idea. I want the rights to that one.

...I know I'm being harsh here, but I think you need to know that I'm not often a gamer -- but I AM an IT guy -- and in anticipation of Hitman 3 I wanted to delve into the series for the first time. This was my first interaction with you as a customer, and likely my last.

By contrast: I wanted to play Call of Duty. I went to Valve's web site, searched, clicked download, set up an account, paid, kicked off the download, and was playing 20 minutes later.

Tuesday, May 15, 2012

Quick Tip: Run Multiple NUnit Assemblies with one Exec Command in MSBuild [Field Notes]

This is mostly a reference post for me, but I figured somebody else might find it useful.


I have a .build file for MSBuild. I'd like it to execute my NUnit tests, but I have multiple test projects and thus multiple DLLs. NUnit requires one assembly to be passed to the nunit-console.exe application that MSBuild calls.


Create a ".nunit" file, a file format specially understood by NUnit, containing XML that references your test assemblies under each configuration.

I recommend putting this at the same folder level as your solution file and calling it [SolutionName].nunit. NOTE: the file must be a .nunit file; I tried with ".NUnitProjects" and it failed on the name alone.

It will look something like this:
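Here's a minimal sketch of a .nunit project file (the assembly names and paths are illustrative; substitute your own test project DLLs):

```xml
<NUnitProject>
  <Settings activeconfig="Debug" />
  <Config name="Debug">
    <assembly path="TestProject.Tests\bin\Debug\TestProject.Tests.dll" />
    <assembly path="TestProject.IntegrationTests\bin\Debug\TestProject.IntegrationTests.dll" />
  </Config>
  <Config name="Release">
    <assembly path="TestProject.Tests\bin\Release\TestProject.Tests.dll" />
    <assembly path="TestProject.IntegrationTests\bin\Release\TestProject.IntegrationTests.dll" />
  </Config>
</NUnitProject>
```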

Note that I create one of these references for every configuration. (Debug and Release are the defaults). I'm not sure that this is necessary, but my bet is that it probably helps later when specifying the Configuration to MSBuild for an automated integration.

Lastly, add the appropriate variable to your .build file that references the .nunit file, and call that variable instead of the DLL. All assemblies will then be passed.
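As a sketch of that last step, assuming a hypothetical item name and NUnit install path property (neither comes from the original build file), the .build additions might look like:

```xml
<ItemGroup>
  <!-- Point at the .nunit project file instead of the individual test DLLs -->
  <NUnitProjectFile Include=".\TestProject.nunit"/>
</ItemGroup>

<Target Name="Test" DependsOnTargets="Compile">
  <!-- nunit-console expands the .nunit file into all of the assemblies it lists -->
  <Exec Command='"$(NUnitPath)\nunit-console.exe" "@(NUnitProjectFile)"'/>
</Target>
```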


Thursday, May 03, 2012

Building a Build Process, Part 8: CruiseControl.NET Preparation

This is part of a larger series in building a proper build process. For more information and additional series links, check out the introductory post.

Choosing Whether to Run as Network Service or another user

This was brought up as an excellent point by Bahrep on a recent blog post of mine, and I thought it was worth sharing for this series.

Essentially, the issue is that CruiseControl.NET runs under the Network Service user (which makes sense), but in order to accept an SVN certificate, you need to be logged in as that user and check out a copy of the repository once, accepting the certificate, before CruiseControl will be able to connect to the repository as itself.

There are two ways you can go about this. The more visible way is to create a local user account on the server (e.g. “CI”) which you then tell CruiseControl to run as. You give CI the permissions it needs, which includes signing on to that user account once, checking out the repository, and accepting the certificate.

The second option is to download and install PsExec, which lets you run programs under other user accounts. You then use PsExec to run the checkout command as the network service user and accept the certificate. No extra user account, no changing how the service runs. It’s a simpler solution, but a much less visible change that you’ll want to document.

Directions for both options are shown below. For what it’s worth, I now prefer option 2.

Option 1 Part 1: Creating a Local User for CI Purposes

  • Click start and type “compmgmt.msc” to bring up Computer Management.
  • In the left-hand navigation, choose Local Users and Groups –> Users.
  • Right-click and select New User…
  • Name the user “CI” or something related to the purpose at hand.
  • Give the user a password.
  • Uncheck “User must change password”
  • Check “password never expires”.
  • Create the User account.
  • Right-click the new account and select Properties.
  • Click the “Member Of” tab.
  • Add the user to the “Administrators” Group. NOTE: This is not ideal for the purposes of our demo. Normally, you’d want to ensure this user account only has access to what it needs for CC.NET (this is why I’m leaning towards option 2).
  • Now is also a good time to install TortoiseSVN on the build server, if you haven’t yet. (be sure to install the command-line client tools as well!)

Option 1 Part 2: Accepting the SVN Certificate as the CI User

  • Log off the Administrator account and sign in as your new CI account.
  • Run the command prompt
  • Run svn info to accept the certificate. This can be done by running svn info https://[repo URL or hostname]/svn/TestProjectRepo --username svnuser1 --password passw0rd1
  • Type “p” to accept the certificate permanently, and you’re all set!

Option 1 Part 3: Run the CruiseControl.NET Service as the CI User

  • Log off the CI account and back into the Administrator account.
  • Run services.msc
  • Right-click the service and choose Properties
  • Click the Log On tab.
  • Click the “This account” radio button.
  • Click Browse and type CI. The resulting username entry will be “.\CI”
  • Enter the password you created for the CI account and click apply.
  • You will see a message that it has been granted the “Log On as a Service” right.

Option 2: Using PsExec to Accept the SVN Cert as the Network Service User

NOTE: At points during this process, you may receive a warning from your antivirus program. This is because PsExec can be dangerous when misused, and viruses have used it in attacks in the past. We know our usage is legitimate here, so we can unblock/ok PsExec operations when we’re working with it.
  • Visit the PsExec website and download the application (it will actually be a ZIP file of the whole PsTools suite)
  • Unzip the PsExec zip file
  • Open a command prompt and navigate to the unzipped PsTools folder
  • Run psexec -u "nt authority\network service" cmd.exe. This will run the command prompt as the network service user.
  • Run svn info to accept the certificate. This can be done by running svn info https://[svn ip or hostname]/svn/TestProjectRepo --username svnuser1 --password passw0rd1
  • Type “p” to accept the certificate permanently, and you’re all set!

Add a CI User to the Repository

  • Open your CentOS source code management VM
  • Taking advice from our Subversion & Apache article, we’ll run the following command to add a new CI user without deleting the old ones: htpasswd -sb /var/www/svn/auth/svn.htpasswd ci passw0rdci
Now you’re set to allow CruiseControl to pull down files under its own account again (we’ll configure CCNet just around the bend in this series).

Copying the Microsoft Targets to the Build Server

When dealing with this elsewhere, I discovered this problem. We’ll avoid it in advance here.
  • You have to copy the two files from your local development machine (with VS installed) to the Build Server.
  • Copy the directory C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\*.* to the same location on the build server.

Copying the Reference Assemblies to the Build Server

When dealing with this elsewhere, I discovered this problem. We’ll avoid it in advance here.
  • On your local development machine, copy C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\*.* to a folder on your build server. For me, it was E:\ContinuousIntegration\_ReferenceAssemblies so that it could keep it common for any future builds.
In the next part in the series, we’ll finally configure CruiseControl.NET and get it to build. Thanks for hanging out with me on this epic journey towards build awesomeness!

Feedback Welcome!

I'd love to hear any comments on this series. Find it useful? Think there's a better way to implement the technique or something I should have mentioned? Please drop a line in the comments to help me improve the series!


<—Part 7: Installing CruiseControl.NET
Part 9: coming soon! -->

Building a Build Process, Part 7: Installing CruiseControl.NET

This is part of a larger series in building a proper build process. For more information and additional series links, check out the introductory post.

Adding a HDD to your VM for CI Information

I find this makes it easier to keep your CI files separate, and it’s definitely a good practice in a production environment in my experience. Virtualization makes this easier as well because all the virtual hard drive files only take up as much space as they use. Definitely an upside to VMs.
To add a new HDD to the VM, do the following:
  • Power down your Windows Build Server VM if it’s on.
  • Open VirtualBox
  • Right-click on your BobTheBuilder machine and select “Settings…”
  • Click the “Storage” Section on the Left-hand side
  • Note the two icons next to the “SATA Controller” section. Click the one on the right (“Add Hard Disk”). The Hard disk wizard will open.
  • Click “Create New Disk”
  • Choose your Disk Format (I left it as the default, VDI)
  • Choose “Dynamically Sized”
  • Name the Disk. I usually follow the format of [MachineName]_[DrivePurpose], so in this case I chose “BobTheBuilder_CIDrive”
  • Set the size of the drive. I left the default (25 GB)
  • Next screen is the summary. Click create to create the drive.
  • Start the VM again and login.
Now we have a hard drive attached, but we still need to initialize and format it before the OS can see it. To do this:
  • Click start and type “diskmgmt.msc” to bring up the disk management utility.
  • You’ll see a dialog box to ask you to initialize the disk.
  • Select MBR for the type of initialization and click OK.
  • Now, right-click on the “Disk 1” entry to the bottom (the disk should indicate that it has 25 GB of unallocated space) and select “New Simple Volume”, which will open a wizard.
  • The amount of space for the drive will default to 100%. This is what we want, so click next.
  • Assign the drive letter of your choice. The default was E: for me, so I left it. Henceforth in this tutorial series, I’ll be calling it E:\ so you may want to choose E: to make it easier to follow along.
  • On the next screen, choose to format the volume as NTFS and give it the volume label of “CI”.
  • Click Finish to complete the process.
After a few seconds, you should see the E: drive appear in your drives list. Open Windows Explorer and verify that it exists there, too.

Creating a Home for Our CI Files

Now that we have a drive, it makes sense to create our directory structure.
On the E:\ drive, create the following directory structure:
  • TestProject\
    • CIArtifacts\
    • WorkingDirectory\
CIArtifacts will store the output of our CI process (logs, etc.); WorkingDirectory is where we’ll eventually check out the source code automatically so the build can act on it.

Downloading the CruiseControl.NET Binaries

Installing CruiseControl .NET

  • Run the CruiseControl.NET Setup executable as an administrator by right-clicking and selecting “Run as Administrator”.
  • Agree to the license agreement.
  • All options are selected by default; leave them and continue.
  • Leave both checkmarks checked to install the CruiseControl .NET dashboard and to install CCNet as a service.
  • Leave the default installation directory or customize to your preference (I left the default for this setup)
  • Leave the default value for the Program Files folder group and click Next. CruiseControl.NET will commence installation.
  • Click Finish and exit the wizard.

Installing CCTray

CCTray allows you to connect to one or more CruiseControl.NET projects and will keep you informed on their status.
You’ll want to repeat this process on the Build Server itself and on any desktops you’d like to see the status of the build on (for example, I have CCTray installed on my laptop’s desktop so I can quickly see if a build is broken).
To install CCTray, perform the following (don’t worry, we’ll configure it later):
  • Run the CCTray setup file and click Next at the introduction.
  • Agree to the license agreement.
  • Leave all three options selected and click next.
  • Click next through the installation location and start menu group name screens. The application will install.
  • Click next, and then click Finish, leaving the checkbox selected to start the program.
Now CCTray is started (though not configured yet.)

Install / Start the CCNet Dashboard in IIS

  • Click Start –> Administrative Tools –> IIS Manager
  • Expand the tree on the left-hand side to [Server Name] –> Sites –> Default Web Site.
If you don’t see a directory under “Default Web Site” called “ccnet”, perform the following steps (otherwise, skip to after this bulleted list):
  • Right-click on Default Web Site and select “Add Virtual Directory…”
  • Give “ccnet” as the alias
  • For the path, choose [CCNet Install Directory]\webdashboard. (e.g. for me, it was C:\Program Files (x86)\CruiseControl.NET\webdashboard)
  • Click “OK”. The Virtual Directory will appear.
  • Right-click the ccnet Virtual directory and choose “Convert to Application”.
  • Click OK in the dialog box that appears.
Now your ccnet virtual directory is set up. Keep going:
  • Click on “Default Web Site”
  • On the right-hand side Action menu, click “Start” to start the default web site (if it’s not already started).
  • Open a web browser on the build server and navigate to http://localhost/ccnet
If a web site shows up at all, you’re good to go. Don’t worry about any errors within the web dashboard application, as we’ll be configuring CCNet later.

Feedback Welcome!

I'd love to hear any comments on this series. Find it useful? Think there's a better way to implement the technique or something I should have mentioned? Please drop a line in the comments to help me improve the series!


<—Part 6b: MSBuild Integration With Cassini and Visual Studio

Wednesday, May 02, 2012

Building a Build Process, Part 6b: MSBuild Integration With Cassini and Visual Studio

This is part of a larger series in building a proper build process. For more information and additional series links, check out the introductory post.

This Time…

In this round, we’re going to discuss:
  • How to Start the Cassini Web Server Asynchronously
  • How to Stop the Cassini Web Server
  • How to (not quite) get Visual Studio to seamlessly follow the same directions that your MSBuild file follows.

Starting the Web Server (Asynchronously!)

Visual Studio has a built-in web server – no doubt you’re familiar with it. It’s what runs whenever you hit F5 on a web project and see a web site come up. This server is called Cassini, and you can start it up through an MSBuild task.
[A little background: Rather than show here, I’m going to tell. You can use the Exec task normally to run an executable, but the catch is that MSBuild will usually wait for the task to finish. We’re going to use the AsyncExec task in order to ensure that MSBuild will start the web server and continue performing next commands without waiting for Cassini to exit, since waiting on Cassini would be undesirable behavior here.]
We’re going to take advantage of an excellent set of extensions called the MSBuild Extension Pack to accomplish our mission here.

Get the MSBuild Extension Pack

  • You should visit the MSBuild Extension Pack web site for an excellent overview of the capabilities of these tools. You can just click the download button on the right-hand side to get the latest version in ZIP format.
  • Unzip the download (anywhere is fine).
  • Go one level deeper and extract the .NET 4.0 zip file.
  • Create a folder in the “thirdparty\tools” folder of your solution called “MSBuildExtensionPack”.
  • Copy the contents of the “Build” folder from the zip file (the .dlls, etc.) into this directory.

Updating Your Build File to be AsyncExec Ready

We’ll have to update the build file to bring in the new task library (a great feature of MSBuild, by the way). To do this, we’ll add an import directive to the Extension Pack’s Task Files (this should go just inside of the <project> tag):
<Import Project=".\thirdparty\tools\MSBuildExtensionPack\MSBuild.ExtensionPack.tasks"/>         

Next up, we have to make a slight modification. The Extension Pack attempts to do some nice work for us to include all the tasks, but we need to override it. Find the following section at the top of the MSBuild.ExtensionPack.Tasks file and comment it out (in the lines below, I’ve done that already for you):

        <!--
        <BuildPath Condition="'$(BuildPath)' == ''">$(MSBuildProjectDirectory)</BuildPath>
        <ExtensionTasksPath Condition="Exists('$(BuildPath)\..\..\BuildBinaries\MSBuild.ExtensionPack.dll')">$(BuildPath)\..\..\BuildBinaries\</ExtensionTasksPath>
        <ExtensionTasksPath Condition="'$(ExtensionTasksPath)' == ''">$(MSBuildExtensionsPath)\ExtensionPack\4.0\</ExtensionTasksPath>
        -->

Since the Extension Pack is no longer figuring out what its path is, we need to set an item in our <ItemGroup> to point it to the right place:

        <ExtensionTasksPath Include=".\thirdparty\tools\MSBuildExtensionPack\"/>

Now we’re ready to add the command to start the web server.

Adding a Target to Start the Web Site

First things first – we have to add an item to the <ItemGroup> section to tell MSBuild where Cassini resides, and an item to tell it where our published web site will reside once it’s been spat out by our build process. I took the guesswork out of it for you in the lines below:

<Cassini Include="$(CommonProgramFiles)\microsoft shared\DevServer\10.0\WebDev.WebServer40.exe"/>
<Website Include=".\buildartifacts\_PublishedWebsites\TestProject.Web"/>

Next, we use the AsyncExec task from the Extension Pack to run the web server without hanging up our build process:

    <Target Name="StartWebsite" DependsOnTargets="Compile">
        <AsyncExec Command='"@(Cassini)" /port:9999 /path:"%(WebSite.FullPath)" /vpath:"/"'/>
    </Target>

Try running your build file with a target of “StartWebsite”. You should be able to navigate to http://localhost:9999/ and see the web site in action (though it may just show a directory’s contents if the site is empty).

However, you may have noticed something. What stops the website so we can start it again? Nothing, and so we’re going to build that target too.

Adding a Target to Stop the Web Site and Updating Dependencies

You can use the handy built-into-Windows “TaskKill” program to force-kill any program with the same name as the web server executable. We just call it with an Exec command, as shown below:

    <Target Name="StopWebsite">
        <Exec Command="taskkill /f /im WebDev.WebServer40.exe" IgnoreExitCode="true" IgnoreStandardErrorWarningFormat="true"/>
    </Target>

After that, we add the “StopWebsite” target as a dependency of StartWebsite (before Compile, so we know the site will be down before MSBuild erases the files and spits out new ones):

    <Target Name="StartWebsite" DependsOnTargets="StopWebsite;Compile">

Go ahead and try running your build with the StopWebsite and StartWebsite targets.

NOTE: You may receive an error upon starting Cassini about there only being one instance allowed on a port. If you do, try changing the port number from ‘9999’ to something else in the StartWebsite task (try to pick a port that’s not being used!).

For reference, at this point, our build file looks like:
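As a condensed sketch assembled from the snippets above (the Clean, Init, and Compile targets from the earlier MSBuild post are elided, and the NUnit path item is illustrative):

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project=".\thirdparty\tools\MSBuildExtensionPack\MSBuild.ExtensionPack.tasks"/>

  <ItemGroup>
    <ExtensionTasksPath Include=".\thirdparty\tools\MSBuildExtensionPack\"/>
    <Cassini Include="$(CommonProgramFiles)\microsoft shared\DevServer\10.0\WebDev.WebServer40.exe"/>
    <Website Include=".\buildartifacts\_PublishedWebsites\TestProject.Web"/>
  </ItemGroup>

  <!-- Clean, Init, and Compile targets from earlier in the series go here -->

  <Target Name="StopWebsite">
    <Exec Command="taskkill /f /im WebDev.WebServer40.exe" IgnoreExitCode="true" IgnoreStandardErrorWarningFormat="true"/>
  </Target>

  <Target Name="StartWebsite" DependsOnTargets="StopWebsite;Compile">
    <AsyncExec Command='"@(Cassini)" /port:9999 /path:"%(WebSite.FullPath)" /vpath:"/"'/>
  </Target>
</Project>
```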

Getting Visual Studio to Play Along: Help Needed!

Unfortunately, this is one area that this blog series will fall short. I’ve scoured the internet in an attempt to find out how I can output the \bin and \obj to another folder based on the $(SolutionDir) variable, but apparently unlike C++, Visual Studio for C# does not allow this and instead creates a strange Folder with “$(SolutionDir)” literally in the name. I thought it would be pretty straightforward, but boy was I wrong. If anyone has any suggestions, I’m all ears. I was told I could go the route of editing the .csproj file, but I really tend to be wary of that kind of text editing; I like Visual Studio to be able to own that file for its sake.

For now, I just recommend using TortoiseSVN to ignore those folders in your source control so that it doesn’t conflict with anyone else if you hit F5 and commit later.

And by all means, if you know how to solve the mystery, sound off in the comments!

Feedback Welcome!

I'd love to hear any comments on this series. Find it useful? Think there's a better way to implement the technique or something I should have mentioned? Please drop a line in the comments to help me improve the series!


<— Part 6: Creating a Custom MS Build File

Tuesday, May 01, 2012

Workaround: MSBuild Error MSB3644: "The reference assemblies for framework ".NETFramework,Version=v4.0" were not found" [Field Notes]


When attempting to build a Continuous Integration solution with MSBuild on Windows Server 2008 R2 (With .NET Framework 4.0 installed) I receive the following error:

c:\Windows\Microsoft.NET\Framework\v4.0.30319\Microsoft.Common.targets(847,9): warning MSB3644: The reference assemblies for framework ".NETFramework,Version=v4.0" were not found. To resolve this, install the SDK or Targeting Pack for this framework version or retarget your application to a version of the framework for which you have the SDK or Targeting Pack installed. Note that assemblies will be resolved from the Global Assembly Cache (GAC) and will be used in place of reference assemblies. Therefore your assembly may not be correctly targeted for the framework you intend.


I'll be the first to admit there are probably more elegant ways, but I recommend the following:

Install .NET Framework 4.0 / Windows 7 SDK
You can download it from Microsoft. If you don't have access to do this, or if that doesn't work, you can do the following.

Copy the Reference Assemblies Folder to Your Build Server
Copy C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\*.* to a folder on your build server (for me, it was E:\ContinuousIntegration\_ReferenceAssemblies)

Override the Reference Assemblies in Your Build Configuration
When you call MSBuild, pass it the property /p:FrameworkPathOverride with the location of the copied reference assemblies.

For example, my CruiseControl.NET config MSBuild section for this project now looks like:
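As a sketch (the paths match the setup described above, but the element values will vary per project, and the target name is illustrative), the <msbuild> block looks something like:

```xml
<msbuild>
  <executable>C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe</executable>
  <workingDirectory>E:\ContinuousIntegration\TestProject\WorkingDirectory</workingDirectory>
  <projectFile>TestProject.build</projectFile>
  <!-- The override points MSBuild at the copied reference assemblies -->
  <buildArgs>/p:FrameworkPathOverride="E:\ContinuousIntegration\_ReferenceAssemblies"</buildArgs>
  <targets>Build</targets>
  <timeout>900</timeout>
</msbuild>
```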

It's a hack, for sure, but it got the solution to build, and it's reusable in future scenarios.

Probably easier to download the SDK and install it, though.

CruiseControl .NET Gotcha: Moving Microsoft.WebApplications.Targets to the server [Field Notes]


When attempting an automated build, CruiseControl.NET (running on Server 2008 R2 with .NET Framework 4.0 Installed) gives the following error in its log:

error MSB4019: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.


This is ridiculous, but you have to copy the two files from your machine to the Build Server.

Copy the directory C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\*.* to the same location on the build server.

CruiseControl .NET, VisualSVN, and SSL Certificates [Field Notes]

This is a quick post for my reference. Let me know if details would be helpful and I'll be happy to turn it into more of a tutorial style.


I am integrating CruiseControl.NET with VisualSVN. I'm using a self-signed HTTPS certificate on VisualSVN that doesn't match the hostname. Because I can't get CruiseControl to accept the certificate permanently, I can't get it to check out files.


  • You should have a local user account for your build process (with only access to what it needs, of course). This is essentially a local service account.
  • Log on to that service account on the local machine.
  • Using the command line, check out the VisualSVN repository into a folder you created and accept the certificate. Something along the lines of "svn.exe checkout https://[servername]:8443/svn/[ProjectName] --username [user] --password [password]" should do it.
  • The certificate message will then pop up. Type "p" to accept it permanently.
  • Now you have an account that has the access you need.
  • Go into services.msc 
  • Set the CruiseControl.NET service to run as the local build user service account, with the password.
  • Restart the CruiseControl.NET service
  • It now should have access and acceptance of the certificate.

Monday, April 30, 2012

PSA: IBM Maximo 7.1 User Guide is Now the Product Reference Guide [Field Notes]

Subject says it all.

I was looking for the IBM Maximo 7.1 User's Guide (as there appeared to be one for 6.2.1) and I was having a lot of difficulty finding it, until I found this little ditty on an IBM Resources page:

Looking for the Maximo Asset Management 7.1 User's Guide? It is replaced with the Product Reference Guide.

So there you have it. If you'd like the direct link, you can find the User Guide / Product Reference Guide here.


Tuesday, April 24, 2012

Building a Build Process, Part 6: Creating a Custom MSBuild File

This is part of a larger series in building a proper build process. For more information and additional series links, check out the introductory post.

Ed. Note: I cannot give enough praise to the Continuous Integration course; it helped me put a lot of these pieces together, and this part in the series could not have happened without it. A lot of the content for this post ended up being a pretty direct port of what is talked about there, but there are only so many ways to personalize a best practice while keeping it simple, and so I hope you’ll sign up for a Pluralsight subscription, and that their lawyers will take kindly to this series. :)

So, now that we have our initial project structure created and under source control, we have to find a way to build it. “Oh, I know!” you say. “We’ll hit F5!” Well, while that might work for local development, your build server can’t hit F5.

Lucky for us, the process of hitting F5 and turning your code into a DLL isn’t all magic. In fact, it’s a highly documented Microsoft tool called MSBuild that ships with the .NET Framework.

In today’s installment, we’re going to do the following:

  • Add MSBuild to the Path line and test it using Powershell
  • Create an XML file that will build our application using MSBuild Tasks
  • Examine variables in MSBuild XML files
  • Create tasks for cleaning the build, initializing the build, and compiling the build
  • Put it all together to create build targets.
  • Use PowerShell to execute those build targets.

Find the Location of MSBuild on Your Machine

The good news is, if you have the .NET Framework, you have MSBuild. If you have the .NET 4.0 Framework (and really, you should), you can find it in %WinDir%\Microsoft.NET\Framework\v4.0.30319\.

Add MSBuild to the System Path

  • Click Start
  • Right-click “Computer” and select Properties
  • Click “Advanced System Settings”
  • Click “Environment Variables”
  • In the second section, System Variables, look for the PATH variable and click Edit.
  • Ensure that a semi-colon follows the last entry, then paste in the path to your MSBuild.exe, leaving off the trailing slash (e.g. mine was C:\Windows\Microsoft.NET\Framework\v4.0.30319, without quotes).

Run PowerShell and Test Access to MSBuild

  • Click the Start menu, begin to type “PowerShell”, and open the PowerShell console by right-clicking it and choosing “Run as Administrator”. You must run it elevated for the way we’ll be invoking MSBuild. (You may also want to pin the icon to your taskbar at this point.)
  • When the console is open, type MSBuild.
  • You should see an error about not specifying a project or solution file. If you see any red text, such as “command not found”, etc., then something went wrong and PowerShell can’t see your MSBuild location (try running as admin or re-checking your PATH variable setup).

Create a New, Empty .build XML File

  • Open the TestProject solution in Visual Studio
  • Right-click on the solution and select Add –> New Item.
  • Select XML File as the type, and give it a name with a .build extension.
Now you’ll have a .build file at the root of your solution. This is what we want.

Adding the Schema for an MSBuild XML file

After the first line of the XML file, you’ll need to add the root node and give it the schema reference for its XML namespace (xmlns). After you do this, it should look like the following:
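A minimal sketch of the file at this point (the xmlns value is the standard MSBuild schema URI):

```xml
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- ItemGroups and Targets will go here -->
</Project>
```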

Note that I also added a ToolsVersion attribute, with “4.0” denoting the version of the .NET Framework we’re using.

Add Some Properties

Instead of going back and refactoring after showing you a full example, I’m going to save some precious keystrokes and tell you that at some point, it will be easier and more flexible to use variables. To do this:
  • Create an <ItemGroup> </ItemGroup> node within the Project section.
  • The format under this will essentially be: <PropertyName Include="PropertyTextToInclude"/>
  • Add references to your BuildArtifacts folder, and also to your solution file, like so:
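A sketch of that ItemGroup, assuming the item names used in the rest of this post and a solution file named TestProject.sln (adjust the paths to match your own solution):

```xml
<ItemGroup>
  <!-- Folder where compiled output will land -->
  <BuildArtifacts Include=".\buildartifacts\" />
  <!-- The solution file MSBuild should compile -->
  <SolutionFile Include=".\TestProject.sln" />
</ItemGroup>
```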

The Basics: Cleaning and Initializing Our Directory

Next, we create two targets. One, “Clean”, will delete the .\buildartifacts folder. The second, “Init”, will recreate it. Pretty simple. The XML to accomplish this is below:
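A sketch of those two targets, using MSBuild’s built-in RemoveDir and MakeDir tasks and the BuildArtifacts item defined earlier:

```xml
<Target Name="Clean">
  <!-- Delete the build output folder entirely -->
  <RemoveDir Directories="@(BuildArtifacts)" />
</Target>

<Target Name="Init" DependsOnTargets="Clean">
  <!-- Recreate the folder so every build starts fresh -->
  <MakeDir Directories="@(BuildArtifacts)" />
</Target>
```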

You’ll notice that I have the “Init” target depend on the “Clean” target via DependsOnTargets. This means that any time we create the folder, we’ll first delete it to make sure we’re starting fresh.
Also notice that instead of putting a path to the buildartifacts folder directly, I’m using @(BuildArtifacts), which tells MSBuild to refer to the ItemGroup variable we created earlier.

Trying our “Clean” and “Init” Targets in PowerShell

  • Open PowerShell in Admin mode
  • Navigate to the Solution folder (e.g. cd \Users\Sean\Projects\TestProject for me)
  • Also open this folder in Windows Explorer. See the buildartifacts folder there?
  • In PowerShell, run: MSBuild /Target:Clean
  • Note that in Windows Explorer, the buildartifacts folder is gone.
  • In PowerShell, run: MSBuild /Target:Init
  • Note that the folder has reappeared. Add a file to the folder – a text file or something small.
  • In PowerShell, run: MSBuild /Target:Init
  • Note that the folder has been deleted and recreated, and thus no longer contains the item you put there.

Getting to the Good Stuff: Compiling our App

Up until this point, we haven’t compiled our code. Since that’s what gets us paid, in a manner of speaking, we should create an MSBuild task to compile the code. We can do this via the following additional target:
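A sketch of the Compile target, assuming the SolutionFile and BuildArtifacts items defined earlier:

```xml
<Target Name="Compile" DependsOnTargets="Init">
  <!-- Invoke MSBuild on the solution, dropping output into buildartifacts -->
  <MSBuild Projects="@(SolutionFile)"
           Properties="OutDir=%(BuildArtifacts.FullPath)" />
</Target>
```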

This calls the MSBuild executable from within MSBuild (I know…whoa, dude), passes it our solution file variable, and compiles the program. It also specifies that the output directory (OutDir) should be buildartifacts. Tip: See the .FullPath? Our variables are also objects, so MSBuild knows that %(BuildArtifacts.FullPath) refers to the FullPath property of the item we created earlier.
Note that we’ve made “Compile” dependent on “Init”, so that every time we compile, the buildartifacts folder will be blown away and re-created.
Try it out: In PowerShell, run “MSBuild /Target:Compile” and watch our solution be compiled to the buildartifacts directory. Pretty sweet, huh?

Telling MSBuild What to do by Default

Passing a Target every time is pretty lame, especially when we usually just want to compile the code.
Luckily, by adding the DefaultTargets attribute to the Project node, we can tell MSBuild which target to run by default. Let’s try that now. Modify the Project node XML to make the default target “Compile”, like so:
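A sketch of the modified root node (the rest of the file stays the same):

```xml
<Project ToolsVersion="4.0"
         DefaultTargets="Compile"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- ItemGroup and Targets as before -->
</Project>
```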

Save the file, and run "MSBuild" (without a target switch). The project should compile.

Next Time…

In the next article, we’ll explore how to start and stop the Cassini Web Server asynchronously, and how to run Visual Studio builds through common output directories.


Feedback Welcome!

I'd love to hear any comments on this series. Find it useful? Think there's a better way to implement the technique or something I should have mentioned? Please drop a line in the comments to help me improve the series!


<— Part 5: TortoiseSVN Client Connection and Repository Layout

Monday, April 23, 2012

Quick Tip: Need a User to Take Screenshots of a Problem? Try Using Win7's Built-in Tool [Tips & Tricks]

I can't believe I hadn't heard of this before. There’s a neat tool that ships with Windows 7 called the “Problem Steps Recorder” tool (psr.exe).

The Process Works Like This:
  • A user clicks the record button
  • The user performs all the steps to recreate the problem (and can add comments at each step)
  • The user clicks stop recording, and is prompted to save a ZIP file, which they can then mail to a technician.
  • The ZIP file contains an MHT file. This file contains a screenshot of every step the user performed, as well as system information regarding those steps. You can look through them one-by-one or play them as a slide show.

To Access the Tool:
  • From a Windows 7 machine, in the Start menu, type “psr” and click PSR.exe. NOTE: if you have applications running in Administrator mode, you will need to run PSR.exe in Administrator mode as well in order to be allowed to capture them.

Quick Tip: Maximo 7 -- Location of Workorder Status Information [Field Notes]

A quick note for myself and anyone else who might be interested:

Maximo 7 has a few internal status codes, but custom statuses can easily be created. However, these have to map to one of the internal status codes.

I wanted to give our customers an overview of the internal status codes and their mappings. After some searching, I found the following method successful:

  • Login to Maximo 7 
  • Click Go To --> System Configuration --> Platform Configuration --> Domains
  • Search for the "WOSTATUS" domain and expand it.
Here, you'll have a list of all statuses with internal and external values.

Hope this helps!

Sunday, April 22, 2012

Building a Build Process, Part 5: TortoiseSVN Client Connection and Repository Layout

This is part of a larger series in building a proper build process. For more information and additional series links, check out the introductory post.
Welcome back! Now that we have a TLS-encrypted Apache setup with SVN, we’re going to take a look at creating the repository layout.

NOTE: Before I begin, I should mention that while I will speak about them authoritatively, the methods for laying out repository structures are by no means set in stone, and are in fact debated quite vigorously at times in the tech community. This is the general flavor that I’ve picked up in some places, including an excellent Pluralsight course on Continuous Integration by James Kovacs. It’s the style that I believe I’ll adopt going forward, though don’t hold me to it.

Obtaining TortoiseSVN

TortoiseSVN is pretty much the de facto standard for Subversion clients on Windows. The latest version as of this writing is 1.7.6. You can download the 32-bit version or 64-bit version from their downloads page.
Installing Tortoise is about as standard as it gets. Run the installer, accept the license agreement, and select the options for install.
During the install, I usually right-click “command line client tools” and install them by choosing “Will be installed on local hard drive”. You never know when they’ll come in handy (though I don’t intend to refer to them in this series).
After this, the app installs. NOTE: If you’ve got some cash, I highly recommend you donate to the TortoiseSVN project. It’s a fundamental piece of software for developers and good software deserves our support.

Linking a Local Folder to the Repository

Our first step is to pull down the repository we created (which is currently blank, but never mind that):
  • Create a folder somewhere on your hard drive where you would store repository data (for example, mine is in C:\Users\Sean\Repositories, and I have a Win7 Library configured to point there).
  • Navigate into that folder.
  • Create a new folder called “TestProject”, named after the repository we created on the Subversion CentOS VM.
  • Right-click on the TestProject folder. Notice that there are some new options now; this is where TortoiseSVN lives – in your context menu.
  • Select “SVN Checkout”. This tells Tortoise to attempt to pull down a repository into the folder you have selected.
  • For the URL of the repository, type https://[ip or hostname of your svn server]/svn/TestProjectRepo. Note the https; we’re going to pull this repository down over a TLS-encrypted connection.
  • Double-check that the checkout directory is the new folder that you created on your local machine called TestProject, and then Click OK.
  • At this point, you will likely receive a message that certificate validation failed. Click “Accept the certificate permanently”, since we know the certificate is trustworthy (we created it, after all).
  • Next, you’ll be prompted for username and password. Enter one of the two logins we created when we initially set up Subversion. You may want to click “save authentication”; otherwise you’ll be prompted for it whenever you communicate with the repository.
You will see in the TortoiseSVN dialog box that the process has completed. The folder may also have a green check-mark overlaid. This is TortoiseSVN’s handiwork; it lets you know that your copy of the repository is up to date.

Creating the Initial Repository Layout

Enter the TestProject folder on your local machine. We’re going to create three empty folders under this directory that have a very specific meaning. Create the following folders:
  • trunk
  • branches
  • tags
This is one of the standard layouts for a repository in Subversion, and the one I tend to use. The subversion concepts related to these folders are something like the following:
  • Trunk: This is where the main portion of development takes place. It is sort of the “master timeline” of real-time development of your application source code. More often than not, your working copy (more on this soon) will be the trunk.
  • Branches: With Subversion, you can create branches. Think of it like creating an alternate reality of your code at a specific point in time. You can take your whole system as it is and “branch it”, either to fix bugs for a specific point in time or to implement a certain feature based on the code at that point. Branching can be kind of a pain in Subversion, as you then have to merge your changes back into the trunk, and you tend to do a lot of merging to ensure things stay in sync. For this reason, I avoid it unless it’s necessary. I do, however, like the idea of branching when code is released, so that you can fix bugs for that release and then merge the fixes back in.
  • Tags: Think of tags like a read-only branch, or a copy of your code frozen at a specific point in time. This is actually pretty cool. It allows you to do things like tag every major release, so that at any point your developers can pull it back up and compile it to check it out. Imagine a big client comes to you and they have a bug in a legacy version of the application you put out. You can’t force them to upgrade. How do you figure out what the code was at that point in time? Simple; just pull down the tag folder and it’s there.
Normally, you won’t be looking at all three of these at once. You’ll want to create what’s called a working copy of the source that is connected either to the trunk, or a specific branch. Think of your working copy as your lens into the source code. We’ll be creating the working copy soon, but first we have to commit this initial structure.

A Word on the Update / Commit Cycle

A big idea behind Subversion is that instead of locking files so only one person can use them, it allows everyone to edit all files in the repository (assuming they don’t have read-only access). This is usually vastly more efficient, but it does come with some caveats. If two people edit the same file, it will create a conflict, and that conflict will have to be resolved by comparing (“diffing”) the two versions of the file, often together with the other developer. This isn’t usually a bad thing, but it becomes cumbersome if the files haven’t been checked in for a long time.
To avoid this, I usually stick to the following principles:
  • Before you begin coding, update the repository by right-clicking on the project folder and choosing “SVN Update”. This will get the latest copy of the source code.
  • As you work on the code, update once in a while in case someone has made changes that need to be merged with your file. Little changes made more often means smaller, more manageable conflicts.
  • Update once before you check your code in to ensure that no conflicts will arise.
  • Check in your code several times a day (if you’ve added something complete and useful that works). This will ensure that your change sets are smaller for other developers (more on the check-in process below).
  • Always add a message when you check in and describe your changes in enough detail so that others can understand them. No novels are necessary, but it’s important to know in human-readable form who changed what, when, and why.

Committing Our First Changes

So, we added three folders to the repository. As small a change as it is, it is a change, and so we need to check those changes in so Subversion can pass them along to anyone else. To do this:
  • Right-click on the TestProject folder and select “SVN Commit…”
  • Enter a message into the message box that makes sense (e.g. “Added initial folder structure”.)
  • Note that since we added new items, they’re not currently under source control. Subversion is smart enough to not add new items unless you tell it to.
    • If you don’t see any files, make sure the “Show unversioned files” checkbox is checked.
  • Click the “All” button and Subversion will select all items to commit to the repository.
  • Click OK and the folders will be committed to the repository. You’ll see that it added each folder, and that the repository is now at revision 1.

Creating the Working Copy

Now that the files are committed to the repository, we can safely delete the local folder, knowing we can pull the files down again later. That’s exactly what we’re going to do – but this time, we’re going to create our working copy by pulling down only the trunk.
  • Delete the TestProject folder in Windows Explorer.
  • Create a new folder called TestProject.
  • Right-click the TestProject folder and select “SVN Checkout…” from the menu.
  • Make the URL of the checkout path “https://[ip or hostname of svn server]/svn/TestProjectRepo/trunk” and click “OK”.
The trunk will be pulled down, and you’ll see it’s at revision 1. Now, even though “branches” and “tags” exist in the universe, the world we know will consist of the trunk folder. If we ever need to switch to a branch, we can do so through the “Switch” command (but we have no use for that now).

Creating the Project Folder Structure

After watching the Pluralsight course on Continuous Integration, I really like the layout that was chosen for the project. We’re going to create a folder structure, and then I’ll explain why we created it like we did.
Create some new directories so the folder structure looks like the following:
  • TestProject (already exists)
    • \buildartifacts
    • \src
      • \app
      • \test
    • \thirdparty
      • \libs
      • \tools
    • \doc
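If you prefer the command line, the same skeleton can be created in one pass with mkdir -p (shown here in a POSIX-style shell such as Git Bash; PowerShell’s mkdir behaves similarly):

```shell
# Create the project skeleton under the working copy's parent directory
mkdir -p TestProject/buildartifacts
mkdir -p TestProject/src/app TestProject/src/test
mkdir -p TestProject/thirdparty/libs TestProject/thirdparty/tools
mkdir -p TestProject/doc
```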
These folders have the following purposes:
  • TestProject (root folder): This is where your solution file will be. Later in this series, it will also be where we place the MSBuild XML file that applies to our software.
  • buildartifacts: This is the place where we’re eventually going to put all of the code that we’ve compiled, automatically, via the MSBuild script. This folder should be left out of source control because everyone’s binaries will be different and not of much use except to the developer (more on that later).
  • src: as you probably guessed, this is where all our actual source code is going to go.
  • app: this will store all the application source code (your individual projects will eventually be in this directory)
  • test: this is where all your unit testing and integration testing projects will live. This will let us do some handy things later as far as packaging up the project for release, etc.
  • thirdparty: this is where code goes that isn’t yours. Could be something you referenced, or an open-source project that you make use of, etc.
  • libs: this is specifically for third-party libraries that you use within your source code.
  • tools: this is for DLLs or applications that help you with the build process or things external to your source code. Think NUnit for running unit tests, or special add-ons to the MSBuild DLLs.
  • doc: this is where all your documentation for your projects should live, if you need to reference it. It could be internal documentation, a user manual that evolves along with the project, a dictionary of acronyms, documentation on processes for your developers, business use cases, or marketing materials. Any documentation that is based on the code in the src folder and evolves along with it should be included here.
At this point, let’s update the solution by right-clicking TestProject and selecting SVN Update. Since no other changes to our files were detected, the update completes successfully. We then commit the changes by right-clicking on TestProject and choosing SVN Commit, following the same process as we did earlier. (Did you remember to add a commit message?)

Creating the Solution in our Project’s Root Directory

After all our hard work to set up the solution right, we’re finally ready to create the solution within Visual Studio.
First, we’ll likely want to enable the Always Show Solution option within Visual Studio so that it’s easier to create a blank solution – which is what we’ll be doing first.
Next, complete the following steps:
  • Open Visual Studio 2010 (or version of your choice)
  • Select “New Project”.
  • In the New Project dialog, expand the “Other Project Types” folder and click the “Visual Studio Solutions” category.
  • Name the solution TestProject.
  • Since a folder is always created for a solution, and we already have one, choose the folder above the TestProject folder as the location. (For example, my TestProject folder is in C:\Users\Sean\Projects\TestProject, so I select C:\Users\Sean\Projects.) This keeps us from having to copy/paste the solution to the right spot manually later.
  • The new solution is created and you see it within Visual Studio.

Adding Other Projects Under the src Directory

  • Right-click on the newly-created solution in Visual Studio and select Add –> New Project.
  • Add a C# class library called TestProject.Core, in the location of TestProject\src\app (the folder structure that we’d created previously). This will be the project that holds all of your application’s shared core logic (in case you need to share code common to both a web site and a WPF app, etc).
  • Delete the class1.cs file from the Core Project (we won’t need it).
  • Right-click on the solution and add another C# class project, TestProject.Core.Tests (this will hold the tests for the core business code). Give the location of this project TestProject\src\test. Delete its class1.cs file as well.
  • Repeat this process for two more projects: an MVC3 web app called “TestProject.Web” and a class library called “TestProject.Web.Tests”. Can you figure out the right directories to put them in?
At this point, we’ve got a full project setup.

Committing our Changes – and Ignoring a Directory

We’re ready to commit our changes – or are we?
I’d mentioned earlier that the “buildartifacts” directory doesn’t make much sense to commit, as everyone will have their own, and the files are highly likely to cause conflicts because they are binary. It would be useless to attempt to do much with them outside the context of our personal development, so we’re going to have Subversion ignore the buildartifacts folder, ensuring that it never adds that information to the repository.

  • In Windows Explorer, right-click on the buildartifacts folder.
  • Select TortoiseSVN –> Unversion and add to ignore list –> buildartifacts. Subversion will now ignore the whole folder.
With this done, we’re ready to commit all our changes.
  • Right-click on the TestProject folder and select… if you guessed Commit, you’re wrong! Remember, we should update the solution first (it’s a good habit to get into). Select SVN Update.
  • With no conflicts detected, select “Commit” and click “all” to select all the new unversioned files.
  • Hit OK and the files will be committed to the repository.
Congratulations! You’ve fully completed a solution setup with Visual Studio and Subversion.

Feedback Welcome!

I'd love to hear any comments on this series. Find it useful? Think there's a better way to implement the technique or something I should have mentioned? Please drop a line in the comments to help me improve the series!


<— Part 4b: Securing Subversion's Connection via TLS