Paul's Internet Landfill/ 2013

Here are the non-filtered entries I published in 2013:

Scripted Installations of Windows 7 via PXE

The end of life date for Windows XP is quickly approaching, and like many other lazy/trailing-edge sysadmins, we are just beginning to transition to something else. If I were an ideological purist that "something else" would be some flavour of FLOSS, but because I am a sellout we are moving to Windows 7.

This blog post documents the infrastructure we use to install and maintain Windows images. Here are some of the goals for this installer:

  • It should not require a Windows server to use, because we do not have enough server CALs for all of the computers we install (namely, those we install as part of the Working Centre's Computer Recycling project).

  • It should be integrated with the PXE infrastructure we already have in place.

  • It should work for the different classes of computer we install: staff machines, machines for our training and public access labs, and the computers we refurbish and sell.

  • It should be completely scripted and automated. In particular, it should not require us to maintain images, because maintaining images is for dorks.

  • Ideally, it should provide ways to keep images up to date with third-party updates after they have been deployed.

  • The installer should be reasonably straightforward to maintain even by those who did not set it up. (This documentation is an attempt to fulfill that requirement.)

The infrastructure we set up took weeks and weeks of work to develop, but it works pretty well.

There are other valiant attempts to do this, but I have not seen other cases that match our needs exactly. For example, there is a procedure here that claims not to use the WAIK at all: http://www.ultimatedeployment.org/win7pxelinux1.html . But I could not follow it well enough to get it working, and it was too ideologically pure for me. Since I am installing Windows 7 computers, getting a Windows 7 computer to use as a technician workstation did not seem that much of a stretch.

Overview

The installation process is kind of intricate, so here is a high-level overview:

  • The client boots to the PXE server and selects a Windows autoinstaller.
  • The PXE server then boots a remastered WinPE ISO file.
  • The ISO file contains a file called winpeshl.ini, which launches an init.cmd file on the remastered ISO.
  • This init.cmd maps a network drive, and then calls a runsetup.cmd command on the network drive.
  • The runsetup.cmd handles any user interaction needed to configure this particular installation. Then it launches setup.exe from the Windows 7 installation DVD with an unattend.xml file.
  • The unattend.xml file sets a number of configuration options for the barebones Windows install. It partitions the hard drive, installs Windows, and sets up initial users.
  • As part of the OOBE ("Out of the Box Experience") step of the Windows installation, the installer launches a postinstall.cmd script. This script runs a number of tweaks, adds drivers where appropriate, and launches WPKG.
  • Based on the computer name, WPKG installs 3rd party software as required for this machine. After the machine has been deployed WPKG can be run again to update software on the computer.

Terminology/Stuff You Need

  • A Windows 7 installation DVD or ISO.
  • The Windows Automated Installation Toolkit (WAIK), which is installed on a "technician machine" (a regular Windows 7 machine). As of this writing, you could get the WAIK here: http://www.microsoft.com/en-us/download/details.aspx?id=5753
  • A computer to use as a PXE server (I will assume a Debian/Ubuntu server)
  • The ability to redirect PXE clients to your server (which probably requires modifying options in your DHCP server)
  • A few Samba/Windows shares that can be mapped as network drives.

For this walkthrough, let's assume the PXE server and Samba shares are all hosted by a machine called ntinstall with IP address 10.168.1.5.

The share containing the Windows 7 install files is \\ntinstall\win7-install

The share containing WPKG configuration is \\ntinstall\wpkg

The PXE configuration files live in /var/lib/tftpboot/ on ntinstall.
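
Setting up the Samba shares themselves is outside the scope of this walkthrough, but here is a rough sketch of what the share definitions in /etc/samba/smb.conf might look like. The win7-install path matches where the DVD contents end up later in this walkthrough; the wpkg path and the guest ok lines are assumptions -- adjust them for however you lay out files and handle authentication:

[win7-install]
   ; Windows 7 install files and the local/ scripts
   path = /var/lib/tftpboot/win7/share
   read only = yes
   guest ok = yes

[wpkg]
   ; WPKG scripts, configuration and %SOFTWARE% binaries
   path = /srv/wpkg
   read only = yes
   guest ok = yes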

Remaster bootable WIM

Most of these steps are taken from the following site: https://sites.google.com/site/godunder/windows-build-files/how-to-create-a-custom-winpe-3-0-3-1-boot-cd-iso-or-usb-boot-key

Start by installing the WAIK on the technician computer.

Next, create the file structure for the WinPE environment in c:\winpe :

copype.cmd x86 c:\winpe

Next, get the contents of the Windows 7 installation DVD someplace useful. (Eventually, this will have to be on a network share so that PXE clients can use it.) For now say that the Windows 7 contents are on drive D:.

At this point, there are two possible files that could serve as the bootable image: boot.wim (in d:\sources\boot.wim) or winre.wim, which is embedded inside d:\sources\install.wim (at \Windows\System32\Recovery\). I cannot remember whether it makes a difference, so I will modify boot.wim. Refer to the link above if you want to use winre.wim (and please contact me to let me know that it makes a difference).

First, copy boot.wim someplace useful. "Someplace useful" means the ISO\sources folder, because the oscdimg command below depends on this location:

copy d:\sources\boot.wim c:\winpe\ISO\sources\

then mount boot.wim for editing:

dism /mount-wim /wimfile:c:\winpe\ISO\sources\boot.wim /index:1 /mountdir:c:\winpe\mount

If you look in c:\winpe\mount you should see some files. These are the mounted image. You can make changes here and commit those changes.

Start by editing c:\winpe\mount\windows\system32\winpeshl.ini . Mine looks something like this:

[LaunchApps]
wpeinit
%SystemDrive%\local\init.cmd

The wpeinit command initializes the WinPE environment (including networking).

Now, make a folder called c:\winpe\mount\local and add an init.cmd script. This script will map the network share and call a runsetup.cmd script on that share. (It is possible that this could all be integrated into winpeshl.ini, which would save a step. Oh well.)

My init.cmd looks something like this:

@echo off

echo Mounting network drive..
net use q: \\ntinstall\win7-install

echo Launching runsetup.cmd

q:\local\runsetup.cmd

Next, we want to remove the irritating "press any key to boot from CD or DVD" message when the ISO boots. To do this, delete the c:\winpe\ISO\boot\bootfix.bin file.
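
From a command prompt on the technician machine, that is simply:

del c:\winpe\ISO\boot\bootfix.bin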

We should now be ready to commit the changes we made to the image:

dism /unmount-wim /mountdir:c:\winpe\mount /commit

and remaster the boot CD:

oscdimg -n -bc:\winpe\etfsboot.com c:\winpe\ISO c:\winpe\winpe_x86.iso

If all went well, you should have a 200MB winpe_x86.iso file that can be used for PXE booting.

Configure a PXE server

I will not go through detailed steps of how to install a PXE server here. There are other good tutorials online. I used the ones here: http://www.howtoforge.com/ubuntu_pxe_install_server

Here are a few quick tips that saved me some hassle:

  • I used the tftpd-hpa TFTP server
  • To get a sensible set of configuration files for the PXE boot menu, I stole the configuration files from an Ubuntu installation CD. It looks like you may be able to steal the isolinux folder from a Debian install CD and get similar template files. You might be best off using them for inspiration rather than using them directly.
  • In /etc/default/tftpd-hpa, I set the following option: TFTP_DIRECTORY="/var/lib/tftpboot"
  • I needed to add the following to /etc/sysctl.conf to prevent "PXE-EA1: No PXE server found, using static boot file" errors: net.ipv4.ip_no_pmtu_disc = 1 (but this was for an ancient Ubuntu 8.04 server, and you may not need this any more.)
  • I organize the files in /var/lib/tftpboot/ as follows:
    • introscreens/ : Text for user prompts (f1.txt, f2.txt, etc)
    • pxelinux.cfg/default : I throw all of my PXE configuration stanzas here
    • win7/ : Files related to the Windows 7 installer go here
  • The pxelinux.0 and memdisk files can be found in the syslinux-common package. You want version 4 or higher (which is the default in Debian Wheezy) because this version allows you to boot ISO files directly. I put these in /var/lib/tftpboot
  • Maybe you need the *.c32 files from the syslinux-common package as well. I put them in a folder /var/lib/tftpboot/libs and then used a #PATH libs directive in my pxelinux.cfg/default file.

The beginning of my pxelinux.cfg/default file looks something like this:

#PATH libs 

DISPLAY introscreens/f1-boot.txt

f1 introscreens/f1-boot.txt 
f2 introscreens/f2-hwtest.txt
f9 introscreens/f9-windows.txt

and my f9-windows.txt file looks something like this:

^Xintroscreens/cr_splash_danger.rle


These Windows installers are for AUTOMATED INSTALLATION. They will
WIPE 
OUT your hard drive without prompting you!


autoinstall-win7      : Windows 7 Installer

The first character is special. I think it is a ctrl-X character. The cr_splash_danger.rle is a splash screen graphic file. You make it by creating a PPM file and converting it with the ppmtolss16 command (available in the syslinux package). There are some reasonable instructions here:

- http://christian.amsuess.com/tutorials/lanbootserver/index.html : look at "Create An Own Bootsplash Image"
- http://frantisek.rysanek.sweb.cz/splash/isolinux-splash-HOWTO.html : fairly in-depth, but I do not think you need to go through the hassle of making a GIF first.
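
The conversion itself is basically a one-liner. This is just a sketch: it assumes you have already made a 640x480 PPM file that uses at most 14 colours:

ppmtolss16 < cr_splash_danger.ppm > cr_splash_danger.rle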

Next you need to configure the DHCP server to point to your PXE machine. I use pfSense as the DHCP server for this subnet. I needed to add a next-server directive to the "DHCP Server" section (labelled "Enable network booting"). The IP of the next-server is the IP address of the PXE server (10.168.1.5 in my example) and the filename is /var/lib/tftpboot/pxelinux.0

If you do not have access to your DHCP server, you could add a second network card to your PXE server and set up a private subnet there (with its own DHCP server). Then you could use that private subnet to install Windows.
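
If you run ISC dhcpd instead of pfSense (for example, on that private subnet), the equivalent configuration is a subnet stanza something like the following sketch. The subnet, range and router values are made up; note that with tftpd-hpa serving /var/lib/tftpboot, the filename is given relative to that directory:

subnet 10.168.1.0 netmask 255.255.255.0 {
    range 10.168.1.100 10.168.1.200;
    option routers 10.168.1.1;
    # point PXE clients at the TFTP server and the boot loader
    next-server 10.168.1.5;
    filename "pxelinux.0";
}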

To test the PXE server, it might be easiest to test with memtest86+ . Install the memtest86+ package, grab the binary /boot/memtest86+.bin, and copy it to the folder /var/lib/tftpboot/memtest/. Then use the following stanza in your pxelinux.cfg/default file:

LABEL memtest
    kernel memtest/memtest86+.bin

If all goes well, you should be able to network boot a machine, have it connect to your PXE server, and get a prompt. Typing "memtest" should start the memory tester.

Some computers do not support network booting, or they do not support network booting well (in particular, they do not know what to do with a next-server directive from the DHCP server). If this is the case, generate a GPXE ISO or USB key image from http://rom-o-matic.net/ and boot from that.

Configure PXE for the Windows Installer

Once your PXE server is working, set up Windows 7 installer files:

  • Copy the remastered winpe_x86.iso file into /var/lib/tftpboot/win7
  • Make a folder in a Samba share and populate it with the contents of the Windows DVD. I made a (read-only) share in /var/lib/tftpboot/win7/share, but you can do whatever makes you happy. To match the init.cmd contents above, this should be viewable as \\ntinstall\win7-install
  • Copy the contents of the Windows 7 installation DVD to the share. I put mine into /var/lib/tftpboot/win7/share/win7dvd/
  • Make a folder for your configuration files. Mine is called /var/lib/tftpboot/win7/share/local

Next, add a stanza that will boot your Windows 7 installer. Here is what I used in my pxelinux.cfg/default file:

LABEL autoinstall-win7
    kernel memdisk
    append iso raw initrd=win7/winpe_x86.iso

Now if you type autoinstall-win7 at the PXE boot prompt, you should see the ISO file getting downloaded (which will produce a lot of dots on the screen) and then you should boot into the Windows installation environment. Pressing <shift>+<F10> should bring up a command window. Typing wpeutil reboot in a command line might get you out if you are stuck.

Because there is no runsetup.cmd file yet, the installer will get stuck and probably reboot. But if you are lucky, it will at least map the Q: network drive first.

Add drivers

Windows 7 does a pretty good job of finding drivers on its own, but if you have drivers that are not detected by particular machines, adding your own is usually pretty easy.

In the win7dvd\sources folder, make the following folder hierarchy: $OEM$\$$\Inf . Inside this Inf folder you can add the (unpacked) drivers for your hardware. I make one folder for each model of machine (dc7100, ibm-6072-c1u, etc) and then make folders in those machine folders for each component (Sound, Video, etc). Then I put the unpacked drivers in those subfolders.

This is great because Windows will only integrate drivers that it needs for the particular installation.

In addition, once you have the $OEM$\$$\Inf path working, the underlying folder hierarchy seems pretty forgiving. I do not put spaces in filenames for superstitious reasons.
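
For example, after adding drivers for a couple of machines, the hierarchy might look something like this (the machine folder names come from my fleet; the component folder names are just illustrations):

win7dvd\sources\$OEM$\$$\Inf\
    dc7100\
        Sound\
        Video\
    ibm-6072-c1u\
        Network\
        Video\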

Note that this technique will not work for components that don't have enough drivers to get through the install (namely, network cards). I believe those drivers need to be slipstreamed into the winpe_x86.iso file above. UPDATE: See the next section.

I learned about this trick from the following forum post: http://forums.mydigitallife.info/threads/22915-Fully-unattended-win7-x64-install-with-integrated-%28not-injected%29-unsigned-drivers

Add drivers to boot.wim

I ended up having to slipstream network card drivers into winpe_x86.iso, and found that the $OEM$ technique above was insufficient for that. Here are some quick notes on how I did it, assuming that the driver files are unpacked into a folder called c:\drivers:

Get your remastered winpe_x86.iso. Use 7-zip or something to extract boot.wim from it into c:\winpe\ISO\sources.

Next, mount the WIM file:

dism /mount-wim /wimfile:boot.wim /index:1 /mountdir:c:\winpe\mount

Now you can install the drivers. All the drivers in c:\drivers will be added. You can also point this command to a single .inf file if you only want to add one driver.

dism /image:c:\winpe\mount /add-driver /driver:c:\drivers /recurse

Next you have to unmount boot.wim:

dism /unmount-wim /mountdir:c:\winpe\mount /commit

Finally you have to remaster the ISO file:

oscdimg -n -bc:\winpe\etfsboot.com c:\winpe\ISO c:\winpe\winpe_x86.iso

Then upload winpe_x86.iso to your PXE folder, and you are done.

Set up runsetup.cmd

The purposes of this file are:

  • To live on a network drive so that making changes does not require remastering the winpe_x86.iso file.
  • To allow the user to customize the Windows install.
  • To pick an appropriate unattend.xml and run setup.exe with it.

It is worth talking about user customization a little. You might want different kinds of Windows installations for different computers. Maybe some computers should get Office 2010 installed, some Office 2007, and some no Office at all. Maybe you need to install Windows 7 Professional for some computers and Windows 7 Enterprise for others.

Ideally, we would have different PXE stanzas for each case:

LABEL autoinstall-win7-office2010 
    kernel memdisk
    append iso raw initrd=win7/winpe_x86.iso office=o2010


LABEL autoinstall-win7-office2007
    kernel memdisk
    append iso raw initrd=win7/winpe_x86.iso office=o2007

but I never found an effective way of getting parameters passed on the PXE command line to the Windows installation environment. Therefore, we have to make the decision once the installer environment has launched, which is where the runsetup.cmd command fits in.

To do this, I use the choice.exe command, which I copied from the technician machine to the network share (probably violating some EULAs in the process). Then I had a runsetup.cmd file that looks something like this:

@echo off

echo Starting runsetup.cmd... 

echo(


rem =========== CHOOSE OFFICE OPTIONS ==================

echo Irritating installation Menu
echo ----------------------------

echo(
echo a - Install Office 2010
echo b - Install Office 2007 


rem if statements must be in reverse order of choice

q:\local\choice.exe /C ab /t 30 /m "Choose or lose (timeout: 30s):" /d b
if errorlevel 2 goto office_2007
if errorlevel 1 goto office_2010

echo "Timeout!"

:office_2010

echo You chose to install Office 2010!
copy q:\local\unattend-o2010.xml %TEMP%\unattend.xml
goto endchoice

:office_2007

echo You chose to install Office 2007!
copy q:\local\unattend-o2007.xml %TEMP%\unattend.xml
goto endchoice

:endchoice

rem ================ START SETUP (x86) ==================

echo Starting setup.exe

q:\win7dvd\setup.exe /unattend:%TEMP%\unattend.xml

In this case, Office 2007 gets installed if there is no other input for 30 seconds.

You can add other options as well, but if your selections get too multivariate you will face a combinatorial explosion quickly.

The unattend-o2010.xml file is a Windows answer file that specifies how the installation should proceed. In a flagrant violation of the "don't repeat yourself" principle, the two XML files are nearly identical copies of each other. (In this case only the preferred hostname changes.)

Note that the XML file must be named unattend.xml for setup.exe to recognise it as an answer file. (Actually, autounattend.xml might also work as a name, but unattend-o2007.xml will not, and it will frustrate you if you try.)

Set up unattend.xml

The easiest way to create the set of unattend.xml files that you need is to use the "Windows System Image Manager" program (part of the WAIK). Basically, you need to create a new answer file, and then you need to populate it.

The Windows installer goes through a number of different "passes", and different configuration options apply to each pass.

Here are two good guides to getting a handle on the passes and how to create answer files:

- http://www.symantec.com/connect/blogs/windows7-untangling-scripted-installs-sysprep-and-configuration-passes
- http://community.spiceworks.com/how_to/show/2224-create-a-master-windows-7-image
Steps 6 through 11 are the most helpful. This tutorial uses amd64 (ie x64) options, but there are corresponding x86 options as well.

Here are specific options I set. Note that I did not set any options for 2 offlineServicing, 5 auditSystem or 6 auditUser.

1 windowsPE

  • x86_Microsoft-Windows-International-Core-WinPE_neutral

    • Set InputLocale, SystemLocale, UILanguage, UILanguageFallback, UserLocale to en-US (or whatever)
  • x86_Microsoft-Windows-Setup_neutral

    • DiskConfiguration
      • Disk[DiskID="0"]
        • Action: AddListItem, DiskID: 0, WillWipeDisk: true
        • CreatePartitions
          • (This creates two partitions: 100MB for the system partition, and the remaining for the Windows partition)
          • Action: AddListItem, Order: 1, Size: 100, Type: Primary
          • Action: AddListItem, Order: 2, Extend: true, Type: Primary
        • ModifyPartitions
          • Action: Modify, Format: NTFS, Order: 1, PartitionID: 1
          • Action: Modify, Format: NTFS, Order: 2, PartitionID: 2
    • ImageInstall
      • OSImage
        • InstallToAvailablePartition: False, WillShowUI: OnError
        • InstallFrom:
          • Path: Q:\win7dvd\sources\install.wim
          • MetaData[Key="/IMAGE/NAME"]
            • Action: AddListItem, Key: /IMAGE/NAME, Value: "Windows 7 PROFESSIONAL"
        • InstallTo: DiskID 0, PartitionID: 2
    • UserData
      • AcceptEula: true

The ImageInstall | OSImage | InstallFrom | MetaData key determines what version of Windows gets installed. You find out what to type for "Value" by typing

imagex /info \\ntinstall\win7-install\win7dvd\sources\install.wim

This will dump an XML file. Each flavour of Windows has an index and a name. The names available in my install.wim are:

  • Windows 7 STARTER
  • Windows 7 HOMEBASIC
  • Windows 7 HOMEPREMIUM
  • Windows 7 PROFESSIONAL
  • Windows 7 ULTIMATE

Using "IMAGE/INDEX" instead of "IMAGE/NAME" is probably less error-prone, but harder for other people to understand.

3 generalize

  • x86_Microsoft-Windows-Security-SPP_neutral
    • SkipRearm: 1

4 specialize

  • x86_Microsoft-Windows-Deployment_neutral

    • RunSynchronous
      • RunSynchronousCommand[Order="1"]
        • Action: AddListItem
        • Description: Disable "The publisher could not be verified" prompts
        • Order: 1
        • Path: cmd /c reg add "HKCU\Software\Microsoft\Internet Explorer\Security" /v DisableSecuritySettingsCheck /t REG_DWORD /d 1 /f
      • RunSynchronousCommand[Order="2"]
        • Action: AddListItem
        • Order: 2
        • Path: cmd /c reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 0 /f
  • x86_Microsoft-Windows-Security-SPP-UX_neutral

    • SkipAutoActivation: true
  • x86_Microsoft-Windows-Shell-Setup_neutral

    • ComputerName: computer-name-here
    • ProductKey: your product key
    • ShowWindowsLive: false
    • TimeZone: Eastern Standard Time
    • WindowsFeatures
      • ShowMediaCenter: false
      • ShowWindowsMail: true
      • ShowWindowsMediaPlayer: false
  • x86_Microsoft-Windows-UnattendedJoin_neutral

    • Identification
      • JoinWorkGroup: WORKGROUPNAME

The primary way I distinguish between different types of builds (Office 2010 vs Office 2007, for example) is by changing ComputerName in the different unattend.xml files.

If you do not specify a ProductKey then I believe that the installer will use a default key, and I believe the installer will try to activate itself by contacting a KMS host (for flavours that support this). You can find a list of the default keys available for Windows 7 by looking for product.ini in the sources folder of the Windows install DVD. Of course, the default keys will not activate Windows, so you will have to activate afterwards.

The list of TimeZones is picky. You can find a list of strings that Microsoft claims works here: http://technet.microsoft.com/library/cc749073%28WS.10%29

The RunSynchronous commands will come up again later. I wish I had documented why I put in the second registry edit. I believe it disables UAC (which is later restored in the postinstall.cmd script).

7 oobeSystem

  • x86_Microsoft-Windows-International-Core_neutral

    • InputLocale, SystemLocale, UILanguage, UILanguageFallback, UserLocale: en-US
  • x86_Microsoft-Windows-Shell-Setup_neutral

    • Autologon
      • Enabled: True
      • LogonCount: 1
      • Username: firstuser
      • Password
        • Value : type the password here; it will be scrambled later
    • Display
      • ColorDepth: 32
      • HorizontalResolution: 1024
      • VerticalResolution: 768
    • FirstLogonCommands
      • SynchronousCommand[Order="1"] Action: AddListItem CommandLine: cmd /c "copy \\ntinstall\win7-install\local\postinstall.cmd %TEMP% && %TEMP%\postinstall.cmd"
    • OOBE
      • HideEULAPage: true
      • HideWirelessSetupInOOBE: true
      • NetworkLocation: Work
      • ProtectYourPC: 1
      • SkipMachineOOBE: true
      • SkipUserOOBE: false
    • UserAccounts
      • LocalAccounts
        • LocalAccount
          • Action: AddListItem
          • Group: Administrators
          • Name: firstuser
          • Password
            • Value: match the password in the other password field

The LogonCount means that the user firstuser will be logged in automatically once, without having to type the password. This gives the postinstall.cmd script a chance to run and complete the installation.

The postinstall.cmd does most of the customization, which is why we do not need the WAIK "Packages" functionality. Note that the command is all one line.

The SkipMachineOOBE and SkipUserOOBE are deprecated, but they still work. The NetworkLocation does not seem to work for me -- after the install finishes the user is still prompted for network locations.
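
For reference, the FirstLogonCommands part of the answer file ends up looking roughly like this (a sketch). If you edit the XML by hand rather than through the Windows System Image Manager, remember that the ampersands have to be escaped:

<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <Description>Run postinstall.cmd</Description>
    <CommandLine>cmd /c "copy \\ntinstall\win7-install\local\postinstall.cmd %TEMP% &amp;&amp; %TEMP%\postinstall.cmd"</CommandLine>
  </SynchronousCommand>
</FirstLogonCommands>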

Configure postinstall.cmd

The postinstall.cmd command does a lot of the tweaks that are hard to enter into the unattend.xml file. The script runs as the first user you defined (firstuser in my case, which is a member of Administrators), but if UAC is enabled, then the script will give UAC prompts.

Here are some choice elements of my postinstall.cmd script. Sorry that the line lengths are so ridiculous.

:: Make Admin password never expire
WMIC USERACCOUNT WHERE "Name='firstuser'" SET PasswordExpires=FALSE

You can do a lot of interesting things with WMIC.

:: ============ STARTUP SOUND ===========

cmd /c reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\BootAnimation /v DisableStartupSound /t REG_DWORD /d 1 /f

This disables the startup sound.

:: ============ THEME STUFF =============

:: Copy default theme (no sound)
copy \\ntinstall\win7-install\local\includes\twc-default.theme %SYSTEMROOT%\Resources\Themes\

:: This supposedly sets the theme for all users
:: From: http://www.sevenforums.com/tutorials/80384-theme-specify-default-theme-load-new-users.html

:: First load the default user hive and edit it:
:: Edit HKLM\TempHive instead of HKCU

cmd /c reg load HKLM\TempHive "%SystemDrive%\Users\Default\NTUSER.DAT"

cmd /c reg add HKLM\TempHive\Software\Microsoft\Windows\CurrentVersion\Themes /v CurrentTheme  /t REG_SZ /d "%SYSTEMROOT%\Resources\Themes\twc-default.theme" /f

cmd /c reg unload HKLM\TempHive

:: Now install the theme for firstuser
:: See http://www.sevenforums.com/themes-styles/93397-there-silent-command-line-operation-change-theme-2.html

\\ntinstall\win7-install\local\ThemeTool.exe changetheme %SYSTEMROOT%\Resources\Themes\twc-default.theme

Getting Windows 7 to shut up and not play sounds is pretty important for us. The good news is that this was scriptable -- I had a harder time in Windows XP (but I did not know about loading the default user hive then).

To make this twc-default.theme I logged into a Windows machine, right-clicked on the desktop, chose "Personalize" and changed the sounds to be "No sounds". Then I saved the theme. This appears in the current user's %userprofile%\AppData\Local\Microsoft\Windows\Themes folder. If you do not use any custom wallpapers, this is a simple file you can drop in.

The registry changes load the profile for the default user, make a change, and then save that profile. This means that every subsequent user will have the quiet theme by default. This does not force anybody to have a quiet theme, however -- users can still change their sounds. It is kind of confusing because simply changing the theme to a different one -- for example with different wallpaper -- will usually re-enable sounds.

Changing the theme for the current user is actually kind of hard. The ThemeTool.exe program is wacky. You have to generate it in a secret way, as documented in the www.sevenforums.com link:

  • Go to Control Panel / Troubleshooting
  • Run "Appareance and Personalization" . Run the wizard but do not finish it.
  • Look in "C:\Windows\Temp" for a folder that begins with the name "SDIAG_" (You may need to gain access to the folder as administrator.)
  • There may be a ThemeTool.exe file already. If not, right-click TS_ColorTheme.ps1 and run with PowerShell. This should generate the executable.

Copy the ThemeTool.exe executable to the network share (no doubt violating more EULAs).

:: ============ SOUND DRIVER STUFF =============

:: Install soundcard driver for dc7100s
:: WMIC trick from
:: http://myserverissick.com/2008/04/find-a-computers-model-using-the-command-line-or-in-a-batch-file/

FOR /F "tokens=2 delims==" %%A IN ('WMIC csproduct GET Name /VALUE ^| FIND /I "Name="') DO SET machine=%%A
ECHO Computer model: "%machine%"

:: Testing strings is complicated! The idea is to make a new string
:: by GETTING RID OF the text you want, and then compare results

set machine_testdc7100=%machine:dc7100=%

ECHO DC7100 test: %machine_testdc7100%

:: Gah. There is a "Would you like to install this device
:: software?" prompt. We need to trust some nonsense cert.
:: http://www.migee.com/2010/09/24/solution-for-unattendedsilent-installs-and-would-you-like-to-install-this-device-software/


:: The strings are different, so "dc7100" was removed
IF NOT "%machine%" == "%machine_testdc7100%" (
certutil -f -addstore "TrustedPublisher" \\ntinstall\win7-install\drivers\dc7100-audio\analog-devices-inc.cer
\\ntinstall\win7-install\drivers\dc7100-audio\sp36530\setup.exe /s
)

This is an example of installing a driver for a particular machine -- in our case, a Compaq/HP dc7100. The basic idea is to use WMIC to get the name of the computer, and then execute the code only if that machine matches.

:: Run wpkg script
\\ntinstall\wpkg\warez\wpkgsync\wpkgsync.bat

This installs third party software, as described in the next section.

:: Enable stupid "The publisher could not be verified" warnings again
:: From http://www.mockbox.net/windows-7/278-how-to-disable-windows-7-popup-the-publisher-could-not-be-verified

cmd /c reg add "HKCU\Software\Microsoft\Internet Explorer\Security" /v DisableSecuritySettingsCheck /t REG_DWORD /d 0 /f

:: Enable UAC again
cmd /c reg add HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v EnableLUA /t REG_DWORD /d 1 /f

This undoes the registry changes done in the 4 specialize section of the unattend.xml file.

Install third-party software with WPKG

Configuring WPKG is an entire topic unto itself, so I am just going to sketch the basics here. There is lots of documentation available at http://wpkg.org . Once WPKG is configured and working it is pretty sweet, but it took a lot of time to set up.

If you have an IT budget, then I would recommend using a service such as Ninite Pro instead: https://ninite.com/pro . It handles a lot of third party software with a lot less hassle than WPKG. There is a version of Ninite that is free for home users.

The advantage of using WPKG is that it can potentially install any software used at your organization, whether open or closed source. The critical factor is that it must be possible to script the program's installation non-interactively. This is usually possible, either because the application has a "silent installer" flag, or because a program like AutoIt can be used to click the buttons required to install the application.

In addition to the http://wpkg.org site, you can find information about silent installers on the old (and loved) Unattended installation site: http://unattended.sourceforge.net/installers.php . (We used the Unattended infrastructure to install Windows XP, and its approach definitely influenced this one.)

To set up WPKG, you need yet another Samba share, which I placed at \\ntinstall\wpkg. This needs to be visible to all clients (and it should be read-only).

Inside this share are the following files:

  • wpkg.js and the other WPKG binary files
  • packages.xml and the packages/ folder, which contain XML script files for each of the applications you wish to install. Most of the work in maintaining WPKG is creating these script files (or copying them from wpkg.org) and then keeping them up to date.
  • profiles.xml, which bundles software packages into groups that can be installed on particular hosts.
  • hosts.xml, which defines hosts and assigns profiles to them.
  • A %SOFTWARE% folder, which contains the binaries to be installed.
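
Laid out on disk, the share looks something like this (the warez folder is what the scripts below refer to as %SOFTWARE%):

\\ntinstall\wpkg\
    wpkg.js           (and the other WPKG files)
    packages.xml
    packages\
    profiles.xml
    hosts.xml
    warez\            (the %SOFTWARE% folder: installers, tools\, wpkgsync\)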

For example, here is a fragment from my profiles.xml file which specifies all the software that gets installed on three different classes of machine:

<profile id="default">
        <package package-id="WinDirStat" />
        <package package-id="AdobeFlashPlayer" />
        <package package-id="SumatraPDF" />
        <package package-id="JavaJRE" />
        <package package-id="Firefox" />
        <package package-id="7zip" />
</profile>

<profile id="staff-base">
        <depends profile-id="default" />
        <package package-id="TrueCrypt" />
        <package package-id="LibreOffice" />
        <package package-id="AdobeReader" />
        <package package-id="WinDirStat" />
</profile>

<profile id="staff-win7">
        <depends profile-id="staff-base" />
        <package package-id="Office2010ProfessionalPlus-staff" />
</profile>

The default profile is common to every machine. The staff-base profile is used to keep Windows XP machines up to date, and the staff-win7 profile is used for Windows 7 clients (which, unlike the Windows XP machines, get Office 2010 installed).

Here is a snippet from my hosts.xml that shows some of these profiles getting assigned:

<host groups="WPKG-Abstainers" profile-id="nothing" />

<host name="to-be-named" profile-id="staff-win7" />

<host groups="Domain Computers" os="professional.+6\.1\.\d{4}" profile-id="staff-win7" />

<host groups="WPKG-Clients-WinXP" profile-id="staff-base" />

The first line specifies an Active Directory group that is used to exclude certain machines from any WPKG updates at all (because the "nothing" profile has no packages defined).

The second line assigns the "staff-win7" profile to clients named "to-be-named" (as set by ComputerName in the unattend.xml answer file).

The third line filters on two things: the Active Directory group "Domain Computers" AND the operating system Windows 7 Professional.

The fourth line filters Windows XP clients, which must be in the Active Directory group "WPKG-Clients-WinXP" to have WPKG updates applied to them. (Note that these groups are COMPUTER groups, not USER groups.)

Again, the WPKG website has good documentation about filtering on hosts: http://wpkg.org/Extended_host_attribute_matching

Finally, here is a (very simple) package configuration file, to install and uninstall Inkscape:

<packages>
<package
  id="Inkscape"
  name="Inkscape"
  revision="3"
  reboot="false"
  priority="50">

  <variable name="version" value="0.48.4" />
  <variable name="extra" value="-1-win32" />

  <check
      type="uninstall"
      condition="exists"
      path="Inkscape %version%" />

  <install cmd='"%SOFTWARE%\Inkscape\Inkscape-%version%%extra%.exe" /S' />

  <upgrade include="install" />

  <remove cmd='"%PROGRAMFILES%\Inkscape\uninstall.exe" /S' />

</package>
</packages>

Many packages require some tweaking to install cleanly, but often other people have done the hard work already, and you can download XML files from the WPKG website. There is good documentation on creating a package script file here: http://wpkg.org/packages.xml

The postinstall.cmd script above calls a small batch file, wpkgsync.bat, which looks something like this:

@echo off
rem Synchronize WPKG from server without any client

set SOFTWARE=\\ntinstall\wpkg\warez

%SOFTWARE%\tools\wpkgMessage.exe /package "Installing.."

start /b %SOFTWARE%\tools\wpkgMessage.exe

cscript \\ntinstall\wpkg\wpkg.js /synchronize

%SOFTWARE%\tools\wpkgMessage.exe /package "Software updates completed!"

%SOFTWARE%\tools\wpkgMessage.exe /terminate

There are only two important lines in this script:

set SOFTWARE=\\ntinstall\wpkg\warez
cscript \\ntinstall\wpkg\wpkg.js /synchronize

The rest of the file is to display messages on the screen, using the wpkgMessage.exe program available here: http://www.gig-mbh.de/edv/index.htm?/edv/software/wpkgtools/wpkg-message-english.htm

We also call the wpkgsync.bat script to keep software up to date. In Active Directory, we use a shutdown script. Unfortunately, there is a catch: unless otherwise specified, Windows 7 terminates shutdown scripts after 10 minutes. So we have to change some parameters and make registry edits on the clients, as documented here: http://www.mail-archive.com/wpkg-users@lists.wpkg.org/msg05228.html

Here are the steps:

  • In Group Policy: Computer Configuration | Administrative Templates | System | Scripts set Maximum wait time for Group Policy scripts = 1800 (which gives WPKG 30 minutes to run)

  • As a Group Policy preference, set the following key: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\gpsvc] "PreshutdownTimeout"=dword:1b7740

  • In Group Policy: Computer Configuration | Administrative Templates | System | Logon set Run shutdown scripts visible = Enable

  • In Group Policy: Computer Configuration | Administrative Templates | System set Verbose vs normal status messages = Enable

  • In Group Policy: Computer Configuration | Windows Settings | Scripts | Shutdown run \\ntinstall\wpkg\warez\wpkgsync.bat

(This part of the installer is not fully tested yet.)
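
Incidentally, the PreshutdownTimeout value above is 0x1b7740, which is 1,800,000 milliseconds (30 minutes). If you wanted to set it directly on a client instead of through a Group Policy preference, the equivalent command would be something like this (run from an elevated prompt):

reg add HKLM\SYSTEM\CurrentControlSet\services\gpsvc /v PreshutdownTimeout /t REG_DWORD /d 1800000 /f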

Future Work

  • Thus far, I have not incorporated x64 builds into this infrastructure. I expect it is possible, but there will surely be quirks.
  • Maintaining multiple unattend.xml files is irritating.
  • Batch files are awful, and Windows 7 includes PowerShell support by default. Where possible (eg with the postinstall.cmd file) it would be nice to replace the batch files with something saner.
  • I have not incorporated Windows updates into the installer, so machines need hundreds of updates once deployed. I expect that http://download.wsusoffline.net will be helpful here, but I have not integrated it into the installer.
  • I am not happy with the sound schemes. I would like most people to have no sound effects in their themes regardless of what desktop theme they choose, but I do not want to disable sound entirely.
Composed December 28th, 2013

Isolating Subnets in pfSense

The pfSense firewall distribution is one of my favourite pieces of software. It is powerful and flexible, has wide adoption, and is under active development.

For the most part, the GUI for firewall rules is intuitive to use. But it has a huge problem: it makes isolating subnets unintuitive. Consider the following set of networks:

  • WAN (which we won't care much about in this entry)
  • LAN (which is our "home" network)
  • GUEST (which is used for guest access)
  • WORKSHOP (which is a lab of computers for development)

GUEST uses the OPT1 interface, and WORKSHOP uses the OPT2 interface.

I want to accomplish the following:

  • Allow LAN/GUEST/WORKSHOP access to the wider internet
  • Isolate the LAN/GUEST/WORKSHOP subnets from each other, so that by default no traffic will flow between them
  • Have the ability to poke holes between the subnets for specific purposes

For years, I assumed that this was easy to set up. For years, I have been doing it wrong, and unless you are careful you might do so as well.

Terminology

Let's define the following aliases:

  • all_subnets is a pfSense network alias that includes the subnets for LAN, GUEST, and WORKSHOP

Let's define the following isolation exceptions:

  • All hosts on LAN/WORKSHOP (and GUEST, by convention) should be able to SSH into nethack_hosts on GUEST (using TCP port 22)
  • All hosts on LAN should be able to access an NTP server ntp_host on WORKSHOP (which uses UDP port 123)

Let's assume that "subnet" is a synonym for the pfSense concept of "interface" (even though that is not strictly true)

Failed Attempt 1: Defensive Subnets

My first approach was as follows:

  • Each subnet has rules that block incoming traffic from other subnets
  • Exceptions are listed before the block rules
  • The last rule for each subnet is an "Allow everything" rule to permit access to the wider internet

Here is what the rules for LAN would be (in pseudocode, not real pf notation):

  • Pass all from "LAN Subnet" to nethack_hosts on port 22
  • Pass all from "LAN Subnet" to ntp_host on port 123
  • Pass all from "LAN Subnet" to "LAN Subnet"
  • Block all from all_subnets to "LAN Subnet"
  • Pass all from "LAN Subnet" to any

Here are the rules for GUEST:

  • Pass all from "GUEST Subnet" to "GUEST Subnet"
  • Block all from all_subnets to "GUEST Subnet"
  • Pass all from "GUEST Subnet" to any

Here are the rules for WORKSHOP:

  • Pass all from "WORKSHOP Subnet" to nethack_hosts on port 22
  • Pass all from "WORKSHOP Subnet" to "WORKSHOP Subnet"
  • Block all from all_subnets to "WORKSHOP Subnet"
  • Pass all from "WORKSHOP Subnet" to any

I am not actually sure whether the

  • Pass all from "LAN Subnet" to "LAN Subnet"
  • Pass all from "GUEST Subnet" to "GUEST Subnet"
  • Pass all from "WORKSHOP Subnet" to "WORKSHOP Subnet"

are actually necessary (in this attempt or any of the others), but I included them to be safe.

Here are the problems with this approach:

  • Any host on the LAN subnet can pass traffic freely to the GUEST or WORKSHOP networks
  • Any host on the GUEST subnet can pass traffic freely to the WORKSHOP network
  • Hosts on the WORKSHOP network cannot access the nethack_hosts on the GUEST network, despite the firewall rule in the WORKSHOP subnet.

To understand this behaviour, you need to understand what pfSense does behind the scenes in translating rules from the nice GUI into actual pf firewall rules that the underlying FreeBSD system can use. Here is my understanding of how rules are generated:

  • pfSense first makes some magic rules to allow traffic in and out of the firewall
  • Then it converts firewall GUI rules, tab by tab, in the following order: floating rules, WAN, LAN, OPT1, OPT2, OPT3, and so on.
  • Within a tab, it converts rules in order

The quick keyword has an effect here (especially for floating rules) but as far as I can tell it does not help us much.

You need not take this assertion on faith: just SSH into your pfSense box and run the pfctl -s rules or pfctl -vv -s rules commands.

What this means for our example is that all rules on the LAN tab are processed before any rule in the GUEST (aka OPT1) or WORKSHOP (aka OPT2) tabs. In particular, the following rule:

  • Pass all from "LAN Subnet" to any

is executed before

  • Block all from all_subnets to "GUEST Subnet"

or

  • Block all from all_subnets to "WORKSHOP Subnet"

rules. That first rule allows traffic to go from the LAN Subnet to any other subnet, including the GUEST and WORKSHOP ones, so the later rules do not matter. Oops!

Similarly, the rule:

  • Block all from all_subnets to "GUEST Subnet"

in the GUEST tab is processed before

  • Pass all from "WORKSHOP Subnet" to nethack_hosts on port 22

on the WORKSHOP tab, which means that the WORKSHOP subnet will be blocked from nethack_hosts regardless of the second rule.

In retrospect (and once I saw what was going on with pf rule generation) this behaviour made perfect sense. But it is not the behaviour I expected when I first used the GUI interface -- for some reason I thought that pfSense would magically know that I wanted to block traffic between subnets but still allow generalized internet traffic.

It is possible that you could massage this particular example into working with a "defence first" approach, but I kept getting stuck, so I started investigating floating rules.

Semi-Failed Attempt 2: Floating Rules

pfSense version 2.0 introduced the idea of "floating rules" -- rules that can apply to multiple interfaces, and which would be processed before any of the interface-specific tabs. I thought I could use this to poke holes in the isolated subnets (which would solve the problem of WORKSHOP getting access to nethack_hosts above).

The real problem with this approach is preventing LAN from accessing the other subnets willy-nilly. I came up with something that kind of worked, but it effectively required all rules to be in the "Floating" tab:

  • First, put in specific exceptions to subnet isolation
  • Then isolate the subnets
  • Then allow all traffic to pass

The ruleset would look something like this:

  • Pass all from all_subnets to nethack_hosts on port 22
  • Pass all from "LAN Subnet" to ntp_host on port 123

  • Pass all from "LAN Subnet" to "LAN Subnet"

  • Pass all from "GUEST Subnet" to "GUEST Subnet"
  • Pass all from "WORKSHOP Subnet" to "WORKSHOP Subnet"

  • Block all from all_subnets to all_subnets

  • Pass all from all_subnets to any

(I simplified things a little by using all_subnets as shorthand for individual subnets in a few places.)

This looks workable for this example, but it has a real disadvantage: effectively, all rules have to be in the "Floating" tab, which means the other tabs do not get used. This is not bad for a simple example, but it gets quite messy (and therefore quite error-prone) when you have seven interfaces with many unique exceptions per interface. Making matters worse, the tabular display for floating rules does not indicate the set of interfaces to which a rule applies -- if I have a rule that should ONLY apply to the GUEST interface, I cannot see this until I click into the rule.

One wildcard with this approach is the "quick" keyword. I never figured out how to use it properly in this context (although I have used it in traffic shaping, to put traffic into queues).

Working (?) Attempt 3: Polite Subnets

Here is the approach that ended up working for me. No doubt it is the approach that most people come up with from the beginning, but I is dumb:

  • All subnets initiate traffic to other subnets that poke holes in the subnet isolation
  • Then all subnets prohibit their own traffic from entering other subnets
  • Then the subnets allow traffic to the broader internet

This takes the opposite approach from Attempt 1: instead of subnets defending themselves from unwanted traffic from other interfaces, they depend on other interfaces prohibiting unwanted traffic from escaping to them. From a security standpoint this seems insane -- very few security scenarios assume that other actors (in this case, other subnets) are benevolent. But in our case all sysadmins control the behaviour of all subnets, and so should be able to coordinate the actions of those subnets together. If you somehow had a situation where different sysadmins had access to only the firewall rules of their own subnet (and could not be trusted to behave well towards other subnets) then this approach would fail.

With this approach, here is what the rules look like for LAN:

  • Pass all from "LAN Subnet" to nethack_hosts on port 22
  • Pass all from "LAN Subnet" to ntp_host on port 123
  • Pass all from "LAN Subnet" to "LAN Subnet"
  • Block all from "LAN Subnet" to all_subnets
  • Pass all from "LAN Subnet" to any

Here are the rules for GUEST:

  • Pass all from "GUEST Subnet" to "GUEST Subnet"
  • Block all from "GUEST Subnet" to all_subnets
  • Pass all from "GUEST Subnet" to any

Here are the rules for WORKSHOP:

  • Pass all from "WORKSHOP Subnet" to nethack_hosts on port 22
  • Pass all from "WORKSHOP Subnet" to "WORKSHOP Subnet"
  • Block all from "WORKSHOP Subnet" to all_subnets
  • Pass all from "WORKSHOP Subnet" to any

It looks almost the same as Attempt 1, except that rules like:

  • Block all from all_subnets to "LAN Subnet"

are switched to

  • Block all from "LAN Subnet" to all_subnets

This approach is weird to my brain, but (as far as I can tell) it works. Unlike Attempt 2 it allows you to use the firewall GUI tabs in a helpful way. It also scales well to many, many subnets, just by use of the all_subnets alias.
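
If you want to see the ordering concretely, here is a rough sketch of what the LAN tab translates to in pf notation. This is heavily simplified -- the rules pfSense actually generates carry labels and other options, and the interface and alias names here are made up -- but the shape is the same:

# LAN tab, in the order pfSense generates it; interface tab rules are
# "quick", so the first match wins
pass in quick on $LAN proto tcp from $LAN_subnet to <nethack_hosts> port 22 keep state
pass in quick on $LAN proto udp from $LAN_subnet to <ntp_host> port 123 keep state
pass in quick on $LAN from $LAN_subnet to $LAN_subnet keep state
block in quick on $LAN from $LAN_subnet to <all_subnets>
pass in quick on $LAN from $LAN_subnet to any keep state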

It could very well be that my approach is STILL all wrong. Maybe it is catastrophically wrong! I feel stupid even posting this on the Internet, given that I am far from a pfSense expert (and I expect pfSense experts are scoffing at me now). But I believe this gets me a lot closer to the behaviour that I want than my earlier approaches, so I thought I would share.

Composed December 25th, 2013

Tabbed PuTTY Windows

Many people who need to use SSH from Windows have used PuTTY, the excellent, lightweight, and free client by Simon Tatham. It does most things right, but one feature it lacks is an effective way to connect to multiple hosts without cluttering up your screen with terminal windows. If you are connecting to a single host, then screen is your friend, but when you are connecting to multiple hosts all running screen then life gets awkward.

For years I have been looking for a tabbed version of PuTTY. It appears that there will be no official support for this feature, so there are some third party products floating around that promise to put PuTTY windows in tabs. For a while I was using a program called SuperPuTTY which kind of worked, but which was crashy and unreliable.

The best answer I have found so far is a program called ConEmu. Supposedly this program has the magical ability to stuff many different kinds of programs in tabs, but I have only used it for PuTTY sessions. The setup ended up being easy but I found it ill-documented, so I am documenting it here.

  • Download ConEmu and PuTTY. Say that PuTTY is downloaded to C:\Program Files\PuTTY\
  • Create a new task
  • Give the task a name ("PuTTY" is fine)
  • Do not enter any parameters
  • In the "Commands" window, enter:

    "C:\Program Files\PuTTY\putty.exe" -new_console

  • Save the new task

Now you will be able to start new "PuTTY" tasks and they will appear in tabs.

Composed December 24th, 2013

Why Libraries Still Matter

In the age of the Internet, it is tempting to argue that public libraries no longer matter. University libraries still serve a clear purpose -- they archive old materials that are not on the internet (yet), but which might prove interesting some day. But as I learned to my horror a few years ago, public libraries continually throw out old books to make room for new ones.

As time goes on more and more books are published simultaneously on paper and in (locked-down, natch) electronic versions. Over the next few years I expect that paper copies will become rarer and rarer. Then what purposes will libraries serve?

  • Libraries will continue to provide no-cost reading material for their users. This is important even for rich people, because making material available for free means that people can explore unfamiliar topics cheaply.

  • Libraries provide Internet access for those of us who do not have (or cannot afford) Internet at home.

  • Libraries provide physical meeting spaces which are noncommercial, and which do not carry the expectation of buying coffee or magazines in order to use them.

  • Libraries provide programming -- talks and movie nights -- that can appeal to the broader community.

The common thread through these purposes is that public libraries are helpful to poor people who have few other alternatives. When I was unemployed and/or depressed, I used the public library extensively. I treated it as a destination, a way to structure my days. The alternative would have been sitting at home all day every day, or wandering idly with no destination. After reading a zine by another depressed person, I realized just how helpful it is to have a noncommercial space where people can go, if only to sit for a while and/or check their email.

Unfortunately, it is not sufficient for libraries to serve the poor. Libraries work because they are popular. If the popularity of libraries diminishes because rich people stay away and download reading materials in the comforts of their own homes, then libraries will become less and less relevant to the general public, and they could die. It does not appear that this is happening yet; even under construction our central library is bustling and well-used. But I worry.

Composed December 17th, 2013

Identity

The first incarnation of this website was called "Demons in My Head", which was a play on the UNIX concept of daemon processes, and a comment on the inconsistency of my opinions. I believe in contradictory things and act in contradictory ways, and I thought that by mapping those contradictions out I would discover the underlying coherence. That has not happened. As I grow old my beliefs are growing more inconsistent (and incoherent), not less.

I do not remember whether my website's title was intended to plagiarise Walt Whitman's famous quotation, or whether I was ripping him off unconsciously. But the quotation certainly resonates with me:

Do I contradict myself? Very well then I contradict myself, (I am large, I contain multitudes.)

I am certainly large (and certain bathroom scales will attest that I am getting larger), and I am certainly full of contradictions. Sometimes this is a problem, such as my endless selfish hypocrisy: espousing some virtue in public to look good, and then behaving inconsistently with that virtue whenever it is convenient for me. I lie. I cheat. I steal. I preach compassion and practice cruelty. The list goes on and on.

But I do not think I am alone; most of us behave differently in different contexts. That is a form of inconsistency too, and we treat it as appropriate. I will talk about different things, use different language, and make different jokes depending on whether I am at work, drunk at a bar on Saturday night, participating in an internet forum, or writing a blog post. The aspects of personality that I reveal in these different contexts may be different. In some sense these are different identities, even if one uses the same identifier ("Paul Nijjar") for all of them.

Sometimes I even find myself switching identities within a single context. This can be catastrophic: losing my temper 5% of the time means that people are afraid of me for the other 95%. Sometimes it is helpful: demonstrating more energy and enthusiasm when lecturing to 100 students than when tutoring a struggling student one on one during office hours. Some people find these shifts jarring, but they are all representations (perhaps equally false representations) of the bag of contradictions that is my self.

Personally, I feel much more comfortable compartmentalizing my selves rather than integrating them. I try to keep my work life separate from my personal life separate from my online life, and I fragment my online life to suit different audiences. In general, I dislike it when these compartments bleed into each other, because then I have to manage potentially conflicting aspects of my projected identity for different audiences. That is emotionally and mentally taxing, and I am not sure it is either helpful or necessary.

This is how Facebook and Google's "use your actual name" policies get things wrong. They both want you to use your "real name" for everything, and then layer on some promises that different compartments ("circles" in Google's terminology) will not bleed into each other. Their public reasoning for this is that attaching all of one's online actions to their "real names" means that people won't troll; they will not choose anonymous identities that are used to harm others. Reducing trolling (and online harassment in general) is a worthy goal, but I oppose their approach of associating identities to "real names" and I think their security promises are nonsense.

One problem with "real names" is the "drunk pictures on Facebook" issue. In some contexts of our lives it is fine to go to parties and get drunk. In other contexts (such as job searches) it is frowned upon. Now when people find a job they have to hide whatever identities are not socially acceptable in the most lucrative (read: employment) context. That, in turn, means that there are honest and valuable aspects of personality that people do not share. There are situations in which drunken partying may affect work performance, but there are many cases in which it does not. Grant Fuhr was a great goalie for the Edmonton Oilers even during the period he was using cocaine. From what I understand, he did not let his actions in one sphere affect the other. One can argue that there are no circumstances under which one should use cocaine recreationally (especially given the societal damage of producing that recreational cocaine); but one can also argue that what people do in their off-work hours should remain independent of what they do on the job. Furthermore, people can do good work on the job even if they hold some identity traits that are offputting.

Secondly, using "real names" constrains our abilities to explore our identities and make mistakes online. Every mistake might be documented for all time, so we limit ourselves to the bland, super-optimistic, no-personality personas you see on LinkedIn and "professional" Twitter accounts. As Google and Facebook get their claws deeper into our identities, our abilities to express ourselves openly (and thus to make mistakes, and thus to learn valuable and difficult lessons, and thus to grow) is constrained.

Believe it or not, I curate the online identity I associate with my "real name". There are other identities I have online in which I express myself more freely; every time those identities are linked to my "real name", I lose that sense of play and freedom. And certainly I have made mistakes (a lot of mistakes) associated with my "real name". I have also made mistakes (a lot of mistakes) associated with my unreal names, and in those domains I only have to pay the consequences within that limited context, not for every aspect of my life ever. Unfortunately for me, as my online (and offline) identities bleed into each other, I find myself more and more reluctant to express anything.

One reason you all think my writings online are so ridiculously revealing is that I have made a decision not to censor myself solely for the sake of a job. I despise the paranoia involved with job searches. I despise the way that employers search and search for that one hint that a potential employee will be a bad fit (which just means they hire somebody who has a more polished persona). When participating in hiring, I have behaved in this despicable way as well: I always spend a disproportionate amount of energy wondering what "the catch" associated with potential candidates might be. You know what? There is always a catch, and I have a profound talent for missing it. Even when job candidates reveal hints of "the catch" during the job application process, our hiring teams miss the importance of those hints every single time. And you know what? People often turn out to be okay (and good workers) despite having catches.

Any potential employer should know that I come with lots and lots of catches, because I am a deeply broken person. In fact, I come with more catches than most job candidates -- but wherever I have been hired, people have seen that I got the job done despite (or sometimes because of) these quirks. The dance of employment searches is broken and phony, and I do not want to reward this broken system at the expense of censoring the things I would otherwise be willing to release to the public. If that means I am never employed again, then so be it. I have no dependents, so the consequences are minor.

The second aspect of "real name" policies that irks me is the way our feudal overlords make security promises that they are not capable of keeping. Take, for example, Google+, with its concept of "circles". The promise is that anything you reveal to a circle will only be known to that circle (and to Google, which will helpfully track this information and use it to separate you from your money more effectively). But -- as I have learned the hard way -- this is nonsense. Loose lips sink ships. The people in your circles do not sign non-disclosure agreements; nothing but social convention stops them from passing along information you reveal to them. Sometimes these revelations are not even intended to be malicious. Sometimes people in your circle may not realize that you want the information limited to members of that circle. Sometimes people in your circle get into internet fights with you, and judiciously use copy and paste (or screenshots) to make you look bad.

Incidentally, people who are shoulder-surfing (or just sharing computers with) the members of your circles do not sign non-disclosure agreements either. Neither do the script kiddies who pwn the computers and smartphones belonging to members of your circles. The promise that information revealed to your circle will stay within your circle is a lie. It is a platitude intended to get you comfortable revealing all kinds of intimate details on their platform, developing brand loyalty and giving Google more interesting information to track about you.

Facebook is no better. Neither is Twitter. Neither is a personal self-hosted blog. Neither are meatspace conversations. Anything you reveal about yourself -- online or offline -- can be spread and shared and ultimately used against you. The only defence is to be aware of this, and to share only those things that you are willing to stand behind.

That is why when I use the internet with my "real name", I make my content public. This has caused real problems for others in the past -- they interacted with me using handles they wanted to keep limited -- and I have caused a lot of unnecessary suffering as a result (and if those parties are reading this, then please know that I am sorry). But I find keeping secrets -- especially other people's secrets -- too difficult.

And that, in my mind, is the solution to the problem of trolling. Forcing people to use "real names" is an overly-constrained solution to the problem. We do not need our many identities to be tied back to our "real names". Rather, the identifier (aka "handle") that we use for each identity should be consistent. If I go by the handle "Hairy Fatso" in a particular context, then I should be willing to stand behind what "Hairy Fatso" says, and I should be willing to use "Hairy Fatso" consistently enough to develop a reputation, so that other people who interact with that identity know who they are dealing with and what to expect. Perhaps it should be expensive to create these new identities, but not as expensive as tying our every action to our "real name".

My strategy of compartmentalizing different online and offline identities is becoming more and more ineffective, because online and offline tracking is getting better and better, and computers are getting better and better at associating different identities. This is going to have consequences, both for me (as I either become blander, or withdraw from posting anything of consequence on the web) and for society as a whole (especially if we expect our politicians and employees to have histories that are squeaky-clean). I do not know how things will shake out, but I worry. I have gained so much benefit from compartmentalized identities over the years. Having a place to hang out, vent, swear, and make inappropriate jokes before audiences of people I will never interact with in meatspace has been such a blessing. I'm going to miss that.

Composed November 7th, 2013 Tags:

Losing My Religion (again)

It appears that I am losing some of my faith and enthusiasm for free software, and I didn't even realize that I was religious in this way.

I never intended to become a free software zealot, and I do not self-identify as one. I am not a software developer, and I do not contribute that much to the movement, and I make my living as a Windows administrator. But many people see me as "that KWLUG guy" or "that Linux guy", and there is truth to those accusations.

I probably would not have gotten into Linux if it had not been for UNIX, and I would not have gotten into UNIX if it had not been for my university lab environment. But once I bought my first computer in 1998, I knew that I wanted to be running Linux on it, and I decided to go with Debian because I had read that it was "political", and I had aspirations to be an activist. It seems that I make all major life decisions without thinking very much, and this was no exception. But boy howdy did Free Software change my life.

Over the years, I bought into the free software ideology -- or at least those parts of the ideology that were personally beneficial to me. I appreciated the low price of my operating system, I appreciated that I had access to the same powerful software used "professionally", I appreciated that so much good information existed about my software (thank you, public mailing lists; thank you, bug trackers), I appreciated the verbose debugging messages much free software offered (contrasted against the black box of proprietary software at the time), I appreciated being able to configure my systems to suit my eccentricities (thank you, xscreensaver backgrounds), and I appreciated the good feelings I got when I contributed a bug report that led to some software improvement. But all of this appreciation was pragmatic, not ideological -- or so I thought.

Earlier this year, I was ingesting my mandated dose of government-funded messaging (namely, TVO The Agenda podcasts) when I listened to an interview with Jaron Lanier. As far as I can tell, Lanier's argument is that giving away things without getting paid for them is immoral. It is immoral to make posts on Facebook without being paid for them; it is immoral to give away our demographic data; and it is immoral (although well-intentioned) to participate in the free software movement. The argument goes like this:

  • Income inequality is a problem: a few people are getting very rich, and the middle class is getting destroyed

  • At the same time, middle-class people contribute all kinds of value to the Internet, in the form of demographic information, pictures, social networking posts, and free software.

  • All that valuable information is sucked up by organizations with giant computers. Those giant computers use that data to make lots of money for themselves, without giving money to those who actually create the value. At best they offer users gratis services ("Join Facebook! It's free!")

  • Therefore, we should demand to be paid (somehow) for the value other people get from our work. If (hypothetically) you read this blog post and (even more hypothetically) you gain some insight as a result, then you should be paying me (somehow).

  • Therefore, giving away stuff for free is immoral, because it is bankrupting the middle class and making those who have the most giant computers inordinately powerful.

It is an interesting argument, and I can't shake the feeling that Lanier's analysis is more right than wrong. What really distinguishes Lanier's argument from other sob stories about the immorality of free software is the giant computer effect: our intention in giving away our efforts was to make them accessible to the world, but the ones who take the most advantage of this are the powerful, not the poor.

Although Lanier focuses more on user-generated data and less on free software, I think the free software movement is culpable for this state of affairs in many ways:

  • The terrible and irritating ambiguity of the word "free" makes everybody think that software should have no cost, and has created the expectation that on the Internet nobody should have to pay for anything. So when these giant companies offer their services, we as consumers demand that they be gratis, and we do not look closely at the strings that are attached.

  • After understanding the free software definition, the first thing people want to know is "How does anybody get paid?" It is true that nothing in the free software definitions explicitly prohibits paying for software (and in fact putting restrictions on commercial use makes the software nonfree, at least in the Debian world). But it is also true that as soon as one person purchases some free software, they are permitted to redistribute that software to everybody else for zero cost. So it is very difficult for people who actually put value into the product (namely software developers) to make a living, which is why Joey Hess is living on poverty wages despite being an amazing programmer who has contributed enormous amounts of effort to free software.

    People have discovered a few monetization schemes that are widely accepted in the free software movement, sustainable for the developers, and somewhat profitable. Sysadmins writing software is one; Drupal web developers writing modules is another; Red Hat's business model is a third. But these examples are few and far between, and it is not even clear whether they can be sustained for long (for example: nobody wants to work on Drupal core modules, because customers for websites won't fund that work).

    The end result is that we keep good programmers poor, which means that they stop developing free software and do other things. It feels to me that Lanier's criticism addresses the heart of this conundrum: if free software programmers are contributing value to society (and they clearly are), then maybe they should be making money, but they aren't.

    Free software really isn't about freedom for software. Software isn't people; it is weird to give it "rights" and "freedoms" when we deny such rights to other non-people, such as animals. Rather, free software is mostly about giving the users of free software a lot of rights, while preventing the creators of software from imposing limitations on those programs. That has further established a culture of entitlement that keeps programmers poor.

  • Many of the organizations (if not most) that run the giant computers that suck up everybody's data are built on free software (although few are free software themselves). Google uses Linux and Python extensively; Facebook is written in PHP; webservers everywhere use the LAMP stack or free software competitors like Nginx. Would these giant computers be as profitable if the barriers to creating that infrastructure were not so low?

I do not think that Lanier has all of the answers. His solution to the problem is basically unionization, which requires a solidarity and class consciousness that I do not see happening (and which I am not even sure is desirable). Also I think that imposing fees on the usage of works currently given away for free will slow down innovation considerably. His proposed testbed for micropayments is the 3D printing universe -- basically, he hopes that nobody will give away design specifications for 3D objects without getting paid for them. Fair enough, but he chose a pretty bad example. We might have had widespread 3D printing twenty years ago if it had not been for patents. Patents are not the same as micropayments, but I think the principles are similar.

The big surprise for me when listening to Lanier's interview was my emotional reaction. I felt angry and defensive when he criticized the culture of sharing and the free software movement, even though intellectually I could see the value of what he was saying. I've been through that wringer before with environmentalism, and it's a pretty bad sign. It means my heart is aligned with one side of a debate and my brain is aligned with the other side. Now I get to live with the uncomfortable (often paralyzing) tensions, perhaps for the rest of my life. I felt somewhat uncomfortable in promoting Software Freedom Day this year; will I even want to promote such events in the future?

Composed October 14th, 2013 Tags:

Lessons From Software Freedom Day 2013

Another Software Freedom Day has come and gone. Once again I helped organize a local event. Software Freedom Day is supposed to be a day when we step back and reflect upon the impact the free software movement has on us. Through the process of organizing and promoting the event, I ended up reflecting more than I had anticipated; my head is swirling with a haphazard collection of ideas and thoughts, often contradictory. Here is my awkward braindump.

Reflection 0: I found it difficult to convey why anybody (not already immersed in free software ideology) would want to attend our event. Once upon a time, internet connections were slow, and Linux distros were hard to install. After a brief golden age, Linux distros have become difficult to install again (thanks to proprietary video drivers and stripped-out binary blob firmware) but installing a Linux (or GNU/Linux) distro on one's computer is far less compelling than it used to be -- it is much easier to try new distros in virtual machines. In addition, internet connections are no longer slow, so installing software via CD or DVD (or even USB key) seems sillier and sillier. For our event, we still made software DVDs, and people still took them, but I think there is a good chance that not one person will ever use them for their ostensible purpose.

The hardware landscape is changing. The idea of "installing software" on "a personal computer" is becoming quaint and irrelevant. People's computers come in the form of mobile devices; their software is in the form of apps that are installable in a few clicks from centralized curated repositories (thank you, apt-get -- I guess?), and their data is in The Cloud. My gut tells me that the free software movement may have a role to play in this brave new future, but I do not think we are doing so now. Certainly this year's SFD celebrations did not address this need.

That leaves the talks. I think a few people came out because they were interested in specific talks, but over the years I have noticed that we offer the same kinds of talk year after year: a talk on the meaning of Software Freedom Day, a few talks on multimedia software, and something about "Linux for Windows users" (which we neglected to give this year). Is this sufficiently compelling? Does it accomplish our implicit goal of reaching new users and those unfamiliar with the free software movement? I do not feel that it does.

Reflection 1: I think I am losing my religion. This reflection got awfully long, so I have uncharacteristically split it off into its own entry: Losing My Religion (again)

Reflection 2: We pay insufficient attention to free culture, and that is harming its adoption. This year, I tried to stir the pot by broadening the celebration of free software to include the free culture movement -- cultural works that can be shared and modified with fewer restrictions than the "All Rights Reserved" policy of most of the cultural works we consume. I thought that a sampler of such works would be more relevant to people than CDs of software that is better downloaded off the Internet anyway. Cultural works can be useful even if they are a hundred years old.

I attempted to solicit selections for this free culture sampler from the KWLUG membership, which is composed of a high proportion of free software enthusiasts. I hoped that we all had small collections of free culture works on our hard drives, and that we could pull the best of it together into a nice sampler for others.

It turned out that there were not many suggestions to be had from our Linux-loving community. Many of us extolled the virtues of Creative Commons and public domain culture, but not many of us actually consume it (or if we consume it, we are reluctant to share our suggestions). I received some of the same old selections everybody names: animated movies produced by the Blender3D foundation, and Sita Sings the Blues. Although these are fine cultural works, they are the same examples of free culture that everybody cites. Is there no additional free culture out there? Is none of it any good? Or are we just not paying any attention to it? In response to my query a few people started conducting CC-licence searches on the internet for suggestions, but I was surprised that we did not have favourites already.

In the end, we restricted the free culture sampler for this year to audio; the result was the 2013 Free Culture Sampler, which you can download from http://kwsfd.castletech.ca . We ended up putting this sampler together from the following sources:

  • Suggestions from two people
  • Some favourite tracks from my preexisting collection of free culture works (mostly from http://musopen.org and http://jamendo.com, with a few chiptunes thrown in)
  • A lot of listening over the course of a month for additional tracks

In the end, I think the sampler turned out pretty okay, despite my scrambling at the last minute to find tracks. It takes work to wade through oceans of boring music, but curation services like Jamendo radios make this easier. (I only wish there were a similar system for archive.org -- there is so much music in the Live Music Archive, and I have no idea what to listen to.) I think the problem here is demand, not supply -- we do not spend enough attention consuming the free culture content that is available, which removes one of the few incentives people have to release their content under Creative Commons licences.

I am a part of the problem. I spend far too much attention watching or listening to (non-free, often pirated) content on Youtube, and not enough attention exploring the free culture world. I will never be a saint about this, but I have the ability to spend more attention consuming free culture works, and then giving back by leaving reviews and recommendations (and payments?) for the stuff I like.

Reflection 3: I do not like organizing events, and I am not very good at it. This is hardly a new reflection -- I have actively resisted leadership and organizational roles for years now (but not that successfully -- I get roped in again and again). When Software Freedom Day season came up again this year, I felt some dread. However, in the end I resolved not to be resentful, even if I ended up doing the bulk of the organizing. I do not know whether I ended up doing the bulk of the organizing or not -- certainly I did not end up doing everything myself, which was a relief. But those last two weeks of organizing definitely felt more like work and less like fun.

In some ways I did more promotion for the event than I had in previous years, at least in terms of getting my voice heard. I wrote an article for the Good Work News, I did an interview/rant for the Kwartzlab podcast, and on the day of the event I was interviewed by two students at Conestoga College. I also distributed posters and filled out forms for event calendars that I suspected nobody reads. But none of this was very effective.

Perhaps the most effective promotion I engaged in was with the Bits and Bytes computer club. I presented a 10-minute spiel to the group, and in return a few members showed up to the event (and found some presenters for their group, which is fantastic).

The event itself was not a disaster. We had hiccups, and certain things like the Installfest did not work out well at all, but the event happened and some people showed up. Is that good enough? It does not feel good enough. The majority of attendees were already free software aficionados, and if our goal had been to spread the word about free software to so-called "regular people", then we did not succeed.

Is proselytization the goal? I think it was my goal. But without more effective outreach, we won't reach that goal -- and given how weak I am at promotion, there is no way we will reach it if promotion is left in my hands. So is putting in the effort to organize Software Freedom Day really worth it? I ask myself that question every year, and almost every year I push through my reluctance and help with organizing anyways. It may be time to reevaluate that decision.

Composed October 9th, 2013 Tags:

Outdoor Computing

I am writing this from Victoria Park. About half an hour ago some guy in a pickup truck drove by. As he drove by, he yelled "Get off of Facebook!"

I was already in an irritable mood, and this did not make me feel much better. I get drive-by comments on a semi-regular basis (usually when my beard is longer and I look more Muslim). I do not like that these cowards are not willing to stand behind their comments, which is why they drive off after yelling through the windows of their cars. All the same, this joker had a point and didn't have a point.

He was factually wrong; I was not on Facebook. Technically I was not even on the Internet. It would be nice to say that I was working hard on some work-related project, but instead I was reading blogs cached on my hard drive. In my mind that is not very different from being on Facebook, so the heckler gets half-marks at least. Instead of sitting outside staring at a computer screen, I could have been staring at ducks or reading a paper book or meditating or juggling, none of which I have been doing enough of since obtaining a laptop with a working battery. When I am outside, it is far better to be fully present than to stare at a screen.

On the other hand, he was still very wrong. Assume that his criticism had been factually correct. Staring at a computer screen may be a less worthwhile activity than juggling, but sitting outside looking at Facebook is a better activity than sitting inside looking at Facebook. We spend so many of our entertainment hours alone indoors. I think that overall we are better off when more people are outdoors enjoying themselves, even if that form of enjoyment consists of being on a computer (or cellphone) and staring at a screen. There is a continuum here: sitting down in a public park and surfing does not harm anybody, while surfing and walking can be dangerous (and surfing when you are supposed to be paying attention to another event, such as a child's soccer game or a presentation, is kind of obnoxious). If I had not gone outside with my laptop, I would have stayed inside with it, or I would have stared at different screens at work. I am not sure that would have been preferable to anybody. Even the coward in the pickup truck got some entertainment because I was outside.

Composed August 19th, 2013 Tags:

RSS Oops

One problem with RSS is that feeds can change abruptly. I noticed some problems with the feeds on this site:

  • The articles pointed to files on my local computer, not the public version of this website.

  • The names for specific feeds ("Blather", "Tech", etc) were messed up.

  • The RSS feeds appear as plaintext files when you visit the URLs in Firefox.

I think I fixed the first problem, but in doing so I made a lot of duplicates appear in the feed list. Unfortunately, I may repeat this infraction multiple times in fixing the issue. Please be patient with me.
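
For anyone curious how one might catch this sort of regression before readers do, here is a rough sketch of an automated check. It is only an illustration: the feed URLs below are placeholders rather than my real feed locations, and it tests only the two symptoms that are easy to check mechanically (item links that point at local paths, and a Content-Type header that makes Firefox render the feed as plain text).

    #!/usr/bin/env python3
    # Rough sanity check for RSS feeds. The feed URLs below are placeholders,
    # not the real feed locations for this site.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEEDS = [
        "http://example.com/feeds/blather.rss",
        "http://example.com/feeds/tech.rss",
    ]

    for url in FEEDS:
        with urllib.request.urlopen(url) as response:
            content_type = response.headers.get("Content-Type", "")
            body = response.read()

        # Firefox tends to show the feed as plain text when the server
        # labels it text/plain instead of an XML type.
        if "xml" not in content_type:
            print("%s: suspicious Content-Type: %s" % (url, content_type))

        # Item links should point at the public site, not at a local filesystem.
        root = ET.fromstring(body)
        for link in root.iter("link"):
            href = (link.text or "").strip()
            if href.startswith("file://") or "localhost" in href:
                print("%s: link looks local: %s" % (url, href))

Running something like this after each site rebuild would at least catch regressions before a reader does; it does nothing for the mislabelled feed names, which I would still have to eyeball.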

If you find other problems with the feed please let me know about them. I am not promising to fix them, but at least I will be aware of my crimes.

Composed August 10th, 2013 Tags: