
Mirrored Drive Failure #2 – Win Server 2003

Posted on | July 19, 2011 | Comments Off on Mirrored Drive Failure #2 – Win Server 2003

I am on a roll with failed mirrored drives lately. I am currently fixing a friend's failed mirror set on a Windows 2003 server, after last week's in-house Ubuntu software RAID 1 failure.


The phone call

The system volume on the primary drive failed due to read errors. After that they could not get the OS to load no matter how many times they restarted; selecting the default 'windows 2003' boot option just put them into a boot loop. This is when they involved me by way of a phone call.

The system is several years old, running Server 2003 Standard on a single 3 GHz P4 with 2 GB of RAM and an Asus motherboard in an Antec case. The mirrored drives are 80 GB SATA drives (good drives in their day) in a software RAID 1 mirror.

Talking with them on the phone, I asked them to choose the 'mirror – secondary plex' boot option, but this just locked up the system part way into the boot. I was afraid that whatever had corrupted the primary system dynamic volume had been copied to the mirror drive, so I made arrangements to stop by after finishing the job I was on.


First Look

Looking at the server 'in situ' I noticed that the box was infested with dust bunnies, but didn't notice any unusual noises, though it is located adjacent to several other fairly loud pieces of equipment.

So I shut it down and took it out to give it a quick cleaning. Just enough to remove the bunnies and visually inspect the interior of the box for stuck fans, loose cables, etc.

Reassembled and attempted a 'default', don't-touch-anything boot. No luck – the BIOS failed to recognize the primary boot drive.

Shut down again and checked all the drive cables – removed and reinstalled.



Took a minute to check with my friend to see if he still had the disk image that we had made of his system volume for insurance – and he did. We also took the time to check the backups from the night before of all the data. Looked good as well. It always feels good at a time like this to know that if all else fails we can restore the system volume from the image file and then restore all of the data from the backups.


Boot the system

The BIOS now recognized the drive but would not boot to the default option. Rebooted and chose the 'secondary plex' option.

Booted into Windows Server, logged in, and ran compmgmt.msc /s from the Run command.

In disk management I took a look at the drives. The data volume was re-syncing and the system volume was online with errors – failed redundancy status. Hmm.

I waited for the re-syncing volume to finish (because I am paranoid) and took the opportunity to take a look at the Event Viewer – Run->eventvwr.msc


Check Event Viewer

I read through the errors and decided that the drive probably should be replaced just to be on the safe side even if we could bring it back on line and repair it.


Don’t remove the Mirror! or even break it – yet.

Do not remove the mirror. That will wipe out the shadow drive. This is bad.

Do not break the mirror either.  If you break the mirror now (while both drives are in the computer) the second (shadow) drive dynamic volumes will be assigned new drive letters – this will mess with the ability to boot off of that drive at a later date or possibly even rebuilding the raid. I suspect this has something to do with the LDM (Logical Disk Manager) database used by dynamic disks to track volume types, drive letters, etc. If anyone knows the answer to this, let me know.

It is also related to the fact that, as far as the registry is concerned, the paging file is now located on a drive that no longer exists…ouch. This can cause a vicious cycle of 'enter your login name and password' prompts because there is no virtual memory.

For some more info on this check out

Another support doc you might want to look at if you inadvertently break your mirror before you remove the bad drive –


Why is it so complicated? I know, stop whining and get back to work.

For more information on Dynamic disks you can check out 

There is an interesting paragraph there (well, more than one, but this is the one relevant to our conversation):


Missing dynamic disks

If Disk Management shows a missing dynamic disk, this means that a dynamic disk that was attached to the system cannot be located. Because every dynamic disk in the system knows about every other dynamic disk, this “missing” disk is shown in Disk Management. Do not delete the missing disk’s volumes or select the Remove Disk option in Disk Management unless you intentionally removed the physical disk from the system and you do not intend to ever reattach it. This is important because after you delete the disk and volume records from the remaining dynamic disk’s LDM database, you may not be able to import the missing disk and bring it back online on the same system after you reattach it.


Remove problem drive

After the data volume finished its job syncing I shut down the server and removed the problem hard drive. I then installed the replacement hard drive and rebooted.

After logging in I returned to Disk Management, removed the failed (missing) disk, and then converted the newly installed drive to dynamic by right-clicking the disk and selecting Convert to Dynamic Disk.


Re-enable the mirror

Once the new drive is dynamic as opposed to basic (a fast process), right-click each volume on the old drive and select Add Mirror for each one, pointing it at the new disk.

Now the syncing will take a while. Be patient. Go have lunch, dinner, a cup of coffee or if you prefer, a beer. You deserve it.

Recovering a broken mirrored drive – ubuntu 9.04

Posted on | June 29, 2011 | Comments Off on Recovering a broken mirrored drive – ubuntu 9.04

The other day I got an email from mdadm, a process running on some of our servers that keeps an eye on the raid array.


This is an automatically generated mail message from mdadm running on woo
A DegradedArray event had been detected on md device /dev/md0.
Faithfully yours, etc.
P.S. The /proc/mdstat file currently contains the following:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0]
303805120 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
7815488 blocks [2/1] [U_]
unused devices: <none>


This was not a happy event – looks like one of the two drives in the array was no longer working.

  • This is a system that’s a couple of years old running software RAID 1 (mirrored) on 320 GB SATA drives.
  • OS is Ubuntu 9.04 running web services.
  • The failed drive is no longer readable by the system.
  • There are only two partitions on the drive : System and Swap.

Easiest thing to do here is to replace the drive (first making a new backup).

I just ran a quick check on the raid status to confirm the email I had received.

cat /proc/mdstat (maybe you will need to sudo this command)

This is my output



Sun Jun 26:02:27 PM:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda2[0]
303805120 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
7815488 blocks [2/1] [U_]
unused devices: <none>
Sun Jun 26:02:28 PM:~$
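If you want a script to do the squinting for you, the degraded state can be spotted mechanically: any '_' inside the status brackets means a missing member. This is just a sketch of mine (not part of mdadm) that parses a saved sample of the mdstat text shown above:

```shell
# Flag degraded md arrays. In /proc/mdstat, "[U_]" means one of two
# members is up and one is missing. Here we feed awk a saved sample of
# the output shown above; on a live box, run the awk against /proc/mdstat.
mdstat_sample='md1 : active raid1 sda2[0]
      303805120 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
      7815488 blocks [2/1] [U_]'

echo "$mdstat_sample" | awk '
  /^md/        { array = $1 }                 # remember the array name
  /\[U*_+U*\]/ { print array " degraded" }    # status brackets contain "_"
'
```

Run against the sample this prints `md1 degraded` and `md0 degraded`; wired into cron it makes a cheap second opinion alongside mdadm's own mail.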

Now I had to pull the bad drive and replace it.

1- Sometimes you can find out which drive is bad by looking in dmesg for read failures on the device.

dmesg | grep ata (or whatever is appropriate for you)
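The lines you are hunting for look something like the sample below. The messages here are illustrative text typical of a failing SATA drive, not copied from this server; the grep pattern is the same one you would run against the real dmesg output:

```shell
# Sample dmesg lines typical of a dying SATA drive (illustrative text).
# On a live system you would run: dmesg | grep -iE 'error|failed'
dmesg_sample='ata2.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
ata2.00: failed command: READ FPDMA QUEUED
end_request: I/O error, dev sdb, sector 12345'

echo "$dmesg_sample" | grep -iE 'error|failed'
```

Here the complaints name ata2 and sdb, which (with a glance at the boot messages) tells you which physical port the bad drive hangs off.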

2- Shut down and unplug the suspect drive – reboot to confirm you have the correct device unplugged.

3- Plug in the new drive (best if it is unpartitioned/unformatted), reboot, and watch the boot messages to see if the drive shows up. If you don't see it go by on the screen (I always get attracted to something else and forget to watch carefully), grep the output of dmesg once the box is booted to find the new device.

4- You can also check (and get important info for the next steps) by running

sudo fdisk -l


Sun Jun 26:02:30 PM:~$ sudo fdisk -l
[sudo] password for ken:
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x59b728b7
Device       Boot      Start         End      Blocks      Id  System
/dev/sda1                   1                 973       7815591        fd  Linux raid autodetect
/dev/sda2   *           974              38795   303805215   fd  Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn’t contain a valid partition table
Disk /dev/md0: 8003 MB, 8003059712 bytes
2 heads, 4 sectors/track, 1953872 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn’t contain a valid partition table
Disk /dev/md1: 311.0 GB, 311096442880 bytes
2 heads, 4 sectors/track, 75951280 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn’t contain a valid partition table
Sun Jun 26:02:35 PM:~$

Note that my working drive is sda with a couple of partitions.

The device sdb doesn’t have a valid partition table. Your mileage (and drive designations) will vary.

5- Now to get the raid back on track we need to copy the existing partition table from the functioning raid drive to the newly installed drive.

(Dangerous stuff here – I have never tried it but would almost bet money that getting the drives backwards would not be ‘good’)

So here is my output for sudo sfdisk -l


Sun Jun 26:02:49 PM:~$ sudo sfdisk -l
Disk /dev/sda: 38913 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sda1          0+    972     973-   7815591   fd  Linux raid autodetect
/dev/sda2   *    973   38794   37822  303805215   fd  Linux raid autodetect
/dev/sda3          0       -       0          0    0  Empty
/dev/sda4          0       -       0          0    0  Empty
Disk /dev/sdb: 60801 cylinders, 255 heads, 63 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
No partitions found
Disk /dev/md0: 1953872 cylinders, 2 heads, 4 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/md0: unrecognized partition table type
No partitions found
Disk /dev/md1: 75951280 cylinders, 2 heads, 4 sectors/track
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/md1: unrecognized partition table type
No partitions found
Sun Jun 26:02:49 PM:~$

6- Check out the man page for sfdisk and read through some of the stuff there.

We are going to use the -d option, which dumps the partition information for one device, and pipe that into sfdisk on the other device – hopefully using the partition information gleaned from the good drive to recreate the same partitions on the new drive… (fingers crossed here)


sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb


So again we are just piping the output of the first sfdisk command into the input of the second.

If you want to see the output of the first part of the command before you commit to destroying whatever is on the target of the second sfdisk command, you can run just that portion and see what you get.

sudo sfdisk -d /dev/sda (again use the appropriate drive designation here for your system – not mine)

You should get some output that sort of makes sense to you…



Sun Jun 26:02:49 PM:~$ sudo sfdisk -d /dev/sda
# partition table of /dev/sda
unit: sectors
/dev/sda1 : start=       63, size= 15631182, Id=fd
/dev/sda2 : start= 15631245, size=607610430, Id=fd, bootable
/dev/sda3 : start=        0, size=        0, Id= 0
/dev/sda4 : start=        0, size=        0, Id= 0
Sun Jun 26:02:57 PM:~$
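Before replaying a dump like that onto the new drive, a paranoid extra check is to confirm that every non-empty partition in it carries the Linux raid autodetect type (Id=fd). This little pipeline is my own sketch (not part of sfdisk), run here against the dump text above:

```shell
# Count real (non-zero-size) partitions in the dump that are typed as
# Linux raid autodetect (Id=fd). For this mirror we expect exactly 2.
dump='/dev/sda1 : start=       63, size= 15631182, Id=fd
/dev/sda2 : start= 15631245, size=607610430, Id=fd, bootable
/dev/sda3 : start=        0, size=        0, Id= 0
/dev/sda4 : start=        0, size=        0, Id= 0'

echo "$dump" | grep -v 'size= *0,' | grep -c 'Id=fd'
```

On a live system you would pipe `sudo sfdisk -d /dev/sda` through the same two greps and expect the count to match your real partition layout.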

If you point this command at the newly installed drive you should get an error (unless it has an existing partition table that sfdisk recognizes).

Here is mine again



Sun Jun 26:02:57 PM:~$ sudo sfdisk -d /dev/sdb
sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
No partitions found
Sun Jun 26:02:59 PM:~$



All of this double checking makes me feel a little better about continuing…



Sun Jun 26:02:59 PM:~$ sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
Checking that no-one is using this disk right now …
Disk /dev/sdb: 60801 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdb1          0+    972     973-   7815591   fd  Linux raid autodetect
/dev/sdb2   *    973   38794   37822  303805215   fd  Linux raid autodetect
/dev/sdb3          0       -       0          0    0  Empty
/dev/sdb4          0       -       0          0    0  Empty
New situation:
Units = sectors of 512 bytes, counting from 0
Device Boot    Start       End   #sectors  Id  System
/dev/sdb1            63  15631244   15631182  fd  Linux raid autodetect
/dev/sdb2   *  15631245 623241674  607610430  fd  Linux raid autodetect
/dev/sdb3             0         -          0   0  Empty
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table
Re-reading the partition table …
If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
Sun Jun 26:03:01 PM:~$

Whew… I always get a little butterfly thing no matter how many drives I break…

(P.S. Right at the moment I am listening to a shredder from the early 90s, Gary Hoey. No Joe Satriani, but still fun sometimes.)


7- So now I want to take another quick look at all of the partitions with

sudo fdisk -l


Sun Jun 26:03:02 PM:~$ sudo fdisk -l
Disk /dev/sda: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x59b728b7
Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         973     7815591   fd  Linux raid autodetect
/dev/sda2   *         974       38795   303805215   fd  Linux raid autodetect
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         973     7815591   fd  Linux raid autodetect
/dev/sdb2   *         974       38795   303805215   fd  Linux raid autodetect
Disk /dev/md0: 8003 MB, 8003059712 bytes
2 heads, 4 sectors/track, 1953872 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn’t contain a valid partition table
Disk /dev/md1: 311.0 GB, 311096442880 bytes
2 heads, 4 sectors/track, 75951280 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md1 doesn’t contain a valid partition table


8- Nice – there is the second drive with appropriate partitions, but it is still not a happy RAID camper.

Again and again: use the correct device names for your particular system configuration.


In the case of our example system we will use these commands

This is for the swap partition

sudo mdadm --add /dev/md0 /dev/sdb1


Sun Jun 26:03:13 PM:~$ sudo mdadm --add /dev/md0 /dev/sdb1
mdadm: added /dev/sdb1

and this is for the system partition

sudo mdadm --add /dev/md1 /dev/sdb2


Sun Jun 26:03:13 PM:~$ sudo mdadm --add /dev/md1 /dev/sdb2
mdadm: added /dev/sdb2
Now that we have done that, let's see what is going on by looking at mdstat again.


Sun Jun 26:03:16 PM:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb2[2] sda2[0]
303805120 blocks [2/1] [U_]
[>………………..]  recovery =  0.9% (2881792/303805120) finish=80.2min speed=62493K/sec
md0 : active raid1 sdb1[1] sda1[0]
7815488 blocks [2/2] [UU]
unused devices: <none>
Sun Jun 26:03:16 PM:~$


Awesome stuff. Look, the computer machine is working to bring the newly added device up to snuff. Love it.


You can also get additional information using

sudo mdadm --detail /dev/md1

sudo mdadm --detail /dev/md0


Sun Jun 26:04:02 PM:~$ sudo mdadm --detail /dev/md0
Version : 00.90
Creation Time : Sat Sep 19 19:59:31 2009
Raid Level : raid1
Array Size : 7815488 (7.45 GiB 8.00 GB)
Used Dev Size : 7815488 (7.45 GiB 8.00 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun Jun 26 15:15:35 2011
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : c6fe5bf1:47145c2e:8f53a666:581a3da1
Events : 0.604
Number   Major   Minor   RaidDevice State
0       8        1        0      active sync   /dev/sda1
1       8       17        1      active sync   /dev/sdb1
Sun Jun 26:04:04 PM:~$
Sun Jun 26:03:30 PM:~$ sudo mdadm --detail /dev/md1
Version : 00.90
Creation Time : Sat Sep 19 19:59:48 2009
Raid Level : raid1
Array Size : 303805120 (289.73 GiB 311.10 GB)
Used Dev Size : 303805120 (289.73 GiB 311.10 GB)
Raid Devices : 2
Total Devices : 2
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Sun Jun 26 16:02:40 2011
State : active, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 55% complete
UUID : a004ba5a:4a61bca9:f20d5c50:35d36b51
Events : 0.4423477
Number   Major   Minor   RaidDevice State
0       8        2        0      active sync   /dev/sda2
2       8       18        1      spare rebuilding   /dev/sdb2

9- Now go get a cup of coffee, tea, water… whatever you enjoy. This recovery will take a bit of time to complete. I am going to have some leftover pasta from last night's dinner.

10- Finally, we want to install GRUB onto the new drive

sudo grub-install /dev/md1

Good luck, ken.





Verizon MIFI 2200 vs the VPN

Posted on | May 24, 2011 | Comments Off on Verizon MIFI 2200 vs the VPN

Recently I was troubleshooting what I initially felt was a SonicWall VPN problem.

The client/user tunnels to the VPN endpoint through a wireless connection with a Verizon MIFI 2200. The tunnel comes up just fine but after a few minutes his Remote Desktop Connection to a box on the other end of the VPN drops.

The time between drops varies a bit but hovers around the 5-8 minute mark. He is still able to browse the internet, check email, etc. using local tools without any undue trouble. The VPN client tool shows that it is connected, but sadly no traffic moves across the tunnel after the MIFI becomes 'dormant': we cannot ping the other side of the tunnel or run a tracert. The computer is a relatively new laptop running Windows 7 that otherwise seems happy and well adjusted.

Using the laptop's ethernet adaptor to attach to a LAN at a different location, connecting to the internet through a dedicated line, there were no issues with the VPN dropping the connection.

Using wireless through a D-Link router connecting to the internet through a cable modem showed no tendency to drop the VPN connection.

I tested the MIFI using a different notebook with a fresh Windows 7 Professional installation, loaded SonicWall's 32-bit Global VPN Client, and configured the connection.

  • Attaching the computer through its ethernet adaptor to a DSL modem internet connection, the VPN was solid with no drops.
  • Tethered to the MIFI 2200 with a USB cable, the VPN was also solid with no drops.
  • Connecting to the MIFI 2200 wirelessly, the VPN would build a tunnel and work fine for a while – the time varied, but within 5 minutes or less the connection would drop. We could not print to the remote printer or use any other network devices across the VPN, though the connection to the internet (browsing, email, twitter) was still OK.

I checked the firmware and it was pretty old (early 2009, v125.008), so I upgraded to the latest from Verizon (v167.029, dated October 2010) and thought that certainly the problem would be fixed.

Sadly this did not resolve the issue: still, after a few minutes (or less), the connection would remain dormant long enough that the VPN would fail, though it still showed as connected in the VPN client tool. Browsing and other TCP services worked fine. I ran a ping process to see if that would prevent the dropping of the VPN connection, but the tunnel collapsed all around me anyway.

The tunnel can be disconnected and reconnected using the VPN client, but terminal services, etc. must be reinitiated. Bad.

Poked around in the interwebs and discovered that many people were having the same type of problem – inability to hold a reliable connection when attaching to the MIFI 2200 over wireless.

At this point in time it seems there is no real fix outside of using the MIFI in tethered mode. (May 2011)


Default Password Policy Win Server 2008

Posted on | May 15, 2011 | Comments Off on Default Password Policy Win Server 2008

Changing Server 2008 Password Policy

I have been having some fun working on a couple of Windows Server 2008 R2 installations. Learning a lot of new things every day and this is something that I thought might be of interest.

In one installation the folks that were paying the bill did not like the default password policies that are now standard in Windows Server. They felt that in their small and close environment there was no real need for the stricter requirements being enforced by the new default policies. They were actually pretty lax in their password demands.

I did not and still do not agree with them but upon their insistence I had to figure out how to bypass this need for stronger passwords.

As a quick reminder, Microsoft Server 2008 R2 now insists that your password meet certain 'complexity' requirements. This is a good thing, as long as you can remember your password and don't write it somewhere obvious. Briefly:

Account Policies/Password Policy

Policy  ::  Settings

  • Enforce Password history  ::  24 passwords remembered
  • Maximum password age ::  42 days
  • Minimum password age ::  1 day
  • Minimum password length ::  7 characters
  • Password must meet complexity requirements ::  Enabled
  • Store passwords using reversible encryption ::  Disabled

Some of these settings can be adjusted at the user level in Active Directory Users and Computers. Modifying or shutting off the complexity policy requirement is not accessible there.

Here is an explanation of the password complexity requirement option.

Password must meet complexity requirements

This security setting determines whether passwords must meet complexity requirements.

If this policy is enabled, passwords must meet the following minimum requirements:

  • Not contain the user’s account name or parts of the user’s full name that exceed two consecutive characters
  • Be at least six characters in length
  • Contain characters from three of the following four categories:
      • English uppercase characters (A through Z)
      • English lowercase characters (a through z)
      • Base 10 digits (0 through 9)
      • Non-alphabetic characters (for example, !, $, #, %)

Complexity requirements are enforced when passwords are changed or created.


Enabled on domain controllers.
Disabled on stand-alone servers.

Note: By default, member computers follow the configuration of their domain controllers.
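To make the three-of-four rule concrete, here is a rough sketch of the category check in shell. This is my own illustration, obviously not how Windows implements it, and it skips the account-name test from the quoted requirements:

```shell
# Rough sketch of the complexity rule quoted above: at least 6 characters
# and characters from 3 of the 4 categories. (Skips the account-name test.)
meets_complexity() {
  local pw=$1 cats=0
  [[ $pw =~ [A-Z] ]] && cats=$((cats+1))         # uppercase
  [[ $pw =~ [a-z] ]] && cats=$((cats+1))         # lowercase
  [[ $pw =~ [0-9] ]] && cats=$((cats+1))         # digits
  [[ $pw =~ [^A-Za-z0-9] ]] && cats=$((cats+1))  # non-alphanumeric
  [[ ${#pw} -ge 6 && $cats -ge 3 ]]
}

meets_complexity 'Passw0rd' && echo pass || echo fail  # 3 categories -> pass
meets_complexity 'password' && echo pass || echo fail  # 1 category  -> fail
```

Handy for sanity-checking candidate passwords before walking across the office to try them on the domain.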

There are probably several ways of working around this – but I chose the simple (not always the best no matter what anyone says) way. Please don’t laugh. I thought this was simple…

  1. Open Group Policy Management Editor
  2. Start->Run->gpme.msc
  3. New Window – Browse for a Group Policy Object
  4. Under the Domains/OUs tab select Default Domain Policy -> OK
  5. New Window – Group Policy Management Editor
  6. Default Domain Policy [servername.domain.extension]
  7. Expand Computer Configuration
  8. Expand Policies
  9. Expand Windows Settings
  10. Expand Security Settings
  11. Expand Account Policies
  12. Select Password Policy
  13. Now in the right pane :
  14. Right-click “Password must meet complexity requirements”
  15. Select Properties
  16. New Window – Select Security Policy Setting tab
  17. Select Disabled->OK

There is probably an easier, faster, or better way to do this. Let me know.


Trouble Shooting Port Forwarding for HP Media Server

Posted on | February 21, 2011 | Comments Off on Trouble Shooting Port Forwarding for HP Media Server

HP Media Smart Server – troubleshooting Remote Access to media services.
Recently I had the opportunity to spend some time troubleshooting a problem with remotely accessing an HP Media Smart Server. A friend of mine had been beating his head against the wall for a while trying to get access to his box from locations outside of his home network. He had things working well at home but could not seem to crack the code of opening up his router to allow access to the appropriate ports from other locations.

The UPnP option was not doing the trick for him.

I did a little research before I went to visit and found there is a fair amount of support available for these boxes. Sadly, some of the recommendations were not very helpful, so I thought I would take a little time and jot down the steps we took to resolve his problems. Maybe you will get a laugh out of it, or maybe you will cry. Hard to say.

The first thing that I did was run some port scanning software from my office pointed at his personal “” hostname to check and see what ports might be open. None. Well, I thought that was interesting because my friend had explained to me that he had set up his router to port forward everything that was necessary.

In hindsight I should have known then and there what the problem was (I think this is why they say hindsight is 20/20). But, I am not always that bright and thought that maybe there was a problem with my software or maybe his dynamic DNS wasn’t working or, you know, something else was wrong.

The next thing was to check with the ISP supplying him internet connectivity at his home to find out their policies on running services with a residential account. As expected they did have some policies in place that prevented remote access to ports used for web services, outgoing mail services, NetBT ports, things like that. But not port 443, or 3389, or 4125. These are the ports that will need to be open for us to get set up and going.
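A quick way to see whether those three ports answer from outside (once you are on a connection outside the house) is bash's /dev/tcp pseudo-device, with no extra tools needed. The hostname below is a placeholder for whatever your dynamic DNS name is:

```shell
# Probe the three ports the media server's remote features need. A zero
# exit from the redirect means something accepted the TCP connection.
host=myserver.example.com   # placeholder: substitute your own hostname
for port in 443 3389 4125; do
  if timeout 3 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "port $port open"
  else
    echo "port $port closed or filtered"
  fi
done
```

This is only a reachability check, not a substitute for a fuller port scan, but it is handy from a phone-tethered laptop.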

At this point I asked him if he had more than one router at home; this is not uncommon anymore with VOIP-specific routers and the like being placed into service. If you happen to have two routers, one connected to the other, that configuration can create a special set of routing problems.

One solution for avoiding dual-router (double NAT) problems is to set up a DMZ on your non-VOIP router and then hook the VOIP router into your primary router all by itself as the DMZ device. If your ISP allows multiple devices to be hooked to your DSL/cable modem, another simple solution is to put a switch directly behind the modem and hook the two routers separately to two different ports on the switch. If you are using two routers one behind the other for a special reason, make sure they are on different LAN IP ranges.

So, back to the one router approach since that was all he had. I showed up at his home and passed on the wine – which was tough because it was pretty good wine. Sadly, I need all my available brain cells working when I am troubleshooting and a glass of wine will definitely slow me down.

We began by checking that the Media Server web interface was available locally, using the IP address of the server in the browser instead of its name. We also made sure that we had remote desktop access to the server. Both of these worked locally. Good.

The next step was to take a look at DNS resolution for his server host name – that was all as it should be – so we moved on and used ShieldsUP! to check for open ports. Not a one. Hmm.

Popped up the router in a browser, logged in, and took a look there. Double-checked the IP address and ports that were being forwarded to the media server. It all looked good, but still no ports were open from outside the network. I had brought a laptop and tethered it to my Verizon phone for internet access so we could test from outside his local network. Still no go.

Then we decided to change the server IP address from its current 'reserved DHCP' lease to a manually assigned IP address outside of the DHCP range being handed out by the router (a D-Link DIR-825). Some routers just don't like to forward ports to DHCP-assigned IP addresses, even when the directions say they will.

Magically ShieldsUP! now showed the appropriate ports as open and I was able to access his Media Services from my remote laptop. Everyone was happy.

The saddest part was that I had run out of time and had to leave, so I still missed out on the glass of wine.

Perhaps we are asking the wrong questions – Agent Brown

Posted on | September 19, 2010 | Comments Off on Perhaps we are asking the wrong questions – Agent Brown

7 ½ step path to a successful project

There aren’t really seven and one half steps to magically manage successful projects, or ten for that matter, but there are a number of items that will always require careful and diligent attention.

I was working on a project writing a database query interface for a law firm a number of years ago (more years than I would like to specify – when dBase III was king) and during a discussion of the project with a seasoned programmer friend of mine I said, “When I finish coding this I am going to compile it and never touch it again.” My friend laughed.

For a long time.

At that particular moment I didn’t see the joke but after the stakeholders redefined the functionality of the project time after time without any seeming willingness to alter either the timeline or cost allowances I came to see the very deep dark humor in my statement. On so many levels.

Field Marshal Helmuth Carl Bernard von Moltke said, once upon a time in the late 19th century, “No battle plan ever survives contact with the enemy.” Dwight Eisenhower followed up decades later with, “In preparing for battle I have always found that plans are useless, but planning is indispensable.” I am now convinced they were both seasoned project managers who believed deeply in the need to plan, but they also had the benefit of vast experience, allowing them to embrace the fact that a plan is just a way to get started doing something.

The sad truth is I like plans, especially when they help guide me through complex tasks. The caveat is simply that the plans I make are completely ignored by life, requiring that they be constantly updated (read changed) during any meaningful project. In creative, new projects, this process of change has to be an iterative process. I always try to remember that, like a marriage, flexibility within the agreed upon framework is primary. Evolution is life, life is change, impermanence just is. The plan will change, be open to that. Hell, plan for it.

P.S.  The scope of this article does not include arguing about PMP vs. Prince2

The 7.5 fold path to enlightenment.

1. What is the goal? Notice I didn’t say ‘your’ goal because a project’s goal belongs to the stakeholders, you are the catalyst in a complex chemistry project. A big part of your job as the project manager is to help the folks the product is being built for clearly define what the product is. Some people go so far as to say that if you can’t define the goal in a single sentence you’re not likely to ever reach it…

2. Spend some time planning. Self-evident, yes, but we have a tendency to want to 'get going' on projects. Take the time to deconstruct. Reductionism is good when applied properly. Break the big picture down into smaller, absorbable, manageable parts. Include an iterative process of requirements capturing. Gather data; sometimes it will not even seem related to anything meaningful until you have that epiphany next month. Don't ever assume you know what the end-users need.

3. Hold onto the big picture. Don’t forget that the smaller parts are interconnected pieces of a complex system that must share a common goal. Sometimes the evolving subsystems don’t evolve in the direction you need. Gently guide everyone back on track when they have wandered off into the woods.

4. Communicate. With everyone. All the time.

5. Know your team. Have them all read #4. Support them, lead them, help them communicate with each other and you. Trust them and make sure they can trust you. Maybe they really do like pizza?

6. Know your Stakeholders. Who are they, and what is their relationship to you and the project? They will run the gamut from the people that pay the bills through to the people that are using the end result of the project. And since we don’t know everything, we may have to reach outside the team to find ‘experts’ to help in specialized areas like the law or cell phone app design. Listen to all of them carefully. They are the people that are going to decide if you have really completed the project.

7. Test. Test again. Test early, test often. (See #3) Make sure the pieces are fitting together. Evolve the project parts concurrently whenever possible – hopefully avoiding that day when the two halves of the bridge are coming together and you missed the fact there is no place to put the off-ramp.

7.5 Own your project but don’t let it own you. Embrace change and be flexible but remain focused on the goal.

“No battle was ever won according to plan, but no battle was ever won without one.” – Dwight D. Eisenhower.

Ethics in AI and Human Relations

Posted on | August 21, 2010 | Comments Off on Ethics in AI and Human Relations

A Foundation for Ethics in AI and Human Relations

The relationship between humans and ‘intelligent’ machines is becoming increasingly complex as we move forward together into the new age of synthetic organisms, intelligent systems, and personal use of enhanced bio-synthetic replacement parts for our bodies. We will more regularly come face to face with intelligent systems that will make decisions impacting our everyday life without input from us – from the already widely accepted collection of data assembled by thousands of cameras photographing you as you walk and drive in your town, to the seeming science fiction of armed robotic border guards.

There are many ethical issues that need to be addressed now as we rapidly put into place human-guided and unguided intelligent equipment. ‘AI’ systems are now being used to complement human decisions in a wide variety of situations, from searching for data online to medical equipment used in the operating room to data management tools used by Generals in war rooms. More autonomous devices, such as vehicles able to find their own way through the world, are also becoming more widespread. One label for an important part of this field is “Cognitive Computing”.

There is some discussion taking place in philosophical and technological circles about Cognitive Computing, but very little awareness in the general public outside opinions created by movies like “Terminator” and “AI” – and just as importantly, there is little discussion of ethical considerations on a governmental level.

One of the first questions that has to be examined is “Should devices with advanced artificial intelligence be thought of (treated) like any other tool that we have built?” Should they be treated more like farm animals or dogs, or will we need to treat them as intelligent beings? “At what point do we need to consider artificially intelligent machines or synthetic organisms our legal equals?”

That is just one of many questions that will require careful exploration. The interaction between intelligent equipment and humanity will give rise to situations we have never been confronted with before. Many of these events will fall well outside the boundaries of our current legal and ethical environments, and it is wise for us to begin laying the groundwork that will enable the people of the world and their leaders to make well-reasoned and careful decisions. Not decisions that are based on irrational fear, bigotry or just lack of knowledge, but rather decisions based on long-running rational discussions, not just among scientists but philosophers, spiritual leaders, psychologists and people from all walks of life.

It is important to include a wide range of people rather than focusing on one narrower field of study largely because it is the convergence of technologies from many divergent fields, such as biology, nano-tech, structural engineering, computer science, neurobiology, etc., that is enabling the rapid advances in ‘Cognitive Computing’.

To address these many difficult questions rapidly coming our direction, we need to gather together knowledgeable individuals from diverse fields of work and study to help develop a balanced perspective from outside the tech culture. These knowledgeable individuals must include Defense Department personnel, guiding members of corporations that currently build robots, ethicists, philosophers, biologists, roboticists, religious leaders, psychologists, and the doctors utilizing bio-synthetic parts, for starters. The collected information must be presented to ‘the rest of us’ in a way that will attempt to actively involve everyone in the process of reaching these history-changing ethical decisions and guide the future of humanity.

This goal of information dissemination can be accomplished on several fronts.

  1. Writing articles in newspapers, magazines, wikipedia. Electronic Newsletters.
  2. Speaking engagements at schools, business organizations, government bodies, etc.
  3. Web site with information, new content, blog, links.
  4. New Social Networking tools (twitter, facebook, etc.)
  5. Video – youtube, Television.
  6. Photography books.
  7. Art exhibits
  8. Lobbying efforts
  9. Software apps that would help raise awareness for mobile devices and desktop systems.
  10. Toys that help define the relationships between humans and intelligent machines as well as ‘enhanced’ humans.
  11. Children’s books. – “New surveillance cameras don’t even need anyone watching” – Mathematical algorithms embedded in the stores’ new security system pick out sweethearting on their own. There’s no need for a security guard watching banks of video monitors or reviewing hours of grainy footage. – “Robo-Soldier to Patrol South Korean Border” – “Until now, technology allowed these robots to conduct monitoring function[s] only. But [now] our robots can detect suspicious moving objects, literally go after them, and can even fire at them,” said Sang-Il Han, principal research engineer at Samsung Techwin. – National Institute of Standards and Technology Intelligent Systems Division – (James Albus – Senior Fellow at NIST) Albus, who predicts that autonomous vehicles could equal human levels of performance in most areas within 20 years, is the co-inventor of the Real-time Control Systems (RCS) architecture and methodology. – Dr. Eric Eisenstadt – Defense Sciences Office (DSO) – Brain Machine Interface : “Picture a time when humans see in the UV and IR portions of the electromagnetic spectrum, or hear speech on the noisy flight deck of an aircraft carrier; or when soldiers communicate by thought alone. Imagine a time when the human brain has its own wireless modem so that instead of acting on thoughts, warfighters have thoughts that act. Later during DARPATech, you will hear from IPTO about efforts to create intelligent machines.” – IBM has announced it will lead a US government-funded collaboration to make electronic circuits that mimic brains. – Part of a field called “cognitive computing”, the research will bring together neurobiologists, computer and materials scientists and psychologists. – As a first step in its research the project has been granted $4.9m (£3.27m) from US defence agency Darpa.
  – Artificial Tissue – A team of Australian and Korean researchers led by Geoffrey M. Spinks and Seon Jeong Kim has now developed a novel, highly porous, sponge-like material whose mechanical properties closely resemble those of biological soft tissues. As reported in the journal Angewandte Chemie, it consists of a robust network of DNA strands and carbon nanotubes. – From BBC News a headline in 2006. “Robots could one day demand the same citizen’s rights as humans, according to a study by the British government.”

Domain Names – Who’s the boss?

Posted on | March 18, 2010 | Comments Off on Domain Names – Who’s the boss?

Understanding the terms used in registering a domain name will help ensure that you maintain control of your own domain.

ICANN – this is the non-profit organization that currently manages the assignment of names and numbers on the internet in order to ensure that every node (spot) on the network is unique.

To quote from the ICANN web site:

ICANN was formed in 1998. It is a not-for-profit public-benefit corporation with participants from all over the world dedicated to keeping the Internet secure, stable and interoperable. It promotes competition and develops policy on the Internet’s unique identifiers.

To reach another person on the Internet you have to type an address into your computer – a name or a number. That address has to be unique so computers know where to find each other. ICANN coordinates these unique identifiers across the world. Without that coordination we wouldn’t have one global Internet.

ICANN has created the system of Registrars, organizations that register/issue unique domain names. All Registrars must be accredited by ICANN, which maintains a list of these Registrars on its website.

The gTLDs (Generic Top Level Domains) include .aero, .biz, .com, .coop, .info, .museum, .name, .net, .org, and .pro. These are the domains in which we obtain a name. Each of these domain names must be unique.

When you are issued the registration to a domain name, people/entities will be assigned to different domain management ‘roles’. The initial Registrant (the person/entity obtaining the domain name) and the administrative, technical and billing contacts are the people or entities listed on the original Domain Name Registration Agreement that is filed with the Registrar when you actually obtain the domain name.

Who is assigned to perform the duties of these management ‘roles’ is important in terms of who ultimately controls and can change your domain information. When you register your domain name using a third party such as an ISP or web development company (as opposed to going directly to a Registrar’s website and filling out the application yourself), you need to be sure that when that organization assigns people/groups to the different management roles, they are filled by people that you want. Incorrect handling of these roles can be a very painful and/or costly error that you won’t notice until later, when you want to move your website to a new hosting company, move your email services, change your domain DNS information, etc. You want to be sure that your interests are protected by the people assigned to each role.

You are the Registrant – even if you use some other business to fill out the registration form for you. You or your company/organization needs to be listed as the Registrant. The Registrant is the party that ultimately controls the domain name (at least as long as the renewal fees are paid to the Registrar…)

The administrative, technical and billing contacts are people, groups or a ‘role contact’ that represent the Registrant (you) when issues/questions about your domain name arise either with the Registrar or other entity that might need to gather information about your domain name.

A ‘role contact’ is really just a job title by another name. The person or group holding that title can change, but the contact information for that ‘role contact’ will not. An example of a ‘role contact’ would be ‘hostmaster’, ‘webmaster’, ‘domainmaster’ or whatever title you like. You can assign this role contact to the admin, tech or billing fields when filling out your domain name application. It is an excellent method of ensuring that contact continuity is maintained when people move on to a new position in your company.

Administrative Contact
This is the person, group, or ‘role contact’ that will act on behalf of the Registrant in communications with the Registrar. This again should be ‘you’ or someone who can be trusted to represent your interests at all times. They do not need to be technically proficient but must be able to deal with the basic questions that might arise in dealing with the Registrar, stuff like “What is the mailing address, phone number, fax, etc…”

Ensure that this is exactly who you want it to be in the application for registration. You can create a special position in your organization and use that as the Admin contact, or you can assign it to an individual – just be sure that if the individual leaves your employ, the domain information gets updated to reflect the new person with this duty.

Technical Contact
Pretty much what it says. The technical contact manages the name servers of a domain name. In many cases, the technical contact will be a representative of the internet service provider, hosting company, or web development firm that helps you manage your website, email services, etc.

Billing Contact
I think we all know what this is about. The Registrar needs to know who is going to pay for your domain name when renewal time comes up. It is also important that this contact information remains current so that billing information gets to someone that will actually pay for the renewal in a timely manner.

Name Servers
To quote Wikipedia: “Most registrars provide two or more name servers as part of the registration service. However, a registrant may specify its own authoritative name servers to host a domain’s resource records. The registrar’s policies govern the number of servers and the type of server information required.”
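If you do run your own authoritative name servers, the delegation shows up as NS records in the domain’s zone. As a rough illustration only – every name in this BIND-style fragment is hypothetical – a minimal zone might look like this (note the ‘hostmaster’ role contact in the SOA record, tying back to the role-contact idea above):

```
$ORIGIN example.com.
$TTL 86400
@   IN  SOA  ns1.example.com. hostmaster.example.com. (
        2010031801 ; serial
        7200       ; refresh
        3600       ; retry
        1209600    ; expire
        86400 )    ; negative-caching TTL
@   IN  NS   ns1.example.com.
@   IN  NS   ns2.example.com.
```

Whoever controls these NS records controls where the world looks for your website and email, which is exactly why the technical contact role matters.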

The most important thing to take from this is that you want to be sure that these domain management roles are filled appropriately to ensure that you can continue to control and use your domain name.

If the data stored for these roles is not accurate, you should immediately contact the people you used to register the domain name and begin the process of setting things right. This can sometimes take quite a bit of time and might involve faxes, letters, etc. to the domain Registrar. Plan ahead. Use the Whois command/service to check what information is currently in place for your domain.
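As a sketch of what to look for when you do that check, here is a small Python script that pulls the role contacts and name servers out of a saved whois response. The sample record and its field names are hypothetical – real whois output varies from registrar to registrar – so treat this as illustrative only:

```python
import re

# Hypothetical whois output, saved from a lookup service. Real output
# differs by registrar; adjust the field names you search for accordingly.
SAMPLE_WHOIS = """\
Domain Name: EXAMPLE.COM
Registrar: Example Registrar, Inc.
Expiration Date: 2012-03-18
Registrant Name: Your Company LLC
Admin Name: hostmaster
Tech Name: Example Hosting Support
Name Server: NS1.EXAMPLEHOST.NET
Name Server: NS2.EXAMPLEHOST.NET
"""

def parse_whois(text):
    """Collect 'Key: Value' lines; repeated keys (e.g. Name Server) become lists."""
    record = {}
    for line in text.splitlines():
        match = re.match(r"\s*([^:]+):\s*(.+)", line)
        if not match:
            continue
        key, value = match.group(1).strip(), match.group(2).strip()
        record.setdefault(key, []).append(value)
    return record

record = parse_whois(SAMPLE_WHOIS)
# The point of the exercise: confirm the roles point at people you control.
print("Registrant:", record["Registrant Name"][0])
print("Admin contact:", record["Admin Name"][0])
print("Name servers:", ", ".join(record["Name Server"]))
```

If the Registrant or admin contact turns out to be your old web developer rather than you, that is the moment to start the correction process described above, before you actually need to move anything.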

ICANN requires accredited Registrars to provide free public access to the name of the registered domain, its nameservers and registrar, the date the domain was created and when its registration expires, and the contact information for the Registered Name Holder, the technical contact, and the administrative contact. You will generally find a whois service link on your Registrar’s website, or you can use one of the many third-party whois lookup services.

I am happy to help with these services if you are not comfortable with doing this in house.


Social Media and Non Profits

Posted on | December 22, 2009 | Comments Off on Social Media and Non Profits

You hear a lot of buzz about non-profits and social media these days.

Some commentators believe that non-profits are the perfect fit for these emerging technologies, while other experts believe just the opposite: that social media applications will allow advocates to more easily organize independently of non-profits.

A majority (67%) of study respondents (executives at US non-profits) believe that traditional media – including coverage in newspapers, magazines, television and radio – are more effective at supporting fundraising efforts than social media (22%). Further, executives in the nonprofit world are more skeptical about social media’s ability to help them connect with hard-to-reach audiences such as donors (45%), media (39%) and policy makers (31%).

Read more of this article in “Non-Profits Struggle to Prove Social Media’s Value”.

A Dartmouth report shows that

a remarkable 89% of charitable organizations are using some form of social media including blogs, podcasts, message boards, social networking, video blogging and wikis.

and Alexandra Samuel, Ph.D., Harvard, and CEO of Social Signal offers some sound advice on using social media.

1. Engage your audience by speaking to their core concerns.
2. Put your audience in the driver’s seat.
3. Offer a mix of tangible and social benefits.
4. Embrace emergent value propositions.
5. Innovate within the bounds of your core mission.

More of her blog on this subject is available on the Social Signal website.

Though there is evidence that a large portion of the Not-for-Profit community is embracing social media as a way to both expand and deepen their connection to their communities, still, according to executives surveyed by Weber Shandwick and KRC Research, there are many roadblocks to overcome.

More than half (52%) of respondents say they do not have enough staff to manage their current social media outreach and almost two-thirds (64%) report that their organizations do not have social media policies and guidelines in place for employees and board members to engage appropriately online.

Taking these facts into account and accepting that these applications are just tools, in order to be most effective with our time and money we should listen closely to what Mr. Natural says “Use the right tool for the job.”

You should not expect that social media tools will replace talking with constituents face to face, immediately displace your current marketing approaches, or that creating a presence on twitter will generate new donors by the truck load. These are not reasonable expectations. Particularly since these tools are still in their infancy and will evolve over the coming years – as will our use of them.

What you should expect to do, whether an old hand or ‘newbie’, is take some time to study individual social media applications, then identify and organize the tools you want to use by how they fit into your organization’s style of communicating with your current community. Decide which of these applications will lend themselves to being used in campaigns to generate new contacts. Determine whether you have in-house staff or need to reach outside to find assistance with implementation. Define how you will manage the use of these tools by staff, both personally and organizationally. And very importantly, plan carefully how you can best apply these tools within the framework of your very individual vision and social goals while maintaining your identity and integrity.

Luckily you don’t have to reinvent the wheel to accomplish all of this. There are many available Policy and Strategy Handbooks online to read and use in order to craft your very own.

Social Media Governance – Policy Database

Social Media Tips from 10 Corporations UCLA Extension

The Social Media Handbook for Local Red Cross Units and American Red Cross Personal Online Communications Guidelines

South Sound Technology Conference

Posted on | November 22, 2009 | Comments Off on South Sound Technology Conference

November 20, 2009
South Sound Technology Conference

Great conference. Cool venue at University of Washington Tacoma, William W. Philip Hall.

Lots of interesting folks there with some nice panel discussions – if you are in the industry or just interested in finding out more about the local tech scene you should plan on attending next year.

One of the most important messages I took from this conference was that the Technology industry in the Tacoma area (and there is plenty of it with some world class businesses right downtown) needs to get a bit more organized and let the world know that we are here.

For that matter we need to let each other know that we are here! I vote that we pick a restaurant downtown and, a la Green Drinks, get together regularly for a bite and a beer.

  • About

    This website is supported by Ken Lombardi @ analogman consulting.
    phone: 253.two.two.two-7626
    email: ken@analogman'dot'org
    tweet: analogmanorg
