When I first tried to add my two test nodes in the oVirt manager, the first node went in fine, but adding the second node failed every time. This is what I saw in /var/log/ovirt-engine/engine.log:
2013-03-02 00:27:20,825 INFO [org.ovirt.engine.core.bll.VdsInstaller] (NioProcessor-14) Installation of vmhost01.test.lenio.be. Received message: 00020003-0004-0005-0006-000700080009. FYI. (Stage: Get the unique vds id)
2013-03-02 00:27:20,844 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (NioProcessor-14) Installation of vmhost01.test.lenio.be. Host with unique id 00020003-0004-0005-0006-000700080009 is already present in system. (Stage: Get the unique vds id)
2013-03-02 00:27:20,862 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-18) [52d9b236] Get unique id of vmhost01.test.lenio.be failed, may be due to empty node id (Stage: Get the unique vds id)
2013-03-02 00:27:20,862 ERROR [org.ovirt.engine.core.bll.VdsInstaller] (pool-3-thread-18) [52d9b236] Installation of vmhost01.test.lenio.be. Operation failure. (Stage: Get the unique vds id)
So the problem is that these Tyan motherboards in my test nodes all carry the same UUID in their DMI data, which is where oVirt reads it from. Shame on you, Tyan. Is it really too hard to understand "universally unique identifier"? Sigh. Fortunately there is an easy workaround: put a different UUID in /etc/vdsm/vdsm.id on each node, and oVirt will read that one instead of the one in the DMI data.
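If you want to confirm that the firmware really is to blame, dmidecode (assuming it is installed on your nodes) can print the value oVirt is looking at; run this on both nodes and compare:

dmidecode -s system-uuid

Both of my nodes should report the same value here, matching the id from the engine log above. Here's an easy way to generate a valid UUID and save it in that file: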
mkdir -p /etc/vdsm; cat /proc/sys/kernel/random/uuid > /etc/vdsm/vdsm.id
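The kernel hands out a fresh random (version 4) UUID every time you read /proc/sys/kernel/random/uuid, so running this once per node guarantees distinct ids, and the -p keeps mkdir quiet if the directory already exists. If you prefer, uuidgen from util-linux does the same job:

uuidgen > /etc/vdsm/vdsm.id

Afterwards, double-check that the two nodes really ended up with different values (cat /etc/vdsm/vdsm.id on each) before retrying the install from the manager.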