64-bit Windows naming fun
At OSNews.com the article Windows x64 Watch List describes some of the key differences between 64-bit and 32-bit Windows. It’s pretty interesting, and mostly pretty reasonable. But this one caught my eye:
There are now separate system file sections for both 32-bit and 64-bit code
Windows x64’s architecture keeps all 32-bit system files in a directory named “C:\WINDOWS\SysWOW64”, and 64-bit system files are placed in the oddly-named “C:\WINDOWS\system32” directory. For most applications, this doesn’t matter, as Windows will redirect all 32-bit file accesses to “SysWOW64” automatically to avoid conflicts.
However, anyone (like us system admins) who depends on VBScripts to accomplish tasks may have to reference “SysWOW64” files directly, since redirection doesn’t apply as smoothly.
I’ve been using 64-bit Linux since 2005 and found there to be a bit of a learning curve, with distributors taking different approaches to supporting 32-bit libraries and applications on a 64-bit operating system.
The Debian Etch approach is to treat the 64-bit architecture as “normal”, for lack of a better word, with 64-bit libraries residing in /lib and /usr/lib as always. It’s recommended to …
redhat windows
Filesystem I/O: what we presented
As mentioned last week, Gabrielle Roth and I presented results from tests run in the new Postgres Performance Lab. Our slides are available on Slideshare.
We tested eight core assumptions about filesystem I/O performance and presented the results to a room of filesystem hackers and a few database specialists. Some important things to remember about our tests: we were testing I/O only—no tuning had been done on the hardware, filesystem defaults or for Postgres—and we did not take reliability into account at all. Tuning the database and filesystem defaults will be done for our next round of tests.
Filesystems we tested were ext2, ext3 (with or without data journaling), xfs, jfs, and reiserfs.
Briefly, here are our assumptions, and the results we presented:
- RAID5 is the worst choice for a database. Our tests confirmed this, as expected.
- LVM incurs too much overhead to use. Our test showed that for sequential or random reads on RAID0, LVM doesn’t incur much more overhead than hardware or software RAID.
- Software RAID is slower. Same result as LVM for sequential or random reads.
- Turning off ‘atime’ is a big performance gain. We didn’t see a big improvement, but you do …
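To give a flavor of the kind of raw sequential-read measurement involved, here is a minimal sketch (not our actual test harness; the file size and paths are arbitrary, and a real benchmark would use a file much larger than RAM and drop the page cache between runs):

```shell
# Create a test file, then time a sequential read of it with dd.
# For real numbers: use a file larger than RAM, flush caches between
# runs (as root: echo 3 > /proc/sys/vm/drop_caches), and repeat the
# measurement on each filesystem under test.
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=16 conv=fsync 2>/dev/null

# Sequential read; dd reports elapsed time and throughput on stderr
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$testfile"
```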
conference postgres
Postfix, ~/.forward, and SELinux on RHEL 5
For the record, and maybe to save confusion for someone else who runs into this:
On Red Hat Enterprise Linux 5 with SELinux in enforcing mode, Postfix cannot read ~/.forward files by default. It’s probably not hard to fix – perhaps the .forward files just need to have the right SELinux context set – but we decided to just use /etc/aliases in this case.
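For reference, the /etc/aliases workaround looks like this (the user and address are made up; this sketch works on a throwaway temp file so it’s harmless to run, whereas on the real system you’d edit /etc/aliases as root and rebuild the alias database):

```shell
# Append a forwarding entry to the aliases file, then rebuild the
# alias database so Postfix picks it up.
aliases=$(mktemp)    # stands in for /etc/aliases here
echo 'jdoe: jdoe@example.com' >> "$aliases"

# On the real system (as root):
#   newaliases    # rebuilds the aliases database for Postfix

grep '^jdoe:' "$aliases"    # → jdoe: jdoe@example.com
rm -f "$aliases"
```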
redhat
Competence, Change Agents, Software, and Music
Seth Godin wrote an interesting article on the subject of competence; it resonated with me personally for a variety of reasons.
The article uses musicians, and Bob Dylan in particular, as an example of how “competence” can pale in comparison to “incompetence” in terms of the quality of the results. In particular, it asserts that competent musicians consistently play the music in question the same way, and suggests that the lack of such consistency could be thought of as incompetence. Bob Dylan thus becomes an incompetent musician who is nevertheless really great due to the emotional content of his performances; beyond that, he is a “change agent” because of his brilliance. And that’s the crux of the article: the “incompetent” people are the change agents who advance the state of the art, while the “competent” people resist change and thus hold things back.
As a fairly serious practicing musician myself, I’ll assert in response: this is not an accurate representation of musicianship, and the issue extends to the core of the article’s argument.
Playing music the same way every time is not an indication of competence. It’s an indicator of insufficient imagination and demonstrates a …
community
Red Hat acquires Qumranet
I missed the news a week and a half ago that Red Hat has acquired Qumranet, makers of the Linux KVM virtualization software. They say they’ll be focusing on KVM for their virtualization offerings in future versions of Red Hat Enterprise Linux, though still supporting Xen for the lifespan of RHEL 5 at least. (KVM is already in Fedora.)
Given that Ubuntu also chose KVM as their primary virtualization technology a while back, this should mean even easier use of KVM all around, perhaps making it the default choice on Linux. (Ubuntu supports other virtualization as well.)
Also, something helpful to note for RHEL virtualization users: Red Hat Network entitlements for up to 4 Xen guests carry no extra charge if entitled the right way.
In even older Red Hat news, Dag Wieers wrote about Red Hat lengthening its support lifespan for RHEL by one year for RHEL 4 and 5.
That means RHEL 5 (and thus also CentOS 5) will have full support until March 2011, new media releases until March 2012, and security updates until March 2014. And RHEL 4, despite its aging software stack, will receive security updates until February 2012!
That’s very helpful in making it easier to choose the time of migration …
redhat
UTOSC 2008 wrap-up
Using Vyatta to Replace Cisco Gear
At the 2008 Utah Open Source Conference I attended an interesting presentation by Tristan Rhodes about the Vyatta open source networking software. Vyatta’s software is designed to replace Cisco appliances of many sorts: WAN routers, firewalls, IDSes, VPNs, and load balancers. It runs on Debian GNU/Linux, on commodity hardware or virtualized.
Key selling points are the price/performance benefit vs. Cisco (prominently noted in Vyatta’s marketing materials) and the IOS-style command-line management interface for experienced Cisco network administrators. Regular Linux interfaces are available too, though Tristan wasn’t positive that writes made through them would stick in all cases, as he’s mostly used the native Linux tools for monitoring and reading, not writing.
Pretty cool stuff, and Vyatta sells pre-built appliances and support too. The Vyatta reps were handing out live CDs, but I haven’t had a chance to try it out yet. Presentation details are here.
Google App Engine 101
Jonathan Ellis did a presentation and then hands-on workshop on Google App Engine, which I found especially useful because he’s a longtime Python and Postgres user. His talk on SQLAlchemy last …
conference
Machine virtualization on the Linux desktop
In the past I’ve used virtualization mostly in server environments: Xen as a sysadmin, and VMware and Virtuozzo as a user. They have worked well enough. When there’ve been problems they’ve mostly been traceable to network configuration trouble.
Lately I’ve been playing with virtualization on the desktop, specifically on Ubuntu desktops, using Xen, kvm, and VirtualBox. Here are a few notes.
Xen: Requires hardware virtualization support for full virtualization, and paravirtualization is of course only for certain types of guests. It feels a little heavier on resource usage, but I haven’t tried to move beyond lame anecdote to confirm that.
kvm: Rumored not to be ready for prime time, but used from libvirt with virt-manager it has been very nice for me. It requires hardware virtualization support. One major problem with kvm on Ubuntu 8.04 is with the CD/DVD driver when using RHEL/CentOS guests; to work around that, I used the net install instead, which worked fine.
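A quick way to check whether a machine has the hardware support that kvm (and full virtualization under Xen) requires is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags:

```shell
# Count CPU cores advertising hardware virtualization extensions.
# vmx = Intel VT-x, svm = AMD-V; a count of 0 means kvm won't work
# on this machine (or the feature is disabled in the BIOS).
count=$(grep -E -c 'vmx|svm' /proc/cpuinfo || true)
echo "cores with hardware virtualization: $count"
```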
VirtualBox: This was for me the simplest of all for desktop stuff. I’ve used both the OSE (Open Source Edition) in Ubuntu and Sun’s cost-free but proprietary package on Windows Vista. The current release of VirtualBox only …
environment hosting
Know your tools under the hood
Git supports many workflows; one common model that we use here at End Point is having a shared central bare repository that all developers clone from. When changes are made, the developer pushes the commit to the central repository, and other developers see the relevant changes on subsequent pulls.
We ran into an issue today where after a commit/push cycle, suddenly pulls from the shared repository were broken for downstream developers. It turns out that one of the commits had been created by root and pushed to the shared repository. This worked fine for the push, as root had read-write privileges to the filesystem; however, it meant that the loose objects the commit created were in turn owned by root as well. Filesystem permissions on the loose objects and the updated refs/heads/branch prevented other users from reading the appropriate files, and hence broke the pull behavior downstream.
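To illustrate (the paths and user/group names are examples, not our client’s actual setup): root-owned files can be found and fixed with standard tools, and git’s core.sharedRepository setting helps keep future pushes from leaving behind files other developers can’t read:

```shell
# In the shared bare repository, find and fix anything root owns
# (run as root or via sudo on the real server):
#   find /srv/git/project.git -user root
#   chown -R git:git /srv/git/project.git    # example user/group

# Going forward, have git create group-readable/writable objects
# and refs automatically (demonstrated here on a throwaway repo):
repo=$(mktemp -d)
git init --bare -q "$repo"
git -C "$repo" config core.sharedRepository group
git -C "$repo" config core.sharedRepository    # prints "group"
rm -rf "$repo"
```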
Trying to debug this purely from the messages reported by the tool itself would have resulted in more downtime at a critical time in the client’s release cycle.
There are a couple of morals here:
- Don’t do anything as root that doesn’t need root privileges. :-)
- Understanding how git works at a low level enabled a …
openafs git