OpenAFS Workshop 2008
This year’s Kerberos and OpenAFS Workshop was very exciting. It was the first I’ve attended since the workshop grew large enough to be held separately from USENIX LISA, and it was encouraging to see that this year’s was the largest ever, with well over 100 attendees and more than 10 countries represented. Jeff Altman of Secure Endpoints did a great job coordinating the workshop, and Kevin Walsh and others at the New Jersey Institute of Technology did a fantastic job hosting it, providing a good venue and great service.
My summary of the workshop is “energy and enthusiasm” as several projects that have been in the development pipeline are starting to bear fruit.
On the technical side, the keynote kicked off the week with a presentation by Alistair Ferguson of Morgan Stanley, who noted that the work on demand-attach file servers has reduced their server restart times from hours to seconds, greatly easing their administrative overhead while making AFS even more highly available.
Of particular technical note, Jeff Altman reported that the Windows client has seen many performance and stability improvements, with major strategic changes being …
conference open-source openafs
RPM --nodeps really disables all dependency logic
Today, for the second time, I was surprised by something non-obvious in RPM’s dependency handling; the first time was so many years ago that I had completely forgotten.
When testing out an RPM install without having all the required dependencies installed on the system, it’s natural to do:
rpm -ivh $package --nodeps
The --nodeps option allows RPM to continue installing despite the fact that I’m missing a handful of packages that $package depends on. This shouldn’t be done as a matter of course, but for a quick test it’s fine. So far so good.
However, I found out through a confusing experience that --nodeps not only allows otherwise-fatal dependency errors to be skipped, but also disables RPM’s entire dependency tracking system, including installation ordering!
I was working with three RPMs: a base interchange package and two ancillary interchange-* packages that depend on the base package:
interchange-5.6.0-1.x86_64.rpm
interchange-standard-5.6.0-1.x86_64.rpm
interchange-standard-demo-5.6.0-1.x86_64.rpm
Then when I installed them all at once:
rpm -ivh interchange-*.rpm --nodeps
I expected interchange to be installed first, followed by either of the interchange-standard-* packages that depend …
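Since --nodeps also skips installation ordering, one workaround when order still matters is to install in explicit stages. A sketch, using the same package file names as above:

```shell
# Install the base package first, then the ancillary packages, so the
# base package's files are on disk before anything that depends on them:
rpm -ivh interchange-5.6.0-1.x86_64.rpm --nodeps
rpm -ivh interchange-standard-*.rpm --nodeps
```

This only restores ordering by hand; it does nothing about the other dependency checks --nodeps turns off.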
interchange redhat
Listing installed RPMs by vendor
The other day I wanted to see a list of all RPMs installed on a Red Hat Enterprise Linux (RHEL) 5 server that came from a source other than Red Hat. This is straightforward with rpm’s --queryformat (short form --qf) option:
rpm -qa --qf '%{NAME} %{VENDOR}\n' | grep -v 'Red Hat, Inc\.' | sort
That instructs rpm to output each package’s name and vendor; then we exclude those from “Red Hat, Inc.” (which is the exact string Red Hat conveniently uses in the “vendor” field of every RPM they package).
By default, rpm -qa uses the format ‘%{NAME}-%{VERSION}-%{RELEASE}’. The version and release are useful to see, and on 64-bit systems so is the architecture, since both 32- and 64-bit packages are often installed. Here’s how I did that:
rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR}\n' | grep -v 'Red Hat, Inc\.' | sort
With that I’ll see output such as:
fping-2.4-1.b2.2.el5.rf.x86_64 Dag Apt Repository, http://dag.wieers.com/apt/
git-1.5.6.5-1.x86_64 End Point Corporation
iftop-0.17-1.el5.x86_64 (none)
There we see the fping package from the excellent DAG RPM repository, along with a few others. …
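A related variant (my own sketch, not from the commands above) is to tally how many installed packages each vendor supplied, by feeding just the vendor field through sort | uniq -c. Demonstrated here on canned lines standing in for the output of rpm -qa --qf '%{VENDOR}\n', so it can be tried without rpm:

```shell
# Count packages per vendor; with rpm available, replace the printf
# with: rpm -qa --qf '%{VENDOR}\n'
printf '%s\n' 'Red Hat, Inc.' 'Red Hat, Inc.' '(none)' 'End Point Corporation' \
    | sort | uniq -c | sort -rn
```

The most common vendor sorts to the top, which makes third-party packages easy to spot at a glance.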
redhat
End Point’s Spanish website
We’ve had a Spanish version of our website at es.endpoint.com for about a year now, and we keep the content there current with our main English website. We haven’t promoted it much, so I figured I’d mention it here and see if any English speakers feel like checking it out. :) We currently have only a few Spanish speakers at End Point, and if a non-English-speaker calls our main office, it may take a bit of shuffling to route the caller to the right person.
But more to the point, we’ve done a few interesting multilingual projects. One of them is a private business-to-business website localized in US English, UK English, French, Canadian French, German, Italian, Japanese, Simplified Chinese, Traditional Chinese, Portuguese, Brazilian Portuguese, and Spanish. We’re experienced with popular character set encodings and Unicode in web protocols, Postgres and MySQL databases, Perl, and Ruby. We’re always interested in taking on more such projects as they tend to be challenging and fun.
company
The how and why of Code Reviews
Everyone believes that code reviews are highly beneficial to software and web site quality. Yet many of those who agree in principle don’t follow through with them in practice, at least not consistently or thoroughly. To find ways to improve real-world practice, I attended Code Reviews for Fun and Profit, given by Alex Martelli, Über Tech Lead at Google, during OSCON 2008.
One barrier to good reviews is when developers are reluctant to point out flaws in the code of more experienced programmers, perhaps due to culture or personal dynamics. In Open Source projects, and at End Point, the reverse is often true: corrections earn Nerd Cred. But if it is an issue, one good workaround is to ask questions. Instead of “If you use a value of zero, it crashes,” say “What happens if you use a value of zero?”
There are several prerequisites that should be in place before code reviews start. First, a version control system is required (we prefer Git at End Point). Second, a minimal amount of process should exist to ensure reviews occur, so that no commits fall through the cracks. Third, automatable tasks, such as style, test coverage, and smoke tests, should be …
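One way to automate those mechanical checks is a Git pre-commit hook that refuses commits failing syntax or smoke checks, leaving human reviewers free to focus on design and correctness. This is only a sketch; the file layout (lib/, t/) and the use of perl -c and prove are my assumptions, not from the talk:

```shell
#!/bin/sh
# .git/hooks/pre-commit (hypothetical): abort the commit if any staged
# Perl file fails to compile, or if the smoke tests fail.
set -e
for f in $(git diff --cached --name-only --diff-filter=ACM | grep -E '\.(pl|pm)$' || true); do
    perl -c "$f"
done
prove -q t/   # hypothetical smoke-test directory
```

Because the hook only sees staged files, it stays fast even in a large repository.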
development
Testing Concurrency Control
When dealing with complex systems, unit testing sometimes poses a bigger implementation challenge than does the system itself.
Most of us working in web application development deal with classic serial programming scenarios: everything happens in a certain order, within a single logical process, be it a Unix(-like) process or a thread. While testing serial programs and modules can certainly get tricky, particularly if the interface of your test target is especially rich or involves interaction with lots of other components, it at least does not involve multiple logical lines of execution. Once concurrency is brought into the mix, testing can become inordinately complex. If you want to test the independent units that will operate in parallel, you can of course test each in isolation, presumably with simple, serial-minded tests limited to the basic behaviors of the units in question. If you need to test the interaction of those units when run in parallel, however, you would do well to expect Pain.
One simple pattern that has helped me a few times in the past:
- identify what it is that you need to verify
- in your test script, fork at …
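The shape of that pattern, sketched here in shell for brevity (the post’s test scripts are Perl, and all names here are illustrative): fork the concurrent workers, wait for every one of them, then verify the invariant serially.

```shell
# Four workers append to the same file in parallel; afterward the parent
# checks the invariant that no line was lost (4 workers x 50 lines = 200).
# Small O_APPEND writes are atomic on local POSIX filesystems, so the
# count should hold even though the workers interleave.
tmpfile=$(mktemp)
for worker in 1 2 3 4; do
    (
        i=0
        while [ "$i" -lt 50 ]; do
            echo "worker $worker line $i" >> "$tmpfile"
            i=$((i + 1))
        done
    ) &
done
wait                          # reap every forked worker before checking
lines=$(wc -l < "$tmpfile")
rm -f "$tmpfile"
echo "$lines"
```

The key point is that all verification happens after wait, when the system is quiescent again; asserting anything while the workers are still running reintroduces the very races you are trying to test.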
testing perl rest
Perl on Google App Engine
People are working on getting Perl support for Google App Engine, led by Brad Fitzpatrick (of Livejournal, memcached, etc. fame) at Google.
They’ve created a new module, Sys::Protect, to simulate the restricted Perl interpreter that would have to exist for Google App Engine. There’s some discussion of why they didn’t use Safe, but it sounds like it’s based only on rumors of Safe problems, not anything concrete.
Safe is built on Opcode, and Sys::Protect appears to work the same way Safe + Opcode do, by blocking certain Perl opcodes. All the problems I’ve heard of and personally experienced with Safe were because it was working just fine—but being terribly annoying because many common Perl modules do things a typical Safe compartment disallows. That’s because most Perl module writers don’t use Safe and thus never encounter such problems. It seems likely that Sys::Protect and a hardened Perl Google App Engine environment will have the same problem and will have to modify many common modules if they’re to be used.
Moving on, posters are talking about having support for Moose, Catalyst, CGI::Application, POE, Template::Toolkit, HTML::Template … well, a lot. I guess that makes …
perl cloud
Switching from Sendmail to Postfix on OpenBSD
It’s easy to pick on Sendmail, and with good reason: a poor security record, slowness, monolithic design, and baroque, painful, arcane configuration. Once you know Sendmail it’s bearable, and long-time experts aren’t always eager to give it up, but I wouldn’t recommend anyone deploy it for a serious mail server these days. For a send-only mail daemon or a private, internal mail server, though, it works fine. Since it’s the default mailer for OpenBSD, and I haven’t been using OpenBSD on heavy-traffic mail servers, I’ve usually just left Sendmail in place.
A few years ago some of our clients’ internal mail servers running Sendmail were getting heavy amounts of automated output from cron jobs, batch job output, transaction notifications, etc., and they bogged down and sometimes even stopped working entirely under the load. It wasn’t that much email, though—the machines should’ve been able to handle it.
After trying to tune Sendmail to be more tolerant of heavy load and having little success, I finally switched to Postfix (which we had long used elsewhere) and the CPU load immediately dropped from 30+ down to below 1, and mail delivery worked without …
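On OpenBSD the switch itself is mostly a matter of pointing the mailwrapper at Postfix. A sketch of /etc/mailer.conf after installing the Postfix package (the exact path can vary by version; the postfix-enable script shipped with the package makes equivalent edits):

```shell
# /etc/mailer.conf: route the sendmail-compatible entry points to
# Postfix's compatibility wrapper instead of the base-system Sendmail.
sendmail    /usr/local/sbin/sendmail
send-mail   /usr/local/sbin/sendmail
mailq       /usr/local/sbin/sendmail
newaliases  /usr/local/sbin/sendmail
```

With that in place, anything that invokes /usr/sbin/sendmail (cron, scripts, daemons) transparently hands mail to Postfix instead.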
email openbsd