Our Blog

    Ongoing observations by End Point Dev people

    Perl on Google App Engine


    By Jon Jensen
    August 2, 2008

    People are working on getting Perl support for Google App Engine, led by Brad Fitzpatrick (of Livejournal, memcached, etc. fame) at Google.

    They’ve created a new module, Sys::Protect, to simulate the restricted Perl interpreter that would have to exist for Google App Engine. There’s some discussion of why they didn’t use Safe, but it sounds like it’s based only on rumors of Safe problems, not anything concrete.

    Safe is built on Opcode, and Sys::Protect appears to work the same way Safe + Opcode do, by blocking certain Perl opcodes. All the problems I’ve heard of and personally experienced with Safe were because it was working just fine—but being terribly annoying because many common Perl modules do things a typical Safe compartment disallows. That’s because most Perl module writers don’t use Safe and thus never encounter such problems. It seems likely that Sys::Protect and a hardened Perl Google App Engine environment will have the same problem and will have to modify many common modules if they’re to be used.
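
    As a minimal sketch (not code from Sys::Protect or Google, just an illustration of the general mechanism), here is how a default Safe compartment traps opcodes that ordinary module code uses all the time:

      use strict;
      use warnings;
      use Safe;

      my $cpt = Safe->new;

      # The default opcode mask allows plain computation...
      my $sum = $cpt->reval('2 + 2');
      print "2 + 2 = $sum\n";

      # ...but opcodes outside the default set, such as open, are trapped,
      # which is exactly the sort of thing a CPAN module may do internally.
      my $ok = $cpt->reval('open my $fh, "<", "/etc/hostname"');
      print "blocked: $@" unless defined $ok;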

    Moving on, posters are talking about having support for Moose, Catalyst, CGI::Application, POE, Template::Toolkit, HTML::Template … well, a lot. I guess that makes …


    perl cloud

    Switching from Sendmail to Postfix on OpenBSD


    By Jon Jensen
    August 1, 2008

    It’s easy to pick on Sendmail, and with good reason: a poor security record, slowness, a monolithic design, and baroque, painful, arcane configuration. Once you know Sendmail it’s bearable, and long-time experts aren’t always eager to give it up, but I wouldn’t recommend anyone deploy it for a serious mail server these days. But for a send-only mail daemon or a private, internal mail server, it works fine. Since it’s the default mailer for OpenBSD, and I haven’t been using OpenBSD as a heavy-traffic mail server, I’ve usually just left Sendmail in place.

    A few years ago some of our clients’ internal mail servers running Sendmail were getting heavy amounts of automated output from cron jobs, batch job output, transaction notifications, etc., and they bogged down and sometimes even stopped working entirely under the load. It wasn’t that much email, though—the machines should’ve been able to handle it.

    After trying to tune Sendmail to be more tolerant of heavy load and having little success, I finally switched to Postfix (which we had long used elsewhere) and the CPU load immediately dropped from 30+ down to below 1, and mail delivery worked without …
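
    For reference, the mechanics of the switch on OpenBSD mostly come down to pointing mailwrapper(8) at Postfix instead of Sendmail. A hedged sketch of /etc/mailer.conf, assuming the Postfix package installs its sendmail-compatible binary as /usr/local/sbin/sendmail (adjust the paths to your installation):

    # /etc/mailer.conf: hand the sendmail-compatible entry points to Postfix
    sendmail        /usr/local/sbin/sendmail
    send-mail       /usr/local/sbin/sendmail
    mailq           /usr/local/sbin/sendmail
    newaliases      /usr/local/sbin/sendmail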


    email openbsd

    Code Debt-Free


    By Ethan Rowe
    July 31, 2008

    Every now and then, the opportunity arises to write debt-free code (meaning free of technical debt). When such opportunities come, we must seize them.

    I recently had the distinct pleasure of cranking out some Perl modules in the following order:

    1. Write documentation for the forthcoming functionality

    2. Implement unit tests for the aforementioned forthcoming functionality

    3. Verify that the unit tests fail

    4. Implement the awaited functionality

    5. Verify (jumping back to step 4 as necessary) that the unit tests pass
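
    As a minimal sketch of what steps 2 and 3 look like in practice (the module and function names below are invented purely for illustration), the test file exists and fails before the implementation does:

      # t/frobnicate.t: written before My::Widget::frobnicate() exists,
      # so this file should fail until step 4 is complete.
      use strict;
      use warnings;
      use Test::More tests => 2;

      use_ok('My::Widget');
      is(My::Widget::frobnicate('abc'), 'ABC', 'frobnicate() uppercases its input');

    Running prove against that file at step 3 produces the expected failure; once step 4 is in place, the same run completes the cycle at step 5.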

    Timelines, interruptions, and other pressures often get in the way of this short-term development cycle. The cycle can feel tedious; it makes the task of implementing even simple functions seem unpleasantly large and drawn out. When an implementation approach flashes into the engineer’s mind, leaping to step 4 (implementation) feels natural and immediately gratifying. The best-intentioned of us can fall into this out of habit, out of inertia, out of raw enthusiasm.

    Documentation, though, demonstrates that you know what you’re trying to achieve. It is not a nicety; it is proof that you understand the problem at hand. Unit tests, as hard as they can sometimes be to …


    perl tips

    MySQL vs. PostgreSQL mailing list activity


    By Jon Jensen
    July 31, 2008

    My co-worker, Greg Sabino Mullane, noted this writeup on the MarkMail blog comparing the amount of traffic on the various MySQL and PostgreSQL mailing lists.

    I suppose you could pessimistically say that PostgreSQL users need more community help than MySQL users do, but reviewing the content of the traffic (and going from years of personal experience) doesn’t support such a view. The PostgreSQL community seems to have more long-term, deeply involved users who are also contributors.

    But let’s hope the competition in the free database world picks up. It looks like the new Drizzle project has a good chance of growing a new community around MySQL.

    In any case, the MarkMail mailing list archive and search service is an excellent resource. Thanks, MarkMail folks!


    database postgres

    Git push: know your refspecs


    By Ethan Rowe
    July 30, 2008

    The ability to push and pull commits to/from remote repositories is obviously one of the great aspects of Git. However, if you’re not careful with how you use git-push, you may find yourself in an embarrassing situation.

    When you have multiple remote tracking branches within a Git repository, any bare git push invocation will attempt to push all of those remote tracking branches out. If you have commits stacked up that you weren’t quite ready to push out, this can be somewhat unfortunate.

    There are a variety of ways to accommodate this:

    • use local branches for your commits, only merging those commits into your remote tracking branches when you’re ready to push them out;
    • push remote tracking branches out whenever you have something worth committing.

    However, even with sensible branch management practices, it’s worthwhile to know exactly what it is you’re pushing. Therefore, if you want to have a sense of what you’re potentially doing in calling a bare git push, always call it with the --dry-run option first. This will show you what the push will send out, where the conflicts are, and so on, all without actually performing the push.
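
    For example, assuming a remote named origin and a branch named master:

    git push --dry-run              # report what a bare push would update, without pushing anything
    git push origin master          # push only the one refspec you actually mean to publish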

    It is ultimately best, though, to understand the …


    git

    Building Perl on 64-bit RHEL/Fedora/CentOS


    By Jon Jensen
    July 28, 2008

    When building Perl from source on 64-bit Red Hat Enterprise Linux, Fedora, CentOS, or derivatives, Perl’s Configure command needs to be told about the “multilib” setup Red Hat uses.

    The multilib arrangement allows both 32-bit and 64-bit libraries to exist on the same system, and leaves the “non-native” 32-bit libraries in /lib and /usr/lib while the “native” 64-bit libraries go in /lib64 and /usr/lib64. That allows the same 32-bit RPMs to be used on either i386 or x86_64 systems. The downside of this is that 64-bit applications have to be told where to look for, and put, libraries, or they usually won’t work.

    For Perl, to compile from a source tarball with the defaults:

    ./Configure -des -Dlibpth="/usr/local/lib64 /lib64 /usr/lib64"
    

    Then build as normal:

    make && make test && sudo make install
    

    I hope this information will come in handy for someone. I believe I learned it from Red Hat’s source RPM for Perl.
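
    As a quick sanity check (not part of the original recipe), the installed perl can report back the library path it was configured with:

    perl -V:libpth
    # expected output, given the Configure line above:
    # libpth='/usr/local/lib64 /lib64 /usr/lib64';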


    perl redhat

    Perl incompatibility moving to 5.10


    By Ethan Rowe
    July 28, 2008

    We’re preparing to upgrade from Perl 5.8.7 to 5.10.0 for a particular project, and ran into an interesting difference between the two versions.

    Consider the following statement for some hashref $attrib:

      use strict;
      ...
      my ($a, $b, $c) = @{%{$attrib}}{qw(a b c)};
    

    In 5.8.7, the @{…} construct will return a slice of the hash referenced by $attrib, meaning that $a gets $attrib->{a}, $b gets $attrib->{b}, and so on.

    In 5.10.0, the same construct will result in an error complaining about using a string for a hashref.

    I suspect it’s due to the hash dereference (%{$attrib}) being fully executed prior to applying the hash-slice operation (@{…}{qw(a b c)}), meaning that you’re not operating against a hashref anymore.

    Fortunately, the fix is wonderfully simple and significantly more readable:

      my ($a, $b, $c) = @$attrib{qw( a b c )};
    

    The “fix”—which is arguably how it should have been constructed in the first place, but this is software we’re talking about—works in both versions of Perl.


    perl

    Signs of a too-old Git version


    By Jon Jensen
    July 28, 2008

    When running git clone, if you get an error like this:

    Couldn't get http://some.domain/somerepo.git/refs/remotes/git-svn for remotes/git-svn
    The requested URL returned error: 404 error: Could not interpret remotes/git-svn as something to pull
    

    You’re probably using a really old version of Git that can’t handle some things in the newer repository. The above example was from Git 1.4.4.4, the very old version included with Debian Etch. The best way to handle that is to use Debian Backports to upgrade to Git 1.5.5.

    On Red Hat Enterprise Linux, Fedora, or CentOS, the Git maintainers’ RPMs usually work (though you may need to get a dependency, the perl-Error package from RPMforge).

    If all else fails, grab the Git source and build it. I’ve never had a problem building the code anywhere, though building the docs requires a newer version of asciidoc than is easy to get on RHEL 3.
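
    A rough sketch of that last resort (a standard prefix assumed, and the docs skipped so the asciidoc requirement never comes up):

    make prefix=/usr/local all          # binaries only; "make doc" is the part that needs asciidoc
    sudo make prefix=/usr/local install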


    git