Our Blog

    Ongoing observations by End Point Dev people

    Perl 5 now on Git

    By Jon Jensen
    December 21, 2008

    It’s awesome to see that the Perl 5 source code repository has been migrated from Perforce to Git, and is now active at https://perl5.git.perl.org/. Congratulations to all those who worked hard to migrate the entire version control history, all the way back to the beginning with Perl 1.0!

    Skimming through the history turns up some fun things:

    • The last Perforce commit appears to have been on 16 December 2008.
    • Perl 5 is still under very active development! (It seems a lot of people are missing this simple fact, so I don’t feel bad stating it.)
    • Perl 5.8.0 was released on 18 July 2002, and 5.6.0 on 23 March 2000. Those both seem so recent …
    • Perl 5.000 was released on 17 October 1994.
    • Perl 4.0.00 was released on 21 March 1991, and the last Perl 4 release, 4.0.36, came out on 4 February 1993. For something with an active lifespan of only 4 or so years until Perl 5 became popular, Perl 4 code sure kicked around on servers a lot longer than that.
    • Perl 1.0 was announced by Larry Wall on 18 December 1987. He called Perl a “replacement” for awk and sed. That first release included 49 regression tests.
    • Some of the patches are from people whose contact information is long gone, rendered in Git …

    git perl

    Sometimes it’s a silly hardware problem

    By Jon Jensen
    December 19, 2008

    I’ve been using Twinkle and Ekiga for SIP VoIP on Ubuntu 8.10 x86_64. That’s been working pretty well.

    However, I finally had to take some time to hunt down the source of a very annoying high-pitched noise coming from my laptop’s sound system (external speaker and headset both). I have an Asus M50SA laptop with Intel 82801H (ICH8 Family) audio on Realtek ALC883. I first thought perhaps it was the HDMI cable going to an external monitor, or some other RF interference from a cable, but turning things off or unplugging them didn’t make any difference.

    Then I suspected an audio driver problem, because the whine only started once the sound driver loaded at boot time. After trying all sorts of variations in the ALSA configuration and changing the options to the snd-hda-intel kernel module, I was at a loss, so I unplugged my USB keyboard and mouse.

    It was the USB mouse! It’s a laser-tracked mouse with little shielding on the short cable. Plugging it into either of the USB ports near the front of the computer caused the noise. The keyboard didn’t matter.

    At first I thought my other USB non-laser ball mouse didn’t add any noise, but it did, just a quieter and lower-pitched one.

    Then …


    hardware linux audio

    Using YSlow to analyze website performance

    By Ron Phipps
    December 19, 2008

    While attending OSCON ’08 I listened to Steve Souders discuss some topics from his O’Reilly book, High Performance Web Sites, and a new book that should drop in early 2009. Steve made the comment that 80%-90% of the performance of a site is in the delivery and rendering of the front-end content. Many engineers tend to immediately look at the back end when optimizing and forget about the rendering of the page and how performance there affects the user’s experience.

    During the talk he demonstrated the Firebug plugin, YSlow, which he built to illustrate 13 of the 14 rules from his book. The tool shows where performance might be an issue and gives suggestions on which resources can be changed to improve performance. Some of the suggestions may not apply to all sites, but they can be used as a guide for the engineer to make an informed decision.

    On a related note, Jon Jensen brought to our attention this blog posting reporting that Google is planning to incorporate landing page load time into its quality score for AdWords landing pages. With that in mind, front-end website performance will become even more important, and there may be a point one day where load times come into play when determining …


    browsers performance

    TrueCrypt whole-disk encryption for Windows

    By Jon Jensen
    December 13, 2008

    A few months ago I had a chance to use a new computer with Windows Vista on it. This was actually kind of a fun experience, because Windows 98 was the last version I regularly used myself, though I was at least mildly familiar with Windows 2000 and XP on others’ desktops.

    Since I’ve been using encrypted filesystems on Linux since around 2003, I’ve gotten used to the comfort of knowing a lost or stolen computer would mean only lost hardware, not worries about what may happen with the data on the disk. Linux-Mandrake was the first Linux distribution I recall offering an easy encrypted filesystem option during setup. Now Ubuntu and Fedora have it too.

    I wanted to try the same thing on Windows, but found only folder-level encryption was commonly used out of the box. Happily, the open source TrueCrypt software introduced whole-disk system encryption for Windows with version 5. I’ve now used it with versions 6.0, 6.1, and 6.1a on three machines under Windows Vista and XP, and it really works well, with a few caveats.

    The installation is smooth, and system encryption is really easy to set up if you don’t have any other operating systems on the machine. It will even encrypt on the fly …


    windows security

    Parallel Inventory Access using PostgreSQL

    By Mark Johnson
    December 12, 2008

    Inventory management has a number of challenges. One of the more vexing issues with which I’ve dealt is that of forced serial access. We have a product with X items in inventory. We also have multiple concurrent transactions vying for that inventory. Under any normal circumstance, whether the count is a simple scalar or comprises any number of records up to one record per quantity, the concurrent transactions are all going to home in on the same record, or set of records. In doing so, all transactions must wait and get their inventory serially, even when serial access isn’t actually necessary.

    If inventory is a scalar value, we don’t have much hope of circumventing the problem. And, in fact, we wouldn’t want to under that scenario because each transaction must reflect the part of the whole it consumed so that the next transaction knows how much is left to work with.

    However, if we have inventory represented with one record = one quantity, we aren’t forced to serialize in the same way. If we have multiple concurrent transactions vying for inventory, and the sum of their needs is less than the amount available, why must the transactions wait at all? They would normally line up serially because, no …
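
    The excerpt is cut off here, but the idea can be sketched. Assuming a hypothetical one-row-per-unit table (the table and column names below are illustrative, not from the article), each transaction can claim a different available row instead of queueing on the same one. This sketch uses FOR UPDATE with SKIP LOCKED, which only appeared in PostgreSQL 9.5, years after this post was written; the full article’s technique may well differ (advisory locks were a common approach at the time).

    -- Hypothetical schema: one row per unit of inventory.
    -- CREATE TABLE inventory_unit (
    --     unit_id serial PRIMARY KEY,
    --     sku     text    NOT NULL,
    --     claimed boolean NOT NULL DEFAULT false
    -- );

    BEGIN;

    -- Claim any one unclaimed unit, skipping rows other transactions hold locked,
    -- so concurrent buyers do not line up behind each other.
    UPDATE inventory_unit
       SET claimed = true
     WHERE unit_id IN (
            SELECT unit_id
              FROM inventory_unit
             WHERE sku = 'WIDGET-1'
               AND NOT claimed
             LIMIT 1
             FOR UPDATE SKIP LOCKED
           )
    RETURNING unit_id;

    COMMIT;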


    postgres ecommerce

    Why is my function slow?

    By Greg Sabino Mullane
    December 11, 2008

    I often hear people ask “Why is my function so slow? The query runs fast when I do it from the command line!” The answer lies in the fact that a function’s query plans are cached by Postgres, and the plan derived by the function is not always the same as shown by an EXPLAIN from the command line. To illustrate the difference, I downloaded the pagila test database. To show the problem, we’ll need a table with a lot of rows, so I used the largest table, rental, which has the following structure:

    pagila# \d rental
                                Table "public.rental"
        Column    |    Type    |                     Modifiers
    --------------+------------+--------------------------------------------------
     rental_id    | integer    | not null default nextval('rental_rental_id_seq')
     rental_date  | timestamp  | not null
     inventory_id | integer    | not null
     customer_id  | smallint   | not null
     return_date  | timestamp  |
     staff_id     | smallint   | not null
     last_update  | timestamp  | not null default now()
    Indexes:
        "rental_pkey" PRIMARY KEY (rental_id)
        "idx_unq_rental" UNIQUE (rental_date, inventory_id, customer_id)
        "idx_fk_inventory_id" …

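    The excerpt ends here, but the core point can be illustrated with a small, hypothetical psql experiment: a query planned against a bound parameter (roughly what a function’s cached plan sees) can come out quite different from the same query planned with a literal value. PREPARE/EXECUTE is only an approximation of PL/pgSQL plan caching, and the exact plans will vary with the Postgres version and the data, but it shows the kind of difference involved:

    -- Ad-hoc query: the planner sees the literal value and can use its statistics.
    EXPLAIN SELECT count(*) FROM rental WHERE customer_id = 263;

    -- Roughly what a function gets: a plan prepared against a parameter.
    PREPARE rental_count(int) AS
        SELECT count(*) FROM rental WHERE customer_id = $1;
    EXPLAIN EXECUTE rental_count(263);
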
    postgres

    Best practices for cron

    By Greg Sabino Mullane
    December 8, 2008

    Cron is a wonderful tool, and a standard part of every sysadmin’s toolkit. Not only does it allow for precise timing of unattended events, but it has a straightforward syntax and by default emails all output. What follows are some best practices for writing crontabs I’ve learned over the years. In the following discussion, “cron” indicates the program itself, “crontab” indicates the file changed by “crontab -e”, and “entry” indicates a single timed action specified inside the crontab file. Cron best practices:

    Version control

    This rule is number one for a reason. Always version control everything you do. It provides an instant backup, accountability, easy rollbacks, and a history. Keeping your crontabs in version control is slightly more work than for normal files, but all you have to do is pick a standard place for the file, then export it with crontab -l > crontab.postgres.txt. I prefer RCS for quick little version control jobs like this: no setup required, and everything is in one place. Just run ci -l crontab.postgres.txt and you are done. The name of the file should be something like the example shown, indicating what it is (a crontab file), which one it is (belongs to the user …
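
    As a minimal sketch of that workflow (the file name is just the example from above, and installing the edited file back with crontab is an extra step not shown in this excerpt):

    # Export the current crontab under a descriptive name and check it in with RCS
    crontab -l > crontab.postgres.txt
    ci -l crontab.postgres.txt

    # Later: edit the file, check in the new revision, and install it
    $EDITOR crontab.postgres.txt
    ci -l crontab.postgres.txt
    crontab crontab.postgres.txt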


    sysadmin

    Creating a PL/Perl RPM linked against a custom Perl build (updated)

    By Jon Jensen
    November 29, 2008

    I recently needed to refer to a post I made on March 7, 2007, showing how to build a PL/Perl RPM linked against a custom Perl build. A few things have changed since that time, so I’ve reworked it here, updated for local Perl 5.10.0 built into RPMs:

    We sometimes need to install a custom Perl build without thread support, with some specific newer and/or older versions of CPAN modules, and we don’t want to affect the standard distribution Perl that lives in /usr/bin/perl and /usr/lib/perl5. We use standard PGDG RPMs to install PostgreSQL. We also use PL/Perl, and want PL/Perl to link against our custom Perl build in /usr/local/bin and /usr/local/lib/perl5.

    It’s easy to achieve this with a small patch to the source RPM spec file:

    --- postgresql-8.3.spec 2008-10-31 17:34:34.000000000 +0000
    +++ postgresql-8.3.custom.spec  2008-11-30 02:10:09.000000000 +0000
    @@ -315,6 +315,7 @@
     CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v ffast-math|xargs -n 100`
    
     export LIBNAME=%{_lib}
    +export PATH=/usr/local/bin:$PATH
     %configure --disable-rpath \
     %if %beta
        --enable-debug \
    @@ -322,6 +323,7 @@
     %endif
     %if %plperl
        --with-perl \
    + …
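
    The patch is cut off in this excerpt. As a rough, hedged sketch of how a patched spec like this is typically rebuilt (paths and file names below are illustrative, not from the post):

    rpm -ivh postgresql-8.3.*.src.rpm         # unpack the source RPM into the build tree
    cd /usr/src/redhat/SPECS                  # or ~/rpmbuild/SPECS on newer systems
    patch < postgresql-8.3-custom-perl.patch  # apply a spec patch like the one above
    rpmbuild -bb postgresql-8.3.spec          # rebuild the binary RPMs, including PL/Perl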

    perl redhat sysadmin postgres
    Page 209 of 219