Our Blog

    Ongoing observations by End Point Dev people

    Using YSlow to analyze website performance

    By Ron Phipps
    December 19, 2008

    While attending OSCON ’08 I listened to Steve Souders discuss some topics from his O’Reilly book, High Performance Web Sites, and a new book due in early 2009. Steve commented that 80%–90% of a site’s performance is in the delivery and rendering of the front-end content. Many engineers tend to look immediately at the back end when optimizing and forget about the rendering of the page and how performance there affects the user’s experience.

    During the talk he demonstrated YSlow, the Firebug plugin he built to illustrate 13 of the 14 rules from his book. The tool shows where performance might be an issue and gives suggestions on which resources can be changed to improve performance. Some of the suggestions may not apply to all sites, but they can be used as a guide for the engineer to make an informed decision.

    On a related note, Jon Jensen brought to our attention a blog posting announcing that Google is planning to incorporate landing page load time into its quality score for AdWords landing pages. With that being known, front-end website performance will become even more important and there may be a point one day where load times come into play when determining …


    browsers performance

    TrueCrypt whole-disk encryption for Windows

    By Jon Jensen
    December 13, 2008

    A few months ago I had a chance to use a new computer with Windows Vista on it. This was actually kind of a fun experience, because Windows 98 was the last version I regularly used myself, though I was at least mildly familiar with Windows 2000 and XP on others’ desktops.

    Since I’ve been using encrypted filesystems on Linux since around 2003, I’ve gotten used to the comfort of knowing a lost or stolen computer would mean only lost hardware, not worries about what may happen with the data on the disk. Linux-Mandrake was the first Linux distribution I recall offering an easy encrypted filesystem option during setup. Now Ubuntu and Fedora have it too.

    I wanted to try the same thing on Windows, but found only folder-level encryption was commonly used out of the box. Happily, the open source TrueCrypt software introduced whole-disk system encryption for Windows with version 5. I’ve now used it with versions 6.0, 6.1, and 6.1a on three machines under Windows Vista and XP, and it really works well, with a few caveats.

    The installation is smooth, and system encryption is really easy to set up if you don’t have any other operating systems on the machine. It will even encrypt on the fly …


    windows security

    Parallel Inventory Access using PostgreSQL

    By Mark Johnson
    December 12, 2008

    Inventory management has a number of challenges. One of the more vexing issues I’ve dealt with is that of forced serial access. We have a product with X items in inventory. We also have multiple concurrent transactions vying for that inventory. Under any normal circumstance, whether the count is a simple scalar or is comprised of any number of records up to one record per quantity, the concurrent transactions are all going to home in on the same record, or set of records. In doing so, all transactions must wait and get their inventory serially, even when that serialization isn’t actually needed.

    If inventory is a scalar value, we don’t have much hope of circumventing the problem. And, in fact, we wouldn’t want to under that scenario because each transaction must reflect the part of the whole it consumed so that the next transaction knows how much is left to work with.

    However, if we have inventory represented with one record = one quantity, we aren’t forced to serialize in the same way. If we have multiple concurrent transactions vying for inventory, and the sum of the need is less than that available, why must the transactions wait at all? They would normally line up serially because, no …
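The excerpt ends here, but the one-row-per-quantity idea can be sketched in SQL. This is a hypothetical illustration, not the post’s actual code: the table and column names are invented, and it uses ORDER BY random() with FOR UPDATE so that concurrent claimers usually lock different rows instead of queueing on the same one:

```sql
-- Hypothetical schema: one row per unit of inventory.
CREATE TABLE inventory_item (
    item_id    serial PRIMARY KEY,
    sku        text NOT NULL,
    claimed_by integer           -- NULL while the unit is unclaimed
);

-- Claim one unclaimed unit. ORDER BY random() scatters concurrent
-- transactions across different rows, so they rarely block on the
-- same row lock. If two do collide, the loser's chosen row no
-- longer qualifies after the lock wait, zero rows are updated, and
-- the application simply retries.
UPDATE inventory_item
   SET claimed_by = 1234         -- the claiming order/transaction id
 WHERE item_id = (SELECT item_id
                    FROM inventory_item
                   WHERE sku = 'WIDGET'
                     AND claimed_by IS NULL
                   ORDER BY random()
                   LIMIT 1
                     FOR UPDATE);
```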


    postgres ecommerce

    Why is my function slow?

    By Greg Sabino Mullane
    December 11, 2008

    I often hear people ask “Why is my function so slow? The query runs fast when I do it from the command line!” The answer lies in the fact that a function’s query plans are cached by Postgres, and the plan derived by the function is not always the same as shown by an EXPLAIN from the command line. To illustrate the difference, I downloaded the pagila test database. To show the problem, we’ll need a table with a lot of rows, so I used the largest table, rental, which has the following structure:

    pagila# \d rental
                                 Table "public.rental"
        Column    |            Type             |                    Modifiers
    --------------+-----------------------------+--------------------------------------------------
     rental_id    | integer                     | not null default nextval('rental_rental_id_seq')
     rental_date  | timestamp without time zone | not null
     inventory_id | integer                     | not null
     customer_id  | smallint                    | not null
     return_date  | timestamp without time zone |
     staff_id     | smallint                    | not null
     last_update  | timestamp without time zone | not null default now()
    Indexes:
        "rental_pkey" PRIMARY KEY (rental_id)
        "idx_unq_rental" UNIQUE (rental_date, inventory_id, customer_id)
        "idx_fk_inventory_id" …
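The excerpt is truncated, but the cached-plan effect the post describes can also be seen with a prepared statement, which is planned once with a placeholder just as a function’s queries are. A minimal sketch against the same rental table (the customer_id value here is arbitrary):

```sql
-- Planned for this specific literal value; the planner can
-- choose the best plan for it.
EXPLAIN SELECT count(*) FROM rental WHERE customer_id = 26;

-- Planned once for ANY value, like a query inside a function;
-- the plan can differ from the ad-hoc one shown above.
PREPARE rentals_for(smallint) AS
    SELECT count(*) FROM rental WHERE customer_id = $1;
EXPLAIN EXECUTE rentals_for(26);
```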

    postgres

    Best practices for cron

    By Greg Sabino Mullane
    December 8, 2008

    Cron is a wonderful tool, and a standard part of every sysadmin’s toolkit. Not only does it allow for precise timing of unattended events, but it has a straightforward syntax and by default emails all output. What follows are some best practices for writing crontabs I’ve learned over the years. In the following discussion, “cron” indicates the program itself, “crontab” indicates the file changed by “crontab -e”, and “entry” means a single timed action specified inside the crontab file. Cron best practices:
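As a quick refresher on the syntax mentioned above, a crontab entry is five time fields followed by a command (the script path here is made up for illustration):

```
# min  hour  day-of-month  month  day-of-week  command
  30   3     *             *      *            /usr/local/bin/nightly-backup.sh
```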

    Version control

    This rule is number one for a reason. Always version control everything you do. It provides an instant backup, accountability, easy rollbacks, and a history. Keeping your crontabs in version control is slightly more work than normal files, but all you have to do is pick a standard place for the file, then export it with crontab -l > crontab.postgres.txt. I prefer RCS for quick little version control jobs like this: no setup required, and everything is in one place. Just run: ci -l crontab.postgres.txt and you are done. The name of the file should be something like the example shown, indicating what it is (a crontab file), which one it is (belongs to the user …


    sysadmin

    Creating a PL/Perl RPM linked against a custom Perl build (updated)

    By Jon Jensen
    November 29, 2008

    I recently needed to refer to a post I made on March 7, 2007, showing how to build a PL/Perl RPM linked against a custom Perl build. A few things have changed since that time, so I’ve reworked it here, updated for local Perl 5.10.0 built into RPMs:

    We sometimes have to install a custom Perl build without thread support, and to have some specific newer and/or older versions of CPAN modules, and we don’t want to affect the standard distribution Perl that lives in /usr/bin/perl and /usr/lib/perl5. We use standard PGDG RPMs to install PostgreSQL. We also use PL/Perl, and want PL/Perl to link against our custom Perl build in /usr/local/bin and /usr/local/lib/perl5.

    It’s easy to achieve this with a small patch to the source RPM spec file:

    --- postgresql-8.3.spec 2008-10-31 17:34:34.000000000 +0000
    +++ postgresql-8.3.custom.spec  2008-11-30 02:10:09.000000000 +0000
    @@ -315,6 +315,7 @@
     CFLAGS=`echo $CFLAGS|xargs -n 1|grep -v ffast-math|xargs -n 100`
    
     export LIBNAME=%{_lib}
    +export PATH=/usr/local/bin:$PATH
     %configure --disable-rpath \
     %if %beta
        --enable-debug \
    @@ -322,6 +323,7 @@
     %endif
     %if %plperl
        --with-perl \
    + …

    perl redhat sysadmin postgres

    Multiple reverse DNS pointers per IP address

    By Jon Jensen
    November 28, 2008

    I recently ran across an IP address that had two PTR (reverse DNS) records in DNS. I’ve always thought that each IP address is limited to only a single PTR record, and I’ve seen this rule enforced by many ISPs, but I don’t remember ever seeing it conclusively stated.

    I was going to note the problem to the responsible person but thought it’d be good to test my assumption first. Lo and behold, it’s not true. The Wikipedia “Reverse DNS lookup” page and a source it cites, an IETF draft on reverse DNS, note that multiple PTR records per IP address have always been allowed.

    There is apparently plenty of software out there that can’t properly deal with more than one PTR record per IP address, and with too many PTR records, a DNS query response will no longer fit inside a single UDP packet, forcing a TCP response instead, which can cause trouble of its own. And as I noted, many ISPs won’t allow more than one PTR record, so in those cases it’s an academic question.
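For context, a reverse lookup asks for PTR records under the special in-addr.arpa zone, with the IPv4 octets reversed. Python’s standard library can construct that query name; this is a small aside, not something from the post itself:

```python
import ipaddress

def ptr_name(ip: str) -> str:
    """Return the in-addr.arpa name queried for a reverse DNS lookup."""
    return ipaddress.ip_address(ip).reverse_pointer

print(ptr_name("192.0.2.1"))  # → 1.2.0.192.in-addr.arpa
```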

    But it’s not invalid, and I saved myself and someone else a bit of wasted time by doing a quick bit of research. It was a good reminder of the value of checking assumptions.


    networking

    OpenSQL Camp 2008

    By Greg Sabino Mullane
    November 19, 2008

    I attended the OpenSQL Camp last weekend, which ran Friday night to Sunday, November 14-16th. This was the first “unconference” I had been to, and Baron Schwartz did a great job in pulling this all together. I drove down with Bruce Momjian who said that this is the first cross-database conference of any kind since at least the year 2000.

    The conference was slated to start at 6 pm, and Bruce and I arrived at our hotel a few minutes before then. Our hotel was at one end of the Charlottesville Downtown Mall, and the conference was at the other end, so we got a quick walking tour of the mall. Seems like a great place—lots of shops, people walking, temporary booths set out, outdoor seating for the restaurants. It reminded me a lot of Las Ramblas, but without the “human statue” performance artists. Having a hotel within walking distance of a conference is a big plus in my book, and I’ll go out of my way to find one.

    The first night was simply mingling with other people and designing the next day’s sessions. There was a grid of talk slots on a wall, with large sticky notes stuck to some of them to indicate already-scheduled sessions. Next to the grid were two sections, where people …


    conference database postgres mysql
    Page 210 of 220