Our Blog

    Ongoing observations by End Point Dev people

    pg_wrapper’s very symbolic links

    By Josh Williams
    September 22, 2010

    I like pg_wrapper. For a development environment, or for testing replication scenarios, it’s brilliant. If you’re not familiar with pg_wrapper and its family of tools, it’s a set of scripts in the postgresql-common and postgresql-client-common packages available in Debian, as well as Ubuntu and other Debian-like distributions. As you may have guessed, pg_wrapper itself is a wrapper script that calls the correct version of the binary you’re invoking – psql, pg_dump, etc. – depending on the version of the database you want to connect to. Maybe not all that exciting in itself, but implied therein is the really cool bit: this set of tools lets you manage multiple installations of Postgres, spanning multiple versions, easily and reliably.
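
    To give a feel for it, here’s a sketch of typical usage (the cluster names here are illustrative; pg_lsclusters and pg_createcluster ship in postgresql-common):

        $ pg_lsclusters                  # list every cluster the wrapper knows about
        $ pg_createcluster 8.4 test      # create an additional 8.4 cluster named "test"
        $ psql --cluster 8.4/main        # run the 8.4 psql against the "main" cluster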

    Well, usually reliably. We were helping a client upgrade their production boxes from Postgres 8.1 to 8.4. This was just before the 9.0 release; otherwise we’d have considered moving directly to that instead. It was going fairly smoothly until on one box we hit this message:

    Could not parse locale out of pg_controldata output
    

    Oops, they had pinned the older postgresql-common version. An upgrade of those packages, and no more error!
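
    The fix itself is ordinary package management; a sketch of what that looks like on Debian or Ubuntu (exact versions will vary):

        $ apt-cache policy postgresql-common     # reveals any version pin in effect
        $ sudo apt-get install postgresql-common postgresql-client-common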

    $ pg_lsclusters
    Version Cluster …

    database postgres

    Listen/Notify improvements in PostgreSQL 9.0

    By Greg Sabino Mullane
    September 21, 2010

    Improved listen/notify is one of the new features of Postgres 9.0 that I’ve been waiting on for a long time. There are two major changes: everything is in shared memory instead of using system tables, and full support for “payload” messages is enabled.

    Before I demonstrate the changes, here’s a review of what exactly the listen/notify system in Postgres is. Basically, it is an inter-process signalling system, which uses the pg_listener system table to coordinate simple named events between processes. One or more clients connect to the database and issue a command such as:

    LISTEN foobar;
    

    The name foobar can be replaced by any valid name; usually the name is something that gives a contextual clue to the listening process, such as the name of a table. Another client (or even one of the original ones) will then issue a notification like so:

    NOTIFY foobar;
    

    Each client that is listening for the ‘foobar’ message will receive a notification that the sender has issued the NOTIFY. It also receives the PID of the sending process. Multiple notifications are collapsed into a single notice, and the notification is not sent until a transaction is committed.
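
    In 9.0 the notification can also carry a payload string; a minimal sketch of the new syntax (the channel name and message text are invented for the example):

        LISTEN foobar;
        NOTIFY foobar, 'order 12345 was updated';
        -- or equivalently, e.g. from inside a trigger function:
        SELECT pg_notify('foobar', 'order 12345 was updated');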

    Here’s some sample code …


    database open-source postgres

    PostgreSQL odd checkpoint failure

    By David Christensen
    September 14, 2010

    Nothing strikes fear into the heart of a DBA like error messages, particularly ones which indicate that there may be data corruption. One such situation happened to us recently, during an upgrade to PostgreSQL 8.1.21. We had updated the software and had been manually running a REINDEX DATABASE command when we started to notice some errors being reported on the front end. We decided to dump the database in question to ensure we had a backup to return to; however, we still ended up with more error messages:

      pg_dump -Fc database1 > pgdump.database1.archive
    
      pg_dump: WARNING:  could not write block 1 of 1663/207394263/443523507
      DETAIL:  Multiple failures --- write error may be permanent.
      pg_dump: ERROR:  could not open relation 1663/207394263/443523507: No such file or directory
      CONTEXT:  writing block 1 of relation 1663/207394263/443523507
      pg_dump: SQL command to dump the contents of table "table1" failed: PQendcopy() failed.
      pg_dump: Error message from server: ERROR:  could not open relation 1663/207394263/443523507: No such file or directory
      CONTEXT:  writing block 1 of relation 1663/207394263/443523507
      pg_dump: The command …
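
    When the server identifies a relation only by its on-disk path like this, one way to find out which table is affected (a sketch; the filenode comes from the messages above, and the query must run in the affected database):

        SELECT relname, relkind FROM pg_class WHERE relfilenode = 443523507;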

    database postgres

    jQuery Auto-Complete in Interchange

    By Jeff Boes
    September 13, 2010

    “When all you have is a hammer, everything looks like a nail.”

    Recently, I’ve taken some intermediate steps in using jQuery for web work, in conjunction with Interchange and non-Interchange pages. (I’d done some beginner stuff, but now I’m starting to see nails, nails, and more nails.)

    Here’s how easy it was to add an auto-complete field to an IC admin page. In this particular application, a <select> box would have been rather unwieldy, as there were 400+ values that could be displayed.

    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"
    type="text/javascript"></script>
    
    <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/lib/jquery.bgiframe.min.js"></script>
    
    <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/lib/jquery.dimensions.js"></script>
    
    <script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js"></script>
    

    That’s the requisite header stuff. Then you set up the internal list of autocomplete terms: …
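
    That setup is trimmed in this excerpt, but with this generation of the plugin the usual pattern is to hand .autocomplete() a plain array of terms; a minimal sketch (the field id and values are invented for the example):

        // The field id (#item_code) and term list are made up for illustration
        var terms = ["Alpha Widget", "Beta Widget", "Gamma Gadget"];  // ...400+ values in practice
        $(document).ready(function() {
            $("#item_code").autocomplete(terms);
        });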


    interchange javascript jquery

    Perl Testing - stopping the firehose

    By Greg Sabino Mullane
    September 13, 2010

    I maintain a large number of Perl modules and scripts, and one thing they all have in common is a test suite: a collection of scripts inside a “t” subdirectory used to thoroughly test the program’s behavior. In Perl, this means using the awesome Test::More module, which uses the Test Anything Protocol (TAP). While I love Test::More, I often find myself needing to stop the testing entirely after a certain number of failures (usually one). This is the solution I came up with.

    Normally tests are run as a group, by invoking all files named t/*.t; each file has numerous tests inside of it, and these individual tests issue a pass or a fail. At the end of each file, a summary is output stating how many tests passed and how many failed. So why is stopping after a failed test even needed? The reasons below mostly relate to the tests I write for the Bucardo program, which has a fairly large and complex test suite. Some of the reasons I like having fine-grained control of when to stop are:

    • Scrolling back through screens and screens of failing tests to find the point where the test began to fail is not just annoying, but a very unproductive use of my …
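
    The post’s own solution is trimmed above; for comparison, the bluntest built-in tool is Test::More’s BAIL_OUT, which halts not just the current file but the entire run. A minimal sketch (connect_to_test_db() is a hypothetical helper standing in for real setup):

        use strict;
        use warnings;
        use Test::More tests => 2;

        # connect_to_test_db() is hypothetical; substitute your own setup code
        my $dbh = eval { connect_to_test_db() };
        ok($dbh, 'connected to test database')
            or BAIL_OUT('cannot connect; no point running the remaining tests');
        ok($dbh->ping, 'connection is alive');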


    perl postgres testing

    Reducing bloat without locking

    By Josh Tolley
    September 9, 2010

    It’s not altogether uncommon to find a database where someone has turned off vacuuming, for a table or for the entire database. I assume people do this thinking that vacuuming is taking too much processor time or disk IO or something, and needs to be turned off. While this fixes the problem very temporarily, in the long run it causes tables to grow enormous and performance to take a dive. There are two ways to fix the problem: moving rows around to consolidate them, or rewriting the table completely. Prior to PostgreSQL 9.0, VACUUM FULL did the former; in 9.0 and above, it does the latter. CLUSTER is another suitable alternative, which also does the latter. Unfortunately all these methods require heavy table locking.
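
    For concreteness, the heavyweight options look like this (a sketch; foo and foo_pkey are placeholders):

        -- Both of these take an ACCESS EXCLUSIVE lock for the duration,
        -- blocking reads and writes against the table
        VACUUM FULL foo;
        CLUSTER foo USING foo_pkey;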

    Recently I’ve been experimenting with an alternative method – sort of a VACUUM FULL Lite. Vanilla VACUUM can reduce table size when the pages at the end of a table are completely empty. The trick is to empty those pages of live data. You do that by paying close attention to the table’s ctid column:

    5432 josh@josh# \d foo
          Table "public.foo"
     Column |  Type   | Modifiers 
    --------+---------+-----------
     a      | integer | not null
     b      | integer | …
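
    The rest of that session is trimmed here, but the general shape of the trick looks something like this (page and tuple numbers are purely illustrative):

        -- How many pages does the table occupy right now?
        SELECT relpages FROM pg_class WHERE relname = 'foo';

        -- A no-op update rewrites the row; the new version lands on an
        -- earlier page whenever free space is available there
        UPDATE foo SET a = a WHERE ctid = '(41,3)';

        -- Once the tail pages contain no live rows, plain VACUUM truncates them
        VACUUM foo;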

    postgres

    CSS Sprites and a “Live” Demo

    By Steph Skardal
    September 6, 2010

    I’ve recently recommended CSS sprites to several clients, but the majority don’t understand what CSS sprites are or what their impact is. In this article I’ll present some examples of using CSS sprites and their impact.

    First, an intro: CSS sprites are a technique that uses a combination of CSS rules and a single background image, an aggregate of many smaller images, to display the image elements on a webpage. The CSS rules set the boundaries and offset that define the part of the image to show. I like to describe the technique as analogous to a Ouija board: the CSS acts as the little [rectangular] magnifying glass, showing only a portion of the image.
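
    A bare-bones illustration of those rules (the file name, sizes, and offsets are invented for the example):

        /* One shared background image; each class displays a different 16x16 slice */
        .icon {
            background-image: url("/images/sprite.png");
            background-repeat: no-repeat;
            width: 16px;
            height: 16px;
        }
        .icon-cart { background-position: 0 0; }      /* top-left image in the sprite */
        .icon-user { background-position: -16px 0; }  /* the image 16px to its right */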

    It’s important to choose which images should be in a sprite based on how much each image is repeated throughout a site’s design and how often it might be replaced. For example, design border images and icons will likely be included in a sprite since they may be repeated throughout a site’s appearance, but a photo on the homepage that’s replaced daily is not a good candidate to be included in a sprite. I also typically exclude a site’s logo from a sprite since it may be used by externally linking sites. End Point uses CSS …


    ecommerce optimization

    Guidelines for Interchange site migrations

    By Ron Phipps
    September 3, 2010

    I’m often involved in Interchange site migrations at End Point. These migrations can come about because a new client needs hosting with us, or because a site is moving from one server to another within our own infrastructure.

    There are many different ways to do a migration; in the end, though, we need to hit certain points to make sure the migration goes smoothly. Below you will find steps that you can adapt for your specific migration.

    The start of a migration can be a good time to introduce git for source control, as sketched below. You can do this by creating a repository and cloning it to /home/account/live, then setting up .gitignore files for logs, counter files, and gdbm files. Commit the changes back to the repo, and you’ve introduced source control without much effort, improving your ability to make changes to the site in the future. This also helps document the changes you make to the code base along the way, in case you need to merge changes from the current production site before completing the migration.
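
    A sketch of that setup (the paths and the ignore list are illustrative):

        $ git init --bare /home/account/repo.git
        $ git clone /home/account/repo.git /home/account/live
        $ cd /home/account/live
        $ printf '%s\n' 'logs/' '*.counter' '*.gdbm' > .gitignore
        $ git add .
        $ git commit -m 'Import site at start of migration'
        $ git push origin master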

    • Export all of the gdbm databases to their text file equivalents on the production server

    • Take a backup from production of the database, catalog, …


    ecommerce environment git interchange perl