Anonymous code blocks
With the release of PostgreSQL 9.0 comes the ability to execute “anonymous code blocks” in several of PostgreSQL’s procedural languages. The idea stemmed from work in the autumn of 2009 that tried to answer a common question on IRC and the mailing lists: how do I grant a permission to a particular user for all objects in a schema? At the time, the only solution, short of manually writing commands to grant the permission in question on every object individually, was to write a script of some sort. Further discussion uncovered several people who often found themselves writing simple functions to handle various administrative tasks. Many of those people, it turned out, would rather issue a single statement than create a function, call it, and then drop (or simply ignore) a function they’d never need again. Hence, the new DO command.
The first language to support DO was PL/pgSQL. The PostgreSQL documentation provides an example that answers the original question: how do I grant permissions on everything to a particular user?
DO $$DECLARE r record;
BEGIN
FOR r IN SELECT table_schema, table_name FROM information_schema.tables
WHERE table_type = …
database open-source postgres
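For reference, a complete version of the pattern the excerpt above truncates might look like the following sketch. The role name webuser, the GRANT ALL privilege set, and the restriction to ordinary tables in the public schema are assumptions; adjust them to the permission you actually need.

DO $$DECLARE r record;
BEGIN
    -- Loop over every ordinary table in the public schema
    FOR r IN SELECT table_schema, table_name FROM information_schema.tables
             WHERE table_type = 'BASE TABLE' AND table_schema = 'public'
    LOOP
        -- quote_ident() protects against oddly named tables
        EXECUTE 'GRANT ALL ON '
            || quote_ident(r.table_schema) || '.' || quote_ident(r.table_name)
            || ' TO webuser';
    END LOOP;
END$$;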
pg_wrapper’s very symbolic links
I like pg_wrapper. For a development environment, or for testing replication scenarios, it’s brilliant. If you’re not familiar with pg_wrapper and its family of tools, it’s a set of scripts in the postgresql-common and postgresql-client-common packages available in Debian, as well as Ubuntu and other Debian-like distributions. As you may have guessed, pg_wrapper itself is a wrapper script that calls the correct version of the binary you’re invoking – psql, pg_dump, etc. – depending on the version of the database you want to connect to. Maybe not all that exciting in itself, but implied therein is the really cool bit: this set of tools lets you manage multiple installations of Postgres, spanning multiple versions, easily and reliably.
Well, usually reliably. We were helping a client upgrade their production boxes from Postgres 8.1 to 8.4. This was just before the 9.0 release; otherwise we’d have considered moving them directly to that instead. It was going fairly smoothly until we hit this message on one box:
Could not parse locale out of pg_controldata output

Oops: they had pinned the older postgresql-common version. An upgrade of those packages, and no more error!
$ pg_lsclusters
Version Cluster …
database postgres
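To give a flavor of the multi-cluster workflow, here is a sketch of what this looks like on a healthy box; the versions, ports, and paths are made up for illustration:

$ pg_lsclusters
Version Cluster Port Status Owner    Data directory
8.1     main    5432 online postgres /var/lib/postgresql/8.1/main
8.4     main    5433 online postgres /var/lib/postgresql/8.4/main

# The wrapped binaries select a cluster via --cluster (or the PGCLUSTER
# environment variable), so several versions coexist side by side:
$ psql --cluster 8.4/main -c 'SELECT version();'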
Listen/Notify improvements in PostgreSQL 9.0
Improved listen/notify is one of the new features of Postgres 9.0 that I’ve been waiting on for a long time. There are basically two major changes: everything is in shared memory instead of using system tables, and full support for “payload” messages has been added.
Before I demonstrate the changes, here’s a review of what exactly the listen/notify system in Postgres is. Basically, it is an inter-process signalling system, which (prior to 9.0) uses the pg_listener system table to coordinate simple named events between processes. One or more clients connect to the database and issue a command such as:
LISTEN foobar;

The name foobar can be replaced by any valid name; usually the name is something that gives a contextual clue to the listening process, such as the name of a table. Another client (or even one of the original ones) will then issue a notification like so:
NOTIFY foobar;

Each client that is listening for the ‘foobar’ message will receive a notification that the sender has issued the NOTIFY. It also receives the PID of the sending process. Multiple notifications are collapsed into a single notice, and the notification is not sent until a transaction is committed.
Here’s some sample …
database open-source postgres
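As a quick illustration of the 9.0-style payload described above, here is a sketch of a two-session exchange; the channel name and payload string are arbitrary:

-- Session 1: subscribe to the channel
LISTEN foobar;

-- Session 2: send a notification carrying an optional payload (new in 9.0)
NOTIFY foobar, 'rowid=12345';
-- or, equivalently, using the function form:
SELECT pg_notify('foobar', 'rowid=12345');

-- Once session 2 commits, session 1 sees something like:
-- Asynchronous notification "foobar" with payload "rowid=12345"
-- received from server process with PID 4242.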
PostgreSQL odd checkpoint failure
Nothing strikes fear into the heart of a DBA like error messages, particularly ones which indicate possible data corruption. One such situation happened to us recently, during an upgrade to PostgreSQL 8.1.21. We had updated the software and had been manually running a REINDEX DATABASE command when we started to notice some errors being reported on the front end. We decided to dump the database in question to ensure we had a backup to return to; however, we still ended up with more error messages:
pg_dump -Fc database1 > pgdump.database1.archive
pg_dump: WARNING: could not write block 1 of 1663/207394263/443523507
DETAIL: Multiple failures --- write error may be permanent.
pg_dump: ERROR: could not open relation 1663/207394263/443523507: No such file or directory
CONTEXT: writing block 1 of relation 1663/207394263/443523507
pg_dump: SQL command to dump the contents of table "table1" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: could not open relation 1663/207394263/443523507: No such file or directory
CONTEXT: writing block 1 of relation 1663/207394263/443523507
pg_dump: The command …
database postgres
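When an error names a relation only by the numbers in its file path, a common first diagnostic step is to translate them back into a relation name. The sketch below uses the relfilenode from the messages above; 1663 is the default tablespace and 207394263 is the database’s OID.

-- Run inside the affected database:
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 443523507;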
jQuery Auto-Complete in Interchange
“When all you have is a hammer, everything looks like a nail.”
Recently, I’ve taken some intermediate steps in using jQuery for web work, in conjunction with Interchange and non-Interchange pages. (I’d done some beginner stuff, but now I’m starting to see nails, nails, and more nails.)
Here’s how easy it was to add an auto-complete field to an IC admin page. In this particular application, a <select> box would have been rather unwieldy, as there were 400+ values that could be displayed.
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"
type="text/javascript"></script>
<script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/lib/jquery.bgiframe.min.js"></script>
<script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/lib/jquery.dimensions.js"></script>
<script type="text/javascript" src="http://dev.jquery.com/view/trunk/plugins/autocomplete/jquery.autocomplete.js"></script>

That’s the requisite header stuff. Then you set up the internal list of autocomplete terms: …
interchange javascript jquery
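To sketch the rest of the setup: the classic autocomplete plugin loaded above took an array of terms plus an options object. The field ID, the term list, and the option values below are assumptions for illustration.

<script type="text/javascript">
// Hypothetical short list standing in for the 400+ real values:
var terms = ["Alpha Widget", "Beta Widget", "Gamma Gadget"];
$(document).ready(function () {
    $("#item_code").autocomplete(terms, {
        matchContains: true,  // match anywhere in the term, not just the prefix
        max: 20               // cap how many suggestions render at once
    });
});
</script>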
Perl Testing - stopping the firehose
I maintain a large number of Perl modules and scripts, and one thing they all have in common is a test suite, which is basically a collection of scripts inside a “t” subdirectory used to thoroughly test the behavior of the program. When using Perl, this means you are using the awesome Test::More module, which uses the Test Anything Protocol (TAP). While I love Test::More, I often find myself needing to stop the testing entirely after a certain number of failures (usually one). This is the solution I came up with.
Normally tests are run as a group, by invoking all files named t/*.t; each file has numerous tests inside of it, and these individual tests issue a pass or a fail. At the end of each file, a summary is output stating how many tests passed and how many failed. So why is stopping after a failed test even needed? The reasons below mostly relate to the tests I write for the Bucardo program, which has a fairly large and complex test suite. Some of the reasons I like having fine-grained control of when to stop are:
- Scrolling back through screens and screens of failing tests to find the point where the test began to fail is not just annoying, but a very unproductive use of my …
perl postgres testing
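One way to get this behavior, sketched below, is to wrap Test::More’s ok() in a helper that bails out once a failure threshold is reached; the actual solution used for Bucardo’s suite may differ in detail.

#!/usr/bin/env perl
use strict;
use warnings;
use Test::More;

my $failures_allowed = 1;   # stop after the first failure (an assumption)
my $failures = 0;

# Wrap ok() so a failing test can halt the entire run via BAIL_OUT
sub check_ok {
    my ($test, $name) = @_;
    my $result = ok($test, $name);
    if (!$result and ++$failures >= $failures_allowed) {
        BAIL_OUT("Reached $failures failure(s); stopping the firehose");
    }
    return $result;
}

check_ok(1 + 1 == 2, 'arithmetic still works');
check_ok(2 + 2 == 5, 'this one fails and halts the run');
check_ok(3 + 3 == 6, 'never reached');

done_testing();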
Reducing bloat without locking
It’s not altogether uncommon to find a database where someone has turned off vacuuming, for a table or for the entire database. I assume people do this thinking that vacuuming is taking too much processor time or disk IO or something, and needs to be turned off. While this fixes the problem very temporarily, in the long run it causes tables to grow enormous and performance to take a dive. There are two ways to fix the problem: moving rows around to consolidate them, or rewriting the table completely. Prior to PostgreSQL 9.0, VACUUM FULL did the former; in 9.0 and above, it does the latter. CLUSTER is another suitable alternative, which also does the latter. Unfortunately all these methods require heavy table locking.
Recently I’ve been experimenting with an alternative method—sort of a VACUUM FULL Lite. Vanilla VACUUM can reduce table size when the pages at the end of a table are completely empty. The trick is to empty those pages of live data. You do that by paying close attention to the table’s ctid column:
5432 josh@josh# \d foo
      Table "public.foo"
 Column |  Type   | Modifiers
--------+---------+-----------
 a      | integer | not null
 b      | integer | …
postgres
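To illustrate the idea with a sketch (the page numbers and tids below are invented; inspect your own table’s ctids first):

-- How many pages does the table occupy? (total size / block size)
SELECT pg_relation_size('foo') / current_setting('block_size')::int AS pages;

-- ctid is (page, tuple), and page numbers start at 0, so in a 42-page
-- table the last page is 41. A no-op UPDATE rewrites the rows living
-- there, and the new row versions can land in free space earlier in
-- the table:
UPDATE foo SET a = a WHERE ctid = ANY (ARRAY['(41,1)','(41,2)']::tid[]);

-- Plain VACUUM can now truncate the empty trailing page:
VACUUM foo;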
CSS Sprites and a “Live” Demo
I’ve recently recommended CSS sprites to several clients, but most of them don’t understand what CSS sprites are or what they accomplish. In this article I’ll present some examples of the technique and its impact.
First, an intro: CSS sprites are a technique that uses a combination of CSS rules and a single background image (an aggregate of many smaller images) to display the image elements on a webpage. The CSS rules set the boundaries and offset that define the part of the image to show. I like to refer to the technique as analogous to a “Ouija board”: the CSS acts as the little [rectangular] magnifying glass that shows only a portion of the image.
It’s important to choose which images should be in a sprite based on how much each image is repeated throughout a site’s design and how often it might be replaced. For example, design border images and icons will likely be included in a sprite since they may be repeated throughout a site’s appearance, but a photo on the homepage that’s replaced daily is not a good candidate to be included in a sprite. I also typically exclude a site’s logo from a sprite since it may be used by externally linking sites. End Point uses CSS …
ecommerce optimization
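A minimal sketch of the mechanics; the file name, icon size, and offsets are invented for illustration. icons.png is assumed to be one image holding two 16x16 icons stacked vertically.

.icon {
    width: 16px;              /* the "magnifying glass" is 16x16 */
    height: 16px;
    background: url("icons.png") no-repeat;
    display: inline-block;
}
.icon-home { background-position: 0 0; }      /* top cell of the sprite */
.icon-cart { background-position: 0 -16px; }  /* shift up to the 2nd cell */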
