PostgreSQL Conference - PGCon 2010 - Day Two

Day two of the PostgreSQL Conference started a little later than the previous day in obvious recognition of the fact that many people were up very, very late the night before. (Technically, this is day four, as the first two days consisted of tutorials; this was the second day of “talks”.)
The first talk I went to was PgMQ: Embedding messaging in PostgreSQL by Chris Bohn. It was well attended, although there were definitely a lot of late-comers and bleary eyes. A tough slot to fill! Chris is from Etsy.com and I’ve worked with him there, although I had no interaction with the PgMQ project, which looks pretty cool. From the talk description:
PgMQ (PostgreSQL Message Queueing) is an add-on that embeds a messaging client inside PostgreSQL. It supports the AMQP, STOMP and OpenWire messaging protocols, meaning that it can work with all of the major messaging systems such as ActiveMQ and RabbitMQ. PgMQ enables two replication capabilities: “Eventually Consistent” Replication and sharding.
As near as I can tell, “eventually consistent” is the same as “asynchronous replication”: the slave won’t be the same as the master right away, but will be eventually. As with Bucardo and Slony, the …
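For a rough feel of what queue-based, eventually consistent replication looks like in general, here is a minimal sketch of a consumer draining a queue into a replica. This is not PgMQ's actual interface; the queue name, message format (one SQL statement per message), and connection details are all assumptions:

import pika
import psycopg2

# Hypothetical replica connection and queue name; PgMQ's real interface differs.
replica = psycopg2.connect("dbname=replica")
channel = pika.BlockingConnection(pika.ConnectionParameters("localhost")).channel()
channel.queue_declare(queue="replication")

def apply_change(ch, method, properties, body):
    # Assume each message carries one SQL statement captured on the master.
    cur = replica.cursor()
    cur.execute(body.decode())
    cur.close()
    replica.commit()
    ch.basic_ack(delivery_tag=method.delivery_tag)

# The replica lags while messages wait in the queue, but converges once they
# are consumed, which is what "eventually consistent" means here.
channel.basic_consume(queue="replication", on_message_callback=apply_change)
channel.start_consuming()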
community conference database open-source postgres bucardo replication
Spree and Multi-site Architecture for Ecommerce
Running multiple stores from a single ecommerce platform instance seems to be quite popular these days. End Point has worked with several clients to develop a multi-store architecture running from one Interchange installation. In the case of our work with Backcountry.com, the data structure requires that a site_id column be included in the product and navigation tables to specify which store each product and category belongs to. Frontend views are built from “partial views” or “view components” organized into a per-site directory structure, and database calls request products against the current site_id. Below is a simplified view of the data structure used for Backcountry.com’s multi-site architecture.
Basic multi-store data architecture
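As an illustration of how queries get scoped per store under this model, here is a minimal sketch; the products table and site_id column come from the description above, while the connection settings and remaining column names are assumptions (the actual implementations live in Interchange and Spree, not Python):

import psycopg2

# Hypothetical connection settings; the products/site_id structure follows
# the multi-store data model described above.
conn = psycopg2.connect("dbname=store user=store")

def products_for_site(site_id):
    # Every query is scoped to one store, so all stores share the same tables.
    cur = conn.cursor()
    cur.execute(
        "SELECT sku, description, price FROM products WHERE site_id = %s",
        (site_id,),
    )
    rows = cur.fetchall()
    cur.close()
    return rows

# Each storefront passes its own site_id when rendering its views.
print(products_for_site(1))

Every storefront runs the same code against the same tables; only the site_id it passes differs.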
A similar data model was implemented and enhanced for another client, College District. A site_id column exists in multiple tables, including the products and navigation tables, and each site can only access data in the current site schema. College District goes one step further for appearance management by storing CSS values in the database and generating CSS stylesheets on the fly from those stored settings. This …
ecommerce ruby rails spree
PostgreSQL Conference - PGCon 2010 - Day One
The first day of talks for PGCon 2010 is now over; here’s a recap of the parts that I attended.
On Wednesday, the developer’s meeting took place. It was basically 20 of us gathered around a long conference table, with Dave Page keeping us to a strict schedule. While there were a few side conversations and contentious issues, overall we covered an amazing amount of ground in a short period of time, and actually made action items out of almost all of it. My favorite decision we made was to finally move to git, something that I and others have been championing for years. The other most interesting parts for me were the discussion of which features we will try to focus on for 9.1 (it’s an ambitious list, no doubt), and DDL triggers! It sounds like Jan Wieck has already given this a lot of thought, so I’m looking forward to working with him on implementing these triggers (or at least nagging him about it if he slows down). These triggers will be immensely useful to replication systems like Bucardo and Slony, which currently implement DDL replication in a very manual and unsatisfactory way. These triggers will not be like the current triggers, in that they will not be directly attached to …
community conference database open-source postgres mongodb
PostgreSQL 8.4 on RHEL 4: Teaching an old dog new tricks
So a client has been running a really old version of PostgreSQL in production for a while. We finally got the approval to upgrade them from 7.3 to the latest 8.4. Considering the age of the installation, it should come as little surprise that they had been running a similarly ancient OS: RHEL 4.
Like the installed PostgreSQL version, RHEL 4 is ancient—5 years old. I anticipated that in order to get us to a current version of PostgreSQL, we’d need to resort to a source build or rolling our own PostgreSQL RPMs. Neither approach was particularly appealing.
While the age/decrepitude of the current machine’s OS came as little surprise, what did come as a surprise was that there were supported RPMs available for RHEL 4 in the community Yum RPM repository (modulo your architecture of choice).
In order to get things installed, I followed the instructions for installing the specific yum repo. There were a few confusing seconds when the installation command gave a “permission denied” error while attempting to install the 8.4 PGDG rpm as root. A little brainstorming and an lsattr later revealed that a previous administrator, apparently in the quest for über-security, had …
database postgres redhat
PostgreSQL switches to Git
Looks like the Postgres project is finally going to bite the bullet and switch to git as the canonical VCS. Some details are yet to be hashed out, but the decision has been made and a new repo will be built soon. Now to lobby to get that commit-with-inline-patches list created…
database git postgres
Finding the PostgreSQL version — without logging in!
Metasploit used the error messages given by a PostgreSQL server to find out the version without actually having to log in and issue a “SELECT version()” command. The original article is at http://blog.metasploit.com/2010/02/postgres-fingerprinting.html and is worth a read. I’ll wait.
The basic idea is that because version 3 of the Postgres protocol gives you the file and the line number in which the error is generated, you can use that information to figure out what version of Postgres is running, as the line numbers change from version to version. In effect, each version of Postgres reveals enough in its error message to fingerprint it. This was a neat little trick, and I wanted to explore it more myself. The first step was to write a quick Perl script to connect and get the error string out. The original Metasploit script focuses on failed login attempts, but after some experimenting I found that an easier way was to send an invalid protocol number (Postgres expects “2.0” or “3.0”). Sending a startup packet with an invalid protocol of “3.1” gave me back the following string:
E|SFATALC0A000Munsupported frontend protocol 3.1:
server supports 1.0 to 3.0 …
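To make the trick concrete, here is a minimal sketch of sending such a malformed startup packet and reading the error back; it is not the Perl script referred to above or the Metasploit module, just the same idea with hypothetical host and port:

import socket
import struct

def postgres_error_string(host="localhost", port=5432):
    # A startup packet is an int32 length (which includes itself) followed by
    # an int32 protocol version. Requesting 3.1, which no server supports,
    # provokes an error response containing the source file and line number.
    packet = struct.pack("!ii", 8, (3 << 16) | 1)
    sock = socket.create_connection((host, port))
    sock.sendall(packet)
    reply = sock.recv(4096)
    sock.close()
    return reply

print(postgres_error_string())

Mapping the file name and line number in the reply to known releases then pins down the server version, just as described above.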
database postgres security
Thrillist Buys JackThreads
We were excited to learn yesterday that “flash sale” site JackThreads was acquired by Thrillist. Congratulations!
End Point has provided technical assistance for the members-only JackThreads store in numerous ways: provisioning hosting, setting up automated development environments, improving performance for scalability, integrating with third-party systems, and various development projects.
Though the businesses and technology are completely unrelated, End Point has also helped develop and support the growth of Backcountry.com’s several trailblazing “one deal at a time” (ODAT) sites, starting with SteepandCheap.com.
The “flash sale” business model is a lot of fun and opens up new opportunities for retail and technical innovation alike, and we look forward to more work in this area.
ecommerce clients
Continuing an interrupted git-svn clone
I’ve run into this issue before when using git-svn to clone a large svn repo: something interrupts the transfer, and you end up having to restart the git-svn clone process. Attempting to rerun git-svn clone against a partially transferred directory just results in error messages from git-svn, and it’s not immediately clear what you need to do to pick the process back up from where you left off.
In the past I’ve just blown away the partially-transferred repo and started the clone over, but that’s a waste of time and server resources, not to mention extremely frustrating, particularly if you’re substantially into the clone process.
Fortunately, this is not necessary; just go into your partially retrieved git-svn repo and execute git svn fetch. This continues fetching the svn revisions from where you left off. When the process completes, you will have an empty directory with just the .git directory present. Looking at git status shows all of the project files as deleted (oh noes!), however this is just misdirection. At this point, you just need to issue a git reset --hard to check out the files in the HEAD commit.
More illustratively:
$ git svn clone …
git