    Git it in your head


    By Ethan Rowe
    February 1, 2009

    Git is an interesting piece of software. For some, it comes pretty naturally. For others, it’s not so straightforward.

    Comprehension and appreciation of Git are not functions of intellectual capacity. However, the lack of comprehension/appreciation may well indicate one of the following:

    1. Mistakenly assuming that concepts/procedures from other VCSes (particularly non-distributed “traditional” ones like CVS or Subversion) are actually relevant when using Git

    2. Not adequately appreciating the degree to which Git’s conception of content and history represents a logical layer, rather than an implementation detail

    CVS and Subversion both invite the casual user to basically equate the version control repository and all operations around it to the file system itself. They ask you to understand how files and directories are treated and tracked within their respective models, but that model is basically oriented around files and directories, period. Yes, there are branches and tags. Branches in particular are entirely inadequate in both systems. They don’t really account for branching as a core possibility that should be structured into the logical model itself; consequently, both systems …
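    The contrast is easy to demonstrate: in Git, a branch is part of the logical model itself, just a small named pointer to a commit. A quick sketch in a throwaway repository (the path and branch name are arbitrary):

```shell
# In Git, creating a branch copies nothing: it writes a single small
# ref file recording the commit the branch points at.
git init -q /tmp/branch-demo
cd /tmp/branch-demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git branch experiment            # instant: records the current commit id
cat .git/refs/heads/experiment   # the entire "branch" is one commit hash
```

    Because a branch is just a ref, creating, deleting, and switching branches are constant-time operations, which is why branching feels like a first-class concept in Git rather than a bolted-on convention.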


    git

    Using cron and psql to transfer data across databases


    By Greg Sabino Mullane
    February 1, 2009

    I recently had to move information from one database to another automatically. I centralized some auditing information so that specific information about each database in the cluster could be stored in a single table, inside a single database. While I still needed to copy the associated functions and views to each database, I was able to make use of the new “COPY TO query” feature to do it all in one step via cron.

    At the top of the cron script, I added two lines defining the database I was pulling the information from (“alpha”), and the database I was sending the information to (“postgres”):

    PSQL_ALPHA='/usr/bin/psql -X -q -t -d alpha'
    PSQL_POSTGRES='/usr/bin/psql -X -q -t -d postgres'

    From left to right, the options tell psql to not use any psqlrc file found (-X), to be quiet in the output (-q), to print tuples only and no header/footer information (-t), and the name of the database to connect to (-d).

    The cron entry that did the work looked like this:

    */5 * * * * (echo "COPY audit_mydb_stats FROM STDIN;" && $PSQL_ALPHA -c "COPY (SELECT *, current_database(), now(), round(date_part('epoch'::text, now())) FROM …
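    Stripped of the cron scheduling, the underlying pattern is one pipeline: a COPY … TO STDOUT on the source database feeding a COPY … FROM STDIN on the target. A minimal sketch, with hypothetical table names (“stats” and “stats_archive”):

```shell
# Hypothetical tables: pull rows from "stats" in database alpha and
# load them into "stats_archive" in database postgres, in one pipeline.
PSQL_ALPHA='/usr/bin/psql -X -q -t -d alpha'
PSQL_POSTGRES='/usr/bin/psql -X -q -t -d postgres'

$PSQL_ALPHA -c "COPY (SELECT * FROM stats) TO STDOUT" \
  | $PSQL_POSTGRES -c "COPY stats_archive FROM STDIN"
```

    Because COPY streams rows as plain text over the pipe, no intermediate file is needed and the whole transfer can live in a single cron line.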

    postgres

    Take pleasure in small things


    By Ethan Rowe
    January 29, 2009

    We’re in the midst of our 2009 company meeting, and are having our first-ever “hackathon”. The engineering team is divided into several working groups focusing on a variety of free software projects.

    As a distributed organization, we don’t always get a lot of opportunity to write code and do our “real work” side-by-side like this. And it’s a pleasure to witness and to participate.

    Just thought I’d share.


    conference

    Why not OpenAFS?


    By Steven Jenkins
    January 28, 2009

    OpenAFS is not always the right answer for a filesystem. While it is a good network filesystem, there are usage patterns that don’t fit well with OpenAFS, and there are some issues with OpenAFS that should be considered before adopting or using it.

    First, if you don’t really need a network filesystem, the overhead of OpenAFS may not be worthwhile. If you mostly write data but seldom read it across the network, the OpenAFS cache may hinder performance rather than help. OpenAFS might not be a good place to put web server logs, for example, which are written to very frequently but seldom read.

    OpenAFS is neither a parallel filesystem nor a high-performance filesystem. In high-performance computing (HPC) situations, a single system (or small set of systems) may write a large amount of data, and then a large number of systems may read from that. In general, OpenAFS does not scale well for multiple parallel reads of read-write data, but it scales very well for parallel reads of replicated read-only data. Because read-only replication is not instantaneous, depending on the latencies that can be tolerated, OpenAFS may or may not be a good choice. If you need to write and immediately …
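    The read-only replication mentioned above is an explicit administrative step: read-only clones of a volume only pick up changes when the administrator releases them, which is where the propagation latency comes from. A sketch with a hypothetical volume and server names:

```shell
# Hypothetical volume "proj.www": define read-only sites on two file
# servers, then push the current read-write contents out to the clones.
vos addsite fs1.example.com a proj.www   # register a read-only site
vos addsite fs2.example.com a proj.www
vos release proj.www                     # propagate RW -> RO; not instantaneous
```

    Until the next `vos release`, clients reading the read-only replicas continue to see the previously released contents, which is exactly the latency trade-off described above.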


    openafs

    Slow Xen virtualization of RHEL 3 i386 guest on RHEL 5 x86_64


    By Jon Jensen
    January 23, 2009

    It seems somehow appropriate that this post so closely follows Ethan’s recent note about patches vs. complaints in free software. Here’s the situation and the complaint (no patch, I’m sorry to say):

    We’re migrating an old server into a virtual machine on a new server, because our client needs to get rid of the old server very soon. Afterwards we will migrate the services piecemeal to run natively on RHEL 5 x86_64 with current versions of each piece of the software stack, so we have time to test compatibility and make adjustments without being in a big hurry.

    The old server is running RHEL 3 i386 on 2 Xeon @ 2.8 GHz CPUs (hyperthreaded), 4 GB RAM, 2 SCSI hard disks in RAID 1 on MegaRAID, running Red Hat’s old 2.4.21-4.0.1.ELsmp kernel.

    The new server is running RHEL 5 x86_64 on 2 Xeon quad-core L5410 @ 2.33GHz CPUs, 16 GB RAM, 6 SAS hard disks in RAID 10 on LSI MegaRAID, running Red Hat’s recent 2.6.18-92.1.22.el5xen kernel.

    The virtual machine is using Xen full virtualization, with 4 virtual CPUs and 4 GB RAM allocated, with a nearly identical copy of the operating system and applications from the old server. And it is bog-slow. Agonizingly slow.

    Under the load of even a …


    environment redhat

    College District launches 4 additional sites


    By Ron Phipps
    January 22, 2009

    We built a system for one of our clients, College District, that allows them to launch e-commerce sites fairly easily using a shared framework, database and administration panel. The first of the sites, Tiger District, launched over a year ago and has been successful in selling LSU-branded merchandise. A few weeks ago the following sites were launched on the system: Sooner District, Longhorn District, Gator District and Roll Tide District.

    The interesting parts of the system include a single Interchange installation serving two catalogs, one for the administration area and one for all of the stores. Each site gets its own htdocs area for its images and CSS files (which are generated by the site generator using the selected colors). A cool part of this setup is that a new feature, once added, appears on all sites instantly. The site code uses the request domain name to determine which user to connect to the database as. The heavy lifting of the multi-site capabilities is handled by a single Postgres database which utilizes roles, schemas and search paths to show or hide data based on the user that connected to the database. This works really well when it comes time to make changes to an …
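    The role/schema/search path arrangement can be sketched with psql. This is a hypothetical reduction (database, schema, and role names are invented), not the actual College District configuration:

```shell
# Hypothetical sketch: each store connects as its own role, whose
# search_path resolves unqualified table names to that store's schema
# first, falling back to a schema shared by all stores.
psql -X -d stores <<'SQL'
CREATE SCHEMA tiger;             -- per-store data
CREATE SCHEMA shared;            -- tables common to every store
CREATE ROLE tiger_user LOGIN;
ALTER ROLE tiger_user SET search_path = tiger, shared;
SQL
# Connected as tiger_user, "SELECT * FROM products" finds tiger.products
# if it exists, and otherwise falls back to shared.products.
```

    Because the schema resolution happens per role, the same application code serves every store; only the database user chosen at connection time changes which data is visible.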


    clients case-study

    Note to self


    By Ethan Rowe
    January 16, 2009

    In free software, patches are considerably more useful than complaints.

    It’s easy to forget.


    open-source tips

    The Orange Code


    By Jon Jensen
    January 15, 2009

    I’ve been reading the new book The Orange Code, the story of ING Direct by Arkadi Kuhlmann and Bruce Philp. Here are a few passages I liked from what I read today:

    The commitment to constantly learn is the only fair way to bring everyone in the company under the same umbrella. It is a leveler. (p. 213)

    … [W]e’ve got to earn it each day, and we need to feel that we have new challenges that can make us or break us every day. … Each day’s work will last only as long as it’s relevant. … [W]e did okay in each of the last seven years, but we are only ever as good as our last year, our last day, our last transaction. We still have a lot to do, since our competition is not resting. (pp. 208–209)

    Trust and faith not only are built over time, but they actually need the passage of time to validate them. (p. 197)

    Contributing is a privilege earned, not a right. And there are, indeed, bad ideas, most of which are answers to questions the contributors didn’t really understand in the first place. There is a reason why some of the world’s finest jazz musicians were classically trained: You have to understand the rules before you can intelligently improvise on them. (p. 195)


    books
    Page 210 of 222