    Our Blog

    Ongoing observations by End Point Dev people

    Fun with 72GB disks: Filesystem performance testing


    By Selena Deckelmann
    September 9, 2008

    If you haven’t heard, the Linux Plumbers Conference is happening September 17-19, 2008 in Portland, OR. It’s a gathering designed to attract Linux developers: kernel hackers, tool developers, and problem solvers.

    I knew a couple of people from the Portland PostgreSQL User Group (PDXPUG) who were interested in pitching an idea for a talk on filesystem performance. We wanted to take conventional wisdom about filesystem performance and put it to the test on some sweet new hardware, recently donated for Postgres performance testing.

    Our talk was accepted, so the three of us have been furiously gathering data and drawing interesting conclusions ever since. We’ll be sharing six assumptions about filesystem performance, tested on five different filesystems under five types of load generated by fio, a benchmarking tool designed by kernel hacker Jens Axboe to test I/O.

    Look forward to seeing you there!


    conference performance

    Small changes can lead to significant improvements


    By Steve McIntosh
    September 5, 2008

    Case in point: we’ve been investigating various system management tools, both for internal use and possibly for some of our clients. One of these, Puppet from Reductive Labs, has a lot of features I like and comes with good references (Google uses it to maintain hundreds of Mac OS X laptop workstations).

    I was asked to see if I could identify any performance bottlenecks and perhaps fix them. With the aid of dtrace (on my own Mac OS X workstation) and the Ruby dtrace library, it was easy to spot that a lot of time was being eaten up in the “checksumming” routines.

    As with all system management tools, security is really important, and part of that security is making sure the files you are looking at and using are exactly the files you think they are. Thus, as part of surveying a system for modified files, each file is checksummed using an MD5 hash.

    To speed things up, at a small reduction in security, the Puppet checksumming routines have a “lite” option which feeds only the first 512 bytes of a file into the MD5 algorithm instead of the entire file, which can be quite large.

    As with most security packages these days, the way you implement an MD5 hash is to get a “digest” object, …
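    To make the tradeoff concrete, here is a minimal sketch in Python (using hashlib; an illustration only, not Puppet’s actual Ruby routines) of a full-file MD5 checksum versus a “lite” checksum that hashes only the first 512 bytes:

        import hashlib

        def md5_full(path):
            # Checksum the entire file, feeding it to the digest in chunks.
            digest = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        def md5_lite(path, limit=512):
            # Checksum only the first `limit` bytes: much faster on large files,
            # at the cost of missing any change past that point.
            digest = hashlib.md5()
            with open(path, "rb") as f:
                digest.update(f.read(limit))
            return digest.hexdigest()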


    security

    Stepping into version control


    By David Christensen
    September 5, 2008

    It’s no secret that we here at End Point love and encourage the use of version control systems to generally make life easier for both ourselves and our clients. While a full-fledged development environment is ideal for maintaining/developing new client code, not everyone has the time to implement one.

    A situation we’ve sometimes found is clients editing/updating production data directly. This can happen through a variety of means: direct server access, scp/sftp, or web-based editing tools which save directly to the file system.

    I recently implemented a script to provide transparent version control for a client who uses a web-based tool to manage their content. While they are still making changes to their site directly, we now have the ability to roll back any change on a file-by-file basis as files are created, modified, or deleted.

    I wanted something that was (1) fast, (2) useful, and (3) out of the user’s way. I turned naturally to Git.

    In the user’s account, I executed git init to create a new Git repository in their home directory. I then git added the relevant parts that we definitely wanted under version control. This included all of the …
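    To illustrate the general approach (a sketch, not the client’s actual script), a snapshot routine like the following could run from cron and commit whatever the web-based tool has created, modified, or deleted; the repository path and watched directories here are hypothetical:

        import subprocess

        REPO = "/home/clientuser"          # hypothetical home-directory repository
        WATCHED = ["htdocs", "content"]    # hypothetical paths kept under version control

        def snapshot():
            # Stage additions, modifications, and deletions under the watched paths.
            subprocess.run(["git", "add", "--all", *WATCHED], cwd=REPO, check=True)

            # `git diff --cached --quiet` exits non-zero only when something is staged,
            # so a commit happens only when the web tool actually changed a file.
            staged = subprocess.run(["git", "diff", "--cached", "--quiet"], cwd=REPO)
            if staged.returncode != 0:
                subprocess.run(
                    ["git", "commit", "-m", "Automatic snapshot of web-tool edits"],
                    cwd=REPO, check=True,
                )

        if __name__ == "__main__":
            snapshot()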


    git

    Standardized image locations for external linkage


    By Jeff Boes
    September 3, 2008

    Here’s an interesting thought: https://boingboing.net/2008/09/01/publishers-should-al.html

    Nutshell summary: publishers should put cover images of books into a standard, predictable location (like http://www.acmebooks.com/covers/{ISBN}.jpg).

    This could be extended for almost any e-commerce site where the product image might be useful for reviews, links, etc.

    At the very least, with Interchange actionmaps, a site could capture external references to such image requests for further study. (E.g., internally you might reference a product image as [image src=“images/products/current{SKU}”], but externally as “/products/{SKU}.jpg”; the actionmap wouldn’t be used for the site itself, only for other sites linking to your images.)
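    The same idea can be sketched outside Interchange. Here is a small Python illustration (the URL pattern, internal path layout, and log format are all hypothetical) of mapping a predictable external image URL to the real internal path while recording who linked to it:

        import re

        EXTERNAL = re.compile(r"^/products/(?P<sku>[\w-]+)\.jpg$")  # hypothetical public URL scheme
        INTERNAL = "images/products/current/{sku}.jpg"              # hypothetical internal layout

        def resolve_product_image(request_path, referrer, log):
            # Return the internal image path for a standardized external URL,
            # logging the referrer so external links can be studied later.
            match = EXTERNAL.match(request_path)
            if not match:
                return None                 # not one of the standardized image URLs
            sku = match.group("sku")
            log.write(f"{sku}\t{referrer or '-'}\n")
            return INTERNAL.format(sku=sku)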


    interchange

    Authorize.Net Transaction IDs to increase in size


    By Dan Collis-Puro
    September 2, 2008

    A sign of their success: Authorize.Net is going to break through transaction ID numbers greater than 2,147,483,647 (that is, 2^31 − 1), which exceed the maximum value of a signed MySQL int() column and of the default Postgres “integer” type.

    It probably makes sense to proactively ensure that your transaction ID columns are large enough; this would not be a fun bug to run into ex post facto.
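    As a quick sanity check on the arithmetic (a sketch assuming signed 32-bit columns), the quoted limit is exactly the largest value a signed 32-bit integer can hold, while a 64-bit bigint has ample headroom:

        import struct

        MAX_INT32 = 2**31 - 1                 # 2,147,483,647
        assert MAX_INT32 == 2147483647

        try:
            struct.pack(">i", MAX_INT32 + 1)  # the next transaction ID no longer fits in 32 bits
        except struct.error as exc:
            print("32-bit overflow:", exc)

        struct.pack(">q", MAX_INT32 + 1)      # a signed 64-bit column (bigint) holds it with room to spare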


    database postgres payments ecommerce

    Major rumblings in the browser world


    By Jon Jensen
    September 1, 2008

    Wow. There’s a lot going on in the browser world again all of a sudden.

    I recently came across a new open source browser, Midori, still in alpha status. It’s based on Apple’s WebKit (used in Safari) and is very fast. Surprisingly fast. Of course, it’s not done, and it shows. It crashes, many features aren’t yet implemented, etc. But it’s promising and worth keeping an eye on. It’s nice to have another KHTML/WebKit-based browser on free operating systems, too.

    Today news has come out about Google’s foray into the browser area with Chrome, a browser also based on WebKit. It’ll be open source, include a new fast JavaScript engine, and feature compartmentalized JavaScript for each page, so memory and processor usage will be easy to monitor per application, and individual pages can be killed without bringing the whole browser down. Code’s supposed to become available tomorrow.

    A new generation of Mozilla’s JavaScript engine, called TraceMonkey, is now in testing. It has a just-in-time (JIT) compiler and looks like it makes many complex JavaScript sites very fast. It sounds like this will appear formally in Firefox 3.1. Information on how to test it now is at John Resig’s …


    browsers

    Camps presentation at UTOSC 2008


    By Jon Jensen
    August 31, 2008

    Friday evening I gave a presentation on, and demonstration of, our “development camps” at the 2008 Utah Open Source Conference in Salt Lake City. Attendees seemed to get what camps are all about, asked some good questions, and we had some good conversations afterwards. You can read my presentation abstract and my slides and notes, and more will be coming soon at the camps website.

    I’ll post more later on some talks I attended and enjoyed at the conference.


    conference camps

    nginx and lighttpd deployments growing


    By Jon Jensen
    August 30, 2008

    Apache httpd is great. But it’s good to see Netcraft report that nginx and lighttpd continue to grow in popularity as well. Having active competition in the free software web server space is really beneficial to everyone, and these very lightweight and fast servers fill an important niche for dedicated static file serving, homegrown CDNs, etc. Thanks to all the developers involved!


    nginx hosting