Comparing installed RPMs on two servers
Sometimes I’m called on to deal with a problem that shows up only on one of two or more servers that are supposed to be configured identically, or nearly identically. One of the first things I do is run rpm -qa | sort on each machine and diff the output to see which RPM packages may be missing on one or the other server. I’ve never bothered to package this functionality up into a script because it’s so simple.
To exclude minor version differences, you need to specify a custom rpm --queryformat that leaves the version number off.
When a package shows up in the diff yet seems to be installed on both servers, you’re often looking at multiple architectures of the same package (e.g. i386 and x86_64), which RPM doesn’t show in its default query format.
Finally, to turn the diff output into a list of RPMs to install via yum, I usually do some combination of grep and sed to pick out the RPMs I need.
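The steps above can be sketched as a small shell session. The two package lists here are hand-made stand-ins for real output, since on actual servers you would generate them with rpm -qa --queryformat '%{NAME}.%{ARCH}\n' | sort on each host:

```shell
# Simulated "rpm -qa" output for two servers (name.arch, no version,
# so minor version differences don't show up in the diff):
list_a=$(mktemp); list_b=$(mktemp)
printf 'bash.x86_64\nvim-enhanced.x86_64\n'               > "$list_a"  # "server1"
printf 'bash.x86_64\nemacs.x86_64\nvim-enhanced.x86_64\n' > "$list_b"  # "server2"

# Packages present only on server2, extracted from the diff's ">" lines,
# ready to feed to "yum install" on server1:
diff "$list_a" "$list_b" | grep '^>' | sed 's/^> //'   # -> emacs.x86_64

rm -f "$list_a" "$list_b"
```

The grep/sed step is the same whichever direction you go; use the “&lt;” lines instead to find packages missing from server2.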
After all that the process isn’t entirely simple anymore, and I recently decided it was easier to script it than explain it all to someone else. I first looked around to see what scripts others have come up with, since this is certainly not a new need. I found the blog post “Compare …
hosting redhat sysadmin
A Solution to the Most Common Rails Authentication Problem
Q: What’s one of the most common authentication related mistakes?
A: Forgetting to write the code that triggers authentication.
Q: What can we do about it?
A: Make it easier to test authentication.
The most common authentication problem, one that probably affects every Rails app, is forgetting or overlooking the implementation of authentication. In Rails, this generally means forgetting to add a controller before filter that verifies the user is authenticated for actions that should be protected. Let me be the first to admit that I’m guilty of this myself, but I’ve noticed it occurring in every Rails app I’ve worked on.
Having seen this problem, committed it myself, and been bothered by it, I’ve come up with a small solution: my humble attempt to make it easier to track what is being authenticated and what isn’t. Before I show the solution I want to divulge that the current implementation has some shortcomings, which I will explain toward the end of the article, but I feel it’s still worthwhile; the good outweighs the bad.
The solution is to provide helpers that make it easy to unit test the authentication of …
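Since the article’s own helper code is elided above, here is a self-contained Ruby sketch of the general idea, not the post’s actual implementation: a toy stand-in for a controller class with a before-filter macro, plus a test helper that fails loudly when a controller forgets to require login. The class and filter names are hypothetical.

```ruby
# Toy stand-in for a Rails-style controller with a before_filter class macro.
class Controller
  def self.before_filters
    @before_filters ||= []
  end

  def self.before_filter(name)   # mimics the Rails class macro
    before_filters << name
  end
end

class OrdersController < Controller   # hypothetical app controller
  before_filter :require_login
end

# Test helper: raises if a controller forgets the authentication filter,
# so a unit test catches the omission instead of a user.
def assert_requires_login(controller_class)
  unless controller_class.before_filters.include?(:require_login)
    raise "#{controller_class} does not require login!"
  end
end

assert_requires_login(OrdersController)   # passes silently
```

A test suite can then loop over every controller that should be protected and call the helper, which is the "make it easier to test authentication" answer from the Q&A above.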
ruby rails
eCommerce Innovation Conference 2013
The eCommerce Innovation Conference 2013 is a new conference being held in Hancock, New York, between October 8th and 11th. The conference aims to discuss everything ecommerce with a focus on Perl-based solutions including Dancer and Interchange. Unlike most conferences, it isn’t geared to any one specific type of attendee. The current speaker list includes in-house ecommerce software developers, consultants, sales managers, project managers, and marketing experts. The talk topics range from customer relationship management to template engines for Perl.
Mark Johnson and I are both going to be speaking at the conference. Also attending will be Mike Heins, creator of Interchange, and Stefan Hornburg, longtime Interchange development group “team captain”.
Mark is going to be discussing full page caching in Interchange 5. This is becoming a more frequent request from our larger customers. They want full page caching so that the web browser and a caching proxy server alone can handle most requests, leaving Interchange and the database free to handle shopping-based requests like add to cart or checkout. This is a commonly-used architecture in many application …
camps community conference dancer ecommerce interchange perl
Monitorama, Berlin, EU - Day 2 and final considerations
As IT conferences near their end, your expectations sometimes drop: the speakers are tired, and so are the attendees. You kind of expect things to get quieter, but that wasn’t the case with Monitorama EU 2013.
On this second day I found that all of the talks were as interesting, entertaining and inspiring as the ones on the first day.
I enjoyed all the talks today, but I was especially inspired by Jeff Weinstein’s, which covered how you can use data collected for metrics and monitoring to improve the whole company. I also appreciated the speech from Gareth Rushgrove, which highlighted how security is actually still underrated in IT companies and how and why you should try to integrate monitoring with security auditing tools.
I’ve been asked a few times, during the day and the evening, what my opinion of the conference was, and the answer was always “absolutely positive!”. I always add that although I don’t expect to see rocket science at these conferences, I do expect to pick up a lot of hints, ideas, and tips, which are a wonderful springboard for new personal or work-related projects. That is exactly what I got.
The other aspect you …
conference monitoring nagios
Monitorama, Berlin, EU - Day 1
If you care about the quality of your IT infrastructure and work, there are times where you really need to focus on a valuable and important aspect: community.
The thing is that most people don’t realize how valuable the human factor is when working in the IT field until they attend a conference as marvellous as Monitorama has been so far.
I was lucky enough to be there in Berlin, from 2013.09.19 to 2013.09.20, to enjoy all the awesome talks and attendees. And while most of the speeches were quite technically interesting and of definitely good quality, they didn’t revolve only around monitoring per se.
I won’t mention each and every talk, though they all would have deserved it, but I’ll say that while I was very inspired by Danese Cooper’s talk about the value and importance of Open Source, I was also very entertained by Ryan Dotsmith’s talk about how you could and should learn from failures, either yours or others’, and by the very specific “on the field” one from Katherine Daniels.
On top of that, while I generally don’t appreciate sponsors giving “talks” at these kinds of conferences, I actually appreciated how …
conference monitoring nagios
Apache accidental DNS hostname lookups
Logging website visitor traffic is an interesting thing: Which details should be logged? How long and in what form should you keep log data afterward? That includes questions of log rotation frequency, file naming, and compression. And how do you analyze the data later, if at all?
Allow me to tell a little story that illustrates a few limited areas around these questions.
Reverse DNS PTR records
System administrators may want to make more sense of visitor IP addresses they see in the logs, and one way to do that is with a reverse DNS lookup on the IP address. The network administrators for the netblock that the IP address is part of have the ability to set up a PTR (pointer) record, or not. You can find out what it is, if anything.
For example, let’s look at DNS for End Point’s main website at www.endpoint.com using the standard Unix tool “host”:
% host www.endpoint.com
www.endpoint.com has address 208.43.132.31
www.endpoint.com has IPv6 address 2607:f0d0:2001:103::31
% host 208.43.132.31
31.132.43.208.in-addr.arpa domain name pointer 208.43.132.31-static.reverse.softlayer.com.
% host 2607:f0d0:2001:103::31
1.3.0.0.0.0.0.0.0.0.0.0.0.0.0.0.3.0.1.0.1.0.0.2.0.d.0.f.7.0.6.2.ip6.arpa …
devops hosting linux networking
Interchange Form Testing with WWW::Mechanize
Recently, I encountered a testing challenge that involved making detailed comparisons between the old and new versions of over 200 separate form-containing HTML (Interchange) pages.
Because the original developers chose to construct 200+ slightly-different pages, rather than a table-driven Interchange flypage (curses be on them forever and ever, amen), an upgrade to change how the pages prepared the shopping cart meant making over 200 similar edits. (Emacs macros, yay!) Then I had to figure out how to verify that each of the 200 new versions did something at least close to what the 200 old versions did.
Fortunately, I had easy ways to identify which pages needed testing, construct URLs to the new and old pages, and even a way to “script” how to operate on the page-under-test. And I had WWW::Mechanize, which has saved my aft end more than once.
WWW::Mechanize is a pretty mature (originally 2008) “browser-like” system for fetching and acting on web pages. You can accept and store cookies, find and follow links, handle redirection, fill in forms, you name it—but not JavaScript. Sorry, but there are other tools in the box that can help you if you are working with more interactive pages.
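As a hedged illustration of the kind of script this enables (the URL and form field below are hypothetical, not from the actual project), a page-under-test can be fetched and its form driven in just a few lines:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use WWW::Mechanize;

# autocheck => 1 makes any failed HTTP request die immediately,
# which is what you want in a test script.
my $mech = WWW::Mechanize->new( autocheck => 1 );

# Hypothetical old-version page containing an add-to-cart form:
$mech->get('http://example.com/old/ITEM-1234.html');
$mech->submit_form(
    form_number => 1,
    fields      => { quantity => 2 },   # hypothetical form field
);
my $old_response = $mech->content;
# ...fetch the new version of the page the same way, then compare
# the two responses (or the resulting carts) for equivalence...
```

With 200+ pages, the URLs can be generated from the page list and the comparison looped, which is exactly what made this kind of bulk verification tractable.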
In my …
automation interchange perl testing
My Favorite Git Commands
Git is a tool that all of us End Pointers use frequently. I was recently reviewing history on a server that I work on frequently, and I took note of the various git commands I use. I put together a list of the top git commands (and/or techniques) that I use with a brief explanation.
git commit -m "****"
This is a no-brainer as it commits a set of changes to the repository. I always use the -m to set the git commit message instead of using an editor to do so. Edit: Jon recommends that new users not use -m, and that more advanced users use this sparingly, for good reasons described in the comments!
git checkout -b branchname
This is the first step to setting up a local branch. I use this one often as I set up local branches to separate changes for the various tasks I work on. This command creates and moves you to the new branch. Of course, if your branch already exists, git checkout branchname will check out the changes for that local branch that already exists.
git push origin branchname
After I’ve done a bit of work on my branch, I push it to the origin to a) back it up in another location (if applicable) and b) provide the ability for others to reference the branch. …
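The three commands above chain into a typical task workflow. The branch name below is a placeholder, and the sketch sets up a throwaway repository so it can be run safely anywhere; in real use you’d already be inside your project’s repo:

```shell
# Throwaway repo so this is safe to copy-paste:
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email you@example.com && git config user.name "You"

git checkout -b fix-cart-totals          # create and switch to a task branch
echo "work" > notes.txt && git add notes.txt
git commit -m "Start work on cart totals fix"
# git push origin fix-cart-totals        # publish once a remote is configured

git rev-parse --abbrev-ref HEAD          # -> fix-cart-totals
```

The push is commented out here only because the throwaway repo has no remote; against a real origin it runs as shown in the text.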
git