MySQL and PostgreSQL command equivalents (mysql vs. psql)
Users toggling between the MySQL and PostgreSQL command-line clients are often confused about the equivalent commands for accomplishing basic tasks. Here’s a chart listing some of the differences between the command-line client for MySQL (simply called mysql) and the command-line client for Postgres (called psql).
MySQL (using mysql) | Postgres (using psql) | Notes |
---|---|---|
\c Clears the buffer | \r (same) | |
\d string Changes the delimiter | No equivalent | |
\e Edit the buffer with external editor | \e (same) | Postgres also allows \e filename, which will become the new buffer |
\g Send current query to the server | \g (same) | |
\h Gives help — general or specific | \h (same) | |
\n Turns the pager off | \pset pager off | The pager is only used when needed based on number of rows; to force it on, use \pset pager always |
\p Print the current buffer | \p (same) | |
\q Quit the client | \q (same) | |
\r [dbname] [dbhost] Reconnect to server | \c [dbname] [dbuser] (same) | |
\s Status of server | No equivalent | Some of the same info is available from the pg_settings table |
\t Stop teeing output to file | No equivalent | However, \o (without any argument) will stop writing to a previously opened outfile |
\u dbname Use a different … | | |
database mysql open-source postgres tips
jQuery UI Drag Drop Tips and an Ecommerce Example
This week, I implemented functionality for Paper Source to allow them to manage the upsell products, or product recommendations. They wanted a better way to visualize, organize, and select the three upsell products for every product. The backend requirements of this functionality were relatively simple. A new table was created to manage the product upsells.
The frontend requirements were more complex: They wanted to be able to drag and drop products into the desired upsell position (1, 2, or 3). I was allowed a bit of leeway on the interactivity level of the functionality, but the main requirement was to have drag and drop functionality working to provide a more efficient way to manage upsells. A mockup similar to the image shown below was provided at the onset of the project.
The mockup provided did not demonstrate the “interactiveness” of the drag and drop functionality. Items below the current upsells were ordered by cross sell revenue, or the revenue of each related item purchased with the current item.
Since I was familiar with jQuery, I knew that the jQuery UI included drag and drop functionality. I also had heard of several other jQuery drag and drop plugins, but since the …
browsers javascript
Verifying Postgres tarballs with PGP
If you are downloading the Postgres source code tarballs from a mirror, how can you tell if these are the same tarballs that were created by the packagers? You can’t really—although they come with an MD5 checksum file, these files are packaged right alongside the tarballs themselves, so it would be easy enough for someone to create an evil tarball along with a new MD5 file. All you could do is perhaps check whether the tarball that came from mirror A has a matching checksum file from mirror B, or even the main repository itself.
One way around this is to use PGP (which almost always means GnuPG in the open-source software world) to digitally sign the tarballs. Until the Postgres project gets an official key and starts doing this, one workaround is to at least know the checksums from one single point in time. To that end, I’ve been digitally signing messages containing the checksums for the tarballs for many years now and posting them to pgsql-announce. You’ll need a copy of my public key (0x14964AC8, fingerprint 2529 DF6A B8F7 9407 E944 45B4 BC9B 9067 1496 4AC8) to verify the messages. A copy of the latest announcement message is below.
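To make the verification steps concrete, here is a minimal sketch. The gpg commands are shown as comments because they need network access and the actual signed message; the keyserver choice and filenames are assumptions, not from the post. The executable part just demonstrates the checksum-comparison step with a stand-in file:

```shell
# Import the signer's public key by its ID (keyserver choice is an assumption):
#   gpg --keyserver hkps://keys.openpgp.org --recv-keys 0x14964AC8
# Save the signed announcement message as announce.txt, then verify it:
#   gpg --verify announce.txt
# With the signature verified, compare the tarball's checksum against the one
# listed in the message. A stand-in "tarball" illustrates the comparison:
printf 'example contents\n' > postgresql-x.y.z.tar.gz
md5sum postgresql-x.y.z.tar.gz > CHECKSUMS
md5sum -c CHECKSUMS
```

In practice, of course, the contents of CHECKSUMS would come from the signed announcement message itself rather than being generated locally.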
Note that I’ve also added a sha1sum for each …
database open-source postgres security
dstat: better system resource monitoring
I recently came across a useful tool I hadn’t heard of before: dstat, by Dag Wieers (of DAG RPM-building fame). He describes it as “a versatile replacement for vmstat, iostat, netstat, nfsstat and ifstat.”
The most immediate benefit I found is the collation of system resource monitoring output at each point in time, removing the need to look at output from multiple monitors. The coloring helps readability too:
% dstat
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read writ| recv send| in out | int csw
4 1 92 3 0 0| 56k 84k| 0 0 | 94B 188B|1264 1369
3 7 43 44 1 1| 368k 11M| 151B 222B| 0 260k|1453 1565
3 2 46 48 1 0| 432k 5784k| 0 0 | 0 0 |1421 1584
2 2 47 49 0 0| 592k 0 | 0 0 | 0 0 |1513 1763
6 2 44 49 1 0| 448k 248k| 0 0 | 0 0 |1398 1640
8 4 41 45 3 0| 456k 0 | 135B 222B| 0 0 |1530 2102
18 4 38 41 0 0| 408k …
environment hosting monitoring redhat
Content Syndication, SEO, and the rel canonical Tag
End Point Blog Content Syndication
The past couple weeks, I’ve been discussing if content syndication of our blog negatively affects our search traffic with Jon. Since the blog’s inception, full articles have been syndicated by OSNews. The last couple weeks, I’ve been keeping an eye on the effects of content syndication on search to determine what (if any) negative effects we experience.
By my observations, immediately after we publish an article, the article is indexed by Google and is near the top of the search results for a search with keywords similar to the article’s title. The next day, OSNews’s syndicated copy of the article shows up in the same keyword search, and our article disappears from the search results. Then, several days later, our article is ahead of OSNews, as if Google’s algorithm has determined the original source of the content. I’ve provided a visual representation of this behavior:
With content syndication of our blog articles, there is a several-day lag where Google treats our blog article as the duplicate content and returns the OSNews article in search results for a search similar to our blog article’s title. After this lag time, the OSNews article is treated as …
seo
Editing large files in place
Running out of disk space seems to be an all too common problem lately, especially when dealing with large databases. One situation that came up recently was a client who needed to import a large Postgres dump file into a new database. Unfortunately, they were very low on disk space and the file needed to be modified. Without going into all the reasons, we needed the databases to use template1 as the template database, and not template0. This was a very large, multi-gigabyte file, and the amount of space left on the disk was measured in megabytes. It would have taken too long to copy the file somewhere else to edit it, so I did a low-level edit using the Unix utility dd. The rest of this post gives the details.
To demonstrate the problem and the solution, we’ll need a disk partition that has little-to-no free space available. In Linux, it’s easy enough to create such a thing by using a RAM disk. Most Linux distributions already have these ready to go. We’ll check it out with:
$ ls -l /dev/ram*
brw-rw---- 1 root disk 1, 0 2009-12-14 13:04 /dev/ram0
brw-rw---- 1 root disk 1, 1 2009-12-14 22:27 /dev/ram1
From the above, we see that there are some RAM disks available (there are …
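The post is truncated before the dd invocation itself, but the general technique can be sketched. This is a minimal, hypothetical example (the one-line "dump file" and names are made up): find the byte offset of the string to change, then overwrite exactly those bytes in place. Crucially, template0 and template1 are the same length, so the file size never changes and no extra disk space is needed:

```shell
# A stand-in one-line "dump file" (hypothetical; the real dump was multi-gigabyte)
printf 'CREATE DATABASE mydb TEMPLATE = template0;\n' > dump.sql

# grep -b reports the byte offset of each match; take the first one
offset=$(grep -abo 'template0' dump.sql | head -n1 | cut -d: -f1)

# Overwrite exactly those 9 bytes; conv=notrunc tells dd not to truncate
# the rest of the file after writing
printf 'template1' | dd of=dump.sql bs=1 seek="$offset" conv=notrunc 2>/dev/null

cat dump.sql   # CREATE DATABASE mydb TEMPLATE = template1;
```

Note that bs=1 makes dd seek and write in single-byte units, which is what lets the overwrite land precisely on the matched string.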
database postgres tips emacs vim
Live by the sword, die by the sword
In an amazing display of chutzpah, Monty Widenius recently asked on his blog for people to write to the EC about the takeover of Sun by Oracle and its effect on MySQL, saying:
I, Michael “Monty” Widenius, the creator of MySQL, is asking you urgently to help save MySQL from Oracle’s clutches. Without your immediate help Oracle might get to own MySQL any day now. By writing to the European Commission (EC) you can support this cause and help secure the future development of the product MySQL as an Open Source project.
“Help secure the future development”? Sorry, but that ship has sailed. Specifically, when MySQL was sold to Sun. There were many other missed opportunities over the years to keep MySQL as a good open source project. Some of the missteps:
- Bringing in venture capitalists
- Selling to Sun instead of making an IPO (Initial Public Offering)
- Failing to check on the long-term health of Sun before selling to them
- Choosing the proprietary dual-licensing route
- Making the documentation have a restricted license
- Failing to acquire InnoDB (which instead was bought by Oracle)
- Failing to acquire SleepyCat (which was instead bought by Oracle)
- Spreading FUD about the dual license and …
community database mysql open-source postgres
List Google Pages Indexed for SEO: Two Step How To
Whenever I work on SEO reports, I often start by looking at pages indexed in Google. I just want a simple list of the URLs indexed by the GOOG. I usually use this list to get a general idea of navigation, look for duplicate content, and examine initial counts of different types of pages indexed.
Yesterday, I finally got around to figuring out a command-line solution for generating this indexation list. Here’s how, using http://www.endpoint.com/ as an example:
Step 1
Grab the search results using the “site:” operator and make sure you run an advanced search that shows 100 results. The URL will look something like: https://www.google.com/search?num=100&as_sitesearch=www.endpoint.com
But it will likely have lots of other query parameters of lesser importance [to us]. Save the search results page as search.html.

Step 2
Run the following command:
sed 's/<h3 class="r">/\n/g; s/class="l"/LINK\n/g' search.html | grep LINK | sed 's/<a href="\|" LINK//g'
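To see the pipeline in action without a live results page, here is a tiny stand-in search.html using the markup this sed script expects; the sample URLs are made up for illustration. Note that the `\|` alternation in the final sed is a GNU extension, so this assumes GNU sed:

```shell
# Minimal stand-in for a saved Google results page; the real page of that era
# wrapped each result title in <h3 class="r"><a href="..." class="l">
cat > search.html <<'HTML'
<h3 class="r"><a href="http://www.endpoint.com/" class="l">End Point</a></h3><h3 class="r"><a href="http://www.endpoint.com/blog/" class="l">Blog</a></h3>
HTML

# First sed splits results onto their own lines and tags the anchor lines with
# LINK; grep keeps only those; second sed strips the surrounding HTML
sed 's/<h3 class="r">/\n/g; s/class="l"/LINK\n/g' search.html | grep LINK | sed 's/<a href="\|" LINK//g'
```

This should print the two URLs, one per line.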
There you have it. Interestingly enough, the order of pages can be an indicator of which pages rank well. Typically, pages with higher PageRank will be near …
seo