Category Archives: Ubuntu Server Configuration

[Image: chart showing that http/2 adoption in India is currently very poor]

Nginx with http/2 and usability problems

So Nginx released mainline version 1.9.5, and then 1.9.6, with an experimental http/2 module. For those using spdy, the upgrade itself is simple: replace “spdy” with “http2” in the listen directive in the server configuration. The server will not start until this change is made.
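For example, a server block that used spdy changes like this (certificate directives omitted):

# before, with Nginx compiled with the spdy module
listen 443 ssl spdy;

# after upgrading to Nginx 1.9.5+
listen 443 ssl http2;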

Sadly, what should have been an occasion of great excitement and eager adoption after almost a year of anticipation has turned horribly wrong. From Nginx 1.9.5 onwards, http/2 replaces spdy, which means your server will serve http/2 only and not spdy. Users without http/2 support get plain ssl. Considering that Opera Mini, the Blackberry browser, the Android browser and Internet Explorer (other than IE11 on Windows 10) don’t implement http/2, and that an increasing share of traffic is mobile, I fail to see how serving the slowest version of your site to mobile browsers and a majority of users was a useful move for a webserver aiming to transform performance. Even Safari supports http/2 only in its latest version. That’s quite a chunk of the internet incapable of using the site at the speeds http/2 is supposed to deliver. Keeping spdy as a fallback would have allowed the existing user experience to continue for many visitors – and that for an experimental module. Server push, which would have added a serious speed boost for many, is not implemented yet either.

What is more, benchmarks currently show Nginx with spdy/3.1 to be faster than Nginx with http/2. Talk about an upgrade that is a serious usability downgrade.

Not only does this effectively prevent me from touching http/2 on Nginx, it has me actively hunting for a frontend that will offer http/2 and spdy before falling back to plain ssl. Most likely nghttpx.

Oh, the irony of needing a frontend proxy for a Nginx server because the server has upgraded to http/2. But sadly, given that only a little over a third (38.2%) of the traffic in India is http/2 enabled, it is difficult to see how a webmaster running sites for Indian audiences can drop spdy support in the near future. I anticipate needing to support spdy for at least another year. Yes, I know Google will stop supporting spdy from Feb 2016, but those who don’t upgrade, and other browsers and apps that aren’t http/2 capable, will still need a way to be faster than raw ssl.

Talk about anticipation followed by a damp squib. I even found myself wondering whether Apache2 is worth checking out once more… but more likely, I’m going to figure out nghttpx unless there is some indication that future upgrades will support spdy as well as http2 for a while.

Disable SSLv3 on Nginx to prevent #POODLE vulnerability

In the wake of the POODLE vulnerability discovered in SSLv3, a surprising number of people are unsure how to disable SSLv3. So here is how to do it.

In your Nginx SSL configuration, find the line that shows the protocols. It will be something like this:

ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;

Remove the SSLv3 from it and make it

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

That is all.

This is not relevant if you aren’t using SSL, of course.
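After editing, reload Nginx and, if you like, verify that SSLv3 handshakes are now refused – a quick check assuming the openssl command-line client is installed (substitute your own domain):

service nginx reload
# this handshake should now fail instead of connecting:
openssl s_client -connect example.com:443 -ssl3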

Varnish config for wordpress with ngx_pagespeed and wp-touch

This is the Varnish config I am using currently. It is working with wp-touch, pagespeed and wordpress, and (bonus) deals with pagespeed not letting pages be cached. Note that this is Varnish 3 VCL syntax. No time for pretty comments and explanations, here’s the code. I will answer questions, or come back and explain the code in comments – but it is pretty self explanatory.

backend default {
.host = "127.0.0.1";
.port = "80";
.first_byte_timeout = 300s;
}

sub generate_user_agent_based_key {
set req.http.default_ps_capability_list_for_large_screens = "LargeScreen.SkipUADependentOptimizations:";
set req.http.default_ps_capability_list_for_small_screens = "TinyScreen.SkipUADependentOptimizations:";

set req.http.PS-CapabilityList = req.http.default_ps_capability_list_for_large_screens;

# Lazyload
if (req.http.User-Agent ~ "(?i)Chrome/|Firefox/|MSIE |Safari") {
set req.http.PS-CapabilityList = "ll,ii,dj:";
}
# lazyload_images (ll), inline_images (ii), defer_javascript (dj), webp (jw) and lossless_webp (ws).
if (req.http.User-Agent ~
"(?i)Chrome/[2][3-9]+\.|Chrome/[3-9][0-9]+\.|Chrome/[0-9]{3,}\.") {
set req.http.PS-CapabilityList = "ll,ii,dj,jw,ws:";
}
# odd ones
if (req.http.User-Agent ~ "(?i)Firefox/[1-2]\.|MSIE [5-8]\.|bot|Yahoo!|Ruby|RPT-HTTPClient|(Google \(\+https\:\/\/developers\.google\.com\/\+\/web\/snippet\/\))|Android|iPad|TouchPad|Silk-Accelerated|Kindle Fire") {
set req.http.PS-CapabilityList = req.http.default_ps_capability_list_for_large_screens;
}
# mobile
if (req.http.User-Agent ~ "(?i)Mozilla.*Android.*Mobile*|iPhone|BlackBerry|Opera Mobi|Opera Mini|SymbianOS|UP.Browser|J-PHONE|Profile/MIDP|portalmmm|DoCoMo|Obigo|Galaxy Nexus|GT-I9300|GT-N7100|HTC One|Nexus [4|7|S]|Xoom|XT907") {
set req.http.PS-CapabilityList = req.http.default_ps_capability_list_for_small_screens;
}
# Remove placeholder header values.
remove req.http.default_ps_capability_list_for_large_screens;
remove req.http.default_ps_capability_list_for_small_screens;
}

sub vcl_hash {
# Block 3: Use the PS-CapabilityList value for computing the hash.
hash_data(req.http.PS-CapabilityList);
}
# Block 3a: Define ACL for purge requests
acl purge {
# Purge requests are only allowed from localhost.
"localhost";
"127.0.0.1";
#Add your server IP to this list
}
# Block 3b: Issue purge when there is a cache hit for the purge request.
sub vcl_hit {
if (req.request == "PURGE") {
purge;
error 200 "Purged.";
}
}

# Block 3c: Issue a no-op purge when there is a cache miss for the purge
# request.
sub vcl_miss {
if (req.request == "PURGE") {
purge;
error 200 "Purged.";
}
}

sub vcl_recv {
call generate_user_agent_based_key;

set req.http.X-Forwarded-For = client.ip;
set req.http.Host = regsub(req.http.Host, ":[0-9]+", "");

# Block 3d: Verify the ACL for an incoming purge request and handle it.
if (req.request == "PURGE") {
if (!client.ip ~ purge) {
error 405 "Not allowed.";
}
return (lookup);
}
# Blocks which decide whether cache should be bypassed or not go here.

# Do not cache the admin and login pages
if (req.url ~ "/wp-(login|admin)") {
return (pass);
}
// server1 must handle file uploads
if (req.url ~ "media-upload.php" || req.url ~ "file.php" || req.url ~ "async-upload.php") {
return(pass);
}

// do not cache xmlrpc.php; strip cookies from GET requests to it first
if (req.url ~ "xmlrpc.php") {
if (req.request == "GET") {
remove req.http.cookie;
}
return(pass);
}

# Remove the "has_js" cookie
set req.http.Cookie = regsuball(req.http.Cookie, "has_js=[^;]+(; )?", "");

# Remove any Google Analytics based cookies
set req.http.Cookie = regsuball(req.http.Cookie, "__utm.=[^;]+(; )?", "");

# Remove the Quant Capital cookies (added by some plugin, all __qca)
set req.http.Cookie = regsuball(req.http.Cookie, "__qc.=[^;]+(; )?", "");

# Remove the wp-settings-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-1=[^;]+(; )?", "");

# Remove the wp-settings-time-1 cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wp-settings-time-1=[^;]+(; )?", "");

# Remove the wp test cookie
set req.http.Cookie = regsuball(req.http.Cookie, "wordpress_test_cookie=[^;]+(; )?", "");

# Are there cookies left with only spaces or that are empty?
if (req.http.cookie ~ "^ *$") {
unset req.http.cookie;
}

if (req.http.Accept-Encoding) {
# Do not compress already compressed files
if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
remove req.http.Accept-Encoding;
} elsif (req.http.Accept-Encoding ~ "gzip") {
set req.http.Accept-Encoding = "gzip";
} elsif (req.http.Accept-Encoding ~ "deflate") {
set req.http.Accept-Encoding = "deflate";
} else {
remove req.http.Accept-Encoding;
}
}
}

# Strip cookies for the following file extensions, so they can be cached
if (req.url ~ "\.(css|js|png|gif|jp(e)?g)") {
unset req.http.cookie;
}

# Check the cookies for wordpress-specific items
if (req.http.Cookie ~ "wordpress_" || req.http.Cookie ~ "comment_") {
return (pass);
}
if (!req.http.cookie) {
unset req.http.cookie;
}

# — End of WordPress specific configuration

# Do not cache requests with HTTP authentication or a cookie
if (req.http.Authorization || req.http.Cookie) {
# Not cacheable by default
return (pass);
}

# Block 5b: Only cache responses to clients that support gzip. Most clients
# do, and the cache holds much more if it stores gzipped responses.
if (req.http.Accept-Encoding !~ "gzip") {
return (pass);
}

# Cache all other requests
return (lookup);

}

# Block 6: Mark HTML uncacheable by caches beyond our control.
sub vcl_fetch {
# For static content related to the theme, strip all backend cookies
if (req.url ~ "\.(css|js|png|gif|jp(e?)g)") {
unset beresp.http.cookie;
}

# A TTL of 30 minutes
set beresp.ttl = 1800s;

return (deliver);
}
# Block 7: Add a header for identifying cache hits/misses.
sub vcl_deliver {
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT";
} else {
set resp.http.X-Cache = "MISS";
}
}
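A quick way to check the setup from the shell, using the X-Cache header added in vcl_deliver and the purge ACL above (hostname and path are placeholders; note that vcl_hash includes PS-CapabilityList, so a purge clears the variant hashed for the requesting client):

# a second identical request should return X-Cache: HIT
curl -sI -H "Accept-Encoding: gzip" http://example.com/ | grep X-Cache

# purge a page (allowed only from localhost, per the ACL)
curl -X PURGE http://localhost/some-page/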

Ioncube with Nginx+php-fpm giving 502 gateway error SOLVED

Ubuntu 13.10 seems to be having trouble with ioncube and php-fpm. My earlier guide on loading ioncube may not work for you anymore.

This is really strange, and I have no idea why no one seems to mention it, but if you are getting frustrated trying to install the ioncube loader on php-fpm, just ignore the instructions to create the 20-ioncube.ini file, and plug the line directly into the end of your php.ini.

Steps to install ioncube loader with php5-fpm

cd /usr/local
sudo wget http://downloads2.ioncube.com/loader_downloads/ioncube_loaders_lin_x86-64.tar.gz
sudo tar xzf ioncube_loaders_lin_x86-64.tar.gz
sudo mv /usr/local/ioncube/* /usr/lib/php5/20121212/

This part is the same as in the earlier guide.

Now, instead of creating a file called 20-ioncube.ini or ioncube.ini, add the line directly to your php.ini file. (On Ubuntu with a repository-installed php5-fpm package, php.ini will be found at /etc/php5/fpm/php.ini.)

At the very end add:

zend_extension = /usr/lib/php5/20121212/ioncube_loader_lin_5.5.so

Then restart php-fpm

service php5-fpm restart

If it still doesn’t work, try doing the same thing as root.
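One way to confirm the loader is active is to check the PHP version banner. Note this is only indicative: the CLI may read a different php.ini than php5-fpm, so you may need to add the zend_extension line there as well.

php -v
# when loaded, the output includes a line mentioning the ionCube PHP Loader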

If you can’t find your php.ini, create a php file on your website with some random name. Open it in an editor and add the line:

<?php phpinfo(); ?>

Access the file on your site with a browser. It will show all kinds of info about php, including the locations of the configuration files (php.ini and others).

Meteor::Socket bind: Address already in use at Meteor/Socket.pm line 115.

If, after installing meteor server, you get an error like

Meteor::Socket bind: Address already in use at Meteor/Socket.pm line 115.

when trying to start it up with ./meteord -d or /etc/init.d/meteord start, it means that you likely have another instance of meteor running.

Unless you have changed ports around, you can kill the existing instance of meteor with pkill meteor, or simply use it without starting a new one 😉
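To see the running instance before killing it (a generic process check, nothing meteor-specific):

ps aux | grep meteor
pkill meteor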

Installing Meteor Server on Ubuntu

Meteor is a streaming server that pushes live updates to JavaScript in the browser, and does interesting things like live-updating your live blog with far less load on your server than without it. However, setting it up is iffy, and the instructions are not idiot-proof. So here are the steps distilled from my hits and misses, so that you may not have to go through that yourself.

Meteor Server Installation instructions

These instructions follow the ones provided on the Meteor Server website, along with my comments.

Make a directory for the Meteor Server and cd into it.
mkdir /usr/local/meteor
cd /usr/local/meteor

We begin with getting and unpacking Meteor Server:
wget http://meteorserver.org/download/latest.tgz
tar zxvf latest.tgz
rm latest.tgz

Alas, this doesn’t work. There is no file at the provided url, and I had to use the direct download url from the GitHub repository instead. So this should work:

wget https://github.com/visitsb/meteorserver/blob/master/build/meteor-latest.tgz?raw=true
At this point, check the name of the file you got.
ls
if it is “meteor-latest.tgz?raw=true”, then
mv meteor-latest.tgz?raw=true meteor-latest.tgz
before proceeding, or the next step won’t work. Now
ls
should give you “meteor-latest.tgz”. Ready to move on.
tar zxvf meteor-latest.tgz
rm meteor-latest.tgz

Now to set it up.

Copy the init script to /etc/init.d/meteord:

cp daemoncontroller.dist /etc/init.d/meteord

You will need to edit the file to change the path if you did not install meteor in /usr/local/meteor. If you wish to use this to start/stop Meteor, you will need to edit line 14 to specify which user account will be used to run it. The default is meteor, so if you want to create that user account now, type:

useradd meteor

Now copy the configuration file to /etc/meteord.conf:

cp meteord.conf.dist /etc/meteord.conf

To start meteor at boot, they recommend

chkconfig meteord on

This part didn't work for me, as I don't have chkconfig installed - the instructions seem "Fedora-ish" - I have no idea how Fedora works. Never used it. Instead, I did

update-rc.d meteord defaults
update-rc.d meteord enable

At this stage, you should be able to start meteor in debug mode (according to them).

./meteord -d

For me, it didn't. I needed to

chmod +x meteord

as they have suggested. I also did

chmod +x /etc/init.d/meteord

I could start meteor in debug mode successfully, but

/etc/init.d/meteord start

wouldn't work.

I was getting "/bin/sh^M: bad interpreter: No such file or directory"

I found two problems. The first was that the /etc/init.d/meteord script refers to /etc/init.d/functions, which didn't exist. I edited the file to change the line

. /etc/init.d/functions

to

. /lib/lsb/init-functions

I found the replacement path by checking which file was being sourced by init scripts that were working.

As for the "^M" in the error, I discovered that it was caused by the file having DOS line endings when it should have had Unix line endings.

I opened it in vi

vi /etc/init.d/meteord

and in the command mode itself (hit ESC if you've switched to INSERT) entered:

:set fileformat=unix

Then saved and exited

:wq
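If you would rather not open vi, the same line-ending fix can be done in one line with sed (dos2unix, if installed, works too):

sed -i 's/\r$//' /etc/init.d/meteord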

Now

/etc/init.d/meteord start

starts Meteor.

:)

I will do a separate post on using Meteor to power Live Blogging Plus (when I finish doing it).

Upload Error: client intended to send too large body

If you are using Nginx and are unable to upload files exceeding 1MB or so (the most common case), and your error log shows “client intended to send too large body”, then here is the fix.

Edit your Nginx configuration file (which on Debian/Ubuntu will be found at /etc/nginx/nginx.conf) and edit the setting for client_max_body_size to something you can live with. If there is no line for it, add this line:

client_max_body_size 5M;

Obviously, replace 5M (for MB) with a number that makes you happy if your upload is larger.
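For reference, client_max_body_size can sit in the http, server or location block; a minimal sketch of the http-block placement:

http {
    # allow uploads up to 5MB
    client_max_body_size 5M;
}

Reload Nginx (service nginx reload) for the change to take effect.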


Nginx-1.5.6 with ngx_pagespeed (Google Pagespeed module) and ngx_cache_purge

So I got tired of fiddling around with repositories offering builds that compiled ngx_pagespeed with Nginx. I was getting a lot of errors, was using older versions of Nginx and was not able to make the dotdeb repository work.

I was wary of compiling, because I’m a creature of habit, and I like my Nginx installed as a service and other minor pleasures of life (I still haven’t learned to make init scripts :p)

What I have basically done is compiled the latest Nginx (1.5.6 – as of writing this post) along with these two modules I wanted in the place of the Nginx package.

So far, all seems to be working well, and I’m hitting pagespeed scores of 98+ without any noticeable strain on the server. So, for what it is worth, here is what I did.

Step 0: Install dependencies for compiling

Time to become root (better than typing “sudo” for each line).

sudo bash

Enter your password to become root@whatever:~#

Install dependencies for compiling. (I have also included libssl-dev, which the SSL module in Step 4 needs, and unzip, which Step 1 uses.)

apt-get install build-essential zlib1g-dev libpcre3 libpcre3-dev libssl-dev unzip

Step 1: Get the latest ngx_pagespeed

The ngx_pagespeed page gives you the code to install the beta package. I just grabbed the current master download from the button on the right (right-click and copy link 😉 )

You could choose either. I’m not certain the server won’t explode because of whatever I’m doing. So play safe if you want. I just wanted all the fixes already.

This is if you use the recommended beta:

$ cd ~
$ wget https://github.com/pagespeed/ngx_pagespeed/archive/release-1.6.29.7-beta.zip
$ unzip release-1.6.29.7-beta.zip # or unzip release-1.6.29.7-beta
$ cd ngx_pagespeed-release-1.6.29.7-beta/
$ wget https://dl.google.com/dl/page-speed/psol/1.6.29.7.tar.gz
$ tar -xzvf 1.6.29.7.tar.gz # expands to psol/

What I did was:

$ cd ~
$ wget https://github.com/pagespeed/ngx_pagespeed/archive/master.zip
$ unzip master.zip
$ cd ngx_pagespeed-master/
$ wget https://dl.google.com/dl/page-speed/psol/1.6.29.7.tar.gz
$ tar -xzvf 1.6.29.7.tar.gz # expands to psol/

Step 2: Get the latest ngx_cache_purge

You know the drill by now. Just giving the steps I did:

$ cd ~
$ wget http://labs.frickle.com/files/ngx_cache_purge-2.1.tar.gz
$ tar -xvf ngx_cache_purge-2.1.tar.gz

I could have used the master here as well, but I wasn’t having too many errors with it, so it seemed an unnecessary risk (yeah, I know kinda late in the day to be cautious).

Now for the tricky part.

Step 3: Configuring Nginx for compiling

What we are going to do in this step is configure the source to build right on top of the existing Nginx package.

$ # check http://nginx.org/en/download.html for the latest version
$ wget http://nginx.org/download/nginx-1.5.6.tar.gz
$ tar -xvzf nginx-1.5.6.tar.gz
$ cd nginx-1.5.6/

This assumes you have a Nginx server running that you want to replace (you don’t need to stop it yet – I’ll tell you when), and a preference for organizing the files “as usual” in the Ubuntu/Debian way. I had the added greed of not wanting to reinvent anything I could recycle – the lazy habit of “service nginx restart”, for example. If not, you could probably install it anywhere. There may be easier ways of doing this.

Remember I am NOT an expert, I am simply a determined person trying to get what I want and making do with my limited knowledge.

Ok. Let’s proceed. Get the configuration of your existing nginx package (for the paths). You could also skip to the next step without going through this reasoning and method, and only return here if there is a problem.

nginx -V

You want to copy this to a text file somewhere for easy reference.
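Since nginx -V prints to stderr, redirect it if you want to capture it in a file (the filename is just an example):

nginx -V 2>&1 | tee ~/nginx-configure-args.txt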

Now, you have to create the command for configuring using the paths here.

./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log

If you run this command, you will find some alerts saying “Not found” during the checks. This is normal, since you don’t need all the things it checks for (indeed, some are found on other operating systems altogether), but it is a good idea to keep an eye on what’s missing in case there is a problem… and there is.

This command will give you all the “Not founds” from that lengthy output. It is the same command, using grep to catch the lines:

./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log | grep 'not found'

The rest seems ok to my inexperienced eye, but “checking for nobody group … not found” is a problem. So we set the user and group to www-data by adding this to our configure line.

--user=www-data --group=www-data

Then we add our modules from steps 1 and 2.

 --add-module=$HOME/ngx_pagespeed-master  --add-module=$HOME/ngx_cache_purge-2.1

And we have our complete line.

Step 4: Configure the build

$ ./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --with-http_ssl_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --add-module=$HOME/ngx_pagespeed-master  --add-module=$HOME/ngx_cache_purge-2.1

I have no idea what you will do if you get errors. Comment here, and I’ll see if I have ideas. This should build smoothly on a standard Ubuntu server (I tried on three, all three worked).

Hopefully all went well, and we make the build.

$ make

Now for the other tricky part.

Step 5: Stop your existing Nginx server

Find out where the Nginx files and folders are

$ whereis nginx
nginx: /usr/sbin/nginx /etc/nginx /usr/share/nginx

Check and double-check that these are the same folders we are configuring. Not the end of the world if you get it wrong, but you’ll probably get errors with the init script and will have to either make a new one or hack it. Sure they are the right folders?

Now stop the server.

$ service nginx stop

Move your configuration folder somewhere safe.

$ mv /etc/nginx ~

Delete the existing install (we have simply stopped the server, not removed the package). Remember the locations we got from whereis? Add them all to a delete command. (Yes, I know we moved the configuration folder somewhere safe – the command below is just a lazy copy-paste.)

$ rm -rf /usr/sbin/nginx /etc/nginx /usr/share/nginx

Step 6: Install the compiled Nginx in the place of the files we removed

Time to install the make we did earlier.

$ make install

Step 7: Add a line to fastcgi_params

Edit the new fastcgi_params file /etc/nginx/fastcgi_params and add

fastcgi_param  SCRIPT_FILENAME    $document_root$fastcgi_script_name;

This line gets added when you install from a package. The source doesn’t have it. No idea why.

If you don’t do this, you’ll get blank pages and a lot of frustration trying to figure out why your server isn’t working. Then you’ll get superstitious over masquerading builds as packages and so on. (Don’t ask how I know) So don’t forget.

Step 8: Return the configuration files to their respective places in /etc/nginx

Move or copy or create the files in sites-available, symlink them to sites-enabled, and so on. The usual stuff.

If you don’t return your original nginx.conf here and choose to use the new one, please remember to add in the http block:

        ##
        # Virtual Host Configs
        ##
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

Your earlier Ubuntu/Debian package would have configured these folders automatically, but the source install does not have this structure, so you will have to include the files (or paste their contents here – messy); otherwise, returning the server blocks into position will *still* not load them and leave you puzzled.

Tweak to taste. The old files worked as they were, for me. I was able to start my new server with a downtime of less than 2 minutes after I had these steps lined up and ready to copy-paste.

Start/restart server.
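Before starting, it is worth letting Nginx verify that the configuration parses cleanly:

nginx -t
service nginx start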

If there is an [emerg] error about not being able to bind to the port, just do

pkill nginx

and start it

service nginx start

Result?

My page load time went from 20+ seconds (I wish I had a screenshot) to under 1s for the first page load, right off the bat – and this is before configuring pagespeed. Frankly, with this performance, I’ll leave pagespeed unconfigured if it so much as whimpers.

So maybe it was all for nothing, unless you count installing Nginx-1.5.6 with the conveniences of a package before it hit the repositories 😉

Note: When it is time for an update, there may be issues. I have no idea what will happen, but if worst comes to worst, I can

apt-get remove nginx

and

apt-get install nginx

and

lather-rinse-repeat

unless a better option has hit the repositories by then.

I will also post urgent updates here if anything goes wrong. So far as I can see, this is working like a dream.

Also note: There may be changes in performance over the next couple of days as I fiddle around trying to configure stuff. Not a reflection of end result if you suddenly find the blog slow. Work in progress.


PCLZIP_ERR_BAD_FORMAT (-10) : Unable to find End of Central Dir Record signature

If you are trying to upgrade, and suddenly start getting errors like:

Incompatible Archive. PCLZIP_ERR_BAD_FORMAT (-10) : Unable to find End of Central Dir Record signature

Here are some things to check.

  • This basically means that WordPress is not able to unzip the downloaded packages to install the upgrades.
  • Check to see available space on your disk. If your disk is full you should try upgrading your hosting package or freeing up some space on the disk. One easy way might be to delete old image thumbnails in sizes you no longer use.
  • If you have enough space on your disk, it may be that a specific downloaded package is problematic. Try to install a plugin you don’t have, and if that works, check whether there is a mysterious folder called “Upgrades” [DO NOT CONFUSE WITH UPLOADS]. If this folder exists, it is worth deleting it to see if that lets you do the upgrade (see the commands below).
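For the disk-space and upgrade-folder checks above, a couple of quick commands over SSH (assuming a standard WordPress layout, where the upgrade staging folder is wp-content/upgrade; WordPress recreates it as needed):

df -h .                    # how much free space is left?
ls wp-content/upgrade      # any leftover half-downloaded packages?
rm -rf wp-content/upgrade  # delete the folder and retry the upgrade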

Free space in your WordPress install by deleting old image sizes

If you change your theme often, your uploads folder will accumulate thumbnails of images in many sizes that you no longer use. This consumes disk space unnecessarily. I wish someone would code a plugin for this, but failing that, a handy way to do it via SSH is:

find . -name "*-250x250.*" | xargs rm -f

Where 250×250 is the image size you want to delete. You could also try something like:

find . -name "*-250x*.*" | xargs rm -f

if you have multiple thumbnail sizes, like 250×250, 250×300, etc.

What I do is list images in the folders to see the unwanted sizes there, and run this delete a few times with various sizes. A more ruthless person could try something like:

find . -name "*-*x*.*" | xargs rm -f

I do not recommend this, as it can match files you may not want to delete – for example, a file with a hyphenated name and the letter x in the last hyphenated word, like “wish-merry-xmas.jpg”, which would be an original, not a resized image; or worse, not an image at all, like “here-are-exact-directions-to-our-property.html”.

But if you have a lot of thumbnail sizes, you may feel tempted anyway. Two suggested precautions. First, change directory to your uploads folder (you should do this in any case):
cd /path/to/wordpress/wp-content/uploads
find . -name "*-*x*.*" | xargs rm -f

The other precaution is to specify the extension:
find . -name "*-*x*.jpg" | xargs rm -f
find . -name "*-*x*.png" | xargs rm -f

This will give you some protection from inadvertently deleting non-resize uploads like “entertaining-extras.pdf”.

Of course, if you are a patient soul (or don’t have too many files uploaded), you could run the find without deleting first, to see whether any other files are being selected along with the resizes:

find . -name "*-*x*.*"
and if all is well
find . -name "*-*x*.*" | xargs rm -f
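If your find is GNU find (it is, on Ubuntu), a tighter pattern that matches only numeric WIDTHxHEIGHT suffixes avoids most false positives – a sketch:

find . -regextype posix-extended -regex '.*-[0-9]+x[0-9]+\.(jpe?g|png|gif)'
# and once the listing looks right:
find . -regextype posix-extended -regex '.*-[0-9]+x[0-9]+\.(jpe?g|png|gif)' -delete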

Do you have a better method?
