How to Convert InnoDB to innodb_file_per_table and Shrink ibdata1

Years ago, MySQL began allowing InnoDB to keep a separate tablespace per table instead of one massive unified ibdata1 file.  A tablespace is basically a logical storage area that contains a table's information – references, if you will.  Every table, regardless of the storage engine, has one, and without it the table will be unrecognizable to MySQL.  MyISAM already stores this data on a per-table basis, and InnoDB is capable of doing the same.  But for a long time it was common (and required) for InnoDB tablespace to be stored in a single, metaphorically-speaking, conglomerate of a file called ibdata1.  There are a few issues with this implementation:

  • Can use a lot of disk space
  • Some table operations (such as OPTIMIZE) are slower, and can continually increase the size of ibdata1
  • A single corruption within ibdata1 can render all InnoDB tables inaccessible
  • Likewise, restoring individual InnoDB tables from raw files (such as a backup) is typically impossible

Nowadays it’s generally recommended to store InnoDB data in individual files.  All cPanel servers since the early support of MySQL 5.5 use innodb_file_per_table by default, as do all new MySQL 5.6 installations unless this feature is specifically disabled.  Enabling it is easy – simply add the following line to /etc/my.cnf and restart MySQL:
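The line in question, placed under the [mysqld] section:

```ini
innodb_file_per_table = 1
```

(On MySQL 5.6 this is already the default unless something has disabled it.)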


BUT, if you already have databases containing InnoDB tables, you’re not quite off the hook yet.  Only new tablespaces will be written to individual files, leaving existing data stuck in ibdata1.  Fortunately, addressing this is rather easy, but there are some gotchas.  You’re typically looking at two possible solutions:

A) Do an in-place conversion, which will leave ibdata1 intact (i.e., at its current size) while new tablespace is actually stored in individual files.  This can be very wasteful.  If your ibdata1 file is massive, you’re going to be stuck with it, as it is currently NOT possible to reliably shrink it unless you use solution B below.  I’ll point out that there are proofs of concept out there for truncating ibdata1 to a certain size, but no one recommends doing it on a production server.

I’m not going to go into detail about the conversion process, because there is already a step-by-step guide for this provided on the cPanel forums.

After converting tablespace to per-table files, while those tables will no longer rely on the unified storage, you still can’t simply delete or truncate ibdata1. You will hose InnoDB.

B)  Dump and re-import InnoDB tables.  This is the more scalable and preferred method; however, if you have a ton of InnoDB data, it could result in the loss of data written to the tables between the time you dump and the time you re-import.  Doing this allows you to get rid of the ibdata1 file, which will be recreated at a much smaller default size.

This post will focus on option B: the process of exporting and re-importing InnoDB tables and dropping the dependency on ibdata1.

Before you begin, I’d highly suggest making a backup of your MySQL data.  Either dump all the databases, or stop MySQL and create a copy of the entire data directory.



Step 1:  Dump all InnoDB Tables

We’re basically going to export all the InnoDB tables.  As mentioned, if you have a lot of data, you may lose anything written to the tables between the time they are exported and the time they are imported.  If this is a concern, you may want to consider starting InnoDB in recovery mode, which will essentially make the InnoDB tables read-only (MyISAM tables are not affected).  To do this, edit /etc/my.cnf and add:

innodb_force_recovery = 4

And restart MySQL.  Keep in mind again that any transaction writing to InnoDB tables will be blocked, which could render applications that rely on them non-functional while recovery mode is active.

There are several methods for exporting InnoDB data, but we have a handy-dandy script that will do all this for you.  Please make sure your /home folder has at least as much free space as the size of your MySQL data directory.  You can find the script here, called innodb_export_import.py:


To use:

./innodb_export_import.py --export

Take note of the output of the script which will give you the name of the folder containing the database dumps.  We’ll use this later on.


Step 2: Adjust MySQL Configuration and Remove ib*

If you started MySQL with innodb_force_recovery above 0, make sure to disable this again.  Add the following to /etc/my.cnf:
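At this point the relevant my.cnf settings look like this, as a sketch: innodb_force_recovery set back to 0 (or the line removed entirely), and file-per-table enabled:

```ini
innodb_file_per_table = 1
innodb_force_recovery = 0
```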


Then in /var/lib/mysql (or whatever your MySQL datadir is), move the following files somewhere else. I would not suggest deleting them just yet:

  • ibdata1
  • ib_logfile*
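A minimal sketch of moving the files aside; the backup location here is arbitrary:

```shell
cd /var/lib/mysql              # or your configured datadir
mkdir -p /root/ibdata-backup
mv ibdata1 ib_logfile* /root/ibdata-backup/
```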

Now start MySQL.  The above files will be recreated, but should be much smaller.


Step 3: Re-import data

We’re now going to re-import all the data that we exported previously.  If you used our script to do the export, run it again, passing it the name of the directory you were given in step 1:

./innodb_export_import.py --import --dir=/path/to/export/dir

Our script will also move existing .ibd files out of the way, in case you’ve either already attempted the conversion or are dealing with a corrupted InnoDB instance, then recreate the tables from the dumps.


Step 4: Test

Our script will also do a basic test to make sure all InnoDB tables are accessible.  To run:

./innodb_export_import.py --verify

You can review the resulting log file, displayed at the end of the process.


Correcting Perl Scalar::List::Util Errors on CentOS 5

We started getting some reports from users on CentOS 5 trying to run EasyApache and being unable to due to the following error:

Undefined subroutine &Scalar::Util::readonly called at /usr/lib/perl5/site_perl/5.8.8/IO/Uncompress/Base.pm line 1104.

We found that this is due to an outdated Scalar::Util module that system Perl was unable to load properly.  To fix this, you need to update the Perl module directly from source.

You can find the latest sources from: http://search.cpan.org/~pevans/Scalar-List-Utils-1.39/


cd /usr/src

wget http://search.cpan.org/CPAN/authors/id/P/PE/PEVANS/Scalar-List-Utils-1.39.tar.gz

tar -xvzf Scalar-List-Utils-1.39.tar.gz

cd Scalar-List-Utils-1.39

perl Makefile.PL

gmake

gmake install


Now re-install the original module that was complaining:

/usr/local/cpanel/scripts/perlinstaller IO::Uncompress::Base




cPanel Configuration Clusters and Security Considerations

cPanel 11.44 introduces a new feature fresh from their feature request system that allows server administrators to deploy the configuration of one server to multiple servers in what they call a “configuration cluster”.  It will start by offering this capability for just the cPanel update preferences, but will eventually be expanded to work with other components as well.  The addition of the feature was announced in early April on the cPanel blog, which has a much more thorough explanation of what you can expect in 11.44.

I visited the cPanel headquarters on June 3rd and was shown a live demo of how this works and what it entails.  Overall, the concept is pretty solid in terms of how it integrates with the existing systems, and is an excellent step in the right direction as far as cPanel recognizing the need to make their software more portable for their larger customers.  However, I do have some serious security concerns about its implementation.  These concerns were expressed to and received well by cPanel, but if and how they plan to address them is up in the air at this point.

One thing I’ll mention that’s slightly off-topic from the point of this post, is that this system is not real configuration management.  If you plan to grow as a professional hosting company, use big boy tools like Puppet or Chef.  cPanel developed the Configuration Cluster feature to appeal to novice customers who may not have the time, knowledge, or experience to use the tools that the industry has accepted.

The configuration cluster essentially consists of a remote server’s access hash being added to the server you’re using to store your ‘master’ configuration.  When you make a change to your master server, you can then elect to deploy that config to other servers in the cluster.  You’re not limited to one master server in this context.  Sound slightly familiar? It’s because the DNS clustering system essentially uses the same concept to replicate DNS zones.  I’m not saying that’s any more secure, but we’ll get into that later.

If you’re not familiar with the access hash/key and what it does, let me give you the lowdown: it’s essentially a 960-character plaintext string that is used to authenticate exclusively with WHM.  Root and every reseller on a cPanel server can have one, and once it is created, you can pass the access hash in the header of any request to WHM in place of the root or reseller password.  This is the most common and recommended method of authenticating with the WHM APIs, and is considered much more secure than using password authentication.  The access hash’s ability to authenticate isn’t just limited to APIs – technically you can use it to authenticate and do anything that you could normally do in the WHM interface itself.  With the root access hash, this includes:

  • Adding SSH keys
  • Changing the root password
  • Changing user shells
  • Installing packages

Because of this, the root access hash should be guarded with the same level of security that you would apply to a root password or private SSH key.  It essentially grants root access to your server via WHM.

This is where the security concerns with the Configuration Cluster and DNS Cluster features come into play.  Both of these systems involve servers housing the root access hashes of other servers.  With DNS clustering among systems you manage directly, this isn’t so much of a problem as long as you absolutely do not check the Reverse Trust Relationship box, and the remote cluster server is a DNSONLY or other cPanel instance solely used to operate as a nameserver.  In the worst-case scenario of the setup I just described, if any of the servers in the DNS cluster get compromised, the most you’re looking at is someone also owning the DNS server itself.  Recovering from this is ‘just’ a matter of replacing the compromised servers and re-syncing your DNS zones.

With the Configuration Cluster feature, a determined hacker can obtain the root access hashes of all other hosting servers in the configuration cluster if the ‘master’ server is compromised.  From there, you’re looking at an inestimable amount of damage, considering that someone now essentially has root access to every server in the cluster.

With that in mind, the server that stores these access hashes needs to have an elevated level of security.  Going back to what I said about cPanel catering to the non-experienced sysadmin, the idea that this server should be secured beyond the norm of any other hosting server in your fleet is likely not a concept that would automatically register with the target audience.  But it needs to register with you, if you plan to use this feature and not have your entire infrastructure owned.

During the demo in which I expressed this concern, I was asked for suggestions on how to better implement a feature like this, and to that I verbally stated to the presenter that I was not able to immediately come up with a productive solution to this problem that didn’t negate all the hard work that cPanel’s dev team had already put into this.  And really, it’s the cPanel community that begged for this feature to exist in the first place, unfortunately without regard for the fact that a hosting server acting as a deployment agent for other servers is not something that should ever exist.  It’s a hard fact that a lot of users still do not consider the security ramifications of their actions.

So now that I’ve hopefully driven this point home, if you do plan to incorporate the Configuration Cluster feature into your infrastructure, here are some ideas for how you can make the master server (the one storing all your hashes) more secure:

  • House it outside the DMZ or within an internal network with limited outside access
  • Use the server to house and define configuration only – do not use it to host customer cPanel accounts, websites, email, etc
  • Lock the server down with a firewall to only allow access to what is needed for it to perform its functions.  This starts with port 2087 from the server, and limiting port 2087 TO the server from only authorized IPs
  • Rotate your access hashes on a frequent schedule, just as you (hopefully) do with your root passwords and SSH keys
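For the firewall point above, a minimal iptables sketch; 203.0.113.10 stands in for one of your authorized admin IPs:

```shell
# Allow WHM (port 2087) only from an authorized IP, then drop it from everywhere else
iptables -A INPUT -p tcp --dport 2087 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 2087 -j DROP
```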

PHP Phalcon Module for cPanel EasyApache

Over the last few months I’ve seen a number of requests for Phalcon support within EasyApache.  Bearing in mind that the installation is nothing more than that of a typical PECL module, some would prefer to have integration into cPanel to allow for easier installation and portability through EA templates.

I wrote a simple EasyApache module to integrate Phalcon into PHP builds.  You can find it on GitHub at https://github.com/thecpaneladmin/EA-PhalconPHP.


To install, simply copy the contents of the repo into /var/cpanel/easy/apache/custom_opt_mods/:

cd /var/cpanel/easy/apache/custom_opt_mods/
git clone https://github.com/thecpaneladmin/EA-PhalconPHP.git .

If you don’t have git on your server (first of all, why not?!?):

cd /usr/src
wget https://github.com/thecpaneladmin/EA-PhalconPHP/archive/master.zip
unzip -d /var/cpanel/easy/apache/custom_opt_mods/ master.zip


From EasyApache, you can select Phalcon from the list of PHP modules, and proceed to build.

The phalcon.so extension will be loaded from /usr/local/lib/php.ini, so bear in mind that if you have PHP running as a CGI module (CGI, FastCGI, or suPHP) and utilize a local php.ini file, you may need to add the following line to your local php.ini:
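The line in question:

```ini
extension=phalcon.so
```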




Then copy phalcon.so to whatever is defined as the value for extension_dir.



OpenSSL Heartbleed Bug and What You Need to Know

Over the past few days, we have received an overwhelming number of questions about the OpenSSL Heartbleed bug and how cPanel system administrators should be handling this.

First of all, if you haven’t read Codenomicon’s write-up on the bug, which thoroughly explains what it is, you should visit heartbleed.com.  Because their website already covers just about everything you need to know, I don’t feel the need to rehash all the nitty-gritty details here.  I’m only going to address how you, a cPanel administrator, should handle this on your server.

Among the operating systems you’ll see under cPanel, only CentOS 6.5 is vulnerable to this bug.  It affects other OSes as well, of course, but as a cPanel administrator you’re mostly dealing with CentOS (and perhaps FreeBSD).  If you use a version of CentOS older than 6.5, read no further – you’re not affected and can rest easy.

You can test whether you’re vulnerable by using this tool against a website on your server that has an SSL certificate installed.

To clear this up real quick: OpenSSL is a vendor-supplied package that, in your case, is provided by CentOS.  It is not supplied by cPanel, so there’s no point in asking them to “fix” this.  There is nothing for them to fix.  However, if you have system package upgrades enabled for cpupdate, and cPanel updates run automatically on your server, chances are the OpenSSL updates have already been installed.  To configure your cPanel update settings, check out this link, or go to WHM -> Update Preferences.
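If you want to check those settings from the command line instead, the update preferences live in /etc/cpupdate.conf; note the exact key names can vary slightly between cPanel versions:

```shell
grep -E '^(UPDATES|RPMUP)' /etc/cpupdate.conf
```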

Check to see if you’re running the latest version like so:

root@server [~]# rpm -qa |grep -i openssl

At this point you’re looking for a patched build of version 1.0.1e – the base version number stays the same, so check the package release number for an update dated after the disclosure.  This update does not necessarily fix the bug in question, but rather disables the TLS heartbeat extension that is vulnerable.  A later version will likely fix the problem altogether, but such is not available yet because Red Hat has not released a full fix.
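You can also ask the RPM changelog whether your installed build includes a change for the Heartbleed CVE, CVE-2014-0160:

```shell
rpm -q --changelog openssl | grep -i 'CVE-2014-0160'
```

If this prints one or more changelog entries, your build was patched.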

After OpenSSL is updated, you need to restart services.  This is necessary whether you updated OpenSSL yourself, or let cPanel do it during its update process.  The following services should be restarted:

  • cPanel
  • Apache*
  • Exim
  • Dovecot/Courier
  • Pure-FTPd/ProFTPD
  • MySQL
  • any other services that use SSL (Tip: you can use the following command to find a list of services to restart)

lsof -n | grep ssl | grep DEL | awk '{print $1}' | sort | uniq

(Suggestion provided by Lucas Rolff)

* For Apache, you should either restart via WHM, or from command line do a full stop/start or run /scripts/restartsrv_httpd.  Doing a “service httpd restart” will not kill the parent process that loads the openssl libraries.

Once this is done, you should be good to go.  It is recommended, however, that you re-key your SSL certificates and have them re-issued and re-installed to prevent security problems resulting from a compromised private key.

You may also wish to follow this conversation from the cPanel forums.


Fix for “Missing owner for domain X, force lookup to root”

On recent cPanel versions, rebuilding the Apache conf returns the following non-fatal error:

root@savannah [~]# /scripts/rebuildhttpdconf
info [rebuildhttpdconf] Missing owner for domain savannah.tcaserversolutions.com, force lookup to root
Built /usr/local/apache/conf/httpd.conf OK


The error does not adversely affect the way the httpd.conf file is built, but the presence of the unnecessary info message can be annoying.  And by “can be annoying”, I mean “is annoying”.

To fix this, you will need to edit the main vhost template for the ‘nobody’ user, which owns the primary hostname of your server.  Edit /var/cpanel/userdata/nobody/$hostname and add this line to the bottom of the file:

owner: 'nobody'


Then run /scripts/rebuildhttpdconf again and the “error” should be gone.


This fix was affectionately provided by Mohammed Naser from Vexxhost.




Implementing Mandrill with Exim on cPanel

Mandrill is a transactional email service run by MailChimp, comparable to SendGrid.  It comes stock with a powerful API for fast implementation into applications for sending email reliably over SMTP, but it can also be used as a smart host for all of your server’s outgoing email.

The below instructions cover how to do this via command line.  If you prefer WHM, simply go to WHM -> Exim Configuration Manager -> Advanced Editor and alter the sections indicated.

First, open up your /etc/exim.conf.local file in an editor and look for the @AUTH@ section.  Modify it to look like this:


auth_login:
driver = plaintext
public_name = LOGIN
hide client_send = : your@email : api_key


Note:  If exim.conf.local doesn’t exist, create it.  There’s a template here.

The value of api_key is not your Mandrill account password – it’s your API key.  You can find it in your account settings under “SMTP & API Credentials”.

Also, replace your@email with your Mandrill account email address.  If you already have something in the AUTH section, simply add this block of text below it.

Now look for the @PREROUTERS@ section, and modify it to look like this:


smart_route:
driver = manualroute
domains = ! +local_domains
ignore_target_hosts =
transport = auth_relay
route_list = * smtp.mandrillapp.com



The last modification should be to the @TRANSPORTSTART@ section:


auth_relay:
driver = smtp
port = 587
hosts_require_auth = $host_address
hosts_require_tls = $host_address


Save this file.  Now open /etc/exim.conf.localopts and add this line (or modify it and append the hostname to the existing line)


smarthost_routelist=*: smtp.mandrillapp.com


If you’re using WHM, this is under Basic Editor -> Mail -> Smarthost support.

Save the file, and apply the changes:


service exim restart


To test whether things are working, send an email out from your server, and look for it in /var/log/exim_mainlog. You should see something like this in your log entry:


2014-03-11 21:59:50 1WNbGg-0006ud-2d => to@email R=smart_route T=auth_relay H=smtp.us-east-1.mandrillapp.com [x.x.x.x] X=UNKNOWN:ECDHE-RSA-AES256-GCM-SHA384:256 A=auth_login C="250 2.0.0 Ok: queued as 3529E193E178"


If you have SPF records, you’ll need to add the hostname of the SMTP server to the record itself to allow the third-party mail server to send email on behalf of your domain.  Mandrill will provide the hostname you should use under Settings -> Sending Domains, when you do an SPF test.
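Purely as an illustration, an amended SPF record tends to look like the following; spf.example.com is a placeholder for the hostname Mandrill actually gives you:

```
v=spf1 +a +mx include:spf.example.com ~all
```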




Unable to Change Shell via WHM or CLI

When trying to change a user’s shell via WHM, you may occasionally see the following error:

Shell could not be modified

As shown below:

[Screenshot: WHM showing the “Shell could not be modified” error]

Attempting to change the shell via command line fails as well:

# chsh -s /bin/bash myuser
Changing shell for myuser.
setpwnam: File exists
Shell *NOT* changed.  Try again later.

This means there is a lock on the /etc/passwd file.  Simply deleting /etc/ptmp fixes this problem:

rm /etc/ptmp


cPanel 11.42 Cheat Sheet

In 2010, we released our first cPanel cheat sheet for version 11.25.  Admittedly, we haven’t been keeping up with the times.  A lot has changed with cPanel over the last few years and an update is well overdue.

Though labeled for version 11.42, the new cheat sheet is verified to be accurate for the following versions:

  • 11.36
  • 11.38
  • 11.40
  • 11.42

Hop on over to our Cheat Sheets page to download the newest version.  Due to the popularity of our 11.25 cheat sheet and by popular request from our visitors and clients, we plan to release a new cheat sheet for every new minor cPanel version.  We will also be developing more cheat sheets in the near future for various parts of your system that we couldn’t squeeze into the cPanel one.

(in case you’re wondering how many times I can say “cheat sheet” in a single post, the answer is “8”)

File: cPanel 11.42 Cheat Sheet (48.87KB)
Added: 02/14/2014
Downloads: 1316
Description: cPanel ports, file/folder locations, scripts, and log locations, for cPanel 11.36, 11.38, 11.40, and 11.42.



How to FSCK a Linux Filesystem

There comes a time in every sysadmin’s life where filesystem errors just…happen.  Luckily, these are somewhat easy to fix, assuming you don’t have a greater problem involving physical hardware damage.

First, you need to know the name of the disk device having the problem.  Do a quick df to see what device the affected partition is on:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       2.7T  2.6T  106G  97% /
/usr/tmpDSK     4.0G  1.7G  2.2G  44% /tmp

Look under the “Filesystem” column to see the device name for the partition in question.  Now, if this is any filesystem but “/”, your job is probably going to be easy.  Simply unmount the filesystem and run a fsck against it.  For example, if you have a separate /home partition listed as /dev/sdb1, you would do:

umount /home
fsck -yC /dev/sdb1

There are a number of options for fsck, but the above combination is my personal preference.  The ‘y’ tells fsck to fix whatever errors it sees, which is preferable unless you feel that your index finger has the stamina to hit ‘y’ 500 times in a row, and the ‘C’ prints out a pretty little progress bar so you can keep an eye on it.  Ext4 filesystems fsck rather quickly – typically less than an hour for a 2TB filesystem.  Ext3 takes significantly longer.

Now, unmounting a filesystem may not be straightforward – if any services are actively using files on that partition, the OS will refuse to unmount it.  Doing a lazy unmount (umount -l) won’t work here either – you need to unmount it cleanly.  To see what processes are using the filesystem in question, use lsof.  Continuing the /home example:

lsof | grep /home

Then stop any services or processes using it.

If the filesystem issue is on your primary partition, you have a little more work ahead of you.  You’re going to need to boot into a rescue image.  To do this, simply use a Netinstall image and boot to the CentOS installation screen, then type:

linux rescue nomount

You can skip networking and all that jazz, then run the shell.  From there, you’ll need to find the partition in question and run the same fsck command.  Do note that on CentOS 6+, the device name may be incremented since it will count the rescue image as the first device in most cases.  So your /dev/sda3 might be /dev/sdb3 now, or even /dev/sdc3.
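As an example of that device-name shift, the rescue-shell session might look like this; the device names are illustrative, so confirm yours with fdisk first:

```shell
fdisk -l               # list disks and note the shifted device names
fsck -yC /dev/sdb3     # what was /dev/sda3 before booting the rescue image
```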

Once the fsck is done, reboot and confirm your filesystem is clean:

dumpe2fs -h /dev/sda3

The “Filesystem state” line should read “clean”.  If it doesn’t, the fsck either didn’t complete correctly, or you have a larger problem on your hands.