
Cloud Database Backup

Posted May 25th, 2011 (Updated 29 Oct 2011) in Database, Linux

Update 2011-10-12: More storage options! Google Storage, Local HDD, FTP, experimental unlimited Google Docs conversion

Have you ever wanted to ensure that even if your server dies in the most spectacular of ways, your DB is safe? Meet Cloud Database Backup. Cloud Database Backup is a little script I’ve written that allows you to quickly and easily back up your MySQL DBs to the cloud.

Currently you can back your database up to:

  • Amazon S3
  • FTP
  • Google Docs
  • Google Storage
  • Local HDD

Download the script here.

Requirements:

  • MySQL with access to INFORMATION_SCHEMA database
  • php, php5-cli (for S3 backups)
  • python2, python-gdata (for Google Docs backups)
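
On a Debian/Ubuntu style box these generally come straight from the distro packages, along these lines (package names vary by distro and release, so treat this as a guide only):

    sudo apt-get install mysql-client zip php5-cli python python-gdata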

This is an updated version of my old Automatic MySQL Database Backup script that adds support for Google Docs (which now allows uploads of any file type for non-enterprise users), the recently opened Google Storage, as well as local HDD and FTP servers. It does a MySQL dump of each DB separately, zips them all and uploads the zip to your cloud service of choice.
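
If you're curious what that boils down to, here's a minimal sketch of the general approach (not the script's actual code); the DB_* variables mirror the ones used in backup.sh, while BACKUP_DIR and the upload step at the end are placeholders for whichever driver you pick:

    #!/bin/bash
    # Rough sketch only - the real backup.sh differs; BACKUP_DIR and the final
    # upload step are placeholders, not names taken from the actual script.
    DB_USER=root
    DB_PASS=secret
    DB_HOST=localhost
    BACKUP_DIR=/tmp/db_backup

    mkdir -p "$BACKUP_DIR"

    # One mysqldump per database, skipping information_schema
    LIST=`mysql -u${DB_USER} -p${DB_PASS} -h${DB_HOST} INFORMATION_SCHEMA --skip-column-names -e "SELECT SCHEMA_NAME FROM SCHEMATA WHERE SCHEMA_NAME != 'information_schema';"`
    for DB in $LIST; do
        mysqldump -u${DB_USER} -p${DB_PASS} -h${DB_HOST} "$DB" > "$BACKUP_DIR/$DB.sql"
    done

    # Zip the dumps, then hand the archive to whichever storage driver you configured
    ZIP=db_backup_`date +%Y%m%d`.zip
    zip -j "$ZIP" "$BACKUP_DIR"/*.sql
    # ...upload "$ZIP" via the S3/FTP/Google Docs/Google Storage driver here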

I've also added an experimental 'db chunker' which uses this script to split SQL dumps into sizes that Google Docs will convert, allowing you to upload unlimited backups for free; however, due to a reproducible bug in Google Docs, I wouldn't recommend using it (and it's disabled by default).
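
The chunking idea itself is straightforward; conceptually it's just splitting the dump into pieces, something like the following (the size threshold and naming here are guesses for illustration, not what the linked script actually does):

    # Split a dump into pieces small enough for Google Docs to convert
    split -b 500k -d backup.sql backup.sql.part_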

To get started, simply update your details in the appropriate places at the top of backup.sh and run ./backup.sh. Presto!

I’ve also included a crontab example to allow automation of backups.
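
The included example is the one to copy, but for reference a typical entry looks something like this (path and schedule are placeholders); it runs the backup nightly at 2am and appends any output to a log file:

    # m h dom mon dow   command
    0 2 * * * /path/to/backup.sh >> /var/log/db_backup.log 2>&1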

  • Tash

    mind = blasted!

    • Flynsarmy

      I hope you were wearing a helmet!

  • CAT Shannon

    I've found some mistakes, I think.
    In lines 28 and 35 you should use the DB_PASS var to execute the mysql commands.

    I have the atom module installed in Python, and I get the following error:
    File "./drivers/GoogleDocs/flynsarmy_gdocs_backup.py", line 26, in <module>
    import atom.data
    ImportError: No module named data

  • CAT Shannon

    I forgot to say this is a great script. Thank you.

  • Alan

    I'm trying to run this script, but it's not working for me. Nothing appears in the log, and when I check the syslog file I can see this message: (CRON) error (grandchild #25460 failed with exit status 126), so I guess it's a configuration problem, but I'm not sure. I'm just trying to back up my database and then connect to S3 to upload it.

    Any help? :)
    Thanks…
    Alan

    • Flynsarmy

      Try running the tool manually from the terminal. What does the output say?
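
      For what it's worth, exit status 126 from cron usually means the script couldn't be executed at all (missing execute bit or an interpreter problem), so something like this is worth a look:

      ls -l backup.sh       # the execute bit should be set, e.g. -rwxr-xr-x
      chmod +x backup.sh    # add it if it's missing
      bash -x ./backup.sh   # run with tracing to see where it stops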

  • John Paul Morrison

    Not working with the latest gdata. Simple fix though:

    --- flynsarmy_gdocs_backup.py.bak 2011-10-12 04:57:52.000000000 -0700
    +++ flynsarmy_gdocs_backup.py 2012-01-25 12:46:53.000000000 -0800
    @@ -82,7 +82,7 @@

    self.uploader = gdata.client.ResumableUploader(
    self.client, self.f, content_type, file_size,
    - chunk_size=self.chunk_size, desired_class=gdata.docs.data.DocsEntry)
    + chunk_size=self.chunk_size, desired_class=gdata.docs.data.Resource)

    def __del__(self):
    if self.uploader is not None:
    @@ -92,11 +92,11 @@
    """Uploads an entire file, handing the chunking for you.

    Args:
    - new_entry: gdata.data.docs.DocsEntry An object holding metadata to create
    + new_entry: gdata.data.docs.Resource An object holding metadata to create
    the document with.

    Returns:
    - A gdata.docs.data.DocsEntry of the created document on the server.
    + A gdata.docs.data.Resource of the created document on the server.
    """
    uri = self.CREATE_SESSION_URI

    @@ -111,11 +111,11 @@
    """Uploads a file, demonstrating manually chunking the file.

    Args:
    - new_entry: gdata.data.docs.DocsEntry An object holding metadata to create
    + new_entry: gdata.data.docs.Resource An object holding metadata to create
    the document with.

    Returns:
    - A gdata.docs.data.DocsEntry of the created document on the server.
    + A gdata.docs.data.Resource of the created document on the server.
    """
    uri = self.CREATE_SESSION_URI

    @@ -206,10 +206,10 @@

    if chunk_size != gdata.client.ResumableUploader.DEFAULT_CHUNK_SIZE:
    entry = demo.UploadAutomaticChunks(
    - gdata.docs.data.DocsEntry(title=atom.data.Title(text=title)))
    + gdata.docs.data.Resource(title=atom.data.Title(text=title)))
    else:
    entry = demo.UploadInManualChunks(
    - gdata.docs.data.DocsEntry(title=atom.data.Title(text=title)))
    + gdata.docs.data.Resource(title=atom.data.Title(text=title)))

    if debug:
    print 'Done: %s' % demo.uploader.QueryUploadStatus()
    @@ -219,4 +219,4 @@
    print 'file closed: %s' % demo.uploader.file_handle.closed

    if __name__ == '__main__':
    - main()
    \ No newline at end of file
    + main()
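
    If anyone wants to apply the diff above, saving it to a file and running patch from the driver's directory should do the trick (the patch filename here is just an example):

    cd drivers/GoogleDocs
    patch < gdocs_resource.patch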

  • Sanjap

    How can one back up only a specific database? It seems to be backing up all databases on the MySQL server.

    • Flynsarmy

      Open backup.sh and change

      LIST=`mysql -u${DB_USER} -p${DB_PASS} -h${DB_HOST} INFORMATION_SCHEMA -e "SELECT SCHEMA_NAME FROM SCHEMATA WHERE SCHEMA_NAME !='information_schema';"`

      to

      LIST=`mysql -u${DB_USER} -p${DB_PASS} -h${DB_HOST} INFORMATION_SCHEMA -e "SELECT SCHEMA_NAME FROM SCHEMATA WHERE SCHEMA_NAME ='<your_db_name>';"`
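
      Or, to back up a handful of specific databases rather than just one, an IN clause should work the same way (swap in your own database names):

      LIST=`mysql -u${DB_USER} -p${DB_PASS} -h${DB_HOST} INFORMATION_SCHEMA -e "SELECT SCHEMA_NAME FROM SCHEMATA WHERE SCHEMA_NAME IN ('first_db','second_db');"`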