Split a mysqldump file containing multiple databases by database

I have a mysqldump file containing multiple databases (5). One of those databases takes a very long time to load. Is there a way to split the mysqldump file by database, or to tell mysql to load only one of the specified databases?
Manish
This Perl script should do it.
#!/usr/bin/perl -w
#
# splitmysqldump - split mysqldump file into per-database dump files.

use strict;
use warnings;

my $dbfile;
my $dbname = q{};
my $header = q{};

while (<>) {
    # Beginning of a new database section:
    # close currently open file and start a new one
    if (m/-- Current Database\: \`([-\w]+)\`/) {
        if (defined $dbfile && tell($dbfile) != -1) {
            close $dbfile or die "Could not close file!";
        }
        $dbname = $1;
        open $dbfile, ">>", "$1_dump.sql" or die "Could not create file!";
        print $dbfile $header;
        print "Writing file $1_dump.sql ...\n";
    }
    if (defined $dbfile && tell($dbfile) != -1) {
        print $dbfile $_;
    }
    # Catch dump file header in the beginning
    # to be printed to each separate dump file.
    if (! $dbname) { $header .= $_; }
}
close $dbfile or die "Could not close file!";
Run it against the dump containing all the databases:
./splitmysqldump < all_databases.sql
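As an alternative to the Perl script, GNU coreutils' csplit can split on the same "-- Current Database:" marker. A minimal, self-contained sketch; the sample file written here is only a tiny stand-in for a real mysqldump --all-databases dump:

```shell
# Tiny stand-in for a real `mysqldump --all-databases` file.
cat > all_databases.sql <<'EOF'
-- MySQL dump header
-- Current Database: `mydb`
CREATE TABLE t1 (id INT);
-- Current Database: `otherdb`
CREATE TABLE t2 (id INT);
EOF

# Split at every "-- Current Database:" marker; part_00 holds the shared
# header, part_01 and onward hold the per-database sections.
csplit -s -z -f part_ all_databases.sql '/^-- Current Database: /' '{*}'

ls part_*
```

Note also that the mysql client has a documented --one-database (-o) option, so the second half of the question can be answered without splitting at all: mysql --one-database mydb < all_databases.sql replays only the statements belonging to mydb.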
Thanks for the nice script, it works like a charm – sakhunzai 2014-04-17 11:40:49
This is a great blog post that I always refer back to for doing this kind of thing with a mysqldump file:

http://gtowey.blogspot.com/2009/11/restore-single-table-from-mysqldump.html

You can easily extend it to extract a single database.
This is actually a great trick, simple and effective. :-) – 2014-03-24 05:04:44
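The single-table sed trick from that post extends to a single database roughly like this (a sketch; mydb and the sample dump below are hypothetical stand-ins for your database name and real dump file):

```shell
# Tiny stand-in for a real `mysqldump --all-databases` file.
cat > all_databases.sql <<'EOF'
-- MySQL dump header
-- Current Database: `mydb`
CREATE TABLE t1 (id INT);
-- Current Database: `otherdb`
CREATE TABLE t2 (id INT);
EOF

# Print from the `mydb` marker up to the next "Current Database" marker
# (or EOF if mydb is the last section). The closing marker line is
# included, but it is only an SQL comment, so it is harmless on replay.
sed -n '/^-- Current Database: `mydb`/,/^-- Current Database: /p' \
    all_databases.sql > mydb_only.sql
```

The start pattern matches the mydb section header and the end pattern matches the next section's header, so everything in between is the mydb section; sed only looks for the end pattern on lines after the start line, which is why the same prefix works for both.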
I've been working on a Python script that splits one big dump file into small ones, one per database. Its name is dumpsplit and here's a scratch:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
import re
import os

HEADER_END_MARK = '-- CHANGE MASTER TO MASTER_LOG_FILE'
FOOTER_BEGIN_MARK = '\/\*\!40103 SET TIME_ZONE=@OLD_TIME_ZONE \*\/;'
DB_BEGIN_MARK = '-- Current Database:'

class Main():
    """Whole program as a class"""
    def __init__(self, file, output_path):
        """Tries to open mysql dump file to call processing methods"""
        self.output_path = output_path
        try:
            self.file_rsrc = open(file, 'r')
        except IOError:
            sys.stderr.write("Can't open %s\n" % file)
        else:
            self.__extract_footer()
            self.__extract_header()
            self.__process()

    def __extract_footer(self):
        matched = False
        self.footer = ''
        self.file_rsrc.seek(0)
        line = self.file_rsrc.next()
        try:
            while line:
                if not matched:
                    if re.match(FOOTER_BEGIN_MARK, line):
                        matched = True
                        self.footer = self.footer + line
                else:
                    self.footer = self.footer + line
                line = self.file_rsrc.next()
        except StopIteration:
            pass
        self.file_rsrc.seek(0)

    def __extract_header(self):
        matched = False
        self.header = ''
        self.file_rsrc.seek(0)
        line = self.file_rsrc.next()
        try:
            while not matched:
                self.header = self.header + line
                if re.match(HEADER_END_MARK, line):
                    matched = True
                else:
                    line = self.file_rsrc.next()
        except StopIteration:
            pass
        self.header_end_pos = self.file_rsrc.tell()
        self.file_rsrc.seek(0)

    def __process(self):
        first = False
        self.file_rsrc.seek(self.header_end_pos)
        prev_line = '--\n'
        line = self.file_rsrc.next()
        end = False
        try:
            while line and not end:
                if re.match(DB_BEGIN_MARK, line) or re.match(FOOTER_BEGIN_MARK, line):
                    if not first:
                        first = True
                    else:
                        out_file.writelines(self.footer)
                        out_file.close()
                    if not re.match(FOOTER_BEGIN_MARK, line):
                        name = line.replace('`', '').split()[-1] + '.sql'
                        print name
                        out_file = open(os.path.join(self.output_path, name), 'w')
                        out_file.writelines(self.header + prev_line + line)
                        prev_line = line
                        line = self.file_rsrc.next()
                    else:
                        end = True
                else:
                    if first:
                        out_file.write(line)
                    prev_line = line
                    line = self.file_rsrc.next()
        except StopIteration:
            pass

if __name__ == '__main__':
    Main(sys.argv[1], sys.argv[2])
Or, you can save each database directly to a separate file...
#!/bin/bash
dblist=`mysql -u root -e "show databases" | sed -n '2,$ p'`
for db in $dblist; do
    mysqldump -u root $db | gzip --best > $db.sql.gz
done
Use 'mysql --batch --skip-column-names' instead of 'sed' for machine-parsable output. [(reference)](https://dev.mysql.com/doc/refman/5.0/en/mysql-command-options.html) – 2014-04-27 19:33:52
Like Stano suggested, the best thing would be to do it at dump time with something like...
mysql -Ne "show databases" | grep -v schema | while read db; do mysqldump $db | gzip > $db.sql.gz; done
Of course, this relies on a ~/.my.cnf file existing with
[client]
user=root
password=rootpass
Otherwise, just define them with the -u and -p parameters on the mysql and mysqldump calls:
mysql -u root -prootpass -Ne "show databases" | grep -v schema | while read db; do mysqldump -u root -prootpass $db | gzip > $db.sql.gz; done
Hope this helps
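For what the `grep -v schema` stage actually removes, here is a deterministic sketch with printf standing in for the output of `mysql -Ne "show databases"`; note that it would also drop any user database whose name happens to contain "schema":

```shell
# Stand-in for the output of: mysql -Ne "show databases"
printf '%s\n' information_schema performance_schema mydb otherdb |
    grep -v schema
# → mydb
# → otherdb
```

If that false-positive risk matters, an explicit exclusion list such as grep -vx 'information_schema\|performance_schema' is safer than matching on the substring "schema".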
I would probably do the dump and the reload in steps:

Note: if you are using MyISAM tables, you can disable index evaluation in step 4 and re-enable it afterwards to make the inserts faster.
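The MyISAM index trick mentioned above can be sketched by wrapping the dump with ALTER TABLE ... DISABLE/ENABLE KEYS statements before replaying it; big_table and mydb_dump.sql below are hypothetical names, and the first line only fabricates a stand-in dump file:

```shell
# Stand-in for a real per-database dump file.
echo 'INSERT INTO big_table VALUES (1);' > mydb_dump.sql

# Defer non-unique index maintenance on the MyISAM table until all rows
# are inserted, then rebuild the indexes in one pass.
{
    echo 'ALTER TABLE big_table DISABLE KEYS;'
    cat mydb_dump.sql
    echo 'ALTER TABLE big_table ENABLE KEYS;'
} > mydb_fast_load.sql

# The result would then be replayed with: mysql mydb < mydb_fast_load.sql
```

In practice, recent mysqldump output already wraps each table's INSERTs in conditional /*!40000 ALTER TABLE ... DISABLE KEYS */ comments, so this manual wrapping is mainly useful for hand-built or stripped-down dump files.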
Check this solution for Windows/Linux: http://stackoverflow.com/questions/132902/how-do-i-split-the-output-from-mysqldump-into-smaller-files/30988416#30988416 – Alisa 2015-06-22 22:04:18