Configuring LVS + Heartbeat + LDirectord on Red Hat AS 4 – Crazy knowledge base

There are plenty of LVS configuration guides online, but in my testing not a single one worked end to end. After countless failures and setbacks I finally got the system running. These notes are for my own reference, and hopefully they will also help others who run into the same problems.

I. Preparation
I am running Red Hat AS 4 (kernel 2.6.9-22.EL) with nginx as the web server, and cloned four systems with VMware. A word of warning: cloning four VMs onto one machine is misery, compilation becomes unbearably slow, so I spread them across two physical machines.

We now have four systems: two Director Servers, one primary and one standby, plus two Real Servers.
The IP assignments are:
DS1 192.168.1.227 (primary)
DS2 192.168.1.228 (standby)
RS1 192.168.1.229
RS2 192.168.1.230
VIP 192.168.1.233 (the Virtual IP; no need to understand it yet, it will become clear as you read on)

vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.227 node1
192.168.1.228 node2

uname -n
Run this on both Director Servers; they should report node1 and node2 respectively.

service iptables stop
First stop the firewall on all four systems (or mark eth0 as trusted) so the nodes can hear each other's heartbeats; otherwise each node will consider the other dead.
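Alternatively, rather than disabling the firewall outright, you could open just the heartbeat traffic; heartbeat broadcasts on UDP port 694 by default. A sketch, to be run on both Director Servers:

```shell
# allow heartbeat's default UDP port instead of stopping iptables entirely
iptables -A INPUT -i eth0 -p udp --dport 694 -j ACCEPT
# persist the rule across reboots (Red Hat style)
service iptables save
```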

II. Configuration
The cluster can be configured in the following ways:
(1) Route (DR) mode
(2) Tunnel mode
(3) Heartbeat
(4) Heartbeat + LDirectord

1. Route (DR) mode

Download the ipvsadm management tool from
http://www.linuxvirtualserver.org/software/
Be sure to choose the version that matches your kernel.
ipvsadm-1.24.tar.gz
tar zxf ipvsadm-1.24.tar.gz
make will likely fail with many errors unless the kernel source symlink exists, so run this step first:
ln -s /usr/src/kernels/2.6.9-22.EL-i686/ /usr/src/linux
cd ipvsadm-1.24
make
make install

(1) Configure the LVS script on both Director Servers
vi /etc/init.d/lvsDR

#!/bin/sh
#create in 20060812 by ghb
# description: start LVS of Directorserver
VIP=192.168.1.233
RIP1=192.168.1.229
RIP2=192.168.1.230
#RIPn=192.168.0.128~254
. /etc/rc.d/init.d/functions
case  $1  in
    start)
    echo "start LVS of DirectorServer"
    # set the Virtual IP Address
    /sbin/ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.0 up
    /sbin/route add -host $VIP dev eth0:0
    #Clear IPVS table
    /sbin/ipvsadm -C
    #set LVS
    /sbin/ipvsadm -A -t $VIP:80 -s rr                 
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
    #/sbin/ipvsadm -a -t $VIP:80 -r $RIP3:80 -g
    #Run LVS
    /sbin/ipvsadm
    #end
    ;;
    stop)
    echo "close LVS Directorserver"
    /sbin/ipvsadm -C
    ;;
    *)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

(-s rr selects the round-robin scheduler; change rr to use a different algorithm, see ipvsadm -h for the full list. -g selects LVS DR (direct routing) mode, and can likewise be changed.)
If you have more real servers, simply add them in the same way, then start this script.
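For reference, real servers can also be added, removed, or reweighted at runtime without editing the script. A sketch (192.168.1.231 is a hypothetical extra real server, not part of this setup):

```shell
# add a third real server to the existing virtual service (DR mode)
/sbin/ipvsadm -a -t 192.168.1.233:80 -r 192.168.1.231:80 -g
# change its weight to 2 (receives proportionally more traffic under wrr/wlc)
/sbin/ipvsadm -e -t 192.168.1.233:80 -r 192.168.1.231:80 -g -w 2
# remove it again
/sbin/ipvsadm -d -t 192.168.1.233:80 -r 192.168.1.231:80
# list the current table (numeric, no DNS lookups)
/sbin/ipvsadm -L -n
```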

(2) Configure the LVS script on both Real Servers
vi /etc/init.d/lvsRS

#!/bin/bash
#description : start realserver
#create in 20060812 by ghb
VIP=192.168.1.233
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up  # /32 mask so lo:0 does not claim a route for the whole subnet
/sbin/route add -host $VIP dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p
#end

Run this script on each real server; it stops the real server from answering ARP requests for the VIP.

Testing: start the nginx service on each real server.
On realserver1, run: echo "This is realserver1" > /var/www/index.html
On realserver2, run: echo "This is realserver2" > /var/www/index.html
Open http://192.168.1.233 in a browser; ipvsadm will rotate requests between the two real servers.
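You can also verify the rotation from any client on the same network without a browser. A sketch, assuming curl is installed:

```shell
# consecutive requests should alternate between the two real servers
for i in 1 2 3 4; do
    curl -s http://192.168.1.233/index.html
done
```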

2. Tunnel mode

(1) Configure the LVS script on both Director Servers
vi /etc/init.d/lvsTUNDR

#!/bin/sh
# description: start LVS of Directorserver
VIP=192.168.1.233
RIP1=192.168.1.229
RIP2=192.168.1.230
#RIPn=192.168.0.n
. /etc/rc.d/init.d/functions
case $1 in
    start)
    echo "start LVS of DirectorServer"
    # set the Virtual IP Address
    /sbin/ifconfig tunl0 $VIP broadcast $VIP netmask 255.255.255.0 up
    /sbin/route add -host $VIP dev tunl0
    #Clear IPVS table
    /sbin/ipvsadm -C
    #set LVS
    /sbin/ipvsadm -A -t $VIP:80 -s rr
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -i
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -i
    #/sbin/ipvsadm -a -t $VIP:80 -r $RIP3:80 -i
    #Run LVS
    /sbin/ipvsadm
    #end
    ;;
    stop)
    echo "close LVS Directorserver"
    ifconfig tunl0 down
    /sbin/ipvsadm -C
    ;;
    *)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

(2) Configure the LVS script on both Real Servers
vi /etc/init.d/lvsTUNRS

#!/bin/sh
# ghb in 20060812
# description: Config realserver tunl port and apply arp patch
VIP=192.168.1.233
. /etc/rc.d/init.d/functions
case $1 in
    start)
    echo "Tunl port starting"
    ifconfig tunl0 $VIP netmask 255.255.255.0 broadcast $VIP up
    /sbin/route add -host $VIP dev tunl0
    echo "1" > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/tunl0/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p
    ;;
    stop)
    echo "Tunl port closing"
    ifconfig tunl0 down
    echo "1" > /proc/sys/net/ipv4/ip_forward
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
    *)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

Run this script on each real server; it sets the VIP on the tunnel interface and makes the real server ignore ARP requests for it.
Test the same way as in the Route mode section above.

3. Heartbeat
(1) Install Heartbeat
Heartbeat's default build requires the libnet library, so install libnet first.
cd /down
wget -O libnet.tar.gz 'http://download.chinaunix.net/down.php?id=11946&ResourceID=5943&site=1'
tar zxf libnet.tar.gz
cd libnet
./configure
make
make install

Next install heartbeat itself. The heartbeat tarball is large and both configure and make take a while, so be patient.
cd /down
wget http://linux-ha.org/download/heartbeat-2.1.3.tar.gz
tar zxvf heartbeat-2.1.3.tar.gz
cd heartbeat-2.1.3
./ConfigureMe configure
make
make install

Note:
Heartbeat must be installed separately on each Director Server. Do not VMware-clone a system that already has it installed, or both nodes will end up with the same UUID and fail.
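If you have already cloned an installed system, heartbeat 2.x keeps its node UUID in /var/lib/heartbeat/hb_uuid (assumption: the default state directory); deleting that file on the cloned node forces a fresh UUID on the next start. A sketch:

```shell
# on the cloned node only: a new UUID is generated when heartbeat restarts
/etc/init.d/heartbeat stop
rm -f /var/lib/heartbeat/hb_uuid
/etc/init.d/heartbeat start
```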

(2) A minimal configuration on node1
Copy the template configuration files into the default configuration directory (optional, you can also create the files by hand; the comments in the templates help explain what each option means).

cp doc/authkeys /etc/ha.d/
cp doc/ha.cf /etc/ha.d/

cd /etc/ha.d/

Now edit the configuration files (both Director Servers need heartbeat installed and configured).
Edit /etc/ha.d/authkeys to use authentication method 1 (crc), then change the file's permissions to 600:
vi /etc/ha.d/authkeys
auth 1
1 crc

Change the file permissions:
chmod 600 /etc/ha.d/authkeys
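Note that crc provides only integrity checking, not real authentication; on an untrusted network you would normally use sha1 with a shared secret instead. A sketch of the alternative authkeys contents ("SomeSecret" is a placeholder you must replace):

```
auth 1
1 sha1 SomeSecret
```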

编辑/etc/ha.d/ha.cf:
vi /etc/ha.d/ha.cf

debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 2
deadtime 60
warntime 10
initdead 120
#udpport 694
bcast eth0
auto_failback on
ping 192.168.1.227
respawn root /usr/lib/heartbeat/ipfail
apiauth ipfail gid=root uid=root
hopfudge 1
use_logd yes
node node1
node node2
#crm on

Note:
ping 192.168.1.227 is a connectivity check used to decide whether a node still has network access. (Strictly speaking the ping target should be a third host that is always reachable, such as the gateway, rather than one of the cluster nodes themselves.)

Edit /etc/ha.d/haresources:
vi /etc/ha.d/haresources
node1 192.168.1.233 nginx
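Each haresources line names the preferred node followed by the resources to manage, in order. A bare IP address is shorthand for the IPaddr resource; the netmask and interface can be spelled out explicitly if the defaults are not what you want. A sketch (parameters illustrative):

```
# equivalent to the line above, with netmask and interface made explicit
node1 IPaddr::192.168.1.233/24/eth0 nginx
```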

Edit /etc/init.d/nginx:
vi /etc/init.d/nginx

#!/bin/bash  
#  
# nginx:       Control the nginx Daemon  
#  
# Version:      @(#) /etc/init.d/nginx 0.1  
#  
# description: This is an init.d script for nginx. Tested on CentOS4. \
#              Change DAEMON and PIDFILE if necessary.
#  
 
#Location of nginx binary. Change path as necessary
DAEMON=/usr/local/nginx/sbin/nginx  
NAME=`basename $DAEMON`  
 
#Pid file of nginx, should be matched with pid directive in nginx config file.  
PIDFILE=/usr/local/nginx/logs/$NAME.pid  
 
#this file location  
SCRIPTNAME=/etc/init.d/$NAME  
 
#only run if binary can be found  
test -x $DAEMON || exit 0  
 
RETVAL=0  
 
start() {  
    echo $"Starting $NAME" 
    $DAEMON  
    RETVAL=0  
}  
 
stop() {  
    echo $"Gracefully stopping $NAME"
    [ -s "$PIDFILE" ] && kill -QUIT `cat $PIDFILE`  
    RETVAL=0  
}  
 
forcestop() {  
    echo $"Quickly stopping $NAME"
    [ -s "$PIDFILE" ] && kill -TERM `cat $PIDFILE`  
    RETVAL=$?  
}  
 
reload() {  
    echo $"Gracefully reloading $NAME configuration"
    [ -s "$PIDFILE" ] && kill -HUP `cat $PIDFILE`  
    RETVAL=$?  
}  
 
status() {  
    if [ -s $PIDFILE ]; then  
        echo $"$NAME is running." 
        RETVAL=0  
    else 
        echo $"$NAME stopped." 
        RETVAL=3  
    fi  
}  
# See how we were called.  
case "$1" in  
    start)  
        start  
        ;;  
    stop)  
        stop  
        ;;  
    force-stop)  
        forcestop  
        ;;  
    restart)  
        stop  
        start  
        ;;  
    reload)  
        reload  
        ;;  
    status)  
        status  
        ;;  
    *)  
        echo $"Usage: $0 {start|stop|force-stop|restart|reload|status}" 
        exit 1  
esac  
 
exit $RETVAL

Copy the heartbeat-related configuration files from node1 to node2:
scp /etc/ha.d/ha.cf root@node2:/etc/ha.d/ha.cf
scp /etc/ha.d/authkeys root@node2:/etc/ha.d/authkeys
scp /etc/ha.d/haresources root@node2:/etc/ha.d/haresources
scp /etc/init.d/nginx root@node2:/etc/init.d/nginx

Start heartbeat on both nodes, then wait a moment (how long depends on the timing parameters in ha.cf):
/etc/init.d/heartbeat start

With the ha.cf settings shown here, the log output can be followed with
tail -f /var/log/messages

Run ifconfig and ps -ef on both nodes to check whether the virtual IP and nginx have come up.
If both Director Servers started correctly, DS1 (primary) should now show an extra eth0:0 device carrying 192.168.1.233 (the VIP), and nginx should be running.
DS2 (standby) stays in its original state, ready to take over when DS1 goes down. You can test this by running /etc/init.d/heartbeat stop on DS1.

Browse to http://192.168.1.233 and observe: while heartbeat is running on DS1, client requests go to DS1; when it is stopped, they go to DS2.
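A quick failover drill can be run from any third machine. A sketch (assumes passwordless ssh to the nodes, which this article does not set up):

```shell
# confirm DS1 currently holds the VIP
ssh root@node1 "/sbin/ifconfig eth0:0"
# simulate a failure on DS1
ssh root@node1 "/etc/init.d/heartbeat stop"
# after deadtime (60s in the ha.cf above) expires, the VIP should move to DS2
sleep 70
ssh root@node2 "/sbin/ifconfig eth0:0"
```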

4、Heartbeat + LDirectord

LDirectord monitors the Real Servers: when one fails it is removed from the load balancer's table, and when it recovers it is added back. LDirectord was already installed along with heartbeat.

LDirectord depends on the MailTools module from CPAN, so install MailTools first.
Download: http://search.cpan.org/~markov/MailTools/
perl Makefile.PL
make && make install

(1) Decide whether LVS runs in Route or Tunnel mode.
This example uses Tunnel mode; refer to the corresponding configuration above.

(2) Configure /etc/ha.d/ldirectord.cf (note that the receive string must actually appear in the page fetched by request, so make sure the test page on each real server contains it):
vi /etc/ha.d/ldirectord.cf

checktimeout=3
checkinterval=1
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=yes

# Sample for an http virtual service
virtual=192.168.1.233:80
        real=192.168.1.229:80 ipip
        real=192.168.1.230:80 ipip
        fallback=127.0.0.1:80 gate
        service=http
        request="index.html"
        receive="Test Page"
        scheduler=rr
        #persistent=600
        netmask=255.255.255.0
        protocol=tcp
        checktype=negotiate
        checkport=80
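You can mimic ldirectord's HTTP health check by hand to make sure it will pass. A sketch (assumes curl is available and the real server's index.html contains the receive string):

```shell
# fetch the request page from RS1 and look for the receive string
if curl -s http://192.168.1.229/index.html | grep -q "Test Page"; then
    echo "check would pass"
else
    echo "check would fail: ldirectord will set this server's weight to 0"
fi
```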

(3) Create the lvsCloseTUN script on both Director Servers
vi /etc/init.d/lvsCloseTUN

#!/bin/sh
# create in 200608 ghb
# description: close tunl0 and arp_ignore
VIP=192.168.1.233
. /etc/rc.d/init.d/functions
case $1 in
    start)
    echo "start director server and close tunl"
    ifconfig tunl0 down
    echo "1" > /proc/sys/net/ipv4/ip_forward
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
    stop)
    echo "start Real Server"
    ifconfig tunl0 $VIP netmask 255.255.255.0 broadcast $VIP up
    /sbin/route add -host $VIP dev tunl0
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p
    ;;
    *)
    echo "Usage: $0 {start|stop}"
    exit 1
esac

(4) Modify the haresources file on both Director Servers
vi /etc/ha.d/haresources
node1 lvsCloseTUN 192.168.1.233 lvsTUN ldirectord nginx

At this point all the configuration is done, and you are probably exhausted, but things did not go as hoped: after starting everything, ipvsadm -L showed RS1 and RS2 with a weight of 0, the log reported "Weight set to 0", and requests to the VIP always landed on the DS itself.
After much fruitless searching I gave up and edited the /usr/sbin/ldirectord script directly: search for "Weight set to 0" and, just above it, change the line "$ipvsadm_args $rforw -w 0" to "$ipvsadm_args $rforw -w 1".
(In hindsight, setting weights to 0 instead of removing servers is the documented behaviour of quiescent=yes; a "Weight set to 0" message most likely means the health check itself was failing, for example because the receive string did not match the page content, and fixing the check would be the cleaner solution.)

OK, victory is in sight. Restart heartbeat on the DS machines, then nginx and the /etc/init.d/lvsTUNRS script on the RS machines, and test.

Browse to http://192.168.1.233 and observe: client requests rotate among the one DS server and the two RS servers.

These notes are somewhat rough; if you run into problems while configuring, leave me a comment and we can work through them together.

References:
http://sorphi.javaeye.com/blog/191076
http://www.linuxsky.org/doc/admin/200708/97.html
http://imysql.cn/2009/05/13/build_ha_envirenment_using_mysql_cluster7_and_lvs.html

