docker-compose Installation, Usage, and Configuration

1. Installing on macOS

brew install docker-compose

2. docker-compose.yml configuration

version: '3'

services:
  memcache:
    image: memcached:latest
    ports:
      - "127.0.0.1:11211:11211"
    networks:
      - lnmp
    container_name: memcache15

  mysql:
    # build: ./mysql          # build from a Dockerfile instead of pulling an image
    image: mysql:latest
    ports:
      - "3306:3306"           # host port:container port
    environment:
      - MYSQL_ROOT_PASSWORD=asdfghjkl   # MySQL root password
    volumes:
      - ~/MyServer/mysql/data:/var/lib/mysql:rw   # MySQL data files
    networks:
      - lnmp
    container_name: mysql57   # container name

  redis:
    image: redis:latest
    ports:
      - "127.0.0.1:6379:6379" # bind to 127.0.0.1 if the service should not be reachable from outside
    # The official redis image ignores appendonly/requirepass environment
    # variables; pass them as server arguments instead:
    command: redis-server --appendonly yes --requirepass 123456
    networks:
      - lnmp
    container_name: redis40

  php:
    # build: ./php
    image: php:7.1-fpm
    ports:
      - "127.0.0.1:9000:9000"
    volumes:
      - ~/MyServer/myweb/test:/var/www/html:rw    # web document root
      - ~/MyServer/php/php.ini:/usr/local/etc/php/php.ini:ro
      - ~/MyServer/php/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
      - ~/MyServer/php/php-fpm.conf:/usr/local/etc/php-fpm.conf:ro
    networks:
      - lnmp
    container_name: php72
    tty: true
    links:
      - mysql
    privileged: true

  nginx:
    # build: ./nginx
    image: nginx:latest
    ports:
      - "8080:80"             # map to another port if nginx or apache is already running on the host
      # - "8081:81"           # additional sites
      # - "8082:82"
      # - "8083:83"
    depends_on:
      - "php"
    volumes:
      - ~/MyServer/myweb/test:/var/www/html:rw
      - ~/MyServer/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ~/MyServer/nginx/server.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - lnmp
    container_name: nginx114

networks:   # create the network
  lnmp:
    driver: bridge

3. Common commands

docker-compose up        # start the containers

docker-compose up -d     # start the containers as background services

docker-compose exec mysql bash   # enter a container (mysql is the service name, not the container name)
 
4. Notes and pitfalls

(1) MySQL 8 authentication plugin (observed with 8.0.17)
2059 - Authentication plugin 'caching_sha2_password' cannot be loaded: dlopen(../Frameworks/caching_sha2_password.so, 2): image not found

docker-compose exec mysql bash

Then, in the MySQL server configuration file, switch the default authentication plugin:
#default_authentication_plugin=caching_sha2_password   (comment this line out)
default_authentication_plugin=mysql_native_password    (add this line)

mysql -h localhost -u root -pasdfghjkl
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'asdfghjkl';
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'asdfghjkl';
flush privileges;
 
 
(2) The fastcgi_pass address (this is the key point): fastcgi_pass php72:9000; in general it takes the form php-fpm-container-name:9000.
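For reference, a minimal nginx server block using this fastcgi_pass address might look like the following sketch (the root matches the volume mounted above; server_name is a placeholder):

```
server {
    listen 80;
    server_name localhost;          # placeholder
    root /var/www/html;
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass php72:9000;    # php-fpm container name : port
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```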
 
(3) Connecting from PHP to MySQL inside Docker
With links: - mysql configured on the php service, set host=mysql in the PHP application's configuration; mysql is the service name:
'dsn' => 'mysql:dbname=test;port=3306;host=mysql;charset=utf8',
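As a sketch of the same idea with plain PDO (assuming a test database exists; the credentials come from the compose file above, and "mysql" resolves to the mysql service over the shared lnmp network):

```
<?php
// Hypothetical example: connect from the php container to the "mysql"
// service by its docker-compose service name.
$pdo = new PDO(
    'mysql:host=mysql;port=3306;dbname=test;charset=utf8',
    'root',
    'asdfghjkl'
);
```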

 

Four Ways to Navigate Between Pages in a WeChat Mini Program

1. wx.navigateTo({}): keeps the current page and navigates to a page within the app; use wx.navigateBack to return.

The parameters passed in the URL are received in the target page's onLoad() handler, where option.id yields the value:
onLoad: function (option) {
  console.log(option)  // print option to inspect the parameters
  this.setData({
    id: option.id,
  });
}

wx.navigateTo({
  url: '/pages/test/test?id=1&page=4',  // target page path; parameters follow ?, separated by &; a relative path, no .wxml suffix
  success: function () {},   // callback on success
  fail: function () {},      // callback on failure
  complete: function () {}   // callback on completion (runs after both success and failure)
})

2. wx.redirectTo(): closes the current page and navigates to a non-tabBar page.
Example:
let url = '/pages/test/share?id=' + e.target.dataset.id
wx.redirectTo({ url: url })

3. The <navigator> component. Example: <navigator url='/pages/test/index'>tap to navigate</navigator>

4. wx.switchTab: navigates to a tabBar page.
wx.switchTab({
  url: '/pages/test/index',  // note: switchTab can only navigate to tabBar pages, not to pages without a tab
})

WeChat Mini Program Forms

The WeChat mini-program input component has a type attribute with several possible values:

  • text: an ordinary text keyboard
  • number: numeric keyboard (no decimal point)
  • idcard: numeric keyboard (no decimal point, plus an X key)
  • digit: numeric keyboard (with a decimal point)
    Note: number has no decimal point; digit does.
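A minimal WXML sketch of the four keyboard types (placeholder text is illustrative only):

```
<input type="text"   placeholder="text keyboard" />
<input type="number" placeholder="number: digits only" />
<input type="idcard" placeholder="idcard: digits plus an X key" />
<input type="digit"  placeholder="digit: digits with a decimal point" />
```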

 

[centos7] DenyHosts Installation and Configuration

DenyHosts is a program written in Python that runs on Linux to prevent SSH brute-force attacks. It analyzes the sshd log file (/var/log/secure) and, when it detects repeated attacks, records the offending IP in /etc/hosts.deny, thereby blocking the IP automatically.
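The entries DenyHosts appends to /etc/hosts.deny follow the standard TCP wrappers format; a pair of blocked hosts would look like this (the addresses are illustrative):

```
sshd: 203.0.113.17
sshd: 198.51.100.42
```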

Download:
https://sourceforge.net/projects/denyhosts/files/
# Install DenyHosts
tar xvzf DenyHosts-2.6.tar.gz
cd DenyHosts-2.6
python setup.py install
Note: specifying a custom install directory did not work in testing

# Default install directory
/usr/share/denyhosts
# Configuration files
cd /usr/share/denyhosts/
cp denyhosts.cfg-dist denyhosts.cfg
cp daemon-control-dist daemon-control
# Start the service
/usr/share/denyhosts/daemon-control start

[centos7] ZooKeeper Installation and Usage, PHP zookeeper Extension

1. Install ZooKeeper
tar xvzf zookeeper-3.4.6.tar.gz
mv zookeeper-3.4.6 /usr/local/zookeeper

/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status
/usr/local/zookeeper/bin/zkServer.sh stop
/usr/local/zookeeper/bin/zkServer.sh restart

2. Using the ZooKeeper client

/usr/local/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181

1. List the root directory: ls /  (use ls to view the contents currently stored in ZooKeeper)
2. List the root with metadata: ls2 /  (view the current node's data, including update counts)
3. Create a znode with initial content: create /zk "test"  (creates a new znode "zk" with an associated string)
4. Get a znode's content: get /zk  (confirm the znode contains the string we created)
5. Set a znode's content: set /zk "zkbak"
6. Delete a znode: delete /zk  (removes the znode created above)
7. Quit the client: quit
8. Show help: help

The command echo stat | nc 127.0.0.1 2181 shows whether a node is acting as follower or leader.
echo ruok | nc 127.0.0.1 2181 tests whether the server is up; a reply of imok means it is running.
echo dump | nc 127.0.0.1 2181 lists outstanding sessions and ephemeral nodes.
echo kill | nc 127.0.0.1 2181 shuts the server down.
echo conf | nc 127.0.0.1 2181 prints detailed server configuration.
echo cons | nc 127.0.0.1 2181 lists full connection/session details for all clients connected to the server.
echo envi | nc 127.0.0.1 2181 prints detailed information about the server environment (as opposed to conf).
echo reqs | nc 127.0.0.1 2181 lists outstanding requests.
echo wchs | nc 127.0.0.1 2181 prints summary information about the server's watches.
echo wchc | nc 127.0.0.1 2181 lists watch details by session: the output is a list of sessions with their watches.
echo wchp | nc 127.0.0.1 2181 lists watch details by path: the output is a list of paths with their sessions.
3. Install libzookeeper
cd /usr/local/zookeeper/src/c
./configure --prefix=/usr/local/zookeeper
make && make install

Libraries have been installed in:
/usr/local/zookeeper/lib

4. Install the PHP zookeeper extension
http://pecl.php.net/package/zookeeper

wget "http://pecl.php.net/get/zookeeper-0.2.2.tgz"
tar xvzf zookeeper-0.2.2.tgz
cd zookeeper-0.2.2
/usr/local/php/bin/phpize
./configure --with-php-config=/usr/local/php/bin/php-config --with-libzookeeper-dir=/usr/local/zookeeper/
make && make install
Installing shared extensions: /usr/local/php/lib/php/extensions/no-debug-non-zts-20121212/

vim /usr/local/php/etc/php.ini
[zookeeper]
extension=zookeeper.so
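Once the extension is loaded, a minimal usage sketch might look like the following (assuming ZooKeeper is running on 127.0.0.1:2181; the /zk_test path and value are arbitrary demo choices):

```
<?php
// Connect to the local ZooKeeper server
$zk = new Zookeeper('127.0.0.1:2181');

// World-readable/writable ACL for the demo node
$acl = array(array(
    'perms'  => Zookeeper::PERM_ALL,
    'scheme' => 'world',
    'id'     => 'anyone',
));

// Create a znode and read it back
if (!$zk->exists('/zk_test')) {
    $zk->create('/zk_test', 'hello', $acl);
}
echo $zk->get('/zk_test');
```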

References:
http://mirror.bit.edu.cn/apache/zookeeper/
http://pecl.php.net/package/zookeeper
http://www.wfuyu.com/mvc/22178.html
http://blog.csdn.net/xiaolang85/article/details/13021339

[pgsql] pgq Master-Slave Replication Examples

1. Replicating from crm to test_crm
Databases:
crm - master
test_crm - slave

vim crm.ini
[londiste3]
job_name = l3_crm
db = host=192.168.232.234 port=5432 user=dev password=dev dbname=crm
queue_name = replika
logfile = /usr/local/skytools/londiste3/log/l3_crm.log
pidfile = /usr/local/skytools/londiste3/pid/l3_crm.pid

vim crm-gp.ini
[londiste3]
job_name = l3_gp
db = host=192.168.232.234 port=5432 user=dev password=dev dbname=test_crm
queue_name = replika
logfile = /usr/local/skytools/londiste3/log/l3_gp.log
pidfile = /usr/local/skytools/londiste3/pid/l3_gp.pid

vim pgqd-crm.ini
[pgqd]
database_list = crm,trade,test_crm
logfile = /usr/local/skytools/londiste3/log/pgqd.log
pidfile = /usr/local/skytools/londiste3/pid/pgqd.pid
# master
# create the root (provider) node
/usr/local/skytools/bin/londiste3 crm.ini create-root node1 'host=192.168.1.121 port=5432 user=dev password=dev dbname=crm'

# start the worker
/usr/local/skytools/bin/londiste3 -d /usr/local/skytools/londiste3/crm.ini worker

# start the ticker daemon
/usr/local/skytools/bin/pgqd -d /usr/local/skytools/londiste3/pgqd-crm.ini

# slave
/usr/local/skytools/bin/londiste3 crm-gp.ini create-leaf node2 'host=192.168.1.121 port=5432 user=dev password=dev dbname=test_crm' --provider='host=192.168.1.121 port=5432 user=dev password=dev dbname=crm'

# start the worker
/usr/local/skytools/bin/londiste3 -d /usr/local/skytools/londiste3/crm-gp.ini worker
/usr/local/skytools/bin/londiste3 crm-gp.ini status
/usr/local/skytools/bin/londiste3 crm-gp.ini members
/usr/local/skytools/bin/londiste3 crm.ini add-table public.active
/usr/local/skytools/bin/londiste3 crm.ini add-table public.active_blacklist
/usr/local/skytools/bin/londiste3 crm.ini add-table public.active_status
/usr/local/skytools/bin/londiste3 crm.ini tables

/usr/local/skytools/bin/londiste3 crm-gp.ini add-table public.active
/usr/local/skytools/bin/londiste3 crm-gp.ini add-table public.active_blacklist
/usr/local/skytools/bin/londiste3 crm-gp.ini add-table public.active_status
/usr/local/skytools/bin/londiste3 crm-gp.ini tables

When a table has just been added, the table listing shows it still copying:
public.active_status in-copy
and the copy process can be run and inspected with:
/usr/local/python27/bin/python2 /usr/local/skytools/bin/londiste3 crm-gp.ini copy public.active_status -d
2. Replicating from trade to test_crm
Databases:
trade - master
test_crm - slave

vim trade.ini
[londiste3]
job_name = l3_trade
db = host=192.168.1.121 port=5432 user=dev password=dev dbname=trade
queue_name = replika-trade
logfile = /usr/local/skytools/londiste3/log/l3_trade.log
pidfile = /usr/local/skytools/londiste3/pid/l3_trade.pid

vim trade-gp.ini
[londiste3]
job_name = l3_gp_trade
db = host=192.168.1.121 port=5432 user=dev password=dev dbname=test_crm
queue_name = replika-trade
logfile = /usr/local/skytools/londiste3/log/l3_gp_trade.log
pidfile = /usr/local/skytools/londiste3/pid/l3_gp_trade.pid

vim pgqd-trade.ini
[pgqd]
database_list = trade,test_crm
logfile = /usr/local/skytools/londiste3/log/pgqd-trade.log
pidfile = /usr/local/skytools/londiste3/pid/pgqd-trade.pid
# master
# create the root (provider) node
/usr/local/skytools/bin/londiste3 trade.ini create-root node3 'host=192.168.1.121 port=5432 user=dev password=dev dbname=trade'

To delete the node:
/usr/local/skytools/bin/londiste3 trade.ini drop-node node3

# start the worker
/usr/local/skytools/bin/londiste3 -d trade.ini worker
# start the ticker daemon
/usr/local/skytools/bin/pgqd -d pgqd-trade.ini

# slave
/usr/local/skytools/bin/londiste3 trade-gp.ini create-leaf node4 'host=192.168.1.121 port=5432 user=dev password=dev dbname=test_crm' --provider='host=192.168.1.121 port=5432 user=dev password=dev dbname=trade'

# start the worker
/usr/local/skytools/bin/londiste3 -d trade-gp.ini worker
/usr/local/skytools/bin/londiste3 trade-gp.ini status
/usr/local/skytools/bin/londiste3 trade-gp.ini members
/usr/local/skytools/bin/londiste3 trade.ini add-table trade.area
/usr/local/skytools/bin/londiste3 trade.ini add-table trade.blacklist
/usr/local/skytools/bin/londiste3 trade.ini tables

/usr/local/skytools/bin/londiste3 trade-gp.ini add-table trade.area
/usr/local/skytools/bin/londiste3 trade-gp.ini add-table trade.blacklist
/usr/local/skytools/bin/londiste3 trade-gp.ini tables

When a table has just been added, run the copy process:
/usr/local/python27/bin/python2 /usr/local/skytools/bin/londiste3 trade-gp.ini copy trade.blacklist -d

[pgsql] Synchronizing Data in PostgreSQL with pgq

Skytools consists of three components: pgq, londiste, and walmgr.

pgq provides a SQL API that asynchronous processing can call flexibly; it is designed for asynchronous batch processing of real-time transactions.

pgq is made up of producers, a ticker, and consumers. Producers push events into a queue, the ticker groups the queued events into batches for processing, and consumers fetch events from the queue.

Londiste is a database replication tool built on pgq's event transport, written in Python.

Walmgr is a script providing WAL archiving, base backups, and runtime backup/recovery for the database, also written in Python.
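To illustrate the producer/ticker/consumer flow, pgq's SQL API can be driven by hand roughly like this (a sketch assuming pgq is installed in the database; the queue and consumer names are arbitrary, and the batch id placeholder must be filled in from the next_batch result):

```
-- producer side: create a queue and push an event into it
SELECT pgq.create_queue('demo_queue');
SELECT pgq.insert_event('demo_queue', 'ev_type', 'ev_data');

-- consumer side: register, then fetch and finish batches
SELECT pgq.register_consumer('demo_queue', 'demo_consumer');
SELECT pgq.next_batch('demo_queue', 'demo_consumer');   -- returns a batch id, or NULL if nothing is ready
SELECT * FROM pgq.get_batch_events(<batch_id>);         -- the events in that batch
SELECT pgq.finish_batch(<batch_id>);
```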

# Install Python
wget https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tar.xz
tar xf Python-3.4.3.tar.xz
cd Python-3.4.3
./configure --prefix=/usr/local/python
make
make install   (or make altinstall)

+++++++++++++++++++++++++++++++++++++++++++++++++++++

tar xf Python-2.7.9.tar.xz
cd Python-2.7.9
./configure --prefix=/usr/local/python27
make
make install   (or make altinstall)
+++++++++++++++++++++++++++++++++++++++++++++++++++++

# Install psycopg2
cd /usr/local/
tar xvzf psycopg2-2.6.tar.gz
cd psycopg2-2.6
export C_INCLUDE_PATH=/usr/local/pgsql/include
export LIBRARY_PATH=/usr/local/pgsql/lib

PATH=$PATH:/usr/local/pgsql/bin/

/usr/local/python/bin/python3 setup.py build_ext -R /usr/local/pgsql/lib -I /usr/local/pgsql/include --pg-config /usr/local/pgsql/bin/pg_config

/usr/local/python/bin/python3 setup.py install build_ext -R /usr/local/pgsql/lib -I /usr/local/pgsql/include --pg-config /usr/local/pgsql/bin/pg_config

running install_egg_info
Writing /usr/local/python/lib/python3.4/site-packages/psycopg2-2.6-py3.4.egg-info
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
cd /usr/local/
tar xvzf psycopg2-2.6.tar.gz
cd psycopg2-2.6
/usr/local/python27/bin/python2.7 setup.py build_ext -R /usr/local/pgsql/lib -I /usr/local/pgsql/include --pg-config=/usr/local/pgsql/bin/pg_config

/usr/local/python27/bin/python2.7 setup.py install build_ext -R /usr/local/pgsql/lib -I /usr/local/pgsql/include --pg-config=/usr/local/pgsql/bin/pg_config

running install_egg_info
Writing /usr/local/python27/lib/python2.7/site-packages/psycopg2-2.6-py2.7.egg-info

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

In file included from psycopg/psycopgmodule.c:27:
./psycopg/psycopg.h:30:20: error: Python.h: No such file or directory

#rm -rf /usr/bin/python
#ln -s /usr/local/python/bin/python3 /usr/bin/python

# Install skytools (building against Python 3 did not succeed)
tar xvzf skytools-3.2.tar.gz
cd skytools-3.2

./configure --with-python=/usr/local/python27/bin/python2 --with-pgconfig=/usr/local/pgsql/bin/pg_config --prefix=/usr/local/skytools
make
make install
ls /usr/local/skytools
mkdir -p /usr/local/skytools/londiste3/pid
mkdir -p /usr/local/skytools/londiste3/log
chown postgres:postgres /usr/local/skytools/londiste3

export PYTHONPATH=/usr/local/skytools/lib/python2.7/site-packages:$PYTHONPATH
# Configure the master
# create the provider node's configuration file
cd /usr/local/skytools/londiste3
cat db_p.ini

[londiste3]
job_name = l3_db_p
db = host=192.168.232.234 port=5432 user=postgres password=postgres dbname=db_p
queue_name = replika
logfile = /usr/local/skytools/londiste3/log/l3_db_p.log
pidfile = /usr/local/skytools/londiste3/pid/l3_db_p.pid

# without this, londiste3 fails with an "import pkgloader" error:
export PYTHONPATH=/usr/local/skytools/lib/python2.7/site-packages:$PYTHONPATH

/usr/local/skytools/bin/londiste3 db_p.ini create-root node1 'host=192.168.232.234 port=5432 user=postgres password=postgres dbname=db_p'

2015-05-21 11:36:41,535 2057 INFO plpgsql is installed
2015-05-21 11:36:41,537 2057 INFO Installing pgq
2015-05-21 11:36:41,551 2057 INFO Reading from /usr/local/skytools/share/skytools3/pgq.sql
2015-05-21 11:36:42,384 2057 INFO pgq.get_batch_cursor is installed
2015-05-21 11:36:42,385 2057 INFO Installing pgq_ext
2015-05-21 11:36:42,386 2057 INFO Reading from /usr/local/skytools/share/skytools3/pgq_ext.sql
2015-05-21 11:36:42,555 2057 INFO Installing pgq_node
2015-05-21 11:36:42,556 2057 INFO Reading from /usr/local/skytools/share/skytools3/pgq_node.sql
2015-05-21 11:36:42,732 2057 INFO Installing londiste
2015-05-21 11:36:42,733 2057 INFO Reading from /usr/local/skytools/share/skytools3/londiste.sql
2015-05-21 11:36:42,987 2057 INFO londiste.global_add_table is installed
2015-05-21 11:36:43,130 2057 INFO Initializing node
2015-05-21 11:36:43,134 2057 INFO Location registered
2015-05-21 11:36:43,278 2057 INFO Node “node1” initialized for queue “replika” with type “root”
2015-05-21 11:36:43,283 2057 INFO Done

/usr/local/pgsql/bin/psql -U postgres db_p

db_p=# \dn
List of schemas
Name     | Owner
----------+----------
londiste | postgres
pgq      | postgres
pgq_ext  | postgres
pgq_node | postgres
public   | postgres

db_p=# set search_path to londiste,pgq,pgq_ext,pgq_node;
db_p=# \d+
List of relations
Schema   | Name                    | Type     | Owner    | Size       | Description
----------+-------------------------+----------+----------+------------+-------------
londiste | applied_execute | table | postgres | 8192 bytes |
londiste | pending_fkeys | table | postgres | 8192 bytes |
londiste | seq_info | table | postgres | 8192 bytes |
londiste | seq_info_nr_seq | sequence | postgres | 8192 bytes |
londiste | table_info | table | postgres | 8192 bytes |
londiste | table_info_nr_seq | sequence | postgres | 8192 bytes |
pgq | batch_id_seq | sequence | postgres | 8192 bytes |
pgq | consumer | table | postgres | 16 kB |
pgq | consumer_co_id_seq | sequence | postgres | 8192 bytes |
pgq | event_1 | table | postgres | 8192 bytes |
pgq | event_1_0 | table | postgres | 8192 bytes |
pgq | event_1_1 | table | postgres | 8192 bytes |
pgq | event_1_2 | table | postgres | 8192 bytes |
pgq | event_1_id_seq | sequence | postgres | 8192 bytes |
pgq | event_1_tick_seq | sequence | postgres | 8192 bytes |
pgq | event_template | table | postgres | 8192 bytes |
pgq | queue | table | postgres | 16 kB |
pgq | queue_queue_id_seq | sequence | postgres | 8192 bytes |
pgq | retry_queue | table | postgres | 8192 bytes |
pgq | subscription | table | postgres | 8192 bytes |
pgq | subscription_sub_id_seq | sequence | postgres | 8192 bytes |
pgq | tick | table | postgres | 16 kB |
pgq_ext | completed_batch | table | postgres | 8192 bytes |
pgq_ext | completed_event | table | postgres | 8192 bytes |
pgq_ext | completed_tick | table | postgres | 8192 bytes |
pgq_ext | partial_batch | table | postgres | 8192 bytes |
pgq_node | local_state | table | postgres | 16 kB |
pgq_node | node_info | table | postgres | 16 kB |
pgq_node | node_location | table | postgres | 16 kB |
pgq_node | subscriber_info | table | postgres | 8192 bytes |
(30 rows)

# start the worker
/usr/local/skytools/bin/londiste3 -d db_p.ini worker
ps -ef | grep lond

root 2145 1 0 11:42 ? 00:00:00 /usr/local/python27/bin/python2 /usr/local/skytools/bin/londiste3 -d db_p.ini worker

# Configure the pgq ticker

cat pgqd.ini

[pgqd]
logfile = /usr/local/skytools/londiste3/log/pgqd.log
pidfile = /usr/local/skytools/londiste3/pid/pgqd.pid
database_list = db_p

# start the ticker daemon
# export LD_LIBRARY_PATH=/usr/local/pgsql/lib:$LD_LIBRARY_PATH

# if the [pgqd] section header is missing from the config file, pgqd fails with:
# ERROR load_init_file: value without section: logfile
# FATAL @pgqd.c:77 in function load_config(): failed to read config

/usr/local/skytools/bin/pgqd -d pgqd.ini
2015-05-21 14:00:13.934 3876 LOG Starting pgqd 3.2

tail -f /usr/local/skytools/londiste3/log/l3_db_p.log
tail -f /usr/local/skytools/londiste3/log/pgqd.log

# Configure the slave
# create the consumer node's configuration file
cd /usr/local/skytools/londiste3
cat db_s.ini

[londiste3]
job_name = l3_db_s
db = host=192.168.232.235 port=5432 user=postgres password=postgres dbname=db_s
queue_name = replika
logfile = /usr/local/skytools/londiste3/log/l3_db_s.log
pidfile = /usr/local/skytools/londiste3/pid/l3_db_s.pid

{Note: queue_name must match the master's}

export PYTHONPATH=/usr/local/skytools/lib/python2.7/site-packages:$PYTHONPATH
/usr/local/skytools/bin/londiste3 db_s.ini create-leaf node2 'host=192.168.232.235 port=5432 user=postgres password=postgres dbname=db_s' --provider='host=192.168.232.234 port=5432 user=postgres password=postgres dbname=db_p'

/usr/local/pgsql/bin/psql -U postgres db_s
db_s=# \dn
db_s=# set search_path to londiste,pgq,pgq_ext,pgq_node;
db_s=# \d+
# start the worker
/usr/local/skytools/bin/londiste3 -d db_s.ini worker

/usr/local/skytools/bin/londiste3 db_s.ini status
Queue: replika Local node: node2

node1 (root)
|   Tables: 0/0/0
|   Lag: 2h54m7s, Tick: 1
+--: node2 (leaf)
    Tables: 0/0/0
    Lag: 2h54m7s, Tick: 1

/usr/local/skytools/bin/londiste3 db_s.ini members
Member info on node2@replika:
node_name       dead   node_location
--------------- ------ --------------------------------------------------------------------------
node1           False  host=192.168.232.234 port=5432 user=postgres password=postgres dbname=db_p
node2           False  host=192.168.232.235 port=5432 user=postgres password=postgres dbname=db_s
# Test
[londiste1] (the master, db_p)
/usr/local/pgsql/bin/psql -U postgres db_p
db_p=# create table t1 (id int primary key,name varchar(20));

/usr/local/skytools/bin/londiste3 db_p.ini add-table public.t1
/usr/local/skytools/bin/londiste3 db_p.ini tables
Tables on node
table_name      merge_state     table_attrs
--------------- --------------- ---------------
public.t1       ok

db_p=# \d t1
Table "public.t1"
Column | Type                  | Modifiers
--------+-----------------------+-----------
id     | integer               | not null
name   | character varying(20) |
Indexes:
"t1_pkey" PRIMARY KEY, btree (id)
Triggers:
_londiste_replika AFTER INSERT OR DELETE OR UPDATE ON t1 FOR EACH ROW EXECUTE PROCEDURE pgq.logutriga('replika')
_londiste_replika_truncate AFTER TRUNCATE ON t1 FOR EACH STATEMENT EXECUTE PROCEDURE pgq.sqltriga('replika')

{At this point two triggers have been added automatically to the replicated table}

[londiste2] (the slave, db_s)
/usr/local/pgsql/bin/psql -U postgres db_s
db_s=# create table t1 (id int primary key,name varchar(20));
/usr/local/skytools/bin/londiste3 db_s.ini add-table public.t1
/usr/local/skytools/bin/londiste3 db_s.ini tables

Tables on node
table_name      merge_state     table_attrs
--------------- --------------- ---------------
public.t1       None

{merge_state starts as None and changes to ok once the initial copy completes}

[londiste1]

db_p=# insert into t1 values (1,'lsk');

INSERT 0 1

[londiste2]

db_s=# select * from t1;

id | name
----+------
 1 | lsk

(1 row)

{The data has been synchronized. Note that this mode is relatively slow: changes on the master are not reflected on the slave immediately, but after a short delay}

If synchronization fails, set export PGUSER=postgres before starting:
2015-05-21 15:12:07.190 4821 ERROR connection error: PQconnectPoll
2015-05-21 15:12:07.190 4821 ERROR libpq: FATAL: role "root" does not exist
2015-05-21 15:17:40.678 5029 ERROR crm: ERROR: function pgq.version() does not exist
LINE 1: select pgq.version()
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.

For the second error, delete the pgq schema in the crm database.
[Cascading replication]
londiste3 db1.ini create-root node1 'host=192.168.100.30 port=5432 user=postgres password=highgo dbname=db1'

londiste3 db2.ini create-branch node2 'host=192.168.100.31 port=5432 user=postgres password=highgo dbname=db2' --provider='host=192.168.100.30 port=5432 user=postgres password=highgo dbname=db1'

londiste3 db3.ini create-branch node3 'host=192.168.100.24 port=5432 user=postgres password=highgo dbname=db3' --provider='host=192.168.100.30 port=5432 user=postgres password=highgo dbname=db1'

londiste3 db4.ini create-branch node4 'host=192.168.100.25 port=5432 user=postgres password=highgo dbname=db4' --provider='host=192.168.100.31 port=5432 user=postgres password=highgo dbname=db2'

londiste3 db5.ini create-branch node5 'host=192.168.100.20 port=5432 user=postgres password=highgo dbname=db5' --provider='host=192.168.100.24 port=5432 user=postgres password=highgo dbname=db3'

londiste3 db1.ini status

Queue: replika Local node: node1

node1 (root)
|   Tables: 0/0/0
|   Lag: 24s, Tick: 17, NOT UPTODATE
+--: node2 (branch)
|   |   Tables: 0/0/0
|   |   Lag: 4h3m29s, Tick: 1, NOT UPTODATE
|   +--: node4 (branch)
|       Tables: 0/0/0
|       Lag: 4h1m7s, Tick: 1, NOT UPTODATE
+--: node3 (branch)
|   Tables: 0/0/0
|   Lag: 4h3m29s, Tick: 1, NOT UPTODATE
+--: node5 (branch)
    Tables: 0/0/0
    Lag: 3h25m5s, Tick: 1, NOT UPTODATE

Change node4's provider to node3:
londiste3 db4.ini change-provider --provider=node3

takeover
Make node3 take over node5:
londiste3 db3.ini takeover node5

A child node can also take over the root node:
londiste3 db2.ini takeover node1

[Merge replication]
One server, three databases: part1, part2, full1
Three roles: root1, root2, full

# Create the databases
create database full1;
create database part1;
create database part2;

# Configure the ticker
cat pgqd-full-part.ini
[pgqd]
database_list = part1,part2,full1
logfile = /usr/local/skytools/londiste3/log/pgqd.log
pidfile = /usr/local/skytools/londiste3/pid/pgqd.pid

# Per-database connection configuration
vim part1.ini

[londiste3]
job_name = l3_part1
db = dbname=part1
queue_name = l3_part1_q
logfile = /usr/local/skytools/londiste3/log/%(job_name)s.log
pidfile = /usr/local/skytools/londiste3/pid/%(job_name)s.pid

vim part2.ini

[londiste3]
job_name = l3_part2
db = dbname=part2
queue_name = l3_part2_q
logfile = /usr/local/skytools/londiste3/log/%(job_name)s.log
pidfile = /usr/local/skytools/londiste3/pid/%(job_name)s.pid
vim part1_full1.ini

[londiste3]
job_name = l3_part1_full1
db = dbname=full1
queue_name = l3_part1_q
logfile = /usr/local/skytools/londiste3/log/%(job_name)s.log
pidfile = /usr/local/skytools/londiste3/pid/%(job_name)s.pid
vim part2_full1.ini

[londiste3]
job_name = l3_part2_full1
db = dbname=full1
queue_name = l3_part2_q
logfile = /usr/local/skytools/londiste3/log/%(job_name)s.log
pidfile = /usr/local/skytools/londiste3/pid/%(job_name)s.pid

# create root node 1
/usr/local/skytools/bin/londiste3 part1.ini create-root part1_root dbname=part1
# create root node 2
/usr/local/skytools/bin/londiste3 part2.ini create-root part2_root dbname=part2

# create leaf node 1
/usr/local/skytools/bin/londiste3 part1_full1.ini create-leaf merge_part1_full1 dbname=full1 --provider=dbname=part1

# create leaf node 2
/usr/local/skytools/bin/londiste3 part2_full1.ini create-leaf merge_part2_full1 dbname=full1 --provider=dbname=part2

# start the ticker
/usr/local/skytools/bin/pgqd -d pgqd-full-part.ini

2015-05-21 17:08:38.771 8299 LOG Starting pgqd 3.2

# start the workers
/usr/local/skytools/bin/londiste3 -d part1_full1.ini worker
/usr/local/skytools/bin/londiste3 -d part2_full1.ini worker

# Test
/usr/local/pgsql/bin/psql -d "part1" -c "create table mydata (id int4 primary key, data text)"
/usr/local/pgsql/bin/psql -d "part2" -c "create table mydata (id int4 primary key, data text)"

# add the table for replication on the root nodes
/usr/local/skytools/bin/londiste3 part1.ini add-table mydata
/usr/local/skytools/bin/londiste3 part2.ini add-table mydata

/usr/local/pgsql/bin/psql -d "full1" -c "select * from londiste.table_info order by queue_name"
nr | queue_name | table_name    | local | merge_state | custom_snapshot | dropped_ddl | table_attrs | dest_table
----+------------+---------------+-------+-------------+-----------------+-------------+-------------+------------
1  | l3_part1_q | public.mydata | f     |             |                 |             |             |
2  | l3_part2_q | public.mydata | f     |             |                 |             |             |
(2 rows)
{both queues have been added}

# insert test data
/usr/local/pgsql/bin/psql part1
part1=# INSERT INTO mydata VALUES (1,'lianshunke1');
part1=# \c part2
part2=# INSERT INTO mydata VALUES (2,'lianshunke2');

# create and merge the replicated table in full1
/usr/local/skytools/bin/londiste3 part1_full1.ini add-table mydata --create --merge-all

/usr/local/pgsql/bin/psql -d "full1" -c "select * from londiste.table_info order by queue_name"
nr | queue_name | table_name    | local | merge_state | custom_snapshot | dropped_ddl                                          | table_attrs | dest_table
----+------------+---------------+-------+-------------+-----------------+------------------------------------------------------+-------------+------------
1  | l3_part1_q | public.mydata | t     | catching-up | 49228:49228:    | ALTER TABLE public.mydata ADD CONSTRAINT mydata_pkey+|             |
   |            |               |       |             |                 | PRIMARY KEY (id);                                    |             |
2  | l3_part2_q | public.mydata | t     | in-copy     |                 |                                                      |             |
(2 rows)
# topology
/usr/local/skytools/bin/londiste3 part1.ini status
Queue: l3_part1_q Local node: part1_root

part1_root (root)
|   Tables: 1/0/0
|   Lag: 1m0s, Tick: 33, NOT UPTODATE
+--: merge_part1_full1 (leaf)
    Tables: 1/0/0
    Lag: 1m0s, Tick: 33

/usr/local/skytools/bin/londiste3 part2.ini status

Queue: l3_part2_q Local node: part2_root

part2_root (root)
|   Tables: 1/0/0
|   Lag: 13s, Tick: 79, NOT UPTODATE
+--: merge_part2_full1 (leaf)
    Tables: 1/0/0
    Lag: 28m11s, Tick: 21
ERR: l3_part2_full1: [ev_id=9,ev_txid=49748] duplicate key value violates unique constraint "mydata_pkey"
ERR: l3_part2_full1: [ev_id=9,ev_txid=49748] duplicate key value violates unique constraint “mydata_pkey”

# replicated table status
/usr/local/skytools/bin/londiste3 part1.ini tables

# node status
/usr/local/skytools/bin/londiste3 part1.ini members

# compare replication state between nodes
/usr/local/skytools/bin/londiste3 part1.ini compare

[Split replication]
One server, three databases: part_root, part_part0, part_part1
Three roles: root, leaf1, leaf2

# Create the databases
# Create the config schema and table
[part_part0]
part_part0=# create schema partconf;
part_part0=# CREATE TABLE partconf.conf (part_nr integer,max_part integer,db_code bigint,is_primary boolean,max_slot integer,cluster_name text);
part_part0=# insert into partconf.conf(part_nr, max_part) values(0,1);
[part_part1]
part_part1=# create schema partconf;
part_part1=# CREATE TABLE partconf.conf (part_nr integer,max_part integer,db_code bigint,is_primary boolean,max_slot integer,cluster_name text);
part_part1=# insert into partconf.conf(part_nr, max_part) values(1,1);

[part_root]
part_root=# create schema partconf;

# Create the functions
cd /usr/local/pgsql/share/extension/
/usr/local/pgsql/bin/psql part_root < hashlib--1.0.sql
cd /usr/local/skytools-3.2
/usr/local/python27/bin/python2 setup_pkgloader.py build
/usr/local/python27/bin/python2 setup_pkgloader.py install
/usr/local/python27/bin/python2 setup_skytools.py build
/usr/local/python27/bin/python2 setup_skytools.py install

References:
https://wiki.postgresql.org/wiki/Londiste_Tutorial#The_ticker_daemon
http://initd.org/psycopg/
http://www.cnblogs.com/top5/archive/2009/11/06/1597156.html
http://my.oschina.net/lianshunke/blog/201558

[centos7] HAProxy Installation and Configuration

1. Download and install
http://pkgs.fedoraproject.org/repo/pkgs/haproxy/

tar xvzf haproxy-1.5.8.tar.gz
cd haproxy-1.5.8
uname -a   # check the Linux kernel version
make TARGET=linux26 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy

2. Configure HAProxy

vim /usr/local/haproxy/haproxy.cfg
global
    maxconn 5120
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    quiet
    nbproc 2
    pidfile /usr/local/haproxy/haproxy.pid

defaults
    log global
    mode http
    option httplog
    option dontlognull
    log 127.0.0.1 local3
    retries 3
    option redispatch
    maxconn 2000
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

listen webinfo :1080
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server phpinfo1 127.0.0.1:1337 weight 1 minconn 1 maxconn 3 check inter 40000
    server phpinfo2 127.0.0.1:80 weight 1 minconn 1 maxconn 3 check inter 40000

listen webmb :1081
    mode http
    balance roundrobin
    option httpclose
    option forwardfor
    server webmb1 127.0.0.1:1337 weight 1 minconn 1 maxconn 3 check inter 40000
    server webmb2 127.0.0.1:10000 weight 1 minconn 1 maxconn 3 check inter 40000

listen stats :8888
    mode http
    transparent
    stats uri /haproxy-stats
    stats realm Haproxy\ statistics
    stats auth admin:admin

3. Start HAProxy

# start haproxy
/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfg

# verify it is running
[zhangy@BlackGhost haproxy]$ ps -e|grep haproxy
1829 ? 00:00:00 haproxy
1830 ? 00:00:00 haproxy

4. Load testing

[root@BlackGhost haproxy]# /usr/local/bin/webbench -c 100 -t 30 http://localhost:1080/phpinfo.php
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source Software.

Benchmarking: GET http://localhost:1080/phpinfo.php
100 clients, running 30 sec.

Speed=26508 pages/min, 20929384 bytes/sec.
Requests: 13254 susceed, 0 failed.
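As a sanity check on the report, the Speed figure follows directly from the request count and the test duration:

```python
# webbench reported 13254 successful requests over a 30-second run
requests = 13254
duration_s = 30

pages_per_min = requests * 60 / duration_s
print(pages_per_min)  # 26508.0, matching Speed=26508 pages/min
```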

Note: HAProxy listens on port 1080 and load-balances across the backend servers defined above (127.0.0.1:1337 and 127.0.0.1:80 for the webinfo listener).

The statistics page listens on port 8888: http://localhost:8888/haproxy-stats

[centos7] PostgreSQL Installation, Initial Password, and Remote Access

tar xvzf postgresql-9.6.3.tar.gz
cd postgresql-9.6.3
./configure --prefix=/usr/local/pgsql
make
make install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
# start the server as user postgres
/usr/local/pgsql/bin/postgres -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
# set the initial password
bin/psql postgres
\password postgres
# log in with the password
./psql -h localhost -U postgres -W
# allow access from external IPs
Open port 5432 in the firewall:
vim /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT
then restart iptables.
vim postgresql.conf
listen_addresses = '*'
vim pg_hba.conf
Add a rule:
host all all 0.0.0.0/0 trust
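Note that trust lets anyone who can reach port 5432 connect without a password; for anything beyond a throwaway test box, a password-checked rule is safer. A sketch of the same rule with md5 authentication (assuming password auth is the desired policy):

```
host    all    all    0.0.0.0/0    md5
```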
# check the version
select VERSION();
PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit

[php] Hiding the PHP Version Header (X-Powered-By)

Looking at a page's response headers, you can see the PHP version. For safety, we can turn off this version disclosure (the X-Powered-By header).

curl --head "http://blog.54xiake.cn"
HTTP/1.1 200 OK
Server: nginx/1.1.5
Date: Sat, 10 Jun 2017 05:09:36 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/5.5.29
X-Pingback: http://blog.54xiake.cn/xmlrpc.php

Search for expose_php in php.ini (it defaults to On) and change it to:
expose_php = Off