How to detect and clean up bloat and garbage in Greenplum (including changing the distribution key) - Alibaba Cloud HybridDB for PG best practice
Background
Greenplum relies on multi-versioning (MVCC) to support concurrent deletes and updates as well as rollback. When a row is deleted (with DELETE), the xmax field in the row header is marked. When a row is updated, the header of the old version is marked and a new row version is inserted.
As a result, if data is frequently deleted and inserted, or updated, both tables and indexes will bloat.
PostgreSQL avoids or reduces this garbage with HOT (heap-only tuples) and autovacuum, but Greenplum has no autovacuum worker process, so vacuuming has to be triggered manually.
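As a rough sketch of this behavior (the table name and row count below are illustrative only): deleting rows only marks them dead, so the data files do not shrink until a vacuum runs.
create table t_bloat_demo (id int, info text) distributed by (id);
insert into t_bloat_demo select generate_series(1, 1000000), 'x';
delete from t_bloat_demo;                                  -- rows are only marked dead
select pg_size_pretty(pg_relation_size('t_bloat_demo'));   -- the files keep their old size
vacuum t_bloat_demo;                                       -- marks the dead space reusable; it does not shrink the files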
How to find bloated tables and indexes
Greenplum
1. First, refresh the table statistics, because the two views below calculate the bloat ratio from statistics.
Connect to every target database and run:
analyze;
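If a database-wide analyze takes too long, the statistics can also be refreshed for individual tables before checking bloat, for example:
analyze <schema_name>.<table_name>;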
2. Query gp_toolkit.gp_bloat_diag for the tables with the heaviest bloat.
Use the gp_toolkit administrative schema:
gp_toolkit.gp_bloat_diag - this view shows tables with a moderate or significant amount of bloat
Columns:
bdirelid - Object ID of the table (pg_class.oid)
bdinspname - table schema name
bdirelname - table name
bdirelpages - number of pages currently in table data files
bdiexppages - number of pages expected according to current statistics
bdidiag - diagnosis of bloat (ratio from 1 to 3 -> no bloat, ratio from 4 to 10 -> moderate bloat, ratio > 10 -> significant bloat)
Example:
In this example the table “t1” is severely bloated (the calculated data size for data currently in table is 1 page, but table consists of 97 pages).
gpadmin=# select * from gp_toolkit.gp_bloat_diag;
bdirelid | bdinspname | bdirelname | bdirelpages | bdiexppages | bdidiag
----------+------------+------------+-------------+-------------+---------------------------------------
21488 | public | t1 | 97 | 1 | significant amount of bloat suspected
(1 row)
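As a convenience, the same view can be sorted by the bloat ratio so that the worst tables come first; a sketch using only the columns described above:
select bdinspname, bdirelname, bdirelpages, bdiexppages,
       round(bdirelpages::numeric / greatest(bdiexppages, 1), 1) as bloat_ratio
from gp_toolkit.gp_bloat_diag
order by bloat_ratio desc
limit 20;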
3. Query gp_toolkit.gp_bloat_expected_pages for the bloat details of all objects.
gp_toolkit.gp_bloat_expected_pages - this view shows the page data (current pages/expected pages) for all tables in the database.
Columns:
btdrelid - Object ID of the table (pg_class.oid)
btdrelpages - number of pages currently in table data files
btdexppages - number of pages expected according to current statistics
Example:
In this example the tables shown all have calculated data size of 1 page and actual data file size 1 page. No bloat is detected.
gpadmin=# select * from gp_toolkit.gp_bloat_expected_pages limit 5;
btdrelid | btdrelpages | btdexppages
----------+-------------+-------------
10789 | 1 | 1
10794 | 1 | 1
10799 | 1 | 1
5004 | 1 | 1
7175 | 1 | 1
(5 rows)
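To turn the object IDs into readable names and rank the biggest offenders, the view can be joined to the catalogs; a sketch:
select n.nspname, c.relname, b.btdrelpages, b.btdexppages,
       b.btdrelpages - b.btdexppages as wasted_pages
from gp_toolkit.gp_bloat_expected_pages b
join pg_class c on c.oid = b.btdrelid
join pg_namespace n on n.oid = c.relnamespace
order by wasted_pages desc
limit 20;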
4. Greenplum's system catalogs can also accumulate garbage, for example when temporary tables are used frequently (temporary tables are session-scoped, so they are created on every use; this inserts into and deletes from catalogs such as pg_class and pg_attribute, producing catalog garbage).
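A sketch for checking whether those frequently touched catalogs have bloated (the list of catalogs is just an example):
select * from gp_toolkit.gp_bloat_expected_pages
where btdrelid in ('pg_class'::regclass, 'pg_attribute'::regclass, 'pg_type'::regclass);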
PostgreSQL
1. First, refresh the table statistics, because the bloat query below calculates the bloat ratio from statistics.
If autovacuum is enabled, PostgreSQL collects statistics automatically and no manual intervention is needed.
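To confirm the statistics are reasonably fresh, you can check when each table was last analyzed; a sketch using pg_stat_user_tables:
select schemaname, relname, last_analyze, last_autoanalyze
from pg_stat_user_tables
order by greatest(coalesce(last_analyze, 'epoch'), coalesce(last_autoanalyze, 'epoch'))
limit 20;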
2. Check bloated tables and indexes
《PostgreSQL 垃圾回收原理以及如何预防膨胀 - How to prevent object bloat in PostgreSQL》
Connect to the target database and run:
SELECT
current_database() AS db, schemaname, tablename, reltuples::bigint AS tups, relpages::bigint AS pages, otta,
ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,
CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,
CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,
CASE WHEN relpages < otta THEN $$0 bytes$$::text ELSE (bs*(relpages-otta))::bigint || $$ bytes$$ END AS wastedsize,
iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,
ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,
CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,
CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,
CASE WHEN ipages < iotta THEN $$0 bytes$$ ELSE (bs*(ipages-iotta))::bigint || $$ bytes$$ END AS wastedisize,
CASE WHEN relpages < otta THEN
CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta::bigint) END
ELSE CASE WHEN ipages < iotta THEN bs*(relpages-otta::bigint)
ELSE bs*(relpages-otta::bigint + ipages-iotta::bigint) END
END AS totalwastedbytes
FROM (
SELECT
nn.nspname AS schemaname,
cc.relname AS tablename,
COALESCE(cc.reltuples,0) AS reltuples,
COALESCE(cc.relpages,0) AS relpages,
COALESCE(bs,0) AS bs,
COALESCE(CEIL((cc.reltuples*((datahdr+ma-
(CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)),0) AS otta,
COALESCE(c2.relname,$$?$$) AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
FROM
pg_class cc
JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> $$information_schema$$
LEFT JOIN
(
SELECT
ma,bs,foo.nspname,foo.relname,
(datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
(maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
FROM (
SELECT
ns.nspname, tbl.relname, hdr, ma, bs,
SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS datawidth,
MAX(coalesce(null_frac,0)) AS maxfracsum,
hdr+(
SELECT 1+count(*)/8
FROM pg_stats s2
WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename = tbl.relname
) AS nullhdr
FROM pg_attribute att
JOIN pg_class tbl ON att.attrelid = tbl.oid
JOIN pg_namespace ns ON ns.oid = tbl.relnamespace
LEFT JOIN pg_stats s ON s.schemaname=ns.nspname
AND s.tablename = tbl.relname
AND s.inherited=false
AND s.attname=att.attname,
(
SELECT
(SELECT current_setting($$block_size$$)::numeric) AS bs,
CASE WHEN SUBSTRING(SPLIT_PART(v, $$ $$, 2) FROM $$#"[0-9]+.[0-9]+#"%$$ for $$#$$)
IN ($$8.0$$,$$8.1$$,$$8.2$$) THEN 27 ELSE 23 END AS hdr,
CASE WHEN v ~ $$mingw32$$ OR v ~ $$64-bit$$ THEN 8 ELSE 4 END AS ma
FROM (SELECT version() AS v) AS foo
) AS constants
WHERE att.attnum > 0 AND tbl.relkind=$$r$$
GROUP BY 1,2,3,4,5
) AS foo
) AS rs
ON cc.relname = rs.relname AND nn.nspname = rs.nspname
LEFT JOIN pg_index i ON indrelid = cc.oid
LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml order by wastedbytes desc limit 5
To check every database in one pass, wrap the same query in a shell loop over all non-template databases:
for db in `psql -q -t -A -c 'select datname from pg_database where datname not in ($$template0$$, $$template1$$)'`
do
psql -d $db --pset=pager=off -q -x -c 'SELECT
current_database() AS db, schemaname, tablename, reltuples::bigint AS tups, relpages::bigint AS pages, otta,
ROUND(CASE WHEN otta=0 OR sml.relpages=0 OR sml.relpages=otta THEN 0.0 ELSE sml.relpages/otta::numeric END,1) AS tbloat,
CASE WHEN relpages < otta THEN 0 ELSE relpages::bigint - otta END AS wastedpages,
CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::bigint END AS wastedbytes,
CASE WHEN relpages < otta THEN $$0 bytes$$::text ELSE (bs*(relpages-otta))::bigint || $$ bytes$$ END AS wastedsize,
iname, ituples::bigint AS itups, ipages::bigint AS ipages, iotta,
ROUND(CASE WHEN iotta=0 OR ipages=0 OR ipages=iotta THEN 0.0 ELSE ipages/iotta::numeric END,1) AS ibloat,
CASE WHEN ipages < iotta THEN 0 ELSE ipages::bigint - iotta END AS wastedipages,
CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes,
CASE WHEN ipages < iotta THEN $$0 bytes$$ ELSE (bs*(ipages-iotta))::bigint || $$ bytes$$ END AS wastedisize,
CASE WHEN relpages < otta THEN
CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta::bigint) END
ELSE CASE WHEN ipages < iotta THEN bs*(relpages-otta::bigint)
ELSE bs*(relpages-otta::bigint + ipages-iotta::bigint) END
END AS totalwastedbytes
FROM (
SELECT
nn.nspname AS schemaname,
cc.relname AS tablename,
COALESCE(cc.reltuples,0) AS reltuples,
COALESCE(cc.relpages,0) AS relpages,
COALESCE(bs,0) AS bs,
COALESCE(CEIL((cc.reltuples*((datahdr+ma-
(CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::float)),0) AS otta,
COALESCE(c2.relname,$$?$$) AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::float)),0) AS iotta -- very rough approximation, assumes all cols
FROM
pg_class cc
JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname <> $$information_schema$$
LEFT JOIN
(
SELECT
ma,bs,foo.nspname,foo.relname,
(datawidth+(hdr+ma-(case when hdr%ma=0 THEN ma ELSE hdr%ma END)))::numeric AS datahdr,
(maxfracsum*(nullhdr+ma-(case when nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
FROM (
SELECT
ns.nspname, tbl.relname, hdr, ma, bs,
SUM((1-coalesce(null_frac,0))*coalesce(avg_width, 2048)) AS datawidth,
MAX(coalesce(null_frac,0)) AS maxfracsum,
hdr+(
SELECT 1+count(*)/8
FROM pg_stats s2
WHERE null_frac<>0 AND s2.schemaname = ns.nspname AND s2.tablename = tbl.relname
) AS nullhdr
FROM pg_attribute att
JOIN pg_class tbl ON att.attrelid = tbl.oid
JOIN pg_namespace ns ON ns.oid = tbl.relnamespace
LEFT JOIN pg_stats s ON s.schemaname=ns.nspname
AND s.tablename = tbl.relname
AND s.inherited=false
AND s.attname=att.attname,
(
SELECT
(SELECT current_setting($$block_size$$)::numeric) AS bs,
CASE WHEN SUBSTRING(SPLIT_PART(v, $$ $$, 2) FROM $$#"[0-9]+.[0-9]+#"%$$ for $$#$$)
IN ($$8.0$$,$$8.1$$,$$8.2$$) THEN 27 ELSE 23 END AS hdr,
CASE WHEN v ~ $$mingw32$$ OR v ~ $$64-bit$$ THEN 8 ELSE 4 END AS ma
FROM (SELECT version() AS v) AS foo
) AS constants
WHERE att.attnum > 0 AND tbl.relkind=$$r$$
GROUP BY 1,2,3,4,5
) AS foo
) AS rs
ON cc.relname = rs.relname AND nn.nspname = rs.nspname
LEFT JOIN pg_index i ON indrelid = cc.oid
LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml order by wastedibytes desc limit 5'
done
Based on the number of wasted bytes, set an appropriate autovacuum_vacuum_scale_factor. For large tables that see frequent updates, deletes and inserts, a smaller autovacuum_vacuum_scale_factor is recommended to reduce wasted space.
Also make sure autovacuum is enabled, and, based on the server's memory size and CPU core count, set autovacuum_work_mem (or maintenance_work_mem) and autovacuum_max_workers large enough, and autovacuum_naptime small enough.
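The scale factor can also be lowered per table instead of globally; a sketch (the 0.02 value is only an example, meaning autovacuum triggers once roughly 2% of the rows are dead):
alter table <schema_name>.<table_name> set (autovacuum_vacuum_scale_factor = 0.02);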
Also check whether logical backups (pg_dump) are taken of a large database, and whether long SQL statements or long transactions are common on the system; all of these can lead to bloat.
pg_reorg (pg_repack) or vacuum full can be used to reclaim the bloated space.
Reference: http://blog.163.com/digoal@126/blog/static/1638770402015329115636287/
otta is the estimated number of pages the table actually needs; iotta is the estimated number of pages the index actually needs;
bs is the database block size;
tbloat is the table bloat factor and ibloat the index bloat factor; wastedpages is the number of pages wasted by the table and wastedipages the number wasted by the index;
wastedbytes is the number of bytes wasted by the table and wastedibytes the number wasted by the index.
How to reclaim the garbage
Greenplum
1. vacuum full
Note that vacuum full cannot reclaim the bloated space in indexes. The lock taken by vacuum full is similar to a DDL lock, i.e. an exclusive lock, so run it when there is no business traffic and do not block the application.
Recommended procedure for reclaiming garbage with vacuum full:
1. Record the table's index definitions (see the sketch after this list)
2. Drop the indexes
3. vacuum full the table
4. Recreate the indexes in parallel
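A sketch for step 1, capturing the existing index definitions before dropping them (using the bloat_test table from the example below):
select indexdef from pg_indexes where schemaname = 'public' and tablename = 'bloat_test';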
Example
postgres=# create table bloat_test(id int , info text);
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'id' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
CREATE TABLE
postgres=# insert into bloat_test select generate_series(1,100000000), 'test';
INSERT 0 100000000
postgres=# create index idx_bloat_test on bloat_test(id);
CREATE INDEX
postgres=# update bloat_test set info='new';
UPDATE 100000000
postgres=# select * from gp_toolkit.gp_bloat_expected_pages where btdrelid='bloat_test'::Regclass;
btdrelid | btdrelpages | btdexppages
----------+-------------+-------------
629640 | 244453 | 102753
(1 row)
postgres=# select pg_size_pretty(pg_relation_size('bloat_test'));
pg_size_pretty
----------------
7639 MB
(1 row)
postgres=# select pg_size_pretty(pg_relation_size('idx_bloat_test'));
pg_size_pretty
----------------
6380 MB
(1 row)
The wrong way to reclaim the garbage
postgres=# vacuum full bloat_test ;
VACUUM
postgres=# select pg_size_pretty(pg_relation_size('bloat_test'));
pg_size_pretty
----------------
3526 MB
(1 row)
The index was not reclaimed:
postgres=# select pg_size_pretty(pg_relation_size('idx_bloat_test'));
pg_size_pretty
----------------
6380 MB
(1 row)
The right way to reclaim the garbage
postgres=# drop index idx_bloat_test ;
DROP INDEX
postgres=# vacuum full bloat_test ;
VACUUM
postgres=# create index idx_bloat_test on bloat_test(id);
CREATE INDEX
postgres=# select pg_size_pretty(pg_relation_size('bloat_test'));
pg_size_pretty
----------------
3526 MB
(1 row)
postgres=# select pg_size_pretty(pg_relation_size('idx_bloat_test'));
pg_size_pretty
----------------
2174 MB
(1 row)
2. alter table set distribute
alter table <table_name> set with (reorganize=true) distributed randomly;
or
alter table <table_name> set with (reorganize=true) distributed by (<column_names1>,<column_names2>....)
set distribute can also reclaim the bloated space in indexes. Like vacuum full, the lock it takes is similar to a DDL lock, i.e. an exclusive lock, so run it when there is no business traffic and do not block the application.
Also, as long as the distribution key stays the same, set distribute is carried out within each segment and does not redistribute data across nodes.
Recommended procedure:
1. Record the table's distribution key (see the sketch after this list)
2. Run set distribute with (REORGANIZE=true)
If the table is randomly distributed, set it back to random distribution
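A sketch for step 1, looking up the current distribution key (in Greenplum 4.x, gp_distribution_policy.attrnums holds the ordinal positions of the distribution key columns):
select localoid::regclass as tablename, attrnums
from gp_distribution_policy
where localoid = 'bloat_test'::regclass;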
Example
postgres=# update bloat_test set info='test';
UPDATE 100000000
postgres=# select pg_size_pretty(pg_relation_size('bloat_test'));
pg_size_pretty
----------------
7727 MB
(1 row)
postgres=# select pg_size_pretty(pg_relation_size('idx_bloat_test'));
pg_size_pretty
----------------
4411 MB
(1 row)
Redistribute according to the current distribution key:
Both the table's and the index's garbage are reclaimed.
postgres=# alter table bloat_test set with (reorganize=true) distributed by (id);
ALTER TABLE
postgres=# select pg_size_pretty(pg_relation_size('bloat_test'));
pg_size_pretty
----------------
4201 MB
(1 row)
postgres=# select pg_size_pretty(pg_relation_size('idx_bloat_test'));
pg_size_pretty
----------------
2130 MB
(1 row)
PostgreSQL
PostgreSQL is usually used in OLTP systems, where the business demands far higher database availability than in OLAP systems, so VACUUM FULL, which holds an exclusive lock for a long time, should rarely be used.
Normally, once autovacuum is enabled, PostgreSQL does not bloat badly, unless long transactions or deliberate settings (for example, settings that prevent standby queries from conflicting with vacuum) hold back garbage collection. All of these have solutions; see:
《为什么啤酒和纸尿裤最搭 - 用HybridDB/PostgreSQL查询商品营销最佳组合》
《PostgreSQL snapshot too old补丁, 防止数据库膨胀》
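A sketch for spotting long-running transactions that may be holding back garbage collection (the column names apply to PostgreSQL 9.2 and later):
select pid, now() - xact_start as xact_age, state, query
from pg_stat_activity
where xact_start is not null
order by xact_age desc
limit 10;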
Examples of PostgreSQL garbage-collection methods:
1. The preferred method is the reorg approach: build a new object, incrementally sync the data into it, and finally swap the data files of the new object and the old (bloated) object.
An exclusive lock is needed only while the files are swapped, which is very brief.
Reference
《PostgreSQL 收缩膨胀表或索引 - pg_squeeze or pg_repack》
2. If you have not installed the pg_squeeze or pg_repack extension, bloat can be reclaimed with vacuum full. Since PostgreSQL 9.0, VACUUM FULL also reclaims the garbage in indexes, which makes it a bit more capable than Greenplum's.
Recommended operation
set lock_timeout='2s';
vacuum full tablename;
Notes
1. vacuum full and alter table (reorganize) take an exclusive lock while reclaiming garbage. Be sure to run them in a maintenance window, or at the very least set a lock timeout before starting.
2. PostgreSQL usually does not bloat unless it is misconfigured or used improperly; see the details above.
3. For PostgreSQL garbage collection, prefer the reorg approach and avoid vacuum full as much as possible.
4. When estimating garbage in Greenplum, do not celebrate too early if no bloat is reported: the statistics may simply not have been collected yet. Keeping a regular analyze cadence is a good habit.
You can also refer to the following issue, which estimates the garbage ratio by sampling instead of relying on ANALYZE:
https://github.com/greenplum-db/gpdb/issues/706
Run a scheduled Linux job to clean up bloat
Using the methods described in this article, pick out the bloated tables and run vacuum, vacuum freeze, or vacuum full on them.
#!/bin/bash
# usage: <script> <host> <port> <dbname> <user> <password>
export PGHOST=$1
export PGPORT=$2
export PGDATABASE=$3
export PGUSER=$4
export PGPASSWORD=$5
# vacuum every table listed in pg_tables (including system catalogs), one at a time
for tbl in `psql -q -t -A -c "select quote_ident(schemaname)||'.'||quote_ident(tablename) from pg_tables"`
do
psql <<EOF
set vacuum_freeze_min_age=0 ;
-- vacuum freeze $tbl ;
vacuum $tbl ;
EOF
done
References
1. ALTER TABLE syntax
https://gpdb.docs.pivotal.io/4370/ref_guide/sql_commands/ALTER_TABLE.html
2. Performance troubleshooting
https://gpdb.docs.pivotal.io/4330/admin_guide/perf_issues.html
3. Routine maintenance
https://gpdb.docs.pivotal.io/4330/admin_guide/managing/maintain.html
4. Table bloat
https://discuss.pivotal.io/hc/en-us/articles/202873573-FAQ-Bloat-in-HEAP-tables
5. Options for removing bloat from a table
https://discuss.pivotal.io/hc/en-us/articles/206578327-What-are-the-different-option-to-remove-bloat-from-a-table-
6. Removing bloat from a relation using redistribution
https://discuss.pivotal.io/hc/en-us/articles/203248413-Tip-Removing-Fragmentaion-aka-bloat-on-a-relation-using-Redistribute-
7. Table bloat check query (Show database bloat):
https://wiki.postgresql.org/wiki/Show_database_bloat