HTAP Database PostgreSQL Scenario and Performance Tests, Part 45 - (OLTP) Linear Relationship Between Data Volume and Performance (1 Billion+ Rows, No Degradation), and How Large a Single Table Should Get Before Partitioning


Background

PostgreSQL is a database with a long history that can be traced back to 1973. It was originally designed by Michael Stonebraker, a pioneer of relational databases and winner of the 2014 ACM Turing Award. PostgreSQL offers functionality, performance, architecture, and stability comparable to Oracle.


The PostgreSQL community has many contributors from all industries around the world. PostgreSQL releases one major version every year and is known for its lasting vitality and stability.

In October 2017, PostgreSQL 10 was released with many major features, aiming to cover HTAP workloads that mix OLAP and OLTP:

《最受开发者欢迎的HTAP数据库PostgreSQL 10特性》

1. Enhanced multi-core parallelism

2. FDW aggregate pushdown

3. Logical replication (publish/subscribe)

4. Declarative partitioning

5. Multiple synchronous replicas (financial-grade durability)

6. Full-text search over json and jsonb

7. Features shipped as extensions, such as vectorized computing, JIT, SQL graph computing, SQL stream computing, distributed parallel computing, time-series processing, gene sequencing, chemical analysis, image analysis, and more.


PostgreSQL is used across a wide range of application scenarios.


PostgreSQL has been growing rapidly in recent years. The scoring trend on the well-known database ranking site DB-Engines shows PostgreSQL's steady upward trajectory.


The same trend can be seen at the annual PostgreSQL China community conference: more companies participate and present every year, and the topics keep getting richer, spanning traditional enterprises, Internet companies, healthcare, finance, state-owned enterprises, logistics, e-commerce, social networking, connected vehicles, the sharing economy, cloud, gaming, public transportation, aviation, railways, defense, training, consulting services, and other industries.

This series of articles introduces PostgreSQL's various application scenarios and their corresponding performance figures.

Environment

For environment setup, refer to:

《PostgreSQL 10 + PostGIS + Sharding(pg_pathman) + MySQL(fdw外部表) on ECS 部署指南(适合新用户)》

Alibaba Cloud ECS: 56 cores, 224 GB RAM, 2 × 1.5 TB SSD cloud disks

OS: CentOS 7.4 x64

Database version: PostgreSQL 10

Note: ECS CPU and IO performance is lower than that of comparable physical machines; assume roughly half the performance. To estimate physical-host performance, multiply the results here by about 2.

Scenario - Linear Relationship Between Data Volume and Performance (1 Billion+ Rows, No Degradation), and How Large a Single Table Should Get Before Partitioning (OLTP)

1. Background

This test answers a common user question: how much data can a single PostgreSQL table manage before performance starts to degrade?

In other words, how large should a single table get before it is partitioned?

At different single-table row counts (tens of millions, 100 million, 1 billion, 10 billion), what do query, update, and insert throughput look like?

2. Design

Single-table row counts: 10 million, 100 million, and 1 billion.

For each size, measure insert throughput, query TPS, and update TPS.

3. Create the test tables

create unlogged table test(id int primary key, info text, crt_time timestamp);    
    
create unlogged table test1(id int primary key, info text, crt_time timestamp);    
    
create unlogged table test3(id int primary key, info text, crt_time timestamp);    
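Note that all three tables are created UNLOGGED, which removes WAL overhead and therefore flatters the insert-throughput numbers below. As a minimal sketch (not part of the original test), an unlogged table can be converted back for durable use:

-- unlogged tables skip WAL and are emptied after a crash;
-- converting back to logged rewrites the table
alter table test set logged;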

4. Prepare test functions (optional)
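The original post leaves this section empty. As a hedged sketch of what such helpers might look like (the names query_test/update_test are illustrative, not from the original), the point query and point update can be wrapped in SQL functions and then driven from pgbench with select query_test(:id);:

-- hypothetical helper: point query by primary key
create or replace function query_test(v_id int) returns setof test as $$
  select * from test where id = v_id;
$$ language sql strict;

-- hypothetical helper: point update by primary key
create or replace function update_test(v_id int) returns void as $$
  update test set crt_time = now() where id = v_id;
$$ language sql strict;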

5. Load the test data

postgres=# insert into test select generate_series(1,10000000), 'test', now();    
INSERT 0 10000000    
Time: 17197.822 ms (00:17.198)    
    
postgres=# insert into test1 select generate_series(1,100000000), 'test', now();    
INSERT 0 100000000    
Time: 187844.576 ms (03:07.845)    
    
for ((i=1;i<=100;i++)) ; do psql -c "insert into test3 select generate_series(($i-1)*10000000+1, $i*10000000),'test', now();" & done    
Loading 1 billion rows this way took 615 seconds.
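Because the 100 insert sessions are launched in the background with &, it may help (a small sketch, not from the original) to wait for them and sanity-check the row count before moving on:

# block until every background psql session has finished
wait
# should report 1,000,000,000 rows
psql -c "select count(*) from test3;"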

6. Prepare the test scripts

1. Query test

-- 10 million rows
vi test.sql    
    
\set id random(1,10000000)    
select * from test where id=:id;    
    
-- 100 million rows
vi test1.sql    
    
\set id random(1,100000000)    
select * from test1 where id=:id;    
    
    
-- 1 billion rows
vi test3.sql    
    
\set id random(1,1000000000)    
select * from test3 where id=:id;    

2. Update test

-- 10 million rows
vi test.sql    
    
\set id random(1,10000000)    
update test set crt_time=now() where id=:id;    
    
-- 100 million rows
vi test1.sql    
    
\set id random(1,100000000)    
update test1 set crt_time=now() where id=:id;    
    
    
-- 1 billion rows
-- Third case: although the table holds 1 billion rows, assume the frequently updated rows fall within the first 100 million, while the rows read during the test still span the full 1-billion range.
vi test3.sql    
    
\set id random(1,100000000)    
\set id1 random(1,1000000000)    
with tmp as (select * from test3 where id=:id1)    
update test3 set crt_time=now() where id=:id ;    

7. Run the tests

Test driver script

CONNECTS=48     
TIMES=120      
export PGHOST=$PGDATA      
export PGPORT=1921      
export PGUSER=postgres      
export PGPASSWORD=postgres      
export PGDATABASE=postgres      
      
pgbench -M prepared -n -r -P 5 -f ./test.sql -c $CONNECTS -j $CONNECTS -T $TIMES      
pgbench -M prepared -n -r -P 5 -f ./test1.sql -c $CONNECTS -j $CONNECTS -T $TIMES      
pgbench -M prepared -n -r -P 5 -f ./test3.sql -c $CONNECTS -j $CONNECTS -T $TIMES      

8. Test results

1. Query test TPS

pgbench -M prepared -n -r -P 1 -f ./test.sql -c 48 -j 48 -T 120    
    
transaction type: ./test.sql    
scaling factor: 1    
query mode: prepared    
number of clients: 48    
number of threads: 48    
duration: 120 s    
number of transactions actually processed: 80378275    
latency average = 0.072 ms    
latency stddev = 0.012 ms    
tps = 669810.772760 (including connections establishing)    
tps = 669876.004400 (excluding connections establishing)    
script statistics:    
 - statement latencies in milliseconds:    
         0.001  \set id random(1,10000000)    
         0.071  select * from test where id=:id;    
    
pgbench -M prepared -n -r -P 1 -f ./test1.sql -c 48 -j 48 -T 120    
    
transaction type: ./test1.sql    
scaling factor: 1    
query mode: prepared    
number of clients: 48    
number of threads: 48    
duration: 120 s    
number of transactions actually processed: 76078076    
latency average = 0.076 ms    
latency stddev = 0.010 ms    
tps = 633977.716555 (including connections establishing)    
tps = 634041.588175 (excluding connections establishing)    
script statistics:    
 - statement latencies in milliseconds:    
         0.001  \set id random(1,100000000)    
         0.074  select * from test1 where id=:id;    
    
pgbench -M prepared -n -r -P 1 -f ./test3.sql -c 48 -j 48 -T 120    
    
transaction type: ./test3.sql    
scaling factor: 1    
query mode: prepared    
number of clients: 48    
number of threads: 48    
duration: 120 s    
number of transactions actually processed: 72746181    
latency average = 0.079 ms    
latency stddev = 0.019 ms    
tps = 606203.459638 (including connections establishing)    
tps = 606259.356671 (excluding connections establishing)    
script statistics:    
 - statement latencies in milliseconds:    
         0.001  \set id random(1,1000000000)    
         0.078  select * from test3 where id=:id;    

2. Update test TPS

pgbench -M prepared -n -r -P 1 -f ./test.sql -c 48 -j 48 -T 120    
transaction type: ./test1.sql    
scaling factor: 1    
query mode: prepared    
number of clients: 48    
number of threads: 48    
duration: 120 s    
number of transactions actually processed: 27703709    
latency average = 0.208 ms    
latency stddev = 0.126 ms    
tps = 230828.616797 (including connections establishing)    
tps = 230853.344303 (excluding connections establishing)    
script statistics:    
 - statement latencies in milliseconds:    
         0.002  \set id random(1,100000000)    
         0.207  update test1 set crt_time=now() where id=:id;    
    
    
pgbench -M prepared -n -r -P 1 -f ./test1.sql -c 48 -j 48 -T 120    
    
    
transaction type: ./test1.sql    
scaling factor: 1    
query mode: prepared    
number of clients: 48    
number of threads: 48    
duration: 120 s    
number of transactions actually processed: 29387603    
latency average = 0.196 ms    
latency stddev = 0.110 ms    
tps = 244891.957430 (including connections establishing)    
tps = 244917.399306 (excluding connections establishing)    
script statistics:    
 - statement latencies in milliseconds:    
         0.001  \set id random(1,100000000)    
         0.195  update test1 set crt_time=now() where id=:id;    
    
pgbench -M prepared -n -r -P 1 -f ./test3.sql -c 48 -j 48 -T 120    
    
transaction type: ./test3.sql    
scaling factor: 1    
query mode: prepared    
number of clients: 48    
number of threads: 48    
duration: 120 s    
number of transactions actually processed: 28026501    
latency average = 0.205 ms    
latency stddev = 0.110 ms    
tps = 233533.801692 (including connections establishing)    
tps = 233554.689137 (excluding connections establishing)    
script statistics:    
 - statement latencies in milliseconds:    
         0.002  \set id random(1,100000000)    
         0.001  \set id1 random(1,1000000000)    
         0.203  with tmp as (select * from test3 where id=:id1)    

The difference in index depth:

postgres=# explain (analyze,verbose,timing,costs,buffers) select * from test where id=1;
                                                       QUERY PLAN                                                       
------------------------------------------------------------------------------------------------------------------------
 Index Scan using test_pkey on public.test  (cost=0.43..2.85 rows=1 width=44) (actual time=0.074..0.075 rows=1 loops=1)
   Output: id, info, crt_time
   Index Cond: (test.id = 1)
   Buffers: shared read=4
 Planning time: 0.215 ms
 Execution time: 0.102 ms
(6 rows)

postgres=# explain (analyze,verbose,timing,costs,buffers) select * from test1 where id=1;
                                                        QUERY PLAN                                                        
--------------------------------------------------------------------------------------------------------------------------
 Index Scan using test1_pkey on public.test1  (cost=0.57..2.98 rows=1 width=44) (actual time=0.094..0.094 rows=1 loops=1)
   Output: id, info, crt_time
   Index Cond: (test1.id = 1)
   Buffers: shared read=5
 Planning time: 0.217 ms
 Execution time: 0.119 ms
(6 rows)

postgres=# explain (analyze,verbose,timing,costs,buffers) select * from test3 where id=1;
                                                        QUERY PLAN                                                        
--------------------------------------------------------------------------------------------------------------------------
 Index Scan using test3_pkey on public.test3  (cost=0.57..2.99 rows=1 width=44) (actual time=0.054..0.055 rows=1 loops=1)
   Output: id, info, crt_time
   Index Cond: (test3.id = 1)
   Buffers: shared hit=5
 Planning time: 0.413 ms
 Execution time: 0.080 ms
(6 rows)
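The extra shared read/hit buffer on the larger tables corresponds to one more B-tree level in the primary-key index. As a hedged way to confirm the depth directly (requires the pgstattuple extension; not run in the original post), pgstatindex() reports the tree level of each index:

create extension if not exists pgstattuple;

select tree_level from pgstatindex('test_pkey');   -- 10 million rows
select tree_level from pgstatindex('test1_pkey');  -- 100 million rows
select tree_level from pgstatindex('test3_pkey');  -- 1 billion rows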

Performance summary

| Rows | Insert throughput (rows/s) | Query TPS | Update TPS |
|---|---|---|---|
| 10 million | 580,000 | 670,000 | 231,000 |
| 100 million | 532,000 | 634,000 | 245,000 |
| 1 billion | 1,626,000 | 606,000 | 234,000 |

(The 1-billion-row load ran 100 insert sessions in parallel, which is why its aggregate insert throughput is higher than the single-session 10-million and 100-million loads.)

Table partitioning recommendations

How large should a single table get before it is partitioned?

1. Tables that are updated very frequently (limited by how fast autovacuum and freeze can keep up):

200 million rows

This means the number of frequently updated rows should stay within about 200 million; the table itself may hold more rows than that.

2. Tables with infrequent or no updates and deletes (limited by the speed of table-rewriting DDL, index builds, logical backups, and so on):

2 billion rows (also take the row width into account, since it directly affects the cost of a table-rewriting DDL)
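As a minimal sketch of what partitioning such a table could look like with PostgreSQL 10's declarative range partitioning (table and partition names are illustrative; in version 10 a primary key cannot be declared on the partitioned parent, so the unique index is created per partition):

create table test_part (id int, info text, crt_time timestamp)
partition by range (id);

-- one partition per 100 million ids; add further partitions as the data grows
create table test_part_1 partition of test_part
  for values from (1) to (100000001);
create table test_part_2 partition of test_part
  for values from (100000001) to (200000001);

-- PostgreSQL 10 does not cascade indexes from the parent,
-- so create the lookup index on each partition
create unique index on test_part_1 (id);
create unique index on test_part_2 (id);

The deployment guide referenced above also covers pg_pathman, which is another partitioning option for PostgreSQL 10.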

References

《PostgreSQL、Greenplum 应用案例宝典《如来神掌》 - 目录》

《数据库选型之 - 大象十八摸 - 致 架构师、开发者》

《PostgreSQL 使用 pgbench 测试 sysbench 相关case》

《数据库界的华山论剑 tpc.org》

https://www.postgresql.org/docs/10/static/pgbench.html


digoal's index of PostgreSQL articles