PostgreSQL 10.0 preview performance enhancement - faster recovery phase for 2PC transactions


Background

Two-phase commit (2PC) is very common in scenarios such as client-coordinated asynchronous transactions and cross-database (distributed) transaction processing.
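For readers unfamiliar with the syntax, here is a minimal sketch of a two-phase transaction. It assumes max_prepared_transactions has been set to a non-zero value in postgresql.conf (the default is 0, which disables the feature); the table name and global identifier are purely illustrative:

```sql
-- assumes max_prepared_transactions > 0 in postgresql.conf (default is 0)
CREATE TABLE IF NOT EXISTS t_2pc_demo (id int PRIMARY KEY, v int);
INSERT INTO t_2pc_demo VALUES (1, 100) ON CONFLICT DO NOTHING;

BEGIN;
UPDATE t_2pc_demo SET v = v - 10 WHERE id = 1;
-- phase 1: persist the transaction state under a global identifier;
-- after this, the session no longer owns the transaction
PREPARE TRANSACTION 'demo_gid_1';

-- the prepared transaction survives session exit and crash, and stays
-- visible in pg_prepared_xacts until it is resolved
SELECT gid, prepared FROM pg_prepared_xacts;

-- phase 2: resolve it (this can be done from any session)
COMMIT PREPARED 'demo_gid_1';   -- or: ROLLBACK PREPARED 'demo_gid_1';
```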

Currently, PostgreSQL handles 2PC state, including what happens when the database crashes and recovers, as follows (a short example observing this behavior comes after the list):

* on prepare, 2PC data (subxacts, commitrels, abortrels, invalmsgs) is saved to xlog and to a file, but the file is not fsynced  
* on commit, the backend reads the data from the file  
* if a checkpoint occurs before the commit, the files are fsynced during the checkpoint  
* in case of crash, replay will move the data from xlog to files  
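On a pre-10 server, the first point can be observed directly: immediately after PREPARE TRANSACTION, a state file named after the transaction's xid (an 8-character hex name) appears under $PGDATA/pg_twophase, even before any checkpoint. A sketch, reusing the demo table from above (pg_ls_dir requires superuser; the file name shown is an example):

```sql
BEGIN;
UPDATE t_2pc_demo SET v = v + 1 WHERE id = 1;
PREPARE TRANSACTION 'demo_gid_2';

-- on 9.x, a state file such as '0000022A' is listed immediately,
-- even though it has not yet been fsynced
SELECT pg_ls_dir('pg_twophase');

COMMIT PREPARED 'demo_gid_2';
```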

In 10.0 this is improved to the following (again, an observational sketch follows the list):

* on prepare, the backend writes the data only to xlog and stores a pointer to the start of the xlog record  
* if the commit occurs before a checkpoint, the backend reads the data back from xlog via this pointer  
* on checkpoint, the 2PC data is copied to files and fsynced  
* if the commit happens after a checkpoint, the backend reads the files  
* in case of crash, replay will move the data from xlog to files (as it was before the patch)  
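With the 10.0 behavior, the same check should come up empty until a checkpoint has run, at which point the still-pending 2PC state is copied out to a file. A sketch of what one would expect to see on a patched server:

```sql
BEGIN;
UPDATE t_2pc_demo SET v = v + 1 WHERE id = 1;
PREPARE TRANSACTION 'demo_gid_3';

-- on 10.0, the 2PC state lives only in the WAL at this point,
-- so pg_twophase should list nothing for this transaction
SELECT pg_ls_dir('pg_twophase');

CHECKPOINT;  -- copies pending 2PC data to a state file and fsyncs it

-- now the state file should appear
SELECT pg_ls_dir('pg_twophase');

COMMIT PREPARED 'demo_gid_3';
```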

Details, from the patch author's post to pgsql-hackers:

Hello.  
  
While working with cluster stuff (DTM, tsDTM) we noted that postgres 2PC transactions are approximately two times slower than an ordinary commit on a workload with fast transactions — a few single-row updates followed by COMMIT, or PREPARE TRANSACTION + COMMIT PREPARED. Perf top showed that a lot of time is spent in the kernel on fopen/fclose, so it was worth a try to reduce file operations for 2PC transactions.  
  
Now 2PC in postgres does the following:  
* on prepare, 2PC data (subxacts, commitrels, abortrels, invalmsgs) is saved to xlog and to a file, but the file is not fsynced  
* on commit, the backend reads the data from the file  
* if a checkpoint occurs before the commit, the files are fsynced during the checkpoint  
* in case of crash, replay will move the data from xlog to files  
  
In this patch I’ve changed these procedures to the following:  
* on prepare, the backend writes the data only to xlog and stores a pointer to the start of the xlog record  
* if the commit occurs before a checkpoint, the backend reads the data back from xlog via this pointer  
* on checkpoint, the 2PC data is copied to files and fsynced  
* if the commit happens after a checkpoint, the backend reads the files  
* in case of crash, replay will move the data from xlog to files (as it was before the patch)  
  
Most of these ideas were already mentioned in a 2009 thread by Michael Paquier http://www.postgresql.org/message-id/c64c5f8b0908062031k3ff48428j824a9a46f28180ac@mail.gmail.com where he suggested storing 2PC data in shared memory.  
At that time the patch was declined because no significant speedup was observed. Now I see a performance improvement of about 60% from my patch. Probably the overall tps of the old benchmark was lower, so it was harder to hit the filesystem's fopen/fclose limits.  
  
Now the results of the benchmark are as follows (dual 6-core Xeon server):  
  
Current master without 2PC: ~42 ktps  
Current master with 2PC: ~22 ktps  
Patched master with 2PC: ~36 ktps  
  
The benchmark was done with the following script:  
  
```
\set naccounts 100000 * :scale
\setrandom from_aid 1 :naccounts
\setrandom to_aid 1 :naccounts
\setrandom delta 1 100
\set scale :scale+1
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance - :delta WHERE aid = :from_aid;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :to_aid;
PREPARE TRANSACTION ':client_id.:scale';
COMMIT PREPARED ':client_id.:scale';
```
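One note for anyone reproducing this benchmark: \setrandom was removed in pgbench 9.6, so on 9.6 and later the same script would be written with the random() function inside \set. A sketch of the equivalent script follows; the file name 2pc.sql and the client/thread counts in the suggested invocation, pgbench -n -f 2pc.sql -c 12 -j 12 -T 60 -s 100, are illustrative, max_prepared_transactions must be at least the number of clients, and :scale is incremented per transaction purely to make each gid unique:

```
\set naccounts 100000 * :scale
\set from_aid random(1, :naccounts)
\set to_aid random(1, :naccounts)
\set delta random(1, 100)
\set scale :scale + 1
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance - :delta WHERE aid = :from_aid;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :to_aid;
PREPARE TRANSACTION ':client_id.:scale';
COMMIT PREPARED ':client_id.:scale';
```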

The full discussion of this patch can be found in the mailing-list thread; see the URLs at the end of this article.

The PostgreSQL community's style is very rigorous: a patch may be discussed on the mailing list for months or even years and revised repeatedly based on everyone's feedback, so by the time it is merged into master it is already very mature. This is why PostgreSQL's stability is so well known.

References

https://commitfest.postgresql.org/13/915/

https://www.postgresql.org/message-id/flat/74355FCF-AADC-4E51-850B-47AF59E0B215@postgrespro.ru#74355FCF-AADC-4E51-850B-47AF59E0B215@postgrespro.ru

