Monday, May 18, 2020

Thoughts on SQL Subqueries


While listening to a course by 陈华军 today I learned a lot, especially the part on "choosing between different execution plans (subqueries)". We use subqueries all the time in day-to-day work, so what approaches are there for optimizing them?

Here is the table structure used in today's experiment:

  • Table t1 has 10,000 rows, with a btree index on the id column
  • Table t2 has 1,000 rows
postgres=# create table t1(id int primary key, info text, reg_time timestamp);
CREATE TABLE
postgres=# create table t2(id int, name text);
CREATE TABLE
postgres=# insert into t1 select generate_series(1, 10000),'lottu', now();
INSERT 0 10000
postgres=# insert into t2 select (random()*1000)::int, 'lottu'||id from generate_series(1,1000) id;
INSERT 0 1000
postgres=# create index ind_t1_id on t1(id);
CREATE INDEX

The SQL under test:

select * from t1 where id in (select id from t2);

Rewriting the SQL

First, let's look at this SQL's execution plan:

postgres=# explain (analyze,verbose,costs,timing) select * from t1 where id in (select id from t2);
                                                               QUERY PLAN
----------------------------------------------------------------------
 Merge Join  (cost=54.25..99.73 rows=628 width=18) (actual time=1.319..2.365 rows=628 loops=1)
   Output: t1.id, t1.info, t1.reg_time
   Inner Unique: true
   Merge Cond: (t1.id = t2.id)
   ->  Index Scan using ind_t1_id on public.t1  (cost=0.29..337.29 rows=10000 width=18) (actual time=0.014..0.421 rows=997 loops=1)
         Output: t1.id, t1.info, t1.reg_time
   ->  Sort  (cost=53.97..55.54 rows=628 width=4) (actual time=1.298..1.387 rows=628 loops=1)
         Output: t2.id
         Sort Key: t2.id
         Sort Method: quicksort  Memory: 54kB
         ->  HashAggregate  (cost=18.50..24.78 rows=628 width=4) (actual time=0.730..0.877 rows=628 loops=1)
               Output: t2.id
               Group Key: t2.id
               ->  Seq Scan on public.t2  (cost=0.00..16.00 rows=1000 width=4) (actual time=0.013..0.267 rows=1000 loops=1)
                     Output: t2.id
 Planning Time: 0.454 ms
 Execution Time: 2.507 ms
(17 rows)

This execution plan tells us quite a lot:

  • Only 628 rows are returned (see the quick check below);
  • The execution time is 2.507 ms;
  • The two tables are joined with a Merge Join; because t2 has no index and its rows are stored unordered, an in-memory sort is needed.
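As a quick check (my own addition, not from the course): the 628 is simply the number of distinct id values in t2, since the ids were generated with (random()*1000)::int and therefore contain duplicates.

-- Count the distinct ids in t2. The exact value depends on random(),
-- but in this run it matches rows=628 of the HashAggregate node above.
select count(distinct id) from t2;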

Rewriting with a join

If the subquery is executed repeatedly and makes the SQL slow, you can try rewriting it as an equivalent join:

postgres=# explain (analyze,verbose,costs,timing) select t1,* from t1 , t2 where t1.id = t2.id ;
                                                               QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=66.11..117.17 rows=1000 width=72) (actual time=0.601..2.184 rows=1000 loops=1)
   Output: t1.*, t1.id, t1.info, t1.reg_time, t2.id, t2.name
   Merge Cond: (t1.id = t2.id)
   ->  Index Scan using ind_t1_id on public.t1  (cost=0.29..337.29 rows=10000 width=60) (actual time=0.021..0.726 rows=997 loops=1)
         Output: t1.*, t1.id, t1.info, t1.reg_time
   ->  Sort  (cost=65.83..68.33 rows=1000 width=12) (actual time=0.573..0.721 rows=1000 loops=1)
         Output: t2.id, t2.name
         Sort Key: t2.id
         Sort Method: quicksort  Memory: 71kB
         ->  Seq Scan on public.t2  (cost=0.00..16.00 rows=1000 width=12) (actual time=0.013..0.226 rows=1000 loops=1)
               Output: t2.id, t2.name
 Planning Time: 0.288 ms
 Execution Time: 2.421 ms
(13 rows)

Performance improves a little, but the two statements are not actually equivalent: t2 contains duplicate ids, so the result set has 1000 rows instead of the 628 above.
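If you want the join form but need to keep the semantics of IN, a common workaround (my own sketch, not from the course) is to de-duplicate t2 first, or to use EXISTS, which PostgreSQL can plan as a semi-join:

-- Sketch only: two rewrites that stay equivalent to "id in (select id from t2)".
-- 1. Join against the de-duplicated ids of t2.
select t1.*
from t1
join (select distinct id from t2) d on t1.id = d.id;

-- 2. Let the planner do a semi-join via EXISTS.
select t1.*
from t1
where exists (select 1 from t2 where t2.id = t1.id);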

Rewriting with an array

postgres=# explain (analyze,verbose,costs,timing) select * from t1 where id = any(array(select id from t2));
                                                          QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Index Scan using ind_t1_id on public.t1  (cost=16.29..59.03 rows=10 width=18) (actual time=0.418..1.108 rows=628 loops=1)
   Output: t1.id, t1.info, t1.reg_time
   Index Cond: (t1.id = ANY ($0))
   InitPlan 1 (returns $0)
     ->  Seq Scan on public.t2  (cost=0.00..16.00 rows=1000 width=4) (actual time=0.014..0.127 rows=1000 loops=1)
           Output: t2.id
 Planning Time: 0.106 ms
 Execution Time: 1.178 ms
(8 rows)

The result is equivalent to the original SQL, it takes only 1.178 ms, and there is no in-memory sort, making it the best of the three. It's a safe choice here.

A change of approach

So far t2 has had only 1,000 rows, with ids below 1,000. What if t2 had 1,000,000 rows or more, and the ids were not bounded? Consider:

select * from t1 where id in (select id from t2 where id <= 1000);

or

with t as (select id from t2 where id <= 1000)
select t1.* from t1 where id in (select id from t);

I suspect many people would still write it this way. It is not a good way to write it: you get everything done in a single SQL statement, but it runs slowly. At this point someone will say you can create an index on t2; that works, and it does speed things up a lot. But if t2 does not already have such an index, it is not worth creating one just for this requirement.
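For completeness, the index people usually have in mind is just a btree on t2.id; the name ind_t2_id below is my own and is not created anywhere in the experiment above.

-- Hypothetical index; only worth it if t2.id is filtered or joined on regularly.
create index ind_t2_id on t2(id);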

My suggestion is to materialize this intermediate result into a sub-table:

select id from t2 where id <= 1000;

The sub-table can be a temporary table, a regular table, or a materialized view.

The advantage is that t2's data blocks are no longer scanned over and over; a single scan is enough.
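A minimal sketch of what I mean, assuming a temporary table (the name tmp_t2_id is my own):

-- Materialize the filtered ids once; t2 is scanned a single time.
create temporary table tmp_t2_id as
    select distinct id from t2 where id <= 1000;

-- Reuse the small sub-table as many times as needed.
select t1.* from t1 where t1.id in (select id from tmp_t2_id);

-- A regular table or a materialized view works the same way when the result
-- needs to outlive the session, e.g.:
-- create materialized view mv_t2_id as select distinct id from t2 where id <= 1000;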

