Optimizing batch inserts: try appending the SQL directly?


Background

I recently ran into a scenario in a project that required batch-inserting data. At 200,000 rows, the insert took close to 36 s (with indexes on the table).

The MyBatis dynamic SQL looked roughly like this:

<insert id="batchInsert" useGeneratedKeys="true" keyProperty="id">
    INSERT INTO sample(X,X,X)
    VALUES
    <foreach collection="list" item="item" separator=",">
        (#{X},#{X},#{X})
    </foreach>
</insert>

The SQL it ends up executing looks roughly like this:

INSERT INTO sample(X,X,X) VALUES (X,X,X),(X,X,X),(X,X,X) ……

For query optimization you can work on the indexes, but in a batch-insert scenario there is little room to maneuver in the SQL itself. The only obvious lever is to split the data set into chunks and insert batch by batch, capping the length of any single statement (the test code below does this with Guava's Lists.partition). Is that really the ceiling?

At a colleague's suggestion I profiled the call tree with JProfiler. Of the roughly 36 s total, about 10 s was spent in PreparedStatement.setXXX() binding parameter values. On a hunch, I built the entire SQL string directly with a StringBuilder and executed that instead: it took only about 11 s, more than a 3x difference! Some overhead from MyBatis processing the dynamic SQL was expected, but a gap of that size was not.
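To make the cost concrete, here is a minimal plain-JDBC sketch (not the project's actual code, just my assumption of what the generated multi-row prepared INSERT boils down to, using the Sample class defined later). With 10 columns and 10,000 rows per batch, the binding loop issues 100,000 setString() calls before a single execute():

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Collections;
import java.util.List;

public class PreparedInsertSketch {

    // One multi-row INSERT per batch: 10 placeholders per row, one
    // setString() call per placeholder -- this is where the ~10 s of
    // PreparedStatement.setXXX() time in the profile accumulates.
    static void insertBatch(Connection conn, List<Sample> part) throws SQLException {
        String placeholders = String.join(",",
                Collections.nCopies(part.size(), "(?,?,?,?,?,?,?,?,?,?)"));
        String sql = "INSERT INTO sample(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10) VALUES "
                + placeholders;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int idx = 1;
            for (Sample sample : part) {
                ps.setString(idx++, sample.getCol1());
                ps.setString(idx++, sample.getCol2());
                ps.setString(idx++, sample.getCol3());
                ps.setString(idx++, sample.getCol4());
                ps.setString(idx++, sample.getCol5());
                ps.setString(idx++, sample.getCol6());
                ps.setString(idx++, sample.getCol7());
                ps.setString(idx++, sample.getCol8());
                ps.setString(idx++, sample.getCol9());
                ps.setString(idx++, sample.getCol10());
            }
            ps.executeUpdate();
        }
    }
}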

I extracted this scenario into a standalone experiment, as follows.

Reproducing the scenario

Assume each record has 10 String columns, each holding the 26-character string "abcdefghijklmnopqrstuvwxyz". Rows are inserted 10,000 per batch, 500,000 in total. With placeholders, each 10,000-row batch means 10,000 × 10 = 100,000 parameter bindings per statement, and 5,000,000 across the whole run.

The experiment runs under a SpringBootTest:

Table structure and Java object

DROP TABLE IF EXISTS `sample`;
CREATE TABLE `sample` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `col1` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col2` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col3` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col4` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col5` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col6` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col7` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col8` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col9` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  `col10` varchar(255) COLLATE utf8_bin DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

@Data
@Builder
@AllArgsConstructor
@NoArgsConstructor
public class Sample implements Serializable {

    private Long id;
    private String col1;
    private String col2;
    private String col3;
    private String col4;
    private String col5;
    private String col6;
    private String col7;
    private String col8;
    private String col9;
    private String col10;
}

Generating the test data

public List<Sample> buildData() {
    List<Sample> samples = new ArrayList<>();
    String col = "abcdefghijklmnopqrstuvwxyz";
    for (int i = 0; i < 500000; i++) {
        Sample sample = Sample.builder()
                .col1(col)
                .col2(col)
                .col3(col)
                .col4(col)
                .col5(col)
                .col6(col)
                .col7(col)
                .col8(col)
                .col9(col)
                .col10(col)
                .build();
        samples.add(sample);
    }
    return samples;
}

MyBatis test

The dynamic SQL:

<insert id="batchInsertSamples" parameterType="java.util.List" useGeneratedKeys="true" keyProperty="id">
    INSERT INTO sample(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10)
    VALUES
    <foreach collection="samples" item="item" separator=",">
        (#{item.col1},#{item.col2},#{item.col3},#{item.col4},#{item.col5},
        #{item.col6},#{item.col7},#{item.col8},#{item.col9},#{item.col10})
    </foreach>
</insert>
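The mapper interface itself isn't shown in the original; for the <foreach> to reference the collection as samples (rather than MyBatis's default name list), the method presumably carries an @Param annotation, roughly like this sketch:

import java.util.List;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;

@Mapper
public interface SampleMapper {
    // @Param("samples") binds the list to the name "samples" used by
    // <foreach collection="samples"> in the XML above.
    int batchInsertSamples(@Param("samples") List<Sample> samples);
}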

The test code:

@Autowired
private SampleMapper sampleMapper;

@Test
public void testMyBatis() {
    List<Sample> samples = buildData();
    StopWatch stopWatch = new StopWatch();
    System.out.println("开始使用MyBatis进行批量插入");
  
    stopWatch.start();
    List<List<Sample>> batch = Lists.partition(samples, 10000);
    for (List<Sample> part : batch) {
        sampleMapper.batchInsertSamples(part);
    }
    stopWatch.stop();

    System.out.println("Mybatis批量插入完成,耗时:" + stopWatch.getTotalTimeMillis());
}

Result: 26.439 s elapsed.

Append test

@Autowired
private JdbcTemplate jdbcTemplate;

@Test
public void testAppend(){
    List<Sample> samples = buildData();
    String prefix = "INSERT INTO sample(col1,col2,col3,col4,col5,col6,col7,col8,col9,col10) VALUES";

    StopWatch stopWatch = new StopWatch();
    System.out.println("开始直接拼接sql插入");
    
    stopWatch.start();
    List<List<Sample>> batch = Lists.partition(samples, 10000);
    for (List<Sample> part : batch) {
        StringBuilder sb = new StringBuilder();
        for (Sample sample : part) {
            // NOTE: values are appended verbatim; nothing is escaped (see the caveat below)
            sb.append("(");
            sb.append("\"").append(sample.getCol1()).append("\",");
            sb.append("\"").append(sample.getCol2()).append("\",");
            sb.append("\"").append(sample.getCol3()).append("\",");
            sb.append("\"").append(sample.getCol4()).append("\",");
            sb.append("\"").append(sample.getCol5()).append("\",");
            sb.append("\"").append(sample.getCol6()).append("\",");
            sb.append("\"").append(sample.getCol7()).append("\",");
            sb.append("\"").append(sample.getCol8()).append("\",");
            sb.append("\"").append(sample.getCol9()).append("\",");
            sb.append("\"").append(sample.getCol10()).append("\"");
            sb.append("),");
        }
        sb.setLength(sb.length() - 1); // drop the trailing comma
        String sql = prefix + sb;
        jdbcTemplate.execute(sql);
    }
    stopWatch.stop();

    System.out.println("拼接sql批量插入完成,耗时:" + stopWatch.getTotalTimeMillis());
}

Result: 13.473 s elapsed.
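A caveat worth flagging: the append version writes values into the SQL verbatim. Any value containing a quote would break the statement, and untrusted input would be a textbook SQL-injection vector; the test only gets away with it because the data is a fixed alphabet string. If this technique is used on real data, special characters should at least be escaped before appending. A hypothetical helper (not part of the original test), assuming MySQL's default sql_mode, where backslash escapes work and double quotes delimit strings:

// Hypothetical helper, not in the original test: wraps a value in double
// quotes and escapes the characters MySQL treats specially inside a string
// literal (assumes default sql_mode, i.e. no NO_BACKSLASH_ESCAPES/ANSI_QUOTES).
static String escapeSqlString(String value) {
    StringBuilder sb = new StringBuilder(value.length() + 2).append('"');
    for (char c : value.toCharArray()) {
        switch (c) {
            case '\\': sb.append("\\\\"); break;
            case '"':  sb.append("\\\""); break;
            case '\'': sb.append("\\'");  break;
            case '\0': sb.append("\\0");  break;
            case '\n': sb.append("\\n");  break;
            case '\r': sb.append("\\r");  break;
            default:   sb.append(c);
        }
    }
    return sb.append('"').toString();
}

In the loop above, sb.append("\"").append(sample.getCol1()).append("\",") would then become sb.append(escapeSqlString(sample.getCol1())).append(","). When the data isn't fully under your control, prepared statements remain the safer default even at a performance cost.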

Test results

Roughly a 2x gap (26.439 s vs. 13.473 s), which is substantial.

Who would have thought that a simple assignment call, repeated once per parameter, could add up to such a difference once the data volume gets large.