
Using batch SQL commands with MySQL JDBC


I am trying to figure out the best way to get MySQL to use batch updates.

I create, let's say, 10,000 records. Each record needs to be inserted into the DB like so:

for 1 to 10000
 1) insert into table1
 2) get auto increment id from table1 (call it table1ID)
 3) use table1ID to insert into table2
next

I don't see an easy way to batch that scenario. Maybe I could batch 10,000 inserts into table1, then fetch all the generated IDs, then run another batch of 10,000 inserts into table2. But then I need to make sure all my inserts match up: for each insert into table1 I need the corresponding insert into table2.


There is no way to batch-insert rows and get back the corresponding auto-increment IDs. You have to insert into table1 one-by-one in a loop, store all the generated IDs in a list, and then do the table2 inserts as a single batch.

In your example, that is 10,000 + 1 DB operations, which is quite an improvement compared to 20,000 operations.
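
A minimal JDBC sketch of that pattern, assuming hypothetical table1(id auto_increment, name) and table2(table1_id, detail) schemas and a DataSource (all names made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

// Insert parents one-by-one (collecting generated keys), then insert
// all children in a single batch: count + 1 operations in total.
static void insertAll(DataSource dataSource, List<String> names) throws SQLException {
    try (Connection conn = dataSource.getConnection()) {
        conn.setAutoCommit(false);
        List<Long> table1Ids = new ArrayList<>();

        try (PreparedStatement parent = conn.prepareStatement(
                "insert into table1 (name) values (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            for (String name : names) {
                parent.setString(1, name);
                parent.executeUpdate();              // one round trip per row
                try (ResultSet keys = parent.getGeneratedKeys()) {
                    keys.next();
                    table1Ids.add(keys.getLong(1));  // remember the id, in order
                }
            }
        }

        try (PreparedStatement child = conn.prepareStatement(
                "insert into table2 (table1_id, detail) values (?, ?)")) {
            for (int i = 0; i < names.size(); i++) {
                child.setLong(1, table1Ids.get(i));  // matches table1 row i
                child.setString(2, "detail for " + names.get(i));
                child.addBatch();
            }
            child.executeBatch();                    // the single "+ 1" batch
        }
        conn.commit();
    }
}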


Here's a sample of how I do it in a very big batch SQL file:

  • I insert a value first, with no parent
  • I remember the last insert id
  • I use it to insert the subcategories, setting idparent to the last inserted id.

This is very close to what you want to do:

insert into category (idparent,description) values (NULL, 'Mycat');
set @lt=LAST_INSERT_ID();
insert into category (idparent,description) values
    (@lt, 'MySubCat'),
    (@lt, 'MyOtherSubCat');
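
The same pattern works from JDBC as well, as long as every statement runs on the same connection: LAST_INSERT_ID() and the @lt user variable are both scoped to the connection. A minimal sketch, assuming conn is an open java.sql.Connection to the schema above:

try (Statement st = conn.createStatement()) {
    // parent row first
    st.executeUpdate("insert into category (idparent, description) values (NULL, 'Mycat')");
    // remember its auto-increment id in a connection-scoped variable
    st.executeUpdate("set @lt = LAST_INSERT_ID()");
    // the children reference the remembered id
    st.executeUpdate("insert into category (idparent, description) values"
            + " (@lt, 'MySubCat'), (@lt, 'MyOtherSubCat')");
}

Note that after a multi-row insert, LAST_INSERT_ID() returns the id of the first row inserted by that statement, not the last.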

Hope this helps.


You can use the following approach. Do test it before relying on it, because getGeneratedKeys() on a batched Statement depends on the driver used. The code below was tested on MariaDB 10.0.12 with the MariaDB JDBC driver 1.2; MariaDB 10 is built on top of MySQL 5.6, so it should work with MySQL as well.

Remember that increasing the batch size improves performance only up to a point; in my setup, raising the batch size above 500 actually degraded performance.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.UUID;

public Connection getConnection(boolean autoCommit) throws SQLException {
    Connection conn = dataSource.getConnection();
    conn.setAutoCommit(autoCommit);
    return conn;
}

private void testBatchInsert(int count, int maxBatchSize) {
    String querySql = "insert into batch_test(keyword) values(?)";
    try {
        Connection connection = getConnection(false);
        PreparedStatement pstmt = null;
        ResultSet rs = null;
        boolean success = true;
        try {
            pstmt = connection.prepareStatement(querySql, Statement.RETURN_GENERATED_KEYS);
            for (int i = 0; i < count; i++) {
                pstmt.setString(1, UUID.randomUUID().toString());
                pstmt.addBatch();
                // Flush every maxBatchSize rows, and once more for the tail.
                if ((i + 1) % maxBatchSize == 0 || (i + 1) == count) {
                    int[] executeResult = pstmt.executeBatch();
                    // Read the generated keys for this batch now; a later
                    // executeBatch() may replace them (driver-dependent).
                    rs = pstmt.getGeneratedKeys();
                    for (int j = 0; j < executeResult.length; j++) {
                        rs.next();
                        if (executeResult[j] == 1) {
                            System.out.println("Update Count: " + executeResult[j]
                                    + ", id: " + rs.getLong(1));
                        }
                    }
                    rs.close();
                    rs = null;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
            success = false;
        } finally {
            if (rs != null) {
                rs.close();
            }
            if (pstmt != null) {
                pstmt.close();
            }
            if (connection != null) {
                if (success) {
                    connection.commit();
                } else {
                    connection.rollback();
                }
                connection.close();
            }
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
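
To try it out, something along these lines should work; the batch_test table is an assumption (any table with an auto-increment primary key and a keyword column will do):

// one-time setup for the hypothetical batch_test table, then the test run
try (Connection conn = getConnection(true);
     Statement st = conn.createStatement()) {
    st.executeUpdate("create table if not exists batch_test ("
            + "id bigint auto_increment primary key, "
            + "keyword varchar(64) not null)");
}
testBatchInsert(10000, 500); // 10,000 rows, flushed in batches of 500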