[ACCEPTED] Optimizing batch inserts, SQLite
I'm a bit hazy on the Java API, but I think you should start a transaction first, otherwise calling commit() is pointless. Do it with conn.setAutoCommit(false). Otherwise SQLite will be journaling for each individual insert/update, which requires syncing the file and will contribute to the slowness.
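For instance, a minimal sketch of the idea; it assumes the Xerial sqlite-jdbc driver on the classpath and a placeholder table data(value TEXT), neither of which appears in the question:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class SingleTransactionInsert {
    public static void main(String[] args) throws SQLException {
        // File name and table are placeholders, not taken from the question.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db")) {
            conn.setAutoCommit(false); // open a transaction: no journal sync per insert
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE TABLE IF NOT EXISTS data (value TEXT)");
            }
            try (PreparedStatement prep =
                     conn.prepareStatement("INSERT INTO data (value) VALUES (?)")) {
                for (int i = 0; i < 100_000; i++) {
                    prep.setString(1, "row " + i);
                    prep.executeUpdate(); // stays inside the open transaction
                }
            }
            conn.commit(); // one sync for everything, instead of 100,000
        }
    }
}
```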
EDIT: The questioner has updated to say that this is already set. In that case:
That is a lot of data. That length of time doesn't sound out of this world. The best you can do is to run tests with different buffer sizes. There is a balance between buffer jitter from them being too small and virtual memory kicking in for large sizes. For this reason, you shouldn't try to put it all into one buffer at once. Split up the inserts into your own batches.
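One way to run such tests, sketched under the same assumptions as above (sqlite-jdbc driver, a throwaway data table), is a small harness that times the same insert load at several batch sizes:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchSizeTest {
    static final int TOTAL_ROWS = 1_000_000; // scaled down from 10M for a quicker test

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:bench.db")) {
            conn.setAutoCommit(false);
            for (int batchSize : new int[] {100, 1_000, 10_000, 100_000}) {
                recreate(conn);
                long start = System.nanoTime();
                insert(conn, batchSize);
                System.out.printf("batch size %,d: %,d ms%n",
                        batchSize, (System.nanoTime() - start) / 1_000_000);
            }
        }
    }

    // Fresh table for each run so every batch size inserts into an empty file.
    static void recreate(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("DROP TABLE IF EXISTS data");
            st.executeUpdate("CREATE TABLE data (value TEXT)");
        }
        conn.commit();
    }

    static void insert(Connection conn, int batchSize) throws SQLException {
        try (PreparedStatement prep =
                 conn.prepareStatement("INSERT INTO data (value) VALUES (?)")) {
            for (int i = 1; i <= TOTAL_ROWS; i++) {
                prep.setString(1, "row " + i);
                prep.addBatch();
                if (i % batchSize == 0) prep.executeBatch(); // flush this chunk
            }
            prep.executeBatch(); // flush any remainder
        }
        conn.commit();
    }
}
```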
You are only executing executeBatch once, which means that all 10 million statements are sent to the database in that one executeBatch call. This is way too much for a database to handle.
You should additionally execute int[] updateCounts = prep.executeBatch(); inside your loop, say every 1000 rows. Just make an if statement which tests counter % 1000 == 0. Then the database can already start working asynchronously on the data you sent.
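Roughly like this; the file name, table, and loadRows() helper are hypothetical stand-ins for the questioner's actual setup:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class ChunkedBatchInsert {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db")) {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE TABLE IF NOT EXISTS data (value TEXT)");
            }
            try (PreparedStatement prep =
                     conn.prepareStatement("INSERT INTO data (value) VALUES (?)")) {
                int counter = 0;
                for (String row : loadRows()) { // loadRows() stands in for the real source
                    prep.setString(1, row);
                    prep.addBatch();
                    counter++;
                    if (counter % 1000 == 0) {
                        int[] updateCounts = prep.executeBatch(); // send 1000 rows now
                    }
                }
                prep.executeBatch(); // send whatever is left over
            }
            conn.commit();
        }
    }

    // Hypothetical stand-in for the questioner's 10 million rows.
    static List<String> loadRows() {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 10_500; i++) rows.add("row " + i);
        return rows;
    }
}
```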