Dumped update SQL stuck on very big site (6M rows in `chv_images`)

phansontt

Chevereto Member
Hi, my site is getting big now.

It has ~6M rows in the table `chv_images`.

When trying to update to version `3.17.0`, I saw that some of the SQL queries that alter the `chv_images` table (which has more than 6M rows) will never complete. I even tried a 32-core server with maximally tweaked settings.

```
ALTER TABLE `chv_images` ADD `image_is_approved` tinyint(1) NOT NULL DEFAULT '1';
ALTER TABLE `chv_images` ADD `image_is_360` tinyint(1) NOT NULL DEFAULT '0';
```
@Rodolfo
I suggest:

1. Don't touch the `chv_images` table; if you need to add any attribute to the images, create a new table for it that only stores the image ID and the new attribute's value, for the images that have it.
2. Add a JSON general-purpose column for any extra attribute in the future.
Do you use InnoDB? If not, you should consider switching to it to improve the performance and reliability of your database, especially now that your site is getting big. It would also be good to check your MySQL server settings. For even more reliability you can deploy MySQL InnoDB clusters for high availability in the future. Otherwise, check the percona.com blog entries; there you can find tons of tutorials and examples for configuring MySQL servers for high usage. And please, always provide error logs.
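A quick way to check which engine the table currently uses is a standard `information_schema` query (replace the schema name with your own):

```
-- Show the storage engine and approximate row count of chv_images.
-- 'chevereto' is a placeholder; use your actual database name.
SELECT TABLE_NAME, ENGINE, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'chevereto'
  AND TABLE_NAME = 'chv_images';
```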
 
Thank you for your reply.
My site was hosted on a MySQL cluster with sharding already; InnoDB was set up from the beginning.
By the way, I've been searching around the internet but cannot find a proper way to alter a big table. I think this is something we should consider in the database design, not only at the technical level.
 
Try turning off all public services so your machine actually has the resources it needs. The update could take days; it all depends on the data and your hardware.

Regarding your suggestions:

Don't touch the `chv_images` table; if you need to add any attribute to the images, create a new table for it that only stores the image ID and the new attribute's value, for the images that have it.
These attributes are needed to add filtering based on the is_360 and is_approved flags. What you describe is normalization, and the problem with that approach is that these attributes are intended to be used for filtering the images, so it's not like I can put them in another table and expect decent listing performance. This is because there would be no index covering both the filter and the actual image data needed, and each listing would spawn a costly join. I do like normalized schemas, and I'm getting into PostgreSQL to be able to use them in combination with materialized views and other goodies, but for now my knowledge and context won't allow me to pursue the alternative you are mentioning here.
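For illustration, with a hypothetical side table `chv_image_flags` (not part of Chevereto's schema; column names here are assumptions), every filtered listing would need a join like the one below, which no single index on `chv_images` can cover:

```
-- Hypothetical normalized layout: the flag lives in a side table,
-- so listing approved images means joining two tables per query.
SELECT i.*
FROM `chv_images` i
JOIN `chv_image_flags` f           -- hypothetical table
  ON f.image_id = i.image_id
 AND f.flag_name = 'is_approved'
 AND f.flag_value = 1
ORDER BY i.image_id DESC           -- typical newest-first listing
LIMIT 50;
```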

Before shipping this release it was beta tested with a website with 13M rows, and while the process was slow, it never took longer than 5 minutes, and I created many indexes. You should investigate the situation and give me something to look for, something relevant such as the actual messages contained in your MySQL log.

Add a JSON general-purpose column for any extra attribute in the future.
I haven't checked the performance of using JSON fields in databases to store attributes intended to be used as filters. If I had to guess, I would say that in such a case the re-indexing would take even longer than the current process that is giving you issues. This is because the data must be indexed if you want to retrieve it fast, and having to re-index a variable document should be costlier than a fixed column. I haven't played with that yet; I will give it a try, as I'm also looking to use these types of columns for other purposes.
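As a sketch of what the JSON route could look like in MySQL 5.7+, where a generated column extracted from the document is what actually gets indexed (the column and key names are hypothetical, and $.is_360 is assumed to be stored as a JSON number 0/1):

```
-- Hypothetical general-purpose JSON column plus an indexable
-- generated column extracted from it.
ALTER TABLE `chv_images`
  ADD COLUMN `image_meta` JSON NULL,
  ADD COLUMN `image_is_360_gen` TINYINT
    GENERATED ALWAYS AS (JSON_EXTRACT(`image_meta`, '$.is_360')) STORED,
  ADD INDEX `idx_is_360_gen` (`image_is_360_gen`);
```

Filters would then target `image_is_360_gen` and hit its index, but note that adding these columns is still a full-table ALTER on a 6M-row table.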
 
Do you really need the entire database to be restored? If you don't, you can extract specific tables to do your restore. Another way to help speed up the import is to lock the table while importing (use the `--add-locks` option of mysqldump). And you definitely need to change the following InnoDB settings:

```
innodb_buffer_pool_size =
innodb_log_buffer_size =
innodb_log_file_size =
innodb_write_io_threads =
innodb_flush_log_at_trx_commit =
```
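For a sense of scale, a sketch of what these might look like in `my.cnf` on a dedicated database server with ~40GB of RAM. These values are illustrative assumptions only; size them to your own hardware and workload:

```
# Illustrative my.cnf sketch for a ~40GB-RAM dedicated DB server (assumptions)
innodb_buffer_pool_size = 35G
innodb_log_buffer_size = 4G
innodb_log_file_size = 10G
innodb_write_io_threads = 8
# 2 trades some crash durability for bulk-load speed; 1 is the safe default
innodb_flush_log_at_trx_commit = 2
```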

Why these settings?

 
I ended up with this solution:

1. Copy the table definition.
2. Create the new table (with the new columns).
3. Bulk load from the old table into the new table.

It took a day to finish.
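A minimal sketch of that approach in SQL, using the two columns from the 3.17.0 update (the `chv_images_new` name is illustrative, and on a live site you would also have to handle rows written during the copy):

```
-- 1. Copy the table definition (indexes included).
CREATE TABLE `chv_images_new` LIKE `chv_images`;

-- 2. Add the new columns to the still-empty copy: near-instant.
ALTER TABLE `chv_images_new`
  ADD `image_is_approved` tinyint(1) NOT NULL DEFAULT '1',
  ADD `image_is_360` tinyint(1) NOT NULL DEFAULT '0';

-- 3. Bulk load from the old table; the two literals fill the new columns.
INSERT INTO `chv_images_new`
SELECT *, 1, 0 FROM `chv_images`;

-- Swap atomically, keeping the old table as a backup.
RENAME TABLE `chv_images` TO `chv_images_old`,
             `chv_images_new` TO `chv_images`;
```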
 
I had also faced similar problems with `chv_images`. My table also has 6M images, and it takes around 3-4 hours for ALTER TABLE to modify any column datatype. There is no other way around it; even a database restore takes 1.5 hours to complete.

My DB machine has 45GB RAM with a 35GB InnoDB buffer pool and NVMe storage. Still, the restore and ALTER TABLE operations are dead slow. I usually defer updates and apply them only once or twice a year.

I am still running 3.15 but planning to update to 3.17.1. I will try the copy-table solution; it seems to be a better option.
 
Today I updated my installation to 3.17.1. I ran the ALTER TABLE commands manually and it took only around 15 minutes to complete, a major improvement over the previous 3 hours.

I haven't done anything differently from last time, but this is a new system with 40GB RAM and a 35GB InnoDB buffer pool. The MySQL process was using almost 30GB of RAM and the InnoDB buffer pool was at 60%.

I think the key here is having enough memory and a big enough InnoDB buffer pool. During the operation I could see that MySQL was in fact creating the temp table and copying the data, so I don't think we have to do the copy-table workaround manually.
 
So I have been thinking about why the ALTER TABLE operation was so fast this time around. Turns out it was because of the settings below:

```
innodb_log_buffer_size = 4G
innodb_log_file_size = 10G
```

"A large log buffer enables large transactions to run without the need to write the log to disk before the transactions commit. Thus, if you have transactions that update, insert, or delete many rows, making the log buffer larger saves disk I/O."

"The larger log file size the value, the less checkpoint flush activity is required in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower."
 
Thank you everyone! I ended up having to increase my InnoDB log buffer size for my server as well, and increase the timeout. I also had to batch out my SQL, as sketched below.
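In case it helps anyone else, "batching out" a bulk copy just means loading by primary-key range instead of in one giant statement. A sketch, assuming `image_id` is the primary key and reusing the `chv_images_new` copy-table idea from above (the 500k chunk size is arbitrary):

```
-- Raise the session timeout so long batches aren't cut off.
SET SESSION wait_timeout = 28800;

-- Copy in primary-key ranges so each transaction stays small.
INSERT INTO `chv_images_new`
SELECT *, 1, 0 FROM `chv_images`
WHERE `image_id` BETWEEN 1 AND 500000;

-- ...repeat, advancing the range by 500000 until all rows are copied.
```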
 