Reduce Database Writes

darkufo

Chevereto Member
Hi,


I've got Chevereto installed on a dedicated cloud server: 16 GB memory, 8 cores, 1 TB SSD.

We get a lot of visitors to our gallery, typically 200 concurrent users at any one time.

My hosts are concerned that the number of write I/Os is very high.

Since this is essentially a read-only gallery, I was a little confused about what might be getting written to disk.

The only thing I can think of is that Chevereto stores the view count for each image.

Can this be turned off?

Are there any other write I/O operations that we might look to reduce or improve?

Thanks in advance.
 
I/O means reads and writes, not just writes. For your case you should consider cache strategies on your SQL server and website. Also, consider disabling some functionality, like the explorer.

The view counter write is not as expensive as a listing query, for example, which can get really heavy depending on the filtering applied. This is known; V4 is about to deprecate the features that make listing so expensive.
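Since the thread is about cutting write I/O from per-view counter updates, the general write-reduction idea can be sketched as batching increments in memory and flushing them as one database write per interval. This is a generic illustration, not Chevereto's actual implementation; the `BatchedViewCounter` class, the `flush_fn` callback, and the interval are all assumptions:

```python
import time
from collections import Counter

class BatchedViewCounter:
    """Accumulate view-count increments in memory and flush them to the
    database in one write per interval, instead of one UPDATE per view.
    Hypothetical sketch of the batching technique, not Chevereto code."""

    def __init__(self, flush_fn, interval=60.0):
        self.pending = Counter()    # image_id -> views since last flush
        self.flush_fn = flush_fn    # callable(dict) that performs the DB write
        self.interval = interval    # seconds between flushes
        self.last_flush = time.monotonic()

    def record_view(self, image_id):
        self.pending[image_id] += 1
        if time.monotonic() - self.last_flush >= self.interval:
            self.flush()

    def flush(self):
        if self.pending:
            # One multi-row write covers every view recorded since the
            # last flush, so 1,000 views can become a single write.
            self.flush_fn(dict(self.pending))
            self.pending.clear()
        self.last_flush = time.monotonic()
```

The trade-off is that counts accumulated since the last flush are lost on a crash, which is usually acceptable for view statistics.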
 
Thanks Rodolfo. We've got dynamic cache/NGINX on our server and also memcached, but I don't believe Chevereto uses that.

I'll look to disable some of the other functions.

Yep, we have a list of both read and write I/Os. The write ones were quite large, so we were trying to narrow down what was doing the writing :)
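Whether Chevereto can be pointed at memcached is a separate question; as a general illustration of the cache strategy suggested above, a time-to-live cache in front of a heavy listing query turns repeated identical requests into one database read per TTL window. The decorator, the `ttl` value, and `list_images` below are illustrative stand-ins, not a Chevereto API:

```python
import time

def ttl_cache(ttl=300.0):
    """Memoize a function's result for `ttl` seconds, keyed by its
    positional arguments. An in-process stand-in for memcached/Redis."""
    def decorator(fn):
        store = {}  # args -> (timestamp, value)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl:
                return hit[1]            # fresh cached value: no DB query
            value = fn(*args)            # miss or expired: run the query
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

@ttl_cache(ttl=300.0)
def list_images(page, sort):
    # Placeholder for the expensive listing query the thread mentions.
    return f"SELECT ... ORDER BY {sort} LIMIT 30 OFFSET {page * 30}"
```

With 200 concurrent users mostly viewing the same front pages, even a short TTL collapses most listing traffic onto cached results.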
 
Honestly, it's not a high number at all.
200 CCU should put you somewhere near 25k daily visitors.

Have you considered using Cloudflare's cache?
 
Thanks, yes, we're now using a cache, which has helped a lot.
 