
Nginx and other issues

AshleyUK

Can someone please review my nginx conf? Pretty links don't work; after upload, users get redirected to a short link that doesn't exist (error 404).

server {
    root /var/www/domain;
    index index.php;
    server_name www.domain.com domain.com;

    access_log /var/log/nginx/domain-access.log main buffer=32k;
    error_log /var/log/nginx/domain-error.log error;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ /2016 {
        rewrite ^ $scheme://domain.com$request_uri permanent;
    }

    location ~ (^/app/(.*)\.php$|^/lib/G) {
        internal;
        return 404;
    }

    location ~ (\.htaccess$|\.svn) {
        internal;
        return 404;
    }

    location ~ \.php$ {
        expires off;
        proxy_buffers 16 16k;
        proxy_buffer_size 16k;
        try_files $fastcgi_script_name =404;

        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 300;

        include fastcgi_params;
    }
}



thank you
 
Hello,

Thank you for your reply.

I fixed the issue with Nginx config.

Now I have a problem with the FTP upload. It sometimes takes more than 60 seconds, which causes HTTP 500 errors.

Is there anything else I have to check?

I should note that the files are stored on an external server, but in the same DC.
 
I have never used FTP to handle external servers; however, I am using SFTP without issue (my external servers aren't in the same DC, not even on the same continent).

Things I would check:

FTP server log (/var/log/ftpservername)
Load (CPU, RAM & disk I/O)
Latency between the two servers (should be sub-1 ms if in the same DC)
FTP server configuration

Beyond the above, maybe swap over to SFTP to see if the issue persists?
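For reference, a rough sketch of how those checks might look from a shell (hostname and log path are placeholders, not from this thread):

Code:
# latency between web server and datastore
ping -c 20 datastore.example.com

# CPU, RAM and disk I/O pressure
free -m
iostat -x 5

# watch the FTP/SFTP server log while an upload runs
tail -f /var/log/ftpservername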
 
Hi, sorry, you're right, it's SFTP not FTP.

Here's more info:

ESXi, Debian 8.5, PHP 7.0, MySQL 5.7, nginx 1.10.1

1. Latency to datastore < 1 ms
2. SFTP works fine; uploading to the datastore directly (not through Chevereto) is fast
3. Crazy load on the www server (not the datastore): load avg > 20 on an 8-core server; most of the load comes from the mysql process (200%) and multiple php-fpm instances
4. I/O low
5. Mem total 8 GB, free 200 MB (most used by mysql)
6. Traffic ~100 Mbit
7. MySQL server info: Uptime: 947 Threads: 22 Questions: 2383303 Slow queries: 0 Opens: 549 Flush tables: 1 Open tables: 392 Queries per second avg: 2516.687
8. Tasks 66, threads 60, running 25
9. vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
19 0 264 126644 357868 6110352 0 0 461 772 973 1769 86 8 5 0 0
25 0 264 122104 357888 6111600 0 0 0 1732 7204 16641 90 10 0 0 0
20 0 288 149036 357900 6082680 0 24 0 1340 7024 17136 92 8 0 0 0
21 0 288 150488 357900 6083732 0 0 0 8 7497 15493 91 8 0 0 0


I have been fighting issues on this server for days, even after reinstalling. I don't know what SQL is doing, but 3,000 queries per second, constantly, is a lot. I tried to keep all software up to date; maybe there is some incompatibility or bug between Chevereto and MySQL 5.7 or PHP 7.0?

thank you
 
I don't know what SQL is doing, but 3,000 queries per second, constantly, is a lot.

Try enabling a MySQL log and see what's going on. Most likely your server isn't configured properly. I have no idea how many modifications you have made, but considering that no one else is in server panic right now, I'd bet my money on this being just a server configuration issue at your end.
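If it helps, a minimal sketch of enabling MySQL's slow query log from the root console (the path and threshold are just examples):

Code:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 1; -- log statements slower than 1 second

-- or, for a short while, capture everything (very verbose):
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/general.log';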
 
Hello

It seems that the problem is in the image POST request:

curl 'http://www.ΧΧΧΧΧ.com/json' -X POST -H 'Accept: application/json' -H 'Accept-Encoding: gzip, deflate' -H 'Accept-Language: en-US,en;q=0.5' -H 'Content-Length: 111566' -H 'Content-Type: multipart/form-data; boundary=---------------------------196992331920645' -H 'Cookie: cfduid=d2a0079fbf87d25ad11702edbbaec946d1469815074; ga=GA1.2.1420133690.1469815080; PHPSESSID=l3ln6tbjtjtoqe2agno4bvb0v1; gat=1' -H 'Host: www.ΧΧΧΧΧ.com' -H 'Referer: http://www.ΧΧΧΧΧ.com/' -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:47.0) Gecko/20100101 Firefox/47.0'

It takes more than 5 minutes to finish.

Is there any setting that I have to change?
 
Try using local storage and see if it takes that long.
 
So, I made some tests and here are the results. I used one server + an external datastore, same configuration, just swapped the database. The HW config is sufficient: 2x E5, RAID 10, 32 GB RAM.

I will split this post into 2 parts.

Part 1 - SQL queries
1st test - db 120 MB - all running well
2nd test - db 6 GB - delays in posting, overall performance poor

And here is the reason we detected: for example, when switching to external storage, your script executes
########################################################
mysql> SELECT DISTINCT image_name FROM chv_images WHERE image_storage_id='1' AND image_extension='jpg' AND image_name IN("img7","img7518c4","img7c2e4a","img71beb7","img7072c0","img7466b9","img7fbce9","img71199a","img71cd54","img73dbeb","img7d79ab","img7e14e4","img7051ab","img73bee3","img74251e","img711671","29636d26422f1afebf40bcb821542860","cc9e02825fcf8cfb3240587f0e059b4b","fc8ebcf5ab1c3fba92c47066c38c5b4b","8e517210af8edc94291d07ecfe7e47be","a6beeeb7f4cfe2e975af3042d413c9a5","03de500899da4238accb3f845b796c6c","c464b0d0825881b5488644fae5fd69bd","bc86f9050a88136d26dc38f686946523","51d931b1487e80b889359ba12c3dff4a") AND DATE(image_date_gmt)='2016-07-30' ORDER BY image_id DESC;
+------------+
| image_name |
+------------+
| img7518c4 |
| img7 |
+------------+
2 rows in set (1 min 38.01 sec)
########################################################

which took 1:38 minutes on test 2. This means your queries are not optimized for larger scale, and that's the reason for the overall performance decrease which results in error 500.
What can you do about that?
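For reference, EXPLAIN would show whether MySQL can use an index for this lookup (IN list abbreviated here):

Code:
EXPLAIN SELECT DISTINCT image_name FROM chv_images
WHERE image_storage_id='1' AND image_extension='jpg'
AND image_name IN ("img7", "img7518c4" /* ... */)
AND DATE(image_date_gmt)='2016-07-30'
ORDER BY image_id DESC;
-- "type: ALL, key: NULL" in the output means a full table scan

Note also that DATE(image_date_gmt) wraps the column in a function, which by itself prevents an index on image_date_gmt from being used for that condition.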

Part 2 - SFTP optimization

######################################
Jul 31 08:13:59 storage sftp-server[32064]: mkdir name "/home" mode 0777
Jul 31 08:13:59 storage sftp-server[32064]: sent status Failure
Jul 31 08:13:59 storage sftp-server[32064]: mkdir name "/home/chevereto" mode 0777
Jul 31 08:13:59 storage sftp-server[32064]: sent status Failure
Jul 31 08:13:59 storage sftp-server[32064]: mkdir name "/home/chevereto/images" mode 0777
Jul 31 08:13:59 storage sftp-server[32064]: sent status Failure
Jul 31 08:13:59 storage sftp-server[32064]: mkdir name "/home/chevereto/images/2016" mode 0777
Jul 31 08:13:59 storage sftp-server[32064]: sent status Failure
Jul 31 08:13:59 storage sftp-server[32064]: mkdir name "/home/chevereto/images/2016/07" mode 0777
Jul 31 08:13:59 storage sftp-server[32064]: sent status Failure
Jul 31 08:13:59 storage sftp-server[32064]: mkdir name "/home/chevereto/images/2016/07/31" mode 0777
Jul 31 08:13:59 storage sftp-server[32064]: sent status Failure
Jul 31 08:13:59 storage sftp-server[32064]: open "/home/chevereto/images/2016/07/31/vlcsnap-2016-07-31-12h12m37s077.th.png" flags WRITE,CREATE,TRUNCATE mode 0666
Jul 31 08:13:59 storage sftp-server[32064]: close "/home/chevereto/images/2016/07/31/vlcsnap-2016-07-31-12h12m37s077.th.png" bytes read 0 written 24328
######################################

With every SFTP "put" you are trying to create the directory tree, which creates unnecessary load; a single put command always results in about 7 commands executed per upload.

A better approach would be logic like this, trying the upload first and creating the directories only when it fails:

Code:
scp image.png user@storage:/dir_structure/ || { ssh user@storage 'mkdir -p /dir_structure'; scp image.png user@storage:/dir_structure/; }

or some similar "only create the dir when an error is returned" logic.

thanx
 
When switching to external storage, your script executes
########################################################
mysql> SELECT DISTINCT image_name FROM chv_images WHERE image_storage_id='1' AND image_extension='jpg' AND image_name IN("img7","img7518c4","img7c2e4a","img71beb7","img7072c0","img7466b9","img7fbce9","img71199a","img71cd54","img73dbeb","img7d79ab","img7e14e4","img7051ab","img73bee3","img74251e","img711671","29636d26422f1afebf40bcb821542860","cc9e02825fcf8cfb3240587f0e059b4b","fc8ebcf5ab1c3fba92c47066c38c5b4b","8e517210af8edc94291d07ecfe7e47be","a6beeeb7f4cfe2e975af3042d413c9a5","03de500899da4238accb3f845b796c6c","c464b0d0825881b5488644fae5fd69bd","bc86f9050a88136d26dc38f686946523","51d931b1487e80b889359ba12c3dff4a") AND DATE(image_date_gmt)='2016-07-30' ORDER BY image_id DESC;

It is very odd that a simple SELECT DISTINCT takes that long. The query doesn't have joins or any complex operator; it is just a "select the images that are in this array". Two optimizations can be made in this procedure, but I think the problem isn't the query itself but the database structure, which lacks an image_name index. You can easily test that by running this query:

Code:
ALTER TABLE `chv_images` ADD INDEX `image_name` (`image_name`);

Note: Run this from the MySQL root console; populating this index will take a while on a live website.
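From a shell, that could look like this (database name and credentials are placeholders):

Code:
mysql -u root -p chevereto_db -e "ALTER TABLE chv_images ADD INDEX image_name (image_name);"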

With every SFTP "put" you are trying to create the directory tree, which creates unnecessary load; a single put command always results in about 7 commands executed per upload.
True, but those are 7 commands that run in less than a second. I agree that it shouldn't do those, but optimizing that won't be noticeable at all.
 
Here it is:

1.
mysql> ALTER TABLE `chv_images` ADD INDEX `image_name` (`image_name`);

Query OK, 0 rows affected (1 min 17.45 sec)
Records: 0 Duplicates: 0 Warnings: 0

2. The problem with SFTP (FTP too) is that with every uploaded pic you not only create 7 unnecessary tasks but also open and close a connection, which takes resources. I think a better approach would be to mount the remote drive via NFS so it looks "local" to Chevereto; unfortunately, I don't see an option to add multiple local paths when you have multiple external storages. Has anyone tried NFS and can compare its performance to SFTP in this case? (See the mount sketch below.)

3. New issue.
I had tens of thousands of pictures from a single IP which I needed to delete, and I did not see any option to bulk-delete pictures without displaying and selecting them first, so I ran an SQL DELETE query where ip=XXXX, which did the job. What else do I need to do so the image stats in the admin are counted correctly? I see the counts did not change.
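For point 2, a minimal sketch of such a mount (hostname and paths are examples, not tested with Chevereto):

Code:
# one-off mount on the web server
mount -t nfs storage.example.com:/home/chevereto/images /var/www/images

# or persist it in /etc/fstab:
# storage.example.com:/home/chevereto/images /var/www/images nfs defaults,noatime 0 0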

thanx
 
but also open and close a connection
Nope. It is the same session and it happens when both servers are already connected.

I ran an SQL DELETE query where ip=XXXX, which did the job. What else do I need to do so the image stats in the admin are counted correctly? I see the counts did not change.

That is the worst way to get rid of content in a system that relies on keeping indexes, counts, everything in sync. If no option was available, you should have created your own procedure (as easy as a custom script that triggers the queue system). Now you have a mess in the database, because you need to sync external storage usage, user counts and system counts. Next time you should ask.

Cheers.
 
1. How does my reply here
mysql> ALTER TABLE `chv_images` ADD INDEX `image_name` (`image_name`);

Query OK, 0 rows affected (1 min 17.45 sec)
Records: 0 Duplicates: 0 Warnings: 0

help us further? What's next?

2. You're wrong about SFTP, it's not the same session. I made a small test and uploaded 2 pics in a batch; here is the log

#################
Aug 1 05:44:49 storage sshd[824]: pam_unix(sshd:session): session opened for user chevereto by (uid=0)
Aug 1 05:44:49 storage sshd[824]: pam_limits(sshd:session): invalid line '' - skipped
Aug 1 05:44:49 storage sftp-server[827]: session opened for local user chevereto from [10.10.10.15]
Aug 1 05:44:49 storage sftp-server[827]: session closed for local user chevereto from [10.10.10.15]
Aug 1 05:44:49 storage sshd[824]: pam_unix(sshd:session): session closed for user chevereto
Aug 1 05:44:50 storage sshd[829]: pam_unix(sshd:session): session opened for user chevereto by (uid=0)
Aug 1 05:44:50 storage sshd[829]: pam_limits(sshd:session): invalid line '' - skipped
Aug 1 05:44:50 storage sftp-server[832]: session opened for local user chevereto from [10.10.10.15]
Aug 1 05:44:51 storage sftp-server[832]: session closed for local user chevereto from [10.10.10.15]
Aug 1 05:44:51 storage sshd[829]: pam_unix(sshd:session): session closed for user chevereto
#################

As you can see, a session is opened and closed for each uploaded pic (I know each pic is uploaded 3x because of thumbnails; those do happen in the same session).

3. I have a backup so I can revert it. So, what's the best way to delete 50,000 pics?

thank you
 
1. That's all. That added an index for that column, so the lookup shouldn't be an issue. If the query is still slow then I'm afraid the problem is a poorly configured MySQL server. If the query takes a minute even with an index, then your problem is elsewhere: it could be concurrent users, some other slow query, etc. To be honest, debugging and profiling MySQL is hard; the only thing left would be to debug it on my own, but to do that I would need access to your MySQL server/user.

2. It is the same session while it does that "job". It does 3 "jobs" for each image. It could easily be improved by passing a "don't kill the session" parameter, but again, we are talking about one, maybe two seconds? That's fine tuning, and your server has problems way more important than that.

3. The best way is to run Image::deleteMultiple(<ids array>) in a batch script, something that processes 100 images per run. If those images are on an external server, the system will add the deletes as a queue job and execute them slowly but without breaking anything. You could also update the database directly, but the queries are complex, way above average MySQL queries; the hardest part is the likes-related columns. Anyway, the tables that should be fixed are:
  • Storages
  • Users
  • Stats (those queries can be found at app/install/installer.php)
  • Likes
  • Notifications
You should be able to process the table updates without needing to restore the actual image files; you only need the image ids. Run the job from the system itself (create a simple route.job.php or anything like that, then access it with /job) with a MySQL lookup for X images matching the target IP, delete using Image::deleteMultiple(<ids array>), then sleep a few seconds and re-fresh the page (header redirect to self), as sketched below.
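A rough sketch of what that job could look like, assuming the uploader-IP column is named image_uploader_ip and that Image::deleteMultiple() is available in this scope (both are assumptions; check your schema and Chevereto version):

Code:
<?php
// route.job.php (hypothetical name) -- delete one IP's images in small batches.
$pdo = new PDO('mysql:host=localhost;dbname=chevereto', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT image_id FROM chv_images WHERE image_uploader_ip = ? LIMIT 100'
);
$stmt->execute(['1.2.3.4']); // target IP (placeholder)
$ids = $stmt->fetchAll(PDO::FETCH_COLUMN);

if (empty($ids)) {
    exit('Done.'); // nothing left to delete
}

Image::deleteMultiple($ids); // external-storage deletes get queued internally

sleep(5); // give the queue some breathing room
header('Location: ' . $_SERVER['REQUEST_URI']); // reload to process the next batch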

Cheers.
 
After several frustrations I finally found the issue, and I will pack these optimizations into the next release. What I need to do for this one is not trivial work: I need to optimize almost all the queries used by class.listing.php.

The problem here is that when the image table is huge, in this case nearly 7,900,000 records, MySQL can't allocate its temp tables in memory and starts to hit the disk, but since disks are way slower, the process takes forever, affecting not only the website display but the disk as well. There is no way the current queries could work at that scale. I added some indexes and that helped with some queries, but lots of them can't be optimized using indexes, and the only possible way to increase performance is by combining multiple queries with PHP. That said, I'm afraid I will need some weeks to wrap this up, because I could break several functionalities.
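If you want to check whether your own server is hitting this, these are standard MySQL status/system variables (the sizes below are just examples):

Code:
-- how often temp tables spilled to disk vs. stayed in memory
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- raise the in-memory temp table ceiling (the effective limit is the
-- smaller of these two, so raise both together)
SET GLOBAL tmp_table_size = 268435456;
SET GLOBAL max_heap_table_size = 268435456;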

That said, normal or even large websites won't ever notice this issue. Your website must be big, and when I say big I should actually say HUGE. UltraIMG is probably one of the biggest Chevereto-based websites, and I will be happy to optimize Chevereto there so that not only Ivan and the other guys at UltraIMG win, but everybody else does too. This optimization is of course free; I just need time to work on it.

For those who also have huge websites, the only workaround is to have plenty of free space on your machine. I know that is terrible, but this should be improved within this month.

Cheers,
Rodolfo.
 