The problem with FTP is that it's a poor fit for this purpose, because it simply wasn't designed for it. The whole point of running multiple servers is not to balance storage but to balance load. Any ordinary server can easily hold terabytes of data; the real question is how many concurrent peers you have, and that is the real pain point for the system, not how much disk space is used.
Using a NAS over FTP means a persistent connection over that protocol for every single file operation (PUT, DELETE, RENAME), and I'm not sure the protocol can cope with that. I've seen myself that when I upload more than 3 files at the same time through FTP, transfer quality drops sharply and many packets are lost. With the best host I have, I notice those issues when I upload 10 files simultaneously... A public site will have far more than 10 simultaneous transfers, and like I said, the maximum depends on the system. Unless your NAS is industrial grade, I'd bet it can't handle more than 10 simultaneous transfers without losses.
I believe the best thing you can do is investigate whether your HTTP server can use a remote NAS as its storage without affecting the script side. There are protocols designed for exactly this (NFS, or SMB/CIFS) that interact with a NAS far better than FTP does.
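As a rough sketch of that approach, assuming a Linux web server and a NAS that exports an NFS share (the hostname `nas.local` and export path `/export/www` below are placeholders, not real values from your setup): once the share is mounted, it looks like a local directory, so your scripts keep using ordinary filesystem calls and no FTP code is involved at all.

```shell
# Mount the NAS export over NFS; after this, the directory behaves
# like local storage from the web server's point of view.
sudo mkdir -p /var/www/storage
sudo mount -t nfs nas.local:/export/www /var/www/storage

# To make the mount survive reboots, add a line like this to /etc/fstab:
# nas.local:/export/www  /var/www/storage  nfs  defaults,_netdev  0  0
```

Unlike FTP, which opens a control connection plus a data connection per transfer, the kernel NFS client multiplexes all file operations over a small number of persistent connections, which is exactly the concurrency behavior you're missing.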