
Image deletion queue is not being processed when external storage is Backblaze B2

lovedigit

👽 Chevereto Freak
▶ Reproduction steps
  1. Upload a large number of images inside an album
  2. Delete the album so that all of its images are queued for deletion at once
  3. Check the database chv_queues table and the error log (a quick row-count sketch follows below)
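
If it helps anyone reproducing this, a plain row count on the chv_queues table is enough to see how much is waiting. A minimal PDO sketch; the DSN, database name, and credentials below are placeholders you would replace with your own (only the chv_queues table name comes from my install):

[CODE lang="php" title="queue-count.php (hypothetical helper)"]<?php
// Hypothetical standalone check, not part of Chevereto: counts rows waiting in chv_queues.
// Replace host, database name, user, and password with your own values.
$pdo = new PDO(
    'mysql:host=localhost;dbname=chevereto;charset=utf8mb4',
    'db_user',
    'db_password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);
$count = (int) $pdo->query('SELECT COUNT(*) FROM chv_queues')->fetchColumn();
echo "Rows waiting in chv_queues: {$count}\n";[/CODE]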
😢 Unexpected result

I created a separate Chevereto instance with v3.20.0.beta.1, set up an entirely new Backblaze bucket, and created an external storage with it.
After the initial setup, I uploaded 1000 images to this website.
The queues were eventually processed, however. It was throwing the same error trace that is permanently logged on my actual website, but on the test website it managed to process the requests after several failed attempts.
I am not knowledgeable enough to diagnose the root cause myself.

If someone wants to test this on their website, here is a zip file with 1000 images. Please make sure that you upload them inside an album and delete them all at once. If your deletion queue has only a few hundred rows, it will process without errors.

Download 1000 images: https://gofile.io/d/J44aAg

This indicates that it is actually the number of rows in the deletion queue waiting to be processed that matters, rather than the size of the Backblaze bucket, which we initially suspected as the culprit.
But I can only guess. Rodolfo will be able to find the issue.

I have 75K images pending deletion. I really need a fix for this. There are several illegal images which need to be removed from the server. Three out of five of my external storages are on Backblaze now. This is affecting the normal operation of the website. Please help.

📃 Error log message

Aw, snap! Internal Server Error [debug @ print,error_log] - https://v3-docs.chevereto.com/setup/debug.html

Fatal error [0]:
Triggered in /var/www/chev_test/html/app/vendor/esac/backblaze-b2/src/Client.php:424

Stack trace:
#0 /var/www/chev_test/html/app/vendor/esac/backblaze-b2/src/Client.php(452): esac\B2\Client->getFile()
#1 /var/www/chev_test/html/app/lib/classes/class.queue.php(161): esac\B2\Client->deleteFile()
#2 /var/www/chev_test/html/app/cron.php(39): CHV\Queue::process()
#3 /var/www/chev_test/html/app/cron.php(26): storageDelete()
#4 /var/www/chev_test/html/app/loader.php(296): require_once('/var/www/chev_test/html/app/cron.php')
#5 /var/www/chev_test/html/cron.php(23): include_once('/var/www/chev_test/html/app/loader.php')

  • Processing cleanUnconfirmedUsers
  • Processing removeDeleteLog
  • Processing deleteExpiredImages
  • Processing storageDelete

Aw, snap! Internal Server Error [debug @ print,error_log] - https://v3-docs.chevereto.com/setup/debug.html

Fatal error [0]:
Triggered in /var/www/chev_test/html/app/vendor/esac/backblaze-b2/src/Client.php:424

Stack trace:
#0 /var/www/chev_test/html/app/vendor/esac/backblaze-b2/src/Client.php(452): esac\B2\Client->getFile()
#1 /var/www/chev_test/html/app/lib/classes/class.queue.php(161): esac\B2\Client->deleteFile()
#2 /var/www/chev_test/html/app/cron.php(39): CHV\Queue::process()
#3 /var/www/chev_test/html/app/cron.php(26): storageDelete()
#4 /var/www/chev_test/html/app/loader.php(296): require_once('/var/www/chev_test/html/app/cron.php')
#5 /var/www/chev_test/html/cron.php(23): include_once('/var/www/chev_test/html/app/loader.php')

Aw, snap! Internal Server Error [debug @ print,error_log] - https://v3-docs.chevereto.com/setup/debug.html

Fatal error [0]:
Triggered in /var/www/chev_test/html/app/vendor/esac/backblaze-b2/src/Client.php:424

Stack trace:
#0 /var/www/chev_test/html/app/vendor/esac/backblaze-b2/src/Client.php(452): esac\B2\Client->getFile()
#1 /var/www/chev_test/html/app/lib/classes/class.queue.php(161): esac\B2\Client->deleteFile()
#2 /var/www/chev_test/html/app/cron.php(39): CHV\Queue::process()
#3 /var/www/chev_test/html/app/cron.php(26): storageDelete()
#4 /var/www/chev_test/html/app/loader.php(296): require_once('/var/www/chev_test/html/app/cron.php')
#5 /var/www/chev_test/html/cron.php(23): include_once('/var/www/chev_test/html/app/loader.php')

  • Processing tryForUpdates
  • Processing cleanUnconfirmedUsers
  • Processing removeDeleteLog
  • Processing deleteExpiredImages
--
[OK] Cron tasks ran
  • Processing removeDeleteLog
  • Processing tryForUpdates
  • Processing cleanUnconfirmedUsers
  • Processing storageDelete
  • Processing deleteExpiredImages
 
You have already reported this, and the error seems to be in the B2 API implementation.

[CODE lang="php" title="app/vendor/esac/backblaze-b2/src/Client.php" highlight="7"] public function getFile(array $options)
{
    if (!isset($options['FileId']) && isset($options['BucketName']) && isset($options['FileName'])) {
        $options['FileId'] = $this->getFileIdFromBucketAndFileName($options['BucketName'], $options['FileName']);

        if (!$options['FileId']) {
            throw new NotFoundException();
        }
    }

    $response = $this->request('POST', '/b2_get_file_info', [
        'json' => [
            'fileId' => $options['FileId'],
        ],
    ]);

    return new File($response);
}[/CODE]

And here is the method causing it:

[CODE lang="php" title="app/vendor/esac/backblaze-b2/src/Client.php"] protected function getFileIdFromBucketAndFileName($bucketName, $fileName)
{
    // listFiles() iterates ALL files in the bucket (the underlying API lists
    // 1,000 names per call) just to resolve the FileId for a single file name.
    $files = $this->listFiles([
        'BucketName' => $bucketName,
        'FileName' => $fileName,
    ]);

    foreach ($files as $file) {
        if ($file->getFileName() === $fileName) {
            return $file->getFileId();
        }
    }

    return null;
}[/CODE]

^^^ To get the fileId it iterates over ALL files in the B2 bucket. This is very problematic because the listing works in pages of 1,000 files, as described in their API: https://www.backblaze.com/b2/docs/b2_list_file_names.html. It means your buckets have reached a size that makes this implementation obsolete.

Try to use the B2 S3-compatible API; if that doesn't work, then try to chop your buckets down to 1,000 files each.
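
To put numbers on it: with that lookup, resolving a single FileId can take up to ceil(bucket size / 1,000) b2_list_file_names calls, so a large bucket multiplied by a 75K-row deletion queue can quickly become millions of requests. Below is a minimal sketch of what a single-call lookup against the native API could look like, using its documented startFileName/maxFileCount parameters; the b2Request() helper and the $apiUrl, $authToken, and $bucketId values are assumptions standing in for a prior b2_authorize_account call, and none of this is part of the bundled esac/backblaze-b2 client:

[CODE lang="php" title="single-call FileId lookup (hypothetical sketch)"]<?php
// Hypothetical helper: issues one authorized POST against the B2 native API.
// $apiUrl and $authToken are assumed to come from an earlier b2_authorize_account call.
function b2Request(string $apiUrl, string $authToken, string $endpoint, array $json): array
{
    $ch = curl_init($apiUrl . '/b2api/v2/' . $endpoint);
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_POSTFIELDS => json_encode($json),
        CURLOPT_HTTPHEADER => ['Authorization: ' . $authToken, 'Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $body = curl_exec($ch);
    curl_close($ch);

    return json_decode($body, true);
}

// b2_list_file_names returns names in alphabetical order starting at startFileName,
// so asking for exactly one name starting at the target resolves its FileId in one call.
function findFileId(string $apiUrl, string $authToken, string $bucketId, string $fileName): ?string
{
    $response = b2Request($apiUrl, $authToken, 'b2_list_file_names', [
        'bucketId' => $bucketId,
        'startFileName' => $fileName,
        'maxFileCount' => 1,
    ]);
    $first = $response['files'][0] ?? null;

    return ($first !== null && $first['fileName'] === $fileName) ? $first['fileId'] : null;
}[/CODE]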
 
How can I convey this message to their support? What do I tell them? What files should I share with them?
It is important to me now, because I have to get rid of illegal images, which could be a problem for me.
 
Try to switch to S3-compatible; I'm afraid that's your only realistic option.
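
In practice, "S3-compatible" just means pointing an S3 client at Backblaze's S3 endpoint, where deleting an object by its key is a single request with no bucket listing involved. A minimal sketch assuming the aws/aws-sdk-php package; the region, endpoint, bucket name, and credentials are placeholders you would replace with your own:

[CODE lang="php" title="delete via B2 S3-compatible API (sketch)"]<?php
// Sketch only: requires the aws/aws-sdk-php package (composer require aws/aws-sdk-php).
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region' => 'us-west-004', // placeholder: use the region from your bucket's S3 endpoint
    'endpoint' => 'https://s3.us-west-004.backblazeb2.com', // placeholder endpoint
    'credentials' => [
        'key' => 'YOUR_B2_KEY_ID',              // placeholder application key ID
        'secret' => 'YOUR_B2_APPLICATION_KEY',  // placeholder application key
    ],
]);

// One request per object, addressed directly by key: no b2_list_file_names pagination.
$s3->deleteObject([
    'Bucket' => 'your-bucket-name', // placeholder bucket
    'Key' => 'images/example.jpg',  // placeholder object key
]);[/CODE]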
 
I am unable to change the storage API on the test website.
This is the error:


Code:
Aw, snap! Internal Server Error [debug @ print,error_log] - https://v3-docs.chevereto.com/setup/troubleshoot/debug.html

ErrorException [0]: Invalid argument supplied for foreach()
At /app/routes/route.dashboard.php:1022

Stack trace:
#0 /app/routes/route.dashboard.php(1022): G\errorsAsExceptions()
#1 /lib/G/classes/class.handler.php(230): G\Handler->{closure}()
#2 /lib/G/classes/class.handler.php(130): G\Handler->processRequest()
#3 /app/web.php(466): G\Handler->__construct()
#4 /app/loader.php(238): require_once('/app/web.php')
#5 /index.php(20): include_once('/app/loader.php')

Edit: This has already been reported: https://chevereto.com/community/threads/error-while-updating-settings.13244/
Is there a fix for it?
 
I use the same PHP and MariaDB versions. I haven't upgraded my site to the new beta (nor tested it), but deleting images on Backblaze does work for my current site's storage.
 
Have you tried with a large number of deletions at the same time?
Delete 2-3K images at once, enable error reporting, and set the debug level to 3 (a sketch of the setting follows below). Then check the cron log and you'll see the error mentioned in the OP.
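
For reference, a minimal sketch of the debug setting, assuming the usual Chevereto V3 app/settings.php override file; double-check the key name against the debug guide linked in the error output:

[CODE lang="php" title="app/settings.php (sketch)"]<?php
// Sketch, assuming Chevereto V3's settings override file; verify the key name in the debug guide.
// Level 3 both prints errors and writes them to error_log, which matches the
// "[debug @ print,error_log]" tag seen in the cron output above.
$settings['debug_level'] = 3;[/CODE]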
 
Yep. I had to delete a dude that was illegal; it took about 30 minutes for it to actually happen.
 
Mine doesn't process even after more than 15 days. There are 75K images pending. I think v3.18.0 didn't have this issue.
I will try the S3-compatible B2 API.
 
The S3-compatible storage API solved this issue. Images are finally getting deleted. What a relief.
B2 really sucks at programming support for third-party applications.
 