
Moderate Content Settings

siddharth

πŸ’– Chevereto Fan
Hi All,

I am seeing two different options for Moderate Content.

1) Adult - This blocks all adult content, meaning the website allows no adult images, legal or illegal.
2) Teen and Adult - I believe this option blocks adult images of minors, which is illegal.

Whereas when I tried to test "Teen and Adult" by uploading some legal adult images, I saw that they were also getting blocked.
 
It cannot differentiate legal from illegal; it just classifies inappropriate content.

 


Still, my question is not answered, or you understood it differently.

What is the difference between 1 and 2?

1) Adult - It is legal in a few countries and illegal in many others, so an image host can choose to accept or block it.
2) Teen and Adult - It refers to CP, but with the current method it is not blocking teen + adult; it is blocking all adult content, which means this option doesn't differ from the one above.

The API can output Teen, and the same API can also output Adult for the same image, which means that if it ticks both boxes, the image is illegal. I believe there is a bug in the API implementation.

@Rodolfo
 
The difference is the score; it's just an algorithm. Legal or illegal, the algorithm/API cannot differentiate.

You are not understanding my point at all. How exactly does one benefit from having the two options?

I uploaded an adult-only image and it was blocked under the Teen and Adult option, whereas it should not be blocked with that setting.

I have built my own monitoring/moderation API for my image host, but it is based on the Google API. Still, since Chevereto has added this feature, I am testing it out, and it is failing the Teen and Adult test.

I believe the host owner should be allowed to edit the score, as @Rodolfo may have used the wrong value for Teen and Adult, because most adult images are now being blocked under the Teen + Adult setting.
 
@Rodolfo Can you please take a look at this, as it has to be addressed? Most hosts are taken down only because of CP. Once the Teen + Adult values are adjusted properly, it will save a hundred other websites that use Chevereto from CP being uploaded to them while still allowing valid, legal adult images.
 
I provide ModerateContent because it does content flagging, that's all. We use it according to their docs: https://moderatecontent.com/documentation/content and if you read them you will notice that it is just a rating algorithm to RATE content as Adult, Teen, etc. In fact, it gives you just a prediction index for those. It doesn't do anything else regarding legal matters.

I suggest you implement manual moderation if you really care about the content.
 

Agreed. I see two options, Adult and Teen + Adult. If Teen + Adult is meant to block teen content, it blocks even when the image is adult and no teens are involved. Is it possible to modify the prediction index value so we can adjust it to our liking? Is it possible to modify it manually in the Chevereto code?
 
Is it possible to modify manually in the chevereto code?
NO! It's just a prediction, and I think it's very misleading; the devs of ModerateContent should just remove it (teen).
Look here, specifically at the section in the screenshot.
It's really very VERY clearly explained.
 

Attachments

  • Screenshot 2021-06-02 at 20.36.54.png

So it just gives the output as either Teen or Adult; it won't give a combined output. Got it.

Then I need to add a custom filter of my own using the Google Vision API. It is expensive, but scanning only adult-flagged images will save on cost. I have an external monitor that connects to the DB to fetch the image URL, passes it to Google Vision, updates the DB again, and removes the image if it is illegal.

I thought of scrapping the monitor app, but it seems I have to continue using it, as ModerateContent cannot give both.
 
@HenrysCat But I checked their website:

{
    "url_classified": "https://www.moderatecontent.com/img/sample_face_6.jpg",
    "rating_index": 2,
    "rating_letter": "t",
    "predictions": {
        "teen": 72.6473867893219,
        "everyone": 26.903659105300903,
        "adult": 0.4489644430577755
    },
    "rating_label": "teen",
    "error_code": 0
}

As you can see, they output three different values: Teen, Everyone, and Adult. So we can modify the Chevereto code a bit to achieve what I am pointing out.
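To illustrate what the sample response above contains, here is a minimal Python sketch that parses it and reads the three prediction values. This is only an illustration of the API's JSON shape as quoted in the thread; it is not Chevereto's actual code (Chevereto is written in PHP).

```python
import json

# Sample response copied verbatim from the ModerateContent docs quoted above.
sample = """
{
    "url_classified": "https://www.moderatecontent.com/img/sample_face_6.jpg",
    "rating_index": 2,
    "rating_letter": "t",
    "predictions": {
        "teen": 72.6473867893219,
        "everyone": 26.903659105300903,
        "adult": 0.4489644430577755
    },
    "rating_label": "teen",
    "error_code": 0
}
"""

data = json.loads(sample)

# The single rating letter is what the Chevereto plugin reportedly uses.
print(data["rating_letter"])  # "t"

# The per-class percentages are also exposed; the top class matches the label.
top_class = max(data["predictions"], key=data["predictions"].get)
print(top_class)  # "teen"
```

Note that for this sample the "teen" and "adult" percentages are not both high: the image rated 't' has an adult score of only ~0.45%, which is relevant to the combined-threshold idea discussed below.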

If the Teen value is 50+ and the Adult value is also 50+, then the image should be treated as illegal, since CP is illegal. CP is a major issue for image hosts: I hosted more than 4 million images on my old site and served TBs of bandwidth, and I was kicked out by CF and forced to close the site.

This time, I am not letting it go down by leaving the host vulnerable to predators who upload CP. Providing such an option built in could save a lot of other image hosters. If it is not provided built in, I will have to go with my custom model.
 
ModerateContent gives you a rating letter, which is what I use to determine the type. The letter determines 'a' or 't'; that's all the feedback needed at the uploading layer. The implementation was made by them directly; I just polished it up to label it as official.

It would be awesome if ModerateContent extended its service to detect CP, but as far as I know they just rate images and detect anime.
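Based on the letter-based behavior described here, the two Chevereto settings can be sketched as follows. This is a hypothetical Python illustration of the logic as described in this thread, with assumed setting names; Chevereto's real implementation is in PHP and may differ.

```python
# Hypothetical mapping of Chevereto's two "Moderate Content" settings to
# the ModerateContent rating letters they block, per the explanation above:
# the "Adult" setting blocks only 'a', while "Teen and Adult" blocks both
# 'a' and 't'. Setting names here are illustrative, not Chevereto's own.
BLOCKED_LETTERS = {
    "adult": {"a"},
    "teen_and_adult": {"a", "t"},
}

def is_blocked(rating_letter: str, setting: str) -> bool:
    """Return True if an image with this rating letter is rejected
    under the given moderation setting."""
    return rating_letter in BLOCKED_LETTERS[setting]
```

Under this reading, "Teen and Adult" is a superset of "Adult" (block either letter), not an AND condition, which would explain why legal adult images are also rejected under that setting.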
 

Screenshot_3.png
This is the output they give. They also give you percentages for the three important variables: Teen, Adult, and Everyone. It would be great if you made use of these variables, as it would be welcomed by more users.

I am sure I tested them when I hired a person to develop the monitor for me using Cloud Vision, because ModerateContent's results are not 100% accurate, but I can see ModerateContent is much cheaper.

So you could add an option in the backend to block content based on the percentages, along with which content types to block in combination. 100% of people will go with Adult + Teen, which means: if it is adult, then check for age; if it is teen, then block it. It doesn't mean blocking both teen and adult.

Teen:
Adult:

If I enter Teen 50% and Adult 50%, then any content scoring higher than both will be blocked (check for Adult first and then check for age). We need to do it exactly in that order; if we do it in reverse (check for age first and then Adult), we will end up with more API usage. Both give the same result, but the first method is optimized.
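The combined-threshold rule proposed above can be sketched in a few lines of Python. This is purely an illustration of the poster's suggested logic (flag only when both the adult and teen scores exceed their thresholds, checking adult first so non-adult images short-circuit early); it is not part of Chevereto or ModerateContent.

```python
def flag_combined(predictions: dict,
                  adult_threshold: float = 50.0,
                  teen_threshold: float = 50.0) -> bool:
    """Flag an image as illegal only when BOTH the adult and teen
    prediction percentages exceed their thresholds.

    Checks the adult score first, per the proposal above, so that
    non-adult images return early without evaluating the age score."""
    if predictions.get("adult", 0.0) < adult_threshold:
        return False  # not adult content: no need to check the age score
    return predictions.get("teen", 0.0) >= teen_threshold
```

For example, the docs' sample image (teen ~72.6%, adult ~0.45%) would not be flagged, whereas an image scoring high on both would be. Whether the underlying prediction numbers are reliable enough for this use is exactly what is disputed later in the thread.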

And we can discuss in the community over the years to find the most accurate values, as different owners will get different values through lots of testing, and it will be good for everyone.
 
I strongly suggest you contact them directly before blindly starting to make sense of these numbers.

This is because they made our implementation based on letter ratings, not us. They did the plugin; then I made it official, basically "as-is", directly from them:

1622664944963.png

I assume they deliberately didn't use these raw rating numbers in our implementation; those numbers are probably exposed for debugging purposes, not for determining things on your own.

I don't know. We are telling you how the stuff was made, but you keep insisting on using it for something it wasn't made for. At least ask the people who made the service.
 
@Rodolfo I have also added proof of what I did before using Cloud Vision, where I succeeded in deleting CP. But it cost me a lot. I went with Google Cloud Vision because ModerateContent is not accurate; I tested a lot of other APIs.

As you said above, I will work with ModerateContent and also with a few other community members to find out whether we can rely on ModerateContent's values, by testing with a good number of images.

I will post my findings here soon.
 