‘Inappropriate Language’ And How Profanity Impacts Monetization With Google

Profanity, swearing, blasphemy, obscene language. The web has no shortage of them.

But when it comes to monetization with Google through its humongous advertising network, bad words are bad.

Google frowns upon those who use profanity in content that is monetized through its network. However, that doesn't mean all of it is disallowed.

The words and statements that count as profanity are too numerous to list. Any attempt to catalog them would only produce a non-exhaustive list.

But Google has made clear how it determines when content with “vulgarity and inappropriate language” is eligible for ads - and which words and usage contexts it deems unfriendly and likely to scare advertisers away.

On its support page, Google says:

"We value diversity and respect for others, and we strive to avoid offending users, so we don’t allow ads or destinations that display shocking content or promote hatred, intolerance, discrimination, or violence."

On another support page, Google listed "all the main topics that are not advertiser-friendly":

  • Inappropriate language.
  • Violence.
  • Adult content.
  • Shocking content.
  • Harmful or dangerous acts.
  • Hateful & derogatory content.
  • Recreational drugs and drug-related content.
  • Firearms-related content.
  • Controversial issues.
  • Sensitive events.
  • Incendiary and demeaning.
  • Tobacco-related content.
  • Adult themes in family content.

Examples of "inappropriate language" include swearing or curse words, slurs relating to race or sexuality, and variations and misspellings of profane language.

Google relates this to promotions that contain obscene or profane language.

However, Google only considers content "inappropriate" if it "contains frequent uses of strong profanity or vulgarity."

"Occasional use of profanity (such as in music videos) won’t necessarily result in your video being unsuitable for advertising."

It's also worth noting which wording is authoritative: Google said that it provides "translated versions of our Help Center as a convenience, though they are not meant to change the content of our policies. The English version is the official language we use to enforce our policies."


On YouTube, Google puts profanities into three different categories.

First, more moderate swears like ‘damn’ and ‘hell’ can be used as frequently as creators want. Creators can even put them in their posts' titles and thumbnails. These kinds of words will not impact monetization, because Google considers them “totally safe.”

Second, harsher swear words can still be used in monetized videos, as long as the words don’t appear in titles or thumbnails, and aren’t used “repeatedly at the beginning of the video.” YouTube said that “many brands may choose not to advertise” in videos using those words.

And third, profanity that includes racial slurs and hate speech can never be monetized, even if the word is censored.

In other words, in most cases it all depends on the context.

Google won't simply "demonetize" content whenever it sees some swearing. Most of the time, problems arise only when the swearing is excessive and/or intended to offend others.

But still, Google isn't perfect. There is a chance it will flag content the wrong way. This is why Google said that "you can request human review of decisions made by our automated systems."