Moderation Policy

I. COMMUNITY RULES ENFORCEMENT

Our Community Rules and Terms of Service are designed to create a secure, enjoyable, and respectful environment for all users. Our moderation team is committed to upholding these standards and may take action against profiles, photos, or other content that violates our policies. The following are examples of such violations:

  • Child Exploitation & Abuse: We have zero-tolerance toward any content or behavior that features, glorifies, promotes, or accommodates the sexual exploitation of children.
  • Mental Abuse, Harassment & Cyber-Bullying: We foster a culture of respect and kindness, free from bullying, shaming and degradation.
  • Hate Speech & Discrimination: We promote non-discrimination and do not allow any content that promotes violence, harmful stereotypes, or prejudice.
  • Nudity & Explicit Content: Public sharing of explicit or suggestive content, including, but not limited to, pornography, provocative poses, and potentially abusive requests, is prohibited. Sharing sexual or non-sexual explicit images/videos of someone else without their valid written consent is prohibited; this includes the non-consensual distribution of intimate images, commonly referred to as “revenge porn”.
  • Commercial Solicitations: Advertising or commercial transactions are not permitted.
  • Illegal Goods & Services: Content related to illegal goods and services, including, but not limited to, drugs and weapons, is strictly forbidden.
  • Spam & Impersonation: We act against impersonation, artificial activity that creates a disruptive or negative user experience, and other malicious or fraudulent activities.

If we detect content that violates our rules and policies, or if another user or local authorities alert us to it, we will promptly review and remove it.

II. MODERATION PROCEDURE

Our tools and actions

To maintain the quality of our moderation, we employ a combination of automated tools and human review. Initially, a user’s content, specifically photos and cards, undergoes automated moderation (proactive review). For messages, videos, and situations requiring additional content checks, such as when we receive a report from another user, the review is carried out manually by a real person (reactive review).

If you believe that a user or their photos, videos, messages, or other content violates our Community Rules and/or Terms of Service, we encourage you to submit a report via our in-app feature. You can choose the appropriate category for your report and provide relevant comments. We assure you that the report will be handled confidentially and that your identity will not be disclosed to the user you have reported (the reported user).

When we receive a report, our dedicated moderator evaluates the situation and makes a decision. If the content violates our rules and/or policies, we promptly remove it. In cases of serious or repeated infractions, the user may be banned from Taimi.

In accordance with the Digital Services Act, we prioritize notices from trusted flaggers regarding illegal content on our platform. Trusted flaggers can reach out to us with their notices at legal@taimi.com.

Notices to users

We strive to be open with our users, especially when it comes to actions involving their content. That is why we have developed a user notice system:

  1. If you believe that another user violates our rules and you decide to report them, we will notify you immediately upon receipt of your report.
  2. If our review does not find a violation by the reported user, we will notify you in a timely manner. The reported user will not receive any notice.
  3. If our review finds a violation by the reported user, we will notify the reported user that their content was moderated and why. In some jurisdictions we are required to inform the reported user that their content was reported by another user; however, we will not disclose your identity. Regardless of whether the reported user submits an appeal, we will inform you of our final decision.

Our team 

Our moderation team is dedicated to maintaining a respectful and inclusive environment year-round, operating 24/7 in more than 10 countries. We have carefully selected our moderation team to include diverse individuals with unique life experiences, enabling them to make fair and informed decisions.

Each of our moderators is trained to consider a wide range of factors, including bias, discrimination, and gender, and to make decisions ensuring fairness and equity. They are equipped to handle a wide range of content and situations, providing timely and effective responses to any issues that may arise.

This commitment to diversity and continuous training allows us to consistently uphold high standards of moderation across diverse cultural contexts.

III. APPEAL (FOR EU AND UK USERS)

Because we are committed to ensuring transparency and open dialogue with our users, we provide EU and UK users with the opportunity to contest our content moderation decisions, regardless of whether you reported the content or your content was reported.

You have the option to submit an appeal if you believe:

  • we made a mistake in removing your content/banning your account;
  • we made a mistake in not removing the content of the user you reported.

Your appeal will be evaluated independently by 2 (two) human reviewers who were not involved in the original evaluation, guaranteeing an impartial assessment. If the two reviewers do not reach a unanimous decision, a third reviewer will make an independent assessment of the content in question and render a decision. The appeal decision is final.

You can initiate an appeal within 6 (six) months of the date of our initial decision. Alternatively, you may apply to an alternative dispute resolution body in accordance with applicable legislation.

Please note that in certain circumstances, we may permanently ban your account. Examples of such circumstances include the dissemination of child sexual exploitation content or the non-consensual sharing of photos and/or videos of other individuals.

NOTE: A ban on your Taimi account DOES NOT automatically cancel your Taimi Premium subscription. Your subscription will remain active until you cancel it on Taimi or in the App Store or Google Play Store, whichever applies. You can find more information on how to cancel your subscription in our Subscription Terms.

IV. REGIONAL REQUIREMENTS/COUNTRY-SPECIFIC REGULATIONS

Depending on your residency, different or additional rules may apply. For further information, refer to our Terms of Service.

AUSTRALIA. The following applies to you if you are an Australian resident, to the extent required by applicable law:

Australian eSafety Commissioner Information

The eSafety Commissioner serves as the online safety regulator in Australia, with a mission to protect Australians from online harms and foster a safer, more positive online environment. This is achieved through the enforcement of the Online Safety Act 2021 and other relevant legislation, which grants the eSafety Commissioner various powers.

In particular, Australian users can use the eSafety Commissioner’s systems to report harmful online content, such as cyberbullying, adult cyber abuse, and image-based abuse.

To learn more about the eSafety Commissioner’s role, functions, and the resources available, please refer to this webpage. For information on how to file a complaint, please consult this webpage.
