User Safety & Content Moderation Policy
Last Updated: August 2025
Creator Studio – The Creator Alliance Pty Ltd
Suite 302, 13/15 Wentworth Ave, Sydney NSW 2000
Email: admin@creatoralliancegroup.com
Introduction
Creator Studio is committed to maintaining a safe, legal, and respectful environment for all users. This User Safety & Content Moderation Policy explains our approach to monitoring user-generated content, how we enforce our community standards, and what you can expect from our content moderation process. By using Creator Studio, you agree to abide by our rules (as detailed in our Terms of Service, including the Acceptable Use Policy) and understand that we reserve the right to enforce those rules at our discretion to protect our community. This Policy is intended to be read in conjunction with our Acceptable Use Policy and AI Prompt Guidelines, which set out the content standards and user conduct expectations on our platform.
Our goal is to foster a positive community while complying with all applicable laws. We use a combination of tools and human oversight to achieve this, as detailed below. Please note: Creator Studio is a privately operated platform, and participation is a privilege, not a right. We have the discretion to remove content or users that violate our policies or otherwise pose a risk to the platform or its users.
Definitions
For clarity, the key terms used in this Policy are defined below.
Moderation means the combination of automated tools and human review used to enforce our rules.
Proactive Monitoring means automated or scheduled checks that flag or block obvious violations.
Reactive Monitoring means investigation of user reports.
Flag means a system- or user-generated alert that content or behaviour may breach policy.
Enforcement Actions means the measures we may apply, including warnings, content removal, temporary restrictions, suspension or permanent ban.
Emergency Issue means a matter that presents immediate risk of harm or illegality, prioritised for urgent action, including possible referral to authorities.
Acceptable Use means the content and conduct standards set out in our Acceptable Use Policy and AI Prompt Guidelines.
Moderation Overview
Proactive Monitoring: Creator Studio employs a multi-layered content moderation system. We utilise automated filters and tools to proactively scan for obvious violations of our content standards (for example, attempts to generate content involving minors or other illegal material). These automated systems help flag or block prohibited content in real time – for instance, if a user enters a text prompt that appears to request disallowed content, the system may issue a warning or prevent the generation. Automated filtering is continuously refined to detect content that violates our rules, but it is not infallible. Therefore, human moderators provide oversight and review. Our moderation team may audit user-generated content (including text prompts and AI-generated outputs), especially if it is flagged by our systems or by users. In sensitive or borderline cases, multiple moderators might review the content to ensure a fair and consistent decision, applying a “reasonable person” standard to interpret the context.
Reactive Monitoring (User Flagging): We strongly encourage users to help keep the community safe by reporting content or behaviour that may violate our policies. If you encounter content that seems illegal, harmful, or in breach of our Acceptable Use Policy (e.g. harassment, hate speech, non-consensual sexual content, impersonation, etc.), or if you observe another user behaving inappropriately, you can report it to us. Reports can be made through the in-app reporting tools (where available) or by contacting us via email at admin@creatoralliancegroup.com with details. Our moderation team reviews user reports promptly. While we are not able to pre-screen all content, we respond to flags as quickly as possible to investigate and take action.
Manual Review and Context: Any content flagged either by our automated systems or by user reports is subject to manual review by our moderation staff. Our team examines the content in context to determine if a policy has been violated. We understand that AI-generated content can be nuanced, so moderators consider context, intent, and severity. They also reference our established Acceptable Use Policy and AI Prompt Guidelines during evaluations. These documents outline what is and isn’t allowed on Creator Studio (for example, no depiction of real individuals in explicit sexual content, all characters portrayed must be adults (18+), no glorification of violence, etc.). By aligning our reviews with these standards, we ensure consistency and fairness. In cases of doubt, moderators may escalate the decision or confer as a team. We also maintain internal moderation logs to track decisions and ensure repeat issues are handled appropriately.
User Reporting & Response Time
We provide users with tools to report concerns, and we take these reports seriously. Here’s how the reporting process works and what you can expect:
How to Report: If you need to report content, you can use any “Report” or “Flag” function presented alongside the content (if available), or send a detailed email to admin@creatoralliancegroup.com. Please include as much information as possible, such as the username of the offender (if applicable), a description or screenshot of the offending content, and why you believe it violates our policies. The more detail you provide, the easier it is for our team to locate and evaluate the material in question.
Acknowledgement: Upon receiving a report, we will acknowledge it (either through an automated response or personal reply) so you know it has been received. If you report in-app, you may see a confirmation message that your report was submitted.
Review Timeline: Our moderation team aims to review and address reports within 3–5 business days. In many cases, straightforward policy violations (especially urgent safety issues) will be acted on much sooner – potentially within hours. However, more complex cases or high volumes of reports may require additional time to ensure a thorough investigation. We appreciate your patience as we carefully evaluate each report.
Actions After Review: Once we review the reported issue, we will take whatever action we deem appropriate (see Enforcement Actions below for details). If the report is verified and the content is found to violate our rules or to pose a risk, we will remove or disable access to that content. If a user is found violating policies, we will apply the appropriate consequence (warning, suspension, etc.). If we determine there is no violation or that the report was a false alarm, we may take no action. Because of privacy concerns, we may not always be able to provide you with detailed feedback about the outcome of your specific report, but we strive to be consistent in enforcement.
Emergency Situations: Certain violations are considered extremely serious and will be prioritised. For example, any content that exploits or endangers minors, depicts any form of sexual violence or other non-consensual acts, or includes a credible threat of harm will trigger immediate action. This can include rapid removal of content and involvement of law enforcement authorities if appropriate. In such cases, we do not wait 3–5 days; we act as quickly as possible to mitigate the harm. Users who submit such reports may not receive detailed follow-up due to legal considerations, but please know we treat these issues with the utmost urgency.
Enforcement Actions and Consequences
Creator Studio uses a tiered enforcement approach to address violations of our rules. Our goal is to educate and correct when possible, but we will not hesitate to take stronger action for serious or repeat offences. The general enforcement ladder is as follows:
Warning: For first-time or minor infractions, we will typically issue a warning to the user. This may be in the form of an in-app notification or email explaining what rule was violated. Warnings serve as an opportunity for the user to correct their behaviour without immediate loss of account privileges. For example, if a user unknowingly attempts to generate borderline content that is against our guidelines, we might warn them and provide guidance on acceptable use.
Content Removal: If content (an AI-generated image, text, profile content, etc.) is found to violate our policies, it will be removed or disabled. We may remove content with or without prior notice to the user. In many cases, content removal accompanies a warning or more severe action. We have no obligation to restore content once it’s deleted, and we are not liable for any loss of data or impact this removal may cause. (Users are advised to always adhere to the content rules to avoid losing their creations.)
Temporary Restrictions: For more serious violations or repeat offences, Creator Studio may impose temporary restrictions on the user’s account. This can include a suspension (preventing login or access to the service for a defined period) or limited functionality (for example, loss of ability to generate new content, comment, or interact for a set number of days). The length of a suspension can vary based on severity – it could be 24 hours, 7 days, 30 days, etc. We will notify the user of the suspension and the reason, as well as any conditions for reinstatement (if applicable).
Permanent Ban (Account Termination): For severe violations or continued misconduct despite prior warnings, we will terminate the user’s account. Permanent bans are typically reserved for egregious breaches such as creating or attempting to create illegal content (for example, child sexual exploitation material), harassing or threatening other users, circumventing safety measures, or repeatedly violating the rules after multiple warnings. If your account is terminated, you will lose access to Creator Studio’s services entirely. Depending on the violation, we may also prohibit you from re-registering in the future. Note that if an account is banned, any subscription or virtual currency (Tokens) associated with it may be forfeited; per our Terms of Service, users banned for cause are not entitled to refunds for unused services.
Involving Authorities: Where required by law or if we believe there is a risk of real-world harm, we will involve appropriate authorities. For example, any indication of child exploitation material will be reported to law enforcement immediately, along with any relevant user data we can lawfully provide. Similarly, threats of violence may be forwarded to police if we believe there’s a credible danger. We cooperate with law enforcement and regulatory bodies as legally mandated to keep our platform safe.
These enforcement steps are guidelines; the actual action taken may vary based on circumstances. Creator Studio reserves the right to escalate or skip steps in the enforcement ladder depending on the severity of the violation. For instance, we may ban a user without prior warning if the behaviour is blatantly harmful or unlawful. Conversely, we might choose to issue multiple warnings for different minor infractions if we believe the user is acting in good faith and trying to learn the rules. Our primary aim is to correct problematic behaviour and protect the community, not to punish unnecessarily – but the safety and integrity of the platform always comes first.
Alignment with Acceptable Use Policy & Guidelines
All decisions regarding content moderation and user safety are grounded in our established rules and guidelines:
Acceptable Use Policy: This is the set of rules (outlined in our Terms of Service or a separate policy document) that defines prohibited content and activities on Creator Studio. It prohibits obviously illegal or harmful content (for example, content involving minors, sexual content involving animals, sexual violence or other non-consensual acts, hate speech, doxxing (the malicious sharing of personal information), fraud, etc.) and other misuse of the service (such as attempting to reverse-engineer the platform or circumvent security measures). Our moderators directly reference the Acceptable Use Policy when evaluating content. If content violates any of those core rules, it will be removed and the user will face the corresponding enforcement action.
AI Prompt Guidelines: Creator Studio provides AI-driven content generation features. The AI Prompt Guidelines (also known as our AI Content Guidelines) offer specific direction on how users should engage with these tools responsibly. They remind users that any text prompt they enter is considered “content” under our policies. For example, even attempting to generate disallowed content via a prompt is against the rules, whether or not the AI produces it. The Guidelines cover nuances such as avoiding prompts that might inadvertently create underage-looking characters, disallowing attempts to generate depictions of real persons or non-consensual scenarios, and encouraging clarity in prompts to avoid misinterpretation. Our moderation team monitors prompt inputs (with respect for user privacy, as outlined in our Privacy Policy) and uses these Guidelines to judge whether a user’s intended use of the AI is within policy. A breach of the Prompt Guidelines is treated the same as posting prohibited content.
Community Standards & Context: In addition to hard rules, we maintain broader community standards to ensure respect and safety. For instance, even if certain extreme content might not be explicitly illegal, it could be considered out-of-bounds for our community (e.g., content that is exceptionally graphic or otherwise harmful to the overall user experience). Our moderators are empowered to use reasonable judgement in such cases, guided by the spirit of our policies. We strive for consistency: similar cases should result in similar outcomes. Internally, we train our team and document precedent decisions to help maintain fairness.
By adhering to these policies and guidelines, we not only protect users but also ensure that Creator Studio can continue to operate in compliance with laws and platform standards (for example, App Store policies or regulatory requirements in various jurisdictions). Users should familiarise themselves with the Acceptable Use Policy and AI Prompt Guidelines, as those documents provide helpful examples of what is not allowed. Ultimately, if you are unsure whether something is permitted, it’s safest not to post or generate it and to ask our support team if you need clarification.
Platform Rights and User Responsibilities
Creator Studio is a private platform, and we retain ultimate discretion over content and accounts. By using our service, you agree that:
We may remove or restrict content that violates our policies or that we believe is inappropriate for the platform, at any time and without prior notice. This includes content that may not be illegal but is nonetheless deemed contrary to our community standards or the intended tone of the platform.
We may suspend or terminate accounts for conduct that breaches our rules or that we determine, in our sole judgement, poses a risk to other users, our community, or our business. This enforcement can occur even if you have paid for a subscription or purchased Tokens – as noted, violating the rules can result in loss of access without compensation.
Our decisions are final. While we strive to be fair and consider context, not every moderation decision will be open to debate. We are not a public forum; users do not have a “right” to post content that we decide is against our guidelines. In borderline cases, we err on the side of safety and compliance.
No Guaranteed Reinstatement: If your content is removed or your account is penalised, we are not obligated to reinstate the content or account. We do, however, allow users to contact us if they sincerely believe a mistake has been made. If you feel you were unfairly moderated or banned, you may reach out to our support (via the contact information below) to request a secondary review. We will review such requests and, if a mistake is found or new information comes to light, we may adjust our decision. That said, vexatious or repeated appeals without new evidence will not be entertained indefinitely.
User Responsibility: All users are responsible for the content they create or share on Creator Studio. Ignorance of the rules is not an excuse for violating them. We provide clear guidelines and expect you to follow them. Additionally, users are responsible for securing their account credentials. If someone else accesses your account and violates our policies, it may still result in enforcement against your account. Keep your login information secure and report any suspicious account activity to us immediately.
Community Self-Moderation: We encourage a culture of self-moderation and mutual respect. This means users should not only follow the rules themselves but also help by flagging issues and not engaging with those who violate policies (for instance, do not “feed the trolls” or escalate conflicts; instead, report harassment to us). We take reports seriously and want to resolve issues before they escalate.
Changes to Moderation Policy
As online content and threats evolve, so too may our moderation practices. Creator Studio reserves the right to update or modify this User Safety & Content Moderation Policy at any time. The “Last Updated” date at the top will reflect when the latest changes were made. We may inform users of significant changes via an in-app notification, email, or a prominent notice on our website. However, it is also your responsibility to periodically review this Policy for any updates. Continued use of Creator Studio after changes to this Policy indicates your acceptance of the revised rules and procedures.
Contact and Support
Maintaining a safe community is a partnership between the platform and its users. If you have questions about this Policy or suggestions for improving safety, or if you need to alert us to an urgent issue, please contact us. You can reach Creator Studio support at admin@creatoralliancegroup.com. In your correspondence, please include as much detail as possible about your concern. We will address inquiries and reports in accordance with this Policy and our Terms of Service.
For urgent matters (for example, if you feel someone’s safety is at immediate risk or you have discovered a critical security issue), you may use the subject line “URGENT: Safety Concern” in your email to help bring it to our attention. We will do our best to respond as quickly as practicable.
Thank you for helping us keep Creator Studio a safe and enjoyable space for creative expression. By working together under these guidelines, we can protect our community and ensure that everyone feels secure using our platform.