TinyMCE AI Limits
Understand the limits that ensure fair usage, optimal performance, and cost control across all TinyMCE AI features.
Overview
TinyMCE AI enforces several kinds of limits: rate limits on API request frequency, context limits on content size and processing, model-specific constraints, and file restrictions. Together these keep usage fair, performance predictable, and costs controlled.
Rate Limits
Rate limits control the frequency of API requests to prevent abuse and ensure service stability. The service implements limits on API requests, token usage, web search, and web scraping requests per minute. All rate limits are applied at both organization level (higher limits) and individual user level (lower limits) to ensure fair usage.
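Since requests that exceed a rate limit are typically rejected, clients should be prepared to back off and retry. The sketch below is illustrative only: the retry policy, the delay values, and the use of HTTP 429 with a `Retry-After` header are assumptions, not documented behavior of the TinyMCE AI service.

```typescript
// Exponential backoff with a cap: 1s, 2s, 4s, ... up to maxDelayMs.
function backoffDelayMs(attempt: number, baseMs = 1000, maxDelayMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxDelayMs);
}

// Hypothetical wrapper that retries rate-limited requests.
// Assumes the service signals rate limiting with HTTP 429.
async function requestWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await fetch(url, init);
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // Prefer the server's Retry-After header (seconds) when present.
    const retryAfter = res.headers.get("Retry-After");
    const delayMs = retryAfter ? Number(retryAfter) * 1000 : backoffDelayMs(attempt);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```

Capping the delay keeps a long outage from producing multi-minute waits, while honoring `Retry-After` defers to the server when it states how long to wait.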
Note: Specific rate limit values are subject to change and may vary based on your subscription tier. Contact support for current rate limit details for your environment.
Context Limits
Context limits control how much content can be attached to conversations to ensure AI models can process all information effectively. These limits vary by model based on their specific capabilities and processing requirements.
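Because these limits vary by model, a client may want a rough pre-flight check before attaching content. The snippet below is a minimal sketch under stated assumptions: the 4-characters-per-token heuristic, the window size, and the response reserve are illustrative values, not limits published for any TinyMCE AI model.

```typescript
// Illustrative context window size; real values differ per model.
const ASSUMED_CONTEXT_WINDOW_TOKENS = 128000;

// Rough heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check whether a set of attachments is likely to fit the model's
// context window, leaving room for the model's response.
function fitsContextWindow(
  attachments: string[],
  reservedForResponse = 4000,
  windowTokens = ASSUMED_CONTEXT_WINDOW_TOKENS
): boolean {
  const used = attachments.reduce((sum, a) => sum + estimateTokens(a), 0);
  return used + reservedForResponse <= windowTokens;
}
```

A check like this cannot replace the server-side limit, but it lets the UI warn users before a request is rejected for being too large.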
Model-Specific Limits
Different AI models have varying capabilities and limitations that affect context processing. Each model has different context window sizes that determine how much content can be processed. Models have response timeouts, file processing timeouts, web resource timeouts, and streaming response limits. All models include content moderation for inappropriate content, safety checks, and moderation response time limits.
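Since each model enforces response and streaming time limits, client code that waits on a model call should bound its own wait as well. The helper below is a generic sketch, not part of any TinyMCE AI API; the timeout value a client should use depends on the model's documented limits.

```typescript
// Race a pending operation (e.g. a model response) against a timeout,
// rejecting with an error if the operation takes longer than `ms`.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer!));
}
```

For streaming responses, the same pattern can be applied per chunk rather than to the whole response, so a stalled stream is detected even when earlier chunks arrived quickly.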
Next Steps
- Learn about AI Models for model-specific limitations.
- Set up Permissions to control user access.
- Explore Conversations for context management.
- API Documentation – Complete API reference for TinyMCE AI.