Assembly AI Integration

TractStack integrates with Assembly AI’s LeMUR API to provide AI-powered content generation, analysis, and processing capabilities. This integration enables automated content creation, text analysis, and intelligent content processing within your content management workflow.

Set up Assembly AI integration in your environment configuration:

# Required: Assembly AI API Key
AAI_API_KEY=your_assembly_ai_api_key_here
# Optional: Custom API endpoint (defaults to Assembly AI's endpoint)
AAI_API_ENDPOINT=https://api.assemblyai.com
  1. Get API Key: Sign up at Assembly AI and obtain your API key
  2. Environment Configuration: Add the key to your .env file or environment variables
  3. Tenant Configuration: The API key is configured per tenant in TractStack

The Assembly AI integration uses the LeMUR API for text processing tasks:

{
  "prompt": "Your question or instruction",
  "input_text": "Content to analyze",
  "final_model": "anthropic/claude-3-5-sonnet",
  "max_tokens": 1000,
  "temperature": 0.7
}
Parameter    Type     Required  Description
prompt       string   Yes       The instruction or question for the AI
input_text   string   Yes       The content to analyze or process
final_model  string   No        AI model to use (default: anthropic/claude-3-5-sonnet)
max_tokens   integer  No        Maximum response length (default: 4000)
temperature  float    No        Response creativity (0.0-1.0, default: 0.0)
Supported values for final_model include (a typed request sketch follows this list):

  • anthropic/claude-3-5-sonnet (default)
  • anthropic/claude-3-haiku
  • anthropic/claude-3-opus
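For reference, the request body described above can be modeled as a small TypeScript type. This is only a sketch: the field names and defaults come from the parameter table, but the interface name itself is illustrative.

// Illustrative shape of the askLemur request body; field names mirror
// the parameter table above. The interface name is not a TractStack API.
interface AskLemurRequest {
  prompt: string;        // instruction or question for the AI
  input_text: string;    // content to analyze or process
  final_model?: string;  // defaults to "anthropic/claude-3-5-sonnet"
  max_tokens?: number;   // defaults to 4000
  temperature?: number;  // 0.0-1.0, defaults to 0.0
}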

Automatically generate titles and URL slugs for content:

{
  "prompt": "Generate a concise title (maximum 40-50 characters) and a URL-friendly slug that captures the essence of this content. Return only JSON with 'title' and 'slug' keys.",
  "input_text": "Your markdown content here",
  "final_model": "anthropic/claude-3-5-sonnet",
  "temperature": 0.3,
  "max_tokens": 200
}
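Because the prompt asks the model to return JSON, the generated title and slug typically arrive as a JSON string inside the response text. A minimal parsing sketch, assuming the text lands in data.response as shown in the response format later in this page (the helper name is hypothetical):

// Hypothetical helper: extract title and slug from a LeMUR response,
// assuming the model returned the requested JSON string.
function parseTitleSlug(responseText: string): { title: string; slug: string } | null {
  try {
    const parsed = JSON.parse(responseText);
    if (typeof parsed.title === "string" && typeof parsed.slug === "string") {
      return { title: parsed.title, slug: parsed.slug };
    }
    return null;
  } catch {
    return null; // model did not return valid JSON; fall back to manual entry
  }
}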

Generate new content based on prompts and context:

{
  "prompt": "Write a professional landing page section about [topic]. Use an engaging, confident tone. Include a clear value proposition and call-to-action.",
  "input_text": "Reference context or existing content",
  "final_model": "anthropic/claude-3-5-sonnet",
  "temperature": 0.7,
  "max_tokens": 4000
}
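One way to keep prompts like this consistent is a small template helper that fills in the [topic] placeholder. This is only a sketch; the function name is not part of TractStack.

// Illustrative prompt builder; fills the [topic] placeholder used in
// the example above. The name is hypothetical, not a TractStack API.
function buildLandingSectionPrompt(topic: string): string {
  return (
    `Write a professional landing page section about ${topic}. ` +
    `Use an engaging, confident tone. Include a clear value proposition and call-to-action.`
  );
}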

Analyze existing content for insights:

{
  "prompt": "Analyze this content for tone, key messages, and improvement suggestions. Provide specific, actionable feedback.",
  "input_text": "Content to analyze",
  "final_model": "anthropic/claude-3-5-sonnet",
  "temperature": 0.3,
  "max_tokens": 2000
}

From your frontend components, call the Assembly AI service:

const response = await fetch(`${goBackend}/api/v1/aai/askLemur`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Tenant-ID": tenantId,
  },
  credentials: "include",
  body: JSON.stringify({
    prompt: "Your prompt here",
    input_text: "Content to process",
    final_model: "anthropic/claude-3-5-sonnet",
    temperature: 0.7,
    max_tokens: 4000,
  }),
});
const result = await response.json();

The API returns responses in this format:

{
  "success": true,
  "data": {
    "response": "Generated content or analysis"
  }
}

Error responses:

{
  "success": false,
  "error": "Error description"
}
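In TypeScript, these two response shapes can be modeled as a small discriminated union. The type names below are illustrative, not part of TractStack.

// Illustrative types for the success and error responses shown above.
type AskLemurSuccess = {
  success: true;
  data: { response: string };
};

type AskLemurFailure = {
  success: false;
  error: string;
};

type AskLemurResult = AskLemurSuccess | AskLemurFailure;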

Implement proper error handling in your frontend:

try {
  const response = await fetch(apiUrl, requestOptions);
  if (!response.ok) {
    throw new Error("API request failed");
  }

  const result = await response.json();
  if (!result.success) {
    throw new Error(result.error || "Processing failed");
  }

  // Handle successful response
  const content = result.data.response;
} catch (error) {
  console.error("Assembly AI error:", error.message);
  // Handle error state
}
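The fetch call and error handling above can be folded into one reusable helper. This is a minimal sketch that assumes the same /api/v1/aai/askLemur endpoint and headers shown earlier; the helper name is illustrative.

// Illustrative wrapper combining the request and error-handling
// patterns shown above; returns the generated text on success.
async function askLemur(
  goBackend: string,
  tenantId: string,
  body: {
    prompt: string;
    input_text: string;
    final_model?: string;
    temperature?: number;
    max_tokens?: number;
  },
): Promise<string> {
  const response = await fetch(`${goBackend}/api/v1/aai/askLemur`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Tenant-ID": tenantId,
    },
    credentials: "include",
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`API request failed with status ${response.status}`);
  }
  const result = await response.json();
  if (!result.success) {
    throw new Error(result.error || "Processing failed");
  }
  return result.data.response;
}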

The Assembly AI integration is built into TractStack’s content editor for:

  • Automatic Title Generation: Generate titles and slugs from content
  • Content Suggestions: Get writing assistance and improvements
  • Content Analysis: Analyze tone, structure, and effectiveness

The integration also hooks into the markdown processing pipeline for:

  • Auto-completion: Suggest content completions
  • Style Analysis: Ensure consistent writing style
  • SEO Optimization: Generate SEO-friendly titles and descriptions

Supports automated page creation with:

  • Template-based Generation: Use predefined prompts for different page types (see the sketch after this list)
  • Context-aware Content: Generate content that fits your site’s style
  • Bulk Processing: Generate multiple pages or sections efficiently
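As a sketch of what template-based generation could look like on the calling side, a map of prompts per page type keeps wording consistent. The template names and copy below are illustrative, not built-in TractStack templates.

// Illustrative prompt templates for different page types; adjust the
// wording to match your site's voice.
const pagePrompts: Record<string, string> = {
  landing:
    "Write a professional landing page section. Use an engaging, confident tone. " +
    "Include a clear value proposition and call-to-action.",
  about:
    "Write an 'About Us' section with a warm, trustworthy tone. " +
    "Summarize the organization's mission and background.",
  faq: "Write a concise FAQ section. Use short questions and direct answers.",
};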

Be Specific: Provide clear, detailed prompts for better results

{
  "prompt": "Write a technical documentation section about API authentication. Include code examples, security considerations, and troubleshooting tips. Use a professional, instructional tone."
}

Use Context: Provide relevant context in input_text

{
  "input_text": "Existing API documentation context, brand voice examples, technical requirements"
}

Set Appropriate Parameters: Adjust temperature and tokens based on use case

  • Low temperature (0.0-0.3): For factual, consistent content
  • Medium temperature (0.4-0.7): For creative but controlled content
  • High temperature (0.8-1.0): For very creative content

  • Batch Requests: Group related content generation tasks
  • Cache Results: Store generated content to avoid regeneration
  • Async Processing: Use non-blocking requests for better UX
  • Timeout Handling: Implement appropriate timeout values (see the sketch below)
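For timeout handling, the fetch call can be wrapped with an AbortController. A minimal sketch; the 30-second default is an arbitrary example value.

// Illustrative timeout handling using AbortController; aborts the
// request if it has not completed within timeoutMs milliseconds.
async function fetchWithTimeout(
  url: string,
  options: RequestInit,
  timeoutMs = 30_000,
): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...options, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}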

  • Review Generated Content: Always review AI-generated content before publishing
  • Maintain Brand Voice: Use consistent prompts that reflect your brand
  • Iterate on Prompts: Refine prompts based on output quality
  • Human Oversight: Keep human editorial control in the content workflow

  • Monitor your Assembly AI usage and limits
  • Implement request throttling if needed
  • Consider caching strategies for repeated requests (a simple cache sketch follows this list)
  • Use appropriate token limits to control costs
  • Choose models based on complexity needs
  • Monitor usage patterns and optimize accordingly
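A simple in-memory cache keyed by the request payload can avoid repeat calls for identical prompts. This is only a sketch and assumes the askLemur helper sketched earlier; it is not persisted across page loads.

// Illustrative in-memory cache for repeated LeMUR requests, keyed by
// the serialized request body.
const lemurCache = new Map<string, string>();

async function askLemurCached(
  goBackend: string,
  tenantId: string,
  body: { prompt: string; input_text: string },
): Promise<string> {
  const key = JSON.stringify(body);
  const cached = lemurCache.get(key);
  if (cached !== undefined) return cached;
  const answer = await askLemur(goBackend, tenantId, body); // helper sketched earlier
  lemurCache.set(key, answer);
  return answer;
}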

API Key Not Configured

  • Verify AAI_API_KEY is set in environment
  • Check tenant-specific configuration
  • Ensure API key is valid and active

Request Failures

  • Check network connectivity
  • Verify request format and parameters
  • Review Assembly AI service status

Poor Quality Results

  • Refine prompts for clarity and specificity
  • Adjust temperature and token settings
  • Provide better context in input_text

Timeout Issues

  • Increase timeout values for complex requests
  • Consider breaking large requests into smaller parts
  • Check Assembly AI service response times

Enable debug logging to troubleshoot issues:

# Enable detailed logging
LOG_LEVEL=debug
# Assembly AI specific logging
AAI_DEBUG=true

This will provide detailed information about API requests and responses in your application logs.