The `LLMCouncil` class orchestrates multiple specialized LLM agents to collaboratively answer queries through a structured peer review and synthesis process. Inspired by Andrej Karpathy's llm-council implementation, this architecture demonstrates how different models evaluate and rank each other's work, often selecting responses from other models as superior to their own.
The class automatically tracks all agent messages in a `Conversation` object and formats output using `history_output_formatter`, providing flexible output formats including dictionaries, lists, strings, JSON, YAML, and more.
## Workflow Overview
The LLM Council follows a four-step process:

1. **Parallel response generation**: Every council member independently answers the query.
2. **Anonymization**: Responses are shuffled and relabeled with anonymous IDs so reviewers cannot identify the authors.
3. **Peer review**: Each member evaluates and ranks all anonymized responses.
4. **Synthesis**: The Chairman combines the responses and rankings into a final answer.
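The four-step process can be sketched independently of the library, with plain callables standing in for agents (`council_run` and everything inside it are illustrative, not part of the swarms API):

```python
import random


def council_run(query, members, chairman, seed=None):
    """Minimal council loop. members: {name: callable}, chairman: callable."""
    rng = random.Random(seed)

    # 1. Each member answers the query independently (sequentially here;
    #    the real class dispatches these in parallel)
    responses = {name: ask(query) for name, ask in members.items()}

    # 2. Anonymize: shuffle authorship and assign letter IDs
    names = list(responses)
    rng.shuffle(names)
    anonymous = {chr(ord("A") + i): responses[n] for i, n in enumerate(names)}

    # 3. Peer review: every member ranks the anonymized responses
    bundle = "\n\n".join(f"Response {i}:\n{t}" for i, t in anonymous.items())
    evaluations = {
        name: ask(f"Rank these responses to '{query}':\n\n{bundle}")
        for name, ask in members.items()
    }

    # 4. The chairman synthesizes responses and rankings into a final answer
    material = bundle + "\n\n" + "\n\n".join(evaluations.values())
    return chairman(f"Query: {query}\n\n{material}\n\nWrite the final answer.")
```

With real LLM-backed callables, the chairman's reply is the final answer; the stubs below only exercise the control flow.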
## Attributes

| Attribute | Type | Description | Default |
|-----------|------|-------------|---------|
| `council_members` | `List[Agent]` | List of Agent instances representing council members | `None` (creates default council) |
| `chairman` | `Agent` | The Chairman agent responsible for synthesizing responses | Created during initialization |
| `conversation` | `Conversation` | Conversation object tracking all messages throughout the workflow | Created during initialization |
| `output_type` | `HistoryOutputType` | Format for the output (e.g., "dict", "list", "string", "json", "yaml") | `"dict"` |
| `verbose` | `bool` | Whether to print progress and intermediate results | `True` |
## Methods
### `__init__`

Initializes the LLM Council with council members and a Chairman agent.

#### Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `id` | `str` | `swarm_id()` | Unique identifier for the council instance. |
| `name` | `str` | `"LLM Council"` | Name of the council instance. |
| `description` | `str` | `"A collaborative council..."` | Description of the council's purpose. |
| `council_members` | `Optional[List[Agent]]` | `None` | List of Agent instances representing council members. If `None`, creates default council with GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4. |
| `chairman_model` | `str` | `"gpt-5.1"` | Model name for the Chairman agent that synthesizes responses. |
| `verbose` | `bool` | `True` | Whether to print progress and intermediate results. |
| `output_type` | `HistoryOutputType` | `"dict"` | Format for the output. Options: "list", "dict", "string", "final", "json", "yaml", "xml", "dict-all-except-first", "str-all-except-first", "dict-final", "list-final". |
#### Returns

| Type | Description |
|------|-------------|
| `None` | `__init__` returns `None`; the configured council is available through the instance attributes. |
#### Description
Creates an LLM Council instance with specialized council members. If no members are provided, it creates a default council consisting of:
- **GPT-5.1-Councilor**: Analytical and comprehensive responses
- **Gemini-3-Pro-Councilor**: Concise and well-processed responses
- **Claude-Sonnet-4.5-Councilor**: Thoughtful and balanced responses
- **Grok-4-Councilor**: Creative and innovative responses
The Chairman agent is automatically created with a specialized prompt for synthesizing responses. A `Conversation` object is also initialized to track all messages throughout the workflow, including user queries, council member responses, evaluations, and the final synthesis.
#### Example Usage
```python
from swarms.structs.llm_council import LLMCouncil

# Create council with default members
council = LLMCouncil(verbose=True)

# Create council with custom members and output format
# (illustrative - pass your own Agent instances as council_members)
custom_council = LLMCouncil(
    council_members=my_agents,  # a List[Agent] you have constructed
    chairman_model="gpt-5.1",
    output_type="final",
    verbose=True,
)
```

### `run`
Executes the full LLM Council workflow: parallel responses, anonymization, peer review, and synthesis. All messages are tracked in the conversation object and formatted according to the `output_type` setting.
#### Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `query` | `str` | Required | The user query for the council to answer. |

#### Returns
| Type | Description |
|------|-------------|
| `Union[List, Dict, str]` | Formatted output based on `output_type`. The output contains the conversation history with all messages tracked throughout the workflow. |
#### Output Format
The return value depends on the `output_type` parameter set during initialization:
- **`"dict"`** (default): Returns conversation as a dictionary/list of message dictionaries
- **`"list"`**: Returns conversation as a list of formatted strings (`"role: content"`)
- **`"string"`** or **`"str"`**: Returns conversation as a formatted string
- **`"final"`** or **`"last"`**: Returns only the content of the final message (Chairman's response)
- **`"json"`**: Returns conversation as a JSON string
- **`"yaml"`**: Returns conversation as a YAML string
- **`"xml"`**: Returns conversation as an XML string
- **`"dict-all-except-first"`**: Returns all messages except the first as a dictionary
- **`"str-all-except-first"`**: Returns all messages except the first as a string
- **`"dict-final"`**: Returns the final message as a dictionary
- **`"list-final"`**: Returns the final message as a list
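As an illustration of the shapes these settings produce, here is how a two-message conversation renders under a few of them (a sketch of the expected structure, not the library's exact serializer output):

```python
import json

messages = [
    {"role": "User", "content": "What is 2+2?"},
    {"role": "Chairman", "content": "4"},
]

as_dict = messages                                            # "dict": list of message dicts
as_list = [f"{m['role']}: {m['content']}" for m in messages]  # "list": "role: content" strings
as_string = "\n".join(as_list)                                # "string": one joined block
final = messages[-1]["content"]                               # "final": last message content only
as_json = json.dumps(messages, indent=2)                      # "json": JSON-encoded string
```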
#### Conversation Tracking
All messages are automatically tracked in the conversation object with the following roles:
- **`"User"`**: The original user query
- **`"{member_name}"`**: Each council member's response (e.g., "GPT-5.1-Councilor")
- **`"{member_name}-Evaluation"`**: Each council member's evaluation (e.g., "GPT-5.1-Councilor-Evaluation")
- **`"Chairman"`**: The final synthesized response
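Because roles follow these naming conventions, individual message types can be filtered back out of the history by role. The sketch below uses a hand-written history in the shape produced by `output_type="dict"`:

```python
# Hand-written history following the role conventions above
history = [
    {"role": "User", "content": "query"},
    {"role": "GPT-5.1-Councilor", "content": "response text"},
    {"role": "GPT-5.1-Councilor-Evaluation", "content": "ranking text"},
    {"role": "Chairman", "content": "final answer"},
]

# Evaluations are distinguishable purely by the "-Evaluation" suffix
evaluations = [m for m in history if m["role"].endswith("-Evaluation")]

# The Chairman's synthesis is the last message with the "Chairman" role
final = next(m["content"] for m in reversed(history) if m["role"] == "Chairman")
```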
#### Description
Executes the complete LLM Council workflow:
1. **User Query Tracking**: Adds the user query to the conversation as "User" role
2. **Dispatch Phase**: Sends the query to all council members in parallel using `run_agents_concurrently`
3. **Collection Phase**: Collects all responses, maps them to member names, and adds each to the conversation with the member's name as the role
4. **Anonymization Phase**: Creates anonymous IDs (A, B, C, D, etc.) and shuffles them to ensure anonymity
5. **Evaluation Phase**: Each member evaluates and ranks all anonymized responses using `batched_grid_agent_execution`, then adds evaluations to the conversation with "{member_name}-Evaluation" as the role
6. **Synthesis Phase**: The Chairman agent synthesizes all responses and evaluations into a final comprehensive answer, which is added to the conversation as "Chairman" role
7. **Output Formatting**: Returns the conversation formatted according to the `output_type` setting using `history_output_formatter`
The method provides verbose output by default, showing progress at each stage. All messages are tracked in the `conversation` attribute for later access or export.
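The anonymization phase can be sketched as follows; `anonymize` is a hypothetical helper written for illustration, not a function exported by swarms:

```python
import random
import string


def anonymize(responses, seed=None):
    """Relabel member responses with shuffled letter IDs (A, B, C, ...)."""
    rng = random.Random(seed)
    names = list(responses)
    rng.shuffle(names)  # shuffle so ID order reveals nothing about authorship
    id_to_member = dict(zip(string.ascii_uppercase, names))
    anonymized = {anon_id: responses[name] for anon_id, name in id_to_member.items()}
    return anonymized, id_to_member
```

Reviewers would see only `anonymized`; `id_to_member` is kept aside so the rankings can be de-anonymized before synthesis.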
#### Example Usage
```python
from swarms.structs.llm_council import LLMCouncil
# Create council with default output format (dict)
council = LLMCouncil(verbose=True)
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"
# Run the council - returns formatted conversation based on output_type
result = council.run(query)
# With default "dict" output_type, result is a list of message dictionaries;
# the last message holds the Chairman's final synthesized response
print(result[-1]["content"])
```

For reference, the relevant prompt-construction helper and the start of the class definition, excerpted from the implementation (elided sections are marked with `...`):

```python
# ... (tail of the helper that builds each member's evaluation prompt)
    return f"""You are evaluating responses from your fellow LLM Council members to the following query:
..."""


def get_synthesis_prompt(
    query: str,
    original_responses: Dict[str, str],
    evaluations: Dict[str, str],
    id_to_member: Dict[str, str],
) -> str:
    """
    Create synthesis prompt for the Chairman.
    ...
    Returns:
        Formatted synthesis prompt
    """
    responses_section = "\n\n".join(
        [
            f"=== {name} ===\n{response}"
            for name, response in original_responses.items()
        ]
    )

    evaluations_section = "\n\n".join(
        [
            f"=== Evaluation by {name} ===\n{evaluation}"
            for name, evaluation in evaluations.items()
        ]
    )

    return f"""As the Chairman of the LLM Council, synthesize the following information into a final, comprehensive answer.
..."""


class LLMCouncil:
    def __init__(
        self,
        id: str = swarm_id(),
        name: str = "LLM Council",
        description: str = "A collaborative council of LLM agents where each member independently answers a query, reviews and ranks anonymized peer responses, and a chairman synthesizes the best elements into a final answer.",
        ...
    ):
        ...
```