The `LLMCouncil` class orchestrates multiple specialized LLM agents to collaboratively answer queries through a structured peer review and synthesis process. Inspired by Andrej Karpathy's llm-council implementation, this architecture demonstrates how different models evaluate and rank each other's work, often selecting responses from other models as superior to their own.
The class automatically tracks all agent messages in a `Conversation` object and formats output using `history_output_formatter`, providing flexible output formats including dictionaries, lists, strings, JSON, YAML, and more.
## Workflow Overview
The LLM Council follows a four-step process:

1. **Parallel Responses**: Each council member independently answers the query.
2. **Anonymization**: Responses are mapped to shuffled anonymous IDs (A, B, C, D) so reviewers cannot identify authors.
3. **Peer Review**: Each member evaluates and ranks all anonymized responses.
4. **Synthesis**: The Chairman combines the responses and evaluations into a final answer.
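The four steps above can be sketched as a pure-Python loop. The stub agents and names here are hypothetical stand-ins for real LLM-backed agents, so the control flow runs without any API access:

```python
import random

# Hypothetical stub "agents": plain functions mapping a prompt to a string.
def make_stub_agent(name):
    return lambda prompt: f"{name}'s answer to: {prompt[:40]}"

agents = {n: make_stub_agent(n) for n in ["GPT", "Gemini", "Claude", "Grok"]}

def run_council(query):
    # Step 1: every member answers independently (concurrently in the real class)
    responses = {name: agent(query) for name, agent in agents.items()}

    # Step 2: anonymize responses behind shuffled IDs A, B, C, ...
    names = list(responses)
    random.shuffle(names)
    id_to_member = {chr(ord("A") + i): n for i, n in enumerate(names)}
    anonymized = {i: responses[n] for i, n in id_to_member.items()}

    # Step 3: each member ranks the anonymized responses
    review = "Rank these responses:\n" + "\n".join(
        f"{i}: {r}" for i, r in anonymized.items()
    )
    evaluations = {name: agent(review) for name, agent in agents.items()}

    # Step 4: the chairman synthesizes responses + evaluations into one answer
    chairman = make_stub_agent("Chairman")
    return chairman(f"Synthesize {len(responses)} responses, {len(evaluations)} reviews")

print(run_council("What are the best energy stocks?"))
```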
## Attributes

| Attribute | Type | Description | Default |
|-----------|------|-------------|---------|
| `council_members` | `List[Agent]` | List of Agent instances representing council members | `None` (creates default council) |
| `chairman` | `Agent` | The Chairman agent responsible for synthesizing responses | Created during initialization |
| `conversation` | `Conversation` | Conversation object tracking all messages throughout the workflow | Created during initialization |
| `output_type` | `HistoryOutputType` | Format for the output (e.g., "dict", "list", "string", "json", "yaml") | `"dict"` |
| `verbose` | `bool` | Whether to print progress and intermediate results | `True` |
## Methods
### `__init__`

Initializes the LLM Council with council members and a Chairman agent.

#### Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `id` | `str` | `swarm_id()` | Unique identifier for the council instance. |
| `name` | `str` | `"LLM Council"` | Name of the council instance. |
| `description` | `str` | `"A collaborative council..."` | Description of the council's purpose. |
| `council_members` | `Optional[List[Agent]]` | `None` | List of Agent instances representing council members. If `None`, creates default council with GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4. |
| `chairman_model` | `str` | `"gpt-5.1"` | Model name for the Chairman agent that synthesizes responses. |
| `verbose` | `bool` | `True` | Whether to print progress and intermediate results. |
| `output_type` | `HistoryOutputType` | `"dict"` | Format for the output. Options: "list", "dict", "string", "final", "json", "yaml", "xml", "dict-all-except-first", "str-all-except-first", "dict-final", "list-final". |
#### Returns
| Type | Description |
|------|-------------|
| `LLMCouncil` | An initialized council instance with members, Chairman, and conversation tracking configured. |
#### Description
Creates an LLM Council instance with specialized council members. If no members are provided, it creates a default council consisting of:
- **GPT-5.1-Councilor**: Analytical and comprehensive responses
- **Gemini-3-Pro-Councilor**: Concise and well-processed responses
- **Claude-Sonnet-4.5-Councilor**: Thoughtful and balanced responses
- **Grok-4-Councilor**: Creative and innovative responses
The Chairman agent is automatically created with a specialized prompt for synthesizing responses. A `Conversation` object is also initialized to track all messages throughout the workflow, including user queries, council member responses, evaluations, and the final synthesis.
#### Example Usage
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil

# Create council with default members
council = LLMCouncil(verbose=True)

# Create council with custom members and output format
members = [Agent(agent_name="Researcher-Councilor", model_name="gpt-5.1")]
council = LLMCouncil(council_members=members, output_type="final")
```
### `run`

Executes the full LLM Council workflow: parallel responses, anonymization, peer review, and synthesis. All messages are tracked in the conversation object and formatted according to the `output_type` setting.
#### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `query` | `str` | The user's query or task for the council to address. |

#### Returns
| Type | Description |
|------|-------------|
| `Union[List, Dict, str]` | Formatted output based on `output_type`. The output contains the conversation history with all messages tracked throughout the workflow. |
#### Output Format
The return value depends on the `output_type` parameter set during initialization:

- **`"dict"`** (default): Returns conversation as a dictionary/list of message dictionaries
- **`"list"`**: Returns conversation as a list of formatted strings (`"role: content"`)
- **`"string"`** or **`"str"`**: Returns conversation as a formatted string
- **`"final"`** or **`"last"`**: Returns only the content of the final message (Chairman's response)
- **`"json"`**: Returns conversation as a JSON string
- **`"yaml"`**: Returns conversation as a YAML string
- **`"xml"`**: Returns conversation as an XML string
- **`"dict-all-except-first"`**: Returns all messages except the first as a dictionary
- **`"str-all-except-first"`**: Returns all messages except the first as a string
- **`"dict-final"`**: Returns the final message as a dictionary
- **`"list-final"`**: Returns the final message as a list
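As an illustrative sketch only (not the actual `history_output_formatter` implementation), the mapping from `output_type` to the formats listed above could look like this; the sample history is hypothetical:

```python
import json

# Hypothetical tracked conversation in {"role", "content"} message form.
history = [
    {"role": "User", "content": "What are the best energy stocks?"},
    {"role": "GPT-5.1-Councilor", "content": "An analytical answer."},
    {"role": "Chairman", "content": "Final synthesized answer."},
]

def format_history(history, output_type="dict"):
    if output_type == "dict":
        return history  # list of message dictionaries
    if output_type == "list":
        return [f"{m['role']}: {m['content']}" for m in history]
    if output_type in ("string", "str"):
        return "\n".join(f"{m['role']}: {m['content']}" for m in history)
    if output_type in ("final", "last"):
        return history[-1]["content"]  # Chairman's message only
    if output_type == "json":
        return json.dumps(history, indent=2)
    raise ValueError(f"unsupported output_type: {output_type}")

print(format_history(history, "final"))
```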
#### Conversation Tracking
All messages are automatically tracked in the conversation object with the following roles:
- **`"User"`**: The original user query
- **`"{member_name}"`**: Each council member's response (e.g., "GPT-5.1-Councilor")
- **`"{member_name}-Evaluation"`**: Each council member's evaluation (e.g., "GPT-5.1-Councilor-Evaluation")
- **`"Chairman"`**: The final synthesized response
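Given those role conventions, consuming the tracked conversation in the default `"dict"` format is straightforward. The sample messages below are hypothetical:

```python
# A hypothetical tracked conversation: list of {"role", "content"} messages.
messages = [
    {"role": "User", "content": "query"},
    {"role": "GPT-5.1-Councilor", "content": "response text"},
    {"role": "GPT-5.1-Councilor-Evaluation", "content": "ranking text"},
    {"role": "Chairman", "content": "final answer"},
]

# Council member responses: neither User, Chairman, nor an "-Evaluation" role
responses = {
    m["role"]: m["content"]
    for m in messages
    if m["role"] not in ("User", "Chairman")
    and not m["role"].endswith("-Evaluation")
}

# Evaluations keyed by evaluator name (strip the "-Evaluation" suffix)
evaluations = {
    m["role"].removesuffix("-Evaluation"): m["content"]
    for m in messages
    if m["role"].endswith("-Evaluation")
}

# The final synthesized answer is the Chairman message
final = next(m["content"] for m in messages if m["role"] == "Chairman")
```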
#### Description
Executes the complete LLM Council workflow:
1. **User Query Tracking**: Adds the user query to the conversation as "User" role
2. **Dispatch Phase**: Sends the query to all council members in parallel using `run_agents_concurrently`
3. **Collection Phase**: Collects all responses, maps them to member names, and adds each to the conversation with the member's name as the role
4. **Anonymization Phase**: Creates anonymous IDs (A, B, C, D, etc.) and shuffles them to ensure anonymity
5. **Evaluation Phase**: Each member evaluates and ranks all anonymized responses using `batched_grid_agent_execution`, then adds evaluations to the conversation with "{member_name}-Evaluation" as the role
6. **Synthesis Phase**: The Chairman agent synthesizes all responses and evaluations into a final comprehensive answer, which is added to the conversation as "Chairman" role
7. **Output Formatting**: Returns the conversation formatted according to the `output_type` setting using `history_output_formatter`
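The dispatch and collection phases follow a standard fan-out/fan-in shape. The real class uses swarms' `run_agents_concurrently`; as a rough sketch under that assumption, a plain thread pool shows the same pattern for I/O-bound LLM calls, with hypothetical stub callables in place of agents:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub agents standing in for LLM-backed council members.
agents = {
    "GPT-5.1-Councilor": lambda q: f"analytical take on {q}",
    "Grok-4-Councilor": lambda q: f"creative take on {q}",
}

def dispatch(query):
    # Dispatch phase: submit the query to every member at once
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {name: pool.submit(agent, query) for name, agent in agents.items()}
        # Collection phase: map each result back to its member name
        return {name: fut.result() for name, fut in futures.items()}

results = dispatch("energy stocks")
```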
The method provides verbose output by default, showing progress at each stage. All messages are tracked in the `conversation` attribute for later access or export.
#### Example Usage
```python
from swarms.structs.llm_council import LLMCouncil

# Create council with default output format (dict)
council = LLMCouncil(verbose=True)

query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"

# Run the council - returns formatted conversation based on output_type
result = council.run(query)

# With default "dict" output_type, result is a list of message dictionaries
print(result)
```
### `get_synthesis_prompt`

Helper that builds the Chairman's synthesis prompt from the original responses and peer evaluations (excerpt; the prompt template is truncated here):

```python
def get_synthesis_prompt(
    query: str,
    original_responses: Dict[str, str],
    evaluations: Dict[str, str],
    id_to_member: Dict[str, str],
) -> str:
    """
    Create synthesis prompt for the Chairman.

    Returns:
        Formatted synthesis prompt
    """
    responses_section = "\n\n".join(
        [
            f"=== {name} ===\n{response}"
            for name, response in original_responses.items()
        ]
    )

    evaluations_section = "\n\n".join(
        [
            f"=== Evaluation by {name} ===\n{evaluation}"
            for name, evaluation in evaluations.items()
        ]
    )

    return f"""As the Chairman of the LLM Council, synthesize the following information into a final, comprehensive answer.
..."""
```
Source excerpt of the constructor signature (trailing parameters reconstructed from the parameter table above):

```python
class LLMCouncil:
    def __init__(
        self,
        id: str = swarm_id(),
        name: str = "LLM Council",
        description: str = "A collaborative council of LLM agents where each member independently answers a query, reviews and ranks anonymized peer responses, and a chairman synthesizes the best elements into a final answer.",
        council_members: Optional[List[Agent]] = None,
        chairman_model: str = "gpt-5.1",
        verbose: bool = True,
        output_type: HistoryOutputType = "dict",
    ):
        ...
```