@@ -46,7 +46,7 @@ We have a small gallery of examples to run here, [for more check out the docs to
 ```python
 from swarms.workers import Worker
 from swarms.swarms import MultiAgentDebate, select_speaker
-from langchain.llms import OpenAIChat
+from swarms.models import OpenAIChat
 llm = OpenAIChat(
     model_name='gpt-4',
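The hunk above cuts off at the `llm` constructor. For anyone trying the migrated imports outside this patch, here is a minimal sketch of how the full debate example plausibly continues; only the imports, `model_name='gpt-4'`, and the `for result in results:` loop (visible in the next hunk header) come from the diff itself, while the `Worker` keyword arguments, `temperature`, and `debate.run(...)` are assumptions about the API.

```python
import os

from swarms.workers import Worker
from swarms.swarms import MultiAgentDebate, select_speaker
from swarms.models import OpenAIChat

api_key = os.environ["OPENAI_API_KEY"]

# Constructor arguments other than model_name are assumptions about this API.
llm = OpenAIChat(
    model_name='gpt-4',
    openai_api_key=api_key,
    temperature=0.5,
)

# One Worker per debater, all sharing the same LLM (ai_name is an assumed parameter).
workers = [
    Worker(llm=llm, ai_name="Optimus Prime", openai_api_key=api_key),
    Worker(llm=llm, ai_name="Bumble Bee", openai_api_key=api_key),
    Worker(llm=llm, ai_name="Megatron", openai_api_key=api_key),
]

# select_speaker decides which worker answers each round of the debate.
debate = MultiAgentDebate(workers, select_speaker)

results = debate.run("Compare three approaches to summarizing long documents.")
for result in results:
    print(result)
```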
@@ -107,7 +107,7 @@ for result in results:
 - Then, place the OpenAI API key in the Worker for the OpenAI embedding model
 ```python
-from langchain.llms import ChatOpenAI
+from swarms.models import ChatOpenAI
 from swarms.workers import Worker
 llm = ChatOpenAI(
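This hunk also stops mid-example. A sketch of how it plausibly finishes with the migrated import is below; the `openai_api_key` parameter on `Worker`, the `ai_name` argument, and the `node.run(...)` call are assumptions based on the bullet above, while the imports, `llm = ChatOpenAI(`, and the `print(response)` line (visible in the next hunk header) are taken from the diff.

```python
import os

from swarms.models import ChatOpenAI
from swarms.workers import Worker

api_key = os.environ["OPENAI_API_KEY"]

llm = ChatOpenAI(
    model_name='gpt-4',
    openai_api_key=api_key,  # parameter names here are assumptions
    temperature=0.5,
)

# Per the bullet above, the same key is handed to the Worker so its OpenAI
# embedding model can authenticate (openai_api_key on Worker is assumed).
node = Worker(
    llm=llm,
    ai_name="Research Assistant",
    openai_api_key=api_key,
)

response = node.run("Summarize the key ideas behind multi-agent swarms.")
print(response)
```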
@@ -139,7 +139,7 @@ print(response)
 - OmniModal Agent is an LLM that has access to 10+ multi-modal encoders and diffusers! It can generate images, videos, speech, music, and so much more; get started with:
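The code sample this bullet introduces falls outside the hunk. Here is a minimal sketch of what getting started likely looks like, assuming the agent is exposed as `swarms.agents.OmniModalAgent` with a `run` method; neither name is confirmed by this diff.

```python
from swarms.agents import OmniModalAgent  # import path assumed
from swarms.models import OpenAIChat

llm = OpenAIChat(model_name='gpt-4')

# The agent picks an appropriate encoder/diffuser (image, video, speech,
# music, ...) for the request and returns the generated artifact.
agent = OmniModalAgent(llm)
response = agent.run("Generate a video of a swarm of fish swimming in a coral reef")
print(response)
```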