
Commit bc32b99

Correct a few typos in docs (openai#1128)
1 parent fdbc618 commit bc32b99

1 file changed

docs/running_agents.md

Lines changed: 3 additions & 2 deletions
@@ -16,7 +16,7 @@ async def main():
     print(result.final_output)
     # Code within the code,
     # Functions calling themselves,
-    # Infinite loop's dance.
+    # Infinite loop's dance
 ```

 Read more in the [results guide](results.md).
@@ -40,7 +40,7 @@ The runner then runs a loop:

 ## Streaming

-Streaming allows you to additionally receive streaming events as the LLM runs. Once the stream is done, the [`RunResultStreaming`][agents.result.RunResultStreaming] will contain the complete information about the run, including all the new outputs produces. You can call `.stream_events()` for the streaming events. Read more in the [streaming guide](streaming.md).
+Streaming allows you to additionally receive streaming events as the LLM runs. Once the stream is done, the [`RunResultStreaming`][agents.result.RunResultStreaming] will contain the complete information about the run, including all the new outputs produced. You can call `.stream_events()` for the streaming events. Read more in the [streaming guide](streaming.md).

 ## Run config

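For context on the streaming paragraph edited above, here is a minimal sketch of how a streamed run might be consumed. It assumes the SDK's `Runner.run_streamed()` entry point and an event `type` attribute as described in the streaming guide; neither appears in this diff, so treat it as illustrative rather than the exact documented code.

```python
import asyncio

from agents import Agent, Runner


async def main():
    agent = Agent(name="Assistant", instructions="You are a helpful assistant")

    # Start a streamed run; this returns a RunResultStreaming right away (assumed API).
    result = Runner.run_streamed(agent, "Write a haiku about recursion in programming.")

    # Receive streaming events while the LLM runs.
    async for event in result.stream_events():
        print(event.type)

    # Once the stream is done, the result holds the complete run information.
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```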
@@ -73,6 +73,7 @@ You can manually manage conversation history using the [`RunResultBase.to_input_
 async def main():
     agent = Agent(name="Assistant", instructions="Reply very concisely.")

+    thread_id = "thread_123"  # Example thread ID
     with trace(workflow_name="Conversation", group_id=thread_id):
         # First turn
         result = await Runner.run(agent, "What city is the Golden Gate Bridge in?")
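The hunk above adds an example `thread_id` to the conversation-management snippet. As a rough, self-contained sketch of the multi-turn pattern that section of the docs describes: the second turn, which reuses the first result via `result.to_input_list()`, is inferred from the truncated hunk header and is not part of this diff.

```python
import asyncio

from agents import Agent, Runner, trace


async def main():
    agent = Agent(name="Assistant", instructions="Reply very concisely.")

    thread_id = "thread_123"  # Example thread ID
    with trace(workflow_name="Conversation", group_id=thread_id):
        # First turn
        result = await Runner.run(agent, "What city is the Golden Gate Bridge in?")
        print(result.final_output)

        # Second turn: feed the prior turn's items back in as input (assumed pattern).
        new_input = result.to_input_list() + [
            {"role": "user", "content": "What state is it in?"}
        ]
        result = await Runner.run(agent, new_input)
        print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())
```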