Corrected "continously" to "continuously" in [README.md].
This pull request addresses a minor typo found in the repository. The typo has been corrected to improve clarity and maintain the quality of the documentation.
This change is purely cosmetic and does not affect functionality.
- Apart from small, modular unit tests, it is time to put the code to the test with larger, more comprehensive integration tests.
- An end-to-end test to check whether this code change breaks the normal flow and functioning (a sketch follows below).
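
A minimal sketch of what such an end-to-end test could look like, using Python's `unittest` with a mocked LLM so no real model call is made. The import path `swarms.structs.agent` and the `Agent(llm=..., max_loops=...)` constructor arguments are assumptions about the project layout, not confirmed API.

```python
import unittest
from unittest.mock import MagicMock

from swarms.structs.agent import Agent  # assumed import path


class TestAgentEndToEnd(unittest.TestCase):
    def test_run_completes_normal_flow(self):
        # Mock the LLM so the test never hits a real model endpoint.
        mock_llm = MagicMock(return_value="mocked response")
        agent = Agent(llm=mock_llm, max_loops=1)

        # The end-to-end check: a full run should finish without raising
        # and return some output, i.e. the normal flow is not broken.
        output = agent.run("Say hello")
        self.assertIsNotNone(output)


if __name__ == "__main__":
    unittest.main()
```
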
- Implemented test_agent_output_updating to verify that logging step metadata correctly updates the total token count and that the agent's output steps are properly tracked, confirming only one step is recorded.
- Implemented test_token_counting_integration to verify the correct total token count when using a mocked tokenizer, ensuring that prompt and response token counts are accurately aggregated.
- Implemented test_log_step_metadata_no_long_term_memory to ensure that when long-term memory is None, the memory_usage for long_term is an empty dictionary in the log result.
- Implemented test_log_step_metadata_basic to verify the correct logging of step metadata, including step_id, timestamp, tokens, and memory usage.
- Confirmed that the total token count is accurately logged (see the sketch below).
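
A sketch of how these tests might be structured, assuming log_step_metadata(loop, prompt, response) returns a dict with the step_id, timestamp, tokens, and memory_usage fields described above, and that the tokenizer exposes a count_tokens method; those names and signatures are assumptions, not confirmed API.

```python
import unittest
from unittest.mock import MagicMock

from swarms.structs.agent import Agent  # assumed import path


class TestLogStepMetadata(unittest.TestCase):
    def setUp(self):
        # Minimal agent with a mocked LLM; only what the tests need.
        self.agent = Agent(llm=MagicMock(return_value="ok"), max_loops=1)

    def test_log_step_metadata_basic(self):
        # The logged entry should carry the core fields listed above.
        log_result = self.agent.log_step_metadata(1, "test prompt", "test response")
        for key in ("step_id", "timestamp", "tokens", "memory_usage"):
            self.assertIn(key, log_result)

    def test_log_step_metadata_no_long_term_memory(self):
        # With no long-term memory attached, its usage entry should be empty.
        self.agent.long_term_memory = None
        log_result = self.agent.log_step_metadata(1, "prompt", "response")
        self.assertEqual(log_result["memory_usage"]["long_term"], {})

    def test_token_counting_integration(self):
        # A mocked tokenizer makes the expected total deterministic:
        # 2 prompt tokens + 3 response tokens = 5 in total.
        self.agent.tokenizer = MagicMock()
        self.agent.tokenizer.count_tokens.side_effect = [2, 3]
        log_result = self.agent.log_step_metadata(1, "prompt", "response")
        self.assertEqual(log_result["tokens"]["total"], 5)


if __name__ == "__main__":
    unittest.main()
```
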
- Create a class to execute the modular unit tests.
- Define setUp for the modular test setup.
- The objective is to keep setup minimal, so the tests stay lean and fast to run.
- Since most parameters have set defaults, initializing only the necessary conditions is a valid choice that supports test speed (a minimal setUp sketch follows below).
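
A minimal sketch of the test-class skeleton this describes, keeping setUp to the bare essentials; the Agent constructor arguments shown are assumed parameters rather than a confirmed signature.

```python
import unittest
from unittest.mock import MagicMock

from swarms.structs.agent import Agent  # assumed import path


class TestAgentModular(unittest.TestCase):
    def setUp(self):
        # Keep setup minimal: most parameters already have defaults, so only
        # the pieces the tests actually exercise are initialized here.
        # This keeps the suite lean and fast to run.
        self.mock_llm = MagicMock(return_value="mocked response")
        self.agent = Agent(llm=self.mock_llm, max_loops=1)

    def test_agent_initializes_with_defaults(self):
        # Relying on defaults: constructing the agent with only an LLM
        # should not require any further configuration.
        self.assertIsNotNone(self.agent)


if __name__ == "__main__":
    unittest.main()
```
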
- When the response is a Choice object, this determines how it is handled.
- The current implementation uses a placeholder for llm_output_parser; it will be updated in the next commit (an illustrative placeholder is sketched below).
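
An illustrative placeholder showing how a Choice-style response might be normalized; the attribute checks assume an OpenAI-style Choice object (.message.content for chat, .text for legacy completions) and are not confirmed API, and the whole parser is expected to be replaced in the follow-up commit.

```python
def llm_output_parser(response):
    """Placeholder parser that normalizes an LLM response to a string.

    Sketch only; slated to be replaced in the next commit. The attribute
    names assume an OpenAI-style Choice object and are not confirmed API.
    """
    # Chat-style Choice: the text lives on the nested message object.
    if hasattr(response, "message") and getattr(response.message, "content", None):
        return response.message.content
    # Legacy completion-style Choice: the text is a plain attribute.
    if hasattr(response, "text"):
        return response.text
    # Fallback: pass anything else through as a string.
    return str(response)
```
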