Python: updated packages and reverted default multi-core testing (#8091)
### Motivation and Context

Updated all packages flagged by dependabot and reverted an earlier change that enabled parallel test execution by default; parallelism is now applied only directly in the integration test runs.
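
As a rough sketch of the resulting split (assuming pytest-xdist is installed as a dev dependency and that the flag names below match the versions pinned in this repo): `pyproject.toml` keeps a plain `addopts`, while the integration workflow opts into parallelism explicitly. Since `--dist` is passed twice, the later value should take precedence, leaving `worksteal` as the effective scheduler.

```bash
# Default local run: single process, driven by addopts = "-ra -q -r fEX"
cd python
poetry run pytest ./tests/unit -v

# Integration runs opt into pytest-xdist explicitly (mirrors the CI step):
#   -n logical          one worker per logical CPU core (may need the psutil extra)
#   --dist loadfile     keep tests from the same file on the same worker
#   --dist worksteal    the later --dist appears to override the earlier one,
#                       so idle workers steal queued tests from busy workers
poetry run pytest -n logical --dist loadfile --dist worksteal ./tests/integration ./tests/samples -v --junitxml=pytest.xml
```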

### Description


### Contribution Checklist

<!-- Before submitting this PR, please make sure: -->

- [x] The code builds clean without any errors or warnings
- [x] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [x] All unit tests pass, and I have added new tests where possible
- [x] I didn't break anyone 😄
eavanvalkenburg authored Aug 14, 2024
1 parent 7e2bca7 commit ba9d34f
Showing 8 changed files with 1,880 additions and 1,754 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/python-integration-tests.yml
@@ -134,7 +134,7 @@ jobs:
REDIS_CONNECTION_STRING: ${{ vars.REDIS_CONNECTION_STRING }}
run: |
cd python
-poetry run pytest ./tests/integration ./tests/samples -v --junitxml=pytest.xml
+poetry run pytest -n logical --dist loadfile --dist worksteal ./tests/integration ./tests/samples -v --junitxml=pytest.xml
- name: Surface failing tests
if: always()
uses: pmeier/pytest-results-action@main
@@ -250,8 +250,8 @@
fi
cd python
-poetry run pytest ./tests/integration -v
-poetry run pytest ./tests/samples -v
+poetry run pytest -n logical --dist loadfile --dist worksteal ./tests/integration -v
+poetry run pytest -n logical --dist loadfile --dist worksteal ./tests/samples -v
# This final job is required to satisfy the merge queue. It must only run (or succeed) if no tests failed
python-integration-tests-check:
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -37,7 +37,7 @@ repos:
- id: pyupgrade
args: [--py310-plus]
- repo: https://github.com/astral-sh/ruff-pre-commit
-rev: v0.5.2
+rev: v0.5.7
hooks:
- id: ruff
args: [ --fix, --exit-non-zero-on-fix ]
8 changes: 7 additions & 1 deletion python/.vscode/tasks.json
@@ -153,7 +153,13 @@
"args": [
"run",
"pytest",
-"tests/"
+"tests/",
+"-n",
+"logical",
+"--dist",
+"loadfile",
+"--dist",
+"worksteal"
],
"group": "test",
"presentation": {
3,583 changes: 1,853 additions & 1,730 deletions python/poetry.lock

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions python/pyproject.toml
@@ -65,14 +65,14 @@ motor = { version = "^3.3.2", optional = true }
# notebooks
ipykernel = { version = "^6.21.1", optional = true}
# milvus
-pymilvus = { version = ">=2.3,<2.4.4", optional = true}
+pymilvus = { version = ">=2.3,<2.4.5", optional = true}
milvus = { version = ">=2.3,<2.3.8", markers = 'sys_platform != "win32"', optional = true}
# mistralai
mistralai = { version = "^0.4.1", optional = true}
# ollama
ollama = { version = "^0.2.1", optional = true}
# pinecone
-pinecone-client = { version = ">=3.0.0", optional = true}
+pinecone-client = { version = "^5.0.0", optional = true}
# postgres
psycopg = { version="^3.2.1", extras=["binary","pool"], optional = true}
# qdrant
@@ -141,7 +141,7 @@ transformers = { version = "^4.28.1", extras=['torch']}
sentence-transformers = { version = "^2.2.2"}
torch = {version = "2.2.2"}
# milvus
-pymilvus = ">=2.3,<2.4.4"
+pymilvus = ">=2.3,<2.4.5"
milvus = { version = ">=2.3,<2.3.8", markers = 'sys_platform != "win32"'}
# mistralai
mistralai = "^0.4.1"
@@ -150,7 +150,7 @@ ollama = "^0.2.1"
# mongodb
motor = "^3.3.2"
# pinecone
-pinecone-client = ">=3.0.0"
+pinecone-client = "^5.0.0"
# postgres
psycopg = { version="^3.1.9", extras=["binary","pool"]}
# qdrant
@@ -186,7 +186,7 @@ usearch = ["usearch", "pyarrow"]
weaviate = ["weaviate-client"]

[tool.pytest.ini_options]
-addopts = "-ra -q -r fEX -n logical --dist loadfile --dist worksteal"
+addopts = "-ra -q -r fEX"

[tool.ruff]
line-length = 120
15 changes: 6 additions & 9 deletions python/semantic_kernel/agents/open_ai/open_ai_assistant_base.py
@@ -10,7 +10,6 @@
from openai.resources.beta.assistants import Assistant
from openai.resources.beta.threads.messages import Message
from openai.resources.beta.threads.runs.runs import Run
-from openai.types.beta import AssistantResponseFormat
from openai.types.beta.assistant_tool import CodeInterpreterTool, FileSearchTool
from openai.types.beta.threads.image_file_content_block import ImageFileContentBlock
from openai.types.beta.threads.runs import RunStep
@@ -317,8 +316,8 @@ def _create_open_ai_assistant_definition(cls, assistant: "Assistant") -> dict[st

enable_json_response = (
hasattr(assistant, "response_format")
-and isinstance(assistant.response_format, AssistantResponseFormat)
-and assistant.response_format.type == "json_object"
+and assistant.response_format is not None
+and getattr(assistant.response_format, "type", "") == "json_object"
)

enable_code_interpreter = any(isinstance(tool, CodeInterpreterTool) for tool in assistant.tools)
@@ -1012,12 +1011,10 @@ def _format_tool_outputs(self, chat_history: ChatHistory) -> list[dict[str, str]
tool_outputs = []
for tool_call in chat_history.messages[0].items:
if isinstance(tool_call, FunctionResultContent):
-tool_outputs.append(
-    {
-        "tool_call_id": tool_call.id,
-        "output": tool_call.result,
-    }
-)
+tool_outputs.append({
+    "tool_call_id": tool_call.id,
+    "output": tool_call.result,
+})
return tool_outputs

# endregion
4 changes: 2 additions & 2 deletions python/semantic_kernel/connectors/memory/pinecone/pinecone_memory_store.py
@@ -4,7 +4,7 @@
from typing import NamedTuple

from numpy import ndarray
-from pinecone import FetchResponse, IndexDescription, IndexList, Pinecone, ServerlessSpec
+from pinecone import FetchResponse, IndexList, IndexModel, Pinecone, ServerlessSpec
from pydantic import ValidationError

from semantic_kernel.connectors.memory.pinecone.pinecone_settings import PineconeSettings
@@ -111,7 +111,7 @@ async def create_collection(
)
self.collection_names_cache.add(collection_name)

-async def describe_collection(self, collection_name: str) -> IndexDescription | None:
+async def describe_collection(self, collection_name: str) -> IndexModel | None:
"""Gets the description of the index.
Args:
6 changes: 3 additions & 3 deletions python/tests/unit/agents/test_open_ai_assistant_base.py
@@ -8,7 +8,6 @@
from openai import AsyncAzureOpenAI, AsyncOpenAI
from openai.resources.beta.threads.runs.runs import Run
from openai.types.beta.assistant import Assistant, ToolResources, ToolResourcesCodeInterpreter, ToolResourcesFileSearch
-from openai.types.beta.assistant_response_format import AssistantResponseFormat
from openai.types.beta.assistant_tool import CodeInterpreterTool, FileSearchTool
from openai.types.beta.threads.annotation import FileCitationAnnotation, FilePathAnnotation
from openai.types.beta.threads.file_citation_annotation import FileCitation
@@ -33,6 +32,7 @@
from openai.types.beta.threads.runs.tool_calls_step_details import ToolCallsStepDetails
from openai.types.beta.threads.text import Text
from openai.types.beta.threads.text_content_block import TextContentBlock
+from openai.types.shared.response_format_json_object import ResponseFormatJSONObject

from semantic_kernel.agents.open_ai.azure_assistant_agent import AzureAssistantAgent
from semantic_kernel.contents.annotation_content import AnnotationContent
@@ -366,7 +366,7 @@ async def test_create_assistant(
assert assistant.tools == [CodeInterpreterTool(type="code_interpreter"), FileSearchTool(type="file_search")]
assert assistant.temperature == 0.7
assert assistant.top_p == 0.9
-assert assistant.response_format == AssistantResponseFormat(type="json_object")
+assert assistant.response_format == ResponseFormatJSONObject(type="json_object")
assert assistant.tool_resources == ToolResources(
code_interpreter=ToolResourcesCodeInterpreter(file_ids=["file1", "file2"]),
file_search=ToolResourcesFileSearch(vector_store_ids=["vector_store1"]),
@@ -403,7 +403,7 @@ async def test_create_assistant_with_model_attributes(
assert assistant.tools == [CodeInterpreterTool(type="code_interpreter"), FileSearchTool(type="file_search")]
assert assistant.temperature == 0.7
assert assistant.top_p == 0.9
-assert assistant.response_format == AssistantResponseFormat(type="json_object")
+assert assistant.response_format == ResponseFormatJSONObject(type="json_object")
assert assistant.tool_resources == ToolResources(
code_interpreter=ToolResourcesCodeInterpreter(file_ids=["file1", "file2"]),
file_search=ToolResourcesFileSearch(vector_store_ids=["vector_store1"]),
