- Added detailed logging for various failure cases in the `infer_response_model_from_ast` function to aid debugging.
- Modified the logic to collect all return statements, ensuring that inference is skipped if multiple returns are detected.
- Introduced a new test case to verify that functions with multiple return statements correctly return None for inference.
- Simplified the `_infer_type_from_ast` function by consolidating conditional checks for argument annotations, enhancing readability and maintainability.
- Updated test cases in `tests/test_ast_inference.py` to utilize parameterization for better organization and coverage of edge cases in response model inference.
- Updated the `infer_response_model_from_ast` function to use `textwrap.dedent` for cleaner source code handling.
- Added multiple test cases in `tests/test_ast_inference.py` to cover various edge cases for response model inference, including functions with different return types and argument annotations.
- Improved type inference for functions returning lists and nested dictionaries, ensuring better schema generation.
- Introduced a new helper function `_contains_response` to check for response types in return annotations, improving the inference logic in `APIRoute`.
- Updated the `infer_response_model_from_ast` function to prevent model creation when all fields are of type `Any`, ensuring better type information and avoiding unnecessary overrides.
- Updated the response model inference in APIRoute to check for return annotations before inferring the model from the endpoint's source code.
- Added type ignore comments in utils for better type checking compatibility.
- Specified the type of nodes_to_visit in the infer_response_model_from_ast function for improved clarity.
- Removed unnecessary blank lines and improved formatting for better readability in `fastapi/utils.py`.
- Consolidated return statements in test cases to a single line for consistency in `tests/test_ast_inference.py`.
- Enhanced the structure of dictionary returns in endpoint functions for clarity.
- Updated the response model inference to handle cases where the model is None or not a subclass of BaseModel or a dataclass.
- Enhanced the logic to infer the response model from the endpoint function's source code when necessary, improving schema generation accuracy.
- Introduced two new endpoints: "/db/direct_return" and "/db/dict_construction" to simulate database interactions.
- Added a FakeDB class to mock database behavior for testing purposes.
- Enhanced test cases to validate the OpenAPI schema for the new endpoints, ensuring correct response structure and types.
- Updated the response model inference logic in `APIRoute` to check for `None` before validating against `BaseModel`.
- Refined the `infer_response_model_from_ast` function to handle nested function definitions and invalid dictionary keys, ensuring robust schema generation.
- Added new test cases for edge scenarios to validate the inference behavior.
- Added `infer_response_model_from_ast` function to analyze endpoint functions and infer Pydantic models from returned dictionary literals or variable assignments.
- Updated `APIRoute` to utilize the new inference method when the specified response model is not a subclass of `BaseModel`.
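The changes above can be sketched roughly as follows. The names `infer_response_model_from_ast`, `_infer_type_from_ast`, and `nodes_to_visit` mirror the identifiers mentioned in this changelog, but the body is an illustrative approximation of the technique (parse the dedented source, collect returns at the function's own level, bail out on multiple returns or all-`Any` fields, build a model with `create_model`), not the merged FastAPI code; `infer_response_model_from_source` is a hypothetical split added here for testability.

```python
import ast
import inspect
import textwrap
from typing import Any, List, Optional, Type

from pydantic import BaseModel, create_model


def _infer_type_from_ast(node: ast.expr) -> Any:
    """Map a simple AST value node to a Python type; fall back to Any."""
    if isinstance(node, ast.Constant) and node.value is not None:
        return type(node.value)
    if isinstance(node, ast.List):
        return list
    if isinstance(node, ast.Dict):
        return dict
    return Any


def _collect_returns(func_def: ast.AST) -> List[ast.Return]:
    """Collect return statements at this function's own level only."""
    returns: List[ast.Return] = []
    nodes_to_visit: List[ast.AST] = list(ast.iter_child_nodes(func_def))
    while nodes_to_visit:
        node = nodes_to_visit.pop()
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue  # do not descend into nested function definitions
        if isinstance(node, ast.Return) and node.value is not None:
            returns.append(node)
        nodes_to_visit.extend(ast.iter_child_nodes(node))
    return returns


def infer_response_model_from_source(source: str) -> Optional[Type[BaseModel]]:
    """Infer a Pydantic model from a function that returns a dict literal."""
    try:
        tree = ast.parse(textwrap.dedent(source))
    except SyntaxError:
        return None
    func_def = tree.body[0]
    if not isinstance(func_def, (ast.FunctionDef, ast.AsyncFunctionDef)):
        return None
    returns = _collect_returns(func_def)
    if len(returns) != 1:
        return None  # zero or multiple returns: skip inference
    value = returns[0].value
    if not isinstance(value, ast.Dict):
        return None
    fields = {}
    for key, val in zip(value.keys, value.values):
        if not (isinstance(key, ast.Constant) and isinstance(key.value, str)):
            return None  # non-literal or non-string key: bail out
        fields[key.value] = (_infer_type_from_ast(val), ...)
    if not fields or all(tp is Any for tp, _ in fields.values()):
        return None  # all fields are Any: no useful type information
    return create_model("InferredResponseModel", **fields)


def infer_response_model_from_ast(func) -> Optional[Type[BaseModel]]:
    try:
        source = inspect.getsource(func)
    except (OSError, TypeError):
        return None
    return infer_response_model_from_source(source)
```

Returning `None` rather than a degenerate model lets the caller fall back to FastAPI's normal behavior, which matches the intent of the "skip inference on multiple returns" and "all fields `Any`" changes above.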
* Sync with #14217
* Sync with #14359
* Sync with #13786
* Sync with #14070
* Sync with #14120
* Sync with #14211
* Sync with #14405
* "to deploy" -> "deployen"
The LLM used that translation a lot, which convinced me that "deployen" is the better word. "bereitstellen" (or "ausliefern") is still used for "to serve".
---------
Co-authored-by: Motov Yurii <109919500+YuriiMotov@users.noreply.github.com>
Co-authored-by: Yurii Motov <yurii.motov.monte@gmail.com>