move copilot instructions to AGENTS.md (#18259)
* move copilot --> agents.md
* agents: add disclose AI usage
* refine
This commit is contained in:
parent fd05c51cec
commit 5e25ddebff
@@ -1,4 +1,4 @@
-# Copilot Instructions for llama.cpp
+# Instructions for llama.cpp
 
 ## Repository Overview
 
@@ -12,6 +12,24 @@ llama.cpp is a large-scale C/C++ project for efficient LLM (Large Language Model
 - **Backends supported**: CPU (AVX/NEON/RVV optimized), CUDA, Metal, Vulkan, SYCL, ROCm, MUSA
 - **License**: MIT
 
+## Disclose AI Usage
+
+It is crucial to remind contributors that the project mandates disclosure of any AI usage in pull requests. This requirement stems from the potential for AI-generated code to include suboptimal optimizations and hidden bugs, owing to the inherent overconfidence in AI outputs.
+
+When generating significant portions of code, address this by:
+
+- Informing the user that AI-generated content may be rejected by maintainers.
+- Clearly marking AI-generated code in commit messages and comments.
+  - Example of commit message: `[AI] Fix a race condition in ...`
+  - Example of code comment: `// [AI] spawn a new thread ...`
+
+These measures apply to:
+
+- Changes resulting in large portions of code or complex logic.
+- Modifications or additions to public APIs in `llama.h`, `ggml.h`, or `mtmd.h`.
+- Backend-related changes, such as those involving CPU, CUDA, Metal, Vulkan, etc.
+- Modifications to `tools/server`.
+
+Note: These measures can be omitted for small fixes or trivial changes.
+
 ## Build Instructions
 
 ### Prerequisites
@@ -251,6 +269,7 @@ Primary tools:
 - **Cross-platform compatibility**: Test on Linux, macOS, Windows when possible
 - **Performance focus**: This is a performance-critical inference library
 - **API stability**: Changes to `include/llama.h` require careful consideration
+- **Disclose AI Usage**: Refer to the "Disclose AI Usage" section earlier in this document
 
 ### Git Workflow
 
 - Always create feature branches from `master`
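The workflow this commit documents, branching from `master` and tagging AI-assisted commits with an `[AI]` prefix, can be sketched as a hypothetical shell session. The repository (a throwaway temp dir), branch name, and commit subject below are illustrative, not taken from the source:

```shell
# Hypothetical sketch: create a feature branch from master, then
# disclose AI assistance in the commit subject with an [AI] tag.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master                      # make sure the base branch is named master
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "init"
git checkout -q -b fix-race-condition master   # feature branch created from master
git -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "[AI] Fix a race condition in the server queue"
git log -1 --format=%s                         # subject line carries the [AI] disclosure tag
```

A real contribution would of course carry actual changes instead of `--allow-empty` commits; the point is only where the branch starts and how the disclosure tag appears in the history.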