Refactors MCP server enabling logic to remove the dependency on global settings.
This simplifies the logic by checking the per-chat override status directly and removes the need to pass the global enabled state as a parameter.
Additionally:
- The selector only shows MCP servers that are enabled in settings.
- Sorts the servers by whether they are enabled for the current chat.
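A minimal sketch of the simplified check and selector ordering, using hypothetical names (McpServer, enabledInSettings, ChatOverrides; the actual webui stores differ) and assuming servers enabled for the current chat sort first:

```ts
// Hypothetical types for illustration; the real webui state differs.
interface McpServer {
  id: string;
  enabledInSettings: boolean;
}

// Per-chat override map: server id -> enabled for this chat.
type ChatOverrides = Map<string, boolean>;

// Enabled iff the per-chat override says so; no global flag parameter needed.
function isEnabledForChat(serverId: string, overrides: ChatOverrides): boolean {
  return overrides.get(serverId) ?? false;
}

// Selector contents: only servers enabled in settings, with servers
// enabled for the current chat sorted first.
function selectorServers(servers: McpServer[], overrides: ChatOverrides): McpServer[] {
  return servers
    .filter((s) => s.enabledInSettings)
    .sort(
      (a, b) =>
        Number(isEnabledForChat(b.id, overrides)) -
        Number(isEnabledForChat(a.id, overrides)),
    );
}
```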
Remove all frontend validation logic that prevented users from selecting models based on multimodal capabilities. This refactoring removes restrictive UI code while maintaining full functionality. Rationale:
- Vision models can describe images as text
- That text remains useful for non-vision models
- Chaining vision -> non-vision is a valid workflow
- Users know their use case better than the UI
- Users can return to vision models when needed
The LLM can reference tool-generated images using markdown image links; the plugin resolves attachment names to base64 data from message.extra when present, while regular HTTP/data URLs pass through unchanged (no regression).
- rehypeResolveAttachmentImages plugin in markdown pipeline
- Pass message prop to MarkdownContent and AgenticContent
- Force processor reactivity on message.extra changes
- Filter assistant images from API context (display-only)
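A rough sketch of what such a plugin could look like, assuming a hypothetical Attachment shape for the entries kept on message.extra (the actual field names in the webui may differ):

```ts
import { visit } from 'unist-util-visit';
import type { Root, Element } from 'hast';

// Hypothetical shape of tool-generated attachments on message.extra;
// the real field names in the webui may differ.
interface Attachment {
  name: string;
  base64: string;
  mimeType: string;
}

// Rewrites <img src="attachment-name"> nodes to inline data URLs when the
// name matches an attachment; regular HTTP/data URLs pass through unchanged.
export function rehypeResolveAttachmentImages(attachments: Attachment[]) {
  return (tree: Root) => {
    visit(tree, 'element', (node: Element) => {
      if (node.tagName !== 'img' || !node.properties) return;
      const src = node.properties.src;
      if (typeof src !== 'string') return;
      // Leave normal URLs alone (no regression for http/https/data links).
      if (/^(?:https?:|data:)/.test(src)) return;
      const match = attachments.find((a) => a.name === src);
      if (match) {
        node.properties.src = `data:${match.mimeType};base64,${match.base64}`;
      }
    });
  };
}
```

Passing the message down and rebuilding the processor on message.extra changes keeps the resolved attachments in sync with the rendered markdown.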
Haiku OS does not support RLIMIT_MEMLOCK, similar to visionOS/tvOS.
Skip the resource limit check on Haiku so that the code compiles and mlock remains functional.
Tested on Haiku with NVIDIA RTX 3080 Ti using Vulkan backend.
* ci, tests : use cmake to download models and remove libcurl dependency
* llama_dl_model -> llama_download_model
* use EXPECTED_HASH for robust model downloading
* Move llama_download_model to cmake/common.cmake
Signed-off-by: Adrien Gallouët <angt@huggingface.co>
This commit removes the `-c, --ctx-size N` option from the llama-server command in the model card template for causal models.
The motivation for this is that -c 0 is the default and specifying it
is redundant.
* fix: Remove unnecessary `h` loops where `h` was only ever 0
Branch: CleanUpT5InputBuilders
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
* fix: Remove unnecessary padding loop that is never hit anymore
The loop's upper bound used to be GGML_PAD(n_tokens, GGML_KQ_MASK_PAD), but that padding was removed in https://github.com/ggml-org/llama.cpp/pull/17910, leaving the loop dead.
Branch: CleanUpT5InputBuilders
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
---------
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>