Mirror of https://git.adityakumar.xyz/llama.cpp.git, synced 2025-02-22 15:40:02 +00:00
readme : update hot topics
This commit is contained in:
parent 55390bcaf2
commit 7f15c5c477

1 changed file with 1 addition and 3 deletions
@@ -9,10 +9,8 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
+- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)
 - [New quantization methods](https://github.com/ggerganov/llama.cpp#quantization)
-- [Added LoRA support](https://github.com/ggerganov/llama.cpp/pull/820)
-- [Add GPU support to ggml](https://github.com/ggerganov/llama.cpp/discussions/915)
-- [Roadmap Apr 2023](https://github.com/ggerganov/llama.cpp/discussions/784)
 
 ## Description