State of the Art Natural Language Processing
updated at May 11, 2024, 9:33 p.m.
scikit-learn: machine learning in Python
updated at May 12, 2024, 2:16 a.m.
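A minimal sketch of the kind of workflow scikit-learn supports: fitting an estimator on a bundled toy dataset and scoring it on a held-out split. The choice of the iris dataset and a logistic-regression classifier here is illustrative only, not something the entry above prescribes.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a test split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Fit a simple classifier; max_iter is raised so the lbfgs solver converges.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Score on the held-out data (mean accuracy).
accuracy = clf.score(X_test, y_test)
print(accuracy)
```

The same fit/predict/score pattern applies across scikit-learn's estimators, which is what makes it easy to swap models in and out of a pipeline.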
Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max). A PyTorch LLM library that integrates seamlessly with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.
updated at May 12, 2024, 3:48 a.m.