Accelerate local LLM inference and fine-tuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.
updated at May 19, 2024, 3:24 p.m.
scikit-learn: machine learning in Python
updated at May 19, 2024, 11:28 a.m.
State of the Art Natural Language Processing
updated at May 19, 2024, 4:15 a.m.
An implementation of DBSCAN running on top of Apache Spark
updated at May 18, 2024, 7:22 a.m.
Base classes to use when writing tests with Spark
updated at May 17, 2024, 4:55 p.m.
Apache Spark testing helpers (dependency free & works with Scalatest, uTest, and MUnit)
updated at May 17, 2024, 4:50 p.m.
Scientific workflow engine designed for simplicity & scalability. Trivially transition from one-off use cases to massive-scale production environments
updated at May 16, 2024, 2:33 p.m.