Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPUs and GPUs (e.g., a local PC with an iGPU, or discrete GPUs such as Arc, Flex, and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.
updated at May 26, 2024, 1:23 p.m.
State of the Art Natural Language Processing
updated at May 26, 2024, 8:02 a.m.
scikit-learn: machine learning in Python
updated at May 25, 2024, 8:59 p.m.
Apache Livy is an open-source REST interface for interacting with Apache Spark from anywhere.
updated at May 25, 2024, 7:50 p.m.
Base classes to use when writing tests with Spark
updated at May 25, 2024, 7:11 p.m.
Jupyter magics and kernels for working with remote Spark clusters
updated at May 25, 2024, 2:45 p.m.
Scientific workflow engine designed for simplicity and scalability. Trivially transition from one-off use cases to massive-scale production environments.
updated at May 25, 2024, 9:18 a.m.
Sparkling Water provides H2O functionality inside a Spark cluster.
updated at May 24, 2024, 2:50 p.m.