Running my own LLM
Instead of just consuming OpenAI's API, we can look into how to run and play with a real LLM on our own computer. That will help us learn much faster, especially if we're dipping our toes into AI/ML. Also check out the Petals tool in the tools section of this issue.
Optimizing LLMs From a Dataset Perspective
This article focuses on improving the modeling performance of LLMs by finetuning them with carefully curated datasets. Specifically, it highlights strategies that involve modifying, utilizing, or manipulating datasets for instruction-based finetuning, rather than altering the model architecture or training algorithms. It also explains how you can prepare your own datasets to finetune open-source LLMs.
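To make the dataset-preparation step concrete, here is a minimal sketch (in Ruby, for illustration; the article itself is not tied to any language). Each record pairs an instruction, optional input context, and the desired response, and is rendered into a single prompt string before training. The Alpaca-style template below is a common convention, not something prescribed by the article, so adjust it to whatever your training framework expects.

```ruby
# Render one instruction record into a finetuning prompt string.
# The Alpaca-style template is an assumption; swap in your own format.
def format_example(record)
  if record[:input] && !record[:input].empty?
    <<~PROMPT
      Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

      ### Instruction:
      #{record[:instruction]}

      ### Input:
      #{record[:input]}

      ### Response:
      #{record[:output]}
    PROMPT
  else
    <<~PROMPT
      Below is an instruction that describes a task. Write a response that appropriately completes the request.

      ### Instruction:
      #{record[:instruction]}

      ### Response:
      #{record[:output]}
    PROMPT
  end
end

# A tiny hypothetical dataset: two curated instruction/response pairs.
dataset = [
  { instruction: "Translate to French.", input: "Good morning.", output: "Bonjour." },
  { instruction: "Name a prime number below 10.", input: "", output: "7" }
]

prompts = dataset.map { |r| format_example(r) }
```

The key design point is that curation happens at the record level (choosing, filtering, and rewriting instruction/response pairs), while the template is applied uniformly at the end.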
Code to read
Ruby bindings for Rust. Write Ruby extension gems in Rust, or call Ruby from Rust.
Python has a super cool feature: put
@name on top of a function, and name is called with your function, wrapping it. Now we can do something similar in Ruby, implemented entirely as a library at the Ruby level: no new keywords, just pure Ruby syntax.
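To see how such a library can be pure Ruby, here is a minimal sketch (not the gem's actual API) using the `Module#method_added` hook: a pending wrapper is recorded, and when the next method is defined it gets redefined to run inside that wrapper.

```ruby
# A decorator-like mechanism in pure Ruby, sketched via method_added.
module Decorators
  # Record a wrapper lambda to apply to the next defined method.
  def decorate(wrapper)
    @pending_decorator = wrapper
  end

  # Fires whenever an instance method is defined on the extending class.
  def method_added(name)
    return unless @pending_decorator
    wrapper = @pending_decorator
    @pending_decorator = nil # clear first, so redefining below doesn't recurse
    original = instance_method(name)
    define_method(name) do |*args, &blk|
      wrapper.call { original.bind(self).call(*args, &blk) }
    end
  end
end

# Usage: wrap a method's result, decorator-style.
class Greeter
  extend Decorators

  decorate ->(&original) { "<<#{original.call}>>" }
  def hello
    "hi"
  end
end

Greeter.new.hello # => "<<hi>>"
```

The real gem will differ in surface syntax, but the underlying trick is the same: hooks and method redefinition, no parser changes needed.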
A simple CLI tool for making tunnels to localhost.
Run large language models at home, BitTorrent‑style
An embedded OLAP SQL engine powered by ClickHouse.
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device.
A chat interface crafted with llama.cpp for running Alpaca models. No API keys, entirely self-hosted!
WireGuard® automation: a quick way to set up a VPN, with an admin UI to manage users.
Another WireGuard-based VPN automation tool, with SSO/MFA. WireGuard is very performant and easy to set up, but there isn't an easy out-of-the-box solution for managing users. Therefore we need tools like Netmaker or Netbird.