Happy Tuesday, everyone. We're back with another issue and a lot of exciting articles. This issue is a bit shorter and lacks the usual summaries, but we hope you still enjoy it.
Instead of just consuming OpenAI's API, we can look into how to run and play with a real LLM on our own computers. That helps us learn much faster, especially if we're just dipping our toes into AI/ML. Also check out the Petals tool in the tools section of this issue.
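To give a feel for how little code this takes, here is a minimal sketch of running a model locally, assuming the llama-cpp-python bindings are installed and a quantized GGUF model file has already been downloaded (the model path below is hypothetical):

from llama_cpp import Llama

# Load a local, quantized model file; nothing leaves your machine.
# (Hypothetical path; use whatever GGUF file you downloaded.)
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

# Ask a question and print the completion.
output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"])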
This article focuses on improving the modeling performance of LLMs by finetuning them using carefully curated datasets. Specifically, this article highlights strategies that involve modifying, utilizing, or manipulating the datasets for instruction-based finetuning rather than altering the model architecture or training algorithms. This article will also explain how you can prepare your own datasets to finetune open-source LLMs.
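As a rough illustration of what "preparing your own dataset" can mean in practice, here is a small sketch that turns instruction/response pairs into prompt-completion records using an Alpaca-style template. The template and field names are a common convention, not necessarily the exact format the article uses:

import json

# A widely used instruction-tuning prompt template (assumption, for illustration).
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

# Hypothetical raw examples you curated yourself.
raw_examples = [
    {"instruction": "Summarize the article in one sentence.",
     "response": "The article explains dataset-centric finetuning of LLMs."},
]

# Write one JSON record per line, ready to feed into a finetuning script.
with open("train.jsonl", "w") as f:
    for ex in raw_examples:
        record = {
            "prompt": PROMPT_TEMPLATE.format(instruction=ex["instruction"]),
            "completion": ex["response"],
        }
        f.write(json.dumps(record) + "\n")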
This sounds so easy: just grab the raw YouTube video and download it. But it turns out YouTube throttles downloads, so how do those download scripts bypass it?
Python has a super cool feature where you add @name on top of a function, and name is called with your function and wraps it. This article shows how to do something similar in Ruby, implemented entirely as a library at the Ruby level: no new keywords, just pure Ruby syntax. A small Python example of the feature being mimicked is sketched below.
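For reference, this is what the Python side looks like. The timed/greet names are just for illustration; the linked Ruby library recreates this wrapping behaviour as a plain library:

import functools
import time

def timed(func):
    # timed receives the decorated function and returns a wrapper around it.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__} took {time.perf_counter() - start:.4f}s")
        return result
    return wrapper

@timed  # equivalent to: greet = timed(greet)
def greet(name):
    return f"Hello, {name}!"

print(greet("world"))  # prints the timing line, then "Hello, world!"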
Minimal and opinionated eBPF tooling for the Rust ecosystem. Read the introduction post to get started.
Generative model programming in Rust.
An embedded OLAP SQL engine powered by ClickHouse.
A self-hosted, offline, ChatGPT-like chatbot. Powered by Llama 2. 100% private, with no data leaving your device.
A chat interface crafted with llama.cpp for running Alpaca models. No API keys, entirely self-hosted!
WireGuard® automation: a quick way to set up a VPN, with an admin UI to manage users.
Another WireGuard-based VPN automation tool, with SSO/MFA. WireGuard is very performant and easy to set up, but there isn't an easy way to manage users out of the box, which is why we need tools like Netmaker or Netbird.