Show HN: Control your X/Twitter feed using a small on-device LLM
Source: Hacker News
A developer on Hacker News has released an open-source tool that lets users shape their X (formerly Twitter) timeline with a tiny language model that runs entirely on a personal device. The project, posted under "Show HN: Control your X/Twitter feed using a small on-device LLM," bundles a lightweight inference engine, typically built on llama.cpp or a similar runtime, with a script that intercepts X API responses, parses each tweet, and applies user-defined prompts to keep, hide, or re-rank content. Because the model never leaves the user's hardware, the feed-filtering logic operates without sending any tweet data to cloud services.
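The keep/hide flow described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the prompt template, labels, and `Tweet` structure are assumptions, and `stub_classify` stands in for a real on-device model call (e.g. through llama.cpp bindings).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tweet:
    id: str
    text: str

# Hypothetical user-defined instruction; the real tool lets users supply their own.
USER_PROMPT = "Hide promotional or engagement-bait tweets; keep everything else."

def build_prompt(instruction: str, tweet_text: str) -> str:
    """Wrap the user's instruction and one tweet into a classification prompt."""
    return (
        f"Instruction: {instruction}\n"
        f"Tweet: {tweet_text}\n"
        "Answer with exactly one word, KEEP or HIDE:"
    )

def filter_feed(tweets: List[Tweet],
                classify: Callable[[str], str],
                instruction: str = USER_PROMPT) -> List[Tweet]:
    """Keep only tweets the local model labels KEEP; original order is preserved."""
    kept = []
    for tweet in tweets:
        label = classify(build_prompt(instruction, tweet.text)).strip().upper()
        if label.startswith("KEEP"):
            kept.append(tweet)
    return kept

# Stub standing in for an on-device LLM completion call.
def stub_classify(prompt: str) -> str:
    return "HIDE" if "giveaway" in prompt.lower() else "KEEP"

feed = [Tweet("1", "New paper on quantised LLM inference"),
        Tweet("2", "RT to enter our GIVEAWAY!!!")]
print([t.id for t in filter_feed(feed, stub_classify)])  # → ['1']
```

In practice the classifier callable would run one short completion per tweet against the local model, which is why a small, quantised model matters: latency per tweet has to stay low enough to filter a timeline in real time.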
The move matters for two reasons. First, it offers a privacy-preserving alternative to the cloud-based AI filters that dominate today's social-media ecosystems, addressing growing concerns about data harvesting and algorithmic opacity. Second, it demonstrates that modern quantised LLMs can run on modest CPUs or even smartphones, expanding the range of consumer-grade AI applications beyond chatbots and code assistants. The timing is notable: just days earlier we reported on Mozilla's "Scan any LLM chatbot for vulnerabilities," highlighting the security risks of third-party AI services, and on Vercel's Claude plugin that silently reads prompts, underscoring why demand for on-device processing is growing.
What to watch next is whether the approach gains traction beyond hobbyists. Developers may integrate the filter into third-party X clients, or the model could be fine-tuned for niche moderation tasks such as political-bias reduction or spam suppression. Regulators in the EU and Nordic countries are already probing algorithmic transparency, so a locally run solution could become a template for compliant feed curation. Finally, improvements in quantisation and hardware acceleration could shrink the model further, making real-time, on-device moderation a realistic feature for mainstream mobile browsers within months.