
Running LLaMA models locally in GNU Emacs

I've posted my Emacs config here, and I use ollama, a model server written in Go, to pull and serve the Zephyr 7B model. I then interact with it through ellama, which even formats the chat dialogue as org-mode.
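For reference, the setup amounts to roughly the following. The ollama commands are its standard CLI (it serves an HTTP API, on port 11434 by default), and the Emacs Lisp follows ellama's documented pattern of pointing ellama-provider at an ollama model through the llm package. Take it as a sketch of my setup rather than a complete config:

    # pull the model, then start the local server
    ollama pull zephyr
    ollama serve

    ;; Minimal ellama setup, assuming the ellama and llm packages
    ;; are installed (e.g. from ELPA/MELPA).
    (require 'ellama)
    (require 'llm-ollama)
    ;; setopt needs Emacs 29+; plain setq also works here.
    (setopt ellama-provider
            (make-llm-ollama :chat-model "zephyr"
                             :embedding-model "zephyr"))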
This workflow makes it very easy to get into a flow state. tab-bar-mode gives me tabs across the top, so I can stay entirely in Emacs while I work through various parts of an Emacs Lisp or Clojure program. I'll start working with other languages too, but the integration of a local LLM is a game-changer. I can't overstate how flow-state-inducing it is.
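The two Emacs pieces mentioned above come down to a couple of lines; ellama-chat is ellama's interactive chat command, and the keybinding is just one I happen to like:

    ;; Tabs across the top of the frame.
    (tab-bar-mode 1)
    ;; Start an org-mode chat session with the local model.
    (global-set-key (kbd "C-c e c") #'ellama-chat)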


Programming will never be the same: you can have a little intelligence to chat with in your editor, a hyper-rubber-duck for all your programming questions. It's frequently wrong or incomplete, but the feedback loop between generation, editing, and execution is so tight that it makes for a really nice workflow.

#programming #AI #emacs #Go