Learning with LLMs
One of the best uses I personally have found for LLMs is learning. I love to read and learn new things, and having something to bounce ideas and thoughts off of is amazingly helpful to me.
As an example, I'm reading through a couple of books, Introduction to Algorithms (CLRS) and Fluent Python (yay public library!), using various tools to help me understand the material better. I haven't finished Fluent Python yet, but I highly recommend it.

Before all this you had books, forums and blogs, and people you knew who were versed in the topic; Stack Overflow and bloggers were your go-to. Now we have these new tools as well. NotebookLM has some really great features for studying a particular book or paper. I use ChatGPT, Claude, Gemini, etc. to check code, not in the "do this for me" sense, more "here is my goal and plan, here is my code, what do you think?" I also find it helpful to take the output from one and check it against the others: have Claude write a block of code, then ask ChatGPT and Gemini what they think of it. Perplexity has replaced Google for me on specific searches. It hasn't stopped me from checking other sources, but AI is now generally the trusted tool I reach for first.
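To make that cross-checking habit concrete, here is a minimal sketch of the kind of prompt I mean. The helper name and wording are my own invention, and there are no real API calls here; the point is just that bundling your goal, your plan, and the code into one block of text lets you paste the identical question into each assistant and compare the answers.

```python
def review_prompt(goal: str, plan: str, code: str) -> str:
    """Assemble a 'what do you think?' review request.

    The same text can be pasted into Claude, ChatGPT, or Gemini,
    so each model reviews identical context.
    """
    return (
        f"My goal: {goal}\n"
        f"My plan: {plan}\n"
        "Here is my code:\n"
        "```python\n"
        f"{code}\n"
        "```\n"
        "What do you think? Point out bugs, risks, and simpler approaches."
    )

# Example: ask for a review of a tiny function.
prompt = review_prompt(
    goal="Deduplicate a list while keeping order",
    plan="Track seen items in a set while iterating",
    code=(
        "def dedupe(xs):\n"
        "    seen = set()\n"
        "    return [x for x in xs if not (x in seen or seen.add(x))]"
    ),
)
print(prompt)
```

Because the prompt is a single string, disagreements between models are easy to attribute: every assistant saw exactly the same context.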
I've started using Jupyter, and I really enjoy it as a learning environment over just an IDE and console; I'm more focused and less prone to wander into other code. That said, since Vim 8 added a built-in terminal there is still a lot to be said for plain .py files. I've debated giving T.O.M., my AI agent, access to Jupyter, but decided no, it's just too risky.
I've been using all this to really get back into coding full time. I took a professional break a few years back and spent time working on some thoughts and concepts of my own, trying to determine what my next personal goals and steps were. I've spent the past twenty-ish years devoted to web-based technology, from the server level up in one way or another. My title transitioned from "web developer" to "front end engineer" to "full stack." Depending on where you work that last term has different meanings, but to me it was always the same sphere, from the HTTP request up: the "stack" or "front end" that responded to a request. Mostly that was web, but I did build some non-front-end-facing APIs, and I noticed then that I enjoyed that "back end" work more. It was around this time that the team I was on was acquired by another company, and after we transitioned brand assets I decided I needed a new direction beyond just content and display.
Earlier this year I was working on a concept and really fell into the LLM rabbit hole. I was working with ChatGPT, I think, and got off on a tangent about how LLMs work. I started poking around Hugging Face, downloaded Ollama, and here we are. I've heard it said that once you find the thing you were meant to do, it just consumes you. You go from thinking in terms of hours spent to just being pulled in.
Oddly, though, very little of the time is spent interfacing with an LLM. It's learning. I equate a lot of tech to magic: it's amazing until you notice the fake thumb they stuffed the hanky into. If you are an aspiring magician you might learn something, a cynic might heckle, and most enjoy the trick and go on with a bit more wonder in their lives. How does this trick work? It's quite the rabbit hole. And it strangely combines three things I have always loved: computers, math, and philosophy. I started college for philosophy, transitioned into math and computer science, and graduated with a BA in New Media Arts. Now, almost 15 years later, I'm considering returning for a master's in machine learning.
I'm working on my own agent framework, and if you're an aspiring magician I highly suggest building your own. It's quite the learning experience, and only you know what you need from your assistant.