2024-06-04
- I wish there were some notion of a “for loop” for ML. By “for loop” I mean some extremely basic structure that is easy to understand and implement to solve some known problem. Programmers know “for loops”: when to use them, and how to quickly plug them into a program/system when useful. Despite the importance of ML, and even some basic familiarity with its terms/structures (e.g. CNNs, RNNs, GANs), I have almost no ability to plug those building blocks into a program/system quickly. Of course, it’s not impossible for me to use these things. But the conceptual gaps, both in knowing when a building block would apply and in being able to quickly place it, are quite large. Would be awesome if those gaps were smaller. Modern LLMs help somewhat in that they are plug-and-play via various APIs (a minimal sketch of what I mean is below), so that is something. But it still feels like the programming primitives in ML are just absurdly complex/difficult to use.
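To make the “plug-and-play” point concrete, here’s a minimal sketch, assuming the official `openai` Python package and an `OPENAI_API_KEY` in the environment (the model name is just an example). This is about as close to a “for loop” as ML currently gets: one function call, no training, no architecture decisions.

```python
# Minimal "LLM as a plug-and-play primitive" sketch. Assumes the `openai`
# package is installed and OPENAI_API_KEY is set; model name is an example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    # One call gets you a working ML capability: no training, no data pipeline.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": f"Summarize this in one sentence:\n\n{text}"}],
    )
    return response.choices[0].message.content
```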
- I am back on my experimentation with integrating LLMs into my learning system (which is mostly just Anki right now). Today’s experiment was around solving the “re-reading” problem for highlights from a book. To understand where I’m coming from here, it’s worth going back a bit to the foundations of spaced repetition. My understanding, from lived experience and from hanging around spaced-repetition people for a long time, is that simply re-reading notes (or Kindle highlights) is not very useful for retention/understanding. It’s not a negative thing to do, by any means, but the level of engagement with the information is quite low during re-reading because (at least in my case) a feeling of “been there, done that” creeps in during the process. It’s not new information, your brain knows that, so you just kinda gloss over it. One of the powerful things we can theoretically do with LLMs is rewrite stuff we’ve already read into something that feels fairly new (or hopefully “new enough”). The idea is that this will break down the “been there, done that” feeling and cause you to actually re-engage with the material. Would be great if this worked. Now, there are a number of approaches you could take with this. LLMs are super flexible, so the limit is mostly your imagination. For example, you could change the style so it reads as if Shakespeare wrote the info. But for this first experiment, I went for something simpler and more straightforward: just converting a collection of recent highlights into a structured outline, like you would make in a college course (a rough sketch of the pipeline is below). There wasn’t a super strong reason to do it this way (vs. the Shakespeare approach); it was mostly driven by intuition that a simpler, somewhat familiar format would likely be the most worthwhile. I should probably write up a longer post that goes into more detail about the specifics of the experiment (e.g. input highlights, LLM prompts, and outputs). But overall, I’m reasonably impressed with the output for an initial experiment. It is coherent/worthwhile and feels novel enough. It feels like a good solution to the problem where you step away from a book for a bit and want to remember “what was going on last time I read this” without actually re-reading highlights or sections, which would suffer from the “eyes-glaze-over” problem. And the results here make it seem like this approach has good potential to open up a set of ways to engage with books that falls in between “making/reviewing flashcards” and “just forgetting things.” Exciting stuff.
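For the record, here’s a rough, hypothetical sketch of what the highlights-to-outline step looks like, in the same `openai` setup as the sketch above. The prompt wording and `load_recent_highlights` are stand-ins, not the exact ones from the experiment:

```python
# Hypothetical sketch of the highlights -> structured-outline experiment.
# The prompt wording and `load_recent_highlights` are stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

OUTLINE_PROMPT = """\
Rewrite the following book highlights as a structured outline, the kind
you'd make for a college course: short thematic headings with nested
bullet points. Rephrase everything; do not copy sentences verbatim.

Highlights:
{highlights}
"""

def load_recent_highlights(book: str) -> list[str]:
    # Stand-in: in practice this would read a Kindle highlights export
    # (e.g. "My Clippings.txt") or some other highlights source.
    raise NotImplementedError

def outline(highlights: list[str]) -> str:
    joined = "\n".join(f"- {h}" for h in highlights)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": OUTLINE_PROMPT.format(highlights=joined)}],
    )
    return response.choices[0].message.content
```

The “rephrase everything” line is the load-bearing part: it’s what is supposed to break the “been there, done that” feeling rather than just re-surfacing the same sentences.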