That's awesome, and really clear. Thank you.
I've been thinking about this recently in terms of a map like the one described here, a graph of 80k distinct concepts: https://www.numind.ai/blog/a-foundation-model-for-entity-recognition
I'm imagining being able to visualize the location of a text on a graph like that, then see the way it ramifies as you summarize/represent it at different levels of granularity (document, chapter, section, paragraph, sentence, etc.)
It could offer a way to discover adjacencies in a library of texts, too, like RAG to find other texts with threads adjacent to what you're currently reading.
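To make that concrete, here's a minimal sketch of the adjacency idea, assuming the sentence-transformers package; the model name, the toy library, and the flat cosine search are all illustrative stand-ins, not the setup from the NuMind post:

```python
# Hypothetical sketch: embed a library at document granularity and
# find texts adjacent to the passage you're currently reading.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

library = {
    "essay_a": "TF-IDF weighs a term by how rare it is across a corpus...",
    "essay_b": "Knowledge graphs connect entities through typed relations...",
    "essay_c": "Zettelkasten notes accumulate value through dense linking...",
}

# One embedding per document; a fuller version would also embed
# chapters/sections/paragraphs to get the multi-granularity view.
names = list(library)
vecs = model.encode([library[n] for n in names], normalize_embeddings=True)

def adjacent(passage: str, k: int = 2):
    """Return the k library texts nearest to the current passage."""
    q = model.encode([passage], normalize_embeddings=True)[0]
    scores = vecs @ q  # cosine similarity, since vectors are normalized
    return [(names[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

print(adjacent("term weighting schemes for ranking documents"))
```

A real system would keep the vectors in a proper index, but nearest-neighbor search over normalized embeddings at each level of granularity is the core of it.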
Thanks Daniel! That's really interesting. I find things like this fascinating, and I'm excited about the ways LLMs and other models will improve information retrieval, synthesis, and storage. I've been playing around with using them in my 'second brain' and it's been super helpful so far.
Very cool. Same here, though I'd wager in a much less code-forward kind of way. I'm using a platform called Afforai to upload documents into a personal library accessible to GPTs, and experimenting with aggregating a sort of personal ontology, then using that to prompt it into tagging my own journal/docs so they'll be networked in something like Obsidian (since I can't seem to manage that feat while writing in the first place).
I'm doing something similar using Obsidian. Currently trying to figure out how to get all the informational content I consume into my vault.
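For anyone trying the same thing, here's a rough sketch of that tagging step, assuming the openai Python client; the ontology, prompt, and model are all placeholder choices:

```python
# Hypothetical sketch: ask an LLM to tag a note against a fixed personal
# ontology, then write the tags into Obsidian-style YAML frontmatter so
# the notes can be networked later.
from pathlib import Path
from openai import OpenAI

ONTOLOGY = ["writing", "retrieval", "second-brain", "llms", "reading"]
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tag_note(path: Path) -> list[str]:
    text = path.read_text()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Pick the tags from this list that fit the note, "
                f"comma-separated, nothing else: {', '.join(ONTOLOGY)}\n\n{text}"
            ),
        }],
    )
    raw = resp.choices[0].message.content or ""
    # Keep only tags that actually belong to the ontology.
    return [t.strip() for t in raw.split(",") if t.strip() in ONTOLOGY]

def add_frontmatter(path: Path) -> None:
    """Prepend the generated tags as YAML frontmatter."""
    tags = tag_note(path)
    note = path.read_text()
    path.write_text("---\ntags: [" + ", ".join(tags) + "]\n---\n" + note)
```

Constraining the model to a fixed tag list keeps the vocabulary consistent, which is what makes the notes linkable afterward.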
Great article! TF-IDF is one of those rare ideas that is simple and powerful at the same time.
Thanks Jose! It’s definitely one of those algorithms that makes me think about how beautiful math is.
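For readers who want to see that math laid out, here's a from-scratch sketch of the textbook formulation (real libraries like scikit-learn smooth the idf, but the idea is the same):

```python
# Minimal TF-IDF: a term's weight in a document is its frequency there,
# scaled by the log of how rare it is across the whole corpus.
import math

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "the rain in spain falls mainly".split(),
]

def tf_idf(term: str, doc: list[str]) -> float:
    tf = doc.count(term) / len(doc)         # term frequency in this document
    df = sum(1 for d in docs if term in d)  # how many documents contain it
    idf = math.log(len(docs) / df)          # rarity across the corpus
    return tf * idf

# "the" appears in every document, so idf = log(3/3) = 0 and its weight
# vanishes; "cat" appears in only one, so it scores high there.
print(tf_idf("the", docs[0]))  # 0.0
print(tf_idf("cat", docs[0]))  # ~0.18
```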
Giving due credit to you, I may point my newsletter readers to this piece.
This is a very exhaustive piece and comes as a breath of fresh air in this day and age of "fly-by-night" content. Really well-researched. Great show, Logan.
Thanks Sorab! I appreciate the kind words. Making it understandable while conveying all the relevant info is a balancing act I’m not always great at, so that means a lot 😊
This was super interesting and digestible for a partially technical person (aka me). The reminder that "better" isn't always better is a good thing to keep in mind.
Thanks Andy! It's one of the reasons I find algorithms and computer science in general so fascinating.