From Pack Rat to Cyber Squirrel
The Digital Hoarding Dilemma
I collect too many links. I was a pack rat as a kid, my locker overflowing with random knick-knacks and other garbage I didn't need but felt compelled to hoard. As I got older, that behaviour shifted to collecting things I found online.
Things I'm researching right now.
Things that might be interesting in the future.
Things that would have been helpful in the past.
Things I want to read, listen to, watch, play.
And every open-source software package that looks halfway interesting.
The problem is I so rarely go back to check on all those links. That isn't so bad when it's some article I forgot about: one lazy Sunday and I'm all caught up. The software packages are another story. My current process takes so much work and time, and I rarely make much progress:
Get the environment set up right.
Figure out how to get the Hello, World! example going.
Try something a little more challenging or robust.
Get flustered when it doesn't go right.
Leave it for a week and then forget about it.
Remember it, return to it, and be completely lost.
When Link Love Turns into Link Overload
So my list of software packages to learn just gets larger and larger. At least 10 times this week, I've landed on a GitHub page and thought to myself, "Hey, this looks cool, I should star this," only to find it was already starred. It's a real problem: the pile only seems to get larger. I need some help moving forward.
I've been thinking on and off for a few years about building a personal assistant application to help me manage this, mostly dismissing it as an unfeasible goal. But after seeing the capabilities of ChatGPT (and other LLMs like it) these last few months, I'm starting to think it might be time to try to make something.
ChatGPT's Code Conjuring
Here's a prompt I gave GPT-4 (FYI: you get 5 messages every 3 hours for free from Forefront.ai):
It spat out a bunch of code that I copy-pasted into a Codepen, and it had just one error to fix: a missing closing "}" at the end of the script. That's it. I fixed it and it WORKED! Only 10 minutes from finding out about this free GPT-4 option from Forefront to having a working demo on Codepen. Look at that slick animation when you add a task!
Blew. My. Mind. 🤯 You can find the full chat transcript here if you want to try it for yourself.
Taking the Next Steps in Link Management and Learning
Time to figure out where to go from here. I have some ideas, a lot of them actually, so I'll just regurgitate them stream-of-consciousness style here:
Write a better post about this long-term assistant plan (...and my Axure script from earlier this week). I'm excited to talk in detail about the full idea this experiment is connected to, if only to clarify it for my own benefit and have a proper plan of action.
Learn about planning prompts. A thing I see quite often in tutorials and articles about prompts is a block of text that seems to describe the character and purpose of the AI for that session. I hope to better understand their structure and preferred content so that I can have a library of prompts ready to use for various tasks. Is there a directory of ChatGPT prompts somewhere? (EDIT: Yes, I found one in March. Guess where it was? IN THE LINK DATABASE 😳)
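From what I can tell, these character-and-purpose blocks are usually sent as a "system" message in the chat-style message format most LLM APIs use. Here's a minimal sketch of that structure; the prompt wording itself is just my guess at the pattern, not something from a real library:

```python
# A sketch of a "planning" prompt in the chat-message format used by
# OpenAI-style APIs. The system message sets the AI's character and
# purpose for the session; the wording here is invented for illustration.
system_prompt = (
    "You are a coding assistant that writes small, self-contained demos. "
    "Always return a single HTML file with inline CSS and JavaScript, "
    "and list any external dependencies before the code."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Build a to-do list demo with an add animation."},
]

# This list would then be passed to a chat-completion API call.
print(messages[0]["role"])  # → system
```

A library of prepared prompts would then just be a collection of these system strings, one per task type.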
Make more things.
How do I go about setting this random idea up as a system?
What pieces do I need to assemble so that I can have scripts monitoring my link database in Notion (and my starred GitHub repos) for new links, and then have it generate examples to learn with?
A to-do list was a bad example; every demo is a to-do list. What other prototypical app examples should I consider?
What if I build up a whole library of prototypical apps and a large corpus of previously learned packages? Could I make the AI idly "wonder" and "imagine" how the things it has learned might be mashed together into new ones? I want to wake up to an email from my assistant telling me it built a UI that generates example data by mashing up faker-js and mocker-data-generator with nocodb, and that it also did another version styled like arwes and rendered in aframe so you can use it in VR, just because it could.
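One piece of that monitoring question is already easy: GitHub's public REST API exposes a user's starred repos. Here's a sketch of how a monitoring script might fetch and parse them; the username is a placeholder, and the parsing function is my own helper, not part of any library:

```python
import json
import urllib.request

def parse_starred(payload):
    """Pull (name, url) pairs out of a GitHub starred-repos API response."""
    return [(repo["full_name"], repo["html_url"]) for repo in payload]

def fetch_starred(username):
    """Fetch one page of a user's starred repos from GitHub's public API."""
    url = f"https://api.github.com/users/{username}/starred?per_page=100"
    with urllib.request.urlopen(url) as resp:
        return parse_starred(json.load(resp))

# Offline example of the parsing step, using a made-up response payload:
sample = [{"full_name": "faker-js/faker",
           "html_url": "https://github.com/faker-js/faker"}]
print(parse_starred(sample))
```

A scheduled job could diff this list against the last run and hand any new repos off to the example-generating step.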
Error checking and automated fixing of the code. I was utterly surprised that it worked so well on the first try. This isn't my first time getting ChatGPT to write code (I should share my experiments with Unity C# generation sometime), so I know it sometimes needs massaging to get things right or to fix little syntax errors.
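The simplest version of that checking loop I can imagine is: parse the generated code, and if it fails, turn the error into a follow-up prompt. A sketch, using Python's own parser as the checker (the generated code in my demo was JavaScript, but the idea is the same):

```python
import ast

def check_python_syntax(code):
    """Return None if the generated code parses cleanly, otherwise an
    error message that could be fed back to the model as a follow-up."""
    try:
        ast.parse(code)
        return None
    except SyntaxError as err:
        return f"SyntaxError on line {err.lineno}: {err.msg}. Please fix and resend."

# The bug I actually hit was a missing closing brace; this catches the
# analogous mistake in Python:
broken = "data = {'task': 'write post'"
print(check_python_syntax(broken))      # prints an error message
print(check_python_syntax(broken + "}"))  # prints None
```

A fuller loop would re-prompt the model with that message and re-check, up to some retry limit.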
Learn about text summarization. Beyond code generation, I expect this assistant system to also give me summaries of articles and non-codey things that show up in my links database. This one I haven't been too worried about; I found a tutorial today that should help me make progress.
What about long-term storage? I've read some things about that, but I'm far from understanding it. Is this what a vector database is for?
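As far as I understand it, yes: a vector database stores an embedding vector alongside each piece of text, and "remembering" means finding the stored vectors closest to a query vector. A toy sketch of that lookup, with made-up three-dimensional vectors standing in for real embeddings from a model:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "database": each stored text keeps an embedding vector alongside it.
# Real systems get these vectors from an embedding model; these are invented.
store = {
    "faker-js generates fake data": [0.9, 0.1, 0.0],
    "aframe renders WebVR scenes": [0.1, 0.8, 0.2],
}

query = [0.85, 0.15, 0.05]  # pretend embedding of "fake data generator"
best = max(store, key=lambda text: cosine_similarity(store[text], query))
print(best)  # → faker-js generates fake data
```

Real vector databases do this same nearest-neighbour search, just at scale and with smarter indexing than a brute-force `max`.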
Figure out if this still works on a local machine. I've been meaning to look at GPT4All, which runs locally with about 16GB of RAM and no GPU requirement. I'd settle for slower output or lower quality (so long as I can fix it afterward) if it meant I wasn't beholden to a third party to make it happen. I found another tutorial to learn that too.