As a serial DIYer, I respect the engineering depth here, especially the custom vector index, but I disagree on the self-hosted ML approach. The innovation in embeddings is just too fast to keep up with locally without constant refactoring. You can actually see the trade-off in the "girl drinking water" example where one result is a clear hallucination.
Hi, Author here!
I have been working on this project for quite some time now. For search engines like this, the basic ideas remain the same, i.e. extracting metadata or semantic info and providing an interface to query it, but a lot of effort has gone into making those modules performant while keeping dependencies minimal. The current version is down to only three dependencies, i.e. numpy, markupsafe, and ftfy, plus a Python installation with no hard dependence on any particular version. A lot of code is written from scratch, including a meta-indexing engine and a minimal vector database. Being able to index any personal data from multiple devices or services without duplicating it has been the main theme of the project so far!
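For readers curious what a "minimal vector database" can mean in practice, here is a rough numpy-only sketch of a brute-force cosine-similarity index (this is an illustration, not the project's actual code; at these dataset sizes a single matmul over the whole index is often fast enough):

```python
import numpy as np

class TinyVectorIndex:
    """Brute-force cosine-similarity index over L2-normalized vectors."""

    def __init__(self, dim):
        self.dim = dim
        self.vectors = np.empty((0, dim), dtype=np.float32)
        self.keys = []

    def add(self, key, vec):
        v = np.asarray(vec, dtype=np.float32)
        v = v / np.linalg.norm(v)           # normalize so dot product == cosine
        self.vectors = np.vstack([self.vectors, v])
        self.keys.append(key)

    def search(self, query, k=5):
        q = np.asarray(query, dtype=np.float32)
        q = q / np.linalg.norm(q)
        scores = self.vectors @ q           # one matmul scores the whole index
        top = np.argsort(-scores)[:k]       # indices of the k highest scores
        return [(self.keys[i], float(scores[i])) for i in top]
```

The appeal of writing this from scratch is exactly the dependency story: no FAISS, no server, just numpy arrays you can pickle alongside your metadata.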
We (my friend and I) have already tested it on around 180 GB of the Pexels dataset and on up to 500k images from the Flickr 10M dataset. The machine learning models are powered by a framework written completely in Nim (which is currently not open-source) that has oneDNN as its only dependency (which will have to be done away with to make it run on ARM machines!)
I have mainly been looking for feedback to smooth over some rough edges, but it has been worthwhile to work on this project, which includes code at every level from assembly to HTML!
Interesting project, very dense post. I like the idea of a genuine personal search engine. You’d think that Windows and macOS would do this well, but they really don’t.
Project GitHub is here https://github.com/eagledot/hachi
I don't know about macOS, but I've found Spotlight awesome since switching to an iPhone last year. The only issue I have is that some apps that I would really like to search don't index their data with it.
I have also been surprised that personal search engines are not a solved problem. “We” have actually known how to do decent search for a long time, including across images and the entire freaking internet for over two decades, but it’s not simple or commonplace to get a good semantic search interface for your own files, local or remote.
Chrome currently offers a semantic search across your browser history, but it’s buried. The major photo services allow for search across your photos. Windows and Mac have indexed keyword search across files, but the interface feels primitive.
I increasingly want a private search index across my browsing history, my photos, my notes/files, my voice recordings, GitHub projects, etc.
I thought a paid personalizable search engine like Kagi would be a good place to get/build a personalized internet search index on my browser history, but they don’t really offer the tools for that scale.
There are some enterprise search engines trying to solve this for orgs, so maybe I should be looking there?
I’m glad to see projects like Hachi, and am curious what others are doing or reaching for.
“Windows and Mac have indexed keyword search across files, but the interface feels primitive.”
The functionality is further obscured when (at least on Windows) the local file results are intermingled with results from afar, which I guess are Bing.
For me it just doesn't work at all. I don't know why, but every Windows instance I've used since Win7 has been unable to find files even with the exact filename supplied. I don't disable the indexer; I can see it using CPU and disk resources, but it just doesn't find anything relevant when I search. When I instead use Search Everything on Windows, it works perfectly.
Reminds me of Danswer, actually. That’s an LLM-powered personal search engine. Looks like they’re making an enterprise play now.
https://danswer-website.vercel.app
I've been hoping to see something like this, as finding or rediscovering images that I've archived has been a painful process for some years now.
Still, I've come to the conclusion that search alone - especially LLM-based search - isn't enough for these applications, because of its volatility. Human spatial localization relies on object permanence, so there needs to be some amount of durability baked into at least some of the functions of any application that involves us storing and retrieving desired objects and data.
I don't know precisely what that looks like, but I do know that, for example, whenever YouTube refreshes a recommended video list, I miss the days when those lists were largely fixed for days or weeks.
>My try has been to expose multiple (if not all) attributes for a resource directly to user and then letting user recursively refine query to get to desired result.
I do really like this part, though. I'd rather photos get tagged with as many (possibly erroneous) attributes as possible, and let me carve out what I'm really looking for, rather than missing the one I wanted because the system mistook a seesaw for a teeter-totter or something.
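That refinement loop is simple to express: tag everything generously, then intersect with each attribute the user picks. A toy sketch (hypothetical data, not Hachi's implementation):

```python
# Recursive query refinement over (possibly noisy) attribute tags:
# start broad, then narrow with each attribute the user selects.
photos = [
    {"file": "img1.jpg", "tags": {"outdoor", "seesaw", "child"}},
    {"file": "img2.jpg", "tags": {"outdoor", "dog"}},
    {"file": "img3.jpg", "tags": {"indoor", "seesaw"}},
]

def refine(candidates, attribute):
    """Keep only candidates carrying the chosen attribute tag."""
    return [p for p in candidates if attribute in p["tags"]]

results = photos
for attr in ["outdoor", "seesaw"]:   # user narrows step by step
    results = refine(results, attr)
```

Because each step only ever removes candidates, an over-eager tagger costs you a few extra clicks rather than a missed photo.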
You can hack together an image search with a 500k VLM and a tiny embedding model that works surprisingly well. I built a tool like this 2 years ago that I can throw a hard drive at and any and all image files are processed and searchable locally, including video frames.
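The "throw a hard drive at it" part of such a tool is mostly plumbing: walk the tree, collect image files, caption each one, and search the captions. A minimal sketch, with the VLM replaced by a stand-in function (a real tool would run each image through a small captioning model there):

```python
import os

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def find_images(root):
    """Walk a directory tree and yield every image file path."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
                yield os.path.join(dirpath, name)

def caption(path):
    # Stand-in for the real VLM: here we just derive text from the
    # filename so the sketch stays self-contained.
    return os.path.splitext(os.path.basename(path))[0].replace("_", " ")

def build_index(root):
    """Map each image path to its caption text."""
    return {p: caption(p) for p in find_images(root)}

def search(index, query):
    q = query.lower()
    return [p for p, text in index.items() if q in text.lower()]
```

Swap in a real captioner and an embedding model over the caption text, and you have the skeleton of a local image search; video support is the same loop over sampled frames.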