Show HN: LocalScore – Local LLM Benchmark
Hey folks!
I've been building an open source benchmark for measuring local LLM performance on your own hardware. The benchmarking tool is a CLI written on top of Llamafile to allow for portability across different hardware setups and operating systems. The website is a database of results from the benchmark, allowing you to explore the performance of different models and hardware configurations.
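To give a sense of the workflow, a run looks roughly like this (a sketch assuming llamafile-style flags and a placeholder model filename; see the CLI repo below for the exact invocation):

  # point the CLI at a local GGUF model (filename here is just an example)
  ./localscore -m your-model.Q4_K_M.gguf

It runs a set of prompt-processing and generation tests on your hardware, and you can optionally submit the results to the website's database.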
Please give it a try! Any feedback and contributions are much appreciated. I'd love for this to serve as a helpful resource for the local AI community.
For more, check out:

- Website: https://localscore.ai
- Demo video: https://youtu.be/De6pA1bQsHU
- Blog post: https://localscore.ai/blog
- CLI GitHub: https://github.com/Mozilla-Ocho/llamafile/tree/main/localsco...
- Website GitHub: https://github.com/cjpais/localscore
Congrats on launching!
Stoked to have this dataset out in the open. I submitted a bunch of tests for some models I'm experimenting with on my M4 Pro. Rather paltry scores compared to a dedicated GPU, but I'm excited that running a 24B model locally is actually feasible at this point.