Your ML model cache volume is being wiped on restart, so the model gets re-downloaded on the first search after a restart. Either point the cache at a path on your storage, or make sure you're not deleting the dynamic volume when you restart.
In my case I changed this:

```yaml
immich-machine-learning:
  ...
  volumes:
    - model-cache:/cache
```

to this:

```yaml
immich-machine-learning:
  ...
  volumes:
    - ./cache:/cache
```
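If you'd rather keep the named volume instead of a bind mount, a sketch of the alternative (assuming a standard Compose file layout) is to declare `model-cache` at the top level of the file. A named volume declared this way persists across `docker compose restart` and `docker compose up`/`down`; it is only removed if you explicitly run `docker compose down -v` or `docker volume rm`.

```yaml
services:
  immich-machine-learning:
    ...
    volumes:
      - model-cache:/cache

# Top-level declaration: Compose manages this volume and keeps it
# across restarts unless you remove it with `down -v`.
volumes:
  model-cache:
```

So if your cache keeps disappearing with a named volume, check whether your restart routine includes the `-v` flag.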
I no longer have to wait uncomfortably long when I’m trying to show off Smart Search to a friend, or just need a meme pronto.
That’ll be all.
Interesting, it’s slightly slower for me through the web interface, both with a direct connection to my network and when proxied through the internet. Still, we’re talking seconds here, and the results are so accurate!
Immich has effectively replaced the (expensive) Windows software Excire Foto, which I was using for on-device contextual search because Synology Photos search just sucks. Excire isn’t practical to run on Linux because it has to go through a VM, so I’m happy to self-host Immich and be able to use it even while out of the house.