2 points

Hope you like 40-second response times unless you run the model on a GPU.

10 points

I’ve hosted one on a Raspberry Pi, and it took at most a second to process and act on commands. Basic speech-to-text doesn’t require massive models and has become much less compute-intensive over the past decade.

2 points

Okay, well, I was running faster-whisper through Home Assistant.
