• chiisanaA
    22 days ago

    Yep! Give Granite a try. I think it would be perfect for this use case, both in terms of being able to answer your queries and doing so quickly, without a GPU, just on a modern CPU. I was getting above 30 tokens per second on my 10th-gen i5, which kind of blew my mind.
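    If you want to kick the tires, here's a rough sketch of CPU-only inference with llama-cpp-python. The GGUF filename and thread count are placeholders; swap in whichever quantized Granite build you actually download and match the threads to your CPU's physical cores.

    ```python
    # Minimal sketch: CPU-only chat with a small quantized Granite model.
    # The model file below is a placeholder for whatever GGUF you download.
    from llama_cpp import Llama

    llm = Llama(
        model_path="granite-3.1-2b-instruct-Q4_K_M.gguf",  # hypothetical local file
        n_ctx=4096,    # context window
        n_threads=8,   # set to your CPU's physical core count
    )

    resp = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Turn off the living room lights."}],
        max_tokens=128,
    )
    print(resp["choices"][0]["message"]["content"])
    ```

    Quantized 2B–3B models like this are small enough to sit in RAM and still hit usable token rates on a modern CPU.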

    Thinking models like R1 will be better at things like troubleshooting a faulty furnace or diagnosing user problems, so there are benefits to pushing that envelope. However, if all you need is to give basic instructions, have the model infer your intent, and then perform the desired task, a smaller mixture-of-experts model should be passable even without a GPU.