You could use LocalAI or Ollama.
But neither is going to work with 300 MB of RAM, and both need a fair amount of compute for response speed to be usable.
These models are also not very capable compared to OpenAI's GPTs, but that depends on what your goal is with the models.
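If you do end up trying Ollama on better hardware, here's roughly what talking to it looks like. This is a minimal sketch using only the Python standard library, assuming the Ollama server is running on its default port (11434) and a small model named "llama3" has already been pulled; swap in whatever model you actually have.

```python
# Minimal sketch: querying a local Ollama server over its HTTP API.
# Assumes Ollama is running on its default port (11434) and that a small
# model (here "llama3") has been pulled beforehand.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # assumed model name; use whatever you pulled
    "prompt": "Summarize what a linked list is in one sentence.",
    "stream": False,     # return one full response instead of streamed chunks
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.loads(resp.read())
    print(body["response"])
```

LocalAI works similarly but exposes an OpenAI-compatible endpoint, so existing OpenAI client code can usually point at it with just a base URL change.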