The main function to use is "chain". It allows you to "chain" together most of the other functions of automonkey, which in turn enables you to create sequences of mouse and/or keyboard actions.
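As a rough illustration, a chained sequence might look like the sketch below. The per-step keys ("click", "write", "keys", "wait") and the dictionary-per-step argument format are assumptions made for illustration based on the description above, not a confirmed API reference; only the existence of the chain function is taken from the text, so check the automonkey documentation for the actual schema.

```python
# Minimal sketch of chaining mouse/keyboard actions with automonkey.
# NOTE: the per-step dictionary schema used here ("click", "write",
# "keys", "wait") is assumed for illustration only; consult the
# automonkey docs for the real argument format.
from automonkey import chain

chain(
    {"click": "submit_button.png", "wait": 1},        # click an on-screen image, then pause (assumed keys)
    {"write": "hello from automonkey", "wait": 0.5},  # type text into the focused field (assumed key)
    {"keys": "enter"},                                # press a keyboard key (assumed key)
)
```

The idea is that each step describes one mouse or keyboard action, and chain runs them in order, which is what lets a single call express a whole automation sequence.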
So far, running LLMs has required a large amount of computing resources, mainly GPUs. Running locally, a simple prompt with a typical LLM takes on an average Mac ...