CarbonNanotubes wrote a comment
6d ago
For anyone using LM Studio instead of the OpenAI API: I had to drop the OpenAI-compatible API in favor of LM Studio's own API to get structured output working. Nothing would fail outright, but the model would not honor the formatted output I specified. Doing this does require changing a lot of the payload structure, though. I used AI to help me do that quickly.
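For reference, here is a minimal sketch of what a structured-output payload with a JSON-schema `response_format` can look like. The endpoint path, model name, and exact field names are assumptions on my part; check the LM Studio docs for your version before relying on them:

```python
# Sketch of a structured-output request payload (assumed target: LM Studio's
# chat endpoint, e.g. POST http://localhost:1234/api/v0/chat/completions;
# verify against your LM Studio version's docs).
import json

payload = {
    "model": "gemma-local",  # hypothetical name for whichever model is loaded
    "messages": [
        {"role": "user", "content": "List two fruits as JSON."},
    ],
    # Ask the server to constrain the model's output to this JSON schema.
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "fruit_list",
            "schema": {
                "type": "object",
                "properties": {
                    "fruits": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["fruits"],
            },
        },
    },
}

body = json.dumps(payload)
print(json.loads(body)["response_format"]["type"])
```

The key difference from free-form prompting is that the schema rides along in the request, so the server (not just the prompt) enforces the output shape.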
CarbonNanotubes wrote a comment
1w ago
18:49 Since I'm using LM Studio with a Gemma 4 model that has reasoning, mine actually already works for this multi-file use case, lol, since I never know whether I'll get a response back directly or it will reason first. Though the first time I ran this and said "read the contents of package.json and composer.json", it got package.json but failed to find composer.json.

Here is my code for that
while (true) {
    // Prompt the user for their next message (Laravel Prompts).
    $prompt = text('Talk with Gemma!', required: true);

    $this->history[] = [
        'role' => 'user',
        'content' => $prompt,
    ];

    ray($this->history);

    // Keep calling the model until it returns a plain message
    // instead of another function call.
    while (true) {
        $response = spin(
            message: 'Hmmmmmm.....',
            callback: fn () => $this->runModel()
        );

        $this->history = [...$this->history, ...$response['output']];
        $lastResponse = end($response['output']);

        if ($lastResponse['type'] === 'function_call') {
            $tool = $lastResponse['name'];

            if ($tool === 'get_current_time') {
                $time = now()->toIso8601String();
                ray($time);

                $this->history[] = [
                    'type' => 'function_call_output',
                    'call_id' => $lastResponse['call_id'],
                    'output' => $time,
                ];
            }

            if ($tool === 'read_file') {
                $this->history[] = [
                    'type' => 'function_call_output',
                    'call_id' => $lastResponse['call_id'],
                    'output' => file_get_contents(
                        base_path(json_decode($lastResponse['arguments'])->path)
                    ),
                ];
            }
        } else {
            // No more tool calls; show the model's final answer.
            info($lastResponse['content'][0]['text']);
            break;
        }
    }

    ray($response)->purple();
    ray($this->history);
}
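One possible reason the second file was missed: the code above only inspects the last output item via end(), so if the model emits two function_call items in a single turn (one per file), only one gets a function_call_output. A minimal sketch of looping over every call instead (written in Python, with item shapes assumed to mirror the PHP above; the read_file callback is a hypothetical stand-in):

```python
# Handle EVERY function_call in a model turn, not just the last one.
# Item shapes here are assumptions mirroring the PHP snippet above.
import json

def handle_tool_calls(output_items, read_file):
    """Return one function_call_output entry per tool call in the turn."""
    results = []
    for item in output_items:
        if item.get("type") != "function_call":
            continue
        if item["name"] == "read_file":
            args = json.loads(item["arguments"])
            results.append({
                "type": "function_call_output",
                "call_id": item["call_id"],
                "output": read_file(args["path"]),
            })
    return results

# Example: a turn where the model asked for two files at once.
turn = [
    {"type": "function_call", "name": "read_file",
     "call_id": "c1", "arguments": '{"path": "package.json"}'},
    {"type": "function_call", "name": "read_file",
     "call_id": "c2", "arguments": '{"path": "composer.json"}'},
]
outputs = handle_tool_calls(turn, read_file=lambda p: f"<contents of {p}>")
print(len(outputs))  # 2
```

With this shape, both files get an output entry appended to the history before the model is called again, so neither request is silently dropped.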
Though of course Jeff's way is a lot cleaner than what I had :D
CarbonNanotubes wrote a comment
1w ago
@willtomlinson I think they realized that model was not going to work.