Naive regex replacement, needs examining for correctness
Some checks failed
ci/woodpecker/push/author-tests/1 Pipeline failed
ci/woodpecker/push/author-tests/4 Pipeline failed
ci/woodpecker/push/author-tests/2 Pipeline failed
ci/woodpecker/push/author-tests/3 Pipeline failed

This commit is contained in:
Ryan Voots 2024-12-18 11:11:10 -05:00
parent 502433f9a1
commit 1f188e3ceb
27 changed files with 306 additions and 306 deletions
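The commit message calls this a naive regex replacement, and the diff is consistent with a first-occurrence-only substitution: on the one POD line that mentions the old namespace twice, only the first mention was rewritten. A hypothetical reconstruction of the rename (the actual command is not recorded in the commit):

```shell
# Hypothetical reconstruction of the naive rename. sed's s/// without the /g
# flag replaces only the first match on each line, which matches the leftover
# second occurrence visible in the diff below. In the real repo this would be
# run over the source tree, e.g. via find/xargs with sed -i.
echo 'L<OpenAIAsync::Client> and L<OpenAIAsync::Client::OobaBooga>' \
  | sed 's/OpenAIAsync/Net::Async::xLM::API/'
# → L<Net::Async::xLM::API::Client> and L<OpenAIAsync::Client::OobaBooga>
```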

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync;
+package Net::Async::xLM::API;
 use strict;
 use warnings;

View file

@@ -4,7 +4,7 @@
 There's two big submodules that you'll want to look at:
-L<OpenAIAsync::Client> and L<OpenAIAsync::Client::OobaBooga>
+L<Net::Async::xLM::API::Client> and L<OpenAIAsync::Client::OobaBooga>
 There will eventually be a compatible server that uses Net::Async::HTTP::Server that can be used to build a proxy that lets you manipulate or reroute requests, etc.
@@ -38,7 +38,7 @@ I've left these off since they're more expensive on OpenAI's service and I haven
 =item * Direct simple chatbot interface
-The ChatCompletion API and it's features are actually a little verbose and complicated to use properly, but I'm intending this series of modules to be direct and complete API clients only. To that end though I'll be making an OpenAIAsync::ChatBot module that provides a better interface for making actual chatbots, particularly chat session handling and stable serialization of chats so that they can be persisted somewhere and then reloaded to continue in the future.
+The ChatCompletion API and it's features are actually a little verbose and complicated to use properly, but I'm intending this series of modules to be direct and complete API clients only. To that end though I'll be making an Net::Async::xLM::API::ChatBot module that provides a better interface for making actual chatbots, particularly chat session handling and stable serialization of chats so that they can be persisted somewhere and then reloaded to continue in the future.
 =back

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Client;
+package Net::Async::xLM::API::Client;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,16 +17,16 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Client - IO::Async based client for OpenAI compatible APIs
+Net::Async::xLM::API::Client - IO::Async based client for OpenAI compatible APIs
 =head1 SYNOPSIS
 use IO::Async::Loop;
-use OpenAIAsync::Client;
+use Net::Async::xLM::API::Client;
 my $loop = IO::Async::Loop->new();
-my $client = OpenAIAsync::Client->new();
+my $client = Net::Async::xLM::API::Client->new();
 $loop->add($client);
@@ -48,7 +48,7 @@ OpenAIAsync::Client - IO::Async based client for OpenAI compatible APIs
 max_tokens => 1024,
 })->get();
-# $output is now an OpenAIAsync::Type::Results::ChatCompletion
+# $output is now an Net::Async::xLM::API::Type::Results::ChatCompletion
 =head1 THEORY OF OPERATION
@@ -60,7 +60,7 @@ it will properly suspend the execution of your program and do something else con
 =head2 new()
-Create a new OpenAIAsync::Client. You'll need to register the client with C<< $loop->add($client) >> after creation.
+Create a new Net::Async::xLM::API::Client. You'll need to register the client with C<< $loop->add($client) >> after creation.
 =head3 PARAMETERS
@@ -133,14 +133,14 @@ A hash ref that gets passed as additional parameters to L<Net::Async::HTTP>'s co
 =head2 completion (deprecated)
-Create a request for completion, this takes a prompt and returns a response. See L<OpenAIAsync::Types::Request::Completion> for exact details.
+Create a request for completion, this takes a prompt and returns a response. See L<Net::Async::xLM::API::Types::Request::Completion> for exact details.
 This particular API has been deprecated by OpenAI in favor of doing everything through the chat completion api below. However it is still supported
 by OpenAI and compatible servers as it's a very simple interface to use
 =head2 chat
-Create a request for the chat completion api. This takes a series of messages and returns a new chat response. See L<OpenAIAsync::Types::Request::ChatCompletion> for exact details.
+Create a request for the chat completion api. This takes a series of messages and returns a new chat response. See L<Net::Async::xLM::API::Types::Request::ChatCompletion> for exact details.
 This API takes a series of messages from different agent sources and then responds as the assistant agent. A typical interaction is to start with a C<"system"> agent message
 to set the context for the assistant, followed by the C<"user"> agent type for the user's request. You'll then get the response from the assistant agent to give to the user.
@@ -150,7 +150,7 @@ a new module that uses this API and helps manage the chat in an easier manner wi
 =head2 embedding
-Create a request for calculating the embedding of an input. This takes a bit of text and returns a gigantic list of numbers, see L<OpenAIAsync::Types::Request::Embedding> for exact details.
+Create a request for calculating the embedding of an input. This takes a bit of text and returns a gigantic list of numbers, see L<Net::Async::xLM::API::Types::Request::Embedding> for exact details.
 These values are a bit difficult to explain how they work, but essentially you get a mathematical object, a vector, that describes the contents of the input as
 a point in an N-dimensional space (typically 768 or 1536 dimensions). The dimensions themselves really don't have any inherit mathematical meaning but are instead relative to one-another
@@ -192,7 +192,7 @@ Ryan Voots, ... etc.
 =cut
-class OpenAIAsync::Client :repr(HASH) :strict(params) {
+class Net::Async::xLM::API::Client :repr(HASH) :strict(params) {
 inherit IO::Async::Notifier;
 use JSON::MaybeXS qw//;
 use Net::Async::HTTP;
@@ -287,8 +287,8 @@ class OpenAIAsync::Client :repr(HASH) :strict(params) {
 async method completion($input) {
 if (ref($input) eq 'HASH') {
-$input = OpenAIAsync::Types::Requests::Completion->new($input->%*);
-} elsif (ref($input) eq 'OpenAIAsync::Types::Requests::Completion') {
+$input = Net::Async::xLM::API::Types::Requests::Completion->new($input->%*);
+} elsif (ref($input) eq 'Net::Async::xLM::API::Types::Requests::Completion') {
 # dummy, nothing to do
 } else {
 die "Unsupported input type [".ref($input)."]";
@@ -296,15 +296,15 @@ class OpenAIAsync::Client :repr(HASH) :strict(params) {
 my $data = await $self->_make_request($input->_endpoint(), $input);
-my $type_result = OpenAIAsync::Types::Results::Completion->new($data->%*);
+my $type_result = Net::Async::xLM::API::Types::Results::Completion->new($data->%*);
 return $type_result;
 }
 async method chat($input) {
 if (ref($input) eq 'HASH') {
-$input = OpenAIAsync::Types::Requests::ChatCompletion->new($input->%*);
-} elsif (ref($input) eq 'OpenAIAsync::Types::Requests::ChatCompletion') {
+$input = Net::Async::xLM::API::Types::Requests::ChatCompletion->new($input->%*);
+} elsif (ref($input) eq 'Net::Async::xLM::API::Types::Requests::ChatCompletion') {
 # dummy, nothing to do
 } else {
 die "Unsupported input type [".ref($input)."]";
@@ -312,15 +312,15 @@ class OpenAIAsync::Client :repr(HASH) :strict(params) {
 my $data = await $self->_make_request($input->_endpoint(), $input);
-my $type_result = OpenAIAsync::Types::Results::ChatCompletion->new($data->%*);
+my $type_result = Net::Async::xLM::API::Types::Results::ChatCompletion->new($data->%*);
 return $type_result;
 }
 async method embedding($input) {
 if (ref($input) eq 'HASH') {
-$input = OpenAIAsync::Types::Requests::Embedding->new($input->%*);
-} elsif (ref($input) eq 'OpenAIAsync::Types::Requests::Embedding') {
+$input = Net::Async::xLM::API::Types::Requests::Embedding->new($input->%*);
+} elsif (ref($input) eq 'Net::Async::xLM::API::Types::Requests::Embedding') {
 # dummy, nothing to do
 } else {
 die "Unsupported input type [".ref($input)."]";
@@ -328,7 +328,7 @@ class OpenAIAsync::Client :repr(HASH) :strict(params) {
 my $data = await $self->_make_request($input->_endpoint(), $input);
-my $type_result = OpenAIAsync::Types::Results::Embedding->new($data->%*);
+my $type_result = Net::Async::xLM::API::Types::Results::Embedding->new($data->%*);
 return $type_result;
 }

View file

@@ -1,9 +1,9 @@
-package OpenAIAsync::Client::Stream;
+package Net::Async::xLM::API::Client::Stream;
 use v5.36;
 use Object::Pad;
-class OpenAIAsync::Client::Stream {
+class Net::Async::xLM::API::Client::Stream {
 use Future::Queue;
 use Future::AsyncAwait;

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server;
+package Net::Async::xLM::API::Server;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 use Future::Queue;
 our $VERSION = '0.02';
@@ -18,18 +18,18 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server - IO::Async based server for OpenAI compatible APIs
+Net::Async::xLM::API::Server - IO::Async based server for OpenAI compatible APIs
 =head1 SYNOPSIS
 use IO::Async::Loop;
-use OpenAIAsync::Server;
+use Net::Async::xLM::API::Server;
 use builtin qw/true false/;
 my $loop = IO::Async::Loop->new();
 class MyServer {
-inherit OpenAIAsync::Server;
+inherit Net::Async::xLM::API::Server;
 method init() {
 # We return the info on where we should be listening, and any other settings for Net::Async::HTTP::Server
@@ -78,14 +78,14 @@ The server object can handle reloading itself without being recreated. This way
 to be closed and reopened to be reconfigured (assuming that the new configuration keeps them open).
 Streaming from ::Server is still being designed, I'll publish this system WITHOUT streaming support the first time around
-since I need to write at least a new http client module that supports it in order to test things properly, and the make OpenAIAsync::Client
+since I need to write at least a new http client module that supports it in order to test things properly, and the make Net::Async::xLM::API::Client
 work with streaming events anyway.
 =head1 Methods
 =head2 new()
-Create a new OpenAIAsync::Server. You'll need to register the server with C<< $loop->add($server) >> after creation.
+Create a new Net::Async::xLM::API::Server. You'll need to register the server with C<< $loop->add($server) >> after creation.
 =head3 PARAMETERS
@@ -208,7 +208,7 @@ Ryan Voots, ... etc.
 =cut
-class OpenAIAsync::Server :repr(HASH) :strict(params) {
+class Net::Async::xLM::API::Server :repr(HASH) :strict(params) {
 inherit IO::Async::Notifier;
 use JSON::MaybeXS qw//;
@@ -223,8 +223,8 @@ class OpenAIAsync::Server :repr(HASH) :strict(params) {
 use HTTP::Request;
 use Scalar::Util qw/blessed/;
-use OpenAIAsync::Types::Requests;
-use OpenAIAsync::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
 field $_json = JSON::MaybeXS->new(utf8 => 1, convert_blessed => 1);
 field $http_server;

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::Audio;
+package Net::Async::xLM::API::Server::API::v1::Audio;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::Audio - Basic audio api role, consumed to implement the OpenAI audio api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::Audio - Basic audio api role, consumed to implement the OpenAI audio api. Does not provide an implementation, you are expected to override them in your class
 TODO document the subroles here, split up because TTS is much simpler to implement than the others and will be more valuable to support alone if someone chooses
@@ -28,13 +28,13 @@ TODO document the subroles here, split up because TTS is much simpler to impleme
 =cut
-role OpenAIAsync::Server::API::v1::AudioTTS :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::AudioTTS :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/audio/speech$},
 handle => "audio_create_speech",
-request_class => "OpenAIAsync::Type::Requests::CreateSpeech",
+request_class => "Net::Async::xLM::API::Type::Requests::CreateSpeech",
 result_class => "", # This gives back a file of audio data
 );
 }
@@ -42,38 +42,38 @@ role OpenAIAsync::Server::API::v1::AudioTTS :strict(params) {
 async method audio_create_speech($future_status, $queue, $ctx, $obj, $params);
 }
-role OpenAIAsync::Server::API::v1::AudioSTT :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::AudioSTT :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/audio/transcription$},
 handle => "audio_create_transcript",
-request_class => "OpenAIAsync::Type::Requests::CreateTranscription",
-result_class => "OpenAIAsync::Type::Response::AudioFile",
+request_class => "Net::Async::xLM::API::Type::Requests::CreateTranscription",
+result_class => "Net::Async::xLM::API::Type::Response::AudioFile",
 );
 }
 async method audio_create_transcript($future_status, $queue, $ctx, $obj, $params);
 }
-role OpenAIAsync::Server::API::v1::AudioTranslate :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::AudioTranslate :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/$},
 handle => "audio_create_translation",
-request_class => "OpenAIAsync::Type::Requests::CreateTranslation",
-result_class => "OpenAIAsync::Type::Response::AudioFile",
+request_class => "Net::Async::xLM::API::Type::Requests::CreateTranslation",
+result_class => "Net::Async::xLM::API::Type::Response::AudioFile",
 );
 }
 async method audio_create_translation($future_status, $queue, $ctx, $obj, $params);
 }
-role OpenAIAsync::Server::API::v1::Audio :strict(params) {
-apply OpenAIAsync::Server::API::v1::AudioTTS;
-apply OpenAIAsync::Server::API::v1::AudioSTT;
-apply OpenAIAsync::Server::API::v1::AudioTranslate;
+role Net::Async::xLM::API::Server::API::v1::Audio :strict(params) {
+apply Net::Async::xLM::API::Server::API::v1::AudioTTS;
+apply Net::Async::xLM::API::Server::API::v1::AudioSTT;
+apply Net::Async::xLM::API::Server::API::v1::AudioTranslate;
 }
 1;

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::ChatCompletion;
+package Net::Async::xLM::API::Server::API::v1::ChatCompletion;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::ChatCompletion - Basic chat api role, consumed to implement the OpenAI chat completion api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::ChatCompletion - Basic chat api role, consumed to implement the OpenAI chat completion api. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,7 +25,7 @@ OpenAIAsync::Server::API::ChatCompletion - Basic chat api role, consumed to impl
 =cut
-role OpenAIAsync::Server::API::v1::ChatCompletion :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::ChatCompletion :strict(params) {
 use Future::AsyncAwait;
 ADJUST {
@@ -33,8 +33,8 @@ role OpenAIAsync::Server::API::v1::ChatCompletion :strict(params) {
 method => 'POST',
 url => qr{^/v1/chat/completions$},
 handle => "chat_completion",
-request_class => "OpenAIAsync::Types::Requests::ChatCompletion",
-result_class => "OpenAIAsync::Types::Results::ChatCompletion",
+request_class => "Net::Async::xLM::API::Types::Requests::ChatCompletion",
+result_class => "Net::Async::xLM::API::Types::Results::ChatCompletion",
 decoder => 'json', # default is json, we need this for this api
 );
 }

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::Completions;
+package Net::Async::xLM::API::Server::API::v1::Completions;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::Completions - Basic completion api role, consumed to implement the OpenAI chat completion api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::Completions - Basic completion api role, consumed to implement the OpenAI chat completion api. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,14 +25,14 @@ OpenAIAsync::Server::API::Completions - Basic completion api role, consumed to i
 =cut
-role OpenAIAsync::Server::API::v1::Completions :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::Completions :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/completions$},
 handle => "completion",
-request_class => "OpenAIAsync::Type::Request::Completion",
-result_class => "OpenAIAsync::Type::Result::Completion",
+request_class => "Net::Async::xLM::API::Type::Request::Completion",
+result_class => "Net::Async::xLM::API::Type::Result::Completion",
 decoder => 'www-form-urlencoded', # default is json, we need this for this api
 );
 }

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::Embeddings;
+package Net::Async::xLM::API::Server::API::v1::Embeddings;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::Embeddings - Basic embeddings api role, consumed to implement the OpenAI embeddings api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::Embeddings - Basic embeddings api role, consumed to implement the OpenAI embeddings api. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,14 +25,14 @@ OpenAIAsync::Server::API::Embeddings - Basic embeddings api role, consumed to im
 =cut
-role OpenAIAsync::Server::API::v1::Embeddings :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::Embeddings :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/embeddings$},
 handle => "embeddings",
-request_class => "OpenAIAsync::Type::Request::Embeddings",
-result_class => "OpenAIAsync::Type::Result::Embeddings",
+request_class => "Net::Async::xLM::API::Type::Request::Embeddings",
+result_class => "Net::Async::xLM::API::Type::Result::Embeddings",
 decoder => 'www-form-urlencoded', # default is json, we need this for this api
 );
 }

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::File;
+package Net::Async::xLM::API::Server::API::v1::File;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::File - Basic file api role, consumed to implement the OpenAI file server. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::File - Basic file api role, consumed to implement the OpenAI file server. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,14 +25,14 @@ OpenAIAsync::Server::API::File - Basic file api role, consumed to implement the
 =cut
-role OpenAIAsync::Server::API::v1::File :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::File :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/files$},
 handle => "file_upload",
-request_class => "OpenAIAsync::Type::Request::FileUpload",
-result_class => "OpenAIAsync::Type::Shared::File",
+request_class => "Net::Async::xLM::API::Type::Request::FileUpload",
+result_class => "Net::Async::xLM::API::Type::Shared::File",
 decoder => 'www-form-urlencoded', # default is json, we need this for this api
 );
 $self->register_url(
@@ -40,28 +40,28 @@ role OpenAIAsync::Server::API::v1::File :strict(params) {
 url => qr{^/v1/files/(?<file_id>[^/]+)/content$},
 handle => "file_download",
 request_class => "", # No req type here
-result_class => "OpenAIAsync::Type::Results::RawFile",
+result_class => "Net::Async::xLM::API::Type::Results::RawFile",
 );
 $self->register_url(
 method => 'GET',
 url => qr{^/v1/files/(?<file_id>[^/]+)$},
 handle => "file_info",
 request_class => "", # No req type here
-result_class => "OpenAIAsync::Type::Shared::File",
+result_class => "Net::Async::xLM::API::Type::Shared::File",
 );
 $self->register_url(
 method => 'DELETE',
 url => qr{^/v1/files/(?<file_id>[^/]+)$},
 handle => "file_delete",
 request_class => "", # No req type here
-result_class => "OpenAIAsync::Type::Results::FileDeletion",
+result_class => "Net::Async::xLM::API::Type::Results::FileDeletion",
 );
 $self->register_url(
 method => 'GET',
 url => qr{^/v1/files$},
 handle => "file_list",
-request_class => "OpenAIAsync::Type::Request::FileList",
-result_class => "OpenAIAsync::Type::Results::FileList",
+request_class => "Net::Async::xLM::API::Type::Request::FileList",
+result_class => "Net::Async::xLM::API::Type::Results::FileList",
 decoder => 'optional_json', # this API input is OPTIONAL, if it's not present then we create a blank object to use.
 );
 }

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::Image;
+package Net::Async::xLM::API::Server::API::v1::Image;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::Image - Basic image role, consumed to implement the OpenAI image api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::Image - Basic image role, consumed to implement the OpenAI image api. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,14 +25,14 @@ OpenAIAsync::Server::API::Image - Basic image role, consumed to implement the Op
 =cut
-role OpenAIAsync::Server::API::v1::Image :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::Image :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'GET',
 url => qr{^/v1/files$},
 handle => "create_image",
-request_class => "OpenAIAsync::Type::Requests::GenerateImage",
-result_class => "OpenAIAsync::Type::Results::RawFile", # TOOD image class?
+request_class => "Net::Async::xLM::API::Type::Requests::GenerateImage",
+result_class => "Net::Async::xLM::API::Type::Results::RawFile", # TOOD image class?
 );
 }

View file

@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::ModelList;
+package Net::Async::xLM::API::Server::API::v1::ModelList;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::ModelList - Basic model list api role, consumed to implement the OpenAI model list api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::ModelList - Basic model list api role, consumed to implement the OpenAI model list api. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,14 +25,14 @@ OpenAIAsync::Server::API::ModelList - Basic model list api role, consumed to imp
 =cut
-role OpenAIAsync::Server::API::v1::ModelList :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::ModelList :strict(params) {
 ADJUST {
 $self->register_url(
 method => 'POST',
 url => qr{^/v1/models$},
 handle => "model_list",
 request_class => "",
-result_class => "OpenAIAsync::Type::Result::ModelList",
+result_class => "Net::Async::xLM::API::Type::Result::ModelList",
 );
 }


@@ -1,4 +1,4 @@
-package OpenAIAsync::Server::API::v1::Moderations;
+package Net::Async::xLM::API::Server::API::v1::Moderations;
 use v5.36.0;
 use Object::Pad;
@@ -6,8 +6,8 @@ use IO::Async::SSL; # We're not directly using it but I want to enforce that we
 use Future::AsyncAwait;
 use IO::Async;
-use OpenAIAsync::Types::Results;
-use OpenAIAsync::Types::Requests;
+use Net::Async::xLM::API::Types::Results;
+use Net::Async::xLM::API::Types::Requests;
 our $VERSION = '0.02';
@@ -17,7 +17,7 @@ our $VERSION = '0.02';
 =head1 NAME
-OpenAIAsync::Server::API::Moderations - Basic moderation api role, consumed to implement the OpenAI moderation api. Does not provide an implementation, you are expected to override them in your class
+Net::Async::xLM::API::Server::API::Moderations - Basic moderation api role, consumed to implement the OpenAI moderation api. Does not provide an implementation, you are expected to override them in your class
 =head1 SYNOPSIS
@@ -25,14 +25,14 @@ OpenAIAsync::Server::API::Moderations - Basic moderation api role, consumed to i
 =cut
-role OpenAIAsync::Server::API::v1::Moderations :strict(params) {
+role Net::Async::xLM::API::Server::API::v1::Moderations :strict(params) {
   ADJUST {
     $self->register_url(
       method => 'POST',
       url => qr{^/v1/moderations$},
       handle => "moderations",
-      request_class => "OpenAIAsync::Type::Requests::CreateModeration",
-      result_class => "OpenAIAsync::Type::Results::Moderations",
+      request_class => "Net::Async::xLM::API::Types::Requests::CreateModeration",
+      result_class => "Net::Async::xLM::API::Types::Results::Moderations",
     );
   }


@@ -1,9 +1,9 @@
-package OpenAIAsync::Server::Stream;
+package Net::Async::xLM::API::Server::Stream;
 use v5.36;
 use Object::Pad;
-class OpenAIAsync::Server::Stream {
+class Net::Async::xLM::API::Server::Stream {
   use Future::Queue;
   # TODO what to do for non-io async setups, long term


@@ -1,4 +1,4 @@
-package OpenAIAsync::Types;
+package Net::Async::xLM::API::Types;
 use v5.36.0;
 use Object::Pad;
@@ -7,7 +7,7 @@ use Object::PadX::Role::AutoJSON;
 use Object::Pad::ClassAttr::Struct;
 # Base role for all the types to simplify things later
-role OpenAIAsync::Types::Base :Struct {
+role Net::Async::xLM::API::Types::Base :Struct {
   apply Object::PadX::Role::AutoJSON;
   apply Object::PadX::Role::AutoMarshal;
@@ -21,7 +21,7 @@ role OpenAIAsync::Types::Base :Struct {
 }
 # Keep the JSON role stuff here, I might use it to annotate encodings of some non-json fields? not sure
-role OpenAIAsync::Types::BaseFormEncoding :Struct {
+role Net::Async::xLM::API::Types::BaseFormEncoding :Struct {
   apply Object::PadX::Role::AutoJSON;
   apply Object::PadX::Role::AutoMarshal;


@@ -1,22 +1,22 @@
-package OpenAIAsync::Types::Requests;
+package Net::Async::xLM::API::Types::Requests;
 use v5.36.0;
 use Object::Pad;
 use Object::PadX::Role::AutoMarshal;
 use Object::PadX::Role::AutoJSON;
 use Object::Pad::ClassAttr::Struct;
-use OpenAIAsync::Types;
-use OpenAIAsync::Types::Shared;
+use Net::Async::xLM::API::Types;
+use Net::Async::xLM::API::Types::Shared;
-role OpenAIAsync::Types::Requests::Base :Struct {
-  apply OpenAIAsync::Types::Base;
+role Net::Async::xLM::API::Types::Requests::Base :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   method _endpoint(); # How the client finds where to send the request
   method decoder() {"json"}
   method encoder() {"json"}
 }
-role OpenAIAsync::Types::Requests::BaseFormEncoding :Struct {
-  apply OpenAIAsync::Types::BaseFormEncoding;
+role Net::Async::xLM::API::Types::Requests::BaseFormEncoding :Struct {
+  apply Net::Async::xLM::API::Types::BaseFormEncoding;
   method _endpoint(); # How the client finds where to send the request
   method decoder() {"www-form-urlencoded"}
   method encoder() {"www-form-urlencoded"}
@@ -24,11 +24,11 @@ role OpenAIAsync::Types::Requests::BaseFormEncoding :Struct {
 #### Base Request Types
-class OpenAIAsync::Types::Requests::ChatCompletion :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/chat/completions"}
-  field $messages :MarshalTo([OpenAIAsync::Types::Requests::ChatCompletion::Messages::Union]);
+  field $messages :MarshalTo([Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Union]);
   field $model :JSONStr = "gpt-3.5-turbo";
   field $frequency_penalty :JSONNum = undef;
   field $presence_penalty :JSONNum = undef;
@@ -47,8 +47,8 @@ class OpenAIAsync::Types::Requests::ChatCompletion :Struct {
   field $function_call :JSONExclude = undef;
   field $functions :JSONExclude = undef;
 }
-class OpenAIAsync::Types::Requests::Completion :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::Completion :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/completions"}
@@ -81,8 +81,8 @@ class OpenAIAsync::Types::Requests::Completion :Struct {
   }
 }
-class OpenAIAsync::Types::Requests::Embedding :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::Embedding :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/embeddings"}
   field $input :JSONStr;
   field $model :JSONStr;
@@ -92,42 +92,42 @@ class OpenAIAsync::Types::Requests::Embedding :Struct {
 ### Request Subtypes
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant::ToolCall :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant::ToolCall :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $id :JSONStr;
   field $arguments :JSONStr;
   field $type :JSONStr;
-  field $function :MarshalTo(OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant::FunctionCall);
+  field $function :MarshalTo(Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant::FunctionCall);
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant::FunctionCall :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant::FunctionCall :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $arguments :JSONStr;
   field $name :JSONStr;
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::Text :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::Text :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $type :JSONStr;
   field $text :JSONStr;
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::ImageUrl :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::ImageUrl :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $url :JSONStr;
   field $detail :JSONStr = undef;
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::Image :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::Image :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $type :JSONStr;
-  field $image_url :MarshalTo(OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::ImageUrl);
+  field $image_url :MarshalTo(Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::ImageUrl);
 }
 # TODO, why have two of these? just shove it into the big one below
 package
-  OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::ContentUnion {
+  Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::ContentUnion {
   # This guy does some additional checks to give us the right type here
   sub new {
@@ -137,17 +137,17 @@ package
     die "Missing type in creation" unless $input{type};
     if ($input{type} eq 'text') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::Text->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::Text->new(%input);
     } elsif ($input{type} eq 'image_url') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::Image->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::Image->new(%input);
     } else {
       die "Unsupported ChatCompletion User Message type: [".$input{type}."]";
     }
   }
 };
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::User :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   # This particular type is more complicated than AutoMarshal can handle, so we need to
   # do this in a custom manner.
   field $role;
@@ -160,7 +160,7 @@ class OpenAIAsync::Types::Requests::ChatCompletion::Messages::User :Struct {
     if (ref($cont) eq 'HASH') {
       # We've got a more detailed type here, create the union type here
-      my $obj = OpenAIAsync::Types::Requests::ChatCompletion::Messages::User::ContentUnion->new(%$cont);
+      my $obj = Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User::ContentUnion->new(%$cont);
     } elsif (ref($cont) eq '') {
       return $cont; # Bare string/scalar is fine
     } else {
@@ -177,31 +177,31 @@ class OpenAIAsync::Types::Requests::ChatCompletion::Messages::User :Struct {
   }
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $role :JSONStr;
   field $content :JSONStr;
   field $name = undef;
-  field $tool_calls :MarshalTo([OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant::ToolCall]) = undef;
-  field $function_call :MarshalTo(OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant::FunctionCall) = undef;
+  field $tool_calls :MarshalTo([Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant::ToolCall]) = undef;
+  field $function_call :MarshalTo(Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant::FunctionCall) = undef;
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::Function :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Function :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $role :JSONStr;
   field $content :JSONStr;
   field $name :JSONStr;
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::Tool :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Tool :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $role :JSONStr;
   field $content :JSONStr;
   field $tool_call_id :JSONStr;
 }
-class OpenAIAsync::Types::Requests::ChatCompletion::Messages::System :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::System :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $role :JSONStr;
   field $name :JSONStr = undef;
   field $content :JSONStr;
@@ -209,7 +209,7 @@ class OpenAIAsync::Types::Requests::ChatCompletion::Messages::System :Struct {
 package
-  OpenAIAsync::Types::Requests::ChatCompletion::Messages::Union {
+  Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Union {
   # This guy does some additional checks to give us the right type here
   sub new {
@@ -217,60 +217,60 @@ package
     die "Missing role in creation" unless $input{role};
     if ($input{role} eq 'system') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::System->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::System->new(%input);
     } elsif ($input{role} eq 'user') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::User->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::User->new(%input);
     } elsif ($input{role} eq 'tool') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::Tool->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Tool->new(%input);
     } elsif ($input{role} eq 'function') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::Function->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Function->new(%input);
     } elsif ($input{role} eq 'assistant') {
-      return OpenAIAsync::Types::Requests::ChatCompletion::Messages::Assistant->new(%input);
+      return Net::Async::xLM::API::Types::Requests::ChatCompletion::Messages::Assistant->new(%input);
    } else {
       die "Unsupported ChatCompletion Message role: [".$input{role}."]";
     }
   }
 };
-class OpenAIAsync::Types::Requests::FileUpload :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::FileUpload :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/files"}
-  field $file :MarshalTo(OpenAIAsync::Types::Shared::FileObject);
+  field $file :MarshalTo(Net::Async::xLM::API::Types::Shared::FileObject);
   field $purpose :JSONStr; # fine-tune and assistants for the types, TODO check format/type of file
 }
-class OpenAIAsync::Types::Requests::FileList :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::FileList :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/files"}
   field $purpose :JSONStr = undef; # fine-tune and assistants for the types, optional, used for filtering
 }
-class OpenAIAsync::Types::Requests::FileInfo :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::FileInfo :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/files/".$self->file_id}
   field $file_id :JSONStr; # id of the file to retrieve
 }
-class OpenAIAsync::Types::Requests::FileDelete :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::FileDelete :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/files/".$self->file_id}
   field $file_id :JSONStr; # id of the file to retrieve
 }
-class OpenAIAsync::Types::Requests::FileContent :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::FileContent :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/files/".$self->file_id.'/content'}
   field $file_id :JSONStr; # id of the file to retrieve
 }
-class OpenAIAsync::Types::Requests::CreateSpeech :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::CreateSpeech :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/audio/speech"}
   field $model :JSONStr = 'tts-1'; # default to cheapest model for simpler requests
@@ -280,8 +280,8 @@ class OpenAIAsync::Types::Requests::CreateSpeech :Struct {
   field $speed :JSONNum = undef; # default 1.0, range 0.25 to 4.0
 }
-class OpenAIAsync::Types::Requests::CreateTranscript :Struct {
-  apply OpenAIAsync::Types::Requests::BaseFormEncoding;
+class Net::Async::xLM::API::Types::Requests::CreateTranscript :Struct {
+  apply Net::Async::xLM::API::Types::Requests::BaseFormEncoding;
   method _endpoint() {"/audio/transcript"}
   field $file;
@@ -294,8 +294,8 @@ class OpenAIAsync::Types::Requests::CreateTranscript :Struct {
 # ED: Why do they only support translating audio to english? seems really limited and I feel like this API will get
 # updated or replaced fairly soon
-class OpenAIAsync::Types::Requests::CreateTranslations :Struct {
-  apply OpenAIAsync::Types::Requests::BaseFormEncoding;
+class Net::Async::xLM::API::Types::Requests::CreateTranslations :Struct {
+  apply Net::Async::xLM::API::Types::Requests::BaseFormEncoding;
   method _endpoint() {"/audio/translations"}
@@ -306,16 +306,16 @@ class OpenAIAsync::Types::Requests::CreateTranslations :Struct {
   field $temperature = undef; # number, between 0 and 1. higher values will make the output more random but lower values will make it more deterministic.
 }
-class OpenAIAsync::Types::Requests::Moderations :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::Moderations :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/moderations"}
   field $input :JSONStr;
   field $model :JSONStr = undef;
 }
-class OpenAIAsync::Types::Requests::GenerateImage :Struct {
-  apply OpenAIAsync::Types::Requests::Base;
+class Net::Async::xLM::API::Types::Requests::GenerateImage :Struct {
+  apply Net::Async::xLM::API::Types::Requests::Base;
   method _endpoint() {"/images/generations"}
   field $prompt :JSONStr;
@@ -328,8 +328,8 @@ class OpenAIAsync::Types::Requests::GenerateImage :Struct {
   field $user :JSONStr = undef;
 }
-class OpenAIAsync::Types::Requests::CreateImageEdit :Struct {
-  apply OpenAIAsync::Types::Requests::BaseFormEncoding;
+class Net::Async::xLM::API::Types::Requests::CreateImageEdit :Struct {
+  apply Net::Async::xLM::API::Types::Requests::BaseFormEncoding;
   method _endpoint() {"/images/edits"}
   field $image; # Image file data, TODO document?
@@ -342,8 +342,8 @@ class OpenAIAsync::Types::Requests::CreateImageEdit :Struct {
   field $user :JSONStr = undef;
 }
-class OpenAIAsync::Types::Requests::CreateImageVariation :Struct {
-  apply OpenAIAsync::Types::Requests::BaseFormEncoding;
+class Net::Async::xLM::API::Types::Requests::CreateImageVariation :Struct {
+  apply Net::Async::xLM::API::Types::Requests::BaseFormEncoding;
   method _endpoint() {"/images/variations"}
   field $image; # Image file data, TODO document?


@@ -2,20 +2,20 @@
 =head1 NAME
-OpenAIAsync::Types::Request::ChatCompletion
+Net::Async::xLM::API::Types::Request::ChatCompletion
 =head1 DESCRIPTION
-A chat completion request, once put through the client you'll get a L<OpenAIAsync::Types::Results::ChatCompletion> with the result of the model.
+A chat completion request, once put through the client you'll get a L<Net::Async::xLM::API::Types::Results::ChatCompletion> with the result of the model.
 =head1 SYNOPSIS
-  use OpenAIAsync::Client;
+  use Net::Async::xLM::API::Client;
   use IO::Async::Loop;
   my $loop = IO::Async::Loop->new();
-  my $client = OpenAIAsync::Client->new();
+  my $client = Net::Async::xLM::API::Client->new();
   $loop->add($client);
   my $output_future = $client->chat({
@@ -38,7 +38,7 @@ A chat completion request, once put through the client you'll get a L<OpenAIAsyn
 =head2 messages (required)
-The messages that are part of the chat, see the L<OpenAIAsync::Types::Request::ChatCompletion/messages> section for details
+The messages that are part of the chat, see the L<Net::Async::xLM::API::Types::Request::ChatCompletion/messages> section for details
 =head2 model
@@ -104,7 +104,7 @@ lead to less variation in the responses at the same time.
 =head2 response_format
-This is currently ignored by OpenAIAsync::Client right now, but will be used to force generation of specific formats of responses.
+This is currently ignored by Net::Async::xLM::API::Client, but will be used to force generation of specific formats of responses.
 OpenAI supports two values, null and C<json_object> to force a correctly formatted JSON response. Needs additional documentation
 for how to use this before I enable it.
@@ -199,7 +199,7 @@ That will generate a new response based on the results of the function calls wit
 =head1 SEE ALSO
-L<OpenAIAsync::Types::Results::ChatCompletion>, L<OpenAIAsync::Client>
+L<Net::Async::xLM::API::Types::Results::ChatCompletion>, L<Net::Async::xLM::API::Client>
 =head1 AUTHOR


@@ -2,11 +2,11 @@
 =head1 NAME
-OpenAIAsync::Types::Request::Completion
+Net::Async::xLM::API::Types::Request::Completion
 =head1 DESCRIPTION
-A completion request, once put through the client you'll get a L<OpenAIAsync::Types::Results::Completion> with the result of the model.
+A completion request, once put through the client you'll get a L<Net::Async::xLM::API::Types::Results::Completion> with the result of the model.
 This type of request is officially deprecated by OpenAI and got its final update in June 2023. That said it's a very simple API and will
 likely exist for some time, but it can be more difficult to control and get continuous responses since you have to do all the prompt formatting
@@ -14,11 +14,11 @@ yourself.
 =head1 SYNOPSIS
-  use OpenAIAsync::Client;
+  use Net::Async::xLM::API::Client;
   use IO::Async::Loop;
   my $loop = IO::Async::Loop->new();
-  my $client = OpenAIAsync::Client->new();
+  my $client = Net::Async::xLM::API::Client->new();
   $loop->add($client);
@@ -61,7 +61,7 @@ Or for an self hosted inference server running a WizardLM style model:
 You will need to consult with whatever model you are using to properly format and handle the response from the model. Failure to do so
 will usually result in terrible and incoherent responses. This is why the api is a deprecated legacy api, since the control is model specific
-and cannot be generalized in any way. For the replacement see L<OpenAIAsync::Types::Requests::ChatCompletion> for a better API, even if you
+and cannot be generalized in any way. For the replacement see L<Net::Async::xLM::API::Types::Requests::ChatCompletion> for a better API, even if you
 are not explicitly doing a chat session.
 =head2 model
@@ -155,7 +155,7 @@ lead to less variation in the responses at the same time.
 =head1 SEE ALSO
-L<OpenAIAsync::Types::Results::Completion>, L<OpenAIAsync::Client>
+L<Net::Async::xLM::API::Types::Results::Completion>, L<Net::Async::xLM::API::Client>
 =head1 AUTHOR


@@ -2,19 +2,19 @@
 =head1 NAME
-OpenAIAsync::Types::Request::Embedding
+Net::Async::xLM::API::Types::Request::Embedding
 =head1 DESCRIPTION
-An embedding request, once put through the client you'll get a L<OpenAIAsync::Types::Results::Embedding> with the result of the model.
+An embedding request, once put through the client you'll get a L<Net::Async::xLM::API::Types::Results::Embedding> with the result of the model.
 =head1 SYNOPSIS
   use IO::Async::Loop;
-  use OpenAIAsync::Client;
+  use Net::Async::xLM::API::Client;
   my $loop = IO::Async::Loop->new();
-  my $client = OpenAIAsync::Client->new();
+  my $client = Net::Async::xLM::API::Client->new();
   $loop->add($client);
@@ -47,7 +47,7 @@ Parameter used for tracking users when you make the api request. Give it whatev
 =head1 SEE ALSO
-L<OpenAIAsync::Types::Results::Embedding>, L<OpenAIAsync::Client>
+L<Net::Async::xLM::API::Types::Results::Embedding>, L<Net::Async::xLM::API::Client>
 =head1 AUTHOR


@@ -1,14 +1,14 @@
-package OpenAIAsync::Types::Results;
+package Net::Async::xLM::API::Types::Results;
 use v5.36.0;
 use Object::Pad;
-use OpenAIAsync::Types;
+use Net::Async::xLM::API::Types;
 use Object::PadX::Role::AutoMarshal;
 use Object::PadX::Role::AutoJSON;
 use Object::Pad::ClassAttr::Struct;
-role OpenAIAsync::Types::Results::Encoder::JSON {
-  apply OpenAIAsync::Types::Base;
+role Net::Async::xLM::API::Types::Results::Encoder::JSON {
+  apply Net::Async::xLM::API::Types::Base;
   apply Object::PadX::Role::AutoJSON;
   apply Object::PadX::Role::AutoMarshal;
@@ -25,8 +25,8 @@ role OpenAIAsync::Types::Results::Encoder::JSON {
   method _event_name() {"event"}
 }
-role OpenAIAsync::Types::Results::Encoder::Raw {
-  apply OpenAIAsync::Types::Base;
+role Net::Async::xLM::API::Types::Results::Encoder::Raw {
+  apply Net::Async::xLM::API::Types::Base;
   apply Object::PadX::Role::AutoJSON;
   apply Object::PadX::Role::AutoMarshal;
@@ -37,8 +37,8 @@ role OpenAIAsync::Types::Results::Encoder::Raw {
   }
 }
-role OpenAIAsync::Types::Results::Encoder::WWWForm {
-  apply OpenAIAsync::Types::Base;
+role Net::Async::xLM::API::Types::Results::Encoder::WWWForm {
+  apply Net::Async::xLM::API::Types::Base;
   apply Object::PadX::Role::AutoJSON;
   apply Object::PadX::Role::AutoMarshal;
@@ -49,89 +49,89 @@ role OpenAIAsync::Types::Results::Encoder::WWWForm {
   }
 }
-class OpenAIAsync::Types::Results::ToolCall :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Results::ToolCall :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $id :JSONStr = undef;
   field $type :JSONStr = undef; # always "function" right now, may get expanded in the future
-  field $function :MarshalTo(OpenAIAsync::Types::Results::FunctionCall) = undef;
+  field $function :MarshalTo(Net::Async::xLM::API::Types::Results::FunctionCall) = undef;
 }
-class OpenAIAsync::Types::Results::FunctionCall :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Results::FunctionCall :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $arguments :JSONStr = undef; # TODO decode the json from this directly?
   field $name :JSONStr = undef;
 }
-class OpenAIAsync::Types::Results::ChatMessage :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Results::ChatMessage :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $content :JSONStr;
-  field $tool_calls :MarshalTo([OpenAIAsync::Types::Results::ToolCall]) = undef; # don't think my local server provides this
+  field $tool_calls :MarshalTo([Net::Async::xLM::API::Types::Results::ToolCall]) = undef; # don't think my local server provides this
   field $role :JSONStr;
-  field $function_call :MarshalTo(OpenAIAsync::Types::Results::FunctionCall) = undef; # Depcrecated, might still happen
+  field $function_call :MarshalTo(Net::Async::xLM::API::Types::Results::FunctionCall) = undef; # Deprecated, might still happen
 }
-class OpenAIAsync::Types::Results::ChatCompletionChoices :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Results::ChatCompletionChoices :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $finish_reason :JSONStr;
   field $index :JSONNum;
-  field $message :MarshalTo(OpenAIAsync::Types::Results::ChatMessage);
+  field $message :MarshalTo(Net::Async::xLM::API::Types::Results::ChatMessage);
 }
-class OpenAIAsync::Types::Results::ChatCompletion :Struct {
-  apply OpenAIAsync::Types::Results::Encoder::JSON
+class Net::Async::xLM::API::Types::Results::ChatCompletion :Struct {
+  apply Net::Async::xLM::API::Types::Results::Encoder::JSON;
   field $id :JSONStr;
-  field $choices :MarshalTo([OpenAIAsync::Types::Results::ChatCompletionChoices]);
+  field $choices :MarshalTo([Net::Async::xLM::API::Types::Results::ChatCompletionChoices]);
   field $created :JSONStr;
   field $model :JSONStr;
   field $system_fingerprint :JSONStr = undef; # My local system doesn't provide this
-  field $usage :MarshalTo(OpenAIAsync::Types::Results::Usage);
+  field $usage :MarshalTo(Net::Async::xLM::API::Types::Results::Usage);
   field $object :JSONStr;
 }
-class OpenAIAsync::Types::Results::ChunkDelta :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Results::ChunkDelta :Struct {
+  apply Net::Async::xLM::API::Types::Base;
   field $content :JSONStr;
-  field $function_call :MarshalTo(OpenAIAsync::Types::Results::FunctionCall) = undef;
-  field $tool_cass :MarshalTo([OpenAIAsync::Types::Results::ToolCall]) = undef;
+  field $function_call :MarshalTo(Net::Async::xLM::API::Types::Results::FunctionCall) = undef;
+  field $tool_calls :MarshalTo([Net::Async::xLM::API::Types::Results::ToolCall]) = undef;
   field $role :JSONStr;
 }
-class OpenAIAsync::Types::Results::ChatCompletionChunkChoices :Struct {
-  apply OpenAIAsync::Types::Base;
+class Net::Async::xLM::API::Types::Results::ChatCompletionChunkChoices :Struct {
+  apply Net::Async::xLM::API::Types::Base;
-  field $delta :MarshalTo(OpenAIAsync::Types::Results::ChunkDelta);
+  field $delta :MarshalTo(Net::Async::xLM::API::Types::Results::ChunkDelta);
field $finish_reason :JSONStr; field $finish_reason :JSONStr;
field $index :JSONStr; field $index :JSONStr;
} }
# This is part of the streaming API # This is part of the streaming API
class OpenAIAsync::Types::Results::ChatCompletionChunk :Struct { class Net::Async::xLM::API::Types::Results::ChatCompletionChunk :Struct {
apply OpenAIAsync::Types::Base; apply Net::Async::xLM::API::Types::Base;
field $id :JSONStr; field $id :JSONStr;
field $choices :MarshalTo(OpenAIAsync::Types::Results::ChatCompletionChunkChoices); field $choices :MarshalTo(Net::Async::xLM::API::Types::Results::ChatCompletionChunkChoices);
field $created :JSONStr; field $created :JSONStr;
field $model :JSONStr; field $model :JSONStr;
field $system_fingerprint :JSONStr = undef; field $system_fingerprint :JSONStr = undef;
field $object :JSONStr; field $object :JSONStr;
} }
class OpenAIAsync::Types::Results::Usage :Struct { class Net::Async::xLM::API::Types::Results::Usage :Struct {
apply OpenAIAsync::Types::Base; apply Net::Async::xLM::API::Types::Base;
field $total_tokens :JSONNum; field $total_tokens :JSONNum;
field $prompt_tokens :JSONNum; field $prompt_tokens :JSONNum;
field $completion_tokens :JSONNum = undef; # look at chat completions, is this the same field $completion_tokens :JSONNum = undef; # look at chat completions, is this the same
} }
class OpenAIAsync::Types::Results::LogProbs :Struct { class Net::Async::xLM::API::Types::Results::LogProbs :Struct {
apply OpenAIAsync::Types::Base; apply Net::Async::xLM::API::Types::Base;
# TODO what's the representation here? # TODO what's the representation here?
field $text_offset = undef; field $text_offset = undef;
@@ -140,54 +140,54 @@ class Net::Async::xLM::API::Types::Results::LogProbs :Struct {
  field $top_logprobs = undef;
}

class Net::Async::xLM::API::Types::Results::CompletionChoices :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $text :JSONStr;
  field $index :JSONNum;
  field $logprobs :MarshalTo(Net::Async::xLM::API::Types::Results::LogProbs) = undef; # TODO make nicer type?
  field $finish_reason :JSONStr = undef; # TODO enum? helper funcs for this class? ->is_finished?
}

class Net::Async::xLM::API::Types::Results::Completion :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $id :JSONStr;
  field $choices :MarshalTo([Net::Async::xLM::API::Types::Results::CompletionChoices]);
  field $created :JSONStr;
  field $model :JSONStr;
  field $system_fingerprint = undef; # my local implementation doesn't provide this, openai does it for tracking changes somehow
  field $usage :MarshalTo(Net::Async::xLM::API::Types::Results::Usage);
  field $object :JSONStr;
}

class Net::Async::xLM::API::Types::Results::Embedding :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $object :JSONStr;
  field $model :JSONStr;
  field $usage :MarshalTo(Net::Async::xLM::API::Types::Results::Usage);
  field $data :MarshalTo([Net::Async::xLM::API::Types::Results::EmbeddingData]);
}

class Net::Async::xLM::API::Types::Results::EmbeddingData :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $index :JSONNum;
  field $embedding :JSONList(JSONNum);
  field $object :JSONStr;
}

class Net::Async::xLM::API::Types::Results::ModelList :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $object :JSONStr = 'list';
  field $data :MarshalTo(Net::Async::xLM::API::Types::Results::ModelInfo);
}

class Net::Async::xLM::API::Types::Results::ModelInfo :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $created :JSONNum;
  field $id :JSONStr;
@@ -195,24 +195,24 @@ class Net::Async::xLM::API::Types::Results::ModelInfo :Struct {
  field $owned_by :JSONStr;
}

class Net::Async::xLM::API::Types::Results::Moderation :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $id :JSONStr;
  field $model :JSONStr;
  field $results :MarshalTo([Net::Async::xLM::API::Types::Results::ModerationResults]); # Not really sure why it's an array, the input doesn't allow multiple things to categorize
}

class Net::Async::xLM::API::Types::Results::ModerationResults :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $flagged :JSONBool;
  field $categories :MarshalTo(Net::Async::xLM::API::Types::Results::ModerationResultsCategories);
  field $category_scores :MarshalTo(Net::Async::xLM::API::Types::Results::ModerationResultsCategoryScores);
}

class Net::Async::xLM::API::Types::Results::ModerationResultsCategories :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $hate :JSONBool;
  field $hate_threatening :JSONBool :JSONKey(hate/threatening);
@@ -227,8 +227,8 @@ class Net::Async::xLM::API::Types::Results::ModerationResultsCategories :Struct {
  field $violence_graphic :JSONBool :JSONKey(violence/graphic);
}

class Net::Async::xLM::API::Types::Results::ModerationResultsCategoryScores :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $hate :JSONNum;
  field $hate_threatening :JSONNum :JSONKey(hate/threatening);
@@ -243,8 +243,8 @@ class Net::Async::xLM::API::Types::Results::ModerationResultsCategoryScores :Struct {
  field $violence_graphic :JSONNum :JSONKey(violence/graphic);
}

class Net::Async::xLM::API::Types::Results::Image :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $b64_json :JSONStr = undef;
  field $url :JSONStr = undef;
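The wire format these result classes marshal from can be sanity-checked without the module itself. The sketch below uses only core C<JSON::PP>, with invented values; the key layout mirrors the C<ChatCompletion>, C<ChatCompletionChoices>, C<ChatMessage>, and C<Usage> struct definitions above.

```perl
#!/usr/bin/env perl
use v5.36;
use JSON::PP;

# A ChatCompletion response body shaped the way the structs above expect it:
# choices is an array, each element carrying a message with role/content,
# and usage carries the three token counts.
my $body = <<'JSON';
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "local-model",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {"role": "assistant", "content": "Hello there."}
    }
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
}
JSON

my $data = decode_json($body);
say $data->{choices}[0]{message}{content};  # the text a caller usually wants
say $data->{usage}{total_tokens};
```

The same nesting is what a marshalled C<ChatCompletion> object exposes through its accessors, one struct class per level.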


@@ -2,20 +2,20 @@

=head1 NAME

Net::Async::xLM::API::Types::Results::ChatCompletion

=head1 DESCRIPTION

An object representing a Chat Completion response, see L<Net::Async::xLM::API::Types::Request::ChatCompletion>

=head1 SYNOPSIS

  use Net::Async::xLM::API::Client;
  use IO::Async::Loop;

  my $loop = IO::Async::Loop->new();
  my $client = Net::Async::xLM::API::Client->new();

  $loop->add($client);

  my $output_future = $client->chat({

@@ -42,7 +42,7 @@ id of the response, used for debugging and tracking

=head2 choices

The chat responses; see L<Net::Async::xLM::API::Types::Results::ChatCompletionChoices> for details. The text of the responses will be here

=head2 created

@@ -58,7 +58,7 @@ Given by the service to identify which server actually generated the response, u

=head2 usage

Token counts for the generated responses, in a L<Net::Async::xLM::API::Types::Results::Usage> object. Has C<total_tokens>, C<prompt_tokens>, and C<completion_tokens> fields.

=head2 object

@@ -66,7 +66,7 @@ Static field that will likely only ever contain C<chat.completion>

=head1 SEE ALSO

L<Net::Async::xLM::API::Types::Request::Completion>, L<Net::Async::xLM::API::Types::Results::Completion>, L<Net::Async::xLM::API::Client>

=head1 AUTHOR


@@ -2,19 +2,19 @@

=head1 NAME

Net::Async::xLM::API::Types::Results::Completion

=head1 DESCRIPTION

A result from a completion request, L<Net::Async::xLM::API::Types::Request::Completion>

=head1 SYNOPSIS

  use Net::Async::xLM::API::Client;
  use IO::Async::Loop;

  my $loop = IO::Async::Loop->new();
  my $client = Net::Async::xLM::API::Client->new();

  $loop->add($client);

@@ -32,7 +32,7 @@ id of the completion response, used for tracking duplicate responses or reporting

=head2 choices

An array of L<Net::Async::xLM::API::Types::Results::CompletionChoices> objects. If you asked for more than 1 response with the request parameter C<n> then they will be present here.

You likely just want to get ->text from the first result, as demonstrated in the synopsis, but see the ::CompletionChoices docs for more detailed information.
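That C<< ->text >> path can be illustrated with plain core-Perl JSON handling. This is a sketch with invented values; it shows that a plain completion carries its text directly on each choice, rather than under a C<message> as a chat completion does.

```perl
#!/usr/bin/env perl
use v5.36;
use JSON::PP;

# A Completion response body matching the CompletionChoices layout:
# each choice has text/index/logprobs/finish_reason at the top level.
my $body = <<'JSON';
{
  "id": "cmpl-123",
  "object": "completion",
  "created": 1700000000,
  "model": "local-model",
  "choices": [
    {"index": 0, "text": "Once upon a time", "logprobs": null, "finish_reason": "length"}
  ],
  "usage": {"prompt_tokens": 4, "completion_tokens": 4, "total_tokens": 8}
}
JSON

my $data = decode_json($body);
say $data->{choices}[0]{text};          # a chat response would be under message/content instead
say $data->{choices}[0]{finish_reason}; # "length" means generation hit the token limit
```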
@@ -52,7 +52,7 @@ Used by OpenAI to identify which system the generation happened on. Needed for

=head2 usage

A L<Net::Async::xLM::API::Types::Results::Usage> object, has three fields C<total_tokens>, C<prompt_tokens>, and C<completion_tokens>

=head2 object

@@ -60,7 +60,7 @@ A string describing what kind of result this was, will always be "completion".

=head1 SEE ALSO

L<Net::Async::xLM::API::Types::Request::Completion>, L<Net::Async::xLM::API::Client>

=head1 AUTHOR


@@ -2,19 +2,19 @@

=head1 NAME

Net::Async::xLM::API::Types::Results::CompletionChoices

=head1 DESCRIPTION

A choice from a completion request, L<Net::Async::xLM::API::Types::Request::Completion>, as part of L<Net::Async::xLM::API::Types::Results::Completion>

=head1 SYNOPSIS

  use Net::Async::xLM::API::Client;
  use IO::Async::Loop;

  my $loop = IO::Async::Loop->new();
  my $client = Net::Async::xLM::API::Client->new();

  $loop->add($client);

@@ -36,7 +36,7 @@ Index of the choice? I believe this will just always be the same as its position

=head2 logprobs

Log probabilities, see L<Net::Async::xLM::API::Types::Results::LogProbs> for details

=head2 finish_reason

@@ -44,7 +44,7 @@ What made the model stop generating. Could be from hitting a stop token, or run

=head1 SEE ALSO

L<Net::Async::xLM::API::Types::Request::Completion>, L<Net::Async::xLM::API::Types::Results::Completion>, L<Net::Async::xLM::API::Client>

=head1 AUTHOR


@@ -2,19 +2,19 @@

=head1 NAME

Net::Async::xLM::API::Types::Results::Embedding

=head1 DESCRIPTION

A result from an embedding request, L<Net::Async::xLM::API::Types::Request::Embedding>

=head1 SYNOPSIS

  use IO::Async::Loop;
  use Net::Async::xLM::API::Client;

  my $loop = IO::Async::Loop->new();
  my $client = Net::Async::xLM::API::Client->new();

  $loop->add($client);

@@ -36,7 +36,7 @@ already loaded, and this will reflect what was loaded.

=head2 data

A C<Net::Async::xLM::API::Types::Results::EmbeddingData> object, used just for this. It has the following fields: C<index>, C<embedding>, C<object>

@@ -44,7 +44,7 @@ Of these, you probably only want embedding as it's the list of the numbers repre

=head2 usage

A L<Net::Async::xLM::API::Types::Results::Usage> object, has three fields C<total_tokens>, C<prompt_tokens>, and C<completion_tokens>
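The embedding data layout can be sketched the same way with core C<JSON::PP>. This toy example uses an invented three-dimensional vector (real models return hundreds or thousands of dimensions):

```perl
#!/usr/bin/env perl
use v5.36;
use JSON::PP;
use List::Util qw(sum);

# An Embedding response body matching the Embedding/EmbeddingData layout:
# data is an array, each element carrying index/object/embedding.
my $body = <<'JSON';
{
  "object": "list",
  "model": "local-model",
  "data": [
    {"index": 0, "object": "embedding", "embedding": [0.1, -0.2, 0.3]}
  ],
  "usage": {"prompt_tokens": 5, "total_tokens": 5}
}
JSON

my $data = decode_json($body);
my $vec  = $data->{data}[0]{embedding};
say scalar @$vec;                             # dimensionality of the embedding

# Callers often want the Euclidean norm, e.g. to normalize before cosine similarity.
my $norm = sqrt(sum map { $_ * $_ } @$vec);
printf "%.4f\n", $norm;
```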
=head2 object

@@ -52,7 +52,7 @@ A string describing what kind of result this was, will always be "embedding".

=head1 SEE ALSO

L<Net::Async::xLM::API::Types::Request::Embedding>, L<Net::Async::xLM::API::Client>

=head1 AUTHOR


@@ -2,7 +2,7 @@

=head1 NAME

Net::Async::xLM::API::Types::Results::LogProbs

=head1 DESCRIPTION

@@ -28,7 +28,7 @@ Not available on my local ai server, will update in next set of changes from how

=head1 SEE ALSO

L<Net::Async::xLM::API::Types::Request::Completion>, L<Net::Async::xLM::API::Types::Results::Completion>, L<Net::Async::xLM::API::Client>

=head1 AUTHOR


@@ -2,7 +2,7 @@

=head1 NAME

Net::Async::xLM::API::Types::Results::Usage

=head1 DESCRIPTION

@@ -30,7 +30,7 @@ How many total tokens were processed in completing the request. May also include

=head1 SEE ALSO

L<Net::Async::xLM::API::Types::Request::Completion>, L<Net::Async::xLM::API::Client>

=head1 AUTHOR


@@ -1,16 +1,16 @@

package Net::Async::xLM::API::Types::Shared;

use v5.36.0;
use Object::Pad;
use Object::PadX::Role::AutoMarshal;
use Object::PadX::Role::AutoJSON;
use Object::Pad::ClassAttr::Struct;

use Net::Async::xLM::API::Types;

# TODO this is shared request and result?
# TODO Add a method here that given a file name will create a new object with things filled out
class Net::Async::xLM::API::Types::Shared::FileObject :Struct {
  apply Net::Async::xLM::API::Types::Base;
  field $id :JSONStr = undef; # Only optional for uploads, but always comes back from the service. TODO make a check
  field $bytes :JSONNum;
  field $created_at :JSONNum;
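A FileObject body can be sketched in the same core-Perl style. C<id>, C<bytes>, and C<created_at> come from the fields shown above; C<filename>, C<object>, and C<purpose> are assumptions based on the upstream files API and may differ, since the class definition is truncated here.

```perl
#!/usr/bin/env perl
use v5.36;
use JSON::PP;

# Hypothetical file-object body; only id/bytes/created_at are confirmed
# by the FileObject fields above, the rest is assumed.
my $body = '{"id":"file-abc123","bytes":10240,"created_at":1700000000,'
         . '"filename":"train.jsonl","object":"file","purpose":"fine-tune"}';

my $file = decode_json($body);
say $file->{id};
say $file->{bytes};
```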