Update some more docs
Some checks failed
ci/woodpecker/push/author-tests Pipeline failed

This commit is contained in:
Ryan Voots 2023-11-27 10:22:16 -05:00
parent c1a08eeb27
commit f58a113242
8 changed files with 93 additions and 12 deletions


@@ -9,7 +9,7 @@ use IO::Async;
use OpenAIAsync::Types::Results;
use OpenAIAsync::Types::Requests;
our $VERSION="v0.1.0";
our $VERSION = '0.02';
# ABSTRACT: Async client for OpenAI style REST API for various AI systems (LLMs, Images, Video, etc.)
@@ -48,7 +48,7 @@ OpenAIAsync::Client - IO::Async based client for OpenAI compatible APIs
max_tokens => 1024,
})->get();
# $output is now an OpenAIAsync::Type::Response::ChatCompletion
# $output is now an OpenAIAsync::Type::Results::ChatCompletion
=head1 THEORY OF OPERATION
@@ -174,6 +174,10 @@ Unimplemented. The opposite of the above.
Unimplemented. I've not investigated this one much yet, but I believe it's for getting a description of an image and its contents.
=head2 Missing apis
At least some for getting the list of models and some other meta information; those will be added next, after I get some more documentation written.
=head1 SEE ALSO
L<IO::Async>, L<Future::AsyncAwait>, L<Net::Async::HTTP>


@@ -48,7 +48,7 @@ OpenAIAsync::Client - IO::Async based client for OpenAI compatible APIs
max_tokens => 1024,
})->get();
# $output is now an OpenAIAsync::Type::Response::ChatCompletion
# $output is now an OpenAIAsync::Type::Results::ChatCompletion
=head1 THEORY OF OPERATION


@@ -6,7 +6,7 @@ OpenAIAsync::Types::Request::ChatCompletion
=head1 DESCRIPTION
A chat completion request, once put through the client you'll get a L<OpenAIAsync::Types::Response::ChatCompletion> with the result of the model.
A chat completion request, once put through the client you'll get a L<OpenAIAsync::Types::Results::ChatCompletion> with the result of the model.
=head1 SYNOPSIS
@@ -199,7 +199,7 @@ That will generate a new response based on the results of the function calls wit
=head1 SEE ALSO
L<OpenAIAsync::Types::Response::ChatCompletion>, L<OpenAIAsync::Client>
L<OpenAIAsync::Types::Results::ChatCompletion>, L<OpenAIAsync::Client>
=head1 AUTHOR


@@ -6,7 +6,7 @@ OpenAIAsync::Types::Request::Completion
=head1 DESCRIPTION
A completion request, once put through the client you'll get a L<OpenAIAsync::Types::Response::Completion> with the result of the model.
A completion request, once put through the client you'll get a L<OpenAIAsync::Types::Results::Completion> with the result of the model.
This type of request is officially deprecated by OpenAI and got its final update in June 2023. That said, it's a very simple API and will
likely exist for some time, but it can be more difficult to control and get continuous responses since you have to do all the prompt formatting
@@ -155,7 +155,7 @@ lead to less variation in the responses at the same time.
=head1 SEE ALSO
L<OpenAIAsync::Types::Response::Completion>, L<OpenAIAsync::Client>
L<OpenAIAsync::Types::Results::Completion>, L<OpenAIAsync::Client>
=head1 AUTHOR


@@ -6,7 +6,7 @@ OpenAIAsync::Types::Request::Embedding
=head1 DESCRIPTION
An embedding request, once put through the client you'll get a L<OpenAIAsync::Types::Response::Embedding> with the result of the model.
An embedding request, once put through the client you'll get a L<OpenAIAsync::Types::Results::Embedding> with the result of the model.
=head1 SYNOPSIS
@@ -47,7 +47,7 @@ Parameter used for tracking users when you make the api request. Give it whatev
=head1 SEE ALSO
L<OpenAIAsync::Types::Response::Embedding>, L<OpenAIAsync::Client>
L<OpenAIAsync::Types::Results::Embedding>, L<OpenAIAsync::Client>
=head1 AUTHOR


@@ -0,0 +1,75 @@
=pod
=head1 NAME
OpenAIAsync::Types::Results::ChatCompletion
=head1 DESCRIPTION
An object representing a Chat Completion response; see L<OpenAIAsync::Types::Request::ChatCompletion> for the request that produces it.
=head1 SYNOPSIS
use OpenAIAsync::Client;
use IO::Async::Loop;
my $loop = IO::Async::Loop->new();
my $client = OpenAIAsync::Client->new();
$loop->add($client);
my $output_future = $client->chat({
model => "gpt-3.5-turbo",
messages => [
{
role => "system",
content => "You are a helpful assistant that tells fanciful stories"
},
{
role => "user",
content => "Tell me a story of two princesses, Judy and Emmy. Judy is 8 and Emmy is 2."
}
],
max_tokens => 1024,
});
=head1 FIELDS
=head2 id
The ID of the response, used for debugging and tracking.
=head2 choices
The chat responses; see L<OpenAIAsync::Types::Results::ChatCompletionChoices> for details. The text of the responses will be here.
=head2 created
Date and time when the response was generated.
=head2 model
Name of the model that actually generated the response; it may not be the same as the requested model, depending on the service.
=head2 system_fingerprint
Given by the service to identify which server actually generated the response; used to detect changes and issues with servers.
=head2 usage
Token counts for the generated responses, in a L<OpenAIAsync::Types::Results::Usage> object. Has C<total_tokens>, C<prompt_tokens>, and C<completion_tokens> fields.
=head2 object
Static field that will likely only ever contain C<chat.completion>.
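Putting the fields above together, the result object can be walked with plain accessors once the future resolves. A minimal sketch; the C<chat> method, C<get>, and the field names come from this doc, but the C<message>/C<content> accessors on each choice are an assumption based on the OpenAI-style schema, and running it requires a reachable OpenAI-compatible server:

```perl
use OpenAIAsync::Client;
use IO::Async::Loop;

my $loop   = IO::Async::Loop->new();
my $client = OpenAIAsync::Client->new();
$loop->add($client);

# Resolve the future and read the documented fields off the result.
my $output = $client->chat({
    model      => "gpt-3.5-turbo",
    messages   => [ { role => "user", content => "Say hello" } ],
    max_tokens => 16,
})->get();

printf "id: %s  model: %s\n", $output->id, $output->model;

for my $choice (@{ $output->choices }) {
    # Accessor names on the choice object are assumed, not confirmed here.
    print $choice->message->content, "\n";
}

printf "tokens used: %d\n", $output->usage->total_tokens;
```

The same pattern applies to the other request types: build the request hash, call the client method, then C<get> the future.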
=head1 SEE ALSO
L<OpenAIAsync::Types::Request::Completion>, L<OpenAIAsync::Types::Results::Completion>, L<OpenAIAsync::Client>
=head1 AUTHOR
Ryan Voots ...
=cut


@@ -6,7 +6,7 @@ OpenAIAsync::Types::Results::CompletionChoices
=head1 DESCRIPTION
A choice from a completion request, L<OpenAIAsync::Types::Request::Completion> as part of L<OpenAIAsync::Types::Result::Completion>
A choice from a completion request, L<OpenAIAsync::Types::Request::Completion> as part of L<OpenAIAsync::Types::Results::Completion>
=head1 SYNOPSIS
@@ -44,7 +44,7 @@ What made the model stop generating. Could be from hitting a stop token, or run
=head1 SEE ALSO
L<OpenAIAsync::Types::Request::Completion>, L<OpenAIAsync::Types::Result::Completion>, L<OpenAIAsync::Client>
L<OpenAIAsync::Types::Request::Completion>, L<OpenAIAsync::Types::Results::Completion>, L<OpenAIAsync::Client>
=head1 AUTHOR


@@ -24,6 +24,8 @@ Which position in the resulting text this log probability represents
=head2 top_logprobs
Not available on my local AI server; will update in the next set of changes based on how OpenAI implements them.
=head1 SEE ALSO
L<OpenAIAsync::Types::Request::Completion>, L<OpenAIAsync::Types::Results::Completion>, L<OpenAIAsync::Client>
@@ -32,4 +34,4 @@ L<OpenAIAsync::Types::Request::Completion>, L<OpenAIAsync::Types::Result::Comple
Ryan Voots ...
=cu
=cut