octoai.clients package
Submodules
octoai.clients.asset_orch module
Asset Orchestrator class.
- class octoai.clients.asset_orch.Asset(id, asset_type, name, description, size_bytes, status, status_details, created_at, data, tenant_uuid, *args, **kwargs)[source]
Bases: object
Asset Orchestrator implementation of an asset.
Hashable; the asset’s UUID is used as the key if the asset is stored in a dictionary.
- Parameters:
id – UUID unique to asset.
asset_type – AssetType, including “lora”, “vae”, “checkpoint”, or “textual-inversion”.
name – Alphanumeric, _, or - allowed.
description – Description of asset.
size_bytes – Total bytes of the asset file.
status – Status of Asset. One of “ready_to_upload”, “ready”, “uploaded”, “deleted”, “rejected”, or “error”.
status_details – Description of asset status.
created_at – Time created.
data – Additional information about asset such as engine, file_format, etc.
tenant_uuid – UUID of the tenant that created the asset.
- class octoai.clients.asset_orch.AssetOrchestrator(token: str | None = None, config_path: str | None = None, endpoint: str = 'https://api.octoai.cloud/')[source]
Bases: object
Asset Orchestrator class to create, read, delete, and list assets.
- Parameters:
token – OctoAI API token. If None, the OCTOAI_TOKEN environment variable is used. Defaults to None.
config_path – Path to the config file written by the CLI, used if neither token nor the environment variable is set. Defaults to None, in which case the default path is checked.
endpoint – Defaults to “https://api.octoai.cloud/”.
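A minimal sketch of constructing the client, assuming a valid API token is available; the token value below is a placeholder.

```python
from octoai.clients.asset_orch import AssetOrchestrator

# Pass a token explicitly (placeholder shown), or omit it to fall back to the
# OCTOAI_TOKEN environment variable or the CLI config file, as described above.
asset_orch = AssetOrchestrator(token="YOUR_OCTOAI_TOKEN")
```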
- create(data: AssetData, name: str, file: str | None = None, url: str | None = None, is_public: bool = False, description: str | None = None, transfer_api_type: str | TransferApiType | None = None) Asset [source]
Create and upload an asset.
- Parameters:
data – CheckpointData, LoraData, VAEData, or TextualInversionData.
name – Name of asset, alphanumeric with - and _ characters allowed.
file – Path to the file to upload as a str, optional, defaults to None.
url – URL to copy the file data from instead of file, optional, defaults to None. If set, file and transfer_api_type must be None.
is_public – Whether the asset is public, optional, defaults to False.
description – Description of asset, optional, defaults to None.
transfer_api_type – TransferApiType or str of either “presigned-url” or “sts”, defaults to “sts” for files >= 50 MB and “presigned-url” for files under 50 MB.
- Returns:
Asset object representing the created asset.
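A minimal sketch of uploading a checkpoint from a local file, based on the signature above; the asset name and file path are illustrative placeholders.

```python
from octoai.clients.asset_orch import AssetOrchestrator, CheckpointData

asset_orch = AssetOrchestrator()  # token resolved from OCTOAI_TOKEN or the config file

# Engine, file format, and data type may be given as strings or their enum values.
checkpoint_data = CheckpointData(
    engine="image/stable-diffusion-xl-v1-0",
    file_format="safetensors",
    data_type="fp16",
)

asset = asset_orch.create(
    data=checkpoint_data,
    name="my-sdxl-checkpoint",                 # placeholder asset name
    file="checkpoints/my_model.safetensors",   # placeholder local path
    description="Example checkpoint upload",
)
print(asset.id, asset.status)
```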
- delete(asset_id: str | None) DeleteAssetResponse [source]
Delete an asset.
- Parameters:
asset_id – The UUID of the asset to be deleted.
- Returns:
DeleteAssetResponse containing an id as a str and deleted_at as a str with the timestamp.
- get(name: str | None = None, id: str | None = None) Asset [source]
Get an asset associated with an asset name or asset id.
- Parameters:
name – Name of the asset to get.
id – ID of the asset to get.
- Returns:
Asset object matching the given name or id.
- list(name: str | None = None, is_public: bool | None = None, data_type: DataType | None = None, asset_type: List[AssetType] | dict | None = None, engine: List[BaseEngine] | None = None, limit: int | None = None, offset: int | None = None, owner: str | None = None) list[Asset] [source]
Return list of assets filtered on the non-None parameters.
- Parameters:
name – Asset name, alphanumeric, -, and _ allowed. Defaults to None.
is_public – Whether to filter for public assets, for example OctoAI’s public assets. Defaults to None.
data_type – DataType, defaults to None.
asset_type – List of AssetType of assets, defaults to None.
engine – List of BaseEngine of assets, defaults to None.
limit – Max number of assets to return, defaults to None.
offset – Where to start including next list of assets, defaults to None.
owner – ID of owner, defaults to None.
- Returns:
list[Asset]
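A short sketch of filtering, fetching, and deleting assets with the methods above; the asset name used for filtering is a placeholder.

```python
from octoai.clients.asset_orch import AssetOrchestrator

asset_orch = AssetOrchestrator()

# List up to 10 assets whose name matches the filter.
for asset in asset_orch.list(name="my-sdxl-checkpoint", limit=10):
    print(asset.id, asset.name, asset.status)

# Look up a single asset by name, then delete it by id.
asset = asset_orch.get(name="my-sdxl-checkpoint")
response = asset_orch.delete(asset.id)
print(response.deleted_at)
```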
- class octoai.clients.asset_orch.CheckpointData(engine: BaseEngine | str, file_format: FileFormat | str, data_type: DataType | str = DataType.FP16)[source]
Bases: ModelData
Checkpoint data associated with the checkpoint AssetType.
Used for AssetOrchestrator.create().
- Parameters:
engine – Compatible BaseEngine type for model. Includes “image/stable-diffusion-v1-5”, “image/stable-diffusion-xl-v1-0”, and “image/controlnet-sdxl”.
file_format – FileFormat of model, includes safetensors.
data_type – DataType or str matching an enum in DataType, defaults to ‘fp16’.
- class octoai.clients.asset_orch.FileData(file_format: FileExtension | str, version: str = '')[source]
Bases: object
- class octoai.clients.asset_orch.LoraData(engine: BaseEngine | str, file_format: FileFormat | str, data_type: DataType | str = DataType.FP16)[source]
Bases: ModelData
LoRA data associated with the lora AssetType.
Used for AssetOrchestrator.create().
- Parameters:
engine – Compatible BaseEngine type for model. Includes “image/stable-diffusion-v1-5”, “image/stable-diffusion-xl-v1-0”, and “image/controlnet-sdxl”.
file_format – FileFormat of model, includes safetensors.
data_type – DataType or str matching an enum in DataType, defaults to ‘fp16’.
- class octoai.clients.asset_orch.ModelData(engine: BaseEngine | str, file_format: FileFormat | str, data_type: DataType | str)[source]
Bases: ABC
Base class for Checkpoints, LoRAs, and Textual Inversions.
- Parameters:
engine – Compatible BaseEngine type for model. Includes “image/stable-diffusion-v1-5” or “image/stable-diffusion-xl-v1-0”.
file_format – FileFormat of model, includes safetensors.
data_type – DataType or str matching an enum in DataType, defaults to ‘fp16’.
- class octoai.clients.asset_orch.TextualInversionData(engine: BaseEngine | str, file_format: FileFormat | str, trigger_words: List[str], data_type: DataType | str = DataType.FP16)[source]
Bases: ModelData
TextualInversionData associated with the textual_inversion AssetType.
Used for AssetOrchestrator.create().
- Parameters:
engine – Compatible BaseEngine type for model. Includes “image/stable-diffusion-v1-5”, “image/stable-diffusion-xl-v1-0”, and “image/controlnet-sdxl”.
file_format – FileFormat of model, includes safetensors.
trigger_words – Trigger words associated with the textual inversion.
data_type – DataType or str matching an enum in DataType, defaults to ‘fp16’.
- class octoai.clients.asset_orch.VAEData(engine: BaseEngine | str, file_format: FileFormat | str, data_type: DataType | str = DataType.FP16)[source]
Bases: ModelData
VAE data associated with the vae AssetType.
Used for AssetOrchestrator.create().
- Parameters:
engine – Compatible BaseEngine type for model. Includes “image/stable-diffusion-v1-5”, “image/stable-diffusion-xl-v1-0”, and “image/controlnet-sdxl”.
file_format – FileFormat of model, includes safetensors.
data_type – DataType or str matching an enum in DataType, defaults to ‘fp16’.
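The remaining model data classes follow the same pattern as CheckpointData. A brief sketch below uploads a LoRA and a textual inversion; the names, file paths, and trigger words are placeholders.

```python
from octoai.clients.asset_orch import AssetOrchestrator, LoraData, TextualInversionData

asset_orch = AssetOrchestrator()

lora = asset_orch.create(
    data=LoraData(
        engine="image/stable-diffusion-xl-v1-0",
        file_format="safetensors",
    ),
    name="crayon-style",                        # placeholder name
    file="loras/crayon_style.safetensors",      # placeholder path
)

textual_inversion = asset_orch.create(
    data=TextualInversionData(
        engine="image/stable-diffusion-xl-v1-0",
        file_format="safetensors",
        trigger_words=["sks-style"],            # placeholder trigger word
    ),
    name="my-textual-inversion",
    file="embeddings/my_embedding.safetensors",
)
print(lora.id, textual_inversion.id)
```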
octoai.clients.fine_tuning module
OctoAI Fine Tuning.
- class octoai.clients.fine_tuning.FineTuningClient(token: str, endpoint: str = 'https://api.octoai.cloud/')[source]
Bases: object
OctoAI Fine Tuning Client.
This client is used to interact with the OctoAI Fine Tuning API. It can be used on its own by providing a valid token and endpoint, but it is mainly accessed from instances of octoai.client.Client as client.tune.
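A brief sketch of both ways to obtain the client, assuming octoai.client.Client accepts a token argument in the same way; the token value is a placeholder.

```python
from octoai.client import Client
from octoai.clients.fine_tuning import FineTuningClient

# Standalone client with an explicit token (placeholder value).
tuning = FineTuningClient(token="YOUR_OCTOAI_TOKEN")

# Or, as noted above, through an octoai.client.Client instance.
client = Client(token="YOUR_OCTOAI_TOKEN")
tuning = client.tune
```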
- cancel(id: str) Tune [source]
Cancel a fine tuning job.
- Parameters:
id – Required. The id of the fine tuning job to cancel.
- Returns:
Tune object representing the fine tuning job.
- create(name: str, base_checkpoint: str | Asset, files: List[LoraTuneFile] | List[Asset] | List[str] | Mapping[Asset, str] | Mapping[str, str], trigger_words: str | List[str], steps: int, description: str | None = None, engine: str | BaseEngine | None = None, seed: int | None = None, continue_on_rejection: bool = False) Tune [source]
Create a new fine tuning job.
- Parameters:
name – Required. The name of the fine tuning job.
base_checkpoint – Required. The base checkpoint to use. Accepts an asset id or an asset object.
files – Required. The training files to use. Supports a list of assets or asset ids without captions, or a mapping of assets or asset ids to captions.
trigger_words – Required. The trigger words to use.
steps – Required. The number of steps to train for.
description – Optional. The description of the fine tuning job.
engine – Optional. The engine to use. Defaults to the corresponding engine for the base checkpoint.
seed – Optional. The seed to use for training. Defaults to a random seed.
continue_on_rejection – Optional. Continue the fine-tuning job even if some of the training images are identified as NSFW. Defaults to False.
- Returns:
Tune object representing the newly created fine tuning job.
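A minimal sketch of creating a job from previously uploaded image assets, per the parameters above; the asset ids, job name, and trigger word are placeholders, and reading tune.id assumes the returned Tune exposes its id.

```python
from octoai.clients.fine_tuning import FineTuningClient

tuning = FineTuningClient(token="YOUR_OCTOAI_TOKEN")  # placeholder token

tune = tuning.create(
    name="my-first-tune",               # placeholder job name
    base_checkpoint="asset_01234567",   # placeholder checkpoint asset id
    files={                             # training asset ids mapped to captions
        "asset_aaaa": "a photo of sks dog",
        "asset_bbbb": "sks dog on a beach",
    },
    trigger_words="sks dog",
    steps=500,
)
print(tune.id)  # assumes the Tune object exposes its id for later get()/cancel()/delete()
```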
- delete(id: str)[source]
Delete a fine tuning job.
- Parameters:
id – Required. The id of the fine tuning job to delete.
- get(id: str) Tune [source]
Get a fine tuning job.
- Parameters:
id – Required. The id of the fine tuning job.
- Returns:
Tune object representing the fine tuning job.
- list(limit: int | None = None, offset: int | None = None, name: str | None = None, tune_type: str | None = None, base_checkpoint: str | Asset | None = None, trigger_words: str | List[str] | None = None) ListTunesResponse [source]
List available fine tuning jobs.
- Parameters:
limit – Optional. The maximum number of fine tuning jobs to return.
offset – Optional. The offset to start listing fine tuning jobs from.
name – Optional. Filter results by job name.
tune_type – Optional. Filter results by job type.
base_checkpoint – Optional. Filter results by base checkpoint. Accepts an asset id or an asset object.
trigger_words – Optional. Filter results by trigger words.
- Returns:
ListTunesResponse object representing the list of fine tuning jobs.
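A short sketch of listing jobs with the documented filters and cancelling one by id; the filter name and job id are placeholders, and the structure of ListTunesResponse is not detailed in this section.

```python
from octoai.clients.fine_tuning import FineTuningClient

tuning = FineTuningClient(token="YOUR_OCTOAI_TOKEN")  # placeholder token

# Filter jobs by name; limit and offset page through results.
response = tuning.list(name="my-first-tune", limit=5)
print(response)  # ListTunesResponse with the matching jobs

# Cancel a running job by its id (placeholder id shown).
tuning.cancel(id="tune_01234567")
```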
octoai.clients.image_gen module
OctoAI Image Generation.
- class octoai.clients.image_gen.Engine(value)[source]
Bases: str, Enum
SDXL: Stable Diffusion XL. SD: Stable Diffusion.
- CONTROLNET_SD = 'controlnet-sd'
- CONTROLNET_SDXL = 'controlnet-sdxl'
- SD = 'sd'
- SDXL = 'sdxl'
- SSD = 'ssd'
- class octoai.clients.image_gen.ImageGenerateResponse(images: List[Image], removed_for_safety: int)[source]
Bases: object
Image generation response.
Contains a list of images as well as a counter of those filtered for safety.
- property removed_for_safety: int
Return int representing number of images removed for safety.
- class octoai.clients.image_gen.ImageGenerator(api_endpoint: str | None = None, *args, **kwargs)[source]
Bases: Client
Client for image generation.
- generate(engine: Engine | str, prompt: str, prompt_2: str | None = None, negative_prompt: str | None = None, negative_prompt_2: str | None = None, checkpoint: str | Asset | None = None, vae: str | Asset | None = None, textual_inversions: Dict[str | Asset, str] | None = None, loras: Dict[str | Asset, float] | None = None, sampler: str | Scheduler | None = None, height: int | None = None, width: int | None = None, cfg_scale: float | None = 12.0, steps: int | None = 30, num_images: int | None = 1, seed: int | List[int] | None = None, init_image: str | Image | None = None, controlnet: str | None = None, controlnet_image: str | Image | None = None, controlnet_conditioning_scale: float | None = None, strength: float | None = None, style_preset: str | SDXLStyles | None = None, use_refiner: bool | None = None, high_noise_frac: float | None = None, enable_safety: bool | None = True, image_encoding: ImageEncoding | str = None) ImageGenerateResponse [source]
Generate a list of images based on request.
- Parameters:
engine – Required. “sdxl” for Stable Diffusion XL; “sd” for Stable Diffusion 1.5; “ssd” for Stable Diffusion SSD; “controlnet-sdxl” for ControlNet Stable Diffusion XL; “controlnet-sd” for ControlNet Stable Diffusion 1.5.
prompt – Required. Describes the image to generate. ex. “An octopus playing chess, masterpiece, photorealistic”
prompt_2 – High level description of the image to generate, defaults to None.
negative_prompt – Description of image traits to avoid, defaults to None. ex. “Fingers, distortions”
negative_prompt_2 – High level description of things to avoid during generation, defaults to None. ex. “Unusual proportions and distorted faces”
checkpoint – Which checkpoint to use for inferences, defaults to None.
vae – Custom VAE to be used during image generation, defaults to None.
textual_inversions – A dictionary of textual inversion updates, defaults to None. ex. {‘name’: ‘trigger_word’}
loras – A dictionary of LoRA updates to apply and their weights; can also be used with Assets created in the SDK directly, defaults to None. ex. {‘crayon-style’: 0.3, my_created_asset: 0.1}
sampler – Scheduler to use when generating image, defaults to None.
height – Height of image to generate, defaults to None.
width – Width of image to generate, defaults to None.
cfg_scale – How closely to adhere to prompt description, defaults to 12.0. Must be >= 0 and <= 50.
steps – How many steps of diffusion to run, defaults to 30. May be > 0 and <= 100.
num_images – How many images to generate, defaults to 1. May be > 0 and <= 4.
seed – Fixed random seed, useful when attempting to generate a specific image, defaults to None. May be >= 0 and < 2**32.
init_image – Starting image for img2img mode, defaults to None. Requires a b64 string image or Image.
controlnet – String matching id of controlnet to use for controlnet engine inferences, defaults to None. Required for using controlnet engines.
controlnet_image – Starting image for controlnet-sdxl mode, defaults to None. Requires a b64 string image or Image.
controlnet_conditioning_scale – How strong the effect of the controlnet should be, defaults to 1.0.
strength – How much creative freedom to allow in img2img mode, defaults to 0.8. May be >= 0 and <= 1. Must have an init_image.
style_preset – Used to guide the output image towards a particular style, only usable with SDXL, defaults to None. ex. “low-poly”
use_refiner – Whether to apply the sdxl refiner, defaults to True.
high_noise_frac – Which fraction of steps to perform with the base model, defaults to 0.8. May be >= 0 and <= 1.
enable_safety – Whether to use safety checking on generated outputs or not, defaults to True.
image_encoding – Choose returned ImageEncoding type, defaults to ImageEncoding.JPEG.
- Returns:
ImageGenerateResponse object including a list of images as well as a counter of images removed for safety, which can make the number of returned images fall below num_images.
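A minimal sketch of text-to-image generation based on the signature above; the prompt values are illustrative, the token is assumed to be resolved by the underlying Client (for example from OCTOAI_TOKEN), and saving the output assumes the Image type provides a to_file()-style helper, which is not documented in this section.

```python
from octoai.clients.image_gen import ImageGenerator

image_gen = ImageGenerator()  # token resolution handled by the inherited Client

response = image_gen.generate(
    engine="sdxl",
    prompt="An octopus playing chess, masterpiece, photorealistic",
    negative_prompt="Fingers, distortions",
    num_images=2,
    steps=30,
    cfg_scale=12.0,
    style_preset="low-poly",   # SDXL-only option
)

print(f"{response.removed_for_safety} image(s) removed for safety")
for i, image in enumerate(response.images):
    # Decoding/saving depends on the Image type; a to_file() helper is assumed here.
    image.to_file(f"octopus_{i}.jpg")
```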
Module contents
Client module.